White House wades into debate on 'open' versus 'closed' artificial intelligence systems

FILE - President Joe Biden signs an executive order on artificial intelligence in the East Room of the White House, Oct. 30, 2023, in Washington. Vice President Kamala Harris looks on at right. The White House said Wednesday, Feb. 21, 2024, that it is seeking public comment on the risks and benefits of having an AI system's key components publicly available for anyone to use and modify. (AP Photo/Evan Vucci, File)

The Biden administration is wading into a contentious debate about whether the most powerful artificial intelligence systems should be “open-source” or closed.

The White House said Wednesday it is seeking public comment on the risks and benefits of having an AI system's key components publicly available for anyone to use and modify. The inquiry is one piece of the broader executive order that President Joe Biden signed in October to manage the fast-evolving technology.

Tech companies are divided on how open they make their AI models, with some emphasizing the dangers of widely accessible AI model components and others stressing that open science is important for researchers and startups. Among the most vocal promoters of an open approach have been Facebook parent Meta Platforms and IBM.

Biden’s order described open models with the technical name of “dual-use foundation models with widely available weights” and said they needed further study. Weights are numerical values that influence how an AI model performs.
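For readers unfamiliar with the term, the sketch below (in Python, using the PyTorch library; the order itself names no particular framework, so this is purely an illustrative assumption) shows what "widely available weights" means in practice: the weights are ordinary numerical values that can be saved to a file, posted online, downloaded and altered by anyone.

```python
# Illustrative sketch only: a model's "weights" are numeric parameters that shape its output.
import torch
import torch.nn as nn

model = nn.Linear(4, 2)                        # a tiny model with 10 weight/bias values
print(model.weight)                            # the numbers that influence how the model performs

torch.save(model.state_dict(), "weights.pt")   # posting this file publicly makes the weights "widely available"
reloaded = torch.load("weights.pt")            # anyone can download the file...
reloaded["weight"] += 0.1                      # ...modify the values, changing the model's behavior...
model.load_state_dict(reloaded)                # ...and run their altered copy
```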

When those weights are publicly posted on the internet, “there can be substantial benefits to innovation, but also substantial security risks, such as the removal of safeguards within the model,” Biden’s order said. He gave Commerce Secretary Gina Raimondo until July to talk to experts and come back with recommendations on how to manage the potential benefits and risks.

Now the Commerce Department's National Telecommunications and Information Administration says it is also opening a 30-day comment period to field ideas that will be included in a report to the president.

“One piece of encouraging news is that it’s clear to the experts that this is not a binary issue. There are gradients of openness,” said Alan Davidson, an assistant Commerce secretary and the NTIA's administrator. Davidson told reporters Tuesday that it's possible to find solutions that promote both innovation and safety.

Meta plans to share with the Biden administration "what we’ve learned from building AI technologies in an open way over the last decade so that the benefits of AI can continue to be shared by everyone,” according to a written statement from Nick Clegg, the company's president of global affairs.

Google has largely favored a more closed approach but on Wednesday released a new group of open models, called Gemma, that derive from the same technology used to create its recently released Gemini chatbot app and paid service. Google describes the open models as a more “lightweight” version of its larger and more powerful Gemini, which remains closed.

In a technical paper Wednesday, Google said it has prioritized safety because of the “irreversible nature” of releasing an open model such as Gemma and urged “the wider AI community to move beyond simplistic ‘open vs. closed’ debates, and avoid either exaggerating or minimising potential harms, as we believe a nuanced, collaborative approach to risks and benefits is essential.”

Simply releasing an AI system’s components to the world doesn’t necessarily make it accessible or easy for outsiders to scrutinize because making use of an open model still requires “resources concentrated in the hands of a few large companies,” according to Cornell University researcher David Gray Widder.

Widder said the motivations for companies to take a more open or closed approach are also complicated. Those lobbying for open-source may hope to profit off external contributions, while those who argue that safety concerns compel them to closely guard their AI systems may also be looking to entrench their forerunner positions, Widder said.

