Some of the leading experts in artificial intelligence are asking the world to say "whoa" to "giant" AI experiments for at least six months, in order to better study and mitigate the potential dangers of systems like GPT-4.
The request was made in an open letter signed by major players including Elon Musk, a co-founder of OpenAI, the lab that created ChatGPT and GPT-4; Emad Mostaque, founder of the London-based Stability AI; and Apple co-founder Steve Wozniak. More than 1,000 artificial intelligence experts, researchers, and backers signed the letter, including academics from North Texas.
The Guardian reported that the letter’s signatories include engineers from Amazon, DeepMind, Google, Meta and Microsoft, as well as academics including cognitive scientist Gary Marcus.
Local signers include UT Dallas professor Sriraam Natarajan and student assistant Charles Averill, according to the Silicon Valley Business Journal. Other signers from Texas include J. Craig Wheeler, the Samuel T. and Fern Yanagisawa Regents Professor of Astronomy, emeritus, at UT Austin and an inaugural fellow and past president of the American Astronomical Society; and Peter Stone, associate chair of computer science and director of robotics at UT Austin.
Why slow the roll of powerful AI?
“Recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control,” the letter said.
“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”
According to The Guardian, the letter’s authors, coordinated by the “longtermist” think tank the Future of Life Institute, cited Sam Altman, another OpenAI co-founder, as justification for the pause.
Altman wrote in February: “At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models.”
The letter continued: “We agree. That point is now.”
Fox News reported that Altman is not a signatory to the letter.
The letter also said: “AI systems with human-competitive intelligence can pose profound risks to society … and should be planned for and managed with commensurate care… Unfortunately, this level of planning and management is not happening.”
The authors said that should researchers not voluntarily pause their work on AI models more powerful than GPT-4, the letter’s benchmark for “giant” models, then “governments should step in.”
GPT-4 in limited release
Generative Pre-trained Transformer 4, or GPT-4, is a multimodal large language model created by OpenAI and is the fourth in its GPT series. Released on March 14, it’s been made publicly available in a limited form via ChatGPT Plus.
“This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities,” the letter’s authors said.
Since the release of GPT-4, OpenAI has been adding capabilities to the AI system with “plugins,” giving it the ability to look up data on the open web, plan holidays, and even order groceries, The Guardian reported. The company must deal, however, with “capability overhang”: the problem that its own systems are more capable at release than the company itself realizes, according to the report.
The publication said that as researchers experiment with GPT-4 in the coming weeks and months, they will likely uncover new ways of “prompting” the system that improve its ability to solve difficult problems.
One recent discovery was that the AI is noticeably more accurate at answering questions if it is first told to do so “in the style of a knowledgeable expert,” The Guardian said.
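For illustration only, here is a minimal sketch of how that kind of framing could be compared using OpenAI’s chat API, assuming the 2023-era openai Python package (the ChatCompletion interface) and an API key in the OPENAI_API_KEY environment variable; the wording of the “expert” instruction and the sample question are hypothetical stand-ins, not the exact prompts researchers tested.

import os
from typing import Optional

import openai  # pip install openai==0.27.*  (2023-era interface)

openai.api_key = os.environ["OPENAI_API_KEY"]

QUESTION = "Why does ice float on water?"

def ask(system_prompt: Optional[str] = None) -> str:
    # Build the chat history; an optional system message carries the framing.
    messages = []
    if system_prompt:
        messages.append({"role": "system", "content": system_prompt})
    messages.append({"role": "user", "content": QUESTION})
    response = openai.ChatCompletion.create(model="gpt-4", messages=messages)
    return response["choices"][0]["message"]["content"]

# Baseline answer versus the same question framed with an "expert" instruction.
print(ask())
print(ask("Answer in the style of a knowledgeable expert."))

In this sketch the only difference between the two calls is the system message, which is how the effect of such framings is typically isolated and compared.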
OpenAI CEO says letter is ‘preaching to the choir’
TechCrunch reported that on Wednesday, Altman spoke with the Wall Street Journal and said that OpenAI has not started training GPT-5.
He said the company has given priority to safety in development and spent more than six months doing safety tests on GPT-4 before launching it.
“In some sense, this is preaching to the choir,” Altman told the Journal. “We have, I think, been talking about these issues the loudest, with the most intensity, for the longest.”
In an interview with TechCrunch in January, Altman argued that “starting these [product releases] now [makes sense], where the stakes are still relatively low, rather than just put out what the whole industry will have in a few years with no time for society to update.”
Altman also recently talked with computer scientist and podcaster Lex Fridman about his relationship with Musk, who was a co-founder of OpenAI but stepped away from the organization in 2018, citing conflicts of interest.
The news outlet Semafor recently reported that Musk left OpenAI after his offer to run it was rejected by its other co-founders, including Altman, who took the role of CEO in early 2019.