Elon Musk, Other Tech Leaders Call for 6-Month Pause on the ‘Dangerous Race’ to Giant AI Systems

Some of the leading experts in artificial intelligence are asking the world to say “whoa” to the creation of “giant” AI experiments for at least six months, in order to better study and mitigate the potential dangers of systems like GPT-4.

The request was made in an open letter signed by major players including Elon Musk, a co-founder of OpenAI, the lab that created ChatGPT and GPT-4; Emad Mostaque, founder of the London-based Stability AI; and Apple co-founder Steve Wozniak. More than 1,000 artificial intelligence experts, researchers, and backers signed the letter, including academics from North Texas.

The Guardian reported that the letter’s signatories include engineers from Amazon, DeepMind, Google, Meta and Microsoft, as well as academics including cognitive scientist Gary Marcus.

Local signers include UT Dallas professor Sriraam Natarajan and student assistant Charles Averill, according to the Silicon Valley Business Journal. Other signers from Texas included J. Craig Wheeler, the Samuel T. and Fern Yanagisawa Regents Professor of Astronomy, emeritus, at UT Austin and an inaugural fellow and past president of the American Astronomical Society; and Peter Stone, associate chair of computer science and director of robotics at the University of Texas.

Why slow the roll of powerful AI?

“Recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one—not even their creators—can understand, predict, or reliably control,” the letter said.

“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”

According to The Guardian, the letter’s authors, coordinated by the “longtermist” think tank the Future of Life Institute, cited Sam Altman, another OpenAI co-founder, as justification for the pause.

Altman wrote in February: “At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models.”

The letter continued: “We agree. That point is now.”

Fox News reported that Altman is not a signatory to the letter.

The letter also said: “AI systems with human-competitive intelligence can pose profound risks to society … and should be planned for and managed with commensurate care. … Unfortunately, this level of planning and management is not happening.”

The authors said that should researchers not voluntarily pause their work on AI models more powerful than GPT-4, the letter’s benchmark for “giant” models, then “governments should step in.”

GPT-4 in limited release

Generative Pre-trained Transformer 4, or GPT-4, is a multimodal large language model created by OpenAI and the fourth in its GPT series. Released on March 14, 2023, it has been made publicly available in a limited form via ChatGPT Plus.

“This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities,” the letter’s authors said.

Since the release of GPT-4, OpenAI has been adding capabilities to the AI system with “plugins,” giving it the ability to look up data on the open web, plan holidays, and even order groceries, The Guardian reported. The company must deal, however, with “capability overhang”: the issue that its own systems are more powerful than it realizes when they are released, according to the report.

The publication said that as researchers experiment with GPT-4 in the coming weeks and months, they likely will uncover new ways of “prompting” the system that improve its ability to solve difficult problems.

One recent discovery was that the AI is noticeably more accurate at answering questions if it is first told to do so “in the style of a knowledgeable expert,” The Guardian said.
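
As a loose illustration of that kind of prompt framing, here is a minimal sketch in Python. It assumes OpenAI’s official Python client (openai version 1.0 or later) and an API key in the OPENAI_API_KEY environment variable; the question and the exact wording of the expert instruction are hypothetical examples, not taken from the letter or The Guardian’s report.

```python
# A minimal sketch of "expert framing" a prompt, assuming OpenAI's
# official Python client (openai >= 1.0) and an OPENAI_API_KEY set
# in the environment. The question and instruction are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

question = "Why does ice float on water?"

# Plain prompt: send the question as-is.
plain = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": question}],
)

# Framed prompt: first tell the model to answer "in the style of a
# knowledgeable expert," then ask the same question.
framed = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "Answer in the style of a knowledgeable expert."},
        {"role": "user", "content": question},
    ],
)

print("Plain: ", plain.choices[0].message.content)
print("Framed:", framed.choices[0].message.content)
```

The only difference between the two calls is the system message, which makes the effect of the framing easy to compare side by side.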

OpenAI CEO says letter is ‘preaching to the choir’

TechCrunch reported that on Wednesday, Altman spoke with the Wall Street Journal and said that OpenAI has not started training GPT-5.

He said the company has given priority to safety in development and spent more than six months doing safety tests on GPT-4 before launching it.

“In some sense, this is preaching to the choir,” Altman told the Journal. “We have, I think, been talking about these issues the loudest, with the most intensity, for the longest.”

In an interview with TechCrunch in January, Altman argued that “starting these [product releases] now [makes sense], where the stakes are still relatively low, rather than just put out what the whole industry will have in a few years with no time for society to update.”

Altman also recently talked with computer scientist and podcaster Lex Fridman about his relationship with Musk, who was a co-founder of OpenAI but stepped away from the organization in 2018, citing conflicts of interest.

The news outlet Semafor recently reported that Musk left OpenAI after his offer to run it was rejected by its other co-founders, including Altman, who took the role of CEO in early 2019.
