Generative AI chatbots including OpenAI’s ChatGPT and Google’s Bard have rocked the world over the last several months. They offer a peek into a future of amazing promise, but also of potential perils—from disastrous impacts on the workforce to even possible human extinction caused by rogue AI systems.
In March, those concerns led to a famous open letter signed by Elon Musk, Apple co-founder Steve Wozniak, and more than 1,000 other AI experts, researchers, and backers—including academics from UT Dallas—calling for a six-month pause on the “dangerous race” to create giant AI systems.
Artificial intelligence industry CEOs and experts have since testified in front of Congressional panels and conferred at the White House about the issues involved.
On Wednesday, four companies on the leading edge of AI development and research made a move to minimize AI’s risks while seeking to leverage its ability “to address society’s biggest challenges.”
Wednesday’s joint announcement of the Frontier Model Forum
In a joint announcement, Anthropic (a San Francisco-based AI safety and research startup), Google, Microsoft, and OpenAI said they’re launching the Frontier Model Forum, “an industry body focused on ensuring safe and responsible development of frontier AI models.”
The forum has four key goals, the companies said:
1. “Advancing AI safety research to promote responsible development of frontier models, minimize risks, and enable independent, standardized evaluations of capabilities and safety.”
2. “Identifying best practices for the responsible development and deployment of frontier models, helping the public understand the nature, capabilities, limitations, and impact of the technology.”
3. “Collaborating with policymakers, academics, civil society and companies to share knowledge about trust and safety risks.”
4. “Supporting efforts to develop applications that can help meet society’s greatest challenges, such as climate change mitigation and adaptation, early cancer detection and prevention, and combating cyber threats.”
Kent Walker, president of global affairs at Google and Alphabet, said the four companies are “excited to work together with other leading companies, sharing technical expertise to promote responsible AI innovation.”
“We’re all going to need to work together to make sure AI benefits everyone,” Walker added in the joint statement.
Advisory board will guide strategy and priorities
The forum will establish an advisory board to help guide its strategy and priorities, the companies said, and “welcomes participation from other organizations developing frontier AI models willing to collaborate toward the safe advancement of these models.”
Anna Makanju, VP of global affairs at OpenAI, said that “advanced AI technologies have the potential to profoundly benefit society, and the ability to achieve this potential requires oversight and governance.”
“It is vital that AI companies—especially those working on the most powerful models—align on common ground and advance thoughtful and adaptable safety practices to ensure powerful AI tools have the broadest benefit possible,” Makanju added in the statement. “This is urgent work and this forum is well-positioned to act quickly to advance the state of AI safety.”
From best practices to research and information sharing
Over the coming year, the forum will focus on identifying best practices; advancing AI safety research; and facilitating information sharing among companies and governments, the companies said.
“Companies creating AI technology have a responsibility to ensure that it is safe, secure, and remains under human control,” Brad Smith, vice chair and president of Microsoft, said in the statement. “This initiative is a vital step to bring the tech sector together in advancing AI responsibly and tackling the challenges so that it benefits all of humanity.”
Dario Amodei, CEO of Anthropic—which was founded in 2021 by former members of OpenAI and reports raising $1.5 billion in funding—said his company “believes that AI has the potential to fundamentally change how the world works.”
“We’re excited to collaborate with industry, civil society, government, and academia to promote safe and responsible development of the technology,” Amodei said in the statement. “The Frontier Model Forum will play a vital role in coordinating best practices and sharing research on frontier AI safety.”
You can read the companies’ announcement in full here.