Anthropic, Google, Microsoft, and OpenAI To Launch AI Frontier Model Forum

The forum aims to ensure safe and responsible development of frontier AI models that could "exceed the capabilities currently present in the most advanced existing models." A core objective: Collaborating with policymakers, academics, civil society, and companies to share knowledge about trust and safety risks.

Generative AI chatbots including OpenAI’s ChatGPT and Google’s Bard have rocked the world over the last several months. They offer a peek into a future of amazing promise, but of potential peril as well—from disastrous impacts on the workforce to the possibility of human extinction caused by rogue AI systems.

In March, those concerns led to a famous open letter signed by Elon Musk, Apple co-founder Steve Wozniak, and more than 1,000 other AI experts, researchers, and backers—including academics from UT Dallas—calling for a six-month pause on the “dangerous race” to create giant AI systems.

Artificial intelligence industry CEOs and experts have since testified in front of Congressional panels and conferred at the White House about the issues involved.

On Wednesday, four companies on the leading edge of AI development and research made a move to minimize AI’s risks while seeking to leverage its ability “to address society’s biggest challenges.”

Wednesday’s joint announcement of the Frontier Model Forum

In a joint announcement, Anthropic (a San Francisco-based AI safety and research startup), Google, Microsoft, and OpenAI said they’re launching the Frontier Model Forum, “an industry body focused on ensuring safe and responsible development of frontier AI models.”

The forum has four key goals, the companies said:

1. “Advancing AI safety research to promote responsible development of frontier models, minimize risks, and enable independent, standardized evaluations of capabilities and safety.”

2. “Identifying best practices for the responsible development and deployment of frontier models, helping the public understand the nature, capabilities, limitations, and impact of the technology.”

3. “Collaborating with policymakers, academics, civil society and companies to share knowledge about trust and safety risks.”

4. “Supporting efforts to develop applications that can help meet society’s greatest challenges, such as climate change mitigation and adaptation, early cancer detection and prevention, and combating cyber threats.”

Kent Walker, president of global affairs at Google and Alphabet, said the four companies are “excited to work together with other leading companies, sharing technical expertise to promote responsible AI innovation.”

“We’re all going to need to work together to make sure AI benefits everyone,” Walker added in the joint statement.

Advisory board will guide strategy and priorities

The forum will establish an advisory board to help guide its strategy and priorities, the companies said, and “welcomes participation from other organizations developing frontier AI models willing to collaborate toward the safe advancement of these models.”

Anna Makanju, VP of global affairs at OpenAI, said that “advanced AI technologies have the potential to profoundly benefit society, and the ability to achieve this potential requires oversight and governance.”

“It is vital that AI companies—especially those working on the most powerful models—align on common ground and advance thoughtful and adaptable safety practices to ensure powerful AI tools have the broadest benefit possible,” Makanju added in the statement. “This is urgent work and this forum is well-positioned to act quickly to advance the state of AI safety.” 

From best practices to research and information sharing

Over the coming year, the forum will focus on identifying best practices; advancing AI safety research; and facilitating information sharing among companies and governments, the companies said.

“Companies creating AI technology have a responsibility to ensure that it is safe, secure, and remains under human control,” Brad Smith, vice chair and president of Microsoft, said in the statement. “This initiative is a vital step to bring the tech sector together in advancing AI responsibly and tackling the challenges so that it benefits all of humanity.”

Dario Amodei, CEO of Anthropic—which was founded in 2021 by former members of OpenAI and has reportedly raised $1.5 billion in funding—said his company “believes that AI has the potential to fundamentally change how the world works.”

“We’re excited to collaborate with industry, civil society, government, and academia to promote safe and responsible development of the technology,” Amodei said in the statement. “The Frontier Model Forum will play a vital role in coordinating best practices and sharing research on frontier AI safety.”

You can read the companies’ announcement in full here.

