The creators of ChatGPT and other industry figures are urging that reducing the risk of extinction from their technology be made a global priority.

Hundreds of academics, tech industry leaders, and other public figures have signed an open letter warning that artificial intelligence (AI) could bring about an extinction event, and arguing that controlling the technology should be a top global priority.

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," the Centre for AI Safety, based in San Francisco, said in a statement.

The letter's succinct statement almost reads like an apology for the technology its own developers are now banding together to warn the public about.

Ironically, the letter's top signatories included Kevin Scott, CTO of Microsoft, OpenAI's largest investor, and Sam Altman, CEO of OpenAI, the firm behind the enormously successful generative AI chatbot ChatGPT. They were joined by a number of OpenAI founders and executives, along with executives, engineers, and scientists from Google's DeepMind AI research group.

The letter was also signed by Geoffrey Hinton, widely known as a godfather of AI for his contributions to the field over the past four decades or so. At a Q&A session at MIT earlier this month, Hinton went so far as to suggest that humans are merely a passing stage in the evolution of AI. He also said that pursuing research into artificial neural networks in the 1970s and 1980s had been perfectly reasonable; today's technology, however, is akin to genetic engineers deciding to improve grizzly bears by teaching them English and raising their "IQ to 210."

Even so, Hinton asserted that he has no regrets about his role in the development of AI. "This particular stage of it wasn't really foreseeable. I believed that this existential crisis was far off until quite recently. Therefore, I don't really regret what I did," Hinton said.

The US Senate hearings earlier this month, which included testimony from OpenAI's Altman, also highlighted the more immediate concerns posed by the development of AI.

"The Centre for AI Safety's declaration is alarming and unprecedented in the IT sector, in my opinion," said Avivah Litan, a vice president and distinguished analyst at Gartner. "When have you ever heard of tech entrepreneurs telling the public that the technology they are working on can wipe out the human race if left unchecked?" Yet, she noted, they keep working on it because of competitive pressure.

Litan also noted that, short of extinction, companies face "short-term and imminent" risks from the adoption of AI. "They involve risks of misinformation and disinformation, and the potential for cyberattacks or societal manipulations that scale much more quickly than what we saw in the past decade with social media and online commerce," she said. If ignored, "these short-term risks can easily spiral out of control."

Those shorter-term hazards can be addressed and reduced with guardrails and technical solutions, Litan said, while the longer-term existential dangers will require international government cooperation and regulation. "Governments are moving very slowly, but technical innovation and solutions — where possible — are moving at lightning speed, as you would expect," Litan added. "So, who knows what is in store for us?"

The letter published today follows one published by the Future of Life Institute in March. That letter, signed by SpaceX CEO Elon Musk, Apple co-founder Steve Wozniak, and nearly 32,000 others, called for a six-month pause in the development of AI systems more powerful than GPT-4 so that better controls could be put in place.

The March letter also demanded a robust auditing and certification ecosystem, liability for AI-caused harm, robust public funding for technical AI safety research, and well-resourced institutions for handling the dramatic economic and political disruptions (especially to democracy) that AI will cause. It also called for oversight and tracking of highly capable AI systems and large pools of computational capability.

There are "many ways AI development could go wrong, just as pandemics can come from mismanagement, poor public health systems, wildlife, etc.," noted Dan Hendrycks, head of the Centre for AI Safety, in a follow-up tweet thread today.A robust auditing and certification ecosystem, liability for AI-caused harm, robust public funding for technical AI safety research, and well-resourced institutions for handling the dramatic economic and political disruptions (especially to demonstrate) were all demanded in the March letter. It also called for oversight and tracking of highly capable AI systems and large pools of computational capability.

There are "many ways AI development could go wrong, just as pandemics can come from mismanagement, poor public health systems, wildlife, etc.," noted Dan Hendrycks, head of the Centre for AI Safety, in a follow-up tweet thread today.