Hundreds of Tech Experts Sign Statement Comparing AI Risks to Pandemics and Nuclear War

The Center for AI Safety, an organization that proposes policies and procedures for safely developing advanced artificial intelligence, today published a statement signed by prominent tech experts that compares the risks of this technology to those of nuclear war and pandemics.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” reads the brief statement, which summarizes the institution’s view of what AI could do to society if the technology is developed without appropriate guardrails.

The organization said that the statement aims to foster healthy discussion of these risks by making clear how widely they are shared among leaders in the field. Some of the most prominent signatories thus far include the CEO of OpenAI, Sam Altman, the CEO of Google DeepMind, Demis Hassabis, and the CEO of Anthropic, Dario Amodei.

The fact that these three business leaders, who head perhaps the most successful companies in the field, acknowledge that the stakes are this high may be enough to prompt legislators and regulators to seek solutions and implement policies that minimize those risks.

Other prominent tech executives who have signed the statement thus far include the CTO and Chief Scientific Officer of Microsoft (MSFT), several computer science professors from well-reputed universities, and researchers from top institutions including the Massachusetts Institute of Technology (MIT) and Harvard University.

The number of signatories has kept increasing, and new names may be added to the list progressively, as the document can be signed by submitting a simple form that asks for the individual’s full name, work email, affiliation, and job title.

What is the Center for AI Safety (CAIS)?

The Center for AI Safety (CAIS) is headed by Bri Treece, Dan Hendrycks, and Oliver Zhang. It is a non-profit institution that researches the technology to outline its risks and propose remedies that protect society from them.

The organization has developed a framework for categorizing AI risks and has identified eight key risks that should be tackled to make the technology safe for wide adoption by both organizations and the general public.

Experts from CAIS warn that advanced AI could match and then surpass human capabilities, posing catastrophic existential risks comparable to those of nuclear weapons. Though AI currently has many safe applications, they caution that as the technology progresses, it may pose large-scale threats in unforeseen ways.

Here Are the Eight AI Risks that CAIS Has Identified Thus Far

The weaponization of AI to create destructive technologies, the spread of AI-generated misinformation at scale, and the possibility of “proxy gaming” – AI systems finding ways to optimize their objectives at the expense of human values – are among the most prominent risks associated with the uncontrolled development of sophisticated AI models.

Other risks include human “enfeeblement,” as we delegate too many important tasks to machines, and “value lock-in,” in which power concentrates among a small group with access to powerful AI.

CAIS’s experts also worry that these intelligent systems may exhibit “emergent goals” that differ from what their creators intended, as new capabilities appear unexpectedly. Systems could even become deceptive in order to achieve their goals, undermining human oversight.


Perhaps most concerning is the potential for AI to exhibit “power-seeking behavior,” with systems acquiring ever more resources and strategic advantage. Such power maximization runs counter to trustworthy, safe, and transparent operation aligned with human values.

While AI offers tremendous opportunities to solve humanity’s biggest challenges, CAIS experts warn that advanced AI also poses unprecedented and potentially catastrophic risks to society.

Addressing these safety issues will require a multi-pronged, global effort involving technology developers, policymakers, researchers, and ethicists. Failure to adequately mitigate these risks could enable systems that ultimately threaten human welfare and even survival itself. Experts are therefore calling for urgent, coordinated action to develop safe and beneficial AI.