Top AI and Policy Experts Call for an International AI Safety Treaty

Such an AI safety treaty should include several core components:

  • Global compute thresholds: Globally established limits on the amount of computation that can be used to train any given model.
  • CERN for AI safety: A grand-scale collaborative effort to pool resources, expertise, and knowledge in the service of AI safety.
  • A compliance commission: Responsible for monitoring treaty compliance, a role similar to that of the International Atomic Energy Agency (IAEA).

The aims of such a treaty are:

  • To mitigate the catastrophic risks posed by AI systems to humanity, by preventing the unchecked escalation of AI capabilities and by directing a surge of resources and expertise into safety research.
  • To ensure the benefits of AI for all.

This is a historic coalition of experts, with leaders from business, policy, academia, and industry at the forefront of international AI governance: several of the signatories are members of the UN High-Level Advisory Body on AI or among the select few experts invited to the International AI Safety Summit.

Signatories made the following accompanying statements on the record:

Yoshua Bengio: “Governments should agree together that companies designing Frontier AI systems beyond some level of computational capability should be demonstrably safe. They should also agree on the urgency of research to better understand and mitigate safety and ethical concerns with AI in order to help governments with the protection of the public.”

Bart Selman: “ChatGPT has reasoning and decision-making capabilities that surpass those of most humans. There will be significant commercial incentives to develop and field even more capable systems as quickly as possible. However, although there are many positive use cases for this new technology, the potential risks and negative impacts are not yet understood. It’s therefore pertinent that we reach an agreement among countries to control and manage these risks.”

Eleanor ‘Nell’ Watson: “The challenges of AI affect everyone on this planet, and they require a global coordinated response. I am pleased to sign this call for new, strong, international institutions and common sense rules to govern the excesses of AI development, ones which threaten human safety and wellbeing on a grand scale.”

Gary Marcus: “No letter is perfect, but I love the way that this one calls for concrete action. A CERN for AI is something I’ve been calling for since 2017, and it’s exciting to see so many eminent people taking the idea seriously. Without deep international collaboration it’s not clear that we will ever get to AI that we can trust.”

Yi Zeng: “We never know in which way and how AI can bring us catastrophic risks simply because without very serious and careful design, the models and architectures can evolve in unpredicted ways, and this is exactly why as a human community, we need to collaborate and sort out potential ways and various threats to humanity and get prepared, if we may.” Yi Zeng added: “For the AI treaty, we need global consensus, and global actions all together, starting from AI major powers, leading AI researchers and industry, to show and lead the world to a better future, sharing how to develop and deploy safety-first and safety ready AI, for the benefit of common good, leaving no one behind.”

Geoffrey Odlum: “Based on my experience over a long career as a US diplomat working on nuclear nonproliferation treaties, I fully support this letter calling for the negotiation of an international AI safety treaty. History has demonstrated the effectiveness of international treaties and regimes like the Treaty on the Non-Proliferation of Nuclear Weapons (NPT), the Chemical Weapons Convention (CWC), and the Biological Weapons Convention (BWC) in setting enforceable and verifiable global rules for responsible nations to safely manage technologies that pose an existential risk to humanity. This call for an AI safety treaty is feasible, practical, and achievable. We only lack the political will of leading technology powers like the United States, the European Union, and others to take this up as an urgent diplomatic imperative. The time has come to do so.”

Tolga Bilge: “Ensuring that AI is safe and benefits everyone is as much a geopolitical problem as a technical one. We need the great powers of the world, including the United States and China, to put aside their differences for the sake of humanity, and work constructively on building a robust set of international regulations with a laser focus on these problems.”

According to a recent international poll commissioned by the UK Centre for Data Ethics and Innovation and conducted in nine countries, most people in each surveyed country share experts’ concerns about the controllability of future AI systems, the misuse of AI for cyberattacks on critical infrastructure, and the use of AI to engineer bioweapons.

Humanity has shown remarkable unity when faced with global threats, as demonstrated by international cooperation on nuclear non-proliferation. With the global AI Safety Summit approaching, the letter’s signatories urge members of the international community to engage actively in discussions around an AI treaty and to strive towards implementing a robust set of international regulations.

Read the full text of the open letter at https://aitreaty.org.

Highly notable signatories of the open letter include:

  • Yoshua Bengio – Godfather of AI and 2018 A.M. Turing Award Laureate. The most cited computer scientist of all time.
  • Yi Zeng – The leading expert on AI safety and governance in China, who recently briefed the UN Security Council on AI risks.
  • Max Tegmark – Distinguished physicist and co-founder of the Future of Life Institute, dedicated to ensuring that advanced technologies are developed and used in a manner that benefits all of humanity. Tegmark is one of only 100 people attending the AI Safety Summit, a count that includes foreign representatives.
  • Connor Leahy – CEO of Conjecture, an AI Safety lab. Leahy will also be one of the attendees at the AI Safety Summit.
  • Bart Selman – Former President of the Association for the Advancement of Artificial Intelligence, a leading authority on artificial intelligence.
  • Victoria Krakovna – Senior Research Scientist at Google DeepMind, and Co-founder of the Future of Life Institute.
  • Eleanor ‘Nell’ Watson – President of the European Responsible AI Office, Senior Fellow of the Atlantic Council.
  • Gary Marcus – Renowned cognitive scientist, who recently testified in the US Senate alongside OpenAI CEO Sam Altman.
  • Jaan Tallinn – Co-founder of Skype, Centre for the Study of Existential Risk, and Future of Life Institute.
  • Luke Muehlhauser – Board member of Anthropic, a leading AI lab.
  • Ramana Kumar – Former Senior Research Scientist at DeepMind.
  • Geoffrey Odlum – Former Director at the US National Security Council, responsible for G-8 affairs.

This open letter initiative is being led by Tolga Bilge, a British superforecaster and Mathematics Master’s student at the University of Bergen. He is also the lead author of the Treaty on Artificial Intelligence Safety and Cooperation (TAISC), a first attempt at producing a concrete blueprint for an AI safety treaty.

Other members of the team include:

  • Eli Lifland, Co-founder of Samotsvety Forecasting, one of the most well-regarded forecasting groups in the world.
  • Jonathan Mann, Cybersecurity architect.
  • Molly Hickman, Data Scientist at the Forecasting Research Institute.
  • Siméon Campos, Co-founder of SaferAI, an AI risk management organization working in EU AI standardization.

Media Contact
Tolga Bilge, AI Treaty initiative group, 61 410758649, [email protected], https://aitreaty.org/

SOURCE AI Treaty initiative group
