The first day of the AI Safety Summit hosted by the United Kingdom produced the first major international agreement on managing the technology. A total of 28 governments signed the so-called Bletchley Declaration. Among them were the U.S., China and the European Union, which agreed to recognize that artificial intelligence poses a potentially catastrophic risk to humanity.
The governments also agreed on a plan for international collaboration, including at least two further summits: one to be held in South Korea in six months' time and another in France in a year's time. The UK government had sought backing for the creation of an international testing center in the UK, but the initiative was ultimately not taken up.
The declaration highlights the need for transparency and accountability from the actors developing this technology, and states governments' intention to create plans to measure, monitor and mitigate potentially harmful capabilities.
“There is the potential for serious, even catastrophic harm, whether deliberate or unintended, arising from the most important capabilities of these artificial intelligence models,” the document says. Among the main risks, they highlighted those related to cybersecurity, biotechnology and the development of disinformation campaigns.
The group has decided to support the creation of “an internationally inclusive network of scientific research” on the security of artificial intelligence. The goal is to facilitate the provision of the “best available science” for public policymaking.
Artificial intelligence agreement achieves rare show of global unity
Michelle Donelan, UK technology secretary, opened the summit. “For the first time, we have countries agreeing that we need to look, not just independently but collectively, at the risks surrounding frontier artificial intelligence,” she told reporters.
She was joined on stage by U.S. Commerce Secretary Gina Raimondo and Chinese Vice Minister of Science and Technology Wu Zhaohui, a rare show of global unity. The meeting follows months of diplomatic work by the government of Prime Minister Rishi Sunak, who has set out to forge a role for the U.K. as an intermediary between the U.S., Chinese and European Union economic blocs.
“The Declaration fulfills the key objectives of the summit by setting out shared agreement and responsibility for risks, opportunities and a process forward for international collaboration,” Britain said in a separate statement accompanying the declaration.
Minister Wu used his participation to say that countries must work to ensure that artificial intelligence “always remains under human control.” And he advocated that all countries, regardless of size and scale, should have “equal rights to develop and use artificial intelligence.”
U.S. Vice President Kamala Harris heads the U.S. delegation. In a speech from the embassy, Harris urged other countries to go further and faster, noting that artificial intelligence is already causing harm today, beyond existential threats such as massive cyberattacks or biological weapons. Work needs to be done "across the spectrum," she said.
Global urgency begins to bear fruit
The summit is one of several global initiatives that governments around the world have rushed in recent months to regulate the development of artificial intelligence. Last week, UN Secretary-General António Guterres launched the first global body around artificial intelligence governance.
It is an advisory group made up of 39 members drawn from institutions, governments and technology companies around the world, including representatives of Microsoft, Google and OpenAI, among the most important companies in the field. The body will assess the risks posed by artificial intelligence and formulate proposals to help address those challenges.
The U.S. government, meanwhile, issued a new executive order this week. The measure will require makers of major artificial intelligence systems, such as Google and OpenAI, to report key information to the government, such as when they decide to train a new model and what cybersecurity protections are in place.
The European Union is still in the process of passing draft legislation on artificial intelligence, with the goal of developing a set of principles and boundaries for the development of the technology in the region. And the G7, which brings together democracies with the world's richest economies, has promised a "code of conduct" by the end of the year.