AI summit a start but global agreement a distant hope

Britain’s Prime Minister Rishi Sunak, left, shakes hands with X (formerly Twitter) CEO Elon Musk after an in-conversation event in London on November 2, 2023, following the UK Artificial Intelligence (AI) Safety Summit. Photo: AFP

Published Nov 7, 2023


British Prime Minister Rishi Sunak championed landmark agreements after hosting the first artificial intelligence (AI) safety summit but a global plan for overseeing the technology remains a long way off.

Over two days of talks between world leaders, business executives and researchers, tech CEOs such as Elon Musk and OpenAI’s Sam Altman rubbed shoulders with the likes of US Vice-President Kamala Harris and European Commission chief Ursula von der Leyen to discuss the future regulation of AI.

Leaders from 28 nations, among them China, signed the Bletchley Declaration, a joint statement acknowledging the technology’s risks; the US and Britain announced plans to launch their own AI safety institutes; and two more summits were announced to take place in South Korea and France next year.

But while some consensus was reached on the need to regulate AI, disagreements remain over exactly how that should happen – and who would lead such efforts.

Risks around rapidly developing AI have been a priority for policymakers since Microsoft-backed OpenAI released ChatGPT to the public last year.

The chatbot’s unprecedented ability to respond to prompts with human-like fluency has led some experts to call for a pause in the development of such systems, warning they could gain autonomy and threaten humanity.

Sunak talked of being “privileged and excited” to host Tesla chief Musk, but European lawmakers warned of too much technology and data being held by a small number of companies in one country, the US.

“Having just one single country with all of the technologies, all of the private companies, all the devices, all the skills, will be a failure for all of us,” French Minister of the Economy and Finance Bruno Le Maire said.

The UK has also diverged from the EU by proposing a light-touch approach to AI regulation, in contrast to Europe’s AI Act, which is close to being finalised and will bind developers of what are deemed “high-risk” applications to stricter controls.

“I came here to sell our AI Act,” Vera Jourova, the vice-president of the European Commission, said.

Jourova said that while she did not expect other countries to copy the bloc’s laws wholesale, some agreement on global rules was required.

“If the democratic world will not be rule-makers, and we become rule-takers, the battle will be lost,” she said.

While projecting an image of unity, attendees said the three main power blocs in attendance – the US, the EU and China – tried to assert their dominance.

Some suggested Harris had upstaged Sunak when the US government announced its own AI safety institute, just as Britain had a week earlier, and she delivered a speech in London highlighting the technology’s short-term risks, in contrast to the summit’s focus on existential threats.

“It was fascinating that just as we announced our AI safety institute, the Americans announced theirs,” said attendee Nigel Toon, the CEO of British AI firm Graphcore.

China’s presence at the summit and its decision to sign off on the Bletchley Declaration was trumpeted as a success by British officials.

China’s vice minister of science and technology said the country was willing to work with all sides on AI governance.

Signalling tension between China and the West, however, Wu Zhaohui told delegates: “Countries, regardless of their size and scale, have equal rights to develop and use AI.”

The Chinese minister participated in the ministerial roundtable on the Thursday, his ministry said. He did not participate in the public events on the second day, however.

A recurring theme of the behind-closed-door discussions, highlighted by many attendees, was the potential risks of open-source AI, which gives members of the public free access to experiment with the code behind the technology.

Some experts have warned that open-source models could be used by terrorists to create chemical weapons, or even to build a super-intelligence beyond human control.

Speaking with Sunak at a live event in London last week, Musk said: “It will get to the point where you’ve got open-source AI, that will start to approach human-level intelligence, or perhaps exceed it. I don’t know quite what to do about it.”

Yoshua Bengio, an AI pioneer appointed to lead a “state of the science” report commissioned as part of the Bletchley Declaration, said the risks of open-source AI were a priority.

He said: “It could be put in the hands of bad actors, and it could be modified for malicious purposes. You can’t have the open-source release of these powerful systems, and still protect the public with the right guardrails.”