AI Safety Summit: Top takeaways from Bletchley Park
Last week the UK hosted the first ever AI Safety Summit, bringing together politicians and technology leaders to discuss the rise of generative AI tools such as ChatGPT, and the threats and opportunities they present.
We highlight the key takeaways from the two-day event, which was held at Bletchley Park, the home of Britain’s World War 2 codebreakers.
Pipped at the post
British Prime Minister Rishi Sunak had pitched the AI Safety Summit as an opportunity to bring world leaders together to discuss AI, and the event certainly achieved that. But from a UK perspective, it was also touted as a chance for the country to take a position of leadership in AI technologies.
Unfortunately for Sunak, the biggest player in the game had already made its move earlier last week, when US President Joe Biden announced an executive order looking to regulate AI.
Biden didn’t attend the summit himself, either, instead sending Vice President Kamala Harris, who said the White House intended its initial domestic policies to “serve as a model for global policy, understanding that AI developed in one nation can impact the lives and livelihoods of billions of people around the world”.
Still, by hosting the first summit, Sunak achieved his primary goal, which was to show leadership on the AI issue.
Dr Yi Ding, assistant professor of information systems at the Gillmore Centre for Financial Technology at WBS, said: “Bringing together global AI experts was a positive step in the development of AI and cementing the UK’s position as a leader in this area. However, there is still uncertainty surrounding its safety and development in the months and years to come.
“As we move forward with AI in the wake of the Safety Summit, collaboration between Government, industry and academic research centres is essential to ensure the UK makes the most out of AI in a safe and secure manner.”
Working together on AI a work in progress
The most concrete result to come out of the summit was the announcement of plans for the UK and US to set up national AI safety institutes, intended to help AI developers guard against societal risks and threats to national security from the most advanced models.
Britain initially floated the idea but, as noted above, the US got there first with its own rules, with US officials reportedly wary of giving UK experts too much access to US systems. Instead, the US created its own national safety institute, which it officially announced on day one of the AI Safety Summit.
Though both organisations will work together — Singapore has also announced a partnership with the UK on similar terms — this highlights the fact that states still haven’t entirely embraced the “work together” mantra suggested by the Bletchley Declaration, preferring instead to deal with some AI issues in-house.
This also extends to China, a key participant in the summit. Officials described Chinese delegates as constructive members of the discussions, but trust in China is clearly still an issue. Chinese delegates were reportedly excluded from certain sessions, including those on the priorities for AI in the next five years, despite China’s importance as a technology nation.
Push for open source
Sunak had launched the event, setting out the goal of tackling potential risks that advanced AI systems pose to humanity, either on their own as they become smarter, or in the hands of bad actors.
That threat has been widely shared and discussed in tech circles since the launch of ChatGPT a year ago, but some within the industry have argued that this focus is misguided.
A group that includes the likes of Meta chief AI scientist Yann LeCun and Google Brain co-founder Andrew Ng claims this view would lead to over-regulation and could hamper open-source efforts and innovation.
They, and about 150 other open-source believers, signed a statement published by Mozilla saying that open AI is “an antidote, not a poison”.
UK Deputy Prime Minister Oliver Dowden was a surprise backer: “If we want to make sure [AI] spreads globally, in terms of the developing world … I think there is a very high bar to restrict open source in any way,” he told Politico.
Suki Dhuphar, head of international business at AI-driven data analytics vendor Tamr, agreed, saying: “Collaborative strategies must involve open source. Support from the big players, such as Google and OpenAI, isn’t enough, as independent academic papers must also be considered as key sources for information. The open-source community is strong and perceived as important for regulation; it must therefore be a critical voice in AI risk mitigation.”
Musk ado about nothing
The world’s richest man, Elon Musk, attended the summit and later met with Sunak in Downing Street for a fireside chat, which was streamed. In it, the Tesla CEO warned that AI was “one of the biggest threats to humanity”, which perhaps overshadowed more nuanced contributions.
Musk backed Sunak’s decision to invite China to the summit. “Having them here I think was essential, really. If they’re not participants, it’s pointless,” he said.
This wasn’t new ground for Musk, who had previously signed a letter alongside other tech figures warning of the dangers posed by generative AI.
Musk, of course, has his own nascent AI venture, xAI, which has just unveiled its first chatbot, Grok, a would-be rival to ChatGPT.
“What we’re really aiming for here is to establish a framework for insight so that there’s at least a third-party referee, an independent referee, who can observe what leading AI companies are doing and, at the very least, sound the alarm if they have concerns,” the billionaire entrepreneur told reporters at Bletchley Park.
Just the first step
The big announcement at the end of the event was that representatives from 28 countries signed a declaration warning of the risks posed by cutting-edge AI and pledging to work together to tackle future challenges.
In other words, the Bletchley Declaration is an agreement about future projects and does not outline any current moves.
This, according to Zoho Corporation UK managing director Sachin Agrawal, is understandable given the early stage of AI adoption.
“Much of the AI Summit quite rightly is focused on the discussion of its potential wider threat, and it is critical to prioritise this and formulate a global strategy,” he said.
“Zoho has long heralded that the safe and trustworthy roll-out of AI should involve collaboration between government, industry, technology experts and academia and this is true to both address potential negative use cases as well as to present more positive use cases of the technology.
“The Bletchley Park Summit acts as a first step in developing safe and ethical AI and we hope to see a continued focus on supporting businesses through a global, collaborative approach to this innovative technology.”
Signatories agreed to a follow-up meeting to be held in South Korea in six months’ time. A third is also planned for France next year.