Six months after the Bletchley Park declaration saw governments publish initial guidelines for the development of AI, the technology has progressed exponentially.
“It feels like AI from the movies,” wrote the OpenAI chief executive, Sam Altman, of the latest version of ChatGPT, GPT-4o, which launched earlier this month.
Despite earlier calls for a pause in the development of generative AI, the technology has accelerated rapidly, with spending set to reach $100 billion this year, according to estimates from Wedbush Securities.
As politicians and tech leaders met for a late-night dinner in Seoul ahead of the second global AI safety summit – now called the AI Seoul Summit – one question hung over proceedings: can governments keep up? And what answers would emerge from Korea?
TechInformed sets out the key takeaways from the Seoul Summit.
Where did the ‘safety’ go?
When the UK hosted the first AI Safety Summit at Bletchley Park last year, the name made the aim of the game clear: to map out collaborations to make sure artificial intelligence is adopted in a safe and secure way.
Yet when details began to emerge about the second event in Seoul, one difference stood out – the name. It was no longer the AI Safety Summit; organisers instead named the event the AI Seoul Summit.
But what were the reasons behind this? It could be as simple as the fact that an AI Summit already exists in the UK as part of the annual London Tech Week, meaning the UK government needed a different name.
Or could it be that the organisers – the South Korean and UK governments – wanted to move the focus away from “safety” and broaden the event’s scope to include other concerns?
New agreement
While safety may not have been in the name, it was certainly on the minds of the politicians who attended.
One of the key announcements to emerge was an agreement between 10 countries – plus the European Union – to develop an international network aimed at advancing the science of AI safety.
Following the Bletchley Park summit, the UK launched its own AI Safety Institute. Its mission statement is to “minimise surprise to the UK and humanity from rapid and unexpected advances in AI.”
The new agreement struck in Seoul will see it work with other similar organisations from countries including the US, Japan and Singapore to build “complementarity and interoperability” between their technical work on AI safety.
This includes sharing information about models, their limitations, capabilities and risks, as well as monitoring specific “AI harms and safety incidents” where they occur and sharing resources to advance global understanding of the science around AI safety.
The declaration was initially signed by Australia, Canada, the European Union, France, Germany, Italy, Japan, the Republic of Korea, the Republic of Singapore, the United States of America and the United Kingdom, but by the end of this week this number had grown to 27 countries plus the EU.
UK Prime Minister, Rishi Sunak, said: “Six months ago, at Bletchley we launched the UK’s AI Safety Institute. The first of its kind. Numerous countries followed suit and now with this news of a network we can continue to make international progress on AI safety.”
Business on board
Though regulation will play a major role in ensuring AI grows in a safe and manageable way, governments must also bring businesses along with them. Balancing restrictions against the opportunity to grow platforms and maximise the benefits of AI is a central consideration for policymakers.
According to Greg Hanson, GVP of EMEA North at software firm Informatica, the world has reached an inflection point with generative AI.
After the initial hype and excitement, he notes, the discourse is starting to change to reflect that AI needs to be designed, guided, and interpreted from a human perspective.
“It’s important to remember that we are still at an early stage of generative AI adoption. It will take time to get the responsibility, guardrails, and controls around AI to the right place as its use evolves,” he says.
“However, it’s reassuring to see that organisations, policy makers and technology providers have taken on board the mandate to act responsibly. Now they want to understand how to tackle some of the spikier challenges that generative AI poses so its transformative powers can be realised,” he adds.
To help achieve this, one of the key announcements at the summit was the voluntary sign-up of 16 companies to the AI safety standards that were developed at Bletchley Park last year.
Key signatories to the voluntary codes include the likes of Amazon, Google, IBM, Meta, Microsoft and OpenAI, while several companies from China – a country that hasn’t always supported such co-operation agreements – have also signed up, including ChatGPT’s Chinese rival Zhipu.AI.
So, what have these participants signed up for? The AI tech companies will each publish safety frameworks on how they measure risks of their frontier AI models, such as examining the risk of misuse of technology by bad actors.
The frameworks will also outline when severe risks, unless mitigated, would be “deemed intolerable” and what companies will do to ensure thresholds are not surpassed.
In the most extreme circumstances, the companies have also committed to “not develop or deploy a model or system at all” if mitigations cannot keep risks below the thresholds.
Maria Koskinen, AI Policy Manager at Saidot, warned that safety can be challenging when it comes to AI because generative AI models can’t be assessed simply by looking at their component parts.
This is why “evaluations are such a critical part of AI risk management, helping us to understand performance and mitigate the most pressing risks.”
Koskinen added that it was positive to see sixteen prominent organisations make these commitments to AI safety.
“Whilst the commitments are voluntary, with no enforcement, and are only as good as the organisations implementing them, they still set a precedent for other AI organisations to follow. I expect we’ll see others follow suit, benefitting the entire AI community by creating a safer and more secure ecosystem for AI innovation.”
The balance between security and innovation in AI matters greatly to many start-ups, which are concerned that heavy regulation could slow down their product roadmaps and add to their costs.
However, Ekaterina Almasque, general partner at early-stage VC OpenOcean, believes that the summit struck the right balance.
“The focus on creating a supportive environment for AI startups was particularly encouraging, with frameworks to set standards and promote responsible AI development.
“However, true success in AI requires more – including increased R&D funding, improved access to scalable infrastructure, and policies that attract top talent. This summit has laid the groundwork for a thriving global AI ecosystem, ensuring that the benefits of AI are distributed widely and equitably.”
Waning enthusiasm?
The first AI Safety Summit, at Bletchley Park, was seen as a landmark moment for the development of AI, bringing together world leaders from tech and politics to discuss what is expected to be a life-changing technology.
Coming less than a year after the launch of ChatGPT sent shockwaves around the industry, all eyes were focussed on the home of British codebreaking.
Fast forward just six months to Seoul, and it is fair to say the event didn’t generate the same enthusiasm. Perhaps that was because it was partly a remote event, or perhaps a second summit inevitably garners less attention than the first.
Or maybe – as Luminance CEO Eleanor Lightbody speculates – we’re all suffering from “AI fatigue.”
“With so much ‘hype’ since the boom in generative AI, we are giving rise to more and more instances of ‘AI washing’, in which companies exaggerate their use of AI in order to create a ‘halo effect’,” she says.
“We must ensure that genuinely innovative start-ups and scale-ups that are working at the forefront of the AI sector aren’t being squeezed out of the conversation by Big Tech players with a significant resource advantage,” she adds.
The future
After the meeting, the attending nations agreed the Seoul Ministerial Statement, a declaration recognising the importance of AI safety, innovation, and inclusivity, and addressing the opportunities and challenges involved in designing, developing, deploying, and using AI.
Key points included the need for transparency, accountability, and robust risk management for advanced AI models, as well as commitments to ongoing dialogue, empirical research, and proactive measures to guide AI development responsibly.
The next Summit – safety or not – will be held in Paris on Monday 10 and Tuesday 11 February 2025.