International co-operation on AI needed, says Google DeepMind COO
Google DeepMind’s COO, Lila Ibrahim, has called for AI regulation that mitigates risk while harnessing the technology’s potential, and the UK’s Secretary of State wants developers to produce risk assessments for the algorithms they are working on
While Rishi Sunak announced in June that he wanted to make the UK the global hub of AI safety, the Google DeepMind executive emphasised that AI safety could not be tied down to a single country.
Speaking at the CogX event in London last week, Ibrahim added: “We need to take an international approach to this.”
The UK Secretary of State, Michelle Donelan, also spoke at the O2-based techfest, claiming that she had put “bold new policies” in place.
She espoused the favoured rhetoric of UK politicians, who claim that the nation is now on track to become “a true science and technology superpower” by 2030 – with safety playing a key role.
Donelan bullishly framed the safety element as the UK’s unique selling point in the “AI arms race”.
“Safety is going to be the determining factor in the race to become the world’s leader in AI innovation – and we here in the UK are not waiting for the starting gun,” she said.
Both Donelan and Ibrahim agreed that it was up to organisations and governments to work alongside each other to ensure AI safety.
The UK government’s AI Safety Summit is set to take place on 1-2 November at Bletchley Park.
Donelan said in her keynote speech that those at the summit plan to identify and agree on the risks, find ways of collaborating on research and safety regulation, and establish how to make AI a force for good.
“The summit comes at a critical moment as we face uncertainty in what may emerge – as the frontier of this technology is pushed further and capabilities are quickly scaled up,” said Donelan.
The Secretary of State also invited AI developers to put forward their plans for “responsible capability scaling.”
Responsible capability scaling involves stating which risks will be monitored and how they will be brought under control once identified – whether that means slowing or even pausing work until better safety mechanisms are in place.
“Responsible capability scaling at the frontier of AI needs to become as common as people having a smoke alarm in their kitchen,” Donelan added.