US defends itself as leader in AI safety ahead of UK Summit
US President Joe Biden has signed an executive order he claims is the most “significant” action that any government has made toward safe AI deployment.
The order aims to monitor and regulate the risks of artificial intelligence, requiring developers of powerful AI systems to share the results of their safety tests with the federal government before they are released to the public.
“Given the pace of this technology, we can’t move in normal government or private-sector pace, we have to move fast, really fast – ideally faster than the technology itself,” commented White House chief of staff, Jeff Zients.
According to the order, if an AI model poses a risk to national security, the economy, or the health of the country, companies are required to notify the government under the Defense Production Act.
The action arrives the same week as the UK government’s AI Safety Summit, an event UK Prime Minister Rishi Sunak said would establish the UK as the “geographical home of global AI safety regulation”.
Those attending the summit include the US vice-president, Kamala Harris, who said that the US government has a “moral, ethical, and societal duty to make sure AI is adopted and advanced in a way that protects the public from potential harm and ensures that everyone is able to enjoy its benefits.”
The vice-president added that she believes the US is the global leader in AI, and that the order should serve as a model for international action.
“It is American companies that lead the world in AI innovation,” Harris said. “It is America that can catalyse global action and build global consensus in a way that no other country can.”
The rapid growth of generative AI programmes and wider adoption of the likes of ChatGPT prompted several of tech’s biggest names – including Elon Musk – to pen a letter calling for a pause in AI development.
Industry experts commenting on the new executive order claim that it is “a monumental moment” for safe AI.
Leire Arbona, director of compliance at digital authentication platform Veridas, said: “With Europe currently working on the EU AI Act, the US is looking to join the developing global precedent being established, which will determine how countries and organisations approach AI.”
“We will surely begin to see a cascading trend of similar legal actions across the globe,” Arbona added.
Michael Covington, VP of strategy at cyber security firm Jamf, said: “President Biden’s executive action is broad-based and takes a long-term perspective, with considerations for security and privacy, as well as equity and civil rights, consumer protections, and labour market monitoring.”
“The intention is valid – ensuring AI is developed and used responsibly. But the execution must be balanced to avoid regulatory paralysis.”
Just a couple of weeks ago, TechInformed summarised the AI regulatory laws across the EU, US, and UK – regulation that, according to a recent study, more than half of tech leaders believe poses a bigger threat to AI innovation than safety does.
Zandra Moore, CEO of the firm that conducted the study, Panintelligence, said: “Our research suggests that the prospect of new rules governing AI has put a brake on AI innovation in the UK and Europe.”
“There is a risk that software companies based here will fall further behind in the AI race and that their users will be later to the AI party.”