In a significant policy shift, the White House has directed federal agencies to prioritise AI innovation by appointing chief AI officers and developing forward-looking strategies, while rolling back several guardrails introduced during the Biden administration.

According to a new fact sheet from the Office of Management and Budget, agencies are now encouraged to adopt a “pro-innovation and pro-competition mindset” in contrast to the “risk-averse approach of the previous administration.”

The updated guidance removes what it calls “unnecessary bureaucratic restrictions” that had aimed to safeguard civil rights, promote transparency, and regulate AI procurement.

The move signals a marked departure from President Biden’s 2023 executive order, which required AI developers to share safety data with the federal government and pushed agencies to implement strong guardrails around AI deployment.

That order has now been rescinded by President Trump.

Although Biden had also mandated the appointment of chief AI officers, the new directive redefines the role: these officers will now serve as “change agents and AI advocates,” tasked with accelerating AI adoption for low-risk use cases, managing high-impact risks, and advising on investments in the technology.

The fact sheet emphasises that systems classified as “high-impact AI” – those that could affect public rights or safety – will still require enhanced scrutiny and risk management.

At the same time, agencies are being urged to support a “competitive American AI marketplace” and maximise the use of domestic AI technologies “to bolster US leadership, economic strength, and national security”.

A companion memo provides guidance on how agencies can “acquire best-in-class AI quickly, competitively, and responsibly.”

The fact sheet also outlines current agency use cases. The Department of Veterans Affairs, for example, is using AI to analyse pulmonary nodules during lung cancer screenings, aiding early detection.

Without any pause on development, how can we ensure that AI is built and deployed ethically?

The Department of Justice is using AI to assess trends in the global drug market and understand the societal impact of illicit substances. Meanwhile, NASA’s Mars 2020 rover uses AI to autonomously navigate the Martian surface and optimise sensor functions.

While these use cases may accelerate innovation, the directive could have sweeping consequences for accountability and public trust.

It also stands in stark contrast to the open letter issued by the independent nonprofit Future of Life Institute, signed by tech leaders including Elon Musk and prominent figures from DeepMind and OpenAI, who called for a pause on the development of advanced AI systems (specifically, anything more powerful than GPT-4 as it stood in March 2023).

The signatories warned that unchecked development posed “profound risks to society and humanity,” comparing the stakes to pandemics and nuclear war.

Their message was clear: we need time to develop safety protocols, regulatory frameworks, and oversight mechanisms before moving forward.

The White House’s new directive now strips away some of the very safeguards those tech leaders (at least publicly) claimed were essential.
