France, Germany, and Italy reach agreement on AI regulation
France, Germany, and Italy have reached an agreement on AI regulation, according to Reuters, which it says is expected to “accelerate negotiations at the European level”.
According to the report, the trio of governments support “mandatory self-regulation through codes of conduct” for foundation models of AI, but they oppose “un-tested norms.”
“Together we underline that the AI Act regulates the application of AI and not the technology as such,” the paper reads. “The inherent risks lie in the application of AI systems rather than in the technology itself.”
Developers of foundation models, which are built to produce a wide range of outputs, would have to define model cards providing information about each machine learning model, the paper explains.
“The model cards shall include the relevant information to understand the functioning of the model, its capabilities, and its limits and will be based on best practices within the developer’s community.
“An AI governance body could help to develop guidelines and could check the application of model cards.”
No sanctions would be imposed initially, but if violations of the code of conduct occur after a certain period, a system of sanctions could be established, the paper said.
Harry Borovick, general counsel at AI firm Luminance, commented: “As European policymakers continue to debate the final details of the EU AI Act, a new agreement by three of Europe’s largest economies to support a code of conduct for foundation models of AI has the potential to throw regulatory discussions into disarray.”
“For now, the agreement comes as a pro-business move that has little teeth — the code of conduct is ‘voluntary’ and therefore leaves plenty of latitude for businesses to determine their approach to AI governance,” he added.
“But it certainly has the potential to cause fractures within the EU bloc and delay discussion at a time when, more than ever, decisive regulatory action is needed as the rate of AI innovation continues to outpace regulation.”
Earlier this month, the UK government hosted the first-ever AI Safety Summit, which saw politicians and technology leaders gather to discuss the rise of generative AI and its potential threats and opportunities.