From the European Union’s AI Act to emerging global frameworks, businesses face the dual challenge of scaling AI adoption while ensuring regulatory compliance and ethical governance.
Experts from across industries share their insights on what 2025 holds for AI in business – highlighting the importance of transparency, risk management, and building AI systems that comply with regulations while fostering trust and scalable growth.
Regulation and governance
Steven Webb, UK chief technology and innovation officer, Capgemini
“There will be a stronger focus on AI regulation and sustainability in the enterprise. As AI agents become more widely adopted, I expect a stronger emphasis on AI governance and sustainable adoption. In my experience, the majority of business leaders understand the importance of implementing robust control mechanisms, and of human oversight, before integrating AI agents into their operations. I believe these guardrails will become increasingly important as calls for stronger AI regulation and greater transparency grow over the next few years.”
Michael Adjei, director, systems engineering, Illumio
“With GenAI tools now ubiquitous, 2025 will see a frantic scramble to rein in AI – just as we saw with social media. The focus will not only be on protecting users but also on having frameworks to safeguard AI from other AI.
Frameworks and guidelines will be pushed at three levels: regional (e.g. the EU), national (e.g. the UK’s NCSC), and organisational. The organisational level will likely be the most effective, because it can set clear guidelines on acceptable use and security; the higher levels become progressively less effective. Regional and international regulations often leave room for interpretation, enabling businesses to circumvent them.”
Stuart Tarmy, global director of financial services, Aerospike
“The ‘anything goes’ days of the AI wild west are starting to change, as governments around the world put in place legal frameworks and regulations that make systems more transparent and accountable. The EU Artificial Intelligence Act came into force at the beginning of August and, while the UK is taking a different, principles-based approach, any business that develops or deploys an AI system in the EU will need to comply with the EU AI Act. This will have a significant impact in multiple ways as we move into 2025.”
Responsible AI and ethical practices
Mahesh Desai, head of EMEA public cloud, Rackspace Technology
“In 2025, business leaders will set the standard for responsible AI. This year has seen the adoption of AI skyrocket, with businesses spending an average of $2.5 million on the technology. However, legislation such as the EU AI Act has brought heightened scrutiny of how exactly we are using AI, and as a result we expect 2025 to become the year of Responsible AI.
While we wait for further clarity on regulatory implementation, many business leaders will be looking for a way to stay ahead of the curve on AI adoption, and the answer lies in establishing comprehensive AI Operating Models – a set of guidelines for responsible and ethical AI adoption.”
Laurent Doguin, director of developer relations and strategy, Couchbase
“If businesses don’t act on AI, regulation will act for them. AI regulation has been on everyone’s mind this year, with governments around the world debating legislation. It’s been two years since ChatGPT came out, and there are still many questions about how exactly AI should be regulated. We have already seen much back and forth over data rules – the CLOUD Act, the Privacy Shield programme, and the Schrems I and II rulings – all attempts to settle how user data is transferred and owned. The future of AI is built on user data, so independently of any AI regulation, data regulation will need to be sorted out too.
How AI is used in regulated industries such as healthcare will define how strict or loose regulations are in other sectors. For example, if some industries or countries completely block AI from certain spaces, then companies will be forced to create ecosystems that ensure AI never touches those areas. Transparency will also be key. It will be essential for spotting and addressing AI that is too realistic, such as deepfakes, and for understanding how that affects society, politics, privacy and creativity. We need to know how to identify deepfakes and prevent their creation, and how to distinguish between AI-generated art and art created by real people who put their blood, sweat and tears into it. That transparency around AI will be particularly important on social media and for content targeted at younger people. More than anything, we will need to ensure AI-generated content is not used for cyberbullying and harassment, fake news or anything else that would damage the fabric of society.”
David Colwell, VP of artificial intelligence and machine learning, Tricentis
“There’s no question that AI technologies have become increasingly sophisticated and pervasive. While this has increased their usefulness, it has also significantly raised concerns about potential risks and impacts. As we enter 2025, we can expect governments and regulatory bodies worldwide to introduce, and enforce, new guidelines and regulations aimed at ensuring the responsible development and deployment of AI systems. Continuous and thorough testing will be critical for organisations to guarantee their AI products and systems not only meet new and evolving regulations, but also customers’ growing expectations of efficiency and safety.”
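Colwell’s call for continuous testing is easiest to picture as release-gate checks that run automatically on every model update. The Python sketch below is a minimal illustration of that idea, not a prescribed method: the `predict` stub, the synthetic applicant cohorts and the 10% parity threshold are all hypothetical assumptions standing in for a real model endpoint and real policy limits.

```python
# Minimal sketch: compliance-style checks that run on every model release.
# `predict` is a hypothetical stand-in for a real model endpoint.

def predict(applicant: dict) -> bool:
    """Hypothetical credit-approval model; replace with a real endpoint."""
    return applicant["income"] > 30_000

def approval_rate(applicants: list) -> float:
    """Share of a cohort the model approves."""
    return sum(predict(a) for a in applicants) / len(applicants)

def test_demographic_parity(max_gap: float = 0.10) -> None:
    # Synthetic cohorts that differ only in a protected attribute:
    # a large approval-rate gap between them would flag potential bias.
    group_a = [{"income": 25_000 + 1_000 * i, "group": "A"} for i in range(20)]
    group_b = [{"income": 25_000 + 1_000 * i, "group": "B"} for i in range(20)]
    gap = abs(approval_rate(group_a) - approval_rate(group_b))
    assert gap <= max_gap, f"approval-rate gap {gap:.2f} exceeds {max_gap}"

def test_decisions_are_stable() -> None:
    # Repeated calls with identical input should give identical answers,
    # a precondition for the explainability regulators increasingly expect.
    applicant = {"income": 42_000, "group": "A"}
    results = {predict(applicant) for _ in range(5)}
    assert len(results) == 1

if __name__ == "__main__":
    test_demographic_parity()
    test_decisions_are_stable()
    print("release checks passed")
```

In a real pipeline these checks would run under a test framework such as pytest on every retrain, so a model that drifts out of policy fails the build before it reaches customers.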
Kristof Symons, international CEO, Orange Business
“As AI and generative AI hit peak hype, businesses are asking: where’s the real value? The answer lies in breaking free from “pilot purgatory” and scaling AI to deliver meaningful outcomes. Success starts with full buy-in from the C-suite and employees alike. When leaders champion AI as a strategic priority, it sends a clear signal: we’re all in this together. Transparency and education are key – show employees how AI can simplify their work and drive better results. And don’t make AI a luxury for a select few; democratise it across the organisation to avoid resistance and foster collaboration.”
AI compliance and risk management
Luke Dash, CEO, ISMS.online
“2025 AI governance surge: New standards drive ethical, transparent, and accountable AI practices. In 2025, businesses will face escalating demands for AI governance and compliance, with frameworks like the EU AI Act setting the pace for global standards. Compliance with emerging benchmarks such as ISO 42001 will become crucial as organisations are tasked with managing AI risks, mitigating bias, and upholding public trust. This shift will require companies to adopt rigorous frameworks for AI risk management, ensuring transparency and accountability in AI-driven decision-making. Regulatory pressures, particularly in high-stakes sectors, will introduce penalties for non-compliance, compelling firms to demonstrate robust, ethical, and secure AI practices.”
Stuart Tarmy, global director of financial services, Aerospike
“Companies will need to implement new processes and tools to comply with the regulations, such as system audits, data protocols, and AI monitoring, and they will need to seek out trustworthy vendors who can help them comply and certify their AI use. Keeping across the new rules will mean being aware of how existing laws apply to new AI algorithms. It will be incumbent on CIOs to establish robust internal governance policies to manage risks associated with AI, and to guard against litigation over copyright infringement, for example.
There is also a job to do in telling employees and end users when they are interacting with AI systems. As with data security rules, companies risk significant fines if they don’t comply with AI regulations, so while the rules don’t take full effect until August 2026, now is the time to start preparing.”
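As one concrete illustration of the AI monitoring and audit trail Tarmy describes, the Python sketch below wraps a model call so that every decision leaves a reviewable record. The `loan-screener` model, its field names and the `audit_log.jsonl` destination are illustrative assumptions, not a mandated format.

```python
# Minimal sketch: an audit-logging wrapper around an AI decision, so each
# call leaves a record that internal governance (or an auditor) can review.
import functools
import hashlib
import json
import time

AUDIT_LOG = "audit_log.jsonl"  # hypothetical path; in practice a secure store

def audited(model_name: str, model_version: str):
    """Wrap a prediction function so every call is appended to an audit log."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(features: dict):
            decision = fn(features)
            record = {
                "timestamp": time.time(),
                "model": model_name,
                "version": model_version,
                # Hash rather than store raw inputs, to respect data rules.
                "input_hash": hashlib.sha256(
                    json.dumps(features, sort_keys=True).encode()
                ).hexdigest(),
                "decision": decision,
            }
            with open(AUDIT_LOG, "a") as log:
                log.write(json.dumps(record) + "\n")
            return decision
        return wrapper
    return decorator

@audited(model_name="loan-screener", model_version="1.4.2")
def screen_application(features: dict) -> str:
    """Hypothetical stand-in for a deployed model."""
    return "approve" if features.get("credit_score", 0) >= 650 else "refer"

if __name__ == "__main__":
    print(screen_application({"credit_score": 700}))  # logs one audit record
```

Recording the model version alongside each decision is what makes a later system audit tractable: when a rule changes or a complaint arrives, the organisation can say exactly which model produced which outcome.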
Tomer Zuker, VP marketing, D-ID
“In 2025, we anticipate significant progress in closing the gap between rapid advancements in AI technology and regulatory frameworks. Governments and organisations will prioritise privacy, security, and ethical usage through new policies and laws designed to govern AI responsibly. This alignment will build trust and confidence among enterprises, fostering broader adoption of AI solutions. Tools and platforms will increasingly focus on transparency, safe use, and compliance, which are essential for gaining business sector approval and ensuring sustainable growth.”
Martin Taylor, co-founder and deputy CEO, Content Guru
“AI regulation is increasingly inevitable, as more standards emerge worldwide, though a universal approach remains unlikely. However, in 2025, I expect a growing number of regional regulators to adopt frameworks that closely align with the pacesetters, the EU and US, who have already made impressive progress with their standards. As a result, all organisations must remain agile and ready to adopt new regulations and ensure compliance. AI is widely used in the contact centre, and leaders in the sector will be keeping a close eye on proceedings.”