Predictions 2024: AI, the law & corporate governance
1: International collaboration will be needed to support national and international frameworks
“Looking ahead to 2024, data and analytics are going to play a pivotal role, and it’s all about getting to the heart of the matter — finding the truth within the data. This involves addressing issues like data mistakes, errors, and collection biases.
“Now, when it comes to international collaboration on frontier AI safety, it’s crucial for both national and international governments to take a look at those examples in the private sector that have shown us how to harness AI effectively — the public sector has sometimes made accessing essential services, like medical data or renewing passports, more complex than it needs to be.”
Suki Dhuphar, head of International Business, Tamr
“In 2024, the US government will continue to collaborate with the private sector in developing guidelines and regulatory frameworks for AI, while prioritising national and economic security.
“All people stand to benefit from a more responsible approach to the fast-paced innovation associated with AI. If not safeguarded from malicious actors and unintentional malicious uses, AI can pose an existential threat.
“The private and public sectors in the US and worldwide recognise these challenges. Individual companies will have to take responsibility for complying with these standards, or risk both security concerns and falling behind by failing to utilise cutting-edge technology.”
Uzi Dvir, CIO, WalkMe
“It seems that the EU and the US are headed down diverging paths on AI regulation. This will cause a compliance headache for businesses that work across multiple markets, and General Counsel must decide where to base their global privacy and compliance focus. The cost and complexity of compliance is higher than ever, and it will be up to legal teams to navigate this relatively unknown territory.”
Harry Borovick, General Counsel, Luminance
2: In the absence of law, businesses will rely on corporate governance to mitigate AI risks
“Techniques such as ensuring the quality and fidelity of training data, and keeping a human in the loop both for training (reinforcement learning from human feedback) and for inference in the most sensitive scenarios, are ways of balancing the augmented intelligence that Generative AI provides.
“We will also see an increase in robust governance policies, processes and tools, including testing and validation for AI-generated content and monitoring embedded throughout the entire system. Having a clear AI policy that lays out the criteria for determining what is ethical, responsible and inclusive will guide the use of AI. This, coupled with education so that teams working in this space can learn the skills necessary to implement the guidance, will be the cornerstone from which businesses execute tangible AI plans.”
“Another area that successful businesses will prioritise is ensuring that AI models are fed high-quality, well-governed data, without which AI projects can be easily derailed. AI systems require accurate, diverse, and well-maintained data to function effectively. Poor data quality can lead to inaccurate or unreliable AI outputs.”
Art Hu, SVP & CIO, Lenovo
“CIOs will be focusing on creating trust in AI. They must become guardians of clean, compatible data, and of ethical, responsible and trustworthy AI systems that positively impact society and amplify human potential. This mission will take priority over immediate profit in the short term, and will rely on identifying and managing the risks associated with AI systems to safely unlock their full potential for innovation.”
Daniel Pell, VP and country manager, UKI, Workday
“I do expect regulation and ethical challenges to slow progress. In the same piece of research, 63% of UK CEOs said that the current lack of regulations and direction for generative AI within their industry will be a barrier to their organisation’s success. As a workaround, more businesses will find ways to use it more securely; for example, it will be common for organisations to develop their own Large Language Models for internal use.”
David Harrison, UK technology strategy & transformation practice lead at KPMG UK
“AI-powered computer vision technologies are becoming a prerequisite for organisations relying on camera streams to protect their assets and environments. Looking at the bigger picture, we can see countries investing in surveillance systems at a national scale, and computer vision solutions powered by AI are gaining more attention.
“Whether it is a small or large surveillance deployment, governance and policies are already in place to ensure that such deployments respect people’s privacy and communities’ rights while maintaining the necessary safety standards.
“With such governance readiness, we can expect AI-powered computer vision technologies to be at the heart of national digital transformation plans and Smart Cities’ security and safety in 2024 and beyond.”
Mustapha Koaik, country manager, KSA at Ipsotek
“The technical side of AI has never really been an issue. The real challenge for leaders will be the governance and management of data — how they can safely use AI and protect customer and employee personal data.
“Over the coming months, and off the back of the AI summit, we’re likely to see increasing numbers of businesses trying to reassure consumers of their security. Many will be following in the footsteps of figures such as Marc Benioff, CEO of Salesforce, who released a statement about the company’s commitment to consumer data protection and refusal to sell their customers’ data.”
Waseem Ali, CEO of Rockborne
3: The dawn of Explainable AI (XAI) will help with regulation and accountability for algorithmic decision-making
“We’ll see increasing use of Explainable AI (XAI), a discipline that will allow companies to provide quick and understandable explanations about what their AI systems are doing, to whoever needs it, whether that’s regulating authorities, employees or customers.”
Stuart Tarmy, head of industry vertical solutions, fintech and global partnerships at Aerospike
“Explainable AI, which focuses on making machine learning models interpretable to non-experts, will become important as these technologies impact more sectors of society, and both regulators and the public demand the ability to contest algorithmic decision-making. Both of these subfields not only offer exciting avenues for technical innovation but also address growing societal and ethical concerns surrounding machine learning.”
Eleanor Watson, AI ethics engineer and AI Faculty at Singularity University and IEEE member
“Companies will create guidelines for responsible data collection and usage, as well as ensuring transparency and explainability in AI systems. They will also have a greater focus on diversity and inclusivity in their AI teams to prevent biases in algorithm development.”
Zain Ali, CEO at Centuro Global
4: AI and in-house legalities
“In 2024, expect more generative AI tools to enter the market, trying to take advantage of the AI boom. Lawyers should only rely on those that have been trained on highly specific and verified data — this will be key to ensuring they unlock the maximum benefits of AI while protecting against risk.
“AI is no longer an unknown frontier for the legal industry. In 2024, I think that increased trust in tried-and-tested systems will see lawyers start to place routine contract negotiations into the hands of AI. We hear a lot about AI co-pilots, but the new age of “Autopilot” will enable legal resources to be allocated where they’re needed most. In today’s world of ever-changing regulatory, economic and business complexities, I imagine that’s where a lot of that attention will be focused.”
Eleanor Lightbody, CEO at Luminance
5: AI regulation will weed out poorly built AI
“AI providers will not be able to get away with AI that hallucinates and/or generates inappropriate responses. AI regulation will promote the new standard: purpose-built AI created with the proper guardrails. For customer service, this means AI that is domain-specific and built from rich CX data, ensuring that generated responses are accurate, relevant and appropriate.”
Barry Cooper, president, CX Division, NICE