Setting boundaries for AI
Regulating AI without stifling innovation is just one of the many challenges facing Rishi Sunak’s safety summit at Bletchley Park this week, writes James Palmer
This week the UK is hosting the world’s first-ever global AI safety summit. As AI develops at an unprecedented rate, Prime Minister Rishi Sunak is inviting countries, academics, and executives from AI companies to Bletchley Park to discuss regulations and restrictions.
The goal is to find common ground and a consistent global approach to baseline AI regulation and safety, ensuring that no country is left exposed or disadvantaged.
But is a one-size-fits-all set of rules counter-productive?
After all, the use of AI in business and its consequent need for regulation varies depending on the industry, company, role, function, and all the way down to the task itself. If Sunak and the summit attendees want to achieve their safer and fairer AI development goals, will they need to dive deeper?
With the remarkable development of Google’s Bard and OpenAI’s Microsoft-backed ChatGPT over the last year, more businesses and organisations are using generative AI in their day-to-day work than ever before.
This unprecedented rise in generative AI, built on large language models trained on vast quantities of data to process and generate text, has prompted governments and some tech companies to speak up about the technology’s potential dangers.
AI-enabled cybercrime is on the agenda, but the dominant concerns have been the displacement of jobs through automation and the threat to democracy from the generation and spread of misinformation.
While these challenges must be taken seriously, they don’t necessarily translate to every other area of AI usage, such as collaboration, training, coaching, and go-to-market processes.
AI is supposed to empower us, not replace us, and collaboration between business functions leads to better outcomes. Sunak and the summit need to take this into consideration. Even in industries that demand strict security and regulation, such as financial services, healthcare, and pharmaceuticals, organisations will have plenty of functions and processes that sit apart from those requiring the tightest controls.
For example, businesses in financial services will want to leverage AI to empower digital transformations and drive go-to-market (GTM) efficiency in their highly competitive sectors.
With an AI-powered enablement platform, they can unite their marketing content, sales tools, and coaching to dismantle departmental silos and upskill their staff.
In the healthcare industry, doctors can use AI as a learning and teaching tool to support new and existing trainees with training on medical equipment, new medicines, or even how to talk to patients.
So, while both industries require high levels of security and regulation for their processes involving financial assets or patient information, plenty of other areas can safely benefit from AI tools. Unless they’re restricted, that is.
Certainly, AI is developing at an unprecedented rate, and some industries will need far stricter security and regulation than others.
That said, it’s essential to recognise the competitiveness and efficiency AI can bring to an industry when we are careful to mitigate vulnerabilities. In the end, companies need to define their own use of AI and where they draw their limits. We need clarity about how we can and will use this wonderful technology together to empower our ability to collaborate.
It’s important to develop a strong organisational strategy for AI that considers and addresses trust and data privacy concerns. Generative AI is still a tool that can get out of hand if used carelessly (just look at this unfortunate sales email), so it’s crucial to invest in thorough training and establish best practices.
In the end, the reason for AI’s existence in business is to empower us. It’s meant as a tool that should help us work smarter, not harder. Yes, we must be cautious when applying AI to areas such as sensitive or valuable assets. That’s indisputable. But if Sunak and the summit members want to make sure no nation is compromised or left behind in the AI revolution, they must first accept that AI use has already become too diverse for a set of global, one-size-fits-all rules and regulations.
This week, multiple open letters have been sent to Sunak by tech leaders calling for more inclusivity for smaller AI firms and highlighting the need for stronger regulation.