Open wide for AI
Using London Tech Week as a bellwether, TI reports on how industries, developers and researchers are approaching generative AI adoption
After years of being told that data is the new currency, it was only a matter of time before artificial intelligence experienced a boom and started gaining attention around the world.
So, when the titans of tech, and all those that touch it, came to London last month to discuss the state of AI, along with other technologies, it seemed like the perfect time for TechInformed to test market appetite, use cases, and concerns around ChatGPT and its ilk.
As with all technology shifts, adoption has been gradual, some might say unconscious, with many firms implementing AI to increase employee productivity.
AI can now spot ways to make customer interactions better, pick up statistical anomalies within data sets, as well as take whole jobs off the desks of employees.
Financial services are a great example of this, with MSU Federal Credit Union’s chief strategy and innovation officer Ben Maxim telling London Tech Week: “What we’re focused on is more broad automation, and conversational AI is really the automation of the customer experience.
“More recently, we’ve been working with our AI partners to integrate large language models (LLMs) to help improve our training data and also the speed to market for some of the daily updates we have to do when training our chatbot.”
Maxim added that his firm was also asking how it could use and unlock all the data it had.
“Financial institutions have rich data sets, although they are tied up in legacy systems and data governance which makes it hard to use this data. A lot of our ability to play in the AI space depends on partners within fintech and startups to help us unlock the power of our data,” he said.
“The challenge is definitely with the data because enormous amounts of data are being generated,” said Wendy Jephson, CEO and founder of technology support organisation Let’sThink.
“I’ve been trained as a behavioural scientist to improve decision-making in financial services because so many decisions are made and kept in data.”
While the sheer volume of data can be an issue, Davide Martucci, CEO and co-founder of Next Gate Tech, has seen the other side, in which “different models of machine learning can spot anomalies, assess signals and patterns, and create emails and documents.
“In my view, the main complexity is getting businesses to understand the different applications for AI. There is a need to educate big institutions on how they can use AI models to smooth their processes.”
Of course, no adoption is smooth. The fact that experts throughout London Tech Week felt it necessary to stress that anything entered into ChatGPT should be treated as public is a testament to that.
Yet when there is as much hype around a technology as there is around generative AI and LLMs, businesses tend to jump in without thinking of the consequences.
As Jephson says: “There are lots of individual use cases where AI can be used to drive greater efficiencies – such as chatbots, and customer support, where businesses can handle repetitive questions more efficiently, but I think we have to think carefully about the effectiveness of these models and what we might lose in getting to those efficiencies.”
Paul Henninger, head of Connected Technology UK at KPMG, points out that, strangely, one of the positives of generative AI like ChatGPT is that it “fails really well”.
However, Sasha Haco, CEO and founder of Unitary AI, described this phenomenon as “alarming”.
“This is going to cause a lot of problems with generative AI because it looks to be correct, and you don’t double check the information you’re given is factual because the last 20 things that you were told were correct,” she said.
The issue arises when you go to post it on social networks or somewhere publicly as fact, said Haco.
“Other people see it as correct and it gets passed on, but you lose the fact that it originated from a rather inaccurate, but believable source.”
“Hallucinations, where the AI appears to make something up, are definitely a weakness, but they’re also a feature in the sense that previous AI would come up with something crazy when it failed,” said Henninger.
“Whereas ChatGPT will attempt to produce an answer, which could be wrong, could be right, but sounds plausible.
“I think that’s instructive when we discuss how this technology will impact businesses, which is to say, it is about how accurately it can achieve what you want it to do, but it’s also about whether you have an application around it, a process or people around it that know what to do when malfunctions occur.”
It seems artificial intelligence, particularly generative AI, still needs to be handled with kid gloves; after all, hallucinations are rarely a good sign.
Yet adoption of ChatGPT would give the impression that the genie is out of the bottle.
With this in mind, Colin Murdoch, chief business officer at Google DeepMind, and Jay Limburn, vice president of data and AI at IBM, believe the best route forward is for companies to take a measured approach to adoption, alongside governmental regulation that controls the technology.
“Businesses just have to make sure that they’re being responsible with this technology, and they are applying principles to their usage,” said Limburn.
“A big part of watsonx is about underlying AI governance, giving anyone building AI the capabilities to be able to govern the entire lifecycle of the model.”
Limburn adds that there’s also the policy piece. “Engaging with governments, with industries, and influencing the regulation coming in around responsible use of AI and making sure that that regulation is applied within the industry, and not necessarily one big broad regulation.”
Murdoch added that firms needed to start thinking now about how they will work with AI, which they can do by starting conversations with internal and external stakeholders about the benefits and potential downsides.
“It’s a responsibility for all of us,” said Limburn.
“It’s about active participation, to help drive innovation, but making sure that we’re doing it responsibly and making sure we have our own principles that we will stick to in terms of how we use and implement AI.”