In AI we trust?
Jim Chappell, global head of artificial intelligence at industrial software company AVEVA, likens the risk of AI to driving a car.
“A car is a great tool, but it can do bad in the hands of the wrong people,” he says.
This could be one of the reasons why the vast majority (94%) of business leaders have admitted to "tech anxiety" within their organisation's senior leadership, according to a recent study. Meanwhile, 70% of workers fear losing their jobs to artificial intelligence, whether to automated machines or chatbots such as ChatGPT.
AI experts and governments have been working to mitigate AI risk, bias, and misuse, while bidding to introduce regulations and ensure that artificial intelligence and humans can work in harmony.
“We’re at the beginning of the AI journey, and it has the potential to be bigger than the internet, bigger than many things in the history of the world,” says Chappell.
“It is still a tool, and it can be used for very good things, and has been used for very good things for a long time,” he stresses, with the reminder that “it has been around really since the 1950s”.
Based in Florida, the global head of AI at AVEVA has been working with artificial intelligence since 1995, when he co-founded InStep Software which offered AI-driven predictive analytics and enterprise big data software.
InStep was acquired by digital automation and energy management firm Schneider Electric in 2014, and Chappell moved to AVEVA a couple of years later.
AVEVA is an industrial software company with a history of more than 60 years, with projects including building the ‘world’s first 3D plant design system’ back in 1976.
Since then, the firm has worked on digitising the industrial sector and now focuses on technologies that help engineers factor greenhouse gas reduction into their plant designs, optimise solar energy production, and achieve efficiency and sustainability goals using AI and industrial data sharing.
For Chappell, AI has greatly advanced in the last 20 years: “It already knows and has more knowledge than humans with the large language model training because of the amount of data it was trained on,” he says.
Chappell acknowledges that guardrails are important, saying “you have to be pragmatic about it,” but he argues that humans should not be so reluctant to trust the new technology.
As machine learning has been around for decades, Chappell says it is on companies to take advantage of the data and technology to do more than just observe trends and produce reports, and so have AI work alongside humans.
“You have the system learn from it, use it, and then make decisions or alert the human to problems,” says Chappell.
“Early detection of problems has really started to pick up over the last 20 years, and I’ve been heavily involved in that,” he enthuses.
Chappell’s predictive analytics tools have already been used for routine gas turbine maintenance, where the predictive AI tracks variables such as ambient temperature and turbine efficiency.
It can then predict when a gas turbine needs to be taken down and repaired, and it can also keep track of the cost of gas turbine filter fouling and the payback time for replacing the filters – enabling the optimisation of maintenance intervals and a reduction in unnecessary expenditure.
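The article does not describe how AVEVA's software works internally, but the general idea of this kind of predictive maintenance can be sketched in a few lines: fit a baseline of expected efficiency against ambient temperature from healthy operating data, then flag any reading whose measured efficiency falls well below what the baseline predicts. The numbers, threshold, and function names below are illustrative assumptions, not AVEVA's actual method.

```python
# Hypothetical residual-based predictive maintenance sketch (not AVEVA's
# actual product): fit a simple linear baseline of turbine efficiency vs
# ambient temperature, then flag readings that fall well below it.

def fit_baseline(temps, effs):
    """Least-squares line eff ~ a + b * temp from healthy operating data."""
    n = len(temps)
    mean_t = sum(temps) / n
    mean_e = sum(effs) / n
    cov = sum((t - mean_t) * (e - mean_e) for t, e in zip(temps, effs))
    var = sum((t - mean_t) ** 2 for t in temps)
    b = cov / var
    a = mean_e - b * mean_t
    return a, b

def flag_degradation(a, b, readings, tolerance=0.02):
    """Return indices of readings whose efficiency is more than
    `tolerance` below the baseline prediction for that temperature."""
    flagged = []
    for i, (temp, eff) in enumerate(readings):
        expected = a + b * temp
        if expected - eff > tolerance:
            flagged.append(i)
    return flagged

# Healthy baseline: efficiency drops slightly as ambient temperature rises.
healthy_temps = [10, 15, 20, 25, 30]
healthy_effs = [0.390, 0.385, 0.380, 0.375, 0.370]
a, b = fit_baseline(healthy_temps, healthy_effs)

# New readings: the last one shows efficiency loss beyond the temperature
# effect (e.g. from filter fouling), so it is flagged for maintenance.
readings = [(12, 0.388), (22, 0.378), (18, 0.350)]
print(flag_degradation(a, b, readings))  # [2]
```

A production system would use far richer models and many more sensor inputs, but the principle is the same: separate the expected effect of operating conditions from genuine degradation, and alert a human before the fault becomes a failure.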
Other industrial AI use cases include keeping wind turbines in check with similar predictive maintenance, and early detection of factory faults using AI-powered cameras that can spot defects visually.
According to Chappell, the industrial world has evolved with the help of predictive maintenance, with AVEVA’s own customers including oil major Shell and energy firm EDF.
“It started in power generation, then it moved to oil and gas, and then more recently over the last decade, I would say it moved into other industries like life sciences, food and beverage, and other types of manufacturing.”
Chocolate manufacturers, for example, have already been digitally transforming with AVEVA to support product innovation, enhance consistency, reduce costs, and improve sustainability outcomes.
Legacy knowledge transfer
“There’s a certain type of employee that worries about losing their job to AI,” says Chappell. “I don’t see that as the case.”
“I see jobs morphing,” Chappell argues. “It’s just like the industrial revolution of the 1800s or when computers came out in a big way in the 80s.”
To Chappell’s point, any new technology will ignite concern over losing jobs, and he says that workers need to gain more trust in it.
“AI could be as big or bigger than any of those things, but it all comes down to trust.”
While humans are hesitant towards the technology and the data it harvests, Chappell says that with the right security, AI can thrive.
For example, Chappell argues that generative AI can break down senior knowledge for junior employees.
A recent report by IBM found that US state CIOs are particularly nervous about a skills gap emerging when senior, more knowledgeable IT professionals retire.
When senior employees retire and junior employees take on the role, “how do you transition?” questions Chappell.
“Do you have to start over again and have the same engineers go through the life lessons and learn all over again,” asks Chappell, “or should they get a head start?”
The global head of AI explains that artificial intelligence can take on the knowledge of the seniors and offer it to the junior employees with the help of large language models, “so they can achieve more faster, learn faster, and cause less human-induced error”.
However, “If humans don’t trust it, it will be less successful. If humans do trust it, it will be more successful.”
TI recently spoke with Siyabulela Mandela, great-grandson of anti-apartheid activist and former South African president Nelson Mandela, who warned that deepfakes could be used to spark genocide.