EDF: Delivering value with AI
It’s been two years since the British-based French-owned energy company EDF launched an ambitious group-wide AI transformation project to offer new services and improve the performance of its operations as well as the experience of its employees and customers.
Earlier this month, at Tech Expo’s AI track, Nidhal Zribi, the energy firm’s head of R&D Digital Innovation, shared some insights into the company’s current applications and the challenges it has faced in using AI to deliver value.
To meet its objectives with AI, the company has set up three units: an AI-specific R&D arm for large projects; an AI-based IT programme; and a company-wide AI task force to explore use cases, develop proofs of concept and understand AI applications and their deployment within EDF’s production platforms.
According to Zribi, one early use case has been in the field of repair and maintenance: an AI and computer vision model devised by its R&D team detects blade faults in EDF’s offshore and onshore wind farms.
“These assets need to be inspected on a regular basis and this model allows maintenance engineers to identify problems within the blades, such as erosion, cracks and other damage, and very rapidly engage in their repair, saving EDF both time and money,” Zribi explained.
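EDF’s production system relies on trained computer vision models applied to inspection imagery. As a highly simplified illustration of the underlying idea – flagging blade regions whose appearance deviates from healthy references – here is a minimal sketch; all data, names and thresholds are invented for illustration and are not EDF’s actual system:

```python
import numpy as np

rng = np.random.default_rng(2)

# A highly simplified stand-in for blade-fault detection: we flag image
# patches whose texture statistics deviate from healthy reference patches.
# (Invented reference data; a real detector would be a trained CV model.)
healthy = rng.normal(0.5, 0.05, size=(100, 16, 16))  # healthy blade patches
per_patch_std = healthy.std(axis=(1, 2))
mu, sigma = per_patch_std.mean(), per_patch_std.std()

def flag_patch(patch, z_threshold=6.0):
    # Cracks or erosion typically raise local contrast (patch std),
    # so an unusually high z-score marks the patch for inspection.
    z = (patch.std() - mu) / sigma
    return z > z_threshold

smooth = rng.normal(0.5, 0.05, size=(16, 16))   # healthy-looking patch
cracked = smooth.copy()
cracked[8, :] = 0.0                              # dark crack line across the patch
print(flag_patch(smooth), flag_patch(cracked))
```

In practice the thresholding step would be replaced by a model trained on labelled fault imagery, but the pipeline shape – score each patch, flag outliers, route them to maintenance engineers – is the same.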
Machine learning is also being deployed by EDF to remotely predict the thermal efficiency of customers’ homes: the ‘physical’ prediction model the energy company had been using (fed with data from meters, outdoor temperature and so on) is used to train an AI model, which learns the physical model’s behaviour and improves on its accuracy.
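This hybrid approach – a learned model correcting a physics-based one – can be sketched in minimal form as follows; the physical model, features and data below are invented stand-ins, not EDF’s actual models:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 'physical' model: predicts a home's heat demand from
# outdoor temperature and metered baseline usage (illustrative only).
def physical_model(outdoor_temp, baseline_kwh):
    return baseline_kwh + 0.5 * (18.0 - outdoor_temp)

# Synthetic 'observed' data: the true relationship deviates slightly
# from the physical model (e.g. unmodelled insulation effects).
temp = rng.uniform(-5, 15, 200)
base = rng.uniform(5, 20, 200)
true_demand = base + 0.65 * (18.0 - temp) + rng.normal(0, 0.1, 200)

# The ML step learns a correction on top of the physical prediction.
phys = physical_model(temp, base)
X = np.column_stack([np.ones_like(temp), temp, base, phys])
coef, *_ = np.linalg.lstsq(X, true_demand - phys, rcond=None)

def hybrid_model(outdoor_temp, baseline_kwh):
    p = physical_model(outdoor_temp, baseline_kwh)
    features = np.array([1.0, outdoor_temp, baseline_kwh, p])
    return p + features @ coef

# The hybrid model tracks the data more closely than physics alone.
err_phys = np.mean((phys - true_demand) ** 2)
hybrid_preds = np.array([hybrid_model(t, b) for t, b in zip(temp, base)])
err_hybrid = np.mean((hybrid_preds - true_demand) ** 2)
print(err_hybrid < err_phys)
```

The design choice here is the key point of the EDF use case: rather than replacing the physical model, the learned component starts from its predictions and only has to capture the residual error, which is typically a much easier learning problem.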
“It’s not just about delivering a proof of concept – we need to be able to pass on the value to our end users. It’s about saving money, it’s about saving our carbon footprint, and it’s about bringing ideas that matter to our clients,” said Zribi.
According to Zribi, however, EDF still has work to do before it can take full advantage of AI’s potential.
The firm is currently exploring how to improve the ‘explainability’ of black-box AI applications: applications developed by a third party (a Google or a Facebook, for example) and adopted by enterprises to help solve problems.
In machine learning, black-box models are created directly from data by an algorithm – a largely self-directed process that is difficult for data scientists and programmers to interpret.
The issues with black-box AI arise when software used for important operations and processes within an organisation can’t be easily inspected or understood, allowing errors to go unnoticed until they cause problems.
AI bias, for example, can be introduced into algorithms consciously or unconsciously by developers, or it can creep in through undetected errors.
“What we’re trying to do in this area is to introduce explainability models that enable us to improve the interpretability of the results and explain these to our clients – but this is still very much a future research area for my team,” Zribi said.
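One widely used technique in this family is permutation importance, which probes a black-box model purely through its predictions: shuffle one input feature at a time and measure how much accuracy degrades. A minimal sketch, using an invented stand-in for the opaque model (this is a generic illustration, not the specific approach Zribi’s team is researching):

```python
import numpy as np

rng = np.random.default_rng(1)

# A stand-in for an opaque 'black box' model: we can only query its
# predictions, not inspect its internals (illustrative only).
def black_box(X):
    return 3.0 * X[:, 0] - 0.5 * X[:, 1] + 0.0 * X[:, 2]

X = rng.normal(size=(500, 3))
y = black_box(X)

# Permutation importance: shuffle one feature at a time and measure how
# much prediction error grows; a larger increase means the model leans
# on that feature more heavily.
def permutation_importance(model, X, y):
    baseline = np.mean((model(X) - y) ** 2)
    scores = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])
        scores.append(np.mean((model(Xp) - y) ** 2) - baseline)
    return np.array(scores)

importance = permutation_importance(black_box, X, y)
print(importance.argmax())  # → 0: the model relies most on feature 0
```

Because it only needs the model’s inputs and outputs, this kind of probe works even when the model was built by someone else – exactly the third-party black-box situation the article describes.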