A group of AI experts has called for a more comprehensive approach to combating AI bias, including rethinking the role of the AI ethicist, pushing for a more standardised rulebook on ethics, and adopting a more consistent approach to the auditing of algorithms.
Twenty international AI experts from organisations including Accenture, The Alan Turing Institute and UNESCO were interviewed as part of the study, which looked at steps business leaders can take today to respond to challenges around trust in AI.
In an increasingly data-driven world, complex algorithms are being used as business solutions across a huge number of commercial domains.
Commissioned by Northern Ireland-based software provider Kainos and slow news outlet Tortoise, the 34-page study, The future of trust in artificial intelligence, explores how firms can prevent poorly designed and tested algorithms from reinforcing racial, gender or other socio-economic biases.
The study follows a series of high-profile cases of algorithmic bias that have resulted in heavy penalties for firms – including Uber, which removed drivers from its platform after its authentication software failed to detect their faces.
Socio-economically discriminatory algorithms were also used by Italian car insurance companies, whose quoted premiums varied according to citizenship and birthplace – with a driver born in Ghana charged as much as €1,000 more than a person with an identical profile born in Milan.
The report identified three guiding hypotheses to improve trust in AI and prevent biased algorithmic decision making.
1. Embed responsible AI across the business
The first step leaders can take is to ensure that everyone within their organisation takes ownership of responsible AI – rather than it being the domain of just one person.
Some of the study’s interviewees argued that hiring an AI ethicist who “just ticks these functional boxes” was not enough.
Will Griffin from AI platform provider Hypergiant added: “Some tech companies with a lot of money and resources hire the best tech ethicists and send them all around the world to discuss subjects like algorithmic bias, fairness and transparency.
“But it’s fruitless, because these knowledgeable professionals don’t have the buy-in to actually change the products at their own organisations.”
While most interviewees admitted hiring an AI ethicist was helpful, they agreed that it should be viewed as “just one step in the process”.
Accenture’s global lead for Responsible AI, Ray Eitel-Porte, added: “We very much take the view that Responsible AI is a responsibility and a business imperative that has to be embedded across the whole of the organisation and not just within the technology people.”
2. Standardise rules on AI
Algorithm Watch, a non-profit research and advocacy institute, found there are as many as 173 sets of AI principles and frameworks in circulation. However, it added that there was “a lot of virtue signalling going on and few efforts of enforcement” and called for a standardisation of both technical and ethical AI practices.
There are already some efforts being made along these lines. The Institute of Electrical and Electronics Engineers’ (IEEE) newly developed CertifAIEd mark seeks to engineer “credit-score-like” mechanisms, drawing on a well-defined set of ethical criteria to safeguard algorithmic trust through certification and standards.
Although this IEEE kitemark is still in its pilot stages, Kostis Manolitzas, head of data science at Sky Media, said it had the potential to fill a gap he feels exists across multiple sectors.
“We need a framework that is consolidated and initially can be a bit more generic and doesn’t have to be applicable in every case, but at least it can cover the majority of the usage of these algorithms. Collaboration is going to be needed,” he said.
Another leading standards body, the International Organization for Standardization (ISO), has been working on a range of standards since 2017, alongside the International Electrotechnical Commission (IEC).
Through the expert working group SC 42, it claims to be making headway on “a ground-breaking standard” that, if accepted, will offer the world a new blueprint to facilitate an AI-ready culture.
Any standardising of rules, however, needs input from a diverse range of stakeholders and shouldn’t just reflect the views of developed economies, argued Emma Ruttkamp-Bloem, chairperson for UNESCO’s Recommendation on AI Ethics.
Other efforts in the pipeline include the EU’s draft AI Act, as well as the formation of a coalition of ethics professionals – a network of experts within a particular domain who share knowledge.
3. Focus on the auditing, not the maths
There have been many efforts to explain algorithmic decision-making systems – a field known as ‘explainability’.
Yet the study makes it clear that technical explanations – where they are possible – do not necessarily make sense to the vast majority of people, who are not experts.
AI solutions provider Faculty suggests that there is often a correlation between complexity – the inner workings of the so-called algorithmic “black box” – and the performance of the system: the more complex and seemingly unexplainable the model’s behaviour, the more accurate it can often be.
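To make that trade-off concrete, here is a minimal Python sketch using scikit-learn on a synthetic dataset – the data and models are illustrative assumptions, not drawn from the report. It compares a readily inspectable logistic regression, whose coefficients can be read directly, with a gradient-boosted ensemble whose hundreds of interacting trees are far harder to explain to a non-expert:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic classification task, for illustration only
X, y = make_classification(n_samples=5000, n_features=20,
                           n_informative=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A simple, inspectable model: every coefficient can be read and explained
simple = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# A more complex "black box": hundreds of interacting trees that are
# often more accurate, but far harder to explain to a non-expert
opaque = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

print(f"logistic regression accuracy: {simple.score(X_test, y_test):.3f}")
print(f"gradient boosting accuracy:   {opaque.score(X_test, y_test):.3f}")
```

On messy, non-linear real-world data the more opaque model frequently comes out ahead, which is precisely the tension Faculty highlights.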
Given these quandaries, what can companies do to gain control and win back trust?
One idea being developed by Eticas Consulting – a firm that teams up with organisations to identify black-box algorithmic vulnerabilities – is algorithmic leafleting.
Eticas founder Gemma Galdón-Clavell explained: “It’s based on the same principles as the leaflets you get when you buy medicine: it comes with a document that tells you how to consume that medicine in conditions of safety.
“It tells you about the ingredients that go into it and you don’t always understand everything. But it’s a document that helps regulators and the public understand what some of the impacts of that piece of medicine are.”
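As a rough sketch of what such a leaflet might contain – a hypothetical structure based on Galdón-Clavell’s analogy, not a published Eticas template – the fields below mirror a medicine leaflet’s ingredients, usage instructions and side effects:

```python
from dataclasses import dataclass

@dataclass
class AlgorithmicLeaflet:
    system_name: str
    intended_use: str             # how to "consume" the system safely
    data_sources: list[str]       # the "ingredients"
    known_limitations: list[str]
    potential_impacts: list[str]  # what regulators and the public should watch for

    def render(self) -> str:
        return "\n".join([
            f"Leaflet for: {self.system_name}",
            f"Intended use: {self.intended_use}",
            "Data sources: " + ", ".join(self.data_sources),
            "Known limitations: " + "; ".join(self.known_limitations),
            "Potential impacts: " + "; ".join(self.potential_impacts),
        ])

# Hypothetical example system, for illustration only
leaflet = AlgorithmicLeaflet(
    system_name="Loan pre-screening model v2",
    intended_use="Rank applications for manual review; never an automatic rejection",
    data_sources=["application form", "credit bureau file"],
    known_limitations=["lower accuracy for applicants with thin credit files"],
    potential_impacts=["may defer more applications from first-time borrowers"],
)
print(leaflet.render())
```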
Galdón-Clavell adds that auditing algorithms is another practical step companies can take to regain control. There are already some existing tools that can help with this – including IBM’s AI FactSheets and Google’s Model Cards for Model Reporting, which companies can integrate into an AI workflow to provide the documentation and reporting that would support an audit.
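A hedged sketch of the kind of record such tools capture is shown below; the field names follow the spirit of the Model Cards approach rather than the exact schema of either product, and every name and figure is an illustrative placeholder:

```python
import json
from datetime import date

# All names and figures below are illustrative placeholders, not real results.
model_card = {
    "model_details": {
        "name": "premium-quote-model",   # hypothetical system
        "version": "1.4.0",
        "date": date.today().isoformat(),
        "owners": ["pricing-ml-team@example.com"],
    },
    "intended_use": "Estimate premiums; human review required for outliers",
    "evaluation_data": "Held-out quotes, stratified by region",
    "quantitative_analysis": {
        # Disaggregated metrics are what make the card useful to an auditor:
        # a single headline accuracy figure can hide group-level disparities.
        "accuracy_overall": 0.91,
        "accuracy_by_group": {"group_a": 0.93, "group_b": 0.84},
    },
    "ethical_considerations": [
        "Citizenship and birthplace are excluded as model inputs",
        "Proxy variables (e.g. postcode) reviewed quarterly",
    ],
}

# Versioned alongside the model so auditors can trace exactly what was deployed
with open("model_card.json", "w") as f:
    json.dump(model_card, f, indent=2)
```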