Britain’s equality watchdog to regulate public bodies’ use of AI
The Equality and Human Rights Commission (EHRC) is set to become the new regulator of the use of artificial intelligence by UK public bodies.
The regulator intends to clamp down on discrimination caused by technology, with evidence showing that bias built into algorithms can lead to “less favourable” treatment of people depending on characteristics such as race and sex.
“While technology is often a force for good, there is evidence that some innovation, such as the use of artificial intelligence, can perpetuate bias and discrimination if poorly implemented.
“Many organisations may not know they could be breaking equality law, and people may not know how AI is used to make decisions about them,” said Marcial Boo, chief executive of the EHRC.
In July this year, the UK outlined new AI regulation which aimed to boost innovation and public trust in technology. The Information Commissioner’s Office (ICO) had already expressed concerns over the use of algorithms to filter recruitment applications.
In its new three-year strategy, the EHRC has made clear that tackling discrimination in AI is a key focus, and it has today published new guidance to help organisations steer clear of breaches of equality law, including the public sector equality duty (PSED).
“As part of this, we are monitoring how public bodies use technology to make sure they are meeting their legal responsibilities, in line with our guidance published today. The EHRC is committed to working with partners across sectors to make sure technology benefits everyone, regardless of their background,” Boo stressed.
Starting in October, the commission will work with roughly 30 local authorities to gain insight into how they are using AI to deliver essential services, such as benefits payments, amid concerns that automated systems are inappropriately flagging certain families as a fraud risk.
The government body is also exploring how best to use its authority to examine how organisations are using facial recognition technology, following worries that the software may be disproportionately affecting people from ethnic minorities.
“It’s vital for organisations to understand these potential biases and to address any equality and human rights impacts,” concluded Boo.