If no pause, then how can we make sure AI is ethical?
“Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete, and replace us?”
These were just some of the questions asked in an open letter published in March by the non-profit organisation Future of Life Institute, in response to the rapid development of generative AI.
New large language models (LLMs) such as OpenAI’s ChatGPT and Google’s Bard have seen widespread uptake thanks to their ability to take on workloads and generate large-scale efficiencies.
The letter’s signatories included Tesla’s Elon Musk and Apple co-founder Steve Wozniak, alarmed by the booming development of artificial intelligence over the past few months.
To these tech leaders, the prospect of machines taking over has become a serious reality rather than sci-fi, with the letter posing questions about AI’s swift development as stark as: “Should we risk the loss of control of our civilization?”
Generative AIs also look set to replace thousands of job roles, according to firms such as BT and IBM – the latter’s CEO pausing almost 8,000 hires for roles whose work, he reasoned, could now or will soon be performed by AI.
And yet, while the letter warned that the rapid development of AI was “an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control”, no one has stepped forward to volunteer a slowdown.
While not everyone working in artificial intelligence agrees with a full pause on development, most believe that the technology needs to be built and regulated in a way that can be trusted and adopted without being discriminatory – or causing us to, as the letter puts it, “lose control of our civilization”.
Picking up on the themes of this open letter during her Next Web keynote in Amsterdam last week, Google DeepMind’s COO, Lila Ibrahim, expressed concern over AI bias.
“It’s critical that we address the importance of diversity, equality, and inclusion now. It’s not something that we can wait for, and it’s not the responsibility of any one organisation. This is a requirement within all our organisations,” she said.
How to control AI bias
For Janet Adams, COO of decentralised AI firm SingularityNET, technology – and AI specifically – “is like air – everybody should have it. It’s just as important for human rights as education, water, and food.”
And just as the air we breathe needs to be clean, so does the data that fuels these chatbots, she claimed.
“As humans we have all these crazy, silly biases that we’re not even aware of,” Adams said. “But with AI, we have the opportunity to find them because they’re documented in the data.”
Because artificial intelligence derives its knowledge from large data sets, whatever is embedded in that data sets the bias.
“Every decision creates a data trail, and in that data trail, we can look at it and we can audit it and we can run it,” Adams tells TechInformed after her own Next Web keynote.
She cites a 2019 example of AI bias in which a husband with a poorer credit score than his wife was granted a credit limit twenty times more favourable than hers, even though they both applied at the same time.
“It just showed bias in the decision-making of the humans who then drove the algorithm to decide that the man was a better creditor than the woman,” Adams explained.
Historically, “a man is more likely to get promoted, and they’re less likely to take time off for maternity” – so by that unexamined logic, the data marks the man as the better credit risk.
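Adams’ point about auditing a decision data trail can be sketched in a few lines: if every credit decision is logged, grouping outcomes by gender quickly surfaces the kind of disparity in the example above. The decision log, field names, and figures below are illustrative assumptions for this sketch, not SingularityNET’s actual tooling or real applicant data.

```python
# A minimal sketch of auditing a logged "data trail" of credit decisions
# for gender bias. All data here is hypothetical.

def audit_credit_limits(decisions):
    """Average the granted credit limit per gender group."""
    totals = {}
    for record in decisions:
        group = record["gender"]
        amount, count = totals.get(group, (0, 0))
        totals[group] = (amount + record["credit_limit"], count + 1)
    return {g: amount / count for g, (amount, count) in totals.items()}

# Hypothetical decision log: similar profiles, same application date.
log = [
    {"gender": "male", "credit_limit": 20000},
    {"gender": "female", "credit_limit": 1000},
    {"gender": "male", "credit_limit": 18000},
    {"gender": "female", "credit_limit": 1200},
]

averages = audit_credit_limits(log)
ratio = averages["male"] / averages["female"]
print(averages)  # average granted limit per group
print(ratio)     # a large ratio flags the decisions for human review
```

In practice an audit like this would control for legitimate factors (income, credit history) before flagging a disparity, but the principle is the one Adams describes: the bias is documented in the data, so it can be found.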
But the future does not have to be like that past, argued Adams – and now is a unique opportunity to ensure that discrimination is not repeated.
“We can intervene, and we can change the future and we can say ‘no, we’re not accepting that gender bias’,” Adams states.
“It’s a beautiful opportunity, and it’s a once-in-a-species opportunity to weed out and eradicate bias.”
According to Ibrahim, diversity must already be part of a company’s ethos to tackle discrimination in AI. “When you don’t have the right folks around the table to give you that perspective, you need to go outside and figure out: How do you bring a seat in? How do you get those outside perspectives?”
Ibrahim and Adams both believe that creating regulation around AI and controlling its ethics is a joint effort between governments, stakeholders, and marginalised people to ensure, as Adams says, “that we are all developing ethically.”
“Artificial General Intelligence has to be able to cope with power consistency, it has to be able to cope with the fact that what is right to one person, may be wrong to the other,” she added.
“On the topic of ethics and regulation, how do the algorithms ensure that they are being compliant with the ethics of a diverse group of people?” Adams asks.
SingularityNET’s answer to this issue is its own functional programming language, META, which enables programs to represent concepts more complex than, for example, simply recognising a face on camera or predicting what product a customer might buy.
The platform also serves as a decentralised AI marketplace, meaning that artificial intelligence – and those creating it – isn’t owned solely by one large corporation, helping to create a fairer distribution of power.
“Developers can put AIs on the marketplace, and they can be downloaded and streamed just like a Google Play Store app,” explained Adams.