
The Rise of Artificial Intelligence and the Threat to our Human Rights

By Cate Brown, 14 Jul 2017

From Frankenstein’s murderous monster to the malevolent force of the Matrix, science fiction often tussles with the idea of an evil other: a human creation gone wrong.

These creations have so far kept to the world of fiction, with artificial intelligence framed in largely positive terms. Google is the pub quiz master, Siri is on hand to make restaurant reservations and, by 2021, we’ll have our own virtual chauffeurs.

But where is artificial intelligence heading? Could science fiction become reality? And what does this all mean for human rights?

What is artificial intelligence?

Image credit: Robin Zebrowski / Flickr

First coined by John McCarthy in the 1950s, the term artificial intelligence (AI) really means machine intelligence. Despite what the movies say, this isn’t about robotics. A robot – or an interface like the voice of Apple’s Siri – is simply the personification of the machine intelligence inside.

Tim Urban, creator of the popular blog Wait But Why, says all artificial intelligence can be separated into three bands. These are:

  • Artificial Narrow Intelligence (ANI), where a machine is programmed to have a particular expertise;
  • Artificial General Intelligence (AGI), where a machine’s capabilities span the full spectrum of human activity, equalling us in terms of our understanding; and
  • Artificial Super Intelligence (ASI), where a machine’s intellect surpasses that of the best human brains.

Project robot domination

Image credit: Pixabay

So far, artificial intelligence has not gone beyond the first level – ANI. This technology is embedded everywhere, from the navigation system on your smartphone to the spam filter on your email account.

Compared to previous inventions, ANI has evolved, and transformed our lives, at an unprecedented rate. In science speak, this is the law of accelerating returns: the rate of progress itself keeps increasing, so each period of time (say, a century) produces more change than the one before it. In everyday terms, it’s why printed maps now seem about as helpful as a telegram.

Despite this progress, the second level – AGI – has not yet emerged. It would require not only an increase in raw computing power, but also a leap in the quality of machine intelligence. In other words, machines would need to be able to learn for themselves.

In a worldwide survey conducted in 2013, hundreds of AI experts gave 2040 as a “realistic estimate” for the development of AGI. More than 75% of those surveyed believed that the transition to ASI would follow within thirty years of this – so by 2070.

Confused? Don’t be. What this means is that there’s a convincing consensus that machines will acquire human-level intelligence within most of our lifetimes, and that artificial super-intelligence will follow soon after. The question is not if, but when.

So what might the future of artificial intelligence mean for human rights?

Threats to human rights

Image credit: Pixabay

Current discussions about AI focus on the short-term threats posed by its expansion, such as job losses. Studies have found that up to 50% of all jobs are now susceptible to automation, including traditionally ‘safe’ professions such as law, accountancy and medicine.

In a recent public lecture broadcast by Gresham College, Professor Martyn Thomas warned that if displaced workers are not adequately retrained – and if the state does not fairly distribute the wealth generated by a boom in AI – the “social disruption could be enormous.” From a human rights perspective, this could endanger people’s economic, social and cultural rights.

There’s also a danger that personal data retained by machines will be accessed for criminal or political purposes, a reality demonstrated by recent cyber attacks. These attacks risk undermining our human right to privacy, which is protected by Article 8 of the Human Rights Convention. Serious attacks could undermine other rights too – for example, the right to healthcare and the right to life, as the recent attack on the NHS showed.

Beyond this, machine learning based on human behaviour risks transferring the historical biases in our society to machines. This could mean, for example, that AI used in predictive policing or loan-approval systems would entrench discrimination on the grounds of race or gender – behaviour prohibited by Article 14 of the Human Rights Convention.

The end of the human race?

If an expansion of ANI, and its progression towards AGI, poses a threat to human rights, what about the creation of artificial super-intelligence?

Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb. Such is the mismatch between the power of our plaything and the immaturity of our conduct.

Nick Bostrom, Professor in AI Ethics and Philosophy at the University of Oxford

Bostrom’s warning about ASI is echoed by the likes of Stephen Hawking, Bill Gates and Tesla CEO Elon Musk. All three have expressed concern that ASI poses an existential danger to humans, threatening the most basic human right of all: our right to life.

This threat stems from machines developing competence, rather than evil intent, explains Hawking: “A super-intelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble.”

Unless programmed otherwise, ASI will pursue the most efficient means, irrespective of ethics. And seeing as it will be more intelligent than us, it may be impossible to foresee every consequence of our invention. “[We may well] produce something evil by accident,” warns Musk.

So that’s that then…

Not quite. For every dystopian prediction, there is also potential for ASI to deliver real gains for human rights.

This includes removing people from dangerous and degrading jobs and paving the way for additional time with family – a right enshrined in Article 8 of the Convention.

ASI could also develop a solution to climate change, one of the main causes of forced migration, and facilitate the production of food from scratch, helping to eradicate food poverty – which undermines the right to adequate food protected by Article 11 of the International Covenant on Economic, Social and Cultural Rights.

According to some experts, it might even be possible to reverse ageing, thereby extending our right to life.

For those developing this technology, halting progress on account of ‘doomsday’ predictions risks delaying these gains.

We didn’t rush to put rules in place about how airplanes should work before we figured out how they’d fly in the first place.

Facebook founder Mark Zuckerberg on ASI

Well…What now?

Speaking at the AI for Good Global Summit last month, Amnesty International’s Secretary General concluded that there are “huge possibilities and benefits [to be gained] from artificial intelligence” if “human rights is a core design and use principle” of this technology.

This reflects the thinking of a budding beneficial-AI movement, backed by Musk and others. In January 2017, he and hundreds of AI experts endorsed an open letter published by the Future of Life Institute. This laid down 23 principles to ensure that AI remains a force for human good, including the concept of “value alignment”.

Whether these principles will be enough to protect mankind from machines really is the killer question.

About The Author

Cate Brown

Cate is a freelance journalist and filmmaker, who was an Associate Editor at EachOther from 2017 to 2019. She produces and directs documentaries for the major UK broadcasters, alongside writing for national publications. Cate is a qualified lawyer, with an interest in media law and press regulation.