No, the robots aren't coming: Why AI really won't steal your job
We need to stop crediting robots with a level of awareness that they just do not possess, advises John Higgs
Personally, I blame Skynet.
In the Terminator movies, the Skynet artificial intelligence (AI) became self-aware and decided that killing humanity was the most logical course of action. Whenever we think of AI now, we don't think of it as a useful tool that wants to help us, like the Star Wars robot C-3PO. We think of it as something aggressive and competitive that does not have our best interests at heart.
Recent advances in AI are also seen in this competitive light. AI, we are told, is coming for our jobs.
A July 2018 report by the Government's Economic Policy Unit claimed that AI and automation will destroy or substantially change two out of every five Irish jobs over the next 20 years. That is a huge change to occur over such a short period of time.
If this is so certain, it is odd that nobody has told the financial markets. If a robot-led productivity revolution was on its way, you would expect to see high levels of investment, high returns on capital and high long-term interest rates. But there is little sign of such things, over the next 10 years at least.
You would expect that jobs would already be disappearing as automation increases, but the number of jobs being created is still going up. You would also expect older, incumbent companies to lose value on the stock markets, but that doesn't seem to be happening either.
Investors, it seems, see things differently to headline writers and technology evangelists. They know that just because automation is possible, it doesn't mean that the economic case for it is certain. Automatic car washes have been around for a long time, for example, yet hand car-washing services still flourish.
Part of the problem of working out exactly how AI will impact us is our difficulty in understanding what it can do, and what it can't. When we see it doing things that previously only humans could do, we can mistakenly ascribe human-like motivations to it.
Facebook shut down two AI chatbots in 2017 because they began communicating in a language of their own devising, which their developers could not understand. This story became front page news around the world.
If the chatbots were talking in their own code, journalists assumed, it must be because they did not want their makers to know what they were talking about. And if this was the case, then clearly they were planning something sinister. This was, after all, how Skynet began.
In reality, the chatbots were not keeping secrets from their makers, because they were not aware that their makers existed. They were not even aware that they themselves existed, and they were certainly not aware that humanity existed, let alone that it needed to be wiped out. Instead, they were machine-learning algorithms that had been trained to know which words fitted together, without ever knowing what those words referred to in the real world, or even that there was a real world for words to refer to.
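The point can be made concrete with a toy sketch (this is an illustration, not Facebook's actual system): a simple bigram model learns only which words tend to follow which in its training text, then strings words together by sampling those statistics. It produces language-shaped output with no idea what any word means.

```python
import random
from collections import defaultdict

# A tiny invented training text. The model will learn word-to-word
# transitions from it and nothing else: no meanings, no world.
corpus = "i can i i can you give me the ball you give me two balls".split()

# Count which word follows which (a bigram table).
transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def babble(start, length, seed=0):
    """Generate a plausible-looking word sequence by sampling transitions."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length - 1):
        options = transitions.get(words[-1])
        if not options:
            break
        words.append(rng.choice(options))
    return " ".join(words)

print(babble("i", 8))
```

Every adjacent pair in the output is a pattern copied from the training text; the "sentence" is statistics, not intent, which is why reading motives into such output is a mistake.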
Our natural instinct to credit AI with an awareness that it does not have has a long history. The computer pioneer and Second World War codebreaker Alan Turing used this to his advantage when he suggested a way to test for AI. The 'Turing Test' he proposed in 1950 did not assess whether a machine could think. Instead, it assessed whether a machine could give the impression that it could think.
AI is getting more powerful and achieving increasingly impressive goals, but it still needs a human to clearly define those goals for it. It is not like Skynet, which had an awareness of the world and was able to decide for itself what it should do. There is no evidence that current machine learning techniques will acquire this ability. This is not just a problem that we haven't solved yet. It is a problem that no one has any idea how to begin tackling.
AI is a tool to be used, rather than a technology that will escape our control and act under its own steam. As such, the person or organisation using it is responsible for it. It is not the case that AI is going to take your job, for example. It is the case that your boss might sack you once he can get AI to do your job more cheaply. That might seem like a pedantic difference, but it is the key to how legislation will deal with this technology.
If AI does cause damage to our society, there will be someone legally responsible for deploying it. This person or organisation will be answerable to the same laws that prevent other forms of harm to society. One great fear is that AI will be used by the military and allowed to decide when to fire weapons and take lives. This could happen, but lobbying is underway to have technology like this banned in much the same way international treaties ban chemical weapons. The hope is that technology like this is never built but, if it is, someone will be responsible.
AI is unable to tackle the most important of human jobs: the generation of meaning and the defining of purpose. Those are tasks for us alone in the coming years. What it can do, however, are the tasks that would be too time-consuming or boring for a human to attempt. If AI did achieve a Skynet-like sense of awareness and realised exactly what we were going to use it for, such as algorithmically sorting Facebook feeds from now until the end of time, I suspect that the first decision it would make would be to turn itself off.
Remembering that AI is a tool which someone is using will help when you next read alarming stories about it. We are not building Skynet. But we might just be building C-3PO.
The Future Starts Here: Adventures in the Twenty-First Century by John Higgs is published by W&N in hardback at €24.40; eBook also available