How UCC mathematician paved the way for robots to replace us at work
Within the space of a couple of decades, a robot may be writing this article. It will probably be delivering your post. And if it isn't driving your car, you'll need to get with the times.
In the last few years, artificial intelligence (AI) has moved from a pipedream, or the domain of science fiction, to a reality that will have a profound impact on our lives.
Not only is AI certain to make millions of jobs that exist today obsolete, it will also force us to ask major questions about privacy, laws and ethics.
Last week, many of the world's eminent computer scientists and mathematicians gathered at University College Cork to celebrate the legacy of George Boole, a legendary mathematician whose work on logic and human thought laid the groundwork for modern computing and the AI revolution.
Boole, who was born two centuries ago this year, devised the theory of logic that underpins binary - the "on" and "off" or "one" and "zero" commands that make up the language of computer code.
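Boole's system survives essentially unchanged inside every programming language. A minimal Python sketch (the variable names are illustrative) shows his three basic operations and their binary "one" and "zero" readings:

```python
# Boolean algebra: every value is either on (True/1) or off (False/0).
light_on = True
door_closed = False

# Boole's fundamental operations, built into modern languages:
both = light_on and door_closed   # AND: True only if both are True
either = light_on or door_closed  # OR: True if at least one is True
flipped = not light_on            # NOT: inverts the value

print(both, either, flipped)      # -> False True False

# In binary terms, True is 1 and False is 0:
print(int(light_on), int(door_closed))  # -> 1 0
```

The same on/off algebra, wired into electrical switches rather than variables, is what Claude Shannon later turned into circuit design.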
Many academics believe that, were it not for Boole's premature death in 1864, the digital revolution that began when Claude Shannon used Boolean logic to design electrical switching circuits in the 1930s would have come decades earlier.
Boole was also an early influence on the idea of artificial intelligence, believing that all human thought could be reduced to a series of mathematical rules. On one trip to London, recalls his biographer Des MacHale, Boole marvelled at the "thinking" exercised by Charles Babbage's Analytical Engine, an early calculating machine that borrowed the punched cards of the Jacquard loom.
Given Boole's legacy, it was unsurprising that much of the conversation surrounding his bicentenary centred on the current state of AI. Interest in computer software that can understand inputs and apply meaning to them, whether that is interpreting a search query, navigating a road or translating a foreign language, is at an unprecedented level.
Applications of AI, such as Google's search algorithm or Microsoft Excel's automatic calculations, have been a part of everyday life for years (although it is a common complaint of advocates that as soon as an AI application becomes mainstream, people cease to think of it as intelligent). However, such concepts have been in the popular imagination for much longer thanks to the science fiction of Isaac Asimov and Stanley Kubrick.
Now, rapid advancements in computing power and internet speeds, the huge increase in data collection and the deep pockets of Silicon Valley's finest have combined to forge a new revolution in AI.
Technologies that more closely resemble human intelligence, such as the iPhone's personal assistant Siri, which is able to interpret and respond to human language commands, and image recognition software that can detect faces and animals in photos, are commonplace. Among the leading "machine learning" companies is DeepMind, a British startup bought by Google for £300m last year.
"We're in the AI spring. A few years ago people would talk about it being overhyped or say: 'That's not possible'. That's not the case now," says Oren Etzioni, head of the Allen Institute for AI in Seattle. "There's a wide-ranging commercial impact."
The potential applications of AI are, of course, enormous. Technology that can scan vast amounts of data for patterns will revolutionise research, while the most laborious tasks will be left to robots, should humans learn to trust them. But, unsurprisingly, such possibilities also carry fears that huge parts of the workforce will become obsolete.
Robots don't need salaries or benefits. They don't demand evenings, weekends and holidays. They don't come into work hungover, or late, and don't argue with colleagues. When they become cheap and capable enough, what business owner wouldn't want to replace a human with a robot? This isn't a new idea: Boole himself considered it more than 150 years ago, according to his wife Mary.
Experts are divided on the impact the robotic worker will have on society. Some say that, just as the industrial revolution destroyed farming jobs but created factory work, the rise of the machines will foster new opportunities. Others believe that the jobs that do emerge will be so specialised or skilled that large swathes of the population will find themselves obsolete. "It's a real concern," says the Allen Institute's Etzioni. "The impact on the labour force is something we are really having a discussion about."
A greater shadow potentially hangs over the concept of ever-smarter machines, one that everyone will recognise from films such as 'Ex Machina', 'I, Robot' and '2001: A Space Odyssey' - the idea that super-intelligent machines may, one day, turn on mankind. In the last year, influential figures, among them Stephen Hawking, Microsoft founder Bill Gates, Tesla's Elon Musk and Apple co-founder Steve Wozniak, have warned that mankind is rushing headfirst into developing "real" intelligence without pausing to consider the consequences. "It would take off on its own, and redesign itself at an ever increasing rate," Hawking said last year. In January, Gates warned: "I don't understand why some people aren't concerned."
Many of the experts gathered in Ireland last week brushed aside such concerns. Dr Kenneth Ford, a former Nasa executive who leads the Institute for Human and Machine Cognition in Florida, says most of the trepidation surrounding AI comes from our tendency towards anthropomorphism: projecting human qualities, flaws included, onto machines.
"We need to get beyond species-centric thinking," says Dr Ford. "Where AI gets scary is the idea of AI that's mimicking us, a human intelligence. [What's scary about these ideas often] isn't that something's too artificial, that it's too human."
Dr Ford says people mistakenly believe that man-made intelligence will resemble biological intelligence, just as, before the invention of the plane, attempts at human flight centred on attaching feathered wings to human arms and flapping around like a bird.
He points out that HAL 9000, the antagonist of 2001: A Space Odyssey who turned on his human passengers, was racked by paranoia. HAL's problem wasn't his artificial qualities, it was his human defects, and there is no reason to believe a real-life artificial intelligence would have such qualities.
But fears over the power of artificial intelligence have not been helped by the prominence of the Turing Test, often seen as the litmus test for AI. To pass the Turing Test, devised 65 years ago by Alan Turing, a computer program must be able to convince a human communicating with it via a screen that it is, itself, human.
Most researchers believe that, while a great thought experiment, the Turing Test is not so much an indicator of intelligence as an exercise in mimicry.
"The Turing Test is daft, he never intended it as a scientific goal," says Dr Ford.
Regardless, any machine that can be considered to have a human level of intelligence is likely to be years away. For now, robots remain our faithful servants, although their impact is impossible to ignore. (© Daily Telegraph, London)