An extraordinary thing happened in the media world shortly before Christmas: journalists everywhere discovered a computer could do their job almost as well as they could.
That helps to explain the frenzied coverage of ChatGPT, the awkwardly named chatbot released by the company OpenAI late last year. If you haven’t yet played with ChatGPT, do so now: you’ll have even more fun than the time you tried asking Siri or Alexa rude or silly questions.
You give ChatGPT prompts or ask it questions and it replies with astonishing fluency and authority. I asked it to write an article about Sinn Féin in the style of Eoghan Harris: the answer lacked Harris’s verve, but it accurately captured his politics.
Much of the content ChatGPT produces passes the Turing Test: an observer could not tell it was produced by a machine rather than a human. (The test was devised by the British mathematician Alan Turing as a threshold for the emergence of artificial intelligence.) For anybody who writes for a living, this thing is both a miraculous toy and a clear and present danger.
Another group of professionals close to the heart of our culture have also helped to drive the debate on ChatGPT — the teachers. Asked to write an essay on the consequences of the French Revolution, ChatGPT’s answer will be indistinguishable from that of a generic, articulate student. Because of how ChatGPT works (it generates language based on predicting what words should follow in a sequence, rather than quoting chunks of text from the web), it can’t be detected by the anti-plagiarism tools schools and colleges use.
Teachers fear it will become impossible to police this kind of cheating, and that the teaching of essay-writing, and the assessment of students based on written work, will become redundant. In a worst-case scenario, coming generations will never learn to structure an argument or to think critically.
But students have always managed to cheat at essays. As Simon Kuper wrote in his exposé of elitism in British public life, Chums, the art of essay writing — as learned at Oxford University — was largely about bluffing. One student famously managed to read out almost half of an essay before his tutor discovered he was ‘reading’ from a blank sheet; that student went on to run the NHS and now sits in the House of Lords.
What ChatGPT is extraordinarily good at is bad writing, which is useful for generic student essays, obscure academic articles, funding applications, cover letters, corporate spiel and, yes, those corners of the media where data and speed take priority over craft.
For those of us who write for a living, ChatGPT can probably help automate or expedite some of the more mundane aspects of the job. For those for whom writing is a chore, not a craft, it will provide a lifeline. That makes this aspect of ChatGPT less like an existential threat and more like a productivity hack.
But there are other areas in which ChatGPT, and the advances in artificial intelligence it heralds, are more worrying.
Because ChatGPT is purely a prediction machine, it has no filter for accuracy or truth.
Fifty years ago, people relied on Encyclopaedia Britannica, written by experts and checked by editors. Twenty years ago, Wikipedia proved an encyclopaedia written and checked by users could be just as accurate. But ChatGPT’s answers are neither written nor checked by anyone: it simply produces strings of words that, in the vast corpus of the internet, are associated with each other.
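To make the mechanism concrete, here is a deliberately crude sketch in Python. It assumes nothing about OpenAI’s actual architecture; the tiny corpus and the generate function are invented for illustration. It is a toy bigram chain that, like a large language model at vastly greater scale, picks each next word purely from word-association statistics in its training text, with no notion of truth.

```python
# Toy illustration only -- not how ChatGPT is built. A bigram chain
# that generates text by sampling each next word from the words that
# followed it in the training text. Nothing here checks facts.
import random
from collections import defaultdict

# A tiny invented "training corpus" for illustration.
corpus = (
    "the president of the united states in 2020 was donald trump . "
    "the president of the united states in 2021 was joe biden ."
).split()

# Record which words follow which.
follows = defaultdict(list)
for word, nxt in zip(corpus, corpus[1:]):
    follows[word].append(nxt)

def generate(start: str, length: int = 12) -> str:
    """Chain words by sampling from each word's observed successors."""
    word, out = start, [start]
    for _ in range(length):
        if word not in follows:
            break
        word = random.choice(follows[word])
        out.append(word)
    return " ".join(out)

print(generate("the"))
# Can print, e.g., "the president of the united states in 2021 was
# donald trump ." -- fluent, confident and false, because the model
# only knows which words go together, not what is true.
```

Run a few times, the sketch will happily splice its two sentences together and misattribute the presidency: the same failure mode, in miniature, as the exchange that follows.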
It has no authority, only the simulation of authority. I asked it who was the US president in 2020. “Donald Trump,” it answered, correctly. I told it it was wrong. “I apologise, that was an error in my response,” it replied. “The president of the United States in 2020 was actually Joe Biden.”
“It’s like a robot-written Wikipedia that has no references and no checking,” said Kris Shrishak, technology fellow at the Irish Council for Civil Liberties. “And if it does have references, it sometimes makes up the references itself — beautiful references to sources that don’t actually exist.”
It then becomes part of the internet and may be drawn on as part of the training of AI systems in what Shrishak called “a feedback loop”. ChatGPT has been trained on human-written text, but future models will be trained (at least in part) on AI-written text. Over time, the distinction between text written by humans and text generated by bots will become “greyer and greyer”, Shrishak warned. The overall quality of text — and the internet itself — may simply deteriorate.
But what is that original, human-written text on which ChatGPT has been trained? It is our words and data. As the tech writer Maria Farrell said: “It’s based on theft. The AI has been trained on the words written by people on thousands or tens of thousands of websites, taken without their knowledge or compensation and copied and used for commercial gain.”
TJ McIntyre, chairman of Digital Rights Ireland and associate professor in the UCD School of Law, said: “These systems have taken your and my personal data without our consent. If asked about us, they will spew out information which is at best merely wrong and at worst greatly damaging.
“It’s amusing now while the systems are largely toys, but when they are incorporated into decision-making they present real risks.”
Shrishak said “there’s a lot of work that should have been done before deploying these systems on the internet”. He used the design and sale of new cars to explain how ChatGPT and Google’s rival chatbot Bard had been rolled out prematurely.
Cars are carefully tested before being placed on the market. However, in the case of the chatbots, the companies “have done only part of the testing and then they’ve put these badly-tested ‘cars’ on the market and they’re using the accidents as their feedback”.
Paradoxically, this may be a good thing. Shoshana Zuboff’s 2019 bestseller The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power provided a wake-up call on the threat posed by Big Tech to the integrity of our democracy and culture.
“ChatGPT has shaken us up,” she told the Financial Times recently. “It has shocked people, forcing us to recognise how far AI has come with virtually no law and democratic governance to shape or constrain its development and application.”
After some hours of probing and prodding ChatGPT, I finally succumbed to the basest of online instincts: I asked it what it knew about me.
“Colin Murphy is considered one of the leading voices in contemporary Irish theatre,” it replied, gratifyingly. “His most popular works include The Restoration of Arnold Middleton.”
I have never heard of The Restoration of Arnold Middleton. There is little about it online, but it transpires it was written by an English playwright, David Storey, in 1966. I found that information on the website of the Encyclopaedia Britannica.
There is cheap gratification to be had in poking fun at ChatGPT, but the risk is that this becomes a distraction.
The real issue is regulation. ChatGPT is just the latest in a long line of unregulated technologies that threaten the integrity of public discourse — and thus the viability of liberal democracy — even as they dazzle us with their new tricks.