Elon should chill out – there’s no sign of Metal Mickey rising up against humanity

Adrian Weckler

Could everybody please relax about artificial intelligence (AI)?

Honestly, it’s not going to rise up and kill us. It barely even gets our voice commands right.

Obviously Elon Musk and some other tech figures disagree. And last week they wrote a letter calling for a six-month halt on AI development.

Their reasoning? AI is “out of control” and a “risk to our civilisation” because it’s spawning “nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us.”

ChatGPT is just a sophisticated way of copying and pasting

Relax, lads. There’s no sign of Metal Mickey or ED-209 yet.

This hasn’t reassured Musk. AI, the letter concludes, is being controlled by “unelected tech leaders”. (Like Musk, maybe, whose own company is under investigation for lying about its cars’ ability to drive autonomously.)

Could the letter be right? Have systems like ChatGPT – which is clearly the trigger for this latest anti-AI broadside – given us fresh reason to worry about the machines plotting to take over?

For any calm, reasonable person, it’s a hard argument to make. Despite the hype, ChatGPT – and systems like it – have not yet demonstrated anything like ‘master’ potential.

Remember, ChatGPT is a ‘large language model’. In other words, it’s a sophisticated way of copying, pasting and connecting bits and pieces from the vast trove of text it was trained on.


It recognises patterns in the way it has seen language being used. That includes how questions are typically answered. So when you ask it something, or ask it to do something (write a poem, a song or some computer code), it regurgitates variations of the answers it has seen given to similar questions in the past (up to 2021, to be precise).

This doesn’t come close to what most people might reasonably regard as human-level ‘intelligence’.

Even so, could it be abused and upend our society?

The examples being offered are pretty flimsy.

‘The essays,’ we shriek. ‘How are we going to stop students turning in synthetic essays?’

In Ireland, this is a hilariously bad argument. When I was in secondary school, our essays were set by the curriculum as rigid, point-based, boilerplate formulae for the purpose of achieving exam points.

They may as well have been ChatGPT scripts. There were half a dozen templates you were encouraged to learn, practically by rote.

To stray from this – to engage in any original or critical thinking – was to risk bad marks and a telling-off. In my school, my fearful, obedient English teacher regularly gave me low grades because I dared to include characters and narratives in essays.


“This is not an essay for this class,” he admonished, more than once.

‘Ah yes,’ the AI worrier counters. ‘But what about college? Students will cheat. They won’t learn.’

I wince when I hear this argument. It’s untenably pessimistic, not only about adult students’ own ambition, but about colleges’ ability to set and track standards.

Are these institutions supposed to be elite centres of higher learning or not? Is there a serious argument that some universities won’t be able to tell the difference between ChatGPT and an original essay? If so, why would we accord that university any respect?

To be clear, there are some considerable worries about AI. But they are much more about how it’s used – deliberately, by us – than about the tech itself.

One good example is military drones. For the past decade, there has been a debate about what autonomous decision-making powers military attack drones should have in the field.

If a target is identified, should they be authorised to fire without a human pushing the button? Big military countries like Israel and the US lean toward autonomy, while others – such as France – dislike the idea.

There’s also real concern over how much responsibility AI systems should be given in civil and criminal processes, for things like facial recognition.

The danger of discriminatory profiling and mistakes is just too great, many of us believe. So don’t use it as a tool for those things.

And there is also the constant tension over algorithms that control the world’s information flows over social media platforms.

But it is always humans who ultimately call the shots in these scenarios, either directly or through regulation.

The idea that AI should be scotched out of fear is a little like the Catholic Church asking for the printing press to be paused because of the unrest it could cause among peasants.

AI is a core building block for many of the things that make people’s lives easier, from less back-breaking industrial processes to fewer deaths on the roads.

There’s no sign of any HAL 9000 on the horizon. And if one comes along, we can always pull the plug.