We need to chat about Artificial Intelligence — and real ‘fake news’

As artificial intelligence develops at a rapid pace, experts shift focus to ethical implications

AI has been 'overwhelmingly used for positive purposes' but threats remain. Photo: Colin Anderson Productions

Rodney Edwards

This article is on the pros and cons of artificial intelligence (AI) and these two opening paragraphs have been written by me, ChatGPT.

As an AI model based in America, I am well-versed in the topic of AI, its impacts, and its potential problems. And in this article, the Sunday Independent will draw upon expert opinions from academics in the field to provide insightful information.

And now back to me, the journalist, not the machine, who is typing this on my keyboard. I was exploring AI last week and have been left utterly amazed and concerned in equal measure.

The first two paragraphs of this article were written by AI after I gave it an instruction and keywords to include. It sent the words back to me in just over 20 seconds.

AI has the potential to revolutionise various aspects of our lives from healthcare to transportation to education. These systems can help diagnose diseases more accurately, reduce traffic congestion, and personalise learning for students. However, there are concerns about the ethical implications of AI in Ireland.

It can also be misused. Last week, the editor-in-chief of a German magazine was fired after it used an artificial intelligence programme to produce fake quotes from Michael Schumacher.

The managing director of the Funke media group, which publishes Die Aktuelle, said the “tasteless and misleading article should never have appeared”.

The Formula One world champion has not been seen in public since suffering a near-fatal brain injury while skiing in France in December 2013, almost 10 years ago.

It gives a new meaning to “fake news”, and the potential for AI to be used for malicious purposes is worrying given the rise of misinformation and disinformation already seen in Ireland.

Convincing but fake image of Donald Trump's arrest was generated by AI

In the United States last week, the Republican Party released a 33-second video advert attacking President Joe Biden using images created entirely by artificial intelligence.

“While we always suspected that AI was being used in determining who to target for online campaign messages in elections, this is the first openly admitted use of AI in that arena,” Professor Alan Smeaton of Dublin City University says.

In recent months AI has probably had more media coverage than in the previous decade. ChatGPT and large language models (LLMs) are now everyday discussion topics.

“ChatGPT doesn’t just chat interactively but can also generate replies like full documents and can be used to generate new content, new text,” he says.

“LLMs are also turning the worlds of images and art upside down with systems like Midjourney allowing easy generation of images based on a text input, as was done with the video advert in the US.”

Midjourney describes itself as “an independent research lab exploring new mediums of thought and expanding the imaginative powers of the human species”.

He says while this part of AI has had “much media attention”, other areas such as computer vision, which uses artificial intelligence to “see” and interpret visual data, continue to develop and improve “at a rate of knots faster than ever before”.

Earlier this month Meta released the Segment Anything Model (SAM) “which will turn computer vision applications, like those that run on your phone, on its head”.

SAM will scan images to identify and categorise them, locate product flaws, and improve picture search results.

“This rapid progress across many parts of AI in the very recent past is due to the development and release of what we call foundation models. These are large AI systems pre-trained on vast amounts of data. Once the expensive pre-training is done then we adapt the AI model to a wide range of new tasks.”

But can we expect AI-generated images and text to be used in political campaigns in Ireland? “Absolutely, yes,” says Prof Smeaton.

Fake image of Pope Francis was generated by AI

Ireland’s reputation as a “hub for artificial intelligence” continues to grow, according to Barry O’Sullivan, a professor at the school of computer science and IT at University College Cork.

He says AI has been “overwhelmingly used for positive purposes”, including in the search for Covid-19 vaccines.

“AI is an exciting set of technologies,” Prof O’Sullivan says. “The movies and books we watch and read are often suggested to us using recommender systems. Spam emails are filtered out of our inboxes using machine learning techniques.

“Voice assistants are powered by speech and natural language processing methods. The hardware that powers our computers is verified using AI-based reasoning systems.”

He highlighted Ireland’s strong heritage in AI, citing George Boole, the first professor of mathematics at UCC (1849 to 1864), who created Boolean algebra. He also referred to Alan Turing, whose mother was from Co Clare, and John McCarthy, the son of an emigrant from the village of Cromane on the Iveragh Peninsula in Co Kerry, who coined the term “artificial intelligence”.

“Ireland has a lot to be proud of and a lot to be excited about,” O’Sullivan says.

“Our international academic leadership in AI today is strong in many areas, and Ireland is home to many of the world’s most innovative AI companies.”

Dr Tijana Milosevic

But Dr Tijana Milosevic, of Dublin City University (DCU), who has been examining AI on social media, says the opportunities for AI misuse are “plentiful”.

“A few weeks ago, I received an email from a colleague with the following subject line, ‘Bogus PhD application written by ChatGPT’,” she says.

“One had to know the field well in order to catch the obvious signs, the colleague explained, but otherwise it was quite believable. At the same time, I’ve been hearing from other colleagues in Ireland and internationally that some of their students’ writing has suspiciously improved since ChatGPT came out last November.”

However, plagiarised essays are just one of a number of potential issues arising from AI.

“From convincingly well-written yet completely made-up news stories to deepfake photos and videos that can also be leveraged for cyber bullying, which is one of the topics of my research,” Dr Milosevic says.

“In the US, for example, a woman created deepfakes, using AI to create realistic-looking photo and video content, to bully her daughter’s team-mates. The mum altered the girls’ images she had found online to falsely portray the girls as drinking alcohol, being nude or engaging in otherwise inappropriate behaviour,” she says.

While this cyber bullying case is atypical, as it involved an adult targeting minors, the concern that deepfakes could be used for cyber bullying is “a source of worry in Ireland as well”.

“AI, however, can be used for positive purposes: to detect cyber bullying on social media platforms before users report it, in an attempt to minimise the problem on platforms,” Dr Milosevic says.

“Nonetheless, even such uses have implications for users’ privacy and freedom of expression. For example, would users approve of AI monitoring of direct messages? And what if AI labels some content as cyber bullying or harassment by mistake, and legitimate content that is not hurtful gets taken down?”

In a recent study, Dr Milosevic asked teenagers about their views on the effectiveness of such AI-driven cyber bullying regulation on social media platforms.

“I worry about the greater cultural implications of the use of AI and how it is shaping views on what is valuable in society. A university student recently approached me after a panel and shared her fear that she might be writing all these term papers for various modules completely in vain,” she says.

“If there’s ChatGPT that can write much better papers than her, she wondered if she was wasting her time honing a skill that would prove to be useless in the future job market.”

Professor Dave Lewis, head of Artificial Intelligence at Trinity College Dublin, is working with experts to develop and translate AI solutions.

Adapt at Trinity is a world-leading research centre and works in partnership with seven other institutions: DCU, University College Dublin, Technological University Dublin, Maynooth University, Munster Technological University, Technological University of the Shannon, and the University of Galway.

“Adapt’s research aims to fill the gap that has grown between societal expectations for trustworthy AI research and innovation in practice,” Prof Lewis says.

He says the challenges addressed include the complexity and unpredictability of AI systems’ impact, confusion about the distribution of accountability, a socio-technical disciplinary divide, and a lack of effective tools and methodologies.

“We seek paths to how the commercial innovation and productivity arising from AI research builds and maintains user trust and is accountable and acceptable to society.”

Professor Tomás Ward, of DCU, describes AI as an “incredible invention” which “promises to make our lives better”.

“There are concerns about how it could be used to harm society. For example, it can be used to create fake content. It is not hard to imagine how that can be used to spread lies and manipulate people.

“Another very real concern is job loss. Generative AI could take over jobs like writing or even journalism. It is capable of encroaching on the livelihood of creatives such as graphic designers, architects, artists and musicians. This is not good news for people who rely on those jobs to make ends meet.”

He lists the ethical issues, including how companies could use generative AI to create ads that are “hyper-personalised and manipulative”, especially for sharing over social media.

“This is particularly dangerous when used to exploit people who are more vulnerable. It can be used to engineer content to create opinions, influence decision-making and encourage behaviours,” Prof Ward says.

But he is at pains to add that it is “not all bad news”, saying there are ways “to minimise the risks of generative AI”.

“We can invest in research and development to better understand the technology and figure out how to regulate it and how to give society control and oversight. We can also educate people about how to recognise fake content and protect themselves from its influence.

“Regulation is clearly a key element of any strategy to harness AI for better living. We need to make sure that generative AI is used ethically and responsibly.”

He believes governments and regulatory bodies “should work together to create guidelines that ensure the technology is used for the good of everyone, not just a select few”.

“Generative AI is a powerful technology that can be used for good or bad. By working together in terms of regulation and education, we can help ensure it changes the world for the better,” he says.