There has been a rising tide of apprehension from experts in the field about the dangers that developments in Artificial Intelligence (AI) may hold for humanity. Just last week, Geoffrey Hinton, the pioneer called the “Godfather of AI” and for a decade a leading researcher at Google, resigned from his position at the tech giant, citing growing concerns about the ethical implications of the technology he helped create. We have long passed the “Turing test” for determining whether a machine can demonstrate human intelligence: if a machine can engage in a conversation with a human without being detected as a machine, it is deemed to have demonstrated human intelligence.
The concern expressed by more than 1,000 experts, who signed an open letter last month calling for a six-month moratorium on the training of AI systems more powerful than GPT-4, the newest model released by OpenAI in March, is that “AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs.” But the utilisation of AI is proceeding apace, and has in fact invaded the field of journalism and the media in general.
Back in 2018, China’s Xinhua News Agency launched the world’s first AI-run news anchor, a computer-generated male figure that delivers the news in a startlingly lifelike manner. Last month, as Xinhua debuted a female anchor, Russia’s Svoye TV did the same with Snezhana Tumanova as its first AI weather presenter, and India’s “India Today” group unveiled Sana, a female bot who presents news updates several times daily on its Aaj Tak news channel. She has a realistic, human-like appearance and is fed text that she reads with synchronised lip movements using text-to-speech technology. The media group described her without any irony as “bright, gorgeous, ageless, tireless”, and most viewers seem to agree. In March, NewsGPT went live as the world’s first internet newspaper whose content is generated entirely by AI. Very few readers can detect that there was no human input.
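For the technically curious, the text-to-speech step at the heart of such anchors is simple enough to sketch in a few lines of Python. The sketch below uses the open-source pyttsx3 library purely for illustration; the broadcasters’ actual systems are proprietary, and the avatar rendering and lip-sync layers are not shown.

    # A minimal sketch of the text-to-speech step behind anchors like Sana,
    # using the open-source pyttsx3 library. Avatar rendering and lip-sync,
    # which the broadcast systems layer on top, are not shown here.
    import pyttsx3

    def read_bulletin(text: str, words_per_minute: int = 160) -> None:
        """Convert a news bulletin into synthesised speech."""
        engine = pyttsx3.init()                       # platform's default voice
        engine.setProperty("rate", words_per_minute)  # roughly a newsreader's cadence
        engine.say(text)
        engine.runAndWait()                           # block until the audio finishes

    read_bulletin("Good evening. Here are today's headlines.")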
This, of course, raises the question of the future of journalism. Just as consumer backlash against social media may have begun to taper off comes this new challenge, not necessarily for the news companies per se, but for their employees. Some believe that the bots might be able to compose and deliver news bulletins in writing or verbally, but may falter when it comes to face-to-face interactions with guests or reporters in the field. But judging from interactions with ChatGPT, which is available to even novice computer users, these news bots have long since passed that test.
At the base of AI’s operations, however, are the algorithms through which it is programmed. Because these systems are built from neural networks that loosely simulate the human brain, the scientists in trepidation about AI’s development are fearful of its self-learning capabilities. In journalism, for instance, there is the potential for fake news to be generated from so many bot sources that consumers would be even more hard-pressed to separate the real from the fake. This could be directly utilised by political campaigns to influence voting patterns.
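What “self-learning” means can be shown with a toy example: a single artificial neuron that adjusts its own weights by trial and error, with no human writing the rule it ends up with. The names and data below are invented for illustration; GPT-class systems scale this same idea to billions of weights.

    # A toy illustration of self-learning: a single neuron tunes its own
    # weights by gradient descent until its guesses match the examples.
    def train_neuron(samples, epochs=1000, lr=0.1):
        w, b = 0.0, 0.0                      # start knowing nothing
        for _ in range(epochs):
            for x, target in samples:
                pred = w * x + b             # the neuron's current guess
                error = pred - target
                w -= lr * error * x          # nudge weights to shrink the error
                b -= lr * error
        return w, b

    # Teach it a pattern (here, y = 2x + 1) purely from examples.
    w, b = train_neuron([(0, 1), (1, 3), (2, 5), (3, 7)])
    print(f"learned: y = {w:.2f}x + {b:.2f}")   # prints roughly 2.00x + 1.00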
As one Indian commentator advises, “Technology problems are not solved by more technology, but by laws and codes — by regulation. The question of intellectual property is deep: why is it normal for an acolyte to train with a master, but a violation for an AI to be trained on (the musician) Drake’s music? But the question of deep fake news — fiction written by machines for electoral advantage — assumes urgency as national elections near in the US and India. Since free speech absolutism has become widely accepted, it may be difficult to step back to when it was agreed that liars deserve to be penalised, not merely exposed, for the greatest good of the greatest number — including the liars themselves.”
These should be concerns for Guyana’s 2025 elections as well, where the threat will come not only from domestic players.