AI's moment of truth

AI is creating increasingly convincing, but fake, versions of reality. What does this mean for professions that pursue veracity, such as the law and journalism?

Ed Sheeran speaks to journalists outside Manhattan Federal Court on May 4, 2023 following his trial for plagiarism.


By Stephen Phelan

During Ed Sheeran’s recent trial for plagiarism in New York, an expert witness played the court a computer-generated rendition of the song Sheeran had allegedly plagiarised – Marvin Gaye’s Let’s Get It On (co-written by Ed Townsend, whose family brought the lawsuit). This “AI-recording”, as submitted by the defence, replicated the four-chord progression at the heart of the case while intoning the lyrics in a sexless, soulless robot chant that drew laughs from the gallery. Others in attendance reported a certain queasiness.

“It was hideous,” Townsend’s daughter Griffin told the business website Insider. Which is to say that a relatively frivolous news item about artificial intelligence can still bring a reflexive chill. Such tools are increasingly deployed in the service of fact-finding and the pursuit of what we call “truth” – a slippery concept at the best of times, but still a nominal ideal of the law, and of journalism. In the latter case, recent stories on the emerging benefits and hazards of this technology have also suggested all sorts of ironies.

For one thing, most reporters can barely comprehend what they’re reporting on. A majority are not remotely qualified to use the hardware or software in question, nor to speak or write about them in any substantive detail. (The author of this article does not exclude himself.)


Weakest links

"In general, the level of understanding and engagement with computer science and complex AI systems is considerably lower than what you would want or hope,” says Dr Bronwyn Jones, a translational fellow at the University of Edinburgh whose academic work involves bringing arts and humanities to bear on AI-related changes, and shaping relevant research into policy proposals and digestible communication. For more than a decade, Jones has also worked as a “boots-on-the-ground” news reporter in Merseyside, her home turf.

“We are seeing the pollution of the information ecosystem with content produced by generative AI which has no relationship to truth.”

“We’re the weakest link in the chain,” she says of local journalists in the UK. “Overstretched, under-resourced, the last to receive any new tech.” Among her colleagues and throughout the field, she estimates about 10 per cent are “super-interested” in that tech, another 20 per cent just don’t want to know, and the rest constitute a middle ground “who struggle to get their heads around it”.

Here’s another irony, if you like: any given online article about machine-learning may be drawing upon the power of those very machines. Smooth, opaque, and expensive processor units in the corners of better-resourced newsrooms are even now harvesting material from searchable documents, surveilling social media for trending keywords, submitting headlines and photo captions, arranging statistical information into simple bulletins, moderating user comments and counting off how much a non-subscriber can read before hitting the paywall.


AI arms race

The current focus of such operations, says Jones, tends to be “efficiency savings”. “In theory, letting AI do the heavy lifting frees up journalists to do more creative or investigative work. But ultimately these are management decisions, whether you reallocate those resources in a productive way, or just automate and save the cost of labour.”

So far, Jones has seen plenty of evidence for AI taking over certain newsroom tasks, but not much for the resulting empowerment (or continued employment) of living, breathing staff. There are also abiding questions of veracity, transparency, integrity. AI systems are getting ever-better, ever-faster at flagging up “fake news” even as they keep upgrading their capacity to fabricate and manipulate digital media. “There’s a bit of a race on to improve the tech on each side,” says Jones. “Goodies and baddies trying to outgun each other.”

“But obviously we are seeing the pollution of the information ecosystem with content produced by generative AI which has no relationship to truth, because there is no such commitment built into it. Even content farm material produced just for clicks, with no malicious intent, can be very difficult for any journalist to parse, because it’s often presented in formats that mimic authoritative content, like academic reports or white papers.”

The appearance of truth

It’s worth asking whether a machine that can learn to synthesise human faces, voices, thought processes and creative practices could also internalise our abstractions – our aspirations toward truth itself.

“I don’t think there’s any realistic prospect of judges being replaced by AI.”

“The word ‘data’ is indeed very similar to the word ‘fact’,” says Vassilis Galanos, a Teaching Fellow at the University of Edinburgh’s School of Social and Political Science. “It literally means ‘givens’, as in a truth taken for granted. If you can agree on a certain version of truth to be extracted from your data, then you might be able to instruct a machine to imitate that process and produce a form of abstracted truth that looks as if a human constructed it.”

Consider the auto-complete feature of Google’s search system, says Galanos, which will make various suggestions for finishing your line of enquiry – some quite reasonable, others pretty wild, but all drawn from non-transparent statistical processes that can extend or even amplify social biases. “Because they are electronic, and carry the authority of ‘data objectivity’, derived from ‘collective intelligence’, they are taken as believable answers.”

But there are glitches in our inputs and inferences, “as we tend to make vast generalisations, associate irrelevant phenomena, or serve ulterior motives”. And there is a world of difference between trusting the accuracy of a pocket calculator and expecting a chatbot to make a “fixed truthful statement [on the basis of] limited sources and methodologies”.


Don't panic

Galanos’s PhD research into past and present expectations of AI has made them something of a historian of machine-learning – first pioneered here at the University of Edinburgh by Professor Donald Michie and his co-founders of the AI lab on Hope Park Square circa 1963. A later alumnus, Geoffrey Hinton, received his own PhD for early research into artificial neural networks modelled on the human brain, and Hinton’s subsequent eminence in the field has added drama and gravity to his apparent change of heart: he is foremost among the experts now publicly alarmed by the quickening, menacing capacities of what they have wrought.

Galanos is mildly sardonic about “the heroic image of the ‘responsible scientist’ who offers the poison together with the remedy”. “Many popular narratives have been shaped by AI specialists themselves, who might have exaggerated its beneficial or harmful potential so their field receives more attention.” (They note that another pioneering cognitive/computer scientist, Marvin Minsky, advised Stanley Kubrick on creating the sentient computer HAL for 2001: A Space Odyssey.)

“I’m sorry Dave, I can’t do that”. When AI goes wrong in 2001: A Space Odyssey

Geoffrey Hinton’s original, statistical approach was long considered “doomed to fail”, says Galanos, for lack of sufficient data, until the ascendant internet supplied a new explosive yield. Now, to hear Hinton tell it, those neural networks seem almost doomed to succeed, and destroy us in the process. Galanos, for their part, is not so panicked.

“I don’t want to sound pretentiously brave, but nothing actually frightens me with AI per se. We’ve got many reasons to advance a sober, historical apprehension of these developments and build defence mechanisms.” As for the AI regulation that Big Tech figures like OpenAI chief executive Sam Altman are now lobbying for, they’re inclined to believe it will only work if tailored to context – the military, or healthcare, present their own specific dangers and demands.


Compete where humanity is strongest

When it comes to the legal system, says Burkhard Schafer of Edinburgh Law School, “I don’t think there’s any realistic prospect of judges being replaced by AI.” As a Professor of Computational Legal Theory, he feels the judiciary has a reasonably clear understanding of what the machines can, can’t, and shouldn’t be allowed to do. They could scan the relevant statutes, precedents and evidence to reach a valid verdict, and we might uphold this result as the truth of the matter, but “correctness” alone does not deliver justice.

“In law, the process matters as much as the outcome. We give the accused a chance to speak, give the parties a stake. We don’t just want a decision, we want to see the reasoning behind it.” Schafer does not dispute that certain tasks and jobs within the profession (many notary services, for example) will soon be lost to automation.

Sam Altman, CEO of OpenAI, testifies in a US Senate committee hearing on rules for artificial intelligence in May 2023


“Live with it,” he says. “It happened to the weavers, it will happen to you. Don’t compete with AI where it is strongest, compete where you, as a human, are strongest.” Instead of testing law students on rote memorisation of text that databases have already made instantly retrievable, Schafer’s optimal future lies in teaching applied empathy and psychology – interview skills to help traumatised clients, storytelling skills to make compelling cases, the legal equivalent of better bedside manner. “Areas where AI is bad and always will be.”

Where AI is good, of course, and will only get more so, is falsifying digital material. The fear of fabrication is mitigated, says Schafer, by tools that trace the sources and track the chain of custody for any given file. His bigger concern is that even authenticated evidence may be subject to suspicion. “Judges, juries, laypeople have all heard about deepfakes, and seen how convincing they can be. They might doubt everything as a result, even reliable stuff.

“The potential for total deniability is so much more worrying for lawyers, and journalists, than the danger of the technology itself.” Another Marvin Gaye song comes to mind here – a lyric that still sounds true enough, though its anecdotal note of caution may yet bear statistical correction: “People say believe half of what you see, son, and none of what you hear.”

Picture credits: Ed Sheeran - Alexi J. Rosenfeld/Getty; typewriter & robot with scales - Getty; 2001 - George Rinhart/Corbis via Getty; Sam Altman - Win McNamee/Getty