More on AI Hallucinations…

The mainstream adoption of AI continues to reveal the precarious balance between the benefits and the pitfalls.

Yes, AI tools can reduce the time it takes to research information, to draft documents and to complete repetitive tasks.

But AI is not so good at navigating subtle nuance, interpreting specific context, or understanding satire and irony. In short, AI cannot “read the room” based on a few prompts and a collection of databases.

And then there is the issue of copyright licensing and other IP rights associated with the original content that large language models are trained on.

One of the biggest challenges to AI’s credibility is the frequent generation of “hallucinations” – false or misleading results that can populate even the most benign of search queries. I have commented previously on whether these errors are deliberate mistakes, an attempt at risk limitation (disclaimers), or a way of training AI tools on human users (“Spot the deliberate mistake!”). Or perhaps a get-out clause if we are stupid enough to rely on a dodgy AI summary!

With the proliferation of AI-generated results (“overviews”) in basic search queries, there is a tendency for AI tools to conflate or synthesize multiple sources and perspectives into a single “true” definition – often without authority or verified citations.

A recent example was a senior criminal barrister in Australia who submitted fake case citations and imaginary speeches in support of a client’s case.

Leaving aside the blatant dereliction of professional standards and the lapse in duty of care towards a client, this example of AI hallucinations within the context of legal proceedings is remarkable on a number of levels.

First, legal documents (statutes, law reports, secondary legislation, precedents, pleadings, contracts, witness statements, court transcripts, etc.) are highly structured and very specific as to their formal citations. (Having obtained an LLB degree, served as a paralegal for 5 years, and worked in legal publishing for more than 10 years, I am very aware of the risks of an incorrect citation or use of an inappropriate decision in support of a legal argument!!!)

Second, the legal profession has traditionally been at the forefront in the adoption and implementation of new technology. Whether it is the early use of online searches for case reports, the creation of databases for managing document precedents, the use of practice and case management software, or the development of decision-trees to evaluate the potential success of client pleadings, lawyers have been at the vanguard of these innovations.

Third, a simple document review process (akin to a spell-check) should have exposed the erroneous case citations; a rough sketch of such a check follows below. The failure to do so reveals a level of laziness or disregard that in another profession (e.g., medical, electrical, engineering) could give rise to a claim for negligence. (There are several established resources in this field, so this apparent omission or oversight is frankly embarrassing: https://libraryguides.griffith.edu.au/Law/case-citators, https://guides.sl.nsw.gov.au/case_law/case-citators, https://deakin.libguides.com/case-law/case-citators)
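
To make the point concrete, here is a rough, purely illustrative sketch in Python of what such an automated review might look like: a regular expression for Australian medium-neutral citations, checked against a hypothetical list of citations that a human has already confirmed in a case citator. Every name, pattern and citation in it is invented for illustration; a real workflow would verify each citation against an authorised citator such as the services linked above.

# A purely illustrative sketch (not any firm's actual tooling): scan a draft for
# Australian-style medium-neutral citations and flag any that have not already
# been verified by a human against an authorised case citator.
# The pattern, the "verified" list and the sample draft are all invented examples.
import re

# Medium-neutral citation shape: Party v Party [year] COURT number
CITATION_PATTERN = re.compile(
    r"[A-Z][A-Za-z'\-]*(?: [A-Z][A-Za-z'\-]*)*"   # first party name
    r" v "
    r"[A-Z][A-Za-z'\-]*(?: [A-Z][A-Za-z'\-]*)*"   # second party name
    r" \[\d{4}\] [A-Z]{2,6} \d+"                  # [year], court abbreviation, judgment number
)

# Hypothetical list of citations a person has already confirmed in a citator.
VERIFIED_CITATIONS = {
    "Smith v Jones [2019] HCA 23",
}

def flag_unverified_citations(draft_text: str) -> list[str]:
    """Return citations found in the draft that are not on the verified list."""
    return [c for c in CITATION_PATTERN.findall(draft_text) if c not in VERIFIED_CITATIONS]

if __name__ == "__main__":
    draft = ("The principle was settled in Smith v Jones [2019] HCA 23 and restated "
             "in Carter v Nguyen [2021] NSWCA 404.")  # the second citation is invented
    for citation in flag_unverified_citations(draft):
        print(f"CHECK MANUALLY: {citation} is not on the verified list")

Even a crude filter like this would have forced a human to look up the fabricated authorities before they reached a courtroom.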

In short, as we continue to rely on AI tools, unless we apply due diligence to these applications and remain vigilant to their fallibility, we use them at our peril.

 

When robots say “Humans do not compute…”

There’s a memorable scene in John Carpenter’s 1970s sci-fi classic, “Dark Star”, where an astronaut tries to use Cartesian logic to defuse a nuclear bomb. The bomb is equipped with artificial intelligence and is programmed to detonate via a timer once its circuits have been activated. Due to a circuit malfunction, the bomb believes it has been triggered, even though it is still attached to the spaceship and cannot be mechanically released. Refuting the arguments against its existence, the bomb responds in kind and simply states: “I think, therefore I am.”

Dark Star’s Bomb 20: “I think, therefore I am…”

The notion of artificial intelligence both thrills us and fills us with dread: on the one hand, AI can help us (by doing a lot of routine thinking and mundane processing); on the other, it can make us the subjects of its own ill-will (think of HAL 9000 in “2001: A Space Odyssey”, or “Robocop”, or “Terminator”, or any similar dystopian sci-fi story).

The current trend for smarter data processing, fuelled by AI tools such as machine learning, semantic search, sentiment analysis and social graph models, is making a world of driverless cars, robo-advice, the Internet of Things and behaviour prediction a reality. But there are concerns that we will abnegate our decision-making (and ultimately, our individual moral responsibility) to computers; that more and more jobs will be lost to robots; and that we will end up being dehumanized if we no longer have to think for ourselves. Worse still, if our human behaviours cease making sense to those very same computers that we have programmed to learn how to think for us, then our demise is pre-determined.

The irony is that, if AI becomes as smart as we might imagine, we will impart to the robots a very human fallibility: namely, the tendency to over-analyse the data (rather than examine the actual evidence before us). As Brian Aldiss wrote in his 1960 short story, “The Sterile Millennia”, when robots get together:

“…they suffer from a trouble which sometimes afflicts human gatherings: a tendency to show off their logic at the expense of the object of the meeting.”

Long live logic, but longer still live common sense!

Next week: 101 #Startup Pitches – What have we learned?