More on AI Hallucinations…

The mainstream adoption of AI continues to reveal the precarious balance between the benefits and the pitfalls.

Yes, AI tools can reduce the time it takes to research information, to draft documents and to complete repetitive tasks.

But AI is not so good at navigating subtle nuance, interpreting specific context, or understanding satire and irony. In short, AI cannot “read the room” based on a few prompts and a collection of databases.

And then there is the issue of copyright licensing and other IP rights associated with the original content that large language models are trained on.

One of the biggest challenges to AI’s credibility is the frequent generation of “hallucinations” – false or misleading results that can appear in even the most benign of search queries. I have commented previously on whether these errors are deliberate mistakes, an attempt at risk limitation (disclaimers), or a way of training AI tools on human users (“Spot the deliberate mistake!”) – or a get-out clause if we are stupid enough to rely on a dodgy AI summary!

With the proliferation of AI-generated results (“overviews”) in basic search queries, there is a tendency for AI tools to conflate or synthesize multiple sources and perspectives into a single “true” definition – often without authority or verified citations.

A recent example was a senior criminal barrister in Australia who submitted fake case citations and imaginary speeches in support of a client’s case.

Leaving aside the blatant dereliction of professional standards and the lapse in duty of care towards a client, this example of AI hallucinations within the context of legal proceedings is remarkable on a number of levels.

First, legal documents (statutes, law reports, secondary legislation, precedents, pleadings, contracts, witness statements, court transcripts, etc.) are highly structured and very specific as to their formal citations. (Having obtained an LLB degree, served as a paralegal for 5 years, and worked in legal publishing for more than 10 years, I am very aware of the risks of an incorrect citation or of using an inappropriate decision in support of a legal argument!)

Second, the legal profession has traditionally been at the forefront in the adoption and implementation of new technology. Whether in the early use of online searches for case reports, the creation of databases for managing document precedents, the use of practice and case management software, or the development of decision-trees to evaluate the potential success of client pleadings, lawyers have been at the vanguard of these innovations.

Third, a simple document review process (akin to a spell-check) should have exposed the erroneous case citations. The failure to do so reveals a level of laziness or disregard that in another profession (e.g., medical, electrical, engineering) could give rise to a claim for negligence. (There are several established resources in this field, so this apparent omission or oversight is frankly embarrassing: https://libraryguides.griffith.edu.au/Law/case-citators, https://guides.sl.nsw.gov.au/case_law/case-citators, https://deakin.libguides.com/case-law/case-citators)
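To illustrate what such a “spell-check for citations” might look like, here is a minimal sketch in Python. It uses a deliberately simplified pattern for Australian medium-neutral citations (party names, year in square brackets, court abbreviation, judgment number); the pattern, the helper name, and the sample citation are all illustrative assumptions, not a complete grammar of legal citation formats.

```python
import re

# Simplified pattern for a medium-neutral citation, e.g. "Smith v Jones [2020] HCA 12".
# This is an illustrative sketch only; real citation formats have many more variants.
NEUTRAL_CITATION = re.compile(
    r"[A-Z][\w'\-]+(?:\s+\w+)*\s+v\s+[A-Z][\w'\-]+(?:\s+\w+)*"  # party names
    r"\s+\[(?P<year>\d{4})\]\s+"                                # year in brackets
    r"(?P<court>[A-Z]{2,6})\s+"                                 # court abbreviation
    r"(?P<number>\d+)"                                          # judgment number
)

def extract_citations(text: str) -> list[dict]:
    """Return the well-formed citations found in a document, with their parts."""
    return [
        {
            "citation": m.group(0),
            "year": m.group("year"),
            "court": m.group("court"),
            "number": m.group("number"),
        }
        for m in NEUTRAL_CITATION.finditer(text)
    ]

# Hypothetical example document:
sample = "Smith v Jones [2020] HCA 12 was cited in support of the argument."
print(extract_citations(sample))
```

Note that a format check like this only flags malformed citations. Each well-formed match would still need to be looked up in a case citator (such as the resources linked above) to confirm the case actually exists and says what it is claimed to say – precisely the step the barrister skipped.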

In short, as we continue to rely on AI tools, unless we apply due diligence to these applications or remain vigilant to their fallibility, we use them at our peril.

 

AI and Deep (and not so deep…) Fakes

The New York Times recently posted a quiz: “Can you tell the difference between a photograph and an image created by AI?”

The quiz presented a mix of actual photos and AI-generated content, and I correctly identified only 8 out of 10. My significant other claimed to have scored 10/10! In my defence, I correctly identified all of the AI images, but I mistook two authentic photos for fakes. Of the latter, one featured a group of famous people, most of whom I did not recognise, and the photo had been significantly cropped, removing much of the visual context (I also suspect it had been subject to some additional photo-shopping, given it was a publicity shot). The other real photo had been taken at such an unusual angle that it distorted the natural perspective, making some elements look wonky. (But maybe I’ve become more cynical or sceptical, and the more I know I am being exposed to AI-generated content, the more I tend to disbelieve what I see?)

How can we remain alert to AI deceptions, while at the same time recognizing and embracing the potential that this amazing technology has to offer?

Taking my lead from the New York Times article, the following blog has been created using ChatGPT. All I did was enter some brief subject headings (including the title), a series of internet links, and a prompt to turn it into a blog on AI and fakes. Can you tell what is true, what is hallucination, and what important information has been left out?

AI and Deep (and not so deep…) Fakes

Artificial Intelligence (AI) has revolutionized numerous sectors, bringing forth innovations that were once unimaginable. However, one of the more controversial applications of AI is in the creation of deep fakes. These are hyper-realistic digital falsifications created using AI, capable of mimicking real people’s appearances and voices. This technology has sparked debates over ethics, security, and the future of digital media. Let’s delve into some recent examples and the implications they bring.

Dylan and Rodin: A Fabricated Encounter

A striking example of AI’s capacity to blur reality is the supposed encounter between Bob Dylan and the sculptor Auguste Rodin. In an article on Substack by Dave Haslam, the scenario of Dylan visiting Rodin’s Parisian studio is meticulously crafted, yet it’s a fabrication made plausible by AI tools. This fake story exemplifies how AI can create convincingly realistic yet entirely fictional narratives, challenging our ability to discern truth from fiction.

AI in Indian Elections

AI’s role extends beyond entertainment into more critical areas such as political campaigns. In the 2024 Indian elections, AI-generated deep fakes have been used to spread misinformation and manipulate public opinion. An article by ABC News details how AI is disrupting electoral processes, raising concerns about the integrity of democratic systems.

Mozilla’s Campaign Against Misinformation

In response to the growing threat of AI-generated misinformation, organizations like Mozilla have launched campaigns to protect the integrity of elections. Mozilla’s efforts aim to hold platforms like WhatsApp accountable for the spread of fake news and deep fakes, urging them to implement stricter controls to safeguard democratic processes.

Celebrity Persona Rights

AI’s ability to replicate real people has also sparked legal disputes, particularly concerning persona rights. Scarlett Johansson’s recent dispute with OpenAI highlights these issues. Johansson’s concerns about her likeness being used without consent underscore the broader debate about the ethical use of AI. As noted by Scientific American, the conflict brings to light questions about how AI-generated content can respect individual rights and privacy.

Legislative Actions on Deep Fakes

Governments are beginning to take action against the misuse of AI-generated content. In Australia, new laws are being proposed to combat the creation and distribution of deep fake pornography and other harmful images. The Guardian reports on legislative efforts to protect individuals from the malicious use of AI, reflecting a growing recognition of the need for regulation.

AI and Copyright

The intersection of AI and copyright law is another area of intense debate. A lawsuit involving famous authors against OpenAI raises questions about how AI uses copyrighted material for training and whether the content it generates can itself be copyrighted. This legal battle, reported by ABC News and Reuters, could set important precedents for how intellectual property laws adapt to the age of AI.

Conclusion

As AI technology continues to advance, its applications in creating deep fakes present both opportunities and challenges. While it can be used creatively and constructively, it also poses significant risks to privacy, security, and the authenticity of information. The ongoing developments in legislation, ethical guidelines, and technological safeguards will be crucial in shaping the future landscape of AI-generated content.

So, did you spot the “deliberate” error(s)? And what information was missing? Answers will be posted later this week.

Next week: AI & Music