A postscript on AI

AI tools and related search engines should be able to recognise when a factual reference is incorrect, or indeed whether an individual (especially someone notable) is living or dead. In an interesting postscript to my recent series on AI, I came across this article – written by someone whom Google declared to be no longer with us.

Glaring errors like these demand that tech companies (as well as the publishers and media outlets who increasingly rely on these tools) take more seriously the individual’s right of reply, the right to correct or amend the record, and the right to privacy and to be forgotten on the internet.

As I commented in my series of articles, AI tools such as ChatGPT (and, it seems, Google Search) can easily conflate separate facts into false statements. Another reason to be on our guard as we embrace (and rely on) these new applications.

Next week: Bad Sports

Did you spot the “deliberate” mistake?

A few days ago, I posted a blog that was largely written by AI. I simply entered a few links, suggested some themes, and added a prompt to “write a blog about AI and fakes” – ChatGPT did the rest.

Overall, ChatGPT does a pretty good job of summarising content and synthesising logical conclusions based on what it has been fed. However, apart from the known risk of hallucinations, AI tools such as ChatGPT do not give specific citations for their external sources (neither those included in the prompts, nor any data on which their LLMs were trained); and they can leave out important information, although how they decide what to omit is far from clear.

As promised, here are some clarifications that ChatGPT did not provide, in reverse order of their appearance in my earlier blog.

First, the “Conclusion” was 100% the work of ChatGPT. I wanted to remain as objective as possible, so I did not prompt ChatGPT to come to any specific conclusion, nor did I include any implicit suggestion as to the sentiment it should adopt. As such, this section is neutral, objective, balanced and logical, as was the introductory section ChatGPT generated.

For the comments on “AI and Copyright”, I entered the phrase “OpenAI and authors”, with links to articles from ABC News and Reuters. Again, apart from omitting specific citations to the source content, the ChatGPT output is balanced, even though those inputs might be construed as negative towards AI in general and OpenAI in particular. But this raises the question of ChatGPT’s own self-awareness: does it “know” or understand that it is a product of OpenAI?

The section on “Legislative Actions on Deep Fakes” is reasonable, given that the only guidance I provided was the phrase “Fake images” and links to two Guardian articles (here and here). However, one of those articles details a specific legal case, and allegations of criminal activity – quite negative for AI. I doubt that ChatGPT fully understands the principle of sub judice, but perhaps it exercised some discretion (or bias) in omitting the details of this article?

I was reasonably impressed with how ChatGPT compiled the section on “Celebrity Persona Rights”, especially the summary it extracted from the Scientific American article. The phrase “persona rights” is lifted from the URL I supplied, and the only specific prompt I gave was “Scarlett Johansson”. Again, ChatGPT was happy to include potentially negative references to OpenAI. However, ChatGPT did not directly engage with an extract I had taken from the source article, which provides more context:

“Most discussions about copyright and AI focus […] on whether and how copyrighted material can be used to train the technology, and whether new material that it produces can be copyrighted.”

But you could argue that ChatGPT made an equivalent reference in the section on “AI and Copyright”.

More significantly, the original article (and dispute) is about AI-generated voice similarity, whereas ChatGPT refers to “likeness”, which I would usually interpret as visual similarity.

The references to “Mozilla’s Campaign Against Misinformation” and “AI in Indian Elections” are both much weaker by comparison. I used the prompt “Mozilla”, with links to the specific WhatsApp campaign as well as a list of other Mozilla campaigns; and I used “Indian election” as a specific prompt, along with a relevant news article. First, the Mozilla campaigns include one about AI transparency, which ChatGPT does not address either here or in the section on copyright – perhaps it decided the campaign was too critical of OpenAI et al? Second, the ABC article mentions a deep fake video of a deceased Indian politician – which I would have thought merited a mention by ChatGPT.

Finally, the section on “Dylan and Rodin: A Fabricated Encounter” is probably the most problematic. I used the prompt “Dylan and Rodin….”, with links to two recent articles by Dave Haslam – one that discusses an ongoing fake narrative about “Bob Dylan photographed playing chess in Paris”, and the other about a fabricated, AI-generated “photograph of Auguste Rodin and Camille Claudel”. (I also included a link to the latter fake, with the prompt “Image”.) Somehow, ChatGPT conflated these two topics, and erroneously concluded that this was a reference to a false account of Dylan meeting Rodin in France. ChatGPT simply reproduced the fake photo (which I chose to omit from my published blog this week), and left out any mention of Claudel. I wonder whether this is because Haslam is not as well indexed in ChatGPT’s database as the incorrect and misleading social media posts, or because his articles are too critical of AI (and of those who replicate its errors and perpetuate its myths) and too nuanced in their arguments. And was the failure to mention Claudel an oversight, or something more insidious?

I know that ChatGPT and other AI tools try to protect themselves with caveat emptor-style disclaimers, and that no-one should rely on any AI output unless they are confident of the results (or are indifferent or negligent as to the potential for harm or mischief), but the Dylan/Rodin example illustrates the inherent risks we still face as end users.

AI hallucinations and the law

Several years ago, I blogged about the role of technology within the legal profession. One development I noted was the nascent use of AI to help test the merits of a case before it goes to trial, and to assess the likelihood of winning. Not only might this prevent potentially frivolous matters from coming to trial, it would also reduce court time and legal costs.

More recently, there has been some caution (if not out-and-out scepticism) about the efficacy of using AI in support of legal research and case preparation. The current debate has been triggered by an academic paper from Stanford University that compared leading legal research tools (which claim to have been “enhanced” by AI) against ChatGPT. The results were sobering, with a staggering number of apparent “hallucinations” being generated, even by the specialist legal research tools. AI hallucinations are not unique to legal research tools, nor to the AI tools and the Large Language Models (LLMs) they are trained on, as Stanford has previously reported. While the academic paper is awaiting formal publication, there has been some to-and-fro between the research authors and at least one of the named legal tool providers. The latter’s rebuttal rightly points out that any AI tool (especially a legal research and professional practice platform) has to be fit for purpose, and trained on appropriate data.

Aside from the Stanford research, some lawyers have been found to have relied upon AI tools such as ChatGPT and Google Bard to draft their submissions, only to discover that the results cited non-existent precedents and cases – including in at least one high-profile prosecution. The latest research suggests that not only do AI tools “imagine” fictitious case reports, they can also fail to spot “bad” law (e.g., cases that have been overturned, or laws that have been repealed), offer inappropriate advice, or provide inaccurate legal interpretations.

What if AI hallucinations resulted in the generation of invidious content about a living person – content which, in many circumstances, would be deemed libel or slander? If a series of AI prompts gives rise to libellous content, who would be held responsible? Can AI itself be sued for libel? (Of course, under common law it is impossible to libel the dead, as only a living person can sue for libel.)

I found an interesting discussion of this topic here, which concludes that while AI tools such as ChatGPT may appear to have some degree of autonomy (depending on their programming and training), they certainly don’t have true agency, and their output cannot in itself be regarded in the same way as other forms of speech or text when it comes to legal liabilities or protections. The article identifies three groups of actors who might be deemed responsible for AI results: AI software developers (companies like OpenAI), content hosts (such as search engines), and publishers (authors, journalists, news networks). It concludes that, of the three, publishers, authors and journalists face the most responsibility and accountability for their content, even if they claim that “AI said this was true”.

Interestingly, the above discussion referenced news from early 2023 that a mayor in Australia was planning to sue OpenAI (the owners of ChatGPT) for defamation unless they corrected the record about false claims made about him. Thankfully, OpenAI appear to have heeded the letter of concern, and the mayor has since dropped his case (or the false claim was simply over-written by a subsequent version of ChatGPT). However, the original Reuters article linked above, which I sourced for this blog, makes no mention of the subsequent discontinuation, either as a footnote or as an update – which just goes to show how complex it is to correct the record: the reference to his initial claim is still valid (it happened), even though it did not proceed (he chose not to pursue it). Even actual criminal convictions can be deemed “spent” after a given period of time, such that they no longer appear on an individual’s criminal record. By contrast, someone found not guilty of a crime (or, in the mayor’s case, falsely labelled with a conviction) cannot guarantee that references to the alleged events will be expunged from the internet, even with the evolution of the “right to be forgotten”.

Perhaps we’ll need to train AI tools to retrospectively correct or delete any false information about us; conversely, though, AI is accelerating the proliferation of fake content – benign, humorous or malicious – thus setting the scene for the next blog in this series.

Next week: AI and Deep (and not so deep…) Fakes