A few days ago, I posted a blog that was largely written by AI. I simply entered a few links, suggested some themes, and added a prompt to “write a blog about AI and fakes” – ChatGPT did the rest.
Overall, ChatGPT does a pretty good job of summarising content and synthesising logical conclusions from what it has been fed. However, quite apart from the known risk of hallucinations, AI tools such as ChatGPT do not give specific citations for their external sources (neither those included in the prompts, nor the data on which their LLMs were trained); and they can leave out important information, although how such omissions are made is far from clear.
As promised, here are some clarifications that ChatGPT did not provide, in reverse order of their appearance in my earlier blog.
First, the “Conclusion” was 100% the work of ChatGPT. I wanted to remain as objective as possible, so I did not prompt ChatGPT to come to any specific conclusion, nor did I include any implicit suggestion as to the sentiment it should adopt. As such, this section is neutral, objective, balanced and logical, as was the introductory section ChatGPT generated.
For the comments on “AI and Copyright”, I entered the phrase “OpenAI and authors”, with links to articles from ABC News and Reuters. Again, apart from omitting specific citations to the source content, the ChatGPT output is balanced, even though those inputs might be construed as negative towards AI in general and OpenAI in particular. But this raises a question about ChatGPT’s own self-awareness: does it “know” or understand that it is a product of OpenAI?
The section on “Legislative Actions on Deep Fakes” is reasonable, given that the only guidance I provided was the phrase “Fake images” and links to two Guardian articles (here and here). However, one of those articles details a specific legal case and allegations of criminal activity, which is quite negative for AI. I doubt that ChatGPT fully understands the principles of sub judice, but perhaps it used its discretion (or bias) to omit the details of that article?
I was reasonably impressed with how ChatGPT compiled the section on “Celebrity Persona Rights”, especially the summary it extracted from the Scientific American article. The phrase “persona rights” is lifted from the URL I supplied, and the only specific prompt I gave was “Scarlett Johansson”. Again, ChatGPT was happy to include potentially negative references to OpenAI. However, it did not directly engage with an extract I had used from the source article, which provides more context:
“Most discussions about copyright and AI focus […] on whether and how copyrighted material can be used to train the technology, and whether new material that it produces can be copyrighted.”
But you could argue that ChatGPT made an equivalent reference in the section on “AI and Copyright”.
More significantly, the original article (and dispute) is about AI-generated voice similarity, whereas ChatGPT refers to “likeness”, which I would usually interpret as visual similarity.
The sections on “Mozilla’s Campaign Against Misinformation” and “AI in Indian Elections” are both much weaker by comparison. I used the prompt “Mozilla”, with links to the specific WhatsApp campaign as well as a list of other Mozilla campaigns; and I used “Indian election” as a specific prompt, along with a relevant news article. First, the Mozilla campaigns include one about AI transparency, which ChatGPT does not address here or in the section on copyright. Perhaps it decided that campaign was too critical of OpenAI et al? Second, the ABC article mentions a deep fake video of a deceased Indian politician, which I would have thought merited a mention by ChatGPT.
Finally, the section on “Dylan and Rodin: A Fabricated Encounter” is probably the most problematic. I used the prompt “Dylan and Rodin….”, and links to two recent articles by Dave Haslam: one discussing an ongoing fake narrative about “Bob Dylan photographed playing chess in Paris”, and the other about a fabricated, AI-generated “photograph of Auguste Rodin and Camille Claudel”. (I also included a link to the latter fake, with the prompt “Image”.) Somehow, ChatGPT conflated these two topics and erroneously concluded that this was a reference to a false account of Dylan meeting Rodin in France. It simply reproduced the fake photo (which I chose to omit from my published blog this week) and left out any mention of Claudel. I wonder whether this is because Haslam is not as well indexed in ChatGPT’s training data as the incorrect and misleading social media posts, or because his articles are too critical of AI (and of those who replicate its errors and perpetuate its myths) and too nuanced in their arguments. And was the failure to mention Claudel an oversight, or something more insidious?
I know that ChatGPT and other AI tools are trying to protect themselves with caveat emptor-style disclaimers, and that no-one should rely on any AI output unless they are confident of the results (or are indifferent or negligent as to the potential for harm or mischief). But the Dylan/Rodin example illustrates the inherent risks we still face as end users.