More on AI Hallucinations…

The mainstream adoption of AI continues to reveal the precarious balance between its benefits and its pitfalls.

Yes, AI tools can reduce the time it takes to research information, to draft documents and to complete repetitive tasks.

But AI is not so good at navigating subtle nuances, interpreting specific context or understanding satire or irony. In short, AI cannot “read the room” based on a few prompts and a collection of databases.

And then there is the issue of copyright licensing and other IP rights associated with the original content that large language models are trained on.

One of the biggest challenges to AI’s credibility is the frequent generation of “hallucinations” – false or misleading results that can populate even the most benign of search queries. I have commented previously on whether these errors are deliberate mistakes, an attempt at risk limitation (disclaimers), or a way of training AI tools on human users (“Spot the deliberate mistake!”). Or perhaps they are a get-out clause if we are foolish enough to rely on a dodgy AI summary!

With the proliferation of AI-generated results (“overviews”) in basic search queries, there is a tendency for AI tools to conflate or synthesize multiple sources and perspectives into a single “true” definition – often without authority or verified citations.

A recent example involved a senior criminal barrister in Australia who submitted AI-generated fake case citations and imaginary speeches in support of a client’s case.

Leaving aside the blatant dereliction of professional standards and the lapse in duty of care towards a client, this example of AI hallucinations within the context of legal proceedings is remarkable on a number of levels.

First, legal documents (statutes, law reports, secondary legislation, precedents, pleadings, contracts, witness statements, court transcripts, etc.) are highly structured and very specific as to their formal citations. (Having obtained an LLB degree, served as a paralegal for five years, and worked in legal publishing for more than ten years, I am very aware of the risks of an incorrect citation, or of using an inappropriate decision in support of a legal argument!)

Second, the legal profession has traditionally been at the forefront of adopting and implementing new technology. Whether through the early use of online searches for case reports, database creation for managing document precedents, practice and case management software, or decision trees to evaluate the potential success of client pleadings, lawyers have been at the vanguard of these innovations.

Third, a simple document review process (akin to a spell-check) should have exposed the erroneous case citations. The failure to do so reveals a level of laziness or disregard that in another profession (e.g., medicine, electrical work, engineering) could give rise to a claim for negligence. (There are several established resources in this field, so this apparent omission or oversight is frankly embarrassing: https://libraryguides.griffith.edu.au/Law/case-citators, https://guides.sl.nsw.gov.au/case_law/case-citators, https://deakin.libguides.com/case-law/case-citators)
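Purely by way of illustration, here is a minimal sketch in Python of what such a check might look like. The citation pattern and the hard-coded “verified” list are my own assumptions for the sketch – in practice you would query one of the case citators linked above rather than a local set.

```python
import re

# A citation "spell-check" needs two things: a pattern that finds
# citation-like strings, and a trusted source to verify them against.
# The regex matches Australian medium-neutral citations such as
# "[2021] HCA 12"; the hard-coded set below is a stand-in (assumption)
# for a lookup against a real case citator service.

CITATION_PATTERN = re.compile(r"\[\d{4}\]\s+[A-Z][A-Za-z]*\s+\d+")

VERIFIED_CITATIONS = {"[2021] HCA 12"}  # hypothetical verified database

def flag_unverified_citations(text: str) -> list[str]:
    """Return citation-like strings that cannot be verified."""
    found = CITATION_PATTERN.findall(text)
    return [c for c in found if c not in VERIFIED_CITATIONS]

if __name__ == "__main__":
    submission = (
        "As held in Smith v Jones [2021] HCA 12 and applied in "
        "Doe v Roe [2023] VSC 999, the principle is settled."
    )
    for citation in flag_unverified_citations(submission):
        print(f"WARNING: could not verify citation: {citation}")
```

Even a toy check like this would flag a citation that simply does not exist in any citator – which is the point: the review step is cheap, and skipping it is a choice.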

In short, as we continue to rely on AI tools, unless we apply due diligence to these applications and remain vigilant to their fallibility, we use them at our peril.

 

Whose side is AI on?

At the risk of coming across as some sort of Luddite, I think recent commentary on Artificial Intelligence shows it is only natural to have concerns and misgivings about its rapid development and widespread deployment. Of course, at its heart, it’s just another technology at our disposal – but by its very definition, generative AI is not passive, and it is likely to impact all areas of our lives, whether we invite it in or not.

Over the next few weeks, I will be discussing some non-technical themes relating to AI – creativity and AI, legal implications of AI, and form over substance when it comes to AI itself.

To start with, these are a few of the questions that I have been mulling over:

– Is AI working for us, as a tool that we control and manage? Or is AI working with us, in a partnership of equals? Or, more likely, is AI working against us, in the sense that it is happening to us, whether we like it or not, let alone whether we are actually aware of it?

– Is AI being wielded by a bunch of tech bros, who feed it with all their own prejudices, unconscious bias and cognitive limitations?

– Who decides what the Large Language Models (LLMs) that power AI are trained on?

– How does AI get permission to create derived content from our own Intellectual Property? Even if our content is on the web, being “publicly available” is not the same as being “in the public domain”.

– Who is responsible for what AI publishes, and are AI agents accountable for their actions? In the event of false, incorrect, misleading or inappropriate content created by AI, how do we get to clarify the record, or seek a right of reply?

– Why are AI tools adding ever more caveats? (“This is not financial advice, this is not to be relied on in a court of law, this is only based on information available as at a certain point in time, this is not a recommendation, etc.”) And is this only going to increase, as in the recent example of changes to Google’s AI-generated search results? (But really, do we need to be told that eating rocks or adding glue to pizza are bad ideas?)

– From my own experience, tools like ChatGPT return “deliberate” factual errors. Why? Is it to keep us on our toes (“Gotcha!”)? Is it to use our responses (or lack thereof) to train the model to be more accurate? Is it to underline the caveat emptor principle (“What, you relied on Otter to write your college essay? What were you thinking?”)? Or is it to counter plagiarism (“You could only have got that false information from our AI engine”)? If you think the latter is far-fetched, I refer you to the notion of “trap streets” in maps and directories.

– Should AI tools contain better attribution (sources and acknowledgments) in their results? Should they disclose the list of “ingredients” used (like food labelling)? Should they provide verifiable citations for their references? (It’s an idea that is gaining some attention.)

– Finally, the increased use of cloud-based services and crowd-sourced content (not just in AI tools) means that there is the potential for overreach in the end-user licensing agreements of ChatGPT, Otter, Adobe Firefly, Gemini, Midjourney, etc. Only recently, Adobe had to clarify the latest changes to its service agreement in response to some social media criticism.

Next week: AI and the Human Factor

AI vs IP

Can Artificial Intelligence software claim copyright in any work that was created using its algorithms?

The short answer is “no”, since only humans can establish copyright in original creative works. Copyright can be assigned to a company or trust, or a work can be released under various forms of creative commons licence, but there still needs to be a human author behind the copyright material. And when copyright eventually lapses, the work becomes part of the public domain.

However, the extent to which a human author can claim copyright in a work that has been created with the help of AI is now being challenged. A recent case in the USA determined that the author of a graphic novel, which included images created using Midjourney, cannot claim copyright in those images. While it was accepted that the author devised the text and other prompts that the software used as the generative inputs, the output images themselves could not be the subject of copyright protection – meaning they are presumably either in the public domain, or they fall under some category of creative commons. This case also indicates that, in the USA at least, failing to declare the use of AI tools in a work when applying for copyright registration may result in a rejected application.

Does this decision mean that the people who write AI programs could claim copyright in works created using their software? Probably not – as this would imply that Microsoft could establish copyright in every novel written using Word, especially where its grammar and spelling tools were used.

On the other hand, programmers and software developers who use copyright material to train their models may need to obtain the relevant permission from the copyright holders (as would anyone who feeds copyright content into AI tools as prompts), unless they can claim exemptions under “fair dealing” or “fair use” provisions.

We’re still early in the lengthy process whereby copyright and other intellectual property laws are tested and re-calibrated in the wake of AI. Maybe the outcomes of future copyright cases will depend on whether you are Ed Sheeran or Robin Thicke….

Next week: Customer Experience vs Process Design

 

An open letter to American Express

Dear American Express,

I have been a loyal customer of yours for around 20 years. (Likewise my significant other.)

I typically pay my monthly statements on time and in full.

I’ve opted for paperless statements.

I pay my annual membership fee.

I even accept the fact that 7–8 times out of 10, I get charged merchant fees for paying by Amex – and in most cases I incur much higher fees than with other credit or debit cards.

So, I am very surprised I have not been invited to attend your pop-up Open Air Cinema in Melbourne’s Yarra Park – especially as I live within walking distance.

It’s not like you don’t try to market other offers to me – mostly invitations to increase my credit limit, transfer outstanding balances from other credit cards, or “enjoy” lower interest rates on one-off purchases.

The lack of any offer in relation to the Open Air Cinema just confirms my suspicion that, like most financial institutions, you do not really know your customers.

My point is that you must have so much data on my spending patterns and preferences, from which you should be able to glean my interests in film, the arts and entertainment.

A perfect candidate for a pop-up cinema!

Next week: Life After the Royal Commission – Be Careful What You Wish For….