AI hallucinations and the law

Several years ago, I blogged about the role of technology within the legal profession. One development I noted was the nascent use of AI to help test the merits of a case before it goes to trial, and to assess the likelihood of winning. Not only might this prevent potentially frivolous matters from coming to trial, it would also reduce court time and legal costs.

More recently, there has been some caution (if not out-and-out scepticism) about the efficacy of using AI in support of legal research and case preparation. The current debate was triggered by an academic paper from Stanford University that compared leading legal research tools (which claim to have been “enhanced” by AI) with ChatGPT. The results were sobering, with a staggering number of apparent “hallucinations” being generated, even by the specialist legal research tools. AI hallucinations are not unique to legal research tools, nor to the underlying Large Language Models (LLMs) they are built on, as Stanford has previously reported. While the academic paper is awaiting formal publication, there has been some to-and-fro between the research authors and at least one of the named legal tools. The rebuttal rightly points out that any AI tool (especially a legal research and professional practice platform) has to be fit for purpose, and trained on appropriate data.

Aside from the Stanford research, some lawyers have been found to have relied upon AI tools such as ChatGPT and Google Bard to draft their submissions, only to discover that the results cited non-existent precedents and cases – including in at least one high-profile prosecution. The latest research suggests that not only do AI tools “imagine” fictitious case reports, they can also fail to spot “bad” law (e.g., cases that have been overturned, or laws that have been repealed), offer inappropriate advice, or provide inaccurate legal interpretations.

What if AI hallucinations resulted in the generation of invidious content about a living person – content which, in many circumstances, would be deemed libel or slander? If a series of AI prompts gives rise to libellous content, who would be held responsible? Can AI itself be sued for libel? (Of course, under common law, it is impossible to libel the dead, as only a living person can sue for libel.)

I found an interesting discussion of this topic here, which concludes that while AI tools such as ChatGPT may appear to have some degree of autonomy (depending on their programming and training), they certainly don’t have true agency, and their output cannot in itself be regarded in the same way as other forms of speech or text when it comes to legal liability or protection. The article identified three groups of actors who might be deemed responsible for AI output: AI software developers (companies like OpenAI), content hosts (such as search engines), and publishers (authors, journalists, news networks). It concluded that of the three, publishers face the most responsibility and accountability for their content, even if they claim that “AI said this was true”.

Interestingly, the above discussion referenced news from early 2023 that a mayor in Australia was planning to sue OpenAI (the owners of ChatGPT) for defamation unless they corrected the record about false claims made about him. Thankfully, OpenAI appear to have heeded the letter of concern, and the mayor has since dropped his case (or the false claim was simply over-written by a subsequent version of ChatGPT). However, the original Reuters link above, which I sourced for this blog, makes no mention of the subsequent discontinuation, either as a footnote or an update – which just goes to show how complex it is to correct the record, since the reference to his initial claim is still valid (it happened), even though it did not proceed (he chose not to pursue it). Even actual criminal convictions can be deemed “spent” after a given period of time, such that they no longer appear on an individual’s criminal record. By contrast, someone found not guilty of a crime (or, in the mayor’s case, falsely labelled with a conviction) cannot guarantee that references to the alleged events will be expunged from the internet, even with the evolution of the “right to be forgotten”.

Perhaps we’ll need to train AI tools to retrospectively correct or delete any false information about us; although conversely, AI is accelerating the proliferation of fake content – benign, humorous or malicious – thus setting the scene for the next blog in this series.

Next week: AI and Deep (and not so deep…) Fakes


AI and the Human Factor

Earlier this month, I went to the Melbourne premiere of “Eno”, a documentary by Gary Hustwit, which is described as the world’s first generative feature film. Each time the film is shown, the choice and sequencing of scenes is different – no two versions are ever the same. Some content may never be screened at all.

I’ll leave readers to explore the director’s rationale for this approach (and the implications for film-making, cinema and streaming). But during a Q&A following the screening, Hustwit was at pains to explain that this is NOT a film generated by AI. He was also guarded, refraining from revealing too much about the proprietary software and hardware system he co-developed to compile and present the film.

However, the director did want to stress that he didn’t simply tell an AI bot to scour the internet, scrape any content by, about or featuring Brian Eno, and then assemble it into a compilation of clips. This documentary is presented according to a series of rules-based algorithms, and is a content-led venture curated by its creator. Yes, he had to review hours and hours of archive footage from which to draw key themes, but he also had to shoot new interview footage of Eno that would help to frame the context and support the narrative, while avoiding a banal biopic or a series of talking heads. The result is a skilful balance between linear storytelling, intriguing juxtaposition, traditional interviews, critical analysis, and deep exploration of the subject. The point is, for all its powerful capabilities, AI could not have created this film. It needed to start with human elements: innate curiosity on the part of the director; intelligent and empathetic interaction between film-maker and subject; and expert judgement in editing the content – as well as an element of risk-taking in allowing the algorithm to make the final choices for each screened version.
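
As a purely hypothetical illustration of what “rules-based” sequencing might look like (Hustwit’s actual system is proprietary and undisclosed), here is a toy sketch in Python, in which the scene names, tags and rules are all invented: a pool of scenes, a constraint on which scene may follow which, and a random draw that makes every “screening” potentially different – and may leave some scenes unscreened.

```python
import random

# Toy illustration of rules-based scene sequencing – NOT Hustwit's actual
# system, which is proprietary; scene names, tags and rules are invented.
SCENES = [
    {"id": "interview_studio", "tags": {"interview"}},
    {"id": "archive_roxy_music", "tags": {"archive", "music"}},
    {"id": "oblique_strategies", "tags": {"process"}},
    {"id": "generative_music_demo", "tags": {"process", "music"}},
    {"id": "interview_garden", "tags": {"interview"}},
]

def allowed(prev, candidate):
    """Example rule: never screen two interview scenes back to back."""
    return not ("interview" in prev["tags"] and "interview" in candidate["tags"])

def build_screening(scenes, length=4, seed=None):
    """Assemble one 'screening': a rule-constrained random sequence.
    Scenes left in the pool are simply not shown this time, so no two
    screenings need ever be the same."""
    rng = random.Random(seed)
    pool = list(scenes)
    sequence = [pool.pop(rng.randrange(len(pool)))]
    while pool and len(sequence) < length:
        candidates = [s for s in pool if allowed(sequence[-1], s)]
        if not candidates:
            break  # no remaining scene satisfies the rules; end the cut early
        choice = rng.choice(candidates)
        pool.remove(choice)
        sequence.append(choice)
    return [s["id"] for s in sequence]

print(build_screening(SCENES))  # a different sequence (almost) every run
```

The creative judgement lives in choosing the scenes and writing the rules; the algorithm merely executes them – which is essentially the director’s distinction between curation and generation.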

That the subject of this documentary is Eno should not be surprising, either. He has a reputation for being a modern polymath, interested in science and technology as well as art. His use of Oblique Strategies in his creative work, his fascination with systems, his development of generative music, and his adoption of technology all point to someone who resists categorisation, and for whom work is play (and vice versa). In fact, imagination and play are the two key activities that define what it is to be human, as Eno explored in an essay for the BBC a few years ago. Again, AI does not yet have the power of imagination (and probably has no sense of play).

Sure, AI can conjure up all sorts of text, images, video, sound, music and other outputs. But in truth, it can only regurgitate what it has been trained on, even when extrapolating from the data it has been supplied and the human prompts it is given. This process of creation is more akin to plagiarism – taking source materials created by other people, blending and configuring them into some sort of “new” artefact, and passing the results off as the AI’s own work.

Plagiarism is neither new, nor is it exclusive to AI, of course. In fact, it’s a very natural human response to our environment: we all copy and transform images and sounds around us, as a form of tribute, homage, mimicry, creative engagement, pastiche, parody, satire, criticism, acknowledgement or denouncement. Leaving aside issues of attribution, permitted use, fair comment, IP rights, (mis)appropriation and deep fakes, some would argue that it is inevitable (and even a duty) for artists and creatives to “steal” ideas from their sources of inspiration – notably Robert Shore, in his book about “originality”. The music industry is especially adept at all forms of “copying” – sampling, interpolation, remixes, mash-ups, cover versions – something that AI has been capable of for many years. See, for example, this (limited) app from Google released a few years ago. Whether the results could be regarded as the works of J.S. Bach or the creation of Google’s algorithm trained on Bach’s music would be a question for Bach scholars, musicologists, IP lawyers and software analysts.

Finally, for the last word on AI and the human condition, I refer you to the closing scene from John Carpenter’s cult sci-fi film, “Dark Star”, where an “intelligent” bomb outsmarts its human interlocutor. Enjoy!

Next week: AI hallucinations and the law


State of the Music Industry…

Depending on your perspective, the music industry is in fine health. 2023 was a record year for sales (physical, digital and streaming), and the biggest touring artists are generating more income from ticket sales and merchandising than the GDP of many countries. Even vinyl records, CDs and cassettes are achieving better sales than in recent years!

On the other hand, only a small number of musicians are making huge bucks from touring; while smaller venues are closing down, meaning fewer opportunities for artists to perform.

And despite the growth in streaming, relatively few musicians are minting it from these subscription-based services, which typically pay very little in royalties to the vast majority of artists. (In fact, some content can be zero-rated unless it achieves a minimum number of plays.)
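
To make the zero-rating point concrete, here is a back-of-the-envelope calculation in Python; the per-stream rate and minimum-plays threshold are illustrative assumptions on my part, not any platform’s published figures.

```python
# Back-of-the-envelope royalty maths. The per-stream rate and the
# minimum-plays threshold are illustrative assumptions only, not any
# platform's published figures.
PER_STREAM_RATE = 0.003  # assumed payout in USD per stream
MIN_PLAYS = 1000         # assumed threshold below which a track is zero-rated

def track_royalties(streams):
    """A track below the threshold is 'zero-rated' and earns nothing."""
    return streams * PER_STREAM_RATE if streams >= MIN_PLAYS else 0.0

for streams in (500, 1_000, 100_000):
    print(f"{streams:>7} streams -> ${track_royalties(streams):,.2f}")
# 500 streams earn $0.00; even 100,000 streams earn only $300.00
```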

Aside from the impact of streaming services, there are two other related challenges that exercise the music industry: the growing use of Artificial Intelligence, and the need for musicians to be recognised and compensated more fairly for their work and their Intellectual Property.

With AI, a key issue is whether the software developers are being sufficiently transparent about the content sources used to train their models, and whether the authors and rights owners are being fairly recompensed for the use of their IP. Then there are questions of artistic “creativity”, authorial ownership, authenticity, fakes and passing-off when we are presented with AI-generated music. Generative music software has been around for some time, and anyone with a smart phone or laptop can access millions of tools and samples to compose, assemble and record their own music – and many people do just that, given the thousands of new songs being uploaded every day. Now, with the likes of Suno, it’s possible to “create” a 2-minute song (complete with lyrics) from just a short text prompt. Rolling Stone magazine recently did just that, and the result was both astonishing and dispiriting.

I played around with Suno myself (using the free version), and the brief prompt I submitted returned these two tracks, called “Midnight Shadows”:

Version 1

Version 2

The output is OK, not terrible, but displays very little in the way of compositional depth, melodic development or harmonic structure. Both tracks sound as if a set of ready-made loops and samples had simply been cobbled together in the same key and tempo, and left to run for 2 minutes. Suno also generated two quite different compositions with lyrics, voiced by a male and a female singer/bot respectively. The lyrics were nonsensical attempts to verbally riff on the text prompt. The vocals sounded both disembodied (synthetic, auto-tuned and one-dimensional) and exactly like the sort of vocal stylings favoured by so many contemporary pop singers, and featured on karaoke talent shows like The Voice and Idol. As for Suno’s attempt to remix the tracks at my further prompting, the less said the better.

While content attribution can be addressed through IP rights and commercial licensing, the issue of “likeness” is harder to enforce. Artists can usually protect their image (and merchandising) against passing-off, but can they protect the tone and timbre of their voice? A new law in Tennessee attempts to do just that, by protecting a singer’s vocal likeness from unauthorised use. (I’m curious to know whether this protection will be extended to Jimmy Page’s guitar sound and playing style, or to an electronic musician’s computer processing and programming techniques.)

I follow a number of industry commentators who, very broadly speaking, represent the positive (Rob Abelow), negative (Damon Krukowski) and neutral (Shawn Reynaldo) stances on streaming, AI and musicians’ livelihoods. For every positive opportunity that new technology presents, there is an equal (and sometimes greater) threat or challenge that musicians face. I was particularly struck by Shawn Reynaldo’s recent article on Rolling Stone’s Suno piece, entitled “A Music Industry That Doesn’t Sell Music”. The dystopian vision he presents is millions of consumers spending $10 a month to access AI music tools, so they can “create” and upload their content to streaming services, in the hope of covering their subscription fees… Sounds ghastly, if you ask me.

Add to the mix the demise of music publications (for which AI and streaming are also to blame…), and it’s easy to see how the landscape for discovering, exploring and engaging with music has become highly concentrated via streaming platforms and their recommender engines (plus marketing budgets spent on behalf of major artists). In the 1970s and 1980s, I would hear about new music from the radio (John Peel), TV (OGWT, The Tube, Revolver, So It Goes, Something Else), the print weeklies (NME, Sounds, Melody Maker), as well as word of mouth from friends, and by going to see live music and turning up early enough to watch the support acts.

Now, most of my music information comes from the few remaining print magazines such as Mojo and Uncut (which largely focus on legacy acts), The Wire (probably too esoteric for its own good), and Electronic Sound (mainly because that’s the genre that most interests me); plus Bandcamp, BBC Radio 6 Music’s “Freak Zone”, Twitter, and newsletters from artists, labels and retailers. The overall consequence of streaming and up/downloading is that there is too much music to listen to (but how much of it is worth the effort?), and multiple invitations to “follow”, “like”, “subscribe” and “sign up” for direct content (but again, how much of it is worth the effort?). For better or worse, the music media at least provided an editorial filter to help address quality vs quantity (even if much of it ended up being quite tribal).

In the past, the music industry operated as a network of vertically integrated businesses: they sourced the musical talent, they managed the recording, manufacturing and distribution of the content (including the hardware on which to play it), and they ran publishing and licensing divisions. When done well, this meant careful curation, the exercise of quality control, and a willingness to invest in nurturing new artists for several albums and for the duration of their careers. But at times, record companies have self-sabotaged: by engaging in format wars (e.g., over CD, DCC and MiniDisc standards), by denying the existence of online and streaming platforms (until Apple and Spotify came along), and by becoming so bloated that by the mid-1980s the major labels had to merge and consolidate to survive – largely because they had all but abandoned the sustainable development of new talent. They also ignored their lucrative back catalogues, until specialist and independent labels and curators showed them how to do it properly. Now, they risk overloading the reissue market, because they lack proper curation and quality control.

The music industry really only does three things:

1) A&R (sourcing and developing new talent)

2) Marketing (promotion, media and public relations)

3) Distribution & Licensing (commercialisation).

Now, #1 and #2 have largely been outsourced to social media platforms (and inevitably, to AI and recommender algorithms), and #3 is going to be outsourced to web3 (micro-payments for streaming subscriptions, distribution of NFTs, and licensing via smart contracts). Whether we like it or not, and taking their lead from Apple and Spotify, the music businesses of the future will increasingly resemble tech companies. The problem is, tech rarely understands content from the perspective of aesthetics – so expect to hear increasingly bland AI-generated music from avatars and bots that only exist in the metaverse.
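
As a purely hypothetical sketch of what licensing via smart contracts might involve, here is the core logic in Python; a real implementation would run on-chain, and the parties and split percentages below are invented.

```python
# Hypothetical sketch of smart-contract-style royalty splitting, written in
# Python for illustration only; a real version would be an on-chain contract,
# and the parties and percentages here are invented.
SPLITS = {
    "artist": 0.60,
    "songwriter": 0.20,
    "label": 0.15,
    "platform": 0.05,
}
assert abs(sum(SPLITS.values()) - 1.0) < 1e-9  # splits must total 100%

def settle(amount):
    """Distribute one licensing micro-payment (e.g. a single stream)
    automatically between the parties named in the 'contract'."""
    return {party: round(amount * share, 6) for party, share in SPLITS.items()}

print(settle(0.003))  # one stream's royalty, split with no manual accounting
```

The appeal is that settlement happens automatically per play, rather than via batch royalty statements; whether the splits themselves would be negotiated any more fairly than today is another question.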

Meanwhile, I go to as many live gigs as I can justify, and brace my wallet for the next edition of Record Store Day later this month…

Next week: Reclaim The Night


BYOB (Bring Your Own Brain)

My Twitter and LinkedIn feeds are full of posts about artificial intelligence, machine learning, large language models, robotics and automation – and how these technologies will impact our jobs and our employment prospects, often in very dystopian tones. It can be quite depressing to trawl through this material, to the point of being overwhelmed by the imminent prospect of human obsolescence.

No doubt, getting to grips with these tools will be important if we are to navigate the future of work, understand the relationship between labour, capital and technology, and maintain economic relevance in a world of changing employment models.

But we have been here before, many times (remember the Luddites?), and so far, the human condition means we learn to adapt in order to survive. These transitions will be painful, and there will be casualties along the way, but there is cause for optimism if we remember our post-industrial history.

First, among recent Twitter posts there was a timely reminder that automation need not mean despair in the face of displaced jobs.

Second, the technology at our disposal will inevitably make us more productive, as well as enabling us to reduce mundane or repetitive tasks, even freeing up more time for other (more creative) pursuits. The challenge will be learning to use these tools in efficient and effective ways, so that we don’t swap one type of routine for another.

Third, there is still a need to consider the human factor when it comes to the work environment, business structures and organisational behaviour – not least personal interaction, communication skills and stakeholder management. After all, you still need someone to switch on the machines, and tell them what to do!

Fourth, the evolution of “bring your own device” (and remote working) means that many of us have grown accustomed to having a degree of autonomy in the ways in which we organise our time and schedule our tasks – giving us the potential for more flexible working conditions. Plus, we have seen how many apps we use at home are interchangeable with the tools we use for work – and although the risk is that we are “always on”, equally, we can get smarter at using these same technologies to establish boundaries between our work/life environments.

Fifth, all the technology in the world is not going to absolve us of the need to think for ourselves. We still need to bring our own cognitive faculties and critical thinking to an increasingly automated, AI-intermediated and virtual world. If anything, we have to ramp up our cerebral powers so that we don’t become subservient to the tech, to make sure the tech works for us (and not the other way around).

Adopting a new approach means:

  • not taking the tech for granted
  • being prepared to challenge the tech’s assumptions (and not being complicit in its in-built biases)
  • questioning the motives and intentions of the tech’s developers, managers and owners (especially those of known or suspected bad actors)
  • validating all the newly-available data to gain new insights (not repeating past mistakes)
  • evaluating the evidence based on actual events and outcomes
  • and not falling prey to hyperbolic and cataclysmic conjectures

Finally, it is interesting to note the recent debates on regulating this new tech – curtailing malign forces, maintaining protections on personal privacy, increasing data security, and ensuring greater access for those currently excluded. This is all part of a conscious narrative (that human component!) to limit the extent to which AI will be allowed to run rampant, and to hold tech (in all its forms) more accountable for the consequences of its actions.

Next week: “The Digital Director”