AI & Music

In a recent episode of a TV detective show, an AI tech dude tries to outsmart an old-school musicologist by re-creating the missing part of a vintage blues recording. The professor is asked to pick out the “real” track from among the AI versions. The blues expert guesses correctly within a few beats – much to the frustration of the coder.

“How did you figure it out so quickly?”

“Easy – it’s not just what the AI added, but more importantly what it left out.”

The failure of AI to fully replicate the original song (by omitting a recording error that the AI had “corrected”) is another example of how AI lacks the human touch, does not yet have intuition, and struggles to exercise informed judgement. Choices may often be a matter of taste, but innate human creativity cannot yet be replicated.

Soon, though, AI tools will displace a lot of the work currently done by composers, lyricists, musicians, producers, arrangers and recording engineers. Already, digital audio workstation (DAW) software enables anyone with a computer or mobile device to create, record, sample and mix their own music, without needing to read a note of music and without having to strum a chord. Not only that, the software can emulate the acoustic properties of specific venues, and correct out-of-tune and out-of-time recordings. So anyone can pretend they are recording at Abbey Road.
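
(To give a sense of how routine this kind of processing has become, here is a minimal sketch of naive, global pitch correction using the open-source librosa library; the file name is a placeholder, and a real DAW does this far more gracefully, note by note.)

```python
import numpy as np
import librosa
import soundfile as sf

# Load a (hypothetical) vocal take; "vocal_take.wav" is just a placeholder.
y, sr = librosa.load("vocal_take.wav", sr=None, mono=True)

# Estimate the fundamental frequency over time using the pYIN algorithm.
f0, voiced_flag, _ = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr
)

# Work out how far (in semitones) the average sung pitch drifts from
# the nearest note of the equal-tempered scale.
if np.any(voiced_flag):
    midi = librosa.hz_to_midi(np.nanmean(f0[voiced_flag]))
    correction = float(np.round(midi) - midi)  # semitones needed to land "in tune"

    # Shift the whole take by that amount: a crude, global form of auto-tune.
    y_tuned = librosa.effects.pitch_shift(y, sr=sr, n_steps=correction)
    sf.write("vocal_take_tuned.wav", y_tuned, sr)
```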

I recently blogged about how AI is presenting fresh challenges (as well as opportunities) for the music industry. Expect to see “new” recordings released by (or attributed to) dead pop stars, especially if their back catalogue is out of copyright. This is about more than exhuming pre-existing recordings and enhancing them with today’s technology; this is deriving new content from a set of algorithms, trained on vast back catalogues, directed by specific prompts (“bass line in the style of John Entwistle”), and maybe given some core principles of musical composition.

And it’s the AI training that has prompted the major record companies to sue two AI software companies, a state of affairs which industry commentator Rob Abelow says was inevitable, because:

“It’s been clear that Suno & Udio have trained on copyrighted material with no plan to license or compensate”.

On the other hand, streaming and automated music are not new. Sound designer and artist Tero Parviainen recently quoted Curtis Roads’ “The Computer Music Tutorial” (2023):

“A new industry has emerged around artificial intelligence (AI) services for creating generic popular music, including Flow Machines, IBM Watson Beat, Google Magenta’s NSynth Super, OpenAI’s Jukebox, Jukedeck, Melodrive, Spotify’s Creator Technology Research Lab, and Amper Music. This is the latest incarnation of a trend that started in the 1920s called Muzak, to provide licensed background music in elevators, business and dental offices, hotels, shopping malls, supermarkets, and restaurants”

And even before the arrival of Muzak in the 1920s, the world’s first streaming service was launched in the late 1890s, using the world’s first synthesizer – the Teleharmonium. (Thanks to Mark Brend’s “The Sound of Tomorrow”, I learned that Mark Twain was the first subscriber.)

For music purists and snobs (among whom I would probably count myself), all this talk about the impact of AI on music raises questions of aesthetics as well as ethics. But I’m reminded of some comments made by Pink Floyd about 50 years ago, when asked about their use of synthesizers during the making of “Live at Pompeii”. In short, they argued that such machines still need human input, and as long as the musicians are controlling the equipment (and not the other way around), then what’s the problem? It’s not as if they were cheating, disguising what they were doing, or compensating for a lack of ability – and the technology didn’t make them better musicians, it just allowed them to do different things:

“It’s like saying, ‘Give a man a Les Paul guitar, and he becomes Eric Clapton… It’s not true.'”

(Well, not yet, but I’m sure AI is working on it…)

Next week: Some final thoughts on AI

AI and Deep (and not so deep…) Fakes

The New York Times recently posted a quiz: “Can you tell the difference between a photograph and an image created by AI?”

Of the quiz examples, a mix of actual photos and AI-generated content, I correctly identified only 8 out of 10. My significant other claimed to have scored 10/10! In my defence, I correctly identified all of the AI images, but I mistook two authentic photos for fakes. Of the latter, one featured a bunch of famous people, most of whom I did not recognise, and the photo had been significantly cropped, removing much of the visual context (I also suspect it had been subject to some additional Photoshopping, given it was a publicity shot). The other real photo had been taken at such an unusual angle that it distorted the natural perspective, making some elements look wonky. (But maybe I’ve become more cynical or sceptical, and the more I know I am being exposed to AI-generated content, the more I tend to disbelieve what I see?)

How can we remain alert to AI deceptions, while at the same time recognising and embracing the potential that this amazing technology has to offer?

Taking my lead from the New York Times article, the following blog has been created using ChatGPT. All I did was enter some brief subject headings (including the title), a series of internet links, and a prompt to turn it into a blog on AI and fakes. Can you tell what is true, what is hallucination, and what important information has been left out?
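
(For the technically curious, the equivalent workflow via the OpenAI Python client might look roughly like the sketch below; the model name, headings, links and prompt wording are illustrative placeholders, not my actual inputs.)

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Placeholder headings and links, standing in for the actual inputs.
headings = [
    "AI and Deep (and not so deep...) Fakes",
    "Dylan and Rodin",
    "AI in Indian elections",
    "Celebrity persona rights",
    "AI and copyright",
]
links = [
    "https://example.com/article-1",
    "https://example.com/article-2",
]

prompt = (
    "Turn the following subject headings and links into a blog post "
    "on AI and fakes, written for a general audience:\n\n"
    + "\n".join(headings) + "\n\n" + "\n".join(links)
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; any recent chat model would do
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

What follows is the result.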

AI and Deep (and not so deep…) Fakes

Artificial Intelligence (AI) has revolutionized numerous sectors, bringing forth innovations that were once unimaginable. However, one of the more controversial applications of AI is in the creation of deep fakes. These are hyper-realistic digital falsifications created using AI, capable of mimicking real people’s appearances and voices. This technology has sparked debates over ethics, security, and the future of digital media. Let’s delve into some recent examples and the implications they bring.

Dylan and Rodin: A Fabricated Encounter

A striking example of AI’s capacity to blur reality is the supposed encounter between Bob Dylan and the sculptor Auguste Rodin. In an article on Substack by Dave Haslam, the scenario of Dylan visiting Rodin’s Parisian studio is meticulously crafted, yet it’s a fabrication made plausible by AI tools. This fake story exemplifies how AI can create convincingly realistic yet entirely fictional narratives, challenging our ability to discern truth from fiction.

AI in Indian Elections

AI’s role extends beyond entertainment into more critical areas such as political campaigns. In the 2024 Indian elections, AI-generated deep fakes have been used to spread misinformation and manipulate public opinion. An article by ABC News details how AI is disrupting electoral processes, raising concerns about the integrity of democratic systems.

Mozilla’s Campaign Against Misinformation

In response to the growing threat of AI-generated misinformation, organizations like Mozilla have launched campaigns to protect the integrity of elections. Mozilla’s efforts aim to hold platforms like WhatsApp accountable for the spread of fake news and deep fakes, urging them to implement stricter controls to safeguard democratic processes.

Celebrity Persona Rights

AI’s ability to replicate real people has also sparked legal disputes, particularly concerning persona rights. Scarlett Johansson’s recent dispute with OpenAI highlights these issues. Johansson’s concerns about her likeness being used without consent underscore the broader debate about the ethical use of AI. As noted by Scientific American, the conflict brings to light questions about how AI-generated content can respect individual rights and privacy.

Legislative Actions on Deep Fakes

Governments are beginning to take action against the misuse of AI-generated content. In Australia, new laws are being proposed to combat the creation and distribution of deep fake pornography and other harmful images. The Guardian reports on legislative efforts to protect individuals from the malicious use of AI, reflecting a growing recognition of the need for regulation.

AI and Copyright

The intersection of AI and copyright law is another area of intense debate. A lawsuit involving famous authors against OpenAI raises questions about how AI uses copyrighted material for training and whether the content it generates can itself be copyrighted. This legal battle, reported by ABC News and Reuters, could set important precedents for how intellectual property laws adapt to the age of AI.

Conclusion

As AI technology continues to advance, its applications in creating deep fakes present both opportunities and challenges. While it can be used creatively and constructively, it also poses significant risks to privacy, security, and the authenticity of information. The ongoing developments in legislation, ethical guidelines, and technological safeguards will be crucial in shaping the future landscape of AI-generated content.

So, did you spot the “deliberate” error(s)? And what information was missing? Answers will be posted later this week.

Next week: AI & Music

AI hallucinations and the law

Several years ago, I blogged about the role of technology within the legal profession. One development I noted was the nascent use of AI to help test the merits of a case before it goes to trial, and to assess the likelihood of winning. Not only might this prevent potentially frivolous matters from coming to trial, it would also reduce court time and legal costs.

More recently, there has been some caution (if not out-and-out scepticism) about the efficacy of using AI in support of legal research and case preparation. The current debate has been triggered by an academic paper from Stanford University that compared leading legal research tools (which claim to have been “enhanced” by AI) with ChatGPT. The results were sobering, with a staggering number of apparent “hallucinations” being generated, even by the specialist legal research tools. Hallucinations are not unique to legal research tools, nor to the general-purpose AI tools and the Large Language Models (LLMs) they are built on, as Stanford has previously reported. While the academic paper is awaiting formal publication, there has been some to-and-fro between the research authors and at least one of the named legal tools. The rebuttal rightly points out that any AI tool (especially a legal research and professional practice platform) has to be fit for purpose, and trained on appropriate data.

Aside from the Stanford research, some lawyers have been found to have relied upon AI tools such as ChatGPT and Google Bard to draft their submissions, only to discover that the results cited non-existent precedents and cases – including in at least one high-profile prosecution. The latest research suggests that not only do AI tools “imagine” fictitious case reports, they can also fail to spot “bad” law (e.g., cases that have been overturned, or laws that have been repealed), offer inappropriate advice, or provide inaccurate legal interpretation.

What if AI hallucinations resulted in the generation of invidious content about a living person – content which, in many circumstances, would be deemed libel or slander? If a series of AI prompts gives rise to libellous content, who would be held responsible? Can AI itself be sued for libel? (Of course, under common law it is impossible to libel the dead, as only a living person can sue for libel.)

I found an interesting discussion of this topic here, which concludes that while AI tools such as ChatGPT may appear to have some degree of autonomy (depending on their programming and training), they certainly don’t have true agency, and their output cannot be regarded in the same way as other forms of speech or text when it comes to legal liabilities or protections. The article identifies three groups of actors who might be deemed responsible for AI results: AI software developers (companies like OpenAI), content hosts (such as search engines), and publishers (authors, journalists, news networks). It concludes that of the three, publishers, authors and journalists face the most responsibility and accountability for their content, even if they claim “AI said this was true”.

Interestingly, the above discussion referenced news from early 2023 that a mayor in Australia was planning to sue OpenAI (the owners of ChatGPT) for defamation unless they corrected the record about false claims made about him. Thankfully, OpenAI appear to have heeded the letter of concern, and the mayor has since dropped his case (or the false claim was simply overwritten by a subsequent version of ChatGPT). However, the original Reuters link above, which I sourced for this blog, makes no mention of the subsequent discontinuation, either as a footnote or an update – which just goes to show how complex it is to correct the record, since the reference to his initial claim is still valid (it happened), even though it did not proceed (he chose not to pursue it). Even actual criminal convictions can be deemed “spent” after a given period of time, such that they no longer appear on an individual’s criminal record. By contrast, someone found not guilty of a crime (or, in the mayor’s case, falsely labelled with a conviction) cannot guarantee that references to the alleged events will be expunged from the internet, even with the evolution of the “right to be forgotten”.

Perhaps we’ll need to train AI tools to retrospectively correct or delete any false information about us; although conversely, AI is accelerating the proliferation of fake content – benign, humorous or malicious – thus setting the scene for the next blog in this series.

Next week: AI and Deep (and not so deep…) Fakes

AI and the Human Factor

Earlier this month, I went to the Melbourne premiere of “Eno”, a documentary by Gary Hustwit, which is described as the world’s first generative feature film. Each time the film is shown, the choice and sequencing of scenes is different – no two versions are ever the same. Some content may never be screened at all.

I’ll leave readers to explore the director’s rationale for this approach (and the implications for film-making, cinema and streaming). But during a Q&A following the screening, Hustwit was at pains to explain that this is NOT a film generated by AI. He was also guarded and refrained from revealing too much about the proprietary software and hardware system he co-developed to compile and present the film.

However, the director did want to stress that he didn’t simply tell an AI bot to scour the internet, scrape any content by, about or featuring Brian Eno, and then assemble it into a compilation of clips. This documentary is presented according to a series of rules-based algorithms, and is a content-led venture curated by its creator. Yes, he had to review hours and hours of archive footage from which to draw key themes, but he also had to shoot new interview footage of Eno that would help to frame the context and support the narrative, while avoiding a banal biopic or a series of talking heads. The result is a skilful balance between linear storytelling, intriguing juxtaposition, traditional interviews, critical analysis, and deep exploration of the subject. The point is, for all its powerful capabilities, AI could not have created this film. It needed to start with human elements: innate curiosity on the part of the director; intelligent and empathetic interaction between film-maker and subject; and expert judgement in editing the content – as well as an element of risk-taking in allowing the algorithm to make the final choices for each screened version.
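
(Purely to illustrate the general idea of rules-based generative sequencing, and emphatically not Hustwit’s actual system, which remains proprietary and undisclosed, a toy sketch might look like the following; every scene name, tag and rule here is invented for the example.)

```python
import random

# A toy catalogue of scenes, each tagged by theme and with a length in seconds.
# All names, tags and rules below are invented for illustration only.
SCENES = [
    {"name": "studio_interview_01",  "tags": {"interview"},          "length": 240},
    {"name": "archive_roxy_music",   "tags": {"archive", "music"},   "length": 180},
    {"name": "generative_art_demo",  "tags": {"systems"},            "length": 120},
    {"name": "studio_interview_02",  "tags": {"interview"},          "length": 200},
    {"name": "archive_ambient_1975", "tags": {"archive", "music"},   "length": 150},
]

TARGET_RUNTIME = 600  # seconds, for the toy example


def build_screening(scenes, seed=None):
    """Assemble one 'screening': random order, but subject to simple rules."""
    rng = random.Random(seed)
    pool = scenes[:]
    rng.shuffle(pool)

    cut, runtime, last_tags = [], 0, set()
    for scene in pool:
        # Rule 1: stay under the target runtime.
        if runtime + scene["length"] > TARGET_RUNTIME:
            continue
        # Rule 2: avoid two scenes with identical tags back to back.
        if scene["tags"] == last_tags:
            continue
        cut.append(scene["name"])
        runtime += scene["length"]
        last_tags = scene["tags"]
    return cut


# Every call with a different seed yields a different version of the "film".
print(build_screening(SCENES, seed=1))
print(build_screening(SCENES, seed=2))
```

Each run with a different seed produces a different cut, which is the basic intuition behind a screening that is never the same twice.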

That the subject of this documentary is Eno should not be surprising, either. He has a reputation for being a modern polymath, interested in science and technology as well as art. His use of Oblique Strategies in his creative work, his fascination with systems, his development of generative music, and his adoption of technology all point to someone who resists categorisation, and for whom work is play (and vice versa). In fact, imagination and play are the two key activities that define what it is to be human, as Eno explored in an essay for the BBC a few years ago. Again, AI does not yet have the power of imagination (and probably has no sense of play).

Sure, AI can conjure up all sorts of text, images, video, sound, music and other outputs. But in truth, it can only regurgitate what it has been trained on, even when extrapolating from data with which it has been supplied, and the human prompts it is given. This process of creation is more akin to plagiarism – taking source materials created by other people, blending and configuring them into some sort of “new” artefact, and passing the results off as the AI’s own work.

Plagiarism is neither new, nor is it exclusive to AI, of course. In fact, it’s a very natural human response to our environment: we all copy and transform the images and sounds around us, as a form of tribute, homage, mimicry, creative engagement, pastiche, parody, satire, criticism, acknowledgement or denouncement. Leaving aside issues of attribution, permitted use, fair comment, IP rights, (mis)appropriation and deep fakes, some would argue that it is inevitable (and even a duty) for artists and creatives to “steal” ideas from their sources of inspiration – notably Robert Shore, in his book about “originality”. The music industry is especially adept at all forms of “copying” – sampling, interpolation, remixes, mash-ups, cover versions – something that AI has been capable of for many years. See for example this (limited) app from Google released a few years ago. Whether the results could be regarded as the works of J.S. Bach or the creation of Google’s algorithm trained on Bach’s music would be a question for Bach scholars, musicologists, IP lawyers and software analysts.

Finally, for the last word on AI and the human condition, I refer you to the closing scene from John Carpenter’s cult sci-fi film, “Dark Star”, where an “intelligent” bomb outsmarts its human interlocutor. Enjoy!

Next week: AI hallucinations and the law