Some final thoughts on AI

Last week, I attended a talk by musical polymath Jim O’Rourke on the Serge Paperface modular synthesizer. It was part memoir, part demonstration, and part philosophy tutorial. At its heart, the Serge is an outwardly human-controlled electronic instrument, incorporating any number and combination of processors, switches, circuits, rheostats, filters, voltage controllers and patch cables. These circuits take their lead from the operator’s initial instructions, but then use that data (voltage values) to generate and manipulate sound. As the sound evolves, the “composition” takes on the appearance of a neural network as the signal is re-patched to and from each component, sometimes with random and unexpected results – rather like our own thought patterns.

But the Serge is not an example of Artificial Intelligence, despite its ability to process multiple data points (sequentially, in parallel, and simultaneously) and notwithstanding the level of unpredictability. On the other hand, that unpredictability may make it more “human” than AI.

My reasons for using the Serge to open this concluding blog on AI are threefold:

First, these modular synthesizers only became viable with the availability of transistors and integrated circuits that replaced the valves of old, just as today’s portable computers rely on silicon chips and microprocessors. Likewise, although some elements of AI have been around for decades, the exponential rise of mobile devices, the internet, cloud computing and social media has allowed AI to ride on the back of their growth and into our lives.

Second, O’Rourke referred to the Serge as being “a way of life”, in that it leads users to think differently about music, to adopt an open mind towards the notion of composition, and to experiment knowing the results will be unpredictable, even unstable. In other words, suspend all preconceptions and embrace its whims (even surrender to its charms). Which is what many optimists would have us do with AI – although I think that there are still too many current concerns (and the potential for great harm) before we can get fully comfortable with what AI is doing, even if much of it may actually be positive and beneficial. At least the Serge can be turned off with the flick of a switch if things get out of hand.

Third, as part of his presentation O’Rourke made reference to Steven Levy’s book, “Artificial Life”, published 30 years ago. In fact, he cited it almost as a counterpoint to AI, in that Levy was exploring the interface between biological life and digital DNA in a pre-internet era, yet his thesis is even more relevant as AI neural nets become a reality.

So, where do I think we are in the evolution of AI? A number of clichés come to mind – the genie is already out of the bottle, and like King Canute we can’t turn back the tide, but like the Sorcerer’s Apprentice maybe we shouldn’t meddle with something we don’t understand. I still believe the risks associated with deep fakes, AI hallucinations and other factual errors that will inevitably be repeated and replicated without a thought to correct the record represent a major concern. I also think more transparency is needed as to how LLMs are built and trained, as well as disclosure when AI is actually being deployed and what content has been used to generate the results. Issues of copyright theft and IP infringement are probably manageable with a combination of technology, industry goodwill and legal common sense. Subject to those legal clarifications, questions about what is “real” or original and what is “fake” or artificial in terms of creativity will probably come down to personal taste and aesthetics. But expect to see lots of disputes in the field of arts and entertainment when it comes to annual awards and recognition for creativity and originality!

At times, I can see AI is simply a combination of mega databases, powerful search engines, predictive tools, programmable logic, smart decision trees, and pattern recognition on steroids, all aided by high-speed computer processing and widespread data distribution. At other times, it feels like we are all being made the subject matter or inputs of AI (it is happening “to” us, rather than working for us), and in return we get a mix of computer-generated outputs with a high dose of AI “dramatic licence”.

My over-arching conclusion at this point in the AI journey is that it resembles GMO crops – unless you live off grid and all your computers are air-gapped, then every device, network and database you interact with has been trained on, touched by or tainted with AI. It’s inevitable and unavoidable.

Next week: RWAs and the next phase of tokenisation


AI and the Human Factor

Earlier this month, I went to the Melbourne premiere of “Eno”, a documentary by Gary Hustwit, which is described as the world’s first generative feature film. Each time the film is shown, the choice and sequencing of scenes is different – no two versions are ever the same. Some content may never be screened at all.

I’ll leave readers to explore the director’s rationale for this approach (and the implications for film-making, cinema and streaming). But during a Q&A following the screening, Hustwit was at pains to explain that this is NOT a film generated by AI. He was also guarded, refraining from revealing too much about the proprietary software and hardware system he co-developed to compile and present the film.

However, the director did want to stress that he didn’t simply tell an AI bot to scour the internet, scrape any content by, about or featuring Brian Eno, and then assemble it into a compilation of clips. This documentary is presented according to a series of rules-based algorithms, and is a content-led venture curated by its creator. Yes, he had to review hours and hours of archive footage from which to draw key themes, but he also had to shoot new interview footage of Eno that would help frame the context and support the narrative, while avoiding a banal biopic or series of talking heads. The result is a skilful balance between linear storytelling, intriguing juxtaposition, traditional interviews, critical analysis, and deep exploration of the subject. The point is, for all its powerful capabilities, AI could not have created this film. It needed to start with human elements: innate curiosity on the part of the director; intelligent and empathetic interaction between film-maker and subject; and expert judgement in editing the content – as well as an element of risk-taking in allowing the algorithm to make the final choices when it comes to each screened version.
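To make the idea of rules-based (rather than AI-driven) sequencing concrete, here is a minimal sketch of how such a system might work. Everything in it is hypothetical – the scene names, the rules and the running-time target are invented for illustration, and bear no relation to Hustwit’s actual proprietary system:

```python
import random

# Hypothetical scene pool: (title, type, minutes). Invented for illustration.
SCENES = [
    ("Opening montage", "archive", 4),
    ("Studio interview", "interview", 6),
    ("Oblique Strategies", "archive", 5),
    ("Generative music demo", "demo", 7),
    ("Garden interview", "interview", 5),
    ("Roxy Music footage", "archive", 6),
]

def build_screening(seed, target_minutes=20):
    """Assemble one screening under simple curatorial rules:
    - always open with the opening montage;
    - never place two interview scenes back to back;
    - stop once the target running time is reached.
    """
    rng = random.Random(seed)   # each screening gets its own shuffle
    pool = SCENES[1:]
    rng.shuffle(pool)
    sequence = [SCENES[0]]
    total = SCENES[0][2]
    for scene in pool:
        if total >= target_minutes:
            break
        if scene[1] == "interview" and sequence[-1][1] == "interview":
            continue            # rule: no consecutive interviews
        sequence.append(scene)
        total += scene[2]
    return sequence

# Different seeds can yield different cuts: no two versions need be the same.
print([title for title, _, _ in build_screening(1)])
print([title for title, _, _ in build_screening(2)])
```

The key point the sketch captures is that the randomness is bounded: the creator writes the rules and chooses the content, and the algorithm only decides the final ordering within those constraints – unpredictable, but curated.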

That the subject of this documentary is Eno should not be surprising, either. He has a reputation for being a modern polymath, interested in science and technology as well as art. His use of Oblique Strategies in his creative work, his fascination with systems, his development of generative music, and his adoption of technology all point to someone who resists categorisation, and for whom work is play (and vice versa). In fact, imagination and play are the two key activities that define what it is to be human, as Eno explored in an essay for the BBC a few years ago. Again, AI does not yet have the power of imagination (and probably has no sense of play).

Sure, AI can conjure up all sorts of text, images, video, sound, music and other outputs. But in truth, it can only regurgitate what it has been trained on, even when extrapolating from data with which it has been supplied, and the human prompts it is given. This process of creation is more akin to plagiarism – taking source materials created by other people, blending and configuring them into some sort of “new” artefact, and passing the results off as the AI’s own work.

Plagiarism is neither new, nor is it exclusive to AI, of course. In fact, it’s a very natural human response to our environment: we all copy and transform images and sounds around us, as a form of tribute, homage, mimicry, creative engagement, pastiche, parody, satire, criticism, acknowledgement or denouncement. Leaving aside issues of attribution, permitted use, fair comment, IP rights, (mis)appropriation and deep fakes, some would argue that it is inevitable (and even a duty) for artists and creatives to “steal” ideas from their sources of inspiration – notably Robert Shore in his book about “originality”. The music industry is especially adept at all forms of “copying” – sampling, interpolation, remixes, mash-ups, cover versions – something that AI has been capable of for many years. See for example this (limited) app from Google released a few years ago. Whether the results could be regarded as the works of J.S. Bach or the creation of Google’s algorithm trained on Bach’s music would be a question for Bach scholars, musicologists, IP lawyers and software analysts.

Finally, for the last word on AI and the human condition, I refer you to the closing scene from John Carpenter’s cult sci-fi film, “Dark Star”, where an “intelligent” bomb outsmarts its human interlocutor. Enjoy!

Next week: AI hallucinations and the law