Album Celebrations

When the first 12″ vinyl record was issued in 1948, did any record labels expect that this format would still be in use nearly 80 years later? The death of the 33rpm disc has been predicted many times, based on industry events and cultural trends that were expected to render vinyl albums obsolete. Music cassettes, CDs, MiniDiscs, mp3s, 7″ 45rpm singles, home-taping, downloads and streaming were all seen as existential threats to albums. Yet, despite reaching near extinction in the 1990s, vinyl albums (both new releases and back catalogue) are currently enjoying something of a revival.

This resurgence of interest in albums can be attributed to several factors: baby boomers reliving their youth; Gen X/Y/Z watching shows like “Stranger Things”; the box set, reissue and collector market; retro fashion trends; and a desire for all things analogue, tactile and physical (in contrast to the vapidity of streaming…).

Streaming has definitely changed the way many people listen to music, to the extent that albums have become deconstructed and fragmented thanks to shuffle, algorithms, recommender engines, playlists and a focus on one-off songs and collaborations by today’s popular artists. By contrast, most albums represent a considered and coherent piece of work: a selection of tracks designed and sequenced to be heard in a specific order, reflecting the artist’s creative intention or narrative structure. Streaming means that the artist’s work is being intermediated in a way that was not intended. You wouldn’t expect a novel, play or film to be presented in any old order – the author/playwright/director expects us to view the work as they planned. (OK, so there are some notable examples that challenge this convention, such as B.S. Johnson’s novel, “The Unfortunates”, or the recent “Eno” documentary.)

Thankfully, classic albums are now being celebrated for their longevity, with significant anniversaries of an album’s release warranting deluxe reissues and live tours. This past weekend I went to two such events. The first was a concert by Black Cab, marking 10 years since the release of their album “Games Of The XXI Olympiad”. Appropriately, the show was the same day as the opening of the Paris Olympics, and the band started with a brief version of “Fanfare for the Common Man”. The second was part of the 30th anniversary tour for “Dream It Down”, the third album by the Underground Lovers. As well as getting most of the original band members together, the concert also featured Amanda Brown, formerly of The Go-Betweens, who played on the album itself. (Also on stage was original percussionist, Derek Yuen – whose day job is designing shoes for the Australian Olympic team…)

It’s hard to imagine we will be celebrating the date when an artist first dropped a stream on Spotify…!

[This year also marks the 40th anniversary of the release of “Pink Frost”, the break-through single by The Chills, New Zealand’s finest musical export. So it was sad to read of the recent passing of their founder, Martin Phillipps. The Chills were one of many Antipodean bands that always seemed to be playing in London in the late 1980s, often to much larger audiences than they enjoyed at home. Their classic early singles and EPs are once again available on vinyl. Do yourself a favour, as someone once said!]

Next week: A postscript on AI


RWAs and the next phase of tokenisation

In the blockchain and digital asset communities, there are currently three key topics that dominate the industry headlines. In the short term, the spot Ethereum ETFs are finally due to launch in the USA this week. Then there is the perennial long-term price prediction for Bitcoin. In between, much of the debate is about the future of asset tokenisation, specifically for real-world assets (RWAs). Add to the mix the cat and mouse game of regulatory oversight/overreach and the rapid growth of fiat-backed stablecoins, and there you have all the elements of the crypto narrative for the foreseeable future.

The general view is that tokenising traditional assets such as real estate, equities, bonds, commodities, stud fees, art and intellectual property, and issuing them as digital tokens on a blockchain, has several benefits. Tokenisation should reduce origination and transaction costs (fewer intermediaries, cheaper technology); reduce settlement times (instant, compared to T+1, T+2 or T+3 days in legacy markets); democratise access to assets (via fractionalisation) that were previously available only to wholesale investors; and give rise to further innovation. For example, imagine hybrid tokens that comprise equity ownership, a right to a share of revenue streams, and membership discounts. Think of a tokenised toll road, or a sports stadium, or an artwork that gets hired out to galleries and is licensed for merchandising purposes.
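As a thought experiment, the hybrid-token idea can be sketched in a few lines of Python. This is a toy, off-chain model (the asset name, unit counts and holders are all hypothetical, and in practice this logic would live in a smart contract): a fractionalised asset whose token holders receive a pro-rata share of its revenue stream.

```python
from dataclasses import dataclass, field

@dataclass
class HybridToken:
    """Toy model of a hybrid token: fractional ownership plus a pro-rata revenue share."""
    asset_name: str
    total_units: int
    holdings: dict = field(default_factory=dict)  # holder -> units held

    def unissued_units(self) -> int:
        return self.total_units - sum(self.holdings.values())

    def issue(self, holder: str, units: int) -> None:
        # Fractionalisation: sell off small units of the underlying asset.
        if units > self.unissued_units():
            raise ValueError("not enough unissued units")
        self.holdings[holder] = self.holdings.get(holder, 0) + units

    def distribute_revenue(self, amount: float) -> dict:
        # Revenue stream (e.g. toll receipts) shared pro-rata across issued units.
        issued = sum(self.holdings.values())
        return {h: amount * u / issued for h, u in self.holdings.items()}

# Hypothetical example: a tokenised toll road with 1,000 units on issue.
toll_road = HybridToken("M99 Toll Road", total_units=1000)
toll_road.issue("alice", 250)
toll_road.issue("bob", 750)
payout = toll_road.distribute_revenue(10_000.0)  # a month of toll receipts
# payout -> {'alice': 2500.0, 'bob': 7500.0}
```

The membership-discount leg of a real hybrid token would simply be another entitlement keyed off the same holdings register – the point is that one token can carry several rights at once.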

There are still quite a few issues to iron out, such as: the technology standards and smart contract designs that will originate, issue, distribute, track, cryptographically secure and transfer the digital tokens, both on native blockchains and across multiple networks; the role of traditional players (brokers, underwriters, custodians, trustees, transfer agents, payment agents, and share registries), and whether they are needed at all once assets are secured on-chain; and verification, certification and chain of ownership (given that an asset expressed as a digital token is very similar to a bearer bond – my private keys, my asset).
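The bearer-instrument point (“my private keys, my asset”) can be illustrated with a toy registry in Python. This is not real cryptography – a hash of the private key stands in for a proper public-key signature scheme such as ECDSA, and all the identifiers are made up – but it shows the essential property: whoever controls the key controls the token, and the chain of ownership is simply the ledger’s transfer history.

```python
import hashlib

def key_fingerprint(private_key: str) -> str:
    # Stand-in for deriving a public address from a private key.
    return hashlib.sha256(private_key.encode()).hexdigest()

class TokenRegistry:
    """Toy ledger: a token moves only when the current holder proves control of the key."""

    def __init__(self) -> None:
        self.owner = {}    # token_id -> fingerprint of current owner's key
        self.history = {}  # token_id -> chain of ownership (fingerprints)

    def mint(self, token_id: str, private_key: str) -> None:
        fp = key_fingerprint(private_key)
        self.owner[token_id] = fp
        self.history[token_id] = [fp]

    def transfer(self, token_id: str, private_key: str, new_owner_key: str) -> bool:
        # "My private keys, my asset": only the key-holder can move the token.
        if key_fingerprint(private_key) != self.owner[token_id]:
            return False  # proof-of-control failed; transfer rejected
        fp = key_fingerprint(new_owner_key)
        self.owner[token_id] = fp
        self.history[token_id].append(fp)
        return True

registry = TokenRegistry()
registry.mint("BOND-001", private_key="alice-secret")
stolen = registry.transfer("BOND-001", "mallory-guess", "mallory-secret")  # False
legit = registry.transfer("BOND-001", "alice-secret", "bob-secret")        # True
```

The flip side, of course, is that losing the key means losing the asset – which is exactly why custody, and the future role of custodians and registries, remains one of the open questions.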

Last week, Upside in Melbourne hosted a panel discussion entitled: “Tokenise This! Unlocking the Value of Real World Asset Tokenisation”. The speakers were:

Richard Schroder, Head of Digital Asset Services, ANZ Bank

Lisa Wade, CEO, DigitalX

Andrew Sallabanks, Head of Strategy and Operations, CloudTech Group

Alan Burt, Executive Chairman, Redbelly Network

Shane Verner, A/NZ Sales Director, Fireblocks

Each of these firms has been working on a number of tokenisation projects, such as stablecoins, real estate, government bonds, credit portfolios, funds of funds, and even stud fees. The key message was that “faster, cheaper” is not good enough – RWA tokenisation solutions must offer something much better than traditional processes, and must not add friction (if anything, they should reduce current friction).

There were frequent references to fiat-backed stablecoins. In some ways, the tokenisation of real estate, bonds and equities is an extension of the tokenisation of money (as illustrated by stablecoins). However, there was no specific mention of the role of stablecoins in RWA tokenisation, for example as on/off ramps, or as settlement instruments for the pricing, transfer and valuation of RWAs.

From an Australian perspective, the prospect of regulation (particularly for custody, crypto exchanges and brokers, and payment platforms that use stablecoins) looms large. Generally, this was welcomed, as providing clarity and certainty. But without some specific provisions for crypto platforms and digital assets, bringing everything under the existing ASIC/AFSL regime will exclude many startups and smaller providers, due to exorbitant capital adequacy and insurance requirements.

Finally, despite the nature of the organisations they work for, all of the panelists agreed that “cryptographic trust is better than institutional trust”.

The potential for tokenising traditional assets has been around for several years. And while it is still relatively early in its evolution, the few listing and trading platforms for tokenised assets that have already launched have struggled to gain traction. They have few listings, limited liquidity and minimal secondary trading – in short, they lack market depth. It seems that while the market opportunity may be huge (and the enabling technology is already here), there needs to be a more compelling reason to adopt tokenisation. Hopefully, that will emerge soon.

Next week: Album Celebrations


Some final thoughts on AI

Last week, I attended a talk by musical polymath, Jim O’Rourke, on the Serge Paperface modular synthesizer. It was part memoir, part demonstration, and part philosophy tutorial. At its heart, the Serge is an outwardly human-controlled electronic instrument, incorporating any number and combination of processors, switches, circuits, rheostats, filters, voltage controllers and patch cables. These circuits take their lead from the operator’s initial instructions, but then use that data (voltage values) to generate and manipulate sound. As the sound evolves, the “composition” takes on the appearance of a neural network as the signal is re-patched to and from each component, sometimes with random and unexpected results – rather like our own thought patterns.

But the Serge is not an example of Artificial Intelligence, despite its ability to process multiple data points (sequentially, in parallel, and simultaneously) and notwithstanding the level of unpredictability. On the other hand, that unpredictability may make it more “human” than AI.

My reasons for using the Serge as the beginning of this concluding blog on AI are three-fold:

First, these modular synthesizers only became viable with the availability of transistors and integrated circuits that replaced the valves of old, just as today’s portable computers rely on silicon chips and microprocessors. Likewise, although some elements of AI have been around for decades, the exponential rise of mobile devices, the internet, cloud computing and social media has allowed AI to ride on the back of their growth and into our lives.

Second, O’Rourke referred to the Serge as being “a way of life”, in that it leads users to think differently about music, to adopt an open mind towards the notion of composition, and to experiment knowing the results will be unpredictable, even unstable. In other words, suspend all pre-conception and embrace its whims (even surrender to its charms). Which is what many optimists would have us do with AI – although I think that there are still too many current concerns (and the potential for great harm) before we can get fully comfortable with what AI is doing, even if much of it may actually be positive and beneficial. At least the Serge can be turned off with the flick of a switch if things get out of hand.

Third, as part of his presentation O’Rourke made reference to Steven Levy’s book, “Artificial Life”, published 30 years ago. In fact, he cited it almost as a counterpoint to AI, in that Levy was exploring the interface between biological life and digital DNA in a pre-internet era, yet his thesis is even more relevant as AI neural nets become a reality.

So, where do I think we are in the evolution of AI? A number of clichés come to mind – the genie is already out of the bottle, and like King Canute we can’t turn back the tide, but like the Sorcerer’s Apprentice maybe we shouldn’t meddle with something we don’t understand. I still believe the risks associated with deep fakes, AI hallucinations and other factual errors – which will inevitably be repeated and replicated without a thought to correct the record – represent a major concern. I also think more transparency is needed as to how LLMs are built and the data they are trained on, as well as disclosures when AI is actually being deployed, and what content has been used to generate the results. Issues of copyright theft and IP infringement are probably manageable with a combination of technology, industry goodwill and legal common sense. Subject to those legal clarifications, questions about what is “real” or original and what is “fake” or artificial in terms of creativity will probably come down to personal taste and aesthetics. But expect to see lots of disputes in the field of arts and entertainment when it comes to annual awards and recognition for creativity and originality!

At times, I can see AI is simply a combination of mega databases, powerful search engines, predictive tools, programmable logic, smart decision trees, and pattern recognition on steroids, all aided by high-speed computer processing and widespread data distribution. At other times, it feels like we are all being made the subject matter or inputs of AI (it is happening “to” us, rather than working for us), and in return we get a mix of computer-generated outputs with a high dose of AI “dramatic licence”.

My over-arching conclusion at this point in the AI journey is that it resembles GMO crops – unless you live off grid and all your computers are air-gapped, then every device, network and database you interact with has been trained on, touched by or tainted with AI. It’s inevitable and unavoidable.

Next week: RWAs and the next phase of tokenisation


AI & Music

In a recent episode of a TV detective show, an AI tech dude tries to outsmart an old school musicologist by re-creating the missing part of a vintage blues recording. The professor is asked to identify which is the “real” track, compared to the AI versions. The blues expert guesses correctly within a few beats – much to the frustration of the coder.

“How did you figure it out so quickly?”

“Easy – it’s not just what the AI added, but more importantly what it left out.”

The failure of AI to fully replicate the original song (by omitting a recording error that the AI has “corrected”) is another example showing how AI lacks the human touch, does not yet have intuition, and struggles to exercise informed judgement. Choices may often be a matter of taste, but innate human creativity cannot yet be replicated.

Soon, though, AI tools will displace a lot of work currently done by composers, lyricists, musicians, producers, arrangers and recording engineers. Already, digital audio workstation (DAW) software easily enables anyone with a computer or mobile device to create, record, sample and mix their own music, without needing to read a note of music and without having to strum a chord. Not only that, the software can emulate the acoustic properties of site-specific locations, and correct out-of-tune and out-of-time recordings. So anyone can pretend they are recording at Abbey Road.

I recently blogged about how AI is presenting fresh challenges (as well as opportunities) for the music industry. Expect to see “new” recordings released by (or attributed to) dead pop stars, especially if their back catalogue is out of copyright. This is about more than exhuming pre-existing recordings, and enhancing them with today’s technology; this is deriving new content from a set of algorithms, trained on vast back catalogues, directed by specific prompts (“bass line in the style of John Entwistle”), and maybe given some core principles of musical composition.

And it’s the AI training that has prompted the major record companies to sue two AI software companies, a state of affairs which industry commentator Rob Abelow says was inevitable, because:

“It’s been clear that Suno & Udio have trained on copyrighted material with no plan to license or compensate”.

But on the other hand, streaming and automated music are not new. Sound designer and artist Tero Parviainen recently quoted Curtis Roads’ “The Computer Music Tutorial” (2023):

“A new industry has emerged around artificial intelligence (AI) services for creating generic popular music, including Flow Machines, IBM Watson Beat, Google Magenta’s NSynth Super, OpenAI’s Jukebox, Jukedeck, Melodrive, Spotify’s Creator Technology Research Lab, and Amper Music. This is the latest incarnation of a trend that started in the 1920s called Muzak, to provide licensed background music in elevators, business and dental offices, hotels, shopping malls, supermarkets, and restaurants.”

And even before the arrival of Muzak in the 1920s, the world’s first streaming service was launched in the late 1890s, using the world’s first synthesizer – the Teleharmonium. (Thanks to Mark Brend’s “The Sound of Tomorrow”, I learned that Mark Twain was the first subscriber.)

For music purists and snobs (among whom I would probably count myself), all this talk about the impact of AI on music raises questions of aesthetics as well as ethics. But I’m reminded of some comments made by Pink Floyd about 50 years ago, when asked about their use of synthesizers, during the making of “Live at Pompeii”. In short, they argue that such machines still need human input, and as long as the musicians are controlling the equipment (and not the other way around), then what’s the problem? It’s not like they are cheating, disguising what they are doing, or compensating for a lack of ability – and the technology doesn’t make them better musicians, it just allows them to do different things:

“It’s like saying, ‘Give a man a Les Paul guitar, and he becomes Eric Clapton… It’s not true.'”

(Well, not yet, but I’m sure AI is working on it…)

Next week: Some final thoughts on AI