State of the Music Industry…

Depending on your perspective, the music industry is in fine health: 2023 was a record year for sales (physical, digital and streaming), and touring artists are generating more income from ticket sales and merchandising than the GDPs of many countries. Even vinyl records, CDs and cassettes are selling better than they have in years!

On the other hand, only a small number of musicians are making huge bucks from touring, while smaller venues are closing down, meaning fewer opportunities for artists to perform.

And despite the growth in streaming, relatively few musicians are minting it from these subscription-based services, which typically pay very little in royalties to the vast majority of artists. (In fact, some content can be zero-rated unless it achieves a minimum number of plays.)
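
To put some purely illustrative numbers on this, here is a minimal Python sketch of how a pro-rata royalty pool might be divided, assuming a hypothetical payout pool, an invented three-track catalogue, and a 1,000-play minimum threshold of the kind some platforms have recently introduced – none of these figures are real platform rates.

```python
# Illustrative pro-rata streaming royalty split (all figures hypothetical).
PAYOUT_POOL = 1_000_000.00   # assumed monthly royalty pool, in dollars
MIN_PLAYS = 1_000            # assumed minimum plays before a track earns anything

catalogue = {                # invented catalogue for the example
    "superstar_single": 150_000_000,
    "mid_tier_album_track": 2_500_000,
    "bedroom_producer_demo": 800,   # below the threshold, so zero-rated
}

# Only tracks above the threshold share the pool, in proportion to their plays.
eligible = {track: plays for track, plays in catalogue.items() if plays >= MIN_PLAYS}
total_eligible_plays = sum(eligible.values())

for track, plays in catalogue.items():
    payout = PAYOUT_POOL * plays / total_eligible_plays if track in eligible else 0.0
    print(f"{track}: {plays:,} plays -> ${payout:,.2f}")
```

On these made-up numbers, the demo earns nothing at all, while the superstar single takes roughly 98% of the pool – which is precisely the complaint most working musicians have.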

Aside from the impact of streaming services, there are two other related challenges that exercise the music industry: the growing use of Artificial Intelligence, and the need for musicians to be recognised and compensated more fairly for their work and their Intellectual Property.

With AI, a key issue is whether the software developers are being sufficiently transparent about the content sources used to train their models, and whether the authors and rights owners are being fairly recompensed for the use of their IP. Then there are questions of artistic “creativity”, authorial ownership, authenticity, fakes and passing-off when we are presented with AI-generated music. Generative music software has been around for some time, and anyone with a smartphone or laptop can access millions of tools and samples to compose, assemble and record their own music – and many people do just that, judging by the thousands of new songs that are uploaded every day. Now, with the likes of Suno, it’s possible to “create” a 2-minute song (complete with lyrics) from just a short text prompt. Rolling Stone magazine recently did just that, and the result was both astonishing and dispiriting.

I played around with Suno myself (using the free version), and the brief prompt I submitted returned these two tracks, called “Midnight Shadows”:

Version 1

Version 2

The output is OK, not terrible, but displays very little in the way of compositional depth, melodic development, or harmonic structure. Both tracks sound as if a set of ready-made loops and samples had simply been cobbled together in the same key and tempo, and left to run for 2 minutes. Suno also generated two quite different compositions with lyrics, voiced by a male and a female singer/bot respectively. The lyrics were nonsensical attempts to riff verbally on the text prompt. The vocals sounded both disembodied (synthetic, auto-tuned and one-dimensional) and exactly like the sort of vocal stylings favoured by so many contemporary pop singers, and featured on karaoke talent shows like The Voice and Idol. As for Suno’s attempt to remix the tracks at my further prompting, the less said the better.

While content attribution can be addressed through IP rights and commercial licensing, the issue of “likeness” is harder to enforce. Artists can usually protect their image (and merchandising) against passing off, but can they protect the tone and timbre of their voice? A new law in Tennessee attempts to do just that, by protecting a singer’s vocal likeness from unauthorised use. (I’m curious to know whether this protection will be extended to Jimmy Page’s guitar sound and playing style, or to an electronic musician’s computer processing and programming techniques.)

I follow a number of industry commentators who, very broadly speaking, represent the positive (Rob Abelow), negative (Damon Krukowski) and neutral (Shawn Reynaldo) stances on streaming, AI and musicians’ livelihoods. For every positive opportunity that new technology presents, there is an equal (and sometimes greater) threat or challenge that musicians face. I was particularly struck by Shawn Reynaldo’s recent article on Rolling Stone’s Suno piece, entitled “A Music Industry That Doesn’t Sell Music”. The dystopian vision he presents is of millions of consumers spending $10 a month to access AI music tools, so they can “create” and upload their content to streaming services, in the hope of covering their subscription fees… Sounds ghastly, if you ask me.
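
A quick back-of-the-envelope calculation shows why that vision is so bleak. Assuming a per-stream royalty of $0.003 – a commonly cited ballpark, not any platform’s published rate – the break-even arithmetic looks like this:

```python
# How many streams would a subscriber-creator need each month just to
# recoup a $10 AI-tool subscription? (Per-stream rate is an assumption.)
subscription_fee = 10.00
assumed_per_stream_royalty = 0.003

breakeven_streams = subscription_fee / assumed_per_stream_royalty
print(f"Streams needed per month to break even: {breakeven_streams:,.0f}")
# About 3,333 streams, every month, before the "creator" makes a single cent.
```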

Add to the mix the demise of music publications (for which AI and streaming are also to blame…), and it’s easy to see how the landscape for discovering, exploring and engaging with music has become highly concentrated via streaming platforms and their recommender engines (plus marketing budgets spent on behalf of major artists). In the 1970s and 1980s, I would hear about new music from the radio (John Peel), TV (OGWT, The Tube, Revolver, So It Goes, Something Else), the print weeklies (NME, Sounds, Melody Maker), as well as word of mouth from friends, and by going to see live music and turning up early enough to watch the support acts. Now, most of my music information comes from the few remaining print magazines such as Mojo and Uncut (which largely focus on legacy acts), The Wire (probably too esoteric for its own good), and Electronic Sound (mainly because that’s the genre that most interests me); plus Bandcamp, BBC Radio 6 Music’s “Freak Zone”, Twitter, and newsletters from artists, labels and retailers. The overall consequence of streaming and up/downloading is that there is too much music to listen to (but how much of it is worth the effort?), and too many invitations to “follow”, “like”, “subscribe” and “sign up” for direct content (but again, how much of it is worth the effort?). For better or worse, the music media at least provided an editorial filter to help address quality vs quantity (even if much of it ended up being quite tribal).

In the past, the music industry operated as a network of vertically integrated businesses: they sourced the musical talent, they managed the recording, manufacturing and distribution of the content (including the hardware on which to play it), and they ran publishing and licensing divisions. When done well, this meant careful curation, the exercise of quality control, and a willingness to invest in nurturing new artists for several albums and for the duration of their career. But at times, record companies have self-sabotaged: by engaging in format wars (e.g., over CD, DCC and MiniDisc standards), by denying the existence of online and streaming platforms (until Apple and Spotify came along), and by becoming so bloated that, by the mid-1980s, the major labels had to merge and consolidate to survive – largely because they had all but abandoned the sustainable development of new talent. They also ignored their lucrative back catalogues, until specialist and independent labels and curators showed them how to do it properly. Now, they risk overloading the reissue market, because they lack proper curation and quality control.

The music industry really only does three things:

1) A&R (sourcing and developing new talent)

2) Marketing (promotion, media and public relations)

3) Distribution & Licensing (commercialisation).

Now, #1 and #2 have largely been outsourced to social media platforms (and inevitably, to AI and recommender algorithms), and #3 is going to be outsourced to web3 (micro-payments for streaming subscriptions, distribution of NFTs, and licensing via smart contracts). Whether we like it or not, and taking their lead from Apple and Spotify, the music businesses of the future will increasingly resemble tech companies. The problem is, tech rarely understands content from the perspective of aesthetics – so expect to hear increasingly bland AI-generated music from avatars and bots that only exist in the metaverse.
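
For what it’s worth, “licensing via smart contracts” usually boils down to something like the sketch below: a hard-coded split of each per-stream micro-payment between rights holders. This is plain Python that only mimics the logic an on-chain contract might encode, and the parties and percentages are invented for illustration.

```python
# Sketch of the revenue split a licensing smart contract might encode.
# All parties and shares are invented for illustration.
SPLITS = {
    "songwriter": 0.15,
    "performer": 0.25,
    "label": 0.50,
    "platform": 0.10,
}
assert abs(sum(SPLITS.values()) - 1.0) < 1e-9  # shares must total 100%

def settle_stream(micro_payment: float) -> dict[str, float]:
    """Divide a single per-stream micro-payment between the rights holders."""
    return {party: round(micro_payment * share, 6) for party, share in SPLITS.items()}

# One stream at an assumed $0.003 rate:
print(settle_stream(0.003))
```

The point, of course, is that whoever writes the contract decides the split – the technology doesn’t change the economics by itself.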

Meanwhile, I go to as many live gigs as I can justify, and brace my wallet for the next edition of Record Store Day later this month…

Next week: Reclaim The Night

BYOB (Bring Your Own Brain)

My Twitter and LinkedIn feeds are full of posts about artificial intelligence, machine learning, large language models, robotics and automation – and how these technologies will impact our jobs and our employment prospects, often in very dystopian tones. It can be quite depressing to trawl through this material, to the point of being overwhelmed by the imminent prospect of human obsolescence.

No doubt, getting to grips with these tools will be important if we are to navigate the future of work, understand the relationship between labour, capital and technology, and maintain economic relevance in a world of changing employment models.

But we have been here before, many times (remember the Luddites?), and so far, the human condition means we learn to adapt in order to survive. These transitions will be painful, and there will be casualties along the way, but there is cause for optimism if we remember our post-industrial history.

First, among recent Twitter posts there was a timely reminder that automation does not need to equal despair in the face of displaced jobs.

Second, the technology at our disposal will inevitably make us more productive, as well as enabling us to reduce mundane or repetitive tasks and even freeing up more time for other (more creative) pursuits. The challenge will be in learning how to use these tools in efficient and effective ways, so that we don’t swap one type of routine for another.

Third, there is still a need to consider the human factor when it comes to the work environment, business structures and organisational behaviour – not least personal interaction, communication skills and stakeholder management. After all, you still need someone to switch on the machines, and tell them what to do!

Fourth, the evolution of “bring your own device” (and remote working) means that many of us have grown accustomed to having a degree of autonomy in the ways in which we organise our time and schedule our tasks – giving us the potential for more flexible working conditions. Plus, we have seen how many apps we use at home are interchangeable with the tools we use for work – and although the risk is that we are “always on”, equally, we can get smarter at using these same technologies to establish boundaries between our work/life environments.

Fifth, all the technology in the world is not going to absolve us of the need to think for ourselves. We still need to bring our own cognitive faculties and critical thinking to an increasingly automated, AI-intermediated and virtual world. If anything, we have to ramp up our cerebral powers so that we don’t become subservient to the tech, to make sure the tech works for us (and not the other way around).

Adopting a new approach means:

  • not taking the tech for granted
  • being prepared to challenge the tech’s assumptions (and not being complicit in its in-built biases)
  • questioning the motives and intentions of the tech developers, managers and owners (especially those of known or suspected bad actors)
  • validating all the newly available data to gain new insights (and not repeat past mistakes)
  • evaluating the evidence based on actual events and outcomes
  • and not falling prey to hyperbolic and cataclysmic conjectures

Finally, it is interesting to note the recent debates on regulating this new tech – curtailing malign forces, maintaining protections on personal privacy, increasing data security, and ensuring greater access for those currently excluded. This is all part of a conscious narrative (that human component!) to limit the extent to which AI will be allowed to run rampant, and to hold tech (in all its forms) more accountable for the consequences of its actions.

Next week: “The Digital Director”

AI vs IP

Can Artificial Intelligence software claim copyright in any work that was created using its algorithms?

The short answer is “no”, since only humans can establish copyright in original creative works. Copyright can be assigned to a company or trust, or a work can be licensed under various forms of Creative Commons, but there still needs to be a human author behind the copyright material. Copyright may also lapse over time, at which point the work becomes part of the public domain.

However, the extent to which a human author can claim copyright in a work that has been created with the help of AI is now being challenged. A recent case in the USA has determined that the author of a graphic novel, which included images created using Midjourney, cannot claim copyright in those images. While it was accepted that the author devised the text and other prompts that the software used as the generative inputs, the output images themselves could not be the subject of copyright protection – meaning they are either in the public domain, or perhaps fall under some form of Creative Commons licence? This case also indicates that, in the USA at least, failing to declare the use of AI tools in a work when applying for copyright registration may result in a rejected application.

Does this decision mean that the people who write AI programs could claim copyright in works created using their software? Probably not – as this would imply that Microsoft could establish copyright in every novel written using Word, especially if its grammar and spelling tools were used.

On the other hand, programmers and software developers who use copyright material to train their models may need to obtain the relevant permission from the copyright holders (as would anyone who feeds copyright content into AI tools as prompts), unless they can claim exemptions under “fair dealing” or “fair use” provisions.

We’re still early in the lengthy process whereby copyright and other intellectual property laws are tested and re-calibrated in the wake of AI. Maybe the outcomes of future copyright cases will depend on whether you are Ed Sheeran or Robin Thicke….

Next week: Customer Experience vs Process Design

The Limits of Technology

As part of my home entertainment during lock-down, I have been enjoying a series of web TV programmes called This Is Imminent, hosted by Simon Waller, whose broad theme asks “how are we learning to live with new technology?” – in short, the good, the bad and the ugly of AI, robotics, computers, productivity tools and so on.

Niska robots are designed to serve ice cream… (image sourced from Weekend Notes)

Despite the challenges of Zoom overload, choked internet capacity, and constant screen-time, the lock-down has shown how reliant we are upon tech for communications, e-commerce, streaming services and working from home. Without these technologies, many of us would not have been able to cope with the restrictions imposed by the pandemic.

The value of Simon’s interactive webinars is twofold: as the audience, we get to hear from experts in their respective fields and gain exposure to new ideas; and we have the opportunity to explore, in a totally non-judgmental way, how technology impacts our own lives and experience. What’s particularly interesting is the non-binary nature of the discussion. It’s not “this tech good, that tech bad”, nor is it about taking absolute positions – the conversation thrives in the margins and in the grey areas, where we are uncertain, unsure, or just undecided.

In parallel with these programmes, I have been reading a number of novels that discuss different aspects of AI. These books seem to be both enamoured with, and in awe of, the potential of AI – William Gibson’s “Agency”, Ian McEwan’s “Machines Like Me”, and Jeanette Winterson’s “Frankissstein” – although they take quite different approaches to the pros and cons of the subject and the technology itself. (When added to my recent reading list of Jonathan Coe’s “Middle England” and John Lanchester’s “The Wall”, you can see what fun and games I’m having during lock-down….)

What this viewing and reading suggests to me is that we quickly run into the limitations of any new technology. Either it never delivers what it promises, or we become bored with it. We over-invest and place too much hope in it, then take it for granted (or worse, come to resent it). What the above novelists identify is our inability to trust ourselves when confronted with the opportunity for human advancement – largely because the same leaps in technology also induce existential angst or challenge our very existence, not least because they are highly disruptive as well as innovative.

On the other hand, despite a general shift towards open source protocols and platforms, we still see age-old format wars whenever any new tech comes along. As a result, most apps lack interoperability, tying us into rigid and vertically integrated ecosystems. The plethora of apps launched for mobile devices can mean premature obsolescence (built-in or otherwise), as developers can’t be bothered to maintain and upgrade them (or the app stores focus on the more popular products, and gradually weed out anything that doesn’t fit their distribution model or operating system). Worse, newer apps are not retrofitted to run on older platforms, while older software programs and content suffer digital decay and degradation. (Developers will also tell you about tech debt – the eventual higher costs of upgrading products that were built using “quick and cheap” short-term solutions, rather than taking a longer-term perspective.)

Consequently, new technology tends to over-engineer a solution, or create niche, hard-coded products (robots serving ice cream?). In the former, it can make existing tasks even harder; in the latter, it can create tech dead ends and generate waste. Rather than aiming for giant leaps forward within narrow applications, perhaps we need more modular and accretive solutions that are adaptable, interchangeable, easier to maintain, and cheaper to upgrade.

Next week: Distractions during Lock-down