More on Music Streaming

A coda to my recent post on music streaming:

Despite the growth in Spotify’s subscribers (and an apparent shift from free to paid-for services), it seems that the company still managed to make a loss. Over-paying for high-profile projects can’t have helped the balance sheet either…

Why is it so hard for Spotify to make money? In part, it’s because streaming has decimated the price point for content. This price erosion began with downloads and has accelerated with streaming – premium subscribers don’t stop to think about how little they pay each time they stream a song; they have simply got used to paying comparatively little for their music, wherever and whenever they want it. They no longer even have to leave their screen or device to consume content – whereas, in the past, fixed weekly budgets and the need to visit a bricks-and-mortar shop probably made record buyers more discerning about their choices.
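To put some very rough numbers on that point: the figures in the sketch below (subscription price, listening volume, download price) are purely illustrative assumptions of my own, not Spotify’s actual numbers, but they show why the per-play cost barely registers with subscribers.

```python
# Illustrative arithmetic only - every figure below is an assumption, not real Spotify data.
MONTHLY_SUBSCRIPTION = 12.00   # assumed monthly premium price (dollars)
STREAMS_PER_MONTH = 1_000      # assumed listening volume for an active subscriber
DOWNLOAD_PRICE = 1.50          # assumed price of a single paid download

cost_per_stream = MONTHLY_SUBSCRIPTION / STREAMS_PER_MONTH
print(f"Effective cost per stream: ${cost_per_stream:.3f}")                    # ~$0.012
print(f"One download costs ~{DOWNLOAD_PRICE / cost_per_stream:.0f}x the price of a single stream")
```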

Paradoxically, the reduced cost of music production (thanks to cheaper recording and distribution technology) means there is more music being released than ever before. But there is a built-in expectation that the consumer price must also come down – and of course, with so much available content, diminishing returns inevitably set in, both in terms of quality and in the amount of new content subscribers can actually listen to. (It would be interesting to know how many different songs or artists the average Spotify subscriber streams.)

While some artists continue to be financially successful in the streaming age (albeit backed up by concert revenue and merchandising sales), there is an awfully long tail of content that is rarely or never heard. Even Spotify has to manage and shift that inventory somehow, which means marketing budgets and customer acquisition costs have to grow accordingly (even though some of the promotion expenses can be offloaded onto artists and their labels).

Not only is streaming eroding content price points; in some cases it is also at risk of eroding copyright. Recently, it was disclosed that Twitter (now X) is being sued by music companies for breach of copyright.

You may recall that just over 10 years ago, a service called Twitter Music was launched with much anticipation (if not much fanfare…). Interestingly, part of the idea was that Twitter Music users could “integrate” their Spotify, iTunes or Rdio (who…?) accounts. It was also seen as a way for artists to engage more directly with their audience, and enable fans to discover new music. Less than a year later, Twitter pulled the plug.

One conclusion from all of this is that, often, even successful tech companies don’t really understand content. The classic case study in this area is probably Microsoft and Encarta, but you could include Kodak and KODAKOne – by contrast, News Corp and MySpace is an example of a successful content business failing to understand tech. I suppose Netflix (which started as a mail-order DVD rental business, and gained patents for its early subscription tech) is an example of a tech business that has managed to get content creation right – and its recent drive to shut down password sharing looks like it is paying dividends.

Among its contemporaries, Apple is probably the most vertically integrated tech and content company – it manufactures the platform devices, manages streaming services, and even produces film and TV content (but not yet music?). In this context, I would say Google is a close second (devices, streaming, dominance in on-line advertising, but no original content production), with Amazon some way behind (although it has had a patchy experience with devices, it has a reasonable handle on streaming and content creation).

All of which makes it somewhat surprising that Spotify is running at a loss?

Next week: Digital Identity – Wallets are the key?


“The Digital Director”

Last year, the Australian Institute of Company Directors (AICD) ran a series of 10 webinars under the umbrella of “The Digital Director”. Despite the title, there was very little exploration of “digital” technology itself, but a great deal of discussion on how to manage IT within the traditional corporate structure – as between the board of directors, the management, and the workforce.

There was a great deal of debate on things like “digital mindset”, “digital adaptation and adoption”, and “digital innovation and evolution”. During one webinar, the audience were encouraged to avoid using the term “digital transformation” (instead, think “digital economy”) – yet 2 of the 10 sessions had “digital transformation” in the title.

Specific technical topics were mainly confined to AI, data privacy, data governance and cyber security. It was acknowledged that while corporate Australia has widely adopted SaaS solutions, it lacks depth in digital skills; and the percentage of the ASX market capitalisation attributable to IP assets shows we are “30 years behind the USA”. Blockchain technology also got a mention, but the two examples given are already obsolete (the ASX’s abandoned project to replace the CHESS system, and CBA’s indefinitely deferred roll-out of crypto assets on its mobile banking app).

Often, the discussion was more about change management, and dealing with the demands of “modern work” from a workforce whose expectations have changed greatly in recent years, thanks to the pandemic, remote working, and access to new technology. Yet, these are themes that have been with us ever since the first office productivity tools, the arrival of the internet, and the proliferation of mobile devices that blur the boundary between “work” and “personal”.

The series missed an opportunity to explore the impact of new technology on boards themselves, especially their decision-making processes. We have seen how the ICO (initial coin offering) phase of cryptocurrency markets in 2017-19 presented a wholly new dimension to the funding of start-up ventures; and how blockchain technology and smart contracts heralded a new form of corporate entity, the DAO (decentralised autonomous organisation).

Together, these innovations mean the formation and governance of companies will no longer rely on the traditional structure of shareholders, directors and executives – and as a consequence, board decision-making will also take a different form. Imagine being able to use AI tools to support strategic planning, proof-of-stake voting to decide board resolutions, or consensus mechanisms to conduct AGMs.
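As a thought experiment only, the sketch below shows what stake-weighted (proof-of-stake style) voting on a board resolution might look like in code. The names, thresholds and figures are entirely hypothetical – they illustrate the general idea of consensus-based decision-making, not any real DAO platform or governance framework.

```python
# Hypothetical sketch of stake-weighted voting on a board resolution.
# All names, thresholds and figures are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Vote:
    voter: str
    stake: float       # voting weight, e.g. tokens or shares held
    in_favour: bool

def resolution_passes(votes: list[Vote], total_issued_stake: float,
                      quorum: float = 0.5, approval: float = 0.66) -> bool:
    """Pass if enough stake participates (quorum) and a supermajority of the
    participating stake votes in favour (approval threshold)."""
    voted_stake = sum(v.stake for v in votes)
    if voted_stake < quorum * total_issued_stake:
        return False  # not enough stake took part in the vote
    stake_in_favour = sum(v.stake for v in votes if v.in_favour)
    return stake_in_favour >= approval * voted_stake

# Example: 85% of the stake votes, and ~71% of that stake is in favour -> resolution passes
votes = [Vote("alice", 40, True), Vote("bob", 25, False), Vote("carol", 20, True)]
print(resolution_passes(votes, total_issued_stake=100))   # True
```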

As of now, “Digital Directors” need to understand how these emerging technologies will disrupt the boardroom itself, as well as the very corporate structures and governance frameworks that have been in place for over 400 years.

Next week: Back in the USA


BYOB (Bring Your Own Brain)

My Twitter and LinkedIn feeds are full of posts about artificial intelligence, machine learning, large language models, robotics and automation – and how these technologies will impact our jobs and our employment prospects, often in very dystopian tones. It can be quite depressing to trawl through this material, to the point of being overwhelmed by the imminent prospect of human obsolescence.

No doubt, getting to grips with these tools will be important if we are to navigate the future of work, understand the relationship between labour, capital and technology, and maintain economic relevance in a world of changing employment models.

But we have been here before, many times (remember the Luddites?), and so far we have always learned to adapt in order to survive. These transitions will be painful, and there will be casualties along the way, but there is cause for optimism if we remember our post-industrial history.

First, among recent Twitter posts there was a timely reminder that automation does not need to equal despair in the face of displaced jobs.

Second, the technology at our disposal will inevitably make us more productive, as well as enabling us to reduce mundane or repetitive tasks, even freeing up more time for other (more creative) pursuits. The challenge will be in learning how to use these tools efficiently and effectively, so that we don’t swap one type of routine for another.

Third, there is still a need to consider the human factor when it comes to the work environment, business structures and organisational behaviour – not least personal interaction, communication skills and stakeholder management. After all, you still need someone to switch on the machines, and tell them what to do!

Fourth, the evolution of “bring your own device” (and remote working) means that many of us have grown accustomed to having a degree of autonomy in the ways in which we organise our time and schedule our tasks – giving us the potential for more flexible working conditions. Plus, we have seen how many apps we use at home are interchangeable with the tools we use for work – and although the risk is that we are “always on”, equally, we can get smarter at using these same technologies to establish boundaries between our work/life environments.

Fifth, all the technology in the world is not going to absolve us of the need to think for ourselves. We still need to bring our own cognitive faculties and critical thinking to an increasingly automated, AI-intermediated and virtual world. If anything, we have to ramp up our cerebral powers so that we don’t become subservient to the tech, to make sure the tech works for us (and not the other way around).

Adopting a new approach means:

  • not taking the tech for granted
  • being prepared to challenge the tech’s assumptions (and not being complicit in its in-built biases)
  • questioning the motives and intentions of the tech developers, managers and owners (especially those of known or suspected bad actors)
  • validating all the newly-available data to gain new insights (and not repeat past mistakes)
  • evaluating the evidence based on actual events and outcomes
  • and not falling prey to hyperbolic and cataclysmic conjectures

Finally, it is interesting to note the recent debates on regulating this new tech – curtailing malign forces, maintaining protections on personal privacy, increasing data security, and ensuring greater access for those currently excluded. This is all part of a conscious narrative (that human component!) to limit the extent to which AI will be allowed to run rampant, and to hold tech (in all its forms) more accountable for the consequences of its actions.

Next week: “The Digital Director”


AI vs IP

Can Artificial Intelligence software claim copyright in any work that was created using its algorithms?

The short answer is “no”, since only humans can establish copyright in original creative works. Copyright can be assigned to a company or trust, or a work can be released under various forms of Creative Commons licence, but there still needs to be a human author behind the copyright material. And while copyright may lapse over time, at that point the work simply becomes part of the public domain.

However, the extent to which a human author can claim copyright in a work that has been created with the help of AI is now being challenged. A recent case in the USA has determined that the author of a graphic novel, which included images created using Midjourney, cannot claim copyright in those images. While it was accepted that the author devised the text and other prompts that the software used as the generative inputs, the output images themselves could not be the subject of copyright protection – meaning they are either in the public domain, or they fall under some category of creative commons? This case also indicates that, in the USA at least, failing to declare the use of AI tools in a work when applying for copyright registration may result in a rejected application.

Does this decision mean that the people who write AI programmes could claim copyright in works created using their software? Probably not – that would imply Microsoft could establish copyright in every novel written using Word, especially where its grammar and spelling tools have shaped the text.

On the other hand, programmers and software developers who use copyright material to train their models may need to obtain relevant permission from the copyright holders (as would anyone who feeds copyright content into AI tools as prompts), unless they can claim exemptions under “fair dealing” or “fair use” provisions.

We’re still early in the lengthy process whereby copyright and other intellectual property laws are tested and re-calibrated in the wake of AI. Maybe the outcomes of future copyright cases will depend on whether you are Ed Sheeran or Robin Thicke…

Next week: Customer Experience vs Process Design