More on Music Streaming

A coda to my recent post on music streaming:

Despite the growth in Spotify’s subscribers (and an apparent shift from free to paid-for services), it seems that the company still managed to make a loss. Over-paying for high-profile projects can’t have helped the balance sheet either….

Why is it so hard for Spotify to make money? In part, it’s because streaming has decimated the price point for content. This price erosion began with downloads, and has accelerated with streaming – premium subscribers don’t stop to think about how little they pay each time they stream a song; they have simply got used to paying comparatively little for their music, wherever and whenever they want it. They don’t even have to leave their screen or device to consume content – whereas, in the past, fixed weekly budgets and the need to visit a bricks-and-mortar shop meant record buyers were probably more discerning about their choices.

Paradoxically, the reduced cost of music production (thanks to cheaper recording and distribution technology) means there is more music being released than ever before. But there is a built-in expectation that the consumer price must also come down – and of course, with so much available content, there has to be a law of diminishing returns – both in terms of quality, and the amount of new content subscribers can listen to. (It would be interesting to know how many different songs or artists the average Spotify subscriber streams.)

While some artists continue to be financially successful in the streaming age (albeit backed up by concert revenue and merchandising sales), there is an awfully long tail of content that is rarely or never heard. Even Spotify has to manage and shift that inventory somehow, which means marketing budgets and customer acquisition costs have to grow accordingly (even though some of the promotion expenses can be offloaded on to artists and their labels).

Not only is streaming eroding content price points – in some cases it is also at risk of eroding copyright. Recently it was disclosed that Twitter (now X) is being sued by music companies for breach of copyright.

You may recall that just over 10 years ago, a service called Twitter Music was launched with much anticipation (if not much fanfare…). Interestingly, part of the idea was that Twitter Music users could “integrate” their Spotify, iTunes or Rdio (who…?) accounts. It was also seen as a way for artists to engage more directly with their audience, and enable fans to discover new music. Less than a year later, Twitter pulled the plug.

One conclusion from all of this is that often, even successful tech companies don’t really understand content. The classic case study in this area is probably Microsoft and Encarta, but you could include Kodak and KODAKOne – by contrast, I would cite News Corp and MySpace (successful content business fails to understand tech). I suppose Netflix (which started as a mail-order DVD rental business) is an example of a tech business (it gained patents for its early subscription tech) that has managed to get content creation right – and its recent drive to shut down password sharing looks like it is paying dividends.

Of all its contemporaries, Apple is probably the most vertically integrated tech and content company – it manufactures the platform devices, manages streaming services, and even produces film and TV content (but not yet music?). In this context, I would say Google is a close second (devices, streaming, dominates online advertising, but does not produce original content), with Amazon some way behind (although it has had a patchy experience with devices, it has a reasonable handle on streaming and content creation).

All of which makes it somewhat surprising that Spotify is running at a loss?

Next week: Digital Identity – Wallets are the key?

 

 

Music streaming is so passé…

Streaming services have changed the way we listen to music, and not just in the way the content is delivered (primarily via mobile devices), or in the sheer number of songs available for our listening pleasure (whole catalogues at our fingertips).

These streaming platforms (which have been with us for more than 15 years) have also led to some more negative consequences: the deconstruction of albums into individual tracks (thereby undermining artists’ intention to present their work as a whole, rather than as its component parts); shifting our relationship with our music collections from “ownership” to “renting”; paying paltry streaming fees compared to the royalties on physical sales and downloads; pushing suggested content via opaque algorithms and “recommender engines” rather than allowing listeners to discover music for themselves; squashing music into highly compressed audio formats, thus impairing the listening quality; and reducing album cover artwork and design to tiny thumbnail images that don’t do justice to the original. (If you can’t appreciate the significance and importance of album artwork, this forthcoming documentary may change your mind.)

Of course, streaming is not the only way to consume music – we still have vinyl, CDs and even cassettes in current production. (And let’s not forget radio!) Although optimistic numbers about the vinyl revival of recent years have to be put in the context of the streaming behemoths, there is no doubt that this antique format still has an important role to play, for new releases, the box-set and reissue industry, and the second-hand market.

For myself, I’ve largely given up on Spotify and Apple Music: with the former, I don’t think there is enough transparency on streaming fees (especially those paid to independent artists and for self-released recordings) or how more popular artists and their labels can pay to manipulate the algorithms, plus the “recommendations” are often out of kilter with my listening preferences; with the latter, geo-blocking often means music I am looking for is not available in Australia. (As I am writing, Spotify is playing a track which has been given the wrong title, proving that their curation and editorial quality is not perfect.)

Streaming can also be said to be responsible for a type of content narrowcasting – the more often a song is streamed (especially one that has been sponsored or heavily promoted by a record label) the more often it will appear in suggested playlists. Some recent analysis by Rob Abelow suggests that fewer than 10% of songs on the Spotify billion stream club were released before 2000. This may have something to do with listener demographics (e.g., digital natives), but it also suggests that songs only available as streams (i.e., no download or physical release), or songs heavily marketed by labels wanting to promote particular content to a specific audience, will come to dominate these platforms.

Further evidence of how streaming is skewed towards major artists comes from a recent post by Damon Krukowski, showing how independent musicians like him are being “encouraged” to be more like megastars such as Ed Sheeran. Never mind the quality of the music, just think about the “pre-saves” and “countdown pages” (tools which are not yet available to every artist on Spotify?).

I’ve been using both Bandcamp and Soundcloud for more than 10 years, to release my own music and to discover new content. I began with Soundcloud, but soon lost my enthusiasm because it kept changing its business model, and because its “premium” services and pay-to-play fees allowed more popular artists and labels with bigger marketing budgets to dominate the platform. Bandcamp, by contrast, appears to be doing a better job of maintaining a level playing field for artist access, and offers a more natural way for fans to connect with artists they already know, and to discover new music they may be interested in.

But all of this suggests that streaming may have peaked, at least as an emerging format. The industry is facing a number of challenges. Quite apart from the ongoing disputes about royalty payments and album integrity, streaming is going to be disrupted by new technologies and business models built on blockchain, cryptocurrencies and non-fungible tokens. Startups in this space promise to improve how artists are remunerated for their work, create better engagement between creators and their audiences, and provide more transparent content discovery and recommendations. Elsewhere, the European Union is considering ways to preserve cultural diversity, promote economic sustainability within the music industry, remove the harmful effects of payola, make better use of content metadata for things like copyright, creativity and attribution, and provide clear labeling of content that has been created using tools like AI.

Just for the record, I’m not a huge fan of content quotas (a possible outcome of the EU proposals) – I would rather see better ways to discover new music, via broadcast and online media, that are not dependent on regimented Top 40 playlists, the restrictive formats of ubiquitous TV talent shows, or record label marketing budgets. Australia’s Radio National used to have a great platform for new and alternative music, called Sound Quality, but that came off air nearly 10 years ago, with nothing to replace it. Elsewhere, I tune into BBC Radio 6 Music’s Freak Zone – not all of it is new music, but there is more variety in each 2-hour programme than in a week’s listening on most other radio stations.

Next week: More Cold War Nostalgia

 

Who fact-checks the fact-checkers?

The recent stoush between POTUS and Twitter over fact-checking and his alleged use of violent invective has rekindled the debate on whether, and how, social media should be regulated. It’s a potential quagmire (especially the issue of free speech), but it also comes at a time when, here in Australia, social media is fighting twin legal battles – on defamation, and on fees for news content.

First, the issue of fact-checking on social media. Public commentary was divided – some argued that fact-checking is a form of censorship, or posed the question “Quis custodiet ipsos custodes?” (who fact-checks the fact-checkers?), while others suggested that fact-checking in this context was a form of public service, ensuring that political debate is well-informed, obvious errors are corrected, and blatant lies (untruths, falsehoods, fibs, deceptions, mis-statements, alternative facts….) are called out for what they are. Notably, in this case, the “fact” was not edited, but flagged with a warning to the audience. (In case anyone hadn’t noticed (or remembered), earlier this year Facebook announced that it would engage Reuters to provide certain fact-check services.) Given the current level of discourse in the political arena, traditional and social media, and the court of public opinion, I’m often reminded of an article I read many years ago in the China Daily, which said something to the effect that “it is important to separate the truth from the facts”.

Second, the NSW Court of Appeal recently ruled that media companies can be held responsible for defamatory comments posted under stories they publish on social media. While this specific ruling did not render Facebook liable for the defamatory posts (although like other content platforms, social media is subject to general defamation laws), it was clear that the media organisations are deemed to be “publishing” content on their social media pages. And even though they have no way of controlling or moderating the Facebook comments before they are made public, for these purposes, their Facebook pages are no different to their own websites.

Third, the Australian Government is going to force companies like Facebook and Google to pay for news content via revenue share from ad sales. The Federal Treasurer was quoted as saying, “It is only fair that the search engines and social media giants pay for the original news content that they use to drive traffic to their sites.” If Australia succeeds, this may set an uncomfortable precedent in other jurisdictions.

For me, much of the above debate goes to the heart of how to treat social media platforms – are they like traditional newspapers and broadcast media? are they like non-fiction publishers? are they communications services (like telcos)? are they documents of record? The topic is not new – remember when Mark Zuckerberg declared that he wanted Facebook to be the “world’s newspaper”? Be careful what you wish for…

Next week: Fact v Fiction in Public Discourse

30 years in publishing

It’s 30 years since I began my career in publishing. I have worked for two major global brands, a number of niche publishers, and now I work for a start-up. For all of this time, I have worked in non-fiction – mostly professional (law, tax, accounting), business and financial subjects. I began as an editor in London, became a commissioning editor, launched a publishing business in Hong Kong, managed a portfolio of financial information services for the capital markets in Asia Pacific, and currently lead the global business development efforts for a market data start-up in blockchain, crypto and digital assets. Even when I started back in 1989, industry commentators were predicting the end of print. And despite the best efforts of the internet and social media to decimate the traditional business models, we are still producing and consuming an ever-growing volume of content.

The importance of editing and proofreading still applies to publishing today…. Image sourced from Wikimedia Commons.

The first company I worked for was Sweet & Maxwell, a 200-year-old UK law publisher. In 1989, it had recently been acquired by The Thomson Corporation (now Thomson Reuters), a global media and information brand, and majority owned by the Thomson family of Canada. When I began as a legal editor with Sweet & Maxwell in London, Thomson still had newspaper and broadcasting interests (the family continues to own the Toronto Globe & Mail), a directory business (a rival to the Yellow Pages), a travel business (comprising an airline, a travel agent and a tour operator), and a portfolio of publishing brands that ranged from the arts to the sciences, from finance to medicine, from defence titles to reference works.

Thanks to Thomson, not only did I get incredible experience from working in the publishing industry, I also got to start a new business in Hong Kong (which is still in existence). This role took me to China for the first time in 1995, including a couple of private lunches at The Great Hall of The People in Beijing. The Hong Kong business expanded to include operations in Singapore and Malaysia – during which we survived the handover and the Asian currency crisis. I also spent quite a bit of time for Thomson in the USA, working on international sales and distribution, before joining one of their Australian businesses for a year.

Given the subscription nature of law, tax and accounting publishing, many of the printed titles came in the form of multi-volume loose-leaf encyclopedias, which required constant (and laborious) updating throughout the subscription year. In fact, as editors we had to forecast and estimate the average number of pages required to be added or updated each year. If we exceeded the page allowance, the production team would not be happy. And if the number of updates each year did not match the budgeted number we had promised subscribers, the finance team would not be happy. So, we had a plethora of weekly, monthly, bi-monthly, quarterly, semi-annual and annual deadlines and schedules to manage – even today, I recall the immense relief we experienced when we got the CRC (camera ready copy) for the next release back from the typesetters, on time, and on budget…

This blog owes its title to something that senior Thomson executives liked to proclaim: “Content is King!” We were still in the era of media magnates, when newspapers (with their display and classified advertising) had a license to print money – the “rivers of gold” as some called it. But as the internet and online search came to determine how readers discovered and consumed information, the catch cry became “Content in Context!”, as publishers needed to make sure they had the right material, at the right time, in the right place, for the right audience (and at the right price….).

Of course, over the 12 years I was at Thomson, technology completely changed the way we worked. When I first started, editors still did a lot of manual mark-up on hard copy, while other specialists were responsible for technical editing, layout, design, indexing, proofreading and tabling (creating footnotes and cross-references, and compiling lists of legal and academic citations). Most of the products were still in printed form, but this was a period of rapid transition to digital content – from dial-up databases to CD-ROM, from online to web formats. Word processing came into its own, as authors started to submit their manuscripts on floppy disk, and compositors leveraged SGML (Standard Generalized Markup Language) for typesetting and for rendering print books as digital documents. Hard to believe now, but CD-ROM editions of traditional text books and reference titles had to be exact visual replicas of the printed versions, so that in court, the judges and the lawyers could (literally) be on the same page if one party or other did not have the digital edition. Thankfully, some of the constraints disappeared as more content went online – reference works had to be readable in any web browser, while HTML enabled faster search, cross-referencing and indexing thanks to text tagging, Boolean logic, key words and embedded links.

The second global firm I worked for was Standard & Poor’s, part of The McGraw-Hill Companies (now S&P Global). Similar to Thomson, when I started with McGraw-Hill, the McGraw family were major shareholders, and the group had extensive interests in broadcasting, magazines and education publishing, as well as financial services. But when I joined Standard & Poor’s in 2002, I was surprised that there were still print publications, and some in-house authors and editors continued to work with hard copy manuscripts and proofs (which they circulated to one another via their in/out trays and the internal mail system…). Thankfully, much of this time-consuming activity was streamlined in favour of more collaborative content development and management processes. And we migrated subscribers from print and CD-ROM to web and online (XML was then a key way of streaming financial data, especially for machine-to-machine transmission).

Working for Standard & Poor’s in a regional role, I was based in Melbourne but probably spent about 40% of my time overseas and interstate. My role involved product management and market development – but although I no longer edited content or reviewed proofs, I remained actively involved in product design, content development, user acceptance testing and client engagement. The latter was particularly interesting in Asia, especially China and Japan. Then the global financial crisis, and the role of credit rating agencies such as Standard & Poor’s, added an extra dimension to client discussions…

After a period as a freelance writer and editor, for the past few years I have been working for a startup news, research and market data provider, servicing the growing audience trading and investing in cryptocurrencies and digital assets. Most of the data is distributed via dedicated APIs, a website, desktop products and third party vendors. It may not sound like traditional publishing, but editorial values and production processes lie at the core of the business – quality digital content still needs a lot of work to capture, create and curate. And even though the internet gives the impression of reducing the price of online content to zero, there is still considerable value in standardizing, verifying and cataloguing all that data before it is served up to end users.
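For anyone wondering what “standardizing and verifying” data actually involves, here is a minimal, purely illustrative sketch in Python. The field names and validation rules are hypothetical – not a description of our actual pipeline – but they show the kind of cleaning that happens before a record is catalogued and served to end users.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical example: a raw record as it might arrive from an upstream source,
# and the standardized form that would be catalogued and served to end users.

@dataclass
class PriceRecord:
    symbol: str          # normalized ticker, e.g. "BTC-USD"
    price: float         # last traded price
    timestamp: datetime  # always stored in UTC

def standardize(raw: dict) -> PriceRecord:
    """Validate and normalize a single raw record."""
    symbol = raw["symbol"].strip().upper()
    price = float(raw["price"])
    if price <= 0:
        raise ValueError(f"Implausible price for {symbol}: {price}")
    timestamp = datetime.fromtimestamp(int(raw["ts"]), tz=timezone.utc)
    return PriceRecord(symbol=symbol, price=price, timestamp=timestamp)

# Messy input in, consistent record out.
print(standardize({"symbol": " btc-usd ", "price": "42000.5", "ts": "1700000000"}))
```

Multiply that by thousands of sources and millions of records, and it becomes clear why editorial and production disciplines still matter, even when the “content” is a stream of numbers.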

Next week: You said you wanted a revolution?