Music streaming is so passé…

Streaming services have changed the way we listen to music, and not just in the way the content is delivered (primarily via mobile devices), or the sheer number of songs available for our listening pleasure (whole catalogues at our fingertips).

These streaming platforms (which have been with us for more than 15 years) have also led to some more negative consequences: the deconstruction of albums into individual tracks (thereby undermining artists’ intention to present their work as a whole, rather than its component parts); shifting the relationship we have with our music collections from “ownership” to “renting”; paying paltry levels of streaming fees compared to royalties on physical sales and downloads; pushing suggested content via opaque algorithms and “recommender engines” rather than allowing listener self-discovery; squashing music into highly compressed audio formats, thus impairing the listening quality; and reducing album cover artwork and design into tiny thumbnail images that don’t do justice to the original. (If you can’t appreciate the significance and importance of album artwork, this forthcoming documentary may change your mind.)

Of course, streaming is not the only way to consume music – we still have vinyl, CDs and even cassettes in current production. (And let’s not forget radio!) Although optimistic numbers about the vinyl revival of recent years have to be put in the context of the streaming behemoths, there is no doubt that this antique format still has an important role to play, for new releases, the box-set and reissue industry, and the second-hand market.

For myself, I’ve largely given up on Spotify and Apple Music: with the former, I don’t think there is enough transparency on streaming fees (especially those paid to independent artists and for self-released recordings) or how more popular artists and their labels can pay to manipulate the algorithms, plus the “recommendations” are often out of kilter with my listening preferences; with the latter, geo-blocking often means music I am looking for is not available in Australia. (As I am writing, Spotify is playing a track which has been given the wrong title, proving that their curation and editorial quality is not perfect.)

Streaming can also be said to be responsible for a type of content narrowcasting – the more often a song is streamed (especially one that has been sponsored or heavily promoted by a record label) the more often it will appear in suggested playlists. Some recent analysis by Rob Abelow suggests that fewer than 10% of songs on the Spotify billion stream club were released before 2000. This may have something to do with listener demographics (e.g., digital natives), but it also suggests that songs only available as streams (i.e., no download or physical release), or songs heavily marketed by labels wanting to promote particular content to a specific audience, will come to dominate these platforms.

Further evidence of how streaming is skewed towards major artists is a recent post by Damon Krukowski, showing how independent musicians like him are being “encouraged” to be more like megastars such as Ed Sheeran. Never mind the quality of the music, just think about the “pre-saves” and “countdown pages” (tools which are not yet available to every artist on Spotify?).

I’ve been using both Bandcamp and Soundcloud for more than 10 years, to release my own music and to discover new content. I began with Soundcloud, but soon lost my enthusiasm because they kept changing their business model, and because they enabled more popular artists to dominate the platform with “premium” services and pay-to-play fees that favour artists and labels with bigger marketing budgets. Bandcamp, by contrast, appears to do a better job of maintaining a level playing field for artist access, and offers a more natural way for fans to connect with artists they already know and to discover new music they may be interested in.

But all of this simply means that streaming has possibly peaked, at least as an emerging format. The industry is facing a number of challenges. Quite apart from ongoing disputes about royalty payments and album integrity, streaming is going to be disrupted by new technologies and business models built on blockchain, cryptocurrencies and non-fungible tokens. Startups in this space promise to improve how artists are remunerated for their work, create better engagement between creators and their audiences, and provide more transparent content discovery and recommendations. Elsewhere, the European Union is considering ways to preserve cultural diversity, promote economic sustainability within the music industry, remove the harmful effects of payola, make better use of content metadata for things like copyright, creativity and attribution, and provide clear labelling on content that has been created using tools like AI.

Just for the record, I’m not a huge fan of content quotas (a possible outcome from the EU proposals), but I would prefer to see better ways to discover new music, via broadcast and online media, which are not dependent on regimented Top 40 playlists, the restrictive formats of ubiquitous TV talent shows, or record label marketing budgets. Australia’s Radio National used to have a great platform for new and alternative music, called Sound Quality, but that came off air nearly 10 years ago, with nothing to replace it. Elsewhere, I tune into BBC Radio 6 Music’s Freak Zone – not all of it is new music, but there is more variety in each two-hour programme than in a week’s listening on most other radio stations.

Next week: More Cold War Nostalgia


BYOB (Bring Your Own Brain)

My Twitter and LinkedIn feeds are full of posts about artificial intelligence, machine learning, large language models, robotics and automation – and how these technologies will impact our jobs and our employment prospects, often in very dystopian tones. It can be quite depressing to trawl through this material, to the point of being overwhelmed by the imminent prospect of human obsolescence.

No doubt, getting to grips with these tools will be important if we are to navigate the future of work, understand the relationship between labour, capital and technology, and maintain economic relevance in a world of changing employment models.

But we have been here before, many times (remember the Luddites?), and so far, the human condition means we learn to adapt in order to survive. These transitions will be painful, and there will be casualties along the way, but there is cause for optimism if we remember our post-industrial history.

First, among recent Twitter posts there was a timely reminder that automation does not need to equal despair in the face of displaced jobs.

Second, the technology at our disposal will inevitably make us more productive, as well as enabling us to reduce mundane or repetitive tasks and even freeing up more time for other (more creative) pursuits. The challenge will be in learning how to use these tools in efficient and effective ways, so that we don’t swap one type of routine for another.

Third, there is still a need to consider the human factor when it comes to the work environment, business structures and organisational behaviour – not least personal interaction, communication skills and stakeholder management. After all, you still need someone to switch on the machines, and tell them what to do!

Fourth, the evolution of “bring your own device” (and remote working) means that many of us have grown accustomed to having a degree of autonomy in the ways in which we organise our time and schedule our tasks – giving us the potential for more flexible working conditions. Plus, we have seen how many apps we use at home are interchangeable with the tools we use for work – and although the risk is that we are “always on”, equally, we can get smarter at using these same technologies to establish boundaries between our work/life environments.

Fifth, all the technology in the world is not going to absolve us of the need to think for ourselves. We still need to bring our own cognitive faculties and critical thinking to an increasingly automated, AI-intermediated and virtual world. If anything, we have to ramp up our cerebral powers so that we don’t become subservient to the tech, to make sure the tech works for us (and not the other way around).

Adopting a new approach means:

  • not taking the tech for granted
  • being prepared to challenge the tech’s assumptions (and not being complicit in its in-built biases)
  • questioning the motives and intentions of the tech developers, managers and owners (especially those of known or suspected bad actors)
  • validating all the newly-available data to gain new insights (and not repeat past mistakes)
  • evaluating the evidence based on actual events and outcomes
  • not falling prey to hyperbolic and cataclysmic conjectures

Finally, it is interesting to note the recent debates on regulating this new tech – curtailing malign forces, maintaining protections on personal privacy, increasing data security, and ensuring greater access for those currently excluded. This is all part of a conscious narrative (that human component!) to limit the extent to which AI will be allowed to run rampant, and to hold tech (in all its forms) more accountable for the consequences of its actions.

Next week: “The Digital Director”

Trust in Digital IDs

Or: “Whose identity is it anyway?”

Over the past few years, there have been a significant number of serious data breaches among banks, utilities, telcos, insurers and public bodies. As a result, hackers have been able to access the confidential data and financial records of millions of customers, leading to ransomware demands, wide dissemination of private information, identity theft, and multiple phishing attempts and similar scams.

What most of these hacks reveal is the vulnerability of centralised systems as well as the unnecessary storage of personal data – making these single points of failure a target for such exploits. Worse, the banks and others seem to think they “own” this personal data once they have obtained it, as evidenced by the way they (mis)manage it.

I fully understand the need for KYC/AML, and the requirement to verify customers under the 100 Points of Identification system. However, once I have been verified, why does each bank, telco and utility company need to keep copies or records of my personal data on their systems? Under a common 100 Points verification process, shouldn’t we have a more efficient and less vulnerable system? If I have been verified by one bank in Australia, why can’t I be automatically verified by every other bank in Australia (e.g., if I wanted to open an account with them), or indeed any other company using the same 100 Points system?

Which is where the concept of Self-Sovereign Identity comes into play. Under this approach, even if I initially need to submit evidence such as my driver’s license, passport or birth certificate under the 100 Points system, once I have been verified by the network I can “retrieve” my personal data (i.e., revoke the access permission), or specify how long each party on the network can hold my personal data, and for what specific purpose.

This way, each party on the network does not need to retain a copy of the original documents. Instead, my profile is captured as a digital ID that confirms who I am, and confirms that I have been verified by the network; it does not require me to keep disclosing my personal data to each party on the network. (There are providers of Digital ID solutions, but because they are centralised, and unilateral, we end up with multiple and inconsistent Digital ID systems, which are just as vulnerable to the risk of a single point of failure…)
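
To make this more concrete, here is a minimal sketch (in Python, using the cryptography library) of the attestation pattern behind Self-Sovereign Identity: one trusted party verifies me once and signs a claim about me, and every other party can check that claim without ever holding copies of my documents. The identifiers, claim names and flow below are illustrative assumptions on my part, not any particular SSI standard or the actual 100 Points process.

```python
# Minimal sketch of the "verify once, attest everywhere" idea behind
# Self-Sovereign Identity. All names and fields are illustrative only --
# this is not a real SSI/100 Points implementation.
# Requires the 'cryptography' package (pip install cryptography).
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# 1. A trusted verifier (e.g., the first bank) checks my documents once,
#    then issues a signed attestation -- NOT copies of the documents themselves.
issuer_key = Ed25519PrivateKey.generate()
issuer_pub = issuer_key.public_key()

credential = {
    "subject": "did:example:customer-123",   # an opaque identifier for me
    "claim": "100_points_identity_verified",
    "issued": "2024-01-01",
    "expires": "2025-01-01",                 # I can limit how long it is usable
}
payload = json.dumps(credential, sort_keys=True).encode()
signature = issuer_key.sign(payload)

# 2. I present (credential, signature) to any other bank or telco.
#    They only need the issuer's public key to check it -- no passport,
#    no driver's licence, nothing for them to store and later leak.
def relying_party_accepts(cred: dict, sig: bytes) -> bool:
    data = json.dumps(cred, sort_keys=True).encode()
    try:
        issuer_pub.verify(sig, data)          # raises if the credential was tampered with
        return cred["claim"] == "100_points_identity_verified"
    except InvalidSignature:
        return False

print(relying_party_accepts(credential, signature))  # True
```

In a real system the issuer’s public key would be anchored somewhere the relying parties already trust (a registry or a distributed ledger), and the credential could expire or be revoked by me at any time – but the point is the same: the verification travels, the underlying documents don’t.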

But of course, banks etc. insist that not only do they have to ask for 100 Points of ID each and every time I open an account, they are required to retain copies or digital versions of my personal data. Hence, we should not be surprised by the number of data hacks we keep experiencing.

The current approach to identity in banking, telcos and utilities is baffling. Just a few examples I can think of:

1. In trying to upgrade my current mobile phone plan with my existing provider, I had to re-submit personal information via a mobile app (and this is a telco that experienced a major hack last year, resulting in me having to apply for a new driver’s license). If I have already been verified, why the need to ask for my personal data again, and via a mobile app?

2. I’ve lived at my current address for more than 5 years. I still receive bank statements intended for the previous occupant. I have tried on numerous occasions to inform the bank that this person is no longer living here. I’ve used the standard “Return to Sender” method, and tried to contact the bank directly, but because I am not the named account addressee or authorised representative, they won’t talk to me. Fair enough. But the addressee is actually a self-managed superannuation fund. Given the fallout from the Banking Royal Commission, and the additional layers of verification, supervision and audit that apply to such funds, I’m surprised that this issue has not been picked up by the bank concerned. It’s very easy to look up the current registered address of an SMSF via the APRA website, if only the bank could be bothered to investigate why the statements keep getting returned.

3. I have been trying to remove the name of a former director as a signatory to a company bank account. The bank kept asking for various forms and “proof” that this signatory was no longer a director and no longer authorised to access the account. Even though I have done this (and had to pay for an accountant to sign a letter confirming the director had resigned their position), if the bank had bothered to look up the ASIC company register, they would have seen that this person was no longer a company officer. Meanwhile, the bank statements keep arriving addressed to the ex-director. Apparently, the bank’s own “systems” don’t talk to one another (a common refrain when trying to navigate legacy corporate behemoths).

In each of the above, the use of a Digital ID system would streamline the process for updating customer records, and reduce the risk of data vulnerabilities. But that requires effort on the part of the entities concerned – clearly, the current fines for data breaches and for misconduct in financial services are not enough.

Next week: AI vs IP  


Digital Perfectionism?

In stark contrast to my last blog on AI and digital humans, I’ve just been reading Damon Krukowski‘s book, “The New Analog – Listening and Reconnecting in a Digital World”, published in 2017. It’s an essential text for anyone interested in the impact of sound compression, noise filtering, loudness and streaming on the music industry (and much more besides).

There are two main theses the author explores:

1. The paradoxical corollary to Moore’s Law on the rate of increase in computing power is Murphy’s Moore’s Law: that in striving for improved performance and perfectionism in all things digital, we equally risk amplifying the limitations inherent in analog technology. In short, the more something improves, the more it must also get worse. (See also my previous blogs on the problem of digital decay, and the beauty of decay music.)

2. In the realm of digital music and other platforms (especially social media), stripping out the noise (to leave only the signal) results in an impoverished listening, cultural and social experience: flatter sound, reduced dynamics, narrower tonal variation, limited nuance, an absence of context. In the case of streaming music, we lose the physical connection with the original artwork, accompanying sleeve notes, creative credits and even the original year of publication.

Thinking about #1 above, imagine this principle applied to #AI: would the pursuit of “digital perfectionism” mean we lose a large part of what makes analogue homo sapiens more “human”? Would we end up compressing/removing “noise” such as doubt, uncertainty, curiosity, irony, idiosyncrasies, cognitive diversity, quirkiness, humour etc.?

As for #2, like the author, I’m not a total Luddite when it comes to digital music, but I totally understand his frustration (philosophical, phonic and financial) when discussing the way CDs exploit “loudness” (in the technical sense), how .mp3 files compress more data into less space (resulting in a deterioration in overall quality), and the way streaming platforms have eroded artists’ traditional commercial return on their creativity.
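
For anyone unfamiliar with “loudness” in this technical sense, the rough numpy sketch below (my own toy illustration, not anything from the book, and not how a real mastering chain is built) shows how compressing and limiting a track pushes up its average level while shrinking its dynamic range – the quiet/loud contrast that carries much of the nuance.

```python
# Toy illustration of the "loudness war": the same musical material,
# peak-normalised vs. heavily compressed and limited.
import numpy as np

t = np.linspace(0, 4, 4 * 44100)
# A toy "song": quiet verse for 2 seconds, louder chorus for 2 seconds
signal = np.sin(2 * np.pi * 220 * t) * np.where(t < 2, 0.2, 0.8)

def dynamic_range_db(x):
    # Crest factor: peak level relative to average (RMS) level, in dB
    rms = np.sqrt(np.mean(x ** 2))
    return 20 * np.log10(np.max(np.abs(x)) / rms)

def loudness_maximise(x, threshold=0.3, ratio=8.0):
    # Crude compressor: anything above the threshold is squashed by `ratio`,
    # then the whole track is pushed back up to full scale (limiting).
    mag = np.abs(x)
    squashed = np.where(mag > threshold,
                        np.sign(x) * (threshold + (mag - threshold) / ratio),
                        x)
    return squashed / np.max(np.abs(squashed))

print(f"original dynamic range: {dynamic_range_db(signal):.1f} dB")
print(f"after loudness maximising: {dynamic_range_db(loudness_maximise(signal)):.1f} dB")
# The second number is smaller: the quiet/loud contrast is gone,
# even though the track now sounds "louder" on average.
```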

The book also discusses the role of social media platforms in extracting value from the content that users contribute, reducing it to homogenised data lakes, selling it to the highest bidder, and compressing all our personal observations, relationships and original ideas (the things that make us nuanced human beings) into a sterilised drip-feed of “curated” content.

In the narrative on music production, and how “loudness” took hold in the mid-1990s, Krukowski takes specific aim at the dreaded sub-woofer. These speakers now pervade every concert, home entertainment system, desktop computer and car stereo. They even bring a distorted physical presence into our listening experience:

“Nosebleeds at festivals, trance states at dance clubs, intimidation by car audio…. When everything is louder than everything else, sounds lose context and thus meaning – even the meaning of loud.”

The main issue I have with digital music is that we as listeners have very little control over how we hear it – apart from adjusting the volume. So again, any nuance or variation has been ironed out, right to the point of consumption – we can’t even adjust the stereo balance. I recall that my boom box in the 1980s had separate volume controls for each speaker, and a built-in graphic equalizer. To paraphrase Joy Division, “We’ve Lost Control”.

Next week: I CAN live without my radio…