Digital Perfectionism?

In stark contrast to my last blog on AI and digital humans, I’ve just been reading Damon Krukowski’s book, “The New Analog: Listening and Reconnecting in a Digital World”, published in 2017. It’s an essential text for anyone interested in the impact of sound compression, noise filtering, loudness and streaming on the music industry (and much more besides).

There are two main theses the author explores:

1. The paradoxical corollary to Moore’s Law on the rate of increase in computing power is Murphy’s Moore’s Law: in striving for improved performance and perfectionism in all things digital, we equally risk amplifying the limitations inherent in analog technology. In short, the more something improves, the more it must also get worse. (See also my previous blogs on the problem of digital decay, and the beauty of decay music.)

2. In the realm of digital music and other platforms (especially social media), stripping out the noise (to leave only the signal) results in an impoverished listening, cultural and social experience: flatter sound, reduced dynamics, narrower tonal variation, limited nuance, an absence of context. In the case of streaming music, we also lose the physical connection with the original artwork, accompanying sleeve notes, creative credits and even the original year of publication.

Thinking about #1 above, imagine this principle applied to #AI: would the pursuit of “digital perfectionism” mean we lose a large part of what makes analogue homo sapiens more “human”? Would we end up compressing/removing “noise” such as doubt, uncertainty, curiosity, irony, idiosyncrasies, cognitive diversity, quirkiness, humour etc.?

As for #2, like the author, I’m not a total Luddite when it comes to digital music, but I fully understand his frustration (philosophical, phonic and financial) when discussing the way CDs exploit “loudness” (in the technical sense), how .mp3 files compress more data into less space (at the cost of overall audio quality), and the way streaming platforms have eroded artists’ traditional commercial return on their creativity.
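To make the “loudness” point concrete, here is a toy illustration of my own (not an example from the book): a crude hard limiter in Python. Boost the gain, clip the peaks, and the difference between quiet and loud passages collapses – which is exactly the “flatter sound, reduced dynamics” complaint.

```python
import numpy as np

def loudness_maximise(samples: np.ndarray, gain: float = 4.0) -> np.ndarray:
    """Crude 'loudness war' treatment: boost the gain, then hard-limit the peaks.

    samples: audio in the range [-1.0, 1.0].
    """
    boosted = samples * gain
    # Hard limiting squashes every peak to the ceiling, so quiet and loud
    # passages end up at nearly the same level.
    return np.clip(boosted, -1.0, 1.0)

def dynamic_range_db(samples: np.ndarray) -> float:
    """Very rough dynamic range estimate: loudest vs quietest RMS window."""
    windows = np.array_split(samples, 100)
    rms = np.array([np.sqrt(np.mean(w ** 2)) for w in windows])
    rms = rms[rms > 1e-6]  # ignore near-silent windows
    return 20 * np.log10(rms.max() / rms.min())

if __name__ == "__main__":
    t = np.linspace(0, 1, 44_100)
    # A tone whose level swells from quiet to loud (plenty of dynamics).
    original = np.sin(2 * np.pi * 440 * t) * np.linspace(0.05, 0.9, t.size)
    squashed = loudness_maximise(original)
    print(f"original dynamic range ~{dynamic_range_db(original):.1f} dB")
    print(f"maximised dynamic range ~{dynamic_range_db(squashed):.1f} dB")
```

Running this shows the measured dynamic range shrinking by roughly 10 dB – everything is louder, and everything sounds more alike.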

The book also discusses the role of social media platforms in extracting value from the content that users contribute, reducing it to homogenised data lakes, selling it to the highest bidder, and compressing all our personal observations, relationships and original ideas (the things that make us nuanced human beings) into a sterilised drip-feed of “curated” content.

In the narrative on music production, and how “loudness” took hold in the mid-1990s, Krukowski takes specific aim at the dreaded sub-woofer. These speakers now pervade every concert, home entertainment system, desktop computer and car stereo. They even bring a distorted physical presence into our listening experience:

“Nosebleeds at festivals, trance states at dance clubs, intimidation by car audio…. When everything is louder than everything else, sounds lose context and thus meaning – even the meaning of loud.”

The main issue I have with digital music is that we as listeners have very little control over how we hear it – apart from adjusting the volume. So again, any nuance or variation has been ironed out, right to the point of consumption – we can’t even adjust the stereo balance. I recall that my boom box in the 1980s had separate volume controls for each speaker, and a built-in graphic equalizer. To paraphrase Joy Division, “We’ve Lost Control”.

Next week: I CAN live without my radio…

Free speech up for sale

When I was planning to post this article a couple of weeks ago, Elon Musk’s bid to buy Twitter and take it into private ownership was looking unlikely to succeed. Musk had just declined to take up the offer of a seat on the Twitter board, following which the board adopted a poison-pill defence against a hostile takeover. And just as I was about to go to press at my usual time, the news broke that the original bid had now been accepted by the board, so I hit the pause button instead and waited a day to see what the public reaction was. What a difference 72 hours (and US$44bn) can make… It seems “free speech” does indeed come with a price.

Of course, the Twitter transaction is still subject to shareholder approval and regulatory clearance, as well as confirmation of the funding structure, since Musk is having to raise about half the stated purchase price from banks.

Musk’s stated objective in acquiring Twitter was highlighted in a press release put out by the company:

“Free speech is the bedrock of a functioning democracy, and Twitter is the digital town square where matters vital to the future of humanity are debated,” said Mr. Musk. “I also want to make Twitter better than ever by enhancing the product with new features, making the algorithms open source to increase trust, defeating the spam bots, and authenticating all humans. Twitter has tremendous potential – I look forward to working with the company and the community of users to unlock it.”

This latest development in Musk’s apparent love/hate relationship with Twitter is bound to further divide existing users as to the billionaire’s intentions, as well as raise concerns about the broader implications for free speech. Musk himself has encouraged his “worst critics” to stay with the platform. Meanwhile, founder and former CEO Jack Dorsey has renewed his love of Twitter, despite only recently stepping away from the top job to spend more time on his other interests.

Personally, I’m not overly concerned that a platform such as Twitter is in private hands or under single ownership (subject, of course, to anti-trust rules, etc.). Far from creating an entrenched monopoly, it may actually encourage more competition by those who decide to opt out of Twitter. What I am less comfortable with is the notion that Twitter somehow acts as an exemplar of free speech, and as such, is a bastion of democracy.

On the positive side, we will be able to judge the veracity of Musk’s objectives against his actual deeds. For example, will Twitter actually introduce an edit button, make its algorithms open-source, exorcise the spam bots, verify users, and reduce/remove the platform’s reliance upon advertising?

On the negative side, what credible stance will Twitter now take on “free speech”, short of allowing an “anything goes” policy? If Musk is sincere that Twitter will be a platform for debating “matters vital to the future of humanity”, he may need to modify what he means by public discourse. Personal slanging matches with fellow billionaires (and those less able to defend themselves) do not make for an edifying public debating forum. Musk’s own disclosures about Twitter and his other business interests will also come under increased scrutiny. We know from past experience that Elon’s Tweets can move markets, and for this alone he should be aware of the responsibility that comes with ownership of the platform.

We have long understood that free speech is not the same as an unfettered right to say what you like in public – there are limits to freedom of expression, including accountability for the consequences of our words and actions, especially where they can cause harm. The broader challenges we face are:

  • technology outpacing regulation, when it comes to social media
  • defining what it means to “cause offence”
  • increased attacks on “mainstream media” and threats to freedom of the press

1. Just as the printing press, telegraphy, telephony, broadcasting and the internet each resulted in legislative changes, social media has continued to test the boundaries of regulation under which its predecessors now operate. Hitherto, much of the regulation that applies to social and digital media relates to privacy and data protection, as well as the existing law of defamation. But the latter varies considerably by jurisdiction, by access to redress and by the availability of remedies. Social media platforms have resisted attempts to treat them as traditional media (newspapers and broadcasters, which are subject to licensing and/or industry codes of practice) or as publishers (and therefore responsible for content published on their platforms). (Then there is the question of how some social media platforms manage their tax affairs in the countries where they derive their revenue.)

The Australian government is attempting to challenge social media companies in a couple of ways. The first has been to force these platforms to pay for third-party news content from which they directly and indirectly generate advertising income. The second aims to hold social media more accountable for defamatory content published on their platforms, and remove the protection of “anonymity”. However, the former might be seen as a (belated) reaction to changing business models, and largely acting in favour of incumbents; while the latter is a technical response to the complex law of defamation in the digital age.

2. The bar for taking offence at what we see or hear on social media is now so low as to be almost meaningless. During previous battles over censorship in print, on stage or on screen, the argument could be made that “if you don’t like something you aren’t being forced to watch it”, so maybe you are deliberately going in search of content just to find it offensive. The problem is, social media by its very nature is more pervasive and, fed by hidden algorithms, is actually more invasive than traditional print and broadcast media. Even as a casual, passive or innocent user, you cannot avoid seeing something that may “offend” you. Economic and technical barriers to entry are likewise so low that anyone and everyone can have their say on social media.

Leaving aside defamation laws, the concept of “hate speech” is being used to target content which is designed to advocate violence, or can be reasonably deemed or expected to have provoked violence or the threat of harm (personal, social or economic). I have problems with how we define hate speech in the current environment of public commentary and social media platforms, since the causal link between intent and consequence is not always that easy to establish.

However, I think we can agree that the use of content to vilify others simply based on their race, gender, sexuality, ethnicity, economic status, political affiliation or religious identity cannot be defended on the grounds of “free speech”, “fair comment” or “personal belief”. Yet how do we discourage such diatribes without accusations of censorship or authoritarianism, and how do we establish workable remedies to curtail the harmful effects of “hate speech” without infringing our civil liberties?

Overall, there is a need to establish the author’s intent (their purpose as well as any justification), plus apply a “reasonable person” standard, one that does not simply affirm the confirmation bias of one sector of society against another. We must recognise that hiding behind our personal ideology cannot be an acceptable defence against facing the consequences of our actions.

3. I think it’s problematic that large sections of the traditional media have hardly covered themselves in glory when it comes to their ethical standards, and their willingness to misuse their public platforms, economic power and political influence to undertake nefarious behaviour and/or deny any responsibility for their actions. Think of the UK’s phone hacking scandals, which resulted in one press baron being deemed “unfit to run a company”, as well as leading to the closure of a major newspaper.

That said, it hardly justifies the attempts by some governments, populist leaders and authoritarian regimes to continuously undermine the integrity of the fourth estate. It certainly doesn’t warrant the prosecution and persecution of journalists who are simply trying to do their job, nor attacks and bans on the media unless they “toe the party line”.

Which brings me back to Twitter, and its responsibility in helping to preserve free speech, while preventing its platform being hijacked for the purposes of vilification and incitement to cause harm. If its new owner is serious about furthering public debate and mature discourse, then here are a few other enhancements he might want to consider:

  • in addition to an edit button, a “cooling off” period whereby users are given the opportunity to reconsider a like, a post or a retweet, based on user feedback or community interaction – after which time, they might be deemed responsible for the content as if they were the original author (potentially a way to mitigate “pile-ons”)
  • signing up to a recognised industry code of ethics, including a victim’s formal right of reply, access to mediation, and enforcement procedures and penalties against perpetrators who continually cross the line into vilification, or engage in content that explicitly or implicitly advocates violence or harm
  • a more robust fact-checking process and a policy of “truth in advertising” when it comes to claims or accusations made by or on behalf of politicians, political parties, or those seeking elected office
  • clearer delineation between content which is mere opinion, content which is in the nature of a public service (e.g., emergencies and natural disasters), content which is deemed part of a company’s public disclosure obligations, content which is advertorial, content which is on behalf of a political party or candidate, and content which is purely for entertainment purposes (removing the bots may not be enough)
  • consideration of establishing an independent editorial board that can also advocate on behalf of alleged victims of vilification, and act as the initial arbiter of “public interest” matters (such as privacy, data protection, whistle-blowers etc.)

Finally, if Twitter is going to remove/reduce advertising, what will the commercial model look like?

Next week: The Crypto Conversation

Smart Contracts… or Dumb Software

The role of smart contracts in blockchain technology is creating an emerging area of jurisprudence which largely overlaps with computer programming. However, one of the first comments I heard about smart contracts when I started working in the blockchain and crypto industry was that they are “neither smart, nor legal”. What does this paradox mean in practice?

First, smart contracts are not “smart”, because they still largely rely on human coders. While self-replicating and self-executing software programs exist, a smart contract contains human-defined parameters or conditions that will trigger the performance of the contract terms once those conditions have been met. The simplest example might be coded as a type of “if this, then that” function. For example, I could create a smart contract so that every time the temperature drops below 15 degrees, the heating comes on in my house, provided that there is sufficient credit in the digital wallet connected to my utilities billing account.
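By way of illustration only, that heating example might sketch out as below. This is pseudo-contract logic in Python, not code for any actual blockchain platform, and the temperature trigger, tariff and wallet values are all hypothetical – in practice the readings would come from an oracle and the debit would be an on-chain transaction.

```python
from dataclasses import dataclass

@dataclass
class SmartHeatingAgreement:
    """Toy 'if this, then that' contract: turn the heating on when it's cold,
    provided the linked wallet can cover the cost (all values hypothetical)."""
    trigger_temp_c: float = 15.0
    cost_per_hour: float = 1.50  # hypothetical utility charge

    def execute(self, current_temp_c: float, wallet_balance: float) -> str:
        # Condition 1: the human-defined trigger (temperature threshold).
        if current_temp_c >= self.trigger_temp_c:
            return "no action: temperature above trigger"
        # Condition 2: sufficient credit in the linked wallet.
        if wallet_balance < self.cost_per_hour:
            return "no action: insufficient credit"
        # Both conditions met: perform the contract terms.
        return "heating switched on, wallet debited"

# Usage: in a real deployment these inputs would come from a temperature
# oracle and the digital wallet linked to the utilities billing account.
contract = SmartHeatingAgreement()
print(contract.execute(current_temp_c=12.0, wallet_balance=20.0))
```

The human element is plain to see: someone still has to decide the trigger, the tariff and what counts as “sufficient credit”.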

Second, smart contracts are not “legal”, unless they comprise the necessary elements that form a legally binding agreement: intent, offer, acceptance, consideration, capacity, certainty and legality. They must be capable of being enforceable in the event that one party defaults, but they must not be contrary to public policy, and parties must not have been placed under any form of duress to enter into a contract. Furthermore, there must be an agreed governing law, especially if the parties are in different jurisdictions, and the parties must agree to be subject to a legal venue capable of enforcing or adjudicating the contract in the event of a breach or dispute.

Some legal contracts still need to be in a prescribed form, or in hard copy with a wet signature. A few may need to be under seal or attract stamp duty. Most consumer contracts (and many commercial contracts) are governed by rules relating to unfair contract terms and unconscionable conduct. But assuming a smart contract is capable of being created, notarised and executed entirely on the blockchain, what other legal principles may need to be considered when it comes to capacity and enforcement?

We are all familiar with the process of clicking “Agree” buttons every time we sign up for a social media account, download software or subscribe to digital content. Let’s assume that even with a “free” social media account there is consideration (i.e., there’s something in it for the consumer in return for providing some personal details), and that both parties have the capacity (e.g., they are old enough) and the intent to enter into a contract. Even then, the agreement is usually no more than a non-transferable and non-exclusive license granted to the consumer. The license may be revoked at any time, and may even attract penalties in the event of a breach by the end user. There is rarely a transfer of title or ownership to the consumer (if anything, social media platforms effectively acquire the rights to the users’ content), and there is nothing to say that the license will continue into perpetuity. But think how many of these on-line agreements we enter into each day, every time we log into a service or run a piece of software. Soon, those “Agree” buttons could represent individual smart contracts.

When we interact with on-line content, we are generally dealing with a recognised brand or service provider, which represents a known legal entity (a company or corporation). In turn, that entity is capable of entering into a contract, and is also capable of suing/being sued. Legal entities still need to be directed by natural persons (humans) in the form of owners, directors, officers, employees, authorised agents and appointed representatives, who act and perform tasks on behalf of the entity. Where a service provider comprises a highly centralised entity, identifying the responsible party is relatively easy, even if it may require a detailed company search in the case of complex ownership structures and subsidiaries. So what would be the outcome if you entered into a contract with what you thought was an actual person or real company, but it turned out to be an autonomous bot or an instance of disembodied AI – who or what is the counter-party to be held liable in the event something goes awry?

Until DAOs (Decentralised Autonomous Organisations) are given formal legal recognition (including the ability to be sued), it is a grey area as to who may or may not be responsible for the actions of a DAO-based project, and who may be the counter-party to a smart contract. More importantly, who will be responsible for the consequences of the DAO’s actions, once the project is in the community and functioning according to its decentralised rules of self-governance? Some jurisdictions are already drafting laws that will recognise certain DAOs as formal legal entities, which could take the form of a limited liability partnership model or perhaps a particular type of special purpose vehicle. Establishing authority, responsibility and liability will focus on the DAO governance structure: who controls the consensus mechanism, and how do they exercise that control? Is voting to amend the DAO constitution based on proof of stake?
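As a rough sketch of what stake-weighted governance implies (a generic illustration, not any particular DAO’s actual mechanism, with the holders, stakes and quorum all hypothetical), a proposal to amend the constitution might be tallied like this:

```python
from collections import defaultdict

def tally_stake_weighted_vote(stakes: dict[str, float],
                              votes: dict[str, str],
                              quorum: float = 0.5) -> str:
    """Generic stake-weighted vote: each token holder's vote counts in
    proportion to their stake; the proposal needs a quorum of total stake."""
    total_stake = sum(stakes.values())
    weight = defaultdict(float)
    participating = 0.0
    for holder, choice in votes.items():
        stake = stakes.get(holder, 0.0)
        weight[choice] += stake
        participating += stake
    if participating / total_stake < quorum:
        return "no quorum"
    return max(weight, key=weight.get)

# Hypothetical token holders and their votes on a constitutional amendment.
stakes = {"alice": 600.0, "bob": 300.0, "carol": 100.0}
votes = {"alice": "amend", "bob": "reject", "carol": "reject"}
print(tally_stake_weighted_vote(stakes, votes))  # "amend" wins despite 2-to-1 headcount against
```

The point of the toy example is that whoever controls the majority of staked tokens controls the outcome, regardless of how many individual members vote the other way – which is precisely why authority, responsibility and liability will hinge on who holds and exercises that control.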

Despite these emerging uncertainties, and the limitations inherent in smart contracts, it’s clear that these programs, where code is increasingly the law, will govern more and more areas of our lives. I see huge potential for smart contracts to be deployed in long-dated agreements such as life insurance policies, home mortgages, pension plans, trusts, wills and estates. These types of legal documents should be capable of evolving dynamically (and programmatically) as our personal circumstances, financial needs and living arrangements also change over time. Hopefully, these smart contracts will also bring greater certainty, clarity and efficiency in the drafting, performance, execution and modification of their terms and conditions.

Next week: Free speech up for sale


How digital brands are advertising

During a recent visit to the cinema, I was surprised to see adverts for major digital brands on the big screen, ahead of the main feature.

I’ve always thought of cinema advertising as falling into one or more of the following categories:

  • ads you don’t see on TV (often longer than their small screen counterparts)
  • luxury names and aspirational brands (travel, spirits, fashion, financial services)
  • local businesses (the pizzeria “just a short walk from this theatre…”)
  • movie tie-ins (highlighting the product placement in the film you are about to see)
  • seasonal themes (especially Christmas)

What struck me on this occasion were the ads by three DNBs (digitally native brands), featuring LinkedIn, TikTok and Audible. Despite the disparate nature of their businesses, I realised that there was a common element.

As the above-linked McKinsey report states, successful DNBs are really good at connecting with (and understanding) their audience, identifying and fulfilling very specific needs with unique solutions, and leveraging the very technology they are built on to promote their services and engage with their customers. Witness the well-timed “alerts” from food-delivery platforms in the early evening, the viral campaigns designed to reinforce brand awareness, and the social media feeds designed to build customer engagement and loyalty. (Note that the report features Peloton as a poster child for its thesis, before the personal exercise brand ran into recent difficulties.)

If you look at most DNB campaigns, they are primarily generating demand via very specific human drivers:

1. Aspirational – the pure FOMO element (not unique to DNBs, of course, but they do it more subtly than many consumer brands)
2. Experiential – highlighting the tangible benefits (of mostly intangible products)
3. Socialisation – the paradox of building a trusted relationship through hyper-personalisation and constant sharing…

These three cinema ads each contained implicit “story-telling”. LinkedIn positioned itself as a platform for establishing our own narrative (telling our own truth?); Audible promoted its audio content (books and podcasts) as a means to find authentic stories that resonate with us (and this was long before the recent shenanigans over at Spotify); and TikTok used a well-known viral video as the basis for building community around shared stories.

Of course, story-telling is hardly a new concept in brand marketing, and has been eagerly adopted by digital brands (think of campaigns during the pandemic which have featured on-line connectivity and remote working). However, it has become an over-used technique, and is often cynically exploited in the service of corporate green-washing, jumping on social bandwagons, and blatant virtue signalling.

Call me jaded, but I’m old enough to remember the fad of consulting firms pitching their clients on building a “corporate narrative”, drawing on employee stories and customer experiences, as the foundation for those anodyne mission/vision “statements” – but they typically ended up as exercises in damage control in case the truth got out.

These particular cinema ads managed to use story-telling to create a human dimension (authenticity, connectivity, community, sharing, etc.) that is more than simply “buy our product” or “use our tech” (although obviously that’s the ultimate goal). It would be very interesting to read the briefs given to their creative agencies, given that the ads were all in the service of corporate branding.

Next week: Doctrine vs Doctrinaire