Free speech up for sale

When I was planning to post this article a couple of weeks ago, Elon Musk’s bid to buy Twitter and take it into private ownership was looking unlikely to succeed. Musk had just declined to take up the offer of a seat on the Twitter board, following which the board adopted a poison-pill defence against a hostile takeover. And just as I was about to go to press at my usual time, the news broke that the original bid had now been accepted by the board, so I hit the pause button instead and waited a day to see what the public reaction was. What a difference 72 hours (and US$44bn) can make… It seems “free speech” does indeed come with a price.

Of course, the Twitter transaction is still subject to shareholder approval and regulatory clearance, as well as confirmation of the funding structure, since Musk is having to raise about half of the stated purchase price from banks.

Musk’s stated objective in acquiring Twitter was highlighted in a press release put out by the company:

“Free speech is the bedrock of a functioning democracy, and Twitter is the digital town square where matters vital to the future of humanity are debated,” said Mr. Musk. “I also want to make Twitter better than ever by enhancing the product with new features, making the algorithms open source to increase trust, defeating the spam bots, and authenticating all humans. Twitter has tremendous potential – I look forward to working with the company and the community of users to unlock it.”

This latest development in Musk’s apparent love/hate relationship with Twitter is bound to further divide existing users as to the billionaire’s intentions, as well as raise concerns about the broader implications for free speech. Musk himself has encouraged his “worst critics” to stay with the platform. Meanwhile, founder and former CEO Jack Dorsey has renewed his love of Twitter, despite only recently stepping away from the top job to spend more time on his other interests.

Personally, I’m not overly concerned that a platform such as Twitter is in private hands or under single ownership (subject, of course, to anti-trust rules, etc.). Far from creating an entrenched monopoly, it may actually encourage more competition by those who decide to opt out of Twitter. What I am less comfortable with is the notion that Twitter somehow acts as an exemplar of free speech, and as such, is a bastion of democracy.

On the positive side, we will be able to judge the veracity of Musk’s objectives against his actual deeds. For example, will Twitter actually introduce an edit button, make its algorithms open-source, exorcise the spam bots, verify users, and reduce/remove the platform’s reliance upon advertising?

On the negative side, what credible stance will Twitter now take on “free speech”, short of allowing an “anything goes” policy? If Musk is sincere that Twitter will be a platform for debating “matters vital to the future of humanity”, he may need to modify what he means by public discourse. Personal slanging matches with fellow billionaires (and with those less able to defend themselves) do not make for an edifying public debating forum. Musk’s own disclosures about Twitter and his other business interests will also come under increased scrutiny. We know from past experience that Musk’s tweets can move markets, and for this reason alone he should be aware of the responsibility that comes with ownership of the platform.

We have long understood that free speech is not the same as an unfettered right to say what you like in public – there are limits to freedom of expression, including accountability for the consequences of our words and actions, especially where they can cause harm. The broader challenges we face are:

  • technology outpacing regulation, when it comes to social media
  • defining what it means to “cause offence”
  • increased attacks on “mainstream media” and threats to freedom of the press

1. Just as the printing press, telegraphy, telephony, broadcasting and the internet each resulted in legislative changes, social media has continued to test the boundaries of the regulation under which its predecessors now operate. Hitherto, much of the regulation that applies to social and digital media relates to privacy and data protection, as well as the existing law of defamation. But the latter varies considerably by jurisdiction, in access to redress, and in the availability of remedies. Social media platforms have resisted attempts to treat them as traditional media (newspapers and broadcasters, which are subject to licensing and/or industry codes of practice) or as publishers (and therefore responsible for content published on their platforms). (Then there is the question of how some social media platforms manage their tax affairs in the countries where they derive their revenue.)

The Australian government is attempting to challenge social media companies in a couple of ways. The first has been to force these platforms to pay for the third-party news content from which they directly and indirectly generate advertising income. The second aims to hold social media platforms more accountable for defamatory content published on their platforms, and to remove the protection of “anonymity”. However, the former might be seen as a (belated) reaction to changing business models, one that largely acts in favour of incumbents; while the latter is a technical response to the complex law of defamation in the digital age.

2. The bar for taking offence at what we see or hear on social media is now so low as to be almost meaningless. During previous battles over censorship in print, on stage or on screen, the argument could be made that “if you don’t like something, you aren’t being forced to watch it” – so perhaps you are deliberately going in search of content just to find it offensive. The problem is that social media, by its very nature, is more pervasive and, fed by hidden algorithms, more invasive than traditional print and broadcast media. Even as a casual, passive or innocent user, you cannot avoid seeing something that may “offend” you. Economic and technical barriers to entry are likewise so low that anyone and everyone can have their say on social media.

Leaving aside defamation laws, the concept of “hate speech” is being used to target content which is designed to advocate violence, or which can reasonably be deemed or expected to provoke violence or the threat of harm (personal, social or economic). I have problems with how we define hate speech in the current environment of public commentary and social media platforms, since the causal link between intent and consequence is not always easy to establish.

However, I think we can agree that the use of content to vilify others simply based on their race, gender, sexuality, ethnicity, economic status, political affiliation or religious identity cannot be defended on the grounds of “free speech”, “fair comment” or “personal belief”. Yet how do we discourage such diatribes without accusations of censorship or authoritarianism, and how do we establish workable remedies to curtail the harmful effects of “hate speech” without infringing our civil liberties?

Overall, there is a need to establish the author’s intent (their purpose as well as any justification), and to apply a “reasonable person” standard – one that does not simply reinforce the confirmation bias of one sector of society against another. We must recognise that hiding behind our personal ideology cannot be an acceptable defence against facing the consequences of our actions.

3. I think it’s problematic that large sections of the traditional media have hardly covered themselves in glory when it comes to their ethical standards, and that they have shown a willingness to misuse their public platforms, economic power and political influence for nefarious ends and/or to deny any responsibility for their actions. Think of the UK’s phone-hacking scandals, which saw one press baron deemed “unfit to run a company” and led to the closure of a major newspaper.

That said, it hardly justifies the attempts by some governments, populist leaders and authoritarian regimes to continuously undermine the integrity of the fourth estate. It certainly doesn’t warrant the prosecution and persecution of journalists who are simply trying to do their job, nor attacks and bans on the media unless they “toe the party line”.

Which brings me back to Twitter, and its responsibility in helping to preserve free speech, while preventing its platform being hijacked for the purposes of vilification and incitement to cause harm. If its new owner is serious about furthering public debate and mature discourse, then here are a few other enhancements he might want to consider:

  • in addition to an edit button, a “cooling off” period whereby users are given the opportunity to reconsider a like, a post or a retweet, based on user feedback or community interaction – after which time, they might be deemed responsible for the content as if they were the original author (potentially a way to mitigate “pile-ons”)
  • signing up to a recognised industry code of ethics, including a victim’s formal right of reply, access to mediation, and enforcement procedures and penalties against perpetrators who continually cross the line into vilification, or engage in content that explicitly or implicitly advocates violence or harm
  • a more robust fact-checking process and a policy of “truth in advertising” when it comes to claims or accusations made by or on behalf of politicians, political parties, or those seeking elected office
  • clearer delineation between content which is mere opinion, content which is in the nature of a public service (e.g., emergencies and natural disasters), content which is deemed part of a company’s public disclosure obligations, content which is advertorial, content which is on behalf of a political party or candidate, and content which is purely for entertainment purposes (removing the bots may not be enough)
  • consideration of establishing an independent editorial board that can also advocate on behalf of alleged victims of vilification, and act as the initial arbiter of “public interest” matters (such as privacy, data protection, whistle-blowers etc.)

Finally, if Twitter is going to remove/reduce advertising, what will the commercial model look like?

Next week: The Crypto Conversation

Facebook and that news ban

On February 18 this year, Facebook decided to “ban” news content in Australia. This meant that Australian Facebook users (including media companies) could not post news content or links, nor could they access local or overseas news. The move was a preemptive strike (and a somewhat crude negotiation tactic) by Facebook in an attempt to circumvent the Media Bargaining Code, which requires social media and search engine platforms (specifically, Google and Facebook) to pay news providers for the use of their content. Despite the gnashing and wailing among some sectors of the Australian community, the world did not end. And while Facebook has somewhat relented (following some concessions from the Federal government), the story has generated some useful debate about the power of certain tech platforms and the degree of influence or control they exercise over what we see on our screens each day.

Personally, I did not find the ban an inconvenience, because I rarely use my Facebook account, and I certainly don’t rely on it for news or information. Instead, I prefer to access content directly from providers. One result of the ban was more downloads for Australian news apps such as the ABC and Inkl. Another (unforeseen?) result was a block on information posted by public and voluntary sector bodies, including essential services, health, community and charitable organisations.

The former can only be a good thing. Seriously, if we are relying on Facebook for news content, THAT is the real problem. As for the latter, it suggests a lot of organisations have become over-reliant on Facebook to reach their audience.

Meanwhile, Google (which had already struck a deal with Australian media companies) was eagerly promoting the number of Australian “partner publications” it offers in its News Showcase. This was something of a U-turn, because Google had threatened to withdraw its search service from Australia in response to the same Media Bargaining Code. That would have been drastic – but then again, other search engines are available.

It was also interesting to see Microsoft (no stranger to anti-trust action during the so-called browser wars) promoting BuzzFeed via Twitter on the day of the Facebook ban. I also received a number of e-mails from various organisations reminding me that I could still access their content directly from their website or via their newsletter. These moves to reconnect directly with audiences started to make Facebook look very silly and petulant.

Just as there are other search engines besides Google, other social media platforms are available – so why do so many people appear to be against the Media Bargaining Code, and prefer to give Facebook a free monopoly over which content they read?

I have written previously about Facebook’s relationship with “news”. Those who felt “cheated” that they couldn’t access news should realise that a “free” social media account comes with a price – the consumer is the product, and is only there to serve up eyeballs and profiles to be sold to Facebook’s advertisers. In short, Facebook only sees news as a magnet for its own advertisers, so it seems only fair that it should pay for this piggyback ride on someone else’s content. (And we all know what else Facebook does with our personal information, as the Cambridge Analytica scandal revealed.)

Some commentary suggested that Facebook is providing a type of “public service” by enabling links to news stories – so much so that some question whether it is equitable to force Facebook to pay for the privilege under the new Code. In fact, some argued that Facebook should be charging the media companies for linking to their stories, since this drives traffic to third-party news sites, which in turn generate advertising income based on their own readership. But this overlooks the reality of the economic bargain being struck here: Facebook might like to argue that it is doing you a “favour” by serving up news content in your personal feed; whereas the social media giant “curates” what you see in your feed purely to generate ad revenue.

Alternatively, if news content has no value to Facebook, why has it been happy to distribute it for “free” all these years? Because, I repeat, it knows full well that without readers and content, it can’t sell advertising. Maybe Facebook should invest in journalism and create its own news content? Oh wait, it doesn’t want to be regulated like a newspaper. Remember in 2013, when Facebook said it wanted to be “the world’s newspaper”, but then realised it would have to comply with media laws (libel, racial vilification, etc.) and quietly dropped the plan?

In short, Facebook is not interested in being a news publisher (nor being subject to relevant media laws) but they are happy to “leverage” third-party content. Now, they will have to pay a fair price to use that content.

The conclusions from this Facebook episode (and some clumsy messaging from the Federal government) are pretty obvious:

  1. There is no such thing as a free lunch – a “free” social media account comes with a price; and there is also a cost attached to using someone else’s content
  2. Taxation of the revenues of tech companies like Facebook, Google, Apple, Netflix and Amazon should occur at the point of sale and consumption (i.e., where the consumer value is created and the income is generated, not where the revenue is recognised).
  3. Other search engines and social media platforms are available and content can be accessed direct from the source (but we’re probably too lazy to change our habits….)
  4. In part, this is about the continued demise of the fourth estate – no-one wants to pay for content, so social media platforms are getting a free ride, having already destroyed the newspapers’ classified and display advertising business model
  5. But it’s also about the attention economy – consumers are the product when it comes to social media, so perhaps we should get paid more for our own time spent looking at ads?
  6. As ever, tech outstrips legislation – the law lags behind and is playing catch up
  7. And politicians really don’t have a clue how to go about this…..

Next week: Rebooting the local economy

The Age of Responsibility

How old is old enough to know better? In particular, when can we be said to be responsible, and therefore accountable, for our actions? (All the recent political shenanigans around “collective accountability”, “departmental responsibility”, “creeping assumptions” and “ministerial conduct” have got me thinking….)

By the time we are 7 years of age, we should probably know the difference between “right and wrong”, at least in the context of home, school, culture and society – “don’t tell lies, don’t be rude to your elders, don’t steal, don’t hit your siblings…”

The age of criminal responsibility varies around the world, but the global average is between 10 and 14 years. In Australia, it is currently 10, but there are proposals to raise it to 14. While I can understand and appreciate some of the arguments in favour of the latter, I’m also aware that criminal intent (not just criminal acts or behaviour) can establish itself under the age of 10 – I’m thinking of the James Bulger case in the UK in particular.

Legally, 18 is the coming of age – for entering into contracts, getting married (without the need for parental approval), earning the right to vote, and purchasing alcohol and tobacco. But you can have sex and start driving a car from the age of 16.

As a society, we appear to be extending the age at which we become “responsible adults”. The concept of “adolescence” emerged in the 15th century, to indicate a transition to adulthood. The notion of “childhood” appeared in the 17th century, mainly from a philosophical perspective. And “teenagers” are a mid-20th-century marketing phenomenon.

However, we now have evidence that our brains do not finish maturing until our third decade – so cognitively, it could be argued we are not fully responsible for our actions or decisions until we are at least 25, because our judgment is not yet fully developed. In which case, it rather raises the question of our ability to procreate, drink, drive and vote….

Of course, many age-based demarcations are cultural and societal. Customary practices such as initiation ceremonies are still significant markers in a person’s development and their status in the community (including their rights and responsibilities).

Which brings me to social media – shouldn’t we also be responsible and held accountable for what we post, share, comment on or simply like on Facebook, Twitter etc.? Whether you believe in “nature” or “nurture”, some academics argue we always have a choice before we hit that button – so shouldn’t that be a guiding principle to live by?

Next week: Making Creeping Assumptions

Blockchain and the Limits of Trust

Last week I was privileged to be a guest on This Is Imminent, a new form of Web TV hosted by Simon Waller. The given topic was Blockchain and the Limitations of Trust.

For a replay of the Web TV event go here

As regular readers will know, I have been immersed in the world of Blockchain, cryptocurrency and digital assets for over four years – and while I am not a technologist, I think I know enough to understand some of the potential impact and implications of Blockchain on distributed networks, decentralization, governance, disintermediation, digital disruption, programmable money, tokenization, and, for the purposes of last week’s discussion, human trust.

The point of the discussion was to explore how Blockchain might provide a solution to the absence of trust we currently experience in many areas of our daily lives. Even better, how Blockchain could enhance or expand our existing trusted relationships, especially across remote networks. The complete event can be viewed here, but be warned that it’s not a technical discussion (and wasn’t intended to be), although Simon did find a very amusing video that tries to explain Blockchain with the aid of Spam (the luncheon meat, not the unwanted e-mail).

At a time when our trust in public institutions is being tested all the time, it’s more important than ever to understand the nature of trust (especially trust placed in any new technology), and to navigate how we establish, build and maintain trust in increasingly peer-to-peer, fractured, fragmented, open and remote networks.

To frame the conversation, I think it’s important to lay down a few guiding principles.

First, a network is only as strong as its weakest point of connection.

Second, there are three main components to maintaining the integrity of a “trusted” network:

  • how are network participants verified?
  • how secure is the network against malicious actors?
  • what are the penalties or sanctions for breaking that trust?

Third, “trust” in the context of networks is a proxy for “risk” – how much or how far are we willing to trust a network, and everyone connected to it?

For example, if you and I know each other personally and I trust you as a friend, colleague or acquaintance, does that mean I should automatically trust everyone else you know? (Probably not.) Equally, should I trust you just because you know all the same people as me? (Again, probably not.) Each relationship (or connection) in that type of network has to be evaluated on its own merits. Although we can do a certain amount of due diligence and triangulation, as each network becomes larger, it’s increasingly difficult for us to “know” each and every connection.

Let’s suppose that the bar for verification is set appropriately high, that the network is maintained securely, and that there are adequate sanctions for abusing the network’s trust. Then it is possible for each participant to “know” every other, because the network has created the minimum degree of trust needed for it to be viable. Consequently, we might conclude that only trustworthy people would want to join a network based on trust, where each transaction is observable and traceable (albeit, in the case of Blockchain, pseudonymously).

When it comes to trust and risk assessment, it still amazes me the amount of personal (and private) information people are willing to share on social media platforms, just to get a “free” account. We seem to be very comfortable placing an inordinate amount of trust in these highly centralized services both to protect our data and to manage our relationships – which to me is something of an unfair bargain.

Statistically we know we are more likely to be killed in a car accident than in a plane crash – but we attach far more risk to flying than to driving. Whenever we take our vehicle out on to the road, we automatically assume that every other driver is licensed, insured, and competent to drive, and that their car is taxed and roadworthy. We cannot verify this information ourselves, so we have to trust in both the centralized systems (that regulate drivers, cars and roads), and in each and every individual driver – but we know there are so many weak points in that structure.

Blockchain has the ability to verify each and every participant and transaction on the network, enabling all users to trust in the security and reliability of network transactions. In addition, once verified, participants do not have to keep providing verification each time they want to access the network, because the network “knows” enough about each participant that it can create a mutual level of trust without everyone having to have direct knowledge of each other.
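To make that idea a little more concrete, here is a minimal sketch in Python of the “verify once, then trust signed transactions” pattern, using the third-party cryptography package. It is purely illustrative – the ToyNetwork class and its methods are hypothetical and do not represent how any particular blockchain actually works:

```python
# A toy model of "register a verified key once, then accept any transaction
# that carries a valid signature from that key". Requires: pip install cryptography
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


class ToyNetwork:
    def __init__(self):
        self.participants = {}  # participant name -> verified public key

    def register(self, name, public_key):
        # In a real network this step would involve whatever identity
        # verification the network demands; here we simply record the key.
        self.participants[name] = public_key

    def accept_transaction(self, sender, payload: bytes, signature: bytes) -> bool:
        key = self.participants.get(sender)
        if key is None:
            return False  # unknown participant
        try:
            key.verify(signature, payload)  # raises if the signature is invalid
            return True
        except InvalidSignature:
            return False


# Usage: Alice registers once, then signs each transaction she submits.
network = ToyNetwork()
alice_key = Ed25519PrivateKey.generate()
network.register("alice", alice_key.public_key())

tx = b"alice pays bob 10 units"
assert network.accept_transaction("alice", tx, alice_key.sign(tx))
assert not network.accept_transaction("alice", b"tampered", alice_key.sign(tx))
```

The point of the sketch is simply that, once the network holds a verified key for a participant, every subsequent transaction can be checked without the participant having to re-establish who they are.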

In the asymmetric relationships we have created with centralized platforms such as social media, we find ourselves in a very binary situation – once we have provided our e-mail address, date of birth, gender and whatever else is required, we cannot be confident that the platform “forgets” that information when it no longer needs it. It’s a case of “all or nothing” as the price of network entry. Whereas, under a system of self-sovereign digital identity (which technology like Blockchain can facilitate), I could be sure that such platforms only have access to the specific personal data points I am willing to share with them, for the specific purpose I determine, and only for as long as I decide.
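As a rough illustration of that selective-disclosure idea, here is a minimal sketch using salted hash commitments, written against Python’s standard library only. It is not a real self-sovereign identity or verifiable-credentials implementation, and all of the attribute names are made up:

```python
# Selective disclosure via salted hash commitments: share commitments up front,
# then reveal only the individual attributes a platform actually needs.
import hashlib
import os


def commit(value: str, salt: bytes) -> str:
    """Commitment to a single attribute: SHA-256 over (salt || value)."""
    return hashlib.sha256(salt + value.encode()).hexdigest()


# The holder keeps the raw attributes and salts; only commitments are shared.
attributes = {"email": "me@example.com", "date_of_birth": "1980-01-01", "gender": "F"}
salts = {k: os.urandom(16) for k in attributes}
commitments = {k: commit(v, salts[k]) for k, v in attributes.items()}

# Later, the holder reveals ONLY the attribute that is needed, plus its salt,
# and the platform checks it against the previously shared commitment.
revealed_key = "email"
revealed_value, revealed_salt = attributes[revealed_key], salts[revealed_key]
assert commit(revealed_value, revealed_salt) == commitments[revealed_key]
# The other attributes (date_of_birth, gender) are never handed over.
```

Real self-sovereign identity schemes use rather more sophisticated cryptography, but the underlying bargain is the same: the individual, not the platform, decides which data points are disclosed.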

Finally, taking control of, and being responsible for, managing our own personal information (such as the private key for a digital wallet) is perhaps a step too far for some people. They may not have enough confidence in their own ability to safeguard this data, so they would rather delegate that responsibility to centralized systems.

Next week: Always Look On The Bright Side…