Regulating Social Media…

The term “mainstream media” (or MSM) is generally used as a derogatory term to describe traditional news services (print, broadcast, on-line), especially by anyone who thinks that MSM does not reflect what’s “really going on” in politics, society and the wider arena of current affairs. Depending on which conspiracy theories or political agenda you follow, if MSM doesn’t agree with or express your viewpoint, it’s become very easy to dismiss the Fourth Estate as an instrument of the (deep) State, or merely serving the interests of an oligarchy of wealthy media owners and press barons. This dialectic is sometimes described as the Fifth Estate – those bloggers, podcasters, citizen journalists and marginalized voices that seek to pursue their version(s) of the truth via new content platforms.

Although the tradition of the counter-culture as represented by this Fifth Estate has a very long history, its growth has been accelerated and amplified thanks to new digital technologies in general, and social media brands in particular. The problem is, not only is social media challenging (and ignoring) many of the rules and conventions that underpin the social contract between the public and the traditional media outlets, but our governments and regulators also cannot keep up with the pace of technology.

In the late 1980s, when I studied sub-editing and basic journalism at night school, the ethos of The Five Ws of Journalism was still taught as the essential of any credible news outlet or publication. This was also a time when the media was going through significant changes, from new content technology to cross-border ownership, from multi-channel narrow-casting to 24-hour rolling news formats – yet the principles of source verification, fact-checking, libel laws and the right to reply were generally still seen as crucial to instilling public trust and confidence in the media (alongside a healthy dose of scepticism to not believe everything that we read in the paper!).

Now, with social media grabbing more of our attention, and with large, global and engaged audiences getting more of their news from these platforms, the term “MSM” could easily apply to social media itself. Hence the term “legacy media” has emerged to describe traditional news services.

Whether it’s Facebook wanting to be the “world’s newspaper” or X positioning itself as the global “public square”, it’s clear that these new media barons are in many ways no different to the aging media moguls they seek to displace. Newspapers don’t make money from their cover price or even subscriptions – most revenue comes from advertising and the “rivers of gold” it represents. Now, those advertising dollars are on-line, and tied to our social media accounts and the proliferation of posts, “likes” and “shares” (as well as our personal data).

So how should we think about regulating social media, if the old rules no longer apply?

First, the policy, regulatory and industry framework to oversee social media needs to be simplified and streamlined. In Australia alone, based on a cursory internet search, I identified more than a dozen entities (government, agency, association) that have some form of oversight of social media. Apart from being highly inefficient, surely it doesn’t have to be this complicated? (And complexity and ambiguity can embolden those who seek to flout convention.)

Second, if a social media platform wants to be taken seriously as a trusted news source, and if it aspires to be recognised as a publication of record, it has to adopt some fundamental principles such as The Five Ws. It’s all very well saying that these platforms are anti-censorship, and pro-free speech, but those rights come with a heap of legal and social responsibilities. To argue that these platforms are merely conduits for public opinion (rather than being content publishers) undermines agency theory. Given that I am not entitled to a social media account (I don’t think it’s yet risen to being a fundamental human right?), and that I don’t own my account (often, not even the content I post), social media companies act as our agents. They give us permission to use their services, and they ultimately control what we post on their digital real estate. They also use algorithms to manipulate what is served up in our feeds. Social media should therefore be held accountable for content that it enables to be disseminated; take more responsibility for any libel, lies or dis/misinformation issued on its platform; and risk prosecution for any content that promotes, encourages or incites violence, insurrection and public disorder.

Third, the fact that much of the content on social media is user-generated should not absolve these platforms from having to provide a formal right of reply, as well as adhering to a recognised and independent dispute resolution service. This will enable alleged victims of on-line bullying, harassment, personal abuse and outright lies to seek redress, without having to embark on expensive legal proceedings. (Of course, if social media companies maintained fact checking and other verification tools, they should be able to mitigate, if not eradicate, the need to invoke these mechanisms in the first place.)

Finally, any reputable social media company should be willing to sign up to minimum standards of practice in respect of content originated or disseminated on its platform, as well as observing existing regulation around personal data, data protection, cyber-security, privacy, intellectual property rights and general consumer protections. At the very least, social media has to prove itself a credible alternative to the legacy media it seeks to displace, otherwise they are not the solution, just another part of the problem.

The wrong end of the stick!

In a typical knee-jerk and censorial reaction, Australia’s Federal Parliament has recently approved legislation that will attempt to ban anyone under the age of 16 from accessing social media.

Knee-jerk, because the legislative process was rushed, with barely a 24-hour public consultation period. The policy itself was only aired less than six months earlier, and was not part of the Labor Government’s election manifesto in 2022.

Censorial, because Australia has a long history of heavy-handed censorship. I still recall when I lived in Adelaide in 1970 (aged 10), broadcasts of the children’s TV series, “Do Not Adjust Your Set” were accompanied by a “Mature Audience” rating – the same series which I had watched when it was first broadcast in the UK in 1967 during the tea-time slot!

As yet another example of government not understanding technology, the implementation details have been left deliberately vague. At its simplest, the technology companies behind the world’s most popular social media platforms (to be defined) will be responsible for compliance, while enforcement will likely come from the eSafety Commissioner (to be confirmed).

The Commissioner herself was somewhat critical of the new policy on its announcement, but has since “welcomed” the legislation, albeit with significant caveats.

From the perspective of both technology and privacy, the legislation is a joke. Whatever tools are going to be used, there will be ways around them (VPNs, AI image filters…). And if tech companies are going to be required to hold yet more of our personal data, they just become a target for hackers and other malicious actors (cf. the great Optus data breach of 2022).

Even the Australian Human Rights Commission has been equivocal in showing any support for (or criticism of) the new law. While the “pros” may seem laudable, they are very generic and can be achieved by other, more specific and less onerous means. As for the “cons”, they are very significant, with serious implications and unintended consequences for personal privacy and individual freedoms.

Of course, domestic and international news media are taking a keen interest in Australia’s policy. The Federal Government is used to picking fights with social media companies (on paying for news content), tobacco giants (on plain packaging) and the vaping industry (restricting sales via pharmacies only), so is probably unconcerned about its public image abroad. And while some of this interest attempts to understand the ban and its implications (here and overseas), others such as Amnesty International, have been more critical. If anything, the ban will likely have a negative impact on Australia’s score for internet freedom, as assessed by Freedom House.

The aim of reducing, mitigating or removing “harm” experienced on-line is no doubt an admirable cause. But let’s consider the following:

  • On-line platforms such as social media are simply reflections of the society we live in. Such ills are not unique or limited to Facebook and others. Surely it would be far better to examine and address the root causes of such harms (and their real-world manifestations) rather than some of the on-line outcomes? This feels like a band-aid solution – totally inappropriate, based on the wrong diagnosis.
  • When it comes to addressing on-line abuse and bullying, our politicians need to think about their own behaviour. Their Orwellian use of language, their Parliamentary performances, their manipulation of the media for personal grandstanding, and their “calling out” of anything that does not accord with their own political dogma (while downplaying the numerous rorts, murky back-room deals and factional conflicts that pass for “party politics”). I can’t help thinking that the social media ban is either a deflection from their own failings, or a weird mea culpa where everyone else is having to pay the price for Parliamentary indiscretions.
  • A blanket “one size fits all” ban fails to recognise that children and young people mature and develop at different rates. Why is 16 seen as the magic age? (There are plenty of “dick heads” in their 20s, 30s, 40s etc. who get to vote, drive, reproduce and stand for public office, as well as post on social media…) From about the age of 12, I started reading books that would probably be deemed beyond my years. As a consequence, I by-passed young adult fiction, because much of it was naff in my opinion. Novels such as “Decline and Fall”, “A Clockwork Orange” or “The Drowned World” were essential parts of my formative reading. And let’s remember that as highly critical and critically acclaimed works of fiction, they should neither be regarded as the individual views of their authors, nor should they serve as life manuals for their readers. The clue is in the word “fiction”.
  • Children and young people can gain enormous benefits from using social media – connecting with family and friends, finding people with like-minded interests, getting tips on hobbies and sports, researching ideas and information for their school projects, learning about other communities and countries, even getting their daily news. Why deny them access to these rich resources, just because the Federal Government has a dearth of effective policies on digital platforms, and can’t figure a way of curbing the harms without taking away the benefits (or imposing more restrictions) for everyone else?
  • In another area of social policy designed to address personal harm, Governments are engaging with strategies such as pill-testing at music festivals, because in that example, they know that an outright ban on recreational drugs is increasingly ineffective. Likewise, wider sex, drug and alcohol education for children and young people. Draconian laws like the under-16 social media ban can end up absolving parents, teachers and other community leaders from their own responsibilities for parenting, education, civic guidance and instilling a sense of individual accountability. So perhaps more effort needs to go into helping minors in how they navigate social media, and improving their resilience levels when dealing with unpleasant stuff they are bound to encounter. Plus, making all social media users aware that they are personally responsible for what they post, share and like. Just as we shouldn’t allow our kids to cycle out on the street without undertaking some basic road safety education, I’d rather see children becoming internet savvy from an early age – not just against on-line bullying, but to be alert to financial scams and other consumer traps.
  • Finally, the new Australian legislation was introduced by the Labor Government, and had support from the Liberal Opposition, but not much from the cross-benches in the Senate. So it’s hardly a multi-partisan Act despite the alleged amount of public support expressed. It may even be pandering to the more reactionary elements in our society – such as religious fundamentalists and social conservatives. For example, banning under-16s from using social media could prevent them from seeking help and advice on things like health and reproductive rights, forced marriage, wage theft, coercive relationships and domestic violence. Just some of the unintended consequences likely to come as a result of this ill-considered and hastily assembled piece of legislation.

Whose side is AI on?

At the risk of coming across as some sort of Luddite, recent commentary on Artificial Intelligence suggests that it is only natural to have concerns and misgivings about its rapid development and widespread deployment. Of course, at its heart, it’s just another technology at our disposal – but by its very definition, generative AI is not passive, and is likely to impact all areas of our life, whether we invite it in or not.

Over the next few weeks, I will be discussing some non-technical themes relating to AI – creativity and AI, legal implications of AI, and form over substance when it comes to AI itself.

To start with, these are a few of the questions that I have been mulling over:

– Is AI working for us, as a tool that we control and manage? Or is AI working with us, in a partnership of equals? Or, more likely, is AI working against us, in the sense that it is happening to us, whether we like it or not, let alone whether we are actually aware of it?

– Is AI being wielded by a bunch of tech bros, who feed it with all their own prejudices, unconscious bias and cognitive limitations?

– Who decides what the Large Language Models (LLMs) that power AI are trained on?

– How does AI get permission to create derived content from our own Intellectual Property? Even if our content is on the web, being “publicly available” is not the same as “in the public domain”.

– Who is responsible for what AI publishes, and are AI agents accountable for their actions? In the event of false, incorrect, misleading or inappropriate content created by AI, how do we get to clarify the record, or seek a right of reply?

– Why are AI tools adding increased caveats? (“This is not financial advice, this is not to be relied on in a court of law, this is only based on information available as at a certain point in time, this is not a recommendation, etc.”) And is this only going to increase, as in the recent example of changes to Google’s AI-generated search results? (But really, do we need to be told that eating rocks or adding glue to pizza are bad ideas?)

– From my own experience, tools like ChatGPT return “deliberate” factual errors. Why? Is it to keep us on our toes (“Gotcha!”)? Is it to use our responses (or lack thereof) to train the model to be more accurate? Is it to underline the caveat emptor principle (“What, you relied on Otter to write your college essay? What were you thinking?”)? Or is it to counter plagiarism (“You could only have got that false information from our AI engine”)? If you think the latter is far-fetched, I refer you to the notion of “trap streets” in maps and directories.

– Should AI tools contain better attribution (sources and acknowledgments) in their results? Should they disclose the list of “ingredients” used (like food labelling?) Should they provide verifiable citations for their references? (It’s an idea that is gaining some attention.)

– Finally, the increased use of cloud-based services and crowd-sourced content (not just in AI tools) means that there is the potential for overreach when it comes to end user licensing agreements by ChatGPT, Otter, Adobe Firefly, Gemini, Midjourney etc. Only recently, Adobe had to clarify latest changes to their service agreement, in response to some social media criticism.

Next week: AI and the Human Factor

Free speech up for sale

When I was planning to post this article a couple of weeks ago, Elon Musk’s bid to buy Twitter and take it into private ownership was looking unlikely to succeed. Musk had just declined to take up the offer of a seat on the Twitter board, following which the board adopted a poison-pill defence against a hostile takeover. And just as I was about to go to press at my usual time, the news broke that the original bid had now been accepted by the board, so I hit the pause button instead and waited a day to see what the public reaction was. What a difference 72 hours (and US$44bn) can make… It seems “free speech” does indeed come with a price.

Of course, the Twitter transaction is still subject to shareholder approval and regulatory clearance, as well as confirmation of the funding structure, since Musk is having to raise about half the stated purchase price from banks.

Musk’s stated objective in acquiring Twitter was highlighted in a press release put out by the company:

“Free speech is the bedrock of a functioning democracy, and Twitter is the digital town square where matters vital to the future of humanity are debated,” said Mr. Musk. “I also want to make Twitter better than ever by enhancing the product with new features, making the algorithms open source to increase trust, defeating the spam bots, and authenticating all humans. Twitter has tremendous potential – I look forward to working with the company and the community of users to unlock it.”

This latest development in Musk’s apparent love/hate relationship with Twitter is bound to further divide existing users as to the billionaire’s intentions, as well as raise concerns about the broader implications for free speech. Musk himself has encouraged his “worst critics” to stay with the platform. Meanwhile, founder and former CEO Jack Dorsey has renewed his love of Twitter, despite only recently stepping away from the top job to spend more time on his other interests.

Personally, I’m not overly concerned that a platform such as Twitter is in private hands or under single ownership (subject, of course, to anti-trust rules, etc.). Far from creating an entrenched monopoly, it may actually encourage more competition by those who decide to opt out of Twitter. What I am less comfortable with is the notion that Twitter somehow acts as an exemplar of free speech, and as such, is a bastion of democracy.

On the positive side, we will be able to judge the veracity of Musk’s objectives against his actual deeds. For example, will Twitter actually introduce an edit button, make its algorithms open-source, exorcise the spam bots, verify users, and reduce/remove the platform’s reliance upon advertising?

On the negative side, what credible stance will Twitter now take on “free speech”, short of allowing an “anything goes” policy? If Musk is sincere that Twitter will be a platform for debating “matters vital to the future of humanity”, he may need to modify what he means by public discourse. Personal slanging matches with fellow-billionaires (and those less-able to defend themselves) do not make for an edifying public debating forum. Musk’s own disclosures about Twitter and his other business interests will also come under increased scrutiny. We know from past experience that Elon’s Tweets can move markets, and for this alone he should be aware of the responsibility that comes with ownership of the platform.

We have long understood that free speech is not the same as an unfettered right to say what you like in public – there are limits to freedom of expression, including accountability for the consequences of our words and actions, especially where they can cause harm. The broader challenges we face are:

  • technology outpacing regulation, when it comes to social media
  • defining what it means to “cause offence”
  • increased attacks on “mainstream media” and threats to freedom of the press

1. Just as the printing press, telegraphy, telephony, broadcasting and the internet each resulted in legislative changes, social media has continued to test the boundaries of regulation under which its predecessors now operate. Hitherto, much of the regulation that applies to social and digital media relates to privacy and data protection, as well as the existing law of defamation. But the latter varies considerably by jurisdiction, access to redress, and availability of remedies. Social media platforms have resisted attempts to treat them as traditional media (newspapers and broadcasters, which are subject to licensing and/or industry codes of practice) or treat them as publishers (and therefore responsible for content published on their platforms). (Then there is the question of how some social media platforms manage their tax affairs in the countries where they derive their revenue.)

The Australian government is attempting to challenge social media companies in a couple of ways. The first has been to force these platforms to pay for third-party news content from which they directly and indirectly generate advertising income. The second aims to hold social media more accountable for defamatory content published on their platforms, and remove the protection of “anonymity”. However, the former might be seen as a (belated) reaction to changing business models, and largely acting in favour of incumbents; while the latter is a technical response to the complex law of defamation in the digital age.

2. The bar for taking offence at what we see or hear on social media is now so low as to be almost meaningless. During previous battles over censorship in print, on stage or on screen, the argument could be made that, “if you don’t like something you aren’t being forced to watch it”, so maybe you are deliberately going in search of content just to find it offensive. The problem is, social media by its very nature is more pervasive and, fed by hidden algorithms, is actually more invasive than traditional print and broadcast media. Even as a casual, passive or innocent user, you cannot avoid seeing something that may “offend” you. Economic and technical barriers to entry are likewise so low that anyone and everyone can have their say on social media.

Leaving aside defamation laws, the concept of “hate speech” is being used to target content which is designed to advocate violence, or can be reasonably deemed or expected to have provoked violence or the threat of harm (personal, social or economic). I have problems with how we define hate speech in the current environment of public commentary and social media platforms, since the causal link between intent and consequence is not always that easy to establish.

However, I think we can agree that the use of content to vilify others simply based on their race, gender, sexuality, ethnicity, economic status, political affiliation or religious identity cannot be defended on the grounds of “free speech”, “fair comment” or “personal belief”. Yet how do we discourage such diatribes without accusations of censorship or authoritarianism, and how do we establish workable remedies to curtail the harmful effects of “hate speech” without infringing our civil liberties?

Overall, there is a need to establish the author’s intent (their purpose as well as any justification), plus apply a “reasonable person” standard, one that does not simply affirm confirmation bias of one sector of society against another. We must recognise that hiding behind our personal ideology cannot be an acceptable defence against facing the consequences of our actions.

3. I think it’s problematic that large sections of the traditional media have hardly covered themselves in glory when it comes to their ethical standards, and their willingness to misuse their public platforms, economic power and political influence to undertake nefarious behaviour and/or deny any responsibility for their actions. Think of the UK’s phone hacking scandals, which resulted in one press baron being deemed “unfit to run a company”, as well as leading to the closure of a major newspaper.

That said, it hardly justifies the attempts by some governments, populist leaders and authoritarian regimes to continuously undermine the integrity of the fourth estate. It certainly doesn’t warrant the prosecution and persecution of journalists who are simply trying to do their job, nor attacks and bans on the media unless they “toe the party line”.

Which brings me back to Twitter, and its responsibility in helping to preserve free speech, while preventing its platform being hijacked for the purposes of vilification and incitement to cause harm. If its new owner is serious about furthering public debate and mature discourse, then here are a few other enhancements he might want to consider:

  • in addition to an edit button, a “cooling off” period whereby users are given the opportunity to reconsider a like, a post or a retweet, based on user feedback or community interaction – after which time, they might be deemed responsible for the content as if they were the original author (potentially a way to mitigate “pile-ons”)
  • signing up to a recognised industry code of ethics, including a victim’s formal right of reply, access to mediation, and enforcement procedures and penalties against perpetrators who continually cross the line into vilification, or engage in content that explicitly or implicitly advocates violence or harm
  • a more robust fact-checking process and a policy of “truth in advertising” when it comes to claims or accusations made by or on behalf of politicians, political parties, or those seeking elected office
  • clearer delineation between content which is mere opinion, content which is in the nature of a public service (e.g., emergencies and natural disasters), content which is deemed part of a company’s public disclosure obligations, content which is advertorial, content which is on behalf of a political party or candidate, and content which is purely for entertainment purposes only (removing the bots may not be enough)
  • consideration of establishing an independent editorial board that can also advocate on behalf of alleged victims of vilification, and act as the initial arbiter of “public interest” matters (such as privacy, data protection, whistle-blowers etc.)

Finally, if Twitter is going to remove/reduce advertising, what will the commercial model look like?

Next week: The Crypto Conversation