Regulating Social Media…

The term “mainstream media” (or MSM) is generally used as a derogatory label for traditional news services (print, broadcast, on-line), especially by anyone who thinks that the MSM does not reflect what’s “really going on” in politics, society and the wider arena of current affairs. Depending on which conspiracy theories or political agenda you follow, if the MSM doesn’t agree with or express your viewpoint, it has become very easy to dismiss the Fourth Estate as an instrument of the (deep) State, or as merely serving the interests of an oligarchy of wealthy media owners and press barons. The counter-position is sometimes described as the Fifth Estate – those bloggers, podcasters, citizen journalists and marginalized voices who seek to pursue their version(s) of the truth via new content platforms.

Although the tradition of the counter-culture represented by this Fifth Estate has a very long history, its growth has been accelerated and amplified by new digital technologies in general, and by social media brands in particular. The problem is, not only is social media challenging (and ignoring) many of the rules and conventions that underpin the social contract between the public and traditional media outlets, but our governments and regulators cannot keep up with the pace of the technology.

In the late 1980s, when I studied sub-editing and basic journalism at night school, the Five Ws of Journalism were still taught as the essentials of any credible news outlet or publication. This was also a time when the media was going through significant changes – from new content technology to cross-border ownership, from multi-channel narrow-casting to 24-hour rolling news formats – yet the principles of source verification, fact-checking, libel laws and the right to reply were generally still seen as crucial to instilling public trust and confidence in the media (alongside a healthy dose of scepticism, so as not to believe everything we read in the paper!).

Now, with social media grabbing more of our attention, and with large, global and engaged audiences getting more of their news from these platforms, the term “MSM” could easily apply to social media itself. Hence the term “legacy media” has emerged to describe traditional news services.

Whether it’s Facebook wanting to be the “world’s newspaper” or X positioning itself as the global “public square”, it’s clear that these new media barons are in many ways no different to the aging media moguls they seek to displace. Newspapers don’t make money from their cover price or even subscriptions – most revenue comes from advertising and the “rivers of gold” it represents. Now, those advertising dollars are on-line, and tied to our social media accounts and the proliferation of posts, “likes” and “shares” (as well as our personal data).

So how should we think about regulating social media, if the old rules no longer apply?

First, the policy, regulatory and industry framework for overseeing social media needs to be simplified and streamlined. In Australia alone, based on a cursory internet search, I identified more than a dozen entities (government, agency, association) that have some form of oversight of social media. Apart from being highly inefficient, surely this arrangement doesn’t have to be so complicated? (And complexity and ambiguity can embolden those who seek to flout convention.)

Second, if a social media platform wants to be taken seriously as a trusted news source, and if it aspires to be recognised as a publication of record, it has to adopt some fundamental principles such as the Five Ws. It’s all very well saying that these platforms are anti-censorship and pro-free speech, but those rights come with a heap of legal and social responsibilities. To argue that these platforms are merely conduits for public opinion (rather than being content publishers) undermines agency theory. Given that I am not entitled to a social media account (I don’t think it’s yet risen to being a fundamental human right?), and that I don’t own my account (often, not even the content I post), social media companies act as our agents. They give us permission to use their services, and they ultimately control what we post on their digital real estate. They also use algorithms to manipulate what is served up in our feeds. Social media should therefore be held accountable for content that it enables to be disseminated; take more responsibility for any libel, lies or dis/misinformation issued on its platform; and risk prosecution for any content that promotes, encourages or incites violence, insurrection or public disorder.

Third, the fact that much of the content on social media is user-generated should not absolve these platforms from having to provide a formal right of reply, as well as adhering to a recognised and independent dispute resolution service. This will enable alleged victims of on-line bullying, harassment, personal abuse and outright lies to seek redress, without having to embark on expensive legal proceedings. (Of course, if social media companies maintained fact checking and other verification tools, they should be able to mitigate, if not eradicate, the need to invoke these mechanisms in the first place.)

Finally, any reputable social media company should be willing to sign up to minimum standards of practice in respect of content originated or disseminated on its platform, as well as observing existing regulation around personal data, data protection, cyber-security, privacy, intellectual property rights and general consumer protections. At the very least, social media has to prove itself a credible alternative to the legacy media it seeks to displace; otherwise it is not the solution, just another part of the problem.

AI and Deep (and not so deep…) Fakes

The New York Times recently posted a quiz: “Can you tell the difference between a photograph and an image created by AI?”

Of the quiz examples – a mix of actual photos and AI-generated content – I correctly identified only 8 out of 10. My significant other claimed to have scored 10/10! In my defence, I correctly identified all of the AI images, but I mistook two authentic photos for fakes. Of those two, one featured a bunch of famous people, most of whom I did not recognise, and the photo had been significantly cropped, removing much of the visual context (I also suspect it had been subject to some additional photo-shopping, given it was a publicity shot). The other real photo had been taken at such an unusual angle that it distorted the natural perspective, making some elements look wonky. (But maybe I’ve become more cynical or sceptical, and therefore tend to disbelieve more of what I see, the more I know I am being exposed to AI-generated content?)

How can we remain alert to AI deceptions, while at the same time recognizing and embracing the potential that this amazing technology has to offer?

Taking my lead from the New York Times article, I created the following blog using ChatGPT. All I did was enter some brief subject headings (including the title), a series of internet links, and a prompt to turn it all into a blog on AI and fakes. Can you tell what is true, what is hallucination, and what important information has been left out?
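For anyone curious about the mechanics, the minimal sketch below shows roughly what that looks like if done via the OpenAI Python client rather than the chat window – the model name, headings and links are illustrative placeholders, not my actual inputs:

```python
# Minimal sketch only – assumes the OpenAI Python client (v1+) and an
# OPENAI_API_KEY environment variable. The headings, links and model
# name are placeholders, not the actual inputs used for this post.
from openai import OpenAI

client = OpenAI()

headings = [
    "AI and Deep (and not so deep...) Fakes",  # the title
    "Deep fakes and elections",                # placeholder heading
    "Celebrity persona rights",                # placeholder heading
]
links = [
    "https://example.com/deep-fake-article",   # placeholder links
    "https://example.com/ai-copyright-lawsuit",
]

# Combine the headings and links into a single prompt asking for a blog post
prompt = (
    "Turn the following subject headings and links into a blog post on AI and fakes:\n\n"
    + "\n".join(headings)
    + "\n\n"
    + "\n".join(links)
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```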

AI and Deep (and not so deep…) Fakes

Artificial Intelligence (AI) has revolutionized numerous sectors, bringing forth innovations that were once unimaginable. However, one of the more controversial applications of AI is in the creation of deep fakes. These are hyper-realistic digital falsifications created using AI, capable of mimicking real people’s appearances and voices. This technology has sparked debates over ethics, security, and the future of digital media. Let’s delve into some recent examples and the implications they bring.

Dylan and Rodin: A Fabricated Encounter

A striking example of AI’s capacity to blur reality is the supposed encounter between Bob Dylan and the sculptor Auguste Rodin. In an article on Substack by Dave Haslam, the scenario of Dylan visiting Rodin’s Parisian studio is meticulously crafted, yet it’s a fabrication made plausible by AI tools. This fake story exemplifies how AI can create convincingly realistic yet entirely fictional narratives, challenging our ability to discern truth from fiction.

AI in Indian Elections

AI’s role extends beyond entertainment into more critical areas such as political campaigns. In the 2024 Indian elections, AI-generated deep fakes have been used to spread misinformation and manipulate public opinion. An article by ABC News details how AI is disrupting electoral processes, raising concerns about the integrity of democratic systems.

Mozilla’s Campaign Against Misinformation

In response to the growing threat of AI-generated misinformation, organizations like Mozilla have launched campaigns to protect the integrity of elections. Mozilla’s efforts aim to hold platforms like WhatsApp accountable for the spread of fake news and deep fakes, urging them to implement stricter controls to safeguard democratic processes.

Celebrity Persona Rights

AI’s ability to replicate real people has also sparked legal disputes, particularly concerning persona rights. Scarlett Johansson’s recent dispute with OpenAI highlights these issues. Johansson’s concerns about her likeness being used without consent underscore the broader debate about the ethical use of AI. As noted by Scientific American, the conflict brings to light questions about how AI-generated content can respect individual rights and privacy.

Legislative Actions on Deep Fakes

Governments are beginning to take action against the misuse of AI-generated content. In Australia, new laws are being proposed to combat the creation and distribution of deep fake pornography and other harmful images. The Guardian reports on legislative efforts to protect individuals from the malicious use of AI, reflecting a growing recognition of the need for regulation.

AI and Copyright

The intersection of AI and copyright law is another area of intense debate. A lawsuit involving famous authors against OpenAI raises questions about how AI uses copyrighted material for training and whether the content it generates can itself be copyrighted. This legal battle, reported by ABC News and Reuters, could set important precedents for how intellectual property laws adapt to the age of AI.

Conclusion

As AI technology continues to advance, its applications in creating deep fakes present both opportunities and challenges. While it can be used creatively and constructively, it also poses significant risks to privacy, security, and the authenticity of information. The ongoing developments in legislation, ethical guidelines, and technological safeguards will be crucial in shaping the future landscape of AI-generated content.

So, did you spot the “deliberate” error(s)? And what information was missing? Answers will be posted later this week.

Next week: AI & Music

The Five Ws of Journalism

The importance of a free press within a democratic society cannot be overstated: without the Fourth Estate, who will “speak truth unto power”? The role of the printing press was critical to the Reformation, the Enlightenment, and the great political reforms of nineteenth-century Britain.

But lapses in journalistic behaviour and a decline in editorial standards over the past few decades have brought the press and broadcast media into increasing disrepute – to the point that mainstream media (MSM) has become a pejorative term, and social media (SoMe) claims to be the last bastion of free speech.

I think the truth lies somewhere between those two positions – I don’t believe that the MSM is totally devoid of ethics, nor do I believe that SoMe will displace formal journalism (and it certainly isn’t without its own ethical challenges when it comes to dis/misinformation and hate speech).

But what do I mean by “formal journalism”? After all, we have seen a raft of platforms giving rise to “citizen journalism” and other services that rely heavily on community-sourced content, but few of these platforms appear to operate to the same professional standards of traditional reportage, fact-checking, investigative journalism or news dissemination. It also remains to be seen whether these new media channels can displace traditional print (and online) news media as “papers of record”.

As part of a career transition, I took a night class in journalism and sub-editing, with a view to becoming a writer or editor. Although I did work as an editor for many years, it was in the field of legal publishing, not for a newspaper or magazine. Even though the course I completed was not a traditional degree in journalism, communications or media studies, I was still taught some of the key tenets of serious journalism, principally the Five Ws – the “who, what, where, when and why” of any news event (with the “how” also being an important component of any credible story).

This foundational approach to news reporting underpinned many of the most significant pieces of investigative journalism in the late 20th century, some of which changed laws and government policies, as well as influencing public opinion. Think of the role of the press in breaking the thalidomide story, publishing the Pentagon Papers, or exposing the Watergate cover-up. Even the Panama Papers relied on the collaboration of traditional news media outlets to bring the story to public attention. More recently, the work of Private Eye in helping to bring the UK’s Post Office miscarriage of justice to light is a prime example of the power of journalistic persistence in search of the truth.

On the other hand, a raft of tabloid scandals has dented public trust in the traditional press, in particular the phone hacking exploits within the British media. Here in Australia, a recent high-profile defamation case prompted the judge to put TV journalism under the microscope – and neither broadcaster involved in the case came away covered in glory. In particular, the court questioned whether the journalists involved had breached their own industry code of practice, by failing to check their facts and by inadequately testing the credibility of their witnesses. The grubby practice of chequebook journalism also came under renewed scrutiny, as did an ill-advised speech on TV by one of the parties that could have been prejudicial to a criminal case. More significantly, one media organisation displayed a willingness to believe (and even assert) that there had been a political conspiracy to suppress an alleged crime, when no such evidence of a cover-up had been established. This case (and its associated claims and counterclaims) still has a fair way to go. It has already embroiled senior politicians (some of whom have been accused of lying about what they knew, when and how), civil servants, political staffers, public prosecutors, multiple police forces, so-called “fixers” and “influencers” with their insidious “backgrounding”, and a number of TV producers who will probably never work in the industry again.

Added to this sh!t show has been the misnaming of a suspected murderer by one of the above-mentioned TV news channels. This latest major faux pas is believed to have been the result of “reporting” false, misleading or mischievous commentary circulating on social media.

Apart from undertaking more rigorous fact-checking, and enforcing the established journalistic practice of getting actual confirmation of events from at least two credible sources, the news media also needs to make a greater distinction between the facts themselves on the one hand, and conjecture, speculation, opinion, analysis and commentary on the other.

Next week: Is it OK to take selfies in the gym?


Free speech up for sale

When I was planning to post this article a couple of weeks ago, Elon Musk’s bid to buy Twitter and take it into private ownership was looking unlikely to succeed. Musk had just declined to take up the offer of a seat on the Twitter board, following which the board adopted a poison-pill defence against a hostile takeover. And just as I was about to go to press at my usual time, the news broke that the original bid had now been accepted by the board, so I hit the pause button instead and waited a day to see what the public reaction was. What a difference 72 hours (and US$44bn) can make… It seems “free speech” does indeed come with a price.

Of course, the Twitter transaction is still subject to shareholder approval and regulatory clearance, as well as confirmation of the funding structure, since Musk is having to raise about half the stated purchase price from banks.

Musk’s stated objective in acquiring Twitter was highlighted in a press release put out by the company:

“Free speech is the bedrock of a functioning democracy, and Twitter is the digital town square where matters vital to the future of humanity are debated,” said Mr. Musk. “I also want to make Twitter better than ever by enhancing the product with new features, making the algorithms open source to increase trust, defeating the spam bots, and authenticating all humans. Twitter has tremendous potential – I look forward to working with the company and the community of users to unlock it.”

This latest development in Musk’s apparent love/hate relationship with Twitter is bound to further divide existing users as to the billionaire’s intentions, as well as raise concerns about the broader implications for free speech. Musk himself has encouraged his “worst critics” to stay with the platform. Meanwhile, founder and former CEO Jack Dorsey has renewed his love of Twitter, despite only recently stepping away from the top job to spend more time on his other interests.

Personally, I’m not overly concerned that a platform such as Twitter is in private hands or under single ownership (subject, of course, to anti-trust rules, etc.). Far from creating an entrenched monopoly, it may actually encourage more competition by those who decide to opt out of Twitter. What I am less comfortable with is the notion that Twitter somehow acts as an exemplar of free speech, and as such, is a bastion of democracy.

On the positive side, we will be able to judge the veracity of Musk’s objectives against his actual deeds. For example, will Twitter actually introduce an edit button, make its algorithms open-source, exorcise the spam bots, verify users, and reduce/remove the platform’s reliance upon advertising?

On the negative side, what credible stance will Twitter now take on “free speech”, short of allowing an “anything goes” policy? If Musk is sincere that Twitter will be a platform for debating “matters vital to the future of humanity”, he may need to modify what he means by public discourse. Personal slanging matches with fellow billionaires (and those less able to defend themselves) do not make for an edifying public debating forum. Musk’s own disclosures about Twitter and his other business interests will also come under increased scrutiny. We know from past experience that Elon’s tweets can move markets, and for that reason alone he should be aware of the responsibility that comes with ownership of the platform.

We have long understood that free speech is not the same as an unfettered right to say what you like in public – there are limits to freedom of expression, including accountability for the consequences of our words and actions, especially where they can cause harm. The broader challenges we face are:

  • technology outpacing regulation, when it comes to social media
  • defining what it means to “cause offence”
  • increased attacks on “mainstream media” and threats to freedom of the press

1. Just as the printing press, telegraphy, telephony, broadcasting and the internet each resulted in legislative changes, social media has continued to test the boundaries of the regulation under which its predecessors now operate. Hitherto, much of the regulation that applies to social and digital media relates to privacy and data protection, as well as the existing law of defamation. But the latter varies considerably by jurisdiction, access to redress, and availability of remedies. Social media platforms have resisted attempts to treat them as traditional media (newspapers and broadcasters, which are subject to licensing and/or industry codes of practice) or as publishers (and therefore responsible for content published on their platforms). (Then there is the question of how some social media platforms manage their tax affairs in the countries where they derive their revenue.)

The Australian government is attempting to challenge social media companies in a couple of ways. The first has been to force these platforms to pay for the third-party news content from which they directly and indirectly generate advertising income. The second aims to hold social media more accountable for defamatory content published on their platforms, and to remove the protection of “anonymity”. However, the former might be seen as a (belated) reaction to changing business models, one that largely acts in favour of incumbents, while the latter is a technical response to the complex law of defamation in the digital age.

2. The bar for taking offence at what we see or hear on social media is now so low as to be almost meaningless. During previous battles over censorship in print, on stage or on screen, the argument could be made that “if you don’t like something, you aren’t being forced to watch it” – so perhaps you are deliberately going in search of content just to find it offensive. The problem is, social media by its very nature is more pervasive and, fed by hidden algorithms, more invasive than traditional print and broadcast media. Even as a casual, passive or innocent user, you cannot avoid seeing something that may “offend” you. Economic and technical barriers to entry are likewise so low that anyone and everyone can have their say on social media.

Leaving aside defamation laws, the concept of “hate speech” is being used to target content which is designed to advocate violence, or which can reasonably be deemed or expected to provoke violence or the threat of harm (personal, social or economic). I have problems with how we define hate speech in the current environment of public commentary and social media platforms, since the causal link between intent and consequence is not always easy to establish.

However, I think we can agree that the use of content to vilify others simply based on their race, gender, sexuality, ethnicity, economic status, political affiliation or religious identity cannot be defended on the grounds of “free speech”, “fair comment” or “personal belief”. Yet how do we discourage such diatribes without accusations of censorship or authoritarianism, and how do we establish workable remedies to curtail the harmful effects of “hate speech” without infringing our civil liberties?

Overall, there is a need to establish the author’s intent (their purpose as well as any justification), plus apply a “reasonable person” standard – one that does not simply affirm the confirmation bias of one sector of society against another. We must recognise that hiding behind our personal ideology cannot be an acceptable defence against facing the consequences of our actions.

3. I think it’s problematic that large sections of the traditional media have hardly covered themselves in glory when it comes to their ethical standards, and their willingness to misuse their public platforms, economic power and political influence to undertake nefarious behaviour and/or deny any responsibility for their actions. Think of the UK’s phone hacking scandals, which resulted in one press baron being deemed “unfit to run a company”, as well as leading to the closure of a major newspaper.

That said, it hardly justifies the attempts by some governments, populist leaders and authoritarian regimes to continuously undermine the integrity of the Fourth Estate. It certainly doesn’t warrant the prosecution and persecution of journalists who are simply trying to do their job, nor attacks and bans on the media unless they “toe the party line”.

Which brings me back to Twitter, and its responsibility in helping to preserve free speech, while preventing its platform being hijacked for the purposes of vilification and incitement to cause harm. If its new owner is serious about furthering public debate and mature discourse, then here are a few other enhancements he might want to consider:

  • in addition to an edit button, a “cooling off” period whereby users are given the opportunity to reconsider a like, a post or a retweet, based on user feedback or community interaction – after which time, they might be deemed responsible for the content as if they were the original author (potentially a way to mitigate “pile-ons”)
  • signing up to a recognised industry code of ethics, including a victim’s formal right of reply, access to mediation, and enforcement procedures and penalties against perpetrators who continually cross the line into vilification, or engage in content that explicitly or implicitly advocates violence or harm
  • a more robust fact-checking process and a policy of “truth in advertising” when it comes to claims or accusations made by or on behalf of politicians, political parties, or those seeking elected office
  • clearer delineation between content which is mere opinion, content which is in the nature of a public service (e.g., emergencies and natural disasters), content which is deemed part of a company’s public disclosure obligations, content which is advertorial, content which is on behalf of a political party or candidate, and content which is purely for entertainment (removing the bots may not be enough)
  • consideration of establishing an independent editorial board that can also advocate on behalf of alleged victims of vilification, and act as the initial arbiter of “public interest” matters (such as privacy, data protection, whistle-blowers etc.)

Finally, if Twitter is going to remove/reduce advertising, what will the commercial model look like?

Next week: The Crypto Conversation