The wrong end of the stick!

In a typical knee-jerk and censorial reaction, Australia’s Federal Parliament has recently approved legislation that will attempt to ban anyone under the age of 16 from accessing social media.

Knee-jerk, because the legislative process was rushed, with barely a 24-hour public consultation period. The policy itself had been aired less than six months earlier, and was not part of the Labor Government’s election manifesto in 2022.

Censorial, because Australia has a long history of heavy-handed censorship. I still recall when I lived in Adelaide in 1970 (aged 10), broadcasts of the children’s TV series “Do Not Adjust Your Set” were accompanied by a “Mature Audience” rating – the same series which I had watched when it was first broadcast in the UK in 1967 during the tea-time slot!

As yet another example of government not understanding technology, the implementation details have been left deliberately vague. At its simplest, the technology companies behind the world’s most popular social media platforms (to be defined) will be responsible for compliance, while enforcement will likely come from the eSafety Commissioner (to be confirmed).

The Commissioner herself was somewhat critical of the new policy on its announcement, but has since “welcomed” the legislation, albeit with significant caveats.

From the perspective of both technology and privacy, the legislation is a joke. Whatever tools are going to be used, there will be ways around them (VPN, AI image filters…). And if tech companies are going to be required to hold yet more of our personal data, they just become a bigger target for hackers and other malicious actors (cf. the great Optus data breach of 2022).

Even the Australian Human Rights Commission has been equivocal in showing any support for (or criticism of) the new law. While the “pros” may seem laudable, they are very generic and can be achieved by other, more specific and less onerous means. As for the “cons”, they are very significant, with serious implications and unintended consequences for personal privacy and individual freedoms.

Of course, domestic and international news media are taking a keen interest in Australia’s policy. The Federal Government is used to picking fights with social media companies (on paying for news content), tobacco giants (on plain packaging) and the vaping industry (restricting sales via pharmacies only), so is probably unconcerned about its public image abroad. And while some of this interest attempts to understand the ban and its implications (here and overseas), others, such as Amnesty International, have been more critical. If anything, the ban will likely have a negative impact on Australia’s score for internet freedom, as assessed by Freedom House.

The aim of reducing, mitigating or removing “harm” experienced on-line is no doubt an admirable cause. But let’s consider the following:

  • On-line platforms such as social media are simply reflections of the society we live in. The ills found there are neither unique nor limited to Facebook and its peers. Surely it would be far better to examine and address the root causes of such harms (and their real-world manifestations) rather than some of the on-line outcomes? This feels like a band-aid solution – totally inappropriate, based on the wrong diagnosis.
  • When it comes to addressing on-line abuse and bullying, our politicians need to think about their own behaviour. Consider their Orwellian use of language, their Parliamentary performances, their manipulation of the media for personal grandstanding, and their “calling out” of anything that does not accord with their own political dogma (while downplaying the numerous rorts, murky back-room deals and factional conflicts that pass for “party politics”). I can’t help thinking that the social media ban is either a deflection from their own failings, or a weird mea culpa where everyone else is having to pay the price for Parliamentary indiscretions.
  • A blanket “one size fits all” ban fails to recognise that children and young people mature and develop at different rates. Why is 16 seen as the magic age? (There are plenty of “dick heads” in their 20s, 30s, 40s etc. who get to vote, drive, reproduce and stand for public office, as well as post on social media…) From about the age of 12, I started reading books that would probably be deemed beyond my years. As a consequence, I bypassed young adult fiction, because much of it was naff in my opinion. Novels such as “Decline and Fall”, “A Clockwork Orange” or “The Drowned World” were essential parts of my formative reading. And let’s remember that as highly critical and critically acclaimed works of fiction, they should neither be regarded as the individual views of their authors, nor should they serve as life manuals for their readers. The clue is in the word “fiction”.
  • Children and young people can gain enormous benefits from using social media – connecting with family and friends, finding people with like-minded interests, getting tips on hobbies and sports, researching ideas and information for their school projects, learning about other communities and countries, even getting their daily news. Why deny them access to these rich resources, just because the Federal Government has a dearth of effective policies on digital platforms, and can’t figure a way of curbing the harms without taking away the benefits (or imposing more restrictions) for everyone else?
  • In another area of social policy designed to address personal harm, Governments are engaging with strategies such as pill-testing at music festivals, because in that example, they know that an outright ban on recreational drugs is increasingly ineffective. Likewise, wider sex, drug and alcohol education for children and young people. Draconian laws like the under-16 social media ban can end up absolving parents, teachers and other community leaders from their own responsibilities for parenting, education, civic guidance and instilling a sense of individual accountability. So perhaps more effort needs to go into helping minors in how they navigate social media, and improving their resilience levels when dealing with unpleasant stuff they are bound to encounter. Plus, making all social media users aware that they are personally responsible for what they post, share and like. Just as we shouldn’t allow our kids to cycle out on the street without undertaking some basic road safety education, I’d rather see children becoming internet savvy from an early age – not just against on-line bullying, but to be alert to financial scams and other consumer traps.
  • Finally, the new Australian legislation was introduced by the Labor Government, and had support from the Liberal Opposition, but not much from the cross-benches in the Senate. So it’s hardly a multi-partisan Act despite the alleged amount of public support expressed. It may even be pandering to the more reactionary elements in our society – such as religious fundamentalists and social conservatives. For example, banning under-16s from using social media could prevent them from seeking help and advice on things like health and reproductive rights, forced marriage, wage theft, coercive relationships and domestic violence. Just some of the unintended consequences likely to come as a result of this ill-considered and hastily assembled piece of legislation.

Whose side is AI on?

At the risk of coming across as some sort of Luddite, I think recent commentary on Artificial Intelligence shows it is only natural to have concerns and misgivings about its rapid development and widespread deployment. Of course, at its heart, it’s just another technology at our disposal – but by its very definition, generative AI is not passive, and is likely to impact all areas of our life, whether we invite it in or not.

Over the next few weeks, I will be discussing some non-technical themes relating to AI – creativity and AI, legal implications of AI, and form over substance when it comes to AI itself.

To start with, these are a few of the questions that I have been mulling over:

– Is AI working for us, as a tool that we control and manage? Or is AI working with us, in a partnership of equals? Or, more likely, is AI working against us, in the sense that it is happening to us, whether we like it or not, let alone whether we are actually aware of it?

– Is AI being wielded by a bunch of tech bros, who feed it with all their own prejudices, unconscious bias and cognitive limitations?

– Who decides what the Large Language Models (LLMs) that power AI are trained on?

– How does AI get permission to create derived content from our own Intellectual Property? Even if our content is on the web, being “publicly available” is not the same as “in the public domain”.

– Who is responsible for what AI publishes, and are AI agents accountable for their actions? In the event of false, incorrect, misleading or inappropriate content created by AI, how do we get to clarify the record, or seek a right of reply?

– Why are AI tools adding ever more caveats? (“This is not financial advice, this is not to be relied on in a court of law, this is only based on information available as at a certain point in time, this is not a recommendation, etc.”) And is this only going to increase, as in the recent example of changes to Google’s AI-generated search results? (But really, do we need to be told that eating rocks or adding glue to pizza are bad ideas?)

– From my own experience, tools like ChatGPT return “deliberate” factual errors. Why? Is it to keep us on our toes (“Gotcha!”)? Is it to use our responses (or lack thereof) to train the model to be more accurate? Is it to underline the caveat emptor principle (“What, you relied on Otter to write your college essay? What were you thinking?”)? Or is it to counter plagiarism (“You could only have got that false information from our AI engine”)? If you think the latter is far-fetched, I refer you to the notion of “trap streets” in maps and directories.

– Should AI tools contain better attribution (sources and acknowledgments) in their results? Should they disclose the list of “ingredients” used (like food labelling)? Should they provide verifiable citations for their references? (It’s an idea that is gaining some attention.)

– Finally, the increased use of cloud-based services and crowd-sourced content (not just in AI tools) means that there is the potential for overreach when it comes to end user licensing agreements by ChatGPT, Otter, Adobe Firefly, Gemini, Midjourney etc. Only recently, Adobe had to clarify the latest changes to its service agreement, in response to some social media criticism.

Next week: AI and the Human Factor

Is it OK to take selfies in the gym?

Time to discuss personal boundaries when it comes to taking or sharing photos and video.

First, whatever the circumstances, it is usually respectful (and may even be a legal obligation) to ask a person’s consent before sharing a photo or video of them. And of course, you should only share content that you own, unless you have permission from the copyright holder.

Second, the sharing of third-party content may be permissible (depending on the situation) if it’s covered by established copyright law (e.g., fair use, public domain, creative commons, open source) or other legal principle (e.g., public interest).

Third, there are also legal principles about taking photos of private property from a public place, which largely build on privacy and data protection laws. (See my previous blog on this topic.)

But in a selfie-driven and smartphone-obsessed world, I see too many examples of people snapping and sharing photos without a care in the world (either for themselves or for others).

The gym I attend is a private club. All members and guests must abide by the terms and conditions of entry, otherwise they can be asked to leave (and their membership cancelled).

One of those conditions states that gym users must not film or take photos without the express prior consent of the gym management.

Some users may argue, “It’s only a selfie of me flexing” or “I’m only filming my buddy lifting weights”. But gym walls are usually mirrored, so there is no guarantee that your video or photo won’t inadvertently capture someone’s image without their knowledge or permission; and if you then share it on social media, that is a potential breach of privacy.

(I have similar issues when people make audio and video calls, listen to music or watch videos on their smartphones in public places, without wearing earphones – I don’t want to listen to your crap!)

Going to the gym is an important part of my physical and mental well-being. I expect it to be a safe environment, and a small respite from the intrusions of the outside world.

Respect the space and the people who use it!

Next week: Perfect Days – and the Analogue Life

Digital Identity – Wallets are the key?

A few months ago, I wrote about trust and digital identity – the issue of who “owns” our identity, and why the concept of “self-sovereign digital identity” can help resolve problems of data security and data privacy.

The topic was aired at a recent presentation made by FinTech advisor, David Birch (hosted at Novatti) to an audience of Australian FinTech, Blockchain and identity experts.

David’s main thesis is that digital wallets will sit at the centre of the metaverse – linking web3 with digital assets and their owners. Wallets will not only be the “key” to transacting with digital assets (tokens), but proving “identity” will confirm “ownership” (or “control”) of wallets and their holdings.
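The equation of “identity” with “ownership” (or “control”) of a wallet usually comes down to a challenge-response protocol: whoever can sign a fresh challenge with the wallet’s private key is deemed to control it, and the wallet’s address is derived from the public key. As a toy, standard-library-only sketch of that idea (real wallets use ECDSA or EdDSA signatures, not this hash-based Lamport one-time scheme, and the variable names here are purely illustrative):

```python
import hashlib
import secrets

def keygen():
    # Private key: 256 pairs of random 32-byte secrets, one pair per bit
    # of a SHA-256 message digest.
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(256)]
    # Public key: the SHA-256 hash of each secret.
    pk = [(hashlib.sha256(a).digest(), hashlib.sha256(b).digest()) for a, b in sk]
    return sk, pk

def _bits(message: bytes):
    digest = hashlib.sha256(message).digest()
    return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(sk, message: bytes):
    # Reveal one secret from each pair, chosen by the corresponding digest bit.
    return [sk[i][bit] for i, bit in enumerate(_bits(message))]

def verify(pk, message: bytes, signature) -> bool:
    # Each revealed secret must hash to the matching half of the public key.
    return all(hashlib.sha256(sig).digest() == pk[i][bit]
               for i, (bit, sig) in enumerate(zip(_bits(message), signature)))

# "Wallet address" derived from the public key; control is proven by
# signing a fresh challenge issued by the verifier.
sk, pk = keygen()
address = hashlib.sha256(b"".join(h for pair in pk for h in pair)).hexdigest()
challenge = secrets.token_bytes(16)
assert verify(pk, challenge, sign(sk, challenge))
```

Note the caveat in the name: a Lamport keypair is safe for one signature only, since each signature reveals half the private key. The point of the sketch is simply the shape of the protocol – a fresh challenge, a signature, and a public key that anyone can check – which is what lets “proving identity” stand in for “proving control” of a wallet.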

The audience felt that in Australia, we face several challenges to the adoption of digital identity (and by extension, digital wallets):

1. Lack of common technical standards and lack of interoperability

2. Poor experience of government services (the nightmare that is myGov…)

3. Private sector complacency and the protected incumbency of oligopolies

4. Absence of incentives and overwhelming inertia (i.e., why move ahead of any government mandate?)

The example was given of a local company that has built digital identity solutions for consumer applications – but apparently, can’t attract any interest from local banks.

A logical conclusion from the discussion is that we will maintain multiple digital identities (profiles) and numerous digital wallets (applications), for different purposes. I don’t see a problem with this, as long as individuals get to decide who can access our personal data, where, when, for how long, and for what specific purposes.

Next week: Defunct apps and tech projects