The wrong end of the stick!

In a typical knee-jerk and censorial reaction, Australia’s Federal Parliament has recently approved legislation that will attempt to ban anyone under the age of 16 from accessing social media.

Knee-jerk, because the legislative process was rushed, with barely a 24-hour public consultation period. The policy itself was first aired less than six months earlier, and was not part of the Labor Government’s election manifesto in 2022.

Censorial, because Australia has a long history of heavy-handed censorship. I still recall that when I lived in Adelaide in 1970 (aged 10), broadcasts of the children’s TV series “Do Not Adjust Your Set” were accompanied by a “Mature Audience” rating – the same series I had watched when it was first broadcast in the UK in 1967, in the tea-time slot!

As yet another example of government not understanding technology, the implementation details have been left deliberately vague. At its simplest, the technology companies behind the world’s most popular social media platforms (to be defined) will be responsible for compliance, while enforcement will likely come from the eSafety Commissioner (to be confirmed).

The Commissioner herself was somewhat critical of the new policy on its announcement, but has since “welcomed” the legislation, albeit with significant caveats.

From the perspective of both technology and privacy, the legislation is a joke. Whatever tools are going to be used, there will be ways around them (VPNs, AI image filters…). And if tech companies are going to be required to hold yet more of our personal data, they simply become a target for hackers and other malicious actors (cf. the great Optus data breach of 2022).

Even the Australian Human Rights Commission has been equivocal in showing any support for (or criticism of) the new law. While the “pros” may seem laudable, they are very generic and can be achieved by other, more specific and less onerous means. As for the “cons”, they are very significant, with serious implications and unintended consequences for personal privacy and individual freedoms.

Of course, domestic and international news media are taking a keen interest in Australia’s policy. The Federal Government is used to picking fights with social media companies (on paying for news content), tobacco giants (on plain packaging) and the vaping industry (restricting sales via pharmacies only), so is probably unconcerned about its public image abroad. And while some of this interest attempts to understand the ban and its implications (here and overseas), others such as Amnesty International, have been more critical. If anything, the ban will likely have a negative impact on Australia’s score for internet freedom, as assessed by Freedom House.

The aim of reducing, mitigating or removing “harm” experienced on-line is no doubt an admirable cause. But let’s consider the following:

  • On-line platforms such as social media are simply reflections of the society we live in. Such ills are not unique or limited to Facebook and others. Surely it would be far better to examine and address the root causes of such harms (and their real-world manifestations) rather than some of the on-line outcomes? This feels like a band-aid solution – totally inappropriate, based on the wrong diagnosis.
  • When it comes to addressing on-line abuse and bullying, our politicians need to think about their own behaviour. Their Orwellian use of language, their Parliamentary performances, their manipulation of the media for personal grandstanding, and their “calling out” of anything that does not accord with their own political dogma (while downplaying the numerous rorts, murky back-room deals and factional conflicts that pass for “party politics”). I can’t help thinking that the social media ban is either a deflection from their own failings, or a weird mea culpa where everyone else is having to pay the price for Parliamentary indiscretions.
  • A blanket “one size fits all” ban fails to recognise that children and young people mature and develop at different rates. Why is 16 seen as the magic age? (There are plenty of “dick heads” in their 20s, 30s, 40s etc. who get to vote, drive, reproduce and stand for public office, as well as post on social media…) From about the age of 12, I started reading books that would probably be deemed beyond my years. As a consequence, I by-passed young adult fiction, because much of it was naff in my opinion. Novels such as “Decline and Fall”, “A Clockwork Orange” or “The Drowned World” were essential parts of my formative reading. And let’s remember that as highly critical and critically acclaimed works of fiction, they should neither be regarded as the individual views of their authors, nor should they serve as life manuals for their readers. The clue is in the word “fiction”.
  • Children and young people can gain enormous benefits from using social media – connecting with family and friends, finding people with like-minded interests, getting tips on hobbies and sports, researching ideas and information for their school projects, learning about other communities and countries, even getting their daily news. Why deny them access to these rich resources, just because the Federal Government has a dearth of effective policies on digital platforms, and can’t figure a way of curbing the harms without taking away the benefits (or imposing more restrictions) for everyone else?
  • In another area of social policy designed to address personal harm, Governments are engaging with strategies such as pill-testing at music festivals, because in that example, they know that an outright ban on recreational drugs is increasingly ineffective. Likewise, wider sex, drug and alcohol education for children and young people. Draconian laws like the under-16 social media ban can end up absolving parents, teachers and other community leaders from their own responsibilities for parenting, education, civic guidance and instilling a sense of individual accountability. So perhaps more effort needs to go into helping minors in how they navigate social media, and improving their resilience levels when dealing with unpleasant stuff they are bound to encounter. Plus, making all social media users aware that they are personally responsible for what they post, share and like. Just as we shouldn’t allow our kids to cycle out on the street without undertaking some basic road safety education, I’d rather see children becoming internet savvy from an early age – not just against on-line bullying, but to be alert to financial scams and other consumer traps.
  • Finally, the new Australian legislation was introduced by the Labor Government, and had support from the Liberal Opposition, but not much from the cross-benches in the Senate. So it’s hardly a multi-partisan Act despite the alleged amount of public support expressed. It may even be pandering to the more reactionary elements in our society – such as religious fundamentalists and social conservatives. For example, banning under-16s from using social media could prevent them from seeking help and advice on things like health and reproductive rights, forced marriage, wage theft, coercive relationships and domestic violence. Just some of the unintended consequences likely to come as a result of this ill-considered and hastily assembled piece of legislation.

Pudgy Penguins come to Melbourne

Last week, I got to chill out with some of the Pudgy Penguins crew, as they launched the Oceania chapter of their NFT community. In case you weren’t aware, Pudgy Penguins are one of the top NFT collections, and have built a loyal fan base for these digital characters.

I went to a major Pudgy Penguin “Pengu Fest” in Hong Kong last year, and got to see first hand how engaged their members are. I also gained some insights as to how this ecosystem enables their NFT holders to license the IP associated with their individual characters into royalty-based income. In short, a subset of the NFT characters are chosen to be turned into merchandise. (For example, Pudgy Penguin soft toys are available in major stores such as Walmart in the USA, and Big W in Australia.) Owners of the selected NFTs earn a percentage of the sales revenue (less tax and production costs etc.).
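As a rough illustration of how such a royalty split might work, here is a minimal sketch. The actual percentages, cost structure and tax treatment used by the Pudgy Penguins/Overpass programme are not public, so every figure and the function itself are invented for illustration only:

```python
def merch_royalty(gross_sales, production_costs, tax_rate, royalty_pct):
    """Hypothetical payout for an NFT holder whose character is licensed
    for merchandise. All parameters are invented; the real scheme's
    revenue split and deductions are not publicly documented."""
    net = (gross_sales - production_costs) * (1 - tax_rate)
    return net * royalty_pct

# Invented example: $500k in soft-toy sales, $200k production costs,
# 30% tax on the margin, and a 5% holder royalty on the net.
payout = merch_royalty(500_000, 200_000, 0.30, 0.05)
print(f"${payout:,.2f}")  # prints "$10,500.00"
```

The point of the model is simply that the holder’s income scales with retail sales of their character, after deductions, rather than with secondary NFT trading.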

The most recent collection of Pudgy collectibles is the Igloo figurines, which include early online access to Pudgy World. As the proud owner of one of these plastic figures, I’m still not sure what I have let myself in for…

As well as local meetups, the community can also interact through a trading card game called Vibes, likewise launched via the Overpass IP licensing platform.

Igloo Inc, the parent company of Pudgy Penguins and Overpass, has also announced it is launching a Layer 2 blockchain on Ethereum, to be called Abstract, which is being positioned as “the blockchain for consumer crypto”.

Whatever your views on crypto, NFTs, on-line worlds and collectibles, there is no doubt that Pudgy Penguins have set themselves up with the admirable goals of building a healthy and inclusive community, underpinned by the twin pillars of individual creativity and positive culture.

To crypto sceptics (and the merely crypto curious), the “community” and the enthusiasm of its members could resemble something of a cult. Someone did say during last week’s panel discussion that “I am my penguin, and my penguin is me”. But there are worse things for people to get involved with – and for younger people (I don’t regard myself as part of the Pudgy core demographic), I can see the appeal. For example, your Pudgy Penguin PFP can act as a protective avatar as you engage and explore online – allowing you to share only the personal information that you want to, while you build up trust with other community participants, and before you choose to meet IRL.

There was also a discussion about the difference between meme coins and NFTs – the short answer is that the former represent pure speculation, while the latter aim to create value for their holders. In fact, someone suggested that meme coin trading is not that different to punting on betting apps. But since most NFT collections are well down on their market highs of a couple of years ago, maybe NFT holders and communities like Pudgy Penguins are trying to convince themselves that they are still backing a winner?

Overall, however, I remain positive about the opportunities that NFTs represent – especially in the creative fields, and as a new model for IP licensing. Even if cute flightless birds from the southern hemisphere are not your thing, I don’t think you can dismiss or ignore the social, cultural and economic impact that NFTs will have.

Next week: “When I’m Sixty-Four”


AI & Music

In a recent episode of a TV detective show, an AI tech dude tries to outsmart an old school musicologist by re-creating the missing part of a vintage blues recording. The professor is asked to identify which is the “real” track, compared to the AI versions. The blues expert guesses correctly within a few beats – much to the frustration of the coder.

“How did you figure it out so quickly?”

“Easy – it’s not just what the AI added, but more importantly what it left out.”

The failure of AI to fully replicate the original song (by omitting a recording error that the AI has “corrected”) is another example showing how AI lacks the human touch, does not yet have intuition, and struggles to exercise informed judgement. Choices may often be a matter of taste, but innate human creativity cannot yet be replicated.

Soon, though, AI tools will displace a lot of work currently done by composers, lyricists, musicians, producers, arrangers and recording engineers. Already, digital audio workstation (DAW) software easily enables anyone with a computer or mobile device to create, record, sample and mix their own music, without needing to read a note of music and without having to strum a chord. Not only that, the software can emulate the acoustic properties of site-specific locations, and correct out-of-tune and out-of-time recordings. So anyone can pretend they are recording at Abbey Road.

I recently blogged about how AI is presenting fresh challenges (as well as opportunities) for the music industry. Expect to see “new” recordings released by (or attributed to) dead pop stars, especially if their back catalogue is out of copyright. This is about more than exhuming pre-existing recordings and enhancing them with today’s technology; this is deriving new content from a set of algorithms, trained on vast back catalogues, directed by specific prompts (“bass line in the style of John Entwistle”), and maybe given some core principles of musical composition.

And it’s the AI training that has prompted the major record companies to sue two AI software companies, a state of affairs which industry commentator Rob Abelow says was inevitable, because:

“It’s been clear that Suno & Udio have trained on copyrighted material with no plan to license or compensate”.

But on the other hand, streaming and automated music are not new. Sound designer and artist Tero Parviainen recently quoted Curtis Roads’ “The Computer Music Tutorial” (2023):

“A new industry has emerged around artificial intelligence (AI) services for creating generic popular music, including Flow Machines, IBM Watson Beat, Google Magenta’s NSynth Super, OpenAI’s Jukebox, Jukedeck, Melodrive, Spotify’s Creator Technology Research Lab, and Amper Music. This is the latest incarnation of a trend that started in the 1920s called Muzak, to provide licensed background music in elevators, business and dental offices, hotels, shopping malls, supermarkets, and restaurants”

And even before the arrival of Muzak in the 1920s, the world’s first streaming service was launched in the late 1890s, using the world’s first synthesizer – the Teleharmonium. (Thanks to Mark Brend’s “The Sound of Tomorrow”, I learned that Mark Twain was the first subscriber.)

For music purists and snobs (among whom I would probably count myself), all this talk about the impact of AI on music raises questions of aesthetics as well as ethics. But I’m reminded of some comments made by Pink Floyd about 50 years ago, when asked about their use of synthesizers, during the making of “Live at Pompeii”. In short, they argue that such machines still need human input, and as long as the musicians are controlling the equipment (and not the other way around), then what’s the problem? It’s not like they are cheating, disguising what they are doing, or compensating for a lack of ability – and the technology doesn’t make them better musicians, it just allows them to do different things:

“It’s like saying, ‘Give a man a Les Paul guitar, and he becomes Eric Clapton… It’s not true.'”

(Well, not yet, but I’m sure AI is working on it…)

Next week: Some final thoughts on AI

AI and Deep (and not so deep…) Fakes

The New York Times recently posted a quiz: “Can you tell the difference between a photograph and an image created by AI?”

Of the quiz examples, a mix of actual photos and AI-generated content, I was only able to identify 8 out of 10 correctly. My significant other claimed to have scored 10/10! In my defence, I correctly identified all of the AI images, but I mistook two authentic photos for fakes. Of the latter, one featured a bunch of famous people, most of whom I did not recognise, and the photo had been significantly cropped, removing much of the visual context (I also suspect it had been subject to some additional photo-shopping, given it was a publicity shot). The other real photo had been taken at such an unusual angle that it distorted the natural perspective, making some elements look wonky. (But maybe I’ve become more cynical or sceptical, and tend to disbelieve more of what I see, the more I know I am being exposed to AI-generated content?)

How can we remain alert to AI deceptions, while at the same time recognizing and embracing the potential that this amazing technology has to offer?

Taking my lead from the New York Times article, the following blog has been created using ChatGPT. All I did was enter some brief subject headings (including the title), a series of internet links, and a prompt to turn it into a blog on AI and fakes. Can you tell what is true, what is hallucination, and what important information has been left out?

AI and Deep (and not so deep…) Fakes

Artificial Intelligence (AI) has revolutionized numerous sectors, bringing forth innovations that were once unimaginable. However, one of the more controversial applications of AI is in the creation of deep fakes. These are hyper-realistic digital falsifications created using AI, capable of mimicking real people’s appearances and voices. This technology has sparked debates over ethics, security, and the future of digital media. Let’s delve into some recent examples and the implications they bring.

Dylan and Rodin: A Fabricated Encounter

A striking example of AI’s capacity to blur reality is the supposed encounter between Bob Dylan and the sculptor Auguste Rodin. In an article on Substack by Dave Haslam, the scenario of Dylan visiting Rodin’s Parisian studio is meticulously crafted, yet it’s a fabrication made plausible by AI tools. This fake story exemplifies how AI can create convincingly realistic yet entirely fictional narratives, challenging our ability to discern truth from fiction.

AI in Indian Elections

AI’s role extends beyond entertainment into more critical areas such as political campaigns. In the 2024 Indian elections, AI-generated deep fakes have been used to spread misinformation and manipulate public opinion. An article by ABC News details how AI is disrupting electoral processes, raising concerns about the integrity of democratic systems.

Mozilla’s Campaign Against Misinformation

In response to the growing threat of AI-generated misinformation, organizations like Mozilla have launched campaigns to protect the integrity of elections. Mozilla’s efforts aim to hold platforms like WhatsApp accountable for the spread of fake news and deep fakes, urging them to implement stricter controls to safeguard democratic processes.

Celebrity Persona Rights

AI’s ability to replicate real people has also sparked legal disputes, particularly concerning persona rights. Scarlett Johansson’s recent dispute with OpenAI highlights these issues. Johansson’s concerns about her likeness being used without consent underscore the broader debate about the ethical use of AI. As noted by Scientific American, the conflict brings to light questions about how AI-generated content can respect individual rights and privacy.

Legislative Actions on Deep Fakes

Governments are beginning to take action against the misuse of AI-generated content. In Australia, new laws are being proposed to combat the creation and distribution of deep fake pornography and other harmful images. The Guardian reports on legislative efforts to protect individuals from the malicious use of AI, reflecting a growing recognition of the need for regulation.

AI and Copyright

The intersection of AI and copyright law is another area of intense debate. A lawsuit involving famous authors against OpenAI raises questions about how AI uses copyrighted material for training and whether the content it generates can itself be copyrighted. This legal battle, reported by ABC News and Reuters, could set important precedents for how intellectual property laws adapt to the age of AI.

Conclusion

As AI technology continues to advance, its applications in creating deep fakes present both opportunities and challenges. While it can be used creatively and constructively, it also poses significant risks to privacy, security, and the authenticity of information. The ongoing developments in legislation, ethical guidelines, and technological safeguards will be crucial in shaping the future landscape of AI-generated content.

So, did you spot the “deliberate” error(s)? And what information was missing? Answers will be posted later this week.

Next week: AI & Music