AI vs IP

Can Artificial Intelligence software claim copyright in any work that was created using its algorithms?

The short answer is “no”, since only humans can establish copyright in original creative works. Copyright can be assigned to a company or trust, or a work can be licensed under various forms of Creative Commons, but there still needs to be a human author behind the copyrighted material. Copyright may also lapse over time, at which point the work becomes part of the public domain.

However, the extent to which a human author can claim copyright in a work created with the help of AI is now being challenged. A recent case in the USA determined that the author of a graphic novel, which included images created using Midjourney, cannot claim copyright in those images. While it was accepted that the author devised the text and other prompts that the software used as generative inputs, the output images themselves could not be the subject of copyright protection – meaning they are either in the public domain, or they fall under some category of Creative Commons. This case also indicates that, in the USA at least, failing to declare the use of AI tools in a work when applying for copyright registration may result in a rejected application.

Does this decision mean that the people who write AI programs could claim copyright in works created using their software? Probably not – as this would imply that Microsoft could establish copyright in every novel written using Word, not least because of its grammar and spelling tools.

On the other hand, programmers and software developers who use copyright material to train their models may need to obtain relevant permission from the copyright holders (as would anyone using the AI tools and who uses copyright content as prompts), unless they could claim exemptions under “fair dealing” or “fair use” provisions.

We’re still early in the lengthy process whereby copyright and other intellectual property laws are tested and re-calibrated in the wake of AI. Maybe the outcomes of future copyright cases will depend on whether you are Ed Sheeran or Robin Thicke….

Next week: Customer Experience vs Process Design


Digital Perfectionism?

In stark contrast to my last blog on AI and digital humans, I’ve just been reading Damon Krukowski‘s book, “The New Analog – Listening and Reconnecting in a Digital World”, published in 2017. It’s an essential text for anyone interested in the impact of sound compression, noise filtering, loudness and streaming on the music industry (and much more besides).

There are two main theses the author explores:

1. The paradoxical corollary to Moore’s Law on the rate of increase in computing power is Murphy’s Moore’s Law: in striving for improved performance and perfectionism in all things digital, we equally risk amplifying the limitations inherent in analog technology. In short, the more something improves, the more it must also get worse. (See also my previous blogs on the problem of digital decay, and the beauty of decay music.)

2. In the realm of digital music and other platforms (especially social media), stripping out the noise (to leave only the signal) results in an impoverished listening, cultural and social experience: flatter sound, reduced dynamics, narrower tonal variation, limited nuance, an absence of context. In the case of streaming music, we lose the physical connection with the original artwork, accompanying sleeve notes, creative credits and even the original year of publication.

Thinking about #1 above, imagine this principle applied to #AI: would the pursuit of “digital perfectionism” mean we lose a large part of what makes analogue homo sapiens more “human”? Would we end up compressing/removing “noise” such as doubt, uncertainty, curiosity, irony, idiosyncrasies, cognitive diversity, quirkiness, humour etc.?

As for #2, like the author, I’m not a total Luddite when it comes to digital music, but I totally understand his frustration (philosophical, phonic and financial) when discussing the way CDs exploit “loudness” (in the technical sense), how .mp3 files compress more data into less space (resulting in a deterioration in overall quality), and the way streaming platforms have eroded artists’ traditional commercial return on their creativity.

The book also discusses the role of social media platforms in extracting value from the content that users contribute, reducing it to homogenised data lakes, selling it to the highest bidder, and compressing all our personal observations, relationships and original ideas (the things that make us nuanced human beings) into a sterilised drip-feed of “curated” content.

In the narrative on music production, and how “loudness” took hold in the mid-1990s, Krukowski takes specific aim at the dreaded sub-woofer. These speakers now pervade every concert, home entertainment system, desktop computer and car stereo. They even bring a distorted physical presence into our listening experience:

“Nosebleeds at festivals, trance states at dance clubs, intimidation by car audio…. When everything is louder than everything else, sounds lose context and thus meaning – even the meaning of loud.”

The main issue I have with digital music is that we as listeners have very little control over how we hear it – apart from adjusting the volume. So again, any nuance or variation has been ironed out, right to the point of consumption – we can’t even adjust the stereo balance. I recall that my boom box in the 1980s had separate volume controls for each speaker, and a built-in graphic equalizer. To paraphrase Joy Division, “We’ve Lost Control”.

Next week: I CAN live without my radio…

Free speech up for sale

When I was planning to post this article a couple of weeks ago, Elon Musk’s bid to buy Twitter and take it into private ownership was looking unlikely to succeed. Musk had just declined to take up the offer of a seat on the Twitter board, following which the board adopted a poison-pill defence against a hostile takeover. And just as I was about to go to press at my usual time, the news broke that the original bid had now been accepted by the board, so I hit the pause button instead and waited a day to see what the public reaction was. What a difference 72 hours (and US$44bn) can make… It seems “free speech” does indeed come with a price.

Of course, the Twitter transaction is still subject to shareholder approval and regulatory clearance, as well as confirmation of the funding structure, since Musk is having to raise about half the stated purchase price from banks.

Musk’s stated objective in acquiring Twitter was highlighted in a press release put out by the company:

“Free speech is the bedrock of a functioning democracy, and Twitter is the digital town square where matters vital to the future of humanity are debated,” said Mr. Musk. “I also want to make Twitter better than ever by enhancing the product with new features, making the algorithms open source to increase trust, defeating the spam bots, and authenticating all humans. Twitter has tremendous potential – I look forward to working with the company and the community of users to unlock it.”

This latest development in Musk’s apparent love/hate relationship with Twitter is bound to further divide existing users as to the billionaire’s intentions, as well as raise concerns about the broader implications for free speech. Musk himself has encouraged his “worst critics” to stay with the platform. Meanwhile, founder and former CEO Jack Dorsey has renewed his love of Twitter, despite only recently stepping away from the top job to spend more time on his other interests.

Personally, I’m not overly concerned that a platform such as Twitter is in private hands or under single ownership (subject, of course, to anti-trust rules, etc.). Far from creating an entrenched monopoly, it may actually encourage more competition by those who decide to opt out of Twitter. What I am less comfortable with is the notion that Twitter somehow acts as an exemplar of free speech, and as such, is a bastion of democracy.

On the positive side, we will be able to judge the veracity of Musk’s objectives against his actual deeds. For example, will Twitter actually introduce an edit button, make its algorithms open-source, exorcise the spam bots, verify users, and reduce/remove the platform’s reliance upon advertising?

On the negative side, what credible stance will Twitter now take on “free speech”, short of allowing an “anything goes” policy? If Musk is sincere that Twitter will be a platform for debating “matters vital to the future of humanity”, he may need to modify what he means by public discourse. Personal slanging matches with fellow billionaires (and those less able to defend themselves) do not make for an edifying public debating forum. Musk’s own disclosures about Twitter and his other business interests will also come under increased scrutiny. We know from past experience that Elon’s Tweets can move markets, and for this reason alone he should be aware of the responsibility that comes with ownership of the platform.

We have long understood that free speech is not the same as an unfettered right to say what you like in public – there are limits to freedom of expression, including accountability for the consequences of our words and actions, especially where they can cause harm. The broader challenges we face are:

  • technology outpacing regulation, when it comes to social media
  • defining what it means to “cause offence”
  • increased attacks on “mainstream media” and threats to freedom of the press

1. Just as the printing press, telegraphy, telephony, broadcasting and the internet each resulted in legislative changes, social media has continued to test the boundaries of regulation under which its predecessors now operate. Hitherto, much of the regulation that applies to social and digital media relates to privacy and data protection, as well as the existing law of defamation. But the latter varies considerably by jurisdiction, as well as in access to redress and availability of remedies. Social media platforms have resisted attempts to treat them as traditional media (newspapers and broadcasters, which are subject to licensing and/or industry codes of practice) or as publishers (and therefore responsible for content published on their platforms). (Then there is the question of how some social media platforms manage their tax affairs in the countries where they derive their revenue.)

The Australian government is attempting to challenge social media companies in a couple of ways. The first has been to force these platforms to pay for third-party news content from which they directly and indirectly generate advertising income. The second aims to hold social media more accountable for defamatory content published on their platforms, and remove the protection of “anonymity”. However, the former might be seen as a (belated) reaction to changing business models, and largely acting in favour of incumbents; while the latter is a technical response to the complex law of defamation in the digital age.

2. The bar for taking offence at what we see or hear on social media is now so low as to be almost meaningless. During previous battles over censorship in print, on stage or on screen, the argument could be made that, “if you don’t like something you aren’t being forced to watch it”, so maybe you are deliberately going in search of content just to find it offensive. The problem is, social media by its very nature is more pervasive and, fed by hidden algorithms, is actually more invasive than traditional print and broadcast media. Even as a casual, passive or innocent user, you cannot avoid seeing something that may “offend” you. Economic and technical barriers to entry are likewise so low that anyone and everyone can have their say on social media.

Leaving aside defamation laws, the concept of “hate speech” is being used to target content which is designed to advocate violence, or which can reasonably be deemed or expected to provoke violence or the threat of harm (personal, social or economic). I have problems with how we define hate speech in the current environment of public commentary and social media platforms, since the causal link between intent and consequence is not always that easy to establish.

However, I think we can agree that the use of content to vilify others simply based on their race, gender, sexuality, ethnicity, economic status, political affiliation or religious identity cannot be defended on the grounds of “free speech”, “fair comment” or “personal belief”. Yet how do we discourage such diatribes without accusations of censorship or authoritarianism, and how do we establish workable remedies to curtail the harmful effects of “hate speech” without infringing our civil liberties?

Overall, there is a need to establish the author’s intent (their purpose as well as any justification), plus apply a “reasonable person” standard, one that does not simply affirm confirmation bias of one sector of society against another. We must recognise that hiding behind our personal ideology cannot be an acceptable defence against facing the consequences of our actions.

3. I think it’s problematic that large sections of the traditional media have hardly covered themselves in glory when it comes to their ethical standards, and their willingness to misuse their public platforms, economic power and political influence to undertake nefarious behaviour and/or deny any responsibility for their actions. Think of the UK’s phone hacking scandals, which resulted in one press baron being deemed “unfit to run a company”, as well as leading to the closure of a major newspaper.

That said, it hardly justifies the attempts by some governments, populist leaders and authoritarian regimes to continuously undermine the integrity of the fourth estate. It certainly doesn’t warrant the prosecution and persecution of journalists who are simply trying to do their job, nor attacks and bans on the media unless they “toe the party line”.

Which brings me back to Twitter, and its responsibility in helping to preserve free speech, while preventing its platform being hijacked for the purposes of vilification and incitement to cause harm. If its new owner is serious about furthering public debate and mature discourse, then here are a few other enhancements he might want to consider:

  • in addition to an edit button, a “cooling off” period whereby users are given the opportunity to reconsider a like, a post or a retweet, based on user feedback or community interaction – after which time, they might be deemed responsible for the content as if they were the original author (potentially a way to mitigate “pile-ons”)
  • signing up to a recognised industry code of ethics, including a victim’s formal right of reply, access to mediation, and enforcement procedures and penalties against perpetrators who continually cross the line into vilification, or engage in content that explicitly or implicitly advocates violence or harm
  • a more robust fact-checking process and a policy of “truth in advertising” when it comes to claims or accusations made by or on behalf of politicians, political parties, or those seeking elected office
  • clearer delineation between content which is mere opinion, content which is in the nature of a public service (e.g., emergencies and natural disasters), content which is deemed part of a company’s public disclosure obligations, content which is advertorial, content which is on behalf of a political party or candidate, and content which is purely for entertainment purposes only (removing the bots may not be enough)
  • consideration of establishing an independent editorial board that can also advocate on behalf of alleged victims of vilification, and act as the initial arbiter of “public interest” matters (such as privacy, data protection, whistle-blowers etc.)

Finally, if Twitter is going to remove/reduce advertising, what will the commercial model look like?

Next week: The Crypto Conversation

Who fact-checks the fact-checkers?

The recent stoush between POTUS and Twitter on fact-checking and his alleged use of violent invective has rekindled the debate on whether, and how, social media should be regulated. It’s a potential quagmire (especially the issue of free speech), but it also comes at a time when here in Australia, social media is fighting twin legal battles – on defamation and fees for news content.

First, the issue of fact-checking on social media. Public commentary was divided: some argued that fact-checking is a form of censorship, while others posed the question “Quis custodiet ipsos custodes?” (who fact-checks the fact-checkers?). Still others suggested that fact-checking in this context was a form of public service to ensure that political debate is well-informed, obvious errors are corrected, and that blatant lies (untruths, falsehoods, fibs, deceptions, mis-statements, alternative facts….) are called out for what they are. Notably, in this case, the “fact” was not edited, but flagged as a warning to the audience. (In case anyone hadn’t noticed (or remembered), earlier this year Facebook announced that it would engage Reuters to provide certain fact-check services.) Given the current level of discourse in the political arena, traditional and social media, and the court of public opinion, I’m often reminded of an article I read many years ago in the China Daily, which said something to the effect that “it is important to separate the truth from the facts”.

Second, the NSW Court of Appeal recently ruled that media companies can be held responsible for defamatory comments posted under stories they publish on social media. While this specific ruling did not render Facebook liable for the defamatory posts (although like other content platforms, social media is subject to general defamation laws), it was clear that the media organisations are deemed to be “publishing” content on their social media pages. And even though they have no way of controlling or moderating the Facebook comments before they are made public, for these purposes, their Facebook pages are no different to their own websites.

Third, the Australian Government is going to force companies like Facebook and Google to pay for news content via revenue share from ad sales. The Federal Treasurer was quoted as saying, “It is only fair that the search engines and social media giants pay for the original news content that they use to drive traffic to their sites.” If Australia succeeds, this may set an uncomfortable precedent in other jurisdictions.

For me, much of the above debate goes to the heart of how to treat social media platforms – are they like traditional newspapers and broadcast media? are they like non-fiction publishers? are they communications services (like telcos)? are they documents of record? The topic is not new – remember when Mark Zuckerberg declared that he wanted Facebook to be the “world’s newspaper”? Be careful what you wish for…

Next week: Fact v Fiction in Public Discourse