Whose side is AI on?

At the risk of coming across as some sort of Luddite, I think recent commentary on Artificial Intelligence shows it is only natural to have concerns and misgivings about its rapid development and widespread deployment. Of course, at its heart, it’s just another technology at our disposal – but by its very definition, generative AI is not passive, and is likely to impact all areas of our lives, whether we invite it in or not.

Over the next few weeks, I will be discussing some non-technical themes relating to AI – creativity and AI, legal implications of AI, and form over substance when it comes to AI itself.

To start with, these are a few of the questions that I have been mulling over:

– Is AI working for us, as a tool that we control and manage? Or is AI working with us, in a partnership of equals? Or, more likely, is AI working against us, in the sense that it is happening to us, whether we like it or not, let alone whether we are actually aware of it?

– Is AI being wielded by a bunch of tech bros, who feed it with all their own prejudices, unconscious bias and cognitive limitations?

– Who decides what the Large Language Models (LLMs) that power AI are trained on?

– How does AI get permission to create derived content from our own Intellectual Property? Even if our content is on the web, being “publicly available” is not the same as being “in the public domain”.

– Who is responsible for what AI publishes, and are AI agents accountable for their actions? In the event of false, incorrect, misleading or inappropriate content created by AI, how do we get to clarify the record, or seek a right of reply?

– Why are AI tools adding ever more caveats? (“This is not financial advice, this is not to be relied on in a court of law, this is only based on information available as at a certain point in time, this is not a recommendation, etc.”) And is this only going to increase, as in the recent example of changes to Google’s AI-generated search results? (But really, do we need to be told that eating rocks or adding glue to pizza are bad ideas?)

– From my own experience, tools like ChatGPT return “deliberate” factual errors. Why? Is it to keep us on our toes (“Gotcha!”)? Is it to use our responses (or lack thereof) to train the model to be more accurate? Is it to underline the caveat emptor principle (“What, you relied on Otter to write your college essay? What were you thinking?”)? Or is it to counter plagiarism (“You could only have got that false information from our AI engine”)? If you think the latter is far-fetched, I refer you to the notion of “trap streets” in maps and directories.

– Should AI tools contain better attribution (sources and acknowledgments) in their results? Should they disclose the list of “ingredients” used (like food labelling)? Should they provide verifiable citations for their references? (It’s an idea that is gaining some attention.)

– Finally, the increased use of cloud-based services and crowd-sourced content (not just in AI tools) means there is the potential for overreach in the end-user licensing agreements of services such as ChatGPT, Otter, Adobe Firefly, Gemini and Midjourney. Only recently, Adobe had to clarify the latest changes to its service agreement, in response to some social media criticism.

Next week: AI and the Human Factor

“The Digital Director”

Last year, the Australian Institute of Company Directors (AICD) ran a series of 10 webinars under the umbrella of “The Digital Director”. Despite the title, there was very little exploration of “digital” technology itself, but a great deal of discussion on how to manage IT within the traditional corporate structure – as between the board of directors, the management, and the workforce.

There was much debate on concepts like “digital mindset”, “digital adaptation and adoption”, and “digital innovation and evolution”. During one webinar, the audience were encouraged to avoid using the term “digital transformation” (instead, think “digital economy”) – yet two of the ten sessions had “digital transformation” in their titles.

Specific technical topics were mainly confined to AI, data privacy, data governance and cyber security. It was acknowledged that while corporate Australia has widely adopted SaaS solutions, it lacks depth in digital skills; and that the percentage of ASX market capitalisation attributable to IP assets shows we are “30 years behind the USA”. There was specific mention of blockchain technology, but the two examples given are already obsolete (the ASX’s abandoned project to replace the CHESS system, and CBA’s indefinitely deferred roll-out of crypto assets on its mobile banking app).

Often, the discussion was more about change management, and dealing with the demands of “modern work” from a workforce whose expectations have changed greatly in recent years, thanks to the pandemic, remote working, and access to new technology. Yet these are themes that have been with us ever since the first office productivity tools, the arrival of the internet, and the proliferation of mobile devices that blur the boundary between “work” and “personal”.

The series missed an opportunity to explore the impact of new technology on boards themselves, especially their decision-making processes. We have seen how the ICO (initial coin offering) phase of cryptocurrency markets in 2017-19 presented a wholly new dimension to the funding of start-up ventures; and how blockchain technology and smart contracts heralded a new form of corporate entity, the DAO (decentralised autonomous organisation).

Together, these innovations suggest that the formation and governance of companies will no longer need to rely on the traditional structure of shareholders, directors and executives – and as a consequence, board decision-making will also take a different form. Imagine using AI tools to support strategic planning, proof-of-stake tokens to vote on board resolutions, or consensus mechanisms to decide the outcomes of AGMs – a simple sketch of the voting idea follows below.
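To make the voting idea a little more concrete, here is a minimal, purely illustrative sketch (in Python) of how a proof-of-stake-style, token-weighted vote on a board resolution might work. The class names, stake figures and quorum threshold are all hypothetical assumptions invented for this example, not a reference to any real DAO framework or governance product.

# A hypothetical, simplified model of token-weighted board voting.
# All names, stakes and thresholds are invented for illustration only.

from dataclasses import dataclass, field

@dataclass
class Resolution:
    title: str
    votes_for: int = 0                        # total stake voted in favour
    votes_against: int = 0                    # total stake voted against
    voted: set = field(default_factory=set)   # members who have already voted

@dataclass
class Boardroom:
    stakes: dict          # member -> governance tokens staked (voting power)
    quorum: float = 0.5   # fraction of total stake that must vote

    def vote(self, member, resolution, in_favour):
        if member in resolution.voted:
            raise ValueError(f"{member} has already voted on this resolution")
        weight = self.stakes[member]   # proof-of-stake style: power = stake
        if in_favour:
            resolution.votes_for += weight
        else:
            resolution.votes_against += weight
        resolution.voted.add(member)

    def outcome(self, resolution):
        turnout = resolution.votes_for + resolution.votes_against
        if turnout < self.quorum * sum(self.stakes.values()):
            return "no quorum"
        return "carried" if resolution.votes_for > resolution.votes_against else "defeated"

# Three members with unequal stakes vote on a fictional resolution.
board = Boardroom(stakes={"alice": 60, "bob": 25, "carol": 15})
motion = Resolution("Adopt AI-assisted strategic planning")
board.vote("alice", motion, in_favour=True)
board.vote("bob", motion, in_favour=False)
print(board.outcome(motion))  # "carried": 60 for vs 25 against, quorum of 50 met

Of course, a real implementation would live on-chain as a smart contract, with identity, delegation and auditability concerns far beyond this toy example – but the basic mechanics of stake-weighted resolutions are no more exotic than this.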

As of now, “Digital Directors” need to understand how these emerging technologies will disrupt the boardroom itself, as well as the very corporate structures and governance frameworks that have been in place for over 400 years.

Next week: Back in the USA


AI vs IP

Can Artificial Intelligence software claim copyright in any work that was created using its algorithms?

The short answer is “no”, since only humans can establish copyright in original creative works. Copyright can be assigned to a company or trust, or the work can be licensed under various forms of Creative Commons, but there still needs to be a human author behind the copyright material. And when copyright eventually lapses, the work becomes part of the public domain.

However, the extent to which a human author can claim copyright in a work that has been created with the help of AI is now being challenged. A recent case in the USA determined that the author of a graphic novel, which included images created using Midjourney, cannot claim copyright in those images. While it was accepted that the author devised the text and other prompts that the software used as the generative inputs, the output images themselves could not be the subject of copyright protection – meaning they are presumably either in the public domain, or they fall under some category of Creative Commons. This case also indicates that, in the USA at least, failing to declare the use of AI tools in a work when applying for copyright registration may result in a rejected application.

Does this decision mean that the people who write AI programs could claim copyright in works created using their software? Probably not – as this would imply that Microsoft could establish copyright in every novel written using Word, simply because the author relied on its grammar and spelling tools.

On the other hand, programmers and software developers who use copyright material to train their models may need to obtain relevant permission from the copyright holders (as would anyone who feeds copyright content into AI tools as prompts), unless they can claim exemptions under “fair dealing” or “fair use” provisions.

We’re still early in the lengthy process whereby copyright and other intellectual property laws are tested and recalibrated in the wake of AI. Maybe the outcomes of future copyright cases will depend on whether you are Ed Sheeran or Robin Thicke…

Next week: Customer Experience vs Process Design


An AI Origin Story

Nowadays, no TV or movie franchise worth its salt is deemed complete unless it has some sort of origin story – from “Buzz Lightyear” to “Alien”, from “Mystery Road” to “Inspector Morse”. And as for “Star Wars”, I’ve lost count of which prequel/sequel/chapter/postscript/spin-off we are up to. Origin stories can be helpful in explaining “what came before”, providing background and context, and describing how we got to where we are in a particular narrative. Reading Jeanette Winterson’s recent collection of essays, “12 Bytes”, I soon realised that what she has achieved is a tangible origin story for Artificial Intelligence.

Still from “Frankenstein” (1931) – Image sourced from IMDb

By Winterson’s own admission, this is not a science textbook, nor a reference work on AI. It’s a lot more human than that, and all the more readable and enjoyable as a result. In any case, technology is moving so quickly these days that some of her references (even those from barely a year ago) are either out of date, or have been superseded by subsequent events. For example, she makes a contemporaneous reference to a Financial Times article from May 2021, on Decentralized Finance (DeFi) and Non-Fungible Tokens (NFTs). She mentions a digital racehorse that sold for $125,000. Fast-forward 12 months, and we have seen parts of the nascent DeFi industry blow up, and an NFT of Jack Dorsey’s first tweet (Twitter’s own origin story?) fail to achieve even $290 when it went up for auction, having initially been sold for $2.9m. Then there is the Google engineer who claimed that the LaMDA AI program is sentient, and the chess robot which broke its opponent’s finger.

Across these stand-alone but interlinked essays, Winterson builds a consistent narrative arc through the historical development, current status and future implications of AI. In particular, she looks ahead to a time when we achieve Artificial General Intelligence, the Singularity, and the complete embodiment of AI – not necessarily in a biological form that we would recognise today. Despite the dystopian tones, the author appears to be generally positive and optimistic about these developments, and welcomes the prospect of transhumanism: in large part because she sees it as inevitable and something we should embrace, and ultimately because it might be the only way to save our planet and civilisation, just not in the form we expect.

The book’s themes range widely: from the first human origin stories (sky-gods and sacred texts) to ancient philosophy; from the Industrial Revolution to Frankenstein’s monster; from Lovelace and Babbage to Dracula; from Turing and transistors to the tech giants of today. There are sections on quantum physics, the nature of “binary” (in computing and in transgenderism), biases in algorithms and search engines, the erosion of privacy via data mining, the emergence of surveillance capitalism, and the pros and cons of cryogenics and sexbots.

We can observe that traditional attempts to imagine or create human-made intelligence were based on biology, religion, spirituality and the supernatural – and many of these concepts were designed to explain our own origins, to enforce societal norms, to exert control, and to sustain existing and inequitable power structures. Some of these efforts might have been designed to explain our purpose as humans, but in reality they simply raised more questions than they resolved. Why are we here? Why this planet? What is our destiny? Are death and extinction (the final “End-Time”) the only outcome for the human race? Winterson rigorously rejects this finality as either desirable or inevitable.

Her conclusion is that the human race is worth saving (from itself?), but we have to face up to the need to adapt and continue evolving (Homo sapiens was never the end game). Consequently, embracing AI/AGI is going to be key to our survival. Of course, like any (flawed) technology, AI is just another tool, and it is what we do with it that matters. Winterson is rightly suspicious of the male-dominated tech industry, some of whose leaders see themselves as guardians of civil liberties and the saviours of humankind, yet fail to acknowledge that “hate speech is not free speech”. She acknowledges the benefits of an interconnected world, advanced prosthetics, open access to information, medical breakthroughs, industrial automation, and knowledge that can help anticipate danger and avert disaster. But AI and transhumanism won’t solve all our existential problems, and if we lose the capacity for empathy, compassion, love, humour, self-reflection, art, satire, creativity, imagination, music or critical thinking, then we will definitely cease to be “human” at all.

The bibliography to this book is an invaluable resource in itself – and provides a wealth of additional reading. One book that is not listed, but which might be of interest to her readers, is “Chimera”, a novel by Stephen Gallagher, published in 1981 and subsequently adapted for radio and TV. Although this story is about genetic engineering (rather than AI), it nevertheless echoes some of Winterson’s themes and concerns around the morals and ethics of technology (e.g., eugenics, organ harvesting, private investment vs public control, playing god, and the over-emphasis on the preservation and prolongation of human lifeforms as they are currently constituted). Happy reading!

Next week: Digital Perfectionism?