Whose side is AI on?

At the risk of coming across as some sort of Luddite, I find that recent commentary on Artificial Intelligence suggests it is only natural to have concerns and misgivings about its rapid development and widespread deployment. Of course, at its heart, it’s just another technology at our disposal – but by its very nature, generative AI is not passive, and is likely to impact all areas of our lives, whether we invite it in or not.

Over the next few weeks, I will be discussing some non-technical themes relating to AI – creativity and AI, legal implications of AI, and form over substance when it comes to AI itself.

To start with, these are a few of the questions that I have been mulling over:

– Is AI working for us, as a tool that we control and manage?  Or is AI working with us, in a partnership of equals? Or, more likely, is AI working against us, in the sense that it is happening to us, whether we like it or not, let alone whether we are actually aware of it?

– Is AI being wielded by a bunch of tech bros, who feed it with all their own prejudices, unconscious bias and cognitive limitations?

– Who decides what the Large Language Models (LLMs) that power AI are trained on?

– How does AI get permission to create derived content from our own Intellectual Property? Even if our content is on the web, being “publicly available” is not the same as being “in the public domain”.

– Who is responsible for what AI publishes, and are AI agents accountable for their actions? In the event of false, incorrect, misleading or inappropriate content created by AI, how do we get to clarify the record, or seek a right of reply?

– Why are AI tools adding increased caveats? (“This is not financial advice, this is not to be relied on in a court of law, this is only based on information available as at a certain point in time, this is not a recommendation, etc.”) And is this only going to increase, as in the recent example of changes to Google’s AI-generated search results? (But really, do we need to be told that eating rocks or adding glue to pizza are bad ideas?)

– From my own experience, tools like ChatGPT return “deliberate” factual errors. Why? Is it to keep us on our toes (“Gotcha!”)? Is it to use our responses (or lack thereof) to train the model to be more accurate? Is it to underline the caveat emptor principle (“What, you relied on Otter to write your college essay? What were you thinking?”)? Or is it to counter plagiarism (“You could only have got that false information from our AI engine”)? If you think the latter is far-fetched, I refer you to the notion of “trap streets” in maps and directories.

– Should AI tools contain better attribution (sources and acknowledgments) in their results? Should they disclose the list of “ingredients” used (like food labelling)? Should they provide verifiable citations for their references? (It’s an idea that is gaining some attention.)

– Finally, the increased use of cloud-based services and crowd-sourced content (not just in AI tools) means that there is the potential for overreach in the end-user licence agreements of tools such as ChatGPT, Otter, Adobe Firefly, Gemini, Midjourney, etc. Only recently, Adobe had to clarify the latest changes to its service agreement, in response to some social media criticism.

Next week: AI and the Human Factor

An AI Origin Story

Nowadays, no TV or movie franchise worth its salt is deemed complete unless it has some sort of origin story – from “Buzz Lightyear” to “Alien”, from “Mystery Road” to “Inspector Morse”. And as for “Star Wars”, I’ve lost count as to which prequel/sequel/chapter/postscript/spin-off we are up to. Origin stories can be helpful in explaining “what came before”, providing background and context, and describing how we got to where we are in a particular narrative. Reading Jeanette Winterson’s recent collection of essays, “12 Bytes”, it soon becomes apparent that what she has achieved is a tangible origin story for Artificial Intelligence.

Still from “Frankenstein” (1931) – Image sourced from IMDb

By Winterson’s own admission, this is not a science textbook, nor a reference work on AI. It’s a lot more human than that, and all the more readable and enjoyable as a result. In any case, technology is moving so quickly these days that some of her references (even those from barely a year ago) are either out of date, or have been superseded by subsequent events. For example, she makes a contemporaneous reference to a Financial Times article from May 2021, on Decentralized Finance (DeFi) and Non-Fungible Tokens (NFTs). She mentions a digital race horse that sold for $125,000. Fast-forward 12 months, and we have seen parts of the nascent DeFi industry blow up, and an NFT of Jack Dorsey’s first Tweet (Twitter’s own origin story?) failing to achieve even $290 when it went up for auction, having initially been sold for $2.9m. Then there is the Google engineer who claimed that the LaMDA AI program is sentient, and the chess robot which broke its opponent’s finger.

Across these stand-alone but interlinked essays, Winterson builds a consistent narrative arc across the historical development, current status and future implications of AI. In particular, she looks ahead to a time when we achieve Artificial General Intelligence, the Singularity, and the complete embodiment of AI, and not necessarily in a biological form that we would recognise today. Despite the dystopian tones, the author appears to be generally positive and optimistic about these developments, and welcomes the prospect of transhumanism, in large part because it is inevitable, and we should embrace it, and ultimately because it might be the only way to save our planet and civilisation, just not in the form we expect.

The book’s themes range widely: from the first human origin stories (sky-gods and sacred texts) to ancient philosophy; from the Industrial Revolution to Frankenstein’s monster; from Lovelace and Babbage to Dracula; from Turing and transistors to the tech giants of today. There are sections on quantum physics, the nature of “binary” (in computing and in transgenderism), biases in algorithms and search engines, the erosion of privacy via data mining, the emergence of surveillance capitalism, and the pros and cons of cryogenics and sexbots.

We can observe that traditional attempts to imagine or create human-made intelligence were based on biology, religion, spirituality and the supernatural – and many of these concepts were designed to explain our own origins, to enforce societal norms, to exert control, and to sustain existing and inequitable power structures. Some of these efforts might have been designed to explain our purpose as humans, but in reality they simply raised more questions than they resolved. Why are we here? Why this planet? What is our destiny? Are death and extinction (the final “End-Time”) the only outcomes for the human race? Winterson rigorously rejects this finality as either desirable or inevitable.

Her conclusion is that the human race is worth saving (from itself?), but we have to face up to the need to adapt and continue evolving (homo sapiens was never the end game). Consequently, embracing AI/AGI is going to be key to our survival. Of course, like any (flawed) technology, AI is just another tool, and it is what we do with it that matters. Winterson is rightly suspicious of the male-dominated tech industry, some of whose leaders see themselves as guardians of civil liberties and the saviours of humankind, yet fail to acknowledge that “hate speech is not free speech”. She acknowledges the benefits of an interconnected world, advanced prosthetics, open access to information, medical breakthroughs, industrial automation, and knowledge that can help anticipate danger and avert disaster. But AI and transhumanism won’t solve all our existential problems, and if we don’t have the capacity for empathy, compassion, love, humour, self-reflection, art, satire, creativity, imagination, music or critical thinking, then we will definitely cease to be “human” at all.

The Bibliography to this book is an invaluable resource in itself – and provides a wealth of additional reading. One book that is not listed, but which might be of interest to her readers, is “Chimera”, a novel by Stephen Gallagher, published in 1981 and subsequently adapted for radio and TV. Although this story is about genetic engineering (rather than AI), it nevertheless echoes some of Winterson’s themes and concerns around the morals and ethics of technology (e.g., eugenics, organ harvesting, private investment vs public control, playing god, and the over-emphasis on the preservation and prolongation of human lifeforms as they are currently constituted). Happy reading!

Next week: Digital Perfectionism?


An open letter to American Express

Dear American Express,

I have been a loyal customer of yours for around 20 years. (Likewise my significant other.)

I typically pay my monthly statements on time and in full.

I’ve opted for paperless statements.

I pay my annual membership fee.

I even accept the fact that 7-8 times out of 10, I get charged merchant fees for paying by Amex – and in most cases I incur much higher fees than other credit or debit cards.

So, I am very surprised I have not been invited to attend your pop-up Open Air Cinema in Melbourne’s Yarra Park – especially as I live within walking distance.

It’s not like you don’t try to market other offers to me – mostly invitations to increase my credit limit, transfer outstanding balances from other credit cards, or “enjoy” lower interest rates on one-off purchases.

The lack of any offer in relation to the Open Air Cinema just confirms my suspicions that like most financial institutions, you do not really know your customers.

My point is that you must have so much data on my spending patterns and preferences that you should be able to glean my interests, such as film, the arts, and entertainment.

A perfect candidate for a pop-up cinema!

Next week: Life After the Royal Commission – Be Careful What You Wish For….


Big Data – Panacea or Pandemic?

You’ve probably heard that “data is the new oil” (but you just need to know where to drill?). Or alternatively, that the growing lakes of “Big Data” hold all the answers, but they don’t necessarily tell us which questions to ask. It feels like Big Data is the cure for everything, yet far from solving our problems, it is simply adding to our confusion.

Cartoon by Thierry Gregorious (Sourced from Flickr under Creative Commons – Some Rights Reserved)

There’s no doubt that customer, transaction, behavioural, geographic and demographic data points can be valuable for analysis and forecasting. When used appropriately, and in conjunction with relevant tools, this data can even throw up new insights. And when combined with contextual and psychometric analysis, it can give rise to whole new data-driven businesses.

Of course, we often use simple trend analysis to reveal underlying patterns and changes in behaviour. (“If you can’t measure it, you can’t manage it.”) But the core issue is, what is this data actually telling us? For example, if the busiest time for online banking is during commuting hours, what opportunities does this present? (Rather than asking, “how much more data can we generate from even more frequent data capture….”)
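As a minimal sketch of the kind of simple trend analysis described above (all of the timestamps and the “peak hour” framing here are hypothetical, invented purely for illustration), grouping login times by hour of day is often enough to surface a commuting-hour peak:

```python
from collections import Counter
from datetime import datetime

# Hypothetical sample of online-banking login timestamps
timestamps = [
    "2019-03-04 08:15", "2019-03-04 08:40", "2019-03-04 08:55",
    "2019-03-04 12:10", "2019-03-04 17:30", "2019-03-04 17:45",
    "2019-03-04 08:05", "2019-03-04 17:50",
]

# Count logins per hour of day to reveal peak usage periods
hour_counts = Counter(
    datetime.strptime(ts, "%Y-%m-%d %H:%M").hour for ts in timestamps
)

# The busiest hour and its login count
peak_hour, peak_count = hour_counts.most_common(1)[0]
print(f"Peak hour: {peak_hour}:00 with {peak_count} logins")
```

The interesting question, as argued above, is not the counting itself but what a business chooses to do once the morning peak is confirmed.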

I get that companies want to know more about their customers so they can “understand” them, and anticipate their needs. Companies are putting more and more effort into analysing the data they already have, as well as tapping into even more sources of data, to create even more granular data models, all with the goal of improving customer experience. It’s just a shame that few companies have a really good single view of their customers, because often, data still sits in siloed operations and legacy business information systems.

There is also a risk that, by trying to enhance and further personalise the user experience, companies are raising their customers’ expectations to a level that cannot be fulfilled. Full customisation would ultimately mean creating products with a customer base of one. Plus, customers will expect companies to really “know” them, to treat them as unique individuals with their own specific needs and preferences. Totally unrealistic, of course, because such solutions are mostly impossible to scale, and are largely unsustainable.

Next week: Startup Governance