An AI Origin Story

Nowadays, no TV or movie franchise worth its salt is deemed complete unless it has some sort of origin story – from “Buzz Lightyear” to “Alien”, from “Mystery Road” to “Inspector Morse”. And as for “Star Wars”, I’ve lost count of which prequel/sequel/chapter/postscript/spin-off we are up to. Origin stories can be helpful in explaining “what came before”, providing background and context, and describing how we got to where we are in a particular narrative. Reading Jeanette Winterson’s recent collection of essays, “12 Bytes”, it soon becomes apparent that what she has achieved is a tangible origin story for Artificial Intelligence.

Still from “Frankenstein” (1931) – Image sourced from IMDb

By Winterson’s own admission, this is not a science textbook, nor a reference work on AI. It’s a lot more human than that, and all the more readable and enjoyable as a result. In any case, technology is moving so quickly these days that some of her references (even those from barely a year ago) are either out of date, or have been superseded by subsequent events. For example, she makes a contemporaneous reference to a Financial Times article from May 2021, on Decentralized Finance (DeFi) and Non-Fungible Tokens (NFTs). She mentions a digital racehorse that sold for $125,000. Fast-forward 12 months, and we have seen parts of the nascent DeFi industry blow up, and an NFT of Jack Dorsey’s first Tweet (Twitter’s own origin story?) failing to achieve even $290 when it went up for auction, having initially been sold for $2.9m. Then there is the Google engineer who claimed that the LaMDA AI program is sentient, and the chess robot which broke its opponent’s finger.

Across these stand-alone but interlinked essays, Winterson builds a consistent narrative arc across the historical development, current status and future implications of AI. In particular, she looks ahead to a time when we achieve Artificial General Intelligence, the Singularity, and the complete embodiment of AI – not necessarily in a biological form that we would recognise today. Despite the dystopian tones, the author appears generally positive and optimistic about these developments, and welcomes the prospect of transhumanism: in large part because it is inevitable and we should embrace it, and ultimately because it might be the only way to save our planet and civilisation – just not in the form we expect.

The book’s themes range from: the first human origin stories (sky-gods and sacred texts) to ancient philosophy; from the Industrial Revolution to Frankenstein’s monster; from Lovelace and Babbage to Dracula; from Turing and transistors to the tech giants of today. There are sections on quantum physics, the nature of “binary” (in computing and in transgenderism), biases in algorithms and search engines, the erosion of privacy via data mining, the emergence of surveillance capitalism, and the pros and cons of cryogenics and sexbots.

We can observe that traditional attempts to imagine or create human-made intelligence were based on biology, religion, spirituality and the supernatural – and many of these concepts were designed to explain our own origins, to enforce societal norms, to exert control, and to sustain existing and inequitable power structures. Some of these efforts might have been designed to explain our purpose as humans, but in reality they simply raised more questions than they resolved. Why are we here? Why this planet? What is our destiny? Are death and extinction (the final “End-Time”) the only outcome for the human race? Winterson rigorously rejects this finality as either desirable or inevitable.

Her conclusion is that the human race is worth saving (from itself?), but we have to face up to the need to adapt and continue evolving (homo sapiens was never the end game). Consequently, embracing AI/AGI is going to be key to our survival. Of course, like any (flawed) technology, AI is just another tool, and it is what we do with it that matters. Winterson is rightly suspicious of the male-dominated tech industry, some of whose leaders see themselves as guardians of civil liberties and the saviours of humankind, yet fail to acknowledge that “hate speech is not free speech”. She acknowledges the benefits of an interconnected world, advanced prosthetics, open access to information, medical breakthroughs, industrial automation, and knowledge that can help anticipate danger and avert disaster. But AI and transhumanism won’t solve all our existential problems, and if we don’t have the capacity for empathy, compassion, love, humour, self-reflection, art, satire, creativity, imagination, music or critical thinking, then we will definitely cease to be “human” at all.

The Bibliography to this book is an invaluable resource in itself – and provides for a wealth of additional reading. One book that is not listed, but which might be of interest to her readers, is “Chimera”, a novel by Stephen Gallagher, published in 1982 and subsequently adapted for radio and TV. Although this story is about genetic engineering (rather than AI), nevertheless it echoes some of Winterson’s themes and concerns around the morals and ethics of technology (e.g., eugenics, organ harvesting, private investment vs public control, playing god, and the over-emphasis on the preservation and prolongation of human lifeforms as they are currently constituted). Happy reading!

Next week: Digital Perfectionism?


Monash University Virtual Demo Day

Last week I was invited to participate in a Virtual Demo Day for students enrolled in the Monash University Boot Camp, for the FinTech, Coding and UX/UI streams. The Demo Day was an opportunity for the students to present the results of their project course work and to get feedback from industry experts.

While not exactly the same as a start-up pitch night, each project presented a defined problem scenario, as well as the proposed technical and design solution – and in some cases, a possible commercial model, but this was not the primary focus. Although the format of the Demo Day did not enable external observers to see all of the dozen-plus projects, overall it was very encouraging to see a university offer this type of practical learning experience.

Skills-based and aimed at providing a pathway to a career in ICT, the Boot Camp programme results in a Certificate of Completion – but I hope that undergraduates have similar opportunities as part of their bachelor degree courses. The emphasis on ICT (Cybersecurity and Data Analytics form other streams) is partly in response to government support for relevant skills training, and partly to help meet industry requirements for qualified job candidates.

Industry demand for ICT roles is revealing a shortage of appropriate skills among job applicants, no doubt exacerbated by our closed international borders, and a downturn in overseas students and skilled migration. This shortage is having a direct impact on recruitment and hiring costs, as this recent Tweet by one of my friends starkly reveals: “As someone who is hiring about 130 people right now, I will say this: Salaries in tech in Australia are going up right now at a rate I’ve never seen.” So nice work if you can get it!

As for the Demo Day projects themselves, these embraced technology and topics across Blockchain, two-sided marketplaces, health, sustainability, music, facilities management, career development and social connectivity.

The Monash Boot Camp courses are presented in conjunction with Trilogy Education Services, a US-based training and education provider. From what I can see online, this provider divides opinion as to the quality and/or value for money that their programmes offer – there seems to be a fair number of advocates and detractors. I can’t comment on the course content or delivery, but in terms of engagement, my observation is that the students get good exposure to key tech stacks, learn some very practical skills, and they are encouraged to follow up with the industry participants. I hope all of the students manage to land the type of opportunities they are seeking as a result of completing their course.

Next week: Here We Go Again…

Blockchain and the Limits of Trust

Last week I was privileged to be a guest on This Is Imminent, a new form of Web TV hosted by Simon Waller. The given topic was Blockchain and the Limitations of Trust.

For a replay of the Web TV event go here

As regular readers will know, I have been immersed in the world of Blockchain, cryptocurrency and digital assets for over four years – and while I am not a technologist, I think I know enough to understand some of the potential impact and implications of Blockchain on distributed networks, decentralization, governance, disintermediation, digital disruption, programmable money, tokenization, and for the purposes of last week’s discussion, human trust.

The point of the discussion was to explore how Blockchain might provide a solution to the absence of trust we currently experience in many areas of our daily lives. Even better, how Blockchain could enhance or expand our existing trusted relationships, especially across remote networks. The complete event can be viewed here, but be warned that it’s not a technical discussion (and wasn’t intended to be), although Simon did find a very amusing video that tries to explain Blockchain with the aid of Spam (the luncheon meat, not the unwanted e-mail).

At a time when our trust in public institutions is being tested all the time, it’s more important than ever to understand the nature of trust (especially trust placed in any new technology), and to navigate how we establish, build and maintain trust in increasingly peer-to-peer, fractured, fragmented, open and remote networks.

To frame the conversation, I think it’s important to lay down a few guiding principles.

First, a network is only as strong as its weakest point of connection.

Second, there are three main components to maintaining the integrity of a “trusted” network:

  • how are network participants verified?
  • how secure is the network against malicious actors?
  • what are the penalties or sanctions for breaking that trust?

Third, “trust” in the context of networks is a proxy for “risk” – how much or how far are we willing to trust a network, and everyone connected to it?

For example, if you and I know each other personally and I trust you as a friend, colleague or acquaintance, does that mean I should automatically trust everyone else you know? (Probably not.) Equally, should I trust you just because you know all the same people as me? (Again, probably not.) Each relationship (or connection) in that type of network has to be evaluated on its own merits. Although we can do a certain amount of due diligence and triangulation, as each network becomes larger, it’s increasingly difficult for us to “know” each and every connection.

Let’s suppose that the verification process is set appropriately high, that the network is maintained securely, and that there are adequate sanctions for abusing the network’s trust – then it is possible for participants to “know” each other, because the network has created the minimum degree of trust needed for it to be viable. Consequently, we might conclude that only trustworthy people would want to join a network based on trust where each transaction is observable and traceable (albeit, in the case of Blockchain, pseudonymously).

When it comes to trust and risk assessment, it still amazes me the amount of personal (and private) information people are willing to share on social media platforms, just to get a “free” account. We seem to be very comfortable placing an inordinate amount of trust in these highly centralized services both to protect our data and to manage our relationships – which to me is something of an unfair bargain.

Statistically we know we are more likely to be killed in a car accident than in a plane crash – but we attach far more risk to flying than to driving. Whenever we take our vehicle out on to the road, we automatically assume that every other driver is licensed, insured, and competent to drive, and that their car is taxed and roadworthy. We cannot verify this information ourselves, so we have to trust in both the centralized systems (that regulate drivers, cars and roads), and in each and every individual driver – but we know there are so many weak points in that structure.

Blockchain has the ability to verify each and every participant and transaction on the network, enabling all users to trust in the security and reliability of network transactions. In addition, once verified, participants do not have to keep providing verification each time they want to access the network, because the network “knows” enough about each participant that it can create a mutual level of trust without everyone having to have direct knowledge of each other.
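As a toy illustration of why the network can be trusted in this way (a sketch only – not any particular blockchain’s actual protocol), the underlying idea of hash-chaining can be shown in a few lines of Python: each block commits to the hash of the block before it, so tampering with any historical record breaks the links that follow and is immediately detectable:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministic SHA-256 hash of a block's contents."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(data: str, prev_hash: str) -> dict:
    return {"data": data, "prev_hash": prev_hash}

def verify_chain(chain: list) -> bool:
    """Every block must reference the hash of the block before it."""
    for prev, curr in zip(chain, chain[1:]):
        if curr["prev_hash"] != block_hash(prev):
            return False
    return True

# Build a tiny three-block chain (hypothetical transactions).
genesis = make_block("genesis", prev_hash="0" * 64)
b1 = make_block("Alice pays Bob", block_hash(genesis))
b2 = make_block("Bob pays Carol", block_hash(b1))
chain = [genesis, b1, b2]

assert verify_chain(chain)          # the intact chain verifies
b1["data"] = "Alice pays Mallory"   # tamper with history...
assert not verify_chain(chain)      # ...and the next link breaks
```

Real networks add consensus, signatures and economic incentives on top of this, but the tamper-evidence that lets strangers trust the ledger starts with this simple chaining of hashes.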

In the asymmetric relationships we have created with centralized platforms such as social media, we find ourselves in a very binary situation – once we have provided our e-mail address, date of birth, gender and whatever else is required, we cannot be confident that the platform “forgets” that information when it no longer needs it. It’s a case of “all or nothing” as the price of network entry. Whereas, if we operated under a system of self-sovereign digital identity (which technology like Blockchain can facilitate), then I can be sure that such platforms only have access to the specific personal data points that I am willing to share with them, for the specific purpose I determine, and only for as long as I decide.
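The selective-disclosure idea behind self-sovereign identity can be sketched with simple hash commitments (all names here are hypothetical, and real systems use verifiable credentials and zero-knowledge proofs rather than bare hashes): the user publishes only commitments to their attributes, then later reveals a single attribute – and nothing else – for a verifier to check.

```python
import hashlib
import secrets

def commit(value: str, nonce: str) -> str:
    """Salted SHA-256 commitment to a single attribute."""
    return hashlib.sha256(f"{nonce}:{value}".encode()).hexdigest()

# The user commits to each attribute once; only the commitments
# (not the raw values) are ever shared or anchored on a ledger.
attributes = {"email": "me@example.com", "dob": "1980-01-01"}
nonces = {k: secrets.token_hex(16) for k in attributes}
commitments = {k: commit(v, nonces[k]) for k, v in attributes.items()}

# Later, the user chooses to disclose only their date of birth,
# together with the matching nonce as the opening of the commitment.
disclosed_key, disclosed_value = "dob", "1980-01-01"
proof_nonce = nonces[disclosed_key]

# The platform checks the disclosed value against the commitment,
# learning nothing about the undisclosed email attribute.
assert commit(disclosed_value, proof_nonce) == commitments[disclosed_key]
```

The point of the sketch is the asymmetry it removes: the verifier gets exactly the data point the user chose to share, for the purpose the user determined, and nothing more.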

Finally, taking control of, and being responsible for managing our own personal information (such as a private key for a digital wallet) is perhaps a step too far for some people. They might not feel they have enough confidence in their own ability to be trusted with this data, so they would rather delegate this responsibility to centralized systems.

Next week: Always Look On The Bright Side…


The Limits of Technology

As part of my home entertainment during lock-down, I have been enjoying a series of Web TV programmes called This Is Imminent, hosted by Simon Waller, whose broad theme asks “how are we learning to live with new technology?” – in short, the good, the bad and the ugly of AI, robotics, computers, productivity tools etc.

Niska robots are designed to serve ice cream… Image sourced from Weekend Notes

Despite the challenges of Zoom overload, choked internet capacity, and constant screen-time, the lock-down has shown how reliant we are upon tech for communications, e-commerce, streaming services and working from home. Without them, many of us would not have been able to cope with the restrictions imposed by the pandemic.

The value of Simon’s interactive webinars is two-fold – as the audience, we get to hear from experts in their respective fields, and gain exposure to new ideas; and we have the opportunity to explore ways in which technology impacts our own lives and experience – and in a totally non-judgmental way. What’s particularly interesting is the non-binary nature of the discussion. It’s not “this tech good, that tech bad”, nor is it about taking absolute positions – it thrives in the margins and in the grey areas, where we are uncertain, unsure, or just undecided.

In parallel with these programmes, I have been reading a number of novels that discuss different aspects of AI. These books seem to be both enamoured with, and in awe of, the potential of AI – William Gibson’s “Agency”, Ian McEwan’s “Machines Like Me”, and Jeanette Winterson’s “Frankissstein” – although they take quite different approaches to the pros and cons of the subject and the technology itself. (When added to my recent reading list of Jonathan Coe’s “Middle England” and John Lanchester’s “The Wall”, you can see what fun and games I’m having during lock-down….)

What this viewing and reading suggests to me is that we quickly run into the limitations of any new technology. Either it never delivers what it promises, or we become bored with it. We over-invest and place too much hope in it, then take it for granted (or worse, come to resent it). What the above novelists identify is our inability to trust ourselves when confronted with the opportunity for human advancement, largely because the same leaps in technology also induce existential angst or challenge our very existence – not least because they are highly disruptive as well as innovative.

On the other hand, despite a general shift towards open source protocols and platforms, we still see age-old format wars whenever any new tech comes along. For example, this means most apps lack interoperability, tying us into rigid and vertically integrated ecosystems. The plethora of apps launched for mobile devices can mean premature obsolescence (built-in or otherwise), as developers can’t be bothered to maintain and upgrade them (or the app stores focus on the more popular products, and gradually weed out anything that doesn’t fit their distribution model or operating system). Worse, newer apps are not retrofitted to run on older platforms, or older software programs and content suffer digital decay and degradation. (Developers will also tell you about tech debt – the eventual higher costs of upgrading products that were built using “quick and cheap” short-term solutions, rather than taking a longer-term perspective.)

Consequently, new technology tends to over-engineer a solution, or create niche, hard-coded products (robots serving ice cream?). In the former, it can make existing tasks even harder; in the latter, it can create tech dead ends and generate waste. Rather than aiming for giant leaps forward within narrow applications, perhaps we need more modular and accretive solutions that are adaptable, interchangeable, easier to maintain, and cheaper to upgrade.

Next week: Distractions during Lock-down