An AI Origin Story

Nowadays, no TV or movie franchise worth its salt is deemed complete unless it has some sort of origin story – from “Buzz Lightyear” to “Alien”, from “Mystery Road” to “Inspector Morse”. As for “Star Wars”, I’ve lost count of which prequel/sequel/chapter/postscript/spin-off we are up to. Origin stories can be helpful in explaining “what came before” – providing background and context, and describing how we got to where we are in a particular narrative. Reading Jeanette Winterson’s recent collection of essays, “12 Bytes”, it soon becomes apparent that what she has achieved is a tangible origin story for Artificial Intelligence.

Still from “Frankenstein” (1931) – Image sourced from IMDb

By Winterson’s own admission, this is not a science textbook, nor a reference work on AI. It’s a lot more human than that, and all the more readable and enjoyable as a result. In any case, technology is moving so quickly these days that some of her references (even those from barely a year ago) are either out of date or have been superseded by subsequent events. For example, she makes a contemporaneous reference to a Financial Times article from May 2021 on Decentralized Finance (DeFi) and Non-Fungible Tokens (NFTs), and mentions a digital racehorse that sold for $125,000. Fast-forward 12 months, and we have seen parts of the nascent DeFi industry blow up, and an NFT of Jack Dorsey’s first tweet (Twitter’s own origin story?) fail to achieve even $290 when it went up for auction, having initially been sold for $2.9m. Then there is the Google engineer who claimed that the LaMDA AI program is sentient, and the chess robot that broke its opponent’s finger.

Across these stand-alone but interlinked essays, Winterson builds a consistent narrative arc across the historical development, current status and future implications of AI. In particular, she looks ahead to a time when we achieve Artificial General Intelligence, the Singularity, and the complete embodiment of AI – and not necessarily in a biological form that we would recognise today. Despite the dystopian tones, the author appears to be generally positive and optimistic about these developments, and welcomes the prospect of transhumanism: in large part because it is inevitable and we should embrace it, and ultimately because it might be the only way to save our planet and civilisation – just not in the form we expect.

The book’s themes range from the first human origin stories (sky-gods and sacred texts) to ancient philosophy; from the Industrial Revolution to Frankenstein’s monster; from Lovelace and Babbage to Dracula; from Turing and transistors to the tech giants of today. There are sections on quantum physics, the nature of “binary” (in computing and in transgenderism), biases in algorithms and search engines, the erosion of privacy via data mining, the emergence of surveillance capitalism, and the pros and cons of cryogenics and sexbots.

We can observe that traditional attempts to imagine or create human-made intelligence were based on biology, religion, spirituality and the supernatural – and many of these concepts were designed to explain our own origins, to enforce societal norms, to exert control, and to sustain existing and inequitable power structures. Some of these efforts might have been designed to explain our purpose as humans, but in reality they simply raised more questions than they resolved. Why are we here? Why this planet? What is our destiny? Are death and extinction (the final “End-Time”) the only outcome for the human race? Winterson rigorously rejects this finality as either desirable or inevitable.

Her conclusion is that the human race is worth saving (from itself?), but we have to face up to the need to adapt and continue evolving (Homo sapiens was never the end game). Consequently, embracing AI/AGI is going to be key to our survival. Of course, like any (flawed) technology, AI is just another tool, and it is what we do with it that matters. Winterson is rightly suspicious of the male-dominated tech industry, some of whose leaders see themselves as guardians of civil liberties and the saviours of humankind, yet fail to acknowledge that “hate speech is not free speech”. She acknowledges the benefits of an interconnected world, advanced prosthetics, open access to information, medical breakthroughs, industrial automation, and knowledge that can help anticipate danger and avert disaster. But AI and transhumanism won’t solve all our existential problems, and if we lose the capacity for empathy, compassion, love, humour, self-reflection, art, satire, creativity, imagination, music or critical thinking, then we will definitely cease to be “human” at all.

The Bibliography to this book is an invaluable resource in itself, and provides a wealth of additional reading. One book that is not listed, but which might be of interest to her readers, is “Chimera”, a novel by Stephen Gallagher, published in 1982 and subsequently adapted for radio and TV. Although this story is about genetic engineering (rather than AI), it echoes some of Winterson’s themes and concerns around the morals and ethics of technology (e.g., eugenics, organ harvesting, private investment vs public control, playing god, and the over-emphasis on the preservation and prolongation of human lifeforms as they are currently constituted). Happy reading!

Next week: Digital Perfectionism?


The Limits of Technology

As part of my home entertainment during lock-down, I have been enjoying a series of Web TV programmes called “This Is Imminent”, hosted by Simon Waller, whose broad theme asks “how are we learning to live with new technology?” – in short, the good, the bad and the ugly of AI, robotics, computers, productivity tools etc.

Niska robots are designed to serve ice cream – Image sourced from Weekend Notes

Despite the challenges of Zoom overload, choked internet capacity, and constant screen-time, the lock-down has shown how reliant we are upon tech for communications, e-commerce, streaming services and working from home. Without these tools, many of us would not have been able to cope with the restrictions imposed by the pandemic.

The value of Simon’s interactive webinars is two-fold – as the audience, we get to hear from experts in their respective fields, and gain exposure to new ideas; and we have the opportunity to explore ways in which technology impacts our own lives and experience – and in a totally non-judgmental way. What’s particularly interesting is the non-binary nature of the discussion. It’s not “this tech good, that tech bad”, nor is it about taking absolute positions – it thrives in the margins and in the grey areas, where we are uncertain, unsure, or just undecided.

In parallel with these programmes, I have been reading a number of novels that discuss different aspects of AI. These books seem to be both enamoured with, and in awe of, the potential of AI – William Gibson’s “Agency”, Ian McEwan’s “Machines Like Me”, and Jeanette Winterson’s “Frankissstein” – although they take quite different approaches to the pros and cons of the subject and the technology itself. (When added to my recent reading list of Jonathan Coe’s “Middle England” and John Lanchester’s “The Wall”, you can see what fun and games I’m having during lock-down….)

What this viewing and reading suggests to me is that we quickly run into the limitations of any new technology. Either it never delivers what it promises, or we become bored with it. We over-invest and place too much hope in it, then take it for granted (or worse, come to resent it). What the above novelists identify is our inability to trust ourselves when confronted with the opportunity for human advancement, largely because the same leaps in technology also induce existential angst or challenge our very existence – not least because they are highly disruptive as well as innovative.

On the other hand, despite a general shift towards open source protocols and platforms, we still see age-old format wars whenever new tech comes along. As a result, most apps lack interoperability, tying us into rigid and vertically integrated ecosystems. The plethora of apps launched for mobile devices can mean premature obsolescence (built-in or otherwise), as developers can’t be bothered to maintain and upgrade them (or the app stores focus on the more popular products, and gradually weed out anything that doesn’t fit their distribution model or operating system). Worse, newer apps are not retrofitted to run on older platforms, and older software programs and content suffer digital decay and degradation. (Developers will also tell you about tech debt – the eventual higher cost of upgrading products that were built using “quick and cheap” short-term solutions rather than taking a longer-term perspective – as sketched below.)
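By way of illustration only – the scenario and function names below are invented, not drawn from any particular product – here is a minimal sketch of how a “quick and cheap” shortcut accrues debt:

```python
# A deliberately simplified illustration of tech debt.
# The "quick" version hard-codes its assumptions; every future change
# (new carrier, new currency, a surcharge) means editing the source.

def shipping_cost_quick(weight_kg: float) -> float:
    return weight_kg * 4.99  # rate and currency baked in: cheap now, costly later

# The longer-term version makes the same assumptions explicit and
# configurable, so later requirements don't force a rewrite of the logic.
def shipping_cost(weight_kg: float, rate_per_kg: float, surcharge: float = 0.0) -> float:
    return weight_kg * rate_per_kg + surcharge

print(shipping_cost_quick(2.0))                              # locked to one rate
print(shipping_cost(2.0, rate_per_kg=4.99))                  # same result, no lock-in
print(shipping_cost(2.0, rate_per_kg=7.50, surcharge=3.0))   # new carrier, no rewrite
```

The point is not that the first version is wrong – it ships sooner – but that the cost of change compounds with every shortcut left in place.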

Consequently, new technology tends to over-engineer a solution, or create niche, hard-coded products (robots serving ice cream?). In the former, it can make existing tasks even harder; in the latter, it can create tech dead ends and generate waste. Rather than aiming for giant leaps forward within narrow applications, perhaps we need more modular and accretive solutions that are adaptable, interchangeable, easier to maintain, and cheaper to upgrade.

Next week: Distractions during Lock-down


The Ongoing Productivity Debate

In my previous blog, I mentioned that productivity in Australia remains sluggish. There are various ideas as to why, and what we could do to improve performance. Some suggest that traditional productivity analysis tracks the wrong thing(s) – for example, output should not simply be measured against input hours, especially in light of technology advances such as cloud computing, AI, machine learning and AR/VR. Others suggest that rather than working a 5-day week (or longer), a four-day working week may actually result in better productivity outcomes – a situation we may be forced to embrace with increased automation.
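To see why the conventional measure can mislead, here is a minimal sketch of the arithmetic – the output figures are invented purely for illustration:

```python
# Labour productivity is conventionally measured as output per hour worked.
# The figures below are made up, purely to illustrate the arithmetic.

def labour_productivity(output_units: float, hours_worked: float) -> float:
    """Output per hour worked - the conventional measure."""
    return output_units / hours_worked

# A five-day week: 40 hours producing 400 units of output.
five_day = labour_productivity(output_units=400, hours_worked=40)   # 10.0

# A four-day week: 32 hours, with better-rested staff producing 340 units.
four_day = labour_productivity(output_units=340, hours_worked=32)   # 10.625

print(f"5-day week: {five_day:.3f} units/hour")
print(f"4-day week: {four_day:.3f} units/hour")
```

On these (made-up) numbers the four-day week is the more “productive” one, even though total output falls – which is precisely why what we choose to measure matters.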

Image Source: Wikimedia Commons

It’s been a number of years since I worked for a large organisation, but I get the sense that employees are still largely monitored by the number of hours they are “present” – i.e., on site, in the office, or logged in to the network. But I think we worked out some time ago that merely “turning up” is not a reliable measure of individual contribution, output or efficiency.

No doubt, the rhythm of the working day has changed – the “clock on/clock off” pattern is not what it was even when I first joined the workforce, when we still had strict core minimum hours (albeit with flexi-time and overtime). So although many employees may feel like they are working longer hours (especially in the “always on” environment of e-mail, smart phones and remote working), I’m not sure how many of them would say they are working at optimum capacity or maximum efficiency.

For example, the amount of time employees spend on social media (the new smoko?) should not be ignored as a contributory factor in the lack of productivity gains. Yes, I know there are arguments for saying that giving employees access to Facebook et al can be beneficial in terms of research, training and development, networking, connecting with prospective customers and suppliers, and informally advocating for the companies they work for; plus, personal time spent on social media and the internet (e.g., booking a holiday) while at work may mean taking less actual time out of the office.

But let’s try to put this into perspective. With the amount of workplace technology employees have access to (plus the lowering costs of that technology), why are we still not experiencing corresponding productivity gains?

The first problem is poor deployment of that technology. How many times have you spoken to a call centre, only to be told “the system is slow today”, or worse, “the system won’t let me do that”? The second problem is poor training on the technology – if employees don’t have a core understanding of the software and applications they are expected to use (I don’t mean we all need to be coders or programmers, although those will be core skills everyone needs in future), how will they be able to make best use of that technology? The third problem is poor alignment of technology – whether caused by legacy systems, so-called tech debt, or simply systems that do not talk to one another. I recently spent over two hours at my local bank trying to open a new term deposit. Even though I have been a customer of the bank for more than 15 years, and have multiple products and accounts with it, I was told this particular product still runs on a standalone DOS platform, whose back-end is not integrated into the other customer information and account management platforms.

Finally, don’t get me started on the NBN, possibly one of the main hurdles to increased productivity for SMEs, freelancers and remote workers. In my inner-city area of Melbourne, I’ve now been told that I won’t be able to access the NBN for at least another 15-18 months – much, much later than originally announced. Meanwhile, since the NBN launched, my neighbourhood has seen higher-density dwellings, more people working from home, more streaming and on-demand services, and more tech companies moving into the area. So legacy ADSL is being choked, and there is no improvement to existing infrastructure pending the NBN. Judging by the feedback I read on social media and elsewhere, the NBN has been over-sold, and it feels like I am caught in a Catch-22. I’ve just come back from two weeks’ holiday in the South Island of New Zealand, and despite staying in some fairly remote areas, I generally enjoyed much faster internet than I get at home in Melbourne.

Next week: Startup Vic’s Impact Pitch Night


Fear of the Robot Economy…

A couple of articles I came across recently made for quite depressing reading about the future of the economy. The first was an opinion piece by Greg Jericho for The Guardian on an IMF Report about the economic impact of robots. The second was the AFR’s annual Rich List. Read together, they don’t inspire me with confidence that we are really embracing the economic opportunity that innovation brings.

In the first article, the conclusion seemed to be predicated on the idea that robots will destroy more “jobs” (that archaic unit of economic output/activity against which we continue to measure all human, social and political achievement) than they will enable us to create through our own advancement. Ergo robots bad, jobs good.

The second report, meanwhile, painted a depressing picture of where most economic wealth continues to be created. Of the 200 wealthiest people in Australia, around 25% made/make their money in property, with another 10% coming from retail. Add in resources and “investment” (a somewhat opaque category), and these sectors probably account for about two-thirds of the total. Agriculture, manufacturing, entertainment and financial services also feature. However, only the founders of Atlassian and a few other entrepreneurs come from the technology sector – which should make us wonder where the innovation that will propel our economy post-mining boom is going to come from.

As I have commented before, the public debate on innovation (let alone public engagement) is not happening in any meaningful way. As one senior executive at a large financial services company told me a while back, “any internal discussion around technology, automation and digital solutions gets shut down for fear of provoking the spectre of job losses”. All the while, large organisations like banks are hiring hundreds of consultants and change managers to help them innovate and restructure (i.e., de-layer their staff), rather than trying to innovate from within.

With my home State of Victoria heading for the polls later this year, and the growing sense that we are already in Federal election campaign mode for 2019 (or earlier…), we will see an even greater emphasis on public funding for traditional infrastructure rather than investing in new technologies or innovation.

Finally, at the risk of stirring up the ongoing corporate tax debate even further, I took part in a discussion last week with various members of the FinTech and Venture Capital community on Treasury policy towards Blockchain, cryptocurrency and ICOs. There was an acknowledgement that while Australia could be a leader in this new technology sector, a lack of regulatory certainty and non-conducive tax treatment of this new funding model mean that there will be a brain drain as talent relocates overseas to more amenable jurisdictions.

Next week: The new productivity tools