BYOB (Bring Your Own Brain)

My Twitter and LinkedIn feeds are full of posts about artificial intelligence, machine learning, large language models, robotics and automation – and how these technologies will impact our jobs and our employment prospects, often in very dystopian tones. It can be quite depressing to trawl through this material, to the point of being overwhelmed by the imminent prospect of human obsolescence.

No doubt, getting to grips with these tools will be important if we are to navigate the future of work, understand the relationship between labour, capital and technology, and maintain economic relevance in a world of changing employment models.

But we have been here before, many times (remember the Luddites?), and so far we have learned to adapt in order to survive. These transitions will be painful, and there will be casualties along the way, but there is cause for optimism if we remember our post-industrial history.

First, among recent Twitter posts there was a timely reminder that automation need not mean despair in the face of displaced jobs.

Second, the technology at our disposal will inevitably make us more productive, as well as enabling us to reduce mundane or repetitive tasks and freeing up more time for other (more creative) pursuits. The challenge will be in learning how to use these tools efficiently and effectively, so that we don’t swap one type of routine for another.

Third, there is still a need to consider the human factor when it comes to the work environment, business structures and organisational behaviour – not least personal interaction, communication skills and stakeholder management. After all, you still need someone to switch on the machines, and tell them what to do!

Fourth, the evolution of “bring your own device” (and remote working) means that many of us have grown accustomed to having a degree of autonomy in the ways in which we organise our time and schedule our tasks – giving us the potential for more flexible working conditions. Plus, we have seen how many apps we use at home are interchangeable with the tools we use for work – and although the risk is that we are “always on”, equally, we can get smarter at using these same technologies to establish boundaries between our work/life environments.

Fifth, all the technology in the world is not going to absolve us of the need to think for ourselves. We still need to bring our own cognitive faculties and critical thinking to an increasingly automated, AI-intermediated and virtual world. If anything, we have to ramp up our cerebral powers so that we don’t become subservient to the tech, to make sure the tech works for us (and not the other way around).

Adopting a new approach means:

  • not taking the tech for granted
  • being prepared to challenge the tech’s assumptions (and not being complicit in its in-built biases)
  • questioning the motives and intentions of the tech’s developers, managers and owners (especially known or suspected bad actors)
  • validating all the newly available data to gain new insights (and not repeat past mistakes)
  • evaluating the evidence based on actual events and outcomes
  • not falling prey to hyperbolic and cataclysmic conjectures

Finally, it is interesting to note the recent debates on regulating this new tech – curtailing malign forces, maintaining protections on personal privacy, increasing data security, and ensuring greater access for those currently excluded. This is all part of a conscious narrative (that human component!) to limit the extent to which AI will be allowed to run rampant, and to hold tech (in all its forms) more accountable for the consequences of its actions.

Next week: “The Digital Director”

The Limits of Technology

As part of my home entertainment during lock-down, I have been enjoying a series of Web TV programmes called This Is Imminent, hosted by Simon Waller, whose broad theme is “how are we learning to live with new technology?” – in short, the good, the bad and the ugly of AI, robotics, computers, productivity tools etc.

Niska robots are designed to serve ice cream… (image sourced from Weekend Notes)

Despite the challenges of Zoom overload, choked internet capacity, and constant screen-time, the lock-down has shown how reliant we are upon tech for communications, e-commerce, streaming services and working from home. Without these tools, many of us would not have been able to cope with the restrictions imposed by the pandemic.

The value of Simon’s interactive webinars is two-fold: as the audience, we get to hear from experts in their respective fields and gain exposure to new ideas; and we have the opportunity to explore how technology impacts our own lives and experience – all in a totally non-judgmental way. What’s particularly interesting is the non-binary nature of the discussion. It’s not “this tech good, that tech bad”, nor is it about taking absolute positions – it thrives in the margins and in the grey areas, where we are uncertain, unsure, or just undecided.

In parallel with these programmes, I have been reading a number of novels that discuss different aspects of AI. These books seem to be both enamoured with, and in awe of, the potential of AI – William Gibson’s “Agency”, Ian McEwan’s “Machines Like Me”, and Jeanette Winterson’s “Frankissstein” – although they take quite different approaches to the pros and cons of the subject and the technology itself. (When added to my recent reading list of Jonathan Coe’s “Middle England” and John Lanchester’s “The Wall”, you can see what fun and games I’m having during lock-down….)

What this viewing and reading suggests to me is that we quickly run into the limitations of any new technology. Either it never delivers what it promises, or we become bored with it. We over-invest and place too much hope in it, then take it for granted (or worse, come to resent it). What the above novelists identify is our inability to trust ourselves when confronted with the opportunity for human advancement, largely because the same leaps in technology also induce existential angst or challenge our very existence – not least because they are highly disruptive as well as innovative.

On the other hand, despite a general shift towards open source protocols and platforms, we still see age-old format wars whenever any new tech comes along. This means most apps lack interoperability, tying us into rigid and vertically integrated ecosystems. The plethora of apps launched for mobile devices can also mean premature obsolescence (built-in or otherwise), as developers can’t be bothered to maintain and upgrade them (or the app stores focus on the more popular products, and gradually weed out anything that doesn’t fit their distribution model or operating system). Worse, newer apps are not retrofitted to run on older platforms, while older software programs and content suffer digital decay and degradation. (Developers will also tell you about tech debt – the eventual higher costs of upgrading products that were built using “quick and cheap” short-term solutions, rather than taking a longer-term perspective.)

Consequently, new technology tends to over-engineer a solution, or create niche, hard-coded products (robots serving ice cream?). In the former, it can make existing tasks even harder; in the latter, it can create tech dead ends and generate waste. Rather than aiming for giant leaps forward within narrow applications, perhaps we need more modular and accretive solutions that are adaptable, interchangeable, easier to maintain, and cheaper to upgrade.

Next week: Distractions during Lock-down

Fear of the Robot Economy….

A couple of articles I came across recently made for quite depressing reading about the future of the economy. The first was an opinion piece by Greg Jericho for The Guardian on an IMF Report about the economic impact of robots. The second was the AFR’s annual Rich List. Read together, they don’t inspire me with confidence that we are really embracing the economic opportunity that innovation brings.

In the first article, the conclusion seemed to be predicated on the idea that robots will destroy more “jobs” (that archaic unit of economic output/activity against which we continue to measure all human, social and political achievement) than they will enable us to create as part of our advancement. Ergo, robots bad, jobs good.

The second article, meanwhile, painted a depressing picture of where most economic wealth continues to be created. Of the 200 Wealthiest People in Australia, around 25% made (or still make) their money in property, with another 10% coming from retail. Add in resources and “investment” (a somewhat opaque category), and these sectors probably account for about two-thirds of the total. Agriculture, manufacturing, entertainment and financial services also feature. However, only the founders of Atlassian and a few other entrepreneurs come from the technology sector – which should make us wonder where the innovation is coming from that will propel our economy post-mining boom.

As I have commented before, the public debate on innovation (let alone public engagement) is not happening in any meaningful way. As one senior executive at a large financial services company told me a while back, “any internal discussion around technology, automation and digital solutions gets shut down for fear of provoking the spectre of job losses”. All the while, large organisations like banks are hiring hundreds of consultants and change managers to help them innovate and restructure (i.e., de-layer their staff), rather than trying to innovate from within.

With my home State of Victoria heading to the polls later this year, and a growing sense that we are already in campaign mode for the 2019 Federal election (or earlier…), we will see an even greater emphasis on public funding for traditional infrastructure rather than investment in new technologies or innovation.

Finally, at the risk of stirring up the ongoing corporate tax debate even further, I took part in a discussion last week with various members of the FinTech and Venture Capital community about Treasury policy on Blockchain, cryptocurrency and ICOs. There was an acknowledgement that while Australia could be a leader in this new technology sector, a lack of regulatory certainty and non-conducive tax treatment of this new funding model mean that there will be a brain drain, as talent relocates overseas to more amenable jurisdictions.

Next week: The new productivity tools

When robots say “Humans do not compute…”

There’s a memorable scene in John Carpenter‘s 1970s sci-fi classic, “Dark Star”, where an astronaut tries to use Cartesian logic to defuse a nuclear bomb. The bomb is equipped with artificial intelligence and is programmed to detonate via a timer once its circuits have been activated. Due to a circuit malfunction, the bomb believes it has been triggered, even though it is still attached to the spaceship and cannot be mechanically released. Refuting the arguments against its existence, the bomb responds in kind and simply states: “I think, therefore I am.”

Dark Star’s Bomb 20: “I think, therefore I am…”

The notion of artificial intelligence both thrills us, and fills us with dread: on the one hand, AI can help us (by doing a lot of routine thinking and mundane processing); on the other, it can make us the subjects of its own ill-will (think of HAL 9000 in “2001: A Space Odyssey”, or “Robocop”, or “Terminator” or any similar dystopian sci-fi story).

The current trend for smarter data processing, fuelled by AI tools such as machine learning, semantic search, sentiment analysis and social graph models, is making a world of driverless cars, robo-advice, the Internet of Things and behaviour prediction a reality. But there are concerns that we will abnegate our decision-making (and ultimately, our individual moral responsibility) to computers; that more and more jobs will be lost to robots; and that we will end up being dehumanised if we no longer have to think for ourselves. Worse still, if our human behaviours cease making sense to those very same computers that we have programmed to learn how to think for us, then our demise is pre-determined.

The irony is that if AI becomes as smart as we might imagine, then we will impart to the robots a very human fallibility: namely, the tendency to over-analyse the data (rather than examine the actual evidence before us). As Brian Aldiss wrote in his 1960 short story, “The Sterile Millennia”, when robots get together:

“…they suffer from a trouble which sometimes afflicts human gatherings: a tendency to show off their logic at the expense of the object of the meeting.”

Long live logic, but longer still live common sense!

Next week: 101 #Startup Pitches – What have we learned?