BYOB (Bring Your Own Brain)

My Twitter and LinkedIn feeds are full of posts about artificial intelligence, machine learning, large language models, robotics and automation – and how these technologies will impact our jobs and our employment prospects, often in very dystopian tones. It can be quite depressing to trawl through this material, to the point of being overwhelmed by the imminent prospect of human obsolescence.

No doubt, getting to grips with these tools will be important if we are to navigate the future of work, understand the relationship between labour, capital and technology, and maintain economic relevance in a world of changing employment models.

But we have been here before, many times (remember the Luddites?), and so far, the human condition means we learn to adapt in order to survive. These transitions will be painful, and there will be casualties along the way, but there is cause for optimism if we remember our post-industrial history.

First, among recent Twitter posts there was a timely reminder that automation does not need to equal despair in the face of displaced jobs.

Second, the technology at our disposal will inevitably make us more productive as well as enabling us to reduce mundane or repetitive tasks, even freeing up more time for other (more creative) pursuits. The challenge will be in learning how to use these tools efficiently and effectively, so that we don’t swap one type of routine for another.

Third, there is still a need to consider the human factor when it comes to the work environment, business structures and organisational behaviour – not least personal interaction, communication skills and stakeholder management. After all, you still need someone to switch on the machines, and tell them what to do!

Fourth, the evolution of “bring your own device” (and remote working) means that many of us have grown accustomed to having a degree of autonomy in the ways in which we organise our time and schedule our tasks – giving us the potential for more flexible working conditions. Plus, we have seen how many apps we use at home are interchangeable with the tools we use for work – and although the risk is that we are “always on”, equally, we can get smarter at using these same technologies to establish boundaries between our work/life environments.

Fifth, all the technology in the world is not going to absolve us of the need to think for ourselves. We still need to bring our own cognitive faculties and critical thinking to an increasingly automated, AI-intermediated and virtual world. If anything, we have to ramp up our cerebral powers so that we don’t become subservient to the tech, to make sure the tech works for us (and not the other way around).

Adopting a new approach means:

  • not taking the tech for granted
  • being prepared to challenge the tech’s assumptions (and not being complicit in its built-in biases)
  • questioning the motives and intentions of the tech’s developers, managers and owners (especially known or suspected bad actors)
  • validating newly available data to gain new insights (and not repeat past mistakes)
  • evaluating the evidence based on actual events and outcomes
  • not falling prey to hyperbolic and cataclysmic conjectures

Finally, it is interesting to note the recent debates on regulating this new tech – curtailing malign forces, maintaining protections on personal privacy, increasing data security, and ensuring greater access for those currently excluded. This is all part of a conscious narrative (that human component!) to limit the extent to which AI will be allowed to run rampant, and to hold tech (in all its forms) more accountable for the consequences of its actions.

Next week: “The Digital Director”

The Ongoing Productivity Debate

In my previous blog, I mentioned that productivity in Australia remains sluggish. There are various ideas as to why, and what we could do to improve performance. There are suggestions that traditional productivity analysis may track the wrong thing(s) – for example, output should not simply be measured against input hours, especially in light of technology advances such as cloud computing, AI, machine learning and AR/VR. There are even suggestions that rather than working a 5-day week (or longer), a four-day working week may actually result in better productivity outcomes – a situation we may be forced to embrace with increased automation.
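To see why the four-day week argument is at least arithmetically plausible, here is a minimal sketch (the output and hours figures are assumed for illustration only): productivity is conventionally output divided by hours worked, so delivering the same output in fewer hours raises the measured rate.

```python
# Hypothetical figures, not real data: same weekly output,
# standard 38-hour week vs a four-day week of the same daily hours.
weekly_output = 100.0          # units of work produced (assumed)
five_day_hours = 5 * 7.6       # 38.0 hours
four_day_hours = 4 * 7.6       # 30.4 hours

# Conventional measure: output per input hour.
p5 = weekly_output / five_day_hours
p4 = weekly_output / four_day_hours

print(round(p5, 2))  # ≈ 2.63 units/hour
print(round(p4, 2))  # ≈ 3.29 units/hour
```

Of course, this only holds if output really is maintained over fewer hours – which is precisely the empirical question the four-day-week trials are trying to answer.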

Image Source: Wikimedia Commons

It’s been a number of years since I worked for a large organisation, but I get the sense that employees are still largely monitored by the number of hours they are “present” – i.e., on site, in the office, or logged in to the network. But I think we worked out some time ago that merely “turning up” is not a reliable measure of individual contribution, output or efficiency.

No doubt, the rhythm of the working day has changed – the “clock on/clock off” pattern is not what it was even when I first joined the workforce, when we still had strict core minimum hours (albeit with flexi-time and overtime). So although many employees may feel like they are working longer hours (especially in the “always on” environment of e-mail, smartphones and remote working), I’m not sure how many of them would say they are working at optimum capacity or maximum efficiency.

For example, the amount of time employees spend on social media (the new smoko?) should not be ignored as a contributory factor in the lack of productivity gains. Yes, I know there are arguments for saying that giving employees access to Facebook et al can be beneficial in terms of research, training and development, networking, connecting with prospective customers and suppliers, and informally advocating for the companies they work for; plus, personal time spent on social media and the internet (e.g., booking a holiday) while at work may mean taking less actual time out of the office.

But let’s try to put this into perspective. With the amount of workplace technology employees have access to (plus the lowering costs of that technology), why are we still not experiencing corresponding productivity gains?

The first problem is poor deployment of that technology. How many times have you spoken to a call centre, only to be told “the system is slow today”, or worse, “the system won’t let me do that”? The second problem is poor training on the technology – if employees don’t have enough of a core understanding of the software and applications they are expected to use (I don’t mean we all need to be coders or programmers – although those will be core skills in future), how will they be able to make best use of that technology? The third problem is poor alignment of technology – whether caused by legacy systems, so-called tech debt, or simply systems that do not talk to one another. I recently spent over 2 hours at my local bank trying to open a new term deposit – even though I have been a customer of the bank for more than 15 years, and have multiple products and accounts with it, I was told this particular product still runs on a standalone DOS platform, and the back-end is not integrated with the other customer information and account management platforms.

Finally, don’t get me started about the NBN, possibly one of the main hurdles to increased productivity for SMEs, freelancers and remote workers. In my inner-city area of Melbourne, I’ve now been told that I won’t be able to access the NBN for at least another 15-18 months – much, much later than originally announced. Meanwhile, since the NBN launched, my neighbourhood has seen higher-density dwellings, more people working from home, more streaming and on-demand services, and more tech companies moving into the area. So legacy ADSL is being choked, and there is no improvement to existing infrastructure pending the NBN. It feels like I am in a Catch-22, and, based on the feedback I read on social media and elsewhere, that the NBN has been over-sold. I’ve just come back from 2 weeks’ holiday in the South Island of New Zealand, and despite staying in some fairly remote areas, I generally enjoyed much faster internet than I get at home in Melbourne.

Next week: Startup Vic’s Impact Pitch Night