The Ongoing Productivity Debate

In my previous blog, I mentioned that productivity in Australia remains sluggish. There are various ideas as to why, and as to what we could do to improve performance. Some suggest that traditional productivity analysis may track the wrong thing(s) – for example, output should not simply be measured against input hours, especially in light of technological advances such as cloud computing, AI, machine learning and AR/VR. There are even suggestions that, rather than working a five-day week (or longer), a four-day working week may actually result in better productivity outcomes – a situation we may be forced to embrace with increased automation.

Image Source: Wikimedia Commons
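To illustrate why hours alone can be a misleading denominator, here is a minimal sketch of the classic output-per-hour measure, comparing a five-day week with a four-day week. The figures are entirely hypothetical and purely for illustration:

```python
# Toy comparison of labour productivity (output per hour worked).
# All figures below are hypothetical and purely for illustration.

def productivity(output_units: float, hours_worked: float) -> float:
    """Classic measure: output divided by input hours."""
    return output_units / hours_worked

five_day = productivity(output_units=100, hours_worked=40)  # 2.50 units/hour
four_day = productivity(output_units=95, hours_worked=32)   # ~2.97 units/hour

print(f"Five-day week: {five_day:.2f} units/hour")
print(f"Four-day week: {four_day:.2f} units/hour")

# If a team delivers 95% of the output in 80% of the hours,
# measured productivity rises by almost 19% - which says as much
# about the crudeness of the measure as it does about the team.
```

Of course, the harder question is what counts as "output" in a knowledge economy, which is exactly where the traditional measure starts to creak.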

It’s been a number of years since I worked for a large organisation, but I get the sense that employees are still largely monitored by the number of hours they are “present” – i.e., on site, in the office, or logged in to the network. But I think we worked out some time ago that merely “turning up” is not a reliable measure of individual contribution, output or efficiency.

No doubt, the rhythm of the working day has changed – the “clock on/clock off” pattern is not what it was even when I first joined the workforce, when we still had strict core minimum hours (albeit with flexi-time and overtime). So although many employees may feel like they are working longer hours (especially in the “always on” environment of e-mail, smartphones and remote working), I’m not sure how many of them would say they are working at optimum capacity or maximum efficiency.

For example, the amount of time employees spend on social media (the new smoko?) should not be ignored as a contributory factor in the lack of productivity gains. Yes, I know there are arguments that giving employees access to Facebook et al can be beneficial in terms of research, training and development, networking, connecting with prospective customers and suppliers, and informally advocating for the companies they work for. Plus, personal time spent on social media and the internet while at work (e.g., booking a holiday) may mean taking less actual time out of the office.

But let’s try to put this into perspective. With the amount of workplace technology employees have access to (plus the lowering costs of that technology), why are we still not experiencing corresponding productivity gains?

The first problem is poor deployment of that technology. How many times have you spoken to a call centre, only to be told “the system is slow today”, or worse, “the system won’t let me do that”? The second problem is poor training on the technology – if employees don’t have enough of a core understanding of the software and applications they are expected to use (I don’t mean we all need to be coders or programmers, although those are skills everyone will need in future), how will they be able to make best use of that technology? The third problem is poor alignment of technology – whether caused by legacy systems, so-called tech debt, or simply systems that do not talk to one another. I recently spent over two hours at my local bank trying to open a new term deposit. Even though I have been a customer of the bank for more than 15 years, and have multiple products and accounts with it, I was told this particular product still runs on a standalone DOS platform, and the back-end is not integrated with the other customer information and account management platforms.

Finally, don’t get me started on the NBN, possibly one of the main hurdles to increased productivity for SMEs, freelancers and remote workers. In my inner-city area of Melbourne, I’ve now been told that I won’t be able to access the NBN for at least another 15-18 months – much, much later than the original announcements. Meanwhile, since the NBN launched, my neighbourhood has seen higher-density dwellings, more people working from home, more streaming and on-demand services, and more tech companies moving into the area. So legacy ADSL is being choked, and there is no improvement to existing infrastructure pending the NBN. Judging by the feedback I read on social media and elsewhere, the NBN has been over-sold, and it feels like I am caught in a Catch-22. I’ve just come back from two weeks’ holiday in the South Island of New Zealand, and despite staying in some fairly remote areas, I generally enjoyed much faster internet than I get at home in Melbourne.

Next week: Startup Vic’s Impact Pitch Night

Fear of the Robot Economy….

A couple of articles I came across recently made for quite depressing reading about the future of the economy. The first was an opinion piece by Greg Jericho for The Guardian on an IMF Report about the economic impact of robots. The second was the AFR’s annual Rich List. Read together, they don’t inspire me with confidence that we are really embracing the economic opportunity that innovation brings.

In the first article, the conclusion seemed to be predicated on the idea that robots will destroy more “jobs” (that archaic unit of economic output/activity against which we continue to measure all human, social and political achievement) than they will enable us to create through our own advancement. Ergo, robots bad, jobs good.

The second report, meanwhile, painted a depressing picture of where most economic wealth continues to be created. Of the 200 wealthiest people in Australia, around 25% made/make their money in property, with another 10% coming from retail. Add in resources and “investment” (a somewhat opaque category), and these sectors probably account for about two-thirds of the total. Agriculture, manufacturing, entertainment and financial services also feature. However, only the founders of Atlassian and a few other entrepreneurs come from the technology sector – which should make us wonder where the innovation that will propel our economy post-mining boom is going to come from.

As I have commented before, the public debate on innovation (let alone public engagement) is not happening in any meaningful way. As one senior executive at a large financial services company told me a while back, “any internal discussion around technology, automation and digital solutions gets shut down for fear of provoking the spectre of job losses”. All the while, large organisations like banks are hiring hundreds of consultants and change managers to help them innovate and restructure (i.e., de-layer their staff), rather than trying to innovate from within.

With my home State of Victoria heading to the polls later this year, and a growing sense that we are already in Federal election campaign mode for 2019 (or earlier…), we will see an even greater emphasis on public funding for traditional infrastructure rather than on investment in new technologies or innovation.

Finally, at the risk of stirring up the ongoing corporate tax debate even further, I took part in a discussion last week with various members of the FinTech and Venture Capital community on Treasury policy towards Blockchain, cryptocurrency and ICOs. There was an acknowledgement that while Australia could be a leader in this new technology sector, a lack of regulatory certainty and non-conducive tax treatment of this new funding model mean that there will be a brain drain as talent relocates overseas to more amenable jurisdictions.

Next week: The new productivity tools

The Maker Culture

London’s newly re-opened Design Museum welcomes visitors with a bold defining statement of intent. According to the curators, there are only designers, makers and users. To me, this speaks volumes about how the “makers” are now at the forefront of economic activity, and how they are challenging key post-industrial notions of mass-production, mass-consumption and even mass-employment. Above all, as users, we are becoming far more engaged with why, how and where something is designed, made and distributed. And as consumers we are being encouraged to think about and take some responsibility for our choices in terms of environmental impact and sustainability.

Design Museum, London (Photo: Rory Manchee)

There are several social, economic, technological and environmental movements that have helped to define “maker culture”, so there isn’t really a single, neat theory sitting behind it all. Here is a (highly selective) list of the key elements that have directly or indirectly contributed to this trend:

Hacking – this is not about cracking network security systems, but about learning how to make fixes when things don’t work the way we want them to, or about creating new solutions to existing problems – see also “life hacks”, hackathons or something like BBC’s Big Life Fix. Sort of “necessity is the mother of invention”.

Open source – providing easier access to coding tools, software programs, computing components and data sources has helped to reduce setup costs for new businesses and tech startups, and has deconstructed/demystified traditional development processes. It encompasses everything from Linux to Arduino; from GitHub to public APIs; from AI bots to widget libraries; from Touch Board to F.A.T. Lab; from SaaS to small-scale 3-D printers.

Getting Sh*t Done – from the Fitzroy Academy, to Andrea de Chirico’s SUPERLOCAL projects, maker culture can be characterised by those who want: to make things happen; to make a difference; to create (social) impact; to get their hands dirty; to connect with the materials, people, communities and cultures they work with; to disrupt the status quo; to embrace DIY solutions; to learn by doing.

The Etsy Effect – just part of the response to widespread consumer demand for personalised, customised, hand-made, individual, artisan, crafted, unique and bespoke products. In turn, platforms like the Etsy and Craftsy marketplaces have sparked a whole raft of self-help video guides and online tutorials, where people can not only learn new skills to make things, but also learn how to patch, repair, re-use, recycle and re-purpose. Also loosely linked to the recent publishing phenomenon of new magazines that combine lifestyle, new age culture, philosophy, sustainability, mindfulness, and entrepreneurism with a social conscience.

Startups, Meetups and Co-working Spaces – if the data is to be believed, more and more people want to start their own ventures rather than find employment with an existing organisation. (Under the gig economy, it is predicted that around 40% of the workforce will be self-employed, freelancing or contracting within five years, so naturally people are having to consider their employment options more carefully.) While starting your own business is not for everyone, the expanding ecosystem of meetups and co-working spaces is enabling would-be entrepreneurs to experiment and explore what’s possible, and to network with like-minded people.

Maker Spaces – also known as fabrication labs (“FabLabs”), they offer direct access to tools and equipment, mostly for things like 3-D printing, laser-cutting and circuit-board assembly, but some commercial facilities have the capacity to support new product prototyping, manufacturing process testing or short-run production lines. (I see this interface between “cottage industry” digital studios and full-blown production plants as being critical to the development of high-end, niche and specialist engineering and manufacturing services to replace the declining, traditional manufacturing sectors.) Some of this activity is formed around local communities of independent makers, while some spaces offer shared workshops and resources; elsewhere, they are run as innovation hubs and learning labs.

Analogue Warmth – I’ve written before about my appreciation for all things analogue, and the increased sales of vinyl records and even music cassettes demonstrate that, among audiophiles, digital is not always better, and that there is something to be said for the tangible format. This preference for analogue, combined with a love of tactile objects and a spirit of DIY, has probably reached its apotheosis (in photography at least) through Kelli Anderson’s “This Book Is A Camera”.

Finally, a positive knock-on effect of maker culture is the growing number of educational resources for learning coding, computing, maths and robotics: Raspberry Pi, Kano and Tech Will Save Us; KidsLogic, Creative Coding HK and Machinam; Robogals, Techcamp and robokids. We can all understand the importance of learning these skills as part of a well-rounded education, because, as Mark Pascall, founder of 3months.com, recently commented:

“I’m not going to advise my kids to embark on careers that have long expensive training programs (e.g. doctors/lawyer etc). AI is already starting to give better results.”

Better to learn how things work, how to design and make them, how to repair them etc., so that we have core skills that can adapt as technology changes.
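As a small example of the sort of first project those resources make possible, here is a minimal sketch for a Raspberry Pi using the gpiozero library – assuming an LED is wired to GPIO pin 17 – the kind of two-minute exercise that teaches how code and hardware fit together:

```python
# Blink an LED from a Raspberry Pi - a typical first "maker" coding exercise.
# Assumes the gpiozero library is installed and an LED is wired to GPIO pin 17.
from gpiozero import LED
from signal import pause

led = LED(17)                     # the GPIO pin the LED is attached to
led.blink(on_time=1, off_time=1)  # toggle the LED on and off once per second
pause()                           # keep the script alive so the blinking continues
```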

Next week: Life Lessons from the Techstars founders

When robots say “Humans do not compute…”

There’s a memorable scene in John Carpenter’s 1970s sci-fi classic “Dark Star”, where an astronaut tries to use Cartesian logic to defuse a nuclear bomb. The bomb is equipped with artificial intelligence and is programmed to detonate via a timer once its circuits have been activated. Due to a circuit malfunction, the bomb believes it has been triggered, even though it is still attached to the spaceship and cannot be mechanically released. Refuting the arguments against its existence, the bomb responds in kind and simply states: “I think, therefore I am.”

Dark Star’s Bomb 20: “I think, therefore I am…”

The notion of artificial intelligence both thrills us and fills us with dread: on the one hand, AI can help us (by doing a lot of routine thinking and mundane processing); on the other, it can make us the subjects of its own ill-will (think of HAL 9000 in “2001: A Space Odyssey”, or “Robocop”, or “Terminator”, or any similar dystopian sci-fi story).

The current trend for smarter data processing, fuelled by AI tools such as machine learning, semantic search, sentiment analysis and social graph models, is making a world of driverless cars, robo-advice, the Internet of Things and behaviour prediction a reality. But there are concerns that we will abdicate our decision-making (and ultimately, our individual moral responsibility) to computers; that more and more jobs will be lost to robots; and that we will end up being dehumanised if we no longer have to think for ourselves. Worse still, if our human behaviours cease making sense to those very same computers that we have programmed to learn how to think for us, then our demise is pre-determined.
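For a sense of how unglamorous some of these “AI tools” can be under the hood, here is a toy sketch of lexicon-based sentiment analysis – a deliberately simplified, hand-rolled scorer of my own, not how any particular vendor’s product actually works:

```python
# A toy, lexicon-based sentiment scorer - illustrative only, not a
# reflection of how commercial sentiment-analysis systems are built.

LEXICON = {
    "good": 1, "great": 2, "love": 2, "fast": 1,
    "bad": -1, "slow": -1, "hate": -2, "broken": -2,
}

def sentiment_score(text: str) -> float:
    """Average the scores of known words; 0.0 means neutral (or no known words)."""
    words = text.lower().split()
    scores = [LEXICON[word] for word in words if word in LEXICON]
    return sum(scores) / len(scores) if scores else 0.0

print(sentiment_score("the system is slow today and half the features are broken"))  # -1.5
print(sentiment_score("love the new app it is great and fast"))                      # ~1.67
```

Real systems are vastly more sophisticated, but the principle – turning messy human expression into a number a machine can act on – is much the same, which is precisely why the question of what we then delegate to that number matters.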

The irony is that, if AI becomes as smart as we might imagine, we will impart to the robots a very human fallibility: namely, the tendency to over-analyse the data (rather than examine the actual evidence before us). As Brian Aldiss wrote in his 1960 short story “The Sterile Millennia”, when robots get together:

“…they suffer from a trouble which sometimes afflicts human gatherings: a tendency to show off their logic at the expense of the object of the meeting.”

Long live logic, but longer still live common sense!

Next week: 101 #Startup Pitches – What have we learned?