BYOB (Bring Your Own Brain)

My Twitter and LinkedIn feeds are full of posts about artificial intelligence, machine learning, large language models, robotics and automation – and how these technologies will impact our jobs and our employment prospects, often in very dystopian tones. It can be quite depressing to trawl through this material, to the point of being overwhelmed by the imminent prospect of human obsolescence.

No doubt, getting to grips with these tools will be important if we are to navigate the future of work, understand the relationship between labour, capital and technology, and maintain economic relevance in a world of changing employment models.

But we have been here before, many times (remember the Luddites?), and so far we have always learned to adapt in order to survive. These transitions will be painful, and there will be casualties along the way, but there is cause for optimism if we remember our post-industrial history.

First, among recent Twitter posts there was a timely reminder that automation need not mean despair in the face of displaced jobs.

Second, the technology at our disposal will inevitably make us more productive, as well as enabling us to reduce mundane or repetitive tasks – even freeing up more time for other (more creative) pursuits. The challenge will be to learn to use these tools efficiently and effectively, so that we don’t swap one type of routine for another.

Third, there is still a need to consider the human factor when it comes to the work environment, business structures and organisational behaviour – not least personal interaction, communication skills and stakeholder management. After all, you still need someone to switch on the machines, and tell them what to do!

Fourth, the evolution of “bring your own device” (and remote working) means that many of us have grown accustomed to a degree of autonomy in how we organise our time and schedule our tasks – giving us the potential for more flexible working conditions. We have also seen how many of the apps we use at home are interchangeable with the tools we use for work. The risk is that we are “always on”; equally, we can get smarter at using these same technologies to establish boundaries between our work and home lives.

Fifth, all the technology in the world is not going to absolve us of the need to think for ourselves. We still need to bring our own cognitive faculties and critical thinking to an increasingly automated, AI-intermediated and virtual world. If anything, we have to ramp up our cerebral powers so that we don’t become subservient to the tech, and so that the tech works for us (and not the other way around).

Adopting a new approach means:

  • not taking the tech for granted
  • being prepared to challenge the tech’s assumptions (and not being complicit in its in-built biases – see the sketch after this list)
  • questioning the motives and intentions of the tech’s developers, managers and owners (especially those of known or suspected bad actors)
  • validating all the newly available data to gain new insights (and not repeating past mistakes)
  • evaluating the evidence based on actual events and outcomes
  • not falling prey to hyperbolic and cataclysmic conjectures
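
To make the bias point concrete: below is a minimal sketch of one way to challenge an automated decision tool before trusting it – auditing its outcomes across groups. The dataset, column names and disparity threshold are all hypothetical, invented purely for illustration.

```python
# A minimal, hypothetical bias audit: compare an automated tool's
# approval rates across groups before taking its outputs on trust.
import pandas as pd

# Invented sample data standing in for a real decision log.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0],
})

# Approval rate per group: a first, crude check for disparate outcomes.
rates = decisions.groupby("group")["approved"].mean()
print(rates)

# Flag a gap worth questioning (the 0.2 threshold is an assumption,
# not a legal or statistical standard).
if rates.max() - rates.min() > 0.2:
    print("Outcome gap detected - time to question the tool's assumptions.")
```

A gap like this is not proof of bias on its own, but it is exactly the kind of evidence-based question the list above asks us to put to the tools we adopt.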

Finally, it is interesting to note the recent debates on regulating this new tech – curtailing malign forces, maintaining protections on personal privacy, increasing data security, and ensuring greater access for those currently excluded. This is all part of a conscious narrative (that human component!) to limit the extent to which AI will be allowed to run rampant, and to hold tech (in all its forms) more accountable for the consequences of its actions.

Next week: “The Digital Director”

Designing The Future Workplace

Last week’s blog was about reshaping the Future of Work. From both the feedback I have received, and the recent work I have been doing with Re-Imagi, what really comes across is the opportunity to move the dialogue of “work” from “employer and employee” (transactional) to “co-contributors” (relationship). In an ideal world, companies contribute resources (capital, structure, equipment, tools, opportunities, projects, compliance, risk management), and individuals contribute resources (hard and soft skills, experience, knowledge, contacts, ideas, time, relationships, networks, creativity, thinking). If this is the new Social Contract, what is the best environment to foster this collaborative approach?

Image: “MDI Siemens Cube farm” (Photo sourced from Flickr)

Many recent articles on the Future of Work and the Future Workplace have identified key social, organisational and architectural issues to be addressed:

  1. On-boarding, engaging and “nurturing” new employees
  2. Trust in the workplace
  3. The workplace structure and layout
  4. The physical and built environment
  5. Design and sustainability

Underpinning these changes are technology (e.g., cloud, mobile and social tools which support BYOD, collaboration and remote working), and the gig economy (epitomised by the tribe of digital nomads). Together, these trends are redefining where we work, how we work, what work we do and for which organisations. (For an intriguing and lively discussion on collaborative technology, check out this thread on LinkedIn started by Annalie Killian.)

Having experienced a wide range of working environments (cube farm, open plan, serviced office, hot-desking, small business park, corporate HQ, home office, public libraries, shared offices, internet cafes, co-working spaces, WiFi hot spots, remote working and tele-commuting), I don’t believe there is a perfect solution or an ideal workplace – we each need different spaces and facilities at different times – so flexibility and access, as well as resources, are probably the critical factors.

The fashion for hot-desking, combined with flexible working hours, is producing some unforeseen or undesired outcomes, judging by examples from clients and colleagues I work with:

First, where hot-desking is being used to deal with limited office space, some employees are being “forced” into working from home or telecommuting a certain number of days each month – which can be challenging to manage when teams may need to get together in person.

Second, employees are self-organising into “quiet” and “noisy” areas based on their individual preferences. While that sounds fine – it means employees are taking some responsibility for their own working environment – it can be counter-productive to fostering collaboration, building cross-functional co-operation and developing team diversity. (One company I worked for liked to change the office floor plan and seating arrangements as often as they changed the org chart – at least three or four times a year – something to do with not letting stagnation set in.)

Third, other bad practices are emerging: rather like spreading out coats to “save” seats at the cinema, or using your beach towel to “reserve” a recliner by the hotel pool while you go and have breakfast, some employees are making a land grab for their preferred desk with post-it notes and other claims to exclusive use. Worse, some teams are using dubious project activity as an excuse to commandeer meeting rooms and other common/shared spaces on a permanent basis.

Another trend is for co-working spaces, linked to both the gig economy and the start-up ecosystem, but also a choice for a growing number of small businesses, independent consultants and self-employed professionals. In Melbourne, for example, the number of co-working spaces has grown in just a few years from a handful to around 70. Not all co-working spaces are equal: some are serviced offices in disguise, some are closely linked to startup accelerators and incubators, and some, like WeWork, aspire to be global brands with a volume-based membership model.

But the co-working model is clearly providing a solution and can act as a catalyst for other types of collaboration (although some co-working spaces can be a bit like New York condos, where the other tenants may get to approve your application for membership).

Given the vast number of road and rail commuters who are on their mobile devices to and from work, I sometimes think that the largest co-working spaces in Melbourne are either Punt Road or the Frankston line in rush hour…

Next week: Personal data and digital identity – whose ID is it anyway?

The Future of Work = Creativity + Autonomy

Some recent research from Indiana University suggests that, in the right circumstances, a stressful job is actually good for you. Provided you have a sufficient degree of control over your own work, and enough individual input into decision-making and problem-solving, you may even live longer. In short, the key to the future of work, and to a healthy working life, is a combination of creativity and autonomy.

Time to re-think what the “dignity of labour” means? (Image sourced from Discogs)

Context

In a previous blog, I discussed the changing economic relationship we have with work, in which I re-framed what we mean by “employment”, what it means to be “employed”, and what the new era of work might look like, based on a world of “suppliers” (who offer their services as independent contractors) and “clients” (who expect access to just-in-time and on-demand resources).

The expanding “gig economy” reinforces the expectation that by 2020, average job tenure will be 3 years, and around 40% of the workforce will be employed on a casual basis (part-time, temporary, contractor, freelance, consultant etc.). The proliferation of two-sided marketplaces such as Uber, Foodera, Freelancer, Upwork, Sidekicker, 99designs, Envato and Fiverr is evidence of this shift from employee to supplier.

We are also seeing a trend for hiring platforms that connect teams with technical and business skills to specific project requirements posted by hiring companies. Many businesses understand the value of people pursuing and managing “portfolio careers”, because companies may prefer to access this just-in-time expertise as and when they need it, rather than take on permanent overheads. But there are still challenges around access and “discovery”: who’s available, which projects, defining roles, agreeing a price and so on.
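
As an aside, this “discovery” challenge is essentially a matching problem. Here is a toy sketch of how a platform might rank available suppliers against a project brief – all of the names, skills and the scoring rule are invented for illustration, not a description of any real platform.

```python
# A toy sketch of the "discovery" problem on a hiring platform:
# ranking available suppliers by how much of a project brief they cover.
# All names, skills and the scoring rule are hypothetical.

def match_score(required: set[str], offered: set[str]) -> float:
    """Fraction of the required skills this supplier can cover."""
    return len(required & offered) / len(required) if required else 0.0

suppliers = {
    "Asha":  {"python", "data-analysis", "stakeholder-management"},
    "Ben":   {"ux-design", "prototyping"},
    "Carla": {"python", "project-management", "copywriting"},
}

project_needs = {"python", "data-analysis", "project-management"}

# Rank suppliers by coverage of the brief; price, availability and
# role definition would complicate a real system considerably.
ranking = sorted(
    ((match_score(project_needs, skills), name) for name, skills in suppliers.items()),
    reverse=True,
)
for score, name in ranking:
    print(f"{name}: {score:.0%} of required skills")
```

Even this naive version hints at why discovery is hard: skills rarely line up neatly with a brief, and the scoring rule itself embeds assumptions about what “fit” means.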

Contribution

Meanwhile, employers and HR managers are re-assessing how they evaluate employee contribution. It’s not simply a matter of how “hard” you work (e.g., the hours you put in, or the sales you make). Companies want to know what else you can do for them: how you collaborate, whether you know how to ask for help, and whether you are willing to bring all your experience – as well as who and what you know – to the role. (As a case in point, when Etsy’s COO, Linda Kozlowski, was recently asked about her own hiring criteria, she emphasised the importance of critical thinking, and the ability of new hires to turn analysis into actionable solutions.)

In another blog on purpose, I noted that finding meaningful work all boils down to connecting with our values and interests, and finding a balance between what motivates us, what rewards us, what we can contribute, and what people want from us. As I wrote at the time, how do we manage our career path, when our purpose and our needs will change over time? In short, the future of work will be about creating our own career opportunities, in line with our values, purpose and requirements.*

Compensation

From an economic and social policy perspective, no debate about the future of work can ignore the dual paradoxes:

  1. We will need to have longer careers (as life expectancy increases), but there will be fewer “traditional” jobs to go round;
  2. A mismatch between workforce supply and in-demand skills (plus growing automation) will erode “traditional” wage structures in the jobs that do remain.

Politicians, economists and academics have to devise strategies and theories that support social stability based on aspirational employment targets, while recognising shifting market conditions and a changing technological environment. And, of course, for trade unions, more freelance/independent workers and cheaper hourly rates undermine their own business model of organised membership, centralised industrial awards, enterprise bargaining and the residual threat of industrial action whenever protective/restrictive practices come under pressure.

This is why there needs to be a more serious debate about ideas such as the Universal Basic Income (UBI), and grants to help people start their own businesses. On the UBI, I was struck by a recent interview with everyone’s favourite polymath, Brian Eno. He supports it because:

“…we’re now looking towards a future where there will be less and less employment, inevitably automation is going to make it so there simply aren’t jobs. But that’s alright as long as we accept the productivity that the automations are producing feeds back to people … [The] universal basic income, which is basically saying we pay people to be alive – it makes perfect sense to me.”

If you think that intellectuals like Eno are “part of the problem”, then union leaders like Tim Ayres (who advocates the “start-up grant”) actually have more in common with Margaret Thatcher than perhaps they realise. It was Thatcher’s government that came up with the original Enterprise Allowance Scheme which, despite its flaws, can be credited with launching the careers of many successful entrepreneurs in the 1980s. Such schemes can also help the workforce transition from employment in “old” heavy industries to opportunities in the growing service sectors and the emerging, technology-driven enterprises of today.

Creativity

I am increasingly of the opinion that, whatever our chosen or preferred career path, it is essential to engage with our creative outlets: in part to provide a counterbalance to work/financial/external demands and obligations; in part to explore alternative ideas, find linkages between our other interests, and to connect with new and emerging technology.

In discussing his support for the UBI, Eno points to our need for creativity:

“For instance, in prisons, if you give people the chance to actually make something … you say to them ‘make a picture, try it out, do whatever’ – and the thrill that somebody gets to find that they can actually do something autonomously, not do something that somebody else told them to do, well, in the future we’re all going to be able to need those kind of skills. Apart from the fact that simply rehearsing yourself in creativity is a good idea, remaining creative and being able to go to a situation where you’re not told what to do and to find out how to deal with it, this should be the basic human skill that we are educating people towards and what we’re doing is constantly stopping them from learning.”

I’ve written recently about the importance of the maker culture, and previously commented on the value of the arts and the contribution that they make to society. There is a lot of data on the economic benefits of both the arts and the creative industries, and their contribution to GDP. Some commentators have even argued that art and culture contribute more to the economy than jobs and growth.

Even a robust economy such as Singapore recognises the need to teach children greater creativity through the ability to process information, not simply regurgitate facts. It’s not because we might need more artists (although that may not be a bad thing!), but because of the need for both critical AND creative thinking to complement the demand for new technical skills – to prepare students for the new world of work, to foster innovation, to engage with careers in science and technology and to be more resilient and adaptive to a changing job market.

Conclusions

As part of this ongoing topic, some of the questions that I hope to explore in coming months include:

1. In the debate on the “Future of Work”, is it still relevant to track “employment” only in statistical terms (jobs created/lost, unemployment rates, number of hours worked, etc.)?

2. Is “job” itself an antiquated economic unit of measure (based on a 9-5, 5-day working week, hierarchical and centralised organisational models, and highly directed work practices and structures)?

3. How do we re-define “work” that is not restricted to an industrial-era definition of the “employer-employee/master-servant” relationship?

4. What do we need to do to ensure that our education system is directed towards broader outcomes (rather than paper-based qualifications in pursuit of a job) that empower students to be more resilient and more adaptive, to help them make informed decisions about their career choices, to help them navigate lifelong learning pathways, and to help them find their own purpose?

5. Do we need new ways to evaluate and reward “work” contribution that reflect economic, scientific, societal, environmental, community, research, policy, cultural, technical, artistic, academic, etc. outcomes?

* Acknowledgment: Some of the ideas in this blog were canvassed during an on-line workshop I facilitated last year on behalf of Re-Imagi, titled “How do we find Purpose in Work?”. For further information on how you can access these and other ideas, please contact me at: rory@re-imagi.co

Next week: Designing The Future Workplace