Bringing Back Banter

Last week I watched “The Trip To Spain”, the latest in the “Trip” franchise. For anyone who has not yet seen these films (or the TV series from which they are compiled), the narratives revolve around a pair of actors playing fictional versions of themselves, as they embark on road trips to sample some of the best restaurants, hotels and historic locations. The semi-improvised dialogue between the two main characters is classic banter – as in “the playful and friendly exchange of teasing remarks”.

The gentle art of banter is at the heart of “The Trip To Spain” – Image sourced from British Comedy Guide

Sadly, just as the public discourse has become much uglier in recent years (despite various calls for a “kinder, gentler politics”), it seems there is something of a backlash against neo-banter (or “bantaaaaaaah!” as some would have it). Maybe there is a connection?

If our political leaders cannot engage in the natural ebb and flow of an ideological discussion shaped as informed conversation (rather than embarking on all-out verbal warfare), then don’t be surprised if this same boorish, belligerent and bellicose tone is adopted by protagonists in social media, op-eds and parliamentary “debates”. (And I am not defending anyone who uses the term “banter” to excuse/explain the inappropriate.)

Banter can help to explore hypothetical scenarios, suggest alternative opinions, and take a discussion in different directions, without participants being hidebound by the first thing they say. Plus, if done really well, it allows us to see the ultimate absurdity of untenable positions.

Next week: Supersense – Festival of the Ecstatic

Long live experts….

Along with “liberal, metropolitan elite”, the word “expert” appears to have become a pejorative term. Well, I say, “long live experts”. Without experts, we’d still believe that the world was flat, that the sun orbited the Earth, and that the universe is only 6,000 years old…. Without experts we’d also have no knowledge of ancient civilisations, no comprehension of languages, no awareness of scientific phenomena, no understanding of how to prevent and cure disease, no patience to engage with the human condition, and no appreciation of nature, technology, art or culture.

Just a couple of “experts”: Marie Curie and Albert Einstein

I read recently that, “Marie Curie and Albert Einstein went hiking together in the Alps”. At first, I thought this was some fantastic fiction, because I wasn’t aware they knew each other, let alone went walking. But the line didn’t come from a David Mitchell novel – I came across it in Alex Soojung-Kim Pang’s recent book, “Rest: Why You Get More Done When You Work Less”. It reveals something of the way knowledge seeks out knowledge – how great minds (experts) often get together to collaborate, or just hang out and shoot the breeze. The expert mind is also an inquiring and creative mind, open to new ideas and influences, unlike the hermetically sealed personalities of many of our current leaders.

(According to Pang, regular physical activity, creative pursuits, technical mastery and planned rest are among the key traits for many experts – so much for the 35-hour working week, 9-5 routines, and a couple of weeks’ annual vacation….)

Maybe one reason for this increased disregard for experts is that many experts tend to make us feel uncomfortable (about our own ignorance?), challenge our assumptions (and highlight our personal prejudices?), and tell us things we’d rather not think about (even if it’s probably for our own good?).

And while I accept some experts can be patronising, aloof and even smug, there is a breed of experts, like Demis Hassabis, who are brilliant communicators. They can explain complex ideas in straightforward terms, and through their enthusiasm and natural curiosity, they show how they continue to wonder about what they don’t yet know. They also manage to bring us on their journey into difficult topics and uncharted areas, such as artificial intelligence.

Finally, and in the interest of balance, the only thing worse than a recognised expert is a self-appointed one…. (A theme Laurie Anderson explored in her satirical work, “Only an Expert”.)

Next week: SportsTech and Wearables Pitch Night at Startup Victoria

The Future of Work = Creativity + Autonomy

Some recent research from Indiana University suggests that, in the right circumstances, a stressful job is actually good for you. Assuming that you have a sufficient degree of control over your own work, and enough individual input on decision-making and problem-solving, you may actually live longer. In short, the future of work and the key to a healthy working life is both creativity and autonomy.

Time to re-think what the “dignity of labour” means? (Image sourced from Discogs)

Context

In a previous blog, I discussed the changing economic relationship we have with work, in which I re-framed what we mean by “employment”, what it means to be “employed”, and what the new era of work might look like, based on a world of “suppliers” (who offer their services as independent contractors) and “clients” (who expect access to just-in-time and on-demand resources).

The expanding “gig economy” reinforces the expectation that by 2020, average job tenure will be 3 years, and around 40% of the workforce will be employed on a casual basis (part-time, temporary, contractor, freelance, consultant, etc.). The proliferation of two-sided marketplaces such as Uber, Foodera, Freelancer, Upwork, Sidekicker, 99designs, Envato and Fiverr is evidence of this shift from employee to supplier.

We are also seeing a trend for hiring platforms that connect teams with technical and business skills to specific project requirements posted by hiring companies. Many businesses understand the value of people pursuing and managing “portfolio careers”, because companies may prefer to access this just-in-time expertise as and when they need it, rather than take on permanent overheads. But there are still challenges around access and “discovery”: who’s available, which projects, defining roles, agreeing a price, etc.

Contribution

Meanwhile, employers and HR managers are re-assessing how to evaluate employee contribution. It’s not simply a matter of how “hard” you work (e.g., the hours you put in, or the sales you make). Companies want to know what else you can do for them: how well you collaborate, whether you know how to ask for help, and whether you are willing to bring all your experience, as well as who and what you know, to the role. (As a case in point, when Etsy’s COO, Linda Kozlowski, was recently asked about her own hiring criteria, she emphasised the importance of critical thinking, and the ability for new hires to turn analysis into actionable solutions.)

In another blog on purpose, I noted that finding meaningful work all boils down to connecting with our values and interests, and finding a balance between what motivates us, what rewards us, what we can contribute, and what people want from us. As I wrote at the time, how do we manage our career path, when our purpose and our needs will change over time? In short, the future of work will be about creating our own career opportunities, in line with our values, purpose and requirements.*

Compensation

From an economic and social policy perspective, no debate about the future of work can ignore the dual paradoxes:

  1. We will need to have longer careers (as life expectancy increases), but there will be fewer “traditional” jobs to go round;
  2. A mismatch between workforce supply and in-demand skills (plus growing automation) will erode “traditional” wage structures in the jobs that do remain.

Politicians, economists and academics have to devise strategies and theories that support social stability based on aspirational employment targets, while recognising the shifting market conditions and the changing technological environment. And, of course, for trade unions, more freelance/independent workers and cheaper hourly rates undermine their own business model of organised membership, centralised industrial awards, enterprise bargaining and the residual threat of industrial action when protective/restrictive practices come under challenge.

Which is why there needs to be a more serious debate about ideas such as the Universal Basic Income, and grants to help people to start their own business. On the Universal Basic Income (UBI), I was struck by a recent interview with everyone’s favourite polymath, Brian Eno. He supports the UBI because:

“…we’re now looking towards a future where there will be less and less employment, inevitably automation is going to make it so there simply aren’t jobs. But that’s alright as long as we accept the productivity that the automations are producing feeds back to people ….. [The] universal basic income, which is basically saying we pay people to be alive – it makes perfect sense to me.”

If you think that intellectuals like Eno are “part of the problem”, then union leaders like Tim Ayres (who advocates the “start-up grant”) actually have more in common with Margaret Thatcher than perhaps they realise. It was Thatcher’s government that came up with the original Enterprise Allowance Scheme which, despite its flaws, can be credited with launching the careers of many successful entrepreneurs in the 1980s. Such schemes can also help the workforce transition from employment in “old” heavy industries to opportunities in the growing service sectors and the emerging, technology-driven enterprises of today.

Creativity

I am increasingly of the opinion that, whatever our chosen or preferred career path, it is essential to engage with our creative outlets: in part to provide a counterbalance to work/financial/external demands and obligations; in part to explore alternative ideas, find linkages between our other interests, and to connect with new and emerging technology.

In discussing his support for the UBI, Eno points to our need for creativity:

“For instance, in prisons, if you give people the chance to actually make something …. you say to them ‘make a picture, try it out, do whatever’ – and the thrill that somebody gets to find that they can actually do something autonomously, not do something that somebody else told them to do, well, in the future we’re all going to be able to need those kind of skills. Apart from the fact that simply rehearsing yourself in creativity is a good idea, remaining creative and being able to go to a situation where you’re not told what to do and to find out how to deal with it, this should be the basic human skill that we are educating people towards and what we’re doing is constantly stopping them from learning.”

I’ve written recently about the importance of the maker culture, and previously commented on the value of the arts and the contribution that they make to society. There is a lot of data on the economic benefits of both the arts and the creative industries, and their contribution to GDP. Some commentators have even argued that art and culture contribute more to the economy than just jobs and growth.

Even a robust economy such as Singapore recognises the need to teach children greater creativity through the ability to process information, not simply regurgitate facts. It’s not because we might need more artists (although that may not be a bad thing!), but because of the need for both critical AND creative thinking to complement the demand for new technical skills – to prepare students for the new world of work, to foster innovation, to engage with careers in science and technology and to be more resilient and adaptive to a changing job market.

Conclusions

As part of this ongoing topic, some of the questions that I hope to explore in coming months include:

1. In the debate on the “Future of Work”, is it still relevant to track “employment” only in statistical terms (jobs created/lost, unemployment rates, number of hours worked, etc.)?

2. Is “job” itself an antiquated economic unit of measure (based on a 9-5, 5-day working week, hierarchical and centralised organisational models, and highly directed work practices and structures)?

3. How do we re-define “work” that is not restricted to an industrial-era definition of the “employer-employee/master-servant” relationship?

4. What do we need to do to ensure that our education system is directed towards broader outcomes (rather than paper-based qualifications in pursuit of a job) that empower students to be more resilient and more adaptive, to help them make informed decisions about their career choices, to help them navigate lifelong learning pathways, and to help them find their own purpose?

5. Do we need new ways to evaluate and reward “work” contribution that reflect economic, scientific, societal, environmental, community, research, policy, cultural, technical, artistic, academic, etc. outcomes?

* Acknowledgment: Some of the ideas in this blog were canvassed during an on-line workshop I facilitated last year on behalf of Re-Imagi, titled “How do we find Purpose in Work?”. For further information on how you can access these and other ideas, please contact me at: rory@re-imagi.co

Next week: Designing The Future Workplace

What might we expect in 2017?

On a number of measures, 2016 was a watershed year: unexpected election results, fractious geopolitics, numerous celebrity deaths, too many lacklustre blockbuster films, spectacular sporting upsets (and regular doping scandals), and vinyl record sales outpacing revenue from digital downloads and streaming services. What might we expect from 2017?

Detail from “The Passing Winter” by Yayoi Kusama [Photo by Rory Manchee]

Rather than using a crystal ball to make specific predictions or forecasts, here are some of the key themes that I think will feature in 2017:

First, the nature of public discourse will come under increased scrutiny. In the era of “post-truth”, fake news and searing/scathing social commentary, the need for an objective, fact-based and balanced media will be paramount. In addition, the role of op-ed pieces to reflect our enlightened liberal traditions and the need for public forums to represent our pluralist society will be critical to maintaining a sense of fairness, openness, and just plain decency in public dialogue.

Second, a recurring topic of public conversation among economists, politicians, sociologists, HR managers, career advisors, bureaucrats, union leaders, technologists, educators and social commentators will be the future of work. From the impact of automation on jobs, to the notion of a universal basic income; from the growth of the gig economy, to finding purpose through the work we do. How we find, engage with and navigate lifelong employment is now as important as, say, choosing high school electives, making specific career choices or updating professional qualifications.

Third, the ongoing focus on digital technology will revolve around the following:

  • The Internet of Things – based on a current exhibit at London’s Design Museum, the main use cases for IoT will continue to be wearable devices (especially for personal health monitoring), agriculture, transport and household connectivity.
  • Fintech – if a primary role of the internet has been content dissemination, search and discovery, then the deployment of Blockchain solutions, the growth in crypto-currencies, the use of P2P platforms and the evolution of robo-advice are giving rise to the Internet of Money.
  • Artificial Intelligence – we are seeing a broader range of AI applications, particularly around robotics, predictive analytics and sensory/environmental monitoring. The next phase of AI will learn to anticipate (and in some cases moderate) human behaviour, and provide more efficacious decision-making and support mechanisms for resource planning and management.
  • Virtual Reality/Augmented Reality – despite being increasingly visible in industries like gaming, industrial design, architecture and even tourism, it can feel like VR/AR is still looking for some dedicated use cases. One sector that is expected to benefit from these emerging technologies is education, so I would expect to see some interesting solutions for interactive learning, curriculum delivery and student assessment.

Fourth, and somewhat at odds with the above, the current enthusiasm for the maker culture is also leading to a growing interest in products that represent craft, artisan and hand-made fabrication techniques and traditions. Custom-made, bespoke, personalized and unique goods are in vogue – perhaps as a reaction to the “perfection” of digital replication and mass-production?

Fifth, with the importance of startups in driving innovation and providing sources of new economic growth, equity crowdfunding will certainly need to come of age. Thus far, this method of fund-raising has been more suited (and in many cases, is legally restricted) to physical products, entertainment assets, and creative projects. The delicate balance between retail investor protection and entrepreneurial access to funding means that this method of startup funding is constrained (by volume, amounts and investor participation), and contrary to stated intentions, can involve disproportionate set-up costs and administration. But its time will come.

Finally, as shareholder activism and triple bottom line reporting become more prevalent (combined with greater regulatory and compliance obligations), I can see that corporate governance principles are increasingly placing company directors in the role of quasi-custodians of a company’s assets and quasi-trustees of stakeholder interests. It feels like boards are now expected to be the conscience of the company – something that will require directors to have greater regard to the impact of their decisions, not just whether those decisions are permitted, correct or good.

One thing I can predict for 2017 is that Content in Context will continue to comment on these topics, and explore their implications, especially as I encounter them through the projects I work on and the clients I consult to.

Next week: The FF17 Semi Finals in Melbourne