The Age of Responsibility

How old is old enough to know better? In particular, when can we be said to be responsible, and therefore accountable, for our actions? (All the recent political shenanigans around “collective accountability”, “departmental responsibility”, “creeping assumptions” and “ministerial conduct” have got me thinking….)

By the time we are 7 years of age, we should probably know the difference between “right and wrong”, at least in the context of home, school, culture and society – “don’t tell lies, don’t be rude to your elders, don’t steal, don’t hit your siblings…”

The age of criminal responsibility varies around the world, but the global average is between 10 and 14 years. In Australia, it is currently 10, but there are proposals to raise it to 14. While I can understand and appreciate some of the arguments in favour of the latter, I’m also aware that criminal intent (not just criminal acts or behaviour) can establish itself at, or even before, the age of 10 – I’m thinking of the James Bulger case in the UK in particular.

Legally, 18 is the coming of age – for entering into contracts, getting married (without the need for parental approval), gaining the right to vote, and being able to purchase alcohol and tobacco. But you can have sex and start driving a car from the age of 16.

As a society, we appear to be extending the age at which we become “responsible adults”. The concept of “adolescence” emerged in the 15th century to indicate a transition to adulthood; the notion of “childhood” appeared in the 17th century, mainly from a philosophical perspective; while “teenagers” are a mid-20th-century marketing phenomenon.

However, we now have evidence that our brains do not finish maturing until our third decade – so, cognitively, it could be argued we are not responsible for our actions or decisions until we are at least 25, because our judgment is not fully developed. In which case, it rather raises the question about our ability to procreate, drink, drive and vote….

Of course, many age-based demarcations are cultural and societal. Customary practices such as initiation ceremonies are still significant markers in a person’s development and their status in the community (including their rights and responsibilities).

Which brings me to social media – shouldn’t we also be responsible and held accountable for what we post, share, comment on or simply like on Facebook, Twitter etc.? Whether you believe in “nature” or “nurture”, some academics argue we always have a choice before we hit that button – so shouldn’t that be a guiding principle to live by?

Next week: Making Creeping Assumptions

Blockchain and the Limits of Trust

Last week I was privileged to be a guest on This Is Imminent, a new form of Web TV hosted by Simon Waller. The given topic was Blockchain and the Limitations of Trust.

For a replay of the Web TV event, go here

As regular readers will know, I have been immersed in the world of Blockchain, cryptocurrency and digital assets for over four years – and while I am not a technologist, I think I know enough to understand some of the potential impact and implications of Blockchain on distributed networks, decentralization, governance, disintermediation, digital disruption, programmable money, tokenization and, for the purposes of last week’s discussion, human trust.

The point of the discussion was to explore how Blockchain might provide a solution to the absence of trust we currently experience in many areas of our daily lives. Even better, how Blockchain could enhance or expand our existing trusted relationships, especially across remote networks. The complete event can be viewed here, but be warned that it’s not a technical discussion (and wasn’t intended to be), although Simon did find a very amusing video that tries to explain Blockchain with the aid of Spam (the luncheon meat, not the unwanted e-mail).

At a time when our trust in public institutions is being tested all the time, it’s more important than ever to understand the nature of trust (especially trust placed in any new technology), and to navigate how we establish, build and maintain trust in increasingly peer-to-peer, fractured, fragmented, open and remote networks.

To frame the conversation, I think it’s important to lay down a few guiding principles.

First, a network is only as strong as its weakest point of connection.

Second, there are three main components to maintaining the integrity of a “trusted” network:

  • how are network participants verified?
  • how secure is the network against malicious actors?
  • what are the penalties or sanctions for breaking that trust?

Third, “trust” in the context of networks is a proxy for “risk” – how much or how far are we willing to trust a network, and everyone connected to it?

For example, if you and I know each other personally and I trust you as a friend, colleague or acquaintance, does that mean I should automatically trust everyone else you know? (Probably not.) Equally, should I trust you just because you know all the same people as me? (Again, probably not.) Each relationship (or connection) in that type of network has to be evaluated on its own merits. Although we can do a certain amount of due diligence and triangulation, as each network becomes larger, it’s increasingly difficult for us to “know” each and every connection.

Let’s suppose that the verification process is set appropriately high, that the network is maintained securely, and that there are adequate sanctions for abusing the network’s trust – then it is possible for each participant to “know” every other, because the network has created the minimum degree of trust needed for it to be viable. Consequently, we might conclude that only trustworthy people would want to join a network based on trust, where each transaction is observable and traceable (albeit, in the case of Blockchain, pseudonymously).
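By way of illustration only (this wasn’t part of the discussion, and the names, classes and thresholds below are purely hypothetical), here is a toy Python sketch of those three components working together: only verified participants are admitted, abuse is recorded, and repeat offenders are sanctioned by removal.

```python
# Toy sketch of a "trusted" network: verification, abuse detection, sanctions.
# Purely illustrative - not a real protocol or library.

from dataclasses import dataclass, field


@dataclass
class Participant:
    name: str
    verified: bool = False   # has this participant passed identity checks?
    violations: int = 0      # record of abusing the network's trust


@dataclass
class TrustedNetwork:
    max_violations: int = 1                       # hypothetical sanction threshold
    members: dict = field(default_factory=dict)   # name -> Participant

    def admit(self, p: Participant) -> bool:
        """Component 1: only verified participants may join."""
        if p.verified:
            self.members[p.name] = p
            return True
        return False

    def report_abuse(self, name: str) -> None:
        """Components 2 and 3: record malicious behaviour and apply sanctions."""
        p = self.members.get(name)
        if p is None:
            return
        p.violations += 1
        if p.violations > self.max_violations:
            del self.members[name]   # sanction: removal from the network

# Usage: the network is only as trustworthy as its weakest admitted member.
net = TrustedNetwork()
net.admit(Participant("alice", verified=True))
net.report_abuse("alice")
net.report_abuse("alice")         # second violation triggers removal
print("alice" in net.members)     # False
```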

When it comes to trust and risk assessment, it still amazes me the amount of personal (and private) information people are willing to share on social media platforms, just to get a “free” account. We seem to be very comfortable placing an inordinate amount of trust in these highly centralized services both to protect our data and to manage our relationships – which to me is something of an unfair bargain.

Statistically we know we are more likely to be killed in a car accident than in a plane crash – but we attach far more risk to flying than to driving. Whenever we take our vehicle out on to the road, we automatically assume that every other driver is licensed, insured, and competent to drive, and that their car is taxed and roadworthy. We cannot verify this information ourselves, so we have to trust in both the centralized systems (that regulate drivers, cars and roads), and in each and every individual driver – but we know there are so many weak points in that structure.

Blockchain has the ability to verify each and every participant and transaction on the network, enabling all users to trust in the security and reliability of network transactions. In addition, once verified, participants do not have to keep providing verification each time they want to access the network, because the network “knows” enough about each participant that it can create a mutual level of trust without everyone having to have direct knowledge of each other.

In the asymmetric relationships we have created with centralized platforms such as social media, we find ourselves in a very binary situation – once we have provided our e-mail address, date of birth, gender and whatever else is required, we cannot be confident that the platform “forgets” that information when it no longer needs it. It’s a case of “all or nothing” as the price of network entry. Whereas, if we operate under a system of self-sovereign digital identity (which technology like Blockchain can facilitate), then I can be sure that such platforms only have access to the specific personal data points I am willing to share with them, for the specific purpose I determine, and only for as long as I decide.
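To illustrate the contrast, here is a minimal sketch of that idea in Python (a hypothetical illustration of selective disclosure, not a real self-sovereign identity library or standard): the user holds their own attributes, and a platform only ever receives the specific fields the user has granted, for a stated purpose, until a chosen expiry.

```python
# Hypothetical sketch of selective disclosure under self-sovereign identity.
# The user keeps the data; platforms only see what they have been granted.

from datetime import datetime, timedelta


class SelfSovereignIdentity:
    def __init__(self, attributes: dict):
        self._attributes = attributes   # held by the user, not by any platform
        self._grants = {}               # platform -> (fields, purpose, expiry)

    def grant(self, platform: str, fields: list, purpose: str, days: int) -> None:
        """Allow a platform to see named fields, for a purpose, until expiry."""
        expiry = datetime.utcnow() + timedelta(days=days)
        self._grants[platform] = (set(fields), purpose, expiry)

    def revoke(self, platform: str) -> None:
        """Withdraw access at any time."""
        self._grants.pop(platform, None)

    def disclose(self, platform: str) -> dict:
        """Return only the fields this platform is currently allowed to see."""
        fields, purpose, expiry = self._grants.get(platform, (set(), None, None))
        if not fields or datetime.utcnow() > expiry:
            return {}
        return {k: v for k, v in self._attributes.items() if k in fields}

# Usage: the platform only ever receives the e-mail address, and only for 30 days.
me = SelfSovereignIdentity({"email": "me@example.com", "dob": "1990-01-01", "gender": "X"})
me.grant("some-social-platform", ["email"], purpose="account recovery", days=30)
print(me.disclose("some-social-platform"))   # {'email': 'me@example.com'}
```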

Finally, taking control of, and being responsible for managing, our own personal information (such as a private key for a digital wallet) is perhaps a step too far for some people. They might not have enough confidence in their own ability to safeguard this data, so they would rather delegate this responsibility to centralized systems.

Next week: Always Look On The Bright Side…

Who fact-checks the fact-checkers?

The recent stoush between POTUS and Twitter on fact-checking and his alleged use of violent invective has rekindled the debate on whether, and how, social media should be regulated. It’s a potential quagmire (especially the issue of free speech), but it also comes at a time when here in Australia, social media is fighting twin legal battles – on defamation and fees for news content.

First, the issue of fact-checking on social media. Public commentary was divided – some argued that fact-checking is a form of censorship, others posed the question “Quis custodiet ipsos custodes?” (who fact-checks the fact-checkers?), and still others suggested that fact-checking in this context is a form of public service: it ensures that political debate is well-informed, that obvious errors are corrected, and that blatant lies (untruths, falsehoods, fibs, deceptions, mis-statements, alternative facts….) are called out for what they are. Notably, in this case, the “fact” was not edited, but flagged with a warning to the audience. (In case anyone hadn’t noticed (or remembered), earlier this year Facebook announced that it would engage Reuters to provide certain fact-check services.) Given the current level of discourse in the political arena, in traditional and social media, and in the court of public opinion, I’m often reminded of an article I read many years ago in the China Daily, which said something to the effect that “it is important to separate the truth from the facts”.

Second, the NSW Court of Appeal recently ruled that media companies can be held responsible for defamatory comments posted under stories they publish on social media. While this specific ruling did not render Facebook liable for the defamatory posts (although like other content platforms, social media is subject to general defamation laws), it was clear that the media organisations are deemed to be “publishing” content on their social media pages. And even though they have no way of controlling or moderating the Facebook comments before they are made public, for these purposes, their Facebook pages are no different to their own websites.

Third, the Australian Government is going to force companies like Facebook and Google to pay for news content via revenue share from ad sales. The Federal Treasurer was quoted as saying, “It is only fair that the search engines and social media giants pay for the original news content that they use to drive traffic to their sites.” If Australia succeeds, this may set an uncomfortable precedent in other jurisdictions.

For me, much of the above debate goes to the heart of how to treat social media platforms – are they like traditional newspapers and broadcast media? Are they like non-fiction publishers? Are they communications services (like telcos)? Are they documents of record? The topic is not new – remember when Mark Zuckerberg declared that he wanted Facebook to be the “world’s newspaper”? Be careful what you wish for…

Next week: Fact v Fiction in Public Discourse

Blipverts vs the Attention Economy

There’s a scene in Nicolas Roeg’s 1976 film, “The Man Who Fell To Earth”, where David Bowie’s character sits watching a bank of TV screens, each tuned to a different station. At the same time he is channel surfing – either because his alien powers allow him to absorb multiple, simultaneous inputs, or because his experience of ennui on Earth leads him to seek more and more stimulus. Obviously a metaphor for the attention economy, long before such a term existed.

Watching the alien watching us… Image sourced from Flickr

At the time in the UK, we only had three TV channels to choose from, so the notion of 12 or more seemed exotic, even otherworldly. And of those three channels, only one carried advertising. Much the same situation existed in British radio, with only one or two commercial networks alongside the dominant BBC. So we had relatively little exposure to adverts, brand sponsorship or paid content in our broadcast media. (Mind you, this was still the era when tobacco companies could plaster their logos all over sporting events…)

For all its limitations, there were several virtues to this model. First, advertising airtime was at a premium (thanks to the broadcast content ratios), and ad spend was concentrated – so adverts really had to grab your attention. (Is it any wonder that so many successful film directors cut their teeth on commercials?) Second, this built-in monopoly often meant bigger TV production budgets, more variety of content and better quality programming on free-to-air networks than we typically see today with the over-reliance on so-called reality TV. Third, with less viewing choice, there was a greater shared experience among audiences – and more communal connection because we could talk about similar things.

Then along came cable and satellite networks, bringing more choice (and more advertising), but not necessarily better quality content. In fact, with TV advertising budgets spread more thinly, it’s not surprising that programming suffered. Networks had to compete for our attention, and they funded this by bombarding us with more ads and more paid content. (And this is before we even get to the internet age and time-shift, streaming and multicast platforms…)

Despite the increased viewing choices, broadcasting became narrow-casting – smaller and more fractured viewership, with programming appealing to niche audiences. Meanwhile, in the mid-80s (and soon after the launch of MTV), “Max Headroom” is credited with coining the term “blipvert”, meaning a very, very short (almost subliminal) television commercial. Although designed as a narrative device in the Max Headroom story, the blipvert can be seen as either a test of creativity (how to get your message across in minimal time), or a subversive propaganda technique (nefarious elements trying to sabotage your thinking through subtle suggestion and infiltration).

Which is essentially where we are in the attention economy. Audiences are increasingly disparate, and the battle for eyeballs (and minds) is being fought out across multiple devices, multiple screens, and multiple formats. In our search for more stimulation, and unless we are willing to pay for premium services and/or an ad-free experience, we are having to endure more ads that pop up during our YouTube viewing, Spotify streaming or internet browsing. As a result, brands are trying to grab our attention more frequently, in shorter yet more intensive bursts. (Even Words With Friends is offering in-game tokens in return for watching sponsored content.)

Some consumers are responding with ad-blockers, or by dropping their use of social media altogether; or they want payment for their valuable time. I think we are generally over the notion of giving away our personal data in return for some “free” services – the price in terms of intrusions upon our privacy is no longer worth paying. So, brands are having to try harder to capture our attention, and they need to personalise their message to make it seem relevant and worthy of our time – provided we are willing to let them know enough about our preferences, location, demographics, etc. so that they can serve up relevant and engaging content to each and every “audience of one”. And brands also want proof that the ads they have paid for have been seen by the people they intended to reach.

This delicate trade-off (between privacy, personalisation and payment) is one reason why the attention economy is seen as a prime use case for Blockchain and cryptocurrency:

  • consumers can retain anonymity, while still sharing selected personal information (which they own and control) with whom they wish, when they wish, for as long as they wish – and they can even get paid to access relevant content;
  • brands can receive confirmation that the personalised content they have paid for has been consumed by the people they intended to see it;
  • and distributed ledgers can maintain a record of account, and send/receive payments via smart contracts and digital wallets, when and where the relevant transactions have taken place.
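As a rough sketch of how that could hang together (my own simplified Python illustration, not a real smart-contract platform or any specific protocol), imagine a ledger where a brand escrows tokens, a pseudonymous wallet presents proof that an ad was actually watched, and the ledger releases the reward while recording the transaction for the brand:

```python
# Hypothetical sketch of an attention-economy ledger: escrowed ad budgets,
# proof-of-view rewards, and an append-only record of transactions.

from dataclasses import dataclass, field


@dataclass
class AttentionLedger:
    balances: dict = field(default_factory=dict)      # wallet/brand -> token balance
    transactions: list = field(default_factory=list)  # append-only record

    def escrow(self, brand: str, amount: int) -> None:
        """Brand locks up tokens to pay for verified views."""
        self.balances[brand] = self.balances.get(brand, 0) + amount

    def record_view(self, brand: str, viewer: str, reward: int, proof_of_view: bool) -> bool:
        """Release payment only when proof of attention is presented."""
        if not proof_of_view or self.balances.get(brand, 0) < reward:
            return False
        self.balances[brand] -= reward
        self.balances[viewer] = self.balances.get(viewer, 0) + reward
        self.transactions.append((brand, viewer, reward))  # the brand's confirmation
        return True

# Usage: the viewer stays pseudonymous (just a wallet ID) yet still gets paid.
ledger = AttentionLedger()
ledger.escrow("brand-x", 100)
ledger.record_view("brand-x", "wallet-abc123", reward=5, proof_of_view=True)
print(ledger.balances)   # {'brand-x': 95, 'wallet-abc123': 5}
```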

Next week: Jump-cut videos vs Slow TV