Blockchain and the Limits of Trust

Last week I was privileged to be a guest on This Is Imminent, a new form of Web TV hosted by Simon Waller. The given topic was Blockchain and the Limitations of Trust.

For a replay of the Web TV event go here

As regular readers will know, I have been immersed in the world of Blockchain, cryptocurrency and digital assets for over four years. While I am not a technologist, I think I know enough to understand some of the potential impact and implications of Blockchain for distributed networks, decentralisation, governance, disintermediation, digital disruption, programmable money, tokenisation and, for the purposes of last week’s discussion, human trust.

The point of the discussion was to explore how Blockchain might provide a solution to the absence of trust we currently experience in many areas of our daily lives. Even better, how Blockchain could enhance or expand our existing trusted relationships, especially across remote networks. The complete event can be viewed here, but be warned that it’s not a technical discussion (and wasn’t intended to be), although Simon did find a very amusing video that tries to explain Blockchain with the aid of Spam (the luncheon meat, not the unwanted e-mail).

At a time when our trust in public institutions is being tested all the time, it’s more important than ever to understand the nature of trust (especially trust placed in any new technology), and to navigate how we establish, build and maintain trust in increasingly peer-to-peer, fractured, fragmented, open and remote networks.

To frame the conversation, I think it’s important to lay down a few guiding principles.

First, a network is only as strong as its weakest point of connection.

Second, there are three main components to maintaining the integrity of a “trusted” network:

  • how are network participants verified?
  • how secure is the network against malicious actors?
  • what are the penalties or sanctions for breaking that trust?

Third, “trust” in the context of networks is a proxy for “risk” – how much or how far are we willing to trust a network, and everyone connected to it?

For example, if you and I know each other personally and I trust you as a friend, colleague or acquaintance, does that mean I should automatically trust everyone else you know? (Probably not.) Equally, should I trust you just because you know all the same people as me? (Again, probably not.) Each relationship (or connection) in that type of network has to be evaluated on its own merits. Although we can do a certain amount of due diligence and triangulation, as each network becomes larger, it’s increasingly difficult for us to “know” each and every connection.

Let’s suppose that the bar for verification is set appropriately high, that the network is maintained securely, and that there are adequate sanctions for abusing the network’s trust – then it is possible for participants to “know” each other, because the network has established the minimum degree of trust needed for it to be viable. Consequently, we might conclude that only trustworthy people would want to join a trust-based network where each transaction is observable and traceable (albeit, in the case of Blockchain, pseudonymously).

When it comes to trust and risk assessment, it still amazes me how much personal (and private) information people are willing to share on social media platforms, just to get a “free” account. We seem very comfortable placing an inordinate amount of trust in these highly centralised services, both to protect our data and to manage our relationships – which to me is something of an unfair bargain.

Statistically, we know we are more likely to be killed in a car accident than in a plane crash – but we attach far more risk to flying than to driving. Whenever we take our vehicle out onto the road, we automatically assume that every other driver is licensed, insured and competent to drive, and that their car is taxed and roadworthy. We cannot verify this information ourselves, so we have to trust both the centralised systems (that regulate drivers, cars and roads) and each and every individual driver – but we know there are so many weak points in that structure.

Blockchain has the ability to verify each and every participant and transaction on the network, enabling all users to trust in the security and reliability of network transactions. In addition, once verified, participants do not have to keep providing verification each time they want to access the network, because the network “knows” enough about each participant that it can create a mutual level of trust without everyone having to have direct knowledge of each other.
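
To make that idea a little more concrete, here is a toy sketch in Python – purely my own illustration, not a description of any actual blockchain protocol – in which identity is checked once at onboarding, and thereafter the network relies on membership plus a hash-linked, auditable record (the participants and amounts are invented):

```python
# Toy, stdlib-only sketch (my illustration, not any real blockchain protocol).
import hashlib
import json

verified_participants = {"alice", "bob"}       # identity verified once, at onboarding
chain = [{"tx": "genesis", "prev": "0" * 64}]  # append-only, traceable history

def block_hash(block: dict) -> str:
    """Deterministic hash of an entry, used to link each record to the one before it."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_transaction(sender: str, receiver: str, amount: int) -> bool:
    """Accept a transaction from an already-verified participant: no fresh paperwork,
    just membership plus a tamper-evident, observable record."""
    if sender not in verified_participants:
        return False
    chain.append({"tx": {"from": sender, "to": receiver, "amount": amount},
                  "prev": block_hash(chain[-1])})
    return True

add_transaction("alice", "bob", 5)
# Anyone can audit the whole history without re-identifying the participants:
print(all(chain[i]["prev"] == block_hash(chain[i - 1]) for i in range(1, len(chain))))  # True
```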

In the asymmetric relationships we have created with centralised platforms such as social media, we find ourselves in a very binary situation – once we have provided our e-mail address, date of birth, gender and whatever else is required, we cannot be confident that the platform “forgets” that information when it no longer needs it. It’s a case of “all or nothing” as the price of network entry. If, instead, we operated under a system of self-sovereign digital identity (which technology like Blockchain can facilitate), I could be sure that such platforms only have access to the specific personal data points I am willing to share with them, for the specific purpose I determine, and only for as long as I decide.
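
As a thought experiment, the sketch below shows the basic mechanics of selective disclosure: each claim in a credential is salted and hashed, so an issuer could vouch for the whole set, while the holder reveals only the values they choose. (This is a simplified toy in Python, not SD-JWT or any particular self-sovereign identity standard, and the sample data is invented.)

```python
# Toy selective-disclosure sketch (stdlib only; not any real SSI standard).
import hashlib
import os

def commit(claims: dict) -> dict:
    """Salted hash per claim: an issuer could sign these digests without
    constraining which values the holder later chooses to reveal."""
    salts = {k: os.urandom(16).hex() for k in claims}
    digests = {k: hashlib.sha256((salts[k] + str(v)).encode()).hexdigest()
               for k, v in claims.items()}
    return {"salts": salts, "digests": digests}

def disclose(claims: dict, commitment: dict, fields: list) -> dict:
    """The holder reveals only the chosen fields, plus the salts needed to check them."""
    return {k: {"value": claims[k], "salt": commitment["salts"][k]} for k in fields}

def verify(disclosed: dict, digests: dict) -> bool:
    """The platform can check the revealed values against the digests, and nothing more."""
    return all(
        hashlib.sha256((d["salt"] + str(d["value"])).encode()).hexdigest() == digests[k]
        for k, d in disclosed.items()
    )

credential = {"email": "me@example.com", "date_of_birth": "1970-01-01", "gender": "X"}
commitment = commit(credential)
shared = disclose(credential, commitment, ["email"])   # share the e-mail, nothing else
print(verify(shared, commitment["digests"]))           # True; DOB and gender stay private
```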

Finally, taking control of, and responsibility for, managing our own personal information (such as the private key to a digital wallet) is perhaps a step too far for some people. They may not feel they can trust themselves with this data, so they would rather delegate that responsibility to centralised systems.

Next week: Always Look On The Bright Side…

 

Blipverts vs the Attention Economy

There’s a scene in Nicolas Roeg’s 1976 film, “The Man Who Fell To Earth”, where David Bowie’s character sits watching a bank of TV screens, each tuned to a different station. At the same time he is channel surfing – either because his alien powers allow him to absorb multiple, simultaneous inputs, or because his experience of ennui on Earth leads him to seek more and more stimulus. Obviously a metaphor for the attention economy, long before such a term existed.

Watching the alien watching us… Image sourced from Flickr

At the time in the UK, we only had three TV channels to choose from, so the notion of 12 or more seemed exotic, even otherworldly. And of those three channels, only one carried advertising. Much the same situation existed in British radio, with only one or two commercial networks alongside the dominant BBC. So we had relatively little exposure to adverts, brand sponsorship or paid content in our broadcast media. (Mind you, this was still the era when tobacco companies could plaster their logos all over sporting events…)

For all its limitations, there were several virtues to this model. First, advertising airtime was at a premium (thanks to the broadcast content ratios), and ad spend was concentrated – so adverts really had to grab your attention. (Is it any wonder that so many successful film directors cut their teeth on commercials?) Second, this built-in monopoly often meant bigger TV production budgets, more variety of content and better quality programming on free-to-air networks than we typically see today with the over-reliance on so-called reality TV. Third, with less viewing choice, there was a greater shared experience among audiences – and more communal connection because we could talk about similar things.

Then along came cable and satellite networks, bringing more choice (and more advertising), but not necessarily better quality content. In fact, with TV advertising budgets spread more thinly, it’s not surprising that programming suffered. Networks had to compete for our attention, and they funded this by bombarding us with more ads and more paid content. (And this is before we even get to the internet age and time-shift, streaming and multicast platforms…)

Despite the increased viewing choices, broadcasting became narrow-casting – smaller and more fractured viewership, with programming appealing to niche audiences. Meanwhile, in the mid-80s (and soon after the launch of MTV), “Max Headroom” is credited with coining the term “blipvert”, meaning a very, very short (almost subliminal) television commercial. Although designed as a narrative device in the Max Headroom story, the blipvert can be seen as either a test of creativity (how to get your message across in minimal time); or a subversive propaganda technique (nefarious elements trying to sabotage your thinking through subtle suggestion and infiltration).

Which is essentially where we are in the attention economy. Audiences are increasingly disparate, and the battle for eyeballs (and minds) is being fought out across multiple devices, multiple screens and multiple formats. In our search for more stimulation, and unless we are willing to pay for premium services and/or an ad-free experience, we have to endure more ads popping up during our YouTube viewing, Spotify streaming or internet browsing. As a result, brands are trying to grab our attention more frequently, in shorter yet more intensive bursts. (Even Words With Friends is offering in-game tokens in return for watching sponsored content.)

Some consumers are responding with ad-blockers, by dropping social media altogether, or by demanding payment for their valuable time. I think we are generally over the notion of giving away our personal data in return for some “free” services – the price in terms of intrusions upon our privacy is no longer worth paying. So, brands are having to try harder to capture our attention, and they need to personalise their message to make it seem relevant and worthy of our time – provided we are willing to let them know enough about our preferences, location, demographics, etc. so that they can serve up relevant and engaging content to each and every “audience of one”. Brands also want proof that the ads they have paid for have been seen by the people they intended to reach.

This delicate trade-off (between privacy, personalisation and payment) is one reason why the attention economy is seen as a prime use case for Blockchain and cryptocurrency:

  • consumers can retain anonymity, while still sharing selected personal information (which they own and control) with whom they wish, when they wish and for as long as they wish – and they can even get paid to access relevant content;
  • brands can receive confirmation that the personalised content they have paid for has been consumed by the people they intended to see it; and
  • distributed ledgers can maintain a record of account, and send/receive payments via smart contracts and digital wallets, when and where the relevant transactions have taken place.
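
For the technically curious, here is a highly simplified sketch of that settlement flow, written in plain Python standing in for an actual smart contract language (the campaign, wallet names and amounts are all invented):

```python
# Simplified sketch of the attention-economy settlement flow described above.
# Plain Python standing in for a smart contract; names and amounts are invented.
from dataclasses import dataclass, field

@dataclass
class AttentionCampaign:
    advertiser: str
    budget: float                                 # escrowed by the advertiser up front
    rate_per_view: float
    ledger: list = field(default_factory=list)    # auditable record of account

    def record_view(self, viewer_wallet: str, attestation_ok: bool) -> bool:
        """Pay the viewer only if the view is attested and funds remain in escrow."""
        if not attestation_ok or self.budget < self.rate_per_view:
            return False
        self.budget -= self.rate_per_view
        self.ledger.append({"paid_to": viewer_wallet, "amount": self.rate_per_view})
        return True

campaign = AttentionCampaign(advertiser="brand_x", budget=1.00, rate_per_view=0.05)
campaign.record_view("viewer_wallet_123", attestation_ok=True)
print(campaign.ledger)   # the brand gets proof of delivery, the viewer gets paid
```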

Next week: Jump-cut videos vs Slow TV


Personal data and digital identity – whose ID is it anyway?

In an earlier blog on privacy in the era of Big Data and Social Media, I explored how our “analog identities” are increasingly embedded in our digital profiles. In particular, the boundaries between personal/private information and public/open data are becoming so blurred that we risk losing sight of what individual, legal and commercial rights we have to protect or exploit our own identity. No wonder that there is so much interest in what blockchain solutions, cyber-security tools and distributed ledger technology can do to establish, manage and protect our digital ID – and to re-balance the near-Faustian pact that the illusion of “free” social media has created.

Exchanging Keys in “Ghostbusters” (“I am Vinz Clortho the Keymaster of Gozer”)

It’s over 20 years since “The Net” was released, and more than 30 since the original “Ghostbusters” film came out. Why do I mention these movies? First, they both pre-date the ubiquity of the internet, so it’s interesting to look back on earlier, pre-social media times. Second, they both reference a “Gatekeeper” – the former in relation to some cyber-security software being hijacked by the mysterious Praetorian organisation; the latter in relation to the “Keymaster”, the physical embodiment or host of the key to unleash the wrath of Gozer upon the Earth. Finally, they both provide a glimpse of what a totally connected world might look like – welcome to the Internet of Things!

Cultural references aside, the use of private and public keys, digital wallets and payment gateways to transact with digital currencies underpins the use of Bitcoin and other alt coins. In addition, blockchain solutions and cyber-security technologies are being deployed to streamline and to secure the transfer of data across both peer-to-peer/decentralised networks, and public/private, permissioned/permissionless blockchain and distributed ledger platforms. Sectors such as banking and finance, government services, the health industry, insurance and supply chain management are all developing proofs of concept to remove friction but increase security throughout their operations.
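
As a rough illustration of those key mechanics, the sketch below (in Python, using the third-party ecdsa package and the secp256k1 curve that Bitcoin happens to use) generates a key pair, derives a toy wallet “address” from a hash of the public key, and signs a payment message. It is not a working wallet – real address formats and transaction structures are considerably more involved:

```python
# Rough sketch of the private/public key mechanics behind a digital wallet.
# Requires the third-party 'ecdsa' package (pip install ecdsa); not a working wallet.
import hashlib
from ecdsa import SECP256k1, SigningKey

# The private key never leaves the wallet; the public key (and an address derived
# from its hash) is what the rest of the network sees.
private_key = SigningKey.generate(curve=SECP256k1)
public_key = private_key.get_verifying_key()
address = hashlib.sha256(public_key.to_string()).hexdigest()[:20]  # toy address, not Base58Check

# "Spending" means signing a payment message with the private key; anyone holding
# the public key can check the signature without ever learning the private key.
payment = b"pay 0.01 coins to merchant-address"
signature = private_key.sign(payment)
print(public_key.verify(signature, payment))   # True
print("wallet address:", address)
```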

One of the (false) expectations that social media has created is that by giving away our own personal data and by sharing our own content, we will get something in return – namely, a “free” Facebook account or “free” access to Google’s search engine etc. What happens, of course, is that these tech companies sell advertising and other services by leveraging our use of and engagement with their platforms. As mere users we have few if any rights to decide how our data is being used, or what third-party content we will be subjected to. That might seem OK, in return for “free” social media, but none of the huge advertising revenues are directly shared with us as ordinary end consumers.

But just as Google and Facebook are facing demands to pay for news content, some tech companies are now trying to democratise our relationships with social media, mobile content and financial services, by giving end users financial and other benefits in return for sharing their data and/or being willing to give selected advertisers and content owners access to their personal screens.

Before looking at some interesting examples of these new businesses, here’s an anecdote based on my recent experience:

I had to contact Facebook to ask them to take down my late father’s account. Despite sending Facebook a scanned copy of the order of service from my father’s funeral, and references to two newspaper articles, Facebook insisted on seeing a copy of my father’s death certificate.

Facebook assumes that only close relatives or authorised representatives would have access to the certificate, but in theory anyone can order a copy of a death certificate from the UK’s General Register Office. Further, the copy of the certificate clearly states that “WARNING: A CERTIFICATE IS NOT EVIDENCE OF IDENTITY”. Yet, it appears that Facebook was asking to see the certificate as a way of establishing my own identity.

(Side note: A few years ago, I was doing some work for the publishers of Who’s Who Australia, which is a leading source of biographical data on people prominent in public life – politics, business, the arts, academia, etc. In talking to prospective clients, especially those who have to maintain their own directories of members and alumni, it was clear that “deceased persons” data can be very valuable to keep their records up to date. It can also be helpful in preventing fraud and other deception. Perhaps Facebook needs to think about its role as a “document of record”?)

So, here are some of the new tech businesses that are helping consumers to take control of their own personal data, and to derive some direct benefit from sharing their personal profile and/or their screen time:

  1. Unlockd: this Australian software company enables customers to earn rewards by allowing advertisers and content owners “access” to their mobile device (such as streaming videos from MTV).
  2. SPHRE: this international blockchain company is building digital platforms (such as Air) that will empower consumers to create and manage their own digital ID, then be rewarded for using this ID for online and mobile transactions.
  3. Secco: this UK-based challenger bank is part of a trend for reputation-based solutions (e.g., personal credit scores based on your social media standing), that uses Aura tokens as a form of peer-to-peer or barter currency, within a “social-economic community”.

Linked to these initiatives are increased concerns about identity theft, cyber-security and safety, online trust, digital certification and verification, and user confidence. Anything that places more power and control in the hands of end users as to how, when and by whom their personal data can be used has to be welcome.

Declaration of interest: through my work at Brave New Coin, a FinTech startup active in blockchain and digital assets, I am part of the team working with SPHRE and the Air project. However, all comments here are my own.

Next week: Investor pitch night at the London Startup Leadership Program

Personal vs Public: Rethinking Privacy

An incident I recently witnessed in my neighbourhood has caused me to rethink how we should be defining “privacy”. Data protection is one thing, but when our privacy can be compromised via the direct connection between the digital and analog worlds, all the cyber-security in the world doesn’t protect us against unwanted nuisance, intrusion or even invasion of our personal space.

Press photographers with cameras

Scenario

As I was walking along the street, I saw another pedestrian stop outside a house, and from the pavement, use her smart phone to take a photograph through the open bedroom window. Regardless of who was inside, and irrespective of what they were doing (assuming nothing illegal was occurring), I would consider this to be an invasion of privacy.

For example, it would be very easy to share the picture via social media, along with date and location data. From there, it could be possible to search land registries and other public records to ascertain the identity of the owners and/or occupants. And with a little more effort, you might have enough information to stalk or even cyber-bully them.

Privacy Law

Photographing people on private property (e.g., in their home) from public property (e.g., on the street outside) is not an offence, although photographers must not cause a nuisance or interfere with the occupants’ right of quiet enjoyment. Our current privacy laws largely do not cover this kind of intrusion (unless it relates to disclosure of personal data by a regulated entity). Even the rules about the use of drones are driven by safety rather than privacy concerns.

Since the late 1990s, and the advent of spam and internet hacking, there have been court decisions that update the law of trespass to include what could be defined as “digital trespass”, although some judges have since tried to limit such actions to instances where actual harm or damage has been inflicted on the plaintiff. (Interestingly, in Australia, an act of trespass does not have to be “intentional”, merely “negligent”.)

Apart from the economic and financial loss that can arise from internet fraud and identity theft, invasion of privacy via public disclosure of personal data could lead to personal embarrassment, damage to reputation or even ostracism. (In legal terms, emotional stress falls within “pain and suffering”.)

Data Protection Law

The Australian Privacy Principles contained within the 1988 Privacy Act apply to government agencies, private companies with annual turnover of $3m or more, and any organisations trading in personal data, dealing with credit information or providing health services. There are specific provisions relating to the use and misuse of government-derived identifiers such as medical records and tax file numbers.

The main purpose of the privacy legislation is to protect “sensitive” information, and to prevent such data being used unlawfully to identify specific individuals. At a minimum, this means keeping personal data such as dates of birth, financial records or hospital files in a secure format.

Some Practical Definitions

The following are not legal definitions, but hopefully offer a practical framework to understand how we might categorise such data, and manage our obligations towards it:

“Confidential”

Secret information that must not be disclosed to anyone unless there is a legal obligation or permission to do so. (There are also specific issues and exceptions relating to “classified information”, public interest matters, whistleblower protection and Freedom of Information requests.)

“Private”

Information which is not for public or general consumption, although the data itself may not be “confidential”. May still be subject to legal protection or rights, such as the right of adopted children to discover the identity of their birth parents, or the right of someone not to be identified as a lottery winner.

“Personal”

Data that relates to, or can specifically identify, a particular individual. An increasing issue for Big Data, because data that otherwise resides in separate locations can now be re-connected using triangulation techniques – scrape enough websites and drill down into enough databases, and you could probably find my shoe size.
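
To see how little it takes, here is a toy example (all data invented) of joining two individually “harmless” datasets on shared quasi-identifiers:

```python
# Toy illustration of triangulation: two datasets that look harmless in isolation
# can re-identify a person once joined on quasi-identifiers. All data is invented.
anonymised_purchases = [      # e.g. a scraped retail dataset: no names
    {"postcode": "3065", "birth_year": 1970, "shoe_size": 44},
    {"postcode": "3121", "birth_year": 1985, "shoe_size": 38},
]
public_profiles = [           # e.g. an open directory: names, but no shopping habits
    {"name": "Jane Citizen", "postcode": "3065", "birth_year": 1970},
]

def triangulate(released, profiles):
    """Join on postcode and birth year; if the combination is rare, the match is a person."""
    for r in released:
        for p in profiles:
            if (r["postcode"], r["birth_year"]) == (p["postcode"], p["birth_year"]):
                yield {**p, **r}   # a name is now linked to a shoe size

print(list(triangulate(anonymised_purchases, public_profiles)))
```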

“Public”

Anything that has been published, or is easily discoverable through open search or public database retrieval (but this does not, for example, include my past transactions on eBay unless I have chosen to disclose them to other users). My date of birth may be a matter of record, but unless you have authorised access to the relevant database or registry, you won’t be able to discover it and you certainly shouldn’t disclose it without my permission.

Copyright Law

One further dimension to the debate is copyright law – the ownership and related rights associated with any creative works, including photographs. All original content is copyright (except those works deemed to be in the “public domain”), and nearly all copyright vests with the person who created the work (unless they have legally assigned their copyright, or the material was created in the course of their employment).

In the scenario described above, the photographer would hold copyright in the picture they took. However, if the photograph included the image of an artwork or even a framed letter hanging on the wall, they could not reproduce the photograph without the permission of the person who owned the copyright in those original works. In some (limited) situations, a photograph of a building may be subject to the architect’s copyright in the design.

Curiosity is not enough justification to share

My personal view on all this is that, unless there is a compelling reason to make something public, protecting our personal privacy takes precedence over the need to post, share or upload pictures of other people in their private residence, especially any images taken without the occupants’ knowledge or permission.

Just to clarify, I’m not referring to surveillance and monitoring by the security services and law enforcement agencies, for which there are understandable motives (and appropriate safeguards).

I’m saying that if we showed a little more respect for each other’s personal space and privacy (particularly within our homes, not just in cyberspace), then we might show a little more consideration for our neighbours and fellow citizens.

Next week: It’s OK to say “I don’t know”