Digital Identity – Wallets are the key?

A few months ago, I wrote about trust and digital identity – the issue of who “owns” our identity, and why the concept of “self-sovereign digital identity” can help resolve problems of data security and data privacy.

The topic was aired at a recent presentation by FinTech advisor David Birch (hosted at Novatti) to an audience of Australian FinTech, Blockchain and identity experts.

David’s main thesis is that digital wallets will sit at the centre of the metaverse – linking web3 with digital assets and their owners. Wallets will not only be the “key” to transacting with digital assets (tokens), but proving “identity” will confirm “ownership” (or “control”) of wallets and their holdings.

The audience felt that in Australia, we face several challenges to the adoption of digital identity (and by extension, digital wallets):

1. Lack of common technical standards and lack of interoperability

2. Poor experience of government services (the nightmare that is myGov…)

3. Private sector complacency and the protected incumbency of oligopolies

4. Absence of incentives and overwhelming inertia (i.e., why move ahead of any government mandate?)

The example was given of a local company that has built digital identity solutions for consumer applications – but apparently, can’t attract any interest from local banks.

A logical conclusion from the discussion is that we will maintain multiple digital identities (profiles) and numerous digital wallets (applications), for different purposes. I don’t see a problem with this as long as individuals get to decide who, where, when and for how long third parties get to access our personal data, and for what specific purposes.

Next week: Defunct apps and tech projects

Trust in Digital IDs

Or: “Whose identity is it anyway?”

Over the past few years, there have been a significant number of serious data breaches among banks, utilities, telcos, insurers and public bodies. As a result, hackers have been able to access the confidential data and financial records of millions of customers, leading to ransomware demands, wide dissemination of private information, identity theft, and multiple phishing attempts and similar scams.

What most of these hacks reveal is the vulnerability of centralised systems as well as the unnecessary storage of personal data – making these single points of failure a target for such exploits. Worse, the banks and others seem to think they “own” this personal data once they have obtained it, as evidenced by the way they (mis)manage it.

I fully understand the need for KYC/AML, and the requirement to verify customers under the 100 Points of Identification system. However, once I have been verified, why does each bank, telco and utility company need to keep copies or records of my personal data on their systems? Under a common 100 Points verification process, shouldn’t we have a more efficient and less vulnerable system? If I have been verified by one bank in Australia, why can’t I be automatically verified by every other bank in Australia (e.g., if I wanted to open an account with them), or indeed any other company using the same 100 Points system?

Which is where the concept of Self-Sovereign Identity comes into play. This approach should mean that with the 100 Points system, even if initially I need to submit evidence of my driver’s license, passport or birth certificate, once I have been verified by the network I can “retrieve” my personal data (revoke the access permission), or specify with each party on the network how long they can hold my personal data, and for what specific purpose.

This way, each party on the network does not need to retain a copy of the original documents. Instead, my profile is captured as a digital ID that confirms who I am, and confirms that I have been verified by the network; it does not require me to keep disclosing my personal data to each party on the network. (There are providers of Digital ID solutions, but because they are centralised and unilateral, we end up with multiple and inconsistent Digital ID systems, which are just as vulnerable to the risk of a single point of failure…)
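To make the idea concrete, here is a minimal sketch of "verify once, attest forever": the verifying institution signs an attestation that the customer has passed the 100 Points check, and every other party stores (and re-checks) only that attestation, never copies of the underlying documents. All names, keys and field layouts are illustrative assumptions; a real self-sovereign identity stack would use asymmetric signatures and verifiable credentials, not a shared HMAC key.

```python
import hmac
import hashlib
import json

# Stand-in for the verifier's signing key. In reality this would be an
# asymmetric private key; HMAC is used here only to keep the toy self-contained.
VERIFIER_KEY = b"bank-a-signing-key"

def issue_attestation(customer_id: str) -> dict:
    """Bank A attests that the customer passed 100 Points verification."""
    claim = {"customer": customer_id, "verified": True, "scheme": "100-points"}
    payload = json.dumps(claim, sort_keys=True).encode()
    signature = hmac.new(VERIFIER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": signature}

def check_attestation(attestation: dict) -> bool:
    """A relying party re-checks the signature instead of re-collecting ID documents."""
    payload = json.dumps(attestation["claim"], sort_keys=True).encode()
    expected = hmac.new(VERIFIER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, attestation["signature"])

att = issue_attestation("alice")
assert check_attestation(att)       # any relying party can accept this
att["claim"]["verified"] = False    # any tampering with the claim...
assert not check_attestation(att)   # ...is detected, so no document copies are needed
```

The point of the sketch is the data flow, not the crypto: the relying party never sees the driver's licence or passport, only a checkable claim that verification happened.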

But of course, banks etc. insist that not only do they have to ask for 100 Points of ID each and every time I open an account, they are required to retain copies or digital versions of my personal data. Hence, we should not be surprised by the number of data hacks we keep experiencing.

The current approach to identity in banking, telcos and utilities is baffling. Just a few examples I can think of:

1. In trying to upgrade my current mobile phone plan with my existing provider, I had to re-submit personal information via a mobile app (and this is a telco that experienced a major hack last year, resulting in me having to apply for a new driver’s license). If I have already been verified, why the need to ask for my personal data again, and via a mobile app?

2. I’ve lived at my current address for more than 5 years. I still receive bank statements intended for the previous occupant. I have tried on numerous occasions to inform the bank that this person is no longer living here. I’ve used the standard “Return to Sender” method, and tried to contact the bank directly, but because I am not the named account addressee or authorised representative, they won’t talk to me. Fair enough. But the addressee is actually a self-managed superannuation fund. Given the fallout from the Banking Royal Commission, and the additional layers of verification, supervision and audit that apply to such funds, I’m surprised that this issue has not been picked up by the bank concerned. It’s very easy to look up the current registered address of an SMSF via the APRA website, if only the bank could be bothered to investigate why the statements keep getting returned.

3. I have been trying to remove the name of a former director as a signatory to a company bank account. The bank kept asking for various forms and “proof” that this signatory was no longer a director and no longer authorised to access the account. Even though I have done this (and had to pay for an accountant to sign a letter confirming the director has resigned their position), if the bank had bothered to look up the ASIC company register, they would see that this person was no longer a company officer. Meanwhile, the bank statements keep arriving addressed to the ex-director. Apparently, the bank’s own “systems” don’t talk to one another (a common refrain when trying to navigate legacy corporate behemoths).

In each of the above, the use of a Digital ID system would streamline the process for updating customer records, and reduce the risk of data vulnerabilities. But that requires effort on the part of the entities concerned – clearly, the current fines for data breaches and for misconduct in financial services are not enough.

Next week: AI vs IP  

The Age of Responsibility

How old is old enough to know better? In particular, when can we be said to be responsible, and therefore accountable, for our actions? (All the recent political shenanigans around “collective accountability”, “departmental responsibility”, “creeping assumptions” and “ministerial conduct” have got me thinking….)

By the time we are 7 years of age, we should probably know the difference between “right and wrong”, at least at home and school, and in our own culture and society – “don’t tell lies, don’t be rude to your elders, don’t steal, don’t hit your siblings…”

The age of criminal responsibility varies around the world, but the global average is between 10 and 14 years. In Australia, it is currently 10, but there are proposals to raise it to 14. While I can understand and appreciate some of the arguments in favour of the latter, I’m also aware that criminal intent (not just criminal acts or behaviour) can establish itself under the age of 10 – I’m thinking of the James Bulger case in the UK in particular.

Legally, 18 is the coming of age – for entering into contracts, getting married without the need for parental approval, voting, and purchasing alcohol and tobacco. But you can have sex, and start driving a car, from the age of 16.

As a society, we appear to be extending the age at which we become “responsible adults”. The concept of “adolescence” emerged in the 15th century, to indicate a transition to adulthood. The notion of “childhood” appeared in the 17th century, mainly from a philosophical perspective. “Teenagers”, meanwhile, are a mid-20th century marketing phenomenon.

However, we now have evidence that our brains do not finish maturing until our third decade – so cognitively, it could be argued we are not responsible for our actions or decisions until we are at least 25, because our judgment is not fully developed. In which case, it raises questions about our ability to procreate, drink, drive and vote….

Of course, many age-based demarcations are cultural and societal. Customary practices such as initiation ceremonies are still significant markers in a person’s development and their status in the community (including their rights and responsibilities).

Which brings me to social media – shouldn’t we also be responsible and held accountable for what we post, share, comment on or simply like on Facebook, Twitter etc.? Whether you believe in “nature” or “nurture”, some academics argue we always have a choice before we hit that button – so shouldn’t that be a guiding principle to live by?

Next week: Making Creeping Assumptions

Blockchain and the Limits of Trust

Last week I was privileged to be a guest on This Is Imminent, a new form of Web TV hosted by Simon Waller. The given topic was Blockchain and the Limitations of Trust.

For a replay of the Web TV event go here

As regular readers will know, I have been immersed in the world of Blockchain, cryptocurrency and digital assets for over four years – and while I am not a technologist, I think I know enough to understand some of the potential impact and implications of Blockchain on distributed networks, decentralization, governance, disintermediation, digital disruption, programmable money, tokenization, and, for the purposes of last week’s discussion, human trust.

The point of the discussion was to explore how Blockchain might provide a solution to the absence of trust we currently experience in many areas of our daily lives. Even better, how Blockchain could enhance or expand our existing trusted relationships, especially across remote networks. The complete event can be viewed here, but be warned that it’s not a technical discussion (and wasn’t intended to be), although Simon did find a very amusing video that tries to explain Blockchain with the aid of Spam (the luncheon meat, not the unwanted e-mail).

At a time when our trust in public institutions is being tested all the time, it’s more important than ever to understand the nature of trust (especially trust placed in any new technology), and to navigate how we establish, build and maintain trust in increasingly peer-to-peer, fractured, fragmented, open and remote networks.

To frame the conversation, I think it’s important to lay down a few guiding principles.

First, a network is only as strong as its weakest point of connection.

Second, there are three main components to maintaining the integrity of a “trusted” network:

  • how are network participants verified?
  • how secure is the network against malicious actors?
  • what are the penalties or sanctions for breaking that trust?

Third, “trust” in the context of networks is a proxy for “risk” – how much or how far are we willing to trust a network, and everyone connected to it?

For example, if you and I know each other personally and I trust you as a friend, colleague or acquaintance, does that mean I should automatically trust everyone else you know? (Probably not.) Equally, should I trust you just because you know all the same people as me? (Again, probably not.) Each relationship (or connection) in that type of network has to be evaluated on its own merits. Although we can do a certain amount of due diligence and triangulation, as each network becomes larger, it’s increasingly difficult for us to “know” each and every connection.

Let’s suppose that the verification process is set appropriately high, that the network is maintained securely, and that there are adequate sanctions for abusing the network trust – then it is possible for participants to “know” each other, because the network has created the minimum degree of trust needed for it to be viable. Consequently, we might conclude that only trustworthy people would want to join a network based on trust where each transaction is observable and traceable (albeit, in the case of Blockchain, pseudonymously).

When it comes to trust and risk assessment, it still amazes me the amount of personal (and private) information people are willing to share on social media platforms, just to get a “free” account. We seem to be very comfortable placing an inordinate amount of trust in these highly centralized services both to protect our data and to manage our relationships – which to me is something of an unfair bargain.

Statistically we know we are more likely to be killed in a car accident than in a plane crash – but we attach far more risk to flying than to driving. Whenever we take our vehicle out on to the road, we automatically assume that every other driver is licensed, insured, and competent to drive, and that their car is taxed and roadworthy. We cannot verify this information ourselves, so we have to trust in both the centralized systems (that regulate drivers, cars and roads), and in each and every individual driver – but we know there are so many weak points in that structure.

Blockchain has the ability to verify each and every participant and transaction on the network, enabling all users to trust in the security and reliability of network transactions. In addition, once verified, participants do not have to keep providing verification each time they want to access the network, because the network “knows” enough about each participant that it can create a mutual level of trust without everyone having to have direct knowledge of each other.
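The "observable and traceable" property described above comes from records that commit to the records before them. Here is a deliberately tiny sketch (not a real blockchain client, and with no consensus or signatures) of a hash-chained ledger in which any participant can re-derive every link and detect tampering, without trusting a central record-keeper.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministic hash of a record (sorted keys so the digest is stable)."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append(chain: list, data: str) -> None:
    """Each new record commits to the hash of the previous one."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": prev, "data": data})

def verify(chain: list) -> bool:
    """Re-derive every link; altering any earlier record breaks the chain."""
    for i in range(1, len(chain)):
        if chain[i]["prev"] != block_hash(chain[i - 1]):
            return False
    return True

ledger = []
for tx in ["alice->bob 5", "bob->carol 2"]:
    append(ledger, tx)
assert verify(ledger)                        # anyone can check the whole history
ledger[0]["data"] = "alice->mallory 500"     # rewriting an old transaction...
assert not verify(ledger)                    # ...is immediately detectable
```

Real blockchains add digital signatures (so each transaction proves who authorised it) and a consensus mechanism (so no single party decides which chain is valid); this toy only shows why the shared history itself can be trusted.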

In the asymmetric relationships we have created with centralized platforms such as social media, we find ourselves in a very binary situation – once we have provided our e-mail address, date of birth, gender and whatever else is required, we cannot be confident that the platform “forgets” that information when it no longer needs it. It’s a case of “all or nothing” as the price of network entry. Whereas, if we operated under a system of self-sovereign digital identity (which technology like Blockchain can facilitate), then I can be sure that such platforms only have access to the specific personal data points that I am willing to share with them, for the specific purpose I determine, and only for as long as I decide.
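The "specific data points, specific purpose, only as long as I decide" model above can be sketched as a consent grant that a platform must check before reading any attribute. The class and field names are my own illustrative assumptions; real self-sovereign identity systems express this with verifiable credentials and selective disclosure, not a Python object.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class ConsentGrant:
    fields: set        # only the data points the owner agreed to share
    purpose: str       # the single purpose the owner authorised
    expires: datetime  # access lapses automatically after this moment

    def allows(self, field: str, purpose: str, now: datetime) -> bool:
        """A platform may read a field only in scope, on purpose, and in time."""
        return (field in self.fields
                and purpose == self.purpose
                and now < self.expires)

now = datetime.now(timezone.utc)
grant = ConsentGrant({"email"}, "account-login", now + timedelta(days=30))

assert grant.allows("email", "account-login", now)                  # in scope
assert not grant.allows("date_of_birth", "account-login", now)      # never shared
assert not grant.allows("email", "marketing", now)                  # wrong purpose
assert not grant.allows("email", "account-login",
                        now + timedelta(days=31))                   # expired
```

Contrast this with the "all or nothing" sign-up form: instead of handing over every field forever, the owner's grant is the narrow interface through which anything is read at all.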

Finally, taking control of, and being responsible for managing our own personal information (such as a private key for a digital wallet) is perhaps a step too far for some people. They might not feel they have enough confidence in their own ability to be trusted with this data, so they would rather delegate this responsibility to centralized systems.

Next week: Always Look On The Bright Side…