Trust in Digital IDs

Or: “Whose identity is it anyway?”

Over the past few years, there have been a significant number of serious data breaches among banks, utilities, telcos, insurers and public bodies. As a result, hackers have been able to access the confidential data and financial records of millions of customers, leading to ransomware demands, the wide dissemination of private information, identity theft, and multiple phishing attempts and similar scams.

What most of these hacks reveal is the vulnerability of centralised systems as well as the unnecessary storage of personal data – making these single points of failure a target for such exploits. Worse, the banks and others seem to think they “own” this personal data once they have obtained it, as evidenced by the way they (mis)manage it.

I fully understand the need for KYC/AML, and the requirement to verify customers under the 100 Points of Identification system. However, once I have been verified, why does each bank, telco and utility company need to keep copies or records of my personal data on their systems? Under a common 100 Points verification process, shouldn’t we have a more efficient and less vulnerable system? If I have been verified by one bank in Australia, why can’t I be automatically verified by every other bank in Australia (e.g., if I wanted to open an account with them), or indeed any other company using the same 100 Points system?

Which is where the concept of Self-Sovereign Identity comes into play. This approach should mean that, under the 100 Points system, even if I initially need to submit evidence of my driver’s license, passport or birth certificate, once I have been verified by the network I can “retrieve” my personal data (i.e., revoke the access permission), or specify with each party on the network how long they can hold my personal data, and for what specific purpose.

This way, each party on the network does not need to retain a copy of the original documents. Instead, my profile is captured as a digital ID that confirms who I am, and confirms that I have been verified by the network; it does not require me to keep disclosing my personal data to each party on the network. (There are providers of Digital ID solutions, but because they are centralised, and unilateral, we end up with multiple and inconsistent Digital ID systems, which are just as vulnerable to the risk of a single point of failure…)
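The attestation model described above can be sketched in code. This is a minimal illustration only, assuming a signed-attestation design: a single verifying body (say, the first bank) signs a claim that a subject has passed the 100 Points check, any relying party checks that signature instead of collecting documents, and the subject can revoke access at any time. A real Self-Sovereign Identity system would use public-key signatures and a decentralised registry rather than the shared-secret HMAC used here for brevity; all names are hypothetical.

```python
import hashlib
import hmac
import json

class Issuer:
    """A verifying body that attests to a completed 100 Points check,
    so relying parties never need to store the underlying documents."""

    def __init__(self, secret: bytes):
        self._secret = secret
        self._revoked = set()

    def issue(self, subject_id: str) -> dict:
        # Sign the bare claim, not the documents that supported it.
        claim = {"subject": subject_id, "claim": "100-points-verified"}
        payload = json.dumps(claim, sort_keys=True).encode()
        sig = hmac.new(self._secret, payload, hashlib.sha256).hexdigest()
        return {**claim, "sig": sig}

    def revoke(self, subject_id: str) -> None:
        # The subject (or issuer) can withdraw the attestation at any time.
        self._revoked.add(subject_id)

    def verify(self, credential: dict) -> bool:
        if credential["subject"] in self._revoked:
            return False
        claim = {k: v for k, v in credential.items() if k != "sig"}
        payload = json.dumps(claim, sort_keys=True).encode()
        expected = hmac.new(self._secret, payload, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, credential["sig"])

issuer = Issuer(b"issuer-signing-key")
cred = issuer.issue("alice")
print(issuer.verify(cred))   # any relying party can confirm the attestation
issuer.revoke("alice")
print(issuer.verify(cred))   # access has been withdrawn
```

The point of the sketch is that the credential itself carries no driver’s license or passport details: the relying party learns only that verification happened and is still valid.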

But of course, banks etc. insist that not only do they have to ask for 100 Points of ID each and every time I open an account, they are required to retain copies or digital versions of my personal data. Hence, we should not be surprised by the number of data hacks we keep experiencing.

The current approach to identity in banking, telcos and utilities is baffling. Just a few examples I can think of:

1. In trying to upgrade my current mobile phone plan with my existing provider, I had to re-submit personal information via a mobile app (and this is a telco that experienced a major hack last year, resulting in me having to apply for a new driver’s license). If I have already been verified, why the need to ask for my personal data again, and via a mobile app?

2. I’ve lived at my current address for more than 5 years. I still receive bank statements intended for the previous occupant. I have tried on numerous occasions to inform the bank that this person is no longer living here. I’ve used the standard “Return to Sender” method, and tried to contact the bank direct, but because I am not the named account addressee or authorised representative, they won’t talk to me. Fair enough. But the addressee is actually a self-managed superannuation fund. Given the fallout from the Banking Royal Commission, and the additional layers of verification, supervision and audit that apply to such funds, I’m surprised that this issue has not been picked up by the bank concerned. It’s very easy to look up the current registered address of an SMSF via the APRA website, if only the bank could be bothered to investigate why the statements keep getting returned.

3. I have been trying to remove the name of a former director as a signatory to a company bank account. The bank kept asking for various forms and “proof” that this signatory was no longer a director and no longer authorised to access the account. Even though I have done this (and had to pay for an accountant to sign a letter confirming the director has resigned their position), if the bank had bothered to look up the ASIC company register, they would see that this person was no longer a company officer. Meanwhile, the bank statements keep arriving addressed to the ex-director. Apparently, the bank’s own “systems” don’t talk to one another (a common refrain when trying to navigate legacy corporate behemoths).

In each of the above, the use of a Digital ID system would streamline the process for updating customer records, and reduce the risk of data vulnerabilities. But that requires effort on the part of the entities concerned – clearly, the current fines for data breaches and for misconduct in financial services are not enough.

Next week: AI vs IP  

 

An AI Origin Story

Nowadays, no TV or movie franchise worth its salt is deemed complete unless it has some sort of origin story – from “Buzz Lightyear” to “Alien”, from “Mystery Road” to “Inspector Morse”. And as for “Star Wars”, I’ve lost count as to which prequel/sequel/chapter/postscript/spin-off we are up to. Origin stories can be helpful in explaining “what came before”, providing background and context, and describing how we got to where we are in a particular narrative. Reading Jeanette Winterson’s recent collection of essays, “12 Bytes”, it soon becomes apparent that what she has achieved is a tangible origin story for Artificial Intelligence.

Still from “Frankenstein” (1931) – Image sourced from IMDb

By Winterson’s own admission, this is not a science textbook, nor a reference work on AI. It’s a lot more human than that, and all the more readable and enjoyable as a result. In any case, technology is moving so quickly these days that some of her references (even those from barely a year ago) are either out of date, or have been superseded by subsequent events. For example, she makes a contemporaneous reference to a Financial Times article from May 2021, on Decentralized Finance (DeFi) and Non-Fungible Tokens (NFTs). She mentions a digital race horse that sold for $125,000. Fast-forward 12 months, and we have seen parts of the nascent DeFi industry blow up, and an NFT of Jack Dorsey’s first Tweet (Twitter’s own origin story?) failing to achieve even $290 when it went up for auction, having initially been sold for $2.9m. Then there is the Google engineer who claimed that the LaMDA AI program is sentient, and the chess robot which broke its opponent’s finger.

Across these stand-alone but interlinked essays, Winterson builds a consistent narrative arc across the historical development, current status and future implications of AI. In particular, she looks ahead to a time when we achieve Artificial General Intelligence, the Singularity, and the complete embodiment of AI, and not necessarily in a biological form that we would recognise today. Despite the dystopian tones, the author appears to be generally positive and optimistic about these developments, and welcomes the prospect of transhumanism, in large part because it is inevitable and we should embrace it, and ultimately because it might be the only way to save our planet and civilisation – just not in the form we expect.

The book’s themes range from: the first human origin stories (sky-gods and sacred texts) to ancient philosophy; from the Industrial Revolution to Frankenstein’s monster; from Lovelace and Babbage to Dracula; from Turing and transistors to the tech giants of today. There are sections on quantum physics, the nature of “binary” (in computing and in transgenderism), biases in algorithms and search engines, the erosion of privacy via data mining, the emergence of surveillance capitalism, and the pros and cons of cryogenics and sexbots.

We can observe that traditional attempts to imagine or create human-made intelligence were based on biology, religion, spirituality and the supernatural – and many of these concepts were designed to explain our own origins, to enforce societal norms, to exert control, and to sustain existing and inequitable power structures. Some of these efforts might have been designed to explain our purpose as humans, but in reality they simply raised more questions than they resolved. Why are we here? Why this planet? What is our destiny? Is death and extinction (the final “End-Time”) the only outcome for the human race? Winterson rigorously rejects this finality as either desirable or inevitable.

Her conclusion is that the human race is worth saving (from itself?), but we have to face up to the need to adapt and continue evolving (homo sapiens was never the end game). Consequently, embracing AI/AGI is going to be key to our survival. Of course, like any (flawed) technology, AI is just another tool, and it is what we do with it that matters. Winterson is rightly suspicious of the male-dominated tech industry, some of whose leaders see themselves as guardians of civil liberties and the saviours of humankind, yet fail to acknowledge that “hate speech is not free speech”. She acknowledges the benefits of an interconnected world, advanced prosthetics, open access to information, medical breakthroughs, industrial automation, and knowledge that can help anticipate danger and avert disaster. But AI and transhumanism won’t solve all our existential problems, and if we don’t have the capacity for empathy, compassion, love, humour, self-reflection, art, satire, creativity, imagination, music or critical thinking, then we will definitely cease to be “human” at all.

The Bibliography to this book is an invaluable resource in itself – and provides for a wealth of additional reading. One book that is not listed, but which might be of interest to her readers, is “Chimera”, a novel by Stephen Gallagher, published in the early 1980s and subsequently adapted for radio and TV. Although this story is about genetic engineering (rather than AI), nevertheless it echoes some of Winterson’s themes and concerns around the morals and ethics of technology (e.g., eugenics, organ harvesting, private investment vs public control, playing god, and the over-emphasis on the preservation and prolongation of human lifeforms as they are currently constituted). Happy reading!

Next week: Digital Perfectionism?

 

Free speech up for sale

When I was planning to post this article a couple of weeks ago, Elon Musk’s bid to buy Twitter and take it into private ownership was looking unlikely to succeed. Musk had just declined to take up the offer of a seat on the Twitter board, following which the board adopted a poison-pill defence against a hostile takeover. And just as I was about to go to press at my usual time, the news broke that the original bid had now been accepted by the board, so I hit the pause button instead and waited a day to see what the public reaction was. What a difference 72 hours (and US$44bn) can make… It seems “free speech” does indeed come with a price.

Of course, the Twitter transaction is still subject to shareholder approval and regulatory clearance, as well as confirmation of the funding structure, since Musk is having to raise about half the stated purchase price from banks.

Musk’s stated objective in acquiring Twitter was highlighted in a press release put out by the company:

“Free speech is the bedrock of a functioning democracy, and Twitter is the digital town square where matters vital to the future of humanity are debated,” said Mr. Musk. “I also want to make Twitter better than ever by enhancing the product with new features, making the algorithms open source to increase trust, defeating the spam bots, and authenticating all humans. Twitter has tremendous potential – I look forward to working with the company and the community of users to unlock it.”

This latest development in Musk’s apparent love/hate relationship with Twitter is bound to further divide existing users as to the billionaire’s intentions, as well as raise concerns about the broader implications for free speech. Musk himself has encouraged his “worst critics” to stay with the platform. Meanwhile, founder and former CEO Jack Dorsey has renewed his love of Twitter, despite only recently stepping away from the top job to spend more time on his other interests.

Personally, I’m not overly concerned that a platform such as Twitter is in private hands or under single ownership (subject, of course, to anti-trust rules, etc.). Far from creating an entrenched monopoly, it may actually encourage more competition by those who decide to opt out of Twitter. What I am less comfortable with is the notion that Twitter somehow acts as an exemplar of free speech, and as such, is a bastion of democracy.

On the positive side, we will be able to judge the veracity of Musk’s objectives against his actual deeds. For example, will Twitter actually introduce an edit button, make its algorithms open-source, exorcise the spam bots, verify users, and reduce/remove the platform’s reliance upon advertising?

On the negative side, what credible stance will Twitter now take on “free speech”, short of allowing an “anything goes” policy? If Musk is sincere that Twitter will be a platform for debating “matters vital to the future of humanity”, he may need to modify what he means by public discourse. Personal slanging matches with fellow-billionaires (and those less-able to defend themselves) do not make for an edifying public debating forum. Musk’s own disclosures about Twitter and his other business interests will also come under increased scrutiny. We know from past experience that Elon’s Tweets can move markets, and for this alone he should be aware of the responsibility that comes with ownership of the platform.

We have long understood that free speech is not the same as an unfettered right to say what you like in public – there are limits to freedom of expression, including accountability for the consequences of our words and actions, especially where they can cause harm. The broader challenges we face are:

  • technology outpacing regulation, when it comes to social media
  • defining what it means to “cause offence”
  • increased attacks on “mainstream media” and threats to freedom of the press

1. Just as the printing press, telegraphy, telephony, broadcasting and the internet each resulted in legislative changes, social media has continued to test the boundaries of regulation under which its predecessors now operate. Hitherto, much of the regulation that applies to social and digital media relates to privacy and data protection, as well as the existing law of defamation. But the latter varies considerably by jurisdiction, and by access to redress, and availability of remedies. Social media platforms have resisted attempts to treat them as traditional media (newspapers and broadcasters, which are subject to licensing and/or industry codes of practice) or treat them as publishers (and therefore responsible for content published on their platforms). (Then there is the question of how some social media platforms manage their tax affairs in the countries where they derive their revenue.)

The Australian government is attempting to challenge social media companies in a couple of ways. The first has been to force these platforms to pay for third-party news content from which they directly and indirectly generate advertising income. The second aims to hold social media more accountable for defamatory content published on their platforms, and remove the protection of “anonymity”. However, the former might be seen as a (belated) reaction to changing business models, and largely acting in favour of incumbents; while the latter is a technical response to the complex law of defamation in the digital age.

2. The ability to be offended by what we see or hear on social media is now at such a low bar as to be almost meaningless. During previous battles over censorship in print, on stage or on screen, the argument could be made that, “if you don’t like something you aren’t being forced to watch it”, so maybe you are deliberately going in search of content just to find it offensive. The problem is, social media by its very nature is more pervasive and, fed by hidden algorithms, is actually more invasive than traditional print and broadcast media. Even as a casual, passive or innocent user, you cannot avoid seeing something that may “offend” you. Economic and technical barriers to entry are likewise so low, that anyone and everyone can have their say on social media.

Leaving aside defamation laws, the concept of “hate speech” is being used to target content which is designed to advocate violence, or can be reasonably deemed or expected to have provoked violence or the threat of harm (personal, social or economic). I have problems with how we define hate speech in the current environment of public commentary and social media platforms, since the causal link between intent and consequence is not always that easy to establish.

However, I think we can agree that the use of content to vilify others simply based on their race, gender, sexuality, ethnicity, economic status, political affiliation or religious identity cannot be defended on the grounds of “free speech”, “fair comment” or “personal belief”. Yet how do we discourage such diatribes without accusations of censorship or authoritarianism, and how do we establish workable remedies to curtail the harmful effects of “hate speech” without infringing our civil liberties?

Overall, there is a need to establish the author’s intent (their purpose as well as any justification), plus apply a “reasonable person” standard, one that does not simply affirm confirmation bias of one sector of society against another. We must recognise that hiding behind our personal ideology cannot be an acceptable defence against facing the consequences of our actions.

3. I think it’s problematic that large sections of the traditional media have hardly covered themselves in glory when it comes to their ethical standards, and their willingness to misuse their public platforms, economic power and political influence to undertake nefarious behaviour and/or deny any responsibility for their actions. Think of the UK’s phone hacking scandals, which resulted in one press baron being deemed “unfit to run a company”, as well as leading to the closure of a major newspaper.

That said, it hardly justifies the attempts by some governments, populist leaders and authoritarian regimes to continuously undermine the integrity of the fourth estate. It certainly doesn’t warrant the prosecution and persecution of journalists who are simply trying to do their job, nor attacks and bans on the media unless they “toe the party line”.

Which brings me back to Twitter, and its responsibility in helping to preserve free speech, while preventing its platform being hijacked for the purposes of vilification and incitement to cause harm. If its new owner is serious about furthering public debate and mature discourse, then here are a few other enhancements he might want to consider:

  • in addition to an edit button, a “cooling off” period whereby users are given the opportunity to reconsider a like, a post or a retweet, based on user feedback or community interaction – after which time, they might be deemed responsible for the content as if they were the original author (potentially a way to mitigate “pile-ons”)
  • signing up to a recognised industry code of ethics, including a victim’s formal right of reply, access to mediation, and enforcement procedures and penalties against perpetrators who continually cross the line into vilification, or engage in content that explicitly or implicitly advocates violence or harm
  • a more robust fact-checking process and a policy of “truth in advertising” when it comes to claims or accusations made by or on behalf of politicians, political parties, or those seeking elected office
  • clearer delineation between content which is mere opinion, content which is in the nature of a public service (e.g., emergencies and natural disasters), content which is deemed part of a company’s public disclosure obligations, content which is advertorial, content which is on behalf of a political party or candidate, and content which is purely for entertainment purposes only (removing the bots may not be enough)
  • consideration of establishing an independent editorial board that can also advocate on behalf of alleged victims of vilification, and act as the initial arbiter of “public interest” matters (such as privacy, data protection, whistle-blowers etc.)
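The first suggestion in the list above – a “cooling off” period – is concrete enough to sketch. This is an illustrative model only, with an assumed 10-minute window: a post (or retweet) is held in a pending state during which the author can withdraw it; once the window closes without withdrawal, the content is deemed published and the author is treated as responsible for it, as if they were the original author.

```python
from dataclasses import dataclass, field
from typing import Optional
import time

COOLING_OFF_SECONDS = 600  # hypothetical 10-minute window

@dataclass
class PendingPost:
    author: str
    text: str
    created_at: float = field(default_factory=time.time)
    withdrawn: bool = False

    def withdraw(self) -> bool:
        # A post can only be pulled back while the window is still open.
        if not self.withdrawn and time.time() - self.created_at < COOLING_OFF_SECONDS:
            self.withdrawn = True
            return True
        return False

    def is_published(self, now: Optional[float] = None) -> bool:
        # Once the window closes, the author is deemed responsible for
        # the content, mitigating reflexive "pile-ons".
        now = time.time() if now is None else now
        return not self.withdrawn and now - self.created_at >= COOLING_OFF_SECONDS
```

A platform would of course need to decide how pending posts are displayed in the interim (hidden, flagged, or visible but retractable) – the sketch only captures the state transition.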

Finally, if Twitter is going to remove/reduce advertising, what will the commercial model look like?

Next week: The Crypto Conversation

Startupbootcamp – Melbourne FinTech Demo Day

Taking its cue from some of the economic effects of the current pandemic, the latest Startupbootcamp Melbourne FinTech virtual demo day adopted the theme of financial health and well-being. When reduced working hours and layoffs revealed that many people did not have enough savings to last 6 weeks, let alone 6 months, lock-down and furlough not only put a strain on public finances, they also revealed the need for better education on personal finance and wealth management. Meanwhile, increased regulation and compliance obligations (especially in the areas of data privacy, cyber security and KYC) are adding huge operational costs for companies and financial institutions. And despite the restrictions and disruptions of lock-down, the latest cohort of startups in the Melbourne FinTech bootcamp managed to deliver some engaging presentations.

Links to each startup are in the names:

Datacy

Datacy allows people to collect, manage and sell their online data easily and transparently, and gives businesses instant access to high quality and bespoke consumer datasets. They stress that the data used in their application is legally and ethically sourced. Their process is also designed to eliminate gaps and risks inherent in many current solutions, which are often manual, fragmented and unethical. At its heart is a Chrome or Firefox browser extension. Individual consumers can generate passive income from data sales, based on user-defined permissions. Businesses can create target data sets using various parameters. Datacy charges companies to access the end-user data, and also takes a 15% commission on every transaction via the plugin – some of which is distributed to end-users, but it wasn’t clear how that works. For example, is it distributed in equal proportions to everyone, or is it weighted by the “value” (however defined or calculated) of an individual’s data?
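The open question at the end of the paragraph above – equal split versus value-weighted split – is easy to make concrete. The sketch below is purely illustrative: the 15% commission rate comes from the pitch, but the 50% user-pool share, the user names and the “value” scores are invented for the example, since Datacy has not disclosed how its distribution actually works.

```python
def split_equally(pool: float, users: list) -> dict:
    """Every contributing user receives the same share of the pool."""
    share = pool / len(users)
    return {u: round(share, 2) for u in users}

def split_by_value(pool: float, value_scores: dict) -> dict:
    """Each user's share is weighted by the (however-defined) value of their data."""
    total = sum(value_scores.values())
    return {u: round(pool * v / total, 2) for u, v in value_scores.items()}

# A hypothetical $1,000 data sale: 15% platform commission, of which
# an assumed half flows back to the contributing end-users.
commission = 1000 * 0.15
user_pool = commission * 0.5

print(split_equally(user_pool, ["ann", "ben", "cho"]))
print(split_by_value(user_pool, {"ann": 5.0, "ben": 3.0, "cho": 2.0}))
```

The two functions produce very different payouts for the same pool, which is exactly why the distribution rule matters to end-users.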

Harpocrates Solutions

Harpocrates Solutions provides simplified data privacy via a “compliance as a service” model. Seeing itself as part of the “Trust Economy”, Harpocrates is making privacy implementations easier. It achieves this by monitoring and observing daily regulatory updates, and capturing the relevant changes. It then uses AI to manage a central repository, and to create and maintain tailored rule sets.

Mark Labs

Mark Labs helps asset managers and institutional investors integrate environmental and social considerations into their portfolios. With increased investor interest in sustainability, portfolio managers are adopting ESG criteria into their decision-making, and Mark Labs helps them in “optimising the impact” of their investments. There are currently an estimated $40 trillion of sustainable assets under management, but ESG portfolio management is data intensive, complex and still emerging both as an analytical skill and as a practical portfolio methodology. Mark Labs helps investors to curate, analyze and communicate data on their portfolio companies, drawing on multiple database sources, and aligning to UN Sustainable Development Goals. The founders estimate that there are $114 trillion of assets under management “at risk” if generational transfer and investor mandates shift towards more ESG criteria.

MassUp

MassUp is a digital white label solution for the property and casualty insurance industry (P&C), designed to sell small item insurance at the consumer point-of-sale (POS).
Describing their platform as a “plug and sell” solution, the founders noted that 70% of portable items are not covered by insurance policies, and many homes and/or contents are either uninsured or under-insured. MassUp is intended to simplify the process (“easy, accessible, online”), and will be launching in Australia under the Sorgenfrey brand in Q2 2021. For example, a product known as “The Flat Insurance” will cover items in and out of your home for a single monthly premium. As MassUp appears to be a tech solution, rather than a policy issuer, underwriter or re-insurer, I couldn’t see how they can achieve competitive policy rates both at scale and with simplicity (especially the claims process). Also, as we know, vendors love to “upsell” insurance on tech appliances, but many such policies have been seen to be redundant when considering existing statutory consumer rights and product warranties. On the other hand, short-term insurance policies (e.g., when I’m traveling, or on holiday, or renting out my home on AirBnB) are increasingly of interest to some consumers.

OnTrack Retirement

OnTrack provides B2B white label digital retirement planning solutions for financial institutions to help their customers in a more personalised way. There is a general consumer reluctance to pay for financial advice, but retirement planning is deemed too complicated. Taking a “holistic” approach, the founders claim to have developed a “best in class simulation engine” – founded on expected retirement spending priorities (rather than trying to predict the cost of living in 20 years’ time). Drawing on their industry experience, the founders stated that a key challenge for many financial planning providers is getting members comfortable with their service. I would also add that reducing complexity with cost-effective products is also key – and financial education forms a big part of the solution.

In Australia, the past 10 years has seen a major exit from the financial planning and wealth management industry – both at the individual adviser level (higher professional qualification requirements, increased compliance costs, and the end of trailing sales commissions in favour of “fee for advice”); and at the institutional level (3 of the big 4 banks have essentially withdrawn from offering financial planning and wealth management services). At the same time, there have been a number of new players – including many non-bank or non-financial institution providers – offering so-called robo-advice and “advice at scale”, mainly designed to reduce costs. In addition, the statutory superannuation regime keeps being tweaked so it is increasingly difficult to plan for the future, with the constant tax and other changes. Superannuation (a key success story of the Keating government) is just one of the “pillars” of personal finance in retirement: the others are the Commonwealth government aged pension (means-tested), personal wealth management (e.g., investments outside of superannuation); and retirement housing (with the expectation of more people opting to remain in their own homes). I would also include earnings from part-time employment while in “retirement”, as people work longer into older age (either from choice or necessity) – how that aligns with the aged pension and/or self-funded retirement is another part of the constantly-shifting tax and social security regime.

Plastiq.it

This product describes itself as a customer data platform that powers stored value, and was described as a “Safe harbour” solution (I’m not quite sure that’s what the founders meant in this context?). According to the pitch, consumers gain a fair and equitable outcome (consumer discounts), while retailers get targeted audiences. The team have created a vertically integrated gift card platform (working with MasterCard, Apple Pay and GooglePay), and launched JamJar, a cashback solution.

RegRadar

Similar to Harpocrates (above), RegRadar is a regulatory screening platform that helps companies “to set routes and avoid crashes”. The tool monitors regulatory changes (initially in the financial, food and healthcare sectors) and uses a pro-active process to develop a regulatory screening strategy, backed by analysis and a decision-support tool.

Having worked in legal, regulatory and compliance publishing for many years myself, I appreciate the challenge companies face when trying to keep up with the latest regulations, especially where they may be subject to multiple regulatory bodies within and across multiple jurisdictions. However, improved technology such as smart decision-support tools for building and maintaining rules-based business systems has helped enormously. In addition, most legislation is now online, so it can be searched more easily and monitored via automated alerts. Plus services such as Westlaw and Lexis-Nexis can also help companies track what is currently “good” or “bad” law by tracking court decisions, law reports and legislative updates. 
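The monitoring-and-alerting workflow described above can be reduced to a simple routing problem: match each incoming regulatory update against a watch-list of jurisdictions and sectors, and fan it out to the teams that care. The sketch below is a toy illustration, not how RegRadar or Harpocrates actually work; the team names, fields and watch-lists are all invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Update:
    jurisdiction: str
    sector: str
    summary: str

# Hypothetical watch-lists: each team subscribes to the jurisdictions
# and sectors it is responsible for monitoring.
subscriptions = {
    "payments-team": {"jurisdictions": {"AU", "NZ"}, "sectors": {"financial"}},
    "food-team": {"jurisdictions": {"AU"}, "sectors": {"food"}},
}

def route_alerts(updates, subscriptions):
    """Fan each regulatory update out to every team whose watch-list it matches."""
    alerts = {team: [] for team in subscriptions}
    for u in updates:
        for team, watch in subscriptions.items():
            if u.jurisdiction in watch["jurisdictions"] and u.sector in watch["sectors"]:
                alerts[team].append(u.summary)
    return alerts

updates = [
    Update("AU", "financial", "New e-KYC guidance issued"),
    Update("AU", "food", "Allergen labelling rules amended"),
]
print(route_alerts(updates, subscriptions))
```

Real services layer classification (often AI-assisted), de-duplication and decision-support rules on top of this basic routing step, but the core value proposition – turning a firehose of legislative change into targeted, actionable alerts – is essentially this loop.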

Next week: Goodbye 2020