Whose side is AI on?

At the risk of coming across as some sort of Luddite, I'd say recent commentary on Artificial Intelligence suggests it is only natural to have concerns and misgivings about its rapid development and widespread deployment. Of course, at its heart, it’s just another technology at our disposal – but by its very definition, generative AI is not passive, and it is likely to impact all areas of our lives, whether we invite it in or not.

Over the next few weeks, I will be discussing some non-technical themes relating to AI – creativity and AI, legal implications of AI, and form over substance when it comes to AI itself.

To start with, these are a few of the questions that I have been mulling over:

– Is AI working for us, as a tool that we control and manage?  Or is AI working with us, in a partnership of equals? Or, more likely, is AI working against us, in the sense that it is happening to us, whether we like it or not, let alone whether we are actually aware of it?

– Is AI being wielded by a bunch of tech bros, who feed it with all their own prejudices, unconscious bias and cognitive limitations?

– Who decides what the Large Language Models (LLMs) that power AI are trained on?

– How does AI get permission to create derived content from our own Intellectual Property? Even if our content is on the web, being “publicly available” is not the same as “in the public domain”.

– Who is responsible for what AI publishes, and are AI agents accountable for their actions? In the event of false, incorrect, misleading or inappropriate content created by AI, how do we get to clarify the record, or seek a right of reply?

– Why are AI tools adding increased caveats? (“This is not financial advice, this is not to be relied on in a court of law, this is only based on information available as at a certain point in time, this is not a recommendation, etc.”) And is this only going to increase, as in the recent example of changes to Google’s AI-generated search results? (But really, do we need to be told that eating rocks or adding glue to pizza are bad ideas?)

– From my own experience, tools like ChatGPT return “deliberate” factual errors. Why? Is it to keep us on our toes (“Gotcha!”)? Is it to use our responses (or lack thereof) to train the model to be more accurate? Is it to underline the caveat emptor principle (“What, you relied on Otter to write your college essay? What were you thinking?”)? Or is it to counter plagiarism (“You could only have got that false information from our AI engine”)? If you think the latter is far-fetched, I refer you to the notion of “trap streets” in maps and directories.

– Should AI tools contain better attribution (sources and acknowledgments) in their results? Should they disclose the list of “ingredients” used (like food labelling)? Should they provide verifiable citations for their references? (It’s an idea that is gaining some attention.)

– Finally, the increased use of cloud-based services and crowd-sourced content (not just in AI tools) means there is the potential for overreach in the end user licensing agreements of ChatGPT, Otter, Adobe Firefly, Gemini, Midjourney etc. Only recently, Adobe had to clarify the latest changes to its service agreement, in response to some social media criticism.

Next week: AI and the Human Factor

Digital Identity – Wallets are the key?

A few months ago, I wrote about trust and digital identity – the issue of who “owns” our identity, and why the concept of “self-sovereign digital identity” can help resolve problems of data security and data privacy.

The topic was aired at a recent presentation by FinTech advisor David Birch (hosted at Novatti) to an audience of Australian FinTech, Blockchain and identity experts.

David’s main thesis is that digital wallets will sit at the centre of the metaverse – linking web3 with digital assets and their owners. Wallets will not only be the “key” to transacting with digital assets (tokens), but proving “identity” will confirm “ownership” (or “control”) of wallets and their holdings.

The audience felt that in Australia, we face several challenges to the adoption of digital identity (and by extension, digital wallets):

1. Lack of common technical standards and lack of interoperability

2. Poor experience of government services (the nightmare that is myGov…)

3. Private sector complacency and the protected incumbency of oligopolies

4. Absence of incentives and overwhelming inertia (i.e., why move ahead of any government mandate?)

The example was given of a local company that has built digital identity solutions for consumer applications – but apparently, can’t attract any interest from local banks.

A logical conclusion from the discussion is that we will maintain multiple digital identities (profiles) and numerous digital wallets (applications), for different purposes. I don’t see a problem with this, as long as individuals get to decide which third parties can access their personal data – where, when, for how long, and for what specific purposes.

Next week: Defunct apps and tech projects

Trust in Digital IDs

Or: “Whose identity is it anyway?”

Over the past few years, there have been a significant number of serious data breaches among banks, utilities, telcos, insurers and public bodies. As a result, hackers have been able to access the confidential data and financial records of millions of customers, leading to ransomware demands, the wide dissemination of private information, identity theft, and multiple phishing attempts and similar scams.

What most of these hacks reveal is the vulnerability of centralised systems as well as the unnecessary storage of personal data – making these single points of failure a target for such exploits. Worse, the banks and others seem to think they “own” this personal data once they have obtained it, as evidenced by the way they (mis)manage it.

I fully understand the need for KYC/AML, and the requirement to verify customers under the 100 Points of Identification system. However, once I have been verified, why does each bank, telco and utility company need to keep copies or records of my personal data on their systems? Under a common 100 Points verification process, shouldn’t we have a more efficient and less vulnerable system? If I have been verified by one bank in Australia, why can’t I be automatically verified by every other bank in Australia (e.g., if I wanted to open an account with them), or indeed any other company using the same 100 Points system?

Which is where the concept of Self-Sovereign Identity comes into play. Under this approach, even if I initially need to submit evidence of my driver’s license, passport or birth certificate under the 100 Points system, once I have been verified by the network I can “retrieve” my personal data (i.e., revoke the access permission), or specify with each party on the network how long they can hold my personal data, and for what specific purpose.

This way, each party on the network does not need to retain a copy of the original documents. Instead, my profile is captured as a digital ID that confirms who I am, and confirms that I have been verified by the network; it does not require me to keep disclosing my personal data to each party on the network. (There are providers of Digital ID solutions, but because they are centralised, and unilateral, we end up with multiple and inconsistent Digital ID systems, which are just as vulnerable to the risk of a single point of failure…)
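To make the verify-once idea above more concrete, here is a minimal sketch of the flow: a trusted network verifier issues a signed, time-limited attestation that I have passed the 100 Points check, and any relying party (bank, telco, utility) validates that attestation without ever storing copies of my source documents. Everything here is illustrative – a single verifier with an HMAC signature stands in for a real public-key credential scheme, and none of the names refer to an actual Digital ID standard.

```python
import hashlib
import hmac
import json
import time

# Hypothetical stand-in for the network verifier's signing key.
VERIFIER_KEY = b"network-verifier-secret"

def issue_credential(subject_id: str, expires_in: float = 3600) -> dict:
    """The verifier attests 'this subject passed 100 Points' -- the
    attestation carries no copies of the underlying documents."""
    claim = {"sub": subject_id, "verified": True, "exp": time.time() + expires_in}
    payload = json.dumps(claim, sort_keys=True).encode()
    sig = hmac.new(VERIFIER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": sig}

def check_credential(cred: dict, revoked: set) -> bool:
    """A relying party checks the signature, revocation status and expiry.
    It never needs to see or retain the passport/licence itself."""
    payload = json.dumps(cred["claim"], sort_keys=True).encode()
    expected = hmac.new(VERIFIER_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, cred["sig"]):
        return False  # attestation was tampered with or not issued by the verifier
    if cred["claim"]["sub"] in revoked:
        return False  # subject has "retrieved" their data (revoked permission)
    return time.time() < cred["claim"]["exp"]  # access was always time-limited
```

The point of the sketch is the shape of the trust relationship: revocation and expiry are enforced at the network level, so no relying party accumulates a honeypot of personal documents.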

But of course, banks etc. insist that not only do they have to ask for 100 Points of ID each and every time I open an account, they are required to retain copies or digital versions of my personal data. Hence, we should not be surprised by the number of data hacks we keep experiencing.

The current approach to identity in banking, telcos and utilities is baffling. Just a few examples I can think of:

1. In trying to upgrade my current mobile phone plan with my existing provider, I had to re-submit personal information via a mobile app (and this is a telco that experienced a major hack last year, resulting in me having to apply for a new driver’s license). If I have already been verified, why the need to ask for my personal data again, and via a mobile app?

2. I’ve lived at my current address for more than 5 years. I still receive bank statements intended for the previous occupant. I have tried on numerous occasions to inform the bank that this person is no longer living here. I’ve used the standard “Return to Sender” method, and tried to contact the bank directly, but because I am not the named account addressee or authorised representative, they won’t talk to me. Fair enough. But the addressee is actually a self-managed superannuation fund. Given the fallout from the Banking Royal Commission, and the additional layers of verification, supervision and audit that apply to such funds, I’m surprised that this issue has not been picked up by the bank concerned. It’s very easy to look up the current registered address of an SMSF via the APRA website, if only the bank could be bothered to investigate why the statements keep getting returned.

3. I have been trying to remove the name of a former director as a signatory to a company bank account. The bank kept asking for various forms and “proof” that this signatory was no longer a director and no longer authorised to access the account. Even though I have done this (and had to pay for an accountant to sign a letter confirming the director has resigned their position), if the bank had bothered to look up the ASIC company register, they would have seen that this person was no longer a company officer. Meanwhile, the bank statements keep arriving addressed to the ex-director. Apparently, the bank’s own “systems” don’t talk to one another (a common refrain when trying to navigate legacy corporate behemoths).

In each of the above, the use of a Digital ID system would streamline the process for updating customer records, and reduce the risk of data vulnerabilities. But that requires effort on the part of the entities concerned – clearly, the current fines for data breaches and for misconduct in financial services are not enough.

Next week: AI vs IP

Free speech up for sale

When I was planning to post this article a couple of weeks ago, Elon Musk’s bid to buy Twitter and take it into private ownership was looking unlikely to succeed. Musk had just declined to take up the offer of a seat on the Twitter board, following which the board adopted a poison-pill defence against a hostile takeover. And just as I was about to go to press at my usual time, the news broke that the original bid had now been accepted by the board, so I hit the pause button instead and waited a day to see what the public reaction was. What a difference 72 hours (and US$44bn) can make… It seems “free speech” does indeed come with a price.

Of course, the Twitter transaction is still subject to shareholder approval and regulatory clearance, as well as confirmation of the funding structure, since Musk is having to raise about half the stated purchase price from banks.

Musk’s stated objective in acquiring Twitter was highlighted in a press release put out by the company:

“Free speech is the bedrock of a functioning democracy, and Twitter is the digital town square where matters vital to the future of humanity are debated,” said Mr. Musk. “I also want to make Twitter better than ever by enhancing the product with new features, making the algorithms open source to increase trust, defeating the spam bots, and authenticating all humans. Twitter has tremendous potential – I look forward to working with the company and the community of users to unlock it.”

This latest development in Musk’s apparent love/hate relationship with Twitter is bound to further divide existing users as to the billionaire’s intentions, as well as raise concerns about the broader implications for free speech. Musk himself has encouraged his “worst critics” to stay with the platform. Meanwhile, founder and former CEO Jack Dorsey has renewed his love of Twitter, despite only recently stepping away from the top job to spend more time on his other interests.

Personally, I’m not overly concerned that a platform such as Twitter is in private hands or under single ownership (subject, of course, to anti-trust rules, etc.). Far from creating an entrenched monopoly, it may actually encourage more competition by those who decide to opt out of Twitter. What I am less comfortable with is the notion that Twitter somehow acts as an exemplar of free speech, and as such, is a bastion of democracy.

On the positive side, we will be able to judge the veracity of Musk’s objectives against his actual deeds. For example, will Twitter actually introduce an edit button, make its algorithms open-source, exorcise the spam bots, verify users, and reduce/remove the platform’s reliance upon advertising?

On the negative side, what credible stance will Twitter now take on “free speech”, short of allowing an “anything goes” policy? If Musk is sincere that Twitter will be a platform for debating “matters vital to the future of humanity”, he may need to modify what he means by public discourse. Personal slanging matches with fellow billionaires (and those less able to defend themselves) do not make for an edifying public debating forum. Musk’s own disclosures about Twitter and his other business interests will also come under increased scrutiny. We know from past experience that Elon’s Tweets can move markets, and for this alone he should be aware of the responsibility that comes with ownership of the platform.

We have long understood that free speech is not the same as an unfettered right to say what you like in public – there are limits to freedom of expression, including accountability for the consequences of our words and actions, especially where they can cause harm. The broader challenges we face are:

  • technology outpacing regulation, when it comes to social media
  • defining what it means to “cause offence”
  • increased attacks on “mainstream media” and threats to freedom of the press

1. Just as the printing press, telegraphy, telephony, broadcasting and the internet each resulted in legislative changes, social media has continued to test the boundaries of regulation under which its predecessors now operate. Hitherto, much of the regulation that applies to social and digital media relates to privacy and data protection, as well as the existing law of defamation. But the latter varies considerably by jurisdiction, and by access to redress, and availability of remedies. Social media platforms have resisted attempts to treat them as traditional media (newspapers and broadcasters, which are subject to licensing and/or industry codes of practice) or treat them as publishers (and therefore responsible for content published on their platforms). (Then there is the question of how some social media platforms manage their tax affairs in the countries where they derive their revenue.)

The Australian government is attempting to challenge social media companies in a couple of ways. The first has been to force these platforms to pay for third-party news content from which they directly and indirectly generate advertising income. The second aims to hold social media more accountable for defamatory content published on their platforms, and remove the protection of “anonymity”. However, the former might be seen as a (belated) reaction to changing business models, and largely acting in favour of incumbents; while the latter is a technical response to the complex law of defamation in the digital age.

2. The threshold for taking offence at what we see or hear on social media is now so low as to be almost meaningless. During previous battles over censorship in print, on stage or on screen, the argument could be made that, “if you don’t like something you aren’t being forced to watch it”, so maybe you are deliberately going in search of content just to find it offensive. The problem is, social media by its very nature is more pervasive and, fed by hidden algorithms, actually more invasive than traditional print and broadcast media. Even as a casual, passive or innocent user, you cannot avoid seeing something that may “offend” you. Economic and technical barriers to entry are likewise so low that anyone and everyone can have their say on social media.

Leaving aside defamation laws, the concept of “hate speech” is being used to target content which is designed to advocate violence, or can be reasonably deemed or expected to have provoked violence or the threat of harm (personal, social or economic). I have problems with how we define hate speech in the current environment of public commentary and social media platforms, since the causal link between intent and consequence is not always that easy to establish.

However, I think we can agree that the use of content to vilify others simply based on their race, gender, sexuality, ethnicity, economic status, political affiliation or religious identity cannot be defended on the grounds of “free speech”, “fair comment” or “personal belief”. Yet how do we discourage such diatribes without accusations of censorship or authoritarianism, and how do we establish workable remedies to curtail the harmful effects of “hate speech” without infringing our civil liberties?

Overall, there is a need to establish the author’s intent (their purpose as well as any justification), plus apply a “reasonable person” standard, one that does not simply affirm confirmation bias of one sector of society against another. We must recognise that hiding behind our personal ideology cannot be an acceptable defence against facing the consequences of our actions.

3. I think it’s problematic that large sections of the traditional media have hardly covered themselves in glory when it comes to their ethical standards, and their willingness to misuse their public platforms, economic power and political influence to undertake nefarious behaviour and/or deny any responsibility for their actions. Think of the UK’s phone hacking scandals, which resulted in one press baron being deemed “unfit to run a company”, as well as leading to the closure of a major newspaper.

That said, it hardly justifies the attempts by some governments, populist leaders and authoritarian regimes to continuously undermine the integrity of the fourth estate. It certainly doesn’t warrant the prosecution and persecution of journalists who are simply trying to do their job, nor attacks and bans on the media unless they “toe the party line”.

Which brings me back to Twitter, and its responsibility in helping to preserve free speech, while preventing its platform being hijacked for the purposes of vilification and incitement to cause harm. If its new owner is serious about furthering public debate and mature discourse, then here are a few other enhancements he might want to consider:

  • in addition to an edit button, a “cooling off” period whereby users are given the opportunity to reconsider a like, a post or a retweet, based on user feedback or community interaction – after which time, they might be deemed responsible for the content as if they were the original author (potentially a way to mitigate “pile-ons”)
  • signing up to a recognised industry code of ethics, including a victim’s formal right of reply, access to mediation, and enforcement procedures and penalties against perpetrators who continually cross the line into vilification, or engage in content that explicitly or implicitly advocates violence or harm
  • a more robust fact-checking process and a policy of “truth in advertising” when it comes to claims or accusations made by or on behalf of politicians, political parties, or those seeking elected office
  • clearer delineation between content which is mere opinion, content which is in the nature of a public service (e.g., emergencies and natural disasters), content which is deemed part of a company’s public disclosure obligations, content which is advertorial, content which is on behalf of a political party or candidate, and content which is purely for entertainment purposes only (removing the bots may not be enough)
  • consideration of establishing an independent editorial board that can also advocate on behalf of alleged victims of vilification, and act as the initial arbiter of “public interest” matters (such as privacy, data protection, whistle-blowers etc.)

Finally, if Twitter is going to remove/reduce advertising, what will the commercial model look like?

Next week: The Crypto Conversation