Whose side is AI on?

At the risk of coming across as some sort of Luddite, I think recent commentary on Artificial Intelligence suggests it is only natural to have concerns and misgivings about its rapid development and widespread deployment. At its heart, of course, it is just another technology at our disposal – but by its very nature, generative AI is not passive, and it is likely to affect all areas of our lives, whether we invite it in or not.

Over the next few weeks, I will be discussing some non-technical themes relating to AI – creativity and AI, legal implications of AI, and form over substance when it comes to AI itself.

To start with, these are a few of the questions that I have been mulling over:

– Is AI working for us, as a tool that we control and manage? Or is AI working with us, in a partnership of equals? Or, more likely, is AI working against us – in the sense that it is happening to us, whether we like it or not, let alone whether we are even aware of it?

– Is AI being wielded by a bunch of tech bros, who feed it with all their own prejudices, unconscious bias and cognitive limitations?

– Who decides what the Large Language Models (LLMs) that power AI are trained on?

– How does AI get permission to create derived content from our own Intellectual Property? Even if our content is on the web, being “publicly available” is not the same as “in the public domain”.

– Who is responsible for what AI publishes, and are AI agents accountable for their actions? In the event of false, incorrect, misleading or inappropriate content created by AI, how do we get to clarify the record, or seek a right of reply?

– Why are AI tools adding more and more caveats? (“This is not financial advice, this is not to be relied on in a court of law, this is only based on information available as at a certain point in time, this is not a recommendation, etc.”) And is this only going to increase, as in the recent example of changes to Google’s AI-generated search results? (But really, do we need to be told that eating rocks or adding glue to pizza are bad ideas?)

– From my own experience, tools like ChatGPT return “deliberate” factual errors. Why? Is it to keep us on our toes (“Gotcha!”)? Is it to use our responses (or lack thereof) to train the model to be more accurate? Is it to underline the caveat emptor principle (“What, you relied on Otter to write your college essay? What were you thinking?”)? Or is it to counter plagiarism (“You could only have got that false information from our AI engine”)? If you think the latter is far-fetched, I refer you to the notion of “trap streets” in maps and directories.

– Should AI tools provide better attribution (sources and acknowledgments) in their results? Should they disclose the list of “ingredients” used (like food labelling)? Should they provide verifiable citations for their references? (It’s an idea that is gaining some attention.)

– Finally, the increased use of cloud-based services and crowd-sourced content (not just in AI tools) creates the potential for overreach in the end-user licensing agreements of services such as ChatGPT, Otter, Adobe Firefly, Gemini and Midjourney. Only recently, Adobe had to clarify the latest changes to its service agreement, in response to some social media criticism.

Next week: AI and the Human Factor

Digital Identity – are wallets the key?

A few months ago, I wrote about trust and digital identity – the issue of who “owns” our identity, and why the concept of “self-sovereign digital identity” can help resolve problems of data security and data privacy.

The topic was aired at a recent presentation by FinTech advisor David Birch (hosted at Novatti) to an audience of Australian FinTech, Blockchain and identity experts.

David’s main thesis is that digital wallets will sit at the centre of the metaverse – linking web3 with digital assets and their owners. Wallets will not only be the “key” to transacting with digital assets (tokens); proving “identity” will also confirm “ownership” (or “control”) of wallets and their holdings.

The audience felt that in Australia, we face several challenges to the adoption of digital identity (and by extension, digital wallets):

1. Lack of common technical standards and lack of interoperability

2. Poor experience of government services (the nightmare that is myGov…)

3. Private sector complacency and the protected incumbency of oligopolies

4. Absence of incentives and overwhelming inertia (i.e., why move ahead of any government mandate?)

The example was given of a local company that has built digital identity solutions for consumer applications – but apparently, can’t attract any interest from local banks.

A logical conclusion from the discussion is that we will maintain multiple digital identities (profiles) and numerous digital wallets (applications), for different purposes. I don’t see a problem with this, as long as individuals get to decide who can access their personal data – and where, when, for how long, and for what specific purposes.

Next week: Defunct apps and tech projects