The mainstream adoption of AI continues to reveal the precarious balance between the benefits and the pitfalls.
Yes, AI tools can reduce the time it takes to research information, to draft documents and to complete repetitive tasks.
But AI is not so good at navigating subtle nuances, interpreting specific context, or understanding satire or irony. In short, AI cannot “read the room” based on a few prompts and a collection of databases.
And then there is the issue of copyright licensing and other IP rights associated with the original content that large language models are trained on.
One of the biggest challenges to AI’s credibility is the frequent generation of “hallucinations” – false or misleading results that can populate even the most benign of search queries. I have commented previously on whether these errors are deliberate mistakes, an attempt at risk limitation (disclaimers), or a way of training AI tools on human users (“Spot the deliberate mistake!”). Or perhaps a get-out clause if we are stupid enough to rely on a dodgy AI summary!
With the proliferation of AI-generated results (“overviews”) in basic search queries, there is a tendency for AI tools to conflate or synthesize multiple sources and perspectives into a single “true” definition – often without authority or verified citations.
A recent example was a senior criminal barrister in Australia who submitted fake case citations and imaginary speeches in support of a client’s case. This lapse is notable for two reasons.
First, legal documents (statutes, law reports, secondary legislation, precedents, pleadings, contracts, witness statements, court transcripts, etc.) are highly structured and very specific as to their formal citations. (Having obtained an LLB degree, served as a paralegal for 5 years, and worked in legal publishing for more than 10 years, I am very aware of the risks of citing incorrectly or relying on an inappropriate decision in support of a legal argument!)
Second, the legal profession has traditionally been at the forefront in the adoption and implementation of new technology. Whether it was the early use of on-line searches for case reports, the creation of databases for managing document precedents, the use of practice and case management software, or the development of decision-trees to evaluate the potential success of client pleadings, lawyers have been at the vanguard of these innovations.
In short, as we continue to rely on AI tools, unless we apply due diligence to these applications and remain vigilant to their fallibility, we use them at our peril.
Several years ago, I blogged about the role of technology within the legal profession. One development I noted was the nascent use of AI to help test the merits of a case before it goes to trial, and to assess the likelihood of winning. Not only might this prevent potentially frivolous matters coming to trial, it would also reduce court time and legal costs.
More recently, there has been some caution (if not out-and-out scepticism) about the efficacy of using AI in support of legal research and case preparation. This current debate has been triggered by an academic paper from Stanford University that compared leading legal research tools (which claim to have been “enhanced” by AI) with ChatGPT. The results were sobering, with a staggering number of apparent “hallucinations” being generated, even by the specialist legal research tools. AI hallucinations are not unique to legal research tools, nor to general-purpose AI tools and the Large Language Models (LLMs) they are trained on, as Stanford has previously reported. While the academic paper is awaiting formal publication, there has been some to-and-fro between the research authors and at least one of the named legal tools. This latter rebuttal rightly points out that any AI tool (especially a legal research and professional practice platform) has to be fit for purpose, and trained on appropriate data.
Aside from the Stanford research, some lawyers have been found to have relied upon AI tools such as ChatGPT and Google Bard to draft their submissions, only to discover that the results have cited non-existent precedents and cases – including in at least one high-profile prosecution. The latest research suggests that not only do AI tools “imagine” fictitious case reports, they can also fail to spot “bad” law (e.g., cases that have been overturned, or laws that have been repealed), offer inappropriate advice, or provide inaccurate or incorrect legal interpretation.
What if AI hallucinations resulted in the generation of invidious content about a living person – which, in many circumstances, would be deemed libel or slander? If a series of AI prompts gave rise to libelous content, who would be held responsible? Could AI itself be sued for libel? (Of course, under common law, it is impossible to libel the dead, as only a living person can sue for libel.)
I found an interesting discussion of this topic here, which concludes that while AI tools such as ChatGPT may appear to have some degree of autonomy (depending on their programming and training), they certainly don’t have true agency and their output in itself cannot be regarded in the same way as other forms of speech or text when it comes to legal liabilities or protections. The article identified three groups of actors who might be deemed responsible for AI results: AI software developers (companies like OpenAI), content hosts (such as search engines), and publishers (authors, journalists, news networks). It concluded that of the three, publishers, authors and journalists face the most responsibility and accountability for their content, even if they claimed “AI said this was true”.
Interestingly, the above discussion referenced news from early 2023 that a mayor in Australia was planning to sue OpenAI (the owners of ChatGPT) for defamation unless they corrected the record about false claims made about him. Thankfully, OpenAI appear to have heeded the letter of concern, and the mayor has since dropped his case (or the false claim was simply over-written by a subsequent version of ChatGPT). However, the original Reuters link above, which I sourced for this blog, makes no mention of the subsequent discontinuation, either as a footnote or an update – which just goes to show how complex it is to correct the record, since the reference to his initial claim is still valid (it happened), even though it did not proceed (he chose not to pursue it). Even actual criminal convictions can be deemed “spent” after a given period of time, such that they no longer appear on an individual’s criminal record. By contrast, someone found not guilty of a crime (or, in the mayor’s case, falsely labelled with a conviction) cannot guarantee that references to the alleged events will be expunged from the internet, even with the evolution of the “right to be forgotten”.
Perhaps we’ll need to train AI tools to retrospectively correct or delete any false information about us; although, conversely, AI is accelerating the proliferation of fake content – benign, humorous or malicious – thus setting the scene for the next blog in this series.
The role of smart contracts in blockchain technology is creating an emerging area of jurisprudence which largely overlaps with computer programming. However, one of the first comments I heard about smart contracts when I started working in the blockchain and crypto industry was that they are “neither smart, nor legal”. What does this paradox mean in practice?
First, smart contracts are not “smart”, because they still largely rely on human coders. While self-replicating and self-executing software programs exist, a smart contract contains human-defined parameters or conditions that will trigger the performance of the contract terms once those conditions have been met. The simplest example might be coded as a type of “if this, then that” function. For example, I could create a smart contract so that every time the temperature drops below 15 degrees, the heating comes on in my house, provided that there is sufficient credit in the digital wallet connected to my utilities billing account.
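To make the “if this, then that” idea concrete, here is a minimal sketch in ordinary Python rather than real smart-contract code; the temperature threshold, heating cost and wallet balance are all assumed purely for illustration:

```python
# A minimal sketch of the "if this, then that" trigger described above.
# The temperature feed, heating cost and wallet balance are hypothetical
# placeholders, not a real smart-contract or utility-billing API.

HEATING_COST_PER_HOUR = 2.50   # assumed cost deducted from the connected wallet
TEMPERATURE_THRESHOLD = 15.0   # degrees, as in the example above

def run_heating_contract(current_temperature: float, wallet_balance: float):
    """Turn the heating on only if it is cold enough AND the wallet can cover the cost."""
    if current_temperature < TEMPERATURE_THRESHOLD and wallet_balance >= HEATING_COST_PER_HOUR:
        return True, wallet_balance - HEATING_COST_PER_HOUR   # heating on, cost deducted
    return False, wallet_balance                               # conditions not met, nothing happens

# Example: 12 degrees outside, with $10.00 credit in the connected wallet
heating_on, remaining_credit = run_heating_contract(12.0, 10.00)
print(heating_on, remaining_credit)   # True 7.5
```

The point of the sketch is simply that every condition has to be spelled out in advance by a human: the program does exactly what the coded parameters say, no more and no less.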
Second, smart contracts are not “legal”, unless they comprise the necessary elements that form a legally binding agreement: intent, offer, acceptance, consideration, capacity, certainty and legality. They must be capable of being enforced in the event that one party defaults; they must not be contrary to public policy; and the parties must not have been placed under any form of duress to enter into the contract. Furthermore, there must be an agreed governing law, especially if the parties are in different jurisdictions, and the parties must agree to submit to a legal venue capable of enforcing or adjudicating the contract in the event of a breach or dispute.
Some legal contracts still need to be in a prescribed form, or in hard copy with a wet signature. A few may need to be under seal or attract stamp duty. Most consumer contracts (and many commercial contracts) are governed by rules relating to unfair contract terms and unconscionable conduct. But assuming a smart contract is capable of being created, notarised and executed entirely on the blockchain, what other legal principles may need to be considered when it comes to capacity and enforcement?
We are all familiar with the process of clicking “Agree” buttons every time we sign up for a social media account, download software or subscribe to digital content. Let’s assume that even with a “free” social media account there is consideration (i.e., there’s something in it for the consumer in return for providing some personal details), and that both parties have the capacity (e.g., they are old enough) and the intent to enter into a contract. Even so, the agreement is usually no more than a non-transferable and non-exclusive license granted to the consumer. The license may be revoked at any time, and may even attract penalties in the event of a breach by the end user. There is rarely a transfer of title or ownership to the consumer (if anything, social media platforms effectively acquire the rights to the users’ content), and there is nothing to say that the license will continue in perpetuity. But think how many of these on-line agreements we enter into each day, every time we log into a service or run a piece of software. Soon, those “Agree” buttons could represent individual smart contracts.
When we interact with on-line content, we are generally dealing with a recognised brand or service provider, which represents a known legal entity (a company or corporation). In turn, that entity is capable of entering into a contract, and is also capable of suing/being sued. Legal entities still need to be directed by natural persons (humans) in the form of owners, directors, officers, employees, authorised agents and appointed representatives, who act and perform tasks on behalf of the entity. Where a service provider comprises a highly centralised entity, identifying the responsible party is relatively easy, even if it may require a detailed company search in the case of complex ownership structures and subsidiaries. So what would be the outcome if you entered into a contract with what you thought was an actual person or real company, but it turned out to be an autonomous bot or an instance of disembodied AI – who or what is the counter-party to be held liable in the event something goes awry?
Until DAOs (Decentralised Autonomous Organisations) are given formal legal recognition (including the ability to be sued), it is a grey area as to who may or may not be responsible for the actions of a DAO-based project, or which entity may be the counter-party to a smart contract. More importantly, who will be responsible for the consequences of the DAO’s actions, once the project is in the community and functioning according to its decentralised rules of self-governance? Some jurisdictions are already drafting laws that will recognise certain DAOs as formal legal entities, which could take the form of a limited liability partnership model or perhaps a particular type of special purpose vehicle. Establishing authority, responsibility and liability will focus on the DAO governance structure: who controls the consensus mechanism, and how do they exercise that control? Is voting to amend the DAO constitution based on proof of stake?
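To make the proof-of-stake question concrete, here is a minimal sketch (in plain Python, with token balances and a two-thirds approval threshold assumed purely for illustration) of how a stake-weighted vote can diverge from one-member-one-vote:

```python
# A minimal sketch contrasting stake-weighted (proof-of-stake style) voting with
# one-member-one-vote on a DAO proposal. The token balances and the two-thirds
# approval threshold are assumptions for illustration only.

votes = {                      # member -> (tokens staked, vote)
    "alice":   (700, "yes"),
    "bob":     (200, "no"),
    "charlie": (100, "no"),
}

def stake_weighted_approval(votes: dict, threshold: float = 2 / 3) -> bool:
    total_stake = sum(stake for stake, _ in votes.values())
    yes_stake = sum(stake for stake, choice in votes.values() if choice == "yes")
    return yes_stake / total_stake >= threshold

def one_member_one_vote(votes: dict, threshold: float = 2 / 3) -> bool:
    yes_count = sum(1 for _, choice in votes.values() if choice == "yes")
    return yes_count / len(votes) >= threshold

print(stake_weighted_approval(votes))  # True: the single largest staker carries the proposal
print(one_member_one_vote(votes))      # False: only 1 of 3 members voted yes
```

In other words, under a stake-weighted model whoever controls the largest stake effectively controls the constitution, which is precisely why the governance structure matters when assigning authority, responsibility and liability.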
Despite these emerging uncertainties, and the limitations inherent in smart contracts, it’s clear that these programs, where code is increasingly the law, will govern more and more areas of our lives. I see huge potential for smart contracts to be deployed in long-dated agreements such as life insurance policies, home mortgages, pension plans, trusts, wills and estates. These types of legal documents should be capable of evolving dynamically (and programmatically) as our personal circumstances, financial needs and living arrangements also change over time. Hopefully, these smart contracts will also bring greater certainty, clarity and efficiency in the drafting, performance, execution and modification of their terms and conditions.
Taking its cue from some of the economic effects of the current pandemic, the latest Startupbootcamp Melbourne FinTech virtual demo day adopted the theme of financial health and well-being. Reduced working hours and layoffs revealed that many people did not have enough savings to last 6 weeks, let alone 6 months; lock-down and furlough have not only put a strain on public finances, they have also revealed the need for better education on personal finance and wealth management. Meanwhile, increased regulation and compliance obligations (especially in the areas of data privacy, cyber security and KYC) are adding huge operational costs for companies and financial institutions. And despite the restrictions and disruptions of lock-down, the latest cohort of startups in the Melbourne FinTech bootcamp managed to deliver some engaging presentations.
Datacy allows people to collect, manage and sell their online data easily and transparently, and gives businesses instant access to high quality and bespoke consumer datasets. They stress that the data used in their application is legally and ethically sourced. Their process is also designed to eliminate gaps and risks inherent in many current solutions, which are often manual, fragmented and unethical. At its heart is a Chrome or Firefox browser extension. Individual consumers can generate passive income from data sales, based on user-defined permissions. Businesses can create target data sets using various parameters. Datacy charges companies to access the end-user data, and also takes a 15% commission on every transaction via the plugin – some of which is distributed to end-users, but it wasn’t clear how that works. For example, is it distributed in equal proportions to everyone, or is it weighted by the “value” (however defined or calculated) of an individual’s data?
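To illustrate the two alternatives raised above, here is a hypothetical back-of-the-envelope sketch; only the 15% commission rate comes from the pitch, while the transaction value, the user list and the per-user “value” scores are assumed:

```python
# A hypothetical sketch of two ways a data marketplace might share its commission
# with end-users. Only the 15% commission rate comes from the pitch; the transaction
# value, the user list and the per-user "value" scores are assumptions.

transaction_value = 1000.00
commission = 0.15 * transaction_value        # the platform's cut of the data sale
user_share_pool = 0.50 * commission          # assume half the commission flows back to users

users = {"ana": 3.0, "ben": 1.0, "cai": 1.0}  # hypothetical "value" score of each user's data

# Option 1: split the pool equally among contributing users
equal_split = {user: user_share_pool / len(users) for user in users}

# Option 2: split the pool in proportion to each user's data "value"
total_value = sum(users.values())
weighted_split = {user: user_share_pool * value / total_value for user, value in users.items()}

print(equal_split)     # {'ana': 25.0, 'ben': 25.0, 'cai': 25.0}
print(weighted_split)  # {'ana': 45.0, 'ben': 15.0, 'cai': 15.0}
```

Which of these (if either) reflects Datacy’s actual model was not clear from the pitch.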
Harpocrates Solutions provides simplified data privacy via a “compliance as a service” model. Seeing itself as part of the “Trust Economy”, Harpocrates is making privacy implementations easier. It achieves this by monitoring and observing daily regulatory updates, and capturing the relevant changes. It then uses AI to manage a central repository, and to create and maintain tailored rule sets.
Mark Labs helps asset managers and institutional investors integrate environmental and social considerations into their portfolios. With increased investor interest in sustainability, portfolio managers are adopting ESG criteria into their decision-making, and Mark Labs helps them in “optimising the impact” of their investments. There are currently an estimated $40 trillion of sustainable assets under management, but ESG portfolio management is data intensive, complex and still emerging both as an analytical skill and as a practical portfolio methodology. Mark Labs helps investors to curate, analyze and communicate data on their portfolio companies, drawing on multiple database sources, and aligning to UN Sustainable Development Goals. The founders estimate that there are $114 trillion of assets under management “at risk” if generational transfer and investor mandates shift towards more ESG criteria.
MassUp is a digital white label solution for the property and casualty (P&C) insurance industry, designed to sell small item insurance at the consumer point-of-sale (POS). Describing their platform as a “plug and sell” solution, the founders noted that 70% of portable items are not covered by insurance policies, and many homes and/or contents are either uninsured or under-insured. MassUp is intended to simplify the process (“easy, accessible, online”), and will be launching in Australia under the Sorgenfrey brand in Q2 2021. For example, a product known as “The Flat Insurance” will cover items in and out of your home for a single monthly premium. As MassUp appears to be a tech solution, rather than a policy issuer, underwriter or re-insurer, I couldn’t see how they could achieve competitive policy rates both at scale and with simplicity (especially in the claims process). Also, as we know, vendors love to “upsell” insurance on tech appliances, but many such policies have proven redundant given existing statutory consumer rights and product warranties. On the other hand, short-term insurance policies (e.g., when I’m traveling, or on holiday, or renting out my home on AirBnB) are increasingly of interest to some consumers.
Ontrack provides B2B white label digital retirement planning solutions for financial institutions, to help their customers in a more personalised way. There is a general consumer reluctance to pay for financial advice, yet retirement planning is deemed too complicated to tackle unaided. Taking an “holistic” approach, the founders claim to have developed a “best in class simulation engine” – founded on expected retirement spending priorities (rather than trying to predict the cost of living in 20 years’ time). Drawing on their industry experience, the founders stated that a key challenge for many financial planning providers is getting members comfortable with their service. I would also add that reducing complexity with cost-effective products is key – and financial education forms a big part of the solution.
In Australia, the past 10 years have seen a major exit from the financial planning and wealth management industry – both at the individual adviser level (higher professional qualification requirements, increased compliance costs, and the end of trailing sales commissions in favour of “fee for advice”), and at the institutional level (3 of the big 4 banks have essentially withdrawn from offering financial planning and wealth management services). At the same time, there have been a number of new players – including many non-bank or non-financial institution providers – offering so-called robo-advice and “advice at scale”, mainly designed to reduce costs. In addition, the statutory superannuation regime keeps being tweaked, so with the constant tax and other changes it is increasingly difficult to plan for the future. Superannuation (a key success story of the Keating government) is just one of the “pillars” of personal finance in retirement: the others are the Commonwealth government aged pension (means-tested), personal wealth management (e.g., investments outside of superannuation), and retirement housing (with the expectation of more people opting to remain in their own homes). I would also include earnings from part-time employment while in “retirement”, as people work longer into older age (whether from choice or necessity) – how that aligns with the aged pension and/or self-funded retirement is another part of the constantly shifting tax and social security regime.
This product describes itself as a customer data platform that powers stored value, and was pitched as a “safe harbour” solution (I’m not quite sure that’s what the founders meant in this context). According to the pitch, consumers gain a fair and equitable outcome (consumer discounts), while retailers get targeted audiences. The team have created a vertically integrated gift card platform (working with MasterCard, Apple Pay and GooglePay), and launched JamJar, a cashback solution.
Similar to Harpocrates (above), RegRadar is a regulatory screening platform that helps companies “to set routes and avoid crashes”. The tool monitors regulatory changes (initially in the financial, food and healthcare sectors) and uses a pro-active process to develop a regulatory screening strategy, backed by analysis and a decision-support tool.
Having worked in legal, regulatory and compliance publishing for many years myself, I appreciate the challenge companies face when trying to keep up with the latest regulations, especially where they may be subject to multiple regulatory bodies within and across multiple jurisdictions. However, improved technology, such as smart decision-support tools for building and maintaining rules-based business systems, has helped enormously. In addition, most legislation is now online, so it can be searched more easily and monitored via automated alerts. Plus, services such as Westlaw and Lexis-Nexis can also help companies track what is currently “good” or “bad” law by following court decisions, law reports and legislative updates.