Law and Technology – when AI meets Smart Contracts…

Among the various ‘X’-Tech start-up themes (e.g., FinTech, EdTech, MedTech, InsurTech), one of the really interesting areas is LegTech (aka LawTech) and its close cousin, RegTech. While it will probably be some time before we see a fully automated justice system, where cases are decided by AI and judgments are delivered by robots, there are signs that legal technology is finally coming into its own. Here’s a very personal perspective on law and technology:

Photo by Lonpicman via Wikimedia Commons

1. Why are lawyers often seen as technophobes or laggards, when in the 1980s and 1990s they were at the vanguard of new technology adoption?

In the 1970s, law firms invested in Telex and document exchange (remember DX?) to communicate and to share information peer-to-peer. Then came the first online legal research databases (Lexis and Westlaw) which later gave rise to “public access” platforms such as AustLII and its international counterparts.

Lawyers were also among the first professional service firms to invest in word processing (for managing and drafting precedents) and e-mail (for productivity). Digitization meant that huge print libraries of reference materials (statutes and case-law) could be reduced to a single CD-ROM. Law firms were early adopters of case, practice, document and knowledge management tools – e.g., virtual document discovery rooms, precedent banks, drafting tools.

2. But, conversely, why did the legal profession seem to adopt less-optimal technology?

The trouble with being an early adopter is that you don’t always make the right choices. For example, law firms in the 80s and 90s seemed to prefer Lotus Notes (not Outlook), Wang computers and WordStar (not IBM machines or Microsoft Word), and DOS-based interfaces (rather than GUIs).

Some of the first CD-ROM publications for lawyers were hampered by the need to render bound volumes as exact facsimiles of the printed texts (partly so lawyers and judges could refer to the same page/paragraph in open court). There was a missed opportunity to use the technology to its full potential.

3. On the plus side, legal technology has a significant role to play…

…in law creation (e.g., parliamentary drafting and statute consolidation), the administration of law (delivery of justice, courtroom evidence platforms, live transcripts, etc.), legal practice (practice management tools) and legal education (research, teaching, assessment, accreditation). Add to this decision-support systems that combine rules-based logic, precedent and machine learning, especially in alternative dispute resolution.

4. Where next?

In recent years, we have seen a growing number of “virtual” law firms that use low-cost operating models to deliver custom legal advice through a mix of freelance, part-time and remote lawyers who engage with their clients mainly online.

Blockchain solutions are being designed to register and track assets for the purposes of wills and trusts, linked to crypto-currency tokens and ID management for streamlining the transfer of title. Governments and local authorities are exploring the use of distributed ledger technology to manage land title registration, vehicle and driver registration, fishing permits and the notion of “digital citizenship”.
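The core idea behind these registry applications can be illustrated with a toy sketch. This is not how any government system actually works – the class, method names and data layout below are invented for illustration – but it shows the essential property a distributed ledger offers a title register: each record is hash-linked to its predecessor, so tampering with any historical transfer is detectable.

```python
import hashlib
import json


class TitleRegistry:
    """Toy append-only, hash-chained register of title transfers.

    Illustrative only: a real land-title ledger would add consensus among
    multiple nodes, identity verification, and legal recognition of the record.
    """

    GENESIS_HASH = "0" * 64

    def __init__(self):
        self.chain = []  # each entry links back to its predecessor's hash

    def record_transfer(self, asset_id, from_owner, to_owner):
        prev_hash = self.chain[-1]["hash"] if self.chain else self.GENESIS_HASH
        entry = {
            "asset_id": asset_id,
            "from": from_owner,
            "to": to_owner,
            "prev_hash": prev_hash,
        }
        # Hash the entry together with its predecessor's hash, so altering
        # any earlier record invalidates every hash that follows it.
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.chain.append(entry)
        return entry["hash"]

    def verify(self):
        """Recompute every hash; return True only if the chain is untampered."""
        prev_hash = self.GENESIS_HASH
        for entry in self.chain:
            body = {k: v for k, v in entry.items() if k != "hash"}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if body["prev_hash"] != prev_hash or recomputed != entry["hash"]:
                return False
            prev_hash = entry["hash"]
        return True
```

The point of the sketch is the `verify` step: once a transfer of title is recorded, rewriting it silently is computationally evident, which is precisely the property that makes a shared ledger attractive for registries.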

We are seeing the use of smart contracts powered by oracles on the Ethereum blockchain to run a range of decision-making, transactional, financial, and micro-payment applications. (Although as one of my colleagues likes to quip, “smart contracts are neither smart nor legal”.)
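To make the oracle pattern concrete, here is a minimal sketch (in Python, purely for illustration – a real Ethereum contract would be written in Solidity, and the names below are invented). The contract cannot observe the real world; it locks funds and trusts an oracle to attest that an off-chain event, such as a delivery, has occurred – which is exactly why the “neither smart nor legal” quip bites.

```python
class DeliveryEscrow:
    """Toy smart-contract escrow: funds are locked until a trusted
    'oracle' attests that an off-chain event (delivery) has occurred.

    Hypothetical example for illustration only.
    """

    def __init__(self, buyer, seller, amount, oracle):
        self.buyer = buyer
        self.seller = seller
        self.amount = amount
        self.oracle = oracle      # callable returning True once delivered
        self.state = "LOCKED"

    def settle(self):
        # The contract only 'knows' what the oracle reports.
        if self.state == "LOCKED" and self.oracle():
            self.state = "PAID"
            return (self.seller, self.amount)  # release funds to the seller
        return None
```

Note the design trade-off the sketch exposes: the contract’s logic is deterministic, but its outcome is only as trustworthy as its oracle – a dishonest or faulty feed settles the “contract” just as readily as a truthful one.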

Artificial Intelligence (AI) is being explored as a way to “test” legal cases before they come to trial, and knowledge management and collaboration tools will continue to lower the cost of legal advice. I doubt we will see lawyers totally disintermediated by robots, but their role will certainly change.

There is further opportunity to take some of the friction and costs out of the legal system to improve access to justice.

Finally, and this feels both exciting and scary, there is the notion of “crowd-sourcing policy”: some governments are already experimenting with hackathons to develop policy-making models, and even the policies themselves. But this does sound as if we would be moving closer and closer to government by mini-plebiscite, rather than by parliamentary democracy.

Next week: Digital currencies are the new portals

When robots say “Humans do not compute…”

There’s a memorable scene in John Carpenter’s 1970s sci-fi classic, “Dark Star”, where an astronaut tries to use Cartesian logic to defuse a nuclear bomb. The bomb is equipped with artificial intelligence and is programmed to detonate via a timer once its circuits have been activated. Due to a circuit malfunction, the bomb believes it has been triggered, even though it is still attached to the spaceship and cannot be mechanically released. Refuting the arguments against its existence, the bomb responds in kind and simply states: “I think, therefore I am.”

Dark Star’s Bomb 20: “I think, therefore I am…”


The notion of artificial intelligence both thrills us, and fills us with dread: on the one hand, AI can help us (by doing a lot of routine thinking and mundane processing); on the other, it can make us the subjects of its own ill-will (think of HAL 9000 in “2001: A Space Odyssey”, or “Robocop”, or “Terminator” or any similar dystopian sci-fi story).

The current trend for smarter data processing, fuelled by AI tools such as machine learning, semantic search, sentiment analysis and social graph models, is making a world of driverless cars, robo-advice, the Internet of Things and behaviour prediction a reality. But there are concerns that we will abdicate our decision-making (and ultimately, our individual moral responsibility) to computers; that more and more jobs will be lost to robots; and that we will end up dehumanized if we no longer have to think for ourselves. Worse still, if our human behaviours cease to make sense to the very computers we have programmed to learn how to think for us, then our demise is pre-determined.

The irony is that, if AI becomes as smart as we might imagine, we will impart to the robots a very human fallibility: namely, the tendency to over-analyse the data (rather than examine the actual evidence before us). As Brian Aldiss wrote in his 1960 short story, “The Sterile Millennia”, when robots get together:

“…they suffer from a trouble which sometimes afflicts human gatherings: a tendency to show off their logic at the expense of the object of the meeting.”

Long live logic, but longer still live common sense!

Next week: 101 #Startup Pitches – What have we learned?