Smart Contracts… or Dumb Software

The role of smart contracts in blockchain technology is creating an emerging area of jurisprudence which largely overlaps with computer programming. However, one of the first comments I heard about smart contracts when I started working in the blockchain and crypto industry was that they are “neither smart, nor legal”. What does this paradox mean in practice?

First, smart contracts are not “smart”, because they still largely rely on human coders. While self-replicating and self-executing software programs exist, a smart contract contains human-defined parameters or conditions that will trigger the performance of the contract terms once those conditions have been met. The simplest example might be coded as a type of “if this, then that” function. For example, I could create a smart contract so that every time the temperature drops below 15 degrees, the heating comes on in my house, provided that there is sufficient credit in the digital wallet connected to my utilities billing account.
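The thermostat example can be sketched as a minimal “if this, then that” rule in Python. This is purely illustrative: the names (`should_activate_heating`, `wallet_balance`) and the cost figure are assumptions, not a real smart-contract API.

```python
# Minimal "if this, then that" sketch of the thermostat smart contract.
# All names and values are illustrative assumptions, not a real contract API.

THRESHOLD_CELSIUS = 15.0
HEATING_COST = 2.0  # hypothetical cost per activation, in wallet credits


def should_activate_heating(temperature: float, wallet_balance: float) -> bool:
    """Trigger the heating only if the temperature condition is met
    AND there is sufficient credit in the connected wallet."""
    return temperature < THRESHOLD_CELSIUS and wallet_balance >= HEATING_COST


print(should_activate_heating(12.0, 10.0))  # True: cold enough, wallet has credit
print(should_activate_heating(18.0, 10.0))  # False: warm enough, no trigger
print(should_activate_heating(12.0, 1.0))   # False: insufficient credit
```

The point of the sketch is that both human-defined conditions must hold before the contract performs – the code does nothing “smart” beyond evaluating the rules its author gave it.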

Second, smart contracts are not “legal”, unless they comprise the necessary elements that form a legally binding agreement: intent, offer, acceptance, consideration, capacity, certainty and legality. They must be capable of being enforced in the event that one party defaults, but they must not be contrary to public policy, and parties must not have been placed under any form of duress to enter into a contract. Furthermore, there must be an agreed governing law, especially if the parties are in different jurisdictions, and the parties must agree to be subject to a legal venue capable of enforcing or adjudicating the contract in the event of a breach or dispute.

Some legal contracts still need to be in a prescribed form, or in hard copy with a wet signature. A few may need to be under seal or attract stamp duty. Most consumer contracts (and many commercial contracts) are governed by rules relating to unfair contract terms and unconscionable conduct. But assuming a smart contract is capable of being created, notarised and executed entirely on the blockchain, what other legal principles may need to be considered when it comes to capacity and enforcement?

We are all familiar with the process of clicking “Agree” buttons every time we sign up for a social media account, download software or subscribe to digital content. Even if we assume that a “free” social media account involves consideration (i.e., there’s something in it for the consumer in return for providing some personal details), and that both parties have the capacity (e.g., they are old enough) and the intent to enter into a contract, the agreement is usually no more than a non-transferable and non-exclusive license granted to the consumer. The license may be revoked at any time, and may even attract penalties in the event of a breach by the end user. There is rarely a transfer of title or ownership to the consumer (if anything, social media platforms effectively acquire the rights to the users’ content), and there is nothing to say that the license will continue into perpetuity. But think how many of these on-line agreements we enter into each day, every time we log into a service or run a piece of software. Soon, those “Agree” buttons could represent individual smart contracts.

When we interact with on-line content, we are generally dealing with a recognised brand or service provider, which represents a known legal entity (a company or corporation). In turn, that entity is capable of entering into a contract, and is also capable of suing/being sued. Legal entities still need to be directed by natural persons (humans) in the form of owners, directors, officers, employees, authorised agents and appointed representatives, who act and perform tasks on behalf of the entity. Where a service provider comprises a highly centralised entity, identifying the responsible party is relatively easy, even if it may require a detailed company search in the case of complex ownership structures and subsidiaries. So what would be the outcome if you entered into a contract with what you thought was an actual person or real company, but it turned out to be an autonomous bot or an instance of disembodied AI – who or what is the counter-party to be held liable in the event something goes awry?

Until DAOs (Decentralised Autonomous Organisations) are given formal legal recognition (including the ability to be sued), it is a grey area as to who may or may not be responsible for the actions of a DAO-based project, or which entity may be the counter-party to a smart contract. More importantly, who will be responsible for the consequences of the DAO’s actions, once the project is in the community and functioning according to its decentralised rules of self-governance? Some jurisdictions are already drafting laws that will recognise certain DAOs as formal legal entities, which could take the form of a limited liability partnership model or perhaps a particular type of special purpose vehicle. Establishing authority, responsibility and liability will focus on the DAO governance structure: who controls the consensus mechanism, and how do they exercise that control? Is voting to amend the DAO constitution based on proof of stake?
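A proof-of-stake constitutional vote of the kind mentioned above can be illustrated with a toy tally in Python. The rules here are assumptions for illustration only – one token equals one vote, and a proposal passes at a two-thirds supermajority of the stake voted – real DAO constitutions vary widely.

```python
# Toy stake-weighted (proof-of-stake) vote tally.
# Assumed rules, for illustration only: 1 token = 1 vote, and a proposal
# passes at a two-thirds supermajority of the stake that voted.

def tally(votes: dict[str, tuple[float, bool]], threshold: float = 2 / 3) -> bool:
    """votes maps a member address to (staked_tokens, approves?)."""
    total_stake = sum(stake for stake, _ in votes.values())
    yes_stake = sum(stake for stake, approves in votes.values() if approves)
    return total_stake > 0 and yes_stake / total_stake >= threshold


votes = {
    "0xAlice": (600.0, True),   # large holder dominates the outcome
    "0xBob":   (250.0, True),
    "0xCarol": (150.0, False),  # dissenting minority stake
}
print(tally(votes))  # True: 850/1000 = 85% of stake approves
```

Even this toy version surfaces the governance question in the paragraph above: whoever controls the most stake effectively controls the constitution, which is exactly where questions of authority and liability will focus.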

Despite these emerging uncertainties, and the limitations inherent in smart contracts, it’s clear that these programs, where code is increasingly the law, will govern more and more areas of our lives. I see huge potential for smart contracts to be deployed in long-dated agreements such as life insurance policies, home mortgages, pension plans, trusts, wills and estates. These types of legal documents should be capable of evolving dynamically (and programmatically) as our personal circumstances, financial needs and living arrangements also change over time. Hopefully, these smart contracts will also bring greater certainty, clarity and efficiency in the drafting, performance, execution and modification of their terms and conditions.

Next week: Free speech up for sale


No-code product development

Anyone familiar with product development should recognise the image below. It’s a schematic for a start-up idea I was working on several years ago – for an employee engagement, reward and recognition app. It was the result of a number of workshops with a digital agency covering problem statements, user scenarios, workflow solutions, personas, UX/UI design and back-end architecture frameworks.

At the time, the cost quoted to build the MVP was easily 5-6 figures – and even getting to that point still required a load of work on storyboards, wireframes and clickable prototypes…

Now, I would expect the developers to use something like a combination of open-source and low-cost software applications to manage the middleware functions, spin up a basic cloud server to host the database and connect to external APIs, and commission a web designer to build a dedicated front-end. (I’m not a developer, programmer or coder, so apologies for any glaring errors in my assumptions…)

The growth in self-serve SaaS platforms, public APIs and low-cost hosting solutions (plus the plethora of design marketplaces) should mean that a developer can build an MVP for a tenth of the cost we were quoted.

Hence the interest in “low-code/no-code” product development, and the use of modular components or stacks to build a range of repetitive, automated and small-scale applications. (For a dev’s perspective check out Martin Slaney’s article, and for a list of useful resources see Ellen Merryweather’s post from earlier this year.)

There are obvious limitations to this approach: anything too complex, too custom, or which needs to scale quickly may break the model. Equally, stringing together a set of black boxes/off-the-shelf solutions might not work, if there are unforeseen incompatibilities or programming conflicts – especially if one component is upgraded, and there are unknown inter-dependencies that impact the other links in the chain. Which means the product development process will need to ensure a layer of code audits and test environments before deploying into production.
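That layer of code audits and test environments can start as something as simple as an automated smoke test that pins the “contract” between the off-the-shelf components, so that an upstream upgrade fails loudly before reaching production. A minimal sketch, where `fetch_orders()` is a hypothetical wrapper around one of the third-party components (stubbed here, since no real API is named):

```python
# Minimal smoke test for a chain of off-the-shelf components.
# fetch_orders() is a hypothetical wrapper around a third-party service,
# stubbed here for illustration. The test pins the response *shape*, so an
# upstream upgrade that changes the schema fails before deployment.

def fetch_orders():
    # Stub standing in for a live API call to the external component.
    return [{"id": 1, "total": 19.99}, {"id": 2, "total": 5.00}]


def test_orders_contract():
    orders = fetch_orders()
    assert isinstance(orders, list), "expected a list of orders"
    for order in orders:
        assert {"id", "total"} <= order.keys(), "upstream schema changed"
        assert isinstance(order["total"], (int, float)), "total must be numeric"


test_orders_contract()
print("smoke test passed")
```

Running checks like this against each black-box dependency after every component upgrade is one cheap way to catch the unforeseen incompatibilities described above.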

Over the weekend I was reflecting on the benefits and challenges of hermetically sealed operating systems and software programs. In trying to downgrade my operating system (so that I could run some legacy third-party applications that no longer work thanks to recent systems and software “upgrades”), I encountered various obstacles, and it took several attempts and a couple of workarounds. The biggest problem was that nothing warned me in advance that making certain changes to the system settings, or configuring the software a certain way, would stop this app or that function from working. And because each component (the operating system, the software program and the third-party applications) wants to defend its own turf within my device, they don’t always play nicely together in the single environment where the end user wants to deploy them.

App interoperability is something that continues to frustrate when it comes to so-called systems or software upgrades. It feels like there needs to be a specialist area of product development that can better identify, mitigate and resolve potential tech debt, as well as navigate the product development maintenance schedule in anticipation of future upgrades and their likely impact, or understand the opportunities for retrofitting and keeping legacy apps current. I see too many app developers abandoning their projects because it’s just too hard to reconfigure for the latest system changes.

Next week: Telstar!


Monash University Virtual Demo Day

Last week I was invited to participate in a Virtual Demo Day for students enrolled in the Monash University Boot Camp, for the FinTech, Coding and UX/UI streams. The Demo Day was an opportunity for the students to present the results of their project course work and to get feedback from industry experts.

While not exactly the same as a start-up pitch night, each project presented a defined problem scenario, as well as the proposed technical and design solution – and in some cases, a possible commercial model, but this was not the primary focus. Although the format of the Demo Day did not enable external observers to see all of the dozen-plus projects, overall it was very encouraging to see a university offer this type of practical learning experience.

Skills-based and aimed at providing a pathway to a career in ICT, the Boot Camp programme results in a Certificate of Completion – but I hope that undergraduates have similar opportunities as part of their bachelor degree courses. The emphasis on ICT (Cybersecurity and Data Analytics form other streams) is partly in response to government support for relevant skills training, and partly to help meet industry requirements for qualified job candidates.

Industry demand for ICT roles is revealing a shortage of appropriate skills among job applicants, no doubt exacerbated by our closed international borders, and a downturn in overseas students and skilled migration. This shortage is having a direct impact on recruitment and hiring costs, as this recent Tweet by one of my friends starkly reveals: “As someone who is hiring about 130 people right now, I will say this: Salaries in tech in Australia are going up right now at a rate I’ve never seen.” So nice work if you can get it!

As for the Demo Day projects themselves, these embraced technology and topics across Blockchain, two-sided marketplaces, health, sustainability, music, facilities management, career development and social connectivity.

The Monash Boot Camp courses are presented in conjunction with Trilogy Education Services, a US-based training and education provider. From what I can see online, this provider divides opinion as to the quality and/or value for money that their programmes offer – there seems to be a fair number of advocates and detractors. I can’t comment on the course content or delivery, but in terms of engagement, my observation is that the students get good exposure to key tech stacks, learn some very practical skills, and they are encouraged to follow up with the industry participants. I hope all of the students manage to land the type of opportunities they are seeking as a result of completing their course.

Next week: Here We Go Again…

The Limits of Technology

As part of my home entertainment during lock-down, I have been enjoying a series of Web TV programmes called This Is Imminent, hosted by Simon Waller, whose broad theme asks “how are we learning to live with new technology?” – in short, the good, the bad and the ugly of AI, robotics, computers, productivity tools etc.

Niska robots are designed to serve ice cream… (image sourced from Weekend Notes)

Despite the challenges of Zoom overload, choked internet capacity, and constant screen-time, the lock-down has shown how reliant we are upon tech for communications, e-commerce, streaming services and working from home. Without them, many of us would not have been able to cope with the restrictions imposed by the pandemic.

The value of Simon’s interactive webinars is two-fold – as the audience, we get to hear from experts in their respective fields, and gain exposure to new ideas; and we have the opportunity to explore ways in which technology impacts our own lives and experience – and in a totally non-judgmental way. What’s particularly interesting is the non-binary nature of the discussion. It’s not “this tech good, that tech bad”, nor is it about taking absolute positions – it thrives in the margins and in the grey areas, where we are uncertain, unsure, or just undecided.

In parallel with these programmes, I have been reading a number of novels that discuss different aspects of AI. These books seem to be both enamoured with, and in awe of, the potential of AI – William Gibson’s “Agency”, Ian McEwan’s “Machines Like Me”, and Jeanette Winterson’s “Frankissstein” – although they take quite different approaches to the pros and cons of the subject and the technology itself. (When added to my recent reading list of Jonathan Coe’s “Middle England” and John Lanchester’s “The Wall”, you can see what fun and games I’m having during lock-down….)

What this viewing and reading suggests to me is that we quickly run into the limitations of any new technology. Either it never delivers what it promises, or we become bored with it. We over-invest and place too much hope in it, then take it for granted (or worse, come to resent it). What the above novelists identify is our inability to trust ourselves when confronted with the opportunity for human advancement, largely because the same leaps in technology also induce existential angst or challenge our very existence – not least because they are highly disruptive as well as innovative.

On the other hand, despite a general shift towards open source protocols and platforms, we still see age-old format wars whenever any new tech comes along. For example, this means most apps lack interoperability, tying us into rigid and vertically integrated ecosystems. The plethora of apps launched for mobile devices can mean premature obsolescence (built-in or otherwise), as developers can’t be bothered to maintain and upgrade them (or the app stores focus on the more popular products, and gradually weed out anything that doesn’t fit their distribution model or operating system). Worse, newer apps are not retrofitted to run on older platforms, or older software programs and content suffer digital decay and degradation. (Developers will also tell you about tech debt – the eventual higher costs of upgrading products that were built using “quick and cheap” short-term solutions, rather than taking a longer-term perspective.)

Consequently, new technology tends to over-engineer a solution, or create niche, hard-coded products (robots serving ice cream?). In the former, it can make existing tasks even harder; in the latter, it can create tech dead ends and generate waste. Rather than aiming for giant leaps forward within narrow applications, perhaps we need more modular and accretive solutions that are adaptable, interchangeable, easier to maintain, and cheaper to upgrade.

Next week: Distractions during Lock-down