No-code product development

Anyone familiar with product development should recognise the image below. It’s a schematic for a start-up idea I was working on several years ago – for an employee engagement, reward and recognition app. It was the result of a number of workshops with a digital agency covering problem statements, user scenarios, workflow solutions, personas, UX/UI design and back-end architecture frameworks.

At the time, the cost quoted to build the MVP was easily 5-6 figures – and even getting to that point required a load of work on storyboards, wireframes and clickable prototypes…

Now, I would expect the developers to use a combination of open-source and low-cost software applications to manage the middleware functions, spin up a basic cloud server to host the database and connect to external APIs, and commission a web designer to build a dedicated front-end. (I’m not a developer, programmer or coder, so apologies for any glaring errors in my assumptions…)

The growth in self-serve SaaS platforms, public APIs and low-cost hosting solutions (plus the plethora of design marketplaces) should mean that a developer can build an MVP for a tenth of the cost we were quoted.

Hence the interest in “low-code/no-code” product development, and the use of modular components or stacks to build a range of repetitive, automated and small-scale applications. (For a dev’s perspective check out Martin Slaney’s article, and for a list of useful resources see Ellen Merryweather’s post from earlier this year.)

There are obvious limitations to this approach: anything too complex, too custom, or which needs to scale quickly may break the model. Equally, stringing together a set of black-box, off-the-shelf solutions might not work if there are unforeseen incompatibilities or programming conflicts – especially if one component is upgraded and there are unknown inter-dependencies that impact the other links in the chain. This means the product development process will need to include a layer of code audits and test environments before deploying into production.
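To make the point concrete, here is a minimal sketch (component names and version numbers are entirely hypothetical) of the kind of pre-deployment check described above: before promoting a no-code chain to production, verify that every black-box component still matches the major version the integration was tested against.

```python
# Hypothetical pinned versions - the (major, minor) each component was
# tested against when the chain was last audited.
PINNED = {
    "forms_saas": (1, 2),
    "payments_api": (3, 0),
    "email_service": (2, 5),
}

def drifted_components(live_versions):
    """Return the components whose live major version has drifted from the
    pinned one - an empty list means the chain is safe to deploy."""
    drifted = []
    for name, (major, _minor) in PINNED.items():
        live = live_versions.get(name)
        if live is None or live[0] != major:
            drifted.append(name)
    return drifted

# One vendor has shipped a major upgrade underneath us:
live = {"forms_saas": (1, 3), "payments_api": (4, 1), "email_service": (2, 5)}
print(drifted_components(live))  # ['payments_api']
```

A check like this only catches version drift, of course – it is the test environment that has to catch the behavioural incompatibilities.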

I was reflecting on the benefits and challenges of hermetically sealed operating systems and software programs over the weekend. In trying to downgrade my operating system (so that I could run some legacy third-party applications that no longer work thanks to some recent systems and software “upgrades”), I encountered various challenges, and it took several attempts and a couple of workarounds. The biggest problem was the lack of anything to warn me in advance that, by making certain changes to the system settings or configuring the software a certain way, this app or that function wouldn’t work. Also, because each component (the operating system, the software program and the third-party applications) wants to defend its own turf within my device, they don’t always play nicely together when the end user wants to deploy them in a single environment.

App interoperability is something that continues to frustrate when it comes to so-called systems or software upgrades. It feels like there needs to be a specialist area of product development that can better identify, mitigate and resolve potential tech debt, as well as navigate the product development maintenance schedule in anticipation of future upgrades and their likely impact, or understand the opportunities for retrofitting and keeping legacy apps current. I see too many app developers abandoning their projects because it’s just too hard to reconfigure for the latest system changes.

Next week: Telstar!

Monash University Virtual Demo Day

Last week I was invited to participate in a Virtual Demo Day for students enrolled in the Monash University Boot Camp for the FinTech, Coding and UX/UI streams. The Demo Day was an opportunity for the students to present the results of their project course work and to get feedback from industry experts.

While not exactly the same as a start-up pitch night, each project presented a defined problem scenario, as well as the proposed technical and design solution – and in some cases, a possible commercial model, but this was not the primary focus. Although the format of the Demo Day did not enable external observers to see all of the dozen-plus projects, overall it was very encouraging to see a university offer this type of practical learning experience.

Skills-based and aimed at providing a pathway to a career in ICT, the Boot Camp programme results in a Certificate of Completion – but I hope that undergraduates have similar opportunities as part of their bachelor degree courses. The emphasis on ICT (Cybersecurity and Data Analytics form other streams) is partly in response to government support for relevant skills training, and partly to help meet industry requirements for qualified job candidates.

Industry demand for ICT roles is revealing a shortage of appropriate skills among job applicants, no doubt exacerbated by our closed international borders, and a downturn in overseas students and skilled migration. This shortage is having a direct impact on recruitment and hiring costs, as this recent Tweet by one of my friends starkly reveals: “As someone who is hiring about 130 people right now, I will say this: Salaries in tech in Australia are going up right now at a rate I’ve never seen.” So nice work if you can get it!

As for the Demo Day projects themselves, these embraced technology and topics across Blockchain, two-sided marketplaces, health, sustainability, music, facilities management, career development and social connectivity.

The Monash Boot Camp courses are presented in conjunction with Trilogy Education Services, a US-based training and education provider. From what I can see online, this provider divides opinion as to the quality and/or value for money that their programmes offer – there seems to be a fair number of advocates and detractors. I can’t comment on the course content or delivery, but in terms of engagement, my observation is that the students get good exposure to key tech stacks, learn some very practical skills, and they are encouraged to follow up with the industry participants. I hope all of the students manage to land the type of opportunities they are seeking as a result of completing their course.

Next week: Here We Go Again…

The Limits of Technology

As part of my home entertainment during lock-down, I have been enjoying a series of Web TV programmes called This Is Imminent hosted by Simon Waller, and whose broad theme asks “how are we learning to live with new technology?” – in short, the good, the bad and the ugly of AI, robotics, computers, productivity tools etc.

Niska robots are designed to serve ice cream… (image sourced from Weekend Notes)

Despite the challenges of Zoom overload, choked internet capacity, and constant screen-time, the lock-down has shown how reliant we are upon tech for communications, e-commerce, streaming services and working from home. Without them, many of us would not have been able to cope with the restrictions imposed by the pandemic.

The value of Simon’s interactive webinars is two-fold – as the audience, we get to hear from experts in their respective fields, and gain exposure to new ideas; and we have the opportunity to explore ways in which technology impacts our own lives and experience – and in a totally non-judgmental way. What’s particularly interesting is the non-binary nature of the discussion. It’s not “this tech good, that tech bad”, nor is it about taking absolute positions – it thrives in the margins and in the grey areas, where we are uncertain, unsure, or just undecided.

In parallel with these programmes, I have been reading a number of novels that discuss different aspects of AI. These books seem to be both enamoured with, and in awe of, the potential of AI – William Gibson’s “Agency”, Ian McEwan’s “Machines Like Me”, and Jeanette Winterson’s “Frankissstein” – although they take quite different approaches to the pros and cons of the subject and the technology itself. (When added to my recent reading list of Jonathan Coe’s “Middle England” and John Lanchester’s “The Wall”, you can see what fun and games I’m having during lock-down….)

What this viewing and reading suggests to me is that we quickly run into the limitations of any new technology. Either it never delivers what it promises, or we become bored with it. We over-invest and place too much hope in it, then take it for granted (or worse, come to resent it). What the above novelists identify is our inability to trust ourselves when confronted with the opportunity for human advancement, largely because the same leaps in technology also induce existential angst or challenge our very existence – not least because they are highly disruptive as well as innovative.

On the other hand, despite a general shift towards open source protocols and platforms, we still see age-old format wars whenever any new tech comes along. For example, this means most apps lack interoperability, tying us into rigid and vertically integrated ecosystems. The plethora of apps launched for mobile devices can mean premature obsolescence (built-in or otherwise), as developers can’t be bothered to maintain and upgrade them (or the app stores focus on the more popular products, and gradually weed out anything that doesn’t fit their distribution model or operating system). Worse, newer apps are not retrofitted to run on older platforms, or older software programs and content suffer digital decay and degradation. (Developers will also tell you about tech debt – the eventual higher costs of upgrading products that were built using “quick and cheap” short-term solutions, rather than taking a longer-term perspective.)

Consequently, new technology tends to over-engineer a solution, or create niche, hard-coded products (robots serving ice cream?). In the former, it can make existing tasks even harder; in the latter, it can create tech dead ends and generate waste. Rather than aiming for giant leaps forward within narrow applications, perhaps we need more modular and accretive solutions that are adaptable, interchangeable, easier to maintain, and cheaper to upgrade.

Next week: Distractions during Lock-down

Australia’s Blockchain Roadmap

The Australian Government recently published its National Blockchain Roadmap – less than 12 months after announcing this initiative. While it’s an admirable development (and generally, to be encouraged), it feels largely aspirational and tends towards the more theoretical rather than the practical or concrete.

First, it references the US Department of Homeland Security to define the use case for Blockchain. According to these criteria, if a project or application displays at least three of the following four requirements, then Blockchain technology may offer a suitable solution:

  • data redundancy
  • information transparency
  • data immutability
  • a consensus mechanism
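The DHS heuristic above is simple enough to express as a short decision rule. This is my own sketch of it (the function name and data structure are illustrative, not part of the DHS framework):

```python
# The four DHS criteria for assessing whether Blockchain may be a fit.
CRITERIA = (
    "data_redundancy",
    "information_transparency",
    "data_immutability",
    "consensus_mechanism",
)

def blockchain_may_fit(project):
    """project maps each criterion to True/False; per the DHS heuristic,
    Blockchain may suit a project meeting three or more of the four."""
    return sum(bool(project.get(c, False)) for c in CRITERIA) >= 3

print(blockchain_may_fit({
    "data_redundancy": True,
    "information_transparency": True,
    "data_immutability": True,
    "consensus_mechanism": False,
}))  # True - three of four criteria met
```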

In a recent podcast for The Crypto Conversation, Bram Cohen, the inventor of the BitTorrent peer-to-peer file sharing protocol, defined the primary use case for Blockchain as a “secure decentralized/distributed database”. On the one hand, he describes this as a “total oxymoron”; on the other, he acknowledges that Blockchain solves the twin problems of needing trusted third parties to verify transactions and preventing double-spend on the network. The solution lies in achieving consensus on the state of the database.
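The double-spend half of that problem can be illustrated with a toy ledger. This is my own drastic simplification (not Cohen's design, and nothing like a real consensus protocol): once every node agrees on a single ledger state, a coin that has already been spent cannot be spent again.

```python
class Ledger:
    """A toy shared ledger - consensus on this single state is what makes
    double-spend detection possible without a trusted third party."""

    def __init__(self):
        self.spent = set()  # coin ids the network agrees are already spent

    def apply(self, coin_id):
        """Apply a spend transaction; reject it if the coin was spent."""
        if coin_id in self.spent:
            return False  # double-spend attempt rejected
        self.spent.add(coin_id)
        return True

ledger = Ledger()
print(ledger.apply("coin-1"))  # True  - first spend accepted
print(ledger.apply("coin-1"))  # False - second spend of the same coin
```

The hard part, which this sketch waves away entirely, is getting thousands of mutually distrusting nodes to agree on the contents of `spent` in the first place.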

Second, the Roadmap speaks of adopting a “principles based but technology-neutral” approach when it comes to policy, regulation and standards. Experience tells us that striking a balance between encouraging innovation and regulating a new technology is never easy. Take the example of VOIP: at the time, this new technology (itself built on the newish technology of the internet) was threatened by incumbent telephone companies and existing communications legislation. If the monopolistic telcos had managed to get their way, maybe the Post Office would then have wanted to start charging us for sending e-mails?

With social media (another internet-enabled technology), we continue to see considerable tension as to how such platforms should be regulated in relation to news, broadcasting, publishing, political advertising, copyright, financial services and privacy. In the music and film industries, content owners have attempted to own and control the means of production, manufacture and distribution, not just the content – hence the format wars of the past in videotape, compact discs and digital file protocols. (A recurring theme within Blockchain commentary is the need for cross-chain interoperability.)

Third, the Roadmap mentions the Government's support for Standards Australia in leading the ISO’s Technical Committee 307 on Blockchain and DLT Standards. While such support is to be welcomed, the technology is outpacing both regulation and standards. TC 307 only published its first Technical Report, on Smart Contracts, in September 2019 – three years after its creation. In other areas, regulation is still trying to catch up with the technology that enables Initial Coin Offerings, Security Token Offerings and Decentralized Autonomous Organizations.

If the ICO phenomenon of 2016-18 demonstrated anything, it revealed that within traditional corporate and market structures, companies no longer have a monopoly on financial capital (issuance was largely subscribed via crowdfunding and informal syndication); human capital (ICO teams were largely self-forming, self-sufficient and self-directed); or networks and markets (decentralized, peer-to-peer and trustless became catch words of the ICO movement). Extend this to DAOs, and the very existence of, and need for, traditional boards and shareholders is called into question.

Fourth, the Roadmap makes reference to some existing government-related projects and initiatives in the area of Blockchain and cryptocurrencies. One is the Digital Transformation Agency’s “Trusted Digital Identity Framework”; another is AUSTRAC’s “Digital Currency Exchange” regulation and registration framework. With the former, a more universal commercial and government solution lies in self-sovereign identity – for example, if I have achieved a 100 point identity check with Bank A, then surely I should be able to “passport” that same ID verification to Bank B, without having to go through a whole new 100 point process? And with the latter, as far as I have been able to ascertain, AUSTRAC does not publish a list of those digital currency exchanges that have registered, and exchanges are not required to publish their registration number on their websites.
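The “passporting” idea can be sketched in a few lines. This is purely illustrative: a real self-sovereign identity scheme would use public-key signatures and verifiable credentials rather than a shared secret, and the names below are my own inventions. The point is simply that Bank B verifies Bank A's signed attestation instead of re-running the 100 point check.

```python
import hashlib
import hmac
import json

# Hypothetical signing key - in practice Bank A would sign with a private
# key and Bank B would verify against Bank A's published public key.
BANK_A_KEY = b"bank-a-signing-key"

def issue_attestation(customer_id):
    """Bank A attests that this customer passed its 100 point ID check."""
    claim = json.dumps({"customer": customer_id, "check": "100-point"})
    sig = hmac.new(BANK_A_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return claim, sig

def verify_attestation(claim, sig):
    """Bank B verifies the attestation instead of repeating the check."""
    expected = hmac.new(BANK_A_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)

claim, sig = issue_attestation("alice-42")
print(verify_attestation(claim, sig))        # True  - ID check "passported"
print(verify_attestation(claim, "bad-sig"))  # False - tampered credential
```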

Fifth, the need for relevant training is evident from the Roadmap. However, as we know from computer coding and software engineering courses, students often end up learning “yesterday’s language”, rather than acquiring flexible and adaptable coding skills and core building blocks in software development. It’s equally evident that many of today’s developers are increasingly self-taught, especially in Blockchain and related technologies – largely because it is a new and rapidly-evolving landscape.

Finally, the Roadmap has identified three “showcase” examples of where Blockchain can deliver significant outcomes. One is in agricultural supply chains (to track the provenance of wine exports), one is in education and training (to enable trusted credentialing), and one is in financial services (to streamline KYC checks). I think that while each of these is of interest, they are probably just scratching the surface of what is possible.

Next week: Brexit Blues (Part II)