Anyone familiar with product development should recognise the image below. It’s a schematic for a start-up idea I was working on several years ago – for an employee engagement, reward and recognition app. It was the result of a number of workshops with a digital agency covering problem statements, user scenarios, workflow solutions, personas, UX/UI design and back-end architecture frameworks.
At the time, the cost quoted to build the MVP was easily 5-6 figures – and even getting to that point would still have required a load of work on storyboards, wireframes and clickable prototypes…
Now, I would expect the developers to use something like a combination of open-source and low-cost software applications to manage the middleware functions, spin up a basic cloud server to host the database and connect to external APIs, and commission a web designer to build a dedicated front-end. (I’m not a developer, programmer or coder, so apologies for any glaring errors in my assumptions…)
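To make that hand-waving slightly more concrete, here is a minimal sketch of the kind of glue layer I have in mind: a tiny web service that reads from a hosted database and enriches the result with a call to an external API. It is not the actual build – every name, endpoint and table here is a hypothetical placeholder.

```python
# A minimal sketch (not the actual build) of a glue layer for the
# recognition app: read from a hosted database, call an external rewards API,
# and serve the combined result. All names and endpoints are hypothetical.
import sqlite3

import requests
from flask import Flask, jsonify

app = Flask(__name__)
DB_PATH = "engagement.db"                        # stand-in for the hosted database
REWARDS_API = "https://api.example.com/rewards"  # stand-in external API


@app.route("/employees/<int:employee_id>/recognition")
def recognition(employee_id: int):
    # Pull the employee's recognition points from the data store.
    with sqlite3.connect(DB_PATH) as conn:
        row = conn.execute(
            "SELECT name, points FROM employees WHERE id = ?", (employee_id,)
        ).fetchone()
    if row is None:
        return jsonify({"error": "employee not found"}), 404

    name, points = row
    # Ask the (hypothetical) rewards API what those points can redeem.
    resp = requests.get(REWARDS_API, params={"points": points}, timeout=5)
    rewards = resp.json() if resp.ok else []

    return jsonify({"name": name, "points": points, "rewards": rewards})


if __name__ == "__main__":
    app.run(port=8000)
```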
The growth in self-serve SaaS platforms, public APIs and low-cost hosting solutions (plus the plethora of design marketplaces) should mean that a developer can build an MVP for a tenth of the cost we were quoted.
Hence the interest in “low-code/no-code” product development, and the use of modular components or stacks to build a range of repetitive, automated and small-scale applications. (For a dev’s perspective, check out Martin Slaney’s article, and for a list of useful resources see Ellen Merryweather’s post from earlier this year.)
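The “stitching modular services together” idea often amounts to little more than the sketch below: poll one (hypothetical) SaaS endpoint for new events and forward them to a chat webhook – the sort of job a no-code automation tool would otherwise handle. The URLs and payload shapes are assumptions for illustration only.

```python
# A rough illustration of gluing two SaaS services together with a small
# script instead of a no-code automation tool. Endpoints and payload fields
# are invented for the example.
import time

import requests

EVENTS_URL = "https://api.example-saas.com/v1/recognitions?status=new"
CHAT_WEBHOOK = "https://hooks.example-chat.com/services/T000/B000/XXXX"


def forward_new_recognitions() -> None:
    events = requests.get(EVENTS_URL, timeout=10).json()
    for event in events:
        message = f"{event['giver']} recognised {event['receiver']}: {event['reason']}"
        requests.post(CHAT_WEBHOOK, json={"text": message}, timeout=10)


if __name__ == "__main__":
    while True:                 # naive polling loop; cron or a scheduler would do in practice
        forward_new_recognitions()
        time.sleep(60)
```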
There are obvious limitations to this approach: anything too complex, too custom, or that needs to scale quickly may break the model. Equally, stringing together a set of black-box, off-the-shelf solutions might not work if there are unforeseen incompatibilities or programming conflicts – especially if one component is upgraded and there are unknown inter-dependencies that impact the other links in the chain. That means the product development process will need to include a layer of code audits and test environments before anything is deployed into production.
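What that “layer of audits and tests” might look like in its simplest form: a smoke test that checks each third-party link in the chain still answers as expected after an upgrade, and blocks the deploy if not. Again, the endpoints here are hypothetical.

```python
# A minimal smoke-test sketch for a stack of stitched-together services:
# check every external dependency still responds before promoting a build.
# The endpoints listed are placeholders, not a real configuration.
import sys

import requests

CHECKS = {
    "rewards API": "https://api.example.com/rewards?points=0",
    "events feed": "https://api.example-saas.com/v1/recognitions?status=new",
}


def run_smoke_tests() -> bool:
    all_ok = True
    for name, url in CHECKS.items():
        try:
            resp = requests.get(url, timeout=5)
            print(f"{name}: HTTP {resp.status_code}")
            all_ok = all_ok and resp.ok
        except requests.RequestException as exc:
            print(f"{name}: FAILED ({exc})")
            all_ok = False
    return all_ok


if __name__ == "__main__":
    # A non-zero exit code stops a CI pipeline from promoting the build.
    sys.exit(0 if run_smoke_tests() else 1)
```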
I was reflecting on the benefits and challenges of hermetically sealed operating systems and software programs over the weekend. In trying to downgrade my operating system (so that I could run some legacy third-party applications that no longer work thanks to some recent systems and software “upgrades”), I encountered various challenges, and it took several attempts and a couple of workarounds. The biggest problem was the lack of anything to warn me in advance that making certain changes to the system settings, or configuring the software a certain way, would stop this app or that function from working. Also, because each component (the operating system, the software program and the third-party applications) wants to defend its own turf within my device, they don’t always play nicely together in the single environment where the end user wants to deploy them.
App interoperability is something that continues to frustrate when it comes to so-called systems or software upgrades. It feels like there needs to be a specialist area of product development that can better identify, mitigate and resolve potential tech debt; navigate the maintenance schedule in anticipation of future upgrades and their likely impact; and spot the opportunities for retrofitting and keeping legacy apps current. I see too many app developers abandoning their projects because it’s just too hard to reconfigure for the latest system changes.
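One small, concrete piece of what that discipline might involve is routinely flagging outdated dependencies so the impact of an upgrade can be planned rather than discovered in production. The sketch below shells out to pip’s own “outdated” report; the major-version policy is invented purely for illustration.

```python
# Flag outdated Python dependencies and highlight major-version jumps,
# which are the usual signal of breaking changes. The review policy here
# is an illustrative assumption, not a standard.
import json
import subprocess


def outdated_packages() -> list[dict]:
    raw = subprocess.run(
        ["pip", "list", "--outdated", "--format=json"],
        capture_output=True, text=True, check=True,
    ).stdout
    return json.loads(raw)


if __name__ == "__main__":
    for pkg in outdated_packages():
        current_major = pkg["version"].split(".")[0]
        latest_major = pkg["latest_version"].split(".")[0]
        flag = "REVIEW" if current_major != latest_major else "minor"
        print(f"{pkg['name']}: {pkg['version']} -> {pkg['latest_version']} [{flag}]")
```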
Next week: Telstar!