Tools vs Solutions: When does our core offering need to change?

As regular readers of this blog will be aware, recent posts have focussed on digital – content, products, pricing etc.

I’ve also been immersing myself in the digital design process (next step: learn to code?) and earlier this month I attended a workshop by a leading digital design studio. While most of the session was devoted to their own design methodology (essentially user-centred design with some fancy footwork), it also revealed that, in developing tools to help customers undertake their own design projects, the studio has become a subscription software business. No doubt it will continue as a design consultancy, but clearly its core offering is changing.

This shift echoes a Harvard Business Review analysis of McKinsey Solutions from late 2013. It suggested that rather than providing an all-in-one solution (based on black-box consulting methodologies and processes), consulting firms are having to unbundle their offerings in order to remain relevant and move to more defensible positions in the value chain. In the case of McKinsey Solutions, embedding analytical tools at client sites is a cost-effective way of delivering services while gaining insights into customers’ needs, which in turn allows the firm to develop enhanced tools.

So it raises the question: do consultants need to re-think their offering – rather than being solutions providers, should they focus on being enablers? This may seem overly disruptive (and potentially disenfranchising) for the consulting industry; but in the long run it should mean clients come to rely on value-added solutions built from tools they know, understand and trust (and can use for themselves). It should also mean that clients will want to retain access to these tools as they evolve, because they will be more invested in their development and use.

 

Smart Designs: 5 Trends for Digital Products

There are 5 key themes emerging in new digital products* that are grounded in the analogue world. It seems designers and developers are having to find ways to embrace analogue once more, and integrate it into digital solutions. While not everything old is new again, there are some distinct echoes of the past in many of these new developments.

Eno Hyde - Someday World

#1. Revivalism

Sony is reviving magnetic tape as a data storage medium – prompting some pundits to suggest that cassettes might be making a comeback. (Not if the participants in this video have their way…) It’s interesting to note that tape storage is far more energy-efficient than traditional hard drive storage. And last week, Telstra announced the development of a major public Wi-Fi network – which sounds like the stuff of the future, but actually looks back more than 25 years to the launch of Telepoint services.

#2. Hybridisation

The combination of analogue and digital technologies** is not new (remember the Advanced Photo System and the Digital Compact Cassette from the 1990s?). But modern polymath Brian Eno and his latest musical collaborator Karl Hyde have just put out an iOS app that is designed to interact with the vinyl edition of their new album, “Someday World”. It’s not quite augmented reality, but the app uses that concept to project animated graphics to accompany the music when the user points the iPhone’s camera at the record label.*** This could just be the first example of making the vinyl record a digital artefact!

#3. Simulation

As someone who dabbles in iOS music apps (as well as beta-testing a few in my spare time), I have become used to replicating the experience of old-school analogue synthesizers and drum machines on my iPhone and iPad. This has now been taken a step further with the launch of the iVCS3, an iPad version of one of the first portable analogue synthesizers from the late 1960s (an instrument made famous by The Who and Brian Eno, among others). A notoriously difficult piece of hardware to operate, it is almost the antithesis of digital predictability, yet it makes perfect sense when simulated via the touch-screen interface of the iPad.

#4. Sensorial

Despite some concerns about smart phone biometric security tools, the use of biometrics in banking is a near certainty. Sensory-based smart phone applications and add-on devices in the areas of health (diagnostics), the environment (air quality monitoring) and even cooking (taste tests) will soon be commonplace.

#5. Interconnectivity

The Internet of Things is starting to get interesting (beyond the fridge that can do your grocery shopping), especially when combined with robotics (although this April Fools’ spoof from Sphero was probably a bit too real for comfort…). A couple of physical devices that could find extended use when hooked up to an internet connection are the Auug (featured in the new Apple ads) and the SwatchMate Cube (a winner in the 2013 Melbourne Design Awards). For example, the Auug could be used in remote-control or simulation applications, while the SwatchMate could be modified to analyse surface materials beyond their colour properties.

NOTES

* I’ve been re-reading “Grounded Innovation: Strategies for Creating Digital Products” by Lars Erik Holmquist, which has helped shape some of my thinking on this topic.

** I thought I may have invented a new word as a possible title for this blog – Digilogue – until I came across this book. (But I took heart from the fact that the author, futurist Anders Sorman-Nilsson, like me also holds an LL.B.)

*** If you install the app and point your iPhone camera at the picture below, it should also have the same result as scanning the record label itself:

WARPLP249-Label

 

F for Facsimile: What are ‘Digital Forgeries’?

Last week, I attended the 2014 Foxcroft Lecture, given by Nicholas Barker, entitled “Forgery of Printed Documents”. The lecture prompted the question, what would we consider to be a ‘digital forgery’?

Make Up

The lecture was an investigation into a practice that emerged in the 18th century, when reproductions (‘fac similes’ – Latin for ‘make alike’) of early printed texts were created either as honest replicas, or to enable missing pages from antiquarian books to be restored to ‘make up’ a complete work. In some cases, the original pages had been removed by the censors, for others the pages had been left out in error during the binding process, and mostly they had simply been lost through damage or age.

Other factors created the need for these facsimiles: the number of copies of a book that could be printed at a time was often limited by law (censorship again at work), or works were licensed to different publishers in different markets, but printed using the original plates to save time and money.

Despite the innocent origins of facsimiles, unscrupulous dealers and collectors found a way to exploit them for financial gain – and of course, there were also attempts to pass off completely bogus works as genuine texts.

Replication vs Authentication

Technology has not only made the mass reproduction of written texts so much easier, it has also changed the way physical documents are authenticated – for example, faxed and scanned copies of signed documents are sometimes deemed sufficient proof of their existence, as evidence of specific facts, or in support of a contractual agreement or commercial arrangement. But this was not always the case, and even today, some legal documents have to be executed in written, hard-copy form, signed in person by the parties and in some situations witnessed by an independent party. For certain transactions, a formal seal needs to be attached to the original document.

Authenticating digital documents and artefacts presents us with various challenges. Quite apart from the need to verify electronic copies of contracts and official documents, the ubiquity of e-mail (and social media) has made these channels a target for exploitation by hackers and others, making it increasingly difficult to place our trust in such forms of communication. As a result, we use encryption and other security devices to protect our data. But what about other digital content?

Let’s define ‘digital artefacts’ in this context as things like software, music, video, photography, books, databases, and digital certificates, signatures and keys. We know that it is much easier to fabricate something that is not what it purports to be (witness the use of photo-editing in the media and fashion industries), and there is a corresponding set of tools to help uncover these fabrications. Time stamping, digital watermarks, metadata and other devices can help us verify the authenticity and/or source of a digital asset.
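As a minimal sketch of how one such device works (using Python’s standard hashlib, with invented file contents purely for illustration), a cryptographic hash acts as a digital fingerprint: a faithful copy produces exactly the same digest as the original, while even a one-character alteration produces a completely different one:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a SHA-256 digest of the data - any change to the bytes changes the digest."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical file contents, standing in for a real digital artefact.
original = b"Someday World - master file contents"
copy = b"Someday World - master file contents"
tampered = b"Someday World - master file Contents"  # one character altered

assert fingerprint(original) == fingerprint(copy)      # a faithful copy verifies
assert fingerprint(original) != fingerprint(tampered)  # any alteration is detectable
```

Publishing the digest alongside the file (or embedding it in signed metadata) lets anyone check that what they received is what was released – though it says nothing about *who* released it; that is where digital signatures and certificates come in.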

Multiplication

In the case of fine art, the use of digital media (as standalone images or video, as part of an installation, or as a component in mixed media pieces) has meant that some artists have made only a single unique copy of their work, while others have created so-called ‘multiples’ – large-scale editions of their work. (The realm of ‘digital works’ and ‘digital prints’ produced by photographers and artists is worthy of a separate article.)

Making copies of existing digital works is relatively simple – the technology to reproduce and distribute digital artefacts on a widespread scale is built into practically every device linked to the Internet. Not all digital reproduction and file sharing is theft or piracy – in fact, through the wonders of social media ‘sharing’, we are actually encouraged to disseminate this content to our friends and followers.

The song doesn’t remain the same

Apart from the computer industry’s use of product keys to manage licences and restrict the use of unauthorised copies of its software, the music and film industries have probably done the most to tackle illegal copying since the introduction of the CD and DVD. At various times, the entertainment industries have deployed the following technologies:

  • copy-protection (to prevent copies being ripped and burned on computers)
  • encryption (discs and media files are ‘locked’ to a specific device or user account)
  • playback limits (mp3 files will become unplayable after a specific number of plays)
  • time expiry (content will be inaccessible beyond a specific date)

Most of these technologies have been abandoned, because they either hamper our use and enjoyment of the content or have proved easy to circumvent.

One technical issue to consider is ‘digital decay’ (*) – mostly, this relates to backing up and preserving digital archives, since we know that hard drives die, file formats become obsolete, and software upgrades don’t always retrofit to existing data. But I wonder whether each subsequent copy of a digital artefact introduces unintentional flaws, so that over time the copies come to bear little resemblance to the original?

In the days of analogue audio tape, second-, third- and fourth-generation copies were self-evident – namely, the audible tape hiss, wow and flutter caused by copying copies, by using machines with different motor speeds, and by minor fluctuations in power. Today, different file formats and processes like compression and conversion can render very different versions of the ‘same’ digital content – for example, most mp3 files are highly compressed (for playback on certain devices), while audiophiles prefer lossless formats such as FLAC. Although this is partly a question of taste, how do we know what the original should sound like? With a bit of effort, we can re-process an ‘original’ downloaded mp3 into our own unique ‘copy’ which may sound very different to the version put out by the record company (who probably mastered the commercial mp3 from studio recordings made with high-quality audio processing and much higher sampling rates).
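The same generational drift can be sketched in a few lines of pure Python. This is only an illustration, not a real codec: a crude quantise step stands in for lossy re-encoding, and a small gain change per generation stands in for the conversions different tools apply. The gap between each generation and a perfect copy grows as the copies are copied:

```python
import math

def quantise(samples, bits=8):
    """Snap each sample to a coarse, fixed grid - a stand-in for lossy re-encoding."""
    levels = 2 ** bits
    return [round(s * levels) / levels for s in samples]

def rms(values):
    """Root-mean-square size of a list of values."""
    return math.sqrt(sum(v * v for v in values) / len(values))

# A pure 440 Hz tone sampled at 8 kHz stands in for the 'original' recording.
original = [math.sin(2 * math.pi * 440 * n / 8000) for n in range(200)]

GAIN = 0.97  # slight level change per conversion, as different tools might apply
copy = original
errors = []
for generation in range(1, 6):
    copy = quantise([s * GAIN for s in copy])           # one lossy re-encode
    ideal = [s * GAIN ** generation for s in original]  # what a perfect copy would be
    errors.append(rms([a - b for a, b in zip(ideal, copy)]))

# Each generation re-quantises the previous generation's errors,
# so the drift from a perfect copy accumulates.
assert errors[-1] > errors[0]
```

The numbers are tiny for one copy, but they compound – a digital echo of the tape hiss that gave away a fourth-generation cassette.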

So, would the re-processed version be a forgery?

(*) Thanks to Richard Almond for his article on Digital Decay which I found very useful.


Amazon, Apple, Google: Are they the New Conglomerates?

Are Amazon, Apple and Google the new conglomerates? If so, should we be concerned that these leading digital businesses increasingly resemble ‘old school’ industrial behemoths?

The classic model of a conglomerate generally describes a holding company that either owns or has controlling stakes in a diversified range of operating businesses, often in unrelated industries.

Conglomerates largely went out of fashion in America and Europe in the 1980s and 1990s (following an era of acquisitions and asset-stripping in the 1960s and 1970s), leading to leveraged buy-outs, spin-offs, partial IPOs and the like, as owners and investors realised that the total value of the individual parts was greater than the amalgamated whole. Although some major cross-sectoral mergers and acquisitions did occur after this period (e.g., AOL and Time Warner, Vivendi and Universal), most M&A activity was confined to single-industry players in pursuit of market share, economies of scale and other business synergies.

Despite this trend, various types of conglomerates (grounded in the ‘traditional’ industrial model) still exist – including the chaebol of South Korea, Japan’s keiretsu, China’s mega-SOEs, the trading houses of Hong Kong, and the FMCG ‘houses of brands’ that fill our supermarket shelves. The UK-based Virgin Group and India’s Tata Group represent contemporary examples of ‘old’ conglomerates, as they operate across very separate and distinct businesses and industries.

Conglomerates are usually created by a need for vertical/horizontal integration or a basic desire to build diversified revenue streams. Some build on a core competence, then find an opportunity in a seemingly unrelated field – thus a company like General Electric, with deep expertise in power generation, storage and transmission, diversified into financial services as a way to help customers fund the purchase of its products.

Sometimes, conglomerates evolve as a result of financial necessity – Canada’s Thomson Corporation (now Thomson Reuters) once owned interests in North Sea oil and gas alongside its newspapers and media companies, but then divested most of these assets to focus on its publishing businesses across legal, scientific, financial, tax and accounting information.

For a long time, it also owned a vertically integrated travel business in the UK, comprising a charter airline, a package holiday company and a chain of high street travel agents.

As it was explained to me when I first worked for Thomson in the late 1980s, the rationale for this diversification was simply a question of cashflow: most of the information businesses were subscription-based, with revenues usually collected in the 4th quarter. Although summer package holidays generated a far lower margin than the information businesses, customers paid up front – normally in the 1st quarter, and up to 6 months in advance, creating more consistent cash flows across the business.

At times, conglomerates may need to diversify into new geographic or sectoral markets to avoid anti-trust measures if they come to dominate a particular territory or industry. However, as we have seen in recent years (Microsoft, EMI, Thomson Reuters), anti-trust measures have also been used to force divestments or corporate restructures across jurisdictions and markets.

Whether they have done so by design or by default, the case can be made that Amazon, Apple and Google have become the new conglomerates. Let’s take each in turn:

Amazon – began as an on-line retailer of hard-copy books, and has since moved into sales and distribution of digital content (books, films, music, games, software); a trading and sourcing platform for a wide range of consumer products; an electronics manufacturer (Kindle); cloud computing and data hosting services; and its own branded credit cards.

Apple – originally a manufacturer of personal computers and proprietary operating systems, now a vertically integrated digital content distribution business; a bricks and mortar retailer; a smart phone manufacturer; a key platform for the capture, creation and playback of audio-visual content (with a growing presence in broadcast television); a provider of cloud services; and now exploring opportunities in the automotive sector.

Google – what was once a late entrant to on-line search has probably become the closest of these three internet giants to a ‘true’ industrial conglomerate. In addition to its e-mail and social network offerings, Google has developed its own mobile operating system (Android) and web browser (Chrome), plus smart phones (Nexus) and laptops (Chromebook). It rivals both Apple (most notably in mobile phones and app distribution) and Amazon (principally in ebook distribution), and is making inroads into Microsoft’s dominance of productivity software. Add Google Cars and Google Goggles (not forgetting Google Maps, Google AdWords, the Google Books Library Project and the 2006 acquisition of YouTube), and Google is clearly on a path to being a diversified technology-based business, with integrated operations across digital content, entertainment, transportation, navigation, archiving and streaming…

Meanwhile, all three have been investing in robotics; and surely telecoms (network carriers), biometrics, renewable energy, education, health, banking and financial services can’t be that far behind.

The risks for these neo-conglomerates are that they will either lose focus, over-reach themselves, or destroy the core businesses that lie at the heart of their success. Worse, they could fall foul of anti-trust provisions if they continue to become vertically and horizontally integrated – a threat equalled only by international moves to call tech-based multinationals to account for their cross-border tax planning.

As with all empires, the fortunes of conglomerates tend to wax and wane, and while the three companies discussed here have remained close to their core businesses, it will be interesting to see how each of them ensures that they continue to add value while not stretching the boundaries of their capabilities.