F for Facsimile: What are ‘Digital Forgeries’?

Last week, I attended the 2014 Foxcroft Lecture, given by Nicholas Barker, entitled “Forgery of Printed Documents”. The lecture prompted the question, what would we consider to be a ‘digital forgery’?

Make Up

The lecture was an investigation into a practice that emerged in the 18th century, when reproductions (‘fac similes’ – Latin for ‘make alike’) of early printed texts were created either as honest replicas, or to enable missing pages from antiquarian books to be restored to ‘make up’ a complete work. In some cases, the original pages had been removed by censors; in others, the pages had been left out in error during the binding process; but mostly they had simply been lost through damage or age.

Other factors created the need for these facsimiles: the number of copies of a book that could be printed at a time was often limited by law (censorship again at work), or works were licensed to different publishers in different markets, but printed using the original plates to save time and money.

Despite the innocent origins of facsimiles, unscrupulous dealers and collectors found a way to exploit them for financial gain – and of course, there were also attempts to pass off completely bogus works as genuine texts.

Replication vs Authentication

Technology has not only made the mass reproduction of written texts so much easier, it has also changed the way physical documents are authenticated – for example, faxed and scanned copies of signed documents are sometimes deemed sufficient proof of their existence, as evidence of specific facts, or in support of a contractual agreement or commercial arrangement. But this was not always the case, and even today, some legal documents have to be executed in written, hard-copy form, signed in person by the parties and in some situations witnessed by an independent party. For certain transactions, a formal seal needs to be attached to the original document.

Authenticating digital documents and artifacts presents us with various challenges. Quite apart from the need to verify electronic copies of contracts and official documents, the ubiquity of e-mail (and social media) has made these channels a target for exploitation by hackers and others, making it increasingly difficult to place our trust in these forms of communication. As a result, we use encryption and other security devices to protect our data. But what about other digital content?

Let’s define ‘digital artifacts’ in this context as things like software; music; video; photography; books; databases; or digital certificates, signatures and keys. We know that it is much easier to fabricate something that is not what it purports to be (witness the use of photo-editing in the media and fashion industries), and there is a corresponding set of tools to help uncover these fabrications. Time stamping, digital watermarks, metadata and other devices can help us to verify the authenticity and/or source of a digital asset.
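To give one concrete (and much simplified) example of such a device: a cryptographic hash acts like a fingerprint, so an artifact can be checked against a digest published by a trusted source. A minimal sketch in Python, where the artifact and digest are invented for illustration:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """SHA-256 digest of an artifact's bytes - a crude 'seal' for digital content."""
    return hashlib.sha256(data).hexdigest()

def is_authentic(data: bytes, published_digest: str) -> bool:
    """Compare an artifact against a digest published by a trusted source."""
    return fingerprint(data) == published_digest

original = b"A digital artifact: software, music, video, a book..."
digest = fingerprint(original)

print(is_authentic(original, digest))            # True: an exact copy verifies
tampered = original.replace(b"music", b"Music")
print(is_authentic(tampered, digest))            # False: a single changed byte fails
```

Of course, this only pushes the problem back a step – the digest itself has to come from somewhere we trust, which is where digital signatures and certificates enter the picture.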

Multiplication

In the case of fine art, the use of digital media (as standalone images or video, as part of an installation, or as a component in mixed media pieces) has meant that some artists have made only a single unique copy of their work, while others have created so-called ‘multiples’ – large-scale editions of their work. (The realm of ‘digital works’ and ‘digital prints’ produced by photographers and artists is worthy of a separate article.)

Making copies of existing digital works is relatively simple – the technology to reproduce and distribute digital artifacts on a widespread scale is built into practically every device linked to the Internet. Not all digital reproduction and file sharing is theft or piracy – in fact, through the wonders of social media ‘sharing’, we are actually encouraged to disseminate this content to our friends and followers.

The song doesn’t remain the same

Apart from the computer industry’s use of product keys to manage and restrict the distribution of unlicensed copies of their software, the music and film industries have probably done the most to tackle illegal copying since the introduction of the CD/DVD. At various times, the entertainment industries have deployed the following technologies:

  • copy-protection (to prevent copies being ripped and burned on computers)
  • encryption (discs and media files are ‘locked’ to a specific device or user account)
  • playback limits (mp3 files will become unplayable after a specific number of plays)
  • time expiry (content will be inaccessible beyond a specific date)
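By way of illustration, the playback-limit and time-expiry schemes in the list above boil down to a simple licence check before each play. A hypothetical sketch – the licence record and field names are invented here, not taken from any real DRM system:

```python
from datetime import datetime, timezone

# Hypothetical licence record: content tied to an expiry date and a play count.
LICENCE = {
    "track": "example.mp3",
    "expires": datetime(2014, 12, 31, tzinfo=timezone.utc),
    "plays_remaining": 3,
}

def can_play(licence, now=None):
    """Return True only if the licence has not expired and plays remain."""
    now = now or datetime.now(timezone.utc)
    return now <= licence["expires"] and licence["plays_remaining"] > 0

def register_play(licence):
    """Decrement the play counter after each playback."""
    if licence["plays_remaining"] > 0:
        licence["plays_remaining"] -= 1
```

The weakness is obvious even from the sketch: the check runs on the user’s own device, so anyone who controls the player can simply skip or reset it – which is exactly why these schemes proved so easy to defeat.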

Most of these technologies have been abandoned because they either hamper our use and enjoyment of the content, or they have been easy to override.

One technical issue to consider is ‘digital decay’ (*) – mostly, this relates to backing up and preserving digital archives, since we know that hard drives die, file formats become obsolete and software upgrades don’t always retrofit to existing data. But I wonder: does each subsequent copy of a digital artifact introduce unintentional flaws, which over time will generate copies that bear little resemblance to the original?
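It is worth noting that a faithful digital copy, unlike an analogue one, is bit-for-bit identical, so exact copying alone introduces no flaws; the risk lies in failing media and in format conversions. A minimal sketch of the first point, using a synthetic byte string as a stand-in artifact:

```python
import hashlib

original = bytes(range(256)) * 64                  # stand-in for a digital artifact
digest = hashlib.sha256(original).hexdigest()

copy = original
for generation in range(1000):
    copy = bytes(copy)                             # a faithful byte-for-byte copy

# After 1,000 generations the fingerprint - and hence the content - is unchanged.
print(hashlib.sha256(copy).hexdigest() == digest)  # True
```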

In the days of analogue audio tape, second-, third- and fourth-generation copies were self-evident – namely, the audible tape hiss, wow and flutter caused by copying copies, by using machines with different motor speeds, and by minor fluctuations in power. Today, different file formats and things like compression and conversion can render very different versions of the ‘same’ digital content – for example, most mp3 files are highly compressed (for playback on certain devices) while audiophiles prefer FLAC. Although this is partly a question of taste, how do we know what the original should sound like? With a bit of effort, we can re-process an ‘original’ downloaded mp3 into our own unique ‘copy’, which may sound very different to the version put out by the record company (who probably mastered the commercially released mp3 from studio recordings created using high-quality audio processing and much higher sampling rates).
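This ‘copies of copies’ effect can be imitated with a toy model: treat each re-processing pass as a small gain tweak followed by coarse re-quantization. This is a crude stand-in for lossy compression – it is emphatically not how mp3 encoding actually works – but it shows how each generation can drift a little further from the source:

```python
import math

def reprocess(samples, gain=1.01, levels=16):
    """One lossy 'generation': a small gain tweak, then coarse re-quantization."""
    return [round(s * gain * levels) / levels for s in samples]

# A 440 Hz tone sampled at 8 kHz stands in for the original recording.
original = [math.sin(2 * math.pi * 440 * t / 8000) for t in range(200)]

copy = list(original)
for generation in range(5):
    copy = reprocess(copy)

error = max(abs(a - b) for a, b in zip(original, copy))
print(f"max deviation after 5 generations: {error:.4f}")
```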

So, would the re-processed version be a forgery?

(*) Thanks to Richard Almond for his article on Digital Decay which I found very useful.

Amazon, Apple, Google: Are they the New Conglomerates?

Are Amazon, Apple and Google the new conglomerates? If so, should we be concerned that these leading digital businesses increasingly resemble ‘old school’ industrial behemoths?

The classic model of a conglomerate generally describes a holding company that either owns or has controlling stakes in a diversified range of operating businesses, often in unrelated industries.

Conglomerates largely went out of fashion in America and Europe in the 1980s and 1990s (following an era of acquisitions and asset-stripping in the 1960s and 1970s), resulting in leveraged buy-outs, spin-offs and partial IPOs, as owners and investors realised that the total value of the individual parts was greater than the amalgamated whole. Although some major cross-sectoral mergers and acquisitions did occur after this period (e.g., AOL and Time Warner, Vivendi and Universal), most M&A activity was confined to single-industry players, in pursuit of market share, economies of scale and other business synergies.

Despite this trend, various types of conglomerates (grounded in the ‘traditional’ industrial model) still exist – including the Chaebol of South Korea, Japan’s Keiretsu, China’s mega-SOEs, the trading houses of Hong Kong, and the FMCG “House of Brands” companies that fill our supermarket shelves. The UK-based Virgin Group and India’s Tata Group represent contemporary examples of ‘old’ conglomerates, as they operate across very separate and distinct businesses and industries.

Conglomerates are usually created by a need for vertical/horizontal integration or a basic desire to build diversified revenue streams. Some build on a core competence, then find an opportunity in a seemingly unrelated field – thus a company like General Electric, with deep expertise in power generation, storage and transmission, diversified into financial services as a way to help customers fund the purchase of its products.

Sometimes, conglomerates evolve as a result of financial necessity – Canada’s Thomson Corporation (now Thomson Reuters) once owned interests in North Sea oil and gas alongside its newspapers and media companies, but then divested most of these assets to focus on its publishing businesses across legal, scientific, financial, tax and accounting information.

For a long time, it also owned a vertically integrated travel business in the UK, comprising a charter airline, a package holiday company and a chain of high street travel agents.

As it was explained to me when I first worked for Thomson in the late 1980s, the rationale for this diversification was simply a question of cashflow: most of the information businesses were subscription-based, with revenues usually collected in the 4th quarter. Although summer package holidays generated a far lower margin than the information businesses, customers paid up front – normally in the 1st quarter, and up to 6 months in advance, creating more consistent cash flows across the business.

At times, conglomerates may need to diversify into new geographic or sectoral markets to avoid anti-trust measures if they come to dominate a particular territory or industry. However, as we have seen in recent years (Microsoft, EMI, Thomson Reuters), anti-trust measures have been used to force divestments or corporate restructures, across jurisdictions and markets.

Whether they have done so by design or by default, the case can be made that Amazon, Apple and Google have become the new conglomerates. Let’s take each in turn:

Amazon – began as an on-line retailer of hard-copy books, and has since moved into sales and distribution of digital content (books, films, music, games, software); a trading and sourcing platform for a wide range of consumer products; an electronics manufacturer (Kindle); cloud computing and data hosting services; and its own branded credit cards.

Apple – originally a manufacturer of personal computers and proprietary operating systems, now a vertically integrated digital content distribution business; a bricks and mortar retailer; a smart phone manufacturer; a key platform for the capture, creation and playback of audio-visual content (with a growing presence in broadcast television); a provider of cloud services; and now exploring opportunities in the automotive sector.

Google – what was once a late-entrant to on-line search has probably become the closest of these three internet giants to being a ‘true’ industrial conglomerate. In addition to its e-mail and social network offerings, Google has developed its own mobile device operating system (Android) and web browser (Chrome), plus smart phones (Nexus) and laptops (Chromebook). It rivals both Apple (most notably in mobile phones and apps distribution) and Amazon (principally for ebook distribution), and is making inroads into Microsoft’s dominance of productivity software. Plus, with Google Cars, Google Goggles (not forgetting Google Maps, Google AdWords, the Google Books Library Project and the 2006 acquisition of YouTube), Google is clearly on a path to being a diversified technology-based business, with integrated businesses across digital content, entertainment, transportation, navigation, archiving, streaming….

Meanwhile, all three have been investing in robotics; and surely telecoms (network carriers), biometrics, renewable energy, education, health, banking and financial services can’t be that far behind.

The risks for these neo-conglomerates are that they will either lose focus, over-reach themselves, or destroy the core businesses that lie at the heart of their success. Worse, they could fall foul of anti-trust provisions if they continue to become vertically and horizontally integrated – a threat equalled only by international moves to call tech-based multinationals to account for their cross-border tax planning.

As with all empires, the fortunes of conglomerates tend to wax and wane, and while the three companies discussed here have remained close to their core businesses, it will be interesting to see how each of them ensures that they continue to add value while not stretching the boundaries of their capabilities.

Publishers’ Choice: Be a Victim, or Join the Vanguard?

I recently posted a blog about saving the Australian publishing industry, prompted by some research I was doing on government-sponsored initiatives, notably EPICS and BISG. This generated a couple of (indirect) responses, one from the Department of Industry itself, the other from a long-time colleague in the industry. More on these later.

The future of publishing – circa 2000….

But first, some more industrial archeology, by way of demonstrating that book publishers are not shy about new technology – remember the first electronic ink? When I was working at the Thomson Corporation in the late 1990s, we were given access to a prototype version of what we would now recognise as an e-reader. It was about the size and thickness of a mouse pad but less flexible, and could only hold a small amount of data in its memory (content was uploaded via an ethernet cable). It was described as the future of book publishing, predicated on the ideas of portability (it could be rolled up like a newspaper, if the screen were thin and pliable enough) and of updating the content whenever the device was (physically) connected to a computer or the internet.

However, whatever their apparent appetite for new technology, publishers struggle to adapt their business models accordingly. Many remain fixated on “old” ways of monetizing content, locked into traditional supply chains, archaic market territories (geo-blocking), restrictive copyright practices and arcane licensing agreements. And unlike other content providers (notably music, TV and newspapers, which have shifted their thinking, albeit reluctantly), their transition to digital is still tied to specific platforms and devices, unit-based pricing and margins, and territorial restrictions.

Anyway, back to the future. In response to my enquiry about the outcome of the BISG initiative, and the creation of the Book Industry Collaborative Council (BICC), the Department of Industry offered the following:

“A key outcome of the BICC process was to have been the establishment of a Book Industry Council of Australia, an industry-led body based on the residual BICC membership that would come to be a single point of policy communication with government, though following its own reform agenda in the identified areas and unsupported by any taxpayer funding. Terms of Reference and so forth were drawn up but as nearly as we can ascertain from media monitoring and contacts, the BICA was never formed. It appears the industry is waiting to ascertain what the current government’s policy priorities might be, as expressed in the outcomes of the current Commission of Audit and Budget, before possibly resurrecting the BICA concept and/or the policy issues identified in the BICC report.” (emphasis added)

My read on this is that the industry won’t take any initiatives itself until it knows what the government might do (i.e., let’s wait to see if there are any handouts, and if not, we can plead a special case about the lack of subsidies/protection and the threat of extinction…).

This defeatist attitude is not just confined to Australia – my former colleague recently attended the 2014 Digital Book World Conference in New York. He commented:

“I was disappointed to see the general negativity of the publishing industry and the “victim” like mentality – also the focus on the arch-enemy – AMAZON! I see great opportunities for content – but companies have to get their head around smaller micro transactions and a freemium model. Big publishers are “holding on” to margins – it’s a recipe for disaster – [but] I think we can become small giants these days.”

There are some signs that the industry is taking the initiative, and even grounds for optimism such as embracing digital distribution in Australia, moving to a direct-to-consumer (“D2C”) model in the USA, and new approaches to copyright and licensing in the UK.

The choice facing the publishing industry is clear: continue to see itself as a victim (leading to a self-fulfilling prophecy of doom and extinction), or become part of the vanguard in developing leading-edge products and services for the digital age.

Dawn of the neo-meta-banks

Digital is redefining the way we interact with money. While online banking is nothing new, virtual currencies are getting big enough to attract the attention of regulators. Mobile phones are becoming payment gateways and POS terminals; meanwhile, stored value and pre-paid debit cards are more ubiquitous than cheque accounts. (In Hong Kong, the Octopus card originally introduced as a payment system for public transport, then extended to small purchases like coffee and newspapers, has now launched a dedicated mobile SIM card.)

Last year, Wired magazine predicted that tomorrow’s banks will resemble Facebook, Google or Apple. And of course, PayPal is owned by eBay, so it sort of makes sense that tech giants with huge customer bases conducting millions of online and mobile transactions would be the source of new banking services. For example, earlier this month the online banking start-up Simple was sold to a Spanish bank for $130m, even though it is not really a “proper” bank – more a banking services provider – because it had managed to attract customers who don’t want to deal with a “traditional” bank.

But where are the non-traditional banks and virtual financial services providers of the future actually going to come from?

The answer could be the People’s Republic of China.

Last week, it was reported that local tech companies Alibaba and Tencent will be included in a pilot scheme to establish private banks in China. The news should not be that surprising – Alibaba, for example, has already been using its experience and knowledge as a trading and sourcing platform to provide small-scale loans and export financing to Chinese manufacturers, funding production to fulfil customer orders. A few years ago I had the opportunity to visit Alibaba’s headquarters in Hangzhou, where I met with a team working on credit analysis and risk management for this micro-financing business, drawing on data insights from the payment history and transactional activity of their SME clients. It was certainly impressive, and my colleagues and I were left in no doubt that there was every intention to take this expertise into a full-blown banking vehicle.

However, this being China, it’s not quite as straightforward as it seems. Just a few days after the private bank pilot was announced, the People’s Bank of China suspended a mobile payments system used by Alibaba and Tencent.