F for Facsimile: What are ‘Digital Forgeries’?

Last week, I attended the 2014 Foxcroft Lecture, given by Nicolas Barker and entitled “Forgery of Printed Documents”. The lecture prompted the question: what would we consider to be a ‘digital forgery’?

Make Up

The lecture was an investigation into a practice that emerged in the 18th century, when reproductions (‘fac similes’ – Latin for ‘make alike’) of early printed texts were created either as honest replicas, or to enable missing pages from antiquarian books to be restored to ‘make up’ a complete work. In some cases the original pages had been removed by censors; in others they had been left out in error during binding; but most had simply been lost through damage or age.

Other factors created the need for these facsimiles: the number of copies of a book that could be printed at a time was often limited by law (censorship again at work), or works were licensed to different publishers in different markets, but printed using the original plates to save time and money.

Despite the innocent origins of facsimiles, unscrupulous dealers and collectors found a way to exploit them for financial gain – and of course, there were also attempts to pass off completely bogus works as genuine texts.

Replication vs Authentication

Technology has not only made the mass reproduction of written texts so much easier, it has also changed the way physical documents are authenticated – for example, faxed and scanned copies of signed documents are sometimes deemed sufficient proof of their existence, as evidence of specific facts, or in support of a contractual agreement or commercial arrangement. But this was not always the case, and even today, some legal documents have to be executed in written, hard-copy form, signed in person by the parties and in some situations witnessed by an independent party. For certain transactions, a formal seal needs to be attached to the original document.

Authenticating digital documents and artifacts presents us with various challenges. Quite apart from the need to verify electronic copies of contracts and official documents, the ubiquity of e-mail (and social media) has made these channels a target for exploitation by hackers and others, making it increasingly difficult to place our trust in them. As a result, we use encryption and other security devices to protect our data. But what about other digital content?

Let’s define ‘digital artifacts’ in this context as things like software; music; video; photography; books; databases; or digital certificates, signatures and keys. We know that it is much easier to fabricate something that is not what it purports to be (witness the use of photo-editing in the media and fashion industries), and there is a corresponding set of tools to help uncover these fabrications. Time stamping, digital watermarks, metadata and other devices can help us to verify the authenticity and/or source of a digital asset.
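To make that concrete, here is a minimal Python sketch of integrity and source checking – the file name, its contents, and the shared key are all hypothetical placeholders, not any real scheme:

```python
import hashlib
import hmac

SHARED_KEY = b"not-a-real-key"  # in practice, a secret agreed out of band

def file_sha256(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def sign(message: bytes) -> str:
    """Produce an HMAC-SHA256 'signature' over a message with the shared key."""
    return hmac.new(SHARED_KEY, message, hashlib.sha256).hexdigest()

def verify(message: bytes, signature: str) -> bool:
    """Check a signature in constant time, to resist timing attacks."""
    return hmac.compare_digest(sign(message), signature)

# Simulate receiving a copy of a digital artifact.
with open("artifact.bin", "wb") as f:
    f.write(b"the original content")

published_checksum = file_sha256("artifact.bin")  # what the source announces
announcement = f"artifact.bin sha256={published_checksum}".encode()
announcement_sig = sign(announcement)

# Later, anyone holding the key can confirm both the announcement and the file.
assert verify(announcement, announcement_sig)
assert file_sha256("artifact.bin") == published_checksum
print("Copy verified: bit-for-bit identical to what the source announced.")
```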

Multiplication

In the case of fine art, the use of digital media (as standalone images or video, as part of an installation, or as a component in mixed media pieces) has meant that some artists have made only a single unique copy of their work, while others have created so-called ‘multiples’ – large-scale editions of their work. (The realm of ‘digital works’ and ‘digital prints’ produced by photographers and artists is worthy of a separate article.)

Making copies of existing digital works is relatively simple – the technology to reproduce and distribute digital artifacts on a widespread scale is built into practically every device linked to the Internet. Not all digital reproduction and file sharing is theft or piracy – in fact, through the wonders of social media ‘sharing’, we are actually encouraged to disseminate this content to our friends and followers.

The song doesn’t remain the same

Apart from the software industry’s use of product keys to restrict the installation and use of unlicensed copies, the music and film industries have probably done the most to tackle illegal copying since the introduction of the CD/DVD. At various times, the entertainment industries have deployed the following technologies:

  • copy protection (to prevent discs being ripped and burned on computers)
  • encryption (discs and media files are ‘locked’ to a specific device or user account)
  • playback limits (mp3 files become unplayable after a specified number of plays)
  • time expiry (content becomes inaccessible after a specified date – a naive version is sketched below)
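By way of illustration, here is a toy Python version of a time-expiry check – the date and logic are hypothetical, not any vendor’s actual scheme. Because it trusts the device’s own clock, simply winding the system clock back defeats it:

```python
from datetime import date

EXPIRY_DATE = date(2014, 12, 31)  # illustrative licence expiry, not a real one

def content_is_playable(today=None):
    """Return True while the licence window is still open.
    Trusting date.today() is the weakness: reset the clock, revive the file."""
    today = today or date.today()
    return today <= EXPIRY_DATE

if content_is_playable():
    print("Playing content...")
else:
    print("Licence expired: content is no longer accessible.")
```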

Most of these technologies have been abandoned because they either hampered our use and enjoyment of the content, or proved easy to override.

One technical issue to consider is ‘digital decay’ (*) – mostly this relates to backing up and preserving digital archives, since we know that hard drives die, file formats become obsolete and software upgrades don’t always retrofit to existing data. But I wonder whether each subsequent copy of a digital artifact introduces unintentional flaws, which over time could produce copies that bear little resemblance to the original?
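As a thought experiment, here is a toy Python simulation – the error rates are entirely made up – of how small per-copy faults (failing media, imperfect format migrations) would compound across generations:

```python
import random

random.seed(42)  # reproducible toy run

def degraded_copy(data: bytes, error_rate: float = 1e-4) -> bytes:
    """Copy data, flipping a random bit in a byte at the given error rate."""
    out = bytearray(data)
    for i in range(len(out)):
        if random.random() < error_rate:
            out[i] ^= 1 << random.randrange(8)  # flip one of the 8 bits
    return bytes(out)

original = bytes(random.randrange(256) for _ in range(100_000))
copy = original
for generation in range(1, 6):
    copy = degraded_copy(copy)
    diffs = sum(a != b for a, b in zip(original, copy))
    print(f"Generation {generation}: {diffs} bytes differ from the original")
```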

In the days of analogue audio tape, second, third and fourth generation copies were self-evident – the audible tape hiss, wow and flutter caused by copying copies, by using machines with different motor speeds, and by minor fluctuations in power. Today, different file formats and processes like compression and conversion can render very different versions of the ‘same’ digital content – for example, most mp3 files are highly compressed (for playback on certain devices) while audiophiles prefer lossless formats such as FLAC. Although this is partly a question of taste, how do we know what the original should sound like? With a bit of effort, we can re-process an ‘original’ downloaded mp3 into our own unique ‘copy’ which may sound very different to the version put out by the record company (who probably mastered the commercially released mp3 from studio recordings created with high-quality audio processing and much higher sampling rates).
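A minimal sketch of such a re-processing step, assuming the ffmpeg command-line tool (with its mp3 encoder) is installed – the file names and bit-rate are illustrative:

```python
import subprocess

# Decode an mp3 and re-encode it at a lower bit-rate: lossy information is
# discarded a second time, so the result can sound audibly different.
subprocess.run(
    ["ffmpeg", "-i", "original.mp3",           # hypothetical input file
     "-codec:a", "libmp3lame", "-b:a", "96k",  # re-encode at 96 kbit/s
     "reprocessed.mp3"],
    check=True,
)
```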

So, would the re-processed version be a forgery?

(*) Thanks to Richard Almond for his article on Digital Decay which I found very useful.


Pricing for the Digital Age – A Postscript

Last week I wrote about pricing for digital content. In the past, I’ve also written about geo-blocking.

So, I decided to conduct a (very) small experiment in price comparison by market territory.

I chose a specific book title, and compared prices of the digital and print editions, between several retail sites, in 3 markets (Australia, UK and USA).

Before I conducted the exercise, my expectation was that Australia would be the most expensive (based on current exchange rates*), and USA the cheapest, but not much cheaper than the UK. But I was surprised by the results….

First, the digital edition:

  • Apple’s iTunes store: Australia A$37.99; USA A$37.76; UK A$41.83
  • Amazon: Australia A$20.60; USA A$20.61; UK A$33.18
  • eBooks.com: Australia A$40.95; USA A$37.72; UK A$48.03
  • Booktopia: A$39.95 (Australian store only)

I was surprised that the iTunes prices in Australia and the USA were so close – when it comes to music, iTunes Australia is usually far more expensive than either the USA or the UK. Amazon appeared to have the title on sale, but I can’t work out why the UK prices are so much higher. Thanks to geo-blocking, of course, I cannot access the slightly cheaper price in the US store. But I was able to buy the title from Amazon.com (at the same discounted price as Amazon.com.au).

Second, the print edition (based on shipping costs to/within Australia):

  • Amazon: Australia not available; USA A$40.89 (inc. P&P to Australia); UK A$50.37 (inc. P&P to Australia)
  • Book Depository: A$32.31 (inc. P&P from UK)
  • Angus & Robertson: A$39.99 (inc. P&P within Australia)
  • Readings: A$40.95 (inc. P&P within Australia)
  • Booktopia: A$48.45 (inc. P&P within Australia)

Clearly, Book Depository is the best option by far (as it frequently is) and seems willing to undercut its parent company, Amazon – or maybe that’s a deliberate strategy, since Amazon.com.au does not yet sell physical products. However, the much higher price charged by Australia’s Booktopia might speak volumes about the state of local retail….


NOTE:

Prices were taken from the published local price on each website, then converted to A$ using xe.com.
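For what it’s worth, the conversion step amounts to nothing more than this Python sketch – the exchange rates below are illustrative placeholders, not the xe.com rates used for the figures above:

```python
# Assumed (made-up) exchange rates into Australian dollars.
RATES_TO_AUD = {"USD": 1.08, "GBP": 1.80, "AUD": 1.00}

def to_aud(price: float, currency: str) -> float:
    """Convert a published local price into Australian dollars."""
    return round(price * RATES_TO_AUD[currency], 2)

print(to_aud(34.99, "USD"))  # a hypothetical US list price -> A$37.79
```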


Publishers’ Choice: Be a Victim, or Join the Vanguard?

I recently posted a blog about saving the Australian publishing industry, prompted by some research I was doing on government-sponsored initiatives, notably EPICS and BISG. This generated a couple of (indirect) responses, one from the Department of Industry itself, the other from a long-time colleague in the industry. More on these later.

[Image: The future of publishing – circa 2000….]

But first, some more industrial archeology, by way of demonstrating that book publishers are not shy about new technology – remember the first electronic ink? When I was working at the Thomson Corporation in the late 1990s, we were given access to a prototype of what we would now recognise as an e-reader. It was about the size and thickness of a mouse pad but less flexible, and could only hold a small amount of data in its memory (content was uploaded via an ethernet cable). It was described as the future of book publishing, predicated on portability (the idea was that it could be rolled up like a newspaper, once the screen was thin and pliable enough) and on updating it with new content whenever it was physically connected to a computer or the internet.

However, whatever their apparent appetite for new technology, publishers struggle to adapt their business models accordingly. They remain fixated on “old” ways of monetizing content, and locked into traditional supply chains, archaic market territories (geo-blocking), restrictive copyright practices and arcane licensing agreements. And unlike other content providers (notably music, TV and newspapers, which have shifted their thinking, albeit reluctantly), their transition to digital is still tied to specific platforms and devices, unit-based pricing and margins, and territorial restrictions.

Anyway, back to the future. In response to my enquiry about the outcome of the BISG initiative, and the creation of the Book Industry Collaborative Council (BICC), the Department of Industry offered the following:

“A key outcome of the BICC process was to have been the establishment of a Book Industry Council of Australia, an industry-led body based on the residual BICC membership that would come to be a single point of policy communication with government, though following its own reform agenda in the identified areas and unsupported by any taxpayer funding. Terms of Reference and so forth were drawn up but as nearly as we can ascertain from media monitoring and contacts, the BICA was never formed. It appears the industry is waiting to ascertain what the current government’s policy priorities might be, as expressed in the outcomes of the current Commission of Audit and Budget, before possibly resurrecting the BICA concept and/or the policy issues identified in the BICC report.” (emphasis added)

My read on this is that the industry won’t take any initiatives itself until it knows what the government might do (i.e., let’s wait to see if there are any handouts, and if not, we can plead a special case about the lack of subsidies/protection and the threat of extinction…).

This defeatist attitude is not just confined to Australia – my former colleague recently attended the 2014 Digital Book World Conference in New York. He commented:

“I was disappointed to see the general negativity of the publishing industry and the “victim” like mentality – also the focus on the arch-enemy – AMAZON! I see great opportunities for content – but companies have to get their head around smaller micro transactions and a freemium model. Big publishers are “holding on” to margins – it’s a recipe for disaster – [but] I think we can become small giants these days.”

There are some signs that the industry is taking the initiative, and even grounds for optimism such as embracing digital distribution in Australia, moving to a direct-to-consumer (“D2C”) model in the USA, and new approaches to copyright and licensing in the UK.

The choice facing the publishing industry is clear: continue to see itself as a victim (leading to a self-fulfilling prophecy of doom and extinction), or become part of the vanguard in developing leading-edge products and services for the digital age.

From EPICS to BISG: Trying to save the Australian publishing industry

At the dawn of the century, the Australian government funded a series of research projects on the future of the local book publishing industry, under the Enhanced Printing Industry Competitiveness Scheme (EPICS). Part of that research effort included the Ad Rem Report on “The Australian Book Industry: Challenges and Opportunities”, published in September 2001.

Scenario Planning

Via consultation with publishers, printers, distributors and book sellers, Ad Rem examined a range of possible scenarios the industry would face leading up to 2010.

Using rather quaint titles for each scenario, from utopian to apocalyptic, the report made a strong case for:

  • increased collaboration and consolidation across printing and supply chain logistics;
  • adoption of new technology (including “print on demand”); and
  • increased focus on adding value through improved customer service.

So, under “Paradise Found”, a loose federation of specialist companies would focus on either printing, publishing or distribution services predicated on increased consumer demand for books and content available from multiple outlets, underpinned by happy customers served by a responsive and proactive publishing industry.

More stoically, selfless cooperation and collaboration in the form of “Shoulder to Shoulder” would ensure that despite reduced demand, the industry could become a “national model of supply chain efficiency” by sharing distribution networks and market data, and adopting industry-wide standards.

Conversely, limited cooperation and the lack of a single, dominant business model would result in a “Dog Eat Dog” scenario, with few local winners. Overall consumer demand would diminish, industry participants would seek to operate all along the supply chain (introducing some market inefficiencies), and the industry would end up competing on price alone, and fighting tooth and nail for the next major “blockbuster” title.

Alternatively, if the “Land of the Giants” was to prevail, “highly diversified global companies from outside traditional media industries would come to dominate the Australian book industry.” Demand would be driven and met by technological changes, carried forward by bundled products and services, end-to-end integrated businesses, and “predominantly proprietary industry standards”.

The reality is, we have “Land of the Giants” (as far as global businesses are concerned), while the local players are fighting it out in a “Dog Eat Dog” world.

Technology

“Print on demand” was going to be the answer, because it would streamline supply chain logistics, improve sales margins for retailers, and preserve the protectionism afforded to local publishers and distributors under the 30-day rule written into the Copyright Act. In addition, increased training and upskilling would help the industry meet the challenges of digital content and the new means of production and distribution. (The publishing industry has traditionally invested very little in structured training – see Jo Bramble’s chapter in “Developing Knowledge Workers in the Printing and Publishing Industries”, Cope & Freeman (eds), University Press/Common Ground Publishing (2002).)

However, while ebooks were already on the market in 2001 (mainly read on PDAs) and online content was already widespread, probably nothing could have prepared the industry for what has happened in the past 10 years, such as:

  • the growth of ebook readers such as Kindle, Nook and Kobo,
  • the impact of Apple’s iOS/iTunes/iBook/iPad ecosystem,
  • self-publishing solutions from Amazon to Tablo, or
  • controversial online “library” projects like Google Books.

Print on demand never came about, partly because the dot-com boom/bust of 2001-2002 put the dampener on many digital initiatives (remember the original push for “e-Government” in Australia?), partly because internet speeds were not up to scratch, but mainly because there was little or no appetite for industry collaboration and common standards.

Retailing

Infamously, Borders came along to shake up the local market, but ended up laying waste to much of Australia’s book selling industry as it imploded under the weight of expectation (and crippling debt). While a couple of national chains remain, many independent and specialist bookshops have managed to survive – some may even be thriving – as they find ways to develop deeper engagement with their customers, and offer a range of value-added services.

However, sales of books in Australia have maintained a steady (if unspectacular) growth rate; online purchases now account for around 12% of all book sales, more than half of which are generated by overseas websites; meanwhile, ebooks have gone from 1.5% of the local market in 2010 to 10%-12% of all book sales in 2013 (of which 90% are made through offshore retailers).

Geo-blocking

Regular readers of this blog will know I have a thing about geo-blocking* – so, while I am an advocate for intellectual property protections such as copyright, I am against territorial restrictions that prevent/impede customers buying content from wherever/whomever they choose just because content owners and/or their distributors have decided to carve up the market to suit themselves. (Piracy is piracy, but parallel importation is about giving customers choice.)

Amazon finally launched its dedicated store in Australia in late 2013, but only for ebooks, and with an initial focus on Australian authors and publishers. So, for print books, local customers still need to go to the US and UK sites. For whatever reason, Amazon feels it necessary to have a local online presence (to counter protectionism? to avoid arguments over collecting local GST on overseas online purchases? to annoy local retailers who have been selling Kindles?).

What came next? Much the same really…

I can’t help thinking that the combination of an apparent lack of cooperation around standards, a reluctance to collaborate on supply chain logistics, and an inability to read the technology trends have all contributed to a two-speed publishing industry in Australia: a cluster of small, specialist and independent print publishers and bookshops trying to compete with the global digital behemoths of Apple, Amazon and Google.

Despite the considerable effort behind the Ad Rem Report, it’s fair to say that nothing of substance materialised.

Fast forward 10 years, and along came the Book Industry Strategy Group (BISG) which reported in September 2011. Among its 21 recommendations were:

  • consolidation/streamlining within and across the supply chain – to create greater efficiencies
  • adjustments to GST – i.e., abolish/reduce the rate on Australian books, or collect GST on sales under $1,000 by overseas websites
  • increased protection(ism) – via direct and indirect support for the local industry
  • review copyright legislation – in relation to digital content creation and distribution

Fairly predictable stuff, but not much about technology or related innovation…

NOTES:

The original Ad Rem website was decommissioned some time ago. I do have PDF copies of the various reports and working group papers if anyone is interested – although they are the copyright of Accenture, I’m sure they wouldn’t mind if I distributed a few copies in the interest of research and commentary. Meanwhile, a couple of papers are still online:

  • Ad_Rem_Scenario_Planning.pdf
  • Ad_-Rem_Value_Chain_Analysis.pdf

*GEO-BLOCKING REFERENCES:

https://contentincontext.me/2013/04/23/geo-blocking-the-last-digital-frontier/

https://contentincontext.me/2013/08/13/australian-mps-consider-a-ban-on-geo-blocking/