Australian MPs recommend a ban on geo-blocking

In a recent blog about geo-blocking, I commented on the frustrations of Australian consumers in trying to access digital content. That blog was written in light of a parliamentary inquiry into IT price discrimination.

A Report by the House of Representatives Infrastructure and Communications Committee has just been published, and makes for some fascinating reading.

The Report reveals a number of key themes:

  • There is strong evidence that Australian consumers pay between 50 and 100 per cent more for the same product than consumers in comparable markets.
  • Price differentials cannot be fully explained by the so-called “Australia tax” (i.e., the relatively higher costs of doing business locally, due to wages, taxes, market regulation, shipping costs, economies of scale, etc.).
  • Consumer complaints about price discrimination are not being taken seriously by the industry as a whole.
  • Industry participants either deflected responsibility for price discrimination to other parts of the supply chain, or blamed inconsistent market practices as justifying the need for different regional and national price policies.
  • Despite being given the opportunity by the Committee to defend their pricing practices in public, most industry participants declined to co-operate in full; this gave rise to Apple, Adobe and Microsoft each being compelled to give evidence.
  • A number of submissions made by industry participants appeared to be disingenuous, self-serving, evasive and even misleading.

The Committee accepts that IT vendors are entitled to run their businesses as they see fit, and there is nothing to stop them from charging whatever prices they like. There was also general acknowledgment that copyright holders must be able to protect their IP assets.

However, geo-blocking (especially of digital content) simply reinforces price disparity based on a customer’s geographical location, rather than protecting the interests of copyright holders. Further, although so-called “Technological Protection Measures” (TPMs), “Effective Technological Measures” (ETMs) and “Digital Rights Management” (DRM) systems may have a legitimate role in controlling copyright (and as such enjoy protection under the relevant copyright law), their net effect has been to limit competition and to lock consumers into “walled gardens”, placing considerable power in the hands of IT vendors over how, when and where consumers access content.

In short, the Committee made several recommendations designed to address price discrimination and restricted market access imposed on Australian consumers, including:

  • Remove any remaining restrictions on parallel imports (in a bid to increase market competition among distributors and retailers).
  • Clarify the legal circumvention of TPM/ETM/DRM barriers that are purely designed as geo-blocking tools (rather than copyright protection measures).
  • Educate Australian consumers about their ability to buy cheaper goods from overseas, or to legally circumvent geo-blocking (without compromising product warranties or infringing copyright).
  • As a last resort, place a ban on geo-blocking and outlaw contracts or terms of service that rely on and enforce geo-blocking.

Unfortunately, while this Report is of great significance to the Australian digital economy, and seeks to strike a balance between the rights of copyright holders and the interests of consumers, it is likely to be overshadowed by concerns about tax avoidance by multinational companies. No doubt Australian consumers will make a connection between the global IT companies whose products they buy, and the transnational tax minimization strategies linked to transfer pricing policies and the routing of content royalties and copyright licensing fees via low-tax jurisdictions.

Whose content is it anyway?

Faust 2.0

Every social media and digital publishing platform is engaged in a continuous battle to acquire content, in order to attract audiences and bolster advertising revenues.

Content ownership is becoming increasingly contentious, and I wonder if we truly appreciate the near-Faustian pact we have entered into as we willingly contribute original material and our personal data in return for continued “free” access to Facebook, YouTube, Google, Flickr, LinkedIn, Pinterest, Twitter, MySpace, etc.

Even if we knowingly surrender legal rights over our own content because this is the acceptable price to pay for using social media, are we actually getting a fair deal in return? The fact is that more users and more content means more advertisers – but are we being adequately compensated for the privilege of posting our stuff on-line? Even if we are prepared to go along with the deal, are our rights being adequately protected and respected?

In late 2012, Instagram faced intense public backlash against suggestions it would embark upon the commercial exploitation of users’ photographs. While appearing to backtrack, and conceding that users retain copyright in their photographs, there is nothing to say that Instagram and others won’t seek to amend their end-user license agreements in future to claim certain rights over contributed content. For example, while users might retain copyright in their individual content, social media platforms may assert other intellectual property rights over derived content (e.g., compiling directories of aggregated data, licensing the metadata associated with user content, or controlling the embedded design features associated with the way content is rendered and arranged).

Even if a social media site is “free” to use (and as we all know, we “pay” for it by allowing ourselves to be used as advertising and marketing bait), I would still expect to retain full ownership, control and use of my own content – otherwise, in some ways it’s rather like a typesetter or printer trying to claim ownership of an author’s work…

The Instagram issue has resurfaced in recent months, with the UK’s Enterprise and Regulatory Reform Act. The Act amends UK copyright law in a number of ways, most contentiously around the treatment of “orphan” works (i.e., copyright content – photos, recordings, text – where the original author or owner cannot be identified). The stated intent of the Act is to bring orphan works into a formal copyright administration system, and similar reforms are under consideration in Australia.

Under the new UK legislation, a licensing and collection regime will be established to enable the commercial exploitation of orphan works, provided that the publisher has made a “diligent” effort to locate the copyright holder, and agrees to pay an appropriate license fee once permission to publish has been granted by the scheme’s administrator.

Such has been the outcry (especially among photographers) that the legislation has been referred to as “the Instagram Act”, and the UK government’s own Intellectual Property Office was moved to issue a clarification factsheet to mollify public concerns. However, those concerns continue to surface: in particular, over the definition of “diligent” in this context, and over the practice of some social media platforms of removing metadata from photos, making it harder to identify the owner or the original source.
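The metadata problem is easy to sketch in the abstract. The field names and values below are invented for illustration (real photos carry this information in EXIF/IPTC blocks), but the orphaning effect is the same: once identifying metadata is stripped on upload, a later “diligent” search has nothing to work with.

```python
# Illustrative sketch of how stripping metadata "orphans" a work.
# The dict stands in for a real image file; all field names are invented.
photo = {
    "pixels": "<image data>",
    "metadata": {"Artist": "Jane Doe", "Copyright": "(c) Jane Doe 2013"},
}

def strip_metadata(image: dict) -> dict:
    # What some platforms reportedly do on upload:
    # keep the picture, discard everything that identifies its author.
    return {"pixels": image["pixels"]}

republished = strip_metadata(photo)
# A "diligent" search for the owner now has nothing to go on:
owner = republished.get("metadata", {}).get("Artist")  # None
```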

Meanwhile, the long-running Google book scanning copyright lawsuit has taken another unexpected twist in the US courts. From the outset, Google tried to suggest it was providing some sort of public service in making long-out-of-print books available in the digital age. Others claim that it was part of a strategy to challenge Amazon.

Despite an earlier unfavourable ruling, a recent appeal has helped Google’s case in two ways: first, the previous decision to establish a class action comprising disgruntled authors and publishers has been set aside (on what looks like a technicality); second, the courts must now consider whether Google can claim its scanning activities (involving an estimated 20 million titles) constitute “fair use”, one of the few defences to allegations of breach of copyright.

Personally, I don’t think the “fair use” provisions were designed to cater for mass commercialization on the scale of Google’s project, despite the company saying it will restrict the amount of free content from each book displayed in search results. Ultimately, Google wants to generate a new revenue stream from third-party content that it neither owns nor originated, so let’s call it what it is: if authors and publishers wish to grant Google permission to digitize their content, let them negotiate equitable licensing terms and royalties.

Finally, the upcoming release of Apple’s iOS 7 has created consternation of its own. Certain developers with access to the beta version are concerned that Apple will force mobile device users to install app upgrades automatically. If this is true, then Apple is essentially telling its customers they now have even less control over the devices and content that they pay for.

Geo-blocking: the last digital frontier?

Last month, senior executives from Adobe, Apple and Microsoft were summoned to appear before an Australian Parliamentary inquiry into IT pricing policies. It was alleged that Australian consumers can pay up to 70% more for comparable products and services sold in other markets.

Leaving aside the additional costs of distributing and shipping physical goods to Australia, at the heart of the pricing disparity is the practice of “geo-blocking”, whereby customers in one location cannot purchase digital or physical products direct from vendors outside their country of residence. It’s the sort of industry practice that prevents Australian consumers from buying some print books and CDs from certain overseas online retailers (retailers which also refuse to sell MP3s to Australian customers).
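At a technical level, geo-blocking is usually nothing more than a lookup keyed off the customer’s detected location. The following sketch (store, product and prices all invented for the example) shows how the same catalogue entry can simply have no price for a given territory:

```python
# Illustrative sketch only: a hypothetical storefront that refuses checkout
# based on the buyer's country code - the geo-blocking pattern in miniature.
CATALOGUE = {
    "album-mp3": {"US": 9.99, "UK": 7.99},  # no entry for "AU" at all
}

def checkout(sku: str, country: str) -> float:
    prices = CATALOGUE.get(sku, {})
    if country not in prices:
        # The geo-block: the product exists, but not for this territory.
        raise PermissionError(f"{sku} is not available for sale in {country}")
    return prices[country]
```

The product itself is identical everywhere; only the territory key decides whether – and at what price – it can be bought.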

When asked to explain the apparent disparity in market pricing, the tech execs responded with comments such as, “the inclusion of Australian sales tax in the retail price is confusing”, “it’s a reflection of the cost of doing business in Australia” and “it’s all because of the content owners’ and copyright holders’ archaic territorial licensing practices”.

Their answers were variously described as “evasive”, “unbelievable” and as having “failed to impress”. The suggestion by one CEO that Australian consumers should fly to the USA to buy cheaper products was frankly ludicrous, especially as sales warranties given in America would likely be invalid once the goods were brought back to Australia.

When it can be cheaper to buy a CD copy of an album from an on-line music retailer in the UK rather than download the MP3 version from a vendor in Australia, clearly there is something wrong with this picture.

“Parallel imports” and “grey goods” are terms used in the fashion, cosmetic and other retail sectors to describe situations where wholesalers and distributors import branded goods that are technically subject to strict territorial sales and distribution licenses held by third parties. Alternatively, consumers in one country purchase goods direct from a retailer or distributor located in another country who does not have the rights to sell or export the products to the consumer’s country of residence. The license holders can seek to block these unauthorized imports/exports, but in cases where the license holder has chosen not to distribute those specific goods, these “grey” imports could possibly be deemed legitimate (under the “use it or lose it” principle).

Whatever the legal interpretation of territorial licensing, when it comes to digital content, is geo-blocking still appropriate? Let me offer an illustration:

Imagine you are an Australian traveller on a business trip to New York. You visit a local book shop, to pick up a copy of the latest novel by your favourite author.

Unfortunately, the salesperson tells you the book is not in stock, because the publisher does not distribute that particular title to independent stores; instead, you have to go to the mega book store across town.

After making your way to the mega store, you find out that before you can make any purchase, you have to open an account, submit your credit card details and other personal information (and sign a contract that says things like “you must always keep books bought from our store in our proprietary and specially designed book shelves”).

Just as you are about to make your purchase, the shop assistant asks you for your passport. “Oh, I’m sorry, we don’t sell our books to people from Australia. You have to go to our mega store in Sydney.”

On the way back to your hotel, you phone the publisher (whose office is on your route) to see if you can buy a copy direct from their sales department. The conversation goes something like this:

“You sound Australian. Sorry, but we can’t sell it to you. You have to buy it from our Australian distributor.”

“OK, can you tell me who the Australian distributor is, or which shops stock your titles?”

“I’m not sure. I think it depends on who the author is. Or whether it’s the hardback or paperback edition. Or whether our distributor is importing that particular title. Maybe we only sell it through the Australian branch of the mega book store that wouldn’t sell it to you while you were in town. Have a nice day.”

Great. With nothing to read on the 20-hour flight back to Australia, you catch up on a lot of episodes of “Bored to Death”, because you don’t expect them to be shown on Australian TV for at least a year. (But that’s another industry scenario…)

Back home in Australia, you visit the Sydney branch of the mega book store. “I’m sorry, we don’t have that title in stock, because we haven’t had enough customer requests to justify importing any copies…”

Is it any wonder, with these sorts of restrictive commercial practices common in the software and digital content industries, that Australia has the highest level of illegal music downloading per capita – not because all Australian consumers are unwilling to pay for content, but often because customers cannot legitimately buy it?

Why we need a “Steam Internet”

1981 Alcatel Minitel terminal
(Photo by Jef Poskanzer – Licensed under Creative Commons Attribution-Share Alike)

The Internet is passing through a period of consolidation, as befits an industry that has reached maturity:

1. A small number of mega-players dominate the market: Microsoft, Amazon, Twitter, Apple, Facebook, Google, Yahoo!, PayPal, YouTube and Wikipedia.

2. Product lines are being rationalized, as companies trim their offerings to focus on core business – the latest victim being Google’s Reader tool for RSS feeds.

3. The distinctions between hardware, software, content and apps are blurred because of overlapping services, increased inter-connectivity via mobile platforms, and cloud-based solutions.

4. The business model for Internet access and Web usage is primarily based on data consumption and/or underwritten by third-party advertising. Social Media and search services are often not counted as part of that usage, thus confusing our understanding of what content actually costs.

5. Since our concept of what constitutes “news” is rapidly being redefined by Social Media, and readers increasingly rely on Social Media channels to access news, it is harder for content providers to charge a premium for value-added information services such as quality journalism and objective news reporting.
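Point 4 can be made concrete with some back-of-the-envelope arithmetic (all figures invented for illustration): when zero-rated social and search traffic is excluded from the quota, the price a user perceives per gigabyte diverges from the real cost of what is actually consumed.

```python
# Hypothetical plan: $50/month with a 10 GB metered quota, plus another
# 10 GB of "zero-rated" social/search traffic that is not counted.
plan_cost = 50.0       # dollars per month (invented figure)
metered_gb = 10.0      # traffic billed against the quota
zero_rated_gb = 10.0   # social/search traffic excluded from the quota

# What the bill implies content costs...
naive_cost_per_gb = plan_cost / metered_gb                        # 5.0
# ...versus the cost spread over everything actually consumed.
effective_cost_per_gb = plan_cost / (metered_gb + zero_rated_gb)  # 2.5
```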

I would argue that to rediscover a key purpose of the Net (as a means to send/receive meaningful news and information), we need to reflect on how radio broadcasting repositioned itself when television came along – hence “Steam Internet”.

“Steam Radio” was a term used in 1950s Britain to differentiate sound broadcasting (radio) from audio-visual broadcasts (television). Although somewhat self-deprecating (suggesting something slow and obsolete – echoing the demise of steam railways following the introduction of electric and diesel locomotives), it actually helped to embed specific values and purpose around the role of radio as a simple but effective medium to inform, educate and entertain, despite its apparent limitations.

My interest in radio means that I continue to use it as a primary source of daily news and current affairs, and as a convenient means to access international content. The discipline of radio means that content is generally well structured, the format’s limitations emphasise quality over quantity, and when done well there is both an immediacy and an intimate atmosphere that can really only be achieved by the audio format.

Far from becoming an obsolete medium in the Internet age, the growth of digital stations (as well as Internet radio and mobile-streaming) means that radio is undergoing a renaissance as it increasingly provides very specific choices in content, and offers ease of access without a lot of the “noise” of many news and information websites, with their pop-up ads, unstable video and data-hungry graphics.

Over the past decade, the major growth in Internet traffic in general, and World Wide Web usage in particular, has been driven by Social Media. However, neither the Net nor the Web was originally designed to be a mass-media platform, and the success of a highly interactive, deeply personalized and far-reaching network now threatens the viability of the Internet as a means of effective communication.

As Web content and functionality have become more complex, it has actually become harder and more frustrating to find exactly what we want, because:

  • search and retrieval is advertising-driven and based on popularity, frequency and connectivity (rather than on context, relevance and quality);
  • content searches reduce everything to a common level of “hits” and “results”; and
  • there is little or no hierarchy as to how information and search results are structured (maybe we need a Dewey Decimal system for organising Web content?). This is one reason why Twitter is enhancing its search function by using human intervention (i.e., contextual interpretation) to make more sense of trending news themes.
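The distinction behind the first bullet can be illustrated with a toy ranking (documents and scores invented; real search engines are vastly more complex): the same two documents order differently depending on whether the engine optimizes for popularity or for contextual relevance.

```python
# Toy contrast between popularity-driven and relevance-driven ranking.
# "clicks" and "topic_match" are invented stand-ins for real signals.
docs = [
    {"title": "Celebrity gossip roundup", "clicks": 9000, "topic_match": 0.1},
    {"title": "In-depth policy analysis", "clicks": 300, "topic_match": 0.9},
]

# Rank by popularity (clicks): the gossip piece comes first.
by_popularity = sorted(docs, key=lambda d: d["clicks"], reverse=True)

# Rank by contextual relevance to the query topic: the analysis comes first.
by_relevance = sorted(docs, key=lambda d: d["topic_match"], reverse=True)
```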

I’d like to offer a short historical perspective to provide further context for the need for “Steam Internet” services:

Along with bankers and brokers, lawyers were among the first to recognize the importance of dedicated Internet services for transacting data and information. The first on-line information service I ever used was Lexis-Nexis (a research tool for lawyers) when I was a paralegal in the 1980s. Lexis-Nexis is a database that enables users to search summaries, transcripts and reports of relevant court decisions regarding specific points of law. It is a very structured and hierarchical content source. Back then, it was a dial-up service, requiring the user to place the handset of a fixed-line telephone into an external modem that was connected to the computer terminal from which the search was conducted. The reason I can remember it so vividly is that the first time I used it, I forgot to specify sufficiently narrow search terms, which meant pages and pages of text being churned out – and probably a bill of over $200, as the service was charged according to the number of results returned and pages printed.

In the mid-1990s, when I was setting up my Internet access, the ISP was owned and run by a university, which made sense given that the Net grew out of the academic world. But even though I had an ISP account, I still had to download, install and configure a graphical browser (Mosaic) to access the Web – or alternatively, I could subscribe to a dedicated dial-up service such as AOL, which offered a limited number of dedicated information services. Otherwise, my Internet access really only supported e-mail via DOS-based applications, and the exchange of files. (This was pre-Explorer and pre-Netscape, and before the browser wars of the 1990s and early 2000s – which continue to this day, with Microsoft copping another EU fine just this month.)

As the Web became more interactive, but also more dependent on “push” content driven by advertising-based search, the user experience was enhanced by RSS readers – a way to get to the information we really needed, and to personalize what content would be pushed to our desktops. When I was demonstrating financial market information services to new clients, the built-in RSS reader was a useful talking point, because I had configured it to display scores from the English Premier League as well as general news and industry headlines. (There is an urban myth that some of the most popular news screens on Bloomberg are the sports results…)

Just a few years ago, pre-Social Media, there were discussions about building a dedicated, faster, more robust and more secure business-oriented Internet platform, because the popular and public demands placed on the Web were putting an inordinate strain on the whole system. Businesses felt the need to create a separate platform – not just VPNs, but a new “Internet 2” for government, universities and businesses to communicate and interact. In the end, all that has happened is an expansion of the Top-Level Domains (.biz, .mobi), with a continued programme of generic TLDs in the works; but this simply creates more real estate on the Web, rather than building a dedicated data- and information-led Internet for business.

At this point, it’s worth reflecting that only last year, France’s Minitel videotex service and the UK’s Ceefax teletext service were both finally decommissioned, each having been in operation for over 30 years. In their prime, these were innovative precursors to the Web, even though neither of them was considered to be part of the Internet. Their relevance as dedicated information services should not be overlooked just because technology has overtaken them; that’s like saying the news media are redundant because their print circulation is in decline.

In conclusion, I’m therefore very attracted to the idea of a “Steam Internet” that mainly carries news and information services, as a way to bring focus and structure to this content.


Declaration of interest: from time to time the author is a presenter on Community Radio, but does not currently derive an income from this activity, so no commercial or financial bias should be implied by his personal enthusiasm for this broadcast medium.