According to a BBC news report, a deal to tackle digital piracy is about to be realised between major UK ISPs and key content and entertainment industry organisations. Given that it took several years of wrangling to get to this point, the obvious question is whether this particular deal will work to the satisfaction of all concerned.
The report describes how the major UK ISPs (i.e. BT, Sky, TalkTalk and Virgin Media) will be required to send ‘educational’ letters, or alerts, to users they believe are downloading illegal content. Among other things, the deal is predicated on the belief that increased awareness of legal alternatives will steer such users away from illegal content acquisition, casual infringement and piracy. This voluntary alert system will be funded mainly by the content industry, which in return will receive monthly statistics on the alerts issued by the ISPs. Overall, the deal is far removed from the more punitive “three strikes” system originally mooted in the early days of the Digital Economy Act.
As with most cases, there are two or more sides to the story, and below are some considerations to take into account before drawing your own conclusions:
1. Critics of this deal, presumably the content providers, will consider it too soft an approach to effectively curb the very real and adverse economic impact of piracy.
2. Supporters, including the ISPs, will likely see this as a fair compromise for securing their cooperation in tackling piracy, and a win-win for them and their customers.
3. Another perspective is that of the regulators and government intermediaries (aka brokers of this deal), who likely consider it a practical compromise which can always be tweaked depending on its efficacy, or lack thereof.
4. There are probably many other viewpoints to be considered, but, in my opinion, the most important perspective belongs to the end-users who ultimately stand to benefit or suffer from the success or failure of this initiative, especially since:
- there is evidence that education trumps punishment when it comes to casual content piracy – e.g. the HADOPI experience in France which has effectively evolved into an educational campaign against copyright infringement.
- content consumers already have abundant choice over the source and format of content, so punitive measures against one channel may not solve the piracy problem if content can still be obtained via other illegal means.
- any perceived failure of this deal, and its ‘educational’ approach, could lend support to more draconian and punitive measures, so it is in consumers’ interest to see it succeed.
5. Industrial-scale piracy, on the other hand, must be tackled head-on, with the full weight of the law, in order to close down and discourage the real criminal enterprises that probably do far more damage to the content industry.
In any case, regardless of how you view this and other similar developments, it is always worth bearing in mind that we are only in a period of transition to a comprehensive digital existence. All current challenges and opportunities are therefore certain to change, as new technology and usage paradigms continue to drive and reveal ever more intriguing shifts in consumer behaviour. This battle is far from over.
Last month’s conference on copyright and technology provided plenty of food for thought from an array of speakers, organisations, viewpoints and agendas. Topics and discussions ran the gamut from the increasingly obvious (“business models are more important than technology”) to the downright bleeding-edge (“hypersonic activation of devices from outdoor displays”). There was something for everyone involved to take away. Read on for highlights.
The Mega Keynote interview: Mega CEO Vikram Kumar discussed how the new, law-abiding cloud storage service is proving attractive to professionals who want to use and pay for the space, security and privacy that Mega provides. This is a far cry from the notorious MegaUpload, and founder Kim Dotcom’s continuing troubles with charges of copyright infringement, but questions remain about the nature of the service – e.g. its end-to-end encryption approach, which effectively makes it opaque to outside scrutiny. Read more about it here.
Anti-piracy and the age of big data – MarkMonitor’s Thomas Sehested talked about the rise of data and content monitoring and anti-piracy services in what he describes as the data-driven media company. He also discussed the demise of content release windows, noting that mass, immediate release of content across multiple channels lowers piracy, but questioned whether it is more profitable.
Hadopi and graduated response – Hadopi’s Pauline Blassel gave an honest overview of the impact of Hadopi, including evidence of some reduction in piracy (from around 6 million to 4 million instances) before stabilisation. She also described how this independent public authority delivers graduated response in a variety of ways, from raising awareness to imposing penalties, while focusing primarily on what is known as PUR (‘Promotion des Usages Responsables’, i.e. promoting responsible use).
Automatic Content Recognition (ACR) and the 2nd screen – ACR comprises a core set of tools (including DRM, watermarking and fingerprinting), and the 2nd-screen opportunity (at least for broadcasters) is all about keeping TV viewership and relevance in the face of tough competition for people’s time and attention. This panel session discussed monetisation of second-screen applications, and the challenge that TV regulation is pervasive and country-specific. Broadcast rights law is aimed at protecting the broadcast signal, which is what triggers the 2nd-screen application (e.g. via ambient, STB or EPG-based recognition). This begs the question of what regulation should be applied to the 2nd screen, and what rights apply – e.g. ads on TV can be replaced on the 2nd screen, but what are the implications?
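To make the ambient-recognition trigger concrete, here is a toy sketch of the flow: a fingerprint of ambient audio is matched against an index of known broadcasts, and a hit tells the 2nd-screen app what is on the TV. All function names are invented for illustration, and the simple hash stands in for the robust perceptual fingerprinting that real ACR systems use.

```javascript
// Toy ACR lookup: reduce ambient audio samples to a coarse "fingerprint"
// and map it to a broadcast identifier, which the 2nd-screen app can use
// to fetch companion content. Real systems use robust perceptual
// fingerprints that tolerate noise; this exact-match hash does not.
function fingerprint(samples) {
  // Quantise each sample and fold it into a small integer signature.
  return samples.reduce((h, s) => (h * 31 + Math.round(s * 8)) % 100003, 0);
}

// Hypothetical index of known broadcast fingerprints.
const broadcastIndex = new Map();

function registerBroadcast(samples, channelId) {
  broadcastIndex.set(fingerprint(samples), channelId);
}

function recogniseAmbientAudio(samples) {
  // Returns the matching channel identifier, or null if unknown.
  return broadcastIndex.get(fingerprint(samples)) ?? null;
}
```

A recognised signal would then drive the 2nd-screen experience (synchronised stats, companion ads, etc.), which is exactly where the regulatory questions above begin.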
Update on the Copyright Hub – The keynote address by Sir Richard Hooper, chair of the Copyright Hub and co-author of the 2012 report Copyright Works: Streamlining Copyright Licensing for the Digital Age, was arguably the high point of the event. He made the point that although there are issues with copyright in the digital age, the creative industries need to get off their collective backsides and streamline the licensing process before asking for a change in copyright law. He gave examples of issues with the overly complex educational licensing process, and of how analogue processes are inadequate for the digital age (e.g. the lack of unique identifiers for copyright works).
The primary focus of the Copyright Hub, according to Sir Richard, is to enable high-volume, low-value transactions (e.g. to search, license and use copyright works legally) by individuals and SMEs. The top-tier content players already have dedicated resources for such activities, hence they are not a primary target of the Copyright Hub, but they too will benefit from no longer having to deal with trivial requests to license individual items (e.g. the use of popular songs in wedding videos on YouTube).
Next-phase work, and other challenges, for the Copyright Hub include: enabling consumer reuse of content, architectures for federated search, machine-to-machine transactions, an orphan works registry and mass digitisation (collective licensing), multi-licensing for multimedia content, and the need for global licensing. Some key messages and quotes from the ensuing Q&A include:
- “the Internet is inherently borderless and we must think global licensing, but need to walk before we can run”
- “user-centricity is key. People are happy not to infringe if easy / cheap to be legal”
- “data accuracy is vital, so Copyright Hub is looking at efforts from Linked Content Coalition and Global Repertoire Database”
- “Metadata is intrinsic to machine-to-machine transactions – do you know it is a crime to strip metadata from content?”
- “Moral rights may add to overall complexity”
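A hypothetical sketch may help picture the high-volume, low-value, machine-to-machine licensing the Hub is aiming at: resolve a work’s unique identifier to its licence terms, automatically, before reuse. Everything here – identifiers, field names and the in-memory registry – is invented for illustration; the Copyright Hub’s actual interfaces are not described in this post.

```javascript
// Invented registry mapping a work's unique identifier to licence terms.
// Accurate metadata (the unique identifier) is what makes this automatic
// machine-to-machine lookup possible at all.
const registry = new Map();

function registerWork(workId, terms) {
  registry.set(workId, terms); // terms: { permittedUses: [...], fee: ... }
}

// Machine-to-machine licence query: e.g. a video site checking, without
// human involvement, whether a song may be used in a wedding video.
function queryLicence(workId, intendedUse) {
  const terms = registry.get(workId);
  if (!terms) return { status: 'unknown-work' };
  return terms.permittedUses.includes(intendedUse)
    ? { status: 'licensable', fee: terms.fee }
    : { status: 'not-permitted' };
}
```

The point of the sketch is the shape of the transaction, not the data model: cheap, automated, per-item licensing only works if every work carries a reliable identifier – hence the emphasis on metadata in the Q&A above.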
As you can probably see from the above, this one-day event delivered the goods, and valuable insights, to an audience that included people from the creative and content industries, as well as technologists, legal practitioners, academics and government agencies. Kudos to MusicAlly, the event organiser, and to Bill Rosenblatt (conference chair) for a job well done.
I remember when the mere mention of DRM stirred up a frenzied reaction of blood-boiling anger, outrage and disgust from even the meekest of the meek. Thankfully those days are long gone, and DRM has been largely forgotten – or has it?
Sadly no, because DRM recently reared its dramatic head yet again following a decision by the World Wide Web Consortium (W3C) to bring video content protection into scope for discussion in their HTML5 Working Group. So what does this mean? Well, it depends on who you ask of course, because the usual pros vs. cons battle lines, championed by various organisations and pundits, have opened up with distinct perspectives on the matter. The following are summary points, culled from a quick web search on the topic.
Some viewpoints in support of the decision:
- Sir Tim Berners-Lee on encrypted content and the open Web – reiterated that W3C staff remain passionate about the open Web, and indeed abhor certain forms of content protection and DRM. However, he went on to explain that putting content protection in scope for discussion is the lesser evil, given that excluding the topic from the HTML WG discussions will not necessarily exclude it from anyone’s systems.
- W3C Encrypted Media Extensions (EME) Editor’s draft 17th September 2013 – According to the abstract, “the proposal extends HTMLMediaElement providing APIs to control playback of protected content.” Also, the specification does not define any particular content protection or DRM system, but instead it defines a common API that may be used to discover, select and interact with various such mechanisms / DRM solutions.
- ArsTechnica, “DRM in HTML5 is a victory for the open Web, not a defeat” – In this post, Peter Bright argues that EME will happen one way or another, especially given that some important companies (i.e. Microsoft, Google and Netflix) are actively developing the specification. Furthermore, distributors of protected video content already use DRM, albeit outside the Web (e.g. via Microsoft’s Silverlight, Adobe Flash and/or mobile apps). He concludes that EME will provide a way to deliver protected content via the Web, instead of just using proprietary applications and plug-ins.
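For a feel of what the “common API” looks like in practice, here is a minimal sketch of the EME flow as later standardised (the finalised API differs in detail from the September 2013 editor’s draft); the key system identifier and codec string are illustrative assumptions, not recommendations:

```javascript
// Describe the capabilities the page needs; EME matches this against
// whatever content protection mechanisms the browser/platform offers,
// without the specification mandating any particular DRM system.
function buildKeySystemConfig(codec) {
  return [{
    initDataTypes: ['cenc'],
    videoCapabilities: [{ contentType: 'video/mp4; codecs="' + codec + '"' }],
  }];
}

// In a browser, navigator.requestMediaKeySystemAccess is the EME entry
// point: it discovers and selects a matching key system, after which the
// resulting MediaKeys object is attached to the <video> element.
async function attachProtection(video, keySystem, codec) {
  const access = await navigator.requestMediaKeySystemAccess(
    keySystem, buildKeySystemConfig(codec));
  const mediaKeys = await access.createMediaKeys();
  await video.setMediaKeys(mediaKeys); // bind the DRM session to the element
  return access.keySystem;
}
```

Note what the sketch illustrates about the debate: the page only ever talks to this neutral API; the actual decryption happens in an opaque content decryption module supplied by the browser or platform, which is precisely the part the objectors highlight.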
Other viewpoints against the decision:
- The Electronic Frontier Foundation (EFF), “Lowering Your Standards: DRM and the Future of the W3C” – The EFF strongly objects to the inclusion of “playback of protected content” in the scope of the HTML Working Group’s new charter, stating that such a move would mean the controversial Encrypted Media Extensions could be included in the HTML5.1 standard, effectively ceding control of browsers to third parties (i.e. content providers). Furthermore, they argue, this could ultimately damage the W3C’s reputation as guardian of the open Web, and other media formats (e.g. images, fonts and music) may push for equivalent content protection standards, further fragmenting the Web.
- Boing Boing “W3C’s DRM for HTML5 sets the stage for jailing programmers…” – Cory Doctorow discusses how the decision will open the possibility of punitive fines or imprisonment for programmers who dare to attempt improving web browsers in ways that displease Hollywood.
- DefectiveByDesign “Tell W3C: We don’t want the Hollyweb” – Calls for the W3C to reject the EME proposal, stating that it would damage freedom on the Web and enable unethical, restrictive business models, as well as proliferation of DRM plug-ins needed to play protected media content.
Regardless of which side you take in this debate, it is probably disingenuous to think that DRM ever went away. If anything, it has been thriving in various digital content services and technologies, well outside the limelight and notoriety it had in the past – perhaps until now. One of the key things I learnt during my sojourn into the DRM debate over the last decade is that most content businesses are ultimately pragmatic: they now understand that suing customers (or casual pirates, depending on viewpoint) can be suicidal, hence the move away from dramatic headlines and towards developing services that users actually want to use and pay for. The saying holds true that the only good DRM system is one that is invisible, or transparent, to the end user.
It could be argued that this current debate has arisen because the Web is designed, and perceived by many, to be open and universal; yet it is this selfsame universality that allows even potentially restrictive models to have a place on the Web. In fighting for its own survival, and by openly considering the inclusion of something like content protection, the W3C is actually living up to the open and universal remit of the Web. However, a real danger remains that commercial interests (aka content businesses) will almost certainly seize this opportunity to compete using flawed and restrictive business models, which will only throw DRM in the faces of their users, and possibly restart litigious campaigns against them once users decide, yet again, that unrestricted (and literally free) content is best. Truly, those that do not learn from past mistakes are doomed to repeat them.
In conclusion, although this is probably more than a mere storm in the proverbial teacup, the signs portend that this too shall pass into the annals of DRM aftershocks, in the grand scheme of things. I say this with some confidence because, whilst the DRM battle rages on, the world of digital content, copyright and the Internet continues to evolve new opportunities and challenges that will reshape the digital landscape. A recent example concerns the IP value of curation, e.g. playlists as candidates for copyright (see Ministry of Sound versus Spotify).
BTW: I will be moderating a panel session on over-the-top (OTT) video content protection at the Copyright and Technology 2013 London conference later this week. My panel of experts will most likely have something interesting to say about DRM and the Web. Why not join the debate at the event if you are in London? Otherwise, I’ll keep you posted on this blog.