I believe that in a highly connected digital world, the future of content publishing lies in creating interlinked manifestations of a core concept or theme. I like to think of this as “multi(n) publishing” (where ‘n’ stands for any number of things, e.g. aspect / channel / facet / format / genre / sided / variant), or multi-publishing for short. To this end, I’ve created a framework which could prove very useful for conceptualising and executing multi-publishing projects. Read on to find out more.
- Why Multi-Publishing?
There is increasing evidence of an evolution in the way people consume digitally enabled content, e.g. watching a TV show whilst surfing the web, talking on the phone to a friend and posting comments on social media – all of which may or may not relate to each other or to a single topic. This puts enormous pressure on content creators and publishers to find new ways to engage their audience and deliver compelling content to people who live in a world awash with competing content, channels, devices and distractions. In the above scenario, broadcasters have tried, with varying degrees of success, to engage viewers with second-screen or multi-screen content (e.g. show on TV, cast info on website / mobile site, plus real-time interaction on social media – all related to the show). Furthermore, the average attention span of most users appears to have shrunk, and many prefer to ‘snack’ on content across devices and formats. This doesn’t bode well for the more traditional long-form content upon which many creative industries were established. As a result, many in the content production, publishing and marketing industries are seeking new ways to engage audiences across multiple devices and channels with ever more compelling content and user experiences.
- What is Multi-publishing?
In this context, the term “multi(n) publishing” (or multi-publishing) describes the manifestation of a core concept / theme as distinct but interlinked works across multiple media formats, channels and genres. This differs somewhat from related terms such as multi-format (or cross-media), multi-channel, single-source, or even multi-platform publishing – the last of which is mainly used by marketers to describe the practice of taking one thing and turning it into several products across a spectrum of online, offline and even ‘live’ experiential forms. The key difference is that multi-publishing encompasses them all, and more. In fact, the multi-publishing framework is closer to the information science idea of conceptualisation. Also, and perhaps more importantly, the various manifestations of multi-published content are not necessarily brand-identical to the originating (aka ‘native’) core concept, or to each other. Rather, each manifestation is intended to be unique and distinct, yet able to enhance the others and provide a fuller, more fulfilling experience of the overall core concept.
- How does it work?
In order to achieve the desired outcome of the whole being more than the sum of its parts, it makes sense for creators and publishers to bear in mind, right from the outset, that their works will likely be used, reused, decomposed, remixed and recomposed in many different ways (including new and novel expressions they couldn’t possibly imagine at the time of creation). Therefore, they must recognise where and how each piece of their output fits within the context of a multi-publishing content framework or architecture. The diagram below is just such a framework (in mindmap form) and demonstrates the narrative-like progression of a single core concept / theme across various stages and interlinked manifestations.
This is only an example of what content creators and their publishers must consider and prepare as part of their creative (inspiration) and publishing (exploitation) process. It requires the creation and/or identification of a core concept which is manifest in the expression of the art (e.g. in the story, song, prose, images, video, game, conversations or presentations), and which can be used to link each and every format, channel or medium in which the concept is expressed.
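To make the idea of a core concept linking every manifestation more concrete, here is a minimal sketch in Python. All of the names (CoreConcept, Manifestation, the sample titles and formats) are illustrative inventions, not part of any real framework:

```python
from dataclasses import dataclass, field

@dataclass
class Manifestation:
    """One expression of a core concept (all field names are illustrative)."""
    title: str
    media_format: str   # e.g. "ebook", "video", "podcast"
    channel: str        # e.g. "web", "mobile", "broadcast"
    genre: str = ""

@dataclass
class CoreConcept:
    """The originating ('native') concept that links every manifestation."""
    concept_id: str
    theme: str
    manifestations: list = field(default_factory=list)

    def add(self, m: Manifestation) -> None:
        self.manifestations.append(m)

    def related(self, media_format: str) -> list:
        """All manifestations of this concept in a given format."""
        return [m for m in self.manifestations if m.media_format == media_format]

# One hypothetical theme, several distinct but interlinked expressions
concept = CoreConcept("c-001", "The Heist")
concept.add(Manifestation("The Heist: The Novel", "ebook", "web"))
concept.add(Manifestation("The Heist: The Series", "video", "broadcast"))
concept.add(Manifestation("The Heist: Companion", "video", "mobile"))

print(len(concept.related("video")))  # → 2
```

The point of the sketch is the shared identifier: because every manifestation hangs off the same core concept, each one can reference and enhance the others rather than existing in isolation.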
Finally, the use of multi-publishing frameworks can also enable easier setup and automation of the tracking and recording of all usage transactions, and potentially any subsequent remuneration for creator(s) and publisher(s), in a transparent manner (perhaps using a trust mechanism such as blockchain). I will explore this particular topic in a subsequent post on this blog. In any case, one key question remains: should we protect the core concepts or algorithms at the heart of multi-publishing frameworks, and if so, what form should such protection take?
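The transparent usage-tracking idea above can be illustrated with a toy append-only ledger, where each transaction record carries the hash of its predecessor so that tampering is detectable. This is a simplified stand-in for a blockchain-style trust mechanism, not a real one; the class and field names are my own invention:

```python
import hashlib
import json

def _hash(record: dict) -> str:
    """Deterministic SHA-256 over a record's sorted JSON form."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

class UsageLedger:
    """Append-only, hash-chained log of content usage transactions.
    A sketch of the kind of trust mechanism a multi-publishing
    framework might use for transparent tracking and remuneration."""

    def __init__(self):
        self.entries = []

    def record(self, user: str, manifestation_id: str, action: str) -> dict:
        prev = self.entries[-1]["hash"] if self.entries else "genesis"
        entry = {"user": user, "item": manifestation_id,
                 "action": action, "prev": prev}
        entry["hash"] = _hash(entry)  # hash covers everything added so far
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """True only if no entry has been altered and the chain is intact."""
        prev = "genesis"
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if e["prev"] != prev or e["hash"] != _hash(body):
                return False
            prev = e["hash"]
        return True
```

Because each entry commits to the one before it, a publisher (or creator) can later verify the whole usage history before calculating any remuneration, without having to trust the party that stored it.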
- The copyright yin and technology yang – Copyright has always had to change and adapt to new and disruptive technologies (which typically impact the extant business models of the content industry), and each time it usually comes out even stronger and more flexible – the age of digital disruption is no exception. As my 5-year-old would say, “that glass is half full AND half empty”.
- UK Copyright Hub – “Simplify and facilitate” is a recurring mantra on the role of copyright in the digital economy. The UK Copyright Hub provides an exchange that is predicated on usage rights. It is a closely watched example of what is required for digital copyright and could easily become a template for the rest of the world.
- Copyright frictions still a challenge – “Lawyers love arguing with each other”, but they, and the excruciatingly slow process of policy making, have introduced a particular friction to copyright’s digital evolution. The pace of digital change has increased while policy making has slowed down, perhaps because there are now more people at the party.
- Time for some new stuff – Copyright takes the blame for many things (e.g. even the normal complexity of cross-border commerce). Various initiatives, including SOPA & PIPA / the Digital Economy Act / Hadopi / NZ’s 3 strikes, have stalled or been drastically cut back. It really is time for new stuff.
- Delaying the “time to street” – Fox described their anti-piracy efforts in relation to film release windows, in an effort to delay the “time to street” (aka pervasive piracy). These and other developments, such as fast-changing piracy business models, or the balance between privacy and piracy and the related technologies (e.g. Popcorn Time, anonymising proxies, cyberlockers etc.), have added more fuel to the fire.
- Rights languages & machine-to-machine communication – Somewhat reminiscent of efforts to use big data and analytics mechanisms to provide insight from structured and unstructured data sources. Think Hadoop-based rights translation and execution engines.
- The future of private copying – The UK’s copyright exceptions now allow individuals to make private copies of content they own. Although this may seem obvious, it has provoked fresh comments from content industry types and other observers, e.g.: when will technology replace the need for people to make private copies? And what about the issues around keeping private copies in the cloud or in cyberlockers?
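The machine-to-machine communication point in the list above can be sketched in miniature: a rights “language” reduced to structured grants that a machine can evaluate without human intervention. The grant schema, work ID and field names here are entirely hypothetical, and a real rights-expression language (and any Hadoop-scale engine built on one) would be far richer:

```python
from datetime import date

# A toy rights "language": each grant names a work, an action,
# a territory and a validity window. (All values are made up.)
GRANTS = [
    {"work": "W-123", "action": "stream", "territory": "UK",
     "start": date(2014, 1, 1), "end": date(2016, 1, 1)},
    {"work": "W-123", "action": "download", "territory": "UK",
     "start": date(2014, 1, 1), "end": date(2014, 6, 30)},
]

def permitted(work: str, action: str, territory: str, on: date) -> bool:
    """Machine-side evaluation of a rights query - no human in the loop."""
    return any(
        g["work"] == work and g["action"] == action
        and g["territory"] == territory and g["start"] <= on <= g["end"]
        for g in GRANTS
    )

print(permitted("W-123", "stream", "UK", date(2015, 6, 1)))    # → True
print(permitted("W-123", "download", "UK", date(2015, 6, 1)))  # → False
```

Once rights are expressed this way, the “translation and execution” job becomes mapping between different publishers’ grant formats and running queries like the above at scale.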
According to a BBC news report, it seems that a deal to tackle digital piracy is about to be realised between major UK ISPs and key content and entertainment industry organisations. Given that it took several years of wrangling to get to this point, the obvious question is whether this particular deal will work to the satisfaction of all concerned.
The report describes how the UK ISPs (i.e. BT, Sky, TalkTalk and Virgin Media) will be required to send ‘educational’ letters, or alerts, to users they believe are downloading illegal content. Among other things, the deal is predicated on the belief that increased awareness of legal alternatives will steer such users away from illegal content acquisition, casual infringement and piracy. This voluntary alert system will be funded mainly by the content industry, who in return will get monthly stats on the alerts dished out by the ISPs. Overall, this deal is far removed from the more punitive “3 strikes” system originally mooted in the early days of the Digital Economy Act.
As with most cases, there are two or more sides to the story, and below are some considerations to take into account before drawing your own conclusions:
1. Critics of this deal, i.e. presumably the content providers, will consider this too soft an approach to be effective in curbing the very real and adverse economic impact of piracy.
2. Supporters, including ISPs, will likely see this as a fair compromise for securing their cooperation in tackling piracy, and a win-win for them and their customers.
3. Another perspective comprises the view of regulators and government intermediaries (aka brokers of this deal), who likely consider it a practical compromise which can always be tweaked depending on its efficacy or lack thereof.
4. There are probably many other viewpoints to be considered, but, in my opinion, the most important perspective belongs to the end-users who ultimately stand to benefit or suffer from the success or failure of this initiative, especially since:
- there is evidence that education trumps punishment when it comes to casual content piracy – e.g. the HADOPI experience in France which has effectively evolved into an educational campaign against copyright infringement.
- content consumers already have plenty of choice over the source and format of content, so punitive measures aimed at one channel may not solve the piracy problem if determined users can simply get content via other illegal means.
- any perceived failure of this deal, and its ‘educational’ approach, could lend support to more draconian and punitive measures; it is therefore in the interest of consumers to see it succeed.
5. Industrial-scale piracy, on the other hand, must be tackled head-on, with the full weight of the law, in order to shut down and discourage the real criminal enterprises that probably do far more damage to the content industry.
In any case, regardless of how you view this and other similar developments, it is always worth bearing in mind that we are only in a period of transition to a fully digital existence; all current challenges and opportunities are therefore certain to change, as new technology and usage paradigms continue to drive and reveal ever more intriguing changes in consumer behaviour. This battle is far from over.
Last month’s conference on copyright and technology provided plenty of food for thought from an array of speakers, organisations, viewpoints and agendas. Topics and discussions ran the gamut from the increasingly obvious (“business models are more important than technology”) to the downright bleeding edge (“hypersonic activation of devices from outdoor displays”). There was something to take away for everyone involved. Read on for highlights.
The Mega keynote interview: Mega’s CEO Vikram Kumar discussed how the new and law-abiding cloud storage service is proving attractive to professionals who want to use and pay for the space, security and privacy that Mega provides. This is a far cry from the notorious MegaUpload, and founder Kim Dotcom’s continuing troubles with charges of copyright infringement, but there are still questions about the nature of the service – e.g. the end-to-end encryption approach, which effectively makes it opaque to outside scrutiny. Read more about it here.
Anti-piracy and the age of big data – MarkMonitor’s Thomas Sehested talked about the rise of data / content monitoring and anti-piracy services in what he describes as the data-driven media company. He also discussed the demise of content release windows, and how mass / immediate release of content across multiple channels lowers piracy, but questioned whether it is more profitable.
Hadopi and graduated response – Hadopi’s Pauline Blassel gave an honest overview of the impact of Hadopi, including evidence of some reduction in piracy (from roughly 6M to 4M) before stabilisation. She also described how this independent public authority delivers graduated response in a variety of ways, from raising awareness to imposing penalties, while focusing primarily on what is known as PUR (‘Promotion des Usages Responsables’, i.e. promoting responsible usage).
Automatic Content Recognition (ACR) and the 2nd screen – ACR is a core set of tools (including DRM, watermarking and fingerprinting), and the 2nd-screen opportunity (at least for broadcasters) is all about keeping TV viewership and relevance in the face of tough competition for people’s time and attention. This panel session discussed monetisation of second-screen applications, along with the challenge that TV regulation is pervasive and country-specific. Broadcast rights law is aimed at protecting broadcast signals, yet it is the broadcast signal that triggers the 2nd-screen application (e.g. via ambient / STB / EPG based recognition). This begs the question of what regulation should apply to the 2nd screen, and what rights apply – e.g. ads on TV can be replaced on the 2nd screen, but what are the implications?
Update on the Copyright Hub – The keynote address by Sir Richard Hooper, chair of the Copyright Hub and co-author of the 2012 report Copyright Works: Streamlining Copyright Licensing for the Digital Age, was arguably the high point of the event. He made the point that although there are issues with copyright in the digital age, the creative industries need to get off their collective backsides and streamline the licensing process before asking for a change in copyright law. He gave examples of issues with the overly complex educational licensing process, and of how analogue processes are inadequate for the digital age (e.g. the lack of unique identifiers for copyright works).
The primary focus of the Copyright Hub, according to Sir Richard, is to enable high-volume, low-value transactions (e.g. to search, license and use copyright works legally) by individuals and SMEs. The top-tier content players already have dedicated resources for such activities, hence they’re not a primary target of the Copyright Hub, but they too will benefit, by removing the need to deal with trivial requests to license individual items (e.g. to use popular songs for wedding videos on YouTube).
Next-phase work, and other challenges, for the Copyright Hub include: enabling consumer reuse of content, architectures for federated search, machine-to-machine transactions, an orphan works registry & mass digitisation (collective licensing), multi-licensing for multimedia content, as well as the need for global licensing. Some key messages and quotes from the ensuing Q&A included:
- “the Internet is inherently borderless and we must think global licensing, but need to walk before we can run”
- “user-centricity is key. People are happy not to infringe if easy / cheap to be legal”
- “data accuracy is vital, so Copyright Hub is looking at efforts from Linked Content Coalition and Global Repertoire Database”
- “Metadata is intrinsic to machine-to-machine transactions – do you know it is a crime to strip metadata from content?”
- “Moral rights may add to overall complexity”
As you can probably see from the above, this one day event delivered the goods and valuable insights to the audience, which included people from the creative / content industries, as well as technologists, legal practitioners, academics and government agencies. Kudos to MusicAlly, the event organiser, and to Bill Rosenblatt, (conference chair), for a job well done.
I remember when the mere mention of DRM stirred up a frenzied reaction of blood-boiling anger, outrage and disgust, even from the meekest of the meek. Thankfully those days are long gone, and DRM has been largely forgotten – or has it?
Sadly no, because DRM recently reared its dramatic head yet again, following a decision by the World Wide Web Consortium (W3C) to bring video content protection into scope for discussion in their HTML5 Working Group. So what does this mean? Well, it depends on who you ask, of course, because the usual pros vs. cons battle lines, championed by various organisations and pundits, have opened up with distinct perspectives on the matter. The following are summary points, culled from a quick web search on the topic.
Some viewpoints in support of the decision:
- Sir Tim Berners-Lee on Encrypted content and the Open Web – reiterated that W3C staff remain passionate about the open Web, and indeed abhor certain forms of content protection and DRM. However, he went on to explain that putting content protection in scope for discussion is the lesser evil, given that excluding the topic from the HTML WG’s discussions would not necessarily exclude it from anyone’s systems.
- W3C Encrypted Media Extensions (EME) Editor’s draft 17th September 2013 – According to the abstract, “the proposal extends HTMLMediaElement providing APIs to control playback of protected content.” Also, the specification does not define any particular content protection or DRM system, but instead it defines a common API that may be used to discover, select and interact with various such mechanisms / DRM solutions.
- ArsTechnica “DRM in HTML5 is a victory for the open Web, not a defeat” – In this post, Peter Bright argues that EME will happen, one way or another, especially given how some important companies (i.e. Microsoft, Google and Netflix) are actively developing the specification. Furthermore, distributors of protected video content already use DRM, albeit outside the Web (e.g. via Microsoft’s Silverlight, Adobe Flash and / or mobile apps). Finally, he concludes that EME will provide a way to deliver protected content via the Web instead of relying on proprietary applications and plug-ins.
Other viewpoints against the decision:
- The Electronic Frontier Foundation (EFF), “Lowering Your Standards: DRM and the Future of the W3C” – The EFF strongly objects to the inclusion of “playback of protected content” in the scope of the HTML Working Group’s new charter, stating that such a move would mean the controversial Encrypted Media Extensions could be included in the HTML5.1 standard, which would effectively cede control of browsers to third parties (i.e. content providers). Furthermore, they argue, this could ultimately damage the W3C’s reputation as guardian of the open Web, and other media formats (e.g. images, fonts and music) may push for equivalent content protection standards, over a rapidly fragmenting Web.
- Boing Boing “W3C’s DRM for HTML5 sets the stage for jailing programmers…” – Cory Doctorow discusses how the decision will open the possibility of punitive fines or imprisonment for programmers who dare to attempt improving web browsers in ways that displease Hollywood.
- DefectiveByDesign “Tell W3C: We don’t want the Hollyweb” – Calls for the W3C to reject the EME proposal, stating that it would damage freedom on the Web and enable unethical, restrictive business models, as well as proliferation of DRM plug-ins needed to play protected media content.
Regardless of which side you take in this debate, it is probably disingenuous to think that DRM ever went away; if anything, it has been thriving in various digital content services and technologies, well outside the limelight and notoriety it had in the past – perhaps until now. One of the key things I learnt during my sojourn into the DRM debate over the last decade was that most content businesses are ultimately pragmatic in nature: they now understand that suing customers (or casual pirates, depending on viewpoint) can be suicidal, hence the move away from dramatic headlines and towards developing services that users actually want to use and pay for. The saying holds true that the only good DRM system is one that is invisible or transparent to the end user.
It could be argued that this current debate has arisen because the Web is designed, and perceived by many, to be open and universal – but it is this selfsame universality that allows even potentially restrictive models to have a place on the Web. In fighting for its own survival, and by openly considering the inclusion of something like content protection, the W3C is actually living up to the open and universal remit of the Web. However, a real danger remains that commercial interests (aka content businesses) will almost certainly seize this opportunity to compete using flawed and restrictive business models, which will only throw DRM in the faces of their users, and possibly restart litigious campaigns against those users once they decide, yet again, that unrestricted (and literally free) content is best. Truly, those who don’t learn from past mistakes are doomed to repeat them.
In conclusion, although this is probably more than a mere storm in the proverbial teacup, the signs portend that, in the grand scheme of things, this too shall pass into the annals of DRM aftershocks. I say this with some confidence because, whilst the DRM battle rages on, the world of digital content, copyright and the Internet continues to evolve new opportunities and challenges that will reshape the digital landscape. A recent example concerns the IP value of curation (e.g. playlists) as a candidate for copyright protection (see Ministry of Sound versus Spotify).
BTW: I will be moderating a panel session, discussing Over the Top (OTT) video content protection, at the Copyright and Technology 2013 London conference, later this week. My panel of experts will most likely have something interesting to say about DRM and the Web. Why not join the debate at the event, if you are in London, otherwise I’ll keep you posted on this blog.