I believe that in a highly connected digital world, the future of content publishing lies in creating interlinked manifestations of a core concept or theme. I like to think of this as “multi(n) publishing” (where ‘n’ stands for any number of things, e.g. aspect / channel / facet / format / genre / sided / variant), or multi-publishing for short. To this end, I’ve created a framework which could prove very useful for conceptualising and executing multi-publishing projects. Read on to find out more.
- Why Multi-Publishing?
There is increasing evidence of an evolution in the way people consume digitally enabled content, e.g. watching a TV show whilst surfing the web, talking on the phone to a friend and posting comments on social media – all of which may or may not relate to each other or to a single topic. This has put enormous pressure on content creators and publishers to find new ways to engage their audience and deliver compelling content to people who live in a world awash with competing content, channels, devices and distractions. In the above scenario, broadcasters have tried, with varying degrees of success, to engage viewers with second-screen or multi-screen content (e.g. show on TV, cast info on the website / mobile site, plus real-time interaction on social media – all related to the show). Furthermore, the average attention span of most users appears to have shrunk, and many prefer to ‘snack’ on content across devices and formats. This doesn’t bode well for the more traditional long-form content upon which many creative industries were established. As a result, many in the content production, publishing and marketing industries are seeking new ways to engage audiences across multiple devices and channels with ever more compelling content and user experiences.
- What is Multi-Publishing?
In this context, the term “multi(n) publishing” (or multi-publishing) describes the manifestation of a core concept / theme as distinct but interlinked works across multiple media formats, channels and genres. This is somewhat different from related terms such as multi-format (or cross-media), multi-channel, single-source, or even multi-platform publishing – the last of which is mainly used by marketers to describe the practice of taking one thing and turning it into several products across a spectrum of online, offline and even ‘live’ experiential forms. The key difference is that multi-publishing encompasses them all, and more. In fact, the multi-publishing framework is closer to the information science idea of conceptualisation. Also, and perhaps more importantly, the various manifestations of multi-published content are not necessarily brand-identical to the originating (aka ‘native’) core concept, or to each other. Each manifestation is intended to be unique and distinct, yet the manifestations enhance one another and together provide a fuller and more fulfilling experience of the overall core concept.
- How does it work?
In order to achieve the desired outcome of the whole being more than the sum of its parts, it makes sense for creators and publishers to bear in mind, right from the outset, that their works will likely be used, reused, decomposed, remixed and recomposed in many different ways (including new and novel expressions that they couldn’t possibly imagine at the time of creation). Therefore, they must recognise where and how each piece of their output fits within the context of a multi-publishing content framework or architecture. The diagram below is just such a framework (in mindmap form) and demonstrates the narrative-like progression of a single core concept / theme across various stages and interlinked manifestations.
This is only an example of what content creators and their publishers must consider and prepare as part of their creative (inspiration) and publishing (exploitation) process. It requires the creation and/or identification of a core concept which is manifest in the expression of the art (e.g. in the story, song, prose, images, video, game, conversations or presentations), and which can be used to link each and every format, channel or medium in which the concept is expressed.
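To make this more concrete, below is a minimal sketch (in Python) of how such a framework might be modelled in software. The class and field names (CoreConcept, Manifestation, work_id and so on) are purely illustrative assumptions of mine, not part of any published specification:

```python
from dataclasses import dataclass, field

@dataclass
class Manifestation:
    """One distinct expression of a core concept (e.g. a song, eBook or game)."""
    work_id: str                             # hypothetical unique id for this work
    fmt: str                                 # format, e.g. "ebook", "video", "game"
    channel: str                             # channel, e.g. "web", "mobile", "broadcast"
    links: set = field(default_factory=set)  # ids of related manifestations

@dataclass
class CoreConcept:
    """The originating ('native') concept that every manifestation traces back to."""
    concept_id: str
    theme: str
    manifestations: dict = field(default_factory=dict)

    def add(self, m: Manifestation) -> None:
        # Interlink the new manifestation with every existing one, so each
        # work can point audiences to its siblings and to the core concept.
        for existing in self.manifestations.values():
            existing.links.add(m.work_id)
            m.links.add(existing.work_id)
        self.manifestations[m.work_id] = m

# Usage: one theme, three interlinked manifestations across formats / channels.
concept = CoreConcept("c1", "a reluctant time-traveller")
concept.add(Manifestation("w1", "ebook", "web"))
concept.add(Manifestation("w2", "video", "broadcast"))
concept.add(Manifestation("w3", "game", "mobile"))
print(concept.manifestations["w1"].links)  # {'w2', 'w3'} (order may vary)
```

The essential design choice here is that the core concept owns the registry of manifestations, and every new work is automatically interlinked with its siblings, so any one manifestation can lead an audience to all the others.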
Finally, the use of multi-publishing frameworks can also enable easier setup and automation of the tracking and recording of all usage transactions, and potentially any subsequent remuneration for creator(s) and publisher(s), in a transparent manner (perhaps using a trust mechanism such as blockchain). I will explore this particular topic in a subsequent post on this blog. In any case, one key question remains: should we protect the core concepts or algorithms at the heart of multi-publishing frameworks, and if so, what form should such protection take?
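For a flavour of what such tracking might look like, here is a toy sketch of a tamper-evident usage ledger, where each recorded transaction is hash-chained to the one before it – the simplest ingredient of the blockchain-style trust mechanism mentioned above. The record_usage function, its fields and the sample ids are all hypothetical:

```python
import hashlib
import json
import time

def record_usage(ledger: list, work_id: str, user: str, action: str) -> None:
    """Append a usage transaction, chained to the previous entry by hash.

    A toy illustration only; a real system would use a proper
    distributed ledger with signatures and consensus.
    """
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    entry = {
        "work_id": work_id,  # which manifestation was used
        "user": user,        # who used it
        "action": action,    # e.g. "stream", "remix", "excerpt"
        "ts": time.time(),
        "prev": prev_hash,   # link to the previous transaction
    }
    # Hashing the entry together with the previous hash makes any
    # retroactive edit to earlier records detectable.
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["hash"] = hashlib.sha256(payload).hexdigest()
    ledger.append(entry)

ledger = []
record_usage(ledger, "w2", "alice", "stream")  # hypothetical ids
record_usage(ledger, "w2", "bob", "remix")
```

Because each entry's hash covers the previous entry's hash, any downstream remuneration calculation can trust that the usage history has not been quietly rewritten.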
Last month’s conference on copyright and technology provided plenty of food for thought from an array of speakers, organisations, viewpoints and agendas. Topics and discussions ran the gamut from the increasingly obvious (“business models are more important than technology”) to the downright bleeding-edge (“hypersonic activation of devices from outdoor displays”). There was something to take away for everyone involved. Read on for highlights.
The Mega Keynote interview – Mega’s CEO Vikram Kumar discussed how the new and law-abiding cloud storage service is proving attractive to professionals who want to use and pay for the space, security and privacy that Mega provides. This is a far cry from the notorious MegaUpload, and founder Kim Dotcom’s continuing troubles with charges of copyright infringement, but there are still questions about the nature of the service – e.g. the end-to-end encryption approach, which effectively makes it opaque to outside scrutiny. Read more about it here.
Anti-piracy and the age of big data – MarkMonitor’s Thomas Sehested talked about the rise of data / content monitoring and anti-piracy services in what he describes as the data-driven media company. He also discussed the demise of content release windows, noting that mass / immediate release of content across multiple channels lowers piracy, but questioned whether it is more profitable.
Hadopi and graduated response – Hadopi’s Pauline Blassel gave an honest overview of the impact of Hadopi, including evidence of some reduction in piracy (from roughly 6M infringers down to 4M) before stabilisation. She also described how this independent public authority delivers graduated response in a variety of ways, from raising awareness to imposing penalties, while focusing primarily on what is known as PUR (from the French ‘Promotion des Usages Responsables’, i.e. promoting responsible use).
Auto Content Recognition (ACR) and the 2nd screen – ACR covers a core set of tools (including DRM, watermarking and fingerprinting), and the 2nd-screen opportunity (at least for broadcasters) is all about keeping TV viewership and relevance in the face of tough competition for people’s time and attention. This panel session discussed monetisation of second-screen applications, and the challenges posed by TV being regulated, pervasive and country-specific. Broadcast rights law is aimed at protecting broadcast signals, and it is the broadcast signal that triggers the 2nd-screen application (e.g. via ambient / STB / EPG-based recognition). This raises the question of what regulation should apply to the 2nd screen, and what rights apply – e.g. ads on TV can be replaced on the 2nd screen, but what are the implications?
Update on the Copyright Hub – The keynote address by Sir Richard Hooper, chair of the Copyright Hub and co-author of the 2012 report Copyright Works: Streamlining Copyright Licensing for the Digital Age, was arguably the high point of the event. He made the point that although there are issues with copyright in the digital age, the creative industries need to get off their collective backsides and streamline the licensing process before asking for a change in copyright law. He gave examples of the overly complex educational licensing process, and of analogue processes that are inadequate for the digital age (e.g. the lack of unique identifiers for copyright works).
The primary focus of the Copyright Hub, according to Sir Richard, is to enable high-volume, low-value transactions (e.g. to search, license and use copyright works legally) by individuals and SMEs. The top-tier content players already have dedicated resources for such activities, so they’re not a primary target of the Copyright Hub, but they’ll also benefit as it removes the need to deal with trivial requests to license individual items (e.g. to use popular songs in wedding videos on YouTube).
Next-phase work and other challenges for the Copyright Hub include: enabling consumer reuse of content, architectures for federated search, machine-to-machine transactions, an orphan works registry & mass digitisation (collective licensing), multi-licensing for multimedia content, as well as the need for global licensing. Some key messages and quotes from the ensuing Q&A include:
- “the Internet is inherently borderless and we must think global licensing, but need to walk before we can run”
- “user-centricity is key. People are happy not to infringe if easy / cheap to be legal”
- “data accuracy is vital, so Copyright Hub is looking at efforts from Linked Content Coalition and Global Repertoire Database”
- “Metadata is intrinsic to machine-to-machine transactions – do you know it is a crime to strip metadata from content?”
- “Moral rights may add to overall complexity”
As you can probably see from the above, this one-day event delivered the goods and valuable insights to the audience, which included people from the creative / content industries, as well as technologists, legal practitioners, academics and government agencies. Kudos to MusicAlly, the event organiser, and to Bill Rosenblatt (conference chair) for a job well done.
Last week, I attended a breakfast meeting at the House of Commons to discuss and reflect on practical issues around implementing the recommendations of the Hargreaves Report, as well as ways in which the IP system can evolve to better capture the benefits of 21st-century business and technology opportunities.
This event, organised by the Industry and Parliament Trust, featured brief talks by Professor Ian Hargreaves (author of the IP Review report & recommendations – download it here), Ben White (Head of IP at the British Library), and Nico Perez (co-founder of startup, MixCloud), plus Q&A style discussions with the attending group of politicians and business people from relevant industries. Some key observations and comments are:
- London has the largest cluster of IP-related start-ups in Europe, as well as the continent’s biggest hub for VCs
- There has been a lot of international interest in the Hargreaves report and recommendations (the good professor regularly gets calls from interested observers across the globe). Also, the review findings and recommendations had good traction with the UK government.
- Digital economy versus creative economy: are they one and the same (i.e. is there, and/or should there really be, a difference)?
- The larger creative industry players (e.g. publishers), and their lobbyists, are not in full agreement with the review findings and / or recommendations, and remain firmly resistant to change
- According to one attendee, the interests of creative stakeholders (e.g. content creators) were not well represented or served by the review findings and recommendations
- Collecting societies act like de facto monopolies, which can make life difficult for some more innovative start-ups
- Broadcast TV players are trying to innovate and catch up with what consumers are already doing in their homes, but the current IP system is not sufficiently geared towards enabling such initiatives.
Note: Further information, comments and observations can be found in the IPT blog post about this event.
The upshot of the above points, in my opinion, is that a new / evolved IP system must be geared towards a dual target: simplifying and facilitating the use and reuse of IP works, especially in the digital realm. Such a focus would undoubtedly go a long way towards addressing the legion of non-technological challenges faced by most innovators, entrepreneurs and investors in the creative digital industries. For example, according to an article published by MIT Technology Review (see: The Library of Utopia), “the major problem with constructing a universal library nowadays has little to do with technology. It’s the thorny tangle of legal, commercial, and political issues that surrounds the publishing business.”
Pretty much the same issues can be found in similar ventures within publishing and other major creative industries, e.g. music (think cross-border licensing for the much-vaunted Celestial Jukebox), or a global film and image library (e.g. a mash-up of Hulu, Netflix, Corbis and Getty Images). In all cases, technology is not the stumbling block; the bigger challenges lie in some combination of business strategy, commercial models, and the legal / political / cultural mindsets encountered along the way.
Having said that, it can be argued that such hurdles are not sustainable, for various reasons – not least that individuals (or customers, casual pirates, consumers, freetards etc. – take your pick) are already way ahead of the curve in terms of digital content / technology, and will often use it exactly as they see fit.
This means that established incumbent players in the creative industries are forever playing a reactive / catch-up game, instead of pursuing or encouraging discovery of the next big thing. As a result, most disruptive propositions will invariably have a high impact on established business models, especially if and when they harness the natural instincts of individual users. An interesting example could be the recently launched Google Drive, complete with built-in OCR capability (which will enable users to digitize and search scanned content). Could this ultimately lead to a user generated version of Google Books?
To conclude, an IP system worthy of the 21st century is an urgent necessity, but there is also a pressing need to keep the big picture in mind: the Internet is a global enabler / platform, therefore any new IP system must likewise be global in scope. The UK, with its wealth of creative talent, plus such efforts as the IP review and recommendations, may be in a unique position to provide some leadership on the best way forward for IP in the 21st century.
Recent developments in the world of publishing clearly demonstrate yet again that the primary objective of the content industry is to make a tidy profit. Nothing wrong with that, if you ask me; however, it usually turns into a rather sticky mess when that pursuit is clouded by accusations of skulduggery, conspiracy and outright price fixing.
I refer to a recent lawsuit filed by the US Justice Department against Apple and five major book publishers, over allegations of conspiracy, collusion and price fixing. According to this article from the Wall Street Journal, it could change the course of a rapidly expanding eBook publishing industry. But how so, you ask?
Well, it really comes down to opposing business models (i.e. the so-called agency versus wholesale approaches to eBook pricing). On one hand, an agent such as Apple will allow publishers to set their own price, and take a cut (in this case 30%) from sales on its iBooks platform. On the other hand, under a wholesale pricing model the retailer (e.g. Amazon or Barnes & Noble) sets the price for eBooks and can effectively apply discounts as it wishes (even if that means selling eBooks at a loss). Obviously, the latter scenario leaves publishers with less control over prices, and consequently profits, hence the opportunity to take advantage of a more favourable option could not fail to be attractive.
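A quick back-of-the-envelope comparison makes the trade-off plain. The 30% agency cut is from the case above; the 50% wholesale rate and the $12.99 list price are assumed figures, for illustration only:

```python
def agency_revenue(list_price: float, agent_cut: float = 0.30) -> float:
    """Publisher's take when it sets the retail price and the agent keeps a cut."""
    return list_price * (1 - agent_cut)

def wholesale_revenue(list_price: float, wholesale_rate: float = 0.50) -> float:
    """Publisher's take when the retailer buys at a wholesale rate and then
    sets the consumer price itself (possibly below cost)."""
    return list_price * wholesale_rate

price = 12.99  # illustrative eBook list price
print(f"Agency (30% cut): publisher receives ${agency_revenue(price):.2f}")     # $9.09
print(f"Wholesale (50%):  publisher receives ${wholesale_revenue(price):.2f}")  # $6.50
```

The per-unit figures matter less than the control: under agency the publisher decides what the consumer pays, whereas under wholesale the retailer does – which is precisely the lever at the heart of the lawsuit.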
However, the question remains about the value proposition for consumers, who are themselves increasingly embracing eBooks for their convenience, ease of use and, perhaps more to the point, their huge potential for significantly lower prices overall. One might argue that eBooks do not require paper, glue, physical stores / shelf space or any significant distribution / transport costs, and therefore really shouldn’t be priced anything close to their physical versions. Surely this quest to keep prices high can only favour publishers, and their bottom lines?
So what are the key arguments / rationale for keeping eBook prices artificially high? Perhaps the main reason has to do with the high operating costs incurred by large publishers, as well as the need to maintain a powerful marketing and promotional machinery. Furthermore, it may also be argued that lower-cost eBooks are somehow cannibalising the margins to be had from physical books. Whatever the case, it seems publishers stand to lose out if they don’t do something (innovative?) to counter the effects of change.
Hmm, now where have we seen this before, (and how did that industry cope / survive)? Ah, yes, the music industry went through something similar, except they chose to sue those pirates and freeloaders (aka the people formerly known as customers), that supposedly ‘stole their bottom line’. However, they seem to have found other ways to complement dwindling revenue streams, e.g. via ticket sales for live performances. By the way, death may no longer prevent artistes from performing before a live audience, assuming this deceased artist hologram idea catches on.
Luckily the book publishing industry doesn’t have to take quite so drastic a measure, especially as it has been shown time and again that new media formats and channels do not necessarily mean the complete demise of existing ones. This is arguably the perfect time for publishers to embrace even bolder / more innovative thinking to discover complementary initiatives that will bolster an industry under threat, real or imagined. They must observe and capitalise on consumer trends and emergent user behaviours. For example, the sheer capacity, variety and anonymity (i.e. no tell-tale covers) of reading material to be found on the average eBook reader means that users now carry, consume and explore hitherto unthinkable (at least in public) subject matter. The current boom in the romantic erotica sub-genre, aka Mommy Porn, is an interesting case in point.
Perhaps even more fundamental is the need to seriously consider the verboten idea of evolving copyright into something much better aligned with the digital age. Unfortunately, that will be a tough sell to the publishing industry, if this report of a speech given by HarperCollins International’s chief executive at the London Book Fair is anything to go by. According to the article, “others in the book trade, including the Publishers Association” have criticised the recent Hargreaves Review of copyright, which some feel could weaken the current copyright regime. As you may have gathered by now, I don’t subscribe to that point of view, but then I am only an author and may not see things in quite the same light as a successful publisher might.
In many ways, this whole situation could be seen as a remix of the circumstances surrounding the birth of copyright. In 1710, the printing industry lobbied for the creation of a law to govern the rights to print or reproduce works (now known as the Statute of Anne), in order to protect their interests and those of the authors / creators of said works. Copyright is essentially an artificial system, one which routinely needs a degree of manual intervention whenever a new and disruptive content technology or consumer trend emerges. That, in my opinion, is the fundamental flaw with copyright which any revision thereof must try to address. In an age of multi-platform, multi-channel and multi-format publishing, there really is no place (or time) for manual intervention each time a new and disruptive trend, challenge or opportunity presents itself. I for one would be more than happy to attempt to demonstrate just how such a system could work (based on real copyright content), but then I would probably need a hefty six-figure advance from some far-sighted multi-publisher to make it happen. Who says there is no future for publishing?