Archive

Archive for the ‘Copyright’ Category

Copyright, Blockchain, Technology and the State of Digital Piracy

January 15, 2017 Leave a comment
The next installment of one of my favourite conferences on copyright and technology is right around the corner, on January 24th in NYC, and as usual it promises some interesting debate, controversy and hot-off-the-press insights into the murky world of copyright business, technology, and legislation. Plus, this year it also features a panel on the game-changing technology of blockchain and its myriad disruptive applications across entire industries, including copyright and the creative industries.

Thankfully, the inclusion of this panel session recognises the never-ending role of new and innovative technologies in shaping the evolution of copyright. Ever since the first mass-copy technology (i.e. the printing press) raised questions of rights ownership and due recompense for works of the mind, new technologies for replicating and sharing creative content have driven the wheel of evolution in this area. Attendees will doubtless benefit from the insight and expertise of this panel of speakers, as well as moderator and Program Chair Bill Rosenblatt, who questioned in a recent blog post the practicality, relevance and usefulness of blockchain in a B2C context for copyright. You are in for a treat.

This is a very exciting period of wholesale digital transformation, and as I mentioned once or twice in previous articles and blog posts, the game is only just beginning for potential applications of blockchain, cryptocurrencies, smart licences and sundry trust mechanisms in the digital domain. In an age of ubiquitous content and digital access, the focus of copyright is rightfully shifting away from copying and towards the actual usage of digital content, which brings added complexity to an already complex and subjective topic. It is far too early to tell if blockchain can provide a comprehensive answer to this challenge.

The Copyright and Technology conference series has never failed to provide thought-provoking insights and debates driven by expert speakers across multiple industries. In fact, I recently reconnected with a couple of previous speakers, Dominic Young and Chris Elkins, who are both still very active, informed and involved in the copyright and technology agenda. Dominic, ex-CEO of the UK’s Digital Catapult, is currently working on a hush-hush project that could transform the B2C transaction space. Chris is a co-founder of Muso, a digital anti-piracy organisation which has successfully secured additional funding to expand its global footprint with innovative approaches to anti-piracy. For example, if you have ever wondered which countries are most active in media piracy, look no further than Muso’s big-data-based state of digital piracy reports. Don’t say I never tell you anything.

In any case, I look forward to hearing attendees’ impressions of the Copyright and Technology 2017 conference, which unfortunately I’m unable to attend or participate in this time. In the meantime, I’ll continue to spend my spare time, or whatever brain capacity I have left, on pro-bono activities that allow me to meet, mentor, coach and advise some amazing startups on the dynamic intersection of IP, business and technology. More on that in another post.

More Perils of Reusing Digital Content

February 7, 2016 1 comment
Some time ago I wrote an article and blog post entitled “The Perils of Reusing Digital Content”, looking at the key challenges facing users of digital content, which, thanks to the power of computing and the Internet, has become ever more easily available, transferable and modifiable. It says a lot about the age in which we live that this is still not universally perceived to be a good thing. That post also explored the Creative Commons model as a complementary alternative to a woefully inadequate and somewhat anachronistic copyright system in the digital age. Since then the situation has become even more complex and challenging, thanks to the introduction of newer technologies (e.g. IoT), more content (data, devices and channels), and novel trust / sharing mechanisms such as blockchain.


I’ve written a soon-to-be-published article about blockchain, from which the following excerpt is taken:  “Blockchains essentially provide a digital trust mechanism for transactions by linking them sequentially into a cryptographically secure ledger. Blockchain applications that execute and store transactions of monetary value are known as cryptocurrencies, (e.g. Bitcoin), and they have the potential to cause significant disruption of most major industries, including finance and the creative arts. For example, in the music industry, blockchain cryptocurrencies can make it economically feasible to execute true micro-transactions, (i.e. to the nth degree of granularity in cost and content). There are already several initiatives using blockchain to demonstrate full transparency for music payments – e.g. British artiste Imogen Heap’s collaboration with UJO Music features a prototype of her song and shows how income from any aspect of the song and music is shared transparently between the various contributors.”
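The “cryptographically secure ledger” idea in the excerpt can be sketched in a few lines of Python. This is purely illustrative (no proof-of-work, networking or real cryptocurrency mechanics), but it shows why tampering with any recorded transaction is detectable: each block commits to the hash of its predecessor.

```python
import hashlib
import json

def block_hash(block):
    # Hash the block's canonical JSON form.
    payload = json.dumps(block, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, transactions):
    # Link the new block to the hash of the previous one.
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})
    return chain

def verify(chain):
    # Re-derive each link; an edited block invalidates its successors.
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

chain = []
append_block(chain, [{"from": "fan", "to": "artist", "pence": 3}])
append_block(chain, [{"from": "fan", "to": "producer", "pence": 1}])
print(verify(chain))   # True
chain[0]["transactions"][0]["pence"] = 300
print(verify(chain))   # False: the tampered block breaks the next link
```

Real blockchains add consensus and distributed replication on top of this chaining, which is what makes the ledger trustworthy without a central authority.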


The above scenario makes it glaringly obvious that IP protection in digital environments should focus more on content usage transparency than on merely evidencing or enforcing copying and distribution restrictions. The copy-and-distribute restriction model worked well in a historically analogue world with traditionally high barriers to entry, whereas transparent usage tracking plays directly to a strength of digital – i.e. the ability to track and record usage and remuneration transactions to any degree of granularity (e.g. by using blockchain).


Although it may sound revolutionary and possibly contrary to the goals of today’s content publishing models, in the longer term, this provides a key advantage to any publisher brave enough to consider digitising and automating their publishing business model. Make no mistake, we are drawing ever closer to the dawn of fully autonomous business models and services where a usage / transparency based IP system will better serve the needs of content owners and publishers.


In a recent post, I described a multi-publishing framework which can be used to enable easier setup and automation of the mechanisms for tracking and recording all usage transactions as well as delivering transparent remuneration for creator(s) and publisher(s). This framework could be combined with Creative Commons and blockchains to provide the right level of IP automation needed for more fluid content usage in a future that is filled with autonomous systems, services and business models.
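To make the usage-tracking and transparent-remuneration idea concrete, here is a minimal sketch. The contributor roles and percentage splits are invented for the example, and a production system would persist these records to a tamper-evident store such as a blockchain; the point is simply that every payout is derived from, and auditable against, the usage log.

```python
from collections import defaultdict

# Hypothetical contributor splits for one work (shares sum to 1.0).
SPLITS = {"writer": 0.50, "performer": 0.35, "producer": 0.15}

def record_usage(ledger, work_id, event, gross_pence):
    # Log one usage transaction and its per-contributor payout.
    payout = {role: round(gross_pence * share) for role, share in SPLITS.items()}
    ledger.append({"work": work_id, "event": event, "payout": payout})
    return payout

def totals(ledger):
    # Transparent running totals per contributor, derived from the log,
    # so anyone can recompute who earned what and from which usage.
    agg = defaultdict(int)
    for tx in ledger:
        for role, pence in tx["payout"].items():
            agg[role] += pence
    return dict(agg)

ledger = []
record_usage(ledger, "song-1", "stream", 100)        # a micro-transaction
record_usage(ledger, "song-1", "sync-license", 5000) # a larger licence fee
print(totals(ledger))
```

Because remuneration is computed from the usage log rather than reported after the fact, the “full transparency for music payments” scenario follows naturally from the data model.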


Introducing a Framework for Multi-Publishing

January 16, 2016 1 comment

I believe that in a highly connected digital world, the future of content publishing lies with creating interlinked manifestations of a core concept or theme. I like to think of this as “multi(n) publishing”, (where ‘n’ stands for any number of things, e.g.: aspect / channel / facet / format / genre / sided / variant / etc.), or multi-publishing for short. To this end, I’ve created a framework which could prove very useful for conceptualizing and executing multi-publishing projects. Read on to find out more.

  1. Why Multi-Publishing?

There is increasing evidence of an evolution in the way people consume digitally enabled content, e.g. watching a TV show whilst surfing the web, talking on the phone to a friend and posting comments on social media – all of which may or may not relate to each other or to a single topic. This has put enormous pressure on content creators and publishers to find new ways to engage their audience and deliver compelling content to people who live in a world awash with competing content, channels, devices and distractions. In the above scenario, broadcasters have tried, with varying degrees of success, to engage viewers with second-screen or multi-screen content (e.g. the show on TV, cast info on a website or mobile site, plus real-time interaction on social media – all related to the show). Furthermore, the average attention span of most users appears to have shrunk, and many prefer to ‘snack’ on content across devices and formats. This doesn’t bode well for the more traditional long-form content upon which many creative industries were established. As a result, many in the content production, publishing and marketing industries are seeking new ways to engage audiences across multiple devices and channels with ever more compelling content and user experiences.

  2. What is Multi-Publishing?

In this context, the term “multi(n) publishing” (or multi-publishing) describes the manifestation of a core concept / theme as distinct but interlinked works across multiple media formats, channels and genres. This is somewhat different from related terms such as multi-format (or cross-media), multi-channel, single-source, or even multi-platform publishing – the last of which is mainly used by marketers to describe the practice of taking one thing and turning it into several products across a spectrum of online, offline and even ‘live’ experiential forms. The key difference is that multi-publishing encompasses them all, and more. In fact, the multi-publishing framework is closer to the information science idea of conceptualisation. Also, and perhaps more importantly, the various manifestations of multi-published content are not necessarily identical in branding to the originating (aka ‘native’) core concept, or to each other. Rather, each manifestation is intended to be unique and distinct, yet able to enhance the others and provide a fuller, more fulfilling experience of the overall core concept.

  3. How does it work?

In order to achieve the desired outcome of the whole being more than the sum of its parts, it makes sense for creators and publishers to bear in mind, right from the outset, that their works will likely be used, reused, decomposed, remixed and recomposed in many different ways (including new and novel expressions that they couldn’t possibly imagine at the time of creation). Therefore, they must recognise where and how each piece of content they produce fits within the context of a multi-publishing content framework or architecture. The diagram below is just such a framework (in mindmap form) and demonstrates the narrative-like progression of a single core concept / theme across various stages and interlinked manifestations.

The Multi-Publish Concept

This is only an example of what content creators and their publishers must consider and prepare as part of their creative (inspiration) and publishing (exploitation) process. It requires the creation and/or identification of a core concept which is manifest in the expression of the art (e.g. in the story, song, prose, images, video, game, conversations or presentations), and which can be used to link each and every format, channel or medium in which the concept is expressed.
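The linking idea above can be modelled as a simple data structure in which every manifestation points back to the one core concept. This is a rough illustration only; all names and fields below are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Manifestation:
    title: str
    medium: str    # e.g. "novel", "podcast", "game"
    channel: str   # e.g. "print", "web", "mobile"

@dataclass
class CoreConcept:
    theme: str
    manifestations: list = field(default_factory=list)

    def add(self, title, medium, channel):
        # Each manifestation is distinct, but all hang off one concept.
        self.manifestations.append(Manifestation(title, medium, channel))

    def by_channel(self, channel):
        # Because every work links back to the core concept, related
        # manifestations on any channel can be cross-referenced.
        return [m for m in self.manifestations if m.channel == channel]

concept = CoreConcept(theme="a detective in a flooded city")
concept.add("The Drowned Precinct", "novel", "print")
concept.add("Precinct: Tides", "game", "mobile")
concept.add("Flooded City Files", "podcast", "web")
print([m.title for m in concept.by_channel("web")])  # ['Flooded City Files']
```

The same structure also gives usage tracking a natural anchor: any transaction on a manifestation can be attributed back to the core concept it expresses.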

Finally, the use of multi-publishing frameworks can also enable easier setup and automation of the tracking and recording of all usage transactions, and potentially any subsequent remuneration for creator(s) and publisher(s), in a transparent manner (perhaps using a trust mechanism such as blockchain). I will explore this particular topic in a subsequent post on this blog. In any case, one key question remains: how should we protect the core concepts or algorithms at the heart of multi-publishing frameworks, and what form should such protection take?

Copyright and technology: glass half full or half empty?

October 11, 2014 Leave a comment
Following on from my last post about IP and the digital economy, I’d like to focus this one on the evolving role of copyright in the digital economy. What are the key recent developments, trends and challenges to be addressed, and where are the answers coming from? Read on to find out.
Where better to start than at the recent Copyright and Technology 2014 London conference in which both audience and speakers consisted of key players in the intersection of copyright, technology and digital economy. As you can probably imagine such a combination provided for great insights and debate on the role, trends and future of copyright and digital technology. Some key takeaways include:
  • The copyright yin and technology yang – Copyright has always had to change and adapt to new and disruptive technologies (which typically impact the extant business models of the content industry), and each time it usually comes out even stronger and more flexible – the age of digital disruption is no exception. As my 5-year-old would say, “that glass is half full AND half empty”.
  • UK Copyright Hub – “Simplify and facilitate” is a recurring mantra on the role of copyright in the digital economy. The UK Copyright Hub provides an exchange that is predicated on usage rights. It is a closely watched example of what is required for digital copyright and could easily become a template for the rest of the world.
  • Copyright frictions still a challenge – “Lawyers love arguing with each other”, but they, and the excruciatingly slow process of policy making, have introduced a particular friction into copyright’s digital evolution. The pace of digital change has increased while policy has slowed down, perhaps because there are now more people at the party.
  • Time for some new stuff – Copyright takes the blame for many things (e.g. even the normal complexity of cross-border commerce). Various initiatives, including SOPA & PIPA, the Digital Economy Act, Hadopi and New Zealand’s ‘3 strikes’ regime, have stalled or been drastically cut back. It really is time for new stuff.

Source: Fox Entertainment Group

  • Delaying the “time to street” – Fox described their anti-piracy efforts in relation to film release windows, in an effort to delay the “time to street” (aka pervasive piracy). These and other developments – such as fast-changing piracy business models, the balance between privacy and piracy, and technologies like Popcorn Time, anonymising proxies and cyberlockers – have added more fuel to the fire.
  • Rights languages & machine-to-machine communication – Somewhat reminiscent of efforts to use big data and analytics mechanisms to derive insight from structured and unstructured data sources. Think Hadoop-based rights translation and execution engines.
  • The future of private copying – The UK’s copyright exceptions now allow individuals to make private copies of content they own. Although this may seem obvious, it has provoked fresh comment from content industry types and other observers, e.g.: when will technology replace the need for people to make private copies? And what about the issues around keeping private copies in the cloud or in cyberlockers?
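The rights-languages and machine-to-machine point above can be sketched as a toy rights-expression evaluator. The rule format here is invented for illustration (real deployments would use a standard rights expression language), but it shows how routine licensing queries could be answered machine-to-machine, with no lawyer in the loop.

```python
# Toy machine-readable rights rules for one work (illustrative only).
RIGHTS = {
    "track-42": [
        {"action": "stream", "territory": "UK", "allow": True},
        {"action": "download", "territory": "UK", "allow": False},
    ],
}

def permitted(work_id, action, territory):
    # A machine-to-machine check: find the first rule matching the
    # requested action and territory, and return its verdict.
    for rule in RIGHTS.get(work_id, []):
        if rule["action"] == action and rule["territory"] == territory:
            return rule["allow"]
    return False  # default-deny when no rule matches

print(permitted("track-42", "stream", "UK"))    # True
print(permitted("track-42", "download", "UK"))  # False
print(permitted("track-42", "stream", "US"))    # False (no matching rule)
```

A “rights translation engine” of the kind mentioned above would sit in front of this, mapping between different rights vocabularies before evaluation.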

Mutant copyright techie-lawyer

In conclusion, and in light of the above gaps between copyright law and technology, I’ve decided that I probably need to study and become a mutant copyright techie-lawyer in order to help things along – you heard it here first. Overall, this was another excellent event, with lots of food for thought, some insights and even more questions (when aren’t there?), but what I liked most was the knowledgeable mix of speakers and audience at this year’s event, and I look forward to the next one.

Fighting Piracy with Education

May 11, 2014 Leave a comment

According to a BBC news report, it seems that a deal to tackle digital piracy is about to be realised between major UK ISPs and key content and entertainment industry organisations. Given that it took several years of wrangling to get to this point, the obvious question is whether this particular deal will work to the satisfaction of all concerned.

Education versus Piracy

The report describes how the UK’s major ISPs (i.e. BT, Sky, TalkTalk and Virgin Media) will be required to send ‘educational’ letters, or alerts, to users they believe are downloading illegal content. Among other things, the deal is predicated on the belief that increased awareness of legal alternatives will steer such users away from illegal content acquisition, casual infringement and piracy. This voluntary alert system will be funded mainly by the content industry, which in return will get monthly stats on the alerts sent out by the ISPs. Overall, this deal is far removed from the more punitive “3 strikes” system originally mooted in the early days of the Digital Economy Act.

As with most cases, there are two or more sides to the story, and below are some considerations to take into account before drawing your own conclusions:

1. Critics of this deal, i.e. presumably the content providers, will consider this too soft an approach to be effective in curbing the very real and adverse economic impact of piracy.

2. Supporters, including ISPs, will likely see this as fair compromise for securing their cooperation in tackling piracy, and a win-win for them and their customers.

3. Another perspective comprises the view of regulators and government intermediaries (aka brokers of this deal), who likely consider it a practical compromise which can always be tweaked depending on its efficacy or lack thereof.

4. There are probably many other viewpoints to be considered, but, in my opinion, the most important perspective belongs to the end-users who ultimately stand to benefit or suffer from the success or failure of this initiative, especially since:

  • there is evidence that education trumps punishment when it comes to casual content piracy – e.g. the HADOPI experience in France which has effectively evolved into an educational campaign against copyright infringement.
  • content consumers already have abundant choice over the source and format of content, so punitive measures may not solve the piracy problem if users can simply get content through other illegal means.
  • any perceived failure of this deal, and its ‘educational’ approach, could lend support for more draconian and punitive measures, therefore it is in the interest of consumers to see it succeed.

5. Industrial-scale piracy, on the other hand, must be tackled head-on, with the full weight of the law, in order to shut down and discourage the real criminal enterprises that probably do far more damage to the content industry.

In any case, regardless of how you view this and other similar developments, it is always worth bearing in mind that we are only in a period of transition towards a comprehensive digital existence; all current challenges and opportunities are therefore certain to change, as new technology and usage paradigms continue to drive and reveal ever more intriguing shifts in consumer behaviour. This battle is far from over.

Copyright and Technology in 2013

November 18, 2013 Leave a comment

Last month’s conference on copyright and technology provided plenty of food for thought from an array of speakers, organisations, viewpoints and agendas. Topics and discussions ran the gamut from the increasingly obvious (“business models are more important than technology”) to the downright bleeding-edge (“hypersonic activation of devices from outdoor displays”). There was something for everyone to take away. Read on for highlights.

The Mega keynote interview – Mega’s CEO Vikram Kumar discussed how the new and law-abiding cloud storage service is proving attractive to professionals who want to use and pay for the space, security and privacy that Mega provides. This is a far cry from the notorious MegaUpload, and founder Kim Dotcom’s continuing troubles with charges of copyright infringement, but there are still questions about the nature of the service – e.g. the end-to-end encryption approach, which effectively makes it opaque to outside scrutiny. Read more about it here.

Anti-piracy and the age of big data – MarkMonitor’s Thomas Sehested talked about the rise of data / content monitoring and anti-piracy services in what he describes as the data-driven media company. He also discussed the demise of content release windows, and how mass, immediate release of content across multiple channels lowers piracy – but questioned whether it is more profitable.

Hadopi and graduated response – Hadopi’s Pauline Blassel gave an honest overview of the impact of Hadopi, including evidence of some reduction in piracy (from roughly 6M down to 4M) before stabilisation. She also described how this independent public authority delivers graduated response in a variety of ways, from raising awareness to imposing penalties, focusing primarily on what is known as PUR (i.e. promoting responsible usage).

Automatic Content Recognition (ACR) and the 2nd screen – ACR is a core set of tools (including DRM, watermarking and fingerprinting), and the 2nd-screen opportunity (at least for broadcasters) is all about keeping TV viewership and relevance in the face of tough competition for people’s time and attention. This panel session discussed monetisation of second-screen applications, and the challenges posed by TV being regulated, pervasive and country-specific. Broadcast rights law is aimed at the protection of broadcast signals, which trigger the 2nd-screen application (e.g. via ambient, STB or EPG based recognition). This begs the question of what regulation should apply to the 2nd screen, and what rights apply – e.g. ads on TV can be replaced on the 2nd screen, but what are the implications?

Update on the Copyright Hub – The Keynote address by Sir Richard Hooper, chair of the Copyright Hub and co-author of the 2012 report on Copyright Works: Streamlining Copyright Licensing for the Digital Age, was arguably the high point of the event. He made the point that although there are issues with copyright in the digital age, the creative industries need to get off their collective backsides and streamline the licensing process before asking for a change in copyright law. He gave examples of issues with the overly complex educational licensing process and how the analogue processes are inadequate for the digital age (e.g. unique identifiers for copyright works).

Sir Richard Hooper


The primary focus of the Copyright Hub, according to Sir Richard, is to enable high-volume, low-value transactions (e.g. to search, license and use copyright works legally) by individuals and SMEs. The top-tier content players already have dedicated resources for such activities, hence they’re not a primary target of the Copyright Hub, but they too will benefit, as it removes the need to deal with trivial requests to license individual items (e.g. using popular songs in wedding videos on YouTube).

The next phase of work, and other challenges, for the Copyright Hub includes: enabling consumer reuse of content, architectures for federated search, machine-to-machine transactions, an orphan works registry and mass digitisation (collective licensing), multi-licensing for multimedia content, and the need for global licensing. Some key messages and quotes from the ensuing Q&A include:

  • “the Internet is inherently borderless and we must think global licensing, but need to walk before we can run”
  • “user-centricity is key.  People are happy not to infringe if easy / cheap to be legal”
  • “data accuracy is vital, so Copyright Hub is looking at efforts from Linked Content Coalition and Global Repertoire Database”
  • “Metadata is intrinsic to machine to Machine transactions – do you know it is a crime to strip metadata from content?”
  • “Moral rights may add to overall complexity”

As you can probably see from the above, this one-day event delivered the goods, providing valuable insights to an audience that included people from the creative / content industries as well as technologists, legal practitioners, academics and government agencies. Kudos to MusicAlly, the event organiser, and to Bill Rosenblatt, the conference chair, for a job well done.

Next stop: I’ll be discussing key issues and trends in digital economy and law at a 2-day event, organised by ACEPI, in Lisbon. Watch this space.

DRM, Content Protection, and the future of the Web

October 14, 2013 Leave a comment

I remember when the mere mention of DRM would stir up a frenzied reaction of blood-boiling anger, outrage and disgust from even the meekest of the meek. Thankfully those days are long gone, and DRM has been largely forgotten – or has it?

DRM Wordle

DRM and the Web

Sadly no, because DRM recently reared its dramatic head yet again, following a decision by the World Wide Web Consortium (W3C) to bring video content protection into scope for discussion in its HTML5 Working Group. So what does this mean? Well, it depends on who you ask, of course, because the usual pros-vs-cons battle lines, championed by various organisations and pundits, have opened up with distinct perspectives on the matter. The following are summary points culled from a quick web search on the topic.

Some viewpoints in support of the decision:

  1. Sir Tim Berners-Lee on encrypted content and the open Web – he reiterated that W3C staff remain passionate about the open Web, and indeed abhor certain forms of content protection and DRM. However, he went on to explain that putting content protection in scope for discussion is the lesser evil, given that excluding the topic from the HTML WG’s discussions would not exclude it from anyone’s systems.
  2. W3C Encrypted Media Extensions (EME) Editor’s Draft, 17 September 2013 – according to the abstract, “the proposal extends HTMLMediaElement providing APIs to control playback of protected content.” The specification does not define any particular content protection or DRM system; instead it defines a common API that may be used to discover, select and interact with such mechanisms / DRM solutions.
  3. Ars Technica, “DRM in HTML5 is a victory for the open Web, not a defeat” – in this post, Peter Bright argues that EME will happen one way or another, especially given that some important companies (i.e. Microsoft, Google and Netflix) are actively developing the specification. Furthermore, distributors of protected video content already use DRM, albeit outside the Web (e.g. via Microsoft’s Silverlight, Adobe Flash and / or mobile apps). He concludes that EME will provide a way to deliver protected content via the Web itself, instead of via proprietary applications and plug-ins.

Other viewpoints against the decision:

  1. The Electronic Frontier Foundation (EFF), “Lowering Your Standards: DRM and the Future of the W3C” – the EFF strongly objects to the inclusion of “playback of protected content” in the scope of the HTML Working Group’s new charter, stating that such a move would mean the controversial Encrypted Media Extensions could be included in the HTML5.1 standard, which would effectively cede control of browsers to third parties (i.e. content providers). Furthermore, they argue, this could ultimately damage the W3C’s reputation as guardian of the open Web, and other media formats (e.g. images, fonts and music) may push for equivalent content protection standards, over a rapidly fragmenting Web.
  2. Boing Boing “W3C’s DRM for HTML5 sets the stage for jailing programmers…” – Cory Doctorow discusses how the decision will open the possibility of punitive fines or imprisonment for programmers who dare to attempt improving web browsers in ways that displease Hollywood.
  3. DefectiveByDesign “Tell W3C: We don’t want the Hollyweb” – Calls for the W3C to reject the EME proposal, stating that it would damage freedom on the Web and enable unethical, restrictive business models, as well as proliferation of DRM plug-ins needed to play protected media content.

Regardless of which side you take in this debate, it is probably disingenuous to think that DRM ever went away; if anything, it has been thriving in various digital content services and technologies, well outside the limelight and notoriety it had in the past – perhaps until now. One of the key things I learnt during my sojourn into the DRM debate over the last decade was that most content businesses are ultimately pragmatic; they now understand that suing customers (or casual pirates, depending on your viewpoint) can be suicidal – hence the move away from dramatic headlines and towards developing services that users actually want to use and pay for. The saying holds true that the only good DRM system is one that is invisible, or transparent, to the end user.

It could be argued that this current debate has arisen because the Web is designed, and perceived by many, to be open and universal, but it is this selfsame universality that allows even potentially restrictive models to have a place on the Web. In fighting for its own survival, and by openly considering the inclusion of something like content protection, the W3C is actually living up to the open and universal remit of the Web. However, a real danger remains that commercial interests (aka content businesses) will seize this opportunity to compete using flawed and restrictive business models, which will only throw DRM in the faces of their users – and possibly restart litigious campaigns against those users once they decide, again, that unrestricted (and literally free) content is best. Truly, those that don’t learn from past mistakes are doomed to repeat them.

In conclusion, although this is probably more than a mere storm in the proverbial teacup, the signs portend that this too shall pass into the annals of DRM aftershocks, in the grand scheme of things. I say this with some confidence because, while the DRM battle rages on, the world of digital content, copyright and the Internet continues to evolve new opportunities and challenges that will reshape the digital landscape. A recent example concerns the IP value of curation – e.g. whether playlists are candidates for copyright (see Ministry of Sound versus Spotify).

BTW: I will be moderating a panel session on over-the-top (OTT) video content protection at the Copyright and Technology 2013 London conference later this week. My panel of experts will most likely have something interesting to say about DRM and the Web. Why not join the debate at the event if you are in London? Otherwise, I’ll keep you posted on this blog.