More Perils of Reusing Digital Content

February 7, 2016 Leave a comment
Some time ago I wrote an article and blog post entitled “The Perils of Reusing Digital Content”, looking at the key challenges facing users of digital content, which, thanks to the power of computing and the Internet, has become more easily available, transferable and modifiable. It says a lot about the age in which we live that this is still not universally perceived to be a good thing. The post also explored the Creative Commons model as a complementary alternative to a copyright system that looks woefully inadequate, and somewhat anachronistic, in the digital age. Since then the situation has become even more complex and challenging, thanks to the introduction of newer technologies (e.g. the IoT), more content (data, devices and channels), and novel trust / sharing mechanisms such as blockchain.


I’ve written a soon-to-be-published article about blockchain, from which the following excerpt is taken: “Blockchains essentially provide a digital trust mechanism for transactions by linking them sequentially into a cryptographically secure ledger. Blockchain applications that execute and store transactions of monetary value are known as cryptocurrencies (e.g. Bitcoin), and they have the potential to cause significant disruption to most major industries, including finance and the creative arts. For example, in the music industry, blockchain cryptocurrencies can make it economically feasible to execute true micro-transactions (i.e. to the nth degree of granularity in cost and content). There are already several initiatives using blockchain to demonstrate full transparency for music payments – e.g. British artiste Imogen Heap’s collaboration with UJO Music features a prototype of her song and shows how income from any aspect of the song and music is shared transparently between the various contributors.”
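To make the micro-transaction idea concrete, here is a minimal sketch of a transparent, append-only payout ledger. All of the names, splits and amounts are hypothetical illustrations, not details from the Imogen Heap / UJO Music prototype:

```python
from decimal import Decimal

# Hypothetical contributor splits for one song (illustrative only; must sum to 1)
SPLITS = {"writer": Decimal("0.50"), "producer": Decimal("0.30"), "vocalist": Decimal("0.20")}

def settle_stream(payment: Decimal) -> dict:
    """Divide one micro-payment among contributors, to a ten-thousandth of a unit."""
    assert sum(SPLITS.values()) == Decimal("1")
    return {who: (payment * share).quantize(Decimal("0.0001"))
            for who, share in SPLITS.items()}

# An append-only ledger: every usage event and its payouts stay visible to all parties
ledger = []

def record_stream(payment: Decimal) -> None:
    ledger.append({"event": "stream", "payouts": settle_stream(payment)})

record_stream(Decimal("0.0060"))  # a single stream worth 0.6 of a cent
```

The point of the sketch is the granularity: because every tiny usage event is recorded with its split, remuneration can be audited by any contributor, which is exactly the transparency a blockchain ledger would enforce cryptographically.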


The above scenario makes it glaringly obvious that IP protection in digital environments should focus more on content usage transparency than on merely evidencing or enforcing copying and distribution restrictions. The latter copy-and-distribute restriction model worked well in a historically analogue world, with traditionally higher barriers to entry, whereas the former transparent-usage capability plays directly to a key strength of digital – i.e. the ability to track and record usage and remuneration transactions to any degree of granularity (e.g. by using blockchain).


Although it may sound revolutionary and possibly contrary to the goals of today’s content publishing models, in the longer term, this provides a key advantage to any publisher brave enough to consider digitising and automating their publishing business model. Make no mistake, we are drawing ever closer to the dawn of fully autonomous business models and services where a usage / transparency based IP system will better serve the needs of content owners and publishers.


In a recent post, I described a multi-publishing framework which can be used to enable easier setup and automation of the mechanisms for tracking and recording all usage transactions as well as delivering transparent remuneration for creator(s) and publisher(s). This framework could be combined with Creative Commons and blockchains to provide the right level of IP automation needed for more fluid content usage in a future that is filled with autonomous systems, services and business models.


Introducing a Framework for Multi-Publishing

January 16, 2016 1 comment

I believe that in a highly connected digital world, the future of content publishing lies with creating interlinked manifestations of a core concept or theme. I like to think of this as “multi(n) publishing”, (where ‘n’ stands for any number of things, e.g.: aspect / channel / facet / format / genre / sided / variant / etc.), or multi-publishing for short. To this end, I’ve created a framework which could prove very useful for conceptualizing and executing multi-publishing projects. Read on to find out more.

  1. Why Multi-Publishing?

There is increasing evidence of an evolution in the way people consume digitally enabled content, e.g. watching a TV show whilst surfing the web, talking on the phone to a friend, and posting comments on social media – all of which may or may not relate to each other or to a single topic. This has put enormous pressure on content creators and publishers to find new ways to engage their audience and deliver compelling content to people who live in a world awash with competing content, channels, devices and distractions. In the above scenario, broadcasters have tried, with varying degrees of success, to engage viewers with second- or multi-screen content (e.g. the show on TV, cast info on a website / mobile site, plus real-time interaction on social media – all related to the show). Furthermore, the average attention span of most users appears to have shrunk, and many prefer to ‘snack’ on content across devices and formats. This doesn’t bode well for the more traditional long-form content upon which many creative industries were established. As a result, many in the content production, publishing and marketing industries are seeking new ways to engage audiences across multiple devices and channels with even more compelling content and user experiences.

  2. What is Multi-Publishing?

In this context, the term “multi(n) publishing” (or multi-publishing) describes the manifestation of a core concept / theme as distinct but inter-linked works across multiple media formats, channels and genres. This is somewhat different from similar, related terms such as multi-format (or cross-media), multi-channel, single-source, or even multi-platform publishing; the last is mainly used by marketers to describe the practice of taking one thing and turning it into several products across a spectrum of online, offline and even ‘live’ experiential forms. The key difference is that multi-publishing encompasses them all, and more. In fact, the multi-publishing framework is closer to the information science idea of conceptualisation. Also, and perhaps more importantly, the various manifestations of multi-published content are not necessarily identical to the originating (aka ‘native’) core concept, or to each other. Rather, each manifestation is intended to be unique and distinct, yet able to enhance the others and provide a fuller and more fulfilling experience of the overall core concept.

  3. How does it work?

In order to achieve the desired outcome of the whole being more than the sum of its parts, it makes sense for creators and publishers to bear in mind, right from the outset, that their works will likely be used, reused, decomposed, remixed and recomposed in many different ways (including new and novel expressions which they could not possibly have imagined at the time of creation). Therefore, they must recognise where and how each piece of content they produce fits within the context of a multi-publishing content framework or architecture. The diagram below is just such a framework (in mindmap form), and it demonstrates the narrative-like progression of a single core concept / theme across various stages and interlinked manifestations.

The Multi-Publish Concept

This is only an example of what content creators and their publishers must consider and prepare as part of their creative (inspiration) and publishing (exploitation) processes. It requires the creation and/or identification of a core concept, which is manifest in the expression of the art (e.g. in the story, song, prose, images, video, game, conversations or presentations), and which can be used to link each and every format, channel or medium in which the concept is expressed.
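As a rough illustration of this linking idea, a core concept and its manifestations might be modelled as below. The structure and all of the names are my own hypothetical sketch, not part of the framework itself:

```python
from dataclasses import dataclass, field

@dataclass
class Manifestation:
    title: str
    medium: str    # e.g. "song", "video", "game"
    channel: str   # e.g. "streaming", "broadcast", "live"

@dataclass
class CoreConcept:
    theme: str
    manifestations: list = field(default_factory=list)

    def publish(self, m: Manifestation) -> None:
        # Each manifestation is distinct, yet all link back to the same core concept
        self.manifestations.append(m)

concept = CoreConcept(theme="A journey home")
concept.publish(Manifestation("Homeward (single)", "song", "streaming"))
concept.publish(Manifestation("Homeward (short film)", "video", "broadcast"))
```

The design choice worth noting is that the link runs from the concept to every manifestation, so any new format or channel can be added later without touching the works already published.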

Finally, the use of multi-publishing frameworks can also enable easier setup and automation of the tracking and recording of all usage transactions, and potentially of any subsequent remuneration for creator(s) and publisher(s), in a transparent manner (perhaps using a trust mechanism such as blockchain). I will explore this particular topic in a subsequent post on this blog. In any case, there remains one key question to be answered: should we protect the core concepts or algorithms at the heart of multi-publishing frameworks, and if so, what form should such protection take?

2015 in review

December 30, 2015 Leave a comment

The WordPress.com stats helper monkeys prepared a 2015 annual report for this blog.

Here’s an excerpt:

A San Francisco cable car holds 60 people. This blog was viewed about 1,100 times in 2015. If it were a cable car, it would take about 18 trips to carry that many people.


Categories: social

Predicting the (near) Future

December 22, 2015 Leave a comment
The future is always tricky to predict and, in keeping with Star Wars season, the dark side is always there to cloud everything. But as we all know in IT, the ‘Cloud’ can be pretty cool, except of course when it leaks. Last month saw the final edition of Gartner’s Symposium/ITxpo 2015 in Barcelona, and I was fortunate to attend (courtesy of my Business Unit) and bear witness to some amazing predictions about the road ahead for our beloved / beleaguered IT industry.
 
Judging from the target audience, and the number of people in attendance, it is safe to say that the future is at best unpredictable, and at worst unknowable, but Gartner’s analysts gave it a good go, making bold statements about the state of things to come within the next 5 years or so. The following are some key messages, observations and predictions which I took away from the event.
 
1. CIOs are keen to see exactly what lies ahead.
Obviously. However, it does confirm to my mind that the future is highly mutable, especially given the amount of change to be navigated on the journey towards digital transformation. I say ‘towards’ because, from all indications, there is likely no real end-point or destination to the journey of digital transformation. The changes (and challenges / opportunities) just keep coming thick and fast, and at an increasing pace. For example, Gartner predicts that by 2017, 50% of IT spending will be outside of IT (it currently stands at 42%), therefore CIOs must shift their approach from command-and-control style management to leading via influence and collaboration.
 
2. Algorithmic business is the future of digital business
A market for algorithms (i.e. snippets of code with value) will emerge, where organisations and individuals will be able to licence, exchange, sell and/or give away algorithms – Hmmm, now where have we seen or heard something like that before? Anyway, as a result, many organisations will need an ‘owner’ for algorithms (e.g. a Chief Data Officer), whose job it will be to create an inventory of their algorithms, classify them (i.e. private / “core biz” versus public / “non-core biz” value), and oversee / govern their use.
 
3. The next level of Smart Machines
In the impending “Post App” era, which is likely to be ushered in by algorithms, people will rely on new virtual digital assistants (i.e. imagine Siri or Cortana on steroids) to conduct transactions on their behalf. According to Gartner, “By 2020, smart agent services will follow at least 10% of people to wherever they are, providing them with services they want and need via whatever technology is available.” Also, the relationship between machines and people will initially be cooperative, then co-dependent, and ultimately competitive, as machines start to vie for the same limited resources as people.
 
4. Platforms are the way forward (and it is bimodal all the way)
A great platform will help organisations add and remove capability ‘like Velcro’. It will need to incorporate Mode 2 capability in order to fail fast on projects / cloud / on-demand / data and insight. Organisations will start to build innovation competency, e.g. via innovation labs, in order to push the Mode 2 envelope. Platform thinking will be applied at all layers (including delivery, talent, leadership and business model), and not just at the technology / infrastructure layer.
 
5. Adaptive, People Centric Security
The role of the Chief Security Officer will change, and good security roles will become more expansive and mission-critical. In future, everyone gets hacked, even you; and if not, then you’re probably not important. Security professionals will need to act more like intelligence officers than policemen. Security investment models will shift from predominantly prevention-based to combined prevention and detection capabilities, as more new and unpredictable threats become manifest. Also, organisations will look to deploy People Centric Security (PCS) measures in order to cover all bases.
 
6. The holy grail of business moments and programmable business models
The economics of connections (from the increased density of connections, and the creation of value, between business / people / things) will become evident, especially when organisations focus on delivering business moments to delight their customers. Firms will start to capitalise on their platforms to enable C2C (i.e. customer-to-customer) interactions and allow people and things to create their own value. It will be the dawn of programmable business models.
 
7. The Digital Mesh and the role of wearables and IoT
One of the big winners in the near future will be the ‘digital mesh’, amplified by the explosion of wearables and IoT devices (and their interactions) in the digital mesh environment. Gartner predicts a huge market for wearables (e.g. 500M units sold in 2020 alone – for just a few particular items). Furthermore, barriers to entry will be lower and prices will fall as a result of increased competition, along with: more Apps, better APIs and improved power.
 
The above are just a few of the trends and observations I took from the event, but I hasten to add that it is impossible to reflect over 4 days of pure content in these highlight notes, so other equally notable trends and topics, such as IoT architecture, talent acquisition and CIO/CTO agendas, only receive honourable mentions. However, I noticed that topics such as Blockchain were not explored as fully as might be expected at an event of this nature. Perhaps next year will see it covered in more depth – just my prediction.
In summary, the above are not necessarily earth-shattering predictions, but taken together they point the way to a very different experience of technology; one that is perhaps more in line with hitherto far-fetched predictions of the Singularity, as humans become more immersed and enmeshed with machines. Forget the Post-App era; this could be the beginning of a distinctly recognisable post-human era. However, as with all predictions, only time will tell, and in this case, let’s see where we are this time next year. I hope you have a happy holiday / festive season wherever you are.

IBM Innovation Labs – where old meets new, and everything in between…

November 25, 2015 Leave a comment

If you’ve ever wondered how the big tech players do innovation, then you might do well to head over to IBM’s Hursley labs for a taste of their world-class innovation facility. A few weeks ago, some colleagues and I were treated to an executive briefing on innovation, the IBM way. Read on to find out more…

Pictures on lab visit

IBM Executive Briefing Day

We had a fairly simple and straightforward agenda / expectation in mind, i.e. to: hear, see and connect with IBM labs on key areas of innovation that we might be able to leverage in our own labs, and for clients. This objective was easily met and exceeded as we proceeded through the day long briefing program. Below are some highlights:

First of all, Dr Peter Waggett, Director for Innovation, gave an overview of IBM Research and its ways of working. For example, with an annual R&D spend of over 5 billion dollars, and 1 billion dollars in annual revenue from patents alone (IBM files over 50 patents a year), it quickly became clear that we were in for a day of superlatives. Dr Waggett described the operating model, lab resources and key areas of focus, such as working at the ‘bow wave’ of technology, ‘crossing the mythical chasm’ and ‘staying close to market’. Some specific areas of active research include: Cognitive Computing (Watson et al), homomorphic encryption, “data at the edge”, and several emerging tech concepts / areas, e.g. biometrics, biometry and wetware / neuromorphic computing with the IBM SyNAPSE chips. And that was just in the morning session!

The rest of the day involved visiting several innovation labs, as outlined below:

Retail Lab – demonstration of some key innovation in: retail back end integration, shopper relevance and customer engagement management (with analytics / precision marketing / customer lifecycle engagement). Also, touched on integration / extension with next generation actionable tags by PowaTag.

Emerging Technology & Solutions Lab – featured among other things: the IBM touch table (for collaborative interactive working), Buildings Management solutions (with sensors / alerts, dashboard, helmet and smart watch components); Manufacturing related IoT solutions (using Raspberry Pi & Node Red to enable closed loop sensor/analysis/action round trip); Healthcare innovations (including Smarthome based health and environment monitoring with inference capability) and of course Watson Analytics.

IOT Lab – Demonstrated various IoT based offers e.g.: from Device to Cloud; Instrumenting the World Proof of Concepts; Decoupled sensors / analysis / actuators; IoT reference architecture (incl. Device / Gateway / Cloud / Actuators ); and IoT starter kits (with Node Red development environment & predefined recipes for accelerated IoT).

IOC Labs – IBM’s Intelligent Operations Centre (IOC) was shown to be highly relevant for smarter cities, as it enables the deployment of fourfold capabilities to Sense / Analyse / Decide / Act, providing the ability to predict and respond to situations even before they arise. IOC capabilities and case studies were also shown to be relevant and applicable across multiple industry scenarios, including retail, transport, utilities and supply chain.

Finally, you cannot complete a visit to Hursley without stopping off at their underground museum of computing. Over the years, this has become a special place, showcasing the amazing innovations of yesterday which have now become objects of nostalgia and curiosity for today’s tech-savvy visitors. It is almost incredible to think that computers once ran on floppy discs, magnetic tape and even punch cards. This is made even more poignant by the thought that almost every new innovation we saw in the labs will one day take its place in the museum (particularly if it proves successful). Perhaps some of them may even be brought back to life by other, newer and as-yet-undiscovered innovations, e.g. see if you can spot the 3D printed key on this IBM 705 data processor keyboard!

New 3D Printed Key on Keyboard

Spot the 3D printed key.

Overall, it was a great experience, and many thanks to our hosts and the IBM event team for making this a most interesting event. The team and I certainly look forward to finding out how other tech players, both large and small, are pursuing their own innovation programs!

DRM for Things – Managing rights and permissions for IOT

November 24, 2015 Leave a comment

Given the proliferation of interconnected ‘Things’ on the Internet (aka the IoT), it was only a matter of time before the need for robust, pervasive governance became pressing. How can we manage the rights and permissions needed to do stuff with and / or by things? The following are some thoughts, based on a previous foray into the topic, and building on my earlier book on the related world of Digital Rights Management (aka DRM).

Does anyone remember DRM – that much maligned tool of real / perceived oppression, (somewhat ineptly deployed by a napsterized music industry)? It has all but disappeared from the spotlight of public opinion as the content industry continues to evolve and embrace the complex digital realities of today. But what has that got to do with the IoT, and what triggered the thought in the first place, you might ask…

Well, I recently had opportunity to chat with friend and mentor, Andy Mulholland (ex global CTO at Capgemini), and as usual, I got a slight headache just trying to get a grip on some of the more esoteric concepts about the future of digital technology. Naturally we touched on the future of IoT, and how some current thinking may be missing the point entirely, for example:

What is the future of IoT?

Contrary to simplistic scenarios, often demonstrated with connected sensors and actuators, IoT ultimately enables the creation and realisation of a true digital services economy. This is based on 3 key aspects: ‘Things’, ‘Events’ and ‘Connectivity’, which will work together to deliver value via autonomous agents, systems and interactions. The real players, when it comes to IoT, actually belong outside the traditional world of IT. They include organisations in industries such as manufacturing, automotive and logistics, and when their efforts are combined with the novel uses that people conceive for connected things, the traditional IT industry is playing, and will continue to play, catch-up in this fast-evolving and dynamic space.

What are key components of the IoT enabled digital services?

An autonomous or semi-autonomous IoT enabled digital service will include an event hub (consisting of a graph database and complex event processing capability) operating in the context of ‘fog computing‘ architectures (aka cloud edge computing) – as I said, this is headache territory (read Andy’s latest post if you dare). Together, event handling and fog computing can be used to create and deliver contextually meaningful value / services for end users. The Common Industrial Protocol (CIP) and API engines will also play key roles in the deployment of autonomous services between things and / or people. Finally, businesses looking to compete in this game need to start focusing on identifying, creating and offering such services to their customers.
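To give a feel for what an event hub does, here is a deliberately tiny sketch: a relationship graph plays the graph-database role, and one toy rule plays the complex-event-processing role. The device names, the threshold and the rule are all hypothetical:

```python
from collections import defaultdict

# Relationship graph: which things are related to which (the graph-database role)
graph = defaultdict(set)
graph["thermostat-1"].add("boiler-1")

events = []  # event log kept by the hub

def on_event(thing: str, reading: float) -> list:
    """Toy complex-event rule: a high reading triggers an action on related things."""
    events.append((thing, reading))
    actions = []
    if reading > 30.0:  # threshold chosen arbitrarily for illustration
        for related in sorted(graph[thing]):
            actions.append(f"shut_down:{related}")
    return actions
```

In a real fog-computing deployment this logic would run close to the devices at the cloud edge, so the sense-analyse-act loop closes without a round trip to a central cloud.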

Why is Graph Database an important piece of the puzzle? 

Graph databases provide a way to store relationships in an unstructured manner, and in scaled-up IoT environments, IoT enabled services will need five separate graph stores, as follows:

  1. Device Info – e.g. type, form and function, data (provided/consumed), owner etc.
  2. Customer/Users – e.g. Relationship of device to the user / customer
  3. Location – e.g. Where is device located (also relative to other things / points of reference)
  4. Network – e.g. network type, protocols, bandwidth, transport, data rate, connectivity constraints etc.
  5. Permission – e.g. who can do: what, when, where, how and with whom/what, and under what circumstances (in connection with the above four graphs) – According to Andy, “it is the combination of all five sets of graph details that matter – think of it as a sort of combination lock!”
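The "combination lock" idea above can be sketched in a few lines. Plain dictionaries stand in for the five graph stores, and all of the device, user and location data is hypothetical:

```python
# Each dictionary stands in for one of the five graph stores (all data hypothetical)
device_graph   = {"sensor-1": {"type": "camera"}}
user_graph     = {"sensor-1": "alice"}          # device -> owning user
location_graph = {"sensor-1": "kitchen"}
network_graph  = {"sensor-1": "wifi"}
# Permissions are keyed on the combination drawn from the other four graphs
permission_graph = {("alice", "sensor-1", "kitchen", "wifi"): {"read"}}

def allowed(user: str, dev: str, action: str) -> bool:
    """A request succeeds only when all five graphs line up, like a combination lock."""
    if dev not in device_graph or user_graph.get(dev) != user:
        return False
    key = (user, dev, location_graph[dev], network_graph[dev])
    return action in permission_graph.get(key, set())
```

Note how changing any one element of the combination (a different user, location or network) breaks the lock and denies the action, which is exactly the point of keeping the five stores separate but cross-referenced.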

So how does this relate to the notion of “DRM for Things”? 

Well, it is ultimately all about trust, as observed in another previous post. There must be real trust in: things (components and devices), agents, events, interactions and connections that make up an IoT enabled autonomous service (and its ecosystem). Secondly, the trust model and enforcement mechanisms must themselves be well implemented and trustworthy, or else the whole thing could disintegrate much like the aforementioned music industry attempts at DRM. Also, there are some key similarities in the surrounding contexts of both DRM and IoT:

  • The development and introduction of DRM took place during a period of Internet enabled disruptive change for the content industry (i.e. with file sharing tools such as: Napster, Pirate Bay and Cyberlockers). This bears a startling resemblance to the current era of Internet enabled disruptive change, albeit for the IT industry (i.e. via IoT, Blockchain, AI and Social, Mobile, Big Data, Cloud etc.)
  • The power of DRM exists in the ability to control / manage access to content in the wild, i.e. outside of a security perimeter or business boundary. The ‘Things’ in IoT exist as everyday objects, typically with low computing overheads / footprints, which can be even more wide ranging than mere digital content.
  • Central to DRM is the need for irrefutable identity and clear relationships between: device, user (intent), payload (content) and their respective permissions. This is very similar to autonomous IoT enabled services, which must rely on the 5 graphs mentioned previously.

Although I would not propose using current DRM tools to govern autonomous IoT enabled services (that would be akin to using yesterday’s technology to solve the problems of today and tomorrow), the IoT requires similarly deperimeterised and distributed trust / control models, so there is scope for a more up-to-date DRM-like mechanism or extension that can deliver this capability. Fortunately, the most likely option may already exist in the form of Blockchain and its applications. As Ahluwalia, IBM’s CTO for Cloud, so eloquently put it: “Blockchain provides a scalable, trustworthy, highly distributed, redundant and peer-to-peer verification process for processing, coordinating device interactions and sharing access to assets in an IoT network.” Enough said.

In light of the above, it is perhaps easier to glimpse how an additional Blockchain component, for irrefutable trust and ID management, might provide equivalent DRM-like governance for the IoT, and I see this as a natural evolution of DRM (or whatever you want to call it) for both ‘things’ and content. However, any such development would do well to take on board the lessons learnt from the original content DRM implementations, and to understand that it is not cool to treat people as things.

www.DarkSide 1 – Online Grooming

November 9, 2015 Leave a comment

A most topical and sensitive subject, such as online child abuse or terrorist recruitment, will understandably garner a lot of interest and attention, not just from IT people but also from all other members of society at large. The recent BCS event with a similar title was no exception, and perhaps unsurprisingly it also became a target of unwanted attention from some self-styled extremists. Read on for more.

First of all, the event featured only two of the five original speakers. Apparently, the other three were unable to attend for various reasons, including threats to personal safety from a certain extremist group. However, it still turned out to be an interesting / informative session, full of insightful takes on the legal and IT aspects of online grooming.

Will Richmond-Coggan, Partner at Pittmans LLP, described how UK laws created in 2008 are not equipped to handle more recent emergent technologies and behaviours, e.g. ubiquitous social media and/or ephemeral messaging services such as Snapchat. The key challenge is the startling velocity with which certain social technology innovations can gain critical mass and become pervasive. Nowadays, even a joke on Twitter about ‘blowing up something’ can be misconstrued, setting off a chain of events that could result, at best, in a minimal charge of wasting police time. Freedom of expression is tricky, because it is not without limitations.

Richmond-Coggan also discussed how well meaning individuals wanting to give moral or financial support to oppressed people overseas can easily become victims of online recruiters and / or radicalization by extremist organisations. He also presented case studies illustrating the repercussions of online grooming on innocent but vulnerable people, and their families, even in situations where the actual sex crimes were thwarted by vigilant family members.

Ryan Rubin, MD of Protiviti, focused his talk on the role of technology in strategic attacks; he sees grooming as part of a wider problem that includes ISIS, trolls, cyber-bullying and child abuse. There is much need to increase public awareness of these issues, as well as of the methods for detecting and combating them, e.g. digital evidence from EXIF data on digital cameras, digital breadcrumbs from social media tools, and privacy controls. People need to practise good personal digital hygiene and risk management: use strong passwords, regularly audit the privacy controls on your social media, don’t publish your date of birth or any unnecessary information about your kids, and certainly monitor your kids’ online activities / content / channels. Remember that photos may contain location information, and don’t post your travel plans (or else you might as well take out a “please rob me” ad). Finally, always post with caution, e.g. by applying the Grandma test (i.e. would your Grandma be offended by the content you’re just about to post online?).

Overall, I thought this was another excellent event by our North London BCS branch, despite the unforeseen glitches caused by the drop-out of three speakers. I only hope the next DarkSide event will be just as topical and provocative, because as IT professionals we are supposed to take a clear stance, if not actually lead the way, in helping resolve those technology-related issues that affect broader society as a whole.

Categories: BCS, Event