I believe that in a highly connected digital world, the future of content publishing lies in creating interlinked manifestations of a core concept or theme. I like to think of this as “multi(n) publishing” (where ‘n’ stands for any number of things, e.g. aspect, channel, facet, format, genre, side, variant), or multi-publishing for short. To this end, I’ve created a framework which could prove very useful for conceptualizing and executing multi-publishing projects. Read on to find out more.
- Why Multi-Publishing?
There is increasing evidence of an evolution in the way people consume digitally enabled content, e.g. watching a TV show whilst surfing the web, talking on the phone to a friend and posting comments on social media – all of which may or may not relate to each other or to a single topic. This has put enormous pressure on content creators and publishers to find new ways to engage their audience and deliver compelling content to people who live in a world awash with competing content, channels, devices and distractions. In the above scenario, broadcasters have tried, with varying degrees of success, to engage viewers with second- or multi-screen content (e.g. the show on TV, cast info on a website / mobile site, plus real-time interaction on social media – all related to the show). Furthermore, the average attention span of most users appears to have shrunk, and many prefer to ‘snack’ on content across devices and formats. This doesn’t bode well for the more traditional long-form content upon which many creative industries were established. As a result, many in the content production, publishing and marketing industries are seeking new ways to engage audiences across multiple devices and channels with even more compelling content and user experiences.
- What is Multi-Publishing?
In this context, the term “multi(n) publishing” (or multi-publishing) describes the manifestation of a core concept or theme as distinct but inter-linked works across multiple media formats, channels and genres. This is somewhat different from related terms such as multi-format (or cross-media), multi-channel, single-source, or even multi-platform publishing. The last is mainly used by marketers to describe the practice of taking one thing and turning it into several products across a spectrum of online, offline and even ‘live’ experiential forms. The key difference is that multi-publishing encompasses them all, and more. In fact, the multi-publishing framework is closer to the information science idea of conceptualisation. Also, and perhaps more importantly, the various manifestations of multi-published content are not necessarily brand-identical to the originating (aka ‘native’) core concept, or to each other. Rather, each manifestation is intended to be unique and distinct, yet able to enhance the others and provide a fuller and more fulfilling experience of the overall core concept.
- How does it work?
In order to achieve the desired outcome of the whole being more than the sum of its parts, it makes sense for creators and publishers to bear in mind, right from the outset, that their works will likely be used, reused, decomposed, remixed and recomposed in many different ways (including new and novel expressions that they couldn’t possibly imagine at the time of creation). Therefore, they must recognize where and how each piece of their output fits within the context of a multi-publishing content framework or architecture. The diagram below is just such a framework (in mindmap form) and demonstrates the narrative-like progression of a single core concept / theme across various stages and interlinked manifestations.
This is only an example of what content creators and their publishers must consider and prepare as part of their creative (inspiration) and publishing (exploitation) process. It requires the creation and/or identification of a core concept which is manifest in the expression of the art (e.g. in the story, song, prose, images, video, game, conversations or presentations), and which can be used to link each and every format, channel or medium in which the concept is expressed.
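To make the idea concrete, here is a minimal sketch of how a core concept could be modelled as the linking element across its manifestations. All class and field names here are hypothetical illustrations, not part of any real framework or API:

```python
from dataclasses import dataclass, field

# Illustrative sketch only: a single core concept linked to distinct but
# inter-related manifestations across formats and channels.

@dataclass
class Manifestation:
    title: str
    media_format: str   # e.g. "novel", "video", "game"
    channel: str        # e.g. "print", "streaming", "social media"

@dataclass
class CoreConcept:
    theme: str
    manifestations: list = field(default_factory=list)

    def add(self, m: Manifestation):
        # Every manifestation is registered against the core concept, so
        # usage could later be tracked across formats and channels.
        self.manifestations.append(m)

    def formats(self):
        return {m.media_format for m in self.manifestations}

# Hypothetical example data
concept = CoreConcept(theme="the reluctant explorer")
concept.add(Manifestation("The Reluctant Explorer", "novel", "print"))
concept.add(Manifestation("Explorer: The Series", "video", "streaming"))
concept.add(Manifestation("Explorer AMA", "conversation", "social media"))

print(sorted(concept.formats()))
```

The point of the sketch is that the core concept, not any single work, is the unit of identity: each manifestation stays distinct, but all remain traceable back to the same theme.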
Finally, the use of multi-publishing frameworks can also enable easier setup and automation of the tracking and recording of all usage transactions, and potentially any subsequent remuneration for creator(s) and publisher(s), in a transparent manner (perhaps using a trust mechanism such as blockchain). I will explore this particular topic in a subsequent post on this blog. In any case, one key question remains: how can or should we protect the core concepts or algorithms at the heart of multi-publishing frameworks, and if so, what form should such protection take?
If you’ve ever wondered how the big tech players do innovation, then you might do well to head on over to IBM’s Hursley labs for a taste of their world-class innovation facility. A few weeks ago, some colleagues and I attended an executive briefing on innovation, the IBM way. Read on to find out more…
IBM Executive Briefing Day
We had a fairly simple and straightforward agenda in mind: to hear, see and connect with IBM labs on key areas of innovation that we might be able to leverage in our own labs, and for clients. This objective was easily met and exceeded as we proceeded through the day-long briefing program. Below are some highlights:
First of all, Dr Peter Waggett, Director for Innovation, gave an overview of IBM Research and its ways of working. With an annual R&D spend of over $5 billion, and $1 billion in annual revenues from patents alone, it quickly became clear that we were in for a day of superlatives. Dr Waggett described the operating model, lab resources and key areas of focus, such as working at the ‘bow wave’ of technology, ‘crossing the mythical chasm’ and ‘staying close to market’. Some specific areas of active research include Cognitive Computing (Watson et al), homomorphic encryption, ‘data at the edge’ and several emerging tech areas, e.g. biometrics and wetware / neuromorphic computing with the IBM SyNAPSE chips. And that was just the morning session!
The rest of the day involved visiting several innovation labs, as outlined below:
Retail Lab – demonstrations of some key innovations in retail back-end integration, shopper relevance and customer engagement management (with analytics / precision marketing / customer lifecycle engagement). It also touched on integration / extension with next-generation actionable tags by PowaTag.
Emerging Technology & Solutions Lab – featured, among other things: the IBM touch table (for collaborative interactive working); Buildings Management solutions (with sensor / alert, dashboard, helmet and smart-watch components); manufacturing-related IoT solutions (using Raspberry Pi and Node-RED to enable a closed-loop sensor / analysis / action round trip); healthcare innovations (including smart-home based health and environment monitoring with inference capability); and of course Watson Analytics.
IoT Lab – demonstrated various IoT-based offers, e.g. from Device to Cloud; Instrumenting the World proofs of concept; decoupled sensors / analysis / actuators; an IoT reference architecture (incl. Device / Gateway / Cloud / Actuators); and IoT starter kits (with a Node-RED development environment and predefined recipes for accelerated IoT).
IOC Lab – IBM’s Intelligent Operations Centre (IOC) was shown to be highly relevant for smarter cities, as it deploys fourfold capabilities to Sense / Analyse / Decide / Act, making it possible to predict and respond to situations even before they arise. IOC capabilities and case studies were also shown to be relevant and applicable across multiple industry scenarios, including retail, transport, utilities and supply chain.
Finally, you cannot complete a visit to Hursley without stopping off at their underground Museum of Computing. Over the years, this has become a special place, showcasing the amazing innovations of yesterday which have now become objects of nostalgia and curiosity for today’s tech-savvy visitors. It is almost incredible to think that computers once ran on floppy discs, magnetic tape and even punch cards. This is made even more poignant by the thought that almost every new innovation we saw in the labs will one day take its place in the museum (particularly if it proves successful). Perhaps some of the exhibits may even be brought back to life by newer, as-yet-undiscovered innovations, e.g. see if you can spot the 3D printed key on this IBM 705 data processor keyboard!
Spot the 3D printed key.
Overall, it was a great experience, and many thanks to our hosts and the IBM event team for making this a most interesting event. The team and I certainly look forward to finding out how other tech players, both large and small, are pursuing their own innovation programs!
Given the proliferation of interconnected ‘Things’ on the Internet (aka IoT), it was only a matter of time before robust, pervasive governance became a pressing imperative. How can we manage the rights and permissions needed to do things with, and / or by, connected things? The following are some thoughts, based on a previous foray into the topic, and building on my earlier book on the related world of Digital Rights Management (aka DRM).
Does anyone remember DRM – that much maligned tool of real / perceived oppression, (somewhat ineptly deployed by a napsterized music industry)? It has all but disappeared from the spotlight of public opinion as the content industry continues to evolve and embrace the complex digital realities of today. But what has that got to do with the IoT, and what triggered the thought in the first place, you might ask…
Well, I recently had the opportunity to chat with friend and mentor Andy Mulholland (ex global CTO at Capgemini), and, as usual, I got a slight headache just trying to get a grip on some of the more esoteric concepts about the future of digital technology. Naturally we touched on the future of IoT, and how some current thinking may be missing the point entirely. For example:
What is the future of IoT?
Contrary to simplistic scenarios, often demonstrated with connected sensors and actuators, IoT ultimately enables the creation and realisation of a true digital services economy. This is based on three key aspects (‘Things’, ‘Events’ and ‘Connectivity’) which work together to deliver value via autonomous agents, systems and interactions. The real players in IoT actually sit outside the traditional world of IT: organisations in industries such as manufacturing, automotive and logistics. Combined with the novel uses that people conceive for connected things, this means the traditional IT industry is playing, and will continue to play, catch-up in this fast-evolving and dynamic space.
What are the key components of IoT enabled digital services?
An autonomous or semi-autonomous IoT enabled digital service will include an event hub (consisting of a graph database and complex event processing capability) in the context of ‘fog computing‘ architectures (aka cloud edge computing) – as I said, this is headache territory (read Andy’s latest post if you dare). Together, event handling and fog computing can be used to create and deliver contextually meaningful value and services for end users. The Common Industrial Protocol (CIP) and API engines will also play key roles in the deployment of autonomous services between things and / or people. Finally, businesses looking to compete in this game need to start identifying, creating and offering such services to their customers.
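To illustrate what an event hub with simple complex-event processing might look like, here is a minimal sketch. The device names, event shapes and the 30-degree threshold are all assumptions made for illustration, not part of any real product:

```python
from collections import deque

# Illustrative event hub: events land in a sliding window, and a simple
# "complex event" rule over the window can trigger an autonomous action.

class EventHub:
    def __init__(self, window_size=5):
        self.window = deque(maxlen=window_size)  # sliding event window
        self.actions = []                        # actions emitted so far

    def publish(self, event):
        # event is a (device_id, event_type, value) tuple
        self.window.append(event)
        self._evaluate()

    def _evaluate(self):
        # Complex event rule (hypothetical): two or more over-threshold
        # temperature readings from the same device trigger cooling.
        hot = [e for e in self.window if e[1] == "temperature" and e[2] > 30]
        counts = {}
        for device_id, _, _ in hot:
            counts[device_id] = counts.get(device_id, 0) + 1
            if counts[device_id] == 2:
                self.actions.append(("activate_cooling", device_id))

hub = EventHub()
hub.publish(("sensor-1", "temperature", 28))  # below threshold, no action
hub.publish(("sensor-1", "temperature", 33))  # one hot reading, no action
hub.publish(("sensor-1", "temperature", 35))  # second hot reading: act
print(hub.actions)
```

In a real fog computing deployment this evaluation would run at the cloud edge, close to the devices, rather than in a single in-memory loop; the sketch only shows the event-to-action pattern.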
Why are graph databases an important piece of the puzzle?
Graph databases provide a way to store relationships in an unstructured manner, and a scaled-up IoT environment will need five separate graph stores, as follows:
- Device Info – e.g. type, form and function, data (provided/consumed), owner etc.
- Customer/Users – e.g. Relationship of device to the user / customer
- Location – e.g. where the device is located (also relative to other things / points of reference)
- Network – e.g. network type, protocols, bandwidth, transport, data rate, connectivity constraints etc.
- Permission – e.g. who can do what, when, where, how and with whom/what, and under what circumstances (in connection with the above four graphs) – According to Andy, “it is the combination of all five sets of graph details that matter – think of it as a sort of combination lock!”
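The “combination lock” idea can be sketched in a few lines. Here each graph is mocked as a plain dictionary (a real system would query a graph database), and every name, device and rule is a made-up illustration:

```python
# Hypothetical data standing in for the five graph stores.
device_graph   = {"thermostat-7": {"type": "sensor", "owner": "acme"}}
customer_graph = {"thermostat-7": {"user": "alice", "role": "admin"}}
location_graph = {"thermostat-7": {"site": "home", "room": "lounge"}}
network_graph  = {"thermostat-7": {"protocol": "mqtt", "online": True}}
permission_graph = {
    # who can do what, on which device, under what circumstances
    ("alice", "thermostat-7", "set_temperature"): {"requires_online": True},
}

def authorise(user, device, action):
    # The decision only succeeds when ALL five graphs line up,
    # like the tumblers of a combination lock.
    rule = permission_graph.get((user, device, action))
    if rule is None:
        return False
    return (
        device in device_graph                                   # device graph
        and customer_graph.get(device, {}).get("user") == user   # customer graph
        and device in location_graph                             # location graph
        and (not rule["requires_online"]
             or network_graph[device]["online"])                 # network graph
    )

print(authorise("alice", "thermostat-7", "set_temperature"))  # expected True
print(authorise("bob", "thermostat-7", "set_temperature"))    # expected False
```

The design point is that no single store is authoritative on its own: a permission entry without a matching device, user, location and network state yields no access.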
So how does this relate to the notion of “DRM for Things”?
Well, it is ultimately all about trust, as observed in a previous post. First, there must be real trust in the things (components and devices), agents, events, interactions and connections that make up an IoT enabled autonomous service (and its ecosystem). Second, the trust model and enforcement mechanisms must themselves be well implemented and trustworthy, or else the whole thing could disintegrate much like the aforementioned music industry attempts at DRM. Also, there are some key similarities in the surrounding contexts of both DRM and IoT:
- The development and introduction of DRM took place during a period of Internet enabled disruptive change for the content industry (i.e. with file-sharing tools such as Napster, Pirate Bay and cyberlockers). This bears a startling resemblance to the current era of Internet enabled disruptive change, albeit for the IT industry (i.e. via IoT, Blockchain, AI and Social, Mobile, Big Data, Cloud etc.)
- The power of DRM lies in the ability to control / manage access to content in the wild, i.e. outside of a security perimeter or business boundary. The ‘Things’ in IoT exist as everyday objects, typically with low computing overheads / footprints, which can be even more wide-ranging than mere digital content.
- Central to DRM is the need for irrefutable identity and clear relationships between device, user (intent), payload (content) and their respective permissions. This is very similar to autonomous IoT enabled services, which must rely on the five graphs mentioned previously.
I would not propose using current DRM tools to govern autonomous IoT enabled services (that would be akin to using yesterday’s technology to solve the problems of today and tomorrow). However, because IoT requires similar deperimeterised and distributed trust / control models, there is scope for a more up-to-date, DRM-like mechanism or extension that can deliver this capability. Fortunately, the most likely option may already exist in the form of Blockchain and its applications. As Ahluwalia, IBM’s CTO for Cloud, so eloquently put it: “Blockchain provides a scalable, trustworthy, highly distributed, redundant and peer-to-peer verification process for processing, coordinating device interactions and sharing access to assets in an IoT network.” Enough said.
In light of the above, it is perhaps easier to glimpse how an additional Blockchain component, for irrefutable trust and ID management, might provide equivalent DRM-like governance for IoT. I see this as a natural evolution of DRM (or whatever you want to call it) for both ‘things’ and content. However, any such development would do well to take on board the lessons learnt from the original content DRM implementations, and to understand that it is not cool to treat people as things.
A topical and sensitive subject such as online child abuse or terrorist recruitment will understandably garner a lot of interest and attention, not just from IT people, but from all other members of society at large. The recent BCS event with a similar title was no exception, and perhaps unsurprisingly it also became a target of unwanted attention from some self-styled extremists. Read on for more.
First of all, the event featured only two of the five original speakers. Apparently, the other three were unable to attend for various reasons, including threats to personal safety from a certain extremist group. However, it still turned out to be an interesting and informative session, full of insightful takes on the legal and IT aspects of online grooming.
Will Richmond-Coggan, Partner at Pittmans LLP, described how UK laws created in 2008 are not equipped to handle more recent emergent technologies and behaviours, e.g. ubiquitous social media and / or ephemeral messaging services such as Snapchat. The key challenge is the startling velocity with which certain social technology innovations can gain critical mass and become pervasive. Nowadays, even a joke on Twitter about ‘blowing up something’ can be misconstrued, setting off a chain of events that could result, at best, in a charge of wasting police time. Freedom of expression is tricky, because it is not without limitations.
Richmond-Coggan also discussed how well meaning individuals wanting to give moral or financial support to oppressed people overseas can easily become victims of online recruiters and / or radicalization by extremist organisations. He also presented case studies illustrating the repercussions of online grooming on innocent but vulnerable people, and their families, even in situations where the actual sex crimes were thwarted by vigilant family members.
Ryan Rubin, MD of Protiviti, focused his talk on the role of technology in such strategic attacks, and he sees grooming as part of a wider problem that includes ISIS recruitment, trolls, cyber-bullying and child abuse. There is much need to increase public awareness of these issues, as well as of the methods for detecting and combating them, e.g. digital evidence from EXIF data on digital cameras, digital breadcrumbs from social media tools, and privacy controls. People need to practise good personal digital hygiene and risk management: use strong passwords, regularly audit the privacy controls on your social media accounts, don’t publish your date of birth or any unnecessary information about your kids, and certainly monitor your kids’ online activities / content / channels. Remember that photos may contain location information, and don’t post your travel plans (or else you might as well take out a “please rob me” ad). Finally, always post with caution, e.g. by applying the Grandma test (i.e. would your Grandma be offended by the content you’re about to post online?).
Overall, I thought this was another excellent event from our North London BCS branch, despite the unforeseen glitches caused by the drop-out of three speakers. I only hope the next Darkside event will be just as topical and provocative because, as IT professionals, we ought to take a clear stance, if not actually lead the way, in helping to resolve those technology-related issues that affect broader society as a whole.