
Archive for the ‘Capgemini’ Category

Predicting the (near) Future

December 22, 2015
The future is always tricky to predict and, in keeping with Star Wars season, the dark side is always there to cloud everything. But as we all know in IT, the ‘Cloud’ can be pretty cool, except of course when it leaks. Last month saw the final edition of Gartner’s Symposium/ITxpo 2015 in Barcelona, and I was fortunate to attend (courtesy of my Business Unit) and bear witness to some amazing predictions about the road ahead for our beloved / beleaguered IT industry.
 
Judging from the target audience, and the number of people in attendance, it is safe to say that the future is at best unpredictable, and at worst unknowable, but Gartner’s analysts gave it a good go, making bold statements about the state of things to come within the next 5 years or so. The following are some key messages, observations and predictions which I took away from the event.
 
1. CIOs are keen to see exactly what lies ahead.
Obviously. However, it does confirm to my mind that the future is highly mutable, especially given the amount of change to be navigated on the journey towards digital transformation. I say ‘towards’ because, from all indications, there is likely no real end-point or destination to the journey of digital transformation. The changes (and challenges / opportunities) just keep coming thick and fast, and at an increasing pace. For example, Gartner predicts that by 2017, 50% of IT spending will be outside of IT (it stands at 42% today), so CIOs must shift their approach from command-and-control style management to leading via influence and collaboration.
 
2. Algorithmic business is the future of digital business
A market for algorithms (i.e. snippets of code with value) will emerge, where organisations and individuals will be able to license, exchange, sell and/or give away algorithms – hmmm, now where have we seen or heard something like that before? Anyway, as a result, many organisations will need an ‘owner’ for algorithms (e.g. a Chief Data Officer) whose job it will be to create an inventory of their algorithms, classify them (i.e. private / “core biz” versus public / “non-core biz” value), and oversee / govern their use.
 
3. The next level of Smart Machines
In the impending “Post App” era, which is likely to be ushered in by algorithms, people will rely on new virtual digital assistants (i.e. imagine Siri or Cortana on steroids) to conduct transactions on their behalf. According to Gartner, “By 2020, smart agent services will follow at least 10% of people to wherever they are, providing them with services they want and need via whatever technology is available.” Also, the relationship between machines and people will initially be cooperative, then co-dependent, and ultimately competitive, as machines start to vie for the same limited resources as people.
 
4. Platforms are the way forward (and it is bimodal all the way)
A great platform will help organisations add and remove capability ‘like velcro’. It will need to incorporate Mode 2 capability in order to: fail fast on projects / cloud / on-demand / data and insight. Organisations will start to build innovation competency, e.g. via innovation labs, in order to push the Mode 2 envelope. Platform thinking will be applied at all layers (including: delivery, talent, leadership and business model) and not just on the technology / infrastructure layer.
 
5. Adaptive, People Centric Security
The role of the Chief Security Officer will change, and good security roles will become more expansive and mission critical. In future, everyone gets hacked, even you, and if not then you’re probably not important. Security roles will need to act more like intelligence officers instead of policemen. Security investment models will shift from predominantly prevention-based to combined prevention and detection capabilities, as more new and unpredictable threats become manifest. Also, organisations will look to deploy People Centric Security (PCS) measures in order to cover all bases.
 
6. The holy grail of business moments and programmable business models
The economics of connections (from increased density of connections and creation of value between business / people / things) will become evident, especially when organisations focus on delivering business moments to delight their customers. Firms will start to capitalise on their platforms to enable C2C (customer-to-customer) interactions and allow people and things to create their own value. It will be the dawn of programmable business models.
 
7. The Digital Mesh and the role of wearables and IoT
One of the big winners in the near future will be the ‘digital mesh’, amplified by the explosion of wearables and IoT devices (and their interactions) in the digital mesh environment. Gartner predicts a huge market for wearables (e.g. 500M units sold in 2020 alone – for just a few particular items). Furthermore, barriers to entry will be lower and prices will fall as a result of increased competition, along with: more Apps, better APIs and improved power.
 
The above are just a few of the trends and observations I took from the event, but I hasten to add that it is impossible to reflect four days of pure content in these highlight notes, and that other equally notable trends and topics, such as IoT architecture, talent acquisition and CIO/CTO agendas, only receive honourable mentions. However, I noticed that topics such as Blockchain were not explored as fully as might be expected at an event of this nature. Perhaps next year will see it covered in more depth – just my prediction.
In summary, the above are not necessarily earth-shattering predictions, but taken together they point the way forward to a very different experience of technology; one that is perhaps more in line with hitherto far-fetched predictions of the Singularity, as humans become more immersed and enmeshed with machines. Forget the Post-App era: this could be the beginning of a distinctly recognisable post-human era. However, as with all predictions, only time will tell, and in this case, let’s see where we are this time next year. I hope you have a happy holiday / festive season wherever you are.

IBM Innovation Labs – where old meets new, and everything in between…

November 25, 2015

If you’ve ever wondered how the big tech players do innovation then you might do well to head on over to IBM’s Hursley labs for a taste of their world class innovation facility. A few weeks ago, some colleagues and I were hosted to an executive briefing on innovation, the IBM way. Read on to find out more…

Pictures on lab visit

IBM Executive Briefing Day

We had a fairly simple and straightforward agenda / expectation in mind, i.e. to: hear, see and connect with IBM labs on key areas of innovation that we might be able to leverage in our own labs, and for clients. This objective was easily met and exceeded as we proceeded through the day long briefing program. Below are some highlights:

First of all, Dr Peter Waggett, Director for Innovation, gave an overview of IBM Research and its ways of working. For example, with an annual R&D spend of over 5 billion dollars, and 1 billion dollars in annual revenues from patents alone (IBM files over 50 patents a year), it quickly became clear that we were in for a day of superlatives. Dr Waggett described the operating model, lab resources and key areas of focus, such as: working at the ‘bow wave’ of technology, ‘crossing the mythical chasm’ and ‘staying close to market’. Some specific areas of active research include: Cognitive Computing (Watson et al), homomorphic encryption, “data at the edge” and several emerging tech concepts / areas, e.g. biometrics / biometry and wetware / neuromorphic computing with the IBM SyNAPSE chips. And that was just in the morning session!

The rest of the day involved visiting several innovation labs, as outlined below:

Retail Lab – demonstration of some key innovation in: retail back end integration, shopper relevance and customer engagement management (with analytics / precision marketing / customer lifecycle engagement). Also, touched on integration / extension with next generation actionable tags by PowaTag.

Emerging Technology & Solutions Lab – featured among other things: the IBM touch table (for collaborative interactive working), Buildings Management solutions (with sensors / alerts, dashboard, helmet and smart watch components); Manufacturing related IoT solutions (using Raspberry Pi & Node Red to enable closed loop sensor/analysis/action round trip); Healthcare innovations (including Smarthome based health and environment monitoring with inference capability) and of course Watson Analytics.

IOT Lab – Demonstrated various IoT based offers e.g.: from Device to Cloud; Instrumenting the World Proof of Concepts; Decoupled sensors / analysis / actuators; IoT reference architecture (incl. Device / Gateway / Cloud / Actuators ); and IoT starter kits (with Node Red development environment & predefined recipes for accelerated IoT).

IOC Labs – IBM’s Intelligent Operations Centre (IOC) was shown to be highly relevant for smarter cities as it enables the deployment of fourfold capabilities to: Sense / Analyse / Decide / Act, thus enabling the ability to predict and respond to situations even before they arise. IOC capabilities and cases studies were also demonstrated to be relevant & applicable across multiple industry scenarios including: retail, transport, utilities and supply chain.

Finally, you cannot complete a visit to Hursley without stopping off at their underground museum of computing. Over the years, this has become a special place, showcasing the amazing innovations of yesterday which have now become objects of nostalgia and curiosity for today’s tech savvy visitors. It is almost incredible to think that computers once ran on: floppy discs, magnetic tape and even punch cards. This is made even more poignant by the thought that almost every new innovation we saw in the labs will one day take its place in the museum (particularly if it proves successful). Perhaps some of them may even be brought back to life by other, newer and as-yet-undiscovered innovations, e.g. see if you can spot the 3D printed key on this IBM 705 data processor keyboard!


Spot the 3D printed key.

Overall, it was a great experience, and many thanks to our hosts and the IBM event team for making this a most interesting event. The team and I certainly look forward to finding out how other tech players, both large and small, are pursuing their own innovation programs!

DRM for Things – Managing rights and permissions for IOT

November 24, 2015

Given the proliferation of interconnected ‘Things’ on the Internet (aka IoT), it was only a matter of time before the pressing need for robust, pervasive governance became imperative. How can we manage the rights and permissions needed to do stuff with and / or by things? The following are some thoughts, based on a previous foray into the topic, and building on my earlier book on the related world of Digital Rights Management (aka DRM).

Does anyone remember DRM – that much maligned tool of real / perceived oppression, (somewhat ineptly deployed by a napsterized music industry)? It has all but disappeared from the spotlight of public opinion as the content industry continues to evolve and embrace the complex digital realities of today. But what has that got to do with the IoT, and what triggered the thought in the first place, you might ask…

Well, I recently had opportunity to chat with friend and mentor, Andy Mulholland (ex global CTO at Capgemini), and as usual, I got a slight headache just trying to get a grip on some of the more esoteric concepts about the future of digital technology. Naturally we touched on the future of IoT, and how some current thinking may be missing the point entirely, for example:

What is the future of IoT?

Contrary to simplistic scenarios, often demonstrated with connected sensors and actuators, IoT ultimately enables the creation and realisation of a true digital services economy. This is based on 3 key aspects: ‘Things’, ‘Events’ and ‘Connectivity’, which will work together to deliver value via autonomous agents, systems and interactions. The real players, when it comes to IoT, actually belong outside the traditional world of IT. They include organisations in industries such as manufacturing, automotive and logistics, and when their efforts are combined with the novel uses that people conceive for connected things, the traditional IT industry is playing, and will continue to play, catch-up in this fast evolving and dynamic space.

What are key components of the IoT enabled digital services?

An autonomous or semi-autonomous IoT enabled digital service will include: an event hub (consisting of graph database and complex event processing capability) in the context of ‘fog computing‘ architectures (aka cloud edge computing) – as I said, this is headache territory (read Andy’s latest post if you dare). Together, event handling and fog computing can be used to create and deliver contextually meaningful value / services for end users. The Common Industrial Protocol (CIP) and API engines will also play key roles in the deployment of autonomous services between things and / or people. Finally, businesses looking to compete in this game need to start focusing on identifying / creating / offering such resulting services to their customers.
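To make the event-hub idea a little more concrete, here is a minimal sketch (plain Python, with entirely hypothetical names, and no real fog computing framework behind it) of the kind of edge-side event processing described above: a node keeps a short sliding window of sensor readings and raises a composite alert locally, rather than shipping every raw reading to the cloud.

```python
from collections import deque

# Illustrative only: an edge node detects a composite condition locally,
# instead of forwarding every individual reading upstream.

class EdgeEventHub:
    def __init__(self, window=5, threshold=30.0):
        self.window = deque(maxlen=window)   # sliding window of recent readings
        self.threshold = threshold

    def on_event(self, reading):
        """Return an action when the windowed average crosses the threshold."""
        self.window.append(reading)
        avg = sum(self.window) / len(self.window)
        if len(self.window) == self.window.maxlen and avg > self.threshold:
            return f"alert: avg temperature {avg:.1f} exceeds {self.threshold}"
        return None

hub = EdgeEventHub()
actions = [hub.on_event(t) for t in (28.0, 29.0, 31.0, 33.0, 35.0)]
print(actions[-1])  # the last reading pushes the windowed average over 30
```

In a fuller design the triggered action would of course feed an API engine or actuator rather than a print statement.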

Why is Graph Database an important piece of the puzzle? 

Graph databases provide a way to store relationships in an unstructured manner, and IoT enabled services will need five separate graphs for scaled-up IoT environments, as follows:

  1. Device Info – e.g. type, form and function, data (provided/consumed), owner etc.
  2. Customer/Users – e.g. Relationship of device to the user / customer
  3. Location – e.g. Where is device located (also relative to other things / points of reference)
  4. Network – e.g. network type, protocols, bandwidth, transport, data rate, connectivity constraints etc.
  5. Permission – e.g. who can do: what, when, where, how and with whom/what, and under what circumstances (in connection with the above four graphs) – According to Andy, “it is the combination of all five sets of graph details that matter – think of it as a sort of combination lock!”
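As a rough illustration of the “combination lock” idea, the sketch below (plain Python with made-up device and relation names, standing in for a real graph database) models the five graphs as simple relationship stores and makes the permission decision consult all five:

```python
# Illustrative sketch (not any specific product's API): the five graphs as
# adjacency maps, with a permission check that only passes when all five
# graphs agree -- the "combination lock" described above.

class IoTGraphStore:
    def __init__(self):
        self.graphs = {name: {} for name in
                       ("device", "customer", "location", "network", "permission")}

    def relate(self, graph, subject, relation, obj):
        self.graphs[graph].setdefault(subject, []).append((relation, obj))

    def related(self, graph, subject, relation):
        return [o for r, o in self.graphs[graph].get(subject, []) if r == relation]

    def may_act(self, user, action, device):
        # The decision draws on all five graphs together.
        return (device in self.related("customer", user, "owns")
                and bool(self.related("location", device, "located_at"))
                and bool(self.related("network", device, "reachable_via"))
                and action in self.related("device", device, "supports")
                and action in self.related("permission", user, "may"))

store = IoTGraphStore()
store.relate("device", "thermostat-1", "supports", "set_temperature")
store.relate("customer", "alice", "owns", "thermostat-1")
store.relate("location", "thermostat-1", "located_at", "hallway")
store.relate("network", "thermostat-1", "reachable_via", "zigbee")
store.relate("permission", "alice", "may", "set_temperature")

print(store.may_act("alice", "set_temperature", "thermostat-1"))  # True
print(store.may_act("bob", "set_temperature", "thermostat-1"))    # False
```

Dropping any one of the five relationships above makes the check fail, which is exactly the point of combining the graphs.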

So how does this relate to the notion of “DRM for Things”? 

Well, it is ultimately all about trust, as observed in another previous post. There must be real trust in: things (components and devices), agents, events, interactions and connections that make up an IoT enabled autonomous service (and its ecosystem). Secondly, the trust model and enforcement mechanisms must themselves be well implemented and trustworthy, or else the whole thing could disintegrate much like the aforementioned music industry attempts at DRM. Also, there are some key similarities in the surrounding contexts of both DRM and IoT:

  • The development and introduction of DRM took place during a period of Internet enabled disruptive change for the content industry (i.e. with file sharing tools such as: Napster, Pirate Bay and Cyberlockers). This bears startling resemblance to the current era of Internet enabled disruptive change, albeit for the IT industry (i.e. via IoT, Blockchain, AI and Social, Mobile, Big Data, Cloud etc.)
  • The power of DRM exists in the ability to control / manage access to content in the wild, i.e. outside of a security perimeter or business boundary. The ‘Things’ in IoT exist as everyday objects, typically with low computing overheads / footprints, which can be even more wide ranging than mere digital content.
  • Central to DRM is the need for irrefutable identity and clear relationships between: device, user (intent), payload (content) and their respective permissions. This is very much similar to autonomous IoT enabled services which must rely on the 5 graphs mentioned previously.

I would not propose using current DRM tools to govern autonomous IoT enabled services (that would be akin to using yesterday’s technology to solve the problems of today / tomorrow); however, because such governance requires similarly deperimeterised and distributed trust / control models, there is scope for a more up-to-date DRM-like mechanism or extension that can deliver this capability. Fortunately, the most likely option may already exist in the form of Blockchain and its applications. As Gurvinder Ahluwalia, IBM’s CTO for Cloud, so eloquently put it: “Blockchain provides a scalable, trustworthy, highly distributed, redundant and peer-to-peer verification process for processing, coordinating device interactions and sharing access to assets in an IoT network.” Enough said.

In light of the above, it is perhaps easier to glimpse how an additional Blockchain component, for irrefutable trust and ID management, might provide equivalent DRM-like governance for IoT, and I see this as a natural evolution of DRM (or whatever you want to call it) for both ‘things’ and content. However, any such development would do well to take on board lessons learnt from the original Content DRM implementations, and to understand that it is not cool to treat people as things.

Big Dating: Bringing real data to the dating game.

June 1, 2015

The online dating industry has grown from strength to strength and is estimated to be valued in excess of £2Billion, globally. However, the future growth may hinge on how data and new technologies can be leveraged to improve user experience and matching outcomes. Some key questions: Does having more data about potential partners really make any difference in finding the right match? What are key emerging trends that will affect the evolution of online dating?


There are literally thousands of online dating sites worldwide, including over 1400 sites in the UK alone where online dating accounts for 25% of all new relationships. As might be expected, there are many types of players and business models in the industry, including online behemoths such as eHarmony or Match.com; mobile players like Tinder or Hinge; and increasingly niche specialists that match users based on specific demographic factors e.g. age / income / ethnicity / religion / location / sexuality etc.

Regardless of player size, business model or target user groups, a quick web trawl reveals some salient observations about the current and future state of online dating, as follows:

  • Mobile dating on the rise – A key trend is the increasing use of mobile Apps for online dating – so the major players are refocusing efforts to improve the multi-channel experience for their users.
  • A question of trust – Online dating services typically require user data for matching potential partners, but this can be greatly impaired by inaccurate data. Users often exaggerate personal attributes, or lie outright, in order to attract potential partners. Providers seek additional data (e.g. from retail, social media, entertainment and online sources) to augment data accuracy. However, there are privacy implications here that will need addressing.
  • User behaviours – Some providers prefer to base matches on actual user behaviours, the idea being that people often say one thing then do the opposite. This is not unusual with online dating, where user reactions to proposed matches can often reveal their true preferences, regardless of what is stated on their profiles.
  • Matching algorithms are far from perfect – In fact, some view matching algorithms as just “smoke and mirrors”, and argue that dating sites succeed simply by providing a larger pool of potential partners. Furthermore, human matching is a bi-directional proposition because, unlike an Amazon recommendation, your supposedly perfect match may not be all that into you.
  • The eternal shop window – General attitude to online dating has become more positive, and the number of people using dating apps is growing faster than all other apps combined. However, these also foster the notion that online dating encourages, or at least facilitates, perpetual window shopping for potential matches, even for those people in committed relationships.
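The bi-directionality point above can be illustrated with a deliberately toy sketch (this is not any dating site’s actual algorithm): combining the two directed scores pessimistically means one side’s enthusiasm cannot rescue the other side’s indifference.

```python
# Toy illustration only: a match must score well in BOTH directions.

def directed_score(prefs, candidate_traits):
    """Fraction of a user's stated preferences the candidate satisfies."""
    if not prefs:
        return 1.0
    return sum(1 for p in prefs if p in candidate_traits) / len(prefs)

def mutual_score(a_prefs, a_traits, b_prefs, b_traits):
    # min() enforces bi-directionality: the weaker direction decides.
    return min(directed_score(a_prefs, b_traits),
               directed_score(b_prefs, a_traits))

romeo = {"traits": {"poetic", "impulsive"}, "prefs": {"kind", "poetic"}}
juliet = {"traits": {"kind", "poetic"}, "prefs": {"steady", "sensible"}}

# Romeo's preferences are fully met (1.0), but Juliet's are not (0.0),
# so the mutual score collapses to 0.0.
print(mutual_score(romeo["prefs"], romeo["traits"],
                   juliet["prefs"], juliet["traits"]))
```

Real systems weight and learn these scores, but the asymmetry problem is the same: a one-directional recommender simply cannot capture it.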

It is clear from the above that although data and technology will continue to be crucial in the evolution of online dating, the continued success and growth of the industry will depend very much on how well it can handle complex human behaviours, motivations and inconsistencies.

Matching algorithms aside, there’s still significant opportunity and scope for complex human behaviour modelling, and improved dynamic / predictive analytics, to cater for users’ changing preferences, circumstances and motivations. These must all be in place before the claims and predictions of everlasting happiness via online dating can be tested or verified. Perhaps, if Romeo and Juliet had access to such computer-enabled insight, theirs may not have been such a tragic love story!


Leading Digital In Practice

May 14, 2015
I had the opportunity to read and review the book “Leading Digital” by George Westerman, Didier Bonnet & Andrew McAfee, and as you might guess from the review score, I thought it was an excellent book. However, there’s nothing quite like putting something into practice to get a real feel for it, and I was able to do just that on a couple of recent occasions. Read on for highlights…


If you haven’t already read the book*, I can assure you it is chock full of common sense and great ideas on how to go about transforming your typical large, non-tech organisation into a digital master. However, as with most things, the theory can be vastly different from reality in practice, so below are a few observations from recent experiences where we tried to put into practice some of the wisdom from Leading Digital:

1. Not every organisation is geared up to do this right away – Even those organisations perceived by peers to be ahead of the pack may just be ‘Fashionistas’ at heart (i.e. very quick to try out shiny new digital toys without adult supervision). To gauge readiness it is important to understand where an organisation sits in the digital maturity quadrant**. Some organisations believe they already know the answer, but it’s always advisable to verify such a crucial starting point, in order to work out their best route to digital mastery.



2. Engage both business and technology communities from the start – Anything else is just window dressing because, although either group can sell a good story as why they’re critical, neither side can fully deliver digital transformation without the other. It really is a game of two sides working well together to achieve a single outcome – no short cuts allowed.

3. Ground up or top down is great, but together they’re unbeatable – Every organisation must address four interlocking*** areas of: Vision, Engagement, Governance & Technology to stand any chance of leading digital. Many often have one or more of these areas needing serious intervention to get up to speed.


4. Employees know their organisation better than anyone – This may be stating the obvious, but on several occasions we found critical knowledge locked in the heads of a few individuals, or that departments don’t communicate enough with each other, (not even those using the same systems / processes / suppliers). It is therefore a vital step to unearth such locked-in knowledge, and to untangle any communication gridlock.

5. Using the right tools in the right way pays off big – The Digital Maturity Quadrant or Digital Maturity Assessment exercise are great tools for stimulating debate, conversations and mission clarity. However the readiness of an organisation may impact how such tools are perceived as well as their effectiveness. In such situations, we need to reassess the best way to achieve a useful outcome.

In conclusion, I’d encourage all large, non-tech firms to look for opportunities to put some of the book’s wisdom into practice. The pay off is well worth it, and besides it’s never too late to start on the transformation journey because, as author Andrew McAfee puts it, when it comes to digital, “we ain’t seen nothing yet“!


=========
*Source: Leading Digital by George Westerman, Didier Bonnet and Andrew McAfee
**Source: Capgemini Consulting-MIT Analysis – Digital Transformation: A roadmap for billion-dollar organizations (c) 2012
*** Source: Capgemini 2014


Governing the Internet of Things.

February 28, 2015
In light of increasing coverage of the so-called “Internet of Things” (IoT), it is not surprising that sovereign governments are paying attention and introducing initiatives to try to understand, and benefit from, its immense promise. Despite the hype, it is probably too early to worry about how to govern such a potential game changer, or is it?


According to Gartner’s Hype Cycle for Emerging Technologies, the Internet of Things is hovering at the peak of inflated expectations, with a horizon of some 5 – 10 years before reaching the “plateau of productivity” as an established technology, so still fairly early days, it would seem. However, that is not sufficient reason to avoid discussing governance options and implications for what is arguably the most significant technology development since the dawn of the Internet itself. To this end, I attended a recent keynote seminar on policy and technology priorities for IoT (see agenda here), and below are some of the key points I took away from the event:


1. No trillion IoT devices anytime soon –  According to Ovum’s Chief Analyst the popular vision of ‘a Trillion IoT devices’ will not appear overnight, for the simple reason that it is difficult, and will take some time, to deploy all those devices in all manner of places that they need to be.


2. What data avalanche? – Although a lot of data will be generated by the IoT, it shouldn’t come as a surprise that the proportion of meaningful information will depend on the cost to generate, store and extract useful information from the petabytes of noise – there is a lot of scope for data compression. For example, the vast majority of data from say environment sensing IoT devices will likely be highly repetitive and suitable for optimisation.
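As a toy illustration of just how compressible such repetitive sensor data can be, the sketch below delta-encodes a near-constant temperature stream and then run-length-encodes the deltas, collapsing 100 readings into a handful of pairs:

```python
# Illustrative only: highly repetitive sensor readings compress very well.

def delta_encode(values):
    """First value verbatim, then successive differences."""
    return [values[0]] + [b - a for a, b in zip(values, values[1:])]

def run_length_encode(values):
    """Collapse runs of identical values into [value, count] pairs."""
    runs = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return runs

# A stuck-at-21 temperature sensor with one brief blip to 22.
readings = [21] * 50 + [22] + [21] * 49
deltas = delta_encode(readings)          # mostly zeros after the blip
compressed = run_length_encode(deltas)
print(len(readings), "->", len(compressed), "run-length pairs")  # 100 -> 5
```

Production systems use far more sophisticated schemes, of course, but the underlying observation is the same: the information content of most environmental sensor streams is a tiny fraction of their raw size.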


3. Regulatory implications – Ofcom, the UK’s communications regulator, identified four themes as most relevant for the future development of IoT, i.e.: 1. Data privacy (including authorisation schemes); 2. Network security & resilience (suitable for low-end devices); 3. Spectrum (e.g. opening up the 700MHz band and other high / low frequency bands for IoT); and 4. Numbering & addressing (the need to ensure there are enough numbers and addresses for IoT in the future).


4. Standards and interoperability – these remain key to a workable, global Internet of Everything (IoE) particularly because of need for data availability, interoperability (at device and data level), and support for dynamic networks and business models.


5. Legal implications – again the key concern is data privacy. According to Philip James (Law Firm Partner at Sheridans), in describing the chatter between IoT devices: “hyper-connected collection and usage of data is a bit like passive smoking – not everyone is aware of it”.


In context of the above observations, it may be easy to ignore the elephant in the room, i.e. how to manage unintended consequences from something as intangible as the future promise of IoT? What will happen if and when the IoT becomes semi-autonomous and self reliant, or is that science fiction?


Well, I wouldn’t be so sure, because it all boils down to trust: trust between devices; trust in data integrity; and trust in underlying networks and connectivity. However, this is not something the Internet of today can provide easily, therefore some interesting ideas have started percolating around scalable trust and integrity. For example, Gurvinder Ahluwalia (IBM’s CTO for IoT and Cloud Computing) described a scenario using hitherto disruptive and notorious technologies (i.e. Blockchain and BitTorrent, of Bitcoin and Pirate Bay fame respectively), to create a self trusting environment for what he calls “democratic devices”.


The implications are astounding and much closer to the science fiction I mentioned previously. However, it is real enough when you consider that it requires a scalable, trustworthy, distributed system to verify, coordinate, and share access to the ‘Things’ on the IoT, and that key components and prototypes of such a system already exist today. This, in my opinion, is why sovereign governments are sitting up and taking notice, as should all private individuals around the world.


Free, your new IP strategy: is this the future of Intellectual Property and Innovation?

June 26, 2014

I came across a blog post about electric automaker Tesla’s recent move to open up its patents by making them free to use by anyone, including competitors. According to founder, Elon Musk, “Technology leadership is not defined by patents, but rather by the ability of a company to attract and motivate the world’s most talented engineers.”


I believe that while this move may have multiple strategic intents (i.e. Tesla could have other IP cards up their sleeve), it also highlights limitations in the current system of Intellectual Property, and it will require a fundamental shift in philosophy to fully appreciate where such trends could take us.

Obviously, I admire Tesla’s creativity and innovation, not least because their eco-friendly cars do not remind me quite so much of badly constipated turtles, but because their sheer guts and willingness to take risks (aka multiple leaps of faith) puts them ahead of the curve.

If technology leadership is no longer defined by a sizeable IP portfolio, then this presents some very real challenges to various foul strategies and current sharp practices for IP, such as: “weaponised IP”, Patent Trolls, industrial espionage, and so forth. According to author Don Peppers’ blog post on the topic, such “open source” and “free revealing” (aka free sharing) of otherwise competitive IP assets actually drive innovation “while patents, copyrights and other legal mechanisms seem to be holding us back.”  He goes on to say: “This is big, everyone. If you don’t know how big this is, you haven’t been paying attention.”

In my opinion, a mindset of “share first then ask questions after” is vastly superior to the usual scarcity based approach to wealth creation, (i.e. “mine, mine, all mine” is not real wealth, just an illusion). True wealth, which is firmly based in abundance, actually favours the sharing mindset by motivating and empowering bright creative people to continue to do and share what they do best. Such a system fosters innovation, and is ultimately self replenishing, because it forces organisations to ensure they maintain the right ingredients to continue being innovative.

In such a world, an organisation may be deemed a failure when it no longer has the ability to innovate, regardless of the size of its bank balance, market capitalisation or IP portfolio. Instead, successful organisations will be ones which can establish and demonstrate a self-perpetuating culture for creativity and innovation. Such bold claims do however raise some serious questions over IP, e.g.:

  • Should IP be granted with implicit right for others to use and reuse by default, (along with fair recompense or royalty to the owner)?  And if this were feasible, would everyone play by the rules?
  • Are we likely to see a situation whereby IP may be rescinded from organisations that do not actively use it to innovate? I can already imagine Patent Trolls, and their IP lawyers, screaming in anguish at the thought.
  • If free sharing of IP became common practice, will it ultimately diminish the value of IP, and its raison d’etre, (i.e. a means to provide direct economic reward for creators and owners of IP)?  Bear in mind that creators and owners of IP are not always one and the same.

These and other similar questions easily rise to the fore when you extrapolate the developing trend for free IP sharing and its implications for both individuals and organisations. The preceding points / questions are not solely relevant to organisations. Individuals, particularly those in the creative arts (e.g. authors, musicians and other artistes), are also affected, especially as they increasingly choose to explore alternative funding models to finance their works.

TV presenter and author, Kate Russell (of BBC Click fame) takes it a step further by advocating the creation of new IP models based on crowd funding in her recent BCS blog post. Her exact words were: “With the online world still in freefall about how to solve digital rights protection and make sure artists get paid fairly for creative works, I genuinely believe that crowd funding could form the groundwork of a new intellectual property model”. In my opinion, this is another example of the shifting mindset that will ultimately bring about the evolution of a more suitable IP system for the digital world of today and tomorrow.