- The copyright yin and technology yang – Copyright has always had to change and adapt to new and disruptive technologies (which typically impact the extant business models of the content industry), and each time it usually emerges even stronger and more flexible – the age of digital disruption is no exception. As my 5-year-old would say, “that glass is half full AND half empty”.
- UK Copyright Hub – “Simplify and facilitate” is a recurring mantra on the role of copyright in the digital economy. The UK Copyright Hub provides an exchange that is predicated on usage rights. It is a closely watched example of what is required for digital copyright and could easily become a template for the rest of the world.
- Copyright frictions still a challenge – “Lawyers love arguing with each other”, but they, and the excruciatingly slow process of policy making, have introduced particular friction to copyright’s digital evolution. The pace of digital change has increased while policy making has slowed down, perhaps because there are now more people at the party.
- Time for some new stuff – Copyright takes the blame for many things (e.g. even the normal complexity of cross-border commerce). Various initiatives, including SOPA & PIPA, the Digital Economy Act, Hadopi and New Zealand’s three-strikes regime, have stalled or been drastically cut back. It really is time for new stuff.
- Delaying the “time to street” – Fox described their anti-piracy efforts in relation to film release windows, in an effort to delay the “time to street” (aka pervasive piracy). These and other developments, such as fast-changing piracy business models, the balance between privacy and piracy, and enabling technologies (e.g. Popcorn Time, anonymising proxies, cyberlockers etc.), have added more fuel to the fire.
- Rights Languages & Machine-to-Machine communication – Efforts here are somewhat reminiscent of the use of big data and analytics mechanisms to derive insight from structured and unstructured data sources. Think Hadoop-based rights translation and execution engines.
- The future of private copying – The UK’s copyright exceptions now allow individuals to make private copies of content they own. Although this may seem obvious, it has provoked fresh comments from content industry types and other observers, e.g.: when will technology replace the need for people to make private copies? And what about the issues around keeping private copies in the cloud or in cyberlockers?
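The machine-to-machine rights idea above can be made concrete with a minimal sketch. Everything here – the record layout, field names and licence vocabulary – is invented for illustration; a real exchange would use a standard rights expression language rather than this ad-hoc structure:

```python
# Illustrative sketch of machine-to-machine rights checking.
# The schema and vocabulary below are hypothetical, not any standard's.

RIGHTS_RECORD = {
    "work_id": "example-work-001",
    "licences": [
        {"use": "stream", "territory": ["UK", "FR"], "fee_gbp": 0.10},
        {"use": "download", "territory": ["UK"], "fee_gbp": 0.99},
    ],
}

def permitted(record, use, territory):
    """Return the matching licence terms, or None if the use is not licensed."""
    for licence in record["licences"]:
        if licence["use"] == use and territory in licence["territory"]:
            return licence
    return None

print(permitted(RIGHTS_RECORD, "stream", "FR"))    # matching licence terms
print(permitted(RIGHTS_RECORD, "download", "FR"))  # None - not licensed there
```

The point of the sketch is that once rights are expressed as data rather than prose, a licensing decision becomes a simple automated lookup – which is exactly what makes high-volume, low-value transactions feasible.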
The last Open Group Conference in London provided an opportunity to hear about the latest developments in Health, Finance and eGovernment. It also featured major milestones for the Open Group, e.g. the successful conclusion of the Jericho Forum (on de-perimeterised security), and the rise of Platform 3.0 (aka Digital). Read on for some highlights and headlines from the event.
eGovernment – According to one keynote speaker, the transition towards eGovernment is reflected in growing demand for the IT industry to help implement or enable such major initiatives as: open data, global tax information exchange, and an enterprise architecture (plus supporting data structures) to cover all human endeavour. The Global Risks 2013 report illustrates pressing issues to be addressed by world leaders, particularly in the G8 and G20 countries which together represent 50% – 95% of the global economy. Some IT-enabled scenarios, such as massive disinformation and the dangers of starting “Digital wildfires in a hyperconnected world”, illustrate the hurdles that need to be overcome with vital input from the IT industry. According to one attendee, “…government is just the back office for the global citizen”. Overall, these initiatives are aimed at connecting governments, enabling better information exchange, and providing much-needed support for an emerging global citizen.
Platform 3.0 – The conference provided updates on Platform 3.0, (aka the Open Group’s approach to Digital). Andy Mulholland (Ex Global CTO at Capgemini) set the scene in his keynote speech, by discussing the real drivers for change and their implications, plus the emerging role of business architecture and innovation, as well as the Platform 3.0 approach to Digital. Subsequent sessions provided a summary of activities outlining key Principles (and requirements) for Platform 3.0, including: the role of the IT organisation in managing digital (i.e. brokering anywhere / anytime transactions), Inside Out vs. Outside In approach to interaction, and the challenge for Enterprise Architects to acquire key skills in organisational change & behaviours, in order to remain relevant.
eHealth – Several sessions were dedicated to the trends and impact of technology on healthcare. Topics discussed included Big Data in healthcare and the growth of smartphone and smart-device capabilities for healthcare. Also discussed were:
- Shrinking R&D budgets leading to collaborative efforts (e.g. Pistoiaalliance.org),
- The explosion of health-monitoring services and offerings, e.g. self-help health websites, bio-telemetry wristbands etc.
- Personalized Ambient Monitoring (PAM) of mentally ill patients, using multiple devices and algorithms. Apparently 1 in 4 people in the UK will experience some kind of mental illness within the year.
- Unobtrusive Smart Environment for Independent Living (USEFIL) aimed at senior citizens
- Trends in life logging (e.g. quantified self and life slices), heading towards embedded or implanted devices (e.g. digestible RFID chips)
- IPv6 and the ubiquity of information points – ID management for tomorrow will include a surfeit of personal data.
However, key challenges discussed include privacy issues around the collection and storage of, and access to, personal health information. Also, who will monitor all the data gathered from the sensors, monitoring and activation of an Internet of Things for healthcare?
Innovation – These sessions focused on various aspects of future technology trends and innovation. They featured speakers from KPN, IBM, Inspired and Capgemini (i.e. yours truly), discussing:
- Smart technologies (e.g. SMART Grid) and interoperability constraints, plus the convergence of business and technology and fuzzy boundaries of “outside in” versus “inside out” thinking
- New technology architecture opportunities to leverage world-changing developments such as: semantics, nanotechnology, 3D printing, robotics and the Internet of Things, overlaid with exponential technologies (e.g. storage / processing power / bandwidth) and the network effect
- Effects of Mobile and Social vs. traditional MDM, plus emerging trends for incorporating new dynamic data (sentiment analysis / IoT sensors plus deep / dark data).
- Use of big data to enable the Social enterprise, via smarter workforce, innovation and gamification.
- Case study of Capgemini internal architecture and innovation work stream – illustrating key organisational trends and cross sector innovation, plus challenges for internal innovation, and the emerging role of business model innovation and architecture
As you can probably surmise from the above, this multi-day conference was jam-packed with information, networking and learning opportunities. Also the Open Group’s tradition of holding events in the great cities of the world, (e.g. this one took place just across the road from the UK Houses of Parliament), effectively brings the latest industry thinking / developments to your doorstep, and is highly commendable. Long may it continue!
Last month’s conference on copyright and technology provided plenty of food for thought from an array of speakers, organisations, viewpoints and agendas. Topics and discussions ran the gamut from the increasingly obvious (“business models are more important than technology”) to the downright bleeding edge (“hypersonic activation of devices from outdoor displays”). There was something for everyone involved to take away. Read on for highlights.
The Mega Keynote interview: Mega’s CEO Vikram Kumar, discussed how the new and law-abiding cloud storage service is proving attractive to professionals who want to use and pay for the space, security and privacy that Mega provides. This is a far cry from the notorious MegaUpload, and founder Kim Dotcom’s continuing troubles with charges of copyright infringement, but there are still questions about the nature of the service – e.g. the end-to-end encryption approach which effectively makes it opaque to outside scrutiny. Read more about it here.
Anti-Piracy and the age of big data – MarkMonitor’s Thomas Sehested talked about the rise of data / content monitoring and anti-piracy services in what he describes as the data-driven media company. He also discussed the demise of content release windows, and how mass / immediate release of content across multiple channels lowers piracy, but questioned whether it is more profitable.
Hadopi and graduated response – Hadopi’s Pauline Blassel gave an honest overview of the impact of Hadopi, including evidence of some reduction in piracy (from 6M to 4M) before stabilisation. She also described how this independent public authority delivers graduated response in a variety of ways, from raising awareness to imposing penalties, focusing primarily on what is known as PUR (aka ‘Promotion des Usages Responsables’ – the promotion of responsible usage)
Automatic Content Recognition (ACR) and the 2nd Screen – ACR is a core set of tools (including DRM, watermarking and fingerprinting), and the 2nd-screen opportunity (at least for broadcasters) is all about keeping TV viewership and relevance in the face of tough competition for people’s time and attention. This panel session discussed monetisation of second-screen applications, and the challenges posed by TV being regulated, pervasive and country specific. Legal broadcast rights are aimed at protecting broadcast signals, which trigger the 2nd-screen application (e.g. via ambient / STB / EPG based recognition). This begs the question of what regulation should apply to the 2nd screen, and what rights apply. E.g. ads shown on TV can be replaced on the 2nd screen, but what are the implications?
Update on the Copyright Hub – The Keynote address by Sir Richard Hooper, chair of the Copyright Hub and co-author of the 2012 report on Copyright Works: Streamlining Copyright Licensing for the Digital Age, was arguably the high point of the event. He made the point that although there are issues with copyright in the digital age, the creative industries need to get off their collective backsides and streamline the licensing process before asking for a change in copyright law. He gave examples of issues with the overly complex educational licensing process and how the analogue processes are inadequate for the digital age (e.g. unique identifiers for copyright works).
The primary focus of the Copyright Hub, according to Sir Richard, is to enable high-volume, low-value transactions (e.g. to search, license and use copyright works legally) by individuals and SMEs. The top-tier content players already have dedicated resources for such activities, hence they’re not a primary target of the Copyright Hub, but they’ll also benefit from the removal of the need to deal with trivial requests to license individual items (e.g. to use popular songs for wedding videos on YouTube).
Next-phase work, and other challenges, for the Copyright Hub include: enabling consumer reuse of content, architectures for federated search, machine-to-machine transactions, an orphan works registry & mass digitisation (collective licensing), multi-licensing for multimedia content, as well as the need for global licensing. Some key messages and quotes from the ensuing Q&A include:
- “the Internet is inherently borderless and we must think global licensing, but need to walk before we can run”
- “user-centricity is key. People are happy not to infringe if easy / cheap to be legal”
- “data accuracy is vital, so Copyright Hub is looking at efforts from Linked Content Coalition and Global Repertoire Database”
- “Metadata is intrinsic to machine-to-machine transactions – do you know it is a crime to strip metadata from content?”
- “Moral rights may add to overall complexity”
As you can probably see from the above, this one-day event delivered the goods, providing valuable insights to an audience which included people from the creative / content industries, as well as technologists, legal practitioners, academics and government agencies. Kudos to MusicAlly, the event organiser, and to Bill Rosenblatt, the conference chair, for a job well done.
This week’s quarterly Open Group conference in Washington DC, featured several thought provoking sessions around key issues / developments of interest and concern to the IT world, including: Security, Cloud, Supply Chain, Enterprise Transformation (including Innovation), and of course Enterprise Architecture (including TOGAF and Archimate).
Below are some key highlights, captured from the sessions I attended (or presented), as follows:
Day 1 – Plenary session focused on Cyber Security, followed by three tracks on Supply Chain, TOGAF and SOA. Key messages included:
- The keynote by Joel Brenner described the Internet as a “porous and insecure network” which has become critical for so many key functions (e.g. financial, communications and operations) yet remains vulnerable to abuse by friends, enemies and competitors. The best quote of the conference was: “The weakest link is not the silicon based unit on the desk, but the carbon based unit in the chair” (also tweeted and mentioned in @jfbaeur’s blog here)
- NIST’s Dr. Ron Ross spoke about a perfect storm of consumerisation (BYOD), ubiquitous connectivity and sophisticated malware, leading to an “advanced persistent threat” enabled by readily available expertise / resources, multiple attack vectors and footholds in infrastructure
- MIT’s Professor Yossi Sheffi expounded on the concept of building security and resilience for competitive advantage. This, he suggested, can be done by embracing “flexibility DNA”, (as exhibited in a few successful organisations), into the culture of your organisation. Key flexibility traits include:
- Your resilience and security framework must drive, or at least feed into, “business-as-usual”
- Continuous communication is necessary among all members of the organisation
- Distribute the power to make decisions (especially to those closer to the operations)
- Create a passion for your work and the mission
- Deference to expertise, especially in times of crisis
- Maintain conditioning for disruptions – the capacity for stability is good, but the flexibility to handle change is even better
- Capgemini’s Mats Gejneval discussed agility and enterprise architecture using Agile methods and TOGAF. He highlighted the relationship flow: agile process -> agile architecture -> agile project delivery -> agile enterprise, and how the final outcome requires each of the preceding qualities (e.g. agile methods and faster results on their own will not deliver an agile solution or enterprise). My favourite quote, from the Q&A, was: “…remember that architects hunt in packs!”
Day 2 – Plenary session focused on Enterprise Transformation followed by four streams on Security Architecture, TOGAF Case Studies, Archimate Tutorials, and EA & Enterprise Transformation (including our session on Innovation & EA). Key Highlights include:
- A case study on the role of open standards in enterprise transformation featured Jason Uppal (Chief Architect at QRS), describing the transformation of Toronto’s University Health Network into a dynamic and responsive organisation, by placing medical expertise and requirements above flexible, open-standards-based IT delivery.
- A view on how to modernise service to citizens via a unified (or “single window government”) approach was provided by Robert Weisman (CEO of Build a Vision Inc). He described the process to simplify key events (from 1400 down to 12 major life events) around which the services could be defined and built.
- Samira Askarova (CEO of WE Solutions Group) talked about managing enterprise transformation through transitional architectures. She likened business transformation to a chameleon with: its huge, multi-directional eyes (i.e. for long term views), the camouflage ability (i.e. changing colours to adapt), and the deliberate gait (i.e. making changes one step at a time)
- The tutorial session on Innovation and EA, by Corey Glickman (Capgemini’s lead for Innovation-as-a-Managed-Service) and yours truly, discussed the urgent need for EA to play a vital role in bridging the gap between rapid business model innovation and rapid project delivery (via Agile). It also provided several examples, as well as a practical demonstration of the Capgemini innovation service platform, which was well received by the audience. Key takeaways include:
- Innovation describes an accomplishment, after the fact
- EA can bridge the gap between strategy (in the business model) and rapid project delivery (via Agile)
- Enterprise Architecture must actively embrace innovation
- Engage with your partners, suppliers, customers and employees – innovation is not all about technology
- Creating a culture of innovation is key to success
- Remember, if you are not making mistakes, you are not innovating
Day 3 – Featured three streams on Security Automation, Cloud Computing for Business, and Architecture methods and Techniques. Highlights from the Cloud stream (which I attended) include:
- Capgemini’s Mark Skilton (Co-chair of the Open Group’s Cloud Working Group) talked about the right metrics for measuring cloud computing’s ability to deliver business architecture and strategy. He discussed the complexity of Cloud and implications for Intellectual Property, as well as the emergence of ecosystem thinking (e.g. ‘ecosystem architecture’ and ‘ecosystem metrics’) for cloud computing and applications
- A debate about the impact of cloud computing on modern IT organisational structure raised the point that a dysfunctional relationship exists between business and IT with respect to cloud services. The conclusion (and recommendation) was that healthy companies tend to avoid buying cloud services in business silos, instead pursuing a single cloud strategy in collaboration with IT, which remains responsible for maintenance, security and integration into the enterprise landscape
- Prakash Rao, of the FEAC Institute, discussed Enterprise Architecture patterns for Cloud Computing. He reiterated the point made earlier about how enterprise architecture can be used to align enterprise patterns (i.e. business models) to development processes. Also that enterprise patterns enable comparison and benchmarking of cloud services in order to determine competitive advantage
The bullet items and observations recorded above do not do justice to the breadth and depth of the entire conference, which included networking with attendees from over 30 countries, across all key industries / sectors, plus multiple simultaneous streams, sessions and activities, many of which I could not possibly attend. Overall, this was an excellent event that did not disappoint. Further materials can be found on the Open Group website, including:
- Event website: http://www.opengroup.org/dc2012
- Live Streams – http://new.livestream.com/opengroup
- Archimate 2.0 Specification (free download) – http://www.opengroup.org/archimate/
- Photo Contest – https://www.facebook.com/theopengroup
I would recommend the Open Group conference to any professional in IT and beyond.
Wednesday the 18th of April marked 100 days to the greatest show on earth, along with the promise of even more superlatives, as a direct consequence of the Olympic motto: “Faster, Higher, Stronger”. It certainly made an auspicious date for an event, held at the House of Lords, on the future of Supercomputers.
The event was The Second Lorraine King Memorial Lecture, sponsored by Kevin Cahill, FBCS.CITP (author of “Who owns Britain” and “Who owns the World”), and superbly hosted by the Lord Laird and Computer Weekly. The main topic of debate centred on whether Supercomputers were merely “prestige objects or crucial tools in science and industry”.
The lecture, delivered by Supercomputer expert Prof. Dr. Hans Werner Meuer (see CV), was most illuminating, and I gathered, among other things, that the UK ranked 4th in the Top500 list of Supercomputer-using countries, and that France was the only European country with any capability to manufacture Supercomputers. Clearly more needs to be done by the likes of the UK or Germany to remain competitive in the Supercomputing stakes, which begged the question (as posed later by an attendee) of whether these machines were nothing more than objects of geopolitical prestige, superiority and / or bragging rights (e.g. my Supercomputer is faster than yours, so Nyah-nyah, nyah-nyah nyah-nyah! – Or perhaps Na na, na, na, naa! – apologies to the Kaiser Chiefs).
In any case, several things stood out for me at this rather well attended event, including:
- The definition of a Supercomputer remains based on the most powerful or fastest computers at any given point in time; e.g. Apple’s iPad 2 is two-thirds as powerful as the Cray-2 Supercomputer from 1986. The typical measure of speed and power is based on sheer numerical processing power (i.e. not data crunching), using the Linpack test
- According to a paper by the sponsor, Kevin Cahill, the Supercomputer sector is the fastest growing niche in the world of technology, currently worth some $25 billion. Japan, China and the USA currently hold the lead in the highly ego-driven world of Supercomputing, but there is an acute shortage of the skills and applications required to make the most of these amazing machines
- Typical applications of Supercomputing include: university research, medicine (e.g. Human Genome Project), geophysics, global weather and climate research, transport or logistics. It is used in various industries e.g.: Aerospace, Energy, Finance and Defence etc. More recent applications, and aspirations, include: bio-realistic simulations (e.g. the Blue Brain Project), and a shift towards data crunching in order to model and tackle challenges in such areas as Social Networks and Big Data.
- The future of Supercomputers lies in moving past the Petaflop machines of today to Exaflop-capable machines by 2018. The next international conference on Supercomputers takes place June 17-21 in Hamburg, Germany, and promises to include topics on: big data / alternative architectures for data crunching / Exascale computing / energy efficiency / technology limits / Cloud computing for HPC, among other things.
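The Linpack measure mentioned above, reduced to its essence, times the solution of a dense system of linear equations and converts the nominal operation count (about 2n³/3 for Gaussian elimination) into floating-point operations per second. Here is a toy, pure-Python sketch of that idea; the real benchmark uses highly tuned linear algebra libraries and vastly larger problems, so the number this prints is in no way comparable to Top500 figures:

```python
import random
import time

def solve_dense(n):
    """Naive Gaussian elimination with partial pivoting on a random n x n system."""
    # Augmented matrix: n coefficient columns plus one right-hand-side column.
    a = [[random.random() for _ in range(n + 1)] for _ in range(n)]
    for col in range(n):
        # Partial pivoting: swap in the row with the largest pivot candidate.
        pivot = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[pivot] = a[pivot], a[col]
        for row in range(col + 1, n):
            factor = a[row][col] / a[col][col]
            for k in range(col, n + 1):
                a[row][k] -= factor * a[col][k]
    # Back-substitution.
    x = [0.0] * n
    for row in range(n - 1, -1, -1):
        s = a[row][n] - sum(a[row][k] * x[k] for k in range(row + 1, n))
        x[row] = s / a[row][row]
    return x

n = 150
start = time.perf_counter()
solve_dense(n)
elapsed = time.perf_counter() - start
flops = (2 / 3) * n ** 3  # nominal operation count for the elimination phase
print(f"~{flops / elapsed / 1e6:.1f} MFLOP/s (toy figure)")
```

The same structure scaled up (and backed by optimised BLAS rather than interpreted loops) is what ranks the Top500 machines, which also explains the speaker's point: the metric rewards raw numerical throughput, not data crunching.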
Overall, this was an excellent event, in a most impressive venue, and the attendees got a chance to weigh in with various opinions, questions and comments, to which the good Professor did his best to respond (including inviting everyone to Hamburg, in June, to come see for themselves!). Perhaps the most poignant takeaway of the evening, in my opinion, was the challenge by Lord Laird to the computing industry about a certain lack of visibility, and the need for us to become more vocal in expressing our wishes, concerns and desires to those in power, or at least to those with the responsibility to hold Government to account. As he eloquently put it (paraphrasing slightly), “If we don’t know who you are, or what it is you want, then that is entirely your own fault!”
It’s not often one gets an opportunity to attend three compelling events in one evening, but as luck would have it, the stars were aligned and I managed to do just that in a mad scramble from one venue to the next. Such are the benefits of living and working in a great city like London, but less so were the thorny issues under debate at each of the three events.
It took a minute to digest and process the various messages from these events, but as promised / tweeted, below are three key points, take-aways or opinions:
1. Publishers must embrace multi-platform models as business-as-usual (Publishing Expo 2011)
It was standing room only at the Multi-Publishing & Digital Strategies Theatre in a packed final session on “the future of multi-platform publishing”. According to one of the speakers, “the bleeding edge of multi-publishing model is one third print, one third digital, and one third live events.”
My Comment – Never mind multi-platform, it sounds more like a multi-model approach will be necessary for the entire creative industry, in my opinion.
2. But how do you value Intellectual Property? (IP For Innovation And Growth)
This has to be one of the thorniest questions for IP, because consistent and intelligent valuation of IP is at best confusing, and at worst non-existent. IP is really just an economic mechanism, so a fundamental attribute should be the ability to establish an agreed value for the property in question, but this presents a severe problem because current valuations are highly subjective and always dependent on the buyer’s or seller’s point of view. Throw in the ability to effortlessly copy and distribute works via digital technology, and you get the somewhat muddy picture.
My Comment – There is a clear opportunity here to create a dynamic and transparent IP valuation model or approach, which can produce the right valuation for IP, based on the buyer / seller relationship and context
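To make the subjectivity concrete, consider a toy discounted-cash-flow sketch: the very same IP asset, with the very same projected royalty stream, yields wildly different “values” depending on the discount rate each party applies to reflect its perceived risk. All figures and rates below are invented purely for illustration:

```python
def dcf_value(annual_royalties, years, discount_rate):
    """Present value of a flat annual royalty stream, discounted annually."""
    return sum(annual_royalties / (1 + discount_rate) ** t
               for t in range(1, years + 1))

royalties, years = 100_000, 10            # hypothetical royalty projection
seller_view = dcf_value(royalties, years, 0.05)  # seller assumes low risk
buyer_view = dcf_value(royalties, years, 0.20)   # buyer assumes high risk

print(f"Seller's valuation: £{seller_view:,.0f}")
print(f"Buyer's valuation:  £{buyer_view:,.0f}")
```

Even in this simplistic model the two parties end up hundreds of thousands of pounds apart on identical cash flows, which is precisely the gap a transparent, context-aware valuation approach would need to close.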
3. And does a cash economy make IP any less relevant? (Private Equity Africa)
Apparently, it’s all about cash in Africa, which leads me to wonder if and how global IP will work in a cash economy. This event did not immediately appear to have much in common with the others on IP or the creative industry, and one of the speakers even said afterwards that he considered Intellectual Property in Africa to be, and I quote, “nothing more than intellectual masturbation”. However, when you think of the thriving industry and market for music and filmed entertainment (e.g. Nigeria’s Nollywood), it is easy to see how IP can provide an important boost to developing economies. Therefore, even if there is little point in enforcing IP rights locally, all developing economies should be interested and involved in any discussion of global IP rights and digital distribution / piracy.
My Comment – when it comes to content and IP, it is a level playing field as all jurisdictions and stakeholders struggle with the impact of digital technology
Overall, one clear trend emerging from the above is that such tough questions and issues will need even tougher answers and resolutions. For example, they may well point to the same underlying problem – i.e. a flawed and inflexible concept of economic value – but perhaps that is rightly the subject of another blog and blogger.