Wednesday the 18th of April marked 100 days to the greatest show on earth, along with the promise of even more superlatives, as a direct consequence of the Olympic motto: “Faster, Higher, Stronger”. It certainly made an auspicious date for an event, held at the House of Lords, on the future of Supercomputers.
The event was The Second Lorraine King Memorial Lecture, sponsored by Kevin Cahill, FBCS.CITP (author of “Who owns Britain” and “Who owns the World”), and superbly hosted by the Lord Laird and Computer Weekly. The main topic of debate centred on whether Supercomputers were merely “prestige objects or crucial tools in science and industry”.
The lecture delivered by Supercomputer expert, Prof. Dr. Hans Werner Meuer, (see CV) was most illuminating, and I gathered, among other things, that the UK ranked 4th in the Top500 list of Supercomputer-using countries, and that France was the only European country with any capability to manufacture Supercomputers. Clearly more needs to be done by the likes of the UK or Germany to remain competitive in the Supercomputing stakes, which raised the question (as posed later by an attendee) of whether these machines were nothing more than objects of geopolitical prestige, superiority and / or bragging rights, (e.g. my Supercomputer is faster than yours, so Nyah-nyah, nyah-nyah nyah-nyah! – Or perhaps Na na, na, na, naa! – apologies to the Kaiser Chiefs).
In any case, several things stood out for me at this rather well attended event, including:
- The definition of a Supercomputer remains based on the most powerful or fastest computers at any given point in time, e.g. Apple’s iPad 2 is two-thirds as powerful as the Cray-2 Supercomputer from 1986. The typical measure of speed and power is based on sheer numerical processing power (i.e. not data crunching), using the Linpack benchmark.
- According to a paper by sponsor Kevin Cahill, the Supercomputer sector is the fastest growing niche in the world of technology, currently worth some $25 billion. Japan, China and the USA currently hold the lead in the highly ego-driven world of Supercomputing, but there is an acute shortage of the skills and applications required to make the most of these amazing machines.
- Typical applications of Supercomputing include: university research, medicine (e.g. the Human Genome Project), geophysics, global weather and climate research, and transport / logistics. It is used in various industries, e.g. Aerospace, Energy, Finance and Defence. More recent applications, and aspirations, include bio-realistic simulations (e.g. the Blue Brain Project), and a shift towards data crunching in order to model and tackle challenges in such areas as Social Networks and Big Data.
- The future of Supercomputers lies in moving past the Petaflop machines of today to Exaflop-capable machines by 2018. The next international conference on Supercomputers takes place June 17-21 in Hamburg, Germany, and it promises to cover, among other things: big data / alternative architectures for data crunching / Exascale computing / energy efficiency / technology limits / Cloud computing for HPC.
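To put the petaflop-to-exaflop leap mentioned above in perspective, here is a minimal sketch of the arithmetic; the workload figure is purely illustrative and not a number from the lecture:

```python
# Scale of the leap described above: peta- to exa-flops.
PETAFLOP = 1e15  # 10**15 floating-point operations per second
EXAFLOP = 1e18   # 10**18 floating-point operations per second

# An Exaflop machine is a thousand times faster than a Petaflop one.
speedup = EXAFLOP / PETAFLOP
print(f"Speedup: {speedup:.0f}x")

# Time each class of machine would need for 10**18 operations --
# roughly one second's work for an Exascale system.
workload = 1e18  # total floating-point operations (illustrative)
print(f"Exaflop machine:  {workload / EXAFLOP:.0f} s")
print(f"Petaflop machine: {workload / PETAFLOP:.0f} s (~17 minutes)")
```

In other words, a job that would occupy today's fastest machines for a quarter of an hour becomes a one-second task at Exascale.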
Overall, this was an excellent event, in a most impressive venue, and the attendees got a chance to weigh in with various opinions, questions and comments, to which the good Professor did his best to respond (including inviting everyone to Hamburg, in June, to come see for themselves!). Perhaps the most poignant takeaway of the evening, in my opinion, was Lord Laird’s challenge to the computing industry about a certain lack of visibility, and the need for us to become more vocal in expressing our wishes, concerns and desires to those in power, or at least to those with the responsibility to hold Government to account. As he eloquently put it (paraphrasing slightly), “If we don’t know who you are, or what it is you want, then that is entirely your own fault!”
Recent developments in the world of publishing clearly demonstrate yet again that the primary objective of the content industry is to make a tidy profit. Nothing wrong with that, if you ask me; however, it usually turns into a rather sticky mess when that pursuit is clouded by accusations of skulduggery, conspiracy and outright price fixing.
I refer to a recent lawsuit filed by the US Justice Department against Apple and five major book publishers, over allegations of conspiracy, collusion and price fixing. According to this article from the Wall Street Journal, it could change the course of the rapidly expanding eBook publishing industry. But how so, you ask?
Well, it is really down to opposing business models (i.e. the so-called agency versus wholesale approach to eBook pricing), where, on one hand, an agent such as Apple will allow publishers to set their own price, and take a cut (in this case 30%) from sales on its iBooks platform. On the other hand, a wholesale pricing model is one where the retailer (e.g. Amazon or Barnes and Noble) sets the price for eBooks and can effectively apply discounts as they wish (even if it means selling eBooks at a loss). Obviously, this latter scenario leaves publishers with less control over prices, and consequently profits, so the opportunity to switch to a more favourable option could not fail to be attractive.
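The money flows under the two models can be sketched with some simple arithmetic. All prices below are hypothetical examples of my own; only the 30% agency cut comes from the text above:

```python
def agency_model(list_price: float, agent_cut: float = 0.30) -> dict:
    """Agency model: the publisher sets the retail price and the
    agent (e.g. Apple) keeps a fixed cut of every sale."""
    return {
        "consumer_pays": list_price,
        "agent_keeps": round(list_price * agent_cut, 2),
        "publisher_gets": round(list_price * (1 - agent_cut), 2),
    }

def wholesale_model(wholesale_price: float, retail_price: float) -> dict:
    """Wholesale model: the publisher charges a wholesale price, but the
    retailer sets the shelf price -- and may even sell below cost."""
    return {
        "consumer_pays": retail_price,
        "publisher_gets": wholesale_price,
        "retailer_margin": round(retail_price - wholesale_price, 2),
    }

# Agency: a $12.99 eBook with a 30% agent cut.
print(agency_model(12.99))
# Wholesale: publisher charges $13.00; retailer discounts to $9.99 at a loss.
print(wholesale_model(13.00, 9.99))
```

Note that under the wholesale example the retailer's margin is negative: the retailer absorbs the loss to keep consumer prices low, which is precisely the control publishers were keen to claw back.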
However, the question remains about the value proposition for consumers, who are themselves increasingly embracing eBooks for their convenience, ease of use and, perhaps more to the point, the huge potential for significantly lower prices overall. One might argue that eBooks do not require paper, glue, physical stores / shelf space or any significant distribution / transport costs, therefore they really shouldn’t be priced anything close to their physical versions. Surely this quest to keep prices high can only favour publishers, and their bottom lines, can’t it?
So what are the key arguments / rationale for keeping eBook prices artificially high? Perhaps the main reason has to do with high operating costs incurred by large publishers, as well as the need to maintain a powerful marketing and promotional machinery. Furthermore, it may also be argued that lower cost eBooks are somehow cannibalising the margins to be had from physical books. Whatever the case, it seems publishers stand to lose out if they don’t do something (innovative?) to counter the effects of change.
Hmm, now where have we seen this before, (and how did that industry cope / survive)? Ah, yes, the music industry went through something similar, except they chose to sue those pirates and freeloaders (aka the people formerly known as customers), that supposedly ‘stole their bottom line’. However, they seem to have found other ways to complement dwindling revenue streams, e.g. via ticket sales for live performances. By the way, death may no longer prevent artistes from performing before a live audience, assuming this deceased artist hologram idea catches on.
Luckily the book publishing industry doesn’t have to take quite so drastic a measure, especially as it has been shown time and again that new media formats and channels do not necessarily mean the complete demise of existing ones. This is arguably the perfect time for publishers to embrace even bolder / more innovative thinking to discover complementary initiatives that will bolster an industry under threat, real or imagined. They must observe and capitalise on consumer trends and emergent user behaviours. For example, the sheer capacity, variety and anonymity (i.e. no telltale covers) of reading material to be found on your average eBook reader means that users now carry, consume and explore hitherto unthinkable (at least in public) subject matter. The current boom in the romantic erotica sub-genre, aka Mommy Porn, is an interesting case in point.
Perhaps even more fundamental is the need to seriously consider the verboten idea of evolving copyright into something much better aligned with the digital age. Unfortunately, that will be a tough sell to the publishing industry, if this report of a speech given by HarperCollins International’s Chief Exec, at the London Book Fair, is anything to go by. According to the article, “others in the book trade, including the Publishers Association” have criticised the recent Hargreaves Review of Copyright, which some feel could weaken the current copyright regime. As you may have gathered by now, I don’t subscribe to that point of view, but then I am only an author and may not see things in quite the same light as a successful publisher might.
In many ways, this whole situation could be seen as a remix of the circumstances surrounding the birth of copyright. In 1710, the printing industry lobbied for the creation of a law to govern the rights to print or reproduce works (now known as the Statute of Anne), in order to protect their interests and those of the authors / creators of said works. Copyright is essentially an artificial system, which routinely needs a degree of manual intervention whenever a new and disruptive content technology or consumer trend emerges. That, in my opinion, is the fundamental flaw with copyright which any revision thereof must try to address. In an age of multi-platform, multi-channel and multi-format publishing, there really is no place (or time) for manual intervention each time a new and disruptive trend, challenge or opportunity presents itself. I for one would be more than happy to attempt to demonstrate just how such a system could work (based on real copyright content), but then I would probably need a hefty six-figure advance from some far-sighted multi-publisher to make it happen. Who says there is no future for publishing?