Category Archives: Digital Innovation (2013)

For the Lent term, 2013, as part of the HighWire MRes I have to keep a journal for ‘Digital Innovation’ – this is it.

Cyber faith?

My previous blog post discussed ‘digital death’. While thinking on that subject my mind began to wander, and envisioned a way of having cyber faith. Virtual religion, if you will. And I think it’s relevant to this series of blogs. I’ve actually dabbled with something in this area before, with my Prayer 2.0 project, but what I’m thinking about now is, I reckon, rather more useful and interesting.

I’d like to set up a kind of framework where anyone can create a DIY faith. Festivals, rules, belief systems…. all of these would be constructed through some kind of computer interface. Potentially the framework could be linked into systems operating in the realm of ubicomp and the internet of things. You could pick and choose elements of major religions, and weave in your own customs.

I think that the majority of the (global) population actually lives somewhere between atheism and agnosticism (I’m not backing this up with any research… unfortunately I don’t have time!). I also believe that human beings have a tendency to want ‘something more’ in their life – some kind of spiritual element. Maybe cyberfaith is one way that this can happen. *needs further exploration*

Don’t forget to feed the cats

When I was a child a man called Charlie lived in the tiny cottage next door to my family home. He died when I was still quite young, but I remember him and his house vividly. The cottage really was tiny, and inside I had the feeling of being in some fantastical place. It felt very otherworldly. Charlie shared the cottage with three cats, one of which we ended up taking care of after Charlie died. Charlie’s greatest fear was that he’d die alone at home, and that his cats would have him for dinner! It’s as a memorial to Charlie that I titled this blog ‘Don’t forget to feed the cats’. By the way, Charlie died in hospital in the end.

An image of Charlie’s Cottage as it is today (screen grab from Google Street View)

At the start of the special topics module I made a list of fleeting ideas that I’d had and was considering pursuing as my topic. One of these I described as ‘after death, digital legacies’ – and it’s something I’m still very curious about. The question is: how do you deal with digital assets when somebody dies? There is of course lots of debate around what denotes an ‘asset’, and a multitude of ways of applying legal frameworks (which vary greatly depending on where you are, and what the nature of the ‘asset’ is).

Governments debate legislation; most recently there is an ongoing debate about the ‘right to be forgotten’. An interesting story emerged this week: 17-year-old Paris Brown (employed by the police) turned out to have tweeted inappropriate material earlier in her teens. The BBC ask – quite legitimately if you ask me – could the right to be forgotten help? But, back to death… currently most online services only refer to what happens in the event of a user’s death in their terms of service and privacy documents. In Facebook’s terms and conditions, you grant them a licence to use your content indefinitely, but these days they do cooperate when users die by memorialising pages (which is a handy thing: 2.89m Facebook users were predicted to die in 2012). In 2004/5 the family of a dead marine took Yahoo to court (and won) in order to gain access to his email account (this was inconsistent with the Yahoo terms of service). These are just a few examples; things get more complicated if you consider blogs you might run, web email accounts, online currencies, access to banking, maybe even what happens to your Bitcoin estate after you die! Despite all the confusion, at least death itself is relatively clear cut.

Demonstrating that these kinds of issues are becoming more important to both users and service providers, Google yesterday announced their Inactive Account Manager – quickly referred to as the Google Death Manager – which is designed to allow you to decide what to do with the data that Google holds, in the event that your account becomes inactive. It’s pretty neat, allowing you to choose individual services, a list of people to inform (each of whom is confirmed via text message), and even the ability to set up an email auto-responder.

As more and more of the digital services we all use move to the cloud, and those services become interconnected, the issue becomes increasingly complex. My fellow HighWire-er Barney Craggs has recently written some interesting blogs about the grey area between big data and personal data, which I recommend reading. A particularly neat idea Barney talks about is smart data. When referring to smart data, Barney describes it as:

the gathering of only that data which is truly needed to fulfil the purpose, data which is held only for as long as is needed..

Managing your data, and maybe even ‘deleting’ yourself from the internet, is quite a difficult thing to achieve, and this seems particularly pertinent when considering what happens to data after its creator, or owner, or custodian ceases to be alive. I think Barney’s ideas are relevant not only to data per se, but also to services. We can assume most online services do involve an element of data, but they also have an inherent meaning, related to whatever the service is. This makes considering how you want a service to deal with your death a more involved affair than simply considering the data that lies behind the service alone.

Although this landscape is not a steady one, with legislation almost certain to go through several evolutions in the coming years, and service providers constantly tweaking their terms of service and approaches, I believe we have the technology available to begin to implement a kind of digital rights protocol that could provide a framework for services to comply with their users’ posthumous desires. Digital lockers or safes do exist, and can go some way to dealing with this, but in their current state they are not a holistic solution.

While thinking about this I arrived at a vision of an internet ‘death authority’ (DA) that any person, and any service, can register themselves with. The DA would have two core functions: firstly to have a ‘notification’ mechanism for recording whether somebody is alive or not, and secondly to be a repository for holding instructions relating to what participating services should do in the event of a death. For the notification mechanism an API would have to be accessible so that the relevant authorities could plug into it. In the UK this could be an electronic link from the General Register Office. When somebody dies, the GRO connects to the DA’s API and the records are updated to show that the subject has passed away, allowing the second part of the system to swing into action.

The second part of the system is the harder to imagine, but I’m sure a workable standard could be developed, with existing technologies, that would be fit for purpose. The way I imagine it working, you’d need to be able to update settings on a service to reflect what you want to happen to your data, on that service, when you die. The settings would have to be updated on the service itself, but a record of your elected settings would also have to be viewable at the DA – so you can see all of your choices in a single place. So the API to facilitate this system would just need to have a way of describing what settings were chosen on any given site, and then communicate those choices back to the DA.
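To make this a little more concrete, here’s a minimal sketch of how I imagine the two halves fitting together – a notification call that a registrar like the GRO could make, and a store of per-service wishes that gets dispatched once a death is recorded. It’s written in Python purely for illustration; all of the names (DeathAuthority, record_death, the ‘examplebook’ service and so on) are hypothetical, not any real API.

```python
# A toy sketch of the imagined 'death authority' (DA). All names are
# hypothetical; a real system would need authentication, auditing, consent
# checks and much more.

class DeathAuthority:
    def __init__(self):
        self.people = {}    # person_id -> {"alive": bool, "wishes": {service_id: instruction}}
        self.services = {}  # service_id -> callback invoked when a registered person dies

    def register_person(self, person_id):
        self.people[person_id] = {"alive": True, "wishes": {}}

    def register_service(self, service_id, notify_callback):
        # notify_callback(person_id, instruction) is how the DA tells a service what to do
        self.services[service_id] = notify_callback

    def record_wish(self, person_id, service_id, instruction):
        # e.g. instruction = "delete", "memorialise", "transfer:executor@example.com"
        self.people[person_id]["wishes"][service_id] = instruction

    def record_death(self, person_id):
        # The call a registrar (e.g. the GRO) would make via the notification API.
        person = self.people[person_id]
        person["alive"] = False
        for service_id, instruction in person["wishes"].items():
            self.services[service_id](person_id, instruction)


# Example: one person, one participating service.
da = DeathAuthority()
da.register_person("alice")
da.register_service("examplebook",
                    lambda pid, wish: print(f"examplebook: {wish} account of {pid}"))
da.record_wish("alice", "examplebook", "memorialise")
da.record_death("alice")   # -> examplebook: memorialise account of alice
```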

When the inevitable (for all of us!) happens, each online service related to the deceased would be informed by the DA of the sad news. Each service would then be able to update their data as per your wishes, whether that be deleting everything, memorialising your Facebook page, or posting a beyond-the-grave Tweet to say so long and thanks for all the fish. Feasibly the DA would be accessible by your probate lawyer, who will, to some extent, be aware of what online services you used, and what your wishes were with regard to each one (this would also serve as a tool to monitor whether your wishes were actually being carried out).

Obviously this would only work in regions where deaths are routinely registered, and for people who sign up to the DA, and for services that subscribe. This may seem an unlikely proposition, but as our lives become ever more entwined with online services I think it will become increasingly relevant for all of our tomorrows, and is something that could be well addressed today. I hope it is addressed, because I for one want to be able to plan for my digital demise with confidence, transparently, and without having to spend my life reading terms-of-service small print.

PS Selina Ellis Gray, also a HighWire PhD candidate, is doing some really interesting work in this area, specifically looking at bereavement and how it manifests itself digitally, check it out.

Not missing a mouse and why it makes me angry

So I got burgled recently, which sucked. I was actually just settling down to watch some terrible sci-fi featuring Arnold Schwarzenegger with Daniel Kershaw, when I was informed that my house had been broken into. Now, cutting a long story short, this was a horrible experience, both scary and incredibly inconvenient. Most of the data that was on the stolen computers was backed up in the cloud and the like, but it’s still quite awful to have your space invaded, not to mention the material loss. Thankfully insurance covered replacement items, although overall it ended up costing me about £500 to cover excesses, etc. The key thing that needed replacing for me was a laptop computer.

For the first time in my life I opted to get an Apple MacBook. I’ve actually been quite actively talking down Macs for a long time, let alone contemplating owning one. I don’t like the company’s approach (to be honest it’s hard to like the approach of any of the big tech/computing companies) of restricting upgrades and designing in obsolescence. Plus the price of the hardware has always been a big turn-off. Also, I’ve always been a Windows user: I know the Microsoft OS very well, and feel comfortable with it and the software that runs on it. What made me decide to get one now isn’t that interesting… it’s largely because the vast majority of my peers at HighWire (and the academic staff too, actually) use Macs. It’s not (just) because I wanted to be part of the club either; there are also some rather more noble/practical reasons why this is useful: mainly that it’s easy to compare notes on the different software that may be required in the line of duty (think reference managers etc). Other factors are relevant too: the fact that MacBooks generally have a decent battery life (compared to most other laptops) is a consideration when working in the way – so far as I can tell anyway – most HighWire PhD students do (unpredictably, all over the place, flexibly – as I write this I’m sat in Fuel Cafe). Finally, the high-resolution (Retina) screen really does improve my workflow: text documents are so much easier to read, and I have a wealth of screen real estate to play with (it happily handles two apps side by side).

Working on a laptop in Fuel

The thing that spurred me to blog about this, however, is the trackpad on Apple MacBooks. It is, to be quite frank, vastly better than any other trackpad I’ve ever used before. In the past I’ve always had to use a mouse for anything but the simplest of tasks, because otherwise my workflow would be disrupted so much by the terribly bad trackpads that pervade non-Apple laptops. So, still only a few weeks into owning the MacBook, I’m quite shocked to find that not only do I not require a mouse any more – I actually prefer working without one! I don’t think I’ve ever been quite so in love with a bit of technology as I am with this computer, and the biggest single reason for this is the trackpad. I’m not going to go into laborious detail as to why I think this is the case, but as far as I can tell, the salient points are:

  1. Quality hardware
  2. Not too big, not too small
  3. Sensitive and accurate
  4. Excellent integration with the OS (via gestures)
  5. (whatever it is they’ve done to make it) Intuitive to use

I’m sure there are many reasons why it’s such a good piece of kit, and so central to the Macbook experience… but that’s not the point. My point here is: why is the Macbook trackpad (apparently) unique?

My hypothesis is that there are plenty of patents around that protect the sort of hardware (and probably software) features that make the Apple trackpad so good, thus ensuring that others are unable to use similarly well-designed systems. I wonder whether this is because others can’t afford the licence fees that Apple would want to charge? Or maybe, because trackpads aren’t ‘essential’ (as per standards-essential patents), Apple can hold everyone else to ransom, refusing them the option of using the patents? Basically the root of my enquiry is to ask: why is it that nobody else can get trackpads right? I’m all for protecting intellectual property, but I’m also all for the idea of encouraging innovation – something that I think the patent system was originally intended to do. For whatever reason it isn’t working in this instance, as there seems precious little that even comes close to the efficiency of working with an Apple trackpad. I’m no expert and I may be wrong, so I’ll phrase it as a question: is the patent system just protecting Apple’s (apparently unique) ability to sell hardware at impressive profit margins? (and meanwhile preventing anyone else from making a decent trackpad!)

So… I’m really happy to have this MacBook: in a roundabout way the burglars did me a favour, as I probably would never have willingly ‘converted’ to Apple without having the opportunity to get a brand new machine. The OS is really better than I knew, and it only took a little while to get used to it (after nearly two decades of working with Windows). So far I haven’t had any issues with software not being available for MacOS (at least not where there hasn’t been an appropriate alternative). The trackpad is amazing, and has transformed my mobile working. However, the lack of competitors for the trackpad (in particular) makes me somewhat depressed. I’m yet to see a compelling argument for why the kind of patents granted to the likes of Apple, Google, Samsung etc, on very generic hardware concepts, are beneficial in terms of innovation. It’s even worse for software. I wish every trackpad in the world could be as good as this one, and I think if everybody using trackpads used one of these… you’d be able to measure the effect via global productivity within weeks!!! (:-P maybe a slight exaggeration)

Patent Wars, from Business Week (http://www.businessweek.com/articles/2012-03-29/world-patent-war)

To slightly balance my disdain for Apple’s use of the patent system, I have another brief story. I was at a talk by Clive Grinyer, a designer working for Cisco. The presentation was part of the HighWire ‘Digital Futures’ series, so most of the people in the room were PhD candidates. Clive noted that every single laptop in the room was an Apple-designed machine. Initially I thought he would say something negative about this fact; on the contrary, he thought it was encouraging.

Macs – Finishing off some work on our ‘Deep Dive’ project – showing off 4 Apple Laptops.

The reason Clive saw it as encouraging is that it indicated that all of us in the room were appreciative of an integrated approach to design (whether we knew it or not). Clive was arguing the importance of multi-, cross- or maybe ‘post’-disciplinary working. So, not just coders talking to designers, but coders that appreciate design, and can have a realistic conversation with a manufacturing manager, taking into account what the manufacturing manager is telling them about supply chains, while also being able to converse with the marketing department about branding… and so on. Apple work in an integrated way, and the result is products designed holistically and integrated into the whole corporate ecosystem, and into the consumer ecosystem… and that, I guess, is why despite being ubiquitous Apple products remain so attractive, sell so well and, as a result, Apple have a (correct at time of writing) 137 billion USD cash pile. Well done Apple! Clive took pleasure in seeing that ‘we’ (HighWire PhD candidates) had ‘voted with our cash’ and collectively all appreciated the value in this integrated approach, and hence the room only had Apple computers in it. Well done us. But I do hope, maybe sooner rather than later, that a room full of PhD candidates in the future might have slightly more diverse tastes in their hardware, resulting from a better use of our intellectual property legislation.

Contrast Making it Tasty

So with food… if you mix up hot and cold, or crispy and smooth, or sweet and sour, you tend to get good results. The same goes for innovation. The same goes for creative thinking and outputs. Latest example: 3D printed music (and that works on several different levels). Check it out:

Using Crowd Wisdom to Annotate the Web

Since January I’ve been intensively researching the space around citation practices in academia as part of the HighWire MRes ‘Special Topics’ module: I’m really intrigued by it all. As is often the way with these things, I’ve probably gone about it the wrong way, with an idea of a solution (I was thinking about applications of linked/semantic data…) before properly understanding the problem. Thankfully, learning this kind of thing is why I’m doing an MRes.

I’m actually consciously trying to resist being consumed by my obsession with this topic, while at the same time trying to master my own feeling of completeness as regards my knowledge of the subject. Part of the issue is that most (not all) of the academics I’ve spoken to about my concerns with citation practices react quickly and deeply, suggesting that this is ‘the way things are’, inherently political, unbounded… and generally a difficult area to work in. They’re probably right, but it doesn’t mean I shouldn’t go there. I’ve already written a 2,000-word literature review titled ‘Impact Metrics: Lies, Damned Lies, Statistics‘, and a mock EPSRC funding proposal, ‘Expressing Research Output Through Linked Data‘, on these subjects, so I won’t elaborate here… however my thinking in this area did lead me to think about annotation as a method by which to make various practices on the web more transparent, and potentially a way of mitigating the Matthew effect.

The Matthew effect suggests that ‘the rich get richer and the poor get poorer’. It applies in academia too: a highly cited paper is far more likely to grow its citations quickly than a paper that has no citations. Thomson Reuters actually run a ‘Highly Cited’ service; on their front page they state:

“Once achieved, the highly cited designation is retained. With each new list, we add highly cited individuals, departments and laboratories to this elite community.”

I don’t want to appear objectionable, but it is quite a scary proposition. They’re saying that once they (Thomson Reuters) have awarded this accolade, it is enshrined forever… thus the ‘elite’ community is created. This touches on my issues with impact measures per se. It is impossible to explain the nuances of a lot of literature, knowledge, or learning, and to express why or how it is valuable, by way of a number. The content of academic literature (excepting tables, figures, etc) is qualitative. Regardless of the field there’s a qualitative element. So why don’t we discuss it in qualitative terms? Plain English…! “This is relevant because…” or “I disagree because…”

I don’t think we should ignore statistically based metrics. I don’t think we should ignore citation counts. I do think that being highly cited (whether or not Thomson Reuters invite you into their club) is usually a great thing, helping both authors and researchers that need to access relevant literature. However we’re missing out on the subjective. And the subjective has value. Even worse, if we’re counting citations and making a judgement on them, we assume that they’re quite an objective thing: which is a total fallacy. Why one paper receives citations and another one doesn’t could be down to any reason: being friends with somebody, the typeface, an artefact of the indexing process, or simply the keywords chosen to describe the paper.

I had a really great lecture from Wolfgang Emmerich. Although the lecture was really about the agile software development methods used at Wolfgang’s company, Zuehlke, we also tested out the wisdom of the crowd. Wolfgang had each of us guess the weight of a motorcycle. We revealed our first guesses, discussed, then re-guessed. We took the mean of the second guesses, and the ‘crowd’ (only about 10 of us) was within 5% of the correct weight. Pretty impressive, I thought.
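Just to illustrate the mechanics of that ‘crowd estimate’ (the numbers below are made up, not the actual guesses from the lecture), the estimate is simply the mean of everyone’s guesses:

```python
# Toy illustration of a crowd estimate; the guesses are invented,
# not the real numbers from the lecture.
guesses_kg = [180, 220, 150, 260, 200, 175, 240, 210, 190, 230]

crowd_estimate = sum(guesses_kg) / len(guesses_kg)
true_weight_kg = 205   # a hypothetical 'true' weight, for illustration only

error_pct = abs(crowd_estimate - true_weight_kg) / true_weight_kg * 100
print(f"Crowd estimate: {crowd_estimate:.1f} kg ({error_pct:.1f}% off the true weight)")
```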

So… I postulate that the wisdom of the crowd, combined with an open annotation system, could be a massively important tool for adding extra value to things like citations. On an exploratory punt at working with Mendeley to further explore this through my summer project, I was pointed toward hypothes.is by William Gunn (head of Academic Outreach at Mendeley). Hypothes.is are developing an open annotation system, relying on crowd-sourced and reputation-based data… to annotate everything. After watching this short video (below) I had one of those terrible yet affirming moments. The thought running through my head was “Again?!?! Again!!!? Why does every idea I have seem to have already been had by somebody infinitely more able to deliver on it than myself?” I had had this idea before, but kind of wrote it off as being ‘too big’, eventually sanitising it down so much that I was just thinking about annotating citations. As it is, I think the ambition embodied by the project potentially has the power to transform the web. It’s also a reminder to me to ignore those authority figures that suggest that maybe the area of interest is ‘too big’ or ‘too political’ or ‘just the way things are’ – sometimes you’ve got to throw caution to the wind, just like Mario Capecchi did.

Cloud computing and big/open data: tools for cosmic data Aikido?

It isn’t by chance that Rob Potts (colleague, new friend, and universal tool for metamorphosing the everyday) and I have referenced each other’s blogs this week. It will continue, I hope. We are in fact attempting to instigate a general cross-referencing of the blogs that our HighWire cohort have to write for various modules this term, in order to stimulate some innovation.

The title of this post is a shout out to Rob’s blog Sustain and Release. Read that post and you might see what I mean about him being a universal tool for (occasionally in)coherent metamorphosis of the everyday into visual, spoken and cognitive metaphor (has he infected me?!). Rob says, with regard to sustainability:

Is this a ‘damned if you do, damned if you don’t’ situation? What is the smart way to proceed? Perhaps now we need to begin to work with matter, cosmic Aikido.

Professor Gordon Blair presented a lecture on cloud computing to us today. It didn’t contain anything I wasn’t, at least a little, aware of before… however, in the inimitable way that any good presenter can, Gordon’s lecture did make me think about these things in detail – something that is happening consistently in my time at Lancaster.

So back to the Aikido and cloud computing. Cloud computing isn’t a distinct thing. It is Google Apps, it’s Dropbox, it’s Amazon EC2, it’s BitTorrent, it’s iCloud, it’s the data centres that startups and corporate giants use to harbour their data.

There is no hard and fast definition, and the list above could be a very long one, but in essence, cloud computing is a whole host of overlapping technologies. This is very well demonstrated in this image taken from Cloud computing: state-of-the-art and research challenges (Qi Zhang, Lu Cheng, Raouf Boutaba 2010), via Gordon Blair.

Cloud Computing Architecture

End users see the different levels as Infrastructure as a Service (IaaS), Platform as a Service (PaaS) or Software as a Service (SaaS). The diagram shows the kinds of resources related to each of these layers, and the right-hand side shows real-world examples of each one. IaaS generally refers to quite raw, ‘low-level’ stuff (such as simply having a virtual machine running Ubuntu, or Windows 8, that you have access to via ‘the cloud’). PaaS takes it up a level: maybe you have access to a programming framework, or a database. You don’t really have to care how it works, but you know you can access it for your own means. Lastly, SaaS is the kind of thing I use every day – Dropbox, Facebook, Google Apps: user-facing applications. Sometimes you might find that a SaaS is built atop one or both of the two layers below it.
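One toy way to picture that layering (purely illustrative – the class names and the file-sharing example are mine, not from the diagram) is that each layer wraps the one below it: a SaaS application sits on a PaaS-style database abstraction, which in turn runs on IaaS virtual machines.

```python
# Purely illustrative layering sketch; the class names and the file-sharing
# example are invented, not taken from the diagram.

class VirtualMachine:                # IaaS: raw compute you rent, e.g. an Ubuntu VM
    def __init__(self, image):
        self.image = image


class ManagedDatabase:               # PaaS: a database you use without managing the VM
    def __init__(self):
        self.vm = VirtualMachine(image="ubuntu-12.04")   # the provider's concern, not yours
        self.rows = []

    def insert(self, row):
        self.rows.append(row)


class FileSharingApp:                # SaaS: the user-facing application (think Dropbox-like)
    def __init__(self):
        self.db = ManagedDatabase()

    def upload(self, filename):
        self.db.insert({"file": filename})


app = FileSharingApp()
app.upload("thesis_draft.pdf")
print(app.db.rows)                   # -> [{'file': 'thesis_draft.pdf'}]
```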

Cloud computing is great. It’s very clever, and it relies on the bandwidth available these days and a level of hyper-connectivity that is, in its own right, an intriguing area of study. With it I can happily go to the University campus knowing the papers that I need to read are stored in Mendeley’s online repository, my music is in Google Music, any other documents I need are accessible from Dropbox, and that if I have an innovative startup idea today, I can easily get the computing power needed to support it online – without huge outlay – by tomorrow, and that that solution will be scalable. It’s incredible.

There is of course the hidden cost. It’s hard to find a reliable figure for this, but you could argue, legitimately, that searching Google twice (incidentally, for each hyperlink in this document I’ve searched Google at least once…) uses the same amount of energy as boiling a kettle. It isn’t fair, I don’t think, to make that comparison directly. However, what is undeniably true is that the energy involved in running cloud-based services, and the infrastructure that supports them, is magnificently huge. As an evangelist for a general movement towards sustainability, and a leading expert in distributed computer systems, Gordon is actually in a sticky place, I would say; I don’t envy him on that front. I am aware of sustainability issues, and increasingly care about them (and I want to do something about them) but… fortunately I’m not an expert in distributed systems! Conversely, it’s a damn good job that some of the eminent experts in this field appreciate sustainability.

In the same session Gordon covered some issues related to big data and open data. I actually abhor the term big data, as it happens, based on its inherent ambiguity – but no matter. Cloud computing is one of the factors that has enabled big data to splurge across the world, and as a result big data has become a significant area of study (and – excuse me – a big business).

Big data, I think, should be respected and watched. Respected because it can harness great power, potentially for both good and evil. Watched to make sure that this power is controlled equitably. It scares me to think how much information Google hold on me. It scares me to think how much money our personal data is worth to corporations. It scares me to think that my DNA or health records might become part of this big data craze and come to be in the hands of corporations concerned with profiting from them. But at the same time the quirky correlations between Google search results and things like house prices or influenza outbreaks, if they continue to emerge and hold up, have huge potential for good in the world (those are just two examples of how people can utilise Google’s big data; there are many other vendors, types and examples). Another interesting story of how scary big data is comes from Malte Spitz. Spitz wondered one day exactly what data his mobile phone company was collecting about him; after a lengthy legal battle he finally received a file that contained 35,830 lines of coded data. From this data you can virtually relive Spitz’s life over an extended period. I really recommend this TED talk, delivered by Spitz, where he sums it up beautifully.

Cloud computing and big data (and indeed the Internet as a whole) share a thirst for energy, and there are no signs of this appetite abating. I find when talking to colleagues that some find it incredibly easy to become ‘anti’ quite quickly when thinking about this. The gloomy global outlook when considering sustainability, along with the bitterness most feel when considering issues of privacy and trust related to big data, is a heady mix that can make those concerned with it appear reactionary. Conversely, others who, I think quite pragmatically, conclude that big data (and sustainability issues) are with us to stay oftentimes become equally vocal, and it isn’t difficult to find confusing theories that lead a logical observer toward a head-in-the-sand approach to the dilemmas here (on account of how entangled the issues are). A third camp, and that is where I see myself, is optimistic that the benefits of big data can be realised while the issues of trust and privacy are dealt with sensibly. Apart from revolutionists, I haven’t heard any convincing argument as to how we could realistically dispense with these innovations now that we have them.

The final part of this cosmic data Aikido jigsaw is open data. Open data is as broad a topic as big data, so I won’t go into any detail, but broadly speaking it means data that are publicly available. You could say that open data are to information what the open source movement is to software applications. Like open source, some see it as a tool or a model that fits into current paradigms; others see it as an entirely different philosophy. I think it has the potential to be both. One example of an open data project is OpenStreetMap, a global map that is made for people, by people, and is owned by people. New York City has a large repository of data that covers everything from wireless hotspots in the city (the most frequently viewed dataset on the open data portal) through to after-school programmes, privately owned public spaces, fiscal stimulus data, and refuse collection tonnages. Another example of open data at work: after the large 2010 earthquake in Haiti, the OpenStreetMap data for Port-au-Prince went from being virtually nonexistent to some of the richest cartographic data that has ever existed. This data was used by aid organisations and health agencies to great effect. In NYC you can view crime statistics on an interactive map, and maybe plan a safer route home, or decide where to live accordingly. It’s early days for open data, but so far some of the applications really have had impact, and are almost heartwarming to my mind.

It’s a difficult thing to imagine, but I really believe that all of the elements of the system could be modelled to demonstrate that the utility and methods behind cloud computing can deliver the benefits of open and big data in a scalable and sustainable way. Apart from a hell of a lot of work and ingenuity, it would require ‘global’ cooperation. If you take global to mean whatever system you’re looking at, rather than ‘planet-wide’, then I think this really could be a reality in the not-so-distant future.

So what am I thinking? A hella distributed computer system. These distributed systems (some of which could be termed cloud computing) are really powerful. A system where every device would contribute its spare processing power and storage to the cloud, whether it be a phone, tablet, laptop, supercomputer, or a whole datacentre. All data would be owned by everybody, forcing a collective responsibility towards how much of it there is, and how it is used. To metaphoricalise it: imagine the world had a single well for drinking water. Nobody in their right mind would use it all up too quickly, nor would they treat it irresponsibly and contaminate it. Indeed, if anybody tried to do either of those things, then everybody else would try to stop them. Interestingly, the way I’m imagining this system, it could pretty much alleviate the privacy/trust issues associated with big data. You see, I think the best way to incentivise generators of big data to only generate and store the data they need, and also to ensure that it is treated sensitively, is to store it in an entirely open cloud.
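As a purely speculative sketch of what ‘every device contributes its spare capacity’ might look like (all of the names below are invented; security, replication and incentives are real problems I’m not pretending to solve here):

```python
# Speculative sketch of an 'everyone contributes spare capacity' cloud.
# All names are invented; security, replication and incentives are ignored.

class Device:
    def __init__(self, name, spare_cpu_cores, spare_storage_gb):
        self.name = name
        self.spare_cpu_cores = spare_cpu_cores
        self.spare_storage_gb = spare_storage_gb


class OpenCloud:
    def __init__(self):
        self.devices = []

    def join(self, device):
        self.devices.append(device)

    def total_capacity(self):
        # The pooled capacity everyone collectively owns and is responsible for.
        cores = sum(d.spare_cpu_cores for d in self.devices)
        storage = sum(d.spare_storage_gb for d in self.devices)
        return cores, storage


cloud = OpenCloud()
cloud.join(Device("phone", 1, 4))
cloud.join(Device("laptop", 2, 50))
cloud.join(Device("home router", 0.5, 1))
print(cloud.total_capacity())   # -> (3.5, 55)
```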

I realise, having got to the end of this constructed idea of cloud-based cosmic data Aikido, that it is a little Utopian. Maybe a lot. However, there isn’t really anything to suggest that the idea couldn’t work on a relatively local level (look at Diaspora and BitTorrent) before being scaled up. Each increment of the network size would represent a net (pun?) ‘saving’, and a further step towards generating and using data responsibly.

Going a few steps further down the technological discovery line, you can imagine how the Utopian vision described above could be supported by ubiquitous computing. It would be a challenge to quantify, but I dare say that if you added up all the spare processing and storage capacity that exists within the incredibly pervasive computing power in the world (including all the processors not only in phones, but in refrigerators, cars, escalators, boilers, routers, etc) – then you could maybe replace a large amount of the energy-gobbling (and expensive) data centres that power the (current incarnation of the) cloud and big data. On another note, imagine that the way we store data could be disrupted by storing it inside DNA: living storage devices could be the answer to the practical problem of how to store seemingly infinite data (although, of course, this raises a whole myriad of ethical and trust concerns in its own right). It’s all possible though.

The European Union recently announced a €1 billion research project around large-scale brain simulations. I’m fascinated by it, and ever so slightly scared too. Depending on the outcome, maybe ubiquitous computing could become a vehicle for hosting a large scale virtual brain in a distributed manner. I think maybe I’ve watched too much Ghost in the Shell.

Put briefly, I love the things you can do with cloud computing, big data, and open data. I’m also aware that there are impacts. Computing is ubiquitous, but we’ve got to that stage without much thought for how to sustain it, or how to get the most out of it. Maybe it isn’t practicable, but I’d like to think that the ways in which all of these arenas are linked could be put to use, letting them lean on each other’s strong points, while containing the negatives we associate with them now.

“Hotpants vs Knockout Mouse” feat. Quad Bumlines (Sustainability Remix)

Imagine that planet Earth were a corporation with shareholders: how would investors be feeling? What would go in the annual report? It all comes down to exactly what it is the shareholders are interested in. One would usually assume that the corporation is interested in revenue, profit, capital gains of one sort or another; return on investment. So how is Earth Corp doing? Well, it depends what you measure.

Could global human population represent “profit”?

Human Population (a graph showing how dramatically it has grown in the last 200 years)

If global human population is the measure of success, then Earth Corp is doing pretty darn well.

But what else could we measure, that could be analogous to profit? Let’s be a little less abstract about how this corporation is measuring its success, and say that Earth Corp is measured by the Gross World Product, how much all of the economies in the world are worth. In that case you get something a bit like this:

Gross World Product (a graph covering the last 80 years)

Also, not bad at all. The time frames on the two graphs are completely different, so don’t make a direct comparison, but the point is they’re both going up steeply. Population and economic output are growing.

In fact, there are very few measures that don’t fit this trend of the graph going up dramatically during the period of modernity.

Try out global temperatures and you get the same pattern:

Global Temperature

I’m sure most people are aware that if you overlay carbon dioxide levels in the atmosphere with this graph, they are very well correlated.

But maybe Earth Corp has some understanding of Corporate Social Responsibility, so the board are trying to move their activities to be in accord with the concepts included in the triple bottom line (TBL). TBL is akin to “full cost accounting”, and the idea is to incorporate several factors into a single measure of success, specifically economic, societal, and environmental factors. In fact the graphs above could conceivably relate to the triple bottom line. So how do we interpret those graphs in relation to the TBL? Well… I guess the rise in economic output means more wealth, which is good. The rise in population means that society must be working on some level, and maybe that health is improving, so that’s good too. And from where I sit, the increase in global temperatures, and its correlation with CO2 output, is probably a bad thing.

So far I’ve just pointed out some obvious facts. What I’m interested in is making sustainability tangible. How can we become more sustainable? Looking at each of those graphs, the one big question that occurs to me is: how long can things go on like that?

Stuart Walker, speaking to me and my cohort at Lancaster University, introduced me to the triple, and then quadruple, bottom line only very recently (so admittedly, it’s something I’m still getting my own head around). The extra element added to the trio to arrive at the quadruple is a spiritual element (also known as the personal element). I was surprised how much this resonated with me: I’m an atheist. However, I do think there’s a place for spiritual understanding in the world (anyone who sees a tension in being a spiritual atheist should probably consider exactly what spiritual means, or how it is meant), and actually, where this idea of the triple or quad bottom line is concerned, it is essential in order to give the other factors some sort of context.

Another of the revelations that Stuart offered was related to how the economic factor plays out in these models. In its pure form, the TBL is just a spectrum for measurement that includes several factors. Great. However, if you look at how it is implemented and used, how the world actually works… then nine times out of ten the economic factor is an “end not a means” (quoting Stuart). The big point here is to view the triad of society, environment, and personal as the ends, and the economic factor as the means. Value beyond money, I suppose. I mean, who cares how much money you have, if there is no society or environment for you to personally enjoy it in? It is a fantastic idea, but sadly at the moment seems a bit Utopian.

Stuart concludes his lecture series with:

A more holistic approach…. From a knowledge economy based on what we can do, to a wisdom economy based on what we should do..

Stuart tentatively lays the foundations for some answers to the big questions like “is it possible to live in the world in a sustainable way?” (and similar) but purposefully doesn’t begin to address them directly. And who can blame him? Our unsustainable way of living isn’t something you can solve with a discrete solution; it is a wicked problem, and the unsustainable traits of modernity are so deeply ingrained it seems almost impossible to imagine a world where we’ve moved forward.

It’s one thing to talk about possible innovations that might help, but for now I’ll avoid that (I’ve got some ideas… but they can wait for a future blog). What I want to talk about is the nature of innovation. How does innovation relate to risk? How do established norms relate to innovations? What strategic position is best to adopt when faced with a wicked problem? (in particular, this wicked problem)

In order to answer this there are a few points I want to join together: the unexplained behaviour of Kingfish, a reference to innovative heated hot-pants (the counterpoint to which is the maverick personality responsible for “knockout mice” and gene therapy), and the complexity involved in figuring out the carbon footprint of the BBC… and then using those points to ask: what is it those calling for innovation actually want?

First, the Kingfish. I only know about the Kingfish because I watched a recent episode of Africa on the BBC, presented by David Attenborough. The interesting thing about the Kingfish is that, despite being solitary hunting animals, they swim upstream once a year, in a large group, and then spontaneously begin circling round and round in the water. The Kingfish don’t spawn there, they don’t mate there, they don’t die there, and they aren’t from there. There’s no explanation for the behaviour. Attenborough called them pilgrims. If these fish were people, deciding to go there, then you might say they had a cognitive bias, which in the words of Paul Ralph is a “systematic deviation from normal judgement”. Something that you do because it’s the way it’s always been done.

Kingfish circling (screen grab from BBC’s Africa)

Second, the “hot” pants. The pants in question are designed to keep cyclists’ muscles warm in the time between the warm-up finishing and the race starting. The pants were part of the “marginal gains” programme that the British cycling team developed in the years preceding the Olympics. Matt Parker, head of the programme, realised that the pants would give the Brits a tiny advantage. There was no end of these tiny advances, each a little innovation in its own right. Another marginal gain was the practice of applying alcohol to the wheel rims (reducing dirt, and friction). None of these advances will redefine cycling, though; in fact, in sporting events this kind of practice either becomes standard (i.e. everybody does it) or gets banned. So in some way, it is a temporary gain.

My third point centres around Mario Capecchi’s “knockout mice”. Capecchi won a Nobel Prize for his work on the mice. I can’t confess to fully understanding the process, but the context here is how and where he got the funding to do the work. When Capecchi said what he wanted to do, those funding the project told him they respected his work and his talents, but didn’t trust that his research would work – it was just so far out. So radical. Nobody believed he could do it. They did want to invest in the man though, so they said sure, you can have the money, but please just do something boring, something sensible, something that is ‘doable’, something that will definitely work. Capecchi said fine, took the money, and did the mouse research anyway. He totally ignored the wishes of the funding body. The knockout mouse, as it happens, is the foundation for all gene therapy. It is invaluable work. And the body that funded the work were, retrospectively, grateful for Capecchi’s decision to ignore them! A maverick person was required in order to stimulate radical innovation, which in turn may well see radical change in society as the cutting-edge work founded in the knockout mouse begins to filter through to practical applications.

My fourth point is about bundles and complexity, in this case characterised by how the BBC are trying to quantify their sustainability credentials. If you consume television media, have you ever considered what its environmental impact is? Have you ever wondered what the “best” way to consume content is? I have, but only so far as whether I listen to the sound through my hi-fi system, or use the TV speakers. If you’re the BBC and you’re trying to figure it out, it gets rather more complex. How many people watch via the digital terrestrial network? How many watch online? Out of either group, who records the programme, and who is actually watching it? How many people are sat in front of the TV? How big is the screen it’s connected to? This is all before you start to think about the resources that go into actually making the programme in the first place. For the BBC to figure out a method, which in turn will produce a value for a specific viewing of a specific programme – that’s tricky. For information, there is some relationship between distribution method (IP via iPlayer vs radio via the digital terrestrial network) and number of viewers. If there is a single viewer, it would be most efficient to only distribute via IP. However, if there are, for example, 10,000,000 viewers, 90% of whom are watching via radio, then those watching via IP will each be consuming far more energy than those watching via radio. Probably… it’s complicated! The BBC have employed Janet West to look at these issues.
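To illustrate the shape of that per-viewer logic (the energy figures below are completely made up, purely to show the calculation): a broadcast transmission has a roughly fixed cost shared by everyone tuned in, whereas IP delivery costs something per individual stream.

```python
# Made-up energy figures, purely to show the shape of the comparison.
broadcast_energy_kwh = 5000        # hypothetical fixed cost of one terrestrial transmission
energy_per_ip_stream_kwh = 0.1     # hypothetical cost of one individual iPlayer-style stream

viewers = 10_000_000
broadcast_viewers = int(viewers * 0.9)
ip_viewers = viewers - broadcast_viewers

per_viewer_broadcast = broadcast_energy_kwh / broadcast_viewers   # shared by everyone tuned in
per_viewer_ip = energy_per_ip_stream_kwh                          # fixed per stream

print(f"per broadcast viewer: {per_viewer_broadcast:.6f} kWh")
print(f"per IP viewer:        {per_viewer_ip:.6f} kWh")

# With a single viewer the picture flips: one IP stream (0.1 kWh here) beats
# powering the whole broadcast chain (5000 kWh here) for just that one person.
```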

So how are these things related? Well, firstly they’re all to do with the BBC (the British Cycling/Capecchi examples came from Tim Harford’s new podcast)… but that’s more to do with chance than anything. Harford was making the point that although the marginal gains programme meant that the British cycling team was far more successful than any other in the Olympic velodrome, fundamentally the marginal gains are quite boring. Boring, but easy to get funding for. Easy to convince people it’s the right thing. In fact the English Rugby Football Union has poached the man behind the marginal gains for their own needs. In a sense you might say this sort of innovation is fine on its own, but it won’t bring about a paradigm shift. On the other hand, the sort of “radical” innovation that Mario Capecchi brought about with his knockout mice is much riskier. Radical innovation will (yes, definitely, will) completely fail a lot of the time, but… when it works, you get major advances. Marginal gains vs major advances… who is the winner? Well…

We need to stop being Kingfish. We can’t just continue doing things in the way characterised by modernity, congested with cognitive biases that are predicated on the fact that ‘this is the way it’s always been done’. We need to shake things up and get radical. At the same time, however, we need to accept the fact that marginal gains do bring about advances. Understand that risky research will arrive at massively significant outcomes (but maybe only 1% of the time), while less risky research will arrive at marginal improvements, but quite frequently. We need to spread our bets.

Lastly: what is it we’re trying to achieve? It is fine to be aware of our (complete lack of) sustainable living, but are you prepared to act? Are you happy to go through your life, along with the masses, living out marginal gains that will ultimately have negligible effect (recycling your waste, doing a car share, using toilet paper from renewable sources)? Or are you driven to radically innovate, shout your cause (whatever side of the argument you’re on) from the rooftops, and take a chance on being the fly in unsustainability’s ointment? What would you be willing to change, to live sustainably?

I don’t think it’s an easy one to answer.