The BBC is to remove recipes from its website, responding to pressure from the Government. It will also remove a number of other web-only services. The news is symbolic of a larger issue, and the outcome of a much longer story. It’s a signal that the current government will actively reduce public sector activity on the web for fear of upsetting or displacing the private sector. This is not just a feature of the current Conservative government; the Blair administration treated the BBC in the same way. The idea is that by reducing the public sector a thousand commercial flowers will bloom, that competition will drive variety and quality, and that a vibrant commercial digital sector will create high-skill jobs. Never mind that the web is already controlled by a handful of giant US monopolies, mostly employing people thousands of miles away. Ideology trumps pragmatism.

In the specific case of the BBC, the Government has won. The BBC’s entire digital presence is dependent on its TV and radio operations. iPlayer can only exist while the BBC is making TV and radio shows; the news website relies on the news-gathering operation it inherits from TV and radio. TV (and possibly radio) are destined to have fewer viewers and listeners as we increasingly turn to digital. So, as licence fee payers disappear, the output will shrink and decline in quality, the BBC’s presence in the national debate will diminish and its ability to argue for funding will be weakened. When it comes time to switch funding from a licence fee for owning a television to a system that works on broadband connections, the BBC will already have lost. An outmoded organisation that has failed to adapt, a footnote rather than a source of national pride.

Put simply, the BBC has failed to make the case that it should exist in a digital era. Instead it’s chosen to remain a broadcast operation that happens to put some of its content on a website. When TV finally dies, the BBC could be left in a position similar to NPR in the US: of interest to a minority of left-wing intellectuals, dwarfed by bombastic polarising media channels owned by two or three billionaires. That’s why it’s so critical that the BBC make a web offer separate from TV, but it hasn’t. The Government has been extremely successful at making the BBC embrace the principle that all web output must be linked to TV or radio, which is why, for example, the BBC will be reducing commissions specifically for iPlayer too, and closing its online magazine.

The story has been evolving for a long time. I was working on the BBC’s website in 2009. It had just been through a multi-year Public Value Test to prove to the Board it wasn’t being anti-competitive by providing video content online; at least the public were allowed iPlayer in the end. BBC Jam, a £150 million digital educational platform to support the national curriculum, was cancelled in 2007 because of competition law. Don’t forget, at that point they’d already built most of it. Millions of pounds of educational material were thrown in the bin because they would be ‘anti-competitive’. Of course, no commercial alternative has ever been built.

When I arrived there was endless talk of restructuring, and optimism that we’d get a clear set of rules dictating what projects would not be considered anti-competitive. It never came. The project I worked on, about mass participation science experiments, was cancelled, I presume because it wasn’t directly connected to a TV program. All kinds of other digital offers were closed. H2G2, which pre-dated, and could (maybe?) have become, Wikipedia, was shuttered. The Celebdaq revamp was another proposition that was entirely built and then cancelled before it ever went live.

The BBC will now offer recipes that are shown on TV programs, but only for 30 days afterwards. That’s how hysterical the desire to prevent public service on the web is: you can create content, at considerable cost, but not leave it on the web, which would cost virtually nothing.

The BBC has focused its digital R&D budget on its gigantic archive, looking at new ways of searching, ordering and displaying the millions of hours of audio and video it has collected. Which is a weird decision, because it is a near-certainty that the BBC will never get copyright clearance to make public anything but the tiniest fraction of that archive. I speculate that it has done this because it saves the management from having to worry about a competitive analysis. Projects that can never go public don’t pose a problem.

If we shift our focus from the BBC to society as a whole, it’s disappointing to see how we’ve abandoned the notion of digital public space. The web has opened up a whole new realm for creativity, interaction, education and debate. As a society we’ve decided that almost nothing in that realm should be publicly provided – which is absolutely perverse, because the web intrinsically lends itself to what economists would think of as public goods.

Look across the activities of the state and you’ll see that none has a significant presence in the digital realm. We think the state should provide education – but it does nothing online. Local governments provide public spaces, from parks to town halls – but never online. We think the state should provide libraries – but never online. We love the state broadcaster, but we’re not allowed it online. We expect the state to provide healthcare – but the NHS offers only a rudimentary and fragmentary online presence. You can apply the formula to any sector of government activity. Want career guidance? Not online. Want to know how to make a shepherd’s pie? Better hope it appeared on a TV cooking show in the last 30 days.





Lots of people have been speculating, monetarily and verbally, on the value of Facebook. One bullish analysis has particularly stuck with me; I think it captures the essential failure to understand what a dangerous proposition Facebook is. To paraphrase, the thinking was something like this:

Lots of people have made their fortunes by devising new TV formats, or writing sitcoms, but Mark Zuckerberg has gone one further by inventing a whole new medium. Facebook is the new TV, and it’s controlled by one company.

If this were true it would certainly make Facebook very valuable. But it misses one point: Facebook does not create content, it’s a place for users to share content. This is a recipe for explosive growth, because of Metcalfe’s law: the value of a social network is proportional to the square of the number of people on it. It’s positive feedback: the more people sign up, the more interesting the site is, which leads to more people signing up.

But this exact same logic means Facebook can implode extremely quickly. If a few people decide they aren’t so keen on FB any more and stop using it, then it’s a bit less valuable as an experience, so a few more people will stop using it, and the same feedback loop leads to an exponential drop in participation.
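This two-way feedback is easy to sketch numerically. The following is a toy simulation with invented figures and an assumed appeal threshold – not a model of Facebook – but it shows how the same n-squared feedback that drives growth also drives collapse:

```python
# Toy sketch of Metcalfe-style feedback. All numbers are invented
# for illustration; the "threshold" is an assumed level of appeal
# below which people start drifting away.

def simulate(users, threshold, churn=0.05, months=24):
    """Return the user count after each month of feedback."""
    history = []
    for _ in range(months):
        appeal = users ** 2                    # Metcalfe: value ~ n squared
        if appeal >= threshold:
            users = int(users * (1 + churn))   # growth feeds on itself
        else:
            users = int(users * (1 - churn))   # decline feeds on itself too
        history.append(users)
    return history

growing = simulate(users=1_000, threshold=500_000)
shrinking = simulate(users=500, threshold=500_000)
```

Starting just above the threshold compounds upwards every month; starting just below it compounds downwards – the "implosion" case.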

A lot of people think Facebook is safe from this type of implosion because people don’t have the time or headspace to sign up to another social network, learn it and rebuild their friend list. However, as people spend more time online they become more savvy, more likely to demand specific features that FB doesn’t offer. Perhaps their focus will move to a network that is specific to their hobby or social group. Not only is rebuilding my friend list easier than you might expect (import from Hotmail etc.), it’s also highly desirable. Lots of people want to spring-clean their friend list or start with a clean sheet – perhaps to rid themselves of the nagging feeling that they don’t really understand the privacy settings and might be doing something stupid.

There is already plenty of anecdotal evidence of Facebook fatigue. Notwithstanding its admission of a large number of fake accounts, Facebook certainly still seems to be growing, but it’s not as easy to tell how it is faring as you might think.

When Facebook gives its user count in Monthly Active Users it includes anyone who uses a Facebook comment box on a third-party site, likes any content anywhere or authenticates using Facebook. If you have a Facebook account, it would be quite hard to use the Internet without counting as a monthly active user. Nonetheless, you might be almost completely disengaged from the site. If Facebook were seeing declining engagement, it wouldn’t show up in its overall user stat.
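To make that concrete, here is a toy calculation with entirely invented figures: under a broad enough "active" definition, the headline number can hold steady while day-to-day engagement falls sharply.

```python
# Invented illustrative figures - not real Facebook data.
# "mau" counts anyone who touched a like button or comment box all month;
# "daily" counts people actually visiting the site each day.
months = [
    {"mau": 900, "daily": 500},
    {"mau": 910, "daily": 300},
    {"mau": 905, "daily": 120},
]

engagement = [m["daily"] / m["mau"] for m in months]

# The headline MAU figure barely moves...
assert max(m["mau"] for m in months) - min(m["mau"] for m in months) < 20
# ...while the share of users who show up daily collapses.
assert engagement[0] > 3 * engagement[-1]
```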

Whether this has started happening yet, or ever will, I obviously don’t know. But Facebook isn’t a company like others. Oil companies might risk a Deepwater Horizon-style disaster, but they can be sure everyone won’t simply stop using oil because it goes out of fashion. A TV company can be relatively sure that its audience won’t simply implode. But Facebook is a social convention; it’s more like a popular pub. The crowd can just move on.

At the moment, Facebook has a lot of daily active users, so it hasn’t reached crunch point yet. However, I presume that anyone who signs up to Facebook is going to be a relatively active user immediately after signing up. Why else would they sign up to the service? Everyone seems to accept that there is a limit to how many new users the site can add, and when the novelty effect is no longer in action Facebook’s stats might start to look significantly less appealing.











Berg’s Little Printer has received something of a mixed reception. In essence it’s a receipt printer that connects to the Internet. It queues up a collection of content; when you press the print button on the top you get a printout of your content. It could be that day’s Guardian articles, a weather report, your Nike+ report, whatever – you can choose from their menu.

It costs £200, which is a lot of money, and a reason for some of the criticism. Another reason is that it’s bad for the environment to print things out unnecessarily. This, I think, is something of a marginal point: the average British person emits 9.2 tonnes of carbon a year; a couple of rolls of paper aren’t going to make any difference.

Really, the root of the criticism of the Little Printer is that it doesn’t seem that useful. But wait! Berg are design geniuses – why have they made something that isn’t that useful? First, perhaps we’re all wrong and it is actually very useful. Text messages and Twitter are both examples of extremely limited formats that have been successful exactly because of their narrow scope.

I don’t honestly think that’s what’s going to happen though. I think the Little Printer is a very early step in exploring the “Internet of Things”, and first steps are always uncertain.

When I refer to the Internet of Things, I mean the idea that the future of computing is going to be less ‘virtual’ and more ‘real’. It’s an appealing idea: who isn’t fed up with staring into a computer screen all day? People choose their careers based on avoiding being etiolated in front of a computer. We’d never put up with it if it wasn’t so damn productive.

To understand this concept I’ve spent a long time thinking about what ‘virtual’ and ‘real’ mean: my conclusion is that there is no coherent definition. The word virtual only acquired the meaning “having the essence or effect but not the appearance or form” in 1959, with the birth of computers. Two related points capture some aspects of virtuality:

  • Computers are virtual because they require humans to connect them to the real world, to put information in.
  • Computers are virtual because their output is non-physical, text or pictures. Humans have to take this and then cause the effect on the real world.

The IoT movement is about breaching these barriers. It’s worth noting that they only exist in respect of consumer electronics; in the industrial sphere computers have been controlling production lines and HVAC systems in buildings for decades. In doing so they have real-world sensors and real-world output. IoT happened years ago for industrial computing.

This conception of virtuality has misled people into believing that what we want in our homes is for our computers to connect to the real world. But we already know this isn’t the case. When Bill Gates founded Microsoft he thought he would sell PCs for spreadsheets, desktop publishing and home automation. Home automation, almost a synonym for IoT, has never taken off; it’s the nut that Microsoft didn’t crack. The use case simply isn’t there.

I don’t want my computer to know much about the real world: what items I have in my fridge, what temperature the living room is, whether someone is having a bath. I also don’t want my computer to have much physical world output: I’m not going to turn the oven on before I get home using my phone, I’m not going to 3D print myself a guitar and I’m not going to print out my day’s reading on a receipt. Domestically, there just isn’t the desire to have a computer interact with the real world.

But, as mentioned previously, we do like the idea of getting away from the computer screen. This brings me to a second idea of virtuality: that a computer is virtual because it does everything. From buying and selling stocks to writing music to playing games, you do it all on the same device, and see the results on the same screen and through the same speakers. How could a device that exhibits this degree of flexibility be anything other than virtual, some remote abstraction of the underlying processes?

In the consumer setting the Internet of Things is about UX; it’s about being able to access the power of a computer without having to do it through my laptop. This is where the Little Printer fails, because although it offers a physical interface with your computer, it does it with worse UX than a computer screen. Being able to use Photoshop on its own tablet, having the calendar that hangs on my wall connected with Google Calendar, having an interface for my music collection that’s part of my hi-fi – these might be valuable UX wins. 3D printing ticks the box of connecting the virtual with the physical, but it doesn’t solve a UX problem.

It’s interesting to see how audio equipment deals with the interface problem. Below, the blue item is an entirely analogue (tube-based!) mixing desk. It couldn’t be more real; everything about it is totally physical. Open it up and there will be glowing valves inside. The grey one is a digital mixing desk. It’s totally fake, a laptop in a box, but for the sake of the UX the outside is more or less identical to an analogue version.

Knobs, dials, real buttons and purpose-specific displays are what IoT really offers the consumer.


For this project we decided to use exclusively third-party logins: Google Account, Facebook, Yahoo and OpenID are available. We did this to avoid the overhead of having to develop authentication ourselves, and because it felt like it was reducing a barrier to entry – no registering, no clicking on links in confirmation emails, etc.

We’re using the Janrain service, which, I have to say, is excellent. It integrates with Rails beautifully and allows you to deploy authentication through numerous services. If you weren’t using Janrain, using third-party logins to reduce development time would be an own goal – even if you only support Google, Facebook and Yahoo, the creeping drag of keeping up with any updates to their authentication processes is a potential nightmare.

But let’s say you do decide to either use Janrain or do the development yourself. The first problem is the questionable quality of the data you are going to get. For example, many people have Facebook or Yahoo accounts but don’t regularly check the associated email addresses. Even if they are active addresses, you might need to confirm with the users that you can use those addresses to send mail (though Campaign Monitor and MailChimp seem to disagree about this). This extra step negates part of the ease of use you got from the automatic login. Additionally, I’ve noticed that you often get horrible usernames from services like Yahoo and Gmail – “xxsun.shine.95xx” rather than a nice name.

Depending on the permissions you ask for, you will be able to get the user’s Facebook data, or post to their feed. However, this is a double-edged sword – no matter how honest your intentions, plenty of users are put off by the intrusion that FB auth might represent. Rather than fiddle around with their FB permissions they just won’t register.

Finally, if you offer a bunch of different login options then you’ve transferred the problem of remembering a password to remembering which service you used. This is exactly why I have multiple Stack Overflow accounts – I can’t remember which service I used.

Having been through various permutations, my feeling is that there is a lot to be said for developing your own authentication and, if necessary, allowing people to link a relevant other account to add functionality. The scenario where I’ve seen it work best was on the Monterosa 2-Screen apps:

  • You could only login through Facebook  – no forgetting which service you used
  • You could still use most of the features of the site without logging in, so if you didn’t have an FB account it didn’t really matter
  • The app let you play against friends, so it made total sense that you used your Facebook account – it wasn’t just a shortcut to harvesting an email address.

Obviously there are many factors at play; I just wanted to relate some personal experiences…



I’ve been working with Andrew Nagi to put together a new (much improved) version of our timeline app. It allows you to make timelines, including what we hope is a handy auto-suggest feature.

It’s been a chance to have a play with a few interesting bits of tech.


We turned to Freebase’s API to allow users to automatically add the dates of events. The app uses Freebase’s auto-suggest JS library and some custom code to work out the dates. For example, if you type in “Albert Ei” it will auto-suggest “Albert Einstein”. If you click on his name, it will automatically add his lifespan to your timeline, including his Wikipedia intro if it’s available. The quality of the data is hit and miss: while people are generally well covered, events are less likely to have accurate dates.

Simile Library

We also used MIT’s Simile library, which does a lot of the heavy lifting of displaying a scrolling timeline. This is vastly more complex than you might expect if you haven’t tried getting a computer to represent human-readable dates. Which leads me to…


If you didn’t already know that it’s famously difficult to make computers work with dates, a few moments of reflection make it obvious. Leap years, timezones and irregular numbers of days in the month are easily foreseeable landmines. Here are some of our favourite nuggets:

  • The Gregorian calendar was only introduced in 1582, meaning that dates before this time can either be indicated using the previous Julian calendar or by dating the Gregorian calendar backwards (the proleptic Gregorian calendar). This is similar to the phenomenon of Russia’s October Revolution happening in November, depending on which calendar you use. The two calendars disagree about leap years, so they slowly run out of sync with each other.
  • In the Gregorian and Julian calendars there is no year zero.
  • Time in computers is very often calculated in seconds since 1970. Makes sense, because there weren’t many computers before 1970. However, for dates after January 2038 (when a signed 32-bit count of seconds overflows) or before the 20th century, 32-bit builds of PHP simply can’t represent the timestamp.
  • For more mind-bending complexity, check this out, or even this.
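The seconds-since-1970 limit is easy to demonstrate. Python’s datetimes don’t suffer from it, but they can show exactly where a signed 32-bit counter (the kind older PHP and C builds used) runs out:

```python
from datetime import datetime, timezone

# A signed 32-bit count of seconds since 1970 tops out here...
INT32_MAX = 2**31 - 1
last_moment = datetime.fromtimestamp(INT32_MAX, tz=timezone.utc)
assert (last_moment.year, last_moment.month) == (2038, 1)  # 19 January 2038

# ...and dates before 1970 need negative timestamps, which a 32-bit
# counter can only follow back as far as December 1901. Einstein's
# birthday is already out of range.
epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
einstein_born = datetime(1879, 3, 14, tzinfo=timezone.utc)
assert (einstein_born - epoch).total_seconds() < -(2**31)
```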

Ruby on Rails

The Ruby on Rails app development framework hardly needs more commentary, so I’ll just mention a couple of observations. It’s very instructive to look at the Rails approach to making web apps; everyone agrees it’s an elegant structure. Rails is great at making app development beautiful, or at least at enforcing good structure. It also removes a lot of the repetitive tasks that waste time when you develop apps without a framework. But I don’t think anyone can honestly say that it makes development easy. Ruby is a hard language to understand, and Rails does a lot of magic that makes understanding it even harder. You don’t often see this fact reported, I guess because it impugns the programming skill of the writer. Rails might make the best of expensive developer time by letting those programmers work as quickly as possible, but it does nothing to lower the level of experience required to be an app developer. Nor is it exclusively a matter of experience – this note from O’Reilly is an interesting insight.



Waiting for a meeting in a business incubator I overheard someone explaining their startup. I can summarise: mobile, personalised, social shopping. “It’s the future,” he said. Sounds pretty ‘me-too’ to me. It’s OK, I can let you in on this big idea – I haven’t signed an NDA.

If that’s not the most visionary idea you’ve ever heard, its antithesis comes from Jaron Lanier (author of “You Are Not A Gadget“).

“Let’s suppose that, back in the 1980s, I had said, `In a quarter century, when the digital revolution has made great progress and computer chips are millions of times faster than they are now, humanity will finally win the prize of being able to write a new encyclopedia and a new version of UNIX!’ It would have sounded utterly pathetic.”

If Jaron Lanier is disappointed with Linux and Wikipedia, I can’t imagine what he must feel in the current climate. One particularly depressing statement I’ve heard a lot goes along these lines: “for most people Facebook is the Internet!”.

Perhaps we haven’t been let down though. Jaron Lanier’s 1980s self was wonderfully optimistic; that’s probably why he’s achieved so much. But technology takes a long time. Just because processor speed doubles every two years doesn’t mean society can work out what to do with it at the same rate.

The economic historian Paul A. David makes the point that electrification took decades to change industrial production and get into people’s houses. At the beginning it seemed just as frivolous as lolcats: in 1883 Mrs Cornelius Vanderbilt captured the spirit of the age by attending a lavish $250,000 fancy dress party in an electric light bulb costume, then decided electricity was a dangerous fad and had the incandescent lighting stripped from her house for fear of it burning down*.

Yet once power stations and transmission lines and electrical appliances were all in place there was a revolution. An often cited consequence is that labour saving devices in the home liberated women to join the workforce, increasing gender equality and creating economic growth. (Obviously, another way to do this would have been to abandon the convention that women do all the domestic work, which would have achieved equality but not the productivity gains.)

Again, David Edgerton highlights the complex route from invention to implementation in his book The Shock of the Old. He gives many compelling examples; my favourite is that London, Midland and Scottish Railway (one company) had as many horses as it did trains: 10,000. This was in 1924, 16 years after the Model T Ford had gone on sale and well after tractors were available. Steam power never replaced animal power; it was outmoded before it could do so, even though it was well established in the 18th century.

Back to the Internet. The reason mobile-social-etc is the innovation du jour is precisely because it is one of the least substantial. It’s easy for social networking to gain take-up: no company boards have to approve it, no standards have to be adopted by everyone in an industry, there are no sign-up fees and the security problems are around privacy, not money.

Consider the contrast with the process of invoicing. You could issue, pay, reconcile, enforce and incur any tax on transactions between companies automatically, using tech not profoundly different from a social network. Likewise for stock tracking, or any one of the bureaucratic processes that businesses are faced with. These would be revolutionary in terms of reducing the costs of business and driving economic growth. Really revolutionary – even more revolutionary than photo sharing or mobile shopping. Walmart, the largest retailer in the world, dominates because it is able to generate exactly these kinds of logistical efficiencies; FedEx is another company whose competitive advantage comes from IT.

But these changes can’t happen easily (at least not between companies) because they require deep cooperation, the emergence of standards, and the security challenges that come with money and goods. Nothing that’s worth doing is ever easy; because these systems are solutions to complex, critical problems between many actors, they are largely still on the drawing board. As in the examples above, useful technology takes a long time to diffuse.

It’s not just big business either: education, social care, health, government and manufacturing all have big gains to come from the net that haven’t been realised yet. Facebook might be the Internet for some people, but then for some people the invention of the incandescent light bulb was just a new opportunity to experiment with their wardrobe.

When the dotcom bust happened it was at least in part because of a failure to appreciate that the diffusion of technology is slow – that it takes time to install broadband and change habits. Perhaps there’s a quick win for someone in mobile-social-geo-gamification, but it’s myopic to see that as anything other than a fragment of the big picture.

Social Networking is predominant now not because it’s the Internet’s destiny, but because it’s just the beginning, and thinking otherwise is just an inability to see the long view.

* I’m remembering this story from Bill Bryson’s At Home; Googling suggests it might actually be a conflation of several stories, with both the Vanderbilts and the Astors cited in all kinds of light bulb / fancy dress shenanigans. Then again, Bill Bryson probably has proper researchers.

Raspberry Pi has been all over the BBC news page, but before it existed I bought a Beagle Board, which is very similar but perhaps with a bit less charisma. When you get the board (it’s just a circuit board with some USB ports, a monitor connection and a memory card slot) you have to install something called Angstrom Linux via the memory card before you can do anything.

All told, I think it probably took me about 12 hours to get the board working. You can only set up the SD card from another Linux machine, so I had to install a Linux virtual machine on my Mac. All sorts of fiddly things got in the way.

The first time I put the memory card – all perfectly set up as far as I knew – into my Beagle Board, it didn’t work. I’m not an embedded Linux expert, and there wasn’t an error message – it just didn’t work. Here is a list of things I questioned in my head:

  1. My Beagle Board is broken (after all, it’s got no case, perhaps I damaged it)
  2. I have the wrong kind of Linux virtual machine on my Mac
  3. The memory card or card adapter is broken – I’ve never used either of them before
  4. Something unknown is wrong with the files I’ve written to the SD card
  5. I’m following the wrong set of instructions for my Beagle Board, perhaps there are different versions or something?

In short, absolutely everything involved came into question, plus of course a kind of meta-doubt: what if something I’d never heard of wasn’t right?

Eventually I solved the problem by doing the whole thing again. I’ve still no idea what was wrong.

It’s a salutary experience to be in territory where you’ve no idea what’s going on; as a nerd it’s easy to forget what that’s like. This is a diagram that has been going round the web for ages:

Obviously, this is an incredibly annoying response – a new user has nothing like this level of clarity. Here is a sketch of the decision tree that arises from a simple real-world (Dad) problem – entering phone numbers into a Google Spreadsheet, which treats them as normal numbers and removes the leading zero (it used to, anyway):
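The spreadsheet bug itself, for what it’s worth, is just numeric coercion – once a phone number is treated as a number rather than text, the leading zero is gone for good. A minimal illustration, with a made-up number:

```python
raw = "01632960983"   # a made-up UK-style phone number

# Coerce to a number, as a spreadsheet does on import...
as_number = int(raw)
# ...and the leading zero is unrecoverable.
assert str(as_number) == "1632960983"

# Keeping the value as text (in a spreadsheet: format the column as
# plain text, or prefix the value with an apostrophe) preserves it.
assert raw.startswith("0")
```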

When you are using something for the first time there is an unknown cost/benefit to the tech you are trying to get running. If my Beagle Board was actually broken, then I could spend two weeks on it and get nowhere. My inability to estimate the time cost of solving the problem undermines my enthusiasm for solving it.

Even worse, perhaps when you get those numbers into Google Spreadsheets, or make the Beagle Board work, it won’t be the tool you wanted anyway. The benefit is unclear too.

The diagram explaining how “tech experts” solve problems is a statement of the misconception that users give up solving a problem because they’re not up to the task. Of course that might sometimes be the case, but on other occasions the worry that they are wasting their time, quite rationally, makes them stop bothering.

Lo and behold, the Beagle Board’s performance is not up to what I wanted it for. It is quite a fun thing, so I wouldn’t quite say it was time wasted, but the intuition that I should just throw my hands up in the air and give up is there for a reason.


WordPress’s slogan is “Code is Poetry”. Having just installed it to run this blog I can say it’s an aesthetically pleasing thing. The features you want are there, modifying it is easy, the dashboard is intuitive.

One thing that I love about WordPress is that it isn’t (all) written in object-oriented code – a decision that embodies Matt Mullenweg’s approach to the platform. If you haven’t come across the term before, object-oriented refers to a style of writing code that has advantages for big projects with many authors, at the price of requiring you to learn a more complex syntax. I would describe it as quite counterintuitive, and to make matters worse it’s often explained badly too.

There is no question that object-oriented code is best practice – it’s what everyone uses. But if you are a beginner you don’t want to be dealing with it; it just makes everything even more confusing. The people who write WordPress would no doubt naturally write everything object-oriented, but they make their lives harder to make the beginner’s life easier, which is why WordPress has such a vibrant community of people doing creative things. The shallow learning curve means anyone can have a mess around. I love this decision because it’s antithetical to being a geek, but it serves their users better.
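To see the trade-off, here’s a generic sketch (in Python, not actual WordPress code) of the same small feature written both ways. The procedural version reads top to bottom; the object-oriented one demands classes, instances and properties before the behaviour is even visible:

```python
# Procedural: a beginner can read this top to bottom.
def get_post_title(post):
    """Tidy up a post's title for display."""
    return post["title"].strip().title()

# Object-oriented: more structure, but more concepts to learn first
# (classes, constructors, self, properties).
class Post:
    def __init__(self, data):
        self._data = data

    @property
    def title(self):
        return self._data["title"].strip().title()

post = {"title": "  hello world  "}
assert get_post_title(post) == "Hello World"
assert Post(post).title == "Hello World"
```

Both do the same job; the question is which one a newcomer can hack on after ten minutes.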

There’s an interesting comparison with the way Adobe Flash has evolved. I taught myself ActionScript 2, which is a horrible language, but more or less legible to someone who hasn’t done a degree in programming. If you spoke to Adobe about the new version, ActionScript 3, they would tell you how much faster and more mature it is, how it has all the features that Java programmers love and how it’s suitable for writing massive web apps. All this is true, but if you show it to someone who doesn’t write much code it just looks like a nightmare. All the new power they’ve added has come at the price of making simple things very hard to do, often turning one line of code into three. I’ve got a lot more to say about what’s wrong with Flash another day.

Flash has turned from a creative tool that you could use without writing any code at all into an engineer’s utopia – which would have been fine if the future for Flash was writing gigantic web apps, but actually it wasn’t, so now Flash is moribund.

It’s all very reminiscent of Brian Eno’s Wired article about using the most advanced mixing desk in the world. Very powerful, and it completely gets in the way of being creative. The lesson is to tame the impulse to engineering purity.

Another instructive example is Vanilla Forums, which is the only forum platform I know of that looks modern. Unfortunately the underlying code is impossibly difficult to understand, and as a consequence there are few plugins and the community is full of confused people. No surprise that Vanilla hasn’t really taken off.

This morning I went to a talk about devices that interface directly between the brain and computers. By way of an introductory remark, Louise Marston noted that “for thousands of years humans have wanted to be able to communicate directly from one brain to another, which of course we can, by writing.” This set the tone for a discussion about technologically extending the functionality of our bodies.

The panel all agreed that it is a mistake to imagine that using (for example) brain implants to communicate with computers represented a sea change in our sense of self.

Anders Sandberg pointed out that we already use contact lenses and clothes to extend our personal capacities. What makes ideas such as brain implants alarming is that they represent a ‘transgression’ of our physical bodies. However, as Anders went on to point out, this transgression “makes good posters for films” but isn’t actually that practical, mostly because of the dangers of infection and medical complications.

Instead he favoured subtle, low level interaction between brain and computer. He gave the beautiful example of his relationship with his laptop – he can subconsciously tell if the hard drive is ok from the noise that it makes.

Other examples include MIT’s “Sixth Sense”, while Professor Kevin Warwick showed a photo of a device that allowed users to get messages from their computer via tiny electric shocks on their tongue. Probably not to everyone’s taste.

Optogenetics is a different approach again. This involves altering your genetic code so that your neurons respond to light, and then shining a laser through your cranium to manipulate your brain’s behaviour.

While some of the technologies under discussion are not even on the lab bench yet, one is already in medical use: Deep Brain Stimulation to treat Parkinson’s. An implant electrically stimulates the thalamus, which reduces the symptoms of the disease. Some patients go from being unable to dress themselves to being able to drive again. Impressive stuff, but it also reifies a moral thought experiment. Some people who use the device experience personality changes, for example becoming compulsive gamblers. Who would be responsible if a patient had a personality change and went on to commit a crime? The device manufacturer, the surgeon or the patient? One man is already suing his doctor because of a gambling spree he claims was brought on by his medication.

Perhaps if we had more debates about these kinds of moral dilemmas we’d have a more nuanced understanding of what’s at stake. It drove me nuts during the riots that _every_ news presenter had to ask anyone who said anything explanatory about the causes of the riots, “Are you making an excuse for them?” Surely we can have a more sophisticated understanding of morals than that discourse seemed to indicate.

The panel itself had some interesting characters. Anders Sandberg comes from the grandly titled Future of Humanity Institute in Oxford, which is also home to a philosopher I particularly like – Nick Bostrom. He’s very entertaining; I seem to remember that he did stand-up for a while. Bostrom is also responsible for a confounding logical conclusion in his simulation argument.

Professor Kevin Warwick has had all manner of things implanted in him – a sure sign of commitment to your work. He told us he has a graph of the electrical activity associated with the onset of Parkinson’s on his living room wall to keep him focused on his work. Presumably he has a very understanding wife too – some of his experiments have included her, for example wiring their brains together to facilitate direct electrical communication. I once wrote a short story about exactly this. Unfortunately it’s not very good; I hope their experience went better than my short story.

Throughout the whole talk there was a tendency to wander between brain-computer interfaces and the subject of artificial intelligence. It seems to me that there isn’t really an obvious link between the two, except that they both endanger our sense of self. In many ways this is the most fascinating aspect of the technology. Most people distinguish between using technology to restore function that’s been damaged by disease or a car accident and the more treacherous moral territory where technology is used to exceed our ‘normal’ abilities.

We discussed how the use of a notebook as a memory aid could be considered a synthetic extension of our natural abilities, and how no one considers this to have moral implications. Indeed, as I write this I’m quite happy to take advantage of a spell checker and my notebook.

It would feel weird if the computer started improving my prose by suggesting eloquent synonyms, or perhaps advised me that the above “not to everyone’s taste” pun is an execrable crime and should be deleted immediately. When computers – through implants, other types of brain-computer interfaces or artificial intelligence – start doing things that we consider uniquely human, like creativity and punning, I think it really will cause us to radically reconceptualise ourselves. In this sense, I wonder if examples of using clothes or glasses to enhance ourselves are misleading, because they don’t strike at core concepts of what it is to be human. Or perhaps we’ll just get over it.

Today I spoke at Bar Camp Media City in Salford, Manchester. Part of the appeal was getting to see the new Media City home of the BBC. You get the tram from the train station – there’s something about getting on trams that makes me feel like I’ve left the real world and slipped into a theatre set where everything is just pretending. I quite like that. It’s because of the monorail at Chessington World of Adventures, I think.

I’m glad the security guards who checked my computer cables, validated my photo ID and escorted me to the 5th floor of the BBC Quay House building didn’t find anything suspicious. They wouldn’t have hesitated to do a cavity search. You’d think the Queen was giving a presentation.

Who called it Media City? Accountancy consultants? They’re probably signing off the plans for Content Hamlet and Return On Investmentshire right now.

Anyway, I was just going to post something quick explaining the talk I gave. Forgive me if this isn’t watertight, and apologies that it’s been written in haste – hopefully it will clarify what I said for anyone who’s interested.

The Internet is not a medium

TV, radio, the novel, the Internet. It sort of makes sense. OK, the Internet is perhaps a broader category than radio, but we often think of the Internet as just another type of media. I’m going to argue that it isn’t, and that thinking it is has negative consequences.

Definition of a medium, No 1 

A medium is a method of transmitting messages where all the messages transmitted by that medium have similar features. Some of those features are conventions – for example, newspaper articles have bylines, lead paragraphs explaining the facts, and are written in a particular style. Other features that distinguish a medium are matters of technological expediency – there are no moving pictures in newspaper articles.

Mediums can nest – a newspaper article, for example, sits inside the medium of the newspaper, which sits inside print.


My contention is that podcasts, YouTube, eBooks and blogs are so dissimilar that there is literally nothing about them that puts them in one media category. Not even in the same broad nest. This might seem like a semantic point, but I think it leads to a number of problems:

  • Often people speak of the Internet as though it is one medium, and their claims need to be made more specific. “People who use the Internet for 4 hours a day have lower attention spans” doesn’t really mean anything – what are they using the internet for? That’s the critical fact, otherwise it’s about as broad as saying “people engaged in activities for 4 hours a day have lower attention spans”.
  • Erroneous assumptions that generic properties of the Internet exist. It’s also common to hear statements such as “the Internet is democratising”. Obviously this is widely debated, and that debate could proceed with more precision if the language were tightened up. What features of the net are democratising?
  • ‘First-TV-programme syndrome’ – When the first TV programmes were broadcast they simply pointed cameras at people doing radio shows. It took time to work out what could be done with the new technology. Clearly we’re on that same curve with the Internet. Being careful about what we’re referring to can only help. (Hat tip to The Guardian’s Martin Belam)


Definition of a medium, No 2

A medium is a method of transmitting messages between people. This feels like an all encompassing definition of media to me, but this definition is still narrower than the Internet.

The reason is that the Internet can be used for transmitting data that is not intended for human consumption. It’s possible to email someone a CAD file and get a 3D prototype back without a human ever having read the data you supplied. With increasingly ubiquitous computing, and more sophisticated ways of shaping matter using data, this is a growing mode of Internet use. In this sense it’s more like an all-purpose manufacturing aid. I think of it as similar to the way steam increased productivity in the industrial revolution (I’m not trying to make a comment on how important it is, though).

Information is hard to charge for, but physical things are not. Projects such as Newspaper Club take advantage of this: they allow you to print your own low-volume newspaper. You’d never pay to publish something online, but paying to use a web app that makes something physical is a reasonable proposition. Thinking like this might help you identify a revenue stream.

I think the fun of BarCamp is that you get to explain a pet idea, and it’s also a lovely arena to have a go at public speaking – I hope my audience weren’t too confused. Thanks to everyone who came along!