I made a bookshelf at the weekend. As with the table I built before (which I wish I’d written up), it involves no screws, nails or glue. I’ve also tried to design it so that the cutting and drilling don’t need to be particularly accurate. The idea with both is to put the complexity into the design rather than the build.

It took 10 standard 8ft lengths of 2″x 2″ and about 3 metres of dowel. Each length of 2″x 2″ had to be cut once and drilled 3 times. Then I just threaded the dowel through to make a grid. To make it stand up, I tied string between the dowels on the back. Materials cost ~£70.

You can concertina it back up if you want to – though I don’t know why you would. Or take it apart entirely into its constituent parts.

Being “on the diagonal” means that you can use tension to make it stand up, unlike a standard “vertical / horizontal” bookshelf. Well, actually, you could use tension on a “vertical / horizontal” bookshelf, but it would be hard to stop it wilting in one direction or another. On the diagonal, it balances itself.

The strings at the back are under quite a lot of tension, and each plays a note when plucked. They are roughly C, A# and D, as determined using a guitar tuner. It might be possible to tune them properly by rebalancing the books.
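
This isn’t as daft as it sounds. For a string of length $L$, tension $T$ and mass per unit length $\mu$, the fundamental frequency is

$$ f = \frac{1}{2L}\sqrt{\frac{T}{\mu}} $$

so pitch goes as the square root of tension; shifting the weight of the books shifts the tension in each string, and with it the note.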

If you look carefully, you’ll be able to see a VHS copy of Hangin’ With Leo. I hope Leo would appreciate the lengths I’ve gone to to store his video.

A hallmark of the bien pensant intelligentsia is to be accepting of all forms of art: the most abstruse Turner Prize winner, the most aesthetically banal Damien Hirst dot painting.

This is in contrast with the proletarian view, which is to hate modern art as an elaborate con played on and by the pretentious. Nothing could be more lowbrow than to criticise art on the basis that it demonstrates no skill or wasn’t a labour of love, summed up with the phrase “could have been done by my 5 year old”.

Perversely, this means that people who think of themselves as having the most sophisticated and political thoughts about art actually have a completely atheoretic view. Meanwhile, ironically, the tabloid, white-van-man position is actually a version of Marx’s labour theory of value.

Are Star Trek-style voice interfaces going to be a big part of how we interact with computers? Thinking about this reminded me of a point made by a product designer who worked on the Sinclair C5. He said – and I can’t find who it was anywhere – that he always felt the product would be a failure.

Hindsight perhaps, but his thinking was not what you might expect. It wasn’t about battery life, speed, danger or price. It was the fact that the process of getting into the vehicle is physically diminishing: it makes you look silly. Adopting the Sinclair C5 position is not an empowering experience.

I think Siri has exactly the same problem – especially if it fails to understand you. It might work if you are on your own, in a car or similar, but in front of other people it’s just not a dignified experience. All the more so by the third repetition.

Of course, if Siri gets good enough to work first time most of the time, it might be something users can get more comfortable with – but given the difficulties of understanding natural language, that might not be any time soon.


Berg’s Little Printer has received something of a mixed reception. In essence it’s a receipt printer that connects to the Internet. It queues up a collection of content, and when you press the print button on the top you get a printout: that day’s Guardian articles, a weather report, your Nike+ report, whatever – you choose from their menu.

It costs £200, which is a lot of money, and a reason for some of the criticism. Another reason is that it’s bad for the environment to print things out unnecessarily. This, I think, is something of a marginal point: the average British person emits 9.2 tonnes of carbon a year, and a couple of rolls of paper aren’t going to make any difference.
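
To put rough numbers on that (the roll weight, roll count and paper footprint below are my assumptions, not Berg’s figures):

```python
# Back-of-envelope check that receipt paper is a rounding error.
annual_footprint_kg = 9200   # ~9.2 tonnes CO2e per Briton per year, as above
roll_weight_kg = 0.1         # assume a receipt roll weighs ~100 g
paper_co2e_per_kg = 1.5      # assume ~1.5 kg CO2e per kg of paper produced
rolls_per_year = 24          # assume the printer eats two rolls a month

paper_emissions = rolls_per_year * roll_weight_kg * paper_co2e_per_kg
print(f"{paper_emissions:.1f} kg CO2e/year")                  # 3.6 kg
print(f"{100 * paper_emissions / annual_footprint_kg:.2f}%")  # 0.04% of one person's total
```

On those numbers the printer accounts for about 0.04% of one person’s footprint, which I’d call marginal.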

Really, the root of the criticism of the Little Printer is that it doesn’t seem that useful. But wait! Berg are design geniuses, so why have they made something that isn’t that useful? First, perhaps we’re all wrong and it is actually very useful. Text messages and Twitter are both examples of extremely limited formats that have been successful exactly because of their narrow scope.

I don’t honestly think that’s what’s going to happen though. I think what happened is that it’s a very early step in exploring the “Internet of Things”, and first steps are always uncertain.

When I refer to the Internet of Things, I mean the idea that the future of computing is going to be less ‘virtual’ and more ‘real’. It’s an appealing idea: who isn’t fed up with staring into a computer screen all day? People choose their careers based on avoiding being etiolated in front of a computer. We’d never put up with it if it wasn’t so damn productive.

To the end of understanding this concept I’ve spent a long time thinking about what ‘virtual’ and ‘real’ mean: my conclusion is that there is no coherent definition. The word ‘virtual’ only acquired the meaning of “having the essence or effect but not the appearance or form” in 1959, with the birth of computers. Two related points capture some aspects of virtuality:

  • Computers are virtual because they require humans to connect them to the real world, to put information in.
  • Computers are virtual because their output is non-physical, text or pictures. Humans have to take this and then cause the effect on the real world.

The IoT movement is about breaching these barriers. It’s worth noting that they only exist in respect of consumer electronics; in the industrial sphere, computers have been controlling production lines and HVAC systems in buildings for decades, with real-world sensors and real-world outputs. IoT happened years ago for industrial computing.

This conception of virtuality has misled people into believing that what we want in our homes is for our computers to connect to the real world. But we already know this isn’t the case. When Bill Gates founded Microsoft he thought he would sell PCs for spreadsheets, desktop publishing and home automation. Home automation, almost a synonym for IoT, has never taken off; it’s the nut that Microsoft didn’t crack. The use case simply isn’t there.

I don’t want my computer to know much about the real world: what items I have in my fridge, what temperature the living room is, whether someone is having a bath. I also don’t want my computer to have much physical-world output: I’m not going to turn the oven on before I get home using my phone, I’m not going to 3D print myself a guitar and I’m not going to print out my day’s reading on a receipt. Domestically, there just isn’t the desire to have a computer interact with the real world.

But, as mentioned previously, we do like the idea of getting away from the computer screen. This brings me to the second idea of virtuality: that a computer is virtual because it does everything. From buying and selling stocks to writing music to playing games, you do it all on the same device, and see the results on the same screen and through the same speakers. How could a device that exhibits this degree of flexibility be anything other than virtual, some remote abstraction of the underlying processes?

In the consumer setting the Internet of Things is about UX; it’s about being able to access the power of a computer without having to do it through my laptop. This is where Little Printer fails: although it offers a physical interface with your computer, it does it with worse UX than a computer screen. Being able to use Photoshop on its own tablet, having the calendar that hangs on my wall connected to Google Calendar, having an interface for my music collection that’s part of my hi-fi – these might be valuable UX wins. 3D printing ticks the box of connecting the virtual with the physical, but it doesn’t solve a UX problem.

It’s interesting to see how audio equipment deals with the interface problem. Below, the blue item is an entirely analogue (tube-based!) mixing desk. It couldn’t be more real: everything about it is totally physical, and open it up and there will be glowing valves inside. The grey one is a digital mixing desk. It’s totally fake – a laptop in a box – but for the sake of the UX, the outside is more or less identical to the analogue version.

Knobs, dials, real buttons and purpose-specific displays are what IoT really offers the consumer.



In the beginning was the word, and the word was money. Actually, bankers believe in the creation myth of the 1986 “Big Bang”, when stock trading was computerised – facilitating high-speed speculation – and the UK government deregulated financial markets in London to tempt money from other financial centres. Back in 1982 the London Docklands Development Corporation had declared the Isle of Dogs an enterprise zone, with special tax breaks, so it made sense for the exploding banking industry, whose main competitive advantage was already low taxation, to expand into the tax-efficient offices of Canary Wharf.

Skip forward a bit. Yesterday we saw Bob Diamond of Barclays (head office: One Churchill Place, Canary Wharf) explaining why his company had rigged the LIBOR interest rate. The banking crisis overall has left everyone in the country about £1,300 a year worse off, in terms of GDP per capita, for each of the last three years. Pricing the crisis is almost impossible; Andrew Haldane of the Bank of England believes about 10% of GDP gone forever is about the right mark. It’s not that the finance sector is unproductive – it’s worse. For some time periods its net effect is destructive.
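
Aggregating that per-head figure gives a sense of the scale (the population figure is my assumption):

```python
# Rough scale of the per-capita GDP loss quoted above.
population = 63_000_000  # assumed UK population, circa 2012
loss_per_head = 1_300    # GBP per person per year, from the figure above
years = 3

total = population * loss_per_head * years
print(f"£{total / 1e9:.0f}bn")  # ~£246bn across three years
```

Somewhere in the region of a quarter of a trillion pounds, on those rough numbers.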

What else could we do with the real estate? I think we should turn the Docklands back into docks. It’s the most honest kind of shopping experience – go down and buy it off the boat. Don’t we live in a world where we value the shopping experience, and where products have to come with a story that makes them personally valuable to us? Check out the history tag – telling you the provenance of your goods on a web page, adding value by adding context.

There’s no romance in a container ship unloading, but that won’t happen anyway because the Docklands can’t cope with container ships. Imagine instead the Cutty Sark offloading Italian cheeses, Ethiopian coffee or Indonesian spices. Toiling cockney lightermen humping barrels as Islington mums pick their way through crews of lascivious Filipino sailors smoking clay pipes. That’s better than the history tag, surely.

Prices would be higher, but perhaps not by that much. Firstly, we should copy the Chinese idea of a Special Export Zone and create a Special Import Zone where no tariffs are imposed. Third-world countries would be allowed into the market, reducing prices. Secondly, rather than sponsoring banks, we could offer subsidies to traders who use sail boats, reducing carbon emissions.

And why not make the process into a holiday while we’re at it? If a gourmet coffee shop isn’t enough for you, why not sail to Jamaica, select your beans, sail them back to London and roast them?

We can send jellied eels and ale back.

Here are some pictures of the docks, in 1810, 1847 and 2012. 1847 looks like the most fun to me. I love the way it’s so legible – a dock for loading, another for unloading, and wharfs named after a product or location. Canary Wharf, obviously, once serviced the Canary Islands (I think the wharf was owned by a fruit company). The utility of the docks is so obvious; now, not even the people who work at Canary Wharf can tell you what they do.

If you could see a little further east you would see the East India Dock, neatly mapping global trade onto a few acres of East London.

Raspberry Pi has been all over the BBC news page, but before it existed I bought a Beagle Board, which is very similar but perhaps has a bit less charisma. When you get the board (it’s just a circuit board with some USB ports, a monitor connection and a memory card slot) you have to install something called Angstrom Linux via the memory card before you can do anything.

All told, I think it probably took me about 12 hours to get the board working. You can only set up the SD card from another Linux machine, so I had to install a Linux virtual machine on my Mac. All sorts of fiddly things got in the way.

The first time I put the memory card – all perfectly set up as far as I knew – into my Beagle Board, it didn’t work. I’m not an embedded Linux expert, and there was no error message – it just didn’t work. Here is a list of things I questioned in my head:

  1. My Beagle Board is broken (after all, it’s got no case, perhaps I damaged it)
  2. I have the wrong kind of Linux virtual machine on my Mac
  3. The memory card or card adapter is broken – I’ve never used either of them before
  4. Something unknown is wrong with the files I’ve written to the SD card
  5. I’m following the wrong set of instructions for my Beagle Board, perhaps there are different versions or something?

In short, absolutely everything involved came into question, plus of course a kind of meta-doubt: what if something I’d never heard of wasn’t right?

Eventually I solved the problem by doing the whole thing again. I’ve still no idea what was wrong.

It’s a salutary experience to be in territory where you’ve no idea what’s going on; as a nerd, it’s easy to forget what that’s like. This is a diagram that has been going round the web for ages:

Obviously, this is an incredibly annoying response – a new user has nothing like this level of clarity. Here is a sketch of the decision tree that arises from a simple real-world (Dad) problem – entering phone numbers into a Google Spreadsheet, which treats them as normal numbers and removes the leading zero (it used to, anyway):

When you are using something for the first time there is an unknown cost/benefit to the tech you are trying to get running. If my Beagle Board was actually broken, then I could spend two weeks on it and get nowhere. My inability to estimate the work involved undermines my enthusiasm to solve the problem: there is simply no way for me to gauge the time cost.

Even worse, perhaps when you do get those numbers into Google Spreadsheets, or make the Beagle Board work, it won’t be the tool you wanted anyway. The benefit is unclear too.
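
As an aside, the leading-zero problem is a type-coercion issue, and it bites anywhere phone numbers get parsed as numbers rather than text. A minimal sketch in Python (the data is made up):

```python
# Phone numbers are labels, not quantities: parse them as strings.
import io
import pandas as pd

csv = io.StringIO("name,phone\nDad,01234567890")

inferred = pd.read_csv(csv)   # column inferred as integer: leading zero lost
print(inferred.phone[0])      # 1234567890

csv.seek(0)
as_text = pd.read_csv(csv, dtype={"phone": str})  # forced to text: zero kept
print(as_text.phone[0])       # 01234567890
```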

The diagram explaining how “tech experts” solve problems is a statement of the misconception that users give up on solving a problem because they’re not up to the task. Of course that might be the case, but on other occasions the worry that they are wasting their time, quite rationally, makes them stop bothering.

Lo and behold, the Beagle Board’s performance is not up to what I wanted it for. It is quite a fun thing, so I wouldn’t quite say it was time wasted, but the intuition that I should just throw my hands up in the air and give up is there for a reason.


Matt Biddulph (one of the Dopplr founders) is to blame. At least I think he’s the one who started the “Silicon Roundabout” name off. What did Larry Page say when someone told him Google were buying space at London’s Silicon Roundabout? Probably, “what’s a roundabout?”. Americans are so cut-and-thrust they don’t have roundabouts; roundabouts imply too much collaboration between drivers. At least Brighton’s Silicon Beach makes sense, in that sand is made of silicon (only there’s no sand on Brighton beach). Anyway, the organisers of Silicon Milkroundabout can’t be blamed for perpetuating a silly name, or for making it sillier by punning it with the idea of the university milk round.

In case you haven’t come across it, Silicon Milkroundabout is a job fair where startups (and mature companies) have stalls. On Saturday product managers went round the stalls and tried to find jobs; on Sunday developers did the same, and in much greater numbers. Having tried to hire developers, I can say that anything that makes finding them easier is a good thing.

I’ve never quite known what my job title ought to be, but it seems like I’m mostly in the product manager camp. So it was nice to meet a bunch of people who do the same thing and chat about our shared experiences. But I’m also a bit of a dev, so I went on Sunday too.

Aside from trying to find some work, it was an opportunity to see what kind of companies are growing. Distilling customer tastes from big data was definitely the standout theme. There were companies that mined data on previous purchases to discover what products you might like, others that looked at your results in personality tests, and others that looked across your social graph. The objective was either to serve better-targeted adverts, or to customise a website to highlight the products that a particular user is most likely to buy.
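
The purchase-history version of this usually boils down to item-to-item similarity over a user-item matrix. A minimal sketch, with an invented toy matrix standing in for real purchase data:

```python
# Toy item-based collaborative filtering: recommend items similar to past purchases.
import numpy as np

# Rows = users, columns = products; 1 means "bought". Invented data.
purchases = np.array([
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 1, 1, 1],
    [0, 0, 1, 1],
])

# Cosine similarity between item columns.
norms = np.linalg.norm(purchases, axis=0)
similarity = (purchases.T @ purchases) / np.outer(norms, norms)

# Score unseen items for user 0 by similarity to what they already bought.
user = purchases[0]
scores = similarity @ user
scores[user == 1] = -np.inf                 # don't recommend what they own
print(f"recommend item {scores.argmax()}")  # item 2: co-bought with items 0 and 1
```

Item 2 wins because it co-occurs with what the user already bought – which is precisely the assumption I want to question.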

I’m slightly inclined to question a fundamental assumption in all this: that I have a fixed propensity to purchase any given product, and that this propensity can be discovered by looking at my behaviour, or my friends’ behaviour.

I have quite a vivid memory of going to a party where a guy with a massive beard and a comforting northern accent was playing music. Everything he put on was different, unknown to me and really good. Everything he put on, I asked what it was, and he told me some interesting things about the song and its context. He worked in a record shop, and if I’d been a customer I think I might have bought about 50% of the stuff he was playing.

Instead I got home and YouTubed most of it. It turned out that, listening off my laptop on my own, I liked much less of it – Yacht was the only band that really stuck with me. Even then, by playing perhaps 6 records he found one that was of genuine interest. He had a good conversion rate. Obviously this is anecdotal, but there are two things that might be interesting:

  1. I felt “active” in the discovery process. I was at the right party speaking to the right guy to find these things out. I had exclusivity: if someone asked how I found out about Yacht, I had a story to tell. Not a great story, but there was a real connection behind it. I would have dismissed exactly the same results if they’d appeared to me as automatically generated recommendations in a UI. In fact, I would probably have said they were stupid, because there was no personal investment in the selection process.
  2. No amount of looking through my previous purchases would have shown artists similar to Yacht. No amount of looking through my social graph would have shown that my friends liked Yacht. That was what made them a great discovery: I could say to my friends, “Hey, I found a cool thing”, and be reasonably sure I had new information.

Often I want website suggestion algorithms to fail, confirming what I like to think of as my unique and distinctive tastes.

Liking Yacht isn’t a deep-seated feature of my brain that could be discovered if you had enough data about me. It was something that happened when a guy I thought was cool, but not too cool, told me about them. Meeting him in that context made it better.

I hate Amazon Books suggestions. Even if they could perfectly predict what I would have bought, as soon as I see the suggestions I change my mind. My reading is a deep part of my individuality; if it can be predicted by an Amazon algorithm then I feel obliged to switch it up a bit. Conversely, I’d be more than happy to have a film recommended to me: film taste isn’t something that’s particularly important to me.

I love the Hype Machine, a site that finds music from blogs. It understands that my musical taste is not going to be formed by a suggestion engine, but by what other identifiable humans have said. Each track is presented with a snippet from the blog it appeared in. I have to search for it – I’m an active participant. I discover the music, rather than it being suggested to me.

Obviously, suggestion systems do work – enough for Netflix to put up a million-dollar prize for anyone who could improve their algorithm by 10%. Small increases in conversion rate are worth a lot of money. I just wonder if these systems would work better if they took deeper account of the social factor than crawling my Facebook friends – or whether you can generate the “meeting a guy at a party” moment on a website at all.

Turns out I’m not that into Yacht anymore. I don’t want to be identified with the kind of people who “like” them on YouTube.


Last week I was out on the street, stopping people and asking them to look at a printed mock-up of a website and tell me their views. It taught me three things.

1) I should spend more time away from my laptop

To avoid carrying round a heavy laptop while accosting people on the street, for a day and a half all I had with me were a notebook and pen. A friend joked that I could have a look at their netbook to stop me clucking. Actually though, I was pleased to kick the habit. I drew tables, wrote, unleashed rounds of bullet points – all the stuff you can do on a computer, you can do on paper too! The cognitive process of writing on a notepad is totally different from a computer – much more lucid. (I know, I know, everyone else already does this, but I’ve never made the effort to not use my computer.) Also, there is no distraction, and no temptation to start coding something until you have thought about it properly. Perhaps it isn’t enough to use a notebook; you actually have to be somewhere you cannot get to a computer.

2) People are getting social network fatigue

I spoke to 14 students about the website I was evaluating (they are by far the easiest people to talk to). Of those, 2 said they were connected to no social networks – they had decided to delete their accounts. Another 3 said that while they were on social networks, they almost never used them. If predicting the future is about looking at what students do (which seems plausible), then the future is one where Facebook is an uncool place to be. The more carefully dressed the person I spoke to was, the less likely they were to be on Facebook.

Some time ago, in a focus group, we asked people “would you use our app?”, and then “do you use Facebook?”. People who said yes to both, we asked (obviously, I’m paraphrasing) “would you like it to be a Facebook app?”. They all said no, for two reasons. One was that they were simply fed up with spending time on Facebook, and didn’t want _another_ reason to be there. The other was that the app in question was for use by semi-pro consumers, and they felt that Facebook simply wasn’t the place for it. Facebook is not for serious stuff. This points up something interesting: Facebook provides a digital identity, not your unique online identity. Most people like to present themselves in different ways depending on what they are doing. Ergo, Facebook is never going to be the single sign-on that people have sometimes imagined.

Given the way Pinterest recently spread faster than bird flu, I think it’s fair to say the concept of social networking is here to stay. It makes sense to me that people dislike Facebook though. I know I do.

It’s both a testament to the success of Facebook and a portent of a less successful future that people can define themselves by not having an account. As I’ve said, Facebook is not my idea of fun, but I think this goes beyond a personal prejudice.

3) Older people are ruder, generally

Trying to stop someone below the age of 30 and ask them questions is easy. They might not answer them, but they will stop and be civil. By the time you get to people over 50, you’ll be lucky if they tell you to bugger off.


WordPress’s slogan is “Code is Poetry”. Having just installed it to run this blog I can say it’s an aesthetically pleasing thing. The features you want are there, modifying it is easy, the dashboard is intuitive.

One thing that I love about WordPress is that it isn’t (all) written in object-oriented code – a decision that embodies Matt Mullenweg’s approach to the platform. If you haven’t come across the term before, object-oriented refers to a style of writing code that has advantages for big projects with many authors, at the price of requiring you to learn a more complex syntax. I would describe it as quite counterintuitive, and to make matters worse it’s often explained badly too.

There is no question that object-oriented code is best practice – it’s what everyone uses. But if you are a beginner you don’t want to be dealing with it; it just makes everything even more confusing. The people who write WordPress would no doubt naturally write everything object-oriented, but they make their lives harder to make the beginner’s life easier, which is why WordPress has such a vibrant community of people doing creative things. The shallow learning curve means anyone can have a mess around. I love this decision because it’s antithetical to being a geek, but it serves their users better.
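
WordPress is PHP, but the trade-off reads the same in any language. Here’s a sketch of the two styles in Python (the post-lookup example is invented for illustration):

```python
# Procedural style: one obvious function a beginner can read and copy.
def get_post_title(posts, post_id):
    return posts[post_id]["title"]

# Object-oriented style: the same lookup behind class ceremony.
class PostRepository:
    def __init__(self, posts):
        self._posts = posts

    def find(self, post_id):
        return Post(self._posts[post_id])

class Post:
    def __init__(self, data):
        self._data = data

    @property
    def title(self):
        return self._data["title"]

posts = {1: {"title": "Code is Poetry"}}
print(get_post_title(posts, 1))             # procedural: one line to call
print(PostRepository(posts).find(1).title)  # OO: constructors and properties first
```

The second version scales better across many authors; the first is the one you can change five minutes after opening the file.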

There’s an interesting comparison with the way Adobe Flash has evolved. I taught myself ActionScript 2, which is a horrible language, but more-or-less legible to someone who hasn’t done a degree in programming. If you spoke to Adobe about the new version, ActionScript 3, they would tell you how much faster and more mature it is; how it has all the features that Java programmers love and how it’s suitable for writing massive web apps. All this is true but if you show it to someone who doesn’t write much code it just looks like a nightmare. All the new power they’ve added has come at the price of making simple things very hard to do, often turning one line of code into three. I’ve got a lot more to say about what’s wrong with Flash another day.

Flash has turned from a creative tool that you could use without writing any code at all into an engineer’s utopia – which would have been fine if the future for Flash was writing gigantic web apps, but actually it wasn’t, so now Flash is moribund.

It’s all very reminiscent of Brian Eno’s Wired article about using the most advanced mixing desk in the world. Very powerful, and it completely gets in the way of being creative. The lesson is to tame the impulse to engineering purity.

Another instructive example is Vanilla Forums, which is the only forum platform I know of that looks modern. Unfortunately the underlying code is impossibly difficult to understand, and as a consequence there are few plugins and the community is full of confused people. No surprise that Vanilla hasn’t really taken off.

This morning I went to a talk about devices that interface directly between the brain and computers. By way of an introductory remark, Louise Marston noted that “for thousands of years humans have wanted to be able to communicate directly from one brain to another, which of course we can, by writing.” This set the tone for a discussion about technologically extending the functionality of our bodies.

The panel all agreed that it is a mistake to imagine that using (for example) brain implants to communicate with computers represents a sea change in our sense of self.

Anders Sandberg pointed out that we already use contact lenses and clothes to extend our personal capacities. What makes ideas such as brain implants alarming is that they represent a ‘transgression’ of our physical bodies. However, as Anders went on to point out, this transgression “makes good posters for films” but isn’t actually that practical, mostly because of the dangers of infection and medical complications.

Instead he favoured subtle, low level interaction between brain and computer. He gave the beautiful example of his relationship with his laptop – he can subconsciously tell if the hard drive is ok from the noise that it makes.

Other examples include MIT’s “Sixth Sense”, while Professor Kevin Warwick showed a photo of a device that allowed users to get messages from their computer via tiny electric shocks on the tongue. Probably not to everyone’s taste.

Optogenetics is a different approach again. It involves altering your genetic code so that your neurons respond to light, and then shining a laser through your cranium to manipulate your brain’s behaviour.

While some of the technologies under discussion are not even on the lab bench yet, one is already in medical use: Deep Brain Stimulation to treat Parkinson’s. An implant electrically stimulates the thalamus, which reduces the symptoms of the disease. Some patients go from being unable to dress themselves to being able to drive again. Impressive stuff, but it also reifies a moral thought experiment. Some people who use the device experience personality changes, for example becoming compulsive gamblers. Who would be responsible if a patient had a personality change and went on to commit a crime? The device manufacturer, the surgeon or the patient? One man is already suing his doctor over a gambling spree he claims was brought on by his medication.

Perhaps if we had more debates about these kinds of moral dilemmas we’d have a more nuanced understanding of what’s at stake. It drove me nuts during the riots that _every_ news presenter had to ask anyone that said anything explanatory about the cause of the riots “Are you making an excuse for them?”. Surely we can have a more sophisticated understanding of morals than that discourse seemed to indicate.

The panel itself had some interesting characters. Anders Sandberg comes from the grandly titled Future of Humanity Institute in Oxford, which is also home to a philosopher I particularly like – Nick Bostrom. He’s very entertaining; I seem to remember he did stand-up for a while. Bostrom is also responsible for a confounding logical conclusion in his simulation argument.

Professor Kevin Warwick has had all manner of things implanted in him – a sure sign of commitment to your work. He told us he has a graph of the electrical activity associated with the onset of Parkinson’s on his living room wall to keep him focused on his work. Presumably he has a very understanding wife too – some of his experiments have included her, for example wiring their brains together to facilitate direct electrical communication. I once wrote a short story about exactly this. Unfortunately it’s not very good; I hope their experience went better than my short story.

Throughout the whole talk there was a tendency to wander between brain-computer interfaces and the subject of artificial intelligence. It seems to me that there isn’t really an obvious link between the two, except that they both endanger our sense of self. In many ways this is the most fascinating aspect of the technology. Most people distinguish between using technology to restore function that’s been damaged by disease or a car accident and the more treacherous moral territory where technology is used to exceed our ‘normal’ abilities.

We discussed how the use of a notebook as a memory aid could be considered a synthetic extension of our natural abilities, and that no one considers this to have moral implications. And indeed, as I write this I’m quite happy to take advantage of a spell checker and my notebook.

It would feel weird if the computer started improving my prose by suggesting eloquent synonyms, or perhaps advised me that the “not to everyone’s taste” pun above is an execrable crime and should be deleted immediately. When computers – through implants, other types of brain-computer interfaces or artificial intelligence – start doing things that we consider uniquely human, like creativity and punning, I think it really will cause us to radically reconceptualise ourselves. In this sense, I wonder if the examples of using clothes or glasses to enhance ourselves are misleading, because they don’t strike at core concepts of what it is to be human. Or perhaps we’ll just get over it.