What do we really think about music? I’ve tried to find some data about how people think about musical genres using the Last FM API.

Ishkur’s strangely compelling guide to electronic music is a map of the relationships between various kinds of music, and a perfect example of the incredibly complex genre structures that music builds up around itself. He lists eighteen different sub-genres of Detroit techno including gloomcore, which I suspect isn’t for me. I wanted to try and create a similar musical map using data from Last FM.

I’ve written a bit before about the way in which the web might change the development of genres – what I didn’t ask was how important the concept of genre would continue to be. It’s difficult to listen to music in a shop, so having a really good system of classification means you have to listen to fewer tracks before you find something you like. Also, in a shop you have to put the CD in a section, so it can only have one genre attributed to it.

But on the web it’s easy to listen to lots of 30-second samples of music, so arguably you don’t need to be so assiduous about categorisation. In addition, the fact that music doesn’t have to be physically located in any particular section of a shop also undermines the old system – one track can have two genres (or tags, in internet parlance).

Despite this, online music shops like Beatport still separate music into finely differentiated categories, much as you would find in a bricks-and-mortar record shop. But do these categories reflect the way people actually think about their musical tastes?

Interestingly, two of the most commonly used tags on Last FM are “seen live” and “female vocalist” (yes, women have been defined as “the other” again), which aren’t traditional genres at all. “Seen live” is obviously personal, and “female vocalist” isn’t a part of the normal lexicon. Looking through people’s tags, other anomalies crop up – “music that makes me cry” and tags based on where a person intends to listen to the music are examples.

The more obscure genres from Ishkur’s guide are lost in the noise of random tags that people have made for themselves. I would suggest gloomcore isn’t used in the functional way that ‘metal’ or ‘pop’ are. It’s a classification that people do not naturally use to denote a particular kind of music on Last FM – perhaps it’s a useful term for writing about music, but nobody thinks they’d like to stick on some gloomcore while they make breakfast.

I searched the Last FM database of top tags – the five tags each user has used most – and assumed that there was a link between any two genres that one person liked. For example, if you have ‘gothic’ and ‘industrial’ as top tags then I marked those two tags as linked. The diagrams below show the links that occurred among 1000 random Last FM users; a link between two tags only shows up if it occurred more than about 15 times.
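For the curious, the counting step can be sketched like this – a toy reconstruction rather than my actual script, with invented tag lists and the ~15 threshold from above:

```python
from collections import Counter
from itertools import combinations

def tag_links(users_top_tags, threshold=15):
    """Count how often each pair of tags appears together in one
    user's top tags; keep only pairs seen more than `threshold` times."""
    counts = Counter()
    for tags in users_top_tags:
        # sort so ('gothic', 'industrial') and ('industrial', 'gothic')
        # are counted as the same link
        for pair in combinations(sorted(set(tags)), 2):
            counts[pair] += 1
    return {pair: n for pair, n in counts.items() if n > threshold}

# toy data: each inner list is one user's top tags
users = [["gothic", "industrial", "metal"]] * 20 + [["pop", "indie"]] * 10
links = tag_links(users)
# gothic–industrial is kept (20 co-occurrences);
# pop–indie (10) falls below the threshold
```

The surviving pairs are exactly the edges drawn in the diagrams.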

Unsurprisingly, indie and rock are things that people often note they have seen live. By contrast, though people might talk of having heard electronic music ‘out’ (ie. not at home), they don’t care enough about it to define a tag around it.

I was surprised to see tags such as ‘British’ and ‘German’, so I broke the above diagram down by country. Last FM has significant UK, German and Japanese user bases. Here is the result for Germany:


I think it’s very telling that while most of the connections are as you might expect, ‘black metal’ and ‘death metal’ are not connected to the main graph. I’m not particularly aware of these genres, but it certainly seems plausible that they are very insular.

Here is the Japanese version:


Yep, plenty of references to Japan – and Japan is the only nation where jazz features. Here is the British version:


In Japan and Germany a defining feature of music is that it is Japanese or German. In Britain we don’t care. I suspect that’s because our musical tastes aren’t defined against a background of lyrics in a foreign language, as perhaps they are in the other two countries.

Last FM may well have a particular ‘subculture’ of user in each country, so it’s hard to draw any firm conclusions because of this potential skew. As with so many of the insights you can gain from data gleaned from the web, at the moment it’s only possible to tell that one day this kind of tool could be very revealing about our psychology – what it will reveal isn’t very clear yet.

None the less, it will be interesting to see how these diagrams evolve over time – perhaps they will gradually diverge from the old names we’ve used to identify music, or perhaps there will be less and less consensus about what genres are called.

Incidentally, this would have been a post about data from LinkedIn, looking at the way your profession affects the kind of friendship group you have, but the LinkedIn API is so restricted that I gave up.

The data is available below. It’s in the .dot format that creates these not very sexy spider diagrams.


I can provide a better version of this data if anyone wants it – send me a message.

Over the course of the General Election I recorded 1000 random tweets every hour and sent them to tweetsentiments.com for sentiment analysis.

Tweetsentiment have a service which gives one of three values to each tweet. ‘0’ means a negative sentiment (unhappy tweet), ‘2’ a neutral or undetermined sentiment and ‘4’ positive (happy tweet). Similar technology is used to detect levels of customer satisfaction at call centres by monitoring phone calls.
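Turning those per-tweet scores into an hourly mood figure is then just an average. A minimal sketch – the hour labels and scores here are invented, not my real data:

```python
def hourly_mood(scored_tweets):
    """Average the 0/2/4 sentiment scores for each hour's sample.
    `scored_tweets` maps an hour label to that hour's list of scores."""
    return {hour: sum(scores) / len(scores)
            for hour, scores in scored_tweets.items()}

# invented sample: four scored tweets per hour
sample = {"thu 22:00": [0, 2, 2, 4], "sun 13:00": [2, 4, 4, 2]}
moods = hourly_mood(sample)
# {"thu 22:00": 2.0, "sun 13:00": 3.0}
```

A value above 2 means that hour’s sample leant happy; below 2, unhappy.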

Obviously it’s difficult for a machine to detect the emotional meaning of a sentence, especially with the strange conventions used on Twitter. Despite this Tweetsentiment seems to be fairly reliable – tweets which express happy emotions tend to be rated as such, and vice versa. More accurately, if Tweetsentiment does make a classification it tends to get it right. Sometimes an obviously positive or negative tweet gets a ‘2’, but that shouldn’t affect things here.

My hypothesis was that the Twitterati would be less happy if there was a Conservative victory. Of course I can’t prove that Twitter has a bias to the left, but I would presume that young, techy, early adopters are more likely to be left leaning. The reaction to the Jan Moir Stephen Gately article perhaps supports this.

David Cameron famously noted that Twitter is for twats, I wondered if Twitter would reciprocate…


The graph indicates that usually Twitter is just slightly positive, with a mood value of 2.1 on average. As predicted, as a Conservative victory becomes apparent on Thursday evening there is a decline in mood which lasts until Saturday lunchtime. Then everyone cheers up, presumably goes down the pub, and is pretty chirpy for Sunday lunch. Sentiment only returns to average at the beginning of work on Monday morning.

In short, it does look like the election result was a disappointment to Twitter.

Obviously we need to know what normal Twitter behaviour looks like over the course of a week before we can draw very much from this graph, and that baseline is something I’m going to try to produce soon.

It does look as though the negative reaction to a once-a-decade change in government is about the same magnitude as the positive mood elicited by the prospect of Sunday lunch – which I think is fairly consistent with the vicissitudes of Twitter as I experienced them.

I used Twitter’s API to gather the data, and frankly, it’s not great, particularly if you want to get tweets from the past. I was surprised to discover that any tweets more than about 24 hours old simply disappear from the search function on Twitter.com – in effect they only exist in public for a day. For this reason the hourly sample size wasn’t always exactly 1000, but it was on average.

I’ll post again when I have some more data on normal behaviour. I’m also curious to find out if different countries have different average happiness levels on Twitter, but I think finding a Tweetsentiment-style service for other languages might prove difficult.

My last post used Wikipedia’s list of dates of births and deaths to build a timeline showing the lifespans of people who have pages on Wikipedia. There are a lot of people with Wikipedia pages, so I limited it to only include dead people.

That still leaves you with a lot of people to fit on one timeline, so I wanted to prioritise ‘important’ or ‘interesting’ people at the top and show only the most ‘important’ 1000. Some have been confused by my method for doing this, and others have questioned its validity, so this post will address both issues. I’m also going to suggest an improvement. It turns out that whatever I do Michael Jackson is more important than Jesus. I’m just the messenger.

Explaining the method
To get a measure of ‘importance’ I used work done by Stephen Dolan. He has developed a system for ranking Wikipedia pages which is very similar to the PageRank system that Google uses to prioritise its search results.

Wikipedia’s pages link to one another, and Stephen Dolan’s algorithm gives a measure of how well linked a particular page is to all the other Wikipedia pages. If we want to know how well linked in the page about Charles Darwin is, the algorithm takes every other page in Wikipedia and works out how many links you would have to follow to get from that page to the Charles Darwin page using the shortest route.

For example, to get from Aldous Huxley to Charles Darwin takes two links: one from Aldous to Thomas Henry Huxley (Aldous’s grandfather) and then another to Darwin (T. H. Huxley famously defended evolution as a theory). Dolan’s method works out the number of clicks to the Charles Darwin page from every page in Wikipedia, and then takes the average – it takes an average of 3.88 clicks to get to Charles Darwin.
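As a sketch of the idea (not Dolan’s actual code), the average click count can be found with a breadth-first search over the reversed link graph. The three-page toy graph below is the Huxley–Darwin example:

```python
from collections import deque

def avg_clicks_to(target, links):
    """Average shortest-path length (in clicks) from every other page
    to `target`, given a link graph `links` (page -> pages it links to).
    Pages that can't reach the target are ignored."""
    # build the reversed graph, then walk outward from the target
    reverse = {p: set() for p in links}
    for page, outs in links.items():
        for out in outs:
            reverse.setdefault(out, set()).add(page)
    dist = {target: 0}
    queue = deque([target])
    while queue:
        page = queue.popleft()
        for prev in reverse.get(page, ()):
            if prev not in dist:
                dist[prev] = dist[page] + 1
                queue.append(prev)
    others = [d for p, d in dist.items() if p != target]
    return sum(others) / len(others)

# toy graph: Aldous links to T. H. Huxley, who links to Darwin
links = {
    "Aldous Huxley": ["Thomas Henry Huxley"],
    "Thomas Henry Huxley": ["Charles Darwin"],
    "Charles Darwin": [],
}
score = avg_clicks_to("Charles Darwin", links)  # (1 + 2) / 2 = 1.5
```

On the real dataset the same averaging over millions of pages yields figures like Darwin’s 3.88.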

Equivalently, Google shows pages that have many links pointing to them nearer the top in its search results.

This method works OK, but it could be better. For example, Mircea Eliade ranks as the fifth most important dead person on Wikipedia, taking on average 3.78 clicks to find. But Mircea Eliade is a Romanian historian of religion – hardly a household name. We could take this as a positive statement: perhaps Mircea Eliade is a figure of hitherto unrecognised importance and influence. On the other hand it seems impossible that he can be more ‘important’ than Darwin.

Testing the validity of the Dolan Index
I decided it would be interesting to compare what I’m going to call the Dolan index (the average number of clicks as described above) with two other metrics that could be construed as measuring the importance of a person. Before we do that, here is a graph of what the Dolan index of dead people on Wikipedia looks like.

The bottom axis shows the rank order of pages, from Pope John Paul II, who has the 275th highest Dolan index on Wikipedia, to Zi Pitcher, who comes 430900th. It makes a very tidy log plot.

As I mentioned previously, the Dolan index is very similar to a Google PageRank, so let’s compare them.




The x axis is the same as in the first graph: Wikipedia pages from highest to lowest Dolan index. A well linked page has a low Dolan index but a high PageRank, so I used the reciprocal of PageRank for the y axis. I’ve also added a log best-fit line.

The first and second graphs have a similar shape, which seems to indicate a reasonable correlation between Dolan index and PageRank.

PageRank is only given in integer values between 1 and 10 (realistically, all Wikipedia pages have a PageRank between 3 and 7), so I’ve smoothed the curve using a moving average.
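The smoothing is nothing fancy – a simple moving average, something like this (the window size here is illustrative, not necessarily the one used for the graph):

```python
def moving_average(values, window=3):
    """Replace each point with the mean of a sliding window,
    trimming the ends where a full window doesn't fit."""
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]

smoothed = moving_average([1, 2, 3, 4, 5])  # -> [2.0, 3.0, 4.0]
```

Averaging over a window turns the integer-valued PageRank steps into a smooth curve.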

This seems to lend some weight to the Dolan Index as a measure.

I’ve also made a comparison between the Dolan index and the number of results returned when searching for a person’s name (without quotes) in Google search. It should be noted that this number seems to be quite unstable – a search will give a slightly different number of results from one day to the next. I’ve used a log scale because of the range of results.



There is barely any correlation here, except at very low values of Dolan index. Despite this, the number of Google results can still be useful, as becomes apparent when trying to improve my measure of ‘importance’.

A suggestion for improvement
The problem with all the measures seems to be the noise inherent in the system. While Dolan index, PageRank and number of Google results all provide a rough guide to ‘importance’ or ‘interest’ overall, each of them frequently gives unlikely results. How about using a mixture of all three? Here is a table comparing the top 20 dead people by Dolan index and by a hybrid measure of importance constructed from all three metrics.

Dolan index | Hybrid measure
Pope John Paul II | Michael Jackson
Michael Jackson | Jesus
John F. Kennedy | Ronald Reagan
Gerald Ford | Jimi Hendrix
Mircea Eliade | Abraham Lincoln
Peter Jennings | Adolf Hitler
John Lennon | Albert Einstein
Adolf Hitler | William Shakespeare
Harry S. Truman | Charles Darwin
Ronald Reagan | Oscar Wilde
J. R. R. Tolkien | Woodrow Wilson
James Brown | Isaac Newton
Anthony Burgess | Elvis Presley
Elvis Presley | Walt Disney
Christopher Reeve | John Lennon
Susan Oliver | George Washington
Franklin D. Roosevelt | John F. Kennedy
Winston Churchill | Timur
Ernest Hemingway | Martin Luther
Theodore Roosevelt | Voltaire


To get the hybrid measure I just messed around until things felt right. Here is the formula I came up with:


Hybrid measure = (1 / Dolan index) × 20 + (PageRank × 0.6) + (log(number of results) × 0.6)

For some reason additive formulas give better results than multiplicative ones.
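In code the hybrid measure looks like this (I’ve written log as base 10 here, which the formula above doesn’t actually specify):

```python
import math

def hybrid_measure(dolan_index, pagerank, n_results):
    """The hand-tuned additive mix of the three metrics:
    reciprocal Dolan index, PageRank, and log of Google result count."""
    return ((1 / dolan_index) * 20
            + pagerank * 0.6
            + math.log10(n_results) * 0.6)

# e.g. a page 4.0 clicks away on average, PageRank 5,
# and a thousand Google results:
example = hybrid_measure(4.0, 5, 1000)  # roughly 9.8
```

Since only the ordering of people matters, the base of the log just rescales one term’s weight.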

Using the hybrid measure seems to have removed the surprises (like Peter Jennings) although you might still argue that Oscar Wilde or Jimi Hendrix are much too high. Michael Jackson comes out as bigger than Jesus, but then he is an exceptionally famous person, and he died much more recently than Jesus. Timur (AKA Tamerlane) is a bit of a curiosity.

I considered ignoring the number of Google results because it’s such a noisy dataset; however, it’s the only reason that Jesus appears in this list at all – he gets a very low ranking (4.01) from the Dolan index. Any formula which brings Jesus out on top (which I think you could make a reasonable case for his deserving, at least over Michael Jackson!) gives all kinds of strange results elsewhere.

I am a bit suspicious of the “number of Google results” metric. In addition to its volatility, it fails to take into account that occurrences of words such as “Newtonian” should probably count towards Newton’s ranking, while people called David Mitchell benefit artificially from the fact that at least two famous people share the name.

Any further investigation would have to consider what made a person ‘important’ – would it simply be how prominent they are in the minds of people (Michael Jackson and Jimi Hendrix) or would it reflect how influential they were (Charles Darwin for example, or the notably absent Karl Marx)?

I love the idea that the web reflects the collective consciousness, a kind of super-brain aggregation of human knowledge.

Just this week the idea of reflecting the whole of reality in one enormous computer system was promoted by Dirk Helbing, although my formula doesn’t rate him as very important, so I’m unsure as to how seriously to take this.

DBpedia mashup: the most important dead people according to Wikipedia

The timeline below shows the names of dead people and their lifespans, as retrieved from Wikipedia. They are arranged so that people nearer the top are the best linked in on Wikipedia, as measured by the average number of clicks it would take to get from any Wikipedia page to the page of the person in question.

I had imagined that Wikipedia ‘linked-in-ness’ would serve as a proxy for celebrity, which it kind of does – but only in a loose way.

Values range from 3.72 (at the top) to 4.04 (at the bottom). This means that if you were to navigate from a large number of Wikipedia pages, using only internal Wikipedia links, it would take you, on average, 3.72 clicks to get to Pope John Paul II. This data set was made by Stephen Dolan, who explains the concept better than me. Basically, it’s the six degrees of Kevin Bacon on Wikipedia.

I looped through the data set and queried DBpedia to see if the Wikipedia article was about a person, and if so retrieved their dates of birth and death.
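The DBpedia lookup can be done with a SPARQL query along these lines – a sketch using the current DBpedia ontology names (dbo:Person, dbo:birthDate, dbo:deathDate), which may not match exactly what I used at the time:

```python
def person_dates_query(resource):
    """Build a SPARQL query asking DBpedia whether `resource` is a
    person and, if so, for their birth and death dates."""
    return f"""
    PREFIX dbo: <http://dbpedia.org/ontology/>
    SELECT ?birth ?death WHERE {{
      <http://dbpedia.org/resource/{resource}> a dbo:Person ;
          dbo:birthDate ?birth ;
          dbo:deathDate ?death .
    }}
    """

query = person_dates_query("Charles_Darwin")
# send this to the public endpoint at http://dbpedia.org/sparql;
# no rows back means the article isn't about a (dead) person
```

Articles that return no rows are simply dropped from the timeline.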

The timeline does show a certain amnesia on the part of Wikipedia: Shakespeare and Newton are absent, while Romanian historian of religion Mircea Eliade comes 5th. If I had included people who are alive, tennis players would have dominated the list (I don’t know why) – Billie Jean King is the second best-linked article on Wikipedia, one ahead of the USA (the UK is number one!).

Any mistakes (I have seen some) are due to the sketchiness of the DBpedia data, though I can’t rule out having made some mistakes myself…

The results are limited to the top 1000, and they only go back to 1650. Almost no names prior to 1650 appeared, the exceptions being Jesus (who was still miles down) and Guy Fawkes.

In case you were wondering ‘Who’s Saul Bellow below?’, the answer is Rudolph Hess.


The digital revolution will not be televised – to the contrary, is it possible that no artist or medium can be said to have adequately addressed the information age?

Zizek once summarised Marx as having said that the invention of the steam engine caused more social change than any revolution ever would. Marx himself doesn’t seem to have provided a useful soundbite to this effect (at least not one that I can find through Google), so I’m afraid it will have to remain second hand. It’s a powerful sentiment, whoever originated it – which philosopher’s views cannot be analysed as the product of the social and technological novelties of his day?

It’s easy to see that the technology that is most salient in our age is the internet, which has been made possible by consumer electronics. Have our philosophers stepped forward to engage with the latest technological crop?

Moving on from philosophers, what of our artists? Will Gompertz recently posted to share an apparently widely held view that no piece of art has yet spoken eloquently from or about the internet. He cites Turner prize-winning Jeremy Deller describing “a post-warholian” era, presumably indicating that Warhol was the last person to adequately reference technological change in the guise of mass production. I wonder if the Saatchi-fuelled inflorescence also captured something of the marketing-led landscape we currently live in, but whatever the last sufficient reflection on cultural change afforded by art was, I think we may be on safe ground in stating that the first widely accepted visual aperçu of the digital era is still to come.

Which is something of a surprise when you consider, for example, how engaged the news agenda is with technology: I was amazed to see that Google’s Wave technology (still barely incipient) got substantial coverage on BBC news.

With my employment centering on the web, and my pretensions at cultural engagement, this weekend I visited the Kinetica Art Fair. Kinetica is a museum which aims to ‘encourage convergence of art and technology’. The fair certainly captured one aspect of contemporary mood – a very reasonably priced bar was a welcome response to our collective financial deficit.

Standout pieces included a cleverly designed mechanical system for tracing the contours of a plaster bust onto a piece of paper, and a strangely terrifying triangular mirror with mechanically operated metal rods. The latter looked like a Buck Rogers-inspired torture device designed to inflict pain by a method so awful that you’d have to see it in operation before its evil would be comprehensible. The other works included a piece which gave punters the opportunity to simulate pan-global urination (sadly not with real urine) by means of a jet of water and a globe in a urinal. I would defy anyone not to be entertained by spending time wandering round the fair.

However, Will Gompertz’s challenge was not answered at Kinetica – the essence of technological modernity was not distilled into any of the work, not even slightly.

I’ve been mulling over various possible reasons for this failure, and quite a few suggestions spring to mind. Do computers naturally alienate artists? Is information technology too visually banal to be characterised succinctly?

I’d like to suggest that it’s the transitory nature of our electronic lives that makes them so hard to pin down. Mobile phones, websites, computers and operating systems from a decade ago all look ludicrously dated – it’s almost impossible to capture the platonic form of these items because they have so little essential similarity. Moreover, their form is almost an accident, not connected with their more profound meaning in any way. The boats of the mercantile age and the smoke stacks of the industrial age all seem to denote something broader – how can communism be separated from its tractors? Yet the form factor of my computer is trivial. Form and functional significance are of necessity separated in digital goods; their flexibility is the source of their power.

In some way I think films give us tacit acknowledgment of the contingent nature of the digital environment in which we spend much of our lives: no protagonist is ever seen using Windows on their computer – in films, computer interfaces are always generic. When we see a Mac in a film it is impossible to see it as anything other than product placement.

So, the Kinetica Art Fair may not have been able to help society understand its relationship with technology, but actually, despite their rhetoric, I think it was a little unfair to expect it to. Really the fair was about works facilitated by technology, rather than about it.

But, in case you think I’ve picked a straw man in Kinetica, let me say that the V&A’s ongoing exhibition Decode really does no better, though its failures and successes are another topic.

Whatever we end up using the web for, don’t the world’s citizens lead more equal lives if they are all mediated by the same technology?

The queen tweets. She’s commissioned a special jewel-embossed netbook and a bespoke Twitter client skinned with ermine and sable.

I made that up. For starters, she hasn’t actually started tweeting – there is a generic royal feed which announces the various visits and condescensions of Britain’s feudal anachronism, but nothing from miss fancy hat herself. Perhaps royal protocol means she can only use it if her followers can find a way of curtsying in 140 characters?

The feed does give an insight into how boring the Royals’ lives might actually be – opening wards and meeting factory workers – when they aren’t having a bloody good time shooting and riding. However, as a PR initiative it breaks the rule that says that for a Twitter account to be of any interest the tweets must emanate from the relevant horse’s mouth, if you’ll forgive the chimerical metaphor. If you can’t have the lady herself, I don’t really think there is much point in bothering. But that’s not the point I’m here to make.

I’m more interested in the fact that, should any of us choose to, Bill Gates, Sir Ranulph Twisleton-Wykeham-Fiennes, 3rd Baronet OBE, Osama Bin Laden and I will have exactly the same experience when we use Twitter (assuming it’s available in the relevant language).

I suppose Bin Laden might have quite a slow connection in Tora Bora, and probably Bill Gates has something faster than Tiscali’s 2meg package. Details aside, everyone is doing the same thing.

Actually, not only will we be using the same website, we’ll be using very similar devices. Bill probably doesn’t have a Mac like me (he may be the richest man in the world, but he can still envy me one thing), yet all our computers will be very similar.

The reason for this is that both websites and computer technology have very high development costs, and low marginal costs per user. Even the Queen can’t afford to develop an iPhone, but everyone can afford to buy one.

If a lot of your life is mediated by technology then this is going to be very important to you. While there is healthy debate about the web’s democratisation of publishing, I think we might reasonably add to the web’s egalitarian reputation its ability to give people of disparate incomes identical online experiences.

That doesn’t sound like a blow against inequality and tyranny in all its forms – but none the less I think it’s important. Even people using OLPC computers [low-priced laptops aimed at the third world] have basically the same experience of the internet as you or I. That’s to say Uruguayan children will quite possibly spend a good part of their day doing exactly the same things as New York’s office workers and Korea’s pensioners. When you consider that until very recently there were probably no major similarities in these disparate lives, I think it does constitute a significant development.

Of course, for all I know a line of luxury websites will come along and exclude some strata of the social pile. In a way it’s already happened – we’ve seen the thousand-dollar iPhone app – but it’s hard to see this one-off as part of a pattern. This is not to say that the ‘freemium’ business model [basic website for free, pay to get the premium version] couldn’t exclude certain people; it’s more that this model can only exist when there isn’t much pressure for a free version. At the moment, there aren’t any widely used web applications that aren’t available at zero cost – of course this may change if your audience is sufficiently well off to attract paid advertising, but there again it may not.

This is a phenomenon that’s been observed before: technology tends to eliminate differences between cultures. It’s been termed the Apparatgeist, a concept developed in response to the observation that mobile phone habits, once differentiated locally, are now more or less identical in all developed economies. As a concept it surely applies equally well to class and income – leaving us in a more equal experiential world. And perhaps also a monoculture – but then isn’t that entailed in the new equalities that so many internet optimists evangelise?

Whenever I watch a TED video it’s always so optimistic. Perhaps it was the pall of the city’s ceaseless rain, but the independent TEDx event I attended in Manchester focused on some less than rosy home truths.

Content? Are you? Not if your job is to produce content. The anodyne catch-all phrase for creativity as mediated by the web belies a bloodbath of job losses in newspapers, music and TV. The Evening Standard has recently accepted that what a consumer will pay for its product is zero, but it was last Friday at TEDx Manchester that a simple message came home to me.

It is conceivable that content is just something you can’t monetise in the era of the internet. Historically publishing has been fraught with similar difficulties; some of the world’s most influential books were utterly unable to remunerate their creators. Dr Johnson required a royal pension to keep him afloat despite having written the first full-scale dictionary; likewise Diderot managed to remain poor after producing the West’s most famous encyclopaedia. No wonder publishers struggle when all the profundity they can muster is the Evening Standard. Are we simply returning to the equilibrium where creativity is next to impossible to convert into cash?

At TEDx Guardian Digital Editor Sarah Hartley articulated hyperlocal journalism (basically a local resident keeping a blog) as a possible future of news media, but she also admitted that she had no idea how journalists might earn a crust from this pursuit.

The next speaker to play into this theme was Marc Goodchild, head of Interactive for BBC Children’s, who told us (amongst much else of interest) that from the age of 12 most kids spend their time predominantly on social networks and games – two areas where in effect you make the content yourself. He also told us that, for the first time, children’s game play and internet use combined account for more hours than TV viewing.

Hugh Garry, a Radio 1 producer, made the point even more firmly. His talk focused on a project that involved handing out mobile phones at festivals and asking people to record their experiences. The material was gathered into a film called “Shoot The Summer”. This exercise illustrated an interesting technical fact: mobile phones can produce footage that is perfectly watchable at cinema size.

A more subtle point was that most of the recipients of the mobile phones had a great natural sense of what would make interesting footage. If you don’t believe me, check out the film. And if you think that he just kept the good bits from millions of hours of people taking drugs in tents, well, you’re right. That’s exactly the point – where is the space for the professional when a million amateur YouTube clips can be relied upon to produce a thousand gems? Of course, the content generation generation will also have a more natural sense of how to use a video camera compared with those for whom such devices are fresher developments.

Against a backdrop of the inevitable Twitterfall, and the equally inevitable Mancunian rainfall, the possibility of the end of professional content production took root in my mind. What medium might remain immune? Film? Surely this is the medium with the highest barrier to entry protecting its profits. Perhaps, but in a projected video of a JJ Abrams TED talk we were all told we had no excuse not to be making films now that the relevant hardware is so cheap.

I don’t really doubt that there are a number of ways for the paid journalist or film director to survive, and it’s not news that the internet has put the squeeze on certain professions. There is a feeling, though, that we are just waiting for really cheap credit card transactions, or for Murdoch to spearhead online paid content, or for some other technological development to restore the professionals to their throne. That might be misplaced optimism. Indeed some top journalists may be reduced to giving talks to a load of half-arsed bloggers, perish the thought.

Do people live their lives differently to fulfil their obligations to writing? Is contriving your life to be tweetable acceptable?

Matthew Parris’s melodic voice was easily called to mind as I read his recent article in The Spectator. In his soft-spoken lilt he detailed a moment of pique on the London Underground, the subject of his ire being TfL’s decision to close the connection between Bank and Monument stations in one direction, a rule enforced by an escalator that conveys passengers up but not down.

Our protagonist struck a blow against the system by refusing to return to street level to make the connection, as those without Mr Parris’s anguished relationship with public transport might, instead dashing down the escalator the wrong way. Parris may have stood on the right previously, but on this occasion commuters must have been surprised to see him descend on the left.
I had imagined that he might have struggled to make the distance, fooled by his soft voice and gentle demeanour; I now discover he is in fact the fastest living marathon runner to have sat as an MP. He was, he stated, fuelled by a burning sense of injustice.

But I think he was also fuelled by something else – the need to write an article for The Spectator. It would be too much to imply that petty rule-breaking is the only means by which a man with Parris’s talents can conjure an article; at the same time I don’t doubt that the same journalistic bent must have automatically packaged this handy anecdote into 800 words as he battled against the receding treads.

Without conferring the pejorative term anecdotalist on anyone, these kinds of stories are the meat and potatoes of much journalistic writing – no news there. Having to come up with a bite-sized morsel of zeitgeist on a regular basis must make one constantly alive to the possibility that the week’s topic lurks in the article you are reading, the post office queue you are standing in or a conversation you had at the school gates. You must, I would suggest, encourage journalisable events to occur, at least on a subconscious level.

And surely, if this is the case, as more and more people have a quota of written output to fulfil, more and more people will live their lives in this way. I’m not referring to an increase in the number of professional writers, which certainly isn’t on the cards, but many people have a responsibility to a Twitter account, a regime of Facebook updates to keep up or even a full-blown blog to maintain.

Next time you see an unreasonable argument in a restaurant, a petty provocation of social norms or perhaps even a novel act of kindness, you may be witnessing the need to construct a life that makes good reading. Now that Virgin Trains have introduced WiFi, perhaps we will all have something sensational to read on the train. And there again, perhaps not.

Ian Pearson is a professional future gazer for BT. I interviewed him for the thing is…

Life expectancy has grown at a fairly regular rate for about 150 years now, do you think there is any chance of a significant deviation from this pattern in the next 50 years?

Nobody knows how much life expectancy can be expected to grow; some people think that the limit will be about 130 years, with quite a lot of people living to 100.

You’ve predicted a decline in manual jobs, which isn’t much of a surprise. You’ve also predicted that technology could reproduce the skills of footballers, TV presenters and experts who rely on experience to make judgments. If I want a job that isn’t going to be automated, where should I look?

The idea of a job for life is already history. People in the future will have multiple careers over a lifetime. The average time that people spend with one company decreases every year, and over the last 20 years most jobs have changed beyond recognition.

I think in the information economy jobs that require intelligence can largely be automated. Administrative jobs can quite easily be automated, and in the industrial sector robotics will be able to replace people. What’s left are jobs that require human contact – emotional, caring roles – and these are the things that people will largely be doing in 20-30 years’ time. We’re not talking about people having their jobs wiped out, but they will be focused more on the people issues.

And you’ve said that you think creativity could go that way too?

I think so. Already computers are creative in any sense that you choose to define it. Computers can already produce art, write poetry and things like that. The quality isn’t very good, but when you think that a supercomputer isn’t as powerful as the human brain yet, that isn’t entirely surprising. As we get better AI we will start to see computer-enhanced creativity. I don’t see it as a threat to human creativity; it will allow us to indulge our creativity.

I compose music, but I’m not very good at it. If I had machine creativity at my disposal all I would have to do is give a few hints about what I wanted and it would help me to compose something that Mozart would have been proud of.

Do you think all this means we can expect to enjoy more leisure time?

People in the past have predicted that we would have much more leisure time because machines would automate a lot of the work. That has happened, but instead of taking more leisure time we’ve decided to go for a much, much higher standard of living. Because I can get a job working 55 hours a week I work 55 hours a week and have a high standard of living compared to my parents or grandparents. If I was prepared to accept the same standard of living as my grandfather then I could work 10 hours a week, but do I really want to live in a very basic house with a 14” TV?

On the other hand we do see a phenomenon of downshifting at the moment – people opting out of the rat race and the materialistic lifestyle and deciding to concentrate on their quality of life. It’s impossible to predict how common this will become.

We often hear about the knowledge economy, and certainly it is the case that many more people have degrees in the UK than used to. Do you think it would be prudent to think a bit more about a masters or PhD to compete with future generations?

I think being highly qualified will be useful for the next 10-15 years at most. For the early part of your career it’s probably a good idea to have good qualifications to set yourself apart from everybody else. But in the long term the qualification stands for diddly-squat because if we believe that we are moving towards an economy where all the intelligent work will be done by machines then a PhD is of no commercial value whatever.

What counts are things like emotional and interpersonal skills, having a nice personality and being good at meeting people. For my 13 year old daughter’s generation I think qualifications will be irrelevant.

When I speak at education conferences I say that most of the useful stuff that kids are learning in school is on the playground, not in the classroom. Learning to motivate and empathize is not on the curriculum in most schools. The popular guy who sits at the back of the class and messes about is actually likely to be in a much better position in 30 years’ time than the kid who works really hard.

Do you detect an increase in the rate of change of technology?

Change is accelerating, there’s no doubt about that. When I first started this job in the early 1990s I could keep up with technological change relatively easily. Now I can’t: it’s a positive feedback system where every new wave of technology helps to make the next wave even faster. For example, bio-technologists might discover a new protein which can be used for organic computing, and the faster computers that result from that could in turn be used to better understand proteins. This leads to a phenomenon known as a singularity, where the rate of change as shown on the graph would be essentially vertical. This will probably happen around 2025, and the pace of change will be similar to ET landing and giving us all the technology from his space ship.

Obviously we perceive technology’s march, but is there a way of putting a number on the rate of change in technology?

There probably is, although I’ve never tried it. If you look at the number of patents filed each year there’s likely to be an exponential rise. There’s also an exponential rise in the quantity of information in the world. 30 years ago the amount of information in the world doubled every 10 years; now it doubles every year. It’s quite exciting to think that every year half of the information in the world is new – almost everything we know we’ve only just discovered.

The might of countries like China and India is increasing at the moment. Do you think these countries will come to dominate the West, and if so will we notice a big difference in our everyday lives?

Historically these things have been cyclical. If you look to the distant past China has been responsible for a lot of technology, and India has also been through a phase of being very powerful. Europe has gone past its peak and America has enjoyed the international lead for quite a long time, although it’s slipping fast, with China and India catching up, and China very much further ahead than India.


It’s no big surprise – China has a quarter of the world’s population, so on a level playing field you would expect it to dominate. Meanwhile Europe has an ageing population and a lot of our young people are not very well educated. In 30 years’ time Europe will be an also-ran on the world stage.

Quite so. One last question – why is the future always depicted as a dystopia?

People aren’t interested in the nice side of things; they are only interested in things they have to worry about. In evolutionary terms people have always had to be on the lookout for predators, and in the same way people naturally look for threats when discussing the future, rather than seeing the opportunities. I don’t lose any sleep over the future – at the end of the day I believe that we’ll muddle through, because we always do.