In the spirit of thinking through our new political reality, which I started here, I’ve been thinking about the electoral success of policies that promise violence – mainly imprisonment or war.

In advertising, apparently, sex sells. In politics, it’s a liability. What sells in politics is violence. The promise to do something violent has an appeal that is powerful and ubiquitous.

In a classic criticism of democracy, Thucydides tells us that in 427 BC the crowd in Athens voted to kill every adult male on the rebellious island of Lesbos, only to realise subsequently that this was an unjust act of violence, spurred by the rhetoric of a demagogue. They had to dispatch a second ship with the new orders, which only just arrived in time to prevent a massacre. In Orwell’s 1984, perpetual war is used as a mechanism to confer legitimacy on the dictatorship, an approach contemporary Russia has learned from.

We have the term khaki election to refer to this phenomenon. It was coined in 1900 to describe a UK election held in the context of the Second Boer War, where patriotic sentiment driven by the war is said to have helped the incumbent party win. More recently, there is Thatcher, whose prospects for re-election in 1983 looked dim until the Falklands War boosted her reputation – almost certainly changing the outcome in her favour. We might say the same of Bush’s second Gulf War. An aimless administration was transformed into a purposeful and successful one, the president on an aircraft carrier declaring victory just in time for the election. As it turned out, the invasion did not prove beneficial for US foreign policy, but it worked very well for Bush himself.

Internal violence can work the same way – for example, promising increased incarceration has been a successful electoral tool in the UK and the US, despite falling crime levels and endless evidence that prison is expensive and ineffective.

Why should a threat to do violence be so persuasive?

Unlike, for example, the myth that the economy works like a household budget, I can’t see that the appeal of violence comes from ‘common sense’ or our everyday experience. Has any family dispute ever been satisfactorily resolved by violence? How many teenage kids have been coerced into good behaviour? Do employers seek those who are able to persuade and negotiate, or those who are aggressive and violent? In our own lives, we almost never witness violence, and even less often as a successful strategy. Perhaps it’s exactly this distance that allows us to be so blasé about drone strikes and regime change.

I can only think that the allure of violence is part of a broader sense-making activity. Most people have some problems in their lives, disappointments to rationalise. Acknowledging that our lives are shaped by blind luck or unintended consequences is not a good narrative; it does not help us understand why things are as they are. But the idea that an enemy within or without causes these things does make sense – and the natural solution to an enemy is violence. Lock them up, bomb them – a convincing panacea.

In America, Obama’s failure to use the words ‘radical Islam’ became totemic on the right. It signalled a failure to adopt the appropriately bellicose rhetoric. Trump was even able to suggest that Obama was in league with ISIS because of his failure to use the properly aggressive language. What Obama was really doing was attempting to de-escalate the religious dimension of the conflict, with the long-run goal of bringing peace. It’s a totally rational strategy, except that, for the above reasons, it’s also a terrible electoral strategy.

This is a conundrum for any political party that wants to pursue a rational level of violence, which is generally well below the level voters apparently demand. I am not aware of any solution to this problem – as the ancient Greek critics of democracy pointed out, it may just be the price you pay.

 

A couple of notes from the Long Now Foundation health panel, both regarding how we aggregate and distribute knowledge.

Alison O’Mara-Eves (Senior Researcher in the Institute of Education at University College London) told us about the increasing difficulty of producing systematic reviews. Systematic reviews attempt to synthesise all the research on a particular topic into one viewpoint: how much can you drink while pregnant, what interventions improve diabetes outcomes, and so on. These reviews, such as the venerable Cochrane reviews, are struggling to sift through the increasing volume of research to decide what actionable advice to give doctors and the public. The problem is getting worse as the rate of medical research increases (although more research is obviously a good thing in itself). We were told the research repository Web of Science indexes over 1 billion items of research. (I’m inclined to question what counts as an item, since there must be far fewer than 100 million scientists in the world, and most of them must have contributed fewer than 10 items each; however, I take the point that there’s a lot of research.)

Alison sounded distinctly hesitant about using automation (such as machine learning) to assist in selecting papers to be included in a systematic review, as a way of making one step of the process less burdensome. The problem is transparency: a systematic review ought to explain exactly what criteria it used to include papers, so that those criteria can be interrogated by the public. That is hard to do if an algorithm has played a part in the process. This problem is clearly going to have to be solved – research is no use if we can’t synthesise it into an actionable form. And it seems tractable: we already have IBM Watson delivering medical diagnoses, apparently better than a doctor. In any case, I’m sure current systematic reviews of medical papers are carried out using various databases’ search functions – who knows how those work, or what malarkey those search algorithms might be up to in the background?
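To make the concern concrete, here is a minimal sketch (my own invention, not anything described on the panel) of the kind of automation in question: a classifier trained on abstracts that have already been screened, used to rank the unscreened ones. The abstracts and labels are made up; the point is that the inclusion ‘criteria’ become learned weights rather than a written rule, which is exactly where the transparency worry bites.

```python
# A toy sketch of machine-assisted screening for a systematic review.
# Assumes scikit-learn; the abstracts and labels below are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Abstracts a human has already screened: 1 = include, 0 = exclude.
labelled = [
    ("randomised trial of alcohol intake during pregnancy", 1),
    ("cohort study of prenatal alcohol exposure and birth outcomes", 1),
    ("survey of coffee consumption among office workers", 0),
    ("case report of a rare skin condition", 0),
]
texts, labels = zip(*labelled)

vectoriser = TfidfVectorizer()
model = LogisticRegression().fit(vectoriser.fit_transform(texts), labels)

# Rank unscreened abstracts by predicted relevance; a human still makes the call.
unscreened = [
    "alcohol use in pregnant women and infant birth weight",
    "a new algorithm for image compression",
]
scores = model.predict_proba(vectoriser.transform(unscreened))[:, 1]
for text, score in sorted(zip(unscreened, scores), key=lambda pair: -pair[1]):
    print(f"{score:.2f}  {text}")

# The transparency problem in one line: the 'criteria' are now these weights,
# which can be printed but are much harder to justify than a written protocol.
print(dict(zip(vectoriser.get_feature_names_out(), model.coef_[0].round(2))))
```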

Mark Bale (Deputy Director in the Health Science and Bioethics Division at the Department of Health) was fascinating on the ethics of giving genetic data to the NHS through its 100,000 Genomes Project. He described a case in which a whole family suffering from kidney complaints was treated after one member had their genome sequenced, identifying a faulty genetic pathway. Good for that family, but potentially good for the NHS too – Mark described the possibility that quickly identifying the root cause of a chronic, hard-to-diagnose ailment through genetic sequencing might save money too.

But – what of the ethics? What happens if your genome is on the database and subsequent research indicates that you may be vulnerable to a particular disease – do you want to know? Can I turn up at the doctor’s with my 23andMe results? Can I take my data from the NHS and send it to 23andMe to get their analysis? What happens if the NHS decides a particular treatment is unethical and I go abroad to more permissive regulatory climes? What happens if I have a very rare disease and refuse to be sequenced – is that fair on the other sufferers? What happens if I refuse to have my rare disease sequenced, but then decide I’d like to benefit from treatments developed through other people’s contributions? I’ll stop now…

To me, part of the answer is that patients are going to have to acquire – at least to some extent – a technical understanding of the underlying process so they can make informed decisions. If that isn’t possible, perhaps smaller representative groups of patients who receive higher levels of training can feed into decisions. One answer that’s very ethically questionable, from my perspective, is to take an extremely precautionary approach. That would be a terrible example of status quo bias: many lives would be needlessly lost if we decided to be overly cautious. There’s no “play it safe” option.

It’s interesting that with genomics the ethical issues are so immediate and visceral that they get properly considered, and have rightly become the key policy concern with this new technology. If only that happened for other new technologies…

The final question was whether humanity would still exist in 1000 years – much more in the spirit of the Long Now Foundation. Everyone agreed it would, at least from a medical perspective, so don’t worry.

 

In JG Ballard’s novel Cocaine Nights, residents of a utopian Spanish retirement resort commit terrible crimes against one another. They are driven to crime because they need more discomfort. Ballard’s message is that humans will become pathological in utopia. We need a problem, because if there are no problems, how will tomorrow be better than today?

David Graeber, in his book Fragments of an Anarchist Anthropology, says “There would appear to be no society which does not see human life as fundamentally a problem”. He might not be quite right, as former missionary Daniel Everett discovered when he went to the Amazon and met a strange tribe. The Piraha people, who believe themselves to be the happiest in the world (that’s what the name Piraha means in the Piraha language), have no past or future tense in their language. They are the happiest people in the world because they cannot ask: how will tomorrow be better than today?

The quest for a better tomorrow is a much-studied phenomenon. John Gray concludes that we are doomed to repeat the utopian fantasies of the past, constantly seeking a better tomorrow without realising that we simply recapitulate the same old problems in new ways. As he points out, the utopian regimes of the 20th century – Marxist, Leninist and so on – only succeeded in making tomorrow worse than today.

Gray contends that the reason Western governments ban drugs is that they offer the wrong way of making tomorrow better than today – a way that doesn’t involve ever-increasing material consumption. Governments require money-based redemption to keep the economy growing: more GDP to make tomorrow better than today.

I bring up the war on drugs because it seemed like an immovable feature of the landscape when Gray wrote about it in 2003. Now the war on drugs seems to be abating: many states in the US are moving to legalise cannabis and countries across Europe are moving in the same direction. Does that hint at a shift in the collective consciousness, a mutation in the imagined better-tomorrow? Economic thought feels like it’s turning a corner away from money redemption. Millennials are primarily civically minded, apparently. Philosophy offers career advice for ‘doing good better‘. Even a Conservative government is partial to the rhetoric of “measuring what matters“.

There is another kind of redemption, which the USA is pioneering: a global militarism in which a spectral adversary has to be defeated, à la George Orwell. That’s why the US can’t countenance gun control. As Obama said in an accidental moment of candour, in small-town America, where money-redemption seems impossible, they instead “cling to guns and religion”. A watered-down version of nation-state kill-or-be-killed can be seen in the Tories’ “global race” election rhetoric. We can only hope that this kind of zero-sum better-tomorrow goes away.

Robin Archer of the LSE gave a nice quote at a recent talk: “what a dismal time it has been for those of us on the left… because the unusual plastic state of the public mind which followed the global financial crisis feels like it’s starting to congeal and harden into something quite unsympathetic”. But perhaps a Tory victory is a ripple on the surface of a Kondratiev-wave-scale reorientation of the global outlook. Political radicalism consequent to the financial crisis didn’t really touch Britain, where the average voter has remained relatively unaffected compared to the devastation in peripheral eurozone countries.

But there is a global, almost post-national chattering class, bound together by the web, which could emerge as a new force in politics. Evgeny Morozov thinks it too will be beholden to neoliberal money-redemption, while Cory Doctorow is more of an optimist.

Meanwhile, the diminishing marginal utility of wealth means that increasing GDP might not satisfy us forever, and in any case perhaps economic growth has gone for the foreseeable future. Economics professor Ed Glaeser says “the introduction of happiness into economics by Richard Layard and others stops the economists’ primal sin, which is acting as if money is the be all and end all, which is as foolish as the view that any one thing is the be all and end all.”

Time for a new multidimensional answer to how tomorrow will be better than today? I hope so.

 

Ames Gunstock Lathe in the Science Museum’s Making of the Modern World exhibition

The Ames Gunstock Lathe is a tool for carving rifle gunstocks from wood. It functions by running a probe over an already shaped “template” gunstock. The probe is mechanically linked to a cutting head that produces an identical copy from a wooden blank.

According to geographer Jared Diamond’s book Guns, Germs and Steel, the ability to make guns has shaped global history. Ian Morris, in his book Why the West Rules – For Now, echoes this sentiment, suggesting that mass-produced guns tipped the power balance away from nomadic tribes and in favour of the sedentary urban populations that we now take to be a defining feature of civilisation. Mechanisms such as this lathe are clearly influential in the broad sweep of history.

Specifically, this tool was built at the Springfield Armory in the United States. The facility’s ability to mass-produce guns had a profound effect on American history, and it is now a national monument and museum for this reason. The production techniques pioneered there also seeded the Industrial Revolution in the United States.

In terms of historical impact, this exhibit couldn’t have much better credentials for inclusion in a gallery about the making of the modern world. It was the novelty of the mechanism that caught my attention, but what set me thinking more deeply was the attached description:

“This machine’s legacy is the computer numerically controlled (CNC) machining systems that characterise mass-production today”.

Perhaps if the label had been written more recently it would have referenced 3D printing instead of CNC.

To me, it’s not clear the lathe warrants a place in the gallery on this basis. While superficially similar to a CNC lathe in terms of its ability to automatically produce a complex form, the two things are in fact profoundly different.

The authors of this description have not appreciated that the Ames Gunstock Lathe has no numerical or computational aspects at all.

The machine is so fascinating exactly because it operates without any level of abstraction. It takes as input one gunstock and makes another with no representational intermediate. In this sense it’s the absolute antithesis of the “information age” in which we now live, an age defined by the rise of abstract representation.

In fact the lineage that leads to modern computer technology and CNC tools was already well established by 1857. The Jacquard loom used holes punched in cards to control the patterns it wove into fabric – a genuine information technology. The link between the Jacquard loom and modern computing is unambiguous: the system of using holes in cards as an encoding method was prevalent in computing right up until the 1960s, and much of the standardisation of punch cards was undertaken by IBM, very much a link to the present day.

So the Ames Lathe, which was built 50 years later than the first Jacquard looms, doesn’t feature in the genealogy of CNC machines after all.

Disinheriting the Ames Lathe is more than just an exercise in taxonomy. Comparing the Jacquard loom to the lathe is a case study which can shed light on the defining characteristics of information technology.

Claude Shannon published A Mathematical Theory of Communication in 1948, giving an account of how to measure information that is widely accepted. However, what information actually is and how it is deployed in technology is less clear.

The Ames lathe is a vivid illustration of the contrast between the highly malleable, liquid data that powers the modern world and the non-representational physical object, which has proved far less fertile in terms of innovation.

As far as I can think, the only functional modern device that uses an analogous mechanism to the Ames lathe is the machine used for copying keys in high street shops. Meanwhile, the informational approach of the Jacquard loom was already exhibiting the advantages that make information-based manufacturing so powerful.

For example, the cards that controlled the Jacquard loom could be converted into electrical signals, sent over the telegraph nearly instantaneously and recreated at some distant location. By contrast, because it requires a physical, full-scale wooden representation of a gunstock, the Ames lathe can only transmit a design at the same speed as any other medium-sized physical object.

Punch cards can be reordered to produce new patterns in woven cloth with very little effort, while for the Ames lathe to produce a new design a whole new template must be hand-made.
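As a toy illustration (my own analogy, not a faithful model of Jacquard’s actual card format), here is the same contrast expressed as data: once a pattern is just a sequence of symbols, it can be serialised, sent down a wire and rearranged at will, whereas a carved template can only be shipped and copied.

```python
# A sketch of a weave pattern as "punch cards": 1 = lift the warp thread, 0 = don't.
# The pattern and the wire format are invented for illustration.
cards = [
    [1, 0, 1, 0, 1, 0, 1, 0],
    [0, 1, 0, 1, 0, 1, 0, 1],
    [1, 1, 0, 0, 1, 1, 0, 0],
    [0, 0, 1, 1, 0, 0, 1, 1],
]

# "Transmission": the whole design collapses to a short string of symbols that
# could be sent down a telegraph wire and punched onto fresh cards at the far end.
wire_message = "/".join("".join(str(bit) for bit in card) for card in cards)
received = [[int(bit) for bit in row] for row in wire_message.split("/")]
assert received == cards

# "Manipulation": reordering or repeating cards yields a new cloth pattern
# without any new physical object having to be made.
new_design = cards[::-1] + cards
for card in new_design:
    print("".join("#" if bit else "." for bit in card))
```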

This ease of manipulation and transmission is the defining feature of information technology.

For me the inclusion of this lathe says more about the making of the modern world than many of the exhibits in the gallery that genuinely embody computer technology. By illustrating a technological cul-de-sac it throws into sharper contrast the path that progress has actually taken.

A device using a similar mechanism, made by artist Balint Bolygo. In this image it is copying a cast of his head onto paper.

A hallmark of the bien pensant intelligentsia is to be accepting of all forms of art. The most abstruse Turner prize winner, the most aesthetically banal Damien Hirst dot painting.

This is in contrast with the proletarian view, which is to hate modern art as an elaborate con played by and on the pretentious. Nothing could be more lowbrow than to criticise art on the basis that it demonstrates no skill or wasn’t a labour of love, summed up by the phrase “could have been done by my 5-year-old”.

Perversely, this means that people who think of themselves as having the most sophisticated and political thoughts about art actually have a completely atheoretical view. Meanwhile, ironically, the tabloid, white-van-man position is actually a version of Marx’s labour theory of value.

For various reasons, I’ve been working from cafes a lot recently. People who have lengthy work conversations on the phone in public usually piss me off, but inevitably I’ve had to do one or two. So now I’m that guy.

Even worse, I’ve done Skype conferences. I feel self-conscious adopting meeting-room jargon in front of a room full of people who didn’t want to hear it while having their coffee. A question occurs: am I talking Internet bullshit? If they pay any attention at all, what do the other cappuccino-quaffers think of the type of conversation I’m having?

There is a whole vocabulary of businessese that requires special interpretation. This peculiar semantic field is at its most fertile in the meeting room, but it can be heard in all parts of the workplace ecology.

Sometimes, words that started off having a meaning become filler words buying their user an extra moment of thought: “going forward” and “in reality” for example. Though we might disapprove of these phrases as inarticulate, it seems unfair to hold people in a meeting to any higher standard than everyday conversation, which is riddled with “like” and (my own personal crutch) “to be honest”.

In the first place, though, I suspect “going forward” did have a meaning: to suggest a change in future behaviour. Perhaps not a paradigm of clarity, but it nonetheless reinforces the idea of a new procedure. In a household setting you might say “in the future, don’t leave your shoes on the stairs”; in a meeting you might say “going forward, we should flag up issues around untidiness”. This points to a possible motivation for workplace jargon – to distance what is said at work from sniping at one another.

Some phrases are more prone to being usurped than others – “granularity”, “bandwidth”, “ring fence”, “visibility” spring to mind – this is not to say that they do not, in some circumstances, denote useful concepts.

Then there are expressions which I think really are pure bullshit. Phrases such as “let’s press fire on [x project]”, “…blowing smoke up [x]’s arse” (I have observed this one, but only occasionally), and a personal favourite of mine, the only-observed-once “co-evaluate” (surely compare?). These phrases are usually designed to make us think of the person who says them in a particular way. I also think they are designed to couch the inherently un-masculine work of organising, persuading and explaining in violent, aggressive or explicit language.

This brings us to what bullshit actually is. Harry G Frankfurt’s short essay “On Bullshit” argues that the defining feature of bullshit is that it “is not concerned with the enterprise of accurately describing reality”. Instead it is designed to make people think of the speaker in a particular way. His (very American) example is that of a Fourth of July orator who refers to the country as chosen by God and destined for greatness. The point Frankfurt makes is that the orator is not trying to convince us that America is great, nor that it is chosen by God – we would think it strange if he started producing evidence that the country was chosen by God. What he is really doing is projecting his patriotism. The words do not contain truths, or even lies; their relation to the truth is irrelevant. What they actually assert is “I am a patriot!” and nothing more, just as “co-evaluating” is uttered primarily to insinuate “I am a sophisticated business thinker”, rather than to tell you anything else.

This is a useful distinction, but in practice it’s harder to tell. What are the speaker’s intentions? Are they “blowing smoke up your arse”? Or you up theirs?

In our context, the question is: does the use of specialist language about the Internet / business / marketing indicate that someone is trying to look like an expert, or that it’s the easiest way to express a complex idea? No one would argue that doctors are bullshitters for adopting specialist terms for parts of the body. Human anatomy is a complicated thing; everyday words might easily lead to the wrong bit being amputated, irrigated or irradiated.

The same cannot be said for lawyers, who are often thought, even from within their own profession, to adopt jargon that precludes non-experts from understanding what they are saying. Some people even think they do this with a view to building up the importance and complexity of their own jobs to enhance their earnings. If you subscribe to such a view, you are accusing them, more or less, of being bullshitters.

Did bullshit contaminate the lattes of those unfortunate enough to have sat within earshot of me in the coffee shop? Perhaps I’m not the best person to judge, but there are, I think, real reasons to adopt a lexicon that sounds to the casual earwigger quite similar to bullshit – especially when talking about a (what to call it?) web product.

Lots of things connected with the Internet either don’t have a proper name or have a really wanky name. In this way a useful designation comes to sound like bullshit. There are good reasons to refer to design as UX – you might not agree with them, but it’s not bullshit. Yet if a commensal customer heard that I was choosing to use the word UX to describe a role very similar to that of a designer, they might think their worst suspicions confirmed.

Moreover, when everyone is in the right, er, headspace, very often you start to use “in-words” – that is, words that have an adopted meaning within the group. This is positive: it’s indicative of progress, and of thinking a lot about a particular project. In several projects I’ve worked on, the conceptual landscape has been clarified by writing a dictionary of these in-words. The process sometimes illuminates an unnoticed similarity or opposition between concepts. It’s very handy – if you get stuck with a project I can recommend writing that dictionary. And if you don’t have any in-words, then perhaps you haven’t talked about the problem enough.

So I’m going to be charitable to myself and say that it’s at least possible, perhaps even probable, that having a meeting about a web page (or is it an app?) will make you sound as though you are talking bullshit – even if, on this occasion, you aren’t.

What’s the take-home, I hear you ask? What are the actions, going forward? I think it’s important to flag up real bullshit in a profession littered with pretend bullshit – not to tar everyone with the same guff brush. To be honest.

This morning I went to a talk about devices which interface directly between the brain and computers. By way of an introductory remark, Louise Marston noted that “for thousands of years humans have wanted to be able to communicate directly from one brain to another, which of course we can, by writing.” This set the tone for a discussion about technologically extending the functionality of our bodies.

The panel all agreed that it is a mistake to imagine that using (for example) brain implants to communicate with computers represents a sea change in our sense of self.

Anders Sandberg pointed out that we already use contact lenses and clothes to extend our personal capacities. What makes the idea of brain implants alarming is that they represent a ‘transgression’ of our physical bodies. However, as Anders went on to point out, this transgression “makes good posters for films” but isn’t actually that practical, mostly because of the dangers of infection and medical complications.

Instead he favoured subtle, low level interaction between brain and computer. He gave the beautiful example of his relationship with his laptop – he can subconsciously tell if the hard drive is ok from the noise that it makes.

Other examples include MIT’s “Sixth Sense“, while Professor Kevin Warwick showed a photo of a device that allowed users to get messages from their computer via tiny electric shocks on their tongue. Probably not to everyone’s taste.

Optogenetics is another new approach. It involves altering your genetic code so that your neurons respond to light, then shining a laser through your cranium to manipulate your brain’s behaviour.

While some of the technologies under discussion are not even on the lab bench yet, one is already in medical use: Deep Brain Stimulation to treat Parkinson’s. An implant electrically stimulates the thalamus, which reduces the symptoms of the disease. Some patients go from being unable to dress themselves to being able to drive again. Impressive stuff, but it also reifies a moral thought experiment. Some people who use the device experience personality changes, for example becoming compulsive gamblers. Who would be responsible if a patient had a personality change and went on to commit a crime? The device manufacturer, the surgeon or the patient? One man is already suing his doctor over a gambling spree he claims was brought on by medication.

Perhaps if we had more debates about these kinds of moral dilemmas we’d have a more nuanced understanding of what’s at stake. It drove me nuts during the riots that _every_ news presenter had to ask anyone who said anything explanatory about the causes of the riots, “Are you making an excuse for them?” Surely we can have a more sophisticated understanding of morals than that discourse seemed to indicate.

The panel itself had some interesting characters. Anders Sandberg comes from the grandly titled Future of Humanity Institute in Oxford, which is also home to a philosopher I particularly like – Nick Bostrom. He’s very entertaining; I seem to remember he did stand-up for a while. Bostrom is also responsible for a confounding logical conclusion in the form of his simulation argument.

Professor Kevin Warwick has had all manner of things implanted in him – a sure sign of commitment to your work. He told us he has a graph of the electrical activity associated with the onset of Parkinson’s on his living room wall to keep him focused on his work. Presumably he has a very understanding wife too – some of his experiments have included her, for example wiring their brains together to facilitate direct electrical communication. I once wrote a short story about exactly this. Unfortunately it’s not very good; I hope their experience went better than my short story.

Throughout the whole talk there was a tendency to wander between brain-computer interfaces and the subject of artificial intelligence. It seems to me that there isn’t really an obvious link between the two, except that they both endanger our sense of self. In many ways this is the most fascinating aspect of the technology. Most people distinguish between using technology to restore function that’s been damaged by disease or a car accident and the more treacherous moral territory where technology is used to exceed our ‘normal’ abilities.

We discussed how the use of a notebook as a memory aid could be considered a synthetic extension of our natural abilities, and that no one considers this to have moral implications. Indeed, as I write this I’m quite happy to take advantage of a spell checker as well as my notebook.

It would feel weird if the computer started improving my prose by suggesting eloquent synonyms, or perhaps advised me that the above “not to everyone’s taste” pun is an execrable crime and should be deleted immediately. When computers – through implants, other types of brain-computer interface or artificial intelligence – start doing things that we consider uniquely human, like creativity and punning, I think it really will cause us to radically reconceptualise ourselves. In this sense, I wonder if examples of using clothes or glasses to enhance ourselves are misleading, because they don’t strike at core concepts of what it is to be human. Or perhaps we’ll just get over it.

Today I spoke at BarCamp Media City in Salford, Manchester. Part of the appeal was getting to see the new Media City home of the BBC. You get the tram from the train station – there’s something about getting on trams that makes me feel like I’ve left the real world and slipped into a theatre set where everything is just pretending. I quite like that. It’s because of the monorail at Chessington World of Adventures, I think.

I’m glad the security guards who checked my computer cables, validated my photo ID and escorted me to the 5th floor of the BBC Quay House building didn’t find anything suspicious. They wouldn’t have hesitated to do a cavity search. You’d think the Queen was giving a presentation.

Who called it Media City? Accountancy consultants? They’re probably signing off the plans for Content Hamlet and Return On Investmentshire right now.

Anyway, I was just going to post something quick explaining the talk I gave. Forgive me if this isn’t watertight, and apologies that it’s been written in haste – hopefully it will clarify what I said for anyone who’s interested.

The Internet is not a medium

TV, radio, the novel, the Internet. It sort of makes sense. OK, the Internet is perhaps a broader category than radio, but we often think of the Internet as just another type of media. I’m going to argue that it isn’t, and that thinking it is has negative consequences.

Definition of a medium, No 1 

A medium is a method of transmitting messages where all the messages transmitted by that medium have similar features. Some of those features are conventions – for example, that newspaper articles have bylines, lead paragraphs explaining the facts, and are written in a particular style. Other features that distinguish a medium are matters of technological expediency – there are no moving pictures in newspaper articles.

Mediums can nest, as illustrated below.

[Diagram: media categories nesting recursively]

My contention is that podcasts, YouTube, eBooks and blogs are so dissimilar that there is literally nothing about them that puts them in one media category. Not even in the same broad nest. This might seem like a semantic point, but I think it leads to a number of problems:

  • Often people speak of the Internet as though it is one medium, and their claims need to be made more specific. “People who use the Internet for 4 hours a day have lower attention spans” doesn’t really mean anything – what are they using the internet for? That’s the critical fact, otherwise it’s about as broad as saying “people engaged in activities for 4 hours a day have lower attention spans”.
  • Erroneous assumptions that generic properties of the Internet exist. It’s also common to hear statements such as “the Internet is democratising”. Obviously this is widely debated, and that debate could proceed with more clarity if the language were tightened up. What features of the net are democratising?
  • ‘First-TV-programme syndrome’ – when the first TV programmes were broadcast, they simply pointed cameras at people doing radio shows. It took time to work out what could be done with the new technology. Clearly we’re on that same curve with the Internet. Being careful about what we’re referring to can only help. (Hat tip to The Guardian’s Martin Belam.)

Definition of a medium, No 2

A medium is a method of transmitting messages between people. This feels like an all-encompassing definition of media to me, but it is still narrower than the Internet.

The reason is that the Internet can be used for transmitting data that is not intended for human consumption. It’s possible to email someone a CAD file and get a 3D prototype back without a human having ever read the data you supplied. With increasingly ubiquitous computing, and more sophisticated ways of shaping matter using data, this is a growing mode of Internet use. In this sense it’s more like an all purpose manufacturing aid. I think of it as similar to the way steam increased productivity in the industrial revolution (I’m not trying to make a comment on how important it is though).

Information is hard to charge for, but physical things are not. Projects such as Newspaper Club take advantage of this: they allow you to print your own low-volume newspaper. You’d never pay to publish something online, but paying to use a web app that makes something physical is a reasonable proposition. Thinking like this might help you identify a revenue stream.

I think the fun of BarCamp is that you get to explain a pet idea, and it’s also a lovely arena to have a go at public speaking – I hope my audience weren’t too confused. Thanks to everyone who came along!

When I try to convince my friends of the merits of some new-fangled internet thing, whether it’s the relevance of Ushahidi to international development or the usefulness of AMEE to engineers, I often feel that in their minds I’m being filed away into a particular box.

If you like Twitter, if you see potential for citizens to access government services via the web, if you blog, then you’re a hopeless, unsophisticated optimist who signs up to every passing fad.

On the other hand, nerddom does exactly the same thing right back. If you worry about “Internet addiction” or the breakdown of interpersonal skills, if you think that crowdsourcing threatens notions of professionalism or you can’t see the point of gamification, then you’re a luddite who “doesn’t get it”. You’re the kind of sentimentalist who would drag everyone back to the good old days of rationing and coal mining and slum tenements and feudalism.

Those are your choices. Guardian or Daily Mail, bullshitter or tedious reactionary, panglossian optimist or po-faced medievalist. Stephen Fry or Brian Sewell.

Being typecast in this way is annoying; it means that when I try to evince the benefits of some web thing or other anyone skeptical will simply assume that my judgement is hopelessly clouded.

Conversely anyone who raises a legitimate concern will disappear under an avalanche of comments.

Often this binary assumption about people’s psychology distracts from sensible conversation about which of the opportunities the web presents are most valuable to society. It’s from this angle that I consider the following question: does getting your intellectual nourishment from a computer screen reduce your capacity to have complex thoughts or reduce your mental acuity?

The most eloquent dismissal of this idea that I’ve heard is from an LSE podcast. Jonathan Douglas, director of the National Literacy Trust, frames the debate in terms of a dynamic understanding of what it is to be literate. As an example, he points out that Socrates hated the idea of writing and thought of it as “killing words”. For Socrates, the only way to be literate was to participate in discussion, not to read it secondhand.

In antiquity, it was most common for reading to be out loud, and the ability to clearly orate a text was a critical aspect of literacy. Now moving your lips as you read is a sign of stupidity.

To quote Jonathan Douglas: “Technology is driving a massive change in reading, from personal to social and interactive”. He notes that judging authority and other critical skills are now among the core skills you need to access ideas, so that to be literate in the most modern sense is to understand the provenance of a Wikipedia article and to treat its information appropriately.

None of this means that reading on the web is more or less able to convey complex ideas, or to be valued any more or less than books.

Books, however, have a particular fetishised status which many people can’t get over. For a long time they were the primary means of getting access to ideas, and so they have come to be seen as the only (serious) means of accessing ideas. They no longer have this special status, and we need to bear in mind that books are just containers – it’s their payload that really matters. The most important thing is for concepts to be imparted, not the means by which that is done.

Collecting books, the appeal of which I can absolutely see, is really a kind of cargo cult. Having the first edition doesn’t change the knowledge contained within the book; it represents a kind of faith in the physical object rather than the words within. This is the cult of books, and while understandable, it’s not a sound basis for ignoring other media.

I’ve seen representatives of the Campaign for Real Education in TV interviews criticising the idea that a school might buy laptops, on the basis that it should really buy books. Susan Greenfield, an Oxford neuroscientist, has suggested all kinds of problems that might be caused by a failure to spend enough time with books, always gathering attention from the popular press but never supporting her ideas with any evidence.

I think this notion of changing literacy is very helpful in explaining to skeptics the potential of the web to provide a whole new way to access intellectual thought. It couldn’t be more apposite that I discovered it by listening to a podcast from an event that I would otherwise never have found out about.

It’s not a sop to short attention spans, or “dumbing down”, to express information in a format other than extended prose. One of my favourite examples is HyperPhysics, which shows the central concepts of physics in relation to each other. It’s not a linear textbook, but I don’t think anyone can accuse it of dumbing physics down.

Most excitingly, there is an opportunity to throw open the doors of academia, with lectures and talks available as podcasts, professors keeping blogs and course notes appearing online – this is a genuine opportunity to let learning that was once confined to institutions out of its cage. It would be foolish to pass this up simply because of a dogmatic allegiance to binding our knowledge into volumes and lodging them at the British Library.