In the spirit of thinking through our new political reality, which I already started here, I’ve been thinking about the electoral success of policies that promise violence – imprisonment or war, mainly.

In advertising, apparently, sex sells. In politics, it’s a liability. What sells in politics is violence. The promise to do something violent has an appeal that is powerful and ubiquitous.

In a classic criticism of democracy, Thucydides tells us that in 427 BC the assembly in Athens voted to kill every adult male in Mytilene, the rebellious city on the island of Lesbos, only to realise subsequently that this was an unjust act of violence, spurred by the rhetoric of a demagogue. A second ship had to be dispatched with the new orders, and it arrived only just in time to prevent a massacre. In Orwell’s 1984, perpetual war is used as a mechanism to confer legitimacy on the dictatorship, an approach contemporary Russia has learned from.

We have the term ‘khaki election’ to refer to this phenomenon. It was coined in 1900 to describe a UK election held in the context of the Second Boer War, where patriotic sentiment driven by the war is said to have helped the incumbent party win. More recently, there is Thatcher, whose prospects for re-election in 1983 looked dim until the Falklands War boosted her reputation – almost certainly changing the outcome in her favour. We might say the same of Bush’s second Gulf war. An aimless administration was transformed into a purposeful and successful one, the president declaring victory on an aircraft carrier just in time for the election. As it turned out, the invasion did not prove beneficial for US foreign policy, but it worked very well for Bush himself.

Internal violence can work the same way – for example, promising increased incarceration has been a successful electoral tool in the UK and the US, despite falling crime levels and extensive evidence that prison is expensive and ineffective.

Why should a threat to do violence be so persuasive?

Unlike, for example, the myth that the economy works like a household budget, I can’t see that the appeal of violence comes from ‘common sense’ or our everyday experience. Has any family dispute ever been satisfactorily resolved by violence? How many teenage kids have been coerced into good behaviour? Do employers seek out those who are able to persuade and negotiate, or those who are aggressive and violent? In our own lives we almost never witness violence, and even less often do we see it succeed as a strategy. Perhaps it’s exactly this distance that allows us to be so blasé about drone strikes and regime change.

I can only think that the allure of violence is part of a broader sense-making activity. Most people have some problems in their lives, disappointments to rationalise. Acknowledging that our lives are shaped by blind luck or unintended consequences is not a good narrative; it does not help us understand why things are as they are. But the idea that an enemy, within or without, causes these things does make sense – and the natural solution to an enemy is violence. Lock them up, bomb them – a convincing panacea.

In America, Obama’s refusal to use the words ‘radical Islam’ became totemic on the right. It signalled a failure to adopt the appropriately bellicose rhetoric. Trump was even able to suggest that Obama was in league with ISIS because of his failure to use the properly aggressive language. What Obama was really doing was attempting to de-escalate the religious dimension of the conflict, with the long-run goal of bringing peace. It’s a totally rational strategy, except that, for the reasons above, it’s also a terrible electoral strategy.

This is a conundrum for any political party that wants to pursue a rational level of violence, which is generally far below the level voters apparently demand. I am not aware of any solution to this problem – as the ancient Greek critics of democracy pointed out, it may just be the price you pay.

 

We experienced something truly amazing in Marrakech. A taxi took us from the airport to the winding mesh of the medina, and we began to look for a cafe. Only after a while did we realise where you have to look – up. The cafes are on the rooftops.

Spiralling up and out onto a fourth-floor terrace, a rose-shaded cubist rhythm of rooftops stretches toward the High Atlas mountains, stepped African crenellations serrating the skyline. Dufy-esque palm trees shade the crow’s-nest cafes, while flocks of tiny birds fold the November sun into barely audible soft peaks. We look down onto the emerald green glazed tiles of the 11th century Ben Youssef mosque and order mint tea. We order pastillas – chicken wrapped in filo pastry, dusted with icing sugar just as the mountains across the plain are dusted with snow.

Only then are we fortified enough to discuss the amazing thing we had just experienced in Marrakech – the queue at the airport. Even as the bus delivered us from the plane to the terminal we could see a roiling body of people thronging a low hall. Only when you got inside did you realise the scale – a crazed mass of people pushing toward passport control booths that have disappeared behind the curve of the earth. We join the crush.

After 15 minutes of queuing we realised that concealed within the tumult was a reminder of civility – the snaking Tensabarrier familiar from airports across the world. We obeyed, and allowed ourselves to be guided perpendicular to our destination. We got deeper in. The temperature rose. Waves of jeering and whistles – a celebrity arrival? The president? Was that the reason for the delay? Nope – it was spontaneous outbursts of protest from the front of the queue, presaging what was to come. As the density increased I found myself toppling over other people’s luggage, prevented from falling only by the absence of enough space to do so. Someone begins to cry.

A very tall man who had been behind us is now somehow far in front. We reach a Tensabarrier hairpin only to discover that the frustrated crowd has begun to duck under it – this is the point where we begin a strange journey. Not toward passport control, but toward the dissolution of the old system, the social norms we arrived with. In its stead we formed a society based on a new morality – the morality of the Marrakech airport queue.

Someone unclips the barrier and we surge forward into space that isn’t there. An English couple we cut past protest – “we’ve been here an hour!”. “So have we…”. Very tall man is behind us again. Couples cling to each other. Progress ceases, every gap squeezed from the crowd. We begin to forge a new social contract – we realise that obeying the symbols of the past is no longer rational. The barriers don’t mean anything; those who obey them are punished, those who do not are rewarded. As the Bible says, when it comes to entering the Kingdom of Morocco “many who are first will be last, and many who are last will be first”.

Egalitarian mob justice erupts. We collectively condemn the young and able bodied who push forward, while rallying round to support the frail. We crowd surf water to a French woman who has passed out, and attempt to summon the authorities – all without losing our places. Eventually a man in full scrubs – presumably straight from operating on another casualty – drags the woman from the crowd.

We heard wails, shouts and scuffling break out in the parallel ‘fast track’ queuing system next to us – I believe a different, and darker, culture emerged there.

Two hours in, we are crushed against the final hurdle, the immovable metal barrier that separates us from a row of passport control booths. Very tall man is ahead. Not long ago we had poured scorn on those who jumped the barriers; now we saw it as the only way. We chanted “Do it!” at old believers who could not adjust to the new ways. One reluctant Chinese man demurred and gestured at his elderly parents. Moments later – and I swear by our newly minted gods this is absolutely true – he stood elevated astride the barrier and shouted “There are no rules any more!”. He looked back at his parents as though across the Berlin Wall.

Finally, we were piled against a booth, 30 faces pressed against the perspex like children at an aquarium. Almost there. We watched as the official idly hunt-and-pecked the details of each passport into the computer, queried the minutiae of hotel addresses and fastidiously stamped unique numbers into every passport.

We left the airport certain our pre-booked taxi would have left hours ago, but a man wilted over the arrivals railing held a sign bearing the name of our hotel. We decompressed in the arrivals lounge, a luxurious architectural gesture, apparently intended to welcome travellers to a country that sees tourism as its future.

We told him about our ordeal – had he ever heard of such a thing?

“Oh, yes – this happens every Saturday”


Most politicians – with the exception of the Lib Dems – have said that parliament should accept the result of the EU Referendum as the democratic will of the people.

This may be true for political or pragmatic reasons. Ethically, however, it’s far from obvious. If someone tells you Brexit is a moral necessity just because a vote has taken place, they are wrong. Obeying the vote requires a value judgement about the status of that vote, and the issue is much more complex than simply asserting that a vote has taken place.

If we were to go around disputing the status of every vote, democracy would be impossible. Here I will present the case that the Brexit vote is uniquely precarious: direct democracy about an irreversible and highly important decision, carried out in a context of asymmetric information. Specifically, polling data suggests many leave voters were expecting an outcome that not even the Leave campaign itself thought possible.

There are good reasons to be wary of attempts to understand what voters ‘really’ wanted – analysis becomes a vessel for your own opinions. At the same time, we have to acknowledge that voters’ opinions can be shaped by the information they are receiving.

For these reasons, refusing to leave the EU would be an absolutely legitimate position for parliament.

In the national debate it seems to go almost unquestioned that simply going through a voting procedure automatically conveys unassailable democratic force to a decision. Not true: Russia, Zimbabwe and North Korea all have voting procedures, yet most people agree that they are unsatisfactory in various ways. I’m not comparing the UK to those countries, but making the philosophical point that you can stand in a booth and fill out a form and still not be ‘doing democracy’.

For a vote to carry democratic force – for it to convey the ‘will of the people’ – most people think you have to do more than just count pieces of marked paper. I complained about two criteria that I felt were lacking in the EU referendum before the vote took place – that the electorate be representative, and that voters should be well informed.

We apparently can’t agree on the demographics of EU Referendum voters, but we do know participation was unusually high, so let’s set the issue of representativeness on one side.

The electorate were not well informed; in fact they were actively misled about what leaving the EU would mean. This is the case in every election, but here I will argue that the misinformation was both asymmetric and effective in changing voters’ views.

I’m also not claiming leave voters are stupid, or that they do whatever Rupert Murdoch says. I am not claiming that everyone who voted leave was misled. I am not claiming that voters would have voted remain with better access to information.

I am claiming that we do not know how voters would have behaved with better access to information, and that information in the EU referendum was unusually low quality.

This is a difficult empirical point to prove. We cannot observe how voters would have behaved in other circumstances. What we can do is build an empirical case that voters held beliefs that can reasonably be expected to influence voting behaviour, and that those beliefs are a result of systemic misinformation.

We can see from YouGov’s polls that many people believed leaving the EU would make no difference to, or would improve, the economy. In the last poll, which closed on the 19th June, 46% of respondents thought there would be no economic impact, while 9% thought they would be better off. These views, unsurprisingly, correlate with the intention to vote leave: 18% of those intending to vote leave thought leaving would improve the economy, and 66% thought it would make no difference.

This is at stark variance with predictions. The Leave campaign’s economist Andrew Lilico’s own forecasts suggested there would be a short-term economic hit, but predicted that by 2030 the economy would have returned to normal. This prediction is more optimistic than almost any other, whether from a private company, the Treasury or international organisations such as the OECD. If voters had known that the most optimistic case was a short-term recession, followed by a possible return to normal growth in 15 years’ time, rather than believing there would be no difference or an improvement, how would they have voted? We do not know.

This in turn bears on Leave’s promise of extra money to spend on the NHS. A post-Brexit government can choose to spend more money on the NHS, but it will not be doing so using ‘spare money’ created by Brexit – certainly not until 2030.

We are now living in a future where Brexit seems imminent, and the predictions of a short-term slowdown appear to be coming true, with Mark Carney confirming these effects both verbally and by providing £250bn of taxpayers’ money to support the economy.

In the same poll, 54% of respondents believed that Brexit would reduce immigration. Again, this correlates with the intention to vote leave, with fully 85% of leave voters believing immigration would decrease. And again, this is at odds with the predictions of all sides. Leave’s economic model relies on immigration remaining roughly the same (Andrew Lilico again), and Leave campaigner Dan Hannan notably confirmed that immigration would remain broadly similar after Brexit. How would voters have behaved if they had known this? Again, we do not know.

I’m not claiming access to an objective reality about what will happen in the case of Brexit; I’m asserting that leave voters did not understand the position of the Leave campaign itself. Given that the Leave campaign is likely to have been over-optimistic about what it can deliver, the reality of Brexit is likely to be even less satisfactory to leave voters.

We know that a typical leave voter thought the economy would remain the same or improve while immigration would be reduced. But we do not know if these were factors that caused them to vote leave, or merely incidental. However, if we look at polls of the issues that matter to voters, we see that immigration, the NHS, the EU and the economy are the top four. The average leave voter held unrealistic expectations about all of these, so it is reasonable to assume that some voters chose leave on the basis of these issues.

Where did this bad information come from? How can voters have come to believe a case for Brexit even more optimistic than the Leave campaign’s own? We do know that newspaper coverage, which traditionally leans to the right in the UK, was strongly skewed toward Brexit. Weighted by number of readers, newspaper articles were about 80% in favour of leave, even while the country as a whole was almost perfectly split. Meanwhile, the broadcast media were scrupulously balanced.

Article 50 has not yet been triggered. The electorate now has a genuine opportunity to understand Brexit’s implications for the economy and immigration. If opinion polls show a significant shift in the light of this new information, that shift should be allowed to influence MPs’ views; they should not feel bound by the referendum. The referendum did not convey an unassailable mandate based on the will of the people.

Edit: Reading Vernon Bogdanor, I find myself slightly convinced by an idea similar to rule utilitarianism. Perhaps you can’t worry about achieving genuine democracy in every individual vote; instead you have to set up the institution of voting and honour it regardless of the nuances of each referendum or election. Perhaps the damage to public trust is not worth the improvement in decision making.

We have so many aspirations for big data and evidence-based policy, but apparently a fatally limited capacity to see the obvious: voters were furious about immigration and the EU. Techniques exist to build better empirical evidence about the issues that matter to citizens; we should use them or risk a repeat of the referendum.

Commentators from all over the spectrum believe that the leave vote represents not (only) a desire to leave the EU, but also the release of a tidal wave of pent-up anger. That anger is often presumed to be partly explained by stagnating living standards for large parts of the population. As the first audience question on the BBC’s Question Time programme put it to the panel: “Project Fear has failed, the peasants have revolted, after decades of ignoring the working class how does it feel to be punched on the nose?”. The Daily Mail’s victorious front page said the “Quiet people of Britain rose up against an arrogant, out-of-touch, political class”. The message is not subtle.

Amazingly, until the vote, no one seemed to have known anything: markets and betting odds all suggested remain would win. Politicians, even those on the side of Leave, thought Brexit was unlikely. The man bankrolling the Brexit campaign lost a fortune betting that it wouldn’t actually happen (the only good news I’ve seen in days). Niall Ferguson was allegedly paid $500,000 to predict that the UK would remain.

This state of ignorance contrasts radically with what we do know about the country. We know, in finicky detail, the income of every person and company. We measure changes in price levels, productivity, house prices, interest rates, and employment. Detailed demographic and health data are available – we have a good idea of what people eat, how long they sleep for, where they shop, we even have detailed evidence about people’s sex lives.

Yet there seems to have been very little awareness of (or weight attached to) what the UK population itself was openly saying in large numbers.

Part of the reason must be that the government didn’t want to hear. Post-crisis, everything was refracted through the prism of TINA – There Is No Alternative. There was no money for anything, so why even think about it? Well, now we have an alternative.

The traditional method for registering frustration is obviously to vote – a channel which was jammed at the last election. Millions of people voted for UKIP or for the Green Party, and got one MP apiece: no influence for either point of view. A more proportional voting system is one well-known idea, and I think an excellent one, but there are lots of other possibilities too.

What if there were a more structured way to report on citizens’ frustrations on a rolling basis? An Office for Budget Responsibility, but for national sentiment – preparing both statistical and qualitative reports that act as a radar for public anger. It would have to go beyond existing ‘issue tracking’ polling to provide something more comprehensive and persuasive. Perhaps the data could be publicly announced with the same fanfare as quarterly GDP.

Consultative processes at the local level are much more advanced than at the national level. Here is some of the current thinking on the best ways to build a national ‘anger radar’, drawing on methods widely used at the local level.

Any such process faces the problem of ‘strategic behaviour’. If someone asks your opinion on immigration, you might be tempted to pretend you are absolutely furious about it, even if you are only mildly piqued by the topic. Giving extreme answers might seem like the best way to advocate for the change you want to see, but such extreme responses could mask genuinely important signals. Asking respondents to rank responses in order, or to assign monetary values to outcomes, are classic ways to help mitigate strategic behaviour.

Strategic behaviour can also be avoided by looking at actions that are hard to fake. Economists refer to these as ‘revealed’ preferences – often revealed by the act of spending money on something. It’s awful to think about, but house prices might encode public opinions on immigration. If house prices are lower in areas of high immigration, that might reveal the extent to which citizens truly find it to be an issue. Any such analysis would have to use well-established techniques for removing confounding factors, for example accounting for the fact that immigration might disproportionately flow to areas with lower house prices anyway. This approach might not be relevant to the issues in the EU referendum, but could matter for other national policies. Do people pay more for a house that falls in the catchment of an academy school, for example? (More technical detail on all these approaches.)
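To make the shape of such an analysis concrete, here is a minimal sketch assuming a hypothetical area-level dataset; the variable names, numbers and library choice are illustrative, not anything used in practice:

```python
# A toy hedonic regression: does local migrant share predict house prices
# once obvious confounders are controlled for? All data here is synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500  # hypothetical local areas

df = pd.DataFrame({
    "migrant_share": rng.uniform(0, 0.3, n),        # assumed share of migrants
    "median_income": rng.normal(28_000, 6_000, n),  # local earnings control
    "london": rng.integers(0, 2, n),                # crude region control
})
# Synthetic prices, driven mainly by income and region in this toy example.
df["log_price"] = (
    11.5 + 0.00002 * df.median_income + 0.4 * df.london + rng.normal(0, 0.1, n)
)

# The coefficient on migrant_share is the quantity of interest: after
# controlling for income and region, do buyers pay less (or more) to live
# in high-migration areas?
model = smf.ols("log_price ~ migrant_share + median_income + london", data=df).fit()
print(model.params["migrant_share"], model.pvalues["migrant_share"])
```

A real study would also have to deal with migrants settling where housing is already cheap – exactly the confounding problem mentioned above – for example with panel data or instruments.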

Social media is another source of data. Is the public discourse, as measured on Twitter or Facebook (if they allowed access to the data), increasingly mentioning immigration? What is the sentiment expressed in those discussions? Certainly a crude measure, but perhaps part of a wider analysis – and ultimately no cruder than the methods used to estimate inflation.
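As a toy illustration of the kind of ‘issue salience’ tracking this implies (the posts below are invented stand-ins; real data would have to come from the platforms’ APIs, which is exactly the access problem just mentioned):

```python
# Toy salience tracker: what fraction of posts mention a topic each month?
# The posts are invented placeholders; sentiment scoring could be layered on top.
from collections import defaultdict
from datetime import date

posts = [
    (date(2016, 3, 2), "fed up with the EU and immigration"),
    (date(2016, 3, 9), "great football last night"),
    (date(2016, 4, 1), "immigration is the only thing anyone talks about"),
    (date(2016, 4, 15), "house prices are mad"),
]
keywords = {"immigration", "migrants", "borders"}

totals, hits = defaultdict(int), defaultdict(int)
for day, text in posts:
    month = (day.year, day.month)
    totals[month] += 1
    if any(k in text.lower() for k in keywords):
        hits[month] += 1

for month in sorted(totals):
    print(month, f"{hits[month] / totals[month]:.0%} of posts mention the topic")
```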

All these approaches are valuable because they tell us about ‘raw’ sentiment – what people believe before they are given a space to reflectively consider. ‘Raw’ views are important since they are the ones that determine how people will act, for example at a referendum.

But that is not enough on its own. As discussed in a previous post, good policy will also be informed by knowledge of what people want when they have thought more deeply and have the information to act in their own best interests. These kinds of views could be elicited using processes such as the RSA’s recently announced Citizens’ Economic Council, where 50-60 (presumably representative) citizens will be given time and resources to help them think deeply about the economic issues of the day, and subsequently give their views to policy makers.

Delib, a company that provides digital democracy software, offers a budget simulator which achieves a similar goal. The affordances of the interface mean that users have to allocate a fixed budget between different options using sliders. In the process of providing a view, users become aware of the various compromises that must be made, and so deliver a more informed decision.

We live in a society where more data is available about citizens’ behaviour than ever before. As is widely discussed, that represents a privacy challenge that is still being understood. But the same data represents an opportunity for governments to be responsive in new ways. Did the intelligence services know which way the vote would go from their clandestine monitoring of our private communications? Who knows.

We cannot predict everything – famously, a single Tunisian street vendor’s protest set off the whole of the Arab Spring. But we can see the contexts that make that kind of volatility possible, and I believe the anti-immigration context could easily have been detected in the run-up to the referendum.

There is no longer any reason for a referendum about the EU to become a channel for anger about tangentially related issues. The political class would not have been ‘punched on the nose’ if they were a little better at listening.

Hat tip: Thanks to the Delib Twitter account, which has been keeping track of the conversation about new kinds of democracy post-Brexit, and which I’ve drawn on in this post.

There seem to be three possible ways forward from the current position, all of which are disastrous for democracy. I have no idea which is most likely; all are very bad, and all represent a betrayal of voters – especially those who voted to leave.

Leaving the EU and the single market is the simplest proposition – in democratic terms it would allow a government to deliver on the key pledges of immigration controls and bringing law-making back to Westminster. However, the extreme financial situation the UK would likely find itself in would almost certainly make £15bn of extra investment in the NHS impossible. The costs to jobs and wages would be appalling, ‘Britain’s service economy would be cut up like an old car‘, and the nation would be in deep economic shock.

Ignoring the referendum (unless there is another general election) would obviously be an enormous affront to democracy, and the tabloid newspapers would howl with rage. The unexpectedly large constituency who voted leave, who already believe they are ignored and forgotten, would rightly be incensed. Such an option could easily lead to the rise of extremist parties.

The UK remains in the single market but out of the EU — the Norway option, the middle ground. Norway pays an enormous monetary price for access to the single market; if the UK got a similar deal, there would be no spare cash to spend on the NHS. Norway accepts free movement of people, breaking the Leave campaign’s promise of border controls. Finally, Norway obeys many of the EU’s laws in order to gain access to the single market, and has no say during the process of EU legislation – which is difficult to square with Leave’s ‘taking back control’ motto.

The UK will not get an exact copy of the Norway deal. Perhaps a better deal can be struck? Someone, presumably Boris, would have to achieve a heroic feat of negotiation. He does not start from a good position: on a personal level, he has been lambasting the EU for months, even comparing the organisation to the Nazis. Many European leaders fear that a good deal for Britain would encourage discontent in their own countries, and may want to make an example of the UK. Watching David Cameron’s resignation speech must have had a visceral effect on other European leaders.

According to the rules of the Article 50 process, the UK will not be in the room for exit negotiations; results would be presented to the UK as a fait accompli, and if no agreement is found after two years, we are automatically ejected. The single market option has already been explicitly ruled out by several leading European politicians, so it looks set to be an uphill battle. Just in case it wasn’t hard enough, Scotland could leave, or threaten to leave, the UK during the negotiations – possibly to join the EU, maybe even the Euro.

It looks as though Boris hopes to find some combination of the Norway deal that keeps watered-down versions of his promises, probably achieved mostly through obfuscation. His Telegraph column sets out an impossible wishlist of access to the single market, border controls and savings in EU contributions which he will certainly never deliver.

This is, I believe, the most dysfunctional example of democracy of all three options. The electorate have been sold an impossible dream of ‘taking control’, lower immigration and a windfall saving in EU contributions. Under the Norway option, it will not be clear that any of this has been delivered.

We all know that political parties renege on their manifesto promises, but the Leave campaign set a new low. Within 48 hours of the result they had explicitly denied feeling at all bound to deliver lower immigration or increased NHS spending. The audacity is comedic: there are pictures of all the leading Leave campaigners standing in front of campaign buses emblazoned with huge slogans which they now claim mean nothing. Perhaps they believe technicalities about which leave campaign said what, or whether their slogans were commitments or more like ‘serving suggestions’, will save them. They should consider what happened to the Lib Dems when they (quite reasonably) blamed their broken tuition fee pledge on the coalition.

Before the referendum, no one had realised how much anger was directed at the political classes. After the referendum, there are only reasons for that anger to grow. In Norway-style scenarios, Leave voters will get only the palest imitations of the policies they believe they voted for, and at a terrible, terrible cost. Leaving the EU might cause a recession, and will certainly cost jobs. Then there are the tens, possibly hundreds, of billions of pounds in foregone GDP. Government policy of any kind will be on hold for years as we renegotiate. The cost of government borrowing could spiral. Scientific and medical research will be disrupted and damaged. UK citizens will find travelling and working in the EU harder.

Most importantly, many Leave voters, already from poor areas, will fall into even worse poverty. Boris’s stall, as he set it out in the Telegraph, is about throwing off the ‘job destroying coils of EU bureaucracy’. The idea that removing workers’ rights is going to play a big part in reducing inequality is a fairy tale. Leave voters are almost certain to see things getting worse, not better, even if they are temporarily satisfied to have ‘taken back control’.

For a country that everyone recognises is divided and wounded, all of the routes forward point to ever more poverty, pain and division.


Most democratic countries use representative democracy – you vote for someone who makes decisions on your behalf (in the UK’s case, your MP). The EU referendum is different: it’s an example of direct democracy. Bypassing their representatives, every citizen who is eligible to vote is asked to make the decision themselves.

The referendum has this feature in common with most participatory design processes (by PD I mean including end users in the process of designing a product or service). PD is normally carried out with the stakeholders themselves, not representatives of them. You could think of the referendum as a participatory design process, designing a particular part of the UK’s economic and foreign policy.

The EU referendum fails as a participatory design process in two important ways. Firstly, most of the participants are deeply ill-informed about the issues at hand, and under these circumstances it will be impossible for them to act in their own best interests. The consequences of their design decision may well run counter to their expectations.

An Ipsos MORI survey shows that on average UK voters believe 15% of the population are EU migrants, when in fact only 5% are. On provocative issues such as the percentage of child benefit paid to children living elsewhere in Europe, people overestimate by nearly two orders of magnitude (the true figure is about 0.3%, yet 1 in 4 respondents estimated more than 24%).

Richard Dawkins has noted that very few people know all the relevant details to cast a vote, and laments the bizarre logic often used in discussions. He recommends voting ‘remain’ in line with a ‘precautionary principle’, and offers the following quote to illustrate the level of debate on TV:

“Well, it isn’t called Great Britain for nothing, is it? I’m voting for our historic greatness.”

Of course, it’s a question of degree. It would be unreasonable to suggest only a tiny number of world-leading experts can voice meaningful opinions. But there does seem to be a problem when decision makers are systemically wrong about the basic facts.

The second way the EU referendum fails is that the participants do not reflect the makeup of the country as a whole. Much of the speculation on the outcome focuses on turnout – which age groups and social classes will make the effort to cast a vote. Yet it hardly seems fair that such an important decision will be taken by a self-selecting group. Criticism of participatory design projects often rightly centres on the demographic profile of the participants, especially when more vocal or proactive groups override others. If young people were more inclined to vote, the chances of a remain result would increase dramatically. If people with lower incomes were more likely to vote, it would boost leave. I take this to be a serious problem with the voting mechanism.

These are difficult problems to solve. How can a participatory process have well-informed participants and accurately reflect the demographics of the country, while offering everyone the chance to vote?

Harry Farmer has suggested that the rising number of referendums in the UK tells us we need to reform the way we do representative democracy, rather than resorting to bypassing it. Representatives have the time and resources to become well informed on issues, so in theory they would make better decisions. However, this does nothing to address the issue of turnout – MPs are themselves selected by voters who are disproportionately well off and older. And MPs themselves are very far from reflecting the demographics of the UK as a whole.

Two more radical solutions have been put forward by Stanford professor James Fishkin. In his ‘deliberation day’ model, the whole country would be given the day off to learn about, discuss, and vote on a topic, perhaps on an annual basis. Participation would be encouraged with a $150 incentive. The advantage is that (almost) everyone is included, and the incentive ought to be enough to ensure most demographics are well represented. The participants would also be well informed, having been given the day to think deeply in a structured way. However, it’s clearly a massive logistical and political challenge to implement ‘deliberation day’.

Fishkin’s other suggestion is to give up on inclusion – the attempt to let everyone get involved – and instead use ‘deliberative democracy’. In this scenario, a sample of the population, chosen to reflect the demographic makeup of the country as a whole, comes together for a weekend to discuss and learn about an issue before casting votes. This gives us well-informed participants who are demographically reflective of the country as a whole. The model is roughly similar to jury service. The drawback is that some people may find it unfair to have a small, unelected group make a decision that affects everyone.

Making participation freely open to all stakeholders while ensuring that the participants are well informed and demographically representative is difficult in any participatory design process. Some may feel that the opportunity to participate is enough, and that if the young, or the less well off, decide not to vote that’s up to them.

However, voters holding incorrect beliefs about the basic facts seems to me to point to a fundamentally broken process, where any decisions made are unlikely to turn out well. In classic participatory design projects, approaches such as prototyping, iteration and workshopping can help participants improve their understanding of the situation and empower them to make decisions in their own interests.

Are there similar approaches we could take to improve national decision making? Perhaps in the UK we could look at the structure of the press, and ask whether having a tiny number of extremely rich newspaper proprietors hold sway over public opinion isn’t a serious problem for a country pretending to be a democracy.

Yesterday we had a really great round table about supply chains and manufacturing, hosted by Future Makespaces. Supply chains touch on so many political topics: they matter intensely for labour conditions, wages, immigration, the environment and the diffusion of culture. At the same time they remain mostly invisible: they have a dispersed physical manifestation, and subsist in innumerable formal and informal social relations.

Governments publish some data on supply chains, but it provides only a very low-resolution picture. Jude Sherry told us that trying to locate manufacturers legally registered in Bristol often proved impossible. The opposite proved true as well – there are plenty of manufacturers in Bristol who do not appear in official data.

Supply chains are especially salient now because technology is changing their structure. The falling cost of laser cutters and 3D printers is democratising processes once only possible in large scale manufacturing – thus potentially shortening the logistical pathway between manufacturer and consumer; perhaps even bringing manufacturing back into the cities of the developed world.

What I took from the round table was the surprising diversity of approaches to the topic – as well as a chance to reflect on how I communicate my position.

If your writing, design, or artistic practice is about making the invisible visible, then supply chains are a rich territory – a muse for your work, and an agent of change in the materials and processes you can work with. I took Dr Helge Mooshammer and Peter Mörtenböck to be addressing this cultural aspect with their World of Matter project and their Other Markets publications. Emma Reynolds told us about the British Council’s Maker Library Network, which I think you could see as an attempt to instrumentalise that cultural output.

Michael Wilson (who was in the UK for The Arts of Logistics conference), came to the topic from an overtly political direction, casting the debate in terms familiar from Adam Curtis’ All Watched Over by Machines of Loving Grace; positioning himself in relation to capitalism, neo-liberalism and anarchy. His Empire Logistics project aims to explore the supply infrastructure of California. He highlighted the way that supply chains had responded to the unionisation of dock workers in California by moving as many operations as possible away from the waterfront, and to an area called, poetically, the Inland Empire.

The ‘small p’ political also featured – Adrian McEwan told us about his local impact through setting up a Makespace in Liverpool. James Tooze told us about the modular furniture system he’s working on with OpenDesk – designed to reduce waste generated by office refits and be more suited to the flexible demands that startups make of their spaces.

My perspective is based mostly on ideas from the discipline of economics. I described the Hayekian idea of the market as a giant computer that efficiently allocates resources: through the profit motive, the market solves the knowledge problem, and it does so in a way that cannot be improved upon.

Even if I don’t myself (completely) subscribe to this point of view, it is well embedded with policy makers, and I think needs to be addressed and rebutted.

Every attempt to actively change supply chains, from the circular economy to makespaces, faces a challenge from Hayekian reasoning: if a new system of supply was a good idea, why hasn’t the market already invented it?

I see my work as using the language of economics to position design practices that seek to augment or transcend that market logic. In particular, I think Elinor Ostrom’s work offers a way to acknowledge human factors in the way exchanges take place, as well as providing design principles based on empirical research.

One surprise was the divergence of views on where the ambitions of the ‘maker movement’ should lie. Should it aim for a future where a significant fraction of manufacturing happens in makespaces? Or would that mean the movement had been co-opted? Is its subcultural status essential?

I realised that when I try to explain my position in future I need to address questions like why I’ve chosen to engage with economic language and try to illustrate how that dovetails with cultural and political perspectives.

 

TL;DR: Almost everyone thinks academic publishing needs to change. What would a better system look like? Economist Elinor Ostrom gave us design principles for an alternative – a knowledge commons, a sustainable approach to sharing research more freely. This approach exemplifies using economic principles to design a digital platform. 

Why is this relevant right now? 

The phrase ‘Napster moment’ has been used to describe the current situation. Napster made MP3 sharing so easy that the music industry was forced to change its business model. The same might be about to happen to academic publishing.

In a recent Science Magazine reader poll (admittedly unrepresentative), 85% of respondents thought pirating papers from illicit sources was morally acceptable, and about 25% said they did so weekly.

Elsevier – the largest for-profit academic publisher – is fighting back. They are pursuing the SciHub website through the courts. SciHub is the most popular website offering illegal downloads, and has virtually every paper ever published.

In another defensive move, Elsevier has recently upset everyone by buying Social Science Research Network – a highly successful not-for-profit website that allowed anyone to read papers for free.

Institutions that fund research are pushing for change, fed up with a system where universities pay for research, but companies like Elsevier make a profit from it. Academic publishers charge universities about $10Bn a year, and make unusually large profits.

In the longer term, the fragmentation of research publishing may be unsustainable. Over a million papers are published every year, and research increasingly requires academics to understand multiple fields. New search tools are desperately needed, but they are impossible to build when papers are locked away behind barriers.

How should papers be published? Who should pay the costs, and who should get the access? Economist and Nobel laureate Elinor Ostrom pioneered the idea of a knowledge commons to think about these questions.

What is a knowledge commons? 

A commons is a system where social conventions and institutions govern how people contribute to and take from some shared resource. In a knowledge commons that resource is knowledge.

You can think of knowledge, embodied in academic papers, as an economic resource just like bread, shoes or land. Clearly knowledge has some unique properties, but this assumption is a useful starting point.

When we are thinking about how to share a resource, Elinor Ostrom, in common with other economists, asks us to consider whether the underlying resource is ‘excludable’ or ‘rivalrous’.

If I bake a loaf of bread, I can easily keep it behind a shop counter until someone agrees to pay money in exchange for it – it is excludable. Conversely, if I build a road it will be time consuming and expensive for me to stop other people from using it without paying – it is non-excludable.

If I sell the bread to one person, I cannot sell the same loaf to another person – it is rivalrous. However, the number of cars using a road makes only a very small difference to the cost of providing it. Roads are non-rivalrous (at least until traffic jams take effect).

|  | Excludable | Non-excludable |
| --- | --- | --- |
| Rivalrous | Market goods: bread, shoes, cars | Common-pool resources: fish stocks, water |
| Non-rivalrous | Club goods: gyms, toll roads (academic papers) | Public goods: national defence, street lighting (academic papers) |

Most economists think markets (where money is used to buy and sell – top left in the grid) are a good system for providing rivalrous, excludable goods – bread, clothes, furniture etc. – perhaps with social security in the background to provide for those who cannot afford necessities.

But if a good is non-rivalrous, non-excludable, or both, things get more complicated and markets become less effective. This is why roads are usually provided by a government rather than a market – though for-profit toll roads do exist.

The well-known ‘tragedy of the commons’ is an example of this logic playing out. The thought experiment concerns a rivalrous, non-excludable natural resource – often the example given is a village with common pasture land shared by everyone. Each villager has an incentive to graze as many sheep as they can on the shared pasture, because then they will have nice fat sheep and plenty of milk. But if everyone behaves this way, the unsustainably large flocks will collectively eat all the grass and destroy the common pasture.
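A toy version of the incentive at work, with numbers invented purely to show why individual and collective interests diverge:

```python
# Toy payoffs for the overgrazing story. All numbers are illustrative only.
villagers = 10
benefit_per_sheep = 10   # milk and wool, kept entirely by the sheep's owner
damage_per_sheep = 30    # pasture degradation, shared equally by all villagers

# Seen by one villager, an extra sheep is worth it: they keep all the benefit
# but bear only a tenth of the damage.
private_gain = benefit_per_sheep - damage_per_sheep / villagers
print(f"gain to the owner: {private_gain:+.0f}")       # +7

# Seen by the village as a whole, the same sheep destroys value.
collective_gain = benefit_per_sheep - damage_per_sheep
print(f"gain to the village: {collective_gain:+.0f}")  # -20
```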

The benefit accrues to the individual villager, but the cost falls on the community as a whole. The classic economic solution is to put fences up and turn the resource into an excludable, market-based system. Each villager gets a section of the common to own privately, which they can buy and sell as they choose.

Building and maintaining fences can be very expensive – if the resource is something like a fishing ground, it might even be impossible. The view that building a market is the only good solution has been distilled into an ideology, and, as discussed later, that ideology led to the existence of the commercial academic publishing industry. As the rest of this post will explain, building fences around knowledge has turned out to be very expensive.

Ostrom positioned herself directly against the ‘have to build a market’ point of view. She noticed that in the real world, many communities do successfully manage commons.

Ostrom’s Law: A resource arrangement that works in practice can work in theory.

She developed a framework for thinking about the social norms that allow effective resource management across a wide range of non-market systems – a much more nuanced approach than the stylised tragedy-of-the-commons thought experiment. Her analysis calls for a more realistic model of the villagers, who might realise that the common is being overgrazed, call a meeting, and agree a rule about how many sheep each person is allowed to graze. They are designing a social institution.

If this approach can be made to work, it saves the cost of maintaining the fences while avoiding the overgrazing that damages the common land.

The two by two grid above has the ‘commons’ as only one among four strategies. In reality, rivalry and excludability are questions of degree, and can be changed by making different design choices.

For this analysis, it’s useful to use the word ‘commons’ as a catchall for non-market solutions.

Ostrom and Hess published a book of essays, Understanding Knowledge as a Commons, arguing that we should use exactly this approach to understand and improve academic publishing. They argue for a ‘knowledge commons’.

The resulting infrastructure would likely be one or more web platforms. The design of these platforms will have to take into account the questions of incentives, rivalry and exclusion discussed above.

What would a knowledge commons look like? 

Through extensive real world research, Ostrom and her Bloomington School derived a set of design principles for effectively sharing common resources:

  1. Define clear group boundaries.
  2. Match rules governing use of common goods to local needs and conditions.
  3. Ensure that those affected by the rules can participate in modifying the rules.
  4. Make sure the rule-making rights of community members are respected by outside authorities.
  5. Develop a system, carried out by community members, for monitoring members’ behavior.
  6. Use graduated sanctions for rule violators.
  7. Provide accessible, low-cost means for dispute resolution.
  8. Build responsibility for governing the common resource in nested tiers from the lowest level up to the entire interconnected system.

These principles can help design a system where there is free access while preventing collapse from abusive treatment.

Principle 1 is already well addressed by the existence of universities, which give us a clear set of internationally comparable rules about who is officially a researcher in which area – doctorates, professorships etc. These hierarchies could also indicate who should participate in discussions about designing improvements to the knowledge commons, in accordance with principles 2 and 3. This is not to say that non-academics would be excluded, but that there is an existing structure which could help with decisions such as who is qualified to carry out peer review.

In a knowledge commons utopia, all the academic research ever conducted would be freely available on the web, along with all the related metadata – authors, dates, who references whom, citation counts etc. A slightly more realistic scenario might have all the metadata open, plus papers published from now forward.

This dataset would allow innovations that could address many of these design principles. In particular, in accordance with principle 5, it would allow for the design of systems measuring ‘demand’ and ‘supply’. Linguistic analysis of papers might start to shine a light on who really supplies ideas to the knowledge commons, by following the spread of ideas through the discourse. The linked paper describes how to discover who introduces a new concept into a discourse, and track when that concept is widely adopted.
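As a crude sketch of the shape of that computation (the papers below are invented, and the real methods in the linked paper are far more sophisticated than simple substring matching):

```python
# For each candidate concept, find the earliest paper that uses it and then
# count how many later papers adopt it. The papers are invented stand-ins.
papers = [
    {"year": 2001, "author": "A", "text": "we introduce the notion of a knowledge commons"},
    {"year": 2004, "author": "B", "text": "building on the knowledge commons framework"},
    {"year": 2006, "author": "C", "text": "a survey of club goods"},
    {"year": 2009, "author": "D", "text": "the knowledge commons applied to software"},
]

def adoption(concept, papers):
    users = sorted((p for p in papers if concept in p["text"]), key=lambda p: p["year"])
    if not users:
        return None
    origin, followers = users[0], users[1:]
    return {"introduced_by": origin["author"],
            "year": origin["year"],
            "later_adoptions": len(followers)}

print(adoption("knowledge commons", papers))
# {'introduced_by': 'A', 'year': 2001, 'later_adoptions': 2}
```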

This could augment crude citation counts, helping identify those who supply new ideas to the commons. What if we could find out what papers people are searching for but not finding? Such data might serve as a proxy for ‘demand’ – telling researchers where to focus their creative efforts.

Addressing principle 6, there is much room for automatically detecting low-quality ‘me-too’ papers, or outright plagiarism – a simple sketch of how that might start follows below. Or perhaps it would be appropriate to establish a system where new authors have to be sponsored by existing authors with a good track record – a system which the preprint site arXiv currently implements. (Over-publication is interestingly similar to the overgrazing of a common pasture: abusing the system for personal benefit at a cost to the group.)
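One very simple way such monitoring could start is off-the-shelf text similarity. A sketch only, with invented abstracts; real plagiarism detection is considerably more involved:

```python
# Flag pairs of abstracts that are suspiciously similar, using TF-IDF
# cosine similarity. The abstracts are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

abstracts = [
    "We measure the effect of open access mandates on citation counts.",
    "We measure the effects of open access mandates on citation counts.",
    "A qualitative study of maker spaces in post-industrial cities.",
]
tfidf = TfidfVectorizer().fit_transform(abstracts)
sim = cosine_similarity(tfidf)

threshold = 0.8  # chosen purely for illustration
for i in range(len(abstracts)):
    for j in range(i + 1, len(abstracts)):
        if sim[i, j] > threshold:
            print(f"Papers {i} and {j} look near-identical (similarity {sim[i, j]:.2f})")
```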

Multidisciplinary researchers could benefit from new ways of aggregating papers that do not rely on traditional journal-based categories, and visualisations of networks of papers might help us orient ourselves in new territory more quickly.

All of these innovations, and many others that we cannot foresee, require a clean, easily accessible data set to work with.

These are not new ideas. IBM’s Watson is already ingesting huge amounts of medical research to deliver cancer diagnoses and generate new research questions. But the very fact that only companies with resources like IBM’s can get to this data confirms the point about the importance of the commons. Even then, they are only able to look at a fraction of the total corpus of research.

But is the knowledge commons feasible?

How, in practical terms, could a knowledge commons be built?

Since 1665, the year the Royal Society was founded, about 50 million research papers have been published. As a back-of-the-envelope calculation, that’s about 150 terabytes of data, which would cost roughly $4,500 a month to store on Amazon’s cloud servers. Obviously, just storing the data is not enough.
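The arithmetic behind that estimate, with the per-paper size and per-gigabyte price as assumptions rather than quoted figures:

```python
# Back-of-the-envelope storage cost. Both inputs are assumptions.
papers = 50_000_000          # papers published since 1665
mb_per_paper = 3             # assumed average size of a paper (PDF + metadata)
price_per_gb_month = 0.03    # assumed cloud object-storage price, USD

total_tb = papers * mb_per_paper / 1_000_000
monthly_cost = total_tb * 1_000 * price_per_gb_month
print(f"{total_tb:.0f} TB, roughly ${monthly_cost:,.0f}/month")
# -> 150 TB, roughly $4,500/month
```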

So is there a real-world example of running this kind of operation? Wikipedia stores a similar total amount of data (about 40 million pages). It also has functionality that supports about 10 edits to those pages every second, and it is one of the 10 most popular sites on the web. Including all the staffing and servers, it costs about $50 million per year.

That is less than 1% of what the academic publishing industry charges every year. If the money that universities spend on access to journals were saved for a single year, it would be enough to fund an endowment that would make academic publishing free in perpetuity – a shocking thought.
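To make the endowment claim concrete (the running cost comes from the Wikipedia comparison above, and the return figure is an assumption):

```python
# Could one year's journal spend fund open publishing forever?
annual_publisher_spend = 10_000_000_000   # ~$10bn/year, quoted above
wikipedia_like_running_cost = 50_000_000  # ~$50m/year, from the Wikipedia comparison
assumed_real_return = 0.03                # assumed conservative endowment return

print(f"Running cost is {wikipedia_like_running_cost / annual_publisher_spend:.1%} "
      "of the current annual spend")
endowment_income = annual_publisher_spend * assumed_real_return
print(f"A one-off $10bn endowment at 3% real return yields "
      f"${endowment_income / 1e6:.0f}m/year – "
      f"{endowment_income / wikipedia_like_running_cost:.0f}x the running cost")
```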

What’s the situation at the moment? 

Universities pay for the research that results in academic papers. Where papers are peer-reviewed, the reviewing is mostly done by salaried university staff who don’t charge publishers for their time. The cost of producing a paper, to an academic publisher, is therefore more or less typesetting plus admin.

Yet publishers charge what are generally seen as astronomical fees. An annual licence to access a journal often costs many thousands of pounds. University libraries, which may have access to thousands of journals, pay millions each year in these fees. As a member of the public, you can download a paper for about $30 – and a single paper is often valueless without the network of papers it references. The result is an industry worth about $10bn a year, with profit margins often estimated at 40%. (Excellent detailed description here.)

I’ve heard stories of academics having articles published in journals their own university does not have access to. They can write the paper, but their colleagues cannot subsequently read it – which is surely the opposite of publishing. There are many papers that I cannot access from my desk at the Royal College of Art, because the university has not purchased access. The RCA does have an arrangement with UCL allowing me to use their system, so I have to go across town just to log onto the internet via UCL’s wifi. This cannot make sense for anyone.

I’m not aware of any similar system. It’s a hybrid of public funding plus a market mechanism. Taxpayers’ money is spent producing what looks like a classic public or commons good (knowledge embodied in papers) – free to everyone, non-rivalrous and non-excludable. That product is then handed over to a private company, for free, and the private company makes a profit by selling it back to the organisations that produced it. Almost no one (except the publishers) believes this represents value for money.

Overall, in addition to being a drain on the public purse, the current system fragments papers and associated metadata behind meaningless artificial barriers.

How did it get like that?

Nancy Kranich, in her essay for the book Understanding Knowledge as a Commons, gives a useful history. She highlights the Reagan-era ideological belief (mentioned earlier) that the private sector is always more efficient, plus the short-term incentive of the one-time profit to be had by selling off an in-house journal. That seems to be about the end of the story, although in another essay in the same book Peter Suber points out that many high-level policy makers do not know how the system works – which might also be a factor.

If we look to Ostrom’s design principles, we cannot be surprised at what has happened. Virtually all the principles (especially 4, 7 and 8) are violated when a commons is dominated by a small number of politically powerful, for-profit institutions who rely on appropriating resources from that commons. It’s analogous to the way industrial fishing operations are able to continuously frustrate legislation designed to prevent ecological disaster in overstrained fishing grounds by lobbying governments.

What are the current efforts to change the situation?

In 2003 the Bethesda Statement on Open Access indicated that the Howard Hughes Medical Institute and the Wellcome Trust, which between them manage an endowment of about $40bn, wanted research funded by them to be published Open Access – and that they would cover the costs. This seems to have set the ball rolling, although the situation internationally is too complex to easily unravel.

Possibly, charities led the way because they are free of the ideological commitments of governments, as described by Kranich, and less vulnerable to lobbying efforts by publishers.

Focusing on the UK: since 2013, Research Councils UK (which disburses about £3bn to universities each year) has insisted that work it funds should be published Open Access. The details, however, make this rule considerably weaker than you might expect. RCUK recognises two kinds of Open Access publishing.

With Gold Route publishing, a commercial publisher will make the paper free to access online, and publish it under a Creative Commons licence that allows others to do whatever they like with it – as long as the original authors are credited. The commercial publisher will only do this if they are paid – rates vary but can be up to £5,000 per paper. RCUK has made a £16 million fund available to cover these costs.

Green Route publishing is a much weaker type of Open Access. The publisher grants the academics who produced the paper the right to ‘self-archive’ – i.e. make the paper available through their university’s website. It is covered by a Creative Commons licence that allows other people to use it for any non-commercial purpose, as long as they credit the author. However, there can be an embargo of up to three years before the academics are allowed to self-archive their paper. There are also restrictions on which sites they can publish the paper on – for example, they cannot publish it on a site that mimics a conventional journal. Whether sites such as Academia.edu are acceptable is currently the subject of debate.

Is it working?

In 1995, Forbes predicted that commercial academic publishers had a business model that was about to be destroyed by the web. That makes sense – after all, the web was literally invented to share academic papers. Here we are, 21 years later, and academic publishers still exist, and still have enormous valuations. Their shareholders clearly don’t think they are going anywhere.

Elsevier is running an effective operation to prevent innovation by purchasing competitors (mendeley.com) or threatening them with copyright actions (academia.edu and SciHub). Even if newly authored papers are published open access, the historical archive will remain locked away. However, there is change.

Research Councils UK carried out an independent review in 2014 in which nearly all universities were able to report publishing at least 45% of papers as open access (via green or gold routes) – though the report is at pains to point out that most universities don’t keep good records of how their papers are published, so this figure could be inaccurate.

In fact the UK is doing a reasonable job of pursuing open access, and globally things are slowly moving in the right direction. Research is increasingly reliant on pre-prints hosted on sites like arXiv, rather than official journals, which move too slowly.

Once a database of all 50 million academic papers is gathered in one place (which SciHub may soon achieve), it’s hard to see how the genie can be put back in the bottle.

If this is a ‘Napster moment’, the question is what happens next. Many people thought that MP3 sharing was going to be the end of the commercial music industry. Instead, Apple moved in and made a service so cheap and convenient that it displaced illicit file sharing. Possibly commercial publishers could try the same trick, though they show no signs of making access cheaper or more convenient.

Elinor Ostrom’s knowledge commons shows us that there is a sustainable, and much preferable, alternative – one that opens the world’s knowledge to everyone with an internet connection, and provides an open platform for innovations that can help us deal with the avalanche of academic papers published every year.


StoryMap is a project I worked on with Rift theatre company, Peter Thomas from Middlesex University, Angus Main (now at the RCA), and Ben Koslowski, who led the project. Oliver Smith took care of the tech side of things.

The challenge was very specific, but the outcome was an interface that could work in a variety of public spaces.

We were looking to develop an artefact that could pull together all the aspects of Rift’s Shakespeare in Shoreditch festival: four plays in four separate locations over 10 days, the central hub venue where audiences arrived, and the Rude Mechanicals – a roving troupe of actors who put on impromptu plays around Hackney in the weeks leading up to the main event.

We wanted something in the hub venue that gave a sense of geography to proceedings. In the 2014 Shakespeare in Shoreditch festival the audience were encouraged to contribute to a book of 1,000 plays (which the Rude Mechanicals used this year for their roving performances), and we felt the 2016 version ought to include a way for the audience to contribute too.

The solution we ended up with was a digital/physical hybrid map, with some unusual affordances. We had a large table with a map of Hackney and surroundings (reimagined as an island) routed into the surface.


We projected a grid onto the table top. Each grid square could have a ‘story’ associated with it, and squares with stories appeared white. Some of the stories were from the Twitter feed of the Rude Mechanicals, so from day one the grid was partially populated. Others were added by the audience.

You could read the stories using a console. Two dials allowed users to move a red cursor square around the grid. When it was on a square with a story, that story would appear on a screen in the console.


If there was no story on the square, participants could add one. We had sheets of paper with prompts written on them, which you could feed into a typewriter and tap out a response. Once you’d written your story, you put it in a slot in the console and scanned it with the red button. (Example – Prompt: ‘Have you been on a memorable date in Hackney?’; Response: ‘I’m on one now!’)

Nearly 300 stories were submitted over 10 days. Even though they were really difficult to use, people loved the typewriters as an input method. Speaking from my own perspective, I found an input method that legitimised spelling mistakes and typos less intimidating.

There were two modes of interaction: firstly, the table-based projection, which allowed a conversational, collective and discursive understanding of what had already been submitted; secondly, a more individual process of reading specific stories and adding your own using the screen in the console. The second mode still relied on the projection, because you needed to move your cursor to find or submit a story.

The resolution of the projection was too low (because of the size of the table) for fonts or details to be rendered well. From this perspective, the map routed into the table really worked; it increased the ‘bandwidth’ of the information the table could convey – fine lines and small text worked well (which gave us a chance to play around with whimsically renaming bits of Hackney).

Having a way to convey spatialised data on a table where people can gather round and discuss it, combined with a (potentially private) way to add detail, might work in a number of scenarios. Could it be a tool for planning consultations? A way to explore data spatialised in some other way, e.g. along a political spectrum or a timeline? Perhaps in a museum context?

The whole thing was developed as a web app, so it’s easy to extend across more screens, or perhaps to add mobile interaction. It’s opened my eyes to the fact that, despite all the noise around open data, there are relatively few ways to explore digital information in a collective, public way. The data is shared, but the exploration is always individual. More to follow…

(I did a quick technical talk on how we delivered StoryMap for Meteor London, slides here.)