Don’t you think Brexit is fun, in a way? It’s such a perfect forecasting challenge. Everyone has passionate views, so there’s a real chance you can outsmart the other commentators by remaining rational. The field is wide open: Brexiters are certain everything is going to be fine, while Remainers frequently allude to the Titanic. Some smart people are going to be wrong. We should have preliminary results in about five years – not that long.

I’m going to consider four Brexit scenarios and try to pick the most likely. In the spirit of being disinterested, I’ll start with a Brexit success scenario.

Brexit, and most people think it’s a success

Perceptions of Brexit will depend on two things: how well the UK fares, and how badly things go wrong for the EU. If the EU does unexpectedly badly and the UK does unexpectedly well, there is a scenario in which Brexit will look like a good choice.

There will almost certainly be problems in the Eurozone as countries have to square the circle of monetary union without political union. Also in the mix are Catalonia, the migration crisis, and, of course, ‘black swan’ unforeseeable problems. Of these, the fundamental problems in the Eurozone seem the largest in scale and the most inevitable. Strains on the EU will almost certainly get worse; the question is how much.

What about the positive story for the UK? Here it is: a non-ideological, pragmatic politics takes advantage of the UK’s new-found freedom to legislate quickly without getting bogged down in EU bureaucracy. Remainers are won over when fisheries and farming – areas which have been badly legislated by the EU – become both more efficient and, don’t worry Guardian readers, also greener. The UK works out how to legislate the gig economy equitably (another one for Guardianistas), and gets really nimble around tech legislation generally. Some EU migration is substituted for a more global approach, confounding accusations of xenophobia.

The shock of Brexit catalyses a new sense of purpose, including proactive and effective industrial policy. Questions of national unity galvanise genuine commitment to solving the north-south divide.

This picture is roughly what Philip Hammond thinks would be a good Brexit, according to his pre-budget interview with Andrew Marr. It’s also roughly what Vote Leave’s Dominic Cummings apparently has in mind, with a particular emphasis on science and education.

If this kind of Brexit happens, we might say something along the lines of: “I never believed in Project Fear anyway, and look at the EU now. Isn’t Britain kind of exceptional, in a pragmatic way? Look at all those philosophical French, mired in youth unemployment.”

Brexit as a cautionary tale

The key to the worst Brexit outcome is that it shouldn’t be too bad too quickly. If it gets really bad, it will get called off (see the Remain scenario below). For a maximally bad Brexit, ill-tempered negotiations need to interminably zombie-shuffle towards an unknown destination. Other countries circle the moribund UK, waiting to take advantage of our desperate need for trade deals. Faith in UK institutions dwindles among investors. Borrowing to service the national debt gets more expensive, and companies start building factories elsewhere. Unemployment ticks upwards; growth gets ever closer to zero.

At the 2022 election, in the context of another extension to the transition period, the UK has to choose between extremes. There’s Corbyn, with a stock-market-crashing, bond-market-titillating reversion to ’70s politics that promises to scare off the City and terrify big corporates.

Those who aren’t tempted by Labour can look to the Conservatives. After 15 years of austerity, low wages and growing economic divergence between London and the rest of the country, the Tories will offer a free-market antidote: low wages, austerity and ever more emphasis on London as a city-state tax haven for the global super-rich. If the Tories win, Scotland may have another go at winning a ‘yes’ in an independence referendum. Negotiations with the EU would be even more difficult.

Of the two options, Corbyn is by far the more palatable. The Tories called the referendum; Labour would arguably be in a better position to draw a line under it. Corbyn’s policies might alleviate people’s actual issues: housing, wages, geographic inequality. Any economic turbulence would probably be worth it, just to see the look on Nigel Farage’s face on election night.

The worst-case scenario for perceptions of Brexit needs a nice strong EU to make leaving look even more ridiculous. Here are some ideas. If the EU muddles through the Eurozone problems in some reasonable manner, it would not only be out of the woods, it would acquire a reputation for virtual invincibility in the face of seemingly intractable problems. It would make the breakup of the EU, apparently possible after 2008, seem inconceivable.

The migrant crisis is a difficult problem. It’s also an opportunity. The EU has already increased defence cooperation post-Brexit. What if migration crystallised the political will for the EU to intervene abroad? What if the EU’s fragmented national militaries worked together to take a lead in Syria or Libya, operating in the vacuum left by a less adventurous US? Might we feel that the traditionally interventionist UK had been usurped?

We’ll say: “Thank god my skills are transferable to other countries. Now I just need to learn German,” or “Not having electricity on Thursdays isn’t as bad as I thought.”

Brexit – what was the fuss about?

The two scenarios above are ends of the spectrum. In the middle is indifferent Brexit. There’s always existential angst around the EU; it’s unlikely that will completely stop. Meanwhile, the UK can have very difficult negotiations without a true crunch point coming. Is it likely that some post-ideological, hyper-rational governance of the UK will emerge? Not really, but the current levels of polarisation might reduce in the face of practicalities.

We’ll say – “Remember when people knew the difference between the EEA and the single market? What were they again?”

Remain, or Brexit-in-name-only

Full Remain seems improbable. But Brexit-in-name-only is on the cards. Over the next few years the reality of Brexit will become concretely apparent – the humungous bill, the impossibility of significantly reducing migration, the absence of any replacement trade deals, and stagnant living standards. At the same time, the UK will not have had a chance to do anything beneficial with its regulatory freedom. This is exactly the period in which Theresa May wanted to avoid holding a general election.

In this window, a disaster could tip the balance. No one can predict the economy – there could easily be a recession at exactly this critical juncture. The difference between weak growth and a small contraction is not in itself that significant, but the news of negative growth could be disproportionately explosive. Another game-changer would be any kind of violence in Northern Ireland. Or… that black swan again: war, May hit by a bus, and other things too out-there to even imagine.

I don’t know how the tabloids would back down from their Brexit frenzy, but they might find a way. It could include a narrative of betrayal around David Davis, Boris, Gove, May, the BBC, Londoners, academics, cyclists, coffee-drinkers, vegetarians, heart disease or Maddie.

We’ll say: “Trump and Brexit? Did that really happen?”.

Prediction time

I think really good Brexit is unlikely – it requires two low-probability events: firstly, negotiating a ‘not too bad’ Brexit deal; secondly, not falling into the trap of a ‘Singapore-style’ low-tax, low-regulation economy (which is nothing like Singapore). Falling into that trap would be just as bad as a ‘no deal’ Brexit, addressing none of Britain’s underlying problems.

I think badly failed Brexit negotiations are unlikely because parliament would vote for a no-Brexit / Brexit-in-name-only option.

So my central prediction is for really quite bad Brexit followed by some flavour of political extremism. That comes with a side order of Remain-esque Brexit-in-name-only if the negotiations go badly enough quickly enough. The only thing that’s going to take the edge off either outcome is that the EU itself may well be looking a little less healthy than it is now.

A suitably unfalsifiable prediction – let’s see how it looks in 2022.

China is a repressive country. Perversely, it’s also a laboratory for democracy in the digital age.

In 2010, Google pulled out of China amid pressure from the Chinese government. In the West, the story was about a backward-looking authoritarian state rejecting innovation and strangling freedom of expression.

Then Edward Snowden showed the world that Google was facilitating the NSA’s mass surveillance, and it started to look like China might have had some legitimate concerns about letting a US corporation collect vast amounts of data about its citizens. Now we’ve seen Russia using a sock-puppet army to manipulate public opinion in the US – another very good reason why a country might want to regulate its own digital sphere. Did China get it right? You certainly don’t see many newspaper stories about China’s vindication.

I’m not naive enough to think that the Chinese government was only motivated by a benign intent to protect its society: it’s also an authoritarian state strangling freedom of expression. But repression is not the only lesson to draw from the Google story. In China, the government controls the Internet; in the West, governments tell us the digital sphere must be left to market forces (which in practice has meant a handful of monopolies), while covertly monitoring social media, largely without democratic consent, to keep a lid on the worst excesses.

China is a country where you can disappear for sharing the wrong opinion, but we have so few data points on how society should respond to digital technologies that we need to take empirical evidence from wherever we can get it.

Here are two ‘good ideas’ from China.

Measuring non-monetary value
How about a society that rewards people for the good they do, taking into account not only their labour in the office or factory but also their hard work as a mother; not just the day rate they can command as a consultant but also the emotional labour of supporting a friend with depression.

China’s system of scoring citizens is …kind of… this — combining educational achievements, traffic infractions, financial behaviour and social media activity into one number that it publicly assigns to every citizen. It’s not clear what other activities will influence the number, but, as a piece of infrastructure, it has the potential to nudge your rating up for helping an old lady across the road or, for example, for contributing to the public sphere a helpful blogpost that deftly justifies its apparently clickbaity title.

Sounds a bit authoritarian? Well, if you live in the West, you also have a score. The government secretly monitors your digital activity and assigns every citizen a number which indicates how likely you are to be a terrorist. Except this process is, or was, secret. You also have a credit score, which is closely analogous to the social credit system. Except, rather than being delivered by a government, it’s a kangaroo court run by big banks who are institutionally indifferent to ethics.

In China, the social score policy is public and transparent (one idea is that your score might appear on your dating site profile); though you can obviously make a strong case that the social scoring system is illegitimate because it’s done by an unelected government. In the West, you can make a roughly equivalent case that scoring is illegitimate because it’s undertaken by incumbent monopolies, or in secret by the government. You have a vote, but in practice it’s unlikely to give society control over such activities.

Social scoring is a cousin of ideas prevalent in the civic tech sector, such as alternative currencies and the need to value affective labour. If China’s social scoring system goes ahead, it could provide valuable insights for similar schemes in the West.

Deliberative democracy
If you are worried about an increasingly polarised society driven by filter bubble effects, again, China may have an answer. Deliberative democracy: a group of citizens is invited to give feedback to local officials on policy. Details vary, but normally a demographically representative group of people is selected to meet up and spend some time ‘deliberating’, discussing issues among themselves with access to impartial experts, before making their views known to those in power.

The principle that everyone gets to vote is the core of Western democracy. At the moment, though, it’s undeniable that the electoral cycle has become an alarmingly centrifugal force, chaotically cartwheeling opinions to the extremes and tribalising the electorate. Deliberative democracy fixes a number of problems. Firstly, participants are selected to be truly demographically representative, rather than just those who turn up to the polling booth, who inevitably tend to be the better-off. Secondly, participants have a chance to become informed and to discuss issues in a structured way, bypassing the filter bubble. These features are not easy to ensure if you insist that everyone must vote; it would simply be too resource-intensive to give every single voter access to a deliberative process (though it has been suggested). Deliberative democracy has been tested in the West too, leading, for example, to oil-obsessed Texas making a significant investment in wind farms after a deliberative process showed that consumers were less price-sensitive and more eco-conscious than expected.

Just as with social scoring, but to a lesser extent, there are arguments about legitimacy in both directions — obviously, China isn’t a democracy. On the other hand, if your public sphere is in the hands of a few newspaper barons, Russian trolls and social networks that algorithmically deepen polarisation, then citizens’ ability to vote in their best interests will inevitably be undermined by the flow of manipulative information.

China can provide evidence for all kinds of alternative approaches: for example, its intellectual property laws, or its app ecosystem, which is sometimes described as a digital Madagascar because it has been cut off from the rest of the world for so long that it has evolved its own solutions to common problems.

Transport for London (TfL), the institution responsible for regulating taxis in London, recently questioned Uber’s fitness to operate a taxi company. A lot of civic tech people suggested that TfL should run its own Uber alternative. The other camp said that if London wasn’t open to Uber, it was against innovation, the free market and the future — the tribalising echo chamber working as effectively as ever. When Uber tried to set up in China, the government had no compunction about backing a local alternative, Didi Chuxing, which is doing very nicely. Unlike Uber, which mobilises its PR and legal teams to frustrate local democracy in the cities in which it operates, you can bet that Didi will act if the Chinese government tells it to sort out its safety record.

So even though China fails to provide legitimate governance, it is another society struggling to work out how to make digital technology work better. If you believe that society is going to have to change radically in the face of technological innovation, it’s helpful to have somewhere radically different to draw lessons from.



I am not just cooking up this story because I’m a crazed ultra-Remainer. I’d want to remain in the EU even if a great Brexit deal was on the table. At the same time, if I thought that the EU was a colossal affront to democracy, as many people do, then I would want to leave even if it meant a bad Brexit deal. I’d like to see lots of policy that isn’t GDP-maximising, so I’m not complaining that Brexit might knock a few points off annual GDP growth. Even so, it seems to me the EU has every reason to annihilate the UK in the Brexit negotiation. The UK might be in for a shock – a shock that could set off a paroxysm of nationalism and unpleasantness.

The Brexit deal is going to be signed off by the parliaments of the 27 non-UK states in the EU. That means the deal is going to have to serve the incentives of those politicians. By proxy, that ought to mean it will serve the citizens of the EU, but it’s probably worth remembering that in Europe Brexit doesn’t get saturation coverage like it does in the UK, so the pressure on politicians from voters is reduced. That’s the first of many asymmetries that seem to make it likely the Brexit deal – if there is one – will be nasty in a way that I’m not sure the British press is articulating.

For many European politicians, surely the breakup of the EU is the greatest fear. Right now the EU seems to be getting stronger, but there are plenty of reasons to think its future is still much less than certain. Many European politicians believe in the EU, but, from a less altruistic perspective, the political turmoil caused by a breakup might easily cause a shift in power away from the existing elite – so they will resist. What would do more to secure the future of the EU than a Brexit deal so bad that the UK is forced to remain, demonstrating the impossibility of leaving, or – more likely – to leave and suffer traumatically, again serving as a warning to other potential leavers?

Leavers often point out that a bad Brexit would make the EU suffer too, and it would. But much less than the UK, and in less politically painful ways. If trade between the UK and EU dropped 10%, that would be close to 5% of total trade for the UK, and 1.5% for the EU. So under ‘hard’ Brexit (or no deal at all), EU politicians win on stability of the EU and probably get to pick off a few strategic UK industries like finance, at the cost of a tiny drop in exports resulting from a negotiation that most Europeans didn’t really care about anyway. If you are the negotiator that gets the City to decamp to Frankfurt, you’ll be a hero forever. If it goes wrong and EU exports drop by some barely measurable amount, who cares? The trade economics seem obvious – the EU wins by being aggressive.
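Plugging in the rough shares those figures imply (around half of UK trade is with the EU, versus roughly 15% of EU trade with the UK – back-of-envelope assumptions consistent with the paragraph above, not official statistics), the asymmetry is easy to check:

```python
# Back-of-envelope check of the trade asymmetry.
# The 50% and 15% shares are rough assumptions consistent with the
# figures quoted in the text, not official statistics.
uk_share_with_eu = 0.50   # share of total UK trade done with the EU (assumed)
eu_share_with_uk = 0.15   # share of total EU trade done with the UK (assumed)
shock = 0.10              # hypothetical 10% drop in UK-EU trade

uk_hit = shock * uk_share_with_eu   # loss as a share of total UK trade
eu_hit = shock * eu_share_with_uk   # loss as a share of total EU trade

print(f"UK loses {uk_hit:.1%} of its total trade")   # 5.0%
print(f"EU loses {eu_hit:.1%} of its total trade")   # 1.5%
```

The same shock is more than three times as painful for the UK – the whole negotiating asymmetry in two lines of arithmetic.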

What about migration? There are about 3.5 million EU citizens in the UK (5% of the UK population), and about 1.2 million UK citizens in the EU (0.2% of the EU population). This situation is ambiguous in terms of negotiation. While Polish politicians will want to get a good deal for Polish people living in the UK, they will also know that the UK cannot possibly afford to send Polish people home exactly because there are so many of them and they are so important to the economy. Meanwhile, the EU can easily kick out Brits, who make up a tiny fraction of the workforce. The UK is also more constrained on this issue, with the Leave campaign strongly predicated on reducing migration – which surely must feature in the negotiation. Meanwhile, the EU could grant rights to Brits in Europe without it having a major impact. If the UK economy looks comparatively weak, economic migrants may leave anyway.
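The same kind of arithmetic applies to citizens. Taking the shares quoted above at face value (5% and 0.2% – the rough figures in the text, not census data):

```python
# Relative exposure on migration, using the rough shares quoted in the text.
eu_citizens_in_uk_share = 0.05    # EU citizens as a share of the UK population
uk_citizens_in_eu_share = 0.002   # UK citizens as a share of the EU population

asymmetry = eu_citizens_in_uk_share / uk_citizens_in_eu_share
print(f"EU citizens loom ~{asymmetry:.0f}x larger in the UK's population "
      f"than UK citizens do in the EU's")
```

A roughly 25-fold difference: the UK cannot credibly threaten to expel EU citizens, while the EU would barely notice Brits leaving or staying.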

Theresa May is not playing her cards close to her chest, as many have been saying: she has no cards. There is virtually nothing she can do to control the negotiations, even if they did find someone more competent than David Davis to run them.

The unknown variable is how the public will react. In my naiveté, I believed Tony Blair’s analysis that British people might turn against Brexit as its brutal reality becomes apparent. But the public might also turn against the EU for its bullying behaviour. As the government flounders in the negotiations, stoking nationalism and evoking world wars might turn out to be the only viable PR strategy. If the negotiations become fiercely acrimonious, it is wrong to think the worst the EU can do to the UK is to end the two-year negotiations without a deal. There’s a whole arsenal of humiliations for the EU to deploy, from expensive prosecco and long queues at airport immigration to sweetheart deals for Scotland and Northern Ireland to dismember the UK. Wait until the EU starts demanding an alternative to the UN Security Council’s permanent membership, one featuring the EU, US, China, India and Russia, or Germany starts spending 2% of GDP on defence, to see power really shifting around. Fun times!

I hope to have graduated by the time the proposed move to White City happens, so I have not engaged much with the discussions around it. By total coincidence, while looking at the topic of social capital as part of my research, I realised that a lot of the material I was finding could be relevant to the move. I thought I’d briefly write it up in case it’s of interest.

Much of what I’ve been reading confirms the obvious: people who work or study (spatially) close to one another form more social ties than those who are separated. I was surprised at how consistent and pronounced the effect is.

Reading Burt’s Brokerage and Closure1, my interest was piqued by his description of Festinger’s work, which shows that MIT students are predominantly friends with people with whom they share a dorm, and that even within dorms location has a powerful influence2. Simply being near one another was the single most important factor in determining friendships.

Why does that matter? Helliwell and Putnam suggest three possible sources of value in university attendance3:

  • Higher education has the explicit goal of imparting skills to students (increasing their ‘human capital’).
  • University is also a place to meet people, thereby forming new networks that increase chances of finding others to productively collaborate with. Those networks can also convey information and norms that are not explicitly taught, especially information such as job offers4. These effects are together termed ‘social capital’.
  • Finally, higher education is thought to be a way of ‘signalling’5 – attending a prestigious university shows potential employers that you are committed, diligent and intelligent.

Social capital’s importance is traditionally captured in the phrases ‘old boys’ network’ (a similar phenomenon apparently exists in Korea6) and ‘it’s not what you know, but who you know’. We have the idea of the ‘invisible college’7 to capture the value of social capital’s influence in academia. Whatever the ethical status of these elitist systems, the point remains that social connections are important. I think most people would intuitively agree, perhaps especially so in the arts.

If social capital is important, how do spatial factors change social capital formation? The literature is too copious to go through in detail (I’m already procrastinating by writing this), but some work stands out. By surveying seven R&D labs, Allen and Fusfeld found that the frequency of communication between researchers falls off strongly with distance: working within 30 metres of another person greatly enhances the probability of frequent contact with them.

More recent research by Kabo et al. compares two biomedical research labs with different spatial layouts and confirms that distance is a powerful factor when it comes to collaboration8. The research goes further, suggesting that the critical factor is literally how often researchers’ paths cross. In this study, a 30m increase in distance between researchers reduced the probability of collaboration by 25%. Work at the Bartlett reaches similar conclusions9.

At the scale of geographically separated offices, communication is also known to tail off exponentially as distance increases, including communication by electronic means or by phone, a phenomenon described by the Allen Curve10 11.
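As a purely illustrative sketch (the baseline probability and the exact functional form are my assumptions, not the published curves), the exponential fall-off can be combined with the Kabo et al. figure of roughly a 25% drop per 30 metres:

```python
def collaboration_probability(distance_m, p0=0.5, drop=0.25, step_m=30.0):
    """Toy Allen-curve-style decay: the probability of collaboration falls
    by `drop` for every `step_m` metres of separation. The baseline `p0`
    and the exponential form are illustrative assumptions, not fitted data."""
    return p0 * (1 - drop) ** (distance_m / step_m)

for d in (0, 30, 60, 120):
    print(f"{d:>3} m apart: collaboration probability "
          f"{collaboration_probability(d):.2f}")
```

On this toy model, colleagues 120 m apart are already down to about a third of the baseline probability, which is why a regular bus between campuses is no substitute for a shared corridor.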

Further, it seems that much of the reason proximity increases collaboration is the increased chance of unplanned face-to-face conversation12 13, as opposed to convenience or some other factor. No matter how regular the bus service, unplanned meetings will occur less often when students are split across campuses.

It could be argued that a small, isolated campus will generate a dense network within it, even if links to the wider university are diminished. Burt suggests that Ericsson’s famously innovative R&D lab benefited from such a dense network precisely because it was located in the small, isolated town of Lund in Sweden. Such dense networks may be good for innovation; unfortunately, they are the exact opposite of the wide networks that are most effective for job hunting when you graduate14.

Spatial factors have such a profound effect on social networks that they show up everywhere from banks to disaster response teams15. Some spatial factors, such as the effects of open-plan offices, are contested, but distance seems to reliably and strongly correlate with social tie formation; more specifically, the probability of social ties between individuals correlates with the time they spend in a shared space.

I’m sure that excellent teaching will continue and that the ‘human capital’ aspect of the RCA’s offer to students is assured. The question of the RCA’s standing, the ‘signal’ a degree from the RCA sends, is a wider issue than a move to White City — one that many people have expressed anxiety over.

Empirical research confirms the common-sense view: the social network of students will be profoundly influenced by the move to White City. The structure of your social network is a key determinant of social capital, which is in turn a key benefit of attending university.

On this basis, as I’m sure many others have concluded, it seems only right that some mitigation ought to be put in place by the university. I’m aware the new campus will be adjacent to a satellite campus of Imperial College, but this hardly seems compensatory, since the RCA’s South Kensington site is already next to Imperial proper. I’m not sure if anyone has been able to estimate the value of proximity to the BBC R&D facility, but it seems unlikely that it could offset separation from the wider student body.

Perhaps the move itself could become a research opportunity, or could in some other way be turned to students’ advantage? Perhaps specific measures could be put in place to foster broader networks?

1 Burt, Ronald S. Brokerage and closure: An introduction to social capital. Oxford university press, 2005.

2 Festinger, Leon, Kurt W. Back, and Stanley Schachter. Social pressures in informal groups: A study of human factors in housing. Vol. 3. Stanford University Press, 1950.

3 Helliwell, John F., and Robert D. Putnam. Education and social capital. No. w7121. National Bureau of Economic Research, 1999.

4 Lin, Nan. “Social networks and status attainment.” Annual Review of Sociology (1999): 467-487.

5 Spence, Michael. “Competitive and optimal responses to signals: An analysis of efficiency and distribution.” Journal of Economic theory 7.3 (1974): 296-332.

6 Lee, Sunhwa, and Mary C. Brinton. “Elite education and social capital: The case of South Korea.” Sociology of education (1996): 177-192.

7 Crane, Diana. “Social structure in a group of scientists: A test of the ‘invisible college’ hypothesis.” American Sociological Review (1969): 335-352.

8 Kabo, Felichism, et al. “Shared Paths to the Lab: A Sociospatial Network Analysis of Collaboration.” Environment and Behavior 47.1 (2015): 57-84.

9 Sailer, Kerstin, and Ian McCulloh. “Social networks and spatial configuration—How office layouts drive social interaction.” Social networks 34.1 (2012): 47-58.

10 Allen, Thomas, and Gunter Henn. The organization and architecture of innovation. Routledge, 2007.


12 Brown, Chloë, et al. “Tracking serendipitous interactions: how individual cultures shape the office.” Proceedings of the 17th ACM Conference on Computer Supported Cooperative Work & Social Computing. ACM, 2014.

13 Owen-Smith, Jason. “Workplace Design, Collaboration, and Discovery.” Ann Arbor 1001 (2013): 48109-1382.

14 Granovetter, Mark S. “The strength of weak ties.” American journal of sociology (1973): 1360-1380.

15 Doreian, Patrick, and Norman Conti. “Social context, spatial structure and social network structure.” Social networks 34.1 (2012): 32-46.

In the spirit of thinking through our new political reality, which I already started here, I’ve been thinking about the electoral success of policies that promise violence – imprisonment or war, mainly.

In advertising, apparently, sex sells. In politics, it’s a liability. What sells in politics is violence. The promise to do something violent has an appeal that is powerful and ubiquitous.

In a classic criticism of democracy, Thucydides tells us that in 427 BC the crowd in Athens voted to kill every adult male in the rebellious city of Mytilene, on Lesbos, only to realise subsequently that this was an unjust act of violence, spurred by the rhetoric of a demagogue. The Athenians had to dispatch a second ship with new orders, which arrived only just in time to prevent the massacre. In Orwell’s 1984, perpetual war is used as a mechanism to confer legitimacy on the dictatorship, an approach contemporary Russia has learned from.

We have the term ‘khaki election’ to refer to this phenomenon. It was coined in 1900 to describe a UK election held in the context of the Second Boer War, where patriotic sentiment driven by the war is said to have helped the incumbent party win. More recently, we have Thatcher, whose prospects for re-election in 1983 looked dim until the Falklands War boosted her reputation – almost certainly changing the outcome in her favour. We might say the same of Bush’s second Gulf War. An aimless administration was transformed into a purposeful and successful one, the president on an aircraft carrier declaring victory just in time for the election. As it turned out, the invasion did not prove to be beneficial for US foreign policy, but it worked very well for Bush himself.

Internal violence can work the same way: promising increased incarceration, for example, has been a successful electoral tool in the UK and the US, despite falling crime levels and endless evidence that prison is expensive and ineffective.

Why should a threat to do violence be so persuasive?

Unlike, for example, the myth that the economy works like a household budget, I can’t see that the appeal of violence comes from ‘common sense’ or our everyday experience. Has any family dispute ever been satisfactorily resolved by violence? How many teenage kids have been coerced into good behaviour? Do employers seek those who are able to persuade and negotiate, or those who are aggressive and violent? In our own lives we almost never witness violence, still less violence as a successful strategy. Perhaps it’s exactly this distance that allows us to be so blasé about drone strikes and regime change.

I can only think that the allure of violence is part of a broader sense-making activity. Most people have some problems in their lives, disappointments to rationalise. Acknowledging that our lives are shaped by blind luck or unintended consequences is not a good narrative; it does not help us understand why things are as they are. But the idea that an enemy, within or without, causes these things does make sense – and the natural solution to an enemy is violence. Lock them up, bomb them – a convincing panacea.

In America, Obama’s failure to use the words ‘radical Islam’ became totemic on the right. It signalled a failure to adopt appropriately bellicose rhetoric. Trump was even able to suggest that Obama was in league with ISIS because of his failure to use properly aggressive language. What Obama was really doing was attempting to de-escalate the religious dimension of the conflict, with the long-run goal of bringing peace. It’s a totally rational strategy, except that, for the reasons above, it’s also a terrible electoral strategy.

This is a conundrum for any political party that wants to pursue a rational level of violence, which is generally much below the level apparently advocated by voters. I am not aware of any solution to this problem – as the ancient Greek critics of democracy pointed out, it may just be the price you pay.


There seem to be three possible ways forward from the current position, all of which are disastrous for democracy. I have no idea which of these is most likely; all of them are very bad, and all of them represent a betrayal of voters – especially those who voted to leave.

Leaving the EU and the single market is the simplest proposition – in democratic terms it would allow a government to deliver on the key pledges of immigration controls and bringing law-making back to Westminster. However, the extreme financial situation the UK would likely find itself in would certainly make £15bn of extra investment in the NHS impossible. The costs to jobs and wages would be appalling – ‘Britain’s service economy would be cut up like an old car‘ – and the nation would be in deep economic shock.

Ignoring the referendum (unless there is another general election) would obviously be an enormous affront to democracy, and the tabloid newspapers would howl with rage. The unexpectedly large constituency who voted leave, who already believe they are ignored and forgotten, would rightly be incensed. Such an option may easily lead to the rise of extremist parties.

The UK remains in the single market but out of the EU – the Norway option, the middle ground. Norway pays an enormous monetary price for access to the single market; if the UK got a similar deal, there would be no spare cash to spend on the NHS. Norway accepts free movement of people, breaking the Leave campaign’s promise of border controls. Finally, Norway obeys many of the EU’s laws in order to gain access to the single market, and has no say during the process of EU legislation – which is difficult to square with Leave’s ‘taking back control’ motto.

The UK will not get an exact copy of the Norway deal. Perhaps a better deal can be struck? Someone, presumably Boris, would have to achieve a heroic feat of negotiation. He does not start from a good position: on a personal level, he has been lambasting the EU for months, even comparing the organisation to the Nazis. Many European leaders fear that a good deal for Britain would encourage discontent in their own countries, and may want to make an example of the UK. Watching David Cameron’s resignation speech must have had a visceral effect on other European leaders.

According to the rules of the Article 50 process, the UK will not be in the room for exit negotiations; results would be presented as a fait accompli to the UK, and if we don’t reach agreement after two years, we’ll be automatically ejected. The single market option has been explicitly ruled out by several leading European politicians, so it looks set to be an uphill battle. Just in case it wasn’t hard enough, Scotland could leave, or threaten to leave, the UK during the negotiations – possibly to join the EU, maybe even the Euro.

It looks as though Boris hopes to find some combination of the Norway deal that keeps watered-down versions of his promises, probably achieved mostly through obfuscation. His Telegraph column sets out an impossible wishlist of access to the single market, border controls and savings in EU contributions, which he will certainly never deliver.

This is, I believe, the most dysfunctional of the three options in democratic terms. The electorate have been sold an impossible dream of ‘taking control’, lowered immigration and windfall savings in EU contributions. Under the Norway option, it will not be clear that any of these has been delivered.

We all know that political parties renege on their manifesto promises, but the Leave campaign set a new low. Within 48 hours of the result they had explicitly denied they felt at all bound to deliver on lower immigration or increased NHS spending. The audacity is comedic: there are pictures of all the leading Leave campaigners standing in front of campaign buses emblazoned with huge slogans which they now claim mean nothing. Perhaps they believe technicalities about which Leave campaign said what, or whether their slogans were commitments or more like ‘serving suggestions’, will save them. They should consider what happened to the Lib Dems when they (quite reasonably) blamed their broken tuition-fee pledge on the coalition.

Before the referendum, no one had realised how much anger was directed at the political classes. After the referendum, there are only reasons for that anger to grow. In Norway-style scenarios, Leave voters will get only the palest imitations of the policies they believe they voted for, and at a terrible cost. Leaving the EU might cause a recession, and will certainly cost jobs. Then there are the tens, possibly hundreds, of billions of pounds in foregone GDP. Government policy of any kind will be on hold for years while we renegotiate. The cost of government borrowing could spiral. Scientific and medical research will be disrupted and damaged. UK citizens will find travelling and working in the EU harder.

Most importantly, many Leave voters, already concentrated in poor areas, will be in even worse poverty. Boris’s stall, as he set it out in the Telegraph, is about throwing off the ‘job destroying coils of EU bureaucracy’. The idea that removing workers’ rights is going to play a big part in reducing inequality is a fairy tale. Leave voters are almost certain to see things getting worse, not better, even if they are temporarily satisfied to have ‘taken back control’.

For a country that everyone recognises is divided and wounded, all of the routes forward point to ever more poverty, pain and division.




Yesterday we had a really great round table talking about supply chains and manufacturing, hosted by Future Makespaces. Supply chains touch on so many political topics. They matter intensely for labour conditions, wages, immigration, the environment and the diffusion of culture. At the same time they remain mostly invisible: they have a dispersed physical manifestation, and subsist in innumerable formal and informal social relations.

Governments publish some data on supply chains, but it can provide only a very low resolution picture. Jude Sherry told us that trying to locate manufacturers legally registered in Bristol often proved impossible. The opposite proved true as well – there are plenty of manufacturers in Bristol who do not appear in official data.

Supply chains are especially salient now because technology is changing their structure. The falling cost of laser cutters and 3D printers is democratising processes once only possible in large scale manufacturing – thus potentially shortening the logistical pathway between manufacturer and consumer; perhaps even bringing manufacturing back into the cities of the developed world.

What I took from the round table was the surprising diversity of approaches to the topic – as well as a chance to reflect on how I communicate my position.

If your writing, design, or artistic practice is about making the invisible visible, then supply chains are a rich territory – a muse for your work, and an agent of change in the materials and processes you can work with. I took Dr Helge Mooshammer and Peter Mörtenböck to be addressing this cultural aspect with their World of Matter project and their Other Markets publications. Emma Reynolds told us about the British Council’s Maker Library Network, which I think you could see as an attempt to instrumentalise that cultural output.

Michael Wilson (who was in the UK for The Arts of Logistics conference), came to the topic from an overtly political direction, casting the debate in terms familiar from Adam Curtis’ All Watched Over by Machines of Loving Grace; positioning himself in relation to capitalism, neo-liberalism and anarchy. His Empire Logistics project aims to explore the supply infrastructure of California. He highlighted the way that supply chains had responded to the unionisation of dock workers in California by moving as many operations as possible away from the waterfront, and to an area called, poetically, the Inland Empire.

The ‘small p’ political also featured – Adrian McEwan told us about his local impact through setting up a Makespace in Liverpool. James Tooze told us about the modular furniture system he’s working on with OpenDesk – designed to reduce waste generated by office refits and be more suited to the flexible demands that startups make of their spaces.

My perspective is based mostly on ideas from the discipline of economics. I described the Hayekian idea of the market as a giant computer that efficiently allocates resources: through the profit motive, the market solves the knowledge problem – and does so in a way that cannot be improved upon.

Even if I don’t myself (completely) subscribe to this point of view, it is well embedded with policy makers, and I think needs to be addressed and rebutted.

Every attempt to actively change supply chains, from the circular economy to makespaces, faces a challenge from Hayekian reasoning: if a new system of supply was a good idea, why hasn’t the market already invented it?

I see my work as using the language of economics to position design practices that seek to augment or transcend that market logic. In particular, I think Elinor Ostrom’s work offers a way to acknowledge human factors in the way exchanges take place, as well as providing design principles based on empirical research.

One surprise was the divergence of views on the ambitions of the ‘maker movement’. Should it aim for a future where a significant fraction of manufacturing happens in makespaces? Or would that mean the movement had been co-opted? Is its subcultural status essential?

I realised that when I try to explain my position in future, I need to address questions like why I’ve chosen to engage with economic language, and to illustrate how that dovetails with cultural and political perspectives.


TL;DR: Almost everyone thinks academic publishing needs to change. What would a better system look like? Economist Elinor Ostrom gave us design principles for an alternative – a knowledge commons, a sustainable approach to sharing research more freely. This approach exemplifies using economic principles to design a digital platform. 

Why is this relevant right now? 

The phrase ‘Napster moment’ has been used to describe the current situation. Napster made MP3 sharing so easy that the music industry was forced to change its business model. The same might be about to happen to academic publishing.

In a recent Science Magazine reader poll (admittedly unrepresentative), 85% of respondents thought pirating papers from illicit sources was morally acceptable, and about 25% said they did so weekly.

Elsevier – the largest for-profit academic publisher – is fighting back. They are pursuing the SciHub website through the courts. SciHub is the most popular website offering illegal downloads, and has virtually every paper ever published.

In another defensive move, Elsevier has recently upset everyone by buying Social Science Research Network – a highly successful not-for-profit website that allowed anyone to read papers for free.

Institutions that fund research are pushing for change, fed up with a system where universities pay for research, but companies like Elsevier make a profit from it. Academic publishers charge universities about $10Bn a year, and make unusually large profits.

In the longer term, the fragmentation of research publishing may be unsustainable. Over a million papers are published every year, and research increasingly requires academics to understand multiple fields. New search tools are desperately needed, but they are impossible to build when papers are locked away behind barriers.

How should papers be published? Who should pay the costs, and who should get the access? Economist and Nobel laureate Elinor Ostrom pioneered the idea of a knowledge commons to think about these questions.

What is a knowledge commons? 

A commons is a system where social conventions and institutions govern how people contribute to and take from some shared resource. In a knowledge commons that resource is knowledge.

You can think of knowledge, embodied in academic papers, as an economic resource just like bread, shoes or land. Clearly knowledge has some unique properties, but this assumption is a useful starting point.

When we are thinking about how to share a resource, Elinor Ostrom, in common with other economists, asks us to consider whether the underlying resource is ‘excludable’ or ‘rivalrous’.

If I bake a loaf of bread, I can easily keep it behind a shop counter until someone agrees to pay money in exchange for it – it is excludable. Conversely, if I build a road it will be time consuming and expensive for me to stop other people from using it without paying – it is non-excludable.

If I sell the bread to one person, I cannot sell the same loaf to another person – it is rivalrous. However, the number of cars using a road makes only a very small difference to the cost of providing it. Roads are non-rivalrous (at least until traffic jams take effect).

|               | Excludable                                      | Non-excludable                                                     |
|---------------|-------------------------------------------------|--------------------------------------------------------------------|
| Rivalrous     | Market goods: bread, shoes, cars                | Common pool resources: fish stocks, water                          |
| Non-rivalrous | Club goods: gyms, toll roads, (academic papers) | Public goods: national defence, street lighting, (academic papers) |

Most economists think markets (where money is used to buy and sell – top left in the grid) are a good system for providing rivalrous, excludable goods – bread, clothes, furniture etc. – perhaps with social security in the background to provide for those who cannot afford necessities.

But if a good is non-rivalrous, non-excludable, or both, things get more complicated, and markets become less effective. This is why roads are usually provided by a government rather than a market – though for-profit toll roads do exist.

The well known ‘tragedy of the commons’ is an example of this logic playing out. The thought experiment concerns a rivalrous, non-excludable natural resource – the usual example is a village with common pasture land shared by everyone. Each villager has an incentive to graze as many sheep as they can on the shared pasture, because then they will have nice fat sheep and plenty of milk. But if everyone behaves this way, the unsustainably large flocks will collectively eat all the grass and destroy the common pasture.
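The incentive logic can be sketched as a toy simulation – all the numbers (flock sizes, regrowth rates) are invented purely for illustration:

```python
# Toy model of the tragedy of the commons: each villager chooses how many
# sheep to graze; the pasture regrows at a fixed rate, and grazing beyond
# that regrowth depletes the stock. All parameters are made up.

def simulate(sheep_per_villager, villagers=10, years=20,
             capacity=100.0, regrowth=20.0, grass_per_sheep=0.5):
    """Return the remaining pasture after `years` of grazing."""
    pasture = capacity
    for _ in range(years):
        eaten = villagers * sheep_per_villager * grass_per_sheep
        pasture = min(capacity, pasture + regrowth) - eaten
        if pasture <= 0:
            return 0.0  # pasture destroyed
    return pasture

# A restrained flock is sustainable; everyone maximising is not.
print(simulate(sheep_per_villager=3))  # sustainable
print(simulate(sheep_per_villager=8))  # collapse
```

Each villager gains the full benefit of an extra sheep but bears only a fraction of the depletion, which is why the individually rational choice destroys the collective resource.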

The benefit accrues to the individual villager, but the cost falls on the community as a whole. The classic economic solution is to put fences up and turn the resource into an excludable, market-based system. Each villager gets a section of the common to own privately, which they can buy and sell as they choose.

Building and maintaining fences can be very expensive – if the resource is something like a fishing ground, it might even be impossible. The view that building a market is the only good solution has been distilled into an ideology, and, as discussed later, that ideology led to the existence of the commercial academic publishing industry. As the rest of this post will explain, building fences around knowledge has turned out to be very expensive.

Ostrom positioned herself directly against the ‘have to build a market’ point of view. She noticed that in the real world, many communities do successfully manage commons.

Ostrom’s Law: A resource arrangement that works in practice can work in theory.

She developed a framework for thinking about the social norms that allow effective resource management across a wide range of non-market systems – a much more nuanced approach than the stylised tragedy of the commons thought experiment. Her analysis calls for a more realistic model of the villagers, who might realise that the common is being overgrazed, call a meeting, and agree a rule on how many sheep each person is allowed to graze. They are designing a social institution.

If this approach can be made to work, it saves the cost of maintaining the fences, but avoids the overgrazing that damages the common land.

The two by two grid above has the ‘commons’ as only one among four strategies. In reality, rivalry and excludability are questions of degree, and can be changed by making different design choices.

For this analysis, it’s useful to use the word ‘commons’ as a catchall for non-market solutions.

Ostrom and Hess published a book of essays, Understanding Knowledge as a Commons, arguing that we should use exactly this approach to understand and improve academic publishing. They argue for a ‘knowledge commons’.

The resulting infrastructure would likely be one or more web platforms. The design of these platforms will have to take into account the questions of incentives, rivalry and exclusion discussed above.

What would a knowledge commons look like? 

Through extensive real world research, Ostrom and her Bloomington School derived a set of design principles for effectively sharing common resources:

  1. Define clear group boundaries.
  2. Match rules governing use of common goods to local needs and conditions.
  3. Ensure that those affected by the rules can participate in modifying the rules.
  4. Make sure the rule-making rights of community members are respected by outside authorities.
  5. Develop a system, carried out by community members, for monitoring members’ behavior.
  6. Use graduated sanctions for rule violators.
  7. Provide accessible, low-cost means for dispute resolution.
  8. Build responsibility for governing the common resource in nested tiers from the lowest level up to the entire interconnected system.

These principles can help design a system where there is free access while preventing collapse from abusive treatment.

Principle 1 is already well addressed by the existence of universities, which give us a clear set of internationally comparable rules about who is officially a researcher in which area – doctorates, professorships etc. These hierarchies could also indicate who should participate in discussions about designing improvements to the knowledge commons, in accordance with principles 2 and 3. This is not to say that non-academics would be excluded, but that there is an existing structure which could help with decisions such as who is qualified to carry out peer review.

In a knowledge commons utopia, all the academic research ever conducted would be freely available on the web, along with all the related metadata – authors, dates, who references whom, citation counts etc. A slightly more realistic scenario might have all the metadata open, plus papers published from now forward.

This dataset would allow innovations that could address many of these design principles. In particular, in accordance with 5, it would allow for the design of systems measuring ‘demand’ and ‘supply’.  Linguistic analysis of papers might start to shine a light on who really supplies ideas to the knowledge commons, by following the spread of ideas through the discourse. The linked paper describes how to discover who introduces a new concept into a discourse, and track when that concept is widely adopted.

This could augment crude citation counts, helping identify those who supply new ideas to the commons. What if we could find out what papers people are searching for, but not finding? Such data might proxy for ‘demand’ – telling researchers where to focus their creative efforts.
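As a toy illustration of the kind of analysis meant here – the corpus, names and term are all invented – one could track when a concept first appears in a discourse and how it is taken up afterwards:

```python
# Hypothetical sketch: find the first author to use a term in a corpus of
# (year, author, text) records, then count later uptake by year.
from collections import Counter

papers = [
    (2001, "alice", "we propose a knowledge commons for research"),
    (2003, "bob",   "building on the knowledge commons idea"),
    (2004, "carol", "the knowledge commons has been widely adopted"),
]

def first_use_and_uptake(term, corpus):
    """Return the (year, author) of first use, and a Counter of later uses."""
    hits = sorted((year, author) for year, author, text in corpus
                  if term in text)
    if not hits:
        return None, Counter()
    origin = hits[0]                         # earliest use of the term
    uptake = Counter(year for year, _ in hits[1:])
    return origin, uptake

origin, uptake = first_use_and_uptake("knowledge commons", papers)
print(origin)        # (2001, 'alice')
print(dict(uptake))  # {2003: 1, 2004: 1}
```

A real system would need stemming, phrase detection and a far larger corpus, but the open dataset is the precondition for any of it.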

Addressing principle 6, there is much room for automatically detecting low-quality ‘me-too’ papers, or outright plagiarism. Or perhaps it would be appropriate to establish a system where new authors have to be sponsored by existing authors with a good track record – a system the preprint site arXiv currently implements. (Over-publication is interestingly similar to overgrazing of a common pasture: abusing the system for personal benefit at the cost of the group.)

Multidisciplinary researchers could benefit from new ways of aggregating papers that do not rely on traditional journal-based categories; visualisations of networks of papers might help us orient ourselves in new territory more quickly.

All of these innovations, and many others that we cannot foresee, require a clean, easily accessible data set to work with.

These are not new ideas. IBM’s Watson is already ingesting huge amounts of medical research to deliver cancer diagnoses and generate new research questions. But the very fact that only companies with the resources of IBM can get at this data confirms the point about the importance of the commons. Even then, they are only able to look at a fraction of the total corpus of research.

But is the knowledge commons feasible?

How, in practical terms, could a knowledge commons be built?

Since 1665, the year the Royal Society was founded, about 50 million research papers have been published. As a back-of-the-envelope calculation, that’s about 150 terabytes of data, which would cost around $4,500 a month to store on Amazon’s cloud servers. Obviously just storing the data is not enough – so is there a real-world example of running this kind of operation?
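The arithmetic behind that estimate is simple – the per-paper size and storage price are my assumptions, chosen as round numbers consistent with the figures above:

```python
# Back-of-the-envelope storage estimate (sizes and prices are assumptions).
papers = 50_000_000          # papers published since 1665
mb_per_paper = 3             # assume ~3 MB per paper (PDF plus metadata)

total_tb = papers * mb_per_paper / 1_000_000
print(total_tb)              # ~150 TB

s3_per_gb_month = 0.03       # ballpark cloud storage price, $/GB/month
monthly_cost = total_tb * 1000 * s3_per_gb_month
print(monthly_cost)          # ~$4,500/month
```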

Wikipedia stores a similar total amount of data (about 40 million pages). It also supports about 10 edits to those pages every second, and is one of the 10 most popular sites on the web. Including all the staffing and servers, it costs about $50 million per year.

That is less than 1% of what the academic publishing industry charges every year. If the money that universities spend on access to journals was saved for a single year, it would be enough to fund an endowment that could pay for academic publishing in perpetuity – a shocking thought.
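A rough sketch of that comparison – the endowment yield rate is an assumption on my part:

```python
# Comparing industry charges with a Wikipedia-scale operation's costs.
industry_charge = 10e9   # ~$10bn charged to universities per year
running_cost = 50e6      # Wikipedia-scale operation, ~$50m per year

print(running_cost / industry_charge)  # 0.5% of what publishers charge

# One year of subscription spending, invested as an endowment:
endowment = industry_charge
safe_yield = 0.04        # assume a 4% annual draw
print(endowment * safe_yield)          # yields several times the running cost
```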

What’s the situation at the moment? 

Universities pay for the research that results in academic papers. Where papers are peer-reviewed, the reviewing is mostly done by salaried university staff who don’t charge publishers for their time. Therefore, the cost to an academic publisher of producing a paper is, more or less, typesetting plus admin.

Yet publishers charge what are generally seen as astronomical fees. An annual licence to access a single journal often costs many thousands of pounds. University libraries, which may subscribe to thousands of journals, pay millions each year in these fees. As a member of the public, you can download a paper for about $30 – and a single paper is often valueless without the network of papers it references. The result is an industry worth about $10bn a year, with profit margins often estimated at 40%. (Excellent detailed description here.)

I’ve heard stories of academics having articles published in journals their university does not subscribe to. They can write the paper, but their colleagues cannot then read it – which is surely the opposite of publishing. There are many papers that I cannot access from my desk at the Royal College of Art, because the university has not purchased access. But the RCA has an arrangement with UCL that allows me to use their system – so I have to go across town just to log onto the internet via UCL’s wifi. This cannot make sense for anyone.

I’m not aware of any similar system. It’s a hybrid of public funding plus a market mechanism. Taxpayers’ money is spent producing what looks like a classic public or commons good (knowledge embodied in papers) – free to everyone, non-rivalrous and non-excludable. That product is then handed over to a private company, for free, and the private company makes a profit by selling it back to the organisations that produced it. Almost no one (except the publishers) believes this represents value for money.

Overall, in addition to being a drain on the public purse, the current system fragments papers and associated metadata behind meaningless artificial barriers.

How did it get like that?

Nancy Kranich, in her essay for the book Understanding Knowledge as a Commons, gives a useful history. She highlights the Reagan-era ideological belief (mentioned earlier) that the private sector is always more efficient, plus the short-term incentive of the one-time profit a university gets by selling its in-house journal. That seems to be about the end of the story, although in another essay in the same book Peter Suber points out that many high-level policy makers do not know how the system works – which might also be a factor.

If we look to Ostrom’s design principles, we cannot be surprised at what has happened. Virtually all the principles (especially 4, 7 and 8) are violated when a commons is dominated by a small number of politically powerful, for-profit institutions who rely on appropriating resources from it. It’s analogous to the way industrial fishing operations are able to continuously frustrate legislation designed to prevent ecological disaster in overstrained fishing grounds by lobbying governments.

What are the current efforts to change the situation?

In 2003 the Bethesda Statement on Open Access indicated that the Howard Hughes Medical Institute and the Wellcome Trust, which between them manage endowments of about $40bn, wanted research they funded to be published Open Access – and that they would cover the costs. This seems to have set the ball rolling, although the situation internationally is too complex to easily unravel.

Possibly, charities led the way because they are free of the ideological commitments of governments, as described by Kranich, and less vulnerable to lobbying by publishers.

Focusing on the UK: since 2013, Research Councils UK (which disburses about £3bn to universities each year) has insisted that work it funds should be published Open Access. The details, however, make this rule considerably weaker than you might expect. RCUK recognises two kinds of Open Access publishing.

With Gold Route publishing, a commercial publisher will make the paper free to access online, and publish it under a Creative Commons licence that allows others to do whatever they like with it – as long as the original authors are credited. The commercial publisher will only do this if they are paid – rates vary, but it can be up to £5,000 per paper. RCUK has made a £16 million fund available to cover these costs.

Green Route publishing is a much weaker form of Open Access. The publisher grants the academics who produced the paper the right to ‘self-archive’ – i.e. make the paper available through their university’s website. It is covered by a Creative Commons licence that allows others to use it for any non-commercial purpose, as long as they credit the author. However, there can be an embargo of up to three years before the academics are allowed to self-archive their paper. There are also restrictions on which sites they can publish the paper on – for example, they cannot publish it on a site that mimics a conventional journal. Which sites are acceptable is currently the subject of debate.

Is it working?

In 1995, Forbes predicted that commercial academic publishers had a business model that was about to be destroyed by the web. That makes sense – after all, the web was literally invented to share academic papers. Here we are, 21 years later: academic publishers still exist, and still have enormous valuations. Their shareholders clearly don’t think they are going anywhere.

Elsevier is running an effective operation to prevent innovation, purchasing competitors or threatening them with copyright actions (as with SciHub). Even if newly authored papers are published open access, the historical archive will remain locked away. However, there is change.

Research Councils UK carried out an independent review in 2014, in which nearly all universities were able to report publishing at least 45% of their papers as open access (via green or gold routes) – though the report is at pains to point out that most universities don’t keep good records of how their papers are published, so this figure could be inaccurate.

In fact, the UK is doing a reasonable job of pursuing open access, and globally things are slowly moving in the right direction. Research is increasingly reliant on pre-prints hosted on sites like arXiv, rather than official journals, which move too slowly.

Once a database of all 50 million academic papers is gathered in one place (which SciHub may soon achieve), it’s hard to see how the genie can be put back in the bottle.

If this is a ‘Napster moment’, the question is what happens next. Many people thought that MP3 sharing was going to be the end of the commercial music industry. Instead, Apple moved in and made a service so cheap and convenient that it displaced illicit file sharing. Possibly commercial publishers could try the same trick, though they show no signs of making access cheaper or more convenient.

Elinor Ostrom’s knowledge commons shows us that there is a sustainable, and much preferable, alternative: one that opens the world’s knowledge to everyone with an internet connection, and provides an open platform for innovations to help us deal with the avalanche of academic papers published every year.