Most politicians – with the exception of the Lib Dems – have said that parliament should accept the result of the EU Referendum as the democratic will of the people.

This may be true for political or pragmatic reasons. Ethically, however, it’s far from obvious. If someone tells you Brexit is a moral necessity just because a vote has taken place, they are wrong. Obeying the vote requires a value judgement about the status of that vote, and the issue is much more complex than simply asserting that a vote has taken place.

If we were to go around disputing the status of every vote, democracy would be impossible. Here I will present the case that the Brexit vote is uniquely precarious: direct democracy about an irreversible and highly important decision, carried out in a context of asymmetric information. Specifically, polling data suggests many leave voters were expecting an outcome that not even the leave campaign itself thinks is possible.

There are good reasons to be wary of attempts to understand what voters ‘really’ wanted – analysis becomes a vessel for your own opinions. At the same time, we have to acknowledge that voters’ opinions can be shaped by the information they are receiving.

For these reasons, refusing to leave the EU would be an absolutely legitimate position for parliament.

In the national debate it seems to go almost unquestioned that simply going through a voting procedure automatically conveys unassailable democratic force to a decision. Not true: Russia, Zimbabwe and North Korea all have voting procedures, yet most people agree that they are unsatisfactory in various ways. I’m not comparing the UK to those countries, but making the philosophical point that you can stand in a booth and fill out a form and still not be ‘doing democracy’.

For a vote to carry democratic force – for it to convey the ‘will of the people’ – most people think you have to do more than just count pieces of marked paper. I complained about two criteria that I felt were lacking in the EU referendum before the vote took place – that the electorate be representative, and that voters should be well informed.

We apparently can’t agree on the demographics of EU Referendum voters, but we do know participation was unusually high, so let’s set the issue of representativeness on one side.

The electorate were not well informed; in fact they were actively misled about what leaving the EU would mean. This is the case in every election, but here I will make the argument that the misinformation was both asymmetric and effective in changing voters’ views.

I’m also not claiming leave voters are stupid, or that they do whatever Rupert Murdoch says. I am not claiming that everyone who voted leave was misled. I am not claiming that voters would have voted remain with better access to information.

I am claiming that we do not know how voters would have behaved with better access to information, and that information in the EU referendum was unusually low quality.

This is a difficult empirical point to prove. We cannot observe how voters would have behaved in other circumstances. What we can do is build an empirical case that voters held beliefs that can reasonably be expected to influence voting behaviour, and that those beliefs are a result of systemic misinformation.

We can see from YouGov’s polls that many people believed that leaving the EU would make no difference to, or would improve, the economy. In the last poll, which closed on 19 June, 46% of respondents thought there would be no economic impact, while 9% thought they’d be better off. These views, unsurprisingly, correlate with the intention to vote leave: 18% of those intending to vote leave thought leaving would improve the economy, and 66% thought it would make no difference.

This is at stark variance with predictions. The Leave campaign’s economist Andrew Lilico’s own forecasts suggest that there would be a short-term economic hit, but predicted that by 2030 the economy would have returned to normal. This prediction is more optimistic than almost any other, whether from a private company, the Treasury or international organisations such as the OECD. If voters were aware that the most optimistic case was a short-term recession, followed by a possible return to normal growth in 15 years’ time, rather than believing there would be no difference or an improvement, how would they have voted? We do not know.

This in turn bears on Leave’s promise to have extra money to spend on the NHS. A post-Brexit government can choose to spend more money on the NHS, but it will not be doing so using ‘spare money’ created by Brexit – certainly not until 2030.

We are now living in a future where Brexit seems imminent, and the predictions of a short-term slowdown appear to be coming true, with Mark Carney confirming these effects both verbally and by providing £250bn of taxpayers’ money to support the economy.

In the same poll, 54% of respondents believed that Brexit would reduce immigration. Again, this correlates with the intention to vote leave, with fully 85% of leave voters believing immigration would decrease. And again, this is at odds with the predictions of all sides. Leave’s economic model relies on immigration remaining roughly the same (Andrew Lilico again), and Leave campaigner Dan Hannan has notably confirmed that immigration will remain broadly similar after Brexit. How would voters have behaved if they had known this? Again, we do not know.

I’m not claiming access to an objective reality about what will happen in the case of Brexit; instead I’m asserting that leave voters did not understand the position of the Leave campaign itself. Given that the Leave campaign is likely to have been over-optimistic about what it can deliver, the reality of Brexit is likely to be even less satisfactory to leave voters.

We know that a typical leave voter thought that the economy would remain the same or improve while immigration would be reduced. But we do not know whether these were factors that caused them to vote leave, or merely incidental. However, if we look at polls of issues that matter to voters, we see that immigration, the NHS, the EU and the economy are the top four issues. The average leave voter held unrealistic expectations about all of these, so it is reasonable to assume that some voters chose leave on the basis of these issues.

Where did this bad information come from? How can voters have come to believe a case for Brexit even more optimistic than that of the Leave campaign itself? We do know that newspaper coverage, which traditionally leans right in the UK, was strongly skewed towards Brexit. Weighted by number of readers, newspaper articles were about 80% in favour of leave, even while the country as a whole was almost perfectly split. Meanwhile, the broadcast media were scrupulously balanced.

Article 50 has not yet been triggered. The electorate now has a genuine opportunity to understand Brexit’s implications for the economy and immigration. If opinion polls show a significant shift in the light of this new information, that shift should be allowed to influence MPs’ views; they should not feel bound by the referendum. The referendum did not convey an unassailable mandate based on the will of the people.

Edit: Reading Vernon Bogdanor, I find myself slightly convinced by an idea similar to rule utilitarianism. Perhaps you can’t worry about achieving actual democracy in every vote; instead you have to set up the institution of voting and honour it regardless of the nuances of each referendum or election. Perhaps the damage to public trust is not worth the improvement in decision making.

We have so many aspirations for big data and evidence-based policy, but apparently a fatally limited capacity to see the obvious: voters were furious about immigration and the EU. Techniques exist to build better empirical evidence regarding the issues that matter to citizens; we should use them or risk a repeat of the referendum.

Commentators from across the spectrum believe that the leave vote represents not (only) a desire to leave the EU, but also the release of a tidal wave of pent-up anger. That anger is often presumed to be partly explained by stagnating living standards for large parts of the population. The first audience question on the BBC’s Question Time programme asked the panel: “Project Fear has failed, the peasants have revolted, after decades of ignoring the working class how does it feel to be punched in the nose?” The Daily Mail’s victorious front page said the “Quiet people of Britain rose up against an arrogant, out-of-touch, political class”. The message is not subtle.

Amazingly, until the vote, no one seemed to know anything: markets and betting odds all suggested remain would win. Politicians, even those on the Leave side, thought Brexit was unlikely. The man bankrolling the Brexit campaign lost a fortune betting that it wouldn’t actually happen (the only good news I’ve seen in days). Niall Ferguson was allegedly paid $500,000 to predict that the UK would remain.

This state of ignorance contrasts radically with what we do know about the country. We know, in finicky detail, the income of every person and company. We measure changes in price levels, productivity, house prices, interest rates, and employment. Detailed demographic and health data are available – we have a good idea of what people eat, how long they sleep for, where they shop, we even have detailed evidence about people’s sex lives.

Yet there seems to have been very little awareness of (or weight attached to) what the UK population itself was openly saying in large numbers.

Part of the reason must be that the government didn’t want to hear. Post crisis everything was refracted through the prism of TINA – There Is No Alternative. There was no money for anything, so why even think about it? Well, now we have an alternative.

The traditional method for registering frustration is obviously to vote – a channel which was jammed at the last election. Millions of people voted UKIP, or for the Green Party, and got one MP apiece: no influence for either point of view. A more proportional voting system is one well-known idea, and I think an excellent one, but there are plenty of other possibilities too.

What if there were a more structured way to report on citizens’ frustrations on a rolling basis? An Office for Budget Responsibility, but for national sentiment – preparing both statistical and qualitative reports that act as a radar for public anger. It would have to go beyond the existing ‘issue tracking’ polling to provide something more comprehensive and persuasive. Perhaps the data could be publicly announced with the same fanfare as quarterly GDP.

Consultative processes at the local level are much more advanced than at the national level. Here is some of the current thinking on the best ways to build a national ‘anger radar’, drawing on methods widely used at the local level.

Any such process faces the problem of ‘strategic behaviour’. If someone asks your opinion on immigration, you might be tempted to pretend you are absolutely furious about it, even if you are only mildly piqued by the topic. Giving extreme answers might seem like the best way to advocate for the change you want to see, but such extreme responses could mask authentically important signals. Asking respondents to rank responses in order, or to assign monetary values to outcomes, are classic ways to help mitigate strategic behaviour.
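
As a very rough illustration of why ranking helps, here is a minimal sketch in Python. The issues and the survey responses are entirely hypothetical; the point is only that a Borda-style aggregation of rankings cannot be inflated in the way a 0–10 ‘anger’ scale can, because each respondent has only one first preference to give.

```python
# Minimal sketch: aggregating ranked priorities instead of raw intensity scores.
# The issues and the three respondents' rankings below are entirely hypothetical.
from collections import defaultdict

ISSUES = ["immigration", "nhs", "economy", "housing"]

# Each respondent ranks the issues from most to least important. Because each
# respondent has only one first place to give, they cannot mark everything as
# "maximum anger" the way they can on a 0-10 intensity scale.
responses = [
    ["nhs", "housing", "economy", "immigration"],
    ["immigration", "economy", "nhs", "housing"],
    ["housing", "nhs", "immigration", "economy"],
]

def borda_scores(rankings, issues):
    """Borda count: an issue ranked 1st of n gets n-1 points, the last gets 0."""
    scores = defaultdict(int)
    n = len(issues)
    for ranking in rankings:
        for position, issue in enumerate(ranking):
            scores[issue] += (n - 1) - position
    return dict(scores)

print(borda_scores(responses, ISSUES))
# -> {'nhs': 6, 'housing': 5, 'economy': 3, 'immigration': 4}
```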

Strategic behaviour can also be avoided by looking at actions that are hard to fake. Economists refer to these as ‘revealed’ preferences – often revealed by the act of spending money on something. It’s awful to think about, but house prices might encode public opinions on immigration. If house prices are lower in areas of high immigration, that might reveal the extent to which citizens truly find it to be an issue. Any such analysis would have to use well-established techniques for removing confounding factors, for example accounting for the fact that immigration might disproportionately flow to areas with lower house prices anyway. This approach might not be relevant for the issues in the EU referendum, but could be important for other national policies. Do people pay more for a house which falls in the catchment of an academy school, for example? (More technical detail on all these approaches).
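
The standard tool for this kind of question is a hedonic regression. The sketch below is only indicative: the data is synthetic and the column names (migrant_share, school_quality and so on) are my assumptions, not a real dataset; in practice, identifying the effect would need far more careful treatment of confounding.

```python
# Indicative sketch of a hedonic regression on synthetic data. Column names
# such as migrant_share and school_quality are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500  # hypothetical local areas

df = pd.DataFrame({
    "migrant_share": rng.uniform(0, 0.3, n),
    "median_earnings": rng.normal(28_000, 4_000, n),
    "school_quality": rng.uniform(0, 1, n),
    "dist_city_km": rng.uniform(1, 40, n),
})
# Synthetic prices: earnings and schools push prices up, distance pushes them down.
df["log_price"] = (
    11.5
    + 0.00002 * df["median_earnings"]
    + 0.3 * df["school_quality"]
    - 0.01 * df["dist_city_km"]
    + rng.normal(0, 0.1, n)
)

# Controls are included so the migrant_share coefficient is not just picking up
# the fact that migration may flow to cheaper areas anyway.
model = smf.ols(
    "log_price ~ migrant_share + median_earnings + school_quality + dist_city_km",
    data=df,
).fit()

# The sign and size of this coefficient is the 'revealed preference' signal,
# though omitted confounders could still bias it.
print(model.params["migrant_share"], model.pvalues["migrant_share"])
```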

Social media is another source of data. Is the public discourse, as measured on Twitter or Facebook (if they allowed access to the data), increasingly mentioning immigration? What is the sentiment expressed in those discussions? Certainly a crude measure, but perhaps part of a wider analysis – and ultimately no cruder than the methods used to estimate inflation.
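
A crude version of that analysis might look like the sketch below. The posts, topic keywords and sentiment lexicon are all invented for illustration; real work would use a proper sentiment model and a representative sample of posts.

```python
# Crude sketch: monthly mention counts and lexicon-based sentiment for a topic,
# over a hypothetical list of (date, text) posts pulled from a social media API.
from collections import defaultdict
from datetime import datetime

TOPIC_TERMS = {"immigration", "migrants", "borders"}
NEGATIVE = {"angry", "furious", "fed", "swamped"}   # toy lexicon
POSITIVE = {"welcome", "benefit", "enriched"}

posts = [
    ("2016-03-02", "Fed up, furious about immigration"),
    ("2016-05-14", "Migrants benefit the economy and are welcome here"),
]

mentions = defaultdict(int)
sentiment = defaultdict(int)

for date_str, text in posts:
    month = datetime.strptime(date_str, "%Y-%m-%d").strftime("%Y-%m")
    words = set(text.lower().split())
    if words & TOPIC_TERMS:
        mentions[month] += 1
        sentiment[month] += len(words & POSITIVE) - len(words & NEGATIVE)

print(dict(mentions))   # how often the topic comes up, month by month
print(dict(sentiment))  # a very rough positive/negative balance
```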

All these approaches are valuable because they tell us about ‘raw’ sentiment – what people believe before they are given a space to reflectively consider. ‘Raw’ views are important since they are the ones that determine how people will act, for example at a referendum.

But that is not enough on its own. As discussed in a previous post, good policy will also be informed by a knowledge of what people want when they have thought more deeply and have information that allows them to act in their own best interests. These kinds of views could be elicited using processes such as the RSA’s recently announced Citizens’ Economics Council, where 50-60 (presumably representative) citizens will be given time and resources to help them think deeply about economic issues of the day, and subsequently give their views to policy makers.

Delib, a company that provides digital democracy software, offers a budget simulator which achieves a similar goal. The affordances of the interface mean that users have to allocate a fixed budget between different options using sliders. In the process of providing a view, users intrinsically become aware of the various compromises that must be made, and deliver a more informed decision.

We live in a society where more data is available about citizens’ behaviour than ever before. As is widely discussed, that represents a privacy challenge that is still being understood. But the same data represents an opportunity for governments to be responsive in new ways. Did the intelligence services know which way the vote would go from their clandestine monitoring of our private communications? Who knows.

We cannot predict everything – famously, a single Tunisian street vendor’s protest set off the whole of the Arab Spring. But we can see the contexts that make that kind of volatility possible, and I believe the anti-immigration context could easily have been detected in the run-up to the referendum.

There is no longer any reason for a referendum about the EU to become a channel for anger about tangentially related issues. The political class would not have been ‘punched on the nose’ if they were a little better at listening.

Hat tip: Thanks to the Delib Twitter account, which has been keeping track of the conversation about new kinds of democracy post-Brexit, and which I’ve drawn on in this post.

There seem to be three possible ways forward from the current position, all of them disastrous for democracy. I have no idea which is most likely; all of them are very bad, and all of them represent a betrayal of voters – especially those who voted to leave.

Leaving the EU and the single market is the simplest proposition – in terms of democracy it would allow a government to deliver on the key pledges of immigration controls and bringing law making back to Westminster. However, the extreme financial situation the UK would likely find itself in would certainly make £15bn of extra investment in the NHS impossible. The costs to jobs and wages would be appalling – ‘Britain’s service economy would be cut up like an old car‘ – and the nation would be in deep economic shock.

Ignoring the referendum (unless there is another general election) would obviously be an enormous affront to democracy, and the tabloid newspapers would howl with rage. The unexpectedly large constituency who voted leave, who already believe they are ignored and forgotten, would rightly be incensed. Such an option may easily lead to the rise of extremist parties.

The UK remains in the single market but out of the EU – the Norway option, the middle ground. Norway pays an enormous monetary price for access to the single market; if the UK got a similar deal there would be no spare cash to spend on the NHS. Norway accepts free movement of people, breaking the Leave campaign’s promise of border controls. Finally, Norway obeys many of the EU’s laws in order to gain access to the single market, and has no say during the process of EU legislation – which is difficult to square with Leave’s ‘taking back control’ motto.

The UK will not get an exact copy of the Norway deal. Perhaps a better deal can be struck? Someone, presumably Boris, would have to achieve a heroic feat of negotiation. He does not start from a good position: on a personal level, he has been lambasting the EU for months, even comparing the organisation to the Nazis. Many European leaders fear that a good deal for Britain would encourage discontent in their own countries, and may want to make an example of the UK. Watching David Cameron’s resignation speech must have had a visceral effect on other European leaders.

According to the rules of the Article 50 process, the UK will not be in the room for exit negotiations; results would be presented to the UK as a fait accompli, and if we don’t find agreement within two years, we’ll be automatically ejected. The single market option has been explicitly ruled out by several leading European politicians, so it looks set to be an uphill battle. Just in case it wasn’t hard enough, Scotland could leave, or threaten to leave, the UK during the negotiations – possibly to join the EU, maybe even the euro.

It looks as though Boris hopes to find some combination of the Norway deal that keeps watered-down versions of his promises, probably achieved mostly through obfuscation. His Telegraph column sets out an impossible wishlist of access to the single market, border controls and savings in EU contributions which he will certainly never deliver.

This is, I believe, the most dysfunctional example of democracy of all three options. The electorate have been sold an impossible dream of ‘taking control’, lower immigration and windfall savings in EU contributions. Under the Norway option, it will not be clear that any of this has been delivered.

We all know that political parties renege on their manifesto promises, but the Leave campaign set a new low. Within 48 hours of the result they had explicitly denied that they felt at all bound to deliver on lower immigration or increased NHS spending. The audacity is comedic: there are pictures of all the leading Leave campaigners standing in front of campaign buses emblazoned with huge slogans which they now claim mean nothing. Perhaps they believe technicalities about which leave campaign said what, or whether their slogans were commitments or more like ‘serving suggestions’, will save them. They should consider what happened to the Lib Dems when they (quite reasonably) blamed their broken tuition fee pledge on the coalition.

Before the referendum, no one had realised how much anger was directed at the political classes. After the referendum, there are only reasons for that anger to grow. In Norway-style scenarios Leave voters will get only the palest imitations of the policies they believe they voted for, but at a terrible, terrible cost. Leaving the EU might cause a recession, and will certainly cost jobs. Then there are the tens, possibly hundreds, of billions of pounds in foregone GDP. Government policy of any kind will be on hold for years as we renegotiate. The cost of government borrowing could spiral. Scientific and medical research will be disrupted and damaged. UK citizens will find travelling and working in the EU harder.

Most importantly, many Leave voters, already from poor areas, will be in even worse poverty. Boris’s stall, as he set it out in the Telegraph, is about throwing off the ‘job destroying coils of EU bureaucracy’. The idea that removing workers’ rights is going to play a big part in reducing inequality is a fairy tale. Leave voters are almost certain to see things getting worse, not better, even if they are temporarily satisfied to have ‘taken back control’.

For a country that everyone recognises is divided and wounded, all of the routes forward point to ever more poverty, pain and division.


Most democratic countries use representative democracy – you vote for someone who makes decisions on your behalf (in the UK’s case, your MP). The EU referendum is different: it’s an example of direct democracy. Bypassing their representatives, every citizen who is eligible to vote will be asked to make the decision themselves.

The referendum has this feature in common with most participatory design processes (by PD I mean including end users in the process of designing a product or service). PD is normally carried out with the stakeholders themselves, not representatives of them. You could think of the referendum as a participatory design process, designing a particular part of the UK’s economic and foreign policy.

The EU referendum fails as a participatory design process in two important ways. Firstly, most of the participants are deeply ill-informed about the issues at hand, and under these circumstances it will be impossible for them to act in their own best interests. The consequences of their design decision may well run counter to their expectations.

An IPSOS MORI survey shows that on average UK voters believe that 15% of the population are EU migrants, when in fact only 5% are. On provocative issues such as the percentage of child benefit that is paid to children living elsewhere in Europe, people overestimate by a factor of eighty or more: the true figure is about 0.3%, yet 1 in 4 respondents estimated more than 24%.

Richard Dawkins has noted that very few people know all the relevant details to cast a vote, and laments the bizarre logic often used in discussions. He recommends voting for ‘remain’ in line with a ‘precautionary principle’, and has the following quote to illustrate the level of debate on TV:

“Well, it isn’t called Great Britain for nothing, is it? I’m voting for our historic greatness.”

Of course, it’s a question of degree. It would be unreasonable to suggest only a tiny number of world-leading experts can voice meaningful opinions. But there does seem to be a problem when decision makers are systemically wrong about the basic facts.

The second way the EU referendum fails is that the participants do not reflect the makeup of the country as a whole. Much of the speculation on the outcome focuses on turnout – which age groups and social classes will make the effort to cast a vote. Yet it hardly seems fair that such an important decision will be taken by a self-selecting group. Criticism of participatory design projects often rightly centres on the demographic profile of the participants, especially when more vocal or proactive groups override others. If young people were more inclined to vote, the chances of a remain result would increase dramatically. If people with lower incomes were more likely to vote, it would boost leave. I take this to be a serious problem in the voting mechanism.

These are difficult problems to solve. How can a participatory process have well-informed participants and accurately reflect the demographics of the country, while offering everyone the chance to vote?

Harry Farmer has suggested that the rising number of referendums in the UK tells us we need to reform the way we do representative democracy, rather than resorting to bypassing it. Representatives have the time and resources to become well informed on issues, so they would in theory make better decisions. However, this does nothing to address the issue of turnout – MPs are themselves selected by voters who are disproportionately well off and older. MPs themselves are very far from reflecting the demographics of the UK as a whole.

Two more radical solutions have been put forward by Stanford Professor James Fishkin. In his ‘deliberation day’ model, the whole country would be given the day off to learn about, discuss, and vote on a topic, perhaps on an annual basis. Participation would be encouraged with a $150 incentive. The advantage is that (almost) everyone is included, and that the incentive ought to be enough to ensure most demographics are well represented. The participants would also be well informed, having been given the day to think deeply in a structured way. However, it’s clearly a massive logistical and political challenge to implement ‘deliberation day’.

Fishkin’s other suggestion is to throw over inclusion – the attempt to allow everyone to get involved – and instead use ‘deliberative democracy’. In this scenario, a sample of the population, chosen to reflect the demographic makeup of the country as a whole, come together for a weekend, to discuss and learn about an issue before casting votes. This gives us well informed participants who are demographically reflective of the country as a whole. The model is roughly similar to jury service. The drawback is that some people may find it unfair to have a small, unelected group make a decision that affects everyone.

Making participation freely open to all stakeholders while ensuring that the participants are well informed and demographically representative is difficult in any participatory design process. Some may feel that the opportunity to participate is enough, and that if the young, or the less well off, decide not to vote that’s up to them.

However, voters holding incorrect beliefs about the basic facts seems to me to point to a fundamentally broken process, where any decisions made are unlikely to turn out well. In classic participatory design projects, approaches such as prototyping, iteration and workshopping can help participants improve their understanding of the situation and empower them to make decisions in their own interests.

Are there similar approaches we could take to improve national decision making? Perhaps in the UK we could look at the structure of the press, and ask if having a tiny number of extremely rich newspaper proprietors holding sway over public opinion isn’t perhaps a serious problem for a country pretending to be a democracy.

Yesterday we had a really great round table talking about supply chains and manufacturing, hosted by Future Makespaces. Supply chains touch on so many political topics. They matter intensely for labour conditions, wages,  immigration, the environment and for the diffusion of culture. At the same time they remain mostly invisible: they have a dispersed physical manifestation, and subsist in innumerable formal and informal social relations.

Governments publish some data on supply chains, but it provides only a very low-resolution picture. Jude Sherry told us that trying to locate manufacturers legally registered in Bristol often proved impossible. The opposite proved true as well – there are plenty of manufacturers in Bristol who do not appear in official data.

Supply chains are especially salient now because technology is changing their structure. The falling cost of laser cutters and 3D printers is democratising processes once only possible in large scale manufacturing – thus potentially shortening the logistical pathway between manufacturer and consumer; perhaps even bringing manufacturing back into the cities of the developed world.

What I took from the round table was the surprising diversity of approaches to the topic – as well as a chance to reflect on how I communicate my position.

If your writing, design, or artistic practice is about making the invisible visible, then supply chains are a rich territory – a muse for your work, and an agent of change in the materials and processes you can work with. I took Dr Helge Mooshammer and Peter Mörtenböck to be addressing this cultural aspect with their World of Matter project and their Other Markets publications. Emma Reynolds told us about the British Council’s Maker Library Network, which I think you could see as an attempt to instrumentalise that cultural output.

Michael Wilson (who was in the UK for The Arts of Logistics conference), came to the topic from an overtly political direction, casting the debate in terms familiar from Adam Curtis’ All Watched Over by Machines of Loving Grace; positioning himself in relation to capitalism, neo-liberalism and anarchy. His Empire Logistics project aims to explore the supply infrastructure of California. He highlighted the way that supply chains had responded to the unionisation of dock workers in California by moving as many operations as possible away from the waterfront, and to an area called, poetically, the Inland Empire.

The ‘small p’ political also featured – Adrian McEwan told us about his local impact through setting up a Makespace in Liverpool. James Tooze told us about the modular furniture system he’s working on with OpenDesk – designed to reduce waste generated by office refits and be more suited to the flexible demands that startups make of their spaces.

My perspective is based mostly on ideas from the discipline of economics. I described the Hayekian idea of the market as a giant computer that efficiently allocates resources, where the market, through the profit motive, solves the knowledge problem – and that it does so in a way that cannot be improved upon.

Even if I don’t myself (completely) subscribe to this point of view, it is well embedded with policy makers, and I think needs to be addressed and rebutted.

Every attempt to actively change supply chains, from the circular economy to makespaces, faces a challenge from Hayekian reasoning: if a new system of supply was a good idea, why hasn’t the market already invented it?

I see my work as using the language of economics to position design practices that seek to augment or transcend that market logic. In particular, I think Elinor Ostrom’s work offers a way to acknowledge human factors in the way exchanges take place, as well as providing design principles based on empirical research.

One surprise was the divergence of views on the ambitions of the ‘maker movement’. Should it aim for a future where a significant fraction of manufacturing happens in makespaces? Or would that mean the movement had been co-opted? Is its subcultural status essential?

I realised that when I try to explain my position in future I need to address questions like why I’ve chosen to engage with economic language and try to illustrate how that dovetails with cultural and political perspectives.


TL;DR: Almost everyone thinks academic publishing needs to change. What would a better system look like? Economist Elinor Ostrom gave us design principles for an alternative – a knowledge commons, a sustainable approach to sharing research more freely. This approach exemplifies using economic principles to design a digital platform. 

Why is this relevant right now? 

The phrase ‘Napster Moment’ has been used to describe the current situation. Napster made MP3 sharing so easy that the music industry was forced to change its business model. The same might be about to happen to academic publishing.

In a recent Science Magazine reader poll (admittedly unrepresentative), 85% of respondents thought pirating papers from illicit sources was morally acceptable, and about 25% said they did so weekly.

Elsevier – the largest for-profit academic publisher – is fighting back. They are pursuing the SciHub website through the courts. SciHub is the most popular website offering illegal downloads, and has virtually every paper ever published.

In another defensive move, Elsevier has recently upset everyone by buying Social Science Research Network – a highly successful not-for-profit website that allowed anyone to read papers for free.

Institutions that fund research are pushing for change, fed up with a system where universities pay for research, but companies like Elsevier make a profit from it. Academic publishers charge universities about $10Bn a year, and make unusually large profits.

In the longer term, the fragmentation of research publishing may be unsustainable. Over a million papers are published every year, and research increasingly requires academics to understand multiple fields. New search tools are desperately needed, but they are impossible to build when papers are locked away behind barriers.

How should papers be published? Who should pay the costs, and who should get the access? Economist and Nobel laureate Elinor Ostrom pioneered the idea of a knowledge commons to think about these questions.

What is a knowledge commons? 

A commons is a system where social conventions and institutions govern how people contribute to and take from some shared resource. In a knowledge commons that resource is knowledge.

You can think of knowledge, embodied in academic papers, as an economic resource just like bread, shoes or land. Clearly knowledge has some unique properties, but this assumption is a useful starting point.

When we are thinking about how to share a resource, Elinor Ostrom, in common with other economists, asks us to consider whether the underlying resource is ‘excludable’ and whether it is ‘rivalrous’.

If I bake a loaf of bread, I can easily keep it behind a shop counter until someone agrees to pay money in exchange for it – it is excludable. Conversely, if I build a road it will be time consuming and expensive for me to stop other people from using it without paying – it is non-excludable.

If I sell the bread to one person, I cannot sell the same loaf to another person – it is rivalrous. However, the number of cars using a road makes only a very small difference to the cost of providing it. Roads are non-rivalrous (at least until traffic jams take effect).

                  Excludable                          Non-excludable
Rivalrous         Market goods                        Common pool resources
                  (bread, shoes, cars)                (fish stocks, water)
Non-rivalrous     Club goods                          Public goods
                  (gyms, toll roads,                  (national defence, street lighting,
                   academic papers)                    academic papers)

Most economists think markets (where money is used to buy and sell – top left in the grid) are a good system for providing rivalrous, excludable goods – bread, clothes, furniture etc. – perhaps with social security in the background to provide for those who cannot afford necessities.

But if a good is non-rivalrous, non-excludable, or both, things get more complicated, and markets become less effective. This is why roads are usually provided by a government rather than a market – though for-profit toll roads do exist.

The well-known ‘tragedy of the commons’ is an example of this logic playing out. The thought experiment concerns a rivalrous, non-excludable natural resource – the example usually given is a village with common pasture land shared by everyone. Each villager has an incentive to graze as many sheep as they can on the shared pasture, because then they will have nice fat sheep and plenty of milk. But if everyone behaves this way, unsustainably large flocks of sheep will collectively eat all the grass and destroy the common pasture.
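
The arithmetic behind the thought experiment is simple enough to spell out. In the toy numbers below (all invented), each extra sheep is worth 10 to its owner but does 30 of damage to the shared pasture, spread across 10 villagers – so grazing one more is individually rational and collectively ruinous.

```python
# Toy illustration of the incentive at the heart of the thought experiment.
# All numbers are made up; the point is the asymmetry of benefit and cost.
villagers = 10
gain_per_extra_sheep = 10.0     # value of the extra sheep to its owner
damage_per_extra_sheep = 30.0   # total cost of the extra grazing to the pasture

private_gain = gain_per_extra_sheep
private_share_of_damage = damage_per_extra_sheep / villagers  # 3.0

# Adding a sheep is individually rational (+10 vs -3 for the owner)...
print(private_gain - private_share_of_damage)         # 7.0
# ...but collectively destructive (+10 vs -30 for the village as a whole).
print(gain_per_extra_sheep - damage_per_extra_sheep)   # -20.0
```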

The benefit accrues to the individual villager, but the cost falls on the community as a whole. The classic economic solution is to put fences up and turn the resource into an excludable, market-based system. Each villager gets a section of the common to own privately, which they can buy and sell as they choose.

Building and maintaining fences can be very expensive – if the resource is something like a fishing ground, it might even be impossible. The view that building a market is the only good solution has been distilled into an ideology, and, as discussed later, that ideology led to the existence of the commercial academic publishing industry. As the rest of this post will explain, building fences around knowledge has turned out to be very expensive.

Ostrom positioned herself directly against the ‘have to build a market’ point of view. She noticed that in the real world, many communities do successfully manage commons.

Ostrom’s Law: A resource arrangement that works in practice can work in theory.

She developed a framework for thinking about the social norms that allow effective resource management across a wide range of non-market systems – a much more nuanced approach than the stylised tragedy-of-the-commons thought experiment. Her analysis calls for a more realistic model of the villagers, who might realise that the common is being overgrazed, call a meeting, and agree a rule about how many sheep each person is allowed to graze. They are designing a social institution.

If this approach can be made to work, it saves the cost of maintaining the fences, but avoids the overgrazing that damages the common land.

The two by two grid above has the ‘commons’ as only one among four strategies. In reality, rivalry and excludability are questions of degree, and can be changed by making different design choices.

For this analysis, it’s useful to use the word ‘commons’ as a catchall for non-market solutions.

Ostrom and Hess published a book of essays, Understanding Knowledge as a Commons, arguing that we should use exactly this approach to understand and improve academic publishing. They argue for a ‘knowledge commons’.

The resulting infrastructure would likely be one or more web platforms. The design of these platforms will have to take into account the questions of incentives, rivalry and exclusion discussed above.

What would a knowledge commons look like? 

Through extensive real world research, Ostrom and her Bloomington School derived a set of design principles for effectively sharing common resources:

  1. Define clear group boundaries.
  2. Match rules governing use of common goods to local needs and conditions.
  3. Ensure that those affected by the rules can participate in modifying the rules.
  4. Make sure the rule-making rights of community members are respected by outside authorities.
  5. Develop a system, carried out by community members, for monitoring members’ behavior.
  6. Use graduated sanctions for rule violators.
  7. Provide accessible, low-cost means for dispute resolution.
  8. Build responsibility for governing the common resource in nested tiers from the lowest level up to the entire interconnected system.

These principles can help design a system where there is free access while preventing collapse from abusive treatment.

Principle 1 is already well addressed by the existence of universities, which give us a clear set of internationally comparable rules about who is officially a researcher in what area – doctorates, professorships etc. These hierarchies could also indicate who should participate in discussions about designing improvements to the knowledge commons, in accordance with principles 2 and 3. This is not to say that non-academics would be excluded, but that there is an existing structure which could help with decisions such as who is qualified to carry out peer review.

In a knowledge commons utopia, all the academic research ever conducted would be freely available on the web, along with all the related metadata – authors, dates, who references whom, citation counts etc. A slightly more realistic scenario might have all the metadata open, plus papers published from now forward.

This dataset would allow innovations that could address many of these design principles. In particular, in accordance with principle 5, it would allow for the design of systems measuring ‘demand’ and ‘supply’. Linguistic analysis of papers might start to shine a light on who really supplies ideas to the knowledge commons, by following the spread of ideas through the discourse. The linked paper describes how to discover who introduces a new concept into a discourse, and how to track when that concept is widely adopted.
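
I am not reproducing the linked paper’s method here, but the underlying idea can be sketched very simply: find the first author to use a term, then count how many later authors pick it up. The corpus below is hypothetical.

```python
# Toy sketch of the idea (not the linked paper's method): find the first paper
# to use a term, then count how many later papers adopt it.
# The corpus below is hypothetical: (year, author, text) tuples.
corpus = [
    (2001, "Ostrom", "governing a knowledge commons requires shared rules"),
    (2004, "Hess",   "the knowledge commons can be measured empirically"),
    (2007, "Suber",  "open access complements the knowledge commons"),
]

def introduction_and_adoption(corpus, term):
    """Return the first author to use the term, and how many later authors adopt it."""
    users = [(year, author) for year, author, text in sorted(corpus)
             if term in text.lower()]
    if not users:
        return None, 0
    first_year, originator = users[0]
    adopters = {author for year, author in users[1:]}
    return originator, len(adopters)

originator, n_adopters = introduction_and_adoption(corpus, "knowledge commons")
print(originator, n_adopters)  # -> Ostrom 2 (adopted by two later authors)
```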

This could augment crude citation counts, helping identify those who supply new ideas to the commons. What if we could find out which papers people are searching for, but not finding? Such data might proxy for ‘demand’ – telling researchers where to focus their creative efforts.

Addressing principle 6, there is much room for automatically detecting low-quality ‘me-too’ papers, or outright plagiarism. Or perhaps it would be appropriate to establish a system where new authors have to be sponsored by existing authors with a good track record – a system which the preprint site arXiv currently implements. (Over-publication is interestingly similar to overgrazing of a common pasture: abusing the system for personal benefit at a cost to the group.)
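
As a sketch of how crude screening might work – not a production system; the abstracts are invented and any real cut-off would need calibrating – pairwise TF-IDF similarity can flag near-duplicate submissions for human review:

```python
# A sketch of crude duplicate screening: pairwise TF-IDF cosine similarity over
# invented abstracts. High off-diagonal values would flag candidate 'me-too'
# or duplicated submissions for human review.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

abstracts = [
    "We study overgrazing incentives in shared village pasture systems.",
    "Overgrazing incentives in shared village pasture systems are studied here.",
    "A survey of open access mandates in UK research funding.",
]

tfidf = TfidfVectorizer(stop_words="english").fit_transform(abstracts)
similarity = cosine_similarity(tfidf)

print(np.round(similarity, 2))
# The first two abstracts score far higher against each other than either does
# against the third; the threshold for flagging would need tuning on real data.
```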

Multidisciplinary researchers could benefit from new ways of aggregating papers that do not rely on traditional journal-based categories, and visualisations of networks of papers might help us orient ourselves in new territory more quickly.

All of these innovations, and many others that we cannot foresee, require a clean, easily accessible data set to work with.

These are not new ideas. IBM’s Watson is already ingesting huge amounts of medical research to deliver cancer diagnoses and generate new research questions. But the very fact that only companies with the resources of IBM can get to this data confirms the point about the importance of the commons. Even then, they are only able to look at a fraction of the total corpus of research.

But is the knowledge commons feasible?

How, in practical terms, could a knowledge commons be built?

Since 1665, the year the Royal Society was founded, about 50 million research papers have been published. As a back-of-the-envelope calculation, that’s about 150 terabytes of data, which would cost around $4,500 a month to store on Amazon’s cloud servers.
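
For what it’s worth, the figures are easy to sanity-check. The average file size and per-gigabyte price below are my assumptions, chosen to be roughly 2016-era, and they land in the same ballpark as the numbers above.

```python
# Back-of-the-envelope check of the storage figure, with assumed inputs:
# the paper count comes from the post; file size and per-GB price are guesses.
papers = 50_000_000          # papers published since 1665 (figure from the post)
avg_size_mb = 3              # assumed average size of a typeset PDF
price_per_gb_month = 0.03    # assumed object-storage price in USD (2016-era)

total_gb = papers * avg_size_mb / 1024
total_tb = total_gb / 1024
monthly_cost = total_gb * price_per_gb_month

print(f"{total_tb:,.0f} TB, roughly ${monthly_cost:,.0f}/month to store")
# -> ~143 TB and ~$4,400/month: the same order of magnitude as 150 TB / $4,500.
```

Obviously just storing the data is not enough – so is there a real-world example of running this kind of operation?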

Wikipedia stores a similar total amount of data (about 40 million pages). It also has functionality that supports about 10 edits to those pages every second, and it is one of the 10 most popular sites on the web. Including all the staffing and servers, it costs about $50 million per year.

That is less than 5% of what the academic publishing industry charges every year. If the money that universities spend on access to journals was saved for a single year, it would be enough to fund an endowment that would make academic publishing free in perpetuity – a shocking thought.
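
The endowment claim also survives a rough check, under the assumptions that a ~4% draw rate is sustainable and that a Wikipedia-scale operation costs about $50 million a year:

```python
# Rough check of the endowment claim. The 4% draw rate and the ~$50m annual
# running cost are assumptions; the $10bn figure is the post's estimate of
# what publishers charge universities each year.
annual_journal_spend = 10_000_000_000   # one year of journal spending (post's figure)
safe_draw_rate = 0.04                   # assumed sustainable real return
wikipedia_scale_cost = 50_000_000       # assumed annual cost of a Wikipedia-scale platform

endowment_income = annual_journal_spend * safe_draw_rate
print(f"${endowment_income / 1e6:.0f}m/year from the endowment, "
      f"about {endowment_income / wikipedia_scale_cost:.0f}x a Wikipedia-scale budget")
# -> $400m/year, about 8x a Wikipedia-scale operation.
```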

What’s the situation at the moment? 

Universities pay for the research that results in academic papers. Where papers are peer-reviewed, the reviewing is mostly done by salaried university staff who don’t charge publishers for their time. The cost to an academic publisher of producing a paper is therefore, more or less, typesetting plus admin.

Yet publishers charge what are generally seen as astronomical fees. An ongoing annual licence to access a journal often costs many thousands of pounds. University libraries, which may need access to thousands of journals, pay millions each year in these fees. As a member of the public, you can download a paper for about $30 – and a single paper is often valueless without the network of papers it references. The result is an industry worth about $10bn a year, with profit margins that are often estimated at 40%. (Excellent detailed description here.)

I’ve heard stories of academics having articles published in journals their university does not have access to. They can write the paper, but their colleagues cannot subsequently read it – which is surely the opposite of publishing.  There are many papers that I cannot access from my desk at the Royal College of Art, because the university has not purchased access. But RCA has an arrangement with UCL allowing me to use their system. So I have to go across town just to log onto the Internet via UCL’s wifi. This cannot make sense for anyone.

I’m not aware of any similar system anywhere else. It’s a hybrid of public funding plus a market mechanism. Taxpayers’ money is spent producing what looks like a classic public or commons good (knowledge embodied in papers): free to everyone, non-rivalrous and non-excludable. That product is then handed over to a private company, for free, and the private company makes a profit by selling it back to the organisations that produced it. Almost no one (except the publishers) believes this represents value for money.

Overall, in addition to being a drain on the public purse, the current system fragments papers and associated metadata behind meaningless artificial barriers.

How did it get like that?

Nancy Kranich, in her essay for the book Understanding Knowledge as a Commons, gives a useful history. She highlights the Reagan-era ideological belief (mentioned earlier) that the private sector is always more efficient, plus the short-term incentive of the one-time profit you get by selling your in-house journal. That seems to be about the end of the story, although in another essay in the same book Peter Suber points out that many high-level policy makers do not know how the system works – which might also be a factor.

If we look to Ostrom’s design principles, we cannot be surprised at what has happened. Virtually all the principles (especially 4, 7 and 8) are violated when a commons includes a small number of politically powerful, for-profit institutions that rely on appropriating resources from that commons. It’s analogous to the way industrial fishing operations are able to continuously frustrate legislation designed to prevent ecological disaster in overstrained fishing grounds by lobbying governments.

What are the current efforts to change the situation?

In 2003 the Bethesda Statement on Open Access indicated that the Howard Hughes Medical Institute and the Wellcome Trust, which between them manage an endowment of about $40bn, wanted research they funded to be published Open Access – and that they would cover the costs. This seems to have set the ball rolling, although the situation internationally is too complex to easily unravel.

Possibly charities led the way because they are free of the ideological commitments of governments, as described by Kranich, and less vulnerable to lobbying by publishers.

Focusing on the UK: since 2013, Research Councils UK (which disburses about £3bn to universities each year) has insisted that work it funds should be published Open Access. The details, however, make this rule considerably weaker than you might expect. RCUK recognises two kinds of Open Access publishing.

With Gold Route publishing, a commercial publisher will make the paper free to access online, and publish it under a Creative Commons licence that allows others to do whatever they like with it – as long as the original authors are credited. The commercial publisher will only do this if they are paid – rates vary, but it can be up to £5,000 per paper. RCUK has made a £16 million fund available to cover these costs.

Green Route publishing is a much weaker type of Open Access. The publisher grants the academics who produced the paper the right to “self archive” – i.e. make their paper available through their university’s website. It is covered by a Creative Commons licence that allows other people to use it for any non-commercial purpose, as long as they credit the author. However, there can be an embargo of up to three years before the academics are allowed to self-archive their paper. There are also restrictions on which sites they can publish the paper on – for example, they cannot publish it to a site that mimics a conventional journal. Whether sites such as Academia.edu are acceptable is currently the subject of debate.

Is it working?

In 1995, Forbes predicted that commercial academic publishers had a business model that was about to be destroyed by the web. That makes sense – after all, the web was literally invented to share academic papers. Yet here we are, 21 years later, and academic publishers still exist and still have enormous valuations. Their shareholders clearly don’t think they are going anywhere.

Elsevier is running an effective operation to prevent innovation by purchasing competitors (mendeley.com) or threatening them with copyright actions (academia.edu and SciHub). Even if newly authored papers are published open access, the historical archive will remain locked away. However, there is change.

Research Councils UK carried out an independent review in 2014, in which nearly all universities were able to report publishing at least 45% of papers as open access (via green or gold routes) – though the report is at pains to point out that most universities don’t keep good records of how their papers are published, so this figure could be inaccurate.

In fact the UK is doing a reasonable job of pursuing open access, and globally things are slowly moving in the right direction. Research is increasingly reliant on pre-prints hosted on sites like arXiv, rather than official journals, which move too slowly.

Once a database of the 50 million academic papers is gathered in one place (which SciHub may soon achieve) it’s hard to see how the genie can be put back in the bottle.

If this is a ‘Napster moment’, the question is what happens next. Many people thought that MP3 sharing was going to be the end of the commercial music industry. Instead, Apple moved in and made a service so cheap and convenient that it displaced illicit file sharing. Possibly commercial publishers could try the same trick, though they show no signs of making access cheaper or more convenient.

Elinor Ostrom’s knowledge commons shows us that there is a sustainable, and much preferable, alternative – one that opens the world’s knowledge to everyone with an Internet connection, and provides an open platform for innovations that can help us deal with the avalanche of academic papers published every year.


StoryMap is a project that I worked on with Rift theatre company, Peter Thomas from Middlesex University, Angus Main, who is now at RCA, and Ben Koslowski, who led the project. Oliver Smith took care of the tech side of things.

The challenge was very specific, but the outcome was an interface that could work in a variety of public spaces.

We were looking to develop an artefact that could pull together all of the aspects of Rift’s Shakespeare in Shoreditch festival, including four plays in four separate locations over 10 days, the central hub venue where audiences arrived, and the Rude Mechanicals: a roving troupe of actors who put on impromptu plays around Hackney in the weeks leading up to the main event.

We wanted something in the hub venue which gave a sense of geography to proceedings. In the 2014 Shakespeare in Shoreditch festival the audience were encouraged to contribute to a book of 1000 plays (which the Rude Mechanicals used this year for their roving performances). We felt the 2016 version ought to include a way for the audience to contribute too.

The solution we ended up with was a digital/physical hybrid map, with some unusual affordances. We had a large table with a map of Hackney and surroundings (reimagined as an island) routed into the surface.


We projected a grid onto the table top. Each grid square could have a ‘story’ associated with it. Squares with stories appeared white. Some of the stories were from the Twitter feed of the Rude Mechanicals, so from day one the grid was partially populated. Some of them were added by the audience.

You could read the stories using a console. Two dials allowed users to move a red cursor square around the grid. When it was on a square with a story, that story would appear on a screen in the console.


If there was no story on the square, participants could add one. We had sheets of paper with prompts written on them, which you could feed into a typewriter and tap a response. Once you’d written your story, you put it in a slot in the console, and scanned it with the red button. (Example, Prompt: ‘Have you been on a memorable date in Hackney?’, Response: ‘I’m on one now!’)

Nearly 300 stories were submitted over the 10 days. Even though they were really difficult to use, people loved the typewriters as an input method. Speaking from my own perspective, I found an input method that legitimised spelling mistakes and typos less intimidating.

There were two modes of interaction. The first was through the table-based projection, which allowed a conversational, collective and discursive understanding of what had already been submitted. The second was a more individual process of reading specific stories and adding your own using the screen in the console. The second mode still relied on the projection, because you needed to move your cursor to find or submit a story.

The resolution of the projection was too low (because of the size of the table) for fonts or details to be rendered well. From this perspective, the map routed into the table really worked; it increased the ‘bandwidth’ of the information the table could convey, fine lines and small text worked well (which gave us a chance to play around with whimsically renaming bits of Hackney).

Having a way to convey spatialised data on a table where people can get round it and discuss it, combined with a (potentially private) way to add detail might work in a number of scenarios. Could it be a tool for planning consultation? A way to explore data spatialised in some other way, eg. a political spectrum or along a time line? Perhaps in a museum context?

The whole thing was developed as a web app, so it’s easy to extend across more screens, or perhaps to add mobile interaction. It’s opened my eyes to the fact that, despite all the noise around open data, there are relatively few ways to explore digital information in a collective, public way. The data is shared, but the exploration is always individual.  More to follow…

(I did a quick technical talk on how we delivered StoryMap for Meteor London, slides here.)

The BBC is to remove recipes from its website, responding to pressure from the Government. It will also remove a number of other web-only services. The news is symbolic of a larger issue, and the outcome of a much longer story. It’s a signal that the current government will actively reduce public sector activity on the web for fear of upsetting or displacing the private sector. This is not just a feature of the current Conservative government; the Blair administration treated the BBC in the same way. The idea is that by reducing the public sector a thousand commercial flowers will bloom, that competition will drive variety and quality, and that a vibrant commercial digital sector will create high-skill jobs. Never mind that the web is already controlled by a handful of giant US monopolies, mostly employing people thousands of miles away. Ideology trumps pragmatism.

In the specific case of the BBC, the Government has won. The BBC’s entire digital presence is dependent on its TV and radio operations. iPlayer can only exist because the BBC makes TV and radio shows; the news website relies on the news-gathering operation it inherits from TV and radio. TV (and possibly radio) are destined to have fewer viewers and listeners as we increasingly turn to digital. So, as licence fee payers disappear, the output will shrink and decline in quality, the BBC’s presence in the national debate will diminish, and its ability to argue for funding will decrease. When it comes time to switch funding from a licence fee for owning a television to a system that works on broadband connections, the BBC will already have lost. An outmoded organisation that has failed to adapt, a footnote rather than a source of national pride.

Put simply, the BBC has failed to make the case that it should exist in a digital era. Instead it has chosen to remain a broadcast operation that happens to put some of its content on a website. When TV finally dies, the BBC could be left in a position similar to NPR in the US: of interest to a minority of left-wing intellectuals, dwarfed by bombastic, polarising media channels owned by two or three billionaires. That’s why it is so critical that the BBC make a web offer separate from TV – but it hasn’t. The Government has been extremely successful at making the BBC embrace the principle that all web output must be linked to TV or radio, which is why, for example, the BBC will be reducing commissions specifically for iPlayer too, and closing its online magazine.

The story has been evolving for a long time. I was working on the BBC’s website in 2009. It had just been through a multi-year Public Value Test to prove to the Board it wasn’t being anti-competitive by providing video content online; at least the public were allowed iPlayer in the end. BBC Jam, a £150 million digital educational platform to support the national curriculum, was cancelled in 2007 because of competition law. Don’t forget, at that point they’d already built most of it. Millions of pounds of educational material were thrown in the bin because it would be ‘anti-competitive’. Of course, no commercial alternative has ever been built.

When I arrived there was endless talk of restructuring, and optimism that we’d get a clear set of rules dictating which projects would not be considered anti-competitive. It never came. The project I worked on, about mass participation science experiments, was cancelled, I presume because it wasn’t directly connected to a TV program. All kinds of other digital offerings were closed. H2G2, which pre-dated, and could (maybe?) have become, Wikipedia, was shuttered. The Celebdaq revamp was another proposition that was entirely built and then cancelled before it ever went live.

The BBC will now offer recipes that are shown on TV programs, but only for 30 days afterwards. That’s how hysterical the desire to prevent public service on the web has become: you can create content, at considerable cost, but not leave it on the web, which would cost virtually nothing.

The BBC has focused its digital R&D budget on its gigantic archive, looking at new ways of searching, ordering and displaying the millions of hours of audio and video it has collected. This is a strange decision, because it is all but certain that the BBC will never get copyright clearance to make public anything but the tiniest fraction of that archive. I speculate that the reason it has done this is that it saves management from having to worry about a competitive analysis. Projects that can never go public don’t pose a problem.

If we shift our focus from the BBC to society as a whole, it’s disappointing to see how we’ve abandoned the notion of digital public space. The web has opened up a whole new realm for creativity, interaction, education and debate. As a society we’ve decided that almost nothing in that realm should be publicly provided – which is absolutely perverse, because the web intrinsically lends itself to what economists would think of as public goods.

Look across the activities of the state and you’ll see that none has a significant presence in the digital realm. We think the state should provide education – but it does nothing online. Local governments provide public spaces, from parks to town halls – but never online. We think the state should provide libraries – but never online. We love the state broadcaster, but we’re not allowed it online. We expect the state to provide healthcare – but the NHS offers only a rudimentary and fragmentary online presence. You can apply the formula to any sector of government activity. Want career guidance? Not online. Want to know how to make a shepherd’s pie? Better hope it appeared on a TV cooking show in the last 30 days.

Sometimes a new scrap of information strings a link between two previously disconnected neurons, your cortex reconfigures, and a whole constellation of thoughts snaps together in a new way. That’s happened to me recently: I’ve realised something that other people worked out a lot quicker than me – Facebook is eating the web. The original John Perry Barlow / Tim Berners-Lee / Jimmy Wales vision of a digital space everyone owned is dying. It’s sometimes easy to forget how recently we had those lofty visions, and how extensively the web has reoriented towards advertising.

But it’s more than that. The normal checks and balances for dominant corporations – competition laws – don’t apply here. You don’t pay for social networking, so it isn’t a market, so there is no competition law. I’ll come back to this later.

I’m doing a PhD looking at how the public sector can benefit from social media data. Corporations own datasets of unimaginable social value, and the only thing they want to do with them is sell them to advertisers. All the other potentially beneficial social roles that data could play – tracking diseases, informing policy consultation and strengthening communities, to mention just three – are getting harder to realise.

That’s not to say there aren’t amazing civic technology projects still happening, but they all happen under the looming shadow of Facebookification.

In denial, I clung to the belief that Facebook’s unbelievably massive user numbers were just not true. Looking for research on this, I discovered a paper containing a startling statistic: there are more Facebook users in Africa than there are people on the Internet. Exactly as I thought – Facebook are massively inflating their numbers. Except… further investigation showed that many survey respondents were unaware that they were on the Internet when they used Facebook. They didn’t know about the web; they only knew about Facebook. Research that I thought was going to confirm my world view did the exact opposite: mind… flipped. That was the first inflection point, when I started to feel that everything had gone wrong.

The second was trying to use the Instagram API for some research. For a long time I’ve been aware that the Facebook API is so hostile that I wouldn’t be able to use it. Facebook is such a complicated product, with such complex privacy settings, that perhaps it’s inevitable that its API is basically unusable. But Instagram is incredibly simple, and many people choose to make their photos public. To me, it’s absolutely natural that they would make public photos available via an API. But, since November 2015, Instagram’s API has been radically curtailed. All the apps that use it have to be reviewed, and there is an onerous list of conditions to comply with. To a first approximation, Instagram turned off their API.

Again, mind flipped. Facebook have purchased Instagram, and now they’ve strangled it as a source of data. They are a commercial company, and they can do what they like, but my mind boggles at the mean-spiritedness of shutting the API. The photos belong to the users, and the users have asked for them to be published. Third parties might well do amazing things with the photos – to the benefit of everyone including their creators. Instagram could allow that at very close to no cost to themselves. The traffic to the API is peanuts in server costs, and it’s simple to rate limit. Rate limiting also means you wouldn’t be giving away the large-scale analytics data you might want to sell. You can ban people from duplicating the Instagram app and depriving you of advertising revenue, just as Twitter have. The downsides to Instagram are tiny.
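To give a sense of how little machinery ‘simple to rate limit’ actually implies, here is a minimal token-bucket sketch – hypothetical names, nothing to do with Instagram’s real infrastructure – where each API key gets a small budget of requests that refills over time.

    // Token-bucket rate limiter: each client's bucket refills at a fixed rate
    // and every request spends one token; requests are refused when it's empty.
    class TokenBucket {
      private tokens: number;
      private lastRefill: number;

      constructor(private capacity: number, private refillPerSecond: number) {
        this.tokens = capacity;
        this.lastRefill = Date.now();
      }

      tryRemoveToken(): boolean {
        const now = Date.now();
        const elapsedSeconds = (now - this.lastRefill) / 1000;
        this.tokens = Math.min(this.capacity, this.tokens + elapsedSeconds * this.refillPerSecond);
        this.lastRefill = now;
        if (this.tokens >= 1) {
          this.tokens -= 1;
          return true;
        }
        return false;
      }
    }

    // One bucket per API key: e.g. 200 requests per hour per client.
    const buckets = new Map<string, TokenBucket>();

    function allowRequest(apiKey: string): boolean {
      if (!buckets.has(apiKey)) {
        buckets.set(apiKey, new TokenBucket(200, 200 / 3600));
      }
      return buckets.get(apiKey)!.tryRemoveToken();
    }

A cap like that keeps server costs trivial and stops anyone bulk-harvesting large-scale analytics data, while still letting third parties build useful things with photos users have chosen to publish.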

Not so long ago, the wisdom was that an API with a rich third-party ecosystem was the key to success. Twitter was the model, and it’s still wonderfully open (fingers crossed). Yahoo really got it – remember Yahoo Pipes? A graphical interface for playing with all the open APIs that used to exist; infrastructure for a gentler time.

The new players don’t care. Facebook has very successfully pioneered the opposite approach, where you put up barriers anywhere you can.

Neither of these two things is big news – they’re not the biggest stories on this topic by a long shot – but for whatever reason they were an epiphany for me. They made me realise that Facebook is in a unique position to take control of the web and drain it of its democratic potential.

I’m not in love with everything Google does, but, as a search engine, its interests could be seen as aligned with an open web. I don’t love Amazon’s dominance, but at least its marketplace makes a pretty transparent offer to users, just as Apple’s hardware business does. Facebook, which obviously competes with Google in the advertising market, has a strong interest in curtailing the open web. Facebook, as Mark Zuckerberg has explicitly said, would like to become the main place people go to read news, using Instant Articles rather than web pages, hidden away in Facebook’s walled garden. Increasingly, as the earlier evidence indicated, Facebook is the web.

But Facebook is different from the other big tech companies in another, much more important way. It is almost invulnerable to antitrust and competition regulations.  In the 1990s, Microsoft was in a massively dominant position in tech. In both Europe and the US, governments brought cases against MS, saying that they were exploiting their position to the detriment of consumers. The cases did serious damage to MS, and their dominant position slipped. Right now, the same thing is happening to Google’s dominance – the EU is bringing cases against them for their behaviour in relation to Android.

One reason that Apple always positions itself at the premium end of the market may be exactly to avoid gaining enough market share to qualify as a monopoly – instead it satisfies itself with high margins in a smaller segment.

But Facebook don’t actually sell anything to consumers, so they aren’t in a market, so no case can be brought against them. Sure, they are in the advertising market, and they are a big player, but only alongside Google and all the others.

Combined with Instagram and WhatsApp, Facebook is massively dominant in social networking. But social networking isn’t a market, because it’s free. Nor is Facebook a common carrier, nor are they a newspaper or a TV station, all of which have laws formulated specifically for them. For Facebook, there is no law.

I’d guess this is one of the reasons that Facebook is so clear it will never charge users – to do so would expose them to competition law.

Maybe it’s OK, because some guy in a dorm room or garage is right now working on a Facebook killer. Except they aren’t, because, as with Instagram and WhatsApp, Facebook will just buy anything that threatens it – and lock new purchases into its own closed infrastructure. Nothing is more lucrative than a monopoly, so the stock market will write a blank cheque for Facebook to reinforce its position.

The board of Facebook must spend a great deal of time thinking about what could go wrong. A massive data leak? Accidentally deleting everyone’s photos? Cyberbullying suicides becoming commonplace?

Surely competition laws aimed at the company are near the top of the risk register. What are they likely to be doing about that? They can do the normal revolving-door, expensive-dinner lobbying shenanigans, and I’m sure they are. But Facebook has a whole other level of leverage. The platform itself is profoundly political. They have detailed data about people’s voting intentions, advertising space that is highly desirable to political campaigns, the ability to influence people’s propensity to vote, and can use the massively influential Facebook Trending feature to promote whatever view they like. What politician wants to tangle with that kind of organisation?

If I was being cynical, I’d start to think about the Chan Zuckerberg Initiative. Facebook surely already has unimaginable access, but this organisation (not technically a charity) adds a halo of beneficence, a vehicle for the Zuckerberg point of view to embed itself even more deeply.

Why haven’t I mentioned Internet.org? It’s too depressing. I’ll write about that another day.

Not only is there no law for Facebook, but the democratic system for creating laws has incentives that mostly point in the wrong direction. You can construct all kinds of scenarios if you try hard enough. For me, the prospect of the mainstream web being controlled by a single corporation has moved from being a distant possibility to being a likely future. Let’s just hope things turn out more complicated – they usually do…