Over the last few days, we’ve seen Leave voters jubilant that Article 50 has been sent, the apparent point of no return. Leavers make optimistic predictions about Britannia once again ruling the waves, becoming a stronger independent nation than it would have been within the EU. If you voted Remain, then presumably you foresee mostly downsides arising from leaving the EU, and are accordingly pessimistic about the future.

Words are cheap — what if you asked someone to stake their job, or a large fraction of their salary, on their prediction being right? I suspect that many of Twitter’s most ardent proselytisers, from whichever side, would decline such an offer. But there are people whose job is to stake their reputations on predicting the economy, professional traders who take positions on the stock market. At least in theory, they have ‘skin in the game’ — it will be bad (expensive) for them if they get it wrong, so they are more likely to give a dispassionate opinion.

So far, the stock market is betting on optimism; that things are going to be good, economically speaking — for example the FTSE 250 is signalling this by mostly remaining stable or even moving upward. Some of the effect is to do with the sinking value of sterling, but it’s weird because, at the very least, you’d have to admit that we don’t really know what’s going to happen. No one has ever seen a Brexit before, and uncertainty is supposed to be what markets don’t like. But it’s not just uncertainty: most forecasts are negative. The Leave campaign’s own economic predictions are for a cooling economy during the Brexit process. Even if you think pessimistic economic predictions were politically motivated, most people agree there will be less growth in the near term than there otherwise would have been.

So why is the stock market going the opposite way? My theory is that large corporations are optimistic that the UK government, under the threat of the breakup of the country if the Brexit deal is perceived to have gone badly, will be forced to make epic concessions to lobbyists. Corporations will constantly leak internal reports about their desire to move a factory or their head office, or go on news programmes to explain that the Government doesn’t understand how to create jobs. Big finance firms in the City are always threatening to relocate if regulations do not favour them, resulting in the ‘socialised risk, privatised benefit’ we saw in the 2008 crash.

In the US, we can see the same thing happening with Trump, where stocks have also been heading upward. As Ruchir Sharma, chief global strategist at Morgan Stanley, has pointed out, if an unpredictable populist with no previous experience of government came to power in a developing country, the stock market in that country would certainly tank. He describes the US as ‘post democratic’, in the sense that the stock market believes that whatever happens at the ballot box, no president would dare upset it. Trump’s failure to explain his legislative ambitions, or his lack of any convincing means of passing laws, is irrelevant: the stock market knows that Trump is one of their own.

It’s ‘disaster capitalism’, where political change is perceived by the stock market as a state of flux that, always and everywhere, will eventually recrystallise national institutions in their favour.

Douglas Carswell, formerly UKIP’s only MP, is writing a book called ‘Rebel: How to Overthrow the Emerging Oligarchy’. I can agree there is an emerging oligarchy, and also that both Brexit and Trump should be construed as voters railing against it. But the stock market is telling us that far from an overthrow, what we are going to witness is just another opportunity for the vampire squid to jam its blood funnel into anything that smells like money, as Matt Taibbi puts it.

When people realise this, they are going to be pissed off. If you voted Brexit in anticipation of wage rises powered by reduced immigration, that is not going to happen. If you voted for the empire to come back, it’s not going to happen. In 2020, voters will have endured years of epic wrangling with the EU, all of which will have made no difference to their lives. They will find themselves in a country where low tariffs, low wages and low regulations are still the only policy prescription for the static or declining living standards most people endure.

Remainers worry about Brexit making the UK marginally poorer, or lament the lost freedom of travel. But what we should really worry about is the backlash when Brexit doesn’t deliver anything except an illusory warm glow of sovereignty and an orgy of corporate troughing. You may believe that Brexit or Trump are blows against the establishment, but the truest measure of elite sentiment is where the money is going, and right now the stock market disagrees with you.

In the spirit of thinking through our new political reality, which I already started here, I’ve been thinking about the electoral success of policies that promise violence – imprisonment or war, mainly.

In advertising, apparently, sex sells. In politics, it’s a liability. What sells in politics is violence. The promise to do something violent has an appeal that is powerful and ubiquitous.

In a classic criticism of democracy, Thucydides tells us that in 427 BC the crowd in Athens voted to kill every adult male on the rebellious island of Lesbos, only to realise subsequently that this was an unjust act of violence, spurred by the rhetoric of a demagogue. A second ship had to be dispatched with the new orders, and it only just arrived in time to prevent a massacre. In Orwell’s 1984, perpetual war was used as a mechanism to confer legitimacy on the dictatorship, an approach contemporary Russia has learned from.

We have the term khaki election to refer to this phenomenon. It was coined in 1900 to describe a UK election held in the context of the Second Boer War, where patriotic sentiment driven by the war is said to have helped the incumbent party win. More recently, we have Thatcher, whose prospects for reelection in 1983 looked dim until the Falklands War boosted her reputation – almost certainly changing the outcome in her favour. We might say the same of Bush’s second Gulf war. An aimless administration was transformed into a purposeful and successful one, the president on an aircraft carrier declaring victory just in time for the election. As it turned out, the invasion did not prove to be beneficial for US foreign policy, but it worked very well for Bush himself.

Internal violence can work the same way – for example the way promising increased incarceration has been a successful electoral tool in the UK and the US, despite falling crime levels and endless evidence that prison is expensive and ineffective.

Why should a threat to do violence be so persuasive?

Unlike, for example, the myth that the economy works like a household budget, I can’t see that the appeal of violence comes from ‘common sense’ or our everyday experience. Has any family dispute ever been satisfactorily resolved by violence? How many teenage kids have been coerced into good behaviour? Do employers seek those who are able to persuade and negotiate, or those who are aggressive and violent? In our own lives, we almost never witness violence, and even less so as a successful strategy. Perhaps it’s exactly this distance that allows us to be so blasé about drone strikes and regime change.

I can only think that the allure of violence is part of a broader sense-making activity. Most people have some problems in their lives, disappointments to rationalise. Acknowledging that our lives are shaped by blind luck or unintended consequences is not a good narrative; it does not help us understand why things are as they are. But the idea that an enemy within or without can cause these things does make sense – and the natural solution to the enemy is violence. Lock them up, bomb them – a convincing panacea.

In America, Obama’s failure to use the words ‘radical Islam’ became totemic on the right. It signalled a failure to adopt the appropriately bellicose rhetoric. Trump was even able to suggest that Obama was in league with ISIS because of his failure to use the properly aggressive language. What Obama was really doing was attempting to de-escalate the religious dimension of the conflict with the long-run goal of bringing peace. It’s a totally rational strategy, except that, for the above reasons, it’s also a terrible electoral strategy.

This is a conundrum for any political party that wants to pursue a rational level of violence, which is generally much below the level apparently advocated by voters. I am not aware of any solution to this problem – as the ancient Greek critics of democracy pointed out, it may just be the price you pay.

 

Most politicians – with the exception of the Lib Dems – have said that parliament should accept the result of the EU Referendum as the democratic will of the people.

This may be true for political or pragmatic reasons. Ethically, however, it’s far from obvious. If someone tells you Brexit is a moral necessity just because a vote has taken place, they are wrong. Obeying the vote requires a value judgement about the status of that vote, and the issue is much more complex than simply asserting that a vote has taken place.

If we were to go around disputing the status of every vote, democracy would be impossible. Here I will present the case that the Brexit vote is uniquely precarious: direct democracy about an irreversible and highly important decision, carried out in the context of asymmetric information. Specifically, polling data suggests many leave voters were expecting an outcome that not even the leave campaign itself thinks is possible.

There are good reasons to be wary of attempts to understand what voters ‘really’ wanted – analysis becomes a vessel for your own opinions. At the same time, we have to acknowledge that voters’ opinions can be shaped by the information they are receiving.

For these reasons, refusing to leave the EU would be an absolutely legitimate position for parliament.

In the national debate it seems to go almost unquestioned that simply going through a voting procedure automatically conveys unassailable democratic force to a decision. Not true: Russia, Zimbabwe and North Korea all have voting procedures, yet most people agree that they are unsatisfactory in various ways. I’m not comparing the UK to those countries, but making the philosophical point that you can stand in a booth, fill out a form, and still not be ‘doing democracy’.

For a vote to carry democratic force – for it to convey the ‘will of the people’ – most people think you have to do more than just count pieces of marked paper. I complained about two criteria that I felt were lacking in the EU referendum before the vote took place – that the electorate be representative, and that voters should be well informed.

We apparently can’t agree on the demographics of EU Referendum voters, but we do know participation was unusually high, so let’s set the issue of representativeness on one side.

The electorate were not well informed; in fact they were actively misled about what leaving the EU would mean. This is true to some extent in every election, but here I will make the argument that the misinformation was both asymmetric and effective in changing voters’ views.

I’m also not claiming leave voters are stupid, or that they do whatever Rupert Murdoch says. I am not claiming that everyone who voted leave was misled. I am not claiming that voters would have voted remain with better access to information.

I am claiming that we do not know how voters would have behaved with better access to information, and that information in the EU referendum was unusually low quality.

This is a difficult empirical point to prove. We cannot observe how voters would have behaved in other circumstances. What we can do is build an empirical case that voters held beliefs that can reasonably be expected to influence voting behaviour, and that those beliefs are a result of systemic misinformation.

We can see from YouGov’s polls that many people believed that leaving the EU would make no difference to, or improve, the economy. In the last poll, which closed on 19 June, 46% of respondents thought there would be no economic impact, while 9% thought the economy would improve. These views, unsurprisingly, correlate with the intention to vote leave. 18% of those intending to vote leave thought leaving would improve the economy, and 66% thought it would make no difference.

This is at stark variance with predictions. The Leave campaign’s economist, Andrew Lilico, produced forecasts suggesting there would be a short-term economic hit, but predicted that by 2030 the economy would have returned to normal. This prediction is more optimistic than almost any other, whether from a private company, the Treasury or international organisations such as the OECD. If voters were aware that the most optimistic case was a short-term recession, followed by a possible return to normal growth in 15 years’ time, rather than believing there would be no difference or an improvement, how would they have voted? We do not know.

This in turn bears on Leave’s promise to have extra money to spend on the NHS. A post-Brexit government can choose to spend more money on the NHS, but it will not be doing so using ‘spare money’ created by Brexit – certainly not until 2030.

We now live in a future where Brexit seems imminent, and the prediction of a short-term slowdown appears to be coming true, with Mark Carney confirming these effects both verbally and by making £250bn available to support the economy.

In the same poll, 54% of respondents believed that Brexit would reduce immigration. Again, this correlates with intention to leave, with fully 85% of leave voters believing immigration would decrease. And again, this is at odds with the predictions of all sides. Leave’s economic model relies on immigration remaining roughly the same (Andrew Lilico again), and Leave campaigner Dan Hannan notably confirmed that immigration would remain broadly similar after Brexit. How would voters have behaved had they known this? Again, we do not know.

I’m not claiming access to an objective reality about what will happen in the case of Brexit, instead I’m asserting that leave voters did not understand the position of the Leave campaign itself. Given that the Leave campaign is likely to have been over optimistic about what it can deliver, the reality of Brexit is likely to be even less satisfactory to leave voters.

We know that a typical leave voter thought that the economy would remain the same or improve while immigration would be reduced. But we do not know if these were factors that caused them to vote leave, or merely incidental. However, if we look at polls of issues that matter to voters, we see that immigration, the NHS, the EU and the economy are the top four issues. The average leave voter held unrealistic expectations about all of these, so it is reasonable to assume that some voters chose leave on the basis of these issues.

Where did this bad information come from? How can voters have come to believe a case for Brexit even more optimistic than the Leave campaign’s own? Newspaper coverage, which traditionally leans right in the UK, was strongly skewed towards Brexit. Weighted by number of readers, newspaper coverage was about 80% in favour of leave, even while the country as a whole was almost perfectly split. Meanwhile, the broadcast media were scrupulously balanced.

Article 50 has not yet been sent. The electorate now has a genuine opportunity to understand Brexit’s implications for the economy and immigration. If opinion polls show a significant shift in the light of this new information, that shift should be allowed to influence MPs’ views; they should not feel bound by the referendum. The referendum did not convey an unassailable mandate based on the will of the people.

Edit: Reading Vernon Bogdanor I find myself slightly convinced by an idea similar to rule utilitarianism. Perhaps you can’t worry about achieving ‘actual democracy’ in every vote; instead you have to set up the institution of voting and honour it regardless of the nuances of each referendum or election. Perhaps the damage to public trust is not worth the improvement in decision making.

We have so many aspirations for big data and evidence based policy, but apparently a fatally limited capacity to see the obvious: voters were furious about immigration and the EU. Techniques exist to build better empirical evidence regarding issues that matter to citizens; we should use them or risk a repeat of the referendum.   

Commentators from all over the spectrum believe that the leave vote represents not (only) a desire to leave the EU, but also the release of a tidal wave of pent-up anger. That anger is often presumed to be partly explained by stagnating living standards for large parts of the population. As the first audience question on the BBC’s Question Time asked the panel: “Project Fear has failed, the peasants have revolted, after decades of ignoring the working class how does it feel to be punched on the nose?”. The Daily Mail’s victorious front page said the “Quiet people of Britain rose up against an arrogant, out-of-touch, political class”. The message is not subtle.

Amazingly, until the vote, no one seemed to have known anything: markets and betting odds all suggested remain would win. Politicians, even those on the side of Leave, thought Brexit was unlikely. The man bankrolling the Brexit campaign lost a fortune betting that it wouldn’t actually happen (the only good news I’ve seen in days). Niall Ferguson was allegedly paid $500,000 to predict that the UK would remain.

This state of ignorance contrasts radically with what we do know about the country. We know, in finicky detail, the income of every person and company. We measure changes in price levels, productivity, house prices, interest rates, and employment. Detailed demographic and health data are available – we have a good idea of what people eat, how long they sleep for, where they shop, we even have detailed evidence about people’s sex lives.

Yet there seems to have been very little awareness of (or weight attached to) what the UK population itself was openly saying in large numbers.

Part of the reason must be that the government didn’t want to hear. Post crisis everything was refracted through the prism of TINA – There Is No Alternative. There was no money for anything, so why even think about it? Well, now we have an alternative.

The traditional method for registering frustration is obviously to vote – a channel which was jammed in the last election. Millions of people voted UKIP, or for the Green Party, and got one MP apiece: no influence for either point of view. A more proportional voting system is one well known idea, and I think an excellent one, but there are lots of other possibilities too.

What if there was a more structured way to report on citizens’ frustrations on a rolling basis? An Office for Budget Responsibility, but for national sentiment – preparing both statistical and qualitative reports that act as a radar for public anger. It would have to go beyond the existing ‘issue tracking’ polling to provide something more comprehensive and persuasive. Perhaps the data could be publicly announced with the same fanfare as quarterly GDP.
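As a rough illustration of what such a sentiment radar might look like in practice, here is a minimal sketch in Python. It assumes a hypothetical table of monthly issue-tracking poll shares (the column names and figures are invented), smooths each series, and flags issues whose salience is rising unusually fast.

```python
# Minimal sketch: a rolling 'public concern' radar built from hypothetical
# monthly issue-tracking poll data (share of respondents naming each issue
# as one of the most important facing the country). Column names, figures
# and the alert threshold are illustrative assumptions, not real data.
import pandas as pd

polls = pd.DataFrame({
    "month": pd.date_range("2015-01-01", periods=6, freq="MS"),
    "immigration": [0.38, 0.40, 0.43, 0.45, 0.48, 0.52],
    "nhs": [0.30, 0.31, 0.29, 0.32, 0.33, 0.31],
    "economy": [0.35, 0.33, 0.32, 0.30, 0.29, 0.28],
}).set_index("month")

# Smooth each series, then measure how much salience has risen over a quarter.
smoothed = polls.rolling(window=3, min_periods=1).mean()
momentum = smoothed.diff(3)

alert_threshold = 0.05  # a five-point rise in a quarter (arbitrary choice)
alerts = momentum.iloc[-1][momentum.iloc[-1] > alert_threshold]
print("Issues with rapidly rising salience:")
print(alerts)
```

The point is not the arithmetic, which is trivial, but the institutional commitment to publishing it regularly and prominently.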

Consultative processes at the local level are much more advanced than at the national level. Here is some of the current thinking on the best ways to build a national ‘anger radar’, drawing on methods widely used at the local level.

Any such process faces the problem of ‘strategic behaviour’. If someone asks you your opinion on immigration, you might be tempted to pretend you are absolutely furious about it, even if you are only mildly piqued by the topic. Giving extreme answers might seem like the best way to advocate for the change you want to see. Such extreme responses could mask authentically important signals. Asking respondents to rank options in order, or to assign monetary values to outcomes, are classic ways to help mitigate strategic behaviour.
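To make the ranking idea concrete, here is a minimal sketch (in Python, with invented issues and responses) of aggregating ranked answers with a Borda count. Because every respondent hands out the same fixed number of ranking points, exaggerating your fury about one issue buys you nothing extra.

```python
# Minimal sketch of why rankings blunt strategic exaggeration: with a rating
# scale a single respondent can pile arbitrary intensity onto one issue,
# whereas a ranking gives every respondent the same fixed 'budget' of points.
# The issues and responses below are made up for illustration.
from collections import defaultdict

# Each respondent ranks issues from most to least important.
rankings = [
    ["immigration", "nhs", "economy", "housing"],
    ["nhs", "economy", "housing", "immigration"],
    ["immigration", "economy", "nhs", "housing"],
]

def borda(rankings):
    """Borda count: top rank gets n-1 points, next gets n-2, ... last gets 0."""
    scores = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for position, issue in enumerate(ranking):
            scores[issue] += n - 1 - position
    return dict(sorted(scores.items(), key=lambda kv: -kv[1]))

print(borda(rankings))
# However furious a respondent claims to be, they can contribute at most
# n-1 points to any single issue.
```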

Strategic behaviour can also be avoided by looking at actions that are hard to fake. Economists refer to these as ‘revealed’ preferences – often revealed by the act of spending money on something. It’s awful to think about, but house prices might encode public opinions on immigration. If house prices are lower in areas of high immigration, that might reveal the extent to which citizens truly find it to be an issue. Any such analysis would have to use well established techniques for removing confounding factors, for example accounting for the fact that immigration might disproportionately flow to areas with lower house prices anyway. This approach might not be relevant for the issues in the EU referendum, but might be important for other national policies. Do people pay more for a house that falls in the catchment of an academy school, for example? (More technical detail on all these approaches.)
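A hedged sketch of what removing confounders might look like, using synthetic area-level data (all variable names, figures and effect sizes are invented): a regression of log house prices on local migrant share, conditioning on a couple of other area characteristics. This is purely illustrative; a serious analysis would need real data, richer controls and a credible identification strategy before saying anything causal.

```python
# Illustrative 'revealed preference' regression on invented area-level data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500  # hypothetical local areas

areas = pd.DataFrame({
    "migrant_share": rng.uniform(0, 0.3, n),
    "median_income": rng.normal(30_000, 5_000, n),
    "distance_to_city_km": rng.uniform(1, 50, n),
})
# Synthetic prices: driven here by income and location, not migrant share.
areas["log_house_price"] = (
    10
    + 0.00002 * areas["median_income"]
    - 0.01 * areas["distance_to_city_km"]
    + rng.normal(0, 0.1, n)
)

# Conditioning on income and distance is the 'removing confounders' step:
# the migrant_share coefficient is its association with prices *after*
# netting out those controls.
model = smf.ols(
    "log_house_price ~ migrant_share + median_income + distance_to_city_km",
    data=areas,
).fit()
print(model.params)
```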

Social media is another source of data. Is the public discourse, as measured on Twitter or Facebook (if they allowed access to the data), increasingly mentioning immigration? What is the sentiment expressed in those discussions? Certainly a crude measure, but perhaps part of a wider analysis – and ultimately no cruder than the methods used to estimate inflation.
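For illustration, here is a toy version of that crude measure, with invented posts and a tiny keyword lexicon standing in for a real sentiment model and real platform access:

```python
# Toy sketch: share of posts mentioning a topic, plus a naive lexicon-based
# sentiment score. Posts and word lists are invented for illustration.
posts = [
    "Immigration is out of control and nobody listens",
    "Lovely day at the park with the kids",
    "Proud of how welcoming this city is to immigrants",
    "The NHS waiting lists are a disgrace",
]

KEYWORDS = {"immigration", "immigrants", "migrants"}
POSITIVE = {"proud", "welcoming", "lovely"}
NEGATIVE = {"disgrace", "out of control", "nobody listens"}

def mentions_topic(post):
    text = post.lower()
    return any(k in text for k in KEYWORDS)

def crude_sentiment(post):
    text = post.lower()
    return sum(p in text for p in POSITIVE) - sum(n in text for n in NEGATIVE)

topic_posts = [p for p in posts if mentions_topic(p)]
share = len(topic_posts) / len(posts)
avg_sentiment = sum(crude_sentiment(p) for p in topic_posts) / len(topic_posts)

print(f"Share of posts mentioning immigration: {share:.0%}")
print(f"Average (very crude) sentiment of those posts: {avg_sentiment:+.2f}")
```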

All these approaches are valuable because they tell us about ‘raw’ sentiment – what people believe before they are given a space to reflectively consider. ‘Raw’ views are important since they are the ones that determine how people will act, for example at a referendum.

But that is not enough on its own. As discussed in a previous post, good policy will also be informed by a knowledge of what people want when they have thought more deeply and have information that allows them to act in their own best interests. These kinds of views could be elicited using processes such as the RSA’s recently announced Citizens’ Economic Council, where 50-60 (presumably representative) citizens will be given time and resources to help them think deeply about economic issues of the day, and subsequently give their views to policy makers.

Delib, a company that provides digital democracy software, offers a budget simulator which achieves a similar goal. The affordances of the interface mean that users have to allocate a fixed budget between different options using sliders. In the process of providing a view, users intrinsically become aware of the various compromises that must be made, and deliver a more informed decision.
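The core of such a simulator is just a hard budget constraint. Here is a minimal sketch (budget lines and figures invented; this is not Delib’s actual implementation):

```python
# Minimal sketch of the fixed-budget constraint a slider interface enforces:
# you cannot give more to one service without taking it from another.
# Budget lines and figures are invented for illustration.
BUDGET = 100  # total points (or £m) available to allocate

def validate(allocation, budget=BUDGET):
    """Reject allocations that are negative or that don't sum to the budget."""
    if any(v < 0 for v in allocation.values()):
        raise ValueError("Allocations cannot be negative")
    total = sum(allocation.values())
    if abs(total - budget) > 1e-9:
        raise ValueError(f"Allocation sums to {total}, must equal {budget}")
    return allocation

# A respondent who wants more for social care must say where it comes from.
choice = validate({"nhs": 40, "schools": 25, "social_care": 20, "roads": 15})
print("Valid allocation:", choice)
```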

We live in a society where more data is available about citizens’ behaviour than ever before. As is widely discussed, that represents a privacy challenge that is still being understood. The same data represents an opportunity for governments to be responsive in new ways. Did the intelligence services know which way the vote would go using their clandestine monitoring of our private communications? Who knows.

We cannot predict everything – famously, a single Tunisian street vendor’s protest set off the whole of the Arab Spring. But we can see the contexts that make that kind of volatility possible, and I believe the anti-immigration context could easily have been detected in the run up to the referendum.

There is no longer any reason for a referendum about the EU to become a channel for anger about tangentially related issues. The political class would not have been ‘punched on the nose’ if they were a little better at listening.

Hat tip: thanks to the Delib Twitter account, which has been keeping track of the conversation about new kinds of democracy post-Brexit, and which I’ve drawn on for this post.

There seem to be three possible ways forward from the current position, all of which are absolutely disastrous for democracy. I have no idea which of these is more likely, all of them are very bad, and all of them represent a betrayal of voters – especially those that voted to leave.

Leaving the EU and the single market is the simplest proposition – in terms of democracy it would allow a government to deliver on the key pledges of immigration controls and bringing law making back to Westminster. However, the extreme financial situation the UK would likely find itself in would certainly make £15bn of extra investment in the NHS impossible. The costs to jobs and wages would be appalling, ‘Britain’s service economy would be cut up like an old car’, and the nation would be in deep economic shock.

Ignoring the referendum (unless there is another general election) would obviously be an enormous affront to democracy, and the tabloid newspapers would howl with rage. The unexpectedly large constituency who voted leave, who already believe they are ignored and forgotten, would rightly be incensed. Such an option may easily lead to the rise of extremist parties.

The UK remains in the single market but out of the EU — the Norway option, the middle ground. Norway pays an enormous monetary price for access to the single market, if the UK got a similar deal there would not be spare cash to spend on the NHS. Norway accepts free movement of people, breaking the Leave campaign’s promise of border controls. Finally, Norway obeys many of the EU’s laws in order to gain access to the single market, and has no say during the process of EU legislation – which is difficult to square with Leave’s ‘taking back control’ motto.

The UK will not get an exact copy of the Norway deal. Perhaps a better deal can be struck? Someone, presumably Boris, would have to achieve a heroic feat of negotiation. He does not start from a good position: on a personal level, he has been lambasting the EU for months, even comparing the organisation to the Nazis. Many European leaders fear that a good deal for Britain would encourage discontent in their own countries, and may want to make an example of the UK. Watching David Cameron’s resignation speech must have had a visceral effect on other European leaders.

According to the rules of the Article 50 process, the UK will not be in the room for exit negotiations; results would be presented as a fait accompli to the UK, and if we don’t find agreement after two years, we’ll be automatically ejected. The single market option has been explicitly ruled out by several leading European politicians, so it looks set to be an uphill battle. Just in case it wasn’t hard enough, Scotland could leave, or threaten to leave, the UK during the negotiations – possibly to join the EU, maybe even the Euro.

It looks as though Boris hopes to find some combination of the Norway deal that keeps watered-down versions of his promises, probably mostly achieved through obfuscation. His Telegraph column sets out an impossible wishlist of access to the single market, border controls and savings in EU contributions which he will certainly never deliver.

This is, I believe, the most dysfunctional example of democracy of all three options. The electorate have been sold an impossible dream of ‘taking control’, lowered immigration and a windfall of savings in EU contributions. Under the Norway option, it will not be clear that any of these has been delivered.

We all know that political parties renege on their manifesto promises, but the Leave campaign set a new low. Within 48 hours of the result they had explicitly denied they felt at all bound to deliver on lower immigration or increased NHS spending. The audacity is comedic: there are pictures of all the leading Leave campaigners standing in front of campaign buses emblazoned with huge slogans which they now claim mean nothing. Perhaps they believe technicalities about which leave campaign said what, or whether their slogans were commitments or more like ‘serving suggestions’, will save them. They should consider what happened to the Lib Dems when they (quite reasonably) blamed their broken tuition fee pledge on the coalition.

Before the referendum, no one had realised how much anger was directed at the political classes. After the referendum, there are only reasons for that anger to grow. In Norway-style scenarios Leave voters will only get the palest imitations of the policies they believe they voted for, but at a terrible, terrible cost. Leaving the EU might cause a recession, and will certainly cost jobs. Then there are the tens, possibly hundreds of billions of pounds in foregone GDP. All Government policy of any kind will be on hold for years as we renegotiate. The cost of Government borrowing could spiral. Scientific and medical research will be disrupted and damaged. UK citizens will find travelling and working in the EU harder.

Most importantly, many Leave voters, already from poor areas, will be in even worse poverty. Boris’s stall, as he set it out in the Telegraph, is about throwing off the ‘job destroying coils of EU bureaucracy’. The idea that removing workers rights is going to play a big part in reducing inequality is a fairy tale.  Leave voters are almost certain to see things getting worse not better, even if they are temporarily satisfied to have ‘taken back control’.

For a country that everyone recognises is divided and wounded, all of the routes forward point to ever more poverty, pain and division.

 

 

 

Most democratic countries use representative democracy – you vote for someone who makes decisions on your behalf (in the UK’s case, your MP). The EU referendum is different: it’s an example of direct democracy. Bypassing their representatives, every citizen who is eligible to vote will be asked to make a decision themselves.

The referendum has this feature in common with most participatory design processes (by PD I mean including end users in the process of designing a product or service). PD is normally carried out with the stakeholders themselves, not representatives of them. You could think of the referendum as a participatory design process, designing a particular part of the UK’s economic and foreign policy.

The EU referendum fails as a participatory design process in two important ways. Firstly, most of the participants are deeply ill informed about the issues at hand, and under these circumstances it will be impossible for them to act in their own best interests. The consequences of their design decision may well run counter to their expectations.

An IPSOS MORI survey shows that on average UK voters believe that 15% of the population are EU migrants, when in fact only 5% are. On provocative issues such as the percentage of child benefit that is paid to children living in Europe, the overestimates are wilder still: the true figure is about 0.3%, yet 1 in 4 respondents estimated more than 24% – an overestimate of roughly eighty times.

Richard Dawkins has noted that very few people know all the relevant details to cast a vote, and laments the bizarre logic often used in discussions. He recommends voting ‘remain’ in line with a ‘precautionary principle’, and offers the following quote to illustrate the level of debate on TV:

“Well, it isn’t called Great Britain for nothing, is it? I’m voting for our historic greatness.”

Of course, it’s a question of degree. It would be unreasonable to suggest only a tiny number of world-leading experts can voice meaningful opinions. But there does seem to be a problem when decision makers are systemically wrong about the basic facts.

The second way the EU referendum fails is that the participants do not reflect the makeup of the country as a whole. Much of the speculation on the outcome focuses on turnout – which age groups and social classes will make the effort to cast a vote. Yet it hardly seems fair that such an important decision will be taken by a self-selecting group. Criticism of participatory design projects often rightly centres on the demographic profile of the participants, especially when more vocal or proactive groups override others. If young people were more inclined to vote, the chances of a remain result would increase dramatically. If people with lower incomes were more likely to vote, it would boost leave. I take this to be a serious problem in the voting mechanism.

These are difficult problems to solve. How can a participatory process have well informed participants and accurately reflect the demographics of country, while offering everyone the chance to vote?

Harry Farmer has suggested that the rising number of referendums in the UK tells us we need to reform the way we do representative democracy, rather than resorting to bypassing it. Representatives have the time and resources to become well informed on issues, so they would in theory make better decisions. However, this does nothing to address the issue of turnout – MPs are themselves selected by voters who are disproportionately well off and older. MPs themselves are very far from reflecting the demographics of the UK as a whole.

Two more radical solutions have been put forward by Stanford Professor James Fishkin. In his ‘deliberation day’ model, the whole country would be given the day off to learn about, discuss, and vote on a topic, perhaps on an annual basis. Participation would be encouraged with a $150 incentive. The advantage is that (almost) everyone is included, and that the incentive ought to be enough to ensure most demographics are well represented. The participants would also be well informed, having been given the day to think deeply in a structured way. However, it’s clearly a massive logistical and political challenge to implement ‘deliberation day’.

Fishkin’s other suggestion is to throw over inclusion – the attempt to allow everyone to get involved – and instead use ‘deliberative democracy’. In this scenario, a sample of the population, chosen to reflect the demographic makeup of the country as a whole, come together for a weekend, to discuss and learn about an issue before casting votes. This gives us well informed participants who are demographically reflective of the country as a whole. The model is roughly similar to jury service. The drawback is that some people may find it unfair to have a small, unelected group make a decision that affects everyone.
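Mechanically, assembling such a mini-public is a sampling problem. Here is a minimal sketch of stratified random sampling, using an invented population frame and a single demographic variable for simplicity (a real exercise would stratify on several characteristics at once):

```python
# Minimal sketch of drawing a demographically reflective mini-public:
# stratified random sampling so each age group's share of the panel
# matches its share of the population. The frame is invented.
import random

random.seed(42)

# Hypothetical sampling frame: (person_id, age_group)
population = [(i, random.choice(["18-29", "30-49", "50-64", "65+"]))
              for i in range(10_000)]

PANEL_SIZE = 60

def stratified_sample(population, panel_size):
    by_stratum = {}
    for person, stratum in population:
        by_stratum.setdefault(stratum, []).append(person)
    panel = []
    for stratum, members in by_stratum.items():
        # Allocate seats in proportion to the stratum's population share
        # (rounding means the final panel can be off by a seat or two).
        seats = round(panel_size * len(members) / len(population))
        panel.extend(random.sample(members, seats))
    return panel

panel = stratified_sample(population, PANEL_SIZE)
print(f"Panel of {len(panel)} drawn from {len(population)} people")
```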

Making participation freely open to all stakeholders while ensuring that the participants are well informed and demographically representative is difficult in any participatory design process. Some may feel that the opportunity to participate is enough, and that if the young, or the less well off, decide not to vote that’s up to them.

However, voters having incorrect beliefs about the basic facts seems to me to point to a fundamentally broken process, where any decisions made are unlikely to turn out well. In classic participatory design projects, approaches such as prototyping, iteration and workshopping can help participants improve their understanding of the situation and empower them to make decisions in their own interests.

Are there similar approaches we could take to improve national decision making? Perhaps in the UK we could look at the structure of the press, and ask if having a tiny number of extremely rich newspaper proprietors holding sway over public opinion isn’t perhaps a serious problem for a country pretending to be a democracy.


StoryMap is a project that I worked on with Rift theatre company, Peter Thomas from Middlesex University and Angus Main, who is now at RCA, and Ben Koslowski, who led the project. Oliver Smith took care of the tech side of things.

The challenge was very specific, but the outcome was an interface that could work in a variety of public spaces.

We were looking to develop an artefact that could pull together all of the aspects of Rift’s Shakespeare in Shoreditch festival, including four plays in four separate locations over 10 days, the central hub venue where audiences arrived, and the Rude Mechanicals: a roving troupe of actors who put on impromptu plays around Hackney in the weeks leading up to the main event.

We wanted something in the hub venue which gave a sense of geography to proceedings. In the 2014 Shakespeare in Shoreditch festival the audience were encouraged to contribute to a book of 1000 plays (which the Rude Mechanicals used this year for their roving performances). We felt the 2016 version ought to include a way for the audience to contribute too.

The solution we ended up with was a digital/physical hybrid map, with some unusual affordances. We had a large table with a map of Hackney and surroundings (reimagined as an island) routed into the surface.


We projected a grid onto the table top. Each grid square could have a ‘story’ associated with it. Squares with stories appeared white. Some of the stories were from the Twitter feed of the Rude Mechanicals, so from day one the grid was partially populated. Some of them were added by the audience.

You could read the stories using a console. Two dials allowed users to move a red cursor square around the grid. When it was on a square with a story, that story would appear on a screen in the console.


If there was no story on the square, participants could add one. We had sheets of paper with prompts written on them, which you could feed into a typewriter and tap out a response. Once you’d written your story, you put it in a slot in the console and scanned it with the red button. (Example – Prompt: ‘Have you been on a memorable date in Hackney?’ Response: ‘I’m on one now!’)
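The installation itself was built as a web app (the real thing used Meteor, as noted at the end of this post), but the underlying model is simple enough to sketch in a few lines. The toy version below, with invented grid dimensions and function names, shows the essentials: a grid keyed by coordinates, a cursor driven by the two dials, and stories read from or written to the square under the cursor.

```python
# Toy sketch of the StoryMap data model (illustrative only; not the actual
# Meteor implementation). Grid size and names are invented.
GRID_WIDTH, GRID_HEIGHT = 24, 16

stories = {}      # (col, row) -> story text; squares with stories appear white
cursor = [0, 0]   # the red square controlled by the two dials

def move_cursor(d_col, d_row):
    """Each dial nudges the cursor along one axis, wrapping at the map edges."""
    cursor[0] = (cursor[0] + d_col) % GRID_WIDTH
    cursor[1] = (cursor[1] + d_row) % GRID_HEIGHT

def read_story():
    """Shown on the console screen when the cursor sits on a populated square."""
    return stories.get(tuple(cursor), "No story here yet - add one!")

def submit_story(text):
    """Called after a typed sheet is scanned at the console."""
    stories[tuple(cursor)] = text

submit_story("Prompt: a memorable date in Hackney? Response: I'm on one now!")
move_cursor(1, 0)
print(read_story())   # an empty square
move_cursor(-1, 0)
print(read_story())   # the story we just added
```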

Nearly 300 stories were submitted over 10 days. Even though they were really difficult to use, people loved the typewriters as an input method. Speaking from my own perspective, I found an input method that legitimised spelling mistakes and typos less intimidating.

There were two modes of interaction – firstly, through the table based projection, which allowed a conversational, collective and discursive understanding of what had already been submitted.  Secondly, there was a more individual process of reading specific stories and adding your own story using the screen in the console. The second mode still relied on the projection, because you needed to move your cursor to find or submit a story.

The resolution of the projection was too low (because of the size of the table) for fonts or details to be rendered well. From this perspective, the map routed into the table really worked; it increased the ‘bandwidth’ of the information the table could convey, fine lines and small text worked well (which gave us a chance to play around with whimsically renaming bits of Hackney).

Having a way to convey spatialised data on a table where people can get round it and discuss it, combined with a (potentially private) way to add detail might work in a number of scenarios. Could it be a tool for planning consultation? A way to explore data spatialised in some other way, eg. a political spectrum or along a time line? Perhaps in a museum context?

The whole thing was developed as a web app, so it’s easy to extend across more screens, or perhaps to add mobile interaction. It’s opened my eyes to the fact that, despite all the noise around open data, there are relatively few ways to explore digital information in a collective, public way. The data is shared, but the exploration is always individual.  More to follow…

(I did a quick technical talk on how we delivered StoryMap for Meteor London, slides here.)

The BBC is to remove recipes from its website, responding to pressure from the Government. It will also remove a number of other web-only services. The news is symbolic of a larger issue, and the outcome of a much longer story. It’s a signal that the current government will actively reduce public sector activity on the web for fear of upsetting or displacing the private sector. This is not just a feature of the current Conservative government; the Blair administration treated the BBC in the same way. The idea is that by reducing the public sector a thousand commercial flowers will bloom, that competition will drive variety and quality, and that a vibrant commercial digital sector will create high skill jobs. Never mind that the web is already controlled by a handful of giant US monopolies, mostly employing people thousands of miles away. Ideology trumps pragmatism.

In the specific case of the BBC, the Government has won. The BBC’s entire digital presence is dependent on its TV and radio operations. iPlayer can only exist because the BBC makes TV and radio shows; the news website relies on the newsgathering operation it inherits from TV and radio. TV (and possibly radio) are destined to have fewer viewers and listeners as we increasingly turn to digital. So, as licence fee payers disappear, the output will shrink and decline in quality, the BBC’s presence in the national debate will diminish, and its ability to argue for funding will be weakened. When it comes time to switch funding from a licence fee for owning a television to a system that works on broadband connections, the BBC will already have lost. An outmoded organisation that has failed to adapt, a footnote rather than a source of national pride.

Put simply, the BBC has failed to make the case that it should exist in a digital era. Instead it’s chosen to remain a broadcast operation that happens to put some of its content on a website. When TV finally dies, the BBC could be left in a position similar to NPR in the US: of interest to a minority of left-wing intellectuals, dwarfed by bombastic, polarising media channels owned by two or three billionaires. That’s why it was so critical that the BBC make a web offer separate from TV – but it hasn’t. The Government has been extremely successful at making the BBC embrace the principle that all web output must be linked to TV or radio, which is why, for example, the BBC will be reducing commissions specifically for iPlayer too, and closing its online magazine.

The story has been evolving for a long time. I was working on the BBC’s website in 2009. It had just been through a multi-year Public Value Test to prove to the Board that it wasn’t being anti-competitive by providing video content online; at least the public were allowed iPlayer in the end. BBC Jam, a £150 million digital educational platform to support the national curriculum, was cancelled in 2007 because of competition law. Don’t forget, at that point they’d already built most of it. Millions of pounds of educational material were thrown in the bin because it would be ‘anti-competitive’. Of course, no commercial alternative has ever been built.

When I arrived there was endless talk of restructuring, and optimism that we’d get a clear set of rules dictating what projects would not be considered anti-competitive. It never came. The project I worked on, about mass participation science experiments, was cancelled, I presume because it wasn’t directly connected to a TV program. All kinds of other digital offers were closed. H2G2, which pre-dated, and could (maybe?) have become, Wikipedia, was shuttered. The Celebdaq revamp was another proposition which was entirely built and then cancelled before it ever went live.

The BBC will now offer recipes that are shown on TV programs, but only for 30 days afterwards. That’s how hysterical the desire to prevent public service on the web has become: you can create content, at considerable cost, but not leave it on the web, which would cost virtually nothing.

The BBC has focused its digital R&D budget on its gigantic archive, looking at new ways of searching, ordering and displaying the millions of hours of audio and video it has collected. This is a weird decision, because it is all but certain that the BBC will never get copyright clearance to make public anything but the tiniest fraction of that archive. I speculate that the reason it has done this is that it saves the management from having to worry about a competitive analysis. Projects that can never go public don’t pose a problem.

If we shift our focus from the BBC to society as a whole, it’s disappointing to see how we’ve abandoned the notion of digital public space. The web has opened up a whole new realm for creativity, interaction, education and debate. As a society we’ve decided that almost nothing in that realm should be publicly provided – which is absolutely perverse, because the web intrinsically lends itself to what economists would think of as public goods.

Look across the activities of the state and you’ll see that none has a significant presence in the digital realm. We think the state should provide education – but it does nothing online. Local governments provide public spaces, from parks to town halls – but never online. We think the state should provide libraries – but never online. We love the state broadcaster, but we’re not allowed it online. We expect the state to provide healthcare – but the NHS offers only a rudimentary and fragmentary online presence. You can apply the formula to any sector of government activity. Want career guidance? Not online. Want to know how to make a shepherd’s pie? Better hope it appeared on a TV cooking show in the last 30 days.

Sometimes some new scrap of information strings a link between two previously disconnected neurons, your cortex reconfigures, and a whole constellation of thoughts snaps together in a new way. That’s happened to me recently: I’ve realised something that other people realised a lot quicker than me – Facebook is eating the web. The original John Perry Barlow / Tim Berners-Lee / Jimmy Wales vision of a digital space everyone owned is dying. It’s sometimes easy to forget how recently we had lofty visions, and how extensively the web has reoriented towards advertising.

But it’s more than that. The normal checks and balances for dominant corporations – competition laws – don’t apply here. You don’t pay for social networking, so it isn’t a market, so there is no competition law. I’ll come back to this later.

I’m doing a PhD looking at how the public sector can benefit from social media data. Corporations own datasets of unimaginable social value, and the only thing they want to do with them is sell them to advertisers. All the other potentially beneficial social roles of that data – tracking diseases, policy consultation and strengthening communities, to mention just three – are getting harder to realise.

That’s not to say there aren’t amazing civic technology projects still happening, but they all happen under the looming shadow of Facebookification.

In denial, I clung to the belief that Facebook’s unbelievably massive user numbers were just not true. Looking for research on this, I discovered a paper which contained a startling statistic: there are more Facebook users in Africa than there are people on the Internet there. Exactly as I thought – Facebook are massively inflating their numbers. Except… further investigation showed that many survey respondents were unaware that they were on the Internet when they used Facebook. They didn’t know about the web, they only knew about Facebook. Research that I thought was going to confirm my world view did the exact opposite: mind… flipped. That was the first inflection point, when I started to feel that everything had gone wrong.

The second was trying to use the Instagram API for some research. For a long time I’ve been aware that the Facebook API is so hostile that I wouldn’t be able to use it. Facebook is such a complicated product, with such complex privacy settings, that perhaps it’s inevitable that its API is basically unusable. But Instagram is incredibly simple, and many people choose to make their photos public. To me, it’s absolutely natural that they would make public photos available via an API. But, since November 2015, Instagram’s API has been radically curtailed. All the apps that use it have to be reviewed, and there is an onerous list of conditions to comply with. To a first approximation, Instagram turned off their API.

Again, mind flipped. Facebook have purchased Instagram, and now they’ve strangled it as a source of data. They are a commercial company, and they can do what they like, but my mind boggles at the mean-spiritedness of shutting the API. The photos belong to the users, and the users have asked for them to be published. Third parties might well do amazing things with the photos – to the benefit of everyone, including their creators. Instagram could allow that at very close to no cost to themselves. The traffic to the API is peanuts in server costs, and it’s simple to rate limit it. Rate limiting also means you wouldn’t be giving away the large-scale analytics data you might want to sell. You can ban people from duplicating the Instagram app and depriving you of advertising revenue, just as Twitter have. The downsides to Instagram are tiny.
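To give a sense of how little machinery rate limiting involves, here is a minimal token-bucket sketch in Python. The parameters are invented, and this is a generic illustration rather than Instagram’s (or anyone else’s) actual implementation.

```python
# Minimal token-bucket rate limiter: a few lines of logic are enough to cap
# per-client API traffic, which is why cost is a weak argument for closing
# an API. Parameters are illustrative.
import time

class TokenBucket:
    def __init__(self, rate_per_sec, capacity):
        self.rate = rate_per_sec          # tokens added per second
        self.capacity = capacity          # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill in proportion to elapsed time, never beyond capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate_per_sec=5, capacity=10)  # e.g. per API client
allowed = sum(bucket.allow() for _ in range(100))
print(f"{allowed} of 100 burst requests allowed")  # roughly the bucket capacity
```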

Not so long ago, the wisdom was that an API with a rich third party ecosystem was the key to success. Twitter was the model, and it’s still wonderfully open (fingers crossed). Yahoo really got it – remember Yahoo Pipes? A graphical interface for playing with all the open APIs that used to exist, infrastructure for a gentler time.

The new players don’t care. Facebook has very successfully pioneered the opposite approach, where you put up barriers anywhere you can.

Neither of these two things is big news, and they are not the biggest stories on this topic by a long shot, but for whatever reason they were an epiphany for me. They made me realise that Facebook is in a unique position to take control of the web and drain it of its democratic potential.

I’m not in love with everything Google does, but, as a search engine, its interests could be seen as aligned with an open web. I don’t love Amazon’s dominance, but at least its marketplace makes a pretty transparent offer to users, just as Apple’s hardware business does. Facebook, which obviously competes in the advertising market with Google, has a strong interest in curtailing the open web. Facebook, as Mark Zuckerberg has explicitly said, would like to become the main place people go to read news, using Instant Articles rather than web pages, hidden away in Facebook’s walled garden. Increasingly, as the earlier evidence indicated, Facebook is the web.

But Facebook is different from the other big tech companies in another, much more important way. It is almost invulnerable to antitrust and competition regulations.  In the 1990s, Microsoft was in a massively dominant position in tech. In both Europe and the US, governments brought cases against MS, saying that they were exploiting their position to the detriment of consumers. The cases did serious damage to MS, and their dominant position slipped. Right now, the same thing is happening to Google’s dominance – the EU is bringing cases against them for their behaviour in relation to Android.

One reason that Apple always positions itself at the premium end of the market may be exactly to avoid gaining enough market share to qualify as a monopoly – instead it satisfies itself with high margins in a smaller segment.

But Facebook don’t actually sell anything to consumers, so they aren’t in a market, so no case can be brought against them. Sure, they are in the advertising market, and they are a big player, but only alongside Google and all the others.

Combined with Instagram and WhatsApp, Facebook is massively dominant in social networking. But social networking isn’t a market, because it’s free. Nor is Facebook a common carrier, nor are they a newspaper or a TV station, all of which have laws formulated specifically for them. For Facebook, there is no law.

I’d guess this is one of the reasons that Facebook is so clear it will never charge users – to do so would expose them to competition law.

Maybe it’s OK, because some guy in a dorm room or garage is right now working on a Facebook killer. Except they aren’t, because, as with Instagram and WhatsApp, Facebook will just buy anything that threatens it – and lock new purchases into its own closed infrastructure. Nothing is more lucrative than a monopoly, so the stock market will write a blank cheque for Facebook to reinforce its position.

The board of Facebook must spend a great deal of time thinking about what could go wrong. A massive data leak? Accidentally deleting everyone’s photos? Cyberbullying suicides becoming commonplace?

Surely competition laws aimed at the company are near the top of the risk register. What are they likely to be doing about that? They can do the normal revolving-door, expensive-dinner lobbying shenanigans, and I’m sure they are. But Facebook has a whole other level of leverage. The platform itself is profoundly political. They have detailed data about people’s voting intentions, highly politically desirable advertising space, the ability to influence people’s propensity to vote, and can use the massively influential Facebook Trending feature to promote whatever view they like. What politician wants to tangle with that kind of organisation?

If I was being cynical, I’d start to think about the Chan Zuckerberg Initiative. Facebook surely already has unimaginable access, but this organisation (not technically a charity) adds a halo of beneficence, a vehicle for the Zuckerberg point of view to embed itself even more deeply.

Why haven’t I mentioned Internet.org? It’s too depressing. I’ll write about that another day.

Not only is there no law for Facebook, but the democratic system for creating laws has incentives that mostly point in the wrong direction. You can construct all kinds of scenarios if you try hard enough. For me, the prospect of the mainstream web being controlled by a single corporation has moved from a distant possibility to a likely future. Let’s just hope things turn out more complicated than that – they usually do…