Most democratic countries use representative democracy – you vote for someone who makes decisions on your behalf (in the UK’s case your MP). The EU referendum is different: it’s an example of direct democracy. Bypassing their representatives, every citizen who is eligible to vote will be asked to make the decision themselves.

The referendum has this feature in common with most participatory design processes (by PD I mean including end users in the process of designing a product or service). PD is normally carried out with the stakeholders themselves, not representatives of them. You could think of the referendum as a participatory design process, designing a particular part of the UK’s economic and foreign policy.

The EU referendum fails as a participatory design process in two important ways. Firstly, most of the participants are deeply ill informed about the issues at hand, and under these circumstances it will be impossible for them to act in their own best interests. The consequences of their design decision may well run counter to their expectations.

An Ipsos MORI survey shows that on average UK voters believe that 15% of the population are EU migrants, when in fact only 5% are. On provocative issues such as the percentage of child benefit paid to children living elsewhere in Europe, many people overestimate the amount by a factor of 100 or more (the true figure is about 0.3%, yet 1 in 4 respondents estimated more than 24%).

Richard Dawkins has noted that very few people know all the relevant details needed to cast a vote, and laments the bizarre logic often used in discussions. He recommends voting ‘remain’ in line with a ‘precautionary principle’, and offers the following quote to illustrate the level of debate on TV:

“Well, it isn’t called Great Britain for nothing, is it? I’m voting for our historic greatness.”

Of course, it’s a question of degree. It would be unreasonable to suggest only a tiny number of world-leading experts can voice meaningful opinions. But there does seem to be a problem when decision makers are systemically wrong about the basic facts.

The second way the EU referendum fails is that the participants do not reflect the makeup of the country as a whole. Much of the speculation on the outcome focuses on turnout – which age groups and social classes will make the effort to cast a vote. Yet it hardly seems fair that such an important decision will be taken by a self-selecting group. Criticism of participatory design projects often rightly centres on the demographic profile of the participants, especially when more vocal or proactive groups override others. If young people were more inclined to vote, the chances of a remain result would increase dramatically; if people with lower incomes were more likely to vote, it would boost leave. I take this to be a serious problem with the voting mechanism.

These are difficult problems to solve. How can a participatory process have well informed participants and accurately reflect the demographics of the country, while offering everyone the chance to vote?

Harry Farmer has suggested that the rising number of referendums in the UK tells us we need to reform the way we do representative democracy, rather than resorting to bypassing it. Representatives have the time and resources to become well informed on issues, so in theory they would make better decisions. However, this does nothing to address the issue of turnout – MPs are themselves selected by voters who are disproportionately well off and older, and MPs are very far from reflecting the demographics of the UK as a whole.

Two more radical solutions have been put forward by Stanford Professor James Fishkin. In his ‘deliberation day’ model, the whole country would be given the day off to learn about, discuss, and vote on a topic, perhaps on an annual basis. Participation would be encouraged with a $150 incentive. The advantage is that (almost) everyone is included, and the incentive ought to be enough to ensure most demographics are well represented. The participants would also be well informed, having been given the day to think deeply in a structured way. However, it’s clearly a massive logistical and political challenge to implement ‘deliberation day’.

Fishkin’s other suggestion is to abandon inclusion – the attempt to allow everyone to get involved – and instead use ‘deliberative democracy’. In this scenario, a sample of the population, chosen to reflect the demographic makeup of the country as a whole, comes together for a weekend to discuss and learn about an issue before casting votes. This gives us well informed participants who are demographically reflective of the country as a whole. The model is roughly similar to jury service. The drawback is that some people may find it unfair to have a small, unelected group make a decision that affects everyone.

Making participation freely open to all stakeholders while ensuring that the participants are well informed and demographically representative is difficult in any participatory design process. Some may feel that the opportunity to participate is enough, and that if the young, or the less well off, decide not to vote that’s up to them.

However, voters having incorrect beliefs about the basic facts seems to me to point to a fundamentally broken process, where any decisions made are unlikely to turn out well. In classic participatory design projects, approaches such as prototyping, iteration and workshopping can help participants improve their understanding of the situation and empower them to make decisions in their own interests.

Are there similar approaches we could take to improve national decision making? Perhaps in the UK we could look at the structure of the press, and ask if having a tiny number of extremely rich newspaper proprietors holding sway over public opinion isn’t perhaps a serious problem for a country pretending to be a democracy.

Yesterday we had a really great round table talking about supply chains and manufacturing, hosted by Future Makespaces. Supply chains touch on so many political topics. They matter intensely for labour conditions, wages, immigration, the environment and for the diffusion of culture. At the same time they remain mostly invisible: they have a dispersed physical manifestation, and subsist in innumerable formal and informal social relations.

Governments publish some data on supply chains, but it can provide only a very low resolution picture. Jude Sherry told us that trying to locate manufacturers legally registered in Bristol often proved impossible. The opposite proved true as well – there are plenty of manufacturers in Bristol who do not appear in official data.

Supply chains are especially salient now because technology is changing their structure. The falling cost of laser cutters and 3D printers is democratising processes once only possible in large scale manufacturing – thus potentially shortening the logistical pathway between manufacturer and consumer; perhaps even bringing manufacturing back into the cities of the developed world.

What I took from the round table was the surprising diversity of approaches to the topic – as well as a chance to reflect on how I communicate my position.

If your writing, design, or artistic practice is about making the invisible visible, then supply chains are a rich territory – a muse for your work, and an agent of change in the materials and processes you can work with. I took Dr Helge Mooshammer and Peter Mörtenböck to be addressing this cultural aspect with their World of Matter project and their Other Markets publications. Emma Reynolds told us about the British Council’s Maker Library Network, which I think you could see as an attempt to instrumentalise that cultural output.

Michael Wilson (who was in the UK for The Arts of Logistics conference), came to the topic from an overtly political direction, casting the debate in terms familiar from Adam Curtis’ All Watched Over by Machines of Loving Grace; positioning himself in relation to capitalism, neo-liberalism and anarchy. His Empire Logistics project aims to explore the supply infrastructure of California. He highlighted the way that supply chains had responded to the unionisation of dock workers in California by moving as many operations as possible away from the waterfront, and to an area called, poetically, the Inland Empire.

The ‘small p’ political also featured – Adrian McEwan told us about his local impact through setting up a Makespace in Liverpool. James Tooze told us about the modular furniture system he’s working on with OpenDesk – designed to reduce waste generated by office refits and be more suited to the flexible demands that startups make of their spaces.

My perspective is based mostly on ideas from the discipline of economics. I described the Hayekian idea of the market as a giant computer that efficiently allocates resources: through the profit motive, the market solves the knowledge problem – and, on this view, does so in a way that cannot be improved upon.

Even if I don’t myself (completely) subscribe to this point of view, it is well embedded with policy makers, and I think it needs to be addressed and rebutted.

Every attempt to actively change supply chains, from the circular economy to makespaces, faces a challenge from Hayekian reasoning: if a new system of supply was a good idea, why hasn’t the market already invented it?

I see my work as using the language of economics to position design practices that seek to augment or transcend that market logic. In particular, I think Elinor Ostrom’s work offers a way to acknowledge human factors in the way exchanges take place, as well as providing design principles based on empirical research.

One surprise was the divergence of views on the ambitions of the ‘maker movement’. Should it aim for a future where a significant fraction of manufacturing happens in makespaces? Or would that mean the movement had been co-opted? Is its subcultural status essential?

I realised that when I try to explain my position in future I need to address questions like why I’ve chosen to engage with economic language and try to illustrate how that dovetails with cultural and political perspectives.


StoryMap is a project that I worked on with Rift theatre company, Peter Thomas from Middlesex University, Angus Main, who is now at RCA, and Ben Koslowski, who led the project. Oliver Smith took care of the tech side of things.

The challenge was very specific, but the outcome was an interface that could work in a variety of public spaces.

We were looking to develop an artefact that could pull together all of the aspects of Rift’s Shakespeare in Shoreditch festival, including four plays in four separate locations over 10 days, the central hub venue where audiences arrived, and the Rude Mechanicals: a roving troupe of actors who put on impromptu plays around Hackney in the weeks leading up to the main event.

We wanted something in the hub venue which gave a sense of geography to proceedings. In the 2014 Shakespeare in Shoreditch festival the audience were encouraged to contribute to a book of 1000 plays (which the Rude Mechanicals used this year for their roving performances). We felt the 2016 version ought to include a way for the audience to contribute too.

The solution we ended up with was a digital/physical hybrid map, with some unusual affordances. We had a large table with a map of Hackney and surroundings (reimagined as an island) routed into the surface.


We projected a grid onto the table top. Each grid square could have a ‘story’ associated with it. Squares with stories appeared white. Some of the stories were from the Twitter feed of the Rude Mechanicals, so from day one the grid was partially populated. Some of them were added by the audience.

You could read the stories using a console. Two dials allowed users to move a red cursor square around the grid. When it was on a square with a story, that story would appear on a screen in the console.


If there was no story on the square, participants could add one. We had sheets of paper with prompts written on them, which you could feed into a typewriter and tap out a response. Once you’d written your story, you put it in a slot in the console and scanned it with the red button. (Example prompt: ‘Have you been on a memorable date in Hackney?’ Response: ‘I’m on one now!’)
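For anyone curious about how the pieces fit together, the interaction logic is simple enough to sketch. The TypeScript below is purely illustrative – the types and names are mine, not the production StoryMap code – but it captures the essentials: a sparse grid of stories, a cursor driven by the two dials, and the rule that a square only accepts a new story if it’s empty.

```typescript
// Illustrative sketch only – not the production StoryMap code.
// A story occupies one grid square; the console's dials move a cursor.

interface Story {
  text: string;
  source: "twitter" | "typewriter"; // Rude Mechanicals feed or audience scan
}

class StoryGrid {
  private cells = new Map<string, Story>(); // key: "x,y"
  cursor = { x: 0, y: 0 };

  constructor(readonly width: number, readonly height: number) {}

  private key(x: number, y: number) {
    return `${x},${y}`;
  }

  // Each dial nudges the cursor along one axis, clamped to the table edge.
  moveCursor(dx: number, dy: number) {
    this.cursor.x = Math.min(this.width - 1, Math.max(0, this.cursor.x + dx));
    this.cursor.y = Math.min(this.height - 1, Math.max(0, this.cursor.y + dy));
  }

  // White squares in the projection are simply squares that have a story.
  hasStory(x: number, y: number) {
    return this.cells.has(this.key(x, y));
  }

  storyAtCursor(): Story | undefined {
    return this.cells.get(this.key(this.cursor.x, this.cursor.y));
  }

  // Scanning a typewritten sheet adds a story to the current (empty) square.
  addStoryAtCursor(story: Story): boolean {
    if (this.storyAtCursor()) return false; // occupied squares are read-only
    this.cells.set(this.key(this.cursor.x, this.cursor.y), story);
    return true;
  }
}
```

In the real installation this state had to be mirrored between the projection and the console screen, which is one reason building it as a web app (see below) was convenient.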

Nearly 300 stories were submitted over 10 days. Even though they’re really difficult to use, people loved the typewriters as an input method. Speaking from my own perspective, I found an input method that legitimised spelling mistakes and typos less intimidating.

There were two modes of interaction. The first was the table-based projection, which allowed a conversational, collective and discursive understanding of what had already been submitted. The second was a more individual process of reading specific stories and adding your own using the screen in the console. This second mode still relied on the projection, because you needed to move your cursor to find or submit a story.

The resolution of the projection was too low (because of the size of the table) for fonts or details to be rendered well. From this perspective, the map routed into the table really worked: it increased the ‘bandwidth’ of the information the table could convey, and fine lines and small text worked well (which gave us a chance to play around with whimsically renaming bits of Hackney).

Having a way to convey spatialised data on a table where people can get round it and discuss it, combined with a (potentially private) way to add detail might work in a number of scenarios. Could it be a tool for planning consultation? A way to explore data spatialised in some other way, eg. a political spectrum or along a time line? Perhaps in a museum context?

The whole thing was developed as a web app, so it’s easy to extend across more screens, or perhaps to add mobile interaction. It’s opened my eyes to the fact that, despite all the noise around open data, there are relatively few ways to explore digital information in a collective, public way. The data is shared, but the exploration is always individual.  More to follow…

(I did a quick technical talk on how we delivered StoryMap for Meteor London, slides here.)

The BBC is to remove recipes from its website, responding to pressure from the Government. It will also remove a number of other web-only services. The news is symbolic of a larger issue, and the outcome of a much longer story. It’s a signal that the current government will actively reduce public sector activity on the web for fear of upsetting or displacing the private sector. This is not just a feature of the current Conservative government; the Blair administration treated the BBC in the same way. The idea is that by reducing the public sector a thousand commercial flowers will bloom, that competition will drive variety and quality, and that a vibrant commercial digital sector will create high-skill jobs. Never mind that the web is already controlled by a handful of giant US monopolies, mostly employing people thousands of miles away. Ideology trumps pragmatism.

In the specific case of the BBC, the Government has won. The BBC’s entire digital presence is dependent on its TV and radio operations. iPlayer can only exist while the BBC is making TV and radio shows; the news website relies on the news-gathering operation it inherits from TV and radio. TV (and possibly radio) are destined to have fewer viewers and listeners as we increasingly turn to digital. So, as licence fee payers disappear, the output will shrink and decline in quality, the BBC’s presence in the national debate will diminish, and its ability to argue for funding will be decreased. When it comes time to switch funding from a licence fee for owning a television to a system that works on broadband connections, the BBC will already have lost: an outmoded organisation that has failed to adapt, a footnote rather than a source of national pride.

Put simply, the BBC has failed to make the case that it should exist in a digital era. Instead it’s chosen to remain a broadcast operation that happens to put some of its content on a website. When TV finally dies, the BBC could be left in a position similar to NPR in the US: of interest to a minority of left-wing intellectuals, dwarfed by bombastic, polarising media channels owned by two or three billionaires. That’s why it’s so critical that the BBC make a web offer separate from TV, but it hasn’t. The Government has been extremely successful at making the BBC embrace the principle that all web output must be linked to TV or radio, which is why, for example, the BBC will be reducing commissions specifically for iPlayer too, and closing its online magazine.

The story has been evolving for a long time. I was working on the BBC’s website in 2009. It had just been through a multi-year Public Value Test to prove to the Board that it wasn’t being anti-competitive by providing video content online; at least the public were allowed iPlayer in the end. BBC Jam, a £150 million digital educational platform to support the national curriculum, was cancelled in 2007 because of competition law. Don’t forget, at that point they’d already built most of it. Millions of pounds of educational material were thrown in the bin because it would be ‘anti-competitive’. Of course, no commercial alternative has ever been built.

When I arrived there was endless talk of restructuring, and optimism that we’d get a clear set of rules dictating what projects would not be considered anti-competitive. It never came. The project I worked on, about mass participation science experiments, was cancelled, I presume because it wasn’t directly connected to a TV programme. All kinds of other digital offerings were closed. H2G2, which pre-dated, and could (maybe?) have become, Wikipedia, was shuttered. The Celebdaq revamp was another proposition that was entirely built and then cancelled before it ever went live.

The BBC will now offer recipes that are shown on TV programmes, but only for 30 days afterwards. That’s how hysterical the desire to prevent public service on the web is: you can create content, at considerable cost, but not leave it on the web, which would cost virtually nothing.

The BBC has focused its digital R&D budget on its gigantic archive, looking at new ways of searching, ordering and displaying the millions of hours of audio and video it has collected. Which is a weird decision, because it’s a certain fact that the BBC will never get copyright clearance to make public anything but the tiniest fraction of that archive. I speculate that the reason it has done this is that it saves the management from having to worry about a competitive analysis. Projects that can never go public don’t pose a problem.

If we shift our focus from the BBC to society as a whole, it’s disappointing to see how we’ve abandoned the notion of digital public space. The web has opened up a whole new realm for creativity, interaction, education and debate. As a society we’ve decided that almost nothing in that realm should be publicly provided – which is absolutely perverse, because the web intrinsically lends itself to what economists would think of as public goods.

Look across the activities of the state and you’ll see that none has a significant presence in the digital realm. We think the state should provide education – but it does nothing online. Local governments provide public spaces, from parks to town halls – but never online. We think the state should provide libraries – but never online. We love the state broadcaster, but we’re not allowed it online. We expect the state to provide healthcare – but the NHS offers only a rudimentary and fragmentary online presence. You can apply the formula to any sector of government activity. Want career guidance? Not online. Want to know how to make a shepherd’s pie? Better hope it appeared on a TV cooking show in the last 30 days.


Sometimes a new scrap of information strings a link between two previously disconnected neurons, your cortex reconfigures, and a whole constellation of thoughts snaps together in a new way. That’s happened to me recently: I’ve realised something that other people realised a lot quicker than me – Facebook is eating the web. The original John Perry Barlow / Tim Berners-Lee / Jimmy Wales vision of a digital space everyone owned is dying. It’s sometimes easy to forget how recently we had lofty visions, and how extensively the web has reoriented towards advertising.

But it’s more than that. The normal checks and balances for dominant corporations – competition laws – don’t apply here. You don’t pay for social networking, so it isn’t a market, so there is no competition law. I’ll come back to this later.

I’m doing a PhD looking at how the public sector can benefit from social media data. Corporations own datasets of unimaginable social value, and the only thing they want to do with them is sell them to advertisers. All their other potentially beneficial social roles – tracking diseases, policy consultation and strengthening communities, to mention just three – are getting harder to realise.

That’s not to say there aren’t amazing civic technology projects still happening, but they all happen under the looming shadow of Facebookification.

In denial, I clung to the belief that Facebook’s unbelievably massive user numbers were just not true. Looking for research on this, I discovered a paper containing a startling statistic: there are more Facebook users in Africa than there are people on the Internet. Exactly as I thought – Facebook are massively inflating their numbers. Except… further investigation showed that many survey respondents were unaware that they were on the Internet when they used Facebook. They didn’t know about the web; they only knew about Facebook. Research that I thought was going to confirm my world view did the exact opposite: mind… flipped. That was the first inflection point, when I started to feel that everything had gone wrong.

The second was trying to use the Instagram API for some research. For a long time I’ve been aware that the Facebook API is so hostile that I wouldn’t be able to use it. Facebook is such a complicated product, with such complex privacy settings, that perhaps it’s inevitable that its API is basically unusable. But Instagram is incredibly simple, and many people choose to make their photos public. To me, it’s absolutely natural that they would make public photos available via an API. But since November 2015, Instagram’s API has been radically curtailed. All the apps that use it have to be reviewed, and there is an onerous list of conditions to comply with. To a first approximation, Instagram turned off their API.

Again, mind flipped. Facebook have purchased Instagram, and now they’ve strangled it as a source of data. They are a commercial company, and they can do what they like, but my mind boggles at the mean-spiritedness of shutting the API. The photos belong to the users, and the users have asked for them to be published. Third parties might well do amazing things with the photos – to the benefit of everyone, including their creators. Instagram could allow that at very close to no cost to themselves. The traffic to the API is peanuts in server costs, and it’s simple to rate limit. Rate limiting also means you wouldn’t be giving away the large-scale analytics data you might want to sell. You can ban people from duplicating the Instagram app and depriving you of advertising revenue, just as Twitter have. The downsides to Instagram are tiny.
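To be concrete about how cheap that protection is, here is a generic token-bucket limiter of the kind any API provider can bolt on – a sketch of the standard technique, not anything Instagram actually runs:

```typescript
// Generic token-bucket rate limiter – a sketch of the standard technique,
// not Instagram's actual infrastructure. Each API key gets a bucket that
// refills at a fixed rate; requests that find the bucket empty are rejected,
// capping both server load and the volume of data any one consumer extracts.

class TokenBucket {
  private tokens: number;
  private lastRefill = Date.now();

  constructor(private capacity: number, private refillPerSecond: number) {
    this.tokens = capacity;
  }

  tryConsume(): boolean {
    const now = Date.now();
    const elapsedSeconds = (now - this.lastRefill) / 1000;
    this.tokens = Math.min(this.capacity, this.tokens + elapsedSeconds * this.refillPerSecond);
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}

const buckets = new Map<string, TokenBucket>();

function allowRequest(apiKey: string): boolean {
  if (!buckets.has(apiKey)) {
    // e.g. 5,000 requests per hour ≈ 1.4 requests per second, with a burst of 200
    buckets.set(apiKey, new TokenBucket(200, 5000 / 3600));
  }
  return buckets.get(apiKey)!.tryConsume();
}
```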

Not so long ago, the wisdom was that an API with a rich third party ecosystem was the key to success. Twitter was the model, and it’s still wonderfully open (fingers crossed). Yahoo really got it – remember Yahoo Pipes? A graphical interface for playing with all the open APIs that used to exist, infrastructure for a gentler time.

The new players don’t care. Facebook has very successfully pioneered the opposite approach, where you put up barriers anywhere you can.

Neither of these two things is big news – they’re not the biggest stories on this topic by a long shot – but for whatever reason they were an epiphany for me. They made me realise that Facebook is in a unique position to take control of the web and drain it of its democratic potential.

I’m not in love with everything Google does, but, as a search engine, its interests could be seen as aligned with an open web. I don’t love Amazon’s dominance, but at least its marketplace makes a pretty transparent offer to users, just as Apple’s hardware business does. Facebook, which obviously competes in the advertising market with Google, has a strong interest in curtailing the open web. Facebook, as Mark Zuckerberg has explicitly said, would like to become the main place people go to read news, using Instant Articles rather than web pages, hidden away in Facebook’s walled garden. Increasingly, as the earlier evidence indicated, Facebook is the web.

But Facebook is different from the other big tech companies in another, much more important way. It is almost invulnerable to antitrust and competition regulations.  In the 1990s, Microsoft was in a massively dominant position in tech. In both Europe and the US, governments brought cases against MS, saying that they were exploiting their position to the detriment of consumers. The cases did serious damage to MS, and their dominant position slipped. Right now, the same thing is happening to Google’s dominance – the EU is bringing cases against them for their behaviour in relation to Android.

One reason that Apple always positions itself at the premium end of the market may be exactly to avoid gaining enough market share to qualify as a monopoly – instead it satisfies itself with high margins in a smaller segment.

But Facebook don’t actually sell anything to consumers, so they aren’t in a market, so no case can be brought against them. Sure, they are in the advertising market, and they are a big player, but only alongside Google and all the others.

Combined with Instagram and Whatsapp, Facebook is massively dominant in social networking. But social networking isn’t a market, because it’s free. Nor is Facebook a common carrier, nor are they a newspaper or a TV station, all of which have laws formulated specifically for them. For Facebook, there is no law.

I’d guess this is one of the reasons that Facebook is so clear it will never charge users – to do so would expose them to competition law.

Maybe it’s OK, because some guy in a dorm room or garage is right now working on a Facebook killer. Except they aren’t, because, as with Instagram and Whatsapp, Facebook will just buy anything that threatens it – and lock new purchases into its own closed infrastructure. Nothing is more lucrative than a monopoly, so the stock market will write a blank cheque for Facebook to reinforce its position.

The board of Facebook must spend a great deal of time thinking about what could go wrong. A massive data leak? Accidentally deleting everyone’s photos? Cyberbullying suicides becoming commonplace?

Surely competition laws aimed at the company are near the top of the risk register. What are they likely to be doing about that? They can do the normal revolving-door, expensive-dinner lobbying shenanigans, and I’m sure they are. But Facebook has a whole other level of leverage. The platform itself is profoundly political. They have detailed data about people’s voting intentions, highly politically desirable advertising space, the ability to influence people’s propensity to vote, and the massively influential trending feature to promote whatever view they like. What politician wants to tangle with that kind of organisation?

If I was being cynical, I’d start to think about the Chan Zuckerberg Initiative. Facebook surely already has unimaginable access, but this organisation (not technically a charity) adds a halo of beneficence, a vehicle for the Zuckerberg point of view to embed itself even more deeply.

Why haven’t I mentioned Internet.org? It’s too depressing. I’ll write about that another day.

Not only is there no law for Facebook, but the democratic system for creating laws has incentives that mostly point in the wrong direction. You can construct all kinds of scenarios if you try hard enough. For me, the prospect of the mainstream web being controlled by a single corporation has moved from a distant possibility to a likely future. Let’s just hope things turn out more complicated; they usually do…


Couple of notes from the Long Now Foundation health panel, both regarding how we aggregate and distribute knowledge.

Alison O’Mara-Eves (Senior Researcher in the Institute of Education at University College London) told us about the increasing difficulty of producing systematic reviews. Systematic reviews attempt to synthesise all the research on a particular topic into one viewpoint: how much can you drink while pregnant, what interventions improve diabetes outcomes, and so on. These reviews, such as the venerable Cochrane reviews, are struggling to sift through the increasing volume of research to decide what actionable advice to give doctors and the public. The problem is getting worse as the rate of medical research increases (although more research is obviously a good thing in itself). We were told the research repository Web of Science indexes over 1 billion items of research. (I’m inclined to question what counts as an ‘item’, since there must be far fewer than 100 million scientists in the world, and most of them must have contributed fewer than 10 items; however, I take the point that there’s a lot of research.)

Alison sounded distinctly hesitant about using automation (such as machine learning) to assist in selecting papers to be included in a systematic review, as a way of making one of the steps of the process less burdensome. The problem is transparency: a systematic review ought to explain exactly what criteria it used to include papers, so those criteria can be interrogated by the public. That can be hard to do if an algorithm has played a part in the process. This problem is clearly going to have to be solved – research is no use if we can’t synthesise it into an actionable form. And it seems tractable: we already have IBM Watson delivering medical diagnoses, apparently better than a doctor. In any case, I’m sure current systematic reviews of medical papers are carried out using various databases’ search functions – who knows how those work or what malarkey those search algorithms might be up to in the background?
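The transparency worry seems addressable, though: even a crude automated screening aid can publish its criteria and its reasons alongside every decision. The sketch below is a hypothetical illustration (simple keyword matching, not anything any review body actually uses), but it shows how an ‘audit trail’ of matched criteria can travel with each inclusion decision:

```typescript
// Hypothetical illustration of a transparent screening aid for systematic
// reviews – not a real tool. Every decision carries the evidence behind it,
// so the inclusion criteria remain inspectable by the public.

interface Paper {
  id: string;
  title: string;
  abstract: string;
}

interface ScreeningDecision {
  paperId: string;
  include: boolean;
  matchedTerms: string[]; // the audit trail: why the paper was flagged
}

// Inclusion criteria expressed as published, human-readable terms.
const inclusionTerms = ["randomised controlled trial", "diabetes", "intervention"];
const threshold = 2; // flag for human review if at least two terms match

function screen(paper: Paper): ScreeningDecision {
  const text = `${paper.title} ${paper.abstract}`.toLowerCase();
  const matchedTerms = inclusionTerms.filter((term) => text.includes(term));
  return {
    paperId: paper.id,
    include: matchedTerms.length >= threshold,
    matchedTerms,
  };
}

// Example: the decision and its justification can be published together.
const decision = screen({
  id: "PMID-0001",
  title: "A randomised controlled trial of a dietary intervention in type 2 diabetes",
  abstract: "We report outcomes of a 12-month intervention...",
});
console.log(decision); // { paperId: 'PMID-0001', include: true, matchedTerms: [...] }
```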

Mark Bale (Deputy Director in the Health Science and Bioethics Division at the Department of Health) was fascinating on the ethics of giving genetic data to the NHS through its 100,000 Genomes Project. He described a case where a whole family who suffered from kidney complaints were treated thanks to one member having their genome sequenced, which identified a faulty genetic pathway. Good for that family, but potentially good for the NHS too – Mark described the possibility that quickly identifying the root cause of a chronic, hard-to-diagnose ailment through genetic sequencing might save money as well.

But – what of the ethics? What happens if your genome is on the database and subsequent research indicates that you may be vulnerable to a particular disease – do you want to know? Can I turn up at the doctor’s with my 23andMe results? Can I take my data from the NHS and send it to 23andMe to get their analysis? What happens if the NHS decides a particular treatment is unethical and I go abroad to a more permissive regulatory climate? What happens if I have a very rare disease and refuse to be sequenced – is that fair on the other sufferers? What happens if I refuse to have my rare disease sequenced, but then decide I’d like to benefit from treatments developed through other people’s contributions? I’ll stop now…

To me, part of the answer is that patients are going to have to acquire – at least to some extent – a technical understanding of the underlying process so they can make informed decisions. If that isn’t possible, perhaps smaller representative groups of patients who receive higher levels of training can feed into decisions. One answer that’s very ethically questionable from my perspective is to take an extremely precautionary approach. That would be a terrible example of status quo bias: many lives would be needlessly lost if we decided to be overly cautious. There’s no ‘play it safe’ option.

It’s interesting that with genomics the ethical issues are so immediate and visceral that they get properly considered, and have rightly become the key policy concern with this new technology. If only that happened for other new technologies…

The final question was whether humanity would still exist in 1000 years – much more in the spirit of the Long Now Foundation. Everyone agreed it would, at least from a medical perspective, so don’t worry.