The Creative Exchange PhD program has been struggling with the meaning of the phrase ‘Digital Public Space’, which all of the researchers on the program are meant in some way to address. The phrase was originally coined at the BBC as it tried to work out its own digital strategy, and the CX inherited it. It seems to suck everyone into a demotic vortex. One reason for this is the word ‘space’, which alongside its physical meaning is used metaphorically so widely that it instantly sows confusion (head space / cyberspace / phase space / problem space / design space…). You could just lose the word ‘space’, and then the phrase becomes much more like digital civics, which I find a little more transparent.
At the Research Through Design conference we thought a lot about how researchers’ individual practices can be used to effect change in the world while also generating research knowledge. I found the opportunity to consider foundational issues very helpful, and it made me realise that regardless of whether your practice is about knitting, Lego, drones or workshops, from a design perspective you can define a set of affordances that characterise how people will interact with your work.
This brings me to the word ‘digital’. The word digital is absolutely content-free when it comes to specifying anything about how people interact with your work, and is therefore extremely tenuous in terms of its design consequences.
So when the CX program endeavours to collect together research using ‘digital’ as a parameter it struggles to find any way to get purchase on anyone’s particular practice. Nearly any innovation is going to have some digital aspect to it, simply by virtue of the fact that it’s an innovation in a profoundly digitised society.
For example, Chris Csikszentmihalyi’s RootIO, which I thought was a fantastic project, is all about FM radio. But it makes perfect sense that it has a web interface, and various other digital aspects, just because that’s a logical way to build it. In fact, in many ways it’s a stopgap solution until Uganda has Internet infrastructure. In many ways it recreates the hyperlocal media that’s been made possible by the web. Calling this project digital or analogue is an arbitrary label. Digital isn’t a helpful design category.
Not about the app store
Nick Grant repurposed a number of apps to make his Young Digital Citizenship project. As he pointed out in his presentation, developing a native phone app is a very expensive and uncertain process, which makes it a bad fit for research. More than that, nearly all the functionality that comes from a native app can be achieved in HTML5, which means the main reason for building an app is for the business model that the app store provides. In most research contexts this isn’t going to be relevant. Nick’s approach of using what already exists is a great way to get around the expense of development, which I think in general turns out to be an albatross.
Not about the artefact
There was a lot of discussion about whether Research Through Design requires building an artefact – can you build a system instead? Or software? I think this was mostly triggered by the conference organisers asking speakers to show tangible objects, which are more compelling in the context of a conference. I don’t think this was a philosophical statement, just a practical one. Overall, I felt the project of defining ‘research through design’ by categories of practice or output is a bit futile. To me it seems that ‘research through design’ is research carried out by people who think of themselves as designers, or who have attached themselves to design culture, and there probably isn’t a lot more to usefully say about it, except perhaps to point out empirically its success or otherwise.
There’s been a sprinkling of talks that I can’t fit into a theme: Adam Harper on the future of music, Jon Ronson on shaming, food futurology, etc. Do people want new textures in their foods because of the ennui of touch screens (Dr Morgaine Gaye)? Fun to think about…
But there have also been many – perhaps most – talks that fit into a neat conceptual box, from John Lanchester’s “How to Speak Money” to Michel Bauwens on peer-to-peer civilisation. It’s indicative that Dr. Michael Osborne (on AI) and Luciano Floridi (on tech & philosophy), both from Oxford University, began their talks by emphasising how quickly the amount of information in the world is increasing. One sound-bite stat was that there was 1,000 times more data on the web in 2014 than all the words spoken by humans in all of history. We get the idea that something is happening, even if it’s not exactly clear what it means.
Here’s how it crystallises for me: all this new information is a new way to solve the problem of social coordination. How can we act together most effectively to achieve our goals? The best way to understand this is to contrast it with old ways of solving social coordination problems. John Lanchester pointed out that the very first time humans wrote things down was to track the movements of goods in the temples of ancient Sumer. What they were facing, for the first time ever, was the difficulty of making sure that a lot of human labour was directed towards a common goal: making the temple rich. You can’t solve a problem that complicated without information tech, in this case writing.
Skipping forward in history a bit, in 1945 the economist Friedrich Hayek wrote this, which deserves quoting at length:
Today it is almost heresy to suggest that scientific knowledge is not the sum of all knowledge. But a little reflection will show that there is beyond question a body of very important but unorganized knowledge which cannot possibly be called scientific in the sense of knowledge of general rules: the knowledge of the particular circumstances of time and place. It is with respect to this that practically every individual has some advantage over all others because he possesses unique information of which beneficial use might be made, but of which use can be made only if the decisions depending on it are left to him or are made with his active coöperation …. the shipper who earns his living from using otherwise empty or half-filled journeys of tramp-steamers, or the estate agent whose whole knowledge is almost exclusively one of temporary opportunities, or the arbitrageur who gains from local differences of commodity prices, are all performing eminently useful functions based on special knowledge of circumstances of the fleeting moment not known to others.
What Hayek wanted to say is that money is the information tech that currently solves the social coordination problem, in contrast to central planning, which he said could never handle the complexity. Money compresses a lot of information into a single system of tokens, which is about the most complex thing you can manage without electronic computers. I express my desire for something by my willingness to spend money on it; someone else is then motivated to try and provide that item. It’s both a way of expressing your goals and causing people to work together to achieve them. That’s what a market is. Money is, at least as it is at the moment, created and controlled by nation states. States do this because it’s a very effective technology for solving social coordination problems. (John Lanchester [& David Graeber] point out that states originally did this to socially coordinate the mobilisation of armies.)
So coming back to the massive amount of information that we all agree is now being created, one thing we can do with it all is to help social coordination. We can solve Hayek’s problem now.
You can use a ‘money alternative’, like bitcoin, to track value, in which case you are using new information exchange technologies to do something very similar to the old market system, only you don’t have to rely on nation states to issue the money. Or you can think of services like AirBnB, an example of the sharing economy that Michel Bauwens attacked, which are parasitic on the existing money-based system. AirBnB and co use new systems for creating and sharing information (the Internet) to make transactions happen that were too hard before: sporadic letting of a spare room, or lending your car to someone.
But there are more radical alternatives too. If you can find several people who need the same infrastructure as you, say workshop equipment, then you could buddy up and agree to share that stuff; that’s what a makerspace is. What you need to make this work is a) to be able to find other people who want the same stuff as you, and b) to be able to ensure that no one participant uses too much of the resource, i.e. spends all day using the lathe when someone else needs it. Elinor Ostrom’s design principles for common pool resources describe this second point nicely in rule 5:
Develop a system, carried out by community members, for monitoring members’ behaviour.
Both of these are information problems that used to be solved by money – you’d either have to rent the equipment or have someone else do your manufacturing for you – both exchanges mediated by cash.
Not so any more, hence all the interest across Futurefest in notions of commoning, crowdsourcing and peer-to-peer community.
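Ostrom’s monitoring rule can be made concrete with a toy sketch: a shared-resource ledger that any member can inspect, and which refuses bookings beyond a quota. This is purely illustrative – the member names and the 4-hour daily quota are invented, not drawn from Ostrom’s principles themselves.

```python
from collections import defaultdict

class SharedResource:
    """Toy ledger for a shared tool (say, a makerspace lathe).

    Illustrative sketch only: the quota and names are hypothetical.
    The point is that the usage record is data visible to all members,
    which is what makes community monitoring (Ostrom's rule 5) cheap.
    """

    def __init__(self, daily_quota_hours=4):
        self.daily_quota = daily_quota_hours
        self.usage = defaultdict(float)  # member -> hours booked today

    def book(self, member, hours):
        """Record a booking, refusing any that would exceed the quota."""
        if self.usage[member] + hours > self.daily_quota:
            return False  # refused: one member can't hog the lathe
        self.usage[member] += hours
        return True

lathe = SharedResource(daily_quota_hours=4)
assert lathe.book("alice", 3)       # within quota
assert not lathe.book("alice", 2)   # would exceed 4 hours, refused
assert lathe.book("bob", 4)         # the quota is per member
```

The ledger replaces the price signal: instead of rationing the lathe by who can pay, it rations by an agreed rule that everyone can audit.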
As a final point, I noticed that Vivienne Westwood’s view that ‘Capitalists prefer competition and death’ was quite common across the conference. In rejecting the current economic system many people go beyond saying that it ought to be augmented or improved, and instead want to make the case that it should be abandoned. This is too far for me.
I think the modern economic system has delivered enormous benefits; it’s a liberal way of organising people to work together. The economic crisis has shown us that there are deep problems: rising inequality has to be addressed, and no question new information technologies can offer alternatives.
But for me markets are as much about coordination as competition, and the reason they have elements of competition is because in a liberal society people want different things. The market provides a mechanism to sort through and prioritise those wants. You can’t just throw the whole thing over.
China adopting a free market lifted more people out of poverty than all the development initiatives ever. The next step, which was nascent at Futurefest, is to re-embed the economy in human relations and tame the wild inequality that has come from the ideology of free market economics.
That’s my prism, obviously there are many others, but that’s the crux of my PhD. For me it stitches together lots of otherwise disparate thoughts.
The effect of representing networks with ‘spaghetti’ network graphs, like the inscrutable graph above that I found on Google Images, is surprising: they are almost completely illegible and yet immediately satisfying. Whenever I show a network graph when describing my own work, everyone seems to ‘get’ what I’m up to.
If you want to gather data for social network analysis, or check it, or edit it, you tend to do so using a matrix table. Doing so via a network graph is going to be very hard.
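A minimal sketch of that tabular view, using a labelled adjacency matrix (the node names here are invented for illustration). Checking and editing the data in this form is mechanical, in a way it never is on a drawing:

```python
import numpy as np

nodes = ["ann", "ben", "cat", "dan"]  # hypothetical people
# Adjacency matrix: adj[i][j] = 1 if person i and person j are connected.
adj = np.array([
    [0, 1, 1, 0],
    [1, 0, 0, 1],
    [1, 0, 0, 0],
    [0, 1, 0, 0],
])

# Checking is easy in this form: an undirected network must have a
# symmetric matrix with an empty diagonal.
assert (adj == adj.T).all()
assert (np.diag(adj) == 0).all()

# Editing is a single cell change (mirrored): connect cat and dan.
adj[2, 3] = adj[3, 2] = 1

# Reading off the edge list, e.g. for import into a tool like Gephi.
edges = [(nodes[i], nodes[j]) for i in range(len(nodes))
         for j in range(i + 1, len(nodes)) if adj[i, j]]
print(edges)  # [('ann', 'ben'), ('ann', 'cat'), ('ben', 'dan'), ('cat', 'dan')]
```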
If you want to boil your data down into some aggregate picture then you can use mathematical approaches to derive properties: modularity, connectedness, etc. If you try to guess these properties by looking at a network graph, your intuition is not going to be great.
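A sketch of deriving such aggregate properties with networkx, on Zachary’s karate club (a standard small social network bundled with the library); the particular metrics shown are just common examples:

```python
import networkx as nx
from networkx.algorithms import community

# Zachary's karate club: a small, well-studied social network.
G = nx.karate_club_graph()

# Aggregate properties that are hard to eyeball from a drawing.
print(nx.density(G))          # fraction of possible edges present
print(nx.is_connected(G))     # is everyone reachable from everyone?
print(nx.average_clustering(G))

# Modularity: how cleanly the network splits into communities.
parts = community.greedy_modularity_communities(G)
print(len(parts), community.modularity(G, parts))
```

These numbers are exactly the things our visual intuition is bad at guessing from a spaghetti diagram.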
And yet network graphs of my work seem to be incredibly important for people to be able to mentally situate what’s going on, to position what I’m doing in their minds. It seems to live in between the comprehensive tabular matrix and the reductionist statistical analysis and fill a unique, qualitative role.
Gephi, which I use to visualise networks as graphs, has various algorithms for creating the network layouts. They are computationally expensive and take several seconds to run, yet after all this computation the result often leaps off the page as visually wrong – unbalanced in some way. Usually I can see what needs to change to make it right.
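The same family of force-directed algorithms is available outside Gephi; a sketch in networkx, showing one reason no single run ever ‘looks right’: the layout is iterative and seeded with randomness, so each run is a different, equally valid answer.

```python
import networkx as nx

G = nx.karate_club_graph()

# Fruchterman-Reingold force-directed layout: iterative simulation of
# springs and repulsion, hence the computational cost.
pos_a = nx.spring_layout(G, iterations=50, seed=1)
pos_b = nx.spring_layout(G, iterations=50, seed=2)

# One (x, y) position per node...
assert len(pos_a) == G.number_of_nodes()

# ...but a different seed gives a different, equally 'valid' picture -
# which a human eye might still judge unbalanced.
assert any((pos_a[n] != pos_b[n]).any() for n in G)
```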
The sense of orientation that comes from seeing a network graph, and the immediate ability to lay out a graph in a way that apparently a computer cannot, might be linked at some cognitive level – do humans have a special module for processing network graphs?
In any case, what I previously thought of as a bug – the seductive quality of the spaghetti graph – I am now reconsidering as a feature – that network graphs, even borderline illegible ones, give us some kind of context and confidence in the data we are examining. Perhaps they just act as a handy prompt to ask some important questions: are there meaningful clusters? Is the graph complete? What are the nodes, actually? What types of edges are there?
Some brief research has turned up some approaches to reducing the amount of spaghetti in the network diagram.
Above is an attempt to add matrices to network diagrams. Representing both halves of a necessarily symmetric matrix violates all kinds of Tufte dictums; that aside, this graphic fails because it doesn’t aid intuition very much at all, and the hard data is impossible to read off because the matrices don’t have labels.
In another approach we see an attempt to make clusters more intuitive. I think this work is more successful than the first example because of its specific focus on a qualitative, at-a-glance approach; however, it’s also representing fewer nodes and edges, so perhaps it has set itself a lower hurdle. In case you can’t tell, the various coloured clusters have been encouraged to form into recognisable shapes – square, circle, heart, and light grey approximating a triangle. But why square, circle, triangle? Was the key problem with this diagram comprehending the clusters anyway?
Contextual, rather than algorithmic
In looking through data visualisation books I found pure networks, the kind that the two examples above are trying to represent, quite rare. But we’ve all seen a very famous example of network data vis – perhaps the canonical example of data visualisation – the London Tube Map. What makes it work so well is the judicious addition and removal of information. The Thames isn’t part of the network, but without it the stations are completely geographically unmoored. Yet the precise distances between the stations have been scrapped, a detail that gets in the way of the aesthetic.
The Tube Map has been done to death, so I’ve included British Airways’ network graph of their flights, circa 1989. Here there is an extra contextual detail in the dotted connections, which reach around what would be the back of the globe if we were looking at a normal map of the earth. I also like the pleasing way that different destinations peel off a central spine.
These bespoke visualisations seem to be pointing out the inadequacy of the purely algorithmic approach of software packages like Gephi.
Even so, it seems that we take something from even the worst spaghetti diagrams.
Henry, Nathalie, Jean-Daniel Fekete, and Michael J. McGuffin. “NodeTrix: A Hybrid Visualization of Social Networks.” IEEE Transactions on Visualization and Computer Graphics 13.6 (2007): 1302–1309.
Shannon, Ross, Aaron Quigley, and Paddy Nixon. “Graphemes: Self-Organizing Shape-Based Clustered Structures for Network Visualisations.” CHI ’10 Extended Abstracts on Human Factors in Computing Systems. ACM, 2010.
Nesta’s Big and Open Data for the Common Good raced through (I think) 7 different projects, all of them detailed in the report, including my work with @lagaia, @rowanelena and @mparsfield in Hounslow.
The projects underpinned a more general debate about two recurrent topics – ethics, and who should be responsible for building the open data infrastructure.
Ethics and data
Whenever you are using data about people the question “Have the people in question given informed consent?” arises. When the data is not directly about people, there is still a question “Is the end to which we are using this data ethical?”. This topic generated much debate on the Twitter hashtag – as can be seen in the storify.
Clearly, using people’s data without their consent is an invasion of their privacy as well as a disservice to society. In @lagaia’s example, if Citizens Advice Bureau opened up detailed data about what people ask them about payday loans (which, by the way, they have no intention of doing) that might be very useful to unscrupulous lenders.
As upstanding, morally conscious individuals, we will obviously tend to be extremely conservative with the uses we put data to. This has a number of non-obvious drawbacks:
Informed consent is extremely difficult to parse, since most people have no idea of the conclusions that can be drawn from a given set of data using statistical approaches. So a strict interpretation of informed consent will be extremely limiting. Much of the activity discussed at the event would be at best in a grey zone, for example. The ‘bigger’ the data, the harder it is to claim ‘informed’ consent, because the information that can be derived becomes more surprising.
There is a free-rider problem. If one person does not consent for their medical data to be shared for research purposes, but others do, is it fair for the person who does not consent to benefit from any research breakthroughs predicated on other people’s generosity with their own personal data?
Traditionally, academia and the third sector have been very strict about ethics, while unsurprisingly the commercial sector has not. On a case-by-case analysis we might see the strictest ethical interpretation as morally preferable, but if the cumulative outcome is for the commercial sector to have a vast lead in theoretical and behavioural understanding, to be decisively more adept at data processing, is that really for the greater good?
Perhaps the most important point is the huge opportunity cost of not doing certain big and open data activities. Being over-cautious could have as bad an outcome for society as an incautious approach. Playing it safe is not cost free.
@Stianwestlake pointed out that rules to enforce ethics are unsuccessful, suggesting that the disaster in the financial sector was the result of bad faith and could not have been averted by more rules. In some countries bankers now have to take a Hippocratic oath. Perhaps something similar could be beneficial for those using data? Bankers and data scientists both work with social abstractions that make it easy to forget the human cost of bad decisions, and they both potentially face perverse incentives.
Data as infrastructure
We (nearly) all accept the government’s role in enforcing contracts and standardising weights and measures. These activities are seen as precursors to all the public and private activity that makes our society work. Imagine trying to buy petrol if every station used its own system of measurement. Systems such as company registration, agreeing to use litres for fuel etc. become part of the furniture. We need rules about how information is recorded and transmitted to make the system work; a kind of systemic infrastructure.
Yet it seems clear the government does not have enough interest in enforcing similar rules for data formats and data sharing in the digital realm. For me this is the most fascinating part of the debate. @willperrin pointed out the huge potential for giving to local causes that is untapped in the UK simply because there is no mechanism to discover local charitable causes. @edtparkes talked about the important data the private sector has and which it easily could share. To me this issue is exactly analogous to requiring suppliers to list ingredients on packaging (and it put me in mind of this amazing podcast, in part about Tesco and immigration patterns). @carljackmiller called for an ‘eBay’-style clearing house for collective social action.
How will the systems described above be built? Clearly the commercial sector is going to play a role; @edtparkes said “We’ll have no social impact if we don’t make a profit”, implying that anything that doesn’t make a profit won’t exist in the long term. On the other hand @trisml suggested the idea that for-profit companies could build all of this infrastructure was ‘magical thinking’ – noting that historically infrastructure has always been pioneered by the state. Finally, @duncan3ross, perhaps partially in answer to these questions, pointed out that when local authorities award contracts they should require that some part of the budget be allocated to open data concerns.
It’s hard to emphasise enough the idea – beautifully articulated by Keller Easterling here – that this systemic, digital infrastructure is as important to the public good as the network of roads or the hidden plumbing that we take to be the signifiers of civilisation.
We got off the train at the wrong stop, but, no matter, we’d get a taxi. Except… we’re not in London any more, and there are no taxis in Matera, especially when it’s nearly midnight.
So we walked the last mile to get to the old town, the Sassi, which runs along the side of a ravine. Looking out, I imagined the blinking light on top of a transmitter tower as a boat on the ocean, or a light aircraft over the Sahel – the town has the feel of being on the edge of a great unknown, heightened by the even greater unknown of where our apartment was. In the end we found it by chance, just as panic was beginning to take hold.
In the daylight you can see across the valley to rows of caves, some of which are tiny churches with peeling frescoes on their walls. Even in the daylight the old town is uncanny; there are so few people there. Before tourism made the Sassi an economic asset, the Italian government moved people out of the medieval caves and into the new town; they are only now moving back.
The unMonastery is at the cusp of the Sassi’s ravine, the right place for a conference called Living On The Edge; the walls of an ancient cellar rubbing off on your shoulders as you listen to Emmanuele talk about memories of abandoned Italian villages.
The introductory session nicely bookended the LOTE spectrum between Robin Chase and Vinay Gupta. Robin is a Zipcar cofounder and her call was for us to look for “excess capacity” in the system and investigate ways to unlock it. If Zipcar unlocks the value of our underused cars through sharing them, what else could we bring that model to?
Compare with Vinay’s call, of similar kind but utterly different extent: to modernise anarchist theory, cease relying on the government, ignore the market economy and form radical cooperatives that act in our own interests using the internet as a platform. Charities be damned; they are cut from the same cloth as the corporates. Zipcar-type models are encompassed via the theoretical construct of guard labour, but it doesn’t stop there; climate change, wars and everything in between are in Vinay’s purview.
To round off the picture, Fra. Bembo pointed out that everyone would have to clean the toilets, and Jeff (who was dressed variously as a chef and a bin man over the weekend) spoke out in support of a guaranteed minimum income, which I think is rapidly becoming a hobby horse for me.
The stewardship in the title refers, I think, to the idea of individuals or organisations that attempt to maximise the social benefit of a resource for non-financial reasons – people who ‘steward’ a resource. But the topic that kept recurring is actually the Zipcar-esque sharing economy. Similar idea – getting more value by sharing resources – but, crucially, driven by market forces rather than cooperative benevolence. This tension came out at the plenary at the end of the first day, with the question “Is Airbnb bad because it makes room sharing, which used to be a gesture of friendship, into a financial transaction?”
The answer that Robin gave, and which I’m inclined to agree with, is that before Airbnb existed people mostly didn’t let their spare rooms, because there was too much friction in the transaction. The new ‘sharing economy’ has probably displaced very little benevolent, non-monetised sharing activity, with either cars or spare rooms. Airbnb is straight out of VC-funded, bubble-valley, hypercapitalist California. I might not like the way it’s come about, but I can see the value of what it enables, despite the economic system in which it arose.
You might have guessed that by now I have an affinity for Robin’s way of thinking, and I think one of the most interesting things she posited was the idea of market failure in the sharing economy. Using economic theory as a lens is very helpful for me but, I speculate, a total turn-off for most other people at the conference. In any case, a massive case of market failure occurs around personal data, where people simply cannot understand the value of their personal data, and the way it aggregates to become incredibly powerful.
So that’s the cut-and-dry economics, but, much more than ever before, Patrick persuaded me of a rational position which is sceptical of economic theory. In his view, using money to value things causes people to have a different psychology. It makes it easier, for example, to abuse natural resources, because it makes them abstract. So when I explained that the market is a wonderful way of allocating resources, that it does a magical computation to prioritise what people most want, he agreed. But at the big scale, the environmental scale, the use of money causes this damaging psychological disconnect. When people explain to me why they don’t like economics, I often feel it’s because they don’t get how powerful it is; our conversation didn’t follow that pattern.
Which leads me to the unconference session that Helen and I led, Art Vs Science. As a spur to discussion, we divided the world into Tribe A – scientists and techies – and Tribe B – artists and the academic humanities. I proposed that the tribes need to recognise their cultural differences and reconcile them; Helen argued the tribes didn’t really exist like this, and that in any case the differences in culture were a good thing. More is available in the notes, but we did find that the tribe model rang at least a little true with the experience of those in the unMonastery. Kat was especially good in the debate, bringing lots of useful ideas. I was interested to learn that artists-in-residence at CERN only get three months; those running the program worry the artists might otherwise go native and start thinking like scientists.
One funny conclusion was that even though Helen identifies as Tribe B, and I identify as Tribe A, we both perceive ourselves to be in the minority – at the conference and in wider society. Obviously impossible.
Listening to the talks I noticed a recurring structure in presentations. Someone would advocate for some kind of action, then we’d lament that not many people agreed with us. This was followed by the idea that ‘people’ should be re-educated so they will want the same thing we do. E.g. people should use encryption on the web, but they don’t see the value, so we should re-educate them so they do. People shouldn’t go to supermarkets, but they do, so we should re-educate them not to.
In discussion with Theresia, I realised how pervasive this type of thinking is – to the point where I’ve certainly articulated it myself. It’s not a completely fair analysis, but it gave me something to think about.
Sam, who was videoing the gathering, told me he was having difficulty making the Sassi look real on screen. In the diffused light, with buff-coloured tufo buildings between grey ravine and grey sky, the town looks as flat as a film set. In fact Pasolini’s films and Mel Gibson’s Passion of The Christ both used the Sassi as a backdrop, as will a remake of Ben Hur.
At the same time the Sassi is more three-dimensional than any modern town. Buttresses fly, shoulder-width steps wind through, over, under. One man’s pavement is another man’s roof. The bedrock is a Swiss cheese of human activity. When we arrived at the apartment we were led through a tiny cupboard-style door under the stairs, down a staircase and into a cellar space the size of a 4-bed London flat. There’s a story that at another conference a local showed someone into his house, to show how it connected to the cave system. They went deep enough to end up under a church, looking into a pit of human bones.
God knows what else is under there. One of the town’s churches (wonderfully called Chiese Rupestri di San Nicola dei Greci e Madonna delle Virtù) intersects with an older church carved into the hill. No one knows how old the older church is – the first documentation comes from the 14th century – and maybe the question doesn’t even make sense; geology and architecture elide in Matera. In 1991 they discovered a Roman cistern, hiding, unknown, below the town square, under everyone’s feet.
The time dimension warped too, in the way that three days at Glastonbury feels like you’ve been away for a month. You could open a door in the depths of a cave and come out at the top of a campanile a week in the past and on the other side of the valley. Perhaps I was just confused because the clocks changed while we were there.
I didn’t have much more luck with trains when I came to leave. Somehow I seemed to keep missing the main train station. It took me too long to realise what should have been obvious: the train station is underground.
This week I’ve been putting into action a plan to get more people in to meet us, which there is budget for, and it seems like it will be helpful not only for our current projects, but also for future things – even after our PhDs. In particular I’m interested to chat with people who are interested in Network Analysis, which I think will be very interesting for several of us.
I’ve had a duel with John Fass via Google Docs. I had written a long piece about the culture of the humanities vs that of the sciences, in particular thinking about the way epistemological commitments underpin that culture. We’ve had a lot of fun, but it brought up an interesting aspect of how academic writing is read: for all the very close attention that has been paid to challenging every individual assertion that I make, the overall thrust of my argument has gotten lost. In some ways I think this ungenerous way of reading hampers debate and understanding; a well-structured and enlightening sketch can be composed of lemmas that are not themselves individually beyond contestation.
In terms of the PhD itself, an interesting friction has come up between normal modes of recording methodology and software development. In particular, it’s hard for me to describe in any deep detail how my software works for gathering data; but at the same time it’s the cornerstone of my methodology and needs to be presented as such. More than that, I make regular changes to the software to improve it, which means my research rests on slightly shifting foundations. And if I suggest the changes are not material: one, who is to decide that, and two, what if a small change has massive unintended consequences?
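One lightweight mitigation I could imagine, short of a full test suite, is recording a fingerprint of the gathering script alongside each dataset, so every result can at least be traced back to the exact version of the code that produced it. A sketch – the filename and record fields are invented for illustration:

```python
import hashlib
import json
import tempfile
from datetime import datetime, timezone

def script_fingerprint(path):
    """SHA-256 of the data-gathering script, so a dataset can record
    exactly which version of the code produced it."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def provenance_record(script_path):
    # Hypothetical record to save next to the gathered data.
    return {
        "script": script_path,
        "sha256": script_fingerprint(script_path),
        "gathered_at": datetime.now(timezone.utc).isoformat(),
    }

# Demo with a stand-in file (in practice this would be the real script).
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write("print('gather data')\n")
    demo_path = f.name

record = provenance_record(demo_path)
assert record["sha256"] == hashlib.sha256(b"print('gather data')\n").hexdigest()
print(json.dumps(record, indent=2))
```

It doesn’t answer whether a change is material, but it makes the shifting foundations visible rather than silent.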
I wish I had time to implement proper testing, but realistically I don’t.
Finally, there is the use of a third-party entity extraction service (Alchemy), and the question of whether using this black box is acceptable, and how sensitive the results are to its unknowable inner workings.
A particularly good exhibition of sonic art by Rafael Lozano-Hemmer, which I wanted to keep some notes about.
The first thing, the most important thing, is that everything worked. Where ultrasonic sensors were supposed to detect a person approaching, they did; when a button was supposed to record your voice, it did. I can say from experience this is no small achievement – so hats off to that. The second thing, which is also often lacking in exhibitions that involve something digital, is that everything was beautifully presented and looked as though the artists had been able to fulfil their intentions.
The artist has a concept of ‘speakers as pixels’, and in the piece Sphere Packing it really works. Each sphere is covered in lots (sometimes hundreds) of tiny transducers working as speakers. They are so quiet, and so close together, that from a distance each sphere emits white noise. But if you put your ear very close you can hear that each speaker is actually playing something discernible. For example, one sphere plays Mozart, but each speaker is playing a different bit of his work, so in total it’s just a random mess. I haven’t seen this played with before.
In this instance the way the effect works means that you have to literally put your ear against the speaker to get an individual signal. I wondered if it would be possible to have the crossover from noise to signal happen a bit further away, but the logarithmic nature of perception might make this rather hard to achieve. I also wonder what the effect would be if the speakers played different but more related sounds, for example just fractionally out of sync. Virgin territory as far as I know; it seems to explore a really interesting crossover between noise and music in a spatial way. This especially, but also perhaps the exhibition as a whole, makes me think of applications in calm technology.
In Voice Array visitors can record a short sample of their voice, which is played back on its own, then with all the previous contributions simultaneously. For me the sound aspect of this was less exciting than the way the LEDs worked, each independently twinkling in slightly different shades of white, giving an effect that set it apart from the clinical look these things normally have.
Finally, Pan-Anthem, which features magnetic ‘bricks’ representing each country in the world by playing back its national anthem (although I seem to remember hearing that Oman doesn’t have a national anthem because music is banned there?). The magnet in each speaker sticks to the metal sheet on the wall, so the bricks can be rearranged to visualise different sets of data. Weirdly reminiscent of the vitamin calendar, and obviously I like the idea of stand-alone devices that play one song – this being the eventual goal of the Rifff project.
Overall the sound was actually quite annoying and didn’t add that much, but I feel like there is a realisation that could really make the sonic dimension work. I guess the problem with countries’ national anthems is that they are mostly unknown and don’t evoke anything. If the system relied on a palette of sounds or songs I knew, or triggered a mood, perhaps it would work better. As with everything else, though, it was beautifully and functionally executed.
Mega admin week. Why? Because the alternative is coding. However, I feel like I’ve reached the end of the coding ‘hygiene’ phase and got to a point where I can move ahead without feeling frustrated by the structure of the existing code base.
One of the mental overheads I’ve been struggling with is improving the software while keeping it compatible with the deployed Hounslow version of the project. My approach is going to be to do one more week of development for Hounslow, before deploying for the final pass there. Then I’ll abandon the Hounslow project and start making incompatible changes so that I can move forward.
Further attempts to connect with academics have failed – I suspect because the people I’ve met with have full research slates. Hopefully further consultation with Tom will lead to a way forward on this…
Finally, further link ups between me and Dan Lockton look as though they might be on the cards, which would be very lovely.
While the Hyperlocal Observatory has been successfully deployed in Hounslow, it’s been quite an ad hoc process. That’s inevitable in the circumstances, but in the longer run, and for research to be based on it, it clearly needs a well-defined methodology.
Pete suggested that I write down the methodology in such a way that I could ask someone else to repeat the experiment. This proved to be an informative process. The whole thing is more complicated than I expected, but what particularly stood out are the judgement calls the person coding the data is asked to make. Some of them I believe to be relatively innocuous – such as tagging with categories. While this might be quite tricky, I don’t think differences of opinion at the margin are likely to fatally undermine the data that comes out.
What is more philosophically interesting is the judgement call “Is this relevant community data, or should I put it in the bin?”. Defining what qualifies as community data, as well as possibly influencing the outcome of the research, seems like a key part of the theoretical underpinning.
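If someone else is to repeat the experiment, one way to check how repeatable that keep/bin judgement actually is would be to have two coders label the same items and compute agreement beyond chance, e.g. Cohen’s kappa. A minimal sketch, with made-up labels purely for illustration:

```javascript
// Cohen's kappa for a binary "relevant community data?" judgement:
// observed agreement corrected for the agreement expected by chance.
function cohensKappa(coderA, coderB) {
  const n = coderA.length;
  let agree = 0;
  const countsA = {}, countsB = {};
  for (let i = 0; i < n; i++) {
    if (coderA[i] === coderB[i]) agree++;
    countsA[coderA[i]] = (countsA[coderA[i]] || 0) + 1;
    countsB[coderB[i]] = (countsB[coderB[i]] || 0) + 1;
  }
  const po = agree / n; // observed agreement
  let pe = 0; // chance agreement from each coder's label frequencies
  for (const label of new Set([...coderA, ...coderB])) {
    pe += ((countsA[label] || 0) / n) * ((countsB[label] || 0) / n);
  }
  return (po - pe) / (1 - pe);
}

// Two coders' keep/bin decisions over the same ten items
// (illustrative data, not from the actual study):
const a = ["keep","keep","bin","keep","bin","keep","keep","bin","keep","bin"];
const b = ["keep","bin","bin","keep","bin","keep","keep","keep","keep","bin"];
console.log(cohensKappa(a, b).toFixed(2));
```

A low kappa on the relevance call wouldn’t invalidate the research, but it would flag that the definition of community data needs tightening before the methodology can be handed to someone else.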
Elevator Pitch / First Para of Abstract
While discussing the fact that it’s hard for me to articulate the top line of what my project is, one suggestion was that I again reformulate a single-paragraph description. Here is the global version of that:
Communities have a ‘digital footprint’ of locally-oriented content: blogs, forums, Tweets, Facebook pages. This set of data, which barely existed a decade ago, is increasing in richness rapidly. My PhD seeks to address how this (intuitively valuable) dataset can be used to improve quality of life for a community.
In fact I formulated three versions, with two more specific versions fleshing out how the value is to be extracted from this data. Presumably, as my PhD narrows, one direction will start to emerge as the winner.
I continue to try to get myself to a place where I’m happy to look at and modify the code base – a place where I can just add something without feeling the need to re-evaluate old work – but it remains elusive. In part this is because new architectures suggest themselves constantly, and that is in tension with the fact that I need to deploy updates to an already deployed code base. Secondly, having rushed so much previously, there is lots of underlying technical debt: misnamed things, a poorly considered folder structure, and so on. This is exacerbated by the fact that Meteor itself still has evolving conventions on how to structure projects.
Met with DCLG, which proved to be an interesting opportunity to explain my project to people who are very familiar with community issues, but less so with the tech. One thing that came out of this is the increasing need for me to position myself with regard to projects like CASM; my headline idea is that they are doing Big Data, and I’m doing Little Data.
Also tried to get some Meteor work done, which involved me upgrading to Meteor 0.9, which was a nightmare. Such are the joys of using a platform that hasn’t reached version 1. Can’t wait for that, unless of course it introduces massive changes.