A couple of notes from the Long Now Foundation health panel, both about how we aggregate and distribute knowledge.

Alison O’Mara-Eves (Senior Researcher in the Institute of Education at University College London) told us about the increasing difficulty of producing systematic reviews. Systematic reviews attempt to synthesise all the research on a particular topic into one viewpoint: how much can you drink while pregnant, what interventions improve diabetes outcomes, and so on. These reviews, such as the venerable Cochrane reviews, are struggling to sift through the increasing volume of research to decide what actionable advice to give doctors and the public. The problem is getting worse as the rate of medical research increases (although more research is obviously a good thing in itself). We were told the research repository Web of Science indexes over 1 billion items of research. (I’m inclined to question what an ‘item’ is, since there must be far fewer than 100 million scientists in the world, and most of them must have contributed fewer than 10 items each; still, I take the point that there’s a lot of research.)

Alison sounded distinctly hesitant about using automation (such as machine learning) to assist in selecting papers for inclusion in a systematic review, as a way of making one step of the process less burdensome. The problem is transparency: a systematic review ought to explain exactly what criteria it used to include papers, so that those criteria can be interrogated by the public. That is hard to do if an algorithm has played a part in the process. This problem is clearly going to have to be solved; research is of no use if we can’t synthesise it into an actionable form. And it seems tractable – we already have IBM Watson delivering medical diagnoses, apparently better than a doctor. In any case, I’m sure current systematic reviews of medical papers are carried out using various databases’ search functions – who knows how those work or what malarkey the search algorithms might be up to in the background?

Mark Bale (Deputy Director in the Health Science and Bioethics Division at the Department of Health) was fascinating on the ethics of giving genetic data to the NHS through its programme, the 100,000 Genomes Project. He described a case in which a whole family suffering from kidney complaints were treated because one member had their genome sequenced, identifying a faulty genetic pathway. Good for that family, but potentially good for the NHS too – Mark described the possibility that quickly identifying the root cause of a chronic, hard-to-diagnose ailment through genetic sequencing might save money as well.

But – what of the ethics? What happens if your genome is on the database and subsequent research indicates that you may be vulnerable to a particular disease – do you want to know? Can I turn up at the doctor’s with my 23andMe results? Can I take my data from the NHS and send it to 23andMe to get their analysis? What happens if the NHS decides a particular treatment is unethical and I go abroad to more permissive regulatory climes? What happens if I have a very rare disease and refuse to be sequenced; is that fair on the other sufferers? What happens if I refuse to have my rare disease sequenced, but then decide I’d like to benefit from treatments developed through other people’s contributions? I’ll stop now…

To me, part of the answer is that patients are going to have to acquire – at least to some extent – a technical understanding of the underlying process, so they can make informed decisions. If that isn’t possible, perhaps smaller representative groups of patients who receive higher levels of training can feed into decisions. One answer that is, from my perspective, very ethically questionable is to take an extremely precautionary approach. That would be a terrible example of status quo bias; many lives would be needlessly lost if we decided to be overly cautious. There’s no ‘play it safe’ option.

It’s interesting that with genomics the ethical issues are so immediate and visceral that they get properly considered, and have rightly become the key policy concern with this new technology. If only that happened for other new technologies…

The final question was whether humanity would still exist in 1000 years – much more in the spirit of the Long Now Foundation. Everyone agreed it would, at least from a medical perspective, so don’t worry.
