BioPharma Beat: A Data-Driven History of Bioinformatics
by Hallam Stevens
University of Chicago Press, Chicago, 2013. 302 pp. $90, £63. ISBN 9780226080178. Paper, $30, £21.
Had I shared at a recent cocktail party that I had read a great book on the history of bioinformatics, I would likely have been met with blank or puzzled stares. Let’s face it: to most people this does not sound like an interesting or fun subject. Hallam Stevens, however, has written a mesmerizing account that highlights not only the critical importance of this field, but also how it has evolved over time to play a major role in the understanding of life and disease, in the development of new treatments including drugs, and in the ways fields outside biology are benefiting from advances in this discipline.
DD: What prompted you to write this book?
HS: We live in a highly technical society. Our world is described as a “knowledge economy” where much of that knowledge is connected to science and technology in some way. As a society, we need to find ways to talk about and understand science and technology that don’t take their findings for granted. Biomedical knowledge is particularly important in this respect — we hear lots about new discoveries, new drugs, and new treatments. But where does this new knowledge come from? Should we trust it blindly? What are the implications?
This book is about learning more about where our understanding of biology comes from. Computers and computational techniques are just one part of this knowledge evolution. But they are an increasingly important part, so I thought it was important to know more about why biology has become so dependent on them and what the consequences of this dependency might be for the way we understand life.
DD: Some will say that history, even if relatively recent, is too much in the past and its lessons too rudimentary to be of help. Why do you believe that a look back is useful for looking ahead?
HS: My book is actually part history and part anthropology. The anthropology part meant spending a lot of time in labs working with scientists, observing what they were doing, and trying to make sense of it. In a sense, this is “history of the present” or history taken right up to the present (or at least until 2008 or so when I finished doing most of my research). So, the book — perhaps unlike other history books — is tracking trends that are still ongoing, perhaps even getting stronger. Genomics, as a research paradigm, is going to be with us for a long while and many of its ideas are even spreading to other fields.
DD: Of the various technologies and developments in the last 50 or so years, are there any you were most surprised to learn that they failed along the way?
HS: One thing that’s striking in this history is the lack of success of the computer before about 1980. In the 1960s and 70s many people tried to get the computer involved in doing biology, and the NIH spent significant sums of money on this. But, with some exceptions, most of these attempts didn’t get very far. They failed in the sense that they didn’t end up having an immediate or a lasting influence. Even in the 1980s, using computers to do biology was a bit of a strange and counter-intuitive thing to do. I tell a story in the book about James Ostell, who was a PhD student at Harvard in the 1980s and developed some of the first programs for analyzing DNA sequences. But Harvard’s biology department refused to give him a PhD — they just didn’t think that sort of thing should count as biological work. Obviously, things have changed a great deal.
DD: And how about any that turned out to surprise us in positive ways?
HS: I think very few people anticipated the important role that databases would play in biology (or in science in general). In this case, the NIH was reluctant to fund large-scale biological databases in the early 1980s and, in fact, the Europeans set theirs up first (EMBL Bank was established before GenBank). Many believed, as with the computer more generally, that this tool was not crucial to basic science — that databases were just libraries or archives. They have now become much more important and play an extremely active role in generating biological knowledge. They have also enabled the sharing of biological data on a large scale, which, over the last decade, has played a major role in the movement towards open science.
DD: Do you believe this field is evolving at the right pace given the technologies available? What are a couple of the big roadblocks we need to overcome?
HS: I’m not sure what the right pace is. Computers, data, and bioinformatics have raised some huge challenges. One of the most important is the set of questions around how science is done — is it possible to get useful answers by just gathering and processing more and more data? This set of approaches is called “data-driven science” or “hypothesis-free science.” But many biologists (and other scientists) remain unconvinced that this is a valid or useful way of proceeding. I think we are even seeing the residue of this in the FDA’s recent shutdown of the consumer genetics company 23andMe’s health-related testing service — essentially, the FDA said that the data 23andMe was providing to its clients (which is based on statistical-computational, data-heavy methods) was not validated and was therefore potentially leading consumers astray [see http://bit.ly/1mrKWJd on Healthworks Collective]. Agreeing on methods and standards of evidence is going to be critical for moving forward.
DD: What are you most optimistic and excited about in this field?
HS: I think what’s most exciting is how open some of the fundamental questions are. Most biologists certainly don’t think they have all the answers to how life works or how genes and genomes work. They’re not even sure what genes are any more. Basic concepts are in flux and computers and data are playing a critical role in the deconstruction and reconstruction of key ideas.
DD: Can you share a bit of your background and what are you doing now?
HS: I am a historian of science and technology. I studied quite a bit of physics at one point, but gradually got more and more interested in historical questions. I now teach history of the life sciences and history of information technology at Nanyang Technological University in Singapore. One of the things I’m pursuing is the “data” element of this work — biology was one of the first fields to have Big Data, as gigabytes of DNA code emerged from the Human Genome Project. But now Big Data is being applied to a range of other fields, and I’m interested in seeing what we can learn from biology about what some of the consequences of Big Data might be. Singapore is a great place to be for pursuing questions like this — both geographically and culturally, it sits between the West and the emerging science and technology of China.
Hallam, thank you for writing this book full of insights and promise and for sharing your thoughts in this interview. Best wishes in your continuing endeavors in this field.