Plenary - Big Science: Does More Mean a Better Future?
Big data fuels big progress, but data alone is not enough
In the life sciences, the Human Genome Project is considered a great success story in "big science," a complex project that laid the foundation for immeasurable scientific progress. Since then, more broad-scale projects have followed: the BRAIN Initiative, the Precision Medicine Initiative, and the Cancer Moonshot, to name a few of the better known. But as NIH Director Francis Collins asked a panel of experts at Partnering For Cures' final plenary panel discussion, are these large-scale efforts the right way to move forward? "There is nervousness out there in the biomedical community about whether we're doing too many of these projects," he said. "Are they taking away dollars that otherwise might have gone to smaller, more focused projects?"
First to answer was Atul Butte, director of the Institute of Computational Health Sciences and professor of pediatrics at the University of California, San Francisco, who noted that while the value of big science projects lies in the data they generate, big data is only as useful as our ability to analyze it. "As a country, we're investing close to zero analyzing that data."
However, medical institutions are digging into data themselves. There are 15 million patients in the UC health care system, and researchers have access to those patients' electronic health records, a treasure trove of useful information with a massive sample size. Using this resource, Butte and his team of researchers have made extraordinary progress mapping the progression of diseases for UC patients in an effort to better understand relationships among diseases and health.
The silo challenge
Federal agencies also have access to reams of data – 200 million medical records, according to FDA Commissioner Robert Califf – but analysis is a challenge: the government lacks the ability to link data sets, limiting the data's utility. Moreover, Califf expressed concern about the cost of clinical trials, which limits the accumulation of additional data and impedes the progress of big science projects. "The number one issue affecting the cost of trials is giving clinicians adequate time to talk to patients about prospective studies," he said. "What we're seeing in our best academic centers is that clinicians don't have time to talk to patients about studies."
Another limiting factor in big science projects is the way our research culture limits data sharing, according to Greg Simon, who is executive director of the Cancer Moonshot Taskforce and former president of FasterCures. He compared the current data market to insider trading, when it should be an open marketplace. The challenge, he said, is "How do we build an interconnected and transparent marketplace where people can trade data freely?" Another barrier, Simon noted, is poor interconnectivity and interoperability between institutions with large data sets, which creates a siloed infrastructure that severely limits detection of patterns in the data. It's a problem the Cancer Moonshot team hopes to solve by changing attitudes about data sharing, making Moonshot data publicly available and ensuring a level of standardization within the data. "We have the networks, but we don't have the content to populate these networks."
From insights to action
A fervent champion of big data and the need for big science projects, Sonia Vallabh entered the biomedical field after learning she carried a genetic mutation that causes prion disease. Once a lawyer, but now co-founder of the Prion Alliance and a researcher at the Broad Institute, Vallabh says that big data not only helped identify her specific mutation, but also helped draw meaningful insights from it. "Only by looking at tens of thousands of exomes from data sets at Harvard and MIT were we able to identify the people walking around with this inactive gene [that causes prion disease]," she said. In other words, broad-scale studies of genetic data have identified the genetic mechanisms behind devastating diagnoses. "Now the question is: can we turn down this protein?"
High-level research, patient-level benefit
But perhaps the foremost big science advocate on stage was Kafui Dzirasa, principal investigator at Duke University's Laboratory for Psychiatric Neuroengineering and an assistant professor at the Duke Institute for Brain Sciences. Dzirasa is part of the team working on the BRAIN Initiative, designed to revolutionize science's understanding of the brain, including how individual cells and complex neural circuits interact in both time and space. As Dzirasa said, "the idea of big data is small compared to how we will eventually understand the brain." The project is so large, in fact, that Dzirasa's team is spending five years simply developing the tools to analyze the brain and all its 150 billion cells. But as a clinician, Dzirasa hopes that insights from the BRAIN Initiative will trickle down to patients by facilitating progress on a host of maladies, from depression to diabetes. "It is the true hope and promise for the patients I take care of," he said.
Although big science projects are necessarily time consuming, some panelists see ways to accelerate progress toward cures. Califf said diseases with active and successful advocacy groups tend to see the most success in finding funding and patients for clinical trials, and he argued that public health agencies "have a duty to help groups across the board." Meanwhile, Butte noted that solving the data sharing problem is another way to advance progress. "DNA sequences are shared like crazy, but there are a lot of things we don't share, such as clinical trials data," he said. "It is a must to share the failed trials data in particular."
Given the last word, Vallabh reiterated the necessity of using big data for more than pinpointing the roots of illness. "After we've gone on the big fact-finding mission and waded through all the data, after we've found what causes the diseases, how are we using those insights to improve our next steps?"