(Re)defining impact: Assessing biomedical research investments
Tuesday, November 18, 2014
11:00 AM - 11:55 AM
GH - Empire Ballroom I
Annette Bakker, President and Chief Scientific Officer, Children's Tumor Foundation
Richard Hodes, Director, National Institute on Aging, National Institutes of Health
Thomas Luby, Senior Director, New Ventures, Johnson & Johnson Innovation Center, Boston
Benjamin Seet, Executive Director, Biomedical Research Council, A*STAR
Beth Meagher, Principal, Deloitte Consulting LLP
"Assessment and evaluation itself is an active process which has consequences and implications." This pointed statement by Richard Hodes of the National Institute on Aging resonated throughout the Partnering for Cures panel "(Re)defining impact: Assessing biomedical research investments." Kicking off the second day of the meeting, the panelists, moderated by Beth Meagher of Deloitte Consulting LLP, agreed that it is difficult to assess investments in biomedical research in terms of impact and accountability. This issue is being tackled by both the public and private sectors, from Congress to the National Institutes of Health (NIH), and from industry to the philanthropic sector.
Before delving into metrics, Thomas Luby of the Johnson & Johnson Innovation Center made it clear that advances in biomedical research have driven improvements in human health and quality of life, and that life science investment yields a clear return for human health.
The conversation then turned to the purpose of evaluating research output, and Hodes discussed the key themes that emerged from a recent NIH-led initiative: accountability, optimization of planning efforts, and communication to the public. Further, he discussed three major objectives of biomedical research that should be taken into account when evaluating output: the accumulation of knowledge, contributions to human health, and societal impact. With each of these output streams, there are short- and long-term goals that contribute to the difficult challenge of evaluation.
Benjamin Seet of A*STAR described the national planning exercise in Singapore undertaken by his agency, which utilized a framework similar to the NIH's, assessing research impact by academic mission, societal and economic impact, patient outcomes, and health delivery systems. He reinforced Hodes' earlier point that each goal has both near- and long-term deliverables that need to be evaluated using different criteria, based on the aims of each sector.
Annette Bakker of the Children's Tumor Foundation (CTF) then gave an overview of how measurements of success and milestones may differ for her organization, since it is patient-funded and disease-focused, as opposed to the broader portfolios managed by both NIH and A*STAR. Likewise, Luby discussed those measures from an industry standpoint given the ability to access markets globally. "There is an unusual and, I think, exciting point of view that we can try and tackle problems that are not traditionally ones that you would think of from a life sciences company," he said, suggesting that measures of success are not static, but fluid depending on sector and goals.
Returning to the idea of risk alluded to earlier in the discussion, Hodes examined the risks associated not only with what we evaluate, or deem "successful" in research, but also with the behaviors those metrics may drive. For example, if success in research is assessed by number of publications, scientists may be steered away from potentially groundbreaking work that is unlikely to yield many publications in the short term. The idea of striking the right balance in a research portfolio began to emerge from the discussion, and the panelists cautioned against stifling high-risk research with the potential for high reward.
As the conversation shifted toward how each organization manages its research portfolio, everyone chimed in with tactics they use to encourage high-risk, investigator-initiated research. Bakker stated that CTF allocates about 10-15 percent of its budget to investigator-initiated research, and it solicits experts who are willing to think outside the box to review such grants. Other strategies included interacting with as many early-stage investigators as possible to tap into promising research prospects without hindering the often serendipitous process of high-risk research, and using portfolio risk management to balance investment between high-risk/high-reward projects and conventional projects with predictable outcomes.
If evaluation is the goal, then what are the useful tools for setting up the evaluation process? Data, data, and more data. Hodes reiterated the need to capture more data to inform funding decisions and set new priorities and goals. What also became clear is that sectors as different as pharma, government, and academia are approaching this difficult task in much the same way. While each sector may have different goals, they are collaborating to determine how best to evaluate the success and impact of investment in biomedical research.
As Seet suggested earlier, there is no easy, one-size-fits-all way to set metrics and assess value and success in biomedical research, but he put forth an approach that resonated with the whole group: setting different measures for different sectors based on their goals and desired outcomes. For example, evaluating universities on commercial output may not be the best metric of success given their goals and resources, just as judging a life sciences manufacturing company on its publication record may not be an appropriate measure of its success. Aligning measures with goals is a principle all of the panelists agreed was important.