Foundations and Peer Review: Time to Reconsider the Process?
By Betsy Myers, Ph.D., Program Director for Medical Research at the Doris Duke Charitable Foundation
Funders of biomedical research, such as our foundation, have long relied on scientific peer review as an essential tool for identifying the best proposals. It is challenging to conceive of a more effective way to evaluate the scientific quality and impact potential of a research proposal than to solicit the perspectives of a group of experts in the same fields. They are best situated to appraise the importance of the questions being asked, the applicability of the science and the capacity of the work to change lives. Yet, like any time-honored practice, the peer review process should be reevaluated from time to time for its efficacy and usefulness, particularly when the practice is so reliant on—and therefore vulnerable to—the subjective perspectives of individuals.
To delve into this important terrain, the American Institute of Biological Sciences recently convened representatives from universities, research societies, journals and funding agencies (including the Doris Duke Charitable Foundation and the National Institutes of Health) in Washington, D.C., to discuss scientific peer review.
During the discussion, participants identified several troublesome factors undermining the integrity of the review process. One of the most substantial concerns was implicit bias, particularly with respect to applicants' experience, gender, race and institutional reputation. For example, a study from the National Institutes of Health found that, while about 29 percent of white applicants receive research grants, only about 16 percent of black applicants do. This difference remains after adjusting for educational background, country of origin, training, previous research awards and publication record.
So, what can be done to reduce the effects of implicit bias on the review process? Some suggested anonymizing applicants to address this issue. This may seem like an obvious course, but the group acknowledged that doing so can create significant new burdens and management challenges for an already taxed system. Further, removing identifying information, such as institutions and names, may not be practical for processes involving career development awards.
Meeting participants concurred that more work is needed to better understand the effects of bias and how to address it. Areas deemed worthy of exploration included how reviewers' demographic backgrounds or the type of research field affect the quality of a review.
Reviewers’ expertise and incentives may also be instrumental to the process. Anecdotally, reviewers of grant proposals have relayed that they are motivated to participate in peer review to learn about an agency or funding organization’s latest programs, goals and policies; to see what makes a good proposal; and to help the field. Some funders also compensate their reviewers, making it easier to recruit experienced experts. Currently, foundations and other funding organizations generally do not struggle to find well-qualified reviewers in the way that journals sometimes do.
With that noted, some worry that grant review could suffer from the same vulnerabilities that journals are currently seeing in their own peer review processes. Perhaps most markedly, as scientists increasingly turn down review invitations because of constraints on their time, journal editors have expressed concern that the reviewers who do accept may not be setting aside enough time to provide a high-quality analysis of a paper. If the number of proposals that funders receive continues to grow with no commensurate growth in funding, the system could experience some of the same problems facing journals.
Finally, the conversation turned to outcomes of funded research and how well peer review identifies future successful work. The group discussed follow-on funding for the research, dissemination of results, citation of results by other researchers, and procurement of patent rights for inventions as potential measures of success. For example, impact measures of the Doris Duke Clinical Scientist Development Award (CSDA) were shared with the group. These awards are selected through a rigorous two-stage peer review process with panels of experts at each stage. CSDA grantees selected through this process are much more successful at obtaining follow-on funding (larger research project grants from the National Institutes of Health) than matched unfunded CSDA applicants, indicating that the peer review process used to select CSDA recipients identified scientists who achieved greater research support as their careers progressed.
Despite some shortcomings, most of the meeting participants agreed that peer review has served the community well. One participant observed that peer review is like democracy as described in Winston Churchill’s famous quote: “No one pretends that democracy is perfect or all-wise. Indeed it has been said that democracy is the worst form of Government except for all those other forms that have been tried from time to time.”
Doing away with the scientific peer review process would be much like throwing out the baby with the bath water. The convening and the report have instead begun laying out a blueprint for how funders can challenge, reconsider, and improve aspects of the peer review process—leading to an evolution that could influence the entire future of the medical research field.
Please see the full report, Peer Review: A System Under Stress, for more information.
Ginther, Donna K., Walter T. Schaffer, Joshua Schnell, Beth Masimore, Faye Liu, Laurel L. Haak, and Raynard Kington. "Race, ethnicity, and NIH research awards." Science 333.6045 (2011): 1015-1019.
Escobar-Alvarez, S. N., and E. R. Myers. "The Doris Duke clinical scientist development award: implications for early-career physician scientists." Academic Medicine 88 (2013): 1740.