The adoption of new transparent reporting standards may have contributed to a significant reduction in the percentage of studies reporting positive research findings among large-budget clinical trials funded by the National Heart, Lung and Blood Institute (NHLBI), a study published August 5 in the journal PLOS ONE has found.
In all, 57 percent of large-budget clinical trials evaluating drugs or dietary supplements for the treatment or prevention of cardiovascular disease published from 1970 to 1999 reported positive outcomes, while only 8 percent of those published between 2000 and 2012 did, researchers from Oregon State University and the federal Agency for Healthcare Research and Quality found. The new reporting standards were phased in around 2000.
Under the new regulations, researchers conducting drug or dietary supplement trials using human subjects are required to identify projected outcomes and register their trials on the website ClinicalTrials.gov before they begin to collect data, said the study’s co-author, Veronica Irvin, an assistant professor in Oregon State University’s College of Public Health and Human Sciences.
ClinicalTrials.gov is a database of clinical trials involving human subjects from around the world. When entering a trial into the database, researchers are required to state specifically the outcome they will focus on.
In the past, a researcher might have published an aspect of a study that was successful, even if the study overall did not produce the expected results. But the new requirements mean investigators are less likely to change their analysis plan to consider another outcome that, by chance, may have shown a positive result following drug treatment, she said.
“Some people focus only on positive results,” said Irvin, whose research interests include publication bias and transparency in reporting research outcomes. “Null outcomes, or results other than what was expected, might be disappointing, but they may inform doctors and patients about which treatments are not likely to be helpful. Publication of null results also prevents the unnecessary replication of the study by other investigators.”
In many cases, trials that do not show a significant benefit of a drug can reduce patients’ use of ineffective or even harmful treatments, Irvin said. For example, one of the trials included in the analysis, the Women’s Health Initiative, demonstrated that postmenopausal estrogen replacement therapy was not helpful for most women.
ClinicalTrials.gov is accessible to the public, which improves transparency for clinicians, patients and others interested in learning more about a drug’s development or efficacy, she said.
Irvin began working on the project with the study’s lead author, Robert M. Kaplan of the Agency for Healthcare Research and Quality, while the two worked together in the National Institutes of Health’s Office of Behavioral and Social Sciences Research.
They reviewed all large-budget clinical trials evaluating drugs or dietary supplements for the treatment or prevention of cardiovascular disease that had received funding from the National Heart, Lung and Blood Institute between 1970 and 2012.
They chose the large-budget, NHLBI-funded trials in part because outcomes from these trials were more likely to be published, even if they did not produce the expected result. In all, 55 studies were included in the research, including 30 published prior to the reporting changes in 2000 and 25 published after the changes. Of the 25 studies published after 2000, only two showed positive outcomes, while 17 of the 30 studies published before 2000 showed positive results.
There may be other factors contributing to the decline in positive outcomes, but Kaplan and Irvin were unable to identify other compelling alternative explanations. One suggestion, for example, was that older trials were more likely to compare new treatments to placebos, while newer trials were more likely to compare new treatments to established treatments.
But when Kaplan and Irvin examined the data, they found that 60 percent of trials published before 2000 used placebo comparators, and nearly the same proportion, 64 percent, of trials published after 2000 used placebos, making that an unlikely explanation.
Although many of the studies found that treatments were not effective, the authors praised the National Heart, Lung and Blood Institute for its leadership in enforcing transparent reporting requirements. Irvin said the institute was an important leader in requiring higher standards for its clinical trials.
While the researchers focused on clinical trials related to cardiovascular health, the new reporting requirements affect all drug trials using human subjects. It would be reasonable to expect similar changes in results across other disease types, she said.
“We don’t know if this decrease in positive outcomes also affects drug trials for prevention and treatment of cancer, diabetes or other diseases, but it would not be surprising because they have the same reporting requirements,” she said.
Irvin and Kaplan also are examining how results of clinical trials involving behavioral interventions may have changed under the new reporting requirements. At this time, researchers conducting studies of behavioral interventions are encouraged to register their trials, but the National Institutes of Health is moving toward requiring registration, Irvin said.