Is it time to assume that health research is fraudulent until proven otherwise? – Watts Up With That?

From The BMJ Opinion

An interesting article in the BMJ opinion blog. The ongoing credibility and reproducibility crisis in institutionalized research continues to unfold.

Health research is based on trust. Healthcare professionals and journal editors who read the results of a clinical trial assume that the trial took place and that the results were reported honestly. But about 20% of the time, said Ben Mol, professor of obstetrics and gynecology at Monash Health, they would be wrong. Having been concerned about research fraud for 40 years, I wasn’t as surprised by this figure as many would be, but it made me think the time may have come to stop assuming that research has actually happened and is honestly reported, and to assume instead that research is fraudulent until there is evidence that it took place and was honestly reported. The Cochrane Collaboration, which provides “trustworthy information”, is now taking a step in this direction.

As he described in a webinar last week, Ian Roberts, professor of epidemiology at the London School of Hygiene & Tropical Medicine, began to have doubts about the honest reporting of trials after a colleague asked if he knew that his systematic review, which showed that mannitol halved death from head injury, was based on trials that had never been done. He didn’t, but he went on to investigate the trials and confirmed that they had never taken place. They all had a lead author who claimed to come from an institution that didn’t exist and who killed himself a few years later. The trials were all published in prestigious neurosurgery journals and listed multiple co-authors. None of the co-authors had contributed patients to the trials, and some did not know they were co-authors until the trials were published. When Roberts contacted one of the journals, the editor replied, “I wouldn’t trust the data.” Why, Roberts wondered, did he publish the trial? None of the trials have been retracted.

Later, Roberts, who headed one of the Cochrane groups, conducted a systematic review of colloids versus crystalloids, only to find again that many of the trials included in the review were not trustworthy. He is now skeptical of all systematic reviews, especially those that are mostly reviews of multiple small trials. He contrasted the original idea of systematic reviews as a search for diamonds, knowledge that would become available if it were brought together in systematic reviews, with how he now thinks of systematic reviewing: searching through rubbish. He suggested that small, single-center trials should be discarded and not combined in systematic reviews.

Mol, like Roberts, conducted systematic reviews only to find that most of the trials included were either zombie trials with fatal flaws or were not trustworthy. What, he asked, is the extent of the problem? Although retractions are increasing, only about 0.04% of biomedical studies have been retracted, which might suggest the problem is small. But anesthesiologist John Carlisle analyzed 526 trials submitted to the journal Anaesthesia and found that 73 (14%) had false data and 43 (8%) he categorized as zombies. When he was able to examine individual patient data in 153 trials, 67 (44%) had untrustworthy data and 40 (26%) were zombie trials. Many of the trials came from the same countries (Egypt, China, India, Iran, Japan, South Korea, and Turkey), and when John Ioannidis, a professor at Stanford University, examined individual patient data from trials submitted to Anaesthesia from those countries over the course of a year, he found that many were false: 100% (7/7) in Egypt; 75% (3/4) in Iran; 54% (7/13) in India; 46% (22/48) in China; 40% (2/5) in Turkey; 25% (5/20) in South Korea; and 18% (2/11) in Japan. Most of the false trials were zombies. Ioannidis concluded that hundreds of thousands of zombie trials have been published from these countries alone.

Others have found similar results, and Mol’s best guess is that about 20% of trials are false. Very few of these papers are retracted.

This is probably one of the toughest points.

Research fraud is often viewed as a “bad apple” problem, but Barbara K. Redman, speaking at the webinar, insists that it is not a problem of bad apples but of bad barrels, if not of rotten forests or orchards. In her book Research Misconduct Policy in Biomedicine: Beyond the Bad-Apple Approach, she argues that research misconduct is a systemic problem: the system provides incentives to publish fraudulent research and lacks adequate regulatory processes.

Read the full opinion article here.

Here is a full recording of the webinar that discussed these topics.

Fraudulent trials in systematic reviews – a major public health problem

Research seminar hosted by Professor Ian Roberts, Co-Director of the Clinical Trials Unit at the London School of Hygiene & Tropical Medicine

Chair: Emma Sydenham, Coordinating Editor, Cochrane Injuries Group, LSHTM

Agenda

Welcome: Chair
Ian Roberts: Fraudulent Trials in Systematic Reviews (15 minutes)
Ian Roberts is Professor of Epidemiology and Co-Director of the Clinical Trials Unit at the London School of Hygiene & Tropical Medicine. He is an editor of the Cochrane Injuries Group.
Ben Mol: The Academic Community’s Response (15 minutes)
Ben Mol is Professor of Obstetrics and Gynecology at Monash University, Melbourne, Australia, and holds the Chair of Obstetrics and Gynecology at the Aberdeen Centre for Women’s Health Research, Scotland, UK.
Barbara Redman: Rotten Apples or Rotten Barrels – Structural Problems with Fraud (15 minutes).
Barbara Redman is an internationally recognized expert on research fraud. She is an associate in the Division of Medical Ethics at New York University Langone Medical Center and an associate professor at the NYU School of Nursing. She is the author of Research Misconduct Policy in Biomedicine: Beyond the Bad-Apple Approach.
Discussion (30 minutes)
Closing remarks: Chair

HT / Joe Cool
