
Feature

Can you believe all that you read in the medical journals?


Abstract

By and large, physicians and surgeons trust what they read, even if they take authors’ conclusions with a pinch of salt. There is a world of difference between being cautious about the implications of what you read and being defrauded by dishonest researchers. Fraud and scientific research are incompatible bedfellows and yet are an unhappy part of our research existence. All subspecialties are to blame and orthopaedics is no exception.

Trauma surgeons are familiar with the use of mannitol in treating head injury. The doses they use are based on a combination of experience, knowledge gleaned from the scientific literature and information from guidelines. Those preferring a high-dose regime may have been influenced by a Cochrane review published in 2005.1 This commended the use of a high dose, influenced greatly by three studies from the same lead author published in high-impact neurosurgical journals.2-4 In 2007, serious doubts were cast on the veracity of these studies. It was alleged that the data had been fabricated, but the lead author is dead, the co-authors claim never to have had first-hand knowledge of the patients involved, and there is no employing institution extant to mount an investigation.5 High doses of mannitol may be of no additional value compared with a standard dose, or may actually be harmful.6 Thus, dishonesty by a researcher, non-adherence by co-authors to recognised authorship criteria, and failure of reviewers and editors to detect fraud can result in the publication of data that are likely fraudulent. One result was that the guidelines developed by Wakai, Roberts and Schierhout1 for the Cochrane Collaboration unwittingly laundered dirty data into clean clinical recommendations. Head trauma is serious business. If the guidelines were wrong because the supporting data were fraudulent, how many patients were harmed by inappropriate treatment?
In 2008, an investigation by Baystate Medical Center (Springfield, Massachusetts, USA) of its former head of anaesthesia, Scott Reuben, found that he had invented all or part of the data in a large series of studies dating back to 1996, with the Journal of Bone and Joint Surgery [Am]7 and Anesthesia and Analgesia8 subsequently posting retractions. In all, 21 papers were retracted for fabricated data. Reuben's research purportedly demonstrated excellent post-operative pain relief from the combination of a COX-2 inhibitor with pregabalin or gabapentin.9 His work in multimodal analgesia was welcomed by orthopaedic surgeons.10 The findings are now in doubt, pending replication of his research.

The current record for retractions is probably held by Joachim Boldt, ex-professor of anaesthesiology in Rheinland-Pfalz (Germany), 89 of whose publications were retracted in 2011 after his institution found that none of the studies had IRB (ethics committee) approval.9 As investigations continued into the veracity of the data, the Association of Surgeons, the Intensive Care Society and numerous other professional bodies felt constrained to withdraw and amend their published guidelines on intravenous fluid therapy.

Such blatant fraud may not be common and has a habit of being detected, eventually, but any such ‘self-correction’ does not mean that patients have been unharmed. A survey of all 788 English-language papers involving human research retracted from PubMed between 2000 and 2010 showed that 28 000 subjects had been recruited to them, that the papers had been cited over 5000 times, and that the retracted studies had provoked, or been cited by, 851 secondary studies which had enrolled 400 000 patients.11

Along with fabrication and falsification, journal editors generally include plagiarism as serious misconduct. There are certainly a lot of plagiarists about – even when one excludes those who claim inadvertent cutting and pasting, perhaps because of unfamiliarity with the language of the journal concerned. In the last two years alone, papers by the Perugian orthopaedic surgeon Bernardino Saccomanni on vertebroplasty, elbow arthroplasty and osteoarthritis of the knee have been retracted from Osteoporosis International, Clinical Rheumatology and Musculoskeletal Surgery for blatant plagiarism. Intrepid followers of the blog Retraction Watch12 have reported many more examples of unretracted plagiarised papers, using the simple expedient of typing a sentence or two into Google and watching the original provenance appear.

Much of the debate about research and publication misconduct has been initiated by journal editors. One result has been the establishment of COPE, the Committee on Publication Ethics, representing some 7000 editors, which publishes guidance on how to deal with suspected misconduct.13 Some experienced and senior editors express major concerns about probity. Marcia Angell, a former editor of the New England Journal of Medicine, stated in a book review about pharmaceutical company marketing: “It is simply no longer possible to believe much of the clinical research that is published, or to rely on the judgment of trusted physicians or authoritative medical guidelines. I take no pleasure in this conclusion, which I reached slowly and reluctantly over my two decades as an editor of NEJM.”14

Richard Smith, former editor of the British Medical Journal, included in the summary of his book, The Trouble with Medical Journals, the following comment: “It’s increasingly apparent that many of the studies journals contain are fraudulent, and yet the scientific community has not responded adequately to the problem of fraud.”15

I am not surprised that editors agonise over misconduct; my own experience as an editor taught me that handling such allegations could take up as much time as processing many hundreds of blameless submissions. Unfortunately, communications with the authors and their employing institutions often lead to a frustrating impasse, as demonstrated by the experience when assessing the veracity of studies supporting high-dose mannitol for head trauma.

Many scientists and clinicians cling to a ‘bad apple’ concept of research misconduct. However, even if we accept that megafraudsters are few and far between, the question remains: just how common is research misconduct in general? Fanelli16 conducted a systematic review of surveys asking scientists about their own and their colleagues’ behaviour. A total of 1.97% admitted falsifying, fabricating or otherwise improperly manipulating data, while 14% reported being personally aware of such behaviour by others. When asked about ‘questionable research practices’, which include selective reporting, incomplete data analysis, modification of images, failure to obtain ethical approval, not declaring a significant financial conflict of interest, using ghost authors, and the like, 33.7% admitted such actions and ascribed them to up to 72% of colleagues.16

Extrapolating from their trawl of Medline citations, Errami et al17 suggest that as many as 117 500 of them are duplicate publications. This figure needs to be confirmed by further analysis, but whatever the frequency, duplication is not just an underhand but harmless way of enhancing a curriculum vitae: it fundamentally corrupts the scientific record. For example, in a systematic review of studies on the efficacy of ondansetron, the effect of including duplicated studies was to reduce the number needed to treat (NNT) from 9.5 to 4.9. Curiously, the duplicated publications were more favourable to the drug than those published only once.18

When inconvenient data are excluded, vital clinical information can be hidden. For example, the grossly excessive mortality of elderly demented patients treated with rofecoxib was discovered only after the original data were uncovered following a US Freedom of Information Act request. Comparing the final draft of the study with the published version showed that a number of apparent guest authors had been added to the original list of authors, all of whom were employees of the pharmaceutical manufacturer involved. Additionally, review articles had been prepared by these ‘ghosts’, presumably as a basis for academics to submit them to journals under their own names. In responding to these matters, the editors of the Journal of the American Medical Association stated: “Public trust for clinical research is in great jeopardy.”19

Medical editors have proposed various strategies to clean up the literature. Trial registration, now a requirement of nearly all journals, has gone some way towards preventing the burial of bad news. Some editors have devised risk stratification checklists to select submitted papers for extra scrutiny, for example where it seems unlikely that the authors could have secured the funding that a trial of that nature would require. While still relying on authors’ honesty, requiring them to state their precise contribution to a paper is one method of reducing ghost and guest authorship. The hope is that by introducing transparency, undeclared conflicts of interest might be diminished. Others have called for raw data to be made available to readers or, at least, securely stored in a retrieval system. Many of the better-resourced journals now routinely use plagiarism detection and image manipulation software.

Relatively straightforward statistical tests can be used to investigate suspicious data. Thus, on finding unusual clustering of coefficients of variation in the data presented in 84 papers by a group of biochemists, Hudes, McCann and Ames20 commented that they could find “no statistical or biological explanation”. A similar technique was used to expose fabricated data in 168 randomised trials by a disgraced professor of anaesthesiology.21
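
One simple screen of this family, offered here purely as an illustration (it is not the specific coefficient-of-variation method of Hudes, McCann and Ames, nor Carlisle's analysis of randomised trials), is terminal-digit analysis: in genuine measurement data the final digits are usually close to uniformly distributed, so marked digit preference can flag numbers that may have been invented. A minimal sketch:

```python
from collections import Counter

def terminal_digit_chi2(values):
    """Chi-squared statistic for uniformity of terminal digits.

    Genuine measurements usually have roughly uniform final digits;
    a large statistic (df = 9) suggests digit preference worth a
    closer look. A screen of this kind raises suspicion only - it
    proves nothing by itself.
    """
    digits = [str(v)[-1] for v in values if str(v)[-1].isdigit()]
    n = len(digits)
    expected = n / 10  # uniform expectation for each of the 10 digits
    counts = Counter(digits)
    return sum((counts.get(str(d), 0) - expected) ** 2 / expected
               for d in range(10))

# Hypothetical example: 200 'measurements' in which a fabricator
# has favoured values ending in 0 and 5.
suspicious = [120, 125, 130, 135] * 50
chi2 = terminal_digit_chi2(suspicious)
# df = 9, so the 5% critical value is about 16.92
print(chi2, chi2 > 16.92)
```

The same function applied to digits drawn uniformly returns a statistic near zero; the point of such screens is that they require only the published summary numbers, not access to raw data.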

However, none of these is foolproof. The determined fraudster or proponent of questionable practices can usually find ways round these hazards, particularly if financial rewards, direct or indirect, are likely to follow publication.

In the end, clinicians need to have the same response to the medical literature as to the media generally. Not everything you read is true, whether by accident or intent. Scepticism is an important component of science, and a necessary clinical tool to protect your patients from the effects of research fraud.


Correspondence should be sent to Mr H. Marcovitch; e-mail:

1 Wakai A, Roberts I, Schierhout G. Mannitol for acute traumatic brain injury. Cochrane Database Syst Rev 2005;4:CD001049. Google Scholar

2 Cruz J, Minoja G, Okuchi K. Improving clinical outcomes from acute subdural hematomas with the emergency preoperative administration of high doses of mannitol: a randomized trial. Neurosurgery 2001;49:864-871. Google Scholar

3 Cruz J, Minoja G, Okuchi K. Major clinical and physiological benefits of early high doses of mannitol for intraparenchymal temporal lobe hemorrhages with abnormal pupillary widening: a randomized trial. Neurosurgery 2002;51:628-638. Google Scholar

4 Cruz J, Minoja G, Okuchi K, Facco E. Successful use of the new high-dose mannitol treatment in patients with Glasgow Coma Scale scores of 3 and bilateral abnormal pupillary widening: a randomized trial. J Neurosurg 2004;100:376-383. Google Scholar

5 Roberts I, Smith R, Evans S. Doubts over head injury studies. BMJ 2007;334:392-394. Google Scholar

6 Kaufmann AM, Cardoso ER. Aggravation of vasogenic cerebral edema by multiple-dose mannitol. J Neurosurg 1992;77:584-589. Google Scholar

7 Reuben S, Buvanendran A. Preventing the development of chronic pain after orthopaedic surgery with preventive multimodal analgesic techniques. J Bone Joint Surg [Am] 2008;89-A:1343-1358. Retraction in: Heckman JD. J Bone Joint Surg [Am] 2009;91-A:965. Google Scholar

8 Shafer SL. Tattered threads. Anesth Analg 2009;108:1361-1363. Google Scholar

9 Shafer SL. Shadow of doubt. Anesth Analg 2011;112:498-500. Google Scholar

10 Goodman SB. Multimodal analgesia for orthopedic procedures. Anesth Analg 2007;105:19-20. Google Scholar

11 Steen RG. Retractions in the medical literature: how many patients are put at risk by flawed research? J Med Ethics 2011;37:688-692. Google Scholar

12 No authors listed. Multiple retractions as brazen plagiarist victimizes orthopedics literature. www.retractionwatch.wordpress.com/2011/12/22/multiple-retractions-as-brazen-plagiarist-victimes-orthopedics-literature/ (date last accessed 20 June 2012). Google Scholar

13 No authors listed. COPE guidelines. www.publicationethics.org/resources/guidelines/ (date last accessed 20 June 2012). Google Scholar

14 Angell M. Drug companies & doctors: a story of corruption. New York Review of Books, 15 January 2009. www.nybooks.com/articles/22237 (date last accessed 20 June 2012). Google Scholar

15 Smith R. The Trouble with Medical Journals. London: Royal Society of Medicine Press Ltd, 2006. Google Scholar

16 Fanelli D. How many scientists fabricate and falsify research? A systematic review and meta-analysis of survey data. PLoS One 2009;4:e5738. Google Scholar

17 Errami M, Hicks JM, Fisher W, et al. Déjà vu: a study of duplicate citations in Medline. Bioinformatics 2008;24:243-249. Google Scholar

18 Tramèr MR, Reynolds DJ, Moore RA, McQuay HJ. Impact of covert duplicate publication on meta-analysis: a case study. BMJ 1997;315:635-640. Google Scholar

19 DeAngelis CD, Fontanarosa PB. Impugning the integrity of medical science: the adverse effects of industry influence. JAMA 2008;299:1833-1835. Google Scholar

20 Hudes ML, McCann JC, Ames BN. Unusual clustering of coefficients of variation in published articles from a medical biochemistry department in India. FASEB J 2009;23:689-703. Google Scholar

21 Carlisle JB. The analysis of 168 randomised controlled trials to test data integrity. Anaesthesia 2012;67:521-537. Google Scholar