Orthopaedic Proceedings
Vol. 91-B, Issue SUPP_I | Pages 18 - 18
1 Mar 2009
Poolman R, Struijs P, Krips R, Sierevelt I, Marti R, Farrokhyar F, Zlowodzki M, Bhandari M

Background: While surgical trials can rarely blind surgeons or patients, they can often blind outcome assessors. The aim of this systematic review was threefold:

1) to examine the reporting of outcome measures in orthopaedic trials,

2) to determine the feasibility of blinding in published orthopaedic trials and

3) to examine the association between the magnitude of treatment differences and methodological safeguards such as blinding.

Specifically, we focused on the association between blinding of outcome assessment and the size of the reported treatment effect; in other words, does blinding of outcome assessors matter?

Methods: We reviewed the 32 randomised controlled trials (RCTs) published in the Journal of Bone and Joint Surgery (American Volume) in 2003 and 2004 for appropriate use of outcome measures. These RCTs represented 3.4% (32/938) of all studies published during that period. Two of us reviewed each RCT for:

1) the outcome measures used and

2) the use of a methodological safeguard: blinding.

We compared the magnitude of treatment effects between trials with blinded and unblinded outcome assessors.

Results: The methodological validation and clinical usefulness of the clinician-based, patient-based, and generic outcome instruments varied. Ten of the 32 RCTs (31%) used a modified outcome instrument. Of these 10 trials, 4 (40%) failed to describe how the outcome instrument was modified, and 9 (90%) did not describe how the modified instrument was validated and retested. Sixteen (50%) of the 32 RCTs did not report blinding of outcome assessors where blinding would have been possible. Among studies with continuous outcome measures, unblinded outcome assessment was associated with significantly larger treatment effects (standardized mean difference 0.76 versus 0.25, p=0.01). Similarly, among studies with dichotomous outcomes, unblinded outcome assessment was associated with significantly greater treatment effects (odds ratio 0.13 versus 0.42, unblinded versus blinded, p<0.001). The ratio of odds ratios (unblinded to blinded) was 0.31, suggesting that unblinded outcome assessment exaggerated the apparent benefit of treatment in our cohort of studies.
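The ratio-of-odds-ratios figure above follows directly from the two pooled odds ratios reported in the abstract. A minimal sketch of that arithmetic (the values 0.13 and 0.42 are taken from the results; the variable names are illustrative only):

```python
# Ratio of odds ratios: pooled OR under unblinded assessment divided by
# pooled OR under blinded assessment. Values are from the abstract.
or_unblinded = 0.13  # pooled odds ratio, unblinded outcome assessors
or_blinded = 0.42    # pooled odds ratio, blinded outcome assessors

ratio_of_odds_ratios = or_unblinded / or_blinded
print(round(ratio_of_odds_ratios, 2))  # 0.31
```

A ratio below 1 here indicates that unblinded assessment yielded odds ratios further from the null than blinded assessment did, i.e. an apparently larger treatment benefit.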

Conclusion: Reported outcomes in RCTs are often modified and rarely validated. Half of the RCTs did not blind outcome assessors even though blinding would have been feasible in each case. Treatment effects may be exaggerated when outcome assessors are unblinded. Emphasis should be placed on detailed reporting of outcome measures to facilitate generalization, and outcome assessors should be blinded where possible to prevent bias.


Orthopaedic Proceedings
Vol. 91-B, Issue SUPP_I | Pages 18 - 18
1 Mar 2009
Poolman R, Struijs P, Krips R, Sierevelt I, Lutz K, Zlowodzki M, Bhandari M

Background: The Levels of Evidence Rating System is widely believed to categorize studies by quality, with Level I studies representing the highest quality evidence. We aimed to determine the reporting quality of Randomised Controlled Trials (RCTs) published in the most frequently cited general orthopaedic journals.

Methods: Two assessors identified orthopaedic journals that reported a level of evidence rating in their abstracts from January 2003 to December 2004 by searching the instructions for authors of the four highest-impact general orthopaedic journals. Based upon a priori eligibility criteria, two assessors hand searched all issues of the eligible journal from 2003–2004 for RCTs. The assessors extracted the demographic information and the evidence rating from each included RCT and scored the quality of reporting using the reporting quality assessment tool developed by the Cochrane Bone, Joint and Muscle Trauma Group. Scoring was conducted in duplicate, and disagreements were resolved by consensus. We examined the correlation between the level of evidence rating and the Cochrane reporting quality score.

Results: We found that only the Journal of Bone and Joint Surgery–American Volume (JBJS-A) used a level of evidence rating from 2003 to 2004. We identified 938 publications in the JBJS-A from January 2003 to December 2004. Of these publications, 32 (3.4%) were RCTs that fit the inclusion criteria. The 32 RCTs included a total of 3543 patients, with sample sizes ranging from 17 to 514 patients. Despite being labelled as the highest levels of evidence (Level I and Level II), these studies had low Cochrane reporting quality scores on individual methodological safeguards. The Cochrane reporting quality scores did not differ significantly between Level I and Level II studies. Correlations varied from 0.0 to 0.2 across the 12 items of the Cochrane reporting quality assessment tool (p>0.05). On items closely corresponding to the Levels of Evidence Rating System criteria, assessors achieved substantial agreement (ICC=0.80, 95% CI: 0.60 to 0.90).

Conclusions: Our findings suggest that readers should not assume that

1) studies labelled as Level I have high reporting quality and

2) Level I studies have better reporting quality than Level II studies.

Instead, methodological safeguards should be assessed individually.