
REPORTING OF OUTCOMES IN ORTHOPAEDIC RANDOMIZED TRIALS: DOES BLINDING OF OUTCOME ASSESSORS MATTER?



Abstract

Background: While surgical trials can rarely blind surgeons or patients, they can often blind outcome assessors. The aim of this systematic review was threefold:

  1. to examine the reporting of outcome measures in orthopaedic trials,

  2. to determine the feasibility of blinding in published orthopaedic trials, and

  3. to examine the association between the magnitude of treatment differences and methodological safeguards such as blinding.

Specifically, we focused on the association between blinding of outcome assessment and the size of the reported treatment effect; in other words, does blinding of outcome assessors matter?

Methods: We reviewed the 32 RCTs identified in the Journal of Bone and Joint Surgery (American Volume) in 2003 and 2004 for the appropriate use of outcome measures. These RCTs represented 3.4% (32/938) of all studies published during that period. Each RCT was reviewed by two of us for:

  1. the outcome measures used and

  2. the use of a methodological safeguard: blinding.

We compared the magnitude of the treatment effect in trials with blinded versus unblinded outcome assessors.
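For context, the effect-size measures referred to below (the abstract does not state the exact estimators used, so these are the conventional definitions rather than the authors' own formulas) are the standardized mean difference for continuous outcomes and the odds ratio for dichotomous outcomes:

\[
\mathrm{SMD} = \frac{\bar{x}_{T} - \bar{x}_{C}}{s_{\mathrm{pooled}}}, \qquad
\mathrm{OR} = \frac{a/b}{c/d},
\]

where \(\bar{x}_{T}\) and \(\bar{x}_{C}\) are the treatment and control group means, \(s_{\mathrm{pooled}}\) is the pooled standard deviation, and \(a\), \(b\), \(c\), \(d\) are the events and non-events in the treatment and control groups, respectively.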

Results: The methodological validation and clinical usefulness of the clinician-based, patient-based, and generic outcome instruments varied. Ten of the 32 RCTs (31%) used a modified outcome instrument. Of these 10 trials, 4 (40%) failed to describe how the outcome instrument was modified, and 9 (90%) did not describe how the modified instrument was validated and retested. Sixteen (50%) of the 32 RCTs did not report blinding of outcome assessors where blinding would have been possible. Among studies with continuous outcome measures, unblinded outcome assessment was associated with significantly larger treatment effects (standardized mean difference 0.76 versus 0.25, p=0.01). Similarly, among studies with dichotomous outcomes, unblinded outcome assessment was associated with significantly greater treatment effects (odds ratio 0.13 versus 0.42, unblinded versus blinded, p<0.001). The ratio of odds ratios (unblinded to blinded) was 0.31, suggesting that unblinded outcome assessment was associated with an exaggeration of the benefit of a treatment’s effectiveness in our cohort of studies.
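For illustration, the reported ratio of odds ratios follows directly from the two pooled estimates quoted above:

\[
\mathrm{ROR} = \frac{\mathrm{OR}_{\mathrm{unblinded}}}{\mathrm{OR}_{\mathrm{blinded}}} = \frac{0.13}{0.42} \approx 0.31,
\]

i.e. odds ratios from unblinded assessment were, on average, further from the null than those from blinded assessment, consistent with an exaggerated apparent treatment benefit.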

Conclusion: Reported outcomes in RCTs are often modified and rarely validated. Half of the RCTs did not blind outcome assessors even though blinding would have been feasible in each case. Treatment effects may be exaggerated when outcome assessors are unblinded. Emphasis should be placed on detailed reporting of outcome measures to facilitate generalization, and outcome assessors should be blinded wherever possible to prevent bias.

Correspondence should be addressed to Ms Larissa Welti, Scientific Secretary, EFORT Central Office, Technoparkstrasse 1, CH-8005 Zürich, Switzerland