The Bone & Joint Journal
Vol. 104-B, Issue 4 | Pages 510 - 518
1 Apr 2022
Perry DC, Arch B, Appelbe D, Francis P, Craven J, Monsell FP, Williamson P, Knight M

Aims. The aim of this study was to evaluate the epidemiology and treatment of Perthes’ disease of the hip.

Methods. This was an anonymized comprehensive cohort study of Perthes’ disease, with a nested consented cohort. A total of 143 of 144 hospitals treating children’s hip disease in the UK participated over an 18-month period. Cases were cross-checked using a secondary independent reporting network of trainee surgeons to minimize the number of missed cases. Clinician-reported outcomes were collected until two years. Patient-reported outcome measures (PROMs) were collected for a subset of participants.

Results. Overall, 371 children (396 hips) were newly affected by Perthes’ disease arising from 63 hospitals, with a median of two patients (interquartile range 1.0 to 5.5) per hospital. The annual incidence was 2.48 patients (95% confidence interval (CI) 2.20 to 2.76) per 100,000 zero- to 14-year-olds. Of these, 117 hips (36.4%) were treated surgically. There was considerable variation in the treatment strategy, and an optimized decision tree identified joint stiffness and age above eight years as the key determinants for containment surgery. A total of 348 hips (88.5%) had outcomes to two years, of which 227 were in the late reossification stage for which a hip shape outcome (Stulberg grade) was assigned. The independent predictors of a poorer radiological outcome were female sex (odds ratio (OR) 2.27 (95% CI 1.19 to 4.35)), age above six years (OR 2.62 (95% CI 1.30 to 5.28)), and over 50% radiological collapse at inclusion (OR 2.19 (95% CI 0.99 to 4.83)). Surgery had no effect on radiological outcomes (OR 1.03 (95% CI 0.55 to 1.96)). PROMs indicated the marked effect of the disease on the child, which persisted at two years.

Conclusion. Despite the frequency of containment surgery, we found no evidence of improved outcomes. There appears to be sufficient case volume and community equipoise among surgeons to embark on a randomized clinical trial to definitively investigate the effectiveness of containment surgery.

Cite this article: Bone Joint J 2022;104-B(4):510–518
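
As a worked illustration of the incidence figure reported above, the sketch below computes an annual rate per 100,000 with an exact Poisson confidence interval. The case count comes from the abstract; the person-years denominator and the choice of a Poisson interval are assumptions made only so the example runs, not details taken from the study.

```python
from scipy.stats import chi2

def poisson_rate_ci(cases: int, person_years: float, per: float = 100_000, alpha: float = 0.05):
    """Incidence rate with an exact (Garwood) Poisson confidence interval."""
    rate = cases / person_years * per
    lower = chi2.ppf(alpha / 2, 2 * cases) / 2 / person_years * per if cases > 0 else 0.0
    upper = chi2.ppf(1 - alpha / 2, 2 * (cases + 1)) / 2 / person_years * per
    return rate, lower, upper

# 371 children over an 18-month window (from the abstract); the at-risk population of
# zero- to 14-year-olds is a placeholder value, chosen only to make the example run.
cases = 371
person_years = 1.5 * 10_000_000  # hypothetical denominator
print(poisson_rate_ci(cases, person_years))
```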


The Bone & Joint Journal
Vol. 105-B, Issue 4 | Pages 400 - 411
15 Mar 2023
Hosman AJF, Barbagallo G, van Middendorp JJ

Aims

The aim of this study was to determine whether early surgical treatment results in better neurological recovery 12 months after injury than late surgical treatment in patients with acute traumatic spinal cord injury (tSCI).

Methods

Patients with tSCI requiring surgical spinal decompression presenting to 17 centres in Europe were recruited. Depending on the timing of decompression, patients were divided into early (≤ 12 hours after injury) and late (> 12 hours and < 14 days after injury) groups. The American Spinal Injury Association (ASIA) neurological examination was performed at baseline (after injury but before decompression) and at 12 months. The primary endpoint was the change in Lower Extremity Motor Score (LEMS) from baseline to 12 months.


The Bone & Joint Journal
Vol. 105-B, Issue 1 | Pages 64 - 71
1 Jan 2023
Danielsen E, Gulati S, Salvesen Ø, Ingebrigtsen T, Nygaard ØP, Solberg TK

Aims. The number of patients undergoing surgery for degenerative cervical radiculopathy has increased. In many countries, public hospitals have limited capacity. This has resulted in long waiting times for elective treatment and a need for supplementary private healthcare. It is uncertain whether the management of patients and the outcome of treatment are equivalent in public and private hospitals. The aim of this study was to compare the management and patient-reported outcomes among patients who underwent surgery for degenerative cervical radiculopathy in public and private hospitals in Norway, and to assess whether the effectiveness of the treatment was equivalent.

Methods. This was a comparative study using prospectively collected data from the Norwegian Registry for Spine Surgery. A total of 4,750 consecutive patients who underwent surgery for degenerative cervical radiculopathy and were followed for 12 months were included. Case-mix adjustment between those managed in public and private hospitals was performed using propensity score matching. The primary outcome measure was the change in the Neck Disability Index (NDI) between baseline and 12 months postoperatively. A mean difference in improvement of the NDI score between public and private hospitals of ≤ 15 points was considered equivalent. Secondary outcome measures were a numerical rating scale for neck and arm pain and the EuroQol five-dimension three-level health questionnaire. The duration of surgery, length of hospital stay, and complications were also recorded.

Results. The mean improvement from baseline to 12 months postoperatively of patients who underwent surgery in public and private hospitals was equivalent, both in the unmatched cohort (mean NDI difference between groups 3.9 points (95% confidence interval (CI) 2.2 to 5.6); p < 0.001) and in the matched cohort (4.0 points (95% CI 2.3 to 5.7); p < 0.001). Secondary outcomes showed similar results. The duration of surgery and length of hospital stay were significantly longer in public hospitals. Those treated in private hospitals reported significantly fewer complications in the unmatched cohort, but not in the matched cohort.

Conclusion. The clinical effectiveness of surgery for degenerative cervical radiculopathy performed in public and private hospitals was equivalent 12 months after surgery.

Cite this article: Bone Joint J 2023;105-B(1):64–71
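
The case-mix adjustment and equivalence check described above can be sketched as follows: fit a propensity score for private-hospital treatment, match each private case to its nearest public neighbour, and compare the confidence interval for the difference in NDI improvement against the ± 15-point margin. This is a minimal illustration; the column names, covariates, and 1:1 matching-with-replacement scheme are assumptions, not the registry's actual variables or the authors' matching specification.

```python
import numpy as np
import pandas as pd
from scipy import stats
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def match_and_compare(df: pd.DataFrame, covariates: list[str], margin: float = 15.0):
    # Propensity of being treated in a private hospital, given baseline covariates.
    ps_model = LogisticRegression(max_iter=1000).fit(df[covariates], df["private"])
    df = df.assign(ps=ps_model.predict_proba(df[covariates])[:, 1])

    private, public = df[df["private"] == 1], df[df["private"] == 0]
    nn = NearestNeighbors(n_neighbors=1).fit(public[["ps"]])
    _, idx = nn.kneighbors(private[["ps"]])
    matched_public = public.iloc[idx.ravel()]

    # Difference in mean NDI improvement (baseline minus 12 months) with a 95% CI.
    diff = private["ndi_change"].to_numpy() - matched_public["ndi_change"].to_numpy()
    mean_diff = diff.mean()
    ci = stats.t.interval(0.95, len(diff) - 1, loc=mean_diff, scale=stats.sem(diff))
    equivalent = -margin < ci[0] and ci[1] < margin
    return mean_diff, ci, equivalent

# Synthetic demonstration data (hypothetical columns and effect sizes).
rng = np.random.default_rng(0)
demo = pd.DataFrame({
    "age": rng.normal(50, 10, 400),
    "baseline_ndi": rng.normal(40, 12, 400),
    "private": rng.integers(0, 2, 400),
})
demo["ndi_change"] = 15 + 0.1 * demo["baseline_ndi"] + rng.normal(0, 8, 400)
print(match_and_compare(demo, covariates=["age", "baseline_ndi"]))
```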


The Bone & Joint Journal
Vol. 106-B, Issue 4 | Pages 412 - 418
1 Apr 2024
Alqarni AG, Nightingale J, Norrish A, Gladman JRF, Ollivere B

Aims. Frailty greatly increases the risk of adverse outcome of trauma in older people. Frailty detection tools appear to be unsuitable for use in traumatically injured older patients. We therefore aimed to develop a method for detecting frailty in older people sustaining trauma using routinely collected clinical data.

Methods. We analyzed prospectively collected registry data from 2,108 patients aged ≥ 65 years who were admitted to a single major trauma centre over five years (1 October 2015 to 31 July 2020). We divided the sample equally into two, creating derivation and validation samples. In the derivation sample, we performed univariate analyses followed by multivariate regression, starting with 27 clinical variables in the registry, to predict Clinical Frailty Scale (CFS; range 1 to 9) scores. Bland-Altman analyses were performed in the validation cohort to evaluate any biases between the Nottingham Trauma Frailty Index (NTFI) and the CFS.

Results. In the derivation cohort, five of the 27 variables were strongly predictive of the CFS (regression coefficient B = 6.383 (95% confidence interval 5.03 to 7.74), p < 0.001): age, Abbreviated Mental Test score, admission haemoglobin concentration (g/l), pre-admission mobility (needs assistance or not), and mechanism of injury (falls from standing height). In the validation cohort, there was strong agreement between the NTFI and the CFS (mean difference 0.02) with no apparent systematic bias.

Conclusion. We have developed a clinically applicable tool using easily and routinely measured physiological and functional parameters, which clinicians and researchers can use to guide patient care and to stratify the analysis of quality improvement and research projects.

Cite this article: Bone Joint J 2024;106-B(4):412–418
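
The validation step above relies on a Bland-Altman comparison of the derived index against the Clinical Frailty Scale. A minimal sketch of that calculation, with placeholder scores, is shown below.

```python
import numpy as np

def bland_altman(predicted: np.ndarray, observed: np.ndarray):
    """Mean difference (bias) and 95% limits of agreement between two ratings."""
    diff = predicted - observed
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

ntfi = np.array([3.1, 4.8, 6.2, 5.0, 2.9])    # hypothetical NTFI scores
cfs = np.array([3, 5, 6, 5, 3], dtype=float)  # corresponding CFS scores
print(bland_altman(ntfi, cfs))
```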


The Bone & Joint Journal
Vol. 106-B, Issue 11 | Pages 1333 - 1341
1 Nov 2024
Cheung PWH, Leung JHM, Lee VWY, Cheung JPY

Aims. Developmental cervical spinal stenosis (DcSS) is a well-known predisposing factor for degenerative cervical myelopathy (DCM), but there is a lack of consensus on its definition. This study aims to define DcSS based on MRI, and its multilevel characteristics, to assess the prevalence of DcSS in the general population, and to evaluate the presence of DcSS in the prediction of developing DCM.

Methods. This cross-sectional study analyzed MRI spine morphological parameters at C3 to C7 (including anteroposterior (AP) diameter of spinal canal, spinal cord, and vertebral body) from DCM patients (n = 95) and individuals recruited from the general population (n = 2,019). Level-specific median AP spinal canal diameter from DCM patients was used to screen for stenotic levels in the population-based cohort. An individual with multilevel (≥ 3 vertebral levels) AP canal diameter smaller than the DCM median values was considered as having DcSS. The most optimal cut-off canal diameter per level for DcSS was determined by receiver operating characteristic analyses, and multivariable logistic regression was performed for the prediction of developing DCM that required surgery.

Results. A total of 2,114 individuals with a mean age of 64.6 years (SD 11.9), who underwent surgery from March 2009 to December 2016, were studied. The most optimal cut-off canal diameters for DcSS are: C3 < 12.9 mm, C4 < 11.8 mm, C5 < 11.9 mm, C6 < 12.3 mm, and C7 < 13.3 mm. Overall, 13.0% (262 of 2,019) of the population-based cohort had multilevel DcSS. Multilevel DcSS (odds ratio (OR) 6.12 (95% CI 3.97 to 9.42); p < 0.001) and male sex (OR 4.06 (95% CI 2.55 to 6.45); p < 0.001) were predictors of developing DCM.

Conclusion. This is the first MRI-based study defining DcSS with multilevel canal narrowing. Level-specific cut-off canal diameters for DcSS can be used for early identification of individuals at risk of developing DCM. Individuals with multilevel DcSS (≥ three levels), particularly males, are recommended for close monitoring or early intervention to avoid traumatic spinal cord injuries from stenosis.

Cite this article: Bone Joint J 2024;106-B(11):1333–1341
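
A common way to obtain level-specific cut-offs such as those reported above is to pick, for each vertebral level, the canal diameter that maximizes Youden's J on a receiver operating characteristic curve. The sketch below assumes this approach and uses simulated diameters; it is not the authors' code or data.

```python
import numpy as np
from sklearn.metrics import roc_curve

def optimal_cutoff(diameter_mm: np.ndarray, is_dcm: np.ndarray) -> float:
    # Smaller canals indicate stenosis, so score by the negated diameter.
    fpr, tpr, thresholds = roc_curve(is_dcm, -diameter_mm)
    best = np.argmax(tpr - fpr)           # Youden's J = sensitivity + specificity - 1
    return -thresholds[best]              # diameters at or below this are "stenotic"

# Simulated C5 diameters: 95 DCM cases with narrower canals, 2,019 population controls.
rng = np.random.default_rng(0)
diam = np.concatenate([rng.normal(11.5, 1.0, 95), rng.normal(13.5, 1.2, 2019)])
dcm = np.concatenate([np.ones(95), np.zeros(2019)])
print(f"C5 cut-off ≈ {optimal_cutoff(diam, dcm):.1f} mm")
```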


The Bone & Joint Journal
Vol. 106-B, Issue 1 | Pages 77 - 85
1 Jan 2024
Foster AL, Warren J, Vallmuur K, Jaiprakash A, Crawford R, Tetsworth K, Schuetz MA

Aims. The aim of this study was to perform the first population-based description of the epidemiological and health economic burden of fracture-related infection (FRI).

Methods. This is a retrospective cohort study of operatively managed orthopaedic trauma patients from 1 January 2007 to 31 December 2016, performed in Queensland, Australia. Record linkage was used to develop a person-centric, population-based dataset incorporating routinely collected administrative, clinical, and health economic information. The FRI group consisted of patients with International Classification of Diseases, 10th Revision diagnosis codes for deep infection associated with an implanted device within two years following surgery, while all others were deemed not infected. Demographic and clinical variables, as well as healthcare utilization costs, were compared.

Results. There were 111,402 patients operatively managed for orthopaedic trauma, with 2,775 of these (2.5%) complicated by FRI. On univariate analysis, the development of FRI had a statistically significant association with older age, male sex, residing in rural/remote areas, Aboriginal or Torres Strait Islander background, lower socioeconomic status, road traffic accidents, work-related injuries, open fractures, anatomical region (lower limb, spine, pelvis), high injury severity, the requirement for soft-tissue coverage, and medical comorbidities. Patients with FRI had an eight-times longer median inpatient length of stay (24 days vs 3 days), and a 2.8-times higher mean estimated inpatient hospitalization cost (AU$56,565 vs AU$19,773) compared with uninfected patients. The total estimated inpatient cost of the FRI cohort to the healthcare system was AU$156.9 million over the ten-year period.

Conclusion. The results of this study advocate for improvements in trauma care and infection management and in addressing the social determinants of health, and highlight the potential to improve prevention and treatment strategies.

Cite this article: Bone Joint J 2024;106-B(1):77–85
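
The case definition above (an implant-associated deep infection code within two years of the index operation, identified through record linkage) can be expressed as a simple join-and-window rule. The sketch below assumes a pandas layout and a placeholder ICD-10 code list; neither reflects the actual Queensland linked dataset or the study's code set.

```python
import pandas as pd

FRI_CODES = {"T84.5", "T84.6", "T84.7"}   # placeholder deep implant infection codes

def flag_fri(index_ops: pd.DataFrame, admissions: pd.DataFrame) -> pd.Series:
    """index_ops: patient_id, op_date; admissions: patient_id, admit_date, icd10."""
    merged = index_ops.merge(admissions, on="patient_id", how="left")
    within_2y = (merged["admit_date"] >= merged["op_date"]) & \
                (merged["admit_date"] <= merged["op_date"] + pd.DateOffset(years=2))
    is_fri = within_2y & merged["icd10"].isin(FRI_CODES)
    return is_fri.groupby(merged["patient_id"]).any()

ops = pd.DataFrame({"patient_id": [1, 2],
                    "op_date": pd.to_datetime(["2010-01-01", "2011-05-01"])})
adm = pd.DataFrame({"patient_id": [1, 2],
                    "admit_date": pd.to_datetime(["2010-06-01", "2015-01-01"]),
                    "icd10": ["T84.6", "T84.5"]})
print(flag_fri(ops, adm))   # patient 1: True, patient 2: False (outside two years)
```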


The Bone & Joint Journal
Vol. 106-B, Issue 6 | Pages 582 - 588
1 Jun 2024
Bertram W, Howells N, White SP, Sanderson E, Wylde V, Lenguerrand E, Gooberman-Hill R, Bruce J

Aims. The aim of this study was to describe the prevalence and patterns of neuropathic pain over one year in a cohort of patients with chronic post-surgical pain at three months following total knee arthroplasty (TKA).

Methods. Between 2016 and 2019, 363 patients from eight UK NHS hospitals with troublesome pain, defined as a score of ≤ 14 on the Oxford Knee Score pain subscale three months after TKA, were recruited into the Support and Treatment After Replacement (STAR) clinical trial. Self-reported neuropathic pain and postoperative pain were assessed at three, nine, and 15 months after surgery using the painDETECT and Douleur Neuropathique 4 (DN4) questionnaires, collected by postal survey.

Results. Symptoms of neuropathic pain were common among patients reporting chronic pain at three months post-TKA, with half reporting neuropathic pain on painDETECT (191/363; 53%) and 74% (267/359) on DN4. Of those with neuropathic pain at three months, half continued to have symptoms over the next 12 months (148/262; 56%), one-quarter had improved (67/262; 26%), and for one-tenth the neuropathic symptoms fluctuated over time (24/262; 9%). However, a subgroup of participants reported new, late-onset neuropathic symptoms (23/262; 9%). Prevalence of neuropathic symptoms was similar between the screening tools when the lower cut-off painDETECT score (≥ 13) was applied. Overall, mean neuropathic pain scores improved between three and 15 months after TKA.

Conclusion. Neuropathic pain is common in patients with chronic pain at three months after TKA. Although neuropathic symptoms improved over time, up to half continued to report painful neuropathic symptoms at 15 months after TKA. Postoperative care should include screening, assessment, and treatment of neuropathic pain in patients with early chronic postoperative pain after TKA.

Cite this article: Bone Joint J 2024;106-B(6):582–588
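
For readers unfamiliar with the two screening tools, the dichotomization used above might look like the sketch below. The ≥ 13 lower painDETECT cut-off is mentioned in the abstract; the ≥ 19 "likely neuropathic" threshold and the DN4 ≥ 4 threshold are the commonly cited published cut-offs and are assumptions here rather than values confirmed by the study protocol.

```python
def neuropathic_by_paindetect(score: int, lower_cutoff: bool = False) -> bool:
    # >= 19 is the commonly used "likely neuropathic" band; >= 13 is the lower
    # cut-off referred to in the abstract (assumed thresholds, not the trial's code).
    threshold = 13 if lower_cutoff else 19
    return score >= threshold

def neuropathic_by_dn4(score: int) -> bool:
    # DN4 is usually dichotomized at >= 4 out of 10 (assumed threshold).
    return score >= 4

print(neuropathic_by_paindetect(15), neuropathic_by_paindetect(15, lower_cutoff=True))  # False True
print(neuropathic_by_dn4(5))  # True
```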


The Bone & Joint Journal
Vol. 106-B, Issue 3 Supple A | Pages 130 - 136
1 Mar 2024
Morlock M, Perka C, Melsheimer O, Kirschbaum SM

Aims. Despite higher rates of revision after total hip arthroplasty (THA) being reported for uncemented stems in patients aged > 75 years, uncemented stems are frequently used in this age group. Increased mortality after cemented fixation is often used as a justification, but recent data do not confirm this association. The aim of this study was to investigate the influence of the design of the stem and the type of fixation on the rate of revision and immediate postoperative mortality, focusing on the age and sex of the patients.

Methods. A total of 333,144 patients with primary osteoarthritis (OA) of the hip who underwent elective THA between November 2012 and September 2022, using uncemented acetabular components without reconstruction shells, from the German arthroplasty registry were included in the study. The revision rates three years postoperatively for four types of stem (uncemented, uncemented with collar, uncemented short, and cemented) were compared within four age groups: < 60 years (Young), between 61 and 70 years (Mid-I), between 71 and 80 years (Mid-II), and > 80 years (Old). A noninferiority analysis was performed on the most frequently used designs of stem.

Results. The design of the stem was found to have no significant influence on the rate of revision for either sex in the Young group. Uncemented collared stems had a significantly lower rate of revision compared with the other types of stem for females in the Mid-I group. There was a significantly higher rate of revision for uncemented stems in females in the Mid-II group compared with all other types of stem, while in males the rate for uncemented stems was only significantly higher than the rate for cemented stems. Cemented stems had a significantly lower revision rate compared with uncemented and short stems for both sexes in the Old cohort, as did females with collared stems. The rate of immediate postoperative mortality was similar for all types of stem in the Old age group, as were the American Society of Anesthesiologists grades.

Conclusion. In patients aged > 80 years, uncemented and short stems had significantly higher revision rates compared with cemented and collared stems, especially in females. The design of the stem and the type of fixation have to be analyzed in more detail than simply comparing cemented and uncemented fixation, in order to further improve the success of THA.

Cite this article: Bone Joint J 2024;106-B(3 Supple A):130–136


The Bone & Joint Journal
Vol. 105-B, Issue 10 | Pages 1060 - 1069
1 Oct 2023
Holleyman RJ, Jameson SS, Reed M, Meek RMD, Khanduja V, Hamer A, Judge A, Board T

Aims. This study describes the variation in the annual volumes of revision hip arthroplasty (RHA) undertaken by consultant surgeons nationally, and the rate of accrual of RHA and corresponding primary hip arthroplasty (PHA) volume for new consultants entering practice.

Methods. National Joint Registry (NJR) data for England, Wales, Northern Ireland, and the Isle of Man were received for 84,816 RHAs and 818,979 PHAs recorded between April 2011 and December 2019. RHA data comprised all revision procedures, including first-time revisions of PHA and any subsequent re-revisions recorded in public and private healthcare organizations. Annual procedure volumes undertaken by the responsible consultant surgeon in the 12 months prior to every index procedure were determined. We identified a cohort of ‘new’ hip arthroplasty consultants who commenced practice from 2012 and describe their rate of accrual of PHA and RHA experience.

Results. The median annual consultant RHA volume, averaged across all cases, was 21 (interquartile range (IQR) 11 to 34; range 0 to 181). Of 1,695 consultants submitting RHA cases within the study period, the top 20% of surgeons by annual volume performed 74.2% of the total RHA case volume. More than half of all consultants who had ever undertaken an RHA maintained an annual volume of one RHA or fewer; however, they collectively contributed less than 3% of the total RHA case volume. Consultant PHA and RHA volumes were positively correlated. Lower-volume surgeons were more likely to undertake RHA for urgent indications (such as infection) as a proportion of their practice, and to do so on weekends and public holidays.

Conclusion. The majority of RHAs were undertaken by higher-volume surgeons. There was considerable variation in RHA volumes by indication, day of the week, and between consultants nationally. The rate of accrual of RHA experience by new consultants is low, which has important implications for establishing an experienced RHA consultant workforce.

Cite this article: Bone Joint J 2023;105-B(10):1060–1069
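
The exposure in this study, the responsible consultant's procedure volume in the 12 months before each index case, can be computed with a simple per-surgeon look-back. The sketch below assumes a hypothetical DataFrame layout rather than the NJR extract format.

```python
import pandas as pd

def annual_volume_before_index(df: pd.DataFrame) -> pd.Series:
    """df needs 'surgeon_id' and 'op_date' (datetime64) columns, one row per procedure."""
    df = df.sort_values("op_date")
    volumes = []
    for _, row in df.iterrows():
        window_start = row["op_date"] - pd.DateOffset(months=12)
        prior = df[(df["surgeon_id"] == row["surgeon_id"])
                   & (df["op_date"] >= window_start)
                   & (df["op_date"] < row["op_date"])]
        volumes.append(len(prior))
    return pd.Series(volumes, index=df.index, name="volume_prior_12m")

ops = pd.DataFrame({
    "surgeon_id": [1, 1, 1, 2],
    "op_date": pd.to_datetime(["2018-01-10", "2018-06-01", "2019-03-01", "2018-02-01"]),
})
print(annual_volume_before_index(ops))
```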


The Bone & Joint Journal
Vol. 106-B, Issue 5 | Pages 442 - 449
1 May 2024
Nieboer MF, van der Jagt OP, de Munter L, de Jongh MAC, van de Ree CLP

Aims. Periprosthetic proximal femoral fractures (PFFs) are a major complication after total hip arthroplasty (THA). Health status after PFF has not been specifically investigated. The aim of this study is to evaluate the pattern of health status over two years after sustaining a PFF.

Methods. A cohort of patients with PFF after THA was derived from the Brabant Injury Outcomes Surveillance (BIOS) study. The BIOS study, a prospective, observational, multicentre follow-up cohort study, was conducted to obtain data by questionnaires pre-injury and at one week, and one, three, six, 12, and 24 months after trauma. Primary outcome measures were the EuroQol five-dimension three-level questionnaire (EQ-5D-3L), the Health Utility Index 2 (HUI2), and the Health Utility Index 3 (HUI3). Secondary outcome measures were general measurements such as duration of hospital stay and mortality.

Results. A total of 70 patients with a PFF were included. EQ-5D utility scores were significantly lower at all timepoints except six months’ follow-up compared to pre-injury. EuroQol visual analogue scale (EQ-VAS) scores at one month’s follow-up were significantly lower compared to pre-injury. The percentage of reported problems at two years was higher for all dimensions except anxiety/depression when compared to pre-injury. The mean EQ-5D utility score was 0.26 higher in males compared to females (95% confidence interval (CI) 0.01 to 0.42; p = 0.003). The mean EQ-VAS score for males was 8.9 points higher when compared to females over all timepoints (95% CI 1.2 to 16.7; p = 0.027). Mortality was 10% after two years’ follow-up.

Conclusion. PFF patients are a frail population with substantial functional impairment at baseline. Two years after trauma, their health status remains significantly, and clinically relevantly, lower than pre-injury. Health status improves the most between one and three months after injury. Two years after PFF, more patients experience problems in mobility, self-care, usual activities, and pain/discomfort than pre-injury.

Cite this article: Bone Joint J 2024;106-B(5):442–449


The Bone & Joint Journal
Vol. 106-B, Issue 2 | Pages 189 - 194
1 Feb 2024
Donald N, Eniola G, Deierl K

Aims. Hip fractures are some of the most common fractures encountered in orthopaedic practice. We aimed to identify whether perioperative hypotension is a predictor of 30-day mortality, and to stratify patient groups that would benefit from closer monitoring and early intervention. While there is literature on intraoperative blood pressure, there are few studies examining pre- and postoperative blood pressure.

Methods. We conducted a prospective observational cohort study over a one-year period from December 2021 to December 2022. Patient demographic details, biochemical results, and haemodynamic observations were taken from electronic medical records. Statistical analysis was conducted with the Cox proportional hazards model, and the effects of independent variables were estimated with the Wald statistic. Kaplan-Meier survival curves were estimated and compared with the log-rank test.

Results. A total of 528 patients were identified as suitable for inclusion. On multivariate analysis, postoperative hypotension, defined as a systolic blood pressure (SBP) < 90 mmHg two to 24 hours after surgery, showed an increased hazard ratio (HR) for 30-day mortality (HR 4.6 (95% confidence interval (CI) 2.3 to 8.9); p < 0.001) and was an independent risk factor after accounting for sex (HR 2.7 (95% CI 1.4 to 5.2); p = 0.003), age (HR 1.1 (95% CI 1.0 to 1.1); p = 0.016), American Society of Anesthesiologists grade (HR 2.7 (95% CI 1.5 to 4.6); p < 0.001), time to theatre > 24 hours (HR 2.1 (95% CI 1.1 to 4.2); p = 0.025), and preoperative anaemia (HR 2.3 (95% CI 1.0 to 5.2); p = 0.043). A preoperative SBP of < 120 mmHg was close to achieving significance (HR 1.9 (95% CI 0.99 to 3.6); p = 0.052).

Conclusion. Our study is the first to demonstrate that postoperative hypotension within the first 24 hours is an independent risk factor for 30-day mortality after hip fracture surgery. Clinicians should recognize patients who have an SBP of < 90 mmHg in the early postoperative period, and be aware of the increased mortality risk in this specific cohort, who may benefit from a closer level of monitoring and early intervention.

Cite this article: Bone Joint J 2024;106-B(2):189–194
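
The modelling described above is a Cox proportional hazards regression of 30-day mortality on postoperative hypotension and the listed covariates. The sketch below uses the lifelines package and a tiny toy dataset; the package choice, column names, and data values are all assumptions for illustration, not the study's analysis code.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Toy dataset: follow-up censored at 30 days; covariates are postoperative
# hypotension (SBP < 90 mmHg at 2-24 hours) and age. All values are invented.
df = pd.DataFrame({
    "time_days":        [30, 9, 30, 14, 30, 30, 6, 30, 30, 20, 30, 11],
    "died":             [0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1],
    "postop_sbp_lt_90": [0, 1, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0],
    "age":              [78, 91, 83, 86, 88, 75, 93, 80, 84, 89, 82, 87],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time_days", event_col="died")
print(cph.summary)  # hazard ratios appear as exp(coef)
```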


The Bone & Joint Journal
Vol. 103-B, Issue 8 | Pages 1428 - 1437
2 Aug 2021
Vogt B, Roedl R, Gosheger G, Frommer A, Laufer A, Kleine-Koenig M, Theil C, Toporowski G

Aims. Temporary epiphysiodesis (ED) is commonly applied in children and adolescents to treat leg length discrepancies (LLDs) and tall stature. Traditional Blount staples or modern two-hole plates are used in clinical practice. However, they require accurate planning, precise surgical techniques, and attentive follow-up to achieve the desired outcome without complications. This study reports the results of ED using a novel rigid staple (RigidTack), including safety as well as technical and procedural success, according to the Idea, Development, Exploration, Assessment, Long-term (IDEAL) study framework.

Methods. A cohort of 56 patients, comprising 45 unilateral EDs for LLD and 11 bilateral EDs for tall stature, was prospectively analyzed. ED was performed with 222 rigid staples with a mean follow-up of 24.4 months (8 to 49). Patients with a predicted LLD of ≥ 2 cm at skeletal maturity were included. Mean age at surgery was 12.1 years (8 to 14). Correction and complication rates, including implant-associated problems and secondary deformities, as well as perioperative parameters, were recorded (IDEAL stage 2a). These results were compared to historical cohorts treated for correction of LLD with two-hole plates or Blount staples.

Results. The mean LLD was reduced from 25.2 mm (15 to 45) before surgery to 9.3 mm (6 to 25) at skeletal maturity. Implant-associated complications occurred in 4/56 treatments (7%), and secondary frontal plane deformities were detected in 5/45 legs (11%) of the LLD cohort. Including tall stature patients, the rate increased to 12/67 legs (18%). Sagittal plane deformities were observed during 1/45 LLD treatments (2%). Correction rates were similar for rigid staples, two-hole plates, and Blount staples. Lower rates of frontal and sagittal plane deformities were observed using rigid staples.

Conclusion. Treatment of LLD using novel rigid staples appears to be a feasible and promising strategy. Secondary frontal and sagittal plane deformities remain a potential complication, although the rate seems to be lower in patients treated with rigid staples. Further comparative studies are needed to investigate this issue.

Cite this article: Bone Joint J 2021;103-B(8):1428–1437


The Bone & Joint Journal
Vol. 106-B, Issue 11 | Pages 1231 - 1239
1 Nov 2024
Tzanetis P, Fluit R, de Souza K, Robertson S, Koopman B, Verdonschot N

Aims. The surgical target for optimal implant positioning in robotic-assisted total knee arthroplasty remains the subject of ongoing discussion. One of the proposed targets is to recreate the knee’s functional behaviour as per its pre-diseased state. The aim of this study was to optimize implant positioning, starting from mechanical alignment (MA), toward restoring the pre-diseased status, including ligament strain and kinematic patterns, in a patient population.

Methods. We used an active appearance model-based approach to segment the preoperative CT scans of 21 osteoarthritic patients, which identified the osteophyte-free surfaces and estimated cartilage from the segmented bones; these geometries were used to construct patient-specific musculoskeletal models of the pre-diseased knee. Subsequently, implantations were simulated using the MA method, and a previously developed optimization technique was employed to find the optimal implant position that minimized the root mean square deviation between pre-diseased and postoperative ligament strains and kinematics.

Results. There were evident biomechanical differences between the simulated patient models, but also trends that appeared reproducible at the population level. Optimizing the implant position significantly reduced the maximum observed strain root mean square deviations within the cohort from 36.5% to below 5.3% for all but the anterolateral ligament, and concomitantly reduced the kinematic deviations from 3.8 mm (SD 1.7) and 4.7° (SD 1.9°) with MA to 2.7 mm (SD 1.4) and 3.7° (SD 1.9°) relative to the pre-diseased state. To achieve this, the femoral component consistently required translational adjustments in the anterior, lateral, and proximal directions, while the tibial component required a more posterior slope and varus rotation in most cases.

Conclusion. These findings confirm that MA-induced biomechanical alterations relative to the pre-diseased state can be reduced by optimizing the implant position, and may have implications for further advancing preoperative planning in robotic-assisted surgery in order to restore pre-diseased knee function.

Cite this article: Bone Joint J 2024;106-B(11):1231–1239


The Bone & Joint Journal
Vol. 105-B, Issue 4 | Pages 422 - 430
15 Mar 2023
Riksaasen AS, Kaur S, Solberg TK, Austevoll I, Brox J, Dolatowski FC, Hellum C, Kolstad F, Lonne G, Nygaard ØP, Ingebrigtsen T

Aims. Repeated lumbar spine surgery has been associated with inferior clinical outcomes. This study aimed to examine and quantify the impact of this association in a national clinical register cohort.

Methods. This is a population-based study from the Norwegian Registry for Spine Surgery (NORspine). We included 26,723 consecutive cases operated on for lumbar spinal stenosis or lumbar disc herniation from January 2007 to December 2018. The primary outcome was the Oswestry Disability Index (ODI), presented as the proportion reaching a patient-acceptable symptom state (PASS; defined as an ODI raw score ≤ 22) and as ODI raw and change scores at 12-month follow-up. Secondary outcomes were the Global Perceived Effect scale, the numerical rating scale for pain, the EuroQol five-dimension health questionnaire, the occurrence of perioperative complications and wound infections, and working capability. Binary logistic regression analysis was conducted to examine how the number of previous operations influenced the odds of not reaching a PASS.

Results. The proportion reaching a PASS decreased from 66.0% (95% confidence interval (CI) 65.4 to 66.7) in cases with no previous operation to 22.0% (95% CI 15.2 to 30.3) in cases with four or more previous operations (p < 0.001). The odds of not reaching a PASS were 2.1 (95% CI 1.9 to 2.2) in cases with one previous operation, 2.6 (95% CI 2.3 to 3.0) in cases with two, 4.4 (95% CI 3.4 to 5.5) in cases with three, and 6.9 (95% CI 4.5 to 10.5) in cases with four or more previous operations. The ODI raw and change scores and the secondary outcomes showed similar trends.

Conclusion. We found a dose-response relationship between an increasing number of previous operations and inferior outcomes among patients operated on for degenerative conditions of the lumbar spine. This information should be considered in the shared decision-making process prior to elective spine surgery.

Cite this article: Bone Joint J 2023;105-B(4):422–430
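
The dose-response analysis above is a binary logistic regression of "did not reach PASS" on the number of previous operations, with no previous operation as the reference category. The sketch below assumes simulated data and statsmodels; it is not the NORspine analysis code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated cohort: failure risk rises with the number of previous operations.
rng = np.random.default_rng(1)
n_prev = rng.integers(0, 5, size=2000)
p_fail = 0.34 + 0.12 * np.minimum(n_prev, 4)
no_pass = rng.binomial(1, p_fail)
df = pd.DataFrame({"no_pass": no_pass, "n_prev": n_prev.astype(str)})

# Logistic regression with "0 previous operations" as the reference level.
model = smf.logit("no_pass ~ C(n_prev, Treatment(reference='0'))", data=df).fit(disp=False)
print(np.exp(model.params))       # odds ratios vs no previous operation
print(np.exp(model.conf_int()))   # 95% CIs
```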


The Bone & Joint Journal
Vol. 104-B, Issue 4 | Pages 519 - 528
1 Apr 2022
Perry DC, Arch B, Appelbe D, Francis P, Craven J, Monsell FP, Williamson P, Knight M

Aims. The aim of this study was to inform the epidemiology and treatment of slipped capital femoral epiphysis (SCFE).

Methods. This was an anonymized comprehensive cohort study, with a nested consented cohort, following the Idea, Development, Exploration, Assessment, Long-term (IDEAL) study framework. A total of 143 of 144 hospitals treating SCFE in Great Britain participated over an 18-month period. Patients were cross-checked against national administrative data and potential missing patients were identified. Clinician-reported outcomes were collected until two years. Patient-reported outcome measures (PROMs) were collected for a subset of participants.

Results. A total of 486 children (513 hips) were newly affected, with a median of two patients (interquartile range 0 to 4) per hospital. The annual incidence was 3.34 (95% confidence interval (CI) 3.01 to 3.67) per 100,000 six- to 18-year-olds. Time to diagnosis in stable disease was increased in severe deformity. There was considerable variation in surgical strategy among those unable to walk at diagnosis (66 urgent surgery vs 43 surgery after interval delay), those with severe radiological deformity (34 fixation with deformity correction vs 36 without correction), and those with unaffected opposite hips (120 prophylactic fixation vs 286 no fixation). Independent risk factors for avascular necrosis (AVN) were the inability of the child to walk at presentation to hospital (adjusted odds ratio (aOR) 4.4 (95% CI 1.7 to 11.4)) and the surgical technique of open reduction and internal fixation (aOR 7.5 (95% CI 2.4 to 23.2)). Overall, 33 unaffected untreated opposite hips (11.5%) were treated for SCFE by two-year follow-up. Age was the only independent risk factor for contralateral SCFE, with age under 12.5 years the optimal cut-off to define ‘at risk’. Of hips treated with prophylactic fixation, none had SCFE, though complications included femoral fracture, AVN, and revision surgery. PROMs demonstrated the marked impact of SCFE on the child’s quality of life.

Conclusion. The experience of individual hospitals is limited and mechanisms to consolidate learning may enhance care. Diagnostic delays were common and radiological severity worsened with increasing time to diagnosis. There was unexplained variation in treatment, some of which exposes children to significant risks that should be evaluated through randomized controlled trials.

Cite this article: Bone Joint J 2022;104-B(4):519–528


The Bone & Joint Journal
Vol. 102-B, Issue 4 | Pages 434 - 441
1 Apr 2020
Hamilton DF, Burnett R, Patton JT, MacPherson GJ, Simpson AHRW, Howie CR, Gaston P

Aims. There are comparatively few randomized studies evaluating knee arthroplasty prostheses, and fewer still that report longer-term functional outcomes. The aim of this study was to evaluate the mid-term outcomes of an existing implant trial cohort, to document changing patient function over time following total knee arthroplasty using longitudinal analytical techniques, and to determine whether the implant design chosen at the time of surgery influenced these outcomes.

Methods. A mid-term follow-up of the remaining 125 patients from a randomized cohort of total knee arthroplasty patients (initially comprising 212 recruited patients), comparing modern (Triathlon) and traditional (Kinemax) prostheses, was undertaken. Functional outcomes were assessed with the Oxford Knee Score (OKS), knee range of movement, pain numerical rating scales, lower limb power output, a timed functional assessment battery, and a satisfaction survey. Data were linked to earlier assessment timepoints, and analyzed by repeated measures analysis of variance (ANOVA) mixed models, incorporating longitudinal change over all assessment timepoints.

Results. The mean follow-up of the 125 patients was 8.12 years (7.3 to 9.4). There was a reduction in all assessment parameters relative to earlier assessments. Longitudinal models highlight changes over time in all parameters and demonstrate large effect sizes. Significant between-group differences were seen in measures of knee flexion (medium effect size), lower limb power output (large effect size), and report of worst daily pain experienced (large effect size), favouring the Triathlon group. No longitudinal between-group differences were observed in mean OKS, average daily pain report, or the timed performance test. Satisfaction with outcome in surviving patients at eight years was 90.5% (57/63) in the Triathlon group and 82.8% (48/58) in the Kinemax group, with no statistical difference between groups (p = 0.321).

Conclusion. At a mean of 8.12 years, this mid-term follow-up of a randomized controlled trial cohort highlights a general reduction in measures of patient function with patient age and follow-up duration, and a comparative preservation of function based on the implant received at the time of surgery.

Cite this article: Bone Joint J 2020;102-B(4):434–441
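
The longitudinal analysis above fits repeated measures mixed models across all assessment timepoints. A minimal sketch of that idea, a linear mixed model with a random intercept per patient and an implant-by-time interaction, is shown below with simulated data; the variable names and simulated effect sizes are assumptions, not the trial dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate 125 patients measured at four timepoints (years since surgery).
rng = np.random.default_rng(2)
patients = np.repeat(np.arange(125), 4)
time = np.tile([0, 1, 5, 8], 125)
implant = np.repeat(rng.integers(0, 2, 125), 4)        # 0 = Kinemax, 1 = Triathlon (labels assumed)
oks = (20 + 18 * (time > 0) - 0.4 * time + 1.5 * implant
       + rng.normal(0, 4, size=time.size) + np.repeat(rng.normal(0, 3, 125), 4))
df = pd.DataFrame({"oks": oks, "time": time, "implant": implant, "patient": patients})

# Random intercept per patient; fixed effects for timepoint, implant, and their interaction.
model = smf.mixedlm("oks ~ C(time) * implant", data=df, groups=df["patient"]).fit()
print(model.summary())
```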


The Bone & Joint Journal
Vol. 103-B, Issue 4 | Pages 672 - 680
1 Apr 2021
Clement ND, Scott CEH, Murray JRD, Howie CR, Deehan DJ

Aims. The aim of this study was to assess the quality of life of patients on the waiting list for a total hip (THA) or knee arthroplasty (KA) during the COVID-19 pandemic. Secondary aims were to assess whether length of time on the waiting list influenced quality of life and the rate of deferral of surgery.

Methods. During the study period (August and September 2020), 843 patients (THA n = 394, KA n = 449) from ten centres in the UK reported their EuroQol five-dimension (EQ-5D) scores and completed a waiting list questionnaire (2020 group). Patient demographic details, procedure, and date when listed were recorded. Patients scoring less than zero for their EQ-5D score were defined as being in a health state “worse than death” (WTD). Data from a retrospective cohort (January 2014 to September 2017) were used as the control group.

Results. The 2020 group had a significantly worse EQ-5D score compared to the control group for both THA (p < 0.001) and KA (p < 0.001). Over one-third (35.0%, n = 138/394) of patients waiting for a THA and nearly a quarter (22.3%, n = 100/449) waiting for a KA were in a health state WTD, which was significantly greater than the control group (odds ratio 2.30 (95% confidence interval (CI) 1.83 to 2.93) and 2.08 (95% CI 1.61 to 2.70), respectively; p < 0.001). Over 80% (n = 680/843) of the 2020 group felt that their quality of life had deteriorated while waiting. Each additional month spent on the waiting list was independently associated with a decrease in quality of life (EQ-5D: -0.0135, p = 0.004). There were 117 patients (13.9%) who wished to defer their surgery, and the main reason for this was health concerns for themselves and/or their family (99.1%, n = 116/117).

Conclusion. Over one-third of patients waiting for a THA and nearly one-quarter waiting for a KA were in a health state WTD, which was approaching double that observed prior to the pandemic. Increasing length of time on the waiting list was associated with decreasing quality of life.

Level of evidence: Level III retrospective case-control study.

Cite this article: Bone Joint J 2021;103-B(4):672–680
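
The "worse than death" comparison above reduces to a 2 × 2 table of WTD versus non-WTD patients in the 2020 and control groups. The sketch below computes an odds ratio with a Wald 95% CI; the control-group counts are placeholders, because the abstract reports only the 2020 numerators.

```python
import math

def odds_ratio(a: int, b: int, c: int, d: int):
    """a/b = events/non-events in group 1; c/d = events/non-events in group 2."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, (lo, hi)

# THA: 138/394 WTD while waiting in 2020 (abstract); 190/1000 in the control cohort
# is a hypothetical figure used only to make the example run.
print(odds_ratio(138, 394 - 138, 190, 1000 - 190))
```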


The Bone & Joint Journal
Vol. 103-B, Issue 2 | Pages 366 - 372
1 Feb 2021
Sun Z, Li J, Luo G, Wang F, Hu Y, Fan C

Aims. This study aimed to determine the minimal detectable change (MDC), minimal clinically important difference (MCID), and substantial clinical benefit (SCB) under distribution- and anchor-based methods for the Mayo Elbow Performance Index (MEPI) and range of movement (ROM) after open elbow arthrolysis (OEA). We also assessed the proportion of patients who achieved MCID and SCB, and identified the factors associated with achieving MCID.

Methods. A cohort of 265 patients treated by OEA were included. The MEPI and ROM were evaluated at baseline and at two-year follow-up. Distribution-based MDC was calculated with confidence intervals (CIs) reflecting 80% (MDC 80), 90% (MDC 90), and 95% (MDC 95) certainty, and MCID with changes from baseline to follow-up. Anchor-based MCID (anchored to somewhat satisfied) and SCB (very satisfied) were calculated using a five-level Likert satisfaction scale. Multivariate logistic regression of factors affecting MCID achievement was performed.

Results. The MDC increased substantially with the selected CIs (MDC 80, MDC 90, and MDC 95), ranging from 5.0 to 7.6 points for the MEPI, and from 8.2° to 12.5° for ROM. The MCID of the MEPI was 8.3 points under the distribution-based method and 12.2 points under the anchor-based method; the distribution- and anchor-based MCIDs of ROM were 14.1° and 25.0°, respectively. The SCB of the MEPI and ROM were 17.3 points and 43.4°, respectively. The proportions of patients who attained the anchor-based MCID for the MEPI and ROM were 74.0% and 94.7%, respectively; furthermore, 64.2% and 86.8% attained SCB. Non-dominant arm (p = 0.022), higher preoperative MEPI rating (p < 0.001), and postoperative visual analogue scale pain score (p < 0.001) were independent predictors of not achieving MCID for the MEPI, while atraumatic causes (p = 0.040) and higher preoperative ROM (p = 0.005) were independent risk factors for not achieving MCID for ROM.

Conclusion. In patients undergoing OEA, the anchor-based MCID is 12.2 points for the MEPI and 25.0° for ROM, and the SCB is 17.3 points and 43.4°, respectively. Future studies using the MEPI and ROM to assess OEA outcomes should report not only statistical significance but also clinical importance.

Cite this article: Bone Joint J 2021;103-B(2):366–372
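
The distribution-based MDC values above follow the standard formula MDC = z × SEM × √2, with SEM = SD × √(1 − reliability). The sketch below applies that formula with placeholder inputs, since the abstract does not report the SD or reliability coefficient used.

```python
import math

def mdc(sd: float, reliability: float) -> dict[str, float]:
    """Minimal detectable change at 80%, 90%, and 95% certainty."""
    sem = sd * math.sqrt(1 - reliability)           # standard error of measurement
    z = {"MDC80": 1.28, "MDC90": 1.64, "MDC95": 1.96}
    return {name: zval * sem * math.sqrt(2) for name, zval in z.items()}

print(mdc(sd=12.0, reliability=0.90))  # hypothetical MEPI inputs, for illustration only
```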


The Bone & Joint Journal
Vol. 103-B, Issue 2 | Pages 329 - 337
1 Feb 2021
MacDessi SJ, Griffiths-Jones W, Harris IA, Bellemans J, Chen DB

Aims. A comprehensive classification for coronal lower limb alignment with predictive capabilities for knee balance would be beneficial in total knee arthroplasty (TKA). This paper describes the Coronal Plane Alignment of the Knee (CPAK) classification and examines its utility in preoperative soft-tissue balance prediction, comparing kinematic alignment (KA) to mechanical alignment (MA).

Methods. A radiological analysis of 500 healthy and 500 osteoarthritic (OA) knees was used to assess the applicability of the CPAK classification. CPAK comprises nine phenotypes based on the arithmetic HKA (aHKA), which estimates constitutional limb alignment, and joint line obliquity (JLO). Intraoperative balance was compared within each phenotype in a cohort of 138 computer-assisted TKAs randomized to KA or MA. Primary outcomes included descriptive analyses of healthy and OA groups per CPAK type, and comparison of balance at 10° of flexion within each type. Secondary outcomes assessed balance at 45° and 90°, and the bone recuts required to achieve final knee balance within each CPAK type.

Results. There was a similar frequency distribution between healthy and arthritic groups across all CPAK types. The most common categories were Type II (39.2% healthy vs 32.2% OA), Type I (26.4% healthy vs 19.4% OA), and Type V (15.4% healthy vs 14.6% OA). CPAK Types VII, VIII, and IX were rare in both populations. Across all CPAK types, a greater proportion of KA TKAs achieved optimal balance compared to MA. This effect was largest, and statistically significant, in CPAK Type I (100% KA vs 15% MA; p < 0.001), Type II (78% KA vs 46% MA; p = 0.018), and Type IV (89% KA vs 0% MA; p < 0.001).

Conclusion. CPAK is a pragmatic, comprehensive classification for coronal knee alignment, based on constitutional alignment and JLO, that can be used in healthy and arthritic knees. CPAK identifies which knee phenotypes may benefit most from KA when optimization of soft-tissue balance is prioritized. Further, it will allow for consistency of reporting in future studies.

Cite this article: Bone Joint J 2021;103-B(2):329–337
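
The nine CPAK phenotypes combine an aHKA band (varus, neutral, valgus) with a JLO band (apex distal, neutral, apex proximal). The sketch below assumes the commonly described definitions, aHKA = MPTA − LDFA with a ± 2° neutral band and JLO = MPTA + LDFA with a 177° to 183° neutral band; these thresholds should be checked against the original classification before reuse.

```python
def cpak_type(mpta: float, ldfa: float) -> int:
    """Return a CPAK phenotype (1-9) from the medial proximal tibial angle (MPTA)
    and lateral distal femoral angle (LDFA), using assumed band thresholds."""
    ahka = mpta - ldfa          # constitutional coronal alignment
    jlo = mpta + ldfa           # joint line obliquity

    if ahka < -2:
        col = 0                 # varus
    elif ahka > 2:
        col = 2                 # valgus
    else:
        col = 1                 # neutral alignment

    if jlo < 177:
        row = 0                 # apex distal joint line
    elif jlo > 183:
        row = 2                 # apex proximal joint line
    else:
        row = 1                 # neutral joint line

    return row * 3 + col + 1    # Types I-IX

print(cpak_type(mpta=85, ldfa=88))  # constitutional varus, apex distal -> Type 1 (I)
```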


The Bone & Joint Journal
Vol. 106-B, Issue 7 | Pages 744 - 750
1 Jul 2024
Saeed A, Bradley CS, Verma Y, Kelley SP

Aims

Radiological residual acetabular dysplasia (RAD) has been reported in up to 30% of children who had successful brace treatment of infant developmental dysplasia of the hip (DDH). Predicting those who will resolve and those who may need corrective surgery is important to optimize follow-up protocols. In this study, we aimed to identify the prevalence and predictors of RAD at two and five years post-bracing.

Methods

This was a single-centre, prospective longitudinal cohort study of infants with DDH managed using a published, standardized Pavlik harness protocol between January 2012 and December 2016. RAD was measured at two years’ mean follow-up using the acetabular index-lateral edge (AI-L) and acetabular index-sourcil (AI-S), and at five years using AI-L, AI-S, centre-edge angle (CEA), and acetabular depth ratio (ADR). Each hip was classified as normal, borderline (1 to 2 standard deviations (SDs)), or dysplastic (> 2 SDs) using published normative values specific to sex, age, and laterality.
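
The grading rule described above, normal within 1 SD, borderline at 1 to 2 SDs, and dysplastic beyond 2 SDs of a sex-, age-, and side-specific normative value, can be sketched as a small helper. The normative mean and SD below are placeholders, not the published reference values.

```python
def classify_measurement(value: float, norm_mean: float, norm_sd: float,
                         higher_is_worse: bool = True) -> str:
    """Grade a radiological index against a normative mean and SD."""
    deviation = (value - norm_mean) / norm_sd
    if not higher_is_worse:
        deviation = -deviation          # e.g. CEA: lower values indicate dysplasia
    if deviation > 2:
        return "dysplastic"
    if deviation > 1:
        return "borderline"
    return "normal"

# Acetabular index (higher is worse); hypothetical norm for a two-year-old girl, left hip.
print(classify_measurement(value=27.0, norm_mean=22.0, norm_sd=3.5))  # borderline
```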