Bone & Joint Open
Vol. 2, Issue 10 | Pages 858 - 864
18 Oct 2021
Guntin J, Plummer D, Della Valle C, DeBenedetti A, Nam D

Aims. Prior studies have identified that malseating of a modular dual mobility liner can occur, with previously reported incidences between 5.8% and 16.4%. The aim of this study was to determine the incidence of malseating in dual mobility implants at our institution, assess for risk factors for liner malseating, and investigate whether liner malseating has any impact on clinical outcomes after surgery. Methods. We retrospectively reviewed the radiographs of 239 primary and revision total hip arthroplasties with a modular dual mobility liner. Two independent reviewers assessed radiographs for each patient twice for evidence of malseating, with a third observer acting as a tiebreaker. Univariate analysis was conducted to determine risk factors for malseating, with Youden’s index used to identify cut-off points. Cohen’s kappa test was used to measure interobserver and intraobserver reliability. Results. In all, 12 liners (5.0%), including eight Stryker (6.8%) and four Zimmer Biomet (3.3%), had radiological evidence of malseating. Interobserver reliability was found to be 0.453 (95% confidence interval (CI) 0.26 to 0.64), suggesting weak inter-rater agreement, with strong agreement being greater than 0.8. We found component size of 50 mm or less to be associated with liner malseating on univariate analysis (p = 0.031). Patients with malseated liners appeared to have no associated clinical consequences, and none required revision surgery at a mean of 14 months (1.4 to 99.2) postoperatively. Conclusion. The incidence of liner malseating was 5.0%, which is similar to other reports. Component size of 50 mm or smaller was identified as a risk factor for malseating. Surgeons should be aware that malseating can occur, and implant design or instrumentation changes should be considered to lower the risk of malseating. Although further follow-up is needed, it remains to be seen whether malseating is associated with any clinical consequences.
Cite this article: Bone Jt Open 2021;2(10):858–864
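The interobserver figure in the abstract above is a Cohen's kappa. As a rough illustration of how such an agreement statistic is computed, the sketch below implements it in plain Python for two hypothetical reviewers' binary malseating calls; the readings are invented for the example, not the study's data.

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' categorical judgements:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    categories = sorted(set(rater_a) | set(rater_b))
    # Observed agreement: proportion of cases where the raters concur
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement if the raters judged independently
    p_e = sum(
        (rater_a.count(c) / n) * (rater_b.count(c) / n) for c in categories
    )
    return (p_o - p_e) / (1 - p_e)

# Hypothetical binary readings for 10 liners (1 = malseated, 0 = well seated)
reviewer_1 = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]
reviewer_2 = [1, 0, 0, 0, 0, 0, 1, 1, 0, 0]
print(round(cohens_kappa(reviewer_1, reviewer_2), 3))  # → 0.524
```

A kappa around 0.5, as here, would fall in the same "weak agreement" band the authors describe for their 0.453.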


Bone & Joint Open
Vol. 2, Issue 8 | Pages 594 - 598
3 Aug 2021
Arneill M, Cosgrove A, Robinson E

Aims. To determine the likelihood of achieving a successful closed reduction (CR) of a dislocated hip in developmental dysplasia of the hip (DDH) after failed Pavlik harness treatment. We report the rate of avascular necrosis (AVN) and the need for further surgical procedures. Methods. Data were obtained from the Northern Ireland DDH database. All children who underwent an attempted closed reduction between 2011 and 2016 were identified. Children with a dislocated hip that failed Pavlik harness treatment were included in the study. Successful closed reduction was defined as a hip that reduced in theatre and remained reduced. The most recent imaging was assessed for the presence of AVN using the Kalamchi and MacEwen classification. Results. There were 644 dislocated hips in 543 patients initially treated in a Pavlik harness. In all, 67 hips failed Pavlik harness treatment and proceeded to arthrogram and CR under general anaesthetic at an average age of 180 days. The number of hips that were deemed reduced in theatre was 46 of the 67 (69%). A total of 11 hips re-dislocated and underwent open reduction, giving a true successful CR rate of 52%. Of the total cohort of 67 hips that went to theatre for arthrogram and attempted CR, five (7%) developed clinically significant AVN at an average follow-up of four years and one month, while none of the 35 hips whose reduction was truly successful developed clinically significant AVN. Conclusion. The likelihood of a successful closed reduction of a dislocated hip that has failed Pavlik harness treatment in the Northern Ireland population is 52%, with a clinically significant AVN rate of 7%. As such, we continue to advocate closed reduction under general anaesthetic for the hip that has failed Pavlik harness treatment. Cite this article: Bone Jt Open 2021;2(8):594–598


The Bone & Joint Journal
Vol. 103-B, Issue 7 Supple B | Pages 17 - 24
1 Jul 2021
Vigdorchik JM, Sharma AK, Buckland AJ, Elbuluk AM, Eftekhary N, Mayman DJ, Carroll KM, Jerabek SA

Aims. Patients with spinal pathology who undergo total hip arthroplasty (THA) have an increased risk of dislocation and revision. The aim of this study was to determine whether the use of the Hip-Spine Classification system in these patients would result in a decreased rate of postoperative dislocation. Methods. This prospective, multicentre study evaluated 3,777 consecutive patients undergoing THA by three surgeons, between January 2014 and December 2019. They were categorized using the Hip-Spine Classification system: group 1 with normal spinal alignment and group 2 with a flatback deformity, each subdivided into A (normal spinal mobility) and B (stiff spine). Flatback deformity was defined by a pelvic incidence minus lumbar lordosis of > 10°, and spinal stiffness was defined by < 10° change in sacral slope from standing to seated. Each category determined a patient-specific component positioning. Survivorship free of dislocation was recorded, and spinopelvic measurements were compared for reliability using the intraclass correlation coefficient. Results. A total of 2,081 patients met the inclusion criteria. There were 987 group 1A, 232 group 1B, 715 group 2A, and 147 group 2B patients. A total of 70 patients had a lumbar fusion; most had L4-5 (16; 23%) or L4-S1 (12; 17%) fusions; 51 patients (73%) had one or two levels fused, and 19 (27%) had three or more levels fused. Dual mobility (DM) components were used in 166 patients (8%), including all of those in group 2B and all of those with three or more levels fused. Survivorship free of dislocation at five years was 99.2%, with a 0.8% dislocation rate. The correlation coefficient was 0.83 (95% confidence interval 0.89 to 0.91). Conclusion. This is the largest series in the literature evaluating the relationship between hip-spine pathology and dislocation after THA, and guiding appropriate treatment.
The Hip-Spine Classification system allows surgeons to make appropriate evaluations preoperatively, and it guides the use of DM components in patients with spinopelvic pathology in order to reduce the risk of dislocation in these high-risk patients. Cite this article: Bone Joint J 2021;103-B(7 Supple B):17–24
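The classification rules quoted in the abstract can be expressed as a short decision function. The sketch below is an illustration only, using the thresholds stated above (pelvic incidence minus lumbar lordosis > 10° for flatback; < 10° change in sacral slope from standing to seated for stiffness); the function name and example angles are hypothetical.

```python
def hip_spine_group(pelvic_incidence, lumbar_lordosis,
                    sacral_slope_standing, sacral_slope_seated):
    """Assign a Hip-Spine Classification group from spinopelvic angles
    (degrees), per the decision rules quoted in the abstract.

    Group 1 = normal alignment, group 2 = flatback (PI - LL > 10 deg);
    subgroup A = normal mobility, B = stiff spine (< 10 deg change in
    sacral slope from standing to seated).
    """
    flatback = (pelvic_incidence - lumbar_lordosis) > 10
    stiff = abs(sacral_slope_standing - sacral_slope_seated) < 10
    return ("2" if flatback else "1") + ("B" if stiff else "A")

# Hypothetical measurements (degrees)
print(hip_spine_group(55, 50, 38, 20))  # normal alignment, mobile → 1A
print(hip_spine_group(60, 42, 35, 30))  # flatback, stiff spine    → 2B
```

Under the protocol described above, a "2B" result is the category in which dual mobility components were used.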


Bone & Joint Open
Vol. 3, Issue 1 | Pages 93 - 97
10 Jan 2022
Kunze KN, Orr M, Krebs V, Bhandari M, Piuzzi NS

Artificial intelligence and machine-learning analytics have gained extensive popularity in recent years due to their clinically relevant applications. A wide range of proof-of-concept studies have demonstrated the ability of these analyses to personalize risk prediction, detect implant specifics from imaging, and monitor and assess patient movement and recovery. Though these applications are exciting and could potentially influence practice, it is imperative to understand when these analyses are indicated and where the data are derived from, before investing resources and confidence in the results and conclusions. In this article, we review the current benefits and potential limitations of machine learning for the orthopaedic surgeon, with a specific emphasis on data quality.


Bone & Joint Research
Vol. 11, Issue 2 | Pages 91 - 101
1 Feb 2022
Munford MJ, Stoddart JC, Liddle AD, Cobb JP, Jeffers JRT

Aims. Unicompartmental and total knee arthroplasty (UKA and TKA) are successful treatments for osteoarthritis, but the solid metal implants disrupt the natural distribution of stress and strain, which can lead to bone loss over time. This generates problems if the implant needs to be revised. This study investigates whether titanium lattice UKA and TKA implants can maintain natural load transfer in the proximal tibia. Methods. In a cadaveric model, UKA and TKA procedures were performed on eight fresh-frozen knee specimens, using conventional (solid) and titanium lattice tibial implants. Stresses at the bone-implant interface were measured and compared with those in the native knee. Results. Titanium lattice implants were able to restore the mechanical environment of the native tibia for both UKA and TKA designs. Maximum stress at the bone-implant interface ranged from 1.2 MPa to 3.3 MPa compared with 1.3 MPa to 2.7 MPa for the native tibia. The conventional solid UKA and TKA implants reduced the maximum stress in the bone by a factor of 10 and caused > 70% of the bone surface area to be underloaded compared with the native tibia. Conclusion. Titanium lattice implants maintained the natural mechanical loading in the proximal tibia after UKA and TKA, but conventional solid implants did not. This is an exciting first step towards implants that maintain bone health, but such implants also have to meet fatigue and micromotion criteria to be clinically viable. Cite this article: Bone Joint Res 2022;11(2):91–101


Bone & Joint Open
Vol. 2, Issue 10 | Pages 879 - 885
20 Oct 2021
Oliveira e Carmo L, van den Merkhof A, Olczak J, Gordon M, Jutte PC, Jaarsma RL, IJpma FFA, Doornberg JN, Prijs J

Aims. The number of convolutional neural networks (CNNs) available for fracture detection and classification is rapidly increasing. External validation of a CNN on a temporally separate (separated by time) or geographically separate (separated by location) dataset is crucial to assess the generalizability of the CNN before application to clinical practice in other institutions. We aimed to answer the following questions: are current CNNs for fracture recognition externally valid? Which methods are applied for external validation (EV)? And what are the reported performances on the EV sets compared with the internal validation (IV) sets of these CNNs? Methods. The PubMed and Embase databases were systematically searched from January 2010 to October 2020 according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement. The type of EV, characteristics of the external dataset, and diagnostic performance characteristics on the IV and EV datasets were collected and compared. Quality assessment was conducted using a seven-item checklist based on a modified Methodological Index for Non-Randomized Studies (MINORS) instrument. Results. Out of 1,349 studies, 36 reported development of a CNN for fracture detection and/or classification. Of these, only four (11%) reported a form of EV. One study used temporal EV, one conducted both temporal and geographical EV, and two used geographical EV. When comparing the CNNs’ performance on the IV set versus the EV set, the following were found: AUCs of 0.967 (IV) versus 0.975 (EV), 0.976 (IV) versus 0.985 to 0.992 (EV), 0.93 to 0.96 (IV) versus 0.80 to 0.89 (EV), and F1-scores of 0.856 to 0.863 (IV) versus 0.757 to 0.840 (EV). Conclusion. The number of externally validated CNNs in orthopaedic trauma for fracture recognition is still scarce. This greatly limits the potential for transfer of these CNNs from the developing institution to another hospital while achieving similar diagnostic performance.
We recommend the use of geographical EV and statements such as the Consolidated Standards of Reporting Trials–Artificial Intelligence (CONSORT-AI), the Standard Protocol Items: Recommendations for Interventional Trials–Artificial Intelligence (SPIRIT-AI) and the Transparent Reporting of a multivariable prediction model for Individual Prognosis or Diagnosis–Machine Learning (TRIPOD-ML) to critically appraise performance of CNNs and improve methodological rigor, quality of future models, and facilitate eventual implementation in clinical practice. Cite this article: Bone Jt Open 2021;2(10):879–885
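The AUC comparisons above can be understood through the rank interpretation of the AUC: it equals the probability that a randomly chosen positive case receives a higher model score than a randomly chosen negative case (the Mann-Whitney statistic). A minimal sketch with invented model scores, not values from any of the reviewed studies:

```python
def auc(scores_pos, scores_neg):
    """ROC AUC via the Mann-Whitney statistic: the fraction of
    positive/negative pairs where the positive outscores the negative,
    with ties counted as half."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Toy CNN scores: fracture cases vs. normal radiographs (hypothetical)
fracture = [0.9, 0.8, 0.75, 0.6]
normal = [0.7, 0.4, 0.3, 0.2]
print(auc(fracture, normal))  # → 0.9375
```

Computing the AUC this way on a geographically separate test set, as the authors recommend, is what reveals the IV-to-EV performance drops quoted in the results.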


Bone & Joint Research
Vol. 11, Issue 2 | Pages 73 - 81
22 Feb 2022
Gao T, Lin J, Wei H, Bao B, Zhu H, Zheng X

Aims. Trained immunity confers non-specific protection against various types of infectious diseases, including bone and joint infection. Platelets are active participants in the immune response to pathogens and foreign substances, but their role in trained immunity remains elusive. Methods. We first trained the innate immune system of C57BL/6 mice via intravenous injection of two toll-like receptor agonists (zymosan and lipopolysaccharide). Two, four, and eight weeks later, we isolated platelets from immunity-trained and control mice, and then assessed whether immunity training altered platelet releasate. To better understand the role of immunity-trained platelets in bone and joint infection development, we transfused platelets from immunity-trained mice into naïve mice, and then challenged the recipient mice with Staphylococcus aureus or Escherichia coli. Results. After immunity training, the levels of pro-inflammatory cytokines (tumour necrosis factor alpha (TNF-α), interleukin (IL)-17A) and chemokines (CCL5, CXCL4, CXCL5, CXCL7, CXCL12) increased significantly in platelet releasate, while the levels of anti-inflammatory cytokines (IL-4, IL-13) decreased. Other platelet-secreted factors (e.g. platelet-derived growth factor (PDGF)-AA, PDGF-AB, PDGF-BB, cathepsin D, serotonin, and histamine) were statistically indistinguishable between the two groups. Transfusion of platelets from trained mice into naïve mice reduced infection risk and bacterial burden after local or systemic challenge with either S. aureus or E. coli. Conclusion. Immunity training altered platelet releasate by increasing the levels of inflammatory cytokines/chemokines and decreasing the levels of anti-inflammatory cytokines. Transfusion of platelets from immunity-trained mice conferred protection against bone and joint infection, suggesting that alteration of platelet releasate might be an important mechanism underlying trained immunity and may have clinical implications. 
Cite this article: Bone Joint Res 2022;11(2):73–81




The Bone & Joint Journal
Vol. 104-B, Issue 2 | Pages 302 - 308
1 Feb 2022
Dala-Ali B, Donnan L, Masterton G, Briggs L, Kauiers C, O’Sullivan M, Calder P, Eastwood DM

Aims. Osteofibrous dysplasia (OFD) is a rare benign lesion predominantly affecting the tibia in children. Its potential link to adamantinoma has influenced management. This international case series reviews the presentation of OFD and management approaches to improve our understanding of OFD. Methods. A retrospective review at three paediatric tertiary centres identified 101 cases of tibial OFD in 99 patients. The clinical records, radiological images, and histology were analyzed. Results. Mean age at presentation was 13.5 years (SD 12.4), and mean follow-up was 5.65 years (SD 5.51). At latest review, 62 lesions (61.4%) were in skeletally mature patients. The most common site of the tibial lesion was the anterior (76 lesions, 75.2%) cortex (63 lesions, 62.4%) of the middle third (52 lesions, 51.5%). Pain, swelling, and fracture were common presentations. Overall, 41 lesions (40.6%) presented with radiological deformity (> 10°): apex anterior in 97.6%. A total of 41 lesions (40.6%) were treated conservatively. Anterior bowing < 10° at presentation was found to be related to successful conservative management of OFD (p = 0.013, multivariable logistic regression). Intralesional excision was performed in 43 lesions (42.6%) and a wide excision of the lesion in 19 (18.8%). A high complication rate and surgical burden were found in those that underwent wide excision, regardless of the technique employed. There was progression/recurrence in nine lesions (8.9%), but statistical analysis found no predictive factors. No OFD lesion transformed to adamantinoma. Conclusion. This study confirms OFD to be a benign bone condition with low rates of local progression and without malignant transformation. It is important to distinguish OFD from adamantinoma by a histological diagnosis. Focus should be on angular deformity, monitored with full-length tibial radiographs. Surgery is indicated in symptomatic patients and predicted by the severity of the initial angular deformity.
Surgery should focus more on the deformity rather than the lesion. Cite this article: Bone Joint J 2022;104-B(2):302–308


The Bone & Joint Journal
Vol. 104-B, Issue 3 | Pages 394 - 400
1 Mar 2022
Lee KJ, Kim YT, Choi M, Kim SH

Aims. The aim of this study was to compare the characteristics and outcomes of L-shaped and reverse L-shaped rotator cuff tears. Methods. A total of 82 shoulders (81 patients) after arthroscopic rotator cuff repair were retrospectively enrolled. The mean age of the patients was 62 years (SD 6), 33 shoulders (40.2%) were in male patients, and 57 shoulders (69.5%) were the right shoulder. Of these, 36 shoulders had an L-shaped tear (group L) and 46 had a reverse L-shaped tear (group RL). Both groups were compared regarding characteristics, pre- and postoperative pain, and functional outcomes. Muscle status was assessed by preoperative MRI, and re-tear rates by postoperative ultrasonography or MRI. Results. Patients in group RL were significantly older than in group L (p = 0.008), and group RL was significantly associated with female sex (odds ratio 2.5 (95% confidence interval 1.03 to 6.32); p = 0.041). Mean postoperative pain visual analogue scale (VAS) score was significantly greater (group L = 0.8 (SD 1.5), group RL = 1.7 (SD 2.2); p = 0.033) and mean postoperative American Shoulder and Elbow Surgeons (ASES) score was significantly lower in group RL than group L (group L = 91.4 (SD 13.1), group RL = 83.8 (SD 17.9); p = 0.028). However, postoperative mean VAS for pain and ASES score were not lower than the patient-acceptable symptom state scores. Mean retracted tear length was significantly larger in group L (group L = 24.6 mm (SD 6.5), group RL = 20.0 mm (SD 6.8); p = 0.003). Overall re-tear rate for 82 tears was 11.0% (nine shoulders), and re-tear rates in group L and RL were similar at 11.1% (four shoulders) and 10.9% (five shoulders), respectively (p = 1.000). No significant intergroup difference was found for fatty degeneration (FD) or muscle atrophy. Within group L, postoperative FD grades of supraspinatus and subscapularis worsened significantly (p = 0.034 and p = 0.008, respectively). 
Mean postoperative pain VAS (male = 1.2 (SD 1.8), female = 1.3 (SD 2.0)) and ASES scores (male = 88.7 (SD 15.5), female = 86.0 (SD 16.8)) were similar in male and female patients (p = 0.700 and p = 0.475, respectively). Regression analysis showed age was not a prognostic factor of postoperative pain VAS or ASES scores (p = 0.188 and p = 0.150, respectively). Conclusion. Older age and female sex were associated with reverse L-shaped tears. Although the postoperative functional outcomes of patients with reverse L-shaped tears were satisfactory, the clinical scores were poorer than those of patients with L-shaped tears. Surgeons should be aware of the differences in clinical outcome between L-shaped and reverse L-shaped rotator cuff tears. Cite this article: Bone Joint J 2022;104-B(3):394–400
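The odds ratio and confidence interval reported for female sex above can be illustrated with the standard Woolf (log) method for a 2 × 2 table. The counts below are hypothetical, chosen only to demonstrate the calculation; they are not taken from the study.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio from a 2x2 table with a Woolf (log-based) 95% CI.

    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases.
    """
    or_ = (a * d) / (b * c)
    # Standard error of log(OR): sqrt of the summed reciprocal counts
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts: female vs. male, reverse L-shaped vs. L-shaped tears
or_, lo, hi = odds_ratio_ci(32, 17, 14, 19)
print(f"OR {or_:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

A confidence interval whose lower bound just exceeds 1, as in the reported 2.5 (1.03 to 6.32), corresponds to a p-value just under 0.05, consistent with the p = 0.041 quoted.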


The Bone & Joint Journal
Vol. 104-B, Issue 2 | Pages 274 - 282
1 Feb 2022
Grønhaug KML, Dybvik E, Matre K, Östman B, Gjertsen J

Aims. The aim of this study was to investigate if there are differences in outcome between sliding hip screws (SHSs) and intramedullary nails (IMNs) with regard to fracture stability. Methods. We assessed data from 17,341 patients with trochanteric or subtrochanteric fractures treated with SHS or IMN in the Norwegian Hip Fracture Register from 2013 to 2019. Primary outcome measures were reoperations for stable fractures (AO Foundation/Orthopaedic Trauma Association (AO/OTA) type A1) and unstable fractures (AO/OTA type A2, A3, and subtrochanteric fractures). Secondary outcome measures were reoperations for A2, A3, and subtrochanteric fractures individually, one-year mortality, quality of life (EuroQol five-dimension three-level index score), pain (visual analogue scale (VAS)), and satisfaction (VAS) for stable and unstable fractures. Hazard rate ratios (HRRs) for reoperation were calculated using Cox regression analysis with adjustments for age, sex, and American Society of Anesthesiologists score. Results. Reoperation rate was lower after surgery with IMN for unstable fractures one year (HRR 0.82, 95% confidence interval (CI) 0.70 to 0.97; p = 0.022) and three years postoperatively (HRR 0.86, 95% CI 0.74 to 0.99; p = 0.036), compared with SHS. For individual fracture types, no clinically significant differences were found. Lower one-year mortality was found for IMN compared with SHS for stable fractures (HRR 0.87; 95% CI 0.78 to 0.96; p = 0.007), and unstable fractures (HRR 0.91, 95% CI 0.84 to 0.98; p = 0.014). Conclusion. This national register-based study indicates a lower reoperation rate for IMN than SHS for unstable trochanteric and subtrochanteric fractures, but not for stable fractures or individual fracture types. The choice of implant may not be decisive to the outcome of treatment for stable trochanteric fractures in terms of reoperation rate. One-year mortality rate for unstable and stable fractures was lower in patients treated with IMN. 
Cite this article: Bone Joint J 2022;104-B(2):274–282


Aims. To provide normative data that can assess spinal-related disability and the prevalence of back or leg pain among adults with no spinal conditions in the UK using validated questionnaires. Methods. A total of 1,000 participants with equal sex distribution were included and categorized in five age groups: 20 to 29, 30 to 39, 40 to 49, 50 to 59, and 60 to 69 years. Individuals with spinal pathologies were excluded. Participants completed the Scoliosis Research Society-22 (SRS-22r), visual analogue scale (VAS) for back/leg pain, and the EuroQol five-dimension index (EQ-5D/VAS) questionnaires, and disclosed their age, sex, and occupation. They were also categorized in five professional groups: doctors, nurses, allied health professionals, office workers, and manual workers. Results. The mean age of all participants was 43.8 years (20 to 69). There was no difference in the SRS-22r, EQ-5D, or VAS scores between male and female participants (p > 0.05). There was an incremental decrease in SRS-22r total scores as the age increased. The mean EQ-5D index score (0.84) varied little across the age groups (0.72 to 0.91) but reduced gradually with increasing age. There was a difference between the SRS-22r total score (4.51), the individual domain scores, and the EQ-5D scores (index: 0.94 and VAS: 89) for the doctors’ group compared to all other occupational categories (p < 0.001). Doctors had a younger mean age of participants, which may explain their improved spinal health. There was no difference in the total or sub-domain SRS-22r and EQ-5D scores between the other four occupational groups. Conclusion. This study provides the first normative data for the SRS-22r, EQ-5D, and VAS for back/leg pain questionnaires among adults in the UK. We recorded an excellent correlation between the three assessment tools, with individuals who reported less back and leg pain having better quality of life and greater function.
The participants’ age, rather than their sex or profession, appears to be the major determinant for spinal health and quality of life. Cite this article: Bone Jt Open 2022;3(2):130–134


The Bone & Joint Journal
Vol. 103-B, Issue 9 | Pages 1442 - 1448
1 Sep 2021
McDonnell JM, Evans SR, McCarthy L, Temperley H, Waters C, Ahern D, Cunniffe G, Morris S, Synnott K, Birch N, Butler JS

In recent years, machine learning (ML) and artificial neural networks (ANNs), a particular subset of ML, have been adopted by various areas of healthcare. A number of diagnostic and prognostic algorithms have been designed and implemented across a range of orthopaedic sub-specialties to date, with many positive results. However, the methodology of many of these studies is flawed, and few compare the use of ML with the current approach in clinical practice. Spinal surgery has advanced rapidly over the past three decades, particularly in the areas of implant technology, advanced surgical techniques, biologics, and enhanced recovery protocols. It is therefore regarded as an innovative field. Inevitably, spinal surgeons will wish to incorporate ML into their practice should models prove effective in diagnostic or prognostic terms. The purpose of this article is to review published studies that describe the application of neural networks to spinal surgery and which actively compare ANN models to contemporary clinical standards, allowing evaluation of their efficacy, accuracy, and relatability. It also explores some of the limitations of the technology, which act to constrain the widespread adoption of neural networks for diagnostic and prognostic use in spinal care. Finally, it describes the necessary considerations should institutions wish to incorporate ANNs into their practices. In doing so, the aim of this review is to provide a practical approach for spinal surgeons to understand the relevant aspects of neural networks. Cite this article: Bone Joint J 2021;103-B(9):1442–1448


The Bone & Joint Journal
Vol. 104-B, Issue 2 | Pages 257 - 264
1 Feb 2022
Tahir M, Mehta D, Sandhu C, Jones M, Gardner A, Mehta JS

Aims. The aim of this study was to compare the clinical and radiological outcomes of patients with early-onset scoliosis (EOS), who had undergone spinal fusion after distraction-based spinal growth modulation using either traditional growing rods (TGRs) or magnetically controlled growing rods (MCGRs). Methods. We undertook a retrospective review of skeletally mature patients who had undergone fusion for an EOS, which had been previously treated using either TGRs or MCGRs. Measured outcomes included sequential coronal T1 to S1 height and major curve (Cobb) angle on plain radiographs, and any complications requiring unplanned surgery before final fusion. Results. We reviewed 43 patients (63% female) with a mean age of 6.4 years (SD 2.6) at the index procedure, and 12.2 years (SD 2.2) at final fusion. Their mean follow-up was 8.1 years (SD 3.4). A total of 16 patients were treated with MCGRs and 27 with TGRs. The mean number of distractions was 7.5 in the MCGR group and 10 in the TGR group (p = 0.471). The mean interval between distractions was 3.4 months in the MCGR group and 8.6 months in the TGR group (p < 0.001). The mean Cobb angle had improved by 25.1° in the MCGR group and 23.2° in the TGR group (p = 0.664) at final follow-up. The mean coronal T1 to S1 height had increased by 16% in the MCGR group and 32.9% in the TGR group (p = 0.001), although the mean T1 to S1 height achieved at final follow-up was similar in both. Unplanned operations were needed in 43.8% of the MCGR group and 51.2% of the TGR group (p = 0.422). Conclusion. In this retrospective, single-centre review, there were no significant differences in major curve correction or gain in spinal height at fusion. Although the number of planned procedures was lower in patients with MCGRs, the rates of implant-related complications needing unplanned revision surgery were similar in the two groups. Cite this article: Bone Joint J 2022;104-B(2):257–264


Bone & Joint Open
Vol. 3, Issue 2 | Pages 123 - 129
1 Feb 2022
Bernard J, Bishop T, Herzog J, Haleem S, Lupu C, Ajayi B, Lui DF

Aims. Vertebral body tethering (VBT) is a non-fusion technique to correct scoliosis. It allows correction of scoliosis through growth modulation (GM) by tethering the convex side, allowing unrestricted growth on the concave side, similar to the hemiepiphysiodesis concept. The other modality is anterior scoliosis correction (ASC), in which the tether performs most of the correction immediately and limited growth is expected. Methods. We conducted a retrospective analysis of clinical and radiological data of 20 patients aged between 9 and 17 years (19 female, one male), treated between January 2014 and December 2016, with a mean five-year follow-up (4 to 7). Results. There were ten patients in each group, with a total of 23 curves operated on. Mean age in the VBT-GM group was 12.5 years (9 to 14), with a mean Risser grade of 0.63 (0 to 2), and in the VBT-ASC group 14.9 years (13 to 17), with a mean Risser grade of 3.66 (3 to 5). Mean preoperative Cobb angle in the VBT-GM group was 47.4° (40° to 58°), with a fulcrum unbend of 17.4° (1° to 41°), compared with 56.5° (40° to 79°) in the VBT-ASC group, with a 30.6° (2° to 69°) unbend. Postoperative Cobb angle was 20.3° for VBT-GM and 11.2° for VBT-ASC. The early postoperative correction rate was 54.3% versus 81%, whereas the Fulcrum Bending Correction Index (FBCI) was 93.1% versus 146.6%. The last Cobb angle on radiograph at mean five years’ follow-up was 19.4° (VBT-GM) and 16.5° (VBT-ASC). Three over-corrections occurred, all in patients with open triradiate cartilage (TRC). Overall, 5% of patients required fusion. This one patient had an over-correction, a second-stage tether release, and final conversion to fusion. Conclusion. We show a high success rate (95%) in helping children avoid fusion at five years post-surgery. VBT is a safe technique for correction of scoliosis in the skeletally immature patient. This is the first report at five years showing that two methods of VBT can be employed depending on the skeletal maturity of the patient: GM and ASC.
Cite this article: Bone Jt Open 2022;3(2):123–129
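The FBCI figures quoted in the abstract above are assumed here to follow the standard published definition: the surgical correction expressed as a percentage of the flexibility demonstrated on the fulcrum bending radiograph. A sketch under that assumption, with illustrative angles rather than the study's per-patient data:

```python
def fbci(pre_cobb, post_cobb, fulcrum_cobb):
    """Fulcrum Bending Correction Index (assumed standard definition):
    surgical correction as a percentage of the flexibility shown on the
    fulcrum bending radiograph. All angles in degrees."""
    correction = pre_cobb - post_cobb          # achieved surgical correction
    flexibility = pre_cobb - fulcrum_cobb      # correction on fulcrum bending
    return 100.0 * correction / flexibility

# Illustrative angles: a 50 deg curve bending out to 20 deg on the fulcrum
# radiograph, corrected surgically to 15 deg
print(round(fbci(50, 15, 20), 1))  # → 116.7
```

An FBCI above 100%, as in the VBT-ASC group's 146.6%, indicates that surgery corrected the curve beyond the flexibility the fulcrum bending radiograph predicted.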


Bone & Joint Open
Vol. 3, Issue 1 | Pages 61 - 67
18 Jan 2022
van Lingen CP, Ettema HB, Bosker BH, Verheyen CCPM

Aims. Large-diameter metal-on-metal (MoM) total hip arthroplasty (THA) has demonstrated unexpectedly high failure rates and pseudotumour formation. The purpose of this prospective cohort study is to report ten-year results in order to establish revision rate, prevalence of pseudotumour formation, and relation with whole blood cobalt levels. Methods. All patients were recalled according to the guidelines of the Dutch Orthopaedic Association. They underwent clinical and radiological assessments (radiograph and CT scan) of the hip prosthesis and whole blood cobalt ion measurements. Overall, 94 patients (95 hips) fulfilled our requirements for a minimum ten-year follow-up. Results. Mean follow-up was 10.9 years (10 to 12), with a cumulative survival rate of 82.4%. Reason for revision was predominantly pseudotumour formation (68%), apart from loosening, pain, infection, and osteolysis. The prevalence of pseudotumour formation around the prostheses was 41%, while our previous report of this cohort (with a mean follow-up of 3.6 years) revealed a 39% prevalence. The ten-year revision-free survival with pseudotumour was 66.7% and without pseudotumour 92.4% (p < 0.05). There was poor discriminatory ability for cobalt for pseudotumour formation. Conclusion. This prospective study reports a minimum ten-year follow-up of large-head MoM THA. Revision rates are high, with the main reason being the sequelae of pseudotumour formation, which were rarely observed after five years of implantation. Blood ion measurements show limited discriminatory capacity in diagnosing pseudotumour formation. Our results show that an early comprehensive follow-up strategy is essential for MoM THA to promptly identify and manage early complications and revise on time. After ten years' follow-up, we do not recommend continuing routine CT scanning or whole blood cobalt measurements, but instead enrolling these patients in routine follow-up protocols for THA.
Cite this article: Bone Jt Open 2022;3(1):61–67


The Bone & Joint Journal
Vol. 104-B, Issue 3 | Pages 401 - 407
1 Mar 2022
Kriechling P, Zaleski M, Loucas R, Loucas M, Fleischmann M, Wieser K

Aims. The aim of this study was to report the incidence of implant-related complications, further operations, and their influence on the outcome in a series of patients who underwent primary reverse total shoulder arthroplasty (RTSA). Methods. The prospectively collected clinical and radiological data of 797 patients who underwent 854 primary RTSAs between January 2005 and August 2018 were analyzed. The hypothesis was that the presence of complications would adversely affect the outcome. Further procedures were defined as all necessary operations, including reoperations without change of components, and partial or total revisions. The clinical outcome was evaluated using the absolute and relative Constant Scores (aCS, rCS), the Subjective Shoulder Value (SSV) scores, range of motion, and pain. Results. The overall surgical site complication rate was 22% (188 complications) in 152 patients (156 RTSAs; 18%) at a mean follow-up of 46 months (0 to 169). The most common complications were acromial fracture (in 44 patients, 45 RTSAs; 5.3%), glenoid loosening (in 37 patients, 37 RTSAs; 4.3%), instability (in 23 patients, 23 RTSAs; 2.7%), humeral fracture or loosening of the humeral component (in 21 patients, 21 RTSAs; 2.5%), and periprosthetic infection (in 14 patients, 14 RTSAs; 1.6%). Further surgery was undertaken in 79 patients (82 RTSAs) requiring a total of 135 procedures (41% revision rate). The most common indications for further surgery were glenoid-related complications (in 23 patients, 23 RTSAs; 2.7%), instability (in 15 patients, 15 RTSAs; 1.8%), acromial fractures (in 11 patients, 11 RTSAs; 1.3%), pain and severe scarring (in 13 patients, 13 RTSAs; 1.5%), and infection (in 8 patients, 8 RTSAs; 0.9%). Patients who had a complication had significantly worse mean rCS scores (57% (SD 24%) vs 81% (SD 16%)) and SSV scores (53% (SD 27%) vs 80% (SD 20%)) compared with those without a complication. 
If revision surgery was necessary, the outcome was even further compromised (mean rCS score: 51% (SD 23%) vs 63% (SD 23%); SSV score: 4% (SD 25%) vs 61% (SD 27%)). Conclusion. Although the indications for, and use of, a RTSA are increasing, it remains a demanding surgical procedure. We found that about one in five patients had a complication and one in ten required further surgery. Both adversely affected the outcome. Cite this article: Bone Joint J 2022;104-B(3):401–407