Progressive collapsing foot deformity (PCFD) is a common condition with an estimated prevalence of 3.3% in women over 40 years of age. Progressive in nature, symptomatic flatfoot deformity can be a debilitating condition due to pain and limited physical function; among foot and ankle pathologies it has been shown to have one of the poorest preoperative patient-reported outcome scores, second only to ankle arthritis. Operative reconstruction of PCFD can be performed in a single stage or across multiple stages. The purpose of this study is to compare costs for non-staged (NS) flatfoot reconstructions, which typically require longer hospital stays, with costs for staged (S) reconstructions, after which patients usually do not require hospital admission. To our knowledge, single-staged and multi-staged flatfoot reconstructions have not previously been compared. This study will run in conjunction with one comparing rates of complications and reoperation, as well as patient-reported outcomes on function and pain, between S and NS flatfoot reconstruction. Overall, the goal is to optimize surgical management of PCFD by addressing both healthcare costs and patient outcomes.
At our academic centre with foot and ankle specialists, we selected one surgeon who primarily performs NS flatfoot reconstruction and another who primarily performs S procedures. Retrospective chart reviews of patients who underwent either S or NS flatfoot reconstruction were performed from November 2011 to August 2021. Length of operating time, number of primary surgeries, length of hospital admission, and number of reoperations were recorded. Cost analysis was performed using local health authority patient rates for non-residents as a proxy for health system costs. Operating room rates per hour and hospital ward per-diem rates, in Canadian dollars, were used. The analysis is currently ongoing.
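The cost model described above reduces to simple arithmetic: operating room hours times an hourly rate, plus ward days times a per diem. A minimal sketch follows; the rates and case times below are illustrative placeholders, not the actual non-resident rates or case data used in the analysis.

```python
# Hedged sketch of the per-patient cost model: OR time at an hourly rate
# plus ward stay at a per-diem rate. Both rates are hypothetical CAD
# figures for illustration, NOT the local health authority rates.
OR_RATE_PER_HOUR = 3000.0   # hypothetical operating-room rate (CAD/hour)
WARD_RATE_PER_DAY = 1500.0  # hypothetical in-patient per diem (CAD/day)

def reconstruction_cost(or_hours_per_stage, ward_days=0.0):
    """Total cost across all surgical stages plus any ward admission."""
    or_cost = sum(or_hours_per_stage) * OR_RATE_PER_HOUR
    ward_cost = ward_days * WARD_RATE_PER_DAY
    return or_cost + ward_cost

# Staged patient: two shorter procedures, typically no admission.
staged = reconstruction_cost([1.5, 1.5])
# Non-staged patient: one longer procedure plus the 3.65-day mean stay.
non_staged = reconstruction_cost([3.0], ward_days=3.65)
print(staged, non_staged)
```

Under these placeholder inputs the ward stay, not the operating time, drives the cost gap between the pathways, which mirrors the comparison the study sets out to make.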
Seventy-two feet from 66 patients were analyzed in the S group and 78 feet from 70 patients in the NS group. The average age in the S and NS groups was 49.64 ± 1.76 and 57.23 ± 1.68 years, respectively. The percentage of female patients in the S and NS groups was 63.89% and 57.69%, respectively. All NS patients stayed in hospital post-operatively, with an average length of stay of 3.65 ± 0.37 days. Only 10 patients in the S group required hospital admission.
The average total operating room cost across all stages for S patients was $12,303.12 ± $582.20. When in-patient ward costs for the S-group patients who required admission were included, the average combined operating room and ward cost after flatfoot reconstruction was $14,196.00 ± $1,070.01.
The average in-patient ward admission cost for NS patients after flatfoot reconstruction was $14,518.83 ± $1,476.94. The analysis of total operating room costs for NS patients is currently ongoing, and the statistical comparison of S versus NS reconstruction costs is pending.
Preliminary cost analysis suggests that multi-staged flatfoot reconstruction costs less than single-staged reconstruction. Once the full assessment and statistical analysis are complete, correlation with patient-reported outcomes and complication rates can guide future PCFD surgical management.
Preoperative talar valgus deformity increases the technical difficulty of total ankle replacement (TAR) and is associated with an increased failure rate. Deformity of ≥15° has been reported to be a contraindication to arthroplasty. The goal of the present study was to determine whether the operative procedures and clinical outcomes of TAR for treatment of end-stage ankle arthritis were comparable for patients with preoperative talar valgus deformity of ≥15° as compared to those with <15°. We will describe the evolving surgical technique being utilized to tackle these challenging cases.
Fifty ankles with preoperative coronal-plane tibiotalar valgus deformity of ≥15° (“valgus” group) and 50 ankles with valgus deformity of <15° (“control” group) underwent TAR. The cohorts were similar with respect to demographics and components used. All TARs were performed by a single surgeon. The mean duration of clinical follow-up was 5.5 years (minimum two years). Preoperative and postoperative radiographic measurements of coronal-plane deformity, Ankle Osteoarthritis Scale (AOS) scores and Short Form (SF)-36 scores were prospectively recorded. All ancillary (intraoperative) and secondary procedures, complications and measurements were collected.
The AOS pain and disability subscale scores decreased significantly in both groups. The improvement in AOS and SF-36 scores did not differ significantly between the groups at the time of final follow-up. The valgus group underwent more ancillary procedures during the index surgery (80% vs 26%). Tibiotalar deformity improved significantly toward a normal weight-bearing axis in the valgus group. Secondary postoperative procedures were more common in the valgus group (36%) than in the controls (20%). Overall, re-operation was not associated with poorer patient outcome scores. Metal component revision surgery occurred in seven patients (three valgus and four controls). These revisions included two deep infections (2%), one in each group, which were converted to hindfoot fusions. Therefore, 94% of the valgus group retained their original components at final follow-up.
Thus far, this is the largest reported study specifically evaluating TAR with significant preoperative valgus alignment, and it has the longest follow-up. Satisfactory midterm results were achieved in patients with valgus malalignment of ≥15°. The valgus cohort required more procedures during and after TAR, and novel techniques were more often needed to balance these replacements. Whilst longer-term studies are needed, valgus coronal-plane alignment of ≥15° should not be considered an absolute contraindication to TAR if the associated deformities are addressed.
Intraoperative range of motion (ROM) radiographs are routinely taken during scaphoidectomy and four-corner fusion surgery (S4CF) at our institution. It is not known whether intraoperative ROM predicts postoperative ROM. We hypothesized that patients with greater intraoperative ROM would have improved postoperative ROM at one year, but that this arc would be less than that achieved intraoperatively.
We retrospectively reviewed 56 patients who had undergone S4CF at our institution in the past 10 years. Patients younger than 18 years, those who underwent the procedure for reasons other than arthritis, those less than one year from surgery, and those who had since undergone wrist arthrodesis were excluded. Intraoperative ROM was measured from fluoroscopic images taken in flexion and extension at the time of surgery. Patients who met criteria were then invited to take part in a virtual assessment, and their ROM was measured using a goniometer. T-tests were used to measure differences between intraoperative and postoperative ROM, Pearson correlation was used to measure associations, and linear regression was conducted to assess whether intraoperative ROM predicts postoperative ROM.
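The analysis plan above (paired comparison of intraoperative versus postoperative ROM, correlation, and regression) can be sketched in a few lines; the ROM values below are synthetic degrees invented for illustration, not patient data.

```python
# Sketch of the described analysis on hypothetical paired ROM data (degrees):
# paired t-test for the intra- vs post-operative difference, Pearson
# correlation for association, and linear regression of postoperative
# on intraoperative ROM.
import numpy as np
from scipy import stats

intraop = np.array([40, 45, 50, 55, 60, 48, 52, 58, 44, 62], dtype=float)
postop = np.array([48, 50, 55, 63, 70, 55, 60, 66, 50, 72], dtype=float)

t_stat, t_p = stats.ttest_rel(postop, intraop)          # paired difference
r, r_p = stats.pearsonr(intraop, postop)                # association
slope, intercept, r_val, reg_p, se = stats.linregress(intraop, postop)

print(f"paired t p={t_p:.3g}, Pearson r={r:.2f}, slope={slope:.2f}")
```

Note that a significant paired difference (as the study found for extension) can coexist with a non-significant correlation: the former asks whether ROM changed on average, the latter whether the intraoperative value predicts the postoperative one.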
Nineteen patients, two of whom had bilateral surgery, agreed to participate. Mean age was 54 years; 14 participants were male and five were female. In the majority, the surgical indication was scapholunate advanced collapse; two participants had scaphoid nonunion advanced collapse. No difference was observed between intraoperative and postoperative flexion. On average there was a statistically significant increase of seven degrees of extension and a 12° arc of motion postoperatively. There were no statistically significant correlations between intraoperative and postoperative ROM for flexion, extension, or arc of motion.
Intraoperative ROM radiographs are not useful for predicting postoperative ROM, although postoperative extension and arc of motion did increase from the values measured intraoperatively.
Trapeziectomy with ligament reconstruction and tendon interposition (LRTI) with the flexor carpi radialis (FCR) tendon is one of the most common procedures for the treatment of thumb carpometacarpal (CMC) arthritis. An alternative method involves trapeziectomy alone (TA). The trapeziectomy with LRTI procedure was developed to theoretically improve biomechanical strength and hand function when compared to TA, which leaves an anatomical void proximal to the first metacarpal. The LRTI procedure takes longer to perform and includes an autologous tendon graft. The goal of this retrospective cohort study was to evaluate the clinical outcomes of trapeziectomy with or without LRTI at a minimum follow-up of 1 year.
A total of 43 adult patients who had undergone 58 surgical procedures (TA=36, LRTI=22) for CMC arthritis participated in the study. This single-surgeon retrospective cohort study sampled patients who underwent CMC arthroplasty with either the TA or LRTI technique between 2008 and 2020, at a minimum of one year post-operatively. Patients were evaluated subjectively (Disabilities of the Arm, Shoulder, and Hand (DASH) questionnaire) and objectively (hand/thumb strength, pre-/post-operative hand radiographs).
Both the TA and LRTI procedures provided good pain relief, motion, strength, and stability without any severe complications. There was no statistically significant difference in hand or thumb strength between the two groups. Radiography showed that compared to the preoperative status, the trapezial space decreased similarly between the two groups. There was no difference in size of collapse between TA and LRTI post-operatively.
The TA procedure had similar outcomes to LRTI and has the advantages of shorter surgical time, a shorter incision, and lower surgical complexity. TA provided trapezial space equivalent to LRTI after the operation. Future studies should investigate these two procedures in a head-to-head comparison rather than longitudinally, where both surgeon experience and time since procedure at follow-up may have affected results.
The Adams-Berger reconstruction is an effective technique for treating distal radioulnar joint (DRUJ) instability. Graft preparation techniques vary amongst surgeons with insufficient evidence to support one technique over another. Our study evaluated the biomechanical properties of four graft preparation techniques.
Extensor tendons were harvested from fresh-frozen porcine trotters obtained from a local butcher shop and prepared in one of three configurations (n=5 per group): tendon only; tendon prepared with non-locking, running suture (2-0 FiberLoop, Arthrex, Naples, FL) spaced at 6 mm intervals; and tendon prepared with suture spaced at 12 mm intervals. A fourth configuration of suture alone was also tested. Tendons were allocated to ensure comparable average diameters amongst groups. Biomechanical testing was performed using custom jigs simulating the radial and ulnar tunnels, attached to a Bose ElectroForce 3510 mechanical testing machine (TA Instruments). After being woven through the jigs, all tendons were sutured end-to-end with 2-0 PROLENE suture (Ethicon). Tendons then underwent a staircase cyclic loading protocol (5-25 Newtons [N] at 1 hertz [Hz] for 1000 cycles, then 5-50 N at 1 Hz for 1000 cycles, then 5-75 N at 1 Hz for 1000 cycles) until graft failure; samples that did not fail during the protocol were then loaded to failure. Samples were visually inspected for mode of failure after the protocol. A one-way analysis of variance was used to compare average tendon diameter; post-hoc Tukey tests were used to compare elongation and elongation rate. Survival to cyclic loading was analyzed using Kaplan-Meier survival curves with log-rank tests. Statistical significance was set at α = 0.05.
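As a rough illustration of the diameter-comparability check, the one-way ANOVA can be run as below; the diameters are synthetic values chosen only to match the reported group means, not the study measurements.

```python
# Sketch of the diameter-comparability check: one-way ANOVA across the
# three tendon configurations. Diameters (mm) are synthetic values whose
# means match the reported 4.17, 4.33, and 4.30 mm; they are NOT study data.
from scipy.stats import f_oneway

tendon_only = [3.90, 4.50, 4.00, 4.30, 4.15]    # mean 4.17
fiberloop_6mm = [4.10, 4.50, 4.45, 4.20, 4.40]  # mean 4.33
fiberloop_12mm = [4.10, 4.50, 4.30, 4.40, 4.20] # mean 4.30

f_stat, p = f_oneway(tendon_only, fiberloop_6mm, fiberloop_12mm)
# A non-significant p (> 0.05) indicates comparable average diameters,
# as reported; post-hoc Tukey tests would then compare elongation metrics.
print(f"F={f_stat:.2f}, p={p:.3f}")
```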
The average tendon diameter of each group was not statistically different [4.17 mm (tendon only), 4.33 mm (FiberLoop spaced 6 mm), and 4.30 mm (FiberLoop spaced 12 mm)]. The average survival of tendon augmented with FiberLoop was significantly higher than tendon only, and all groups had significantly improved survival compared to suture only. There was no difference in survival between FiberLoop spaced 6 mm and 12 mm. Elongation was significantly lower with suture compared to tendon augmented with FiberLoop spaced 6 mm. Elongation rate was significantly lower with suture compared to all groups. Modes of failure included rupture of the tendon, suture, or both at the simulated bone and suture and/or tendon interface, and elongation of the entire construct without rupture.
In this biomechanical study, augmentation of porcine tendons with FiberLoop suture spaced at either 6 or 12 mm for DRUJ reconstruction significantly increased survival to a staircase cyclic loading protocol, as suture material was significantly stiffer than any of the tendon graft configurations.
Distal radius fractures (DRFs) are common, and the indication for surgical treatment remains controversial in patients older than 60 years. The purpose of this study was to review and analyze the current evidence-based literature.
We performed a systematic review and meta-analysis according to PRISMA guidelines to evaluate the efficacy of volar locking plating (VLP) versus conservative treatment of DRF in patients over 60 years old. Electronic databases including MEDLINE, CENTRAL, Embase, Web of Science and ClinicalTrials.gov were searched from inception to October 2020 for randomized controlled trials. Reference lists of relevant articles were also searched.
Two reviewers independently screened and extracted the data. Main outcomes included functional status: wrist range of motion, validated scores, and grip strength. Secondary outcomes included post-operative complications and radiologic assessment.
From 3009 screened citations, 5 trials (539 patients) met the inclusion criteria. All trials in this random-effects meta-analysis were at moderate risk of bias due to lack of blinding. Differences in the DASH score (MD −5.91; 95% CI −8.83 to −3.00), PRWE score (MD −9.07; 95% CI −14.57 to −3.57) and grip strength (MD 5.12; 95% CI 0.59 to 9.65) were statistically significant and favored VLP. No effect was observed for range of motion. Adverse events were frequent in both treatment groups, and the reoperation rate was higher in the VLP group.
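Pooled mean differences like those above rest on inverse-variance weighting. The sketch below shows the simpler fixed-effect version of that arithmetic (the study used a random-effects model, which additionally incorporates between-trial variance); the trial-level numbers are hypothetical, not the five included RCTs.

```python
# Sketch of inverse-variance (fixed-effect) pooling of per-trial mean
# differences. Each 95% CI is converted to a standard error via
# (upper - lower) / 3.92, i.e. the CI half-width divided by 1.96.
import math

trials = [  # (mean difference, ci_lower, ci_upper) -- illustrative only
    (-4.0, -9.0, 1.0),
    (-7.0, -12.0, -2.0),
    (-6.0, -10.0, -2.0),
]

weights, weighted = [], []
for md, lo, hi in trials:
    se = (hi - lo) / 3.92       # standard error from the 95% CI
    w = 1.0 / se ** 2           # inverse-variance weight
    weights.append(w)
    weighted.append(w * md)

pooled = sum(weighted) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))
ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
print(f"pooled MD={pooled:.2f}, 95% CI ({ci[0]:.2f}, {ci[1]:.2f})")
```

Narrower trial CIs yield larger weights, so precise trials dominate the pooled estimate, which is why the pooled CI is tighter than any single trial's.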
VLP may provide better functional outcomes in patients older than 60 years. More RCTs are needed to evaluate whether the risks and complications of VLP outweigh its benefits.
This study aimed to determine if multiple failed closed reductions (CRs) prior to fixation of distal radius fracture is associated with the odds of complication-related reoperation up to two years post fracture.
We identified all distal radius fracture patients aged 18 or older between 2003 and 2016 in Ontario, Canada from linked administrative databases. We used procedural and fee codes to identify patients who underwent primary outpatient surgical fixation between 8 and 14 days post-fracture, and grouped patients by the number of CRs they underwent prior to definitive fixation. We excluded patients who underwent fixation within 7 days of their fracture to exclude more complex fracture types and/or patients who required more immediate surgery. We used intervention and diagnostic codes to identify reoperations within two years of fixation. We used multi-level multivariable logistic regression to assess the association between the number of CRs and reoperation while accounting for clustering at the surgeon level and adjusting for other relevant covariables. We performed an age-stratified analysis to determine whether this association differed by patient age.
We identified 5,464 patients with distal radius fractures managed with outpatient fixation between 8 and 14 days of their fracture. A total of 1,422 patients (26.0%) underwent primary surgical fixation (mean time to fixation 10.6±2.0 days), while 3,573 (65.4%) underwent secondary fixation following one failed CR (mean time to fixation 10.1±2.2 days, time to CR 0.3±1.2 days), and 469 (8.6%) underwent fixation following two failed CRs (mean time to fixation 10.8±2.2 days, time to first CR 0.0±0.1 days, time to second CR 4.7±3.0 days). The CR groups had higher proportions of female patients than the primary group, and patients who underwent two failed CRs were more likely to be fixed with a plate (vs. wires or pins). The unadjusted proportion of reoperations was significantly higher in the group who underwent two failed CRs (7.5%) than in those who underwent primary fixation (4.4%) or fixation following one failed CR (4.9%). Following covariable adjustment, patients who underwent two failed CRs had significantly higher odds of reoperation (odds ratio [OR] 1.72 [1.12-2.65]) than those who underwent primary fixation. This association appeared to worsen for patients over the age of 60 (OR 3.93 [1.76-8.77]). We found no significant difference in the odds of reoperation between patients who underwent primary fixation and those who underwent secondary fixation following one failed CR.
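The adjusted odds ratio comes from the multi-level regression model, but the unadjusted figure can be reproduced from the reported proportions alone, as a quick consistency check:

```python
# Sketch of the unadjusted odds ratio from the reported reoperation
# proportions: 7.5% after two failed CRs vs 4.4% after primary fixation.
# Proportions are the rounded values quoted in the abstract.
def odds(p):
    """Convert a proportion to odds."""
    return p / (1.0 - p)

or_two_cr_vs_primary = odds(0.075) / odds(0.044)
print(round(or_two_cr_vs_primary, 2))  # prints 1.76
```

The unadjusted value (≈1.76) sits close to the adjusted OR of 1.72 reported after covariable adjustment, suggesting confounding shifted the estimate only modestly.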
We found that patients with distal radius fractures who undergo multiple CRs prior to definitive fixation have significantly higher odds of reoperation than those who undergo primary fixation or fixation following a single CR. This suggests that surgeons should offer fixation, if indicated, after a single failed CR rather than attempt multiple closed reductions. Prospective studies are required to confirm these findings.
Pathologies such as scapholunate advanced collapse (SLAC), scaphoid nonunion advanced collapse (SNAC) and Kienböck's disease can lead to arthritis of the wrist. Depending on the articular surfaces involved, motion-preserving surgical procedures can be performed. Proximal row carpectomy (PRC) and four-corner fusion (4CF) are tried and tested surgical options; however, prospective studies comparing the two methods with sufficient sample sizes are limited in the literature.
The purpose of this study was to prospectively compare the early results of PRC vs 4CF performed in a single centre.
Patients with wrist arthritis were prospectively enrolled (2015 to 2021) at a single centre in Vancouver, Canada. Thirty-six patients (39 wrists) underwent either PRC (n=18) or 4CF (n=21) according to pre-operative clinical, radiographic, and intra-operative assessment. Patient-Rated Wrist Evaluation (PRWE) scores were obtained preoperatively, as well as at six months and one year post-operatively. Secondary outcomes were range of motion (ROM) of the wrist, grip strength, and reoperation and complication rates. Statistical significance was set at p<0.05.
Respectively for PRC and 4CF, the average baseline PRWE scores were 61.64 (SD=19.62) and 63.67 (SD=20.85). Scores improved significantly at the six-month mark to 38.81 (SD=22.95) (p=0.031) and 41.33 (SD=26.61) (p=0.007), and further at the 12-month mark to 33.11 (SD=23.42) (p=0.007) and 36.29 (SD=27.25) (p=0.002).
There was no statistical difference between the two groups at any time point.
Regarding ROM, statistically significant differences were seen in the PRC group in pronation at the six-month mark (from an average of 72.18° to 61.56°) and in flexion at the 12-month mark (from 47.89° to 33.50°). No other parameter showed a statistically significant postoperative change.
In the 4CF group, only flexion at the 12-month mark showed a statistically significant change, from an average of 48.81° to 38.03°.
There was no statistical difference in pre-operative ROM between the two groups.
One patient in the 4CF group required revision for delayed union, and three patients developed ulnar-sided wrist pain.
Patients undergoing PRC and 4CF showed significant improvement in postoperative PRWE scores, which reflects the existing literature. With 4CF, care must be taken to minimise ulnar-sided wrist pain by relatively shortening the ulnar-sided carpal column mass.
ROM analysis showed that patients lost some wrist flexion at the 12-month mark with both PRC and 4CF; other ROM parameters were unchanged.
SARS-CoV-2 emerged in Wuhan, China in December 2019, causing pneumonia and resulting in the pandemic commonly known as COVID-19. The pandemic led to significant changes in daily life due to restrictions such as social distancing, quarantining, stay-at-home orders and the closure of restaurants and shops. The psychological effects of this uncertainty and of these changes have been shown to be significant. This prospective study investigates the mental health effects of the pandemic on hand and wrist patients seen in our clinic.
A prospective database on wrist pain was used to identify patients seen in our hand clinic from January 1, 2018 to December 10, 2021. All participants had been diagnosed with either radial-sided or ulnar-sided wrist pain. The Center for Epidemiologic Studies Depression (CES-D) Scale was used to assess participants' mental health before and during the pandemic. An independent-samples t-test was used to compare the scores of the two groups.
A total of 437 CES-D questionnaires were collected during this period: 118 belonged to the pandemic group and 319 to the pre-pandemic group. A significant difference (p<0.05) in CES-D score was observed between the pre-pandemic and pandemic groups. The mean score was 9.23 (SD 8.94) for the pre-pandemic group and 12.81 (SD 11.45) for the pandemic group. Despite the increase, the mean score did not exceed the cut-off of 16 or greater used to screen for depression.
Our results indicate a slight increase in depression scores in hand and wrist patients, but not above the cut-off of 16 indicating significant risk of depression. Other global studies have shown an increase in depression in the general public. Our mild results might be attributable to the fact that British Columbia did not implement severe restrictions compared with other countries or regions (e.g., no stay-at-home orders). Additionally, our study population was skewed toward middle-aged and older patients, and age might be a factor in keeping scores down.
There is no consensus regarding the optimum frequency of ultrasound for monitoring the response to Pavlik harness (PH) treatment in developmental dysplasia of hip (DDH). The purpose of our study was to determine if a limited-frequency hip ultrasound (USS) assessment in children undergoing PH treatment for DDH had an adverse effect on treatment outcomes when compared to traditional comprehensive ultrasound monitoring.
This study was a single-center non-inferiority randomized controlled trial. Children younger than six months of age with dislocated, dislocatable or stable dysplastic hips undergoing a standardized PH treatment program were randomized, once stability had been achieved, to our current standard USS monitoring protocol (every clinic visit) or to a limited-frequency ultrasound protocol (USS only until hip stability and then at the end of treatment). Groups were compared on alpha angle at the end of treatment, acetabular indices (AI) and IHDI grade on follow-up radiographs at one year post-harness, and complication rates. The premise was that if there were no differences in these outcomes, either protocol could be deemed safe and effective.
One hundred patients were recruited to the study; after exclusions, 42 patients completed the standard protocol (SP) and 36 completed the limited protocol (LP). There was no significant difference in mean age between the groups at follow-up x-ray (SP: 17.8 months; LP: 16.6 months; p=0.26). There was no difference between the groups in mean alpha angle at the end of treatment (SP: 69°; LP: 68.1°; p=0.25). There was no significant difference in mean right AI at follow-up (SP: 23.1°; LP: 22.0°; p=0.26), nor on the left (SP: 23.3°; LP: 22.8°; p=0.59). All hips in both groups were IHDI grade 1 at follow-up. The only complication was one femoral nerve palsy, in the SP group. In addition, the LP group underwent a 60% reduction in USS use once stable.
We found that once dysplastic or dislocated hips were reduced and stable on USS, a limited-frequency ultrasound protocol was not associated with inferior complication or radiographic outcome profiles compared with the standardized PH treatment pathway. Our study supports reducing the frequency of ultrasound assessment during PH treatment of hip dysplasia. Minimizing the need for expensive, time-consuming, in-person health care interventions is critical to reducing health care costs, improving patient experience and assisting the move to remote care. Removing the need for USS assessment at every PH check will extend care to centers where USS is not routinely available and will facilitate the establishment of virtual care clinics where clinical examination may be performed remotely.
The Pavlik harness (PH) is commonly used to treat infantile dislocated hips. Variability exists in the duration of brace treatment after successful reduction of the dislocated hip. In this study we evaluate the effect of prescribed time in brace on acetabular index (AI) at two years of age using a prospective, international, multicenter database.
We retrospectively studied prospectively enrolled infants with at least one dislocated hip that were initially treated with a PH and had a recorded AI at two-year follow-up. Subjects were treated at one of two institutions. Institution 1 used the PH until they observed normal radiographic acetabular development. Institution 2 followed a structured 12-week brace treatment protocol. Hip dislocation was defined as less than 30% femoral head coverage at rest on the pre-treatment ultrasound or IHDI grade III or IV on the pre-treatment radiograph.
Fifty-three hips met our inclusion criteria. Hips from Institution 1 were braced three times longer than hips from Institution 2 (adjusted mean 8.9±1.3 months vs 2.6±0.2 months) (p<0.001). Institution 1 had an 88% success rate and Institution 2 an 85% success rate in achieving hip reduction (p=0.735). At two-year follow-up, we observed no significant difference in AI between Institution 1 (adjusted mean 25.6±0.9°) and Institution 2 (adjusted mean 23.5±0.8°) (p=0.1). However, 19% of patients from Institution 1 and 44% from Institution 2 were at or below the 50th percentile of previously published age- and sex-matched normal AI data (p=0.049). Also, 27% (7/26) of hips from Institution 1 had significant acetabular dysplasia, compared with 22% (6/27) from Institution 2 (p=0.691). We found no correlation between age at initiation of bracing and AI at two-year follow-up (p=0.071).
Our findings suggest that prolonged brace treatment does not result in an improved acetabular index at age two years. Hips treated at Institution 1 had the same AI at age two years as hips treated at Institution 2, while spending about one-third the time in a brace. We recommend close follow-up for all children treated for dislocated hips, as roughly one-quarter of infants had acetabular index measurements at or above the 90th percentile of normal. Continued follow-up of this prospective cohort will be critical to determine how many children require acetabular procedures during childhood. The PH can successfully treat dislocated infant hips; however, prolonged brace treatment did not result in improved acetabular development at two-year follow-up.
Diagnostic interpretation error of paediatric musculoskeletal (MSK) radiographs can lead to late presentation of injuries that subsequently require more invasive surgical interventions with increased risks of morbidity. We aimed to determine the radiograph factors that resulted in diagnostic interpretation challenges for emergency physicians reviewing pediatric MSK radiographs.
Emergency physicians provided diagnostic interpretations on 1,850 pediatric MSK radiographs via their participation in a web-based education platform. From this data, we derived interpretation difficulty scores for each radiograph using item response theory. We classified each radiograph by body region, diagnosis (fracture/dislocation absent or present), and, where applicable, the specific fracture location(s) and morphology(ies). We compared the interpretation difficulty scores by diagnosis, fracture location, and morphology. An expert panel reviewed the 65 most commonly misdiagnosed radiographs without a fracture/dislocation to identify normal imaging findings that were commonly mistaken for fractures.
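As a heavily simplified stand-in for the item-response-theory difficulty score, each radiograph can be summarized by the log-odds of misinterpretation across physicians. A proper IRT fit, as used in the study, jointly models item difficulty and physician ability; the function below only illustrates the direction of the scale.

```python
# Simplified, illustrative difficulty summary: log-odds of diagnostic
# error for one radiograph. This is NOT the study's IRT model, which
# also estimates each physician's ability.
import math

def difficulty(errors, attempts):
    """Log-odds of a diagnostic error for one radiograph."""
    p_error = errors / attempts
    return math.log(p_error / (1.0 - p_error))

easy = difficulty(5, 100)   # rarely missed -> strongly negative score
hard = difficulty(60, 100)  # often missed -> positive score
print(easy, hard)
```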
We included data from 244 emergency physicians, resulting in 185,653 unique radiograph interpretations, 42,689 (23.0%) of which were diagnostic errors. For humerus, elbow, forearm, wrist, femur, knee, and tibia-fibula radiographs, those without a fracture had higher interpretation difficulty scores than those with a fracture; the opposite was true for hand, pelvis, foot, and ankle radiographs (p<0.004 for all comparisons). The descriptive review demonstrated that specific normal anatomy, overlapping bones, and external artefact from muscle or skin folds were often mistaken for fractures. Difficulty scores differed significantly by anatomic location of the fracture in the elbow, pelvis, and ankle (p<0.004 for all comparisons). Ankle and elbow growth plate fractures, fibular avulsion fractures, and humeral condylar fractures were more difficult to diagnose than other fracture patterns (p<0.004 for all comparisons).
We identified actionable learning opportunities in paediatric MSK radiograph interpretation for emergency physicians. We will use this information to design targeted education to referring emergency physicians and their trainees with an aim to decrease delayed and missed paediatric MSK injuries.
Over 500 supracondylar humerus fractures (SCHF) are treated at our institution each year. Our standard post-operative pathway includes a 3-week visit for splint removal, wire removal, and radiographs. Subsequent follow-up occurs at 12 weeks for a clinical examination. In an effort to minimize unnecessary follow-up visits, we investigated whether photographs and/or patient-reported outcome measure (PROM) scores could identify patients who do not need routine 3-month in-person follow-up.
At the 3-month visit, 248 SCHF patients (mean age 6.2 years; range 0.75 to 11 years) had bilateral elbow range of motion (ROM) and carrying angles measured, and photographs were taken documenting frontal and sagittal alignment of the injured and uninjured upper extremities in both maximum elbow flexion and extension. Two independent assessors made the same measurements from the clinical photographs for comparison with the clinical measurements. Two PROMs, the Self-Assessment Questionnaire (SAQ: 0 best to 14 worst) and the QuickDASH (0 best to 100 worst), were completed at the 3-month visit.
Inter-rater reliability of the photograph measurements was excellent (Kappa: 0.88-0.93), but the measurements were only weakly concordant with clinical measurements (carrying angle Kappa=0.51; max flexion Kappa=0.68; max extension Kappa=0.64). The SAQ correlated moderately with the QuickDASH (Kappa=0.59) and performed better at identifying patients with abnormalities. An SAQ score ≥4 identified patients meeting 3-month follow-up criteria, with sensitivity 36.1%, specificity 96.8% and negative predictive value (NPV) 87%.
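The reported screening metrics follow from a standard 2×2 confusion matrix. The sketch below shows the arithmetic; the counts are hypothetical, chosen only to roughly reproduce the reported sensitivity, and are not the study data.

```python
# Screening-metric arithmetic for a threshold test (here, SAQ >= 4).
# The 2x2 counts are hypothetical, NOT the study's actual tallies.
def screening_metrics(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)  # abnormal cases flagged by the test
    specificity = tn / (tn + fp)  # normal cases correctly cleared
    npv = tn / (tn + fn)          # negatives that are truly normal
    return sensitivity, specificity, npv

# Hypothetical: 36 patients meet follow-up criteria, 212 do not.
sens, spec, npv = screening_metrics(tp=13, fp=7, fn=23, tn=205)
print(f"sens={sens:.1%}, spec={spec:.1%}, NPV={npv:.1%}")
```

A high NPV with low sensitivity, as here, means a negative screen is reassuring mostly because abnormality is uncommon, which is why the abstract calls for a more sensitive fracture-specific PROM.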
We did not find the photographs to be reliable. Although the SAQ score has a high NPV, a more sensitive fracture-specific PROM is needed to identify patients who do not need a 3-month follow-up visit.
In response to the COVID-19 pandemic, public health measures were implemented to limit virus spread. After initial implementation of a province-wide lockdown (Stage 1), there followed a sequential easing of restrictions through Stages 2 and 3 over a 6-month period from March to September 2020 (Table 1). We aimed to determine the impact of COVID-19 public health measures on the epidemiology of operative paediatric orthopaedic trauma and to determine the differential effects of each stage of lockdown.
A retrospective cohort study was performed comparing all emergency department (ED) visits for musculoskeletal trauma and operatively treated orthopaedic trauma cases at a Level-1 paediatric trauma center during Mar-Sep 2020 (pandemic) with those during Mar-Sep 2019 (pre-pandemic). All operative cases were analyzed based on injury severity, mechanism of injury (MOI), and anatomic location (AL). Comparisons between groups were assessed using chi-square testing for categorical variables, and Student's t-tests and Fisher's exact tests for continuous variables.
During the pandemic period, ED visits for orthopaedic trauma decreased by 23% compared to pre-pandemic levels (1370 vs 1790 patients) and operative treatment decreased by 28% (283 vs 391 patients). There was a significant decrease in the number of operative cases per day in lockdown Stage 1 (1.25 pandemic vs 1.90 pre-pandemic; p<0.001) and Stage 2 (1.65 pandemic vs 3.03 pre-pandemic; p<0.001) but no difference in operative case number during Stage 3 (2.18 pandemic vs 2.45 pre-pandemic; p=0.35). Significant differences were found in MOI and AL during Stage 1 (p<0.001) and Stage 2 (p<0.001) compared to pre-pandemic. During Stages 1 and 2, playground injuries decreased by 95% and 82%, respectively; sports injuries decreased by 79% and 13%; and trampoline injuries decreased by 44% and 43%, compared to pre-pandemic. However, self-propelled transit injuries (bicycles/skateboards) increased during Stages 1 and 2 by 67% and 28%, respectively, compared to pre-pandemic. During lockdown Stage 3 there were no differences in MOI or AL. There were no significant differences in injury severity in any lockdown stage compared to pre-pandemic.
COVID-19 lockdown measures significantly reduced the burden of operative paediatric orthopaedic trauma. Differences in volume, mechanism and pattern of injuries varied by lockdown stage offering evidence of the burden of operative trauma related to specific childhood activities.
These findings will assist health systems planning for future pandemics and suggest that improvements in safety of playgrounds and self-propelled transit are important in reducing severe childhood injury requiring operative intervention.
For any figures or tables, please contact the authors directly.
Timely and competent treatment of paediatric fractures is paramount to a healthy future working population. Anecdotal evidence suggests that children travel greater distances to obtain care compared to adults, causing economic and geographic inequities. This study aims to quantify the informal regionalization of children's fracture care in Ontario. The results could inform future policy on resource distribution and planning of the provincial health care system.
A retrospective cohort study was conducted examining two of the most common paediatric orthopaedic traumatic injuries, femoral shaft and supracondylar humerus fractures (SCH), in parallel over the last 10 years (2010-2020) using multiple linked administrative databases housed at the Institute for Clinical Evaluative Sciences (ICES) in Toronto, Ontario. We compared the distance travelled by these pediatric cohorts to clinically equivalent adult fracture patterns (distal radius fracture (DR) and femoral shaft fracture). Patient cohorts were identified based on treatment codes and distances were calculated from a centroid of patient home forward sortation area to hospital location. Demographics, hospital type, and closest hospital to patient were also recorded.
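Distances of the kind described here, from a forward sortation area (FSA) centroid to a hospital, are typically computed as great-circle distances between two coordinate pairs. A minimal sketch using the haversine formula; the coordinates below (roughly Toronto and Ottawa) are illustrative, not actual study data:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (lat, lon) points given in degrees."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Illustrative FSA-centroid-to-hospital distance (approx. Toronto to Ottawa):
d = haversine_km(43.65, -79.38, 45.42, -75.70)
```

Administrative-data studies usually report this straight-line distance rather than road distance, which slightly understates actual travel burden.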
For common upper extremity fracture care, 84% of children underwent surgery at specialized centers, requiring significant travel (mean 44km). Conversely, 67% of adults were treated locally, travelling a mean of 23km. Similarly, two-thirds of adult femoral shaft fractures were treated locally (mean travel distance 30km), while most children (84%) with femoral shaft fractures travelled an average of 63km to specialized centers. Children who live in rural areas travel on average 51km farther than their adult rural-residing counterparts for all fracture care. Four institutions provide over 75% of the fracture care for children, whereas 22 institutions distribute the same case volume in adults.
There is an unplanned concentration of pediatric fracture care in specialized centers in Ontario, placing undue burden on pediatric patients and inadvertently stressing the surgical resources of a small handful of hospitals. In contrast, adult fracture care naturally self-organizes with proportionate distribution without policy-directed systemization. Patient care equity and appropriate resource allocation cannot be achieved without appropriate systemization of pediatric fracture care.
Background: Anterior cruciate ligament (ACL) injury and re-injury rates are high and continue to rise in adolescents. After surgical reconstruction, less than 50% of patients return to their pre-injury level of physical activity. Clearance for return-to-play and rehabilitation progression typically requires assessment of performance during functional tests. Pain may impact this performance. However, the patient's level of pain is often overlooked during these assessments.
Purpose: To investigate the level of pain during functional tests in adolescents with ACL injury.
Fifty-nine adolescents with ACL injury (ACLi; female n=43; 15 ± 1 yrs; 167.6 ± 8.4 cm; 67.8 ± 19.9 kg) and sixty-nine uninjured (CON; female n=38; 14 ± 2 yrs; 165.0 ± 10.8 cm; 54.2 ± 11.5 kg) performed a series of functional tests. These tests included: maximum voluntary isometric contraction (MVIC) and isokinetic knee flexion-extension strength tests, single-limb hop tests, double-limb squats, countermovement jumps (CMJ), lunges, drop-vertical jumps (DVJ), and side-cuts. Pain was reported on a 5-point Likert scale, with 1 indicating no pain and 5 indicating extreme pain for the injured limb of the ACLi group and non-dominant limb for the CON group, after completion of each test. Chi-Square test was used to compare groups for the level of pain in each test. Analysis of the level of pain within and between groups was performed using descriptive statistics.
The distribution of the level of pain differed between groups for all functional tests (p≤0.008), except for the ankle plantar flexion and hip abduction MVICs (Table 1). The percentage of participants reporting pain was higher in the ACLi group than in the CON group in all tests (Figure 1). Participants most often reported pain during the strength tests involving the knee joint, followed by the hop tests and dynamic tasks, respectively. More specifically, the knee extension MVIC was the test most frequently reported as painful (70% of the ACLi group), followed by the isokinetic knee flexion-extension test (65% of the ACLi group). Among the hop tests, pain was most often reported during the timed 6m hop (53% of ACLi), and among the dynamic tasks, during the side-cut (40% of ACLi) (Figure 1). Furthermore, the tests that led to the highest levels of pain (severe or extreme) were the cross-hop (9.8% of ACLi), CMJ (7.1% of ACLi), and the isokinetic knee flexion-extension test (11.5% of ACLi) (Table 1).
Adolescents with and without ACL injury reported different levels of pain for all functional tasks, except for ankle and hip MVICs. The isokinetic knee flexion-extension test resulted in greater rates of severe or extreme pain and was also the test most frequently reported as painful. Functional tests that frequently cause pain or severe level of pain (e.g., timed 6m and cross hops, side-cut, knee flexion/extension MVICs and isokinetic tests) might not be the first test choices to assess function in patients after ACL injury/reconstruction. Reported pain during functional tests should be considered by clinicians and rehabilitation team members when evaluating a patient's readiness to return-to-play.
For any figures or tables, please contact the authors directly.
Olecranon fractures are common injuries representing roughly 5% of pediatric elbow fractures. The traditional surgical management is open reduction and internal fixation with a tension band technique where the pins are buried under the skin and tamped into the triceps. We have used a modification of this technique, where the pins have been left out of the skin to be removed in clinic. The purpose of the current study is to compare the outcomes of surgically treated olecranon fractures using a tension-band technique with buried k-wires (PINS IN) versus percutaneous k-wires (PINS OUT).
We performed a retrospective chart review of all pediatric patients (18 years of age or less) with olecranon fractures surgically treated at a pediatric academic center from 2015 to the present. Fractures were identified using ICD-10 codes and manually screened for isolated olecranon fractures. Patients were excluded if they had polytrauma or metabolic bone disease, were treated non-operatively, or if a non-tension-band technique was used (e.g., plate and screws). Patients were then divided into 2 groups: olecranon fractures treated with a tension-band technique with buried k-wires (PINS IN) and with percutaneous k-wires (PINS OUT). In the PINS OUT group, the k-wires were removed in clinic at the surgeon's discretion once adequate fracture healing was identified. The 2 groups were then compared for demographics, time to mobilization, fracture healing, complications, and return to the OR.
A total of 35 patients met inclusion criteria. There were 28 patients in the PINS IN group with an average age of 12.8 years, of whom 82% were male and 43% fractured their right olecranon. There were 7 patients in the PINS OUT group with an average age of 12.6 years, of whom 57% were male and 43% fractured their right olecranon. All patients in both groups were treated with open reduction internal fixation with a tension-band technique. In the PINS IN group, 64% were treated with 2.0 k-wires and various materials for the tension band (82% suture, 18% cerclage wire). In the PINS OUT group, 71% were treated with 2.0 k-wires and all were treated with sutures for the tension band. The PINS IN group was faster to mobilize (3.4 weeks (range 2-5 weeks) vs 5 weeks (range 4-7 weeks), p=0.01) but had a significantly higher complication rate than the PINS OUT group (6 vs 0, p=0.0001) and a significantly higher rate of return to the OR (71% vs 0%, p=0.0001), mainly for hardware irritation or limited range of motion. All fractures in both groups healed within 7 weeks.
Treatment of pediatric olecranon fractures with a suture tension-band technique and percutaneous k-wires is a safe alternative to the traditional buried k-wire technique. The PINS OUT technique, although requiring longer immobilization, may lead to fewer complications and decreased return to the OR for hardware irritation and limited ROM.
The routine use of intraoperative vancomycin powder to prevent postoperative wound infections has not been borne out in the literature for the pediatric spine population. The goal of this study is to determine the impact of vancomycin powder on postoperative wound infection rates and its potential effect on the microbiology of those infections.
A retrospective analysis of the Harms Study Group database of 1269 adolescent idiopathic scoliosis patients was performed. Patients who underwent a posterior fusion from 2004-2018 were analyzed. A comparative analysis of postoperative infection rates was performed between patients who received vancomycin powder and those who did not. Statistical significance was determined using the chi-squared test. Additionally, the microbiology of infected patients was examined.
In total, 765 patients in the vancomycin group (VG) were compared to 504 patients in the non-vancomycin group (NVG). NVG had a significantly higher rate of deep wound infection (p<0.0001) and a higher associated reoperation rate compared to VG (p<0.0001). The groups were compared for age, gender, race, weight, surgical time, blood loss, number of levels instrumented, and pre-operative curve magnitude; there were significant differences between the groups for race (p<0.0001), surgical time (p=0.0033), and blood loss (p=0.0021). In terms of microbiology, VG grew P. acnes (n=2) and Serratia (n=1), whereas NVG grew P. acnes (n=1) and gram-positive bacilli (n=1). The remaining cultures were negative.
The use of intraoperative vancomycin powder in adolescent idiopathic scoliosis appears to contribute significantly to deep wound infection prevention and a reduction in associated reoperations. Based on this study's limited culture data, vancomycin does not appear to alter the microbiology of deep wound infections.
Guided growth is commonly performed by placing an extra-periosteal two-hole plate across the growth plate with one epiphyseal and one metaphyseal screw. Recent work by Keshet et al. (2019) investigated the efficacy of removing the metaphyseal screw only (“sleeper plate”) after correction. They concluded the practice to be unnecessary, as only 19% of patients showed recurrence of deformity. The aim of this study is to examine the incidence of rebound and of undesired bony in-growth of the plate (“tethering”) after metaphyseal screw removal only.
In this retrospective case series, patient data on 144 plates inserted around the knee were obtained. Plates still in situ at the time of study (n=69) and plates with full hardware removal (n=50) were excluded. The remaining 25 plates had only the metaphyseal screw removed after deformity correction. We analyzed the rates of rebound, tethering, and maintenance of correction in two age groups at latest follow-up (mean 3.5 years; range 1.25 to 5). Fisher's exact test with the Freeman-Halton extension was used to analyze the two-by-three contingency table.
Twenty-five plates were identified as “sleeper plates” in our series. Thirteen plates (52%) maintained the achieved correction after a mean of 21 months (4 to 39), nine plates (36%) required screw re-insertion due to rebound after a mean of 22 months (12 to 48) from screw removal, and four plates (16%) showed tethering with undesired continuation of guided growth after a mean of 14 months (7 to 22) from screw removal. Younger patients (by age at time of plate insertion) had higher rates of rebound and tethering (p=.0112, Fisher's exact test). All tethering occurred in titanium plates; none occurred in steel plates.
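The Freeman-Halton extension of Fisher's exact test enumerates every table with the same row and column totals as the observed 2x3 table and sums the probabilities of those no more likely than the observed one. A self-contained sketch of that procedure (the counts below are hypothetical, not this study's table):

```python
from fractions import Fraction
from math import factorial

def freeman_halton_2x3(row1, row2):
    """Exact p-value for a 2x3 contingency table (Fisher-Freeman-Halton test)."""
    cols = [a + b for a, b in zip(row1, row2)]
    r1, r2 = sum(row1), sum(row2)
    n = r1 + r2
    # Constant factor of the hypergeometric probability for fixed margins.
    const = Fraction(
        factorial(r1) * factorial(r2)
        * factorial(cols[0]) * factorial(cols[1]) * factorial(cols[2]),
        factorial(n),
    )

    def prob(a, b, c, d, e, f):
        denom = (factorial(a) * factorial(b) * factorial(c)
                 * factorial(d) * factorial(e) * factorial(f))
        return const / denom

    observed = prob(*row1, *row2)
    p = Fraction(0)
    # Enumerate all 2x3 tables with the same margins.
    for a in range(min(r1, cols[0]) + 1):
        for b in range(min(r1 - a, cols[1]) + 1):
            c = r1 - a - b
            if 0 <= c <= cols[2]:
                q = prob(a, b, c, cols[0] - a, cols[1] - b, cols[2] - c)
                if q <= observed:  # table as extreme or more extreme
                    p += q
    return float(p)

p_flat = freeman_halton_2x3([5, 5, 5], [5, 5, 5])      # perfectly balanced table
p_skew = freeman_halton_2x3([10, 0, 0], [0, 10, 10])   # highly uneven table
```

Exact rational arithmetic via `Fraction` avoids the floating-point tolerance issues that arise when comparing nearly equal table probabilities.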
The sleeper plate is an acceptable treatment strategy for coronal deformities around the knee. Rebound and tethering are potential outcomes that occur in younger patients and should be disclosed to patients; titanium plates may increase the risk of tethering, although further long-term follow-up is needed. We stress the importance of close post-operative follow-up to identify tethering early and prevent overcorrection.
The sleeper plate technique is a viable option in younger children with congenital abnormalities, however, continued monitoring of alignment is necessary after screw removal to check for rebound and tethering.
A huge commitment is required from patients and families who undergo a limb reconstruction procedure using the hexapod frame. This includes turning the struts on the frame, pin site care, and intensive rehabilitation. Montpetit et al (2009) found that function, participation, and engagement in regular activities of daily living are severely impacted during the hexapod lengthening period. Due to the long duration and burden for families, it is imperative that healthcare professionals understand the impact that the hexapod frame has on functional abilities and health related quality of life (HRQL).
This project involved a retrospective review of prospectively collected data on function and HRQL during two periods of time: (1) when the hexapod frame is applied on the child's lower extremity and (2) when the lengthening phase is completed and the hexapod frame is removed. Data from 38 children (mean age 12 years, SD 3.8) who underwent lower extremity reconstruction using the hexapod frame and completed either or both of the Pediatric Quality of Life Inventory 4.0 Generic Core Scale (PedsQL) and the Pediatric Outcomes Data Collection Instrument (PODCI) were included. Analysis included standardized response means, the non-parametric Wilcoxon test, and effect size calculations.
A Wilcoxon signed-rank test for children who completed pre- and post-frame PODCIs revealed that scores were significantly greater once the hexapod frame was removed (Md=85.10, n=10) compared to during treatment (Md=66.50, n=10), with a large effect size (r=1.45). Similarly, PedsQL scores improved post frame removal (Md=66.30, n=10) compared to during treatment (Md=53.34, n=10), with a medium effect size (r=0.62). All subtests improved once the frame was removed.
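The Wilcoxon signed-rank test and an effect size of the kind reported above can be computed from paired pre/post scores. A minimal pure-Python sketch using the normal-approximation z statistic and r = |z|/sqrt(n); the paired scores below are made up for illustration, not the study's data:

```python
import math

def wilcoxon_effect_size(pre, post):
    """Wilcoxon signed-rank z (normal approximation) and effect size r = |z|/sqrt(n)."""
    diffs = [b - a for a, b in zip(pre, post) if b != a]  # drop zero differences
    n = len(diffs)
    # Rank the absolute differences, averaging ranks across ties.
    ordered = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and abs(diffs[ordered[j + 1]]) == abs(diffs[ordered[i]]):
            j += 1
        avg = (i + j) / 2 + 1  # average 1-based rank of the tied block
        for k in range(i, j + 1):
            ranks[ordered[k]] = avg
        i = j + 1
    w_plus = sum(r for r, d in zip(ranks, diffs) if d > 0)
    mean = n * (n + 1) / 4
    sd = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (w_plus - mean) / sd
    return z, abs(z) / math.sqrt(n)

# Made-up paired scores in which every child improves after frame removal:
pre = [50, 55, 60, 62, 58, 65, 54, 61, 59, 66]
post = [66, 70, 72, 80, 75, 78, 68, 79, 71, 82]
z, r = wilcoxon_effect_size(pre, post)
```

With uniformly positive differences, as here, z is large and r approaches its upper bound, mirroring the large effect the abstract reports after frame removal.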
This study provides essential insights into the burden of the hexapod frame for children and valuable information to help allied healthcare professionals target interventions to specific health domains. It shows that children's function improves once the hexapod frame is removed. However, it also highlights the importance of addressing, for the duration of the hexapod procedure, the health domains in which children scored lower (e.g., sports and physical function, pain and comfort, and happiness on the PODCI). The PedsQL identified lower mean scores in physical and emotional function.
Primary care physicians rely on radiology reports to confirm a scoliosis diagnosis and inform the need for spine specialist referral. In turn, spine specialists use these reports for triage decisions and planning of care. To be a valid predictor of disease and management, radiographic evaluation should include frontal and lateral views of the spine and a complete view of the pelvis, leading to accurate Cobb angle measurements and Risser staging. The study objectives were to determine 1) the adequacy of index images to inform treatment decisions at initial consultation by generating a score and 2) the utility of index radiology reports for appropriate triage decisions, by comparing reports to corresponding images.
We conducted a retrospective chart and radiographic review including all idiopathic scoliosis patients, aged three to 18 years, seen for initial consultation between January 1 and April 30, 2021. A score was generated based on the adequacy of index images to provide accurate Cobb angle measurements and determine skeletal maturity (view of full spine: coronal=two, lateral=one, pelvis=one, ribcage=one). Index images were considered inadequate if repeat imaging was necessary. Comparisons were made between the index radiology report, associated imaging, and new imaging if obtained at initial consultation. Major discrepancies were defined by inter-reader difference >15°, discordant Risser staging, or inaccuracies that led to inappropriate triage decisions. Location of index imaging, hospital versus community-based private clinic, was evaluated as a risk factor for inadequate or discrepant imaging.
There were 94 patients reviewed, with 79% (n=74) requiring repeat imaging at initial consultation, of which 74% (n=55) were due to insufficient quality and/or visualization of the sagittal profile, pelvis, or ribcage. Of index images available for review at initial consult (n=80), 41.2% scored five out of five and 32.5% scored two or below. New imaging showed that 50.0% of those patients had not been triaged appropriately, compared to 18.2% of patients with a full score. Comparing index radiology reports to initial visit evaluation with <60 days between imaging (n=49), discrepancies in Cobb angle were found in 24.5% (95% CI 14.6, 38.1) of patients, with 18.4% (95% CI 10.0, 31.4) categorized as major discrepancies. Risser stage was reported in only 14% of index radiology reports. In 13.8% (n=13) of the total cohort, surgical or brace treatment was recommended when not predicted based on the index radiology report. Repeat radiographs (p=0.001, OR=8.38) and discrepancies (p=0.02, OR=7.96) were more frequent when index imaging was obtained at a community-based private clinic than at a hospital. Re-evaluation of available index imaging demonstrated that 24.6% (95% CI 15.2, 37.1) of Cobb angles were mis-reported by six to 21 degrees.
Most pre-referral paediatric spine radiographs are inadequate for idiopathic scoliosis evaluation. Standardization of spine imaging and reporting should improve measurement accuracy, facilitate triage and decrease unnecessary radiation exposure.
Clinically significant proximal junctional kyphosis (PJK) occurs in 20% of children treated with posterior distraction-based growth friendly surgery. In an effort to identify modifiable risk factors, it has been theorized biomechanically that low radius of curvature (ROC) implants (i.e., more curved rods) may increase post-operative thoracic kyphosis, and thus may pose a higher risk of developing PJK. We sought to test the hypothesis that EOS patients treated with low ROC (more curved rods) distraction-based treatment will have a greater risk of developing PJK as compared to those treated with high ROC (straighter) implants.
This is a retrospective review of prospectively collected data obtained from a multi-centre EOS database on children treated with rib-based distraction with minimum 2-year follow-up. Variables of interest included: implant ROC at index (220 mm or 500 mm), patient age, pre-operative scoliosis, pre-operative kyphosis, and scoliosis etiology. In the literature, PJK has been defined as clinically significant if revision surgery with superior extension of the upper instrumented vertebrae was performed.
In 148 scoliosis patients, there was a higher risk of clinically significant PJK with low ROC (more curved) rods (OR: 2.6 (95%CI 1.09-5.99), χ2 (1, n=148) = 4.8, p = 0.03). Patients had a mean pre-operative age of 5.3 years (4.6y 220 mm vs 6.2y 500 mm, p = 0.002). A logistic regression model was created with age as a confounding variable, but it was determined to be not significant (p = 0.6). Scoliosis etiologies included 52 neuromuscular, 52 congenital, 27 idiopathic, 17 syndromic with no significant differences in PJK risk between etiologies (p = 0.07). Overall, patients had pre-op scoliosis of 69° (67° 220mm vs 72° 500mm, p = 0.2), and kyphosis of 48° (45° 220mm vs 51° 500mm, p = 0.1). The change in thoracic kyphosis pre-operatively to final follow up (mean 4.0 ± 0.2 years) was higher in patients treated with 220 mm implants compared to 500 mm implants (220 mm: 7.5 ± 2.6° vs 500 mm: −4.0 ± 3.0°, p = 0.004).
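An odds ratio with a 95% confidence interval like the one reported above (OR 2.6, 95% CI 1.09-5.99) is derived from a 2x2 table of implant type versus PJK occurrence. A sketch of the standard log-odds method; the counts below are hypothetical, not the study's table:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI from a 2x2 table [[a, b], [c, d]] via the log method."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # standard error of ln(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: rows = low-ROC vs high-ROC implants, cols = PJK yes/no
or_, lo, hi = odds_ratio_ci(a=10, b=10, c=5, d=10)
```

When the lower bound of the interval stays above 1, as in the study's reported CI, the association between implant curvature and PJK is statistically significant at the 5% level.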
Use of low ROC (more curved) posterior distraction implants is associated with a significantly greater increase in thoracic kyphosis which likely led to a higher risk of developing clinically-significant PJK in EOS patients.
Untreated clubfoot results in serious disability, but even mild to moderate residual deformities can cause functional limitations and pain. Measuring the impact of clubfoot deformities on children's wellbeing is challenging, and there is little literature discussing variability in clubfoot outcomes by where the child resides geographically. Although the use of patient reported outcome measures (PROMs) is steadily growing in pediatric orthopaedics, few studies on clubfoot have incorporated them. The most widely used PROM for pediatric foot and ankle pathology is the Oxford Foot and Ankle Questionnaire for Children (OXFAQ-C), which includes physical, school and play, emotional, and shoe-wear domains. The aim of this study is to evaluate the validity of the OXFAQ-C questionnaire in identifying functional disability in children with clubfoot, and regional differences in its scores, in India and Canada.
This is a retrospective cohort study of children aged 5-16 years in Indian and Canadian clubfoot registries who completed at least one parent- or child-reported OXFAQ-C. The OXFAQ-C was administered once in 01/2020 to all patients in the Indian registry, and prospectively between 06/2019 and 03/2021 at the initial visit, at 3, 6, and 12 months post-intervention, then annually, for the Canadian patients. Demographic, clubfoot, and treatment data were compared to OXFAQ-C domain scores. Descriptive statistics and regression analysis were performed. Parent-child concordance was evaluated with Pearson's Coefficient of Correlation (PCC).
The cohort had 361 patients (253 from India, 108 from Canada). Non-idiopathic clubfoot occurred in 15% of children in India and 5% in Canada, and bilateral in 53% in India and 50% in Canada. Tenotomy rate was 75% in India and 62% in Canada. Median age at presentation was 3 months in India and 1 month in Canada. Mean Pirani score at presentation and number of Ponseti casts were 4.9 and 6.1 in India and 5.3 and 5.7 in Canada, respectively.
Parents reported lower scores in all domains the older the child was at presentation. Canadian respondents scored significantly lower for all domains (p<0.001), with the difference being larger for child-reported scores. The greatest difference was in the physical domain: Canadian parents on average scored their child 6.21 points lower than Indian parents, and Canadian children scored a mean of 7.57 points lower than Indian children.
OXFAQ-C scores differed significantly between Indian and Canadian children despite similar demographic and clubfoot characteristics. Younger age at presentation and tenotomy may improve OXFAQ-C scores in childhood. Parent-child concordance was strong in this population. The OXFAQ-C is an adequate tool to assess functional outcomes of children with clubfeet. Cultural validation of patient reported outcome tools is critical.
The Ponseti method is the gold standard treatment for clubfoot. It begins in early infancy with weekly serial casting for up to 3 months. Globally, a commonly reported barrier to accessing clubfoot treatment is increased distance patients must travel for intervention. This study aims to evaluate the impact of the distance traveled by families to the hospital on the treatment course and outcomes for idiopathic clubfoot. No prior studies in Canada have examined this potential barrier.
This is a retrospective cohort study of patients managed at a single urban tertiary care center for idiopathic clubfoot deformity. All patients were enrolled in the Pediatric Clubfoot Research Registry between 2003 and April 2021. Inclusion criteria consisted of patients presenting at after percutaneous Achilles tenotomy. Postal codes were used to determine distance from patients’ home address to the hospital. Patients were divided into three groups based on distance traveled to hospital: those living within the city, within the Greater Metro Area (GMA) and outside of the GMA (non-GMA). The primary outcome evaluated was occurrence of deformity relapse and secondary outcomes included need for surgery, treatment interruptions/missed appointments, and complications with bracing or casting.
A total of 320 patients met inclusion criteria. Of these, 32.8% lived in the city, 41% in the GMA and 26% outside of the GMA. The average travel distance to the treatment centre in each group was 13.3km, 49.5km and 264km, respectively. Over 22% of patients travelled over 100km, with the furthest patient travelling 831km.
The average age of presentation was 0.91 months for patients living in the city, 1.15 months for those within the GMA and 1.33 months for patients outside of the GMA. The mean number of total casts applied was similar with 7.1, 7.8 and 7.3 casts in the city, GMA and non-GMA groups, respectively.
At least one gap of two or more weeks between serial casting appointments was identified in 49% of patients outside the GMA, compared to 27% (GMA) and 24% (city). Relapse occurred in at least one foot in 40% of non-GMA patients, versus 27% (GMA) and 24% (city), with a mean age at first relapse of 50.3 months in non-GMA patients, 42.4 months in GMA and 35.7 months in city-dwelling patients. Twelve percent of the non-GMA group, 6.8% of the GMA group and 5.7% of the city group underwent surgery, with a mean age at time of initial surgery of 79 months, 67 months and 76 months, respectively. Complications, such as pressure sores, cast slips and soiled casts, occurred in 35% (non-GMA), 32% (GMA) and 24% (city) of patients.
These findings suggest that greater travel distance for clubfoot management is associated with more missed appointments, increased risk of relapse and treatment complications. Distance to a treatment center is a modifiable barrier. Improving access to clubfoot care by establishing clinics in more remote communities may improve clinical outcomes and significantly decrease the burdens of travel on patients and families.
Prior to the introduction of steroid management in Duchenne Muscular Dystrophy (DMD), the prevalence of scoliosis approached 100%, concomitant with progressive decreases in pulmonary function. As such, early scoliosis correction (at 20-25°) was advocated, prior to substantial pulmonary function decline. With improved pulmonary function and delayed curve progression with steroid treatment, the role of early surgery has been questioned. The purpose of this study was to compare the post-operative outcomes of early versus late scoliosis correction in DMD. We hypothesize that performing later surgery with larger curves would not lead to worse post-operative complications.
This was a retrospective cohort study. Patients with DMD who underwent posterior scoliosis correction, had pre-operative pulmonary function testing, and had at least 1-year post-operative follow-up were included, divided into two groups by pre-operative curve angle (Group 1: ≤45°; Group 2: >45°). The primary outcome was post-operative complications by Clavien-Dindo (CD) grading. Secondary outcomes included: age at surgery, forced vital capacity (FVC), steroid utilization, fractional shortening (FS) by echocardiogram, surgery duration, blood transfusion requirements, ICU length of stay (LOS), days intubated post-operatively, hospital LOS, infection, and curve correction. Two-tailed t-tests and chi-square testing were used for analysis of patient factors and CD complication grade, respectively.
Thirty-one patients met the inclusion criteria, with a mean total follow-up of 8.3±3.2 years. Steroid treatment (prednisone, deflazacort) was utilized in 21 (67.7%) patients, for a mean duration of 8.2±4.0 years. Groups were comparable for steroid use, FVC, echo FS, and age at surgery (p>0.05). Primary curve angle was 31.7±10.4° and 58.3±11.1° for Groups 1 and 2, respectively. Post-operative complication rates did not differ between Groups (p>0.05); surgery duration, ICU LOS, days intubated, and hospital LOS were also not different between Groups. For the entire cohort, however, the overall complication rate was higher for patients with steroid treatment [61.9% vs 10%, respectively (p=0.008)], the majority being CD II. Neither FVC nor echo FS differed between Groups at final follow-up (p=0.6; p=0.4, respectively).
Post-operative complication rates were not different for early and late scoliosis correction in DMD. In general, however, patients undergoing steroid treatment were at higher risk of complications, including blood transfusion and deep infection. Delaying scoliosis correction in DMD while pulmonary function is favourable is reasonable, but patients with prior steroid treatment should be counseled regarding the higher risk of complications.
Our primary objective was to compare healing rates in patients undergoing arthroscopic rotator cuff repair for degenerative tears, with and without bone channeling. Our secondary objectives were to compare disease-specific quality of life and patient reported outcomes as measured by the Western Ontario Rotator Cuff Index (WORC), American Shoulder and Elbow Surgeons (ASES) score and Constant score between groups.
Patients undergoing arthroscopic rotator cuff repair at three sites were randomized to receive either bone channeling augmentation or standard repair. Healing rates were determined by ultrasound at 6 and 24 months postoperatively. WORC, ASES, and Constant scores were compared between groups at baseline and at 3, 6, 12 and 24 months postoperatively.
One hundred sixty-eight patients were recruited and randomized between 2013 and 2018. Statistically significant improvements occurred in both groups from pre-operative baseline to all time points in all clinical outcome scores (p < 0.0001). Intention-to-treat analysis revealed no statistical difference in healing rates between the two interventions at 24 months post-operative. No differences were observed in WORC, ASES or Constant scores at any time point.
This trial did not demonstrate superiority of intra-operative bone channeling in rotator cuff repair surgery at 24 months post-operative. Healing rates and patient-reported function and quality of life measures were similar between groups.
Adequate visual clarity is paramount to performing arthroscopic shoulder surgery safely, efficiently, and effectively. The addition of epinephrine to the irrigation fluid and the intravenous or local administration of tranexamic acid (TXA) have independently been reported to decrease bleeding, thereby improving the surgeon's visualization during arthroscopic shoulder procedures. No study has compared the effects of systemically administered TXA, epinephrine added to the irrigation fluid, or the combination of both against a placebo group on visual clarity during shoulder arthroscopy. The purpose of this study was to determine whether intravenous TXA is a safe alternative to epinephrine delivered by a pressure-controlled pump for improving visualization during arthroscopic shoulder procedures, and whether using TXA and epinephrine together has an additive effect on visualization.
The design of the study was a double-blinded, randomized controlled trial with four 1:1:1:1 parallel groups conducted at one center. Patients aged ≥18 years undergoing arthroscopic shoulder procedures including rotator cuff repair, arthroscopic biceps tenotomy/tenodesis, distal clavicle excision, subacromial decompression and labral repair by five fellowship-trained upper extremity surgeons were randomized into one of four arms: Pressure pump-controlled regular saline irrigation fluid (control), epinephrine (1ml of 1:1000) mixed in irrigation fluid (EPI), 1g intravenous TXA (TXA), and epinephrine and TXA (EPI/TXA). Visualization was rated on a 4-point Likert scale every 15 minutes with 0 indicating ‘poor’ quality and 3 indicating ‘excellent’ quality. The primary outcome measure was the unweighted mean of these ratings. Secondary outcomes included mean arterial blood pressure (MAP), surgery duration, surgery complexity, and adverse events within the first postoperative week.
One hundred and twenty-eight participants with a mean age (± SD) of 56 (± 11) years were randomized. Mean visualization quality for the control, TXA, EPI, and EPI/TXA groups was 2.1 (±0.40), 2.1 (±0.52), 2.6 (±0.37), and 2.6 (±0.35), respectively. In a regression model with visual quality as the dependent variable, the presence/absence of EPI was the most significant predictor of visualization quality (R=0.525; p < 0.001). TXA presence/absence had no effect, and there was no interaction between TXA and EPI. The addition of MAP and surgery duration strengthened the model (R=0.529; p < 0.001); increased MAP and longer surgery duration were both associated with decreased visualization quality. When surgery duration was controlled for, surgery complexity was not a significant predictor of visualization quality. No adverse events were recorded in any group.
Intravenous TXA, although safe, is not an effective alternative to epinephrine in the irrigation fluid for improving visualization during routine arthroscopic shoulder surgery. Combining TXA with epinephrine provides no additional improvement in visualization beyond the effect of epinephrine alone.
In older patients (>75 years of age) with an intact rotator cuff who require a total shoulder replacement (TSR), it is currently uncertain whether an anatomic TSR (aTSR) or a reverse TSR (rTSR) serves the patient best. This comparison study of same-age patients aims to assess clinical and radiological outcomes of older patients (≥75 years) who received either an aTSR or an rTSR.
Consecutive patients with a minimum age of 75 years who received an aTSR (n=44) or an rTSR (n=51) were prospectively studied. Pre- and postoperative clinical evaluations included the ASES score, Constant score, SPADI score, DASH score, range of motion (ROM), pain, and patient satisfaction over a two-year follow-up. Radiological assessment identified glenoid and humeral component osteolysis, including notching with an rTSR.
Postoperative improvement in ROM and all clinical assessment scores was found for both groups. Patient-reported outcome measures (PROMs) were significantly better in the aTSR group than in the rTSR group (p<0.001). Both groups had only minor osteolysis on radiographs. No revisions were required in either group. The main complications were scapular stress fractures in the rTSR patients and acromioclavicular joint pain in both groups.
This study of older patients (>75 years) demonstrated that an aTSR for a judiciously selected patient with good rotator cuff muscles can lead to a better clinical outcome and fewer early complications than an rTSR.
Interscalene brachial plexus block is the standard regional analgesic technique for shoulder surgery. Given its adverse effects, alternative techniques have been explored. Reports suggest that the erector spinae plane block may potentially provide effective analgesia following shoulder surgery. However, its analgesic efficacy for shoulder surgery compared with placebo or local anaesthetic infiltration has never been established.
We conducted a randomised controlled trial comparing the analgesic efficacy of pre-operative T2 erector spinae plane block with peri-articular infiltration at the end of surgery. Sixty-two patients undergoing arthroscopic shoulder repair were randomly assigned to receive an active erector spinae plane block with saline peri-articular injection (n = 31) or an active peri-articular injection with saline erector spinae plane block (n = 31) in a blinded double-dummy design. The primary outcome was the resting pain score in recovery. Secondary outcomes included pain scores with movement; opioid use; patient satisfaction; adverse effects in hospital; and outcomes at 24 h and 1 month.
There was no difference in pain scores in recovery, with a median difference (95%CI) of 0.6 (-1.9 to 3.1), p = 0.65. Median postoperative oral morphine equivalent utilisation was significantly higher in the erector spinae plane group (21 mg vs. 12 mg; p = 0.028). Itching was observed in 10% of patients who received erector spinae plane block, and there was no difference in the incidence of significant nausea and vomiting. Patient satisfaction scores, and pain scores and opioid use at 24 h, were similar. At 1 month, six patients (peri-articular injection) and eight patients (erector spinae plane block) reported persistent pain.
Erector spinae plane block was not superior to peri-articular injection for arthroscopic shoulder surgery.
Glenoid baseplate orientation in reverse shoulder arthroplasty (RSA) influences clinical outcomes, complications, and failure rates. Novel technologies have been developed to decrease performance heterogeneity between low- and high-volume surgeons. This study aimed to determine novice and experienced shoulder surgeons' ability to accurately characterise glenoid component orientation in an intra-operative scenario.
Glenoid baseplates were implanted in eight fresh frozen cadavers by novice surgical trainees. Glenoid baseplate version, inclination, augment rotation, and superior-inferior centre of rotation (COR) offset were then measured using in-person visual assessments by novice and experienced shoulder surgeons immediately after implantation. Glenoid orientation parameters were then measured using 3D CT scans with digitally reconstructed radiographs (DRRs) by two independent observers. Bland-Altman plots were produced to determine the accuracy of glenoid orientation using standard intraoperative assessment compared to postoperative 3D CT scan results.
Visual assessment of glenoid baseplate orientation showed “poor” to “fair” correlation to 3D CT DRR measurements for both novice and experienced surgeon groups for all measured parameters. There was a clinically relevant, large discrepancy between intra-operative visual assessments and 3D CT DRR measurements for all parameters. Errors in visual assessment of up to 19.2 degrees of inclination and 8mm supero-inferior COR offset occurred. Experienced surgeons had greater measurement error than novices for all measured parameters.
Intra-operative measurement errors in glenoid placement may reach unacceptable clinical limits. Kinesthetic input during implantation likely improves orientation understanding and has implications for hands-on learning.
Open debridement and Outerbridge-Kashiwagi debridement arthroplasty (OK procedure) are common surgical treatments for elbow arthritis. However, the literature contains little information on the long-term survivorship of these procedures. The purpose of this study was to determine the survivorship of these elbow debridement techniques until conversion to total elbow arthroplasty and until revision surgery.
We performed a retrospective chart review of patients who underwent open elbow surgical debridement (open debridement or the OK procedure) between 2000 and 2015. Patients were diagnosed with primary elbow osteoarthritis, post-traumatic arthritis, or inflammatory arthritis. A total of 320 patients had primary surgery, including open debridement (n=142) and the OK procedure (n=178); of these, 33 patients required secondary revision surgery (open debridement, n=14; OK procedure, n=19). The average follow-up time was 11.5 years (5.5-21.5 years). Survivorship was analyzed with Kaplan-Meier curves and the log-rank test. A Cox proportional hazards model was used to assess the likelihood of conversion to total elbow arthroplasty or revision surgery while adjusting for covariates (age, gender, diagnosis). Significance was set at p<0.05.
Kaplan-Meier survivorship for conversion to total elbow arthroplasty was 100.00% at 1 year, 99.25% at 5 years, and 98.49% at 10 years for open debridement, and 100.00% at 1 year, 98.80% at 5 years, and 97.97% at 10 years for the OK procedure (p=0.87). There was no difference in survivorship between procedures after adjusting for significant covariates with the Cox proportional hazards model. The rate of revision after 10 years was similar for open debridement and the OK procedure, at 11.31% and 11.48% respectively; however, after adjusting for covariates, the risk of revision surgery was higher for open debridement than for the OK procedure (hazard ratio 4.84, CI 1.29-18.17, p=0.019). We also performed a stratified analysis with radiographic severity as an effect modifier, which showed that grade 3 arthritis did better with the OK procedure than with open debridement for survivorship until revision surgery (p=0.05). This difference was not found for grade 1 or grade 2 arthritis, which may suggest that performing the OK procedure for more severe grade 3 arthritis could decrease reoperation rates. Further investigation is needed to better understand the indications for each surgical technique.
This study represents the largest cohort of open debridement and OK procedure patients with long-term follow-up. We showed that open elbow debridement and the OK procedure have excellent survivorship until conversion to total elbow arthroplasty and are viable options in the treatment of primary elbow osteoarthritis and post-traumatic cases. The OK procedure also has lower rates of revision surgery than open debridement, especially with more severe radiographic arthritis.
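The survivorship figures above come from Kaplan-Meier estimation. As an illustration only, the following is a minimal, textbook Kaplan-Meier estimator in Python; the event and censoring times below are hypothetical and are not the study data.

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate.

    times  -- follow-up time for each patient (e.g. years)
    events -- 1 if the endpoint occurred (e.g. conversion to total
              elbow arthroplasty), 0 if the patient was censored
    Returns a list of (time, survival_probability) steps.
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    survival = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        # events at this exact time
        deaths = sum(1 for tt, e in data if tt == t and e == 1)
        if deaths:
            survival *= (n_at_risk - deaths) / n_at_risk
            curve.append((t, survival))
        # remove everyone (events and censored) with time t from the risk set
        removed = sum(1 for tt, _ in data if tt == t)
        n_at_risk -= removed
        i += removed
    return curve

# Hypothetical cohort: 8 patients, 2 endpoint events, 6 censored
times  = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 8.0, 10.0]
events = [0,   1,   0,   0,   1,   0,   0,   0]
for t, s in kaplan_meier(times, events):
    print(f"S({t:.0f} y) = {s:.3f}")
```

In practice a library such as `lifelines` (with its `KaplanMeierFitter`) would be used, together with a log-rank test and a Cox model, as in the study.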
Previously, we conducted a multi-center, double-blinded randomized controlled trial comparing arthroscopic Bankart repair with and without remplissage. The end point for the randomized controlled trial was two years post-operative, providing support for the benefits of remplissage in the short term in reducing recurrent instability. The aim of this study was to compare the medium term (3 to 9 years) outcomes of patients previously randomized to have undergone isolated Bankart repair (NO REMP) or Bankart repair with remplissage (REMP) for the management of recurrent anterior glenohumeral instability. The rate of recurrent instability and instances of re-operation were examined.
The original study was a double-blinded, randomized clinical trial with two 1:1 parallel groups and recruitment undertaken between 2011 and 2017. For this medium-term study, participants were reached for a telephone follow-up in 2020 and asked a series of standardized questions regarding ensuing instances of subluxation, dislocation or reoperation involving the shoulder for which they were randomized. Descriptive statistics were generated for all variables. “Failure” was defined as occurrence of a dislocation. “Recurrent instability” was defined as the participant reporting a dislocation or two or more occurrences of subluxation more than one year post-operative. All analyses were undertaken on an intention-to-treat basis, whereby data were analyzed according to the group to which participants were originally allocated.
One hundred and eight participants were randomized, of whom 50 in the NO REMP group and 52 in the REMP group were included in the analyses of the original study. The mean number of months from surgery to final follow-up was 49.3 for the NO REMP group and 53.8 for the REMP group. The rate of re-dislocation (failure) was 8% (4/52) in the REMP group at an average of 23.8 months post-operative versus 22% (11/50) in the NO REMP group at an average of 16.5 months post-operative. The rate of recurrent instability was 10% (5/52) in the REMP group at an average of 24 months post-operative versus 30% (15/50) in the NO REMP group at an average of 19.5 months post-operative. Survival curves were significantly different, favouring REMP in both scenarios.
Arthroscopic Bankart repair combined with remplissage is an effective procedure in the treatment of patients with an engaging Hill-Sachs lesion and minimal glenoid bone loss (<15%). Patients can expect favourable rates of recurrent instability compared with isolated Bankart repair at medium-term follow-up.
The purpose of this prospective pilot study is to examine the feasibility of a physiotherapist-led rapid access shoulder screening clinic (RASC). The goal of this study is to assess for improvements in patient access to care, patient-reported outcome measures, patient-reported experience measures, and cost outcomes using time-driven activity-based costing methods.
Patient recruitment began in January 2021. Consultation requests from general practitioners and emergency rooms are analyzed and triaged through a central system. Half of the patients awaiting consultation were triaged to the traditional route used at our centre, while the other half were triaged to be assessed at the RASC. Outcome measures consisting of the Simple Shoulder Test and SF-12 were recorded at the initial consultation and at follow-up appointments. Cost-benefit analysis was conducted using time-driven activity-based costing (TD-ABC) methods.
From January to August 2021, 123 new patients were triaged for RASC assessment. On average, the RASC receives 10 new referrals per month. As of September 2021, 65 patients remained on the waitlist for RASC assessment and 58 had been assessed. Of the 58, 11% were discharged through the RASC, 48% pursued private physiotherapy, 14% had injections, 19% proceeded to surgical consultation, and 8% did not show. Over the same time period, approximately 15 new patients were seen in consultation by the surgeon's office.
Thirty-five responses were obtained from RASC patients during their initial intake assessment. The average age of respondents was 54.7 years, with 21 females and 14 males. Median SF-12 scores for RASC patients were 36.82 in the physical dimension (PCS-12) and 49.39 in the mental dimension (MCS-12). The median Simple Shoulder Test score was 6. Among patients who responded to the follow-up questionnaires after completing physiotherapy at the RASC, both SF-12 and Simple Shoulder Test scores improved: median PCS-12 was 47.08, MCS-12 was 55.87, and the Simple Shoulder Test score was 8.
RASC assessment by a physiotherapist saved $172.91 per hour for consultations and $157.97 per hour for patient follow-ups.
Utilization of a physiotherapist-led rapid access shoulder clinic resulted in improvements in patient outcomes as measured by the SF-12 and Simple Shoulder Test, as well as significant direct cost savings. Proper triage protocols to identify which patients are suitable for RASC assessment, buy-in from physiotherapists, and timely assessment of patients for early initiation of rehabilitation for shoulder pain are paramount to the success of a RASC system at our centre. Future research will be geared toward analyzing a larger dataset as it becomes available.
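The core of time-driven activity-based costing is simple: the cost of an activity is the time consumed multiplied by the capacity cost rate of the resource performing it. A minimal sketch follows; the cost rates are hypothetical placeholders, not the study's actual figures.

```python
# Hypothetical capacity cost rates in CAD per hour (NOT the study's data)
RATES = {"surgeon": 400.0, "physiotherapist": 120.0}

def tdabc_cost(provider, minutes):
    """Time-driven activity-based costing:
    cost = time consumed x capacity cost rate of the resource."""
    return RATES[provider] / 60.0 * minutes

# Saving per hour of consultation when a physiotherapist, rather than
# a surgeon, performs the initial shoulder assessment
consult_saving = tdabc_cost("surgeon", 60) - tdabc_cost("physiotherapist", 60)
print(f"Saving per consultation hour: ${consult_saving:.2f}")
```

The study's per-hour savings would come from substituting the real, locally measured capacity cost rates for each provider.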
The diagnosis of infection following shoulder arthroplasty is notoriously difficult. The prevalence of prosthetic shoulder infection after arthroplasty ranges from 3.9% to 15.4%, and the most common infective organism is Cutibacterium acnes. Current preoperative diagnostic tests, including WBC, ESR, CRP and joint aspiration, fail to provide a reliable means of diagnosis. Fluoroscopic-guided percutaneous synovial biopsy (PSB) has previously been reported in a pilot study and demonstrated promising results. The purpose of this study was to determine the diagnostic accuracy of percutaneous synovial biopsy compared with open culture results (the gold standard).
This was a multicenter prospective cohort study involving four sites and 98 patients who underwent revision shoulder arthroplasty. The cohort was 60% female, with a mean age of 65 years (range 36-83 years). Enrollment occurred between June 2014 and November 2021. Pre-operative fluoroscopy-guided synovial biopsies were carried out by musculoskeletal radiologists prior to revision surgery. A minimum of five synovial capsular tissue biopsies were obtained from five separate regions in the shoulder. Revision shoulder arthroplasty was performed by fellowship-trained shoulder surgeons. Intraoperative tissue samples were taken from five regions of the joint capsule during revision surgery.
Of the 98 patients who underwent revision surgery, 71 underwent both synovial biopsy and open biopsy at the time of revision surgery. Nineteen percent had a positive infection based on PSB, and 22% had confirmed culture-positive infections based on intra-operative tissue sampling. The diagnostic accuracy of PSB compared with open biopsy results was as follows: sensitivity 0.37 (95%CI 0.13-0.61), specificity 0.81 (95%CI 0.7-0.91), positive predictive value 0.37 (95%CI 0.13-0.61), negative predictive value 0.81 (95%CI 0.70-0.91), positive likelihood ratio 1.98, and negative likelihood ratio 0.77.
A patient with a positive pre-operative PSB undergoing revision surgery had a 37% probability of having a true positive infection. A patient with a negative pre-operative PSB had an 81% chance of being infection-free. PSB therefore appears to be of value mainly in ruling out the presence of peri-prosthetic infection. However, the poor likelihood ratios suggest that other ancillary tests are required in the pre-operative workup of the potentially infected patient.
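All of the accuracy measures above follow mechanically from a 2×2 table of index test (PSB) versus reference standard (open culture). The sketch below shows the standard formulas; the cell counts are hypothetical, chosen only so the outputs land near the reported values, and are not the study's raw data.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard diagnostic accuracy measures from a 2x2 table
    (index test vs. reference standard)."""
    sens = tp / (tp + fn)              # sensitivity
    spec = tn / (tn + fp)              # specificity
    ppv = tp / (tp + fp)               # positive predictive value
    npv = tn / (tn + fn)               # negative predictive value
    lr_pos = sens / (1 - spec)         # positive likelihood ratio
    lr_neg = (1 - sens) / spec         # negative likelihood ratio
    return dict(sensitivity=sens, specificity=spec,
                ppv=ppv, npv=npv, lr_pos=lr_pos, lr_neg=lr_neg)

# Hypothetical 2x2 counts (NOT the study's raw data), PSB vs open culture
m = diagnostic_metrics(tp=6, fp=10, fn=10, tn=45)
for name, value in m.items():
    print(f"{name}: {value:.2f}")
```

Note that PPV and NPV, unlike the likelihood ratios, depend on the infection prevalence in the cohort.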
Reverse shoulder arthroplasty (RSA) is commonly used to treat patients with rotator cuff tear arthropathy. Loosening of the glenoid component remains one of the principal modes of failure and is the main complication leading to revision. For optimal RSA implant osseointegration to occur, the micromotion between the baseplate and the bone must not exceed a threshold of 150 µm. Excess micromotion contributes to glenoid loosening. This study assessed the effects of various factors on glenoid baseplate micromotion for primary fixation of RSA.
A half-fractional factorial experiment design (2k-1) was used to assess four factors: central element type (central peg or screw), central element cortical engagement according to length (13.5 or 23.5 mm), anterior-posterior (A-P) peripheral screw type (nonlocking or locking), and bone surrogate density (10 or 25 pounds per cubic foot [pcf]). This created eight unique conditions, each repeated five times for 40 total runs. Glenoid baseplates were implanted into high- or low-density Sawbones™ rigid polyurethane (PU) foam blocks and cyclically loaded at 60 degrees for 1000 cycles (500 N compressive force range) using a custom designed loading apparatus. Micromotion at the four peripheral screw positions was recorded using linear variable displacement transducers (LVDTs). Maximum micromotion was quantified as the displacement range at the implant-PU interface, averaged over the last 10 cycles of loading.
Baseplates with short central elements that lacked cortical bone engagement generated 373% greater maximum micromotion at all peripheral screw positions compared to those with long central elements (p < 0.001). Central peg fixation generated 360% greater maximum micromotion than central screw fixation (p < 0.001). No significant effects were observed when varying A-P peripheral screw type or bone surrogate density. There were significant interactions between central element length and type (p < 0.001).
An interaction existed between central element type and level of cortical engagement. A central screw and a long central element that engaged cortical bone reduced RSA baseplate micromotion. These findings serve to inform surgical decision-making regarding baseplate fixation elements to minimize the risk of glenoid loosening and thus, the need for revision surgery.
Heterotopic ossification (HO) is a well-known complication of traumatic elbow injuries. Reported rates of post-traumatic HO formation vary from less than 5% with simple elbow dislocations to greater than 50% in complex fracture-dislocations. Previous studies have identified fracture-dislocations, delayed surgical intervention, and terrible triad injuries as risk factors for HO formation. There is, however, a paucity of literature regarding the accuracy of diagnosing post-traumatic elbow HO. Therefore, the purpose of our study was to determine the inter-rater reliability of HO diagnosis using standard radiographs of the elbow at 52 weeks post-injury, as well as to report the rate of mature compared with immature HO. We hypothesized that inter-rater reliability for HO formation would be poor.
Prospectively collected data from a large clinical trial were reviewed by three independent reviewers (a senior orthopedic resident, a senior radiology resident, and an expert upper extremity orthopedic surgeon). Each reviewer examined anonymized 52-week post-injury radiographs of the elbow and recorded: 1. the presence or absence of HO; 2. the location of HO; 3. the size of the HO (in cm, if present); and 4. the maturity of the HO formation. Mature HO was defined by consensus prior to image review as an area of well-defined cortical and medullary bone outside the cortical borders of the humerus, ulna, or radius. Immature lesions were defined as an area of punctate calcification with an ill-defined, cloud-like density outside the cortical borders of the humerus, ulna, or radius. Data were collected using a standardized online data collection form (CognizantMD, Toronto, ON, CA). Inter-rater reliability was calculated using Fleiss’ Kappa statistic, and a multivariate logistic regression analysis was performed to identify risk factors for HO formation in general, as well as for mature HO at 52 weeks post-injury. Statistical analysis was performed using RStudio (version 1.4, RStudio, Boston, MA, USA).
A total of 79 radiographs at the 52-week follow-up were reviewed (54% male, mean age 50 ± 14 years, 52% operatively treated). Inter-rater reliability using Fleiss’ Kappa was κ = 0.571 (p = 0.0004), indicating moderate inter-rater reliability among the three reviewers. The rate of immature HO at 52 weeks was 56%. The multivariate logistic regression analysis identified male sex as a significant risk factor for HO development (OR 5.29, CI 1.55-20.59, p = 0.011), but not for HO maturity at 52 weeks. Age, time to surgery, and operative intervention were not significant predictors of either HO formation or maturity of the lesion in this cohort.
Our study demonstrates moderate inter-rater reliability in determining the presence of HO at 52 weeks post-elbow injury. There was a high rate (56%) of immature HO at the 52-week follow-up. We also report male sex as a significant risk factor for post-traumatic HO development. Future research directions could include investigation into a possible male predominance in traumatic HO formation, as well as improving inter-rater reliability by developing a standardized and validated classification system for reporting the radiographic features of HO formation around the elbow.
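Fleiss' kappa, used above for the three reviewers, compares observed agreement against the agreement expected by chance from the marginal category frequencies. The following is a compact, self-contained implementation with a toy example (the ratings are hypothetical, not the study's data); in R this would typically be computed with an existing package rather than by hand.

```python
def fleiss_kappa(counts):
    """Fleiss' kappa for n raters and k categories.

    counts -- one row per subject; counts[i][j] is the number of
              raters assigning subject i to category j. Every row
              must sum to the same number of raters n.
    """
    N = len(counts)                      # number of subjects
    n = sum(counts[0])                   # raters per subject
    k = len(counts[0])                   # number of categories
    # observed agreement for each subject
    P_i = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in counts]
    P_bar = sum(P_i) / N
    # chance agreement from marginal category proportions
    p_j = [sum(row[j] for row in counts) / (N * n) for j in range(k)]
    P_e = sum(p * p for p in p_j)
    return (P_bar - P_e) / (1 - P_e)

# Toy example: 3 raters scoring HO present/absent on 4 radiographs
ratings = [
    [3, 0],   # all three raters: HO present
    [0, 3],   # all three raters: HO absent
    [2, 1],   # disagreement
    [1, 2],   # disagreement
]
print(round(fleiss_kappa(ratings), 3))
```

Interpretation bands such as "moderate" for 0.41-0.60 follow the commonly cited Landis and Koch convention.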
Knowledge of the premorbid glenoid shape and the morphological changes the bone undergoes in patients with glenohumeral arthritis can improve surgical outcomes in total and reverse shoulder arthroplasty. Several studies have previously used scapular statistical shape models (SSMs) to predict premorbid glenoid shape and evaluate glenoid erosion properties. However, current literature suggests no studies have used scapular SSMs to examine the changes in glenoid surface area in patients with glenohumeral arthritis. Therefore, the purpose of this study was to compare the glenoid articular surface area between pathologic glenoid cavities from patients with glenohumeral arthritis and their predicted premorbid shape using a scapular SSM. Furthermore, this study compared the pathologic glenoid surface area with that of virtually eroded glenoid models created without the influence of internal bone remodelling activity and osteophyte formation. It was hypothesized that the pathologic glenoid cavities would exhibit the greatest glenoid surface area, despite the erosion and medialization of the glenoid, which, given the vault shape of the glenoid, should logically result in less surface area.
Computed tomography (CT) scans from 20 patients exhibiting type A2 glenoid erosion according to the Walch classification [Walch et al., 1999] were obtained. A scapular SSM was used to predict the premorbid glenoid shape for each scapula. The scapula and humerus from each patient were automatically segmented and exported as 3D object files, along with the scapular SSM, from a pre-operative planning software. Each scapula and a copy of its corresponding SSM were aligned using the coracoid, the lateral edge of the acromion, the inferior glenoid tubercle, the scapular notch, and the trigonum spinae. Points were then digitized on both the pathologic humeral and glenoid surfaces and used in an iterative closest point (ICP) algorithm in MATLAB (MathWorks, Natick, MA, USA) to align the humerus with the glenoid surface. A Boolean subtraction was then performed between the scapular SSM and the humerus to create a virtual erosion in the scapular SSM that matched the erosion orientation of the pathologic glenoid. This produced three distinct glenoid models for each patient: premorbid, pathologic, and virtually eroded (Fig. 1). The glenoid surface area of each model was then determined using 3-Matic (Materialise, Leuven, Belgium).
Figure 1. (A) Premorbid glenoid model, (B) pathologic glenoid model, and (C) virtually eroded glenoid model.
The average glenoid surface area for the pathologic scapular models was 70% greater compared to the premorbid glenoid models (P < 0.001). Furthermore, the surface area of the virtual glenoid erosions was 6.4% lower on average compared to the premorbid glenoid surface area (P=0.361).
The larger surface area values observed in the pathologic glenoid cavities suggest that substantial bone remodelling occurs at the periphery of the glenoid in patients exhibiting type A2 glenohumeral arthritis. This is further supported by the large difference in glenoid surface area between the pathologic and virtually eroded glenoid cavities, as the virtually eroded models considered only the humeral anatomy when creating the erosion.
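The ICP step used to align the humerus with the glenoid surface can be sketched in a few dozen lines: alternate between finding the nearest target point for each source point and solving for the least-squares rigid transform (Kabsch/SVD). This Python sketch is illustrative only, with synthetic point clouds, and is not the study's MATLAB pipeline.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_fit_transform(A, B):
    """Least-squares rigid transform (R, t) mapping points A onto B (Kabsch)."""
    cA, cB = A.mean(axis=0), B.mean(axis=0)
    H = (A - cA).T @ (B - cB)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = cB - R @ cA
    return R, t

def icp(source, target, iterations=20):
    """Basic iterative closest point: rigidly align `source` to `target`."""
    src = source.copy()
    tree = cKDTree(target)
    for _ in range(iterations):
        _, idx = tree.query(src)     # nearest target point per source point
        R, t = best_fit_transform(src, target[idx])
        src = src @ R.T + t
    return src

# Synthetic demo: recover a small rotation + translation of a point cloud
rng = np.random.default_rng(0)
target = rng.random((200, 3))
theta = np.radians(5.0)
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
source = target @ Rz.T + np.array([0.02, -0.01, 0.03])
aligned = icp(source, target)
print(np.abs(aligned - target).max())   # residual shrinks after alignment
```

Real surface registration adds refinements (outlier rejection, point-to-plane error metrics, convergence checks), but the alternating structure is the same.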
For any figures or tables, please contact the authors directly.
Thoracic hyperkyphosis (TH – Cobb angle >40°) is correlated with rotator cuff arthropathy and is associated with anterior tilting and protraction of the scapula, impacting glenoid orientation and the surrounding musculature. Reverse total shoulder arthroplasty (RTSA) is a reliable surgical treatment for patients with rotator cuff arthropathy, and recent literature suggests that patients with TH may have comparable range of motion after RTSA. However, no study has reported a possible link between patient-reported outcomes, humeral retroversion and TH after RTSA. While the risks of post-operative complications such as instability, hardware loosening, scapular notching, and prosthetic infection are low, we hypothesize that optimizing the biomechanical parameters through proper implant positioning and understanding patient-specific scapular and thoracic anatomy is critical to improving surgical outcomes in this subset of patients with TH.
Patients treated with primary RTSA at an academic hospital in 2018 were reviewed at a two-year follow-up. Exclusion criteria were as follows: no pre-existing chest radiographs for Cobb angle measurement, change in post-operative functional status as a result of trauma or medical comorbidities, and missing component placement and parameter information in the operative note. As most patients did not have a pre-operative chest radiograph, only seven patients with a Cobb angle of 40° or greater were eligible. Chart reviews were completed to determine indications for RTSA and hardware positioning parameters such as inferior tilting, humeral stem retroversion, glenosphere size/location, and baseplate size. Clinical data following surgery included review of radiographs and complications. All patients were followed for two years. The American Shoulder and Elbow Surgeons (ASES) Shoulder Score was used for patient-reported functional and pain outcomes.
The average age of the patients at the time of RTSA was 71 years; there were six female patients and one male patient. The indication for RTSA was primarily rotator cuff arthropathy. A possible correlation between Cobb angle and humeral retroversion was noted, whereby a Cobb angle greater than 40° paired with humeral retroversion greater than 30° resulted in significantly higher ASES scores. Two patients with a mean Cobb angle of 50° and mean humeral retroversion of 37.5° had a mean ASES score of 92.5, whereas five patients with mean humeral retroversion of 30° had a lower mean ASES score of 63.7 (p < 0.05). There was no significant correlation with glenosphere size or position, baseplate size, degree of inferior tilting, or lateralization.
Patient-reported outcomes have not previously been reported in RTSA patients with TH. In this case series, we observed that humeral stem retroversion greater than 30° may be correlated with less post-operative pain and greater patient satisfaction in patients with TH. Further clinical studies are needed to understand the biomechanical relationship between RTSA, humeral retroversion and TH to optimize patient outcomes.
Postoperative surgical site infection in patients treated with lumbosacral fusion has traditionally been attributed to perioperative contamination (Perioperative Inside-Out infections) in patients with comorbidities. Given the proximity of these incisions to the perianal region and the limited patient mobility in the early post-operative period, local contamination from gastrointestinal and/or urogenital flora (Postoperative Outside-In infections) should be considered a major source of complication.
A single-center, retrospective review of adult patients treated with open posterior lumbosacral fusions between January 2014 and January 2021 was performed. We aimed to identify common factors in patients experiencing deep postoperative infections. Oncological cases, minimally invasive procedures, primary infections, and index procedures carried out at other institutions were excluded.
We identified 489 eligible patients, 20 of whom required debridement deep to the fascia (4.1%). Mean age (62.9 vs 60.8 years), operative time (420 vs 390 minutes), estimated blood loss (1772 vs 1790 mL) and median levels fused (8.5 vs 9) were similar between the infected and non-infected groups. There was a higher percentage of deformity patients (75% vs 29%) and a higher BMI (32.7 vs 28.4) in the infected group. The mean time from primary procedure to debridement was 40.8 days. Four patients showed no growth on culture. Three grew Staphylococcus species (Perioperative Inside-Out infections), requiring debridement at a mean of 100.3 days (95% CI 0-225 days). Thirteen patients showed infection with intestinal or urogenital pathogens (Postoperative Outside-In infections), requiring debridement at a mean of 20.0 days (95% CI 9-31 days). Postoperative Outside-In infections led to debridement 80.3 days earlier than Perioperative Inside-Out infections (p = 0.007).
In this series, 65% of deep infections were due to early local contamination by gastrointestinal and/or urogenital tract pathogens. These infections were debrided significantly earlier than the Staphylococcus species infections. Given the proximity of the incisions to the perianal region, there should be increased focus on post-operative local wound management to keep these pathogens away from the wound during the critical stages of wound healing.
Prolonged length of stay (LOS) is a significant contributor to the variation in surgical health care costs and resource utilization after elective spine surgery. The primary goal of this study was to identify patient, surgical and institutional variables that influence LOS. The secondary objective was to examine variability in institutional practices among participating centers.
This is a retrospective study of a prospectively followed multicenter cohort of patients enrolled in the CSORN between January 2015 and October 2020. A logistic regression model and bootstrapping method were used. A survey was sent to participating centers to assess institutional-level interventions in place to decrease LOS. Centers with LOS shorter than the median were compared to centers with LOS longer than the median.
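The bootstrapping step can be illustrated in isolation. The sketch below is a minimal percentile-bootstrap confidence interval for a median LOS, using synthetic values rather than CSORN data; the function name and sample are illustrative only.

```python
import random
import statistics

def bootstrap_median_ci(values, n_boot=2000, alpha=0.05, seed=42):
    """Percentile bootstrap confidence interval for the median of a sample."""
    rng = random.Random(seed)
    medians = []
    for _ in range(n_boot):
        # Resample the cohort with replacement and record the median
        resample = rng.choices(values, k=len(values))
        medians.append(statistics.median(resample))
    medians.sort()
    low = medians[int((alpha / 2) * n_boot)]
    high = medians[int((1 - alpha / 2) * n_boot) - 1]
    return low, high

# Synthetic LOS values (days) for a small fusion-like cohort
los = [3, 4, 4, 5, 4, 6, 3, 4, 5, 7, 4, 4, 3, 5, 4]
low, high = bootstrap_median_ci(los)
print(f"median={statistics.median(los)}, 95% CI=({low}, {high})")
```

The same resampling idea extends to regression coefficients: refit the logistic model on each bootstrap resample and take percentiles of the fitted coefficients.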
A total of 3734 patients were included (979 discectomies, 1102 laminectomies, 1653 fusions). The median LOS for discectomy, laminectomy and fusion were, respectively, 0.0 days (IQR 1.0), 1.0 day (IQR 2.0) and 4.0 days (IQR 2.0). The laminectomy group had the largest variability (SD = 4.4, range 0-133 days). For discectomy, predictors of LOS longer than 0 days were less leg pain, higher ODI, symptom duration over 2 years, open procedure, and adverse events (AEs) (p< 0.05). Predictors of LOS longer than the median of 1 day for laminectomy were increasing age, living alone, higher ODI, open procedures, longer operative time, and AEs (p< 0.05). For posterior instrumented fusion, predictors of LOS longer than the median of 4 days were older age, living alone, more comorbidities, less back pain, higher ODI, narcotic use, longer operative time, open procedures, and AEs (p< 0.05). Ten centers (53%) had either ERAS or a standardized protocol aimed at reducing LOS.
In this study stratifying individual patient-level and institutional-level factors across Canada, several independent predictors were identified that enhance the understanding of LOS variability in common elective lumbar spine surgery. The current study provides an updated, detailed analysis of the ongoing Canadian efforts in the implementation of multimodal ERAS care pathways. Future studies should explore institutional factors in multivariate analysis and the influence of preoperative patient education on LOS.
This study aimed to identify factors associated with increased rates of blood transfusion in patients with adolescent idiopathic scoliosis (AIS) treated with posterior spinal fusion (PSF).
A retrospective case-control study was performed for AIS patients treated at a large children's hospital between August 2018 and December 2020. All patients with a diagnosis of AIS were evaluated. Data on patient demographics, AIS characteristics, and transfusion parameters were collected. Univariate regression and multivariate logistic modeling were utilized to assess risk factors associated with requiring transfusion. Odds ratios (OR) and 95% confidence intervals (CI) were calculated. Surgeries were done by three surgeons and thirty anesthesiologists. To quantify the influence of anesthesia practice preferences, a categorical variable, “higher-transfusion practice preference,” was defined for the provider with the highest rate of transfusion.
A total of 157 AIS patients were included, of whom 56 were transfused RBC units (cases), and 101 did not receive any RBC transfusion (controls). On univariate analysis, the following variables were significantly correlated with receiving RBC transfusion: “higher-transfusion practice preference,” “administration of crystalloids,” “receiving fresh frozen plasma (FFP),” “receiving platelets,” “pre-operative hemoglobin,” “cell saver volume,” and “surgical time.” On multiple regression modeling, “pre-operative hemoglobin less than 120 g/L” (OR 14.05, 95% CI: 1.951 to 135.7) and “higher-transfusion practice preference” (OR 11.84, 95% CI: 2.505 to 63.65) were found to be meaningfully and significantly predictive of RBC transfusion.
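The odds ratios and confidence intervals reported above follow directly from fitted logistic regression coefficients. A minimal sketch of that conversion (the coefficient and standard error below are hypothetical, not the study's fitted values):

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Convert a logistic regression coefficient and its standard error
    into an odds ratio with a Wald-type 95% confidence interval."""
    or_point = math.exp(beta)
    lower = math.exp(beta - z * se)   # CI is symmetric on the log-odds scale
    upper = math.exp(beta + z * se)
    return or_point, lower, upper

# Hypothetical coefficient for a binary predictor (e.g. pre-op Hb < 120 g/L)
or_pt, ci_lo, ci_hi = odds_ratio_ci(beta=2.64, se=1.08)
print(f"OR = {or_pt:.2f} (95% CI {ci_lo:.2f} to {ci_hi:.2f})")
```

The wide interval illustrates why ORs from small case-control samples, as here, carry large uncertainty despite a large point estimate.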
In this cohort, we identified a pre-operative hemoglobin of 120 g/L as a critical threshold for requiring transfusion. In addition, we identified a significant contribution from anesthesia transfusion practice preferences. Our multivariate model indicated that these two factors are the major significant contributors to allogeneic blood transfusion. Although further studies are required to better understand factors contributing to transfusion in AIS patients, we suggest standardized, evidence-based peri-operative strategies to potentially help reduce variations due to individual provider preferences.
Single level discectomy (SLD) is one of the most commonly performed spinal surgery procedures. Two key drivers of its cost of care are duration of surgery (DOS) and postoperative length of stay (LOS). Therefore, the ability to preoperatively predict SLD DOS and LOS has substantial implications for hospital and healthcare system finances, scheduling and resource allocation. As such, the goal of this study was to predict DOS and LOS for SLD using machine learning models (MLMs) constructed on preoperative factors from a large North American database.
The American College of Surgeons (ACS) National Surgical Quality Improvement Program (NSQIP) database was queried for SLD procedures from 2014-2019. The dataset was split into a 60/20/20 training/validation/testing ratio based on year. Various MLMs (traditional regression models, tree-based models, and multilayer perceptron neural networks) were used and evaluated according to 1) mean squared error (MSE), 2) buffer accuracy (the number of times the predicted target was within a predesignated buffer), and 3) classification accuracy (the number of times the correct class was predicted by the models). To ensure real-world applicability, the results of the models were compared to a mean regressor model.
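The three evaluation metrics, and the mean-regressor baseline, can be sketched in a few lines. The DOS values below are hypothetical, not NSQIP data:

```python
import statistics

def mse(y_true, y_pred):
    """Mean squared error between true and predicted values."""
    return statistics.fmean((t - p) ** 2 for t, p in zip(y_true, y_pred))

def buffer_accuracy(y_true, y_pred, buffer):
    """Fraction of predictions within +/- buffer of the true value."""
    hits = sum(abs(t - p) <= buffer for t, p in zip(y_true, y_pred))
    return hits / len(y_true)

def classification_accuracy(y_true, y_pred, threshold):
    """Fraction of cases where prediction and truth fall on the same side
    of a clinically chosen threshold (e.g. 120 min for DOS, 2 days for LOS)."""
    hits = sum((t <= threshold) == (p <= threshold)
               for t, p in zip(y_true, y_pred))
    return hits / len(y_true)

# Hypothetical DOS values in minutes
true_dos = [95, 130, 70, 160, 110]
pred_dos = [100, 120, 90, 150, 105]
mean_pred = [statistics.fmean(true_dos)] * len(true_dos)  # mean-regressor baseline

print(buffer_accuracy(true_dos, pred_dos, buffer=30))      # 1.0
print(classification_accuracy(true_dos, pred_dos, 120))    # 0.8
print(mse(true_dos, pred_dos) < mse(true_dos, mean_pred))  # True
```

Beating the mean regressor on MSE, as in the last line, is the minimum bar for a model to be useful in scheduling.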
A total of 11,525 patients were included in this study. During validation, the neural network model (NNM) had the best MSEs for DOS (0.99) and LOS (0.67). During testing, the NNM had the best MSEs for DOS (0.89) and LOS (0.65). The NNM yielded the best 30-minute buffer accuracy for DOS (70.9%) and ≤120 min, >120 min classification accuracy (86.8%). The NNM had the best 1-day buffer accuracy for LOS (84.5%) and ≤2 days, >2 days classification accuracy (94.6%). All models were more accurate than the mean regressors for both DOS and LOS predictions.
We successfully demonstrated that MLMs can be used to accurately predict the DOS and LOS of SLD based on preoperative factors. This big-data application has significant practical implications with respect to surgical scheduling and inpatient bedflow, as well as major implications for both private and publicly funded healthcare systems. Incorporating this artificial intelligence technique in real-time hospital operations would be enhanced by including institution-specific operational factors such as surgical team and operating room workflow.
Acute spinal cord injury (SCI) is most often secondary to trauma, and frequently presents with associated injuries. A neurological examination is routinely performed during trauma assessment, including through Advanced Trauma Life Support (ATLS). However, there is no standard neurological assessment tool specifically used for trauma patients to detect and characterize SCI during the initial evaluation. The International Standards for Neurological Classification of Spinal Cord Injury (ISNCSCI) is the most comprehensive and popular tool for assessing SCI, but it is not adapted to acute trauma patients and is therefore not routinely used in that setting. The objective was therefore to develop a new tool that can be used routinely in the initial evaluation of trauma patients to detect and characterize acute SCI, while preserving the basic principles of the ISNCSCI.
The completion rate of the ISNCSCI during the initial evaluation after an acute traumatic SCI was first estimated. Using a modified Delphi technique, we designed the Montreal Acute Classification of Spinal Cord Injuries (MAC-SCI), a new tool to detect and characterize the completeness (grade) and level of SCI in the polytrauma patient. The ability of the MAC-SCI to detect and characterize SCI was validated in a cohort of 35 individuals who had sustained an acute traumatic SCI. The completeness and neurological level of injury (NLI) were assessed by two independent assessors using the MAC-SCI, and compared to those obtained with the ISNCSCI.
Only 33% of patients admitted after an acute traumatic SCI had a complete ISNCSCI performed at initial presentation. The MAC-SCI includes 53 of the 134 original elements of the ISNCSCI, a 60% reduction. There was 100% concordance between the severity grade derived from the MAC-SCI and from the ISNCSCI. Concordance of the NLI within two levels of that obtained from the ISNCSCI was observed in 100% of patients with the MAC-SCI, and within one level in 91% of patients. The ability of the MAC-SCI to discriminate between cervical (C0 to C7) vs. thoracic (T1 to T9) vs. thoraco-lumbar (T10 to L2) vs. lumbosacral (L3 to S5) injuries was 100% with respect to the ISNCSCI.
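The "concordance within N levels" statistic used above is simply the share of paired NLI determinations whose positions on the ordered level ladder differ by at most N. A minimal sketch with an abbreviated ladder and hypothetical paired determinations (not study data):

```python
# Ordered neurological levels, abbreviated for illustration
LEVELS = ["C4", "C5", "C6", "C7", "T1", "T2", "T3", "T4"]

def concordance_within(nli_a, nli_b, tolerance):
    """Fraction of paired NLI determinations that agree within
    `tolerance` positions on the ordered level ladder."""
    hits = sum(abs(LEVELS.index(a) - LEVELS.index(b)) <= tolerance
               for a, b in zip(nli_a, nli_b))
    return hits / len(nli_a)

# Hypothetical paired determinations (tool A vs tool B)
tool_a = ["C5", "C6", "T1", "C7", "T3"]
tool_b = ["C5", "C7", "T1", "T1", "T2"]
print(concordance_within(tool_a, tool_b, 1))  # 1.0 (all pairs within one level)
print(concordance_within(tool_a, tool_b, 0))  # 0.4 (exact agreement only)
```

Exact agreement is always at most the within-one-level figure, which is why the study reports 91% within one level but 100% within two.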
The rate of completion of the ISNCSCI is low at initial presentation after an acute traumatic SCI. The MAC-SCI is a streamlined tool proposed to detect and characterize acute SCI in polytrauma patients that is specifically adapted to the acute trauma setting. It is accurate for determining the completeness of the SCI and localizing the NLI (cervical vs. thoracic vs. lumbar). It could be implemented in the initial trauma assessment protocol to guide the acute management of SCI patients.
Pain management in spine surgery can be challenging. Cannabis might be an interesting choice for analgesia while avoiding some side effects of opioids. Recent work has reported on the potential benefits of cannabinoids for multimodal pain control, but very few studies focus on spinal surgery patients. This study aims to examine demographic and health status differences between patients who report the use of (1) cannabis, (2) narcotics, (3) cannabis and narcotics or (4) no cannabis/narcotic use.
This was a retrospective cohort study of thoracolumbar patients enrolled in the CSORN registry after legalization of cannabis in Canada. Variables included: age, sex, modified Oswestry Disability Index (mODI), Numerical Rating Scales (NRS) for leg and back pain, tingling/numbness leg sensation, SF-12 Quality of Life Mental Health Component (MCS), Patient Health Questionnaire (PHQ-9), and general health state. An ANCOVA with pathology as the covariate was run, with post-hoc analysis.
The largest group among the 704 patients enrolled (mean age: 59; female: 46.9%) were non-users (41.8%). More patients reported narcotic-use than cannabis-use (29.7% vs 12.9%), with 13.4% stating concurrent-use. MCS scores were significantly lower for concurrent-use than for no-use (mean 39.95 vs 47.98, p=0.001) or cannabis-use (mean=45.66, p=0.043). The narcotic-use cohort had significantly worse MCS scores (mean=41.37, p=0.001) than no-use. Patients reporting no-use and cannabis-use (mean 41.39 vs 42.94) had significantly lower ODI scores than narcotic-use (mean=54.91, p=0.001) and concurrent-use (mean=50.80, p=0.001). Lower NRS-leg pain was reported in cannabis-use (mean=5.72) compared to narcotic-use (mean=7.19) and concurrent-use (mean=7.03, p=0.001). No-use (mean=6.31) had significantly lower NRS-leg pain than narcotic-use (p=0.011), and significantly lower NRS-back pain (mean=6.17) than narcotic-use (mean=7.16, p=0.001) and concurrent-use (mean=7.15, p=0.012). The cannabis-use cohort reported significantly lower tingling/numbness leg scores (mean=4.85) than no-use (mean=6.14, p=0.022), narcotic-use (mean=6.67, p=0.001) and concurrent-use (mean=6.50, p=0.01). PHQ-9 scores were significantly lower for the no-use (mean=6.99) and cannabis-use (mean=8.10) cohorts than for the narcotic-use (mean=10.65) and concurrent-use (mean=11.93) cohorts. The narcotic-use cohort reported a significantly lower rating of their overall health state (mean=50.03) than cannabis-use (mean=60.50, p=0.011) and no-use (mean=61.89, p=0.001).
Patients with pre-operative narcotic-use, or concurrent use of narcotics and cannabis, experienced higher levels of disability, pain and depressive symptoms and worse mental health functioning compared to patients with no cannabis/narcotic use or cannabis-only use. To the best of our knowledge, this is the first and largest study to examine the use of cannabis amongst Canadian patients with spinal pathology. This observational study lays the groundwork to better understand the potential benefits of adding cannabinoids to control pain in patients waiting for spine surgery, and will help refine recommendations about cannabis use for these patients.
The primary objective was to compare revision rates for lumbar disc replacement (LDR) and fusion at the same or adjacent levels in Ontario, Canada. The secondary objectives were to compare acute complications during hospitalization and within 30 days, and length of hospital stay.
A population-based cohort study was conducted using health administrative databases, including patients undergoing LDR or single-level fusion between October 2005 and March 2018. Patients receiving LDR or fusion were identified using physician claims recorded in the Ontario Health Insurance Program database. Additional details of the surgical procedure were obtained from the Canadian Institute for Health Information hospital discharge abstract. The primary outcome was revision surgery in the lumbar spine, defined as an operation more than 30 days after the index procedure. Secondary outcomes were immediate/acute complications within the first 30 days of the index operation.
A total of 42,024 patients were included. Mean follow-up in the LDR and fusion groups was 2943 and 2301 days, respectively. The rates of revision surgery at the same or adjacent levels were 4.7% in the LDR group and 11.1% in the fusion group (p=.003). Multivariate analysis identified female sex, hypertension, and lower surgeon volume as risk factors for revision surgery. More patients in the fusion group had dural tears (p<.001), while the LDR group had more “other” complications (p=.037). The LDR group had a longer mean hospital stay (p=.018).
In this study population, the LDR group had lower rates of revision compared to the fusion group. Caution is needed in concluding its significance due to lack of clinical variables and possible differences in indications between LDR and posterior decompression and fusion.
Worldwide, most spine imaging is either “inappropriate” or “probably inappropriate”. The Choosing Wisely recommendation is “Do not perform imaging for lower back pain unless red flags are present.” There is currently no detailed breakdown of lower back pain diagnostic imaging performed in New Brunswick (NB) to inform future directions.
A registry of spine imaging performed in NB from 2011-2019 inclusive (n=410,000) was transferred to the secure platform of the NB Institute for Data, Training and Research (NB-IRDT). The pseudonymized data included linkable institute identifiers derived from an obfuscated Medicare number, as well as information on type of imaging, location of imaging, and date of imaging.
We included all lumbar, thoracic, and complete spine images. We excluded imaging related to the cervical spine, surgical or other procedures, out-of-province patients and imaging of patients under 19 years.
We verified categories of X-ray, Computed Tomography (CT), and Magnetic Resonance Imaging (MRI). Red flags were identified by ICD-10 code-related criteria set out by the Canadian Institute for Health Information.
We derived annual age- and sex-standardized rates of spine imaging per 100,000 population and examined regional variations in these rates in NB's two Regional Health Authorities (RHA-A and RHA-B). Age- and sex-standardized rates were derived for individuals with/without red flag conditions and by type of imaging. Healthcare utilization trends were reflected in hospital admissions and physician visits 2 years pre- and post-imaging. Rurality and socioeconomic status were derived using patients’ residences and income quintiles, respectively.
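Direct age- and sex-standardization, as used for the rates above, weights each stratum's crude imaging rate by a standard population's share of that stratum. A minimal sketch with two hypothetical strata (not NB registry figures):

```python
def directly_standardized_rate(stratum_counts, stratum_pop, std_weights):
    """Direct standardization: weight each stratum's crude rate by the
    standard population's share of that stratum; expressed per 100,000."""
    rate = 0.0
    for stratum, weight in std_weights.items():
        crude = stratum_counts[stratum] / stratum_pop[stratum]
        rate += crude * weight
    return rate * 100_000

# Hypothetical two-stratum example
counts = {"F 40-59": 300, "M 40-59": 200}   # images in each stratum
pop = {"F 40-59": 4_000, "M 40-59": 5_000}  # stratum populations
weights = {"F 40-59": 0.5, "M 40-59": 0.5}  # standard population shares

print(f"{directly_standardized_rate(counts, pop, weights):.1f} per 100,000")
```

Because the weights come from a fixed standard population, rates standardized this way are comparable across regions and years even when the underlying age/sex mix differs.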
Overall spine imaging rates in NB decreased between 2012 and 2019 by about 20% to 7,885 images per 100,000 people per year. This value may be higher than the Canadian average.
Females had a 23% higher average imaging rate than males. RHA-A had a 45% higher imaging rate than RHA-B. Imaging for red flag conditions accounted for about 20% of all imaging.
X-ray imaging accounted for 67% and 75% of all imaging in RHA-A and RHA-B, respectively. The corresponding proportions were 20% and 8% for CT, and 13% and 17% for MRI.
Two-year hospitalization rates and rates of physician visits were higher post-imaging. Females had higher age-standardized hospitalization and physician-visit rates, but the magnitude of increase was higher for males. Individuals with red flag conditions had increased physician visits, regardless of the actual reason for the visit.
Imaging rates were higher for rural than urban patients by about 26%. Individuals in the lowest income quintiles had higher imaging rates than those in the highest income quintiles. Physicians in RHA-A consistently ordered more images than their counterparts at RHA-B.
We linked spine imaging data with population demographic data to look for variations in lumbar spine imaging patterns. In NB, as in other jurisdictions, imaging tests of the spine are occurring in large numbers. We determined that patterns of imaging far exceed the numbers expected for ‘red flag’ situations. Our findings will inform a focused approach in groups of interest. Implementing high value care recommendations pre-imaging ought to replace low-value routine imaging.
Lumbar fusion surgery is an established procedure for the treatment of several spinal pathologies. Despite numerous techniques and existing devices, common surgical trends in lumbar fusion surgery are scarcely investigated. The purpose of this Canada-based study was to provide a descriptive portrait of current surgeons’ practice and implant preferences in lumbar fusion surgery while comparing findings to similar investigations performed in the United Kingdom.
Canadian Spine Society (CSS) members were sampled using an online questionnaire which was based on previous investigations performed in the United Kingdom. Fifteen questions addressed the various aspects of surgeons’ practice: fusion techniques, implant preferences, and bone grafting procedures. Responses were analyzed by means of descriptive statistics.
Of 139 eligible CSS members, 41 spinal surgeons completed the survey (29.5%). The most common fusion approach was transforaminal lumbar interbody fusion (TLIF), with 87.8% performing at least one procedure in the previous year. In keeping with this, 24 surgeons (58.5%) had performed 11 to 50 cases in that time frame. Eighty-six percent had performed no lumbar artificial disc replacements over their last year of practice. There was clear consistency on the relevance of patient-specific management (73.2%) in determining the preferred fusion approach. The most preferred method was pedicle screw fixation (78%). The use of stand-alone cages was not supported by any respondents. With regard to cage material, titanium cages were the most used (41.5%). Published clinical outcome data were the most important variable dictating implant choice (87.8%). Cage thickness was considered the most important aspect of cage geometry, and hyperlordotic cages were preferred at the lower lumbar levels. Autograft was the most commonly preferred bone graft (61.0%). Amongst the synthetic options, DBX/DBM graft (64.1%) in injectable paste form (47.5%) was preferred.
In conclusion, findings from this study are in partial agreement with previous work from the United Kingdom, but highlight the variance of practice within Canada and the need for large-scale clinical studies aimed to set specific guidelines for certain pathologies or patient categories.
Neuromuscular scoliosis patients face rates of major complications of up to 49%. Along with pre-operative risk reduction strategies (including nutritional and bone health optimization), intra-operative strategies to decrease blood loss and surgical time may help mitigate these risks. A major contributor to blood loss and surgical time is the insertion of instrumentation, which is challenging in neuromuscular patients given their abnormal vertebral and pelvic anatomy. Standard pre-operative radiographs provide minimal information regarding pedicle diameter, length, blocks to pedicle entry (e.g. iliac crest overhang), or iliac crest orientation. To minimize blood loss and surgical time, we developed an “ultra-low dose” CT protocol without sedation for neuromuscular patients.
Our prospective quality improvement study aimed to determine: (1) whether ultra-low dose CT without sedation was feasible given the movement disorders in this population; (2) what the radiation exposure was compared to standard pre-operative imaging; and (3) whether the images allowed accurate assessment of the anatomy and intra-operative navigation given the ultra-low dose and potential movement during the scan.
Fifteen non-ambulatory surgical patients with neuromuscular scoliosis received the standard spine XR and an ultra-low dose CT scan. Charts were reviewed for etiology of neuromuscular scoliosis and medical co-morbidities. The CT protocol was a high-speed, high-pitch, tube-current-modulated acquisition at a fixed tube voltage. Adaptive statistical iterative reconstruction was applied to soft-tissue and bone kernels to mitigate noise. Radiation dose was quantified using reported dose indices (computed tomography dose index (CTDIvol) and dose-length product (DLP)) and effective dose (E), calculated through Monte Carlo simulation. Statistical analysis was completed using a paired Student's t-test (α = 0.05). CT image quality was assessed for its use in preoperative planning and intraoperative navigation using the 7D Surgical System Spine Module (7D Surgical, Toronto, Canada).
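The paired t-test used to compare XR and CT effective doses can be sketched as follows. The dose values are hypothetical (not study data), and the critical value 2.365 is the two-sided 5% threshold for 7 degrees of freedom:

```python
import math
import statistics

def paired_t_statistic(a, b):
    """Paired Student's t statistic and degrees of freedom
    for two matched samples."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    mean_d = statistics.fmean(diffs)
    sd_d = statistics.stdev(diffs)  # sample SD of the paired differences
    return mean_d / (sd_d / math.sqrt(n)), n - 1

# Hypothetical effective doses in mSv (XR vs ultra-low dose CT)
xr = [0.5, 0.7, 0.4, 0.6, 0.5, 0.8, 0.3, 0.5]
ct = [0.6, 0.5, 0.6, 0.7, 0.4, 0.7, 0.5, 0.6]
t, df = paired_t_statistic(xr, ct)
# With alpha = 0.05 and df = 7, |t| < 2.365 means no significant difference
print(abs(t) < 2.365)  # True
```

The test operates on the within-patient differences, which is appropriate here because each patient received both imaging modalities.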
Eight males and seven females were included in the study. Average age was 14±2 years, preoperative Cobb angle 95±21 degrees, and kyphosis 60±18 degrees. One patient was unable to undergo the ultra-low dose CT protocol without sedation due to a co-diagnosis of severe autism. The average XR radiation dose was 0.5±0.3 mSv. Variability in radiographic dose was due to a wide range in patient size, positioning (supine, sitting), number of views, imaging technique and body habitus. Associated CT radiation metrics were CTDIvol = 0.46±0.14 mGy, DLP = 26.2±8.1 mGy.cm and E = 0.6±0.2 mSv. CT radiation variability was due to body habitus and arm orientation. The radiation dose differences between radiographic and CT imaging were not statistically significant. All CT scans had adequate quality for preoperative assessment of pedicle diameter and orientation, obstacles impeding pedicle entry, S2-alar screw orientation, and intra-operative navigation.
“Ultra-low dose” CT scans without sedation were feasible in paediatric patients with neuromuscular scoliosis. The effective dose was similar between the standard preoperative spinal XR and “ultra-low dose” CT scans. The “ultra-low dose” CT scan allowed accurate assessment of the anatomy, aided in pre-operative planning, and allowed intra-operative navigation despite the movement disorders in this patient population.
In multilevel posterior cervical instrumented fusions, extending the fusion across the cervico-thoracic junction at T1 or T2 (CTJ) has been associated with decreased rate of re-operation and pseudarthrosis but with longer surgical time and increased blood loss. The impact on patient reported outcomes (PROs) remains unclear. The primary objective was to determine whether extending the fusion through the CTJ influenced PROs at 3 and 12 months after surgery. Secondary objectives were to compare the number of patients reaching the minimally clinically important difference (MCID) for the PROs and mJOA, operative time duration, intra-operative blood loss (IOBL), length of stay (LOS), discharge disposition, adverse events (AEs), re-operation within 12 months of the surgery, and patient satisfaction.
This is a retrospective analysis of prospectively collected data from a multicenter observational cohort study of patients with degenerative cervical myelopathy. Patients who underwent a posterior instrumented fusion of 4 levels or greater (between C2 and T2) between January 2015 and October 2020, with 12 months of follow-up, were included. PROs (NDI, EQ5D, SF-12 PCS and MCS, NRS arm and neck pain) and mJOA were compared using ANCOVA, adjusted for baseline differences. Patient demographics, comorbidities and surgical details were abstracted. The percentage of patients reaching MCID for these outcomes was compared using the chi-square test. Operative duration, IOBL, AEs, re-operation, discharge disposition, LOS and satisfaction were compared using the chi-square test for categorical variables and independent-samples t-tests for continuous variables.
A total of 206 patients were included in this study (105 patients not crossing the CTJ and 101 crossing the CTJ). Patients who underwent a construct extending through the CTJ were more likely to be female and had worse baseline EQ5D and NDI scores (p< 0.05). When adjusted for baseline differences, there was no statistically significant difference between the two groups for the PROs and mJOA at 3 and 12 months. Surgical duration was longer in the group crossing the CTJ (p< 0.05). Satisfaction with the surgery was high in both groups but significantly different at 12 months (80% versus 72%, p= 0.042, for the group not crossing the CTJ and the group crossing the CTJ, respectively). The percentage of patients reaching MCID for the NDI score was 55% in the non-crossing group versus 69% in the group extending through the CTJ (p= 0.06).
Up to 12 months after surgery, there were no statistically significant differences in PROs between posterior constructs extended to the upper thoracic spine and those that were not. The adverse event profile did not differ significantly, but longer surgical time and greater blood loss were associated with constructs extending across the CTJ.
Symptomatic lumbar spinal stenosis is a common entity and is increasing in prevalence. Limited evidence is available regarding patient-reported outcomes comparing primary versus revision surgery in those undergoing lumbar decompression, with or without fusion. The available evidence suggests a lower rate of improvement in the revision group. The aim of this study was to assess patient-reported outcomes in patients undergoing revision decompression, with or without fusion, compared to primary surgery.
Patient data were collected from the Canadian Spine Outcomes Research Network (CSORN) database. Patients undergoing lumbar decompression with or without fusion were included. Patients under 18, those undergoing discectomy or greater than two-level decompression, and those undergoing concomitant cervical or thoracic spine surgery were excluded. Demographic data, smoking status, narcotic use, and number of comorbidities as well as individual comorbidities were included in our propensity scores. Patients undergoing primary versus revision decompression were matched in a 4:1 ratio according to their scores, whilst a separate matched cohort was created for those undergoing primary versus revision decompression and fusion. Continuous data were compared using a two-tailed t-test, whilst categorical variables were assessed using the chi-square test.
A total of 555 patients were included, with 444 primary patients matched to 111 revision surgery patients, of whom 373 (67%) did not have fusion. Patients undergoing primary decompression with fusion, compared to revision patients, were more likely to answer yes to “feel better after surgery” (87.8% vs 73.8%, p=0.023), “undergo surgery again” (90.1% vs 76.2%, p=0.021) and “improvement in mental health” (47.7% vs 28.6%, p=0.03) at six months. There was no difference in any of these outcomes at 12 or 24 months. There was no difference between the groups' ODI, EQ-5D, or SF-12 scores at any time point. Patients undergoing primary versus revision decompression alone showed no difference in PROMs at any time point.
In a matched cohort, there appears to be no difference in improvement in PROMs between patients undergoing primary versus revision decompression, with or without fusion, at two-year follow-up. This suggests similar outcomes can be obtained in revision cases.
The ability to calculate quality-adjusted life-years (QALYs) for degenerative cervical myelopathy (DCM) would enhance treatment decision making and facilitate economic analysis. QALYs are calculated using utilities, or health-related quality-of-life (HRQoL) weights. An instrument designed for cervical myelopathy disease would increase the sensitivity and specificity of HRQoL assessments. The objective of this study is to develop a multi-attribute utility function for the modified Japanese Orthopedic Association (mJOA) Score.
We recruited a sample of 760 adults from a market research panel. Using an online discrete choice experiment (DCE), participants rated 8 choice sets based on mJOA health states. A multi-attribute utility function was estimated using a mixed multinomial-logit regression model (MIXL). The sample was partitioned into a training set used for model fitting and a validation set used for model evaluation.
The regression model demonstrated good predictive performance on the validation set, with an AUC of 0.81 (95% CI: 0.80-0.82). The regression model was used to develop a utility scoring rubric for the mJOA. Regression results revealed that participants did not regard all mJOA domains as equally important. The rank order of importance was (in decreasing order): lower extremity motor function, upper extremity motor function, sphincter function, upper extremity sensation.
This study provides a simple technique for converting the mJOA score to utilities and quantifies the importance of the mJOA domains. The ability to evaluate QALYs for DCM will facilitate economic analysis and patient counseling. Clinicians should use these findings to offer treatments that maximize function in the attributes viewed as most important by patients.
En bloc resection for primary bone tumours and isolated metastasis are complex surgeries associated with a high rate of adverse events (AEs). The primary objective of this study was to explore the relationship between frailty/sarcopenia and major perioperative AEs following en bloc resection for primary bone tumours or isolated metastases of the spine. Secondary objectives were to report the prevalence and distribution of frailty and sarcopenia, and determine the relationship between these factors and length of stay (LOS), unplanned reoperation, and 1-year postoperative mortality in this population.
This is a retrospective study of prospectively collected data from a single quaternary care referral center consisting of patients undergoing an elective en bloc resection for a primary bone tumour or an isolated spinal metastasis between January 1st, 2009 and February 28th, 2020. Frailty was calculated with the modified frailty index (mFI) and spine tumour frailty index (STFI). Sarcopenia, determined by the total psoas area (TPA) to vertebral body (VB) ratio (TPA/VB), was measured at L3 and L4. Regression analysis produced odds ratios (ORs), incidence rate ratios (IRRs), and hazard ratios (HRs) quantifying the association between frailty/sarcopenia and major perioperative AEs, LOS, unplanned reoperation and 1-year postoperative mortality.
One hundred twelve patients met the inclusion criteria. Using the mFI, five patients (5%) were frail (mFI ≥ 0.21), while the STFI identified 21 patients (19%) as frail (STFI ≥ 2). The mean CT ratios were 1.45 (SD 0.05) and 1.81 (SD 0.06) at L3 and L4 respectively. Unadjusted analysis demonstrated that sarcopenia and frailty were not significant predictors of major perioperative AEs, LOS or unplanned reoperation. Sarcopenia defined by the CT L3 TPA/VB and CT L4 TPA/VB ratios significantly predicted 1-year mortality (HR of 0.32 per one unit increase, 95% CI 0.11-0.93, p=0.04 vs. HR of 0.28 per one unit increase, 95% CI 0.11-0.69, p=0.01) following unadjusted analysis. Frailty defined by an STFI score ≥ 2 predicted 1-year postoperative mortality (OR of 2.10, 95% CI 1.02-4.30, p=0.04).
The mFI was not predictive of any clinical outcome in patients undergoing en bloc resection for primary bone tumours or isolated metastases of the spine. Sarcopenia defined by the CT L3 TPA/VB and L4 TPA/VB and frailty assessed with the STFI predicted 1-year postoperative mortality on univariate analysis but not major perioperative AEs, LOS or reoperation. Further investigation with a larger cohort is needed to identify the optimal measure for assessing frailty and sarcopenia in this spine population.
This study aims to 1) determine reported cannabis use among patients waiting for thoracolumbar surgery and to 2) identify demographics and health differences between cannabis-users and non-cannabis users.
This observational cohort study is a retrospective national multicenter review of data from the Canadian Spine Outcomes and Research Network registry. Patients were dichotomized as cannabis users and non-cannabis users. Variables of interest were age, sex, BMI, smoking status, education, work status, exercise, the modified Oswestry Disability Index (mODI), the Numerical Rating Scales (NRS) for leg and back pain, the tingling/numbness scale, the SF-12 Quality of Life Questionnaire Mental Health Component (MCS), use of prescription cannabis, recreational cannabis, and narcotic pain medication. Continuous variables were compared using independent t-tests and categorical variables were compared using chi-square analyses.
Cannabis use was reported by 28.4% of pre-operative patients (N=704), 47% of whom used prescription cannabis. Cannabis use was reported most often by patients in Alberta (43.55%), British Columbia (38.09%) and New Brunswick (33.73%). Patients who reported using cannabis were significantly younger (mean=52.9 versus mean=61.21). Cannabis users reported higher rates of concurrent narcotic use (51.54% vs. 41.09%, p=0.001) and smoking (21.5% vs. 9.51%, p=0.001) than non-cannabis users. There were significant differences in cannabis use based on pathology (p=0.01). Patients who reported using cannabis had significantly worse MCS scores (difference=3.93, p=0.001) and PHQ-8 scores (difference=2.51, p=0.001). There was a significant difference in work status (p=0.002), with cannabis users reporting higher rates of being employed but not working (20%) compared to non-cannabis users (11.13%). Non-cannabis users were more likely to be retired (45.92%) than cannabis users (31.31%). There were no significant differences based on cannabis use for sex, education, exercise, NRS-back, NRS-leg, tingling-leg, mODI, or health state.
Thoracolumbar spine surgery patients use cannabis prior to surgery, both recreationally and by prescription. Patients using cannabis pre-operatively did not differ from non-users with regard to reported pain or disability, though they did differ in demographic and mental health variables.
There is a significant positive association between hours of brace wear and rate of success in the treatment of Adolescent Idiopathic Scoliosis (AIS). The abandonment rate reported in the literature averages 18%; in a recent randomized trial conducted at our center, the abandonment rate was 4%. We aim to document the abandonment rate for brace treatment during the COVID-19 pandemic and its impact on AIS progression.
We reviewed a database of AIS patients recruited between March and September 2020. Inclusion criteria were patients with AIS under brace treatment according to SRS criteria. The patients were divided into two cohorts: those with self-reported good adherence to treatment and those who voluntarily abandoned treatment during follow-up. Patients with irregular adherence were excluded. Data analysis included age, gender, Risser stage, type of brace, Cobb angles at first visit and last follow-up (mean 11 months), and percentage of progression. Unpaired Student's t-tests were used for comparison.
A total of 154 patients met the inclusion criteria; 20 were excluded due to irregular adherence. Eighty-nine patients (age: 12.1 y.o. ±1.4) reported good adherence to treatment, while 45 patients (age: 12.6 y.o. ±1.5) abandoned treatment, an abandonment rate of 29%. The cohort of compliant patients started treatment with a mean main thoracic (MT) curve of 26° and finished with 27°; the mean difference between measurements was +0.65°±7.5, with a mean progression rate of −4.6%. Patients who abandoned treatment started with a mean MT curve of 28° and finished with 33°, a mean increase of +5°±8 and a mean progression rate of −11%. The differences between the two cohorts were statistically significant (p=0.002). Five patients from the abandonment group were offered surgery because of curve progression.
The abandonment rate of brace treatment in AIS significantly increased during the first wave of the COVID-19 pandemic. Patients who voluntarily discontinued treatment had significant increases in curve progression and rates of surgical indication.
To compare preoperative and postoperative Health Related Quality of Life (HRQoL) scores in operated Adolescent Idiopathic Scoliosis (AIS) patients with and without concomitant isthmic spondylolisthesis.
A retrospective study of a prospective cohort of 464 individuals undergoing AIS surgery between 2008 and 2018 was performed. All patients undergoing surgery for AIS with a minimum 2-year follow-up were included. We excluded patients with prior or concomitant surgery for spondylolisthesis. HRQoL scores were measured using the SRS-22 questionnaire. Comparisons were performed between AIS patients with a non-surgically managed concomitant spondylolisthesis and those without spondylolisthesis.
AIS surgery was performed for 36 patients (15.2 ±2.5 y.o) with concomitant isthmic spondylolisthesis, and 428 patients (15.5 ±2.4 y.o) without concomitant spondylolisthesis. The two groups were similar in terms of age, sex, preoperative and postoperative Cobb angles. Preoperative and postoperative HRQoL scores were similar between the two groups. HRQoL improved significantly for all domains in both groups, except for pain in patients with spondylolisthesis. There was no need for surgical treatment of the spondylolisthesis and no slip progression during the follow-up duration after AIS surgery.
Patients undergoing surgical treatment of AIS with non-surgical management of a concomitant isthmic spondylolisthesis can expect improvement in HRQoL scores, similar to that observed in patients without concomitant spondylolisthesis.
Individuals with multi-compartment knee osteoarthritis (KOA) frequently experience challenges in activities of daily living (ADL) such as stair ambulation. The Levitation “Tri-Compartment Offloader” (TCO) knee brace was designed to reduce pain in individuals with multicompartment KOA. This brace uses novel spring technology to reduce tibiofemoral and patellofemoral forces via reduced quadriceps forces. Information on brace utility during stair ambulation is limited. This study evaluated the effect of the TCO during stair descent in patients with multicompartment KOA by assessing knee flexion moments (KFM), quadriceps activity and pain.
Nine participants (6 male, age 61.4±8.1 yrs; BMI 30.4±4.0 kg/m2) were tested following informed consent. Participants had medial tibiofemoral and patellofemoral OA (Kellgren-Lawrence grades two to four) diagnosed by an orthopaedic surgeon.
Joint kinetics and muscle activity were evaluated during stair descent to compare three bracing conditions: 1) without brace (OFF); 2) brace in low power (LOW); and 3) brace in high power (HIGH). The brace spring engages from 60° to 120° and 15° to 120° knee flexion in LOW and HIGH, respectively. Individual brace size and fit were adjusted by a trained researcher.
Participants performed three trials of step-over-step stair descent for each bracing condition. Three-dimensional kinematics were acquired using an 8-camera motion capture system. Forty-one spherical reflective markers were attached to the skin (on each leg and pelvis segment) and 8 markers to the brace. Ground reaction forces and surface EMG from the vastus medialis (VM) and vastus lateralis (VL) were collected for the braced leg. Participants rated knee pain intensity while performing the task following each bracing condition on a 10 cm Visual Analog Scale ranging from “no pain” (0) to “worst imaginable pain” (100).
Resultant brace and knee flexion angles and KFM were analysed during stair contact for the braced leg. The brace moment was determined using brace torque-angle curves and was subtracted from the calculated KFM. Resultant moments were normalized to bodyweight and height. Peak KFMs were calculated for the loading response (Peak1) and push-off (Peak2) phases of support. EMG signals were normalized and analysed during stair contact using wavelet analysis. Signal intensities were summed across wavelets and time to determine muscle power.
Results were averaged across all 3 trials for each participant. Paired T-tests were used to determine differences between bracing conditions with a Bonferroni adjustment for multiple comparisons (α=0.025).
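The statistical comparison described above (paired t-tests between bracing conditions with a Bonferroni-adjusted alpha of 0.025) can be sketched as follows. The peak KFM values below are synthetic, purely for illustration.

```python
# Illustrative sketch (SYNTHETIC data): paired t-test across bracing
# conditions with a Bonferroni-adjusted alpha, mirroring the analysis
# described above. Values do not come from the study.
from scipy import stats

# Synthetic normalized peak KFM for n=9 participants, two conditions
off  = [0.52, 0.61, 0.48, 0.55, 0.60, 0.47, 0.58, 0.53, 0.50]
high = [0.45, 0.55, 0.44, 0.50, 0.52, 0.41, 0.51, 0.49, 0.46]

n_comparisons = 2             # e.g., OFF vs LOW and OFF vs HIGH
alpha = 0.05 / n_comparisons  # Bonferroni adjustment -> 0.025

t_stat, p_value = stats.ttest_rel(off, high)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, significant: {p_value < alpha}")
```

Each participant contributes one value per condition, so the paired (repeated-measures) test is the appropriate choice here rather than an independent-samples test.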
Peak KFM during the push-off phase was significantly lower with the brace worn in HIGH compared to OFF.
Table 1: Average peak knee flexion moments, quadriceps muscle power and knee pain during stair descent in 3 brace conditions (n=9).
Quadriceps activity, knee flexion moments and pain were significantly reduced with TCO brace wear during stair descent in KOA patients. These findings suggest that the TCO assists the quadriceps to reduce KFM and knee pain during stair descent. This is the first biomechanical evidence to support use of the TCO to reduce pain during an ADL that produces especially high knee forces and flexion moments.
For any figures or tables, please contact the authors directly.
Reconstruction of the anterior cruciate ligament (ACL) restores stability of the knee in order to facilitate the return to activity (RTA). Although it is understood that the tendon autograft undergoes a ligamentous transformation postoperatively, knowledge about longitudinal microstructural differences in tissue integrity between types of tendon autografts (i.e., hamstring vs. patellar) remains limited.
Diffusion tensor imaging (DTI) has emerged as an objective biomarker to characterize the ligamentization process of the tendon autograft following surgical reconstruction. One major limitation to its use is the need for a pre-injury baseline MRI against which to compare recovery of the graft and inform RTA. Here, we explore the agreement in a DTI biomarker (fractional anisotropy, FA) between the two knees of healthy participants, with the hypothesis that agreement within a participant's knees may support the use of the contralateral knee as a reference to monitor recovery of the tendon autograft and inform RTA.
Fifteen participants with no previous history of knee injuries were enrolled in this study (age, 26.7 ± 4.4 years; M/F, 7/8). All images were acquired on a 3T Siemens Prisma scanner using a secured flexible 18-channel coil wrapped around the knee. Both knees were scanned.
A 3D anatomical Double Echo Steady State (DESS) sequence was acquired on which regions of interest (ROI) were placed consistent with the footprints of the ACL (femur, posteromedial corner on medial aspect of lateral condyle; tibia, anteromedial to intercondylar eminence). Diffusion images were acquired using fat saturation based on optimized parameters in-house.
All diffusion images were pre-processed using the FMRIB FSL toolbox. The footprint ROIs of the ACL were then used to reconstruct the ligament in each patient with fiber-based probabilistic tractography (FBPT), providing a semi-automated approach for segmentation. Average FA was computed for each subject, in both knees, and then correlated against one another using a Pearson correlation to assess the degree of similarity between the ACLs.
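A minimal sketch of the bilateral agreement analysis (Pearson correlation of mean FA between knees) might look like the following. The FA values here are synthetic placeholders, not the study's data.

```python
# Illustrative sketch (SYNTHETIC FA values): Pearson correlation of ACL
# fractional anisotropy between dominant and non-dominant knees, as in
# the bilateral agreement analysis described above.
from scipy import stats

# Hypothetical per-participant mean FA for each knee (n=15)
fa_dominant     = [0.151, 0.162, 0.170, 0.148, 0.175, 0.160, 0.182, 0.155,
                   0.168, 0.173, 0.145, 0.190, 0.166, 0.158, 0.177]
fa_non_dominant = [0.149, 0.158, 0.172, 0.150, 0.171, 0.157, 0.179, 0.152,
                   0.170, 0.169, 0.148, 0.185, 0.163, 0.160, 0.174]

rho, p_value = stats.pearsonr(fa_dominant, fa_non_dominant)
print(f"rho = {rho:.2f}, p = {p_value:.4g}")
```

A high correlation across participants, as reported in the results below, is what would support using the contralateral knee as a per-patient reference.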
A total of 30 datasets were collected for this study (1/knee/participant; N=15). The group averaged FA (± standard deviation) for the FBPT-segmented ACLs was 0.1683 ± 0.0235 (dominant leg) and 0.1666 ± 0.0225 (non-dominant leg). When comparing both knees within subjects, reliable agreement was found for the FBPT-derived ACL with a linear correlation coefficient (rho) equal to 0.87 (P < 0.001).
We sought to assess the degree of concordance in FA between the knees of healthy participants, with the hope of providing a method for using the contralateral “healthy” knee in the comparison of autograft-dependent longitudinal changes in microstructural integrity following ACL reconstruction. Our results suggest that good agreement in anisotropy can be achieved between the non-dominant and dominant knees using DTI and the FBPT segmentation method.
Contralateral anisotropy of the ACL, assuming no previous injuries, may be used as a quantitative reference biomarker for monitoring the recovery of the tendon autograft following surgical reconstruction, and gather further insight as to potential differences between chosen autografts. Clinically, this may also serve as an index to supplement decision-making with respect to RTA, and reduce rates of re-injuries.
As the field of hip arthroscopy continues to develop, functional measures and testing become increasingly important in patient selection, managing patient expectations prior to surgery, and assessing physical readiness for return to athletic participation. The Hip Sport Test (HST) was developed as a functional assessment of strength, coordination, agility, and range of motion prior to and following hip arthroscopy. However, the relationship between the HST and hip strength, range of motion, and hip-specific patient reported outcome (PRO) measures has not been investigated. The purpose of this study was to evaluate the correlation between HST scores and measurements of hip strength and range of motion prior to hip arthroscopy.
Between September 2009 and January 2017, patients aged 18-40 who underwent primary hip arthroscopy for the treatment of femoroacetabular impingement with available pre-operative HST, dynamometry, range of motion, and functional scores (mHHS, WOMAC, HOS-SSS) were identified. Patients were excluded if they were over 40 years old, had a Tegner activity score < 7, or did not have HST and dynamometry evaluations within one week of each other. Muscle strength scores were compared between the affected and unaffected sides to establish a percent difference, with a positive score indicating a weaker affected limb and a negative score indicating a stronger affected limb. Correlations were calculated between HST and strength testing, range of motion, and PROs.
A total of 350 patients met inclusion criteria. The average age was 26.9 ± 6.5 years, with 34% females and 36% professional athletes. Total and component HST scores were significantly associated with measures of strength, most strongly for flexion (rs = −0.20, p < 0.001), extension (rs = −0.24, p < 0.001) and external rotation (rs = −0.20, p < 0.001). Lateral and diagonal agility, components of the HST, were also significantly associated with muscle strength imbalances between internal versus external rotation (rs = −0.18, p=0.01) and flexion versus extension (rs = 0.12, p=0.03). In terms of range of motion, a significant correlation was detected between HST and internal rotation (rs = −0.19, p < 0.001). Both the total and component HST scores were positively correlated with pre-operative mHHS, WOMAC, and HOS-SSS (p < 0.001 for all rs).
The Hip Sport Test correlates with strength, range of motion, and PROs in the preoperative setting of hip arthroscopy. This test alone and in combination with other diagnostic examinations can provide valuable information about initial hip function and patient prognosis.
The Banff Patellofemoral Instability Instrument 2.0 (BPII 2.0) is a patient-reported disease-specific quality of life (QOL) outcome measure used to assess patients with recurrent lateral patellofemoral instability (LPI) both pre- and post-operatively. The purpose of this study was to compare the BPII 2.0 to four other relevant patient reported outcome measures (PROMs): the Tampa Scale-11 for kinesiophobia (TSK-11), the pain catastrophizing scale (PCS), a general QOL (EQ-5D-5L), and a return to sport index (ACL-RSI). This concurrent validation sought to compare and correlate the BPII 2.0 with these other measures of physical, psychological, and emotional health. The psychological and emotional status of patients can impact recovery and rehabilitation, and therefore a disease-specific PROM may be unable to consistently identify patients who would benefit from interventions encompassing a holistic and person-focused approach in addition to disease-specific treatment.
One hundred and ten patients with recurrent lateral patellofemoral instability (LPI) were assessed at a tertiary orthopaedic practice between January and October 2021. Patients were consented into the study and asked to complete five questionnaires: the BPII 2.0, TSK-11, PCS, EQ-5D-5L, and the ACL-RSI at their initial orthopaedic consultation. Descriptive demographic statistics were collected for all patients. A Pearson's r correlation coefficient was employed to examine the relationships between the five PROMs. These analyses were computed using SPSS 28.0 © (IBM Corporation, 2021).
One hundred and ten patients with a mean age of 25.7 (SD = 9.8) completed the five PROMs. There were 29 males (26.3%) and 81 females (73.6%), involving 50% symptomatic left knees and 50% symptomatic right knees. The mean age at first dislocation was 15.4 years (SD = 7.3; 1-6) and the mean BMI was 26.5 (SD = 7.3; range = 12.5-52.6). The results of the Pearson's r correlation coefficient demonstrated that the BPII 2.0 was statistically significantly related to all of the assessed PROMs.
There was significant correlation evident between the BPII 2.0 and the four other PROMs assessed in this study. The BPII 2.0 does not explicitly measure kinesiophobia or pain catastrophizing, however, the significant statistical relationship of the TSK-11 and PCS to the BPII 2.0 suggests that this information is being captured and reflected. The preliminary results of this concurrent validation suggest that the pre-operative data may offer predictive validity. Future research will explore the ability of the BPII 2.0 to predict patient quality of life following surgery.
External validation of machine learning predictive models is achieved through evaluation of model performance on different groups of patients than were used for algorithm development. This important step is uncommonly performed, inhibiting clinical translation of newly developed models. Recently, machine learning was used to develop a tool that can quantify revision risk for a patient undergoing primary anterior cruciate ligament (ACL) reconstruction (https://swastvedt.shinyapps.io/calculator_rev/). The source of data included nearly 25,000 patients with primary ACL reconstruction recorded in the Norwegian Knee Ligament Register (NKLR). The result was a well-calibrated tool capable of predicting revision risk one, two, and five years after primary ACL reconstruction with moderate accuracy. The purpose of this study was to determine the external validity of the NKLR model by assessing algorithm performance when applied to patients from the Danish Knee Ligament Registry (DKLR).
The primary outcome measure of the NKLR model was probability of revision ACL reconstruction within 1, 2, and/or 5 years. In the index study, 24 total predictor variables in the NKLR were included, and the modelling process eliminated variables that did not significantly improve prediction ability, without sacrificing accuracy. The result was a well-calibrated algorithm, developed using the Cox Lasso model, that required only five of the original 24 variables for outcome prediction. For this external validation study, all DKLR patients with complete data for the five variables required for NKLR prediction were included. The five variables were: graft choice, femur fixation device, Knee Injury and Osteoarthritis Outcome Score (KOOS) Quality of Life subscale score at surgery, years from injury to surgery, and age at surgery. Predicted revision probabilities were calculated for all DKLR patients. Model performance was assessed using the same metrics as the NKLR study: concordance and calibration.
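For readers unfamiliar with the concordance statistic used to evaluate the model, a minimal hand-rolled sketch is shown below. The times, event flags, and risk scores are toy values, not registry data, and real survival software handles tied times and censoring more carefully.

```python
# Illustrative sketch: computing a concordance (c-) index, the
# discrimination metric used to assess the NKLR model on DKLR data.
# Toy data only; not from either registry.

def concordance_index(times, events, risk_scores):
    """Fraction of comparable pairs in which the higher-risk patient
    experiences the event first (simplified: censoring handled only
    via comparability; tied risks count as half-concordant)."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # pair (i, j) is comparable if patient i had an observed
            # event before patient j's event or censoring time
            if events[i] and times[i] < times[j]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5
    return concordant / comparable

# Toy example: revision times (years), event flags, predicted risks
times  = [1.0, 2.0, 3.0, 4.0, 5.0]
events = [1,   1,   0,   1,   0]
risks  = [0.9, 0.7, 0.4, 0.5, 0.6]
print(round(concordance_index(times, events, risks), 2))
```

A value of 0.5 indicates discrimination no better than chance and 1.0 indicates perfect discrimination; the ~0.68 reported below sits in the moderate range.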
In total, 10,922 DKLR patients were included for analysis. Average follow-up time or time-to-revision was 8.4 (±4.3) years and overall revision rate was 6.9%. Surgical technique trends (i.e., graft choice and fixation devices) and injury characteristics (i.e., concomitant meniscus and cartilage pathology) were dissimilar between registries. The model produced similar concordance when applied to the DKLR population compared to the original NKLR test data (DKLR: 0.68; NKLR: 0.68-0.69). Calibration was poorer for the DKLR population at one and five years post primary surgery but similar to the NKLR at two years.
The NKLR machine learning algorithm demonstrated similar performance when applied to patients from the DKLR, suggesting that it is valid for application outside of the initial patient population. This represents the first machine learning model for predicting revision ACL reconstruction that has been externally validated. Clinicians can use this in-clinic calculator to estimate revision risk at a patient specific level when discussing outcome expectations pre-operatively. While encouraging, it should be noted that the performance of the model on patients undergoing ACL reconstruction outside of Scandinavia remains unknown.
Knee arthroscopy with meniscectomy is the third most common orthopaedic surgery performed after TKA and THA, comprising up to 16.6% of all procedures. The efficiency of orthopaedic care delivery, with respect to both waiting times and system costs, is a pressing concern: Canadian orthopaedic patients experience the longest wait times of any G7 country, and perioperative surgical care constitutes a significant portion of a hospital's budget.
In-Office Needle Arthroscopy (IONA) is an emerging technology that has been primarily studied as a diagnostic tool. Recent evidence shows that it is a cost-effective alternative to hospital- and community-based MRI with comparable accuracy. Recent procedure guides detailing IONA medial meniscectomy suggest a potential node for OR diversion. Given the high case volume of knee arthroscopy as well as the potential amenability to be diverted away from the OR to the office setting, IONA has the potential to generate considerable improvements in healthcare system efficiency with respect to throughput and cost savings. As such, the purpose of this study is to investigate the cost savings and impact on waiting times on a mid-sized Canadian community hospital if IONA is offered as an alternative to traditional operating room (OR) arthroscopy for medial meniscal tears.
In order to develop a comprehensive and accurate representation of the quantifiable operations involved in the current state of medial meniscus tear care, process mapping was performed describing the journey of a patient from presentation with knee pain to their general practitioner until case resolution. This technique was then repeated to create a second process map describing the hypothetical proposed state, whereby OR diversion may be conducted utilizing IONA. Each process map was then translated into a Dupont decision tree. To accurately determine the total number of patients eligible for this care pathway at our institution, the OR booking schedule for arthroscopy and meniscectomy/repair over a four-year period (2016-2020) was reviewed. A sensitivity analysis was performed to examine the effect of the number of patients who select IONA over meniscectomy and the number of revision meniscectomies after IONA on 1) the profit and profit margin determined by the MCS-Dupont financial model and 2) the throughput (percentage and number) determined by the MCS-throughput model.
Based on historic data at our institution, an average of 198 patients (SD 31) underwent either a meniscectomy or repair from 2016 to 2020. Revenue for both states was similar (p = .22), with current state revenue of $248,555.99 (SD $39,005.43) and proposed state revenue of $249,223.86 (SD $39,188.73). However, the reduction in expenses was significant (p < .0001) at 5.15%, with expenses in the current state of $281,415.23 (SD $44,157.80) and proposed state of $266,912.68 (SD $42,093.19), representing $14,502.95 in savings. Accordingly, profit improvement was also significant (p < .0001) at 46.2%, with current state profit of $(32,859.24) (SD $5,153.49) and proposed state profit of $(17,678.82) (SD $2,921.28). The addition of IONA into the care pathway of the proposed state produced an average improvement in throughput of 42 patients (SD 7), representing a 21.2% reduction in the number of patients requiring an OR procedure. Financial sensitivity analysis revealed that the proposed state profit exceeded the current state profit if as few as 10% of patients selected IONA, provided the revision rate remained below 40%.
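As a toy illustration of the Monte Carlo sensitivity logic (not the actual MCS-Dupont model), the sketch below varies the fraction of patients selecting IONA and the IONA-to-OR revision rate. All per-case costs are hypothetical placeholders, not the institution's cost data.

```python
# Illustrative Monte Carlo sketch of the OR-diversion sensitivity analysis.
# COST_OR and COST_IONA are HYPOTHETICAL placeholders.
import random

random.seed(42)

N_PATIENTS = 198      # mean annual arthroscopy volume (from the study)
COST_OR    = 1400.0   # hypothetical per-case OR expense
COST_IONA  = 900.0    # hypothetical per-case IONA expense

def simulate_expenses(p_iona: float, p_revision: float, n_sims: int = 2000) -> float:
    """Mean total expense when a fraction p_iona of patients choose IONA,
    of whom a fraction p_revision later require an OR revision."""
    totals = []
    for _ in range(n_sims):
        total = 0.0
        for _ in range(N_PATIENTS):
            if random.random() < p_iona:
                total += COST_IONA
                if random.random() < p_revision:  # failed IONA -> OR revision
                    total += COST_OR
            else:
                total += COST_OR
        totals.append(total)
    return sum(totals) / n_sims

baseline  = N_PATIENTS * COST_OR
with_iona = simulate_expenses(p_iona=0.5, p_revision=0.1)
print(f"baseline: ${baseline:,.0f}  with IONA: ${with_iona:,.0f}")
```

Sweeping `p_iona` and `p_revision` over a grid is what produces sensitivity thresholds of the kind reported above (e.g., profit improving once uptake exceeds some minimum and the revision rate stays below some maximum).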
The most important finding of this study is that IONA is a cost-effective alternative to traditional surgical arthroscopy for medial meniscectomy. Importantly, IONA can also be used as a diagnostic procedure, and has been shown to be a cost-effective alternative to MRI with similar diagnostic accuracy. The role of IONA as a joint diagnostic-therapeutic tool could positively impact MRI waiting times and MRI/MRA costs, and further reduce indirect costs to society. Given the well-established benefit of early meniscus treatment, accelerating both diagnosis and therapy is likely to result in positive effects.
It has been reported that 60-85% of patients who undergo PAO have concomitant intra-articular pathology that cannot be addressed with PAO alone. Currently, there are limited diagnostic tools to determine which patients would benefit from hip arthroscopy at the time of PAO to address intra-articular pathology. This study aims to determine whether preoperative PROMs, as measured by iHOT-33 scores, have predictive value for whether intra-articular pathology is addressed during PAO with arthroscopy. The secondary aim is to determine how often surgeons at high-volume hip preservation centers address intra-articular pathology when arthroscopy is performed during the same anesthesia event.
A randomized, prospective multicenter trial was performed on patients who underwent PAO and hip arthroscopy to treat hip dysplasia from 2019 to 2020. Preoperative PROMs and intraoperative findings and procedures were recorded and analyzed. A total of 75 patients (84% female, 16% male), with an average age of 27 years, were included in the study. Patients were randomized to PAO alone (34 patients) vs. PAO + arthroscopy (41 patients) during the same anesthesia event. The procedures performed, including types of labral and chondroplasty procedures, were recorded. Additionally, a two-sided Student's t-test was used to evaluate the difference in mean preoperative iHOT-33 scores between patients for whom a labral procedure was performed and those for whom it was not.
A total of 82% of patients had an intra-articular procedure performed at the time of hip arthroscopy, and 68% of patients who had PAO + arthroscopy had a labral procedure performed. The most common labral procedure was labral refixation, performed in 78% of patients who had a labral procedure. Femoral head-neck junction chondroplasty was performed in 51% of patients who had an intra-articular procedure. The mean iHOT-33 score was 29.3 in patients who had a labral procedure performed and 33.63 in those who did not (p=0.24).
Our findings demonstrate that preoperative iHOT-33 scores were not predictive of whether intra-articular labral pathology was addressed at the time of surgery. Additionally, we found that when labral pathology was addressed, labral refixation was the most common repair performed. This study also provides valuable information on which procedures high-volume hip preservation centers perform during PAO + arthroscopy.
Over half of postpartum women experience pelvic ring or hip pain, with multiple anatomic locations involved. The sacroiliac joints, pubic symphysis, lumbar spine and pelvic girdle are all well documented pain generators. However, despite the prevalence of postpartum hip pain, there is a paucity of literature regarding underlying soft tissue intra-articular etiologies. The purpose of this systematic review is to document and assess the available evidence regarding underlying intra-articular soft tissue etiologies of peri- and postpartum hip pain.
Three online databases (Embase, PubMed and Ovid [MEDLINE]) were searched from database inception until April 11, 2021. The inclusion criteria were English language studies, human studies, and those regarding symptomatic labral pathology in the peri- or postpartum period. Exclusion criteria were animal studies, commentaries, book chapters, review articles and technical studies. All titles, relevant abstracts and full-text articles were screened by two reviewers independently. Descriptive characteristics including the study design, sample size, sex ratio, mean age, clinical and radiographic findings, pathology, subsequent management and outcomes were documented. The methodological quality of the included studies was assessed using the Methodological Index for Non-Randomized Studies (MINORS) instrument.
The initial search identified 2472 studies. A systematic screening and assessment of eligibility identified 5 articles that satisfied the inclusion criteria. Twenty-two females were included. Twenty patients presented with labral pathology that necessitated hip arthroscopy with labral debridement or repair, with or without acetabuloplasty and/or femoroplasty. One patient presented with an incidental labral tear in the context of osteitis condensans ilii. One patient presented with post-traumatic osteoarthritis necessitating a hip replacement. The mean MINORS score of these 5 non-comparative studies was 2.8 (range 0-7), demonstrating a very low quality of evidence.
Intra-articular soft tissue injury is a documented, albeit sparsely reported, etiology contributing to peri- and postpartum hip pain. Further research to better delineate the prevalence, mechanism of injury, natural history and management options for women suffering from these pathologies at an already challenging time is necessary to advance the care of these patients.
Anterior cruciate ligament (ACL) injuries have been increasing, especially amongst adolescents. These injuries can increase the risk for early-onset knee osteoarthritis (OA). The consequences of late-stage knee OA include structural joint change, functional limitations and persistent pain. Interleukin-6 (IL-6) is a pro-inflammatory biomarker reflecting knee joint healing, and increasing evidence suggests that IL-6 may play a critical role in the development of pathological pain. The purpose of this study was to determine the relationship between subjective knee joint pain and function, and synovial fluid concentrations of the pro-inflammatory cytokine IL-6, in adolescents undergoing anterior cruciate ligament reconstruction surgery.
Seven youth (12-17 yrs.) undergoing anterior cruciate ligament (ACL) reconstruction surgery participated in this study. They completed the Pediatric International Knee Documentation Committee (Pedi-IKDC) questionnaire on knee joint pain and function. At the time of their ACL reconstruction surgery, synovial fluid samples were collected through aspiration to dryness with a syringe, without saline flushing. IL-6 levels in synovial fluid (sf) were measured using an enzyme-linked immunosorbent assay (ELISA). Spearman's rho correlation coefficient was used to determine the correlation between IL-6 levels and scores from the Pedi-IKDC questionnaire.
There was a statistically significant correlation between sfIL-6 levels and the Pedi-IKDC Symptoms score (-.929, p=0.003). The correlations between sfIL-6 and Pedi-IKDC activity score (.546, p = .234) and between sfIL-6 and total Pedi-IKDC score (-.536, p = .215) were not statistically significant.
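To illustrate the analysis described above: Spearman's rho is simply the Pearson correlation of the ranks. The sketch below implements it in pure Python on hypothetical IL-6 and symptom-score values (made up for illustration; the study's own data are not reproduced here).

```python
def rank(values):
    """Average ranks (1-based), handling ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend j over a run of tied values
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of 1-based positions i+1..j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rho = Pearson correlation of the ranks."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical values: higher synovial-fluid IL-6 with lower symptom scores
il6 = [2.1, 3.5, 1.0, 4.2, 2.8, 5.0, 3.9]    # pg/mL (made up)
symptoms = [70, 62, 85, 40, 55, 30, 50]      # Pedi-IKDC-style scores (made up)
print(round(spearman_rho(il6, symptoms), 3))  # → -0.964
```

A strongly negative rho, as in the abstract, indicates that higher sfIL-6 tracks with worse (lower) symptom scores.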
This is the first study to evaluate IL-6 as a biomarker of knee joint healing in an adolescent population, reporting a very strong correlation (-.929, p=0.003) between IL-6 in knee joint synovial fluid and a subjective questionnaire on knee joint pain. These findings provide preliminary scientific evidence regarding the relationship between knee joint pain, as determined by a validated questionnaire, and the inflammatory and healing status of the patient's knee. This study provides a basis and justification for future longitudinal research on biomarkers of knee joint healing in patients throughout their recovery and rehabilitation. Incorporating physiological and psychosocial variables into current return-to-activity (RTA) criteria has the potential to improve decision making for adolescents following ACL reconstruction, reducing premature RTA and thereby the risks of re-injury and early-onset knee OA.
The goal of this study was to identify the effect of mismatches in the subchondral bone surface at the native:graft interface on cartilage tissue deformation in human patellar osteochondral allografts (OCA). We hypothesized that large mismatches in the subchondral bone surface would result in higher stresses in the overlying and surrounding cartilage, potentially increasing the risk of graft failure.
Nano-CT scans of ten 16mm diameter cadaveric patellar OCA transplants were used to develop simplified and 3D finite element (FE) models to quantify the effect of mismatches in the subchondral bone surface. The simplified model consisted of a cylindrical plug with a 16 mm diameter (graft) and a washer with a 16 mm inner diameter and 36 mm outer diameter (surrounding native cartilage). The thickness of the graft cartilage was varied from 0.33x the thickness of native cartilage (proud graft subchondral bone) to 3x the thickness of native cartilage (sunken graft subchondral bone; Fig. 1). The thickness of the native cartilage was set to 2 mm. The surface of the cartilage in the graft was matched to the surrounding native cartilage. A 1 MPa pressure was applied to the fixed patellar cartilage surface. Scans were segmented using Dragonfly and meshed using HyperMesh. FE simulations were conducted in Abaqus 2019.
The simplified model demonstrated that a high stress region occurred in the cartilage at the sharp bony edge between the graft and native subchondral bone, localized to the region with thinner cartilage. A 20% increase in applied pressure occurs up to 50 μm away from the graft edge (primarily in the graft cartilage) for grafts with proud subchondral bone but varies little based on the graft cartilage thickness. For grafts with sunken subchondral bone, the size of the high stress region decreases as the difference between graft cartilage and native cartilage thickness decreases (Fig. 2-4), with a 200 μm high stress region occurring when graft cartilage was 3x thicker than native cartilage (i.e., greater graft cartilage thickness produces larger areas of stress in the surrounding native cartilage). The 3D models reproduced the key features demonstrated in the simplified model. Larger differences between native and graft cartilage thickness cause larger high stress regions. Differences between the 3D and simplified models are caused by heterogeneous cartilage surface curvature and thickness.
Simplified and 3D FE analysis confirmed our hypothesis that greater cartilage thickness mismatches resulted in higher cartilage stresses for sunken subchondral bone. Unexpectedly, cartilage stresses were independent of the cartilage thickness mismatch for proud subchondral bone. These FE findings did not account for tissue remodeling, patient variability in tissue mechanical properties, or complex tissue loading. In vivo experiments with full-thickness strain measurements should be conducted to confirm these findings. Mismatches in the subchondral bone can therefore produce stress increases large enough to cause local chondrocyte death near the subchondral surface. These stress increases can be reduced by (a) reducing the difference in thickness between graft and native cartilage or (b) using a graft with cartilage that is thinner than the native cartilage.
For any figures or tables, please contact the authors directly.
Ligament reconstruction following multi-ligamentous knee injuries involves graft fixation in bone tunnels using interference screws (IS) or cortical suspensory systems. Risks of IS fixation include graft laceration, cortical fractures, prominent hardware, and inability to adjust tensioning once secured. Closed loop suspensory (CLS) fixation offers an alternative with fewer graft failures and improved graft-to-tunnel incorporation. However, graft tensioning cannot be modified to accommodate errors in tunnel length evaluation. Adjustable loop suspensory (ALS) devices (i.e., Smith & Nephew Ultrabutton) address these concerns and also offer the ability to sequentially tighten each graft, as needed. However, ALS devices may lead to increased graft displacement compared to CLS devices. Therefore, this study aims to report outcomes in a large clinical cohort of patients using both IS and CLS fixation.
A retrospective review of radiographic, clinical, and patient-reported outcomes following ligament reconstruction from a Level 1 trauma centre was completed. Eligible patients were identified via electronic medical records using ICD-10 codes. Inclusion criteria were patients 18 years or older undergoing ACL, PCL, MCL, and/or LCL reconstruction between January 2018 and 2020 using IS and/or CLS fixation, with a minimum of six-month post-operative follow-up. Exclusion criteria were follow-up less than six months, incomplete radiographic imaging, and age less than 18 years. Knee dislocations (KD) were classified using the Schenck Classification. The primary outcome measure was implant removal rate. Secondary outcomes were revision surgery rate, deep infection rate, radiographic fixation failure rate, radiographic malposition, Lysholm and Tegner scores, clinical graft failure, and radiographic graft failure. Radiographic malposition was defined as implants over 5 mm off bone or intraosseous deployment of the suspensory fixation device. Clinical graft failure was defined as a grade II or greater Lachman, posterior drawer, varus opening at 20° of knee flexion, and/or valgus opening at 20° of knee flexion. Radiographic failure was defined as more than 5 mm, 3.2 mm, and/or 2.7 mm of side-to-side difference on PCL gravity stress views, valgus stress views, and/or varus stress views, respectively. Descriptive statistics were used.
Sixty-three consecutive patients (mean age = 41 years, range = 19-58) were included. A total of 266 CLS fixation devices (Ultrabuttons) and 135 IS were used. Mean follow-up duration was 383 days. Most injuries were KD types II and III. The graft revision surgery rate was 1.5%. Intraosseous deployment occurred in 6.2%, and 17% had implants secured in soft tissue rather than on bone. However, the implant removal rate was only 6.2%. Radiographic PCL gravity stress views demonstrated an average of 1.2 mm of side-to-side difference, with 6.2% meeting criteria for radiographic failure. A single patient met radiographic failure criteria for collateral grafts. Mean Lysholm and Tegner scores were 87.3 and 4.4, respectively, with follow-up beyond one year.
Both IS and CLS fixation demonstrate an extremely low revision surgery rate, a high rate of implant retention, excellent radiographic stability, and satisfactory patient-reported outcome scores.
Incorrect implant deployment was seen in a total of 17% of patients, yet none required implant removal. A single patient required graft revision due to implant failure.
To determine, in skeletally mature patients with a traumatic first-time patellar dislocation, the effect of early MPFL reconstruction versus rehabilitation on the rate of recurrent patellar dislocation and functional outcomes.
Three online databases (MEDLINE, EMBASE and PubMed) were searched from database inception (1946, 1974 and 1966, respectively) to August 20th, 2021, for literature addressing the management of patients sustaining acute first-time patellar dislocations. Data on redislocation rates, functional outcomes using the Kujala score, and complication rates were recorded. A meta-analysis was used to pool the mean postoperative Kujala score as well as calculate the proportion of patients sustaining redislocation episodes using a random effects model. A risk of bias assessment was performed for all included studies using the MINORS and Detsky scores.
Overall, there were a total of 22 studies and 1705 patients included in this review. The pooled mean redislocation rate in 18 studies comprising 1409 patients in the rehabilitation group was 31% (95% CI 25%-36%, I2 = 65%). Moreover, the pooled mean redislocation rate in five studies comprising 318 patients undergoing early MPFL reconstruction was 7% (95% CI 2%-17%, I2 = 70%). The pooled mean postoperative Kujala anterior knee pain score in three studies comprising 67 patients in the reconstructive group was 91 (95% CI 84-97, I2 = 86%), compared to a score of 81 (95% CI 78-85, I2 = 78%) in 7 studies comprising 332 patients in the rehabilitation group. The reoperation rate was 9.0% in 936 patients in the rehabilitation group and 2.2% in 322 patients in the reconstruction group.
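Pooling event rates across studies with a random-effects model, as described above, is commonly done on the logit scale with a DerSimonian-Laird estimate of the between-study variance. The sketch below is a minimal illustration of that technique with made-up per-study counts; it is not the review's actual data or analysis software.

```python
import math

def pool_proportions_random(events_totals, z=1.96):
    """DerSimonian-Laird random-effects pooling of proportions on the logit scale.
    events_totals: list of (events, n) per study. Returns (pooled_p, ci_lo, ci_hi)."""
    ys, vs = [], []
    for e, n in events_totals:
        if e == 0 or e == n:
            e, n = e + 0.5, n + 1.0  # continuity correction for boundary counts
        p = e / n
        ys.append(math.log(p / (1 - p)))   # logit
        vs.append(1 / e + 1 / (n - e))     # within-study variance of the logit
    w = [1 / v for v in vs]
    ybar = sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)
    # DerSimonian-Laird estimate of between-study variance tau^2
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, ys))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(ys) - 1)) / c)
    # re-weight including tau^2, then back-transform to a proportion
    wstar = [1 / (v + tau2) for v in vs]
    ystar = sum(wi * yi for wi, yi in zip(wstar, ys)) / sum(wstar)
    se = (1 / sum(wstar)) ** 0.5
    inv = lambda t: math.exp(t) / (1 + math.exp(t))
    return inv(ystar), inv(ystar - z * se), inv(ystar + z * se)

# Hypothetical per-study redislocation counts (made up for illustration)
studies = [(30, 100), (25, 90), (45, 120), (20, 80), (50, 140)]
p, lo, hi = pool_proportions_random(studies)
print(f"pooled rate {p:.0%} (95% CI {lo:.0%}-{hi:.0%})")
```

With heterogeneous studies (high I²), tau² widens the interval relative to a fixed-effect pool, which is why the abstract's confidence intervals are fairly broad.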
Management of acute first-time patellar dislocations with MPFL reconstruction resulted in a lower rate of redislocation and a higher Kujala score, as well as noninferiority with respect to complication rates, compared to nonoperative treatment. The paucity of high-level evidence warrants further investigation into this topic in the form of well-designed, adequately powered RCTs to determine the optimal management option for these patients.
The aim of this study was to determine the effect of the knee flexion angle (KFA) during tibial anterior cruciate ligament (ACL) graft fixation on patient-reported outcomes, graft stability, extension loss and re-operation following anatomic single-bundle ACL reconstruction.
All 169 included patients (mean age 28.5 years, 65% male) were treated with anatomic single-bundle ACL reconstruction using patellar tendon autograft and randomized to tibial fixation of the ACL graft at either 0° (n=85) or 30° (n=84). The primary outcome was the Knee Injury and Osteoarthritis Outcome Score (KOOS) two years following surgery. Secondary outcomes were the Marx Activity Scale (MAS), the rate of re-operation, and physical exam findings at one year including KT-1000 and side-to-side differences in knee extension.
The follow-up rate was 82% (n=139) for the primary outcome. The graft failure rate at two years was 1% (n=2, 1 per group). ACL tibial graft fixation at 0° or 30° did not have a significant effect on KOOS scores at two years following ACLR. Patients whose graft was fixed at a knee flexion angle of 0° had greater scores on the Marx Activity Scale (mean 9.6 [95%CI 8.5-10.6] versus 8.0 [95%CI 6.9-9.1], p=0.04) and a greater proportion achieved the minimal clinically important difference (MCID) for the KOOS pain subscale (94% vs 81%, p=0.04). There was no significant difference in knee extension loss, KT-1000 measurements or re-operation between the two groups.
In the setting of anatomic single-bundle ACLR using patellar tendon autograft and anteromedial portal femoral drilling, there was no difference in KOOS scores between patients fixed at 0° and 30°. Patients fixed in full extension did demonstrate higher activity scores at two years following surgery and a greater likelihood of achieving the MCID for KOOS pain.
The purpose of this study was to determine the incidence of graft-tunnel mismatch (GTM) when performing anatomic anterior cruciate ligament reconstruction (ACLR) using bone-patella tendon-bone (BPTB) grafts and anteromedial portal drilling.
Beginning in November 2018, 100 consecutive patients who underwent ACLR by two sports fellowship-trained orthopaedic surgeons using BPTB autograft and anteromedial portal drilling were prospectively identified. The BPTB graft dimensions and the femoral tunnel distance, tibial tunnel distance, intra-articular distance, and total distance were measured. Surgeons determined the depth and angle of tunnels based on the patella tendon graft length dimensions in each case. After passage of the graft, the distance from the distal graft tip to the tibial cortex aperture was measured. GTM was defined as the need for additional measures to obtain satisfactory tibial graft fixation (<15-20 mm of bone fixation).
The incidence of mismatch was 6/100 (6%). Five cases involved the graft being too long, with the tibial bone plug protruding excessively from the tibial tunnel; 4/5 had a patella tendon length ≥50 mm. Three cases were managed with femoral tunnel recession, and two were treated with a free bone plug technique. One patient with a patella tendon length of 35 mm had a graft that was too short, with the tibial bone plug recessed in the tibial tunnel. Of patients whose tibial tunnel distance was within 5 mm of the patella tendon length, only 1/46 (2%) had mismatch, whereas 5/54 (9%) of patients with a >5 mm difference had mismatch.
The incidence of graft-tunnel mismatch after anatomic ACLR using BPTB and anteromedial portal drilling in this study is 6%. To limit the occurrence of GTM where the graft is too long, surgeons should drill tibial tunnel distances within 5 mm of the patella tendon length.
Arthroscopic hip procedures have increased dramatically over the last decade as equipment and techniques have improved. Patients who require hip arthroscopy for femoroacetabular impingement on occasion require surgery on the contralateral hip. Previous studies have found that younger age of presentation and lower Charlson comorbidity index have higher risk for requiring surgery on the contralateral hip but have not found correlation to anatomic variables. The purpose of this study is to evaluate the factors that predispose a patient to requiring subsequent hip arthroscopy on the contralateral hip.
This is an IRB-approved, single-surgeon retrospective cohort study from an academic, tertiary referral centre. A chart review was conducted on 310 primary hip arthroscopy procedures from 2009-2020. We identified 62 cases that went on to have a hip arthroscopy on the contralateral side. The bilateral hip arthroscopy cohort was compared to the unilateral cohort for sex, age, BMI, pre-operative alpha angle and centre-edge angle measured on AP pelvis X-ray, femoral torsion, traction time, skin-to-skin time, Tönnis grade, and intra-operative labral or chondral defect. A p-value <0.05 was deemed significant.
Of the 62 patients that required contralateral hip arthroscopy, the average age was 32.7 compared with 37.8 in the unilateral cohort (p=0.01), and BMI was lower in the bilateral cohort (26.2) than in the unilateral cohort (27.6) (p=0.04). The average alpha angle was 76.3° in the bilateral compared to 66.0° in the unilateral cohort (p=0.01). Skin-to-skin time was longer in cases in which a contralateral surgery was performed (106.3 mins vs 86.4 mins) (p=0.01). Interestingly, 50 male patients required contralateral hip arthroscopy compared to 12 female patients (p=0.01). No other variables were statistically significant.
In conclusion, this study reinforces existing literature showing that younger patients are more likely to require contralateral hip arthroscopy. This may be because younger patients demand greater range of motion from the hip joint for activities such as sports, whereas older patients may not need the same range of motion for their activities. Significantly higher alpha angles were noted in patients requiring contralateral hip arthroscopy, which has not been shown in previous literature; larger CAM deformities likely impinge more even during simple activities of daily living. Contralateral hip arthroscopy was also more common in male patients, who typically have a larger CAM deformity. In summary, this study will help to risk-stratify patients who are likely to require contralateral hip arthroscopy, and this should be a discussion point during pre-operative counseling. Early subsequent or simultaneous hip arthroscopy may be offered to young male patients with large CAM deformities even when symptoms are mild.
Orthopaedic surgeons prescribe more opioids than any other surgical specialty. Opioids remain the analgesic of choice following arthroscopic knee and shoulder surgery. There is growing evidence that opioid-sparing protocols may reduce postoperative opioid consumption while adequately addressing patients' pain. However, there is a lack of prospective, comparative trials evaluating their effectiveness. The objective of the current randomized controlled trial (RCT) was to evaluate the efficacy of a multi-modal, opioid-sparing approach to postoperative pain management in patients undergoing arthroscopic shoulder and knee surgery.
The NO PAin trial is a pragmatic, definitive RCT (NCT04566250) enrolling 200 adult patients undergoing outpatient shoulder or knee arthroscopy. Patients are randomly assigned in a 1:1 ratio to an opioid-sparing group or standard of care. The opioid-sparing group receives a three-pronged prescription package consisting of 1) a non-opioid prescription: naproxen, acetaminophen and pantoprazole, 2) a limited opioid “rescue prescription” of hydromorphone, and 3) a patient education infographic. The control group is the current standard of care as per the treating surgeon, which consists of an opioid analgesic. The primary outcome of interest is oral morphine equivalent (OME) consumption up to 6 weeks postoperatively. The secondary outcomes are postoperative pain scores, patient satisfaction, quantity of OMEs prescribed and number of opioid refills. Patients are followed at both 2 and 6 weeks postoperatively. Data analysts and outcome assessors are blinded to the treatment groups.
As of December 1, 2021 we have enrolled 166 patients, reaching 83% of target enrolment. Based on the current recruitment rate, we anticipate that enrolment will be completed by the end of January 2022 with final follow-up and study close out completed by March of 2022. The final results will be released at the Canadian Orthopaedic Association Meeting in June 2022 and be presented as follows. The mean difference in OME consumption was XX (95%CI: YY-YY, p=X). The mean difference in OMEs prescribed was XX (95%CI: YY-YY, p=X). The mean difference in Visual Analogue Pain Scores (VAS) and patient satisfaction are XX (95%CI: YY-YY, p=X). The absolute difference in opioid refills was XX (95%CI: YY-YY, p=X).
The results of the current study will demonstrate whether an opioid sparing approach to postoperative outpatient pain management is effective at reducing opioid consumption while adequately addressing postoperative pain in patients undergoing outpatient shoulder and knee arthroscopy. This study is novel in the field of arthroscopic surgery, and its results will help to guide appropriate postoperative analgesic management following these widely performed procedures.
Meniscal tears are the most common knee injuries, occurring in acute ruptures or in chronic degenerative conditions. Meniscectomy and meniscal repair are two surgical treatment options. Meniscectomy is easier, faster, and the patient can return to their normal activities earlier. However, this procedure has long-term consequences in the development of degenerative changes in the knee, potentially leading to knee replacement. On the other hand, meniscal repair can offer prolonged benefits to the patients, but it is difficult to perform and requires longer rehabilitation.
Sutures are used for meniscal repairs, but they have limitations. They induce tissue damage when passing through the meniscus. Furthermore, under dynamic loading of the knee, they can cause tissue shearing and potentially lead to meniscal repair failure.
Our team has developed a new technology of resistant adhesive hydrogels to coat the suture used to repair meniscal tissue.
The objective of this study is to biomechanically compare two suture types on bovine menisci specimens: 1) pristine sutures and 2) gel adhesive puncture sealing (GAPS) sutures, on a repaired radial tear under cyclic tensile testing.
Five bovine knees were dissected to retrieve the menisci. A complete radial tear was created in each of the 10 menisci. They were separated into two groups and repaired using either pristine sutures (2-0 Vicryl) or GAPS sutures (2-0 Vicryl coated with adhesive hydrogel), with a single stitch and five knots.
The repaired menisci were clamped on an Instron machine. The specimens were cyclically preconditioned between one and 10 newtons for 10 cycles and then cyclically loaded for 500 cycles between five and 25 newtons at a frequency of 0.16 Hz. The gap formed between the edges of the tear after 500 cycles was then measured using an electronic measurement device. The suture loop before and after testing was also measured to ensure that there was no suture elongation or loosening of the knot.
The groups were compared statistically using Mann-Whitney tests for nonparametric data. The level of significance was set to 0.05.
The mean gap formation of the pristine sutures was 5.61 mm (SD = 2.097) after 500 cycles of tensile testing and 2.38 mm (SD = 0.176) for the GAPS sutures. Comparing both groups, the gap formed with the coated sutures was significantly smaller (p = 0.009) than with pristine sutures. The length of the loop was equal before and after loading. Further investigation of tissue damage indicated that the gap was formed by suture filament cutting into the meniscal tissue.
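For two small independent groups like these (five specimens each), the Mann-Whitney comparison can be computed exactly by enumerating all relabelings. The sketch below illustrates that technique; the gap measurements are hypothetical stand-ins, not the study's data (which reported p = 0.009).

```python
from itertools import combinations

def mann_whitney_u(a, b):
    """U statistic for sample a versus b (count of pairs with a < b; ties count 0.5)."""
    u = 0.0
    for x in a:
        for y in b:
            if x < y:
                u += 1
            elif x == y:
                u += 0.5
    return u

def exact_p_two_sided(a, b):
    """Exact two-sided p-value by enumerating every group relabeling.
    Feasible for small samples: C(10, 5) = 252 relabelings here."""
    pooled = a + b
    n_a = len(a)
    observed = min(mann_whitney_u(a, b), mann_whitney_u(b, a))
    count = total = 0
    for idx in combinations(range(len(pooled)), n_a):
        chosen = set(idx)
        ga = [pooled[i] for i in idx]
        gb = [pooled[i] for i in range(len(pooled)) if i not in chosen]
        u = min(mann_whitney_u(ga, gb), mann_whitney_u(gb, ga))
        count += u <= observed  # relabelings at least as extreme as observed
        total += 1
    return count / total

# Hypothetical gap measurements in mm after 500 cycles (made up for illustration)
pristine = [5.2, 7.8, 3.4, 6.1, 5.5]
gaps = [2.2, 2.5, 2.3, 2.6, 2.4]
print(round(exact_p_two_sided(pristine, gaps), 4))  # → 0.0079 (i.e., 2/252)
```

With complete separation of the two groups, the smallest attainable two-sided exact p-value at n = 5 per group is 2/252 ≈ 0.008, consistent in magnitude with the reported p = 0.009.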
The long-term objective of this research is to design a meniscal repair toolbox from which the surgeon can adapt the procedure for each meniscal tear. This preliminary experimentation on bovine menisci is promising because the new GAPS sutures seem to keep the edges of the meniscal tear together better than pristine sutures, with the hope of a clinical correlation with enhanced meniscal healing.
Since its creation, labral repair has become the preferred method among surgeons for the arthroscopic treatment of acetabular labral tears resulting in pain and dysfunction for patients. Labral reconstruction is performed mainly in revision hip arthroscopy but can be used in the primary setting when the labrum cannot be repaired or is calcified. The purpose of this study was to compare the survival between primary labral repair and labral reconstruction with survival defined as no further surgery (revision or total hip replacement).
Patients who underwent labral repair or reconstruction between January 2005 and December 2018 in the primary setting were included in the study. Patients were included if they had primary hip arthroscopy with the senior author for femoroacetabular impingement (FAI), involving either labral reconstruction or labral repair, and were between the ages of 18 and 65 at the time of surgery. Exclusion criteria included confounding injuries (Legg-Calvé-Perthes disease, avascular necrosis, femoral head fracture, etc.), history of unilateral or bilateral hip surgeries, or Tönnis grade of 2 or 3 at the time of surgery. Labral repair was performed when adequate tissue was available, and labral reconstruction was performed when tissue was absent, ossified or torn beyond repair.
A total of 501 labral repairs and 114 labral reconstructions performed in the primary setting were included in the study. Labral reconstruction patients were older (37±10 years) than labral repair patients (34±11 years) (p=0.021). Second surgeries were required in 19/114 (17%) labral reconstructions and 40/501 (8%) labral repairs [odds ratio: 2.3; 95% CI 1.3 to 4.2] (p=0.008). Revision hip arthroscopy was required in 6/114 (5%) labral reconstructions and 33/501 (6.5%) labral repairs (p=0.496). Total hip replacement was required in 13/114 (11%) labral reconstructions and 7/501 (1.4%) labral repairs [odds ratio: 9.1; 95% CI 3.5 to 23] (p<0.01). The mean survival for the labral repair group was 10.2 years (95% CI 10 to 10.5) and 11.9 years (95% CI 10.9 to 12.8) in the labral reconstruction group.
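The reported odds ratio for a second surgery can be reproduced from the abstract's own counts (19/114 reconstructions vs 40/501 repairs) using a standard Woolf (log) confidence interval. The sketch below is an illustration of that calculation, not the authors' analysis code.

```python
import math

def odds_ratio_ci(e1, n1, e2, n2, z=1.96):
    """Odds ratio for group 1 vs group 2 with a Woolf (log-scale) confidence interval.
    e = events (second surgeries), n = group size."""
    a, b = e1, n1 - e1   # events / non-events, group 1 (reconstruction)
    c, d = e2, n2 - e2   # events / non-events, group 2 (repair)
    or_ = (a / b) / (c / d)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Counts taken directly from the abstract
or_, lo, hi = odds_ratio_ci(19, 114, 40, 501)
print(f"OR {or_:.1f} (95% CI {lo:.1f} to {hi:.1f})")  # → OR 2.3 (95% CI 1.3 to 4.2)
```

The interval excludes 1, matching the reported significance (p=0.008) for second surgery after reconstruction.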
Conversion to total hip was required more often following primary labral reconstruction. Revision hip arthroscopy rates were similar between groups as was the mean survival, with both over 10 years. Similar survival was seen in labral repair and reconstruction when strict patient selection criteria are followed.
Multiligament knee injuries (MLKI) are rare and life-altering injuries that remain difficult to treat clinically due to a paucity of evidence guiding surgical management and timing. The purpose of this study was to compare injury specific functional outcomes following early versus delayed surgical reconstruction in MLKI patients to help inform timing decisions in clinical practice.
A retrospective analysis of prospectively collected data from patients with MLKIs at a single academic Level 1 trauma centre was conducted. Patients were eligible for inclusion if they had an MLKI, underwent reconstructive surgery either within 6 weeks of injury or between 12 weeks and 2 years from injury, and had at least 12 months of post-surgical follow-up. Patients with a vascular injury, open injuries or associated fractures were excluded. Study participants were stratified into early (<6 weeks from injury) and delayed (12 weeks to 2 years from injury) groups. The primary outcome was patient-reported, injury-specific quality of life in the form of the Multiligament Quality of Life questionnaire (MLQOL) and its four domains (Physical Impairment, Emotional Impairment, Activity Limitations and Societal Involvement). We secondarily analyzed differences in the need for manipulation under anesthesia and reoperation rates, as well as radiographic Kellgren-Lawrence (KL) arthritis grades, knee laxity grading and range of motion at the most recent follow-up.
A total of 131 patients met our inclusion criteria, all having had surgery between 2006 and 2019. There were 75 patients in the early group and 56 in the delayed group. The mean time to surgery was 17.6 ± 8.0 days in the early group versus 279 ± 146.5 days in the delayed group. Mean postoperative follow-up was 58 months. There were no significant differences between early and delayed groups with respect to age (34 vs. 32.8 years), sex (77% vs 63% male), BMI (28.3 vs 29.7 kg/m2), or injury mechanism (p>0.05). The early surgery group included more patients with lateral-sided injuries (n=49 [65%] vs. n=23 [41%]; p=0.012), a higher severity of Schenck Classification (p=0.024), and more nerve injuries at initial presentation (n=35 [49%] vs n=8 [18%]). There were no significant differences between groups in any MLQOL domain (p>0.05) when controlling for age, sex, Schenck classification, medial versus lateral injury, and nerve injury status. In terms of our secondary outcomes, we found that the early group underwent significantly more manipulations under anesthesia compared with the delayed group (n=24 [32%] vs n=8 [14%], p=0.024). We did not identify a significant difference in physical examination laxity grades, range of motion, KL grade or reoperation rates between groups (p>0.05).
We found no difference in patient reported outcomes between those who underwent early versus delayed surgery following MLKI reconstruction. In our secondary outcomes, we found significantly more patients in the early surgery group required a manipulation under anesthesia following surgery, which may indicate a propensity for arthrofibrosis after early MLKI reconstruction.
Recurrent patellar instability is a common problem and there are multiple demographic and pathoanatomic risk factors that predispose patients to dislocating their patella. The most common of these is trochlear dysplasia. In cases of severe trochlear dysplasia associated with patellar instability, a sulcus deepening trochleoplasty combined with a medial patellofemoral ligament reconstruction (MPFLR) may be indicated. Unaddressed trochlear pathology has been associated with failure and poor post-operative outcomes after stabilization. The purpose of this study is to report the clinical outcome of patients having undergone a trochleoplasty and MPFLR for recurrent lateral patellofemoral instability in the setting of high-grade trochlear dysplasia at a mean of 2 years follow-up.
A prospectively collected database was used to identify 46 patients (14 bilateral) who underwent a combined primary MPFLR and trochleoplasty for recurrent patellar instability with high-grade trochlear dysplasia between August 2013 and July 2021. A single surgeon performed a thin flap trochleoplasty using a lateral para-patellar approach with lateral retinaculum lengthening in all 60 cases. A tibial tubercle osteotomy (TTO) was performed concomitantly in seven knees (11.7%) and the MPFLR was performed with a gracilis tendon autograft in 22%, an allograft tendon in 27% and a quadriceps tendon autograft in 57% of cases. Patients were assessed post-operatively at three weeks and three, six, 12 and 24 months. The primary outcome was the Banff Patellar Instability Instrument 2.0 (BPII 2.0) and secondary outcomes were incidence of recurrent instability, complications and reoperations.
The mean age was 22.2 years (range, 13 to 45), 76.7% of patients were female, the mean BMI was 25.03 and the prevalence of a positive Beighton score (>4/9) was 40%. The mean follow-up was 24.3 (range, 6 to 67.7) months and only one patient was lost to follow-up before one year post-operatively. The BPII 2.0 improved significantly from a mean of 27.3 pre-operatively to 61.1 at six months (p < 0.01), with further slight improvement to a mean of 62.1 at 12 months and 65.6 at 24 months post-operatively. Only one patient (1.6%) experienced a single event of subluxation without frank dislocation, at nine months. There were three reoperations (5%): one for removal of the TTO screws and a prominent chondral nail, one for second-look arthroscopy for a persistent J-sign, and one for mechanical symptoms associated with overgrowth of a lateral condyle cartilage repair with a bioscaffold. There were no other complications.
In this patient cohort, combined MPFLR and trochleoplasty for recurrent patellar instability with severe trochlear dysplasia led to significant improvement of patient reported outcome scores and no recurrence of patellar dislocation at a mean of 2 years. Furthermore, in this series the procedure demonstrated a low rate (5%) of complications and reoperations.
Despite the routine use of systemic antibiotic prophylaxis, postoperative infection following fracture surgery remains a persistent issue with substantial morbidity. Additional local antibiotic prophylaxis may have a protective effect, and some orthopaedic surgeons have adopted it in recent years despite limited evidence of its benefit. The purpose of this systematic review and meta-analysis was to evaluate the current literature regarding the effect of prophylactic local antibiotics on the rate of infection in fracture surgery in both open and closed fractures.
A comprehensive search of Medline, EMBASE, and PubMed was performed. Cohort studies were eligible if they investigated the effect on infection rate of additional local antibiotic prophylaxis compared with systemic prophylaxis alone following fracture surgery. The data were pooled in a meta-analysis.
In total, four randomized controlled trials and 11 retrospective cohort studies with a total of 6161 fractures from various anatomical locations were eligible for inclusion. The majority of the included studies were Level 3 evidence and had a moderate risk of bias. When all fractures were pooled, the risk of infection was significantly reduced when local antibiotics were applied compared with the control group receiving systemic prophylaxis only (OR = 0.39; 95%CI: 0.26 to 0.53, P < 0.001). In particular, there was a significant reduction in deep infections (OR = 0.59; 95%CI: 0.38 to 0.91, P = 0.017). The beneficial effect of local antibiotics for preventing total infection was seen in both open fractures (OR = 0.35; 95%CI: 0.23 to 0.53, P < 0.001) and closed fractures (OR = 0.58; 95%CI: 0.35 to 0.95, P = 0.029) when analyzed separately.
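The pooled estimates above can be illustrated with a generic fixed-effect, inverse-variance pooling sketch. The authors' exact meta-analytic model is not specified in the abstract, and the per-study 2x2 tables below are hypothetical placeholders, not the included studies:

```python
import math

def pool_odds_ratios(studies):
    """Fixed-effect inverse-variance pooling of odds ratios.

    Each study is a 2x2 table (events_treated, n_treated,
    events_control, n_control). Returns the pooled OR and 95% CI.
    """
    num = den = 0.0
    for events_t, n_t, events_c, n_c in studies:
        a, b = events_t, n_t - events_t
        c, d = events_c, n_c - events_c
        log_or = math.log((a * d) / (b * c))
        var = 1 / a + 1 / b + 1 / c + 1 / d   # variance of log(OR)
        num += log_or / var                    # inverse-variance weighting
        den += 1 / var
    pooled, se = num / den, math.sqrt(1 / den)
    return (math.exp(pooled),
            math.exp(pooled - 1.96 * se),
            math.exp(pooled + 1.96 * se))

# Hypothetical studies: local antibiotics vs. systemic prophylaxis alone
or_, ci_lo, ci_hi = pool_odds_ratios([(5, 200, 14, 210), (8, 350, 19, 340)])
```

An OR below 1 whose entire confidence interval lies below 1, as in the abstract, indicates a significant protective effect of the added local antibiotics.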
This meta-analysis suggests a significant risk reduction for postoperative infection following fracture surgery when local antibiotics were added to standard systemic prophylaxis, with a protective effect present in both open and closed fractures.
Fractures of the humeral diaphysis occur in a bimodal distribution and represent 3-5% of all fractures. Presently, the standard treatment of isolated humeral diaphyseal fractures is nonoperative care using splints, braces, and slings. Recent data has questioned the effectiveness of this strategy in ensuring fracture healing and optimal patient function. The primary objective of this randomized controlled trial (RCT) was to assess whether operative treatment of humeral shaft fractures with a plate and screw construct provides a better functional outcome than nonoperative treatment. Secondary objectives compared union rates and both clinical and patient-reported outcomes.
Eligible patients with an isolated, closed humeral diaphyseal fracture were randomized to either nonoperative care (initial sugar-tong splint, followed by functional coaptation brace) or open reduction and internal fixation (ORIF; plate and screw construct). The primary outcome measure was the Disabilities of the Arm, Shoulder and Hand (DASH) score assessed at 2, 6, 16, 24, and 52 weeks. Secondary outcomes included the Short Musculoskeletal Functional Assessment (SMFA), the Constant Shoulder Score, range of motion (ROM), and radiographic parameters. Independent samples t-tests and Chi-squared analyses were used to compare treatment groups. The DASH, SMFA, and Constant Score were modelled over time using a multiple variable mixed effects model.
A total of 180 patients were randomized, with 168 included in the final analysis. There were 84 patients treated nonoperatively and 84 treated with ORIF. There was no significant difference between the two treatment groups for age (mean = 45.4 years, SD 16.5 for nonoperative group and 41.7, SD 17.2 years for ORIF group; p=0.16), sex (38.1% female in nonoperative group and 39.3% female in ORIF group; p=0.87), body mass index (mean = 27.8, SD 8.7 for nonoperative group and 27.2, SD 6.2 for ORIF group; p=0.64), or smoking status (p=0.74). There was a significant improvement in the DASH scores at 6 weeks in the ORIF group compared to the nonoperative group (mean=33.8, SD=21.2 in the ORIF group vs. mean=56.5, SD=21.1 in the nonoperative group; p<0.0001). At 4 months, the DASH scores were also significantly better in the ORIF group (mean=21.6, SD=19.7 in the ORIF group vs. mean=31.6, SD=24.6 in the nonoperative group; p=0.009). However, there was no difference in DASH scores at 12-month follow-up between the groups (mean=8.8, SD=10.9 in the ORIF group vs. mean=11.0, SD=16.9 in the nonoperative group; p=0.39). Males had improved DASH scores at all timepoints compared with females. There was significantly quicker time to union (p=0.016) and improved position (p<0.001) in the ORIF group. There were 13 (15.5%) nonunions in the nonoperative group and four (4.7%) combined superficial and deep infections in the ORIF group. There were seven radial nerve palsies in the nonoperative group and five radial nerve palsies (one iatrogenic) in the ORIF group.
This large RCT comparing operative and nonoperative treatment of humeral diaphyseal fractures found significantly improved functional outcome scores in patients treated surgically at 6 weeks and 4 months. However, the early functional improvement did not persist at the 12-month follow-up. There was a 15.5% nonunion rate, which required surgical intervention, in the nonoperative group and a similar radial nerve palsy rate between groups.
The widely used Fracture Risk Assessment Tool (FRAX) estimates a 10-year probability of major osteoporotic fracture (MOF) using age, sex, body mass index, and seven clinical risk factors, including prior history of fracture. Prior fracture is a binary variable in FRAX, although it is now clear that prior fractures affect future MOF risk differently depending on their recency and site. Risk of MOF is highest in the first two years following a fracture and then progressively decreases with time; this period of elevated short-term risk is termed imminent risk. Therefore, the FRAX tool may underestimate true fracture risk and result in missed opportunities for earlier osteoporosis management in individuals with recent MOF. To address this, multipliers based on age, sex, and fracture type may be applied to baseline FRAX scores for patients with recent fractures, producing a more accurate prediction of both short- and long-term fracture risk. Adjusted FRAX estimates may enable earlier pharmacologic treatment and other risk reduction strategies. This study aimed to report the effect of multipliers on conventional FRAX scores in a clinical cohort of patients with recent non-hip fragility fractures.
After obtaining Research Ethics Board approval, FRAX scores were calculated both before and after multiplier adjustment for patients included in our outpatient Fracture Liaison Service who had experienced a non-hip fragility fracture between June 2020 and November 2021. Patients aged 50 years or older with a recent (within 3 months) forearm (radius and/or ulna) or humerus fracture were included. Patients under the age of 50 years and those with a hip fracture were excluded. Age- and sex-based FRAX multipliers for recent forearm and humerus fractures described by McCloskey et al. (2021) were used to adjust the conventional FRAX score. Low, intermediate, and high risk of MOF were defined as less than 10%, 10-20%, and greater than 20%, respectively. Data are reported as mean and standard deviation of the mean for continuous variables and as proportions for categorical variables.
A total of 91 patients with an average age of 64 years (range = 50-97) were included. The majority of patients were female (91.0%), with 73.6% sustaining forearm fractures and 26.4% sustaining humerus fractures. In the forearm group, the average MOF risk pre- and post-multiplier was 16.0% and 18.8%, respectively. Sixteen percent of patients (n = 11) in the forearm group moved from intermediate to high 10-year fracture risk after multiplier adjustment. Average FRAX scores before and after adjustment in the humerus group were 15.7% and 22.7%, respectively, with 25% (n = 6) of patients moving from an intermediate-risk to a high-risk score.
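The reclassification described above can be sketched as follows. The risk thresholds come from the abstract, but the multiplier value and helper names are illustrative placeholders, not the actual McCloskey et al. (2021) coefficients:

```python
def risk_category(mof_pct):
    """Classify 10-year major osteoporotic fracture (MOF) risk using the
    thresholds in the abstract: <10% low, 10-20% intermediate, >20% high."""
    if mof_pct < 10:
        return "low"
    if mof_pct <= 20:
        return "intermediate"
    return "high"

def adjust_frax(baseline_mof_pct, multiplier):
    """Apply a recent-fracture multiplier to a conventional FRAX MOF estimate."""
    return baseline_mof_pct * multiplier

# Mean humerus-group score from the abstract; 1.45 is a hypothetical multiplier
base = 15.7
adjusted = adjust_frax(base, 1.45)
# A patient can cross from "intermediate" to "high" risk after adjustment
reclassified = risk_category(base) != risk_category(adjusted)
```

This is the mechanism by which previously treatment-ineligible patients crossed the 20% threshold and met criteria for pharmacologic management.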
This study demonstrates the clinically significant impact of multipliers on conventional FRAX scores in patients with recent non-hip fractures. Twenty-five percent of patients with humerus fractures and 16% of patients with forearm fractures moved from intermediate to high risk of MOF after application of the multiplier. Consequently, patients who were previously ineligible for pharmacologic management now met criteria. Multiplier-adjusted FRAX scores after a recent fracture may more accurately identify patients with imminent fracture risk, facilitating earlier risk reduction interventions.
The prevalence of alcohol and opioids in severely injured patients has been widely reported to range from 30% to 80%. However, despite the increasing global misuse of stimulant drugs, there is a paucity of literature regarding the presence of stimulant drugs in trauma patients. The primary aim of this study was to define the prevalence of stimulant drugs detected in patients who presented to Level 1 Trauma Centers throughout North America, and their effect on length of stay and mortality.
Our triage criteria for admittance to the regional trauma centre are based on the recommendations of the American College of Surgeons Committee on Trauma, which now recommends toxicology screening for every patient. This was a retrospective analysis of data from the Trauma Quality Improvement Program, including all patients presenting emergently to participating Level 1 Trauma Centers from January 2017 to December 2018. A stimulant drug was defined as the detection of cocaine, amphetamine, or methamphetamine. Adults aged 18-64 years were included. The following patient risk factors were adjusted for in the analysis: age, sex, body mass index (BMI), alcohol screening results, and smoking status. Univariate analysis was performed for all variables. Multivariable logistic regression and linear regression were used for mortality and length of stay, respectively.
Of a total of 110,561 patients included in the study, 15,958 patients (14.4%) had positive screens for stimulants. The average age in the stimulants cohort was 40.8 years, with a 77.6% male preponderance, BMI of 26.9, blood alcohol content of 0.07, and ISS of 11.3. The control cohort was comparable, though 71.1% male (p<0.001). Patients who tested positive for stimulants had 1.79 times (95% CI, 1.09-2.93) the odds of dying in the emergency department as the control group (p=0.02). Following transfer from the emergency department, the odds ratio for death in hospital (OR=1.02, 95% CI 0.90-1.15) was comparable to the control group (p=0.78). The mean length of stay was significantly higher in the stimulant group (2.84 days) compared to the control group (1.79 days) (p<.001). In the Intensive Care Unit, length of stay was 0.64 days in the stimulant group versus 1.65 in the control (p=0.48).
Stimulant misuse is a relevant issue in the trauma population, associated with a longer hospital stay and higher mortality in the emergency department. Continued routine drug screening of trauma patients may be beneficial in trauma centers to implement preventative measures and optimise resource allocation.
There has been a substantial increase in the surgical treatment of unstable chest wall injuries recently. While a variety of fixation methods exist, most surgeons have used plate and screw fixation. Rib-specific locking plate systems are available, however evidence supporting their use over less-expensive, conventional plate systems (such as pelvic reconstruction plates) is lacking. We sought to address this by comparing outcomes between locking plates and non-locking plates in a cohort of patients from a prior randomized trial who received surgical stabilization of their unstable chest wall injury.
We used data from the surgical group of a previous multi-centred, prospective, randomized controlled trial comparing surgical fixation of acute, unstable chest wall injuries to non-operative management. In this substudy, our primary outcome was hardware-related complications and re-operation. Secondary outcomes included ventilator free days (VFDs) in the first 28 days following injury, length of ICU and hospital stay, and general health outcomes (SF-36 Physical Component Summary (PCS) and Mental Component Summary (MCS) scores). Categorical variables are reported as frequency counts and percentages and the two groups were compared using Fisher's Exact test. Continuous data are reported as median and interquartile range and the two groups were compared using the Wilcoxon rank-sum test.
From the original cohort of 207 patients, 108 had been treated surgically and had data available on the type of plate construct used. Fifty-nine patients (55%) had received fixation with non-locking plates (primarily 3.5 or 2.7 mm pelvic reconstruction plates) and 49 (45%) had received fixation with locking plates (primarily rib-specific locking plates). The two groups were similar in regard to baseline and injury characteristics. In the non-locking group, 15% of patients (9/59) had evidence of hardware loosening versus 4% (2/49 patients) in the locking group (p = 0.1). The rate of re-operation for hardware complications was 3% in the non-locking group versus 0% in the locking group (p = 0.5). No patients in either group required revision fixation for loss of reduction or nonunion. There were no differences between the groups with regard to VFDs (26.3 [19.6 – 28] vs. 27.3 [18.3 – 28], p = 0.83), length of ICU stay (6.5 [2.0 – 13.1] vs 4.1 [0 – 11], p = 0.12), length of hospital stay (17 [10 – 32] vs. 17 [10 – 24], p = 0.94) or SF-36 PCS (40.9 [33.6 – 51.0] vs 43.4 [34.1 – 49.6], p = 0.93) or MCS scores (47.8 [36.9 – 57.9] vs 46.9 [40.5 – 57.4], p = 0.95).
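The hardware-complication comparisons above used Fisher's exact test. As a consistency check, a minimal standard-library implementation of the two-sided test, applied to the reported hardware-loosening counts (9/59 non-locking vs. 2/49 locking), reproduces a p-value of roughly 0.1:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact p-value for the 2x2 table [[a, b], [c, d]]:
    sum the hypergeometric probabilities of all tables no more likely than
    the observed one (the convention used by common statistics packages)."""
    row1, row2, col1 = a + b, c + d, a + c
    n = row1 + row2

    def p_table(x):  # P(X = x) under the hypergeometric null
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = p_table(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    return sum(p for p in (p_table(x) for x in range(lo, hi + 1))
               if p <= p_obs * (1 + 1e-9))

# Hardware loosening: 9 of 59 (non-locking) vs. 2 of 49 (locking plates)
p = fisher_exact_two_sided(9, 50, 2, 47)   # ~0.11, consistent with p = 0.1
```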
We found no statistically significant differences in outcomes between patients who received surgical stabilization of their unstable chest wall injury when comparing non-locking plates versus locking plates. However, the rate of hardware loosening was nearly 4 times higher in the non-locking plate group and trended towards statistical significance, although re-operation related to this was less frequent. This finding is not surprising, given the inherent challenges of rib fixation including thin bones, comminution, potential osteopenia and a post-operative environment of constant motion. We believe that the increased cost of locking plate fixation in this setting is likely justifiable given these findings.
To systematically review the literature regarding post-surgical treatment regimens for ankle fractures, specifically whether there is a benefit to early weightbearing or early mobilization (within 6 weeks of surgery).
The PubMed, MEDLINE and Embase databases were searched from inception to May 24, 2020. All randomized controlled trials that analyzed the effects of early weightbearing and mobilization following ankle surgery were included. The primary outcome measure was the Olerud Molander Ankle Score (OMAS). Secondary outcomes included return to work (RTW) and complications. Logistic regression models with random intercepts were used to pool complication data by protocol clustered by study.
Twelve RCTs were included, with a total of 1177 patients (mean age 41.8 ± 8.4 years). In total, 413 patients underwent early weightbearing and early mobilization (35%), 338 patients underwent early weightbearing and delayed mobilization (29%), 287 patients underwent delayed weightbearing and early mobilization (24%), and 139 patients underwent delayed weightbearing and delayed mobilization (12%). In total, 81 patients had a complication (7%), including 53 wound complications (5%), 11 deep vein thromboses (1%), and 2 failures/nonunions (<1%). Early ankle mobilization resulted in statistically significant increases in OMAS scores compared to delayed mobilization (3 studies [222 patients], 12.65; 95% CI, 7.07-18.22; P < 0.00001, I2 = 49%). No significant differences were found between early and delayed weightbearing at a minimum of one-year follow-up (3 studies [377 patients], 1.91; 95% CI, −0.73-4.55, P = 0.16, I2 = 0%). Patients treated with early weightbearing and early mobilization were at higher odds of facing any complication (OR 3.6, 95%CI 1.05-12.1, p=0.041) or wound complications (OR 4.9, 95%CI 1.3-18.8, p=0.022) compared to those with delayed weightbearing and delayed mobilization.
Early mobilization following surgical treatment for an ankle fracture resulted in improved ankle function scores compared to delayed mobilization regimens. There were no significant differences between early and delayed weightbearing with respect to patient reported outcomes. Patients who were treated with early mobilization and early weightbearing had an increased odds of postoperative complications.
Guidelines for the use of preoperative blood tests for elective surgery have been established. However, there is little evidence and there are no guidelines regarding the use of these tests when a young, healthy patient undergoes minor orthopaedic trauma surgery. Bloodwork is often ordered routinely, regardless of medical history or the nature of the injury. We hypothesized that unnecessary blood work is requested for younger pre-operative patients, and that the results will not change peri-operative management. This practice is not a judicious use of healthcare resources. This study aimed to evaluate the frequency, type, cost, and impact on clinical decisions of standard preoperative bloodwork completed in healthy patients requiring surgical management of a minor fracture or dislocation.
After approval by our institutional ethics board, a retrospective chart review was conducted. Inclusion criteria were patients aged 18-60 years, who had an isolated minor orthopaedic trauma requiring outpatient surgery, who were American Society of Anesthesiologists (ASA) class 1. ASA class 1 is defined as “a normal healthy patient, without any clinically important comorbidity and without a clinically significant past/present medical history.” Data records from January 1, 2016, to December 31, 2018, were extracted from a provincial database (the Analytics Data Integration, Measurement and Reporting) for five hospitals. Data including demographics, surgical treatment, type and number of blood tests ordered, and ordering physician were collected. Any abnormal test results were checked to see whether they led to a change in patient management or were related to a postoperative adverse event. Independent samples t-tests and Chi-square tests were used to compare the characteristics of patients who had preoperative bloodwork versus those who did not. The cost of preoperative blood work was estimated.
During these two years, 627 patients met inclusion criteria, and 27% (n=168) of these patients had bloodwork completed pre-operatively; of these, 34% (n=57) had one or more abnormal laboratory parameters. These abnormalities were minor and did not alter clinical management or result in repeated bloodwork peri-operatively. Patients who had bloodwork were significantly older (40.2 years) compared with patients without preoperative blood work (37.8 years; p=0.03), but there was no difference in sex between those who had bloodwork (53.4% male) and those who did not (51.4% male; p=0.63). The most common blood test ordered was a complete blood count, and the most commonly abnormal result was a mildly elevated white blood cell count (19%; n=29). The most common patients to receive bloodwork were those with ankle (34%) and distal radius (34%) fractures. The bloodwork was primarily ordered by clinical associates (26%; n=46) and emergency department physicians (22%; n=38). Without considering lab personnel, consumables, and analysis time, the cost of this bloodwork was approximately $7685, an average of $45 per patient.
Pre-operative bloodwork in young, healthy, asymptomatic patients requiring outpatient surgery for minor orthopaedic trauma had no clinical significance and did not change patient management. Rigorous prospective research is warranted to establish national guidelines for appropriate pre-operative bloodwork ordering to minimize unnecessary and costly investigations.
The management of periprosthetic distal femur fractures is an issue of increasing importance for orthopaedic surgeons. Because of the expanding indications for total knee arthroplasty (TKA) and an aging population with increasingly active lifestyles, there has been a corresponding increase in the prevalence of these injuries. The management of these fractures is often complex because of issues with obtaining fixation around implants and dealing with osteopenic bone or compromised bone stock. In addition, these injuries frequently occur in frail, elderly patients, in whom the early restoration of function and ambulation is critical. There remains substantial controversy with respect to the optimal treatment of periprosthetic distal femur fractures, with some advocating for Locked Plating (LP), others Retrograde Intramedullary Nailing (RIMN), and finally those who advocate for Distal Femoral Replacement (DFR). The literature comparing these treatments has been sparse and commonly restricted to single-center studies. The purpose of this study was to retrospectively evaluate a large series of operatively treated periprosthetic distal femur fractures from multiple centers and compare treatment strategies.
Patients who were treated operatively for a periprosthetic distal femur fracture at 8 centers across North America between 2003 and 2018 were retrospectively identified. Baseline characteristics, surgical details and post-operative clinical outcomes were collected from patients meeting inclusion criteria. Inclusion criteria were patients aged 18 and older, any displaced operatively treated periprosthetic femur fracture and documented 1 year follow-up. Patients with other major lower extremity trauma or ipsilateral total hip replacement were excluded. Patients were divided into 3 groups depending on the type of fixation received: Locked Plating, Retrograde Intramedullary Nailing and Distal Femoral Replacement. Documented clinical follow-up was reviewed at 2 weeks, 3 months, 6 months and 1 year following surgery. Outcome and covariate measures were assessed using basic descriptive statistics. Categorical variables, including the rate of re-operation, were compared across the three treatment groups using Fisher Exact Test.
In total, 121 patients (male: 21% / female: 79%) from 8 centers were included in our analysis. Sixty-seven patients were treated with Locked Plating, 15 with Retrograde Intramedullary Nailing, and 39 were treated with Distal Femoral Replacement. At 1 year, 64% of LP patients showed radiographic union compared to 77% in the RIMN group (p=0.747). Between the 3 groups, we did not find any significant differences in ambulation, return to work and complication rates at 6 months and 1 year (Table 1). Reoperation rates at 1 year were 27% in the LP group (17 reoperations), 16% in the DFR group (6 reoperations) and 0% in the RIMN group. These differences were not statistically significant (p=0.058).
We evaluated a large multicenter series of operatively treated periprosthetic distal femur fractures in this study. We did not find any statistically significant differences at 1 year between treatment groups in this study. There was a trend towards a lower rate of reoperation in the Retrograde Intramedullary Nailing group that should be evaluated further with prospective studies.
For any figures or tables, please contact the authors directly.
High energy pelvic injury poses a challenging setting for the treating surgeon. Often multiple injuries are associated, which makes the measurement of short- and long-term functional outcomes a difficult task. The purpose of this study was to determine the incidence of pelvic dysfunction and late impacts of high energy pelvic ring fractures on pelvic floor function in women, with respect to urinary, sexual and musculoskeletal function. This was compared to a similar cohort of women with lower limb fractures without pelvis involvement.
The data in our study were prospectively gathered between 2010 and 2013 on 229 adult females who sustained injury between 1998 and 2012. Besides demographic and operative variables, the scores of three validated health assessment tools were tabulated: King's Health Questionnaire (KHQ), Female Sexual Function Index (FSFI) and the Short Musculoskeletal Functional Assessment (SMFA). A multivariate regression analysis was done to compare groups.
The incidence of sexual dysfunction was 80.8% in the pelvis group and 59.4% in the lower extremity group. A Wilcoxon rank-sum test showed a significantly worse KHQ score in the pelvis group (p<0.01). When adjusting for age, follow-up and Injury Severity Score this difference was not significant (p=0.28), nor were the differences in FSFI and SMFA scores. The mean FSFI scores of both groups met the criteria for female sexual dysfunction (<26). Patients with a Tile C fracture had better FSFI scores (16.98) than those with Tile B fractures (10.12; p=0.02). Logistic regression predicting an FSFI score above 26.5 showed that older age and pelvic fracture were associated with a higher likelihood of sexual dysfunction.
Sexual dysfunction after lower extremity trauma is found in patients regardless of pelvic ring involvement. Urinary function is more impaired after pelvic injuries, but more data are needed to confirm this. Older age and pelvic fracture are predictors of sexual dysfunction in women. This study is important as it could help counsel patients on the likelihood of sexual dysfunction, something that is probably under-reported and under-recognized during patient follow-up.
Direct oral anticoagulant (DOAC) use is becoming more widespread in the geriatric population. Depending on the type of DOAC, several days are required for its anticoagulant effect to dissipate, which may lead to surgical delays. This can have an important impact on hip fracture patients who require surgery. The goal of the current study is to compare surgical delays, mortality and complications for hip fracture patients who were on a DOAC with those who were not.
A retrospective cohort study was conducted at a university hospital in Sherbrooke. All hip fracture patients between 2012 and 2018 who were on a DOAC prior to their surgery were included. These patients were matched with similar patients who were not on an anticoagulant (non-DOAC) for age, sex, type of fracture and date of operation. Demographic and clinical data were collected for all patients. Surgical delay was defined as time of admission to time of surgery. Mortality and complications up to one year postoperative were also noted.
Each cohort comprised 74 patients. There were no statistically significant differences in Charlson Comorbidity Index and American Society of Anesthesiologists scores between cohorts. Surgical delay was significantly longer for DOAC patients (36.3±22.2 hours vs. 18.6±18.9 hours, p<0.001). Mortality (6.1%) and overall complication (33.8%) rates were similar between the two cohorts. However, there were more surgical reinterventions in DOAC patients than non-DOAC ones (16.2% vs. 0.0%, p<0.001). Among DOAC patients, mortality was greater for those operated after 48 hours (23.1% vs. 3.3%, p<0.05) and complications were more frequent for those operated after 24 hours (52.0% vs. 37.5%, p<0.05).
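The reported delay difference can be sanity-checked from the summary statistics alone. A sketch using Welch's t statistic (the abstract does not state which test the authors used, so this is an assumption) gives |t| ≈ 5.2, consistent with p < 0.001:

```python
import math

def welch_t(mean1, sd1, n1, mean2, sd2, n2):
    """Welch's two-sample t statistic computed from summary data only."""
    se = math.sqrt(sd1 ** 2 / n1 + sd2 ** 2 / n2)
    return (mean1 - mean2) / se

# Surgical delay in hours: DOAC 36.3 +/- 22.2 vs. non-DOAC 18.6 +/- 18.9, n = 74 each
t = welch_t(36.3, 22.2, 74, 18.6, 18.9, 74)   # ~5.2
```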
Direct oral anticoagulant (DOAC) use in hip fracture patients is associated with longer surgical delays. Longer delays to surgery are associated with higher mortality and complication rates in hip fracture patients taking a DOAC. Hip fracture patients should have their surgery performed as soon as medically possible, regardless of anticoagulant use.
It has been established that a dedicated orthopaedic trauma room (DOTR) provides significant clinical and organizational benefits to the management of trauma patients. After-hours care is associated with surgeon fatigue, a higher risk of patient complications, and increased costs related to staffing. However, hesitation due to concerns about the associated opportunity cost at the hospital leadership level is a major barrier to widespread adoption. The primary aim of this study is to determine the impact of DOTR implementation on operating room efficiency. Secondly, we sought to evaluate the associated financial impact of the DOTR, with respect to both after-hours care costs and the opportunity cost of displaced elective cases.
This was a retrospective cost-analysis study performed at a single academic-affiliated community hospital in Toronto, Canada. All patients who underwent the most frequently performed orthopedic trauma procedures (hip hemiarthroplasty; open reduction internal fixation of the ankle, femur, elbow and distal radius) over a four-year period from 2016-2019 were included. Patient data from the two years before and the two years after implementation of a DOTR were compared, adjusting for the number of cases performed. Surgical duration and the numbers of daytime and after-hours cases were recorded pre- and post-implementation. The cost savings of performing trauma cases during the daytime and the opportunity cost of displaced elective cases were calculated. A sensitivity analysis accounting for varying overtime costs and hospital elective case profit was also performed.
A total of 1960 orthopaedic cases were examined pre- and post-DOTR. All procedures had reduced total operative time post-DOTR. After accounting for the total number of each procedure performed, the mean weighted reduction was 31.4% and the mean time saved was 29.6 minutes per surgery. The number of daytime surgical hours increased 21%, while nighttime hours decreased by 37.8%. Overtime staffing costs were reduced by $24,976, alongside an increase in opportunity cost of $22,500. This resulted in a net profit of $2,476.
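The net figure quoted above follows directly from the two reported amounts; a trivial sketch (the function name is illustrative):

```python
def dotr_net_impact(overtime_savings_cad, opportunity_cost_cad):
    """Net financial impact of the DOTR: overtime staffing costs avoided
    minus profit forgone on elective cases displaced by trauma cases."""
    return overtime_savings_cad - opportunity_cost_cad

# Figures reported in the abstract, in Canadian dollars
net = dotr_net_impact(24_976, 22_500)   # -> 2476
```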
Our results support the premise that DOTRs improve operating room efficiency and can be cost efficient. Through the regular scheduling of a DOTR at a single hospital in Canada, the number of surgeries occurring during daytime hours increased while the number of after-hours cases decreased. The same surgeries were also completed nearly one-third faster (30 minutes per case) on average. Our study also specifically addresses the hesitation regarding potential loss of profit from elective surgeries. Notably, the savings partially stem from decreased OR time as well as decreased nurse overtime. Widespread implementation can improve patient care while still remaining financially favourable.
The rate of arterial injury in trauma patients with pelvic ring fractures has been cited to be as high as 15%. Addressing this source of hemorrhage is essential in the management of these patients, as mortality rates are reported to reach 50%. Percutaneous techniques to control arterial bleeding, such as embolization and REBOA, are being employed with increasing frequency due to their presumed lower morbidity and invasiveness compared with open exploration or cross-clamping of the aorta.
There are promising results with regards to the mortality benefits of angioembolization. However, there are concerns with regards to morbidity associated with embolization of the internal iliac vessels and its branches including surgical wound infection, gluteal muscle necrosis, nerve injury, bowel infarction, and thigh / buttock claudication.
The primary aim of this study is to determine whether pelvic arterial embolization is associated with surgical site infection (SSI) in trauma patients undergoing pelvic ring fixation.
This observational cohort study was conducted using US trauma registry data from the American College of Surgeons (ACS) National Trauma Database for 2018. Patients over the age of 18 who were transported by emergency health services to an ACS Level 1 or 2 trauma hospital and sustained a pelvic ring fracture treated with surgical fixation were included. Patients who were transferred between facilities, presented to the emergency department with no signs of life, presented with isolated penetrating trauma, or were pregnant were excluded from the study.
The primary study outcome was surgical site infection. Multivariable logistic regression was performed to estimate treatment effects of angioembolization of pelvic vessels on surgical site infection, adjusting for known risk factors for infection.
Study analysis included 6562 trauma patients, of whom 508 (7.7%) underwent pelvic angioembolization. Overall, 148 patients (2.2%) had a surgical site infection, with a higher risk (7.1%) in patients undergoing angioembolization (unadjusted odds ratio (OR) 4.0; 95% CI 2.7, 6.0; p<0.0001). After controlling for potential confounding, including patient demographics, vital signs on hospital arrival, open fracture, Injury Severity Score (ISS), and select patient comorbidities, pelvic angioembolization remained significantly associated with increased odds of surgical site infection (adjusted OR 2.0; 95% CI 1.3, 3.2; p=0.003).
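As an illustrative check (not part of the study's analysis), the unadjusted odds ratio and its Wald 95% confidence interval can be reproduced from 2×2 counts back-calculated from the reported percentages; the counts below (roughly 36 infections among the 508 embolized patients and 112 among the remaining 6054) are our approximation:

```python
import math

# Hypothetical 2x2 counts back-calculated from the reported percentages:
# 508 embolized patients with ~7.1% SSI; 148 total SSIs among 6562 patients.
a, b = 36, 508 - 36      # SSI / no SSI, angioembolization
c, d = 112, 6054 - 112   # SSI / no SSI, no angioembolization

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)   # Wald standard error of ln(OR)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.1f} (95% CI {lo:.1f}, {hi:.1f})")  # → OR = 4.0 (95% CI 2.7, 6.0)
```

With these approximate counts the calculation recovers the reported unadjusted OR of 4.0 (95% CI 2.7, 6.0); the adjusted OR of 2.0 requires the full multivariable model and cannot be reproduced from the abstract alone.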
This study demonstrates that trauma patients who undergo pelvic angioembolization and operative fixation of pelvic ring injuries have a higher surgical site infection risk. As the use of percutaneous hemorrhage control techniques increases, it is important to remain judicious in patient selection.
Slip and fall injuries represent a significant burden to the Canadian general public and healthcare system; the annual financial cost of these accidents in Canada is estimated to be $2 billion (2014). Interestingly, slip and fall accidents are not evenly distributed across the provinces, with the rate of hospitalization due to falls in Alberta being nearly three times greater than the rate in Ontario. Our research aim was to create the Alberta Slip and Fall Index (ASFI) – a simple scale like the UV or Air Quality index – that could be used to warn the general public about the presence of slippery conditions. The ASFI could be paired with interventions proven to prevent outdoor slips and falls, like promoting the use of ice cleats.
Eleven years (January 2008 - December 2018) of emergency room presentations to the four adult hospitals in Calgary, Alberta were filtered based on the ICD-10 diagnostic code W00 (slip and fall due to ice and snow). Multivariable dispersion-corrected Poisson regression models were used to analyze the weather conditions and time of year most predictive of slip and fall injuries. A slip and fall risk calculator (the ASFI) was designed using output from statistical modelling. To validate the ASFI we compared model predicted slip and fall risk to real presentations using retrospective weather and patient data.
The final dataset included 14,977 slip and fall incidents. The three months with the most emergency room presentations were January (n=3591), February (n=2997), and March (n=2954); each of these predicted increased slip and fall accidents (p<0.001). Same-day ice was significantly associated with more slip and fall accidents, as was the presence of ice one, two, and three days prior (p<0.001). Snow one day prior was mildly protective against slip and fall accidents, but this effect was not significant (p=0.861). Snow, ice, and time-of-year variables can be input into the ASFI calculator, which computes the likelihood of slip and fall accidents on a 0-40 point scale, with 40 indicating maximum fall risk. Upon validation of the ASFI, we generally found that days with the highest raw frequency of slip and fall accidents had higher ASFI scores. Although the ASFI can theoretically yield a score of 40, when we entered realistic weather conditions it was impossible to produce a score higher than 20.
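The fitted model coefficients are not reported in this abstract, so the sketch below only illustrates how a clamped 0-40 index of this shape might be assembled; every weight is hypothetical, chosen solely to mirror the qualitative findings (winter months and same-day or lagged ice increase risk; prior-day snow is mildly and non-significantly protective):

```python
def asfi_score(month, ice_days, snow_today, snow_yesterday):
    """Toy slip-and-fall index on a 0-40 scale. All weights are
    hypothetical; only the structure follows the abstract."""
    score = 0
    # Time of year: January-March carry the most presentations.
    score += {1: 6, 2: 5, 3: 5, 11: 2, 12: 4}.get(month, 0)
    # Ice today and on each of the three prior days
    # (ice_days = [today, 1 day prior, 2 days prior, 3 days prior]).
    score += sum(3 for present in ice_days[:4] if present)
    # Small snow effects, per the reported non-significant estimates.
    score += 1 if snow_today else 0
    score -= 1 if snow_yesterday else 0
    return max(0, min(40, score))  # clamp to the 0-40 scale

# An icy mid-January day with snow falling:
print(asfi_score(1, [True, True, True, True], snow_today=True, snow_yesterday=False))  # → 19
```

Note that under these toy weights no realistic input exceeds 19, echoing the authors' observation that realistic weather conditions could not push their calculator past 20.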
The ASFI represents a tool that can be used to help prevent slip and fall accidents due to icy and snowy conditions. As demonstrated by our inability to maximize the risk score using realistic weather conditions, the ASFI is imperfect. Despite its shortcomings, the ASFI is a preliminary step towards effectively disseminating information about the weather conditions likely to lead to falls. Ideally, a refined ASFI will help people better understand when to use protective equipment and take extra precautions outdoors. If implementing the ASFI led to even a 1% decrease in injuries caused by falls, the annual Canadian healthcare savings would be roughly $2 million.
Dual plate constructs have become an increasingly common fixation technique for midshaft clavicle fractures and typically involve the use of mini-fragment plates. The goal of this technique is to reduce plate prominence and implant irritation, as these are common reasons for revision surgery. However, limited biomechanical data exist for these lower-profile constructs. The study aim was to compare dual mini-fragment orthogonal plating to traditional small-fragment clavicle plates for biomechanical non-inferiority and to determine if an optimal plate configuration could be identified, using a cadaveric model.
Twenty-four cadaveric clavicles were randomized to one of six groups (n=4 per group), stratified by CT-based bone mineral content (BMC). The six plating configurations compared were: pre-contoured superior or anterior fixation using a single 3.5-mm LC-DC plate, and four dual-plating constructs utilizing 2.4-mm and 2.7-mm reconstruction or LC-DC plates. The clavicles were plated and then osteotomized to create an inferior butterfly fracture (OTA 15.2B), which was then fixed with a single interfragmentary screw. Axial, torsional, and bending (anterior and superior surface loading) stiffness were determined for each construct through non-destructive cyclic testing, using an MTS 858 Bionix materials testing system. This was followed by a load-to-failure test in three-point superior-surface bending. Kruskal-Wallis H and Mann-Whitney U tests were used to assess statistical significance.
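With only four specimens per group, exact nonparametric tests of this kind are the natural choice. As a minimal sketch (the stiffness values below are hypothetical, not study data), the Mann-Whitney U statistic for two groups can be computed directly:

```python
def mann_whitney_u(x, y):
    """Exact Mann-Whitney U: counts, over all (x, y) pairs, how often
    an x value exceeds a y value; ties contribute one half."""
    u = 0.0
    for xi in x:
        for yj in y:
            if xi > yj:
                u += 1.0
            elif xi == yj:
                u += 0.5
    return u

# Hypothetical axial stiffness readings (N/mm) for two n=4 constructs:
dual_plate = [310, 325, 298, 340]   # e.g. superior 2.4-mm + anterior 2.7-mm
single_35 = [240, 255, 231, 262]    # e.g. single superior 3.5-mm plate
print(mann_whitney_u(dual_plate, single_35))  # → 16.0, the maximum possible for n=m=4
```

With every value in one group exceeding every value in the other, U reaches its maximum (n×m = 16), the most extreme separation two groups of four can show; the corresponding two-sided exact p-value is about 0.029, consistent in spirit with the p=0.021-0.043 results reported.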
There were no significant differences in BMC (median 7.9 g, range 4.2-13.8 g) across the six groups (p=1.000). For axial stiffness, the two dual-plate constructs with a superior 2.4-mm and anterior 2.7-mm plate (either reconstruction or LC-DC) were significantly stiffer than the other four constructs (p=0.021). For both superior and anterior bending, the superior 2.4-mm and anterior 2.7-mm plate constructs were significantly stiffer than the 3.5-mm superior plate (p=0.043). In addition, a 3.5-mm plate placed anteriorly produced a stiffer construct than a superior 3.5-mm plate (p=0.043). No significant differences were found in torsional stiffness or load-to-failure between the constructs.
Dual plating using mini-fragment plates is biomechanically superior for fixation of midshaft clavicle fractures when compared to a single superior 3.5-mm plate, and has similar biomechanical properties to a 3.5-mm plate placed anteriorly. With the exception of axial stiffness, no significant differences were found when the dual-plating constructs were compared to each other. However, placing a 2.4-mm plate superiorly in combination with a 2.7-mm plate anteriorly might be the optimal construct, given its biomechanical superiority over the 3.5-mm plate placed superiorly.
In 2007, the National Hip Fracture Database (NHFD) was conceived in the United Kingdom (UK) as a national audit aiming to improve hip fracture care across the country. It now represents the world's largest hip fracture registry. The purpose of the NHFD is to evaluate aspects of best practice for hip fracture care, at an institutional level, that reflect the evidence-based clinical guidelines and quality standards developed by the National Institute for Health and Care Excellence. No national program equivalent to the NHFD currently exists in Canada, despite evidence suggesting that national audit programs can significantly improve patient outcomes. The purpose of this study was to evaluate aspects of best practice for hip fractures at our Canadian academic tertiary referral center using the Key Performance Indicators (KPIs) and benchmarks used by the NHFD. In doing so, we aimed to compare our performance to other hospitals contributing to the NHFD database.
A retrospective cohort study was conducted on consecutive patients who presented to our Canadian center for surgical management of a hip fracture between August 2019 and September 2020. Fracture types included intertrochanteric, subtrochanteric, and femoral neck fractures treated with either surgical fixation or arthroplasty. Cases were identified from the affiliate institute's Operatively Repaired Fractures Database (ORFD). The ORFD prospectively collects patient-level data extracted from electronic medical records, operating room information systems, and patients’ discharge summaries. All applicable data from our database were compared to the established KPIs and benchmarks published by the NHFD that apply to the Canadian healthcare system.
Six hundred and seven patients’ data (64.5% female) were extracted from the ORFD, mean age 80.4 ± 13.3 years. The NHFD contains data from 63,284 patients across the entire UK. The affiliate institute performed inferiorly compared to the NHFD for two KPIs: prompt surgery (surgery by the day following presentation with hip fracture, 52.8% vs. 69%) and prompt mobilization after surgery (mobilized out of bed by the day after operation, 43.0% vs. 81.0%). However, more patients at the affiliate institute were not delirious when tested postoperatively (89.6% vs. 68.4%). There was no significant difference in the average length of stay (12.23 days versus 13.5 days) or in 30-day mortality rate (8.4% versus 8.3%).
More than half of the KPIs and benchmarks for patients undergoing hip fracture surgery at our tertiary referral center in Canada ranked significantly below those for patients undergoing hip fracture surgery in the UK. These findings suggest that a national audit program may need to be implemented in Canada to improve aspects of hip fracture care at an institutional level. Following evidence-based clinical guidelines and using standardized benchmarks would encourage change and foster improvement across Canadian centres where necessary.
Resection of the proximal femur raises several challenges for the orthopedic oncology surgeon. Among these is re-establishment of the abductor mechanism, which can impact hip function. The extent of tumor resection and surgeon preference dictate the method of abductor reconstruction. While some surgeons advocate preserving the greater trochanter (GT) whenever possible, others attempt direct soft tissue reattachment to the prosthesis. Sparse data in the literature have evaluated the outcomes of greater trochanter fixation to a proximal femoral megaprosthesis.
This is a retrospective monocentric study. All patients who received a proximal femoral replacement after tumor resection between 2005 and 2021, with a minimum follow-up of three months, were included. Patients were divided into two groups: (1) those with a preserved GT reattached to the megaprosthesis and (2) those with direct or indirect (tenodesis to fascia lata) abductor muscle reattachment. The two groups were compared for surgical outcomes (dislocation and revision rates) and functional outcomes (Trendelenburg gait, use of walking-assistive devices, and abductor muscle strength). Additionally, patients in group 1 were subdivided into those who received GT reinsertion using a grip and cables and those who received direct GT reinsertion using suture material, and were studied for GT displacement at three, six, and 12 months. Time to cable rupture was recorded and analyzed through a survival analysis.
Fifty-six patients were included in this study, with a mean follow-up of 45 months (3-180). There were 23 patients with reinserted GT (group 1) and 33 patients with soft tissue repair (group 2). The revision rate was comparable between groups (p=0.23); however, there were more dislocations in group 2 (0/23 vs 6/33; p=0.037). Functional outcomes were comparable, with 78% of patients in group 1 (18/23) and 73% of patients in group 2 (24/33) displaying a Trendelenburg gait (p=0.76). In group 1, 70% (16/23) used walking aids, compared to 79% in group 2 (27/33) (p=0.34). Mean abductor strength reached 2.7 in group 1 compared to 2.3 in group 2 (p=0.06). In group 1, 16 of the 23 patients had GT reinsertion with a grip and cables; median cable survival in these 16 patients reached 13 months in our series. GT displacement averaged 2 mm, 3 mm, and 11 mm at three, six, and 12 months of follow-up in patients with a grip and cables, compared to 12 mm, 24 mm, and 26 mm at the same intervals in patients with stand-alone suture reinsertion of the GT (p<0.05).
Although GT preservation and reinsertion did not improve functional outcomes after proximal femur resection and reconstruction with a megaprosthesis, it was significantly associated with a lower dislocation rate despite frequent cable failure and secondary GT migration. No cable or grip revision or removal was recorded. Significantly less displacement was observed in patients for whom GT reattachment used a grip and cables rather than sutures only. We therefore suggest that the GT be preserved and reattached whenever possible, and that GT reinsertion benefits from robust fixation such as a grip and cables.
Functional outcomes are commonly reported in studies of musculoskeletal oncology patients undergoing limb salvage surgery; however, interpretation requires knowledge of the smallest amount of improvement that is important to patients – the minimally important difference (MID). We established the MIDs for the Musculoskeletal Tumor Society Rating Scale (MSTS) and Toronto Extremity Salvage Score (TESS) in patients with bone tumors undergoing lower limb salvage surgery.
This study was a secondary analysis of the recently completed PARITY (Prophylactic Antibiotic Regimens in Tumor Surgery) study. These data were used to calculate: (1) anchor-based MIDs, using an overall function scale and a receiver operating characteristic (ROC) curve analysis, and (2) distribution-based MIDs, based on one-half of the standard deviation of the change scores from baseline to 12-month follow-up, for both the MSTS and TESS.
There were 591 patients available for analysis. The Pearson correlation coefficients for the association between changes in MSTS and TESS scores and changes in the external anchor scores were 0.71 and 0.57, indicating "high" and "moderate" correlation, respectively. Anchor-based MIDs were 12 points and 11 points for the MSTS and TESS, respectively. Distribution-based calculations yielded MIDs of 16-17 points for the MSTS and 14 points for the TESS.
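The distribution-based rule used here, one-half the standard deviation of the baseline-to-12-month change scores, is straightforward to sketch; the change scores below are hypothetical values on a 0-100 instrument, not PARITY data:

```python
import statistics

def distribution_based_mid(change_scores):
    """Distribution-based MID: one-half the sample standard deviation
    of the baseline-to-follow-up change scores."""
    return 0.5 * statistics.stdev(change_scores)

# Hypothetical 12-month change scores on a 0-100 functional instrument:
changes = [18, 25, -4, 30, 12, 22, 8, 35, 15, 20]
print(round(distribution_based_mid(changes), 1))  # → 5.6
```

An observed treatment effect smaller than the MID would not be considered important to patients, which is why a defensible threshold matters when interpreting MSTS and TESS differences.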
The current study proposes MID scores for both the MSTS and TESS outcome measures based on 591 patients with bone tumors undergoing lower extremity endoprosthetic reconstruction. These thresholds will optimize interpretation of the magnitude of treatment effects, which will enable shared decision-making with patients in trading off desirable and undesirable outcomes of alternative management strategies. We recommend anchor-based MIDs as they are grounded in changes in functional status that are meaningful to patients.
Functional outcomes are important for patients with bone tumors undergoing lower extremity endoprosthetic reconstruction; however, there is limited empirical evidence evaluating function longitudinally. The objective of this study was to determine the changes in function over time in patients undergoing endoprosthetic reconstructions of the proximal femur, distal femur and proximal tibia.
We conducted a secondary analysis of functional outcome data from the Prophylactic Antibiotic Regimens in Tumor Surgery (PARITY) trial. Patient function was assessed with the Musculoskeletal Tumor Society Score 93 (MSTS) and the Toronto Extremity Salvage Score (TESS), which were administered preoperatively and at 3, 6, and 12 months postoperatively. Both instruments are scored from 0-100, with higher scores indicating greater function. Mean functional scores were evaluated over time, and we explored for differences among patients undergoing proximal femur reconstructions (PFR), distal femur reconstructions (DFR), and proximal tibia reconstructions (PTR). The importance to patients of statistically significant differences in function was evaluated using the minimally important differences (MIDs) of 12 for the MSTS and 11 for the TESS. We explored for differences in change scores between each time interval with paired t-tests. Differences based on the endoprosthetic reconstruction undertaken were evaluated by analysis of variance and post-hoc comparisons using the Tukey test.
A total of 573 patients were included. The overall mean MSTS and TESS scores were 77.1(SD±21) and 80.2(SD±20) respectively at 1-year post-surgery, demonstrating approximately a 20-point improvement from baseline for both instruments. When evaluating change scores over time by type of reconstruction, PFR patients experienced significant functional improvement during the 3-6 and 6-12 month follow-up intervals, DFR patients demonstrated significant improvements in function at each follow-up interval, and PTR patients reported a significant decrease in function from baseline to 3 months, and subsequent improvements during the 3-6 and 6-12 month intervals.
On average, patients undergoing endoprosthetic reconstruction of the lower extremity experience important improvements in function from baseline within the first year. Patterns of functional recovery varied significantly based on type of reconstruction performed. The results of this study will inform both clinicians and patients about the expected rehabilitation course and functional outcomes following endoprosthetic reconstruction of the lower extremity.
Radiation induced sarcoma of bone is a rare but challenging disease process associated with a poor prognosis. To date, series are limited by small patient numbers; data to inform prognosis and optimal management for these patients are needed. We hypothesized that patients with radiation-induced pelvic bone sarcomas would have worse surgical, oncologic, and functional outcomes than patients diagnosed with primary pelvic bone sarcomas.
This was a multi-institution, comparative cohort analysis. A retrospective chart review was performed of all patients diagnosed with a radiation-induced pelvic and sacral bone sarcoma between January 1st, 1985 and January 1st, 2020 (defined as a histologically confirmed bone sarcoma of the pelvis in a previously irradiated field with a minimum 3-year interval between radiation and sarcoma diagnosis). We also identified a comparison group including all patients diagnosed with a primary pelvic osteosarcoma/spindle cell sarcoma of bone (i.e. eligible for osteosarcoma-type chemotherapy) during the same time interval. The primary outcome measure was disease-free and overall survival.
We identified 85 patients with primary osteosarcoma of the pelvis (POP) and 39 patients with confirmed radiation induced sarcoma of the bony pelvis (RISB) undergoing surgical resection. Patients with RISB were older than patients with POP (50.5 years vs. 36.5 years). Limb salvage was performed in 67.7% of patients with POP as compared to 77% of patients with RISB; the type of surgery did not differ between groups (p=0.24). There was no difference in the rate of margin-positive surgery for RISB vs. POP (21.1% vs. 14.1%, p=0.16). For patients undergoing surgical resection, the rate of surgical complications was high, with more RISB patients experiencing complications (79.5%) than POP patients (64.7%); this approached statistical significance (p=0.09).
In the perioperative period (within 90 days of surgery), 15.4% of patients with RISB died, as compared to 3.5% of patients with POP (p=0.02). For patients undergoing surgical resection, 5-year OS was significantly worse for patients with RISB vs. POP (27.3% vs. 47.7%, p=0.02). When considering only patients without metastatic disease at presentation, a significant difference in 5-year survival remained for RISB vs. POP (28.6% vs. 50%, p=0.03). There was a trend towards poorer 5-year DFS for patients with RISB vs. POP (30% vs. 47.5%), though this did not achieve statistical significance (p=0.09).
POP and RISB represent challenging disease processes and the oncologic outcomes are similarly poor between the two; however, the disease course for patients with RISB appears to be worse overall. While surgery can result in a favorable outcome for a small subset of patients, surgical treatment is fraught with complications.
The poor prognosis of patients with soft-tissue sarcoma has not changed in the past several decades, highlighting the need for new therapeutic approaches. T-cell-based immunotherapies are a promising alternative to traditional cancer treatments because of their ability to target only malignant cells, leaving benign cells unharmed. The development of successful immunotherapy requires the identification and characterization of targetable immunogenic tumor antigens. Cancer-testis antigens (CTA) are a group of highly immunogenic tumor-associated proteins that have emerged as potential targets for CD8+ T-cell recognition. In addition to identifying a targetable antigen, it is crucial to understand the tumor immune microenvironment. The level of immune infiltration and the mechanisms of immune suppression within the tumor play important roles in the outcome of immunotherapy.
The goal of this study is to identify targetable immunogenic antigens for T-cell-based immunotherapy and to characterize the tumor immune microenvironment in human dedifferentiated liposarcoma (DDLS) by NanoString gene expression profiling and IHC.
To assess the complexity of the human DDLS tumor immune microenvironment and to identify target antigens, we used the NanoString nCounter platform to generate gene expression profiles for hundreds of genes from RNA obtained from 29 DDLS and 10 control fat FFPE samples. To classify the inflammatory status of DDLS tumors, we performed hierarchical clustering based on expression levels of selected tumor inflammatory signature genes (CCL5, CD27, CD274, CD276, CD8A, CMKLR1, CXCL9, CXCR6, HLA-DQA1, HLA-E, IDO1, LAG3, PDCD1LG2, PSMB10, STAT1, TIGIT). To confirm protein expression and distribution of identified antigens, we performed immunohistochemistry on human tissue microarrays encompassing DDLS tumor tissues and matched normal control tissue from 63 patients. IHC for the cancer-testis antigens PBK, SPA17, MAGE-A3, NY-ESO-1, and SSX2 was performed, and the staining results were scored by two authors based on maximal staining intensity on a scale of zero to three (absent=0, weak=1, moderate=2, strong=3) and the percentage of tumor cells that stained.
Hierarchical clustering of DDLS tumors based on expression of tumor inflammation signature genes revealed two distinct groups, consisting of 15 inflamed and 14 non-inflamed tumors, demonstrating tumor heterogeneity within the DDLS sarcoma subtype. All antigens were expressed in DDLS at the mRNA level. SPA17 was expressed at the highest levels in DDLS; however, it was also expressed at high levels in normal fat. Notably, the antigens PBK and TTK had the largest fold-change increase in expression in DDLS compared to normal fat controls.
Immunohistochemical analysis of the selected antigens revealed that PBK was expressed at high levels in 96% (52/54) of DDLS samples. The other antigens were absent or expressed at low levels in DDLS: MAGE-A3 in 15.87% (10/63), NY-ESO-1 in 6.35% (4/62), SSX2 in 12.7% (8/63), and SPA17 in 5.5% (3/54).
These data show considerable inter-tumoral heterogeneity of inflammation, which should be taken into consideration when designing an immunotherapy for DDLS. To date, these results show promising expression of the PBK antigen in DDLS, which may serve as a target in the future development of an immunotherapy for sarcoma.
The reconstruction of peri-acetabular defects after severe bone loss or pelvic resection for tumor is among the most challenging surgical interventions. The Lumic® prosthesis (Implantcast, Buxtehude, Germany) was first introduced in 2008 in an effort to reduce the mechanical complications encountered with classic peri-acetabular reconstruction techniques and to improve functional outcomes. Few studies have evaluated the results associated with the use of this recent implant.
A retrospective study from five Canadian orthopedic oncology centers was conducted. Every patient in whom a Lumic® endoprosthesis was used for reconstruction after peri-acetabular resection or severe bone loss, with a minimum follow-up of three months, was included. Charts were reviewed, and data concerning patient demographics, peri-operative characteristics, and post-operative complications were collected. Surgical and functional outcomes were also assessed.
Sixteen patients, 11 males and five females, were included and followed for a mean of 28 months [3-60]. Mean age was 55 [17-86], and mean BMI reached 28 [19.6-44]. Twelve patients (75%) received a Lumic® after resection of a primary sarcoma, two following pelvic metastasis, one for a benign tumor, and one after a comminuted acetabular fracture with bone loss. Twelve patients (75%) had their surgery performed in one stage, whereas four had a planned two-stage procedure. Mean surgical time was 555 minutes [173-1230] and blood loss averaged 2100 mL [500-5000]. The mean MSTS score was 60.3 preoperatively [37.1-97] and 54.3 postoperatively [17.1-88.6]. Five patients (31.3%) had a cemented Lumic® stem. All patients received the dual-mobility bearing, and 10 patients (62.5%) had the largest acetabular cup implanted (60 mm). In seven of these 10 patients the silver-coated implant was used to minimize infection risk. Five patients (31.3%) underwent capsular reconstruction using a synthetic fabric aiming to reduce the dislocation risk. Five patients (31.3%) had intra-operative complications, four minor and one serious (a comminuted iliac bone fracture requiring internal fixation). Four patients dislocated within a month post-operatively, and one additional patient sustained a dislocation one year post-operatively. Eight patients (50%) had a post-operative surgical site infection; all four patients who had a two-stage surgery had an infection. Ten patients (62.5%) needed a reoperation (two for fabric insertion, five for wash-outs, and three for implant exchange or removal). One patient (6.3%) had septic loosening three years after surgery. At the time of data collection, 13 patients (81.3%) were alive, nine free of disease. Silver coating was not found to reduce infection risk (p=0.2), and capsuloplasty did not prevent dislocation (p=1).
These results are comparable to the sparse published data. The Lumic® endoprosthesis therefore appears to provide good functional outcomes and low rates of loosening at short- to medium-term follow-up. Infection and dislocation are common complications, but we were unable to show a benefit of capsuloplasty or of silver-coated implants. Larger series and longer follow-up are needed.
Traditional staging systems for high-grade osteosarcoma (Enneking, MSTS) are based largely on gross surgical margins and were developed before the widespread use of neoadjuvant chemotherapy. It is now well known that both microscopic margins and response to chemotherapy are predictors of local recurrence. However, neither of these variables is used in the traditional surgical staging, and the precise safe margin distance is debated. Recently, a novel staging system utilizing a 2mm margin cutoff and incorporating percent necrosis was proposed and demonstrated improved prognostic value for local recurrence-free survival (LRFS) when compared to the MSTS staging system. This staging system has not been validated beyond the original patient cohort. We propose to analyze this staging system in a cohort of patients with high-grade osteosarcoma, as well as evaluate the ability of additional variables to predict the risk of local recurrence and overall survival.
A retrospective review of a prospectively collected database of all sarcoma patients treated between 1985 and 2020 at a tertiary sarcoma care center was performed. All patients with high-grade osteosarcoma receiving neoadjuvant chemotherapy and with no evidence of metastatic disease on presentation were isolated and analyzed. A minimum of two years of follow-up was required for surviving patients. A total of 225 patients met these criteria. Univariate analysis was performed to identify variables associated with LRFS. Multivariate analysis was used to further analyze factors associated with LRFS on univariate analysis.
There were 20 patients (8.9%) who had locally recurrent disease. Five-year LRFS was significantly different for patients with surgical margins of 2mm or less (77.6% vs. 93.3%; p=0.006) and for those with a central tumor location (67.9% vs. 94.4%; p<0.001). A four-tiered staging system using a 2mm surgical margin cutoff and a percent necrosis of 90% or greater was also a significant predictor of 5-year LRFS (p=0.019) in this cohort. Notably, percent necrosis in isolation was not a predictor of LRFS (p=0.875). Tumor size, gender, and type of surgery (amputation vs. limb salvage) were also analyzed and were not associated with LRFS. The MSTS surgical margin staging system did not significantly stratify groups (p=0.066).
A 2mm surgical margin cutoff was predictive of 5-year LRFS in this cohort of patients with localized high-grade osteosarcoma, and the combination of a 2mm margin and percent necrosis outperformed the prognostic value of the traditional MSTS staging system. Utilization of this system may improve the ability of surgeons to stage their patients. Additional variables may increase the value of this system, and further validation is required.
Surgical management for acute or impending pathologic fractures in metastatic bone disease (MBD) places patients at high-risk for post-operative venous thromboembolism (VTE). Due to the combination of malignancy, systemic cancer treatment, and surgical treatment, VTE-risk is increased 7-fold in patients with MBD compared to non-cancer patients undergoing the same procedure. The extent and duration of post-operative hypercoagulability in patients with MBD remains unknown and thromboprophylaxis guidelines were developed for non-cancer patients, limiting their applicability to address the elevated VTE-risk in cancer patients. Thrombelastography (TEG) analysis is a point-of-care test that measures clot formation, stabilization, and lysis in whole blood samples. The TEG parameter, maximal amplitude (MA), indicates clot strength and the threshold of ≥65 mm has been used to define hypercoagulability and predict VTE events in non-cancer patients requiring orthopaedic surgery. Therefore, this study aims to quantify the extent and duration of post-operative hypercoagulability in patients with MBD using serial TEG analysis.
Consecutive adults (≥18 years) with MBD who required orthopaedic surgery for acute or impending pathologic fractures were enrolled into this single-centre, prospective cohort study. Serial TEG analysis was performed onsite using a TEG®6s haemostasis analyzer (Haemonetics Corporation, Boston, MA) on whole blood samples collected at seven timepoints: pre-operatively; on post-operative day (POD) 1, 3, and 5; and at 2-, 6-, and 12-weeks post-operatively. Hypercoagulability was defined as MA ≥65 mm. Participants received standardized thromboprophylaxis for four weeks and patient-reported compliance with thromboprophylaxis was recorded. VTE was defined as symptomatic DVT or PE, or asymptomatic proximal DVT, and all participants underwent a screening post-operative lower extremity Doppler ultrasound on POD3. Descriptive statistics were performed and difference between pre-operative MA values of participants with VTE versus no VTE was evaluated using Student's t-test (p≤0.05).
Twenty-one participants (10 female; 47.6%) with a mean age of 70 ± 12 years were enrolled. Nine different primary cancers were identified amongst participants, with breast (23.8%), colorectal (19.0%), and lung cancer (14.3%) most frequently reported. Most participants (57.1%) were hypercoagulable pre-operatively, and nearly half remained hypercoagulable at 6- and 12-weeks post-operatively (47.1% and 46.7%, respectively). VTE occurred in 5 patients (23.8%), and mean MA was 68.1 ± 4.6 mm at the time of diagnosis. Mean pre-operative MA values were significantly higher (p=0.02) in patients who experienced VTE (68.9 ± 3.5 mm) compared to those who did not (62.7 ± 6.5 mm). VTE incidence was highest in the first week post-operatively, during which four VTE events (80%) occurred. The proportion of patients in a hypercoagulable state increased at three consecutive timepoints, beginning on POD3 (85.0%), increasing on POD5 (87.5%), and peaking at 2-weeks post-operatively (88.9%).
Current thromboprophylaxis guidelines do not consider cancer-associated risk factors that contribute to increased VTE incidence, and the prescribed duration may be inadequate to address prolonged post-operative hypercoagulability in patients with MBD. The high rate of VTE events observed and the sustained hypercoagulable state indicate that thromboprophylaxis may be terminated prematurely, while patients remain at high risk for VTE. Therefore, extending thromboprophylaxis duration beyond four weeks post-operatively in patients with MBD warrants further investigation.
Diffuse-type Tenosynovial Giant-Cell Tumour (d-TGCT) of large joints is a rare, locally aggressive, soft tissue tumour predominantly affecting the knee. Previously classified as Pigmented Villonodular Synovitis (PVNS), this monoarticular disease arises from the synovial lining and is more common in younger adults. Given the diffuse and aggressive nature of this tumour, local control is often difficult and recurrence rates are high. The current literature comprises primarily small observational studies, with a few larger but heterogeneous series. Both arthroscopic and open synovectomy techniques, or combinations thereof, have been described for the treatment of d-TGCT of the knee.
There is, however, no consensus on the best approach to minimize recurrence of d-TGCT of the knee. Some limited evidence suggests that a staged, open anterior and posterior synovectomy might be of benefit in reducing recurrence. To our knowledge, no case series has specifically looked at the recurrence rate of d-TGCT of the knee following a staged, open, posterior and anterior approach. We hypothesized that this approach may provide lower recurrence rates, as suggested by larger, more heterogeneous series.
A retrospective review of the local pathology database was performed to identify all cases of d-TGCT or PVNS of the knee treated surgically at our institution over the past 15 years. All cases were treated by a single fellowship-trained orthopaedic oncology surgeon, using a consistent, staged, open, posterior and anterior approach for synovectomy. All cases were confirmed by histopathology and followed-up with regular repeat MRI to monitor for recurrence. Medical records of these patients were reviewed to extract demographic information, as well as outcomes data, specifically recurrence rate and complications. Any adjuvant treatments or subsequent surgical interventions were noted.
Twenty-three patients with a minimum follow-up of two years were identified. Mean age at the time of treatment was 36.3 years. There were 10 females and 13 males. Mean follow-up was seven and a half years. Fourteen of 23 (60.9%) had no previous treatment; five of 23 had a previous arthroscopic synovectomy, one of 23 a previous combined anterior arthroscopic and posterior open synovectomy, and three of 23 a previous open synovectomy. Mean time between stages was 87 days (2.9 months). Seven of 23 (30.4%) patients had a recurrence. Of these, three of seven (42.9%) were treated with Imatinib, and four of seven (57.1%) were treated with repeat surgery (three of four arthroscopic and one of four open).
Recurrence rates of d-TGCT in the literature vary widely but tend to be high. In our retrospective study, a staged, open, anterior and posterior synovectomy provided recurrence rates lower than those previously reported in the literature. These findings support prior data suggesting this approach may result in lower recurrence rates for this highly recurrent, difficult-to-treat tumour.
Metastatic bone disease (MBD) is a significant contributor to diminished quality of life in cancer patients, often leading to pathologic fractures, hypercalcaemia, intractable bone pain, and reduced functional independence. Standard of care management for MBD patients undergoing orthopaedic surgery is multi-disciplinary and includes regular surgical follow-up, case-by-case assessment for the use of bone protective medications, and post-operative radiation therapy to the operative site. The number of patients in southern Alberta receiving standard of care post-operative management is currently unclear. Our aim is to develop a database of all patients in southern Alberta undergoing orthopaedic surgery for MBD and to assess for deficiencies and opportunities to ensure standard of care for this complex patient population.
Patients were identified for database inclusion by a search query of the Alberta Cancer Registry of all patients with a diagnosis of metastatic cancer who underwent surgery for an impending or pathologic fracture in the Calgary, South and Central Alberta Zones. Demographic information, primary cancer history, previous local and systemic treatments, anatomical location of MBD event(s), surgical fixation techniques, and post-operative care details were collected. The rate of standard of care post-operative treatment was evaluated. A comparison of outcomes between tertiary urban centres and rural centres was also completed. Survival was calculated from time of first operation to date of death. Univariate and multivariate analyses were performed to identify the impact of post-operative care variables on survival amongst patients surviving longer than one month.
We identified 402 patients who have undergone surgical treatment for MBD in southern Alberta from 2006-2018. Median age at time of surgery was 66.3 years and 52.7% of patients were female. Breast, lung, prostate, renal cell and multiple myeloma were the most common primary malignancies (n=328, 81.6%). Median post-operative survival was 6.8 months (95%CI: 5.7-8.3). 203 patients (52.5%) were treated with post-operative radiotherapy and 159 patients (50.8%) had post-operative surgical follow-up. Only 39 patients (11.3%) received bone protective agents in the peri-operative period. On multivariate survival analysis, post-operative surgical follow-up was associated with improved survival (p<0.001). Patients were treated at nine hospitals across southern Alberta with most patients treated in an urban center (65.9%). Post-operative survival was significantly longer amongst patients treated in an urban center (9.0 months, 95%CI: 6.9-12.3 versus 4.3 months, 95%CI: 3.4-5.6, p<0.001).
The burden of MBD is significant and increasing. With treatment occurring at multiple provincial sites, there is a need for standardized, primary disease-specific peri- and post-operative protocols to ensure quality and efficacious patient care. To provide evidence-informed treatment recommendations, we have developed a database of all patients in southern Alberta undergoing orthopaedic surgery for MBD. Our results demonstrate that many patients were not treated according to post-operative standard of care recommendations. Notably, half of the included patients did not have documented surgical follow-up, rates of post-operative radiation treatment were low, and only 11% were actively treated with bone protective agents. These data justify the need for established surgical MBD care pathways and provide reference data to benchmark prospective quality assurance (QA) and quality improvement (QI) outcomes in this patient population.
The presence of metastatic bone disease (MBD) often necessitates major orthopaedic surgery. Patients enter surgical care either through emergent or electively scheduled care pathways. Patients in a pain crisis or with an acute fracture are generally admitted via emergent care pathways, whereas patients with identified high-risk bone lesions are often booked for urgent yet scheduled elective procedures. The purpose of this study is to compare the post-operative outcomes of patients presenting through emergent versus electively scheduled care pathways within a Canadian health care system.
We have conducted a retrospective, multicenter cohort study of all patients presenting for surgery for MBD of the femur, humerus, tibia or pelvis in southern Alberta between 2006 and 2021. Patients were identified by a search query of all patients with a diagnosis of metastatic cancer who underwent surgery for an impending or actual pathologic fracture in the Calgary, South and Central Alberta Zones. Subsequent chart reviews were performed. Emergent surgeries were defined by patients admitted to hospital via urgent care mechanisms and managed via unscheduled surgical bookings (“on call list”). Elective surgeries were defined by patients seen by an orthopaedic surgeon at least once prior to surgery, and booked for a scheduled urgent, yet elective procedure. Outcomes include overall survival from the time of surgery, hospital length of stay, and 30-day hospital readmission rate.
We have identified 402 patients to date for inclusion. 273 patients (67.9%) underwent surgery through emergent pathways and 129 patients (32.1%) were treated through urgent, electively scheduled pathways. Lung, prostate, renal cell, and breast cancer were the most common primary malignancies and there was no significant difference in these primaries amongst the groups (p=0.06). Not surprisingly, emergent patients were more likely to be treated for a pathologic fracture (p<0.001) whereas elective patients were more likely to be treated for an impending fracture (p<0.001). Overall survival was significantly shorter in the emergent group (5.0 months, 95%CI: 4.0-6.1) compared to the elective group (14.9 months 95%CI: 10.4-24.6) [p<0.001]. Hospital length of stay was significantly longer in the emergent group (13 days, 95%CI: 12-16 versus 5 days, 95%CI: 5-7 days). There was a significantly greater rate of 30-day hospital readmission in the emergent group (13.3% versus 7.8%) [p=0.01].
Electively managed MBD has multiple benefits including longer post-operative survival, shorter length of hospital stay, and a lower rate of 30-day hospital readmission. These findings from a Canadian healthcare system demonstrate the clinical value of providing elective orthopaedic care, when possible, for patients with MBD. Furthermore, care delivery interventions capable of decreasing the footprint of emergent surgery, through enhanced screening or follow-up of patients with MBD, have the potential to significantly improve clinical outcomes in this population. This ongoing study will justify refinements to the current surgical care pathways for MBD in order to identify patients prior to emergent presentation. Future directions will evaluate the costs associated with each care delivery method to provide opportunities for health economic efficiencies.
Cartilage lesions span a spectrum from benign enchondromas to highly malignant dedifferentiated chondrosarcomas. From the treatment perspective, enchondromas are observed, Grade 1 chondrosarcomas are curetted like aggressive benign tumors, and the rest are resected like other sarcomas. Although biopsy for tissue diagnosis is the gold standard for diagnosis and grade determination in chondrosarcoma, tumor heterogeneity limits grading following a biopsy. In the absence of definitive pre-treatment grading, a surgeon is therefore often in a dilemma when deciding the best treatment option. Radiology has identified aggressive features, and aggressiveness scores have been used to try to grade these tumors based on imaging characteristics, but there have been very few published reports with a uniform group and a large number of cases from which to derive a consistent scoring and correlation.
The authors asked these study questions: (1) Does radiology aggressiveness and its score correlate with the grade of chondrosarcoma? (2) Can a cut-off Radiology Aggressiveness Score value be used to guide the clinician and add value to needle biopsy information in offering histological grade-dependent management?
A retrospective analysis of patients with long bone extremity intraosseous primary chondrosarcomas was performed. Nine radiological parameters, identified a priori and from the published literature (the Radiology Aggressiveness Score, RAS), were evaluated and tabulated, and correlated with the final histology grade for the operated patients. 137 patients were identified, and two were excluded for prior surgical intervention. All patients had a tissue diagnosis available, as well as pre-treatment local radiology investigations (radiographs and/or CT scans and MRI scans) to define the RAS parameters.
Spearman correlation indicated a significant positive association between RAS and final histology grading of long bone primary intraosseous chondrosarcomas. We expect that higher RAS values will provide grading information in patients whose pre-surgery biopsy is inconclusive as to tumor grade, and will aid correct grade-dependent surgical management of the lesion. Prediction of dedifferentiated chondrosarcoma from higher RAS values will be attempted, along with a correlation analysis to obtain a RAS cut-off, although this may be challenging given the overlap of features across the intermediate, high, and dedifferentiated grades.
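As an illustration of the correlation analysis described here (not the study's actual data or code), a Spearman rank correlation between RAS and histologic grade can be computed as follows; the ten patients' scores below are hypothetical:

```python
from scipy import stats

# Hypothetical illustration: RAS derived from the nine radiologic
# parameters, paired with final histologic grade (1 = low,
# 2 = intermediate, 3 = high/dedifferentiated) for ten operated patients
ras = [2, 3, 3, 4, 5, 5, 6, 7, 8, 9]
grade = [1, 1, 1, 2, 2, 2, 2, 3, 3, 3]

# Spearman correlation is rank-based, so it handles the ordinal
# grade scale and tied values appropriately
rho, p_value = stats.spearmanr(ras, grade)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.4f}")
```

A cut-off RAS value could then be explored against such data, though, as noted above, overlap between grades may limit how cleanly a single threshold separates them.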
Radiology aggressiveness correlates with histologic grade in long bone extremity primary chondrosarcomas, and combining radiology with biopsy can aid treatment planning by pointing towards either a low-grade neoplasm, which may be managed with intralesional extended curettage, or a high-grade lesion, which needs resection. Standalone RAS may not resolve the grading dilemma of primary long bone intraosseous chondrosarcomas, as the need for tissue diagnosis to confirm an atypical cartilaginous neoplasm cannot be eliminated; however, when a needle biopsy grade is available or an open biopsy is inconclusive, RAS may guide a correlational diagnosis, alongside radiology and pathology, for grade-based management of the chondrosarcoma.
Wide resection, with or without adjuvant therapy, is the mainstay of treatment for soft tissue sarcoma of the extremities. The surgical treatment of soft tissue sarcoma can portend a prolonged course of recovery from a functional perspective. However, data to inform the expected course of recovery following sarcoma surgery is lacking. The purpose of this study was to identify time to maximal functional improvement following sarcoma resection and to identify factors that delay the expected course of recovery.
A retrospective chart review was performed of all patients undergoing surgical treatment of a soft tissue sarcoma of the extremities between January 1, 1985 and November 15, 2020 with a minimum of one follow-up visit. The primary outcome measure was time to maximal functional improvement, defined either as failure to demonstrate improvement on two consecutive follow-up appointments, as measured by the Toronto Extremity Salvage Score (TESS) and Musculoskeletal Tumor Society (MSTS) score, or as achieving 90% of the maximum outcome score.
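The stopping rule defining time to maximal functional improvement can be expressed directly. The sketch below is one interpretation of the two criteria, assuming scores on a 0–100 scale (as with TESS); the visit data and function name are hypothetical, not taken from the study:

```python
def time_to_maximal_improvement(visits, max_score=100.0):
    """Return the follow-up time (months) at which maximal functional
    improvement is reached, per the two criteria described:
      1. achieving 90% of the maximum outcome score, or
      2. failure to improve on two consecutive follow-up appointments
         (maximal improvement is then dated to the last improving visit).
    `visits` is a time-ordered list of (months_postop, score) pairs.
    Returns None if neither criterion is met during follow-up.
    """
    streak = 0  # consecutive follow-ups without improvement
    for i, (months, score) in enumerate(visits):
        if score >= 0.9 * max_score:
            return months
        if i > 0:
            if score <= visits[i - 1][1]:
                streak += 1
                if streak == 2:
                    return visits[i - 2][0]  # last visit that improved
            else:
                streak = 0
    return None

# Hypothetical course: scores plateau after the 12-month visit
course = [(3, 55), (6, 68), (9, 74), (12, 78), (18, 78), (24, 77)]
print(time_to_maximal_improvement(course))  # -> 12
```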
We identified 1188 patients who underwent surgical resection of a soft tissue sarcoma of the extremities. Patients typically achieved a return to their baseline level of function by one year and achieved “maximal” functional recovery by two years postoperatively.
Patient and tumor factors associated with worse functional outcome scores and a delayed return to maximal functional improvement included older age (p=0.007), female sex (p=0.004), larger tumor size (p<0.001), deep tumor location (p<0.001), pelvic location (p<0.001), and higher tumor grade (p<0.001). Treatment factors associated with worse functional outcome scores and a delayed return to maximal functional improvement included use of radiation therapy (p<0.001), perioperative complications (p<0.001), positive margin status (p<0.001), and return of disease, locally or systemically (p<0.001).
Most patients will recover their baseline function by 1 year and achieve “maximal” recovery by 2 years’ time following surgical resection for soft tissue sarcoma of the extremities. Several patient, tumor and treatment factors should be used to counsel patients as to a delayed course of recovery.
Non-invasive sampling of tumor-derived genetic material in circulation through liquid biopsy may be very beneficial for an accurate diagnosis and evaluation of response to treatment in patients with malignant and benign soft tissue tumors. We previously showed that tumor-derived genomic aberrations can be detected in plasma of patients with leiomyosarcoma (LMS) and leiomyoma (LM). In LMS patients, we also showed that the levels of circulating tumor DNA (ctDNA) correspond with response to treatment. We developed an approach tailored to genomic profile of LMS (characterized by intermediate levels of point mutations and copy number alterations, CNAs). Based on TCGA data, we designed a panel of 89 most frequently mutated genes in LMS, which we profiled in plasma DNA by deep sequencing. In parallel, plasma samples were analyzed by shallow whole genome sequencing for detection of CNAs. With this approach, we detected ctDNA in 71% (20/28) of samples from 6/7 patients with advanced disease with >98% specificity. The combination approach for orthogonal profiling of point mutations and CNAs proved to increase the sensitivity of ctDNA detection. Currently, we seek to further improve the sensitivity of ctDNA detection by refining our capture panel and tracking LMS-specific DNA methylation markers in circulation, in addition to point mutations and CNAs. The ultimate goals of our ctDNA studies are 1) to develop a highly sensitive assay for evaluation of response to therapy and long-term surveillance for patients with LMS, and 2) to develop a blood-based test for accurate pre-operative distinction between LMS and LM.
To identify LMS-specific DNA methylation markers, we analyzed a test cohort of 76 LM, 35 uterine LMS and 31 extra-uterine LMS by Illumina Infinium EPIC arrays. We identified differentially methylated CpGs between LM and uterine LMS, and between LM and all LMS using a newly developed custom pipeline in R. The results of this analysis are currently being validated in a new dataset of 41 LM and 153 LMS generated by our group. Recently published (PMID: 34301934) genomic data from new 53 LMS samples are used to refine the panel of the most frequently mutated genes that we identified previously in the LMS TCGA data.
Our preliminary analysis of test cohort revealed >270 differentially methylated CpGs between LM and uterine LMS, and >1000 differentially methylated CpGs between LM and all LMS. The preliminary analysis of genomic data shows that the initial panel of 89 frequently mutated genes could be substantially narrowed down to cover only selected tumor suppressor genes. Once validated, these results will be used to refine the ctDNA assay for LMS and LM.
Our results point to multiple epigenetic markers that could be used for ctDNA profiling, in addition to point mutations or CNAs. Further validation will allow us to select the most reliable LMS- and LM-specific DNA methylation markers and the most frequently mutated regions across independent datasets, and these markers will be incorporated into our new ctDNA test for a concurrent detection of point mutations, CNAs and DNA methylation markers in circulation.
Undifferentiated pleomorphic sarcoma (UPS) is one of the most common and aggressive adult soft tissue sarcomas (STS). Once metastatic, UPS is rapidly fatal. Most STS, including UPS, are resistant to conventional immunotherapies as these tumours have low numbers of spontaneous tumour infiltrating lymphocytes (TILs) and are densely populated with immune suppressive macrophages. Intra-tumoural activation of the STimulator of INterferon Genes (STING) pathway is a novel immunotherapeutic strategy to recruit anti-tumour TILs into the tumour microenvironment. In a murine model of UPS, we have demonstrated that intra-tumoural injection of a murine-specific STING agonist, DMXAA, results in profound immune mediated tumour clearance. Recently, molecules capable of activating both human and mouse STING pathways have been developed. In pursuit of clinically relevant therapeutic opportunities, the purpose of this study is to evaluate the anti-tumour potential of two agonists of the human and murine STING receptors: ADU-S100 and MSA-2 as monotherapies and in combination with the immune checkpoint inhibitor, anti-PD1 in a murine model of UPS.
Immune competent mice were engrafted with murine UPS cells in the hindlimb muscle. Once palpable, mice in the monotherapy group were treated with a single intra-tumoural dose of 1) ADU-S100 or 2) MSA-2 or 3) DMXAA. In additional experimental groups, mice were treated with the different STING agonists and monoclonal anti-PD1. Tumour volume measurements and tumour bioluminescence were measured over time. To quantify dynamic changes in immune populations and in the tumour immune microenvironment, STING treated UPS tumours were evaluated using flow cytometry and mRNA quantification at various timepoints after therapy.
DMXAA monotherapy produced complete tumour eradication in 50% of mice, whereas ADU-S100 and MSA-2 monotherapies extended survival but did not result in complete tumour clearance. Flow cytometry and transcriptional profiling of tumours at multiple timepoints post-treatment showed similar inflammatory changes and increased TIL numbers across all STING agonists. The addition of anti-PD1 treatment to STING therapy significantly extended survival times with both ADU-S100 and MSA-2, and resulted in 14% complete tumour clearance with ADU-S100. No complete survivors were observed with MSA-2 and anti-PD1 combination therapy.
STING activation is a promising immunotherapeutic strategy for UPS. Recently developed human STING agonists are not as effective as DMXAA despite eliciting similar immunologic responses to treatment. STING agonism and anti-PD1 treatment were therapeutically synergistic for both human STING agonists. These results justify further research into STING activation as a therapeutic modality for STS. DMXAA may possess additional off-target therapeutic properties beyond STING activation, which warrants further investigation. Elucidating these differences may be critical to further optimizing STING therapy for human STS.
Prolonged bedrest in hospitalized patients is a major risk factor for venous thromboembolism (VTE), especially in high-risk patients with hip fracture. Thrombelastography (TEG) is a whole blood viscoelastic hemostatic assay, and there is evidence that an elevated maximal amplitude (MA), a measure of clot strength, is predictive of VTE in orthopaedic trauma patients. The objective of this study was to compare the TEG MA parameter between patients with hip fracture who were more mobile post-operatively and discharged from hospital early, and patients with hip fracture with reduced mobility and prolonged hospitalizations post-operatively.
In this prospective cohort study, TEG analysis was performed in patients with hip fracture every 24 hours from admission until post-operative day (POD) 5, then at 2- and 6-weeks post-operatively. Hypercoagulability was defined by MA > 65. Patients were divided into early (within 5 days) and late (after 5 days) discharge groups, an inpatient-at-2-weeks group, and groups discharged to MSK rehabilitation (MSK rehab) or long-term care (LTC). A two-sample t-test was used to analyze differences in MA between the early discharge and less mobile groups. All statistical tests were two-sided, and p-values < 0.05 were considered statistically significant.
In total, 121 patients with a median age of 81.0 years were included. Patients in the early discharge group (n=15) were younger (median age 64.0 years) and more likely to ambulate without gait aids pre-injury (86.7%) compared to patients in the late discharge group (n=105), inpatients at 2-weeks (n=48), and those discharged to MSK rehab (n=30) and LTC (n=20). At two weeks post-operatively, the early discharge group was significantly less hypercoagulable (MA=68.9, SD 3.0) compared to patients in the other four groups. At six weeks post-operatively, the early discharge group was the only group to demonstrate a trend towards a mean MA below the hypercoagulable threshold of 65 (MA=64.4, p=0.45). Symptomatic VTE events were detected in three patients (2.5%) post-operatively; all three had hospitalizations longer than five days after surgery.
In conclusion, our analysis of hypercoagulability secondary to reduced post-operative mobility demonstrates that patients who were able to mobilize independently sooner after hip fracture surgery have a reduced peak hypercoagulable state. In addition, there is a trend towards earlier return to normal coagulation status as determined by the TEG MA parameter. Post-operative mobility status may play a role in determining an individualized duration of thromboprophylaxis following hip fracture surgery. Future studies comparing TEG to clinically validated mobility tools may more closely evaluate the contribution of venous stasis due to reduced mobility on hypercoagulation following hip fracture surgery.
Thrombelastography (TEG) is a point-of-care tool that can measure clot formation and breakdown using a whole blood sample. We have previously used serial TEG analysis to define hypercoagulability and increased venous thromboembolism (VTE) risk following a major fracture requiring surgical treatment. Additionally, we have used serial TEG analysis to quantify the prolonged hypercoagulable state and increased VTE risk that ensue following a hip fracture. Recently developed cartridge-based platelet mapping (PLM) using TEG analysis can activate platelets at either the adenosine diphosphate (ADP) receptor or the Thromboxane A2 (AA) receptor, in order to evaluate clot strength when platelets are activated only through those specific receptors. The aim of this study was to evaluate the platelet contribution to hypercoagulability, in order to identify potential therapeutic targets for VTE prevention. We hypothesized that there would be a platelet-predominant contribution to hypercoagulability following a hip fracture.
Patients aged 50 years or older with a hip fracture treated surgically were enrolled in this prospective cohort study. Exclusion criteria were: prior history of VTE, active malignancy, or pre-injury therapeutic dose anticoagulation. Serial TEG and PLM analyses were performed at admission, on post-operative day (POD) 1, 3, 5, and 7, and at 2-, 4-, 6- and 12-weeks post-operatively. All patients received thromboprophylaxis with low molecular weight heparin (LMWH) for 28 days post-operatively. Hypercoagulability was defined as maximal amplitude (MA; a measure of clot strength) over 65 mm based on TEG analysis. Independent samples t-tests were used to compare MA values with this previously established threshold, and a mixed effects linear regression model was used to compare MA values over time. Independent samples t-tests and Chi-squared analyses were used to compare the surgical fixation and arthroplasty groups.
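For the between-group comparison named in the methods, a Chi-squared test on a contingency table is the standard construction. The counts below are hypothetical placeholders for illustration, not study data:

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: hypercoagulable (MA > 65 mm) status at a
# single follow-up timepoint, split by treatment group
#                 hyper  not hyper
table = [[20, 6],   # arthroplasty
         [12, 8]]   # surgical fixation

# Chi-squared test of independence between treatment group and
# hypercoagulability status (1 degree of freedom for a 2x2 table)
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.3f}")
```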
Forty-six patients with an acute hip fracture were included, with a mean age of 77.1 (SD = 10.6) years, with 61% (N=11) being female. Twenty-six were treated with arthroplasty (56.5%), while the remainder underwent surgical fixation of their hip fractures. TEG analysis demonstrated post-operative hypercoagulability (mean MA over 65mm) at all follow-up timepoints until 12-weeks. PLM identified a platelet-mediated hypercoagulable state based on elevated ADP-MA and AA-MA, with more pronounced platelet contribution demonstrated by the AA pathway. Patients treated with arthroplasty had significantly increased AA-MA compared with ADP-MA at POD 3 and at the 12-week follow-up.
Thrombelastography can be used to identify hypercoagulability and increased risk for VTE following a hip fracture. Platelet mapping analysis from this pilot study suggests a platelet-mediated hypercoagulable state that may benefit from thromboprophylaxis using an anti-platelet agent that specifically targets the AA platelet activation pathway, such as acetylsalicylic acid (ASA). This research also supports differences in hypercoagulability between patients treated with arthroplasty compared to those who undergo fracture fixation.
Acute Compartment Syndrome (ACS) is an orthopaedic emergency that can develop after a wide array of etiologies. In this pilot study the MY01 device was used to assess its ease of use and its ability to continuously reflect the intracompartmental pressure (ICP) and transmit this data to a mobile device in real time. This preliminary data is from the lead site which is presently expanding data collection to five other sites as part of a multi-center study.
Patients with long bone trauma of the lower or upper extremity posing a possibility of developing compartment syndrome were enrolled in the study. Informed consent was obtained from the patients. A Health Canada licensed continuous compartmental pressure monitor (MY01) was used to measure ICP. The device was inserted in the compartment deemed most likely to develop ACS, and ICP was continuously measured for up to 18 hours. Fractures were classified according to the AO/OTA classification. Patient clinical signs and pain levels were recorded by healthcare staff during routine in-patient monitoring and were compared to the ICP from the device. Relevant treatment information was extracted from the patients' charts to correlate the patients' data and symptoms.
The study was conducted from November 2020 through December 2021. Twenty-six patients were enrolled: 17 males and nine females. The mean age was 38 years (range, 17–76). Seventeen patients received the device post-operatively and nine received it pre-operatively. Preliminary results show that post-operative ICPs tend to be significantly higher than pre-operative ICPs but also tend to trend downwards quickly. The trend in this measurement appears to be more meaningful than absolute values, a notable departure from the previous literature. One patient monitored pre-operatively showed a steep upward trend with minimal clinical symptoms and required compartment release at the time of surgery; the released compartments exhibited no muscle necrosis. The trend in this patient was very steep and, as predicted, predated the clinical findings of compartment syndrome. Such a trend provides an early warning, ahead of the absolute pressure to come, in the compartment being assessed by the device.
Preliminary results suggest that this device is reliable and relatively easy to use within our institutions. They also suggest that intracompartmental pressures can be higher immediately post-operatively but fall rapidly when the patient does not develop ACS. These results are in line with the current literature on differences between pre- and post-operative baselines and thresholds of ICP, but are much more striking, as continuous measurements have not been part of the data set in most past studies.
Further elucidation of the pressure thresholds and profiles are currently being studied in the ongoing larger multicenter study and will add to our understanding of the critical values. This data, plus the added value of continuous trends in the pressure, upwards or downwards, will aid in preventing muscle necrosis during our management of these difficult long bone fractures.
Major orthopaedic fractures are an independent risk factor for venous thromboembolism (VTE), a significant cause of preventable morbidity and mortality in trauma patients. Despite thromboprophylaxis, patients who sustain a pelvic or acetabular (PA) fracture continue to have high rates of VTE (12% incidence). Thrombelastography (TEG) is a whole-blood, point-of-care test that provides an overview of the clotting process. Maximal amplitude (MA), derived from TEG analysis, measures clot strength; values ≥65mm have been used to quantify hypercoagulability and increased VTE risk. The primary aim was therefore to use serial TEG analysis to quantify the duration of hypercoagulability following surgically treated PA fractures.
This is a single-centre, prospective cohort study of adult patients (18 years or older) with surgically treated PA fractures. Consecutive patients were enrolled from a Level I trauma centre, and blood draws were taken over a 3-month follow-up period for serial TEG analysis. Hypercoagulability was defined as MA ≥65mm. Exclusion criteria were bleeding disorders, active malignancy, current therapeutic anticoagulation, burns (>20% of body surface area), and current or anticipated pregnancy within the study timeframe.
Serial TEG analysis was performed using a TEG6s hemostasis analyzer (Haemonetics Corp.) upon admission, pre-operatively, on post-operative days (POD) 1, 3, 5, and 7 (or until discharge from hospital, whichever came sooner), then in follow-up at 2, 4, and 6 weeks and 3 months post-operatively. Patients received standardized thromboprophylaxis with low molecular weight heparin for 28 days post-operatively. VTE was defined as symptomatic DVT or PE, or asymptomatic proximal DVT, and all participants underwent a screening post-operative lower extremity Doppler ultrasound on POD3. Descriptive statistics were used to determine the association between VTE events and MA values. For the primary outcome measure, the differences between serial MA measures and the MA threshold (65mm) were compared using one-sided t-tests (α=0.05).
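The one-sided t-test against the 65mm MA threshold can be sketched as below. This is a minimal illustration of the stated statistical approach, not the authors' analysis code; the MA values are hypothetical, chosen only to roughly resemble the reported POD7 mean of 71.7±3.9mm.

```python
import math
import statistics

def one_sided_t_vs_threshold(ma_values, threshold=65.0):
    """One-sample, one-sided t-test of whether mean MA exceeds `threshold`.

    Returns the t statistic and degrees of freedom; the p-value is the
    upper tail of the t distribution with that many degrees of freedom
    (e.g. via a statistics package or t-table)."""
    n = len(ma_values)
    mean = statistics.fmean(ma_values)
    sd = statistics.stdev(ma_values)  # sample standard deviation
    t = (mean - threshold) / (sd / math.sqrt(n))
    return t, n - 1

# Hypothetical POD7 MA values (mm) for illustration only
pod7_ma = [71.0, 75.6, 67.8, 70.2, 76.1, 68.9, 73.5, 70.5]
t, df = one_sided_t_vs_threshold(pod7_ma)
```

A large positive t statistic (here about 6.2 with 7 degrees of freedom) indicates that the mean MA significantly exceeds the hypercoagulability threshold at α=0.05.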
Twenty-eight patients (eight females, 29%) with a mean age of 48±18 years were included. Acetabular fractures were sustained by 13 patients (46%), pelvic fractures by 14 patients (50%), and one patient sustained both. On POD1, seven patients (25%) were hypercoagulable, with 21 patients (78%) hypercoagulable by POD3 and 17 patients (85%) by POD5. The highest average MA values (71.7±3.9mm) occurred on POD7, when eight patients (89%) were hypercoagulable. At 2 weeks post-operatively, 16 patients (94%) were hypercoagulable, and at 4 weeks, when thromboprophylaxis was discontinued, six patients (40%) remained hypercoagulable. Hypercoagulability persisted in five patients (25%) at 6 weeks and in two patients (10%) at 3 months.
There were six objectively diagnosed VTE events (21.4%), five of which were symptomatic, with a mean MA of 69.3±4.3mm at the time of diagnosis. Of the VTE events, four occurred in participants with acetabular fractures (three male, 75%) and two in those with pelvic fractures (both male).
At 4 weeks post-operatively, when thromboprophylaxis is discontinued, 40% of patients remained hypercoagulable and were likely at increased risk for VTE. At 3 months post-operatively, 10% of the cohort continued to be hypercoagulable. Serial TEG analysis warrants further study to help predict VTE risk and to inform clinical recommendations following PA fractures.