Orthopaedic Proceedings
Vol. 104-B, Issue SUPP_10 | Pages 60 - 60
1 Oct 2022
Dudareva M Corrigan R Hotchen A Muir R Sattar A Scarborough C Kumin M Atkins B Scarborough M McNally M Collins G
Full Access

Aim. Recurrence of bone and joint infection, despite appropriate therapy, is well recognised and stimulates ongoing interest in identifying host factors that predict infection recurrence. Clinical prediction models exist for those treated with DAIR, but to date no models with a low risk of bias predict orthopaedic infection recurrence for people with surgically excised infection and removed metalwork. The aims of this study were to construct and internally validate a risk prediction model for infection recurrence at 12 months, and to identify factors that predict recurrence. Predictive factors must be easy to check in pre-operative assessment and relevant across patient groups. Methods. Four prospectively collected datasets including 1173 participants treated in European centres between 2003 and 2021, followed up to 12 months after surgery for orthopaedic infections, were included in logistic regression modelling [1–3]. The definition of infection recurrence was identical and ascertained separately from baseline factors in three contributing cohorts. Eight predictive factors were investigated following a priori sample size calculation: age, gender, BMI, ASA score, the number of prior operations, immunosuppressive medication, glycosylated haemoglobin (HbA1c), and smoking. Missing data, including systematically missing predictors, were imputed using Multiple Imputation by Chained Equations. Weekly alcohol intake was not included in modelling due to low inter-observer reliability (mean reported intake 12 units per week, 95% CI for mean inter-rater error −16.0 to +15.4 units per week). Results. Participants were 64% male, with a median age of 60 years (range 18–95). 86% of participants had lower limb orthopaedic infections. 732 participants were treated for osteomyelitis, including FRI, and 432 for PJI. 16% of participants experienced treatment failure by 12 months. The full prediction model had moderate apparent discrimination: AUROC (C statistic) 0.67, Brier score 0.13, and reasonable apparent calibration. Of the predictors of interest, associations with failure were seen with prior operations at the same anatomical site (odds ratio for failure 1.51 for each additional prior surgery; 95% CI 1.02 to 2.22, p=0.06), and the current use of immunosuppressive medications (odds ratio for failure 2.94; 95% CI 0.89 to 9.77, p=0.08). Conclusions. This association between number of prior surgeries and treatment failure supports the urgent need to streamline referral pathways for people with orthopaedic infection to specialist multidisciplinary units
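
A minimal sketch of the analysis pipeline described above, chained-equation imputation followed by logistic regression, is shown below. It is illustrative only: the data are synthetic, the column names are invented, and scikit-learn's IterativeImputer stands in for the MICE procedure used in the study.

```python
# Minimal sketch: impute missing predictors, then fit a logistic model for
# 12-month recurrence. Synthetic data; variable names are illustrative only.
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 1173
df = pd.DataFrame({
    "age": rng.normal(60, 15, n),
    "bmi": rng.normal(28, 5, n),
    "asa": rng.integers(1, 5, n).astype(float),
    "prior_ops": rng.poisson(1.5, n).astype(float),
    "immunosuppressed": rng.integers(0, 2, n).astype(float),
    "hba1c": rng.normal(42, 8, n),
    "male": rng.integers(0, 2, n).astype(float),
    "smoker": rng.integers(0, 2, n).astype(float),
})
# Introduce missingness to mimic systematically missing predictors
df.loc[rng.random(n) < 0.3, "hba1c"] = np.nan
y = (rng.random(n) < 0.16).astype(int)  # ~16% recurrence, random here

model = make_pipeline(
    IterativeImputer(random_state=0),   # chained-equations-style imputation
    LogisticRegression(max_iter=1000),
)
auc = cross_val_score(model, df, y, cv=5, scoring="roc_auc")
print(f"cross-validated AUROC: {auc.mean():.2f}")
```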


Orthopaedic Proceedings
Vol. 105-B, Issue SUPP_3 | Pages 73 - 73
23 Feb 2023
Hunter S Baker J
Full Access

Acute Haematogenous Osteomyelitis (AHO) remains a cause of severe illness among children. Contemporary research aims to identify predictors of acute and chronic complications. Trends in C-reactive protein (CRP) following treatment initiation may predict disease course. We have sought to identify factors associated with acute and chronic complications in the New Zealand population. A retrospective review of all patients <16 years with presumed AHO presenting to a tertiary referral centre between 2008–2018 was performed. Multivariate analysis was used to identify factors associated with an acute or chronic complication. An "acute" complication was defined as need for two or more surgical procedures, hospital stay longer than 14 days, or recurrence despite IV antibiotics. A "chronic" complication was defined as growth or limb length discrepancy, avascular necrosis, chronic osteomyelitis, pathological fracture, frozen joint or dislocation. 151 cases met inclusion criteria. The median age was 8 years (69.5% male). Within this cohort, 53 (34%) experienced an acute complication and 18 (12%) a chronic complication. Regression analysis showed that contiguous disease, delayed presentation, and failure to reduce CRP by 50% at day 4/5 predicted an acutely complicated disease course. Chronic complication was predicted by need for surgical management and failed CRP reduction by 50% at day 4/5. We conclude that CRP trends over 96 hours following commencement of treatment differentiate patients with AHO likely to experience severe disease


Orthopaedic Proceedings
Vol. 104-B, Issue SUPP_13 | Pages 60 - 60
1 Dec 2022
Martin RK Wastvedt S Pareek A Persson A Visnes H Fenstad AM Moatshe G Wolfson J Lind M Engebretsen L
Full Access

External validation of machine learning predictive models is achieved through evaluation of model performance on different groups of patients than were used for algorithm development. This important step is uncommonly performed, inhibiting clinical translation of newly developed models. Recently, machine learning was used to develop a tool that can quantify revision risk for a patient undergoing primary anterior cruciate ligament (ACL) reconstruction (https://swastvedt.shinyapps.io/calculator_rev/). The source of data included nearly 25,000 patients with primary ACL reconstruction recorded in the Norwegian Knee Ligament Register (NKLR). The result was a well-calibrated tool capable of predicting revision risk one, two, and five years after primary ACL reconstruction with moderate accuracy. The purpose of this study was to determine the external validity of the NKLR model by assessing algorithm performance when applied to patients from the Danish Knee Ligament Registry (DKLR). The primary outcome measure of the NKLR model was probability of revision ACL reconstruction within 1, 2, and/or 5 years. For the index study, 24 total predictor variables in the NKLR were included and the models eliminated variables which did not significantly improve prediction ability - without sacrificing accuracy. The result was a well calibrated algorithm developed using the Cox Lasso model that only required five variables (out of the original 24) for outcome prediction. For this external validation study, all DKLR patients with complete data for the five variables required for NKLR prediction were included. The five variables were: graft choice, femur fixation device, Knee Injury and Osteoarthritis Outcome Score (KOOS) Quality of Life subscale score at surgery, years from injury to surgery, and age at surgery. Predicted revision probabilities were calculated for all DKLR patients. The model performance was assessed using the same metrics as the NKLR study: concordance and calibration. In total, 10,922 DKLR patients were included for analysis. Average follow-up time or time-to-revision was 8.4 (±4.3) years and overall revision rate was 6.9%. Surgical technique trends (i.e., graft choice and fixation devices) and injury characteristics (i.e., concomitant meniscus and cartilage pathology) were dissimilar between registries. The model produced similar concordance when applied to the DKLR population compared to the original NKLR test data (DKLR: 0.68; NKLR: 0.68-0.69). Calibration was poorer for the DKLR population at one and five years post primary surgery but similar to the NKLR at two years. The NKLR machine learning algorithm demonstrated similar performance when applied to patients from the DKLR, suggesting that it is valid for application outside of the initial patient population. This represents the first machine learning model for predicting revision ACL reconstruction that has been externally validated. Clinicians can use this in-clinic calculator to estimate revision risk at a patient specific level when discussing outcome expectations pre-operatively. While encouraging, it should be noted that the performance of the model on patients undergoing ACL reconstruction outside of Scandinavia remains unknown
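
The concordance assessment used for external validation can be sketched as follows. This is not the registry analysis itself: the cohort is synthetic and the predicted risk scores are random placeholders; lifelines' concordance_index computes Harrell's C statistic.

```python
# Minimal sketch of external validation by concordance: higher predicted
# revision risk should correspond to earlier observed revision.
# Synthetic data; lifelines' concordance_index gives Harrell's C.
import numpy as np
from lifelines.utils import concordance_index

rng = np.random.default_rng(1)
n = 10922                                  # size of the DKLR validation cohort
time_to_event = rng.exponential(8.4, n)    # years to revision or censoring
revised = rng.random(n) < 0.069            # ~6.9% revision rate
# Hypothetical risk scores from the previously developed model
predicted_risk = rng.random(n)

# concordance_index expects higher scores = longer survival, so negate risk
c = concordance_index(time_to_event, -predicted_risk, event_observed=revised)
print(f"external concordance (C statistic): {c:.2f}")
```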


Orthopaedic Proceedings
Vol. 104-B, Issue SUPP_13 | Pages 42 - 42
1 Dec 2022
Abbas A Toor J Lex J Finkelstein J Larouche J Whyne C Lewis S
Full Access

Single level discectomy (SLD) is one of the most commonly performed spinal surgery procedures. Two key drivers of its cost of care are duration of surgery (DOS) and postoperative length of stay (LOS). Therefore, the ability to preoperatively predict SLD DOS and LOS has substantial implications for both hospital and healthcare system finances, scheduling and resource allocation. As such, the goal of this study was to predict DOS and LOS for SLD using machine learning models (MLMs) constructed on preoperative factors using a large North American database. The American College of Surgeons (ACS) National Surgical Quality Improvement Program (NSQIP) database was queried for SLD procedures from 2014-2019. The dataset was split into a 60/20/20 ratio of training/validation/testing based on year. Various MLMs (traditional regression models, tree-based models, and multilayer perceptron neural networks) were used and evaluated according to 1) mean squared error (MSE), 2) buffer accuracy (the number of times the predicted target was within a predesignated buffer), and 3) classification accuracy (the number of times the correct class was predicted by the models). To ensure real-world applicability, the results of the models were compared to a mean regressor model. A total of 11,525 patients were included in this study. During validation, the neural network model (NNM) had the best MSEs for DOS (0.99) and LOS (0.67). During testing, the NNM had the best MSEs for DOS (0.89) and LOS (0.65). The NNM yielded the best 30-minute buffer accuracy for DOS (70.9%) and ≤120 min, >120 min classification accuracy (86.8%). The NNM had the best 1-day buffer accuracy for LOS (84.5%) and ≤2 days, >2 days classification accuracy (94.6%). All models were more accurate than the mean regressors for both DOS and LOS predictions. We successfully demonstrated that MLMs can be used to accurately predict the DOS and LOS of SLD based on preoperative factors. This big-data application has significant practical implications with respect to surgical scheduling and inpatient bed flow, as well as major implications for both private and publicly funded healthcare systems. Incorporating this artificial intelligence technique in real-time hospital operations would be enhanced by including institution-specific operational factors such as surgical team and operating room workflow
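
The three evaluation metrics described above (MSE, buffer accuracy, classification accuracy) are straightforward to implement; a small illustrative sketch follows, with the 30-minute buffer and 120-minute cutoff taken from the abstract and the example values invented.

```python
# Sketch of the evaluation metrics described above: MSE, buffer accuracy
# (prediction within a tolerance of the true value), and classification
# accuracy against a clinical cutoff. Thresholds follow the abstract.
import numpy as np

def mse(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.mean((y_true - y_pred) ** 2))

def buffer_accuracy(y_true, y_pred, buffer):
    """Fraction of predictions within +/- buffer of the true value."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.mean(np.abs(y_true - y_pred) <= buffer))

def class_accuracy(y_true, y_pred, cutoff):
    """Fraction of cases assigned to the correct side of a cutoff."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.mean((y_true > cutoff) == (y_pred > cutoff)))

# Illustrative DOS values in minutes (not NSQIP data)
actual_dos    = np.array([95, 130, 80, 150, 110])
predicted_dos = np.array([100, 118, 90, 160, 105])
print(mse(actual_dos, predicted_dos))
print(buffer_accuracy(actual_dos, predicted_dos, buffer=30))   # 30-min buffer
print(class_accuracy(actual_dos, predicted_dos, cutoff=120))   # <=120 vs >120 min
```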


Orthopaedic Proceedings
Vol. 104-B, Issue SUPP_12 | Pages 90 - 90
1 Dec 2022
Abbas A Toor J Du JT Versteeg A Yee N Finkelstein J Abouali J Nousiainen M Kreder H Hall J Whyne C Larouche J
Full Access

Excessive resident duty hours (RDH) are a recognized issue with implications for physician well-being and patient safety. A major component of the RDH concern is on-call duty. While considerable work has been done to reduce resident call workload, there is a paucity of research in optimizing resident call scheduling. Call coverage is scheduled manually rather than demand-based, which generally leads to over-scheduling to prevent a service gap. Machine learning (ML) has been widely applied in other industries to prevent such issues of a supply-demand mismatch. However, the healthcare field has been slow to adopt these innovations. As such, the aim of this study was to use ML models to 1) predict demand on orthopaedic surgery residents at a level I trauma centre and 2) identify variables key to demand prediction. Daily surgical handover emails over an eight year (2012-2019) period at a level I trauma centre were collected. The following data was used to calculate demand: spine call coverage, date, and number of operating rooms (ORs), traumas, admissions and consults completed. Various ML models (linear, tree-based and neural networks) were trained to predict the workload, with their results compared to the current scheduling approach. Quality of models was determined by using the area under the receiver operator curve (AUC) and accuracy of the predictions. The top ten most important variables were extracted from the most successful model. During training, the model with the highest AUC and accuracy was the multivariate adaptive regression splines (MARS) model, with an AUC of 0.78±0.03 and accuracy of 71.7%±3.1%. During testing, the model with the highest AUC and accuracy was the neural network model, with an AUC of 0.81 and accuracy of 73.7%. All models were better than the current approach, which had an AUC of 0.50 and accuracy of 50.1%. Key variables used by the neural network model were (descending order): spine call duty, year, weekday/weekend, month, and day of the week. This was the first study attempting to use ML to predict the service demand on orthopaedic surgery residents at a major level I trauma centre. Multiple ML models were shown to be more appropriate and accurate at predicting the demand on surgical residents as compared to the current scheduling approach. Future work should look to incorporate predictive models with optimization strategies to match scheduling with demand in order to improve resident well being and patient care
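
Ranking the variables that drive a trained demand model can be done with permutation importance, as sketched below. The data are synthetic, the feature names simply mirror those listed in the abstract, and an MLP classifier stands in for the study's neural network.

```python
# Sketch: rank predictors of daily on-call demand by permutation importance.
# Synthetic data; feature names are illustrative, not the study's dataset.
import numpy as np
import pandas as pd
from sklearn.neural_network import MLPClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 2900  # roughly eight years of daily handovers
X = pd.DataFrame({
    "spine_call": rng.integers(0, 2, n),
    "year": rng.integers(2012, 2020, n),
    "weekend": rng.integers(0, 2, n),
    "month": rng.integers(1, 13, n),
    "weekday": rng.integers(0, 7, n),
})
y = rng.integers(0, 2, n)  # high- vs low-demand day (synthetic)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0).fit(X_tr, y_tr)

imp = permutation_importance(model, X_te, y_te, scoring="roc_auc", n_repeats=10, random_state=0)
ranking = sorted(zip(X.columns, imp.importances_mean), key=lambda t: -t[1])
for name, score in ranking:
    print(f"{name:>10s}: {score:+.3f}")
```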


Orthopaedic Proceedings
Vol. 104-B, Issue SUPP_13 | Pages 97 - 97
1 Dec 2022
Burke Z Lazarides A Gundavda M Griffin A Tsoi K Ferguson P Wunder JS
Full Access

Traditional staging systems for high grade osteosarcoma (Enneking, MSTS) are based largely on gross surgical margins and were developed before the widespread use of neoadjuvant chemotherapy. It is now well known that both microscopic margins and chemotherapy are predictors of local recurrence. However, neither of these variables is used in the traditional surgical staging and the precise safe margin distance is debated. Recently, a novel staging system utilizing a 2mm margin cutoff and incorporating percent necrosis was proposed and demonstrated improved prognostic value for local recurrence free survival (LRFS) when compared to the MSTS staging system. This staging system has not been validated beyond the original patient cohort. We propose to analyze this staging system in a cohort of patients with high-grade osteosarcoma, as well as evaluate the ability of additional variables to predict the risk of local recurrence and overall survival. A retrospective review of a prospectively collected database of all sarcoma patients between 1985 and 2020 at a tertiary sarcoma care center was performed. All patients with high-grade osteosarcoma receiving neo-adjuvant chemotherapy and with no evidence of metastatic disease on presentation were isolated and analyzed. A minimum of two-year follow-up was used for surviving patients. A total of 225 patients were identified meeting these criteria. Univariate analysis was performed to evaluate variables that were associated with LRFS. Multivariate analysis was used to further analyze factors associated with LRFS on univariate analysis. There were 20 patients (8.9%) who had locally recurrent disease. Five-year LRFS was significantly different for patients with surgical margins 2mm or less (77.6% v. 93.3%; p=0.006) and those with a central tumor location (67.9% v. 94.4%; p<0.001). A four-tiered staging system using 2mm surgical margins and a percent necrosis of 90% or greater was also a significant predictor of 5-year LRFS (p=0.019) in this cohort. Notably, percent necrosis in isolation was not a predictor of LRFS in this cohort (p=0.875). Tumor size, gender, and type of surgery (amputation v. limb salvage) were also analyzed and not associated with LRFS. The MSTS surgical margin staging system did not significantly stratify groups (p=0.066). A 2mm surgical margin cutoff was predictive of 5-year LRFS in this cohort of patients with localized high-grade osteosarcoma and a combination of a 2mm margin and percent necrosis outperformed the prognostic value of the traditional MSTS staging system. Utilization of this system may improve the ability of surgeons to stage their patients. Additional variables may increase the value of this system and further validation is required
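
The local-recurrence-free survival comparison by margin status can be sketched with a Kaplan-Meier estimator and a log-rank test, for example using lifelines. The data below are synthetic and the event rates are invented; only the 2 mm grouping follows the abstract.

```python
# Sketch: 5-year local-recurrence-free survival by surgical margin (<=2 mm
# vs >2 mm) using Kaplan-Meier curves and a log-rank test. Synthetic data.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(3)
n = 225
margin_le_2mm = rng.random(n) < 0.3
time = rng.uniform(0.5, 15, n)                                 # years of follow-up
recurred = rng.random(n) < np.where(margin_le_2mm, 0.22, 0.07)  # local recurrence flag

km = KaplanMeierFitter()
for label, mask in [("margin <=2 mm", margin_le_2mm), ("margin >2 mm", ~margin_le_2mm)]:
    km.fit(time[mask], event_observed=recurred[mask], label=label)
    print(label, "5-year LRFS:", float(km.predict(5.0)))

result = logrank_test(time[margin_le_2mm], time[~margin_le_2mm],
                      event_observed_A=recurred[margin_le_2mm],
                      event_observed_B=recurred[~margin_le_2mm])
print("log-rank p-value:", result.p_value)
```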


Orthopaedic Proceedings
Vol. 104-B, Issue SUPP_12 | Pages 72 - 72
1 Dec 2022
Kendal J Fruson L Litowski M Sridharan S James M Purnell J Wong M Ludwig T Lukenchuk J Benavides B You D Flanagan T Abbott A Hewison C Davison E Heard B Morrison L Moore J Woods L Rizos J Collings L Rondeau K Schneider P
Full Access

Distal radius fractures (DRFs) are common injuries that represent 17% of all adult upper extremity fractures. Some fractures deemed appropriate for nonsurgical management following closed reduction and casting exhibit delayed secondary displacement (greater than two weeks from injury) and require late surgical intervention. This can lead to delayed rehabilitation and functional outcomes. This study aimed to determine which demographic and radiographic features can be used to predict delayed fracture displacement. This is a multicentre retrospective case-control study using radiographs extracted from our Analytics Data Integration, Measurement and Reporting (DIMR) database, using diagnostic and therapeutic codes. Skeletally mature patients aged 18 years or older with an isolated DRF treated with surgical intervention between two and four weeks from initial injury, with two or more follow-up visits prior to surgical intervention, were included. Exclusion criteria were patients with multiple injuries, surgical treatment with fewer than two clinical assessments prior to surgical treatment, or surgical treatment within two weeks of injury. The proportion of patients with delayed fracture displacement requiring surgical treatment will be reported as a percentage of all identified DRFs within the study period. A multivariable conditional logistic regression analysis was used to assess case-control comparisons, in order to determine the parameters that are most likely to predict delayed fracture displacement leading to surgical management. Intra- and inter-rater reliability for each radiographic parameter will also be calculated. A total of 84 age- and sex-matched pairs were identified (n=168) over a 5-year period, with 87% being female and a mean age of 48.9 (SD=14.5) years. Variables assessed in the model included pre-reduction and post-reduction radial height, radial inclination, radial tilt, volar cortical displacement, injury classification, intra-articular step or gap, ulnar variance, radiocarpal alignment, and cast index, as well as the difference between pre- and post-reduction parameters. Decreased pre-reduction radial inclination (Odds Ratio [OR] = 0.54; Confidence Interval [CI] = 0.43 – 0.64) and increased pre-reduction volar cortical displacement (OR = 1.31; CI = 1.10 – 1.60) were significant predictors of delayed fracture displacement beyond a minimum of 2-week follow-up. Similarly, an increased difference between pre-reduction and immediate post-reduction radial height (OR = 1.67; CI = 1.31 – 2.18) and ulnar variance (OR = 1.48; CI = 1.24 – 1.81) were also significant predictors of delayed fracture displacement. Cast immobilization is not without risks and delayed surgical treatment can result in a prolonged recovery. Therefore, if reliable and reproducible radiographic parameters can be identified that predict delayed fracture displacement, this information will aid in earlier identification of patients with DRFs at risk of late displacement. This could lead to earlier, appropriate surgical management, rehabilitation, and return to work and function
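
Matched case-control data of this kind are conventionally analysed with conditional logistic regression stratified on the matched pair. The sketch below uses statsmodels' ConditionalLogit on synthetic data; the predictor names are illustrative and the fitted coefficients are meaningless.

```python
# Sketch: conditional logistic regression on age- and sex-matched pairs,
# stratified by pair ID. Synthetic data; predictor names are illustrative.
import numpy as np
import pandas as pd
from statsmodels.discrete.conditional_models import ConditionalLogit

rng = np.random.default_rng(4)
n_pairs = 84
pair_id = np.repeat(np.arange(n_pairs), 2)
case = np.tile([1, 0], n_pairs)           # 1 = delayed displacement (case)

df = pd.DataFrame({
    "pair": pair_id,
    "case": case,
    "prereduction_inclination": rng.normal(20, 5, 2 * n_pairs),
    "volar_cortical_displacement": rng.normal(2, 1, 2 * n_pairs),
})

model = ConditionalLogit(
    df["case"],
    df[["prereduction_inclination", "volar_cortical_displacement"]],
    groups=df["pair"],
)
result = model.fit()
print(np.exp(result.params))              # odds ratios per unit change
```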


Orthopaedic Proceedings
Vol. 104-B, Issue SUPP_12 | Pages 30 - 30
1 Dec 2022
McGoldrick N Cochran M Biniam B Bhullar R Beaulé P Kim P Gofton W Grammatopoulos G
Full Access

Short cementless femoral stems are increasingly popular as they allow for less dissection for insertion. Use of such stems with the anterior approach (AA) may be associated with considerable per-operative fracture risk. This study's primary aim was to evaluate whether patient-specific femoral and pelvic morphology and surgical technique influence per-operative fracture risk. In doing so, we aimed to describe important anatomical thresholds alerting surgeons. This is a single-center, multi-surgeon retrospective, case-control matched study. Of 1145 primary THAs with a short, cementless stem inserted via the AA, 39 periprosthetic fractures (3.4%) were identified. These were matched for factors known to increase fracture risk (age, gender, BMI, side, Dorr classification, stem offset and indication for surgery) with 78 THAs that did not sustain a fracture. Radiographic analysis was performed using validated software to measure femoral (canal flare index [CFI], morphological cortical index [MCI], calcar-calcar ratio [CCR]) and pelvic (ilium-ischial ratio [IIR], ilium overhang, and ASIS to greater trochanter distance) morphologies and surgical technique (% canal fill). Multivariate and Receiver-Operator Curve (ROC) analysis was performed to identify predictors of fracture. Femoral factors that differed between groups included CFI (3.7±0.6 vs 2.9±0.4). Multivariate and ROC analysis identified the combination of CFI>3.17 and IIR>3 as the strongest predictor of fracture (OR: 29.2, 95% CI: 9.5–89.9, p<0.001). Patient-specific anatomical parameters are important predictors of fracture risk. When considering the use of short stems via the AA, careful radiographic analysis would help identify those at risk in order to consider alternative stem options
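
Deriving an anatomical cut-off from ROC analysis is commonly done by maximising Youden's J statistic, as sketched below with scikit-learn. The data are synthetic (drawn roughly from the group means above); the study's actual threshold-selection method is not described, so this approach is an assumption.

```python
# Sketch: derive a canal-flare-index (CFI) threshold that best separates
# fracture from non-fracture cases by maximising Youden's J on the ROC curve.
# Synthetic data; the ~3.17 threshold in the abstract came from real cases.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(5)
cfi_fracture = rng.normal(3.7, 0.6, 39)      # cases
cfi_control  = rng.normal(2.9, 0.4, 78)      # matched controls
cfi = np.concatenate([cfi_fracture, cfi_control])
fractured = np.concatenate([np.ones(39), np.zeros(78)])

fpr, tpr, thresholds = roc_curve(fractured, cfi)
best = np.argmax(tpr - fpr)                  # Youden's J = sensitivity + specificity - 1
print(f"AUC: {roc_auc_score(fractured, cfi):.2f}")
print(f"optimal CFI threshold: {thresholds[best]:.2f}")
```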


Orthopaedic Proceedings
Vol. 105-B, Issue SUPP_3 | Pages 118 - 118
23 Feb 2023
Zhou Y Dowsey M Spelman T Choong P Schilling C
Full Access

Approximately 20% of patients feel unsatisfied 12 months after primary total knee arthroplasty (TKA). Current predictive tools for TKA focus on the clinician as the intended user rather than the patient. The aim of this study is to develop a tool that can be used by patients without clinician assistance, to predict health-related quality of life (HRQoL) outcomes 12 months after total knee arthroplasty (TKA). All patients with primary TKAs for osteoarthritis between 2012 and 2019 at a tertiary institutional registry were analysed. The predictive outcome was improvement in Veterans-RAND 12 utility score at 12 months after surgery. Potential predictors included patient demographics, co-morbidities, and patient reported outcome scores at baseline. Logistic regression and three machine learning algorithms were used. Models were evaluated using both discrimination and calibration metrics. Predictive outcomes were categorised into deciles from 1 being the least likely to improve to 10 being the most likely to improve. 3703 eligible patients were included in the analysis. The logistic regression model performed the best in out-of-sample evaluation for both discrimination (AUC = 0.712) and calibration (gradient = 1.176, intercept = -0.116, Brier score = 0.201) metrics. Machine learning algorithms were not superior to logistic regression in any performance metric. Patients in the lowest decile (1) had a 29% probability for improvement and patients in the highest decile (10) had an 86% probability for improvement. Logistic regression outperformed machine learning algorithms in this study. The final model performed well enough with calibration metrics to accurately predict improvement after TKA using deciles. An ongoing randomised controlled trial (ACTRN12622000072718) is evaluating the effect of this tool on patient willingness for surgery. Full results of this trial are expected to be available by April 2023. A free-to-use online version of the tool is available at smartchoice.org.au.
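
The calibration and decile summaries reported above can be sketched as follows: a logistic recalibration of the outcome on the logit of the predicted probability gives the slope (gradient) and intercept, and scikit-learn provides the Brier score. The predictions and outcomes below are synthetic, not the study's model output.

```python
# Sketch: calibration slope/intercept (logistic recalibration on the logit of
# predicted probabilities), Brier score, and observed improvement rate by
# predicted decile. Synthetic predictions; not the study's model output.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import brier_score_loss

rng = np.random.default_rng(6)
n = 3703
p_pred = np.clip(rng.beta(4, 2, n), 0.01, 0.99)   # predicted P(improvement)
improved = (rng.random(n) < p_pred).astype(int)   # synthetic outcomes

logit = np.log(p_pred / (1 - p_pred)).reshape(-1, 1)
recal = LogisticRegression(C=1e6).fit(logit, improved)   # effectively unpenalized
print("calibration slope:", float(recal.coef_[0][0]))
print("calibration intercept:", float(recal.intercept_[0]))
print("Brier score:", brier_score_loss(improved, p_pred))

deciles = pd.qcut(p_pred, 10, labels=range(1, 11))
print(pd.Series(improved).groupby(deciles).mean())       # observed rate per decile
```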


Orthopaedic Proceedings
Vol. 104-B, Issue SUPP_13 | Pages 3 - 3
1 Dec 2022
Getzlaf M Sims L Sauder D
Full Access

Intraoperative range of motion (ROM) radiographs are routinely taken during scaphoidectomy and four corner fusion surgery (S4CF) at our institution. It is not known if intraoperative ROM predicts postoperative ROM. We hypothesize that patients with a greater intra-operative ROM would have an improved postoperative ROM at one year, but that this arc would be less than that achieved intra-operatively. We retrospectively reviewed 56 patients that had undergone S4CF at our institution in the past 10 years. Patients younger than 18 years, those who underwent the procedure for reasons other than arthritis, those less than one year from surgery, and those that had since undergone wrist arthrodesis were excluded. Intraoperative ROM was measured from fluoroscopic images taken in flexion and extension at the time of surgery. Patients that met criteria were then invited to take part in a virtual assessment and their ROM was measured using a goniometer. T-tests were used to measure differences between intraoperative and postoperative ROM, Pearson correlation was used to measure associations, and linear regression was conducted to assess whether intraoperative ROM predicts postoperative ROM. Nineteen patients, two of whom had bilateral surgery, agreed to participate. Mean age was 54 years; 14 patients were male and 5 were female. In the majority, the surgical indication was scapholunate advanced collapse; however, two of the participants had scaphoid nonunion advanced collapse. No difference was observed between intraoperative and postoperative flexion. On average there was an increase of seven degrees of extension and 12° of arc of motion postoperatively, with p values reaching significance. Correlations between intraoperative and postoperative ROM did not reach statistical significance for flexion, extension, or arc of motion. Intraoperative ROM radiographs are not useful at predicting postoperative ROM. Postoperative extension and arc of motion did increase from that measured intraoperatively
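
The statistical comparisons described (paired t-tests, Pearson correlation, and linear regression of postoperative on intraoperative motion) can be sketched with SciPy as below; the angles are synthetic and chosen only to mimic a roughly 12-degree mean gain in arc of motion.

```python
# Sketch: compare intraoperative vs one-year postoperative wrist ROM with a
# paired t-test, Pearson correlation, and simple linear regression.
# Synthetic angles in degrees; 21 wrists as in the study cohort.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n = 21
intraop_arc = rng.normal(60, 10, n)                 # flexion-extension arc
postop_arc = intraop_arc + rng.normal(12, 15, n)    # ~12 degrees more on average

t, p_paired = stats.ttest_rel(postop_arc, intraop_arc)
r, p_corr = stats.pearsonr(intraop_arc, postop_arc)
slope, intercept, r_lin, p_lin, se = stats.linregress(intraop_arc, postop_arc)

print(f"mean change: {np.mean(postop_arc - intraop_arc):.1f} deg (p={p_paired:.3f})")
print(f"Pearson r: {r:.2f} (p={p_corr:.3f})")
print(f"regression: postop = {slope:.2f} * intraop + {intercept:.1f}")
```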


Orthopaedic Proceedings
Vol. 104-B, Issue SUPP_12 | Pages 33 - 33
1 Dec 2022
Abbas A Lex J Toor J Mosseri J Khalil E Ravi B Whyne C
Full Access

Total knee and hip arthroplasty (TKA and THA) are two of the highest-volume and most resource-intensive surgical procedures. Key drivers of the cost of surgical care are duration of surgery (DOS) and postoperative inpatient length of stay (LOS). The ability to predict TKA and THA DOS and LOS has substantial implications for hospital finances, scheduling and resource allocation. The goal of this study was to predict DOS and LOS for elective unilateral TKAs and THAs using machine learning models (MLMs) constructed on preoperative patient factors using a large North American database. The American College of Surgeons (ACS) National Surgical Quality Improvement Program (NSQIP) database was queried for elective unilateral TKA and THA procedures from 2014-2019. The dataset was split into training, validation and testing based on year. Multiple conventional and deep MLMs such as linear models, tree-based models and multilayer perceptrons (MLPs) were constructed. The models with best performance on the validation set were evaluated on the testing set. Models were evaluated according to 1) mean squared error (MSE), 2) buffer accuracy (the number of times the predicted target was within a predesignated buffer of the actual target), and 3) classification accuracy (the number of times the correct class was predicted by the models). To ensure useful predictions, the results of the models were compared to a mean regressor. A total of 499,432 patients (TKA 302,490; THA 196,942) were included. The MLP models had the best MSEs and accuracy across both TKA and THA patients. During testing, the TKA MSEs for DOS and LOS were 0.893 and 0.688 while the THA MSEs for DOS and LOS were 0.895 and 0.691. The TKA DOS 30-minute buffer accuracy and ≤120 min, >120 min classification accuracy were 78.8% and 88.3%, while the TKA LOS 1-day buffer accuracy and ≤2 days, >2 days classification accuracy were 75.2% and 76.1%. The THA DOS 30-minute buffer accuracy and ≤120 min, >120 min classification accuracy were 81.6% and 91.4%, while the THA LOS 1-day buffer accuracy and ≤2 days, >2 days classification accuracy were 78.3% and 80.4%. All models across both TKA and THA patients were more accurate than the mean regressors for both DOS and LOS predictions across both buffer and classification accuracies. Conventional and deep MLMs have been effectively implemented to predict the DOS and LOS of elective unilateral TKA and THA patients based on preoperative patient factors using a large North American database with a high level of accuracy. Future work should include using operational factors to further refine these models and improve predictive accuracy. Results of this work will allow institutions to optimize their resource allocation, reduce costs and improve surgical scheduling. Acknowledgements. The American College of Surgeons National Surgical Quality Improvement Program and the hospitals participating in the ACS NSQIP are the source of the data used herein; they have not verified and are not responsible for the statistical validity of the data analysis or the conclusions derived by the authors
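
The comparison against a mean regressor can be made explicit with scikit-learn's DummyRegressor, as in the sketch below; the MLP stands in for the study's deep models and all data are synthetic.

```python
# Sketch: compare an MLP regressor against a mean-predicting baseline
# (DummyRegressor), as the abstract describes. Synthetic data only.
import numpy as np
from sklearn.dummy import DummyRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(8)
n = 5000
X = rng.normal(size=(n, 10))                       # preoperative factors (synthetic)
y = X @ rng.normal(size=10) + rng.normal(0, 1, n)  # surgical duration (standardised, synthetic)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
mlp = make_pipeline(StandardScaler(), MLPRegressor(hidden_layer_sizes=(64, 32),
                                                   max_iter=1000, random_state=0))
baseline = DummyRegressor(strategy="mean")

for name, model in [("MLP", mlp), ("mean regressor", baseline)]:
    model.fit(X_tr, y_tr)
    print(name, "MSE:", round(mean_squared_error(y_te, model.predict(X_te)), 3))
```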


Orthopaedic Proceedings
Vol. 104-B, Issue SUPP_13 | Pages 62 - 62
1 Dec 2022
Milligan K Rakhra K Kreviazuk C Poitras S Wilkin G Zaltz I Belzile E Stover M Smit K Sink E Clohisy J Beaulé P
Full Access

It has been reported that 60-85% of patients who undergo PAO have concomitant intraarticular pathology that cannot be addressed with PAO alone. Currently, there are limited diagnostic tools to determine which patients would benefit from hip arthroscopy at the time of PAO to address intra-articular pathology. This study aims to determine whether preoperative PROMs, measured with the IHOT-33, have predictive value for whether intra-articular pathology is addressed during PAO + scope. The secondary aim is to see how often surgeons at high-volume hip preservation centers address intra-articular pathology if a scope is performed during the same anesthesia event. A randomized, prospective multicenter trial was performed on patients who underwent PAO and hip arthroscopy to treat hip dysplasia from 2019 to 2020. Preoperative PROMs and intraoperative findings and procedures were recorded and analyzed. A total of 75 patients, 84% female and 16% male, with an average age of 27 years, were included in the study. Patients were randomized to PAO alone (34 patients) or PAO + arthroscopy (41 patients) during the same anesthesia event. The procedures performed, including types of labral procedures and chondroplasty procedures, were recorded. Additionally, a two-sided Student's t-test was used to evaluate the difference in means of preoperative IHOT score between patients for whom a labral procedure was performed and those with no labral procedure. A total of 82% of patients had an intra-articular procedure performed at the time of hip arthroscopy. 68% of patients who had PAO + arthroscopy had a labral procedure performed. The most common labral procedure was labral refixation, which was performed in 78% of patients who had a labral procedure. Femoral head-neck junction chondroplasty was performed in 51% of patients who had an intra-articular procedure. The mean IHOT score was 29.3 in patients who had a labral procedure performed and 33.63 in those who did not (p=0.24). Our findings demonstrate that preoperative IHOT-33 scores were not predictive in determining whether intra-articular labral pathology was addressed at the time of surgery. Additionally, we found that if labral pathology was addressed, labral refixation was the most common repair performed. This study also provides valuable information on what procedures high-volume hip preservation centers are performing during PAO + arthroscopy


Orthopaedic Proceedings
Vol. 106-B, Issue SUPP_8 | Pages 4 - 4
10 May 2024
Hoffman T Knudsen J Jesani S Clark H
Full Access

Introduction. Debridement, antibiotics, irrigation and implant retention (DAIR) is a common management strategy for hip and knee prosthetic joint infections (PJI). However, failure rates remain high, which has led to the development of predictive tools to help determine success. These tools include KLIC and CRIME80 for acute-postoperative (AP) and acute haematogenous (AH) PJI respectively. We investigated whether these tools were applicable to a Waikato cohort. Method. We performed a retrospective cohort study that evaluated patients who underwent DAIR between January 2010 and June 2020 at Waikato Hospital. Pre-operative KLIC and CRIME80 scores were calculated and compared with operative success. Failure was defined as: (i) need for further surgery, (ii) need for suppressive antibiotics, (iii) death due to the infection. Logistic regression models were used to calculate the area under the curve (AUC). Results. 117 eligible patients underwent DAIR, 53 in the AP cohort and 64 in the AH cohort. Failure rate at 2 years post-op was 43% in the AP cohort and 59% in the AH cohort. In the AP cohort, patients with a KLIC score of <4 had a DAIR failure rate of 28.6%, while those who scored ≥4 had a failure rate of 72.2% (p=0.002). In the AH cohort, patients with a CRIME80 score of <3 had a DAIR failure rate of 48% while those who scored ≥3 had a 100% failure rate (p<0.001). Discussion. This study represents the first external validation of the KLIC and CRIME80 scores for predicting DAIR failure in an Australasian population. The results indicate that both KLIC and CRIME80 scoring tools are valuable aids for the clinician seeking to determine the optimal management strategy in patients with AP or AH PJI
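
The threshold comparisons above reduce to 2x2 contingency tables. The sketch below reconstructs approximate counts for the KLIC comparison from the reported percentages and cohort size and applies Fisher's exact test; the test actually used in the study is not stated, so this choice is an assumption.

```python
# Sketch: 2x2 comparison of DAIR failure above vs below a KLIC cut-off of 4.
# Counts are inferred approximately from the abstract's percentages (53 AP
# patients); the study's exact statistical test is not reported.
from scipy.stats import fisher_exact

# rows: KLIC >= 4, KLIC < 4; columns: failed, succeeded
table = [[13, 5],     # ~72.2% failure in the high-score group (inferred)
         [10, 25]]    # ~28.6% failure in the low-score group (inferred)

odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio: {odds_ratio:.1f}, p = {p_value:.3f}")
```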


Orthopaedic Proceedings
Vol. 105-B, Issue SUPP_3 | Pages 43 - 43
23 Feb 2023
Bekhit P Coia M Baker J
Full Access

Several different algorithms attempt to estimate life expectancy for patients with metastatic spine disease. The Skeletal Oncology Research Group (SORG) has recently developed a nomogram to estimate survival of patients with metastatic spine disease. Whilst the use of the SORG nomogram has been validated in the international context, there has been no study to date that validates the use of the SORG nomogram in New Zealand. This study aimed to validate the use of the SORG nomogram in Aotearoa New Zealand. We collected data on 100 patients who presented to Waikato Hospital with a diagnosis of spinal metastatic disease. The SORG nomogram gave survival probabilities for each patient at each time point. Receiver Operating Characteristic (ROC) Area Under Curve (AUC) analysis was performed to assess the predictive accuracy of the SORG score. Calibration curves were also constructed and Brier scores calculated. A multivariate Cox regression analysis was performed. The SORG score was correlated with 30-day (AUC = 0.72) and 90-day mortality (AUC = 0.71). The correlation between the SORG score and 365-day mortality was weaker (AUC = 0.69). Using this method, the nomogram was correct for 79 (79%) patients at 30 days, 59 patients (59%) at 90 days, and 42 patients (42%) at 365 days. Calibration curves demonstrated poor forecasting of the SORG nomogram at 30 (Brier score = 0.65) and 365 days (Brier score = 0.33). The calibration curve demonstrated borderline forecasting of the SORG nomogram at 90 days (Brier score = 0.28). Several components of the SORG nomogram were not found to be correlated with mortality. In this New Zealand cohort the SORG nomogram demonstrated only acceptable discrimination at best in predicting 30-, 90- or 365-day mortality in patients with metastatic spinal disease
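
The multivariate Cox regression step can be sketched with lifelines' CoxPHFitter, as below. The data and the nomogram-component names are invented placeholders, not the SORG variables.

```python
# Sketch: multivariate Cox proportional hazards model relating candidate
# nomogram components to survival. Synthetic data; column names illustrative.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(9)
n = 100
df = pd.DataFrame({
    "survival_days": rng.exponential(200, n),
    "died": rng.integers(0, 2, n),
    "ecog_poor": rng.integers(0, 2, n),
    "visceral_mets": rng.integers(0, 2, n),
    "albumin": rng.normal(35, 5, n),
})

cph = CoxPHFitter()
cph.fit(df, duration_col="survival_days", event_col="died")
print(cph.summary[["exp(coef)", "p"]])   # hazard ratios and p-values
```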


Orthopaedic Proceedings
Vol. 103-B, Issue SUPP_15 | Pages 38 - 38
1 Dec 2021
Yacovelli S Goswami K Shohat N Shahi A Parvizi J
Full Access

Aim. D-dimer is a widely available serum test that detects fibrinolytic activities that occur during infection. Prior studies have explored its utility for diagnosis of chronic periprosthetic joint infections (PJI), but have not explored its prognostic value for prediction of subsequent treatment failure. The purpose of this study was to: (1) assess the ability of serum D-dimer and other standard-of-care serum biomarkers to predict failure following reimplantation, and (2) establish a new cutoff value for serum D-dimer for prognostic use prior to reimplantation. Method. This prospective study enrolled 92 patients undergoing reimplantation between April 2015 and March 2019 who had previously undergone total hip/knee resection arthroplasty with placement of an antibiotic spacer for treatment of chronic PJI. Serum D-dimer level, erythrocyte sedimentation rate (ESR), and C-reactive protein (CRP) levels were measured preoperatively for all patients. Failure following reimplantation was defined per the Delphi consensus criteria. Optimal cutoffs for D-dimer, ESR, and CRP were calculated based on ROC curves and compared in their association with failure following reimplantation at a minimum of 1-year follow-up. Results. 15/92 (16.3%) patients failed reimplantation surgery at a mean follow-up of 2.9 years (range 1.0–4.8). Optimal thresholds for D-dimer, ESR and CRP were determined to be 1300ng/mL, 30mm/hr, and 1mg/L, respectively. The failure rate in patients with a positive D-dimer was significantly higher at 32.0% (8/25) compared to 10.6% (7/66) in those with a negative D-dimer (p=0.024). In comparison, 17.8% (8/45) of patients with ESR above threshold failed, compared to 13.89% (5/41) below (p=0.555), and 16.0% (4/25) of patients with CRP above threshold failed, compared to 16.1% (10/62) below (p=1.000). Conclusions. Patients with elevated D-dimer appear to be at higher risk of failure after reimplantation surgery. This serum marker may be used to generate an additional data point in patients undergoing reimplantation surgery, especially in circumstances when optimal timing of reimplantation cannot be determined based on clinical circumstances


Orthopaedic Proceedings
Vol. 105-B, Issue SUPP_2 | Pages 74 - 74
10 Feb 2023
Genel F Pavlovic N Lewin A Mittal R Huang A Penm J Patanwala A Brady B Adie S Harris I Naylor J
Full Access

In the United States, approximately 24% of people undergoing primary total knee or total hip arthroplasty (TKA, THA) are chronic opioid users pre-operatively. Few studies have examined the incidence of opioid use prior to TKA/THA and whether it predicts outcomes post-surgery in the Australian context. The aim was to determine: (i) the proportion of TKA and THA patients who use opioids regularly (daily) pre-surgery; (ii) if opioid use pre-surgery predicts (a) complication and readmission rates to 6-months post-surgery, (b) patient-reported outcomes to 6-months post-surgery. A retrospective cohort study was undertaken utilising linked individual patient-level data from two independent databases comprising approximately 3500 people. Patients had surgery between January 2013 and June 2018, inclusive, at Fairfield and Bowral Hospitals. Following data linkage, analysis was completed on 1185 study participants (64% female, 69% TKA, mean age 67 (9.9)). 30% were using regular opioids pre-operatively. Unadjusted analyses resulted in the following rates in those who were vs were not using opioids pre-operatively (respectively): acute adverse events (39.1% vs 38.6%), acute significant adverse events (5.3% vs 5.7%), late adverse events (6.9% vs 6.6%), total significant adverse events (12.5% vs 12.4%), discharge to inpatient rehab (86.4% vs 88.6%), length of hospital stay (5.9 (3.0) vs 5.6 (3.0) days), 6-month post-op Oxford Score (38.8 (8.9) vs 39.5 (7.9)), 6-month post-op EQ-VAS (71.7 (20.2) vs 76.7 (18.2), p<0.001), success post-op described as "much better" (80.2% vs 81.3%). Adjusted regression analyses controlling for multiple co-variates indicated no significant association between pre-op opioid use and adverse events/patient-reported outcomes. Pre-operative opioid use was high amongst this Australian arthroplasty cohort and was not associated with increased risk of adverse events post-operatively. Further research is needed in assessing the relationship between the amount of pre-op opioid use and the risk of post-operative adverse events


Orthopaedic Proceedings
Vol. 104-B, Issue SUPP_12 | Pages 76 - 76
1 Dec 2022
Eltit F Ng T Gokaslan Z Fisher C Dea N Charest-Morin R
Full Access

Giant cell tumors of bone (GCTs) are locally aggressive tumors with recurrence potential that represent up to 10% of primary tumors of the bone. GCT pathogenesis is driven by neoplastic mononuclear stromal cells that overexpress receptor activator of nuclear factor kappa-B ligand (RANKL). Treatment with a specific anti-RANKL antibody (denosumab) was recently introduced, used either as a neo-adjuvant in resectable tumors or as a stand-alone treatment in unresectable tumors. While denosumab has been increasingly used, a percentage of patients do not improve after treatment. Here, we aim to determine molecular and histological patterns that would help predict GCT response to denosumab and improve personalized treatment. Nine pre-treatment biopsies of patients with spinal GCT were collected at 2 centres. In 4 patients denosumab was used as a neo-adjuvant, 3 as a stand-alone and 2 received denosumab as adjuvant treatment. Clinical data were extracted retrospectively. Total mRNA was extracted by using a formalin-fixed paraffin-embedded extraction kit and we determined the transcript profile of 730 immune-oncology related genes by using the Pan Cancer Immune Profiling panel (Nanostring). The gene expression was compared between patients with good and poor response to denosumab treatment by using the nSolver Analysis Software (Nanostring). Immunohistochemistry was performed on the tissue slides to characterize cell populations and the immune response in GCTs. Two out of 9 patients showed poor clinical response with tumor progression and metastasis. Our analysis using unsupervised hierarchical clustering determined differences in gene expression between poor responders and good responders before denosumab treatment. Poor-responding lesions are characterized by increased expression of inflammatory cytokines such as IL8, IL1, interferon α and γ, among a myriad of cytokines and chemokines (CCL25, IL5, IL26, IL25, IL13, CCL20, IL24, IL22, etc.), while good responders are characterized by elevated expression of platelet (CD31 and PECAM), coagulation (CD74, F13A1), and classical complement pathway (C1QB, C1R, C1QBP, C1S, C2) markers, together with extracellular matrix proteins (COL3A1, FN1, among others). Interestingly, the T-cell response is also different between groups. Poor-responding lesions have increased Th1 and Th2 components, but good responders have an increased Th17 component. Interestingly, the checkpoint inhibitor of the immune response PD1 (PDCD1) is increased ~10-fold in poor responders. This preliminary study using a novel experimental approach revealed differences in the immune response in GCTs associated with clinical response to denosumab. The increased activity of checkpoint inhibitor PD1 in poor responders to denosumab treatment may have implications for therapy, raising the potential to investigate immunotherapy as is currently used in other neoplasms. Further validation using a larger independent cohort will be required but these results could potentially identify the patients who would most benefit from denosumab therapy
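
The unsupervised hierarchical clustering used to separate responder groups can be sketched with SciPy, clustering samples on the correlation distance of their expression profiles; the expression matrix below is synthetic and the "signature" is injected artificially.

```python
# Sketch: unsupervised hierarchical clustering of samples by immune-gene
# expression, as used to separate poor from good responders. Synthetic data.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(10)
n_samples, n_genes = 9, 730
expr = rng.normal(size=(n_samples, n_genes))
expr[:2, :50] += 2.0   # two "poor responder" samples with an inflammatory signature

# Cluster samples on correlation distance with average linkage
dist = pdist(expr, metric="correlation")
tree = linkage(dist, method="average")
labels = fcluster(tree, t=2, criterion="maxclust")
print("cluster assignment per sample:", labels)
```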


Orthopaedic Proceedings
Vol. 102-B, Issue SUPP_6 | Pages 119 - 119
1 Jul 2020
Busse J Heels-Ansdell D Makosso-Kallyth S Petrisor B Jeray K Tufescu T Laflamme GY McKay P McCabe R Le Manach Y Bhandari M
Full Access

Persistent post-surgical pain and associated disability are common after a traumatic fracture repair. Preliminary evidence suggests that patients' beliefs and perceptions may influence their prognosis. We sought to explore this association. We used data from the Fluid Lavage of Open Wounds trial to determine, in 1560 open fracture patients undergoing surgical repair, the association between Somatic PreOccupation and Coping (captured by the SPOC questionnaire) and recovery at 1 year. Of the 1218 open fracture patients with complete data available for analysis, 813 (66.7%) reported moderate to extreme pain at 1 yr. The addition of SPOC scores to an adjusted regression model to predict persistent pain improved the concordance statistic from 0.66 to 0.74, and found the greatest risk was associated with high SPOC scores [odds ratio: 5.63, 99% confidence interval (CI): 3.59–8.84, absolute risk increase 40.6%, 99% CI: 30.8%, 48.6%]. Thirty-eight per cent (484 of 1277) reported moderate to extreme pain interference at 1 yr. The addition of SPOC scores to an adjusted regression model to predict pain interference improved the concordance statistic from 0.66 to 0.75, and the greatest risk was associated with high SPOC scores (odds ratio: 6.06, 99% CI: 3.97–9.25, absolute risk increase: 18.3%, 95% CI: 11.7%, 26.7%). In our adjusted multivariable regression models, SPOC scores at 6 weeks post-surgery accounted for 10% of the variation in short form-12 physical component summary scores and 14% of short form-12 mental component summary scores at 1 yr. Amongst patients undergoing surgical repair of open extremity fractures, high SPOC questionnaire scores at 6 weeks post-surgery were predictive of persistent pain, reduced quality of life, and pain interference at 1 yr
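
The incremental value of the SPOC score can be illustrated by comparing the discrimination (C statistic) of nested logistic models, as sketched below on synthetic data; the baseline covariates and effect sizes are invented.

```python
# Sketch: compare the concordance (AUC) of a baseline logistic model with one
# that also includes a 6-week questionnaire score. Synthetic data throughout.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(11)
n = 1218
base = rng.normal(size=(n, 5))                 # adjustment covariates (synthetic)
spoc = rng.normal(size=(n, 1))                 # questionnaire score (synthetic)
lin = 0.4 * base[:, 0] + 1.2 * spoc[:, 0]
persistent_pain = (rng.random(n) < 1 / (1 + np.exp(-lin))).astype(int)

Xb_tr, Xb_te, Xf_tr, Xf_te, y_tr, y_te = train_test_split(
    base, np.hstack([base, spoc]), persistent_pain, random_state=0)

for name, X_tr, X_te in [("adjusted model", Xb_tr, Xb_te),
                         ("adjusted + SPOC", Xf_tr, Xf_te)]:
    m = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    print(name, "C statistic:", round(roc_auc_score(y_te, m.predict_proba(X_te)[:, 1]), 2))
```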


Orthopaedic Proceedings
Vol. 102-B, Issue SUPP_1 | Pages 70 - 70
1 Feb 2020
Khasian M LaCour M Dessinger G Meccia B Komistek R
Full Access

Introduction. Forward solution joint models (FSMs) can be powerful tools, leading to fast and cost-efficient simulation revealing in vivo mechanics that can be used to predict implant longevity. Unlike most joint analysis methods, mathematical modeling allows for nearly instantaneous evaluations, yielding more rapid surgical technique and implant design iterations as well as earlier insight into the follow-up outcomes used to better assess potential success. The current knee FSM has been developed to analyze both the kinematics and kinetics of commercial TKA designs as well as novel implant designs. Objective. The objective of this study was to use the knee FSM to predict the condylar translations and axial rotation of both fixed- and mobile-bearing TKA designs during a deep knee bend activity and to compare these kinematics to known fluoroscopy evaluations. Methods. The knee joint is modeled mathematically using Kane's dynamics, incorporating muscle controllers to predict the muscle forces, contact detection algorithms to compute the knee joint forces, and nonlinear ligaments at the knee joint. The tibiofemoral kinematics data for 20 subjects implanted with fixed-bearing (FB) PS TKA and 20 subjects implanted with mobile-bearing (MB) PS TKA were collected using fluoroscopy during a deep knee bend (DKB) activity from full extension to 120° of flexion. All subjects were implanted by the same surgeon. The CAD models of these implants were incorporated in the FSM to predict the tibiofemoral kinematics. The average component placement from the fluoroscopy data was used as an initial condition for the placement of the component in the mathematical model. Results. Overall, fluoroscopy results showed patients experienced 6.8 mm and 6.4 mm posterior rollback of the lateral femoral condyle for FB and MB PS TKA groups, respectively. The FSM predicted 5.9 mm and 6.3 mm of lateral posterior rollback for FB and MB PS TKA models, respectively (Figure 1). On average, the medial condyle translated posteriorly −2.9 mm and −2.5 mm for FB and MB subjects, respectively. The mathematical model prediction for FB and MB models was −1.4 mm and −2.4 mm, respectively (Figure 2). The overall axial rotation was 5.1° and 4.5° for FB and MB subjects from fluoroscopy, respectively. The axial rotation prediction using the FSM was 6.0° and 4.2° for FB and MB models, respectively (Figure 3). Conclusion. Overall, it is clear that the FSM can accurately predict both the patterns and magnitudes of fixed- and mobile-bearing TKA condylar translations and axial rotations, showing consistent rollback of the lateral condyle, less translation of the medial condyle, and consistent axial rotation throughout flexion, all of which were also observed in the fluoroscopy data. The correlation between the theoretically predicted and experimentally confirmed kinematic patterns demonstrates the viability of forward solution modeling as a valuable and accurate method to evaluate total joint replacement mechanics. For any figures or tables, please contact the authors directly


Orthopaedic Proceedings
Vol. 102-B, Issue SUPP_1 | Pages 76 - 76
1 Feb 2020
Roche C Simovitch R Flurin P Wright T Zuckerman J Routman H
Full Access

Introduction. Machine learning is a relatively novel method in orthopaedics that can be used to evaluate complex associations and patterns in outcomes and healthcare data. The purpose of this study is to utilize 3 different supervised machine learning algorithms to evaluate outcomes from a multi-center international database of a single shoulder prosthesis and to assess the accuracy of each model in predicting post-operative outcomes of both aTSA and rTSA. Methods. Data from a multi-center international database consisting of 6485 patients who received primary total shoulder arthroplasty using a single shoulder prosthesis (Equinoxe, Exactech, Inc) were analyzed from 19,796 patient visits in this study. Specifically, demographic, comorbidity, implant type and implant size, surgical technique, pre-operative PROMs and ROM measures, post-operative PROMs and ROM measures, pre-operative and post-operative radiographic data, and also adverse event and complication data were obtained for 2367 primary aTSA patients from 8042 visits at an average follow-up of 22 months and for 4118 primary rTSA patients from 11,754 visits at an average follow-up of 16 months, and were analyzed to create a predictive model using 3 different supervised machine learning techniques: 1) linear regression, 2) random forest, and 3) XGBoost. Each of these 3 different machine learning techniques evaluated the pre-operative parameters and created a predictive model which targeted the post-operative composite score, which was a 100-point score consisting of 50% post-operative composite outcome score (calculated from 33.3% ASES + 33.3% UCLA + 33.3% Constant) and 50% post-operative composite ROM score (calculated from S curves weighted by 70% active forward flexion + 15% internal rotation score + 15% active external rotation). Three additional predictive models were created to control for the time required for patient improvement after surgery; to do this, each primary aTSA and primary rTSA cohort was subdivided to include only follow-up visits >20 months after surgery, yielding 1317 primary aTSA patients from 2962 visits at an average follow-up of 50 months and 1593 primary rTSA patients from 3144 visits at an average follow-up of 42 months. Each of these 6 predictive models was trained using a random selection of 80% of each cohort; each model then predicted the outcomes of the remaining 20% of the data based upon the demographic, comorbidity, implant type and implant size, surgical technique, pre-operative PROMs and ROM measure inputs of that 20% cohort. The error of all 6 predictive models was calculated from the root mean square error (RMSE) between the actual and predicted post-op composite score. The accuracy of each model was determined by subtracting the percent difference of each RMSE value from the average composite score associated with each cohort. Results. For all patient visits, the XGBoost decision tree algorithm was the most accurate model for both aTSA & rTSA patients, with an accuracy of ∼89.5% for both aTSA and rTSA. However, for patients with 20+ month visits only, the random forest decision tree algorithm was the most accurate model for both aTSA & rTSA patients, with an accuracy of ∼89.5% for both aTSA and rTSA. The linear regression model was the least accurate predictive model for each of the cohorts analyzed. However, it should be noted that all 3 machine learning models provided accuracy of ∼85% or better and an RMSE <12. (Table 1) Figures 1 and 2 depict the typical spread and RMSE of the actual vs. predicted total composite score associated with the 3 models for aTSA (Figure 1) and rTSA (Figure 2). Discussion. The results of this study demonstrate that multiple different machine learning algorithms can be utilized to create models that predict outcomes with higher accuracy for both aTSA and rTSA, for numerous timepoints after surgery. Future research should test this model on different datasets and use different machine learning methods in order to reduce over- and under-fitting model errors. For any figures or tables, please contact the authors directly
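
The workflow described above (an 80/20 split, regression models, RMSE, and accuracy expressed as 100% minus RMSE as a percentage of the mean composite score) can be sketched as follows. The data are synthetic; scikit-learn's random forest and linear regression are shown, and xgboost.XGBRegressor could be evaluated the same way.

```python
# Sketch: train linear and random-forest regressors on an 80/20 split and
# report RMSE plus the abstract's accuracy metric (100% minus RMSE as a
# percentage of the mean composite score). Synthetic data only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(12)
n = 8042
X = rng.normal(size=(n, 20))                                       # preoperative inputs (synthetic)
composite = 70 + X @ rng.normal(0, 2, 20) + rng.normal(0, 8, n)    # 100-point composite score

X_tr, X_te, y_tr, y_te = train_test_split(X, composite, test_size=0.2, random_state=0)

for name, model in [("linear regression", LinearRegression()),
                    ("random forest", RandomForestRegressor(n_estimators=200, random_state=0))]:
    model.fit(X_tr, y_tr)
    rmse = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
    accuracy = 100 * (1 - rmse / np.mean(y_te))
    print(f"{name}: RMSE={rmse:.1f}, accuracy={accuracy:.1f}%")
```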