Bone & Joint Research
Vol. 12, Issue 9 | Pages 512 - 521
1 Sep 2023
Langenberger B, Schrednitzki D, Halder AM, Busse R, Pross CM

Aims

A substantial fraction of patients undergoing knee arthroplasty (KA) or hip arthroplasty (HA) do not achieve an improvement as large as the minimal clinically important difference (MCID), i.e. they do not achieve a meaningful improvement. Using three patient-reported outcome measures (PROMs), our aims were: 1) to assess the performance of machine learning (ML), the simple pre-surgery PROM score, and logistic regression (LR) in predicting whether patients undergoing HA or KA achieve an improvement equal to or greater than a calculated MCID; and 2) to test whether ML is able to outperform LR or pre-surgery PROM scores in predictive performance.

Methods

MCIDs were derived using the change difference method in a sample of 1,843 HA and 1,546 KA patients. An artificial neural network, a gradient boosting machine, least absolute shrinkage and selection operator (LASSO) regression, ridge regression, elastic net, random forest, LR, and pre-surgery PROM scores were applied to predict MCID achievement for the following PROMs: EuroQol five-dimension, five-level questionnaire (EQ-5D-5L), EQ visual analogue scale (EQ-VAS), Hip disability and Osteoarthritis Outcome Score-Physical Function Short-form (HOOS-PS), and Knee injury and Osteoarthritis Outcome Score-Physical Function Short-form (KOOS-PS).

Results

Predictive performance of the best models per outcome ranged from 0.71 for HOOS-PS to 0.84 for EQ-VAS (HA sample). ML statistically significantly outperformed LR and pre-surgery PROM scores in two out of six cases.

Conclusion

MCIDs can be predicted with reasonable performance. ML was able to outperform traditional methods, although only in a minority of cases.

Cite this article: Bone Joint Res 2023;12(9):512–521
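As a rough illustration of the prediction task described above, the Python sketch below (scikit-learn on synthetic data; the feature set, MCID threshold, and model settings are assumptions, not the study's actual pipeline) labels each patient by whether the pre-to-post PROM change reaches an assumed MCID, then compares a pre-surgery-score-only baseline against logistic regression, random forest, and gradient boosting by area under the ROC curve.

    # Hypothetical sketch of the MCID-prediction task (synthetic data,
    # not the study's pipeline): label patients by whether their PROM
    # change reaches an assumed MCID, then compare models against a
    # pre-surgery-score-only baseline using ROC AUC.
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 2000
    pre_score = rng.uniform(0, 100, n)          # pre-surgery PROM score
    age = rng.normal(68, 9, n)
    bmi = rng.normal(29, 5, n)
    post_score = pre_score + rng.normal(15, 12, n) - 0.1 * (pre_score - 50)
    MCID = 10.0                                 # assumed threshold, for illustration only
    y = ((post_score - pre_score) >= MCID).astype(int)

    X = np.column_stack([pre_score, age, bmi])
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    models = {
        "logistic regression": LogisticRegression(max_iter=1000),
        "random forest": RandomForestClassifier(n_estimators=300, random_state=0),
        "gradient boosting": GradientBoostingClassifier(random_state=0),
    }
    # Baseline: rank patients by pre-surgery score alone (lower pre-surgery
    # score leaves more room for improvement, hence the negative sign).
    print(f"pre-surgery score only: AUC = {roc_auc_score(y_te, -X_te[:, 0]):.3f}")
    for name, model in models.items():
        model.fit(X_tr, y_tr)
        auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
        print(f"{name}: AUC = {auc:.3f}")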


The Bone & Joint Journal
Vol. 104-B, Issue 9 | Pages 1060 - 1066
1 Sep 2022
Jin X, Gallego Luxan B, Hanly M, Pratt NL, Harris I, de Steiger R, Graves SE, Jorm L

Aims

The aim of this study was to estimate the 90-day periprosthetic joint infection (PJI) rates following total knee arthroplasty (TKA) and total hip arthroplasty (THA) for osteoarthritis (OA).

Methods

This was a data linkage study using the New South Wales (NSW) Admitted Patient Data Collection (APDC) and the Australian Orthopaedic Association National Joint Replacement Registry (AOANJRR), which collect data from all public and private hospitals in NSW, Australia. Patients who underwent a TKA or THA for OA between 1 January 2002 and 31 December 2017 were included. The main outcome measures were 90-day incidence rates of hospital readmission for: revision arthroplasty for PJI as recorded in the AOANJRR; conservative definition of PJI, defined by T84.5, the PJI diagnosis code in the APDC; and extended definition of PJI, defined by the presence of either T84.5, or combinations of diagnosis and procedure code groups derived from recursive binary partitioning in the APDC.
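As a minimal illustration of the conservative definition only, the pandas sketch below (hypothetical table layout and column names, not the APDC/AOANJRR schema, and omitting the registry-based and extended definitions) counts patients with any readmission within 90 days of the index arthroplasty coded T84.5 and expresses this as an incidence proportion.

    # Hypothetical sketch of counting 90-day PJI readmissions under the
    # conservative definition (diagnosis code T84.5) from linked records.
    # Column names and the toy data are assumptions, not the real schema.
    import pandas as pd

    index_ops = pd.DataFrame({
        "patient_id": [1, 2, 3],
        "procedure": ["TKA", "THA", "TKA"],
        "op_date": pd.to_datetime(["2016-03-01", "2016-05-10", "2017-01-20"]),
    })
    readmissions = pd.DataFrame({
        "patient_id": [1, 2, 3],
        "admit_date": pd.to_datetime(["2016-04-15", "2016-11-01", "2017-02-05"]),
        "diagnosis_code": ["T84.5", "S72.0", "T84.5"],
    })

    linked = readmissions.merge(index_ops, on="patient_id")
    days = (linked["admit_date"] - linked["op_date"]).dt.days
    is_pji_90d = (days >= 0) & (days <= 90) & (linked["diagnosis_code"] == "T84.5")

    pji_patients = linked.loc[is_pji_90d, "patient_id"].nunique()
    rate = pji_patients / index_ops["patient_id"].nunique()
    print(f"90-day PJI rate (conservative definition): {rate:.1%}")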


Bone & Joint Open
Vol. 5, Issue 2 | Pages 139 - 146
15 Feb 2024
Wright BM, Bodnar MS, Moore AD, Maseda MC, Kucharik MP, Diaz CC, Schmidt CM, Mir HR

Aims

While internet search engines have been the primary information source for patients’ questions, artificial intelligence large language models like ChatGPT are trending towards becoming the new primary source. The purpose of this study was to determine whether ChatGPT can answer patient questions about total hip arthroplasty (THA) and total knee arthroplasty (TKA) with consistent accuracy, comprehensiveness, and easy readability.

Methods

We posed the 20 most Google-searched questions about THA and TKA, plus ten additional postoperative questions, to ChatGPT. Each question was asked twice to evaluate consistency of quality. After each response, we replied, “Please explain so it is easier to understand,” to evaluate ChatGPT’s ability to reduce the reading grade level of its responses, measured as the Flesch-Kincaid Grade Level (FKGL). Five resident physicians rated the 120 responses on five-point scales for accuracy and comprehensiveness. Additionally, they answered a “yes” or “no” question regarding acceptability. Mean scores were calculated for each question, and responses were deemed acceptable if at least four of the five raters answered “yes.”
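For reference, the Flesch-Kincaid Grade Level is a standard readability formula: FKGL = 0.39 × (words per sentence) + 11.8 × (syllables per word) − 15.59. The Python sketch below computes it with a naive vowel-group syllable counter; this is an approximation for illustration, not the tooling used in the study.

    # Minimal sketch of the Flesch-Kincaid Grade Level calculation:
    # 0.39 * words/sentence + 11.8 * syllables/word - 15.59.
    # The syllable counter is a rough vowel-group heuristic.
    import re

    def count_syllables(word: str) -> int:
        # Approximate syllables as runs of vowels; drop a common silent final 'e'.
        word = word.lower()
        count = len(re.findall(r"[aeiouy]+", word))
        if word.endswith("e") and count > 1:
            count -= 1
        return max(count, 1)

    def fkgl(text: str) -> float:
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        words = re.findall(r"[A-Za-z']+", text)
        syllables = sum(count_syllables(w) for w in words)
        return 0.39 * len(words) / len(sentences) + 11.8 * syllables / len(words) - 15.59

    sample = ("Recovery after knee replacement usually takes several months. "
              "Most patients walk with support within a day or two.")
    print(round(fkgl(sample), 1))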