Results 1 - 4 of 4
The Bone & Joint Journal
Vol. 106-B, Issue 7 | Pages 688 - 695
1 Jul 2024
Farrow L, Zhong M, Anderson L

Aims

To examine whether natural language processing (NLP) using a clinically based large language model (LLM) could predict patient selection for total hip or total knee arthroplasty (THA/TKA) from routinely available free-text radiology reports.

Methods

Data pre-processing and analyses were conducted according to the Artificial intelligence to Revolutionize the patient Care pathway in Hip and knEe aRthroplastY (ARCHERY) project protocol. This included use of de-identified Scottish regional clinical data of patients referred for consideration of THA/TKA, held in a secure data environment designed for artificial intelligence (AI) inference. Only preoperative radiology reports were included. NLP algorithms were based on the freely available GatorTron model, an LLM trained on over 82 billion words of de-identified clinical text. Two inference tasks were performed: assessment after model fine-tuning (50 epochs and three cycles of k-fold cross-validation), and external validation.

Results

For THA, 5,558 patient radiology reports were included, of which 4,137 were used for model training and testing, and 1,421 for external validation. Following training, the model demonstrated mean (across three folds) accuracy, F1 score, and area under the receiver operating characteristic curve (AUROC) values of 0.850 (95% confidence interval (CI) 0.833 to 0.867), 0.813 (95% CI 0.785 to 0.841), and 0.847 (95% CI 0.822 to 0.872), respectively. For TKA, 7,457 patient radiology reports were included, with 3,478 used for model training and testing, and 3,152 for external validation. Performance metrics included accuracy, F1 score, and AUROC values of 0.757 (95% CI 0.702 to 0.811), 0.543 (95% CI 0.479 to 0.607), and 0.717 (95% CI 0.657 to 0.778), respectively. There was a notable deterioration in performance on external validation in both cohorts.

Conclusion

The use of routinely available preoperative radiology reports shows promising potential for screening suitable candidates for THA, but not for TKA. The external validation results demonstrate the importance of further model testing and training when confronted with new clinical cohorts. Cite this article: Bone Joint J 2024;106-B(7):688–695
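The AUROC values reported above summarize how well the model's scores rank listed patients above non-listed ones. As a minimal illustration of the metric itself (the labels and scores below are invented, not study data), AUROC can be computed directly from its pairwise-ranking definition:

```python
def auroc(labels, scores):
    """Area under the ROC curve via the pairwise-ranking definition:
    the probability that a randomly chosen positive case is scored
    higher than a randomly chosen negative case (ties count 0.5)."""
    positives = [s for y, s in zip(labels, scores) if y == 1]
    negatives = [s for y, s in zip(labels, scores) if y == 0]
    if not positives or not negatives:
        raise ValueError("need at least one positive and one negative label")
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in positives
        for n in negatives
    )
    return wins / (len(positives) * len(negatives))

# Illustrative example: 1 = listed for arthroplasty, 0 = not listed,
# scores are hypothetical model probabilities.
print(auroc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
```

In a cross-validated evaluation such as the one described, this metric would be computed once per fold and the three values averaged, with the CI taken across folds.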


Bone & Joint Open
Vol. 5, Issue 2 | Pages 139 - 146
15 Feb 2024
Wright BM, Bodnar MS, Moore AD, Maseda MC, Kucharik MP, Diaz CC, Schmidt CM, Mir HR

Aims

While internet search engines have been the primary source of information for patients’ questions, artificial intelligence large language models like ChatGPT are trending towards becoming the new primary source. The purpose of this study was to determine whether ChatGPT can answer patient questions about total hip arthroplasty (THA) and total knee arthroplasty (TKA) with consistent accuracy, comprehensiveness, and easy readability.

Methods

We posed the 20 most Google-searched questions about THA and TKA, plus ten additional postoperative questions, to ChatGPT. Each question was asked twice to evaluate consistency in quality. Following each response, we replied, “Please explain so it is easier to understand,” to evaluate ChatGPT’s ability to reduce the reading grade level of its responses, measured as Flesch-Kincaid Grade Level (FKGL). Five resident physicians rated the 120 responses on accuracy and comprehensiveness scales of 1 to 5. Additionally, they answered a “yes” or “no” question regarding acceptability. Mean scores were calculated for each question, and responses were deemed acceptable if at least four raters answered “yes”.

Results

The mean accuracy and comprehensiveness scores were 4.26 (95% confidence interval (CI) 4.19 to 4.33) and 3.79 (95% CI 3.69 to 3.89), respectively. Of all the responses, 59.2% (71/120; 95% CI 50.0% to 67.7%) were acceptable. ChatGPT was consistent when asked the same question twice, with no significant difference in accuracy (t = 0.821; p = 0.415), comprehensiveness (t = 1.387; p = 0.171), acceptability (χ² = 1.832; p = 0.176), or FKGL (t = 0.264; p = 0.793). There was a significantly lower FKGL (t = 2.204; p = 0.029) for simplified responses (11.14; 95% CI 10.57 to 11.71) than for original responses (12.15; 95% CI 11.45 to 12.85).

Conclusion

ChatGPT answered THA and TKA patient questions with accuracy comparable to previous reports of websites and with adequate comprehensiveness, but with limited acceptability as the sole information source. ChatGPT has potential for answering patient questions about THA and TKA, but needs improvement. Cite this article: Bone Jt Open 2024;5(2):139–146
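The FKGL metric used above is a standard formula over sentence length and syllable counts. A minimal sketch follows, using a crude vowel-group heuristic for syllables (real readability tools use dictionary-based syllabification, so exact values will differ slightly):

```python
import re

def count_syllables(word):
    """Rough syllable estimate: count groups of consecutive vowels.
    A heuristic stand-in for dictionary-based syllabification."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def fkgl(text):
    """Flesch-Kincaid Grade Level:
    0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59"""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)

# Short, simple sentence -> low grade level (~2.3)
print(round(fkgl("The quick brown fox jumps over the lazy dog."), 2))
```

A drop from roughly grade 12 to grade 11, as reported in the study, corresponds to the "easier" responses using somewhat shorter sentences and fewer polysyllabic words.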


Bone & Joint Research
Vol. 12, Issue 9 | Pages 512 - 521
1 Sep 2023
Langenberger B, Schrednitzki D, Halder AM, Busse R, Pross CM

Aims

A substantial fraction of patients undergoing knee arthroplasty (KA) or hip arthroplasty (HA) do not achieve an improvement as high as the minimal clinically important difference (MCID), i.e. do not achieve a meaningful improvement. Using three patient-reported outcome measures (PROMs), our aims were: 1) to assess the performance of machine learning (ML) models, the simple pre-surgery PROM score, and logistic regression (LR) in predicting whether patients undergoing HA or KA achieve an improvement equal to or greater than a calculated MCID; and 2) to test whether ML can outperform LR or pre-surgery PROM scores in predictive performance.

Methods

MCIDs were derived using the change difference method in a sample of 1,843 HA and 1,546 KA patients. An artificial neural network, a gradient boosting machine, least absolute shrinkage and selection operator (LASSO) regression, ridge regression, elastic net, random forest, LR, and pre-surgery PROM scores were applied to predict MCID for the following PROMs: EuroQol five-dimension, five-level questionnaire (EQ-5D-5L), EQ visual analogue scale (EQ-VAS), Hip disability and Osteoarthritis Outcome Score-Physical Function Short-form (HOOS-PS), and Knee injury and Osteoarthritis Outcome Score-Physical Function Short-form (KOOS-PS).
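The change difference method used to derive the MCIDs is an anchor-based approach: the MCID is estimated as the mean pre-to-post change in patients who report being minimally improved on an anchor question, minus the mean change in patients who report being unchanged. A minimal sketch with invented PROM change scores (not study data):

```python
from statistics import mean

def mcid_change_difference(improved_changes, unchanged_changes):
    """Change difference (anchor-based) MCID estimate: mean pre-to-post
    PROM change in patients reporting themselves 'somewhat better'
    minus the mean change in patients reporting themselves unchanged."""
    return mean(improved_changes) - mean(unchanged_changes)

# Illustrative PROM change scores, grouped by anchor response:
improved = [10.0, 12.0, 14.0]   # anchor: "somewhat better"
unchanged = [2.0, 4.0, 6.0]     # anchor: "about the same"
print(mcid_change_difference(improved, unchanged))  # 8.0
```

Once an MCID is fixed this way, each patient's outcome becomes a binary label (change ≥ MCID or not), which is the target the ML models, LR, and pre-surgery PROM scores then try to predict.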


The Bone & Joint Journal
Vol. 104-B, Issue 8 | Pages 929 - 937
1 Aug 2022
Gurung B, Liu P, Harris PDR, Sagi A, Field RE, Sochart DH, Tucker K, Asopa V

Aims

Total hip arthroplasty (THA) and total knee arthroplasty (TKA) are common orthopaedic procedures requiring postoperative radiographs to confirm implant positioning and identify complications. Artificial intelligence (AI)-based image analysis has the potential to automate this postoperative surveillance. The aim of this study was to conduct a scoping review investigating how AI is being used in the analysis of radiographs following THA and TKA, and how accurate these tools are.

Methods

The Embase, MEDLINE, and PubMed libraries were systematically searched to identify relevant articles. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for scoping reviews and the Arksey and O’Malley framework were followed. Study quality was assessed using a modified Methodological Index for Non-Randomized Studies tool. AI performance was reported using either the area under the curve (AUC) or accuracy.

Results

Of the 455 studies identified, only 12 were suitable for inclusion. Nine reported implant identification and three described predicting risk of implant failure. Of the 12, three studies compared AI performance with orthopaedic surgeons. AI-based implant identification achieved AUCs of 0.992 to 1, and most algorithms reported an accuracy > 90%, using 550 to 320,000 training radiographs. AI prediction of dislocation risk post-THA, determined after five-year follow-up, was satisfactory (AUC 76.67; 8,500 training radiographs). Diagnosis of hip implant loosening was good (accuracy 88.3%; 420 training radiographs), and measurement of postoperative acetabular angles was comparable to humans (mean absolute difference 1.35° to 1.39°). However, 11 of the 12 studies had several methodological limitations introducing a high risk of bias, and none of the studies were externally validated.

Conclusion

These studies show that AI is promising: while it can already analyze images with considerable precision, there is currently insufficient high-level evidence to support its widespread clinical use. Further research designing robust studies that follow standard reporting guidelines should be encouraged, to develop AI models that can be readily translated into real-world conditions. Cite this article: Bone Joint J 2022;104-B(8):929–937
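The agreement figure quoted for acetabular angle measurement (mean absolute difference 1.35° to 1.39°) is a simple paired comparison between AI and human readings. A minimal sketch, with illustrative angle values rather than data from the reviewed studies:

```python
def mean_absolute_difference(ai_measurements, human_measurements):
    """Mean absolute difference between paired angle measurements
    (e.g. postoperative acetabular inclination, in degrees)."""
    if len(ai_measurements) != len(human_measurements):
        raise ValueError("measurement lists must be paired")
    diffs = [abs(a - h) for a, h in zip(ai_measurements, human_measurements)]
    return sum(diffs) / len(diffs)

# Illustrative paired readings in degrees (not study data):
print(mean_absolute_difference([40.0, 42.5], [41.0, 41.0]))  # 1.25
```

A mean absolute difference near 1° is within the variation typically seen between two human raters, which is why the review describes AI angle measurement as comparable to humans.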