Bone & Joint Research
Vol. 13, Issue 10 | Pages 588 - 595
17 Oct 2024
Breu R, Avelar C, Bertalan Z, Grillari J, Redl H, Ljuhar R, Quadlbauer S, Hausner T

Aims

The aim of this study was to create artificial intelligence (AI) software that provides a second opinion to physicians to support distal radius fracture (DRF) detection, and to compare the fracture-detection accuracy of physicians with and without software support.

Methods

The dataset consisted of 26,121 anonymized anterior-posterior (AP) and lateral standard-view radiographs of the wrist, with and without DRF. The convolutional neural network (CNN) model was trained to detect the presence of a DRF by comparing the radiographs containing a fracture to the inconspicuous ones. A total of 11 physicians (six surgeons in training and five hand surgeons) assessed 200 pairs of randomly selected digital radiographs of the wrist (AP and lateral) for the presence of a DRF. The same images were first evaluated without, and then with, the support of the CNN model, and the diagnostic accuracy of the two methods was compared.

Results

At the time of the study, the CNN model showed an area under the receiver operating characteristic curve of 0.97. AI assistance improved the physicians' sensitivity (correct fracture detection) from 80% to 87%, and the specificity (correct fracture exclusion) from 91% to 95%. The overall error rate (combined false positives and false negatives) was reduced from 14% without AI to 9% with AI.

Conclusion

The use of a CNN model as a second opinion can improve the diagnostic accuracy of DRF detection in the study setting.

Cite this article: Bone Joint Res 2024;13(10):588–595.
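The reported sensitivity, specificity, and error rate follow directly from confusion-matrix counts. As a minimal sketch (the counts below are illustrative only, chosen to reproduce the abstract's AI-assisted percentages, and are not the study's raw data):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Compute sensitivity, specificity, and overall error rate
    from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)                  # correct fracture detection
    specificity = tn / (tn + fp)                  # correct fracture exclusion
    error_rate = (fp + fn) / (tp + fp + tn + fn)  # combined FP and FN rate
    return sensitivity, specificity, error_rate

# Hypothetical reading session: 100 fractured and 100 non-fractured wrists
# read at the AI-assisted rates reported in the abstract.
sens, spec, err = diagnostic_metrics(tp=87, fp=5, tn=95, fn=13)
# sens = 0.87, spec = 0.95, err = 0.09
```

Note that sensitivity and specificity weight the fractured and non-fractured classes separately, while the overall error rate depends on the class mix in the reading set.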


Orthopaedic Proceedings
Vol. 105-B, Issue SUPP_7 | Pages 71 - 71
4 Apr 2023
Arrowsmith C, Burns D, Mak T, Hardisty M, Whyne C

Access to health care, including physiotherapy, is increasingly occurring through virtual formats. At-home adherence to physical therapy programs is often poor, and few tools exist to objectively measure low back physiotherapy exercise participation without the direct supervision of a medical professional. The aim of this study was to develop and evaluate the potential for performing automatic, unsupervised video-based monitoring of at-home low back physiotherapy exercises using a single mobile phone camera.

A total of 24 healthy adult subjects performed seven exercises based on the McKenzie low back physiotherapy program while being filmed with two smartphone cameras. Joint locations were automatically extracted using an open-source pose estimation framework. Engineered features were extracted from the joint location time series and used to train a support vector machine classifier (SVC). A convolutional neural network (CNN) was trained directly on the joint location time series data to classify exercises based on a recording from a single camera. The models were evaluated using a five-fold cross-validation approach, stratified by subject, with class-balanced accuracy used as the performance metric.

Optimal performance was achieved when using a total of 12 pose estimation landmarks from the upper and lower body, with the SVC model achieving a classification accuracy of 96±4% and the CNN model an accuracy of 97±2%.

This study demonstrates the feasibility of using a smartphone camera and a supervised machine learning model to effectively assess at-home low back physiotherapy adherence. This approach could provide a low-cost, scalable method for tracking adherence to physical therapy exercise programs in a variety of settings.
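The pipeline above turns each joint-location time series into a fixed-length feature vector before classification. The abstract does not specify the engineered features, so the sketch below assumes a simple set (per-landmark mean, standard deviation, and range of each coordinate) purely for illustration:

```python
import numpy as np

def pose_features(joints):
    """Engineered features from a pose-estimation time series.

    joints: array of shape (frames, landmarks, 2) holding (x, y)
    coordinates extracted by a pose-estimation framework.
    Returns per-landmark mean, standard deviation, and range of each
    coordinate, flattened into one vector suitable for an SVC.
    (A simplification: the study's actual feature set is not given.)
    """
    mean = joints.mean(axis=0)                  # (landmarks, 2)
    std = joints.std(axis=0)                    # temporal variability
    rng = joints.max(axis=0) - joints.min(axis=0)  # movement extent
    return np.concatenate([mean.ravel(), std.ravel(), rng.ravel()])

# Synthetic recording: 12 landmarks tracked over 50 frames,
# with one joint moving vertically from 0 to 10 pixels.
series = np.zeros((50, 12, 2))
series[:, 0, 1] = np.linspace(0.0, 10.0, 50)
features = pose_features(series)   # length 12 landmarks * 2 coords * 3 stats = 72
```

With 12 landmarks this yields a 72-dimensional vector per recording; stratifying the cross-validation folds by subject, as the study does, prevents frames from one person leaking into both train and test splits.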


Orthopaedic Proceedings
Vol. 101-B, Issue SUPP_11 | Pages 71 - 71
1 Oct 2019
Vail TP, Shah RF, Bini SA

Background

Implant loosening is a common cause of a poor outcome and pain after total knee arthroplasty (TKA). Despite the increased use of expensive techniques such as arthrography, the detection of prosthetic loosening is often unclear pre-operatively, leading to diagnostic uncertainty and extensive workup. The objective of this study was to evaluate the ability of a machine learning (ML) algorithm to diagnose prosthetic loosening from pre-operative radiographs, and to observe which model inputs improve the performance of the model.

Methods

A total of 754 patients underwent a first-time revision of a total joint at our institution from 2012 to 2018. Pre-operative AP and lateral X-rays (XR), in addition to demographic and comorbidity information, were collected for each patient. Each patient was determined to have either loose or fixed prosthetics based on a manual abstraction of the written findings in their operative report, which is considered the gold standard for diagnosing prosthetic loosening. We trained a series of deep convolutional neural network (CNN) models to predict whether a prosthesis was found to be loose in the operating room from the pre-operative XR. Each XR was pre-processed to segment the bone, implant, and bone-implant interface. A series of CNN models were built using existing, proven CNN architectures, with weights optimized to our dataset. We then integrated our best-performing model with historical patient data to create a final model and determine the incremental accuracy provided by additional layers of clinical information fed into the model. The models were evaluated by their accuracy, sensitivity, and specificity.

Results

The CNN we built demonstrated high performance at detecting prosthetic loosening from radiographs alone. Our first model, built from scratch on just the image as an input, had an accuracy of 70%. Our final model, which was built by fine-tuning and optimizing a publicly available model named DenseNet, combining the AP and lateral radiographs, and incorporating information from the patient history, had an accuracy, sensitivity, and specificity of 98.5%, 93.9%, and 99.5% on the patients it was trained on, and an accuracy, sensitivity, and specificity of 88.3%, 70.2%, and 95.6% on the patients it was tested on.

Conclusions

The use of machine learning (ML) can accurately detect the presence of prosthetic loosening based on plain radiographs. Its accuracy is progressively enhanced when additional clinical data are added to the loosening analysis algorithm. While this type of machine learning may not be sufficient in its present state of development as a standalone metric of loosening, it is clearly a useful augment for clinical decision-making. Further study and development will be needed to determine the feasibility of applying machine learning as a more definitive test in the clinical setting.

For figures, tables, or references, please contact authors directly.
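The final model fuses image-derived features with clinical history before making a prediction. The abstract does not describe the fusion mechanism, so the sketch below assumes a simple late-fusion design: a CNN backbone (e.g. a fine-tuned DenseNet) reduces each radiograph to a feature vector, clinical covariates are appended, and a single logistic layer scores loosening probability. All feature values and weights here are illustrative, not the study's model:

```python
import numpy as np

def sigmoid(x):
    """Logistic function mapping a real-valued score to (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def loosening_score(image_features, clinical_features, w_img, w_clin, bias):
    """Late-fusion sketch: image embedding and clinical covariates feed
    one logistic layer that outputs a loosening probability."""
    logit = image_features @ w_img + clinical_features @ w_clin + bias
    return sigmoid(logit)

# Toy example with hand-picked weights: four image features plus two
# standardized clinical covariates (hypothetical, e.g. age and BMI).
p = loosening_score(
    image_features=np.array([0.2, -0.1, 0.4, 0.0]),
    clinical_features=np.array([1.0, 0.5]),
    w_img=np.array([1.0, 1.0, 1.0, 1.0]),
    w_clin=np.array([0.5, 0.5]),
    bias=-0.5,
)  # probability strictly between 0 and 1
```

The gap the study reports between training performance (98.5% accuracy) and test performance (88.3%) is the usual signature of overfitting, which is why the held-out figures are the ones that matter clinically.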


Bone & Joint Research
Vol. 12, Issue 7 | Pages 447 - 454
10 Jul 2023
Lisacek-Kiosoglous AB, Powling AS, Fontalis A, Gabr A, Mazomenos E, Haddad FS

The use of artificial intelligence (AI) is rapidly growing across many domains, of which the medical field is no exception. AI is an umbrella term defining the practical application of algorithms to generate useful output, without the need for human cognition. Owing to the expanding volume of patient information collected, known as ‘big data’, AI is showing promise as a useful tool in healthcare research and across all aspects of patient care pathways. Practical applications in orthopaedic surgery include: diagnostics, such as fracture recognition and tumour detection; predictive models of clinical and patient-reported outcome measures, such as calculating mortality rates and length of hospital stay; and real-time rehabilitation monitoring and surgical training. However, clinicians should remain cognizant of AI’s limitations, as the development of robust reporting and validation frameworks is of paramount importance to prevent avoidable errors and biases. The aim of this review article is to provide a comprehensive understanding of AI and its subfields, as well as to delineate its existing clinical applications in trauma and orthopaedic surgery. Furthermore, this narrative review expands upon the limitations of AI and future directions.

Cite this article: Bone Joint Res 2023;12(7):447–454.


Bone & Joint Open
Vol. 2, Issue 10 | Pages 879 - 885
20 Oct 2021
Oliveira e Carmo L, van den Merkhof A, Olczak J, Gordon M, Jutte PC, Jaarsma RL, IJpma FFA, Doornberg JN, Prijs J

Aims

The number of convolutional neural networks (CNN) available for fracture detection and classification is rapidly increasing. External validation of a CNN on a temporally separate (separated by time) or geographically separate (separated by location) dataset is crucial to assess the generalizability of the CNN before application to clinical practice in other institutions. We aimed to answer the following questions: are current CNNs for fracture recognition externally valid; which methods are applied for external validation (EV); and what are the reported performances on the EV sets compared to the internal validation (IV) sets of these CNNs?

Methods

The PubMed and Embase databases were systematically searched from January 2010 to October 2020 according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement. The type of EV, characteristics of the external dataset, and diagnostic performance characteristics on the IV and EV datasets were collected and compared. Quality assessment was conducted using a seven-item checklist based on a modified Methodological Index for Non-Randomized Studies (MINORS) instrument.