Orthopaedic Proceedings
Vol. 102-B, Issue SUPP_1 | Pages 133 - 133
1 Feb 2020
Borjali A, Chen A, Muratoglu O, Varadarajan K

INTRODUCTION

Mechanical loosening of total hip replacement (THR) is primarily diagnosed using radiographs, which are diagnostically challenging and require review by experienced radiologists and orthopaedic surgeons. Automated tools that assist less-experienced clinicians and mitigate human error can reduce the risk of missed or delayed diagnosis. Thus the purposes of this study were to: 1) develop an automated tool to detect mechanical loosening of THR by training a deep convolutional neural network (CNN) using THR x-rays, and 2) visualize the CNN training process to interpret how it functions.

METHODS

A retrospective study was conducted using previously collected imaging data at a single institution with IRB approval. Twenty-three patients with cementless primary THR who underwent revision surgery due to mechanical loosening (a loose stem and/or a loose acetabular component) had their hip x-rays from immediately prior to revision surgery evaluated (32 “loose” x-rays). The comparison group comprised 23 patients who underwent primary cementless THR surgery, with x-rays taken immediately after the primary surgery (31 “not loose” x-rays). Fig. 1 shows examples of “not loose” and “loose” THR x-rays. A DenseNet201 CNN was used, with its top layer replaced by a binary classifier, and evaluated with a 90:10 split-validation [1]. A CNN pre-trained on ImageNet [2] and a CNN without pre-training (initial zero weights) were both implemented to compare results. Saliency maps were used to indicate the influence of each pixel of a given x-ray on the CNN's output [3].
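The cited saliency-map technique [3] attributes a model's output to individual input pixels. As a minimal illustration of the idea only, the sketch below approximates pixel-wise saliency by finite differences on a toy logistic "model" over a four-pixel image; the toy model, weights, and pixel values are all hypothetical stand-ins for the actual DenseNet201.

```python
import math

def toy_model(pixels, weights):
    """Toy stand-in for a trained CNN: a logistic score over pixel intensities.
    (The study used DenseNet201; this function only illustrates the idea.)"""
    z = sum(w * p for w, p in zip(weights, pixels))
    return 1.0 / (1.0 + math.exp(-z))  # probability of "loose"

def saliency_map(pixels, weights, eps=1e-4):
    """Approximate |d(output)/d(pixel)| for each pixel by finite differences."""
    base = toy_model(pixels, weights)
    saliency = []
    for i in range(len(pixels)):
        perturbed = list(pixels)
        perturbed[i] += eps
        saliency.append(abs(toy_model(perturbed, weights) - base) / eps)
    return saliency

# A hypothetical 4-"pixel" image; the toy model weights pixel 2 most heavily,
# so the saliency map should highlight that pixel.
pixels = [0.5, 0.2, 0.8, 0.1]
weights = [0.1, -0.3, 1.5, 0.05]
s = saliency_map(pixels, weights)
most_influential = max(range(len(s)), key=lambda i: s[i])
```

In practice the gradient is computed by backpropagation rather than finite differences, but the interpretation is the same: pixels with large gradient magnitude most influence the prediction.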


Orthopaedic Proceedings
Vol. 102-B, Issue SUPP_8 | Pages 79 - 79
1 Aug 2020
Bozzo A, Ghert M, Reilly J

Advances in cancer therapy have prolonged patient survival even in the presence of disseminated disease and an increasing number of cancer patients are living with metastatic bone disease (MBD). The proximal femur is the most common long bone involved in MBD and pathologic fractures of the femur are associated with significant morbidity, mortality and loss of quality of life (QoL).

Successful prophylactic surgery for an impending fracture of the proximal femur has been shown in multiple cohort studies to result in longer survival, preserved mobility, lower transfusion rates and shorter post-operative hospital stays. However, there is currently no optimal method to predict a pathologic fracture. The best-known tool, Mirels' criteria, was established in 1989, but its poor sensitivity and specificity limit its value for guiding clinical practice. The ideal clinical decision support tool would offer high sensitivity and specificity, be non-invasive, generalize to all patients, and burden neither hospital resources nor the patient's time. Our research uses novel machine learning techniques to develop a model that fills this considerable gap in the treatment pathway of MBD of the femur. The goal of our study is to train a convolutional neural network (CNN) to predict fracture risk when metastatic bone disease is present in the proximal femur.

Our fracture risk prediction tool was developed by analysis of prospectively collected data on consecutive MBD patients presenting from 2009–2016. Patients with primary bone tumors, pathologic fractures at initial presentation, and hematologic malignancies were excluded. A total of 546 patients were included, 114 of whom sustained pathologic fractures. Every patient had at least one anterior-posterior (AP) X-ray and clinical data including patient demographics, Mirels' criteria, tumor biology, all previous radiation and chemotherapy received, multiple pain and function scores, medications, and time to fracture or time to death.

We trained a convolutional neural network (CNN) on AP X-ray images of the 546 patients with metastatic bone disease of the proximal femur. The digital X-ray data are converted into a matrix representing the color information at each pixel. Our CNN contains five convolutional layers, a fully connected layer of 512 units, and a final output layer. As information passes through successive levels of the network, higher-level features are abstracted from the data. The model converges on two fully connected deep neural network layers that output the risk of fracture. This prediction is compared to the true outcome, and any errors are back-propagated through the network to adjust the weights between connections accordingly, until overall prediction accuracy is optimized. Methods to improve learning included stochastic gradient descent with a learning rate of 0.01 and a momentum of 0.9.
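The stochastic-gradient-descent-with-momentum update described above can be sketched as follows. The 1-D quadratic objective is a hypothetical stand-in for the network's loss, but the learning rate (0.01) and momentum (0.9) match the values stated in the abstract.

```python
def sgd_momentum_step(w, v, grad, lr=0.01, momentum=0.9):
    """One SGD-with-momentum update: the velocity v accumulates a decaying
    sum of past gradients, smoothing the descent direction."""
    v = momentum * v - lr * grad
    return w + v, v

# Toy objective f(w) = (w - 3)^2, with gradient 2 * (w - 3); the optimizer
# should drive w toward the minimizer at w = 3.
w, v = 0.0, 0.0
for _ in range(200):
    w, v = sgd_momentum_step(w, v, grad=2.0 * (w - 3.0))
```

In the actual study the gradient would come from back-propagating the classification error through the network's weights; the update rule itself is unchanged.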

We used average classification accuracy and the average F1 score across the five test sets to measure model performance. We compute F1 = 2 × (precision × recall) / (precision + recall). F1 is a measure of a model's accuracy in binary classification, in our case, whether a lesion would result in pathologic fracture or not. Our model achieved 88.2% accuracy in predicting fracture risk across five-fold cross-validation testing. The F1 statistic is 0.87.
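The F1 computation follows directly from the stated formula. The confusion-matrix counts below are illustrative only, not the study's actual results.

```python
def f1_score(tp, fp, fn):
    """F1 = 2 * precision * recall / (precision + recall),
    computed from true positives, false positives, and false negatives."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts: 20 fractures correctly flagged, 4 false alarms, 2 missed.
score = f1_score(tp=20, fp=4, fn=2)
```

Equivalently, F1 = 2·TP / (2·TP + FP + FN), which makes clear that true negatives do not enter the score.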

This is the first reported application of convolutional neural networks, a machine learning algorithm, to this important orthopaedic problem. Our neural network model achieved reasonable accuracy in classifying fracture risk of metastatic proximal femur lesions from analysis of X-rays and clinical information. Our future work will aim to externally validate this algorithm on an international cohort.


Orthopaedic Proceedings
Vol. 102-B, Issue SUPP_7 | Pages 96 - 96
1 Jul 2020
Bozzo A, Ghert M

Advances in cancer therapy have prolonged cancer patient survival even in the presence of disseminated disease and an increasing number of cancer patients are living with metastatic bone disease (MBD). The proximal femur is the most common long bone involved in MBD and pathologic fractures of the femur are associated with significant morbidity, mortality and loss of quality of life (QoL).

Successful prophylactic surgery for an impending fracture of the proximal femur has been shown in multiple cohort studies to result in a greater likelihood of walking after surgery, longer survival, lower transfusion rates and shorter post-operative hospital stays. However, there is currently no optimal method to predict a pathologic fracture. The best-known tool, Mirels' criteria, was established in 1989, but its poor sensitivity and specificity limit its value for guiding clinical practice. The goal of our study is to train a convolutional neural network (CNN) to predict fracture risk when metastatic bone disease is present in the proximal femur.

Our fracture risk prediction tool was developed by analysis of prospectively collected data for MBD patients (2009–2016) in order to determine which features are most commonly associated with fracture. Patients with primary bone tumors, pathologic fractures at initial presentation, and hematologic malignancies were excluded. A total of 1146 patients were included, 224 of whom sustained pathologic fractures. Every patient had at least one anterior-posterior (AP) X-ray. The clinical data include patient demographics, tumor biology, all previous radiation and chemotherapy received, multiple pain and function scores, medications, and time to fracture or time to death. Each of Mirels' criteria has been further subdivided and recorded for each lesion.

We trained a convolutional neural network (CNN) on X-ray images of the 1146 patients with metastatic bone disease of the proximal femur. The digital X-ray data are converted into a matrix representing the color information at each pixel. Our CNN contains five convolutional layers, a fully connected layer of 512 units, and a final output layer. As information passes through successive levels of the network, higher-level features are abstracted from the data. The model converges on two fully connected deep neural network layers that output the fracture risk. This prediction is compared to the true outcome, and any errors are back-propagated through the network to adjust the weights between connections accordingly. Methods to improve learning included stochastic gradient descent with a learning rate of 0.01 and a momentum of 0.9.

We used average classification accuracy and the average F1 score across test sets to measure model performance. We compute F1 = 2 × (precision × recall) / (precision + recall). F1 is a measure of a test's accuracy in binary classification, in our case, whether a lesion would result in pathologic fracture or not. Five-fold cross-validation testing of our fully trained model revealed accurate classification for 88.2% of patients with metastatic bone disease of the proximal femur. The F1 statistic is 0.87. This represents a 24% error reduction compared with using Mirels' criteria alone to classify fracture risk in this cohort.
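Five-fold cross-validation partitions the cohort so that every patient appears in exactly one test fold and the remaining four folds are used for training. A minimal sketch of the fold construction follows; the interleaved assignment is a simplification (any shuffling or stratification the authors used is not described in the abstract), and only the cohort size of 1146 comes from the text.

```python
def five_fold_splits(n_samples, n_folds=5):
    """Partition sample indices 0..n-1 into n_folds disjoint test folds and
    return (train, test) index pairs, one pair per fold."""
    indices = list(range(n_samples))
    # Interleaved assignment: sample i goes to test fold (i mod n_folds).
    folds = [indices[i::n_folds] for i in range(n_folds)]
    splits = []
    for k in range(n_folds):
        test_set = set(folds[k])
        train = [i for i in indices if i not in test_set]
        splits.append((train, folds[k]))
    return splits

splits = five_fold_splits(1146)  # cohort size from the abstract
```

Averaging the per-fold accuracy and F1 over the five splits yields the summary figures reported above.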

This is the first reported application of convolutional neural networks, a machine learning algorithm, to an important orthopaedic problem. Our neural network model achieved high accuracy in classifying fracture risk of metastatic proximal femur lesions from analysis of X-rays and clinical information. Our future work will aim to validate this algorithm on an external cohort.


Orthopaedic Proceedings
Vol. 102-B, Issue SUPP_1 | Pages 26 - 26
1 Feb 2020
Bloomfield R, McIsaac K, Teeter M

Objective

Emergence of low-cost wearable systems has permitted extended data collection for unsupervised subject monitoring. Recognizing individual activities performed during these sessions gives context to recorded data and is an important first step towards automated motion analysis. Convolutional neural networks (CNNs) have been used with great success to detect patterns of pixels in images for object detection and recognition in many different applications. This work proposes a novel image encoding scheme to create images from time-series activity data and uses CNNs to accurately classify 13 daily activities performed by instrumented subjects.

Methods

Twenty healthy subjects were instrumented with a previously developed wearable sensor system consisting of four inertial sensors mounted above and below each knee. Each subject performed eight static and five dynamic activities: standing, sitting in a chair/cross-legged, kneeling on left/right/both knees, squatting, lying, walking/running, biking and ascending/descending stairs. Data from each sensor were synchronized, windowed, and encoded as images using a novel encoding scheme. Two CNNs were designed and trained to classify the encoded images of static and dynamic activities separately. Network performance was evaluated using twenty iterations of a leave-one-out validation process in which a single subject was left out as test data to estimate performance on future unseen subjects.

Results

Using 19 subjects for training and a single subject left out for testing per iteration, the average accuracy observed when classifying the eight static activities was 98.0% ± 2.9%. Accuracy dropped to 89.3% ± 10.6% when classifying all dynamic activities using a separate model with the same evaluation process. Ascending/descending stairs, walking/running, and sitting in a chair/squatting were most commonly misclassified.

Conclusions

Previous related work on activity recognition using raw accelerometer and/or gyroscope signals fails to provide sufficient data to distinguish static activities. The proposed method, operating on lower limb orientations, classified eight static activities with exceptional accuracy when tested on unseen subject data. High accuracy was also observed when classifying dynamic activities despite the similarity of the activities performed and the expected variance of individuals' gait. Accuracy reported in existing literature classifying comparable activities from other wearable sensor systems ranges from 27.84% to 84.52% when tested using a similar leave-one-subject-out validation strategy [1]. It is expected that incorporating these trained models into the previously developed wearable system will permit activity classification on unscripted instrumented activity data for more contextual motion analysis.
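The abstract does not specify the image encoding scheme, so the sketch below shows one generic way to window synchronized multi-channel sensor data and stack the channels into a 2D array that a CNN could consume as a grayscale image. The window length, step, and the stacking itself are hypothetical stand-ins for the paper's novel encoding.

```python
def window_signals(channels, window, step):
    """Slice synchronized multi-channel time series into fixed-length windows.

    channels: list of equal-length lists, one per sensor channel.
    Returns a list of windows; each window is a 2D list
    (rows = channels, cols = samples), i.e. a small grayscale "image".
    This channel-stacking is a hypothetical stand-in for the paper's scheme.
    """
    n = len(channels[0])
    windows = []
    for start in range(0, n - window + 1, step):
        windows.append([ch[start:start + window] for ch in channels])
    return windows

# Hypothetical example: four knee-mounted sensor channels of 100 samples each,
# cut into 32-sample windows with 50% overlap (step = 16).
data = [[float(i + c) for i in range(100)] for c in range(4)]
imgs = window_signals(data, window=32, step=16)
```

Each resulting window can then be labeled with the activity performed during that interval and fed to the CNN as a training image.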