Orthopaedic Proceedings
Vol. 103-B, Issue SUPP_16 | Pages 76 - 76
1 Dec 2021
de Mello FL, Kadirkamanathan V, Wilkinson JM

Objectives. Conventional approaches (including Tobit) neither account accurately for ceiling effects in PROMs nor provide uncertainty estimates. Here, a classifier neural network was used to estimate postoperative PROMs prior to surgery and was compared with conventional methods. The Oxford Knee Score (OKS) and the Oxford Hip Score (OHS) were estimated with separate models.
Methods. English NJR data from 2009 to 2018 was used, with 278,655 knee and 249,634 hip replacements. For both OKS and OHS estimation, the input variables included age, BMI, surgery date, sex, ASA, thromboprophylaxis, anaesthetic and preoperative PROMs responses. Bearing, fixation, head size and approach were also included for OHS estimation, and knee type for OKS estimation. A classifier neural network (NN) was compared with linear or Tobit regression, XGB and a regression NN. The performance metrics were the root mean square error (RMSE), mean absolute error (MAE) and area under the curve (AUC). 95% confidence intervals were computed using 5-fold cross-validation.
Results. The classifier NN and regression NN had the best RMSE, both scoring 8.59±0.04 for knee and 7.88±0.04 for hip. The classifier NN had the best MAE, with 6.73±0.03 for knee and 5.73±0.03 for hip; the Tobit model was second, with 6.86±0.03 for knee and 6.00±0.01 for hip. The classifier NN had the best AUC, with (68.7±0.4)% for knee and (73.9±0.3)% for hip. The regression NN was second, with (67.1±0.3)% for knee and (71.1±0.4)% for hip. The Tobit model had the best AUC among the conventional approaches, with (66.8±0.3)% for knee and (71.0±0.4)% for hip.
Conclusions. The proposed model improves on the current state of the art. In addition, it estimates the full probability distribution of the postoperative PROMs, so that not only the estimated value but also its uncertainty is known.
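The core idea here, treating a bounded, integer-valued PROM as a classification target so that the softmax output is a full predictive distribution rather than a single number, can be sketched in a few lines. The PyTorch snippet below is illustrative only (the framework, feature count and layer sizes are assumptions, not details from the abstract); training would use the observed postoperative score as the class label with a cross-entropy loss.

```python
# Illustrative sketch only: a classifier network over the 49 possible
# Oxford Knee Score values (0-48). The softmax output is a full predictive
# distribution, from which both a point estimate (expected score) and an
# uncertainty (distribution spread) can be read. Feature count and layer
# sizes are assumptions, not taken from the abstract.
import torch
import torch.nn as nn

N_FEATURES = 20   # e.g. age, BMI, sex, ASA, preoperative PROM items, ...
N_SCORES = 49     # OKS takes integer values 0..48

class ScoreClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_FEATURES, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, N_SCORES),        # one logit per possible score
        )

    def forward(self, x):
        return self.net(x)                   # raw logits

model = ScoreClassifier()
x = torch.randn(8, N_FEATURES)               # dummy batch of patients
probs = torch.softmax(model(x), dim=1)       # full predictive distribution

scores = torch.arange(N_SCORES, dtype=torch.float32)
expected = (probs * scores).sum(dim=1)       # point estimate of the score
spread = ((probs * (scores - expected.unsqueeze(1)) ** 2).sum(dim=1)).sqrt()  # uncertainty
print(expected, spread)
```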


Orthopaedic Proceedings
Vol. 106-B, Issue SUPP_18 | Pages 17 - 17
14 Nov 2024
Kjærgaard K, Ding M, Mansourvar M

Introduction. Experimental bone research often generates large amounts of histology and histomorphometry data, and the analysis of these data can be time-consuming and tedious. Machine learning offers a viable alternative to manual analysis for measurements such as bone volume versus total volume. The objective was to develop a neural network for image segmentation and to assess the accuracy of this network, compared with a ground truth, when applied to ectopic bone formation samples.
Method. Thirteen tissue slides totaling 114 megapixels of ectopic bone formation were selected for model building. Slides were split into training, validation and test data, with the test data reserved and used only for the final model assessment. We developed a neural network resembling U-Net that takes 512×512-pixel tiles. To improve model robustness, images were augmented online during training. The network was trained for 3 days on an NVIDIA Tesla K80 provided by a free online learning platform, against ground-truth masks annotated by an experienced researcher.
Result. During training, the validation accuracy improved and stabilised at approximately 95%. The test accuracy was 96.1%.
Conclusion. Most experiments using ectopic bone formation will yield an inter-observer or inter-method variance of far more than 5%, so the current approach may be a valid and feasible technique for automated image segmentation of large datasets. More data or a consensus-based ground truth may improve training stability and validation accuracy. The code and data of this project are available upon request and will be made available online as part of our publication.
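As a rough illustration of the approach described (not the authors' network, which is stated to be available on request), the sketch below assembles a small U-Net-style encoder-decoder in PyTorch that maps a 512×512 tile to a per-pixel bone mask, from which a simple bone-area fraction can be read; the depth, channel counts and BV/TV proxy are placeholder assumptions.

```python
# Minimal sketch: a small U-Net-style encoder-decoder for binary
# segmentation of 512x512 histology tiles. Depth and channel counts
# are placeholders, not the authors' architecture.
import torch
import torch.nn as nn

def block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(),
    )

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = block(3, 16)
        self.enc2 = block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = block(32, 16)          # 16 skip channels + 16 upsampled
        self.head = nn.Conv2d(16, 1, 1)    # per-pixel bone / not-bone logit

    def forward(self, x):
        e1 = self.enc1(x)                  # 512x512 feature map
        e2 = self.enc2(self.pool(e1))      # 256x256 feature map
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))
        return self.head(d1)

model = TinyUNet()
tile = torch.randn(1, 3, 512, 512)         # dummy RGB tile
mask = torch.sigmoid(model(tile)) > 0.5    # predicted bone mask
bv_tv = mask.float().mean()                # bone area / total area of the tile
print(bv_tv)
```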


Orthopaedic Proceedings
Vol. 105-B, Issue SUPP_7 | Pages 71 - 71
4 Apr 2023
Arrowsmith C, Burns D, Mak T, Hardisty M, Whyne C

Access to health care, including physiotherapy, is increasingly occurring through virtual formats. At-home adherence to physical therapy programs is often poor, and few tools exist to objectively measure participation in low back physiotherapy exercises without the direct supervision of a medical professional. The aim of this study was to develop and evaluate the potential for automatic, unsupervised video-based monitoring of at-home low back physiotherapy exercises using a single mobile phone camera. Twenty-four healthy adult subjects performed seven exercises based on the McKenzie low back physiotherapy program while being filmed with two smartphone cameras. Joint locations were automatically extracted using an open-source pose estimation framework. Engineered features were extracted from the joint-location time series and used to train a support vector machine classifier (SVC). A convolutional neural network (CNN) was trained directly on the joint-location time series data to classify exercises based on a recording from a single camera. The models were evaluated using a 5-fold cross-validation approach, stratified by subject, with class-balanced accuracy as the performance metric. Optimal performance was achieved when using a total of 12 pose estimation landmarks from the upper and lower body, with the SVC model achieving a classification accuracy of 96±4% and the CNN model an accuracy of 97±2%. This study demonstrates the feasibility of using a smartphone camera and a supervised machine learning model to effectively assess at-home low back physiotherapy adherence. This approach could provide a low-cost, scalable method for tracking adherence to physical therapy exercise programs in a variety of settings.
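The pipeline described, engineered features from joint-location time series feeding an SVC with subject-wise cross-validation, could look roughly like the scikit-learn sketch below; the random stand-in data, the particular summary features and the use of GroupKFold to hold out whole subjects per fold are assumptions rather than the authors' exact setup.

```python
# Illustrative sketch only: classify exercises from engineered features of
# joint-location time series with an SVM, holding out whole subjects per
# fold. Data here are random stand-ins; features are simple summary stats.
import numpy as np
from sklearn.model_selection import GroupKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_clips, n_frames, n_channels = 200, 150, 24   # 12 landmarks x (x, y)
series = rng.normal(size=(n_clips, n_frames, n_channels))  # stand-in pose data
labels = rng.integers(0, 7, size=n_clips)      # 7 exercises
subjects = rng.integers(0, 24, size=n_clips)   # 24 subjects

def engineered_features(ts):
    # per-channel mean, standard deviation and range as placeholder features
    return np.concatenate([ts.mean(axis=1), ts.std(axis=1),
                           ts.max(axis=1) - ts.min(axis=1)], axis=1)

X = engineered_features(series)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", class_weight="balanced"))
scores = cross_val_score(clf, X, labels, groups=subjects,
                         cv=GroupKFold(n_splits=5), scoring="balanced_accuracy")
print(scores.mean(), scores.std())
```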


Orthopaedic Proceedings
Vol. 106-B, Issue SUPP_2 | Pages 2 - 2
2 Jan 2024
Ditmer S, Dwenger N, Jensen L, Ghaffari A, Rahbek O

The most important outcome predictor of Legg-Calvé-Perthes disease (LCPD) is the shape of the healed femoral head. However, the deformity of the femoral head is currently evaluated by non-reproducible, categorical and qualitative classifications. In this regard, recent advances in computer vision might provide the opportunity to automatically detect and delineate the outlines of bone in radiographic images for calculating a continuous measure of femoral head deformity. This study aimed to construct a pipeline for accurately detecting and delineating the proximal femur in radiographs of LCPD patients using existing algorithms. To detect the proximal femur, the pretrained state-of-the-art object detection model YOLOv5 was trained on 1580 manually annotated radiographs, validated on 338 radiographs and tested on 338 radiographs. Additionally, 200 radiographs of shoulders and chests were added to the dataset to make the model more robust to false positives and to increase generalizability. The convolutional neural network architecture U-Net was then employed to segment the detected proximal femur. The network was trained on 80 manually annotated radiographs using real-time data augmentation to increase the number of training images and enhance the generalizability of the segmentation model. The network was validated on 60 radiographs and tested on 60 radiographs. The object detection model achieved a mean Average Precision (mAP) of 0.998 at an Intersection over Union (IoU) threshold of 0.5, and a mAP of 0.712 over IoU thresholds of 0.5 to 0.95 on the test set. The segmentation model achieved an accuracy score of 0.912, a Dice coefficient of 0.937 and a binary IoU score of 0.854 on the test set. The proposed fully automatic proximal femur detection and segmentation system provides a promising method for accurately detecting and delineating the proximal femoral bone contour in radiographic images, which is necessary for further image analysis.
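For reference, the segmentation metrics quoted above (Dice coefficient and binary IoU) can be computed from a predicted and a manual mask as in the short sketch below; the toy masks are invented purely for illustration.

```python
# Sketch of the reported segmentation metrics: Dice coefficient and binary
# IoU between a predicted proximal-femur mask and a manual ground-truth mask.
import numpy as np

def dice(pred, truth):
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2 * inter / (pred.sum() + truth.sum())

def iou(pred, truth):
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union

# toy example masks (purely illustrative)
truth = np.zeros((128, 128), dtype=bool); truth[30:90, 40:100] = True
pred = np.zeros((128, 128), dtype=bool);  pred[35:95, 40:100] = True
print(dice(pred, truth), iou(pred, truth))
```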


Orthopaedic Proceedings
Vol. 105-B, Issue SUPP_7 | Pages 134 - 134
4 Apr 2023
Arrowsmith C, Alfakir A, Burns D, Razmjou H, Hardisty M, Whyne C

Physiotherapy is a critical element in successful conservative management of low back pain (LBP). The aim of this study was to develop and evaluate a system of wearable inertial sensors to objectively detect sitting postures and the performance of unsupervised exercises containing movement in multiple planes (flexion, extension, rotation). A set of eight inertial sensors was placed on 19 healthy adult subjects. Data was acquired as they performed seven McKenzie low-back exercises and three sitting posture positions. This data was used to train two models (Random Forest (RF) and XGBoost (XGB)) using engineered time-series features. In addition, a convolutional neural network (CNN) was trained directly on the time-series data. A feature importance analysis was performed to identify the sensor locations and channels that contributed most to the models. Finally, a subset of sensor locations and channels was included in a hyperparameter grid search to identify the optimal sensor configuration and the best-performing algorithm(s) for exercise classification. Models were evaluated using the F1-score in a 10-fold cross-validation approach. The optimal hardware configuration was identified as a 3-sensor setup using lower back, left thigh and right ankle sensors with acceleration, gyroscope and magnetometer channels. The XGB model achieved the highest exercise (F1=0.94±0.03) and posture (F1=0.90±0.11) classification scores. The CNN achieved similar results with the same sensor locations, using only the accelerometer and gyroscope channels for exercise classification (F1=0.94±0.02) and the accelerometer channel alone for posture classification (F1=0.91±0.03). This study demonstrates the potential of a 3-sensor lower-body wearable solution (e.g. smart pants) that can identify proper sitting postures and exercises in multiple planes, suitable for low back pain rehabilitation. This technology has the potential to improve the effectiveness of LBP rehabilitation by facilitating quantitative feedback, early problem diagnosis and possible remote monitoring.
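A minimal sketch of the raw-time-series CNN route is shown below, assuming the reported 3-sensor configuration with accelerometer and gyroscope channels (18 input channels) and a placeholder window length; it is illustrative only and not the authors' architecture.

```python
# Minimal sketch: a 1D CNN classifying exercises directly from raw inertial
# time series. The assumed configuration is 3 sensors x 6 channels
# (accelerometer + gyroscope); window length and layer sizes are placeholders.
import torch
import torch.nn as nn

N_CHANNELS, N_SAMPLES, N_CLASSES = 18, 200, 7   # 3 sensors x 6 channels, 7 exercises

class ExerciseCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(N_CHANNELS, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),                     # global temporal pooling
        )
        self.head = nn.Linear(64, N_CLASSES)

    def forward(self, x):                                # x: (batch, channels, time)
        return self.head(self.features(x).squeeze(-1))

model = ExerciseCNN()
window = torch.randn(4, N_CHANNELS, N_SAMPLES)           # dummy sensor windows
print(torch.softmax(model(window), dim=1).shape)         # (4, 7) class probabilities
```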


Orthopaedic Proceedings
Vol. 106-B, Issue SUPP_18 | Pages 57 - 57
14 Nov 2024
Birkholtz F, Eken M, Boyes A, Engelbrecht A

Introduction. With advances in artificial intelligence, the use of computer-aided detection and diagnosis in clinical imaging is gaining traction. Typically, very large datasets are required to train machine-learning models, potentially limiting the use of this technology when only small datasets are available. This study investigated whether pretraining fracture detection models on large, existing datasets could improve model performance when locating and classifying wrist fractures in a small X-ray image dataset, a concept termed "transfer learning".
Method. Firstly, three detection models, namely the faster region-based convolutional neural network (Faster R-CNN), You Only Look Once version eight (YOLOv8) and RetinaNet, were pretrained using the large, freely available Common Objects in Context (COCO) dataset (330,000 images). Secondly, these models were pretrained using the open-source wrist X-ray dataset "Graz Paediatric Wrist Digital X-rays" (GRAZPEDWRI-DX) on (1) a fracture detection dataset (20,327 images) and (2) a fracture location and classification dataset (14,390 images). An orthopaedic surgeon classified the small available dataset of 776 distal radius X-rays according to the AO/OTA system (Arbeitsgemeinschaft für Osteosynthesefragen Foundation/Orthopaedic Trauma Association), on which the models were tested.
Result. Detection models without pretraining on the large datasets were the least precise when tested on the small distal radius dataset. The model with the best accuracy for detecting and classifying wrist fractures was the YOLOv8 model pretrained on the GRAZPEDWRI-DX fracture detection dataset (mean average precision at an intersection over union threshold of 0.5 of 59.7%). This model showed up to 33.6% improved detection precision compared with the same models with no pretraining.
Conclusion. Optimisation of machine-learning models can be challenging when only relatively small datasets are available. The findings of this study support the potential of transfer learning from large datasets to improve model performance on smaller datasets. This is encouraging for wider application of machine-learning technology in medical imaging evaluation, including for less common orthopaedic pathologies.
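The transfer-learning pattern described, starting from COCO-pretrained weights and fine-tuning on a small fracture dataset, is sketched below using torchvision's Faster R-CNN (one of the three models tested); the number of target classes is a placeholder and the fine-tuning loop itself is omitted.

```python
# Hedged sketch of the transfer-learning idea: start from a Faster R-CNN
# detector pretrained on COCO (via torchvision) and replace its box head so
# it can be fine-tuned on a small wrist-fracture dataset. The class count
# below is an illustrative placeholder, not taken from the abstract.
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 1 + 3   # background + illustrative fracture groups

# COCO-pretrained backbone and detection head
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")

# swap the classification head to match the small target dataset
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)

# fine-tuning would then proceed with a standard detection training loop
# over the small distal radius dataset, keeping the pretrained weights as
# the starting point rather than training from scratch.
```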


Orthopaedic Proceedings
Vol. 105-B, Issue SUPP_16 | Pages 63 - 63
17 Nov 2023
Bicer M, Phillips AT, Melis A, McGregor A, Modenese L

OBJECTIVES. Application of deep learning approaches to marker trajectories and ground reaction forces (mocap data) is often hampered by small datasets. Enlarging dataset size is possible using some simple numerical approaches, although these may not be suited to preserving the physiological relevance of mocap data. We propose augmenting mocap data using a deep learning architecture called generative adversarial networks (GANs). We demonstrate that appropriate use of GANs can capture variations of walking patterns due to subject- and task-specific conditions (mass, leg length, age, gender and walking speed), which significantly affect walking kinematics and kinetics, resulting in augmented datasets amenable to deep learning analysis approaches.
METHODS. A publicly available gait dataset (https://www.nature.com/articles/s41597-019-0124-4; 733 trials, 21 women and 25 men, 37.2 ± 13.0 years, 1.74 ± 0.09 m, 72.0 ± 11.4 kg, walking speeds ranging from 0.18 m/s to 2.04 m/s) was used as the experimental dataset. The GAN comprised three neural networks: an encoder, a decoder and a discriminator. The encoder compressed experimental data into a fixed-length vector, while the decoder transformed the encoder's output vector and a condition vector (containing information about the subject and trial) into mocap data. The discriminator distinguished the encoded experimental data from randomly sampled vectors of the same size. By training these networks jointly on the experimental dataset, the generator (decoder) could generate synthetic data respecting specified conditions from randomly sampled vectors. Synthetic mocap data and lower limb joint angles were generated and compared with the experimental data by identifying the statistically significant differences across the gait cycle for a randomly selected subset of the experimental data from 5 female subjects (73 trials, aged 26–40 years, weighing 57–74 kg, with leg lengths of 868–931 mm and walking speeds ranging from 0.81–1.68 m/s). By conducting these comparisons for this subset, we aimed to assess the synthetic data generated using multiple conditions.
RESULTS. We visually inspected the synthetic trials to ensure that they appeared realistic. The statistical comparison revealed that, on average, only 2.5% of the gait cycle showed significant differences in the joint angles of the two data groups. Additionally, the synthetic ground reaction forces deviated from the experimental data distribution for an average of 2.9% of the gait cycle.
CONCLUSIONS. We introduced a novel approach for generating synthetic mocap data of human walking based on the conditions that influence walking patterns. The synthetic data closely followed the trends observed in the experimental data and in the literature, suggesting that our approach can augment mocap datasets considering multiple conditions, an approach unfeasible in previous work. Creation of large, augmented datasets allows the application of other deep learning approaches, with the potential to generate realistic mocap data from limited and non-lab-based data. Our method could also enhance data sharing, since synthetic data does not raise ethical concerns. Virtual gait data can be generated and downloaded using our GAN approach at https://thisgaitdoesnotexist.streamlit.app/.
Declaration of Interest. The authors declare that there is no conflict of interest that could be perceived as prejudicing the impartiality of the research reported.
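To make the conditional-generation idea concrete, the toy PyTorch sketch below maps a random latent vector plus a five-element condition vector (mass, leg length, age, gender, walking speed) to a block of mocap channels over a normalised gait cycle; the sizes, the flattened output format and the omission of the encoder and discriminator training are simplifying assumptions, not the authors' implementation.

```python
# Illustrative sketch only: a conditional generator that maps a random
# latent vector plus a condition vector (mass, leg length, age, gender,
# walking speed) to a block of mocap channels over one normalised gait
# cycle. All sizes are placeholders.
import torch
import torch.nn as nn

LATENT, N_COND = 64, 5                 # latent size; 5 condition variables
N_CHANNELS, N_FRAMES = 30, 101         # e.g. marker/GRF channels x % of gait cycle

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT + N_COND, 256), nn.ReLU(),
            nn.Linear(256, 512), nn.ReLU(),
            nn.Linear(512, N_CHANNELS * N_FRAMES),
        )

    def forward(self, z, cond):
        out = self.net(torch.cat([z, cond], dim=1))
        return out.view(-1, N_CHANNELS, N_FRAMES)

gen = Generator()
z = torch.randn(16, LATENT)            # random latent vectors
cond = torch.rand(16, N_COND)          # normalised subject/trial conditions
synthetic_trials = gen(z, cond)        # (16, channels, gait-cycle frames)
print(synthetic_trials.shape)
```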


Orthopaedic Proceedings
Vol. 103-B, Issue SUPP_16 | Pages 52 - 52
1 Dec 2021
Wang J, Hall T, Musbahi O, Jones G, van Arkel R

Objectives. Knee alignment affects both the development and the surgical treatment of knee osteoarthritis. Automating femorotibial angle (FTA) and hip-knee-ankle angle (HKA) measurement from radiographs could improve reliability and save time. Further, if the gold-standard HKA from full-limb radiographs could be accurately predicted from knee-only radiographs, then the need for more expensive equipment and radiation exposure could be reduced. The aim of this research was to assess whether deep learning methods can predict FTA and HKA from posteroanterior (PA) knee radiographs.
Methods. Convolutional neural networks with densely connected final layers were trained to analyse PA knee radiographs from the Osteoarthritis Initiative (OAI) database with corresponding angle measurements. The FTA dataset (6149 radiographs) and the HKA dataset (2351 radiographs) were each split into training, validation and test datasets in a 70:15:15 ratio. Separate models were trained for the prediction of FTA and HKA, using mean squared error as the loss function. Heat maps were used to identify the anatomical features within each image that contributed most to the predicted angles.
Results. FTA could be predicted with errors of less than 3° for 99.8% of images, and less than 1° for 89.5%. HKA prediction was less accurate than FTA but still high: 95.7% within 3° and 68.0% within 1°. Heat maps for both models were generally concentrated on the knee anatomy and could prove a valuable tool for assessing prediction reliability in clinical application.
Conclusions. Deep learning techniques could enable fast, reliable and accurate predictions of both FTA and HKA from plain knee radiographs. This could lead to cost savings for healthcare providers and reduced radiation exposure for patients.
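A hedged sketch of the model family described, a convolutional backbone with densely connected final layers regressing a single angle under a mean-squared-error loss, is given below; the input resolution, layer widths and dummy angle values are illustrative assumptions.

```python
# Minimal sketch: a convolutional backbone with densely connected final
# layers regressing a single angle (FTA or HKA) from a knee radiograph,
# trained with mean squared error. Sizes and dummy values are placeholders.
import torch
import torch.nn as nn

class AngleRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.dense = nn.Sequential(          # densely connected final layers
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, 1),               # predicted angle in degrees
        )

    def forward(self, x):
        return self.dense(self.conv(x).flatten(1))

model = AngleRegressor()
loss_fn = nn.MSELoss()                       # mean squared error loss
xray = torch.randn(4, 1, 224, 224)           # dummy grayscale radiographs
target = torch.tensor([[178.2], [181.0], [176.5], [179.3]])  # illustrative angles
loss = loss_fn(model(xray), target)
print(loss.item())
```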