Orthopaedic Proceedings
Vol. 105-B, Issue SUPP_2 | Pages 102 - 102
10 Feb 2023
White J Wadhawan A Min H Rabi Y Schmutz B Dowling J Tchernegovski A Bourgeat P Tetsworth K Fripp J Mitchell G Hacking C Williamson F Schuetz M
Full Access

Distal radius fractures (DRFs) are among the most common fractures and are often treated surgically. Standard X-rays are obtained for DRFs and, in most cases with an intra-articular component, a routine CT is also performed. However, it is estimated that CT is required in only 20% of cases; routine CTs therefore result in overutilisation of resources, burdening radiology and emergency departments. In this study, we explore the feasibility of using deep learning to differentiate intra- and extra-articular DRFs automatically and help streamline which fractures require a CT. Retrospectively, X-ray images were retrieved from 615 DRF patients treated with ORIF at the Royal Brisbane and Women's Hospital. The images were classified into AO Type A, B, or C fractures by three training registrars supervised by a consultant. Deep learning was applied in a two-stage process: 1) localise the region of interest around the wrist using the YOLOv5 object detection network, and 2) classify the fracture using an EfficientNet-B3 network to differentiate intra- and extra-articular fractures. The distal radius region of interest (ROI) detection stage, using an ensemble of YOLO networks, detected all ROIs on the test set with no false positives. The average intersection over union between the YOLO detections and the ROI ground truth was [value missing]. The DRF classification stage using the EfficientNet-B3 ensemble achieved an area under the receiver operating characteristic curve of 0.82 for differentiating intra-articular fractures. The proposed DRF classification framework using ensemble models of YOLO and EfficientNet achieved satisfactory performance in intra- and extra-articular fracture classification. This work demonstrates the potential of automatic fracture characterisation using deep learning and can serve to streamline decision making for axial imaging, helping to reduce unnecessary CT scans.
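The detection stage above is scored by intersection over union (IoU) between predicted and ground-truth boxes. As an illustration only (not the authors' code), a minimal IoU computation for axis-aligned boxes in `(x1, y1, x2, y2)` form might look like:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Overlap rectangle: max of the top-left corners, min of the bottom-right
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```

In a detection pipeline of this kind, the mean of `iou` over all test images is the statistic the abstract reports for the ROI stage.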


Orthopaedic Proceedings
Vol. 105-B, Issue SUPP_3 | Pages 70 - 70
23 Feb 2023
Gupta S Smith G Wakelin E Van Der Veen T Plaskos C Pierrepont J
Full Access

Evaluation of patient-specific spinopelvic mobility requires the detection of bony landmarks in lateral functional radiographs. Current manual landmarking methods are inefficient and subjective. This study proposes a deep learning model to automate landmark detection and the derivation of spinopelvic measurements (SPM). A deep learning model was developed using an international multicentre imaging database of 26,109 landmarked preoperative and postoperative lateral functional radiographs (HREC: Bellberry: 2020-08-764-A-2). Three functional positions were analysed: 1) standing, 2) contralateral step-up, and 3) flexed seated. Landmarks were manually captured and independently verified by qualified engineers during pre-operative planning, with the additional assistance of landmarks derived from 3D computed tomography. Pelvic tilt (PT), sacral slope (SS), and lumbar lordotic angle (LLA) were derived from the predicted landmark coordinates. Interobserver variability was explored in a pilot study in which nine qualified engineers annotated three functional images while blinded to the additional 3D information. The dataset was subdivided 70:20:10 for training, validation, and testing. The model produced mean absolute errors (MAE) for PT, SS, and LLA of 1.7°±3.1°, 3.4°±3.8°, and 4.9°±4.5°, respectively. PT MAE values were dependent on functional position: standing 1.2°±1.3°, step 1.7°±4.0°, and seated 2.4°±3.3°, p< 0.001. The mean model prediction time was 0.7 seconds per image. The interobserver 95% confidence interval (CI) for engineer-measured PT, SS, and LLA (1.9°, 1.9°, 3.1°, respectively) was comparable to the MAE values generated by the model. The model therefore reported performance comparable to the gold standard when blinded to additional 3D information. LLA prediction produced the lowest SPM accuracy, potentially due to error propagation from the SS and L1 landmarks. Reduced PT accuracy in the step and seated functional positions may be attributed to increased occlusion of the pubic symphysis landmark. Our model shows excellent performance when compared against the current gold-standard manual annotation process.
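Measurements such as PT are angles derived from pairs of predicted landmark coordinates. As a hypothetical sketch (the exact landmark definitions used by the authors are not given here), the angle of a landmark-to-landmark line from the vertical image axis can be computed as:

```python
import math

def tilt_from_vertical(p_lower, p_upper):
    """Angle in degrees between the vertical axis and the line from
    p_lower to p_upper, with landmarks given as (x, y) tuples."""
    dx = p_upper[0] - p_lower[0]
    dy = p_upper[1] - p_lower[1]
    # atan2(dx, dy) measures deviation from vertical rather than horizontal
    return math.degrees(math.atan2(dx, dy))
```

Applying a function like this to model-predicted versus engineer-annotated landmarks, and averaging the absolute differences, yields MAE values of the kind reported above.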


Orthopaedic Proceedings
Vol. 102-B, Issue SUPP_1 | Pages 4 - 4
1 Feb 2020
Oni J Yi P Wei J Kim T Sair H Fritz J Hager G
Full Access

Introduction. Automated identification of arthroplasty implants could aid in pre-operative planning and is a task that could be facilitated through artificial intelligence (AI) and deep learning. The purpose of this study was to develop and test the performance of a deep learning system (DLS) for automated identification and classification of knee arthroplasty (KA) on radiographs. Methods. We collected 237 AP knee radiographs with equal proportions of native knees, total KA (TKA), and unicompartmental KA (UKA), as well as 274 radiographs with equal proportions of Smith & Nephew Journey and Zimmer NexGen TKAs. Data augmentation was used to increase the number of images available for DLS development. These images were used to train, validate, and test deep convolutional neural networks (DCNNs) to 1) detect the presence of TKA; 2) differentiate between TKA and UKA; and 3) differentiate between the two TKA models. Receiver operating characteristic (ROC) curves were generated, with the area under the curve (AUC) calculated to assess test performance. Results. The DCNNs trained to detect KA and to distinguish between TKA and UKA both achieved an AUC of 1.0. In both cases, heatmap analysis demonstrated appropriate emphasis of the KA components in decision-making. The DCNN trained to distinguish between the two TKA models also achieved an AUC of 1.0. Heatmap analysis of this DCNN showed emphasis of specific unique features of the TKA model designs in decision-making, such as the anterior flange shape of the Zimmer NexGen TKA (Figure 1) and the tibial baseplate/stem shape of the Smith & Nephew Journey TKA (Figure 2). Conclusion. DCNNs can accurately identify the presence of TKA and distinguish between specific designs. The proof-of-concept of these DCNNs may set the foundation for DCNNs to identify other prosthesis models and prosthesis-related complications. For any figures or tables, please contact the authors directly.
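The AUC reported here has a simple probabilistic reading: it is the probability that a randomly chosen positive case receives a higher classifier score than a randomly chosen negative case. A minimal rank-based computation (illustrative only, not the study's evaluation code) makes this concrete:

```python
def auc(labels, scores):
    """AUC as the probability a positive outscores a negative,
    counting ties as half a win. labels are 0/1."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 1.0, as achieved by the DCNNs above, means every positive case was scored above every negative case.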


Orthopaedic Proceedings
Vol. 102-B, Issue SUPP_2 | Pages 5 - 5
1 Feb 2020
Burton W Myers C Rullkoetter P
Full Access

Introduction. Gait laboratory measurement of whole-body kinematics and ground reaction forces during a wide range of activities is frequently performed in joint replacement patient diagnosis, monitoring, and rehabilitation programs. These data are commonly processed in musculoskeletal modeling platforms such as OpenSim and AnyBody to estimate muscle and joint reaction forces during activity. However, the processing required to obtain musculoskeletal estimates is time consuming and requires significant expertise, which seriously limits the patient populations studied. Accordingly, the purpose of this study was to evaluate the potential of deep learning methods for estimating muscle and joint reaction forces over time, given kinematic data, height, weight, and ground reaction forces, for total knee replacement (TKR) patients performing activities of daily living (ADLs). Methods. 70 TKR patients were fitted with 32 reflective markers used to define anatomical landmarks for 3D motion capture. Patients were instructed to perform a range of tasks including gait, step-down, and sit-to-stand. Gait was performed at a self-selected pace, the step-down from an 8" step height, and the sit-to-stand from a 17" chair. Tasks were performed over a force platform, with force data collected at 2000 Hz and a 14-camera motion capture system recording at 100 Hz. The resulting data were processed in OpenSim to estimate joint reaction and muscle forces in the hip and knee using static optimization. The full set of data consisted of 135 instances from 70 patients: 63 sit-to-stands, 15 right-sided step-downs, 14 left-sided step-downs, and 43 gait sequences. Two classes of neural networks (NNs), a recurrent neural network (RNN) and a temporal convolutional network (TCN), were trained to predict activity classification from joint angles, ground reaction forces, and anthropometrics. The NNs were also trained to predict muscle and joint reaction forces over time from the same input metrics. The 135 instances were split into 100 for training, 15 for validation, and 20 for testing. Results. The RNN and TCN yielded classification accuracies of 90% and 100%, respectively, on the test set. Correlation coefficients between ground truth and predictions on the test set ranged from 0.81 to 0.95 for the RNN, depending on the activity. Predictions from both NNs were qualitatively assessed. Both NNs were able to effectively learn relationships between the input and output variables. Discussion. The objective of this study was to develop and evaluate deep learning methods for predicting patient mechanics from standard gait lab data. The resulting models classified activities with excellent performance and showed promise for predicting exact values of loading metrics for a range of activities. These results indicate potential for real-time prediction of musculoskeletal metrics, with application in patient diagnostics and rehabilitation. For any figures or tables, please contact the authors directly.
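The core operation of a temporal convolutional network is a 1D convolution slid along the time axis of a signal such as a joint angle or force trace. As a toy sketch only (the study's actual networks are multi-layer and multi-channel), the basic operation is:

```python
def temporal_conv(seq, kernel):
    """Valid (no-padding) 1D convolution of a time series with a
    learned kernel; each output is a weighted sum of a time window."""
    k = len(kernel)
    return [
        sum(seq[i + j] * kernel[j] for j in range(k))
        for i in range(len(seq) - k + 1)
    ]
```

Stacking many such filters, with nonlinearities between layers, lets a TCN map windows of kinematic and force input to activity classes or loading predictions over time.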


Orthopaedic Proceedings
Vol. 105-B, Issue SUPP_3 | Pages 114 - 114
23 Feb 2023
Chai Y Boudali A Farey J Walter W
Full Access

Human error in radiographic annotation is usually evaluated using descriptive statistics. Technological advances have popularized "non-human" landmarking techniques, such as deep learning, in which error is reported as a confidence value that is not comparable to that of the human method. The region-based definition of a landmark makes an arbitrary "ground truth" point impossible, and differences in patients' anatomies, radiograph quality, and scale make horizontal comparison difficult. There is therefore a demand to quantify manual landmarking error in a probability format. Taking the measurement of pelvic tilt (PT) as an example, this study recruited 115 sagittal pelvic radiographs for two measurements of PT. We propose a method to unify the scale of images that allows horizontal comparison of landmarks, and we calculated the maximum possible error using a density vector. Traditional descriptive statistics were also applied. All measurements showed excellent reliability (intraclass correlation coefficients > 0.9). Eighty-four measurements (6.09%) were qualified as wrong landmarks that failed to label the correct locations. Directional bias (systematic error) was identified, attributable to cognitive differences between observers. After removing wrong labels and rotated pelves, the analysis quantified the error density as a "good doctor" performance and found a maximum PT disagreement of 6.77° to 11.76° within 95% of data points. Landmarks with excellent reliability still carry a chance (at least 6.09% in our case) of wrong landmark decisions. Identifying skeletal contours is at least 24.64% more accurate than estimating landmark locations, yet a landmark on a clear skeletal contour is more likely to generate systematic error. Due to landmark ambiguity, a very careful surgeon measuring PT could produce a maximum random difference of 11.76° in 95% of cases, serving as a "good doctor benchmark" for qualifying landmarking techniques.
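One plausible reading of "maximum disagreement within 95% of data points" is the 95th percentile of the absolute inter-observer differences; the sketch below is a hypothetical reconstruction under that assumption (using the nearest-rank percentile), not the authors' density-vector method:

```python
import math

def disagreement_95(differences):
    """95th-percentile absolute inter-observer difference,
    nearest-rank method; input is a list of signed differences."""
    s = sorted(abs(d) for d in differences)
    idx = math.ceil(0.95 * len(s)) - 1  # nearest-rank index (0-based)
    return s[idx]
```

A statistic of this kind, computed on careful observers' repeated PT measurements, yields the sort of "good doctor benchmark" the abstract describes.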


Orthopaedic Proceedings
Vol. 102-B, Issue SUPP_2 | Pages 6 - 6
1 Feb 2020
Burton W Myers C Rullkoetter P
Full Access

Introduction. Real-time tracking of surgical tools has applications in the assessment of surgical skill and OR workflow. Accordingly, efforts have been devoted to the development of low-cost systems that track the location of surgical tools in real time without significant augmentation to the tools themselves. Deep learning methodologies have recently shown success in a multitude of computer vision tasks, including object detection, and thus show potential for application to surgical tool tracking. The objective of the current study was to develop and evaluate a deep learning-based computer vision system using a single camera for the detection and pose estimation of multiple surgical tools routinely used in both knee and hip arthroplasty. Methods. A computer vision system was developed for the detection and 6-DoF pose estimation of two surgical tools (mallet and broach handle) using only RGB camera frames. The deep learning approach consisted of a single convolutional neural network (CNN) for object detection and semantic keypoint prediction, as well as an optimization step to place prior known geometries into the local camera coordinate system. Inference on a 256×352 camera frame took 0.3 seconds. The object detection component of the system was evaluated on a manually annotated stream of video frames. The accuracy of the system was evaluated by comparing the pose (position and orientation) estimate of a tool with the ground-truth pose as determined using three retroreflective markers placed on each tool and a 14-camera motion capture system (Vicon, Centennial, CO). Markers placed on the tool were transformed into the local camera coordinate system and compared to the estimated locations. Results. Detection accuracy determined from frame-wise confusion matrices was 82% and 95% for the mallet and broach handle, respectively. Object detection and keypoint predictions were qualitatively assessed. Marker error resulting from pose estimation was as little as 1.3 cm for the evaluation scenes. Pose estimation of the tools from each evaluation scene was also qualitatively assessed. Discussion. The proposed computer vision system combined CNNs with optimization to estimate the 6-DoF pose of surgical tools from only RGB camera frames. The system's object detection component performed on par with state-of-the-art object detection literature, and the pose estimation error was efficiently computed from CNN predictions. The current system has implications for surgical skill assessment and operations-based research to improve operating room efficiency. However, future development is needed to improve the object detection and keypoint prediction components of the system, in order to minimize potential pose error. Nominal marker errors of 1.3 cm demonstrate the potential of this system to yield accurate pose estimates of surgical tools. For any figures or tables, please contact the authors directly.
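The optimization step described above fits a known tool geometry to the CNN's predicted keypoints. As a simplified 2D analogue only (the actual system solves the full 6-DoF problem from 2D-3D correspondences), a least-squares rigid alignment of two matched point sets can be written in closed form:

```python
import math

def rigid_align_2d(model_pts, obs_pts):
    """Closed-form 2D rigid alignment: rotation angle (radians) and
    centroid offset that best map model_pts onto obs_pts."""
    n = len(model_pts)
    mc = [sum(p[i] for p in model_pts) / n for i in (0, 1)]  # model centroid
    oc = [sum(p[i] for p in obs_pts) / n for i in (0, 1)]    # observed centroid
    num = den = 0.0
    for m, o in zip(model_pts, obs_pts):
        mx, my = m[0] - mc[0], m[1] - mc[1]
        ox, oy = o[0] - oc[0], o[1] - oc[1]
        num += mx * oy - my * ox  # cross terms drive rotation
        den += mx * ox + my * oy  # dot terms drive agreement
    theta = math.atan2(num, den)
    # Centroid offset; a full transform would also rotate mc by theta.
    return theta, (oc[0] - mc[0], oc[1] - mc[1])
```

The 6-DoF case follows the same idea, with the CNN's keypoints playing the role of `obs_pts` and the known tool geometry the role of `model_pts`.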


Orthopaedic Proceedings
Vol. 103-B, Issue SUPP_15 | Pages 85 - 85
1 Dec 2021
Goswami K Shope A Wright J Purtill J Lamendella R Parvizi J
Full Access

Aim. While metagenomic (microbial DNA) sequencing technologies can detect the presence of microbes in a clinical sample, it is unknown whether this signal represents dead or live organisms. Metatranscriptomics (sequencing of RNA) offers the potential to detect transcriptionally "active" organisms within a microbial community and to map expressed genes to functional pathways of interest (e.g. antibiotic resistance). We used this approach to evaluate the utility of metatranscriptomics to diagnose PJI and predict antibiotic resistance. Method. In this prospective study, samples were collected from 20 patients undergoing revision TJA (10 aseptic and 10 infected) and 10 patients undergoing primary TJA. Synovial fluid and peripheral blood samples were obtained at the time of surgery, as well as negative field controls (skin swabs, air swabs, sterile water). All samples were shipped to the laboratory for metatranscriptomic analysis. Following microbial RNA extraction and host analyte subtraction, metatranscriptomic sequencing was performed. Bioinformatic analyses were implemented prior to mapping against curated microbial sequence databases to generate taxonomic expression profiles. Principal Coordinates Analysis (PCoA) and Partial Least Squares-Discriminant Analysis were used to ordinate metatranscriptomic profiles, using the 2018 definition of PJI as the gold standard. Results. After RNA metatranscriptomic analysis, blinded PCoA modeling revealed accurate and distinct clustering of samples into three separate cohorts (infected, aseptic, and primary joints) based on their active transcriptomic profiles, in both synovial fluid and blood (synovial ANOSIM p=0.001; blood ANOSIM p=0.034). Differential metatranscriptomic signatures for infected versus noninfected cohorts enabled us to train machine learning algorithms to 84.9% predictive accuracy for infection. Multiple antibiotic resistance genes were expressed, with high concordance to conventional antibiotic sensitivity data. Conclusions. Our findings highlight the potential of metatranscriptomics for infection diagnosis. To our knowledge, this is the first report of RNA sequencing in the orthopaedic literature. Further work in larger patient cohorts will better inform deep learning approaches to improve the accuracy, predictive power, and clinical utility of this technology.


Orthopaedic Proceedings
Vol. 102-B, Issue SUPP_1 | Pages 129 - 129
1 Feb 2020
Maag C Langhorn J Rullkoetter P
Full Access

INTRODUCTION. While computational models have been used for many years in pre-clinical, design-phase iterations of total knee replacement implants, the analysis time required has limited their real-time use in other applications, such as patient-specific surgical alignment in the operating room. In this environment, the impact of variation in ligament balance and implant alignment on estimated joint mechanics must be available instantaneously. As neural networks (NNs) have shown the ability to represent dynamic systems, the objective of this preliminary study was to evaluate deep learning for representing the joint-level kinetic and kinematic results of a validated finite element lower limb model with varied surgical alignment. METHODS. External hip and ankle boundary conditions were created for a previously developed finite element lower limb model [1] for step-down (SD), deep knee bend (DKB), and gait, to best reproduce in-vivo loading conditions as measured in patients with the Innex knee (orthoload.com) (Figure 1). These boundary conditions were subsequently used as inputs for the model with a current fixed-bearing total knee replacement to estimate implant-specific kinetics and kinematics during activities of daily living. Implant alignments were varied, including the hip-knee-ankle angle ±3°, the frontal plane joint line −7° to +5°, internal-external femoral rotation ±3°, and the tibial posterior slope 5° and 0°. Varying these parameters produced a total of 2464 simulations. An NN was created using the NN toolbox in MATLAB. Sequence data inputs were produced from the alignment and the external boundary conditions for each activity cycle. Sequence outputs were the 6-degree-of-freedom kinetics and kinematics, totaling 12 outputs. All data were normalized across the entire data set. Ten percent of the simulation runs were removed at random from the training set for validation, leaving 2220 simulations for training and 244 for validation. A nine-layer bidirectional long short-term memory (bi-LSTM) NN was created to take advantage of the bi-LSTM layers' ability to learn from past and future data. The network was trained using an RMSprop solver until the root mean square error (RMSE) stopped decreasing. NN quality was evaluated by the RMSE on the validation set. RESULTS. The trained NN effectively estimated the validation data. The average RMSE over the kinetics of the validation set was 140.7 N/N·m, while the average RMSE over the kinematics was 4.47 mm/deg (Figures 2 and 3; DKB and gait shown). Note that the error may be skewed by the larger-magnitude kinetics and kinematics of the DKB activity, as the average RMSE for just SD and gait was 85.9 N/N·m and 2.8 mm/deg for the kinetics and kinematics, respectively. DISCUSSION. The accuracy of the generated NN indicates its potential for use in real-time modeling, and further work will explore additional changes in post-operative soft-tissue balance as well as scaling to patient-specific geometry.
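The surrogate network above is scored by RMSE between the finite element model's outputs and the NN's predictions for each output channel. As an illustration only (not the authors' MATLAB code), the metric itself is:

```python
import math

def rmse(y_true, y_pred):
    """Root mean square error between two equal-length sequences,
    e.g. an FE-computed and an NN-predicted kinetic trace."""
    n = len(y_true)
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)
```

Averaging this quantity over the six kinetic and six kinematic output channels of the validation set yields summary errors of the kind reported in the Results.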


Bone & Joint Open
Vol. 4, Issue 9 | Pages 696 - 703
11 Sep 2023
Ormond MJ Clement ND Harder BG Farrow L Glester A

Aims

The principles of evidence-based medicine (EBM) are the foundation of modern medical practice. Surgeons are familiar with the commonly used statistical techniques to test hypotheses, summarize findings, and provide answers within a specified range of probability. Based on this knowledge, they are able to critically evaluate research before deciding whether or not to adopt the findings into practice. Recently, there has been increased use of artificial intelligence (AI) to analyze information and derive findings in orthopaedic research. These techniques use a set of statistical tools that are increasingly complex and may be unfamiliar to the orthopaedic surgeon. It is unclear whether this shift towards less familiar techniques is widely accepted in the orthopaedic community. This study aimed to explore the understanding and acceptance of AI use in research among orthopaedic surgeons.

Methods

Semi-structured, in-depth interviews were carried out with a sample of 12 orthopaedic surgeons. Inductive thematic analysis was used to identify key themes.