Orthopaedic Proceedings
Vol. 102-B, Issue SUPP_2 | Pages 5 - 5
1 Feb 2020
Burton W, Myers C, Rullkoetter P

Introduction

Gait laboratory measurement of whole-body kinematics and ground reaction forces during a wide range of activities is frequently performed in joint replacement patient diagnosis, monitoring, and rehabilitation programs. These data are commonly processed in musculoskeletal modeling platforms such as OpenSim and AnyBody to estimate muscle and joint reaction forces during activity. However, the processing required to obtain these musculoskeletal estimates is time-consuming and requires significant expertise, which severely limits the patient populations that can be studied. Accordingly, the purpose of this study was to evaluate the potential of deep learning methods for estimating muscle and joint reaction forces over time, given kinematic data, height, weight, and ground reaction forces, for total knee replacement (TKR) patients performing activities of daily living (ADLs).

Methods

Seventy TKR patients were fitted with 32 reflective markers defining anatomical landmarks for 3D motion capture. Patients were instructed to perform a range of tasks including gait, step-down, and sit-to-stand. Gait was performed at a self-selected pace, the step-down from an 8” step, and the sit-to-stand from a 17” chair. Tasks were performed over a force platform; force data were collected at 2000 Hz while a 14-camera motion capture system recorded marker trajectories at 100 Hz. The resulting data were processed in OpenSim to estimate joint reaction and muscle forces in the hip and knee using static optimization. The full dataset consisted of 135 instances from the 70 patients: 63 sit-to-stands, 15 right-sided step-downs, 14 left-sided step-downs, and 43 gait sequences. Two classes of neural networks (NNs), a recurrent neural network (RNN) and a temporal convolutional network (TCN), were trained to classify the activity from joint angles, ground reaction forces, and anthropometrics. The NNs were also trained to predict muscle and joint reaction forces over time from the same inputs, as sketched below. The 135 instances were split into 100 for training, 15 for validation, and 20 for testing.
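
A minimal sketch of the kind of sequence-to-sequence TCN described above, assuming PyTorch. The channel counts, kernel size, and input feature layout (per-frame joint angles and ground reaction forces, with height and weight tiled over time) are illustrative assumptions, not the authors' reported architecture.

```python
# Hedged sketch: a small temporal convolutional network (TCN) for
# per-frame force regression, assuming PyTorch. Channel counts, kernel
# size, and feature dimensions are placeholder assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TemporalBlock(nn.Module):
    """Dilated causal convolutions with a residual connection."""
    def __init__(self, in_ch, out_ch, kernel_size, dilation):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation  # left-pad to keep causality
        self.conv1 = nn.Conv1d(in_ch, out_ch, kernel_size, dilation=dilation)
        self.conv2 = nn.Conv1d(out_ch, out_ch, kernel_size, dilation=dilation)
        self.skip = nn.Conv1d(in_ch, out_ch, 1) if in_ch != out_ch else nn.Identity()

    def forward(self, x):
        out = F.relu(self.conv1(F.pad(x, (self.pad, 0))))
        out = F.relu(self.conv2(F.pad(out, (self.pad, 0))))
        return F.relu(out + self.skip(x))

class ForceTCN(nn.Module):
    """Maps per-frame inputs (joint angles + GRFs + anthropometrics tiled
    over time) to per-frame muscle and joint reaction force estimates."""
    def __init__(self, n_in, n_out, channels=(64, 64, 64), kernel_size=3):
        super().__init__()
        blocks, ch = [], n_in
        for i, out_ch in enumerate(channels):
            blocks.append(TemporalBlock(ch, out_ch, kernel_size, dilation=2 ** i))
            ch = out_ch
        self.tcn = nn.Sequential(*blocks)
        self.head = nn.Conv1d(ch, n_out, 1)  # per-frame linear readout

    def forward(self, x):  # x: (batch, features, frames)
        return self.head(self.tcn(x))

# Placeholder dimensions: 20 input features, 12 output force channels,
# a batch of 8 trials of 200 frames each.
model = ForceTCN(n_in=20, n_out=12)
forces = model(torch.randn(8, 20, 200))  # -> (8, 12, 200)
```

An RNN variant of the same setup would replace the stack of dilated convolutions with a recurrent layer over the frame axis while keeping the same per-frame regression head.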


Orthopaedic Proceedings
Vol. 102-B, Issue SUPP_2 | Pages 6 - 6
1 Feb 2020
Burton W, Myers C, Rullkoetter P

Introduction

Real-time tracking of surgical tools has applications in the assessment of surgical skill and operating room (OR) workflow. Accordingly, efforts have been devoted to the development of low-cost systems that track the location of surgical tools in real time without significant augmentation of the tools themselves. Deep learning methodologies have recently shown success in a multitude of computer vision tasks, including object detection, and thus show potential for application to surgical tool tracking. The objective of the current study was to develop and evaluate a deep learning-based computer vision system, using a single camera, for the detection and pose estimation of multiple surgical tools routinely used in knee and hip arthroplasty.

Methods

A computer vision system was developed for the detection and 6-DoF pose estimation of two surgical tools (mallet and broach handle) using only RGB camera frames. The deep learning approach consisted of a single convolutional neural network (CNN) for object detection and semantic keypoint prediction, followed by an optimization step that places the known tool geometries into the local camera coordinate system. Inference on a 256-by-352 camera frame took 0.3 seconds. The object detection component of the system was evaluated on a manually annotated stream of video frames. The accuracy of the system was evaluated by comparing the estimated pose (position and orientation) of each tool with the ground-truth pose determined from three retroreflective markers placed on each tool and a 14-camera motion capture system (Vicon, Centennial, CO). Marker positions on the tool were transformed into the local camera coordinate system and compared with the estimated locations.
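
A minimal sketch of the geometry-fitting step described above, assuming OpenCV's Perspective-n-Point (PnP) solver: given 2D keypoints predicted by the CNN and the tool's known 3D keypoint geometry, a PnP solve recovers the rigid transform placing the tool in the camera frame. All coordinates and intrinsics below are placeholder values, not the study's calibration or tool geometry.

```python
# Hedged sketch of model-to-camera pose recovery via OpenCV's PnP solver.
# Keypoint coordinates and camera intrinsics are placeholder assumptions.
import numpy as np
import cv2

# Known 3D keypoint geometry of a tool in its own frame (meters, placeholder).
object_points = np.array([
    [0.00, 0.00, 0.00],
    [0.10, 0.00, 0.00],
    [0.00, 0.05, 0.00],
    [0.00, 0.00, 0.30],
    [0.05, 0.05, 0.15],
    [-0.05, 0.02, 0.10],
], dtype=np.float64)

# Pinhole intrinsics for a 256x352 frame (placeholder focal length/center).
K = np.array([[400.0,   0.0, 176.0],
              [  0.0, 400.0, 128.0],
              [  0.0,   0.0,   1.0]])
dist = np.zeros(5)  # assume an undistorted frame

# In the study, the 2D keypoints come from the CNN; here we synthesize them
# by projecting the model with a known pose so the example is checkable.
rvec_true = np.array([0.1, -0.2, 0.05])
tvec_true = np.array([0.02, -0.01, 0.60])
image_points, _ = cv2.projectPoints(object_points, rvec_true, tvec_true, K, dist)

# Recover the rigid transform placing the tool model in camera coordinates.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
R, _ = cv2.Rodrigues(rvec)  # rotation vector -> 3x3 rotation matrix
print("recovered tool position in camera frame (m):", tvec.ravel())
```

The recovered rotation and translation can then be compared directly against marker-based ground truth expressed in the same camera coordinate system.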