
General Orthopaedics

AUTOMATED LANDMARK DETECTION IN FUNCTIONAL LATERAL RADIOGRAPHS USING DEEP LEARNING

The New Zealand Orthopaedic Association and the Australian Orthopaedic Association (NZOA AOA) Combined Annual Scientific Meeting, Christchurch, New Zealand, 31 October – 3 November 2022. Part 2 of 2.



Abstract

Evaluation of patient-specific spinopelvic mobility requires the detection of bony landmarks in lateral functional radiographs. Current manual landmarking methods are inefficient and subjective. This study proposes a deep learning model to automate landmark detection and the derivation of spinopelvic measurements (SPM).

A deep learning model was developed using an international multicentre imaging database of 26,109 landmarked preoperative and postoperative lateral functional radiographs (HREC: Bellberry: 2020-08-764-A-2). Three functional positions were analysed: 1) standing, 2) contralateral step-up, and 3) flexed seated. Landmarks were manually captured and independently verified by qualified engineers during preoperative planning, with the additional assistance of landmarks derived from 3D computed tomography. Pelvic tilt (PT), sacral slope (SS), and lumbar lordotic angle (LLA) were derived from the predicted landmark coordinates. Interobserver variability was explored in a pilot study in which nine qualified engineers annotated three functional images while blinded to the additional 3D information. The dataset was subdivided 70:20:10 into training, validation, and test sets.
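As a rough illustration of how such measurements can be derived from predicted 2D landmark coordinates, the sketch below computes PT, SS, and LLA using common radiographic definitions. The landmark names (hip centre, anterior/posterior corners of the S1 and L1 superior endplates) and the angle conventions are assumptions for illustration; the abstract does not specify the study's actual landmark set or definitions.

```python
import numpy as np

def _vec(p1, p2):
    """Direction vector from point p1 to point p2."""
    return np.asarray(p2, dtype=float) - np.asarray(p1, dtype=float)

def _slope_deg(p1, p2):
    """Inclination (0-90 degrees) of the line p1 -> p2 to the horizontal."""
    dx, dy = _vec(p1, p2)
    return float(np.degrees(np.arctan2(abs(dy), abs(dx))))

def _tilt_deg(p1, p2):
    """Inclination (0-90 degrees) of the line p1 -> p2 to the vertical."""
    dx, dy = _vec(p1, p2)
    return float(np.degrees(np.arctan2(abs(dx), abs(dy))))

def spinopelvic_measurements(lm):
    """Derive PT, SS, and LLA (degrees) from a dict of 2D landmarks.

    Landmark keys are illustrative placeholders: 'hip_centre' (midpoint
    of the femoral heads), 's1_ant'/'s1_post' and 'l1_ant'/'l1_post'
    (anterior and posterior corners of the S1 and L1 superior endplates).
    """
    # Sacral slope: angle of the S1 superior endplate to the horizontal.
    ss = _slope_deg(lm["s1_post"], lm["s1_ant"])

    # Pelvic tilt: angle to the vertical of the line joining the hip
    # centre to the midpoint of the S1 superior endplate.
    s1_mid = (np.asarray(lm["s1_ant"], float) + np.asarray(lm["s1_post"], float)) / 2.0
    pt = _tilt_deg(lm["hip_centre"], s1_mid)

    # Lumbar lordotic angle: angle between the L1 and S1 superior
    # endplate lines (a Cobb-style measurement).
    v1, v2 = _vec(lm["l1_post"], lm["l1_ant"]), _vec(lm["s1_post"], lm["s1_ant"])
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    lla = float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

    return pt, ss, lla

# Example with arbitrary image coordinates (x right, y down):
lm = {"hip_centre": (210, 480), "s1_ant": (260, 350), "s1_post": (300, 330),
      "l1_ant": (250, 180), "l1_post": (300, 190)}
print(spinopelvic_measurements(lm))
```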

The model produced mean absolute errors (MAE) for PT, SS, and LLA of 1.7°±3.1°, 3.4°±3.8°, and 4.9°±4.5°, respectively. PT MAE values depended on functional position: standing 1.2°±1.3°, step-up 1.7°±4.0°, and seated 2.4°±3.3° (p < 0.001). The mean model prediction time was 0.7 seconds per image. The interobserver 95% confidence intervals (CI) for engineer-measured PT, SS, and LLA (1.9°, 1.9°, and 3.1°, respectively) were comparable to the MAE values generated by the model.
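A minimal sketch of the reported evaluation metric follows, assuming paired arrays of model-predicted and engineer-annotated angles grouped by functional position. The grouping mirrors the PT breakdown above; the input values are placeholders, and the study's exact aggregation across images is not given in the abstract.

```python
import numpy as np

def mae_deg(predicted, reference):
    """Mean absolute error and its SD (degrees) between paired angle series."""
    err = np.abs(np.asarray(predicted, float) - np.asarray(reference, float))
    return float(err.mean()), float(err.std())

# Placeholder values for illustration only; angles[pos] pairs model
# predictions with engineer-annotated references for one functional position.
angles = {
    "standing": ([18.2, 15.1, 22.7], [17.0, 16.0, 21.9]),
    "step-up": ([12.4, 19.8, 25.0], [14.1, 18.2, 23.5]),
    "flexed seated": ([30.3, 27.6, 24.8], [28.0, 25.1, 27.2]),
}
for pos, (pred, ref) in angles.items():
    m, s = mae_deg(pred, ref)
    print(f"{pos}: MAE {m:.1f}° ± {s:.1f}°")
```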

The model's MAE demonstrated performance comparable to the gold standard when blinded to the additional 3D information. LLA prediction produced the lowest SPM accuracy, potentially due to error propagation from the SS and L1 landmarks. Reduced PT accuracy in the step-up and seated functional positions may be attributed to increased occlusion of the pubic symphysis landmark. Our model shows excellent performance when compared against the current gold-standard manual annotation process.

