Active Shape Models (ASM) have been widely used in the literature for the extraction of the tibial and femoral bones from MRI. These methods use Statistical Shape Models (SSM) to drive the deformation and make the segmentation more robust. One crucial step in building such an SSM is shape correspondence (SC). Several methods have been described in the literature. The goal of this paper is to compare two SC methods, MDL and IMCP-GMM; 28 MRI of the knee have been used. The validation has been performed using the leave-one-out cross-validation technique. An ASM-MDL and an ASM-IMCP-GMM have been built with the SSMs computed with the MDL and IMCP-GMM methods respectively. The computation time for building both SSMs has also been measured. For 90% of the data, the error is below 1.78 mm and 1.85 mm for the ASM-IMCP-GMM and the ASM-MDL methods respectively. The computation time for building the SSMs is five hours and two days for the IMCP-GMM and the MDL methods respectively. Both methods seem to give at least similar results for femur segmentation in MRI, but (1) IMCP-GMM can be used for all types of shape, which is not the case for the MDL method, which only works for closed shapes, and (2) IMCP-GMM is much faster than MDL.
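As an illustration of the validation protocol, a leave-one-out loop rebuilds the SSM from all training shapes but one, segments the left-out MRI with the corresponding ASM, and records the surface error. The functions build_ssm, segment_with_asm and surface_distance below are hypothetical placeholders, since the paper does not specify the implementation; this is a minimal sketch of the protocol, not the authors' code.

```python
import numpy as np

def leave_one_out_errors(shapes, images, build_ssm, segment_with_asm, surface_distance):
    """Leave-one-out cross-validation of an ASM-based segmentation.

    shapes/images: paired training shapes and MRI volumes (length n).
    The three callables are placeholders for the SSM construction,
    the ASM segmentation and the surface-error metric.
    """
    errors = []
    for i in range(len(shapes)):
        train = [s for j, s in enumerate(shapes) if j != i]   # hold out case i
        ssm = build_ssm(train)
        result = segment_with_asm(images[i], ssm)
        errors.append(surface_distance(result, shapes[i]))
    # e.g. the 90th-percentile error reported in the abstract, plus the mean
    return np.percentile(errors, 90), np.mean(errors)
```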
Functional approaches for the localisation of the hip centre (HC) are widely used in Computer Assisted Orthopedic Surgery (CAOS). These methods aim to compute the HC defined as the centre of rotation (CoR) of the femur with respect to the pelvis. The Least-Moving-Point (LMP) method is one such approach, which consists in detecting the point that moves the least during the circumduction motion. The goal of this paper is to highlight the limits of the native LMP (nLMP) and to propose a modified version (mLMP). A software application has been developed allowing the simulation of a circumduction motion of a hip in order to generate the data required for the computation of the HC. Two tests have been defined in order to assess and compare both LMP methods with respect to (1) the camera noise (CN) and (2) the acetabular noise (AN). The mLMP and nLMP errors are respectively: (1) 0.5±0.2 mm and 9.3±1.4 mm for a low CN, and 21.7±3.6 mm and 184.7±13.1 mm for a high CN; and (2) 2.2±1.2 mm and 0.5±0.3 mm for a low AN, and 35.2±18.5 mm and 13.0±8.2 mm for a high AN. In conclusion, mLMP is more robust and accurate than the nLMP algorithm.
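For context, the least-moving-point idea can be sketched as a search over candidate points attached to the femur frame, scoring each candidate by how much its position in the pelvis frame spreads over the circumduction motion. The candidate grid, the spread metric and the pose format below are illustrative assumptions, not the exact nLMP or mLMP formulations evaluated in the paper.

```python
import numpy as np

def least_moving_point(rotations, translations, candidates):
    """Sketch of the least-moving-point idea.

    rotations:    list of 3x3 arrays, femur-to-pelvis rotation per frame
    translations: list of 3-vectors,  femur-to-pelvis translation per frame
    candidates:   (N, 3) array of candidate points expressed in the femur frame
    Returns the candidate whose pelvis-frame trajectory has the smallest spread.
    """
    best_point, best_spread = None, np.inf
    for p in candidates:
        # Trajectory of the candidate point in the pelvis frame
        traj = np.array([R @ p + t for R, t in zip(rotations, translations)])
        # Spread = mean distance to the trajectory centroid
        spread = np.linalg.norm(traj - traj.mean(axis=0), axis=1).mean()
        if spread < best_spread:
            best_point, best_spread = p, spread
    return best_point, best_spread

# Hypothetical usage: hc, residual = least_moving_point(Rs, ts, candidate_grid)
```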
The hip centre (HC) in Computer Assisted Orthopedic Surgery (CAOS) can be determined with either anatomical (AA) or functional (FA) approaches. The AA is considered the reference, while FAs compute the hip centre of rotation (CoR). Four main FAs can be used in CAOS: the Gammage, Halvorsen, pivot, and least-moving-point (LMP) methods. The goal of this paper is to evaluate and compare, with an in-vitro experiment, (a) the four main FAs for HC determination, and (b) their impact on the hip-knee-ankle (HKA) angle. The experiment has been performed on six cadavers. A CAOS software application has been developed for the acquisition of (a) the hip rotation motion, (b) the anatomical HC, and (c) the HKA angle. Two studies have been defined allowing (a) the evaluation of the precision and the accuracy of the four FAs with respect to the AA, and (b) the assessment of the impact on the HKA angle. For the pivot, LMP, Gammage and Halvorsen methods respectively: (1) the maximum precision reaches 14.2, 22.8, 111.4 and 132.5 mm; (2) the maximum accuracy reaches 23.6, 40.7, 176.6 and 130.3 mm; (3) the maximum error of the frontal HKA is 2.5°, 3.7°, 12.7° and 13.3°; and (4) the maximum error of the sagittal HKA is 2.3°, 4.3°, 5.9° and 6.1°. The pivot method is the most precise and accurate approach for the HC localisation and the HKA computation.
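As an illustration of one of the compared approaches, the pivot method can be written as a linear least-squares problem: find the point p, fixed in the femur tracker frame, and the centre c, fixed in the pelvis frame, such that R_i p + t_i ≈ c for every recorded pose. The sketch below is a generic pivot-calibration formulation under that assumption, not necessarily the exact implementation compared in this study.

```python
import numpy as np

def pivot_center_of_rotation(rotations, translations):
    """Estimate the hip CoR by pivot calibration (linear least squares).

    Solves  [R_i  -I] [p; c] = -t_i  for the pivot point p (femur frame)
    and the centre of rotation c (pelvis frame), stacking one block per pose.
    """
    A, b = [], []
    for R, t in zip(rotations, translations):
        A.append(np.hstack([R, -np.eye(3)]))   # one 3x6 block per pose
        b.append(-np.asarray(t))
    A = np.vstack(A)
    b = np.concatenate(b)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    p, c = x[:3], x[3:]
    return p, c
```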
Over the last twenty years, image-guided interventions have been greatly expanded by the advances in medical imaging and computing power. A key step for any image-guided intervention is to find the image-to-patient transformation matrix, that is, the transformation between the preoperative 3D model of the patient anatomy and the real position of the patient in the operating room. In this work, we propose a robust registration algorithm to match ultrasound (US) images with preoperative Magnetic Resonance (MR) images of the humerus. The fusion of preoperative MR images with intra-operative US images is performed with an NDI Spectra® Polaris system and a L12-5L60N TELEMED® ultrasound transducer. The use of an ultrasound probe requires a calibration procedure in order to determine the transformation between a US image pixel and its position in a global reference system. After the calibration step, the patient anatomy is scanned with the US probe. US images are segmented in real time in order to extract the desired bone contour. The use of an optical measurement system together with trackers and the previously computed calibration matrix makes it possible to assign a world coordinate position to any pixel of the 2D US image. As a result, the set of US pixels extracted from the images yields a cloud of 3D points, which is registered with the 3D humerus model reconstructed from MR images. The proposed registration method is composed of two steps. The first step consists of the alignment of the US 3D point cloud with the 3D bone model. The second step then applies the widely known Iterative Closest Point (ICP) algorithm. To perform the alignment, we define a coordinate system for both the 3D humerus model and the US point cloud. The frame directions correspond to the directions of the principal axes of inertia, calculated from the inertia matrices of the preoperative 3D model and of the US data obtained intra-operatively. Then, we compute the rotation matrix estimating the transformation between the two coordinate systems previously calculated. Finally, the translation is determined by evaluating the distance between the mass centres of the two 3D surfaces.
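A minimal sketch of this two-step registration, assuming both surfaces are available as point clouds: the principal axes of each cloud are taken from the eigenvectors of its covariance (inertia) matrix, an initial rotation maps one axis frame onto the other, the translation aligns the centroids, and a basic ICP loop refines the result. The eigenvector sign/ordering ambiguities and the ICP details are simplifications of the method described above.

```python
import numpy as np
from scipy.spatial import cKDTree

def principal_axes(points):
    """Centroid and principal directions (columns) of a point cloud."""
    c = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - c, full_matrices=False)
    return c, vt.T                      # axes as columns

def initial_alignment(us_points, model_points):
    c_us, ax_us = principal_axes(us_points)
    c_mod, ax_mod = principal_axes(model_points)
    R = ax_mod @ ax_us.T                # rotate US axes onto model axes
    t = c_mod - R @ c_us                # align the centroids
    return R, t

def icp_refine(us_points, model_points, R, t, iters=30):
    tree = cKDTree(model_points)
    src = us_points @ R.T + t           # apply the initial alignment
    for _ in range(iters):
        _, idx = tree.query(src)        # closest model point for each US point
        tgt = model_points[idx]
        # Best rigid transform between current source and its matches (Kabsch)
        cs, ct = src.mean(axis=0), tgt.mean(axis=0)
        U, _, Vt = np.linalg.svd((src - cs).T @ (tgt - ct))
        Ri = Vt.T @ U.T
        if np.linalg.det(Ri) < 0:       # avoid reflections
            Vt[-1] *= -1
            Ri = Vt.T @ U.T
        src = (src - cs) @ Ri.T + ct
    return src
```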
Automated MRI bone segmentation is one of the most challenging problems in medical imaging. To increase the segmentation robustness, a prior model of the structure can guide the segmentation. Statistical Shape Models (SSMs) are efficient examples for such an application. We present an automated SSM construction approach for the scapula. The basic idea is to relate only corresponding parts of the shapes under investigation. A sample from the sample set is chosen as a common reference (atlas), and the other samples are landmarked and registered to it so that the corresponding points can be identified. The registration has three levels: alignment, rigid and elastic transformations. After alignment and rigid registration, the samples are locally deformed toward the atlas using directly their landmarks (traditional approach). Unfortunately, landmark correspondences can be mismatched at some anatomically complex, "critical," zones of the scapula. To overcome this problem, we suggest 3D-segmenting these "critical" zones using a 3D watershed-based method. Watershed segmentation is based on the physical concept of immersion, where segmentation is achieved in a similar way to water filling geographic basins. We believe that this is a natural way to segment the surface of the scapula since it has two large "basins". Once we have the zones, surface-to-surface correspondence is defined and the landmarks' point-to-point correspondences are obtained within each zone pair separately. The elastic registration is then applied to the whole surface via a multi-resolution B-Spline algorithm. The atlas is built through an iterative procedure to eliminate the bias toward the initial choice, and the correspondences are identified by a reverse registration. Finally, the statistical model can be constructed by performing Principal Component Analysis (PCA).
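Once point-to-point correspondence is established, the statistical model follows the standard PCA recipe: stack the corresponded landmark coordinates of each (aligned) sample into one vector, compute the mean shape, and take the leading eigenvectors of the covariance as the modes of variation. The sketch below assumes the samples are already aligned and corresponded; it is a generic PCA formulation rather than the exact pipeline of this work.

```python
import numpy as np

def build_ssm(shapes, n_modes=10):
    """PCA-based statistical shape model.

    shapes: (n_samples, n_landmarks, 3) corresponded, aligned landmark sets.
    Returns the mean shape, the first modes (eigenvectors) and their variances.
    """
    n_samples = shapes.shape[0]
    X = shapes.reshape(n_samples, -1)          # one row vector per sample
    mean = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    variances = (s ** 2) / (n_samples - 1)     # eigenvalues of the covariance
    modes = Vt[:n_modes]                       # principal modes of variation
    return mean, modes, variances[:n_modes]

def synthesize(mean, modes, variances, b):
    """Generate a new shape from mode weights b (in units of std. dev.)."""
    return mean + (np.asarray(b) * np.sqrt(variances)) @ modes
```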
For any image-guided surgery, independently of the technique used (navigation, templates, robotics), it is necessary to obtain a 3D bone surface model from CT or MR images. Such a model is used for planning, registration and visualisation. Graphical representation of the patient's bony structures and of the surgical tools, interconnected with the tracking device and the patient-to-image registration, are crucial components of such a system. For Total Shoulder Arthroplasty (TSA), there are many challenges. Most of the cases we work with are pathological, such as rheumatoid arthritis or osteoarthritis. The CT images of these cases often show a fusion area between the glenoid cavity and the humeral head. They also show severe deformations of the humeral head surface that result in a loss of contours. These fusion-area and image-quality problems are further amplified by well-known CT artefacts such as beam hardening or partial volume effects. The state of the art shows that several segmentation techniques applied to CT scans of the shoulder have already been published. Unfortunately, their performance on pathological data is quite poor. In severe cases, bone-on-bone arthritis may lead to erosion, a wearing away of the bone. Shoulder replacement surgery, also called shoulder arthroplasty, is a successful, pain-relieving option for many people. During the procedure, the humeral head and the glenoid bone are replaced with metal and plastic components to alleviate pain and improve function. This surgical procedure is very difficult and limited to expert centres. The two main problems are the minimal surgical incision and the limited access to the operated structures. The success of such a procedure is related to optimal prosthesis positioning. For TSA, separating the humeral head in the 3D scanner images would enhance the surgeon's field of view on the glenoid surface. So far, none of the existing systems or software packages makes it possible to obtain such a 3D surface model automatically from CT images, and this is probably one of the reasons for the very limited success of Computer Assisted Orthopaedic Surgery (CAOS) applications in shoulder surgery. Such applications have often been limited by CT-image segmentation for severe pathological cases and by patient-to-image registration. The aim of this paper is to present new image-guided planning software based on the patient's CT scan and using bony structure recognition together with morphological and anatomical analysis of the operated region. Volumetric preoperative CT datasets have been used to derive a surface model of the shoulder. The proposed planning software can be used with a conventional localisation system, which locates in 3D and in real time the position and orientation of surgical tools using passive markers associated with rigid bodies fixed on the patient's bone and on the surgical instruments. Twenty series of patients aged from 42 to 91 years (mean age 71 years) were analysed. The first step of this planning software is a fully automatic segmentation method based on 3D shape recognition algorithms applied to each object detected in the volume. The second step is a specific processing that only treats the region between the humerus and the glenoid surface in order to separate possible contact areas. The third step is a full morphological analysis of the anatomical structure of the bone.
The glenoid surface and the glenoid vault are detected, and the 3D version and inclination angles of the glenoid surface are computed. These parameters are very important to define an optimal path for drilling and reaming the glenoid surface. The surgeon can easily modify the position of the implant in 3D, aided by 3D and 2D views of the patient anatomy. The glenoid version/inclination angles and the glenoid vault are computed for each position in real time to help the surgeon evaluate the implant position and orientation. In summary, preoperative planning, 3D CT modelling and intraoperative tracking improved the accuracy of glenoid implantation. This paper has presented new planning software for image-guided surgery focused on shoulder arthroplasty. Within our approach, we propose to use pattern recognition instead of manual picking of landmarks to avoid user intervention and to potentially reduce the procedure time. A very important role is played by the 3D datasets used to visualise specific anatomical structures of the patient. The automatic segmentation of arthritic joints with bone recognition is intended to form a solid basis for the registration. This methodology was tested on arthritic patients to show that it is not only easy and fast to perform but also very accurate, so that it fulfils all conditions for clinical use in the OR.
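For illustration only, glenoid version and inclination can be expressed as angles between the glenoid surface normal and a scapular reference axis, measured in the transverse and coronal planes respectively. The sketch below assumes these unit vectors and the scapular coordinate system have already been extracted, which is a simplification of the morphological analysis described above, and not necessarily the angle definition used by the planning software.

```python
import numpy as np

def _angle_in_plane(v, reference_axis, plane_normal):
    """Signed angle (degrees) between v and reference_axis after projecting
    both onto the plane whose unit normal is plane_normal."""
    def project(u):
        u = u - np.dot(u, plane_normal) * plane_normal
        return u / np.linalg.norm(u)
    a, b = project(v), project(reference_axis)
    return np.degrees(np.arctan2(np.dot(np.cross(b, a), plane_normal),
                                 np.dot(a, b)))

def glenoid_angles(glenoid_normal, scapular_axis, superior_axis, anterior_axis):
    """Version measured in the transverse plane, inclination in the coronal
    plane. All inputs are assumed to be unit vectors in a common frame."""
    version = _angle_in_plane(glenoid_normal, scapular_axis, superior_axis)
    inclination = _angle_in_plane(glenoid_normal, scapular_axis, anterior_axis)
    return version, inclination
```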
One of the advantages of Computer Assisted Orthopaedic Surgery is to obtain functional and morphological information in real time during the procedure. 3D models can be built, without preoperative images, using elastic 3D-to-3D registration methods. The bone-morphing algorithm is one of them. It builds the patient-specific 3D shape of a bone using a deformable model and a set of sparse points acquired on the patient. These points are obtained with a tracked pointer, visible to the localisation system, which digitises the surface of the bone. However, it is not always possible to digitise the bone directly in the context of minimally invasive surgery. In this case, the lack of information leads to an inaccurate reconstruction of the bone surface. To collect such missing information, we propose to rely on ultrasound (US) images, despite the fact that ultrasound is not the best modality for imaging bone. To use this method, a segmentation step is first needed to automatically detect the bone in the US images. Then, a calibration step of the US probe is carried out to obtain the 3D position of any point of the 2D ultrasound images using a 3D infrared localiser. Several methods can be used to calibrate US probes; however, to take into account surgical constraints such as accuracy, robustness, speed and ease of use, we decided to implement the single-wall procedure. The calibration step consists in estimating a transformation matrix which connects the 2D reference system of the US image to a 3D reference system in space. To estimate this matrix correctly, a wall is scanned with different motions of the US probe. The images are then processed to automatically detect the lines representing the wall in the US images. A preliminary step cleans the images using a threshold and a gradient operation. Then, a method based on the Hough transform detects the lines in the images. Once all the images are processed, the calibration parameters can be estimated using a new method which minimises the distance between the real plane and the points obtained from the US images. This optimisation step combines genetic algorithms with the Levenberg-Marquardt (LM) method. The genetic algorithm provides a good initialisation within a defined search space for the LM algorithm. This good initialisation, found thanks to the stochastic behaviour of the genetic algorithms, is very important; otherwise the LM algorithm could converge to a local minimum and the calibration parameters could be wrong. The accuracy of the calibration method is assessed by measuring the distance between the position of a known point in space and the same point obtained from the US image and the calibration. 40 calibration matrices are used to estimate the accuracy correctly. An average accuracy of 1.22 mm and a standard deviation (Std. Dev.) of 0.42 mm are measured. The accuracy of the system is quite high, but the reproducibility is too low to use this approach in a clinical environment. The main reason for this lack of reproducibility is the thickness of the US beam. A slight modification of the design of the calibration tool should increase the reproducibility. We will then have an efficient and automatic calibration procedure with the required accuracy and robustness, usable for clinical purposes.
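The calibration cost described above can be sketched as a point-to-plane residual: each detected wall pixel is mapped to 3D through the probe and calibration transforms, and the parameters are tuned so that all mapped points fall on one plane. Below is a simplified version using a random multi-start initialisation in place of the genetic algorithm and SciPy's Levenberg-Marquardt solver; the parameterisation (Euler angles, pixel scales) and the search bounds are assumptions for illustration, not the exact ones used in this work.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def residuals(params, pixels, probe_poses, plane_point, plane_normal):
    """Point-to-plane distances of wall pixels mapped into 3D.

    params = [rx, ry, rz, tx, ty, tz, sx, sy]: image-to-probe rotation (Euler),
    translation and pixel scales.  pixels is (N, 2); probe_poses is a list of
    (R, t) probe-to-world transforms, one per pixel; plane_normal is unit.
    """
    R_cal = Rotation.from_euler("xyz", params[:3]).as_matrix()
    t_cal, sx, sy = params[3:6], params[6], params[7]
    res = []
    for (u, v), (R_p, t_p) in zip(pixels, probe_poses):
        p_img = np.array([sx * u, sy * v, 0.0])          # pixel in image plane
        p_world = R_p @ (R_cal @ p_img + t_cal) + t_p    # chain of transforms
        res.append(np.dot(p_world - plane_point, plane_normal))
    return np.array(res)

def calibrate(pixels, probe_poses, plane_point, plane_normal, n_starts=20, seed=0):
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(n_starts):                            # crude multi-start init
        x0 = np.concatenate([rng.uniform(-np.pi, np.pi, 3),
                             rng.uniform(-100.0, 100.0, 3),
                             rng.uniform(0.01, 0.5, 2)])
        fit = least_squares(residuals, x0, method="lm",
                            args=(pixels, probe_poses, plane_point, plane_normal))
        if best is None or fit.cost < best.cost:
            best = fit
    return best.x
```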