Abstract
Introduction: In vitro simulation experiments and in vivo metal ion studies have been used to investigate metal-on-metal bearing wear. In vitro studies demonstrate an early high-wear phase followed by a rapid decline to a significantly lower steady-state phase. Clinical metal ion studies also reveal early high wear, but have never shown such a significant fall in later years. This study compares in vitro and in vivo wear rates.
Methods: In vivo measurements were obtained from daily cobalt excretion in 26 patients with 50 and 54 mm resurfacings followed for up to 4 years; their activity averaged 2 million cycles per year. In vitro measurements were obtained from gravimetric wear rates (Prosim hip simulator) of ten 50 mm diameter resurfacings of the same design, with diluted calf serum as the lubricant.
Results: Simulator results, shown in fig 1, are expressed as equivalent wear per day. Fig 2 shows that during the first year the simulator predicts wear exceeding the metal ion output; this can be accounted for by postulating that the proportion of wear released as particulate debris is higher during the early years. Subsequently the plots converge, showing that particulate debris release falls progressively relative to metal ion release. At 3 years the simulator predicts lower wear than that observed in the metal ion study; this can be accounted for by postulating that corrosion of previously shed particles is responsible for the difference.
Discussion: These results indicate that during the run-in period about four-fifths of bearing wear is released as insoluble particles and the remainder as soluble metal ions. This relationship changes progressively through the steady-state phase. At around the 3-year stage, even if we assume that most bearing wear is released as soluble metal ions, nearly a fifth (2.8/14.4, approximately 19%) can only be accounted for by passive corrosion of previously shed wear particles.
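The arithmetic behind the quoted fractions can be checked with a short script. This is an illustrative sketch only: the variable names are ours, the units of the 2.8 and 14.4 figures are as reported in the abstract, and no additional data are assumed.

```python
# Back-of-envelope check of the wear-partition figures quoted in the
# Discussion. Variable names are illustrative, not from the study.

# Run-in period: roughly four-fifths of bearing wear released as
# insoluble particles, the remainder as soluble metal ions.
run_in_particulate = 4 / 5            # 0.80
run_in_ionic = 1 - run_in_particulate # 0.20

# ~3-year stage: 2.8 of 14.4 (units as reported in the abstract)
# attributable only to passive corrosion of previously shed particles.
corrosion_share = 2.8 / 14.4          # ~0.194, i.e. nearly a fifth

print(f"Run-in particulate share: {run_in_particulate:.0%}")
print(f"Run-in ionic share: {run_in_ionic:.0%}")
print(f"Corrosion share at 3 years: {corrosion_share:.1%}")
```

Running this confirms that 2.8/14.4 is about 19%, consistent with the "nearly a fifth" stated above.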
Correspondence should be addressed to BHS c/o BOA, at the Royal College of Surgeons, 35–43 Lincoln’s Inn Fields, London, WC2A 3PE, England.