Aim
We report on the performance of a simple algorithm combining synovial fluid white blood cell count (WBC), C-reactive protein (CRP) and α-defensin (AD) tests to aid in the diagnosis of prosthetic joint infection (PJI).
Methods
Sixty-six synovial fluid samples were collected prospectively from patients with suspected PJI (hip and knee). All samples were tested by WBC count (read manually) and CRP test (Alere Afinion™, validated in-house); 37 of these were also tested with the AD test. Synovial fluid samples were collected in 5 ml ethylenediaminetetraacetic acid (EDTA) tubes. Very viscous samples were pre-processed by the addition of 100 µl of hyaluronidase solution. Grossly blood-stained and clotted samples were excluded. A clinical diagnosis of infection was based on IDSA definitions [1]. Cut-offs of >3000 × 10⁶ cells/L for total synovial WBC count and >12 mg/L for CRP were used to define infection [2,3].
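As a minimal sketch of how the stated cut-offs could be applied, the following assumes a simple rule in which any positive marker flags the sample; the abstract does not specify how the three tests are combined, so the combination rule and the function name are assumptions for illustration only.

```python
# Illustrative sketch of the screening cut-offs described above.
# The rule for combining WBC, CRP and alpha-defensin results is not stated
# in the abstract; treating any positive marker as suggestive of PJI is an
# assumption made here purely for illustration.

def classify_sample(wbc_per_litre: float, crp_mg_per_l: float,
                    alpha_defensin_positive: bool | None = None) -> str:
    """Apply the stated cut-offs: WBC > 3000 x 10^6 cells/L, CRP > 12 mg/L."""
    wbc_positive = wbc_per_litre > 3000e6      # cells/L
    crp_positive = crp_mg_per_l > 12.0         # mg/L
    markers = [wbc_positive, crp_positive]
    if alpha_defensin_positive is not None:    # AD was run on a subset only
        markers.append(alpha_defensin_positive)
    return "suggestive of PJI" if any(markers) else "not suggestive of PJI"

print(classify_sample(wbc_per_litre=4500e6, crp_mg_per_l=8.0))
```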
Introduction
It is now widely accepted that acute knee dislocations should be managed operatively. Most published studies are from outside the UK and from major trauma or specialist centres. The aim of this study was to report the functional outcomes of all patients presenting with an acute knee dislocation at our institution, all of whom were surgically managed, and to compare the results with other published series. The hypothesis was that there would be no significant difference in the functional outcome scores between the groups.
Methods
All patients presenting with an acute knee dislocation over the last 15 years were included in the study. Patients were followed up using functional assessment scores: Knee Outcome Score (ADL), Knee Outcome Score (sports), Tegner Lysholm score and overall patient satisfaction. Patients were classified according to the Schenck classification of knee dislocations.
Introduction
Large diameter femoral heads offer increased range of motion and reduced risk of dislocation. However, their use in total hip arthroplasty has historically been limited by their association with increased polyethylene wear. The improved wear resistance of highly crosslinked UHMWPE has led a number of clinicians to transition from implanting traditionally popular sizes (28 mm and 32 mm) to implanting 36 mm heads. The desire to further increase stability and range of motion has spurred interest in even larger sizes (>36 mm). While the long-term clinical ramifications are unknown, in-vivo measurements of highly crosslinked UHMWPE liners indicate that increases in head diameter are associated with increased volumetric wear [1]. The goal of this study was to determine whether this increase in wear could be negated by using femoral heads with a ceramic surface, such as oxidized Zr-2.5Nb (OxZr), rather than CoCrMo (CoCr). Specifically, wear of 10 Mrad crosslinked UHMWPE (XLPE) against 36 mm CoCr and 44 mm OxZr heads was compared.
Materials and Methods
Ram-extruded GUR 1050 UHMWPE was crosslinked by gamma irradiation to 10 Mrad, remelted, and machined into acetabular liners. Liners were sterilized using vaporized hydrogen peroxide and tested against either 36 mm CoCr or 44 mm OxZr (OXINIUM™) heads (n=3). All implants were manufactured by Smith & Nephew (Memphis, TN). Testing was conducted on a hip simulator (AMTI, Watertown, MA) as previously described [2]. The 4000 N peak load (4 times body weight for a 102 kg/225 lb patient) and 1.15 Hz frequency used are based upon data obtained from an instrumented implant during fast walking/jogging and have previously been shown to generate measurable XLPE wear [2,3]. The lubricant was a serum solution (Alpha Calf Fraction, HyClone Laboratories, Logan, UT) that was replaced once per week [2]. Liners were weighed at least once every million cycles (Mcycle) over the duration of testing (~5 Mcycle). Loaded soak controls were used to correct for fluid absorption. Single-factor ANOVA was used to compare groups (α = 0.05).
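The group comparison described above (single-factor ANOVA, n = 3 per head type, α = 0.05) could be run as in the following sketch; the wear values are illustrative placeholders only, not the study's measurements.

```python
# Single-factor ANOVA comparing wear between the two head groups (n = 3),
# as stated in the methods. The values below are hypothetical placeholders
# for illustration; they are not the study's data.
from scipy import stats

wear_cocr_36mm = [10.2, 11.1, 9.8]    # hypothetical wear rates, mg/Mcycle
wear_oxzr_44mm = [9.5, 10.4, 9.9]     # hypothetical wear rates, mg/Mcycle

f_stat, p_value = stats.f_oneway(wear_cocr_36mm, wear_oxzr_44mm)
alpha = 0.05
verdict = "significant" if p_value < alpha else "not significant"
print(f"F = {f_stat:.2f}, p = {p_value:.3f}, {verdict} at alpha = {alpha}")
```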
Since chronic exertional compartment syndrome (CECS) of the leg was first recognised as a cause of exercise-induced leg pain in the 1950s, there has been no universally accepted diagnostic pressure. A 1997 review found 16 papers from 1962 to 1990 with differing diagnostic criteria. The threshold pressure used at DMRC Headley Court is based on the work of Allen and Barnes from 1986, whereby in a patient with a suitable history, a dynamic pressure in the exercising muscle compartment above 50 mmHg is diagnostic. We present the data gathered at DMRC Headley Court during the first year of the new protocol on dynamic pressure testing, from May 2007. The new exercise protocol involved exercising patients using a representative military task: the combat fitness test (CFT), using a 15 kg Bergen on a treadmill set at 6.5 km/h with zero incline. During this period, we performed 151 intra-compartmental pressure studies in 76 patients; 120 were successful in 68 patients, with 31 technical failures. Patients complained of exercise-induced leg pain on performing the CFT and pointed to the muscles in either the anterior or deep posterior muscle compartments, and these were exclusively tested with invasive studies. No patients complained of symptoms in the lateral or superficial posterior compartments, and therefore neither was tested. The majority of studies were performed in the anterior leg compartment (110 successful), with a few (9 successful) in the deep posterior compartment; there was only one complication, a posterior tibial artery puncture. The mean age of the patients was 28.9 years (SD 6.7). In 119 compartment studies, the mean pressure was 97.8 mmHg (SD 31.7). These data are normally distributed (Shapiro-Wilk test, W=0.98, p=0.125). In summary, we present the data using the CFT as the exercise protocol in patients who give a history compatible with CECS and have symptoms of leg pain during exercise. These data have a mean of approximately 100 mmHg, double the diagnostic criterion of Allen and Barnes, who used running as the exercise protocol. The presence of a weighted Bergen, as well as the stride and gait pattern used during the loaded march, may be contributory factors explaining why the pressures are higher compared with other forms of exercise. Further work is ongoing to determine the intracompartmental muscle pressure in normal subjects with no history of exertional leg pain performing the CFT.
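A minimal sketch of the normality check reported above (Shapiro-Wilk, W = 0.98, p = 0.125, n = 119) follows; the pressures are simulated placeholders drawn to match the reported mean and SD, not patient data.

```python
# Shapiro-Wilk normality check as reported for the 119 compartment studies.
# Pressures are simulated to match the reported summary statistics
# (97.8 +/- 31.7 mmHg); they are not the original patient measurements.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pressures_mmhg = rng.normal(loc=97.8, scale=31.7, size=119)

w_stat, p_value = stats.shapiro(pressures_mmhg)
print(f"W = {w_stat:.3f}, p = {p_value:.3f}")  # p > 0.05: no evidence against normality

# Allen and Barnes criterion: dynamic pressure above 50 mmHg is diagnostic
above_threshold = pressures_mmhg > 50.0
print(f"{above_threshold.mean():.0%} of simulated studies exceed 50 mmHg")
```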
It has been stated that the application of a pre-hospital tourniquet could prevent 7% of combat deaths, but their widespread use has been questioned due to the potential risk from prolonged ischaemia or local pressure. The debate centres on their ability to improve survival after major haemorrhage versus the potential risk of limb loss. A recent US military prospective study of their use demonstrated improved survival when a tourniquet was applied, and reported that no limb was lost solely from tourniquet use. However, this study focused on early limb loss, with a median follow-up of only 7 days, and so could not consider later morbidity. The aim of this study was to investigate whether the pre-hospital application of a tourniquet resulted in an increase in morbidity following significant ballistic limb injury. We reviewed members of the UK armed forces who sustained severe limb-threatening injuries in Iraq and Afghanistan, and a cohort study was then performed based on the presence or absence of a pre-hospital tourniquet. Of the 23 lower limbs that definitely had a pre-hospital tourniquet applied, it was possible to match 22 limbs with 22 that did not have a pre-hospital tourniquet. The injuries were matched, as far as possible, for anatomical location, severity of the bony injury, initial surgical management, Injury Severity Score and Mangled Extremity Severity Score. Of the 22 limbs with a pre-hospital tourniquet applied, 19 had at least 1 complication. Of the 22 with no tourniquet applied, 15 had at least 1 complication (p=0.13). There were 10 limbs with at least 1 major complication in the pre-hospital tourniquet group but only 4 in the group with no tourniquet (p=0.045). There was no difference in the amputation rate. The significant difference in the incidence of major complications is a concern, particularly as the difference was mainly due to a deep infection rate of 32% vs. 4.5%. Although there are a number of variables which could have influenced these small groups, such as choice of implant and the method and timing of wound closure, the use of a matched cohort and a p < 0.05 does suggest the use of a pre-hospital tourniquet was a factor. Although the use of pre-hospital tourniquets cannot be decried as a result of this study, there remains the need to continually review their use, prospectively, to determine their risk/benefit ratio.
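The abstract does not name the statistical test used for the 2×2 comparison of major complications; a Fisher's exact test on the reported counts is one plausible reconstruction, and its p-value may differ from the reported p=0.045.

```python
# The abstract reports 10/22 limbs with at least one major complication in
# the tourniquet group vs 4/22 without (p = 0.045), but does not state the
# test used. Fisher's exact test is one plausible reconstruction; the exact
# p-value obtained here may not match the one reported.
from scipy import stats

#                     major complication   no major complication
table = [[10, 12],  # pre-hospital tourniquet (n = 22)
         [ 4, 18]]  # no tourniquet           (n = 22)

odds_ratio, p_value = stats.fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
```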
Approximately 1% of joint replacement operations are complicated by infection. Thirty percent of these infections are due to