Further Opinion

How will surgical site infection be measured to ensure "high quality care for all"?

E Ashby, FS Haddad, E O’Donnell, APR Wilson

J Bone Joint Surg [Br] 2010;92-B:1294-9.

 

From April of this year, individual surgeons' Surgical Site Infection rates are to be published on behalf of the NHS. Departmental funding will depend on those infection rates, and departments with high infection levels will face budgetary cuts.

Clearly there are a number of issues surrounding this policy – the most important of which is the need to implement a universal methodology with which to assess infection rates.

In this paper, Ashby et al1 compare their methodology, ASEPSIS, the system used at University College Hospital London, with two other systems: the National Nosocomial Infections Surveillance System (NNIS) used by the US Centers for Disease Control and Prevention, and the English Nosocomial Infection National Surveillance Scheme (NINSS).

They conclude that ASEPSIS is the most efficient and accurate way of collecting these important data, stating that in a review of 7448 patients the NNIS produced an infection rate of 15.5%, the NINSS 11.3% and ASEPSIS 8.8%. They thus argue that the ASEPSIS method lessens the chance of over-reporting surgical infections.

Yet are they right to assert that their method is the best?

Surgical Site Infection needs to be recorded as accurately as possible and it is a step forward that national figures are being produced to provide an indication of quality of care.

However, whatever method is used must be cost-effective if it is to be implemented universally and, more importantly, the results need to be recognised internationally so that like can be compared with like.

The authors report that ASEPSIS takes a mean of 59 minutes to collect and analyse the data, with a dedicated team of four nurses needed to carry out the work. However, no information is given regarding how long the other two systems take to collect and analyse data or how many staff are involved.

An important factor in this paper is that the patients were followed up for two months after their hospital discharge using a postal questionnaire, with a follow-up rate of over 90%. This is something that must be done more frequently, especially after joint replacement surgery.

Rightly, the authors point out that all institutions need to use the same method for diagnosing surgical site infection, and that if different methods are used, direct comparisons will be invalid and misleading information will be produced.

Yet it is worth noting that the only method with international recognition is the CDC's NNIS system (the UK system, NINSS, apparently removes the role of the surgeon and is therefore fundamentally flawed).

Surgical Site Infection is defined as incisional (superficial) or deep infection occurring within 30 days of surgery or within one year of the introduction of a surgical implant.2 The CDC NNIS method3 depends upon the definitions of clean, clean-contaminated, contaminated and dirty wounds, and also includes risk factors such as the physical state of the patient, as defined by the American Society of Anesthesiologists, and the duration of the surgery. Its use has been shown to be effective in the UK in a pilot study of 5400 patients from 11 different sites who had metalwork inserted for either total joint replacement or fractured neck of femur.5

In their paper Ashby et al dismiss the CDC method as unreliable, quoting Allami et al,6 who looked at superficial wound infections following arthroplasty in 50 patients and found that the system had problems that needed addressing. Those conclusions were valid: no system is perfect, and Allami et al were only addressing the problem of superficial infections in a small, selected group of patients. They make the point, however, that observer variability occurs in diagnosing superficial infections, and this only reinforces the need for the clinician involved in the care of the patient to be involved in signing off the diagnosis of infection.

Ashby et al also discard the UK NINSS because its reproducibility was low, citing the paper by Wilson et al from the Public Health Laboratory Service in 2002.7 However, there are issues surrounding their interpretation of the data. Wilson et al gave the results of a survey with a response rate of 90% from 113 hospitals that had taken part in collecting data for surgical site infection. These authors recorded that the views of the users were very positive and indicated considerable support for this type of national surveillance. Wilson et al concluded that a national system was of value in providing standardised surveillance methods and comparative data. They reported that their results had been used by clinicians to initiate change in clinical practice, increase awareness of infection control and demonstrate good quality care, all of which were positive factors. Sensibly, they recommended areas for further development, such as an extended range of surgical procedures to be included, the use of post-discharge surveillance and improved local data collection and analysis systems. These are all valid points that need to be taken into account for an effective system of surgical site infection surveillance.

Ashby et al demonstrate that the ASEPSIS method, which has been in use for the last 20 years, is a useful and detailed way of assessing infections and appears to be more objective than the other two systems studied. But it is also worth noting that it may under-report infections and that it appears to be time-consuming and labour-intensive to perform, both of which will cost institutions. It is also arguable that the system is overly complex.

Ultimately, clinicians need to make the final diagnosis on the basis of the evidence supplied to them. Clearly there is the possibility of bias on the part of the clinician whose name will be on the data published nationally, but without a more absolute method it must be the final responsibility of the clinician in charge to make the diagnosis, in the full acknowledgement that the institution's funding may be affected.

The recording of Surgical Site Infection rates is here to stay as an index of quality of care. What is of importance, however, is the reporting of deep infections, which can present more than a year after surgery, as this is what really affects both the patient and the institution involved. This is where the funding will be affected.

Recording too much detail about superficial infections may also be fairly irrelevant, as the evidence for superficial infections progressing to deep infections is scant. It is deep infections that cause problems for the patient and cost to the institution, and therefore a scoring system accurately covering deep infections in particular is of paramount importance.

It is because of this that, despite Ashby et al's arguments in favour of the ASEPSIS system, I would strongly recommend that the internationally recognised CDC NNIS system be used in the UK, so that national infection rates can be directly available for all to see and can be compared with those of other centres around the world.

To do this there needs to be a system of nurse-led surgical site infection data collection in every hospital. Furthermore, each clinician whose infection rate is to be published must take overall responsibility for this and sign off the diagnosis of post-operative surgical site infection.

References

1. Ashby E, Haddad FS, O'Donnell E, Wilson APR. How will surgical site infection be measured to ensure "high quality care for all"? J Bone Joint Surg [Br] 2010;92-B:1294-9.
2. Horan TC, Gaynes RP, Martone WJ, Jarvis WR, Emori TG. CDC definitions of nosocomial surgical site infections, 1992: a modification of CDC definitions of surgical wound infections. Infect Control Hosp Epidemiol 1992;13:606-8.
3. No authors listed. National Nosocomial Infections Surveillance System, 1992-2004. Am J Infect Control 2004;32:470-85.
4. Gaynes RP, Culver DH, Horan TC, Edwards JR, et al. Surgical site infection rates in the United States, 1992-1998. Clin Infect Dis 2001;33(Suppl 3):69-77.
5. Morgan M, Black J, Bone F, Fry C, et al. Clinician-led surgical site infection surveillance of orthopaedic procedures: a UK multicentre pilot study. J Hosp Infect 2005;6:201-12.
6. Allami MK, Jamil W, Fourie B, Ashton V, Gregg PJ. Superficial incisional infection in arthroplasty of the lower limb. J Bone Joint Surg [Br] 2005;87-B:1267-72.
7. Wilson JA, Ward VP, Coello R, Charlett A, Pearson A. User evaluation of the Nosocomial Infection National Surveillance System: surgical site infection module. J Hosp Infect 2002;5:114-21.

 

Hughes S, Emeritus Professor of Orthopaedic Surgery

Imperial College London

E-mail: seanfrancishughes@imperial.ac.uk