Original Research

The influence of a continuing education program on the image interpretation accuracy of rural radiographers

AUTHORS

Tony Smith
1 PhD, Senior lecturer in medical radiation science

Peter Traise
2 BA(Hons), RDP research fellow

Aiden Cook
3 BAppSci(MedImag), GradDip(ImagInterp), Director of Medical Imaging

CORRESPONDENCE

*A/Prof Tony Smith

AFFILIATIONS

1, 2 University Department of Rural Health, The University of Newcastle, Tamworth, New South Wales, Australia

3 Medical Imaging Department, Toowoomba Base Hospital, Toowoomba, Queensland, Australia

PUBLISHED

24 April 2009 Volume 9 Issue 2

HISTORY

RECEIVED: 19 December 2008

REVISED: 21 March 2009

ACCEPTED: 24 April 2009

CITATION

Smith T, Traise P, Cook A. The influence of a continuing education program on the image interpretation accuracy of rural radiographers. Rural and Remote Health 2009; 9: 1145. https://doi.org/10.22605/RRH1145

AUTHOR CONTRIBUTIONS

© Tony Smith, Peter Traise, Aiden Cook 2009. A licence to publish this material has been given to ARHEN, arhen.org.au


Abstract

Introduction: In regional, rural and remote clinical practice, radiographers work closely with medical members of the acute care team in the interpretation of radiographic images, particularly when no radiologist is available. However, the misreading of radiographs by non-radiologist physicians has been shown to be the most common type of clinical error in the emergency department. Further, in Australia few rural radiographers are specifically trained to interpret and report on images. This study aimed to evaluate the accuracy of a group of rural radiographers in interpreting musculoskeletal plain radiographs, and to assess the effectiveness of continuing education (CE) in improving their accuracy within a short time frame.
Methods: Following ethics approval, 16 rural radiographers were recruited to the study. At inception a purpose-designed 'test-object' of 25 cases compiled by a radiologist was used to assess image interpretation accuracy. The cases were categorised into three grades of complexity. The radiographers entered their answers on a structured radiographer opinion form (ROF) that had three levels of response - 'general opinion', 'observations' and 'open comment'. Subsequent to base-line testing, the radiographers participated in a CE program aimed at improving their image interpretation skills. After a 4 month period they were re-tested using the same methodology. The ROFs were scored by the radiologist and the pooled results analysed for statistically significant changes at all ROF levels and grades of complexity.
Results: While for the small number of less complex grade 1 cases there was no change in image interpretation accuracy, for the more numerous and more complex grade 2 and grade 3 cases there was a statistically significant improvement at the 'general opinion' and 'observation' levels (paired t-test, p < 0.05). Also, with the exception of the small sample of grade 1 cases, the proportion of cases correctly interpreted by the radiographers decreased as the ROF level, and therefore the amount of detail required, increased.
Conclusions: This study had a number of methodological limitations, but the results suggest that short-term, intensive CE programs can improve the ability of radiographers to accurately interpret plain musculoskeletal radiographic examinations. Larger scale initiatives of this kind could help reduce the risk of misdiagnosis in acute care settings, especially in the absence of a radiologist. However, radiographers' ability to use radiological vocabulary needs improvement. The complementary role that exists between radiographers and other members of the acute care team should be nurtured and developed in the context of declining numbers of radiologists, particularly in non-metropolitan areas. Intensive, short-term training in image interpretation may target junior medical officers, GPs and critical care nurse practitioners, as well as radiographers.

Key words: continuing education, emergency care, radiography.


Introduction

In rural and remote health services, where there is often no radiologist in attendance, radiographers work closely with non-radiologist medical practitioners in the interpretation of radiographs. However, non-radiologist physician interpretation of radiographs has been reported as the leading cause of diagnostic error in the accident and emergency department1, although it is reasonable to expect that this error rate is reduced by physician-radiographer consultation2. This is evident in studies dating back to the 1980s. De Lacey et al3 found that 2.5% of medically significant findings were missed by 'casualty' (emergency department) medical officers in a study of 531 patients, while in a later study Berman et al4 found that radiographers correctly identified 28 abnormalities that were missed by casualty doctors among 1496 patients. In a more recent study, Guly found that 77.8% of diagnostic errors in the emergency department were due to misreading of radiographs1, often by relatively junior medical officers, and that little had changed compared with a study in the same department 20 years earlier.

Radiographic examinations offer the greatest benefit when a radiologist's report is immediately available5. However, delays of 1 to 3 days are commonplace in both rural and metropolitan public hospitals in Australia, and much longer delays have been reported6,7. Although delayed reporting of images is less satisfactory than immediate reporting, it still increases the detection of clinically significant abnormalities and provides clarification in cases where the referring doctor is unsure of the diagnosis.

An alternative practice model that has been extensively implemented and evaluated in the UK is the training of radiographers in frontline radiological image interpretation and reporting8. A meta-analysis of UK studies found that, compared with a reference standard, radiographers' overall sensitivity and specificity were 92.6% and 97.7%, respectively9. After radiographers received specific training in image interpretation there was no statistically significant difference in their accuracy, compared with radiologists.

In the absence of a radiologist, as is often the case in rural hospitals, the healthcare outcomes for patients may be improved by the introduction of a system of frontline radiological reporting by radiographers. Rural radiographers are often put in the position where their opinion is actively sought and valued by referring doctors, particularly in the emergency care setting10,pp210-211, although very few have been specifically trained in the radiological aspects of image interpretation and reporting.

Short-term training programs have been shown to be effective in improving radiographers' image interpretation accuracy11-15. However, in Australia, because radiographers have no formal reporting role, to date no studies have been performed to evaluate the effectiveness of continuing education (CE) as a means of improving radiographers' image interpretation accuracy. The aims of this study, therefore, were to evaluate the accuracy of a group of rural radiographers in interpreting plain, musculoskeletal radiographic images, and to assess the effectiveness of CE in improving their image interpretation accuracy within a short time frame.

Methods

Ethics approval for the project was obtained from both the Hunter New England and the University of Newcastle Human Research Ethics Committees. A letter of invitation to participate was mailed together with an information sheet to each of the 20 radiographers who were employed in public hospitals in the rural Northern Sector of the Hunter New England Area Health Service at the time of the study. Sixteen radiographers agreed to participate.

At inception, each radiographer was allocated a code number to ensure anonymity. Their base-line image interpretation accuracy was assessed using a 'test-object' of 25 de-identified cases that had been assembled by a radiologist academic. The images were embedded in a software program that permitted viewing on a desktop or laptop computer with a conventional monitor. It also had the capability to adjust image density and contrast and to magnify regions of interest. The cases included plain radiographic examinations of the appendicular and axial skeleton. Although the radiographers were not told so, all of the examinations demonstrated abnormalities, whether traumatic, non-acute or both. No clinical history was given. Each radiographer viewed the images separately, in isolation and under examination conditions. Most completed their interpretation of the cases to their satisfaction in less than 1.5 hours.

The cases were graded according to the degree of complexity, as follows:

  • grade 1: a new medical graduate would be expected to interpret the case correctly (3 cases)
  • grade 2: most radiology fellowship candidates would correctly interpret the case at the time of undergoing their final examination (17 cases)
  • grade 3: all specialist musculoskeletal radiologists and experienced general radiologists would correctly interpret the case (5 cases).

To attain an accuracy of 100%, a radiographer had to correctly identify and describe most, but not necessarily all, of the abnormal radiological signs in all 25 cases. A target of 85% accuracy, compared with the radiologist's interpretation, was set.

Participants were directed to enter their interpretation on a radiographer opinion form (ROF)16, which had 3 levels of response:

  • level 1: 'general opinion' - whether or not there was any abnormality
  • level 2: 'observations' - indicating the nature of the abnormality(ies) from a list of possibilities
  • level 3: 'open comment' - a brief, concise written description of the abnormal appearance(s).

The first two levels required the radiographers to simply tick the correct box(es), while the third level required a more detailed explanation of the radiographers' responses at the other 2 levels.
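
For illustration only (the ROF was a structured paper form16, and the field names below are hypothetical), a single case response with its three levels might be represented as:

```python
# Hypothetical representation of one radiographer's ROF response to a case.
# The study used a structured paper form; these names are illustrative.
from dataclasses import dataclass, field

@dataclass
class ROFResponse:
    abnormal: bool                  # level 1: 'general opinion'
    observations: list = field(default_factory=list)  # level 2: ticked findings
    open_comment: str = ""          # level 3: brief written description

example = ROFResponse(
    abnormal=True,
    observations=["fracture", "joint effusion"],
    open_comment="Transverse fracture of the distal radius with dorsal angulation.",
)
```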

Subsequent to base-line testing, the radiographers participated in a CE program over a 4 month period in 2007, aimed at improving their ability to correctly interpret plain musculoskeletal radiographic examinations. Because the participating radiographers were distributed across a wide geographical area, the program used flexible, IT-based modes of delivery that consisted of:

  • self-guided Microsoft PowerPoint presentations emailed to each participant approximately every 2 weeks. Presentations included directed-learning material and self-test case studies with model answers (Figure 1)
  • weekly one-hour tutorials or discussion groups, facilitated by a radiographer academic (author 1), which were videoconferenced at all 7 sites in the region where the radiographers were located
  • recommended readings, emailed as PDF files
  • URLs of relevant, high-quality internet sites.




Figure 1: An example of the Microsoft PowerPoint directed learning, self-test image interpretation quizzes (top) with model answers using the radiographer opinion form (ROF) format.


After the CE intervention had been completed, the radiographers' image interpretation accuracy was reassessed using the same test-object. Both the pre- and post-intervention ROF answer sheets were examined and scored by the radiologist who had assembled the cases. While for the first two ROF levels of response the scoring was dichotomous (agree = 1, disagree = 0), the open-ended responses were scored as:

  • A = strong to perfect correlation between the radiographer's and radiologist's opinion
  • B = no clinically significant differences in opinion
  • C = clinically significant false positive on the part of the radiographer
  • D = clinically significant false negative on the part of the radiographer.

For the purpose of statistical analysis of the responses at this higher level of radiographer opinion, scores of both A and B were considered agreement (1) and scores of either C or D as disagreement (0).
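
As a minimal sketch of this scoring scheme (illustrative only; scoring in the study was performed manually by the radiologist, and the function names here are hypothetical), the mapping to binary agreement values might be expressed as:

```python
# Illustrative sketch of the ROF scoring scheme; the study scored forms
# manually, so these functions are hypothetical.

def score_ticked_level(radiographer_ticks: set, radiologist_ticks: set) -> int:
    """Levels 1 and 2 ('general opinion', 'observations'): agree = 1, disagree = 0."""
    return 1 if radiographer_ticks == radiologist_ticks else 0

def score_open_comment(grade: str) -> int:
    """Level 3 ('open comment'): A or B (agreement) -> 1; C or D (disagreement) -> 0."""
    return 1 if grade in ("A", "B") else 0

# Example: a 'B' (no clinically significant difference) counts as agreement,
# while a 'C' (clinically significant false positive) does not.
assert score_open_comment("B") == 1
assert score_open_comment("C") == 0
```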

Cases were pooled for all 16 radiographers, creating an overall total of 400 cases (grade 1 = 48; grade 2 = 272; grade 3 = 80). Results were entered into a Microsoft Excel spreadsheet. Two-sided, paired t-tests (α = 0.05) were used to test for statistically significant changes in accuracy for the group of radiographers as a whole at all three levels of ROF response, as well as all three grades of complexity.
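
For illustration, such an analysis could be sketched as follows. This is an assumption-laden sketch, not the authors' method: the accuracy values are fabricated placeholders, pairing is assumed to be each radiographer's pre- and post-intervention accuracy at one ROF level, and scipy is used in place of the Microsoft Excel analysis the authors describe:

```python
# Minimal sketch of a two-sided paired t-test (alpha = 0.05).
# The accuracy values below are hypothetical placeholders, not study data.
from scipy import stats

# Hypothetical per-radiographer accuracy at one ROF level (n = 16 pairs).
pre  = [0.72, 0.68, 0.80, 0.64, 0.76, 0.72, 0.60, 0.84,
        0.68, 0.76, 0.72, 0.64, 0.80, 0.68, 0.76, 0.72]
post = [0.80, 0.76, 0.84, 0.72, 0.80, 0.80, 0.72, 0.88,
        0.76, 0.84, 0.80, 0.68, 0.84, 0.76, 0.84, 0.80]

t_stat, p_value = stats.ttest_rel(pre, post)  # two-sided by default
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("statistically significant change in accuracy")
```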

Results

The results are shown in Table 1. Overall, for the combined total of 400 cases, the radiographers' level of accuracy did not reach the 85% target, either before or after the CE intervention. At the 'general opinion' and 'observations' levels there was, however, a statistically significant improvement in the radiographers' accuracy between the pre- and post-intervention testing. At the 'open comment' level there was a slight increase in the proportion of cases interpreted correctly for all grades of complexity, although the improvement was not statistically significant.

For the grade 1 cases, of which there were only 3, the 16 radiographers agreed with the radiologist on more than 90% of the interpretations at all ROF levels. No statistically significant improvement was found in interpretations between the pre- and post-intervention for grade 1 cases at any of the ROF levels.

For the grade 2 and 3 cases, the radiographers' accuracy showed a statistically significant improvement after the intervention at both the 'general opinion' and 'observation' levels. With the exception of the small sample of grade 1 cases, the proportion of cases correctly interpreted by the radiographers decreased as the ROF level, and therefore the amount of detail required, increased. In fact, both before and after taking part in the CE program, the radiographers' image interpretation accuracy decreased with the demand for a more detailed description of their observations. There was also no statistically significant improvement at the 'open comment' level overall, or for any of the grades of complexity.

Table 1: Results for the combined 25 cases and 16 radiographers ('overall'), as well as the breakdown for grade of complexity at each radiographer opinion form level



Discussion

It is apparent that, while for the small number of grade 1 cases there was no change in image interpretation accuracy, for the more numerous and more complex grade 2 and grade 3 cases there was a significant improvement at the 'general opinion' and 'observation' ROF levels. This suggests that it is possible to advance the knowledge and skills of radiographers in image interpretation using short bursts of CE.

Similar improvements have been shown in other small-scale studies. Using a test bank of 30 radiographs, the sensitivity of radiographers in fracture detection improved from 78.9% to 88.2% (p < 0.05) as a consequence of a 2 day face-to-face training program in orthopaedic radiology and skeletal trauma15. Interestingly, 6 months after the course the radiographers' combined sensitivity had fallen to below the pre-course level, suggesting a need for ongoing education to maintain competency. Other similar studies, however, have shown that the gains achieved can be held after the completion of a short course in image interpretation. In the study described in this article, no follow-up assessment of the radiographers' image interpretation accuracy has been performed to date. In another previous study, the reporting accuracy of a radiographer undergoing formal postgraduate training in image interpretation increased progressively from 87.8% to 100%, compared with the supervising radiologist, over a 9 week period11, illustrating that a sustained improvement is achievable with ongoing education.

In addition to the need for more comprehensive, university-based education programs in image interpretation and reporting for radiographers, there is need for short-term, intensive means of improving the image interpretation accuracy of non-radiologists in the acute care setting. It is evident that such programs have the potential to quickly improve the detection rate of radiological abnormalities, increasing the immediacy with which patients receive definitive treatment. The combined accuracy of radiographers and emergency physicians has been shown to closely approach that of radiologists2. It may be argued, therefore, that the complementary role that exists between radiographers and non-radiologist physicians should be nurtured and developed in the context of a decline in the availability of radiologists, particularly in regional, rural and remote areas. As well as radiographers, therefore, intensive training in image interpretation may also target emergency department junior medical staff, GPs and critical care nurse practitioners. Short-term education programs such as the one described in this article may also have relevance in large metropolitan hospitals where there is a general shortage of radiologists performing 'hot' reporting.

There is great potential to develop online interprofessional education in image interpretation for non-radiologists who are required to interpret radiographic images as part of their healthcare role. This may in turn improve the quality of service in the acute care setting. By delivering courses online the material can be widely accessible and effectively managed using existing online delivery technology. There is a danger, however, of perceiving such initiatives simply as a threat to traditional professional roles. Unfortunately, these perceptions impacted on this study, with the radiologist involved eventually having to relinquish his role in the study because of his colleagues' negative attitudes to educating radiographers in image interpretation and reporting. Such attitudes are unproductive at a time when changes in the way that health care is delivered in Australia appear to be timely and imminent. There is need for collaboration across interprofessional boundaries to ensure that quality and safety are maintained and improved as changes are implemented.

The investigators concede that there are several limitations to this study, generally related to funding and time constraints, as well as to the difficulty of engaging radiologists in this type of research. Only one radiologist's opinion was used as the gold standard. However, that radiologist had meticulously compiled the 25 cases used as the test-object, together with model answers. Furthermore, it is not common practice for more than one radiologist to report on musculoskeletal plain radiographs. The methodology may also be criticised because the same 25 cases were used for both pre- and post-intervention testing, which may have biased the results. This concern is offset by the fact that 4 months elapsed between tests, during which time the radiographers would have seen a large number of other cases; the influence of any such bias is likely to be marginal compared with the effect of the intervention. The other studies mentioned that also involved pre- and post-intervention testing used a similar methodology.

The sample size of radiographers was small and limited to a particular region, which decreases the generalisability of the findings to the broader population. Further, no breakdown of the radiographers' years of experience or other characteristics is given. It was decided at the outset that these independent variables were not relevant, in that all of the radiographers involved in the study were accredited practitioners and regularly worked after-hours, on-call duty or as sole practitioners, or both.

Finally, some limitations in the statistical methods were imposed on the study out of necessity. The test-object contained no negative cases, which precluded the possibility of true negative interpretations by the radiographers. Thus, it was not possible to construct a contingency table, calculate sensitivity and specificity, or plot receiver operating characteristic (ROC) curves.
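
To illustrate this limitation with hypothetical counts (not study data): specificity and ROC analysis require negative cases to supply true negatives and false positives, none of which could occur with an all-abnormal test-object:

```python
# Illustration with hypothetical counts of why the usual diagnostic metrics
# could not be derived: specificity needs negative (normal) cases, and the
# test-object contained abnormal (positive) cases only.

def sensitivity(tp: int, fn: int) -> float:
    """Proportion of abnormal cases correctly identified."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """Proportion of normal cases correctly identified."""
    return tn / (tn + fp)

print(sensitivity(tp=300, fn=100))  # hypothetical counts: 0.75
try:
    specificity(tn=0, fp=0)  # no normal cases existed in the test-object
except ZeroDivisionError:
    print("specificity undefined without negative cases")
```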

One particularly interesting finding of this study is the decrease in the level of accuracy as the radiographers were required to provide a more precise description of their interpretation in the 'open comment' section of the ROF. This suggests that the radiographers had difficulty converting their observations into words with the result that, in some cases, the validity of their answers at the other two ROF levels came into question. This seems to point strongly to a need to further 'up-skill' radiographers in the vocabulary of radiology so that they can better communicate their observations to the doctors and other members of the acute care team, as well as to radiologists. This needs to be addressed in both undergraduate and postgraduate education and training of radiographers.

Conclusions

In spite of the limitations of this study, it is reasonable to conclude that short-term, intensive CE programs can have a positive effect on the ability of radiographers to accurately interpret plain musculoskeletal radiographic examinations. Extrapolating this finding, providing such programs could be beneficial in reducing the risk of misdiagnosis in the emergency department and other acute care settings, especially if no radiologist is available. It may also be argued that encouraging greater collaboration between radiographers and other members of the healthcare team will have a positive effect on teamwork and on patient outcomes.

It has been extensively argued during recent years that extending the role of some health professionals, creating new roles and working more collaboratively will be necessary strategies to meet future growth in demand for health services with declining workforce participation rates. However, if such innovations are to be effective in maintaining or improving service quality and safety they must be reinforced by CE, as well as close monitoring and further research. Initiatives such as the one described in this article may inform future, larger scale development of CE and research in this field.

Acknowledgements

This study was supported by the Australian Government's Primary Health Care Research Evaluation and Development (PHCRED), Researcher Development Program (RDP). The investigators acknowledge the contribution of the radiographers and the positive attitude toward interprofessional education and practice of the radiologist involved in this study.

References

1. Guly HR. Diagnostic errors in an emergency department. Emergency Medicine Journal 2001; 18: 263-269.

2. Willis BH, Sur SD. How good are emergency department Senior House Officers at interpreting X-rays following radiographers' triage? European Journal of Emergency Medicine 2007; 14(1): 6-12.

3. De Lacey G, Barker A, Harper J, Wignall B. An assessment of the clinical effects of reporting accident and emergency radiographs. British Journal of Radiology 1980; 53: 304-309.

4. Berman L, de Lacey G, Twomey E, Twomey B, Welch T, Eban R. Reducing errors in the accident department: a simple method using radiographers. BMJ 1985; 290: 421-422.

5. Espinosa JA, Nolan TW. Reducing errors made by emergency physicians in interpreting radiographs: longitudinal study. BMJ 2000; 320: 737-740.

6. Patty A. X-ray backlog: patients at risk; hospital woes. Sydney Morning Herald Sydney, NSW; 6 October 2007, 1.

7. State of New South Wales. Final Report of the Special Commission of Inquiry: Acute Care in NSW Public Hospitals. 27 November 2008. Available: http://www.lawlink.nsw.gov.au/acsinquiry (Accessed 11 December 2008).

8. Hardy M, Snaith B, Smith T. Radiographer reporting of trauma images: United Kingdom experience and the implications for evolving international practice. The Radiographer 2008; 55(1): 16-19.

9. Brealey S, Scally A, Hahn S, Thomas N, Godfrey C, Coomarasamy A. Accuracy of radiographer plain radiograph reporting in clinical practice: a meta-analysis. Clinical Radiology 2005; 60(2): 234-241.

10. Smith AN. Remote x-ray operator radiography: a case study in interprofessional rural clinical practice. PhD thesis, The University of Newcastle; 2006. Available: http://nova.newcastle.edu.au/vital/access/manager/Repository/uon:709?start=106&sort=creator%2f (Accessed 19 December 2008).

11. Loughran CF. Reporting of fracture radiographs by radiographers: the impact of a training programme. British Journal of Radiology 1994; 67: 945-950.

12. Carter S, Manning D. Performance monitoring during postgraduate radiography training in reporting: a case study. Radiography 1999; 5: 71-78.

13. McConnell JR, Webster AJ. Improving radiographer highlighting of trauma films in the accident and emergency department with a short course of study: an evaluation. British Journal of Radiology 2000; 73(870): 608-612.

14. Brealey S, King DG, Crowe MTI, Crawshaw I, Ford L, Warnock NG et al. Accident and emergency and general practitioner plain radiograph reporting by radiographers and radiologists: a quasi-randomised controlled trial. British Journal of Radiology 2003; 76: 57-61.

15. MacKay SJ. The impact of a short course of study on the performance of radiographers when highlighting fractures on trauma radiographs: 'the red dot system'. British Journal of Radiology 2006; 79(942): 468-472.

16. Smith A, Younger C. Accident and emergency radiological interpretation using the radiographer opinion form (ROF). The Radiographer 2002; 49: 27-31.

