The impact of intensive training in preliminary image evaluation (PIE) for radiographers in the emergency department of a regional hospital in New Zealand – A pilot study
K. Lewis a,b,*, S. Mdletshe b, A. Doubleday b, T. Pieterse b
a Radiology Department, Te Whatu Ora Taranaki, New Plymouth, New Zealand
b Department of Anatomy and Medical Imaging, Faculty of Medical and Health Sciences, University of Auckland, Auckland, New Zealand
Article info
Article history: Received 3 December 2023; Received in revised form 4 February 2024; Accepted 13 February 2024
Keywords: Preliminary image evaluation; Preliminary clinical evaluation; New Zealand; Training; Appendicular skeleton
Abstract
Introduction: New Zealand has seen an increase in X-ray examinations in emergency departments (ED), and the radiology report is generally unavailable immediately. This requires the practitioners managing the patient to take responsibility for detecting any abnormalities in the images and using that information in the patient's management. There is, therefore, a need to consider the contribution radiographers could make to the accurate management of patients in EDs in New Zealand. The aim of this study was to assess whether an intensive preliminary image evaluation (PIE) training course improved radiographer accuracy, sensitivity, and specificity on extremity X-ray examinations in a regional ED in New Zealand.
Method: A pre-post-intervention design was employed. Seven radiographers working at a regional base hospital in New Zealand undertook image evaluation tests to evaluate their ability to detect and describe abnormalities before and after a two-day intensive PIE training course. The training concentrated on acute extremity abnormalities. Tests were then scored to determine sensitivity, specificity, and accuracy.
Results: Following the intensive PIE training course, the post-intervention test mean demonstrated improved sensitivity by an average of 3.99% (89.01–93.00%), specificity by an average of 6.13% (79.77–85.90%), and accuracy by an average of 3.32% (77.55–80.87%).
Conclusion: This study demonstrated that an intensive training course in PIE improved the participants' sensitivity, specificity, and accuracy when evaluating acute extremity X-ray examinations in the ED at the study site; however, further research is required to see whether these results also represent clinical ability.
Implication for practice: The NZ healthcare system could benefit from the introduction of a radiographer PIE system. It is therefore recommended that, when introducing PIE into an ED in New Zealand, radiographers undertake additional training to improve image evaluation sensitivity, specificity, and accuracy prior to participation.
© 2024 The Author(s). Published by Elsevier Ltd on behalf of The College of Radiographers. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
Introduction
Patient presentation to New Zealand (NZ) Emergency Departments (ED) has increased significantly over the last 10 years, with a 28.91% increase in presentations from 2011 to 2022.1,2 As a result, the diagnosis and treatment of minor musculoskeletal trauma have frequently become the responsibility of nurse practitioners, clinical nurse specialists, and junior doctors.3,4 The increase in ED patient presentations has led to an increase in demand for X-ray examinations, and frequently a radiology report is not available at the time of diagnosis, leaving the interpretation of X-ray examinations to the ED clinical staff.5,6 A study undertaken in Australia found the ability of ED clinical staff to accurately interpret X-ray examinations varies with clinical experience and level of training.
This raises concerns about treatment delays while junior ED clinicians seek second opinions, incorrect or missed treatment, or inappropriate additional imaging.7 Additionally, another Australian study suggested that 3.1% of fractures are missed on initial acute extremity X-ray examinations by ED clinicians, only to be discovered when the radiology report later becomes available.8 Hence, the most common cause of treatment error in ED is the incorrect interpretation of X-ray images.9 The situation is not dissimilar in the New Zealand healthcare system, where there is a lack of radiologists to provide radiology reports in a timely manner.6 For example, the average time for a radiology report to be completed was confirmed to be 4.52 days (range 1 min–47.35 days), with 24.32% of radiology reports taking longer than 7 days for the period between March 2022 and February 2023 in the study location.10 There is therefore a need to consider alternative approaches to improve the accuracy of the detection of abnormalities. One such approach is preliminary image evaluation (PIE).

* Corresponding author. Radiology Department, Te Whatu Ora Taranaki, New Plymouth, New Zealand. E-mail address: [email protected] (K. Lewis).
Contents lists available at ScienceDirect. Radiography. Journal homepage: www.elsevier.com/locate/radi
https://doi.org/10.1016/j.radi.2024.02.008
1078-8174/© 2024 The Author(s). Published by Elsevier Ltd on behalf of The College of Radiographers. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
Preliminary image evaluation is an abnormality detection system (ADS) that enables radiographers to assist ED clinicians when interpreting acute extremity X-ray examinations and to help minimise missed diagnoses or sub-optimal patient management.11 ADSs have been in use in EDs in the United Kingdom (UK) since the 1980s, when it was recognised that radiographers had the ability to detect abnormalities on X-ray examinations.12 The first system introduced was the 'Red Dot' system, which involved radiographers affixing a red sticker to physical X-ray images when they detected an abnormality.13 This system had significant drawbacks, predominantly ambiguity. For example, the absence of a 'red dot' did not mean the absence of abnormality; it could also mean the radiographer was not participating in the 'red dot' system.14 Additionally, there was no method available for radiographers to specify what the abnormality was, only that an abnormality was present.15 To address these drawbacks, PIE, also referred to as preliminary clinical evaluation, was developed.
This allowed radiographers to write a brief comment outlining exactly what abnormality they had detected on the X-ray examination, or alternatively, indicating that they had not detected any abnormality.15 There has been extensive research in the UK over the last 20 years demonstrating that radiographers have the skill to provide accurate written comments on extremity X-ray examinations in ED when appropriately trained.11,16 In the last 10 years, researchers in Australia, South Africa, Ghana, and Singapore have also found that training radiographers in PIE has similar results.17–20 Additionally, research in Australia has demonstrated that when ED clinicians and radiographers work together, patients have better outcomes due to timely treatment.21 However, a major barrier identified in studies for radiographers participating in a PIE system is a reported lack of confidence in participants' ability to write comments;22 it has also been shown that radiographers' confidence improved following formal PIE training.23
In New Zealand, radiographers have a responsibility to alert ED clinicians to findings on X-ray examinations,24 and image evaluation is part of the undergraduate curriculum.25 However, there is no formal PIE in New Zealand, and no study reports on radiographers' ability to perform PIE.
As the potential benefit of PIE is yet to be explored in New Zealand, this study aimed to assess if an intensive PIE training course for radiographers in NZ improved accuracy, sensitivity, and specificity when performing PIE on extremity X-ray examinations in a regional ED.
Method

Ethics
Ethics approval was granted by the Auckland Health Research Ethics Committee (AH25289) and Te Whatu Ora Taranaki District Chief Medical Advisor.
Study setting and participants
The study was conducted at a regional base hospital in NZ. All radiographers (n = 28) who worked in ED in Te Whatu Ora Taranaki, at either Taranaki Base or Hawera Hospital, were invited to participate in the study. Seven radiographers (25%) agreed to participate. This sample size was deemed adequate as each participant would evaluate 120 images over the two tests, giving a total of 840 images during the study, similar to, or greater than, comparable studies using a pre-post-test methodology.17,18,23

Study design
A pre-post-intervention study design, modelled on studies performed in the UK and Australia, was utilised.8,15,17,23,26,27 The study was conducted in March and April 2023. Participants completed an initial image evaluation test two weeks prior to the intervention to determine their pre-training accuracy, sensitivity, and specificity. Participants then took part in an intensive two-day PIE training course as an intervention, which involved targeted education on the appearance of acute extremity abnormalities on X-ray examinations. Intensive training involves longer training sessions over two days, rather than shorter training sessions held over several weeks.11 Within the two weeks following the intervention, participants completed a post-intervention image evaluation test to determine any changes to accuracy, sensitivity, and specificity. As the time between the two tests was less than a month, different X-ray examinations were used in each image evaluation test to remove the possibility of recall bias. The X-ray examinations selected in both tests reflected what the participant would encounter in typical clinical practice and were reviewed to ensure they were of similar difficulty.
Intensive PIE training
The two-day intensive training course comprised a total of 9 h and included extremity X-ray examinations from shoulder to fingers and hips to toes. The course was developed and delivered by an expert in image evaluation, who has qualifications specific to image interpretation (MHSc image interpretation) and has been teaching image evaluation at post-graduate level for over 10 years.
Image evaluation test formation
The image evaluation tests were modelled on the method described by Neep et al.28 Participants were given 90 min to evaluate 60 extremity X-ray examinations. This allowed 90 s per X-ray examination, as recommended by the Royal College of Radiologists, United Kingdom.29 The same time limit is used in other studies that utilise an image evaluation test.19,23,27 The goal of the image evaluation test was to emulate clinical practice as much as possible. Previous studies that utilised a test bank method had an abnormality prevalence ranging from 25% to 70%.17,30 However, concerns have been raised that using an image evaluation test with a higher abnormality prevalence than would typically be found clinically may lead to an exaggerated sensitivity.31 Therefore, to determine the makeup of the X-ray examinations in the tests, an audit based on the method described by Neep et al.28 was conducted on all extremity X-ray examinations performed in the study location's ED from 1st March 2022 to 28th February 2023. The audit consisted of analysing the radiologist's report for every X-ray examination in the audit period to determine whether the X-ray examination was to be included in the study and, if so, whether an acute abnormality was present.
Following this, the X-ray examinations were assessed to determine the proportions of each extremity region imaged and the prevalence of cases with acute abnormalities in each region. The X-ray examinations for the image evaluation test were then randomly selected from X-ray examinations performed in the six-month period prior to the test, in the proportions of X-ray examination type and abnormality prevalence determined by the audit (Table 1).
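The audit-driven test assembly described above is, in effect, stratified random sampling by region and abnormality status. A minimal sketch in Python of that selection step (the record fields `region` and `acute` and the quota structure are illustrative assumptions, not the study's actual data format):

```python
import random

def build_test(exams, quotas, seed=1):
    """Select examinations region by region, honouring the audited quota
    of acute and non-acute cases (the Table 1 proportions)."""
    rng = random.Random(seed)  # fixed seed for a reproducible selection
    selected = []
    for region, (n_acute, n_total) in quotas.items():
        pool = [e for e in exams if e["region"] == region]
        acute = [e for e in pool if e["acute"]]
        normal = [e for e in pool if not e["acute"]]
        selected += rng.sample(acute, n_acute)
        selected += rng.sample(normal, n_total - n_acute)
    rng.shuffle(selected)  # avoid presenting examinations grouped by region
    return selected

# Example quotas from Table 1: region -> (acute cases, total cases)
quotas = {"wrist/hand": (5, 15), "ankle/foot": (3, 15)}
```

Applied to the full set of Table 1 quotas, a selection like this yields the 60-examination test with 16 acute cases used in the study.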
The selected X-ray examinations were exported to a folder within the Picture Archive Communication System (PACS) and anonymised using the inbuilt tools to remove identifying patient information and the examination report. Participants were instructed that they could adjust the images in the X-ray examinations, such as image contrast, brightness, invert, zoom, and pan, to reflect their everyday clinical practice. For each X-ray examination, participants were given brief clinical indications, and they were required to provide a written comment on each X-ray examination in the image evaluation test. Participants were not given guidelines on what to include in their PIEs for the pre-intervention test; however, the training course included this information. The tests were undertaken in semi-darkened rooms, using standard personal display monitors calibrated to DICOM standards. Participants were asked to create their own unique identifier to use during the study to ensure their identity remained confidential to the researchers.
Scoring system
All image evaluation test answers were scored against the corresponding X-ray examination's radiologist report by a senior radiographer who had previously received training in and performed PIE in Queensland, Australia. While a single scorer can introduce bias, a senior consultant radiologist was consulted to confirm the findings for any discrepancy between the PIE and the initial radiology report, such as a fracture identified in the PIE that was not noted in the radiologist report. Answers were classified as true positive, true negative, false positive, or false negative. Accuracy was scored using the criteria adapted from Neep et al.27 outlined in Table 2, where each answer has a possible score of 3, resulting in a maximum possible accuracy score of 180. Where a participant did not complete all 60 examinations, the maximum possible accuracy score was determined by the number of examinations they completed; for example, a participant who completed 40 examinations had a maximum score of 120, similar to other studies of this type.
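The Table 2 criteria reduce to a small scoring function. The sketch below is illustrative only; the `description` category labels are assumptions about how a scorer might encode their judgement, not the study's actual instrument:

```python
def score_answer(has_abnormality, detected, description=None):
    """Score one PIE answer 0-3 per the Table 2 criteria.

    description: scorer's judgement of the written comment for a detected
    abnormality: "incorrect", "incomplete" or "correct".
    """
    if not has_abnormality:
        # Normal examination: 3 for correctly reporting no abnormality,
        # 0 for reporting a false abnormality.
        return 0 if detected else 3
    if not detected:
        return 0  # abnormality missed
    return {"incorrect": 1, "incomplete": 2, "correct": 3}[description]

# A participant's accuracy is their total score out of 3 points per
# completed examination, e.g. 40 completed examinations -> maximum 120.
```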
Analysis
Results of both pre- and post-intervention image evaluation tests were analysed for individual sensitivity, specificity, and accuracy.
Sensitivity was calculated as the percentage of X-ray examinations with acute abnormalities that were marked as true positive, while specificity was calculated as the percentage of X-ray examinations without acute abnormalities that were marked as true negative.
Accuracy was presented as a percentage by dividing the actual score achieved by the participant by the maximum possible accuracy score for that test. The group mean was also calculated.
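The three calculations described above can be sketched as follows. This is a minimal illustration; the participant 5 counts come from Tables 3–5, and the accuracy score of 154 out of 180 is inferred from the reported 85.56%:

```python
def metrics(tp, tn, fp, fn, score, completed):
    """Return (sensitivity, specificity, accuracy) as percentages."""
    sensitivity = 100 * tp / (tp + fn)        # acute exams marked true positive
    specificity = 100 * tn / (tn + fp)        # normal exams marked true negative
    accuracy = 100 * score / (3 * completed)  # 3 points possible per exam
    return round(sensitivity, 2), round(specificity, 2), round(accuracy, 2)

# Participant 5, pre-intervention: TP=13, TN=40, FP=3, FN=4 over 60 exams
print(metrics(13, 40, 3, 4, 154, 60))  # -> (76.47, 93.02, 85.56)
```

These values reproduce participant 5's pre-intervention row in Table 4.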
Results
The length of time allocated for participants to complete the image evaluation tests proved to be an unexpected challenge. While six of the seven participants completed evaluations for all the X-ray examinations in the pre-intervention image evaluation test, only one participant completed evaluations for all the X-ray examinations in the post-intervention image evaluation test (range 23–52).
Therefore, sensitivity, specificity, and accuracy were calculated based on the number of X-ray examinations the participants evaluated. As a result, there was potential for sensitivity and/or specificity bias to be introduced into the results if the abnormality prevalence differed for each participant.32 This bias was addressed by ensuring the distribution of acute X-ray examinations in the image evaluation tests was randomised, with the final prevalence encountered by the participants in the post-intervention image evaluation test determined by the number of X-ray examinations the participants evaluated (Table 3). Additionally, there may be selection bias, where participants did not complete PIEs for X-ray examinations if they were uncertain of their evaluation. However, when the image evaluation tests were assessed, there were no skipped examinations, with the participants completing questions in order. This implies the participants did not have enough time to complete all of the examinations rather than skipping difficult examinations. These biases were minimised as the prevalence of acute X-ray examinations in the completed portions of the post-intervention tests was within 5% of the prevalence in the overall test.
Table 1
Results of the audit of extremity X-ray examinations performed in ED, 1st March 2022–28th February 2023, to determine the makeup of the image evaluation tests.

| Region | Total examinations | Number with acute abnormalities | Abnormality prevalence | Percentage of total examinations | Acute X-ray examinations in test | Total X-ray examinations in test |
| Shoulder/humerus | 942 | 336 | 35.67% | 12.69% | 3 | 8 |
| Elbow/forearm | 747 | 241 | 32.26% | 10.06% | 2 | 6 |
| Wrist/hand | 1880 | 650 | 34.57% | 25.33% | 5 | 15 |
| Pelvis/femur | 889 | 165 | 18.56% | 11.98% | 1 | 7 |
| Knee/tibia | 1162 | 194 | 16.70% | 15.66% | 2 | 9 |
| Ankle/foot | 1802 | 414 | 22.97% | 24.28% | 3 | 15 |
| Total | 7422 | 2000 | 26.95% | 100.00% | 16 | 60 |
Table 2
Scoring criteria for the image evaluation tests. Adapted from Neep et al.28

For extremity X-ray examinations with traumatic abnormality:
| Abnormality not detected | 0 |
| Abnormality detected, but not described correctly | 1 |
| Abnormality detected; description incomplete (but not incorrect) | 2 |
| Abnormality detected and correctly described in entirety | 3 |
For extremity X-ray examinations without traumatic abnormality:
| False abnormality reported or described | 0 |
| Correct report of absence of any traumatic abnormality | 3 |
Table 3
Number of completed X-ray examinations and prevalence of acute abnormalities encountered.

| Participant | Completed, pre-intervention | Completed, post-intervention | Prevalence, pre-intervention (%) | Prevalence, post-intervention (%) |
| 1 | 60 | 23 | 28.33 | 30.43 |
| 2 | 34 | 32 | 32.35 | 25.00 |
| 3 | 60 | 32 | 28.33 | 28.13 |
| 4 | 60 | 52 | 28.33 | 26.92 |
| 5 | 60 | 60 | 28.33 | 30.00 |
| 6 | 60 | 30 | 28.33 | 26.67 |
| 7 | 60 | 52 | 28.33 | 26.92 |
| Mean | 56.29 | 40.14 | 28.90 | 27.72 |
Table 4 shows the individual results and the group mean for pre- and post-intervention tests. This shows there was a post-intervention increase in sensitivity by an average of 3.99% (range: −12.50 to 17.65), specificity by 6.13% (range: −17.87 to 17.38), and accuracy by 3.32% (range: −12.77 to 14.45). An unexpected finding in this study was the number of participants who were unable to evaluate all post-intervention images. Only one study discussed unattempted examinations in image evaluation tests, and it also found the number of unattempted examinations was higher in the post-intervention test.23 The authors did not discuss the reason for the increased number of unattempted examinations. One potential reason is that participants could have been taking extra time to ensure they were evaluating the images more carefully.
Table 5 shows individual pre- and post-intervention True Positive (TP), True Negative (TN), False Positive (FP), and False Negative (FN) counts. While fewer examinations were completed in the post-intervention test, this table demonstrates that the participants' proportion of false responses decreased in the post-intervention test, with false positives decreasing by an average of 3.8% and false negatives by an average of 1.2%, demonstrating the participants' improved ability to detect abnormalities.
Discussion
This study is considered the first published study in New Zealand that measures radiographers' ability to accurately detect and describe abnormalities on X-ray examinations using PIE. The results of this study aligned with international research findings in PIE.15–17,20,23,33 The results show that three of the seven participants improved their sensitivity, specificity, and accuracy in detecting and describing abnormalities after the intervention. Additionally, the participant who had scored 100% sensitivity pre-intervention not only maintained this for the post-intervention test, but also improved their specificity and accuracy in PIE.
Previous studies have found that while sensitivity increased with training, specificity may decrease as radiographers describe an abnormality on normal X-ray examinations, due to a heightened sensitivity for abnormalities. This same phenomenon can also affect radiographers' accuracy.8,15,17 However, this was not demonstrated in the current study, with only one participant demonstrating increased sensitivity with decreased specificity and accuracy in the post-intervention image evaluation test. Additionally, one participant demonstrated increased specificity, with a decrease in sensitivity but an increase in accuracy. The results indicated participants' accuracy was most affected. This was an unexpected result which may be due to participants commenting on fewer X-ray examinations in the post-intervention test compared with the pre-intervention test. However, this finding aligns with the findings of a similar study by McConnell et al.8

The participants' average sensitivity score in the pre-intervention test was already at a high level (89.01%) with minimal scope to increase, whereas the pre-intervention specificity and accuracy scores allowed for a greater level of improvement. The high level of sensitivity means radiographers can detect abnormalities on X-ray examinations; however, the lower accuracy indicates further training is required to improve the description of abnormalities. When compared with junior doctors and nurses, international studies show radiographers have higher accuracy, sensitivity, and specificity when detecting abnormalities on extremity X-ray examinations without additional training.26,30,34 One study has shown that when radiographers work with ED clinicians using PIE, fewer acute abnormalities are missed on X-ray examinations.21
Currently there is no research on PIE use and accuracy in NZ. Previous NZ research focused on role expansion into radiographer reporting, rather than investigating methods to improve radiographers' skills to fully utilise their scope of practice.35,36 The Medical Radiation Technologists Board, the regulatory body in NZ which outlines the radiographer scope of practice, states in its competency standards that radiographers are responsible to "recognise when it is appropriate to collaborate with and include others in decision making, or to refer decisions on".24 PIE would enable radiographers in NZ to fully utilise their scope of practice by providing an official method for radiographers to collaborate with ED clinicians in clinical decision making. With PIE, radiographers can play a bigger role in the multidisciplinary team in ED, assisting in reducing treatment errors and improving patient outcomes.37
Table 4
Individual and mean results for pre- and post-intervention image evaluation tests.

| Participant | Sensitivity, pre (%) | Sensitivity, post (%) | Specificity, pre (%) | Specificity, post (%) | Accuracy, pre (%) | Accuracy, post (%) |
| 1 | 88.89 | 100.00 | 83.33 | 93.75 | 78.89 | 85.51 |
| 2 | 100.00 | 87.50 | 68.18 | 70.83 | 74.51 | 69.79 |
| 3 | 81.25 | 88.89 | 79.55 | 86.96 | 73.89 | 78.13 |
| 4 | 94.12 | 85.71 | 72.09 | 89.47 | 73.33 | 83.33 |
| 5 | 76.47 | 88.89 | 93.02 | 100.00 | 85.56 | 91.67 |
| 6 | 82.35 | 100.00 | 86.05 | 68.18 | 78.33 | 65.56 |
| 7 | 100.00 | 100.00 | 76.19 | 92.11 | 78.33 | 92.78 |
| Mean | 89.01 | 93.00 | 79.77 | 85.90 | 77.55 | 80.87 |
Table 5
Individual results for True Positive (TP), True Negative (TN), False Positive (FP) and False Negative (FN), pre- and post-intervention.

| Participant | Pre TP | Pre TN | Pre FP | Pre FN | Post TP | Post TN | Post FP | Post FN |
| 1 | 16 | 35 | 7 | 2 | 7 | 15 | 1 | 0 |
| 2 | 12 | 15 | 7 | 0 | 7 | 17 | 7 | 1 |
| 3 | 13 | 35 | 9 | 3 | 8 | 20 | 3 | 1 |
| 4 | 16 | 31 | 12 | 1 | 12 | 34 | 4 | 2 |
| 5 | 13 | 40 | 3 | 4 | 16 | 42 | 0 | 2 |
| 6 | 14 | 37 | 6 | 3 | 8 | 15 | 7 | 0 |
| 7 | 18 | 32 | 10 | 0 | 14 | 35 | 3 | 0 |
| Mean | 14.57 (26.5%) | 32.14 (56.3%) | 7.71 (14.1%) | 1.9 (3.1%) | 10.29 (25.8%) | 25.43 (61.9%) | 3.57 (10.3%) | 0.86 (1.9%) |
Limitations
The limitations identified in this study include selection bias: radiographers who agreed to participate likely have a clear interest in improving their image evaluation skills and were therefore more likely to respond positively to the intensive training course.
This study used an image evaluation test to evaluate radiographers' ability to detect and describe acute abnormalities under test conditions, which may not replicate their ability in clinical practice, because in practice PIE is performed by the radiographer immediately after acquiring the image; the radiographer would therefore have seen the patient and been involved in acquiring the image. While steps were taken to emulate clinical practice, further research would be needed to demonstrate how these results translate to clinical practice. Another limitation of this study is the time limit applied to the image evaluation tests. A time limit has been used by several previous studies utilising image evaluation tests to assess performance;19,27,28 however, this may have affected the results, as many of the participants were unable to complete the post-intervention image evaluation test. This suggests that the recommended time limit of 90 s per examination, while suitable for radiologists, may not be appropriate for PIE, particularly under test conditions. It is suggested that time limits be removed for subsequent studies, as unattempted examinations have the potential to skew results significantly. This study also only demonstrated an immediate post-intervention test improvement for the participants. Previous studies have demonstrated that, when re-tested 6 months after the intervention, radiographers often return to their baseline ability;11 therefore this study cannot comment on long-term retention of skills. A further image evaluation test will be performed after 6 months to determine whether this ability is improved in the long term.
Conclusion
This study demonstrated that the participants' ability to detect and describe abnormalities on acute extremity imaging in ED has the potential to improve with a short intensive PIE training course; however, more research is required to see whether this ability transfers to clinical practice. The introduction of a PIE system in New Zealand EDs could help alleviate the challenge of the immediate availability of radiology reports for skeletal X-ray examinations. It is suggested that, before introducing a PIE system in New Zealand, radiographers undertake additional training to improve image evaluation sensitivity, specificity, and accuracy prior to participation.
Ethics approval
Ethics approval was granted by the University of Auckland Ethics Committee (AH25289) and Te Whatu Ora, Taranaki District Chief Medical Advisor.
Conflict of interest statement

None.
Acknowledgements
The authors thank the New Zealand Institute of Medical Radiation Technology Continuing Education Fund for providing travel and accommodation funding for the intensive training course facilitator, and the facilitator for offering their time and expertise in training the participants.
References
1. Ministry of Health. Emergency department use 2014/2015. 2016.
2. Te Whatu Ora – Health New Zealand. Emergency department presentations. 2023.
3. Cooper E, Neep MJ, Eastgate P. Communicating traumatic pathology to ensure shared understanding: is there a recipe for the perfect preliminary image evaluation? J Med Radiat Sci 2020;67(2):143–50.
4. College of Nurses Aotearoa NZ. NPNZ members list. 2023 [Available from: https://www.nurse.org.nz/npnz-members-list.html].
5. Lidgett T, Pittock L, Piper K, Woznitza N. A pilot study to assess radiographer preliminary clinical evaluation (PCE) introduced for emergency department adult appendicular X-ray examinations: comparison of trained and untrained radiographers. Radiography 2023;29(2):307–12.
6. Doyle AJ. Radiology and Te Whatu Ora – Health New Zealand in 2022. Why we should all care. N Z Med J 2022;135(1564):66.
7. Brown C, Neep MJ, Pozzias E, McPhail S. Reducing risk in the emergency department: a 12-month prospective longitudinal study of radiographer preliminary image evaluations. J Med Radiat Sci 2019;66(3):154–62.
8. McConnell J, Devaney C, Gordon M, Goodwin M, Strahan R, Baird M. The impact of a pilot education programme on Queensland radiographer abnormality description of adult appendicular musculo-skeletal trauma. Radiography 2012;18(3):184–90.
9. Pinto A, Berritto D, Russo A, Riccitiello F, Caruso M, Belfiore MP, et al. Traumatic fractures in adults: missed diagnosis on plain radiographs in the Emergency Department. Acta Biomed 2018;89(1-s):111–23.
10. Austin L. In: Lewis K, editor. RE: official information request; 2023.
11. Murphy A, Ekpo E, Steffens T, Neep MJ. Radiographic image interpretation by Australian radiographers: a systematic review. J Med Radiat Sci 2019;66(4):269–83.
12. Snaith B, Hardy M. Radiographer abnormality detection schemes in the trauma environment: an assessment of current practice. Radiography 2008;14:277–81.
13. Hargreaves J, Mackay S. The accuracy of the red dot system: can it improve with training? Radiography 2003;9(4):283–9.
14. Oglat AA, Fohely F, Masalmeh AA, Jbour IA, Jaradat LA, Athamnah SI. Attitudes toward the integration of radiographers into the first-line interpretation of imaging using the red dot system. Bioengineering (Basel) 2023;10(1).
15. Hardy M, Culpan G. Accident and emergency radiography: a comparison of radiographer commenting and 'red dotting'. Radiography 2007;13(1):65–71.
16. Stevens BJ, Thompson JD. The impact of focused training on abnormality detection and provision of accurate preliminary clinical evaluation in newly qualified radiographers. Radiography (Lond) 2018;24(1):47–51.
17. Williams I, Baird M, Pearce B, Schneider M. Improvement of radiographer commenting accuracy of the appendicular skeleton following a short course in plain radiography image interpretation: a pilot study. J Med Radiat Sci 2019;66(1):14–9.
18. Hazell L, Motto J, Chipeya L. The influence of image interpretation training on the accuracy of abnormality detection and written comments on musculoskeletal radiographs by South African radiographers. J Med Imag Radiat Sci 2015;46(3):302–8.
19. Ofori-Manteaw BB, Dzidzornu E. Accuracy of appendicular radiographic image interpretation by radiographers and junior doctors in Ghana: can this be improved by training? Radiography (Lond) 2019;25(3):255–9.
20. Tay YX, Wright C. Image interpretation: experiences from a Singapore in-house education program. Radiography (Lond) 2018;24(3):e69–73.
21. Petts A, Neep M, Thakkalpalli M. Reducing diagnostic errors in the emergency department at the time of patient treatment. Emerg Med Australasia (EMA) 2023;35(3):466–73.
22. Neep MJ, Steffens T, Owen R, McPhail SM. Radiographer commenting of trauma radiographs: a survey of the benefits, barriers and enablers to participation in an Australian healthcare setting. J Med Imaging Radiat Oncol 2014;58(4):431–8.
23. Neep MJ, Steffens T, Eastgate P, McPhail SM. Evaluating the effectiveness of intensive versus non-intensive image interpretation education for radiographers: a randomised controlled trial. J Med Radiat Sci 2019:5–13.
24. Competence standards for medical imaging and radiation therapy practitioners in Aotearoa New Zealand. 2023.
25. University of Auckland. MEDIMAGE 711: Musculoskeletal trauma image evaluation; 2023 [Available from: https://courseoutline.auckland.ac.nz/dco/course/MEDIMAGE/711/].
26. Lockwood P, Pittock L. Multi-professional image interpretation: performance in preliminary clinical evaluation of appendicular radiographs. Radiography (London, England 1995) 2019;25(4):e95–107.
27. Piper KJ, Paterson A. Initial image interpretation of appendicular skeletal radiographs: a comparison between nurses and radiographers. Radiography 2009;15(1):40–8.
28. Neep MJ, Steffens T, Riley V, Eastgate P, McPhail SM. Development of a valid and reliable test to assess trauma radiograph interpretation performance. Radiography 2017;23(2):153–8.
29. The Royal College of Radiologists. Clinical radiology workload: guidance on radiologists' reporting figures. Royal College of Radiologists; 2012.
30. Coleman L, Piper K. Radiographic interpretation of the appendicular skeleton: a comparison between casualty officers, nurse practitioners and radiographers. Radiography 2009;15(3):196–202.
31. Brealey S. Measuring the effects of image interpretation: an evaluative framework. Clin Radiol 2001;56(5):341–7.
32. Brealey S, Scally AJ. Bias in plain film reading performance studies. Br J Radiol 2001;74(880):307–16.
33. Smith TN, Traise P, Cook A. The influence of a continuing education program on the image interpretation accuracy of rural radiographers. Rural Rem Health 2009;9(2):1145.
34. Liu YM, O'Hagan S, Holdt FC, Lahri S, Pitcher RD. After-hour trauma-radiograph interpretation in the emergency centre of a District Hospital. Afr J Emerg Med 2022;12(3):199–207.
35. Smith T, Yielder J, Ajibulu O, Caruana E. Progress towards advanced practice roles in Australia, New Zealand and the Western Pacific. Radiography (London, England 1995) 2008;14:e20–3.
36. Yielder J, Young A, Park S, Coleman K. Establishing advanced practice for medical imaging in New Zealand. J Med Radiat Sci 2014:14–21.
37. McConnell JR, Baird MA. Could musculo-skeletal radiograph interpretation by radiographers be a source of support to Australian medical interns: a quantitative evaluation. Radiography 2017;23(4):321–9.