Knowledge

POCUS Competency

Competency Metrics: Interpretation

Methods used to assess POCUS learners range from the use of rating tools and scales to direct image annotation.

Multiple scales exist for assessing learners' image interpretation skills (e.g., ACTS, OSAUS), but many of these tools still rely on subjective expert judgement and longitudinal tracking (e.g., rating learner proficiency on a scale from 1 to 5). When experts rated learners with the ACTS scale, learners improved most over their first 25-30 practice studies before plateauing.

An alternative method of learner assessment is direct ultrasound scan documentation with expert review to determine percent accuracy (e.g., learners annotate the abnormal area of a scan, and their annotations are compared against expert annotations). Using this method, Kwan et al. (2020) found that "For the average (50th percentile) learners, the predicted median number of cases needed across our four applications was 0 to 45 for 80% accuracy, 25 to 97 for 85% accuracy, 66 to 175 for 90% accuracy, and 141 to 290 for 95% accuracy (Figure 4A)" (p. 114).
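As an illustration of the percent-accuracy approach, the sketch below (Python, hypothetical data) tracks a learner's cumulative accuracy over a sequence of practice cases and reports the first case at which each target threshold is reached. It is a simplified illustration only, not the learning-curve modelling used by Kwan et al., and all names and data in it are assumptions.

    # Minimal sketch: track cumulative interpretation accuracy over practice
    # cases and report the first case at which each accuracy threshold is met.
    # `case_results` is hypothetical: True = the learner's call matched the
    # expert reference, False = it did not.
    from typing import Dict, Optional, Sequence


    def cases_to_reach_thresholds(
        case_results: Sequence[bool],
        thresholds: Sequence[float] = (0.80, 0.85, 0.90, 0.95),
    ) -> Dict[float, Optional[int]]:
        """Return the 1-based case number at which cumulative accuracy first
        reaches each threshold, or None if it never does."""
        reached: Dict[float, Optional[int]] = {t: None for t in thresholds}
        correct = 0
        for i, is_correct in enumerate(case_results, start=1):
            correct += int(is_correct)
            accuracy = correct / i
            for t in thresholds:
                if reached[t] is None and accuracy >= t:
                    reached[t] = i
        return reached


    if __name__ == "__main__":
        # Hypothetical practice run: a few early misses, then mostly correct calls.
        results = [False, True, True, False, True] + [True] * 45
        print(cases_to_reach_thresholds(results))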



Scan Documentation With Expert Review


Publication

Kwan C, Pusic M, Pecaric M, Weerdenburg K, Tessaro M, Boutis K. The Variable Journey in Learning to Interpret Pediatric Point-of-care Ultrasound Images: A Multicenter Prospective Cohort Study. AEM Educ Train. 2019 Jul 30;4(2):111-122. doi: 10.1002/aet2.10375. Erratum in: AEM Educ Train. 2021 Mar 05;5(3):e10581.

Key Findings

  • Experts classified images as "normal vs abnormal" and annotated indicators of abnormality on ultrasound study images and clips. Unmarked exams were provided to learners.
  • Learners classified images as "normal vs abnormal" and used a digital marker to annotate indicators of abnormality on ultrasound study images and clips. After each submission, the learner automatically received "immediate visual and written feedback on the correctness of their response, diagnosis of the case, and normal anatomy" (p. 113). A simplified version of this feedback step is sketched after this list.
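The sketch below illustrates the kind of automatic feedback step described above. The data model (a rectangular expert-annotated region and a single learner marker point) and all names are assumptions for illustration, not the platform used in the study.

    # Minimal sketch of an automatic feedback step: compare a learner's
    # "normal vs abnormal" call and marker placement against the expert
    # reference, and return short written feedback.
    from dataclasses import dataclass
    from typing import Optional, Tuple


    @dataclass
    class ExpertReference:
        label: str                                                   # "normal" or "abnormal"
        region: Optional[Tuple[float, float, float, float]] = None   # (x_min, y_min, x_max, y_max)
        diagnosis: str = ""


    @dataclass
    class LearnerResponse:
        label: str                                                   # learner's "normal"/"abnormal" call
        marker: Optional[Tuple[float, float]] = None                 # (x, y) of the learner's digital marker


    def feedback(ref: ExpertReference, resp: LearnerResponse) -> str:
        """Return a short written-feedback string for one submitted case."""
        if resp.label != ref.label:
            return f"Incorrect classification. Case diagnosis: {ref.diagnosis or 'normal study'}."
        if ref.label == "abnormal" and ref.region is not None and resp.marker is not None:
            x, y = resp.marker
            x0, y0, x1, y1 = ref.region
            if not (x0 <= x <= x1 and y0 <= y <= y1):
                return (f"Correct classification, but the marker missed the "
                        f"expert-annotated region. Case diagnosis: {ref.diagnosis}.")
        return f"Correct. Case diagnosis: {ref.diagnosis or 'normal study'}."


    if __name__ == "__main__":
        ref = ExpertReference("abnormal", region=(120, 80, 220, 160), diagnosis="pleural effusion")
        print(feedback(ref, LearnerResponse("abnormal", marker=(150, 100))))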

Paper Implications

  • Use of percent accuracy to judge learner ability (based on image annotation and interpretation as "normal vs abnormal") removed the subjectivity of rating learners on a scale
  • This image annotation and interpretation method enabled immediate and automatic feedback, which could reduce the time required from experts and thereby improve program efficiency



Other Methods


Publication

Skaarup SH, Laursen CB, Bjerrum AS, Hilberg O. Objective and Structured Assessment of Lung Ultrasound Competence: A Multispecialty Delphi Consensus and Construct Validity Study. Ann Am Thorac Soc. 2017;14(4):555-560.

Key Findings

  • In Table 2 (p. 557), the lung ultrasound objective structured assessment of technical skills tool assesses image interpretation under the category "findings" on a scale of 1 to 5, where 1 = "Not able to assess correctly", 3 = "Properly assessed sometimes", and 5 = "Properly assessed every time"
  • Competency components include the following (represented as a simple rubric in the sketch after this list):
    • Correct assessment of pleura
    • Correct assessment of B-lines
    • Correct assessment of consolidations
    • Correct assessment of pleural effusion
    • Correct assessment of whether ultrasound-guided thoracentesis is safe
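To make the structure of the "findings" category concrete, the sketch below represents the five components as a simple 1-5 rubric. The item names and anchors come from Table 2; the aggregation rule (requiring every item to reach a minimum rating) is an illustrative assumption, not a standard defined in the paper.

    # Minimal sketch of the "findings" items as a 1-5 rubric. The minimum-rating
    # check is an assumption for illustration, not a pass/fail rule from the paper.
    from typing import Dict

    FINDINGS_ITEMS = (
        "pleura",
        "B-lines",
        "consolidations",
        "pleural effusion",
        "safety of ultrasound-guided thoracentesis",
    )

    ANCHORS = {
        1: "Not able to assess correctly",
        3: "Properly assessed sometimes",
        5: "Properly assessed every time",
    }


    def meets_minimum(ratings: Dict[str, int], minimum: int = 3) -> bool:
        """True if every findings item was rated at or above `minimum` (1-5)."""
        return all(ratings.get(item, 1) >= minimum for item in FINDINGS_ITEMS)


    if __name__ == "__main__":
        # Hypothetical expert ratings for one learner.
        expert_ratings = {
            "pleura": 4,
            "B-lines": 3,
            "consolidations": 3,
            "pleural effusion": 5,
            "safety of ultrasound-guided thoracentesis": 2,
        }
        print(meets_minimum(expert_ratings))  # False: thoracentesis safety rated below 3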

Paper Implications

  • Assessment of image interpretation can still require expert subjective judgement mapped onto a numerical scale

Publication

Tolsgaard MG, Todsen T, Sorensen JL, Ringsted C, Lorentzen T, Ottesen B, Tabor A. International Multispecialty Consensus on How to Evaluate Ultrasound Competence: A Delphi Consensus Survey. PLoS ONE. 2013;8(2):e57687.

Key Findings

  • In Table 3, the Objective Structured Assessment of Ultrasound Skills (OSAUS) suggests assessing "Interpretation of Images" on a scale from 1 to 5
  • "Interpretation of Images" comprises "recognition of image pattern and interpretation of findings", where 1 = "unable to interpret any findings", 3 = "does not consistently interpret findings correctly", and 5 = "consistently interprets findings correctly"

Paper Implications

  • Assessing "interpretation of findings" may require an expert to know what the correct findings are in order to assess a learner
  • "Consistent interpretation of findings" suggests that competency assessment could need longitudinal tracking

Publication

Millington SJ, Arntfield RT, Guo RJ, Koenig S, Kory P, Noble V, Mallemat H, Schoenherr JR. The Assessment of Competency in Thoracic Sonography (ACTS) scale: validation of a tool for point-of-care ultrasound. Crit Ultrasound J. 2017;9:25.

Key Findings

  • In the Assessment of Competency in Thoracic Sonography (ACTS) tool (page 3 of 8), image interpretation is scored as a binary 0 or 1 in answer to the question "Based on all the images presented, do you feel able to interpret the following:" (see the sketch after this list)
    • Presence or absence of a pneumothorax
    • Presence or absence of an interstitial syndrome
    • Presence or absence of a consolidation
    • Presence or absence of a pleural effusion
    • Where 0 = "image quality does not permit meaningful interpretation" and 1 = "image quality permits meaningful interpretation"
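For contrast with the 1-5 rubrics above, the sketch below represents the four ACTS interpretation items as a binary checklist. The items and the 0/1 anchors come from the tool as summarized above; summing the items into a subtotal is an illustrative assumption, not a scoring rule taken from the paper.

    # Minimal sketch of the four binary ACTS interpretation items: each is
    # scored 0 (image quality does not permit meaningful interpretation) or
    # 1 (it does). The subtotal is for illustration only.
    from typing import Dict

    ACTS_INTERPRETATION_ITEMS = (
        "pneumothorax",
        "interstitial syndrome",
        "consolidation",
        "pleural effusion",
    )


    def interpretation_subtotal(scores: Dict[str, int]) -> int:
        """Sum the 0/1 ratings for the four interpretation items."""
        for item, value in scores.items():
            if item not in ACTS_INTERPRETATION_ITEMS or value not in (0, 1):
                raise ValueError(f"invalid item or score: {item}={value}")
        return sum(scores.values())


    if __name__ == "__main__":
        # Hypothetical ratings for one learner's image set.
        print(interpretation_subtotal({
            "pneumothorax": 1,
            "interstitial syndrome": 1,
            "consolidation": 0,
            "pleural effusion": 1,
        }))  # 3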

Paper Implications

  • Notably, a binary rating rather than a numerical scale is used to assess ultrasound image interpretation, and it reflects whether image quality permits meaningful interpretation rather than the accuracy of the learner's interpretation