Introduction
Debriefing is a core component of simulation-based learning. Reflection on the events of a completed simulation is a crucial step whereby participants learn and modify their behavior.[1] This reflection is generally guided by a facilitator whose goal is to identify knowledge gaps and attempt to address them.[2] Commercially available courses exist to teach debriefing, and workshops devoted to improving these skills are often part of the curriculum at national and international simulation courses. Many programs also conduct their own internal debriefing training. Numerous debriefing models exist to help facilitate the debriefing of participants within a simulation.[3]
Fewer models exist to facilitate the debriefing of the faculty running the simulation. As of this publication date, the International Nursing Association for Clinical Simulation and Learning (INACSL) has created a curriculum of best practices in simulation, including session facilitation and debriefing. While the summary article mentions maintaining debriefing skills through observed practice and peer evaluation, these evaluation tools are not explicitly addressed.[4]
Continuing Education
Several post-simulation evaluation tools have been created to meet this need, but they vary in focus: some assess the simulation as a whole, others address only the debriefing portion, and a few target the facilitator specifically.
OSAD
One early tool created to evaluate the faculty debriefer is the Objective Structured Assessment of Debriefing (OSAD).[5] Described in the surgery literature in 2012, the OSAD was initially developed by researchers through a review of the existing literature and focused interviews of providers and receivers of debriefing. These results were then synthesized into eight features essential to effective debriefing, which became the categories of the final OSAD: approach, environment, engagement, reaction, reflection, analysis, diagnosis, and application. Trained observers rate each category on a 5-point Likert scale with descriptive anchors. The benefits of the OSAD include its brevity (an estimated 5 minutes to complete), validity, and interrater reliability.[5]
A similar development process was used to develop a pediatric-specific OSAD.[6] This tool had related categories and 5-point Likert benchmarks. Several committees and simulation centers later validated and adopted it for debriefing standardization and faculty development.[6]
Citing concerns with the traditional, paper-based form of the OSAD, other researchers have created a modified electronic version (eOSAD).[7] These authors note its good interrater reliability, its ease of use with video-recorded debriefing sessions, and the ability to add comments, a feature missing from the traditional OSAD survey. They also describe protection from data corruption as a benefit, namely by eliminating the risk of losing paper copies or exposing data to transcription errors. However, one major limitation of the eOSAD is its requirement for real-time computer and internet access.
DASH
An alternative to the OSAD for evaluating faculty debriefing and simulation sessions is the Debriefing Assessment for Simulation in Healthcare (DASH). This tool, first described in 2012, evaluates six elements of debriefing.[8] Using descriptions of observable behaviors as anchors, raters score each element on a 7-point effectiveness scale. The tool has been tested and has demonstrated validity and reliability. It requires standardized training to use, which generally takes place via webinar.
The authors of the DASH note that it applies to simulations across various domains and disciplines.[8] Indeed, since its development, it has been used to evaluate debriefing in several contexts. In addition to its use in faculty development, it has measured outcomes in research studies with a simulation component. For example, a modified version of the DASH (the DASH student version) has been used to measure outcomes in faculty-led vs. resident-led debriefing sessions for medical students and residents.[9][10][11] It has also been used to evaluate interprofessional simulation debriefings.[12]
PADI
More recently, the allied health literature has introduced an alternative evaluation tool. The Peer Assessment Debriefing Instrument (PADI) adds a self-evaluation component to the post-debriefing assessment.[13] It evaluates eight aspects of planning and conducting a simulation debriefing. The debriefer and the evaluator each rate performance on multiple elements within each domain on a 4-point Likert scale. They then compare responses, which opens a conversation and allows the debriefer to focus the feedback on particular areas he or she wants the evaluator to address.[13] The PADI is reliable and valid across healthcare disciplines for evaluating debriefing.
The creators of the PADI suggest its benefits include the short time necessary to learn and implement the tool.[14] They also suggest it could be a valuable data source when evaluating teaching skills and effectiveness.[14]
Other Tools
Some authors have created debriefing evaluation tools for their own studies, but these are not commonly used outside that setting.[15][16] Others modify existing scales to fit their needs; for example, a self-reported debriefing quality scale based on the OSAD and DASH was developed to evaluate the TeamGAINS debriefing tool.[17]
Still others devote a portion of a more comprehensive tool to debriefing. One example is the Facilitator Competency Rubric, which offers a more holistic evaluation of nursing simulation facilitators.[18]
Several tools seek participant evaluations of the debriefer as part of a larger assessment of the simulated learning session. These include the Simulation Design Scale, created by the National League for Nursing, the Simulation Effectiveness Tool-Modified, and the Debriefing Experience Scale.[19][20] Each is a post-event evaluation in which students describe their perception of the effectiveness of a faculty member or the simulation event.
Clinical Significance
There is no direct clinical significance to the evaluation of faculty debriefing after simulations. Instead, tools such as the OSAD, DASH, and PADI allow faculty to receive an assessment of their debriefing skills during a single simulation event; this feedback may improve debriefing skills, which may in turn enhance learning and ultimately impact clinical care.
Enhancing Healthcare Team Outcomes
Simulation-based medical education is a growing component of medical and nursing education. It is used to test systems, enhance communication, and improve teamwork.[21][22] The post-simulation debriefing is the primary venue for exploring and addressing knowledge and behavior gaps. Principles such as psychological safety and a nonjudgmental attitude are crucial to enhancing this learning environment.[23] Faculty development of the facilitators leading these debriefing sessions may improve debriefing quality and, by extension, the team's learning.