
Validating Assessment Tools in Simulation

Editor: Stormy M. Monks Updated: 7/24/2023 9:53:36 PM

Introduction

Health care simulation is a growing field that combines innovative technologies and adult learning theory to reproducibly train medical professionals in clinical skills and practices. A wide range of assessment tools is available to assess learners on taught skills and knowledge, and there is stakeholder interest in validating these tools. Reliable quantitative assessment is critical for high-stakes certification, such as licensing and board examinations. Evaluation in healthcare simulation spans many purposes, from educating new learners and training current professionals to systematic review of programs to improve outcomes. Validation of assessment tools is essential to ensure that they are both valid and reliable. Validity refers to whether a measuring instrument measures what it is intended to measure. Reliability, which forms part of the validity assessment, refers to the consistency or reproducibility of an assessment tool's results: the tool should yield the same results for the same type of learner every time it is used. In practice, actual healthcare delivery requires technical, analytical, and interpersonal skills, so assessment systems must be comprehensive, valid, and reliable enough to assess these elements along with critical knowledge and skills. Validating assessment tools for healthcare simulation education ensures that learners can demonstrate the integration of knowledge and skills in a realistic setting.

The assessment process itself influences curriculum development, as well as feedback and learning.[1] Recent developments in psychometric theory and standard setting have made the assessment of professionalism, communication, procedural, and clinical skills more efficient.[2] Ideally, simulation developers should reflect on the purpose of the simulation to determine whether the focus will be on teaching or learning.[3] If the focus is on teaching, then assessments should focus on performance criteria with exercises for a set of skill-based experiences; this assesses the teaching method's effectiveness in task training. Alternatively, if the focus of the simulation is to determine higher-order learning, then the assessment should be designed to measure multiple integrated abilities such as factual understanding, problem-solving, analysis, and synthesis.[4] In general, multiple assessment methods are necessary to capture all relevant aspects of clinical competency. For higher-order cognitive assessment (knowledge, application, and synthesis of knowledge), context-based multiple-choice questions (MCQ), extended matching items, and short answer questions are appropriate.[5] For the demonstration of skills mastery, a multi-station objective structured clinical examination (OSCE) is viable.[6] Performance-based assessments such as the Mini-Clinical Evaluation Exercise (mini-CEX) and Direct Observation of Procedural Skills (DOPS) have a positive effect on learner comprehension.[7] For the advanced professional continuing learner, clinical work sampling and a portfolio or logbook may be used.

In an assessment, the developers select an assessment instrument with known characteristics. A wide range of assessment tools is currently available for the assessment of knowledge, application, and performance.[8] The assessment materials are then created around learning objectives, and the developers directly control all aspects of delivery and assessment. The content should relate to the learning objectives, and the test should be comprehensive enough to produce reliable scores. This ensures that the performance is wholly attributable to the learner and not an artifact of curriculum planning or execution. Additionally, different versions of the assessment that are comparable in difficulty permit comparisons among examinees and against standards.

Learner assessment is a wide-ranging decision-making process with implications beyond student achievement alone. It is also related to program evaluation and provides important information to determine program effectiveness. Valid and reliable assessments satisfy accreditation needs and contribute to student learning.


Function

Validity and Reliability in Assessments

The intended purpose of an assessment is to determine whether an assessment tool gives valid and reliable results that are wholly attributable to the learner. An assessment tool must give reproducible results, and after many trials, statistical analyses can identify areas of variation, which determines the effectiveness of the tool. If there is a previously established ideal against which others are compared, then results from a novel assessment should be correlated with it; however, often no such "gold standard" exists, and the comparison should instead be made with similar assessment tools.

Reliability relates to the uniformity of a measure.[9] For example, a learner completing an assessment meant to measure the effectiveness of chest compressions should have approximately the same results each time the test is completed. Reliability can be estimated in several ways and will vary depending upon the type of assessment tool used.[10]

  1. The Kuder-Richardson coefficient for two-answer questions and Cronbach's alpha for questions with more than two answers can be used to measure internal consistency, where 2 to 3 questions that measure the same concept are generated and the correlation among the answers is measured. Strong correlations, with a reliability estimate as close to 1 as possible (higher than 0.7), indicate high reliability, while weak correlations indicate that the assessment tool may not be reliable (these estimates are sketched in code after this list).
  2. Test-retest reliability is measured when an assessment tool is given to the same learners more than once, at different times, and under similar conditions; the correlation between the measurements at the different time points is then calculated. Parallel-form (or alternate-form) reliability is similar, except that a different form of the original assessment is given to the learners in subsequent evaluations: the concepts being tested are the same in both versions, but the phrasing differs.[11] This approach gives a more conservative reliability estimate than Cronbach's alpha but requires at least two rounds of the assessment, whereas Cronbach's alpha can be calculated after a single assessment. Reliability is measured as a correlation with the Pearson r, a statistic that measures the linear correlation between two variables X and Y and takes values between +1 and −1. In general, a correlation coefficient of less than 0.3 signifies a weak correlation, 0.3 to 0.5 is moderate, and greater than 0.5 is strong.[9]
  3. Interrater reliability is used to study the effect of different assessors using the same assessment tool. Consistency in rater scores reflects the level of interrater reliability of the assessment tool. It is estimated with Cohen's kappa, which compares the proportion of actual agreement between raters to the proportion expected to agree by chance.
  4. Analysis of variance (ANOVA) can be used to generate a generalizability coefficient. Generalizability theory recognizes that multiple sources of error and true score variance exist and that measures may have different reliabilities in different situations. This method aims to quantify how much measurement error is attributable to each potential factor, such as differences in question phrasing, learner attributes, raters, or time between assessments, and thereby addresses the overall reliability of the results.
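
As a rough illustration of these reliability estimates, the following Python sketch computes Cronbach's alpha, a test-retest (Pearson r) correlation, and Cohen's kappa. The data, variable names, and helper functions are hypothetical examples rather than part of any validated instrument.

```python
# Minimal sketch of the reliability estimates discussed above.
# All data and function names are hypothetical illustrations.
import numpy as np
from scipy.stats import pearsonr


def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Internal consistency; rows = learners, columns = items measuring one concept."""
    k = item_scores.shape[1]
    item_variances = item_scores.var(axis=0, ddof=1)
    total_variance = item_scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)


def cohens_kappa(rater_a: np.ndarray, rater_b: np.ndarray) -> float:
    """Interrater agreement corrected for the agreement expected by chance."""
    observed = np.mean(rater_a == rater_b)
    categories = np.union1d(rater_a, rater_b)
    expected = sum(np.mean(rater_a == c) * np.mean(rater_b == c) for c in categories)
    return (observed - expected) / (1 - expected)


# Hypothetical data: 5 learners answering 3 items that target the same concept (1-5 scale).
items = np.array([[4, 5, 4], [3, 3, 4], [5, 5, 5], [2, 3, 2], [4, 4, 5]])
print(f"Cronbach's alpha: {cronbach_alpha(items):.2f}")  # > 0.7 suggests acceptable reliability

# Test-retest (or parallel-form) reliability: same learners, two administrations.
time1 = np.array([70, 82, 90, 65, 78])
time2 = np.array([72, 80, 93, 68, 75])
r, _ = pearsonr(time1, time2)
print(f"Test-retest Pearson r: {r:.2f}")  # > 0.5 is a strong correlation

# Interrater reliability: two raters scoring the same 5 performances as pass (1) / fail (0).
rater_a = np.array([1, 0, 1, 1, 0])
rater_b = np.array([1, 0, 1, 0, 0])
print(f"Cohen's kappa: {cohens_kappa(rater_a, rater_b):.2f}")
```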

The validity of an assessment tool refers to how well the tool measures what it intends to measure. High reliability alone does not establish the efficacy of an assessment tool; other evidence of validity is necessary to determine the integrity of the assessment approach. Determining validity requires evidence to support the use of the assessment tool in a particular context. The development of new tools is not always necessary, but the tools used must be appropriate for the program's activities, and reliability and validity should be reported or references cited for each assessment tool used.[12] Validity evidence, as stipulated in the Standards for Educational and Psychological Testing and briefly outlined here, is necessary for building a validity argument to support the use of an assessment for a specific purpose.[13] The five sources of validity evidence are:

  1. Evidence for content validity is the "relationship between a test's content and the construct it is intended to measure." This refers to the themes, wording, and format of the items presented in an assessment tool.[14] It should include analyses by independent subject matter experts (SMEs) of how adequately the items represent the targeted domain. Additionally, if assessments that have been previously used in similar settings cannot be utilized, the development of new assessment tools should be based on established educational theories.[10] If learners improve their scores after receiving additional training, this adds to the validity evidence for the assessment tool.
  2. The response process involves analysis of the responses to the assessment, including the strategies and thought processes of individual learners. Analyzing the variance in response patterns between different types of learners may reveal sources of inconsistency that are irrelevant to the concept being measured.[15]
  3. The internal structure of the assessment tool refers to "the degree to which the relationships among test items and test components conform to the construct on which the proposed test score interpretations are based." Evidence to support the internal structure of an assessment includes dimensionality, measurement invariance, and reliability.[16] For example, an assessment that intends to report one composite score should be mostly unidimensional. Evidence for measurement invariance should include item characteristics that are comparable across demographics such as gender or race. Reliability, as previously discussed, should report that assessment outcomes are consistent throughout repeated test administrations.
  4. Evidence for the relation to other variables involves the statistical relationship between assessment scores and another measure relevant to the measured construct. A strongly positive relationship indicates two measures of the same construct, whereas a negligible relationship is expected between measures that should be independent (see the sketch after this list).[17]
  5. Consequences refer to effects from the administration of the assessment tool. In other words, consequences evidence assesses the impact, whether positive or negative and intended or unintended, of the assessment itself.[18] This can either support or contest the soundness of score interpretations.
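
To make sources 3 and 4 more concrete, here is a small Python sketch, using simulated data, of two common checks: approximate unidimensionality of the inter-item correlation matrix (internal structure) and the correlation of assessment totals with an external measure of the same construct (relation to other variables). The data, thresholds, and the choice of an eigenvalue check are illustrative assumptions.

```python
# Illustrative checks for internal structure and relation to other variables.
# The simulated data and specific thresholds are assumptions for this sketch.
import numpy as np

rng = np.random.default_rng(0)
items = rng.integers(1, 6, size=(30, 8)).astype(float)      # 30 learners x 8 items (1-5 scale)
totals = items.sum(axis=1)                                   # composite score to be reported
external = totals + rng.normal(0, 2.0, size=30)              # e.g., a related mini-CEX score

# Internal structure: if one composite score is reported, the first eigenvalue of the
# inter-item correlation matrix should dominate (approximate unidimensionality).
eigenvalues = np.sort(np.linalg.eigvalsh(np.corrcoef(items, rowvar=False)))[::-1]
print(f"Variance explained by first component: {eigenvalues[0] / eigenvalues.sum():.0%}")

# Relation to other variables: totals should correlate strongly with a measure of the
# same construct and only weakly with measures expected to be independent.
r = np.corrcoef(totals, external)[0, 1]
print(f"Correlation with related external measure: {r:.2f}")
```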

Issues of Concern

Threats to validity can weaken the validity argument for an assessment.[19] These threats are alternative factors that influence a learner's performance but are unrelated to the knowledge or skills being assessed. Potential threats to validity include 1) low reliability, 2) misalignment of the assessment with the learning objectives, and 3) interface difficulty, such as inadequate instructions or a lack of computer skills for computerized assessments. To build validity evidence, all threats to validity should be addressed and eliminated.

Another question is how to determine reliability and validity in healthcare simulation, which in many cases can be subjective. To reduce subjectivity, the assessment tool should be designed so that there is little room for assessor interpretation, such as by using a uniform rubric; however, even this would not remove all individual bias, and this limitation should be stated.

Curriculum Development

To develop a curriculum and its assessment, a literature search should be performed to find previously developed measures for the intended outcomes. If an assessment tool is to be modified for a particular setting or group of learners, describe the modifications and include support for how these changes improve suitability for the new situation. Discussion of the adaptation is warranted if 1) previously characterized assessment tools are modified, 2) the assessment is used for a different setting, purpose, or set of learners, or 3) the outcomes are interpreted differently. Potential limitations of the new approach should also be disclosed and discussed, including whether the modifications are likely to affect the validity or reliability of the assessment tools. If a previously characterized assessment tool is used in the same setting, with the same types of learners, and for the same purpose, then citing the referenced literature is appropriate. Additionally, the developers of novel assessment tools should describe the development process and present any data that substantiate validity and reliability.

Developers should reflect on the purpose of the simulation to determine whether the focus will be on teaching or learning.[3] If the focus is on teaching, then assessments should evaluate mastery of taught skills through performance criteria as well as weigh the teaching method's effectiveness in task training. Alternatively, if the focus is to determine higher-order learning, then the assessment should be designed to measure multiple integrated abilities such as factual understanding, problem-solving, analysis, and synthesis.[4] This can be accomplished with performance assessments that use problem-solving experiences meant to draw on a longitudinal set of acquired skills. Higher-order learning assessments should follow established psychometric and adult learning theory to guide their design.[4]

Clinical Clerkships

Assessment of theoretical knowledge in clinical clerkships is generally constructed around learner performance on examinations and assignments. Assessment of clinical competence, however, is more complicated and involves the degree to which theoretical knowledge is applied, decision-making skills, and the ability to act in a shifting environment.[20] Clinical assessment requires the use of diverse measures such as journals, surveys, peer evaluations, self-assessments, and learner interviews. There is an inherent difficulty in clinical assessment because some degree of subjectivity and interrater bias will always exist.[21] Integrating a rubric-based assessment tool with a rating scale, clearly defined performance criteria, and a detailed description of performance at each level helps to overcome such interrater bias.[22] Assessment tools for clinical clerkships therefore need an emphasis on validity evidence that accounts for interrater agreement; for instance, observer ratings of resident performance would need to mitigate interrater bias to be validated. Learners should be presented with the assessment tool before the learning experience, during the description of the learning objectives, so that they are aware of expectations.

Traditionally, oral examinations have poor content validity, high interrater variability, and inconsistency that depends on the grading schema; this assessment tool is therefore susceptible to bias and is intrinsically unreliable. Alternatives to this assessment approach include short answer questions, extended matching items, key feature tests, the OSCE, the Mini-CEX, and DOPS. A short description of each follows:

The Short Answer Question (SAQ) or Modified Essay Question (MEQ) is an open-ended, semi-structured question format. The questions can incorporate clinical scenarios, and equivalent or higher test reliabilities can be reached with fewer items when compared to true/false questions.[23]

The Extended Matching Item (EMI) is a written examination format, similar to multiple-choice questions (MCQs), in which items are based on a single theme. The key difference is that EMIs can be used to assess clinical scenarios and diagnostic reasoning while maintaining an objectivity and consistency that is not readily achieved with MCQs.[24]

Key-feature questions (KFQs) have been developed to assess clinical reasoning skills. Examinations using KFQs focus on the diagnosis and management of a clinical problem where learners are most likely to make errors. More than other methods, KFQs illuminate the strengths and limits of a learner's clinical competency, and this assessment tool is more likely than other forms of evaluation to differentiate between stronger and weaker candidates in the area of clinical reasoning.[25][26]

The Objective Structured Clinical Examination (OSCE) involves multiple stations at which each learner performs a defined task. This tool can be used to assess competency based on direct observation. Unlike a traditional clinical exam, the OSCE can evaluate areas critical to performance, such as communication skills and the ability to handle changing patient behaviors.[27]

The Mini-Clinical Evaluation Exercise (Mini-CEX) is a rating scale in which an expert observes and rates the learner's performance. It was developed to assess six core competencies in residents: medical interviewing skills, physical examination skills, professionalism, clinical judgment, counseling skills, and organization and efficiency.

Similarly, Direct Observation of Procedural Skills (DOPS) is a structured rating scale for assessing and providing feedback on practical procedures. The competencies that are commonly evaluated are general knowledge about the procedure, informed consent, pre-procedure preparation, analgesia, technical ability, aseptic technique, and counseling and communication.

Procedural Skills Assessment

Methods to evaluate procedural skills, the area where simulation is most commonly used as a teaching modality, differ from those used in clinical clerkships.[3] There are two general approaches to measuring procedural skills: global rating scales (GRS) and checklists. The global rating scale is based on the Objective Structured Assessment of Technical Skills (OSATS) tool used to evaluate surgical residents' technical skills. It can be modified and validated for content, response process, and interrater reliability to evaluate learner performance for varied procedures and with standardized patients.[28][29][30] While global rating scales are subjective, they provide flexibility when innovative approaches are required to assess decision-making skills, team management, and patient assessments.

A training checklist can determine if the learner appropriately prepares for and executes a task by checking off whether they independently and correctly perform the specified exercise. Given that procedures are likely to have sequential steps, checklists are appropriate for assessing technical skills. Checklists allow for a thorough, structured, and objective assessment of the modular skills for a procedure.[31] While a detailed checklist may include more steps than a trainee can memorize, it serves as a useful instruction tool for guiding the learner through a complex technique.[32]

Alternatively, the global rating scale allows a rater to evaluate the degree (on a 1 to 5 scale) to which a learner performs all steps in a given assessment exercise. Paired together, the checklist and the global rating scale form an effective assessment tool for procedural skills.[33]
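
As an illustration only, the sketch below pairs a simple checklist with a global rating scale for a single procedural assessment; the steps, rating domains, and scoring are hypothetical and not drawn from a validated instrument.

```python
# Hypothetical checklist/global-rating-scale pairing for one procedural assessment.
from dataclasses import dataclass, field


@dataclass
class ProceduralAssessment:
    checklist: dict[str, bool] = field(default_factory=dict)       # step -> performed correctly
    global_ratings: dict[str, int] = field(default_factory=dict)   # domain -> 1-5 rating

    def checklist_score(self) -> float:
        """Proportion of required steps performed independently and correctly."""
        return sum(self.checklist.values()) / len(self.checklist)

    def global_score(self) -> float:
        """Mean of the 1-5 global ratings across domains."""
        return sum(self.global_ratings.values()) / len(self.global_ratings)


# Example: steps and domains are illustrative, not a validated instrument.
result = ProceduralAssessment(
    checklist={"verifies consent": True, "maintains sterile field": True,
               "confirms placement": False},
    global_ratings={"instrument handling": 4, "flow of procedure": 3,
                    "communication": 5},
)
print(f"Checklist: {result.checklist_score():.0%}, Global rating: {result.global_score():.1f}/5")
```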

Medical Decision Making and Leadership Development

Medical training requires a high level of cognitive function and confidence in the decision-making process. These abilities are fundamental to being a leader; thus, there is an essential need to provide opportunities for clinicians to develop leadership abilities. Additionally, when clinicians influence how services are delivered, there is greater confidence that patient care remains central to their function and is not directed by external agendas.[34]

Leadership training as part of the resident curriculum can significantly increase confidence in leadership skills in terms of alignment, communication, and integrity, attributes previously shown in business models to be essential for effective and efficient teams.[35] Assessing these attributes with a pre- and post-administered scorecard survey can determine whether confidence in decision making and leadership development has improved with training. Previous studies on the implementation of leadership courses have shown that the experience is enjoyable and results in enhanced leadership skills for the participants.[36]
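
As a sketch of how such a pre- and post-course scorecard might be compared (the ratings, scale, and choice of a paired test are illustrative assumptions):

```python
# Hypothetical pre/post leadership-confidence ratings (1-5 Likert) for 8 residents.
import numpy as np
from scipy.stats import ttest_rel, wilcoxon

pre = np.array([2, 3, 3, 2, 4, 3, 2, 3])
post = np.array([4, 4, 3, 3, 5, 4, 3, 4])

t, p = ttest_rel(post, pre)      # paired comparison of mean confidence
w, p_w = wilcoxon(post, pre)     # nonparametric alternative for ordinal ratings
print(f"Mean change: {np.mean(post - pre):+.2f}, paired t p={p:.3f}, Wilcoxon p={p_w:.3f}")
```

In practice, the choice between a paired t-test and a nonparametric alternative depends on the scale and distribution of the scorecard items.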

Another assessment tool for ethical decision making (EDM) stems from a modified Delphi method, through which a theoretical framework and a 35-statement self-assessment tool, the Ethical Decision-Making Climate Questionnaire (EDMCQ), were developed. The EDMCQ is meant to capture three EDM domains in healthcare: interdisciplinary collaboration and communication, leadership, and ethical environment. This assessment tool has subsequently been validated in 13 European countries and the USA and has been useful in assessing clinicians' EDM and the EDM climate of healthcare organizations.[37]

Continuing Education

Medical science is an ever-evolving field with advances in all aspects of patient care that stem from basic, translational, and clinical investigations. Clinicians, therefore, engage in lifelong learning to keep up with these changes throughout their professional lives. Continuing medical education (CME) is such a critical aspect that it is required in most states for medical license renewal. Involvement in CME may include a variety of learning experiences to keep the learner up to date in their area of expertise.

In recent decades, there has been a transformation of CME in the USA due to stakeholder concerns over the cost of healthcare, the frequency of medical errors, fragmentation of patient care, commercial influence, and the competence of healthcare professionals.[38] The resulting recommendations from the Institute of Medicine have led to strategies to address these challenges. Five themes, grounded in educational and politico-economic priorities for healthcare in the USA, motivated these developments: 1) a shift from attendance- and time-based credits to metrics that infer competence, 2) an increased focus on cross-professional competencies that foster coordinated care delivery, 3) integration of continuing professional development (CPD) with quality improvement and linkage to evidence-based science, 4) expansion from disease-specific CPD to address complex population and public health issues, and 5) standardization of continuing medical education competencies by measuring outcomes of participation and through performance improvement initiatives. The overall goals are improved effectiveness and efficiency of the healthcare system to meet the needs of patients.

Clinical Significance

Competency-based medical education (CBME) has become mainstream in medical education and assessment for the next generation of clinicians.[39] Providing higher quality care and reducing variation in healthcare delivery were significant motivators for the implementation of CBME, as multiple studies demonstrated systemic failures in healthcare improvement [40] and indicated that residency training has a significant influence on future performance.[41][42][43]

A conceptual framework for clinician assessment has been developed that seeks to address the issues of competence and performance in clinical settings. The seven-level outcomes framework progresses from participation (level 1) through to the evaluation of the impact of practice changes on patient and community health. Additionally, this framework can aid in the development of new CME assessment tools that are in agreement with the industry-wide paradigm shift in continuing professional development. The framework, fully described by Moore et al. (2009), is briefly outlined here, with a minimal programmatic sketch after the list [44]:

  • Level 1: Participation in CME determined through attendance records.
  • Level 2: Learner satisfaction determined through questionnaires following the CME activity.
  • Level 3: The level of declarative knowledge or procedural learning assessed by pre- and post-tests (objective) or self-reporting of knowledge gained (subjective).
  • Level 4: Competence determined through observation (objective) or self-reporting.
  • Level 5: The degree to which the participants performed learned skills in practice can be assessed objectively by observation of performance in a patient care setting or through patient chart studies.
  • Level 6: The degree to which the health status of patients improves as a result of CME can be assessed through chart studies and administrative databases.
  • Level 7: The effect of CME on public and community health can be assessed through epidemiological data or community surveys. 
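
One possible way to keep this framework at hand when tagging CME assessment data is a minimal lookup table, sketched below; the short labels and example data sources are paraphrased from the list above and are not part of the original framework text.

```python
# Moore et al. (2009) outcomes framework encoded as a simple lookup table.
# Labels are paraphrased; data sources are examples from the list above.
MOORE_LEVELS = {
    1: ("Participation", "attendance records"),
    2: ("Satisfaction", "post-activity questionnaires"),
    3: ("Learning (declarative/procedural)", "pre- and post-tests or self-report"),
    4: ("Competence", "observation or self-report"),
    5: ("Performance", "observation in patient care or chart studies"),
    6: ("Patient health", "chart studies and administrative databases"),
    7: ("Community health", "epidemiological data or community surveys"),
}

def describe(level: int) -> str:
    name, source = MOORE_LEVELS[level]
    return f"Level {level}: {name} (assessed via {source})"

print(describe(5))
```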

This framework was specifically developed to aid CME developers in assessing clinician competence, performance, and patient health status in an actual healthcare setting.

Pearls and Other Issues

Health care simulation is a growing field that combines innovative technologies and adult learning theory to train medical professionals in clinical skills and practices reproducibly.

The intended purpose of an assessment is to determine whether an assessment tool gives valid and reliable results that are wholly attributable to the learner.

Threats to validity are factors unrelated to the assessed knowledge or skills that can weaken an assessment's validity argument.

To develop a curriculum and subsequent assessment, a literature search should take place to find previously developed measures for outcomes. If the assessment tool is to be modified for a particular setting or learners, describe the modifications and include support for how these changes improve suitability for the novel situation.

Assessment for theoretical knowledge in clinical clerkships is generally constructed around learner performance in examinations and assignments.

Methods to evaluate procedural skills, the area where simulation is most commonly used as a teaching modality, differ from those used in clinical clerkships.

Medical training requires a high level of cognitive function and confidence in the decision-making process. These abilities are fundamental to being a leader; thus, there is an essential need to provide opportunities for clinicians to develop leadership abilities.

Medical science is an ever-evolving field with advances in all aspects of patient care that stem from basic, translational, and clinical investigations. Clinicians, therefore, engage in lifelong learning to keep up with these changes throughout their professional lives.

Providing higher quality care and reducing variation in healthcare delivery were significant motivators for the implementation of CBME, as multiple studies demonstrated systemic failures in healthcare improvement and indicated that residency training has a significant influence on future performance.

Enhancing Healthcare Team Outcomes

It has become necessary to develop medicine as a cooperative science; the clinician, the specialist, and the laboratory workers uniting for the good of the patient, each assisting in the elucidation of the problem at hand, and each dependent upon the other for support. –William J. Mayo, Commencement speech at Rush Medical College, 1910

Patients receive safer, higher quality care when a network of providers works as an effective healthcare team.[45] Evidence regarding the effectiveness of team-training interventions has grown and includes patient outcomes, such as mortality and morbidity, and quality-of-care indices.[46] Secondary outcomes include teamwork behaviors, knowledge, skills, and attitudes.[47]

Simulation- and classroom-based team training can improve teamwork processes such as communication, coordination, and cooperation, and these improvements correlate with improvements in patient safety outcomes. Team training interventions are a practical approach that organizations can take to enhance team, and therefore patient, outcomes. These exercises have been shown to improve cognitive and affective outcomes, teamwork processes, and performance outcomes. The most effective healthcare team training interventions were those reported to include organizational changes that support the teamwork environment and the transfer of these competencies into daily practice.[45]

References


[1]

Jaye P, Thomas L, Reedy G. 'The Diamond': a structure for simulation debrief. The clinical teacher. 2015 Jun:12(3):171-5. doi: 10.1111/tct.12300. Epub     [PubMed PMID: 26009951]


[2]

Norcini J, Anderson MB, Bollela V, Burch V, Costa MJ, Duvivier R, Hays R, Palacios Mackay MF, Roberts T, Swanson D. 2018 Consensus framework for good assessment. Medical teacher. 2018 Nov:40(11):1102-1109. doi: 10.1080/0142159X.2018.1500016. Epub 2018 Oct 9     [PubMed PMID: 30299187]

Level 3 (low-level) evidence

[3]

Kaakinen J, Arwood E. Systematic review of nursing simulation literature for use of learning theory. International journal of nursing education scholarship. 2009:6():Article 16. doi: 10.2202/1548-923X.1688. Epub 2009 May 7     [PubMed PMID: 19492985]

Level 1 (high-level) evidence

[4]

Adams NE. Bloom's taxonomy of cognitive learning objectives. Journal of the Medical Library Association : JMLA. 2015 Jul:103(3):152-3. doi: 10.3163/1536-5050.103.3.010. Epub     [PubMed PMID: 26213509]


[5]

Sood R, Singh T. Assessment in medical education: evolving perspectives and contemporary trends. The National medical journal of India. 2012 Nov-Dec:25(6):357-64     [PubMed PMID: 23998869]

Level 3 (low-level) evidence

[6]

Mossey PA, Newton JP, Stirrups DR. Scope of the OSCE in the assessment of clinical skills in dentistry. British dental journal. 2001 Mar 24:190(6):323-6     [PubMed PMID: 11325158]


[7]

Lörwald AC, Lahner FM, Mooser B, Perrig M, Widmer MK, Greif R, Huwendiek S. Influences on the implementation of Mini-CEX and DOPS for postgraduate medical trainees' learning: A grounded theory study. Medical teacher. 2019 Apr:41(4):448-456. doi: 10.1080/0142159X.2018.1497784. Epub 2018 Oct 28     [PubMed PMID: 30369283]


[8]

Al-Wardy NM. Assessment methods in undergraduate medical education. Sultan Qaboos University medical journal. 2010 Aug:10(2):203-9     [PubMed PMID: 21509230]


[9]

Heale R, Twycross A. Validity and reliability in quantitative studies. Evidence-based nursing. 2015 Jul:18(3):66-7. doi: 10.1136/eb-2015-102129. Epub 2015 May 15     [PubMed PMID: 25979629]


[10]

Litzelman DK, Westmoreland GR, Skeff KM, Stratos GA. Factorial validation of an educational framework using residents' evaluations of clinician-educators. Academic medicine : journal of the Association of American Medical Colleges. 1999 Oct:74(10 Suppl):S25-7     [PubMed PMID: 10536584]

Level 1 (high-level) evidence

[11]

Reeves TD, Marbach-Ad G. Contemporary Test Validity in Theory and Practice: A Primer for Discipline-Based Education Researchers. CBE life sciences education. 2016 Spring:15(1):rm1. doi: 10.1187/cbe.15-08-0183. Epub     [PubMed PMID: 26903498]


[12]

Sullivan GM. A primer on the validity of assessment instruments. Journal of graduate medical education. 2011 Jun:3(2):119-20. doi: 10.4300/JGME-D-11-00075.1. Epub     [PubMed PMID: 22655129]


[13]

Sireci S, Faulkner-Bond M. Validity evidence based on test content. Psicothema. 2014:26(1):100-7. doi: 10.7334/psicothema2013.256. Epub     [PubMed PMID: 24444737]


[14]

Beckman TJ, Cook DA, Mandrekar JN. What is the validity evidence for assessments of clinical teaching? Journal of general internal medicine. 2005 Dec:20(12):1159-64     [PubMed PMID: 16423109]


[15]

McLeod PJ, James CA, Abrahamowicz M. Clinical tutor evaluation: a 5-year study by students on an in-patient service and residents in an ambulatory care clinic. Medical education. 1993 Jan:27(1):48-54     [PubMed PMID: 8433660]


[16]

Rios J, Wells C. Validity evidence based on internal structure. Psicothema. 2014:26(1):108-16. doi: 10.7334/psicothema2013.260. Epub     [PubMed PMID: 24444738]


[17]

Cook DA, Zendejas B, Hamstra SJ, Hatala R, Brydges R. What counts as validity evidence? Examples and prevalence in a systematic review of simulation-based assessment. Advances in health sciences education : theory and practice. 2014 May:19(2):233-50. doi: 10.1007/s10459-013-9458-4. Epub 2013 May 2     [PubMed PMID: 23636643]

Level 1 (high-level) evidence

[18]

Cook DA, Lineberry M. Consequences Validity Evidence: Evaluating the Impact of Educational Assessments. Academic medicine : journal of the Association of American Medical Colleges. 2016 Jun:91(6):785-95. doi: 10.1097/ACM.0000000000001114. Epub     [PubMed PMID: 26839945]


[19]

Bewley WL, O'Neil HF. Evaluation of medical simulations. Military medicine. 2013 Oct:178(10 Suppl):64-75. doi: 10.7205/MILMED-D-13-00255. Epub     [PubMed PMID: 24084307]


[20]

Skúladóttir H, Svavarsdóttir MH. Development and validation of a Clinical Assessment Tool for Nursing Education (CAT-NE). Nurse education in practice. 2016 Sep:20():31-8. doi: 10.1016/j.nepr.2016.06.008. Epub 2016 Jun 8     [PubMed PMID: 27428801]

Level 1 (high-level) evidence

[21]

Isaacson JJ, Stacy AS. Rubrics for clinical evaluation: objectifying the subjective experience. Nurse education in practice. 2009 Mar:9(2):134-40. doi: 10.1016/j.nepr.2008.10.015. Epub 2008 Dec 10     [PubMed PMID: 19083270]


[22]

Donaldson JH, Gray M. Systematic review of grading practice: is there evidence of grade inflation? Nurse education in practice. 2012 Mar:12(2):101-14. doi: 10.1016/j.nepr.2011.10.007. Epub 2011 Nov 29     [PubMed PMID: 22129576]

Level 1 (high-level) evidence

[23]

Hift RJ. Should essays and other "open-ended"-type questions retain a place in written summative assessment in clinical medicine? BMC medical education. 2014 Nov 28:14():249. doi: 10.1186/s12909-014-0249-2. Epub 2014 Nov 28     [PubMed PMID: 25431359]


[24]

Chen PH. Three-Element Item Selection Procedures for Multiple Forms Assembly: An Item Matching Approach. Applied psychological measurement. 2016 Mar:40(2):114-127. doi: 10.1177/0146621615605307. Epub 2015 Sep 22     [PubMed PMID: 29881042]


[25]

Hrynchak P, Takahashi SG, Nayer M. Key-feature questions for assessment of clinical reasoning: a literature review. Medical education. 2014 Sep:48(9):870-83. doi: 10.1111/medu.12509. Epub     [PubMed PMID: 25113114]


[26]

Nayer M, Glover Takahashi S, Hrynchak P. Twelve tips for developing key-feature questions (KFQ) for effective assessment of clinical reasoning. Medical teacher. 2018 Nov:40(11):1116-1122. doi: 10.1080/0142159X.2018.1481281. Epub 2018 Jul 12     [PubMed PMID: 30001652]


[27]

Zayyan M. Objective structured clinical examination: the assessment of choice. Oman medical journal. 2011 Jul:26(4):219-22. doi: 10.5001/omj.2011.55. Epub     [PubMed PMID: 22043423]


[28]

Iyer MS, Santen SA, Nypaver M, Warrier K, Bradin S, Chapman R, McAllister J, Vredeveld J, House JB, Accreditation Council for Graduate Medical Education Committee, Emergency Medicine and Pediatric Residency Review Committee. Assessing the validity evidence of an objective structured assessment tool of technical skills for neonatal lumbar punctures. Academic emergency medicine : official journal of the Society for Academic Emergency Medicine. 2013 Mar:20(3):321-4. doi: 10.1111/acem.12093. Epub     [PubMed PMID: 23517267]

Level 3 (low-level) evidence

[29]

Goff BA, Nielsen PE, Lentz GM, Chow GE, Chalmers RW, Fenner D, Mandel LS. Surgical skills assessment: a blinded examination of obstetrics and gynecology residents. American journal of obstetrics and gynecology. 2002 Apr:186(4):613-7     [PubMed PMID: 11967481]


[30]

Siddiqui NY, Stepp KJ, Lasch SJ, Mangel JM, Wu JM. Objective structured assessment of technical skills for repair of fourth-degree perineal lacerations. American journal of obstetrics and gynecology. 2008 Dec:199(6):676.e1-6. doi: 10.1016/j.ajog.2008.07.054. Epub     [PubMed PMID: 19084100]

Level 3 (low-level) evidence

[31]

Miller GE. The assessment of clinical skills/competence/performance. Academic medicine : journal of the Association of American Medical Colleges. 1990 Sep:65(9 Suppl):S63-7     [PubMed PMID: 2400509]


[32]

. Assessment methods in medical education. International journal of health sciences. 2008 Jul:2(2):3-7     [PubMed PMID: 21475483]


[33]

Seo S, Thomas A, Uspal NG. A Global Rating Scale and Checklist Instrument for Pediatric Laceration Repair. MedEdPORTAL : the journal of teaching and learning resources. 2019 Feb 27:15():10806. doi: 10.15766/mep_2374-8265.10806. Epub 2019 Feb 27     [PubMed PMID: 30931385]


[34]

Ong IL, Diño MJS, Calimag MMP, Hidalgo FA. Development and validation of interprofessional learning assessment tool for health professionals in continuing professional development (CPD). PloS one. 2019:14(1):e0211405. doi: 10.1371/journal.pone.0211405. Epub 2019 Jan 25     [PubMed PMID: 30682137]

Level 1 (high-level) evidence

[35]

Awad SS, Hayley B, Fagan SP, Berger DH, Brunicardi FC. The impact of a novel resident leadership training curriculum. American journal of surgery. 2004 Nov:188(5):481-4     [PubMed PMID: 15546554]


[36]

Hill DA MD, Jimenez JC, Cohn SM, Price MR. How To Be a Leader: A Course for Residents. Cureus. 2018 Jul 30:10(7):e3067. doi: 10.7759/cureus.3067. Epub 2018 Jul 30     [PubMed PMID: 30280063]


[37]

Van den Bulcke B, Piers R, Jensen HI, Malmgren J, Metaxa V, Reyners AK, Darmon M, Rusinova K, Talmor D, Meert AP, Cancelliere L, Zubek L, Maia P, Michalsen A, Decruyenaere J, Kompanje EJO, Azoulay E, Meganck R, Van de Sompel A, Vansteelandt S, Vlerick P, Vanheule S, Benoit DD. Ethical decision-making climate in the ICU: theoretical framework and validation of a self-assessment tool. BMJ quality & safety. 2018 Oct:27(10):781-789. doi: 10.1136/bmjqs-2017-007390. Epub 2018 Feb 23     [PubMed PMID: 29475979]

Level 2 (mid-level) evidence

[38]

Balmer JT. The transformation of continuing medical education (CME) in the United States. Advances in medical education and practice. 2013:4():171-82. doi: 10.2147/AMEP.S35087. Epub 2013 Sep 19     [PubMed PMID: 24101887]

Level 3 (low-level) evidence

[39]

Hawkins RE, Welcher CM, Holmboe ES, Kirk LM, Norcini JJ, Simons KB, Skochelak SE. Implementation of competency-based medical education: are we addressing the concerns and challenges? Medical education. 2015 Nov:49(11):1086-102. doi: 10.1111/medu.12831. Epub     [PubMed PMID: 26494062]


[40]

Landrigan CP, Parry GJ, Bones CB, Hackbarth AD, Goldmann DA, Sharek PJ. Temporal trends in rates of patient harm resulting from medical care. The New England journal of medicine. 2010 Nov 25:363(22):2124-34. doi: 10.1056/NEJMsa1004404. Epub     [PubMed PMID: 21105794]

Level 2 (mid-level) evidence

[41]

Asch DA, Nicholson S, Srinivas S, Herrin J, Epstein AJ. Evaluating obstetrical residency programs using patient outcomes. JAMA. 2009 Sep 23:302(12):1277-83. doi: 10.1001/jama.2009.1356. Epub     [PubMed PMID: 19773562]

Level 2 (mid-level) evidence

[42]

Chen C, Petterson S, Phillips R, Bazemore A, Mullan F. Spending patterns in region of residency training and subsequent expenditures for care provided by practicing physicians for Medicare beneficiaries. JAMA. 2014 Dec 10:312(22):2385-93. doi: 10.1001/jama.2014.15973. Epub     [PubMed PMID: 25490329]


[43]

Asch DA, Nicholson S, Srinivas SK, Herrin J, Epstein AJ. How do you deliver a good obstetrician? Outcome-based evaluation of medical education. Academic medicine : journal of the Association of American Medical Colleges. 2014 Jan:89(1):24-6. doi: 10.1097/ACM.0000000000000067. Epub     [PubMed PMID: 24280859]


[44]

Moore DE Jr, Green JS, Gallis HA. Achieving desired results and improved outcomes: integrating planning and assessment throughout learning activities. The Journal of continuing education in the health professions. 2009 Winter:29(1):1-15. doi: 10.1002/chp.20001. Epub     [PubMed PMID: 19288562]


[45]

Weaver SJ, Dy SM, Rosen MA. Team-training in healthcare: a narrative synthesis of the literature. BMJ quality & safety. 2014 May:23(5):359-72. doi: 10.1136/bmjqs-2013-001848. Epub 2014 Feb 5     [PubMed PMID: 24501181]

Level 2 (mid-level) evidence

[46]

Manser T. Teamwork and patient safety in dynamic domains of healthcare: a review of the literature. Acta anaesthesiologica Scandinavica. 2009 Feb:53(2):143-51. doi: 10.1111/j.1399-6576.2008.01717.x. Epub     [PubMed PMID: 19032571]


[47]

Salas E, DiazGranados D, Klein C, Burke CS, Stagl KC, Goodwin GF, Halpin SM. Does team training improve team performance? A meta-analysis. Human factors. 2008 Dec:50(6):903-33     [PubMed PMID: 19292013]

Level 1 (high-level) evidence