Evaluation is the systematic collection and analysis of data on the design, process, implementation, and outcomes of an education program for the purpose of monitoring and improving its effectiveness and quality. It therefore involves understanding the program in detail through systematic, routine, and deliberate collection of information to identify the factors contributing to the success or failure of the medical education program (Holmboe, 2015). Several research designs are applicable to these objectives, including the descriptive, cross-sectional, and case study designs discussed in this paper.
The application of descriptive designs to evaluating medical programs is among the oldest approaches in the field. By definition, descriptive research describes the specific characteristics of a phenomenon or population of interest. Descriptive studies emphasize the “what” rather than the “why” of the subject (Patel, 2016). Therefore, the aim of these studies is to describe the nature of a given demographic segment without explaining why the phenomenon occurs. Distinctive characteristics of descriptive research include quantitative measurement, cross-sectional sampling, and uncontrolled variables.
When evaluating medical education programs, the descriptive design is applied to examine the characteristics and nature of the population enrolled in the program under study. Descriptive methods rely on such instruments as evaluation questionnaires or surveys distributed to the learners (Holmboe, 2015). In the questionnaires and surveys, the learners express their experiences with the medical program, what they have gained or learned, and other aspects of their time in the program. From the results, the researcher can analyze the descriptive evaluations through such approaches as coding the answers into specific categories and themes (Vassar et al., 2010). Such methods are effective in describing the learners’ experiences in the program, active learning in practice, learning goals, and professional training missing from previous medical studies. In addition, they can characterize the perceived effects of the program’s design and implementation by capturing the learners’ reported experiences.
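As a minimal sketch of the coding step described above, using entirely hypothetical survey responses and a hypothetical keyword-based codebook (in real thematic analysis, themes emerge from reading the data rather than from a fixed keyword list), the tallying of coded answers might look like:

```python
from collections import Counter

# Hypothetical free-text responses from a program-evaluation survey.
responses = [
    "The ward rotations gave me real hands-on practice",
    "Lectures were clear but I wanted more bedside teaching",
    "Hands-on simulation sessions built my confidence",
    "More feedback from supervisors would help",
]

# Hypothetical coding scheme: keyword -> theme.
codebook = {
    "hands-on": "active learning",
    "simulation": "active learning",
    "bedside": "clinical exposure",
    "ward": "clinical exposure",
    "feedback": "supervision",
}

def code_response(text):
    """Assign every theme whose keyword appears in one response."""
    lowered = text.lower()
    return {theme for keyword, theme in codebook.items() if keyword in lowered}

# Tally how often each theme was assigned across all responses.
theme_counts = Counter()
for response in responses:
    theme_counts.update(code_response(response))

print(theme_counts.most_common())
```

The output is purely descriptive, which illustrates the design's limitation discussed next: the tallies say what learners reported, not why.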
However, as with most descriptive research, this design struggles to answer questions about why the phenomenon occurs. Because the researcher only describes the student population, it is difficult to evaluate the factors that produce the observable results (Newcomer et al., 2015). For example, the evaluator cannot tell why a program has failed to meet its targets, why a program is problematic, or why students develop better skills in one program than in another.
An example of such an approach is Kirkpatrick’s evaluation model in general education, which is also applicable to medical education programs. Kirkpatrick’s four hierarchical levels of program outcomes are learner satisfaction or reaction to the program, measures of learning, changes in learner behavior, and the final results (Newcomer et al., 2015). Descriptive studies can describe the learners’ perceptions or satisfaction, skills and attitudes, changes in behavior, and the results. However, the method does not explain the reasons behind the observed results.
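Kirkpatrick's hierarchy can be sketched as a simple ordered structure; the example measures paired with each level here are hypothetical illustrations of what a descriptive evaluation might collect:

```python
# Kirkpatrick's four hierarchical levels of program outcomes, each
# paired with a hypothetical descriptive measure an evaluator might use.
kirkpatrick_levels = [
    ("reaction", "end-of-course satisfaction survey"),
    ("learning", "pre/post knowledge test scores"),
    ("behavior", "observed changes in clinical practice"),
    ("results", "program-level or patient-care outcomes"),
]

for number, (level, example_measure) in enumerate(kirkpatrick_levels, start=1):
    print(f"Level {number} ({level}): {example_measure}")
```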
In contrast, the cross-sectional design obtains and analyzes data on the whole population of interest. Cross-sectional studies aim to examine the relationship between exposure and outcomes within a given period. Unlike the descriptive design, the cross-sectional approach can be either descriptive or analytical, especially when used in health studies on diseases. In the evaluation of medical education programs, cross-sectional studies are among the most common and effective approaches (Patel, 2016). Researchers consider the whole population of medical students in a given grade or grades and examine their attitudes, knowledge, and behaviors before and after exposure to learning in a particular course within the larger medical program. From a descriptive perspective, the researcher can design surveys in which the students report their experiences in the program before and after attending the course. In the same way, the approach becomes analytical when the researcher examines the relationship between students’ pre-exposure attitudes and the changes observed after exposure to the course materials (Schwartz et al., 2019). In such a case, it is possible to examine why the observed phenomenon has occurred, in contrast to descriptive studies, which do not answer this question.
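The descriptive and analytical sides of this design can be contrasted with a small sketch. The scores below are hypothetical pre- and post-course attitude ratings (1-5 scale) for one cohort, and the analysis is deliberately reduced to simple means:

```python
# Hypothetical pre- and post-course attitude scores (1-5 scale),
# keyed by an anonymized student identifier.
pre_scores  = {"s1": 2, "s2": 3, "s3": 2, "s4": 4, "s5": 3}
post_scores = {"s1": 4, "s2": 4, "s3": 3, "s4": 5, "s5": 4}

def mean(values):
    return sum(values) / len(values)

# Descriptive view: summarize each wave on its own.
pre_mean = mean(pre_scores.values())
post_mean = mean(post_scores.values())

# Analytical view: per-student change relates pre-exposure attitudes
# to post-exposure outcomes.
changes = {s: post_scores[s] - pre_scores[s] for s in pre_scores}
mean_change = mean(changes.values())

print(f"pre mean={pre_mean:.2f}, post mean={post_mean:.2f}, "
      f"mean change={mean_change:+.2f}")
```

The per-student `changes` mapping is what moves the evaluation from describing each wave to analyzing the relationship between exposure and outcome.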
This analytical approach is grounded in logic-model theory, in which research attention is paid to the relationship between the program components and the outcomes. Consequently, the logic model of evaluating medical education programs is a good example of the cross-sectional design (Posavac, 2015). The whole population of learners in a given grade or class is studied to examine the relationship between their changes in knowledge, attitudes, and behaviors before and after the exposure.
In case study research, the aim is a holistic investigation of the phenomenon that reveals the underlying elements or aspects influencing the outcomes. In particular, the three main purposes of case studies are to determine what occurred, whether the occurrence had an impact, and the links between the program and the observed impacts or outcomes (Rama et al., 2018). A case study therefore appears superior to both the cross-sectional and descriptive designs, as it addresses the questions of what, why, and how, which the two previous approaches do not fully address. In addition, case studies can utilize several data collection methods, such as structured, semi-structured, and unstructured interviews, observation, document reviews, and surveys. An example of such a study is to consider one or more medical education programs in multiple colleges and use them as cases (Rama et al., 2018). One or more of the aforementioned data collection methods can then be applied, such as interviewing the learners, examining documents such as examination results, and making observations. The results of the different programs can be compared, and the comparison can explain the what, why, and how of the outcomes.
In conclusion, the evaluation of medical education programs seeks to support informed decisions about the worth or value of a specific program. It is a critical approach that seeks to uncover the factors contributing to the success or failure of the medical education program and the actions that should improve its outcomes. Moreover, case study approaches are more effective in the evaluation of medical education programs than both descriptive and cross-sectional studies. Indeed, descriptive methods answer only the question of “what,” not “why” or “how,” regarding the occurrence of the phenomenon. In contrast, cross-sectional evaluation can answer the questions of “what” and “why” but not “how.” Case studies, however, are wider in their approach and can use multiple data collection and analysis methods to address all three questions. Therefore, the recommendation from this discussion is for medical education programs to undergo frequent evaluations using case study approaches, as they are beneficial in revealing the underlying factors in the observable phenomena and the reasons behind their occurrence.
Holmboe, E. S. (2015). Realizing the promise of competency-based medical education. Academic Medicine, 90(4), 411-413. Web.
Newcomer, K. E., Hatry, H. P., & Wholey, J. S. (2015). Handbook of practical program evaluation. John Wiley & Sons.
Patel, P. (2016). An evaluation of the current patterns and practices of educational supervision in postgraduate medical education in the UK. Perspectives on Medical Education, 5(4), 205-214. Web.
Posavac, E. J. (2015). Program evaluation: Methods and case studies. Routledge.
Rama, J. A., Falco, C., & Balmer, D. F. (2018). Using appreciative inquiry to inform program evaluation in graduate medical education. Journal of Graduate Medical Education, 10(5), 587-590. Web.
Schwartz, A. R., Siegel, M. D., & Lee, A. I. (2019). A novel approach to the program evaluation committee. BMC Medical Education, 19, 465. Web.
Vassar, M., Wheeler, D. L., Davison, M., & Franklin, J. (2010). Program evaluation in medical education: An overview of the utilization-focused approach. Journal of Educational Evaluation for Health Professions, 7, 1-3. Web.