7 Evaluation of Learning
Ever tried. Ever failed. No matter. Try again. Fail again. Fail better. SAMUEL BECKETT (1983, p. 11)
Few topics generate more impassioned discussions among educators of health-care professionals than the evaluation of learning. In many clinical practice settings, instructors are required to apply evaluation tools that they have not designed themselves. On the one hand, criticisms of standardized assessment techniques for required professional competencies and skill sets note an overemphasis on reproducing facts by rote or implementing memorized procedures. On the other, teachers might find themselves filling out extensive and perhaps incomprehensible checklists of criteria intended to measure critical thinking. How, then, can educators create evaluation approaches that advance required competencies for individual learners in complex practice environments?
Expectations of learners must be set out clearly before learning can be measured accurately. Within the clinical environment, the stakes are high for learners. Client safety cannot be compromised. Furthermore, measurement considerations must not dominate the time that educators might otherwise spend on creating meaningful instructional approaches. In his seminal Learning to Teach in Higher Education, Paul Ramsden (1992) establishes an important distinction between deep learning and surface learning. In his view, deep and meaningful learning occurs when assessment focuses on both what students need to learn and how educators can best teach them.
Understanding the complexities of evaluating students and our teaching is an ongoing process. Approaching the process collaboratively in ways that consistently involve learners as active participants, rather than passive recipients, can support their success and inspire our teaching. In this chapter, we introduce the vocabulary of evaluation and discuss methods of evaluating both students and teachers. We suggest creative strategies for evaluation that teachers can use in a variety of clinical practice settings.
VOCABULARY OF EVALUATION
Educators can feel overwhelmed by measuring how learners create personal meaning and demonstrate understanding of the consensually validated knowledge that they will need to practise competently in their health fields. Measuring the efficacy of our own teaching in relation to preparing learners to practise safely, ethically, and in accordance with entry-to-practice competencies is not straightforward either. However, whether we are seeking to appraise student learning or our own teaching, knowing the criteria for expected outcomes will help us to understand what is being measured. The terms “measurement,” “assessment,” “evaluation,” “feedback,” and “grading” are used in appraising student learning and our own teaching.
Measurement, Assessment, and Evaluation
Measurement determines the attributes of a physical object in relation to a standard instrument. For example, just as a thermometer measures temperature, so too standardized educational tests measure student performance. Reliable and valid measurement depends on the skilful use of appropriate and accurate instruments. In 1943, Douglas Scales was one of the first to argue against applying the principles of scientific measurement to the discipline of education.
The kind of science that seeks only the simplest generalizations can depart rather far from flesh-and-blood reality, but the kind of science that can be applied in the everyday work of teachers, administrators, and counsellors must recognize the great variety of factors in the practical conditions under which these people do their work. Any notion of science that stems from a background of engineering concepts in which all significant variables can be readily identified, isolated, measured, and controlled is both inadequate and misleading. Education, in both its theory and its practice, requires a new perspective in science that will enable it to deal with composite phenomena in which physical science normally deals with highly specific, single factors (Scales, 1943, p. 1).
One example of a standardized measurement tool is a required student evaluation form. Most health profession programs provide clinical instructors with evaluation forms designed to measure learning outcomes in relation to course objectives. These forms provide standardization in that they are implemented with all students in a course. They often focus on competencies such as safety, making them relevant to all members of the profession (Walsh et al., 2010). However, clinical instructors who use the forms might have little or no input into their construction and might not see clear links to their own practice setting.
Another example of a standardized measurement tool is a qualifying examination that all members of a profession must pass in order to practise. Similarly, skills competency checklists, rating scales, multiple choice tests, and medication dosage calculation quizzes can provide standardized measurement. Again, clinical instructors might have limited input into the design of these tools.
Assessment obtains information in relation to a complex objective, goal, or outcome. Although the standardized measurements noted above can all contribute to assessing student performance, additional information is necessary. Processes for assessment require inferences about what individuals do in relation to what they know (Assessment, n.d.). For example, inferences can be drawn about how students apply theory to practice from instructors’ observations of students while implementing client care, from student self-assessments, and from peer assessments.
Evaluation makes judgments about value or worth in relation to an objective, goal, or outcome. Evaluation requires information from a variety of sources gathered at different times. Evaluation of learners in clinical practice settings is considered subjective rather than objective (Emerson, 2007; Gaberson et al., 2015; Gardner & Suplee, 2010; O’Connor, 2015).
Formative evaluation is continuous, diagnostic, and focused on both what students are doing well and what they need to improve (Carnegie Mellon, n.d.). Because the goal of formative evaluation is to improve future performance, a mark or grade is not usually included (Gaberson et al., 2015; Marsh et al., 2005). Formative evaluation, sometimes referred to as mid-term evaluation, should precede final or summative evaluation.
Summative evaluation summarizes how students have or have not achieved the outcomes and competencies stipulated in course objectives (Carnegie Mellon, n.d.) and includes a mark or grade. Summative evaluation can be completed at mid-term or end of term. Both formative evaluation and summative evaluation consider context. They can include measurement and assessment methods noted previously as well as staff observations, written work, presentations, and a variety of other measures.
Whether the term “measurement,” “assessment,” or “evaluation” is used, the outcome criteria or what is expected must be defined clearly and measured fairly. The process must be transparent and consistent. For all those who teach and learn in health-care fields, succeeding or not succeeding has profound consequences.
Feedback
Feedback differs from assessment and evaluation. Assessment requires instructors to make inferences, and evaluation requires them to make judgments. Feedback is defined as “a process through which learners make sense of information from various sources and use it to enhance their work or learning strategies” (Carless & Boud, 2018, p. 1315). Feedback is non-judgmental and requires instructors to provide learners with information that facilitates improvement (Concordia University, n.d.). Feedback should focus on tasks rather than on individuals, it should be specific, and it should be directly linked to learners’ personal goals (Archer, 2010).
Periodic, timely, constructive feedback that recognizes both strengths and areas for improvement is perceived by students as encouraging and helpful in bolstering their confidence and independence (Bradshaw & Lowenstein, 2014). The tone of verbal or written feedback should always communicate respect for the student and for any work done. The feedback should be specific enough that students know what to do but not so specific that the work is done for them (Brookhart, 2008).
Chickering and Gamson (1987, p. 2) identified seven principles of good practice in undergraduate education, which
- encourage contact between students and faculty,
- develop reciprocity and cooperation among students,
- encourage active learning,
- give prompt feedback,
- emphasize time on task,
- communicate high expectations, and
- respect diverse talents and ways of learning.
All of these principles should be considered when providing feedback to students in the clinical area. Certainly, providing “prompt feedback” is particularly relevant. The more time that passes before you give feedback on a learning experience, the more difficult it becomes to remember details and provide effective feedback (Gaberson et al., 2015).
Including students’ self-assessments when providing feedback is a critical element of the process. Throughout their careers, health-care professionals are encouraged to reflect on their own practice. This needed reflection can be developed by opening any feedback session with open-ended questions that invite learners to share their reflections and self-assessments. This strategy can soften perceptions of harshness associated with corrective feedback and bring unexpected questions and issues into the discussion (Ramani & Krackov, 2012).
All too often feedback is viewed as educator driven (Molloy et al., 2020), with instructors assuming primary responsibility for initiating and directing sessions. A more learner-centred approach encourages students to take a central role in the process and to seek opportunities to gather feedback from instructors and others in the practice area (Rudland et al., 2013).
Grading
Grading, whether with a numerical value, letter grade, or pass/fail designation, indicates the degree of accomplishment achieved by a learner. Differentiating between norm-referenced grading and criterion-referenced grading is important. The former evaluates a student’s performance compared with that of other students in a group or program, determining whether the performance is better than, worse than, or equivalent to that of other students (Gaberson et al., 2015). The latter evaluates a student’s performance in relation to predetermined criteria and does not consider the performance of other students (Gaberson et al., 2015).
Criterion-referenced grading reflects only individual accomplishment. If all of the participants in a learning group demonstrate strong clinical skills, then they all earn top grades. In contrast, a learner’s grade in norm-referenced grading reflects accomplishment in relation to others in the group. Only a select few can earn top grades, most will receive mid-level grades, and at least some will receive failing grades. Norm-referenced grading is based on the symmetrical statistical model of a bell or normal distribution curve.
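For readers who find the contrast easier to see in numbers, the sketch below works through both approaches in Python. The scores, letter-grade cutoffs, and z-score boundaries are all hypothetical, chosen only to illustrate the logic described above: under the criterion-referenced function, every student who meets a fixed cutoff earns the corresponding grade, whereas under the norm-referenced function, top grades go only to the few students whose scores sit well above the group mean on the curve.

```python
from statistics import mean, stdev

# Hypothetical clinical exam scores for a group of eight students.
scores = [62, 68, 71, 74, 78, 81, 85, 90]

def criterion_grade(score, cutoffs=((80, "A"), (70, "B"), (60, "C"))):
    # Judge the score against fixed, predetermined cutoffs: everyone
    # who meets a cutoff earns that grade, regardless of peers.
    for cutoff, grade in cutoffs:
        if score >= cutoff:
            return grade
    return "F"

def norm_grade(score, group):
    # Judge the score against the group's distribution via a z-score
    # on the bell curve: top grades go only to the few students who
    # outperform their peers, regardless of absolute achievement.
    z = (score - mean(group)) / stdev(group)
    if z >= 1.0:
        return "A"
    if z >= 0.0:
        return "B"
    if z >= -1.0:
        return "C"
    return "F"

for s in scores:
    print(f"{s}: criterion={criterion_grade(s)}, norm={norm_grade(s, scores)}")
```

With these hypothetical scores, three students earn an A against the fixed criterion (all scored 80 or above), but only one earns an A on the curve, since only one score lies a full standard deviation above the group mean.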
The advantages of norm-referenced grading include the opportunity to compare students in a particular location with national norms, to highlight assignments that are too difficult or too easy, and to monitor grade distributions such as too many students receiving high or overinflated grades (Centre for the Study of Higher Education, 2002). The disadvantages of norm-referenced grading centre on the notion that one student’s achievement, success, and even failure can depend unfairly on the performances of others. Grade inflation, an upward trend in grades awarded to students, has led many programs in the health disciplines to establish rigorous admission requirements and use a pass/fail grading approach.
Criterion-referenced grading judges student achievement against objective criteria outlined in course objectives and expected outcomes without consideration of what other students have or have not achieved. The process is transparent, and students can link their grades to their performance on predictable and set tasks (Centre for the Study of Higher Education, 2002). In turn, this approach can consider an individual student’s learning needs and build in opportunities for remediation when needed (Winstrom, n.d.). One disadvantage of criterion-referenced grading is that instructors need more time for it. Also, awarding special recognition with prizes or scholarships to excelling students might not be as clear-cut when students are not compared with their peers.
METHODS OF EVALUATING STUDENTS
Professional expectations dictate that all health-care practitioners must demonstrate prescribed proficiencies. Assessing, evaluating, providing feedback to, and ultimately assigning grades to students in clinical courses require teachers to implement a variety of evaluation methods. Going beyond measuring students’ performance on standardized tests and checklists is essential. Here we discuss methods of evaluating students that invite collaboration, tap into what students know, and identify future learning needs.
Instructor Observation
Instructor observation is one of the most commonly implemented methods of evaluating students. Instructor observation, also referred to as clinical performance assessment, provides important information about contextual aspects of a learning situation (O’Connor, 2015). Knowing the context of why a student acted in a particular way can provide a more complete understanding of behaviour. If a task was not completed on time, knowing that the student reasoned that it was more important to stop and listen to clients’ or patients’ concerns can help instructors to make inferences about students’ strengths and weaknesses. Yet anxiety about the experience of being observed is well known to all of us. At what point is the stress of achieving course outcomes equivalent to the stress inherent in actual practice conditions? Does performance anxiety help or hinder evaluation?
Performance anxiety can be expected during instructor-observed activities (Cheung & Au, 2011; Weeks & Horan, 2013; Welsh, 2014). Instructional strategies that decrease performance anxiety include (1) demonstrating skills with supplemental sessions in laboratory settings before students complete skills in clinical settings and (2) arranging opportunities for peers to observe and evaluate one another. Engaging students in non-evaluated discussion time can also help to reduce their anxiety (Melrose & Shapiro, 1999). Furthermore, inviting students to complete a self-assessment of any instructor-observed activity can help to make the experience collaborative.
Self-Assessment
Self-assessment opportunities can be made available and acknowledged to help students develop critical awareness and reflexivity (Dearnley & Meddings, 2007). Self-assessment is a necessary skill for lifelong learning (Boud, 1995). Practitioners in self-regulating health professions are required to self-assess. When students become familiar with the process during their education, they enter their professions with a stronger capacity for assessing and developing needed competencies (Kajander-Unkuri et al., 2013).
Self-assessment can shed light on the incidental, surprise, or unexpected learning (Chapter 3) that can occur beyond the intended goals and objectives of a clinical course. Pose questions such as “What surprised you when . . . ?” or “Can you talk about what happened that you didn’t expect when . . . ?” Encouraging students to identify and then discuss their incidental learning in individual ways helps to build their confidence.
A cautionary note: self-assessments can be flawed. The most common flaw is that people often overrate themselves, indicating inaccurately that they are above average (Davis et al., 2006; Dunning et al., 2004; Mort & Hansen, 2010; Pisklakov et al., 2014). They might not accurately identify areas of weakness (Regehr & Eva, 2006) or overestimate their skills and performance (Baxter & Norman, 2011; Galbraith et al., 2008). Students who are the least competent in other areas of study are the least able to self-assess accurately (Austin & Gregory, 2007; Colthart et al., 2008). Balancing students’ reflections on their activities with valid assessments of their progress and achievements is not straightforward (Melrose, 2017). Despite the flaws of self-assessment, inviting students to contribute their perceptions of what they have learned and what they still do not know is a critical aspect of evaluation.
Peer Assessment
Peer assessment, in which individuals of similar status evaluate the performance of their peers and provide feedback, can also help students to develop a critical attitude toward their own and others’ practice (Laske, 2019; Mass et al., 2014; Sluijsmans et al., 2003). The advantages of peer assessment include opportunities for students to think more deeply about the activity being assessed, to gain insight into how others tackle similar problems, and to give and receive constructive criticism (Rush et al., 2012). The disadvantages include peers who have limited knowledge of a situation, show bias toward their friends, and hesitate to award low marks for poor work because they fear offending their peers (Rush et al., 2012). Personalities or learning styles might not be compatible among peers, and students might believe that they spend less individualized time with instructors when being reviewed by peers (Secomb, 2008). Instructors need to remain involved with any peer assessment activity in order to correct inaccurate or insufficient peer feedback (Hodgson et al., 2014).
Peer and self-assessments often differ from clinical teachers’ assessments, indicating that neither of them can substitute for teacher assessment (Mehrdad et al., 2012). Even though peer assessments of students’ clinical performance cannot be expected to provide a complete picture of students’ strengths and areas needing improvement, they are a useful method of evaluation and should be incorporated whenever possible. When students step into the role of evaluator, either of themselves or of others at a similar stage of learning, they gain a new perspective on the teaching role. This familiarity might help them to believe that they are actively participating in the evaluation of themselves and others.
Anecdotal Notes
Anecdotal notes are the collections of information that instructors record, either by hand or electronically, to describe student performance in clinical practice (Hall, 2013). Notes are usually completed daily or weekly on all students and provide snapshots of their clients/patients and skills. Instructors are expected to complete anecdotal notes after observing a student complete a client/patient care procedure or report. Notes are also completed after incidents in which students have behaved in unusual or concerning ways, such as having difficulty completing previously learned skills, showing poor decision making, appearing unprepared, or behaving unprofessionally (Gardner & Suplee, 2010).
Each anecdotal note should be completed as soon as possible after observing a student’s performance or concerning incident, and it should only address that one performance or incident. Each note should include a description of the client/patient and the required skills as well as objective observations of the student’s behaviour actually seen and heard by the instructor. Individual anecdotal notes are narrative accounts of an experience at one point in time and should be shared with students (Gaberson et al., 2015; O’Connor, 2015). Many instructors invite students to respond or add to anecdotal notes after students review and reflect on them.
Cumulatively, individual anecdotal notes can be reviewed over time for patterns of behaviour useful in evaluating student progress and continued learning needs. These notes should be retained after courses end since disputes over clinical grades might occur (Heaslip & Scammel, 2012). Anecdotal notes need not just be descriptions of students’ behaviour. They can and should also include the specific suggestions and guidance that teachers provide to support their students in being successful.
Records of students’ assignments should also be retained. These records can reflect how different opportunities were available to students to demonstrate required skills. They can illustrate the situations in which students performed well and poorly. These records have also been used to defend instructors’ decisions to fail students who assert that they were given overly difficult assignments (O’Connor, 2015).
LEARNING CONTRACTS
Adult educator Malcolm Knowles (1975, p. 130) explains that a learning contract is a “means of reconciling the ‘imposed’ requirements from institutions and society with the learners’ need to be self-directing. It enables them to blend these requirements in with their own personal goals and objectives, to choose ways of achieving them and the measure of their own progress toward achieving them.” In other words, the goal of any learning contract is to promote learner self-direction, autonomy, and independence. As Knowles emphasizes, learning contracts must include what is to be learned, how it will be learned, and how that learning will be evaluated.
As part of continuing competence requirements, most health-care professionals are expected to engage in self-directed learning activities. These activities demonstrate to regulatory bodies that these professionals can identify what they need to know, how they will learn it, and how they will evaluate their learning. Initiating learning contracts with students can help to prepare them for this practice requirement.
Traditionally, learning contracts have been used mainly with students who are struggling to meet clinical objectives and standards or whose performance is perceived as unsafe (Frank & Scharff, 2013; Gregory et al., 2009). In these instances, instructors must clearly identify the outcomes to be addressed and work collaboratively with students to determine the resources and assistance needed to address the issues (Atherton, 2013). A contract must be signed by both instructor and student, and both must document the progress made or not made after each clinical experience.
Extending the idea of learning contracts beyond struggling students is becoming more common. Learning contracts can be a teaching strategy that fosters motivation and independent learning in students in nursing (Chan & Wai-tong, 2000; Timmins, 2002), respiratory care (Rye, 2008), physiotherapy (Ramli et al., 2013), and clinical psychology (Keary & Byrne, 2013). Although incorporating learning contracts for all students and not just those who struggle might initially seem to be time consuming, the result can be rewarding.
FAILURE
Despite clear objectives, thoughtful teaching strategies, and a supportive learning environment, some learners simply are not able to demonstrate the competencies required to pass a clinical course. The experience of failure can be devastating for all involved (Black et al., 2014; Handwerker, 2018; Larocque & Luhanga, 2013).
The accepted norm within clinical teaching is that, at the beginning of any educational event, participants will be thoroughly informed about both the learning outcomes that they are expected to achieve and specific institutional policies that apply when those objectives are not met. Similarly, learners must be informed promptly when an evaluator begins to notice problems with their progress. Typically, learners are informed of problems through collaborative formative evaluations and feedback long before a final failing summative evaluation.
The daily anecdotal notes or records of learners’ actions mentioned above are essential throughout any evaluative process, but they become particularly important when a learner is in danger of failing. Most formative or mid-term evaluation instruments are designed to provide feedback on learning progress and identify further work needed. Summative or final evaluations describe the extent to which learners have achieved course objectives. Thus, when a learner is not progressing satisfactorily, a prompt, documented learning contract or plan can be invaluable in identifying specific behaviours that the instructor and student agree to work on together. Instructors’ supervisors must be informed about any students who are struggling or unsafe, and they must be kept up to date throughout the process.
In some cases, learners might choose not to collaborate on a remedial learning contract or plan. Documenting students’ and instructors’ perceptions of this process is important as well. Providing students with information about institutional procedures for withdrawing from the learning event or appealing a final assessment is essential in demonstrating an open, fair, and transparent process of evaluation.
Given the emotional nature of clinical failure, those involved in the process might not be able to identify immediately how the experience is one of positive growth and learning. In fact, having opportunities to talk and debrief can help both students and instructors. For university, college, and technical institute students, counselling services are generally available through the institution. For instructors, both full-time continuing faculty and those employed on a contract or sessional basis, counselling services might be available from an employee assistance program.
Knowing that students might fail and that counselling services might help, you can distribute pamphlets with contact information for those services to all students in the group at the beginning of the course. If the information is already at hand, then referring an individual learner to the service when needed normalizes the suggestion. In some cases, without compromising confidentiality, actually accompanying an individual to a counselling appointment or walking with the person into the counselling services area can begin to ease the devastation.
Clinical instructors and preceptors can be reluctant to fail students. The phrase “failure to fail” (Duffy, 2003, 2004) is used to describe a growing trend toward passing students who do not meet course objectives and outcomes. In one study, “37% of mentors [preceptors] passed student nurses, despite concerns about competencies or attitude, or who felt they should fail” (Gainsbury, 2010, p. 1).
One key reason that clinical instructors fail to fail is lack of support (Black et al., 2014; Bush et al., 2013; Duffy, 2004; Gainsbury, 2010; Larocque & Luhanga, 2013). When universities overturn decisions to fail on appeal and require detailed written evidence justifying an instructor’s decision to fail a student, clinical instructors can feel as though they are not supported (Gainsbury, 2010). As caring health professionals, instructors can believe that failing is an uncaring action (Scanlan et al., 2001). Many also fear that a student’s failure will reflect badly on them and that others will judge them as bad teachers (Gainsbury, 2010).
However, health-care professionals have a duty of care to protect the public from harm. When students whose practice is unsafe and who fail to meet required course outcomes are not assigned failing grades, instructors must question whether they are neglecting their duty of care (Black et al., 2014). The reputation of the professional program can be diminished as a result of failing to fail a student (Larocque & Luhanga, 2013). Viewing clinical failure in a positive light is difficult for both students and instructors. Learning from the experience is what counts. As Samuel Beckett wrote, “Ever tried. Ever failed. No matter. Try again. Fail again. Fail better” (1983, p. 11).
METHODS OF EVALUATING TEACHING
Clinical programs in the health professions usually stipulate specific assessment tools to be used to evaluate clinical teachers. Commonly, students are given anonymous questionnaires to complete at the ends of their courses, and supervisors complete standardized performance appraisals. Clinical instructors employed as full-time continuing faculty members might be involved in constructing these tools, but sessional instructors or those employed on a contract basis are seldom consulted. Although clinical instructors might have little control over the tools of evaluation required by their programs, performance appraisal documents are likely to include opportunities for self-assessments, which can be framed as teaching portfolios. Collecting information from a variety of assessment tools over a period of time is necessary to construct accurate student evaluations, and the same is true for instructor evaluation (Billings & Halstead, 2012).
Teaching portfolios, also called teaching dossiers or teaching profiles, are pieces of evidence collected over time and used to highlight teaching strengths and accomplishments (Barrett, n.d.; Edgerton et al., 2002; Seldin, 1997; Shulman, 1998). The collection of evidence can be paper based or electronic. Teaching portfolios can usually be integrated into self-assessment sections of performance appraisal requirements. No two teaching portfolios are alike, and the content pieces can be arranged in creative and unique ways.
Portfolios usually begin with an explanation of the instructor’s teaching philosophy. In Chapter 2, we provided suggestions for crafting a personal teaching philosophy statement. When the purpose of a portfolio is to contribute self-assessment information to performance appraisals, goals should also be explained. Reflective inquiry is a critical element of any portfolio, and reflections on teaching approaches that failed, as well as those that succeeded, should be included (Lyons, 2006). Both goals that have been accomplished and specific plans for accomplishing future goals should be noted. Since clinical teachers must maintain competencies in both their clinical practice and their teaching practice, another segment of the portfolio could list certifications earned; workshops, conferences, or other educational events attended; papers written about clinical teaching in a course; and awards received.
Teaching products could constitute another segment, such as writing a case study about a typical client or patient in your practice setting, developing a student orientation module for your students, crafting a student learning activity such as a game or puzzle, devising an innovative strategy to support a struggling student, or demonstrating a skill on video. Mementoes such as thank-you messages from students, colleagues, agency staff, or clients/patients could also be included. Distinguish between content that can be made public and that which should be kept private. For example, a student learning activity might be made public by publishing it in a journal article or on a teaching website, whereas mementoes would be private and likely shared only with supervisors.
If your program does not provide students with an opportunity to offer formative evaluation to instructors of how the course is going, create this opportunity for them. Rather than waiting for student feedback at the end of the course, seek it at mid-term. Provide a mechanism that is fully anonymous, for example an online survey, in which students can comment on what is going well, what is not going well, and what advice they would like to give to the instructor. In your portfolio, discuss this formative feedback, your responses to it, and your evaluation of the process.
As the above examples illustrate, the possibilities for demonstrating instructional achievements are limitless. Each item in your portfolio should include a brief statement explaining why it is included and how it reflects a valid and authentic assessment of teaching achievements (Barrett, n.d.). Two other pieces of content commonly included in teaching portfolios are responses from student questionnaires and peer assessments.
Students’ assessments of their instructors’ teaching effectiveness are most often collected through anonymous questionnaires at the end of the course. Anonymity is important since students can fear that rating their instructor poorly could affect their grades. Completing the questionnaire is optional, and instructors must not be involved in administering or collecting the questionnaires (Center for Teaching and Learning, n.d.).
Research indicates that instructors are more likely to receive higher ratings from students who are highly motivated and interested in the course content (Benton & Cashin, 2012). Although students’ ratings of their instructors yield valuable interpretations of instructors’ engagement of students and enthusiasm, students are not subject matter experts and therefore cannot evaluate the accuracy and depth of instructors’ knowledge (Oermann, 2015). In general, college students’ ratings of their instructors tend to be statistically reliable, valid, and relatively free from bias, more so than any other data used for instructor evaluation. They are only one source of data, however, and should be used in combination with other sources of information (Benton & Cashin, 2012). Including samples of responses from student questionnaires is expected in most teaching portfolios.
If peer assessments of instructors are not usually part of your program, then consider including them in your teaching portfolio. Acquire permission for peer assessment from both program and clinical site administrators. Peer observers can be other teachers in the program or staff at the clinical agency, and they should be provided with an evaluation instrument. For example, Chickering and Gamson’s (1987) seven principles can guide peer observers in framing their feedback. Introduce the peer observer to students in the group and relevant agency staff members. Ensure that students understand that the purpose of the observation is instructor evaluation, not student evaluation (Center for Teaching and Learning, n.d.).
CONCLUSION
Evaluating our students and ourselves is a critical aspect of clinical teaching. In this chapter, we discussed methods of evaluating students and methods of evaluating teachers. The process of evaluating students requires clinical teachers to make judgments about whether students are meeting objectives or not based on information gathered and recorded throughout the course. Clinical teachers measure attributes of learning with standardized instruments and assess learning through inferences about how students are applying theory to practice based on observations in different situations.
Meaningful evaluation goes beyond identifying students’ progress in relation to course objectives and outcomes. Deep learning occurs when teachers provide their students with specific and individualized feedback. Students’ self-assessments of their strengths and plans for improvement should frame any feedback conversation. Ultimately, instructors must assign grades. They can be determined through a norm-referenced approach that compares students with other students or through a criterion-referenced approach that compares students to set criteria.
Clinical teachers can evaluate students using instructors’ observations, students’ self-assessments, and peer assessments. Daily anecdotal notes should be kept and shared with students, recording instructors’ objective observations of their clinical performance. Learning contracts can be co-created with all students, though traditionally they have been used mainly with students whose practice is unsafe or who are not meeting course objectives.
Student failure is a devastating experience. All too often clinical teachers and preceptors fail to fail students whose practice is unsafe or who have not met course outcomes. Evaluating students requires that those involved consider the duty of care for all health professionals to protect the public from harm.
We also discussed methods of evaluating our own teaching. We suggested creating a teaching portfolio as a method of self-assessment. Teaching portfolios can include statements of personal teaching philosophy, responses from student questionnaires, and peer assessments. They can showcase a variety of achievements and reflections.
REFERENCES
- Archer, J. (2010). State of the science in health profession education: Effective feedback. Medical Education, 44(1), 101–108. https://doi.org/10.1111/j.1365-2923.2009.03546.x
- Assessment. (n.d.). In The glossary of education reform. http://edglossary.org/
- Atherton, J. (2013). Learning and teaching: Learning contracts [Fact sheet]. http://www.learningandteaching.info/teaching/learning_contracts.htm
- Austin, Z., & Gregory, P. (2007). Evaluating the accuracy of pharmacy students’ self-assessment skills. American Journal of Pharmaceutical Education, 71(5), 1–8.
- Barrett, H. (n.d.). Dr. Helen Barrett’s electronic portfolios [Website]. http://electronicportfolios.com/
- Baxter, P., & Norman, G. (2011). Self-assessment or self-deception? A negative association between self-assessment and performance. Journal of Advanced Nursing, 67(11), 2406–2413. https://doi.org/10.1111/j.1365-2648.2011.05658.x
- Beckett, S. (1983). Worstward ho. Grove Press.
- Benton, S., & Cashin, W. (2012). Student ratings of teaching: A summary of research and literature. Idea Paper 50. The Idea Centre. http://www.ntid.rit.edu/sites/default/files/academic_affairs/Sumry%20of%20Res%20%2350%20Benton%202012.pdf
- Billings, D., & Halstead, J. (2012). Teaching in nursing: A guide for faculty (4th ed.). Elsevier.
- Black, S., Curzio, J., & Terry, L. (2014). Failing a student nurse: A new horizon of moral courage. Nursing Ethics, 21(2), 224–238.
- Boud, D. (1995). Enhancing learning through self-assessment. Kogan Page.
- Bradshaw, M., & Lowenstein, A. (2014). Innovative teaching strategies in nursing and related health professions education (6th ed.). Jones & Bartlett.
- Brookhart, S. (2008). How to give effective feedback to your students. Association for Supervision and Curriculum Development.
- Bush, H., Schreiber, R., & Oliver, S. (2013). Failing to fail: Clinicians’ experience of assessing underperforming dental students. European Journal of Dental Education, 17(4), 198–207. https://doi.org/10.1111/eje.12036
- Carless, D., & Boud, D. (2018). The development of student feedback literacy: Enabling uptake of feedback. Assessment & Evaluation in Higher Education, 43(8), 1315–1325. https://doi.org/10.1080/02602938.2018.1463354
- Carnegie Mellon. (n.d.). What is the difference between formative and summative assessment? [Fact sheet]. Eberly Center for Teaching Excellence, Carnegie Mellon University, Pittsburgh, PA. http://www.cmu.edu/teaching/assessment/basics/formative-summative.html
- Center for Teaching and Learning. (n.d.). Peer observation guidelines and recommendations [Fact sheet]. University of Minnesota, Minneapolis, MN. http://www1.umn.edu/ohr/teachlearn/resources/peer/guidelines/index.html
- Centre for the Study of Higher Education. (2002). A comparison of norm-referencing and criterion-referencing methods for determining student grades in higher education. Australian Universities Teaching Committee. http://www.cshe.unimelb.edu.au/assessinglearning/06/normvcrit6.html
- Chan, S., & Wai-tong, C. (2000). Implementing contract learning in a clinical context: Report on a study. Journal of Advanced Nursing, 31(2), 298–305.
- Cheung, R., & Au, T. (2011). Nursing students’ anxiety and clinical performance. Journal of Nursing Education, 50(5), 286–289. https://doi.org/10.3928/01484834-20110131-08
- Chickering, A., & Gamson, Z. (1987). Seven principles for good practice in undergraduate education. AAHE Bulletin, 39(7), 2–6.
- Colthart, I., Bagnall, G., Evans, A., Allbutt, H., Haig, A., Illing, J., & McKinstry, B. (2008). The effectiveness of self-assessment on the identification of learning needs, learner activity, and impact on clinical practice: BEME Guide No. 10. Medical Teacher, 30, 124–145.
- Concordia University. (n.d.). How to provide feedback to health professions students [Wiki]. http://www.wikihow.com/Provide-Feedback-to-Health-Professions-Students
- Davis, D., Mazmanian, P., Fordis, M., Van Harrison, R., Thorpe, K., & Perrier, L. (2006). Accuracy of physician self-assessment compared with observed measures of competence: A systematic review. The Journal of the American Medical Association, 296(9), 1094–1102.
- Dearnley, C., & Meddings, F. (2007). Student self-assessment and its impact on learning: A pilot study. Nurse Education Today, 27(4), 333–340.
- Duffy, K. (2003). Failing students: A qualitative study of factors that influence the decisions regarding assessment of students’ competence in practice. Glasgow Caledonian Nursing and Midwifery Research Centre. http://www.nmc-uk.org/Documents/Archived%20Publications/1Research%20papers/Kathleen_Duffy_Failing_Students2003.pdf
- Duffy, K. (2004). Mentors need more support to fail incompetent students. British Journal of Nursing, 13(10), 582. https://doi.org/10.12968/bjon.2004.13.10.13042
- Dunning, D., Heath, C., & Suls, J. (2004). Flawed self-assessment: Implications for health education and the workplace. Psychological Science in the Public Interest, 5(3), 69–106. https://faculty-gsb.stanford.edu/heath/documents/PSPI%20-%20Biased%20Self%20Views.pdf
- Edgerton, R., Hutching, P., & Quinlan, K. (2002). The teaching portfolio: Capturing the scholarship of teaching. American Association of Higher Education.
- Emerson, R. (2007). Nursing education in the clinical setting. Mosby.
- Frank, T., & Scharff, L. (2013). Learning contracts in undergraduate courses: Impacts on student behaviors and academic performance. Journal of the Scholarship of Teaching and Learning, 13(4), 36–53.
- Gaberson, K., Oermann, M., & Shellenbarger, T. (2015). Clinical teaching strategies in nursing (4th ed.). Springer.
- Gainsbury, S. (2010). Mentors passing students despite doubts over ability. Nursing Times, 106(16), 1–3.
- Galbraith, R., Hawkins, R., & Holmboe, E. (2008). Making self-assessment more effective. Journal of Continuing Education in the Health Professions, 28(1), 20–24.
- Gardner, M., & Suplee, P. (2010). Handbook of clinical teaching in nursing and health sciences. Jones & Bartlett.
- Gregory, D., Guse, L., Dick, D., Davis, P., & Russell, C. (2009). What clinical learning contracts reveal about nursing education and patient safety. Canadian Nurse, 105(8), 20–25.
- Hall, M. (2013). An expanded look at evaluating clinical performance: Faculty use of anecdotal notes in the US and Canada. Nurse Education in Practice, 13, 271–276.
- Handwerker, S. (2018). Challenges experienced by nursing students overcoming one course failure: A phenomenological research study. Teaching and Learning in Nursing, 13, 168–173.
- Heaslip, V., & Scammel, J. (2012). Failing underperforming students: The role of grading in practice assessment. Nurse Education in Practice, 12(2), 95–100.
- Hodgson, P., Chan, K., & Liu, J. (2014). Outcomes of synergetic peer assessment: First-year experience. Assessment and Evaluation in Higher Education, 39(2), 168–179.
- Kajander-Unkuri, S., Meretoja, R., Katajisto, J., Saarikoski, M., Salminen, L., Suhonene, R., & Leino-Kilpi, H. (2013). Self-assessed level of competence of graduating nursing students and factors related to it. Nurse Education Today, 34(5), 795–801.
- Keary, E., & Byrne, M. (2013). A trainee’s guide to managing clinical placements. The Irish Psychologist, 39(4), 104–110.
- Knowles, M. (1975). Self-directed learning. Association Press.
- Larocque, S., & Luhanga, F. (2013). Exploring the issue of failure to fail in a nursing program. International Journal of Nursing Education Scholarship, 10(1), 1–8.
- Laske, R. (2019). Peer evaluation of clinical teaching practices. Teaching and Learning in Nursing, 14(1), 65–68.
- Lyons, N. (2006). Reflective engagement as professional development in the lives of university teachers. Teachers and Teaching: Theory and Practice, 12(2), 151–168.
- Marsh, S., Cooper, K., Jordan, G., Merrett, S., Scammell, J., & Clark, V. (2005). Assessment of students in health and social care: Managing failing students in practice. Bournemouth University Publishing.
- Mass, M., Sluijsmans, D., van der Wees, P., Heerkens, Y., Nijhuis-van der Sanden, M., & van der Vleuten, C. (2014). Why peer assessment helps to improve clinical performance in undergraduate physical therapy education: A mixed methods design. BMC Medical Education, 14, 117. https://doi.org/10.1186/1472-6920-14-117
- Mehrdad, N., Bigdeli, S., & Ebrahimi, H. (2012). A comparative study on self, peer and teacher evaluation to evaluate clinical skills of nursing students. Procedia—Social and Behavioral Science, 47, 1847–1852.
- Melrose, S. (2017). Balancing reflection and validity in health profession students’ self-assessment. International Journal of Learning, Teaching and Educational Research, 16(8), 65–76.
- Melrose, S., & Shapiro, B. (1999). Students’ perceptions of their psychiatric mental health clinical nursing experience: A personal construct theory explanation. Journal of Advanced Nursing, 30(6), 1451–1458.
- Molloy, E., Ajjawi, R., Bearman, M., Noble, C., Rudland, J., & Ryan, A. (2020). Challenging feedback myths: Values, learner involvement and promoting effects beyond the immediate task. Medical Education, 54(1), 33–39. https://doi.org/10.1111/medu.13802
- Mort, J., & Hansen, D. (2010). First-year pharmacy students’ self-assessment of communication skills and the impact of video review. American Journal of Pharmaceutical Education, 74(5), 1–7.
- O’Connor, A. (2015). Clinical instruction and evaluation. Jones & Bartlett.
- Oermann, M. (2015). Teaching in nursing and role of the educator. Springer.
- Pisklakov, S., Rimal, J., & McGuirt, S. (2014). Role of self-evaluation and self-assessment in medical student and resident education. British Journal of Education, Society and Behavioral Science, 4(1), 1–9.
- Ramani, S., & Krackov, S. (2012). Twelve tips for giving feedback effectively in the clinical environment. Medical Teacher, 34, 787–791.
- Ramli, A., Joseph, L., & Lee, S. (2013). Learning pathways during clinical placement of physiotherapy students: A Malaysian experience of using learning contracts and reflective diaries. Journal of Educational Evaluation in the Health Professions, 10, 6. https://doi.org/10.3352/jeehp.2013.10.6
- Ramsden, P. (1992). Learning to teach in higher education. Routledge.
- Regehr, G., & Eva, K. (2006). Self-assessment, self-direction, and the self-regulating professional. Clinical Orthopaedics and Related Research, 449, 34–38. http://innovationlabs.com/r3p_public/rtr3/pre/pre-read/Self-assessment.Regher.Eva.2006.pdf
- Rudland, J., Wilkinson, T., Wearn, A., Nicol, P., Tunny, T., Owen, C., & O’Keefe, M. (2013). A student-centred model for educators. The Clinical Teacher, 10(2), 92–102.
- Rush, S., Firth, T., Burke, L., & Marks-Maran, D. (2012). Implementation and evaluation of peer assessment of clinical skills for first year student nurses. Nurse Education in Practice, 12(4), 219–226.
- Rye, K. (2008). Perceived benefits of the use of learning contracts to guide clinical education in respiratory care students. Respiratory Care, 53(11), 1475–1481.
- Scales, D. E. (1943). Differences between measurement criteria of pure scientists and of classroom teachers. Journal of Educational Research, 37, 1–13.
- Scanlan, J., Care, D., & Glessler, S. (2001). Dealing with the unsafe student in clinical practice. Nurse Educator, 26(1), 23–27.
- Secomb, J. (2008). A systematic review of peer teaching and learning in clinical education. Journal of Clinical Nursing, 17(6), 703–716. https://doi.org/10.1111/j.1365-2702.2007.01954.x
- Seldin, P. (1997). The teaching portfolio: A practical guide to improved performance and promotion/tenure decisions (2nd ed.). Anker.
- Shulman, L. (1998). Teacher portfolios: A theoretical activity. In N. Lyons (Ed.), With portfolio in hand: Validating the new teacher professionalism (pp. 23–37). Teachers College Press.
- Sluijsmans, D., Van Merriënboer, J., Brand-Gruwel, S., & Bastiaens, T. (2003). The training of peer assessment skills to promote the development of reflection skills in teacher education. Studies in Educational Evaluation, 29, 23–42.
- Timmins, F. (2002). The usefulness of learning contracts in nurse education: The Irish perspective. Nurse Education in Practice, 2(3), 190–196. https://doi.org/10.1054/nepr.2002.0069
- Walsh, T., Jairath, N., Paterson, M., & Grandjean, C. (2010). Quality and safety education for nurses clinical evaluation tool. Journal of Nursing Education, 49(9), 517–522.
- Weeks, B., & Horan, S. (2013). A video-based learning activity is effective for preparing physiotherapy students for practical examinations. Physiotherapy, 99, 292–297.
- Welsh, P. (2014). How first year occupational therapy students rate the degree to which anxiety negatively impacts on their performance in skills assessments: A pilot study at the University of South Australia. Ergo, 3(2), 31–38. http://www.ojs.unisa.edu.au/index.php/ergo/article/view/927
- Winstrom, E. (n.d.). Norm-referenced or criterion-referenced? You be the judge! [Fact sheet]. http://www.brighthubeducation.com/student-assessment-tools/72677-norm-referenced-versus-criterion-referenced-assessments/?cid=parsely_rec