4 | Authenticity and Engagement
The Question of Quality in Assessment
Authentic assessments, especially in blended and online learning contexts, encourage students to take a deep approach to learning, provide necessary alignment for faculty to better determine the quantity and quality of student learning, and provide institutions with the evidence necessary to respond to external pressures regarding their ability to measure student learning outcomes. This chapter defines authentic assessment, grounds it in constructivist theory, and considers some of the design considerations necessary to build authentic assessments that deliver on the promise of their potential.
Defining Authentic Assessment
Over 20 years ago, the “Principles of Good Practice for Assessing Student Learning” (Astin et al., 1992) were developed under the auspices of the American Association for Higher Education’s Assessment Forum. These principles of good practice suggest that successful assessment begins with issues of use and then focuses on the issues relevant to educators and learners. Colby, Ehrlich, Beaumont, and Stephens (2003) suggest that assessment practices should assess students holistically, including “knowledge, abilities, values, attitudes and habits of mind that affect academic success and performance beyond the classroom” (p. 259). To assess these different areas, Astin et al., in their list of principles, recommended that assessment begin with educational values, and they caution that when values are skipped over, assessment diminishes to measuring what’s easy rather than offering a process that seeks to improve what’s important to learners. Astin et al.’s principles further assert that assessment works best when it is ongoing rather than episodic, when it reflects an understanding of learning as multidimensional, integrated, and revealed in performance over time, and when it attends not only to outcomes but also, in equal measure, to the performance that leads to those outcomes. These experiences should include “a diverse array of methods, including those that call for actual performance, using them over time to reveal change, growth, and increasing degrees of integration” (Astin et al., 1992). Authentic assessments fulfill the spirit of these principles.
Authentic assessments are based in real-world relevance. Authentic assessments include activities that closely match real-world tasks undertaken by practitioners (Herrington, Oliver, & Reeves, 2006). They are designed to actively engage students in their own learning by using real-life situations, requiring students to make connections and forge relationships between prior knowledge and skills, and allowing for multiple pathways for solutions and a diversity of perspectives (Moon, Brighton, Callahan, & Robinson, 2005). Authentic-assessment tasks are ill defined and “open ended, meaning that they can be solved through multiple approaches, mirroring what students will encounter later in life” (Moon et al., 2005). Authentic assessments are also highly engaging learning opportunities that can help foster students’ higher-order thinking skills such as communicating, solving problems collaboratively, and thinking critically. Such skills support the new economy, which is characterized by “flatter management structures, decentralized decision making, information sharing, and the use of task teams” (Kay & Greenhill, 2011, p. 42), where such structures permit flexible work arrangements and encourage teams to work more creatively and productively, thus adding value to the workplace. Authentic assessments are frequently collaborative in nature, routinely using technology-rich co-construction environments (Barber, King, & Buchanan, 2015).
Other distinguishing features of authentic assessments include a longer and sustained time period and the use of multiple products, which can better gauge learner growth over time. According to Campbell and Schwier (2014),
An instructor who assesses for authenticity either creates natural or real-life settings and activities or contextualizes learning in the settings that already exist in order to understand and document how learners think and behave over an extended period of time. . . the instructor uses multiple sources for gathering information that would reveal a more accurate picture of learning progress as well as emphasizing the process of learning, not just the final product. (p. 361)
Authentic assessments serve the interests of students by encouraging them to play a more active role in the assessment of their own learning through activities such as reflective exercises, self-evaluations in tandem with peer assessments, collaborative projects, semantic mapping, and e-portfolios.
A noteworthy characteristic of authentic assessment is its collaborative nature. Matuga (2006) writes that “situating assessment and evaluation as essentially social activities, influenced by unique affordances and constraints of a particular educational context, is a critical pedagogical component when designing and teaching online courses” (p. 317). This social, interactive dimension of meaning and knowledge construction is a suitable teaching approach for many areas, but especially for the growing focus on essential employability skills (Ontario Ministry of Advanced Education and Skills Development, 2015), which include communication (reading, writing, listening), gathering and managing information (selecting and using appropriate tools and technology, computer literacy, Internet skills), interpersonal skills (team work, conflict resolution), and personal skills (managing the use of time and taking responsibility for one’s own actions, decisions, and consequences). Webb and Gibson confirm the value of collaborative, technology-enhanced learning, arguing that learning in technology-enabled collaborative environments requires cognitive, metacognitive, and social skills to develop “shared task understanding, negotiating shared perspectives, argumentation, and maintaining focus” (2015, p. 678). These complex cognitive skills are precisely the types of transferable lifelong skills highly desired in today’s workplace by both students and employers.
Authentic assessment is especially important for many distance learners because, as adults, they are co-existing in twin worlds of work and learning (Campbell & Schwier, 2014). Such learners benefit most from assessments that as closely as possible replicate the task or process being assessed. And as authentic assessment is “connected to adults’ life circumstances, frames of reference, and values” (Wlodkowski, 2008, p. 313), such assessments encourage participants to bring their authentic selves to the learning environment. Cranton and Carusetta (2004) define authenticity as a “multi-faceted concept that includes at least four parts: being genuine, showing consistency between values and actions, relating to others in such a way as to encourage their authenticity, and living a critical life” (p. 7). In authentic assessments, where students are called upon to work on real-life tasks with others, they are encouraged to bring their authentic selves, self-reflect on the congruence of their values and actions, and relate to others in authentic relationships. Because authentic assessments are open ended, based in reality, and frequently collaborative, they create the conditions conducive to transformative learning, where students, encountering alternative points of view and perspectives, come to question their assumptions, beliefs, and values, potentially leading to a change in world view and values (Kelly, 2009).
The Theoretical Foundations of Authentic Assessment
Authentic assessments emerge from constructivist and social-constructivist theory and from collaborative-constructivist transactional process models such as the Community of Inquiry (CoI). Constructivist pedagogies of active, interactive, and collaborative learning have proven effective in aiding student learning, and in recent years positivist approaches that treat learning as an object to be measured have given way to constructivist views. Constructivists emphasize the importance of creating meaning from personal experience and divergent thinking, and believe that many of the problems in current assessment practice can be overcome using a social-constructivist approach. Within the CoI framework, assessment is part of “teaching presence,” the unifying force that “brings together the social and cognitive processes directed to personally meaningful and educationally worthwhile outcomes” (Vaughan, Cleveland-Innes, & Garrison, 2013, p. 12). Teaching presence consists of the design, facilitation, and direction of a community of inquiry, and design includes assessment, as well as course organization and delivery.
As noted in Colby et al. (2003), “the research literature on the effectiveness of pedagogies of engagement is extensive; it is also complicated because their impact depends on the quality and conditions of their use and the specific outcomes chosen to be assessed” (p. 136). While pedagogical effectiveness is dependent on a host of factors, Colby et al. posit that it is fair to say that when done well,
teaching methods that actively involve students in the learning process and provide them with opportunities for interaction with their peers as well as with faculty enhance students’ content learning, critical thinking, transfer of learning to new situations, and such aspects of moral and civic development as a sense of social responsibility, tolerance, and non-authoritarianism. (2003, p. 136)
McKeachie, Pintrich, Lin, and Smith (1987) cite several studies highlighting key findings regarding the effectiveness of constructivist approaches. Gruber and Weitman (1962), for example, found that students who engaged in small discussion groups without a teacher not only did at least as well on a final examination as students who attended the teacher’s lecture but also surpassed their peers in curiosity (as measured by question-asking behaviour) and in their interest in educational psychology. Similarly, Webb and Grib (1967) reported on six studies comparing student-led discussions with instructor-led discussions or lectures and found significant differences in achievement test results favouring the student-led discussions. These two examples are drawn from a wealth of some 50 years of research validating active and collaborative pedagogies. From this research, certain principles of learning have been developed:
1. Learning is an active, constructive process. In order to achieve real understanding, learners must actively struggle to work through and interpret ideas, look for patterns of meaning, and connect new ideas with what they already know.
2. Genuine and enduring learning occurs when students are interested in, even enthusiastic about, what they are learning, when they see it as important for their present and future goals.
3. Thinking and learning are not only active but also social processes. In most work and other non-academic settings, people are more likely to think and remember through interaction with other people than as a result of what they do alone.
4. Knowledge and skills are shaped in part by the particular contexts in which they are learned. Few skills are truly generic, and transfer of knowledge and skills to very different contexts is difficult.
5. One way to increase the likelihood that transfer will be successful is to make the context in which skills and knowledge are learned more similar to the settings in which they will be used. Another way to increase likelihood of transfer is by creating “the expectation of transfer” by making transferability an explicit teaching goal (Salomon & Perkins, 1989).
6. Reflective practice, accompanied by informative feedback, is essential to learning.
7. Chickering and Gamson’s “Seven Principles for Good Practice in Undergraduate Education” (1991) encourages respect for diverse talents and ways of learning. Broadening the array of skills, tasks, and modes of representation used in a course increases the likelihood that students with different strengths will be able to connect productively with the work.
8. The development of genuine understanding is supported by the capacity to represent an idea or skill in more than one modality, and learning benefits from experiences that provide a wider array of modalities than those that usually dominate higher education (namely the linguistic and logical/mathematical). (Colby et al., 2003, pp. 136–138)
These principles highlight constructivist learning approaches, which form the foundation for the construction of effective authentic assessments.
Design Considerations for Authentic Assessment
There are several ways to create authenticity in learning and assessment. Reflecting the meaning of authentic assessment—assessment that values and connects to adults’ life experiences and circumstances—educators can create assessment and evaluation tools that offer learners the opportunity to relate their learning to real-life subjects and real-life problems. Service learning, for example—where learners leave the classroom and engage in meaningful and authentic work in a community setting—offers a type of learning that is located in real time and is seen by some to provide a solution to perceived weaknesses in today’s educational systems (Bok, 2006). Of service learning, Steinke and Fitch (2007) write that,
because of [its] goal-based, real world nature, enhancing the quality of service-learning assessment can also provide a fresh perspective on the increasingly complex and often contentious assessment debates at colleges and universities across the country. The nature of service-learning often demands authentic assessments as faculty struggle to capture the real world transfer skills they believe are developing in their students. (p. 28)
Although the opportunities offered by service learning are not designed specifically for online learning, the philosophy and practice could easily be incorporated into online courses or programs. For example, with the same kind of preparation and structure as would be provided in classroom instruction, online learners could enter into a service-learning arrangement in their communities. The following are examples of potential service-learning experiences:
- Work on a Habitat for Humanity project constructing housing for families with low incomes
- Organize or assist with voter registration
- Work with a neighbourhood association
- Work with a public interest organization
- Work with a political campaign
- Assist with community events and projects such as museum activities, cultural awareness programs, fairs and festivals, Adopt-a-Highway, neighbourhood clean-up and beautification days
- Serve as a mentor for a young person through Big Brothers Big Sisters, Scouting, 4-H, or other youth organizations
- Help senior citizens with a variety of activities that enhance their quality of life
- Conduct a conservation project at a park, lakeshore, or nature centre
- Tutor elementary or secondary students in a variety of subjects, work with literacy, or serve as a “Reading Partner” to encourage youngsters to develop good reading habits. (University of Wisconsin–Eau Claire, n.d.)
Learners returning from their service-learning placements are assessed on their on-site experiences in relation to the course learning outcomes they have achieved. The blend of real-life experience with reflective activity, centred on expected outcomes, should produce a very authentic assessment or evaluation activity. In their report on service-learning assessment, Steinke and Fitch (2007) not only describe the virtues and appropriateness of authentic qualitative assessment but also present many qualitative tools that could be applied to measure service-learning outcomes.
To design an effective authentic assessment in any environment, one could ask, “How can I use assessment to encourage students to adopt a surface approach to learning?” (Wittmann-Price & Godshall, 2009, p. 216), and then do the opposite. Or, as Bull (2015) asks: “What is the absolute best evidence that learning has occurred for any particular learning outcome?” For carpentry students, the best evidence that they can plan and pour a suspended concrete slab is for them to plan and pour a suspended concrete slab. For paramedic students, the best evidence that they can respond to patients in crisis is to respond to patients in crisis: to demonstrate the ability to remain calm in emergency situations, monitor patient vitals, and exercise judgment about what appropriate actions need to be taken, such as administering morphine with a preceptor present so that the patient is not put at unnecessary risk. Designing authentic assessments becomes more complex, however, when trying to assess higher-order cognitive skills such as critical thinking, problem solving, and communication. Critical thinking, especially, while frequently and intensely discussed among educators and researchers, remains a concept that eludes definition and assessment (Deller, Brumwell, & MacFarlane, 2015; Garrison & Archer, 2000).
The difficulty of assessing higher-order cognitive processes and skills does not diminish the fact that design must commence with a focus on constructive alignment (Rust et al., 2005). Everything in the curriculum—the learning outcomes, learning and teaching methods, and assessment methods—should follow one from another and be connected in demonstrable ways. Learners should be able to see and understand the relationship between the parts of their courses. Learning outcomes serve as the roadmap to course content. They are broad yet direct statements that describe competencies that students should possess at the end of a course or program, competencies that show “what learners are supposed to know and what they are supposed to be able to do as a result of their learning” (Kenny, 2011, para. 1). Learning outcomes not only describe what students will be able to know or do but may also help students to understand how their course or their program will directly contribute to the competencies that are required of them in the workplace. Fuller discussions of learning outcomes and their contribution to authentic learning and assessment are found later in this chapter and in Chapter 7.
Addressing the need for the thoughtful design of authentic assessment, Gulikers, Bastiaens, and Kirschner (2004) developed the Five-Dimensional Framework for Authentic Assessment, a framework that includes essential planning elements to consider when designing authentic assessment: Task, Physical Context, Social Context, Assessment Result or Form, and Criteria and Standards. Building tasks for authenticity is essential for learners to engage with problems and tasks that replicate, as much as possible, real-life and professional situations. Herrington, Oliver, and Reeves (2006) suggest that authentic tasks support the learner by providing a meaningful context, enhancing motivation, supporting metacognitive development, and promoting transferability of learning.
The aspect of physical context has significant implications for all learners, but especially for distance learners, as the virtual environment may limit the creation of a truly authentic context. Physical context accounts for the relationship between where we are and how we do something. However, we could raise the same question for face-to-face learners, asking “whether assessing students in a clean and safe environment really assesses their ability to wisely use their competencies in real life situations” (Gulikers et al., 2004, p. 74).
According to Gulikers et al. (2004), assessment results include (a) a quality product or performance that students would be asked to produce in real life, (b) demonstration that permits making valid inferences about the underlying competencies, (c) multiple indicators of learning in order to come to fair conclusions, and (d) the expectation that students should defend their work to others to ensure that their apparent mastery is genuine. These expectations correspond to Herrington et al.’s (2006) perspective on the value of authentic tasks and their “polished products.” Criteria and standards, therefore, become valued characteristics of assessments, with standards being the level of performance expected. Because employees usually know the criteria by which they will be judged, Gulikers et al. (2004) maintain that, for fairness and efficacy, it is important for teachers to set criteria and make them explicit and transparent to learners. Even more important than having criteria, however, is having students engage with criteria. A useful strategy for this is a marking exercise where students use a rubric to mark an exemplar. This exercise can deepen students’ awareness of the standards by which they will be judged.
Tools for Authentic Assessment
Several tools can help course designers create an environment in which authentic assessment gives learners a means of integrating assessment with learning, with real-life situations, and with past experience. Feedback, as a tool, is considered separately below, as it occurs post-assessment. Both learning outcomes and rubrics should, ideally, precede assessment.
Learning Outcomes
Often equated to the behavioural objectives posed by Gagne (1971) and Mager (1997) decades ago, learning outcomes are a source of contention among educators. They are considered by some to be reductionist and narrow in their attempt to capture the breadth of learning in a succinct statement or two. Dron (2007) is highly critical: “Worse still, learning outcomes are fuzzy, context-related, and dubious constructs, at best and, at worst, absolutely meaningless” (p. 296). In the same criticism, Dron accuses learning outcomes of trying to bridge the gap between “knowing how” and “knowing that” (p. 296). We are particularly intrigued with this criticism, as it strikes at the heart of rigorous prior learning assessment processes that we endorse as authentic learning activities. Dron’s contention, and the ability of prior learning processes to address this concern, are discussed in Chapter 5.
It may be true that poorly designed learning outcomes do not provide much assistance to the learning process in the same way that poor teachers do not add much to the teaching process and poor materials do not contribute to learners’ learning. However, if we assume the presence of well-designed learning outcomes, outcomes that are not fuzzy or dubious, outcomes to which learning activities, materials, and ultimately assessments are aligned, then we accept that learning outcomes do indeed form an integral part of the learning cycle. Yogi Berra, that man of memorable words, famously said: “If you don’t know where you’re going, you’ll end up someplace else.” More poetically, and in the same vein, the author Reif Larsen (2009) speaks of maps in this way: “A map does not just chart, it unlocks and formulates meaning; it forms bridges between here and there, between disparate ideas that we did not know were previously connected” (p. 138). We consider learning outcomes as maps to learning. Garrison and Archer (2000) argue that properly constructed and applied learning outcomes align with a constructivist and collaborative learning environment. In keeping with this understanding, then, we note the encouraging integration of learning outcomes into quality assurance planning, program standards, degree qualifications frameworks, curriculum design, and transfer credit agreements (Deller et al., 2015).
The alignment of learning outcomes to activities, resources, and assessments is important to the integrity of the learning cycle. The role of learning outcomes in the alignment and planning process is discussed in Chapter 7.
Rubrics
Like learning outcomes, rubrics are contentious learning tools. As with learning outcomes, they are touted as useful guidelines for effective teaching and learning. And like learning outcomes, they are also considered potentially reductionist. As with anything, they can be rigorously and appropriately prepared, or they can be “fuzzy” and haphazard and therefore of little use. One of the better examples of rigorously developed rubrics is the set of 16 VALUE rubrics (Valid Assessment of Learning in Undergraduate Education) developed by the American Association of Colleges and Universities as part of the Liberal Education and America’s Promise initiative from 2007 to 2009. Each rubric was developed to support essential learning outcomes, which reflect the most frequently identified characteristics of learning, and each was tested by faculty at over 100 college campuses.
Ideally, a grading rubric tells students the goals, purpose, and manner of assessment: It states why the assessment is being conducted and how learners can succeed. The rubric should clarify curriculum objectives and provide criteria for meeting a range of proficiency levels (Mathur & Murray, 2006). We are of two minds about rubrics. As a tool and an aid to learning, they can indeed be helpful to learners in outlining the conditions of the assessment instrument and, as Mathur and Murray indicate, rubrics can guide learners in knowing how to complete the task successfully. However, all too often, rubrics are developed as a required add-on to assignments and follow a template that is generic, vague, and in its vagueness, open to the usual degree of subjectivity exercised by the marker of the assignment.
The examples that follow are actual rubrics, instructor-written and designer-approved, for a university course. What does it mean to write, in a rubric, “Learners will demonstrate a high degree of comprehension of subject matter”? Similarly, consider this longer and more detailed rubric: “Content/ideas are thoughtful, relevant and presented clearly and logically. Assignment topics are coherently addressed and supported with relevant examples. Conclusion is relevant and insightful. Three or more references have been used appropriately.” Even here, there is room for subjectivity in the assessment of relevance, thoughtfulness, logic, and coherence.
Subjectivity in the teaching-learning process is often regarded as the elephant in the room—more so in the social sciences and humanities than in the hard sciences, a discussion akin to the ever-present one around the “truthfulness” of both qualitative and quantitative research. There are also concerns regarding the “Gentleman’s A” and grade inflation. We cannot deny our bias as teachers; the best we can do is understand it and address it by making it clear. Exploring and understanding our philosophical approach as teachers is key to this process. Medland (2010) concludes her study on subjectivity in assessment with the suggestion that understanding our own biases and subjectivity could help educators engaged in team marking find greater “coherence.” Educators who have participated in team marking will know, from experience, that the range of responses to learners’ work by colleagues in the same discipline, content area, or field can be astonishingly varied. Bloxham (2009), speaking frankly, acknowledges that the topic of marking is under-researched and remains a “largely subjective process based on professional judgment grounded in assumptions of mutual understanding of disciplinary standards” (cited in Medland, 2010).
Wlodkowski (2008) explains what some instructors are doing when they do not use rubrics “formally” (p. 340): they are using them tacitly, or intuitively, making their judgments based on professional experience and understanding of the topic. The criteria that a well-written rubric would capture instead exist only in their heads.
Another support for the use of rubrics comes from adult-education principles that emphasize autonomy and self-direction. Following this notion, the collaboration of learners with the instructor in the creation of rubrics supports constructivist thinking and fosters the building of community within the learning group. A further benefit of having students apply, in a marking exercise, the criteria and standards by which they will be judged is the continual refinement of the rubric itself for greater clarity and appropriateness.
Rubrics cannot, however, overcome, diminish, or sidestep the marker’s dependence on his or her own judgment, professionalism, and integrity. In their defence, they do provide learners with some degree of guidance and a rationale for the forthcoming assessment as they go about their work. On a cautionary note, Wlodkowski (2008) uses this analogy: “They’re like a wall whose cracks you can’t see until you get very close” (p. 341). By this he means that although the words on the page may seem concrete and make sense, the intricacy and complexity of assessment and performance is subtle, nuanced, and detailed, its actual demands eluding us until we are fully immersed in the “doing.”
Feedback and Critique: Keeping the Learning Cycle Turning
Another important consideration in designing authentic assessments is planning for formative assessment and feedback. Given the variety of ways in which assessment can be used and the blurring of lines between summative and formative depending on that usage (see Chapter 1’s discussion), “formative assessment” here refers to assessment that fosters a response to the learner, regardless of whether or not a grade is assigned to the work. Although some research argues that feedback is the most important factor affecting future learning and student performance (Hattie, 1987; Black & Wiliam, 1998; Rust et al., 2005), other educators hold, perhaps more cynically, that the final grade is the telling factor for learners. Whatever the case, feedback—explanatory and confirmatory—is key to the cycle of authentic assessment. The most useful type of feedback is timely, detailed, and precise so that it can support learning. Such feedback helps clarify what good performance is; it facilitates self-assessment and reflection, encourages teacher and peer dialogue around learning, encourages positive motivational beliefs and self-esteem, provides opportunities to close the gap between current and desired performance, and can be used by instructors to help shape their teaching (Vaughan et al., 2013). Many students say they would like feedback more regularly (Colby et al., 2003), and one of students’ chief complaints about the reading of their assignments is that feedback is sparse or more confirmatory than explanatory.
Planning for the delivery of positive feedback to learners can help them succeed in their studies. Who among us has not received a paper back with only a checkmark on the last page and a grade? We are left to wonder what we did right and what we did wrong, or even whether the paper was closely read at all. Positive feedback can help learners develop self-confidence as competent learners; the resultant emotional dynamic feeds on itself, helping learners develop and maintain a learning pattern that fuels their efforts and carries them through the inevitable setbacks and hesitations that all learners face at some time. As assessment feedback contributes to the CoI’s teaching presence, “instructors who take the time to acknowledge the contributions of students through words of encouragement, affirmation or validation can achieve high levels of teaching presence” (Wisneski, Ozogul, & Bichelmeyer, 2015, p. 18). The ability to both give and receive quality feedback is an essential communication skill in itself, as well as a component of authentic leadership (George, Sims, McLean, & Mayer, 2011).
In addition to providing feedback, the constructivist approach that we have espoused requires that students actively engage with the feedback. Rust et al. (2005) cite Sadler (1989), who identified three conditions for effective feedback: (1) a knowledge of the standards in use; (2) comparison of those standards to one’s own work; and (3) the required action to close the gap between the two. Vaughan, Cleveland-Innes, and Garrison (2013) suggest that, to promote student engagement by using feedback, “instructors in a blended community of inquiry are also encouraged to take a portfolio approach to assessment, [as] this involves students receiving a second chance or opportunity for summative assessment on their course assignments” (p. 93). Providing multiple opportunities to submit iterations of their work, and thereby encouraging students to work to close the gap between current and desired performance, is highly authentic and similar to real-world work contexts. Peer assessment (see Chapter 5) can also be a particularly useful approach to building a knowledge of standards, comparing those standards to a learning object, and providing students opportunities to engage with feedback and improve their work. As Nagel and Kotzé (2010) point out, “one of the strategies that can improve the quality of education, particularly in web-based classes, is electronic peer review. When students assess their colleagues’ work, the process becomes reflexive: they learn by teaching and by assessing” (p. 46).
In summary, Reeves, Herrington, and Oliver (2002) have written extensively on authentic activities in online learning contexts, and the table below provides 10 characteristics of online tasks and the opportunities that authenticity should afford students, along with supporting research.
Table 4.1. Characteristics of Authentic Activity.
1. Have real-world relevance
2. Are ill-defined, requiring learners to define the tasks and sub-tasks needed to complete the activity
3. Comprise complex tasks to be investigated by learners over a sustained period of time
4. Provide the opportunity for learners to examine the task from different perspectives, using a variety of resources
5. Provide the opportunity to collaborate
6. Provide the opportunity to reflect and involve learners’ beliefs and values
7. Can be integrated and applied across different subject areas and lead beyond domain-specific outcomes
8. Are seamlessly integrated with assessment
9. Create polished products valuable in their own right rather than as preparation for something else
10. Allow competing solutions and diversity of outcome
Source: Reeves, T. C., Herrington, J., & Oliver, R. (2002).
Concluding Thoughts
Recently, the notion of authentic assessment has become more central to higher education. The Higher Education Quality Council of Ontario offered a three-part series on the challenges and opportunities in assessment in late 2015, and Educause offered a three-part digital badge series (entitled Learning Beyond Letter Grades), also in late 2015. Each series called for a move toward more authentic assessment strategies designed to increase learner engagement in the learning process while setting the stage for learners to develop higher-order cognitive skills that align with both learner and employer expectations. If assessment is the heart of the learning experience, assessment practices will need to encourage learners to bring their whole selves to meaningful, relevant tasks that prepare them for a life of 21st-century work and learning. Well-designed authentic assessments do just that.