Chapter 5 | Assessment
We may not like it, but students can and do ignore our teaching; however, if they want to get a qualification, they have to participate in the assessment processes we design and implement.
—(Brown, 2004, p. 81)
The term “assessment” in higher education often conjures up different sentiments and emotions. From a teacher’s perspective, Ramsden (2003, p. 180) states that assessment involves “getting to know our students and the quality of their learning.” Conrad and Openo (2018) suggest that assessment fundamentally shapes approaches to learning and reveals the qualitative nature of the educational experience. Yet when students in a research study were asked to use one word to describe their perceptions of assessment, the four most common words were fear, stress, anxiety, and judgment (Vaughan, 2013).
This disconnect between teacher and student perceptions regarding assessment is a serious issue, especially since a number of educational researchers have clearly linked student approaches to learning with the design and associated feedback of an assessment activity (Biggs, 1998; Hedberg & Corrent-Agostinho, 1999; Marton & Saljo, 1984; Ramsden, 2003; Thistlethwaite, 2006). For example, standardized tests with minimal feedback can lead to memorization and a surface approach to learning, whereas collaborative group projects can encourage dialogue, richer forms of feedback, and deeper modes of learning (Entwistle, 2003). In addition, a report by the International Commission on the Futures of Education (2021) advocates that assessment needs to evolve from a mode of compliance to a process of shared goal setting, which leads to growth.
This focus on development is closely aligned with some Indigenous perspectives on assessment. Claypool and Preston (2011) state that Euro-American-centric assessment practices focus on written quizzes, tests, and exams, which primarily promote cognitive development via rational, linear, and accountable activities. They suggest that this approach to assessment is focused largely on meeting curricular outcomes, and it tends to neglect the physical, emotional, and spiritual domains of students. Marule (2012) suggests that effective assessment from an Indigenous perspective utilizes practices that include the cognitive domain but focus equally on physical, emotional, intellectual, and spiritual growth and development. Our seventh principle of blended learning, then, is ensuring that assessment is aligned with learning outcomes and growth for all students.
Our purpose in this chapter is to demonstrate how the Community of Inquiry framework can be applied to blended learning environments in order to encourage deep approaches to learning and create meaningful assessment activities for all students. The key is to maintain a CoI approach by not overloading students and teachers with assessment tasks (remember that less is more) as well as by providing choice and flexibility in the process of assessment.
Approaches to Assessment in Higher Education
In higher education, the three approaches to assessment commonly used by teachers are diagnostic, formative, and summative (Reeves, 2000). Diagnostic assessments are used to determine a student’s prior knowledge and identify strengths and weaknesses. This type of self-assessment is crucial in helping students to become lifelong learners. As they engage in self-assessment practices, they learn to make sense of information, relate it to prior knowledge, and use it for new learning. This is often referred to as assessment as learning since it helps students to develop and support their metacognitive skills (Manitoba Education, 2006). Diagnostic assessment is thus concerned with the metacognitive monitoring of the learning dynamic.
Formative assessment is used to provide students with feedback on their progress throughout a course. This type of assessment provides students with timely and specific feedback on how they might make adjustments to their learning. This assessment for learning approach can be accomplished through peer feedback techniques (LearnAlberta, 2008). This introduces the importance of shared metacognition to monitor the learning process, particularly to manage collaborative teaching and learning strategies going forward.
Summative assessment is used to estimate performance at the end of a course and to grade students’ work. This assessment of learning is a snapshot in time that lets students know how well they have completed the learning tasks and activities. It provides information on their achievements (Manitoba Education, 2006).
Students develop a sense of ownership and efficacy when they use diagnostic, formative, and summative assessment feedback to make adjustments, improvements, and changes to how they learn and process information. These forms of assessment must be shared and discussed in order to foster a collaborative approach to learning. In a blended learning environment, teachers can integrate these three forms of assessment in a purposeful and intentional manner.
Blended Approach to Assessment
As mentioned, a blended approach to learning and teaching provides opportunities to integrate meaningfully the classroom and online components of a course. Diagnostic self-assessment approaches can be used to gauge student learning before a synchronous (face-to-face or F2F) session. Formative peer assessment techniques can be used for timely and specific feedback during an F2F session, and summative teacher assessments can be performed after an F2F session.
Diagnostic Self-Assessment: Before an F2F Session
Diagnostic self-assessment activities can be used as pre-class advance organizers to help teachers determine students’ prior knowledge or experience with a concept, topic, or issue (Ausubel, 1968). These activities also help to engage students and stimulate existing connections with prior learning and experience. An excellent guide to creating pre-class diagnostic activities is Brame’s (2013) website on just-in-time teaching.
This diagnostic assessment strategy was designed by Novak and colleagues (1999) as a feedback loop between pre-class and in-class activities. Students prepare for class by reading, viewing, or interacting with web-based resources and then complete an online diagnostic assessment activity (e.g., quiz, game, discussion forum post). Teachers have access to the compiled results from these diagnostic activities, which they can use to tailor the in-class (synchronous) activities to meet students’ learning needs and expectations.
Pre-Class Readings and Videos
Probably the most common pre-class just-in-time teaching diagnostic activity is to have students read an article or watch a video and then complete a self-assessment quiz. In terms of readings, we recommend that teachers work with their subject area librarians to select peer-reviewed articles from their institutional online library resources (Mount Royal University, 2022). Doing so provides students with opportunities to practise their digital information literacy skills.
With regard to video, many teachers have taken the Microsoft PowerPoint (Microsoft, 2022) or Google Slides (Google, 2022f) presentations that they would traditionally have displayed during class time and created short, narrated versions for students to view before class. The key is to create a corresponding self-assessment quiz or knowledge probe that allows students to determine their prior knowledge and experience related to the key concepts, topics, or issues in the pre-class reading or video. These quizzes should focus on conceptual understanding rather than factual knowledge, and a final question should be included: “What did you not understand about the required reading or video, and what would you like us to focus on during our next class session?” Teachers can easily create these quizzes in their institutional learning management systems, such as Blackboard (2022), Brightspace (Desire2Learn, 2022), Canvas (2022), and Moodle (2022).
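To illustrate how the compiled results of such a pre-class quiz might be reviewed before class, here is a minimal Python sketch that tallies scores on the conceptual items and surfaces the most common responses to the final open-ended question. The record structure and field names (responses, concept_scores, focus_request) are illustrative assumptions, not the export format of any particular learning management system.

```python
from collections import Counter
from statistics import mean

# Illustrative pre-class quiz export: one record per student.
# Field names are assumptions, not an actual LMS export format.
responses = [
    {"student": "A", "concept_scores": {"forces": 1, "energy": 0},
     "focus_request": "I didn't understand conservative vs. non-conservative forces."},
    {"student": "B", "concept_scores": {"forces": 1, "energy": 1},
     "focus_request": "Please go over the ravine example again."},
    {"student": "C", "concept_scores": {"forces": 0, "energy": 0},
     "focus_request": "I didn't understand conservative vs. non-conservative forces."},
]

# Average score per concept: low averages suggest topics to revisit in class.
concepts = {c for r in responses for c in r["concept_scores"]}
for concept in sorted(concepts):
    avg = mean(r["concept_scores"][concept] for r in responses)
    print(f"{concept}: {avg:.2f}")

# The most frequently requested focus topics, verbatim, for the next session.
for request, count in Counter(r["focus_request"] for r in responses).most_common(3):
    print(f"{count}x  {request}")
```

A summary like this can be shared at the start of the F2F session so that students see how their pre-class feedback shaped the agenda.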
TED-Ed Pre-Class Activities
In addition, the Technology, Entertainment, and Design (TED) non-profit organization has developed a free system for teachers to create pre-class activities using its extensive video repository (TED-Ed, 2022). Here is a sample process for creating these pre-class activities.
Pre-Class Online Discussion Forum
An alternative to using online quizzes is an online discussion forum to allow students to post questions or issues related to the pre-class reading or video. This pre-class crowdsourcing can be a powerful learning activity because students are able to read and respond to each other’s questions in advance of the F2F session.
In a blended course, the key to an effective online discussion is to link it clearly to the F2F session. For example, it is important for the teacher to review the discussion forum posts before class in order to determine key questions and themes, which can then be explored further, discussed, and debated during the synchronous session. We also recommend asking students for permission to display their posts during class time. Doing so helps to highlight key points and allows for an increased student “voice” in the synchronous discussions.
Digital Diagnostic Assessment Applications
With advances in digital technology, new forms of diagnostic assessment activities continue to emerge. For example, commercial software applications such as Lyryx (2022) have created sophisticated and challenging problems for students to solve before classes. Students receive immediate feedback on their problem-solving skills, and their results are automatically integrated into the learning management system’s gradebook for teachers to view and compile for use in the F2F sessions.
Mobile phone apps have also provided opportunities for some creative diagnostic assessment activities. For example, Singapore Management University (2022) has developed an Accounting Challenge game app that students can play before a class session. There is an option to have students’ game scores automatically entered into an institution’s learning management system gradebook.
Our experience suggests that students often struggle with self-assessment activities in a blended course because of a lack of experience and proper instruction. Students in a related research study had a wide range of perceptions regarding the value of self-assessment (Vaughan, 2014). As one student indicated, “I don’t find it too important to me. I see by my grades how I am doing instead of assessing myself” (Survey Participant 11); another student stated that “I would rather get feedback from a teacher or a peer” (Survey Participant 6). A number of students commented that they did not have previous experience with self-assessment activities; one noted that “I can sometimes have a hard time recognizing where I can improve when I’m self-assessing” (Survey Participant 17).
In terms of overcoming these issues, we recommend the Taylor Institute’s (2022) Learning Module: Critical Reflection. This online resource provides faculty members with an extensive guide to designing, facilitating, and directing self-assessment activities in blended and online courses.
Formative Peer Assessment: During an F2F Session
A common complaint among students about a blended approach to learning is the “course-and-a-half syndrome” (Twigg, 2003). They indicate that there is no clear connection or integration between the online and face-to-face components of a blended course and that they often feel like they are taking a course and a half. Thus, the key to a successful F2F session is to build upon student feedback collected from the pre-class diagnostic assessment activities.
Online Survey Results
Survey and quiz results or discussion forum posts can be shared by the teacher and reviewed by the students at the beginning of a class. The ensuing debate helps to clarify key concepts and allows students to begin comparing and contrasting their perspectives and experiences related to the questions and issues raised in the pre-class activities.
Formative Peer Assessment
Attributed to the French moralist and essayist Joubert (1842) is the quotation “to teach is to learn twice,” and in an effective Community of Inquiry all participants are both students and teachers. The term “teaching” rather than “teacher” presence implies that everyone in the community is responsible for providing input on the design, facilitation, and direction of the teaching process. In a study conducted by Vaughan (2013), students commented on the value of formative peer assessment activities but indicated that one of the biggest challenges was finding a common place and time to meet outside the classroom. They recommended that teachers “provide class time to begin and conclude formative peer assessment activities in order to build trust and accountability for the peer assessment process” (p. 19).
Thomas and Brown (2021) have documented how the intentional design of formative assessment strategies helps to foster collaborative learning during synchronous sessions. They indicate that these designs include the use of conversational protocols to help students clarify their thinking. Such protocols are structured sets of guidelines to promote effective and efficient communication and problem solving (Government of Ontario, 2016). They also recommend creating class time for groups to engage in peer feedback loops in order to improve and refine their group work. This includes providing the groups with clear criteria (e.g., an assessment rubric) to give feedback to their peers. Loureiro et al. (2012) emphasize that clear criteria are essential to mitigate students’ negative perceptions of peer assessment and support collaboration. Clarifying learning intentions helps to promote student success with collaborative learning activities (Wiliam & Leahy, 2015). The University of Wisconsin-Stout (2022) has an excellent online resource for creating and using rubrics for assessment.
Classroom Response Systems and Peer Instruction
The majority of students now have mobile phones, which can be used as a classroom response system to support a form of peer instruction (Onodipe & Ayadi, 2020). The process begins with the teacher posing a question or problem focused on a threshold concept. Meyer and Land (2005) define a threshold concept as a core idea that is conceptually challenging for students: they often struggle to grasp it, but once grasped it radically transforms their perception of the subject. Although this material is difficult to learn, understanding threshold concepts is essential to mastering any field of study. Kent (2016) has created an excellent guide to the effective use of threshold concepts in higher education.
Once the teacher has displayed the question or problem digitally, the students first work individually toward a solution and vote on what they believe is the correct response by selecting the desired numbered or lettered option on their phones. The results are then projected for the entire class to view. For a good question, there is usually a broad range of responses. Students are then required to compare and discuss their solutions with the person next to them to reach a consensus (Salzer, 2018). Another vote is taken, but this time only one phone per group can be used. In most circumstances, the range of responses narrows and usually centres on the correct answer.
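As a rough illustration of how a teacher might compare the two voting rounds, the following Python sketch tallies individual and group votes for a single threshold-concept question and reports how the distribution shifts toward the correct answer. The vote data, the options, and the correct answer are hypothetical.

```python
from collections import Counter

# Hypothetical responses to one peer-instruction question (options A-D, correct = "C").
individual_votes = ["A", "C", "B", "C", "D", "C", "B", "A", "C", "C", "D", "B"]
group_votes = ["C", "C", "B", "C", "C", "C"]  # one vote per group after discussion
correct = "C"

def summarize(label, votes):
    """Print the vote distribution and return the share choosing the correct option."""
    counts = Counter(votes)
    share_correct = counts[correct] / len(votes)
    print(f"{label}: {dict(counts)}  correct = {share_correct:.0%}")
    return share_correct

before = summarize("Individual vote", individual_votes)
after = summarize("Group vote", group_votes)
print(f"Shift toward the correct answer: {after - before:+.0%}")
```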
There are various software applications to support this form of peer instruction. Currently, three of the most common tools are Mentimeter (Menti, 2022), Poll Everywhere (2022), and Slido (2022), which have a variety of options and pricing requirements.
The use of classroom response systems and peer instruction is particularly effective in large classes at the beginning of a semester since it provides an “icebreaker” to allow students to get to know others in the course. After the initial activity, we recommend having students exchange email addresses or text message numbers so that they can begin the process of creating critical friends (described in Chapter 3).
Calibrated Peer Review
With regard to formative peer feedback, many students in higher education have limited prior experience, and often they are reluctant to engage meaningfully in this form of assessment practice (Vaughan, 2014). Therefore, it is important for teachers to provide students with guidance and practice on providing and receiving peer feedback. The University of California Los Angeles (2019) has developed the Calibrated Peer Review application to help students learn how to conduct a peer review.
Clase et al. (2010) describe the process as consisting of three phases; a minimal sketch of the calibration-weighting idea follows the list.
- 1. Writing: Students first write and then digitally submit their work on a topic in a format specified by the teacher.
- 2. Calibration training: Training for peer review comes next. Students assess three “calibration” submissions against detailed questions that address the criteria on which the assignment is based. Students individually review each of these calibration submissions according to the questions specified by the rubric and then assign a holistic rating out of 10. Feedback at this stage is vital. If the reviews are poorly done and do not yet meet the teacher’s expectations, then the students get a second try. The quality of the reviews is taken into account in the next step of reviewing real submissions from other students.
- 3. Peer review: Once the deadline for calibration reviews has passed, each student is given anonymous submissions by three other students. The student uses the same rubric to review the peers’ work, this time providing comments to justify the assessment and rating. Poor calibration performance in the second phase decreases the impact of the grades given to peers’ work. After the students have done all three, they assess their own submissions.
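The exact weighting scheme used by Calibrated Peer Review is not reproduced here, but the general idea in phases 2 and 3, weighting each peer’s rating by how closely that peer matched the instructor on the calibration submissions, can be sketched in Python as follows. All numbers, names, and the particular weighting formula are illustrative assumptions rather than the application’s actual algorithm.

```python
# Instructor's ratings (out of 10) for the three calibration submissions.
instructor_calibration = [8, 5, 9]

# Each reviewer's ratings for the same calibration submissions,
# plus the rating they later gave a real peer submission.
reviewers = {
    "reviewer_1": {"calibration": [7, 5, 9], "peer_rating": 8},
    "reviewer_2": {"calibration": [3, 9, 4], "peer_rating": 4},
    "reviewer_3": {"calibration": [8, 6, 8], "peer_rating": 7},
}

def competency_weight(calibration, reference):
    """Assumed weight: 1 minus the mean absolute error, scaled to the 10-point rubric."""
    mae = sum(abs(a - b) for a, b in zip(calibration, reference)) / len(reference)
    return max(0.0, 1.0 - mae / 10.0)

# Well-calibrated reviewers count more toward the peer grade; poorly calibrated ones count less.
weights = {name: competency_weight(r["calibration"], instructor_calibration)
           for name, r in reviewers.items()}
weighted_grade = (sum(weights[n] * reviewers[n]["peer_rating"] for n in reviewers)
                  / sum(weights.values()))

for name, w in weights.items():
    print(f"{name}: weight {w:.2f}")
print(f"Calibration-weighted peer grade: {weighted_grade:.1f} / 10")
```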
A study by Pelaez (2002, p. 174) demonstrated how this peer review process helps students to improve their academic performance: “Results show that, when undergraduate non-science majors write about problem-based learning assignments followed by anonymous peer review, they perform better than with didactic lectures followed by group work.”
Other web-based peer review systems include Kritik (2022) and Peergrade (2022). Kritik is an online peer assessment platform that focuses on learning by teaching. By using the application, “students who teach what they’ve learned go on to show higher levels of understanding and knowledge retention” (home page). Peergrade is also an online platform used to facilitate peer feedback sessions with students. Two research studies support the application’s approach to peer feedback (Price et al., 2016; Sanchez et al., 2017). Both studies demonstrated that students who engaged in peer feedback activities performed better on subsequent tests and writing assignments than students who did not participate in such activities.
Labatorials
In our Introduction, we referred to Gierdowski et al.’s (2020) study indicating that students want to continue face-to-face classes more than any other learning environment, with a majority preferring either completely or mostly F2F classes. To support a blended approach, Pelletier et al. (2021) describe how higher education institutions have begun to make major investments in classroom redesign for collaborative learning. One example is redesigning large lecture halls for labatorials (Sobhanzadeh & Zizler, 2021).
Typically, undergraduate courses in the natural sciences consist of lectures delivered by a tenured faculty member in a large hall with laboratory and tutorial sessions facilitated by graduate students. A common complaint from undergraduate students is the lack of alignment and clear connection between lectures and laboratory sessions. They also complain about the individualistic, formulaic, and repetitive nature of the laboratory assignments. To overcome these issues, the Department of Physics at the University of Calgary developed a modified labatorial approach (Ahrensmeier et al., 2009). Labatorials combine elements of both lab experiments and tutorials in order to allow students to develop their conceptual understanding of fundamental physics concepts through group-based problem solving and self-driven experimentation.
Labatorials are driven by a core experiment (or set of experiments). Students are asked to make predictions about the outcome, perform the experiment, collect data, and interpret the results (Kalman et al., 2020). Students might be given direct instructions for some experimental parts of the lab, whereas for other parts they might be asked to design their own simple protocols for investigating the concept at hand. Labatorials focus on key physics concepts and encourage students to present and share their ideas with one another. After performing the experiments, they discuss whether or not the results support their hypotheses. There are typically three to six checkpoints in each labatorial to encourage ongoing interaction between the students and the teaching team, consisting of faculty members and graduate students. Each time the students reach a checkpoint, they review the answers with the teaching team. All students in one group must have the same answers. If the answer to a question is wrong, or students are not proceeding in the right direction, then the teaching team directs them to find the correct answer by themselves, exploring and discussing alternative ideas.
In a labatorial, students can work collaboratively at circular tables with whiteboards and projection screens on the walls. The whiteboards can be used for collaborative problem solving, and the projection screens can be used to display student work to the entire class. For more information on classroom redesign for blended learning, we recommend the Western Active Learning Spaces website (University of Western Ontario, 2022).
Teacher Assessment: After an F2F Session
Teacher assessment practices in higher education are often limited to high-stakes summative assessment activities such as midterm and final examinations (Boud, 2000). The role of a teacher in a Community of Inquiry is to provide ongoing and meaningful assessment feedback in order to help students develop the necessary metacognitive skills and strategies to take responsibility for their own learning.
Video Feedback
In a blended environment, there are a variety of digital technologies that a teacher can use to provide diagnostic, formative, and summative assessments to students in a Community of Inquiry. For example, teachers can use collaborative writing tools such as Google Docs (Google, 2022a) to provide formative assessment feedback at checkpoints or milestones for individual or group projects. This approach allows students to receive teacher feedback throughout the process of constructing the project rather than just focusing on summative assessment feedback on the final product.
In addition, teachers can use digital video to provide assessment feedback. Ryan (2021) has published a paper describing how video feedback can be used to support the socio-emotional aspects of blended and online learning. She recommends the following key design considerations for creating video feedback comments in order to bolster socio-emotional outcomes for students.
- Different from text-based feedback. Video feedback can and should feature messages qualitatively different from text-based feedback. Content analysis conducted by Borup et al. (2015) showed that text-based feedback tended to feature comments that highlighted specific strengths, weaknesses, and areas for improvement in relation to the task. In contrast, video feedback more frequently included general and specific praise for the students’ work as well as comments aimed at strengthening the relationship between teacher and student (e.g., use of the student’s name). As shown in the broader feedback and blended learning literature, both praise and relational comments are useful for improving social presence, strengthening feelings of trust, and helping students to feel supported and motivated (Plante & Asselin, 2014; Yang & Carless, 2013).
- Time-sensitive nature of video feedback. Borup et al. (2015) and others (e.g., Crow & Murray, 2020) argue that it is important for teachers to foster a sense of community and belonging in the first few weeks of the semester. Furthermore, students can obtain a greater sense of value, support, and social presence when feedback on assessment tasks is provided in a timely manner (Crow & Murray, 2020; Plante & Asselin, 2014).
- Video feedback can be more effective for certain types of students. For example, students who are generally moderate-to-high achievers but have performed poorly on an assessment task during the COVID-19 pandemic (presumably because of health or well-being issues) can benefit from the more personalized and supportive style of communication that video feedback affords. In these circumstances, Borup et al. (2015) argue that it is important for teachers to be highly cognizant of keeping their body language, expression, and tone of voice positive so as not to convey unintentionally information that could be interpreted as negative or discouraging.
Caldwell (2021) reverses this process and requires students to create videos to demonstrate their conceptual understanding of first-year physics principles. This process begins early in the course (e.g., the first question on the first assignment). She asks students to upload short videos introducing themselves to the class. They are free to share whatever information they wish in these videos. Their purpose is to build community and have students learn the process of recording and uploading a video to the learning management system. Caldwell indicates that she posts an instructional video to demonstrate the upload process, but invariably there are some technical issues for students to work out (e.g., certain devices can upload videos only by using certain web browsers). She recommends that it is best to sort out these issues early in the course, before students become too busy with the actual course work.
Then, throughout her first-year physics course, Caldwell (2021) has one video explanation question on each assignment. She emphasizes that the types of questions that she assigns for video explanation are not typical calculation problems from a physics textbook but instead focus on explaining the steps behind the calculation. In one example, she asks students to do the following.
- List the forces that act on the person, and classify them as conservative or non-conservative.
- Explain whether mechanical energy is conserved based on the specific criteria I provide.
- Show how trigonometry can be used to calculate the relevant distances.
- Find the minimum speed the person must be running to make it across the ravine.
Flipgrid (2022) is a free web-based video discussion platform from Microsoft that can be particularly effective for this type of assessment process. This application allows teachers or students to post a discussion prompt, and then other students can respond with short videos.
Students share their voices by recording short videos with Flipgrid’s camera. Flipgrid contains a variety of tools for students to tell their stories, including text, emoji, inking, boards, screen recording, the ability to upload clips, and more.
Community Expert Assessment Activities
Digital technologies also provide opportunities for students to receive assessment feedback from experts in a field of study. This can be accomplished through the use of blogs, videos, and professional learning plans.
Blogs
For example, students are often required to critique academic articles on key concepts and findings in a disciplinary field of study. Students often find this type of assignment tedious since the articles can be challenging to read and they receive limited feedback on their critiques. To improve the effectiveness of this critique assignment, we recommend that teachers work in partnership with their instructional librarians first to identify seminal articles and second to contact the authors of the articles and invite them to review the students’ critiques.
We then recommend the following process to guide the article critique and review. Be sure to provide students with a clear rationale for the assignment as well as samples of previous work, an assessment rubric, and a guide to writing an article critique; the University of Arizona (2022) has an excellent student guide to writing an article critique. We also recommend providing students with an opportunity to use the assessment rubric to review collaboratively previous work so that they are clear about the expectations of the assignment.
- 1. Initial article critique: Students use a blogging application such as Blogger (2022) or WordPress (2022) to compose the first draft of the critique.
- 2. Peer review: The teacher then provides time during the synchronous session for these drafts to be peer reviewed by critical friends.
- 3. Author review: The student revises the critique based upon the peer review, and then the author of the article is invited to provide an expert review of the student’s work.
- 4. Teacher review: The student makes final revisions to the critique based upon the author’s review and submits the final work to the teacher for a summative assessment.
Students who have completed an article critique with author feedback commented that publishing their critiques and receiving expert feedback made the task much more authentic and engaging.
Videos
Community experts can also provide assessment feedback on individual or group presentations through the use of web-based video technologies. These types of presentations can be video-recorded and either streamed live (e.g., Vimeo, 2022) or posted to a video-sharing site such as YouTube (2022). The community experts can then provide assessment feedback to the students in either synchronous (e.g., real-time audio) or asynchronous formats (e.g., online discussion forums).
E-Portfolios
Teachers are also encouraged to take a portfolio approach to assessment in their courses and programs. This involves students receiving peer, self-, and teacher assessments on their course assignments. For example, students complete the first draft of a course assignment and post it to their e-portfolios. The critical friends then review the assignments and provide peer feedback. Students use this feedback to improve the quality of their work, and they have the opportunity for external experts to provide them with additional feedback. The students then complete self-assessments to ensure that they have met all the objectives of the course assignment. Finally, the teacher reviews the course assignment and provides summative assessment feedback.
Various e-portfolio tools can support this process, ranging from commercial applications such as Weebly (2022) and Wix (2022) to the free Google Sites tool (Google, 2022e). The teacher education program at Mount Royal University (MRU) uses e-portfolios to support a professional learning plan process modelled on the Alberta Teachers Association (2022) professional growth plan. An MRU teacher candidate’s professional learning plan is the primary space in which a student can document and articulate learning related to the MRU Bachelor of Education program competencies (planning, facilitating, assessing, inclusive environment, professional roles and responsibilities). This is the space in which teacher candidates can develop and communicate self-understanding and create learning goals that allow them to be successful in their future teaching practice.
In addition, Mitchell et al. (2021) have documented how an e-portfolio approach to assessment can greatly enhance student employability. They conducted a research study at Griffith University in Australia in which students indicated that e-portfolios could have a positive impact on their employability by allowing them to demonstrate their learning as well as assisting them in their professional development. The students in this study further stated that the most beneficial aspects of e-portfolios related to employability were the ability to collate experiences and assessments, provide evidence of competency development, and facilitate reflection in order to help them develop a “growth mindset” (Dweck, 2006).
Classroom Assessment Techniques
As we indicated in Chapter 2, it is important that the design and organization of a blended course are flexible in order to meet the emerging learning needs and interests of students throughout the semester. In addition, students in a teacher education study wanted to “provide teachers with more feedback on their assignments and teaching practice throughout the semester, not just at the end—assessment should be a two-way conversation between students and instructors” (Vaughan, 2010, p. 22). We recommend the use of classroom assessment techniques as a method for teachers to receive ongoing feedback from students about the course design.
The Classroom Assessment Techniques (CATs) approach was developed by Angelo and Cross (1993). CATs are simple, non-graded, anonymous activities designed to allow students to provide faculty members with feedback about the teaching-learning process in a course. The following box highlights some of the most common CATs.
Course Evaluation
Although the terms “assessment” and “evaluation” occasionally have been used synonymously, there is an important difference (Garrison, 2017). Assessment is associated with determining students’ learning processes and outcomes, whereas evaluation is used to refer to the act of comparing a unit, course, or program with some set of performance or outcome criteria.
Evaluation begins by determining the strategic intent of the course or program. In this regard, clearly identifying why a particular course has been redesigned for blended learning is crucial to evaluating its effectiveness. Traditionally, distance education courses have been offered in order to increase access to educational opportunities by spanning geographic or temporal distances. Although access is a component of blended learning, added value speaks to issues of quality reflected by collaborative thinking and learning experiences. To evaluate this type of blended learning experience, we recommend using the Community of Inquiry (Garrison et al., 2022) and Shared Metacognition (Garrison & Akyol, 2015a) surveys.
With regard to the CoI survey, Arbaugh et al. (2008) conducted a multi-institutional study to develop and validate this survey instrument, which operationalizes Garrison et al.’s (2000) CoI framework. The results of their research suggest that the instrument is a valid, reliable, and efficient measure of the dimensions of social presence and cognitive presence, thereby providing additional support for the validity of the CoI framework in constructing effective online learning environments. Although factor analysis supported the idea of teaching presence as a construct, it also suggested that the construct consisted of two factors: one related to course design and organization, the other related to instructor behaviour during the course.
The CoI survey has been used extensively to evaluate the social presence, cognitive presence, and aspects of teaching presence for numerous online and blended courses. More information on the CoI survey can be found on the CoI website (Garrison et al., 2022), and the survey questions are listed in Appendix D. In addition, Garrison and Akyol (2015a) conducted a research study to develop and validate a shared metacognitive construct and questionnaire for use in collaborative learning environments. The questionnaire was developed using the CoI framework as a theoretical guide and tested by applying qualitative research techniques. The results indicate that, in order to understand better the structure and dynamics of metacognition in emerging collaborative learning environments, we must go beyond individual approaches to learning and consider metacognition in terms of complementary self-regulation and co-regulation that integrate individual and shared regulation.
We recommend shared metacognition as an area of study for those interested in thinking and learning collaboratively in blended courses and programs. Shared metacognition provides the construct to study how students manage discourse actively and construct meaning responsibly. The construct provides a solid theoretical foundation and an instrument to explore the complex transaction of a Community of Inquiry. Additional information on the shared metacognition survey is contained in a blog post by Garrison (2019), and the survey questions can be found in Appendix E.
We encourage the combined use of the shared metacognition and CoI surveys in order to study the design, facilitation, and direction of shared metacognition in a blended Community of Inquiry. An example of how to do this is provided in a study by Vaughan and Lee Wah (2020), who examined the development of shared metacognition in a blended teacher education course.
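For those planning to administer the CoI survey as part of a course evaluation, the scoring step can be as simple as computing a mean subscale score for each presence from the Likert responses. The sketch below shows this in Python; the item identifiers, the grouping of three items per presence, and the 1-5 scale are placeholder assumptions for illustration, whereas the actual 34-item instrument and its groupings are listed in Appendix D and on the CoI website (Garrison et al., 2022).

```python
from statistics import mean

# Assumed item groupings: in the actual CoI survey each presence has its own
# block of Likert items; the item IDs below are placeholders, not the real items.
subscales = {
    "teaching_presence":  ["tp1", "tp2", "tp3"],
    "social_presence":    ["sp1", "sp2", "sp3"],
    "cognitive_presence": ["cp1", "cp2", "cp3"],
}

# Hypothetical responses on a 1-5 Likert scale (1 = strongly disagree, 5 = strongly agree).
respondents = [
    {"tp1": 4, "tp2": 5, "tp3": 4, "sp1": 3, "sp2": 4, "sp3": 4, "cp1": 5, "cp2": 4, "cp3": 4},
    {"tp1": 3, "tp2": 4, "tp3": 4, "sp1": 4, "sp2": 3, "sp3": 3, "cp1": 4, "cp2": 4, "cp3": 5},
]

# Mean subscale score per presence across all respondents.
for presence, items in subscales.items():
    scores = [r[item] for r in respondents for item in items]
    print(f"{presence}: {mean(scores):.2f}")
```

The same approach could be extended to the shared metacognition questionnaire (Appendix E) by adding assumed subscales for self-regulation and co-regulation of cognition.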
Conclusion
In a blended Community of Inquiry, self-, peer, and teacher assessments should be an integrated process rather than a series of isolated events in order to help all students develop shared metacognitive awareness and strategies. For example, a student in a teacher education study commented that “I used self-reflection for checking my work and making sure I had everything I needed. I used peer-review for a different perspective on my work, and I used teacher feedback to understand how I could improve my work” (Vaughan, 2010, p. 23). Another student in the study stated that “self-reflection showed me what I liked about my work and what needed to be improved, peer feedback gave comments on what could be done better and then teacher feedback gave ideas on how the assignment could be fixed up to get a better mark” (p. 23).
In addition, these students stressed how a blended CoI framework supported by digital technologies helped them to integrate these three forms of assessment into a triad approach (see Figure 5.1).
This triad approach involves students using rubrics, blogs, and online quizzes to provide themselves with self-reflection and feedback on their course assignments. They can then receive further peer feedback on their course work via digital technologies such as classroom response systems and calibrated peer review tools. Finally, teachers and, in some cases, community experts can review students’ e-portfolios and use digital video technologies to observe student performance, diagnose student misconceptions, and provide additional assessment feedback.
An international call for a greater focus on assessment for learning, rather than on assessment for just measurement and accountability of student performance, is well documented in the educational research literature (Yeh, 2009). The use of digital technologies to support student assessment in a blended Community of Inquiry can lead to Hattie’s (2009, p. 238) vision of a visible teaching and learning framework in which “teachers SEE learning through the eyes of their students and students SEE themselves as their own teachers.”
Figure 5.1
Using Digital Technologies to Support a Triad Approach to Assessment in a Blended Course