8 | Flexible, Flipped, and Blended

Technology and New Possibilities in Learning and Assessment

This book has, so far, focused on issues of assessment specifically arising from online learning. We have enthusiastically discussed authentic learning and authentic assessment, learner engagement, and how the affordances of online learning and a constructivist philosophy can help to create and implement authentic learning experiences.

Since the advent of computer-assisted learning and Web-based learning, the possibilities arising from the integration of technology with learning continue to grow. Among these possibilities, which will be defined in the sections that follow, blended learning has generated renewed excitement; flexible learning, whose definition is somewhat confusing, represents another “blend” of learning environments; and, more recently, flipped learning has represented a noticeable change in traditional ways of learning. This chapter explores these variations and considers their impact on the shape of authentic and engaging assessment, as we have defined it thus far in this book.

Blended Learning

Blended learning is not new. As far back as 1935, a triad of blackboard, TV, and transmitter was used to reach and teach students outside the classroom (Roscorla, 2014). More recently, Garrison and Vaughan (2008) defined blended learning as “the organic integration of thoughtfully selected and complementary face-to-face and online approaches and technologies” (p. 148). Simply put, this means that some form of blending traditional face-to-face classroom time and activities with online activities will take place. The forms and methods of blending are limitless. Interestingly, this hybrid methodology has, for the first time, brought the potential of online teaching and learning to the attention of traditional educators. It may be that their exposure to the power of online learning through blended learning is helping to fuel online learning’s rise in popularity and legitimacy. As Rees (2015) recently opined,

Old style faculty will become dinosaurs whether they deserve to be or not. That’s why I recently made a commitment to start teaching online, beginning in the fall of 2016. My plan is to create a rigorous and engaging online U.S. history survey course while I am still in a position to dictate terms. After all, if I create a respectable, popular class that takes advantage of the Internet to do things that can’t be done in person, then it will be harder for future online courses at my university (or elsewhere for that matter) to fail to live up to that example. (Emphasis added.)

At its most basic level, blended learning seeks to take advantage of the Internet to do things that cannot be done in person. Google Docs, WordPress blogs, and wikis (inside or outside the LMS) make learning more collaborative and the process of learning more visible to the instructor than ever before. The authors of this book agree with Hill (2016) that online learning has firmly entered the mainstream, despite lingering criticisms from those weighing in on the practice of online learning who lack experience with it. Online learning in contexts that have been well planned and designed with the rigour, engagement, and affordances mentioned by Rees above is especially appreciated by learners. Blended learning is sharing this level of acceptance.

Horn and Staker (2012) claim that the new flexibility offered by blended learning increases the need for formative assessment, which is useful in gauging student progress and guiding the choice of what learning and activities are best to follow. Our view is that the importance of formative assessment has always been there, and that meaningful interaction, engagement, and authenticity among learners have always been integral to active learning, but that the transparency and accessibility of online learning have revealed to new adopters an enormous and previously untapped resource: the learners themselves. Supporting this view, Garrison and Vaughan (2008) stress that blended learning does not simply represent an “add-on” to traditional teaching strategies, but rather that the redesign of teaching and learning that blends technologies with face-to-face encounters creates new possibilities for learning. Garrison and Vaughan call this outcome “multiplicative” (p. 8); they present three notions underlying blended learning: the thoughtful integration of face-to-face and online learning, the fundamental rethinking of course design to maximize student engagement, and the restructuring and replacing of traditional class contact hours.

We would argue that blended learning has fuelled (and continues to fuel) a pedagogical renaissance. (We refrain from using the term “revolution.”) The heated debates that sought to prove (or disprove) the effectiveness of blended learning or face-to-face learning have led to a deeper exploration of teaching practice, as well as a more complex understanding of how students learn. It has been argued by Jensen, Kummer, and Godoy (2015), for example, that the improvements of a flipped classroom may simply be the fruits of active learning. The flipped classroom is a basic blend of activities, where recorded lectures, instructional videos, animations, or other learning objects and resources are accessed “outside” of class. When students are “inside” the classroom, their online learning experiences are complemented by active pedagogical approaches, such as problem-based learning, case studies, and peer interaction. Blended learning’s success has certainly caused fundamental rethinking of course design and what teaching and learning can look like.

In their quasi-experimental study, Jensen, Kummer, and Godoy (2015) looked at unit exams, homework assignments, final exams, and student attitudes to compare non-flipped and flipped sections of a course, and they determined that “the flipped classroom does not result in higher learning gains or better attitudes over the non-flipped classroom when both utilize an active-learning, constructivist approach” (emphasis added). The effectiveness of active learning has been established in studies like this and in four major meta-analyses (Freeman et al., 2014; Hake, 1998; Michael, 2006; Prince, 2004); these findings suggest that the key advantage of blended learning is that it enables students to be active and engaged in various ways, depending on the context and design of the course.

The sought-after result of blended learning is fuller, more resourceful, more integrative learning. For those traditional practitioners whose teaching experiences have been restricted to time-and-place in a bricks-and-mortar environment, the possibilities offered by blended learning are many, and Garrison and Vaughan’s (2008) book is a good resource. However, for those practitioners who have already transitioned to online teaching and learning, we imagine (and hope) that the results of such a transition have been achieved through the adoption of a constructivist and engaged pedagogy that mixes content attainment with concept application in authentic learning environments.

McElhone (2015) presents yet another lens through which to consider blended learning, based on the question, “what are we blending?” She contests the line drawn between physical and virtual presence and suggests that the notion of “blend” should be opened up to include a smorgasbord of learning activities: formal, informal, Web-based, people-centred. Although her corporate background prompts her to include webinars and online modules as “training” strategies, she also focuses on relationships (mentoring, feedback, and collaboration) and real-life applications (practical and applied assessment) in order to make learning better. Elsewhere, Roscorla (2014), investigating the excitement around blended learning, suggests that perhaps it will expedite learner completion and success. On that note, prior learning assessment (see Chapter 5) has also been presented as a route to faster completion, which is one of its most obvious attractions to learners.

What blended learning has done, most of all, is cause a fundamental rethinking of learning-as-delivery and of content as an item for mechanistic transfer. It has also refocused discussion on how to design learning environments so that students are offered the best opportunities for engaging at a deep level with the content and with others in the learning environment. As Feldstein and Hill (2016) observe, when “content broadcast” (content attainment) is moved out of the classroom, it provides more space to “allow the teacher to observe the students’ work in digital products, so there is more room to coach students” (p. 26) in the concept-application phase of learning.

We are concerned here only with the intersection of blended learning and assessment, and we find ourselves landing squarely back on the concept of authenticity—the creation of meaningful real-life opportunities with which learners can enthusiastically engage. We have described, in prior chapters, authenticity and its place in assessment and the teaching-learning dynamic. However blended-learning design is to be incorporated into teaching and learning, the relationship of authenticity to assessment, and vice versa, does not change. To the constructivist, that meld is still the answer.


Figure 8.1. A Triad Approach to Assessment. The Community of Inquiry’s use of digital technologies to support a triad approach to assessment. Source: Vaughan, N. D., Cleveland-Innes, M., & Garrison, D. R. (2013, pp. 94-95).

To this end, we consider the well-known Community of Inquiry (CoI) model, which offers a triad approach to assessment while also providing a guide for planning assessments by identifying the most beneficial technologies and interactive platforms to use. Like Webb and Gibson (2015) and Herrington, Oliver, and Reeves (2006), the CoI model suggests that digital technologies can provide fertile ground for assessing higher-order skills, supporting interactions, and generating synergies among learners and instructors. From a blended-learning advocacy stance, the triad model reiterates and supports the strengths of online learning, where technology becomes “an enabler for increasing meaningful personal contact” (Feldstein & Hill, 2016).

The triad approach highlights several technological tools that can be used for self-, peer-, and instructor-led assessments. Below are some examples of how these tools have been used to create dynamic and robust assessment approaches.

“Clickers”

Student response systems (SRS, also known as clickers) offer a powerful and flexible tool for teaching and learning. They can be used peripherally or they can take a central role during class, but even with minimal use, significant differences have been found in final grades between sections of the same course (Caldwell, 2007; Lantz, 2010). In our experience, clickers can be used to increase student participation, integrate with other commercially provided learning resources to provide useful feedback to instructors on student learning, and increase opportunities for fun through formative assessment practices. In a small community college, instructors used clickers in various ways: for comprehensive review at the end of a module, or to have students “get their head in the game” and activate prior learning at the beginning of class with 5 to 10 short questions. Other instructors used them with icebreaker activities to solicit student opinions on controversial topics to which they might be reluctant to admit without the veil of anonymity, as a way to launch discussion. Others used them for team-based games in which students competed for top points. Students enjoyed the increased interactivity, and faculty felt more able to assess the learning of the entire class rather than random, individual students. Based on student content attainment, faculty developed remedial lectures on specific elements and were able to reflect on whether or not their instructional approaches were successful. Through think-pair-share (or teach-test-reteach), the opportunities for peer instruction, where students teach each other concepts through discussion, are endless. The relevance of clicker use to blended learning centres on its ability to create dynamic interaction and assessment potential among learners in the classroom-based part of the “blend.”

Wikis

Wikis can be used to assist in group assessment. Group assessment, as was discussed in Chapter 5, is fraught with difficulties. It is often necessary because of the massification and “scalability” of higher education, where some undergraduate courses have upward of 1,000 students. In a pedagogical sense, group assessment has become popular because the collaborative nature of the assessment task provides the opportunity for learners to develop interpersonal skills such as leadership and communication.


Figure 8.2. An example of a group wiki from Athabasca University’s EdD program.

Wikis enable group collaboration, but they also afford the instructor the ability to track individual participation and contributions. Eddy and Lawrence (2013) suggest that wikis are excellent platforms for authentic assessment because they provide a user-friendly Web space that supports collaborative authorship and learner choice; they also meet the calls for authenticity and accountability by allowing students to focus on “real world tasks.” Wikis are a flexible tool for crafting assessment tasks. They provide a way to document student learning because they reduce the limits and time constraints of collaboration while promoting a fundamental rethinking of what it means to be “face-to-face” and “in the classroom.”


Figure 8.3. Peer Review Tools. Peer review tools, such as VoiceThread, enable learners to engage with grading criteria and revise their products based on peer feedback.

Blogs

The term “blog” was coined in the late 1990s (as a short form of “weblog”), and blogs are now one of the older forms of user-generated content. Blogs are so “old school” that they have given way to other social media platforms, such as Twitter (micro-blogging). On this topic, the popular blogger Seth Godin (2016) has suggested that Google and Facebook no longer want people to read blogs because they are free, uncensored, and exist outside their walled gardens. Still, blogs remain an effective strategy for engaging students and fostering collective and reflective learning (Mansouri & Piki, 2016). While students primarily use blogs for entertainment and personal fulfillment, it has been suggested that “we would be more effective teachers if we helped students solve their real-world personal, professional, and academic writing problems by building on existing practices, including the flexible use of the composing technologies that permeate their everyday lives” (Moore et al., 2016). Blogging offers a powerful option for formative assessment, whether it takes place within the closed environment of the LMS or out on the open Web, as a way to facilitate collaborative learning, reflection, and social support. As Garrison and Akyol (2009) suggest, this venerable Web 2.0 tool goes beyond simple interaction, giving learners the opportunity to engage in purposeful discourse to construct meaning, share meaning, and consolidate understanding at both personal and conceptual levels. Blogs may produce the greatest benefits for students who are shy, introverted, or naturally reflective (Ciampa & Gallagher, 2015).

This brief look at clickers, wikis, and blogs highlights the dynamic and flexible digital tools that can be used to create sophisticated blended-learning environments. Such environments enable faculty and learners to engage with, critically monitor, and assess the quality of learning taking place in any form of educational provision. These standard tools are now being complemented by other social networks, such as Twitter, to expand assessment approaches. Twitter has been used to enhance social presence in large-enrolment online courses (Junco, Heiberger, & Loken, 2011; Rohr & Costello, 2015; Rohr, Costello, & Hawkins, 2015) or to increase concept retention, course enjoyment, and student achievement by creating avenues for student engagement that supersede those available via traditional classroom activities. Others (Barnard, 2016) have taken advantage of Twitter’s strict character limit to teach creative writing and storytelling skills. As mentioned earlier, the affordances of these tools provide limitless opportunities to construct more creative assessment approaches.

Flexible Learning

Flexible learning is a generous, convenient label for the many blends of learning affordances now available thanks to infusions of technology over recent years. The key to optimizing the use of such flexibility is understanding its myriad dimensions and contributing factors. Issues and factors to be considered in making course design decisions include the following:

  • the “anytime, anywhere, anyhow” element
  • appropriate application of technology and delivery mode
  • appropriate choice of pedagogy—instructional approaches, resources
  • timing, logistics, organization

But what about assessment? Several resources on flexible learning that we accessed for research on this topic, from universities, governments, and private enterprise, did not mention assessment at all. Burge, Gibson, and Gibson’s (2011) edited volume on flexible learning did not mention assessment either, although their concluding chapter re-emphasized the pervasive institutional issues with which we are all familiar: resistance to change, change dynamics, organizational structure and policy, costs, and “false promises and false prophets” (p. 336). Quite possibly, this predictable absence is because the need to assess learning competently remains the same regardless of the mode of delivery. However, putting assessment into practice often yields a wide range of strategies and instruments.

In their discussion of flexibility in distance learning and through their critical examination of the “goodness” that flexible learning connotes, Burge et al. (2011) emphasized the potential of technology, the many different flexibility advantages, the learner-centredness of flexible learning, and the variety of strategies it allows. Flexible-learning approaches can, theoretically, lead to greater authenticity and engagement for learners. Whether these promises are fulfilled depends upon factors both individual and institutional, as we have discussed in previous chapters of this book. On its own, flexible learning holds no guarantees.

One of the most promising aspects of flexible learning and assessment is contained within the concept of differentiated assessment: “An educational structure that seeks to address differences among students by providing flexibility in the levels of knowledge acquisition, skills development, and types of assessment undertaken by students” (Varsavsky & Rayner, 2013, p. 790), rather than taking a “middle of the cohort” teaching approach. There are significant challenges and opportunities in giving students choice in how they will provide evidence of learning. Again, massification, scalability, reduced funding, and the difficulty of developing rubrics that can be fairly and meaningfully applied in a high-choice, high-variability environment are all significant challenges. However, differentiated approaches to assessment in higher education potentially provide a genuine framework for student learning because they recognize that learning is, by its very nature, an individual experience (Varsavsky & Rayner, 2013).

Differentiated assessment also reflects sound adult learning principles, such as giving students, particularly adult students, control over how they will be assessed. By participating in the creation of their assessments, learners become co-creators of their experience, which affords them the opportunity to generate ideas about the most meaningful, valuable, and hands-on ways to demonstrate their learning.

The Flipped Classroom

The recent flipped classroom phenomenon is a subset of flexible learning. Based on constructivist principles, the flipped classroom allocates the “delivery” of material to home-time, using computer-based instructional methods, and reserves precious face-to-face classroom time for interactive activities such as discussion, presentations, debates, group work, and role play. In this way, learners presumably come to class forearmed with the knowledge required to participate in engaging activities that are both more authentic and more interesting than the lecture-style classroom. Brame (2013) posits that the flipped classroom permits Bloom’s lower levels of cognition to occur outside of class while the higher levels of cognition, such as creativity, are practised within the classroom and the group environment. The flipped classroom presents interesting assessment opportunities that, while not new in themselves, could potentially expand the application and usage of authentic assessments.

Flipping the classroom theoretically permits learners to come to learning activities prepared to engage. A cynic might say that such a situation presumes an idealized, mature learning environment in which learners are present by choice, well prepared, and eager to participate. Cynicism aside, with prepared and eager learners, a flipped classroom provides opportunities for authentic and engaging assessment.

Out-of-Classroom Preparation

As with any “seek and find” or research activity, out-of-classroom preparation can be more exciting than sitting in the lecture hall. That learners can connect with each other using Web-based technologies increases the potential for collaboration, community, camaraderie, and excitement. It allows for the design of activities that, while doable, are challenging and fun. As always, the type and level of activity will depend on the learners and on their maturity and readiness levels. Novice learners require more guidance on activities and generally work at a lower cognitive level on Bloom’s Taxonomy (1956), while more advanced or mature learners can more easily engage in activities that require higher-level skills, such as synthesis, evaluation, or creativity. Activities that engage learners while they prepare their out-of-classroom materials might include treasure hunts, contests, dyad or paired activities, self-quizzes, worksheets, the preparation of a document that records cognitive process (such as a mind map), or a small media presentation using any one of the freely available software products.

Should this work be graded? The decision to award grades for a piece of work in a course is tied to the learning cycle, to the materials under discussion, and to the fit of those materials and that topic into the course design. Given these factors, there is no one-size-fits-all answer to this question. That said, a course-wide participation grade could contain marks for completing at-home preparation successfully. On a broader scale, a larger assignment, perhaps spread over a longer period of time, could stand alone as a graded activity. Whatever the choice, instructors must ensure that their grading or weighting scheme is fair to learners, especially if learners are preparing the work “cold,” without any previous exposure to the topic, prior to arriving at the in-class session.

While pre-class tests to measure the degree of preparation that occurred outside the classroom are suggested in some literature (Bishop & Verleger, 2013), adult educators do not favour tests for their mature students; if a testing occasion is warranted, a self-quiz is preferred.

In the Flipped Classroom

Once the prepared learners are in the face-to-face classroom, time can be spent in active engagement, learners-with-learners. Apart from perhaps an initial “consolidation of learning” period that could be conducted in small groups, with large-group reporting, or as a large group, the instructor is freed from lecturing or otherwise presenting the material. In-class activities can now “focus on higher level learning objectives” (Brame, 2013). Learners can engage with the teacher and each other in applying their new knowledge or moving further, with the teacher’s encouragement and guidance, in their understanding of it (Reigeluth, 2012). During class time, small group discussion can work to promote deeper understanding; guided large-group discussion can achieve the same results. The keys to successful in-class activity are interaction and the application and synthesis of knowledge.

Assessment, from this point on, can employ whatever instruments best suit the learning design and its intended outcomes. The assessment possibilities arising from the out-of-classroom portion of the course are plentiful, and learners’ responses to flipping the classroom have been positive (Bishop & Verleger, 2013). Our own experience confirms this positive response. In an ongoing research project at a small community college, when students were interviewed about the benefits of a flipped classroom environment, they responded positively to the sense of control they felt they had in the instruction process (Openo, 2017). They studied in the kitchen, in cafés, or on their beds. With video, they could rewind (or fast-forward) parts of the lecture or re-watch sections they were unclear about. They had time to process, reflect, and develop more meaningful questions. They also had control over their energy. As one student put it, “In [a] lecture, by the end of class, I didn’t want to ask questions because I just want to get out of there, and I know I didn’t absorb half of it because I’m tired.” But with the flipped classroom, “I can do it when I know I am going to be able to focus.” This is a perfect expression of what Anderson (2004) calls temporal freedom, and if students can interact with lectures multiple times and at times when they feel ready for learning, this affordance will likely be evidenced in their assessments.

Social Media

We have alluded to social media and its contributions to online learning throughout this book. There is no denying the power and influence of the inclusion of social media in its many forms. Twitter, YouTube, blogs, wikis: all contribute to increased connectivity among learners, flexibility of modality, and, perhaps we can assume, renewed vitality. Social media enhances the distribution and sharing of ideas and opinions, material, resources, and experiences. It’s a brave new connected, colourful, and vibrant learning world!

What does such innovation mean for assessment in the social media–rich learning environment? We would suggest, at the heart of it, in the most basic and pedagogical sense, nothing. It is worth quoting Salmon’s unvarnished and economical adage: “Don’t ask what the technology can do for you, rather [ask] what the pedagogy needs” (cited in JISC, 2010, p. 16). Our stance does not seek to diminish the contributions of social media, nor does it deny the fact that many new doors and windows have been opened for learners. Rather, we maintain that the essence of assessment does not change: Whatever the medium and regardless of the reasons, assessment and evaluation exist and will likely continue to do so.

Paulin and Gilbert (2016), in their chapter on social media and learning in the new Sage Handbook of E-Learning Research, approached the issue of assessment but didn’t quite land on it. In fairness, it may be that they didn’t intend to, as the word assessment is not used, but rather the words measuring and measurement are. Perhaps the issue in their approach lies in this sentence: “Since learning through social media transcends the boundaries of traditional learning platforms and environments, it can be difficult to measure if and how students learn in these environments” (p. 362). Did the measurement of learning change when blackboards became whiteboards? When videoconferencing tried to replace audio-teleconferencing? It did not. The notion of assessing or evaluating learners’ handling of material and experiences presented to them has remained constant. But yes, the nature and type of potential assessment and evaluation instruments, tools, and processes has evolved and grown.

Where we once had just paper, we now have many types of computer-based and computer-generated assessment tools. (As we pointed out in the Preface, these tools were not intended to be a topic in this book.) Where learners once toiled with pens to write on that paper, they can now rap out their work on their computers and upload the results to a course dropbox. More creatively, they can create a Powtoon cartoon or a Prezi display. Their instructors can enter their comments on their documents, on their slides, on their animations, or use voice technology. The variety of possibilities is vast and, most would agree, wonderful. But that’s just the technology of social media learning; Salmon, as well as all constructivist researchers and these writers, would have you consider the pedagogy.

In his book on constraint and control in online learning, Dron (2007) deftly analyzes the impact of social software on the structure and dynamics of online learning from a theoretical and design point of view. While he doesn’t address the issue of assessment, some of his observations and conclusions can provide a foundation for that discussion. For example, Clay Shirky’s first definition of social software is “software that treats groups as first-class objects in the system” (as cited in Allen, 2004). This is a useful definition for our work on assessment, as it implies the issue of agency in the structure-agency relationship.

Clarifying the meanings of “social software” and “social media” is difficult even for those technologists who work in this field (INCLUSO, n.d.). The INCLUSO (Social Software for the Social Inclusion of Marginalised Youngsters) manual suggests that social media is a more up-to-date term than social software, but points out that, at the end of the day, we are still referring to a set of tools: “Tools like email and message boards are decades old . . . today these tools are supplemented by such software as blogs, chat and social network(ing) sites.” We would like to add this nuance to the separation of the two terms: The term social media adds a dimension of agency, because “media” implies active use of tools, whereas “social software” implies structure, because “software” is just that—structure. Shirky’s (2009) emphasis on the importance of groups supports the notion of agency.

Correspondingly, Dron (2007) notes the difficulty in controlling “effective and coherent sequence[s]” (p. 295) in learning environments based on social software. In other words, in a McLuhanesque fashion, the user is becoming the message. In his list presenting the “argument of this book in 10 stages” (p. 311), Dron’s numbers 8 and 10 speak effectively to a discussion of assessment:

8. In some senses, social software allows learners to engage in a form of dialogue as well as to benefit from the resulting structure, thereby providing both high and low transactional distance not at the same time, but under the continuous control of the learners.

10. In such an environment, the self-organising feedback loop derived from the collective intelligence of its inhabitants offers the potential for a qualitatively different and (probably) better learning experience.

Dron is emphasizing the key importance of learners as agents of their own learning and placing into their hands the potential for self-direction and organization within an online learning experience. This concept supports the provision of assessment vehicles that encourage, permit, and capture the energy generated by the “collective intelligence” and autonomy of the group. Taken together, these ideas constitute a strong endorsement of constructivism and authentic assessment.

The tools and the theory are in place for informed instructors to not only support constructive knowledge-building efforts but also to provide creative assessments. We have discussed and described what some of those assessment instruments can look like throughout this book.

Below, to recap briefly, are some suggested activities for assessment that make use of social media.

Project Work

Whether undertaken individually or in groups, project work that sets learners loose on a relevant topic with well-defined guidelines and criteria will yield creative results. Encourage the use of social media tools, but caution learners not to lose themselves in technology. Guidelines should provide instruction on how to stay on task. As a guard against the illicit “borrowing” of material from, or the wholesale import of, an OER, ensure that the project topic is closely tied to course themes or directions, and include requisite nods to learners’ individual experiences or practices in the field.

Trawling the Internet

We all hunt through the Internet for our resources, whether we are writing a student paper or a book. How many of us remember those afternoons spent in the university library thumbing through endless index cards? With a clearly outlined task and appropriate guidelines, send learners on a directed quest to satisfy a question, a challenge, a chronology, a debate—whatever is most appropriate and pedagogically complete. Specify that sources be meticulously recorded.

Critiquing the Internet

Encourage learners’ critical thinking skills by creating the modern-day equivalent of a bibliography or article-review. In addition to locating and critiquing topic-relevant material, learners will experience the range of quality—and non-quality—of Internet resources. A group-based, guided debrief on the results can provide another opportunity for the use of software tools.

Blogs

Many of your learners are already accomplished bloggers. Create opportunities for them to apply their blogging skills to an assignment for private, class-only viewing—or not. Some will detect a delicacy here around learner-generated “public pedagogy” and the wisdom of its use. Are your learners ready to go live with their efforts and conclusions? These questions require individual attention from the instructor depending on situation and circumstance. For further discussion on the related topics of peer-assessment and self-assessment, see Chapters 6 and 9. (When using blogs for either summative or formative assessment, ensure that you can access the blog without extensive password control. Over time, needing password permission can become very labour intensive. Also ensure that you can easily engage with the blog to include your feedback or commentary.)

Role Play and Simulations

In traditional classrooms, it was easy to stage a role play as a learning experience or assignment. However, online role play, before the advent of social media, was reduced to the use of somewhat tedious scripts and text. The value of role play has blossomed with the new tools available, as learners can enhance their roles with animation, video clips, avatars, and Second Life, to name a few. As always, there will be learners who are so technologically proficient and keen that it is easy for them to forget the pedagogical intent of the assignment. Keep this front and centre with careful instructions and an appropriate grading scheme.

Assessing Social Media–Based Assignments

Grading assignments that are completed using social media raises new issues. Social media, as outlined above, open the door to much more variety and creativity than was possible on paper. In a similar fashion, the criteria used to grade in the past demand an update. While it is possible to continue to award points for content and mechanics, doing so is no longer sufficient. Word length? Page length? In most social media instances, these criteria are no longer relevant. Some possible grading criteria are suggested below.

Content

In most cases, content should continue to be a primary focus for learners. Achieving some level of mastery of required content remains a basic premise of learning and assessment.

Mechanics

At the university level, the need to express oneself clearly and articulately is key. We suggest, however, that there are certain places in social-media–based assignments where an emphasis on mechanics is less important than the message or content of the presentation. In role play, for example, when an avatar or character is “in character,” mechanics could be dismissed in the quest for authenticity. Similarly, in an informally constructed blog designed to capture in-the-moment inspiration, the emphasis on grammar and sentence structure could be decreased or even omitted. As always, the criteria for grading should be made explicit.

Choice of Medium

In a media world full of choice, should the choice of medium constitute a factor in the grade? Does an animated cartoon presentation carry the message more effectively than a PowerPoint presentation? Is audience participation or enthusiasm a factor? If feedback, response, or follow-up is a part of the project, perhaps it is. These decisions, in choosing a designated medium or a range of options as vehicles for presentation, are as important for instructors as they are for learners.

Creativity

Choice of medium could be included in the broader criterion of creativity. If the course is not a course in communication or design, should creativity matter? Again, consider the purpose of the assignment. Is it intended to evoke the interest and participation of others? Does the knowledge sharing and knowledge building among members that may result from the project hinge on “pulling them in?” Sometimes, we have seen the notion of creativity slightly disguised as “audience participation.” Often, in these instances, participation from the group is also measured in terms of the interest sparked and the quality of response to the presentation or topic. Although this appears to be a valid criterion for assessment, grading learners’ creativity remains a murky area. Whatever the case, criteria guidelines should be clearly defined.

A Feedback Loop

In certain subjects or areas of learning, reflective thinking is more prized and valuable than in others. In the humanities and social sciences, areas that we have focused on in this book, there are many opportunities to encourage this important skill. As a part of the assessment activities, and falling into the purview of self-assessment or peer-assessment, learners might reflect upon the experience of having done what they’ve done, having presented what they’ve presented, having moderated what they’ve moderated or facilitated online, and so on. Again, this is Schön’s (1983) reflection-on-action in action, giving the learner the opportunity to sit back and think about the experience that has just passed. While the grading of such an activity often presents another area of contention, these writers believe that, with careful instruction, these reflections can be valid occasions for evaluation, especially since they follow on the heels of, and relate to, a presentation or document that has already been reviewed. That said, the grade weight of such a piece should be kept low, although a low weight may be perceived as not worth the effort by either learners or instructor.

The New Normal?

Guri-Rosenblit (2014) points out that “one of the main conclusions of the OECD [Organization for Economic Cooperation and Development, 2005] study was that most higher education institutions use online teaching to enhance classroom encounters rather than to adopt a distance teaching pedagogy” (p. 109, emphasis added). She concluded that the historically clear and distinct functions of distance education providers were no longer clear and distinct; we now see that any bricks-and-mortar institution can extend itself to students outside its on-site campus and offer online courses in some format to learners. Has this become the “new normal” over the past decade, since the OECD study? As more and more “blended learning” research is undertaken, its implementation appears to be comfortably embedded within global online teaching practice.

As far back as 2006, researchers at Educause’s Center for Applied Research (Albrecht, 2006) wrote that,

the battles over the efficacy of residential learning versus online learning have disappeared with the quiet adoption of blended learning. While an occasional attack surfaces, the attraction of mixed delivery mechanisms has led to implementation, often without transcripting and virtually without announcement. (p. 2)

Looking to the future, we echo Moebs’ (2013) conclusion that,

blended learning combines mobile learning and (flipped) classroom sessions. The terms m-learning, e-learning, and blended learning have disappeared. People are learning with whatever device is available and the learning systems are flexible enough to allow everybody to start at the appropriate level. (p. 52)

Similarly, Ontario’s distance education consortium, Contact North (2012), when looking at the long-term strategic perspectives among Ontario college and university presidents, suggests that blended learning works because it is evolving naturally, because students like and demand it, and because faculty members find that it enhances, rather than replaces, their traditional teaching methods. In fact, Contact North suggests that “it is highly likely that such terms as ‘online,’ ‘hybrid,’ or ‘blended’ learning will disappear in the near future as the technology becomes so integrated into teaching and learning that it is taken for granted” (p. 10).

Concluding Thoughts

For those already engaged in fully online teaching and learning, the recently heralded innovations of blending and flipping do not bring much to the table. We have already adjusted to and accommodated distributed learning and all that it means in terms of interaction, activity, engagement, and assessment. Having celebrated the arrival of techniques that can be helpful stepping stones for traditional classroom educators, our literature is now breaking down label-barriers and embracing a mash-up of learning approaches utilizing wikis, blogs, Twitter, and differentiated assessment. Flexibility is key. Assessment, as already noted, is a critically important, vital part of the learning cycle. Instructors and instructional designers need to be clear about the assessment choices they make: Do those choices align with the learning outcomes and with their teaching philosophy? Is the choice of an essay, a portfolio, a wiki, or a blog the most appropriate assessment method? Do these affordances enable increasingly meaningful personal and interpersonal contact, or greater learner choice and control? Are they selected to reduce grading loads, which many deem a perfectly reasonable factor upon which to base an assessment decision? While there is no recipe for the perfect blend, these considerations are necessary as one rethinks assessment strategies in a blended learning environment.
