6 | The Age of “Open”
Alternative Assessments, Flexible Learning, Badges, and Accreditation
The ancient curse “may you live in interesting times” aptly describes the dilemma that has confronted institutions of higher learning in the past decade or so. Not only have the stalwart halls of traditional learning faced, and adapted to, online learning and virtual learning environments since the late 1980s; they have also, more recently, been bombarded with the antithesis of higher education: courses purporting to enrol thousands of students who will pay no tuition and never step inside a classroom. MOOCs (Massive Open Online Courses), pushing at the walls of higher education in unprecedented fashion, have been offered by institutions of the highest calibre (Stanford, MIT, the University of Toronto, Harvard, and many others) and have sparked new levels of discussion centring on the roles of education and educators, while also uncovering the philosophies underlying higher education, including those that guide assessment and evaluation.
As a part of the ongoing MOOC discussion, Davidson (2014) outlined her view of their limitations: They are not going to remedy higher education’s problems, which she describes—referring to the United States—as a “product of 50 years of neoliberalism, both the actual defunding of public higher education by state legislatures and the magical thinking that corporate administrators can run universities more cost-effectively than faculty members.”
Davidson appears to be correct. The largest study so far into who enrols in MOOCs, and why, suggests that participants are mostly well-educated, employed individuals in developed countries and that “the individuals whom the MOOC revolution is supposed to help the most—those without access to higher education in developing countries—are underrepresented among early adopters” (Christensen et al., 2013, p. 1). The magical thinking that MOOCs, or any technological solution, can resolve existing power differentials does not address the fact that educational opportunity, often based on standardized assessments, maintains and reinforces an unequal playing field.
In a society where people start out unequal, educational opportunity—especially postsecondary educational opportunity dictated by test scores and grades—can become a dodge, a way of laundering the found money that comes with being born into the right bank account or the right race. As social science has proven, the meritocratic basis of education is, at least in part, a social construct. Education is itself stratified by race and class, ultimately creating a hierarchy of educational inclusion that confers public and private power over others. A vote is not worth nearly as much personal power over others as is a college degree leading to a well-paid professional occupation. Testing and all the other metrics that allocate educational opportunity are better social indicators of our collective failure to provide equal opportunity than measures of innate individual merit or deservedness. (Carnevale, 2016, p. 22, emphasis added)
MOOCs, with improvements in instructional design and a gain in recognition, may eventually take an important place in the educational landscape. In an odd twist, MOOC participants tend to be educators themselves, satisfying their curiosity and gaining new ideas for instruction (Newton, 2015), which may in part explain the high attrition rate from MOOCs: participants leave a course once they have obtained what they “need.” But the long-term possibility that MOOCs will deliver meaningful educational opportunities to the least privileged, such as English language learners or the digitally unprepared, seems remote. This is also the studied opinion of Sir John Daniel, who, in his 2012 report written in South Korea, concluded that “the discourse about MOOCs is overloaded with hype and myth while the reality is shot through with paradoxes and contradictions.”
All this being said, however, two things have become very clear during the turmoil that MOOCs have wrought: Firstly, the pedagogical nature of distance and online learning has not been well understood within higher education; and secondly, the role of assessment within learning remains paramount and complicated. Issues of assessment, in fact, have been the source of much of the furor around the potential acceptance of MOOCs into university programs. This chapter will consider the role of assessment in open and alternative learning. We have already considered traditional assessment and its role in the learning cycle. Has that changed? Should it change? Will the champions of the Open Educational Resources (OER) movement drive their mantra of openness right through established assessment protocols? UNESCO has now appointed three OER Chairs worldwide. How will this political—and seemingly trending—endorsement of openness affect higher education’s approach to assessment?
Disruption and Assessment in the Age of “Open”
MOOCs represent only one aspect of the shifts to “open” that are ongoing in today’s higher education. Other shifts are also under way, all signalling changes that affect the business of institutional learning at a macro level beyond our focus here on assessment. In the discussion that follows, we consider several elements relating to assessment as it is disrupted or challenged in light of the shift toward “open.”
The broader trend underpinning these shifts in educational culture can be labelled “popularism.” In a recent essay, Shea (2014) refers to the currently popular TED (Technology, Entertainment, and Design) talks, noting that today’s academic “celebrity” is viewed on TED stages in 18-minute bites, “upending traditional hierarchies of academic visibility and helping to change which ideas gain purchase in the public discourse.” One of the results of this popularization is the “flattening” or downsizing of ideas; the result is a “quick-hit, name-branded, business-friendly kind of self-helpish insight” (Shea, 2014). Consequently, he argues, the decline of theory in favour of a new “life-hacking culture” has enhanced society’s potential for productivity, achievement, and quick gain. Similarly, Rose describes our current society as running at “twitch-speed” (2013, p. 8). It should be noted, though, that this trend is not disparaged by all; indeed, it is welcomed by some who are weary of philosophical long-windedness. The salient point here is that notions of assessment within higher education are assimilated and affected by the spillover of these broader trends. An examination of how the resultant innovations bear on evaluation and assessment strategies in higher education follows.
MOOCs and OER
These closely related phenomena have presented twin challenges to higher education’s traditional operations. As discussed above, the sudden emergence of MOOCs and the immense controversy (and confusion) generated by them have initiated intense scrutiny, not just of “how” to deliver courses but also of the place and meaning of higher education in our lives. What is the current state of MOOCs? At the time of writing, their juggernaut has lost some of its thrust, but in its wake there remains both some activity and new research into the nature of the phenomenon (Kolowich, 2014). In a study cited by Kolowich (2014), results showed, among other things, that students learned as well in MOOC courses as in traditional courses, and that the professoriate could either save time, by using already-developed or accessible resources (an OER or an entire MOOC), or expend more time, by having to adapt materials to existing courses. What was not raised in the research was the issue of assessment and credentialization—the difficulty inherent in assessing performance in MOOCs and the larger difficulty of accrediting MOOC learning toward a formal credential. Assessment, and subsequent recognition of performance, has already been identified in higher education as the highest hurdle to cross (Conrad, 2013; Friesen & Wihak, 2013).
OER offer a narrower and more readily understood focus for educational change. The term refers to educational resources (modules, lessons, curricula, video, podcasts, graphics, animation, and, potentially, assessment designs) that are made widely available by their creators. The intent of OER advocates is to facilitate the introduction of multifaceted resources into course production in financially feasible ways. Weller (2011) acknowledges that freely available academic content not only removes many types of restrictions and limitations on the accessibility of resources but also supports constructivist thought in that knowledge is truly constructed rather than simply accessed, passed on, or delivered.
Flattened Hierarchies, Crowd-Sourcing, and Crowd-Teaching
Together, these relatively recent “facts” of educational life are subsets of the wave of openness that attempts to flatten the traditional hierarchical structures of the Ivory Tower. In one way or another, each asks the question: Why should one person be privileged or stand above others? In academia, the “standing above” usually equates to assessment. Why should you, for example, as a journal editor, tell me that my work is acceptable or not acceptable? Why should you, as teacher, tell me what I should learn from this source or about this topic? Clearly, evaluation and assessment as they have been known and observed in higher education are facing pressure in light of the open movement.
Peer Assessment and Peer Evaluation
Of these new developments in assessment techniques, peer assessment may be the oldest and the most commonly used. No doubt most educators have used some sort of peer assessment in their classes at some time. Examples are numerous: Let’s have a debate and the class will vote on which side wins; let’s do a role-play and let the class decide who has been most effective at portraying a historical figure; after our projects have each been presented, each class member will complete an “evaluation” form outlining the strengths and weaknesses, the highs and lows, of each presentation.
The jury is out on peer evaluation and assessment in the literature, although good practice recognizes that there is pedagogical value in engaging learners in self- and peer-assessment at a formative level. Race (2001) outlined several aspects of the positive contributions of peer assessment to learning:
- Students are doing it already in different ways.
- Students will get the chance to find out more about assessment culture.
- Lecturers have less time to assess than before.
- Learning is enhanced when students have contributed to their marking criteria.
- Assessing is a good way to achieve deep learning.
- Students can learn from the successes of others.
- Students can learn from others’ mistakes. (pp. 94–95)
Until the emergence of MOOCs, summative peer assessment or evaluation was rare in traditional classrooms. However, given MOOCs’ reliance on learners to initiate and lead activities among their thousands of enrolees, peer assessment became a MOOC staple, for example, in Coursera courses (Cronenweth, 2012). Sometimes software-assisted, sometimes permitting written commentary, peer assessment, in our opinion, has limited value and limited reliability when used summatively (at which point it becomes evaluation). Its value and reliability are further limited in courses intended to provide credit toward a postsecondary credential. In stating this, we do not disparage teaching and learning philosophies that value group work and learner collaboration, or constructivist principles that value learners’ prior experience and encourage learners to bring that knowledge forward in the creation of new, shared knowledge within the group; it is often noted that “peer assessment is an important part of a shift towards more participatory forms of learning in our schools and universities” (Kollar & Fischer, 2010). However, encouraging student engagement and participation—and thereby, one hopes, cognitive growth—within an instructor-designed or instructor-led framework is a far cry from instructor abnegation. Brookfield (1990b) famously stated that teachers or instructors had a moral obligation to give something of value to their learners. Whether the role is labelled teaching, instructing, or even facilitating learning, Brookfield cautioned against its devolving to that of gatekeeper.
On the topic of instructor engagement, and referring to peer assessment, Cronenweth (2012) asked: “This form of ‘crowd-sourced commentary’ helps create a learning community—so why not build the community even further by empowering learners to evaluate one another?” The abbreviated answer, from our point of view, is simply that learners are learners because they do not yet have the scope or depth of relevant knowledge that teachers do. In many cases, as well, they lack the skills to manage the classroom and its processes—whether face-to-face or online—effectively. For this reason, instructors of non-MOOC online classes—the kind we were used to before the MOOC juggernaut—know that they must be regularly present in order to prevent “crowd-sourced commentary” from going off the rails or from careening down a tangential or erroneous path instigated by a poorly informed, however well-intentioned, member of the group. As for MOOCs, the fly in their ointment has been, from their genesis, finding a way to conduct rigorous and reliable assessments and evaluations that are acceptable to receiving institutions, so that completed MOOC courses can earn formal credit.
Open Access Journals
Open access journals, while no longer the new kids on the block, owe their existence and rising popularity to the open movement. Currently, most such journals make their content freely accessible through electronic sites but still engage in the traditional processes of blind peer review of submitted manuscripts. Research shows that the peer-review aspect of open access journals is still highly valued by the academic community (Edgar & Willinsky, 2010). Assessment in the traditional sense is still at play here, although the breadth of the process, of necessity, makes it more democratic and less hierarchical than, for example, a typical teaching situation in which one instructor/professor/teacher is responsible for allocating grades to many learners. But calls for changes to journal processes are being heard. A number of open access venues, including arXiv and PLOS, have experimented with alternatives to traditional peer review, in an effort to determine whether an open peer-review process can sustain the appropriate level of academic integrity and rigour. These experiments seek to answer a question posed by Chris Anderson (2006): “Who are the peers in peer review?” Evaluating the pros and cons of an open review process, Tom DeCoursey (2006) writes,
Reviewers can give their expert opinion, which might be honest, tainted by emotion, or even an overt attempt to suppress the manuscript. The authors can rebut these arguments. But it is the editors who must determine whether the reviewer has noble motives.
And then what? After a period of robust “he said/she said” volleying of opinion, an evaluation must be made of the work’s worth for publication. Someone must do this! DeCoursey (2006) ultimately admits that the summative process of an open-style review can add value to an article (optimally, that is the function of the current peer-review process) but concedes that perhaps there is a place for anonymity in the case of rejected articles. DeCoursey concludes that this might be for the best, reinforcing the notion that assessment as evaluation is a difficult task, even a moral responsibility—a sometimes unpleasant task that must reside with someone.
Social Media and Crowd-Sourcing
Years ago, while teaching English at the college level and using Orwell’s 1984 as an example, we used to awaken the critical consciousness of young learners to the dangers of “crowd-speak”—that is, to the danger of being coerced or unduly influenced by what others were doing or saying. In an ironic reversal of social mores, technology and social media have created an environment where seeking out, and making possible, the opinions and input of others is not only acceptable but often demanded. There is some justification for this trend, as shown in many instances where Twitter, for example, has been an effective medium for disseminating useful information and for helping individuals engage in and contribute to useful social and community endeavours.
However, for purposes of academic learning, does social media’s “even playing field” contribute to a quality, or even acceptable, learning experience? The answer to this is a work in progress. Wikipedia is a good example. Current opinion has it that Wikipedia, once scorned by academics, is becoming more acceptable. While the watchword on its use remains, sensibly, “use critical judgment,” the take-up of Wikipedia by reputable academics has increased in recent years. And while its lack of peer review is still cited as a factor against its credibility, the argument once again comes down to one’s view of openness:
The openness of Wikipedia is instructive in another way: by clicking on tabs that appear on every page, a user can easily review the history of any article as well as contributors’ ongoing discussion of and sometimes fierce debates around its content, which offer useful insights into the practices and standards of the community that is responsible for creating that entry in Wikipedia. (In some cases, Wikipedia articles start with initial contributions by passionate amateurs, followed by contributions from professional scholars/researchers who weigh in on the “final” versions. Here is where the contested part of the material becomes most usefully evident.) In this open environment, both the content and the process by which it is created are equally visible, thereby enabling a new kind of critical reading—almost a new form of literacy—that invites the reader to join in the consideration of what information is reliable and/or important. (Brown & Adler, 2008)
In similar fashion, some journals in the sciences are practising a new type of openness whereby all reviews and inputs to a published piece are captured as that piece potentially grows and changes shape in response to feedback from readers. No more blind reviews: Authors and readers are exposed to each other. What will be the effect of this level of transparency on the traditionally “closed” world of academe and traditional forms of assessment and evaluation?
The Changing Face of Assessment in the Open World
Sometimes we don’t find indications of change where we think we might. A scan of the presentations at a recent international open learning conference promoting “open education for a multicultural world” (OCW Consortium Global Conference, 2014) uncovered only one mention of the word “assessment”—and that was a reference to needs assessment, which is not the type of assessment around which the central issue of assessing open learning pivots. (At this conference, the pedagogical track comprised 24 presentations, yet it appears that none tackled the very real issue of assessment within open learning. Policy, research and technology, and knowledge dissemination made up the other tracks.)
Nevertheless, the forces outlined in previous sections continue to beat on the walls of the academy. As higher education’s online world changes or contemplates change, so also does the nature of assessment within that world. The discussion above identified several global forces within education’s purview that demand changes to traditional areas and processes. Some basic tenets regarding assessment and evaluation remain clear: (a) Formal, credential-granting institutions will continue to zealously protect their evaluation turf in the interests of quality and reputation; (b) the trend toward openness and popularization of “voice”—a trend both instigated and fanned by the online world—will continue to find new manifestations in all corners of education; (c) the associated trend of constructivism as a learning approach in Westernized education supports the notion of “voice” and speaks to the interest and value of authenticity—in learning and in assessment.
Given that online learning occurs at a distance, with learners separated from teachers while engaged in learning processes that are mediated through technology, how can viable assessment and evaluation take place? From a constructivist, learner-centred approach, the answer is authenticity. In higher education, authenticity is defined in assessment as “connected to adults’ life circumstances, frames of reference, and values” (Wlodkowski, 2008, p. 313) and is prized as a key factor in good evaluation and assessment protocols.
The application of authentic assessment and evaluation strategies to online learning environments can serve as a salient factor in distinguishing face-to-face from distance assessment strategies. While we have previously stated that there should be no philosophical difference in the role of assessment between online and face-to-face environments, the “I can’t see you” factor mentioned above is troublesome to many educators. Consider that the most common concern raised about assessment in distance learning is whether or not it permits an increased degree of academic dishonesty. Old, tired tests and quizzes, poorly formatted multiple-choice tests, and “rote” testing techniques—in other words, inauthentic assessment materials that do not ask the learner to relate in a personal or sustained fashion to the material at hand—are more likely to encourage and enable cheating, whether in face-to-face or distance assessment. As Ferriman (2013) points out,
anyone who wants to cheat is going to find a way to do so, be it for an online course or in a normal classroom setting. While it cannot be completely controlled, you do have some strategies available to you that decrease the likelihood of cheating—or at least discourage it by making life a bit more difficult.
While Ferriman’s “making life a bit more difficult” refers to a plethora of technological tools and software designed to oversee computerized test taking, it should also refer to the development of authentic assessment instruments. Of course, disciplines differ in what is possible and practical for making assessment less fallible. The sciences are more likely to adopt technical solutions to issues of academic honesty; the social sciences and humanities will, or should, more often turn to authentic assessments. That said, attempts to automate assessment and evaluation instruments in the social sciences and humanities continue, although recent efforts by software companies to replicate the “human” touch have been widely criticized for their poor performance and susceptibility to being compromised by savvy learners (Sands, 2014).
UNESCO’s 2016 “Advisory Statement” classified academic cheating as corruption (Daniel, 2016). In the statement, Sir John Daniel outlined several areas of higher education facing integrity issues; in addition to assessment, these included research, credentials, publications, teaching, and higher education in general. Interestingly, the assessment issues highlighted in the report focus largely on traditional assessments, such as tests, and the misconduct that can occur in that type of assessment. There is no mention of authenticity.
Creating Authenticity in Assessment
There are several ways to create authenticity in learning and assessment. Reflecting the meaning of authentic assessment—that which is “connected to adults’ life circumstances, frames of reference, and values” (Wlodkowski, 2008, p. 313)—educators can create assessment and evaluation tools that offer learners the opportunity to relate their learning to real-life subjects and real-life problems.
Service learning, for example, offers a type of learning that is located in real time and is seen by some to provide a solution to perceived weaknesses in today’s educational systems (Bok, 2006). Although the opportunities offered by service learning do not specifically cross over to online learning, the philosophy and practice could easily be incorporated into online courses or programs.
Digital badges offer another approach to authentic assessment, one firmly rooted in the online world. The creation and implementation of digital badges as a means of assessment was initiated by Mozilla in 2010, likely from a commercial orientation. Since that time, however, in tandem with the many other open initiatives at play in educational and private venues, their application to learning has continued to gain momentum. As US Secretary of Education Arne Duncan outlined in a 2011 speech, quoted below, badges can represent engagement, collaboration, and inclusion; in short, badges can reflect authenticity in learning.
Badges can help engage students in learning, and broaden the avenues for learners of all ages to acquire and demonstrate—as well as document and display—their skills. . . . Badges can help speed the shift from credentials that simply measure seat time, to ones that more accurately measure competency . . . [a]nd badges can help account for formal and informal learning in a variety of settings. (Duncan, 2011)
Mozilla made its initial pitch for badges at the 10th International ePortfolio and Identity Conference in London in July 2012. The presentation focused on what Sullivan (2013, p. 1) describes as the “mundane uses” of digital badges as “motivational stickers for engagement and encouragement (such as recognition of signing into a homework help site for 30 days in a row).”6 However, as she goes on to acknowledge, badges also “have the potential for greater, extended use for individuals in multiple learning environments to create skill and knowledge portraits more comprehensive than a single letter grade or certificate can capture” (p. 2).
The future of digital badges is uncertain, but recent literature reinforces the claim that badges are clearly gaining momentum in learning communities across the globe (Lindstrom & Dyjur, 2017). If badges are gaining momentum and continue to do so, it is because their standards—such as the date of issue, the institution offering the badge, and the demonstration of learning outcomes—are transparent to all parties involved in the teaching and learning transaction, and because that transparency empowers learners with greater control in displaying their accomplishments digitally and in sharing their professional development with others (Lindstrom & Dyjur, 2017).
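To make this transparency concrete, consider what the machine-readable metadata behind a badge might look like. The sketch below is illustrative only—loosely modelled on the Mozilla Open Badges assertion format, with simplified, hypothetical field names, institution, and URLs—but it shows how the date of issue, the issuing institution, the assessed learning outcomes, and the learner’s evidence can all travel with the badge itself.

```typescript
// A minimal, illustrative sketch—not the official Open Badges schema.
// Field names are simplified, hypothetical stand-ins for the kinds of
// metadata that badge specifications make transparent.

interface BadgeAssertion {
  badgeName: string;  // what was earned
  issuer: string;     // the institution offering the badge
  issuedOn: string;   // the date of issue (ISO 8601)
  criteria: string;   // URL describing the learning outcomes assessed
  evidence?: string;  // optional URL of the learner's artifact (e.g., a video)
}

// A hypothetical earned badge: any third party viewing it can see when it
// was issued, by whom, against which outcomes, and on what evidence.
const collaborationBadge: BadgeAssertion = {
  badgeName: "Effective Online Collaboration",
  issuer: "Example University, Centre for Teaching and Learning",
  issuedOn: "2017-10-15",
  criteria: "https://example.edu/badges/collaboration/criteria",
  evidence: "https://example.edu/badges/collaboration/evidence/12345",
};
```

Because such metadata is bound to the badge itself rather than locked inside an institutional transcript, a prospective employer or receiving institution can inspect not only the credential but also the standards and evidence behind it.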
The opportunity exists for badges to find a place in postsecondary education and assessment practice because they break learning down into chunks, they require learners to demonstrate mastery of outcomes, and they possess some practical usefulness in capturing and reporting that elusive term—lifelong learning. In the study by Hensiek et al. (2017), digital badges and assessment guidelines were created for, and communicated to, learners in a hands-on undergraduate chemistry course. In the videos that learners submitted for assessment, students stated their names, showed their faces and hands, and then performed a task, such as providing a close-up shot of a calibration mark on lab equipment. Mid-semester examinations on how to use the equipment demonstrated that between 74% and 95% of students who received their laboratory badges answered laboratory-use questions correctly, and, at the same time, the department saved $3,200 in equipment costs—two very different ways of proving that students had more effectively mastered the learning outcomes of safe and effective use of lab equipment.
What is the lesson here? As the use of digital badges increases, it may become clear when and where they are most effective in influencing student engagement and motivation. Harmon and Copeland (2016) found that students in a public library management course were underwhelmed by the experience of badges and did not plan to pursue them as a form of professional development. This finding suggests that digital badges may not be effective as a simple “add-on” but may need to be incorporated into deeper and more fundamental changes to instructional design (Stetson-Tiligadas, 2017). Digital badges and badge taxonomies (McDaniel, 2016) possess great synergy with digital game-based learning environments and may contribute to authentic assessment opportunities through these media. Other research suggests that badging may be a significant predictor of student self-efficacy and, therefore, of learning performance (Yang, Quadir, & Chen, 2016), enabling instructors in online environments to readily identify which students have less self-efficacy and may require greater encouragement.
Casilli and Hickey (2016) put forth two strong arguments indicating that badges might become a more prominent feature on the assessment landscape. The first is that digital badges provide an opportunity for schools to generate more claims of student learning, with more evidence to support those claims. The second is that digital badges increase the transparency of assessment practice; through the transparency of badges—which includes metadata, assessments, and artifacts—the importance of conventional forms of recording learner performance that carry no supporting evidence of student learning, such as transcripts, may diminish. The badge advocates would have us think so, but the only thing we know about the future with certainty is that we are incapable of foreseeing it accurately. Over time, however, educators and those institutions willing to experiment with these bold concepts will move toward developing better badging systems—systems that are authentically meaningful and acceptable to educators, learners, and prospective employers.
Sullivan, too, sees digital badges as “part of this larger and changing learning and assessment ecosystem” (2013, p. 8) that this chapter has labelled the open movement. Emerging concepts of openness and the shift to e-learning in higher education have created a fertile environment for potential synergy between authenticity and assessment.
Concluding Thoughts
The discussion above has ranged across the current state of, and trends in, the online higher-education world, drawing on classic literature in the field in addition to late-breaking developments reported in several daily online publications. In keeping with the trend of “open” education, as well as with the rapidity of change in a networked world, such sources (for example, the Chronicle of Higher Education) are now becoming more acceptable to scholars, in much the same way that Wikipedia has slowly gained some credibility in the academy. On the subject of “open,” we cite here Biswas-Diener and Jhangiani (2017), editors of a recent publication on that topic, who wrote in their introduction,
Open education, open science, open access, and open pedagogy are new phenomena. They are imperfect and many challenges remain to be overcome. However, as the open movement matures and gains momentum, and as the questions it poses grow increasingly nuanced, the boundaries of the movement continue to expand. The open movement represents . . . an optimistic promise for the future as well as a myriad of practical tools and strategies for the present. (p. 6)
But where do we end up with assessment? These things we know for sure: Meaningful assessment and accompanying evaluation are critical parts of the learning cycle. The institution ultimately requires evaluation, but assessment is also valued for its contribution to the evolution and improvement of learning processes. The transition to more accessible and flexible open and distance learning—specifically, online learning—has given rise to both adapted and innovative assessment and evaluation tools. The “age of open” has challenged and tested traditional beliefs about teaching, learning, and assessment and continues to do so on many fronts.
__________
6 Mozilla’s Open Badge representatives Carla Casilli and Doug Belshaw, from the US and the UK, respectively, made the presentation. At the time, the overwhelming response from delegates to the notion was “A great idea . . . but how will it integrate with traditional assessment paradigms?” That is still the question.