From Assessment Strategies for Online Learning: Engagement and Authenticity
APPENDIX
Other Voices
Reflections from the Field
As part of this project, we asked colleagues from around the world to reflect on assessment in any manner they chose. Their submissions are presented in this appendix. As always in our humanities and social science academic world, opinions vary on what’s what, and the submissions touch on a variety of topics. This section provides insights on technologies, on strategies, and on philosophies. Knowing our colleagues, we see some of their research interests and personal missions reflected here! And with academic freedom, we recognize that there are many approaches to academic work and assessment. We sincerely thank these colleagues who shared their experience and insights with us.
Wholly Assessing Learning • Stephen Downes
I have often been in the position of assessing student work: as a trainer in a computing services company, as a newspaper editor, as a philosophy professor, as a night school teacher introducing people to the Internet, and most recently, as the instructor in multiple MOOCs.
As my experience in assessment grew and became more diverse, I found myself relying less and less on specific metrics and much more on an overall judgment of a person’s capacity. Even in technical disciplines like logic and computer science, I found I could easily see whether a person had an overall aptitude for the subject.
When it came to my experience with MOOCs, I was fortunate enough to be able to turn over the task of formal assessment to my colleagues and instead address the whole person, rather than their specific capacity to perform specific tasks. I could see their progress overall through a process of continual engagement—how they responded to me, how they responded to each other, how they interacted with the guests we had in the course.
By the end of the course, I could evaluate individuals through the process of having a short conversation. It’s like Sabine Hossenfelder (2016) said: “During a decade of education, we physicists learn more than the tools of the trade; we also learn the walk and talk of the community, shared through countless seminars and conferences, meetings, lectures and papers. After exchanging a few sentences, we can tell if you’re one of us. You can’t fake our community slang any more than you can fake a local accent in a foreign country.” Physicists recognize each other.
When I dispense with the metrics, it’s not as if I’m using nothing to assess expertise. I’m using everything to assess expertise. It may look from the outside like an off-the-cuff judgment, but it’s the result of a deep understanding of the discipline. When I assess a person, I’m looking at overall fluency, not completion of a certain set of metrics or competencies.
Stephen Downes is a specialist in learning technology, media, and theory for the Information and Communications Technology portfolio at the National Research Council of Canada in Moncton, New Brunswick, Canada.
Assessing Participation • Ellen Rose
For me, one of the most challenging aspects of online assessment is the issue of learner participation. When I first began teaching online graduate courses in education, I followed the lead of others, who typically gave a weighted mark for participation. For example, participation in online discussions and learning activities might be worth 20% of a student’s overall grade.
However, this commonplace practice soon came to seem problematic. I did not give a participation mark in my classroom-based graduate courses, so why was it necessary in online offerings? If it was intended as an incentive for participation, then surely it was not necessary for adult learners who wanted to learn and participate, whether in face-to-face or online courses, and who were able to make intelligent decisions about their learning. Further, it seemed to me that the participation mark was a poor substitute for providing a variety of engaging ways for learners to explore and respond to new content. Finally, I realized that the participation mark benefited those learners who were perhaps less thoughtful but more ready to get online and post their first thoughts, while disadvantaging those who preferred to take the time to reflect more deeply before going public with their perspectives. The result was often an overwhelming quantity of banal discussion posts offered as evidence of “participation.”
However, when I tried removing the participation mark in an online course, there was an immediate outcry from my students. They wanted their participation assessed, in recognition of the fact that participation in an online course is often challenging and time-consuming. A group activity that might take 20 minutes for five students sitting around a table can take an entire week for five students working together asynchronously.
My current approach, a kind of negotiation between these two competing perspectives, involves asking learners to assess their own participation. At the beginning of the course, we agree on a set of indicators, which becomes a rubric they can use to think critically about their own involvement and contributions. Importantly, the indicators emphasize quality rather than quantity.
Ellen Rose teaches in the Faculty of Education’s doctoral program at the University of New Brunswick, in Fredericton, New Brunswick, Canada.
Voice Marking • Terry Anderson
In the online graduate program that I teach in, most of the assessment consists of student essays, reports, or examination of artifacts such as business plans, e-portfolios, or learning design documentation. I’m not a fast typist, so I find the work of doing a thorough assessment and recommendations for improvement to be time consuming. Thus, I was intrigued when I heard Phil Ice present data on voice marking at a conference.
The technique I have settled on involves saving students’ work as PDF files and then using the annotation tools that are built into Adobe Acrobat to insert voice snippets. These then appear throughout the text as clickable speaker icons that the student listens to as they review my feedback and assessment. The feedback from students has been almost 100% positive. I know I can be much more relaxed using voice and let my “teaching presence” shine! I can also be more informal. But most importantly, this technique saves me time, and the amount of feedback provided is greatly expanded compared to what I used to provide by typing.
One lesson was learned, though, when I inserted a comment—“Praise the Lord, after two pages we finally come to the topic of the essay!” The student accused me of blasphemy!
I did have some technical issues. For some reason, on a Mac, one can’t reduce the recording quality of the internal microphone, so I purchased an external mic. The high fidelity doesn’t really make a difference to the voice quality, but the resulting PDF file can become huge, especially in longer documents such as complete theses. One can also use the built-in recording tools in Word, but, at least in past versions, the resulting file size prevented returning the document via email.
The research literature supports both the time saving and the almost universal enthusiasm from students when using this type of feedback. I am sure that there are now tools that allow video as well as voice, but once again file size may become problematic with more media in play. In any case, the students probably get quite enough of my “presence” (and bad haircut!) with just the voice.
Terry Anderson, Professor Emeritus, Athabasca University, and editor of the Issues in Distance Education series from Athabasca University Press.
Authentic Assessment Using Audio • Archie Zariski
The field of alternative dispute resolution has expanded significantly in the 21st century as courts, agencies, and corporations have embraced the practices of arbitration, conciliation, and mediation. Athabasca University’s course of the same name is intended to introduce students to the theory and practice of conflict and dispute resolution. Because mediation is widespread, the course attempts to give students some first-hand experience of what it is like to act as a mediator.
Mediators use a toolbox of skills and techniques to help disputing parties see common interests and pathways to agreement. Active listening and providing constructive feedback are two key capacities mediators must develop. In order to assess learning of such skills online, we adopted two oral assignments that reflected common practices of mediators: First, students submitted their presentation of a mediator’s opening statement to the parties in conflict to introduce them to the process; and second, students submitted their response as mediators to hearing one party’s statement of their grievance.
The free recording program Audacity is recommended to students for completing these assignments, but they may submit an audio file using whatever program they have. For the second assignment, I recorded a monologue of a disputant that is played for students and that they must then carefully summarize back as they would in a mediation conference. I believe these assignments represent authentic assessments that help students learn about the complex task of acting as a mediator.
Student reactions to the course have included: “I really enjoy the audio assignments”; “I liked the oral assignments. They required me to learn to do them, but also pushed me there, and to record the role of a mediator was very applicable to the learning objectives of this course”; and, “this course was relevant and well-structured to teach students, not just test their memorization.”
Archie Zariski, LLM, teaches legal studies at Athabasca University.
Negotiated Assessment • Beth Perry
I have come to the conclusion that the adult learners I teach in online courses value thorough and thoughtful feedback from the instructor, and, importantly, they also value evaluation. That is, they are motivated by reaching for (and achieving) a grade that they view as success. My teaching philosophy is founded on William Purkey’s (1992) invitational theory, which focuses on trust, respect, optimism, and intentionality. My approaches to student assessment and evaluation are structured to take these four pillars into consideration.
One graduate course I teach has content related to becoming an effective educator of health professionals. This course is structured to provide learners the opportunity (within parameters) to create their own course assignments and evaluation guidelines for those assignments. That is, students are required to produce one descriptive artifact and two analytic artifacts as evidence of achieving the course learning outcomes. The format of these artifacts can include written, video, or audio aspects, and the details of each artifact (including how it will be assessed) are negotiated with the instructor during the first weeks of the course.
The course incorporates invitational principles of respect, as learners have choice in how they will demonstrate their learning. Having choice respects individual differences and helps to personalize learning. Trust is enhanced as learners come to know that their viewpoint, preferences, and learning goals matter to the course instructor. The learning they have chosen matters to them and their personal and professional goals.
As the instructor, I have a one-to-one relationship with each learner as their plans for the assignment are determined in a contractual manner at the outset of the course. Assessment becomes an integrated and very intentional element of each artifact. Everything students do in this course becomes meaningful to them and their assignments, and the instructor’s contribution in terms of responses and feedback to those assignments is personalized.
The lesson learned from teaching this course is that students are often uncomfortable at first with the apparent openness of this assessment design. They want to know what they have to do to get a certain grade. Learners seem more familiar with assessment strategies where requirements are clearly stated by the instructor in the course syllabus. Instructional time spent one-to-one with learners helping them to embrace this approach is considerable. However, the benefits to the learners seem worth this investment.
As a side note, I find teaching a class with a more invitational and personalized assessment plan challenging, intellectually stimulating, and less monotonous than marking multiple papers on the same topic. As an added benefit I become well acquainted with each learner early in the course, and this relationship enhances our experience over the duration of the learning.
Beth Perry teaches online graduate and undergraduate courses in the Faculty of Health Disciplines at Athabasca University.
The Value of Feedback and Revision • Julie Shattuck
My approach to assessment is directly linked to my own experiences as an online student. What I appreciated the most from my professors was timely, detailed feedback that helped me move forward as a learner. It strikes me as ironic that, as a doctoral candidate, I received more help on how to improve as a writer than I had at any other time in my academic journey. My professors treated me as an apprentice writer and afforded me multiple opportunities to revise my work before it was assessed.
I pay my positive assessment experiences forward to my online students who are in their undergraduate English composition courses. I encourage collaborative assessment by first exposing students to the benefits of participating in a structured peer-review process. I give examples of feedback I’ve received on my writing, which shows students that there is no such thing as a perfect writer and also models what kind of feedback is most useful in helping a writer produce their best work. We discuss how hard it is to share writing, especially in the online environment, but by building a strong community of learners, I help students build trust in each other.
After students revise their peer-reviewed essays, they submit their work to me. I give students prompt feedback on a few major areas that would help them improve their writing without overwhelming them with every detail that could be revised. I encourage students to reflect on this feedback and use my comments and their reactions to the comments to revise their essays for regrading. Not all students take me up on this opportunity, and that’s fine, as not all students want to raise their initial grade. For me, giving students the opportunity to judge for themselves whether the grade they earn initially ends up being their final grade is an important step in helping students become able and confident writers.
Julie Shattuck teaches English courses both online and on-campus at Frederick Community College.
Guided Interactive Self-Assessment • Dianne Conrad
The master’s course that I teach at Athabasca University is housed in the Centre for Distance Education, although it is a course on adult and lifelong learning. As we have pointed out in this book, these two areas of education are very closely related, and so the placement of this course within the Centre for Distance Education makes sense. What is continually surprising to me is that the study of adult learning is quite new to many, if not most, of my students, who come to it from a variety of program areas.
The learning journeys of these “new-to-adult-education” learners were often steep climbs, and it became important to me to engage them in self-reflection along the way. Any graduate student should be able to crank out a research paper with good criteria and instructions. But what were they experiencing, and what did they themselves think about their own learning and engagement in this course? I introduced a learning journal in which they were invited to reflect on their learning, related to course discussions, materials, and interactions, all along the way. I emphasized that the journal’s focus was “you,” rather than an external topic. These documents turned out to be extremely long, intense, and revealing reads. Unfortunately, given the nature of the assignment, they were due at the end of the course, and I didn’t have the chance to learn from their experiences in time to respond meaningfully.
I therefore moved the submission date forward a couple of weeks. Taking two weeks away from their writing time did not shorten the length of the journals! But it gave me time to fully digest their reflections and respond to them using Track Changes on the file. (I will work on changing those responses to voice.) Finally, adopting Schön’s (1983) reflection-on-action notion, I ask learners to self-assess their course experience in three categories: reflections on a certain very powerful group experience; last-minute insight on their learning experience during the course; and five adjectives that describe them as learners. This short document (a two-pager) is due right at the end of the course and provides learners with another opportunity for self-reflection on their performance. It summarizes, affirms, and complements journal themes, while also giving me another chance to provide feedback or comment on what I have observed or read.
The response to this assignment, which constitutes a lot of writing by students, has been overwhelmingly positive. The work is generally of very high quality. As instructor, I hear what is really going on with them as learners, the highs and lows, and I can plainly see their struggles as they try to make sense of concepts and theories. These documents take a long time to read, but my own learning from them is time so well spent.
Implications of an Experiment Measuring Teacher Satisfaction against Performance • Rory McGreal
Many years ago, in an unnamed university, more than 100 teachers taught the same English as a Second Language program to more than 1,000 pre-university students. All teachers were constrained to teach the same content in a similar way. All students took exactly the same examination on completion of the full-year program. They also completed teacher evaluations.
As a research project, the teacher evaluations were divided into three groups: positive, negative, and other (those either not understood or neutral). The prediction was that those students with teachers who received a positive evaluation would do significantly better than those who were in classes with teachers who received a negative evaluation. Even a cursory analysis showed that there was no discernible difference between the two groups. A statistical analysis also showed no significant difference between the results of teachers with positive student evaluations and those with negative student evaluations.
This experience informs my position regarding the need for objective assessment. It is very easy for either students or teachers to mistakenly assume that learning has occurred. If students feel good about a teacher or the course, they may judge that they have mastered the content or skills being taught. Because the students feel good and are positive in class, the teacher, too, may assume that the concepts and/or skills have been mastered. That is why, in my opinion, objective tests of achievement are an important if not essential part of the learning process, at least in formal situations. Of course, learners can acquire knowledge and skills on their own, but at some point this needs to be tested for confirmation. Tests may be informal or formal.
Rory McGreal is the UNESCO/Commonwealth of Learning/International Council for Open and Distance Education Chairholder in Open Educational Resources and professor at Athabasca University.
E-Portfolios and Journals as Reflective Tools for Assessment • Lisa Marie Blaschke
In recent years, I’ve moved away from typical assessment measures, such as the written essay or summative quiz. Instead, I have started evaluating students along a continuum, applying formative assessment, which allows me to give feedback that supports students’ ongoing learning and development and their ability to improve on their work as they progress along the learning path. Some of the key learning tools within this context are the online e-portfolio and the reflective journal (or blog) that students keep as part of their e-portfolio, which showcases their acquired skills and competencies. With each set of learning-journal reflections that students post, I give them feedback on areas for further exploration and research and on ways in which they can improve their learning approach. The learning journal not only gives students an opportunity to reflect on what they have learned and how they have learned it but also gives me a window into the student experience: insight into what concepts or ideas they are struggling with, what motivates and inspires them (and what doesn’t), how they learn best, and where they need support. I like this kind of assessment because it gives students an opportunity to improve on what they have done and an ongoing investment in their work, and it allows me to coach them as they progress along the learning path.
Lisa Marie Blaschke is associate professor (adjunct) at the University of Maryland University College and program director of the Master of Distance Education and E-Learning program at Carl von Ossietzky University of Oldenburg.
Assessment in Online Learning Using Social Networks • Gürhan Durak
For about three years, I have been teaching my undergraduate and postgraduate courses either fully online or as hybrids. The course that I have taught fully online is called “Scientific Ethics.” We teach all our courses via Edmodo, a leader among social-learning networks, where learners can share any kind of resources, announcements, and files. Another advantage of this system is its mobile support: when learners install the mobile application on their smartphones, they can instantly see any course-related sharing. They are thus informed immediately of any resource shared, uploaded, or assigned via the system, without having to log in to check for new material. Learners also participate in live lessons as part of the course.
Within the scope of the course, on a weekly basis, I share resources related to the course videos. After studying with the help of these resources, learners respond on camera to the questions directed to them. These weekly submissions are evaluated, and feedback (in the form of homework scores and comments) is returned to learners before the following week. The course proceeds in this way until the midterm and end-of-year exams, which are administered using the exam application on Edmodo.
In these exams, we use various question types, such as open-ended, multiple choice, fill in the blank, and matching. During the exam, learners record themselves using a webcam (from a certain angle and at a certain distance). In this way, we can confirm that there is nobody else near the learner during the exam and that they are focused only on the exam. (In the past, when I gave an exam for an online course, I had doubts about whether the learners took the exam alone or with a companion; therefore, for the last two years, I have asked them to video-record the exam process.) At the end of the exam, learners upload the video recording and the exam file to the system and complete the process.
At the end of each academic term, I hold individual interviews with learners, who give positive feedback on the course format and the strategies used in teaching it. I feel that this method is useful for evaluation, which is one of the most problematic aspects of distance education, and I recommend it.
Gürhan Durak teaches undergraduate and master’s-level courses in the department of Computer Education and Instructional Technologies at Balıkesir University in Turkey.
Using Quizzes for Assessment in a Negotiation Massive Open Online Course • Noam Ebner
In negotiation courses, assessment is one area in which there is a great deal of variation between teachers. Teachers strive to apply multiple methods of assessment, many of which require a great deal of effort and time investment per student.
As I prepared to teach a MOOC on Negotiation, I realized that the scale of a MOOC requires a different approach to assessment. I was concerned about the validity of the two models MOOCs have converged around—automated multiple-choice quizzes graded by the system, and peer-assessment systems—for providing summative assessment. Quizzes test only for certain types of learning; succeeding can be more a matter of quiz skills than of content understanding. Peer-review relegates assessment to non-experts. Both systems have offered up new opportunities for plagiarism and cheating.
Upon reflection, I realized that my search for valid methods for summative feedback was based more on habit than on any real need. The not-for-credit nature of the MOOC actually eliminated the necessity of providing summative assessment. Only those students completing the course and requesting a certificate of completion (on average, about 5% of students who register for a MOOC) required any form of summative assessment to determine whether their work achieved a minimal performance bar. For the large majority of my students, I could focus on providing formative assessment. One element of this manifested itself in offering formative feedback somewhat camouflaged as an ordinarily summative assessment method, the multiple-choice quiz.
Each of the course’s four weekly modules included a five-question multiple-choice quiz. At the end of the course there was an additional 15-question quiz. These quizzes had different purposes. The weekly quizzes provided only formative assessment; once a completed quiz was submitted, the system would indicate which questions were answered correctly. It would also provide the correct answer to questions answered incorrectly and direct the student to the specific piece of course content in which the answer could be found. Participants could take these quizzes as many times as desired, until acing a quiz assured them that they had understood all the material. However, the grade achieved on this quiz was not recorded by the system; it only recorded the fact that a particular student had taken the quiz. Taking the quizzes, not “passing” them, was a requirement for those students wishing to receive a certificate of completion. Another formative purpose of the weekly quizzes was familiarizing students with the quiz platform and typical questions—preparing them for taking the course’s final quiz.
The final quiz provided a combination of formative and summative assessment. In addition to providing the formative input described above, this quiz was also graded. A student scoring 10/15 correct answers fulfilled this certification requirement. A student dissatisfied with the quiz score could take it one additional time, with a partially new set of questions generated from the question bank. Once again, there was a formative purpose underlying this summative structure: Students were incentivized to review the correct and incorrect answers on their first attempt, as they would thereby improve their odds for succeeding the second time around.
Noam Ebner is professor of Negotiation and Conflict Resolution in the Department of Interdisciplinary Studies at Creighton University.
Integrating Assessment into the Learning Cycle • Susan Bainbridge
Completing a written assignment is often assumed to be the equivalent of taking a test: By writing a paper, a student is expected to demonstrate particular abilities and knowledge. But I want my students to learn from the activity as well. When students are asked to submit papers, I assess their work on the basis of criteria that have been shared with the students in advance and then return their papers to them. At that point, I do not want my students simply to look at their grade, perhaps quickly scan my comments, and then move on. I want them to learn how to create a better submission. With this goal in mind, I ask them to revise their papers and resubmit them. Then I grade the second submission as the final one. Now the paper has become a teaching tool, and, by making changes, students hopefully will have incorporated new knowledge. Students are expected to read my initial comments and then make use of them. The grades are naturally higher with their final submissions, but so is the quality of their work. Assessment should be part of the learning process. Otherwise, it is of no significant value, as students will simply continue to make the same errors.
Although I currently teach at the graduate level, I have used this approach in undergraduate instruction as well, and it works extremely well in online courses. I can track changes to papers and add voice comments, and, if need be, the student and I can Skype once the first draft is returned. The process not only enables students to learn from their mistakes but also helps to build a healthy rapport between the instructor and the learner.
Susan Bainbridge earned a Doctor of Education in Distance Education from Athabasca University, where she currently holds an adjunct appointment.
In Praise of Authentic Assessment • Jon Dron
With rare exceptions (where intense, lonely pressure is authentic to the task, such as in some kinds of journalism), I hate invigilated, written examinations: They are normally unfair, inauthentic, weakly discriminatory, hugely stressful and, above all, make a negative contribution to the learning process by making motivation entirely extrinsic and teacher-controlled. They are not even cheap: it’s not cost-effective to incorporate something into the teaching process that actively militates against learning. And they do not even do well the one job they are supposed to do. In some countries, four out of five students admit to exam cheating, and more than half admit to it in most countries, including Canada. That’s an inevitable consequence of making exams the point of learning, and it is entirely our own fault.
I have steadfastly avoided exams in my own courses, using combinations of techniques like personally chosen projects, embedding the sharing of work with other students, negotiable personal outcomes, portfolios of flexible evidence, shared reflective diaries, community building, giving feedback on achievement but never grades on assignments, and so on. Every activity contributes to both individual learning and to the learning of others, gives learners control, and is personally relevant and uniquely challenging for every learner. Feedback is only ever supportive, never judgmental, and inherent in the process. In combination, this makes cheating highly improbable, but it always involves me in yet another fight with those who believe exams are the gold standard of reliable summative testing and that nothing else will do.
One of my colleagues, persuaded by my arguments but not willing to face the wrath of colleagues by removing exams, has found a nice halfway solution. After following an open, social, reflective, project-oriented process throughout his courses, he simply requires students on the final exam to write about what they did, with a structured reflective framework to help guide them. They know this in advance, so there’s relatively little pressure. It is only possible for those who actually did the work to do this, and, more importantly, it serves a very useful pedagogical purpose that helps to consolidate, connect, and reinforce what they have learned. The main complaint students have about it is that they get writer’s cramp because they write so much.
Jon Dron is a member of the Technology Enhanced Knowledge Research Institute and chair of the School of Computing and Information Systems at Athabasca University, as well as an Honorary Faculty Fellow in the Centre for Learning and Teaching at the University of Brighton.