CHAPTER 08
2001
E-Learning Standards
This chapter brings together the two preceding ones on e-learning and learning objects (LO). By the turn of the millennium, e-learning was part of mainstream education provision. The Internet was no longer dismissed as a fad, and most universities were engaging in some form of e-learning, even if only as a support tool for campus students. After the initial flurry of activity, typified by something of a Wild West approach to creating your own website, there was a necessary, if slightly less enjoyable, rationalization of efforts. This meant developing platforms that could be easily set up to deliver e-learning across an institution (we’ll come to the Learning Management System later), a more professional approach to the creation of e-learning content, the establishment of evidence on the effectiveness of e-learning — which often found there was no significant difference compared to traditional modes (Russell, 1999) — and initiatives to describe and share tools and content.
This maturing of e-learning gave rise to the development of several standards, particularly those of IMS (see https://www.imsglobal.org/ep/index.html). This body grew out of a 1995 EDUCAUSE project on Instructional Management Systems, hence IMS, although the organization would later drop the interpretation and retain just the acronym. The aim was to address one of the problems Lamb (2018) identified in the previous chapter, namely that platforms and content developers all used different formats, making it difficult to take e-learning content from one context and deploy it elsewhere. This undermined the entire learning object ethos, and, at the time, many universities deployed more than one learning platform, meaning they could not easily transfer content even within their own institution. The focus of IMS was therefore on interoperability in e-learning. Akin to having different electronics companies, each with their own type of plug, the sector needed some form of standardization in order to progress.
IMS, therefore, set about developing standards to describe content, assessment, courses, and, more ambitiously, learning design. There are two main areas where interoperability is key: content and tools. Specifying standards for tools allows an educator to plug a variety of web-based tools into a standard platform and pass data between them. The Learning Tools Interoperability (LTI) standard of IMS attempted to realize this, and was successful to a degree, although the vision of a plug-and-play, service-oriented architecture that allowed someone to assemble a bespoke learning environment from a range of best-of-breed tools never quite materialized (we will look at this in chapter 18 on Personal Learning Environments). Interoperability of content was addressed by SCORM, which stands for Sharable Content Object Reference Model (note the presence of object language in the title). The aim of SCORM was to define a means of constructing content so that it could be deployed in any platform that was SCORM compliant. It went on to become an industry standard for specifying content, allowing creators to produce content in one format with the knowledge that it could be delivered in all the major platforms, rather than creating separate versions for each. Prior to the advent of SCORM, there was a good deal of overhead in moving content from one platform to another.
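To give a sense of what this packaging involves, the sketch below generates the kind of imsmanifest.xml file that sits at the root of a SCORM 1.2 content package, describing a single piece of content (a "sharable content object," or SCO). This is a minimal, hedged illustration: the identifiers, course title, and file name are invented, and a real package would bundle the content files themselves, and usually the schema definitions, alongside the manifest.

# A minimal, illustrative imsmanifest.xml for a SCORM 1.2 package with a
# single SCO. Identifiers, title, and file name are invented for the example.
import xml.etree.ElementTree as ET

CP_NS = "http://www.imsproject.org/xsd/imscp_rootv1p1p2"
ADLCP_NS = "http://www.adlnet.org/xsd/adlcp_rootv1p2"
ET.register_namespace("", CP_NS)
ET.register_namespace("adlcp", ADLCP_NS)

manifest = ET.Element(
    f"{{{CP_NS}}}manifest",
    {"identifier": "com.example.climate101", "version": "1.2"},
)

# The organization section defines the course structure a platform displays.
orgs = ET.SubElement(manifest, f"{{{CP_NS}}}organizations", {"default": "ORG-1"})
org = ET.SubElement(orgs, f"{{{CP_NS}}}organization", {"identifier": "ORG-1"})
ET.SubElement(org, f"{{{CP_NS}}}title").text = "Introduction to Climate Change"
item = ET.SubElement(
    org, f"{{{CP_NS}}}item", {"identifier": "ITEM-1", "identifierref": "RES-1"}
)
ET.SubElement(item, f"{{{CP_NS}}}title").text = "Lesson 1"

# The resources section lists the actual files that make up the content.
resources = ET.SubElement(manifest, f"{{{CP_NS}}}resources")
resource = ET.SubElement(
    resources,
    f"{{{CP_NS}}}resource",
    {
        "identifier": "RES-1",
        "type": "webcontent",
        f"{{{ADLCP_NS}}}scormtype": "sco",
        "href": "lesson1.html",
    },
)
ET.SubElement(resource, f"{{{CP_NS}}}file", {"href": "lesson1.html"})

ET.ElementTree(manifest).write(
    "imsmanifest.xml", encoding="utf-8", xml_declaration=True
)

Any SCORM-compliant platform could read a manifest like this, show "Lesson 1" in its course structure, and launch lesson1.html, which is precisely the write-once, deploy-anywhere promise described above.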
Perhaps the standard that causes many ed tech people to break out in a cold sweat is that of metadata, and particularly the Dublin Core. The Dublin Core Metadata Initiative (DCMI) was formed in the 1990s from a workshop series focusing on different metadata approaches. Metadata was used to describe a piece of content (such as a learning object) so that it could be discovered and deployed easily, and hopefully automatically. The 2003 version of the Dublin Core Metadata Element set (DCMI, 2003) had the following fields (or “elements”):
Title: A name given to the resource.
Creator: An entity primarily responsible for making the content of the resource.
Subject: A topic of the content of the resource, expressed in keywords, key phrases, or classification codes.
Description: An account of the content of the resource, such as an abstract or table of contents.
Publisher: An entity, such as a person or an organization, responsible for making the resource available.
Contributor: An entity responsible for making changes to the content of the resource.
Date: A date of an event in the lifecycle (usually creation) of the resource.
Type: The nature or genre of the content of the resource.
Format: The physical or digital manifestation of the resource.
Identifier: An unambiguous reference to the resource within a given context.
Source: A reference to a resource from which the present resource is derived.
Language: The language of the intellectual content of the resource.
Relation: A reference to a related resource.
Coverage: The extent or scope of the content of the resource.
Rights: Information about rights held in and over the resource.
These 15 fields represent a basic vocabulary for describing any digital resource, such as videos, images, or web pages. The set is not specific to learning content, and to be useful in an educational setting more fields are required to describe pedagogic attributes, such as learning outcomes, age range, difficulty, and so on. A further range of metadata standards was therefore developed specifically for describing learning objects. These were much more complex than the Dublin Core basic set; for instance, the 2003 UK Learning Object Metadata Core (UK LOM Core, 2003) contained over 60 elements, including items such as meta-metadata, semantic density, and taxonomic path. Some elements were mandatory and others optional.
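As an illustration of what a record built from these elements might look like, the sketch below assembles one in the dc: XML namespace, one common way of serializing Dublin Core. The learning object it describes, and every value in it, is invented for the example, and only some of the 15 elements are filled in (all are optional and repeatable).

# A hedged sketch of a simple Dublin Core record in the dc: XML namespace.
# The learning object described is hypothetical; values are illustrative.
import xml.etree.ElementTree as ET

DC_NS = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("dc", DC_NS)

record = ET.Element("record")
fields = {
    "title": "Photosynthesis: An Interactive Tutorial",
    "creator": "Example University Biology Department",
    "subject": "biology; photosynthesis",
    "description": "A short interactive tutorial on the light reactions.",
    "date": "2003-05-01",
    "type": "InteractiveResource",
    "format": "text/html",
    "identifier": "http://example.edu/lo/photosynthesis-101",
    "language": "en",
    "rights": "For educational use.",
}
for element, value in fields.items():
    ET.SubElement(record, f"{{{DC_NS}}}{element}").text = value

print(ET.tostring(record, encoding="unicode"))

Richer profiles such as the UK LOM Core extended this kind of record with dozens of further, education-specific elements.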
The reason that mention of metadata still induces wry chuckles from some in the ed tech field is that at the time it was largely human-derived. Even the basic set of the Dublin Core represents a significant overhead for every learning object a user might create. Erik Duval (2005), who did much of the early work on analytics, used to preach that “electronic forms must die,” and much of the basic metadata generation did shift to machine-generated terms over subsequent years. But for the rich metadata required for learning objects, the human labour required was excessive. An educator would spend time crafting a useful activity and was then presented with pages of metadata to describe it, which often required more effort than the initial content. This was obviously not an approach that would scale. As well as simply being tiresome to complete, this level of description also became restrictive, in that it seemed to define exactly how the content should be used, and usage is often unpredictable. The intentions behind the UK LOM Core and most e-learning standards were admirable, but they essentially offered a poor return on investment. In order to be useful, particularly with a vision of automatic assembly in mind, they needed to become increasingly complex. As this complexity increased, they became more specialized and required more effort to complete and work with, and thus fewer people used them. And as fewer people adopted them, the benefit of doing so decreased, trapping the standards in a cycle of diminishing returns. In analyzing the problems with a later version of the LTI standard, Feldstein (2017) emphasized this return on investment:
One of the major implications that falls out of this circumstance is that organizations will not adopt the standard . . . unless they believe the benefits will outweigh the costs. If a specification is going to be broadly adopted, then it needs to be designed so that all the adopting parties will see direct benefit. Remember, every minute of developer time spent implementing a standard could have been spent developing a feature or fixing a bug instead. (para. 4)
As the standard became more complex, the costs of implementation began to outweigh the potential benefits. I had personal experience of this situation when I was involved with developing one of the pilot courses for the ill-fated UK eUniversity around 2002–2003 (Garrett, 2004). The UK eUniversity was a government initiative to develop a platform to deliver the best of UK higher education online across the world (much as FutureLearn would do later with MOOCs). The university developed a whole new platform based on learning objects. Every object needed to have metadata based on the UK LOM Core entered by hand, and if a change was subsequently made to the learning object, for example, correcting a typo, the nascent platform lost all the metadata and it had to be entered afresh. The long-term goal was that the university would create a repository across all courses, from all the different providers, by specifying all content as learning objects. However, this nirvana of a rich pool of easily discoverable content seemed a long way off when you were manually entering the same metadata for the third time, and indeed it never materialized, as the UK eUniversity faltered and later collapsed. The effort of creating those metadata fields did not yield sufficient reward.
E-learning standards are an interesting case study in ed tech. They are much less prevalent in discussions around e-learning today. In some respects, that is a sign of their success: good standards retreat into the background and just help things work. But it is also the case that they failed in some of their ambition to deliver easily assembled, discoverable, plug-and-play content. The dream was that a learner could type in “course on climate change” and a program would automatically assemble the best content, with some automated assessment at the end. This wildly underestimated the complexity of learning and overestimated the quality of most learning objects. So, while the standards community worked away effectively, it was surpassed in popular usage by the less specific, but more human, approach to description and sharing that underpinned the Web 2.0 explosion. We will look at this in chapter 13, but the success of folksonomies (user-created categories) over metadata, and of embed code over SCORM, is another example of the “good enough” principle. For anything complex, the formal standards of metadata and SCORM were required, but for popular usage, the Web 2.0 versions were adequate. For example, even those who worked on the UK LOM Core (Thomas, Campbell, Barker, & Hawksey, 2012) came to recognize its limitations and recommended the following:
As a result of an increasing realisation that the UK LOM Core was not achieving the intended results, and partly in response to new approaches to resource description, such as folksonomies, informal tagging, and the use of platforms for resource sharing such as Flickr, YouTube and SlideShare, which did not support formal metadata schema, the decision was taken not to mandate the use of formal metadata schema for the UK OER Programme. (p. 38)
It is interesting to note that not only did some of the ideas from learning objects and standards later evolve into the work on open educational resources (OER) but many of the same personnel were involved; for example, Stephen Downes, David Wiley, Lorna Campbell, Brian Lamb, and Sheila MacNeill all contributed to this early field and then became significant voices in open education. This demonstrates that even when an approach does not achieve the success envisaged for it, the people involved often carry its key ideas forward into more successful forms.