CHAPTER 23
2016
The Return of Artificial Intelligence
Artificial intelligence (AI) is an interesting case study in ed tech, combining several themes that have already arisen in this book: promise versus reality, the cyclical nature of ed tech, and the increasingly thorny ethical issues raised by its application. The possibilities of AI in education saw an early burst of enthusiasm in the 1980s, particularly with the concept of Intelligent Tutoring Systems (ITS). This initial enthusiasm waned somewhat in the 1990s, mainly because ITS only worked for very limited, tightly specified domains: developers needed to predict the types of errors people would make in order to provide advice on how to rectify them. And in many subjects (the humanities in particular), it transpired that people could be very creative in the errors they made and, more significantly, that what constituted the right answer was less well defined. For example, in their influential paper, Anderson, Boyle, and Reiser (1985) detailed intelligent tutoring systems for geometry and the programming language LISP (derived from “list processor”). They confidently predicted that “cognitive psychology, artificial intelligence, and computer technology have advanced to the point where it is feasible to build computer systems that are as effective as intelligent human tutors” (p. 456).
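To make the error-prediction requirement concrete, here is a minimal sketch of that style of tutoring system in Python. The algebra task, the catalogue of anticipated mistakes, and the feedback text are all invented for illustration; they are not drawn from Anderson’s systems.

```python
# A minimal sketch of the "anticipate the error" approach used by early
# intelligent tutoring systems. The domain (one-step algebra), the error
# catalogue, and the feedback messages are invented for illustration.

# Each entry maps a predicted wrong answer to advice on how to rectify it.
# Task posed to the student: solve 2x + 6 = 10.
ERROR_CATALOGUE = {
    "8": "It looks like you added 6 to both sides instead of subtracting it.",
    "2x = 4": "Good so far; now divide both sides by 2 to isolate x.",
    "3": "Check the subtraction: 10 minus 6 is 4, not 6.",
}

CORRECT_ANSWER = "2"

def tutor_feedback(student_answer: str) -> str:
    """Return targeted advice if the error was anticipated, else a generic hint."""
    answer = student_answer.strip().lower()
    if answer == CORRECT_ANSWER:
        return "Correct!"
    if answer in ERROR_CATALOGUE:
        return ERROR_CATALOGUE[answer]
    # The weakness noted above: creative, unanticipated errors fall through
    # to a hint that cannot diagnose what actually went wrong.
    return "Not quite. Try isolating x one step at a time."

print(tutor_feedback("8"))          # anticipated error, targeted advice
print(tutor_feedback("seven-ish"))  # unanticipated error, generic hint
```

The final branch is the limitation described above: anything outside the anticipated error set can only receive a generic response, and in open-ended subjects that set is effectively unbounded.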
Yet, by 1997, Anderson and his colleagues (Corbett, Koedinger, & Anderson, 1997) were among those lamenting that “intelligent tutoring has had relatively little impact on education and training in the world” (p. 850). In their analysis they hit upon something that seems obvious, and yet continues to echo through educational technology, namely that the technology (in this case intelligent tutoring systems, but it might equally apply to MOOCs, say) has not been developed according to educational perspectives. They stated:
The creative vision of intelligent computer tutors has largely arisen among artificial intelligence researchers rather than education specialists. Researchers recognized that intelligent tutoring systems are a rich and important natural environment in which to deploy and improve AI algorithms . . . the bottom line is that intelligent tutoring systems are generally evaluated according to artificial intelligence criteria . . . rather than with respect to a cost/benefit analysis of educational effectiveness. (p. 851)
In short, the technology is developed and evaluated by people who like the technology but don’t have an appreciation of the educational context. In this snapshot, we have much of the history of ed tech.
In the 1990s, while issues with ITS were becoming apparent, there was a second flush of popularity around AI in general, focused on the potential of two approaches: expert systems and neural networks. These were contrasting approaches: expert systems sought to explicitly capture expertise in the form of rules, whereas neural networks learned from inputs in a manner analogous to the brain. They can be viewed as top-down and bottom-up approaches respectively.
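The distinction can be illustrated with a toy example. Below, the same decision (whether a student passes, based on two assignment scores) is made first by a hand-written rule and then by a single perceptron that learns its own weights from labelled examples. The task, data, and threshold are invented assumptions, and the sketch uses only the Python standard library.

```python
# Toy contrast between the two approaches, using an invented task:
# decide whether a student has "passed" from two assignment scores.

# Top-down (expert system style): the expertise is written down as an explicit rule.
def rule_based_pass(score_a: float, score_b: float) -> bool:
    return (score_a + score_b) / 2 >= 50  # the expert's stated threshold

# Bottom-up (neural network style): a single perceptron learns its own
# weights from labelled examples instead of being handed the rule.
examples = [  # (score_a, score_b, passed)
    (80, 70, 1), (30, 20, 0), (55, 60, 1), (40, 35, 0),
    (90, 20, 1), (10, 95, 1), (45, 40, 0), (65, 45, 1),
]

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.01

for _ in range(100):  # repeated passes over the training data
    for a, b, target in examples:
        prediction = 1 if weights[0] * a + weights[1] * b + bias > 0 else 0
        error = target - prediction  # perceptron update rule
        weights[0] += learning_rate * error * a
        weights[1] += learning_rate * error * b
        bias += learning_rate * error

def learned_pass(score_a: float, score_b: float) -> bool:
    return weights[0] * score_a + weights[1] * score_b + bias > 0

print(rule_based_pass(55, 48), learned_pass(55, 48))
```

The rule is transparent but has to be elicited from someone; the perceptron needs only examples, but its “knowledge” ends up in opaque weights, a trade-off that resurfaces in the discussion of black box approaches below.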
Expert systems were primarily focused on problem solving and diagnosis, but they also had potential as teaching aids — if the knowledge of an expert in, say, medical diagnosis could be captured effectively, then this would form a useful teaching aid, particularly for problem-based approaches to education. While expert systems could perform reasonably well within constrained domains, they did not achieve a major impact in education. The problem was twofold: the oft-quoted “knowledge acquisition bottleneck” (e.g., Wagner, 2006; Cullen & Bryman, 1998) and the complexity of real-world domains. The knowledge acquisition bottleneck refers to the difficulty of acquiring knowledge from experts (or other resources) in a format that can be represented in an expert system. Knowledge cannot simply be extracted from an expert like siphoning petrol from a car; acquiring it requires lengthy interviews or observations.
Experts don’t always agree, and making expertise explicit is notoriously difficult. What often characterizes an expert is that they “just know.” For instance, chess experts will be able to reproduce a board they are shown much better than you could (assuming you’re not a chess expert, that is). The reason is that they encode it as patterns linked to long-term memory, whereas novices encode it as discrete elements, for example, the white rook is next to the black pawn, two spaces in. Experts don’t know how they do this; it arises as a by-product of expertise — they don’t explicitly intend to encode in this manner, but it is what they do as they gain expertise (Chi, Glaser, & Farr, 2014). Interestingly, if you show expert chess players a random placement of pieces, that is, positions not drawn from a real game, they recall them with the same accuracy as everyone else. What this means for AI is that acquiring knowledge from experts in an encodable form is time-consuming and not always an accurate representation of what they know.
The complexity issue means that the world operates in unpredictable ways. For example, I developed an expert system for diagnosing flaws in an aluminium die-casting system (Webster, Weller, Sfantsikopoulos, & Tsoukalas, 1993). This worked tolerably well, by characterizing typical flaws, but sometimes these flaws co-occurred, sometimes they appeared differently, and often a flaw had multiple causes. To borrow a term from software engineering, expert systems (and intelligent tutoring systems) did not “degrade gracefully.” In software, this refers to the ability of a system to maintain limited functionality even when a large portion of it is inoperative or under heavy load. In early expert systems, the equivalent was that the system either knew or it didn’t. Humans are very good at degrading gracefully (and sometimes disgracefully too): they can take a good guess based on experience.
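The contrast can be sketched briefly. Using an invented fault table (not the actual die-casting system cited above), an exact-match lookup either knows or it doesn’t, while a “best guess” fallback degrades more gracefully when the observed symptoms only partially match a known fault.

```python
# Invented fault table for illustration; not the die-casting system cited above.
FAULT_RULES = {
    frozenset({"porosity", "rough_surface"}): "Reduce metal injection speed.",
    frozenset({"cold_shut", "incomplete_fill"}): "Raise die temperature.",
    frozenset({"flash", "dimensional_error"}): "Check die clamping force.",
}

def brittle_diagnose(symptoms: set) -> str:
    """Early expert-system behaviour: exact match or nothing."""
    return FAULT_RULES.get(frozenset(symptoms), "Unknown fault.")

def graceful_diagnose(symptoms: set) -> str:
    """Degrade gracefully: return the closest rule, with a confidence estimate."""
    best_match, best_overlap = None, 0
    for rule_symptoms, advice in FAULT_RULES.items():
        overlap = len(rule_symptoms & symptoms)
        if overlap > best_overlap:
            best_match, best_overlap = (rule_symptoms, advice), overlap
    if best_match is None:
        return "Unknown fault."
    rule_symptoms, advice = best_match
    confidence = best_overlap / len(rule_symptoms | symptoms)
    return f"Best guess ({confidence:.0%} symptom overlap): {advice}"

observed = {"porosity", "incomplete_fill"}  # symptoms spanning two different faults
print(brittle_diagnose(observed))   # -> Unknown fault.
print(graceful_diagnose(observed))  # -> a hedged best guess with a confidence figure
```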
What is of general interest is that the current claims of AI are much the same and that some of the problems remain. However, what has really changed in the interim is the power of computation. This helps address some of the complexity issues because multiple possibilities and probabilities can be accommodated. In this we see a recurring theme in ed tech: nothing changes while simultaneously everything changes. AI has definitely improved since the 1980s, but some of the fundamental issues that beleaguered it remain.
In an analysis of AI in education, Roll and Wylie (2016) identified several trends since its early implementation, including an increase in the empirical evaluation of tools. This is another universal trend in ed tech — early research tends to focus on potential and possibility, but gradually more critical perspectives are brought to bear, and the need for reliable evidence becomes prominent. This same pattern has been seen in OER (Weller, 2016a) and learning analytics (Gasevic, Siemens, & Rosé, 2017). Roll and Wylie (2016) also reported an increased discussion of the theoretical implications, a focus on STEM applications, and the development of step-based systems rather than complex domains. This represents a narrowing of focus for ITS away from the broader claims of being applicable to all subjects and of replacing teachers, to a more practical implementation centred on approaches and subjects where there is evidence of success. Roll and Wylie summarized the evolution of AI in education, stating that it has been “focusing on a very specific scenario, and has been doing it well: the use of computers in the classroom to teach domain knowledge in STEM topics using step-based problems” (p. 590). Again, this is symptomatic of much of ed tech — initial hype and claims of revolution followed by a sequence of more tightly focused adoption within the existing educational framework.
AI has definitely improved since the 1990s, though, and perhaps most significantly it has become prevalent in much of our daily lives: credit assessment, technical troubleshooting, voice recognition systems such as Siri, and computer games all rely on aspects of AI. It is important, however, to distinguish between narrow and general AI. However effective these applications are, they are all examples of narrow AI, which means they can perform one aspect of human functioning pretty well but cannot generalize. They are designed for a specific purpose and have no ambition to go beyond those narrow boundaries. This type of AI will likely proliferate, and its application in education has potential, as the review above indicates. By concentrating on narrow tasks, good performance can be realized. For example, language learning bots, sophisticated automatic assessment, resource recommenders, and so forth can all be deployed within an existing educational ecosystem.
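As a hedged illustration of how narrow (and yet serviceable) such a tool can be, the sketch below recommends learning resources by simple tag overlap; the resources, tags, and scoring are invented, and a real recommender would be considerably more sophisticated.

```python
# A deliberately narrow educational tool: recommend resources whose tags
# overlap most with what a learner has already engaged with. All data invented.

RESOURCES = {
    "Intro to Photosynthesis (video)": {"biology", "plants", "energy"},
    "Cell Respiration Quiz": {"biology", "energy", "assessment"},
    "History of Botany (article)": {"biology", "plants", "history"},
    "Basic Statistics Walkthrough": {"maths", "statistics"},
}

def recommend(learner_tags: set, top_n: int = 2) -> list:
    """Rank resources by the number of shared tags and return the top matches."""
    scored = sorted(
        RESOURCES.items(),
        key=lambda item: len(item[1] & learner_tags),
        reverse=True,
    )
    return [title for title, tags in scored[:top_n] if tags & learner_tags]

print(recommend({"biology", "energy"}))
```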
This narrow AI is very distinct from the type of AI that most people call to mind, and which tends to attract headlines, which is a more general AI. The aim of general AI is to develop systems that can generalize and perform any intellectual task that a human being can. Successful applications of general AI are rare to non-existent — which is not to say there won’t be a world of Blade Runner-type replicants one day, but it is unlikely to arrive any time soon, and if it does arrive, the social impacts will be far beyond education. Selwyn (2018) proposed six reasons why artificial intelligence technology will never take over from human teachers:
- Human teachers have learned what they know.
- Human teachers make cognitive connections.
- Human teachers make social connections.
- Human teachers talk out loud.
- Human teachers perform with their bodies.
- Human teachers improvise and “make do.”
This list can be seen as more of an argument against general AI than against particular narrow AI tools. It also repeats some of the elitism against distance education that was evident in early e-learning criticisms — the idea that face-to-face, real-time education is the only true form of learning is evident in claims such as “teachers perform with their bodies.” However, the flexibility of human educators, and the emotional and cognitive connections that learners make with them, are important aspects of the educational process. It is why education has been so resistant to a formula for success — it is a fundamentally human experience.
More significant than the technological issues are the ethical ones. As Audrey Watters (2017) has contended, “Artificial intelligence is not developed in a vacuum. AI isn’t simply technological: it’s ideological” (para. 1). We shall return to the social implications of algorithms and black box approaches when we discuss 2018 in chapter 25, but the more authority and power we allocate to AI systems whose workings we cannot “see inside of,” the greater the possibility of real-life negative effects arising from systems that cannot be explained, tracked, or held accountable. The concern about AI is not that it won’t deliver on the promise held forth by its advocates but rather that someday it will. And then the assumptions embedded in code will shape how education is realized, and if learners don’t fit that conceptual model, they will find themselves outside of the area in which compassion will allow a human to intervene. Perhaps the greatest contribution of AI will be to make us realize how important people truly are in the education system.