4 | How Technologies Work
[Techne] is a way of making [or] . . . of bringing something into being whose origin is in the maker and not in the thing.
—Aristotle (cited in Illich, 1971, p. 84)
Given that technologies are always situated, can be used in different ways, differ from one use to the next, and constantly evolve, if we are to understand education as a technological process, then it is useful to have some idea of how technologies change and how they adapt. This chapter explains some of the central dynamics of technologies and technology development, drawing mainly from complex systems theories, to lay foundations that will explain much of the nature of educational systems in the chapters to come.
We Shape Our Dwellings . . .
The development of technologies follows a complex recursive dynamic that is partly extrinsic, shaped by social, environmental, political, conceptual, and creative forces, and partly intrinsic and self-reinforcing. Changes that we make to our world in turn affect us, in an endless cycle of affect and effect. As Dewey et al. (1995, p. 154) put it, “the organism acts in accordance with its own structure, simple or complex, upon its surroundings. As a consequence, the changes produced in the environment react upon the organism and its activities. The living creature undergoes, suffers, the consequences of its own behavior.” McLuhan (1994, p. xxi) puts it more simply: “We become what we behold. . . . We shape our tools and afterwards our tools shape us.”1
However we express it, there is a constant interplay between the technologies and structures that we create and their effects on us, in turn changing how we build them and what we do within and with them. This does not just make them complicated. It makes them complex; technologies can and do result in behaviours and forms that are difficult or impossible to predict in advance. Looking at their components is seldom very useful, because the whole is different from the sum of its parts, just as a body is more than a collection of cells, a poem is more than just a collection of words, and a community is more than just a collection of people. A complex whole emerges through the multiple, many-layered interactions of its elements.
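To make the idea of emergence concrete, consider a toy model. The following minimal sketch (an illustration of my own in Python, not drawn from any of the sources cited here) implements Conway's Game of Life, in which rules that refer only to individual cells and their immediate neighbours give rise to a "glider": a coherent shape that travels across the grid even though nothing in the rules, or in any single cell, describes movement at all.

```python
# A minimal sketch of emergence: Conway's Game of Life. The rules refer
# only to individual cells and their immediate neighbours, yet a "glider"
# emerges: a five-cell pattern that travels diagonally across the grid.
from collections import Counter

def step(live):
    """Advance one generation; `live` is a set of (x, y) live cells."""
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next generation if it has exactly 3 live neighbours,
    # or if it is alive now and has exactly 2.
    return {cell for cell, n in neighbour_counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
cells = glider
for _ in range(8):  # two full glider periods
    cells = step(cells)
# The same shape reappears, shifted diagonally: the movement belongs to
# the whole, not to any of its parts.
print(sorted(cells))
```

Run it, and the printout shows the original five-cell shape reappearing in a new position: the travelling pattern is a property of the interactions, not of any part.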
Even tiny differences can matter. For example, the sentence “the man bit the dog” contains the same words, grammatical structure, and general form as the sentence “the dog bit the man.” In almost every respect, it is an instance of the same combination of technologies, and both sentences work equally well to convey meaning, but their meanings are utterly different. The same dependence on small details of arrangement is true of virtually every technology, from guitars to teaching methods. And, as with the sentence “the man bit the dog,” a great deal depends on the surrounding context: what other stuff exists and how it relates to the technology of interest.
The Importance of Boundaries
It is of little use to think about technologies unless they are the ones that matter in the context that we are researching. Because technologies are assemblies (and often assemblies of assemblies of assemblies . . .), we can choose different boundaries around different parts of the assembly and thus see different technologies. Boundaries are how we give and discover form in the world. They are the impermeable, semi-permeable, or entirely permeable lines that we draw to distinguish one thing from another—a heart from a lung, a property from a street, a married couple from an unmarried couple, and so on. They do not (necessarily) separate objects; instead, they define what they contain (Cilliers, 2001). Different problems and different behaviours emerge at different boundaries (Holland, 2012). They pass and make use of different signals internally, not all of which will matter in other parts of the assembly. For instance, if we place our boundary around a computer-based tutorial and a learner, then what matters is the extent to which the learner learns from the lesson, how it fits with existing knowledge, and so on. Because we have chosen this particular boundary, the computer, the operating system, the application’s code, and other important constituents of or containers for the application are not our immediate concern, though they can play a causal role in how it achieves its purpose. The things that matter are the behaviours of what lies within the boundary that we have chosen, notably including the learner’s own orchestration, the primary process that leads to the technology about which we care. If we place our boundary around the lesson and the teacher, then what matters is how easy it is to write that tutorial, what authoring support it offers, how it fits with future plans, the teacher’s skills, and so on. This means that knowledge relevant at one boundary can be completely irrelevant or at best partial or tangential at another. The signals that pass from a brake pedal to a brake assembly (whether electrical, pneumatic, or whatever) matter greatly to the engineer designing a car, but it is the effect of foot pressure on the pedal, and how that affects the overall slowing of the vehicle, that matter more to the driver. Where we choose to place our boundaries depends on our interests and current needs.
The importance of boundaries is shown in particularly sharp relief when we examine computers and their uses in learning. The computer is a universal tool, medium, and environment, which means that it can become pretty much any other technology—that is the overarching phenomenon about the computer that makes it useful. The computer is a technology in itself only from a few perspectives, of interest mainly to computer scientists, purchasers, marketers, and creators of the product. Mostly it is the means by which we create other technologies, a part of (and a container for) an indefinite number of much more interesting and complex assemblies. For a computer used to run, say, a quiz program in a school, it is a different technology for the student, for the teacher, for the programmer, and for the technician supporting the machine. Each different user utilizes some of the same phenomena, some that are different, and some that are differently orchestrated, for different uses. Although the computer might be a necessary constituent part, it seldom makes sense to think of the technology that matters the most as a “computer,” and even then context matters too: when buying a computer, we usually have at least a sample range of uses in mind. If we do choose to use the term, then “computer” is usually a synecdoche for a host of other technologies: it is a part that signifies a whole.
Unfortunately, it is common and entirely natural to pick the wrong boundaries, to blur the computer technology and the vast number of technologies that result from its use into one, leading people to ask (and often proceed to answer) misleading questions such as “do computers improve learning?” (Bouygues, 2019) or “is Google changing our brains?” (Brabazon, 2007) or even “do CRT monitors interfere with learning?” (Garland & Noyes, 2004). When posed as general questions related to all possible uses of the technology, they make no sense at all, but they are not much better even when applied to specific instances, unless seen as part of an assembly of the technology that we are in fact investigating. The problem with such questions is that they mistakenly conflate an arbitrary range of different technologies into one, on the basis of nothing more than a single element that happens to be common to all of their assemblies. We could choose different boundaries just as validly.
We might, for instance, ask “do transistors improve learning?” or “are cooling fans changing our brains?” The answer is undoubtedly “yes” in both cases, but this tells us little of any value because we are looking at the wrong assembly: we are choosing the wrong boundaries. It is reasonable to ask if such things can affect learning, in positive or negative ways, as part of a design process in which we consider the strengths and limitations of particular components in order to think how we might use them. It is also reasonable to ask questions about some specific uses of them, or about specific uses in a common context, or to compare different uses to find out how they differ. However, it is not reasonable to pinpoint one generic part of the assembly that could make any number of positive or negative contributions, depending on its relationship with the rest of the assembly and the orchestration that brings them together. It is the wrong boundary. We might equally (and perhaps with greater reason) pick a broader boundary and ask whether the institutions in which they are used are changing our brains (yes, definitely) or improving learning (sometimes yes, sometimes no).
Choosing the Right Boundaries
Misattribution of boundaries occurs all the way down the line to detailed cases. Even when two technologies appear to be the same, and are used in similar ways, other processes are often—and, in an educational context, usually—more significant. Indeed, as any experienced teacher will tell you, there can be a world of difference between two instances of the same lesson plan using more or less identical pedagogical methods and tools on two different occasions. Again, there are different boundaries (encompassing a different group of learners, all of whom introduce their own processes and pedagogies) in each case. For much the same reason, it makes no sense to talk of Google Search (and especially not, as some have seriously suggested, Google) as changing our brains in any particular way. Some of the ways that Google Search can be assembled with other technologies (including processes and methods for using it and the purposes to which we put it) almost certainly will have some effect because we will learn something, and learning always changes our brains—that might even be a fair working definition of the word learning, albeit incomplete for reasons that will become clear. We can even find regularities—for example, biases introduced by Google’s algorithms that cause replicable effects—but each different assembly can have different effects, each of which will affect us in different ways. A critical attitude toward any technology, be it a part or a whole, is vital, but we have to be very clear about the boundaries as well as about which phenomena, orchestrations, and uses are contributing to the effect. If there are statistically significant things to be said about the effects of computers, say, as a whole on learning, then this is useful information, and it should cause us to question the reasons behind it, to look more closely at the actual technologies, and the contexts in which they are used, that are having this effect and why. However, to blame one piece of a technology assembly, especially when we know that those effects are not seen in every case, is lazy and intellectually incoherent.
The effects that we see are the results of many technologies, many forces, and many co-dependent and independent factors, of which a given technology is only a visible feature that we have chosen as our boundary. Although there might be a statistical correlation between computer use and (say) an inability to concentrate, surface thinking, and a host of other ills (as well as a host of benefits), it is almost never the computer or a given piece of software itself that is the cause but the assemblies of which it might form a host or a part, its contexts of use, and the consequent purposes to which it is put. We know that there are ways of using computers that do not cause these problems, so it is not the computer that is the issue. Studies suggesting that computers have potentially limited value in classrooms, such as OECD (2015) and Bouygues (2019)—the latter with far greater care and discernment—describe not the value of computers in specific instances of learning per se but the value that they have (on average) as part of a particular kind of technological assembly that includes (on average) a certain kind of test, set of pedagogies, structural power relationships, and organizational methods common to most schools. Enthusiasts—and I am one of them—might justly counter-claim that the problem is not with the use of computers but with the processes and other technologies in schools with which they are assembled. Such studies often highlight inadequacies in the technologies with which we attempt to assemble them, or weaknesses in the ways that we use them, not in computers (at least without a great deal of further argument). If the parts of the assembly are not working together, then of course, regardless of whether they work separately, the whole technology will not work. It is as foolish to put computers in classrooms without changing the rest of the assembly (or adjusting computer use to fit it) as it is to put a V8 engine on a pushbike or to use a fork for eating soup. These parts of the assembly might make more sense if we were to strengthen the bicycle or make the soup chunkier. But perhaps such issues should encourage us to rethink our strategy altogether.
It is always possible to use technologies differently, to assemble them with others, to apply orchestration to different phenomena, to orchestrate them in other ways. It is important to analyze these assemblies but futile to demonize or extol one part without considering the rest and without thinking about ways that it might be used differently with different effects.
Although some rigid, prescriptive, and dominative technologies—laws, production lines, exam regulations, and so on—can and do strongly determine behaviour (part of the stuff that they are organized to do or how they are organized to do it), for the most part, especially when we are considering types of technology rather than specific instances within specific assemblies, in themselves they do not. All technologies, of course, do have affordances (often the reason that we use them) and limitations that affect their use, but they do not entail so much as enable consequences (Longo et al., 2012). The consequences that we observe are only some among (sometimes) many that could have occurred with a slightly different assembly. We might see consistent patterns and assume that they are causal, but in fact they are normally the results of what is enabled (or disabled) by technologies rather than direct results of using them. For example, asynchronous online learning (a massive class of diverse technologies) typically enables people to learn at any hour of the day, and it is constrained in ways that slow down the pace of communication, both of which undoubtedly lead to more common uses of certain patterns of pedagogical design, distinctively skewed demographics, physiological commonalities (e.g., learners can be tired after a long day working), and so on. Similarly, until recently, computers did require that we sit down to use them, often in limited physical contexts, and many still do. That matters: it is among the phenomena that result from using them, and it can be among the phenomena that we orchestrate to achieve our goals or that stand in the way of effective learning. But, again, it is not a relationship of entailment: computers do not cause any particular kind of learning. What causes it, or inhibits it, is how they are organized with other stuff.
Rarely, we might conclude that there is indeed inevitable harm from a specific technology, though even that depends on the context of use. A gun, for instance, though deliberately designed to cause great harm, can make a serviceable paperweight, doorstop, or decorative item. Some people believe (with a little justification) that the ubiquitous glow of our device screens or (with no justification) the broadcast of wifi signals can have bad effects on our brains. If some incontrovertible evidence emerged that screens cause harm, or that wifi signals do adversely affect some people, unequivocally contradicting copious and overwhelming evidence to the contrary, it still would not tell us that computers as a technology are bad. It would tell us only that we need to fix those screens (e-paper screens, perhaps, or filters) and figure out a better way to send signals around (light, for instance, or shielded wires). Perhaps we would have to put them (or affected users) inside a Faraday cage. It is always possible to imagine different assemblies, each with its own weaknesses, strengths, and things that it enables or prohibits.
Equally, there can be default behaviours or particular kinds of orchestration that have consistent or regular effects (harmful or otherwise) that are phenomena that need to be considered when we use them. For example, the delays that computers typically impose (time to start up, load data, access a network, and so on) can be sufficiently harmful in some assemblies (e.g., web page load time, leading us to abandon an otherwise useful search) to render potentially useful technologies worse than useless. Again this is a signal that something about the technologies needs to be fixed, not that the types that they represent are inherently bad. And the fix might not be to make them faster: it might be that different uses could be made of the time spent waiting. Sometimes new technologies can result from exaptation rather than design.
Statements about particular technologies—be they computers, teaching methods, cars, or poems—are always, and must always be, provisional because technologies can always be changed, improved, assembled with others (including other methods), and used differently. We need to look critically at the past, choosing the right boundaries and observing the salient characteristics, because we should always aim to improve our technologies, and they can always be improved. However, we should not assume that our analysis of what happens now necessarily predicts the future. Technologies are not natural, invariant phenomena; they are inventions. Analysis of current behaviour can be little more than a good story: it can, say, warn us of potential failures, provide hints of what leads to success, enrich our design vocabulary, or inspire us, but it cannot provide us with an invariant law.
The Adjacent Possible
Every new thing in the world opens up possibilities that were not there before. This is what powers the ratchet of evolution. Eyes could not have evolved had light-sensitive cells not already evolved, for instance. A fallen rock can provide a step up to reach fruit from a tree that was previously unreachable. Space travel as we know it would not get off the ground without metallurgy, bolts, or radio. Ideas depend on other ideas to form, most famously recognized in Newton’s unoriginal assertion, perhaps made in a barbed reference to his (very short) bitter rival, Robert Hooke: “If I have seen further, it is by standing on the shoulders of giants.”
New things in the world, especially when (as in the case of technologies) evolution is involved, increase what Kauffman (2000) frames as a formal concept of the “adjacent possible.” The adjacent possible, or more precisely the “adjacent possible empty niche” (Longo et al., 2012), describes the potential next steps that any development might take, and it alters as the world around it and the things being developed in the surrounding system change (species, chemicals, technologies, etc.). For things that reproduce with variation, such as species, or for things that are reproduced with variation, such as technologies, such possibilities are additive. Succeeding generations do not immediately, if ever, replace their forebears but can, and normally do, coexist with them, interact with them, and sometimes help to constitute them or incorporate them into their own constitutions. Sometimes they compete with them. This concept of additive expansion of the adjacent possible applies in many areas and in many ways. In the context of culture, Wilson (2012, loc. 1403) describes the process as autocatalytic: each advance makes new advances more likely. Of knowledge, Ridley (2010, p. 248) tells us that, “the more knowledge you generate, the more you can generate.” Of technology, Nye (2006, loc. 65) explains that, “when humans possess a tool, they excel at finding new uses for it. The tool often exists before the problem to be solved. Latent in every tool are unforeseen transformations.” Of language, Melville (1850, p. 1162) put it more poetically: “The trillionth part has not yet been said; and all that has been said, but multiplies the avenues to what remains to be said.”
The creation of a new kind of thing, based upon something that came before it, rarely immediately negates the possibilities available to its forebear, at least when viewed at a global level—what Arthur (2009, loc. 78) refers to as the collective of technology. Locally, we may replace one type of technology with another (a blackboard with a whiteboard, say) and thus block some former channels of possibility for those affected, but globally the technology that we replaced typically continues to exist somewhere, and often it remains a latent possibility locally. Over time, the new technology might come largely or eventually to replace its predecessor completely. However, at the point of its creation, it seldom prevents its ancestors from doing what they always did, notwithstanding occasional issues of forced dependency (e.g., the availability of ink cartridges for discontinued printers).
New technologies are sometimes created to do something that could not be done before, sometimes to do the same thing better, more cheaply, more accurately, or simply differently. As a result, though one technology can completely replace another in a local context, the sum of technological possibilities in the world grows with every novel change or new assembly. As Kelly (2010, p. 309) says, “more complexity expands the number of possible choices.” And, thanks to a combination of the power of assembly and the relentless expansion of the adjacent possible, the technological world is becoming more complex at an ever-faster rate. We look back to what we perceive as a simpler age not just through nostalgia but also because it actually was a lot simpler. It had fewer possibilities, fewer choices that could be made, and considerably less variety. Whether we choose to see this as progress or not depends a lot on our context and point of view, but the inexorable dynamic of the adjacent possible means that there has been without doubt an ever-accelerating rate of technological change.
There are mainly cognitive and pragmatic rather than intrinsic limits to this acceleration, a fact that has been all too evident for a long time. Melville’s positive assertion cited a couple of paragraphs ago, for example, is followed immediately by a phrase that echoes a very modern complaint: “It is not so much paucity, as superabundance of material that seems to incapacitate modern authors” (1850, p. 1162). The rate of technological change has been on an upward curve for many generations and likely for many millennia. For each generation, the curve becomes steeper, and the discontinuities become greater, because the adjacent possible expands exponentially. This makes it harder to know everything that might be known and easier to know that we do not know it.
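Though nothing like a proof, a simple simulation can show why this expansion accelerates. In the sketch below (my own toy illustration, not Kauffman’s formal model; the primitive technologies and the viability rate are invented for the purpose), every technology in an inventory can in principle be combined with every other, so the pool of candidate combinations grows roughly with the square of the inventory, and every viable new assembly enlarges the pool from which the next generation can draw.

```python
# A toy illustration (not Kauffman's formal model) of the additive,
# self-accelerating adjacent possible. Each technology can pair with
# every other, so candidate combinations grow roughly quadratically
# with the size of the inventory, and every viable new assembly
# enlarges the space of what can be tried next.
import random

random.seed(1)
inventory = ["wire", "glass", "vacuum", "battery"]  # arbitrary primitives

for generation in range(8):
    candidates = len(inventory) * (len(inventory) - 1) // 2
    # Assume (arbitrarily) that about a tenth of adjacent-possible
    # combinations prove viable in a generation; the rest stay latent.
    viable = max(1, candidates // 10)
    for _ in range(viable):
        a, b = random.sample(inventory, 2)
        inventory.append(f"({a}+{b})")
    print(f"generation {generation}: inventory {len(inventory)}, "
          f"adjacent combinations {candidates}")
```

Each generation adds more than the last: the inventory grows slowly at first and then ever faster, which is the additive dynamic of the adjacent possible in miniature.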
The greater diversity that results from effervescent change more often than not also provides possibilities for combination. The ability to create lightbulbs, for example, relies on a wide range of foundational technologies, each of which provides new opportunities, but together they make a lightbulb not just possible but also likely to be invented. Indeed, it was invented independently at least 23 times prior to Edison’s “invention” of it (Kelly, 2010), each within a few years of one another. The combination of glass-blowing, knowledge of inert gases or vacuums, electrical generators and batteries, wires, metallurgy, manufacturing methods for filaments, and a host of other technologies enabled an adjacent possible of the lightbulb when they were assembled together. It was not inevitable by any means (technological determinism is extremely rare, if it exists at all), but the combination made it far more likely that it would be developed, particularly given the overwhelming need for cheap, bright, clean lighting at the time. This need itself was driven by other changes, such as more widely available literature, industrialists’ desires for longer productive hours, and overcrowding that affected safety on streets.
The electric light developed not because a proto-lightbulb existed prior to it but because the pieces were ready to be assembled and because there was a need for better lighting. Once it was available, it was used and adapted for a great variety of other purposes, from heaters to signs to electronic valves (and hence radios, TVs, and computers). This is the foundation of the basic evolutionary dynamic noted by Arthur (2009), that technologies evolve through assembly and recombination. As we have seen in many different ways, human intelligence resides only partially in the heads of individuals. It is a collective and emergent phenomenon. We never learn alone, we never think alone, and we never invent alone. This is as true of educational technologies as it is of lightbulbs.
The Adjacent Impossible
Whenever we create, we become able to create more. New possibilities, new potentials, new assemblies become possible and change what we do and how we progress. However, technologies can be subtractive as well as additive. If we replace old technologies with new ones, then new constraints typically emerge, and avenues untaken remain closed, sometimes (at least at a local scale) more than ever before. We create boundaries and barriers, whether through rules, regulations, and processes or through physical restrictions or the propensities of our tools and systems. From digital rights management (DRM) to immigration restrictions, we find new ways (at least locally) to restrict what we can do by inventing new technologies that take away from what we could do before. When we replace multiple learning management systems (LMSs) in a university with one centralized technology, for instance, we lose the adjacent possibles of those that we replaced. Although the adjacent possible expands globally with each new invention, for those forced or cajoled into using prescriptive, dominative technologies, the former possibles become impossibles. For some technologies, notably those that Schwartz (1997) describes as “ideologies”—technologies of ideas—the effects on us can be profound. Echoing Churchill, McLuhan, and Dewey, Schwartz (2015, p. 10) claims that “we ‘design’ human nature, by designing the institutions within which people live.” By way of example, he describes how behaviourist models of work and learning inherited from Adam Smith, B. F. Skinner, Frederick Winslow Taylor, and others have become self-fulfilling prophecies. Smith’s jaundiced (though, as Schwartz is careful to observe, misunderstood and unsubtly interpreted) view that people do not want to work and must therefore be incentivized to do so was very influential, particularly because it happened to fit neatly with industrial revolution technologies of mass production demanding that individuals behave in cog-like ways. This way of thinking, originally a misleadingly partial observation of human nature, has been inherited and embedded in our schools, businesses, hospitals, and other institutions to the extent that it has become self-fulfilling: many people do not want to work because they have been trained to believe that they do so only because of incentives. They are working for pay rather than being paid for work; pay is a (or sometimes the) reward rather than being an award. You do not normally need to give incentives unless you believe that the activity is unpleasant. By removing challenge, meaning, and autonomy, rewards and punishments have become a viable means of sustaining some level of compliance, albeit less effectively than would be the case were individuals allowed to find intrinsic motivation. As Schwartz explains, what was an invention can now be, for social scientists and psychologists, a discovery, a phenomenon to be studied and used. In the conditions that were invented, based upon the erroneous belief that people are inherently reluctant to work, rewards and punishments have some effectiveness, in large part precisely because they make people reluctant to work.
The same is true of the use of grades and credentials to shape behaviour in our education systems. An invention based upon a mistaken premise (or, as I will argue later, a side effect of how the system was constructed)—that people must be made to learn—now plays a critical role in the educational machine and directly causes people not to want to learn. This phenomenon can then become an object of further “scientific” research that, unsurprisingly, proves it to be true because the system has been designed to make it true. I use the scare quotes around “scientific” because, though the methods used can be similar superficially to reductive methods used in natural sciences, what is investigated is a contingent invention that exists in a web of other contingent inventions, all of which are in a constant state of flux, changes to any one of which will likely change the phenomena being investigated. Unlike scientific investigation of the natural world, it is therefore extremely unlikely that generalizable knowledge will result from such practices. They can still be useful, at least for those building or using such a system for the moments that they remain stable, but (as we saw in the case of the demonization of computers) it is extremely unwise to extrapolate the results beyond the bounds of a specific design intervention. Such use of the trappings of a reductive scientific method in an inappropriate context is what Feynman (1997) scathingly refers to as “cargo-cult science.”
Technologies can profoundly constrain learning. If we create a room with no light, then we will be unlikely to have much success with a drawing appreciation class, though it is an interesting exercise in learning design to come up with ways to do so. I have faced a similar problem in adapting a course on computer game design for blind students: it can certainly be done, and the result is often better for everyone because it forces us to think of more diverse needs. But some constraints are truly deterministic. If we do not have technologies of communication, for instance, then we will have little success with social pedagogies that demand interaction at a distance, regardless of the potential value of guided didactic conversation (Holmberg et al., 2005) in partially simulating dialogue.
Technology Evolves by Assembly Rather Than Mutation
Technologies evolve through reproduction, variation, and competition for survival. They virtually never spring from an inventor’s mind fully formed, without predecessors, but make use of, assemble, and repurpose existing technologies. However, though they do evolve and it is possible to draw an evolutionary tree for all technologies, the evolution of technologies is significantly different from the evolution of natural species. Technologies evolve by assembly, not through mutations of genes (Arthur, 2009). Technologies evolve within a social and technological context, and they embed many beliefs and assumptions that spring not from the technologies themselves but from the world that they inhabit (Bijker et al., 1989). They are not neutral. Page (2011) observes several significant ways in which creative and evolutionary systems differ, including the following.
- In evolutionary systems, every variation has to be viable, or the mutation will die. Creative systems do not, at first, have to work at all. We can create prototypes, glorious failures, and not-quite-working technologies.
- In evolutionary systems, everything must follow from its immediate ancestors: unlike a technology, a creature that evolves today cannot arbitrarily incorporate features of one that existed millions of years ago or thousands of kilometres away.
- Creative systems can make large leaps, sometimes without intervening steps.
- Creative systems can define their own selection operators: success or failure can be measured in many different ways, not simply in terms of survival.
- Creative systems can be created in anticipation of the future: we might knowingly create, for example, an app for a device not yet on the market.
For all those differences between natural and technological evolution, the essential dynamics are similar and Darwinian in nature. It is possible to observe species and lineages for technologies that in many ways are similar to those that we might draw for natural species, albeit perhaps more like bacteria inasmuch as there is a huge amount of reuse of parts across “species” (e.g., nuts and bolts appear in many manufactured items). New technologies are invented as the adjacent possible expands around them. Each new invention opens up possibilities for further inventions and often creates new problems for new inventions to solve (Rosen, 2010), but it is firmly grounded in the assembly of technologies of which it is constituted and tied to what came before. For instance, the traditional university relies on buildings, books, cataloguing systems, pedagogies, lecture theatres, timetables, doors, enrolment systems, regulations, statutes, and countless other technical inventions that were necessary before it could achieve its present form.
Technological development and change are far from deterministic. Longo et al. (2012) describe the dynamic as one of enablement rather than entailment: technologies open up adjacent possibles and thus provide a normally large but constrained range of possible futures that might or might not come to pass.
Technological determinism is largely false for the simple reason that there can be no prestatable entailment in any complex emergent system (Longo et al., 2012). Although we can trace back, at least in principle, from any technology to identify its antecedent causes, at any point we cannot predict what will happen in the future because, not only are there vast numbers of adjacent possible empty niches for most technologies (especially those that we will recognize later as soft), but also those niches are mutually constituted and affected by all the other complex parts that surround and co-evolve with them, stretching indefinitely far in every direction.
The direction of technological evolution is far from arbitrary, however, as the near-simultaneous invention of so many technologies, from the lightbulb to the telephone, reveals. There is a momentum, as Nye (2006) explains, to the change that gives it direction. Technological momentum exists not just because of the dynamics of enablement but also because we often create boundaries that, without further invention, are difficult to cross. Once we have settled on, say, a particular standard for electrical voltage, anything that does not comply simply will not work. The case is similar in education systems. Radical reinvention is extremely rare, if it exists at all. For instance, distance education institutions, though fundamentally different in many ways that we will explore later, were built upon foundations laid by in-person forebears and inherited many structures and processes of their ancestors, from curriculums to convocations. Furthermore, they exist within a broader educational context—including funding bodies, credentialing norms, and hiring pools—that greatly limits the possible ways in which they might develop. Constraints can greatly influence behaviour, and there are strong patterns to this influence, to which we turn next.
The Large and Slow
The large and slow in any complex system influence the small and fast more than vice versa, a phenomenon that Brand (2018) describes as pace layering. This is a universal law of complex systems whether natural or artificial. A mountain will have greater influence on the trees that grow on it than vice versa. Beneath the trees, shrubs and animals will have their lives constrained by the shapes, shadows, and chemistry of the trees. Within the gut of a mouse that makes its home under the root of a tree can be thousands of species of bacteria unknown to science (Jones, 1999), whose entire world is shaped and constrained by the ecosystem provided by the mouse’s gut.
Pace layering is a strong feature of educational systems, in which buildings, classrooms, LMSs, legislation, regulations, curriculums, and a host of other relatively slow-moving elements influence pedagogies and student learning far more than vice versa. Although it makes no sense, for reasons that will become clearer as this book progresses, to put pedagogy first in any educational intervention, it is understandable that many educators loudly claim that we should, precisely because pedagogy is unlikely to take precedence naturally. Similarly, the centralization of resources, especially when organized hierarchically, tends to result in disproportionate levels of control at the top, which is why such systems have been popular in the past, but (without great care in construction) they can greatly reduce agility for those inhabiting the levels below. In an education system, which relies heavily on creative individuals free to experiment and try new ways of thinking, this can be harmful.
Unlike natural systems, human systems can be and sometimes are redesigned. It is not that the small and fast can make no big change to the large and slow, especially when they work together. It is merely that changes to the large by the small are much less likely to occur and will occur less often because they are more costly in almost every way, at least in the short term, than adapting to the large and slow. Without the stable foundation of the large and slow-moving, the small and fast cannot (individually) thrive.
Crowds and mobs obey a different dynamic but often can be treated as a single large individual, what Kauffman (2016) describes as a “Kantian whole” or what I and others have described as a “collective” (Dron, 2002; Dron & Anderson, 2009; Segaran, 2007). This is most obvious in eusocial animals, such as termites and ants, that exhibit collective intelligence through the phenomenon of stigmergy (work done by signs left in the environment, such as pheromones) as well as in mobbing or herding species, such as locusts and wildebeests (Bonabeau et al., 1999), but it is also a factor in human systems, including those involving learning technologies (Dron, 2004).
In general terms, the larger and slower parts of a system act as a brake on progress. However, a certain amount of stability is essential to maintaining system metastability and evolutionary growth, avoiding the traps of a Red Queen Regime, in which co-adaptation is continuous and unchecked, so that the system is always running to stay in the same place, and the Stalinist Regime, in which nothing ever changes (Kauffman, 1995). Neither extreme of system dynamics allows for development, evolution, and adaptation to occur.
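A crude simulation can give a feel for these two pathological regimes. In the sketch below (loosely inspired by Kauffman’s coupled fitness landscapes rather than a faithful reproduction of them; the fitness values are simply assigned at random), two agents hill-climb on random landscapes, and a coupling parameter controls how much of each agent’s fitness depends on the other’s current state. With no coupling, both agents soon freeze at a local optimum, a Stalinist regime; with full coupling, each agent’s improvements reshape the other’s landscape, and the churning never stops, a Red Queen regime.

```python
# A rough sketch, loosely inspired by Kauffman's coupled NK landscapes
# (not a faithful reproduction): two agents hill-climb on random fitness
# landscapes, where `coupling` bits of the partner's state also feed
# into each agent's fitness.
import random

N = 8  # bits per agent

def fitness(own, other, coupling, table):
    """Assign a random, memoized fitness to each (own, partner) state."""
    key = tuple(own) + tuple(other[:coupling])
    if key not in table:
        table[key] = random.random()
    return table[key]

def coadapt(coupling, steps=500):
    random.seed(42)
    a = [random.randint(0, 1) for _ in range(N)]
    b = [random.randint(0, 1) for _ in range(N)]
    tables = ({}, {})
    accepted = 0
    for _ in range(steps):
        for me, you, table in ((a, b, tables[0]), (b, a, tables[1])):
            i = random.randrange(N)
            before = fitness(me, you, coupling, table)
            me[i] ^= 1  # try flipping one bit
            if fitness(me, you, coupling, table) <= before:
                me[i] ^= 1  # no improvement: revert
            else:
                accepted += 1
    return accepted

print("no coupling (Stalinist): ", coadapt(0), "accepted changes")
print("full coupling (Red Queen):", coadapt(N), "accepted changes")
```

Somewhere between those two extremes lies the metastability that allows development, evolution, and adaptation to proceed.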
However, pace layering is a symptom, not a cause, of a further underlying structuring pattern. The dynamic that drives this effect is that things within a system that are less flexible and more unchanging (which I will describe later as harder) have a greater effect on things that can adapt and change faster (which I will describe later as softer) than vice versa. This is a matter not just of empirical observation but also of logical necessity. If fast-moving things affected slow-moving things more rapidly than vice versa, then slow-moving things would not be slow moving anymore. Things are slow or fast only in relation to something else: they are slow because something else has changed more rapidly. Similarly, the larger parts of a system exist either in a relationship of containment (e.g., a house contains its rooms, a mountain contains the trees that live on it) or, more generally, in a relationship of confinement (e.g., a rock that stands in the path of a herd of wildebeests or a hill that diverts a stream). If something contains another, then it cannot change more rapidly than its contents, and it must be larger. Similarly, if something confines another, then, relatively speaking and by definition, it is a greater influence on what it confines than the thing that it confines and again must be of sufficient size to have such an effect: if that relationship changes, then it no longer confines the smaller, faster-moving thing and thus is no longer the larger, slower-moving thing. The bounded are subordinate to the boundary.
Path Dependencies
The pattern of the inflexible disproportionately affecting the flexible can be seen as part of a still larger family of path dependencies. When we make changes to our environment, further changes are built on top of them, solidifying and embedding them in ever more solid strata, thus making those that came first less easy to change because of the disruptions that such changes will bring to things created in their context. This is the nature of assembly: what comes after builds upon what came before. In many cases, it results in a complex network of interdependent relationships that can lock us in to an arbitrary pattern that, if we were to start again, might be designed completely differently.
There are many classic examples of this in the development of technologies—the standardization of the QWERTY keyboard, the dominance of VHS tapes over Betamax, the widespread use of Microsoft Word, or the persistent use of lectures in education, for instance. These are not inherently large and slower-moving parts of the system—originally, they competed with similar technologies (most of which were only slightly less fit, some of which were objectively superior by some but seldom all measures) and were agile, but they became fixed parts of the landscape as investments of time, money, space, and expertise cemented their positions, and the costs of change began to exceed the benefits. It is much the same as the dynamic that drives natural evolution. Often what survives does so by chance, affected by sometimes random events that change the fitness landscape, from meteors to sea level changes to accidental isolation. But, once it has survived, it crowds out alternatives: it both adapts to and changes its environment so that, in a sense, its environment adapts to it. However, lucky breaks do happen.
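The lock-in dynamic is easy to simulate. The sketch below (a toy model in the spirit of Brian Arthur’s increasing-returns work, not a reproduction of any particular study, with invented numbers) pits two equally fit technologies against each other, where each adoption makes the next adoption of the same technology more likely thanks to accumulated content, expertise, and compatible tools. Because the returns are superlinear, an early run of luck hardens into a near-monopoly, and which technology wins typically differs from trial to trial.

```python
# A toy lock-in model in the spirit of Brian Arthur's increasing-returns
# work (not a reproduction of any particular study). Two equally fit
# technologies compete; the probability of each new adoption rises
# superlinearly with the installed base, so early chance hardens into
# near-monopoly.
import random

random.seed(7)

for trial in range(5):
    base = {"A": 1.0, "B": 1.0}
    for _ in range(5000):
        # Superlinear returns: weight each option by the square of its
        # installed base (network effects, shared expertise, content).
        wa, wb = base["A"] ** 2, base["B"] ** 2
        choice = "A" if random.random() < wa / (wa + wb) else "B"
        base[choice] += 1
    share_a = base["A"] / (base["A"] + base["B"])
    print(f"trial {trial}: A ends with {share_a:.0%} of the market")
```

Neither technology is better, yet one ends up owning nearly the whole market: the QWERTY and VHS stories in miniature.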
The spread of invasive species shows that, at least sometimes, things can change. The same is true of technologies, including education, in which, as Christensen et al. (2008) argue, disruptive innovation may occasionally supplant older, more established species. However, it is important to note that disruptive innovations seldom if ever compete head on with their forebears at first: they tend to sneak into niches that give them time and space to develop before spreading out to challenge the entrenched. The same thing drives speciation in natural evolution: boundaries, often geographic, lead to different selection pressures in different places, and speciation always occurs most rapidly in isolated domains (Calvin, 1997). It is not coincidental that much of Darwin’s evolutionary thinking evolved while he was visiting the Galapagos Islands, where different species of finch had evolved in the distinct ecological niches of the various islands. If radical disruption ever occurs in education systems, then it is unlikely to come from within the existing tightly interwoven system itself. If it happens at all, then most likely it will come from an area not currently connected in any meaningful way to the greater whole.
An archetypal example of the path-dependency lock-in effect is that the size of the rockets used on the space shuttle, arguably among the most advanced transportation technologies ever created, was predetermined by the width of Roman warhorses (Kelly, 2010, p. 180). The chain of events started in Roman times with chariots built to accommodate the width of two large warhorses, which meant that carts were built with wheels set to the same width so that they could follow the ruts left by the chariots, which led to roads built in Britain to accommodate Roman carts, which led to tramways designed to accommodate horse-drawn carriages built for those roads, thence to railway tracks in England, thus to railway tracks in the United States (because workers imported from the United Kingdom used familiar tools and jigs), onward to the space shuttle engines built in Utah that had to be transported by rail to Florida, going through tunnels that could not accommodate more than, as Kelly quotes an anonymous wag as saying, the width of two horses’ arses. In this, as in any evolving system, there were countless possible ways in which things might have evolved differently. Each decision made is a branch in a track, leaving other branches untaken. Once decisions are made, they tend to be baked in and accrete further props as they continue to persist. People make investments of time, money, legislation, and passion that would be lost if things were to change. Even when change would be easy in a physical sense, attachment to things with which we are familiar and into which we have invested time and emotional energy makes us inclined to keep them the way they are, notwithstanding a counterbalancing love of novelty that helps to drive change. Many of our educational institutions have a history stretching back at least a millennium, and in some cases millennia, so they have inherited many such dependencies.
Path Dependencies in Education Systems
The effects of path dependencies on educational technologies are immense. To give a simple example, once an educational institution decides on an LMS, it usually results in a great deal of content being created, a great deal of training and on-the-job learning of its quirks and capabilities, and a large set of expectations in a student body of how learning content and process will be delivered. This investment is almost always orders of magnitude greater than the cost of the LMS in the first place. The costs of change are even greater, which is one of the key reasons that a system many have argued to be substandard, such as Blackboard (which entered the market early on with what appears in retrospect to have been a half-formed product that, until at least recently, was a patchwork of code bases that barely fit together), could retain both its high price and its market share for many years, despite widespread dislike of the product. Customers are forced to follow upgrade paths if they wish for secure, competitive systems that meet needs created by the adjacent possible that earlier versions opened up, an effect exacerbated by the low motivation of the company to make its data portable. Moving to a different system altogether takes both great bravery and great resources, and few are willing to take that step lightly. A substandard system that is a known devil is usually better than none at all.
Blackboard’s lock-in is not complete, as evidenced by its steady loss of customers over the past decade. One major reason for its decline is what Cory Doctorow (2019) describes as “adversarial interoperability.” It refers to situations in which competitors (against the wishes of market leaders) attempt to take advantage of a thriving marketplace by making their own, independent tools compatible with those of their competitors, classic examples being support for proprietary and unpublished Microsoft Word formats in multiple alternative word processors and the exponential growth of IBM PC-compatible microcomputers in the 1980s using BIOSs (basic input/output systems that provide interfaces between an operating system and hardware) that replicated the functionality of IBM’s own without using any of IBM’s code. Although its own database structure is famously opaque, Blackboard, under pressure for many years from the market as well as internal needs, has supported a certain amount of data export, albeit with the loss of some of the structure and data that might matter. Competitors have taken advantage of this and made migration a great deal easier than it might otherwise be by using the limited export tools that Blackboard provides and reverse-engineering how it stores its data. The path dependency remains, but it can now be built upon by further toolsets that increase the adjacent possible.
The development of the LMS as a class of technologies in the first place follows a similar dynamic of path dependency. Those building online learning technologies in the 1990s originally (and I speak from direct experience) based their designs on the functions that they observed in existing educational institutions. To be effective tools for learning, they did not have to embody courses, classes, teaching roles, presentations, discussion forums, and assessment tools, but they did, because that was how the environment of education systems into which they needed to slot had evolved. They did not have a blank slate: they had to be assembled to fit the larger, slower-moving systems of which they were to become parts, or no one (least of all their creators) would have a use for them.
The same pattern of path dependencies can be seen all the way down the line in education systems: infrastructure such as libraries and classrooms defines limits on how teaching might occur, and their use is strongly encouraged by those who have made the investment. Accreditation systems standardized and embedded across society are difficult to change, leading to great disruption and resistance from many sides when even minor changes occur. Chains of dependencies run through these deeply embedded systems, each dependency embedding it further.
The decision to use a new technology in a novel situation is a relatively easy one, but the decision to replace an entrenched one is far harder to make and can have far greater repercussions across a system that depends on it. The larger the boundaries that the technology embraces, the more profound its resilience to change: smaller pieces are more easily replaced. This is unfortunate in the case of large, centralized toolsets widely used by many people—Facebook, Blackboard, or Microsoft Word, for instance—because the costs of replacement can be much higher than the costs of suffering their multiple dire consequences. The greater the number of people who use a technology, the more embedded it tends to become and the more assemblies it becomes a part of. To make things worse, without great care in construction, large centralized toolsets can be extremely constraining and, to add further constraint, need to be constructed to suit average or majority needs, which can stifle diversity. As a result, monolithic technologies play a strong (and seldom carefully considered) role in establishing norms and practices and in setting boundaries where none needs to exist.
The survival of a substandard system is not entirely independent of its utility. A truly useless or dangerous technology typically will fail to survive eventually, though it might persist for a long time, propped up by counter-technologies that embed it further. However, were we to sit down and start afresh, we would often construct things differently and much better. Unfortunately, what exists in national and sometimes international education systems is so deeply embedded with other stuff that substantial change is virtually impossible from within. The massive network of embedded connections, themselves often linking large and slow-moving networks such as education systems and legislatures, emergently becomes the most determinant large and slow-moving feature. We will return to this theme in Chapter 10.
It was a rational choice for the makers of early LMSs to situate their technologies within traditional education systems: had they gone back to basics and reconsidered how learning might be enabled using online technologies, it is highly unlikely that there would have been a market for them. That particular possible was far too far away. However, just because education systems are deeply embedded does not mean that change cannot occur. As Christensen et al. (2008) observe, disruptive innovations nearly always come from outside the field, where they can develop without the hindrances and interdependencies of large, established systems, exactly as the finches of the Galapagos Islands developed independently of one another.
Path dependencies, as the example of the space shuttle shows, can stretch back a long way. Existing higher education systems derive directly from the exigencies emerging from the decisions by cities such as Paris and Bologna in medieval Europe to attract rich students by forming centralized places of learning where scholars, books, and students could come together (Norton, 1909). Just as the width of two horses’ arses need not have led to the dimensions of rocket motors in space shuttles, things could have been different in higher education. Indeed, had we followed the Bolognese model of universities, in which students determined who taught, and what and how it was taught, rather than the Parisian model, in which such decisions were made by faculty, things would be very different today. Better? Not in every way and certainly worse in some. But change is possible, it appears to be happening, and it is the change that matters more than what results from it. This is because what it is likely to lead to is not the replacement of what we already have (unless it really is significantly better in every way, in which case so much the better) but the coexistence and co-evolution of different ways of doing things. Old technologies seldom if ever die, so the demise of institutional education probably will never happen, or at least not in the space of a few generations, and whatever comes next will add to, rather than fully replace, what currently exists. This means an increase in the adjacent possible, which, like all technological change, builds new adjacent possibles as each avenue is explored, thus moving us forward into a richer future filled with new as well as old ways of learning. On balance, this is a good thing.
It is worth noting finally that large, scale-free networks of the sort that link education and other core social systems, though highly tolerant of perturbation, sometimes (depending on their form) can fail catastrophically (Watts, 2003) and, when faced with compelling competing networks that can emerge outside the education system, can be depleted of so many nodes that they cannot thrive and must transform. This change might be in progress right now as networked technologies combine and mutate in a breathtaking crescendo driven by the exponentially emerging adjacent possibles of new technologies and ways of thinking. From MOOCs (massive open online courses) to Wikipedia, from the Khan Academy to Sugata Mitra’s Hole in the Wall experiments, from fast-track bootcamps to open mentoring systems, different models and conceptions of teaching are emerging as a direct consequence of new technologies outside or on the fringes of academia. Similarly, free universities have started to emerge both online (e.g., the University of the People) and in person (e.g., the Free University of Brighton), separate from their institutional peers.
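Watts’s point about the simultaneous robustness and fragility of such networks can be illustrated computationally. The sketch below (assuming the third-party networkx library; the network size and the fraction removed are arbitrary) builds a Barabási-Albert network, a standard hub-dominated stand-in for scale-free structure, and compares the size of its giant connected component after random node failures with its size after removing the same number of its best-connected hubs.

```python
# A minimal sketch (assuming the third-party networkx library) of the
# robust-yet-fragile behaviour of scale-free networks: random failures
# barely dent the giant connected component, while removing the same
# number of the best-connected hubs does far more damage.
import random
import networkx as nx

def giant_component_size(graph):
    return max(len(c) for c in nx.connected_components(graph))

random.seed(0)
g = nx.barabasi_albert_graph(n=2000, m=2, seed=0)  # hub-dominated network
k = int(0.05 * len(g))  # remove 5% of the nodes

random_failures = g.copy()
random_failures.remove_nodes_from(random.sample(list(g.nodes), k))

targeted_attack = g.copy()
hubs = sorted(g.degree, key=lambda node_deg: node_deg[1], reverse=True)
targeted_attack.remove_nodes_from(node for node, _ in hubs[:k])

print("original giant component: ", giant_component_size(g))
print("after 5% random failures: ", giant_component_size(random_failures))
print("after removing top 5% hubs:", giant_component_size(targeted_attack))
```

Random loss leaves the giant component largely intact, while the loss of a few hubs does disproportionate damage, which is why the depletion of key nodes by competing networks can force transformation.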
Meanwhile, alternative forms of credentialing are creeping in, from OpenBadges to LinkedIn endorsements. Tools and standards to record and manage the process independently of institutions are beginning to hit the mainstream, notably TinCan (also known as the Experience API or xAPI) and e-portfolios. Apart from MOOCs, these are not yet seen as competitors to traditional education. Indeed, in many cases, they are being incorporated so as to bolster the old ways. But they are beginning to fill in some of the gaps that might lead to a new adjacent possible. Other technology standards, from newsfeeds to LTI (learning tools interoperability), allow different online systems to interoperate—for one technology to become part of the operation of another—and thus to extend the adjacent possibles of both. Again this opens up the potential for change, though these interdependencies create yet more path dependencies and hardness (inflexibility and resistance to change) in the overall system. Technologies are not determinate and not predictable in advance, so it is hard to know whether these or any other initiatives will tip the balance, but the ever-burgeoning adjacent possible is leading inevitably to combinations and assemblies that likely will provide, at some point, serious competition for the incumbent institutions that comprise most of what we recognize as education systems today. If or when they do, however, they will incorporate into their assembly much of what came before, and likely they will create adjacent possibles that will feed back into the technologies of the institutional systems that they (partially) replace. The world will change, but much of it will remain familiar because of the nature of technologies as assemblies.
1 In fact, though attributed to McLuhan and no doubt used by him as cited, this phrase is attributable to his friend, John Culkin (Culkin, 1967).