“8. Technique, Expertise, and Literacy” in “How Education Works”
8 | Technique, Expertise, and Literacy
It’s not just learning that’s important. It’s learning what to do with what you learn and learning why you learn things that matters.
—Norton Juster (1962, p. 229)
Beyond broad pedagogical principles and theories, a co-participation perspective allows us to think differently about how learning occurs at individual and social levels and how teaching can affect it. It allows us to see that one part of the educational process is concerned with constructing “machines” in our (individual and social) minds. Hard technologies such as rules of arithmetic or grammar are clearly analogous to their physical counterparts, as are the methods and procedures that we must learn in order to participate in other hard technologies. It also allows us to see that soft techniques—how we use such hard knowledge—are equally if not more significantly developed by educational processes and are the main reasons that we create those hard machines in our minds in the first place. We do not just learn to be like machines in order to behave as cogs within them. We normally learn to be like machines so that we can use them to do more, do better, do differently.
If a significant part of learning is concerned with the creation of technologies in our minds, then it seems reasonable to suppose that those technologies will behave in the ways that, as we have seen, all technologies behave. In this chapter, I discuss some of the ways that this plays out in practice.
Hard Skills, Soft Skills, and Technique
The hard skills required to enact even the softest technologies demand what we commonly refer to as “technique,” by which I mean how something—a method—is done by a human being. Machines do not have techniques, but people (and perhaps some other animals) do. A technique is a way of doing something: holding a pencil, moving a bow over strings, giving constructive feedback, and so on. Viewed at a fine-grained level, techniques are methods, but they are more than that. They tend to be idiosyncratic, seldom if ever recur in the same way twice, and evolve over time as we become more proficient.
People can develop and hone their technique, and, if a technology is even moderately soft, they can improve it indefinitely. For instance, if our intention is to write a sentence with a pencil, then we need hard motor skills to hold the pencil and to control it on the paper; we need to know about spelling, syntax, and semantics; we need to know the alphabet that we are using; and so on. The letters that we write must be sufficiently similar to their numerous archetypes to be legible. There are hard techniques of handwriting that must be performed correctly. However, there are only fairly soft and diverse rules about what “correct” means, and the chances of writing even a single letter in the same way from one word to the next are slim: handwriting is highly idiosyncratic, as are styles of writing. It is therefore possible to express much more than the words that we write in how we write them: handwriting can convey mood or personality as much as it can represent words. Given that the majority of our uses of writing have a softer purpose, we also need to be able to use the words that we write creatively, to express our thoughts, beliefs, and arguments, each of which demands a host of both softer and harder skills and techniques: uses of metaphor, knowledge of different ways of structuring a narrative, and so on, not to mention knowledge of phenomena such as what is acceptable to an audience, what is expected, and which effects we want to achieve. The many phenomena orchestrated range from hard rules that should not be broken (e.g., application of APA citation rules) to much softer ones that can be broken with impunity and used with virtually limitless creativity, such as
where
to
break
a line of text.
It can take time to learn how to use both soft and hard techniques, but the human role differs in each. Soft techniques can be complex. Skill is needed to use them well. Hard techniques can be complicated. Skill might be needed to enact them correctly, but, no matter how complicated they can become, there is still one and only one way to enact them.
Soft techniques take form, in part and sometimes in whole, in the human mind (however we conceive it, extended into the world or the body). They become softer the more we use them because our minds change by using them, and, recursively, we change the enactment of the technologies as a result. As we learn more, more adjacent possibles emerge, paths that were not there before, because each new capability that we develop offers opportunities for further capabilities to emerge. For the most part, this is an additive process: what we learn does not replace what came before it, so it increases our potential to do more. The technology itself plays a part in our cognition, often leveraging our thinking to new and different levels. In effect, the technique becomes softer as it and we evolve. This is mainly why we practise: to become more skillful and more capable of a wider variety of things. Although we might get pleasure from the practice itself, its purpose is usually to perfect (or, more precisely, to aim for perfection in) the hard techniques required to make the technologies soft: to become better practitioners. From playing a violin to drawing, from writing to playing chess, each new technique that we learn expands the adjacent possible. We can not only do what we could do before but also do new things that, in turn, make further things doable. It is a powerful learning process or, more accurately, part of an assembly that leads to learning.
Techniques assembled in this way are aggregations that are adapted and refined as we learn more. The more we know, the greater the adjacent possibles, the softer the technologies become. Just as physical machines and devices evolve through assembly, so too techniques join and merge to create something new, with different parts often enriching others. For instance, we might apply tactics from one game in another or approaches to learning one musical instrument to a different one. Pedagogical methods, in particular, tend to be highly assemblable with others, and thus they are highly transferable to different contexts. We can learn ways to learn. Sometimes we invent new techniques. Technologies therefore learn, and, because we are part of them, as well as affected by them, we learn too. Echoing Clark’s (2008) notion of the extended mind, learning is in the system, of which we are part (including our bodies), not just in our heads.
Soft techniques are an embodiment of Culkin’s (1967, p. 70) dictum that we shape our tools, and our tools shape us. Technology emerges through complex and ever-evolving interactions between us and our tools (Orlikowski, 1992). We build technologies, and not only do they help to shape us but they are, in part, made of us.
Although a skill such as carpentry becomes softer with increasing proficiency, this does not make carpentry a particularly hard technology for the novice. It is just one that offers fewer adjacent possibles than it does for the veteran. Unskilled users of a soft technology learn from their errors, experiment and adapt, in conversation with the technology, not just as users of it. They can also learn from others, directly or indirectly, or they can gain inspiration from models and designs and strive to improve. To an unskilled carpenter assembling a piece of IKEA furniture, it is a hard technology. The pedagogies that teach us how to assemble it are also hard, in the form of precise numbered visual instructions, all of which must be obeyed. A great deal of thought and pedagogical design normally go into making this as clear and foolproof as possible, so that we can play the role of a production line or machine in bringing it to fruition. However, because we enact it, we can change it. Indeed, there is a whole movement of IKEA hacking, as can be found at http://www.ikeahackers.net. So, though the technology of IKEA superficially might resemble a poorer version of an assembly line, it contains within it a deferred (Patel, 2003) but perhaps limitless potential that an assembly line cannot achieve.
The hardest technologies do not have this capacity. We can learn to use them correctly, but, once they are learned, there is no way to use them more correctly. For example, a machine to manufacture electronic components might require a great deal of learning before it can be operated effectively, but once the skill is learned it will never do more than manufacture the same electronic components in the same way. Perhaps aspects or components of the skill are of more generic use, but the skill itself is perfectible. Skills needed to be part of hard technologies can be perfected (in principle and sometimes in practice), whereas skills needed to operate soft technologies seldom if ever are. We can aim for perfection and achieve high levels of proficiency in the hard skills that form part of their assembly, but not in the assembly itself. There is no point, say, at which one can claim to know everything there is to know about using a guitar, language, computer, or even screwdriver. The same is true of the use of any pedagogical method. Although the hard techniques that might be part of its assembly—for instance, clear handwriting or diction or accurate marking of tests—can be measurably correct, there is no perfect standard for setting a challenging assignment or encouraging recall, no point at which it can be said that it can’t possibly be improved.
Honouring Error
“Perfect” hard technique is not always desirable and, in many cases, might mask what we value most in soft technologies; worse, as in the case of prescriptive copy editing, the rigid application of a technique might render an author’s sentences unreadable, meaningless, or ambiguous. In artworks, for instance, we typically value the differences that give particular artists their style. Although it could be argued that this might stem from their invention of techniques that differ from the norm, it is at least as often in the gaps between intention and execution that the most interesting things can be found, because they open up new potential, different opportunities, new adjacent possibles. Brian Eno, in collaboration with painter Peter Schmidt, once created a deck of cards to be used as inspiration in the creative process. Among its many pieces of good advice, one card summarizes this aspect of technique well: it simply read “honor thy error as a hidden intention.” For example, as a presenter (despite a huge amount of practice and a love of public speaking), I tend to stutter, fumble words, “umm” and “ahh” constantly, forget words, forget whole concepts, repeat myself, and so on. I sometimes make weird gesticulations and expressions, jump around a lot, and generally distract my audience in ways that, were I to design the presentation in advance, I might be circumspect about trying. Occasionally—and unfortunately beyond my conscious control to repeat—I get into a flow and reel off long, dense, and at least superficially erudite phrases at great speed with barely a pause. This is probably just as bad as my fumbling, if not worse. I have a tendency to diverge. The gap between intention and execution is usually vast. However, and notwithstanding the many times that I have failed to interest or inspire my audience (these are soft technologies that demand constant invention and can easily fail), none of that really matters.
My numerous flaws might be weak components when viewed individually, but in assembly they sometimes work well, achieving a wide range of desirable effects, from gaining audience sympathy to allowing time to absorb a message. All are deeply affected by my perception of my audience, the richly interwoven ways that we interact with one another, and above all the narrative flow in which each sentence, gesticulation, or movement builds upon and incorporates those that came before it. I am usually aware of and reactive to my errors, which then become part of the tapestry that I am weaving (and often components that I might use later in a different assembly). At least some people like it at least some of the time. I have watched countless speakers who do this better than I, whose “flawed” delivery actually gains effectiveness from its imperfections. The same essential dynamic is true of almost all music, dance, and visual art, not to mention philosophy, critical analysis, and the writing of books. We shape ourselves, as well as what we create, in the act of creation itself, often without prior intention. As Richard Powers (2006) said in an interview, “I write the way you might arrange flowers. Not every try works, but each one launches another. Every constraint, even dullness, frees up new design.”
Conversely, most of us have had to sit through somebody reading from notes (worse still from slides), perhaps with something close to “perfect” diction, intonation, and phrasing, while we have struggled to stay awake. Perfect hard speaking technique does not necessarily equal perfect teaching technique, because it eliminates the human, the possibility of being affected by feedback, the conversation with the unfolding process. It hardens something that (arguably) should be soft. On the whole, I would rather read what someone has written or at least listen to a recording that I can pause, rewind, and fast-forward. Some public speakers can sustain both exceptional hard technique and exceptional expression, and their oratory seems to be almost superhuman, yet they remain engaged with and responsive to their audiences, and with their own narrative flows, throughout. Indeed, most of us, with enough practice, and like actors, can learn and deliver a speech with “perfect” technique, but some fantastic speakers can do this on the fly, including in their responses to questions from audiences. They might well be employing some preplanned intentional hard techniques (e.g., use of pauses, dramatic inflections, narrative devices, and so on) that can help a lot, but for the most part it is their on-the-fly soft technique (unique to them and every situation), which includes engagement with their audiences and with what they have already presented, that impresses. They have at their disposal a toolbox far bigger than mine, at least, from which to assemble new and impressive works. Although few of us will ever become as skilled as Winston Churchill, Oscar Wilde, or Groucho Marx in this regard, we can all learn to improve our technique through methodical and reflective practice. Indeed, there are techniques for learning techniques: we can become better at learning.
Repeated practice, when we reflect on it and observe its effects, at least enables us to become better at dealing with and responding intelligently to our flaws. This is how we develop a distinctive style, and even flawed techniques can achieve effective or even brilliant results.
Great hard technique can make a great artist greater, but relatively poor hard technique does not preclude the potential for great artistry. Indeed, I am fond of some punk music that makes a positive virtue out of poor hard technique—it is part of the raw energy and a major contributor to the emotional impact of the genre—and there is much outsider art that displays weak technical skills but great expression. Technique and creativity are not causally related, and, though a certain amount of technique is usually essential to create anything at all, too much focus on perfecting a technique can limit creativity. It is not the fact of it but the focus that is problematic. It is normally worthwhile to practise, reflect, and constantly seek to improve one’s hard technique but not at the expense of sucking the life out of the finished performance or product. Excessive focus on hard technique, in effect, can strip away some or all of the softness inherent in the activity by imposing constraints and boundaries that do not need to be imposed. Few of us are sufficiently talented or creative to pass through such boundaries, though when such a passage occurs we tend to notice it. The genius of, say, Glenn Gould (whose timing while playing Bach combined perfect technique with superb expression) demonstrates that sterility is far from a necessary consequence of perfection, but for most of us an excessive focus on becoming more technically proficient puts us at risk of forgetting the things that we value most in the technology. This is nowhere more true than in the act of teaching, in which personal technique cannot be developed fully in isolation but must conform and adapt to the learning needs of (potentially many) others. We must observe, be aware of, and reflect on the effects of our actions if we are to be successful.
Machines Pretending to Be Human
Technique can be emulated by machines, sometimes convincingly. Even when the emulation is unconvincing, it can be useful. As a musician, I abhor drum machines because they eliminate (or, worse, emulate) the imperfections in technique that express the nuanced humanity of the performer. A perfect beat (to me, though you might think differently) is a rigid, unadaptive, soulless taskmaster, robotic and devoid of the life that makes music meaningful to me, and emulated imperfections seem to express emotions that no sane human could feel. Likewise for voice autotuners. Nonetheless, thanks to the power of assembly, they can be used to create great art. Great musicians have used the phenomena that I normally loathe to create magic.
For those who are time poor and/or cannot develop such skills themselves, it can be useful to automate hard techniques that usually would be enacted by humans. It can provide a means to produce something more pleasing to the ear or eye than what they could produce otherwise or to provide scaffolding for the learning process, and, if that is the intended use, then this is an effective way to achieve it. Despite my abhorrence of drum machines, occasionally I use one myself while practising an instrument. It is good enough to allow me to develop some (but far from all) of the skills needed when playing with a human drummer, and it is far better than a metronome for keeping time. Significantly, even when such technologies are used to produce a finished product, they seldom completely remove the need for skill. Some creative decisions must still be made, even if they largely involve picking an item from a list. It is about as creative as sharing a meme that someone else has made, but there is a place for that. In fact, it is much like a tried and tested approach to course development that, at my university, we describe (scathingly) as a “textbook wraparound,” in which course authors add little to a textbook other than instructions to read particular chapters and to perform particular exercises. It might not be pretty, it is certainly not great art, it is hardly inspiring, and we try to discourage the approach as much as we can, but (at least when a good textbook that embeds effective pedagogies is available) it has proven to be a sufficient learning technology for thousands of students over the past few decades. We are co-participants in the technologies of education, but this does not necessarily require us to be creators and leaders of everything that they involve. In fact, we cannot and should not even attempt to be so, because it is a waste of the teaching gestalt, and none of us has enough time to achieve expertise in all things. 
There is a great deal to be said, for example, for the use of open educational resources (OERs), especially in subjectivist or complexivist approaches, because they can fill gaps for different learners in different and often better ways than we could manage for ourselves. In the process, we often learn to be better teachers, because we see other ways of teaching that we might not have imagined.
It is also possible to convincingly simulate a human teacher using software and hardware, at least when the scope is sufficiently limited. Goel and Polepeddi (2018) have successfully fooled many students into thinking that they were being helped by human assistants that were actually chatbots built using IBM’s Watson AI engine. The machine—under the pseudonym Jill Watson—answered only those questions that it could, with some assurance, answer. Humans answered the rest, and their answers were in turn used to improve the machine’s training. Jill Watson did not depart from the confines of a limited data set of problems with well-defined answers within a particular course, almost all of which related to course procedures and rules, not the subject of study. Despite appearances of softness, this was a hard technology used in the service of the hard technology aspects of the teaching process—assignment deadlines and formats, schedule issues, and so on. At its most refined, the technology was able to field about 60% of all questions (though work continues on improving this percentage, and I have heard from Goel himself that it can now effectively answer 90% of the questions). It is unclear to what extent these answers furthered student learning, though no doubt there were benefits in relieving human tutors of some of the need to repeat mechanical answers to questions that had nothing to do with the subject being learned. However, something important was lost. When humans interact with other humans, there is at least a chance that they might understand one another’s contexts, motivations, needs, fears, and hopes, whereas chatbots, including those based on large language models like GPT or LaMDA, understand nothing of them.
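The workflow just described (answer only when confidence passes a threshold, defer everything else to a human, and fold the human answers back into training) can be sketched in a few lines. This is a minimal illustration built on my own assumptions: the class name, the 0.9 threshold, and the exact-match stand-in for a real confidence score are hypothetical, not details of Jill Watson itself.

```python
# Sketch of a confidence-gated Q&A assistant with human fallback.
# All names and the 0.9 threshold are illustrative assumptions.

class GatedAssistant:
    def __init__(self, threshold=0.9):
        self.threshold = threshold
        # Known question -> answer pairs (stands in for a trained model).
        self.knowledge = {}

    def train(self, question, answer):
        self.knowledge[question.lower()] = answer

    def confidence(self, question):
        # Toy stand-in for a real model's confidence score:
        # 1.0 for an exact match, 0.0 otherwise.
        return 1.0 if question.lower() in self.knowledge else 0.0

    def answer(self, question, ask_human):
        """Answer only when confident; otherwise defer to a human
        and learn from the human's answer."""
        if self.confidence(question) >= self.threshold:
            return self.knowledge[question.lower()], "machine"
        human_answer = ask_human(question)
        self.train(question, human_answer)  # human answers improve the machine
        return human_answer, "human"

bot = GatedAssistant()
bot.train("When is assignment 1 due?", "Friday at midnight.")

# A known procedural question is fielded by the machine.
reply, source = bot.answer("When is assignment 1 due?", ask_human=lambda q: "n/a")
print(source)  # machine

# An unknown question is routed to a human, whose answer is retained.
reply, source = bot.answer("Can I submit late?", ask_human=lambda q: "Yes, with a penalty.")
print(source)  # human

# The deferred question is now part of the machine's knowledge.
reply, source = bot.answer("Can I submit late?", ask_human=lambda q: "n/a")
print(source)  # machine
```

In a real system the confidence score would come from a trained model rather than an exact-match lookup, but the routing logic, answering only above a confidence threshold and learning from deferred questions, is what kept the chatbot within its safe, hard bounds.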
Education is about learning to be human in a human world, with all its complexity and intertwingularity (Nelson, 1974), so there are firm limits on how far this kind of technology should be taken. Even in this limited context, Jill Watson failed in some disturbing ways to answer questions that any human would understand. For instance, though it recognized a question about procedures from a male student who was about to become a father, it failed to recognize a similar question from a female student who was pregnant, thanks to the predominantly male demographics of the computer science course from which training data were taken (Eicher et al., 2018). As Shulman and Wilson (2004, p. 504) observe of classroom teaching, in words that could be applied to the whole educational endeavour, it is “perhaps the most complex, most challenging, and most demanding, subtle, nuanced, and frightening activity that our species has ever invented.” This is undoubtedly a gross exaggeration, but, notwithstanding the remarkable achievements of large language models like ChatGPT or LaMDA in simulating human behaviour, the kind of artificial general intelligence (AGI) that might cope with its complexity is at best a long way off, and there is a good chance that it might never occur at all (Goertzel, 2014). If it ever comes close to happening—when machines themselves truly can be soft partners in harder technologies—then I might need to rewrite this book. I doubt that I will.
Machines may increasingly easily fool us, for sure, but the gap between current technologies and a machine that understands what it is to be a human, living in a society of other humans, is hardly any less vast than it was 70 or more years ago. The use of generative AI is and remains that of the humans who create or deploy it, not of the machine. Machines are not creators of technologies but instances of them. Though sometimes closely resembling human users of soft techniques in what they produce, they are not fillers of gaps so much as generators of them. Their participant roles are as hard parts of our own soft assemblies. This is probably a good thing, because there is something deeply distasteful about a process through which we learn to be human that is managed by a machine. This is not to dismiss or diminish the enormous changes wrought by generative AIs. By mixing and remixing vast swaths of our own creations, they vastly expand the adjacent possible in unforeseeable, excitingly disruptive ways. But, though their seeming intelligence derives from the works of countless humans, they are not and cannot be human, or anything like it.
Even and perhaps especially when such a machine represents the “best” of us—when, like recent large language models, it may appear to be tireless, supportive, friendly, or even compassionate—its very lack of foibles makes it a poor role model. This is to say nothing of the many concerns about whose idea of “best” is being imposed: whether it ossifies systemic biases, encodes some individual’s concept of what is best, or blandly averages out what it learns from its vast dataset, like a mediocre filter bubble the size of the internet.
There is a creepy dystopian aspect to accelerating trends in the use of generative AI to mimic human behaviours. This may be blatant, such as in the way that Jill Watson was designed to incorporate a random (but never long) delay in its answers in order to appear “more human,” or in “companion” AIs such as Replika, which is described by its makers as “the AI companion who cares” (https://replika.com/). However, mimicry of humans underlies most uses of such tools for everything from cheating in colleges to generating books or lesson plans. When expectations of being human are learned from a (hard) machine, and when emotional attachment and belongingness depend on something not quite human, we are at the start of a slippery slope that will not end well for any of us. Such machines currently learn by ingesting vast amounts of data from human interactions and creations: their seeming humanity derives directly from average behaviours of actual humans and, for the most part, the outputs are therefore very average: good, but not great. Before long, a significant number of those interactions and creations feeding the machines will (if trends continue) be created by such machines, so subsequent generations of machines will learn from them, and we will learn from them, in a slow cycle of decay or stagnation, with all the creative softness and humanity taken out of it. But, even if it ends less dystopically, the underlying values that it represents remain deeply problematic. To teach using an artificial human is to act on values that treat education as nothing more than a mechanical process of learning facts, hard techniques, and cognitive tools. Such elements are indeed parts of an educational process, but they are not the reason for it. They are the means, not the ends.
Humans Pretending to Be Machines
Those who practise a musical instrument in order to play a particular piece “perfectly,” or who copy famous artworks as precisely as possible, appear to be replacing human creativity with harder processes. If that is their sole intention, then indeed they are enacting a hard technology, and in terms of performance it probably would be better automated, for instance using a pianola or similar device. However, that is seldom the use to which the orchestration of phenomena is applied. Sometimes we might simply find joy in overcoming a challenge. In this case, the purpose of achieving perfection is personal satisfaction, not replication of a perfect method per se: we orchestrate the orchestrations that we enact in order to please ourselves or to impress others. It is important to remember that there can be value in acting like or as part of a machine. Mastery of a human-instantiated technology, whether soft or hard, can be very supportive of intrinsic motivation, whether or not it leads to further capabilities (Deci & Moller, 2005; Ryan & Deci, 2017). Conversely, sometimes we practise to meet the demands of someone else, such as a music teacher or examiner. This can be very antagonistic to intrinsic motivation (Ryan & Deci, 2017).
More often than not, though, we practise a piece of music or copy an artwork in order to become proficient in our own right so that we can become more effective creators. If we need to participate in a hard technology in order to enact a soft technology, then it matters that our hard techniques are well honed. Our hardest human-enacted technologies are nearly all prerequisites for assembling softer technologies: we need to become parts of a hard technology in order to make it a soft one. The purpose of repeating a musical scale until something like perfection is attained is not normally to reproduce perfectly a musical scale but more quickly to gain the ability to play more complex pieces and often to be able to be more creative. Practice is rarely an end in itself but a pedagogy intended to change us in positive ways.
We can see this in even sharper relief in common join-the-dots pictures used to teach children a range of skills, from manual dexterity to visualization. These are puzzles with a purpose, which is not the production of a picture but the development of mental and motor skills. This relates significantly to the nature and role of technologies in learning and especially to technological literacies, to which we will soon turn. There are significant differences between how and why we acquire hard skills and how and why we acquire softer ones, and the kinds of pedagogy that work for one might be inappropriate for the other. There is a large difference between this and a dominative or prescriptive technology demanding that we play a particular role, inasmuch as (unless coerced by teachers) we choose to do so. We are intentionally (and perhaps even creatively) building the cognitive tools that we will later assemble into something else, and, as with most technologies, there are better and worse ways to build them.
It is also important to be aware of more than the obvious façade of a technology, because many technologies do more than what it says on the tin. Take handwriting, for instance. Viewed as a hard technology, a handwritten letter is no more than a vehicle for conveying words from one person to another. In many ways, such a letter lacks the efficiency, speed, and clarity of an email (especially if your handwriting is as poor as mine), so it makes more sense to use email or a typewritten letter for many purposes. However, a handwritten letter’s meaning extends far beyond the mere communication of words between one person and another. In part, this is because of the non-verbal things that handwriting communicates, especially in terms of mood. The tear-stained email has yet to make the mainstream, and it is difficult to see whether something has been typed passionately. It might be partly that we write differently (though not often better) by hand than by computer (Bangert-Drowns, 1993). However, the act of handwriting itself—the physicality of it, the layers of meaning stretching back for millennia, the gifting of an object, and so on—creates something much more than a medium for transferring words.
The simple fact that a piece of paper has been handled by another person lends it a different meaning: this is why even cheap and mass-produced artifacts formerly owned by famous people or related to auspicious events command high prices in auctions. More than that, other information can be imparted. Seely Brown and Duguid (2000) describe a researcher who, investigating an archive of letters from the 19th century, sniffed each letter that he handled. When asked why, he replied that he was seeking the smell of vinegar, a widely used means of disinfecting letters sent from cholera-afflicted areas. Knowing these circumstances, he was able to read between the lines of otherwise cheerful letters and to extract layers of meaning that otherwise would have been hidden. When we talk of the orchestration of phenomena to particular uses, we must always be alert to the possibility that those uses can extend far beyond their most obvious utilitarian functions, and the phenomena can involve far more than what we focus on most easily.
Failure to recognize hidden utility lies behind many problems with hardened technologies, especially in the field of education. For instance, when early e-learning adopters wittingly or unwittingly replaced lectures with web-based resources, they neglected to observe the value of shared schedules for sustaining motivation and engagement; of meeting others outside a lecture hall and learning with them (often serendipitously) simply as a consequence of being there; of myriad small acts of communication (not always with the lecturer) that occur in even the driest of lectures; and of the flexibility in form possible in a live performance, including opportunities for dialogue. There are plentiful ways to avoid such traps, and many ways to use online learning that are more effective than most lectures, but it is all too easy to focus on obvious functions to the exclusion of things that really do matter. To this day, far too many online learning solutions replicate the veneer of the lecture—its information-imparting function—while neglecting the vast web of benefits that surround it in in-person learning.
The COVID-19 pandemic revealed this in sharp relief as many in-person teachers attempted to replicate the methods and motifs of their classroom teaching using technologies such as Zoom, Webex, Adobe Connect, or MS Teams, and either they were overwhelmed by the effort of trying to sustain the human connection, or they failed to adapt to the distinctly different affordances and limitations of the technologies, leaving students adrift and unsupported. Lectures work as solutions to problems of in-person teaching for many reasons, including the salience of travelling to them, the affective presence of others in the room, opportunities for engagement when leaving them, and much more. They are not great ways of imparting information at the best of times, but without these vital elements they are nearly worthless, and much else needs to be done to make up the shortfall. It is interesting, though, that the overall system sometimes found ways to adapt. Students with supportive families and friends, for example, were able to fill the gaps more easily than those without them, and, in the process, inequalities and weaknesses that were already endemic before the crisis began were amplified (Darmody et al., 2021).
Humans Made to Act like Machines
Uncreative participation in hard technologies does not have to be a bad thing, as long as we have chosen to participate, and we can choose not to do so. However, it is important to be able to diverge. Those of us who have sat on, say, exam boards or university committees can almost certainly remember countless occasions when unrelenting rules determined the behaviour of otherwise rational humans in completely irrational ways, when divergence was frowned on or prohibited. I have sat in meetings at which motions failed to carry because of the incorrect application of Robert’s Rules, despite nearly unanimous assent by all parties present.
Although pedagogies are inherently soft, human-enacted hard technologies are often found in the practice of teaching itself. Schwartz (2015, p. 42) provides a sample of a script issued to a teacher in America:
Script for Day: 053
TITLE: Reading and enjoying literature/ words with “b”
TEXT: The Bath
LECTURE: Assemble students on the rug or reading area. . . . Give students a warning about the dangers of hot water. . . . Say, “Listen very quietly as I read the story.” . . . Say, “Think of other pictures that make the same sound as the sound bath begins with.”
Schwartz observes that the script that the teacher had to follow was twice as long as the book that she was reading. This is an extremely hard technology, dominative and prescriptive, reducing the human within it as much as possible to a cog in a machine. In fact, that is precisely the intent. As Schwartz notes, “scripted curricula and tests were aimed at improving the performance of weak teachers in failing schools—or forcing them out of teaching altogether” (p. 45).
This is not a new phenomenon. Among the earliest and most influential proponents of this approach, Pestalozzi (1894, p. 41) wrote that “I believe it is not possible for common popular instruction to advance a step, so long as formulas of instruction are not found which make the teacher . . . merely the mechanical tool of a method, the result of which springs from the nature of the formulas and not from the skill of the man who uses it.”
The underlying assumptions—that most teachers are average or below average, that there is a “right” way of teaching, that uniformity is an equalizing force rather than a driver of mediocrity, and that method can be divorced from technique—remain strongly embedded in education systems. Such beliefs are much of the reason that textbooks, reusable learning objects, open educational resources, and MOOCs are seen to be beneficial, inasmuch as (though capable of achieving many more benefits) they allow a weak or overworked teacher to be replaced, in part or in whole, by better, tireless teachers. The same is true of many applications of AI, from automated tutors to learning analytics tools.
Because such hard technologies are enacted by people, there is a lot of scope for error, inefficiency, and interpretation, so not only is the approach dehumanizing and demotivating, but also there is a good chance that it will fail to work as intended. From my point of view, as one who sees education as archetypally human, fundamentally soft, and essentially liberative, this appears to be a horrendous distortion of all that learning with a teacher should be. However, thanks to creative human nature and the many cracks in the technology through which softness might shine, and especially since a hard pedagogical method like the aforementioned script can be used as part of an assembly rather than as the sum total of the activity, it is not doomed to fail. Indeed, it is unlikely that—unless acting under obscene coercion or monitoring—many human teachers would take this script as anything more than (perhaps strongly) advisory. Furthermore, there are occasions when even a champion of teacher freedom would find it justifiable to use such a hard, human-enacted technology—for example, if someone without any training as a teacher and virtually no knowledge of the subject had to step in temporarily to lead a class, then this kind of script might be useful. The fact that education is and must be a soft process does not preclude there being hard components of it. What matters is whether the degree of hardness is appropriate to the situation. The situated nature of all learning means that there can be occasions when even the inhuman, the sterile, and the mechanical are useful. Like practising scales, this can help us to learn or at least explicitly to take advantage of the distributed teacher in order to teach better than we could alone. As ever, we are co-participants in the technologies of education, not just users of them. It is fine to be part of the machine if that is what works and if we do so willingly.
Appropriate Roles for Hard and Soft Pedagogies
Hard technique is needed to operate any hard technology, be it a form or a vending machine. Equally, virtually all soft technologies demand at least some prerequisite hard skills to enact them: the ability to spell accurately, form handwritten letters correctly, draw lines with a pencil clearly, place fingers on the fingerboard of a violin accurately, pronounce a word properly, and so on. Likewise for “facts” (knowledge of previously defined and classified information) that might be needed to support them. This dependency—and the fact that learning the hard skills must precede or at least coincide with learning the soft skills—tends to lead to the perception by both teachers and students that education is concerned most significantly with enabling learners to replicate hard skills and knowledge.
The tendency is reinforced by the relative ease with which hard skills and knowledge can be assessed. If something can be done correctly (as opposed to well), then normally it is not too difficult to measure the extent to which it is done incorrectly. It is in principle and usually in practice much more difficult to measure soft skills or knowledge production objectively, because there is no end to the number of ways that they can be expressed or enacted. This is not to suggest that they are totally unconstrained: a soft technology allows a move into the adjacent possible, not the impossible, and all are rich in path dependencies, not least those imposed by the hard technologies, skills, and structures that provide their foundations. Nor is it to suggest that judgments of soft skills are particularly difficult, especially when a soft technology is used with the intention of bringing about specific aims, such as teaching someone a hard skill or making a comfortable chair. Furthermore, there tends to be a lot of agreement between evaluators of even the softest pieces of work. However, there is always room for interpretation, surprise, and invention that teachers have never thought of. The hardness of a poorly designed marking scheme might make it difficult or impossible to award marks, but creativity in execution is always possible when using soft skills.
Learning technologies intended to teach hard skills, such as most Khan Academy tutorials, many adaptive hypermedia lessons, and much institutional learning, deliberately focus on the inculcation of habits and behaviours that allow a learner to be part of or enact a machine. Such things matter greatly, as parts of a learning technology assembly, but it is all too easy to confuse the parts with the whole and to forget that the main reason we need hard skills is to react, adapt, and act creatively in the world. Hard skills are a non-negotiable part of what we learn, but they are only ever a part.
Harder pedagogies tend to be more effective—or at least more provably so—when learning harder skills than when learning softer skills. There is a circle here, though. They are more provable precisely because they are hard: the fact that orchestration, phenomena, and uses are well defined and replicable makes comparisons possible in ways that make no sense when every instance is invented anew. Repetition, drill and practice, spaced learning, interleaving, behaviourist techniques, and many sequenced pedagogical methods from Gagné’s (1985) nine events to Direct Instruction (Stockard et al., 2018) can all be provably effective means of achieving a tightly specified outcome, even though they might offer far less value and even be counterproductive in achieving soft skills with expansive, fuzzy, or open outcomes. They might not even be particularly effective for learning hard skills, especially if coerced by teachers. Softer methods are more variable, more dependent on skillful technique, and thus more likely to be done badly, so on average they might not seem to be so beneficial. However, though hard pedagogies likely will form part of an assembly, they should rarely, if ever, form it all. At the least, they will be more effective if they are aligned with authentic, personally relevant learner needs or interests, or applied in a meaningful and, where appropriate, authentic context.
Although harder techniques will be needed to enact virtually all soft technologies, subjectivist pedagogical methods, such as problem-based, inquiry-based, or other more open-ended complexivist learning approaches, tend to be necessary parts of the assembly when softer skills are to be learned. By definition, softer skills require invention and creative choices to be made, which means that (among other things) they always contain the capacity to surprise, and that success rarely can be accurately quantified, whether or not human markers agree substantially in their evaluations. Because softer skills can always be improved, no matter the level of competence, the notion of achieving 100% in a test makes little or no sense: 100% of what? Outcomes that were not pre-specified but that have great value are also far more likely. Some subjects are inherently soft: creative writing, art, design, some aspects of architecture, computer programming, philosophy, and so on can barely be conceived in terms of hard skills alone (though many hard skills are needed for all of them), so more open, expansive pedagogies are par for the course.
The need for soft pedagogies might be less obvious in the case of “right answer” harder subjects such as math, physics, engineering, and computer science, but we should remember that the main value of such subjects lies in their application, not in accomplishing accurate replication of their mechanical parts. The occupations, for instance, with which they are associated tend to be anything but hard, demanding great creativity, problem solving, and adaptability. This is equally true when they play a subsidiary but still prominent role in other assemblies, from social contexts such as barroom arguments, to critical or reflective writing about their roles in society, to their use in the construction of other technologies. All have deep and complex ethical dimensions and greater or lesser relevance in forming personal identity and meaning. Also, notwithstanding the great pleasure gained from solving a right answer problem or doing something well, many hard pedagogical methods, especially those focused on repetition or replication, can be boring. Pragmatically, therefore, as well as pedagogically, it makes no sense to teach them as though they were purely hard skills, separate from their context of application.
This is all the more important in applied areas such as medicine, architecture, or computer programming. It is important that doctors know the names of all the bones in the body, because they must work with other doctors as co-participants in the same machine, and may not always have time to look them up in a reference source: unless all agree that this bone is a radius and that one an ulna, or that this medicine is a statin and that one an anti-inflammatory, the consequences for patients can be dire. However, though it makes sense for there to be some means of judging whether they know enough, this does not imply that mechanistic means should be used to train doctors or to judge their competence. Hard skills alone are useless: no practising doctor in the history of medicine has had to identify all the bones in the body under test conditions, so it is odd that doctors often have to do so as a rite of passage toward becoming doctors in the first place. It is far more important that they can apply such knowledge in an authentic setting or one that closely resembles it (bearing in mind the risks to patients of learning on the job). It also matters more that they have learned appropriate methods to continue to learn throughout their careers, because new knowledge and new technologies that replace as well as augment existing approaches are constantly being developed.
The machine is not static, and learning does not end when programs and courses end, in any subject. Tellingly, it is rare for practising doctors to memorize new knowledge in the same way that they are expected to memorize body parts for tests. Rather, they remember new things because they are useful and necessary in practice, and/or they know where to look things up when needed, and we would judge their success as practitioners according to how well they used that knowledge. Why would we do any differently in an academic setting? It is worth remembering that this kind of hard knowledge can deteriorate too. If the names of bones are not used in practice, some might well be forgotten. As patients, we care mainly about the soft technology of medical practice because that is what makes us well, not the hard parts assembled to achieve that.
Hard Technologies as Part of Our Knowledge and Skills
The fact that much of what doctors “know” is actually where to find the information that they need (or the people who have it) points to another important aspect of learning. Hard technologies embody the learning of those who contributed to their creation and thus become part of our own: our minds extend beyond our bodies into the objects and people around us.
Many human-enacted hard technologies, on some occasions, can do more than just embody the learning of others: the learning that they embed can rub off on us. They can provide a scaffold for us to be supported in learning for ourselves, a support on which we can build and develop our own independent skills. This is true, for instance, when we practise scales and arpeggios on a musical instrument or learn to play a piece of music “correctly.” Whether or not we learn from hard technologies, at least sometimes they can give us a boost. Their patterns embody the creative thoughts of another person and let us begin close to the point where they left off, or to take what we need and branch in another direction. The same is true of teaching. We can learn to teach ourselves in part as a result of having been taught. As Cuban (1986, p. 59) puts it, “teaching is one of the few occupations where practically everyone learns firsthand about the job while sitting a few yards away, as students, year after year. We all have absorbed lessons on how to teach as we have watched our teachers.”
We acquire many useful habits of learning this way. Of course, if we are badly taught, then we might learn bad ways of teaching ourselves. This is a highly significant issue when we are taught through conventional methods by teachers who do not understand the processes that they use to teach. I sometimes describe myself as an unteacher because many of my students, through hard technologies of objectivist, carrot-and-stick teaching, have learned that learning is about being told something and having to perform in some extrinsically defined way to prove that they know it. My job, in part, is to unteach them so that they can unlearn their preconceptions derived from objectivist approaches.
There can be disadvantages to leveraging the learning of others. There is a virtual industry of books and articles bemoaning the dumbing down of society and especially the effects of the internet on learning (e.g., Brabazon, 2007; Carr, 2011; Keen, 2007; Vaidhyanathan, 2012). As I write this, only months after the launch of ChatGPT, a plethora of similarly fearful, nostalgic authors are lamenting the ease with which cognitive tasks can now be performed by machines, and predicting dire consequences. It is true that, among the myriad technologies that have become available through the internet, many sometimes cause harm, including those that mostly do good. Postman’s (2011) Faustian bargain remains ever present. It is also true that we (and our brains) are changed by the things that we do in the world, especially those that we do a lot. The fact, say, that taxi drivers’ hippocampi (on average) are different from those of most of the rest of us (Maguire et al., 2000) is because habitually they have been used differently.
Whether our increased reliance on digital technologies is harmful or not remains an open question, and the answer is almost certainly different for every person. Those who rely on such technologies might be less able in some respects but (thanks to our ability to access and process more information) more able in others (Pinker, 2010). We gain cognitive prosthetics that let us do more complex things. In fact, Johnson (2006) makes a compelling argument that the (inevitable) rise in the complexity of media has made us smarter than we ever were, albeit that there is some evidence that the Flynn effect—a general tendency for average IQ scores to increase over time—upon which he based his arguments might have plateaued or even be in decline (Uttl et al., 2018).
Whether or not the average effects are positive or negative, when techniques are hardened into the mechanism of a machine, this can come back to bite us later. For instance, when we fail to learn how to land a drone manually because there is a button provided for it, a gust of wind or the appearance of an unexpected obstacle can leave us ill prepared to take on a skillful role in operating the device. Similar concerns apply to a dependence on internet resources as a prosthetic memory, or our inability to light a fire without a lighter or matches when neither is available, or even when our car breaks down in the middle of nowhere, especially if highly hardened technologies such as microcomputer controllers are essential to its operation.
When hard technologies fail, we might regret not learning the hard skills that they replace, and we might regret not having the skills that they embody. Yet, in many cases, we would not be able to achieve the heights that we have achieved if we learned those skills at the expense of others that incorporated more orchestration. It seems to be entirely inappropriate, for instance, even though it might be true, for Socrates to complain that writing provides only a semblance of knowing (Plato, 360 BCE), and thus represents a retrograde step in our development, because reading is one of the most central foundational technologies upon which much of our extelligence (Cohen & Stewart, 1997) as a species rests. If we had not invented it, or something like it, then we would not (for better or worse) be the smart species that we are today. And it is highly significant that it seems to be natural to use the word we for the inventors of these technologies. Language and its associated technologies of reading and writing were and are continually reinvented as a collective enterprise. Almost all of us are not just users but also creators of and participants in the evolution of these technologies, in smaller or larger ways.
Whether contributing small pieces to a softer assembly or acting as forms around which we can learn more, harder technologies allow us to leapfrog parts of the journey and to get to points farther along the path, letting us grapple with more complex and (sometimes) interesting and useful problems sooner. Beyond that, if people are involved in the enactment of those hard technologies, such as when following a recipe or repeating a phrase in a foreign language until it sounds right, then those technologies can allow us to develop habits of mind and increase our expertise to the point that we can become creators and inventors. From a learning perspective, we should wish for hard technologies. They do restrict what is learned and how it is learned, but in many cases that is precisely why they are useful.
The Technological Nature of Literacies
Until the close of the 19th century, to be literate simply meant that one was educated and well read (UNESCO, 2006). Its meaning has evolved since then to mainly signify that one can read and write, though shades of the original meaning remain: it is possible to be more literate, implying not just a greater vocabulary but also familiarity with more literature and all the learning that it implies. In recent decades, the word literacy has been hijacked by a great many academic communities to stand in for any group of skills that seems to be relevant to the topic of interest for researchers or teachers, such as media literacy, network literacy, digital literacy, music literacy, health literacy, and even hip-hop literacy (Richardson, 2006), to name but a handful of the many uses of the word. To the creators of such uses, a notable benefit of using the word is that it makes an area of interest seem to be more important than one defined simply by the need for a set of skills. I am uncomfortable with such uses.
The reason that literacy itself is important and deserves a name of its own is that the hard skills of reading and writing are essential foundations that every individual needs to participate effectively in any modern industrialized and technologically complex society. The ability to operate the hard technologies of vocabulary, grammar, syntax, and writing is a prerequisite of the soft techniques of reading and writing (or close analogues), without which it is difficult to perform any useful role in a developed society or to partake fully in it. Many societal roles would be impossible without literacy. Likewise for numeracy: it is difficult to operate in a society without some grasp of how to manipulate numbers, though the ways in which they can be used once such hard skills are learned are innumerable. There are plenty of other essential skills in most modern societies, from shopping to paying taxes, from following the rules of the road to dressing appropriately. However, though necessary, few are foundational, in the sense that other skills depend directly on them.
We might need food, say, to do pretty much anything else, but it does not form a part of most other activities beyond cooking and eating, and often cooking can be delegated. For most skills that might be thought of as similarly foundational as reading and writing, notwithstanding that they can be improved through explicit tuition or intentional study, it is normally easy to acquire them by imitation, practice, or simple instruction: politeness, say, or manual dexterity. If not, then we can employ others to provide them—accountants for our taxes, lawyers for legal help, and so on. Reading, writing, and arithmetic differ insofar as they are extremely difficult (or impossible for many people) to learn without a deliberate and fairly prolonged process of instruction, and they cannot be delegated easily. The hard skills needed are complicated and arcane. As Steinbeck et al. (2003, p. 123) put it, “learning to read is probably the most difficult and revolutionary thing that happens to the human brain and if you don’t believe that, watch an illiterate adult try to do it.”
Plenty of other skills have little to do with reading, writing, or arithmetic but also demand deliberate instruction, such as music. I strongly value musical skills. I think that music has immense cultural and social significance and should be taught to all. However, it would be hard to argue that skills in music are necessary for effective participation in most societies. They are essential for some musical subcultures, for sure, and can matter to many people, but they are not of general concern as a crucial set of techniques for survival within a modern society, and, more significantly, musical literacy is of relatively limited (though not zero) value as a means of learning other things. Moreover, though it is easy enough to identify skills of reading and writing, and they remain moderately (though far from wholly) consistent across most contexts, the skills needed for music vary considerably more than those for reading and writing. The ability to read musical notation and turn it into beautiful sounds matters greatly to a classical musician but is irrelevant, for example, to most blues musicians. For modern pop musicians, technologies such as autotune and sampling matter more than a grasp of musical notation.
Although there are many culturally specific forms of writing, and a world of difference between, say, reading a legal document and enjoying poetry, the hard, technical differences are nowhere near as vast as those between different musical cultures because the foundational hard skills (reading and writing) remain consistent. For musicians, the instruments, scales, rhythms, the need to learn hard skills such as reading a manuscript, and almost every other aspect (beyond the fact that all involve the production of sound) differ radically from one culture, instrument, or genre to the next, notwithstanding substantial overlaps between many. Mastering each genre, instrument, scale, and so on is not like learning a new language but like learning a new way of thinking. Similar concerns relate to many of the x-literacies that have been invented: they have value in specific cultures, but few if any matter outside a narrow range of cultural contexts.
Digital and Other Technological Literacies
In most modern societies, it might be argued, and many have done so (e.g., Gilster & Glister, 1997; Koltay, 2011; Lankshear & Knobel, 2006; Potter, 2013; Rivoltella, 2008; Sharkey & Brandt, 2008), that skills in the use of digital technologies are as fundamental, widespread, and complicated to learn as reading, writing, and arithmetic. Indeed, it can be complicated to use some digital technologies, and increasingly they are essential for everything from shopping to watching TV. However, some critical differences are worth noting. First, as digital technologies evolve, many of the techniques that applied to older generations of them become irrelevant. Whereas the skills of assembling letters to write words and deciphering them to read change fairly slowly and normally last a lifetime, those needed to deal with modern technologies are ever more transient, thanks to ever-expanding adjacent possibles.
Although spelling, grammar, and vocabularies do constantly evolve, they are sufficiently stable that we would have little difficulty deciphering what someone wrote 200 years ago or more, and (ignoring terms and uses of terms that might cause trouble) vice versa, but someone born 20 years ago would be flummoxed by a computer from even 30 years ago, let alone 40 or 50 years ago. For most modern digital technologies, notwithstanding their gradual evolution that assembles old with new, their value beyond a particular time and/or place is often negligible. They become stale. Once upon a time, it was useful to know how to deal with config.sys and autoexec.bat files in a DOS or Windows system if one was a user of an IBM-compatible PC, and any definition of computer literacy in those not-so-far-off days would have included these competencies. Now, my ability to navigate the original Netflix web app only partly transfers to operating the Apple TV version of the app.
Second, the issue is made worse by the fact that a large amount of the development of digital technologies is concerned with trying to make them easier to use by hardening aspects of software that formerly demanded hard skills, from managing file systems to producing well-exposed photographs to parking cars. Ongoing and burgeoning hardening and automation render previously useful skills useless or at least relegate them to minor roles about which few people need to care. Much can be lost in this process. For example, in old-school photography, it is still useful to be able to manipulate apertures, shutter speeds, film types, lenses, and focus. There are many ways that we can soften the technology to produce exactly the effect that we seek, albeit with the usual costs associated with a soft technology. Nowadays, though, we can let the camera focus itself and choose a film speed, an aperture, and a shutter speed, because it does so better than all but expert photographers.
Hard is easy, so we do not have to think about it anymore. This brings some benefits. As a result of such mechanisms, we are freed to more easily consider composition, timing, lighting, and other factors that (in most cases) matter more when producing a photo. Sometimes even experts can capture moments that they might have missed while adjusting settings manually. Automation can liberate as well as dominate. More of us can participate effectively, without the limitations of simplified cameras that could capture only a fraction of possible photos well. And, in fairness, the capabilities of modern image editing software can bring even greater softness in post-processing than we had in the past, without any of the dangerous chemicals and expense. However, they also bring additional ways to harden. The ubiquitous filters on photo upload sites that make any photo look “artistic” offer such “skills” to anyone, including my cat. Some digital cameras can even take the picture for you, choosing a moment according to predefined algorithms that take into account composition, movement, and the expressions of subjects. Whether this is good or bad in the grand scheme of things is open to argument. However, the overall trend is toward hardening, automation, and deskilling, in everything from cameras to operating systems to help systems to automated teller machines to learning management systems, reducing or eliminating our need to learn hard techniques to use them.
Recognizing the transience and cultural specificity of modern “literacies,” some have sought underpinning commonalities that are more persistent and relatively unaffected by constant change around us or that seem to matter more in this shifting landscape. For instance, Jenkins (2006) identifies a range of what he describes as “New Media Literacies” that include play, performance, appropriation, judgment, negotiation, and multitasking. Although these are all aspects of an individual’s competence that can be improved through instruction and practice, and they often come with (typically culturally specific) techniques that can be used in their enactment, they are not techniques like reading and writing, but attitudes and aptitudes that we value in individuals in order for them to do pretty much anything in a society, including reading and writing. Equally, reading and writing are potential causes of such qualities, not types of the same kind of thing. These skills are not about being literate; they are about being human. Children are usually pretty good at playing, for instance, and do not need to be taught explicitly to do so, albeit that there might be many technological skills involved. It is a misappropriation of the term, though, to call them “literacies,” and it confuses the issue. Attitudes and values are important parts of many competencies, and we need to cultivate them (usually through applied techniques), but that is exactly the point: attitudes and values in themselves are not competencies.
Despite these reservations, the term “literacy” might have some value as a shorthand for the set of hard techniques that is a prerequisite for any human system for which we can identify boundaries: in other words, for any culture. Such techniques are what we need to operate the technologies of a culture, from the smallest clique to the largest nation and beyond, whether it is transient or stable. Different hard skills are needed to be part of a culture of academics or hipsters, a family, a religion, a country, Twitter users, researchers in learning technologies, and often subcultures within those cultures or that cut across them. Each culture has its own literacies, its own suite of hard techniques (including structures, methods, principles, skills, strategies, and so on), that must be mastered to participate at a minimum level and that are prerequisites for using the soft technologies that help to define its values and purposes. We can identify these technologies by considering what is required of an outsider to become a participant and of a participant to become a full member of a culture.
The culture is not wholly defined by such techniques: usually, there are common values, attitudes, and shared context that are at least as important and often more so. But every culture demands a set of hard skills and knowledge of the technologies and structures that matter to it. “Literacy” seems to be as good a term as any to describe that set, and it is in keeping with its more conventional meaning. There are, though, millions of these cultures. The Reddit site performs a useful function, one that Usenet newsgroups and, before the internet became popular, bulletin boards played in the past: it allows such cultures to be reified. As I write this in March 2023, according to Business of Apps (https://www.businessofapps.com/data/reddit-statistics/) there are over 2.2 million subreddits (representing topics of interest), of which over 130,000 are active.1 Each subreddit not only reifies a culture but also creates a new culture of its own, shaped by its moderators, with explicit rules and expectations of behaviour. This is just the tip of the iceberg, inasmuch as there are cultures in every organization, community, and household distinct from any other. All demand literacies of greater or lesser complexity.
Soft and Hard Illiteracies
There are two distinct forms of this broader technological definition of literacy that relate to our abilities to use different technologies, and they are revealed when we consider what it means to be technologically illiterate. The first is when we are part of a hard technology and fail to play our role correctly (e.g., pressing the wrong buttons at the wrong times, overwinding a watch, or filling in a form incorrectly), in which case the technology simply does not work as it should. The second is when we are not sufficiently able to fill in the gaps in a soft technology (e.g., not knowing how to draw a picture of a hand, play an instrument, or compose a sentence), in which case the technology does not work well. We might characterize these as hard illiteracy and soft illiteracy: in the case of hard illiteracies, the technologies fail to work at all, whereas in the case of soft illiteracies we can see different degrees of skill, and better or worse techniques, ranging from hardly any (e.g., a toddler’s first attempts at drawing) to a lot (e.g., a professional artist’s skill in illustration).
Programmers have an acronym for everything, and there are plenty for user “errors”: PICNIC (problem in chair not in computer), RTFM (read the fucking manual), CBE (carbon-based error), or TSTO (too stupid to operate), for example. I teach my programming students that, if they ever need to invoke such acronyms, the problem is with their program, not with the user. Although it is easier to blame the user, hard illiteracy might be seen equally as a failure, or at least as an opportunity for improvement, in hard technology design. This is particularly true in anything mediated by a computer program, especially since most code in a modern computer is dedicated to making software error-proof. It makes no sense to devote hundreds or more hours to building software and then to require users to play roles that the software could perform just as easily, and far more efficiently, accurately, and speedily.
Anything that can be performed better by a machine or uncritical process probably (or at least normally) should be: this is where automation can have great value. However, this comes with a big proviso: inordinate care must be taken to ensure that hardening does not impose unwanted constraints or harden things that should not be hardened. It is important to bear in mind that virtually all technologies involve costs as well as benefits, all have side effects, and, most importantly, sometimes the obvious use is not the only one that matters. Dishwashing, for instance, in most ways is done better by a dishwashing machine, which is more effective, faster, less environmentally harmful, and so on. However, there are social aspects of the hand-washing process in many families, some people take pride in the accomplishment, it is far more convenient when camping, it is essential in the absence of electrical power, and so on. The uses to which we put technologies can extend far beyond those that give them their names.
It is especially important to be aware of what we are hardening and why. For instance, designing an assignment submission system that does not allow for the possibility of late submissions normally would be a bad idea, though it does happen. “Assignment submission” is not just what it says on the box. In fact, it is part of a much bigger assembly of complex processes, which includes mitigating circumstance processes, methods of dealing with broken systems, teachers’ knowledge of students, and much more. Our failure to acknowledge such factors is a classic example of our common failure to understand technologies as situated, deeply connected systems in which the boundaries that matter are seldom those of a labelled technology: assignment submission is a synecdoche for many processes, not just the thing itself.
It is also important to recognize the value of some “mindless” processes, from sawing wood to cleaning to giving lectures, that often have purposes and physical, social, or psychological benefits that go beyond their most obvious functions. Many technological activities that we do for pleasure are concerned with a great deal more than the labelled activity itself. Nonetheless, from a design perspective, if we are creating any sort of technology, then it is normally a bad idea to force humans to play fixed roles simply to make a hard machine work, and (at least when the activity is performed regularly) efforts should be made to reduce the need for it. Students should not need to learn too many esoteric and situated hard skills for submitting assignments in a specified format if they are learning, say, to write creatively, unless such esoteric skills are an authentic part of the process of being a creative writer.
Regardless of our preferences or needs, or the many unwanted consequences for many people, it seems that, from a broad perspective, fewer and fewer hard skills are required of people in a modern society, and, with some exceptions, most of those needed are ephemeral. On the whole, hard illiteracy should be designed out of a system because hard literacy is dehumanizing. With a few exceptions, it should not be possible to be illiterate in hard skills if (and only if) all that they achieve is the correct operation of a machine, because machines can replace us more effectively, efficiently, and reliably and because being nothing but a cog is (in itself though not necessarily when viewed as part of an assembly) debasing. It is important to be clear about what I am claiming here. This is not an appeal to embed all hard skills in machines, by any means. It is about doing so only for those things that fail to provide any extra value to any of us (beyond extrinsic reward) and where we provide insufficient value in our role in the assembly in any way that could not be performed just as well or better by a machine. Speaking to the socio-technical perspective of soft and hard technologies, it is about reducing the need for us to be part of dominative and prescriptive technologies.
Soft illiteracy should never be designed out of a system, even though the technologies that we might use can evolve over time or be replaced by better ones. To make creative use of most technologies, there are virtually always hard skills to learn, in both senses of the word hard. For example, to form written sentences, we need to learn how to form letters that are intelligible to others, as efficiently as we can. We need vocabularies, a grasp of syntax, understanding of approaches to rhetoric, knowledge of punctuation, and many other hard skills. On their own, hard skills usually can be (and have been) programmed into machines, but in assembly they become part of something deeply soft: they offer the means to create new technologies, from forms to poems to reports to shopping lists to dictionaries. They provide us with the tools to be more human, to be our best selves. Relatedly, language learning, at least in some of its most important aspects, is hard learning: notwithstanding the great artistry involved in rhetorical and poetic skills, it is fundamentally concerned with repeatable habits that should be replicated reasonably precisely; otherwise, they become unintelligible. Indeed, language learning is one of the few contexts in which harder methods such as drill and practice, spaced learning, and computer-based training show unequivocal benefits (Chang & Hung, 2019). Harder pedagogies may support skills as diverse as kicking a football, performing calculations, or making a valid argument. But what matters is the value that they bring to us as creative, social, engaged, motivated human beings, to expand our horizons, to open ourselves to one another, to make the world a better place (for people, not for machines).
It is not coincidental that pedagogies used to develop hard literacies are often hard themselves. Where we must be enactors of hard technologies in order to be participants in soft technologies, it is important that we play our roles correctly, and that often means “programming” ourselves to behave like machines. To be more precise, we are not programming ourselves (as a whole) so much as parts of ourselves that we can assemble with parts of other technologies to do something else. The word programming, as I use it here, is decidedly metaphorical. We are not programmable in remotely the same way that computers are programmable, but there is value in the metaphor inasmuch as it serves as a reminder that, just as computers can run many programs and subroutines, so too our thinking can be composed of many parts necessary in the enactment of the whole. Whenever we make changes in our brains (i.e., when we learn), unlike when we program a computer, those changes are seldom if ever localized, are always connected with other knowledge, cannot be switched on and off or loaded and unloaded at will, and can affect and be affected by many other aspects of our thinking.
If we unlearn them, we do not so much erase them as add alternative paths. In effect, as discussed in Chapter 6, we are building small and large technologies in our minds that we can assemble in different configurations and incorporate in infinite varieties of other technologies to achieve our purposes. This is a massively recursive process in which we create technologies in our minds that in turn alter our minds, enabling us to do things, change things, change ourselves, and form our identities in the world. The assemblies and assemblies of assemblies used together in a vast network of interlinked technologies lead to a vast range of possible combinations, almost all of which will never occur, so creativity, uniqueness, and selfhood emerge as inevitable consequences (Hofstadter, 1979).
Further evidence that at least part of our thinking is composed, in some ways, of technologies comes from the fact that the technologies that we build in our minds obey the same general rules and patterns as other technologies: the large and slow changing affect the small and fast changing more than vice versa, and they develop through a process of assembly of hard and soft parts. This is not to suggest in any way that our brains are physically organized like technologies: the technologies of the mind exist at a different level of explanation from the connections and networks of which our thoughts consist physically. Computer programs written in a high-level programming language similarly bear little resemblance to the machine code that runs inside their microchips, though there is a direct correlation in computers that might not be present in brains: machine code can be decompiled to reveal much if not all of the original source code, whereas there is no good reason to believe that this might be possible for thoughts.
The Double-Edged Sword of Expertise
Boundaries matter when considering hard literacies because skills are aggregable. For example, once we have developed the skill of repeating a common phrase, to a large extent it can be seen as a single harder assemblage that henceforth can be treated as an atomic unit (or at least a subroutine) in further technologies—to use the phrase in a conversation, for example. This effect has been well researched in chess playing, in which it has been found that expert players do not consider each possible move and its detailed consequences but recognize patterns (Chi, 2006), thus allowing them to disregard those not likely to be useful. They have pre-orchestrated significant parts of the assembly, making them harder, thus faster and more efficient, and they can chunk them together to achieve more than those of us who must figure out those patterns anew. In some ways, expertise can therefore be seen as a reduction in knowledge (or at least a reduction in information)—we do not need to concentrate on irrelevant or distracting details—rather than an increase in it. In effect, we black-box some of the machinery in our minds in order to achieve more complex ends. Many of us can remember the early stages of learning to drive a car or ride a bike, when each action required thought and attention, and the result was clumsy, unreliable, and a little embarrassing. As we develop the necessary habits, such actions become second nature: hardened pieces that then can be used in a bigger assembly, allowing us to perform skillfully in order to use them in many different ways, as soft technologies. This hardening, however, can be a double-edged sword.
Experts no longer need to think of the smaller details, which means that they “know” what is wrong and “know” that some patterns are not worth pursuing. This works well as long as the aggregated models that they are using are valid and the conditions surrounding them do not change too much. Unfortunately, those models are sometimes wrong, or incomplete, or fail to adjust to external changes. Not all of our habits make sense, and not all such learning remains useful forever. Sometimes habits of thinking can become counterproductive. Having lived for well over fifteen years in a country where people drive on the opposite side of the road from the one in which I grew up, I still have to think about where to look each time I cross a road, and my early skills in procedural programming have not served me well in object-oriented, let alone more recent, coding approaches. Most technological skills are ephemeral, but they can be harder to forget than to learn.
It is significant that a disproportionate number of major innovations and breakthroughs in many fields are accomplished by the young, because they do not necessarily know what is impossible and do not have so many ingrained habits to unlearn. They can assemble the more atomic skills differently because they have not developed habits and knowledge orchestrating coarser, larger patterns. Shirky (2007) describes this as the Bayesian advantage of youth. Although, in a large percentage of cases, those who lack expertise will come up with poorer solutions than experts, big breakthroughs often occur precisely because of that lack of expertise. As always, soft technologies provide greater flexibility and the potential for creativity. If we lack the “subroutines” gained through expertise, then we have to make them up ourselves, usually based upon our incomplete knowledge. On the whole, we will do so less well and with greater effort than we would, or will, when we have learned to do them “properly”—soft technologies tend to be inefficient, inaccurate, slow, and difficult to implement. But sometimes we will wind up inventing something better than the “proper” way of doing it. Such examples, though rare, are important. Teachers (including when we teach ourselves) should be aware of what and how learners are learning and seek misconceptions and errors in order to correct them. However, it is always important to look at such deviations with a critical eye to see what might be good or even better than what we believe to be correct. Teaching is and must always be learning.