CHAPTER 25
2018
Ed Tech’s Dystopian Turn
For this final year of the 25, a trend rather than a technology is the focus. There is in much of ed tech a growing divide, particularly in evidence at conferences. One camp is largely uncritical, seeing ed tech as a sort of Silicon Valley-inspired, technological utopia that will cure all of education’s problems. This is often a reflection-free zone, because the whole basis of this industry is built on selling perfect solutions, often to problems that have been artificially concocted. In contrast to this is a developing strand of criticality around the role of technology in society and in education in particular. This camp can sometimes be guilty of being overly critical, seeking reasons to refute every technology and dismiss any change. However, with the impact of social media on politics, Russian bots, disinformation, data surveillance, and numerous privacy scares, the need for a critical approach is apparent. Being skeptical about technology can no longer be seen as a specialist interest.
This criticality comes in many forms. One prominent strand of such an approach is suspicion about the claims of educational technology in general, and the role of software companies in particular, as we saw with the assertions relating to blockchain. One of the consequences of ed tech entering the mainstream of education is that it becomes increasingly attractive to companies that wish to join the lucrative education market. Much of the narrative around ed tech is associated with change, which quickly becomes co-opted into broader agendas around commercialization, commodification, and massification of education.
For instance, in their report, “An Avalanche Is Coming,” Barber, Donnelly, and Rizvi (2013) argued that systemic change in higher education is inevitable because education — perceived as slow, resistant to change, and old-fashioned — is seen as ripe for disruption, and ed tech is the means through which such change is realized. Increasingly, then, academic ed tech is reacting against these claims about the role of technology and is questioning the impacts on learner and scholarly practice, as well as the long-term implications for education in general. For example, in learning analytics we saw that academics are questioning the ethical framework and seeking to influence the field for the benefit of learners.
One of the key voices in ed tech criticality is Neil Selwyn (2014), who argued that engaging with the digital impact on education in a critical manner is a key role of educators, stating that “the notion of a contemporary educational landscape infused with digital data raises the need for detailed inquiry and critique” (p. 68). This includes being self-critical and analyzing the assumptions and progress in movements within ed tech. It is important to distinguish critique as Selwyn sees it from the posture of simply being anti-technology or putting forward a blanket resistance to any change. It is a mistake to position these views in pro- and anti-technology camps, and indeed such a positioning is often deployed by vendors of ed tech to pressure uptake of their solutions, with the implicit, and sometimes explicit, argument that someone is either stuck in the past and resistant to technology, or forward-looking, progressive, and therefore keen to adopt their technology. Associating technology adoption with positive characteristics and criticism with negative ones is not a new marketing technique. The stance of criticality vis-à-vis ed tech is a more nuanced view than this: one can still be enthusiastic about the application of technology to benefit learners while being aware of the broader implications, questioning claims, and advocating (or conducting) research about real impacts.
While there are many flavours of criticality in educational technology, and we have seen a number of these in relation to specific technologies in the preceding chapters, focusing on the work of three critical voices allows some broader principles to be extracted that are not tied to any one technology.
The first of these voices relates to the invasive uses of technologies, many of which have been co-opted into education, and highlights the importance of understanding how data is used. Chris Gilliard (2017) monitored such invasive applications of technology and curated a list of reports detailing them, which included:
Facebook outs sex workers (Hill, 2017) — Sex workers who maintained two separate identities found that Facebook’s algorithms, unknown to them, connected the two and suggested their “real” identity to clients.
Uber’s creepy stalker view — At a party, the Uber CEO allegedly treated guests to a display of the “creepy stalker view, showing them the whereabouts and movements of 30 Uber users in New York in real time” (Hill, 2014).
Amazon’s remote deletion of 1984 (Manjoo, 2009) — In one of the most ironic accounts of privacy invasion, Amazon deleted purchased copies of Orwell’s 1984 from users’ Kindles without their permission or knowledge.
The use of big data to predict employee sickness (Silverman, 2016) — “Employee wellness firms” and insurers mined data about individuals’ prescription drugs and shopping habits to predict which workers would have health problems.
Facial recognition in church (Bailey, 2015) — A company offered churches facial recognition software so they could track who attended services.
Disneyland’s electronic whip (Allen, 2011) — Workers at Disneyland in Florida had their data displayed on public, flat-screen monitors. The display listed workers by name, so their colleagues could compare work speeds.
The company that searches social media for brand risk — The software company Fama (https://www.fama.io) claimed to apply machine learning to social media content to identify any history of anti-social behaviour in potential employees.
While any one of these accounts may be exaggerated, justified, or since rectified, in combination they reveal a society where data can be used in unexpected ways, for purposes that the individual cannot control. While these examples are not in ed tech, it is not difficult to imagine versions of them in education.
Beyond privacy issues, Watters (2018b) compiled a list of the nefarious social and political uses or connections of educational technology, covering both technology designed specifically for education and technology co-opted for educational purposes. These included:
Border surveillance — The entrepreneur who developed the popular virtual reality software Oculus was reportedly developing software to monitor and identify illegal border crossings from Mexico to the U.S. (Levy, 2018).
AI to deprofessionalize teachers — AI researcher and former Google executive Kai-Fu Lee set out how he envisaged AI applications in China would allow for 1,000-to-1 student-to-teacher ratios, monitor attendance using facial recognition, and ensure certain students learn from a select group of masters (Corcoran, 2018).
Links between the far right and blockchain — Golumbia (2018) detailed the philosophical underpinnings of much of the cryptocurrency movement, including its dependency on “right-wing and often anti-Semitic conspiracy theories about the nature of central banking” (para. 23).
YouTube’s role in radicalization — The recommendation engine of YouTube accelerates the move to extreme content, so that a user might find they are quickly presented with conspiracy theories and radicalizing content. Tufekci (2018) reported that this happens for both left- and right-wing politics, stating that “YouTube was recommending content that was more and more extreme than the mainstream political fare I had started with” (para. 4). This algorithmic recommendation of polarizing content is addressed in more detail below.
Algorithms that reinforce social bias — An experimental machine-learning algorithm was tested by Amazon to help select the best applicants for jobs, but it began to exhibit bias against female applicants, because it learned from, and reinforced, previous bias in selection procedures (Vincent, 2018).
As with the previous list, it is not any individual story that is significant, but rather the general pattern the stories reveal. For example, it might not be too significant for ed tech that YouTube has been put to nefarious uses by some — Mein Kampf was a published book, after all, but that does not mean that books themselves are a flawed technology that those in education should disengage from. Rather, the examples above highlight that technology often has negative social consequences, and so the argument that technology is neutral is naïve at best. This places a social responsibility on educators, who, in adopting such technologies, both reinforce their position and risk exposing students to potentially harmful environments.
The final strand of this analysis of ed tech criticality comes from Mike Caulfield (2016). He acknowledged the positive impact of the web and related technologies but argued that “to do justice to the possibilities means we must take the downsides of these environments seriously and address them” (para. 7). He adopted the term “digital polarization” to capture how online technologies lead to increasingly extreme and divided groups. This is evident in trends such as:
Algorithmic filters — These control what the user sees on social media, with the effect of limiting their exposure to opinions different from their own.
Misinformation and “fake news” — These deliberately seek to reinforce the user’s existing worldview and can create “an entirely separate factual universe to its readers.”
Harassment, trolling, and “callout culture” on social media — These have the intention and impact of silencing minority voices and opinions.
Organized (sometimes by foreign states) hacking campaigns and bot programs — These seek to fuel distrust, grow conspiracy theories, and undermine democratic institutions.
Caulfield (2017b) gave a telling example of this process in action, demonstrating that, on the social media site Pinterest, a user might find that they go from searching for recipes to anti-vaccination conspiracy theories suggested by the site’s algorithms. Within a few clicks, their page has transformed from one filled with recipes for watermelon drinks, say, to one dominated by memes on vaccination conspiracy theories. To the unwitting user, the presentation of such content gives it a credibility and normality it does not warrant. And of course, from here the algorithms promote further content on government plots, antisemitic theories of a secret world order, and so on. This highlights a significant shift in the discoverability of content since the advent of the early technologies covered in this book, such as the web browser, wikis, and blogs. Previously, discovering online content required an active effort from the user: following suggestions from blogrolls, undertaking searches, clicking on links, and so on.
When these technologies removed the publication filter that had existed hitherto, it was entirely predictable that, alongside the newly released useful, humorous, and informative content, undesirable content would also appear. However, finding it required an active, cognitive choice to seek it out, which meant that its impact on wider society was limited. What the type of algorithm-driven approach that Caulfield highlighted does is transform discovery into a passive rather than an active process. This opens up a whole new audience for racist, misogynistic, and conspiracy theory sites, and this passive presentation helps to normalize such views. If they are presented regularly and alongside reputable news sources, then they begin to take on legitimacy for people who lack the critical abilities and information networks to see through and contradict them.
What Gilliard (2017), Watters (2018b), and Caulfield (2016, 2017b) each provide through these three strands of ed tech critique are aspects of what we can term “the dark side of ed tech.” These can be summarized as issues of privacy and data intrusion, social impact, and digital polarization. Taking on these challenges provides a framework for how those involved in ed tech can proceed. Doing so incorporates four elements that acknowledge the dark side of ed tech without resolving to abandon the use of all technology in an educational context. These four approaches require increasing levels of effort and expertise but are applicable for most educators.
The first element is that of responsibility, or duty of care. Educators should acknowledge the types of negative aspects set out above and not unknowingly commit students to the use of technologies or approaches that can lead to invasion of privacy or polarization. Higher education operates within society, and so has a role both in shaping how communities use such technology and in holding technology companies to account.
The second element is related to this and can be termed “appropriate skepticism.” Educators should apply their critical skills to question the claims made in technology press releases and media reports. This does not entail rejecting all technology, but rather having a healthy, questioning approach to claims regarding its impact.
The third element is to take both of the preceding elements and use them to actively develop skills in students so they can recognize and deal with these issues. For instance, Caulfield (2017a) developed a free, open, online textbook that educators can use to develop these critical skills in students. He based this approach around four moves:
Check for previous work — Look around to see if someone else has already fact-checked the claim or provided a synthesis of research.
Go upstream to the source — Most web content is not original. Go to the original source of the claim to understand the trustworthiness of the information.
Read laterally — Once you get to the source of a claim, read what other people say about the source (publication, author, etc.). The truth is in the network.
Circle back — If you get lost, hit dead ends, or find yourself going down an increasingly confusing rabbit hole, back up and start over. Knowing what you know now, you’re likely to take a more informed path with different search terms and better decisions.
This type of activity can be implemented in all subjects and has the advantage of being useful for the study of the topic itself, rather than a separate and often dry “digital competence” type of activity.
The fourth element is to engage in research and evaluation or practice that counters the dark side of tech. The response by academics to any social development is to engage in research and gather evidence. Whether this is addressing the claims of technology, investigating how algorithms shape behaviour, or developing tools that counteract some of the negative aspects, there is a need for universities and research funders to bring critical, research-based approaches to much of ed tech. Golumbia and Gilliard (2018) highlighted examples where resistance to invasive uses of technology has prevented their development, such as the backlash against the Peeple app, which allowed users to give people a rating without their consent. These examples indicate that the negative implementation of technology is not inevitable and that educators can play a role in facilitating these acts of resistance through education, evidence, and analysis.
Ed tech research, then, has begun to witness a shift from straightforward advocacy, which tended to promote the use of new technologies, to a more critical perspective. This can be framed as a dystopian turn. The early technologies in this book — the web, constructivism, wikis, CMC, OER — were accompanied by literature often marked by an exploration of the possibilities of rethinking education in terms of social justice and radical, student-centred visions. In the later chapters, which look at AI, learning analytics, MOOC, and blockchain, there are certainly advocates of these technologies for improving the learning experience, but the accompanying literature also raises issues relating to privacy, ethics, surveillance, and de-professionalization. If the early years covered in this book were characterized by excitement and hope, the later entries are marked by concern, debate, and anxiety.
There is still insufficient critical thought in much of the ed tech field, but arguably the year 2018 marks a more widespread and receptive approach to critical perspectives. If the evangelist and skeptic approaches represent two distinct groups, then sitting in between them is the group most focused on ed tech: the practitioners in universities, schools, and colleges who want to do the best for their learners. Finding a means to navigate this landscape is an important function of that role.