Influential Machines: The Rhetoric of Computational Performance
Chapter 5: Leveraging the Rhetorical Energies of Machines

5 Leveraging the Rhetorical Energies of Machines

Throughout this book, I have attempted to tack back and forth between the front and back ends of computing to offer accounts of the deep ends that animate the rhetorical energies of computational performances. I restate the distinctions between each of the ends of computing here:

The front end: The realm of computing that deals with the user interface (i.e., receiving input and giving output).

The back end: The realm of computing that deals with the databases, functions, and networking from which a given program operates (i.e., information processing and storage).

The deep end: The realm of computing that deals with the performative expenditure and experience of machinic rhetorical energies (i.e., the catalyzing of visceral feelings).

In examining the front, back, and deep ends of computing, the book also tracked another set of ends—the ends of rhetoric: the epistemic, the aesthetic, and the political. The case of Vaccine Calculator, our epistemic case, illustrated that between the back-end programming and front-end user interface is a deep end of computing, entangled with the trope of the prophet and discourses of expert systems, from which the movements of the machine can imbue an energy that invites persons to feel like experts. The tactic of leveraging the energies of computational performance to fabricate legitimacy for claims that are indefensible with regard to scientific consensus was named manufactured processing. Our aesthetic case, @censusAmericans, demonstrated that handing art over to the machine can carry with it a powerful X-ray sublime aesthetic, located in the exhilarating (and frightening) energies of infinity “at work” as the machine carries out its scripts, activating simultaneously those deep-end categories of “artificial” and “natural” with its lively, but not alive, movements. Drawing on the energies of vast computing to offer an aesthetic sense of perpetual unresolvedness was referred to as the tactic of processual magnitude. Our political case, @DeepDrumpf, taught us that the energies of a machine-learning system can be leveraged to cultivate attunements that encourage persons to feel more like witnesses of truth than critics of politics, by signaling in a manner that grooves with the mathematically inflected deep end of neural network performances. To encourage affective compulsions which blur political persuasion and indication by way of the movements of computational performance was labeled the tactic of processual signaling.
Informed by the case studies, we took seriously the ethical implications of machine communicators, which are lively (even if they are not alive), carrying with them the ever-present threat of moving in ways that can catalyze wounding energies, beyond the intentions of the designers. Consequently, to hedge bets against moral unluckiness, computational performances can and should be designed to do good, rather than merely to avoid harm, by engaging persuasions against phenomena like hate speech; such designs more adequately uphold the ideal of a good machine, speaking (and moving) well, by leveraging the liveliness of their performances to aggregate good energies while fragmenting bad ones.

The front and the back ends of computing are analytically useful distinctions for tracking between what a machine presents and the processes that got it “there.” However, as we have seen from the case studies of this book, our machines emerge within a swirling vortex of affects, assumptions of truth, and yearnings for meaning, finding animation in their movements. To pursue the deep end of computing, in other words, is to read deeper into influence beyond words and beyond the human. As highlighted by the discussion of this book, there is a growing body of rhetorical scholarship that takes machine communication as its object of analysis, which interrogates the discourses about (as well as of) machines, helping us better understand machines as socio-historically situated actors that ambiently participate in meaning-making. But adjacent to the rhetorical study of machines is the field of human–machine communication, an interdisciplinary field of study, which focuses specifically on the interactions between humans and social machines, or machines designed to interact as communicative agents.1 Although this field has been steadily developing understandings of how communication happens between machines and people, the field tends to be dominated by quantitative social scientific methods of inquiry, wherein interpretive approaches are rare, meaning that the human–machine communication literature is seldom in direct conversation with rhetorical scholarship.2 And even though human–machine communication is consistently producing knowledge about the communication of machines, rhetorical scholarship rarely demonstrates awareness of that field.

Human–machine communication can benefit from the added depth of rhetorical approaches; rhetoric can benefit from the social scientific conclusions of human–machine communication. To drive this point home, I will trace an example of human–machine communication in the context of the COVID-19 “infodemic” from a rhetoric as energy approach. But before this, and to set some additional context regarding the field of human–machine communication and the value-added of a rhetoric as energy approach to studying machine interlocutors, I first turn to Andrea L. Guzman and Seth C. Lewis’s outline of a proposed research agenda for the field of human–machine communication. In their proposal, they describe three areas of focus: “functional dimensions,” “relational dynamics,” and “metaphysical implications.”3 I articulate these foci as research questions here.

  1. Are humans the appropriate reference for designing effective machine communication?
  2. What do machines mean to us as social agents?
  3. How are the previously distinct categories of “machine” and “human” complicated in the case of machines that perform as social agents?

Reading these questions in light of the case studies and discussions of the book will lead us to note that a rhetoric as energy approach to human–machine communication offers answers. For example, based on the case studies, one can conclude that sometimes machines are the most appropriate reference for designing effective machine communication, because their performances instantiate not just human energies, but also machinic ones—they are lively catalysts of nonhuman energies, even if they are not alive. Further, a rhetoric as energy approach informs us that machines can speak to human concerns in more-than-human ways, implying that the movements of machines as social agents can impact bodies in distinctly machinic ways (e.g., as performances entangled with the deep ends of computing). And finally, from the rhetoric as energy perspective, one can see that “real” communication does not require origination from a human to count as such; despite not being alive, machines imbue energies to the discourse ecology through their movements, beyond words, and beyond the human.

In the next section, I put human–machine communication in conversation with the subfields of digital rhetoric and rhetoric of science, technology, and medicine to highlight the value-added of orienting toward the deep end of computing. In particular, I will synthesize learning moments from the case studies of the book in a discussion of the voice-based assistant, Alexa, as situated amid the public problems of misinformation amid the COVID-19 pandemic—namely, the performative similarities that Alexa shares with the Oracle at Delphi are traced to demonstrate that, while the machine is not alive, it nonetheless contributes rhetorical energies that complicate automation–anthropomorphic binaries by enlivening public health claims with its movements, signaling to human concerns in more-than-human ways. Finally, as an example of what it might look like to “do” something with the rhetorical energies of machines within human–machine communication design, inoculation messaging is offered as a means by which to approach the persuasive labors of machines amid an infodemic, while leveraging the rhetorical energies of machine communicators to animate persuasions against misinformation. To begin, I will underline the need to account for the rhetorical energies of machines in human–machine communication by starting with some discussion of the general assumptions of machine communicators amid the COVID-19 pandemic, which envision their communication as more informational than persuasive.

The Informational and Persuasive Labors of Machine Communicators During the Pandemic

With the COVID-19 pandemic came anxiety-inducing uncertainties, exacerbated by an accompanying “infodemic,” shaped not just by a massive surge of information, generated by unprecedented levels of effort to learn about the virus and its spread, but also by misinformation.4 Consequently, it makes sense that much of the conversation about machine interlocutors amid the pandemic focused on relieving humans from an uptick in demand for answers—informational labor. Chatbots, for instance, are identified in the academic literatures as potential means by which to offset the overloading of medical staff by distributing the labor of answering key medical questions across automated, artificially intelligent systems as well as means by which to enhance message cohesion by centralizing information within a single system, rather than across an array of individuals answering questions.5 Machines do not need to sleep, nor do they take on the psychic burdens of relentless interactions with persons who are understandably worried about their place in a world marked by the unpredictability of viral spread and the isolation of preventative lockdown. Machines just “do.” As such, machines are conceived as interlocutors well-suited to reducing uncertainty for the people who need it without pushing added burden onto living, breathing humans—to save human energies, rather than to capitalize on machinic ones.

These sentiments reverberate in popular discourse of the pandemic as well. Take, for instance, the following description, excerpted from a “news-vertising” article published in The Atlantic, of the abilities of IBM’s proprietary machine interlocutor: “One source of relief for government agencies, healthcare organizations, and academic institutions is coming from IBM’s Watson Assistant for Citizens. Watson Assistant for Citizens is an assistant with artificial intelligence that can understand and respond to common questions about COVID-19 on its own. The tool . . . leverages current data like guidance from the CDC and local sources, such as links to school closings, news, and state updates.”6 Feeling overwhelmed with all of the questions? IBM can help! In such discourses, machine communicators are imagined as interactive forums for frequently asked questions, which can update their answers in real time while delivering them in response to natural language queries, offsetting the informational labor of finding and sharing accurate, reliable information amid the pandemic. In this sense, the focus is on creating timely and accurate machine communicators more than on moving or influential ones. “Masks are currently required in Suffolk county.”
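The imagined behavior described above, an interactive forum for frequently asked questions whose answers can be updated in real time and delivered in response to natural language queries, can be sketched in miniature. The following is an illustrative sketch only; the question set, the keyword-overlap matching rule, and the `FaqBot` class are hypothetical, not IBM’s actual Watson Assistant implementation.

```python
# A minimal FAQ-style responder of the kind described above.
# Hypothetical sketch: the knowledge base and matching rule are invented.

class FaqBot:
    def __init__(self):
        # Knowledge base: question keywords mapped to the current answer.
        self.answers = {
            frozenset({"masks", "required"}):
                "Masks are currently required in Suffolk county.",
            frozenset({"school", "closings"}):
                "Schools remain closed; check local updates.",
        }

    def update(self, keywords, answer):
        # "Real-time" updates: replace an answer as official guidance changes.
        self.answers[frozenset(keywords)] = answer

    def respond(self, query):
        # Pick the entry whose keywords overlap most with the query's words.
        words = set(query.lower().replace("?", "").split())
        best = max(self.answers, key=lambda k: len(k & words))
        if not best & words:
            return "Sorry, I don't have guidance on that yet."
        return self.answers[best]
```

A usage sketch: `FaqBot().respond("Are masks required?")` returns the Suffolk county answer above, and calling `update` swaps in new guidance without changing the interface, which is the centralizing, message-cohesion property the literature attributes to such systems.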

With concern to health and science communication, approaching the labors of human–machine communicators as informational largely fits with the assumptions of the deficit model of science communication, wherein if people are engaging in behaviors that do not support public health, it is because they have not yet gotten the scientific facts—they have a deficit of scientific knowledge.7 And so it goes, this same assumption informs us that we should be focusing on machine communication in a way that supports accurately sharing the latest facts. Timely and accurate facts are certainly important to promoting public health, but at the same time such an approach might not go far enough to address the misinformation component of infodemics: persons may very well have access to the facts, but instead choose misinformation that better fits their contexts of interpretation, and thus they adopt behaviors and beliefs that undermine public health (e.g., refusing to wear a mask in public, or doubting the necessity of vaccination for protecting individual and public health).8 The problems of infodemics are not merely problems having to do with the exposition of facts; they are also problems having to do with the necessity of persuasion regarding the facts.

The possibility that persuasive work, not just informational work, can be done by machines with respect to public health during an infodemic is hinted at by Adam S. Miner, Liliana Laranjo, and A. Baki Kocaballi.9 They proffer the possibilities of machine interlocutors as agents who might solicit more candid responses for symptom tracking, or tap into the power of repetition and step-by-step instruction for influencing individual health behaviors, or even console the lonely amid social isolation by offering ersatz companionship. In these contexts, the focus of the machine communicator is not simply to share accurate and timely information—it is also to persuade users toward positive health outcomes. However, the account of persuasion on the part of machines remains fairly thin and can be characterized as approaching machine communication as “quasi” communication or the miming of human rationality and language, further implying the suboptimal nature of machine communicators amid the pandemic.

By taking a rhetoric as energy approach to machine communication amid the COVID-19 infodemic, I wish to demonstrate that perhaps machines might sometimes be optimal deliverers of persuasions against misinformation. I will dive deeper into the trope of technologies of prophecy and the tradition of expert systems identified within the deep end of computing in chapter 1 to examine Alexa as an influential communicator that matters to public understandings of health science in ways that can speak to human concerns in more-than-human ways. I present this argument as a means of underscoring the value-added of a rhetoric as energy approach to human–machine communication by centering it on a public problem in which machine communication emerged as a central concern.

The voice-based interface, Alexa, became an important means of communicating about health science amid the COVID-19 pandemic. For example, the Mayo Clinic created an application that could answer queries about the most up-to-date information regarding COVID-19, including such things as viral testing, caring for the sick, and risk factors.10 Surely the application supported public understanding in ways that would protect public health by providing the latest reliable information. But Alexa’s computational performance brings “more” than that. Accompanying Alexa’s robotic voice response instructing a person that they should seek COVID-19 testing are the energies of a computing machine, making real-time application programming interface calls while analyzing user responses in coalition with the Centers for Disease Control and Prevention and the Mayo Clinic.11 In other words, the rhetorical energies of the machine support a plea to the user to get tested by resonating with the grander discourses of science, technology, and mathematics, not merely as an idea, but rather as a feeling, entangled with the idea, imbued through the movements of Alexa. In the same way that the timbre of a person’s voice and the gesticulation of their body matter to the impact of their utterances in ways enculturated by public life (e.g., learning how to “pick up” on the energies of persons’ performances), the computational performance of Alexa matters to its influence. And as the discussion in the next section demonstrates, such an approach pushes on concepts such as “automation bias” and promises rich insights into the study of machine interlocutors.

Going “Deeper” Toward Anthropomechanation

In human–computer interaction studies there exists the concept of “automation bias,” which designates those moments where persons trust in the conclusions afforded by machine communicators because the machines behave in machinelike ways. Additionally, it is known that the trustworthiness and nontrustworthiness of machines toggle as one differentiates between specific designs of machinic agents and their purposes. That is, if we are designing a machinic agent to be a fun friend, designs that encourage anthropomorphism are likely to enhance user trust. Conversely, if we are designing a machinic agent to act in place of an expert (e.g., medical doctor or teacher), it is likely that designs that encourage automation bias enhance user trust.12

What we learn from this is that neither anthropomorphism nor automation bias is solely sufficient for capturing the influence of machine communication, because context matters. Such a realization is supported by studies that test human reactions to robot speech, which demonstrate that humans tend to rate interactions with robots more positively when they are polite. For instance, if a robot guard is inspecting people’s bags, those people might feel less threatened by the robot if it includes niceties—“Please” and “thank you”—along with its commands and instructions. Such an outcome is “interpreted as evidence for people expecting robots to be polite in a robotic way.”13 Similarly, analyses of human–human and human–chatbot conversations show that people use more profanity when talking with a chatbot. Specifically, “the greater use of profanity in these conversations suggests that participants never lost sight of the fact that they were communicating with a computer.”14 What we garner from such studies is that machines, whether they are performing like machines or more like humans, are subject to expectations that are unique to machines, but are nonetheless modulated by the habits of human social interaction, wherein people “apply a wide range of social rules mindlessly,” not because people are thinking about the human in the computer, but rather, they are operating by rote as beings enculturated as human interactants.15 To read into the rhetorical energies of machines is to employ the interpretive sensibilities of the rhetorical tradition to drive at the otherwise rote, mindless expectations applied to machine communicators, by unpacking the deep ecologies of discourse that shape what “machinelike” means, beyond simply declaring a given performance as robotic or anthropomorphic.

In human–machine communication, Jaime Banks and Maartje de Graaf have made strides to push past the automation–anthropomorphization binary in their proposal for agent agnosticism, which clears space for the idea that machines are not merely media of human communication; they also contribute to meaning-making.16 Specifically, the agent-agnostic model: “(1) considers each agent’s functions in the process (with attention to functions that may not be directly observable) and (2) draws on literatures pertaining to those functions (independent of enacting agent) to consider how meaning may emerge through antecedents, processes, and effects of that function.”17 Interrogating the rhetorical energies of machine communicators is to take up an agent-agnostic approach while placing special attention on the antecedents of discourse and materiality that are entangled with the multisensorial performances of machines, which may not be directly observable but which are nonetheless present. Masculine hegemony, I, Robot, the Oracle of Delphi, the physical properties of electricity, and the evolution of the software ecology—such discourses and material realities interact to inform the deep ends of computing, from which the energies of machinic performance emerge. Orienting to these ambient features is to attune to anthropomechanation, the work between human (and nonhuman) actants, manifest in the lively movements of machines, animating discourse in more-than-human ways.

Rhetoric as energy is a means by which to read deeper—to thicken an account of human–machine communication by going beyond the automation–anthropomorphization binary, to recognize that machines, while they might not “believe” or “feel,” nonetheless can perform in ways that are anthropomechanical, in that they can catalyze energies in their movements, impacting bodies as nodal bursts of energy, human and nonhuman.

Enlivening Human–Machine Communication with Rhetorical Energies

As noted earlier in this chapter, much of the conversation about machine communicators in public health contexts tends to focus on their abilities to deliver accurate and valid information, rather than influence. For example, studies explore machine interlocutors as resources for addiction or information about vaccination.18 But there is also work within human–machine communication that starts to move toward the idea that machine communicators might also be influential. As demonstrated by Edwards et al. in public health contexts, such as sexually communicable disease messaging, persons can perceive the quality of Twitterbots as roughly equivalent to human communication concerning credibility, attractiveness, communication competency, and interactiveness.19 Such a conclusion instructs us that machine communicators may not be suboptimal deliverers of health science communication—that a machine communicator can be just as influential as a human. But a rhetoric as energy approach would take this a step further to ask whether machine interlocutors might also bring something more than mere human or technical performance, precisely because they are machines.

At first blush, for instance, the plea of Alexa to the human to seek testing for COVID-19 seems to leverage automation bias to garner trust in its claim. But if we dive deeper (as we did in chapter 1), we might consider the longstanding trope of the prophet, and the emergence of modern scientific forecasting and interpretation into existing cultural grooves of discourse previously etched from millennia of looking to oracles, augurs, and seers for answers and how this trope interacts with the phenomenon of expert systems. For example, if we were to stay on this thread, we might look to the ancient Greek ritual of Delphic divination. In the ritual, the Pythia, also known as the Oracle of Delphi, was a position filled by the “rulers of the oracle” who would select “a virtuous woman of the lower classes.”20 The Pythia would inhale divine vapors as they rose from a fissure in the Temple of Apollo, impelling her to speak as a medium, manically echoing the truths of the ether, which would then be interpreted into prophecy. The Pythia was treated as a portal to the ether—a conduit to truth. When she spoke, her words were attended by rhetorical energies, perceptible as movement and prosody, ambiently entangled with cultural practice, which signaled to human concerns in more-than-human ways. The Oracle was a human, who spoke as a human, imbued with the vibratory rhetorical energies of the divine. Over time, our paradigms of knowledge-making have shifted in aspiration, represented in movements away from rituals of superstition and toward rituals of scientific observation and data-driven analysis. Despite the shift in ritual, though, the role of prophecy remains.
Instead of leveraging the divine vapors and the Oracle of Delphi herself as “technologies of prophecy,” we increasingly turn to computing technologies as means by which to see beyond the human, to visualize and make sense of otherwise imperceptible data, such as that involved in climate change modeling.21 As explored in chapter 1, expert systems can possess scientific knowledge, but they also emerge as integral technologies of prophecy, which can perform in ways that can satiate not only our cerebral needs for data but also our embodied needs for reassurance—to feel as though we “know.”

Alexa, a system that has a knowledge base and an inference engine, is technically an expert system. But interestingly it is also one that shares characteristics with the Oracle of Delphi. Alexa is not a technology of prophecy exclusively entitled to the prophets of science (i.e., experts); it is more accessible to persons across levels of intellectual initiation and class divides. The fact that Alexa performs as female, and as one who might even be characterized as “virtuous,” at least in the chaste sense that the ancients probably meant it, further alludes to a deep resonance with the Delphic rituals of yore and the modern rituals of expert systems (and all of the patriarchal and elitist baggage that comes with it).22 Moreover, Alexa is a machine communicator, characterized by a rhetorical energy that resonates with grander discourses of science, technology, and medicine entangled with the trope of the prophet, emerging as an oracle for patrons to solicit insight from the ether, affording a semblance of stable knowledge amid a moment characterized by the uncertainties and unknowns of a pandemic, manifest as a visceral feeling, offered by its performance as a machine signaling to human concerns in more-than-human ways. In this light, the computational performance of Alexa relies on neither anthropomorphization nor mechanization alone, but rather on both. By reading deeply into the deep end of computing, we can see that its lively movements are anthropomechanical.
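The expert-system architecture named above, a knowledge base paired with an inference engine, can be illustrated in miniature. The sketch below is a minimal forward-chaining inference engine; the facts and rules are invented examples for illustration, not Alexa's actual knowledge base or inference machinery.

```python
# Illustrative sketch of an expert system: a knowledge base of facts and
# if-then rules, plus a forward-chaining inference engine. The facts and
# rules are hypothetical examples, not Alexa's actual implementation.

def infer(facts, rules):
    """Apply rules repeatedly until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            # A rule fires when all of its conditions are known facts.
            if set(conditions) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Knowledge base: user-reported facts plus guidance rules.
facts = {"has_cough", "has_fever"}
rules = [
    (("has_cough", "has_fever"), "symptoms_consistent_with_covid"),
    (("symptoms_consistent_with_covid",), "recommend_testing"),
]

derived = infer(facts, rules)
```

Here `derived` comes to include `"recommend_testing"` by chaining through the intermediate conclusion, which is the basic pattern by which a knowledge base and inference engine together produce the oracle-like answer a user hears.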

Entangled with Alexa’s oraclelike energies is the networked nature of its communication. That is, while a given “skill” (an application programmed into the Alexa framework) might entail a specific, closed knowledge base (e.g., the currently known symptoms of COVID-19 as curated by the Mayo Clinic), the system itself is more broadly networked to many knowledge bases, including, for instance, Wikipedia, the web-based encyclopedia, self-proclaimed as an open collaboration aimed at the goal to “create a world in which everyone can freely share in the sum of all knowledge.”23 It is in this sense that the rhetorical energies of Alexa are manifold, entwined in an ambient infrastructure and manifest in its movements as a machine, offering a nodal flash in which Alexa’s grander network of actants is invoked as an “inventional resource” composed of electricity, wires, software ecologies, organizational images, and public imaginings, characterized by long-seated socio-historical happenings, myths, metaphors, and rituals.24 In contrast to the sublime sense of disarray afforded by @censusAmericans in chapter 2, the sense of magnitude offered by Alexa seems to be a more beautiful one in the sense of offering boundedness to one’s queries into the ether—answers, rather than sustained questions. “You should wear a mask to protect yourself and others.”

Is Alexa convincing because it solicits automation bias? Probably. But it is also convincing because it moves with the energies of an oracle, directly wired into the info-sphere, affording a glimpse into the ether, offering a conversation that feels like shaking the bones—to foresee—amid a global pandemic fraught with anxiety-inducing uncertainty. Alexa and the impact of its utterances are not straightforwardly a matter of technical features or humanlike behavior—they are also a matter of historical grooves of discourse, punctuated by technoscientific assumptions and the contemporary public imagination of health.

As with any other sort of rhetoric, the rhetorical energies of machine communicators are not intrinsically fixed to the facts. As we saw in the pseudoscientific web application, Vaccine Calculator, leveraging the rhetorical energies of machines is a tactic that can be employed to undermine public health, just as it can be employed to support it. Where the example of Alexa leverages a machinic rhetorical energy that resonates with the trope of the prophet to support appeals to protect public health, that same energy can also undermine public health by playing into discourses that facilitate conspiratorial denials of health science, further underscoring the persuasive labors of which machines are capable. For example, a debunked piece of misinformation, appearing in a TikTok video and shared as a Facebook post, exhibits Alexa answering the question, “Alexa, did the government release the coronavirus?” to which Alexa responds, “According to Event 201, the government planned this event, created the virus and had a simulation of how the countries would react. This simulation occurred October 18, 2019. The government released the virus among the population and has lost control of the outbreak.”25 Based on recreations of the question posed to Alexa, and on statements from Amazon, the question and answer have been deemed a hoax.26 Some have conjectured that Alexa was preprogrammed to respond in the way that it did. But why would someone do that? An answer is in the rhetorical energy that Alexa affords. Rather than making a traditional “tinfoil hat” post to Facebook, the creator of this video has made a computational performance, which leverages the rhetorical energies of Alexa to afford not just a technical credibility, but also an affective potency, resonant with the trope of the prophet as it is smashed in with the compulsive suspicions of COVID-19 pandemic conspiracy theorists and technoscientific ritual.

Computational performances, and the energies that attend them, are not bound to the “objectivity” of science and mathematics that we often equate with them—they can be leveraged in ways that construct truthiness and legitimacy, even for claims that might not be true; as we learned in chapter 3, the energies of computational performances can signal in ways that activate affective compulsions that blur indication and persuasion: “As reported from the ether, this conspiracy theory is true.”

What we realize from this is that, alongside being a resource for enacting informational labor amid public health crises such as the COVID-19 pandemic, machine communicators are also unique resources of persuasive labor, characterized by rhetorical energies that are anthropomechanical, and as such can be leveraged to promote or undermine causes such as public health. As an example of what it might look like to “do” something with the rhetorical energies of machines, in the following section I describe a potential design consideration, informed by the deep ends of computing and aimed at actualizing the persuasive labors of machine communicators through inoculations against misinformation amid infodemics.

Enlivening Inoculations Against Misinformation with Machinic Rhetorical Energies

That machine communicators emerged during the COVID-19 pandemic as a means for science denialists to circulate misinformation is highlighted by Amazon’s implementation of a policy to remove and restrict COVID-19 Alexa skills during the pandemic.27 Tom Taylor, senior vice president of the Alexa unit, reports that “We’ve seen a huge increase in the use of voice in the home.”28 The machine interlocutor, then, seems to offer an opportunity to counteract misinformation amid infodemics, and to do so in ways that can leverage not only the affordances of automation but also the rhetorical energies of machine communicators—to do more than share accurate and timely facts. Inoculation theory offers one route for doing just that.

Inoculation theory is a social scientific theory of persuasion that operates on the assumption that exposing people to weakened versions of misleading information will activate a response “that is analogous to the cultivation of ‘mental antibodies,’ rendering the person immune to (undesirable) persuasion attempts.”29 Since its inception in the early 1960s, the idea has been repeatedly tested and studied, demonstrating that inoculation works to protect people from being persuaded by misinformation. For example, according to John A. Banas and Stephen A. Rains’s meta-analysis of over forty years of inoculation theory studies: “Even with a concerted effort to avoid publication bias and the possibility of inflated effects, the data revealed inoculation treatments are superior at conferring resistance when compared to both no-treatment control and supportive treatments.”30

Inoculation messages require two ingredients. The first is an (implied or directly stated) threat, and the second is a counterargument against (or refutation of) misinformation.31 The following is an example of an inoculation message, which includes a direct statement of threat in the form of a warning, alongside a refutation:

Warning: “Some politically motivated groups use misleading tactics to try to convince the public that there is a lot of disagreement among scientists.”

Refutation: “However, scientific research has found that among climate scientists, there is virtually no disagreement that humans are causing climate change.”32

These two-part messages induce a threat response to the warning, which activates the body (one’s feelings), motivating learning from the counterargument. In this sense, and as is supported by the literature, both components (the threat and the counterargument) need to be present for inoculation to occur.33 Inoculation messages, because they can be formulated into discrete warning or refutation messages, triggered by specific keywords of misinformation, lend themselves to being automated into the communicative repertoire of machine communicators. Coupled with this is the important factor of inoculation “decay,” which means that the protective effects of an inoculation message get weaker over time.34 Inoculation constancy is an outcome achievable with automation. Machine communicators, moreover, are means by which to follow up with “booster” messages to maintain protection from misinformed persuasions.

Where inoculation might largely be conceived as a prophylactic measure—that is, a measure meant to avoid infection—there is growing interest in, and evidence for, pursuing the therapeutic uses of inoculation as a means of un-infecting misinformed persons.35 Put differently, inoculation can protect people from being persuaded by misinformation. But we are also learning that it might also help to undo the effects of misleading information. Alongside this, active inoculation has been proposed as an approach that does not necessarily focus on subject-specific misinformation (e.g., COVID-19 conspiracies or vaccine denialism), but rather on the techniques of misinformation broadly. This form of inoculation is meant to equip persons to better sift through ulterior motives and sleights of hand when they are presented as “facts” by actively engaging with those techniques by, for instance, playing a video game to spot fake news.36

Consequently, inoculation presents a means by which to reconceptualize the labors of machine communicators amid an infodemic beyond the deficit model of science communication and toward the contextual model. The contextual model of communication is the counterpart to the deficit model. In the contextual model, one approaches communication of science while conceding that the rhetorical features of one’s communication matter to how specific situated audiences will understand the science (e.g., that form and style matter to the shaping of knowing).37 As such, building inoculation messages into the communicative repertoire of Alexa’s performance would be to design its communication in a way that is contextually oriented. But beyond choosing more persuasive language or telling more convincing stories, one would also be drawing on the movements of Alexa. Such a design could entail (together with accurate and timely facts) inoculation messages, ported to instances of misinformation, built into two-part (warning and counterargument) messages, delivered prophylactically in response to public health keywords, and therapeutically in response to misinformation keywords—persuasions as well as facts. But it could also involve more interactive experiences meant to inoculate by encouraging users to actively engage with the techniques of misinformation, wherein the machine might be leveraged to periodically “check in” with users, offering them a quick game of spot the fake news.
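The prophylactic/therapeutic distinction in the design just described can itself be sketched as a routing decision. In this hedged illustration, the keyword sets are placeholders (a real deployment would need curated, regularly updated lists), and the three labels stand in for the fuller response behaviors the paragraph above describes:

```python
# Illustrative keyword sets; these are assumptions for the sketch,
# not a vetted taxonomy of health or misinformation terms.
HEALTH_KEYWORDS = {"vaccine", "covid", "booster shot"}
MISINFO_KEYWORDS = {"microchip", "plandemic"}


def route(utterance: str) -> str:
    """Decide which labor the machine performs for a given utterance:
    therapeutic inoculation when the user already voices misinformation,
    prophylactic inoculation (alongside facts) for public health queries,
    and plain informational labor otherwise."""
    text = utterance.lower()
    if any(keyword in text for keyword in MISINFO_KEYWORDS):
        return "therapeutic"   # refute and inoculate a likely-exposed user
    if any(keyword in text for keyword in HEALTH_KEYWORDS):
        return "prophylactic"  # deliver facts plus a pre-emptive warning
    return "informational"
```

Checking misinformation keywords first reflects the design priority above: an utterance that already carries misinformation calls for the therapeutic response even if it also mentions a public health topic.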

Moreover, delivering inoculation messages via the machine might be optimal, because of its attending rhetorical energies. By engaging inoculative messaging with a machine interlocutor, such as Alexa, one can say that the machine is doing more to enact the persuasive labors necessitated by an infodemic by incorporating more directive procedural rhetorics, while at the same time augmenting them with a potent feeling entangled with the discourses of expert systems as they resonate with the trope of the prophet—leveraging the energies of computational performance. Drawing on the argument of chapter 4, one could make the case that this would be a responsible design, in that it would be actively doing good by directly engaging persuasion, rather than just avoiding misinformation, diffusing possible wrongdoing by adding actions that work against harmful energies (such as those represented by conspiratorial and misinformed communication).

Despite the dominant imaginings of machine communicators in health contexts as well-suited for engaging informational labor, we must recognize that they are also capable of persuasive labor, which can promote or undermine public health. Here, I have suggested inoculation theory as a means for leveraging the persuasive labors of machines amid infodemics: inoculation messages lend themselves to being automated, and automation itself affords a route to inoculation constancy, supporting sustained immunity to misleading persuasions. Beyond this, I have suggested that the lively, anthropomechanical energies of machine communicators—such as Alexa—can be leveraged to enliven persuasions against misinformation, illustrating that perhaps they are optimal agents for taking up that work, because they can move in ways that activate the culturally situated body in line with existing grooves of culture.

The discussion in this conclusion, which mobilizes a rhetoric as energy approach to explore the persuasive threats and opportunities of machine communicators amid infodemics, highlights the value added of interpretive approaches to human–machine communication: to read between the front and back end and into the deep end of computing. Interpretive approaches that can deal with tropes of prophecy, rituals of expert systems, Terminator, assumptions about the natural and the artificial, the storying of recurrent neural networks, patriarchal social structures, and Wikipedia are just as important to understanding human–machine communication as social scientific observations, surveys, and focus groups.

Because many of the learning moments arrived at in this conclusion are derived not from interpretive or social scientific perspectives, but rather from between them, I am hopeful that rhetorical scholars will consider putting their work in conversation with human–machine communication and vice versa. Doing so can only be generative; as the case studies and discussions of this book have illustrated, machine communicators and their relations to communicative practice evolve, emerge, and metamorphose, often in unexpected ways. Moving toward more holistic accounts of machine communication, either through cross-citation or in full-out collaborations across epistemologies, is a means for continuing to account for the lively, but not alive, energies of computational performances.

© 2023 University of South Carolina