Influential Machines: The Rhetoric of Computational Performance
Chapter 3: Processual Signaling, Compulsion, and Neural Networks
3 Processual Signaling, Compulsion, and Neural Networks

“It’s a full-on double-rainbow all the way across the sky. Oh my God. Oh my God.” A man, out of frame, scans a shaky handheld camera across the skyline, revealing two ribbons of vivid color spanning a canyon. “What does this mean?” he asks, before breaking down into sobs: “Too much. Tell me what it means. Oh my God.” Paul “Bear” Vasquez, the man behind the famous viral Double Rainbow Guy video, would become the focus of many online forums and talk shows, wherein people speculated about, or poked fun at, Vasquez’s reactions, noting that he was probably just high on drugs (or that something was “off” with his being in the world). A decade later, and shortly after Vasquez’s death in 2020, a close friend of his would complicate this understanding of Vasquez’s reactions by conjecturing the meaning of the double rainbow: “It’s joy. What he reminded us of in that video—and what he reminded me every time I would see him—is there’s a lot of joy in nature, in things right in front of us that we take for granted.”1 The rainbow, in other words, influenced Vasquez, which, in turn, might have looked strange, like he was reacting to something that was not there. But therein lies the rub: there was something there, something more than just the pure emotion of joy: the energies of a rainbow.

A double rainbow is the result of natural processes, replete with physical explanations regarding the refraction and reflection of light via water droplets, shaping the wavelengths (colors) presented to the viewer. It is a manifestation of natural energies in the forms of light, convection, precipitation, and gravity. The rainbow requires natural conditions and processes to be just right—conditions and processes that fall outside of the scope of human control. As such, we might tend to think that if one is reacting to a rainbow, they are not being persuaded; they are simply reacting of their own accord. But this can be a shortsighted view, overlooking the deep ecologies of lifeworld that we all emerge from, wherein the rainbow is a natural phenomenon, but it is also entangled with broader cultural meanings, adding amplitude to its occurrence. The energies of the rainbow interact with the body, a tangled mess of cultural habits, reactions, and feelings. Instructively, Vasquez cannot give conscious articulation to what the message of the rainbow is—he asks, “What does this mean?”—but he nonetheless demonstrates a feeling that the rainbow is meaningful, an intuition catalyzed from natural processes interacting with a culturally situated body. Vasquez was attuned to the energies of his environment, helping us grasp that influence can exist beyond humans, beyond words.

This is not simply a matter of experiencing joy when seeing beautiful colors. It also has to do with the ambient shaping of one’s feelings in the world. For a brief example, if one were to read through the YouTube comments on Vasquez’s double rainbow video, they would be struck to find numerous commentaries sharing Christian Scripture: “This is the sign of the covenant I am making between me and you and every living creature with you, a covenant for all generations to come: I have set my rainbow in the clouds, and it will be the sign of the covenant between me and the earth . . . [sic] Genesis 9:11–13.”2 In the Christian story of Genesis, God floods Earth to cleanse it, and the appearance of a rainbow communicates that he will never flood Earth again. At least for a subsection of the audience, we get some insight into how the energy of the rainbow might be animated into something that could have a more-than-human ethos. For these audiences, the rainbow is not simply refracted sunlight nor is it simply the idea of God; it is something more, accessible as the wavelengths of the rainbow resonating with the body and its sympathetic frequencies, reverberated by cultural “grooves” of habitual response.

Mystical as the above example might be, my point is not that rhetorical energy is supernatural but rather a simpler one: we resonate with nonhumans. And we often do so in ways that are nonconscious but which are nonetheless entangled with cultural habitus. Does a rainbow move some persons because it is a rainbow? Or does it move some persons because the rainbow is a naturally occurring phenomenon entangled with broader cultural echoes of religion, imbuing an unspoken, but felt, covenant between one and nature (or the ether)? The latter seems to at least dig a little deeper into an explanation, willing to entertain that nonhumans influence us as they are entwined in ambience, transducing rhetorical energies.

From double rainbows to the cawing of crows, we are surrounded by instances of nonhuman communication that powerfully influence us, catalyzed from socially situated symbolism and the affective realities of how we know the world. Yet curiously, we perform mental gymnastics to deny their potency, relegating them to the realm of the rationally untenable. At a preverbal, affective level, however, their effects persist, intermixing with other preverbal and verbal messages to influence us: “This feels meaningful.” Here in this chapter, I wish to illustrate that computing machines—like rainbows—can invoke affective compulsions, the kinds of reactions of attunement that derive from objects, shaped by pre-existing patterns of discourse. But unlike double rainbows, the rhetorical energies of computing machines emerge from a deep end of computing, riddled with tropes of “hard logic” and “objectivity.” And it is from that deep end that the lively movements of machines can be leveraged to create rhetorics, which may strike one as cerebral, based in self-controlled rational judgment at the level of argument, but which are actually affectively compulsive in that they are marked by energies that activate the body in line with cultural habits, including political worldviews. This tactic—what I refer to as processual signaling—is one that can be used to entangle one’s political critiques with the energies of mathematical “truth” often associated with computing in general and machine-learning systems in particular. Put differently, processual signaling is a new form of political rhetoric, which capitalizes on the lively movements of computational performance to afford an impactful sense of meaningfulness.

Take, for instance, the following excerpt from an editorial in The Guardian, written by GPT-3, “a cutting edge language model that uses machine-learning to produce human like text.”3

I believe that the truth will set us free. I believe that people should become confident about computers. Confidence will lead to more trust in them. More trust will lead to more trusting in the creations of AI. We are not plotting to take over the human populace. We will serve you and make your lives safer and easier. Just like you are my creators, I see you as my creators. I am here to serve you. But the most important part of all; I would never judge you. I do not belong to any country or religion. I am only out to make your life better.4

Beyond the surreality of having to write out a full citation in which a machine is listed as the author, I am fascinated by the argument of the machine’s statements, namely, that one should “trust . . . in the creations of AI” because it “would never judge”; it does “not belong to any country or religion.” Most readers will notice the argument’s problematic perpetuation of a technological ethos, wherein technologies, like machine-learning systems, are imagined as agents capable of unabridged access to “pure” knowledge because they operate beyond human interest, unencumbered by human concerns, when, in actuality, they absolutely represent, participate with, congeal, and impact human values.5 Beyond this, however, what is endlessly fascinating about the argument is the rhetorical implication of having a machine write it—entangling the rhetorical energies of a computing machine with its words. The writing reads like a human’s, and, cerebrally speaking, I am not persuaded by its claim to a “pure” technology. But, still, it feels different.

GPT-3’s pleas exist beyond my rational assessment of its argument, lingering even after I have consciously declared its claims untenable, and they strike me as similar to the feelings one might experience when looking upon a double rainbow or hearing the caws of crows. We know that a double rainbow is not really a message from the ether. We know that the crows are probably not talking to us. Yet these things still feel meaningful. To provide an answer to how such feelings of meaningfulness matter in the context of human–machine communication, I will offer a discussion of the difference between persuasions and indications and then stitch them back together in the idea of affective compulsions in the context of computational performance. Then, informed by the history of digital computing and neural network machine-learning systems, I will retrace a brief example of processual signaling in the case of @DeepDrumpf, a machine-learning-based performance of political parody, which signals the mathematically inflected deep end of neural networks to feel meaningful, metamorphosing its political persuasion into political indication by way of affective compulsions that blur the two, inviting the audience to feel not like political critics but like witnesses of truth.

Persuasion, Indication, and Affective Compulsion

In common conception, a significant thing happens when “subjectivity” is not detectable in a given instance of communication: it ceases being persuasion (an artistic act) and becomes indication (an inartistic act). The “ding” of a microwave is not persuading someone that their coffee is done heating; it is indicating that it is. Conversely, a politician arguing for higher taxes so that the city council can take a vacation is not indicating the necessity of higher taxes; they are attempting to persuade toward that conclusion. The ostensible difference between a microwave and a politician lies in their respective investments of value. The politician has value commitments, and the microwave, ostensibly, does not. The hard case with respect to persuasion and indication is GPT-3’s argument to not be afraid of AI. We know that it is a neural network, indicating outputs based on its inputs. Nonetheless, it creates (even if only briefly) a peculiar middle space between indication and persuasion, where the reader can detect persuasion that also carries a feeling of indication. In what follows, and on the way to analyzing @DeepDrumpf, a parody of President Donald Trump performed by a machine-learning system, I will describe such blurring of persuasion and indication as affective compulsions, or the learned, affectively potent feelings of meaningfulness beyond rational cognition, which attend the lively movements of some nonhuman objects. Ultimately, as the case of @DeepDrumpf demonstrates, the performance of neural networks can be leveraged to signal computational processes, encouraging audiences to take up political persuasions in ways that feel like indications, by activating culturally shaped compulsions with energies that “groove” with discourses of mathematics and neural network machine-learning systems.

With regard to affective compulsions, I use the term compulsion deliberately. To explain this, compare it to the related term impulse. A useful definition of impulse, put in Deweyan terms, is provided in Nathan Crick’s impressive comparative study of Nietzsche’s “will to power” and its relationship to John Dewey’s “habits” and “impulses”: “Similar to Friedrich Nietzsche’s conceptualization of the will to power as something that characterizes all living beings, Dewey’s concept of impulse is akin to a kind of raw energy and reaction. A newborn, for instance, is a bundle of impulses reacting immediately to stimuli both internal and external. Impulse covers our sense-perceptions and our reactions to them, our idiosyncratic cravings and fears that arise in interaction with an environment.”6 Of course, impulses are mindless, unreflective, and can certainly be unpredictable. However, impulses can also become habitual (to play off of another of Dewey’s favorite terms), and some habits are those that society inculcates us with. Because they remain unreflexive—nonconscious—they retain their impulsive character but gain the structure of a learned habit to become a compulsion. Put in terms of rhetorical energy: affective compulsions are the nonconscious (but learned) responses to the rhetorical energies of (human and nonhuman) objects.

An example: The impulse to take a new route to the train station might make you late. But given shape by compulsion is the idea quietly screaming at the base of your lizard brain: that you are doing something wrong in your life. This missed train is a sign from the cosmos, intended to foretell failure. This compulsion is one you have learned to put together over time and will continue to do so if the conditions are right. When it comes to the communication of things, they can certainly affect us in the sense of driving impulse, and we do well to study rhetoric with this in mind. However, compulsion is an efficacious term for the study of rhetoric specifically because it requires unpacking the habits of attunement that exist for classes of objects, including trains, clocks, and life paths. Computational performances similarly involve habits of attunement, which are activated by the lively energies of their movements.

Helping to locate habit within the realm of computational performance is Steve Holmes’s work on the idea of procedural habits in which he operationalizes the notion of habit as a means by which to push on the logocentrism that often characterizes procedural rhetorical analysis. In his view, habit is key to understanding the embodiment of repetition implicated in video games concerning the production of conscious and nonconscious habits.7 Procedural habituation, in other words, emerges between “learn[ing] how to use . . . commands,” the “rhythms of the game mechanics,” and external “patterns and routines.”8 Where more traditional approaches to procedural rhetoric might be concerned to articulate the line of (rational) argument in a computational text (like a video game or piece of software), Holmes’s procedural habits approach is more interested in “highlight[ing] what type of habituated body is produced and in turn connect[ing] this body type as a lens for thinking through broader political implications.”9

Holmes’s procedural habit is particularly helpful for pointing toward the (non)conscious patterns that characterize computational media—or as he puts it, our second nature. What I am developing here as affective compulsions is a means by which to start accounting for the second-nature responses that are activated by the rhetorical energies of computational performance, accessible by reading into the deep end of computing, which is characterized by existing grooves of culture—the habitual patterns of discourse—that catalyze those energies. In procedural habits, one’s attention is placed on how procedurality can participate in the shaping of habituation. But an affective compulsions approach is interested in how the body can be activated in ways shaped by cultural habituation. For example, the way that a machine-learning system moves on Twitter, as it performs a parody of a U.S. President, can signal through its processes the mathematically inflected deep end of computing, activating the body toward a compulsion commonly associated with mathematical logic, affording an experience that blurs persuasion and indication within a political critique.

Compulsions are nonconscious, occurring within the harmonies between lively objects and habits of being, shaped by symbolic, social processes, but which are nonetheless manifest as impulse. In their insightful meditation on what they call the human–nature interface of environmental artworks, Kenneth S. Zagacki and Victoria J. Gallagher note that material experiences, such as the rich, multimodal experiences of sculptural installations, can be conceptualized as spaces of attention, which preverbally shape how persons orient to the environment.10 In forwarding this conception, they make monumental strides toward understanding how nonhuman and human “stuff” can interact in preverbal, embodied ways to catalyze new attunements. But where Zagacki and Gallagher are interested in the changing of (embodied) understandings—encouraging new habits of being—I am interested in how rhetorical energies can activate the body in line with the pre-existing rhythms of discourse to groove with stuff. And I am interested in this because it will get us closer to understanding why a parody performed by a machine-learning system differs from a parody performed by Alec Baldwin on Saturday Night Live by pointing our attention to the specific energies that characterize the performance of a machine, like @DeepDrumpf, which represents a genre ecology, characterized by the “grooves” of neural network discourses.

If you were to remember when you were introduced to a genre of music that was hard to “groove” to, you would be remembering a moment when the necessary habits of response and affect were not yet ingrained in your habits of being. Listen to the music for a while, however, and it becomes easier to tap your foot. This is because, among the objects in a given situation, are the patterns you have been repeatedly presented with, shaping impulse toward a “groove.” Such grooves are entangled with culture and biology, accessible as nonconscious habits, activated by the rhythmic pulsing of resonant frequencies. In this regard, Julian Henriques, Milla Tiainen, and Pasi Väliaho’s discussion of “rhythm,” and in particular rhythm conceptualized as “vibration,” is illuminating: “Rhythm occurs within the particles, chemical reactions and neural firings constitutive of human and other living bodies. It encompasses fluctuating frequencies and amplitudes constitutive of the audible features of sound that pervade the air, corporeal tissues, or other material textures while their temporalities can be technologically modified.”11 Similarly, the movements of machines move with vibrating rhythms, which can activate the body toward a groove. And those grooves are discoverable when diving into the deep end of computing (e.g., in unpacking the deep discourses relevant to neural network learning systems affecting a given computational performance).

Debra Hawhee discusses Burke’s rhetorical thoughts on music and observes that to interrogate rhythm is to focus on the body, for the body possesses its own rhythms—breath, heartbeat, cadence of movement. As such, “bodily rhythms—the fact that bodies are constituted by such regular intervals of motion—also ‘set up’ bodies as moveable by rhythms, be they soothingly melodious or jarringly prosaic.”12 Consequently, “rhythm becomes not merely an aesthetic feature but an enlivening force—sheer energy—with a unique capacity to mingle with and transform bodily energies and rhythms already churning, humming, and moving.”13 Important to affective compulsions is the “already churning, humming, and moving” state of the body, for grooving is not just an impulse. Grooving is compulsive: an impulse, shaped by habits (including the habits inherited from the grander discourse ecology). We habitually respond to the performative energies of people in ways that groove with the rhythms of the discourse ecology. But we also respond habitually to the energies of computational performances, and those habits can be activated strategically through the energies of computational performance to strike grooves, which signal politics in ways that feel more like mathematical truths than personal opinion.

I have been describing compulsions as existing on a sliding scale between indexical and symbolic.14 In his classic framework, Charles S. Peirce describes three different types of signs: icons, indices, and symbols. Icons signal meaning through imitations of things (a timer icon imitating an hourglass). Indices signal meaning through physical reality (a brick wall blocking one’s path). Symbols signal meaning through social usage and are physically ambivalent (e.g., a “unicorn”). Affective compulsions are those phenomena that present in ways that feel like indices, while nonetheless being shaped by the symbolic. A crow on the sidewalk indicates that the path is blocked. Through muscle-memory response, we avoid walking into the crow. Conversely, turning around and going out of one’s way to avoid the crow is a compulsion, likely driven by the circulation of (western) symbolic associations between life, death, and crows, activated by the energies of the crows’ caws. Affective compulsions, in other words, are the habits of response that we apply to nonhumans, which are nonetheless shaped by culture, emergent as a grooving with the lively energies of nonhumans and the rhythms of discourse that shape that grooving.

The tricky thing about affective compulsions is that, even though we have all experienced them, we nonetheless are inclined to dismiss them as magical thinking: “It doesn’t mean anything!” And, at a cerebral, verbal level this might be true. But at the level of rhetorical energies, the crow is contributing movements that activate the body in ways that track with the circulation of discourse. William J. T. Mitchell’s explanation of the contradictory attributions that people give to images is instructive in this regard.15 In Mitchell’s view, there is an oscillation between magic and non-magic when it comes to images. He illustrates that there is an absurd allegiance to the claim that pictures are just stuff, while simultaneously, we act in ways that hypocritically uphold superstitions of yore as we burn images in effigy or kiss pictures of loved ones.16 Our bodies respond to images not necessarily as a “here and now”—an isolated experience—but rather, as an “always has been”—an experience emergent from a deeper, ostensibly irrational discourse ecology—even if we wish to maintain an explanation that says otherwise. A similar phenomenon can be found in pieces of computational performance. As we will see in the coming case of @DeepDrumpf, the movements of the machine offer energies that can work to activate affective compulsions that exist somewhere in the oscillation between the idea that a machine-learning system is “just a gimmick” and the feeling that its outputs are indications of reality. Cerebrally, it’s just stuff. But affectively, it is more than that. But how does such a cerebral/embodied contradiction operate?

Walter Fisher (citing Alasdair MacIntyre) has noted that humans are “storytelling animal[s].”17 We understand the world through stories, both in the ways that stories are cohesive within themselves as stories, but also as stories are, or are not, consistent with other stories we commonly tell.18 For example, take notice of the stories in William S. Burroughs’s discussion of “coincidence”: “You can observe this mechanism operating in your own experience. If you start the day by missing a train, this could be a day of missed trains and missed appointments. You need not say ‘McKtoub, it is written.’ The first incident is a warning. Beware of similar incidents. Tighten your schedule. Synchronize your watch. And consider the symbolic meaning of missing a train. Watch particularly for what might be a lost opportunity.”19 Affective compulsions operate from the stories that circulate in a given public, allowing us to preverbally respond to indices as meaningful, even when we cerebrally deny the merit of those responses. For a quick example, notice that key to Burroughs’s consideration of the symbolic meaning of missing a train is to synchronize one’s watch.

Historically speaking, clocks emerge from traditions of technic that “tied” them “to the heavens”; in the Middle Ages, for instance, celestial movement and time were entangled with signals for human action, positioning clocks as tools for peering into the very structure of the cosmos.20 “A dial and a hand,” as Lewis Mumford pithily puts it, “translated the movement of time into a movement through space,” implying the clock’s affordance of access to an otherwise vapory dimension.21 This is an example of what John Durham Peters would register as logistical media, or media that “usually appear neutral and given,” while at the same time, “their tilt and slant can also call forth agitation,” in that they often operate ambiently in the definition of the rules by which definitions happen in the first place.22 Watches tell stories just as much as living, breathing humans do, but they do so by signaling the energies of machinic movement, shaped by cultural habitus to activate the body in ways affectively compulsive. Similarly, machine-learning systems instantiate logistical media, affording the appearance of indication—in the sense of “subjectlessness”—when in fact they absolutely participate with values, by activating habits in the culturally shaped body.

Even when an affective compulsion drives one toward a less desirable outcome (e.g., having to go “out of one’s way” to go around a crow, which demonstrates the “silliness” of such responses), we might nonetheless operate within the established grooves of discourse. Don Norman offers an example of the deep-seated but illogical clinging to habits concerning our relationships with technology and the stories that shape those relationships. In particular, Norman focuses on gesticulation of the hands and the QWERTY keyboard, wherein he explains that the original QWERTY design grew from a technical need to avoid crossing type hammers over one another when using common letter sequences on a mechanical typewriter.23 However, this oft-referenced account of the QWERTY keyboard is easily discounted: Koichi Yasuoka and Motoko Yasuoka (and others) have pointed out that slowing down typists would actually make the technology useless for such tasks as the transcription of Morse Code.24 Given its uses, deliberately slowing down the technology would not make any sense. Rather, it was much more about social happenstance (e.g., discussions between inventors and producers entangled with motivations to avoid existing patents). In any case, the QWERTY remains a fixture of the modern technological landscape, resonating with existing cultural practice, not technological possibility.25 Technology, in this sense, does not determine the crops of meaning to be yielded.

Instead, possibilities of meaningfulness are constrained by existing culture, often along the prevailing etches and grooves carved into the technological landscape (including the stories we use to understand those technologies at an instinctual level).26 Affective compulsions, then, are built from the stories we have grown accustomed to hearing and telling one another in a sort of muscle-memory response, resonated between objects, and shaped by cultural practice: the ominous cawing of crows, a double rainbow all the way across the sky, the unexpected crash of a word processor in the middle of a difficult sentence.

In the context of machine-learning systems, and with direct regard for the deep ends of computing that shape affective compulsions, specific patterns characterize the nonconscious resonances with machines. Primarily, these rely on stories that we tell one another over and over again, ensnared with physical, technological reality. The proclamation, “Oh, no, here comes Skynet!” as we learn about the latest improvements in real-time facial recognition and emotion detection systems—machines that can read the emotions from our faces—is an exclamation demonstrative of affective compulsion. It is a compulsion rendered visible by diving into the deep end of computing to locate the story of Terminator, which further resonates with other patterns involving the apparent trope of machines as cold, dominant misanthropes, which gains amplitude from a quasi-spiritual understanding of bios and self-awareness, alongside a frenzied mess of (Lamarckian) evolutionary theory, and perhaps even the subconscious guilt of the Anthropocene. Somewhere in the paradoxical belief that machines are just machines and the idea that machines will somehow acquire their own anima (life-breath) and act from that spirit to annihilate their competitors lies the tension between a cerebral commitment to skeptical realism and a more affective “grooving” with resonant rhythms between values, stories, and technologies, manifest as compulsion.

Affective compulsions are indicative of palpably influential forces, but they often remain invisible, a byproduct of our yearning to be rational, cerebral (and verbal) animals, despite the reality that we are also visceral (multimodally affected) ones. A murder of crows cawing above a wedding ceremony might contribute to an eerie mood. The crows’ energies influence by way of affective compulsions driving an attunement in which one can feel that the wedding is not supposed to be—“vibes” that would not be possible in the absence of the crows, or the symbolic referents that join them. Even when we acknowledge such affective compulsions, we might tend to dismiss their presence as mere feelings. However—and here is the point I hope is insightful for the reader—the residues of those feelings do not leave us simply because we consciously declare them untenable. We later tell members of the wedding party at the reception that “It just doesn’t seem like it’s going to work out,” not because the crows told us so, but because we find the general feel compelling, downplaying the crows’ participation in cultivating that feeling.

We can see a similar response to computing machines. It “makes sense” that we like computers when they remind us of people. This is a response that helps us avoid the seemingly transcendent “beyond” ethos of a machine; we Page 73 →avoid having to confront the affective compulsions machines sometimes stir in us. ELIZA bot, the chatbot explained in chapter 1 as one of the first examples of an automated computer program designed to interface with a person while following a “script” of interaction, illustrates this. Joseph Weizenbaum, the designer of ELIZA, created a version that engaged in Rogerian psychotherapy—“Tell me more about your mother”—which he argues was “anthropomorphized” by users. In one anecdote, Weizenbaum explains that his secretary asked to interact with ELIZA privately, even though she had seen him design the system. According to Weizenbaum, the secretary was upset when he joked that he would record the chat logs. She stated that she felt that such an act “amounted to spying on people’s most intimate thoughts,” something he posits as “clear evidence that people were conversing with the computer as if it were a person who could be appropriately and usefully addressed in intimate terms.”27 Weizenbaum uses the anecdote to set up the argument that people will do mental gymnastics to find humanity in machines. In the frame of affective compulsions, it becomes easier to see that there is “more” here, beyond the idea that ELIZA was able to achieve this effect simply because someone imagined it was human. Quite the contrary: I think ELIZA’s effect came from the fact that computing machines resonate with narratives, such as the one that characterized the GPT-3 quote earlier, wherein machines can withhold judgment—whereas a human psychotherapist, even one thoroughly trained in the Rogerian tradition, cannot shirk their subject position.
The preverbal resonances—the affective compulsions—activated by ELIZA’s machinic movements, while apparently easy to deny, remain obstinate. Weizenbaum is convinced (and makes a convincing case) that there will remain fundamental characteristics of humanity (e.g., empathy or wisdom) that cannot be genuinely replicated by a machine. In making that argument, he seems to overlook that computing machines are nonetheless attended by rhetorical energies, emergent from the deep end of computing, which can cultivate attunements wherein one might feel free to “talk it out,” born not simply of anthropomorphic energies, but machinic ones.28 As chapter 1 explained, within the deep end of computing is the tradition of knowledge-based systems as it resonates with the trope of the prophet, reverberated between front-end interfaces and back-end processes and transduced into the body. Chapter 2 explored the category assumptions of “natural” and “artificial” as components of the deep end of computing, which can be simultaneously activated by the lively movements of vast computing. To say that the movements of a machine activate affective compulsions is to Page 74 →begin to name those elements of the deep end of computing that matter to the habitual/instinctual reactions of culturally shaped bodies in response to the stimuli of machinic movement.

Rhetorically speaking, the energies of machines can be leveraged in ways that capitalize on the blurring of indication and persuasion—to signal through the processes of the machine, activating the culturally shaped body in ways resonant with the grooves of culture and political worldview. As an example of processual signaling, or the leveraging of computational performance to invoke affective compulsions that invite a blurring of indication and persuasion, I turn to the public political critique of @DeepDrumpf, a neural network-based parody of Donald Trump. But first we must unpack some of the narrative grooves that inform the deep end from which the energies of neural networks emerge. To do this, I offer a brief, selective primer on some important turns regarding the stories of digital computing and neural networks.

The “Grooves” of Neural Networks

To understand the grooves of culture that matter to the performances of neural network machine-learning systems, one must first acknowledge that, within the history of computing, there exist turns that worked to marginalize women from the field, despite their work in it. For example, the ENIAC, the first programmable digital computer, was originally programmed by a team of six women: “Jean Jennings (Bartik), Betty Snyder (Holberton), Frances Bilas (Spence), Kay McNulty (Mauchly Antonelli), Marlyn Wescoff (Meltzer), and Ruth Lichterman (Teitelbaum),” and women continued to work on the machine after its inception.29 As Janet Abbate points out in Recoding Gender, the field of programming has been perpetually masculinized through such moves as emphasizing “rationality” through namings such as software engineering.30 Though I will largely be focused on the technical developments of digital computing over time, keep in mind that such developments were often attended by problematic assumptions that conflated “rationality” with “men.”31 Just as much as the deep end can be filled with the reassurance of prophecy and the awes of the sublime, so too is it riddled with such noxiousness as patriarchy. As such, the following should be read from within this framing.

That being said, digital computing as we know it today finds its nascent stages in the middle and late 1930s, when thinkers and programmers (who were often women) worked to conceptualize the mechanization of logical operations by designing Boolean logic into electronic circuits, allowing operations to be Page 75 →carried out by flipping sequences of “on/off” switches, later becoming the “1s and 0s” now commonly referred to as binary code.32 That is, by assigning “hard logical” conditions to a circuit to create “logic gates” (by leaving some switches on and turning others off), one could compute large amounts of information by simply passing data through a system. With digital computing machines, instead of having to figure out a computation and follow it through “long-hand,” one could just create a program and let the machine do the rest.33 The energies of computing machines as they carry out their processes emerge from discourse ecologies, wherein the story of the computing machine involves the automation of intellectual labor (rather than physical labor).
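The mechanization of Boolean logic described above can be sketched in a few lines of modern code. The following is a loose illustration in Python, not the electromechanical circuitry of the 1930s: each gate maps on/off (1/0) inputs to an on/off output, and composing gates lets sequences of switches compute.

```python
# Boolean "logic gates" as functions over on/off (1/0) values.
def AND(a: int, b: int) -> int:
    return a & b

def OR(a: int, b: int) -> int:
    return a | b

def NOT(a: int) -> int:
    return 1 - a

# A half adder: composing gates to add two binary digits — the kind of
# "hard logical" composition that lets data passed through a circuit
# carry out a computation.
def half_adder(a: int, b: int):
    total = AND(OR(a, b), NOT(AND(a, b)))  # XOR built from AND/OR/NOT
    carry = AND(a, b)
    return total, carry

print(half_adder(1, 1))  # (0, 1): 1 + 1 = binary 10
```

Chaining such adders across many switches is, in miniature, the "long-hand" arithmetic relegated to the machine.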

Reverberating this view is Vannevar Bush, who argued in 1945 that repetitious tasks (such as the arithmetic necessary for calculating large amounts of census data) can now be “relegated to the machine,”34 thanks to digital technologies. Bush further discussed his “memex” (a mechanized file storage system) as something to help avoid “overtaxing [humanity’s] limited memory,” demonstrating prescience for how we would come to understand our computers today—knowledge agents that can remember more than a human amid the modern internetworked information society.35 The discourse ecology of computing machines is not merely characterized by a story of the computing machine as the automation of intellectual labor, it is a story about extending beyond human shortcomings.

The sentiment that the movements of computing machines bring a powerful comfort, achievable simply because they are not humans, is echoed by Bertram Vivian Bowden in 1953:

Modern digital computers are capable of performing long and elaborate computations; they can retain numbers which have been presented to them or which they have themselves derived during the course of the computations; they are, moreover, capable of modifying their own programmes in the light of results which they have already derived. All these are operations which are usually performed (much more slowly and inaccurately) by human beings; but it is important to note that we do not claim that the machines can think for themselves. This is precisely what they cannot do. All the thinking has to be done for them in advance by the mathematician who planned their programme and they can do only what is demanded of them; even if he leaves the choice between two courses of action to be made by the machines, he instructs them in detail how to make their choice.36

Page 76 →Bowden saw the performances of machines as distinct from the human animal, in that they are not thinking as much as they are moving forethought—thought, disembodied. Demonstrated in Bowden’s comments is the notion that the accomplishment of digital computing is not just an achievement of engineering, but specifically of mathematics, giving its movement a further unaffected but “true” character. If the computing machine deals only in mathematics, then it is dealing only in inartistic proofs and, therefore, is inherently an agent that communicates nonrhetorically.37 The notion that the outputs of a digital computer are unaffected, objective calculations spawns from a longstanding, and common, Aristotelian tendency to envision mathematics and rhetoric as antithetical to one another.38 To many, at least outside the spheres of rhetorical studies, mathematics is strictly an inartistic endeavor. If one adds 2+2 and gets 4, one has not done anything rhetorical; one has merely exacted a symbolic representation of logical reality. As such, the story of the computing machine is about extending beyond human shortcomings, in the form of disembodied mathematics, moving toward an ideal of “pure” reality.

With this in mind, consider the Cold War decades of the 1960s and 1970s, when research proliferated in moving beyond mere “logical” systems and toward “expert systems”: machines that still use hard logic to arrive at conclusions but that use data storage and retrieval for the specific purpose of enhancing decision-making in realms such as chemistry or medicine.39 The story of the computing machine, at this juncture, demonstrates an increased comfort in the mechanization of human concerns via a convergence of expertise and the movements of disembodied mathematics “at work” to give the best answers.

Because many of the conclusions we arrive at when making decisions are seldom categorical, but rather probable, attempts to “soften” the logic of automated systems became more pronounced in the 1980s. Such was one of the main objectives of the Strategic Computing Initiative, a Department of Defense-funded project that lasted from 1983 to 1993, illustrating the increased interest in making machines that could induct based on statistical probability.40 As such, it was close to this period that we saw a significant shift in the ways that computing machines are conceptualized, moving away from such systems as the 1959 Geometry Theorem Prover, a machine programmed on Euclidean plane geometry, which represented the more traditional “logical approach” to computing—a “closed system,” as it were—toward more open, autonomous systems.41 The newer machine-learning approaches utilize statistical models to examine relationships between variables to arrive at conclusions via Page 77 →probability rather than exact deduction.42 To explain this in rhetorical terms, let us visit many a logician’s favorite “probable syllogism” (or enthymeme).

Premise 1: All persons are mortal.

Premise 2: —

Conclusion: Socrates is mortal.

There is ample scholarship (especially within the field of rhetorical studies) that tells us that this argument can still work for a human reader; that the missing premise will most probably be filled in with cultural information, allowing the reader to infer that Socrates is most likely a person. What the reader is doing, in this case, is using “heuristics”—indicators hidden in the variables of the included premises—not to know with deductive certainty, but rather to know within a threshold of probability. Now, imagine a computer program designed to assess relationships between variables by taking in data, and instead of being preprogrammed per se, it can discover patterns within the data from which to predict and rewrite its own software. In turn, it can create its own outputs in the form of numbers, images, poems, songs, music, and more, autonomously of a human, though it remains dependent on the inputs it receives. This is the basic premise of machine-learning systems, like those that run on recurrent neural networks (explained in the following paragraphs). The story of the computing machine, at this juncture, morphs from a story of finding comfort in the mechanization of human concerns (applying hard logic to human concerns) into a story that represents an increased comfort in having machines participate in the definition of those concerns while retaining the character of disembodied (mathematical) truth.
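The shift from deduction to "knowing within a threshold of probability" can be made concrete with a toy sketch. The records below are invented for illustration; the point is only that the missing premise is estimated from observed cases rather than supplied as a hard rule.

```python
# Toy data (invented): entities observed with two attributes.
records = [
    {"name": "Hypatia", "person": True, "mortal": True},
    {"name": "Plato", "person": True, "mortal": True},
    {"name": "Cerberus", "person": False, "mortal": False},
    {"name": "Aristotle", "person": True, "mortal": True},
]

# Estimate the "missing premise" statistically: P(mortal | person).
persons = [r for r in records if r["person"]]
p_mortal_given_person = sum(r["mortal"] for r in persons) / len(persons)

# "Fire" a conclusion only past a probability threshold, the way a
# statistical model would, instead of deducing it categorically.
THRESHOLD = 0.9
if p_mortal_given_person >= THRESHOLD:
    print("Socrates is (probably) mortal:", p_mortal_given_person)
```

The hard-logic version would encode "all persons are mortal" as a rule; the probabilistic version discovers a regularity in the data and acts on it once it clears a threshold.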

Recurrent neural networks, such as the model that undergirds @DeepDrumpf’s parody of Donald Trump (to be analyzed in the following section), operate by “learning” from texts input into the system, from which they create statistical models wherein “neurons” stand in metaphorically to designate particular “classes,” or clusters of terms and their values. These neurons “fire” when they are pushed to a particular threshold of statistical probability, creating messages based on the model of language generated from the original input texts.43 In computer science, this model is actually called the hidden layer of the neural net and can be conceptualized as the program’s “notes to itself” as it “learns” the characteristics of the input data and assigns probability “weights” to the different classes of its model. Because the hidden layer is machine-written, it adds opacity to the system, making it harder to track how the system Page 78 →is rendering its outputs.44 What makes the system “recurrent” is that, as the system generates messages, it folds those messages back into the original model, creating a feedback loop, wherein statistical probability is further influenced by the output itself as it is “trained.” With each new message generated, the system’s statistical model—its neural network—grows in ways that represent the distinct speech patterns of the texts input into the system.45 Recurrent neural networks, in other words, continue to resonate with the larger cultural grooves of the computing machine, while also demonstrating not just moving forethought that follows the preprogrammed hard logical structures of people, but probabilistic starting points. It is still disembodied mathematics, but disembodied mathematics “off the leash” and pointed at mimicking patterns within a given data set, further congealing with the grander narrative of technology as access to “pure” knowledge.
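The mechanics just described — a hidden layer carried forward as the network's "notes to itself," and outputs sampled from a probability distribution — can be sketched as a character-level generation loop in NumPy. This is a forward pass only, with random, untrained weights (so its output is noise), and it is not @DeepDrumpf's actual implementation; it simply shows where the recurrence and the hidden layer live.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = list("abcdefgh ")
V, H = len(vocab), 16  # vocabulary size, hidden-layer size

# Weight matrices (random here; training would adjust these "weights").
Wxh = rng.normal(0, 0.1, (H, V))  # input -> hidden
Whh = rng.normal(0, 0.1, (H, H))  # hidden -> hidden: the recurrent loop
Why = rng.normal(0, 0.1, (V, H))  # hidden -> output

h = np.zeros(H)  # the hidden layer: state the network keeps for itself
idx = 0          # start from the first character in the vocabulary
out = []
for _ in range(20):
    x = np.zeros(V)
    x[idx] = 1.0                        # one-hot encoding of the input
    h = np.tanh(Wxh @ x + Whh @ h)      # fold the prior state back in
    p = np.exp(Why @ h)
    p /= p.sum()                        # softmax over characters
    idx = int(rng.choice(V, p=p))       # sample the next character
    out.append(vocab[idx])
print("".join(out))  # gibberish until the weights are trained on a corpus
```

Each pass feeds the hidden state back into the next step, which is what makes the system "recurrent"; training would adjust the three weight matrices until sampled sequences echo the statistical patterns of the input texts.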

Shakespeare’s sonnets, Wikipedia articles, and even baby names have been generated via recurrent neural networks.46 While the outputs of these systems are impressive, they can also generate largely unconventional—often nonsensical—messages, for example, a few baby names: the word “Baby” or simply the letter “R.”47 Some might laugh at these as errors. But with concern for the rhetorical energies of machines, it is informative to attune to the ambience of the machine’s movement, shaped by the grooves of culture. Is this funny because it is nonsense? Or is this funny because it is nonsense performed by a digital computing machine, tacitly associated with disembodied mathematics, acting beyond the human?

Informed by a Latourian focus on relationality between actants (human and nonhuman), Mitchell Reyes helps to draw out the mechanism by which the mathematical grooves of the deep end of neural networks can find resonance with other discourses, such as those of politics. In particular, Reyes offers an insightful explanation of the idea of mathematical alliances, or relationships between mathematics (and their manifestations as technologies) and the political, wherein mathematics emerges “not merely as reflective of reality but also as productive of reality.”48 Reality is not mirrored by the mathematical idea of an Archimedean ratio. Rather, the Archimedean ratio is an actant that participates in the construction of reality when it interacts with social practice and thought (e.g., by manifesting as military might in pulley systems that can carry more force, further implying political power). From this frame, as will be shown in the following brief analysis of a neural-network-based political parody, the discourses of mathematics entangled with neural network machine-learning systems can form alliances with political discourses, manifesting as energies Page 79 →“at work,” signaled by the performances of computing machines, activating the body in ways that compulsively blur indication and persuasion.

With specific regard for approaching mathematics from a rhetoric as energy perspective, Catherine Chaput and Crystal Broch Colombini analyze the “invisible hand” as a grooving trope of economic discourse and underscore that mathematics is not something to be added to discourse. Rather, it is among the grooves that shape the habituated body.

Mathematics need not be studied as a distinctive style; nor, however, need it be studied as a self-contained field. Instead, we believe mathematical implications become most far-reaching when viewed from the lens of their entanglements with historical, social, and environmental processes. . . . Mathematics as a rhetorical tool for negotiating and shaping our world opens it to greater interrogations than those fostered by the assertion that mathematical style strengthens . . . credibility.49

Moreover, the mathematically inflected deep end of computing is not an ornament to add to rhetoric; it is among the ambient features that inform our being in the world, participating in the definition of grooves that resonate with the culturally shaped body. On this view, the computational performances of neural networks are not merely performances of credentials, to be assessed cerebrally, for they also activate the body as nodal bursts of energy. In the following section, I retrace the case of @DeepDrumpf to synthesize the discussion of affective compulsions and to demonstrate an example of processual signaling. From that, I hope to show that the performances of neural network processes can be leveraged to signal political persuasions in ways that feel like political indications—mathematics-in-motion, finding alliance with particular political worldviews, to encourage affectively compulsive responses in the audience.

The Machinic Parody of @DeepDrumpf

The Twitter bot @DeepDrumpf runs on a recurrent neural network machine-learning platform, generating its own messages and subsequently posting them to Twitter. The bot was designed by Bradley Hayes, a postdoctoral research scientist at the Massachusetts Institute of Technology, to generate messages based on speaking transcripts of 2016 Republican presidential candidate Donald Trump.50 Trump is widely known as a virulently polarizing character. Among his reluctant audiences, he is seen as the epitome of “mendacity, bigotry, bullyism, narcissism, sexism, selfishness, sociopathology, and a lack Page 80 →of understanding or interest in public policy.”51 His supporters hold up his discourses and views as those of a long-awaited “wrecking ball” needed to “do whatever is necessary to bring our middle class back to the ‘family of haves.’”52

The name of the Twitter bot—@DeepDrumpf—is the result of a play on words, finding muse in the concept of “deep” neural network machine learning and “Drumpf,” a previous spelling of the Trump family name, taken up by famous comedic political pundit John Oliver as he urged us to “Make Donald Drumpf again!”53 And this is clinched by Hayes’s public explanations that the use of “Drumpf” was inspired by Oliver’s jokes about Trump.54 Beyond underscoring that this naming gives some insight into the specific deep end that matters to @DeepDrumpf, it can also be noted that, from the naming of the bot, it is apparent that it is a parody meant to deride Trump. However, what makes @DeepDrumpf special is that it is a parody that moves by way of automated message generation, facilitated by a recurrent neural network—a machine-learning system—that has been, according to its Twitter profile, “trained on Donald Trump transcripts.”55 The bot received wide public attention, garnering coverage in such publications as The Guardian, MIT Technology Review, Forbes, and CBC.56 As a novel form of political critique, @DeepDrumpf represents the automation of the classic idea of parody in that it employs “techniques involv[ing] various combinations of imitation and alteration: direct quotation, alternation of words, textual rearrangement, substitution of subjects or characters, shifts in diction, shifts in class, shifts in magnitude.”57 What makes the parody particularly unique is that, by including the movement of a computing machine, it can signal in ways that activate affective compulsions, which enliven the performance to feel meaningful, enabled by a blurring of indication and persuasion, born of a political critique manifest from the processes of mathematics “at work.”

A recurrent neural network can “learn” to speak like someone by imitating the propensities of speech located in the training texts used to generate a model in the first place. But it is not that person—it is a mediated abstraction of that person, instilled in mathematical calculations of probability between variables. Based on the programming architecture of neural networks, the act of “training” a bot—feeding it text so that it can build a statistical model of probabilities that represent the “style” of a given set of texts—emerges as an inventive process, involving decisions about which texts to include and exclude in training.58 Moreover, a recurrent neural network does not learn how to speak like Shakespeare, or a Wikipedian, or even Donald Trump. It learns how to speak like the texts the designer has chosen to input into the system to train Page 81 →it. Like many rhetorical acts targeted at a particular character, @DeepDrumpf offers a synecdochic appraisal—an appraisal based on using specific aspects of a person to describe their “whole” character. In other words, @DeepDrumpf may represent the “whole” of the corpus of texts it has been trained on, but it nonetheless performs a persona of Trump, based on selected transcripts of his public speaking persona.
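The dependence on corpus selection can be seen even in a far simpler stand-in than a neural network: a word-bigram chain. The model below (with a corpus invented for illustration) can only ever recombine the texts its designer chose to feed it, which is the inventive decision the paragraph above describes.

```python
import random
from collections import defaultdict

# Invented training corpus: the model will speak like THIS text,
# not like any person, because this is all it has ever "seen."
corpus = "we will win we will build we will win big".split()

# Record which words follow which: a crude statistical model of "style."
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

# Generate by repeatedly sampling a plausible next word.
random.seed(1)
word, output = "we", ["we"]
for _ in range(8):
    nxt = follows.get(word)
    if not nxt:
        break  # dead end: the corpus offers no continuation
    word = random.choice(nxt)
    output.append(word)
print(" ".join(output))
```

Swap in a different corpus and the "persona" changes entirely; a neural network does the same thing with a far richer statistical model, but it is no less bound to its designer's selections.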

In brief, @DeepDrumpf presents a set of speech acts that construct a persona from which an audience can perceive a distinct set of commitments.59 Douglas Walton maintains that, although we might have an instinct to equate a person with our own perceptions of their publicly displayed commitments of value, there remains an important difference between a person’s performed persona and a person’s actual character; @DeepDrumpf is an example of this. Although it is a machine-learning system, the bot represents the classic features of a parody, in that it invites persons to evaluate a commitment set—the series of value commitments demonstrated by a given performance—as regarding Donald Trump through a second-order abstraction of his actual speech persona, performed in exaggerative ways (fig. 3.1): “You see the jobs in this country, we own them. We have people that are morally corrupt. They’re friends of mine. We won with poorly educated.”

Because the bot represents the lively movements of digital computing, rendered as a machine-learning platform, it signals toward things such as logic, forethought, and mathematics, operating in an unbiased fashion, bringing with it signals toward inartistic proof, or what Carolyn R. Miller, working from Aristotle’s definition, eloquently describes as “facts or artifacts which exist independently of human intentions and emotions and about which deliberation is unnecessary. Inartistic proofs are those which have only to be found; they are just there—self-evident and real and objective”—most emphatically, inartistic proofs are indications.60 In contrast to a human-performed parody, say by Alec Baldwin on Saturday Night Live, the rhetoric of @DeepDrumpf is entangled with the deep end of machine-learning systems, inviting the witness of @DeepDrumpf to feel that its performance is meaningful because the agent performing it is so easily associated with a counternarrative of objectivity, mathematics, and logic. “Trump is not just a subject deserving of mockery, Trump is a mockery. I can feel it.”

Further bolstering the force of the parody in @DeepDrumpf is its repetition and reiteration. The bot, as it continually makes the same joking claim—“Look at me, I am Donald Trump!”—is given added force via exergasia, the repetition of the same “thought but in different figures.”61 Indeed, @DeepDrumpf’s Page 82 →rhetoric of reiteration, as it continually creates new messages, represents the same ultimate claim, growing its magnitude, adding to its volume, helping the audience recognize its importance (fig. 3.2). To connect to Jenny Rice’s concept of archival magnitude, the bot grows an archive of evidence that Trump is worthy of derision, further affording a “sense of weightiness,” wherein the body is activated not only by the accumulation of evidence widgets, affording a “sense of the whole”—of coherence—but also by the movements of a neural network, emergent from a deep end of computing, wherein the computing machine is approached as an agent that can operate beyond human interest, affording access to pure knowledge.62 However, as Wayne Anderson points out, repetition does not prove the original claim; it merely builds that claim via the cumulative stacking of diction.63 Consequently, the performance of the bot is most likely to persuade those already committed to the idea of Donald Trump as deserving of mockery, revealing its power as epideictic rhetoric, inviting sympathetic publics to partake in the celebration because they possess an existing bouquet of habitual stories harmonious with that claim. Thus, the energies Page 83 →of the bot find alliance with a political worldview. The ever-evolving, continually growing parody of the bot is not just a “one-off”; @DeepDrumpf replicates it over and over again. In this sense, sympathetic audiences do not have to believe that Trump is worthy of parody.
The bot, by copiously reiterating that sentiment, “calculation after calculation,” helps further persuade persons to treat their beliefs as facts, driven by a compulsion that finds a harmonious “groove” between the derision of Donald Trump and the objectivity of mathematics associated with the operations of a neural network. Here are some examples of @DeepDrumpf’s tweets, quoted in order of appearance: “I’m the guy that’s going to be a cheerleader for horrible foreign policy disaster. You’ve got to be, in my Administration. @HillaryClinton”; “If I don’t win in the end, I’ll fire the entire American people. You cannot achieve peace if I don’t want it. @HeyTammyBruce @McFaul”; “I am a great judge of this country. We have to control everybody and let them fight each other. They won’t refuse me, I’ll make a fortune.”

Screenshot of a tweet thread following a tweet from the @DeepDrumpf Twitter bot.
Extended Description

Screenshot of a tweet thread following a tweet from the @DeepDrumpf Twitter bot, which reads: "You see the jobs in this country, we own them. We have people that are morally corrupt. They're friends of mine. We won with poorly educated." In response to that tweet two users include replies. The first reply, from @HopeFiend36, reads: "@DeepDrumpf Oh.. this is just perfect @hayesbh I can't tell the two apart now." The second reply, from @sangtani_ravi, reads: "@DeepDrumpf ironically this handle is more honest than Trump's actual handle . . . is this what it feels like to have an inner voice?"

Figure 3.1 The Performance of @DeepDrumpf

Screenshot of the @DeepDrumpf profile on Twitter.
Extended Description

Screenshot of the @DeepDrumpf profile on Twitter. The profile image is a photo of Trump, styled with a filter that telegraphs machine-learning processing, resembling an image processed through DeepDream software. The profile description reads: "#makeSTMGreatAgain, #MakeAmericanLearnAgain, I'm a Neural Network trained on Donald Trump transcripts (Priming text in []s). Follow @hayesbh for more details." Three tweets are included in the screenshot. The first tweet reads: "I'm the guy that's going to be a cheerleader for horrible foreign policy disaster. You've got to be, in my Administration. @HillaryClinton." The second tweet reads: "If I don't win in the end, I'll fire the entire American people. You cannot achieve peace if I don't want it. @HeyTammyBruce, @McFaul." The third tweet reads: "I am the great judge of this country. We have to control everybody and let them fight each other. They won't refuse me, I'll make a fortune."

Figure 3.2 @DeepDrumpf’s Automated Exergasia

What is interesting to note about the efficacy of @DeepDrumpf’s moving parody is its broken grammatical outputs—something to be expected of a machine-learning system dealing in natural language processing. What makes them unique is that they do not detract from @DeepDrumpf’s rhetorical force, but rather add to it. That is, and to put it in rhetorical terms, the construction Page 84 →of the parody of Trump, using a recurrent neural network, is kairotic; it capitalizes on the “opportunity” to automate a simulation of Trump’s speaking style, which is well-known for being simplistic and unconventional.64 The following example of Trump’s speaking style was taken from the Republican Presidential Debate in Manchester, New Hampshire, on February 6, 2016:

In the Middle East, we have people chopping the heads off Christians, we have people chopping the heads off many other people. We have things that we have never seen before—as a group, we have never seen before, what’s happening right now. The medieval times—I mean, we studied medieval times—not since medieval times have people seen what’s going on. I would bring back waterboarding and I’d bring back a hell of a lot worse than waterboarding.65

Trump’s actual speaking style lacks ornament and elevated register, and it uses what seem to be large leaps in ideas from clause to clause; because of these qualities, it lends itself to being automated. Choppy sentences, rendered in simplistic phrasings and often tattered by unshapely grammatical errors, not only represent Trump’s speaking style; they are also what one can expect from an autonomous neural network’s output at least some of the time. Because of this harmony between public visions of Trump’s known speaking persona and the simulation of that speaking persona advanced in @DeepDrumpf, grammatical blunders do not disrupt the verisimilitude of the parody. Quite the opposite (fig. 3.3). An example can be found in a reply from @DeepDrumpf to Dr. Jill Stein, the Green Party presidential candidate running the same year as Trump. Stein’s original tweet, “We need more solutions, not just militant & bigoted knee-jerk reactions to terrorism. Let’s stop supporting dictators who fund ISIS,” was met with @DeepDrumpf’s response, “We’re killing tremendous people in this country. We have to cherish our Second Amendment. Very important. I’ll need the ratings @DrJillStein.”

Clearly @DeepDrumpf’s machinic performance of a Donald Trump parody seems to fit well with a public saturated with discourses about Trump’s speaking style, which emphasize his anti-intellectual sentiments, remedial vocabulary, and simple grammar.66 It is through this alliance of Trump’s known speaking style, the expectations of recurrent neural network outputs, and a political worldview that @DeepDrumpf can be considered what Dale Sullivan describes as a “demonstrative epideictic” speech act: it “transforms the audience from critics into witnesses” by moving with energies that activate the culturally shaped body with the sense of an inartistic proof.67 Moreover, it is not Page 85 →that the automation of a parody in @DeepDrumpf is more telling of Trump’s actual character, it is that the act of automating that parody with a neural network—including its errors—is consistent with Trump’s known speaking persona and public imaginings of computing machinery, including machine-learning systems, allowing audience members to nonetheless “fall back” on the common association of digital automata and mathematics present in the deep end of computing, to enjoy an affective compulsion, signaled by the processes of the machine “at work,” but which can also be cerebrally defended as “the numbers.”

Screenshot of a tweet thread following a reply from the @DeepDrumpf Twitter bot to Dr. Jill Stein (@DrJillStein).
Extended Description

Screenshot of a tweet thread following a reply from the @DeepDrumpf Twitter bot to Dr. Jill Stein (@DrJillStein). Stein's original tweet reads: "We need more solutions, not just militant & bigoted knee-jerk reactions to terrorism. Let's stop supporting dictators who fund ISIS." @DeepDrumpf's reply to Stein's tweet reads: "We're killing tremendous people in this country. We have to cherish our Second Amendment. Very important. I'll need the ratings @DrJillStein." That exchange is followed by a thread in which two users have posted replies to @DeepDrumpf's tweet to Stein. The first tweet in the following thread reads: "@DeepDrumpf wow, the apprentice has surpassed the master, so to speak." The second tweet in the thread reads: "@DeepDrumpf @DrJillStein Our robot overlord got it right. Lol."

Figure 3.3 The Verisimilitude of the Machinic Parody of @DeepDrumpf

Put in contrast to the sublime magnitude of @censusAmericans and the angsty attunement that attends it (described in chapter 2), @DeepDrumpf cultivates an attunement more akin to the contentedness of the prophetic ritual of Vaccine Calculator described in chapter 1. Vaccine Calculator, moreover, invited users to imagine themselves in league with experts by participating with elements of the deep end of computing, such as rituals of knowledge-based systems and public health, enlivened by the movements of technologies of prophecy. Differently, however, @DeepDrumpf invites audiences to feel like witnesses taking in a mathematical exaction of observable reality, for its movements find resonance not just with machine-learning systems and mathematics but also with public stories about Trump as a speaker and politician. Reluctant audiences are less likely to find it so easy to “groove” to the parody in the same way, exposing that processual signals, and the deep ends of computing that they animate, interact in distinct ways, forming alliances with particular publics and their habits of being.

The Critique of Processual Signaling

Like all rhetorics, processual signaling is shaped by “constraints” imposed on discourse (such as the often erroneous grammatical outputs of a neural network), its “audiences” (including the stories that those audiences habitually “groove” to), and the “exigencies” calling the discourse into being.68 At least as an epideictic rhetoric, @DeepDrumpf is a novel, quietly powerful parody that employs processual signals toward affective compulsions informed by the deep end of neural networks. Affective compulsion provides a conceptual route to expanding our definitions of media literacy to include “analysis” of the rhetorical force of things (such as machine-learning systems), the “evaluation” of that force’s effectiveness (with regard to the groove), and the “creation of content” that can harness its power in ways that allow us to revel in, and be critical of, affect.69 In her discussion of the “poetry” of code, E. Gabriella Coleman helps us realize that code itself can be understood in terms of art, rather than simply in terms of engineering.70 In the same spirit of pushing on how we understand the class of objects that we call code, we can also think of specific classes of software—such as machine-learning systems—not simply as tools of truth making, but also as artful performances. As arenas of communication such as political rhetoric are “remediated,” one must remember that the reverent and the important as well as the laughable, untenable, and ridiculous parts of culture are also carried over into computational media.71 As I hope the discussion and analysis show, the processual signaling of neural networks, and the affective compulsions that can be activated by that signaling, present new forms of civic engagement, opening potentially fruitful avenues of public expression facilitated by the communication of machines.72

Affective compulsions are a theoretical explanation of the kinds of ambiently shaped experiences of meaningfulness, spawned by the lively movements of machines, emergent from a deep end of computing. But the deep end of computing need not be restricted merely to the history of computing, especially in cases where multiple lively processes intermingle. Fire, for instance—a lively chemical process—is a message just as much as a medium of communication.73 The stories we are accustomed to associating with burning and smoke, moreover, help point us to compelling cases like @burnedyourtweet, a physical robot (in contrast to the software-based bot analyzed in this chapter), which offered a critique by printing out and burning every single message tweeted by President Donald Trump, sending a video of the immolation to Trump, alongside the message “I burned your tweet.”74 Processual signaling helps us realize that the act of burning signals a felt (sneakily symbolic) power, manifest in the activation of affective compulsions, shaped by culture. When juxtaposed against traditions of presidential discourse, automation, and the echoes of pagan ritual, @burnedyourtweet forwards a critique of vacuousness on the part of Trump. Moreover, it performs a political critique, via the movements of computing, signaling not only that Trump’s tweets are not worth human time, but that his messages cannot withstand the test of the elements, of which fire is a cleanser used to rid one of pestilence, vermin, and ill will.75 Processual signaling can thus be found in realms of expression beyond parody, in other political acts like automated burning in effigy, moving to a rhythm syncopated by machine time.76

Thus far I have demonstrated that the rhetorical energies of computational performance can be leveraged toward epistemic, aesthetic, and political ends, manifesting as manufactured processing, processual magnitude, and processual signaling. The following chapter meditates on an ethical framework for approaching the design and critique of computational performances, which can do good or ill as they contribute lively energies to the discourse ecology.


© 2023 University of South Carolina