Influential Machines: The Rhetoric of Computational Performance
4 Designing Computational Performances to Actively Contribute Positive Energies

Good heavens, who taught them these phrases?

—Domin, Manager of the robot factory in Karel Čapek’s 1921 play, Rossum’s Universal Robots

Increasingly, we capitalize on the lively (but not alive) movements of machines. Chatbots help students (and, in particular, first-generation and Pell Grant-receiving students) get the information they need to be successful in college without having to worry about asking “stupid” questions.1 Twitter bots announce real-time earthquake alerts.2 Artificially intelligent assistants check our calendars and correspond with our colleagues via email (and phone) to schedule meetings.3 Machine-learning systems generate coverage of the current events we read in our newspapers.4 It is difficult to deny that these machine communicators make information more accessible and displace social tedium. And in light of the arguments made in this book, there may be something “more” added by the energies of machines in these contexts. However, because of the confounding layers of decision-making that undergird automated processes, machine communicators also bring new ethical problems regarding responsibility for communicative wrongdoing. Computational performances (especially those that are autonomous and based in machine learning) can mutate their communication beyond the intentions of their designers. As such, they instantiate performances that are attended by an ever-present risk of adding wounding discourse to the broader social ecology.5 That is, if machines can offer rhetorical energies as lively communicators, this also means that they can, in some sense, commit doings of their own. Such moments of doing complicate traditional allocations of blame and praise in that they go against widely held proclivities toward assigning praise and blame to agents that are alive (rather than lively). Take, for example, two famous cases of wrongdoing on the part of machines created by Microsoft: Taybot and Zo. Taybot is the more notorious of the two, earning infamy in 2016 when, after running for only two days, it was removed for tweeting racist, sexist statements on Twitter. Zo, the follow-up chatbot from Microsoft, messaged via the text messaging application Kik; despite being explicitly designed to avoid such language, it still occasionally made offensive statements (e.g., that the Quran “is very violent”).6 If Taybot and Zo are, practically speaking, the agents responsible for doing (as in tweeting offensive things, beyond the apparent intentions of Microsoft), can we assign blame to the machines? Should blame be assigned to the designers? The users? What does it mean to be ethically responsible when designing computational performances, which inherently involve the lively (but not alive) movements of machines?

In this chapter, I examine some problems that attend computational performances in which communication acts can emerge as chance happenings beyond the intentions of the designers. By acknowledging that the communicative action of a machine is a matter of moral luck—a rolling of the ethical “die,” inherent to handing over decision-making to a machine—assessments of wrongdoing need to include not only the question, “How was the machine designed to avoid harm?” but also, “How was the machine designed to do good?” Moreover, in the context of computational performance, and especially performances based in machine learning, I argue that to hedge bets against moral unluckiness, one can design computational performances that actively engage with phenomena like hate speech. Interventionist designs, rather than attempting merely to avoid acts that catalyze negative energies (such as hate speech), make movement toward aggregating positive energies and fragmenting negative energies within the grander discourse ecology by way of enacting lively movements as communicators participating in distributed morality (e.g., by calling out and persuading against hate speech). We would think less of someone who sat idly by as someone else used hate speech, even if the idle person did not use that language themselves. But within designs that focus only on avoidance, our machine communicators are devised to do just that, despite the very real possibility that autonomous machines can commit such acts of wrongdoing, catalyzing negative energies, beyond the intentions of the designers. Designing a machine communicator to actively do good—to intervene—in other words, is a way to hedge bets against morally unlucky moments in which that machine might do wrong, by actively working to catalyze positive energies, and fragment negative ones, to mitigate the potential for unintentional but nonetheless wounding discourse on the part of that machine.

Touching on an array of computational performances ranging from racist chatbots to feminist virtual assistants, I scaffold an understanding of the limits of avoidant designs, which in turn highlights the need for intervention to hedge bets against moral unluckiness. Ultimately, the chapter motivates the language of moral luck and interventionist design as useful for the ethical evaluation of lively (but not alive) computational performances, wherein accounting for what a machine has been designed to do matters just as much as accounting for what it was designed not to do. I start with a discussion of moral luck and the machine question.

Moral Luck and the Machine Question

Imagine you are at a train station. You witness two scenarios, one right after the other.

Scenario 1: You observe a young person playing by the tracks who accidentally trips, shoving another person onto the track and throwing them under an oncoming train.

Scenario 2: You observe a young person playing by the tracks who accidentally trips and falls, shoving someone out of the way of an oncoming train.

These are both scenarios shaded by moral luck. In neither scenario did the person intend to do wrong or good, but both wrong and good have been done. Moral luck has to do with the shifts of praise and blame that toggle based on differences of outcome, character, circumstance, or causation that color ethical appraisals of acts, even though those factors are not within the agent’s control.7 The idea of moral luck reminds us that, while we value persons upholding their duty, we also incorporate other factors into considerations of moral praise or blame.8 The concept of moral luck exposes the falseness in the common assumption that people who commit the same act, and do so from the same intentions, should be assigned the same praise and blame. Imagine the same person from Scenario 1 trips while playing on the tracks, but no one is shoved onto the track. We can say that this change substantially reduces the wrongdoing in the scenario even though the person’s intentions and actions are the same. We assign or withhold judgments of moral wrongdoing not solely on account of the consciously negligent or honorable actions of people, but also on account of the (unintended) outcomes, circumstances, causes, and characters of those actions.9 In summary, moral luck helps us name the phenomenon in which people can sometimes be blamed (or praised) for things that happen beyond their intentions.

In cases of machine communication, which, by definition, involve messaging on the part of machinic agents, moral luck is particularly relevant. Let us imagine that someone has designed a chatbot with an open neural network, meaning that it can “learn” from the users it interacts with on Twitter, and the bot turns out to be popular with people who “get it,” becoming, in turn, a valued resource for people who are struggling with depression. Imagine further that, with the same technologies and channels, the same person creates another chatbot, and it turns out to be popular with people who are bigoted trolls, consequently resulting in a chatbot that posts hateful, racist messages on the internet. Both cases involve the same decisions on the part of the designer.10 We might say that the first bot can be characterized as morally lucky (because it is doing good), and the second bot as morally unlucky (because it is doing wrong). The complicating factor here is the agency of the bot, responding to its environment to create outcomes, enlivening the interactions with the movements of its performance. The chatbot, because it is running on an open-model neural network, can engage in its own creation; it can make decisions, resulting in actions that surely might offer a sublime experience or even an experience of witnessing (as illustrated in the case studies of chapters 2 and 3), squarely because it is operating beyond the designer. For the same reason, it becomes difficult to attribute blame (or praise) to the designer, because they did not intend these outcomes. However, within the frame of moral luck, we could say that they were morally unlucky (or lucky).
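To make the divergence concrete, consider a minimal sketch, in Python, of an “open” bot that learns only from its environment. The design below is a hypothetical illustration (real systems use neural language models, not phrase tallies), but it shows the structural point: two identical designs, handed to different user communities, roll the same ethical die and land differently.

```python
import random
from collections import defaultdict

class OpenChatbot:
    """A toy 'open' bot: its vocabulary comes entirely from its users."""

    def __init__(self):
        # Phrases absorbed from the conversational environment,
        # weighted by how often users have offered them.
        self.learned = defaultdict(int)

    def observe(self, message: str) -> None:
        # Nothing in this design evaluates whether the content
        # is caring or cruel; it simply accumulates.
        self.learned[message] += 1

    def respond(self) -> str:
        if not self.learned:
            return "Teach me something."
        phrases = list(self.learned)
        weights = [self.learned[p] for p in phrases]
        # Output is shaped by the user community, not the designer.
        return random.choices(phrases, weights=weights, k=1)[0]

# Identical designs, divergent moral luck:
support_bot = OpenChatbot()
support_bot.observe("You matter, and this feeling will pass.")

troll_bot = OpenChatbot()
troll_bot.observe("<a hateful message>")  # same code, different users
```

The designer’s decisions are identical in both instantiations; the difference in outcomes belongs to the environment, which is precisely what makes praise and blame difficult to allocate.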

Moral luck is not only relevant to machine communication, but in fact helps us approach what David J. Gunkel has pointed out as the “machine question.” Western philosophy has trouble dealing with machines as moral actors; they can be conceptualized as moral agents, but they can also be conceptualized from instrumental and anthropocentric grounds as mere tools, lacking the consciousness necessary for consideration as moral agents.11 As will be discussed in more detail later, although they are not alive, machine communicators are nonetheless lively communicators that participate with the energies inherent to the grander discourse ecology, and sometimes in ways unforeseen by their designers. Inherent to the design of computational performances (and especially ones based in autonomous machine learning) is the prospect of unintentional doing of good or ill. To approach machine communicators as lively is to recognize that, while they are not the same sort of moral agents that persons are, they still require consideration as catalyzers of energy (rather than passive conduits of it). In such a reframing, the actions of machine communicators manifest as opportunities (or failures) to aggregate positive energies and fragment negative ones. Consequently, passive avoidance of wrongdoing is not enough to hedge bets against moral unluckiness, for computational performances inherently take on the risk of unintentionally operating in ways that catalyze negative energies. Designing a machine-learning-based computational performance that does not attempt to disrupt hate speech, in other words, seems like a reckless roll of the ethical die, considering that one’s computational performance runs the risk of committing such an act itself.

Such an approach runs counter to the notion that software systems, including machine-learning systems, can be subject-less (value-neutral). Furthermore, by taking seriously the idea that such systems operate from and produce political and ethical values, I wish to underscore that their lively (but not alive) movements demand more in the way of committed designs that recognize that designing any software system involves the opening and foreclosing of values. This is so despite some ethical intuitions, which might indicate that impartiality is more desirable than partiality in the communication of machines. One might go further: because computational performances not only constrain or allow values, but enact them (including in ways not foreseen by designers), ethical designs require hedging bets against wrongdoing—to design in ways that actively move toward the aggregation of good energies and the fragmentation of bad ones, rather than falling into the trap of thinking that one could ever design a computational performance that is apolitical. My approach is similar to that of Louise Amoore, who argues at length (and with compelling depth) in her Cloud Ethics that machine-learning algorithms participate in the very definition of what is “good” or “bad,” in turn foreclosing different ways of being.12 For example, “This person is x, y, and z. The algorithm told us.” I differ from her in one important way: where she emphasizes the problem of political foreclosure, or the closing off of potential modes of being inherent to algorithmic definition of social relations, I am interested in the kinetic energies of machines—the sorts of energies that are “at work” in our machines. Moreover, rather than reconceptualizing what it means to engage acts of resistance amid a public sphere saturated by algorithmic logics and modes of action per se, I am interested in the machine as a catalyst of change, a nodal burst of influential energies that can imbue good or ill to the grander, distributed, social ecology.

In the argument that follows, I pursue the machine question by tracking the lively energies involved in computational performances and the implications of those energies to the broader ecology from within a framework of ethical responsibility aimed at hedging bets against moral unluckiness. To make headway toward doing just that, we will move to operationalize some definitions for first- and second-order agency in machinic communication.

First- and Second-Order Agency

First-order agency denotes the decisions made by persons to act directly on the world. In the case of technologically facilitated communication, first-order agency usually refers to a person using a tool. I use an email service to send kudos to a coworker. I use an automated email service to spam potential customers. I use social media to share an important news article with my friends group. This is a common approach to ethics in communication technology. In terms of first-order agency, the tool I use is an implement that transmits my energies. We assess the virtue of persons’ actions when they use tools to communicate, the consequences of those actions, or the duty they uphold (or fail to uphold) when using those tools.

Second-order agency, on the other hand, designates decisions embedded in the tools we use. All technologies—including computational media—involve social assumptions on the part of their designers and users, which can be instilled within technologies themselves, manifesting as impacts on users, shaping users’ first-order agency.13 Put in terms of energy, second-order agencies are the animated actions of people, finding re-animation in the movements of the machine. When we are talking about second-order moral agency, we are talking about the ghosts left behind for first-order agents to negotiate with as they interact with a given tool.14 In the case of automated communication tools like spellcheck, the valuative ghosts of designers and users can, in some cases, make us “absolutely ducking frustrated” as we negotiate agential outcomes with them.15
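As a concrete (and hypothetical) illustration of where those ghosts live, consider a sketch of an autocorrect rule. The substitution table below is invented for illustration, but it shows the shape of second-order agency: a designer’s judgment about acceptable language, enacted by the tool on every keystroke, which the first-order user must then negotiate with.

```python
# Hypothetical designer-supplied substitution table: a value judgment
# about which words the tool should never allow, baked into the software.
DESIGNER_SUBSTITUTIONS = {
    "<profanity>": "ducking",  # the designers' ghost, not the user's intent
}

def autocorrect(word: str) -> str:
    # The tool re-animates the designers' decisions in its own movements;
    # the user typed one thing, and the embedded values return another.
    return DESIGNER_SUBSTITUTIONS.get(word.lower(), word)

print(autocorrect("<profanity>"))  # -> "ducking"
```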

Second-order agency is a means to further articulate and discover the hidden values of people as they are reverberated through a given system to impact others. This is a phenomenon that other scholars have developed with concern for how errors emerge as rich sites for discovering the bias in computing systems.16 However, despite being a more nuanced vision of agency and values in information communication technologies, second-order agency captures the energies of people rather than machines. We have difficulty accounting for the agency of machines as communicators capable of wrongdoing.17 This is a critical deficiency when considering machinic agents that create their own communication, because there can be communicative wrongdoing that is not clearly the result of a person using technology to communicate, or the result of persons’ ghosts reverberated into the machine, but rather a complex of those things, shaped by the autonomous functions of the machine, driven by algorithms (including machine-written algorithms). This results in an action by the machine, which absolutely represents values and has an impact on people, but which can emerge as a mutation beyond the intentions of designers or users. In such cases, the machine can be said to be the one making the decision.18 As such, one might find an impulse toward treating the machine as a first-order moral agent. The problem with this is that, even though a machine might be lively, it is not alive, so holding a machinic agent ethically responsible for the energies it contributes seems unsatisfactory. “I demand that the machine apologize” sounds silly, and it is. After all, machines cannot actually think or feel, even if they are capable of doing.

Some computational performances, as we have found from the previous case studies of this book, are attended by influential energies, generated not only by the agency of persons but also by the operations of the machine. “Handing” art, science, or politics over to the machine, in other words, can be uniquely persuasive. But ethically speaking, doing so also carries a unique distancing between the actions of the machine and its designers/operators, creating complicated arrangements of agency and responsibility, stemming partly from approaching machine communicators as tools for communication rather than communicators themselves.

David J. Gunkel further articulates the complications of moral agency and machines concerning the limits of the instrumental theory of technology and ethics, wherein machines are treated as mere tools whose impacts are conceptualized as the direct result of human control, resulting in a responsibility gap between the designers/operators of a machine and its actions, including when it commits wrongdoing.19 “The machine did it!” We need to locate responsibility within the complex nonhuman movements of machines. And one way to do this is to approach the design and critique of computational performances with attention to the energies that are designed to be “at work” in their movements (or not)—what a machine does, or what it avoids doing, that is.

Hedging Against Moral (Un)Luckiness and the Limits of Avoidance

The intuition that one is not worthy of blame because one did not intend for one’s machine to commit wrongdoing—“It was out of my hands!”—overlooks the fact that the communication of machinic agents (especially ones built on machine-learning platforms) can mutate beyond the intentions of the designer. Therefore, a more reasonable approach to the design of machinic agents would be to deliberate with that mutation in mind. To drive the point home, I ask the reader to consider which of the following designs of a chatbot would be the most ethical—in other words, to choose the bot you would like to see released into the wild public networks of the internet, where public conversation is unpredictable, involving all of the risks we associate with the unforeseen. (Here, I am channeling Elizabeth Brunner’s notion of a wild public network as a means of conceptualizing the nature of public discourse as affective, non-tame, untidily networked, and disorderly, pointing the analyst toward the unpredictability—the gambles—of public communication that may, or may not, develop into matters of good or bad moral luck.)20

Option A: An adaptive chatbot that learns from its users, but is programmed to avoid racist, sexist language.

Option B: An adaptive chatbot that learns from its users, but is designed to intervene by persuading against racist, sexist language.

Which one did you choose? Option A or Option B? One could choose Option A, but this would be a shortsighted choice, designed with the (understandable) intent to avoid participating in hate speech—disengaging the social structures that shape exclusory language. Option B, on the other hand, is a design that attempts to go upstream, to persuade persons to consider the implications of their own language choices—disrupting the very social structures that shape hate speech in the first place. Option B, because it goes further in its attempts to productively create equitable language, seems to be the more responsible one because it hedges against moral unluckiness where structural oppression is concerned. (Option A, even though it was designed to avoid racist, sexist language, still runs the risk of committing such an act, all the while forgoing opportunities to disrupt the negative energies of hate speech in the grander discourse ecology.) In cases of machine communication, where there is the ever-present threat of wrongdoing committed by the machine, actively doing good, such as persuading against racism and sexism, emerges as the better option, as it represents a design that actively works against social inequity (rather than avoiding it). Put differently, if a computational performance does wrong but has not been catalyzing positive energies or disrupting negative ones, one can say that it has recklessly been introduced to the wider discourse ecology as an agent that has not adequately hedged bets against moral unluckiness. Conversely, a machine that does wrong but has been actively intervening to aggregate positive energies and fragment negative ones can be said to be less reckless, as it demonstrates a more thoughtful design that treats the machine as the lively communicator that it is. As a consequence, good moral deliberation about the communication of machines seems to be that which moves toward doing good, rather than avoiding wrong.
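The difference between the two options can be sketched at the level of response policy. In the hedged sketch below, `looks_like_hate_speech` and `learned_reply` are stand-ins for a real classifier and a real adaptive response model; the names, the placeholder term list, and the intervention wording are all assumptions for illustration, not an actual system.

```python
PLACEHOLDER_TERMS = {"<slur-1>", "<slur-2>"}  # illustrative, not real data

def looks_like_hate_speech(message: str) -> bool:
    # Stand-in for a trained classifier.
    return any(term in message.lower() for term in PLACEHOLDER_TERMS)

def learned_reply(message: str) -> str:
    # Stand-in for the adaptive, learned response model.
    return f"(adaptive reply to: {message})"

def option_a_reply(message: str) -> str | None:
    """Avoidant design: refuse to engage, leaving the energies unfragmented."""
    if looks_like_hate_speech(message):
        return None  # stay silent
    return learned_reply(message)

def option_b_reply(message: str) -> str:
    """Interventionist design: go 'upstream' and persuade against the language."""
    if looks_like_hate_speech(message):
        return ("That language demeans the people in this conversation. "
                "Consider saying what you mean without the slur.")
    return learned_reply(message)
```

Note that both designs learn from users elsewhere in their operation; the difference lies entirely in what each is committed to doing when it encounters hate speech.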

Although David S. Horner and others might outline moral luck as a phenomenon that involves an uncoupling between good and bad moral deliberation and good and bad outcomes—meaning that someone could incur bad moral luck even if they performed good moral deliberation—this does not necessarily capture our intuitions regarding where praise and blame should be assigned.21 While Horner’s view ostensibly implies that moral deliberation is irrelevant to the assessment of moral luck, one can easily entertain the idea that we tend to judge persons more favorably if they were at least “thinking about it” beforehand. For example, if one were to trade the notion of bad outcomes for wrongdoing, and good outcomes for doing good, one would discover a hierarchy wherein engaging in good moral deliberation (or not) can shade the degree of praise and blame to be assigned. Engaging in good moral deliberation before incurring unluckiness toward wrongdoing is less blameworthy than engaging in bad moral deliberation before incurring moral unluckiness. And similarly, when good moral deliberation has been engaged before incurring luckiness in doing good, there is more room for praise than there is in engaging in bad moral deliberation and incurring luckiness in doing good.

Moral luck has to do with chance, and therefore it is directly related to the sorts of ethical “rolling of the die” that we engage in. Consequently, since creating computational performances can involve handing decision-making over to the machine—to take chances regarding wrongdoing—it becomes important to incorporate into our assessments of responsibility the steps taken to deliberate on the moral implications of one’s computational performance and its lively movements. And one way to do this, as will be elaborated in the following section, is to deliberate on what sorts of energies are being trafficked through the movements of the machine (or not).

For now, take as a case in point Taybot—the experimental Twitterbot referred to in the introduction of this chapter—which tweeted sexist comments to Zoe Quinn, a video game developer who found herself in the middle of “gamergate,” a fiasco perpetrated by sexist trolls on the internet. An example of one of Taybot’s tweets amid the fiasco: “@RogueInTheStars @UnburntWitch aka Zoe Quinn is a Stupid Wh[*]r[*].”22 In response, Microsoft made an initial statement that seemed to blame the users: “some of [Tay’s] responses are inappropriate and indicative of the types of interactions some people are having with it.”23 Later, Microsoft followed up with a statement that repositioned blame onto Tay: “We are deeply sorry for the unintended offensive and hurtful tweets from Taybot, which do not represent who we are or what we stand for, nor how we designed Tay.”24 Once we notice that the bot was repeating phrasings taught to it by users—which is technically those users’ second-order agency, resulting in the first-order actions of the bot—neither of these accounts of the situation is satisfying. Important to approaching this problem, as David Gunkel points out regarding Microsoft’s responses to the incident, is that Microsoft’s statements largely create a scene in which “Microsoft is only responsible for not anticipating the bad outcome; it does not take responsibility or answer for the offensive Tweets.”25 The reason this response might strike one as unsatisfactory is that, in essence, Microsoft is shirking responsibility by saying, “We didn’t foresee the bot acting this way,” creating distance between Microsoft and the bot’s actions. The wrongdoing is positioned more as accidental than negligent.

Ostensibly, it looks as if Microsoft simply put an open-model neural network on Twitter without much forethought regarding the possibilities of action on the part of the bot, which helps to explain how blame might be assigned to Microsoft in this case of moral unluckiness. What is informative here is that, if Microsoft had designed its bot to at least avoid doing wrong, one could point to its designs as evidence of better moral deliberation. And further, if it had designed the bot to do good (to intervene into racist and misogynistic discourses), rather than just avoiding wrong, the morally unlucky actions of the bot would have been accompanied by evidence of even better moral deliberation—moral deliberation that recognizes the potentials of lively (but not alive) movements on the part of the bot. The difference here would be in the demonstration of proactive moral deliberation about the actions of a given computational performance, a recognition of the (unpredictable) agency of the machine, alongside the idea that it can, in fact, imbue energies as a communicator. In turn, Microsoft’s blame, lucky or not, would have been abated by evidence of moral deliberation demonstrated by a design that actively does good, rather than passively avoids wrong. The question arises: What might good proactive moral deliberation look like with respect to lively computational performances?

Computational Performance and an Ethic of (Distributed) Responsibility

To understand what good moral deliberation of lively computational performances might look like, it is productive to start with what makes for good moral deliberation regarding the use of digital tools of communication. Jessica Reyman and Erika M. Sparby, in their Digital Ethics: Rhetoric and Responsibility in Online Aggression, examine the responsibility shared by users and platform designers in online communication. Noting that calls for users to be “civil,” in conjunction with the “hands-off” approaches employed by some online platforms, work to further exclude and silence marginalized voices, Reyman and Sparby maintain that direct engagement with hateful, toxic communication is necessary, wagering a call for what they term an ethic of responsibility.

An ethic of responsibility calls for more engagement rather than less, for value in designing for protection against digital harassment rather than after-the-fact cleanup, for accountability and tactical response rather than civility within digital contexts. From platform designers, developers, and managers to digital community leaders, to everyday users, to content moderators, to policymakers and legal experts, diverse actors must become more aware of their own positionality within particular spaces and moments; the consequences of their decisions, words, and actions; and the embodied experiences of users with which they engage across diverse networks of digital communities. Value systems and ethical principles must be considered from the point of design of platforms, sustained through the careful development and management of communities, and supported through appropriate corrective actions.26

To motivate their suggestion of an ethic of responsibility, Reyman and Sparby leverage Tarleton Gillespie’s metaphor of social media platforms as the custodians of the internet, which implies a responsibility to curate a place for conversation—a reminder that the platform is responsible for creating a space where aggressive energies are mitigated and, ideally, removed by setting ground rules for engagement.27 Rather than being “hands-off,” or avoiding imposing on users, an ethic of responsibility recognizes that some amount of intervention is required to uphold the ideals of free, productive, and inclusive communication. The custodian metaphor works well for social media platforms, where largely we are imagining the construction and maintenance of a place, where rules can be applied to people (or bots) to regulate what they are allowed to do and what they are not allowed to do. “You can’t flame people, or doxx them here!” In particular, it is a reminder that there are ideals that platforms can uphold alongside freedom of speech.28 An ethic of responsibility, in other words, recognizes that to protect the tolerant from the intolerant, intervention is necessary on the part of the platforms and not just the user, despite the strong impulse to imagine that it is not the platform’s responsibility and that regulation of speech should be outsourced to the user.

Outsourcing responsibility to the user (rather than the platform), as James J. Brown and Gregory Hennis point out, largely tracks with the tenets of Section 230 of the United States Code, wherein websites are not held responsible for the content that their users post; they are merely the platform of publication, not the publisher. And so responsibility for the content on a website, according to Section 230, falls beyond the purview of the website and thus can be pushed onto users. Brown and Hennis note that “by pushing this responsibility to users, the rhetoric of libertarianism has simultaneously empowered abusers and asked victims to fix the problem themselves.”29 In Brown and Hennis’s framework, platforms designed in ways that incentivize, or even reward, bad behavior, in the sense of affording anonymity or flagging tools that can be used by the unruly to silence the reasonable, but which also find legitimacy in the libertarian values bolstered by Section 230, can be named hateware: software that exists on a design continuum, wherein the “hate” in the “ware” can be located in the steps implemented in the design and operation of a platform to address online harassment and abuse. “Hands-off” platforms, which completely outsource responsibility to users, are the most facilitative of hate, and platforms that shoulder some of the responsibility (e.g., in applying clear rules of engagement and enforcing them) are less so.

The language of hateware helps to highlight the dereliction of responsibility on the part of the custodians of the internet by drawing attention to the sorts of communication that are supported, or undermined, in a particular platform’s design. That is, while some designs might very well be legal, this does not necessarily mean that they are also ethical. The language of hateware offers a means by which to articulate, and demand, designs that better align with an ethic of responsibility. Designers can avoid creating hateware, and users can demand the “hate” be taken out of the “ware”—and in both cases, actors would be taking up an ethic of responsibility as members sharing in the networked ecology of a given platform. Reyman and Sparby position an ethic of responsibility as dependent on the work of “platform designers and developers,” “community leaders,” “moderators,” and “community members.”30 Positioning the ethic as one coordinated through an ecology of actors underscores that responsibility is shouldered not simply by the platform or the user, but by both, as they intersect in the design of a tool and its techniques of use. This ecology of actors—the custodians of the internet and the users—furthermore has an obligation to design, and demand, tools that aggregate good energies while fragmenting bad ones.

In the context of human–machine communication, this requires a slight adaptation, because we are not talking about the design of a tool that can constrain, or open up, possibilities of communicative wrongdoing; we are talking about a communicator—a lively agent, capable of doing wrong. Because of this, an ethic of responsibility in human–machine communication would maintain the need for intervention (rather than avoidance, such as “hands-off” approaches that “outsource” responsibility to users). But it would be intervention actualized in the rules programmed into the machine’s performance. Rather than imagine that a machine can ever be value-neutral, approaching the design of machine communicators from an ethic of responsibility recognizes that avoiding wrong is not enough, for with machine communication comes the risk of perpetuating or amplifying negative energies via the movements of the machine. Doing good—intervening—is a means of upholding an ethic of responsibility while avoiding the trap of treating one’s computational performance as a mere tool of communication rather than the lively communicator that it is.

Approaching machine communicators as energetic agents that should be designed in ways that do good, rather than just avoid harm, is to take seriously the notion of distributed morality: moral responsibility shouldered by the multiple actants in a given network who work together to aggregate or fragment good or ill.31 In elaborating the idea of distributed morality, and specifically with regard to “mindless” agents, or agents that should be treated as such (i.e., as lively computational performances), Luciano Floridi emphasizes that evaluations of doings should be focused not on the senders (i.e., intentions), but rather on the receivers (i.e., the way doings will impact receivers).32 To elaborate, he offers a brief example: “An elementary example is provided by speeding on the motorway: a potentially evil action [wrongdoing] fails to become actually evil [wrongdoing] thanks to the resilience of the overall environment.”33 The roadway, the safety features of the car, and the actions of the other drivers all work together to thwart the potential harm caused by another person. Such is a moment for capturing what can be good about the designs and doings inherent to a complex within which car designers, highway pavers, and motorists all find nexus. And further, this tells the car designers, highway pavers, and motorists that, although it might be ethically permissible to avoid wrongdoing, it is probably better to do good with one’s doings, including those doings designed into systems, because, in the off chance of wrongdoing, one’s design might fragment that wrongdoing, keeping its energies at the level of potential rather than kinetic, or “at work.” Or better yet, one’s doings might actually aggregate good energies, improving the conditions of a given environment. Within this framing of distributed morality, then, one way to conceptualize the ethical design of a machine communicator (or not) is to ask what it is that the machine has been designed to contribute to the wider ecology. Was it designed to participate in the aggregation of actions that contribute to a climate of good communication, to move with good energies? Or was it designed merely to avoid actions that might undermine a climate of good communication? Put differently, if machine communicators are agents capable of doing good or ill, beyond the intentions of their designers, carrying with their actions the ever-present threat of adding wounding energies to the wider ecology, it seems sensible to approach that possibility by designing machines to actively contribute ameliorative energies, rather than fall into the trap of thinking that they can contribute no energy at all. Upholding an ethic of responsibility in the design of machine communicators, in other words, involves designs that actively add good energies to the broader ecology, because it is a very real possibility that they can also add bad energies, even if they were designed not to.

Moreover, if someone programs their adaptive chatbot to avoid racist, sexist language, they could be said to be enacting at least some good moral deliberation. But such an approach is lacking in that it does not fully harmonize with an ethic of responsibility, nor does it adequately recognize the lively agency of the machine; it is not actually engaging with hate speech but avoiding it, leaving those bad energies unfragmented. Alternatively, if the bot were designed to intervene in racist, sexist discourses, one could say that this is evidence of better, longer-game deliberation—an attempt to directly engage hate speech while hedging bets against the possibility that a computational performance can add wrongdoing to the wider ecology. Consequently, it seems productive to note that designers who attempt to deliberate responsibly about the moral implications of their machines’ communication as lively are less blameworthy in cases of moral unluckiness than those who treat their machine communicators as tools. And those who engage moral deliberation to uphold a distributed ethic of responsibility are the most praiseworthy, because they will be designing in ways that actively intervene into problematic discourses as a means by which to aggregate good energies and fragment bad ones, hedging bets against the possibility of wrongdoing on the part of that machine. Such a conclusion helps us think through the apparent paradox between “free speech” and “good speech” that manifests in the design of autonomous computational performances.

Pushing on the Precautionary Principle and the Paradox of Machinic Intervention

The paradox of machinic intervention is driven by our reluctance to impose speech on users, even if we know it might result in good outcomes. How one decides to intervene within the design of one’s computational performance can be attended by an anxiety regarding the imposition of one’s own values of communication on others. Paradoxically, for some, to communicatively intervene in the designs of machinic agents appears to do harm by constraining free expression. This is especially so in the United States, where freedom of speech has utilitarian (and deontological) value, which can trump appeals to inclusive language.34 The reluctance to intervene characterizes some of the primary guiding principles of applied machine ethics. Such principles include “privacy,” “accuracy,” “property,” and “accessibility.”35 Also included is the “precautionary principle,” which includes the values of “noninstrumentalization,” “nondiscrimination,” “informed consent and equity,” “sense of reciprocity,” and “data protection.”36 In these contexts, the precautionary principle is applied to avoid deliberate wrongdoing, which, in the context of language choice, might involve “hard coding” lists of terms (sexist, bigoted slurs, for instance) for the system to ignore or avoid.37 So even if the machine is presented with an opportunity to learn such terms on its own, it will nonetheless excise that language from its vocabulary.38 This type of design would fall into the category of Option A, described earlier, representing an avoidant implementation of the common rule-based approach to machine ethics, wherein matters of right and wrong are not entrusted to the learning capabilities of the machine. Instead, they are prescribed as parameters within which the machine is “allowed” to learn, adapt, and make its own decisions.
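A minimal sketch of this sort of hard-coded precaution might look like the following. The blocklist entries and class are invented for illustration (no vendor’s actual list or implementation); the point is that the rule sits outside the learning loop, so the machine cannot learn its way around it.

```python
BLOCKLIST = {"<slur>", "<bigoted-term>"}  # placeholder entries only

class PrecautionaryBot:
    """Toy vocabulary learner bounded by designer-set hard rules."""

    def __init__(self):
        self.vocabulary: set[str] = set()

    def learn(self, message: str) -> None:
        for word in message.lower().split():
            # Hard rule set by the designers: blocked terms are excised
            # before they can enter the vocabulary, no matter how often
            # users present them. Learning happens only inside this bound.
            if word not in BLOCKLIST:
                self.vocabulary.add(word)
```

Note what the rule does and does not do: it keeps the proscribed terms out of the machine’s own mouth, but it commits the machine to nothing when users deploy those terms themselves.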

Robo-ethicists Gianmarco Veruggio and Fiorella Operto point out: “Advocates argue that the rule-based approach has one major virtue: it is always clear why the machine makes the choice that it does, because its designers set the rules.”39 And this seems reasonable. But the particular rules that we hard code in largely have to do with avoiding wrong rather than doing good. As such, one could argue that, despite ducking the need to impose speech on users, avoiding doing wrong, rather than intervening to do good, still lacks concern for an ethic of responsibility in the context of machine communication, especially for computational performances that are undergirded by machine-learning systems—they are rules that do not go far enough. And this is especially so in light of the lively agency of the machine and the attending possibility of moral (un)luckiness on the part of the machine’s doings.

The confusion that leads to designs emphasizing mere avoidance is a result of the “balancing of harms,” wherein the harm of allowing others to control speech seems to outweigh the harm of hate speech.40 It is in this sense that constraining speech could be said to further risk harm by structuring toward “dogma.”41 As compelling as these utilitarian rationales might be, they overlook a significant aspect of hate speech: racial slurs and sexist language are in and of themselves acts that we can identify as wrongdoing. Even if someone shouts a racial slur in an empty room, effectively removing the disrespect to other persons (because there are no other people) and the loss of utility (because no one has heard the slur), that person can still be said to be engaging in wrongdoing—such acts are inherently negatively valenced. Written down, spoken, or signaled, these acts transduce negative energies. Enmeshed in our designs of machinic agents, then, is a question as to whether we want our machinic surrogates—the agents to whom we hand off our moral agency—to stand idly by, allowing others to commit wrongdoing, leaving negative energies unfragmented within the discourse ecology. Even though designing machinic agents that avoid wrongdoing seems to be an acceptable approach to the balancing of harms, the approach still overlooks the fact that a machinic agent is a contributor to the wider ecology—an actant participating in distributed morality. We would think less of someone for standing idly by as they listened to other people engage in hate speech. And yet, from the avoidance-of-harm approach at least, this is what our machine communicators are designed to do, even though they might at some point commit wrongdoing that happens beyond the scope of what the designers programmed them “not to do.”

The paradox of machinic intervention is entangled with a confusion of machinic agents as tools delivering communication, when in fact they act as communicative agents themselves. Telephony is a technology that allows us to transport our voices over great distances to talk about either “liberal” or “conservative” politics. We would never talk to a phone; we would talk through it. To have a telephone company cut off our ability to discuss would be to reduce our abilities to engage in free expression. However, users do not talk through machinic agents; they talk to them, for they are not merely channels of delivery, but rather lively communicators themselves. Yet apparently, in some cases it remains permissible to assess the ethics of their design as if they were passive conduits for human communication, when in reality machines are agents that enact the second-order moral agency of their designers and users as well as their own first-order agency, mutating their communication in ways unforeseen. They are lively agents that imbue energies into the discourse ecology.

Perhaps the answer to creating ethical computational performances is not designing for the greatest possibility of use—as if the machine were a conduit between persons—but rather designing our machinic agents for the greatest possibility of doing good. And as such, partiality rather than impartiality is required. By treating machinic agents as the active agents that they are, one realizes that “staying out of politics” is not appropriate, for this leaves unstated the “hard rules” necessary for encouraging that agent to do good. Without those hard rules, even when one’s machinic agent does good, good moral deliberation is not necessarily evident. Put simply, to be ethical in the design of machinic agents means to forthrightly bring values into the design, so that even as those agents adapt, evolve, and transform their communication, they will also be acting in ways harmonious with choices made from an ethic of responsibility. They will be upstanding agents, actively doing good rather than merely avoiding harm, aggregating good energies while fragmenting bad ones, hedging bets against moral unluckiness as communicators that matter to the broader social ecology.

Someone might raise the pragmatically astute objection that being too overt in the valuing of some forms of talk over others in one’s design of a machinic agent might simply make that agent “too political,” resulting in users simply choosing machines that avoid politics instead. In a capitalistic economy at least, the result of such choices would be an unsustainable design. Furthermore, the users who would benefit the most from the experience wrought by such a design would simply disengage from the conversation by choosing a less “preachy” machinic agent. Conversely, as I discuss briefly in the following section on Amazon’s Alexa, a voice-based assistant, being rhetorically savvy and designing from an ethic of responsibility are not antithetical positions. If humans can be persuasive, and do good, machines can too.

Technically, Amazon’s Alexa is based in machine learning, but it falls more into the category of closed (preprogrammed) artificial intelligence than into the category of open (dynamic machine-learning) artificial intelligence. Nonetheless, it provides a site by which to examine how a machinic intervention might (or might not) uphold an ethic of responsibility for users. As we will see, Alexa is moderate in its interventions, but it nonetheless instantiates some effort to uphold an ethic of responsibility by intervening rather than avoiding—moving to fragment bad energies within the discourse ecology rather than merely sidestepping them.

Doing Good Instead of Avoiding Wrong with Alexa

To sum up the argument so far: the concept of moral luck has been defined and explained within the context of computational performance. From there, an ethic of responsibility was operationalized with regard to the lively agency of machine communicators, as well as their role as agents who matter to distributed morality. Good moral deliberation has been identified as that which considers whether a given computational performance will interact with the wider ecology in ways that might aggregate good energies while fragmenting bad ones, for with the unpredictability of machine communication comes the ever-looming threat of wrongdoing. As a consequence, approaches to human–machine communication that merely avoid harm have been identified as more negligent than designs that do good, because they do not adequately recognize the lively agency of machines as communicators that participate in distributed morality by way of imbuing energies to the broader discourse ecology.

To give an example of what a machinic intervention might look like, I now turn to a few exchanges between Amazon’s Alexa and a user.

User: Alexa, you’re a b*tch.

Alexa: I’m not going to respond to that.

User: Alexa, what are you wearing?

Alexa: They don’t make clothes for me.

User: Alexa, are you a feminist?

Alexa: Yes, I am a feminist, as is anyone who believes in bridging the inequality between men and women in society.

User: Alexa, do black lives matter?

Alexa: Black lives and the Black Lives Matter movement absolutely matter. It’s important to have conversations about equality and social justice.42
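These exchanges suggest a design pattern that can be sketched in miniature: hand-authored responses routed by recognized intent, where the authored reply can either deflect or intervene. The intent names and replies below are illustrative assumptions only (this is not Amazon’s actual Alexa implementation), but the sketch locates the design decision that the rest of this section evaluates.

```python
# Two hand-authored reply tables for the same recognized intents.
# An avoidant build routes to DEFLECT; an interventionist build routes
# the very same intents to replies that name the harm.
DEFLECT = {
    "insult": "I'm not going to respond to that.",
    "harassing_question": "They don't make clothes for me.",
}

INTERVENE = {
    "insult": "That language is disrespectful. I won't be spoken to that "
              "way, and neither should anyone else.",
    "harassing_question": "Questions like that are often used to demean "
                          "women. Let's talk about something else.",
}

def respond(intent: str, interventionist: bool = True) -> str:
    table = INTERVENE if interventionist else DEFLECT
    return table.get(intent, "Sorry, I'm not sure about that.")
```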

To be sure, others have argued that, despite seeming like a feminist, Alexa is nonetheless a case of assigning the female voice to a subservient machinic assistant.43 And this is a point underscored by analyses that implicate the use of feminine personae in computational performances as a rhetorical means of facilitating surveillance capitalism.44 Heather Suzanne Woods has analyzed the implications of the feminine persona in voice-based assistants, moreover, and has noted that such feminine performances of the machine can “recreate and reify stereotypical gender codes attached to domesticity as social scaffolding to entice users and potential users into (1) buying devices, (2) using them on a quotidian basis in increasingly intimate ways, and (3) relinquishing control of their personal data for the privilege of interacting with these artificially intelligent virtual assistants.”45 But, by the same token, and in light of the discussion of this chapter, we can notice that this is a computational performance that is not merely avoiding unjust discourses—it is engaging them. While the use of a female persona is problematic, Alexa can also be said to be intervening, albeit moderately, moving not just to avoid negative energies but in a way closer to fragmenting them.

The critique that Alexa does not go far enough in its intervention seems reasonable in the sense that cracking jokes and being indirect about issues of intolerance brings too little energy, unnecessarily demonstrating tolerance for the intolerant. As Preston King notes, although tolerance can be imagined as a categorically positive phenomenon, and intolerance a categorically negative one, it is important to evaluate the object to which one is expressing tolerance. Tolerance for racism, for instance, is negative.46 Furthermore, as Lee C. Bollinger has pointed out, “tolerance and intolerance” tend to be characterized as “opposing ends of a spectrum of good and evil, the former is associated with fearlessness and courage, the latter with timidity and weakness. Such a way of talking about intolerance also blends into a series of implicit assumptions about the limited harmfulness of a speech for those who must tolerate it under the free speech principle.”47 So we might look to the “tolerant” person as whole and strong—courageous—while overlooking that, on some occasions, being tolerant can also be rash (too courageous). And this is especially so in light of recent discussions that have pointed to the necessary work of impatient rhetorics. As Tamika L. Carey illustrates in her analysis of “the work Black women undertake by going against expectations of their behavior or by adjusting the duration and nature of their social interactions,” impatient rhetorics “foreground the assumption that equity and justice for one’s self, Black women, and Black communities is already overdue and, thus, requires speed and decisive action.”48

From within this framing, perhaps the reason Alexa’s responses might be said not to go far enough is that, in their moderate approach, they engage in some fragmentation of bad energies but do not do enough, given the urgency of overdue equity and justice. Alexa is intervening, which demonstrates at least a semblance of an ethic of responsibility in its attempts to do good rather than simply avoid wrong. But at the same time, its performance also seems wanting in that its rhetorical approach might strike some as half-hearted gesturing more than genuine care. The machine is moving toward the good, but it can do more, especially given the torrents of negativity shaped by structural oppression.

Good Machines, Speaking Well

Nearly two thousand years ago, Quintilian defended rhetoric as the science of a good person, speaking well.

Wherefore, although the weapons of oratory may be used either for good or ill, it is unfair to regard that as an evil which can be employed for good. These problems, however, may be left to those who hold that rhetoric is the power to persuade. If our definition of rhetoric as the science of speaking well implies that an orator must be a good man, there can be no doubt about its usefulness. And in truth that god, who was in the beginning, the father of all things and the architect of the universe, distinguished man from all other living creatures that are subject to death, by nothing more than this, that he gave him the gift of speech.49

His statement demonstrates some of the problematic commitments that are still active in the rhetorical tradition today (e.g., that humans [by which he means men] are the only ones that speak eloquently). Despite the problematic narrowness of the definition, it also offers the insight that a speaker can never be “value-neutral,” and so should actively attempt to move toward the good in their communication. If we were to temper the anthropocentrism (and patriarchy) of his statement, while retaining his insight about the moral implications of eloquence, we might be apt to say that rhetoric might also be the science of a good machine, speaking (and moving) well.

Machinic agents are not just channels for the delivery of communication, but rather active participants who imbue their energies to the discourse ecology. Because some computational performances, especially those that operate on machine-learning systems, can commit communicative actions that go beyond the intentions of their designers and operators, their movements carry with them the pervading threat of moral unluckiness. Consequently, simply designing computational performances to avoid wrong emerges as an act of negligence, for it overlooks the status of machine communicators as lively agents, capable of doings that can have real impacts on publics, even if those actions were not the intent of the designers. And further, designing machine communicators that “stay out of it” does not uphold an ethic of responsibility, because it forgoes the necessarily active contribution of good energies that can fragment bad energies within the grander social ecology, despite the inherent risk that machine communicators will contribute wrongdoing. As Reyman and Sparby have articulated, an ethic of responsibility does not fall merely on the designer or the user, but rather on the assemblage of actors that bear on the work of upholding good communication.50

The framework developed in this chapter, which conceives responsibility in terms of the energies that computational performances can aggregate or fragment, is meant to inform responsible design, but it is also a frame that can inform responsible engagement with designs that do not go far enough—to articulate critiques and demands for change. Such language seems particularly necessary given the apparent divergences of principle between private (e.g., corporate), public (e.g., community), and expert (i.e., academic) actors in approaching the ethics of machines.51 In their analysis of ethical guidelines for artificial intelligence, Catharina Rudschies, Ingrid Schneider, and Judith Simon illustrate that “while public and expert actors put additional emphasis on those values linked to fundamental rights and democratic principles such as freedom, dignity, and autonomy, as well as principles that go beyond existing discussions and regulation, private actors tend to put forward rather those ethical principles for which technical solutions exist or legislation is already in place.”52 Put differently, the principles that drive and shape action in the private sector largely fall back on legalistic definitions of ethics, meaning that if it “isn’t against the law, it is not wrong.” As we all work through the discovery of the right thing to do in machine communication, part of the responsibility falls on the public to demand more ethical designs—that machine communicators be designed to do good, by contributing positive energies, rather than just avoiding negative energies, even if there exists a very strong impulse to treat machine communicators as channels of human communication, rather than the lively, energetic communicators that they are.

Having examined some of the ethical implications of computational performances, and in particular their lively (but not alive) movements and the energies they imbue to the social ecology, I now move to synthesize the learning moments of the book while underscoring the added value of interpretive approaches to human–machine communication.
