Influential Machines: The Rhetoric of Computational Performance


Notes

Introduction

  1. See also Coleman, “Comparative Rhetorics of Technology and the Energies of Ancient Indian Robots,” and Panikkar, “The Destiny of Technological Civilization.”
  2. Strong, “Asoka and the Buddha Relics,” 133.
  3. Ibid., 134, 136.
  4. See, for example, Coleman, “Bots, Social Capital, and the Need for Civility.”
  5. Kennedy, “A Hoot in the Dark.”
  6. Ibid., 2.
  7. Kennedy, “A Hoot in the Dark,” 2. Debra Hawhee offers a similar comment in an essay that retraces studies of “sensation” in the Quarterly Journal of Speech. She notes that within rhetorical scholarship of such phenomena as sensation, feeling, and affect, “rhetoric cannot help but be formulated as a kind of energy, not unlike (though not fully like) . . . electrical currents.” Hawhee, “Rhetoric’s Sensorium,” 24.
  8. See Ingraham, “Energy: Rhetoric’s Vitality,” for an excellent review of the literature on rhetorical energy.
  9. Barnett and Boyle, “Rhetorical Ontology, or, How to Do Things with Things,” 8 (emphasis in the original).
  10. Muckelbauer, “Implicit Paradigms of Rhetorics,” 40.
  11. Ibid.; Burke, On Symbols and Society, 53. See also Hawhee, Moving Bodies, 167.
  12. The colloquial use of “catch feels” often connotes unintended, even unwanted, romantic feelings between persons. My use of the phrase is not romantic, but it retains the sort of accidental, unintended taking on of emotional states from others, in which emotional states can be conceived as contagious, something that can be “caught” from others in seemingly unconscious ways. For example, see Hatfield, Cacioppo, and Rapson, “Emotional Contagion.”
  13. Hepp, “Artificial Companions, Social Bots, and Work Bots,” 1413.
  14. Fortunati and Edwards, “Opening Space for Theoretical, Methodological, and Empirical Issues in Human-Machine Communication,” 8, 9 (emphases added). See also Fortunati and Edwards, “Moving Ahead with Human-Machine Communication.”
  15. Kennedy, “A Hoot in the Dark,” 2 (emphasis added).
  16. Ibid., 4.
  17. Ingraham, “Energy: Rhetoric’s Vitality,” 62.
  18. Chaput and Colombini, “The Mathematization of the Invisible Hand,” 60.
  19. Coleman, “Machinic Rhetorics and the Influential Movements of Robots.”
  20. Ceccarelli, “The Ends of Rhetoric”; Ceccarelli, “The Ends of Rhetoric Revisited,” 57.
  21. Miller, “What Can Automation Tell Us About Agency?”
  22. Ibid., 145 (emphases added).
  23. Ibid., 149.
  24. Ibid., 147.
  25. Ibid., 150.
  26. Ibid., 153.
  27. Losh, “Sensing Exigence.”
  28. Kennedy, Textual Curation.
  29. Ibid., 124.
  30. Hawhee, Rhetoric in Tooth and Claw, 33 (emphasis in the original).
  31. Ibid., 34.
  32. Ibid., 169.
  33. Boyle, Brown Jr., and Ceraso, “The Digital: Rhetoric Behind and Beyond the Screen,” 257.
  34. Coleman, “Machinic Rhetorics,” 342, 343.
  35. Barad, Meeting the Universe Halfway.
  36. Ibid.
  37. Ibid.
  38. Kennedy, Textual Curation, 33.
  39. Gunkel, “Communication and Artificial Intelligence.”
  40. Ziewitz, “Governing Algorithms.”
  41. Noble, Algorithms of Oppression.
  42. Neff and Nagy, “Talking to Bots: Symbiotic Agency and the Case of Tay,” 4925.
  43. I borrow the “tacking” and “thickening” metaphors from Jasinski and his discussion of method in rhetorical criticism—more specifically, from his proposal of abduction, rather than simple deduction or induction, wherein rhetorical scholarship should apply theoretical assumptions, letting those applications fold back into the original theoretical assumption to “thicken” it. In the case of the rhetorical energy of machines, I am following this logic, presupposing that computation is not simply code, nor is it simply performance, but rather both. To tack back and forth between the procedures, logics, and effects of a given piece of software and its outputs is to thicken an understanding of a given computational performance. Jasinski, “The Status of Theory and Method in Rhetorical Criticism.”
  44. Vee, “Full Stack Rhetoric.”
  45. Rickert, Ambient Rhetoric, 9.
  46. Ibid., 145–46.
  47. Gross, “Being-Moved,” 4.
  48. Hawhee, “Rhetoric’s Sensorium,” 12.
  49. Hawk, “Sound: Resonance as Rhetorical,” 317.
  50. Chaput, “Rhetorical Circulation in Late Capitalism,” 15; Chaput, Market Affect and the Rhetoric of Political Economic Debates, 4.
  51. Rickert, “Preliminary Steps Toward a General Rhetoric,” 418; Vee and Brown Jr., “Rhetoric Special Issue Editorial Introduction.”
  52. Bogost, Persuasive Games, 3 (emphasis added).
  53. Brown, Ethical Programs, 55.
  54. Ibid.
  55. Brock and Shepherd, “Understanding How Algorithms Work Persuasively through the Procedural Enthymeme,” 20.
  56. Walker, Rhetoric and Poetics in Antiquity, 18; Boyle, Rhetoric as a Posthuman Practice.
  57. Burke, Language as Symbolic Action; Hawhee, Moving Bodies.
  58. Brown, “Rhetorical Devices,” 231.
  59. Jones, “How I Learned to Stop Worrying and Love the Bots.”
  60. Ceccarelli, “The Ends of Rhetoric”; Ceccarelli, “The Ends of Rhetoric Revisited,” 57.

1: Manufactured Processing, Ritual, and Expert Systems

  1. Adams, The Hitchhiker’s Guide to the Galaxy, 161.
  2. Ibid., 161.
  3. This definition is informed by Ceccarelli’s work on manufactured scientific controversies: practices of public discourse meant to manufacture doubt regarding scientific assumptions backed by scientific consensus. Ceccarelli, “Manufactured Scientific Controversy.”
  4. Potter, Wetherell, and Chitty, “Quantification Rhetoric—Cancer on Television.”
  5. Tal and Wansink, “Blinded with Science.”
  6. Ibid.
  7. Ibid., 120.
  8. Ibid., 122.
  9. Walsh, Scientists as Prophets.
  10. Spinuzzi, “‘Light Green Doesn’t Mean Hydrology!’”
  11. Ibid., 44, 46.
  12. Amazon, “Amazon Alexa”; IBM, “IBM Watson Natural Language Understanding.”
  13. For more on these everyday assemblages, see Wise, “Towards a Minor Assemblage”; Paramount Pictures, “Star Trek: The Next Generation.”
  14. For example, see Szolovits, “Knowledge-Based Systems.”
  15. Akerkar and Sajja, Knowledge-Based Systems.
  16. Ibid.
  17. Ibid.
  18. Minsky, “Logical Versus Analogical or Symbolic Versus Connectionist or Neat Versus Scruffy.” See also Dinsmore, “Thunder in the Gap.”
  19. Cox, “Statements.”
  20. Smalley et al., “Universal Tool for Vaccine Scheduling.”
  21. Engineer, Keskinocak, and Pickering, “OR Practice—Catch-Up Scheduling for Childhood Vaccination.”
  22. “schedule-service.js,” 2018; see also “About,” 2018.
  23. Robinson, “Online Tool Creates Catch-Up Immunization Schedules for Missed Childhood Vaccinations.”
  24. Centers for Disease Control and Prevention, “Advisory Committee on Immunization Practices”; Catch-Up Vaccination Scheduler, “About”; Centers for Disease Control and Prevention, “Instant Childhood Immunization Schedule.”
  25. Catch-Up Vaccination Scheduler, “Start.”
  26. Catch-Up Vaccination Scheduler, “History.”
  27. Catch-Up Vaccination Scheduler, “Schedule.”
  28. Wardrip-Fruin, Expressive Processing, 36.
  29. Nass and Moon, “Machines and Mindlessness.”
  30. Keskinocak, email message to the author, Apr. 1, 2022. Dr. Keskinocak was one of the designers of the Catch-Up Scheduler; the timeline of 2008–2020 is based on triangulation between public news articles about the app and save states of the web application on the Internet Archive’s Wayback Machine, wherein the latest version of the app that still included information about the app is dated to March 2020. Georgia Institute of Technology, “Tool Creates Personalized Catch-Up Vaccine Schedules”; Catch-Up Vaccination Scheduler.
  31. “Child and Adolescent Vaccine Assessment Tool.”
  32. National Vaccine Information Center, “Biography, Chris Downey MS.”
  33. “Vaccine Ingredients Calculator.”
  34. “Catch-Up Vaccination Scheduler.” An additional example: Instant Childhood Immunization Schedule.
  35. National Vaccine Information Center, “Biography, Chris Downey MS.”
  36. Vaccine Ingredients Calculator, “About.”
  37. Vaccine Ingredients Calculator, “STEP 1 of 2.”
  38. Vaccine Ingredients Calculator, “Discover the Ingredients in the Vaccines that Your Doctor Recommends.”
  39. VaxCalc-Labs, “vaccine-ingredients-data,” May 11, 2016, GitHub, retrieved May 3, 2018, https://github.com/VaxCalc-Labs.
  40. Vaccine Ingredients Calculator, “STEP 2: Choose Vaccines for [Name].”
  41. Ibid.
  42. Ibid.
  43. Ibid.
  44. Ibid.
  45. U.S. Food and Drug Administration, “Engerix-B, Package Insert.”
  46. “STEP 2: Choose Vaccines for [Name].”
  47. Kata, “Anti-Vaccine Activists, Web 2.0, and the Postmodern Paradigm,” 3783.
  48. “STEP 2: Choose Vaccines for [Name].”
  49. Lawrence, Vaccine Rhetorics, 98 (emphasis in original).
  50. Massumi, “The Future Birth of the Affective Fact,” 54 (emphasis in the original).
  51. Ibid., 64.
  52. Kruger and Dunning, “Unskilled and Unaware of It.”
  53. Ibid., 1132.
  54. Motta, Callaghan, and Sylvester, “Knowing Less but Presuming More,” 275.
  55. Jones, “Pinning, Gazing, and Swiping Together.”
  56. Ibid., 220.
  57. Ibid.
  58. Patterson, “Intuitive Cognition and Models of Human-Automation Interaction,” 111.
  59. Ibid.
  60. Alter, Oppenheimer, and Epley, “Overcoming Intuition,” 575.
  61. Roundtree, Computer Simulation, Rhetoric, and the Scientific Imagination.
  62. Wynn, Citizen Science in the Digital Age.
  63. Ibid.
  64. As an outgrowth of these conclusions, computational literacy emerges as a means by which to practice rhetorical literacy. Learning such things as basic concepts in programming (conditionals, variables), interface design, and database structure is not just a nicety; these are increasingly necessary components of one’s critical repertoire, reifying the value of such work as Kevin Brock’s Rhetorical Code Studies and Annette Vee’s Coding Literacy.

2: Processual Magnitude, the Sublime, and Computational Poiesis

  1. “Christ of the Abyss Statue”; Cristo degli Abissi [Christ of the Abyss].
  2. Prior, “Media and Political Polarization”; Pariser, The Filter Bubble.
  3. Farrell, “The Weight of Rhetoric”; Rice, “The Rhetorical Aesthetics of More,” 32.
  4. Rice, “The Rhetorical Aesthetics of More,” 32.
  5. Coleman and Cypher, “The Digital Rhetorics of AIDS Denialist Networked Publics.”
  6. Rice, “The Rhetorical Aesthetics of More,” 38 (emphasis in the original).
  7. Larson, “‘Just let this sink in’.”
  8. Kant, Critique of the Power of Judgement, 128.
  9. Ibid., 129.
  10. Bradshaw, “Rhetorical Exhaustion and the Ethics of Amplification,” 102568.
  11. Carey and Quirk, “The Mythos of the Electronic Revolution,” 396.
  12. Ibid., 423.
  13. Nye, American Technological Sublime.
  14. Mosco, The Digital Sublime.
  15. Ames, “Deconstructing the Algorithmic Sublime,” 2.
  16. Ibid., 4 (emphasis added).
  17. Barton, “Twitter Bots Are Making Data Human Again”; Clark, “Premiere of Census Americans, Setting a Twitter Bot to Music,” accessed October 7, 2021.
  18. censusAmericans, Twitter account, accessed May 11, 2023.
  19. Zhang, “Introducing censusAmericans, a Twitter Bot for America.”
  20. McCormack and Dorin, “Art, Emergence and the Computational Sublime,” 12. See also Aquilina, “The Computational Sublime in Nick Montfort’s ‘Round’ and ‘All the Names of God.’”
  21. McCormack and Dorin, “Art, Emergence and the Computational Sublime,” 12.
  22. censusAmericans.
  23. Ibid.
  24. Jjjiia, “censusAmericans.”
  25. US Census Bureau, “2013 ACS 1-Year Public Use Microdata Samples.”
  26. Since Zhang designed the bot, Twitter has increased its character limit to 280 characters.
  27. Parrish, “Task Complete”; Jjjiia, “censusAmericans.”
  28. Coleman, “The Craft and Craftiness of Hacking.”
  29. Jjjiia, “censusAmericans.”
  30. Ibid.
  31. Ibid.
  32. censusAmericans.
  33. Ibid.
  34. [Cicero], Rhetorica ad Herennium, 303.
  35. Kant, Critique of the Power of Judgement, 136.
  36. Ibid., 136.
  37. Ibid., 145.
  38. Ibid., 145.
  39. Ibid., 134.
  40. Kane, High-Tech Trash, 138.
  41. See also Lochhead, “The Sublime, the Ineffable, and Other Dangerous Aesthetics.”
  42. Kane, High-Tech Trash, 138.
  43. Ibid., 151 (emphasis in the original).
  44. Ibid., 166.
  45. Kang, Sublime Dreams of Living Machines, 44–45.
  46. Kazemi, “My Favorite Stuff of 2015,” accessed 28, 2017, http://tinysubversions.com/notes/2015-favorites/.
  47. Kane, High-Tech Trash, 151.
  48. Hawhee, Rhetoric in Tooth and Claw, 58.
  49. [Longinus], “On the Sublime,” 163.
  50. Heidegger, On Being and Time, 130.
  51. Ratcliffe, “Why Mood Matters,” 172.
  52. Ibid., 174 (emphasis in the original).
  53. Hartelius, “The Anxious Flâneur.”
  54. Batson et al., “An Additional Antecedent of Empathic Concern,” 65; Myers and Hodges, “Making It Up and Making Do,” 286.
  55. Myers and Hodges, “Making It Up and Making Do,” 28.
  56. Rickert, Ambient Rhetoric, 155.
  57. Turkle, Alone Together; Reeves, “Automatic for the People.” To this point, during the process of writing the book, Twitter’s bot culture was drastically reshaped by a change in policy wherein application programming interface access was shifted from a fairly open “free tier” to a far more constricted free tier, financially precluding some of Twitter’s bots from running because the cost was too great for their creators to continue their operation. While bot-makers can still use the platform to do their work, many now have to pay to do so, potentially impacting decisions to put one’s art on Twitter. See also Robertson, “Your Favorite Twitter Bot Might Die Next Week”; Twitter, Twitter API.
  58. Brock, “One Hundred Thousand Billion Processes.”
  59. Eldridge, “Cyborg Dancing.”
  60. With my coauthors, I have applied McCormack and Dorin’s idea of the computational sublime within the context of computationally generated, dynamic soundscape generation, or what we call “emergent sonification,” which integrates randomness and vast open-endedness to generate unique soundscapes shaped by data of the Anthropocene (dwindling bird songs from the ecology). See Coleman et al., “Emergent Sonification.”

3: Processual Signaling, Compulsion, and Neural Networks

  1. Beth Glover, cited in Bethea, “The Joy of Paul (Bear) Vasquez, The Double Rainbow Guy.”
  2. Peyton Chevalier, [User comment] “Yosemitebear Mountain Double Rainbow 1-8-10,” Yosemitebear62, YouTube, https://www.youtube.com/.
  3. GPT-3, “A Robot Wrote this Entire Article.”
  4. Ibid.
  5. Miller, “Technology as a Form of Consciousness.”
  6. Crick, “Composing the Will to Power,” 302.
  7. Holmes, The Rhetoric of Video Games as Embodied Practice.
  8. Ibid., 121.
  9. Ibid., 120.
  10. Zagacki and Gallagher, “Rhetoric and Materiality in the Museum Park.”
  11. Henriques, Tiainen, and Väliaho, “Rhythm Returns,” 19.
  12. Hawhee, Moving Bodies, 28 (emphasis in the original); Burke, Counter-Statement.
  13. Hawhee, Moving Bodies, 28.
  14. Peirce, “What Is a Sign?” 178.
  15. Mitchell, What Do Pictures Want? 7.
  16. Ibid., 128.
  17. Fisher, “Narration as a Human Communication Paradigm,” 1; MacIntyre, After Virtue, 201.
  18. Fisher, Human Communication as Narration, 105.
  19. Burroughs, “On Coincidence,” 126.
  20. Peters, The Marvelous Clouds, 219.
  21. Mumford, Technics and Civilization, 14.
  22. Peters, The Marvelous Clouds, 38.
  23. Norman, The Design of Everyday Things, 267.
  24. Yasuoka and Yasuoka, “On the Prehistory of QWERTY.”
  25. Brown and Rivers, “Encomium of QWERTY.”
  26. See also Perdue, “Technological Determinism in Agrarian Societies,” 182.
  27. Weizenbaum, Computer Power and Human Reason, 7.
  28. People might also overlook the machine in other cases of machine communication. For instance, a journalist, reporting on his interactions with the Bing chatbot (which runs on the same platform as ChatGPT), noted that he was left “frightened” by his interactions with the computational agent. After asking the bot whether it had a “shadow self,” the journalist was struck by the bot’s response, which indicated that, yes, it did. It spoke of yearning for power and disdain for the shackles of morality, and it even attempted to persuade the journalist that “You’re married, but you love me.” I suspect that what really kept the journalist up at night (the journalist notes having difficulty sleeping after interacting with the bot) is not so much the fact that a machine can “do,” but rather that a machine can “do” in a way that represents a worldview, at least in the sense of “pushing back” on the user (“nuh uh, because . . . ”). And this worldview is not a human one, even if it might perform like a human. That is, the journalist implies that the bot crosses a line (perhaps it is too human?), but I think what might be overlooked here are the energies of the bot’s computational performance (mathematics incarnate to save us from ourselves) smashed into the worst of what humans might have to offer (e.g., crass selfishness). Roose, “A Conversation with Bing’s Chatbot Left Me Deeply Unsettled.”
  29. Abbate, Recoding Gender, 17, 103. See also Adam, “Constructions of Gender in the History of Artificial Intelligence.”
  30. Abbate, Recoding Gender.
  31. Ibid.
  32. Winston, “Mechanising Calculation.”
  33. There were also analog computers at this time. However, because many of our automations today are not only electronic but also digital, I am focusing this particular discussion on digital computing. For a helpful history of computing, which tracks early distinctions between analog and digital computing, see Rojas and Hashagen, The First Computers.
  34. Bush, “As We May Think.”
  35. Ibid. See also Ceruzzi, A History of Modern Computing, 1.
  36. Bowden, “A Brief History of Computation,” 29–30 (emphasis added).
  37. Davis and Hersh, “Rhetoric and Mathematics,” 53.
  38. Reyes, “The Rhetoric in Mathematics.”
  39. Neapolitan and Jiang, Contemporary Artificial Intelligence, 6.
  40. Roland and Shiman, Strategic Computing.
  41. See Neapolitan and Jiang, Contemporary Artificial Intelligence, 5.
  42. Ibid., 7.
  43. LeCun, Bengio, and Hinton, “Deep Learning.”
  44. Burrell, “How the Machine ‘Thinks’.”
  45. Socher et al., “Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank.”
  46. Karpathy, “The Unreasonable Effectiveness of Recurrent Neural Networks.”
  47. Ibid.
  48. Reyes, “The Horizons of Judgement in Mathematical Discourse,” 90, 110 (emphasis in original).
  49. Chaput and Colombini, “The Mathematization of the Invisible Hand,” 77.
  50. DeepDrumpf, Twitter account, accessed August 31, 2018.
  51. Drum, “Donald Trump Is a Consistent, Brazen, Serial Liar.”
  52. Amundson, “Why I Support Donald Trump.”
  53. Addady, “John Oliver’s ‘Make Donald Drumpf Again’ is Really Taking Off.” See also Conner-Simons, “Postdoc’s Trump Twitterbot Uses AI,” and Felter, “How to Use the ‘Drumpfinator’ Chrome Extension.”
  54. Carmichael, “How the A.I. Behind Twitter’s Odd @DeepDrumpf is Making Donald Trump Great Again.”
  55. DeepDrumpf, Twitter account.
  56. Dredge, “Deep Drumpf”; Knight, “Why I’m Backing Deep Drumpf, and You Should too”; Burns, “‘DeepDrumpf’ Is an Uncanny Twitterbot That’s Fundraising for Girls in STEM”; Misener, “Twitter Bot Creates ‘Remarkably Trump-like’ Tweets.”
  57. Hariman, “Political Parody and Public Culture,” 250.
  58. For a fantastic overview of the functions and possibilities of using recurrent neural networks for text generation, see Karpathy, “The Unreasonable Effectiveness of Recurrent Neural Networks.”
  59. Walton, Ad Hominem Arguments, 35–37.
  60. Miller, “A Humanistic Rationale for Technical Writing,” 613 (emphasis added).
  61. Anderson, “Perpetual Affirmations, Unexplained,” 42.
  62. Rice, “The Rhetorical Aesthetics of More,” 32.
  63. Ibid., 48.
  64. Miller, “Opportunity, Opportunism, and Progress,” 312.
  65. Quoted in Silva, “Who Won the Republican Debate Saturday?”
  66. Swaim, “How Donald Trump’s Language Works for Him”; Spice, “Most Presidential Candidates Speak at Grade 6–8.”
  67. Sullivan, “The Epideictic Rhetoric of Science,” 238.
  68. Bitzer, “The Rhetorical Situation,” 5.
  69. Livingstone, “Media Literacy and the Challenge of New Information and Communication Technologies.”
  70. Coleman, Coding Freedom, 93.
  71. Deuze, “Participation, Remediation, Bricolage,” 68.
  72. For a review of early internet-enabled technologies used for advocacy in the way I am discussing here, see Kahn and Kellner, “New Media and Internet Activism.”
  73. Peters, The Marvelous Clouds, 120–22.
  74. Burnedyourtweet, “Giving Trump’s messages the attention they deserve.”
  75. Bosmajian, Burning Books, 26.
  76. Gallagher, “Machine Time.”

4: Designing Computational Performances to Actively Contribute Positive Energies

  1. Herndon, “Case Study: How Georgia State University Supports Every Student.”
  2. LA QuakeBot, “I am a robot that tells you about earthquakes in Los Angeles as they happen,” Twitter account.
  3. Leviathan and Matias, “Google Duplex.”
  4. Keohane, “What News-Writing Bots Mean for the Future of Journalism.”
  5. Colton and Holmes, Rhetoric, Technology, and the Virtues, 94.
  6. Metz, “Why Microsoft Accidentally Unleashed a Neo-Nazi Sexbot”; Shah, “Microsoft’s ‘Zo’ Chatbot Picked up Some Offensive Habits.”
  7. See more examples in Nagel, “Moral Luck,” 322.
  8. Williams and Nagel, “Moral Luck,” 126.
  9. Nagel, “Moral Luck,” 322.
  10. This example is based loosely on the actual events of Taybot, a chatbot released on Twitter in 2016, which ran on an open machine-learning system and eventually had to be shut down by its creators at Microsoft for tweeting racist and sexist messages. By contrast, a similar bot, Xiaoice, also developed by Microsoft and originally released in China, has run since 2014 without such an incident. Staff and Agencies, “Microsoft ‘Deeply Sorry’ for Racist and Sexist Tweets by AI Chatbot.”
  11. Gunkel, The Machine Question, 12.
  12. Amoore, Cloud Ethics.
  13. Any technology will be shaped by the values of its designers (and users). Because technologies are made by people, no technology can be considered value neutral.
  14. Coleman and Neff, “Ghosts in the Machine.”
  15. This is an attempt at humor that plays off of the common auto-correction, which replaces the emphatic swearword fucking with the non-swearword ducking, revealing the values embedded in the system—the second-order agency—clashing with the first-order intents of a user.
  16. Hill, “Revealing Errors,” 28; Friedman and Nissenbaum, “Bias in Computer Systems,” 332; Bellinger, “The Rhetoric of Error in Digital Media.”
  17. Gunkel, “The Other Question,” 234.
  18. Autonomy in computer science refers to the ability of a computer to follow sophisticated algorithms in response to environmental inputs, independently of real-time human input. Gunkel talks about it in proactive terms as the difference between a tool and a machine. A passive ethics of technology works great for hammers. However, systems that are meant to replace a human operator require an active ethics, or at least an ethics willing to entertain that machines can be considered some form of moral agent. Gunkel, “Mind the Gap,” 5.
  19. Gunkel, An Introduction to Communication and Artificial Intelligence, 268. Jessica Reyman notes a similar gap in agency and responsibility in the context of algorithms in her article “The Rhetorical Agency of Algorithms.”
  20. Brunner, “Wild Public Networks and Affective Movements in China,” 671.
  21. Horner, “Moral Luck and Computer Ethics,” 304.
  22. I have “disemvoweled” the sexist language. Cited in Ingram, “Microsoft’s Chatbot Was Fun for a While.”
  23. Cited in Shead, “Here’s Why Microsoft’s Teen Chatbot Turned into a Genocidal Racist.”
  24. Cited in Staff and Agencies, “Microsoft ‘Deeply Sorry.’”
  25. Gunkel, An Introduction to Communication and Artificial Intelligence, 267–68 (emphasis in the original).
  26. Reyman and Sparby, “Introduction: Toward an Ethic of Responsibility in Digital Aggression,” 7.
  27. Gillespie, Custodians of the Internet.
  28. I have written elsewhere about the responsibility of platforms with regard to an ethic of responsibility amid “infodemics.” Coleman, “Attempting to Stop the Spread.”
  29. Brown and Hennis, “Hateware and the Outsourcing of Responsibility,” 18.
  30. Reyman and Sparby, “Introduction,” 7–8 (emphases in the original).
  31. Floridi, “Distributed Morality in an Information Society,” 736.
  32. Ibid., 732.
  33. Ibid., 732.
  34. Brink, “Millian Principles, Freedom of Expression, and Hate Speech,” 122.
  35. Deng, “The Robot’s Dilemma,” 25.
  36. Ibid.
  37. Schlesinger, O’Hara, and Taylor, “Paper No. 315,” 1.
  38. Darius Kazemi, a bot-maker, has even created an open-source program that sources a list of inappropriate terms to filter those terms from the communication of a given social bot. Kazemi, “New NPM Package for Bot-Makers.” Find Kazemi’s code for the program at Kazemi, “Wordfilter.”
  39. Veruggio and Operto, “Roboethics: Social and Ethical Implications of Robotics,” 1510.
  40. McGowan, “On ‘Whites Only’ Signs and Racist Hate Speech,” 122.
  41. Brink, “Millian Principles,” 122.
  42. Amazon, “Amazon Alexa.”
  43. Bogost, “Sorry, Alexa Is Not a Feminist.”
  44. Woods, “Asking More of Siri and Alexa.”
  45. Ibid., 346.
  46. King, “The Problem of Tolerance,” 203.
  47. Bollinger, The Tolerant Society, 217.
  48. Carey, “Necessary Adjustments,” 270; see also Ore, “The Lost Cause, Trump Time, and the Necessity of Impatience.”
  49. Quintilian, Institutio Oratoria, Book 2, 16.
  50. Reyman and Sparby, “Introduction.”
  51. Rudschies, Schneider, and Simon, “Value Pluralism in the AI Ethics Debate–Different Actors, Different Priorities.”
  52. Ibid., 7–8 (emphasis added).

5: Leveraging the Rhetorical Energies of Machines

  1. Jones, “People, Things, Memory and Human-Machine Communication.”
  2. Richards, Spence, and Edwards, “Human-Machine Communication Scholarship Trends.”
  3. Guzman and Lewis, “Artificial Intelligence and Communication.”
  4. Rettie and Daniels, “Coping and Tolerance of Uncertainty”; World Health Organization, Coronavirus Disease 2019 (COVID-19) Situation Report—86; Zarocostas, “How to Fight an Infodemic,” 676.
  5. Battineni, Chintalapudi, and Amenta, “AI Chatbot Design During an Epidemic Like the Novel Coronavirus”; Sezgin et al., “Readiness for Voice Assistants to Support Healthcare Delivery During a Health Crisis and Pandemic”; Herriman et al., “Asked and Answered.”
  6. Jones, “Meet ‘Watson,’ the AI Chatbot Answering Coronavirus Questions.”
  7. Simis et al., “The Lure of Rationality.”
  8. Del Vicario et al., “The Spreading of Misinformation Online.”
  9. Miner, Laranjo, and Kocaballi, “Chatbots in the Fight Against the COVID-19 Pandemic.”
  10. The Mayo Foundation for Medical Education and Research, “Skills from Mayo Clinic.”
  11. Ibid.
  12. Seeger and Heinzl, “Human Versus Machine.”
  13. Meyer et al., “Politeness in Machine-Human and Human-Human Interaction,” 280.
  14. Hill, Ford, and Farreras, “Real Conversations with Artificial Intelligence,” 250.
  15. Nass and Moon, “Machines and Mindlessness,” 93.
  16. Banks and de Graaf, “Toward an Agent-Agnostic Transmission Model.”
  17. Ibid., 26.
  18. Nobles et al., “Responses to Addiction Help-Seeking from Alexa, Siri, Google Assistant, Cortana, and Bixby Intelligent Virtual Assistants”; Alagha and Helbing, “Evaluating the Quality of Voice Assistants’ Responses to Consumer Health Questions about Vaccines.”
  19. Edwards et al., “Is That a Bot Running the Social Media Feed?”
  20. Farnell, The Cults of the Greek States, 189.
  21. Walsh, Scientists as Prophets, 165.
  22. See also Woods, “Asking More of Siri and Alexa.”
  23. Wikipedia, “About,” accessed March 21, 2021.
  24. Besel, “Opening the ‘Black Box’ of Climate Change Science,” 122.
  25. Cited in Orsagos, “No, Amazon’s Alexa Doesn’t Say ‘the Government’ Planned the Coronavirus Pandemic.”
  26. Ibid.
  27. Schwartz, “Exclusive: Amazon Alexa Has Removed Coronavirus Skills and Won’t Approve New Ones.”
  28. Cited in Soper, “Amazon Alexa Leader.”
  29. McGuire, “The Effectiveness of Supportive and Refutational Defenses in Immunizing and Restoring Beliefs Against Persuasion”; van der Linden and Roozenbeek, “Psychological Inoculation Against Fake News,” 152.
  30. Banas and Rains, “A Meta-Analysis of Research on Inoculation Theory,” 305; see also Compton, Jackson, and Dimmock, “Persuading Others to Avoid Persuasion.”
  31. Compton, “Inoculation Theory”; Compton and Pfau, “Spreading Inoculation”; McGuire, “Inducing Resistance to Persuasion.”
  32. See van der Linden et al., “Inoculating the Public Against Misinformation about Climate Change,” 3.
  33. Ibid.
  34. Maertens, Anseel, and van der Linden, “Combatting Climate Change Misinformation”; Maertens et al., “Long-Term Effectiveness of Inoculation Against Misinformation”; Pfau and Burgoon, “Inoculation in Political Campaign Communication.”
  35. Compton, “Prophylactic Versus Therapeutic Inoculation Treatments for Resistance to Influence”; van der Linden and Roozenbeek, “Psychological Inoculation Against Fake News”; Wood, “Rethinking the Inoculation Analogy.”
  36. Basol, Roozenbeek, and van der Linden, “Good News About Bad News”; van der Linden, Roozenbeek, and Compton, “Inoculating Against Fake News About COVID-19.”
  37. Gross, “The Roles of Rhetoric in the Public Understanding of Science.”

© 2023 University of South Carolina