Fake News and Anthropology: A Conversation on Technology, Trust, and Publics in an Age of Mass Disinformation Part II

Emergent Conversation 9

A discussion with Andrew Graan, Adam Hodges, and Meg Stalcup, moderated by Mei-chun Lee

The Roots of Fake News, from Wikimedia Commons. CC BY-SA 3.0-UNESCO (https://commons.wikimedia.org/wiki/File:The_roots_of_%27fake_news%27.png)

This Emergent Conversation is part of a PoLAR Online series, Digital Politics, which will also include a Virtual Edition with open access PoLAR articles. Anthropologists Adam Hodges, Andrew Graan, and Meg Stalcup joined this virtual conversation to share their thoughts on fake news, disinformation, and political propaganda. It was moderated by PoLAR Digital Editorial Fellow Mei-chun Lee. The conversation is being published in three installments. This is Part II of the discussion. Part I is available here and Part III is here.

Mei-chun Lee:  Thank you all for such inspiring and thought-provoking responses. I want to follow up on what Adam calls a “post-trust” era—“an erosion of trust in the skills and competencies associated with networked communities of professionals that opens the door to a new social network of knowledge producers with its own revered spokespersons and means of distributing their own truth claims.” What is implied here, for me, is a new way of networking that nurtures fake news to be—echoing Andrew—“a particular way of participating in a public.” Meg further calls our attention to the aesthetics of truth claims and how tech infrastructure, such as WhatsApp in her example, enables new modes of truth claims and different relationships of trust. It seems to me that fake news cannot be understood without a discussion of the digital culture of sharing, remix, anonymity, and pranking; it is closely related to internet phenomena such as memes and trolling. The spread of fake news is also accelerated within the filter bubbles and echo chambers created by social media algorithms. How do tech infrastructure and digital culture contribute to the birth and growth of fake news? What roles do algorithms play here? Are we witnessing the emergence of a new form of public that is mediated by new networking technologies?

Adam Hodges:  Andrew, I love how you bring in Gal’s conceptualization of publics and relate the constitution and regimentation of publics to interdiscursivity. I’d like to pick up on this point in responding to Mei-chun’s question about how tech infrastructure and digital culture contribute to the birth and growth of fake news.

As I argue elsewhere (Hodges 2018), we need a new theory of propaganda for the social media age (in many ways, fake news can be viewed as a form of propaganda used to promote a particular political cause). Twentieth-century frameworks privileged a top-down model that viewed propaganda and mass communication as vertical processes. Someone or some group, such as the Russian state, is seen as an all-powerful actor responsible for organizing and disseminating messages that are more or less unproblematically received by the masses (per the hypodermic needle model of communication). Debra Spitulnik (1996) problematizes this view of mass communication in her work on the social circulation of media discourse, and John Oddo (2018) does the same in his work on war propaganda.

Spitulnik underscores the role of lateral communication—that is, communication that takes place between individuals participating in a public (to use Andrew’s terminology) who recontextualize and redistribute mass media messages. Likewise, in his theory of propaganda, Oddo emphasizes the importance of what he calls horizontal propaganda: “propaganda spread collectively by a diffusion of participants.” In other words, there is a “social organization of interdiscursivity” (per Gal)—a public, to continue with Andrew’s terminology—responsible for sanctioning and forwarding messages to others in their social network who, in turn, do the same, and so on and so forth.

This intertextual web of participatory behaviors—behaviors that privilege forms of lateral or horizontal communication—is central to twenty-first-century digital culture. Digital culture revolves around the ability to share messages with others in your network; and, of course, technologies are designed around sharing and virality so that messages can be easily reposted and forwarded. Memes, tweets, and multimedia messages hold the potential to go viral when shared widely enough. This raises the question: what makes a message worthy of sharing? What helps a message go viral?

Digital sharing is just a form of entextualization—turning a message into a text (defined per Gal 2006 as “any objectified unit of discourse”) that can be easily moved from one context to another, from one person’s social media feed to another’s. In his discussion of “Entextualization, Replication, and Power,” Greg Urban writes that “some kinds of discourse are more copiable and hence more shareable than others” (1996, 24). Pithy messages worth repeating often exploit the poetic dimension of language, using words or images in a clever way that compels others to repeat the message. To echo Meg, there is an aesthetic element that makes such messages memorable and impactful.

In the age of social media, the intrinsic appeal of a message also seems to rely on how much outrage it can stir. In a noisy mediascape, messages that amplify outrage stand out. The popularity of Trump’s insult-laden tweets epitomizes this—whether one likes them or not, they’re hard to ignore. Even publics that oppose Trumpian policies or recognize the erroneous foundation of fake news stories can help propagate the messages, for instance by sharing fake news for a laugh or to point out the absurdity of its claims.

Message resonance is also central to the compulsion to share. Meg underscored the importance of affective engagement. If a message properly stirs a public, it can lead to that “enjoyment of efficacious resonance” (per Mazzarella). If a message typifies a worldview shared by a public (regardless of the truth value of the claims), it can lead to another share. I like how Meg emphasized that “even when the stories are false, the sensations they produce are real.” This emotional resonance seems to propel the sharing of certain messages over others within a given public.

As these messages propagate along a speech chain, fake news stories that go viral often benefit from passing through a type of speech chain that Judith Irvine (1989) refers to as “a chain of authentication, a historical sequence by which an expert’s attestation…is relayed to other people.” When the president of the United States retweets a conspiracy theory, this helps authenticate the value of those ideas. Likewise, when prominent senators or representatives defend the president by variously repeating or otherwise giving credence to debunked claims voiced by the president, they further a chain of authentication that lends their symbolic power to the authentication of fake news.

It may be that the difference between an in-group message that stays mostly within a limited public and a viral message that moves more widely is the way the message gets authenticated by key figures, or brokers that hold sway across multiple publics. There’s no doubt the power of the presidency can help perpetuate fake news (just as it could alternatively help squelch unfounded conspiracy theories). But in many ways, the president is but one “influencer” in the larger social media ecosystem. In today’s digital culture, “influencers” (those with large followings on social media) are often more important than recognized experts (scientists, investigative journalists, professional diplomats) in helping to communicate memes and messages. I think this returns to the issue of trust. As Meg emphasized, it’s better to think of trust as “replaced rather than broken.” Influencers have become the new authorities who are trusted to share messages of perceived value and mediate the dissemination of information much like traditional journalists did in the twentieth century (and continue to do within a “split public”).

Andrew Graan:  First, I just want to say thanks to you all for such a rich conversation! Meg, I love the way that you frame the relationship between aesthetics and resonance so as to complicate the discussion on truth. You reminded me of Lauren Berlant’s complementary argument that “all politics is emotional.” Not only do stories that resonate with beliefs and emotions advance their own sort of truth claim, but the “rational” is also an aesthetic with its own affective resonances. And, Adam, among other things, I love how you sketch out a theory of virality in relation to publics in your latest response to Mei-chun’s excellent questions, whereby virality describes the mass recontextualization of memes within but also across distinct digital publics. You all leave my head swimming in appreciation.

I do, though, want to take up Mei-chun’s generative suggestion that there is something particular to contemporary media technologies and digital culture that has created an ecology, so to speak, in which fake news practices and discourses can thrive. This seems undoubtedly true to me, and Meg’s analysis of WhatsApp use in Brazil is brilliant in this regard. The app, its affordances, and histories of use, combined with widespread personal access to smartphones and mobile communication infrastructures, create the conditions for particular sorts of communication, with their own aesthetics, participation norms, temporalities, trajectories, and dynamics of circulation, and so on. One thing that strikes me about this example, though, is just how decentralized such communication networks can be. WhatsApp publics, or those organized through Facebook, Twitter, Instagram, TikTok, etc., are at a far remove from the publics anchored by Walter Cronkite and the nightly news. Simply put, in so many cases, the internet and smartphones have spurred a relative decentralization and pluralization of media production and distribution.

Dominic Boyer summed this phenomenon up nicely in his book, The Life Informatic. Boyer (2013, 127) argues that the expansion of the internet and mobile telephony fundamentally disrupted the forms of “radial messaging” that predominated in the twentieth century, that is, the broadcast model of communication associated with network television and mainstream print journalism. Instead, varieties of “lateral messaging,” that is, of point-to-point, two-way communication, are now ascendant. One result is the proliferation of “micropublics” that mediate not only social relationships but also the production and circulation of media artifacts. This is the domain of personal microblogs, selfies, memes, clickbait, and rants, but also the digital infrastructure through which news, advertisements, entertainment, and punditry now circulate.

This point, I hope, complements Adam’s welcome challenge to “hypodermic needle” accounts of propaganda and his emphasis on the lateral forms of communication and the “intertextual web of participatory behaviors” by which discourse does (or does not) circulate in publics. Boyer’s argument also works to displace an analytic model of top-down, vertical communication linked to a small number of media centers. But, rather than focus on the interpersonal, interdiscursive chains by which texts circulate within and across publics—an important point to be sure—Boyer highlights how contemporary media ecologies are marked by greater diversity, containing numerous nodes, numerous micropublics, that potentially belie the centrality and authority of remaining broadcast media. Boyer is careful to point out that this development is neither necessary nor universal: not all contemporary media ecologies look this way, and internet technologies have resulted in, and could still result in, other social formations. Nonetheless, in so many cases, the sheer multiplicity of digitally mediated communication networks has affected who makes media, how it is distributed, to whom, to what ends, and so on.

For instance, in some cases, the relative democratization of media production technologies and the decentralization of media systems make it easier for people to get involved. As Yarimar Bonilla and Jonathan Rosa (2015) highlighted in their analysis of #Ferguson on Twitter, the affordances of Twitter and camera-equipped smartphones made a new sort of Black, anti-racist activist subject possible, one that could command a Twitter public and at times push back against mainstream media narratives on the Ferguson protests against police violence. Yet, arguably, this same democratization and decentralization of media production technologies is also what conditions fake news practices, which so often mimic the participation norms and media aesthetics of particular publics so as to resonate.

Complicating matters still further, as Mei-chun underscored, is the role of algorithms in deciding how particular contributions to, or interventions in, a digital public circulate. A recent quote from Wael Ghonim, the Google executive who became a leading figure during the 2011 Tahrir Square demonstrations in Cairo, nicely encapsulates all of these different dimensions of contemporary digital publics. While he recognized that Facebook was essential to the Egyptian protests, in hindsight his evaluation of the platform is much more ambivalent. Ghonim told the New York Times, “The system of Facebook is a mobocratic system—if there is a mob of people and they are all organized around liking content, the content will get massive distribution. The editor became dumb software that just optimizes for whatever sells ads.” We should thus also be attuned to how “dumb software” plays a structuring role in digital publics. The result might sometimes be the virality of protest videos or #Ferguson and #MeToo. At other times, the result might be Pizzagate or anti-vax dogma.
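
To make Ghonim’s point concrete, here is a minimal, purely hypothetical sketch (in Python) of what an editor reduced to “dumb software” does: rank posts by engagement signals alone, with no variable anywhere for accuracy. The posts, weights, and names below are invented for illustration; this is not any platform’s actual code.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    comments: int
    is_accurate: bool  # known to us for the example; invisible to the ranker

def engagement_score(post: Post) -> float:
    # Shares and comments are weighted above likes because they generate
    # further impressions; note that accuracy never enters the formula.
    return post.likes + 3 * post.shares + 2 * post.comments

feed = [
    Post("Sober policy explainer", likes=120, shares=4, comments=10, is_accurate=True),
    Post("Outrage-bait conspiracy", likes=90, shares=400, comments=250, is_accurate=False),
]

# The false but heavily shared post ranks first: a "mob" organized
# around liking content gets massive distribution.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):>6.0f}  {post.text}")
```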

At any rate, the deeper we contemplate the problem-space of fake news, to echo Meg, the more that “fake news” brings other issues and problems into relief. And while digital technologies may foster increasingly heterogeneous publics with new avenues for popular participation, including both viral activism and fake news, on another level, as Shoshana Zuboff warns in The Age of Surveillance Capitalism (2019), one net effect of the expansion of platform-based communication is an increase in fundamentally undemocratic forms of social control and the massive commodification of human behavior. This too is part of the fake news iceberg worth considering.

“It’s not good to drive with a cell phone in hand!” by Genildo. http://www.genildo.com/2019/03/barbeiro-de-baixa-patente.html

Meg Stalcup:  My reading list is growing enormously, alongside a list of new ideas to pursue! I want to pick up here on some of the key points both Andrew and Adam raise, related to norms of behavior in what Mei-chun calls digital cultures and tech infrastructures.

Andrew observes that contemporary media ecologies are diverse, and also profoundly capitalistic. It follows that there will be distinct differences within a given country and between countries. That is exactly what large national surveys find, such as the Reuters Institute’s annual report on how people get their digital news. The newer tech infrastructures we tend to be interested in are part of a complex ecology in which “top down” TV and radio remain very important. They interact with the internet, amplifying what happens there, and the reverse happens too, which speaks to Mei-chun’s point about sharing and remix. It is worth saying that these practices happen alongside ones that are less interactive and more typical of older media, although “passive” use is hard to measure (Trifiro and Gerson 2019). If you look at the people who post or comment on Facebook, Twitter, in WhatsApp groups, or on TwitchTV, relative to the number who are simply there hanging out, a lot are lurkers. People are accustomed to “superusers” putting out most of the content. Some share often, but most only infrequently; they engage the medium as spectators, despite its capacity for interaction. To understand what else the quiet majority are doing with the information they get (false and true), we need to be there with them, observing and asking questions. Obviously folks are on social media, reading, liking and disliking, forwarding, having conversations, sometimes creating, but in the midst of widespread, more passive involvement. As Andy said, fake news practices often mimic the participation norms and media aesthetics of particular publics, and understanding one will help us understand the other. We also have to go beyond the metrics the platforms themselves give us or facilitate, since a better accounting of what constitutes participation will include more than what we can see of it online.

Fake news as false information is of course just one of a larger repertoire of ways to manipulate communication. It is one form of propaganda among others, to echo Adam, and one that is partly computational (using bots, botnets, and the like). The people behind the propaganda are not counting on “organic” virality. Instead, someone enacts a campaign, or rather, multiple, often competing entities do, from black ops PR firms with the automated resources to flood networks to smaller, more hybrid content makers and distributors. Since social media networks are relatively decentralized, they are co-opted by volume, a deluge of memes or frenetic likes by bots, rather than by a single injection. I don’t have a good sense, though, of how much virality can be “forced.” I think when fake news is very successful, it’s usually a combination of computational propaganda, genuine uptake by individuals, and mainstream media attention. In any case, Adam’s question of what makes a message worthy of sharing is one that’s dear to me, and it seems worth asking in specific circumstances to see when and how this may differ.

A computational social science study found that people are more likely to share untrue things than true ones on Twitter (Vosoughi et al. 2018). False news spread “farther, faster, deeper, and more broadly” in every category of information the authors examined (terrorism, natural disasters, science, urban legends, and finance), and the effect was more pronounced for political news. Why that’s the case is partly what we’ve been discussing, and I love this concept Adam brings in of a “chain of authentication” and its link to charismatic authority. Another factor often mentioned is how well a story dovetails with one’s view of the world. Both seem like useful categories for inquiry into empirical circumstances. And, regardless of whether one trusts the sender or believes something is true, it turns out that fake news also tends to say something “new” (according to that same Vosoughi et al. 2018 study), and people like sharing new things. Drawing raw materials from the real world but untethered from actual events, fake news has a lot of freedom to be novel. It can offer a surprising narrative that nonetheless confirms one’s worldview, which may make it very appealing to share.
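
Since “deeper” and “more broadly” have precise meanings in that study’s cascade analysis, a small sketch may help. Given invented who-reshared-from-whom records, the snippet below computes one common operationalization of a cascade’s depth (longest reshare chain), breadth (widest single generation), and size; it illustrates the metrics, not the authors’ code.

```python
from collections import defaultdict, deque

# Invented (resharer, source) pairs; "origin" posted the story.
reshares = [("a", "origin"), ("b", "origin"), ("c", "a"),
            ("d", "a"), ("e", "c"), ("f", "c")]

children = defaultdict(list)
for child, parent in reshares:
    children[parent].append(child)

def cascade_stats(root):
    depth_counts = defaultdict(int)  # depth -> users reached at that depth
    queue = deque([(root, 0)])
    while queue:                     # breadth-first walk of the reshare tree
        node, depth = queue.popleft()
        depth_counts[depth] += 1
        for child in children[node]:
            queue.append((child, depth + 1))
    depth = max(depth_counts)             # longest chain of reshares
    breadth = max(depth_counts.values())  # widest single "generation"
    size = sum(depth_counts.values())     # total users in the cascade
    return depth, breadth, size

print(cascade_stats("origin"))  # (3, 2, 7)
```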

Gabriella Coleman (2013, 2014) has written a lot about that subset of internet creators who are hackers, including those involved with Anonymous. After working on how lulz and successful tricks motivated them, she looked at why a privileged group of actors, with the skills to access social and economic advantages, is also disproportionately politically active. They build tools, become popular media advocates, and engage in policymaking. She ends up identifying a shared commitment to preserving autonomy, despite different political views on what social change is needed and on how it should happen. She notes as well—and this is pertinent to our question of tech infrastructures and their affordances—that their common political tools, willingness to work across political lines, and tendency toward risky direct action “emerge from the concrete experiences of their craft” (Coleman 2017, S100).

Whitney Phillips, Jessica Beyer, and Coleman together point out (2017) that trolling has been used to mean a number of quite different things: harmless pranking, less harmless troublemaking, and merciless abuse. Trolling can express bigotries, or be an attempt to counter them. Phillips, Beyer, and Coleman are concerned that this heterogeneity of action and intention gets elided. One of their most important insights, also made by researchers at the Berkman Klein Center (Benkler, Faris, and Roberts 2018), is that fake news does not necessarily convince people, but to a disturbing degree it can succeed in setting the agenda. It’s an attention hijacker.

To relate the “tech cultures” and “tech infrastructures” parts of Mei-chun’s question, it’s clear that different platforms feel different. Twitter lends itself to both quips and pile-on attacks, for example. Platforms’ technical features and affordances are, though, more than just levers pulled by actors. Case studies on how fake news is interpreted, deployed, or reined in show interactive and mutual development. Take the example of message forwarding in WhatsApp. After fatal mob attacks in India were attributed to fake news stories about child traffickers, in July 2018 Facebook limited the number of chats to which a message could be forwarded there from 256 to 5, while globally it reduced the limit to 20. After a six-month testing period, it made five chats the limit everywhere. Researchers I’ve spoken with in Brazil found that this slows the spread of fake news, although not necessarily its reach (Melo et al. 2019). Administrators running the right-wing groups in which I lurk were very attuned to these changes. Messages urging support for or militating against something come through quite often. These are partisan but not necessarily false. They might be memes saying “we stand with” one politician or another, over a patriotic image, or “Let’s tell them we oppose” an upcoming Supreme Court decision. Very quickly after the forwarding limit was imposed, these messages appeared with comments such as “everyone do their part and forward to five people. That’s the limit, and we—the law-abiding good citizens—respect the law, so since we need this message to get out, send it to five of your groups or contacts.”
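
The logic of the cap is easy to see in a toy diffusion model. The sketch below is purely illustrative, with made-up parameters: each recipient forwards the message to at most `cap` chats with some probability. A lower cap shrinks each hop, which slows spread; as Melo et al. (2019) found, it does not necessarily limit eventual reach, since determined users can keep forwarding.

```python
import random

def simulate(cap, hops=6, forward_prob=0.5, contacts=40, seed=1):
    """Toy model: how many users first receive the message at each hop."""
    rng = random.Random(seed)
    newly_reached = 1  # the original sender
    reached_per_hop = [newly_reached]
    for _ in range(hops):
        forwards = 0
        for _ in range(newly_reached):
            if rng.random() < forward_prob:
                # Each forwarder reaches at most `cap` chats (duplicate
                # recipients and group sizes are ignored in this toy version).
                forwards += min(cap, contacts)
        newly_reached = forwards
        reached_per_hop.append(newly_reached)
    return reached_per_hop

print("cap=256:", simulate(256))  # per-hop growth explodes quickly
print("cap=5:  ", simulate(5))    # the same network spreads far more slowly
```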

I have also seen people making “fake news” with algorithms in mind. In the example I gave before about the Zika rumors, you have someone, somewhere, making up a fake audio message that namechecks places and actors so that recipients will then do a search that lands them on a monetized page or video on a YouTube channel. The creator is crafting the “news” item and filling in the metadata to favor a given, algorithmically determined search result. This is what you’d expect from the work of Nick Seaver (2017). He says in an interview that, from an anthropological perspective, algorithms work less as a predictable recipe or sequence of well-defined operations, which is what computer programmers might offer up when asked for a definition, and more as “collections of human practices.” Algorithms are multiple—people enact different versions of them. The people making and distributing fake news don’t have to be programmers to game social media algorithms. They just employ what Amelia Acker and Joan Donovan (2019) call “data craft”—exploiting metrics, metadata, and recommendation engines to attract audiences to disinformation campaigns.
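
As a concrete illustration of that kind of gaming, consider a toy search ranker that scores items by how many query terms appear in their metadata tags. Everything below (titles, tags, query) is hypothetical, but it shows why stuffing metadata with the place names and actors a rumor namechecks wins exactly the searches that the rumor prompts.

```python
def score(query, metadata_tags):
    # Rank purely by overlap between query terms and an item's tags.
    return len(set(query.lower().split()) & metadata_tags)

videos = {
    "local health briefing": {"health", "briefing", "vaccination"},
    # Disinformation video with tags crafted to match the rumor:
    "monetized rumor video": {"zika", "outbreak", "recife", "hospital", "health"},
}

query = "zika outbreak recife hospital"
for title in sorted(videos, key=lambda t: score(query, videos[t]), reverse=True):
    print(score(query, videos[title]), title)
# The crafted item tops the results for the search the fake audio
# prompts listeners to perform.
```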

Andrew Graan is a Lecturer in Social and Cultural Anthropology at the University of Helsinki. A cultural and linguistic anthropologist, his research examines the politics of public spheres in North Macedonia. He earned his PhD in anthropology from the University of Chicago in 2010. His current project, “Brand Nationalism: Neoliberal Statecraft and the Politics of Nation Branding in Macedonia,” examines how the coordinated efforts to regulate public communication that are found in nation branding projects constitute a wider program of economic and social governance. 


Adam Hodges is a linguistic anthropologist and adjunct assistant professor at the University of Colorado Boulder. His books include When Words Trump Politics: Resisting a Hostile Regime of Language (2019, Stanford University Press) and The ‘War on Terror’ Narrative (2011, Oxford University Press). His articles have appeared in the American Anthropologist, Discourse & Society, Language & Communication, Language in Society, and the Journal of Linguistic Anthropology.


Meg Stalcup is Assistant Professor of Anthropology at the University of Ottawa, where she does research and teaches on media and visual anthropology, science and technology studies, ethics, and methods. Her current project, “Sensing Truth: The Aesthetic Politics of Information in Digital Brazil” looks at institutional and epistemological aspects of media in four cases: health, politics, environment, and security. Previous work has been published in Anthropological Theory, Visual Anthropology Review, and Theoretical Criminology, among other places.

Works Cited

Acker, Amelia, and Joan Donovan. 2019. “Data Craft: A Theory/Methods Package for Critical Internet Studies.” Information, Communication & Society 22 (11): 1590–1609. https://doi.org/10.1080/1369118X.2019.1645194.

Benkler, Yochai, Robert Faris, and Hal Roberts. 2018. Network Propaganda: Manipulation, Disinformation, and Radicalization in American Politics. Oxford: Oxford University Press.

Berlant, Lauren. 2016. “Trump, or Political Emotions.” The New Inquiry (blog). August 5, 2016. https://thenewinquiry.com/trump-or-political-emotions/.

Bonilla, Yarimar, and Jonathan Rosa. 2015. “#Ferguson: Digital Protest, Hashtag Ethnography, and the Racial Politics of Social Media in the United States.” American Ethnologist 42 (1): 4–17.

Boyer, Dominic. 2013. The Life Informatic: Newsmaking in the Digital Age. Ithaca; London: Cornell University Press.

Coleman, Gabriella. 2013. Coding Freedom: The Ethics and Aesthetics of Hacking. Princeton: Princeton University Press.

—. 2014. Hacker, Hoaxer, Whistleblower, Spy: The Many Faces of Anonymous. London; New York: Verso.

—. 2017. “From Internet Farming to Weapons of the Geek.” Current Anthropology 58 (S15): S91–S102.

Gal, Susan. 2006. “Linguistic Anthropology.” In Encyclopedia of Language & Linguistics (Second Edition), edited by Keith Brown, 171–85. Oxford: Elsevier.

Hodges, Adam. 2018. “A Theory of Propaganda for the Social Media Age.” Anthropology News 59 (2): e149–52. https://doi.org/10.1111/AN.823.

Irvine, Judith T. 1989. “When Talk Isn’t Cheap: Language and Political Economy.” American Ethnologist 16 (2): 248–67. https://doi.org/10.1525/ae.1989.16.2.02a00040.

Melo, Philipe de Freitas, Carolina Coimbra Vieira, Kiran Garimella, Pedro O. S. Vaz de Melo, and Fabrício Benevenuto. 2019. “Can WhatsApp Counter Misinformation by Limiting Message Forwarding?” In International Conference on Complex Networks and Their Applications, 372–84. Springer.

Oddo, John. 2018. The Discourse of Propaganda: Case Studies from the Persian Gulf War and the War on Terror. University Park: Pennsylvania State University Press.

Phillips, Whitney, Jessica Beyer, and Gabriella Coleman. 2017. “Trolling Scholars Debunk the Idea That the Alt-Right’s Shitposters Have Magic Powers.” Vice (blog). March 22, 2017. https://www.vice.com/en_us/article/z4k549/trolling-scholars-debunk-the-idea-that-the-alt-rights-trolls-have-magic-powers.

Seaver, Nick. 2017. “Algorithms as Culture: Some Tactics for the Ethnography of Algorithmic Systems.” Big Data & Society 4 (2): 1–12. https://doi.org/10.1177/2053951717738104.

Spitulnik, Debra. 1996. “The Social Circulation of Media Discourse and the Mediation of Communities.” Journal of Linguistic Anthropology 6 (2): 161–87. https://doi.org/10.1525/jlin.1996.6.2.161.

Trifiro, Briana M., and Jennifer Gerson. 2019. “Social Media Usage Patterns: Research Note Regarding the Lack of Universal Validated Measures for Active and Passive Use.” Social Media + Society 5 (2): 1–4. https://doi.org/10.1177/2056305119848743.

Urban, Greg. 1996. “Entextualization, Replication, and Power.” In Natural Histories of Discourse, edited by Michael Silverstein and Greg Urban, 21–44. Chicago: The University of Chicago Press.

Vosoughi, Soroush, Deb Roy, and Sinan Aral. 2018. “The Spread of True and False News Online.” Science 359 (6380): 1146–51. https://doi.org/10.1126/science.aap9559.

Zuboff, Shoshana. 2019. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. New York: Hachette Book Group.
