
Platform infrastructures and the spread of conspiracy theories

After the announcement in October 2022 that the microblogging platform Twitter had been acquired by Elon Musk, events both within the company and in the platform’s digital space began to spiral. Since then, many experts have been concerned that Twitter could become a communication platform on which disinformation, hate speech, and conspiracy theories take up more space than they have in the past. And in fact, initial ad hoc analyses of Twitter content suggest that the amount of hate speech, anti-vaccination content, and climate change disinformation has increased since Musk took over the service. This has been attributed in part to changes in the company’s organizational structure, the platform’s technical features, and the specific culture propagated by the platform’s new owner. On the one hand, there is some indication that the company’s content moderation efforts have been significantly scaled back, justified by appeals to the freedom of speech this would supposedly ensure.

At the same time, a feature was introduced, at least temporarily, that made the verification of a user profile dependent solely on the payment of a fee, which immediately led to a large number of accounts spreading disinformation under the seal of verification.

Finally, the question of what kind of user culture might develop on the platform in the future cannot be considered independently of the norms and values exemplified by the new owner. Musk’s own references to disinformation and conspiracy theories can be read as signaling that the freedom of expression he propagates extends to such content as well.

Context Dependence of the Spread of Conspiracy Theories

The recent example of Elon Musk’s Twitter takeover and its immediate consequences for the structures and content of public communication on the platform shows two things:

First, the term conspiracy theory is used in everyday language and in public and political debate in a way that does not necessarily have much in common with the scientific understanding of conspiracy theories. In everyday use, the term often serves to condemn a specific, and quite well-known, group of actors or individual actors as so-called conspirators, or to pursue political and other strategic goals, such as undermining the trustworthiness of politicians or of content from which one wishes to distance oneself. In these cases, the accusation of conspiracy is used as a particular kind of framing1 to promote “alternative” explanations. In the scientific context, and in the NEOVEX project, we instead follow an understanding of conspiracy theories as proposed explanations of historical, current, or predicted events in which the actions or processes in question are assumed to be intentionally driven and controlled by a relatively small, powerful group of conspirators acting in secret to implement hidden intentions: to act for their own benefit and against the good of society and its citizens, or against specific innocent victims.2 Understood in this way, conspiracy theories are not simply “alternative” explanations.

Second, the example vividly illustrates how context-dependent the spread of conspiracy theories is. On the one hand, the general design and architecture of platforms are relevant: through the way they regulate access to the platform and the distribution of content, and through the way they structure that content, they essentially determine which form of communication infrastructure is established.3 Closely related to this are questions of platform governance, but also the practices of platform-specific user cultures and the values and manners lived out by their users. Together, these aspects give rise to platform-specific affordances, that is, the possibilities a platform offers particular actors for particular goals in particular contexts, which go beyond technical functionality and emerge only from the interplay of technology, lived practices, and user perceptions.

Platform infrastructures and affordances

For example, on discussion platforms such as 4chan, users can start and participate in discussions without registering and can distribute text and images anonymously.4 Similarly, Reddit offers anonymity because no user account is required to access most content, which can be found through a general search or by selecting specific subreddits. No personal information is required for registration, allowing multiple, unverifiable personas by the same person.5 In this way, platform features allow open access to different forms of content, while the linking of that content to specific actors is prevented by platform policies and features that provide a high degree of anonymity.

Microblogging platforms such as Twitter (so far) and social networking platforms such as Facebook likewise provide users with an infrastructure for distributing content, whether to platform-specific sub-publics (Facebook) or even to a potentially global public (Twitter). However, these platforms entail more persistent identities for those who post, as they require some form of registration and self-presentation via usernames or descriptions that lead to identifiable personas6, even though the range between being identifiable as an authentic person and remaining pseudonymous is wide, depending on the concrete requirements of the platform and on users’ privacy settings.

It is precisely at this point that Twitter’s concrete change to its verification feature, under which the authenticity of account holders was no longer the decisive criterion for assigning verification status, has a direct impact on user behavior and on the overall culture of the platform.

It is not as if conspiracy theories were not being spread on Twitter before the recent changes. For example, Mahl and colleagues show that conspiracy theories such as the Agenda 21 theory, which assumes that the UN is planning to wipe out 90% of the world’s population, are discussed on the platform.7 Similarly, during the COVID-19 pandemic, numerous conspiracy theories were discussed on Twitter, such as the changing narrative around the rollout of the 5G wireless standard. For Facebook, Bruns and colleagues show how this narrative evolved from the general idea that 5G was bad for one’s health to a conspiracy theory claiming that 5G technologies had been used extensively in Wuhan for surveillance purposes and that the COVID-19 pandemic was merely a cover to hide the consequences of the 5G testing there.8 This example also shows how conspiracy narratives can build on and extend existing prejudices, and how these narratives, spread by alternative online news outlets, prominent politicians, or influencers, can reach a wider user base and, as in the case of calls to destroy 5G masts, have very real consequences.

With regard to Twitter, Mahl and colleagues also point out that two general types or groups of users can so far be distinguished in the discourse surrounding conspiracy theories on the platform: on the one hand, users who participate in narrating and disseminating conspiracy theories, and on the other, a group of users who use hashtags related to conspiracy theories but whose content explicitly opposes the respective narratives or at least does not express support for them.9 Although Mahl and colleagues note that debunking efforts on the platform tend to originate from scientific, medical, or journalistic circles, and that these contributions are usually not adopted or disseminated by the “conspiracy believers”, at least here narration and counter-narration took place on one and the same platform. However, based on the retweeting behavior between the two groups, the authors conclude that actual cross-group communication hardly takes place. In contrast, image and discussion boards such as 4chan or Reddit create group-based, closed discussion spaces from the outset, whose members demonstrate their belonging to a specific community by using insider acronyms and slang that are difficult to recognize and understand from the outside10, and in which “outsiders” who do not share the subcultural milieu and its speech are excluded or rejected.

Conclusion

The characteristics of technical infrastructures, the governance of platforms, and the practices and values lived within platform-specific user cultures collectively influence which usage practices, communication styles, and content are more likely to become established on platforms.

Changes to platform structures that make accounts more difficult to identify or verify, together with governance decisions such as the reactivation of channels previously blocked for spreading disinformation, hate, and incitement, such as Donald Trump’s, or of other channels that have clearly spread far-right, anti-Semitic, or conspiracist ideas, do not suggest a promising future.

This blog post was originally published in German on the NEOVEX website by Annett Heft in December 2022.


  1. Entman, R. M. (1993). Framing: Toward Clarification of a Fractured Paradigm. Journal of Communication, 43(4), 51–58. https://doi.org/10.1111/j.1460-2466.1993.tb01304.x
  2. Baden, C., & Sharon, T. (2021). Blinded by the lies? Toward an integrated definition of conspiracy theories. Communication Theory, 31(1), 82–106. https://doi.org/10.1093/ct/qtaa023
  3. Bossetta, M. (2019). The Digital Architectures of Social Media: Platforms and Participation in Contemporary Politics. University of Copenhagen, Faculty of Social Sciences.
  4. Tuters, M., Jokubauskaitė, E., & Bach, D. (2018). Post-Truth Protest: How 4chan Cooked Up the Pizzagate Bullshit. M/C Journal, 21(3). https://doi.org/10.5204/mcj.1422; Frischlich, L., Schatto-Eckrodt, T., & Völker, J. (2022). Rückzug in die Schatten? Die Verlagerung digitaler Foren zwischen Fringe Communities und „Dark Social“ und ihre Implikationen für die Extremismusprävention (Kurzgutachten Nr. 4; pp. 1–38). CoRE-NRW connecting Research on Extremism in North Rhine-Westphalia.
  5. Prakasam, N., & Huxtable-Thomas, L. (2021). Reddit: Affordances as an Enabler for Shifting Loyalties. Information Systems Frontiers, 23(3), 723–751. https://doi.org/10.1007/s10796-020-10002-x
  6. Jasser, G., McSwiney, J., Pertwee, E., & Zannettou, S. (2021). ‘Welcome to #GabFam’: Far-right virtual community on Gab. New Media & Society, Online First. https://doi.org/10.1177/14614448211024546; Frischlich, L., Schatto-Eckrodt, T., & Völker, J. (2022). Rückzug in die Schatten? Die Verlagerung digitaler Foren zwischen Fringe Communities und „Dark Social“ und ihre Implikationen für die Extremismusprävention (Kurzgutachten Nr. 4; pp. 1–38). CoRE-NRW connecting Research on Extremism in North Rhine-Westphalia.
  7. Mahl, D., Zeng, J., & Schäfer, M. S. (2021). From “Nasa Lies” to “Reptilian Eyes”: Mapping Communication About 10 Conspiracy Theories, Their Communities, and Main Propagators on Twitter. Social Media + Society, 7(2), 1–12. https://doi.org/10.1177/20563051211017482
  8. Bruns, A., Harrington, S., & Hurcombe, E. (2020). ‘Corona? 5G? or both?’: The dynamics of COVID-19/5G conspiracy theories on Facebook. Media International Australia, 177(1), 12–29. https://doi.org/10.1177/1329878X20946113
  9. Mahl, D., Zeng, J., & Schäfer, M. S. (2021). From “Nasa Lies” to “Reptilian Eyes”: Mapping Communication About 10 Conspiracy Theories, Their Communities, and Main Propagators on Twitter. Social Media + Society, 7(2), 1–12. https://doi.org/10.1177/20563051211017482
  10. Nissenbaum, A., & Shifman, L. (2017). Internet memes as contested cultural capital: The case of 4chan’s /b/ board. New Media & Society, 19(4), 483–501. https://doi.org/10.1177/1461444815609313; Frischlich, L., Schatto-Eckrodt, T., & Völker, J. (2022). Rückzug in die Schatten? Die Verlagerung digitaler Foren zwischen Fringe Communities und „Dark Social“ und ihre Implikationen für die Extremismusprävention (Kurzgutachten Nr. 4; pp. 1–38). CoRE-NRW connecting Research on Extremism in North Rhine-Westphalia.