
The Threat of Algorithmic Populism: Intelligence Strategies for Safeguarding Democracy

Rădulescu, Bogdan-George (2025), The Threat of Algorithmic Populism: Intelligence Strategies for Safeguarding Democracy, Intelligence Info, 4:1, DOI: 10.58679/II24680, https://www.intelligenceinfo.org/the-threat-of-algorithmic-populism-intelligence-strategies-for-safeguarding-democracy/

 

Abstract

The digital age has facilitated unprecedented transformations in communication, reshaping the way political discourse and influence manifest in democratic societies. Among these transformations, algorithmic populism emerges as a significant phenomenon. This concept, defined by the interaction between political actors, online activism, and algorithm-driven amplification, highlights how extremist or populist political messages gain traction in the digital ecosystem. By analysing this dynamic through the lens of intelligence studies, we have tried to identify the risks posed by algorithmic populism to democratic systems and propose strategies to mitigate its impact. The need for intelligence agencies to address these risks is clear, as they represent not only political but also existential threats to the democratic order.

Keywords: threat, algorithmic populism, intelligence strategies, democracy, moral panic, disinformation


 

INTELLIGENCE INFO, Volume 4, Number 1, March 2025, pp. xx
ISSN 2821-8159, ISSN-L 2821-8159, DOI: 10.58679/II24680
URL: https://www.intelligenceinfo.org/the-threat-of-algorithmic-populism-intelligence-strategies-for-safeguarding-democracy/
© 2025 Bogdan-George RĂDULESCU. Responsibility for the content, interpretations, and opinions expressed rests exclusively with the authors.

 

The Threat of Algorithmic Populism: Intelligence Strategies for Safeguarding Democracy

Bogdan-George RĂDULESCU[1]

georgebogdan32@gmail.com

[1] PhD in Communication Science, Babeș-Bolyai University, MA in Global Security, Coventry University

 

Algorithms Are Not Ideologically Neutral

Algorithms are not ideologically neutral. They prioritize content that maximizes engagement—often sensational, polarizing, or emotionally charged. According to Maly (2018), this logic fosters a "digitally mediated communicative relationship" between human and non-human actors, including algorithms, which reorganize public discourse to favour populist messages. In traditional democratic systems, legitimacy stems from institutional debates, electoral transparency, and ethical political competition. Algorithmic populism replaces these norms with pseudo-plebiscites, where online popularity—measured in likes, shares, and comments—serves as a false proxy for public support. As Maly explains, this dynamic blurs the line between genuine political engagement and performative metrics of digital relevance.
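To make this engagement logic concrete, the following minimal Python sketch shows the kind of scoring rule that engagement-optimized feeds are widely described as applying. All weights, field names, and the outrage_score signal are hypothetical illustrations, not any platform's actual ranking formula.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    comments: int
    outrage_score: float  # hypothetical 0-1 emotional-arousal signal

def engagement_score(post: Post) -> float:
    """Toy ranking rule: reward reactions that generate further impressions.

    Shares and comments are weighted above likes because they propagate the
    post to new audiences; the arousal signal acts as a multiplier, so a
    polarizing post outranks a calm one with comparable raw engagement.
    """
    base = post.likes + 3 * post.shares + 2 * post.comments
    return base * (1.0 + post.outrage_score)

feed = [
    Post("Measured policy analysis", 120, 5, 10, 0.1),
    Post("THEY are destroying our country!", 80, 40, 60, 0.9),
]
for post in sorted(feed, key=engagement_score, reverse=True):
    print(round(engagement_score(post), 1), post.text)
# The emotionally charged post ranks first despite having fewer likes.
```

Under such a rule, no editor ever chooses the polarizing post; the arithmetic does, which is precisely the sense in which the ranking is not ideologically neutral.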

The digital media age has facilitated, among other things, the emergence of a phenomenon that transcends the strictly technological realm of artificial intelligence and intersects with political dynamics. One of the most discussed topics in recent years in political science, directly connected to the phenomenon of fake news, is the emergence and manifestation of populism. A particularly interesting study, one that raises many challenges for serious analysis in both political science and communication studies, is that of Ico Maly, professor at Tilburg University and editor-in-chief of Diggit Magazine. Maly demonstrates that the public prominence gained by the initially marginal discourse of populist politicians with extremist agendas owes less to their oratorical talent or ideational production than to the power of the online activism supporting them, including trolls and automated systems (bots) that reproduce, redistribute, and exponentially amplify their public discourse. This is what Maly calls "algorithmic populism" (Maly, 2018): "The possibilities offered by digital media and Web 2.0 lead us to a better understanding of populism as a communicative relationship mediated digitally between different human and non-human algorithmic actors" (Maly, 2018). The starting point of Maly's analysis of the relationship between the algorithmic mechanisms of digital communication and the rise of populist political forms threatening the liberal societies of the West is that digital media do not merely serve as intermediaries between a classical sender and a classical receiver, as one might simplistically imagine the process. The nonlinear character of digital media and social networks "remodels and reorganizes the communicative structure of the initial discourse" (Maly, 2018).

The algorithms that underpin the technological functioning of digital media are not ideologically neutral, but are based on a certain social, political, and economic conception that justifies their existence in the informational marketplace. These algorithms were designed with "neoliberal principles behind them, which translate into the valorisation of hierarchy, competition, and a 'winning' mentality" (Van Dijck, 2013, p. 21). These algorithms favour online competition between dominant political narratives vying to impose their status as the reference system for explaining social realities.

Algorithms, by their cybernetic nature, tend to equate the digital audience, or the number of subscribers on a platform, with popularity. Van Dijck explains in vivid terms this "royal principle of popularity" (Maly, 2018), which populist leaders and movements use to legitimize their political offensive by invoking their online audience: "The more contacts you have and the more you make, the more valuable you become because more people think you're popular and, therefore, want to connect with you" (Van Dijck, 2013, p. 13).
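This "royal principle of popularity" can be illustrated with a toy preferential-attachment simulation, a deliberately simplified sketch rather than a model of any real platform: if the chance of gaining a new follower is proportional to the followers an account already has, early advantages persist instead of washing out.

```python
import random

random.seed(42)

# Five hypothetical accounts with nearly identical starting audiences.
followers = [10, 10, 10, 11, 10]

# Each round a new user follows one account, chosen with probability
# proportional to its current follower count (popularity begets popularity).
for _ in range(10_000):
    winner = random.choices(range(len(followers)), weights=followers)[0]
    followers[winner] += 1

print(followers)  # small initial differences are amplified, not averaged away
```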

Algorithmic Populism and Moral Panic

Moral panic, a collective societal reaction to perceived threats to cultural or social norms, finds a potent ally in algorithmic populism. Online platforms exacerbate this panic by creating echo chambers where exaggerated or false narratives gain traction. For instance, fear-mongering about fake news itself, framed as a ubiquitous and uncontrollable threat, can undermine trust not only in journalistic institutions but also in democratic processes. This destabilizing effect reinforces a cycle in which the public becomes more susceptible to simplistic, populist solutions, often advanced by demagogues who exploit this distrust. Fake news induces a widespread informational moral panic across various social segments. Matt Carlson's theory linking the psychological effects triggered by fake news campaigns to the concept of "moral panic", first theorized by sociologist Stanley Cohen in his study Folk Devils and Moral Panics (Cohen, 2011), is particularly relevant in this context.

According to Carlson, fake news represents a threat to societal cohesion and effective community functioning because the moral panic it engenders through alarming and sensationalist media discourse fosters public anxiety. This phenomenon, Carlson argues, leads to a sharp decline in moral and civic standards. As he notes, "the framework of moral panic is inseparable from the context of mass society, where media occupies a central space in the production of meaning" (Carlson, 2018). Building on Cohen's sociological insights, communication scholars have highlighted the impact of moral panic messaging within the framework of emerging communication technologies.

From this perspective, Carlson explores fake news as a producer of "informational moral panic", a term he uses to denote "a specific set of fears regarding the erosion of public communication by digital media". His study seeks to encourage "a closer examination of the symbolic dimension of fake news" and to analyse "the discourses that construct the implantation of fake news in public consciousness" (Carlson, 2018). This approach emphasizes the necessity of understanding fake news not only as a vehicle of misinformation but as a powerful mechanism for shaping societal anxieties and undermining democratic communication norms.

The interplay between algorithmic populism and moral panic is increasingly visible in the landscape of digital communication, where the structural characteristics of online platforms profoundly reshape public perception and political dynamics. The findings of Nielsen and Graves (2017) underscore the blurring of lines between authentic and false editorial content in the digital age. This phenomenon can be critically analysed through the lens of cognitive biases, the architecture of algorithm-driven media, and the socio-political effects of eroded epistemic trust.

Algorithmic populism thrives on the ability of digital platforms to flatten hierarchies of credibility, treating all content — irrespective of its source or validity — as equally consumable. The algorithms that curate online content prioritize engagement metrics such as likes, shares, and time spent on site, often at the expense of quality and accuracy. This "attention economy" creates a fertile ground for populist narratives that exploit emotional resonance, amplifying moral panic by portraying societal crises or cultural threats as existential emergencies. In this environment, fake news is no longer perceived as an outlier but becomes integrated into a broader spectrum of "information noise," as highlighted by the Reuters Institute's findings. For many users, distinguishing between genuine and fabricated stories becomes secondary to consuming content that aligns with their pre-existing beliefs, a cognitive shortcut reinforced by confirmation bias.
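The confirmation-bias shortcut described above can be sketched as a short simulation. The recommender below is a hypothetical stand-in for engagement prediction: it favours content that is both aligned with a user's current position and emotionally extreme, and the user's position then drifts toward what is consumed. The parameters are assumptions chosen only to make the mechanism visible.

```python
import random

random.seed(7)

def recommend(belief: float, pool: list[float], k: int = 3) -> list[float]:
    """Toy engagement predictor: aligned AND extreme items score highest."""
    def score(stance: float) -> float:
        alignment = 1.0 if stance * belief >= 0 else 0.2  # out-group penalty
        return alignment * abs(stance)                    # extremity bonus
    return sorted(pool, key=score, reverse=True)[:k]

belief = 0.1                                        # a nearly neutral user
pool = [random.uniform(-1, 1) for _ in range(200)]  # stances on a -1..+1 axis

for _ in range(20):
    shown = recommend(belief, pool)
    # Confirmation bias: belief drifts toward the mean of consumed content.
    belief += 0.3 * (sum(shown) / len(shown) - belief)

print(round(belief, 2))  # a mild initial lean ends up near the extreme
```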

The inability of audiences to discern between truthful and deceptive content reflects a deeper erosion of epistemic trust — the collective confidence in shared systems of knowledge production. This erosion has profound political consequences, as it delegitimizes established institutions, from the press to political leadership, and leaves a vacuum often filled by populist actors. Such actors, including algorithmically amplified influencers, redefine the boundaries of moral discourse, presenting themselves as the arbiters of truth while decrying traditional epistemic authorities as corrupt or elitist.

To address the destabilizing effects of algorithmic populism and moral panic, policy interventions must focus on enhancing media literacy and algorithmic transparency. Public campaigns should aim to cultivate critical thinking skills that empower individuals to navigate the digital information ecosystem more effectively. Simultaneously, regulatory measures should require platform operators to disclose the mechanisms that prioritize certain types of content, thus mitigating the unintentional amplification of harmful narratives.

The Spider’s Web Architecture of the Disinformation Ecosystem

Like a spider’s web, these informational ecosystems are not only intricate but deliberately constructed to ensnare and manipulate unsuspecting targets. The metaphor highlights the strategic nature of these systems: each thread represents a node in the network—be it fake news outlets, social media accounts, bots, or state-sponsored troll farms—all interconnected to spread misinformation seamlessly. This imagery also emphasizes the web’s deceptive qualities. At first glance, it may seem innocuous or even invisible, but it is engineered for maximum entrapment, exploiting human psychology and technological vulnerabilities. Furthermore, the architecture adapts and rebuilds when disrupted, making it resilient and challenging to dismantle.
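A small graph experiment makes the resilience claim tangible. The sketch below, using the networkx library under simplifying assumptions (random growth, random takedowns), shows why removing individual accounts rarely breaks such a web: scale-free networks remain largely connected when random nodes are deleted, and only hub-targeted removal does real damage.

```python
import random
import networkx as nx

random.seed(1)

# Toy disinformation web: a scale-free graph in which a few hubs
# (troll farms, large fake outlets) connect many small amplifiers.
G = nx.barabasi_albert_graph(n=500, m=2)

# Simulate a takedown wave removing 20% of accounts at random.
G.remove_nodes_from(random.sample(list(G.nodes), k=100))

# The largest connected component typically survives almost intact.
largest = max(nx.connected_components(G), key=len)
print(f"{len(largest)} of {G.number_of_nodes()} surviving nodes remain connected")
```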

When analysing the phenomenon of algorithmic populism and the aggressive use of online propaganda—characterized by algorithms and troll farms—it becomes evident that such strategies are predominantly associated with extremist parties within the Western political landscape. Algorithmic populism and extremist propaganda are deeply intertwined, exploiting the reliance of digital-native generations on online platforms for news and information.

However, to understand the broader architecture of this disinformation ecosystem, it is essential to trace its origins to the "headquarters" of digital disinformation warfare. These operations are often orchestrated by state actors employing information terrorism techniques aimed at generating moral panic and undermining societal cohesion. Notable among these actors are Russia, China, and Iran, whose strategies leverage digital tools to exploit vulnerabilities within democratic societies (Ahmed, 2018). This convergence of disinformation strategies among Russia, China, and Iran, and the challenges this poses for democracies, is an important topic in current academic research.

This refined understanding underscores the necessity of addressing not only the local manifestations of algorithmic populism within Western political systems but also the transnational dimensions of information warfare (Bjola, 2019). A comprehensive approach must integrate insights from security studies and political philosophy to counter the systemic threats posed by such state-coordinated campaigns. The use of online disinformation campaigns to create moral panic has been a hallmark of the information strategies employed by Russia, China, and Iran. These states use coordinated tactics to exploit social and political vulnerabilities in Western democracies. (Benkler, 2018)

Russia's disinformation campaigns, especially during the 2016 U.S. presidential election, are well documented. Troll farms, like the Internet Research Agency, amplified divisive issues such as race, immigration, and gun control, aiming to create societal polarization and erode trust in democratic institutions (Woolley, 2018). Russia has also engaged in spreading COVID-19 misinformation, including claims that vaccines developed in the West are unsafe, fostering vaccine hesitancy and undermining public health efforts. During the COVID-19 pandemic, China increased its disinformation efforts, using tactics inspired by Russia's playbook. Beijing spread narratives suggesting that democratic countries had mismanaged the pandemic while promoting its own response as exemplary. False claims included allegations that the virus originated from U.S. military activities and that European healthcare systems were failing, aiming to sow distrust in Western governance and public health systems. Iran has focused on amplifying anti-Western sentiments through its state media and social media campaigns (Bradshaw, 2019).

Notable examples include the promotion of conspiracy theories about the COVID-19 pandemic, such as blaming the virus's origin on Western powers, aiming to destabilize international trust and to foster a narrative of victimization among Iran's allies. These campaigns leverage digital platforms to escalate fear, uncertainty, and division. The convergence of disinformation strategies among these states reflects a growing challenge for democracies, as coordinated narratives often target the same themes, such as questioning the legitimacy of elections or undermining public health measures (Bennett, 2018). These campaigns often aim to create moral panic by exploiting existing societal tensions and fears. They focus on divisive issues such as immigration, election integrity, and cultural conflicts to amplify social and political divisions (Brice, 2024).

China's Digital War 2.0 illustrates the dynamic interplay between disinformation and geopolitical strategy. Its insidious tactics challenge the foundations of democratic governance and demand a robust, interdisciplinary response that integrates intelligence, security studies, and international policy frameworks. The Romanian sociologist Nicolae Țîbrigan highlights China's advanced approach to digital warfare, positioning it as a nuanced evolution of Russia's established tactics. This raises important questions about the implications of such strategies for global security, public trust, and international relations. "China's Digital War 2.0 is far more insidious than Russia's, difficult to detect, and combines various assertive tactics and techniques to manipulate the perceptions of citizens in Western states. The battle is for winning the minds and hearts of people in liberal democracies, including policymakers enticed with all sorts of investments. The increasing aggressiveness in Chinese ambassadors' rhetoric somewhat mimics the Russian style of information manipulation. Data reveals that the Chinese political regime has absorbed key lessons from the failures of the Kremlin's hybrid aggression and information warfare. Consequently, Chinese strategists continue to learn from the militarized 'troll factories' in managing social media campaigns. European Union documents already highlight the identification of a 'trilateral convergence of disinformation narratives' promoted by China, Iran, and Russia." (Țîbrigan, 2020)

China's "Digital War 2.0" reflects a sophisticated and multi-layered strategy aimed at influencing public perceptions in liberal democracies. Unlike Russia's more overt and often aggressive tactics (Jamieson, 2020), China's methods are characterized by subtlety and strategic investment. This insidiousness lies in the blending of economic leverage (e.g., infrastructure investments through the Belt and Road Initiative) with targeted information campaigns, which appeal to both policymakers and broader populations.

Russian and Chinese operatives are using advanced microtargeting techniques to tailor disinformation to specific demographics and individuals. This personalized approach increases the effectiveness of their messaging and helps create echo chambers that reinforce divisive narratives. Such campaigns are run through troll farms, yet they also provide ideological ammunition and electoral advantages to extremist and populist parties in the West, helping them poison the minds and hearts of Western media consumers (Legucka, 2022). Russian and Chinese campaigns often target specific groups, such as ethnic minorities, political extremists, or undecided voters in swing states.
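A deliberately crude sketch of the segmentation step shows how such tailoring works in principle. Every field, rule, and message variant below is hypothetical; real operations infer such traits at scale from platform ad-targeting categories and behavioural data.

```python
from dataclasses import dataclass

@dataclass
class Profile:
    region: str
    age: int
    interests: set[str]

def pick_narrative(p: Profile) -> str:
    """Route each profile to the message variant it is most receptive to."""
    if "immigration" in p.interests:
        return "variant_a_border_panic"
    if p.age < 30 and "activism" in p.interests:
        return "variant_b_system_is_rigged"
    if p.region == "swing_state":
        return "variant_c_voting_is_pointless"
    return "variant_d_generic_distrust"

audience = [
    Profile("swing_state", 45, {"immigration", "jobs"}),
    Profile("anywhere", 22, {"activism", "music"}),
    Profile("swing_state", 60, {"gardening"}),
]
for profile in audience:
    print(pick_narrative(profile))
```

The point of the sketch is that contradictory narratives can be served simultaneously to different segments, which is why microtargeted disinformation is far harder to observe and rebut than a single public broadcast.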

For example, Iranian operatives have targeted Arab and Muslim communities in Michigan. China has adopted a platform-centric model, focusing on co-opting or influencing information platforms, including social media companies, think tanks, and academic institutions. This approach allows for more subtle and long-term influence operations. The use of AI technologies, including generative AI and deepfakes, has enabled these actors to create more convincing and engaging content. This includes AI-generated text, images, and videos that are increasingly difficult to distinguish from genuine content. The objectives of these disinformation efforts extend beyond immediate electoral outcomes. They aim to gradually erode trust in democratic institutions, sow chaos, and diminish Western influence on the global stage. (Sloss, 2022)

Another short-term risk is the exacerbation of societal polarization, a dynamic intensified by the amplification of populist rhetoric via digital algorithms. As algorithms are designed to maximize user engagement, they tend to prioritize content that provokes strong emotional responses, often favouring extreme or divisive content over more moderate, reasoned discussion. This creates a feedback loop wherein populist actors—whether politicians, ideological movements, or state-sponsored entities—intentionally exploit these emotional triggers to deepen societal divides. The immediate effect is an increase in political polarization, as people increasingly define their identities in opposition to perceived enemies or out-groups, resulting in a diminishing capacity for constructive dialogue. Ultimately, the short-term risks posed by algorithmic populism go beyond political instability—they challenge the very ideals that underpin democratic governance. The degradation of public trust, rising polarization, and the fracturing of the public sphere threaten the ability of societies to function cohesively and make informed, collective decisions.

Intelligence Perspectives on Algorithmic Populism

Algorithmic populism represents a significant threat to national security. Foreign state actors, such as Russia and China, exploit this phenomenon to destabilize democracies through disinformation campaigns. By amplifying divisive narratives, these actors sow discord, weaken institutions, and diminish public trust in governance. The rise of bot armies and coordinated trolling activities demonstrates the manipulation of digital platforms for political gain. These practices often involve the systematic dissemination of fake news, deepfakes, and black propaganda, creating an environment where misinformation thrives. Intelligence agencies must monitor these activities and assess their implications for electoral integrity. The absence of robust legal frameworks governing digital platforms allows algorithmic populism to proliferate unchecked. Intelligence analysis must advocate for regulations ensuring transparency in algorithmic decision-making and accountability for harmful content dissemination.

The intersection of algorithmic populism and democratic stability poses profound challenges, demanding a nuanced understanding of vulnerabilities that threaten the integrity of public discourse and institutional legitimacy. An intelligence approach to identifying these vulnerabilities necessitates exploring their ontological, epistemological, and practical dimensions within the context of security studies.

In the face of algorithmic populism, the risks to democratic structures are profound, particularly in the short term, as they manifest in the degradation of public trust and the growing polarization of society. The rapid rise of populist movements, emboldened by the amplification capabilities of digital algorithms, is not merely a political phenomenon—it represents a complex interplay of technological, ideological, and social factors that, when combined, strain the very fabric of democratic governance.

Public trust is a cornerstone of any functioning democracy, yet this trust is increasingly undermined by the populist rhetoric that floods digital platforms. As algorithms prioritize emotionally charged, sensational content over nuanced, factual discussions, they contribute to an environment where information is not only distorted but manipulated for political gain. Politicians and movements, particularly those with populist agendas, can exploit this dynamic by crafting messages that resonate with the frustrations, fears, and desires of the public. These messages, though often based on misinformation or half-truths, gain traction because they are designed to provoke and to be shared, ensuring their wider reach. The short-term effect is the rapid erosion of confidence in established institutions, especially in the media and the political elite. As citizens become more distrustful of traditional news outlets, often accusing them of bias or complicity in the status quo, the public sphere becomes fractured, and democratic debate becomes increasingly polarized.

This erosion of trust is not just a consequence of populist rhetoric but is further amplified by the technological infrastructure that supports it. The algorithms that underpin social media platforms are designed to maximize engagement by prioritizing content that elicits strong emotional reactions, often promoting sensationalism and extremism over reasoned argumentation. The result is the creation of a fragmented informational environment in which truth is subjective, and where the line between fact and fiction becomes increasingly difficult to discern.

The security concern here lies in the destabilization of the public sphere and the undermining of democratic norms. As political polarization intensifies, citizens begin to retreat into their ideological echo chambers, often adopting increasingly radical positions. This not only reduces the opportunities for compromise but can also create fertile ground for extremist ideologies to flourish. In the short term, this polarization can lead to political gridlock, where opposing factions are unwilling or unable to cooperate. In the long run, it may lead to the rise of populist or authoritarian leaders who claim to offer a solution to the paralysis caused by polarized politics.

In response, intelligence agencies must assess the structural factors that enable such divisions, understanding how digital technologies, particularly social media, play a role in amplifying these dynamics. By identifying the key narratives and digital actors involved in the spread of polarization, intelligence agencies can develop countermeasures to reduce societal fragmentation. These countermeasures could include fostering inclusive, fact-based dialogue and improving citizens’ digital literacy, helping them to identify manipulative content. Moreover, intelligence should also focus on monitoring foreign actors who may seek to exploit these divisions for geopolitical gain, further exacerbating polarization and weakening democratic cohesion.

The short-term risks associated with algorithmic populism are twofold: the degradation of public trust in democratic institutions and the exacerbation of societal polarization. Both risks threaten the stability of democratic societies, creating a fertile ground for the rise of populist authoritarianism. From a security perspective, these risks require a coordinated response that combines traditional intelligence analysis with an understanding of digital communication dynamics, ensuring that the integrity of democratic processes is preserved in the face of new, technologically enabled threats.

This growing polarization is not simply a matter of political disagreements; it is a threat to the stability of democratic institutions themselves. As the political centre erodes, extremism—both from the left and the right—becomes more pronounced, further complicating the possibility of constructive political dialogue. The increasing radicalization of discourse, fuelled by algorithmic amplification, creates a vicious cycle in which the political discourse becomes ever more extreme and less conducive to democratic debate. The short-term risk, therefore, is not just political fragmentation but the possibility of the complete breakdown of democratic dialogue, replaced by a cacophony of competing ideologies that no longer share a common ground.

Intelligence agencies should invest in developing advanced AI systems to detect, analyse, and disrupt disinformation campaigns. This includes tools for automated vetting, fake news detection, and troll and bot identification. Agencies should also enhance their capabilities in understanding and analysing content in various languages and cultural contexts, which is crucial for identifying and responding to disinformation in diverse global settings. They can likewise work on pre-emptively countering anticipated disinformation narratives, as demonstrated in the case of Russia's invasion of Ukraine (Howard, 2020).
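As a hedged illustration of what the detection side of such tooling can look like, here is a minimal supervised text-classification sketch using scikit-learn. The four inline examples are placeholders; an operational system would train on large labelled corpora and combine text features with behavioural signals such as posting cadence, account age, and network position.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder training data; labels: 0 = legitimate, 1 = likely disinformation.
texts = [
    "Official statistics released today show modest economic growth",
    "The ministry has published its annual transparency report",
    "SHOCKING proof they are hiding the truth about the secret lab!!!",
    "Share before they delete this! The election was stolen!!!",
]
labels = [0, 0, 1, 1]

# TF-IDF features over unigrams and bigrams feeding a linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["BREAKING!!! They do not want you to see this proof"]))
```

Even this toy pipeline highlights the core design constraint: detection is a classification problem, so its accuracy is bounded by the quality and recency of the labelled data agencies can assemble.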

It is crucial that intelligence agencies develop capabilities to identify and counter the sophisticated microtargeting techniques used by adversaries to spread tailored disinformation. Supporting initiatives that increase public awareness of AI-generated content, disinformation tactics, and critical thinking skills is not only relevant but essential in the contemporary landscape of information warfare. As AI-generated content becomes more sophisticated, the average citizen faces unprecedented challenges in discerning fact from fiction. Disinformation, once the domain of state-sponsored actors and ideological extremists, is now facilitated by AI tools that automate the creation of misleading or false narratives, often indistinguishable from legitimate information (Kletter, 2020; Gilmour, 2024).

Furthermore, intelligence agencies can play a pivotal role in identifying emerging disinformation tactics, such as the use of deepfake technologies or coordinated online propaganda campaigns. By providing the public with tools to recognize these tactics, they not only protect citizens from manipulation but also enhance societal resilience to hostile information operations. This, in turn, strengthens national security by reducing the effectiveness of adversaries in their attempts to polarize or destabilize democratic institutions. (Fuchs, 2018)

In addressing these issues, intelligence agencies must recognize that the risks posed by algorithmic populism extend beyond the political realm—they are also security risks. The degradation of trust in democratic institutions, the spread of disinformation, and the rising polarization all contribute to an environment in which the very concept of democracy is in jeopardy. From a security perspective, this requires a shift in how intelligence agencies assess threats. Rather than focusing solely on traditional forms of espionage or military threats, intelligence efforts must account for the ways in which digital platforms, algorithms, and disinformation campaigns can be used to destabilize democratic societies. This includes tracking the role of foreign state actors who may seek to exploit these vulnerabilities to further their geopolitical interests.

References

  • Ahmed, S., et al. (2018). Artificial Intelligence, China, Russia, and the Global Order. Alabama: Air University Press.
  • Benkler, Y., Faris, R., & Roberts, H. (2018). Network Propaganda: Manipulation, Disinformation, and Radicalization in American Politics. Oxford: Oxford University Press.
  • Bennett, W. L., & Livingston, S. (2018). The disinformation order: Disruptive communication and the decline of democratic institutions. European Journal of Communication, 33(2), 122-139.
  • Bjola, C., & Pamment, J. (Eds.). (2019). Countering Online Propaganda and Extremism: The Dark Side of Digital Diplomacy. Oxfordshire: Routledge.
  • Bradshaw, S., & Howard, P. N. (2019). The Global Disinformation Order: 2019 Global Inventory of Organised Social Media Manipulation. Oxford: Oxford Internet Institute.
  • Brice, J. (2024, November 5). How election misinformation thrived online during the 2024 presidential race—and will likely continue in the days to come. Fortune. Retrieved from https://fortune.com/2024/11/05/election-misinformation-online-presidential-election-russia-iran-china-trump-musk/
  • Carlson, M. (2018). Fake news as an informational moral panic: the symbolic deviancy of social media during the 2016 US presidential election. Information, Communication & Society, 374-388.
  • Cohen, S. (2011). Folk Devils and Moral Panics. Abingdon: Routledge.
  • Fuchs, C. (2018). Social Media: A Critical Introduction. London: Sage Publications.
  • Gilmour, T. (2024). Critical Thinking and Media Literacy in an Age of Misinformation. doi:10.33774/apsa-2024-bsmtn-v2
  • Howard, P. N. (2020). Lie Machines: How to Save Democracy from Troll Armies, Deceitful Robots, Junk News Operations, and Political Operatives. New Haven, Connecticut: Yale University Press.
  • Jamieson, K. H. (2020). Cyberwar: How Russian Hackers and Trolls Helped Elect a President. Oxford: Oxford University Press.
  • Kletter, M. (2020, May 19). The Importance of Critical Thinking in the Age of Fake News. School Library Journal. Retrieved from https://www.slj.com/story/the-importance-of-critical-thinking-in-age-of-fake-news-webcast
  • Legucka, A., & Kupiecki, R. (Eds.). (2022). Disinformation, Narratives and Memory Politics in Russia and Belarus. Routledge.
  • Sloss, D. L. (2022). Tyrants on Twitter: Protecting Democracies from Information Warfare. Redwood City, California: Stanford University Press.
  • Țîbrigan, N. (2020, June 22). Războiul digital 2.0 al Chinei este mult mai insidios decât cel al Rusiei [China's Digital War 2.0 is far more insidious than Russia's]. HotNews.ro / LARICS. Retrieved from https://larics.ro/razboiul-digital-2-0-al-chinei-este-mult-mai-insidios-decat-cel-al-rusiei-interviu-cu-nicolae-tibrigan-expert-in-analiza-razboiului-informational/
  • Woolley, S. C., & Howard, P. N. (Eds.). (2018). Computational Propaganda: Political Parties, Politicians, and Political Manipulation on Social Media. Oxford: Oxford University Press.
Bogdan-George Rădulescu, PhD. Email: georgebogdan32@gmail.com. MA in Security Studies, Coventry University. Author of The Decline of Objectivity: Mass Media in the Age of Fake News and Post-Truth (Cluj-Napoca: Presa Universitară Clujeană, 2021).
