The joint fight against disinformation – updates from Europe
Zbigniew Gniatkowski is an EU official, Polish diplomat and civil servant
2024-03-01
EUROPE
GEOPOLITICS
New technologies have made it possible to spread news – truth and lies alike – just about as fast as a human finger (or a bot) can click “repost”.
Although the concept of “fake news” is not new – various terms have been used for information manipulation to capture its different angles, such as misinformation, disinformation and malinformation – new technologies and the widespread use of social media have pushed the phenomenon to a new level.
Individuals, organisations and, more recently, state actors actively engage in information manipulation to create insecurity, doubt, confusion and polarisation, and to promote their own agendas and interests – often of an economic or political nature. Cases in various countries have shown that disinformation combined with hate speech can lead to radicalisation, social unrest and even crime.
EU citizens recognise the challenge posed by mis- and disinformation: 81% of European citizens agree that the existence of news or information that misrepresents reality or is even false is a problem for democracy in general (Standard Eurobarometer 100, Autumn 2023).
In mid-January 2024, at Davos, President of the European Commission Ursula von der Leyen identified industrial-scale disinformation, along with climate change, as among the most significant global threats. Referring to the World Economic Forum’s Global Risks Report, she said that disinformation and misinformation, followed by polarisation within our societies, limit our ability to tackle the big global challenges. Therefore, “strengthening our democracy and protecting it from the risks and interference it faces is our common and enduring duty. We need to build trust more than ever and Europe is prepared to play a key role.”
Many examples from recent years demonstrate how information manipulation has been used in diverse scenarios – most notably during the Covid-19 pandemic and during Russia’s war of aggression against Ukraine. The former provided fertile ground for misinformation and conspiracy theories, whereas the latter has created state-controlled foundations for unprecedented aggressive propaganda aimed at the Russian domestic public, alongside externally oriented governmental “communication” – a combination of manipulative messages, denials, covert information operations and interference – though troll farms had already been busy earlier.
In response, in recent years the EU and like-minded countries have imposed unprecedented sanctions on Russian media outlets such as Sputnik and Russia Today, among many others, preventing them from spreading their propaganda within the EU.
However, state actors such as Russia continue to push their false narratives.
Germany has uncovered a major pro-Russian disinformation campaign using thousands of fake accounts on the platform X to stir anger at Berlin’s support for Ukraine. More than 50,000 fake user accounts pumped out more than a million German-language posts.
Similarly, last year the French authorities unveiled another Russian-led operation, Doppelganger. The campaign, also known as RRN (“Recent Reliable News”), consisted of pro-Russian pseudo-media websites, hundreds of fake Facebook accounts and pages, and a network of thousands of bots on X disseminating polarising content and URLs redirecting to fake media. The main goal of this massive campaign was to undermine the West’s support for Ukraine.
We need to be able to uncover such digital campaigns to manipulate information against our countries and institutions, expose the malign actors behind them, and apply relevant measures. Today we better understand the mechanisms by which false narratives spread within society, and in many instances we have more means to detect them, greater awareness of them, and more adequate tools to counter the threat.
Following the creation of the East StratCom Task Force, focused on “effective communication”, in the European External Action Service in the wake of Russia’s illegal annexation of Crimea in 2014, the EU has developed new tools in its fight against disinformation.
In 2020, the Commission adopted the European Democracy Action Plan to build more resilient democracies across the EU by:
- promoting free and fair elections
- strengthening media freedom
- countering disinformation.
Legislative actions include a strengthened Code of Practice, the Media Freedom Act, measures on the transparency of political advertising, and protections for journalists, including against abusive litigation (SLAPPs). Other measures include greater internal coordination of the disinformation response.
The Code of Practice on Disinformation is the world’s first voluntary self-regulatory instrument for online platforms. The strengthened Code (June 2022), with 34 signatories, increases transparency and accountability around the platforms’ actions. In August 2023, the Digital Services Act (DSA) became legally enforceable for designated Very Large Online Platforms and Very Large Online Search Engines, which must share their annual risk assessments on illegal content disseminated through their services. From 17 February 2024, all DSA obligations apply in full, and EU Member States must designate a Digital Services Coordinator. Moreover, preparations for converting the Code of Practice into a Code of Conduct are ongoing, which would give the Commission the possibility of levying fines against very large online platforms that do not uphold their commitments.
Over recent months we have seen several important policy measures.
The Defence of Democracy package, put forward on 12 December 2023, introduces a legislative proposal to set up “common transparency and accountability standards for interest representation activities seeking to influence the decision-making process in the Union that is carried out on behalf of third countries”. Moreover, two recommendations aim to promote free, fair and resilient elections, as well as the participation of citizens and civil society organisations in policy making.
In December 2023, the Council Presidency and the European Parliament reached an agreement on a new regulation on the transparency and targeting of political advertising. The regulation has been drawn up amid concerns about the dangers posed by information manipulation and foreign interference in elections. It aims to make it easy for citizens to recognise political advertisements and understand who is behind them, so that they are better placed to make informed choices.
Yet another important initiative is the European Media Freedom Act, which aims to ensure a pluralistic media landscape in Europe. The proposed regulation is designed to safeguard editorial decisions against political interference and to protect journalists against surveillance. It focuses on the independence and stable funding of public service media, as well as on the transparency of media ownership and of the allocation of state advertising.
More regulations were pushed forward at the EU level last December.
These included the Artificial Intelligence Act (AI Act) – a regulation, first announced in 2019, that is based on fundamental rights and a risk-based approach. Forms of AI that contradict certain EU values would be prohibited (e.g., exploitation of vulnerabilities or social scoring). In terms of governance and enforcement, a series of measures has been envisaged at national and EU level, including the European Artificial Intelligence Board. Many provisions of the AI Act will enter into force over a transition period of up to three years.
In addition to regulation, high-quality independent journalism, media literacy, and fact-based strategic communication – shaped in response to citizens’ concerns and vital interests – seem to be the right way forward. While providing citizens with facts on EU policies and topics of public interest – such as their security in the context of Russia’s war against Ukraine, health issues, or the cost of living, including food and energy prices – we also need to further boost resilience.
One of the goals is to increase media literacy among young people. Many young voters will be heading to the polls for the first time in 2024 – quite a special year, with two billion people set to cast their vote in 50 countries. Election-related disinformation attacks involving foreign malign actors seem very likely, and they are expected to become more aggressive. False narratives may focus on election practicalities, with the aim of discouraging people from voting, often coupled with multiple narratives sowing distrust in the political system or polarisation.
Indeed, safeguarding electoral processes and election integrity remain challenges. Foresight, prevention, detection, and, as a last resort, the ability to rebut disinformation are a communication priority for governments and institutions, with the overall goal of limiting public exposure to harmful content. While countering foreign information manipulation and interference (FIMI) will remain a major task at the institutional and intergovernmental level, the broader fight against mis- and disinformation is taking place across all parts of society, with many actors on board – national authorities, civil society organisations, media, fact-checkers, and other stakeholders.
Certainly, all Internet users are encouraged to do fact-checking on their own; that is possible even with basic tools and basic knowledge. Platforms offer means to report abusive content, though many of them are still not ready for active moderation (for various reasons, including lack of resources and complicated internal procedures). When we spot falsehoods on the social media we use every day, we should neither share nor amplify them; instead, by exposing and debunking misinformation we can help make the information ecosystem cleaner.
Will it be possible to eradicate disinformation in the near future? The phenomenon is extremely difficult to tackle, as the problem encompasses multiple factors: from people’s attitudes and behaviours in the digital world, to tech companies – more specifically their business models and their use of algorithms and self-learning AI – to the capacity of liberal democracies to monitor, moderate and, when indispensable, regulate mass communication flows. Nevertheless, with good strategies and timely regulation we may be effective and mitigate some of the harmful effects. The stakes are very high.
Zbigniew Gniatkowski is an EU official, Polish diplomat and civil servant, and former Ambassador of Poland to New Zealand; since September 2022 he has been contributing to disinformation responses in the European Commission. This article is written in the author’s private capacity and does not necessarily reflect an official EU position.