Methodology
Learn more about the methodology used by the AI Observatory in Elections. This page may be subject to change as research and data collection are improved.
Our focus is on cases involving the spread of AI-generated content that could potentially interfere with national political opinion formation, democratic integrity, and voting decisions, whether through explicit association (names and electoral events) or implicit association (topics related to trust and electoral integrity).
We focus on content produced by or about individuals who either hold or will run for political office, about political parties and coalitions, and about Brazilian public institutions.



We also include content that directly addresses campaigns and official events in the electoral process, as well as any events that may raise doubts about the legitimacy of the election and the integrity of the electoral environment.
In addition, we look at multimedia content that directly or indirectly interferes with national democratic integrity, such as content that attacks public institutions as well as potential human rights violations, such as racist publications and hate speech.
We identify multimedia publications (text, image, video, and audio) generated by AI and shared on social media by individuals running for or holding public office, by political parties, and by coalitions with the aim of influencing national political opinion. For each publication, we record whether or not the account flagged the content as AI-generated.
We also track synthetic content (video, image, or audio) that uses the likeness or other physical attributes of individuals running for or holding public office, of political parties, or of coalitions. Any content published on social media by the general public with the aim of harming or favoring candidates or political figures, or of attacking Brazilian public institutions, is also taken into consideration.
We consider the following examples:
• Synthesizing or replacing the likeness of politicians, candidates, public figures, and voters
• Dubbing
• Content depicting deceased politicians or public figures
• Manipulation of facial features
• Deepnudes
• Fake audio
We also map cases of non-consensual synthetic pornographic content that uses the images or other physical attributes of individuals running for or holding public office, published on social media with the aim of perpetuating gender-based political violence.
We likewise map synthetic content and/or deepfakes that violate human rights, such as racist content and other forms of hate speech.

The search for cases takes place amid intense data privatization and a lack of transparency on the platforms, which creates obstacles mainly for researchers in the Global South, whose production of local information is exploited through data colonialism (Tavares and Tranjan, 2023; Cassino, 2021). In addition to the end of Twitter’s free API in the first half of 2023, Meta shut down CrowdTangle on the eve of the Brazilian election, preventing new researchers from registering and replacing the tool with a program offering limited access to selected researchers. TikTok restricts API access to researchers in Europe and the United States.

Due to the lack of free and open-source tools, the research team conducts active, ongoing manual searches across several sources.
We search social media platforms for keywords such as “deepfake,” “Artificial Intelligence,” and “Elections” to find posts shared by users. Despite the limitations of this technique, which is subject to algorithmic recommendations that tailor results to each user’s preferences, synthetic pieces can still be identified because users search for these keywords or indicate the use of AI in captions.
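The keyword screening described above can be sketched as a simple filter over manually collected posts. This is a hypothetical illustration, not the observatory’s actual tooling: the post records, field names, and keyword list are assumptions.

```python
# Hypothetical sketch: screening manually collected posts against the
# observatory's monitoring keywords (Portuguese variants included as
# an assumption, since the monitoring covers Brazilian social media).
KEYWORDS = [
    "deepfake",
    "artificial intelligence",
    "elections",
    "inteligência artificial",  # assumed Portuguese variant
    "eleições",                 # assumed Portuguese variant
]

def matches_keywords(post_text: str, keywords=KEYWORDS) -> bool:
    """Return True if the post mentions any monitored keyword."""
    text = post_text.lower()
    return any(kw in text for kw in keywords)

# Illustrative input records (not real data).
posts = [
    {"id": 1, "text": "Novo DEEPFAKE circula nas redes"},
    {"id": 2, "text": "Bom dia a todos!"},
]
flagged = [p for p in posts if matches_keywords(p["text"])]
```

In practice a human researcher still reviews every match; the filter only narrows the volume of posts to inspect.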
In addition, we maintain a list of links to the accounts of the main potential candidates for the 2026 elections, as well as important figures in the Brazilian political scene such as ministers, former presidents, and other politicians. This allows us to monitor these accounts more frequently and identify whether they are using the technology.


For Facebook, we actively search the platform’s Ad Library to identify content promoted by candidates and/or voters that uses AI to manipulate or produce information. We search for ads that use the keyword “Artificial Intelligence” and are linked to “social issues, elections, or politics”.
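A query like the one described can be expressed against Meta’s public Ad Library API. The sketch below only builds the query parameters; the endpoint and parameter names follow Meta’s published Ad Library API documentation, but the API version, field selection, and `ACCESS_TOKEN` placeholder are assumptions, and no request is actually sent.

```python
from urllib.parse import urlencode

# Assumed endpoint per Meta's Ad Library API docs; version is illustrative.
AD_ARCHIVE_URL = "https://graph.facebook.com/v19.0/ads_archive"

def build_ad_query(keyword: str, country: str = "BR") -> dict:
    """Build query parameters for one Ad Library search restricted to
    ads about social issues, elections, or politics."""
    return {
        "search_terms": keyword,
        "ad_type": "POLITICAL_AND_ISSUE_ADS",
        "ad_reached_countries": country,
        # Illustrative field selection, not the observatory's actual one.
        "fields": "page_name,ad_creative_bodies,ad_delivery_start_time",
        "access_token": "ACCESS_TOKEN",  # placeholder: supply your own token
    }

params = build_ad_query("Inteligência Artificial")
query_string = urlencode(params)  # ready to append to AD_ARCHIVE_URL
```

Access to this API requires identity verification with Meta, which is part of the access barrier discussed above.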
We also run the same keywords through Google’s search engine and the search engines of news and fact-checking websites to identify coverage of the subject. This allows us to monitor what the national press and independent media projects are reporting about AI in elections. These cases and monitoring reports are incorporated into our mapping, and the media outlets are credited.
Once the information is collected, we document the cases and classify them according to the categories published on the observatory’s website.
This classification allows us to identify not only the types of media in circulation, but also where the synthetic content originates (politicians, parties, candidates, or voters) and what communicative, disinformation, and political objectives the AI-created content projects into the country’s public debate, both inside and outside the electoral process.
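The documentation step above implies a structured record per case. A minimal sketch of such a record follows; the field names and category values are illustrative assumptions, since the observatory’s actual categories are published on its website.

```python
from dataclasses import dataclass, asdict

@dataclass
class SyntheticMediaCase:
    """Hypothetical record for one mapped case (field names assumed)."""
    media_type: str        # "text", "image", "video", or "audio"
    source: str            # e.g. "politician", "party", "candidate", "voter"
    flagged_as_ai: bool    # did the publishing account disclose AI use?
    objective: str         # e.g. "disinformation", "campaigning"
    outlet_reference: str  # press/fact-checking outlet that covered it, if any

# Illustrative example, not a real case.
case = SyntheticMediaCase(
    media_type="video",
    source="voter",
    flagged_as_ai=False,
    objective="disinformation",
    outlet_reference="",
)
record = asdict(case)  # serializable form for the case database
```

Keeping the source and disclosure status as explicit fields is what makes the origin and objective analyses described above straightforward to aggregate.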
Yes. The observatory maps any synthetic content that could potentially influence national political opinion formation, regardless of the political or ideological orientation of the content or of the social media account that published it. Our goal is to understand how AI is being used by actors across all political orientations.
The Observatory's work is justified by the fact that, in practice, the election period is not limited to the calendar established by the Superior Electoral Court. The current dynamics of political campaigns extend beyond the official election period, as political actors are constantly building their public images, narratives, and engagement strategies on social media platforms.
Data collected outside the election period is essential to capturing these political movements which, although they do not formally occur during the campaign, directly influence the electoral landscape and voter behavior.
The Observatory considers synthetic content (or media) to be content that is completely or partially created by Artificial Intelligence tools, particularly generative AI. The term “deepfake” is a blend of “deep learning” and “fake,” referring to content generated by artificial neural networks: false content produced by Artificial Intelligence that appears genuine to the human eye.
Deepfake media most commonly manipulates human images by swapping the faces of individuals in an image or video with those of another person, creating fabricated content that appears to be genuine.
Share with us and help our monitoring.