Brazil defines rules for AI in elections; candidates could lose their mandates if they use AI tools to spread fake news

March 5, 2024

São Paulo, Brazil – Brazil’s Superior Electoral Court has approved a new set of rules for the use of artificial intelligence (AI) in elections. Among the measures are a complete ban on deepfakes and a mandatory warning about the use of AI in all content shared by candidates and their campaigns.

The country’s next elections, in which voters will choose mayors and city councilors, will take place in October, and the new AI rules are already in force. This is the first set of regulations issued by the electoral court on the use of AI by candidates and political parties.

The rules prohibit all types of deepfakes; require a warning about the use of AI in electoral propaganda, even in neutral videos and photos; and restrict the use of bots to mediate contact with voters (a campaign cannot simulate dialogue with the candidate or any other person, for example).

According to the rules, any candidate who uses deepfakes to spread false content could lose their mandate if elected. Brazil’s electoral court defines deepfakes as “synthetic content in audio, video or both, which has been digitally generated or manipulated to create, replace or alter the image or voice of a living, dead or fictitious person.”

Alexandre de Moraes, president of the Superior Electoral Court, outlined severe punishments for candidates who violate the regulation and use AI to harm their opponents or to distort information in order to win elections.

Alexandre de Moraes, president of the Superior Electoral Court and justice of the Supreme Court. Image courtesy of TSE.

“The sanction will be the revocation of their candidate registration and, if they have already been elected, they could lose their mandate,” he said. According to Moraes, Brazil approved one of “the most modern regulations in the world in relation to combating disinformation and the illicit use of AI” in the electoral process. 

Big tech

The new regulation also provides for the liability of big tech companies, such as Google and Meta (which owns Facebook, WhatsApp and Instagram), if they do not immediately remove content containing disinformation, hate speech, Nazi or fascist ideology, or anti-democratic, racist or homophobic material.

According to the court, the platforms must provide services “in accordance with their duty of care and their social position.” Providers therefore have a duty to adopt and publicize measures to prevent or reduce the circulation of “notoriously untrue or seriously out of context” facts that affect the integrity of the electoral process.

Electronic voting machines used in Brazilian elections. Image courtesy of Tribunal Superior Eleitoral (TSE).

Platforms must also promote, free of charge, content that informs voters or corrects untrue claims. According to Moraes, the measures are necessary to prevent the use of artificial intelligence to harm candidates and, especially, to manipulate voters’ choices.

“Now, the electoral court has effective tools to combat distortions in electoral advertisements, hateful, fascist, anti-democratic speeches and the use of AI to put something into a person’s speech that they did not say,” he said.
