A general overview of the debate on Artificial Intelligence regulation in Brazil
As in many other countries, especially in Europe, the debate on artificial intelligence regulation has advanced in Brazil, with the Senate leading the most mature discussions so far.
Five bills are being analyzed by a commission of senators, which will define the rules, principles, and limits for the use and development of artificial intelligence in the country.
Meanwhile, in the Chamber of Deputies, dozens of bills have emerged in recent months, primarily to punish abuses and regulate usage. However, the debate on a legal framework has not progressed as much there.
Created in August 2023, the Temporary Internal Commission on Artificial Intelligence in Brazil (CTIA) has already held 10 public hearings to gather contributions on the topic but decided to extend the discussions until May 2024 due to its “complexity”.
IT’S ✷ IMPORTANT ✷ BECAUSE…
A commission of jurists proposed an AI regulation based on mitigating risks that algorithms can cause to fundamental rights, considered the most advanced project under discussion.
However, private and public sectors view this risk-based approach as bureaucratic and harmful to innovation.
Senators from the Bolsonaro-aligned right aim to water down proposals that regulate high-risk AI.
According to CTIA president Senator Carlos Viana (PODE-MG), who offered this as justification for the extension, the intention in the coming months is to give more space to the “economic perspective” when analyzing the issue, not just the technical dimension.
Currently, the private sector is dissatisfied with an approach to AI regulation based on risk mitigation and the assurance of rights, as proposed by a group of jurists who debated the regulation at the Senate’s request throughout 2022.
🤖
The CTIA was created to deliberate based on the results of a Commission of Jurists, which discussed the topic at the Senate’s request between March and December 2022. The experts produced a final report of 908 pages with suggestions that served as the basis for the senators’ deliberations.
WHAT ARE THE PROPOSALS?
Currently, the CTIA is discussing the following bills:
- PL 5051/2019, proposed by Senator Styvenson Valentim (PODE-RN);
- PL 5691/2019, also proposed by Valentim;
- PL 21/2020, proposed by Deputy Eduardo Bismarck (PDT-CE), already approved by the Federal Chamber;
- PL 872/2021, proposed by Senator Veneziano Vital do Rêgo (MDB-PB);
- PL 2338/2023, proposed by Senator Rodrigo Pacheco (PSD-MG).
Among the texts analyzed by the CTIA, PL 2338/23 “stands out as the most mature legislative proposal on the topic so far”, according to the Commission’s work plan. Pacheco’s proposal was based on the report produced by the commission of jurists who discussed the topic throughout 2022.
The other proposals, by contrast, establish generic principles and duties for AI use without legislating on risks or holding agents accountable for the negative impacts of these technologies or their use to violate human rights.
It’s not just ChatGPT. Today, AI is used daily in algorithms that power social media feeds, classify urgency and severity of patients in health units, and in judicial systems to summarize cases and assist judges in decisions, for example.
WHAT DOES THE “MOST MATURE” BILL SAY?
The bill requires developers to submit their AI systems to an algorithmic impact assessment before making them available. This analysis will identify the application’s risk level concerning potential rights violations, such as data protection or discrimination.
If an AI system is deemed “high risk”, it must comply with several governance measures, including transparency in data management and use, and prevention of discriminatory biases in AI usage. If considered “excessive risk,” it cannot be used.
“When we look at the global trend in terms of AI regulation, instead of creating universal rules for all agents, you establish obligations according to the algorithm’s risk level”, says attorney Paula Guedes, a researcher at Data Privacy Brasil.
“To measure the risk [of AI algorithms], you need to conduct an assessment: how many people can be affected, whether the system deals with vulnerable groups like the elderly and children, catalogs political affiliation and religion, and uses personal or human information, for instance” – Paula Guedes, Data Privacy Brasil.
Examples of risky algorithms, according to PL 2338/2023:
Excessive risk (prohibited):
— Those used by governments to evaluate and classify citizens based on social behavior or personality, deciding access to services and public policies.
— Those employing subliminal techniques or exploiting human vulnerabilities to induce harmful behaviors to health, safety, or legal compliance.
High risk (must follow governance rules):
— Systems for evaluating and monitoring students, recruiting job candidates, or managing already hired workers.
— Applications in border management and critical infrastructure security, like water and electricity supplies.
— Systems used by the Judiciary for investigation, law enforcement, or public safety.
— AI for assessing creditworthiness or access to essential public and private services.
— Prioritization systems for emergency response services like firefighters or medical assistance.
— AI assisting in medical diagnostics or procedures.
— Autonomous vehicle systems posing risks to personal safety.
In total, there are 14 types of algorithms considered “high risk”.
Data Privacy Brasil + AI Regulation Study
On December 13, Data Privacy Brasil released the study “Central Themes in AI Regulation: The Local, Regional, and Global in Pursuit of Regulatory Interoperability”, one of the primary sources for this report. Read the full study here.
OPPOSITION LOBBY
According to Guedes, there’s a “strong lobby” from private sectors that prefer a regulation based on “principles” for AI use and less on obligations to mitigate risks.
Amid criticism from lobbyists, in November, CTIA rapporteur Eduardo Gomes (PL-SE) told Agência Senado he would seek a “convergence text” for the legal framework, stating, “AI is the first subject you discuss with an expert today, and two months later, he knows less”.
“They want a principle-based regulation or no regulation at the moment, to avoid compliance with various obligations”, explains Guedes.
The main complaint is that under PL 2338/2023, many AI systems in use today would be classified as “high risk”, requiring adjustments and reassessments of various applications, a measure considered obstructive to innovation.
LEGISLATIVE PROPOSAL SURGE
In 2023, there was a surge in legislative proposals aimed at curbing the misuse of generative AI in the Chamber of Deputies. Most of these projects, however, do not discuss a legal framework but instead seek to prohibit, penalize, or regulate: the creation of deep nudes or deep fakes, the use of AI as an aggravating factor in crimes, among other proposals.
OUTDATED STRATEGY
In addition to the Senate debate, on December 11, 2023, the Ministry of Science, Technology, and Innovation (MCTI) announced the review of the Brazilian Artificial Intelligence Strategy (EBIA), launched in April 2021 during the Bolsonaro administration.
At the time, the document was considered outdated and insufficiently detailed.
“It was heavily criticized because, to be a strategy, it needed to define responsible actors, direct the most suitable areas for Brazil to grow in AI while protecting fundamental rights, but it did not predict anything concrete”, assesses Guedes.
The minister who launched the EBIA at the time was Marcos Pontes, now a senator and vice-president of the CTIA. Amid the debates, at the end of November 2023, the astronaut proposed a substitute for PL 2338/23. In practice, the text undermines the risk-based approach suggested by the experts.
Pontes’ substitute proposes defining high-risk systems through “probability assessments” of impact, according to each system’s autonomy and use of personal data. The model also eliminates the prohibition on excessive-risk AI proposed in the original bill.
This new proposal by the astronaut senator “has serious and irremediable problems”, disrupts accountability for AI usage, and lacks technical rigor, according to attorney Filipe Medon, a professor at FGV-RJ who participated in the jurists’ commission that discussed AI in the Senate, in an article published in Jota.
REPORTING BY PEDRO NAKAMURA
DATA BY MICHEL GOMES
ART AND GRAPHICS BY RODOLFO ALMEIDA
EDITING BY SÉRGIO SPAGNUOLO
Text originally published on December 27, 2023, on the Núcleo website.
DataPrivacyBr Research | Content under licensing CC BY-SA 4.0