By Rafael A. F. Zanatta and Mariana Rielli

On Thursday, December 5, 2024, the Temporary Internal Committee on Artificial Intelligence (CTIA) of the Federal Senate approved the substitute report for Bill 2338/2023, which aims to establish the legal framework for regulating the use of Artificial Intelligence systems in Brazil. The text will be voted on in the Senate Plenary on December 10 and has a strong chance of being approved.

Originally proposed by Senator Rodrigo Pacheco (PSD/MG) after a year of work by a Committee of Jurists appointed by the legislative branch, the bill has been the subject of intense debate throughout 2024 under the rapporteurship of Senator Eduardo Gomes (PL/TO). Data Privacy Brasil contributed to the drafting of the text through Bruno Bioni (co-director of Data Privacy Brasil and PhD from USP), who served on the Committee of Jurists led by Minister Ricardo Villas Bôas Cueva (STJ) and Professor Laura Schertel Mendes (UnB). Data Privacy Brasil also took part in the Federal Senate's thematic session on the text, presenting research findings from the past four years (check out our AI materials here).

The legislative process for the AI bill has been shaped by significant debates about the risk-based regulation model and the new fundamental rights enshrined by law. In June 2024, a preliminary report analyzing hundreds of proposed amendments was presented. In July, the vote was supplemented with additional analyses and a recommendation for approval. During this period, the proposal for a centralized authority to enforce the law was abandoned, favoring the creation of an AI Regulation System integrated with the existing capacities of sectoral regulatory authorities such as Anatel, Anvisa, Anac, and the Central Bank, among others. This combination with the sectoral approach has been a major focus for businesses across various industries, aiming to avoid the concentration of power in a new state agency. As highlighted in our study Central Themes of AI Regulation, “a general law does not exclude but rather creates space for sectoral regulation to flourish, building on common foundations for different sectors of the economy” (Bioni et al., 2023, p. 6).

In the second half of 2024, there was significant mobilization among production sectors, artists (following Marisa Monte’s call for clear copyright rules), and unions concerning the bill’s provisions. On November 28th, the Senator supplemented the report with a new version of the text, shifting away from stricter regulatory obligations for high-risk activities and taking a more conciliatory stance toward private interests. Responding to requests from the National Confederation of Industries (CNI), the rapporteur removed protective measures for workers and provisions addressing mass layoffs and workers’ participation in drafting reports on the impact on fundamental rights. In a public statement, Data Privacy Brasil expressed concern over efforts to dilute Bill 2338.

In the first week of December, new disputes arose over information integrity and copyright, escalating tensions in Brasília. On December 3rd, the rapporteur published a new supplement to his report, addressing additional amendments. Senator Marcos Rogério (PL/RO) proposed amendments to remove discussions on copyright from the text, citing “the complexity of the issue and the need for detailed analysis,” as well as to eliminate provisions for the protection of work and workers, arguing that “these measures create excessive bureaucracy and regulations that could hinder adoption.” These amendments were rejected by Eduardo Gomes.

When presenting his report on Thursday, December 5, the rapporteur emphasized the support of organizations such as Febraban, CNSaúde, FIESP, and the Coalition for Digital Rights. Artists publicly expressed their stance through a letter addressed to the senators, signed by prominent figures including Fernanda Torres, Chico Buarque, Pedro Bial, Milton Nascimento, Caetano Veloso, and Miguel Falabella. Several members of the audiovisual sector organized under the banner “Responsible AI Front,” advocating for basic copyright rules, such as attribution of authorship, fair remuneration for creative work, and the right to oppose uses contrary to their works.

According to the National Congress’s e-Cidadania portal, over 35,000 citizens expressed support for the bill. This mobilization—comprising civil organizations, artists, cultural producers, and citizens—has bolstered the legitimacy of the senators’ positions in favor of passing the law.

In this text, we briefly analyze what was retained from Bill 2338 in the versions debated at the end of the second half of 2024 and the changes made to the text approved by the CTIA on December 5th. To this end, we compare the differences between the version presented in November 2024 and the version approved by the CTIA in December.

As an organization that has followed this topic for four years, we view the consensus reached in the CTIA and the unanimous approval of the final text presented by rapporteur Eduardo Gomes very positively. The legislation remains protective of fundamental rights and advances crucial standards for a fair informational ecosystem in the coming years. We publicly advocate for the law’s approval in the Federal Senate and, subsequently, in the House of Representatives, without any setbacks or changes to the democratic achievements made thus far.

What was retained regarding fundamental concepts and rights?

As mentioned, the main elements of the legislation were preserved, including the principle that the law is guided by the protection of fundamental rights, the promotion of responsible innovation and competitiveness, and the assurance of safe and reliable systems for the benefit of human beings. The legislation will not apply to “activities related to the investigation, research, testing, and development of systems, applications, or AI models before their market deployment,” provided that consumer protection, environmental laws, and personal data protection regulations are observed. It will also not apply to “services limited to providing storage infrastructure and data transportation used in artificial intelligence systems.”

The regulation establishes twenty foundational principles for the development, implementation, and use of AI in Brazil. These include the centrality of the human being, respect for human rights and democratic values, free development of personality, environmental protection and ecologically balanced development, equality, non-discrimination, plurality, diversity, social rights, privacy, personal data protection, informational self-determination, among others outlined in Article 2.

There are also seventeen basic principles for the development and use of AI in Brazil, such as inclusive growth, well-being, worker protection, freedom of decision and choice, effective human oversight, prohibition of unlawful and abusive discrimination, justice, fairness and inclusion, transparency and explainability (“considering the role of each actor in the AI value chain”), due diligence and auditability throughout the AI system’s lifecycle, reliability and robustness of AI systems, legal due process, contestability and adversarial process, accountability, liability and full compensation for damages, accessibility and safe use of AI systems and technologies by persons with disabilities, comprehensive protection for children and adolescents, among others specified in Article 3.

Conceptual definitions were also preserved. An “AI system” is defined as a machine-based system that, with varying degrees of autonomy and for explicit or implicit objectives, processes data or information to generate outcomes such as predictions, content, recommendations, or decisions that may influence virtual, physical, or real environments. A “general-purpose AI system” is defined as an AI system based on a model trained on large-scale datasets, capable of performing a wide variety of tasks and serving different purposes, including those for which it was not specifically designed or trained. It can be integrated into various systems or applications. Lastly, a “generative AI system” is defined as an AI model specifically designed to generate or significantly modify, with varying degrees of autonomy, text, images, audio, video, or software code.

Regarding stakeholders, the legislation identifies:

  • Developer: A natural or legal person, public or private, that develops an AI system, either directly or by commission, aiming to market it or apply it in services under their name or brand, for a fee or free of charge.
  • Distributor: A natural or legal person, public or private, that makes an AI system available for third-party application, for a fee or free of charge.
  • Applicator: A natural or legal person, public or private, that employs or uses an AI system in their own name or for their own benefit, including by configuring, maintaining, or supporting it through the provision of data for the operation and monitoring of the system.

Drawing from anti-discrimination law, the legislation also retained conceptual definitions of “abusive or unlawful discrimination” (any distinction, exclusion, restriction, or preference, in any area of public or private life, intended to or resulting in unlawfully or abusively nullifying or restricting the recognition, enjoyment, or exercise, on equal terms, of one or more rights or freedoms provided by law, based on personal characteristics) and of “indirect abusive or unlawful discrimination” (discrimination that occurs when a seemingly neutral rule, practice, or criterion places affected individuals or groups at a disadvantage, provided that the rule, practice, or criterion is abusive or unlawful).

The legislation also introduces the “preliminary assessment” (a simplified self-assessment process conducted before the use or market introduction of one or more AI systems, aimed at classifying their risk level to determine compliance with the obligations established by this law) and the “algorithmic impact assessment” (an analysis of the impact on fundamental rights, including preventive, mitigating, and corrective measures for negative impacts, as well as measures to enhance the positive impacts of an AI system).

Other new legal concepts include “relevant legal effects” (negative legal consequences, whether modifying, impeding, or extinguishing, that affect fundamental rights and freedoms), “synthetic content” (information such as images, videos, audio, and text that has been significantly modified or generated by AI systems), and “information integrity” (the result of an informational ecosystem that enables and provides reliable, diverse, and accurate information and knowledge in a timely manner to promote freedom of expression).

The legislation also addresses systemic risk, which arises from potential significant adverse effects caused by general-purpose or generative AI systems impacting individual and social fundamental rights.

The chapter on fundamental rights was not altered in this final stage. It establishes that individuals and groups affected by AI systems, regardless of their risk level, are entitled to the following rights:

  • Right to information: accessible, free, and easily understandable information about interactions with AI systems, including the automated nature of the interaction (except for AI systems dedicated exclusively to cybersecurity and cyber defense, as per regulations);
  • Right to privacy and personal data protection: particularly the rights of data subjects as outlined in Law No. 13,709 of August 14, 2018, and other relevant legislation;
  • Right to non-discrimination: protection against unlawful or abusive discrimination and the correction of illegal or abusive discriminatory biases, whether direct or indirect.

AI systems designed for vulnerable groups must, at all stages of their lifecycle, be transparent and use simple and clear language appropriate to the age and cognitive capacities of those groups. These systems must also be implemented with the best interests of those groups in mind. Information about basic rights must be provided using standardized, easily recognizable icons or symbols, without excluding other formats.


The legislation retains specific rights for individuals and groups affected by high-risk AI systems. The three basic rights are:

  • Right to explanation of decisions, recommendations, or predictions made by the system;
  • Right to contest and request review of decisions, recommendations, or predictions made by an AI system;
  • Right to human review of decisions, considering the context, risks, and the state of the art in technological development.

These rights are subject to two limitations. First, trade and industrial secrets may restrict the right to an explanation. Second, the law specifies that these rights “will be implemented considering the state of the art in technological development, and the agent of the high-risk AI system must always implement effective and proportional measures.”

The law also mandates that the right to an explanation be provided through a free process, using simple, accessible, and appropriate language that enables individuals to understand the outcome of the decision or prediction in question. The explanation must be given within a reasonable timeframe, depending on the complexity of the AI system and the number of agents involved.

Timelines and procedures for exercising the right to an explanation will be determined by the competent authority that coordinates the SIA.

What was maintained regarding risk regulation?

Risk regulation is a central component of AI governance. This legal technique involves dynamically calibrating obligations based on the level of risk a system may pose to individuals’ interests and fundamental rights. As explained in our study:
“The idea is to calibrate the weight of regulation—the intensity of obligations, rights, and duties for a particular regulated agent—based on the level of risk in a given context” (Bioni et al., 2023, p. 6).
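
To make the calibration idea concrete, here is a minimal sketch, assuming hypothetical tier names and obligation labels of our own; Bill 2338 defines the legal categories, and nothing below is its wording:

```python
# A minimal sketch of risk-calibrated obligations, with hypothetical tiers
# and duty labels for illustration only; Bill 2338 defines the legal terms.
from enum import Enum


class RiskLevel(Enum):
    EXCESSIVE = "excessive"  # intolerable: prohibited outright
    HIGH = "high"            # tolerated under strict obligations
    LOW = "low"              # tolerated under baseline duties


# Obligations grow with the risk tier.
OBLIGATIONS = {
    RiskLevel.LOW: {"transparency notice"},
    RiskLevel.HIGH: {
        "transparency notice",
        "algorithmic impact assessment",
        "effective human oversight",
        "incident reporting",
    },
}


def required_obligations(level: RiskLevel) -> set[str]:
    """Return the duties owed at a given tier; excessive risk is banned."""
    if level is RiskLevel.EXCESSIVE:
        raise ValueError("excessive-risk systems may not be deployed")
    return OBLIGATIONS[level]


print(required_obligations(RiskLevel.HIGH))
```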

This concept was highly contested between 2021 and 2022, during which much of the private sector advocated for a “principle-based legislation” without mechanisms for risk regulation. However, as we argued in public hearings in the Federal Senate, federal legislation without specific rules for excessive and high-risk situations would be ineffective. Risk regulation can also be asymmetrical, favoring small businesses and innovative ventures.

The core structure of the bill prepared by the Commission of Jurists was preserved, particularly regarding the gradation of risk and varying degrees of obligations imposed on agents depending on the identified risk level. The bill stipulates that before market introduction, application, or use, AI agents may conduct a preliminary assessment to determine the risk level of their system. This assessment is to be based on the criteria of the law, aligned with the state of the art in technological development. The results of this preliminary assessment can be used by AI agents to demonstrate compliance with the safety, transparency, and ethical requirements outlined in the legislation.

In addition to encouraging self-assessment of risks, the law establishes a categorization of risk types, ranging from intolerable (excessive risks) to those deemed tolerable under certain measures (high and low risks). The legislation explicitly prohibits the development, implementation, and use of AI systems for:

  • Instigating or inducing the behavior of natural persons or groups in ways that harm their health, safety, or other fundamental rights, or those of third parties;
  • Exploiting the vulnerabilities of natural persons or groups to induce behavior that results in harm to their health, safety, or other fundamental rights, or those of others;
  • Assessing personality traits, characteristics, or past behaviors (criminal or otherwise) of individuals or groups for predicting the likelihood of committing crimes, infractions, or recidivism;
  • Enabling the production, dissemination, or creation of material depicting or representing the abuse or sexual exploitation of children and adolescents;
  • Use by public authorities of AI systems to assess, classify, or rank individuals based on social behavior or personality traits, through universal scoring, for access to goods, services, or public policies, in an illegitimate or disproportionate manner;
  • Autonomous weapons systems (AWS);
  • Real-time, remote biometric identification in publicly accessible spaces, except in the following cases: (a) during criminal investigations or proceedings, with prior and motivated judicial authorization, provided there is reasonable evidence of participation in a crime, the evidence cannot be obtained through other means, and the investigation does not involve minor offenses; (b) to locate victims of crimes or missing persons, or in cases of grave and imminent threat to the life or physical integrity of natural persons; (c) during flagrant crimes punishable by imprisonment of more than two years, with immediate judicial notification; (d) for recapturing escaped defendants, executing arrest warrants, and enforcing judicially ordered restrictive measures.

For example, it is prohibited to market an AI system, such as a mobile app, that helps young people create harmful chemical substances at home (e.g., DIY versions of intoxicants or drugs made from household cleaning products). This scenario differs entirely from experimental AI uses in scientific research, such as antibiotic development.

It is also forbidden to distribute AI language models capable of emulating characters and encouraging harmful behaviors, such as the case in the U.S. where a 14-year-old died after compulsive use of a chatbot simulating a Game of Thrones character.

The use of remote facial recognition systems (a point opposed by Data Privacy Brasil throughout 2024) must be proportional and strictly necessary to serve the public interest. These systems must comply with due legal process and judicial oversight while adhering to the principles and rights set forth in the law, as well as Brazil’s General Data Protection Law (Law No. 13,709 of August 14, 2018). Special emphasis is placed on protection against discrimination and on the requirement that algorithmic inferences be reviewed by the responsible public authority.

High-risk AI systems are defined as those employed for specific purposes and contexts where the likelihood and severity of adverse impacts on individuals or groups are significant. These include:

  • AI used in the security management and operation of critical infrastructures such as traffic control and water or electricity supply networks, where there is a significant risk to physical integrity or interruption of essential services, in an unlawful or abusive manner, provided the systems are decisive for the outcomes, decisions, operations, or access to essential services;
  • AI systems that are determining factors in student selection processes for entry into educational or professional institutions, or in decisive academic progress evaluations or student monitoring (excluding monitoring solely for safety purposes);
  • AI systems used for recruitment, screening, evaluation of candidates, decision-making about promotions or terminations of employment contracts, performance evaluations, or behavioral assessments affecting employment, worker management, or self-employment access;
  • AI systems determining access, eligibility, allocation, revision, reduction, or revocation of private and public services deemed essential, including evaluating eligibility for public welfare and social security services;
  • AI systems used for call classification or determining priority levels for essential public services such as firefighting and medical assistance;
  • AI systems aiding judicial authorities in fact-finding or law application, where there is a risk to individual freedoms or democratic governance, excluding systems assisting administrative acts or activities;
  • AI employed in autonomous vehicles operating in public areas where their use poses significant risks to individuals’ physical integrity;
  • AI systems aiding medical diagnoses and procedures, where significant risks to physical and mental health are involved;
  • Analytical studies of crimes involving individuals, enabling law enforcement to analyze large datasets from various sources to identify patterns and behavioral profiles;
  • AI systems used by administrative authorities to assess the credibility of evidence during investigations or the suppression of infractions, or to predict the occurrence or recurrence of real or potential violations based on individual profiling;
  • AI systems for biometric identification and authentication aimed at emotion recognition, excluding systems used solely to confirm the identity of specific individuals;
  • AI systems employed to evaluate the admission of individuals or groups into national territory.

A crucial point was the preservation of the authorities’ power to regulate the classification of high-risk AI systems, “as well as identify new high-risk use cases, taking into account the likelihood and severity of adverse impacts on individuals or groups affected.” There was pressure, possibly supported by the federal government, to soften the wording of the caput of Article 15, which would have been detrimental to the public interest.

Governance elements were also maintained in the text, with specific rules for AI systems that generate synthetic content. When an AI system generates synthetic content, it must include, considering the state of the art in technological development and the context of use, an identifier in that content to allow verification of its authenticity or of characteristics of its origin, modifications, or transmission, as regulated.
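
The law leaves the identification technique to future regulation. As one illustration only, a provenance record can bind generated content to metadata about its origin; the sketch below is a hypothetical approach under those assumptions, not a method prescribed by the bill:

```python
# A hypothetical provenance record for AI-generated content; Bill 2338 does
# not prescribe this format, and regulation will define the actual details.
import hashlib
import json
from datetime import datetime, timezone


def provenance_record(content: bytes, generator: str) -> str:
    """Build a JSON identifier binding content to origin metadata."""
    return json.dumps({
        "sha256": hashlib.sha256(content).hexdigest(),  # integrity check
        "generator": generator,                         # declared origin
        "synthetic": True,                              # flags AI generation
        "created_at": datetime.now(timezone.utc).isoformat(),
    })


# Example: tag a generated text before it circulates.
print(provenance_record("texto gerado".encode("utf-8"), "example-model-v1"))
```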

In alignment with the open data culture of the Access to Information Law (LAI) and the General Data Protection Law (LGPD), the law provides that public authorities, when developing or contracting AI systems, must ensure access to databases and full data portability for Brazilian citizens and public management. The law also requires minimum standardization of systems in terms of data architecture and metadata, to promote interoperability between systems and good data governance.
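
As a purely illustrative sketch of what such minimum standardization could look like, the hypothetical descriptor below lists fields a public dataset might expose for portability and interoperability; none of the field names come from the bill:

```python
# A hypothetical dataset descriptor for interoperability between public
# systems; the field names are illustrative, not drawn from the bill.
from dataclasses import dataclass, field


@dataclass
class DatasetDescriptor:
    name: str                 # human-readable dataset name
    steward: str              # public body responsible for the data
    schema_version: str       # version of the shared data architecture
    formats: list[str] = field(default_factory=lambda: ["csv", "json"])
    portable: bool = True     # full portability for citizens and managers


descriptor = DatasetDescriptor(
    name="registro-exemplo",
    steward="orgao-publico-exemplo",
    schema_version="1.0",
)
print(descriptor)
```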

In the case of using biometric systems for identification by public authorities, such as at airports or public buildings, an algorithmic impact assessment must be conducted, ensuring the exercise of rights for affected individuals or groups and protection against direct, indirect, illegal, or abusive discrimination.

Regarding the algorithmic impact assessment – a key legal tool of the AI Law for transparency and damage prevention – the bill defines that:

  • It is an obligation of the developer or applicator introducing or circulating the high-risk AI system in the market;
  • The obligation will be calibrated according to the role and participation of each agent in the chain;
  • Agents are free to create their own methodologies, but the assessment must include, at a minimum, an evaluation of the risks and benefits to fundamental rights, mitigation measures, and the effectiveness of management measures (a rough sketch of such a document follows this list);
  • A specific regulation will detail the situations in which agents must share the algorithmic impact assessment with the sectoral authority;
  • AI agents may request necessary information from other agents in the chain to perform the algorithmic impact assessment, respecting trade and industrial secrecy;
  • The assessment must be conducted before the system is introduced or circulated in the market, and in accordance with the specific context of that introduction or circulation;
  • Companies and the private sector may collaborate with the competent authority to define general criteria and elements for preparing the impact assessment and its periodicity;
  • The conclusions of the algorithmic impact assessment will be public, respecting trade and industrial secrets, in accordance with the future regulation.
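
As a rough sketch only, the minimum contents listed above could be organized in a structured document such as the hypothetical one below; the field names are ours, not the bill’s:

```python
# A hypothetical structure for an algorithmic impact assessment document,
# reflecting the minimum contents the bill lists; field names are ours.
from dataclasses import dataclass


@dataclass
class AlgorithmicImpactAssessment:
    system_name: str
    risks_to_fundamental_rights: list[str]  # identified adverse impacts
    benefits: list[str]                     # identified positive impacts
    mitigation_measures: list[str]          # how negative impacts are reduced
    management_effectiveness: str           # evaluation of measures adopted
    public_summary: str                     # conclusions, minus trade secrets
```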

The legislation also defines that the algorithmic impact assessment is distinct from the personal data protection impact report, which is a legal obligation under the General Data Protection Law. However, when useful, these documents can be prepared together for efficiency gains for organizations.

In addition to these protective rules, regulations on self-regulation, certification, incident reporting, and civil liability were also maintained.

What is the SIA and how will it function?

One of the key elements negotiated during 2024 was the final design of the law’s enforcement structure. Unlike the European Union, which opted for a centralizing model with the “AI Office,” the Brazilian law builds on existing regulatory capacities.

The law creates the SIA, the National System for Artificial Intelligence Regulation and Governance. This system is made up of:

– National Data Protection Authority (ANPD), the competent authority that will coordinate the SIA;

– Sectoral authorities (Anatel, Anvisa, Anac, Aneel, Central Bank, etc.);

– Permanent Regulatory Cooperation Council for Artificial Intelligence (CRIA);

– Committee of Artificial Intelligence Experts and Scientists (CECIA).

CRIA is tasked with producing guidelines and serves as a permanent collaboration forum, including through technical cooperation agreements with sectoral authorities and civil society. CECIA, composed of scientists and specialists, aims to guide and supervise the responsible technical and scientific development and application of AI.

The SIA, according to the bill, has two main functions: (i) to enhance and strengthen the regulatory, sanctioning, and normative competencies of sectoral authorities, in harmony with the general competencies of the competent authority coordinating the SIA; and (ii) to seek harmonization and collaboration with regulators of cross-cutting issues.

In this framework, the ANPD would play a coordinating role, facilitating relations with the various sectoral regulators. Additionally, among the many functions defined in Article 46, it would represent Brazil in international AI bodies, have the capacity to create rules on how and what information should be made public, and establish procedures for conducting algorithmic impact assessments. It would also hold a “normative and regulatory” role over economic activities not yet subject to sectoral regulation.

What are the incentives for small and medium-sized enterprises?

A central concern in the development of the AI Bill was the impact on the emerging economy of companies that work intensively with AI, with special attention to the economic incentives for small and medium-sized enterprises.

The bill proposes an experimental regulatory environment (a “regulatory sandbox,” a term drawn from specialized financial regulation literature), which aims to facilitate the development, testing, and validation of innovative AI systems for a limited period before they are placed on the market or put into service, according to a specific plan. This model allows the responsible authority to waive certain regulations under its jurisdiction for a regulated entity or group of regulated entities.

The competent authority and the sectoral authorities that make up the SIA will regulate the procedures for requesting and authorizing the operation of regulatory sandboxes, and may limit or interrupt their operation and issue recommendations. The bill also stipulates that sectoral authorities must provide micro and small businesses, startups, and public and private Scientific, Technological, and Innovation Institutions (ICTs) with priority access to testing environments.

What are the incentives for sustainability?

Another element agreed upon in the Senate is the promotion of innovation by states and municipalities, with a focus on AI. The legislation sets out guidelines for promoting innovation through public-private partnerships, investing in research for AI development in the country, “prioritizing the technological and data autonomy of the country and its insertion and competitiveness in both the domestic and international markets,” financing physical and technological AI resources that are difficult for small and medium-sized enterprises to access, and supporting research centers that promote sustainable practices, as well as encouraging the expansion of high-capacity sustainable data centers for AI systems.

The general law also encourages the creation of multidisciplinary research, development, and innovation centers in artificial intelligence, which could have a cascading effect on public universities and research institutes in Brazil. Regarding environmental concerns, the law stipulates that public and private entities should prioritize the use of AI systems and applications aimed at energy efficiency and rationalizing the consumption of natural resources.

Looking to the future, the bill provides that CRIA, in cooperation with the Ministry of the Environment and Climate Change, will promote research and the development of certification programs to reduce the environmental impact of AI systems.

In summary, the bill fosters an important connection between the environmental and the digital agendas, one already identified in the specialized literature and in the G20 engagement groups (Zanatta, Vergili, Saliba, 2024; T20 et al., 2024).

How does the bill protect workers?

As we argued in the text “AI and rights for workers” (Mendonça et al., 2024), AI has significant potential to impact labor relations at various levels. The creation of specific rules for workers was one of the points of tension in the second half of 2024, with opposing positions between Fenadados/CUT and Fiesp/CNI.

The final version of the text reduced protections for workers but maintained some important aspects. The first is cooperation between authorities. The competent authority, the sectoral authorities that make up the SIA, and the Permanent Regulatory Cooperation Council for Artificial Intelligence (CRIA), in cooperation with the Ministry of Labor, will develop guidelines to advance four objectives:

– Mitigate the potential negative impacts of AI on workers, especially risks to jobs and career opportunities;

– Maximize the positive impacts on workers, especially in improving health and workplace safety;

– Value negotiation tools and collective agreements;

– Promote the development of continuous training and skill-building programs for active workers, fostering professional development and improvement.

The specific rules regarding the containment of mass layoffs and workers’ rights to participate in algorithmic impact assessments were removed from the final text.

How does the bill protect the rights of content creators?

One of the main victories for civil society, creators, artists, and professionals in journalism and audiovisual sectors was the inclusion of the section on copyright and related rights in the final report of the CTIA. The text presents protective language for content creators and ensures dignified conditions for the exercise of rights.

Firstly, the bill stipulates that an AI developer who uses content protected by copyright and related rights must disclose which protected content was used in the development of its AI systems, through the publication of a summary on an easily accessible website, observing trade and industrial secrets, according to specific regulations.

Secondly, the bill establishes a right of opposition: the holder of copyright and related rights may prohibit the use of their content in the development of AI systems when that use runs against their interests. Additionally, a prohibition on the use of protected works and content in an AI system’s databases issued after the training process does not exempt the AI agent from liability for moral and material damages.

Thirdly, a system of fair remuneration is established. The bill states that the AI agent using content protected by copyright and related rights in processes such as mining, training, or development of AI systems must compensate the respective holders of these contents for their use.

This remuneration (due only to holders of copyright and related rights, whether domestic or foreign, residing in Brazil) must ensure that rights holders are able to negotiate collectively, that the calculation observes the principles of reasonableness and proportionality (considering the size of the AI agent and competitive effects), and that the use of protected content remains subject to free negotiation, so as to promote an environment of research and experimentation that enables the development of innovative practices.

Furthermore, the law engages in a dialogue with the Civil Code by stating that the use of image, audio, voice, or video content that portrays or identifies natural persons by AI systems must respect personality rights.

It is important to note that these rules — information, opposition, and remuneration — do not apply for the purposes of research and development of AI systems by scientific and research organizations, museums, public archives, libraries, and educational institutions, as long as the following conditions are observed:

– Access was made lawfully;

– It is not for commercial purposes;

– The use of content protected by copyright and related rights is made to the extent necessary to achieve the intended objective.

Changes in the final text and possible setbacks

The main changes in the final text of the AI Law approved by the CTIA relate to freedom of expression and information integrity. The rapporteur wrote that, “given the imperative of guaranteeing freedom of expression as a fundamental value for any democratic society,” the risks to information integrity, freedom of expression, the democratic process, and political pluralism were removed from Article 15 as criteria for regulating and identifying new high-risk AI scenarios.

This change was combined with the insertion of a new article in the law text stating: “Regulation of aspects related to the circulation of online content that may affect freedom of expression, including the use of AI for content moderation and recommendation, may only be done through specific legislation” (Article 77).

There was also a substantial change with the removal of the text that said: “The developer of a generative AI system must, before making it commercially available, ensure the adoption of measures for the identification, analysis, and mitigation of reasonably foreseeable risks concerning fundamental rights, the environment, information integrity, freedom of expression, and access to information.”

Organized civil society, especially through the efforts of the Rights in the Network Coalition (Coalizão Direitos na Rede, 2024), has been warning about attempts to introduce suppressive amendments to eliminate the articles on copyright and the protective rules for content creators.

There is also a risk of changes to the regulations on high-risk activities and the removal of norms establishing the possibility of intervention by the competent authority, which could remove the regulatory and supervisory capacity of the SIA in the future.

It is crucial that the achievements made democratically, through consensual discussions among CTIA senators, are not undone due to pressures from powerful economic groups and private interests that oppose the public interest. Data Privacy Brasil advocates for AI with rights and a fair informational ecosystem. The approval of Bill 2338/2023 is a necessary step to move forward in this direction.

References

BRASIL. Senado Federal. Projeto de Lei 2338/2023, Senador Rodrigo Pacheco. Brasília: Senado Federal, 2023. Available at: https://www25.senado.leg.br/web/atividade/materias/-/materia/157233 

BIONI, Bruno; GARROTE, Marina; GUEDES, Paula. Temas Centrais da Regulação de Inteligência Artificial no Brasil: O local, o regional e o global na busca da interoperabilidade regulatória. São Paulo: Associação Data Privacy Brasil de Pesquisa, 2023. Available at: https://www.dataprivacybr.org/wp-content/uploads/2024/02/nota-tecnica-temas-regulatorios-ia_data.pdf

BIONI, Bruno; PIGATTO, Jaqueline; KARCZESKI, Louise; PASCHOALINI, Nathan. Exploring Opportunities in the Digital Economy and AI at the G20. SDG Knowledge Hub, 2024. Available at: https://www.dataprivacybr.org/exploring-opportunities-in-the-digital-economy-and-ai-at-the-g20-2/

COALIZÃO DIREITOS NA REDE. Regular para promover uma IA responsável e protetiva de direitos: alertas sobre retrocessos, ameaças e garantias de direitos no PL nº 2.338/23. Brasília: Coalizão Direitos na Rede, 2024.

DATA PRIVACY BRASIL. Nota pública sobre apresentação do projeto de lei de Inteligência Artificial. São Paulo: Data Privacy Brasil, 2023. Available at: https://www.dataprivacybr.org/documentos/nota-publica-sobre-apresentacao-do-projeto-de-lei-de-inteligencia-artificial/ 

MENDONÇA, Eduardo; MENDONÇA, Júlia; RODRIGUES, Carla; ZANATTA, Rafael. IA e Direitos para quem trabalha. Projeto IA com Direitos. São Paulo: Data Privacy Brasil, 2024. Available at: https://www.dataprivacybr.org/ia-e-direitos-para-quem-trabalha/ 

MENDONÇA, Eduardo; MENDONÇA, Júlia; RODRIGUES, Carla. IA com Direitos: diálogo e colaboração para regular e proteger. São Paulo: Data Privacy Brasil, 2024. Available at: https://www.dataprivacybr.org/ia-com-direitos-dialogo-e-colaboracao-para-regular-e-proteger/ 

T20; C20; L20; W20. São Luís Declaration: Artificial Intelligence. Joint Statement from Engagement Groups to the G20 States on Artificial Intelligence. Brasília: G20, 2024. Available at: https://www.dataprivacybr.org/wp-content/uploads/2024/09/20240910-Sao-Luis-Declaration-Artificial-Intelligence.pdf 

ZANATTA, Rafael; VERGILI, Gabriela; SALIBA, Pedro. O nexo entre o ambiental e o digital, Revista PolitICS, 2024. Available at: https://politics.org.br/pt-br/infraestrutura-news/o-nexo-entre-o-ambiental-e-o-digital 

 
