Key Themes in AI Regulation: The local, regional, and global in the pursuit of regulatory interoperability
Data Privacy Brasil releases its report “Key Themes in AI Regulation: The local, regional, and global in the pursuit of regulatory interoperability,” supported by the Heinrich Böll Foundation. This work is the result of months of research under the project “Where the SabIA Sings: Governance and Regulation of Artificial Intelligence from Brazil”.
The unregulated use of artificial intelligence (AI) poses a range of risks to fundamental rights, democracy, the environment, and the rule of law, including reinforcing unjustified discrimination across various domains, aiding disinformation campaigns, and intensifying natural resource extraction. Consequently, different countries, organizations, and international bodies have mobilized to find ways to regulate the use of this technology, as self-regulation strategies alone have proven insufficient to curb the negative externalities created or intensified by AI.
As a result, we are currently experiencing a normative upheaval that has shifted the discussion from “if” to “how” we should regulate AI uses, with new proposals continuously being produced at local, regional, and global levels, encompassing both hard and soft law, by various significant global actors. These documents are challenging not only to track but also to compare, making it difficult to understand their convergences and particularities.
In this context, Data Privacy Brasil releases its report “Key Themes in AI Regulation: The local, regional, and global in the pursuit of regulatory interoperability,” supported by the Heinrich Böll Foundation. This work is the result of months of research under the project “Where the SabIA Sings: Governance and Regulation of Artificial Intelligence from Brazil”, in which more than 20 local, regional, and global normative sources were analyzed to identify points of regulatory convergence among the proposals currently under discussion, while also considering the specificities of the Brazilian reality.
Given the complexity of the subject and the constant evolution of regulatory efforts, the study was limited to three main thematic axes: (i) risk-based regulation; (ii) algorithmic impact assessments (AIA); and (iii) generative AI, complemented by a dedicated chapter on the particularities of AI regulation in Brazil. The choice of these topics was not random: the first two axes were selected for their central role in balancing risk-based and rights-based regulation, while the third represents one of the most contentious topics in recent regulatory approaches.
In this context, despite ongoing claims that regulation might hinder technological development, the research findings indicate that this argument should be dismissed as a false trade-off: the various proposals aim to enable responsible socio-economic innovation, that is, developing and using technologies that reinforce rights rather than contributing to their violation.
One of the research highlights is the identification of a common thread among different regulatory initiatives: asymmetric, risk-based regulation, which adjusts regulatory intensity according to the level of risk posed by a given AI system. This approach also identifies scenarios presenting unacceptable (excessive) risk and high risk, with some variations across proposals. Despite this convergence, there are differences among the analyzed sources, particularly in how they attempt to reconcile this approach with one that also affirms rights.
Another point of attention in the research concerns the governance obligations required for AI. Among these, the study highlights algorithmic impact assessments (AIA), a tool listed in almost all the normative sources analyzed. These sources converge on a minimum procedural framework for AIA resting on a tripod: (i) publicity; (ii) meaningful, multisectoral public participation of potentially affected individuals and groups, especially the most vulnerable and marginalized; and (iii) the variety of risks and benefits to be assessed, given the predominance of analyses focused on adverse effects on individual fundamental rights over social rights, such as the environment.
Moreover, one of the major findings of the research is the existence of a movement towards regulatory interoperability in AI regulation: there are common points among regulatory proposals that allow them to interact with one another. However, it is essential, particularly for countries in the Global South such as Brazil, to consider the points of divergence between regional or international contexts and local ones. This makes it possible to ensure that AI regulation is not merely an uncritical importation of regulatory models from other contexts, but instead addresses local challenges and opportunities, such as the fight against racism and other forms of structural discrimination. Only in this way can AI regulations be created that do not represent a new form of regulatory colonization.
Finally, it is worth noting that the central objective of the study was to map the main discussions related to AI regulation within the chosen axes, informing readers about the current debates in different contexts and thus producing a state-of-the-art diagnosis of the topic.
For more information, access the complete research here.
See also
- AI in the 2024 Brazilian elections
  Aláfia Lab, *desinformante and Data Privacy Brasil launch the report “AI in the 2024 Brazilian elections”, with an analysis of the use of artificial intelligence in the first round of the elections.
- A Fragmented Landscape Is No Excuse for Global Companies Serious About Responsible AI
  For the third year in a row, MIT Sloan Management Review and Boston Consulting Group (BCG) have brought together an international panel of AI experts to help gain insights into the lack of alignment around global standards and norms for responsible AI. Bruno Bioni, co-director of Data Privacy Brasil, was one of the experts interviewed. Check it out in the text!
- Research Project Outcomes: A vision for inclusive educational technology
  Check out Júlia Mendonça's interview for Tech Ethics Lab about the use of technology in schools.
- Data Privacy Brasil participates in UN’s OHCHR briefing on Brazil
  The organization highlighted how the advance of edtech has been violating children’s privacy in the country.
DataPrivacyBr Research | Content licensed under CC BY-SA 4.0