Article | AI with Rights | Digital Platforms and Markets

Insufficiency of Ethical Principles for the Regulation of Artificial Intelligence: Antiracism and Antidiscrimination as Vectors for AI Regulation in Brazil


Paula Guedes Fernandes da Silva is a PhD candidate in Law and holds a Master’s degree in International and European Law from the Catholic University of Porto. She is a researcher in Law and Technology at Legalite PUC-Rio and a researcher at Data Privacy Brasil School.

Marina Gonçalves Garrote is a researcher at the Data Privacy Brasil Research Association and a Master’s candidate in Law at the University of São Paulo.

 

Chatbots, [1] identity verification, access to information, credit granting, job opportunities, and access to essential services. These are just a few of the many increasingly ubiquitous examples of Artificial Intelligence (AI) applications today. The total or partial automation of decision-making functions with significant impacts on the lives of individuals and groups raises a series of concerns, primarily due to its potential to generate objectively incorrect or questionable outcomes in terms of bias, opacity, and discrimination. [2]

In light of the growing and unchecked use of technology with negative consequences for fundamental rights and freedoms, we have observed in recent years an international trend towards AI regulation, initially based on soft-law mechanisms, [3] particularly self-regulation and the creation of ethical principles. By 2020, at least 173 public-private initiatives [4] had emerged globally to define values, principles, codes of conduct, and guidelines for the ethical development and deployment of AI. [5]

At the same time, as highlighted by Elettra Bietti, [6] a movement began within technology companies to instrumentalize ethical language in order to defend a regulatory model more favorable to them, whether by arguing that regulation of the AI market is unnecessary, by advocating self-regulation, or simply by ensuring that regulation is market-driven. This movement is referred to as "ethics washing." Practices falling under this model include the creation of AI ethics boards and the hiring of AI ethics researchers and even philosophers, without any real power to alter internal and market policies. One example was the dissolution of Google's Advanced Technology External Advisory Council (ATEAC) about a week after its announcement, following a petition by company employees demanding the removal of a board member known to be anti-LGBT. [7]
Thus, despite some consensus among different stakeholders regarding ethical principles for AI applications, such as transparency, fairness/non-discrimination, non-maleficence, accountability, and privacy, [8] practice reveals the difficulty of translating these principles into concrete measures that effectively safeguard fundamental rights. [9] This is demonstrated by numerous cases of illegitimate or abusive practices and decisions resulting from AI applications, even after ethical guidelines were established, such as the proliferation of algorithmic racism, [10] failures in facial biometric identification, vigilantism, social exclusion, behavioral manipulation, and difficulty accessing essential services, [11] all of which have disproportionate effects on marginalized groups.

In this context of continued negative effects of AI, coupled with the difficulty of transforming ethical guidelines into concrete actions, the singular approach of technological self-regulation has proven insufficient and ineffective for protecting individuals and society in the face of technological advancement. [12] Consequently, we have observed a new global trend of creating legally binding, enforceable norms specifically for AI, as seen, for example, in the European Union's Proposed AI Regulation [13] and in different proposals under review or already approved in the U.S. context, [14] such as the Algorithmic Accountability Act proposal (2022). [15]

 

Brazilian regulatory scenario

In July 2021, Brazil became more strongly involved in this international trend by approving an urgent processing regime for Bill 21/2020 (PL 21/20), which aimed to create a regulatory framework for AI in the country. [16] In September of the same year, the Chamber of Deputies, the first legislative house to review the bill, approved the project in the form of a substitute report by the rapporteur, Deputy Luiza Canziani (PTB-PR), [17] amid criticism from researchers and civil society organizations. In addition to the limited opportunity for public debate and popular participation, which are crucial given the ethical, technical, and legal complexity of the issue to be regulated, the approved text was criticized and raised concerns due to its excessively principled and weakly normative approach. [18] While this move positioned Brazil within the international trend of regulating AI use through binding legislative instruments, the chosen approach would undermine the law's practical enforceability, [19] preventing concrete regulation of the matter and, consequently, the effective protection of fundamental rights and freedoms, much as had occurred previously with strategies of ethical self-regulation.

In this context, it is worth mentioning the studies by Julia Black and Andrew Murray. [20] The authors emphasize the importance of an AI regulatory system grounded in law, focusing on the network-effect risks generated by the technology rather than only the ethical issues that arise with its individual use. Black and Murray compare the current experience of AI regulation with the regulation of the Internet in the 1990s.

At the time, due to the delay in government intervention with structural regulation, a communication technology with network effects and the potential to generate systemic risks and impacts, as well as create economic monopolies, was left under market control. The authors reflect that objectives and values (present in both the general debate on AI ethics and the principled nature of the bill) are only part of a regulatory system, which also requires individuals and organizations to modify their behaviors.

The text of Bill 21/20 approved by the Chamber of Deputies would ultimately position Brazil in the AI regulatory debate in a delayed manner, repeating the aforementioned mistakes made in the regulation of the Internet. [21] If the European Union's AI Regulation Proposal, for example, is already the subject of criticism (mainly related to its insufficient list of prohibited AI practices and the lack of significant requirements for impact assessments in the development and deployment of AI systems), [22] the Brazilian bill is substantially inferior from a normative perspective and even more problematic. In addition to establishing sectoral regulation associated with a high degree of self-regulation by the regulated agents themselves, the bill approved by the Chamber did not include a list of rights and duties, which could hinder effective governance of AI systems in Brazil by allowing excessive fragmentation of the debate across different sectors, without the enforceability necessary to ensure the practical application of legal rules. [23] Furthermore, despite mentioning "risk-based management" and "regulatory impact assessment," the text lacks depth and reflection: it contains conceptual imprecisions, lacks elements that would ensure normative density, and does not effectively provide for or proceduralize impact assessment mechanisms.

In addition to all of the above, discussions on AI regulation in Brazil still lack a concrete understanding of Brazil's specificities as a Global South country, marked by a history of marginalization of and discrimination against groups and communities, especially Black and Indigenous people, who are more adversely impacted by some AI applications. Technology, when applied to the Brazilian reality, ends up reinforcing and amplifying, both directly and indirectly, the historical structural racism [24] of Brazilian society, segregating various forms of Black identity, [25] as is evident, for example, in the predominance of Black individuals among those incarcerated due to the use of facial recognition in public security. [26]

In light of these issues, the second phase of the legislative process began in the Senate in February 2022, and, in response to the strong criticisms of the structure and content of Bill 21/20, a Commission of Jurists was established to draft a substitute version of the bill, under the presidency of Ricardo Villas Bôas Cueva and the rapporteurship of Laura Schertel Mendes. [27]

 

However, the Commission was criticized in an open letter from the Coalizão Direitos na Rede (Rights on the Net Coalition) for its lack of racial diversity, the absence of Black and Indigenous jurists, and its failure to consider, as criteria for choosing its members, regional diversity and the interests affected by AI applications. [28] Concerned about and aware of this initial criticism regarding racial and regional diversity gaps, the Commission of Jurists sought to mitigate them in its actions. For example, the public hearings held in April and May 2022 featured greater racial and gender representation among the invited panelists, who discussed different topics related to AI regulation in Brazil, [29] such as risk gradation, transparency and explainability, review and the right to human intervention, algorithmic discrimination, and the precautionary principle. Although still incipient, as participation in public hearings does not carry the same power and weight as membership of the Commission, this stance indicates a positive alignment of the Commission of Jurists with the antiracist struggle.

 

What we expect for the future of AI regulation in Brazil

With the establishment of the Commission of Jurists to draft the substitute version, Brazil was given a new opportunity to discuss AI regulation focused on the effective protection of rights and fundamental freedoms, especially those of vulnerable individuals and groups, moving beyond a merely principled logic and considering the historical oppression and discrimination rooted in the social fabric of the country.

According to Bianca Kremer, any legislative modernization project related to AI in the Global South, where Brazil is situated, must center on algorithmic governance that treats racial issues as a key organizing element; otherwise, we risk subjecting ourselves to empty ethical principles and limits. However well-intentioned, such arrangements would be incapable of protecting minorities and vulnerable groups within the power dynamics and hegemonic interests of our colonial, aristocratic, bourgeois, and patriarchal heritage. [30] In this scenario, Adilson Moreira [31] explains clearly how the liberal project of a society without hierarchies did not take place in Brazil, since the modern liberal state is itself a Racial State: its institutions were based on the oppression of Black people, and its political bodies and ideology allow racial exclusion to continue. In the absence of a society without hierarchies, it is impossible to conceive of legislation that seeks to offer equal treatment to individuals, as this is a procedural-liberal view of equality that does not take social context into account:

We, black people, who are operators of the Law, must be aware that deprivations cause us to always be socially classified as members of a specific group, which eliminates the possibility of having our individuality recognized. This state of affairs will not change unless the social status and material status of our people are transformed through positive actions by state institutions, in addition to changing the way these individuals are socially perceived. (MOREIRA, Adilson José, 2019, p. 99-100)

Anita Allen, [32] addressing how changes in privacy and data protection legislation could promote equity in the African-American experience online, outlines five goals that could serve as inspiration for Brazilian legislation:

  • no exacerbation of racial inequality;
  • recognition that privacy policies cannot be racially neutral in their impact (they will not protect, and may even harm, individuals unequally);
  • elimination of discriminatory hypervigilance based on race;
  • reduction of discriminatory exclusion based on race; and
  • reduction of race-based exploitation and fraud.

Consequently, the importance of explicitly anti-discriminatory and anti-racist provisions in AI legislation cannot be overstated: AI will become increasingly socially significant and can be an instrument either for amplifying or for combating racism. As a tool for combating racism, AI can be used as a mechanism for implementing affirmative action.

For example, AI can be used to reduce discrimination and racism by directly investigating and questioning the power structures of a racial state, rather than being directed at the behavior of vulnerable individuals. An example is the study by Barabas et al., [33] which constructs a model to predict the probability that a particular judge, in a given case, would fail to respect the U.S. Constitution and impose an unaffordable bail without due process. Therefore, as the AqualtuneLab collective has stated, it is essential that the future AI legislation designed by the Commission of Jurists affirm the obligation for these systems to be anti-racist and to oppose other practices of unlawful or abusive discrimination. In other words, PL 21/20, if passed as federal law, should have the principle of non-discrimination as a validity criterion for the promotion, development, and use of AI in Brazil, in addition to concretely providing for preventive actions and accountability tools, such as human rights impact assessments, which should necessarily follow this anti-discrimination approach. [34]

Although it is positive for the Brazilian regulator to draw inspiration from successful regulatory models in comparative law, especially from Europe, the country needs to reflect critically and discuss publicly before incorporating foreign solutions, many of which are based on a supposed universality embodied by a Eurocentric subject of law. Brazilian AI regulation should thus be thought out on its own terms, considering coloniality as an element that permeates the country's entire historical, social, and economic context.
In Kremer's words: [35] "A tech-regulation beneficial to humanity must, therefore, take into account – and primarily recognize, as a starting point – the processes of hierarchies of humanity that still develop within Brazilian society's fabric, which has very well-defined color, gender, race, sexuality, and other intertwining elements outside of the protection standard that defines the universal subject of law." (CORRÊA, Bianca Kremer Nogueira, 2021, p. 212). Thus, for the effective protection of rights and fundamental freedoms, especially those of historically marginalized social groups, in addition to prohibiting certain uses of AI and requiring the creation of accountability instruments, such as human rights impact assessments and their concrete procedures, it is essential that future legislation consider the Brazilian context (a Global South country permeated by racism and discrimination deeply rooted in its social fabric) and incorporate anti-racism and anti-discrimination both as values that underpin legal regulation and as goals to be achieved, [36] reflected throughout its text and regulatory implementation tools.

 

Article originally published in September 2022 on the Politics website.