Where the SabIA Sings: Governance and Regulation of Artificial Intelligence from Brazil

Analysing simulated environments and immersive technologies in the European AI Act and Brazil’s proposed AI regulation

By Bruno Bioni and Júlia Mendonça

Today, immersive technologies and simulated environments narrow the divide between online and offline, between real and virtual, “blurring” the line and creating what Luciano Floridi called the onlife. Like every wave of innovation, this one comes coated in techno-solutionism, as if adopting new technologies were inevitable, when adoption is in fact a conscious choice.

In hyper-realistic experiences, virtual sensations can feel real, especially when haptic devices are used. That realism brings countless benefits to different sectors of society: in medicine, students and practitioners can train before intervening on real patients; in education, these technologies can spark critical thinking and expand forms of learning.

At the same time, these same technologies carry risks. Depending on the regulatory approach, they may perpetuate problems that already exist in society and in other technologies, such as online addiction, cyberbullying, discrimination, breaches of privacy and the exploitation of cognitive vulnerabilities.

To review the relevant policy opportunities and challenges, the OECD’s CDEP Digital Ministerial held a session, “The future of simulated environments and immersive technologies”, on December 14, 2022. The discussion was guided by the OECD Digital Economy Paper “Harnessing the power of AI and emerging technologies”. Panellists came from different sectors and countries, and included myself, as part of CISAC’s delegation and a representative of the Data Privacy Brazil Research Association.

A triangle to re-engineer humanity

The group pointed out the primary issues that policy makers must address regarding data privacy and immersive technologies. For one, immersive technologies and simulated environments are necessarily associated with the deployment of AI. This triangle is capable of re-engineering our humanity, to borrow the title of Frischmann and Selinger’s book. Together, these three elements form a new architecture for modulating and manipulating our behaviour.

In hyper-realistic experiences where virtual sensations can feel real, AI systems can infer our emotional state and use it to offer highly personalised services. They also intensify problems previously identified in other technologies by exploiting cognitive vulnerabilities around privacy, a critical issue to consider.

With these immersive technologies, we see what Tim Wu, former Special Assistant to the President for Technology and Competition Policy in the Biden Administration, called “The Attention Merchants” operating at a whole new scale. And, as the OECD paper mentioned above highlights, if basic safeguards are not in place for AI, future technologies could inherit and amplify these risks.

Finding the right AI regulation

New risks trigger new regulations, particularly when self-regulation proves ineffective. That’s why it’s important to look for dynamic AI regulation not only in the Global North but also in the Global South. Overall, it is essential to adopt a risk- and rights-based approach to regulation. Such a normative scheme not only calibrates the regulatory burden according to the context in which AI is deployed, but also leaves room to classify what is acceptable, what is acceptable only with precautions, and what must be prohibited because our humanity should not be re-engineered.

That’s why both the proposed AI Act in Europe and Brazil’s proposed regulation classify AI uses with significant potential to manipulate people through “subliminal techniques” as posing unacceptable or excessive risk. In effect, they say that technologies exploiting cognitive vulnerabilities should be prohibited by default, or should at least be subject to substantial accountability mechanisms commensurate with their high level of risk.

This leads to the third point, which involves socioeconomic contexts. It is essential to combine quantitative and qualitative criteria for classifying AI risk with a powerful governance toolbox. Brazil’s proposed AI regulation includes several such tools. It states that algorithmic impact assessments should be made public by default, and that the assessments should involve representatives of the affected populations, especially vulnerable groups such as children, who are still in cognitive development. It also calls for public databases in which AI actors register high-risk AI systems and AI incidents; the OECD is already leading an excellent global initiative to develop both. Finally, there should be wider use of regulatory sandboxes.

A primary issue is recognising that regulation should be a collective learning process that reduces information asymmetry. All of the proposed accountability mechanisms, including algorithmic impact assessments and public databases of high-risk AI systems and AI incidents, enable public debate on how to design technologies that flourish without diminishing our humanity. Ultimately, and most importantly, it is essential to build a governance network in which state regulators are joined by accountable private sector stakeholders, academia and a vigilant civil society. Otherwise, the regulation could be hijacked, along with our agency, attention, humanity and self-determination.

Originally published on March 27, 2023, on the OECD website.