2023, the year ChatGPT introduced generative AI to the world

Earlier this year, ChatGPT surged in popularity and showed the world what generative artificial intelligence (AI) is: a type of technology capable of generating text, images, and other content in response to natural-language requests.
Although it was launched at the end of November 2022, it was in 2023 that OpenAI's ChatGPT became a household name. In Brazil, Google searches for the system grew throughout January and have remained high ever since.
“People now know what artificial intelligence is, and sometimes they even consider AI to be ChatGPT,” comments Paula Guedes, a PhD candidate in Law at the Catholic University of Porto, researcher at Data Privacy Brasil, member of the Legalite Network at PUC/RJ, and focal point of the AI working group in the Coalition for Rights on the Web.
The popularity of ChatGPT pushed competitors to rush out their own generative AIs, such as Google's Bard, Samsung's Gauss, and Amazon Q. Despite these launches, ChatGPT, which was soon incorporated into Microsoft's Bing search engine, remains the most popular of all and has even become a synonym for the technology itself: it is not uncommon to hear users call Bard the "ChatGPT of Google," for example.
Dangers of generative AI
As soon as ChatGPT and its competitors became popular, so did their problems, such as the spread of misinformation, discriminatory biases, the so-called "hallucinations," and failures in content filters.
“Innovation, progress, and ease are very clear, but when we look at the challenges, it’s a tangle of things, like an onion with layers,” comments Cynthia Picolo, director of the Public Policy and Internet Laboratory (LAPIN) and focal point of the AI working group in the Coalition for Rights on the Web.
The first and most noticeable problem with generative AIs is that they do not always provide accurate information. Even without factual grounding, these systems generate well-argued, convincing responses, the so-called "hallucinations." This leads people to believe information even when it is false, fueling misinformation and informational disorder.
"The ease of responses that seem very good undermines people's critical thinking, as they are losing the ability to reason and research, because the answer comes so ready," analyzes Cynthia, who is concerned about this for the coming year, when municipal elections will take place and the effects of misinformation could become even more serious.
Beyond this problem, generative AI systems also commonly reproduce prejudices found in the data used to train them, generating responses that are sexist or racist, for example.
In this case, the problem goes beyond text responses and also affects image-generating systems. This year, when creating Pixar-style character drawings with Microsoft's Bing image generator, powered by OpenAI's DALL-E 3 technology, became a trend on social media, algorithmic biases became evident.
One example occurred in Rio de Janeiro in October when state representative Renata Souza (PSOL) asked the AI to generate an image of a black woman in a favela, and the system created a drawing of a woman holding a gun.
Image generators also pose additional problems, such as the possibility of creating fake “nudes,” as happened this year in schools in Rio de Janeiro and Recife.
Moreover, there are cases where generative AIs fail to recognize that a request or question is malicious and end up giving instructions to users seeking dangerous information, such as tutorials on making homemade bombs.
Race for profit and lack of regulation
These problems are exacerbated by hasty releases by tech companies. With the success of generative AIs, companies rushed to launch their own solutions, putting systems on the market that still have flaws.
“It was the moment to pull the projects out of the drawer and really put them on the market to focus on competitiveness. There’s this market move behind it, and companies are trying to maintain their space,” comments Cynthia.
Paula says these rushed releases are "very problematic": with no regulation in place for these systems, they are launched without adequate testing. In general, companies release generative AIs with a warning that they are in a testing phase and may have flaws. When Bard was released, for example, Google itself admitted that "hallucinations" could happen and cause disinformation.
“But acknowledging there are problems doesn’t absolve you from responsibility. Theoretically, you should have solved the flaws before releasing the system to the public,” points out Paula.
The lack of regulation Paula mentions leaves the way open for tech companies to launch problematic AIs on the Brazilian market without facing any consequences. Currently, some rules from other laws and regulations, such as the Internet Civil Framework (Marco Civil da Internet), the General Data Protection Law (LGPD), the Consumer Protection Code (CDC), and the Civil Code itself, can be applied to the AI market, but there is no specific law on the subject.
“AI is very complex. Since it impacts all sectors of society, we need a law that truly addresses artificial intelligence from start to finish,” argues Cynthia.
In Brazil, the most advanced debate on the topic is taking place in the Senate. Bill (PL) 2338/2023 was proposed by the Senate's president, Senator Rodrigo Pacheco (PSD-MG), after consolidating proposals already before Congress with the report that a committee of jurists submitted last year. The text is expected to be one of the legislative priorities of 2024.
Cynthia and Paula, who are directly involved in civil society movements pushing for legislation on the topic, say that the text is quite robust, but there is still progress to be made.
This is because, when the committee of jurists drafted its proposals, ChatGPT and other generative AIs had not yet become popular, so many issues specific to this branch of artificial intelligence are not covered by the bill.
Globally, one of the main examples of legislation on the topic is the EU's AI Act, which requires AI systems to be safe, transparent, traceable, non-discriminatory, environmentally friendly, and overseen by humans rather than by automation.
The law also defines different rules for systems posing different levels of risk. For generative AIs, it requires companies to disclose that content was generated by artificial intelligence, design systems that prevent the creation of illegal content, and respect copyright. These AIs must also undergo "exhaustive evaluations," and any serious incidents must be reported to the European Commission.
From the researchers’ perspective, there is still much to discuss about AI regulation in Brazil, but there is no longer room for debate about whether or not AI regulation is necessary. “There’s still a lot of argument that regulation will hinder innovation, which is a complete fallacy. You can innovate with responsible products that protect rights. It’s pointless to launch several products that will cause more harm than benefits,” says Paula.
Text originally published on 29.12.2023 on the IG website, written by Dimítria Coutinho.
DataPrivacyBr Research | Content licensed under CC BY-SA 4.0