How Teachers Can Use Artificial Intelligence Tools in the Classroom
The launch of generative AI (artificial intelligence) tools like ChatGPT has raised concerns and resistance among teachers, similar to those sparked by the popularization of calculators. Beyond the old fear that students might stop learning basic operations, the current apprehension concerns technologies capable of generating human-like text on virtually any subject.
Globally, this “new threat” has elicited divergent reactions. In the United States, for example, New York City’s public school system banned OpenAI’s chatbot. Meanwhile, Singapore’s government announced that its schools would teach students and teachers how to use the tool.
In Brazil, while artificial intelligence still awaits regulation in the Senate, the lack of adequate teacher training in the use of the technology has already led to conflicts. As reported by Aos Fatos, the adoption of ChatGPT to detect plagiarism in school assignments has triggered student complaints.
Experts interviewed by Aos Fatos affirm, however, that despite the challenges it poses to education, artificial intelligence cannot be ignored and must be integrated into classrooms. Its adoption may face obstacles such as a lack of teacher training, the absence of research focused on innovation in the sector, and adherence to outdated teaching methods.
While research matures, adopting technology in education can begin experimentally, suggests Marina Meira, project leader at the Data Privacy Brasil Research Association. In this context, Aos Fatos compiled reflections on using artificial intelligence in schools and practical suggestions from experts on how this can be initiated.
1. Rethink Teaching and Assessment Models
2. Develop Strategies for Transparent Use
3. Foster Creativity and Questioning Skills
4. Verify Information Produced by Tools
5. Highlight AI’s Limitations and Biases
6. Teach How to Identify AI-Generated Images
1. Rethink Teaching and Assessment Models
According to sociologist Glauco Arbix, a coordinator at USP’s Artificial Intelligence Center, ChatGPT is viewed skeptically by some educators because it forces them to question their evaluation processes. However, he sees this reflection as inevitable as technology becomes capable of performing tasks once reserved for humans.
For instance, the latest version of OpenAI’s chatbot reportedly outperformed 90% of human candidates on the U.S. bar exam. In Brazil, the technology has also shown itself capable of passing the first phase of the Ordem dos Advogados do Brasil (OAB) exam.
As a university professor, Arbix has decided to modify his student evaluation system due to technological advances.
“There’s no point in asking for book summaries. There are tools that do this far better than we can, which means we need to evaluate other types of capabilities.”
In this context, Arbix believes the teacher’s role shifts to that of a “supervisor” overseeing technology use. Still, he emphasizes that the tool’s role will always be complementary since schools aim to educate citizens, not merely individuals adept at answering tests mechanically.
Researcher Marina Meira also stresses the need to rethink teaching strategies, replacing repetition-based models with a more critical approach. She underscores the importance of adopting digital education guidelines that empower students to use technology while reflecting on its potential pitfalls. “Just as we don’t think children should cross the street alone, the fact that they know how to use a phone from an early age doesn’t mean they fully grasp the risks”.
Meira adds that while children and adolescents may not yet be autonomous on the subject, they have much to teach about technology. Hence, she highlights the importance of fostering environments of listening and exchange between teachers and students.
2. Develop Strategies for Transparent Use
Experts emphasize the importance of teachers establishing transparent policies with their students regarding AI use, particularly because current tools for detecting AI-generated text are unreliable.
“Students must be guided not only to use the tool effectively but also to document which parts of their work were created using ChatGPT”, said Arbix.
He applied this system in his latest course, encouraging ChatGPT use. According to his findings, 15–20% of students employed the technology in their final assignments.
“Those who documented their use of ChatGPT showed improved outcomes. The quality of their writing, presentation of ideas, and connections to other topics all improved”, he observed.
However, Arbix notes that this strategy cannot be universally applied across all education levels. Like calculators, AI must be introduced gradually, aligning with students’ maturity levels.
“Misused, ChatGPT might temporarily weaken students’ ability to write texts and summarize. But a student who has read and enjoyed a book should be encouraged to use ChatGPT to summarize it and compare interpretations”, Arbix adds.
3. Foster Creativity and Questioning Skills
AI tools can play a significant role in teaching students how to ask questions, fostering creativity and debate.
Marina Meira shared an example of a high school philosophy teacher using AI in class. The teacher asked each student to pick a philosopher and request ChatGPT to explain their ideas. “Students had to ask ChatGPT questions to extract the philosopher’s thoughts”, she said.
Such exercises encourage students to engage in inquiry rather than merely answer questions, promoting curiosity and critical thinking.
4. Verify Information Produced by Tools
Although AI successfully mimics human language with coherent, grammatically correct texts, the information provided isn’t always accurate. AI can fabricate data and generate false references, which highlights the importance of skepticism.
ChatGPT, for example, generates responses by calculating the most likely word sequences based on patterns learned from millions of internet texts. This approach can lead to significant errors, termed “hallucinations”, caused by poor-quality sources or gaps in its training data.
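To make that mechanism concrete, the sketch below is a deliberately simplified illustration in Python: an invented two-word-context probability table stands in for what a real model learns from data. It is not how ChatGPT is actually implemented, only a minimal analogy for choosing “the most likely next word” without any fact-checking step.

```python
import random

# A toy illustration of next-word prediction. The probability table is
# invented for demonstration; real models such as ChatGPT learn far
# richer distributions from enormous volumes of internet text.
next_word_probs = {
    ("capital", "of"): {"Brazil": 0.5, "France": 0.3, "Atlantis": 0.2},
}

def next_word(context):
    # Sample the next word in proportion to its learned probability.
    probs = next_word_probs[context]
    words = list(probs.keys())
    weights = list(probs.values())
    return random.choices(words, weights=weights)[0]

# Nothing here verifies facts: the system simply favors statistically
# likely continuations, which is why fluent but false "hallucinations"
# can occur (e.g., "Atlantis" is a possible output).
print(next_word(("capital", "of")))
```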
“Taking ChatGPT’s responses as truth can backfire. Human supervision is essential. Each consultation must be cross-checked with reliable and scientifically validated texts”, Arbix warns.
Teachers can use this limitation to encourage students to verify AI-generated information, fostering critical technology use.
5. Highlight AI’s Limitations and Biases
The data used to train AI systems reflects social patterns, including biases. Despite developers’ efforts, studies show these tools often replicate racism, sexism, ageism, and xenophobia.
Marina Meira points out that the technologies available today were not necessarily designed with Brazil’s local reality in mind. Therefore, projects involving the use of AI should “question the hegemonic model of technology and often of education, which is a U.S.-centered, very male, very white model”.
Exercises with image-generating AI programs can be a good way to reveal these social patterns and foster discussion of these topics with students. For example, the teacher could ask students to generate images of business meetings or of environments representing certain professions, and then have them reflect on what the scenes portray.
Another limitation of the technology that can be explored to spark classroom debate stems from the lack of data about local contexts in its training sets. For example, when Aos Fatos asked ChatGPT to name indigenous communities in the municipality of São Paulo, the tool responded that there were none, which is false. Building on exercises like this, students could be asked to seek out the information the technology could not provide, including through fieldwork.
6. Teach How to Identify AI-Generated Images
As previously reported by Aos Fatos, artificial intelligence tools, especially those specialized in image generation, have been used to produce misinformation. Even though such content is not technically perfect, it can mislead inattentive users. A notable example was the fake photos of Pope Francis generated with the Midjourney tool that went viral in March. Critical digital education can aim to equip students with the means to identify such manipulations.
“Generative AI models often struggle to accurately reproduce the correct number of fingers, for example. It could be interesting to propose identifying these signs, like a spot-the-difference game,” suggests Meira.
Besides errors in the number of fingers on hands, generative AI tools may also fail in details such as body proportions, ear shapes, and more, as Aos Fatos has previously explained.
Text written by Gisele Lobato and originally published on August 3, 2023, on the Aos Fatos website.
DataPrivacyBr Research | Content licensed under CC BY-SA 4.0