A Fragmented Landscape Is No Excuse for Global Companies Serious About Responsible AI
For the third year in a row, MIT Sloan Management Review and Boston Consulting Group (BCG) have brought together an international panel of AI experts to examine the lack of alignment around global standards and norms for responsible AI. Bruno Bioni, co-director of Data Privacy Brasil, was one of the experts interviewed. Read the excerpt below.
For the third year in a row, MIT Sloan Management Review and Boston Consulting Group (BCG) have assembled an international panel of AI experts that includes academics and practitioners to help us gain insights into how responsible artificial intelligence (RAI) is being implemented across organizations worldwide. This year, we examine organizational capacity to address AI-related risks. In our previous article, we asked our experts about the need for AI-related disclosures.

This month, we offered the following provocation: There is sufficient international alignment on emerging codes of conduct and standards for global companies to effectively implement RAI requirements across the organization. A majority of our experts disagree with that statement (with 72% disagreeing or strongly disagreeing), citing a fragmented global landscape of requirements. Nevertheless, many of our panelists say global companies have a responsibility to implement RAI across the organization.

Below, we share insights from our panelists and draw on our own RAI experience to offer global companies recommendations for navigating the complexity to achieve their RAI goals.
Aligning on RAI Principles and Codes of Conduct Is an Urgent Global Priority
Our panelists acknowledge that there is a growing global consensus around the urgency of RAI, as well as considerable alignment on core principles across international organizations, technical standards bodies, and other multilateral forums. H&M Group’s Linda Leopold observes that “from large international initiatives like the G7’s Hiroshima Process on Generative AI and the Global Partnership on Artificial Intelligence to AI risk management frameworks like NIST’s, as well as AI governance standards from ISO and IEEE, and existing and emerging regulations … there are recurring overarching themes, such as fairness, accountability, transparency, privacy, safety, and robustness.” Data Privacy Brasil’s Bruno Bioni contends that “these multilateral policy forums play a critical role in setting agendas, implementing principles, and facilitating information sharing.” Carl Zeiss AG’s Simone Oldekop believes that as a result, “there has been progress in the international alignment of codes of conduct and standards for global companies in the area of AI governance.”
Putting Global Principles Into Practice Remains a Work in Progress
Translating conceptual RAI frameworks into technical standards and enforceable regulations is another matter. Harvard Business School’s Katia Walsh explains that while “global companies generally agree on the principles around using AI in safe, ethical, and trustworthy ways … the reality of implementing specifics in practice is very different.” Walsh notes that addressing the ethical dilemmas that emerge from AI use is “by definition, not straightforward.” Automation Anywhere’s Yan Chow agrees that common concepts like “algorithmic bias, data privacy, transparency, and accountability are multifaceted and context dependent.”
Some experts attribute this fragmentation to the lack of a common taxonomy and definitions. For example, IAG’s Ben Dias notes that “the U.S. and EU define AI as encompassing all machine-based systems that can make decisions, recommendations, or predictions. But the U.K. defines AI by reference to the two key characteristics of adaptivity and autonomy.” And National University of Singapore’s Simon Chesterman asserts that despite the scores of standards that have been developed by agencies, industry associations, and standards bodies like the International Telecommunication Union, the International Organization for Standardization, the International Electrotechnical Commission, and the Institute of Electrical and Electronics Engineers, “there is no common language across the bodies, and many terms routinely used with respect to AI — fairness, safety and transparency — lack agreed-upon definitions.”
Check out the full article, written by Elizabeth M. Renieris, David Kiron, and Steven Mills, on the MIT Sloan Management Review website.
See also
-
The Artificial Intelligence Legislation in Brazil: Technical Analysis of the Text to Be Voted on in the Federal Senate Plenary
The Internal Temporary Artificial Intelligence Committee (CTIA) of the Federal Senate approved the substitute report for Bill 2338/2023. This bill aims to define the legal framework for regulating the use of Artificial Intelligence systems in Brazil.
-
AI in the 2024 Brazilian elections
Aláfia Lab, *desinformante, and Data Privacy Brasil launch the report “AI in the 2024 Brazilian elections,” an analysis of the use of artificial intelligence in the first round of the elections.
-
Research Project Outcomes: A vision for inclusive educational technology
Check out Júlia Mendonça's interview for the Tech Ethics Lab about the use of technology in schools.
-
Key Themes in AI Regulation: The local, regional, and global in the pursuit of regulatory interoperability
Data Privacy Brasil releases its report “Key Themes in AI Regulation: The local, regional, and global in the pursuit of regulatory interoperability,” supported by the Heinrich Böll Foundation. This work is the result of months of research under the project “Where the SabIA Sings: Governance and Regulation of Artificial Intelligence from Brazil”.
-
Data Privacy Brasil participates in UN’s OHCHR briefing on Brazil
The organization highlighted how the advance of edtech has been violating children’s privacy in the country.
DataPrivacyBr Research | Content licensed under CC BY-SA 4.0