For the third year in a row, MIT Sloan Management Review and Boston Consulting Group (BCG) have assembled an international panel of AI experts, including academics and practitioners, to help us gain insights into how responsible artificial intelligence (RAI) is being implemented across organizations worldwide. This year, we examine organizational capacity to address AI-related risks. In our previous article, we asked our experts about the need for AI-related disclosures.

This month, we offered the following provocation: "There is sufficient international alignment on emerging codes of conduct and standards for global companies to effectively implement RAI requirements across the organization." A majority of our experts disagree with that statement (72% disagreed or strongly disagreed), citing a fragmented global landscape of requirements. Nevertheless, many of our panelists say that global companies have a responsibility to implement RAI across the organization. Below, we share insights from our panelists and draw on our own RAI experience to offer global companies recommendations for navigating this complexity and achieving their RAI goals.

Aligning on RAI Principles and Codes of Conduct Is an Urgent Global Priority

Our panelists acknowledge that there is a growing global consensus around the urgency of RAI, as well as considerable alignment on core principles across international organizations, technical standards bodies, and other multilateral forums. H&M Group’s Linda Leopold observes that “from large international initiatives like the G7’s Hiroshima Process on Generative AI and the Global Partnership on Artificial Intelligence to AI risk management frameworks like NIST’s, as well as AI governance standards from ISO and IEEE, and existing and emerging regulations … there are recurring overarching themes, such as fairness, accountability, transparency, privacy, safety, and robustness.” Data Privacy Brasil’s Bruno Bioni contends that “these multilateral policy forums play a critical role in setting agendas, implementing principles, and facilitating information sharing.” Carl Zeiss AG’s Simone Oldekop believes that as a result, “there has been progress in the international alignment of codes of conduct and standards for global companies in the area of AI governance.”

Putting Global Principles Into Practice Remains a Work in Progress

Translating conceptual RAI frameworks into technical standards and enforceable regulations is another matter. Harvard Business School’s Katia Walsh explains that while “global companies generally agree on the principles around using AI in safe, ethical, and trustworthy ways … the reality of implementing specifics in practice is very different.” Walsh notes that addressing the ethical dilemmas that emerge from AI use is “by definition, not straightforward.” Automation Anywhere’s Yan Chow agrees that common concepts like “algorithmic bias, data privacy, transparency, and accountability are multifaceted and context dependent.”

Some experts attribute this fragmentation to the lack of a common taxonomy and shared definitions. For example, IAG’s Ben Dias notes that “the U.S. and EU define AI as encompassing all machine-based systems that can make decisions, recommendations, or predictions. But the U.K. defines AI by reference to the two key characteristics of adaptivity and autonomy.” And National University of Singapore’s Simon Chesterman asserts that despite the scores of standards that have been developed by agencies, industry associations, and standards bodies like the International Telecommunication Union, the International Organization for Standardization, the International Electrotechnical Commission, and the Institute of Electrical and Electronics Engineers, “there is no common language across the bodies, and many terms routinely used with respect to AI — fairness, safety and transparency — lack agreed-upon definitions.”

Check out the full article, written by Elizabeth M. Renieris, David Kiron, and Steven Mills, on the MIT Sloan Management Review website.
