Similarly, Telefónica, a Spanish telecommunications firm, operationalizes its responsible AI approach through a methodology called "Responsible AI by Design". The operating model includes training and awareness activities on AI and ethics, available in three languages (Spanish, English, and Portuguese), accompanied by dedicated workshops and self-assessment questionnaires that each manager responsible for developing AI-based products and services is required to complete.
Despite these efforts, software developers often struggle to translate abstract AI principles into their daily work, with recent research suggesting that 79% of tech workers report a need for pragmatic resources to assist them in navigating ethical concerns.
The demand for resources that bridge the principles-to-practice gap in AI systems development has fostered a plethora of tools, methodologies, frameworks, and processes.
But while the creation of better tools remains an admirable goal, a paradoxical issue emerges: the sheer volume of available tools and methodologies itself poses a challenge for organizations. Striking a balance between quantity and quality, between accessibility and expertise, and between principle and practice remains difficult.
Auditing is increasingly used to assess whether AI development is carried out in a way consistent with an organization's affirmed principles. Likewise, certifications serve to verify compliance with specific requirements applicable to AI applications.
While both measures have been geared towards companies developing AI tools, those that use AI are also starting to adopt them. In 2022, for example, American Express, General Motors, Nike, and Walmart announced that they would adopt scoring criteria to help reduce bias in algorithmic tools used to make hiring and workforce decisions.
Are there other approaches?
Given that each of the three approaches discussed offers different strengths and weaknesses, we recommend a holistic approach. Robust AI education, for example, combined with strong governance, can lead to more effective and ethically aligned decision-making across all levels of the organization.
At the same time, we must remember that the field of AI ethics is constantly evolving. The Council of Europe, for example, is focused on strengthening business commitments to human rights. There is also the regulatory approach, with AI legislation under discussion in different regions across the world, most recently the EU's Artificial Intelligence Act.
The importance of AI ethics for organizations is increasing, spurred on by the emergence of generative AI and large language models like ChatGPT. There is a strong need for compliance as regulations tighten, but there is also increasing pressure from civil society for organizations to act responsibly and ethically.
Quick-fix approaches to AI ethics may bring short-term benefits, but sustainable benefits require a more comprehensive and coordinated approach.
We recommend combining strong internal and external governance with engaging educational programs across the organization. There is a need to integrate AI ethics into organizational processes so that ethical lapses and risks can be identified and rectified before they become embedded into processes or offerings.