
Time to shift from artificial intelligence to artificial integrity 

Published 4 December 2024 in Artificial Intelligence

Artificial intelligence is developing at lightning speed. But as its “brainpower” increases, so do the risks. What can we do to shape the future of ethical AI so that it learns to prioritize our safety, our social and economic health, and fairness? How do we create artificial integrity?

For a start, how these technologies work is opaque. AI systems can’t always explain how they arrive at certain findings, decisions, or content outcomes. Then there’s the integrity piece. As it processes data and generates content with unfathomable speed, AI does not pause to question whether its actions are safe, for instance, or whether they reflect human cultural norms or values; whether its outputs are fair, representative, inclusive, or even legal. AI never asks the question: is this ethical? 

This extraordinary technology has been developed primarily for intelligence, not integrity. That is a problem: without integrity, the risks compound unchecked and cannot safely be overlooked.

AI systems are not simply tools: deterministic apparatus or machines that are built and deployed, eventually degrading and becoming obsolete. AI systems, and machine learning systems in particular, follow a completely different and more dynamic trajectory. As they interact with data, they learn from that data over time, reinforcing their learning to continuously refine themselves.

AI evolves, and fast. If we fail to embed any ability to stop and check, to question integrity or ethical outcomes – to give it a moral code – it’s poised to become a force whose evolution is inversely proportional to its regard for human agency, values, well-being, or safety. As the renowned US businessman and philanthropist Warren Buffett so succinctly put it: “In looking for people to hire, look for three qualities: integrity, intelligence, and energy. If they don’t have the first, the other two will kill you.”

So, what can we do to shape the future of ethical AI so that it learns to prioritize our safety, our social and economic health, and fairness? How do we create artificial integrity?

From artificial intelligence to artificial integrity

A good place to start is by understanding how AI works.

AI learns from data that is labeled or annotated so that machine learning systems can identify and contextualize information. I believe this is where artificial integrity should begin.

Annotating data for integrity, human values, and human principles could help guide fairer and more respectful decision-making processes, responses, and outcomes. From here, human supervision of machine learning techniques and training methods could be used to ensure that models not only learn to perform a task but also pinpoint and select integrity-led outcomes for that task.

Integrating value or integrity “models” into training data is also key. Teaching an AI system to “understand” that deepfake content doesn’t match or align with specific contexts or settings – political content, say, or pornography – would help safeguard the integrity of critical social and societal processes.

Meanwhile, human supervision is needed to review and adjust AI-generated content that touches the more nuanced aspects of human values – inclusivity, racial equity, ageism, and a host of things that are difficult to capture and account for with data and annotations alone.
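To make the idea concrete, here is a minimal sketch of what integrity-annotated training data might look like. The schema, field names, and filtering rule are illustrative assumptions rather than an established standard:

```python
from dataclasses import dataclass, field

# Hypothetical annotation schema: alongside the task label, each training
# example carries integrity metadata that a supervised pipeline could use
# to filter examples or weight the loss. All field names are illustrative.
@dataclass
class IntegrityAnnotatedExample:
    text: str
    task_label: str                    # what the model should learn to predict
    fairness_reviewed: bool = False    # passed a human fairness review
    cultural_context: str = "general"  # setting in which the example is valid
    integrity_flags: list = field(default_factory=list)  # e.g. ["deepfake"]

def keep_for_training(ex: IntegrityAnnotatedExample) -> bool:
    """Admit only examples that passed review and carry no integrity flags."""
    return ex.fairness_reviewed and not ex.integrity_flags

examples = [
    IntegrityAnnotatedExample("Loan approved on documented income history.",
                              "approve", fairness_reviewed=True),
    IntegrityAnnotatedExample("Loan denied based on the applicant's zip code.",
                              "deny", fairness_reviewed=True,
                              integrity_flags=["proxy-discrimination"]),
]

train_set = [ex for ex in examples if keep_for_training(ex)]
print(len(train_set))  # 1 -- the flagged example is excluded from training
```

In practice, such flags would come from the human annotators described above, and outright exclusion could be replaced with loss weighting.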

While this might sound simple, it is challenging to enact.

For one thing, human values and principles are subjective. Individuals have their own unique perspectives on what constitutes cultural norms or ethical behavior. We are all vulnerable to unconscious biases or preconceptions that can also find their way all too easily into AI training processes. Scalability, too, is a challenge. Annotating large datasets with detailed value or integrity codes requires significant resources, both in terms of time and human expertise, and may not always be feasible in practice.

To overcome these and other issues, AI systems should be designed with mechanisms that allow for continuous learning and adaptation; they should be able to evolve in tandem with shifting ethical standards and societal values so they can recalibrate decisions as cultural contexts or ethical norms change over time.
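As a toy illustration of that kind of recalibration, the sketch below re-derives a content-moderation threshold from a rolling window of recent human judgments, so the cutoff drifts with present-day norms rather than staying frozen at training time. The window size, scoring scheme, and midpoint rule are assumptions made purely for illustration:

```python
from collections import deque
from statistics import mean

# Rolling window of recent human review decisions: (model_risk_score, rejected)
recent_reviews = deque(maxlen=500)

def record_review(score: float, rejected: bool) -> None:
    recent_reviews.append((score, rejected))

def recalibrated_threshold(default: float = 0.5) -> float:
    """Place the cutoff halfway between the average score of content that
    reviewers currently accept and content they currently reject, so the
    boundary tracks present-day norms."""
    accepted = [s for s, rej in recent_reviews if not rej]
    rejected = [s for s, rej in recent_reviews if rej]
    if not accepted or not rejected:
        return default  # not enough evidence yet; fall back to the default
    return (mean(accepted) + mean(rejected)) / 2
```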

Perhaps more importantly, artificial integrity cannot be the sole purview or responsibility of AI developers working in isolation. Developing AI with artificial integrity will hinge on interdisciplinary collaboration: ethicists, sociologists, public policy makers, domain experts, diverse user groups and more must be involved from the outset to ensure a comprehensive approach that reflects a range of perspectives.

Above all, artificial integrity will call for thoughtful, accountable, and effective leadership to prioritize diversity of input and responsibility and to coordinate and sustain efforts. In a world where AI systems are increasingly taking on critical roles across healthcare, education, transportation, finance, and public safety, it is incumbent on leaders in every sector and organization to make this a priority. 

Artificial integrity in practice: Four distinct operating modes

When AI is predicated on integrity ahead of intelligence, machines and human beings could be expected to collaborate in new, more ethical ways. There are four key operating modes that characterize this new paradigm:

1 – Marginal Mode:

In Marginal Mode, AI is not used to enhance human capabilities but to identify areas where both human and AI involvement has become unnecessary or obsolete. A key role of artificial integrity here is to proactively detect signs that a process or task no longer contributes anything meaningful to the organization. If, for instance, activity in customer support drastically decreases due to automation or improved self-service options, AI should be able to flag the diminishing need for human involvement, helping the organization prepare its workforce for more value-driven work.
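A minimal sketch of how such a flag might work, assuming weekly ticket counts are the only signal and using an arbitrary 50% drop threshold; both assumptions are mine, not the article’s:

```python
from statistics import mean

def flag_diminishing_involvement(weekly_tickets: list,
                                 drop_ratio: float = 0.5) -> bool:
    """Marginal Mode sketch: flag a support queue whose recent volume has
    fallen below `drop_ratio` of its earlier baseline, signalling that
    human involvement there may be becoming obsolete."""
    if len(weekly_tickets) < 8:
        return False  # not enough history to judge a trend
    baseline = mean(weekly_tickets[:4])   # first four weeks on record
    recent = mean(weekly_tickets[-4:])    # most recent four weeks
    return baseline > 0 and recent < drop_ratio * baseline

# Automation cuts ticket volume from ~400/week to ~150/week: flag it.
print(flag_diminishing_involvement([410, 395, 420, 400, 310, 220, 150, 120]))  # True
```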

2 – AI-First Mode:

In situations where AI is used to process vast amounts of data accurately and at speed – where AI takes the lead – artificial integrity would mean that integrity-led standards such as cultural contexts, fairness, and inclusion remain firmly embedded in processes. For instance, where AI is analyzing patient data to identify health trends, data annotation and checking would be used to ensure the system can explain how it arrives at certain results and conclusions. Transparency would be one outcome. Another would be bias avoidance. Here, training models could be leveraged to take diverse populations into account to avoid generating unreliable, skewed, or discriminatory medical outputs or advice.
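One way to picture the bias-avoidance piece is a deployment gate that compares model accuracy across demographic groups; the 0.05 gap and the data layout below are illustrative assumptions, not a clinical standard:

```python
# Compare a diagnostic model's accuracy across demographic groups and refuse
# deployment if any group is served markedly worse than the best-served one.
def per_group_accuracy(records):
    """records: iterable of (group, prediction, actual) triples."""
    totals, correct = {}, {}
    for group, pred, actual in records:
        totals[group] = totals.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + (pred == actual)
    return {g: correct[g] / totals[g] for g in totals}

def passes_fairness_gate(records, max_gap: float = 0.05) -> bool:
    """Gate deployment on the accuracy gap between best- and worst-served groups."""
    acc = per_group_accuracy(records)
    if not acc:
        return False  # no evidence, no deployment
    return max(acc.values()) - min(acc.values()) <= max_gap
```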

3 – Human-First Mode:

There are contexts where human cognitive and emotional intelligence takes precedence over AI, which serves a supporting role in decision-making without overriding human judgment. Here, AI “protects” human cognitive processes from things like bias, heuristic thinking, or decision-making that activates the brain’s reward system and leads to incoherent or skewed results. In the human-first mode, artificial integrity can assist judicial processes by analyzing previous law cases and outcomes, for instance, without substituting a judge’s moral and ethical reasoning. For this to work well, the AI system would also have to show how it arrives at different conclusions and recommendations, considering any cultural context or values that apply differently across different regions or legal systems.
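A minimal sketch of decision support that leaves the decision with the human: the system surfaces precedents with explicit rationales and deliberately returns no verdict. The types and fields are hypothetical, and no real legal database or API is implied:

```python
from dataclasses import dataclass

@dataclass
class PrecedentBrief:
    case_id: str        # identifier of a prior case
    similarity: float   # how closely the facts match, 0..1
    rationale: str      # why the system surfaced this precedent

def advise(similar_cases: list) -> dict:
    """Human-First Mode sketch: return reading material plus reasoning,
    never a ruling; the decision field is deliberately left to the judge."""
    top = sorted(similar_cases, key=lambda c: c.similarity, reverse=True)[:3]
    return {
        "suggested_reading": [(c.case_id, c.rationale) for c in top],
        "decision": None,  # reserved for human moral and ethical reasoning
    }
```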

4 – Fusion Mode:

Artificial integrity in this mode is a synergy between human intelligence and AI capabilities, combining the best of both worlds. Autonomous vehicles operating in Fusion Mode would have AI managing the vehicle’s operations, such as speed, navigation, and obstacle avoidance, while human oversight, potentially through emerging technologies like Brain-Computer Interfaces (BCIs), would offer real-time input on complex ethical dilemmas. For instance, in unavoidable crash situations, a BCI could enable direct communication between the human brain and the AI, allowing ethical decision-making to occur in real time and blending AI’s precision with human moral reasoning. These kinds of advanced integrations between humans and machines will require artificial integrity at the highest level of maturity: ensuring not only technical excellence but ethical robustness, guarding against any exploitation or manipulation of neural data while prioritizing human safety and autonomy.

Finally, artificial integrity systems should be able to perform in each mode and to transition from one mode to another depending on the situation, the need, and the context in which they operate.
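A toy dispatcher can make the idea of mode selection and transition concrete; the inputs and cutoffs below are illustrative assumptions only:

```python
from enum import Enum, auto

class Mode(Enum):
    MARGINAL = auto()     # flag work that may no longer need anyone
    AI_FIRST = auto()     # high-volume processing with embedded checks
    HUMAN_FIRST = auto()  # AI supports but never overrides human judgment
    FUSION = auto()       # tight human-machine coupling in real time

def select_mode(task_volume: int, ethical_stakes: float,
                needs_realtime_control: bool) -> Mode:
    """Route a task to an operating mode based on context. The 0.7 stakes
    threshold and the 10,000-task cutoff are arbitrary placeholders."""
    if needs_realtime_control and ethical_stakes >= 0.7:
        return Mode.FUSION       # e.g. autonomous driving
    if ethical_stakes >= 0.7:
        return Mode.HUMAN_FIRST  # e.g. judicial decision support
    if task_volume > 10_000:
        return Mode.AI_FIRST     # e.g. population-scale health data
    return Mode.MARGINAL         # low-stakes, low-volume: candidate to retire
```

A production system would derive these signals from far richer context, and would log every transition for human audit.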

Questions for the leaders of today

AI is only set to continue its evolution, driving innovation and wholly reshaping the way we communicate, function, work, and live. There are doubtless opportunities ahead of us to harness its extraordinary power to advance our communities, economies, and societies to the benefit of all human beings. But there are also profoundly serious risks and challenges ahead that leaders will need to anticipate and navigate. Making a paradigm shift from artificial intelligence for the sake of intelligence to artificial integrity that safeguards human welfare and well-being means addressing these challenges. It means considering a range of critical issues and questions.

I would ask leaders in both the private and public sectors, across industries, to make the time and space to reflect on the following:

  • How might we protect people from becoming overly reliant on AI for critical thinking and decision-making, preserving human judgment, expertise, and agency in areas like business, education, law, and healthcare, where intuition, empathy, and ethical reasoning are essential?
  • AI will create but also replace jobs, with the potential to widen the skills gap, disproportionately affecting workers who don’t have the resources to reskill. How do we anticipate and address disparities in opportunity and wealth in our workforces and communities?
  • How do we build the right culture and processes to guard against unethical human practices in AI development, data labeling, hardware production, and training to protect human rights and dignity?
  • How can we ensure that AI training processes do not lead to unintended privacy violations, particularly when AI systems begin to interact with sensitive data at scale?
  • Finally, how do we regulate the development and use of AI technologies responsibly and sustainably, to mitigate environmental impact?

I am cautiously optimistic that open access to AI tools such as ChatGPT will continue to democratize the technology, making it available to individuals, small businesses, and innovators who can leverage AI for good.

It is my hope that opening access to so many people, and giving them the chance to understand, interact with, and apply AI technology, will accelerate ethical awareness, and that together we will push for better governance – even as we race towards our unimaginable AI-powered future.

Authors

Hamilton Mann

Group Vice President at Thales

Hamilton Mann is a tech executive, pioneer in digital and AI for good, keynote speaker, and the originator of the concept of Artificial Integrity. He serves as Group Vice President at Thales, where he co-leads the AI initiative and Digital Transformation while also overseeing global Digital Marketing activities. He writes regularly for Forbes as an AI Columnist, and hosts The Hamilton Mann Conversation, a podcast on Digital and AI for Good, ranked in the Top 10 for technology thought leadership by Technology Magazine. Hamilton was inducted into the Thinkers50 Radar as one of the 30 most prominent rising business thinkers. He is the author of Artificial Integrity: The Paths to Leading AI Toward a Human-Centered Future (Wiley, 2024). 
