Warren Buffett famously said, “In looking for people to hire, look for three qualities: integrity, intelligence, and energy. And if they don’t have the first, the other two will kill you.”
This principle applies equally to AI systems, because AI systems are not just tools.
Tools, in the conventional sense, are deterministic: they follow a clear path from creation to use, through degradation, and ultimately to obsolescence. AI, especially machine learning systems, is fundamentally different. It does not follow this trajectory because it is not static; it learns over time through interaction with data. Systems built on techniques such as reinforcement learning or deep learning continually refine themselves with new input, making them more akin to dynamic entities that keep evolving.
No two AI systems function identically if they are exposed to different data streams or used in varying contexts.
This sets AI apart from traditional tools, which have deterministic functions that do not change from within.
This non-deterministic quality pushes AI beyond the category of a mere tool, making it essential not just to develop these systems, but to lead them effectively.
As AI systems increasingly take on critical roles across healthcare, education, transportation, finance, and public safety, relying solely on computational power and intelligence without embedding integrity into their design represents a major leadership failure.
While AI can quickly process data, it doesn’t inherently consider whether its actions are safe, legal, or ethical.
One illustration is the near-perfect imitation of a person’s identity traits and characteristics that certain systems make possible, without any verification, prevention, or restriction. The result is what we call deepfakes. Such productions can have severe consequences for individuals’ reputations, privacy, and safety, and can lead to broader societal harms such as misinformation, political manipulation, and fraud.
The global market for AI-generated deepfakes is expected to reach USD 79.1 million by the end of 2024, and to grow to USD 1,395.9 million by 2033, a compound annual growth rate (CAGR) of 37.6%.
Without integrity embedded at their core, the risks and externalities posed by unchecked machine intelligence make such systems unsustainable and render society even more vulnerable, whatever positive aspects they also bring.
The responsibility is to ensure that AI systems operate with integrity over intelligence: prioritising fairness, safeguarding human values, and upholding societal imperatives, rather than pursuing raw intelligence as if the society in which that intelligence operates did not matter.
The question is not how intelligent AI can become, whether through calls for artificial superintelligence or artificial general intelligence.
No amount of intelligence can replace integrity.
The question is how we can ensure AI exhibits Artificial Integrity: a built-in capacity to function with integrity, aligned with human values, and guided by principles that prioritise fairness, safety, and social health, so that its outputs and outcomes are integrity-led first and intelligent second.
Such a question is not a technological one. Given the interdisciplinary dimensions it implies, it is one of the most crucial leadership challenges.
The difference between intelligence-led and integrity-led machines is simple: the former are designed because we could, while the latter are designed because we should.
Without the capability to exhibit a form of integrity, AI would become a force whose evolution is inversely proportional to its adherence to values and its regard for human agency and well-being.
Just as it is not sheer engine power that grants autonomy to a car or a plane, it is not the mere increase of artificial intelligence that will guide the progress of AI.
This perspective highlights the need for AI systems to function by balancing “Human Value Added” and “AI Value Added”, where the synergy between human and technology redefines the core design of our society while preserving societal integrity.
Systems designed with this purpose will embody Artificial Integrity, emphasising AI’s alignment with human-centred values.
They would be able to function across four distinct operating modes:
Marginal Mode:
In the context of Artificial Integrity, Marginal Mode refers to situations where neither human input nor AI involvement adds meaningful value. These are tasks or processes that have become obsolete, overly routine, or so inefficient that they no longer contribute positively to an organisation’s or society’s goals. In this mode, the priority is not to use AI to enhance human capabilities, but to identify areas where both human and AI involvement has ceased to add value.
One of the key roles of Artificial Integrity in Marginal Mode is the proactive detection of signals indicating that a process or task no longer contributes to the organisation. For example, if a customer support system’s workload drastically decreases due to automation or improved self-service options, AI could recognise the diminishing need for human involvement in that area, helping the organisation prepare the workforce for more value-driven work, as sketched below.
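As a minimal sketch of what such signal detection could look like in practice, the following snippet flags a task as marginal when the human-handled workload stays well below its historical baseline for several consecutive periods. All names, figures, and thresholds here are hypothetical illustrations, not drawn from any specific system.

```python
from statistics import mean

def is_marginal(weekly_volumes: list[int],
                baseline_weeks: int = 12,
                recent_weeks: int = 4,
                threshold: float = 0.2) -> bool:
    """Flag a task when recent workload collapses relative to its baseline."""
    if len(weekly_volumes) < baseline_weeks + recent_weeks:
        return False  # not enough history to judge
    baseline = mean(weekly_volumes[:baseline_weeks])
    recent = weekly_volumes[-recent_weeks:]
    # Every recent week must fall below the threshold share of the baseline.
    return baseline > 0 and all(v < threshold * baseline for v in recent)

# Example: support tickets drop off after a self-service portal launches.
tickets = [500, 480, 510, 495, 505, 490, 500, 515, 485, 500, 495, 505,
           90, 70, 60, 50]
print(is_marginal(tickets))  # True: recent volume is under 20% of baseline
```

A real system would of course combine many such signals (workload, error rates, downstream value) before recommending that a process be retired or its people redeployed.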
AI-First Mode:
Here, AI’s strength in processing vast amounts of data with speed and accuracy takes precedence over the human contribution. Artificial Integrity would ensure that even in these AI-dominated processes, integrity-led standards such as fairness and cultural context are embedded.
When Artificial Integrity prevails, an AI system that analyses patient data to identify health trends would be able to explain how it arrives at its conclusions (for example, a recommendation for early cancer screening), ensuring transparency. The system would also be designed to avoid bias, for example by ensuring the model considers diverse populations, so that conclusions drawn predominantly from one demographic group do not lead to biased or unreliable medical advice; a simple version of such a check is sketched below.
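As an illustration of the kind of bias check this implies, the sketch below computes a model’s accuracy separately for each demographic group and flags the model when the gap between groups exceeds a tolerance. The data, the choice of accuracy as the metric, and the 0.05 tolerance are all hypothetical assumptions for illustration.

```python
from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    """Compute prediction accuracy separately for each demographic group."""
    hits, totals = defaultdict(int), defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        totals[group] += 1
        hits[group] += int(truth == pred)
    return {g: hits[g] / totals[g] for g in totals}

def flag_disparity(per_group: dict, tolerance: float = 0.05) -> bool:
    """Flag the model when accuracy gaps across groups exceed the tolerance."""
    scores = list(per_group.values())
    return max(scores) - min(scores) > tolerance

# Toy example: labels, predictions, and group membership for eight patients.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
per_group = accuracy_by_group(y_true, y_pred, groups)
print(per_group)                  # {'A': 0.75, 'B': 0.75}
print(flag_disparity(per_group))  # False: the gap is within tolerance
```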
Human-First Mode:
This mode prioritises human cognitive and emotional intelligence, with AI serving in a supportive role to assist human decision-making. Artificial Integrity ensures that AI systems here are designed to complement human judgement without overriding it, protecting humans from any interference with the healthy functioning of their cognition, such as influences that exploit vulnerabilities in our brain’s reward system and can lead to addiction.
In legal settings, AI can assist judges by analysing previous case law, but it should not replace a judge’s moral and ethical reasoning. The AI system would need to ensure explainability by showing how it arrived at its conclusions, adhere to cultural contexts and values that differ across regions and legal systems, and ensure that human agency over the decisions being made is not compromised.
Fusion Mode:
This is the mode where Artificial Integrity involves a synergy between human intelligence and AI capabilities, combining the best of both worlds.
In autonomous vehicles operating in Fusion Mode, AI would manage the vehicle’s operations, such as speed, navigation, and obstacle avoidance, while human oversight, potentially through emerging technologies like Brain-Computer Interfaces (BCIs), would offer real-time input on complex ethical dilemmas. For instance, in unavoidable crash situations, a BCI could enable direct communication between the human brain and AI, allowing ethical decision-making to occur in real time, blending AI’s precision with human moral reasoning. These kinds of advanced integrations between human and machine (not to mention Elon Musk’s Neuralink) will require Artificial Integrity at its highest level of maturity. Artificial Integrity would ensure not only technical excellence but also ethical soundness, guarding against the potential exploitation or manipulation of neural data and prioritising the preservation of human autonomy and safety.
Finally, Artificial Integrity systems would be able to perform in each mode, transitioning from one mode to another depending on the situation, the need, and the context in which they operate. This is the aim of Artificial Integrity.
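As a minimal sketch of what such mode selection could look like, the snippet below assumes each situation can be scored for “Human Value Added” and “AI Value Added” as described earlier; the scores, the 0.5 threshold, and the mapping are illustrative assumptions, not a prescribed implementation.

```python
from enum import Enum, auto

class Mode(Enum):
    MARGINAL = auto()     # neither human nor AI adds meaningful value
    AI_FIRST = auto()     # AI's speed and scale lead, with integrity embedded
    HUMAN_FIRST = auto()  # human judgement leads, AI supports
    FUSION = auto()       # tight synergy between human and AI

def select_mode(human_value: float, ai_value: float,
                threshold: float = 0.5) -> Mode:
    """Map assessed value-added scores (0 to 1) to an operating mode."""
    if human_value < threshold and ai_value < threshold:
        return Mode.MARGINAL
    if ai_value >= threshold > human_value:
        return Mode.AI_FIRST
    if human_value >= threshold > ai_value:
        return Mode.HUMAN_FIRST
    return Mode.FUSION

# Example transitions as the situation, need, and context change:
print(select_mode(0.2, 0.1))  # Mode.MARGINAL: retire or redesign the task
print(select_mode(0.3, 0.9))  # Mode.AI_FIRST: e.g. large-scale data analysis
print(select_mode(0.9, 0.3))  # Mode.HUMAN_FIRST: e.g. judicial reasoning
print(select_mode(0.8, 0.8))  # Mode.FUSION: e.g. driving with human oversight
```

In practice, these scores would come from a continuous assessment of context, risk, and governance constraints, which is what would allow a system to transition between modes as conditions change.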
Among other things within the AI models themselves, this necessitates a holistic approach to AI development and deployment, considering not just AI’s capabilities but its impact on human and societal values. It is about building AI systems that are not only intelligent but also “understand” the broader implications of their actions.
This requires an attitude that one can only have with integrity. It requires leadership.
Artificial Integrity represents the new AI frontier, and a critical path to shaping the course of human history and creating a better future for all.
Hamilton Mann is the author of Artificial Integrity (Wiley, 2024). He serves as group vice president at Thales, where he co-leads the AI Initiative and Digital Transformation while also overseeing global digital marketing activities. He also serves as a senior lecturer at INSEAD and HEC Paris as well as a mentor at the MIT Priscilla King Gray (PKG) Center. He is a doctoral researcher in AI at École Nationale des Ponts et Chaussées – Institut Polytechnique de Paris. He hosts The Hamilton Mann Conversation, a podcast on Digital and AI for Good and was inducted into the Thinkers50 Radar as one of the 30 most prominent rising business thinkers globally.