by Ted Ladd and Priyanka Shrivastava, Hult International Business School
Walk into any boardroom, strategy offsite, or investor summit and you’ll hear the same chorus: artificial intelligence will transform everything. Strategy, operations, accounting, hiring: AI is often portrayed simultaneously as the vehicle and the propellant for organisational transformation.
Yet beneath the hype sits a simpler, harder truth: when the volume of machine-generated “insight” explodes, the scarcest resource in a company shifts from data to disciplined doubt. The ability to question well – to identify root problems, probe assumptions, test claims, and use AI as an assistant, not a replacement – is fast becoming a professional superpower. We call this emerging capability Skeptical Intelligence – the disciplined art of questioning well.
Even though Skeptical Intelligence sounds like a rebrand of “critical thinking,” it is not. Since its roots in the Socratic questioning that Plato recorded and John Dewey’s popularization of the term in the 1930s, critical thinking has sprawled to include not just intellectual analysis but also personal self-reflection and emotional control. It has been used as a process, a single behaviour, a collection of multiple behaviours, a mindset, a disposition, and an instinct. The term is now so overextended as to be unhelpful.
Skeptical Intelligence is a fourth pillar of capability alongside IQ, Emotional Intelligence (EI), and Artificial Intelligence (AI). IQ gave us a way to measure “raw” intelligence. EI, and its measurement tool, the Emotion Quotient (EQ), helped us take emotions seriously as inputs to judgment and performance. AI purports to put all existing information into the hands of every person. But none of these three, on its own, equips leaders to find the optimal solution to a problem. Skeptical Intelligence fills that gap.
How IQ and EQ Got Us Here – and Where They Fall Short
For decades, IQ was the yardstick for measuring smartness. It predicts academic performance and certain forms of professional success. That correlation still matters in business settings. Intellectual limitations cannot always be overcome by training or experience. But people with high IQ perform well on well-specified problems with clean feedback loops and single solutions, not on the murky dilemmas and trade-offs that leaders face today.
EQ arrived as a necessary corrective. It reframed emotions as useful input rather than interference. Leaders with high EQ listen better, regulate their own impulses, attune to other people’s emotions, and build trusting teams. All of these are essential. Yet the managerial culture that evolved around EQ sometimes drifted toward comfort over challenge: pleasant meetings, thin critique, polite acceptance of flawed information for the sake of improved relationships. These traits may even be counterproductive in opaque, complex circumstances. AI is heralded as a cure, but it may be the cause.
The result is a capability gap. Highly intelligent, emotionally skilled executives still take at face value dashboards and forecasts that deserve cross-examination. EQ without skepticism can make smart people very agreeable about very shaky claims. Thus, while IQ and EQ remain valuable, neither equips leaders to interrogate the increasingly opaque decisions made by intelligent systems.
Why We Need a Fourth Pillar Now: AI
AI systems now write, recommend, price, forecast, rank, allocate, and deny at industrial scale. They are also fallible in systemic ways: biased training data, spurious correlations, brittle generalization, seductive visualizations, and confident responses that sound certain but are not accurate. None of this makes AI useless. Yet AI is consistently and systematically prone to errors that require human discernment.
As AI improves, it also gets better at responding with conviction. Concrete language, declarative sentences, and assertive but misused terms like “thinking” and “reasoning” lull even disciplined operators into premature trust. When time pressure and social validation (“the model says…”) are added, the conditions are ripe for cognitive complacency.
Recent, well-documented AI failures have the same ingredients: historical bias baked into training data, opaque features, and leaders who assumed that “data-driven” equals “well-founded.” The fix is not to automatically reject or exile AI from critical decisions. As Kahneman (2011) suggests, leaders must learn to distinguish speed from haste – moving quickly while introducing structured friction at critical junctures.
Those seeking novel ideas will be disappointed. Structurally, AI reports the average answer to any question based on its historical data. Everyone asking AI for an innovation in a particular domain will receive much the same answer – unless you use AI to question and verify your own ideas.
Skeptical Intelligence (SI) is the capacity to interrogate claims with disciplined curiosity and intellectual humility. It is not reflexive contrarianism. It is not cynicism dressed up as rigor. It is a practical operating system for decision-makers who want the upside of AI without sleepwalking into the downside. SI operationalizes the often loosely applied concept of critical thinking into a measurable, teachable framework.
A few commentators have suggested that, because AI is (or at least seems) better at data collection, analysis, and delivery than most professionals, future leaders should focus on developing their EQ skills because, as yet, AI cannot connect emotionally to people. While EQ and SI can and should coexist in each person and team, our research intentionally positions SI as entirely distinct from EQ.
A Working Definition
Skeptical Intelligence is a repeatable way of thinking that combines four habits:
- Curiosity with an edge – a deliberate provocation to ask “what would change my mind?” instead of “how do I prove I’m right?”
- Epistemic humility – an accurate sense of what you (and your models) know and do not know.
- Evidence discipline – an ability to trace a claim to its data, method, and assumptions.
- Counterfactual imagination – a habit of generating plausible alternatives and testing the original claim against them.
If IQ helps you solve the puzzle and EQ helps you read the room, Skeptical Intelligence helps you validate that you’re solving the right problem with the right solution based on the right data. It is not instinctive, even with training, because instincts are more prone to human bias. It is the intentional rejection of one’s instinct in order to engage a more disciplined approach.
Based on survey data from 401 participants across six industries, our statistical analysis revealed two core elements of SI: the ability to question new information and the ability to then verify this information within a specified context. We also validated the first iteration of a measurement scale, called the Skeptical Quotient (SQ), that people can use to assess their own abilities.
Our initial findings show that men are more likely to question and women are more likely to verify. When SI and EQ were compared through regression analysis, SI emerged as a stronger predictor of performance, while EQ’s effect became statistically insignificant.
In plain language, SI matters more than EQ for using AI effectively. This does not invalidate EQ. It shows that EQ is useful in collaborative teams, whereas SI is crucial for individual performance.
Asking questions with nuanced information in complex scenarios may result in rejecting your own initial ideas. In short, you were wrong. If your sense of your own worth is fragile, this conclusion can be so painful that you will not even risk asking the question. This aligns with prior findings linking psychological safety and intellectual humility to improved reasoning (Niemiec & Ryan, 2009; Leary et al., 2017).
The Skeptic’s Playbook
Critical-thinking scholars have mapped this terrain for decades. We don’t need to reinvent the canon; we need to refit it for an algorithmic world. Here’s a practical short-list leaders can use without a philosophy degree to generate better answers. And many of these steps use AI in a controlled way.
- Clarify the problem. AI engines optimize for attention because the most accessible metric they have is the user’s repeated use of the AI tool. This is not necessarily nefarious, but it can descend into self-reinforcing bias toward user affirmation.
- Surface and test assumptions. What must be true about the data, the context, and the scope for an AI response to hold? Where is AI consistently limited or biased? AI can point out some of these assumptions and can propose ways to test them, but AI cannot actually run experiments.
- Spot alternative causes. Are there existing theories that might arrive at a different conclusion from the same data? AI can help identify and explain these adjacent fields of inquiry, but only if you ask.
- Consider counter-arguments. Can you, even with the help of AI, disprove AI’s initial response?
- Make your own decision. Do not outsource intellectual rigor to AI. Instead, treat it as an assistant with potentially useful observations that can feed into your human filter for new information.
None of these steps requires you to code. None expects formal training in logic. All of them, however, require you to keep your hands on the wheel: they demand conceptual clarity, structured reasoning, and a willingness to challenge assumptions.
How to Insert Skeptical Intelligence into a Team
Skeptical Intelligence scales when it’s institutional, not heroic. Four moves help:
- Train beyond compliance. Ditch generic “AI ethics” lectures for scenario workshops that ask teams to pressure-test real models and real dashboards under shifting conditions. Force them to identify the potential sources of human and AI bias in their methods. Make teams defend both the recommendation and the route to it.
- Hire for humility. Reward candidates who can say “I don’t know yet” and then show how they’d find out. Curiosity without swagger is a stronger predictor of sound model governance than tool fluency alone.
- Reward constructive dissent. Bake “thoughtful disagreement” into rituals and performance reviews. Assign a rotating Red Team to every major decision whose explicit, celebrated role is to attempt to (respectfully) disprove a recommendation.
- Celebrate often. Not everyone should get a trophy, but small achievements – like asking a thoughtful question or admitting uncertainty – deserve small public praise from managers and peers. This is where EQ can foster SI.
Some will say, “We already ask hard questions.” Often, that means we ask familiar questions. Skeptical Intelligence formalizes the unfamiliar ones: the questions that probe the scaffolding of a claim rather than its convenience to our plan. It forces rigor before momentum hardens into commitment. Ultimately, embedding SI in organisational culture requires visible modelling by senior leaders – demonstrating that asking better questions is not dissent but diligence.
The Payoff: Innovation and Productivity
Our research found a strong positive relationship between the ability to employ Skeptical Intelligence and the ability to use AI both to generate more and better innovative ideas and to complete other tasks more productively.
History will remember early AI leadership in two groups. One waved models and decisions through because they “looked right,” and delivered a mix of spectacular wins and avoidable disasters. The other built muscle around interrogation – fast, repeatable, teachable – so that they could achieve more upside with fewer blind-side hits. This latter group also prepared more innovatively and productively for new generations of AI and other decision-making tools and types of information.
IQ still matters. EQ still matters. But the scarce executive asset in a world of plausible-sounding AI outputs is Skeptical Intelligence – the disciplined, curiosity-driven, humility-infused ability to demand better questions and resist easy answers. SI doesn’t make you slower. It makes you surer about when to go fast.

Ted Ladd is a Professor of Entrepreneurship & Innovation at the Hult International Business School and Instructor of Platform Entrepreneurship at Harvard University, where he focuses on techniques and mindsets for designing and testing AI-powered multi-sided platforms. His most recent book, Innovating with Impact, was published by The Economist.

Priyanka Shrivastava is Professor of Marketing and Analytics and Associate Dean, DBA, at Hult International Business School. She is a passionate teacher and mentor, known for her experiential, data-driven approach. Her research interests include AI and sustainability, customer experience, and integrating yoga and mindfulness into leadership and research practice.
