When Confidence Outpaces AI-Competence
I first came across the Dunning–Kruger Effect while listening to John Cleese joke about “stupid people not knowing they’re stupid.” It was funny, until I realized how true it is, especially in business.
The Dunning–Kruger Effect describes a psychological bias where people with little experience in a domain tend to overestimate their ability, precisely because they don’t yet know what they don’t know. When we start learning something new, our confidence rises quickly, often faster than our actual competence. We’ve all met someone (sometimes even ourselves) who feels confident long before competence has caught up. True experts, however, often carry a humility that I personally enjoy a lot.
In AI, that same pattern is everywhere.
Across many conversations, I hear the same sentence: “We’re already doing AI.” Often that confidence comes from experimenting with tools like ChatGPT or Copilot. But using these tools for fun or personal productivity is a completely different ballgame from building enterprise AI capability. Underneath the enthusiasm, the real work (integrating data, aligning teams, embedding governance, and creating a learning culture) has barely started.
There’s another pattern I’ve noticed: leaders believe they’ve moved further than they have, while teams on the ground see a different reality. A recent study by Multiverse captured that perfectly: 61% of leaders think AI is fully implemented in their organization, yet only 36% of employees agree. Even more telling, 60% of leaders say they’re ahead of competitors, while just 46% of workers share that view. (Multiverse, 2024)
When confidence runs ahead of competence, fundamentals get skipped: data quality, change management, and alignment fall behind. Expectations inflate, results disappoint, and the narrative quickly shifts to “AI doesn’t deliver.” Real transformation begins when we pause, look honestly at where we stand, and build from there.
Building Real AI Capability
The encouraging part is that overconfidence isn’t a fixed trait. It can be adjusted through awareness, honest feedback, and learning. I’ve seen teams that were once sure they “knew AI” evolve rapidly once they started treating AI as a craft to be learned, not a box to tick.
For companies, this means replacing the illusion of readiness with a culture of continuous learning.
Some ways to do that:
- Acknowledge the learning curve. Accept that maturity takes time and practice.
- Create psychological safety. Make it normal to admit what’s unclear or unknown.
- Invest in education and experimentation. Most organizations offer AI training for executives, and that’s a good start. But let’s be honest: learning about AI tools, Gen AI, or even agentic AI is not the same as doing AI. The real capability comes from understanding the plumbing underneath: data integration, data quality, and AI operations. That’s where strategy meets engineering. Training should therefore span all levels: from leadership awareness to hands-on enablement of the people who actually connect, clean, and run the systems that make AI real.
- Invite outside perspective. External partners can reveal blind spots you can’t see internally.
- Celebrate progress, not perfection. Every iteration builds competence and confidence the right way.
Humility in this context isn’t modesty; it’s precision. The best leaders I meet balance ambition with awareness.
Impact – The Power of Humble AI Leadership
I’ve noticed a common pattern even among advanced companies: many believe they’ve already “done” responsible AI when they’ve only taken early steps. BCG found the same: over half of organizations that claimed to have fully implemented responsible-AI programs had actually overestimated their progress. (BCG, 2021) EY saw a similar gap: many executives assume they’re aligned with public expectations on AI ethics, but users see it differently. (EY, 2025)
These findings echo what experience already tells us: self-awareness builds trust. Leaders who are transparent about what’s working and what isn’t earn credibility, internally and externally. They move faster because they learn faster.
Organizations that combine ambition with humility build AI as a long-term capability, not a quick headline. They anchor success in data quality, culture, and governance, and they keep improving.
Because in the end, real AI confidence doesn’t come from the tools you’ve mastered; it comes from knowing how much there is still to learn.
References
- Multiverse (2024) The AI Maturity Gap: Leaders Overestimate AI Readiness and Workers Lack Training. Available at: https://www.prnewswire.com/news-releases/the-ai-maturity-gap-leaders-overestimate-ai-readiness-and-workers-lack-training-302258842.html (Accessed 20 Oct 2025).
- Boston Consulting Group (2021) The Four Stages of Responsible AI Maturity. Available at: https://www.bcg.com/publications/2021/the-four-stages-of-responsible-ai-maturity.
- Ernst & Young (2025) How responsible AI can unlock your competitive edge. Available at: https://www.ey.com/en_gl/insights/ai/how-responsible-ai-can-unlock-your-competitive-edge.