The System 3 Age — Part 3 of 5
Kenneth Pernyér

System 3 Should Grow System 2

AI is not here to replace thinking. It is here to raise the ceiling of human cognition.

cognition · AI · system-thinking · augmentation · leadership

The most important thing AI is doing to our work is not speed. It is what it does to us.

When I work deeply with specification-driven development and AI assistance, I notice something surprising: I don't feel replaced. I feel enriched. My thinking becomes more structured. I connect more ideas. I challenge my own assumptions faster. I stretch my imagination of what is possible to build, and I raise my ambition from "next release" to "next decade."

That experience points to a better framing.

The choice is not "humans vs AI." The choice is between two operating modes:

  • Autopilot mode: we give up and let AI think and act for us
  • Augmentation mode: we use AI, plus other humans, to become more capable thinkers

This article argues for augmentation mode.


Evolving Cognitive Capability

Daniel Kahneman gave us System 1 (fast intuition) and System 2 (slow, deliberate reasoning). That dual-process model shaped decades of thinking about human judgment.

Now, recent research from Gideon Nave and Steve Shaw at the Wharton School of the University of Pennsylvania extends Kahneman's framework with a third cognitive system. In their paper Thinking—Fast, Slow, and Artificial, they introduce System 3: external cognition—AI operating alongside the brain.

  • System 1 — fast intuition
  • System 2 — slow, deliberate reasoning
  • System 3 — external cognition (AI)

Their research warns of "cognitive surrender"—the tendency to accept AI outputs with minimal scrutiny, overriding both intuition and deliberation. Across three preregistered experiments with over 1,300 participants, people chose to consult AI on the majority of tasks. Those with higher AI trust and lower need for cognition surrendered the most.

That is the autopilot failure mode. But the same framework reveals an opportunity.

If we treat AI as a generator, the best case is productivity. If we treat AI as a cognitive scaffold, the best case is growth.

Growth looks like:

  • clearer reasoning under complexity
  • better questions, not just faster answers
  • fewer blind spots
  • higher-quality tradeoffs
  • stronger imagination of what to build
  • better collaboration because shared context becomes visible

AI can make us better, as humans, at the kind of thinking modern systems require.


System 3 Is the Scaffold, System 2 Is the Point

Nave and Shaw's framework clarifies the relationship:

  • System 2 is judgment: intent, meaning, ethics, tradeoffs, priorities
  • System 3 is scaffolding: structure, state, process, verification, persistence

System 3 exists to support System 2, the way instruments support musicians, flight systems support pilots, and training supports athletes.

None of those remove the human. They raise the standard of what the human can reliably do.


Why "Just Let the AI Do It" Is a Trap

The seductive failure mode of modern AI is fluency. It can sound right even when it is wrong. It can build quickly even when the target is unclear. It can generate confidence without earning trust.

Autopilot mode fails in predictable ways:

  • shallow agreement replaces real alignment
  • output increases while understanding decreases
  • teams become dependent on results they cannot explain
  • verification is postponed until it is painful
  • ambition collapses into what is easy to generate

This is not a technical problem. It is the cognitive surrender that Nave and Shaw measured in the lab, playing out at organizational scale.


The AI as a Thinking Partner That Forces Clarity

The most valuable AI behavior is not "answer the question." It is "help me sharpen the question until the answer is meaningful."

In practice, that looks like:

  • turning vague ideas into explicit assumptions
  • generating counterexamples that expose weak logic
  • exploring alternatives without ego
  • forcing tradeoffs into the open
  • converting intent into testable specifications
  • building small proofs that kill uncertainty early

This is how System 3 grows System 2—or more precisely, how it enables what I call System 2b: structured human–AI co-deliberation where the human stays in the driver's seat, but the road is better lit.
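In my own work, I often wrap a raw question before handing it to the model. Here is a minimal sketch of what such a clarity-forcing scaffold could look like in Python. The template wording, the CLARITY_TEMPLATE name, and the sharpen helper are all illustrative assumptions of mine, not the API of any particular tool:

# A clarity-forcing scaffold (illustrative, not from any real tool):
# the assistant must surface assumptions, counterexamples, and
# tradeoffs before it is allowed to answer.
CLARITY_TEMPLATE = """Before answering, do the following:
1. Restate my goal in one sentence.
2. List every assumption my question smuggles in.
3. Give one counterexample that would break my framing.
4. Name the tradeoff I am avoiding.
Only then answer: {question}"""

def sharpen(question: str) -> str:
    """Wrap a raw question in the clarity-forcing template."""
    return CLARITY_TEMPLATE.format(question=question)

print(sharpen("Should we rewrite the billing service in Rust?"))

The design choice is small but deliberate: the assistant is asked for assumptions, counterexamples, and tradeoffs first, so the human's System 2 gets something to push against before any answer arrives.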


The Practical Loop

If we want augmentation mode, we need a repeatable loop that keeps System 2 engaged:

  1. Intent: what are we trying to accomplish and why
  2. Constraints: what is non-negotiable
  3. Truth: what must be true for this to be a good decision
  4. Spec: make it concrete enough to build and test
  5. Proof: build the smallest thing that reduces uncertainty
  6. Reflection: what did we learn, and how do we encode that learning for next time

That last step is the difference between productivity and growth.
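To keep the loop honest, it helps to make the six steps explicit rather than implicit. Here is one minimal sketch of how the loop could be encoded as a small checklist type in Python; DecisionLoop and all of its field names are hypothetical illustrations of mine, not part of any existing tool:

from dataclasses import dataclass

@dataclass
class DecisionLoop:
    """One pass through the augmentation loop. All names are illustrative."""
    intent: str                  # 1. what we are trying to accomplish, and why
    constraints: list[str]       # 2. what is non-negotiable
    truth_conditions: list[str]  # 3. what must be true for this to be a good decision
    spec: str = ""               # 4. concrete enough to build and test
    proof: str = ""              # 5. the smallest artifact that reduces uncertainty
    reflection: str = ""         # 6. what we learned, encoded for next time

    def open_steps(self) -> list[str]:
        """Name the System 2 checkpoints still left open."""
        return [name for name, value in
                [("spec", self.spec),
                 ("proof", self.proof),
                 ("reflection", self.reflection)]
                if not value]

loop = DecisionLoop(
    intent="Cut onboarding time for new customers",
    constraints=["no changes to the public API"],
    truth_conditions=["drop-off really happens at the identity step"],
)
print(loop.open_steps())  # ['spec', 'proof', 'reflection'] until the loop is closed

The one design choice that matters here is that reflection is a first-class field: the loop does not report itself closed until the learning is captured.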


What We Should Aim For

The long-term competitive advantage will not be "who uses AI." Everyone will.

The advantage will go to whoever builds the best thinking process around AI, compounds learning through durable context, increases ambition safely, and moves fast without losing understanding.

That is what "System 3 grows System 2" means in practice. The goal is System 2b as the default operating mode—not faster output, but better convergence to truth.

This is what Converge is built for. Not to generate more, but to converge to truths—reducing complexity, identifying real value, and refusing to dilute cognitive investment with noise.


A Personal Note

When I do this well, I feel it physically: more neurons firing, more connections formed, more imagination unlocked. I don't feel replaced. I feel upgraded.

That is the path worth taking.


Stockholm, Sweden

February 28, 2026

Kenneth Pernyér