How Nobel laureates Philippe Aghion and Peter Howitt’s theory of creative destruction helps us understand (and humanize) the turbulence of AI-driven progress

SUMMARY
The 2025 Nobel Prize in Economic Sciences recognized Philippe Aghion and Peter Howitt for formalizing the process of “creative destruction,” which describes how innovation-driven economies grow through cycles of renewal. Their insights remind us that the same churn that drives technological progress can also unsettle people and institutions. This post, the second in a two-part series, explores how Aghion and Howitt's theories illuminate today's race to develop AI (read Part 1 of this series here).

The phrase “creative destruction” may sound like a paradox, but it captures a fundamental truth about economic progress.

Aghion and Howitt’s model, inspired by Joseph Schumpeter, describes how economies grow when new ideas and technologies replace old ones.

Growth, they showed, is not smooth; it’s cyclical and fueled by bursts of innovation that upend industries and reorganize labor markets.

The 2025 Nobel Committee recognized the mathematical precision the duo used in explaining the creative destruction process: innovation creates, but it also destroys. Old products, methods, and institutions lose relevance as new ones emerge. Though an engine of progress, it’s one that runs on disruption.

In our AI era, it’s safe to say we’re witnessing the most accelerated form of this cycle in history. Each new model, dataset, or breakthrough renders a previous generation of tools obsolete.

But unlike the mechanical revolutions of the past, AI’s churn affects not only industries but also cognition, trust, and identity.

When we apply a humane lens to Aghion and Howitt’s model, we concede the following: destruction is inevitable, but how societies reconstruct AFTER disruption determines whether innovation becomes sustainable or destabilizing.

What about incentives for ethical adaptation?

Aghion and Howitt’s theory emphasizes incentives: firms innovate to capture temporary advantages before being replaced. Yet this “race for the next idea” can create social casualties when innovation outpaces ethics.

In AI, for example, the drive for speed and market share can eclipse responsibility.

Automation displaces workers faster than systems can reskill them; algorithms amplify bias faster than policies can correct it. The challenge isn’t to slow progress but to reframe it.

One way to reframe it, in our view, is a shift in perspective that transforms creative destruction into ethical adaptation: innovation guided by an awareness of its human impact.

Ethical adaptation asks:

  • How do we design transitions that value people, not just efficiency?
  • How do we define success beyond profitability?
  • How do we develop and deploy in support of collective well-being and informed participation?

Though Aghion and Howitt’s work assumes firms act to maximize innovation incentives, we expand this premise beyond enterprise by calling on societies to design incentives for humane innovation.

Economic churn inspires ideas of psychological churn

Aghion and Howitt’s model describes economic churn, which in turn inspires our own ideas about the psychological churn that economic disruption creates.

For example, when innovation destabilizes industries or identities, people experience more than financial loss; they experience cognitive and emotional dislocation. In AI’s context, that dislocation can be profound. Workers question their value when automation encroaches. Students doubt their creative worth when machines write, draw, and compose. Even professionals in AI itself experience burnout amid relentless acceleration.

We opt to frame this as an issue of psychological resilience: not mere endurance, but cognitive readiness.

Humane innovation thus requires the capacity to adapt, learn, and stay grounded in one’s purpose and values amid technological upheaval.

In terms of upheaval, Aghion and Howitt’s growth model assumes the destruction phase is temporary; progress ultimately stabilizes. Yet it’s important to remember that for humans, adaptation is ongoing. Therefore, the aim isn’t to remove change, but to cultivate mind-health systems (across education, communication, and workplaces) that help people evolve and iterate along with the technology, not against it.

Institutional renewal: building systems that learn as fast as they regulate

In Aghion and Howitt’s model, institutions play a critical role: they can either accelerate innovation by enabling competition or suppress it by protecting incumbents.

Their work highlights the political challenge of growth: powerful actors may resist creative destruction to preserve their dominance.

This lesson applies directly to AI governance today as well. As data monopolies and corporate control expand, institutional inertia threatens both fairness and innovation. Rules built for industrial economies strain under the demands of algorithmic systems.

These ideas inspire the need for institutional renewal — the redesign of governance systems that evolve alongside technology.

Institutions must become learning systems themselves: adaptive, interdisciplinary, and transparent. Renewal doesn’t mean deregulation; it means responsive regulation. AI oversight must be as dynamic as the technologies it governs, balancing freedom to innovate with accountability to the public good. This is the institutional version of creative destruction: rebuilding governance that can grow, not just control.

Closing reflections

The parallels between Aghion and Howitt’s theory and today’s AI revolution are striking.

Their model of creative destruction captures the same tension we now face: innovation as both builder and disruptor.

In this sense, their ideas offer not just economic insight but a mirror for our technological moment, revealing both the lessons to heed and the opportunities to harness.

As AI continues to reshape markets, institutions, and minds, Aghion and Howitt’s work reminds us that progress endures only when we manage its human consequences as thoughtfully as we design its machines.

Mayra Ruiz-McPherson, PhD(c), MA, MFA
Executive Director & Founder
The CyberPsych Institute (CPI)

Empowering Minds for the AI Age