When the Machine That Fired You Becomes Your Therapist: AI, Inequality, and the Coming Mental Health Crisis

AI is displacing workers, compressing decades of economic disruption into years, and creating an existential crisis of identity and meaning at civilizational scale. The question no one in the AI industry wants to answer: who is going to pay for the psychological fallout? And will people trust the machine that fired them to be their therapist?

May 11, 2026
18 min read
AI, Inequality & the Mental Health Crisis: What Therapists Must Know | Helm BMS

The same economic forces that have widened inequality through every major technological revolution in the last 200 years are about to do it again — only this time, the product being sold to the people left behind is their own healing.

The Conversation Henry Ford Should Have Had

There is a story — perhaps apocryphal, certainly instructive — about Henry Ford and a union organizer walking the floor of a newly automated factory. Ford, proud of his machines, turns to the organizer and asks: "How are you going to get those machines to pay union dues?"

The union man pauses, then fires back: "How are you going to get them to buy your cars?"

Whether or not this exchange actually happened, the economic logic embedded in it is one of the most important questions of our time — and one of the least asked in the breathless coverage of the artificial intelligence revolution.

We are told that AI will create more jobs than it destroys. We are told that productivity gains will lift all boats. We are told that the displaced will simply upskill, pivot, and find their place in the new economy. We have been told versions of this story before — during the Industrial Revolution, during the automation of the mid-20th century, during the offshoring wave of the 1990s, during the gig economy's rise in the 2010s.

And every single time, the benefits flowed upward, and the costs landed on the people least equipped to absorb them.

Economic analyst and former Citibank interest rate trader Gary Stevenson has articulated perhaps the most precise diagnosis of why this cycle is so dangerous: when wealth concentrates at the very top, it stops circulating. Wealthy individuals and corporations do not spend proportionally — they save, invest, and accumulate. The consumer economy, which depends on broad-based purchasing power to function, begins to starve from the bottom up.

The AI economy has not reckoned with this reality. The companies extracting the greatest productivity gains from AI are not absorbing the social cost of the displacement they create. They are externalizing that cost onto individuals, families, communities — and the mental health system.

This time, those costs will not just be economic. They will be psychological. And the question that nobody in the AI industry wants to answer is this: Who is going to pay for that?

What the Data Already Tells Us

Before we discuss what's coming, let's be clear about what's already here.

The Federal Reserve Bank of St. Louis has documented that occupations which embraced generative AI most intensively showed the largest unemployment gains between 2022 and 2025 — with a striking correlation coefficient of 0.57 between AI adoption rates and rising unemployment. Computer and mathematical occupations, the fields young people were told represented the safest career bets of their generation, have seen some of the steepest unemployment rises.

Goldman Sachs research reveals that unemployment among 20 to 30-year-olds in tech-exposed occupations has risen by nearly 3 percentage points since the start of 2025 alone — significantly higher than for their same-aged peers in less AI-exposed fields. Among employed recent graduates ages 22 to 27, 42.5% were working in jobs that typically don't require a college degree — the highest share since 2020.

The students themselves are reading the writing on the wall. A landmark Gallup and Lumina Foundation survey conducted in October 2025 found that 47% of all college students have given serious consideration to changing their majors because of AI. Thirteen percent of bachelor's degree students have already done so. Across the University of California system, computer science enrollment fell 6% in 2025, compounding a 3% decline in 2024 — a dramatic reversal after more than a decade of explosive growth.

The students are not panicking irrationally. They are responding rationally to a labor market that is telling them, with increasing clarity, that the credential they were promised would secure their future may not survive the decade. And it is not only students: a recent survey found that 52% of workers between the ages of 18 and 24 express worry about AI's impact on their future careers. Among companies actively using AI, 44% say their employees will "definitely" or "probably" be laid off due to the technology.

This is not a future projection. This is the present tense.

Two Hundred Years of the Same Story

To understand where we are going, we need to understand where we have been. The relationship between technological innovation and economic inequality is not new — and it is not neutral.

During the British Industrial Revolution, the top 10% of wealthholders in Britain, Sweden, and France came to own an average of 91%, 88%, and 84% of national wealth respectively, while the bottom half of the population owned barely 1 to 2%. Between 1820 and 1900, the ratio of income held by the richest 10% to that held by the poorest 50% rose from roughly 18 to 1 to 41 to 1.

Economist Daron Acemoglu has documented that during this era, increasingly efficient automation began replacing workers — worsening their conditions, stagnating wages, and increasing working hours by up to 20%. Weavers, among the first to be automated, saw their hourly wages fall by 30 to 40%.

Brookings Institution research shows that the share of wealth held by the top 1% in the United States rose from 23% to approximately 40% over the last several decades of technological acceleration. Today, the top 1% globally owns more than 50% of the world's wealth, while the bottom 50% holds less than 1% — and that gap is widening as technological disruption continues.

Every major wave of technological innovation has produced a version of the same outcome: enormous wealth creation at the top, structural displacement in the middle, and compounding vulnerability at the bottom. The tools change. The distribution of outcomes does not. The AI revolution is not a departure from this pattern. It is its logical continuation — at unprecedented scale and speed.


Work Is Not Just a Paycheck — It's an Identity

Here is what is missing from almost every economic analysis of AI displacement: the clinical reality of what losing work actually does to a human being.

Work, for most people, is not simply a means of generating income. It is the primary architecture of identity. It answers the questions that sit at the core of psychological functioning: Who am I? What am I worth? Where do I belong? What is my contribution to the world?

Viktor Frankl, writing from the ruins of the Holocaust, argued that the search for meaning is the primary human motivation. For the vast majority of people throughout history, work has been the central vehicle for that meaning. The utopian vision of AI — that people will work less, have more freedom, and fill their days with creativity and leisure — assumes that human beings are psychologically equipped to find meaning in the absence of productive contribution. The clinical evidence does not support this assumption.

A global study examining unemployment and mental disorders across 201 countries from 1970 to 2020 found a significant positive association between unemployment and anxiety, depression, and bipolar disorder. Job loss does not merely cause financial stress — it disrupts social role, self-efficacy, and psychological identity in ways that compound over time.

Qualitative research on the lived experience of unemployment identifies four recurring themes: disrupted identity and direction in life; a perceived failure to meet societal standards of value; the limitations of coping strategies without structural support; and only occasionally, an opportunity for growth — and only then for those with robust social and financial resources to absorb the transition.

Even voluntary work exit tells us something important. Research on retirement and sense of purpose consistently shows that retired adults have a measurably lower sense of purpose than their still-working counterparts — and that lower sense of purpose correlates directly with higher rates of depression and anxiety. If chosen exit from the workforce erodes meaning and mental health outcomes, what does forced displacement do?

A commentary archived in PubMed Central on the hidden benefits of work puts it plainly: job loss related to plant closings — where the worker bears no personal responsibility — still inflicts a "triple burden" of financial loss, identity loss, and social disconnection. The absence of personal fault does not protect against psychological harm.

What we already see in therapy offices tells a consistent story. Clients are not arriving with vague malaise or abstract philosophical uncertainty. They are arriving with concrete, named fear: Will I be replaced? Is my career already obsolete? What am I if I am not what I do?

This anxiety is presenting across the entire lifespan — from high school students pivoting away from technology fields toward face-to-face careers driven by fear of obsolescence, to workers deep in careers they spent decades building, sitting with persistent dread that the floor beneath them is shifting. This is not irrational anxiety. It is a rational response to real, documented threat. And it is only the beginning.

The Paradox No One in AI Wants to Acknowledge

Here is the paradox at the heart of this entire conversation: as AI displaces workers and drives down the purchasing power of the middle and lower classes, it simultaneously makes human therapy increasingly unaffordable for the people who need it most — and then positions AI therapy tools as the solution.

The wound and the bandage are the same product. The company that fired you wants to sell you the therapy that helps you cope with being fired.

Think about how this plays out in practice. A worker loses their job to automation. Their income drops. Their employer-sponsored health insurance disappears. The private pay therapy market, where quality long-term care is available, is financially out of reach. Insurance-based therapy, already constrained by inadequate reimbursement rates, offers short-term, symptom-focused treatment. And into that gap steps the AI therapy app — affordable, available 24/7, backed by venture capital and marketed with the language of democratizing mental health care.

The comparison to social media is instructive here. Most people are aware that social media is addictive, that it damages mental health, that the platforms are engineered to exploit psychological vulnerabilities for profit. They use it anyway — not because they are irrational, but because the alternatives are more expensive, more effortful, or less immediately accessible. AI therapy will follow the same adoption curve. People will use it not because they trust it or believe it is equivalent to human care, but because they cannot afford the alternative. And they will use it while harboring a cognitive dissonance — a mixture of resentment, dependence, and grief — that the tool itself is fundamentally incapable of addressing.

You cannot process betrayal trauma with the instrument of the betrayal.

We Have Seen This Before: The Morality of "Good Enough" Care

The United States has a long and documented history of relegating its most vulnerable populations to inferior medical care — not out of scarcity, but out of a moral and economic calculus that has consistently valued certain lives less than others. The pattern is the same: quality care flows to those with resources, and the rest receive whatever the market decides is sufficient.

The mental health system is already operating along these lines. The American Psychological Association's 2024 Practitioner Pulse Survey found that 82% of psychologists cited insufficient reimbursement rates as the reason they are not in-network with insurance — and the most experienced, specialized therapists are predominantly available only through private-pay arrangements. Medicare reimbursement rates for mental health therapy dropped approximately 14% in 2025 compared to 2024 — while the Medicare Economic Index projected a 3.5% increase in the cost of providing those same services.

The result is a system in which depth-oriented, long-term therapy — the kind that actually addresses root causes rather than managing surface symptoms — is functionally reserved for people with disposable income. Short-term, symptom-based treatment, which research consistently shows has higher rates of regression and return to care, is what is available to everyone else. The business model of managed care is not incidentally aligned with keeping people symptomatic. It is structurally aligned with it. A healed client is a lost revenue stream.

Now layer AI onto that existing inequality. The displacement of the lower and middle classes will dramatically increase the population that cannot access private pay care. AI therapy tools — owned by large corporations optimizing for lifetime customer value rather than clinical outcomes — will step into that space. This is not cynicism. This is the basic economic logic of any product that profits from recurring dependency.

What AI Cannot Do in a Therapy Room

AI systems are genuinely capable of delivering psychoeducation, tracking mood patterns, providing CBT-based cognitive restructuring prompts, and offering accessible crisis resources. These are not trivial capabilities. But therapy — real therapy — is not primarily an information delivery system. It is a relational experience.

The therapeutic alliance — the quality of the relationship between therapist and client — is consistently identified in the research literature as the single most powerful predictor of therapeutic outcomes, across modalities, populations, and presenting concerns.

An AI cannot attune. It cannot sit in silence with a grieving person and let them feel that their grief is witnessed by another human consciousness. It cannot notice the micro-expression that contradicts what the client is saying. It cannot hold the weight of a traumatic disclosure in its own nervous system and model, through its own regulated presence, that the disclosure can be survived. It cannot offer what the research on attachment calls a "corrective emotional experience" — because it has no emotional experience to offer.

Furthermore, in the specific context of AI-driven economic displacement, the therapeutic relationship itself carries a symbolic weight that AI cannot navigate. For someone whose livelihood was automated away, turning to an AI for psychological support is not a neutral act. It is a retraumatizing one. The tool that represents their displacement cannot simultaneously represent their healing. This is not a technical limitation that future AI development will resolve. It is a fundamental structural barrier rooted in what therapy, at its core, actually is.


A Question Worth Sitting With: Are We Feeding the Beast?

This section is not a prescription. It is an invitation to reflection — one that every mental health provider operating in the current insurance landscape should feel free to sit with honestly.

The mental health field is facing a quiet but accelerating crisis that predates AI and will be dramatically worsened by it. The HRSA's 2025 State of the Behavioral Health Workforce report projects significant shortages of mental health counselors, marriage and family therapists, addiction counselors, and psychologists — driven in part by burnout, reimbursement challenges, and providers leaving the workforce entirely. In 2024, approximately 62 million U.S. adults had a mental illness — and nearly half received no treatment at all.

Thrizer's 2025 Mental Health Insurance and Marketing Report found that 34% of therapists cited financial reasons for leaving insurance networks, 26% cited administrative burden, and 13% cited burnout directly. Average in-network reimbursement was only $112 per session — while clinicians' own assessed fair market rates ranged from $180 to $200 per session. That gap is not a rounding error. It represents the systematic undervaluation of clinical expertise by institutions that profit from the access those clinicians provide.

So here is the question worth asking, honestly and without easy resolution:

When mental health providers participate in insurance systems that reimburse them below sustainable rates, require higher caseloads to make ends meet, accelerate burnout, and structurally limit the depth and duration of care available to lower-income clients — are we enabling the very economic disparity we are trained to address?

Are we, perhaps unintentionally, subsidizing a system in which large managed care corporations profit from the gap between what quality care costs and what they are willing to pay for it? And as AI therapy tools become more sophisticated, if we continue accepting inadequate reimbursement without collective resistance, are we inadvertently making it easier for corporations to argue that AI chatbots are good enough for the clients we were already being paid too little to serve properly?

There are no clean answers here. But these tensions may be approaching a tipping point that demands a more organized response. Other helping professions — teachers, nurses, social workers — have reached moments where individual ethical compromise was no longer sufficient to absorb systemic dysfunction, and collective action became the only viable language. It is worth asking whether mental health providers are approaching a similar moment.

What does it say about our collective moral framework if we continue, in silence, to make the managed care model of mental health profitable — knowing that its profitability depends on keeping care both limited and recurring, and knowing that AI is about to make that model exponentially more powerful and exponentially less human?

The Strategic Opportunity — and the Conclusion

The demand for human-centered, relationally grounded therapy is not going to decrease in the age of AI. It is going to increase dramatically. For mental health professionals who understand what is coming, this is a call to strategic clarity.

Advocate first. Reimbursement rates must reflect the actual cost of quality care, and the mental health system must resist the reduction of therapy to symptom management, a framing too often used as a pretext to contain costs at the expense of outcomes.

Position depth therapy as a differentiator. Long-term, depth-oriented work produces durable change. Practices that can document and articulate this distinction will stand apart in a market about to be flooded with cheaper, shallower alternatives.

Look at the corporate and organizational space. The companies driving AI adoption are going to face a workforce mental health crisis their HR departments are not equipped to manage. Counselors with business-level fluency in organizational dynamics and economics are extraordinarily well positioned to consult in this space.

Speak the language of both worlds. The therapist who can articulate the psychological dimensions of economic displacement in the language of both clinical practice and economic theory is a rare voice. That voice will be sought out — by media, by organizations, by policymakers, and by clients who need their experience named in its full complexity.

The machine does not grieve. The machine does not wonder what it is for. The machine does not need to be told that its existence has value beyond its productivity. Human beings do. And when a society strips away the structures through which people have traditionally located that value, the consequences are not merely economic. They are existential.

The machines are not coming for the therapist's chair. But they are coming for everyone who might need to sit in it.
