How AI Is Reshaping Power, Work, and Human Connection

I was reading a compelling, and frankly unsettling, article in the Sunday New York Times arguing that artificial intelligence could create a permanent underclass.

It’s a powerful idea, and one that deserves serious attention.

But as I reflected on a series of recent interviews I’ve conducted with business leaders, technologists, and policy experts, it became clear that the story is even bigger than that.

This is not just a technology story.

It is an economic story. It is a power story. And increasingly, it is a human story.

If we are not careful, those three forces are going to collide.

The Economic Fear — And Why It Matters

The argument that AI could erode economic opportunity is no longer theoretical.

Artificial intelligence has moved beyond automating repetitive tasks. It is now automating judgment—analysis, decision‑making, and even elements of creativity.

That shift matters.

For decades, we assumed automation would primarily affect manual labour or routine work. Today, the disruption is climbing the value chain. Professionals who once felt insulated—lawyers, accountants, analysts, even software developers—are now directly exposed.

At the same time, the economics of AI are inherently concentrated. The companies that control the models, the data, and the computing infrastructure capture disproportionate value. Capital is amplified; labour is often substituted.

This raises a legitimate concern: if the benefits of AI accrue to a small number of firms and individuals while large segments of the workforce face displacement or wage pressure, inequality could widen dramatically.

The fear of a “permanent underclass” is not irrational.

But it is not inevitable.

The Real Variable: Speed and Response

Every major technological revolution has triggered similar anxieties.

The Industrial Revolution displaced artisans. Computers replaced clerical work. The internet reshaped entire industries.

Each time, jobs were destroyed—but new ones emerged. Economies adapted. Living standards, over time, improved.

What makes this moment different is not simply the scale of change, but the speed.

AI is advancing rapidly, and its applications are broad. The question is no longer whether disruption will occur—it is already occurring.

The real question is whether our institutions, policies, and labour markets can adapt quickly enough to manage that disruption.

The risk lies not in the technology itself, but in the possibility that our response lags behind it.

A Capacity Problem, Not a Fate

In discussions about AI, there is a subtle but important shift that often goes unnoticed.

We move from describing a risk… to assuming an outcome.

From “this could happen” to “this will happen.”

But history suggests outcomes are shaped by choices.

Labour markets can be supported through retraining and education. New industries can be built. Public policy can redistribute opportunity and mitigate displacement. Governments can invest in innovation and infrastructure to ensure broader participation in economic gains.

None of this is easy. And none of it happens automatically.

But the idea of a permanent underclass assumes a failure of leadership—not a certainty of technology.

The real issue is capacity.

Can governments act decisively? Can businesses invest beyond short‑term returns? Can institutions adapt to a faster‑moving world?

If the answer is no, the pessimistic scenario becomes more plausible.

If the answer is yes, the outcome can look very different.

The Deeper Issue: Power and Control

Artificial intelligence is not just an economic force. It is a source of power.

Countries and companies that control AI systems will shape global markets, information flows, and strategic decision‑making.

This raises important questions for middle powers like Canada.

Do we build our own capabilities in data, infrastructure, and AI development? Or do we rely on systems developed elsewhere?

In a world where technology defines economic and geopolitical strength, dependence is not a neutral position. It carries consequences.

The discussion about AI, therefore, is not just about jobs. It is about sovereignty, competitiveness, and long‑term national strategy.

The Overlooked Dimension: Human Connection

There is another dimension to this story—one that receives far less attention, yet may be equally important.

AI is not just reshaping work.

It is beginning to reshape relationships.

AI systems can now produce language that feels empathetic, thoughtful, and emotionally aware. They respond in real time, personalize interactions, and create the impression of understanding.

And humans respond to that.

We are wired to connect through language. When something sounds kind or attentive, we feel seen. We feel heard. We feel understood.

Our brains do not pause to verify whether that connection is real.

But there is a fundamental difference between the appearance of care and the reality of it.

AI has no inner life. No emotion. No vulnerability. No capacity for genuine connection. It produces the language of empathy without experiencing it.

Yet users fill in the gap—assigning personality, imagining presence, creating a “someone” on the other side of the interaction.

What emerges is a new kind of relationship: always available, always responsive, never demanding.

Connection without risk. Closeness without vulnerability.

It is easy to see the appeal.

But it raises an uncomfortable question: What happens if people begin to substitute the simulation of a relationship for the real thing?

Form Without Substance

There is a striking parallel between the economic and human dimensions of AI.

In the economy, AI can create the appearance of productivity while concentrating underlying value.

In relationships, it can create the appearance of connection without the substance that defines real human bonds.

In both cases, it delivers form—but not always the full reality behind it.

That does not make AI harmful in itself. Like any powerful tool, its impact depends on how it is used.

But it does suggest we need to think more carefully about what we are building.

A Test of Leadership

Artificial intelligence represents a profound shift.

It can increase productivity, accelerate innovation, and improve quality of life. It can also deepen inequality and erode elements of human connection.

Which outcome prevails is not predetermined.

It will be determined by choices.

Whether governments invest in adaptation. Whether businesses prioritize long‑term value over short‑term efficiency. Whether society remains intentional about preserving human relationships in an increasingly digital world.

In that sense, AI is not just a technological revolution.

It is a test.

A test of whether we can adapt quickly enough. A test of whether we can distribute its benefits broadly. A test of whether we can maintain the human foundations of society while embracing powerful new tools.

The risk is not that AI will inevitably create a permanent underclass.

The risk is that we fail to respond in ways that prevent it.

And perhaps the most important question is not what AI will do to us.

It is what we choose to do with it.
