On AI, Grief, and Restabilizing Society
I’ve been noticing something in the way people talk about AI.
- "I used to like my job, now all I do is review what AI spits out."
- "...AI has sucked the creative energy out of my soul."
- "Nobody is going to pay for this now that they can just have AI do it."
There's a lot to be said about the fear and anger surrounding the AI conversation. But today what's landing for me is the grief.
And I think grief matters more than we’re admitting.
Because half of us don't even realize we're grieving, and when people mention AI, we don't actually end up in dialogue. We end up talking past each other—or talking at each other—while assuming the problem is about opinions, ethics, policy, or education. There ARE problems there to tackle. But it's very difficult to get to real dialogue without first acknowledging grief.
Grief Doesn’t Always Look Like Sadness
When people say they’re afraid of AI, or angry about it, or dismissive of it, or overly euphoric about it, I do think a lot of them are responding to a sense that something familiar is slipping.
For some, it’s grief over:
- The loss of a craft they worked years to master
- Work that is no longer valued in the same way, and the financial strain that follows
- The destabilization of an identity tied to skill, standing among peers, or valued expertise
For others, it’s grief over:
- A world that already feels too fast and less humane
- The erosion of clear boundaries between human and machine
- The feeling that they didn’t choose this acceleration
- A future they had pictured and felt confident they understood
And for still others, it may be grief mixed with relief... which is often even harder to talk about honestly.
People exhibit grief differently: anger, aggression, withdrawal, anxiety, tribalism, denial, extreme life changes, erratic behavior, depression.
And what I'm thinking about today is how hard it is to talk at a topical level with people who are clearly affected on a personal level. Which... and we all need to be more realistic about this... is everyone now. Because this is a global change that has affected either the person you're talking to or their friends and family in significant ways.
Why the AI Debate Keeps Breaking Down
Most AI discourse is framed as a debate about outcomes:
- Is this good or bad?
- Should this be regulated or embraced?
- Is this ethical or dangerous?
But grief doesn’t really respond well to debate.
When someone is grieving, and we respond with counterarguments, what we communicate—unintentionally!—is that the feeling itself is illegitimate. That the problem is their reasoning, not their loss.
That’s often when conversations harden:
- Curiosity turns into defensiveness
- Questions turn into moral positioning
- People stop listening, because they no longer feel seen
And then we’re surprised that “dialogue” doesn’t lead anywhere.
Most people don't know how to talk with people who are grieving. A lot of people avoid it entirely. It's complicated!
Recognizing Grief Doesn’t Mean Stopping Progress
But we're not in a situation where we can avoid talking about it either. This is a real change happening to us and around us, and we have to find a way to engage with it.
So I'm not saying we should just walk on eggshells around this topic.
Recognizing grief doesn’t mean stopping progress, rejecting technology, or even agreeing on what comes next.
I think it's just about changing the posture of the conversations we have with people.
When grief is named, a different kind of question becomes possible:
- What are you afraid of losing?
- What part of this feels destabilizing to you?
- What do you wish people would acknowledge before trying to convince you?
And these questions are NOT just for people who seem obviously angry or depressed. Someone who's excited about the possibilities AI is opening up for them may also be grieving the loss of close relationships with people who see this topic really differently.
Those kinds of questions don’t solve anything on their own, of course. But they do something really important: they create the conditions where people can actually think together.
The Piece We Need to Move Forward Together
There’s a temptation—especially among people excited by AI—to treat hesitation or resistance as ignorance, or a failure to adapt. And there’s an equal temptation among critics to treat adoption as moral failure or thoughtlessness.
Both can miss the reality in the middle.
The world has changed around us in massive ways. As with the global pandemic or the world wars, no one has totally escaped the effects of these irreversible changes.
And if that’s true, then the work in front of us isn’t just technical, ethical, or economic. It’s also relational. And not just in a "hold my hand" kind of way—I mean that the relational work we have to do is an ESSENTIAL part of stabilizing our future together.
When We Minimize Loss, We Widen the Divide
One of the easiest traps to fall into, especially in moments of rapid change, is scoffing at people who seem to have plenty and yet feel destabilized by losing something.
Take a software developer who was making big money, then got laid off and hasn't been able to find work again, or has had to take a much lower-paying job.
It’s tempting to say: "Why are you complaining? Try working two jobs like I have had to for 10 years!" Or, "They kinda had it coming, real life isn't that easy." We often reflexively mock grief over loss that looks comparatively small (or even seems fair).
But that reflex doesn’t create equality or achieve justice. It just creates more distance.
It doesn't help us or them when we harden the lines between “us” and “them.” If we turn grief into a competition, we ensure no one really feels safe naming what they’re losing. And that's unhelpful for all of us.
Grief isn’t proportional in the way we want it to be. It’s relational. People grieve what their lives have been built around—identity, meaning, stability—even when those lives appear privileged or secure by other measures.
Acknowledging that doesn’t excuse systems of inequality. It doesn’t erase real disparities in power, safety, or opportunity.
Acknowledging that actually creates an opportunity to close the gaps effectively. I feel pain. You feel pain. If we stop measuring pain and pointing fingers for a moment, we give people on both sides a chance to figure out how to embrace a new identity and their current reality.
If we were facing this with fewer enemies, with more of a sense of shared ownership—not in "getting things back to the way they were" but in embracing reality as it is—we might find ourselves in a different kind of conversation.
Finding Solid Footing in the AI Era Together
That doesn't mean people will agree, and it doesn't solve the very real ethical, economic, and ecological issues surrounding AI that we're trying to untangle.
But I'd wager it's a better starting point for those conversations. Strong viewpoints make a lot more sense with context that goes beyond statistics.
We're already facing enough divisive issues today—and they're incredibly destabilizing.
Becoming more self-aware about our grief and curious (not as a thought-experiment; genuinely curious) about how the rapid changes around AI are affecting people feels like a pretty important part of restoring stability and energy to the world—and honestly, to ourselves as well.
I'm not at all claiming this is stuff other people haven't thought about. I am actually curious about what people who HAVE thought about this are noticing. Are there practical steps we could be taking (in communities, in work teams, in boardrooms, in schools) to set a different tone for the dialogue around AI adoption and the impact it's having? What has been working? What hasn't?