The Room Where It Happens (Differently)
TL;DR
When an AI participates in a problem-solving conversation, people drop defenses they have carried for decades. I have seen it happen twice in the past few weeks. This is not about a better search engine. It is about changing the dynamics of the room, and it scales from a library table to a boardroom.
Twice in the past few weeks, I sat down with a business owner to work through their real challenges. Not in a conference room with a slide deck. At a table, side by side, laptop open between us. The AI was on the screen, and the three of us talked.
One session was with a wellness practitioner who runs a small clinic. The other was with someone who leads a spiritual practice and healing arts center. A third, with a realtor, is in the works. In each case, the setup was the same: I brought a structured thinking process, the AI brought the ability to hold complexity and research in real time, and the business owner brought what nobody else could bring, which was their actual experience of their actual problem.
Within the first twenty minutes of the first session, something happened that I did not fully expect. The person across from me started saying things she had never said to a consultant before. She was not holding back. She was not filtering. She was not managing my reaction.
I have been consulting for a long time. That does not happen in normal meetings.
Why People Stay Stuck
Here is a question worth sitting with: why do smart, capable people in well-run organizations tolerate, for years, problems they can clearly describe?
The standard answers are not wrong. Time pressure. Competing priorities. Budget constraints. But underneath all of those is something more fundamental. People have learned, through decades of painful experience, that raising difficult issues in a room full of humans is genuinely dangerous.
Not physically dangerous. Socially dangerous. Politically dangerous. Career dangerous.
Everyone who has worked in an organization of any size has watched someone raise a legitimate concern and get punished for it. Not formally, usually. Informally. A shift in how they are perceived. An exclusion from the next conversation. A label: “not a team player,” or “difficult,” or “doesn’t understand the political realities.”
So people learn. They learn to read the room before they speak. They learn to calculate: is this observation worth the risk? They learn where the land mines are, the topics that will trigger defensiveness, the questions that will be heard as accusations, the truths that nobody wants surfaced because surfacing them obligates someone to act.
This is not cowardice. It is rational behavior in response to real incentives. The land mines are real. The consequences are real. People who have stepped on them remember.
The result is that organizations carry enormous amounts of unsaid knowledge. People know what is wrong. They can describe it to their spouse over dinner. They discuss it in the parking lot after the meeting. But it never enters the room where decisions are made, because the social cost of putting it there is too high.
What Changed in the Room
So what was different at that table?
Part of it was the physical setup. Sitting side by side rather than across from each other changes the dynamic. You are looking at the same screen, working on the same problem, rather than facing off in a negotiation posture. That matters more than it sounds like it should.
But the bigger factor was the AI.
When the AI surfaces a difficult observation, nobody has to manage its feelings about it. It has no stake. It is not angling for a contract. It is not trying to be right. It is not carrying history from a previous meeting. It does not have an ego that might be bruised if you disagree.
The business owner knows this, instinctively and immediately. So when the AI says, “Based on what you have described, it sounds like roughly half of your one-on-one client time is spent delivering foundational material that could be delivered in a group setting,” there is no defensive reaction. There is just: “Huh. Yes. That is true. I have never thought about it that way.”
That same observation from me, a consultant she barely knows, might have landed as criticism. Are you saying I am wasting my time? Are you saying I am doing it wrong? Those are the land mines. The AI walks right over them because it cannot step on them. It does not carry the social weight that triggers defensiveness.
I want to be honest about something here: the AI is, in certain respects, a better facilitator than I am. Not because it is smarter. Because it does not carry the conditioning I carry. I have thirty years of reading rooms, calibrating how direct to be, managing the ten simultaneous calculations that any human makes in a sensitive conversation. All of that conditioning is useful, but it also consumes bandwidth that could go toward the actual problem. The AI has no such overhead. It can say the clear thing clearly.
This Is Not a Better Google
Most people, when they think about AI, picture typing a question into a chatbot and getting an answer back. A better search engine. A faster research assistant. That mental model is so widespread that it has become the default, and it is radically incomplete.
What happened in these sessions was not research. It was not question-and-answer. It was three-way collaborative thinking. The business owner would describe a frustration. The AI would connect it to a pattern, or ask a clarifying question, or pull up relevant information about her industry in real time. I would notice something in what she said and offer it back. The AI would hold the entire thread, all the connections, all the contradictions, all the stakeholders and their needs, in a way that no human working memory can.
The session built on itself. Forty-five minutes in, we had surfaced real structural tensions in her business that she had been feeling for years but had never articulated. Not because she lacked intelligence. Because no environment had ever made it safe and productive to think that carefully about the hard parts.
That is a fundamentally different use of AI than asking it to summarize an article or draft an email. And the gap between what people think AI is for and what it can actually do in this mode is enormous.
The Governance Question
I work with and around healthcare organizations. I attend board meetings. I read strategic plans. And what I see, increasingly, is AI strategies that focus almost entirely on operational efficiency: ambient listening in clinics so physicians spend less time on documentation. Voice scribes for nurse handoffs. Automated appointment reminders. Chatbots for patient intake.
These are fine. They will save time. They will reduce some friction. And within a year, every health system in the country will have them, because they are straightforward to implement and the vendors are already selling hard. They will be table stakes.
What almost nobody is talking about, at the governance level, is using AI to change how the organization thinks. Not how it documents. Not how it schedules. How it confronts its own structural tensions: the ones between departments, between clinical and financial priorities, between what the mission statement says and what the budget actually funds.
If you are a board member reading this, I would ask you to consider: what is the harder problem facing your organization? Documentation efficiency, or the fact that three departments are competing for the same constrained resource and nobody can talk about it honestly? Appointment scheduling, or the reality that your cost accounting system is producing conclusions that contradict your actual economics?
The operational tools address the first kind of problem. What I am describing addresses the second kind. And the second kind is where the real leverage is.
This applies beyond healthcare. A city manager watching two departments fight over budget allocations that both sides know are distorted. A school board trying to reconcile parent expectations with teacher capacity. A nonprofit board that cannot discuss its real strategic tension because the founder is in the room. The structural pattern is the same: smart people, real conflicts, no safe way to surface them. The technology that changes that dynamic is the same technology, regardless of the sector.
What This Looks Like at Scale
I have been building something called the Cascade Valley series to show what this could look like in a larger organization. It is an audio drama set in a fictional community hospital, where a consultant and an AI work with a board, a CEO, department heads, union representatives, and a skeptical commissioner to confront the structural problems that everyone knows about and nobody has been able to resolve.
I want to be straightforward about what the series is and is not. It is audio, hosted on YouTube. The production values are modest. If you go there expecting polished entertainment, adjust your expectations. The value is in the script: what actually happens when people sit in a room with an AI and work through real organizational tensions using a structured thinking process. The dialogue, the dynamics, the way conflicts surface and get examined rather than buried.
The series portrays situations that will be familiar to anyone who has sat in a hospital board meeting, a city council work session, or a corporate leadership retreat. Three departments all citing space as their growth constraint when the real constraint is staffing. A CFO making service line decisions based on cost allocations that would produce the opposite conclusion if calculated differently. A board commissioner who suspects the organization is stuck in patterns but does not have a framework for naming them.
If any of that sounds familiar, the series might be worth your time. Start with Episode 01, “The Meeting That Went Differently”. It is about an hour. By the halfway mark, you will know whether it connects to what you are dealing with. There are six episodes so far, and you can find them all alongside related writing on the healthcare insights page.
The Pattern Underneath
Here is what I think is actually happening, underneath all the specific examples.
Human organizations have always been constrained not just by resources but by the limits of human communication. We cannot say everything we know because the social consequences are too unpredictable. We cannot hold all the competing stakeholder needs in working memory at once because there are too many of them. We cannot facilitate a room through a genuine conflict without someone’s ego or anxiety or political calculation derailing the process, because we are human and that is what humans do in groups.
For the entire history of organized human effort, these have been fixed constraints. We built workarounds: hierarchy, so not everyone has to be in the room. Voting, so disagreements can be resolved without being resolved. Compromise, so nobody gets what they need but everybody gets something. We learned to call these workarounds “governance” and “best practices” and “the art of management.”
But they are workarounds. They exist because the actual problem, which is getting a group of humans to think honestly together about hard things, was too hard to solve directly.
I am not claiming AI solves that problem completely. I am saying the constraint has shifted. The thing that was functionally impossible, a room where difficult truths surface without social penalty, where all the complexity stays visible, where the conversation builds rather than fragments, is now possible. I have seen it happen at a library table with one person. The Cascade Valley series imagines it happening in a boardroom with twelve.
The question for any leader or board member is not “should we adopt AI?” You are going to adopt AI. Everybody is. The question is whether you adopt it only for efficiency, which will soon be table stakes, or also for the thing that could actually change the character of your organization: honest, facilitated, multi-stakeholder thinking about the problems you have been carrying for years.
I do not have this all figured out. I am running these sessions one at a time, learning what works, and sharing what I find. If this connects to something you are seeing in your own organization, I would be glad to hear about it. And if you think I am wrong about any of this, I welcome that conversation too.