See the live polling results and other resources provided by Mark Graban:
Watch the recording of the webinar:
Additional Bonus Q&A:
View and download the slides:
Listen to the recordings via our podcast (the webinar and then the Q&A)
A hospital invited Mark Graban in to see their lean work. They were excited about it. As they walked the hallways and visited departments, they wanted to show him their new huddle boards -- whiteboards designed to help track and drive improvement. They had visited a leading lean healthcare organization in the Upper Midwest, seen the boards being actively used, taken pictures, and gone home and had dozens made. The boards were installed throughout the hospital.
They were blank.
The boards had been up for a couple of months. Nobody was using them. The technology -- or in this case the physical tool -- wasn't the problem. The problem was what was missing from the environment that would lead people to actually participate.
This story opens the session because it applies just as directly to software as it does to whiteboards. A KaiNexus instance with a blank board is the same problem with a different interface. Tools and platforms don't create improvement cultures. Leaders do.
Professor Ethan Burris at the University of Texas at Austin has studied why employees choose not to use their voice -- not to speak up about problems, not to share ideas for improvement. His research identifies two distinct barriers.
The first is fear. People are afraid of being mocked, ridiculed, marginalized, or punished for pointing out problems or sharing ideas. This is the more commonly named barrier. It's real, and leaders who want improvement cultures need to work actively to eliminate it.
But Burris's research shows the second factor is actually slightly more common. It's futility. People say: "I'm not afraid to speak up. It's just not worth the effort. I've pointed out problems before. I've shared ideas before. Nothing happens." Fear and futility both produce silence. But they have different causes and different remedies.
Organizations that focus only on reducing fear -- encouraging people to share ideas, promising no retribution, talking about openness -- without building the systems and habits to actually act on what people share simply replace the fear factor with the futility factor. The result is the same silence, but now people feel they've already been given the chance to speak and chosen not to. The opportunity feels more closed, not more open.
Both barriers have to be addressed. Fear requires leadership behavior that makes speaking up demonstrably safe. Futility requires leadership behavior that makes speaking up demonstrably worth it -- that when someone points out a problem, something happens.
Jamie Bonini at Toyota's TSSC -- the group that does lean coaching with suppliers, nonprofits, hospitals, and other organizations -- defines the Toyota Production System in a way that puts people at the center. His definition: an organizational culture of highly engaged people solving problems or innovating to drive performance.
Three things in that definition are worth unpacking. Highly engaged people don't just materialize because you've hired well. Engagement is a function of what leaders do and don't do. Problem solving and innovation require a culture that makes it safe to surface problems and test ideas. And drive performance makes clear that this isn't soft culture work for its own sake -- it's the operational prerequisite for sustained results.
Toyota itself has written explicitly about this. In Toyota Culture, Jeff Liker and Mike Hoseus describe Toyota's belief that people must feel psychologically and physically safe -- notably, in that order. Psychological safety is listed first. The implication is that psychological safety leads to better physical safety, because people who feel safe speaking up will surface safety concerns rather than silently absorbing risk.
The andon cord story illustrates the gap between having the mechanism and having the culture. A BBC news article described a Ford truck plant in Michigan that had installed andon cord systems -- bought the equipment, put them in, set them up correctly. Toyota plants in Kentucky and elsewhere pull the cord a couple thousand times a week. The cord was being pulled twice a week at the Ford plant. Mark's observation: probably twice to test whether it was working.
Having an andon cord doesn't create a culture of using it. Creating a culture of using it requires two things: psychological safety and effective problem solving. Both. Make it safe to pull the cord, and make sure that pulling it actually leads to something useful. Otherwise you've installed equipment that demonstrates, every day it isn't used, that the culture isn't there yet.
Amy Edmondson, Professor at Harvard Business School and author of The Fearless Organization, defines psychological safety as a belief -- individualized, situational, felt differently by different people in different contexts -- that one will not be punished or humiliated for speaking up with ideas, questions, concerns, or mistakes.
The word "belief" is important. Psychological safety isn't a policy or a declaration or a room. It's what each person actually believes to be true about the consequences of speaking up. Two people in the same meeting can have very different levels of psychological safety. Two people who've worked together for years can feel different levels depending on the topic, the stakes, and the others in the room. It's not binary, and it's not fixed.
Google's Project Aristotle research, which tried to understand what made some teams more effective than others across all of Google's disparate businesses, identified psychological safety as the strongest predictor of team success. All other things being equal, higher psychological safety produces better team performance.
Timothy Clark, CEO of LeaderFactor and author of The Four Stages of Psychological Safety, offers a concise working definition: a culture of rewarded vulnerability. Unpacking that: a vulnerable act is something that exposes you to risk of harm or loss -- professionally, personally, socially. Pointing out a problem, admitting a mistake, challenging a decision, saying "I don't know" -- these are vulnerable acts. They're not inherently negative. But they carry risk that varies with context. A culture of rewarded vulnerability is one where those acts are consistently met with positive responses rather than punishment, and where that pattern is real and felt, not just stated.
The distinction Clark draws between "you should feel safe" and actually building a culture that makes people feel safe is critical. Rhetorical reassurances don't work. Declaring a space safe doesn't make people feel safe. What builds safety is consistent, repeated, observable leader behavior -- and the experience of watching others take a risk and survive it well.
Clark's framework organizes psychological safety into four sequential stages. Each stage builds on the ones before it. Organizations that want to jump to stage four without building the foundation usually find the attempt doesn't hold.
Stage one: Inclusion safety. Can I be my authentic self? Do I feel included, accepted, and respected as a person? Without a reasonable baseline of inclusion safety, progression through the other stages is very difficult. People who don't feel genuinely accepted by the group aren't ready to expose their learning edge or their ideas.
Stage two: Learner safety. Can I learn and grow? Do I feel safe to ask questions, say "I don't know," admit a mistake, and try something uncertain without those acts being used against me? This is the stage most relevant to early lean and improvement work. A team that hasn't achieved learner safety will struggle to participate honestly in problem-solving.
Stage three: Contributor safety. Can I contribute and create value? Am I given genuine space to do my work to my full capability -- not just perform a defined task but bring my judgment and initiative to the work?
Stage four: Challenger safety. Can I be candid about change? Do I feel safe pointing out problems, sharing improvement ideas, and challenging the status quo without fear of embarrassment, marginalization, or punishment?
Stage four is where lean improvement work lives. It's also the hardest stage to reach and sustain. The live polling data from the session shows a consistent pattern: across audiences from many organizations, challenger safety scores the lowest and has the widest distribution of any stage, while inclusion and learner safety tend to score higher. The pattern makes sense -- it's harder to challenge the way things are done than to feel accepted or to ask a question.
The implication for CI leaders: when improvement programs plateau, the constraint is often not methodology or tools but the absence of stage four safety. People don't feel safe enough to challenge the status quo, which is exactly what improvement requires. The path forward isn't a new framework -- it's building the foundation stages that allow challenger safety to develop.
The two most powerful things leaders can do to build psychological safety are modeling and rewarding. They work together and reinforce each other.
Modeling vulnerable acts. When leaders say "I don't know," "I might be wrong," "I made a mistake," or "let's test this rather than assume it'll work," they change the reference point for what's acceptable to admit in the organization. The CEO who shares their own failures teaches everyone who works for them that admitting failure here is safe. Not because they said it's safe, but because they demonstrated it.
Mark describes a moment at KaiNexus: a company-wide meeting where leaders were asked to share not just what went well over the past six months but what went wrong. Leaders, including Jeff Roussel, KaiNexus's Chief Revenue Officer, said plainly: "I made mistakes." The purpose wasn't to be punished or to feel shame. It was to share the learning. When the CEO, co-founders, and senior leaders model that behavior consistently, it sets the norm. Stephanie Hill, a Senior Lean Strategist at KaiNexus, wrote a LinkedIn post about what she called a significant failure -- and found that rather than being blamed, her colleagues rallied around her, focused on the learning, and encouraged her to keep bringing her mistakes to light. That response is what makes the next vulnerable act feel safer.
Rewarding when others follow. Modeling sets the example. But the second step is equally important: when others take the risk and speak up, leaders need to actively reward that behavior. Not just avoid punishing it -- actively appreciate it. The distinction matters because people are making a social bet every time they decide whether to say something difficult. Neutral responses don't reinforce the behavior. Explicit appreciation -- "Thank you for saying that," "I'm glad you pointed that out" -- closes the loop. It makes the bet feel worth it.
Together, modeling and rewarding create a cycle. Leaders model → others test the waters → leaders reward → more people follow → the safety level rises. This is not quick. People who have been punished for speaking up in prior organizations don't immediately trust a new environment just because the new manager says it's safe. They need to see the cycle play out multiple times before their internal calculus changes.
A question from the live session raised a common lean convention: always start with 5S. Mark's counterproposal: always start with psychological safety.
This isn't a rejection of 5S. It's a sequencing argument. If a CI program launches with tool deployment before the psychological foundation is in place, the tools are going to meet resistance, generate anxiety, and produce the blank-board pattern the session opened with. People copy the form of the activity without the substance of participation.
Starting with psychological safety means: before the first kaizen event, before the first training on lean tools, spend time building the foundations. Help people feel included in the effort rather than subjected to it. Create space for learning -- where it's safe to say "I don't understand this yet" and "what does that term mean?" Establish conditions where people feel like contributors rather than targets of efficiency programs. Only then will the tools find soil in which they can take root.
The four stages of psychological safety map directly to the four requirements for effective kaizen work. Stage one inclusion: people need to feel the team includes them genuinely before they'll engage. Stage two learning: kaizen events are learning environments and require the safety to not know things and to try things that might not work. Stage three contributing: actually doing the improvement work. Stage four challenging: the whole point of kaizen is to challenge how the work is currently done.
Psychological safety can be assessed through validated survey instruments. LeaderFactor's survey, which Mark is certified to facilitate, measures overall levels and breaks them down by the four stages and the specific questions within each stage. KaiNexus surveyed its own team -- about 32 employees at the time -- and scored in the 80th percentile compared to other organizations across all four dimensions.
Two things worth noting about that result.
First, 80th percentile is genuinely good but not a reason to stop. The response at KaiNexus was not "80th percentile is enough." It was: what do the free-response answers tell us about specific barriers? What can we do to move toward 90th percentile? Where are the gaps within specific teams? The score became a starting point for improvement, not a destination.
Second, a question from a live attendee challenged this honestly: shouldn't a CI company score higher? The honest answer is yes, probably -- and the honest response to that is to keep working rather than to rationalize the gap. Morgan Wright, KaiNexus's Customer Marketing Manager, made a useful observation: even in a company where leadership openly models and rewards vulnerable acts, people are still working against a lifetime of prior conditioning that taught them mistakes are dangerous. That conditioning doesn't disappear when the culture changes. It fades slowly through accumulated experience of a different pattern.
The distribution of scores in the live polling across webinar attendees showed a consistent pattern: scores are spread widely rather than clustering. Inclusion safety and learner safety score highest. Contributor and challenger safety score lower. The spread is wide -- closer to a flat, even distribution than a bell curve -- which reflects the enormous variation in what people carry from their prior work histories. This is why culture-building is inherently slow and why leaders have to be patient with the pace at which people's felt safety changes.
A point Mark makes near the end of the session that deserves explicit attention: psychological safety isn't only needed at the entry point of speaking up. It's needed through the entire improvement cycle.
The Plan phase: do people feel safe to say "I think our hypothesis might be wrong" or "let's test this before we commit to it"? There's a meaningful difference between "we're going to implement this solution" and "let's test this countermeasure." The language implies a different relationship to uncertainty. One suggests the answer is known. The other says we have a hypothesis we're going to learn from.
The Do phase: do people feel safe to run an experiment on a small scale rather than being pressured to roll out broadly before anything is validated?
The Study phase: when results don't match predictions, do people feel safe to say "this countermeasure didn't work as expected" rather than finding ways to explain why the data is misleading?
The Adjust phase: can the team honestly decide "abandon this and try something different" without feeling that acknowledging a failed experiment is a personal failure?
Each step requires a culture where learning from what happened is more valued than appearing to have had the right answer in advance. Building that culture is the work of psychological safety. The PDSA cycle itself is a scientific method for improvement. Psychological safety is what makes practicing genuine scientific method -- including honest acknowledgment of results that don't confirm the hypothesis -- actually possible.
A few things worth naming about where the platform connects to the substance of this session.
The futility problem -- the perception that speaking up doesn't lead to anything -- is directly addressed by what KaiNexus does. When an idea or a problem gets submitted into KaiNexus, it gets routed, acknowledged, and tracked. The person who submitted it can see what happened to it. Improvement work is visible. That visibility is the antidote to futility: it demonstrates, through concrete feedback, that surfacing a problem leads to action.
The platform also supports the leader behavior that builds psychological safety. When leaders regularly comment on, acknowledge, and respond to improvements submitted by their teams -- one of the specific habits KaiNexus encourages, referenced in the Stephanie Hill panel discussion session -- those responses are visible in the platform. They create a pattern of reward that reinforces the safety of speaking up.
And for organizations that want to assess and track psychological safety over time alongside improvement activity, the platform provides the infrastructure to see whether the two are moving together -- whether improvement volume and participation are increasing as the cultural foundation strengthens.
Building the culture requires the leadership behaviors this session describes. The platform is the infrastructure that makes those behaviors visible, consistent, and connected to results.
Mark Graban is a Senior Advisor to KaiNexus and an internationally recognized author, speaker, consultant, and podcaster. He has decades of experience helping organizations improve performance through lean thinking, continuous improvement, and respectful leadership across healthcare, manufacturing, and technology. He is the author of several books including The Mistakes That Make Us: Cultivating a Culture of Learning and Innovation, Lean Hospitals, and Measures of Success. He hosts the podcasts Lean Blog Interviews and My Favorite Mistake, and is widely known for his work on psychological safety, learning from mistakes, and systems thinking.
What is psychological safety and why does it matter for continuous improvement?
Psychological safety is the belief -- individual, situational, and felt differently by different people -- that you will not be punished or humiliated for speaking up with ideas, questions, concerns, or mistakes. It matters for CI because improvement work fundamentally requires people to surface problems, admit what they don't know, challenge the status quo, and experiment with approaches that might not work. All of those acts carry social risk. When the risk feels too high, people go quiet -- and improvement stops.
What are the two biggest barriers to people speaking up?
Research by Professor Ethan Burris at UT Austin identifies two distinct barriers. The first is fear: people are afraid of being punished, marginalized, or ridiculed for speaking up. The second, which Burris's research finds is slightly more common, is futility: people aren't afraid to speak up -- they just don't believe it will lead to anything. Both barriers must be addressed. Fear requires leaders to model safety through behavior. Futility requires leaders to actually act on what people share.
What are the four stages of psychological safety?
From Timothy Clark's framework: inclusion safety (can I be my authentic self?), learner safety (can I learn, ask questions, and admit mistakes?), contributor safety (can I contribute to my full capability?), and challenger safety (can I challenge the status quo?). The stages are sequential -- each builds on the ones before it. Challenger safety, which is where lean and CI work lives, is the hardest to reach and requires the prior three stages as a foundation.
What do leaders actually do to build psychological safety?
Two things, working together. First, modeling: leaders who admit their own mistakes, say "I don't know," and visibly test hypotheses rather than declaring certainty change the reference point for what's acceptable in the organization. When a CEO says "I made a mistake" in a team meeting, it teaches everyone in that meeting what this organization's norms actually are. Second, rewarding: when others follow the lead and take a vulnerable act, leaders explicitly appreciate that behavior. Not just failing to punish -- actively thanking. The reward reinforces the safety of the behavior and makes the next vulnerable act feel less risky.
Can psychological safety be measured?
Yes. Validated survey instruments -- including the LeaderFactor assessment that Mark is certified to facilitate -- measure both overall levels and the four stages specifically. KaiNexus surveyed its own team and scored in the 80th percentile overall. The organization's response was not to declare the work done but to use the results to identify specific barriers and continue improving. Scoring the team and then sitting with the gaps honestly is itself a demonstration of the culture the scores are measuring.
Should organizations start with 5S or with psychological safety?
Mark's counterproposal to the "always start with 5S" convention: always start with psychological safety. Not as a rejection of 5S or other lean tools, but as a sequencing argument. If tools are deployed before the psychological foundation exists -- before people feel included, safe to learn, empowered to contribute, and able to challenge -- the tools will be met with blank boards and unused andon cords. Building the foundation first makes the tools actually work.
Does psychological safety matter throughout the PDSA cycle, or just at the beginning?
Throughout. It takes psychological safety to say "our hypothesis might be wrong" in the Plan phase, to run a small-scale test rather than a forced rollout in the Do phase, to honestly report that results didn't match predictions in the Study phase, and to honestly decide to abandon an approach that didn't work in the Adjust phase. The entire scientific method of improvement depends on a culture where honest engagement with results is safer than managing appearances.