This session is different from most webinars in the KaiNexus series. There are no slides, no outside experts, no case studies from organizations at a distance. It's a panel of five people from KaiNexus -- the CEO, the Director of Product, the VP of Customer Experience, a Senior Lean Strategist, and a Senior Advisor -- talking candidly about what it actually looks like to build a culture where mistakes are treated as learning opportunities rather than occasions for blame.
The honest version of that conversation, it turns out, includes admitting to clicking a phishing link, describing an embarrassing sprint toward the plant manager's office, a CEO acknowledging in the middle of his own panel discussion that he'd chatted to the wrong group, and a VP recalling that the company's onboarding was, in the CEO's own words, "piss poor" seven or eight years ago -- and explaining why it isn't now.
The session was organized as a launch event for Mark Graban's book The Mistakes That Make Us: Cultivating a Culture of Learning and Innovation. The through-line is simple: you can't have a culture of continuous improvement without a culture of learning from mistakes. And you can't have a culture of learning from mistakes without psychological safety. The conversation unpacks each of those claims with real examples.
What follows is the substance of the session, organized so the page is useful whether you watched it or are landing here from search.
Greg Jacobson's opening observation sets the frame. If you break continuous improvement down to its most basic form, he says, you can look at waste as a mistake. You can look at a defect as a mistake. And if you recognize that, then you recognize that improvement work is fundamentally about finding the places where things haven't gone as intended and asking what better would look like.
That framing makes learning from mistakes not an add-on to CI culture but its engine. The organizations that can't talk openly about what went wrong can't improve systematically. They're limited to improvements that don't require admitting a problem first -- which is a very small category.
Greg's formulation: if you can open yourself up, open your team up, open your organization up to the idea that mistakes are not bad things but opportunities for learning, you've made the single most important step in developing a culture of continuous improvement, which he defines simply as every single person in the organization improving every day.
Maggie Millard adds a dimension that's easy to miss: when you ask people to improve, you're also asking them to make mistakes. Running an experiment means accepting the possibility of failure. If people understand that failure is punished, they won't run experiments. They'll do what's known to be safe. Improvement requires the freedom to fail, which means improvement requires the cultural infrastructure that makes failure survivable.
Kim Guilliotti walked through the product development process at KaiNexus as a concrete example of a system designed to catch mistakes at multiple points before they reach customers -- and to do it in a way that's psychologically safe rather than adversarial.
The process starts before any code is written. Product managers gather information from customers, prospects, and internal teams. A mock-up is created, a user story is written, and the story goes to developers -- who immediately provide feedback on what's unclear, missing, or potentially problematic. That first review catches errors in the specification, not the execution.
Once a developer builds a feature, every other developer on the team reviews the code through what they call a pull request. Not to catch the developer making a mistake, but to collectively improve the code. Suggestions are offered: could this be done a different way? Would this approach yield a performance improvement? The review is collaborative rather than evaluative.
After the pull request, the QA team tests the feature against the specification. Does it do what the product manager said it would do? Did the new code break anything downstream? Defects found at this stage go back to the developer to fix -- without drama, without finger-pointing, as part of the normal rhythm of the work.
Kim's framing: there's a lot of trust and psychological safety involved throughout that whole process. Everyone is constantly reviewing everyone else's work. The reviews are useful to the extent that people are honest about what they see and don't see. That honesty depends on an environment where reviewing someone's code and finding a problem is understood as helpful rather than critical.
Even with all of this, Kim notes, nothing has ever gone out perfectly the first time. The agile model builds in the expectation of iteration. You build with the best information you have now, release it, get feedback, and improve. Mistakes are not a sign that the process failed. They're data that feeds the next iteration.
Maggie's answer to the question of what happens when a problem -- whether a product defect or just user confusion -- reaches a customer: get curious.
The key is understanding what the customer thinks the problem is, because it might be a product issue or it might be a misunderstanding about how something is supposed to work. Either way, the response starts with curiosity rather than defensiveness. The team asks: what happened? How did it happen? What circumstances led to this? What process problem needs to be fixed?
The orientation toward process rather than person is structural. When someone is trying their best and making a mistake, there's almost always a reason that's findable in the system -- inadequate documentation, an unclear specification, a training gap, too many competing priorities. Finding that reason is the improvement opportunity. Deciding the person is the problem closes off that opportunity and creates the additional problem of a team member who no longer feels safe reporting issues.
Greg's reinforcing observation: if someone doesn't think mistakes are happening in their organization, either they're not listening or they're not creating an environment where people can tell them. In healthcare settings he's worked in, mistakes were routinely swept under the rug because reporting them triggered punishment. That pattern doesn't reduce mistakes. It just makes them invisible -- which is worse.
The onboarding story is one of the most useful concrete examples in the session because it shows the CI culture mechanism working from the inside.
Every new KaiNexian goes through an onboarding project managed in KaiNexus itself. Part of that project -- not optional, not just encouraged but required -- is identifying at least one opportunity for improvement in the onboarding process. The new employee who has never seen the process before becomes the best possible observer of it. They notice things the long-tenured employees can't see anymore because they've normalized them.
Maggie describes the history: when this practice started, KaiNexus didn't really know how to onboard employees well. The process was, in Greg's unvarnished description, "quite piss poor." With each person who went through it, it got better. Not because leadership had the insight to improve it, but because each new employee improved it for the person behind them.
A specific example: someone who hadn't used Gmail before struggled with it. Current employees hadn't thought of it as a learning curve because they'd been using it for years. That new employee's onboarding improvement was a document explaining Gmail, linking to Google's own training resources. Now some new employees see it and think "I know this already." Others discover resources they needed but would never have found on their own. Either way, the improvement serves the next person.
The deeper point, which Maggie names explicitly: onboarding is the first time KaiNexus asks a new employee to take a leap of faith. The company is asking them to point out a problem with their new employer, their new colleagues, their new boss, while they're still in week one of understanding who those people are and how the organization actually operates. Not every new employee comes from a background where that's safe. The onboarding improvement task is a low-risk first test. It teaches people, through direct experience, that pointing out problems here is welcomed and acted upon -- not ignored or turned against the person who spoke up.
Kim adds another angle: fresh eyes are exactly what a process needs to surface the assumptions baked into it. People who've been somewhere long enough have normalized countless things that a newcomer finds strange or inefficient. Making fresh-eye observations a formal responsibility, rather than just a hoped-for benefit, is a structural way to capture that information before the new employee has also normalized those things.
Mark offers a working definition at the start of the psychological safety section: a social condition or workplace where people feel safe to use their voice -- to point out problems, admit mistakes, say "I don't know," share ideas for improvement -- without fear of punishment or marginalization.
One habit Kim credits to Greg's influence: when asking why something went wrong, the word "why" can inadvertently trigger defensiveness. "Why did you do that?" carries an accusation even when none is intended. Reframing as "what" or "how" questions -- "what led you to that decision?" "how did this come about?" -- serves the same investigative purpose without the same charge. Language is one of the operational surfaces on which psychological safety lives.
Greg's description of what leaders do to build it: empathize publicly. When a team member shared on a company call that they'd had a message sitting unsent in their drafts folder -- which they'd mistaken for sent -- Greg's response was immediate: "I've made that exact mistake," followed by how he'd built a habit to catch it. Two things happened in that exchange. The person who shared the mistake was normalized rather than isolated. And everyone else heard a CEO saying "I make mistakes too," which changes the reference point for what's acceptable to admit.
Maggie's formulation: not just "don't punish people for speaking up" but actively thank and reward the behavior. When her team pushes back on a new process she's introduced, she doesn't just tolerate it -- she thanks them. Publicly, specifically. Because the act of pushing back required them to take a risk, and the explicit appreciation reinforces that the risk was worth taking. Over time, repeated cycles of risk-taking and appreciation build the foundation of a genuinely psychologically safe team.
Kim's observation about how the psychological safety built at the leadership level spreads through the organization: it wasn't one leader modeling the behavior. It was a critical mass of them. Greg and the other co-founders and VPs all displaying the same behaviors created an environment where the pattern was recognizable as organizational rather than individual. That's when it starts to propagate on its own.
Maggie on the limits of mandating psychological safety: you can't just declare a space safe and expect people to act accordingly. Some people arrive from very different backgrounds where speaking up had real costs. The only way to bring them into a psychologically safe culture is to keep modeling and keep rewarding, over a long enough period that they've personally experienced enough cycles of "I said something risky and it was fine" to internalize it. You have to pull people into the culture. You can't push them into it.
A question that came up in the live chat -- and that Greg describes as appropriately nuanced -- is whether psychological safety means no accountability. The short answer from the panel: no.
Maggie's framing: the distinction isn't whether a mistake happened but how the person responds to it. Do they hide it? Do they cover it up? Do they accept responsibility for it and engage in a genuine conversation about how to prevent it next time? Do they stay curious about what went wrong and what the process problem was? That willingness to engage is accountability. It's not the same as punishment, and it produces much better outcomes.
The RaDonda Vaught case comes up as a stark example of what the alternative looks like. Vaught was a nurse who made a medication error -- one that had significant systemic causes, including technology design problems -- and who disclosed the error rather than concealing it. She was criminally prosecuted. Greg's observation: think about what that case signals to every nurse and every doctor in the country. The lesson isn't "admit your mistakes." The lesson is "if you admit a mistake with serious consequences, you might lose your license and face criminal prosecution." That lesson is now baked into the risk calculus of every healthcare provider deciding whether to report an error. The downstream effect on patient safety is profound and almost certainly negative.
The KaiNexus cybersecurity example runs in the other direction. During penetration testing, team members are sent phishing emails and texts designed to be convincing. Maggie was the one who clicked a link. When she realized what she'd done, she reported it immediately. The policy at KaiNexus: if you think you clicked something, you will not be punished for telling us. The reasoning is straightforward: the cost of not knowing about a real attack is vastly higher than the cost of knowing about a false alarm. The non-punitive response policy exists specifically to ensure that information flows. Maggie sharing publicly that she was the one who clicked the link is a further contribution to the psychological safety that makes the policy actually work.
Greg's practical test for assessing whether an organization has a culture of learning from mistakes: ask a leader whether they think mistakes are happening in their organization.
If the answer is no, that tells you two things. Either the leader genuinely isn't listening -- which is one kind of problem. Or the culture is such that mistakes are being hidden rather than surfaced -- which is a different kind of problem and usually a more serious one.
If the answer is yes, the follow-up question is: what happens when they come to light? If the dominant response is investigation and learning, the culture is probably working. If the dominant response is assignment of blame, the culture is producing silence -- and the mistakes keep happening, invisible and unreported, while the organization operates on the assumption that things are under control.
Greg's point about where to invest leadership time if you're trying to build CI culture: 90% of the leverage is with leaders. Change the way leaders respond to mistakes, and the culture shifts. That's not because leaders are more important than anyone else, but because their responses set the reference point for what behavior is acceptable throughout the organization.
A question from the live audience that the panel addressed briefly but usefully: how do you create space for people to participate in improvement when the day job is already full?
Maggie's answer captures the KaiNexus orientation: improvement isn't a separate activity from the work. It's built into the work. If you're working on a project, improving that project is part of working on it. Improvement isn't additional time on top of the work; it's part of the time the work already takes.
Linda's addition: even when you can't address an improvement in the moment, you need a system to capture the observation. KaiNexus uses shared task lists where problems can be noted as they're identified, to be addressed when the team can come together and work through them. The capture is fast. The resolution happens later. But the gap between noticing and capturing has to be small or the observation is lost.
Greg's honest acknowledgment: KaiNexus is a startup, and the constant state of building new things means improvement is structurally integrated in ways that are harder to achieve in more stable operational environments. In organizations where the work is production-oriented and the ratio of improvement time to doing time is less organic, it genuinely does require deliberate allocation -- saying "we're pausing production to do improvement work" and meaning it. There isn't an easy answer for organizations that haven't yet made that a cultural norm.
The session's meta-observation, which runs throughout, is that KaiNexus is a continuous improvement software company that uses its own software and practices its own principles. Mark documents improvement ideas in KaiNexus. Keith, a new enterprise sales hire, suggested automating a manual webinar follow-up process -- and the automation got built, eliminating the risk of the manual step being forgotten or delayed. The onboarding process is managed in KaiNexus and improved through KaiNexus.
The reason this matters beyond organizational consistency is that it means KaiNexus's understanding of what continuous improvement actually requires is grounded in direct experience, not just customer observation. Greg, Maggie, Kim, and Linda aren't describing principles they've read about. They're describing how they actually operate, including the parts that were bad seven or eight years ago and the specific changes that made them better.
For organizations considering KaiNexus: the platform is infrastructure for exactly what this panel describes. It makes improvement ideas visible and actionable. It closes the loop between identifying a problem and doing something about it. It creates the conditions for the kind of improvement culture the panel describes -- where every person is expected to contribute to improvement, where contributions are acknowledged and tracked, and where the aggregate of many small improvements compounds over time.
The culture has to be built by people. The platform makes building it -- and sustaining it -- significantly more tractable.
Kim Guilliotti is Director of Product at KaiNexus, where she leads the team responsible for understanding customer needs and delivering software features that help organizations drive continuous improvement. She has more than ten years of experience in product management across healthcare and manufacturing.
Greg Jacobson is the co-founder and CEO of KaiNexus. He is an emergency medicine physician whose career path into software development was driven by what he observed as the single biggest barrier holding organizations back: the absence of systematic continuous improvement. He has been working in and thinking about CI for nearly two decades.
Maggie Millard is VP of Customer Experience at KaiNexus, where she leads the teams that support customers through training, account management, solutions engineering, and customer success. She has been with KaiNexus since 2012 and has helped build the customer experience function from its early days.
Linda Vicaro is a Senior Lean Strategist at KaiNexus with over 17 years of experience in lean and continuous improvement, primarily in healthcare. She works with customers to ensure their use of the KaiNexus platform aligns with and enhances their improvement work.
Mark Graban is a Senior Advisor with KaiNexus and host of the My Favorite Mistake podcast, which inspired his book The Mistakes That Make Us: Cultivating a Culture of Learning and Innovation.
Why is learning from mistakes essential to continuous improvement?
Because waste, defects, and process failures are mistakes -- and improvement work is fundamentally about finding them and asking what better would look like. Organizations that can't talk openly about what went wrong can't improve systematically. They're limited to improvements that don't require admitting a problem first. Greg Jacobson's formulation: if you can open your organization to treating mistakes as learning opportunities rather than failures to be ashamed of, you've made the single most important step toward building a CI culture.
What's the relationship between psychological safety and learning from mistakes?
Psychological safety is the prerequisite for learning from mistakes at scale. If people believe that admitting a mistake will result in punishment, marginalization, or blame, they will hide their mistakes. Hidden mistakes can't be studied, can't be fixed, and keep recurring. Psychological safety -- the belief that speaking up about problems, admitting errors, and sharing ideas is safe -- is what allows mistakes to surface and become learning opportunities.
How do you build psychological safety in a team?
Two practices matter most, both described in the session. First, leaders model the behavior -- admitting their own mistakes openly, sharing their own failures and what they learned, responding to others' mistakes with empathy rather than judgment. When a CEO says "I've made that exact mistake," it changes the reference point for what's acceptable to admit. Second, leaders actively reward and appreciate the behavior when others follow their lead. Not just tolerating speaking up, but specifically thanking people who take the risk. Repeated cycles of risk-taking and positive response build the foundation over time.
Does psychological safety mean no accountability?
No. Maggie Millard's framing: the distinction isn't whether a mistake happened, but how the person responds to it. Do they hide it or engage with it? Do they accept responsibility and participate in figuring out how to prevent it next time? That willingness to engage is accountability. Psychological safety shifts the focus from punishment (which drives mistakes underground) to learning (which reduces them over time). Both can coexist.
How do you know whether your organization has a culture of learning from mistakes?
Greg Jacobson's diagnostic: ask a leader whether they think mistakes are happening in their organization. If the answer is no, either the leader isn't listening or the culture is suppressing the reporting of mistakes. If the answer is yes, the follow-up question is what happens when they come to light. Investigation and learning points to a functional culture. Assignment of blame points to one producing silence.
What does a concrete culture of learning from mistakes look like day-to-day?
Several practices from the session: every new employee is required to identify an improvement to the onboarding process, turning fresh-eyes observations into systematic improvement. Mistakes shared in team calls are responded to with empathy and "I've made that too" rather than criticism. Post-incident review focuses on "what about the process allowed this?" rather than "who is responsible?" Phishing link clicks are reported without fear of punishment because the organization understands that hidden security incidents are more dangerous than disclosed ones.