
A KaiNexus webinar with Simon De Castro, Lean Six Sigma Black Belt, Texas Health Resources

Watch the recording of the presentation:

View the slides:

Listen to it as a podcast:

DMAIC works. The problem is how it gets used.

Simon De Castro has coached more than 300 yellow and green belt projects to completion over a 25-year career. He's seen the same patterns again and again -- projects that stall, scopes that creep, teams that skip Analyze and jump to a solution they were going to implement anyway, dashboards that look great but don't actually drive improvement. None of those failures are caused by something wrong with DMAIC. They're caused by how practitioners apply it in real organizations under real pressure.

This session is his field guide to the pitfalls. It's framed around DMAIC, but the patterns map directly onto A3 thinking, PDSA, and almost any other structured improvement approach. The material is grounded in his current role at Texas Health Resources, where he has worked on the design, implementation, and maintenance of KaiNexus since 2017, and in earlier improvement leadership roles at Sara Lee and Johnson & Johnson.

A note before the patterns: this is experience, not formal research. Simon's clear about that upfront. The patterns hold because they keep showing up, not because someone counted them in a controlled study.

When DMAIC isn't the right tool

The first pitfall is using DMAIC at all.

DMAIC is designed for complex problems with real risk -- the situations where rigorous measurement and analysis prevent wasted effort and bad solutions. The methodology's strength is that it discourages skipping steps. You can't get to Analyze without doing Define and Measure first. That discipline is exactly what some problems need.

But many problems don't need it. When DMAIC gets applied to situations that don't warrant the structure, practitioners end up using Six Sigma as a substitute for common sense. The project takes longer than it should, consumes resources that could go elsewhere, and produces a solution someone could have arrived at in an afternoon.

Simon names a root cause for this misuse that most organizations don't want to admit: well-designed certification programs reward project completion as proof of mastery. People who want to earn a belt look for projects to apply the methodology to, regardless of whether the methodology is the right fit. The certification incentive distorts the tool selection. The honest answer is that sometimes the right move is just to do the improvement -- run a quick PDSA, make the change, move on -- and save DMAIC for problems that actually require it.

Flying solo, missing sponsors, and unclear roles

DMAIC is a team methodology. The Measure phase needs people who know where the data lives. Analyze needs people who understand the process from different angles. Improve needs people who can implement and sustain the change. A project leader without that cross-functional team produces a project that misses what those other perspectives would have caught.

The problem is acute for Lean practitioners without formal authority. The CI coach reports to no one in operations and supervises no one in operations, but the project depends on people from operations showing up to meetings and contributing real work between meetings. Without explicit commitment from each team member's supervisor about what's expected and how much time they can spend, the project starves.

Two structural issues compound the team problem. The first is unclear roles. Project leader, process owner, sponsor, CI coach -- each is a real role with distinct responsibilities, and most organizations leave the distinctions vague. Simon recommends a RACI chart (Responsible, Accountable, Consulted, Informed) early in any project so people can see what's expected of them and whether they can miss a meeting without consequences.
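To make the role distinctions concrete, here is a toy sketch of what a RACI mapping might capture. The activities and assignments below are illustrative inventions, not Simon's template:

```python
# A toy RACI sketch for a DMAIC project. Activities and assignments are
# illustrative inventions, not a prescription from the webinar.
# R = Responsible, A = Accountable, C = Consulted, I = Informed.
raci = {
    "Collect baseline data":     {"Project leader": "A", "Process owner": "C",
                                  "Team members": "R", "Sponsor": "I"},
    "Approve scope changes":     {"Project leader": "R", "Process owner": "C",
                                  "Team members": "I", "Sponsor": "A"},
    "Implement countermeasures": {"Project leader": "A", "Process owner": "R",
                                  "Team members": "R", "Sponsor": "I"},
}

# Print one row per activity so everyone can see what's expected of them.
for activity, assignments in raci.items():
    row = ", ".join(f"{role}: {code}" for role, code in assignments.items())
    print(f"{activity} -> {row}")
```

The point of writing it down isn't the artifact; it's that every person on the chart can answer "what am I accountable for, and who needs to hear from me?" without guessing.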

The second is sponsorship. Simon flags both ends of the spectrum: sponsors who are too disengaged to clear obstacles, and sponsors who are too engaged to leave the project leader room to lead. The second is rarer in print but common in practice. A sponsor who speaks first in every meeting, who requires every decision to be vetted before it's made, or who participates so heavily that the team defers reflexively isn't sponsoring -- they're running the project from one rung up, and the project leader isn't growing.

Spotting the over-engaged sponsor is partly about communication patterns. Frequency, tone, openness to suggestions versus authoritative pronouncements. The hardest part isn't recognizing it. It's having the conversation that gives the project leader room to lead while keeping the sponsor invested.

Scope creep, prescribed solutions, and bad metrics

Three Define-phase failures show up over and over.

Scope creep is the most familiar. The project starts well-defined and gradually expands -- new departments included, new problems folded in, the original timeline rendered meaningless. Simon's prescription is to fix the scope explicitly at the start, set a realistic completion timeline tied to stage-gate milestones, and be willing to break a project into phases. Splitting a project into smaller stages isn't failure; it's often better practice. Pilot something, learn from it, celebrate, then move to phase two.

Prescribed solutions is the one that pains Simon most, and it's the most common. Most professionals were taught early in their careers: don't bring me problems, bring me solutions. The cultural pressure to arrive with the answer in hand produces three failure modes. One: the practitioner has a solution and reverse-engineers a problem for it. Two: the practitioner identifies a real problem but skips Measure and Analyze to jump straight to the predetermined solution. Three: the practitioner recognizes the gap between the problem and the solution but applies the solution anyway because the team likes it.

The language shift Simon recommends is small but consequential. Programs that ask for "ideas" tend to get pre-formed solutions. Programs that ask for "opportunities for improvement" tend to get observations that the team can then analyze together. The word changes what people bring forward.

The deeper point is that if you genuinely have a good idea and don't need root cause analysis, DMAIC isn't the right tool. Use change management or project management to implement the idea. Don't fake the analysis to satisfy a certification requirement.

Bad metrics is the third Define failure, and the one with the longest reach. Simon's distinction between leading and lagging indicators is well-established, but the application is sharper than most practitioners use it. For a patient-falls reduction project, the number of falls is the lagging indicator -- accurate, important, reportable, and useless for predicting whether your changes are working. The leading indicators are the things you can measure before falls happen: the number of risk factors present in the area, audit results, training completion rates. Both matter. Most dashboards have too many of the first and not enough of the second.

His sharpest example is the door-to-needle time standard for stroke care. National standard: under 60 minutes from patient arrival in the ED to administration of the right medication. Many hospitals report compliance as a percentage of cases that fell under 60 minutes. The metric makes 59 minutes look identical to 5 minutes. From the patient's perspective, that's absurd. From an improvement perspective, percentage-of-compliance against a binary threshold actively hides the variation that DMAIC is designed to reduce. The fix is internal: keep reporting the percentage externally if required, but measure and act on the actual times internally, broken down by step. The same logic applies to patient satisfaction surveys reported as percentage in the top box rather than the full distribution.
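A minimal sketch makes the collapse visible. The numbers below are synthetic and purely illustrative -- two hospitals with the same reported compliance and very different realities:

```python
# Synthetic, illustrative door-to-needle times in minutes -- not real data.
# Both hospitals report identical compliance against the 60-minute threshold,
# but the underlying distributions tell very different improvement stories.
from statistics import median

hospital_a = [58, 59, 57, 55, 59, 58, 56, 72, 59, 57]  # clustered just under the line
hospital_b = [22, 25, 31, 19, 28, 24, 70, 27, 23, 26]  # genuinely fast, one outlier

def compliance(times, threshold=60):
    """Share of cases under the threshold -- the externally reported metric."""
    return sum(t < threshold for t in times) / len(times)

for name, times in [("A", hospital_a), ("B", hospital_b)]:
    print(f"Hospital {name}: compliance {compliance(times):.0%}, "
          f"median {median(times)} min, range {min(times)}-{max(times)} min")

# Both print 90% compliance; only the median and range reveal that the
# typical patient at Hospital B is treated more than twice as fast.
```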

The line Simon attributes to an unidentified source captures the principle: measure what you value, don't just value what you measure.

Skipping the gemba

The next pitfall is doing the project from a conference room.

Gemba -- the actual place where the work happens -- is where the unfiltered information lives. Process maps drawn in a conference room reflect what the team thinks happens. Process maps drawn after walking the actual work reflect what actually happens, and the two are almost always different.

Simon's framing: there's no CSI Desk. The detective shows don't solve crimes from behind a desk. They walk the scene. Improvement work that skips the gemba is the equivalent of trying to solve a case from your office. The data and the process map will tell you something, but the most important information -- the workarounds people use, the steps that look standard but don't get followed, the conditions that make the procedure hard to follow under load -- only surface when you go look.

A point worth holding onto: when leaders walk the gemba, the goal is to learn, not to teach. Asking questions with humility produces information. Showing up with answers produces silence and theater.

Root cause traps: 5 Whys and fishbone

Both tools are widely taught and frequently misused. Simon walks through specific failure modes for each.

For 5 Whys, the most common failure is stopping too early. The classic illustration: someone fell on the floor. Why? They stepped on a wet spot. Stop there and the response is a warning sign and a safety briefing. Push further: why was the floor wet? A pipe leak. Why was the pipe leaking? It hadn't been maintained. Why hadn't it been maintained? The maintenance program didn't cover that area. Now you have a root cause that, if fixed, prevents the recurrence. Stop at "wet floor" and you'll be back here next month with the same incident in a different building.

Two checks for whether you've found the right root cause. First: if that cause hadn't been present, would the situation have happened? Second: if you fix that cause, will the problem stop recurring? Both answers need to be yes.

A useful verification Simon uses: read the 5 Whys backward and see if the causal chain still makes sense. "Because we don't have a maintenance program, we didn't maintain the pipes. Because we didn't maintain the pipes, there was a leak. Because there was a leak, there was water on the floor. Because there was water on the floor, the employee fell." If reading it backward exposes a gap, you have a disconnected step in the chain.
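For illustration, here is a trivial sketch that mechanizes the backward reading, using the wet-floor chain above (chain text paraphrased; the technique, not the code, is the point):

```python
# A trivial sketch of the backward-reading check. The chain runs from
# symptom to root cause; printed in reverse, it should read as a
# coherent causal story with no gaps.
chain = [
    "the employee fell",
    "there was water on the floor",
    "a pipe was leaking",
    "the pipes were not maintained",
    "the maintenance program did not cover that area",
]

for i in range(len(chain) - 1, 0, -1):
    print(f"Because {chain[i]}, {chain[i-1]}.")
# If any printed sentence doesn't follow from the previous one,
# the chain has a disconnected step.
```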

Two other 5 Whys traps. The first is wording. "Why" by itself produces drift. "Why did this happen?" forces specificity. The second is treating 5 Whys as a single chain when the problem branches. The wet floor example splits naturally: why was there water on the floor (maintenance issue) and why wasn't the floor cleaned (staffing model issue, in Simon's example -- cleaning crews allocated by company size rather than square footage). Two valid branches, two valid root causes.

For fishbone diagrams, the failure is stopping at the first level. Product out of spec because raw material was out of spec -- end of analysis, no actionable countermeasure. Combining fishbone with 5 Whys on each branch keeps the analysis going. In the raw materials example, pushing further reveals that the supplier had the wrong specs because document version control isn't maintained. Now you have something to fix.

Simon also recommends building the fishbone from the team's input rather than starting with predefined categories. Sticky notes from each team member, posted on the board, then grouped into an affinity diagram. The categories emerge from the actual causes the team identified rather than constraining the team to fit pre-existing buckets. Cross out the items that don't actually qualify as root causes once the diagram is complete; circle the ones that do. That filtering work makes the action plan that follows much sharper.

One more language shift: write causes as full sentences. Not "illumination" but "poor illumination makes inspection more difficult." The complete sentence forces clarity about what the cause actually does, and makes the countermeasure easier to design.

Action plans that match the analysis

The action plan is where most projects either deliver or quietly stall.

Common failure modes: the practitioner can't delegate because their team doesn't report to them; the plan is too generic to track; tasks have no dependencies, so people don't know what blocks what; or the plan has no connection to the Analyze phase, which means the analysis was effectively wasted.

The fix is granularity. "Train associates -- 20 people -- HR -- September 30" looks complete and is useless. The same line, broken down, becomes a sequence of actual tasks: gather input from area leads, develop the training, validate it with a pilot group, deliver to first shift, deliver to second shift, give a knowledge check, schedule a refresher, add it to new-hire onboarding. Each task has an owner, a due date, dependencies, and a visible status. The granularity makes resource planning honest and accountability real.
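As a sketch of what that granularity looks like when it's tracked somewhere visible, here is a hypothetical breakdown of that training line. Owners, dates, and task names are invented for illustration:

```python
# A hypothetical, granular action plan for the single line
# "Train associates -- 20 people -- HR -- September 30".
from dataclasses import dataclass, field

@dataclass
class Task:
    """One action-plan line item: owner, due date, status, and what blocks it."""
    name: str
    owner: str
    due: str
    depends_on: list = field(default_factory=list)
    status: str = "not started"

plan = [
    Task("Gather input from area leads", "HR", "2024-09-02"),
    Task("Develop the training", "HR", "2024-09-09",
         depends_on=["Gather input from area leads"]),
    Task("Validate with a pilot group", "HR", "2024-09-13",
         depends_on=["Develop the training"]),
    Task("Deliver to first shift", "HR", "2024-09-20",
         depends_on=["Validate with a pilot group"]),
    Task("Deliver to second shift", "HR", "2024-09-23",
         depends_on=["Validate with a pilot group"]),
    Task("Knowledge check", "HR", "2024-09-27",
         depends_on=["Deliver to first shift", "Deliver to second shift"]),
]

# Anything whose dependencies aren't done yet is visibly blocked.
done = {t.name for t in plan if t.status == "done"}
for t in plan:
    blocked = [d for d in t.depends_on if d not in done]
    note = f"  blocked by: {blocked}" if blocked else ""
    print(f"{t.due}  {t.name} ({t.owner}){note}")
```

Whether the plan lives in software or on a wall, the structure is the same: every task has exactly one owner, a date, and explicit dependencies, so "what blocks what" is never a matter of memory.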

Simon's solution hierarchy is the more important point. Most action plans default to people-based solutions: training, reminders, warnings. People-based solutions are the least effective tier because humans forget, misapply, and revert. The hierarchy, from least to most effective, is roughly:

Warnings and alerts. Necessary in the moment, weakest as sustainment.

Training and education. Useful, but learning gaps and decay are real.

Policies and procedures. Stronger, but only if reinforced.

Reminders and visual aids. Better, because they reduce reliance on memory.

Standardization. The friend of quality and efficiency.

Automation or mistake-proofing (poka-yoke). The most reliable because the wrong action becomes difficult or impossible.

The principle isn't that the higher tiers replace the lower ones. They sit on top of them. The principle is that if your action plan consists mostly of training and reminders, you've stayed at the bottom of the hierarchy. Pushing up the hierarchy -- toward systems that make the right behavior the easy behavior and the wrong behavior the hard one -- is where sustained improvement lives.

The Control phase: don't disappear too soon

The Control phase is where projects most often get rushed because everyone wants to close the books, certify the belt, and move on. Simon flags a few patterns worth resisting.

Data frequency matters. If your only data is monthly, you have one data point per month for SPC. Push for weekly or daily data if the metric can support it. The data is usually available; it's just not reported at the cadence you need. Inquire.
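To see why frequency matters for SPC, consider an individuals (XmR) chart: weekly data gives you roughly thirteen points per quarter to set limits from, while monthly data gives you three. A minimal sketch with synthetic weekly values, using the standard 2.66 moving-range constant:

```python
# A minimal XmR (individuals) chart calculation with synthetic weekly data.
# 2.66 is the standard scaling factor applied to the average moving range
# to set natural process limits on an individuals chart.
weekly = [42, 45, 41, 47, 44, 43, 49, 46, 44, 48, 45, 43, 46]  # illustrative

center = sum(weekly) / len(weekly)
moving_ranges = [abs(b - a) for a, b in zip(weekly, weekly[1:])]
mr_bar = sum(moving_ranges) / len(moving_ranges)

ucl = center + 2.66 * mr_bar  # upper natural process limit
lcl = center - 2.66 * mr_bar  # lower natural process limit
print(f"center {center:.1f}, limits ({lcl:.1f}, {ucl:.1f}) "
      f"from {len(weekly)} weekly points")

# The same quarter reported monthly would yield only 3 points --
# not enough to distinguish signal from noise.
```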

Some metrics have built-in delays. Patient satisfaction surveys arrive months after the patient experience that generated them. The patients responding to questions today are answering about a reality that no longer matches the current process. Plan for that lag; don't pretend it isn't there.

A formal Control phase checklist is worth building. If you ran an FMEA, recalculate the RPNs to reflect the controls now in place. If you mapped the current state, complete the future state map so the next improvement phase has a baseline. If your action plan included training, schedule the refresher. Update policies, procedures, position descriptions, and -- when applicable -- performance review criteria. Validate the changes with the customer whose voice you captured at the start. Then celebrate. Visibly. The people doing this work often aren't in CI roles full-time. Recognition is what gets them to do another project, and another after that.
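On the FMEA item specifically: the risk priority number is the standard product of severity, occurrence, and detection ratings, so the recalculation is simple arithmetic. A sketch with made-up ratings:

```python
# Standard FMEA risk priority number: severity x occurrence x detection,
# each typically rated 1-10. Ratings below are made up for illustration.
def rpn(severity, occurrence, detection):
    return severity * occurrence * detection

before = rpn(severity=8, occurrence=6, detection=7)  # before controls
after = rpn(severity=8, occurrence=3, detection=2)   # controls cut occurrence,
                                                     # and detection is now easier
print(f"RPN before controls: {before}, after: {after}")  # 336 -> 48
```

Severity usually doesn't change -- the failure is as bad as it ever was -- but good controls lower how often it occurs and how long it goes undetected, and the recalculated RPN documents that.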

Don't close the door on going back

DMAIC is drawn as linear because it's easier to teach that way. In practice, the work is iterative. You'll reach Analyze and realize Measure missed something. You'll reach Improve and discover the original problem statement was too narrow. Going back isn't failure -- it's the methodology working.

Simon's guidance: at each phase review, revisit the prior phases. Sometimes you'll need to change goals. The goals you set at the beginning were based on what you knew then. Now that you know more, the right goal may be different -- more ambitious, less ambitious, or pointed at a slightly different outcome. Renegotiate when you have to.

How KaiNexus connects

Several of the pitfalls Simon names connect directly to how improvement work is captured and made visible across an organization.

The knowledge bank problem is the clearest example. When someone starts a new improvement project, the first question they should ask is whether someone has already worked on this. Without a searchable repository of completed projects, every team starts from scratch. With one, teams find prior work, contact the people who did it, and build on what's already been learned. Simon was explicit about this during the Q&A -- the value of having all projects accessible by tags and keywords compounds over time.

The role-clarity and sponsor-engagement points map onto KaiNexus workflow design. Projects in the system have explicit owners, sponsors, and routing paths. Status is visible. Sponsors can see what's happening without micromanaging, and project leaders have a structured way to escalate when they need support. That visibility makes the engagement-versus-influence diagnosis easier to act on.

For the team-based work that DMAIC requires, KaiNexus surfaces what's happening across departments without requiring the project leader to chase status manually. Cross-functional teams can collaborate in one place rather than scattering work across email threads, spreadsheets, and personal task lists.

See KaiNexus in action →

About the presenter

Simon De Castro is a Lean Six Sigma Black Belt at Texas Health Resources, where he has worked on the design, implementation, and maintenance of KaiNexus since 2017. He is also certified as a coach and as a Change Management Practitioner. His career spans more than 25 years, with 17 of those in Lean Six Sigma managerial roles at companies including Sara Lee, Johnson & Johnson, and Texas Health Resources. Simon has designed and delivered Lean Six Sigma content throughout his career and has coached more than 300 yellow and green belt projects to successful completion.

Frequently Asked Questions

When should you use DMAIC versus a simpler approach?

DMAIC is designed for complex problems with real risk -- situations where measurement and analysis prevent wasted effort and bad solutions. For straightforward problems where the right answer is clear and the risk is low, a quick PDSA cycle is usually faster and just as effective. The certification-driven impulse to use DMAIC for every project distorts tool selection. The honest test is whether the problem actually requires analysis. If you already know what to do, just do it.

What does an over-engaged sponsor look like, and what do you do about it?

An over-engaged sponsor speaks first in meetings, requires every decision to be vetted before it's made, or participates so heavily that the project leader can't develop as a leader. Signs include communication frequency, authoritative tone rather than openness to suggestions, and visible deference from the team. The CI coach often has to mediate -- thanking the sponsor for their support while creating space for the project leader to make decisions. It's a real conversation, not an organizational chart problem.

What's the difference between leading and lagging indicators in practice?

Lagging indicators tell you how something went. Leading indicators tell you whether you're set up for it to go well. In a patient-falls project, the number of falls is the lagging indicator -- accurate and reportable, but it tells you nothing predictive. Leading indicators include the number of risk factors present in the area, training completion rates, and audit scores. Both matter. Most dashboards over-rely on lagging metrics because they're easier to count. Leading metrics give you the chance to act before the lagging number moves.

What's the most common 5 Whys mistake?

Stopping too soon. The first "why" usually surfaces a proximate cause that points to a tactical fix -- a warning sign, a brief reminder, a one-time correction. Continuing past it surfaces system causes that, if addressed, prevent recurrence entirely. A useful check: read the chain backward. If reading it backward exposes a logical gap, the chain has a disconnected step. Two verification questions for the root cause itself: if that cause hadn't been present, would this have happened? If you fix it, will the problem stop recurring? Both answers need to be yes.

Why does percentage compliance to a target hide improvement?

Because it makes 59 minutes look identical to 5 minutes. A binary yes/no against a threshold collapses all variation on one side of the line. The door-to-needle stroke standard requires reporting the percentage of cases under 60 minutes, but a hospital improving from 58 minutes to 25 minutes shows no change in the reported metric. Internally, the actual times -- with the median, the spread, and the step-by-step breakdown -- tell the improvement story. Report what's required externally, but measure and act on the underlying data internally.

Why does the solution hierarchy matter for sustainment?

Because people-based solutions decay. Training, reminders, and warnings rely on humans remembering and choosing correctly under pressure. When pressure is high or attention is divided, the wrong action becomes easy and the right action becomes hard. System-based solutions -- standardization, automation, mistake-proofing -- make the wrong action difficult or impossible regardless of the human state. Moving up the hierarchy doesn't replace the lower tiers; it adds resilience on top of them. If your action plan is mostly training, you've stayed at the bottom of the hierarchy and your sustainment will reflect that.

See KaiNexus in action →

Bonus Offer:

Free eBook: Guide to the 8 Wastes of Lean