
A KaiNexus webinar with Mark Graban and Greg Jacobson, MD

Watch the webinar here:


View the slides: 

Collecting ideas is the easy part.

Most organizations that launch an improvement program experience the same opening pattern. There's a big initial wave of submitted ideas. Employees who have been holding suggestions for years finally have a place to put them. Leadership celebrates the early numbers. The program looks successful.

Then it stalls.

The gap between ideas submitted and ideas actually implemented becomes visible. Submitted ideas pile up. Implementation lags. The visible gap teaches employees something specific: their input doesn't lead to action. They stop submitting. Within months, the program that started as a flood becomes what Mark calls an idea desert.

This webinar is about what causes that gap and how to close it. The argument running through it is sharper than it sounds: ideas without implementation aren't an engagement program, they're a teaching mechanism. They're teaching the workforce that improvement is bureaucratic, slow, and not worth the effort. The fix isn't more idea collection. It's a different operating discipline around what happens after an idea arrives.

Mark Graban is a Senior Advisor to KaiNexus and the author of Lean Hospitals and co-author of Healthcare Kaizen. Greg Jacobson, MD is co-founder and CEO of KaiNexus.

The case study that illustrates the problem

The session opens with a case study that Mark and Greg encountered: a hospital that had publicized its improvement program results online. The numbers told a clear story.

Estimated value of all ideas submitted: roughly $20 million.

Value of ideas approved: about $1 million.

Value of ideas actually implemented: about $70,000.

On the chart, the implementation bar was barely visible next to the submitted-ideas bar. The hospital was promoting these numbers as success. The reading Mark and Greg apply is the opposite: the gap between $20 million in submitted ideas and $70,000 in implemented improvement is the program's failure mode, made visible. Why weren't the other $19 million worth of ideas approved? Why, of the million that were approved, was less than 10% actually implemented? Each of those gaps is a place where the program taught employees that their participation wouldn't lead to action.

The case study is useful because the pattern is common. Most organizations don't publicize the gap. The gap exists anyway. The first step in closing it is making it visible -- and then acting like the gap matters as much as the submission volume does.

From wave to river, not wave to desert

The metaphor Mark uses for the trajectory of a healthy idea program is a steady river. The starting state is often a desert -- nothing flowing because the organization hasn't built the practice yet. The early state of most programs is a wave -- a burst of enthusiasm and submissions. The failure mode is the wave receding back into a desert because nothing happens to the ideas.

The goal is the river. Steady flow of submissions. Steady flow of implementations. No bursts, no droughts. The work of leadership in an improvement program is keeping the river flowing -- which means watching the gap between submissions and implementations and addressing it before it grows.

The KaiNexus customer data in the session shows what this looks like in practice. One customer's early data shows the wave pattern -- a big initial spike of submitted ideas (the blue line) and a delayed but rising implementation line (red). What's good about that customer's pattern isn't the wave; it's that the wave settled into a river-like flow rather than collapsing into a desert. Another customer's data over a longer period shows a steady blue line of submitted ideas with the red line tracking close behind. The two lines stay near each other. That's a healthy program.

The framing Greg adds: every program is operating on a different starting point. Some organizations need to address the wave -- the initial enthusiasm risks creating expectations the implementation system can't meet. Others are starting from the desert and need to seed the practice through gemba walks, waste walks, and active solicitation. The work is different in each case. The common feature is that the gap between submission and implementation determines whether the program sustains.

What "mind the gap" looks like in practice

The London Underground announcement gets at the discipline. When leaders look at idea submission and implementation data, the relevant question isn't how many ideas were submitted this month. It's how big the gap is between submission and implementation, and whether that gap is growing or shrinking.

A growing gap is a warning signal. It means submissions are outpacing the organization's ability to act, which is the pattern that teaches employees their input doesn't matter. The intervention isn't to ask for fewer ideas. It's to develop more implementation capacity -- usually by distributing the implementation work to more people rather than concentrating it in a manager or central improvement team.

A shrinking gap is a sign of health. It means the organization is keeping up with the ideas coming in, which sustains the feedback loop that produces more ideas.

A flat gap with low volume is the desert. Either the program hasn't been launched effectively, or earlier failures have taught the workforce that submitting ideas isn't worth the effort.

The KaiNexus reports that show submissions and implementations on the same chart make this visible. Most organizations operating without that visibility don't realize the gap is forming until participation has already started dropping. By then, the recovery work is harder than the prevention work would have been.
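As a rough illustration of the discipline (this is a sketch, not KaiNexus code or its actual report logic), the gap can be computed directly from monthly submission and implementation counts, and its trend classified as growing or not:

```python
# Illustrative sketch (not KaiNexus code): track the cumulative gap between
# ideas submitted and ideas implemented, month by month.

def gap_trend(submitted, implemented):
    """Return the cumulative gap per month and whether it is still growing."""
    gaps, total_sub, total_impl = [], 0, 0
    for s, i in zip(submitted, implemented):
        total_sub += s
        total_impl += i
        gaps.append(total_sub - total_impl)  # ideas still awaiting action
    growing = len(gaps) >= 2 and gaps[-1] > gaps[-2]
    return gaps, growing

# A "wave" that settles into a "river": a big initial spike of submissions,
# with implementation catching up and then keeping pace.
subs = [40, 25, 12, 10, 11, 10]
impl = [5, 15, 14, 11, 10, 10]
gaps, growing = gap_trend(subs, impl)
print(gaps)     # cumulative backlog each month
print(growing)  # False once implementation keeps pace with submissions
```

The specific numbers are hypothetical; the point is that a flat or shrinking backlog alongside steady submissions is the river pattern, while a backlog that climbs month over month is the early warning the section describes.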

Asking the right way

The session is specific about how leaders should solicit ideas. Several patterns are worth pulling out.

Ask for ideas. The instinct of many leaders is to wait for employees to bring problems forward. The empirical pattern is that most employees won't unless they're invited, encouraged, and given a reliable mechanism. Active solicitation matters.

Engage and inspire rather than demanding. Talking about the mission, the patient experience, the safety record, the quality goals -- these create the context that makes improvement feel meaningful. Demanding that "everyone submit five ideas this year" produces compliance ideas, which are different from real ones.

Don't set quotas or targets. Kaizen-style improvement works best when participation is voluntary. Quotas turn the work into bureaucracy and produce gaming.

Don't ask just for cost reductions. The most engaging frames are safety, reducing frustration, and improving quality. Cost reductions tend to follow naturally from improvements in these other dimensions, but leading with cost narrows the participation pool and signals to employees that only ideas with financial impact count.

Frame ideas as problems and opportunities rather than as solutions. The KaiNexus interface deliberately asks for a description of an opportunity first, with the proposed solution as an optional field. The reason: employees often arrive with a specific solution in mind, but the underlying problem is what matters for analysis. Capturing the problem keeps the door open for solutions the originator may not have considered.

The framing Greg adds: the capture step needs to be simple, disciplined, and consistent across the organization. Simple, because if the form has fifteen questions and asks frontline staff to estimate impact and pick a Lean tool, most people won't engage. Disciplined, because the process needs to operate continuously rather than in bursts. Consistent, because if every department uses a different format the data can't aggregate into anything useful.

Coaching ideas instead of rejecting them

The hardest part of running an idea program isn't capturing the ideas. It's responding to them in a way that builds engagement rather than killing it.

The session walks through a spectrum of responses to what a leader might initially perceive as a "bad idea." From worst to best:

Worst: blow off the idea entirely. Pretend it never came up. The employee learns that their input is invisible, and they stop submitting.

Bad: reject the idea outright. "Nope, can't do that. Bad idea." Even if the rejection isn't disparaging, the message is that the employee's contribution doesn't count.

Better: explain why the idea can't be implemented as proposed. If it's a million-dollar capital request that the organization can't fund this year, explaining the capital budgeting process at least respects the employee's contribution and gives them context.

Best: figure out something the organization can do that addresses the underlying problem, even partially. The employee almost never cares whether their specific proposed solution is implemented. They care about the underlying problem being addressed. A small, immediate fix that addresses 30% of the problem produces more engagement than a perfect solution that arrives a year later, if it arrives at all.

The key reframing Mark offers: most "bad ideas" aren't actually bad. They look bad because the manager doesn't know enough about the context. The discipline is to go and see -- talk to the employee who submitted the idea, talk to others involved in the situation, observe the actual process. The Lean principle of going to the gemba applies as much to evaluating ideas as it does to solving operational problems. Most ideas that initially look unreasonable turn out to be reasonable once the context is understood.

Distribute the implementation work

The reflex of most organizations is to make implementation the manager's responsibility. The manager reviews ideas, decides which ones to implement, assigns the work, and tracks completion. That model has a fatal limitation: the manager can only handle so many ideas. The bandwidth becomes the constraint, and the program stalls at whatever volume the manager can personally process.

The alternative is to distribute the implementation work across the team. The employee who submits an idea is often the right person to lead its implementation, with manager support and team collaboration. Other team members contribute by investigating, collecting data, brainstorming countermeasures, testing options, measuring results. The manager's role shifts from being the implementation bottleneck to being the coach who develops the team's capability to do the work themselves.

The data Greg references on this point is striking. The KaiNexus reports can show which team members are originating ideas, which are collaborating, which are responsible for implementation, and which are doing the actual work. The pattern in healthy teams is broad distribution -- many people contributing in different ways. The pattern in fragile teams is concentration -- one or two rock stars doing most of the work. When those rock stars leave the organization, everything collapses, because the capability was housed in individuals rather than the team.

The implication for leaders: build redundancy by design. Distribute the implementation responsibility so the program doesn't depend on a small number of heroic contributors. The platform helps make the distribution visible, but the design decision is a leadership decision.

The time problem

Both Mark and Greg name the "we don't have time" objection as one of the most common reasons improvement programs stall. They reframe it.

The objection is usually treated as a constraint. If the organization doesn't have time, the implication is that improvement has to wait until things calm down -- which never happens.

The reframe: lack of time isn't a constraint, it's a problem to solve. If improvement is genuinely important, the work of finding time is part of the work. Some organizations have addressed this by creating "no meeting zones" -- protected periods where managers and staff don't take meetings and can engage in improvement work instead. ThedaCare's approach is one well-documented example.

The underlying point Greg names is that "we don't have time" usually means "we haven't prioritized this." Both Mark and Greg admit to having put off writing a will -- not because they didn't have time, but because they hadn't prioritized it. The same dynamic applies in organizations. The work that gets done is the work that leadership prioritizes. The work that doesn't get done is the work that's been deprioritized, regardless of how it's framed.

When leadership genuinely commits to continuous improvement and prioritizes the time, the "we don't have time" objection disappears. When leadership only nominally supports it, the objection persists indefinitely. The fix is upstream of the objection.

Measuring impact, not just activity

The session covers the data side of running an effective improvement program. Several patterns are worth pulling out.

The aggregate impact of improvements implemented by KaiNexus customers is substantial. As of the time of this recording, more than 11,000 improvements had been implemented through the platform with a tracked impact exceeding $72 million. Roughly half a million hours of time saved across customers. These numbers compound over time as more organizations come online and existing programs mature.

The distribution of impact across individual improvements is worth knowing. About 1% of all improvements turn out to have a financial impact greater than $100,000. About 2.5% have an impact greater than $10,000. The implication: most improvements are small, but the tail of the distribution is long. The team that generates a thousand small improvements finds the ten or twenty large ones embedded in them. The team that only asks for large improvements never finds either.
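The arithmetic behind that claim can be made explicit. Using the percentages cited in the session (~1% of improvements exceed $100,000 in impact, ~2.5% exceed $10,000), this hedged sketch shows the expected number of large wins hidden in a given volume of small improvements:

```python
# Sketch of the long-tail argument using the percentages cited in the session:
# about 1% of improvements exceed $100k in impact, about 2.5% exceed $10k.

def expected_big_wins(n_improvements, p_over_100k=0.01, p_over_10k=0.025):
    """Expected counts of large improvements embedded in a volume of small ones."""
    return {
        "over_100k": n_improvements * p_over_100k,
        "over_10k": n_improvements * p_over_10k,
    }

print(expected_big_wins(1000))  # {'over_100k': 10.0, 'over_10k': 25.0}
print(expected_big_wins(10))    # asking only for big ideas surfaces almost none
```

A team generating a thousand improvements can expect roughly ten six-figure wins; a team that only solicits big ideas generates too little volume to find any.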

Impact tracking goes beyond cost savings. The session shows a chart from a customer measuring safety improvements (about 10% of all improvements on average), quality improvements, employee satisfaction, customer satisfaction, time savings, and other dimensions. A program that only counts dollars systematically undercounts its own value, because most improvements affect multiple dimensions and many produce value that doesn't convert cleanly to a dollar figure.

The KaiNexus reports that show this kind of distribution make the program defensible to leadership. Without the data, the program is anecdotal. With the data, the leader running it can show the CFO exactly what the organization is getting for its investment. That visibility is what keeps the program funded.

Spread is part of the implementation discipline

The fifth element the session covers is the discipline of spreading implemented improvements across the organization.

A health system with ten hospitals might have one hospital implement an improvement worth $85,000 a year. If the other nine hospitals implement the same improvement, the spread adds another $765,000 a year. If they don't, the lost opportunity is the same $765,000 -- not as a loss the organization will see on a financial report, but as value that was available and wasn't captured.

The work of spread is partly cultural and partly infrastructural. Culturally, organizations need to think of themselves as systems rather than as collections of independent units. The phrase "health system" implies systemic thinking; the reality is often a collection of hospitals operating in parallel without learning from each other. Infrastructurally, the spread requires visibility -- a way for site B to see what site A has implemented and adapt it for their context.

The spread isn't copy-paste. Different sites have different conditions, different patient mixes, different staffing realities. An improvement that worked at one location might need adaptation at another. The discipline includes the feedback loop -- if site B improves on the original implementation, that improvement should travel back to site A and to other sites that adopted the original version. Spread done well is iterative and compounding. Spread done badly is just rollout, which produces compliance without engagement.

How KaiNexus connects

The platform supports the disciplines the session walks through. Several connections are worth naming directly.

The visibility into the gap between submissions and implementations -- the chart that shows the blue submitted line against the red implemented line -- is what makes the "mind the gap" discipline operational. Without that visibility, the gap forms invisibly until participation has already started dropping.

The distribution of implementation work across the team is supported by reports that show who is originating ideas, who is collaborating, who is responsible for implementation, and who is doing the actual work. The patterns in those reports are diagnostic. Concentration in one or two people is a fragility signal. Broad distribution is a health signal.

The impact tracking that covers safety, quality, satisfaction, time, and dollars is what lets the program report aggregate value across all the dimensions that matter. A platform that only tracks dollars systematically undercounts the program's actual value.

The spread mechanism -- making implemented improvements at one site visible and searchable for other sites -- is what turns local improvements into organizational learning. Without that infrastructure, the $765,000 of missed cross-site value is structural, not optional.

None of this substitutes for the discipline. The platform supports the practice; it doesn't replace it. The leadership behaviors, the coaching, the distribution of work, the discipline of acting on ideas rather than just collecting them -- all of these have to be done by the humans running the program. The platform makes the discipline easier to sustain.

See KaiNexus in action →

About the presenters

Mark Graban is a Senior Advisor to KaiNexus and an internationally recognized Lean consultant, author, and speaker. He is the author of Lean Hospitals and co-author of Healthcare Kaizen. He has helped healthcare systems and other organizations strengthen improvement culture and leadership practices.

Greg Jacobson, MD is co-founder and CEO of KaiNexus. A practicing emergency physician before founding the company, he has focused on the science of organizational culture, habit formation, and the systems that make continuous improvement sustainable at scale.

Frequently Asked Questions

Why does the gap between ideas submitted and ideas implemented matter so much?

Because the gap is what teaches employees whether their participation in the improvement program is worth the effort. When ideas pile up unimplemented, the implicit message is that improvement is bureaucratic and slow. Employees who experience this stop submitting. The gap doesn't just represent unrealized value -- it actively damages the engagement that produces future ideas. Closing the gap is more important than expanding the volume of submissions.

What's the difference between a wave, a desert, and a river of ideas?

A wave is the initial burst of submissions when an idea program launches. It's enthusiasm, not a sustainable pattern. A desert is what happens when the wave recedes and the program collapses -- few or no submissions, low engagement. A river is the healthy state -- steady flow of submissions, steady flow of implementations, no bursts and no droughts. The goal of running an idea program isn't to maximize the wave. It's to convert the wave into a river before the program slides into a desert.

What should a leader do when an employee submits what looks like a bad idea?

First, go and see. Most "bad ideas" look bad because the leader doesn't know enough about the context the employee is operating in. Talk to the employee, talk to others involved, observe the actual situation. After understanding the context, look for something the organization can do that addresses the underlying problem -- even partially. Employees rarely care whether their specific proposed solution gets implemented. They care about the problem being addressed. Rejecting ideas outright kills engagement; coaching the underlying problem builds it.

Why distribute the implementation work across the team rather than centralizing it with the manager?

Because the manager becomes the bottleneck. No matter how skilled the manager is, they can only personally implement so many improvements. Concentration of implementation in one or two people also creates fragility -- when those people leave, the program collapses. Distributing the work across the team builds redundancy by design. It also builds team capability. The manager's role shifts from being the doer to being the coach who develops the team's ability to do the work themselves.

Why is "we don't have time" the wrong frame for improvement work?

Because it treats time as a constraint rather than as a problem to solve. If the organization is genuinely committed to continuous improvement, the work of finding time is part of the work. ThedaCare's "no meeting zones" are one example -- protected periods where managers and staff don't take meetings and can do improvement work instead. The underlying issue is usually that "we don't have time" is shorthand for "we haven't prioritized this." When leadership genuinely prioritizes improvement, the time problem solves itself.

What's the right way to measure the impact of an improvement program?

By tracking impact across multiple dimensions, not just dollars. About 10% of improvements on average are safety-related. Many improvements affect quality, employee satisfaction, customer satisfaction, and time savings without producing measurable cost savings. A program that only tracks dollars systematically undercounts its value, because most improvements affect multiple dimensions and many produce value that doesn't convert cleanly to a dollar figure. The discipline is to track impact broadly while still being able to report financial impact where it exists.

