Good Intentions Don’t Translate to Good Outcomes
Most programs don’t fail because of bad people. They fail because of good people with unchecked ideas.
The requirements were written with care. The reviews happened. The working groups met. Everyone was trying to do the right thing. And somewhere between the original need and the final delivery, the thing stopped being about the person who needed it and started being about the process surrounding it.
That’s not a cynical observation. It’s a structural one. And until you see the structure clearly, you can’t fix it.
The Good Idea Fairy
There is a character who shows up in nearly every program that goes sideways. They don’t necessarily have a formal role on the org chart, but they are the chief cause of the drift: the Good Idea Fairy. They’re not malicious. In fact, they almost always genuinely want to help. They’re usually senior enough to be heard and removed enough from the front line to have stopped feeling the problem directly.
They arrive with a suggestion.
“What if it were multi-purpose?”
“Could we make it more flexible?”
“Shouldn’t it be adaptive?”
Those are not requirements. They are aspirations wearing the clothes of requirements. And in a regulated environment — government, defense, healthcare — each one of those words is a trigger.
The moment “multi-purpose” enters the requirements document, the program is no longer solving the original problem. It is now also anticipating every other problem it might conceivably be asked to solve. Safety has to assess the new contexts. Security has to expand the threat surface. Testing has to cover the additional use cases. Documentation has to cover the testing. Training has to cover the documentation. Governance has to oversee all of it.
None of those disciplines is wrong to respond the way it does. Every regulated discipline is doing exactly what it is supposed to do. The cascade is a feature, not a flaw. It is working as designed.
The actual failure occurred one step earlier, when nobody stopped to think. When no one asked the Good Idea Fairy a simple question: what problem on the front line are you actually solving with ‘multi-purpose’?
More often than not, the answer is none. But the suggestion sounds better. It feels more defensible. It gives the impression of forward thinking. One word, in one meeting, with the wrong audience and without a single follow-up question. Now the program is six months longer and the end product is considerably harder to use.
The Person Who Needed It
At the beginning of every program, if you look back far enough, there is a person. This person has an actual problem. The problem is usually specific and observable. (It is also inevitably far less complicated than what eventually gets built to address it.) This is also the person who gets stuck with the complicated, barely usable ‘multi-functional’ product that falls out of the other end.
That person, usually called an end-user, is consulted at the start. Their input goes into a document. The document gets approved. And then, almost without exception, the program stops talking to them. The end user is an afterthought until the program wants to roll out their shiny, flexible solution.
In between the initial ask and the final reveal, a lot of things happen. Decisions get made about what they probably need. For them, not with them. Constraints get applied based on what is assumed about their environment. Assumed, because once again, no one thinks to ask. We have to go fast, right? Features get added because someone, somewhere, decided they would be useful and make the product even more ‘multi-functional’ than the first ‘multi-function’ feature that was added. None of the people involved are the person with the problem.
They are interpreting, translating, and augmenting — through their own lens, with their own priorities, at increasing distance from the original need.
By the time delivery happens, the gap between what was needed and what was built can be enormous. Not because anyone failed at their job. Because the job everyone was doing had quietly shifted from solving the problem to managing the program.
The person with the problem is still there! The one who needed a solution. They’ve been there the whole time. They adapted. They found a workaround. They got on with things. They didn’t file a change request, because the system made that part hard and their need didn’t wait.
When the delivery finally arrives, they look at it and recognize almost nothing of what they originally asked for. And they are not wrong. They take the ‘multi-function’ turd, throw it on a shelf or in a trash can, and go back to their improvised solution.
Good Intentions Are Not Enough
The instinct to make something more ‘capable’, more ‘flexible’, more ‘future-proof’ is not a bad instinct. In isolation, almost every addition that gets made to a program is defensible. The additional functions mean it can handle more situations. The compliance requirement protects everyone involved. But defensible good intentions are not the same as actual requirements. Requirements solve actual problems held by real people.
The problem is not any individual decision. It is the accumulation of decisions made without returning to the original question: does this still serve the person we built it for?
That question stops being asked early in most programs. Not because people are negligent. Because the process doesn’t require it. Requirements are locked. Scope is defined. The program moves into execution and the feedback loop between what is being built and what is actually needed gets longer and longer until it effectively doesn’t exist.
By the final delivery, the program has answered a hundred questions that nobody asked and has not answered the one that started the whole thing.
What the Structure Has to Require
The fix is not a better requirements template. It is not another review gate or a more detailed approval process. Those are the mechanisms that got us here.
The fix is a standing obligation to return to the original person, with the original question, at every point where the program changes direction. Not a user representative appointed by management. The actual person who will use the thing, on the day it matters, under the conditions it was built for.
Before any new features get added, someone needs to be able to answer: who asked for this, and have we confirmed with the end user that it serves them?
Before any constraint gets applied, someone needs to ask: have we told the user what this means for them, and do they still get what they needed?
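For programs that track changes in software, these two gates can be sketched as a tiny pre-approval check. This is an illustrative sketch only; the field names (`requested_by`, `user_confirmed`) are hypothetical, not a prescribed schema, and the check stands in for a conversation, not a form.

```python
# Illustrative scope-change gate: a proposed addition passes only if it is
# traceable to a named requester AND the end user has confirmed it serves
# them. All field names here are hypothetical, not a prescribed schema.

def gate_feature(change: dict) -> tuple[bool, str]:
    """Return (approved, reason) for a proposed feature addition."""
    if not change.get("requested_by"):
        return False, "No one can name who asked for this."
    if not change.get("user_confirmed"):
        return False, "End user has not confirmed this serves them."
    return True, "Traceable and confirmed by the end user."

approved, reason = gate_feature({"requested_by": "ops lead", "user_confirmed": False})
print(approved, reason)  # approved is False: the end user never confirmed
```

The point of the sketch is the ordering: traceability and user confirmation are checked before anything else about the change is considered.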
Those conversations are uncomfortable because people assume they surface disagreement and slow decisions down. Both do happen. But what the conversations actually do is prevent the much larger cost of discovering at delivery that the thing you built is not the thing that was needed.
The most expensive outcome in any program is a faithfully executed wrong requirement. Everything that follows building the wrong thing expertly costs more than the conversation that could have prevented it: the rework, the delay, the loss of confidence, the capability gap that persisted while the program ran, and so on.
Good intentions matter. They are necessary. They are not sufficient.
What has to be asked, at every stage, is not just “Are we doing this right?” It’s that question AND “Is this still the right thing to do for the person who needed it in the first place?”
If you can’t answer both of those questions, the program is running for itself, not for the person you think it serves.
At Leitwolf, keeping the original need in frame — and the original user in the room — is foundational to how we work. If your program has drifted or you want to make sure it won’t, let’s talk! Contact us: info@leitwolf.net
You Built Exactly What They Asked For. They Hate It.
Speaking from experience, this is one of the most demoralizing moments in technical program work. You delivered on time. You delivered within budget. You built what the requirements document specified, line by line, with evidence. And the customer looked at what you built and told you it wasn't what they needed.
You're not wrong. They're not wrong. The requirements were wrong. And odds are that nobody caught it until you handed them something they couldn't use.
How This Happens
Requirements documents are written by humans, at a specific moment in time, based on what those humans understood about the problem at that moment. The understanding is almost always incomplete. The language is almost always imprecise. The assumptions are almost always unstated. None of that is intentional. It's just the nature of trying to describe a complex technical need in a document before the solution exists.
The problem is that most programs treat the requirements document as a finished product rather than a starting point. It gets written, reviewed, approved, and then handed to the development team as if the act of approval means the requirements are correct. They're not correct. They're the best available description of what was understood at the time. Those are different things.
When the development team executes faithfully against an incomplete understanding, they build a faithful representation of that incomplete understanding. Which is exactly what your customer is looking at when they tell you it's not what they needed.
The Translation Gap
Between what a customer says they need and what they actually need, there is almost always a gap.
Sometimes it's small: a terminology difference, an unstated assumption about operating environment, a constraint that was obvious to the customer and invisible to the developer. Those gaps produce surprises at delivery but usually survivable ones.
Sometimes it's large: the customer described the solution they imagined rather than the problem they were trying to solve, and the solution they imagined doesn't actually work. Those gaps produce programs that get cancelled, restarted, or quietly shelved.
We have worked on programs at both ends of that spectrum. The ones that ended well had someone whose job it was to close the translation gap early — to sit with the customer, understand the problem behind the requirement, surface the assumptions, and make sure the development team was building against the actual need rather than the documented approximation of it.
The ones that ended badly skipped that step. Usually because it wasn't in the contract, or because the schedule didn't allow for it, or because raising questions about the requirements felt like challenging the customer.
Why Nobody Raises the Flag
The pressure to just execute is real. You hear it directly: “This is a go-fast program.” “We need this now.”
Raising questions about requirements feels like slowing things down. It can feel like telling the customer they don't know what they want — which is uncomfortable for everyone. In some program structures, it creates contract modification risk, which neither side wants to deal with.
So the requirements get accepted and never seriously questioned. Work begins. And the first time anyone really tests whether ‘requirements’ reflect ‘actual needs’ is at delivery — when it's too late and too expensive to fix.
The flag should have been raised at requirements review. Or at design review. Or at any of the checkpoints between the document and the delivered product where someone with the right knowledge and the right standing could have asked: does this actually solve the problem?
That question is not a threat to the program. It's the most important question the program needs answered. It's also, for some reason, the most terrifying question to ask in practice.
Getting Ahead of It
The fix isn't a ‘better’ requirements template. It’s not more people looking at the problem. Neither of those helps. The real solution is a different kind of conversation at the front of the program.
Before requirements are locked, someone needs to be asking: what problem are we actually solving? What does success look like to the operator, not the program office? What assumptions are embedded in these requirements that we haven't tested? What would have to be true for this to work — and how confident are we that those things are true?
Those conversations are uncomfortable. They surface disagreement early, which feels like friction. What they actually are is load-bearing work that prevents the much larger friction of discovering at delivery that you built the wrong thing. They also require that someone, somewhere on the program, has the answers. And maybe that is why the question is so threatening: no one wants to consider that the answer may be ‘no one here has them’.
But that can be fixed. And it’s easier to do at the start, because one of the most expensive mistakes you can make in technical development is a ‘faithfully executed, but entirely wrong’ requirement. Everything after that point costs more than it should have: rework, delay, relationship damage, crushed morale, lost customer confidence. It goes on and on. The gift no one wants, and one that keeps on giving.
None of it was necessary. The information to prevent it is always available. You just need someone willing to ask the right questions before the work starts. And then keep asking as the work progresses!
Someone has to be willing to be the person who asks. That's a specific skill and a specific kind of courage — and it's easier when it's someone's explicit job rather than an uncomfortable addition to everyone else's.
At Leitwolf, requirements translation — figuring out what the customer actually needs before the build starts — is one of the core things we do. If you've been burned by this before or want to make sure you won't be, let's talk.
A Quick Recipe for Lasting Change
Why Change Initiatives Fail the People Who Have to Implement Them
Most change initiatives don't fail because the plan was wrong. They fail because the plan was designed by the wrong people. Not wrong in terms of competence. Wrong in terms of position. The planners didn’t include the right people.
They forgot an important fact of life: the people who design and approve a change are almost never the people who have to live with it. And that gap — between the people who drew the map and the people walking the terrain — is where most well-intentioned initiatives quietly collapse.
We've watched this happen enough times to recognize the pattern before it becomes a crisis. It usually looks like this: leadership identifies a problem, assembles a team, builds a plan, and rolls it out. The plan is logical. The objectives are clear. The timeline is reasonable. And then something strange happens — the people it was designed for start working around it instead of through it. Workarounds multiply. Momentum stalls. Six months later, everyone agrees the initiative didn't take, and the post-mortem blames culture, or resistance to change, or the wrong hire.
It was none of those things. It was architecture.
The Plan Worked for the People Who Designed It
Here's what actually happened: the plan was optimized for the outcome, not for the humans in the middle.
When you design a change initiative from the top down, you're solving for the end state. You know what success looks like, you build a path to get there, and you hand it to the people responsible for execution. What you don't always account for is what that path actually requires of the people walking it — the cognitive load, the workflow disruption, the moments where the new process conflicts with the old one in ways that weren't visible from the design room.
The people implementing the change aren't obstacles. They're load-bearing parts of the architecture. When the design doesn't account for them, the structure fails — not because they pushed back, but because the weight was distributed wrong from the start.
This is the same principle that breaks technical programs, product launches, and organizational restructures. It's not a change management problem. It's a requirements problem. You built something without fully understanding what it needed to do for the people using it.
What It Looks Like When It's Fixed
The difference between change that sticks and change that doesn't is usually visible before rollout — if you know where to look.
Teams that get this right do one thing differently: they treat the people implementing the change as primary sources of requirements, not secondary stakeholders to communicate with. Before the plan is finalized, they're asking the people closest to the work: where will this break? What are we not seeing? What does this require of you that we haven't accounted for?
Those conversations are uncomfortable. They surface problems before the plan is locked, which means rework before launch rather than failure after it. Most organizations avoid that discomfort and pay for it later — in stalled rollouts, in workarounds that become permanent, in the slow erosion of trust that happens when people feel like change is something that happens to them rather than with them.
The goal isn't a perfect plan. The goal is a plan that works for the people who have to execute it. Those are different things, and confusing them is where most initiatives go wrong.
The Practical Version
Before you finalize any change initiative, run it through three questions with the people who will actually have to implement it (not the people who designed it):
Where does this break for you? Not hypothetically. Specifically. What part of your actual day does this collide with in a way we haven't accounted for?
What does this assume about how you work that isn't true? Every plan has embedded assumptions. Most of them are invisible until someone who lives the work points them out.
What would make this easier to execute without changing the outcome? This is the question that produces the most useful design changes — because the people closest to the work almost always know a better path that the designers missed.
These aren't feel-good questions. They're load-bearing requirements. The answers change the architecture before it fails in the field — which is the only time it's cheap to change it.
Most change initiatives aren't killed by resistance. They're killed by designs that never accounted for the people they depended on. That's not a people problem. It's a process problem. And process problems have solutions — if you're willing to find them before you need them.
At Leitwolf, we help organizations build the right structure before problems compound — or diagnose what went wrong after they already have. If your initiative is stalling or you want to make sure it doesn't, we offer a free 30-minute assessment.
Sneak Peek:
5 Warning Signs Your Project is Failing (And What to Do About Them)
After 20+ years building systems for high-stakes environments—including tools for special forces operators—we've seen the same patterns sink dozens of projects.
Most failing projects don't announce themselves with catastrophic errors. They fail slowly, predictably, through warning signs that show up early and get ignored until it's too late.
Read about Warning Sign #1 below. If it sounds familiar, please sign up to get the full guide.
Warning Sign #1: Uncontrolled Scope Creep
What it looks like
You start with a dashboard showing three critical metrics. Six weeks later, you're building a full business intelligence platform with custom reporting, predictive analytics, and integration with five different systems. Nobody remembers deciding to do this - it just happened one "quick add" at a time.
The tech lead lets customers reprioritize every sprint. The "good idea fairy" shows up with each change of command. Someone who doesn't understand the technical constraints keeps adding requirements every time you talk. Before you know it, the original three-month project is on month nine with no end in sight.
Why it's dangerous
The project stalls completely. You deliver nothing for months while trying to build everything. Your team gets demotivated watching the finish line move further away every week, and you start losing your best people. Budgets explode (though in government work, that's sometimes less visible than it should be).
But here's the worst part: the customer never gets the solution they actually needed. That original dashboard with three metrics? It would have solved their problem. Now they're waiting indefinitely for a system they didn't ask for and may not even want.
What to do about it
Implement constraint-based decision making. This isn't about saying "no" to everything - it's about making the right thing the easy thing to do.
The 3-Question Filter: Before adding anything to scope, answer these three questions:
Does this solve the original problem we agreed to solve?
Can we deliver the core solution without this?
If this is truly essential, what are we removing to make room?
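For teams that want to make the filter mechanical, the three questions above can be sketched as a small decision function. This is illustrative only; the field names are hypothetical, and the real filter is the conversation behind each answer, not the code.

```python
# Illustrative 3-Question Filter for scope additions. A request is accepted
# only when it serves the original problem, is truly essential (the core
# solution can't ship without it), and names what gets removed to make
# room. Field names are hypothetical, not a prescribed schema.

def scope_filter(request: dict) -> str:
    if not request.get("solves_original_problem"):
        return "reject: does not serve the problem we agreed to solve"
    if request.get("core_deliverable_without_it", True):
        return "defer: nice-to-have; core solution ships without it"
    if not request.get("removed_to_make_room"):
        return "hold: essential, but nothing identified to remove"
    return "accept: essential and traded off explicitly"

request = {
    "solves_original_problem": True,
    "core_deliverable_without_it": False,
    "removed_to_make_room": "custom theming backlog item",
}
print(scope_filter(request))  # accept: essential and traded off explicitly
```

Note the default in question two: if nobody has established that the addition is essential, the safe answer is to defer it, not to build it.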
Get an outside perspective. Before accepting new requirements, run them past someone who wasn't in that meeting. Fresh eyes catch scope creep that insiders miss. Make it a rule: assumptions and changes get tested by someone outside the immediate team.
Document everything in a shared space. When someone suggests an addition, write it down where everyone can see it - with the date and who requested it. This simple act makes people think twice and gives you a paper trail when the finish line starts moving.
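If the shared space is a file the whole team can see, the logging habit can be as small as this sketch. The CSV layout and file name are assumptions for illustration, not a prescribed format.

```python
# Illustrative scope-change log: every suggested addition gets written down
# with a date and a requester, in one shared file everyone can see.
# The CSV layout and file name here are examples, not a prescribed format.
import csv
import datetime
import pathlib

def log_scope_change(log_path: pathlib.Path, description: str, requested_by: str) -> None:
    """Append one scope-change entry; create the file with a header if needed."""
    new_file = not log_path.exists()
    with log_path.open("a", newline="") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(["date", "requested_by", "description"])
        writer.writerow([datetime.date.today().isoformat(), requested_by, description])

log = pathlib.Path("scope_changes.csv")
log_scope_change(log, "Add predictive analytics module", "steering committee")
```

The value is not the tooling; it is that the date and the requester are on the record before the addition is discussed.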
Empower the person closest to the problem. Your tech lead should have the authority to push back on mid-sprint reprioritization. The micromanager three levels up shouldn't be making technical decisions they don't understand.
The key insight: scope creep happens when there's no system preventing it. Build the constraint into your process, and you won't have to rely on people remembering to resist it under pressure.
Sound Familiar?
This is one of five warning signs we see repeatedly in failing projects. The full guide covers:
Warning Sign #2: Misaligned Stakeholders - When everyone agrees in the kickoff but you still build the wrong thing
Warning Sign #3: Hidden Blockers - The gatekeepers and bureaucratic mazes that surface at the worst possible time
Warning Sign #4: Unrealistic Scheduling - When timelines are set before anyone talks to the people doing the work
Warning Sign #5: Lack of Clear Authority - When everyone thinks they're in charge and nothing moves
Each warning sign includes what it looks like, why it's dangerous, and specific steps to fix it.
Get the complete guide: No sales pitch. No fluff. Just honest observations from the field and practical steps you can take—whether you work with us or not!
Submit your email using the subscription link below.