
Some problems are polite.
They show up, sit still, let you measure them, and then you fix them. You patch the leak. You swap the broken part. You write the policy. Done.
And then there are the other problems. The ones that move.
You think you solved them, and a month later the exact same issue returns, but sideways. Different shape. Different cause. Different people. Same headache. It feels like trying to organize a closet while someone keeps tossing new stuff into it from the hallway.
These are evolving problems. And designing solutions for them is… not the same craft as building a one-time fix.
Because the real enemy is not the bug, or the outage, or the angry customer email. It’s the fact that the conditions that created the problem are changing while you work.
So this piece is about that. How to design solutions that survive a shifting environment. Not perfectly. Nothing does. But long enough to matter. Long enough to stay useful.
The first mistake is pretending the problem is stable
A lot of teams do this without realizing it.
They treat the problem statement like a photo. A snapshot. “Users can’t find X.” “Orders are failing.” “People are churning.” And then they rush to build something. A feature. A process. A dashboard. A training doc.
But evolving problems are more like videos. They have a timeline. They have momentum. And the problem you are “solving” is often just one frame in a longer sequence.
If you design a solution for the snapshot, you might still ship something. You might even ship something useful. But it tends to decay fast. Because you optimized for the present, not the pattern.
This is why so many “fixes” feel like they worked for a week and then quietly stopped working, and nobody really wants to talk about it.
So the first thing you do is simple, but not easy.
You admit, out loud, that the problem is likely to change.
That sentence alone changes how you design.
Look for the system, not the symptom
Evolving problems usually live inside systems. And systems do what systems do. They adapt, they push back, they reroute pressure.
You block one path and the load goes somewhere else.
So instead of asking “what’s broken?” you ask:
What keeps producing this outcome?
Not once. Repeatedly.
A few examples, just to ground it:
• If customer support volume keeps spiking, the issue might not be support. It might be product complexity, unclear pricing, confusing onboarding, unreliable delivery, or even marketing overselling.
• If a team keeps missing deadlines, it might not be effort. It might be unclear scope, constant priority changes, poor dependency management, or incentives that reward starting over finishing.
• If fraud tactics keep changing, it’s not because your last rule was dumb. It’s because fraud is an adaptive opponent. They watch what you do.
When the problem evolves, it’s usually because the system is responding. Either to you, or to the market, or to some constraint you did not control.
Designing solutions here means you are designing interactions, incentives, feedback loops. Not just outputs.
The solution is rarely “a thing”. It’s a capability
This is a big shift, and it’s one of those ideas that sounds like jargon until you feel it. For stable problems, “a thing” works. A feature. A checklist. A new tool.
For evolving problems, the thing becomes outdated. So what you want to build is a capability. A capability is the ability to keep responding as the problem changes.
It’s like the difference between:
• Writing a single play for one opponent
• Building a team that can read the field and adjust in real time
Capabilities often look boring from the outside. They sound like:
• Better detection and alerting, not just a one-time fix
• A decision process that makes tradeoffs explicit
• A feedback loop that keeps learning
• Guardrails that prevent the worst outcomes while still allowing flexibility
• Clear ownership and escalation paths
Not glamorous. But these are what keep you from constantly firefighting.
If you only ship “a fix”, you are done the moment the world changes. If you ship a capability, you have built something that can change with it.
Start designing with uncertainty in mind
When people design for evolving problems, they often ask for more information. More data, more research, more time. Totally reasonable. But you can’t always wait. And the data you want might not exist yet.
So the question becomes:
How do you design when you know you are wrong, you just don’t know how yet?
A few practical moves help:
Make it easy to change the solution without ripping everything apart.
This can mean modular architecture in software, sure. But it can also mean modular policy. Or modular process.
Example: instead of a policy that says “approve all refunds under $50 automatically, everything else manual”, you can design a policy that says “refund thresholds are reviewed monthly, and we track abuse rates and customer satisfaction”. Same concept, but one of them is adjustable by design.
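To make the contrast concrete, here is a minimal sketch in Python (all names hypothetical) of the same refund rule written so the threshold is data, not code:

```python
# Hypothetical sketch: the refund threshold lives in an editable policy object,
# not in the code path that applies it.
from dataclasses import dataclass

@dataclass
class RefundPolicy:
    auto_approve_limit: float   # reviewed monthly, adjustable without a code change
    review_cadence_days: int = 30

def decide_refund(amount: float, policy: RefundPolicy) -> str:
    # The decision logic stays stable; only the policy values evolve.
    if amount <= policy.auto_approve_limit:
        return "auto_approve"
    return "manual_review"

policy = RefundPolicy(auto_approve_limit=50.0)
decide_refund(30.0, policy)    # "auto_approve"
decide_refund(120.0, policy)   # "manual_review"

# Next month's review can raise the limit without touching decide_refund:
policy.auto_approve_limit = 75.0
```

The point is not the dataclass. It’s that the number you expect to revisit is separated from the logic you expect to keep.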
Prefer reversible decisions early.
There are decisions you can undo quickly. And there are decisions that lock you in.
With evolving problems, you want more reversibility early. Smaller bets. More experiments. Less pride attached to any single approach.
If you can roll back, you can move faster with less fear. And fear slows everything down.
Hard-coding the logic is how you end up trapped.
This applies everywhere. Product rules, pricing rules, fraud rules, escalation rules. If the rules will change, don’t bury them in places that are hard to edit.
Build the mechanism so rules can be updated safely and visibly. With logs. With review. With monitoring.
Because the point is not that rules never change. The point is that rule changes are controlled and learnable.
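As a rough illustration, a rule store like this (all names hypothetical) keeps rule values in one editable place and records every change for review:

```python
# Hypothetical sketch: rules live in a small store that logs every change,
# so updates are visible and reviewable rather than buried in code.
import datetime

class RuleStore:
    def __init__(self):
        self.rules = {}       # current rule values
        self.changelog = []   # who changed what, when, and why

    def set_rule(self, name, value, changed_by, reason):
        old = self.rules.get(name)
        self.rules[name] = value
        self.changelog.append({
            "rule": name, "old": old, "new": value,
            "by": changed_by, "reason": reason,
            "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        })

store = RuleStore()
store.set_rule("max_auto_refund", 50, "alice", "initial policy")
store.set_rule("max_auto_refund", 75, "bob", "abuse rate stayed low for 3 months")
# The changelog is what makes rule changes controlled and learnable.
```

In a real system the store would sit behind review and monitoring; the sketch only shows the shape: current state plus an audit trail.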
Put feedback loops at the center, not the end
Most teams treat feedback as something you do after launch.
But with evolving problems, feedback is not a phase. It’s the product. It’s part of the solution. You want tight loops. Fast signals. Clear ownership. A rhythm.
So you design questions like:
• What would tell us this solution is starting to fail again?
• What metrics will move first, before everything blows up?
• Who gets alerted, and what can they actually do about it?
• How do we capture what we learned and feed it back into the next iteration?
This is where a lot of “smart” solutions die. They launch with no early warning system. So the team only notices failure when it becomes obvious and expensive.
A feedback loop is basically your solution’s immune system.
Without it, you’re just hoping the world stays still.
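As a tiny illustration of an early-warning signal, here is a sketch (thresholds and numbers hypothetical) of a drift check on a leading metric:

```python
# Hypothetical sketch: compare a leading metric against its baseline so drift
# is noticed before it becomes an obvious, expensive failure.
def check_drift(current: float, baseline: float, tolerance: float = 0.2) -> bool:
    """Return True if the metric has drifted past tolerance (e.g. 20%)."""
    if baseline == 0:
        return current != 0
    return abs(current - baseline) / baseline > tolerance

# e.g. refund rate was 2% at launch; it's 3.1% this week
if check_drift(current=0.031, baseline=0.02):
    # In a real system this would page the owner, not just print.
    print("ALERT: refund rate drifted past tolerance; review the policy")
```

The check itself is trivial. What matters is that someone decided which metric moves first, what tolerance is acceptable, and who gets the alert.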
Use “versioning” as a mindset
Here’s something I wish more teams did: treat solutions like versions, not monuments. V1 is not a failure. V1 is what you can responsibly ship with what you know now.
Then you plan for V2, V3, and so on. Not in a rigid roadmap way. More like, you acknowledge the solution will need updates, and you budget for it culturally and operationally.
This is especially important for leaders, because teams copy what leaders reward.
If leadership celebrates a “final fix”, teams will pretend the fix is final. Even when it isn’t. And then you get brittle systems and quiet workarounds.
If leadership celebrates learning velocity and resilient design, teams will build in a way that expects change and handles it.
Versioning also helps emotionally. It removes the pressure to get it perfect. It makes iteration normal. It makes change feel like progress, not failure.
Design the human parts, because the human parts are the system
Evolving problems almost always have a human layer. Sometimes the problem is humans. Sometimes the solution is humans. Usually it’s both.
So if you design only the technical piece, you miss the real failure modes. A few human realities that matter:
When problems evolve, they drift across teams. The original owner moves roles. The product changes. The customer segment shifts. Suddenly nobody “owns” the mess.
A resilient solution includes explicit ownership, and a way to reassign ownership without drama.
If your incentives push speed over quality, you will get fast fragile solutions. If your incentives push local team metrics over global outcomes, you will get silo behavior.
Sometimes the best “solution” is changing what people are measured on. Which sounds political, because it is. But it’s also design.
If your solution requires constant manual babysitting, it will rot. People get busy. They forget. They quit. They burn out.
So part of designing for evolving problems is designing for maintenance. Who will do it, how often, and what happens when they don’t.
If the maintenance plan is “someone will remember”, that is not a plan.
Watch out for “solutions” that just relocate pain
This is sneaky. Because it looks like progress.
You solve the problem for one group by making it worse for another group. You reduce support tickets by making refunds harder. You reduce downtime by making deployments slower. You reduce fraud by adding friction that kills conversions.
Sometimes those tradeoffs are necessary. But you want them to be conscious.
When problems evolve, relocating pain tends to trigger a counter response. People route around friction. Customers change behavior. Bad actors adapt. Internal teams create workarounds.
So you want to ask:
• Where did the pain go?
• Who is paying for this fix?
• Is that cost acceptable, and for how long?
• What new behaviors did we create?
If you don’t ask, the system will answer for you. Usually later. Usually loudly.
The best solutions create constraints, not control
Control is tempting. You want to lock things down. You want certainty.
But evolving problems punish over-control because they need flexibility. Constraints are different. Constraints set boundaries. They define what cannot happen. They protect the system. But inside the boundary, teams can adapt.
Examples of good constraints:
• “No release without automated rollback”
• “No policy change without measurable success criteria”
• “Any manual review queue must have a max SLA and an owner”
• “We do not store sensitive data without encryption at rest and in transit”
• “Any new metric must have a definition, a source of truth, and a steward”
These constraints prevent chaos while allowing the solution to evolve.
Control tries to predict every scenario. Constraints accept you can’t, and still keep the system safe.
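To show the difference in code, here is a minimal sketch (the function and its fields are hypothetical; the constraint wording comes from the list above) of constraints expressed as automated checks:

```python
# Hypothetical sketch: constraints as automated checks that gate a release,
# instead of someone trying to predict and approve every scenario.
def check_release(release: dict) -> list[str]:
    """Return the list of violated constraints; an empty list means safe to ship."""
    violations = []
    if not release.get("has_rollback"):
        violations.append("No release without automated rollback")
    if release.get("stores_sensitive_data") and not release.get("encrypted"):
        violations.append("Sensitive data must be encrypted at rest and in transit")
    return violations

check_release({"has_rollback": True, "stores_sensitive_data": False})  # []
check_release({"has_rollback": False})  # ["No release without automated rollback"]
```

Notice what the checks do not do: they never say how to build the release. Inside the boundary, the team is free to change approach.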
Sometimes the real solution is choosing what not to solve
This is not the motivational part. This is the part that feels a little grim, but it’s honest.
Some evolving problems are not worth solving fully. Or they cannot be solved fully. Or solving them creates worse problems.
So part of good design is picking the right level of ambition.
Questions that help:
• What is the minimum we need to do to keep the system healthy?
• What outcomes do we refuse to allow, even if we accept smaller losses elsewhere?
• What would we do if we had to live with this problem for the next two years?
• If we “solve” this, what new problem will we create?
In other words, you design a posture. Not just a fix.
And that posture can be aggressive, conservative, experimental, defensive. But it should be intentional.
A simple framework you can actually use
If you are dealing with a problem that keeps evolving, try this as a working outline. Nothing fancy.
List what can change in the next 3 to 12 months.
Market conditions, user behavior, regulation, competitors, internal priorities, vendor dependencies, data quality, staffing. All of it.
This list is your uncertainty map.
Define your “never again” outcomes.
Not “we want it to be better”.
More like: “we cannot allow unauthorized access” or “we cannot allow refunds to take 14 days” or “we cannot allow a single customer to crash the system”.
This creates boundaries for your design.
Build the smallest responsive solution.
Not the biggest fix. The smallest thing that can sense, respond, and improve. Usually this means:
• Instrumentation
• Ownership
• A process for updates
• A first iteration of the rule, feature, or workflow
• A review cadence
Decide how you will update it. What triggers a revision. Who approves changes. What metrics guide decisions.
Put it in writing. Even if it’s a short doc. Even if it’s messy.
Because it is.
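One possible shape for that short doc, sketched as a small record (all field names hypothetical):

```python
# Hypothetical sketch: the working outline captured as a structured record, so
# the plan is written down and reviewable rather than living in someone's head.
from dataclasses import dataclass

@dataclass
class SolutionRecord:
    problem: str
    uncertainty_map: list     # what can change in the next 3 to 12 months
    never_allow: list         # outcomes we refuse to accept
    leading_metrics: list     # signals that move first
    owner: str                # who responds when a signal fires
    review_cadence_days: int  # when this record itself gets revisited

record = SolutionRecord(
    problem="refund abuse keeps evolving",
    uncertainty_map=["new payment provider", "holiday volume spike"],
    never_allow=["refunds taking more than 14 days"],
    leading_metrics=["refund rate", "abuse reports per week"],
    owner="payments team",
    review_cadence_days=30,
)
```

A shared doc or a wiki page works just as well. The fields are the point: uncertainty map, boundaries, signals, owner, cadence.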
Closing thought, kind of
Designing for evolving problems is humbling.
You don’t get the satisfaction of a clean ending. You get something else. You get resilience. You get fewer 2am surprises. You get a system that bends instead of snapping.
And honestly, that’s the job a lot of the time.
Not to build perfect solutions. But to build solutions that can keep becoming better, even when the problem refuses to sit still.
FAQ
What is an evolving problem?
An evolving problem is one where the underlying conditions change over time, so a one-time fix keeps becoming outdated. The problem adapts, or the system around it changes, or both.
How do I know if I’m treating an evolving problem as a stable one?
If the plan is mostly “ship the fix and move on” with no monitoring, no long-term ownership, and no revision cadence, you are probably treating it as stable.
What does it mean to build a capability instead of a fix?
It means building the ability to keep responding. Usually that includes feedback loops, clear ownership, adjustable rules, instrumentation, and a process for iteration. Not just a feature or a policy.
Isn’t this just iterative development?
Some of it overlaps, sure. The difference is the emphasis. With evolving problems, iteration is not a method choice. It’s a survival requirement. You design the solution so it can change safely and often.
How do I keep iteration from becoming aimless?
You define boundaries and success criteria. Set “never again” outcomes, pick the metrics that matter, and create a review cadence. Iteration should be guided, not random.
How do I get leadership on board?
Frame the risk. Show how the problem has changed historically, explain the cost of brittle solutions, and propose a “versioned” approach with clear checkpoints. Leaders often want certainty. You can offer managed uncertainty instead.
What’s one thing I can do today?
Add a feedback loop. Even a basic one. Decide what signal will tell you the solution is failing, who will see it, and what action they can take. That alone makes your solution more resilient.