Failure Modes

Quiet analytical failure modes that accumulate over time—and how structure and guardrails help prevent them.


Here’s an uncomfortable topic: failure.

Analytical failures can be broadly binned into two categories:

  • Analysis that was wrong and led to an incorrect decision or action
  • Process that was wrong—quiet failures that make life and work harder than they need to be

Most of this article is about the second category. But let’s address the first quickly—to be clear, it probably deserves its own article; we just aren’t doing that today.

It is possible to do everything right and still fail (anyone catch the quote?). That doesn’t mean your analysis was bad, and it doesn’t mean you were wrong. Many analytical projects are probabilistic at heart, and random chance gets a vote. If the work was done correctly, its results should stand: don’t massage them to fit a desired narrative; present the facts and your interpretation. A project that produces undesirable or unexpected results is an opportunity to learn. Even if it’s a lesson we don’t want to learn, it can still be very valuable.

On to the quiet failures—the ones you may not notice. These build up over time, leading to frustration and burnout. I’m writing this from an analytical perspective, but honestly, these failure modes can be broadly applied to just about anything.

Failure Modes Defined

For our purposes, a failure mode is a specific action (or inaction) that leads to a potential project failure or significant problem. Sometimes they’re recoverable. Sometimes it’s merely a near miss. But a lot of things are OK—until they’re not.

The purpose of frameworks, structure, and process is to put guardrails in place to protect us from these and other failure modes. The reality is that many of these are squishy rules of thumb, not litmus tests.


1. Assumptions

When we assume what the boss wants—without validation—it creates the worst possible game of telephone: layers of interpretation, each one slightly more abstracted and less accurate.

Sometimes people get it right. Experience can lend reliable insights. Assumptions are useful to get the ball rolling, remove a blockage, or stand in for questions that can’t yet be answered. But they should always be identified and, when possible, validated.

Other times, we pour blood, sweat, and tears into a product that was never asked for. Or worse—we spin our wheels idling: not only working on something that isn’t desired, but also not working on something that is.

The fix is rarely complicated. A few simple questions often prevent the entire chain:

  • What does success look like?
  • Is this a full effort or a proof of concept?
  • Where does this sit relative to other priorities?

For me, those questions alone have been enough to shift people’s thinking and potentially avoid wasted effort or painful rework cycles.

Rule of Thumb:
Assumptions feel efficient. Clarification actually is—identify and validate assumptions whenever possible.


2. Silence Isn’t Alignment

Just because stakeholders, decision-makers, and co-workers don’t voice an opinion (dissenting or otherwise) doesn’t mean everyone is aligned.

I’m often surprised by the number of meetings I walk away from where my boss and I have distinctly different interpretations of what was meant by a given comment or direction. People stay silent for many reasons: fear of being wrong, belief that their opinion doesn’t matter, apparent consensus, keeping their head down, or simply not being asked.

Think of it this way: analysts like more data. There is information outside the analysis that can still materially affect it. By communicating and engaging where there is silence, you may uncover important information you were previously unaware of—unspoken requirements and constraints, desired end products, or “nice-to-have” extras.

Rule of Thumb:
The more you “just do the analysis,” the more fragile that analysis becomes.


3. Premature Analysis

You can’t do meaningful analysis without knowing where you want to go and having a general idea of how to get there. You should also validate that you have the data you need—and that it’s actually usable.

Frameworks like UPDATE deliberately place several steps before analysis. Understand and Plan align well with the previous two points, but don’t forget to check and understand your data.

As a simple example, a few days ago I was playing in Excel, making a chart that didn’t look right. Half the column was formatted as numbers, and the other half as text. My data wasn’t ready for analysis, but I tried to do it anyway. A small annoyance in my case—but I could easily see that same mistake impacting a Pareto analysis, YTD totals of revenue, taxes, or costs, or any number of more meaningful outputs.
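If it helps to see what that kind of check might look like outside of Excel, here’s a minimal sketch using pandas (my assumption—the column name revenue and the sample values are hypothetical). It flags rows that won’t convert cleanly to numbers before any totals or Pareto charts get built:

```python
import pandas as pd

# Hypothetical column that should be numeric but arrived half as numbers,
# half as text (stray commas, whitespace, placeholder strings).
df = pd.DataFrame({"revenue": [1200, 980, "1,050", " 875 ", 1430, "n/a"]})

# Sanity check: attempt a numeric conversion and flag anything that fails.
numeric = pd.to_numeric(
    df["revenue"].astype(str).str.replace(",", "").str.strip(),
    errors="coerce",  # unconvertible values become NaN instead of raising
)
bad_rows = df[numeric.isna()]

print(f"{len(bad_rows)} of {len(df)} rows are not usable as numbers:")
print(bad_rows)

# Only proceed with YTD totals, Pareto analysis, etc. once the column is clean.
df["revenue"] = numeric
```

The tool doesn’t matter; what matters is that the conversion failures surface before the analysis, not after the chart looks wrong.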

Rule of Thumb:
A simple sanity check to ensure the data is (a) the data you need and (b) in a usable format is non-negotiable for any analysis.


4. Proving a Point

There is a fine line between advocacy, propaganda, red-teaming, and having an axe to grind. You can do legitimate analysis and still advocate for something, or you can pick something apart in a red-team exercise without being malicious.

Our Latin motto is Quaerite ad Intellectum (Seek Understanding) because that’s what good analysis should do: provide clarity. If your position is well founded, the analysis should naturally show why. It shouldn’t need to be contorted or forced, as is often the case with extreme advocacy.

Why is this a failure mode? You can successfully do analysis while angry, or analysis that proves you right (or them wrong—and no, those aren’t the same). But when you operate from a desire to prove a point, you risk losing objectivity. If everything is adversarial, you will disregard and diminish valid points from the other side—and by extension, you put collective understanding at risk for everyone involved.

Rule of Thumb:
Analysis begins with a question and seeks understanding. Proving a point starts with a conclusion and seeks confirming information.


5. Perfection (Analysis Paralysis)

It’s a real thing.

“I can make the model 5% better if you just give me more time!”
—Me, occasionally

We all want to be better—more accurate, more efficient, maybe more elegant in our solutions. But analysis is no good if it never makes it out the door. Almost every project reaches a point of diminishing returns, where the effort required to achieve additional improvement rises sharply and is no longer worth it.

It’s a tough judgment call to determine when “good enough” truly is good enough. This is best addressed either:

  • during requirements and planning with a clear definition of done, or
  • through conversation with the end user

Rule of Thumb:
Continued improvement is only valuable if it meaningfully moves the needle.


6. Mission Creep

This category splits into two parts:

  • (a) the goal was wrong and needs updating, or
  • (b) something valuable could be added to the project

Part A

If the goal was wrong and project success is at stake, strongly consider adjusting—but take stock of the downstream impact. This usually occurs when there wasn’t a robust enough conversation about requirements and the North Star (or, less often, when the situation changes). Realistically, the only way to resolve this is through conversation, or—less ideally—iterative adjustment when each stage is revealed to be slightly off.

Part B

Value-add scope changes often come from “good” ideas. This isn’t inherently bad; it may be a net positive. However, judgment is required to determine whether the idea should be added (changing deliverables, timelines, personnel, data, or level of effort), spun off as an independent project, or deferred to a “work to be done” list.

Rule of Thumb:
Any change to the project’s North Star requires a conscious decision by everyone involved: expand, start a new project, or defer. Anything else is mission creep.


Conclusion

None of these failure modes is dramatic. They don’t announce themselves or crash projects overnight—at least not usually. They accumulate quietly through assumptions left untested, silence mistaken for agreement, analysis rushed or endlessly delayed, and scope allowed to drift without intent. Individually, each is survivable. Together, they explain a great deal of wasted effort, frustration, and burnout.

The goal isn’t to eliminate failure. That’s neither realistic nor desirable. The goal is to fail faster and fail better—to notice early, course-correct quickly, and protect the integrity of the work.

This is where structure earns its keep. Frameworks like UPDATE aren’t checklists or rigid processes; they’re guardrails. They exist to surface risk early, slow us down where precision matters, and speed us up where it doesn’t. When failure does happen, the same framework can be used to examine findings, process, and conclusions—to identify where assumptions went unchallenged or where results could have been improved.

Failure isn’t the opposite of good analysis.
Unexamined failure is.
