FAQ: How to Read and Use This Site
How this site works, and why it’s built this way.
This FAQ explains the philosophy behind this site. It exists to clarify why things are structured the way they are, why certain distinctions matter here, and why familiar approaches are intentionally avoided.
Many of the concepts on this site are easy to misread if approached as generic consulting advice or a traditional knowledge base. This section makes the assumptions explicit so you can decide whether this way of thinking is useful to you.
Frequently Asked Questions
About This Site
What this site is not
This site is not a generic project management resource, a PMI-style appendix, or a consulting playbook in disguise.
It is not written to be universally agreeable or broadly “best practice.” Many ideas here are intentionally precise and tied to specific failure modes: decision drag, interface voids, integrated brittleness, legacy schema debt, and the hidden taxes systems impose over time. Stripped of that context, the material can look like semantics or opinion. In practice, it’s a set of distinctions designed to survive contact with reality.
Common mistake: treating these pages as interchangeable with standard management doctrine. This site is more like a field manual: patterns, tradeoffs, and rationale drawn from real constraints, written for people who need clarity more than comfort.
Who is this site actually for?
This site is for people who are responsible for outcomes, not people who collect terminology.
If you’ve ever watched a team work hard while the organization still drifted, you’re the audience. If you’ve ever asked, “why is this so hard when the goal is obvious,” you’re the audience.
It’s written for operators, architects, program leaders, and quiet experts who need a way to explain what’s happening and steer it without turning everything into a fight.
Common mistake: assuming this is only for consultants or executives. The patterns here are for anyone who has to make decisions under constraints and still be accountable for results.
Why would someone care this much about this stuff?
Because the cost of getting this wrong is usually invisible until it’s too late.
Most people only notice the symptoms: stalled initiatives, endless reporting, whiplash priorities, and teams working hard while outcomes drift. The underlying cause is often structural: unclear decision pathways, fuzzy ownership, mismatched incentives, and language that hides tradeoffs.
Caring “too much” is a rational response to repeated, preventable loss: wasted time, burned trust, and systems that depend on heroics. If you’ve been accountable for outcomes inside a messy system, you learn that clarity is not a preference. It’s how you protect mission tempo, people, and budget.
Common mistake: assuming this is intellectual obsession or consultant theater. It’s stewardship. When you care early, you prevent expensive failure later.
Why do you care so much about “The Next Guy”?
The Short Answer
“Leave it for the next guy” is a common phrase often used to justify walking away from a mess or an unfinished task (e.g. “leave this for the morning shift”). In our doctrine, we reclaim this phrase not as a call to altruism, but as a recognition of a cyclical reality: you are inevitably going to be the “next guy” that someone else leaves a problem for (and frequently, that someone else is your past self). Everyone has been the person who inherited a broken system, a spreadsheet with no instructions, or a model with no “source code.” We have to realize that leaving a mess today is simply charging ourselves a high-interest loan for tomorrow.
What you might be missing about this
People often misinterpret the “Next Guy” focus as an act of altruism or purely academic bookkeeping: it is actually a hard-nosed operational strategy to prevent Decision Drag and system downtime during transitions.
Why does this site focus so heavily on distinctions and language?
Because unclear language produces unclear decisions, and unclear decisions produce waste.
In low-stakes environments, ambiguous terms are annoying but survivable. Under real stakes, ambiguity turns into misaligned expectations, silent scope creep, and “everyone thought it meant something else.”
This site treats vocabulary as operational infrastructure: shared terms reduce coordination cost, speed up decisions, and prevent rework.
Common mistake: assuming this is (just) pedantry. It’s risk control. If you can’t name a problem precisely, you can’t reliably solve it or repeat the solution.
Why would someone care this much about portfolio thinking?
Because most organizations don’t fail by doing the work wrong. They fail by doing the wrong work well.
Portfolio thinking is the discipline of choosing what not to do and making tradeoffs explicit. Without it, teams build “bridges to nowhere,” then blame execution when the real error was selection.
This matters even if you’re not an executive. The cost shows up as churn, whiplash priorities, endless reporting, and projects that ‘must be done’ but can’t explain why.
Common mistake: treating portfolio thinking as abstract strategy talk. In practice, it is how you protect time, attention, and budget from accidental waste.
How do you use this site in actual work?
The schemas I’ve developed allow me to stay present in conversation or in the work itself. Pattern recognition happens automatically – I recognize situations I’ve seen before without analytical processing. The synthesis work – connecting patterns across domains, building new frameworks – is cognitively expensive and happens in private reflection. If I’m doing that synthesis in real-time during conversation, my focus isn’t on the other person.
(For more on this learning approach, see Sphere and Spikes: Building What You Need Without Becoming a Specialist)
If you think this systematically, aren’t you exhausting to interact with?
The opposite. The schemas and heuristics I’ve developed (and am now documenting) are precisely what allow me to stay present in conversation: pattern recognition via schema is fast and automatic, while the expensive synthesis work happens in private. The “loudest listener” approach means I’m running pattern recognition in the moment, then processing and documenting in reflection.
Why does this site emphasize federation so strongly?
Because federation preserves agency under real-world constraints.
Federation accepts that different teams, systems, and partners operate under different incentives, tempos, and authorities. Instead of forcing uniformity, it defines shared interfaces and invariants that allow coordination without collapse. This matters in environments where you cannot mandate compliance, where change is constant, and where failure must be contained rather than amplified.
Integrated systems often look clean on paper and brittle in reality. They optimize for efficiency and control, but they collapse under surprise. Federation trades a small amount of efficiency for resilience, adaptability, and long-term survivability.
Common mistake: assuming federation is indecision or lack of standards. Federation is not the absence of structure. It is structure placed deliberately at the boundaries instead of forced into the core.
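The idea of “structure placed deliberately at the boundaries” can be sketched in code. Below is a minimal, hypothetical illustration (the names `StatusReport`, `TeamAlphaAdapter`, and `portfolio_view` are invented for this example): each team keeps full freedom over its internals, while coordination happens only against a shared boundary contract.

```python
from typing import Protocol


class StatusReport(Protocol):
    """Hypothetical shared interface: the invariant every member of the
    federation agrees to honor, regardless of internal implementation."""

    def health(self) -> str: ...            # e.g. "ok" / "degraded" / "down"
    def last_updated_utc(self) -> str: ...  # ISO-8601 timestamp


class TeamAlphaAdapter:
    """One team's implementation; its internals are free to differ
    from every other team's, as long as the boundary contract holds."""

    def health(self) -> str:
        return "ok"

    def last_updated_utc(self) -> str:
        return "2024-01-01T00:00:00Z"


def portfolio_view(reports: list[StatusReport]) -> dict[str, str]:
    # Coordination happens against the boundary contract only;
    # no caller reaches into any team's internals.
    return {type(r).__name__: r.health() for r in reports}
```

The design choice is the point: if `TeamAlphaAdapter` changes its internals tomorrow, nothing upstream breaks, because the only coupling is the interface at the boundary.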
Who are you to write this?
Fair question. The answer is not “authority.” The answer is contribution.
A lot of the most valuable expertise in organizations is invisible and tacit. Tacit knowledge disappears when people rotate, retire, burn out, or simply stop caring. When that happens, the organization pays the relearning tax: the same lessons are re-learned at full price, over and over, usually under worse conditions.
This site exists because I’m choosing to make my patterns portable. Not perfect. Not universal. Portable. If a reader’s first reaction is “who does this guy think he is,” I get it. But my follow-up question is simple: what are you doing with what you’ve learned? Where are your reusable patterns? Where is your contribution that outlives your calendar and your inbox?
Common mistake: assuming doctrine is arrogance. In practice, doctrine is stewardship. It’s what you write down when you’re tired of paying for the same lessons twice.
Why document this publicly instead of just consulting privately?
Because repeating the same preventable failures in isolation is a waste of collective experience.
Consulting can fix one situation at a time. Public documentation can make the next situation less costly before the contract, the crisis, or the postmortem.
This site is designed to be a durable reference: patterns, distinctions, and failure modes that can be reused by people who weren’t in the room.
Common mistake: assuming public writing is just marketing. It’s closer to leaving a trail of tested heuristics so others can move faster and make fewer expensive mistakes.
Is this site trying to create a new methodology or proprietary framework?
No. It’s trying to make existing ideas usable under real conditions.
Most organizations don’t fail because they lack frameworks. They fail because incentives, ambiguity, and stakes distort how frameworks get applied.
When you see terms or models here, they are usually labels for recurring patterns observed in the field, with explicit notes about where they apply and where they break.
Common mistake: assuming this is an attempt to trademark a philosophy. The goal is operational clarity and repeatable decision-making, not inventing a new religion.
Doctrine
Doctrine? Really?
I use the term ‘Doctrine’ as a professional tribute to the rigor of formal institutional frameworks. My intent is not to replicate or replace official Publications, nor to be prescriptive (“according to Doctrine X, you must…”). Rather, I believe that high-stakes operational experience deserves a similarly disciplined structure. By packaging these lessons as ‘Doctrine,’ I ensure they are codified, repeatable, and ready for the next practitioner to inherit: fulfilling the primary obligation of Interface Stewardship.
What is ‘decision drag,’ and why does it matter?
Decision drag is not indecision. It is the structural latency between when information exists and when a decision can actually be made and acted on. On this site, decision drag is treated as the natural outcome of unclear design, not a personality flaw.
In system language, decision drag shows up as: latency in decision pathways, overload at the center, high coupling between decision nodes, unclear escalation routes, ambiguous invariants, missing constraints and guardrails, unclear interfaces between teams, and dependency chains that keep growing.
Common mistake: blaming individuals for “not deciding fast enough.” If drag is recurring, it is almost always the system: unclear authority, unclear constraints, and unclear interfaces. Drag compounds because conditions change while you wait, misalignment grows, partners act independently, side effects accumulate, risk increases, and political stakes get higher.
Why would someone care about “seeing the dragon”?
Because the most critical information in complex systems is often invisible until you know how to look for it – like those Magic Eye stereograms from the 90s where the 3D image was always there but only appeared when you adjusted your focus.
“Seeing the dragon” is the moment when operational patterns become visible that were previously hidden in noise. Not because you got new information, but because you developed the schema to recognize what the existing information was telling you.
This matters because organizations make decisions based on what’s visible in dashboards and reports, missing the patterns that predict system failure. The information existed. The pattern was there. But no one could see it until after the breakdown.
Common mistake: Thinking pattern recognition is about data visualization or better metrics. In practice, it’s about having pre-compiled schemas from operational experience that let you recognize what matters when surrounded by what doesn’t.
You can stare at a Magic Eye image forever. Until you know how to defocus and look “through” it, the dragon stays invisible.
But here’s the thing about dragons: even when you can’t see them yet, they’re always staring back at you. The pattern is already affecting your system whether you recognize it or not.
What is integrated brittleness?
Integrated brittleness is what happens when a system becomes tightly coupled in the name of efficiency, and the coupling becomes the failure mode.
Integration can reduce local friction and improve short-term coordination, but it also increases blast radius. Small failures propagate, small changes become risky, and local teams lose the ability to adapt without negotiating with the entire system.
Integrated brittleness often hides behind “clean architecture” talk. It looks orderly until reality changes, a dependency breaks, or an edge case appears. Then the system fails as a whole because it can’t absorb surprise.
Common mistake: assuming more integration always increases strength. Integration increases efficiency. Federation preserves resilience. When the environment is volatile, resilience usually wins.
What does “Anticipatory Coaching” mean on this site?
Anticipatory coaching is how you prepare someone for first contact under real stakes when the coach will not be there. It assumes that after training, the learner will encounter resistance, time pressure, and consequences, and that “knowing the material” will not be enough by itself.
The method is simple: you name the predictable failure modes in advance, and you install correction cues the learner can recall under pressure. The goal is not perfection. The goal is that the first real encounter is not the first time they are thinking about the problem.
Common mistake: assuming coaching happens after failure, via feedback and iteration. That works when stakes are low and recovery is cheap. When stakes are high, the correction needs to be planted earlier. Anticipatory coaching is the difference between “I’ve heard this before” and “this is the first time I’m realizing what matters.”
What is the RS-CAT framework, and how is it used?
RS-CAT is an elicitation framework for converting raw, lived experience into reusable patterns. It exists because “expert intuition” is often real, but it’s stored as story, habit, and feel. That makes it hard to transfer to the next person or repeat under pressure.
RS-CAT stands for Retrieval, Sequencing, Compression, Abstraction, Teachability.
- Retrieval: extract the signal from noisy memory.
- Sequencing: put the steps in an order a learner can execute.
- Compression: reduce cognitive load without removing what matters.
- Abstraction: convert “my story” into a generalizable pattern without turning it into empty slogans.
- Teachability: make it portable by naming constraints, variables, and where it breaks.
Common mistake: treating expert recall as “the answer” and simply recording it. Raw recall is usually too linear, too noisy, and too context-bound to be reusable; capturing the story and publishing it produces lore, not doctrine. RS-CAT forces the transformation from anecdote into a durable pattern you can apply again.
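The five RS-CAT stages above can be sketched as a worksheet that refuses to call a pattern “done” until every stage has output. This is a minimal illustration, not an implementation from the site; the class name `RscatPattern` and its fields are invented here to mirror the Retrieval → Sequencing → Compression → Abstraction → Teachability pipeline.

```python
from dataclasses import dataclass, field


@dataclass
class RscatPattern:
    """Hypothetical RS-CAT worksheet for one elicited anecdote."""

    raw_story: str                                        # Retrieval: the expert's account
    steps: list[str] = field(default_factory=list)        # Sequencing: executable order
    essentials: list[str] = field(default_factory=list)   # Compression: what must survive
    pattern: str = ""                                     # Abstraction: generalized form
    constraints: list[str] = field(default_factory=list)  # Teachability: where it breaks

    def is_teachable(self) -> bool:
        # A pattern is portable only when every stage past retrieval
        # has produced output, including named limits ("where it breaks").
        return all([self.steps, self.essentials, self.pattern, self.constraints])
```

Usage follows the framework's warning: a freshly captured story (`RscatPattern(raw_story=...)`) reports `is_teachable() == False` until the later stages are filled in, which is exactly the "raw recall is not the answer" point.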
What is Mission Preservation?
It is the operational and ethical practice of protecting an individual’s sense of agency and identity as a “provider” during a crisis. It recognizes that for many people in high-friction environments, their work is the primary psychological infrastructure keeping them functional (especially if they are also impacted by the disaster they are helping others to recover from).
Why is it considered a “Load-Bearing” constraint?
If you strip away someone’s mission (by centralizing their supplies or forcing them to acknowledge their own victimhood), you damage the very system you are trying to help. Mission preservation ensures that local expertise remains online and accessible.
When should I pivot to this schema?
You pivot the moment you realize your inquiry has “collapsed the distance” between the person’s role as a helper and their reality as someone affected by the crisis.
Is this just being “nice”?
No. It is Operational Intelligence. A person who feels their dignity is intact will volunteer “The Mission Tour” (deep, unscripted system insights). A person who feels exposed will provide stilted, socially safe answers that satisfy the script but hide the truth.
Why do people fail even when they “know the material”?
Because people often confuse recognition with capability.
Knowing the material can mean you can recite terms, follow a ritual, or recognize the correct model when it’s presented. That is not the same as being able to execute under resistance. This confusion creates the Interface Void: the gap between a stored sentence (recognition) and a working capability (execution). Bridging that void is not informational. It is operational.
In low-stakes settings, recognition can look like competence because failure is delayed, ambiguous, or cheap. In contact, the environment audits capability immediately. That is why “they knew it” is not a satisfying explanation. The system demanded execution, and the bridge was missing.
Common mistake: prescribing more explanation or more content. When the failure is the interface, the remedy is rehearsal, simulation, sequencing, constraints, and cues that survive contact, not more words.
Why isn’t post-training support enough when stakes are high?
Because in a Contact Environment, the system has a vote. When stakes are high, you don’t get unlimited retries, generous time windows, or clean feedback. You get friction, consequence, and rapidly compounding failure.
Post-training support assumes the learner will (1) notice they’re failing, (2) correctly diagnose why, and (3) have time and bandwidth to ask for help. Under real stakes, those assumptions collapse. Stress narrows perception, time compresses, and people revert to brittle scripts or partial recall. Even correct knowledge can fail to cross the interface into execution.
This is why preventive and contingent preparation matters. Training is not a guarantee of outcome. It is a commitment to agency in contact: reducing helplessness, building realistic readiness, and installing bridging steps (rehearsal, simulation, cues) that survive first contact.
Common mistake: treating support as a substitute for preparation. Support helps in evaluation environments. Stakes demand readiness before contact.
What is the ‘terror of uselessness,’ and why does it matter?
The “terror of uselessness” is the feeling of having zero agency. It shows up as a Stage 2 revelation: the moment you realize you are a spectator to a situation you should be able to influence, but you can’t. It is not about ego. It is about helplessness in the face of resistance.
This is why technical mastery matters. Committing to skill is not an insurance policy that guarantees success. It is a commitment to eliminate helplessness. You may follow the protocol perfectly and still lose the outcome, but you will not stand there useless while it happens.
Common mistake: confusing outcomes with agency. Organizations often demand “wins” and accidentally create systemic heroics: brittle systems that break when a win isn’t possible. This site treats stewardship as the obligation to build and maintain agency, even when the outcome is not fully controllable.
What’s the difference between a capability gap and an interface void?
A capability gap means you don’t have the skill yet. The fix is training, reps, and time.
An interface void means you do have knowledge (or even a model), but there is a missing bridge between stored knowledge and successful execution when it matters. You can recognize the right idea, explain it cleanly, and still fail at the moment of contact because the “bridge” was never built.
Stakes are the discriminator. The interface void thrives in environments with high consequence lag, where failure is delayed or ambiguous and superficial “recognition” can look like competence. In high-consequence domains, reality closes the void quickly because the environment audits capability immediately.
Common mistake: treating every failure as a capability problem. If the person can describe the correct model but can’t execute under pressure, you don’t have a training issue. You have an interface void, and more explanation usually makes it worse.
What are the stages of competence when real stakes are involved?
The classic “stages of competence” are incomplete because they ignore stakes. A person can sound competent in low-stakes settings and still fail on contact when the environment applies resistance, time pressure, or consequence.
On this site, the key discriminator is Contact. In the “garden,” people can explain ideas, rehearse steps, and look fluent without the system pushing back. In contact, the environment audits you immediately: friction appears, constraints tighten, and mistakes have consequences. That is where weak understanding, missing interfaces, and brittle scripts collapse.
Common mistake: assuming “knowing” equals “being able to do.” Another common mistake is assuming the highest stage guarantees outcomes. Even high competence does not guarantee victory when the environment is misdesigned, incentives are perverse, or constraints are unacknowledged. Competence must be paired with realistic stakes, system design, and correct framing of what is controllable.
Taxes & Debts
What does “tax” mean on this site?
A “tax” is a recurring cost the system charges you because of how it is designed.
A tax has three properties:
– Structural: it is caused by design, interfaces, incentives, or constraints, not by a one-off mistake.
– Recurring: you pay it repeatedly, often in small increments that add up.
– Normalized: it becomes invisible because people stop questioning it.
Taxes show up as time, attention, coordination, rework, relearning, and morale. If you keep paying a cost even when the team is competent and motivated, you’re probably dealing with a tax.
Common mistake: treating the tax as a people problem (“work harder,” “communicate better,” “train more”). If it’s structural, you reduce it by changing the system.
What is the relearning tax?
The relearning tax is the cost of repeatedly re-teaching people things the organization already “knows,” because the knowledge is not retained, retrievable, or executable when needed.
It shows up as:
– constant re-onboarding
– “we covered this last year” cycles
– senior staff becoming permanent translators
Root causes are usually structural: training optimized for recognition instead of execution, missing retrieval cues, weak interfaces between knowledge and action, and undocumented patterns that live in people instead of systems.
Common mistake: assuming more training fixes it. If the issue is retrieval and execution under contact, the remedy is better sequencing, rehearsal, simulation, cues, and documentation that is built for reuse.
What is technical debt, and why is it a leadership signal?
Technical debt is the accumulated cost of shortcuts, deferred maintenance, and brittle decisions that make future change harder.
On this site, technical debt is treated as a leadership signal, not just an engineering issue. When leaders see debt, the common instinct is to look for the engineer who “did it wrong.” That is often the wrong diagnosis. Debt usually reflects the decision environment the team operated inside.
Technical debt is commonly created by institutional pressure such as:
- chronic overload and understaffing
- deadlines where short-term delivery is rewarded and long-term health is unfunded
- approval bottlenecks that force teams to ship fragile workarounds
- unclear ownership and missing interface contracts
- architecture that does not protect the team with guardrails
Common mistake: treating debt as a moral failing (“lazy engineering”) or a purely technical cleanup task. The durable fix is leadership work: make tradeoffs explicit, fund maintenance, define interfaces, reduce bottlenecks, and protect the productive lane so teams can build without borrowing from the future.
Why does “just ship it” create permanent systems?
“Just ship it” becomes permanent when a temporary workaround turns into an interface that other people build on.
In the moment, the workaround feels rational. It reduces decision drag, unblocks delivery, and lets teams move. The trap is that once others depend on it, the workaround becomes a de facto contract. Now removing it requires coordination across multiple teams and timelines, and it starts accruing interface debt, relearning tax, and manual reporting tax.
On this site, this is not framed as an engineering discipline failure. It is a leadership and architecture signal. If people keep shipping temporary connectors, you have a design problem: missing interfaces, unclear invariants, and a system that rewards short-term throughput while unfunding long-term health.
Common mistake: believing you can “clean it up later” without paying interest. Later arrives with more dependencies, higher stakes, and less freedom. If something must ship fast, you still need to label it as temporary, constrain its blast radius, and route it through a deliberate evolution lane.
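One way to “label it as temporary” in practice is to make the label executable, so the workaround announces itself once it outlives its agreed removal date instead of silently hardening into a contract. This is a hedged sketch, not a prescribed tool; the decorator name `temporary` and the function `export_via_spreadsheet` are invented for illustration.

```python
import datetime
import warnings


def temporary(expires: str, reason: str):
    """Hypothetical decorator: marks a workaround as temporary and makes
    the accruing interest visible once the removal deadline has passed."""

    def wrap(fn):
        def inner(*args, **kwargs):
            deadline = datetime.date.fromisoformat(expires)
            if datetime.date.today() > deadline:
                # The workaround outlived its label: surface it loudly
                # instead of letting it become a de facto interface.
                warnings.warn(
                    f"TEMPORARY workaround past {expires}: {reason} ({fn.__name__})"
                )
            return fn(*args, **kwargs)

        return inner

    return wrap


@temporary(expires="2020-01-01", reason="bridge until the reporting API ships")
def export_via_spreadsheet():
    # Still works after the deadline, but every call now emits a warning,
    # keeping the interest payment visible rather than normalized.
    return "exported"
```

The design point is the constraint on blast radius: the workaround keeps functioning, but each use past the deadline produces a visible signal that can be routed into the deliberate evolution lane.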
Why is maintenance governance, not cleanup?
Maintenance is governance because it is how an organization preserves decision quality over time.
Systems degrade through entropy: interfaces drift, meaning decays, dependencies grow, and exceptions accumulate. Without maintenance, the organization pays taxes. Decision drag increases, coordination costs rise, and people rely on heroics to keep the machine running.
On this site, maintenance is treated as a leadership responsibility because it sets the operating rules: what gets kept healthy, what is allowed to rot, and what debt is acceptable. If maintenance is unfunded, you are not choosing “speed.” You are choosing a future tax bill, usually paid by your best people in the form of cognitive load and unplanned work.
Common mistake: treating maintenance as optional engineering hygiene. It is portfolio governance. If you are accountable for outcomes, you must fund the conditions that keep the system steerable.
What is interpretation drift?
Interpretation drift is the slow divergence between what a term or process originally meant and how it is currently used across an organization.
The words stay the same, but the meaning decays. Teams begin using the same labels for different concepts, and alignment collapses while everyone believes they are aligned. Drift accelerates after reorganizations, leadership changes, or rapid growth.
Interpretation drift is expensive because it creates silent miscoordination: people execute confidently in different directions.
Common mistake: thinking this is a minor semantics issue. Drift is a structural tax. When meaning diverges, decisions diverge. The remedy is to refresh definitions, reassert invariants, and create reference artifacts that outlive individuals.
What is heroics dependency?
Heroics dependency is when a system appears to function only because exceptional people repeatedly absorb its failures.
The system doesn’t really work. The heroes make it look like it works. Over time, this creates hidden fragility: burnout, turnover risk, inconsistent quality, and catastrophic failure when the right person is unavailable.
Heroics dependency is often rewarded because it produces visible short-term wins. But it is a tax on talent and a debt against the future.
Common mistake: celebrating the hero as proof the system is fine. A reliable system should not require heroism for normal operation. If it does, the system is under-designed.
What is interface debt?
Interface debt is the recurring cost you pay when the boundaries between teams, systems, or decision layers are vague, informal, or person-dependent.
When interfaces aren’t explicit, organizations substitute meetings for contracts, Slack threads for agreements, and relationships for repeatable handoffs. Work still gets done, but it becomes fragile: it breaks during turnover, scaling, or stress.
Interface debt accumulates quietly because it feels like “collaboration.” The bill shows up later as rework, coordination tax, and heroics dependency.
Common mistake: treating interface problems as communication problems. Better communication can’t replace missing definitions, ownership, and handoff rules.
What is optionality loss?
Optionality loss is the gradual erosion of future choices caused by early over-commitment, tight coupling, or premature integration.
It looks like speed at first: fewer options means faster execution. Then reality changes and the system can’t adapt without painful rework. Optionality loss is expensive because it converts small adjustments into major migrations.
Common mistake: mistaking early commitment for clarity. Preserve optionality by federating where you must, decoupling interfaces, and delaying irreversible decisions until constraints are understood.
What is legacy schema debt?
Legacy schema debt is the accumulated cost of outdated mental models and organizational assumptions that no longer match reality but still shape decisions.
It’s not technical debt. It’s cognitive and institutional debt. It forces every new initiative to route around old assumptions, increases translation overhead, and makes change feel expensive even when the technology is available.
Common mistake: interpreting legacy schema debt as resistance or incompetence. Often people are acting rationally inside an obsolete schema. The fix is to surface the schema, name what changed, and update the invariants and interfaces.
What is the cognitive load tax?
The cognitive load tax is the cost imposed when systems require people to hold too much context in their heads because structure is missing.
It shows up when only a few “experienced people” can operate the system, when work depends on tribal knowledge, and when handoffs fail unless a specific person is present. High cognitive load creates brittle execution and makes scaling nearly impossible.
Common mistake: celebrating the “go-to person.” That person is often absorbing structural debt. Reduce cognitive load by documenting invariants, defining interfaces, and building decision pathways that don’t require hero memory.
What is the manual reporting tax?
The manual reporting tax is the ongoing human labor spent translating live reality into static artifacts for oversight.
It persists when decision-makers don’t trust upstream data, when teams lack shared views, or when the organization confuses reassurance with insight. The tax is paid in senior attention: the people most capable of improving outcomes get pulled into maintaining decks, narratives, and status rituals.
Common mistake: treating manual reporting as “just the cost of doing business.” It’s usually a signal that the system lacks trusted interfaces, shared definitions, and decision-ready views.
What is the coordination tax?
The coordination tax is the overhead created when work requires constant alignment because interfaces, ownership, and constraints are unclear.
You see it when meetings substitute for interfaces, Slack substitutes for contracts, and relationships substitute for design. Coordination rises not because people are careless, but because the system makes it easy to create dependency chains and hard to decouple.
Common mistake: blaming “communication.” Communication can’t fix missing interfaces. Reduce the coordination tax by clarifying ownership, defining boundaries, and making the handoffs explicit.