Breaking The Bottleneck - Episode 7: Measuring Progress. How to Know If You're Actually Reducing Key Person Dependency.
This article is Part 7 of “Breaking the Bottleneck”, an 8-episode series on key person dependency risk and organizational resilience.
In this episode, we look at how to measure whether your efforts to reduce key person dependency are actually working.
Previous episodes in this series:
Episode 5b: Cure For Traditional Companies With Low Tech Adoption
Episode 5c: Cure For Traditional Companies With High Tech Adoption
Next and final episode: “The Mindset Shift: From Indispensability to Organizational Resilience” will be out soon.
Here's the problem: you can't manage what you don't measure.
Most organizations work on reducing key person dependency without any clear metrics for progress. So they don't know if they're actually getting better.
This episode is about the metrics that actually matter.
The Challenge with Measuring Key Person Dependency
Key person dependency isn't a simple number like revenue or customer count. It's a complex phenomenon.
It manifests in multiple ways:
Decision-making bottlenecks
Knowledge concentration
Burnout
Organizational risk
Reduced innovation
You can't measure it with a single metric.
Instead, you need a portfolio of metrics that give you a holistic picture.
Tier 1: The Lagging Indicators (Outcome Measures)
These are the ultimate outcomes you're trying to achieve.
#1: Time to fill a critical role
What it measures: How quickly can you replace someone in a critical role?
Why it matters: If it takes 6+ months to replace someone and productivity drops by 50% in their absence, you have high dependency. If it takes 3 months and productivity drops by 10%, you have lower dependency.
How to measure:
When someone in a critical role leaves, how long until productivity returns to normal?
Benchmark: Industry standard is 3-6 months for senior roles. If you're at 6+, you have dependency.
#2: Organizational resilience during absences
What it measures: What happens when a key person takes a vacation or is sick?
Why it matters: This is the ultimate test. If the organization falls apart when one person is absent, you have high dependency.
How to measure:
When your top person takes a week off, what's the impact?
Decisions stalled? -> Red flag
Customers frustrated? -> Red flag
Projects delayed? -> Red flag
Minor delays, but work continues? -> Yellow flag
Everything runs fine? -> Green flag (dependency is low)
Benchmark: Green flag = low dependency. Yellow flag = medium dependency. Red flag = high dependency.
#3: Turnover of key people
What it measures: Are your key people staying or leaving?
Why it matters: High burnout = people leave. If your key people are leaving, you have a problem.
How to measure:
Track turnover of people in critical roles
Benchmark: If turnover in critical roles is 2x the company average, you likely have a burnout problem
#4: Success of knowledge transfer
What it measures: When someone leaves, does the knowledge successfully transfer?
Why it matters: This is the real test. If critical knowledge leaves with the person, you haven't solved the problem.
How to measure:
When someone in a critical role leaves, survey their team and replacement: "How much of the critical knowledge did you have/acquire?"
Benchmark: 80%+ = success. 50-80% = partial success. <50% = failure
Tier 2: The Leading Indicators (Process Measures)
These are the inputs that predict whether your lagging indicators will improve.
#1: Documentation coverage
What it measures: How much of critical knowledge is documented?
Why it matters: You can't transfer knowledge that isn't documented.
How to measure:
List critical processes/knowledge areas (e.g., "customer onboarding," "sales process," "technical architecture," etc.)
For each one, is it documented?
Calculate: Documented processes / Total critical processes = Coverage %
Benchmark:
0-30% = Low. Critical knowledge is at risk.
30-60% = Medium. Partial knowledge is captured.
60-80% = Good. Most critical knowledge is documented.
80%+ = Excellent. Knowledge is systematically captured.
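The coverage calculation is simple enough to automate. Here is a minimal sketch in Python, assuming you maintain a list of critical processes with a documented/not-documented flag; the process names below are illustrative, not prescriptive:

```python
# Sketch: compute documentation coverage from a list of critical processes.
# The process names and flags below are illustrative examples.

def documentation_coverage(processes: dict[str, bool]) -> float:
    """Return documented / total critical processes as a percentage."""
    if not processes:
        return 0.0
    documented = sum(1 for is_documented in processes.values() if is_documented)
    return 100.0 * documented / len(processes)

def coverage_band(pct: float) -> str:
    """Map a coverage percentage to the benchmark bands above."""
    if pct >= 80:
        return "Excellent"
    if pct >= 60:
        return "Good"
    if pct >= 30:
        return "Medium"
    return "Low"

critical_processes = {
    "customer onboarding": True,
    "sales process": True,
    "technical architecture": False,
    "incident response": False,
}

pct = documentation_coverage(critical_processes)
print(f"Coverage: {pct:.0f}% ({coverage_band(pct)})")  # prints: Coverage: 50% (Medium)
```

Even a spreadsheet works for this; the point is that the list of critical processes is explicit and the percentage is recalculated on a schedule.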
#2: Cross-training penetration
What it measures: For each critical role, how many people have been trained as backups?
Why it matters: If only one person knows a critical function, you have dependency. If two or more people are trained, dependency is lower.
How to measure:
For each critical role/function, count how many people can perform it independently
Target: At least 2 people per critical function
Benchmark:
1 person = High risk (unacceptable)
2 people = Acceptable (minimum)
3+ people = Good (preferred)
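This headcount check can be expressed in a few lines. A sketch, assuming you track which people can cover each critical function independently (function and person names here are hypothetical):

```python
# Sketch: flag each critical function by its backup depth.
# Function and person names are hypothetical examples.

RISK_BANDS = {1: "High risk", 2: "Acceptable"}  # 3+ people -> "Good"

def penetration_report(coverage: dict[str, list[str]]) -> dict[str, str]:
    """Map each critical function to a risk band by independent-operator count."""
    report = {}
    for function, people in coverage.items():
        n = len(people)
        report[function] = RISK_BANDS.get(n, "Good" if n >= 3 else "High risk")
    return report

coverage = {
    "payroll run": ["Ana"],
    "release deployment": ["Ben", "Chloe"],
    "customer escalations": ["Dev", "Ema", "Finn"],
}

for function, band in penetration_report(coverage).items():
    print(f"{function}: {band}")
```

The key design choice is counting only people who can perform the function independently, not people who have merely "been shown" it once.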
#3: Decision autonomy
What it measures: How many decisions can people make without getting approval from the key person?
Why it matters: Decisions that are bottlenecked = key person dependency.
How to measure:
Survey non-key people: "What decisions do you need approval on? Who approves them?"
Count decisions that require key person approval
Benchmark: For decisions in your area of responsibility, what % require key person approval?
>50% of decisions require key person approval = High dependency
30-50% = Medium dependency
<30% = Low dependency
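Once the survey answers are collected, the approval rate is a straightforward ratio. A sketch under the assumption that each surveyed decision is recorded with a flag for whether the key person must approve it (the decision list is an invented example):

```python
# Sketch: compute the share of decisions gated by the key person.
# The decision list is an illustrative example of survey output.

def key_person_approval_rate(decisions: list[dict]) -> float:
    """Percentage of decisions that require the key person's approval."""
    if not decisions:
        return 0.0
    gated = sum(1 for d in decisions if d["needs_key_person"])
    return 100.0 * gated / len(decisions)

decisions = [
    {"decision": "discount above 10%", "needs_key_person": True},
    {"decision": "hire a contractor", "needs_key_person": True},
    {"decision": "choose sprint scope", "needs_key_person": False},
    {"decision": "schedule customer demo", "needs_key_person": False},
]

rate = key_person_approval_rate(decisions)
band = "High" if rate > 50 else "Medium" if rate >= 30 else "Low"
print(f"{rate:.0f}% gated -> {band} dependency")  # prints: 50% gated -> Medium dependency
```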
#4: Knowledge distribution
What it measures: Do people feel dependent on specific key people?
Why it matters: If people perceive high dependency, their behavior will reinforce it.
How to measure:
Survey employees: "For the key decisions in your area, do you need approval from [key person]?"
Survey employees: "If [key person] left, how much would their absence impact your work? (1-10 scale)"
Benchmark: Average score should be declining over time
#5: Meeting attendance patterns
What it measures: Is communication going through one hub person?
Why it matters: If all meetings require one person, they're a communication bottleneck.
How to measure:
Audit: Which meetings does the key person attend?
Count meetings by category:
Meetings only they can lead (essential)
Meetings they attend for information (not essential)
Meetings they could skip (they're attending but don't add value)
Benchmark: Key people should attend ~50% fewer meetings over time as dependency is reduced
#6: Documentation utilization
What it measures: Are people actually using the documentation?
Why it matters: Documentation nobody reads is wasted effort. Regular usage signals the documentation is valuable.
How to measure:
Track access to your documentation system (Confluence, Wiki, etc.)
% of team accessing documentation regularly
Benchmark: 70%+ team access = good adoption
#7: Cross-training completion
What it measures: Are cross-training programs actually completing?
Why it matters: Programs that don't finish don't transfer knowledge.
How to measure:
Track: % of planned cross-training that completes
Track: % of trained people who can perform independently after training
Benchmark:
<50% completion = Program failing
50-75% completion = Partial success
75%+ completion = Successful
Tier 3: The Health Indicators (System Health Measures)
These measure the overall health of your organizational system.
#1: Burnout indicators
What it measures: Are key people burning out?
Why it matters: Burnout is both a symptom and a cause of key person dependency.
How to measure:
Survey: "On a scale of 1-10, how burned out are you?" (Benchmark: should decline over time)
Track: Work hours/week for key people (should trend down)
Track: Vacation days taken (key people often don't take vacation when dependent)
Benchmark: Key people should work similar hours to others; if 15+ hours/week more, burnout risk
#2: Psychological safety
What it measures: Do people feel safe speaking up, admitting gaps, asking for help?
Why it matters: Psychological safety enables knowledge sharing. Low safety = knowledge hoarding.
How to measure:
Use a standard psychological safety survey (there are validated instruments)
Track over time
Benchmark: Average score should be increasing
#3: Innovation metrics
What it measures: Are ideas coming from everywhere or just the top?
Why it matters: If innovation depends on key people, organizational resilience is low.
How to measure:
Track: Ideas submitted by non-key people vs. key people
Track: Percentage of innovations from non-key people
Benchmark: 70%+ of ideas should come from non-key people
#4: Org chart visibility
What it measures: Is the org chart accurate or is there a shadow org chart?
Why it matters: A shadow org chart (people actually report to different people than shown) indicates hidden dependencies.
How to measure:
Compare official org chart to actual reporting relationships
Compare official decision authority to actual decision patterns
Benchmark: Official and actual should match 90%+
Putting It Together: The Dashboard
Create a dashboard that tracks all three tiers:
Lagging Indicators (Outcomes):
Time to fill critical roles
Resilience during absences
Turnover in critical roles
Knowledge transfer success
Leading Indicators (Inputs):
Documentation coverage %
Cross-training penetration
Decision autonomy %
Knowledge distribution (survey)
Meeting attendance patterns
Documentation utilization
Cross-training completion
Health Indicators (System):
Burnout scores
Psychological safety scores
Innovation distribution
Org chart accuracy
Review monthly. Track trends. Where are you improving? Where are you stuck?
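The monthly review can be reduced to one question per metric: is the trend moving the right way? A minimal sketch of that dashboard logic, where the metric names follow the tiers above and the sample values are invented; note that for some metrics (burnout, gated decisions) lower is better:

```python
# Sketch: a minimal monthly dashboard that labels each metric's trend.
# Metric names follow the three tiers above; sample values are invented.

def trend(history: list[float], higher_is_better: bool = True) -> str:
    """Label a metric as improving, declining, or flat from its last two readings."""
    if len(history) < 2 or history[-1] == history[-2]:
        return "flat"
    went_up = history[-1] > history[-2]
    return "improving" if went_up == higher_is_better else "declining"

dashboard = {
    "Lagging": {"knowledge transfer success %": ([55, 62], True)},
    "Leading": {"documentation coverage %": ([40, 48], True),
                "decisions gated by key person %": ([60, 52], False)},
    "Health": {"burnout score (1-10)": ([8, 7], False)},
}

for tier, metrics in dashboard.items():
    for name, (history, higher_is_better) in metrics.items():
        print(f"{tier}: {name}: {history[-1]} ({trend(history, higher_is_better)})")
```

Whatever tool you use, the `higher_is_better` direction must be set per metric, or a falling burnout score will wrongly show up as a regression.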
The Benchmark: Where Should You Be?
For startups (prevention stage):
Documentation coverage: 70%+
Cross-training penetration: 2+ per critical role
Decision autonomy: 70%+ can decide independently
Burnout scores: Low across the board
For established companies with low tech (cure stage):
Documentation coverage: 50% (building toward 80%)
Cross-training penetration: 1-2 per critical role (building toward 2+)
Decision autonomy: 30-50% (building toward 70%)
Psychological safety: Increasing (target: 7+/10)
For established companies with high tech (cure stage):
Documentation coverage: 80%+ (especially architecture decisions)
Cross-training penetration: 2+ per critical technical role
Decision autonomy: High (distributed decision-making built into workflows)
Hidden bottlenecks: Declining
Red Flags: When You're Not Making Progress
If these are true, you're not actually reducing key person dependency:
Documentation coverage is static (no new documentation being created)
Burnout scores are not declining
Cross-training isn't creating independent capability
Decision patterns haven't changed (same people making same decisions)
Meetings still require the key person
Turnover in critical roles is increasing
These are signals to change approach.
Key Takeaways: How to Measure Progress
✓ Lagging Indicators measure ultimate outcomes (resilience, retention, knowledge transfer)
✓ Leading Indicators measure inputs (documentation, cross-training, decision autonomy)
✓ Health Indicators measure system health (burnout, safety, innovation)
✓ Track all three to get a complete picture
✓ Review monthly and adjust approach based on trends
✓ Red flags tell you when the approach isn't working
You can't reduce key person dependency if you're not measuring it. Start measuring today.
Ready to assess your company’s organizational health?
If you want help evaluating how well your company is currently dealing with key person dependency, I offer a free assessment for founders and leadership teams.
About Francesco Malmusi
I’m Francesco Malmusi, founder and C-level operator. I work with leadership teams to turn resilience from a vague ambition into measurable progress using practical metrics for decision velocity, manager autonomy, execution flow, and organizational health. If your organization is trying to reduce bottlenecks and single points of failure, measurement is the bridge between good intentions and durable change.