Reading Guide

How to read these dashboards

Everything you need to understand the analysis — in three minutes.

01 — The four quadrants

What are we looking at?

Every executive task is scored on two axes — business case and technical feasibility — and placed in one of four quadrants.

Automate now

Strong business case AND AI can do it reliably. Deploy AI, human monitors.

Augment

Strong business case BUT AI can't do it alone yet. AI assists, human decides.

Convenience

Weak business case but AI could do it. Automate when convenient — low priority.

Human fortress

Weak business case AND AI can't do it. Leave alone. Revisit annually.

What should I look at first?

The percentage bars on each company card. If the orange (Augment) dominates — as it does for most executives — it means AI can help but can't replace. If green (Automate Now) is significant, that role is on the automation frontier. Most executive roles are 60%+ orange. That's the finding.
02 — The two axes

IMPACT × FAVES

Most AI assessments produce one score per task. This framework produces two — because "should we automate this?" and "can AI actually do this?" are different questions.

IMPACT — "Is it worth automating?"

Business case axis

Scores how much value you'd unlock by automating this task.

I — Investment of time — how many hours does this task consume?
M — Multiplicity — how many people do this, how often?
P — Pain — how tedious or error-prone is it?
A — Accrued economic value — what's the revenue or cost impact?
C — Cost of current execution — how expensive is it to do manually?
T — Trend — is the volume growing, stable, or shrinking?
High IMPACT = "Yes, it's worth automating this — if AI can actually do it." IMPACT says nothing about whether AI can.
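As a sketch, the IMPACT composite for a single task is simply the mean of its six dimension scores. The scores below are illustrative, not taken from any real dashboard:

```python
# IMPACT composite: mean of the six business-case dimensions, each scored 1-5.
# These scores are made up for illustration only.
impact_scores = {
    "I": 4,  # Investment of time
    "M": 5,  # Multiplicity
    "P": 3,  # Pain
    "A": 4,  # Accrued economic value
    "C": 3,  # Cost of current execution
    "T": 2,  # Trend
}

impact_composite = sum(impact_scores.values()) / len(impact_scores)
print(round(impact_composite, 2))  # 3.5 -> right at the "strong business case" bar
```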

FAVES — "Can AI actually do it?"

Feasibility axis

Scores whether generative AI can perform this task reliably today.

F — Fidelity of verification — can a non-expert check if the output is correct? The hallucination firewall.
A — Autonomy of context — does AI have access to everything it needs, or is critical knowledge in someone's head?
V — Volatility of consequence — how bad is a wrong answer? (Inverse: high score = low consequence.)
E — Entanglement with systems — how many enterprise systems must be connected? (Inverse: high = standalone.)
S — Step decomposability — can the task be broken into checkpoints where humans verify along the way?
High FAVES = "Yes, AI can do this reliably." Low FAVES means it can't — yet — regardless of how strong the business case is.
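The FAVES composite works the same way — a mean of five 1-5 scores — with one wrinkle: V and E are inverse-scored, so a high-consequence task gets a low V before averaging. A minimal sketch with illustrative scores:

```python
# FAVES composite: mean of the five feasibility dimensions, each scored 1-5.
# Note V and E are inverse-scored (high = low consequence / standalone system),
# so a statutory-liability task enters the average as V=1, not V=5.
faves_scores = {"F": 5, "A": 4, "V": 1, "E": 3, "S": 4}  # illustrative only

faves_composite = sum(faves_scores.values()) / len(faves_scores)
print(round(faves_composite, 2))  # 3.4 -> one low spoke drags the whole composite
```

This is also why a single collapsed dimension (the V-score cliff) can hold a task out of Automate Now even when the other four spokes are strong.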
Why two scores instead of one?

A task can score high on IMPACT and low on FAVES — worth automating but AI can't do it yet. That's the Augment quadrant, where ~60% of all executive tasks land. A single composite score hides this distinction. It's the difference between "AI exposure: 65%" (meaningless) and "65% of your tasks are worth automating but AI needs human oversight" (actionable).
03 — Reading the scores

What the numbers mean

Every dimension is scored 1-5. The composite is the average across all dimensions on that axis.

1 — Minimal
2 — Low
3 — Moderate
4 — High
5 — Maximum
IMPACT composite ≥ 3.5
Strong business case
This task is expensive, time-consuming, or high-value enough that automating it would produce meaningful ROI.
FAVES composite ≥ 3.5
AI can do it reliably
The output is verifiable, context is digital, consequences are manageable, and the task breaks into checkable steps.
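Putting the two thresholds together, quadrant assignment reduces to a pair of comparisons. A minimal sketch, using the guide's 3.5 cut-off for both axes:

```python
def quadrant(impact: float, faves: float, threshold: float = 3.5) -> str:
    """Map a task's two composites to one of the four quadrants.

    A composite >= 3.5 counts as "strong business case" (IMPACT)
    or "AI can do it reliably" (FAVES), per the guide.
    """
    if impact >= threshold:
        return "Automate Now" if faves >= threshold else "Augment"
    return "Convenience" if faves >= threshold else "Human Fortress"

print(quadrant(4.2, 4.0))  # Automate Now
print(quadrant(4.2, 2.8))  # Augment
print(quadrant(2.1, 4.0))  # Convenience
print(quadrant(2.1, 2.0))  # Human Fortress
```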
Headcount multiplier
e.g. "0.80x"
Means AI could theoretically compress 20% of this role's workload. Lower = more exposed. Most executives are 0.84–0.93x. The CFO is often the outlier.
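The guide defines only the interpretation of the multiplier (0.80x = 20% compression), not its computation. One plausible reading — an assumption, not the dashboard's actual formula — is one minus the share of the role's hours sitting in automatable tasks:

```python
# Headcount multiplier sketch. ASSUMPTION: multiplier = 1 - (share of the
# role's hours AI could absorb). Task names and hours are hypothetical.
task_hours = {"board prep": 10, "forecasting": 6, "media": 4}
automatable = {"forecasting"}  # tasks landing in Automate Now

total_hours = sum(task_hours.values())
absorbed = sum(h for task, h in task_hours.items() if task in automatable)
multiplier = 1 - absorbed / total_hours
print(f"{multiplier:.2f}x")  # 0.70x -> 30% of this role's workload is compressible
```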
Deskilling index
e.g. "0.3"
The fraction of the role's highest-value expert tasks being absorbed by AI. 0.0 = expertise is protected. 0.5 = AI is hollowing out the complex work, leaving routine residue.
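Per the glossary, the index is the fraction of the role's expert tasks that land in Automate Now. A minimal sketch, with a hypothetical task list:

```python
# Deskilling index sketch: share of high-value expert tasks absorbed by AI
# (i.e. classified Automate Now). Task names are hypothetical.
expert_tasks = ["valuation model", "deal structuring", "tax opinion", "succession plan"]
in_automate_now = {"valuation model", "tax opinion"}

deskilling_index = sum(t in in_automate_now for t in expert_tasks) / len(expert_tasks)
print(deskilling_index)  # 0.5 -> AI is hollowing the complex work
```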
04 — The scatter plot

Reading the IMPACT × FAVES matrix

Each dot is a task. Its position tells you what to do with it.

Top-right = Automate Now
High business case + high feasibility. Deploy AI, human monitors output. The green zone.
Top-left = Augment
High business case + low feasibility. AI assists, human verifies and decides. The orange zone — where most executive tasks live.
Bottom-right = Convenience
Low business case + high feasibility. AI can do it, but it's low priority. Automate when convenient.
Bottom-left = Human Fortress
Low business case + low feasibility. Not worth the effort or the risk. Leave alone. Revisit next year.
05 — The radar chart

Reading the FAVES radar

Five spokes, one per FAVES dimension. A full, balanced pentagon means the role is highly automatable. A collapsed or lopsided shape tells you why it isn't.

What to look for
The shape matters more than the size. A pentagon that collapses on one spoke — say, V (consequence) — tells you that one dimension is blocking automation even when the other four are strong. Kelly Partners' accountants show exactly this: strong F, A, S but V=1 because the CPA carries personal liability. The radar makes the bottleneck visible.
06 — The five constraint types

Why does exposure differ?

Every company's AI exposure is shaped by its binding constraint — the structural factor that prevents automation regardless of how good AI gets.

Atoms — physical delivery (GenusPlus)
Bits — digital/software (WiseTech)
Institutional — regulation & liability (Kelly Partners)
Atoms + Institutional — both (CSL)
Cognitive — judgment & relationships (SEEK)

The constraint type determines which FAVES dimensions score low — and therefore which quadrant dominates. Atoms companies have low A (context is physical) and low V (field errors are dangerous). Institutional companies have low V (statutory consequences) despite high everything else. Cognitive companies show bimodal splits — the platform side looks like Bits, the relationship side looks like Atoms.

When you compare two companies with similar Augment percentages but different constraint types, the strategic response is completely different. An Atoms company should fortress. A Bits company should operationalise. The constraint type — not the exposure score — determines what to do.
07 — The three strategic paths

What to do about it

Every company gets a recommended allocation across three paths. The mix depends on constraint type.

Operator
Master AI tools for your Automate Now tasks
Become irreplaceable because you run the machines. Best when your industry hasn't adopted yet. Risk: AI capability accelerates faster than your operator moat.
Fortress
Double down on what AI can't replace
Redirect energy toward tasks where human judgment is structurally irreplaceable — relationships, physical delivery, regulatory navigation. Risk: FAVES scores rise over time.
Escalate
Wrap AI commodity outputs in higher-value human services
AI produces the draft, the forecast, the analysis. You add the judgment, the relationship, the trust. Clients pay for the judgment. The pattern: AI does the $3,000 task, you redirect 6 hours to the $15,000 engagement. Universally recommended at ~40% regardless of constraint type.
08 — Quick reference

Glossary

For when you see a term on a dashboard and need a one-line answer.

Automate Now % — Tasks where both business case and AI feasibility are strong. Deploy AI now.
Augment % — Tasks worth automating but AI can't yet do reliably. Human-in-the-loop.
Convenience % — AI could do it, but low priority. Automate when you get around to it.
Human Fortress % — Not worth automating AND AI can't do it. Leave alone.
HC Multiplier — How much AI could compress the role's workload. 0.80x = 20% compression.
Deskilling Index — Fraction of high-value expert tasks in Automate Now. 0 = protected. 0.5 = hollowing.
Adoption Gap — Gap between what AI could theoretically do vs what's automatable today. Measured in percentage points.
V-Score Cliff — When consequence-of-error (V) collapses to 1, blocking automation despite high scores elsewhere.
IMPACT composite — Average of I, M, P, A, C, T scores (1-5). The business case for automation.
FAVES composite — Average of F, A, V, E, S scores (1-5). Whether AI can technically do it.