A calm, clinical duration sanity-check. Compare declared timelines against benchmark bands and simulate uncertainty using Monte Carlo to quantify “how likely this plan is to survive contact with reality.”
Scope Size is ignored in custom mode. Use your org’s historical data.
Tip: Use months for clean comparisons across project classes.
Departments or units requiring cross-team coordination. More teams = more coordination overhead, per Brooks’s Law: “Adding manpower to a late software project makes it later.” — Fred Brooks, The Mythical Man-Month (1975)
Baseline uses a triangular distribution across the benchmark band, with stochastic overhead from integration count, team count, and change intensity, and a capped readiness benefit.
| Project type | Small (months) | Medium (months) | Large (months) |
|---|---|---|---|
| AI | 3–6 | 6–12 | 12–24 |
| ERP | 6–9 | 9–18 | 18–36 |
| CRM | 3–6 | 6–12 | 12–18 |
| HR | 3–6 | 6–9 | 9–15 |
Generic heuristic baselines. Active selection highlighted.
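The simulation model described above can be sketched in a few lines. This is a minimal illustration, not the tool’s actual internals: the overhead weights, the 15% readiness cap, and the default inputs are all assumptions chosen for demonstration, with the band taken from the medium AI row (6–12 months).

```python
import random

def simulate_duration(band_low, band_high, n_integrations=3, n_teams=2,
                      change_intensity=0.5, readiness=0.5, runs=10_000, seed=42):
    """Monte Carlo over a triangular baseline with stochastic overhead
    and a capped readiness benefit. All factor weights are illustrative."""
    rng = random.Random(seed)
    mode = (band_low + band_high) / 2            # most-likely duration: mid-band
    results = []
    for _ in range(runs):
        base = rng.triangular(band_low, band_high, mode)
        # Stochastic overhead: each integration, extra team, and unit of
        # change intensity adds a small noisy multiplicative penalty.
        overhead = 1.0
        overhead += n_integrations * rng.uniform(0.00, 0.04)
        overhead += (n_teams - 1) * rng.uniform(0.00, 0.05)  # Brooks-style coordination cost
        overhead += change_intensity * rng.uniform(0.00, 0.10)
        # Readiness benefit, capped at a 15% reduction (assumed cap).
        benefit = min(readiness * 0.20, 0.15)
        results.append(base * overhead * (1 - benefit))
    return results

durations = sorted(simulate_duration(6, 12))     # medium AI band, in months
p50 = durations[len(durations) // 2]
p80 = durations[int(len(durations) * 0.8)]
declared = 6
survival = sum(d <= declared for d in durations) / len(durations)
print(f"P50={p50:.1f}m  P80={p80:.1f}m  P(declared {declared}m holds)={survival:.0%}")
```

The survival figure is the “how likely this plan is to survive contact with reality” number: the fraction of simulated outcomes that finish within the declared timeline.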
1.0 = aligned • 3.0+ = hallucination • 5.0+ = fantasy • 8.0 = cap
Low % = schedule is a political statement.
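The ratio legend above maps onto a simple threshold function. A sketch, assuming the ratio compares the benchmark expectation against the declared timeline; the function name and the handling of the unnamed 1.0–3.0 band are assumptions:

```python
def classify_ratio(ratio: float) -> str:
    """Map a timeline ratio to the legend's labels (hypothetical helper).
    Per the legend: 1.0 = aligned, 3.0+ = hallucination, 5.0+ = fantasy,
    values are capped at 8.0."""
    ratio = min(ratio, 8.0)   # 8.0 = cap, per the legend
    if ratio >= 5.0:
        return "fantasy"
    if ratio >= 3.0:
        return "hallucination"
    # The legend names no band between 1.0 and 3.0; treated as aligned here.
    return "aligned"
```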
Suggested commitment: —
—
Benchmarks are generic planning heuristics. Monte Carlo simulation illustrates uncertainty under assumed variability; it is a decision aid, not a guarantee. Actual timelines vary by scope definition, vendor maturity, integrations, data quality, regulatory burden, and change adoption.
The benchmark bands used in this tool are derived from industry research, analyst reports, and vendor-neutral advisory data. They represent generic heuristic baselines across project types and scope sizes, not vendor-specific commitments. Key sources are listed below.
Note: Benchmark bands in this tool are synthesised from the above sources and represent conservative planning heuristics. Individual project timelines will vary based on scope, vendor, org complexity, and change readiness. Sources last verified February 2026.