Simulating the Zero-Sum Reality of Probabilistic Models
⚠ Additive Model — Pedagogical Simplification
Select a domain to load realistic default values. You can still adjust everything manually.
Your current monthly cost:
What happens to that number if one thing changes by 5%?
The share of all hidden problems we actually catch.
The share of our AI flags that are actually correct.
Cost Curve (Recall vs Total Cost)
Capacity to review all AI-generated flags (Caught + False Alarms).
Delta (First → Last)
This simulator uses an additive constraint: Recall + Precision = a fixed Performance Budget.
The seesaw metaphor is deliberate: push recall up by 10 points and precision drops by exactly 10 points.
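The seesaw can be sketched in a few lines. This is a minimal illustration of the additive model only; the names `BUDGET` and `precision_for` are illustrative, not from the simulator itself.

```python
# Hypothetical fixed Performance Budget: recall + precision must sum to this.
BUDGET = 1.5

def precision_for(recall: float, budget: float = BUDGET) -> float:
    """Under the additive model, precision is whatever budget remains."""
    return budget - recall

# Raising recall by 10 points (0.10) lowers precision by exactly 0.10:
p_low = precision_for(0.70)   # 0.80
p_high = precision_for(0.80)  # 0.70
```

Real precision-recall trade-offs are curved, not linear; this linear constraint is what makes the seesaw behave so predictably.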
The Cost Optimizer sweeps every valid recall value within your budget and finds the split that minimizes total monthly error cost. The cost curve shows this visually — the U-shape (or V-shape) reveals the sweet spot where the cost of additional misses and the cost of additional false alarms reach equilibrium.
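The sweep described above can be sketched as follows. All names and the cost formula here are assumptions for illustration: misses cost `cost_fn` each, false alarms cost `cost_fp` each, and false alarms are derived from precision (flags = caught / precision).

```python
def total_cost(recall, precision, hidden_problems, cost_fn, cost_fp):
    """Monthly error cost under the additive model (illustrative formula)."""
    misses = hidden_problems * (1 - recall)
    caught = hidden_problems * recall
    # If precision = caught / flags, then false alarms = caught * (1 - p) / p.
    false_alarms = caught * (1 - precision) / precision
    return misses * cost_fn + false_alarms * cost_fp

def best_split(budget, hidden_problems, cost_fn, cost_fp, step=0.01):
    """Sweep every valid recall value and return (recall, cost) at the minimum."""
    best = None
    r = step
    while r < min(budget, 1.0):
        p = budget - r
        if 0 < p <= 1.0:  # precision must also be a valid proportion
            c = total_cost(r, p, hidden_problems, cost_fn, cost_fp)
            if best is None or c < best[1]:
                best = (r, c)
        r = round(r + step, 10)
    return best
```

Plotting `total_cost` over the swept recall values reproduces the U- or V-shaped cost curve; the returned point is its bottom.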
The F-beta score, F_β = (1 + β²) · P · R / (β² · P + R), uses β = √(CostFN / CostFP) to weight the harmonic mean toward whichever error type is more expensive: β > 1 favors recall, β < 1 favors precision.
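A minimal sketch of that scoring rule, assuming the standard F-beta definition with β derived from the cost ratio (the function name is illustrative):

```python
import math

def f_beta(precision: float, recall: float, cost_fn: float, cost_fp: float) -> float:
    """F-beta with beta chosen from the cost ratio: beta = sqrt(CostFN / CostFP)."""
    beta = math.sqrt(cost_fn / cost_fp)
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)
```

When misses and false alarms cost the same, β = 1 and this reduces to the ordinary F1 score; when misses are 4x as expensive, β = 2 and a high-recall split outscores a high-precision one with the same values swapped.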
Remember: Real models don't work this way. A real precision-recall (PR) curve is non-linear, model-specific, and can be shifted upward with better data or architecture. This tool teaches why the trade-off matters; your data science team's actual PR curve tells you where to set the threshold.
For deeper treatment: Manning, Raghavan & Schütze, "Introduction to Information Retrieval" (Ch. 8, Evaluation in Information Retrieval)