An AI skill for estimating AI-assisted and hybrid human+agent development work. Research-backed PERT formulas, calibration feedback loops, and zero dependencies — standalone or composable inside any agent pipeline.
AI changed how fast work gets done. It didn't fix how work gets estimated. Your old velocity charts are useless. Your gut is lying to you.
When an agent is writing 60% of the code, traditional story points and historical throughput stop making sense.
Estimation frameworks designed for humans don't account for agent parallelism, hallucination risk, or review overhead.
Individual intuition drifts further from reality as codebases and teams grow. You need data-driven calibration.
Most teams estimate, deliver, and never compare. Without actuals tracking, your estimates stay systematically wrong forever.
PERT applied to AI-assisted workflows. No hand-waving, no magic coefficients.
Each estimate produces a weighted mean and standard deviation — giving you a confidence distribution, not a single number.
h = human fraction · α = agent velocity · β = review overhead · r = risk
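The three-point PERT core can be sketched as follows. The `pert` function is the standard beta-weighted formula; the `adjusted` helper and its default parameter values are illustrative assumptions about how `h`, `α`, `β`, and `r` might enter, not the skill's actual formula.

```python
def pert(optimistic, likely, pessimistic):
    """Classic three-point PERT: beta-weighted mean and standard deviation."""
    mean = (optimistic + 4 * likely + pessimistic) / 6
    sd = (pessimistic - optimistic) / 6
    return mean, sd

# Hypothetical adjustment using the skill's parameters (h, alpha, beta, r);
# this only illustrates the shape of the calculation, not the real formula.
def adjusted(mean, sd, h=0.4, alpha=3.0, beta=0.15, r=0.1):
    # Human fraction runs at human speed; the agent fraction is sped up
    # by alpha, then review overhead is added. Risk widens the spread.
    m = mean * (h + (1 - h) / alpha) * (1 + beta)
    s = sd * (1 + r)
    return m, s

m, s = pert(2, 4, 10)  # hours
print(round(m, 2), round(s, 2))  # 4.67 1.33
```

Reporting `(mean, sd)` rather than a point value is what makes the output a confidence distribution: a task estimated at 4.67 ± 1.33 hours communicates far more than "5 hours".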
Log actuals after each task. The skill surfaces your team's systematic bias — whether you consistently over- or under-estimate by domain or complexity.
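The bias surfaced from logged actuals can be computed along these lines; the log tuple layout and domain keys are hypothetical, chosen only to show the mechanic.

```python
from collections import defaultdict

# Hypothetical actuals log: (domain, estimated_hours, actual_hours).
log = [
    ("backend", 4.0, 6.0),
    ("backend", 3.0, 4.5),
    ("frontend", 5.0, 4.0),
]

def bias_by_domain(entries):
    """Mean actual/estimate ratio per domain; >1 means you underestimate."""
    ratios = defaultdict(list)
    for domain, est, actual in entries:
        ratios[domain].append(actual / est)
    return {d: sum(rs) / len(rs) for d, rs in ratios.items()}

print(bias_by_domain(log))  # {'backend': 1.5, 'frontend': 0.8}
```

A multiplicative ratio (rather than a raw hour delta) is the natural bias measure here, since it transfers across tasks of different sizes: a team that runs 1.5× on backend work can scale its next backend estimate directly.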
Clone, import, estimate. No package installs. No config required.
MIT licensed. Zero dependencies. Python 3.8+.