Python · MIT License

Progressive Estimation


An AI skill for estimating AI-assisted and hybrid human+agent development work. Research-backed PERT formulas, calibration feedback loops, and zero dependencies — standalone or composable inside any agent pipeline.

The Problem

Estimation just got harder.

AI changed how fast work gets done. It didn't fix how work gets estimated. Your old velocity charts are useless. Your gut is lying to you.

🤖 AI changes velocity unpredictably

When an agent is writing 60% of the code, traditional story points and historical throughput stop making sense.

👥 Hybrid teams need hybrid models

Estimation frameworks designed for humans don't account for agent parallelism, hallucination risk, or review overhead.

📊 Gut feel diverges with scale

Individual intuition drifts further from reality as codebases and teams grow. You need data-driven calibration.

🔁 No feedback loop — ever

Most teams estimate, deliver, and never compare. Without tracking actuals, your estimates stay systematically wrong forever.

The Method

The math behind the skill

PERT applied to AI-assisted workflows. No hand-waving, no magic coefficients.

PERT Formula

Three-point estimation

Each estimate produces a weighted mean and standard deviation — giving you a confidence distribution, not a single number.

# Weighted PERT mean
E = (O + 4M + P) / 6

# Variance
σ² = ((P - O) / 6)²

# AI-agent adjustment
E_adj = E × (1 - α×h) + β×r

h = human fraction · α = agent velocity · β = review overhead · r = risk
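The formulas above can be sketched in plain Python. The coefficients α, β, and r are left at zero by default here since the skill's actual defaults aren't documented in this section; only the three-point PERT arithmetic is taken directly from the formulas.

```python
def pert_estimate(o, m, p, human_fraction=1.0, alpha=0.0, beta=0.0, risk=0.0):
    """Three-point PERT estimate with the AI-agent adjustment applied.

    alpha (agent velocity), beta (review overhead), and risk are
    illustrative inputs here -- the skill's real defaults may differ.
    """
    mean = (o + 4 * m + p) / 6          # weighted PERT mean
    sigma = (p - o) / 6                 # standard deviation
    adjusted = mean * (1 - alpha * human_fraction) + beta * risk
    return mean, sigma, adjusted

mean, sigma, adjusted = pert_estimate(4, 8, 20)
print(f"mean={mean:.2f}h  sigma={sigma:.2f}h")  # mean=9.33h  sigma=2.67h
```

With α = β = 0 the adjustment is a no-op and the output matches the raw PERT numbers for the (4, 8, 20) example used later in this page.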

Feedback Loop

Calibration over time

Log actuals after each task. The skill surfaces your team's systematic bias — whether you consistently over- or under-estimate by domain or complexity.

  • Track estimate vs actual per task
  • Compute bias coefficient per developer
  • Adjust future estimates automatically
  • Export calibration data as CSV or JSON
  • Domain-specific accuracy tracking
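One way to picture the bias coefficient (names and log format below are hypothetical, not the skill's API): average the actual-to-estimate ratio over logged tasks, then scale fresh estimates by it.

```python
from statistics import mean

def bias_coefficient(log):
    """Mean ratio of actual to estimated hours over logged tasks.

    `log` is a list of (estimated_hours, actual_hours) pairs.
    > 1.0 means systematic under-estimation; < 1.0, over-estimation.
    """
    return mean(actual / estimated for estimated, actual in log)

def calibrated(estimate_hours, log):
    # Scale a fresh estimate by the observed historical bias.
    return estimate_hours * bias_coefficient(log)

history = [(8, 11.5), (5, 6.0), (12, 13.0)]
print(f"bias={bias_coefficient(history):.2f}")  # bias=1.24
```

A per-developer or per-domain version is the same idea with the log partitioned before averaging.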
Quick Start

Zero setup. Immediate results.

Clone, import, estimate. No package installs. No config required.

estimate.py
from progressive_estimation import estimate, log_actual

result = estimate(
    task="Refactor authentication module",
    optimistic=4,
    most_likely=8,
    pessimistic=20,
    human_fraction=0.35,
    complexity="medium",
    risk_factors=["legacy-code", "no-tests"],
)

print(result)
# pert_mean=9.3h · p90=14.1h · sigma=2.67h
# recommendation: "Buffer to 14h for 90% confidence"

# Log what actually happened
log_actual(task_id=result.id, actual_hours=11.5)
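One way to read a p90 off a (mean, sigma) distribution is a normal approximation, mean + 1.28σ. The sketch below shows only that unadjusted calculation; it comes out lower than the 14.1h the skill reports, presumably because the risk factors and complexity also feed the recommended buffer.

```python
pert_mean, sigma = 9.33, 2.67
z90 = 1.28  # ~90th percentile of the standard normal
print(f"unadjusted p90 ≈ {pert_mean + z90 * sigma:.1f}h")  # ≈ 12.7h
```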

Stop guessing.
Start calibrating.

MIT licensed. Zero dependencies. Python 3.8+.