A point prediction without uncertainty is like a deploy without monitoring. Prediction intervals tell you how much to trust a number, and whether the uncertainty is tight enough to act on.
Author: Matthew Gibbons
Published: 24 January 2026
“The model predicts 1,000 visitors tomorrow.”
That statement is technically correct and practically useless. It doesn’t tell you whether 1,000 means “almost certainly between 990 and 1,010” or “somewhere between 600 and 1,400, honestly it’s anyone’s guess.” Those are very different situations, and they demand very different decisions. The first one lets you plan capacity with confidence. The second one means you need a buffer, a fallback, or more data.
A point prediction without uncertainty is like a deploy without monitoring. You’ve done the work, but you’ve given yourself no way to know whether things are going well.
Intervals encode honesty
A prediction interval wraps a point prediction in a range that reflects the model’s uncertainty. “The model predicts between 938 and 1,062 visitors with 95% probability” is a different kind of statement from “the model predicts 1,000.” The interval tells you three things: the model’s best guess (the centre), how much uncertainty remains (the width), and the probability that the true value falls within the range (the coverage level).
```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)
days = np.arange(1, 31)
lam = 1000
visitors = rng.poisson(lam=lam, size=30)

# For a Poisson with large λ, the normal approximation gives a prediction interval
lower = lam - 1.96 * np.sqrt(lam)
upper = lam + 1.96 * np.sqrt(lam)

fig, ax = plt.subplots(figsize=(10, 5))
fig.patch.set_alpha(0)
ax.patch.set_alpha(0)
ax.scatter(days, visitors, color='#0072B2', alpha=0.6, s=30, zorder=3)
ax.axhline(lam, color='#E69F00', linewidth=2, linestyle='--',
           label=f'Point prediction ({lam})')
ax.fill_between(days, lower, upper, color='#E69F00', alpha=0.20,
                label=f'95% prediction interval ({lower:.0f}–{upper:.0f})')
ax.set_xlabel('Day')
ax.set_ylabel('Visitors')
ax.set_title('A prediction is incomplete without its interval')
ax.set_xlim(0.5, 30.5)
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.yaxis.grid(True, linestyle=':', alpha=0.4, color='grey')
ax.set_axisbelow(True)
ax.legend(loc='upper right', framealpha=0.0)
plt.tight_layout()
plt.show()
```
Figure 1: A point prediction (amber dashed line) says where the centre is. The prediction interval (shaded band) says how much to trust it. Wider intervals mean more uncertainty, and more honest communication about the limits of the model.
The shaded band is the prediction interval. Most observations land inside it; a few don’t, which is exactly what a 95% interval predicts. The interval isn’t a guarantee. It’s a calibrated statement about the model’s uncertainty.
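That claim is easy to check empirically. A quick sketch, reusing the Poisson(1000) setup from the figure (the sample size here is arbitrary, chosen only to make the estimate stable):

```python
import numpy as np

# Empirically check the coverage of the normal-approximation interval
# for a Poisson(1000) process.
rng = np.random.default_rng(0)
lam = 1000
lower = lam - 1.96 * np.sqrt(lam)   # ≈ 938
upper = lam + 1.96 * np.sqrt(lam)   # ≈ 1062
draws = rng.poisson(lam=lam, size=100_000)
coverage = np.mean((draws >= lower) & (draws <= upper))
print(f"Empirical coverage: {coverage:.3f}")  # close to 0.95
```

Roughly 95% of draws land inside the band, which is exactly what the interval promised: no more, no less.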
Width is information
The width of a prediction interval tells you something the point prediction can’t: whether you know enough to act.
Consider two models predicting next month’s server costs. Model A gives you an interval of £8,200–£8,800. Model B gives you £5,000–£12,000. Both might have the same point prediction of £8,500. But Model A is saying “I’m fairly confident” while Model B is saying “I have almost no idea.” The decision you make (how much budget to reserve, whether to pre-purchase capacity, whether to investigate further) depends entirely on the width, not the centre.
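The original code for Figure 2 isn’t shown; a minimal sketch that reproduces the comparison, using the illustrative cost figures from the text and the same colour palette as Figure 1, might look like this:

```python
import matplotlib.pyplot as plt

# Two hypothetical models sharing a point prediction of £8,500 but
# with very different interval widths. The numbers come from the
# text's example, not from a fitted model.
intervals = [('Model A', 8500, 8200, 8800), ('Model B', 8500, 5000, 12000)]

fig, axes = plt.subplots(1, 2, figsize=(10, 4), sharey=True)
for ax, (label, centre, lo, hi) in zip(axes, intervals):
    # Shaded bar spans the interval; the dot marks the point prediction
    ax.bar(0, hi - lo, bottom=lo, width=0.4, color='#0072B2', alpha=0.3)
    ax.plot(0, centre, 'o', color='#E69F00', markersize=10, zorder=3)
    ax.set_title(f'{label}: £{lo:,}–£{hi:,}')
    ax.set_xticks([])
    ax.spines['top'].set_visible(False)
    ax.spines['right'].set_visible(False)
axes[0].set_ylabel('Predicted monthly cost (£)')
plt.tight_layout()
plt.show()
```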
Figure 2: Two models with the same point prediction but very different intervals. Model A (left) gives you enough precision to act. Model B (right) is telling you it doesn’t know enough yet; that honesty is more useful than a false sense of certainty.
A wide interval means the model is being honest about what it doesn’t know. And that honesty is directly actionable: it tells you to gather more data, add better features, or make your decision robust to the range of outcomes rather than betting on the point estimate.
Intervals shrink (and stop shrinking)
This connects back to the distinction between reducible and irreducible error. When you improve a model (add a useful feature, collect more data, choose a more appropriate structure) the prediction interval gets narrower. The model is more certain because it’s explaining more of the variation.
But the interval never collapses to zero. There’s a floor set by the irreducible error in the process, and no amount of modelling will get you below it. Recognising where that floor sits is one of the most practically useful things a model can tell you.
```python
import matplotlib.pyplot as plt

models = [
    ('Baseline model', 1000, 120),
    ('+ day-of-week', 1000, 80),
    ('+ seasonal + marketing', 1000, 55),
]

fig, ax = plt.subplots(figsize=(10, 4))
fig.patch.set_alpha(0)
ax.patch.set_alpha(0)

irreducible = 30  # floor

for i, (label, centre, half_width) in enumerate(models):
    ax.barh(i, 2 * half_width, left=centre - half_width, height=0.5,
            color='#0072B2', alpha=0.3 + 0.2 * i)
    ax.plot(centre, i, 'o', color='#E69F00', markersize=8, zorder=3)
    ax.text(centre - half_width - 5, i, f'{centre - half_width}',
            fontsize=8, ha='right', va='center', color='#D55E00')
    ax.text(centre + half_width + 5, i, f'{centre + half_width}',
            fontsize=8, ha='left', va='center', color='#D55E00')

# Irreducible floor
ax.barh(len(models) + 0.3, 2 * irreducible, left=1000 - irreducible,
        height=0.3, color='#D55E00', alpha=0.3)
ax.text(1000, len(models) + 0.3, 'Irreducible floor', fontsize=8,
        ha='center', va='center', color='#D55E00')

ax.set_yticks(range(len(models)))
ax.set_yticklabels([m[0] for m in models])
ax.set_xlabel('Predicted visitors')
ax.set_title('Better models narrow the interval — but a floor remains')
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.xaxis.grid(True, linestyle=':', alpha=0.4, color='grey')
ax.set_axisbelow(True)
plt.tight_layout()
plt.show()
```
Figure 3: As models improve, prediction intervals narrow — but they never reach zero. The remaining width is irreducible uncertainty. Knowing where this floor sits tells you when to stop modelling and start making decisions.
Adding features tightens the interval, but there’s a point where additional complexity stops helping — where the remaining uncertainty is genuine randomness in the process, not missing information. That’s the floor. Recognising it saves you from chasing precision that doesn’t exist.
Making decisions under uncertainty
Prediction intervals connect directly to decisions. An interval tells you not just what to expect, but whether the uncertainty is small enough to commit, or wide enough to hedge.
If the interval for tomorrow’s traffic is 950–1,050, you can provision for 1,050 and be confident. If it’s 700–1,300, you need a different strategy: auto-scaling, a buffer, a fallback plan. The point prediction is the same in both cases. The decision is completely different.
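One way to make that branching explicit is a small decision rule. The relative-width threshold below is illustrative, not a standard; the right cut-off depends on the cost of over-provisioning versus falling over:

```python
def provisioning_plan(lower, upper, max_relative_width=0.2):
    """Choose a strategy based on interval width relative to its centre.

    The 0.2 threshold is a hypothetical cut-off for this sketch.
    """
    centre = (lower + upper) / 2
    if (upper - lower) / centre <= max_relative_width:
        # Interval is tight: commit to the upper bound and move on
        return f'fixed provisioning for {upper:.0f}'
    # Interval is wide: the model is saying it doesn't know enough
    return 'hedge: auto-scaling plus a fallback plan'

print(provisioning_plan(950, 1050))   # tight → fixed provisioning for 1050
print(provisioning_plan(700, 1300))   # wide → hedge
```

The function encodes the article’s point: the inputs are the interval’s endpoints, not the point prediction, because the point prediction alone can’t distinguish the two cases.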
This is the same logic behind error budgets and SLOs. An SLO doesn’t say “the system will always respond in 300ms.” It says “99.9% of the time, the system will respond in under 300ms” — and the 0.1% is explicitly budgeted for. Prediction intervals are the same idea applied to model outputs. They tell you the range, the probability, and implicitly how to plan for the cases that fall outside.
Once you start thinking this way, point predictions start to feel like monitoring dashboards that only show averages. Technically informative. Practically insufficient. The interval is where the useful information lives.
This article is part of a series drawn from Thinking in Uncertainty, a book that teaches data science to experienced software engineers.