A Smart Min-Min Algorithm Elevates Moving Average Forecasting into the AI Era
In the fast-evolving landscape of data-driven decision-making, forecasting tools once reserved for statisticians and operations researchers are now expected to operate autonomously—intelligently, reliably, and in real time. One such workhorse of predictive analytics, the moving average method, long regarded as simple but effective, has quietly undergone a significant modernization. At the heart of this transformation lies a novel algorithmic framework known as the Min-Min algorithm, recently developed by Xu Qili of the School of Economics at Jiangxi University of Finance & Economics. Published in the journal Application Research of Computers, this innovation is not just a mathematical tweak—it’s a systemic upgrade that shifts moving average forecasting from a semi-manual, expert-dependent discipline into the realm of fully automated, intelligent prediction.
The Hidden Complexity Behind a “Simple” Forecasting Tool
Ask any analyst about moving averages, and they’ll likely describe the method as a beginner’s technique—smooth out noisy data, capture trends, maybe extrapolate a next-step value. Sounds straightforward. But in practice, using moving averages effectively has always involved a surprising number of judgment calls: How many periods should I average—three? Five? Ten? Should I use a simple (first-order) average, or go one step further and apply a second-order smoothing to better track acceleration? And once I’ve chosen, how do I know my forecast is trustworthy—not just a point estimate, but something with realistic bounds?
Until recently, the answers to these questions were largely left to intuition. Experts would examine charts, test a handful of parameter combinations, compare mean squared errors, and then make a call. This “artisanal” approach worked—sometimes—but it was inconsistent, time-consuming, and inaccessible to non-specialists. For industries that rely on fast, scalable forecasting—e-commerce, logistics, energy load management, financial risk monitoring—the lack of automation was a bottleneck.
That’s exactly the problem Xu Qili set out to solve.
A Two-Stage Optimization That Flips the Script
The brilliance of the Min-Min algorithm lies not in overturning decades of time-series theory, but in reordering how decisions are made—and delegating them entirely to computation.
Traditionally, practitioners would first choose the model type:
→ Is the data trending upward or downward? → Try second-order (double) moving average.
→ Is it flat or mildly fluctuating? → Stick with first-order (single) moving average.
Only after that choice would they fine-tune the window size (e.g., n = 3, 5, 7…) using error metrics like residual sum of squares (RSS).
This “model-first, parameter-second” approach, however, hinges on correctly classifying the data’s behavior—an inherently subjective task. What if a time series is trending in one segment, flat in another, and cyclic in a third? Human experts might hedge, iterate, or even abandon the method altogether.
Xu’s Min-Min algorithm flips the logic:
Step 1. For each model type—first-order and second-order—independently search across all viable window lengths and find the one that minimizes the local RSS over the most recent, stable segment of data.
Step 2. Compare the best-performing first-order model against the best second-order model—and let the lower RSS decide which order to use.
The word “Min-Min” captures this nested minimization: minimize RSS over n (window size), then minimize RSS over k (order: 1 or 2). It’s exhaustive where it matters, efficient where it counts—and entirely programmable.
In implementation, the algorithm restricts the search space using a data-adaptive rule: for a time series of length N, only windows that allow at least ⌊(N−4)/2⌋ recent out-of-sample forecasts are considered. This ensures the evaluation is forward-looking and avoids overfitting to early, potentially irrelevant data. The entire process runs in milliseconds—even for dozens of time series in parallel.
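The nested search is simple to sketch in code. The following Python sketch is illustrative only: the function names, the search bounds on the window size, and the use of the standard double-moving-average level/trend formulas are assumptions on my part, not the paper's exact specification. Only the overall shape—minimize RSS over the window for each order, then over the order—follows the description above.

```python
import numpy as np

def ma_forecasts(y, n, order):
    """One-step-ahead moving-average forecasts for window n.
    order=1: forecast is the trailing mean; order=2: double MA with
    the standard level/trend correction. Returns (forecasts, actuals)."""
    y = np.asarray(y, dtype=float)
    m1 = np.convolve(y, np.ones(n) / n, mode="valid")   # first-order MAs
    if order == 1:
        return m1[:-1], y[n:]            # m1 ending at t forecasts y[t+1]
    m2 = np.convolve(m1, np.ones(n) / n, mode="valid")  # second-order MAs
    a = 2 * m1[n - 1:] - m2                             # level estimate
    b = 2.0 / (n - 1) * (m1[n - 1:] - m2)               # trend estimate
    fc = a + b                                          # one-step forecast
    return fc[:-1], y[2 * n - 1:]

def min_min(y, n_min=2):
    """Nested minimization: best window per order, then best order, judged
    by RSS over the most recent out-of-sample forecasts (the floor((N-4)/2)
    rule from the text; the outer search bounds are an assumption)."""
    y = np.asarray(y, dtype=float)
    N = len(y)
    k_recent = max(1, (N - 4) // 2)      # evaluate on recent forecasts only
    best = None
    for order in (1, 2):                 # inner loop: window; outer: order
        for n in range(n_min, N // (2 * order)):
            fc, actual = ma_forecasts(y, n, order)
            if len(fc) < k_recent:       # window too wide to evaluate
                continue
            err = actual[-k_recent:] - fc[-k_recent:]
            rss = float(err @ err)
            if best is None or rss < best[0]:
                best = (rss, order, n)
    return best                          # (rss, order, window)

# A cleanly trending series should favor the second-order model.
y = np.arange(1, 30, dtype=float)
rss, order, n = min_min(y)
```

Because both loops are exhaustive over a small discrete grid, the whole search is a few dozen RSS evaluations per series—consistent with the millisecond-scale runtimes reported above.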
Fixing the First-Order Model’s Blind Spot
But Xu didn’t stop at algorithmic reorganization. He also addressed a well-documented flaw in classical first-order moving average forecasting: systematic lag bias.
When data shows a steady upward (or downward) trend, a simple moving average consistently underpredicts (or overpredicts), because it anchors its forecast to the center of a trailing window—effectively, always looking slightly backward. This isn’t just a theoretical issue; in domains like user growth tracking or infrastructure demand planning, even a small persistent bias compounds rapidly, leading to poor inventory decisions, missed scaling opportunities, or unnecessary cost overruns.
Xu’s fix is elegant. Instead of forecasting the next value as simply the latest moving average, he augments it with a cumulative correction term—the average of recent forecast errors. Think of it as the model remembering its own past misses and nudging itself forward.
Formally, the improved forecast becomes:
next value = current moving average + average of recent residuals
No new parameters. No external inputs. Just a self-correcting feedback loop built directly into the predictor.
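A minimal sketch of this correction, assuming the plain trailing-mean forecaster and averaging over all available past errors (the exact error window in the paper may differ):

```python
import numpy as np

def corrected_ma_forecast(y, n):
    """First-order MA forecast with the self-correcting term described above:
    next value = latest moving average + mean of past one-step forecast errors.
    (Function name and the all-past-errors window are assumptions.)"""
    y = np.asarray(y, dtype=float)
    m = np.convolve(y, np.ones(n) / n, mode="valid")  # trailing means
    naive = m[:-1]                                    # forecasts for y[n:]
    errors = y[n:] - naive                            # past forecast misses
    return m[-1] + errors.mean()                      # bias-corrected forecast

# On a steadily rising series the plain MA lags; the correction removes it.
y = np.arange(1.0, 13.0)            # 1, 2, ..., 12 (slope 1)
plain = np.mean(y[-3:])             # 11.0: underpredicts the true next value 13
fixed = corrected_ma_forecast(y, 3) # 13.0: past errors (all +2) push it forward
```

On this toy linear trend every past forecast misses by exactly +2, so adding the mean error recovers the true next value exactly; on noisy real data the correction removes the systematic part of the lag rather than all error.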
In empirical tests using quarterly user-traffic data from an internet platform (spanning 2008 to 2014 across 34 industry categories), this modification delivered dramatic gains. Across all sectors, mean squared error dropped by over 50% on average. In high-growth verticals like machinery & equipment, business services, and office supplies, improvements exceeded 65%. Even in slower-moving categories like books & audio or baby products, reductions of 18–28% were observed.
Crucially, this enhanced first-order model now competes credibly with second-order alternatives—something the classical version rarely achieved on trending data.
From Point Estimates to Actionable Intervals
A third, often overlooked limitation of traditional moving average methods is their silence on uncertainty. They output a single number: “Next month’s demand will be 12,450 units.” But for risk-aware decision-makers—supply chain managers, financial controllers, safety engineers—that figure is incomplete without context: How confident are we? What’s the plausible range?
Here, Xu bridges a decades-old gap by introducing formal interval prediction—not an afterthought, but a natural extension of the same error statistics used for model selection.
Using the residuals from the final selected model, the algorithm computes an empirical forecast error variance. It then constructs a prediction interval using the Student’s t-distribution (accounting for finite sample size), yielding a statement of the form:
“With 95% confidence, next period’s value will fall between 11,800 and 13,100.”
This transforms moving averages from a reporting tool (“here’s what we expect”) into a risk management tool (“here’s what we expect—and here’s when we should raise an alert if reality diverges”).
In fields like landslide displacement monitoring or financial volatility tracking—where early warnings matter more than pinpoint accuracy—this capability is invaluable. And because the interval width dynamically responds to recent forecast performance (wider when errors are large, tighter when the model is stable), it adapts to changing conditions without manual recalibration.
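The interval construction can be sketched from the same residuals used for model selection. In this illustrative Python version, the residual window, the degrees of freedom, and pairing the interval with the bias-corrected point forecast are my assumptions, not the paper's exact specification; it assumes SciPy is available for the t-quantile.

```python
import numpy as np
from scipy.stats import t

def prediction_interval(y, n, level=0.95):
    """Interval for the next value from a first-order MA model:
    point forecast +/- t-quantile * residual standard deviation."""
    y = np.asarray(y, dtype=float)
    m = np.convolve(y, np.ones(n) / n, mode="valid")
    forecasts = m[:-1]                     # one-step forecasts for y[n:]
    resid = y[n:] - forecasts              # past forecast errors
    point = m[-1] + resid.mean()           # bias-corrected point forecast
    s = resid.std(ddof=1)                  # empirical error std. deviation
    df = len(resid) - 1                    # finite-sample degrees of freedom
    half = t.ppf(0.5 + level / 2, df) * s  # half-width at the chosen level
    return point - half, point, point + half

lo, point, hi = prediction_interval([10, 12, 11, 13, 14, 13, 15, 16], n=3)
```

Note how the width is driven entirely by recent residuals: a stretch of accurate forecasts tightens the band, a stretch of misses widens it, which is exactly the adaptive behavior described above.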
Real-World Validation: When Algorithms Meet Industry Data
The true test of any forecasting method lies not in theory, but in comparative performance on real, messy data.
Xu applied the Min-Min algorithm to a dataset of internet user traffic across 34 industry segments (e.g., healthcare, travel, real estate, cosmetics), each with over 20 quarterly observations. For each segment, the algorithm autonomously selected between first- and second-order models—and chose optimal window sizes without human input.
The results revealed surprising diversity:
- In financial services, cosmetics, and life services, the optimal second-order model outperformed the best first-order alternative by more than 60% in RSS—suggesting strong, consistent acceleration in growth patterns.
- In machinery, home appliances, and office supplies, the reverse was true: the best first-order model (now enhanced) beat second-order by 30–45%, indicating steadier, linear-like trends where second-order smoothing introduced unnecessary noise.
- In gifts & accessories, IT hardware, and agriculture, the two approaches performed similarly—RSS ratios between 82% and 92%—highlighting the value of letting data decide rather than defaulting to expert heuristics.
When benchmarked against widely used alternatives—weighted moving averages and exponential smoothing—the Min-Min approach showed clear advantages in both speed and accuracy.
- Speed: On a standard laptop, forecasting all 34 series took under one second in MATLAB. By contrast, weighted moving average and exponential smoothing methods—requiring iterative nonlinear optimization to tune smoothing weights or decay factors—took over three seconds just for parameter fitting, before generating forecasts.
- Accuracy: Across the 34 series, Min-Min achieved, on average, 8.7% lower mean squared error than weighted moving averages and 5.3% lower than exponential smoothing. This edge stems from the algorithm’s global search: it evaluates every viable (order, window) pair, avoiding the local minima traps common in gradient-based parameter tuning.
The one notable exception? Strongly seasonal data—e.g., quarterly electricity consumption with winter/summer peaks. Here, seasonal decomposition methods still reign supreme; Min-Min’s mean squared error was 17.5% higher than seasonal index models. This isn’t a flaw in the algorithm, but a reminder of scope: Min-Min is designed for trend- and level-dominated series, not overtly periodic ones. That said, Xu acknowledges this as a direction for future hybrid extensions.
Democratizing Forecasting—One Algorithm at a Time
What makes the Min-Min algorithm particularly compelling is its alignment with a broader industry shift: the democratization of analytics.
Tools like ARIMA, Prophet, or LSTM networks offer high accuracy—but require statistical literacy, coding skills, or cloud infrastructure. Moving averages, by contrast, are conceptually transparent and computationally lightweight. The challenge has always been operationalizing them reliably at scale.
Min-Min solves that. It can be embedded in dashboards, ERP modules, or IoT edge devices—no data scientists needed. A logistics manager can upload daily shipment volumes, click “forecast,” and instantly receive both a next-day estimate and a confidence band. A marketing team tracking campaign sign-ups can detect emerging slowdowns not just by raw decline, but by the forecast interval being breached—triggering automated alerts.
This isn’t about replacing deep learning; it’s about right-sizing intelligence. For many operational decisions, you don’t need a neural net—you need speed, interpretability, and robustness. Min-Min delivers all three.
Moreover, its design embodies principles increasingly valued in responsible AI:
- Explainability: Every output ties back to clear calculations (moving averages, residuals, RSS comparisons).
- Efficiency: Minimal compute overhead—ideal for resource-constrained environments.
- Adaptivity: Automatically responds to changes in data behavior without retraining.
In an era where “AI” often means black-box complexity, Min-Min proves that intelligent automation can be simple, transparent, and profoundly useful.
The Road Ahead: From Automation to Augmentation
While Xu’s work focuses on univariate time series, the underlying philosophy—let the recent data guide model selection, not assumptions—has far-reaching implications.
Imagine extending Min-Min to:
- Multivariate settings, where exogenous drivers (e.g., marketing spend, macro indicators) are considered—but only if they demonstrably reduce forecast error.
- Hierarchical forecasting, where optimal (order, window) pairs are chosen per product, then reconciled to regional or corporate totals.
- Hybrid systems that fall back to seasonal or regression-based models when Min-Min’s error metrics cross a threshold—creating an adaptive forecasting pipeline.
Already, early adopters in e-commerce and SaaS analytics are experimenting with similar “meta-forecasting” layers—automated wrappers that test multiple simple models and promote the best performer. Min-Min offers a rigorously tested blueprint for such systems, especially where latency and auditability matter.
As Xu notes in his conclusion: “The goal is not to make forecasting more complex, but to make intelligence more accessible.” In a world drowning in data but starved for insight, that’s a vision worth moving toward—quickly, reliably, and intelligently.
Xu Qili, School of Economics, Jiangxi University of Finance & Economics, Nanchang 330013, China
Application Research of Computers, Vol. 38, No. 6, June 2021
DOI: 10.19734/j.issn.1001-3695.2020.06.0163