
In statistics, a **forecast error** is the difference between the actual (observed) value and the predicted (forecast) value of a time series or any other phenomenon of interest. Because forecast errors are on the same scale as the data, the forecast errors of different series can only be compared when those series are on the same scale.^{[1]}

In simple cases, a forecast is compared with an outcome at a single time-point and a summary of forecast errors is constructed over a collection of such time-points. Here the forecast may be assessed using the difference or using a proportional error. By convention, the error is defined using the value of the outcome *minus* the value of the forecast.

In other cases, a forecast may consist of predicted values over a number of lead-times; in this case an assessment of forecast error may need to consider more general ways of assessing the match between the time-profiles of the forecast and the outcome. If a main application of the forecast is to predict when certain thresholds will be crossed, one possible way of assessing the forecast is to use the timing-error—the difference in time between when the outcome crosses the threshold and when the forecast does so. When there is interest in the maximum value being reached, assessment of forecasts can be done using any of:

- the difference of times of the peaks;
- the difference in the peak values in the forecast and outcome;
- the difference between the peak value of the outcome and the value forecast for that time point.
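Assuming the forecast and the outcome are sampled at the same time points, the three peak-based assessments above might be sketched as follows (the function name `peak_errors` is illustrative, not from any standard library):

```python
# Sketch of the three peak-based assessments of a forecast against an outcome.
# Assumes `outcome` and `forecast` are aligned lists sampled at `times`.

def peak_errors(times, outcome, forecast):
    """Return (timing error, peak-value error, error at the outcome's peak time)."""
    i_out = max(range(len(outcome)), key=lambda i: outcome[i])    # index of outcome peak
    i_fc = max(range(len(forecast)), key=lambda i: forecast[i])   # index of forecast peak
    timing_error = times[i_out] - times[i_fc]              # difference of times of the peaks
    peak_value_error = outcome[i_out] - forecast[i_fc]     # difference of the two peak values
    error_at_peak_time = outcome[i_out] - forecast[i_out]  # outcome peak vs forecast at that time
    return timing_error, peak_value_error, error_at_peak_time

times = [0, 1, 2, 3, 4]
outcome = [1.0, 3.0, 5.0, 2.0, 1.0]   # peaks at t = 2 with value 5.0
forecast = [1.0, 4.0, 3.0, 2.0, 1.0]  # peaks at t = 1 with value 4.0
print(peak_errors(times, outcome, forecast))  # (1, 1.0, 2.0)
```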

Forecast error can be a calendar forecast error or a cross-sectional forecast error, when we want to summarize the forecast error over a group of units. If we observe the average forecast error for a time-series of forecasts for the same product or phenomenon, then we call this a calendar forecast error or time-series forecast error. If we observe this for multiple products for the same period, then this is a cross-sectional performance error. Reference class forecasting has been developed to reduce forecast error. Combining forecasts has also been shown to reduce forecast error.^{[2]}^{[3]}

## Calculating forecast error

The forecast error is the difference between the observed value and its forecast based on all previous observations. If the error is denoted as e_t, then the forecast error can be written as

e_t = y_t − ŷ_{t|t−1}

where

y_t = the observation

ŷ_{t|t−1} = the forecast of y_t based on all previous observations.
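As a minimal sketch, the one-step errors e_t = y_t − ŷ_{t|t−1} can be computed for a naive forecast (ŷ_{t|t−1} = y_{t−1}), used here purely as an illustration of the definition:

```python
# One-step forecast errors e_t = y_t - forecast, using a naive forecast
# (each value predicted as the previous observation) as an illustration.

y = [112, 118, 132, 129, 121]  # illustrative observations

forecasts = y[:-1]  # naive forecast: previous observation
errors = [obs - fc for obs, fc in zip(y[1:], forecasts)]
print(errors)  # [6, 14, -3, -8]
```

By the sign convention above, a positive error means the outcome exceeded the forecast.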

Forecast errors can be summarized using a variety of measures, namely the mean percentage error, root mean squared error, mean absolute percentage error, and mean squared error. Other methods include the tracking signal and forecast bias.
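Straightforward implementations of these summary measures might look like the following sketch (it assumes the actual values are nonzero wherever a percentage error is taken):

```python
import math

# Illustrative implementations of common forecast-error summary measures.

def mse(actual, forecast):
    """Mean squared error."""
    return sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual)

def rmse(actual, forecast):
    """Root mean squared error."""
    return math.sqrt(mse(actual, forecast))

def mpe(actual, forecast):
    """Mean percentage error; signed, so positive and negative errors cancel."""
    return 100 * sum((a - f) / a for a, f in zip(actual, forecast)) / len(actual)

def mape(actual, forecast):
    """Mean absolute percentage error."""
    return 100 * sum(abs(a - f) / a for a, f in zip(actual, forecast)) / len(actual)

actual = [100.0, 110.0, 120.0]
forecast = [90.0, 115.0, 120.0]
print(mse(actual, forecast), rmse(actual, forecast))
print(mpe(actual, forecast), mape(actual, forecast))
```

Note that MPE keeps the sign of each error (so it also indicates bias), while MAPE does not.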

**For forecast errors on training data:** y_t denotes the observation and ŷ_{t|t−1} is the forecast.

**For forecast errors on test data:** y_{T+h} denotes the actual value of the h-step-ahead observation and the forecast is denoted as ŷ_{T+h|T}.
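The training/test distinction can be sketched as below, again using a naive forecast as the model (an assumption for illustration; any fitted model would play the same role):

```python
# Training errors e_t = y_t - forecast within the training set, versus
# test errors y_{T+h} - forecast made at time T, for a naive forecast model.

series = [10, 12, 13, 12, 15, 16, 18]
T = 5                                # first T points are training data
train, test = series[:T], series[T:]

# Training errors: each observation minus the naive one-step forecast.
train_errors = [train[t] - train[t - 1] for t in range(1, T)]

# Test errors: the naive forecast made at time T is y_T for every horizon h.
last_train = train[-1]
test_errors = [y - last_train for y in test]

print(train_errors)  # [2, 1, -1, 3]
print(test_errors)   # [1, 3]
```

Test-set errors are the basis for assessing genuine out-of-sample forecast accuracy, since training errors tend to understate how a model performs on new data.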

## Examples of forecasting errors

Michael Fish - A few hours before the Great Storm of 1987 broke, on 15 October 1987, he said during a forecast: "Earlier on today, apparently, a woman rang the BBC and said she heard there was a hurricane on the way. Well, if you're watching, don't worry, there isn't!". The storm was the worst to hit South East England for three centuries, causing record damage and killing 19 people.^{[4]}

Great Recession - The financial and economic crisis that erupted in 2007—arguably the worst since the Great Depression of the 1930s—was not foreseen by most forecasters, even though a few lone analysts had been predicting it for some time (for example, Nouriel Roubini and Robert Shiller). The failure to forecast the "Great Recession" prompted much soul-searching in the profession. The UK's Queen Elizabeth herself asked why nobody had noticed that the credit crunch was on its way, and a group of economists—experts from business, the City, its regulators, academia, and government—tried to explain in a letter.^{[5]}

Economists struggled not only to forecast the Great Recession itself, but also its impact. For example, in Singapore, Kit Wei Zheng, a government scholar at Citi, argued the country would experience "the most severe recession in Singapore's history". He could hardly have been more wrong: the economy grew by 3.1% in 2009, and in 2010 the nation saw a 15.2% growth rate.^{[6]}^{[7]}

## See also

- Calculating demand forecast accuracy
- Errors and residuals in statistics
- Forecasting
- Forecasting accuracy
- Mean squared prediction error
- Optimism bias
- Reference class forecasting

## References

1. **^** "2.5 Evaluating forecast accuracy | OTexts". *www.otexts.org*. Retrieved 2016-05-12.
2. **^** J. Scott Armstrong (2001). "Combining Forecasts". *Principles of Forecasting: A Handbook for Researchers and Practitioners* (PDF). Kluwer Academic Publishers.
3. **^** Andreas Graefe; J. Scott Armstrong; Randall J. Jones, Jr.; Alfred G. Cuzán (2010). "Combining forecasts for predicting U.S. Presidential Election outcomes" (PDF).
4. **^** "Michael Fish revisits 1987's Great Storm". *BBC*. 16 October 2017. Retrieved 16 October 2017.
5. **^** British Academy. "The Global Financial Crisis: Why Didn't Anybody Notice?" Retrieved July 27, 2015. Archived July 7, 2015, at the Wayback Machine.
6. **^** Chen, Xiaoping; Shao, Yuchen (2017-09-11). "Trade policies for a small open economy: The case of Singapore". *The World Economy*. doi:10.1111/twec.12555. ISSN 0378-5920.
7. **^** Subler, Jason (2009-01-02). "Factories slash output, jobs around world". *Reuters*. Retrieved 2020-09-20.