Diagnostics
#%load_ext rpy2.ipython
#%matplotlib inline
from fbprophet import Prophet
import pandas as pd
from matplotlib import pyplot as plt
import logging
logging.getLogger('fbprophet').setLevel(logging.ERROR)
import warnings
warnings.filterwarnings("ignore")

# Load the Peyton Manning dataset, fit the model, and build the future dataframe
df = pd.read_csv('../examples/example_wp_log_peyton_manning.csv')
m = Prophet()
m.fit(df)
future = m.make_future_dataframe(periods=366)
%%R
library(prophet)
df <- read.csv('../examples/example_wp_log_peyton_manning.csv')
m <- prophet(df)
future <- make_future_dataframe(m, periods=366)
Prophet includes functionality for time series cross validation to measure forecast error using historical data. This is done by selecting cutoff points in the history, and for each of them fitting the model using data only up to that cutoff point. We can then compare the forecasted values to the actual values. This figure illustrates a simulated historical forecast on the Peyton Manning dataset, where the model was fit to an initial history of 5 years, and a forecast was made on a one-year horizon.
from fbprophet.diagnostics import cross_validation

# Single cutoff: fit on the first 5 years of history, forecast one year ahead
df_cv = cross_validation(
    m, '365 days', initial='1825 days', period='365 days')
cutoff = df_cv['cutoff'].unique()[0]
df_cv = df_cv[df_cv['cutoff'].values == cutoff]

# Plot the history, the forecast over the horizon, and mark the cutoff
fig = plt.figure(facecolor='w', figsize=(10, 6))
ax = fig.add_subplot(111)
ax.plot(m.history['ds'].values, m.history['y'], 'k.')
ax.plot(df_cv['ds'].values, df_cv['yhat'], ls='-', c='#0072B2')
ax.fill_between(df_cv['ds'].values, df_cv['yhat_lower'],
                df_cv['yhat_upper'], color='#0072B2', alpha=0.2)
ax.axvline(x=pd.to_datetime(cutoff), c='gray', lw=4, alpha=0.5)
ax.set_ylabel('y')
ax.set_xlabel('ds')
ax.text(x=pd.to_datetime('2010-01-01'), y=12, s='Initial', color='black',
        fontsize=16, fontweight='bold', alpha=0.8)
ax.text(x=pd.to_datetime('2012-08-01'), y=12, s='Cutoff', color='black',
        fontsize=16, fontweight='bold', alpha=0.8)
ax.axvline(x=pd.to_datetime(cutoff) + pd.Timedelta('365 days'), c='gray',
           lw=4, alpha=0.5, ls='--')
ax.text(x=pd.to_datetime('2013-01-01'), y=6, s='Horizon', color='black',
        fontsize=16, fontweight='bold', alpha=0.8);
The Prophet paper gives further description of simulated historical forecasts.
This cross validation procedure can be done automatically for a range of historical cutoffs using the cross_validation function. We specify the forecast horizon (horizon), and then optionally the size of the initial training period (initial) and the spacing between cutoff dates (period). By default, the initial training period is set to three times the horizon, and cutoffs are made every half a horizon.

The output of cross_validation is a dataframe with the true values y and the out-of-sample forecast values yhat, at each simulated forecast date and for each cutoff date. In particular, a forecast is made for every observed point between cutoff and cutoff + horizon. This dataframe can then be used to compute error measures of yhat vs. y.
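For example, we can rely on these defaults by passing only a horizon (a minimal sketch using the model m fit above): initial is then set to three times the horizon and period to half the horizon.

from fbprophet.diagnostics import cross_validation

# Only the horizon is given; initial defaults to 3 * horizon
# and period defaults to half the horizon.
df_cv_default = cross_validation(m, horizon='365 days')
df_cv_default.head()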
Here we do cross-validation to assess prediction performance on a horizon of 365 days, starting with 730 days of training data in the first cutoff and then making predictions every 180 days. On this 8-year time series, this corresponds to 11 total forecasts.
%%R
df.cv <- cross_validation(m, initial = 730, period = 180, horizon = 365, units = 'days')
head(df.cv)
from fbprophet.diagnostics import cross_validation
df_cv = cross_validation(m, initial='730 days', period='180 days', horizon='365 days')
df_cv.head()
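As a quick check (a small sketch using the df_cv produced above), we can count the distinct cutoffs to confirm the 11 forecasts:

# Each unique cutoff corresponds to one simulated historical forecast.
n_forecasts = df_cv['cutoff'].nunique()
print(n_forecasts)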
In R, the argument units must be a type accepted by as.difftime, which is weeks or shorter. In Python, the string for initial, period, and horizon should be in the format used by Pandas Timedelta, which accepts units of days or shorter.
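To illustrate the accepted format (a small sketch, not part of the original example), any string that pandas.Timedelta can parse with units of days or shorter will work:

import pandas as pd

# Valid strings for initial, period, and horizon: units of days or shorter.
print(pd.Timedelta('365 days'))
print(pd.Timedelta('12 hours'))
print(pd.Timedelta('30 minutes'))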
The performance_metrics utility can be used to compute some useful statistics of the prediction performance (yhat, yhat_lower, and yhat_upper compared to y), as a function of the distance from the cutoff (how far into the future the prediction was). The statistics computed are mean squared error (MSE), root mean squared error (RMSE), mean absolute error (MAE), mean absolute percent error (MAPE), and coverage of the yhat_lower and yhat_upper estimates. These are computed on a rolling window of the predictions in df_cv after sorting by horizon (ds minus cutoff). By default 10% of the predictions will be included in each window, but this can be changed with the rolling_window argument.
%%R
df.p <- performance_metrics(df.cv)
head(df.p)
from fbprophet.diagnostics import performance_metrics
df_p = performance_metrics(df_cv)
df_p.head()
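As an illustration of the rolling_window argument described above (a hedged sketch, not part of the original example), passing rolling_window=1 uses all of the predictions in a single window, giving aggregate error metrics over the full horizon:

# rolling_window=1 places every prediction in one window, so the metrics
# summarize performance across the whole cross-validation horizon.
df_p_all = performance_metrics(df_cv, rolling_window=1)
df_p_all.head()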
Cross validation performance metrics can be visualized with plot_cross_validation_metric, here shown for MAPE. Dots show the absolute percent error for each prediction in df_cv. The blue line shows the MAPE, where the mean is taken over a rolling window of the dots. We see for this forecast that errors around 5% are typical for predictions one month into the future, and that errors increase up to around 11% for predictions that are a year out.
%%R -w 10 -h 6 -u in
plot_cross_validation_metric(df.cv, metric = 'mape')
from fbprophet.plot import plot_cross_validation_metric
fig = plot_cross_validation_metric(df_cv, metric='mape')
The size of the rolling window in the figure can be changed with the optional argument rolling_window, which specifies the proportion of forecasts to use in each rolling window. The default is 0.1, corresponding to 10% of rows from df_cv included in each window; increasing this will lead to a smoother average curve in the figure.
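For example (a small sketch using the rolling_window keyword described above), doubling the window proportion produces a smoother MAPE curve:

from fbprophet.plot import plot_cross_validation_metric

# Use 20% of the forecasts in each rolling window for a smoother curve.
fig = plot_cross_validation_metric(df_cv, metric='mape', rolling_window=0.2)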
The initial period should be long enough to capture all of the components of the model, in particular seasonalities and extra regressors: at least a year for yearly seasonality, at least a week for weekly seasonality, etc.