Eric D. Brown / Oct 08 2019

Forecasting Time Series Data With Prophet IV

Introduction

This is the fourth in a series of posts about using Forecasting Time Series data with Prophet. The other parts can be found here:

In those previous posts, I looked at forecasting monthly sales data 24 months into the future using some example sales data that you can find here.

In this post, I want to look at the output of Prophet and apply some metrics to measure ‘accuracy’. When we start looking at the ‘accuracy’ of forecasts, we can do a whole lot of harm by using the wrong metrics and the wrong data to measure accuracy. That said, it's good practice to always compare your predicted values with your actual values to see how well (or poorly) your model(s) are performing.

For the purposes of this post, I'm going to build on the data from the previous posts. We are using fbprophet version 0.2.1, and we'll also need scikit-learn and scipy installed to look at some metrics.

Note: While I’m using Prophet to generate the models, these metrics and tests for accuracy can be used with just about any modeling approach.

Since the majority of the work has been covered in Part 3, I’m going to skip down to the metrics section…

Import necessary libraries

import pandas as pd
import numpy as np
from fbprophet import Prophet
import matplotlib.pyplot as plt
from sklearn.metrics import mean_squared_error, r2_score, mean_absolute_error
 
plt.rcParams['figure.figsize']=(20,10)
plt.style.use('ggplot')

Matplotlib's date converters must be manually registered with pandas due to a conflict between Prophet and pandas.

pd.plotting.register_matplotlib_converters()

Read in the data

Read the data in from the retail sales CSV file in the examples folder then set the index to the date column. We are also parsing dates in the data file.

sales_df = pd.read_csv('retail_sales.csv', index_col='date', parse_dates=True)
sales_df.head()
            sales
date
2009-10-01  338630
2009-11-01  339386
2009-12-01  400264
2010-01-01  314640
2010-02-01  311022

Prepare for Prophet

As explained in the previous Prophet posts, for Prophet to work we need to rename these columns to ds and y.

df = sales_df.reset_index()
df.head()
   date        sales
0  2009-10-01  338630
1  2009-11-01  339386
2  2009-12-01  400264
3  2010-01-01  314640
4  2010-02-01  311022

Let's rename the columns as required by fbprophet. Additionally, fbprophet doesn't want the dates in the index... it wants to see ds as a regular (non-index) column, so we'll keep the default integer index.

df=df.rename(columns={'date':'ds', 'sales':'y'})
df.head()
   ds          y
0  2009-10-01  338630
1  2009-11-01  339386
2  2009-12-01  400264
3  2010-01-01  314640
4  2010-02-01  311022

Now's a good time to take a look at your data. Plot the data using pandas' plot function:

plt.figure()
df.set_index('ds').y.plot().get_figure()

Running Prophet

Now, let's set Prophet up to begin modeling our data.

Note: Since we are using monthly data, you'll see a message from Prophet saying Disabling weekly seasonality. Run prophet with weekly_seasonality=True to override this. This is fine since we are working with monthly data, but you can override it by passing weekly_seasonality=True when instantiating Prophet.

model = Prophet(weekly_seasonality=True)
model.fit(df);
model.weekly_seasonality
True

We've instantiated the model, now we need to build some future dates to forecast into.

future = model.make_future_dataframe(periods=24, freq='M')
future.tail()
    ds
91  2017-04-30
92  2017-05-31
93  2017-06-30
94  2017-07-31
95  2017-08-31
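As a side note, these future dates fall on the last day of each month because we used the 'M' (month end) frequency, while the historical data uses the first of each month. If you'd rather have the forecast rows land on month starts, pandas' 'MS' (month start) alias can be used instead. This is just a variation and isn't used in the rest of this post.

future_ms = model.make_future_dataframe(periods=24, freq='MS')  # month-start dates instead of month-end
future_ms.tail()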

To forecast this future data, we need to run it through Prophet's model.

forecast = model.predict(future)

The resulting forecast dataframe contains quite a bit of data, but we really only care about a few columns. First, let's look at the full dataframe:

forecast.tail()

We really only want to look at yhat, yhat_lower and yhat_upper, so we can do that with:

forecast[['ds', 'yhat', 'yhat_lower', 'yhat_upper']].tail()
    ds          yhat                yhat_lower          yhat_upper
91  2017-04-30  477814.8556124669   472384.57979563996  483243.6921667977
92  2017-05-31  476745.63667196187  470984.0140790574   482503.0393795469
93  2017-06-30  475688.35458719084  469461.0635859311   481449.94429493666
94  2017-07-31  479770.6856162995   473377.5419909371   485777.41729538876
95  2017-08-31  455621.6718395891   449243.7110760838   462451.4513158483
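Before moving on to the plots, one quick check you can do with yhat_lower and yhat_upper is to see what fraction of the historical actuals fall inside the uncertainty interval. Prophet's default interval width is 80%, so something in that neighborhood suggests the intervals are reasonably calibrated. This is just a sketch using the dataframes we already have:

# Join the actuals onto the forecast and keep only the historical rows.
interval_df = forecast.set_index('ds')[['yhat_lower', 'yhat_upper']].join(df.set_index('ds').y).dropna()
# Fraction of actual values that fall inside the uncertainty interval.
coverage = ((interval_df.y >= interval_df.yhat_lower) & (interval_df.y <= interval_df.yhat_upper)).mean()
coverage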

Plotting Prophet results

Prophet has a plotting mechanism called plot. This plot functionality draws the original data (black dots), the model (blue line) and the uncertainty interval of the forecast (shaded blue area).

model.plot(forecast);

Personally, I'm not a fan of this visualization but I'm not going to build my own...you can see how I do that here.

Additionally, Prophet lets us take a look at the components of our model. This component plot is important because it lets you see the pieces of your model, including the trend and the seasonality (identified in the yearly pane).

model.plot_components(forecast);

Now that we have our model, let's take a look at how it compares to our actual values using a few different metrics - R-Squared, Mean Squared Error (MSE) and Mean Absolute Error (MAE).

To do this, we need to build a combined dataframe with yhat from the forecasts and the original y values from the data.

metric_df = forecast.set_index('ds')[['yhat']].join(df.set_index('ds').y).reset_index()
metric_df.tail()
    ds          yhat                y
91  2017-04-30  477814.8556124669   NaN
92  2017-05-31  476745.63667196187  NaN
93  2017-06-30  475688.35458719084  NaN
94  2017-07-31  479770.6856162995   NaN
95  2017-08-31  455621.6718395891   NaN

You can see from the above that the last part of the dataframe has NaN for y... that's fine because we are only concerned with checking the forecast values against the actual values, so we can drop these NaN rows.

metric_df.dropna(inplace=True)
metric_df.tail()
    ds          yhat                y
67  2015-05-01  463310.0074438375   462615.0
68  2015-06-01  448897.68864012434  448229.0
69  2015-07-01  453159.7624656404   457710.0
70  2015-08-01  454307.4463012435   456340.0
71  2015-09-01  431775.77250561136  430917.0

Now let's take a look at our R-Squared value

r2_score(metric_df.y, metric_df.yhat)
0.9930150764264917

An R-Squared value of 0.99 is amazing (and probably too good to be true, which tells me this model is most likely overfit).

mean_squared_error(metric_df.y, metric_df.yhat)
11020348.091546109

That's a large MSE value... and it confirms my suspicion that this model is overfit and likely won't hold up well into the future. Remember... for MSE, closer to zero is better.

Now...let's see what the Mean Absolute Error (MAE) looks like.

mean_absolute_error(metric_df.y, metric_df.yhat)
2609.1933238624315

Not good. Not good at all. BUT... the purpose of this particular post is to show some usage of R-Squared, MAE and MSE as metrics, and I think we've done that.
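Since the point of this post is understanding the metrics themselves, here's what those three numbers are actually computing, spelled out with plain numpy as a quick cross-check (the results should match the scikit-learn values above):

err = metric_df.y - metric_df.yhat

mae_manual = np.mean(np.abs(err))   # Mean Absolute Error
mse_manual = np.mean(err ** 2)      # Mean Squared Error
r2_manual = 1 - np.sum(err ** 2) / np.sum((metric_df.y - metric_df.y.mean()) ** 2)  # R-Squared

print(mae_manual, mse_manual, r2_manual)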

I can tell you from experience that part of the problem with this particular data is that it's monthly and there aren't that many data points to start with (only 72 data points... not ideal for modeling).
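One way to get a less optimistic read on these metrics is to score the model on data it hasn't seen. The sketch below is not part of the workflow above (and the 12-month holdout size is an arbitrary choice); it fits a fresh model on everything except the last 12 months and computes the same metrics on just those held-out months:

# Hold out the last 12 months of actuals.
train_df = df[:-12]
holdout_df = df[-12:]

holdout_model = Prophet()
holdout_model.fit(train_df)

# Predict only the held-out dates and score against the actuals.
holdout_forecast = holdout_model.predict(holdout_df[['ds']])
print(r2_score(holdout_df.y, holdout_forecast.yhat))
print(mean_absolute_error(holdout_df.y, holdout_forecast.yhat))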

Another approach for metrics

While writing this post, I came across ML Metrics, which provides 17 metrics for Python (python version here).

Let's give it a go and see what these metrics show us.

import ml_metrics as metrics
metrics.mae(metric_df.y, metric_df.yhat)
2609.1933238624315

Same value for MAE as before... which is a good sign for this new metrics library. Let's take a look at a few more.

Here's the Absolute Error (pointwise... it shows the error of each date's predicted value versus the actual value):

metrics.ae(metric_df.y, metric_df.yhat)
array([1056.4...858.77250561])

Let's look at Root Mean Square Error

metrics.rmse(metric_df.y, metric_df.yhat)
3319.690963259398
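As a quick consistency check, RMSE is just the square root of MSE, so taking the square root of the scikit-learn MSE from earlier should give the same number:

np.sqrt(mean_squared_error(metric_df.y, metric_df.yhat))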

This new metrics library looks interesting... I'll keep it in my toolbox for future use.

Hopefully this post was useful for seeing how to apply some metrics to analyze how good (or bad) your models are. These approaches can be used with most modeling techniques, not just Facebook Prophet.