Forecasting

Forecast Tests in MetrixND

January 19, 2017

Out-of-sample tests are a useful tool for seeing how well a model performs on data it hasn't seen before (i.e., data that weren't used to estimate the model's coefficients). This matters because a model's performance on out-of-sample observations is a helpful indicator of how well it will forecast. However, truly "testing" the forecasting power of a dynamic model (e.g., AR(1), lagged dependent variable, smoothing) is a bit trickier than testing a static model.

For a static model, testing is simple. You just need to identify the observations to withhold from estimation and make sure that the model residuals for these observations are not used. Estimation proceeds by minimizing the sum of the squared errors for the remaining observations. The resulting coefficients can then be used to compute residuals and summary statistics for the test observations. That’s how it works in MetrixND when you drop a binary variable into the Test box on the model design form. In periods when the binary value is 1.00, the residuals are weighted to zero. The residuals for the test periods are computed, but they are not included in the sum of squared errors. Statistics for these residuals are reported under Forecast Statistics on the MStat tab of the model object.
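To make the mechanics concrete, here is a minimal sketch (not MetrixND code) of a static out-of-sample test in Python: fit a linear model on the estimation observations only, then compute residuals and summary statistics for the withheld test observations. The data and variable names are illustrative assumptions.

```python
import numpy as np

# Illustrative data: 112 observations of a simple static relationship.
rng = np.random.default_rng(0)
n = 112
x = rng.normal(size=n)
y = 2.0 + 1.5 * x + rng.normal(scale=0.5, size=n)

# Binary test flag: withhold the last 12 periods from estimation,
# analogous to the binary variable dropped into the Test box.
test = np.zeros(n, dtype=bool)
test[-12:] = True

# Estimate coefficients using the non-test observations only.
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X[~test], y[~test], rcond=None)

# Residuals for the test periods are computed but never enter estimation.
resid_test = y[test] - X[test] @ beta
print("Test MAD: ", np.mean(np.abs(resid_test)))
print("Test RMSE:", np.sqrt(np.mean(resid_test ** 2)))
```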

For a dynamic model, life is more complicated. If we simply ignore the test-period residuals during estimation, the resulting test statistics are one-period-ahead statistics. This distinction is clearest when we withhold data in blocks within the estimation range or at the end of it (a forecast test). For example, suppose we estimate a model with a lagged dependent variable (Y_(t-1)) on 100 observations and then test it using the next 12 observations. In period 101, the first test period, it is OK to use the actual value of the lagged dependent (Y_100); that is a one-period-ahead forecast. But in period 102, if we were really forecasting, we would not know the value of Y_101. We need to hide Y_101 and instead use the predicted value Ŷ_101, which makes the result a two-period-ahead forecast. Similarly, in period 103 we would need to hide Y_101 and Y_102 and use Ŷ_102, which itself is built from Ŷ_101, and so on.
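The sketch below (again, not MetrixND code, and using an assumed AR(1)-style series) shows the difference for a lagged-dependent model: a one-step-ahead "test" can keep using the actual lagged Y inside the test block, but a true multi-step forecast must feed its own predictions back in as the lag.

```python
import numpy as np

# Illustrative series generated by a lagged-dependent process.
rng = np.random.default_rng(1)
n = 112
y = np.zeros(n)
for t in range(1, n):
    y[t] = 1.0 + 0.8 * y[t - 1] + rng.normal(scale=0.5)

n_est, n_test = 100, 12

# Regress Y_t on a constant and Y_(t-1) using the first 100 observations.
X = np.column_stack([np.ones(n_est - 1), y[:n_est - 1]])
beta, *_ = np.linalg.lstsq(X, y[1:n_est], rcond=None)

# One-step-ahead "test": uses the actual lagged Y even inside the test block.
one_step = beta[0] + beta[1] * y[n_est - 1:n_est + n_test - 1]

# True multi-step forecast: after period 100 the lag must be the prediction.
multi_step = np.empty(n_test)
last = y[n_est - 1]                 # last actual known at the forecast origin
for h in range(n_test):
    last = beta[0] + beta[1] * last  # feed Y-hat back in as the lag
    multi_step[h] = last

actual = y[n_est:]
print("1-step-ahead MAE:", np.mean(np.abs(actual - one_step)))
print("Multi-step   MAE:", np.mean(np.abs(actual - multi_step)))
```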

In the model objects (regression, neural networks, ARIMA, and smoothing), MetrixND does not hide the Y-data in the case of multi-period test blocks. As a result, the statistics are one-period-ahead test statistics and give no indication of how accuracy degrades for multi-period forecasts.

One of the nice things about the latest release of MetrixND (version 4.7) is that the Forecast Test object has been reworked to allow a true forecast test of dynamic models. The Forecast Test now hides the Y-data from dynamic terms, so you get a real sense of how a dynamic model will forecast. Simply drag and drop the model into the Model box of the Forecast Test object, set the Testing Ends date to the last observation of the series, and set the Testing Begins date to some time before the end of the data series, e.g., 24 months earlier.

The tail end of the Y-data then becomes the test set and is hidden from the model. Using a rolling origin, MetrixND generates a forecast using the start and end dates selected, then adds an observation, re-estimates the model, and generates a new forecast beginning in the period following the newly added observation. It repeats this until it reaches the very last available observation.
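A minimal sketch of this rolling-origin procedure, assuming the same kind of lagged-dependent model as above (again illustrative Python, not MetrixND internals): at each origin the model is re-estimated on the history only, forecasts recursively out to the end of the series, and the errors are then summarized by forecast horizon.

```python
import numpy as np

def fit_ar1(y_hist):
    """OLS of Y_t on a constant and Y_(t-1), using the history only."""
    X = np.column_stack([np.ones(len(y_hist) - 1), y_hist[:-1]])
    beta, *_ = np.linalg.lstsq(X, y_hist[1:], rcond=None)
    return beta

def forecast_ar1(beta, last_actual, horizon):
    """Recursive forecast that feeds predictions back in as the lag."""
    out, last = [], last_actual
    for _ in range(horizon):
        last = beta[0] + beta[1] * last
        out.append(last)
    return np.array(out)

# Illustrative series; the last 24 periods are the hidden test set.
rng = np.random.default_rng(2)
n, n_test = 136, 24
y = np.zeros(n)
for t in range(1, n):
    y[t] = 1.0 + 0.8 * y[t - 1] + rng.normal(scale=0.5)

# Rolling origin: re-estimate at each origin, forecast to the end of the data,
# then add one observation and repeat until the last available observation.
errors_by_horizon = {}
for origin in range(n - n_test, n):
    beta = fit_ar1(y[:origin])
    fcst = forecast_ar1(beta, y[origin - 1], n - origin)
    for h, (f, a) in enumerate(zip(fcst, y[origin:]), start=1):
        errors_by_horizon.setdefault(h, []).append(a - f)

# Out-of-sample accuracy by horizon, from 1-step-ahead to 24-steps-ahead.
for h in sorted(errors_by_horizon):
    print(f"{h}-step-ahead MAE: {np.mean(np.abs(errors_by_horizon[h])):.3f}")
```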

For a dynamic model, this means that the model forecasts using the actual Y-data through the last estimation period and must use the predicted Ŷ-data thereafter. As a result, we get a series of forecast tests that yield one-period-ahead statistics all the way out to n-period-ahead statistics, giving us a real sense of the model's forecasting power. Generally speaking, we would expect the out-of-sample statistics to degrade at longer forecast horizons (e.g., 12 periods ahead vs. 1 period ahead).

In contrast, for a robust static model, we would expect the out-of-sample statistics to be fairly stable across the forecast period.

In conclusion, if you want to do an out-of-sample test on a static model, then any of the testing options in MetrixND will fit your needs. But, if you want to do a true out-of-sample test on a dynamic model, you should use the Forecast Test object.
