Are SAE Models Accurate?
If you follow Itron’s forecasting work, you know that we’ve employed Statistically Adjusted End Use (SAE) models since the early 2000s. To date, almost 60 companies across North America use SAE models, applying them to both internal company forecasts and external regulatory filings.
With the widespread adoption of SAE models comes the question: “Are they accurate?”
Considering that SAE models are applied to long-term forecasts (10+ years), accuracy is not the primary reason these models are used. Their value lies in developing an understanding of the key energy drivers based on how customers use electricity.
Unlike traditional economic models, which are based on macro-economic variables, the SAE model contains information about thermal shells, appliance saturations, and energy-efficiency changes. This information allows the forecast to bend with known changes in end-use codes and standards. Explicit assumptions for energy-efficiency trends create opportunities for developing scenarios and for accounting for energy-efficiency programs. And detailed end-use projections open the door to understanding which end uses are responsible for forecast growth.
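To make that structure concrete, here is a minimal sketch of the general SAE idea, not Itron’s implementation: monthly average use is regressed on constructed heating, cooling, and other end-use variables that already embed saturation, efficiency, thermal-shell, and weather assumptions, so the estimated coefficients “statistically adjust” the engineering indices to billed sales. All variable names and data below are hypothetical.

```python
# Minimal SAE-style sketch (illustrative only; data and names are hypothetical).
import numpy as np

rng = np.random.default_rng(0)

# Constructed end-use drivers for 36 months (index units, not kWh):
x_heat = rng.uniform(0.0, 1.2, 36)   # heating index x heating-degree-day variable
x_cool = rng.uniform(0.0, 1.5, 36)   # cooling index x cooling-degree-day variable
x_other = np.linspace(1.0, 1.1, 36)  # non-HVAC index (appliances, lighting, electronics)

# Hypothetical observed average use per customer (kWh/month).
avg_use = 250 * x_heat + 400 * x_cool + 600 * x_other + rng.normal(0, 20, 36)

# Ordinary least squares: avg_use ~ b_heat*XHeat + b_cool*XCool + b_other*XOther.
X = np.column_stack([x_heat, x_cool, x_other])
coef, *_ = np.linalg.lstsq(X, avg_use, rcond=None)
b_heat, b_cool, b_other = coef
print(f"b_heat={b_heat:.1f}  b_cool={b_cool:.1f}  b_other={b_other:.1f}")

# To forecast, the end-use indices are projected forward (e.g., a lower non-HVAC
# index as lighting standards tighten) and the coefficients translate them into sales.
```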
Ultimately, trust in a forecast lies in its ability to present a plausible picture of the future. The greater the explanatory power of the forecast, the greater the trust.
But is the SAE model accurate? If there’s one thing every forecaster knows, it’s that every forecast is wrong.
In 2001, the SAE model predicted increasing average use due to rising saturations of central air conditioners, larger homes, and greater penetration of consumer electronics such as home office equipment and security systems. The figure below shows the 2001 indexed sales forecast (blue) compared against historical values calculated in 2015 (red). The forecast aligns well with history through 2005. Deviations begin in 2006, when average use first declines due to higher energy prices and energy-efficiency gains. And, of course, the 2001 forecast missed the Great Recession, the Energy Policy Act of 2005 (EPAct), and the Energy Independence and Security Act of 2007 (EISA).
The SAE model allows us to examine the end-use components and assess where the errors occur. In the figure below, the forecast of non-HVAC average usage shows consistency with history through 2006. After 2006, the deviation begins. Digging into the SAE data shows that improved lighting efficiency is the main factor driving non-HVAC usage down. Obviously, the advances in lighting standards and technology were not known in 2001.
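As a rough illustration of that kind of end-use diagnostic, the sketch below attributes a hypothetical total forecast miss to individual end-use components; the numbers are invented for illustration and are not from the 2001 forecast.

```python
# Illustrative only: hypothetical end-use components of average use (kWh/month)
# for a single year, comparing an old forecast against later history.
forecast = {"heating": 180.0, "cooling": 310.0, "lighting": 145.0, "other_non_hvac": 420.0}
actual   = {"heating": 175.0, "cooling": 305.0, "lighting":  95.0, "other_non_hvac": 415.0}

total_error = sum(forecast.values()) - sum(actual.values())
print(f"Total miss: {total_error:+.0f} kWh/month")

# Attribute the miss to individual end uses; here lighting dominates the error.
for end_use in forecast:
    err = forecast[end_use] - actual[end_use]
    share = err / total_error if total_error else float("nan")
    print(f"{end_use:>15}: {err:+6.1f} kWh/month ({share:6.1%} of total miss)")
```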
Once the error is understood, future forecasts in the SAE model are improved by updating information about lighting efficiency.
So, are they accurate? Remember, the value of SAE models is not in accuracy but in information. Even so, Itron reviewed the accuracy of SAE models in its 2015 Benchmarking Survey. When Itron asked forecasters how well their 2013 forecast performed in 2014, SAE modelers (35 responses) showed an average MAPE of 1.46% while non-SAE modelers (32 responses) showed an average MAPE of 1.53%.
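For reference, MAPE is the mean absolute percentage error of the one-year-ahead forecasts. A minimal sketch with made-up values (not the survey data) is below.

```python
# Mean absolute percentage error (MAPE) for a set of one-year-ahead forecasts.
# Values are hypothetical; the survey figures above are Itron's and are not reproduced here.
def mape(actuals, forecasts):
    return sum(abs((a - f) / a) for a, f in zip(actuals, forecasts)) / len(actuals) * 100

actual_2014   = [10_250, 8_430, 12_100]   # GWh, made-up utilities
forecast_2013 = [10_400, 8_300, 12_280]   # each utility's 2013 forecast of 2014 sales

print(f"MAPE = {mape(actual_2014, forecast_2013):.2f}%")
```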
While accuracy is important, remember that it is not the only factor in selecting a forecast model.