Using end-use load data for better infrastructure investment planning
Background of Infrastructure Planning in the Electric Utility Industry
Since the development of the modern electric grid, pioneered by Thomas Edison with the Pearl Street Station in New York City in 1882, matching the generation and transmission of electricity to ever-growing demand has been a constant struggle. Throughout the expansion of electrification in the U.S., major infrastructure projects were undertaken to satisfy that demand. Starting with the Federal Power Act of 1935, regulations have been put in place to ensure that federal and state governments work with utilities so that adequate and reliable power supply infrastructure is in place to meet customer demand.
The ability of the utility industry, which typically planned and built the power plants, to project demand and plan capacity additions in advance was key to the reliability that consumers and businesses came to take for granted whenever they flipped a light switch. The consequences of not having power were demonstrated during several major blackouts over the years:
- Northeast US/Canada – November 9, 1965: This blackout affected over 30 million people in eight US states from New Jersey to Vermont and parts of Ontario, Canada. It was caused by an incorrectly set protective relay (human error) at a generating station near Niagara Falls.
- New York City – July 13, 1977: This blackout affected most of New York City and was caused by cascading overloads that began when lightning strikes took out transmission lines feeding the city from the north.
- California Electricity Crisis of 2000–2001: This was a statewide problem during the early days of electric deregulation in California. Illegal market manipulation by energy suppliers, combined with a drought that cut hydro output, delays in bringing power plants on-line, and strict environmental requirements for new plants, all limited in-state capacity. Throughout the crisis, utilities initiated planned blackouts to address the supply shortages. This was the first major modern incidence of power shortages caused by inadequate infrastructure, exacerbated by fraudulent control of independent power plants by market participants.
- Northeast Blackout of August 14, 2003: Approximately 55 million people were affected in eight states from New Jersey to Ohio and parts of Ontario. The triggering cause was a software bug in the alarm system of a FirstEnergy Corporation control room in Ohio.
In all these cases, there was a public and governmental uproar as the assumption of reliable power was shaken; investigations followed, with some changes in regulation aimed at preventing recurrence. For example, in 1999 the New York State Reliability Council formulated rule I-R1 – Operating Reserves / Unit Commitment (New York City), which currently dictates that 80 percent of New York City load be met by “in-city” power plants in order to prevent the kind of event that happened in 1977, when the bulk of the city’s power was being imported.
Electric Power Load Forecasting and Infrastructure
The value and importance of reliable power has been a key tenet of the utility industry since its inception over 100 years ago. Recognizing this, utilities have always placed a high degree of importance on accurate load forecasting to ensure that infrastructure could be properly planned, especially given the long lead times required to build it.
Up until the Arab Oil Embargo of 1973, load forecasting and the resulting planning of additional generation were fairly routine. Forecasts typically followed a trend for all sectors, driven primarily by gross domestic product (GDP) and population, and options for power plants were limited. Early electric supply options were coal and hydro, with nuclear and oil plants becoming common in the 1970s to meet exploding demand as the baby boomer population grew up and end uses like air conditioning began to proliferate. The embargo caused the price of oil to skyrocket (from $3.50/bbl to over $10/bbl), affecting energy costs for cars as well as fuel oil for power plants, which constituted about 17 percent of total U.S. power generation in 1973. The resulting economic impact upset the previously smooth historical trend of positive load growth (as shown in the figure below), which has continued to be disrupted since, creating the need for new load forecasting methodologies that incorporated economic and other variables that had not been necessary factors before.
Electric Sales by Sector with Key Milestones Affecting Energy Prices
In the figure, the positive growth in electric use (billion kWh) was interrupted by the 1974 Arab Oil Embargo, the 1982 Iran–Iraq War, the 2001 California Energy Crisis, and the 2008 oil price spike and recession. Growth has been essentially flat since then, with high oil prices only relenting in late 2014, and with industrial electric sales further eroded by natural gas availability and other factors since 2000.
On the demand side, “top-down” forecasting of sector-level loads began to be replaced in the 1970s by “bottom-up” approaches in which loads were built up from their components. As Demand-Side Management (DSM) expanded in the 1980s, more end-use information became available for load forecasting models. End-use intensities, combined with saturation surveys, produced counts and usages that could be built up to the sector level, incorporating appliance standards, new efficient equipment, and new end uses (e.g., VCRs, large-screen TVs, heat pumps) to project future growth. In addition, the effects of utility and government DSM programs, such as high-efficiency equipment incentives, energy management systems, and “load management/load control” (now known as demand response) programs, had to be incorporated as well.
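As a rough illustration, the bottom-up buildup described above can be sketched in a few lines of Python. The customer counts, saturations, and energy intensities below are invented for illustration only, not drawn from any actual study:

```python
# "Bottom-up" end-use buildup: sector annual energy is the sum over end uses of
#   customers x saturation (fraction of customers with the end use)
#              x unit energy intensity (kWh/yr per installed unit).
# All figures are hypothetical.

customers = 100_000  # residential customers in a hypothetical service territory

end_uses = {
    "central_ac":    {"saturation": 0.65, "intensity_kwh": 2500},
    "water_heating": {"saturation": 0.40, "intensity_kwh": 3000},
    "refrigerator":  {"saturation": 1.00, "intensity_kwh": 600},
}

def sector_energy_kwh(customers, end_uses):
    """Build sector-level annual energy up from its end-use components."""
    return sum(customers * eu["saturation"] * eu["intensity_kwh"]
               for eu in end_uses.values())

total = sector_energy_kwh(customers, end_uses)
print(f"{total / 1e6:.1f} GWh/yr")
```

In a real model each end use would also carry a forecast of saturation and intensity over time (capturing appliance standards, efficiency programs, and new end uses), so the buildup is repeated per forecast year.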
On the supply side, meeting the needs prompted by these load forecasts was also becoming more complicated, as economic considerations and longer lead times for siting and permitting made accurate longer-term planning even more critical. Between concerns over dependence on imported oil, the environmental impacts of coal plants, and the safety issues of nuclear plants after Three Mile Island, options were narrowing. With the deregulation of natural gas in the 1980s, combined-cycle plants that could burn both oil and gas became the preferred option. Hydropower sites were largely exhausted, so other renewable options, such as solar and wind, were being considered but were still too expensive, although the economic gap was closing.
Matching the demand side, as projected by load forecasting, with the supply side, given the increasingly dynamic nature of both, has been the focus of more comprehensive planning studies, often referred to as Integrated Resource Plans. Rather than simply building power plants to meet projected loads, planners must weigh demand-reduction options equally against supply additions, to the extent possible, requiring more accuracy on the demand side than was necessary before.
Complexity of Infrastructure Options
Technology and market improvements have reduced the price of distributed energy resources (DER), especially renewables like solar and wind, as well as biomass and battery storage. While this has many benefits, including the obvious one of using renewable power sources rather than fossil fuels, it adds significantly to the complexity of power supply planning because these sources are intermittent and not as reliable and predictable as traditional power plants. These “not-so-new” forms of generation have thus been labeled “disruptive technology,” and rightly so. Currently, 29 states have renewable portfolio standards requiring grid-connected renewables to supply from 2 percent to as much as 80 percent of retail sales over the next 10 years. As these sources become more pervasive to meet those standards, their effects are rippling through the industry, extending well beyond prioritizing circuits and purchasing land and equipment. More detailed analytics are required to implement, model, and predict the effects of these technologies. Technical challenges include voltage support and fault-current management, as well as balancing the economics of PV, wind, biomass, and storage.
Voltage regulating devices attempt to follow the generation curves, but the highly intermittent nature of DER, coupled with the time delay of these devices, can be problematic. For example, when a cloud passes over a large solar farm, the resulting voltage sag may signal a regulator to tap up and raise voltage; because these events happen so quickly, the regulator’s operating lag means the tap change often lands after the cloud has passed, introducing high voltage and a potential voltage violation on the utility system. For a real-life example, an unexpected drop in wind in West Texas in February 2008 caused a short-term power shortage, requiring emergency measures to rebalance supply and demand.
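The tap-delay problem can be illustrated with a toy discrete-time simulation. All parameters below are invented for illustration, although the 5/8 percent tap step is a common regulator design value:

```python
# Toy sketch of a voltage regulator whose response delay exceeds a short
# cloud transient: the tap-up lands after the cloud is gone, overshooting.

CLOUD_STEPS = {3, 4}   # timesteps during which a cloud shades the solar farm
DELAY = 3              # regulator response lag, in timesteps (hypothetical)
TAP = 0.00625          # per-unit voltage change per tap (a common 5/8% step)
LOW = 0.99             # per-unit threshold that triggers a tap-up

tap_position = 0
pending_tap_at = None  # timestep at which a scheduled tap-up takes effect
history = []
for t in range(10):
    if pending_tap_at == t:                    # the delayed tap-up finally lands
        tap_position += 1
        pending_tap_at = None
    sag = 0.03 if t in CLOUD_STEPS else 0.0    # 3% sag while the cloud is overhead
    v = 1.00 + tap_position * TAP - sag
    if v < LOW and pending_tap_at is None:     # low voltage: schedule a tap-up
        pending_tap_at = t + DELAY
    history.append(v)

# The cloud is gone by t = 5, but the tap lands at t = 6, leaving the
# feeder above nominal voltage after the transient has passed.
```

Real regulators use bandwidth and time-delay settings tuned to ride through short transients, which is exactly why they struggle when cloud-driven swings are both fast and frequent.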
In addition to utilities’ use of renewables as central station power sources, there has been an increasing use of DERs like solar, wind, and even batteries (separately and in electric vehicles) as an option for average homeowners and businesses. This complicates infrastructure planning further in two ways:
- Reduced demand, when DERs are used on the customer side of the meter – Utility customers who install DERs reduce their demand without any utility control or knowledge, and at times that may not help grid resources or reliability. This has caused much concern among utilities, whose priority has always been reliability, as dictated by their regulators.
- Added supply to the grid – A growing option is for DERs to be “net metered,” whereby excess power is fed back to the local grid. The rules for this are complex and evolving, utilities are concerned that these resources are unpredictable and therefore unreliable, and the resulting pricing structures can be controversial.
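To see why net-metering pricing gets controversial, consider the simplest tariff structure, in which exports offset consumption at the full retail rate. The function and figures below are hypothetical; real tariffs are usually far more complex (avoided-cost export rates, monthly vs. annual true-ups, demand charges):

```python
# Hypothetical full-retail-rate net-metering bill: exported kWh offset
# consumed kWh one-for-one, so the utility forgoes retail revenue while
# still carrying the fixed cost of serving the customer.

def net_metered_bill(consumed_kwh, exported_kwh, rate=0.15, fixed_charge=10.0):
    """Monthly bill when exports offset consumption at the retail rate."""
    net_kwh = max(consumed_kwh - exported_kwh, 0)  # no payment for net export here
    return fixed_charge + net_kwh * rate

print(net_metered_bill(900, 400))  # 400 kWh exported offsets 400 kWh consumed
```

The controversy follows directly: a customer who nets out to near zero pays little more than the fixed charge, while the utility still maintains the wires that make the arrangement possible.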
Regulation of DERs has been hard-pressed to keep up with technology improvements in the marketplace. As a result, utility planners have struggled to plan their systems effectively, both from an overall grid supply perspective and at the distribution planning level, where infrastructure issues of siting and cost can be nearly as complex.
Power system planners are now turning their focus to time-range analysis as new concerns come to light with high-penetration solar. A single-snapshot peak-day simulation model has traditionally been used to determine a utility’s ability to meet capacity demands and to plan for future load growth and projects. High-penetration solar and wind inject a new array of issues that now require more detailed analysis, down to the hour or even the minute.
Of particular concern, for example, are those spring hours when electric demand is low and PV generation is at its peak, or when cloud cover becomes highly intermittent under strong sunlight. In many areas of the US where PV is developing, customer expectations have undergone a complete paradigm shift: customers now expect and rely on their rooftop PV systems to generate power during times of high irradiance, and the growing incidence of problems has introduced the need to model the secondary side of distribution transformers. In one specific case in a high-penetration PV area, investigation of customer complaints determined that rooftop PV units were actually tripping off due to high voltage on the secondary side. With these investments running into the tens of thousands of dollars, utility customers are getting a quick and expensive lesson in power system engineering. While a roof’s orientation and available space may support a given panel and inverter capacity, consideration must be given to the existing infrastructure, which was designed on the assumption that there would be no customer-side generation.
In other cases, planners are concerned with the operation of down-the-line feeder voltage-support equipment under intermittent cloud cover, and how this may affect feeder reliability and equipment life. In most cases, engineers still plan their systems assuming zero solar contribution at peak load times, because the demand and generation curves are misaligned throughout the year and at that peak snapshot. The chart below illustrates this demand and generation imbalance.
The chart, widely known as the “California Duck Curve,” shows that on a spring day, when solar generation can be high but demand is minimal, net load can fall below the planned base load level, potentially pushing power back into the transmission system.
Projected Spring Day California Hourly Load Curve with Increasing Solar Contribution
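The duck-curve dynamics can be sketched with a stylized net-load calculation. The hourly demand and solar profiles below are invented for illustration, not CAISO data:

```python
# Stylized spring-day hourly profiles illustrating the "duck curve":
# net load = demand - solar sags at midday (the duck's belly) and then
# ramps steeply as the sun sets while evening demand rises.

demand = [20, 19, 18, 18, 18, 19, 21, 23, 24, 24, 24, 24,
          24, 24, 24, 25, 26, 28, 30, 31, 30, 27, 24, 22]   # GW by hour
solar  = [0, 0, 0, 0, 0, 0, 1, 3, 6, 9, 11, 12,
          12, 11, 9, 6, 3, 1, 0, 0, 0, 0, 0, 0]             # GW by hour

net_load = [d - s for d, s in zip(demand, solar)]

baseload = 15                 # GW of must-run generation (hypothetical)
belly = min(net_load)         # midday minimum of the duck curve
margin = belly - baseload     # negative => potential over-generation

# steepest 3-hour evening ramp the remaining fleet must follow
ramp = max(net_load[t + 3] - net_load[t] for t in range(len(net_load) - 3))

print(belly, margin, ramp)    # -> 12 -3 12
```

With these made-up numbers the midday belly dips 3 GW below must-run baseload, and the fleet then has to climb 12 GW in three hours, which is precisely the pair of problems the duck curve is used to communicate.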
In Southern US states there are other issues to be concerned with. Many circuits peak in winter because central heat pumps provide both heating and cooling, and on abnormally cold winter mornings emergency electric heating strips switch on to satisfy thermostat calls that the heat pump alone cannot meet. Heat pumps are generally designed to heat efficiently only down to around 32 degrees Fahrenheit; when temperatures drop significantly below that for extended periods, auxiliary electric heat must supplement the system, which means additional local power requirements.
On the morning of February 20, 2015, many South Carolina electric utilities hit their all-time demand peaks at a time when solar output was zero. Record low temperatures across the entire state for an extended period forced many residents to run auxiliary heat, pushing consumption on many circuits to all-time highs. Planning for such weather anomalies while provisioning for distributed generation can be a challenging balancing act.
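The heat-pump behavior driving these winter peaks can be sketched as a simple piecewise load model. The balance temperature, kW ratings, and home count below are hypothetical illustrations, not measured values:

```python
# Simplified single-home heating-load model: above the balance temperature
# the heat pump compressor carries the load; below it, resistive auxiliary
# strips switch on and electric demand jumps sharply. Numbers are invented.

def heating_demand_kw(temp_f, hp_kw=3.0, strip_kw=10.0, balance_f=32.0):
    """Electric demand of one home's heating system at outdoor temp_f."""
    if temp_f >= 65:                 # no heating call
        return 0.0
    if temp_f >= balance_f:          # heat pump alone carries the load
        return hp_kw
    return hp_kw + strip_kw          # aux strips supplement the heat pump

# A record-cold morning multiplies the jump across every home on a circuit:
homes = 500
print(homes * heating_demand_kw(40))  # heat pumps only -> 1500.0 kW
print(homes * heating_demand_kw(15))  # strips on       -> 6500.0 kW
```

The step change at the balance temperature is why a circuit that looks comfortably loaded on a 40-degree morning can set an all-time peak on a 15-degree one.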
Behavioral change, demand-side management, and new technologies such as consumer storage and edge voltage-support devices will need to continue evolving before we can rely on, and plan for, distributed energy resources contributing when they are most needed. Engineers are embracing these challenges with advanced planning tools now capable of modeling data at the granularity these new business needs require.
End Use Load Forecasting to Improve Infrastructure Planning
An increasing number and variety of electric end uses have appeared that did not exist even 10 to 15 years ago, including electric vehicles, set-top boxes, phone and device chargers, TV/entertainment units, and smart home devices. While virtually every specific appliance and end use is more efficient than its predecessors, thanks to appliance standards, improved technology, and energy efficiency programs and information, the addition of new end uses means that average customer usage has actually continued to increase in most cases. As in the post-oil-embargo period, simple trending of energy usage is not an effective option for accurate load forecasting. Load forecasters must keep up with the energy intensity and load patterns of these new end uses, as well as changes in the patterns of traditional ones, such as the impact of new “learning” thermostats on cooling and electric heating loads. With this constantly changing mix of end uses, counts and saturations, and energy intensities, understanding the components of customer bills is an ongoing challenge. Accurate end-use data, up-to-date studies, current knowledge of the engineering and market aspects of end uses, methods for calibrating the components of customer bills, and more sophisticated end-use forecasting models are the only ways to build accurate and effective load forecasts.
How Has DNV GL Furthered the Cause of Increased Accuracy in End-Use Load Forecasting and Infrastructure Planning?
DNV GL has pioneered many methodologies in the development of end-use information, DSM program evaluation, modeling of “bottom-up” end-use components, and both system-wide and “small-area” load forecasting for all levels of utility planning (generation, transmission and distribution). Our deep bench of acknowledged leaders in load research and load forecast modeling is a particularly valuable asset in meeting our clients’ needs in these areas. For example, DNV GL conducted an EPRI study for ISO New England in which we disaggregated ISO-NE system load into nine regions, by sector (residential, commercial, industrial, institutional), and by end use (including heating, cooling, water heating, motors, refrigeration, and others, down to office equipment and set-top boxes).
In addition, DNV GL has invested extensively in its system planning software so that planners can model distribution systems in full detail, adding a number of applications specific to PV, battery storage, and wind. Utilities are now incorporating analysis capabilities such as hosting capacity, PV placement, and time-range analysis into their system planning processes to ensure a safe and reliable grid.
How Can DNV GL Help You Improve Your Load Forecasting and Infrastructure Development?
Based on DNV GL’s extensive experience, ongoing research (independent, industry, and utility client studies), experience in technology and utility program evaluation, and library of end-use load shapes, we can identify the best data sources, optimal methods for calibration and adjustment, and plans and support for utility-specific end-use data collection where warranted, including turnkey end-use load studies. Our experience in load forecasting, development of base-case end-use disaggregation, and industry-leading involvement in policy issues keeps us abreast of the state of the art in all aspects of the end-use, load forecasting, and supply planning issues needed to address projects in this area.
Once your information needs are identified, our teams of statisticians, engineers, market researchers, DER technology experts and load data modeling specialists have the experience and expertise to help you develop integrated resource plans, load forecasts, end use load shape libraries, and distributed energy portfolio analyses.
DNV GL is a regular contributor to industry conferences, including recent papers presented at ACEEE, AEIC, BECC and IEPEC, as well as presentations at renewable systems conferences. Please contact us to discuss this blog or how we can help you develop and utilize end-use load shapes in your planning and evaluation efforts. Visit www.dnvgl.com to learn more.
Bert Taube, Principal Consultant in DNV GL’s Policy Advisory and Research group, also contributed to the development of this blog.
 A percentage of the ten (10) minute NYCA operating reserves equal to the ratio of the NYC zone peak load to the statewide peak load shall be required to be selected from resources located within the NYC zone. www.nysrc.org/pdf/…/PRR%20REP-08%20(D-R1)%201-19-14.pdf
 http://www.eia.gov/totalenergy/data/monthly/#electricity – Table 7.6
 U.S. Nuclear Regulatory Commission. Backgrounder on the Three Mile Island Accident. http://www.nrc.gov/reading-rm/doc-collection/fact-sheets/3mile-isle.html
 Texas Wind Generation Crisis – http://www.reuters.com/article/2008/02/28/us-utilities-ercot-wind-idUSN2749522920080228#bfEmkRG6mTTD860K.97
 California Duck Curve (CAISO) https://www.caiso.com/Documents/FlexibleResourcesHelpRenewables_FastFacts.pdf