Perfect Strangers: Evaluators Working with Implementers

The Problem

The implementers of energy efficiency programs and the people who evaluate those programs are frequently at odds. Too often these relationships are fraught with distrust and disappointment. One common reason is that evaluators are often the bearers of bad news: impact evaluations frequently find that programs did not save as much energy as claimed, and process evaluations often point out shortcomings in program design and delivery. To make matters worse, program implementers often complain that evaluation findings come too late to be useful. When evaluation findings are both negative and late, it is not surprising that a cycle of mistrust develops between program evaluators and implementers (Figure 1).

Figure 1: The Cycle of Mistrust


Another factor contributing to the lack of trust and collaboration between program evaluators and implementers is structural: evaluators and implementers often have little opportunity to talk to each other. Figure 2 shows the typical communication/management structure for energy efficiency programs. There is very limited direct contact between implementers and evaluators, especially at the management level; most communication between the two groups is filtered through the contract administrator.

Figure 2: Typical Communication/Management Structure for Energy Efficiency Programs


Contract administrators may set up this structure for various reasons. They may believe that keeping evaluators and implementers apart reinforces the evaluators' standing as independent, third-party assessors of program performance. They may also fear that bringing together parties with such different perspectives will only lead to confusion and possible antagonism.

However, a lack of interaction between program evaluators and implementers can lead to a lack of knowledge and empathy between the two groups. Lack of knowledge (not knowing what the other side is doing and why) creates difficulties on both ends. Lack of empathy (the inability to identify with and understand another's situation, feelings, and motives) can result when evaluators and implementers interact only through email. They never get to know each other personally, which makes it difficult to truly understand the needs and motivations of the person at the other end of the email. Without this “empathetic awareness,” each side feels less responsibility and urgency to help the other, for example by providing important data in a timely manner.

This lack of interaction also makes it difficult to develop mutual trust. Things inevitably go wrong in both program delivery and evaluation: program participation may be too low, there may be too many “free riders” in the program, or evaluators may have difficulty surveying a representative sample of program participants. If evaluators and implementers do not trust each other, they will likely conceal these problems from one another, even though the concealment usually benefits neither party. Smaller problems with program delivery, if uncorrected, may grow into bigger problems, and hiding a problem often means missing the chance for the other party to offer a solution.

This mistrust and lack of collaboration are unfortunate because program evaluators and implementers share a common interest in the success of energy efficiency programs. Nobody wins when an energy efficiency program fails. Negative consequences may include unhappy program participants, missed opportunities for energy savings, high levels of free ridership, the loss of performance incentives for utilities or implementation contractors, increased regulatory scrutiny, replacement of the implementation contractor, and even program termination. While these consequences often weigh more heavily on program implementers, they create problems for program evaluators as well.

How do we break this cycle of late evaluation findings and mutual mistrust between evaluators and implementers?

The Solution

Program evaluations that DNV GL has been conducting in Michigan for many years provide a useful case study in breaking this cycle. Since 2009, DNV GL has evaluated a portfolio of energy efficiency programs that serves nearly two dozen Michigan utilities under the Efficiency United (EU) brand. Michigan Community Action (MCA) is the non-profit that manages both the evaluation and implementation contracts for the EU portfolio.

In the first few years of our evaluation, the relationship between DNV GL and the implementation contractors was traditional. The only direct communications between the evaluators and the implementers involved data queries concerning the program tracking databases. All other communications were filtered through the contract administrator.

EU’s introduction of a group of Special Pilot programs in 2013, however, forced a change in this relationship. Because many of these program designs were untested, MCA requested that the evaluation and implementation teams attend monthly meetings together to increase program understanding and allow evaluators to provide near-real-time feedback on how to address program challenges. In these monthly meetings, the evaluators and implementers learned to lower their guard and discuss challenges and ideas openly and frankly in order to find solutions.

Figure 3: The Efficiency United Communication/Management Structure

The closer working relationship that grew out of these meetings produced three concrete improvements to the evaluation process:

Quarterly program tracking database reviews: Tracking database reviews matter because, if a program uses incorrect or outdated deemed savings values, evaluators may reduce the program's energy savings estimates. For the first few years of the EU evaluation, DNV GL conducted only an annual tracking database review, so any database errors were caught fairly late in the process. In addition, the lengthy discussions of these errors and how to correct them threatened regulatory reporting deadlines.

As part of their new collaboration, DNV GL and the implementers worked together to improve this process, adopting a quarterly review of the program tracking databases. The quarterly cadence reassured the implementation team that calculations were correct throughout the year, and spreading corrections across the year reduced the time needed for the annual review.
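
To make the idea concrete, here is a minimal sketch of the kind of check a quarterly database review might automate. The measure names, column names (record_id, measure_id, deemed_kwh), and tolerance are hypothetical; in practice the reference values would come from the deemed savings approved for that program year.

```python
import csv

# Hypothetical approved deemed savings (kWh per unit) for a program year.
APPROVED_DEEMED_KWH = {
    "LED_A19": 35.0,
    "LOW_FLOW_SHOWERHEAD": 250.0,
    "SMART_THERMOSTAT": 400.0,
}

TOLERANCE = 0.01  # allow 1% rounding difference before flagging

def review_tracking_extract(path):
    """Flag tracking records whose deemed savings differ from approved values."""
    flagged = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            measure = row["measure_id"]
            claimed = float(row["deemed_kwh"])
            approved = APPROVED_DEEMED_KWH.get(measure)
            if approved is None:
                flagged.append((row["record_id"], measure, "unknown measure"))
            elif abs(claimed - approved) > TOLERANCE * approved:
                flagged.append((row["record_id"], measure,
                                f"claimed {claimed} vs approved {approved}"))
    return flagged

if __name__ == "__main__":
    for record_id, measure, issue in review_tracking_extract("q1_tracking.csv"):
        print(f"{record_id} ({measure}): {issue}")
```

Running a check like this every quarter surfaces outdated values while they affect only a few months of records, rather than a full year's worth at reporting time.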

Adjusting the evaluation schedule: In the early years of the EU portfolio evaluation, participants from the first three quarters of the program year were surveyed in the fourth quarter. This schedule had two disadvantages. First, because data collection happened in the fourth quarter and analysis and reporting occurred in the first quarter of the following year, program implementers did not receive evaluation findings in time for their program planning efforts. Second, the evaluators never surveyed customers who participated in the last quarter of the program year, even though participation often spiked at the end of the implementation year.

One fruit of the new, more collaborative relationship between program evaluators and implementers was a redesign of the evaluation schedule to avoid these problems. The new schedule benefited the implementers by making evaluation results available in September, when they could still inform program planning efforts. It also helped the evaluators by covering program activity in the fourth quarter and by spreading evaluation work more evenly across the year, reducing pressure during the first quarter (just prior to utility filing deadlines).
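
The quarter-by-quarter logic might look something like the following sketch. The wave timing is illustrative only and is not taken from the actual EU evaluation plan.

```python
# Illustrative rolling schedule: each quarter's participants are surveyed in
# the following quarter, so findings accumulate through the year instead of
# arriving all at once after the fourth quarter.
WAVES = [
    {"participation": "Q1", "survey": "Q2", "findings": "Q3 (September report)"},
    {"participation": "Q2", "survey": "Q3", "findings": "Q4"},
    {"participation": "Q3", "survey": "Q4", "findings": "Q1 next year"},
    {"participation": "Q4", "survey": "Q1 next year", "findings": "Q2 next year"},
]

for wave in WAVES:
    print(f"{wave['participation']} participants -> surveyed {wave['survey']}, "
          f"findings reported {wave['findings']}")
```

Under a rolling schedule like this, fourth-quarter participants are picked up in the next wave rather than skipped, and no single quarter carries the entire survey workload.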

Establishing an engineering pre-review process: Possibly the most important outcome of the more collaborative relationship between DNV GL and the program implementers was an engineering pre-review process, under which the implementers could request that an evaluation engineer review the documents for a proposed energy efficiency project before financial incentives were committed to it.

Both sides found that discussing the calculations, assumptions, and documentation before project completion minimized the misunderstandings and contentiousness that often characterize post-evaluation conversations about engineering. The pre-review process also encouraged greater transparency from the program implementers than the post-evaluation review did: because the program had not yet invested significantly in the proposed project, implementers were more willing to consider its limitations.
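
As a rough illustration of what a pre-review might examine, the sketch below compares claimed savings for a proposed lighting retrofit against a standard first-order engineering estimate. The numbers and the 10% flag threshold are hypothetical, not drawn from the EU programs.

```python
# First-order lighting estimate:
#   kWh saved = fixtures * (baseline W - proposed W) * annual hours / 1000
def pre_review_lighting(fixtures, baseline_watts, proposed_watts,
                        annual_hours, claimed_kwh, tolerance=0.10):
    """Compare claimed savings to an engineering estimate before incentives
    are committed; return the estimate and whether a discussion is needed."""
    estimated_kwh = fixtures * (baseline_watts - proposed_watts) * annual_hours / 1000
    needs_discussion = claimed_kwh > estimated_kwh * (1 + tolerance)
    return estimated_kwh, needs_discussion

# Example: 200 fixtures, 120 W baseline, 45 W LED, 4,000 operating hours/year.
estimate, flag = pre_review_lighting(200, 120, 45, 4000, claimed_kwh=75000)
print(f"Engineering estimate: {estimate:,.0f} kWh; flag for discussion: {flag}")
```

Catching a gap like this before incentives are paid turns what would have been a contentious post-evaluation adjustment into a routine design conversation.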


For more information, please download Chris’s 2016 ACEEE Summer Study paper, Perfect Strangers: Evaluators Working with Implementers.
