In California energy efficiency, timing is everything…and uncertainty is crippling.

Yesterday I attended one of California’s (CA) most important annual events – the CEDMC fall conference. After a truly inspiring opening discussion by State Senator Henry Stern (look for him to follow Gavin to the CA governor’s office, according to those in the know), we dug into some of the key issues of the day. The crowd there was primarily program implementers, evaluators, and regulators – a healthy cross section of the world of regulated energy efficiency (EE) programs in the state. The conference is a great place to see where things are headed in the EE industry in California, and some trends are emerging.

Chief among those trends is that the old days of simply saving energy are gone. We’re growing into a more mature industry, and the picture of how EE contributes to meeting resource adequacy in the state is growing more sophisticated. It’s not enough to know how much energy we’re saving; it’s increasingly important to know when we’re saving that energy. Saving on energy costs is fine from the customer’s point of view, but if you want state incentive dollars, saving afternoon kilowatt-hours is a losing proposition.

[Figure: the “duck curve” of net electricity load by hour of day. Source: DOE]

The chart above is the short answer as to why. As the belly of the “duck curve” has gotten deeper and the neck has gotten steeper, the value of afternoon energy savings has dropped to next to nothing (see top title image), while the value of evening energy savings is much higher. And as you can see by comparing the avoided costs for 2017 vs. the 2018 update, both have gotten more extreme.

That shift in avoided costs is going to have a big impact on the design and valuation of energy efficiency programs in the upcoming solicitation cycle. Those of us designing program approaches are contending with measures that no longer provide much value to the grid, and therefore show up with an increasingly poor TRC[1]. As pointed out by conference speakers Meghan Dewey and Rob Kasman (both of PG&E), this means that some EE standbys, like converting office lighting to solid state, provide very little benefit, because those lights are only on when the avoided costs are low.
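To make the timing point concrete, here’s a minimal sketch using made-up avoided-cost numbers. The real CPUC avoided cost calculator is far more granular (hourly values, by climate zone and year), but the arithmetic of weighting savings by when they occur works the same way.

```python
# A minimal sketch of hourly avoided-cost weighting, using hypothetical numbers.
# The point: the same annual kWh savings can have very different grid value
# depending on which hours they land in.

AVOIDED_COST = {          # $/kWh, illustrative values only
    "afternoon": 0.02,    # belly of the duck curve: solar floods the grid
    "evening":   0.25,    # steep neck: net load ramps as solar drops off
}

def grid_value(kwh_by_period: dict) -> float:
    """Value savings by when they occur, not just how much they total."""
    return sum(kwh * AVOIDED_COST[period] for period, kwh in kwh_by_period.items())

# Office LED retrofit: savings land mostly in daytime hours.
lighting = {"afternoon": 10_000, "evening": 1_000}
# A measure that saves mostly during the evening ramp.
evening_measure = {"afternoon": 1_000, "evening": 10_000}

print(f"Daytime-heavy measure: ${grid_value(lighting):,.0f}")         # ~$450
print(f"Evening-heavy measure: ${grid_value(evening_measure):,.0f}")  # ~$2,520
```

Both measures save 11,000 kWh a year, but the one that saves during the evening ramp is worth several times more to the grid under these assumed prices.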

Uncertainty #1 – Are we really still using TRC?

This timing issue has been on everyone’s radar for a while, but the new TRC calculators that incorporate these new avoided costs have brought the issue into clear and short-term focus for a lot of us. I understand that the realities of the duck curve lead to a simple fact: afternoon EE is just not worth as much as it used to be. But it also brings up a bigger question for me: why are we still using TRC? I’ve heard strong arguments against it from PG&E and NRDC for years.

My beef with the TRC is that it simply doesn’t contain the right stack of costs and benefits to assess the value of EE to the grid. I’m not an economist, but I’ve reviewed the CPUC documents that define the TRC, and my experience using the cost effectiveness tool sends a clear message: including measures with marginal economics (i.e., long paybacks) makes your TRC go south pretty fast. The programmatic answer to that would be to “cream skim,” or just do slam-dunk measures with short paybacks.
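Here’s a simplified illustration of that cream-skimming pressure (this is not the CPUC’s actual calculator, and the numbers are hypothetical): treat the portfolio TRC as roughly lifecycle avoided-cost benefits over total costs (participant plus program), and watch what folding in a long-payback measure does to the ratio.

```python
# Simplified portfolio TRC sketch (not the CPUC calculator): ratio of
# lifecycle avoided-cost benefits to total costs (participant + program).
# Numbers are hypothetical, just to show how marginal measures drag the ratio.

measures = [
    # (name, lifecycle benefits $, participant cost $, program cost $)
    ("short-payback tune-up",  120_000,  40_000, 20_000),  # ~2.0 on its own
    ("long-payback retrofit",   90_000, 150_000, 30_000),  # ~0.5 on its own
]

def trc(items):
    benefits = sum(b for _, b, _, _ in items)
    costs = sum(pc + gc for _, _, pc, gc in items)
    return benefits / costs

print(f"Slam dunks only: {trc(measures[:1]):.2f}")  # 2.00
print(f"Whole portfolio: {trc(measures):.2f}")      # 0.88 -- below the 1.0 bar
```

Under these assumptions the deeper measure is exactly what drags the portfolio under the threshold, which is the incentive problem.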

That result sends a very powerful market signal to program implementers to do exactly the opposite of what regulators actually want us to do – target deep incentives that get us on track to meet our steep goals for energy efficiency.

I heard some feedback at the conference that we should ignore TRCs for the moment and focus on value to the grid. But right now, there are decisions in place that require IOU portfolios to meet the TRC = 1.0 threshold. As long as that guidance is in place, we have uncertainty in the marketplace as to how that will play out in the RFA process. And business abhors uncertainty because it can’t plan.

Uncertainty #2 – How long will we have a duck curve?

For the near future we’re probably stuck with the duck curve. At a high level, we’ll have it as long as we have this big mismatch between demand and supply-side resources like solar and wind (let’s call them traditional energy sources from here on out, shall we?). Which is where the storage folks come in to save the day. But until we have a deployment of storage that matches the deployment of solar and wind, we’re going to have cheap afternoon energy and high evening costs. One could predict an eventual equilibrium, reached when that storage meets the demand of those evening hours and the levelized cost of that storage falls to the difference between afternoon and evening prices. Right now, that seems like a long way off, but then again, we got to a grid crowded with solar a lot faster than most people expected. When markets tip, they really tip.
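As a rough back-of-envelope (all numbers assumed, not forecasts), that equilibrium test looks something like this: storage starts flattening the duck when the levelized cost of cycling a kWh through a battery drops below the afternoon-to-evening price spread, net of round-trip losses.

```python
# Back-of-envelope equilibrium check, with assumed numbers: storage arbitrage
# roughly pencils out when the levelized cost of cycling a kWh through the
# battery drops below the afternoon-to-evening price spread (net of losses).

afternoon_price = 0.02   # $/kWh charged (hypothetical)
evening_price   = 0.25   # $/kWh discharged (hypothetical)
round_trip_eff  = 0.85   # fraction of charged energy returned by the battery

# Value of shifting one discharged kWh from afternoon to evening
spread_value = evening_price - afternoon_price / round_trip_eff

lcos = 0.30              # assumed levelized cost of storage, $/kWh cycled
print(f"Spread value: ${spread_value:.3f}/kWh vs. LCOS: ${lcos:.2f}/kWh")
print("Storage pencils out" if spread_value > lcos else "Not yet at equilibrium")
```

Swap in your own price and storage-cost assumptions; the tipping point arrives when the second number falls below the first.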

Uncertainty #3 – What will regulatory oversight look like in 2019?

As we’re prepping for next year’s programs, the biggest uncertainty for me is the regulatory picture. To be specific, on two fronts:

  1. Will we bring some order to the chaos that has been the ex-ante review process?
  2. Will the CPUC stick with its draft decision to pull NMEC-platform programs through that custom review process?

To the first point, the ex-ante review (aka custom review) process has been absolutely crippling to the implementation of commercial and industrial programs that rely on custom calculations to estimate energy savings. While the objectives of that process were appropriate (reacting to poor net-to-gross (NTG) results being the principal one), the cure was worse than the disease. The timeline for those reviews, the customer interactions it delayed, and the incentives that got taken off the table were nothing less than a programmatic nightmare, to the point where our industry took the issue to the state legislature and passed AB 1131 to address it. If we see a custom process that keeps the same administrative hassles and lack of clarity, a lot of us will be avoiding custom review like the plague.

Enter normalized metered energy consumption (NMEC), which a lot of program implementers plan to use as a better tool for demonstrating savings at our sites in a way that is relatively clear and simple. However, a prior draft decision on NMEC assigned NMEC projects to the same custom review process. Industry reaction to that decision was widely negative, but we don’t yet have a final decision, leading again to uncertainty.
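For those less familiar with the approach, here’s a bare-bones sketch of the NMEC idea (hypothetical data, and far simpler than the hourly models and guidance real programs use): fit a baseline model to pre-project meter data, project it into the post-project period, and count the difference from metered use as savings.

```python
# A minimal NMEC-style sketch: fit a baseline model to pre-installation meter
# data, project it into the post-installation period, and count avoided energy
# use as savings. Real programs use richer models (hourly, occupancy,
# change-point) and follow CPUC guidance; this is just the core idea.
import numpy as np

# Hypothetical daily data: outdoor temperature (F) and metered kWh
pre_temp  = np.array([55, 60, 65, 70, 75, 80, 85, 90])
pre_kwh   = np.array([310, 330, 360, 400, 450, 500, 560, 620])

post_temp = np.array([58, 66, 72, 81, 88])
post_kwh  = np.array([290, 320, 350, 420, 470])   # metered after the retrofit

# Simple linear baseline: kWh ~ a + b * temperature, fit on the pre period
b, a = np.polyfit(pre_temp, pre_kwh, 1)
baseline = a + b * post_temp          # expected use without the project

savings = baseline - post_kwh
print(f"Estimated avoided energy: {savings.sum():.0f} kWh over the post period")
```

The appeal is that the savings claim comes from the meter itself rather than a stack of deemed assumptions, which is exactly why routing it back through custom review feels like a step backward.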

With all this uncertainty about, it’s kind of hard to know how to propose the right program approaches. And with SDG&E set to be the first to hit the streets with an RFA next week, a lot of us are nervous.

I hope that we get a decision on NMEC soon, and we’ll need to wait and see how a revised ex-ante review comes together. But if we get those two steps in place, we’ll be ready to go with program concepts that can provide benefit to the state’s ratepayers, mitigate risk to public funds, and save our customers money while moving them closer to carbon neutral, whether the TRC says so or not.


[1] Total Resource Cost test, currently used to determine program cost effectiveness. As described by the CPUC, it “measures the net costs of a demand-side management program as a resource option based on the total costs of the program, including both the participants’ and the utility’s costs.” Current regs specify that program benefits, as measured by the test, must exceed costs; in other words, the TRC must be greater than 1.0.
