
The LMP supermodel

With Ferc’s standard market design (SMD) seeking a shift from zonal power pricing to nodal pricing, the concept of locational marginal pricing (LMP) has become a key policy issue. Henwood Energy’s Vikram Janardhan proposes 10 key modelling features to look for in an analytical tool that can simulate LMPs and SMD markets


The US electricity market is fast becoming intensely competitive. In December 1999, the Federal Energy Regulatory Commission (Ferc) issued Order 2000 to advance the formation of regional transmission organisations (RTOs) and further develop wholesale competition in the power utility sector.

Ferc’s aim is to improve certainty in market rules across RTO geographies, streamline operations across all independent system operators (ISOs) and ultimately facilitate the creation of a stable, seamless, efficient, deregulated electricity market across the US.

What does all this mean for market participants? Does the range of opportunities widen or shrink? Should they perceive Ferc’s standardisation of market design as an opportunity or a threat? Does breaking down the barriers between RTO service territories inhibit trading and arbitrage opportunities or improve them? How does switching into a locational marginal pricing (LMP) structure affect short-term trading strategies?

As more US regional markets switch to LMP, the search for an in-house analytical model that can mimic SMD market characteristics intensifies. We list here 10 key characteristics that will help in the search for a country-wide model.

1. Ability of the model to employ an ACOPF algorithm as well as a DCOPF algorithm

ISOs across North America are adopting either direct current (DC) or alternating current (AC) power flow models for LMP calculation. AC models take into account the real and reactive power flows on the network and include loss modelling. The DC optimal power flow (DCOPF) method compromises on accuracy for the sake of speed of model execution. It ignores megavolt-ampere-reactive (MVar)1 flows on the network, and in most cases ignores system transmission losses.

The Pennsylvania-New Jersey-Maryland (PJM) region has adopted a DC model, while the New York ISO and California’s MD02 market design proposal use an AC power flow model. Market participants don’t want to be stuck with a simulation tool that calculates LMPs only one way. Studies have shown that in peak hours – when the transmission system is more congested – DCOPF-based LMPs can diverge from ACOPF-generated LMPs by as much as 12–15% (see figure 1).

The in-house LMP tool should be able to model and forecast nodal prices using the same algorithm as the ISO.
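To make the idea of an LMP concrete, the sketch below solves a deliberately tiny DCOPF-style dispatch as a linear programme and reads the nodal prices off the constraint duals. All numbers (a two-bus system, a 60 MW line, $20 and $50/MWh offers) are hypothetical, and the single-line "network" is a stand-in for a full DC power flow; it is a minimal illustration, not a production formulation.

```python
from scipy.optimize import linprog

# Hypothetical two-bus system: a cheap generator at bus 1, an expensive
# one at bus 2, 100 MW of load at bus 2, and a 60 MW line between them.
# In this toy DC formulation every MW from bus 1 flows over the line,
# so the line limit acts as an upper bound on the cheap unit's output.
costs = [20.0, 50.0]                       # $/MWh offers for gen 1 and gen 2
res = linprog(
    c=costs,
    A_eq=[[1.0, 1.0]], b_eq=[100.0],       # energy balance: g1 + g2 = load
    A_ub=[[1.0, 0.0]], b_ub=[60.0],        # line limit caps gen 1 at 60 MW
    bounds=[(0.0, 150.0), (0.0, 150.0)],
    method="highs",
)
g1, g2 = res.x                             # dispatch: 60 MW and 40 MW
# Dual of the energy balance: its magnitude is the LMP at the load bus
# (sign conventions vary between solver interfaces, hence abs()).
lmp_load_bus = abs(res.eqlin.marginals[0])
print(g1, g2, lmp_load_bus)
```

With the line congested, the load bus clears at the expensive unit's $50 offer while bus 1 clears at $20; the $30 spread is the congestion component of the LMP.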

2. Ability to calculate LMPs under economic dispatch as well as security-constrained dispatch

Economic dispatch is the commitment and dispatch of generators against forecast load, to maximise profit or minimise cost subject to a series of physical constraints. The main constraints are:

  • maintaining system energy balance (load + losses = generation);
  • maintaining control area operating reserves as set by the RTO;
  • adhering to transfer limits (power flows between transmission areas must be below a certain limit); and
  • adhering to unit operating limits (minimum and maximum operating range and chronological constraints of minimum up/down times of units).

This approach tries to force the marginal cost of all units to be equal. It also assumes true economic ordering of all units to meet load requirements.

Security-constrained dispatch is the commitment and dispatch of generators that compromises true economic dispatch in order to maintain power system reliability. Any dispatch of units that deviates from true economic dispatch – an uneconomic or out-of-merit dispatch – for the sake of maintaining power system security is a security-constrained dispatch.

A contingency is an unexpected but reasonably likely sudden change in the power system. Such a change could mean the loss of a transmission line, loss of a generator or a combination of events.

A security-constrained dispatch is an execution of the security-constrained optimal power flow (SCOPF) engine, which can calculate marginal prices of real power – shadow prices or spot prices – with and without certain power system equipment in service. The ability to feed in a sequence of equipment outages – whether generation or transmission – and have the SCOPF engine execute with and without each outage, one at a time, is referred to as an n-1 contingency analysis.

The ability to remove multiple equipment concurrently and have the SCOPF engine calculate resulting marginal prices is referred to as an n-x contingency, where x=1 or more, depending on the concurrent outages that are simulated by the SCOPF engine.

Organisations need to identify whether their analytical engine has the ability to perform n-1, n-2, n-3... contingency runs.
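The screening loop behind an n-1 study can be sketched in a few lines: outage each element in turn, redispatch, and report the resulting marginal price or infeasibility. The fleet, load and offer figures below are hypothetical, and a simple merit-order stack stands in for a full SCOPF engine.

```python
# Hypothetical generator fleet: (name, capacity MW, offer $/MWh).
FLEET = [("coal", 400, 18.0), ("ccgt", 300, 35.0), ("peaker", 150, 90.0)]
LOAD = 400.0

def merit_order_dispatch(units, load):
    """Greedy merit-order dispatch; returns (cost, marginal price) or None if short."""
    remaining, cost, price = load, 0.0, None
    for _, cap, offer in sorted(units, key=lambda u: u[2]):
        take = min(cap, remaining)
        if take > 0:
            cost += take * offer
            price = offer          # last unit dispatched sets the marginal price
            remaining -= take
    return (cost, price) if remaining <= 1e-9 else None

# n-1 screening: outage each unit in turn and redispatch the remainder.
print("base case:", merit_order_dispatch(FLEET, LOAD))
for i, (name, _, _) in enumerate(FLEET):
    contingency = FLEET[:i] + FLEET[i + 1:]
    result = merit_order_dispatch(contingency, LOAD)
    status = f"price {result[1]:.0f}" if result else "INFEASIBLE - load shed"
    print(f"loss of {name}: {status}")
```

Losing the coal unit in this toy case pushes the marginal price from $18 to the peaker's $90 offer; extending the loop to pairs of outages gives an n-2 screen.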

3. Ability to toggle between zonal and nodal solutions using the same model and same set of synthesised inputs

The Electric Reliability Council of Texas (Ercot) today produces prices using a zonal configuration. PJM, the New York ISO and the Midwest ISO, on the other hand, have adopted a nodal pricing model. Market participants seeking to procure an in-house market analytics platform should confirm that it can straddle both nodal and zonal modelling worlds using the same synthesised assumptions and input parameters. They do not want two separate models, one for zonal price forecasting and the other for nodal pricing.

The second reason they need one synthesised set of assumptions is that, even in an LMP system, ancillary service prices – such as regulation, spinning and non-spinning capacity2 – will still be calculated zonally to meet system-wide reserve requirements. It is important that the same analytical model can calculate zonal ancillary service prices as well as zonal or nodal energy prices.

Even in nodal markets, certain types of forecasts will be needed at the zonal level for the longer-term horizon.
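One simple way to collapse nodal results to the zonal level – a load-weighted average of nodal LMPs – can be sketched as below. The node names, prices, loads and zone map are hypothetical, and the actual zonal settlement rule is ISO-specific; load-weighting is just one common convention.

```python
# Hypothetical nodal results: node -> (LMP $/MWh, load MW), plus a zone map.
nodal = {"A1": (42.0, 100), "A2": (48.0, 300), "B1": (30.0, 200)}
zones = {"A1": "ZoneA", "A2": "ZoneA", "B1": "ZoneB"}

def zonal_prices(nodal, zones):
    """Collapse nodal LMPs to load-weighted zonal prices."""
    acc = {}
    for node, (lmp, load) in nodal.items():
        z = zones[node]
        cost, mw = acc.get(z, (0.0, 0.0))
        acc[z] = (cost + lmp * load, mw + load)
    return {z: cost / mw for z, (cost, mw) in acc.items()}

print(zonal_prices(nodal, zones))
```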

4a. Ability to calculate ancillary service prices as well as energy prices


Ferc’s SMD is not just about LMPs. Market participants have to contend with:

  • determining regulation and reserve prices;
  • modelling RTO protocols that support self-provision of ancillary services;
  • calculating shadow costs of the different capacity markets (regulation and frequency response, spinning operating reserves and supplemental operating reserves); and
  • modelling any demand-side resources that can respond within 30 minutes of a dispatch request.

Participants should look for analytical models that can compute ancillary-service prices, associated shadow costs or opportunity costs for the reserve market, and can forecast nodal energy prices.

4b. Ability to simultaneously ‘clear’ the energy and capacity markets: market co-optimisation


This feature pertains to the model’s ability to mimic the operation of the RTO or ISO. The theory of the energy and ancillary service price determination employed by the model should be based on a non-arbitrage principle. This means that the price in each market would be high enough to allow each accepted bidder in that particular market to receive at least as much profit as it could have received in any other market operated by the ISO that the bidder was technically capable of participating in.

The model should be able to maximise the economic value of the accepted bids – that is, accept the bids with the lowest overall cost.
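Co-optimisation can be illustrated with one linear programme that clears energy and spinning reserve together, with each unit's capacity coupling the two products so that reserve awards carry an energy opportunity cost. All offers, loads and capacities below are hypothetical, and a single reserve product stands in for the full set of ancillary services.

```python
from scipy.optimize import linprog

# Variables: [gA, gB, rA, rB] - energy and spinning reserve from two units.
# Hypothetical offers: energy $20 and $50/MWh, reserve $2 and $5/MW-h.
c = [20.0, 50.0, 2.0, 5.0]
A_eq = [[1, 1, 0, 0],        # energy balance: gA + gB = 120 MW load
        [0, 0, 1, 1]]        # reserve requirement: rA + rB = 20 MW
b_eq = [120.0, 20.0]
A_ub = [[1, 0, 1, 0],        # unit A capacity couples its energy and reserve
        [0, 1, 0, 1]]        # unit B capacity likewise
b_ub = [100.0, 60.0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 4, method="highs")

gA, gB, rA, rB = res.x
# Duals of the two balance constraints clear both markets at once
# (sign conventions vary between solver interfaces, hence abs()).
energy_price = abs(res.eqlin.marginals[0])
reserve_price = abs(res.eqlin.marginals[1])
print(gA, gB, rA, rB, energy_price, reserve_price)
```

In this toy case the cheap unit runs flat out, the expensive unit supplies the residual energy and all the reserve, and the LP clears energy at $50 and reserve at $5 simultaneously – no accepted bidder could earn more by moving to the other market.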

4c. Ability to model generation-shift factors and calculate the cost of marginal losses

Some ISOs include losses in their LMP calculations, and others do not. Whether a firm uses an AC or a DC model, it is important to be able to include the effect of losses – if that is how the RTO or ISO is doing its modelling – or explicitly leave it out. The PJM ISO, which models its network as a lossless network, does the latter. Hence, market participant organisations need to find out how generation-shift factors and marginal losses are calculated in the model.
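For a lossless DC network, generation-shift factors can be computed directly from the bus susceptance matrix. The sketch below does this for a hypothetical three-bus triangle with equal line reactances, where the classical result is that two-thirds of an injection flows over the direct path and one-third over the longer path.

```python
import numpy as np

# Hypothetical three-bus triangle, equal reactances (0.1 pu), bus 0 as slack.
lines = [(0, 1, 0.1), (1, 2, 0.1), (0, 2, 0.1)]   # (from, to, reactance)
n_bus = 3

# Bus susceptance matrix and branch flow matrix for the DC approximation.
Bbus = np.zeros((n_bus, n_bus))
Bf = np.zeros((len(lines), n_bus))
for k, (f, t, x) in enumerate(lines):
    b = 1.0 / x
    Bbus[f, f] += b; Bbus[t, t] += b
    Bbus[f, t] -= b; Bbus[t, f] -= b
    Bf[k, f], Bf[k, t] = b, -b

# Shift factors: MW flow on each line per MW injected at a bus and
# withdrawn at the slack. Slack row/column removed before inverting.
nonslack = [1, 2]
ptdf = Bf[:, nonslack] @ np.linalg.inv(Bbus[np.ix_(nonslack, nonslack)])
print(ptdf)
```

Injecting 1 MW at bus 1 puts 2/3 MW on the direct line to the slack and 1/3 MW around the other two sides of the triangle, exactly the split the reactances imply.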

5. Ability to model bidding behaviour of other market participants

The modelled bidding behaviour must reflect each generator’s costs and bidding strategy as well as the costs and behaviour of other participants. This is addressed by ‘game theory’: that is, participants determine their actions at least in part based on their beliefs about what other market participants will do.

The mathematician and Nobel laureate John Nash proposed an equilibrium concept for such circumstances. In a Nash equilibrium, no participant can improve its payoff by unilaterally changing its action, given the actions of the other participants. This concept has particular importance for the simulation of bidding in the power industry.

The model must include bidding strategies built on plausible market-participant expectations of their rivals’ behaviour.
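A stylised example of how such an equilibrium can be found numerically is best-response iteration in a Cournot quantity game – a common textbook stand-in for bid-based behaviour, not any particular vendor's method. The demand and cost parameters below are hypothetical.

```python
# Hypothetical Cournot bidding game: inverse demand P = a - b*(q1 + q2),
# constant marginal costs c1 and c2. Each firm's best response to its
# rival's quantity is q_i = (a - c_i - b*q_j) / (2b); iterating the best
# responses converges to the Nash equilibrium.
a, b = 100.0, 1.0
c1, c2 = 10.0, 10.0

q1 = q2 = 0.0
for _ in range(200):
    q1 = max(0.0, (a - c1 - b * q2) / (2 * b))
    q2 = max(0.0, (a - c2 - b * q1) / (2 * b))

price = a - b * (q1 + q2)
# Analytic Nash equilibrium for comparison: q_i = (a - 2*c_i + c_j) / (3b)
print(q1, q2, price)
```

With symmetric costs the iteration settles at 30 units each and a price of 40 – above the competitive price of 10, illustrating why bidding behaviour, not just cost, must be modelled.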

6. Ability to perform congestion revenue right evaluations and congestion analysis

The LMP tool of choice will have to help market participants formulate their congestion revenue right (CRR) auction bidding strategy, identify hedging strategies for eliminating exposure to LMP spikes caused by congestion and calculate the equivalent value CRRs for converting existing – or grandfathered – transmission contracts. CRRs are financial instruments that entitle the holder to receive compensation for congestion exposure that arises when the transmission grid is congested in the real-time market.
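The settlement arithmetic behind a CRR valuation is simple: the holder receives the path's megawatt quantity times the sink-minus-source LMP spread, hour by hour. The sketch below values a hypothetical 50 MW CRR against a few simulated hourly LMPs; in practice the LMP paths would come from the simulation engine itself.

```python
# Hypothetical 50 MW CRR from a source hub to a load sink, valued against
# simulated hourly LMPs: payoff = MW * (sink LMP - source LMP) per hour.
mw = 50.0
source_lmps = [22.0, 25.0, 60.0, 24.0]   # $/MWh at the injection node
sink_lmps   = [22.0, 25.0, 95.0, 26.0]   # $/MWh at the withdrawal node

payoff = sum(mw * (s - r) for r, s in zip(source_lmps, sink_lmps))
print(payoff)
```

Almost all of the value here comes from the single congested hour, which is typical: CRR valuation is dominated by the tail of the congestion distribution, which is why it needs a stochastic LMP model behind it.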

7. Ability to vary LMPs stochastically at a node

Transmission congestion is a major contributor to volatility in LMPs, but is not the exclusive contributor. Other key price drivers are load uncertainty, forced unit outages, hydro availability, fuel price uncertainty and other market participants’ bidding strategies. It is important that the in-house LMP model be able to explicitly treat key drivers in fundamental market price forecasting as random variables and vary them stochastically.

Users should be able to define the volatility for key drivers in market price forecasting: load, fuel price and hydro energy. They should also be able to specify the correlation between these drivers. Given assumptions for volatility and correlation, the model should then generate random draws for the drivers. This will enable users to review the resulting LMP price distribution and price volatility.
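Correlated random draws for the drivers can be generated with a Cholesky factor of the correlation matrix, as sketched below. The volatilities, correlations and the toy price-response function are all hypothetical placeholders; a fundamental model would replace the last line with a full market simulation per draw.

```python
import numpy as np

# Hypothetical volatilities and correlations for three drivers:
# load, gas price, hydro energy. Correlated draws via Cholesky factoring.
vols = np.array([0.05, 0.30, 0.15])        # lognormal volatilities
corr = np.array([[ 1.0,  0.6, -0.2],
                 [ 0.6,  1.0, -0.1],
                 [-0.2, -0.1,  1.0]])

rng = np.random.default_rng(42)
n = 50_000
z = rng.standard_normal((n, 3)) @ np.linalg.cholesky(corr).T
shocks = np.exp(z * vols - 0.5 * vols**2)  # mean-one lognormal multipliers

# Toy price response: LMPs rise with load and gas, fall with hydro.
base_lmp = 45.0
lmps = base_lmp * shocks[:, 0]**4 * shocks[:, 1]**0.7 / shocks[:, 2]**0.3
print(lmps.mean(), lmps.std())
```

The resulting `lmps` array is the LMP distribution the user would inspect for mean, volatility and tail behaviour at the node.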

8. Ability to use the data set that the ISO uses

An LMP tool’s algorithmic superiority is only part of the analytical equation. It is just as important to understand the source of transmission data an ISO uses for calculating LMPs and all the underlying assumptions in modelling individual power plant characteristics, transmission line interconnections, ratings, losses and fixed and price-sensitive demands.

ISOs use the transmission data set that is part of their energy management system platform and the integrated forward market platform. The input information is gathered in the state-estimator module, flows to the power flow module and then reaches the LMP calculation module. Any network or generation upgrades are added to this operational data set.

The feature to look for in analytical models is their methodology for producing LMP network model load-flow cases and mapping them with the generators in the appropriate topology.

9. Network ‘equivalencing’ and data mapping methodologies

9a. Network equivalencing

When calculating LMPs for California, market participants may not be interested in detailed LMPs for, say, Montana. The way your model ‘shrinks’ a network and allows you to ‘equivalence’ external areas is very important, as it can skew LMP price formation.

Network equivalencing refers to the technique of representing one power system area by another, in which the second system area approximates the behaviour of the original. Network equivalent systems are smaller and less detailed than the actual system area they are based on, enabling the OPF engine to solve faster. The equivalent network area contains fictitious power system elements that approximate the behaviour of the original.
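One standard way to build such an equivalent for a DC model is Kron reduction of the bus susceptance matrix: external buses are eliminated algebraically, leaving fictitious branches between the retained buses. The sketch below reduces the same hypothetical equal-susceptance triangle used above, retaining two buses and eliminating the third.

```python
import numpy as np

# Kron reduction: eliminate external buses from the DC susceptance matrix
# so the retained buses see an equivalent (fictitious) network.
# Hypothetical triangle with b = 10 on every line; retain buses 0 and 1,
# eliminate bus 2.
B = np.array([[ 20.0, -10.0, -10.0],
              [-10.0,  20.0, -10.0],
              [-10.0, -10.0,  20.0]])
keep, elim = [0, 1], [2]

B_eq = B[np.ix_(keep, keep)] - B[np.ix_(keep, elim)] @ np.linalg.inv(
    B[np.ix_(elim, elim)]) @ B[np.ix_(elim, keep)]
print(B_eq)
```

The reduced matrix corresponds to a single equivalent branch of susceptance 15 between the retained buses: the direct line (10) in parallel with the series path through the eliminated bus (5). The equivalent reproduces the boundary behaviour exactly here, but in general equivalencing distorts internal flows, which is why it can skew LMP formation.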

9b. Data mapping methodology

As most organisations will agree, mapping the generation units used in the unit commitment and dispatch engine to the transmission network model is a painstaking process. The process used to map these units – or to allocate equivalenced generation capacity to the appropriate buses – can affect the resulting LMP calculations.
Companies should ensure the analytical model they use includes appropriate and adequately detailed data mapping between the unit-commitment engine and the network model or load-flow case used by the ISOs themselves.

9c. Reliability-must-run and out-of-merit unit modelling


The California ISO (Cal ISO) publishes a list of ‘reliability-must-run’ (RMR) units. Ercot publishes a list of ‘out-of-merit’ units – those committed and dispatched out of economic merit order to maintain grid reliability. These units run outside the least-cost stacking order that a security-constrained unit-commitment (SCUC) tool would produce. It is important that the model can incorporate these rules into the simulation.

9d. Ability to model nomograms

Nomograms represent the combined throughput of multiple transmission lines, which is typically less than the sum of the transfer capabilities of the individual lines (see figure 2). The limit can also be expressed as a function of load, of generation online, or of a combination of both.
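In a simulation engine, a nomogram typically enters as one or more joint linear cuts across several interface flows, in addition to each interface's own limit. The two-interface limits and coefficients below are hypothetical, chosen only to show how a point can satisfy each limit individually yet violate the joint constraint.

```python
# A nomogram couples flows on several interfaces: the joint operating
# point must stay inside a polygon even when each flow is individually
# within its own limit. Hypothetical two-interface nomogram:
#   f1 <= 400, f2 <= 300, and jointly f1 + 0.8*f2 <= 500.
def nomogram_ok(f1, f2):
    return f1 <= 400 and f2 <= 300 and f1 + 0.8 * f2 <= 500

print(nomogram_ok(380, 100))   # True: inside both limits and the joint cut
print(nomogram_ok(380, 250))   # False: each flow is fine alone, jointly not
```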

10. Data management and analysis of the LMP results

10a. How easy is it to perform ‘sanity checks’ on the results produced by the analytical model?

Sample checks could be:

  • Can you change the tolerance for the iteration convergence?
  • Is Kirchhoff’s current law – the sum of megawatts flowing into a node must equal the sum flowing out – obeyed? Is there a report showing that power in equals power out at every node?
  • Is there a report showing which line limits can be monitored? Can the end-user set the voltage level of that monitoring? For example, can we monitor line limits of all lines of 135 kilovolts (kV) and above?
  • How bad are the violations of the line limits for lines below that threshold?
  • Can you produce a report that confirms that minimum/maximum levels of megawatt generation and MVar limits are not violated?
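The nodal-balance check in the list above can be automated with a few lines of post-processing over a solved case. The node names, injections and flows below are hypothetical; a real check would read them from the model's output tables.

```python
# Sanity check: at every node, generation plus imports must equal load
# plus exports (Kirchhoff's current law in MW terms, lossless model).
# Hypothetical solved case: net injection per node and line flows.
injections = {"n1": 60.0, "n2": -40.0, "n3": -20.0}   # gen minus load, MW
flows = {("n1", "n2"): 40.0, ("n1", "n3"): 20.0}      # MW, from -> to

def balance_violations(injections, flows, tol=1e-6):
    """Return the nodes where net injection does not match net outflow."""
    net_out = dict.fromkeys(injections, 0.0)
    for (f, t), mw in flows.items():
        net_out[f] += mw
        net_out[t] -= mw
    return [n for n in injections if abs(injections[n] - net_out[n]) > tol]

print(balance_violations(injections, flows))   # [] means every node balances
```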

10b. How easy is it to view the LMP results from a simulation?

  • Does the product have a graphical ‘contour map’ of LMPs?
  • Can you produce a congestion revenue report taking LMP price differentials at either end of a line?
  • How easy is it to calculate gross margins for specific generator units operating in an LMP market?

10c. How easy or difficult is it to change input data and run a new scenario?

  • Can you easily add a peaker unit and see how profitable it will be over the course of the year?
  • Is the graphical user interface intuitive, so that the end-user can easily change the topology and rerun the LMP calculation?

Summary
Selecting an LMP price forecasting and in-house SMD market modelling tool can be a daunting task. Hopefully with the above pointers, market participants will be better informed as they embark on the decision-making process.

Vikram Janardhan is a vice-president at Sacramento, California-based software vendor Henwood Energy Services.
e-mail: vjanardhan@henwoodenergy.com

1 MVar is the reversible flow of energy to and from the load during a time interval
2 Spinning capacity refers to generation resources that are already synchronised with the grid and can respond to the RTO within minutes of a dispatch request. Non-spinning capacity refers to generation resources that are not synchronised with the grid, and so need more time before they can be dispatched, or to demand-side resources that can curtail energy usage.
