NOTICES
PENNSYLVANIA PUBLIC UTILITY COMMISSION
Amended Reliability Benchmarks and Standards for Electric Distribution Companies
[33 Pa.B. 3443] Public Meeting held
June 26, 2003
Commissioners Present: Terrance J. Fitzpatrick, Chairperson; Robert K. Bloom, Vice Chairperson; Aaron Wilson, Jr.; Glen R. Thomas, statement follows; Kim Pizzingrilli
Amended Reliability Benchmarks and Standards for the Electric Distribution Companies; Doc. No. M-00991220
Tentative Order By the Commission:
Today, in conjunction with our proposed rulemaking order at Docket No. L-00030161, we seek to tighten our standards for reliability performance in the electric distribution industry and to reiterate the Commission's regulations regarding the qualification of an interruption as a major event, as well as the process for filing formal requests for waivers from submitting reliability data for a reporting period.
Procedural History
The Electricity Generation Customer Choice and Competition Act (act), 1996, Dec. 3, P. L. 802, No. 138 § 4, became effective January 1, 1997. The act amends Title 66 of the Pennsylvania Consolidated Statutes (Public Utility Code or Code) by adding Chapter 28 to establish standards and procedures to create direct access by retail customers to the competitive market for the generation of electricity, while maintaining the safety and reliability of the electric system. Specifically, the Commission was given a legislative mandate to ensure that levels of reliability that were present prior to the restructuring of the electric utility industry would continue in the new competitive markets.
In response to this legislative mandate, the Commission adopted a final rulemaking order on April 23, 1998, at Docket No. L-00970120, setting forth various reporting requirements designed to ensure the continuing safety, adequacy and reliability of the generation, transmission and distribution of electricity in this Commonwealth. See 52 Pa. Code §§ 57.191--57.197. The final rulemaking order acknowledged that the Commission could reevaluate its monitoring efforts at a later time as deemed appropriate.
On December 16, 1999, the Commission entered a Final Order at M-00991220, which established reliability benchmarks and standards for the electric distribution companies in accordance with 52 Pa. Code § 57.194(h).
Discussion
Electric Reliability Benchmarks and Standards
The Commission's regulations for Electric Reliability Standards at § 57.194(h)(1) state that:
''In cooperation with an electric distribution company and other affected parties, the Commission will, from time to time, establish numerical values for each reliability index or other measure of reliability performance that identify the benchmark performance of an electric distribution company, and performance standards.''

In a series of orders at Docket No. M-00991220, the Commission established reliability benchmarks and standards regarding: (1) Customer Average Interruption Duration Index (CAIDI); (2) System Average Interruption Frequency Index (SAIFI); (3) System Average Interruption Duration Index (SAIDI); and (4) Momentary Average Interruption Frequency Index (MAIFI).1 The benchmark is company specific and is the average of the historical annual averages for these indices for the 5-year period 1994-1998. The standard is two standard deviations from the benchmark. These benchmarks and standards have remained in effect since their issuance in December 1999.
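The benchmark and standard described above amount to a simple statistical calculation. The following is a minimal sketch using hypothetical annual SAIFI values; the figures are illustrative only, and the sample standard deviation is assumed:

```python
from statistics import mean, stdev

# Hypothetical annual SAIFI values for one EDC, 1994-1998 (illustrative only).
annual_saifi = [1.10, 1.25, 0.95, 1.30, 1.15]

# Benchmark: the company-specific average of the five annual values.
benchmark = mean(annual_saifi)

# Standard: two standard deviations from the benchmark. Lower index values
# indicate better performance, so the standard lies above the benchmark.
standard = benchmark + 2 * stdev(annual_saifi)

print(f"benchmark: {benchmark:.3f}")  # 1.150
print(f"standard:  {standard:.3f}")   # 1.424
```

With these sample figures, an EDC could report an annual SAIFI as high as roughly 1.42 (about 24% above the 1.15 benchmark) and still fall within the two-standard-deviation standard, which illustrates the looseness the Commission discusses below.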
In June 2002, the Legislative Budget and Finance Committee (LB&FC) issued a report entitled Assessing the Reliability of Pennsylvania's Electric Transmission and Distribution Systems. The report, in part, concluded that the 2-standard deviation minimum performance standard is too loose and should be tightened as it does not assure that reliability performance will be maintained at levels experienced prior to the act.
A Staff Internal Working Group on Electric Service Reliability (Staff Internal Working Group) prepared a report entitled Review of the Commission's Monitoring Process For Electric Distribution Service Reliability, dated July 18, 2002, which reviewed the Commission's monitoring process for electric distribution service reliability and commented on the recommendations from the LB&FC report. In its report, the Staff Internal Working Group recommended, in part, that ''the Commission should develop minimum performance standards that achieve the Commission's policy objective (See Recommendation III-1, p. 7).'' A subsequent Commission Order at Docket No. D-02SPS021, August 29, 2002, directed:
''That the Commission staff shall undertake the preparation of such orders, policy statements, and proposed rulemakings as may be necessary to implement the recommendations contained within the Staff Internal Working Group . . . Report (p. 4).''

The Staff Internal Working Group was assigned this task and conducted field visits to electric distribution companies (EDCs) to identify the current capabilities of each EDC for measuring and reporting reliability performance. These field visits began in October 2002 and continued intermittently through March 2003. As a result of the field visits, various forms of reliability reports and reliability data were received from the EDCs and analyzed by the Staff Internal Working Group to determine the most effective and reasonable approach for the Commission to monitor electric distribution service reliability. Ultimately, the Staff Internal Working Group concluded that sufficient data would be available to assess whether each EDC is meeting the statutory requirement to maintain reliability at precompetition levels.
Recalculation of Reliability Benchmarks
During the Staff Internal Working Group's review of the reliability performance standards it became apparent there were two sources of variability in the computation of the permanent benchmarks that made it difficult to set new performance standards equitably across the EDCs. The first source of variability was that some EDCs (that is, PECO) used one, systemwide operating area to compute their reliability metrics, while other EDCs (that is, PPL and GPU) subdivided their service territories and used multiple operating areas to compute their metrics. The number, size and composition of operating areas used for metric computations introduced variability into the criterion used to exclude major events from the reliability metrics reported to the Commission.
The regulations in § 57.192 define an operating area as ''a geographical area, as defined by an electric distribution company, of its franchise service territory for its transmission and distribution operations.'' The definition of a major event in § 57.192 includes: ''An interruption of electric service resulting from conditions beyond the control of the electric distribution company which affects at least 10% of the customers in an operating area during the course of an event for a duration of five minutes each or greater.'' Based on these definitions, an EDC that subdivided its service territory into several small geographic operating areas could exclude major events from its metric calculations based on a criterion of an interruption affecting 10% of the customers in an operating area; whereas another EDC, employing only one, service territory-wide operating area had to meet a much higher criterion of an interruption affecting 10% of the total EDC customer base. The result of this variability was that the metric calculations computed by the EDCs and reported to the Commission were not based on a uniform methodology.2 This in turn made it impossible for the Commission to develop an equitable performance standard that is derived from each EDC's benchmark.
The proposed solution to the benchmark variability problem was to develop one uniform calculation method for computing and reporting reliability metrics to the Commission. Two of the EDCs suggested to the Staff Internal Working Group that reliability performance be measured and reported based on systemwide performance (the performance for the entire service territory) as opposed to multiple operating areas that are defined differently by each EDC. The two EDCs indicated that their daily system operations and maintenance programs are managed on a systemwide basis as opposed to individual operating areas. In fact, the two EDCs indicated that they have to recompute their annual reliability data based on multiple operating areas when reporting their annual performance measures to the Commission. Based upon this input, the concept of measuring and reporting reliability performance on a service territory basis was proposed to the remaining EDCs and was accepted.
Therefore, the Staff Internal Working Group decided that the best uniform calculation method is for each EDC to compute and report its reliability metrics to the Commission considering the entire service territory as one operating area and the major event exclusion of an interruption that affects 10% of the entire customer base for a duration of 5 minutes or longer. Consistent with this decision, the Staff Internal Working Group requested those EDCs that had developed their metrics using more than one operating area to recalculate their metrics for the 1994-2002 period using the entire service territory criterion. Commission staff received very good cooperation from the EDCs in fulfilling this data request. The data recalculations were used by Commission staff to recompute the current benchmarks using a uniform methodology across the EDCs. Appendix A contains a table of the benchmarks as originally calculated and the recomputed benchmarks based on excluding major event data using the entire service territory criterion.
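The uniform exclusion criterion adopted above can be expressed as a simple check. The following sketch is a hypothetical illustration; the function name and customer figures are not drawn from the order:

```python
def qualifies_as_major_event(customers_interrupted: int,
                             total_customers: int,
                             duration_minutes: float) -> bool:
    """Entire-service-territory criterion: an unscheduled interruption
    qualifies as a major event when it affects at least 10% of the EDC's
    entire customer base for a duration of 5 minutes or longer."""
    return (customers_interrupted >= 0.10 * total_customers
            and duration_minutes >= 5)

# For a hypothetical EDC serving 1,000,000 customers:
print(qualifies_as_major_event(120_000, 1_000_000, 45))  # True
print(qualifies_as_major_event(60_000, 1_000_000, 45))   # False: under 10%
print(qualifies_as_major_event(120_000, 1_000_000, 3))   # False: under 5 minutes
```

Note how the same 120,000-customer interruption that would have met a 10% threshold in a small operating area must now be measured against the full customer base, which is the uniformity the recalculated benchmarks restore.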
The Commission emphasizes that the recomputed benchmarks do not represent a lowering or raising of the historical benchmarks in any sense. All of the EDCs were asked to apply a uniform exclusion criterion to their original data. The EDCs indicated that the original data was available to perform the recalculation analysis and subsequently provided the results to the Commission. The only major events excluded from the recomputed benchmarks are unscheduled interruptions that affected 10% or more of the customers in the entire service territory for a duration of 5 minutes or longer. For EDCs that had previously excluded major events based on the multiple operating area criterion, the recomputed benchmark values may be higher than the original benchmark values. While the benchmark numerical values did change in some years in which prior exclusions existed on a multiple operating basis (but do not under the entire service territory criterion), the recomputed benchmarks should be viewed as representing the actual reliability performance during the historical period as calculated using a uniform methodology.
A second source of variability known to the Commission in the computation of the permanent benchmarks for Citizens' Electric and Penn Power made it difficult to set new performance standards equitably across the EDCs. Citizens' Electric Company did not exclude major events from its metric calculations for 1994-2002 as permitted by the regulations. This was in contrast to the calculations of all the other EDCs and, therefore, was a source of variability affecting only Citizens'. In the case of Penn Power, the metrics for the 1994-1997 period were not computed using the definition of a major event as contained in § 57.192. For this period, Penn Power used the definition of a major event adopted by First Energy (the parent of Penn Power), which differs from the definition used by the Commission. In the cases of Citizens' and Penn Power, Commission staff requested that the metrics be recomputed using the same uniform methodology that the other EDCs used. Commission staff received good cooperation from Citizens' and Penn Power, which provided recomputed figures upon request.
It is also important to note that the Commission is proposing the reporting of worst performing circuit reliability data based on reliability indices and other relevant performance data (for example, lockouts) from each of the major EDCs, since the methodology using the entire service territory criterion eliminates the measuring and reporting of reliability performance for individual operating areas within an EDC's service territory. This reporting of circuit data will prevent any masking of poor reliability performance in small pockets of a major EDC's service territory. Masking is not an issue with the small EDCs since they have smaller service territories. The specific requirements are addressed in § 57.195(b)(5) and (e)(3) and (4).
Revisiting Performance Standards
In its July 2002 report entitled Review of the Commission's Monitoring Process for Electric Distribution Service Reliability, Commission staff identified two shortcomings in the methodology used to establish minimum reliability performance standards (pages 6 and 7). The first shortcoming was statistical in nature: the method of setting the standard by applying two standard deviations to the benchmarks yielded a standard loose enough that an EDC could perform worse on an index after 1998 than in any year during the 1994-1998 benchmark period and still be within the standard. This creates the second shortcoming, namely that the statistical outcome appears to be contrary to the Commission's policy objective of setting standards that maintain the same level of reliability after restructuring as was experienced prior to restructuring. A similar conclusion was reached by the LB&FC.
Also, according to the LB&FC Report, on page 46, the percent differences between the EDCs' historic performance levels and the Commission's minimum performance standards for CAIDI and SAIFI for the large companies3 were as follows.
Company            CAIDI    SAIFI
PECO               29%      38%
PPL                21       30
Allegheny Power    25       61
GPU4               25       46
Duquesne           18       27
Penn Power         26       40

Notably, there is wide variation between the historic benchmarks and the minimum performance standards for the EDCs. This table shows how far above the benchmark the 2-standard deviation standards lie. For SAIFI, the 2-standard deviation standard is, on average, 40% greater than the benchmark; for CAIDI, the average is 24% greater than the benchmark.
Based on the previous findings, Commission staff recommended that the Commission move toward setting a standard that is more closely tied to the benchmark but also allows for some degree of year-to-year variability. Therefore, the Commission is proposing a two-tiered reliability performance standard, one tier for rolling 3-year performance and one tier for rolling 12-month performance.
Under the first tier of the standard, an EDC's rolling 3-year average performance for systemwide annual reliability indices should be no higher than 10% above the benchmark level (110% multiplied by the benchmark).5 The proposed rolling 3-year standard was set at the threshold of 110% of the benchmark to ensure that the rolling 3-year standard is not worse than the worst annual performance experienced during the years prior to restructuring (1994-1998). Rolling 3-year performance would be measured against the standard at the end of each calendar year. Appendix B contains a table comparing each EDC's current benchmarks and standards to the proposed recomputed benchmarks and the rolling 3-year standards. The table also shows the worst annual performance experienced by each EDC during the period 1994-1998 prior to restructuring. In addition, the table shows whether the EDCs would have met the rolling 3-year standard for the periods of 1999-2001 and 2000-2002.
A rolling 12-month standard is also being proposed by the Commission to monitor performance on a shorter term basis. For the large EDCs6 (companies with 100,000 or more customers), the Commission is proposing that the rolling 12-month averages of systemwide indices be within 20% of the benchmark. For small EDCs7 (companies with less than 100,000 customers), the Commission is proposing that the rolling 12-month averages of systemwide indices be within 35% of the historical benchmarks. A greater degree of short-term latitude is being proposed for the small EDCs because a single event can have a more significant impact on the reliability performance of a small EDC's distribution system. Small EDCs have fewer customers and fewer circuits than the large EDCs; their sample sizes are therefore smaller, and their standard deviations are higher. Rolling 12-month performance would be measured against the standard on a quarterly basis. Appendix C contains two tables showing how the rolling 12-month standard would compare to the current methodology used to establish annual reliability standards (two standard deviations above the benchmark). One table shows the results for the major EDCs (120% of benchmark), while the other shows the results for the small EDCs (135% of benchmark). In the vast majority of cases (21 of 22), the proposed rolling 12-month standard set points for SAIFI and CAIDI are closer to the benchmarks than the set points produced by the two standard deviation methodology. Therefore, the Commission is tightening its standards.
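The interaction of the two tiers can be sketched as a pair of threshold checks. The function below is a hypothetical illustration of the proposed standard, not an official compliance tool; the benchmark and index figures are invented:

```python
def meets_rolling_standards(rolling_3yr_avg: float,
                            rolling_12mo_avg: float,
                            benchmark: float,
                            large_edc: bool = True) -> tuple:
    """Return (tier-1 met, tier-2 met) for one reliability index.
    Lower index values indicate better performance, so both tiers are
    upper bounds expressed as a percentage of the benchmark."""
    # Tier 1: rolling 3-year average no higher than 110% of the benchmark.
    three_year_ok = rolling_3yr_avg <= 1.10 * benchmark
    # Tier 2: rolling 12-month average within 20% of the benchmark for
    # large EDCs (100,000 or more customers), within 35% for small EDCs.
    cap = 1.20 if large_edc else 1.35
    twelve_month_ok = rolling_12mo_avg <= cap * benchmark
    return three_year_ok, twelve_month_ok

# With a hypothetical CAIDI benchmark of 100 minutes:
print(meets_rolling_standards(108.0, 115.0, 100.0))                   # (True, True)
print(meets_rolling_standards(112.0, 125.0, 100.0))                   # (False, False)
print(meets_rolling_standards(112.0, 125.0, 100.0, large_edc=False))  # (False, True)
```

The third call illustrates the extra short-term latitude afforded to small EDCs: the same 12-month figure that breaches the 120% cap for a large EDC falls within the 135% cap for a small one, while the 3-year tier still flags the sustained deviation.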
Consistent with proposed changes to the language of the Commission's Electric Reliability Standards in § 57.194(h)(3), the role of the standard is being revised. A failure on the part of an EDC to meet the first tier standard is a trigger for additional involvement of Commission staff in the form of remedial review and perhaps additional reporting by the EDC until performance is within the standard or Commission staff is satisfied that performance over time is not significantly deteriorating. Repeated violations of the two-tiered standard shall result in Commission staff pursuing an enforcement action, including fines and other available remedies.
The proposed two-tier standard set points were selected for a number of reasons. First, the standards allow for some variability from the benchmarks because reliability performance is influenced by weather conditions and other factors that are inherently variable in nature. Second, a review of historical reliability performance reveals a certain degree of variance from year to year. However, the use of rolling averages, particularly for the 3-year rolling average standard, will tend to even out some of the inherent variances in performance metrics. The longer the period under review, the more year-to-year high and low variations will tend to cancel each other out. As such, the 3-year rolling average standard should encourage reliability performance to gravitate toward the benchmark over time. Finally, the set points were selected so that Commission staff would be actively involved when performance deviates significantly from the benchmark but would not be as involved when the variations fall within the more typical range.
It is important to understand the interplay between the rolling 3-year and rolling 12-month standards or trigger points contemplated in this Order, and how a combined approach to monitoring reliability differs from the two standard deviation standard, which is the current trigger point established in the previous Commission Order. Under the two standard deviation standard, some EDCs could continuously perform worse than the worst performance experienced during the 1994-1998 period and still fall within the bounds of minimum acceptable performance. Under the proposed rolling 12-month standard (as measured each quarter), it is still possible for the performance of certain EDCs to be worse than the worst performance during the 1994-1998 period. However, this performance could not be sustained for more than a few quarters without triggering Commission action based on these EDCs falling outside the rolling 3-year standard (110% of the benchmark as measured each calendar year). The intent of the two-tiered approach to setting standards is to allow for some variability in performance over a short period of time while pushing that performance towards the benchmark over a longer period of time.
The Commission wants to be clear about how it views the role of benchmarks and standards in relation to long-term expectations for EDC reliability performance. The recomputed benchmarks represent the average performance of each EDC in the 1994-1998 period prior to the introduction of Electric Choice. The Commission's statutory obligation is to have each EDC achieve a level of performance after the introduction of Electric Choice that is at least as good as it was prior to competition. Therefore, the Commission's ultimate goal is to have each EDC achieve benchmark performance in the long term. At the same time, the Commission acknowledges that, in short-term periods, reliability performance can be variable. By establishing 12-month and 3-year standards, we are noting that we will view performance that falls within the bandwidth between the benchmark and the standards as acceptable as long as the long-term trend is moving towards the benchmark in a reasonable period of time. Conversely, the Commission will not view as acceptable performance that consistently falls within the bandwidth between the benchmark and the standards but does not trend toward the benchmark.
The Commission has considered but declined to use the standard deviation approach at this time for setting performance standards. A standard deviation measures the degree of variance from an average and can be useful for the establishment of reliability standards. However, because the benchmark data currently available consists of only five sample points for each reliability index (the annual average indices for the years 1994-1998), we are not confident that its use would be appropriate at this time. Nevertheless, since the Commission's revised regulations will result in more frequent reporting (and thus collection of more data points), and better quality data in future years, we will not rule out the use of a standard deviation approach for the establishment of performance standards in the future.
At this time, if the Commission were to tighten performance standards from 2-standard deviations to 1-standard deviation, the result would be a tighter standard for the large EDCs8 than 120% of benchmark for CAIDI, but a looser standard for the smaller EDCs than 135% of benchmark for CAIDI. Issues of fairness obviously arise with the one standard deviation approach. Regarding SAIFI, a standard of 120% of benchmark for the large EDCs and 135% of benchmark for the small EDCs is a significant tightening of the standard from approximately 2-standard deviations to 1-standard deviation. Both CAIDI (average duration of a customer's interruption) and SAIFI (systemwide average frequency of interruptions) are important measurements, and the tightening of these standards should provide better assurance that performance has not been deteriorating since 1999.
Reliability Data Quality Issues
During the Staff Internal Working Group's review of the Commission's electric reliability monitoring efforts, it became clear that there are data quality issues that should be acknowledged and taken into consideration. First, in the case of Allegheny Power, several months of data from 1997 and 1998 were lost and are not available for analysis. Therefore, the SAIFI metrics for those years are understated, making reported performance during 1997 and 1998 appear better than it actually was. Because the 1997 and 1998 data were used along with the 1994-1996 data to compute the historical benchmark average, Allegheny Power's SAIFI benchmark is set artificially low. Thus, comparisons of Allegheny Power's post-1998 SAIFI performance with the benchmark are inherently unfavorable. This defect also affects the SAIDI metric, as SAIDI is the product of SAIFI and CAIDI. Unfortunately, Allegheny Power cannot recapture the missing 1997 and 1998 data.
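The relationship noted above, that SAIDI is the product of SAIFI and CAIDI, follows directly from how the indices are computed from outage records. A minimal sketch with hypothetical outage figures (the system size and outage log below are invented for illustration):

```python
# Hypothetical system of 50,000 customers with three sustained interruptions,
# each recorded as (customers interrupted, duration in minutes).
customers_served = 50_000
outages = [(4_000, 90), (1_500, 120), (500, 30)]

total_interrupted = sum(n for n, _ in outages)      # 6,000 customer interruptions
customer_minutes = sum(n * d for n, d in outages)   # 555,000 customer-minutes

saifi = total_interrupted / customers_served  # avg interruptions per customer served
caidi = customer_minutes / total_interrupted  # avg restoration minutes per interruption
saidi = customer_minutes / customers_served   # avg interruption minutes per customer

# SAIDI = SAIFI x CAIDI, so understating SAIFI (e.g., from poor connectivity
# data) understates SAIDI as well.
assert abs(saidi - saifi * caidi) < 1e-9
```

This identity is why the missing Allegheny Power SAIFI data propagates into its SAIDI benchmark: any undercount of customers interrupted lowers both indices together.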
A second and more pervasive data quality issue exists that likely affects much of the reliability data reported by the EDCs since 1994. EDCs continue to implement automated reliability management systems that improve the accuracy of reliability monitoring information. Specifically, these systems provide better information about when an outage begins, when it ends and how many customers it has affected. Connectivity, the degree to which each customer is documented on the distribution system, has improved greatly over the last 10 years. This has profound implications for comparing historical reliability performance to current performance. If today's EDC reliability monitoring systems had been in place several years ago (including the benchmark period of 1994-1998), we would find that reliability performance was not at the levels reported at the time. Earlier monitoring processes were often manual in nature and did not accurately capture each customer that had lost power, and therefore understated the actual number of customers affected by an interruption. Today's reliability monitoring systems have 96% or better connectivity and are less likely to record inaccurately how customers are affected by outages. This differential ability of the monitoring systems to capture accurate information introduces a degree of uncertainty into our ability to interpret reliability trend data. Trends in reliability metrics can change over time because of true differences in reliability performance as well as because of differential measurement capability. The differential measurement capability can introduce method variance into our trend data. For accurate trend analysis, it is important to separate the method variance from the variance in scores that is attributable to true reliability performance.
While we conclude that method variance has introduced a degree of change in reliability metrics separate from true changes in reliability performance, we cannot quantify the exact degree of method variance. Therefore, we cannot definitively conclude in every case that the differences we observe in reliability metric trends are due entirely to true changes in reliability performance.
While the data quality issue pertaining to method variance is problematic in the interim, fortunately it is a problem that will resolve itself in the near future. The Commission proposes to move forward with setting new standards at this time and to revisit the potential for setting new benchmarks and standards a few years in the future when we have several years of consistent data collection under the improved reliability monitoring systems being implemented and refined today. Once we reach that point in time the method variance issue will become moot and new benchmarks and standards can be set and used for accurate comparisons of future data.
This process for resolving data quality issues has implications for how the Commission can conduct the task of reliability monitoring during the current transition period. A hypothetical application of the proposed standards to recent performance data reveals that a number of the EDCs would not have reliability performance that falls within the proposed standards. Repeated violations of the two-tiered standard shall result in enforcement actions, including fines and other available remedies.
Waivers
In its reliability review, the Staff Internal Working Group recommended at Recommendation No. V-1 that the Commission should require all EDCs to formally file petitions for waivers of the reliability reporting requirements when they are unable to conform to such requirements and, upon granting the waiver, the Commission should issue an order that specifies the terms of the waiver.
Title 52, Chapter 57, Subchapter N (Electric Reliability Standards) directs EDCs to file various information related to electric service reliability with the Commission. At times, for various reasons, EDCs may deem it necessary to file information in a manner different than that directed in the Pennsylvania Code or to not file the information at all. We take this time to remind EDCs that 52 Pa. Code § 5.43 addresses petitions for issuance, amendment, waiver or repeal of regulations. Requests shall be made in writing, prior to each filing due date of the required information. Thus, if an EDC receives approval of amendment or waiver of a regulation for one reporting period, a subsequent request is necessary for amendment or waiver of additional reporting periods. We specifically reiterate ''The petition shall set forth the purpose of, and the facts claimed to constitute the grounds requiring the regulation, amendment, waiver or repeal.'' See 52 Pa. Code § 5.43. Each request will be approved or denied, in writing, by the Commission.
We direct that all requests for waiver shall be made formally in writing to the Commission. EDCs are required to timely file a petition for waiver of formal reporting requirements, under 52 Pa. Code § 1.91 (relating to applications for waiver of formal requirements). The EDCs are directed to disclose the reasons they are not in full conformity with the reliability regulations in all reliability documents submitted to the Commission.
Formal Requests for Exclusion of Service Interruptions as Major Events-Major Event Definition
The Staff Internal Working Group's Recommendation No. IV-1 states that the Commission should implement a process that will enable EDCs to formally request exclusion of service interruptions for reporting purposes by proving an outage qualifies as a major event. To analyze and set measurable goals for service reliability performance, outage data is partitioned into normal and abnormal days so that only normal event days are used for calculating service reliability indices. The term ''major event'' is used to identify an abnormal event, for which this outage data is to be excluded when calculating service reliability indices. Section 57.192 currently defines a ''major event'' as follows:
(i) Either of the following:

(A) An interruption of electric service resulting from conditions beyond the control of the electric distribution company which affects at least 10% of the customers in an operating area during the course of the event for a duration of 5 minutes each or greater. The event begins when notification of the first interruption is received and ends when service to all customers affected by the event are restored. When one operating area experiences a major event, the major event shall be deemed to extend to all other affected operating areas of the electric distribution company.

(B) An unscheduled interruption of electric service resulting from an action taken by an electric distribution company to maintain the adequacy and security of the electrical system, including emergency load control, emergency switching and energy conservation procedures, as described in Section 57.52 (relating to emergency load control and energy conservation by electric utilities), which affects at least one customer.

(ii) A major event does not include scheduled outages in the normal course of business or an electric distribution company's actions to interrupt customers served under interruptible rate tariffs.

The Staff Internal Working Group identified the following scenarios wherein certain EDCs had inappropriately claimed service interruptions as a major event:
* Combining two separate storm events, of which only one meets the definition of a major event, into one major event.
* Excluding outage data from all operating areas when a major event has occurred in only one operating area.
* Excluding all outage data that took place on any day in which a major event took place, regardless of the actual timeframes in which the major event took place.
Reliability performance will appear to be better than it really is when an EDC excludes more outage data from its reliability calculations than it should. The performance will appear to be better because the number of customers interrupted and/or the customer minutes of the interruption are excluded from the calculations of the performance metrics, thus resulting in lower (better) scores. To avoid the inappropriate exclusion of outage data from any calculated service reliability indices reported to the Commission, the Staff Internal Working Group recommended that a process be established, whereby the EDC could formally notify the Commission that it has recently experienced what it believes to be a major event and request that specific outage data associated with this event be excluded for calculating reliability performance. The Commission could review the request, and if deemed appropriate, grant the EDC permission to exclude the related outage data from its reliability calculations. The Staff Internal Working Group also recommended the following outage data be provided in support of the request:
* The starting and ending times of the outage.
* The main operating areas affected by the major event, including the causes and number of customers affected.
* The neighboring operating areas affected, including the causes and number of customers affected.
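The distortion described above, in which excluding too much outage data yields lower (better) index scores, can be sketched numerically. The outage records and customer counts below are entirely hypothetical, chosen only to illustrate the arithmetic; the SAIFI and SAIDI formulas follow the definitions at 52 Pa. Code § 57.192.

```python
# Hypothetical outage records for an illustrative EDC.
# Each record: (customers_interrupted, customer_minutes_of_interruption, claimed_major_event)
outages = [
    (500, 45_000, False),
    (1_200, 150_000, False),
    (30_000, 9_000_000, True),  # a storm claimed as a major event
]
CUSTOMERS_SERVED = 100_000  # total customers in the service territory (hypothetical)

def saifi_saidi(records):
    """SAIFI = customers interrupted / customers served;
    SAIDI = customer minutes of interruption / customers served."""
    ci = sum(c for c, _, _ in records)
    cmi = sum(m for _, m, _ in records)
    return ci / CUSTOMERS_SERVED, cmi / CUSTOMERS_SERVED

with_storm = saifi_saidi(outages)
without_storm = saifi_saidi([r for r in outages if not r[2]])

# Excluding the storm's data lowers (improves) both indices,
# which is why exclusion requests need Commission review.
print("including storm:", with_storm)
print("excluding storm:", without_storm)
```

In this made-up example the exclusion drops SAIFI from 0.317 to 0.017 and SAIDI from 91.95 to 1.95 minutes, the kind of swing the formal review process is meant to police.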
Upon further review of this issue, the Commission orders the implementation of a formal process for requesting the exclusion of service interruptions for reporting purposes by demonstrating that a service interruption qualifies as a major event as defined by the regulations. The outage data to be provided in support of the request will be as follows:
(1) The approximate number of customers involved in the incident/outage.
(2) The total number of customers served in the service territory.
(3) The geographic areas affected, in terms of the county and local political subdivision.
(4) The reason for the interruption, including weather conditions if applicable.
(5) The number of utility workers and others assigned specifically to the repair work.
(6) The date and time of the first information of a service interruption.
(7) The actual time that service was restored to the last affected customer.
Appendix D is a sample Major Event exclusion request form, which the Commission directs the companies to use when requesting exclusions for major events.
It will not be necessary to provide information about neighboring operating areas affected, since the Staff Internal Working Group is recommending that the definition of a major event be revised to be based on interruption criteria of the entire service territory of an operating company as opposed to individual operating areas defined by each operating company. The Commission will review the formal request and supporting data, and if deemed appropriate, grant the EDC permission to exclude the related outage data from its reliability calculations.
Starting and Ending Times of Major Events
The LB&FC and Staff Internal Working Group identified scenarios wherein certain EDCs had inappropriately claimed service interruptions as a major event by excluding all outage data that took place on any day in which a major event took place, regardless of the actual timeframes in which the major event took place. The current definition of a ''major event'' (as defined in 52 Pa. Code § 57.192) indicates that ''The event begins when notification of the first interruption is received and ends when service to all customers affected by the event are restored.'' We agree that the designated starting and ending time of major events should be enforced according to the regulations.
The Staff Internal Working Group advocates the enforcement of current regulations regarding the designated starting and ending times of major events. Although the Staff Internal Working Group is recommending a revision to the definition of a major event, there are no suggested changes to any provisions regarding the starting and ending times of a major event. The Commission hereby reiterates that there are regulations which define the designated starting and ending times of major events according to 52 Pa. Code § 57.192 and these should be followed by all EDCs. Therefore,
It Is Ordered That:
1. The Commission is issuing, under 52 Pa. Code § 57.194(h), tentative benchmarks and standards for EDCs operating within this Commonwealth as set forth in Appendices A, B and C.
2. The Secretary certify this Order and Appendices A, B, C and D and deposit them with the Legislative Reference Bureau to be published in the Pennsylvania Bulletin for a 60-day comment period. When submitting comments, parties should consider this Tentative Order in conjunction with the Proposed Rulemaking Order at Docket No. L-00030161.
3. A copy of this Order and Appendices A, B, C and D shall be filed at the Proposed Rulemaking Docket L-00030161, Rulemaking Re Amending Electric Service Reliability Regulations at 52 Pa. Code Chapter 57.
4. An EDC shall request, in writing to the Commission's Secretary's Bureau, any waivers of reliability reporting requirements necessary to fulfill its obligations under 52 Pa. Code Chapter 57, Subchapter N (Electric Reliability Standards).
5. Copies of this Order be served upon all EDCs operating in this Commonwealth, the Attorney General's Office of Consumer Advocate and Office of Small Business Advocate.
6. In the event no comments are filed to this Order, it shall become a Final Order by operation of law within 90 days from its date of publication in the Pennsylvania Bulletin.
7. The Commission shall review and consider the EDC's request for waivers and shall issue Secretarial Letters granting or denying said requests.
8. EDCs are directed to use the draft form in Appendix D when requesting the exclusion of service interruptions for reporting purposes by demonstrating that a service interruption qualifies as a major event as defined by the regulations.
JAMES J. MCNULTY,
Secretary
Statement of Commissioner Glen R. Thomas

Public Meeting: June 26, 2003

Rulemaking Re Amending Electric Service Reliability Regulations at 52 Pa. Code Chapter 57; JUN-2003-L-0064*; Docket No. L-00030161

Amended Reliability Benchmarks and Standards for the Electric Distribution Companies; JUN-2003-L-0076*; Docket No. M-00991220
The Commission has a statutory responsibility to ensure the reliability of electric service to the citizens of the Commonwealth. I have often referred to reliability as ''job one'' for the Commission. Today's two-part action takes us a step closer to fulfilling that responsibility.
The tentative benchmarks order would significantly tighten the standards for reliability performance by electric distribution companies (''EDCs''). It would require EDCs to report on a ''systemwide'' basis as opposed to an ''operating area'' basis. The practical result would be that EDCs would not be able to classify as many outages as ''major outages.'' The order also sets forth reliability performance standards that are more closely tied to the benchmarks. Through standards that gradually toughen, EDCs would be nudged ever closer to the established benchmarks. The ultimate goal is the achievement of the benchmarks. Enhanced reporting would give the Commission the information necessary to determine whether enforcement action is necessary. The proposed rulemaking is designed to go hand-in-hand with the benchmarks order and to set forth clear rules for performance and reporting. As with any tentative order and proposed rulemaking, I hope that all interested parties provide the Commission with their input.
I would be remiss if I did not acknowledge the two primary groups of people that have been instrumental on reliability issues. First, I would like to extend appreciation to the Legislative Budget & Finance Committee for bringing many of these reliability issues to the attention of the Commission. Second, the Staff Internal Working Group dedicated many long hours to assembling this superior work product. Their efforts are a testament to the quality of people working here at the Commission.
Glen R. Thomas
Appendix A
Current Benchmarks and Recomputed Benchmarks
Name of EDC | Index | Current Benchmark | Recomputed Benchmark
Allegheny Power | SAIFI | 0.67 | 0.67
Allegheny Power | CAIDI | 178 | 178
Allegheny Power | SAIDI | 116 | 119
Duquesne Light | SAIFI | 1.15 | 1.17
Duquesne Light | CAIDI | 108 | 108
Duquesne Light | SAIDI | 123 | 126
Met-Ed | SAIFI | 0.97 | 1.06
Met-Ed | CAIDI | 117 | 127
Met-Ed | SAIDI | 113 | 135
Penelec | SAIFI | 1.07 | 1.15
Penelec | CAIDI | 104 | 115
Penelec | SAIDI | 108 | 132
Penn Power | SAIFI | 1.01 | 1.02
Penn Power | CAIDI | 93 | 92
Penn Power | SAIDI | 95 | 94
PECO | SAIFI | 1.23 | 1.23
PECO | CAIDI | 112 | 112
PECO | SAIDI | 138 | 138
PPL | SAIFI | 0.88 | 0.98
PPL | CAIDI | 128 | 145
PPL | SAIDI | 113 | 142
UGI | SAIFI | 0.83 | 0.83
UGI | CAIDI | 169 | 169
UGI | SAIDI | 147 | 140
Citizens | SAIFI | 1.29 | 0.20
Citizens | CAIDI | 73 | 105
Citizens | SAIDI | 73 | 21
Pike County | SAIFI | 0.39 | 0.39
Pike County | CAIDI | 178 | 178
Pike County | SAIDI | 66 | 69
Wellsboro | SAIFI | 2.74 | 1.23
Wellsboro | CAIDI | 128 | 124
Wellsboro | SAIDI | 309 | 153
Appendix B
Rolling 3-Year Average Standard
(110% of Benchmark)
Name of EDC (A) | Index (B) | Current Benchmark (C) | Current Standard (D) | Recomputed Benchmark (E) | 2-Std. Dev. Above Recomputed (F) | Worst Performance 1994-1998 (G) | Proposed Rolling 3-Yr Avg. Standard (H) | Rolling 3-Yr Avg. 1999-2001 (I) | Meets 3-Yr Standard (J) | Rolling 3-Yr Avg. 2000-2002 (K) | Meets 3-Yr Standard (L)
Allegheny Power | SAIFI | 0.67 | 1.08 | 0.67 | 1.08 | 0.95 | 0.74 | 0.90 | No | 1.05 | No
Allegheny Power | CAIDI | 178 | 223 | 178 | 224 | 205 | 196 | 206 | No | 208 | No
Allegheny Power | SAIDI | 116 | 159 | 119 | 241 | NM | 144 | 185 | No | 220 | No
Duquesne Light | SAIFI | 1.15 | 1.46 | 1.17 | 1.49 | 1.30 | 1.29 | 1.19 | Yes | 1.20 | Yes
Duquesne Light | CAIDI | 108 | 127 | 108 | 127 | 125 | 119 | 84 | Yes | 86 | Yes
Duquesne Light | SAIDI | 123 | 143 | 126 | 189 | NM | 153 | 101 | Yes | 104 | Yes
Met-Ed | SAIFI | 0.97 | 1.29 | 1.06 | 1.29 | 1.18 | 1.17 | 1.06 | Yes | 1.18 | No
Met-Ed | CAIDI | 117 | 140 | 127 | 155 | 141 | 140 | 168 | No | 164 | No
Met-Ed | SAIDI | 113 | 155 | 135 | 200 | NM | 163 | 175 | No | 188 | No
Penelec | SAIFI | 1.07 | 1.70 | 1.15 | 1.42 | 1.27 | 1.27 | 1.36 | No | 1.58 | No
Penelec | CAIDI | 104 | 134 | 115 | 141 | 129 | 127 | 148 | No | 159 | No
Penelec | SAIDI | 108 | 140 | 132 | 201 | NM | 160 | 209 | No | 254 | No
Penn Power | SAIFI | 1.01 | 1.41 | 1.02 | 1.41 | 1.25 | 1.12 | 1.44 | No | 1.41 | No
Penn Power | CAIDI | 93 | 117 | 92 | 119 | 113.9 | 101 | 108 | No | 122 | No
Penn Power | SAIDI | 95 | 154 | 94 | 168 | NM | 114 | 155 | No | 168 | No
PECO | SAIFI | 1.23 | 1.70 | 1.23 | 1.70 | 1.51 | 1.35 | 1.20 | Yes | 1.21 | Yes
PECO | CAIDI | 112 | 144 | 112 | 143 | 135 | 123 | 122 | Yes | 107 | Yes
PECO | SAIDI | 138 | 196 | 138 | 244 | NM | 167 | 148 | Yes | 120 | Yes
PPL | SAIFI | 0.88 | 1.14 | 0.98 | 1.19 | 1.15 | 1.08 | 0.93 | Yes | 1.04 | Yes
PPL | CAIDI | 128 | 155 | 145 | 190 | 174 | 160 | 132 | Yes | 128 | Yes
PPL | SAIDI | 113 | 155 | 142 | 226 | NM | 172 | 123 | Yes | 135 | Yes
UGI | SAIFI | 0.83 | 1.35 | 0.83 | 1.35 | 1.15 | 0.91 | 0.77 | Yes | 0.88 | Yes
UGI | CAIDI | 169 | 304 | 169 | 305 | 274 | 186 | 110 | Yes | 149 | Yes
UGI | SAIDI | 147 | 331 | 140 | 412 | NM | 170 | 85 | Yes | 142 | Yes
Citizens | SAIFI | 1.29 | 3.10 | 0.20 | 0.38 | 0.35 | 0.22 | 0.19 | Yes | 0.14 | Yes
Citizens | CAIDI | 73 | 156 | 105 | 230 | 174 | 115 | 95 | Yes | 78 | Yes
Citizens | SAIDI | 73 | 123 | 21 | 86 | NM | 25 | 21 | Yes | 12 | Yes
Pike County | SAIFI | 0.39 | 0.58 | 0.39 | 0.58 | 0.47 | 0.43 | 0.46 | No | 0.47 | No
Pike County | CAIDI | 178 | 283 | 178 | 283 | 247 | 196 | 243 | No | 269 | No
Pike County | SAIDI | 66 | 112 | 69 | 165 | NM | 84 | 113 | No | 130 | No
Wellsboro | SAIFI | 2.74 | 6.16 | 1.23 | 1.91 | 1.61 | 1.35 | 2.34 | No | 2.21 | No
Wellsboro | CAIDI | 128 | 195 | 124 | 252 | 217 | 136 | 71 | Yes | 103 | Yes
Wellsboro | SAIDI | 309 | 565 | 153 | 483 | NM | 185 | 163 | Yes | 224 | No

Column C--The current benchmarks established December 16, 1999 at Docket No. M-00991220. It represents the five-year average of the historical performance for years 1994-1998.
Column D--The current standards established December 16, 1999 at Docket No. M-00991220. The standard is plus two standard deviations from the established benchmarks.
Column E--The recomputed benchmarks based on historical performance excluding major event data using the entire service territory criterion.
Column F--Represents what the current standard would be if applying the two-standard deviation methodology to the recomputed benchmarks.
Column G--The worst annual performance experienced during the historical years used to establish pre-restructuring performance.
Column H--The proposed rolling three-year average standard. The threshold is at 110% of the recomputed benchmark.
Columns I--K--Actual rolling average performance for the periods 1999-2001 and 2000-2002, and an indication if the actual performance would have met the proposed standard of a three-year average being no greater than 110% of benchmark.
Note 1--Although SAIDI is the product of SAIFI and CAIDI, the current SAIDI standard in Column D has been based on the statistical calculation of two standard deviations above the current SAIDI benchmark in Column C. All recomputed benchmarks and related values, including the proposed three-year rolling average standard (Columns E, F, and H), for SAIDI represent the product of SAIFI and CAIDI.
Note 2--NM indicates it is not meaningful to compare the proposed SAIDI standards to prior performance since SAIDI is the product of SAIFI and CAIDI, and the proposed SAIFI and CAIDI standards were the basis for the analysis.
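The 110% rolling-average test and the Note 1 convention can be illustrated with Duquesne Light's figures from the table above (recomputed SAIFI benchmark 1.17, proposed CAIDI standard 119); the three annual SAIFI values below are hypothetical, included only to show the comparison.

```python
# Duquesne Light's recomputed SAIFI benchmark and proposed CAIDI standard,
# taken from Appendix B (Columns E and H).
RECOMPUTED_SAIFI_BENCHMARK = 1.17
CAIDI_STANDARD = 119

# Proposed rolling three-year average standard: 110% of the recomputed benchmark.
saifi_standard_exact = 1.10 * RECOMPUTED_SAIFI_BENCHMARK
saifi_standard = round(saifi_standard_exact, 2)  # Column H: 1.29

# Hypothetical annual SAIFI results for three consecutive years.
annual_saifi = [1.22, 1.18, 1.17]
rolling_avg = sum(annual_saifi) / len(annual_saifi)
meets_standard = rolling_avg <= saifi_standard

# Per Note 1, the proposed SAIDI standard is the product of the SAIFI and
# CAIDI standards rather than an independently derived statistic.
saidi_standard = round(saifi_standard_exact * CAIDI_STANDARD)  # Column H: 153

print(saifi_standard, round(rolling_avg, 2), meets_standard, saidi_standard)
```

The product convention matters: computing the SAIDI standard from its own two-standard-deviation statistic (as Column D does) and computing it as SAIFI x CAIDI (as Columns E, F, and H do) give different numbers, which is why Note 2 marks the Column G SAIDI entries "NM."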
Appendix C
Rolling 12-Month Standard for Major EDCs
(120% of Benchmark)
Name of EDC (A) | Index (B) | Current Benchmark (C) | Current Standard (D) | Recomputed Benchmark (E) | 2-Std. Dev. Above Recomputed (F) | Proposed Rolling 12-Month Standard (G)
Allegheny Power | SAIFI | 0.67 | 1.08 | 0.67 | 1.08 | 0.80
Allegheny Power | CAIDI | 178 | 223 | 178 | 224 | 214
Allegheny Power | SAIDI | 116 | 159 | 119 | 241 | 172
Duquesne Light | SAIFI | 1.15 | 1.46 | 1.17 | 1.49 | 1.40
Duquesne Light | CAIDI | 108 | 127 | 108 | 127 | 130
Duquesne Light | SAIDI | 123 | 143 | 126 | 189 | 182
Met-Ed | SAIFI | 0.97 | 1.29 | 1.06 | 1.29 | 1.27
Met-Ed | CAIDI | 117 | 140 | 127 | 155 | 152
Met-Ed | SAIDI | 113 | 155 | 135 | 200 | 194
Penelec | SAIFI | 1.07 | 1.70 | 1.15 | 1.42 | 1.38
Penelec | CAIDI | 104 | 134 | 115 | 141 | 138
Penelec | SAIDI | 108 | 140 | 132 | 201 | 190
Penn Power | SAIFI | 1.01 | 1.41 | 1.02 | 1.41 | 1.22
Penn Power | CAIDI | 93 | 117 | 92 | 119 | 110
Penn Power | SAIDI | 95 | 154 | 94 | 168 | 135
PECO | SAIFI | 1.23 | 1.70 | 1.23 | 1.70 | 1.48
PECO | CAIDI | 112 | 144 | 112 | 143 | 134
PECO | SAIDI | 138 | 196 | 138 | 244 | 198
PPL | SAIFI | 0.88 | 1.14 | 0.98 | 1.19 | 1.18
PPL | CAIDI | 128 | 155 | 145 | 190 | 174
PPL | SAIDI | 113 | 155 | 142 | 226 | 205
Rolling 12-Month Standard for Small EDCs
(135% of Benchmark)
Name of EDC (A) | Index (B) | Current Benchmark (C) | Current Standard (D) | Recomputed Benchmark (E) | 2-Std. Dev. Above Recomputed (F) | Proposed Rolling 12-Month Standard (G)
UGI | SAIFI | 0.83 | 1.35 | 0.83 | 1.35 | 1.12
UGI | CAIDI | 169 | 304 | 169 | 305 | 228
UGI | SAIDI | 147 | 331 | 140 | 412 | 256
Citizens | SAIFI | 1.29 | 3.10 | 0.20 | 0.38 | 0.27
Citizens | CAIDI | 73 | 156 | 105 | 230 | 141
Citizens | SAIDI | 73 | 123 | 21 | 86 | 38
Pike County | SAIFI | 0.39 | 0.58 | 0.39 | 0.58 | 0.53
Pike County | CAIDI | 178 | 283 | 178 | 283 | 240
Pike County | SAIDI | 66 | 112 | 69 | 165 | 127
Wellsboro | SAIFI | 2.74 | 6.16 | 1.23 | 1.91 | 1.66
Wellsboro | CAIDI | 128 | 195 | 124 | 252 | 167
Wellsboro | SAIDI | 309 | 565 | 153 | 483 | 278

Column C--The current benchmarks established December 16, 1999 at Docket No. M-00991220. It represents the five-year average of the historical performance for years 1994-1998.
Column D--The current standards established December 16, 1999 at Docket No. M-00991220. The standard is plus two standard deviations from the established benchmarks.
Column E--The recomputed benchmarks based on historical performance excluding major event data using the entire service territory criterion.
Column F--Represents what the current standard would be if applying the two-standard deviation methodology to the recomputed benchmarks.
Column G--The proposed rolling 12-month standard. The threshold is at 120% of the recomputed benchmark for the major EDCs and 135% of the recomputed benchmarks for the small EDCs.
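The 120%/135% thresholds can be reproduced as a short calculation. The benchmark figures below are Allegheny Power's and UGI's recomputed values from the tables above; deriving the SAIDI standard as the product of the unrounded SAIFI and CAIDI standards is an assumption here, mirroring the product convention stated in Appendix B's Note 1.

```python
def rolling_12_month_standard(saifi_bench, caidi_bench, factor):
    """factor is 1.20 for major EDCs, 1.35 for small EDCs.
    The SAIDI standard is taken as the product of the (unrounded)
    SAIFI and CAIDI standards, per Appendix B's Note 1."""
    saifi_std = factor * saifi_bench
    caidi_std = factor * caidi_bench
    saidi_std = saifi_std * caidi_std
    return round(saifi_std, 2), round(caidi_std), round(saidi_std)

# Allegheny Power (major EDC): recomputed benchmarks SAIFI 0.67, CAIDI 178.
print(rolling_12_month_standard(0.67, 178, 1.20))  # matches Column G: 0.80 / 214 / 172

# UGI (small EDC): recomputed benchmarks SAIFI 0.83, CAIDI 169.
print(rolling_12_month_standard(0.83, 169, 1.35))  # matches Column G: 1.12 / 228 / 256
```

Under this reading, both Column G rows are recovered exactly, which suggests the product convention carries over from Appendix B to Appendix C.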
Appendix D
REQUEST FOR EXCLUSION OF MAJOR OUTAGE FOR RELIABILITY REPORTING PURPOSES TO
PENNSYLVANIA PUBLIC UTILITY COMMISSION
P. O. BOX 3265
HARRISBURG, PA 17105-3265

Reports require an original and one copy to be filed with the Secretary's Bureau.
Information Required:
1. Requesting Utility: __________
Address: __________
__________
2. Name and title of person making request:
___________________________ ___________________________
(Name) (Title)
3. Telephone number: ___________________________
(Telephone Number)
4. Interruption or Outage:
(a) Number of customers affected:
Total number of customers in service territory: __________

(b) Number of troubled locations in each geographic area affected listed by county and local political subdivision:
__________
__________
__________
__________

(c) Reason for interruption or outage, including weather data where applicable:
__________
__________
__________
__________

(d) The number of utility workers and others assigned specifically to the repair work:

__________

(e) The date and time of the first notification of a service interruption: __________
(f) The actual time that service was restored to the last affected customer: __________
Remarks: __________
__________
__________
__________
__________
__________
__________
__________
__________
__________
__________
__________
__________
______

1 CAIDI is Customer Average Interruption Duration Index. It is the average duration of sustained interruptions for those customers who experience interruptions during the analysis period. CAIDI represents the average time required to restore service to the average customer per sustained interruption. It is determined by dividing the sum of all sustained customer interruption durations, in minutes, by the total number of interrupted customers. SAIFI is System Average Interruption Frequency Index. SAIFI measures the average frequency of sustained interruptions per customer occurring during the analysis period. SAIDI is System Average Interruption Duration Index. SAIDI measures the average duration of sustained customer interruptions per customer occurring during the analysis period. MAIFI (Momentary Average Interruption Frequency Index) measures the average frequency of momentary interruptions per customer occurring during the analysis period. These indices are accepted national reliability performance indices as adopted by the Institute of Electrical and Electronics Engineers, Inc. (IEEE), and are defined with formulas at 52 Pa. Code § 57.192.
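As a sketch of these definitions, the fragment below computes each index from a handful of hypothetical interruption records. Treating interruptions shorter than 5 minutes as momentary follows the 5-minute duration used in the major event definition; the authoritative formulas remain those at 52 Pa. Code § 57.192.

```python
CUSTOMERS_SERVED = 50_000  # hypothetical service territory size

# Each interruption: (customers_interrupted, duration_minutes)
interruptions = [(2_000, 90), (800, 240), (5_000, 3)]

# Interruptions of 5 minutes or more count as sustained; shorter ones as momentary.
sustained = [(c, d) for c, d in interruptions if d >= 5]
momentary = [(c, d) for c, d in interruptions if d < 5]

ci = sum(c for c, _ in sustained)        # total interrupted customers
cmi = sum(c * d for c, d in sustained)   # total customer minutes of interruption

caidi = cmi / ci                 # avg minutes to restore the average interrupted customer
saifi = ci / CUSTOMERS_SERVED    # avg sustained interruptions per customer served
saidi = cmi / CUSTOMERS_SERVED   # avg sustained interruption minutes per customer served
maifi = sum(c for c, _ in momentary) / CUSTOMERS_SERVED

# SAIDI equals SAIFI times CAIDI by construction.
assert abs(saidi - saifi * caidi) < 1e-9

print(round(caidi, 1), saifi, saidi, maifi)
```

Note that the momentary interruption contributes to MAIFI but to none of the other three indices, which is why sustained and momentary events are separated before any sums are taken.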
2 PPL volunteered to perform a series of analyses for Commission staff that included how much variability was introduced by these two calculation methods. PPL computed its metrics using both methods and provided the results to Commission staff. Staff concluded that the two methods can yield significantly different results.
3 See definition of large and small companies on page 10.
4 This chart lists GPU instead of the breakdown of Met-Ed and Penelec.
5 When referring to the establishment of new performance standards based on a percentage of the benchmark, it is important to note that this is the recomputed benchmark based on excluding major event data using the entire service territory criterion.
6 Large EDCs currently include: Allegheny Power, Duquesne Light, Met-Ed, Penelec, Penn Power, PECO and PPL.
7 Small EDCs include: UGI, Citizens', Pike County and Wellsboro.
8 The large EDCs have 100,000 or more customers; they currently include: Allegheny Power, Duquesne Light, Met-Ed, Penelec, Penn Power, PECO and PPL.
[Pa.B. Doc. No. 03-1378. Filed for public inspection July 11, 2003, 9:00 a.m.]