PA Bulletin, Doc. No. 04-931

NOTICES

PENNSYLVANIA PUBLIC UTILITY COMMISSION

Amended Reliability Benchmarks and Standards for Electric Distribution Companies

[34 Pa.B. 2764]

   Public Meeting held
May 7, 2004

Commissioners Present:  Terrance J. Fitzpatrick, Chairperson; Robert K. Bloom, Vice Chairperson; Glen R. Thomas; Kim Pizzingrilli; Wendell F. Holland

Amended Reliability Benchmarks and Standards for the Electric Distribution Companies; Doc. No. M-00991220

Order

By the Commission:

   Today, in conjunction with our Final Rulemaking Order at Docket No. L-00030161, we tighten our standards for reliability performance in the electric distribution industry and reiterate the Commission's regulations governing when an interruption qualifies as a major event, as well as the process for filing formal requests for waivers of the obligation to submit reliability data for any reporting period.

Procedural History

   The Electricity Generation Customer Choice and Competition Act (Act), December 3, 1996, P. L. 802, No. 138 § 4, became effective January 1, 1997. The Act amends 66 Pa.C.S. by adding Chapter 28 to establish standards and procedures to create direct access by retail customers to the competitive market for the generation of electricity, while maintaining the safety and reliability of the electric system. Specifically, the Commission was given a legislative mandate to ensure that levels of reliability that were present prior to the restructuring of the electric utility industry would continue in the new competitive markets. 66 Pa.C.S. § 2802(12).

   In response to this legislative mandate, the Commission adopted a Final Rulemaking Order on April 23, 1998, at Docket No. L-00970120, setting forth various reporting requirements designed to ensure the continuing safety, adequacy and reliability of the transmission and distribution of electricity in the Commonwealth. See 52 Pa. Code §§ 57.191--57.197. The Final Rulemaking Order acknowledged that the Commission could reevaluate its monitoring efforts at a later time as deemed appropriate.

   On December 16, 1999, the Commission entered a Final Order at M-00991220, which established reliability benchmarks and standards[1] for the electric distribution companies (EDC) in accordance with 52 Pa. Code § 57.194(h). The Commission's regulations for Electric Reliability Standards at 52 Pa. Code § 57.194(h)(1) state that:

''In cooperation with an electric distribution company and other affected parties, the Commission will, from time to time, establish numerical values for each reliability index or other measure of reliability performance that identify the benchmark performance of an electric distribution company, and performance standards.''

   In a series of orders at Docket No. M-00991220, the Commission established reliability Benchmarks and Standards regarding: (1) Customer Average Interruption Duration Index (CAIDI); (2) System Average Interruption Frequency Index (SAIFI); (3) System Average Interruption Duration Index (SAIDI); and (4) Momentary Average Interruption Frequency Index (MAIFI).[2] The benchmark for each performance index is the average of the historical annual averages of the index for the 5-year period from 1994-1998 and is company specific. The standard is two standard deviations from the benchmark. These benchmarks and standards have remained in effect since their issuance in December 1999.
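
   To make the original methodology concrete, the following minimal sketch (in Python, with hypothetical annual SAIFI values; the Order does not specify whether a sample or population standard deviation was used, so the sample statistic is assumed here) computes a benchmark and the former two-standard-deviation standard:

```python
# Minimal sketch of the original 1999 methodology using hypothetical
# annual SAIFI values for 1994-1998. The benchmark is the mean of the
# five annual values; the original standard sat two standard deviations
# above (worse than) the benchmark. Sample standard deviation assumed.
from statistics import mean, stdev

annual_saifi = [1.10, 1.25, 0.95, 1.30, 1.15]  # hypothetical 1994-1998 values

benchmark = mean(annual_saifi)
old_standard = benchmark + 2 * stdev(annual_saifi)

print(f"benchmark = {benchmark:.3f}")              # 1.150
print(f"two-sigma standard = {old_standard:.3f}")  # about 1.424
```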

   In June 2002, the Legislative Budget and Finance Committee (LB&FC) issued a report entitled Assessing the Reliability of Pennsylvania's Electric Transmission and Distribution Systems. The report, in part, concluded that the two-standard deviation minimum performance standard is too loose and should be tightened as it does not assure that reliability performance will be maintained at levels experienced prior to the Act, December 3, 1996, P. L. 802, No. 138 § 4, effective January 1, 1997.

   A Staff Internal Working Group on Electric Service Reliability (Staff Internal Working Group) prepared a report entitled Review of the Commission's Monitoring Process For Electric Distribution Service Reliability, dated July 18, 2002, which reviewed the Commission's monitoring process for electric distribution service reliability and commented on the recommendations from the LB&FC report. In its report, the Staff Internal Working Group recommended, in part, that ''the Commission should develop minimum performance standards that achieve the Commission's policy objective (See Recommendation III-1, p. 7).'' A subsequent Commission Order on August 29, 2002, at Docket No. D-02SPS021 directed:

''That the Commission staff shall undertake the preparation of such orders, policy statements, and proposed rulemakings as may be necessary to implement the recommendations contained within the Staff Internal Working Group . . . Report (p. 4).''

   The Staff Internal Working Group was assigned this task and conducted field visits to EDCs to identify the current capabilities of each EDC for measuring and reporting reliability performance. These field visits began in October 2002 and continued through March 2003.

   On June 27, 2003, the Commission entered a Tentative Order at M-00991220, which recomputed the reliability benchmarks and standards for the EDCs. The Tentative Order was published for comments in the Pennsylvania Bulletin. Comments were filed by the Attorney General's Office of Consumer Advocate (OCA), AFL-CIO Utility Caucus (AFL-CIO), Energy Association of Pennsylvania (EAP), Metropolitan Edison Company (Met-Ed), Pennsylvania Electric Company (Penelec), Pennsylvania Power Company (Penn Power), Citizens' Electric Company (Citizens'), Wellsboro Electric Company (Wellsboro), Pike County Light & Power Company (Pike County), PPL Electric Utilities Corporation (PPL), UGI Utilities, Inc.--Electric Division (UGI), Allegheny Power and PECO Energy Company (PECO). Reply Comments were filed by EAP, Met-Ed, Penelec, Penn Power, the AFL-CIO and the OCA.

Discussion

   The comments raised issues regarding several topics. The following is a short synopsis of each topic, the parties' positions and our disposition of each issue.

1.  Recalculation of Reliability Benchmarks

A.  System-wide Major Event Exclusion Standardization

   In our Tentative Order, we discussed two sources of variability in the computation of the permanent benchmarks to date that made it difficult to set new performance standards equitably across the EDCs.

   The first source of variability was that some EDCs used one, system-wide operating area to compute their reliability metrics, while other EDCs subdivided their service territories and used multiple operating areas to compute their metrics. The number, size and composition of operating areas used for metric computations introduced variability into the criterion used to exclude major events from the reliability metrics reported to the Commission. An EDC that subdivided its territory into several small geographic operating areas could exclude major events from its metric calculations based on a criterion of an interruption affecting 10% of the customers in an operating area; whereas another EDC, employing only one service territory-wide operating area, had to meet a much higher criterion of an interruption affecting 10% of the total EDC customer base. The proposed solution to this benchmark variability problem was to develop one uniform calculation method using system-wide performance (for the entire service territory) for computing and reporting reliability metrics to the Commission.

   We proposed that EDCs should compute and report their reliability metrics to the Commission considering the entire service territory as one operating area, applying the major event exclusion only to an interruption that affects 10% of the entire customer base for a duration of 5 minutes or longer.

   To develop proposed standards based on the uniform definition of an operating area, Commission staff requested those EDCs that had developed their metrics using more than one operating area to recalculate their metrics for the 1994-2002 period using the entire service territory criterion. The data recalculations were used by Commission staff to recompute the current benchmarks using a uniform methodology across the EDCs. In the Tentative Order, the Commission emphasized that the recomputed benchmarks do not represent a lowering or raising of the historical benchmarks. All of the EDCs were asked to apply a uniform exclusion criterion to their original data. The only major events excluded from the recomputed benchmarks were unscheduled interruptions that affected 10% or more of the customers in the entire service territory for a duration of 5 minutes or longer. For EDCs that had previously excluded major events based on the multiple operating area criterion, the recomputed benchmark values may be higher than the original benchmark values because previously excluded outage data may now be included in the metric values. However, we noted in our Tentative Order that the recomputed benchmarks should be viewed as representing the actual reliability performance during the historical period, as calculated using a uniform methodology.
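
   A minimal sketch of how the uniform exclusion criterion could be applied is shown below; the data model and field names are illustrative assumptions, not the Commission's reporting format:

```python
# Illustrative sketch of the uniform, system-wide major event exclusion:
# an unscheduled interruption is excludable only if it affects 10% or more
# of the entire customer base for 5 minutes or longer. The Interruption
# record and its fields are hypothetical.
from dataclasses import dataclass

@dataclass
class Interruption:
    customers_affected: int
    duration_minutes: float
    scheduled: bool

def is_excludable_major_event(event: Interruption, total_customers: int) -> bool:
    return (
        not event.scheduled
        and event.customers_affected >= 0.10 * total_customers
        and event.duration_minutes >= 5.0
    )

# The test is always against the entire service territory, so an outage
# that covers 10% of one small operating area no longer qualifies:
print(is_excludable_major_event(Interruption(120_000, 45.0, False), 1_000_000))  # True
print(is_excludable_major_event(Interruption(30_000, 45.0, False), 1_000_000))   # False
```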

Positions of the Parties

   PPL filed comments in support of the Commission's proposed recalculation of the historical benchmarks using the single operating area data, as it will establish a uniform calculation method for computing and reporting reliability metrics. However, the OCA strongly urged the Commission to retain the existing historic performance benchmarks rather than lowering expectations for certain EDCs through a recomputation of historic data. The OCA agrees that, on a prospective basis, the Commission should ensure that the major event criteria are applied uniformly by the EDCs. The OCA also noted a concern identified in the LB&FC Report that wide variations exist among EDCs in both data collection and the application of Commission regulations to the data. The OCA noted that the LB&FC recommended that the Commission clarify when an EDC can exclude data for major events from the underlying data used to calculate the metrics.

   In response to the OCA's comments, EAP countered that the recomputed benchmarks do not change the historical service provided to the relevant EDC customers. EAP commented that the Commission has not lowered the benchmarks going forward but has sought to ensure compatibility. EAP notes that it is critical for accurate comparisons that the same method be employed for historical and future evaluations of reliability.

   The OCA submits that operating area information reflects how an EDC manages its distribution system and deploys its resources, and that worst performing circuit reports as required under the companion Final Rulemaking Order at L-00030161 are not a suitable proxy for operating area information. The OCA also recognizes that the Staff Report noted that some EDCs defined operating areas differently for internal purposes than for Commission reporting purposes. As a result, the OCA suggests that EDCs be required to continue reporting operating area reliability metrics using operating areas consistent with those used for internal operations and monitoring.

Disposition

   The Commission strongly emphasizes that recalculating the historical benchmarks so that all EDCs are using standard criteria for excluding major events is not lowering the bar for future reliability performance. The recalculation is consistent with the recommendation of the LB&FC (as noted in the comments of the OCA) that the Commission clarify when an EDC can exclude major events from the data used to calculate the metrics. The benchmark recalculation achieves three important objectives for the Commission. The first objective is uniformity of metric calculations. The second objective is that the Commission, in performing its reliability monitoring role, can view the metric values on the same numerical scale. The third Commission objective is captured in the reply comments of EAP who points out that it is critical to use identical calculations for historical benchmarks (reference points) and future reliability performance measures. We would add that to allow the use of different calculation measures for benchmarks, but to use a standard calculation method for measuring reliability performance on a going forward basis (as suggested by the OCA), would render erroneous results so that the Commission would conclude that some EDCs' performance relative to their benchmarks had improved or deteriorated when in fact that was not the case. Therefore, we will retain the recalculated benchmarks and require EDCs to use the standard methodology that employs the system-wide definition of an operating area for the exclusion of major events from reliability metric calculations.

   It should be realized that if EDCs are required to report by the operating areas they use for internal operations, all previous years' operating area reliability metrics would need to be recomputed each time they reconfigure their internal operations. This would make it more difficult to find pocket areas where reliability is a concern, since the companies could continually reconfigure operating areas to cover areas of concern. The proposed circuit analysis eliminates this potential problem and allows for identifying problem areas that are in need of remedial action. Therefore, we will adopt the initial Commission position, whereby companies report reliability data using a system-wide operating area and a listing of worst performing circuits. Our position is further addressed in our Final Rulemaking Order at L-00030161.

B.  Standardization of Individual EDC Calculations

   A second source of variability discussed in our Tentative Order that made it difficult to equitably set new performance standards for all the EDCs pertained to two EDCs not excluding any data on major events and another EDC using a major event definition different from that contained in Commission regulations and used by all the other EDCs. In the first instance, Wellsboro and Citizens' did not exclude major events from their metric calculations for 1994-2002, although the regulations permit these exclusions.[3] This was in contrast to the calculations of all the other EDCs and, therefore, was a source of variability for only Citizens' and Wellsboro. In the second instance, Penn Power used the FirstEnergy definition of a major event, which differs from the definition used by the Commission and can yield a different result.

   Commission staff requested that the metrics for Citizens', Wellsboro and Penn Power be recomputed so that they would be calculated using the same uniform methodology that other EDCs used.

Positions of the Parties

   The OCA noted in its comments that for Citizens' and Wellsboro, the recomputed historical benchmarks suggest that a much higher reliability was achieved from 1994-1998 than was previously calculated. The OCA is unclear as to why there was a change in these two EDCs' benchmarks with the recalculation since these small EDCs always reported on a system-wide basis rather than a multiple operating area basis. EAP's reply comments noted that Citizens' and Wellsboro have recomputed their benchmarks to exclude major events consistent with the other EDCs to ensure comparability. In its comments, FirstEnergy did not specifically address the recomputation of Penn Power's benchmarks using the Commission's definition of a major event rather than FirstEnergy's definition, which yields a different result. However, FirstEnergy commented that, as a broad conceptual matter and over the long term, it agrees with and supports the Commission's efforts to standardize among the EDCs the outage data maintained and submitted to the Commission.

Disposition

   The reply comments of EAP correctly capture the reason why the recalculated benchmarks of Citizens' and Wellsboro reflect a higher level of reliability during the benchmark period than was previously reported. As we noted in our Tentative Order, Citizens' did not exclude any major events from its metric calculations for 1994-2002 although the Commission's regulations permit these exclusions. To place Citizens' and Wellsboro's metric values on the same numerical scale as the metrics from the other EDCs, the Commission requested that Citizens' and Wellsboro recalculate their benchmarks using the allowed exclusions of major events, thereby lowering their benchmark values from those reported previously. We will retain the recomputed benchmarks for Citizens' and Wellsboro. We will also retain the recomputed benchmarks for Penn Power as advanced in the Tentative Order, which used the Commission's definition of a major event, so that Penn Power's benchmarks are calculated using the same methodology that the other EDCs use. We will interpret FirstEnergy's comments to be consistent with this disposition.

   Appendix A contains a table of the benchmarks as originally calculated and the recomputed benchmarks based on: (1) excluding major event data using the entire service territory criterion (changes for Allegheny Power, Duquesne Light, Met-Ed, Penelec and PPL); (2) excluding major events for the first time (Citizens' and Wellsboro); (3) using the Commission's definition of a major event (Penn Power, as noted in our Tentative Order); and (4) correcting SAIDI calculations to reflect SAIDI as the product of SAIFI multiplied by CAIDI (UGI and Pike County). We will adopt the recomputed benchmarks contained in Appendix A and also add remarks to clarify why the prior benchmarks were changed as reflected in the recomputed benchmarks.
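
   The SAIDI correction applied to UGI and Pike County follows directly from the definitions of the indices; for example, under hypothetical values:

```python
# SAIDI is the product of SAIFI and CAIDI, so the corrected values can be
# cross-checked directly (values below are hypothetical):
saifi = 1.20   # sustained interruptions per customer served
caidi = 95.0   # average restoration minutes per interrupted customer

saidi = saifi * caidi  # interruption minutes per customer served
print(f"SAIDI = {saidi:.1f} minutes")  # SAIDI = 114.0 minutes
```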

2.  Reliability Data Quality Issues

   In our Tentative Order, we discussed two data quality issues that may affect the Commission's electric reliability monitoring efforts. The first issue pertained to Allegheny Power, which reported having several months of missing data for its 1997 and 1998 SAIFI calculations. We noted that because 1997 and 1998 data was used to calculate the historical benchmarks, Allegheny Power's SAIFI and SAIDI benchmarks were set artificially low. Thus, comparisons of Allegheny Power's post-restructuring SAIFI and SAIDI metric values against those benchmarks would be inherently unfavorable to the company.

   The second data quality issue we discussed in our Tentative Order pertained to EDCs that had implemented automated reliability Outage Management Systems (OMS) which had the potential to improve the accuracy of reliability monitoring information. We noted that the changes in data gathering methods had implications for comparing historical reliability performance to current performance and introduced a degree of uncertainty into our ability to interpret reliability trend data. Our discussion in the Tentative Order pointed out the importance of separating out the method variance (due to differences in measurement capability) from the variance in reliability scores that is attributable to true changes in reliability. We concluded that we could not quantify the exact degree of method variance resulting from OMS implementation.

Positions of the Parties

   As to the first data quality issue, Allegheny Power commented that the specific benchmarks proposed for Allegheny Power are unrealistic and not useful for future comparisons. Allegheny Power claims that their SAIFI benchmark is skewed by a period of incomplete data and that their SAIFI benchmark is unrealistically low in comparison to other large EDCs. As evidence, Allegheny Power comments that their SAIFI performance for 2000-2002 matches the best performance of all large EDCs for the same period. Accordingly, Allegheny Power requests an adjustment of their benchmarks.

   Comments were filed on behalf of Met-Ed, Penelec and Penn Power (collectively, FE Companies) that pertained to data quality issues about the implementation of OMS and the resulting implications for the validity of the proposed benchmarks. The FE Companies noted that, with the exception of Allegheny Power, Met-Ed and Penelec are in the unique position of having installed and implemented new automated processes for collecting outage information after the 1994-1998 base period used by the Commission in setting the reliability benchmarks. The FE Companies comment that the Tentative Order recalculates their benchmarks without any consideration of the improvement in their methods for collecting reliability data since electric restructuring. Quoting from their 2002 Reliability Report to the Commission, the FE Companies state that although statistics for several operating areas are elevated, there has been no real change in reliability performance. The FE Companies believe that the elevated statistics have been the result of the implementation of the new automated systems.

   The comments of the FE Companies also note that the benchmarks and standards proposed in the Tentative Order for Penn Power do not give any consideration to the inaccuracy of some of its historic period reliability data. The comments note that in the early 1990s Penn Power relied in part on estimates of the number of customers affected by power outages. However, with more recent electronic mapping efforts, Penn Power now has substantially more accurate outage statistics that are not directly comparable to the historic benchmark as proposed by the Commission.

   In place of the benchmarks proposed by the Commission in the Tentative Order, the FE Companies request that the Commission utilize revised benchmarks and standards proposed by the FE Companies which are based on reported performance during the 1998-2002 period. The benchmarks and standards proposed by the FE Companies in Exhibit 1 of their comments are significantly higher (allowing for worse reliability performance) than those we proposed. In support of the higher benchmarks and standards proposed by the FE Companies, they cite the Commission's 2002 Customer Service Performance Report as evidence of customers' positive perception of reliability performance.

   The AFL-CIO and the OCA filed reply comments in response to some of the points noted in the comments of Allegheny Power and the FE Companies pertaining to data quality issues. The AFL-CIO comments that, in theory, it is possible that the mere fact of changing data collection methods could have some effect on the reliability statistics reported by the EDCs. However, the AFL-CIO notes that the FE Companies have not shown that this has occurred. Similarly, the OCA comments that no evidence has been presented by the EDCs that shows or even supports the claims that the historic data is not representative of pre-restructuring performance or that the installation of new OMS is the sole cause of the apparent deterioration in reliability. The OCA notes that the claim that the new OMS are causing the appearance of deterioration in reliability has not been subjected to evaluation or review. The OCA commented that the LB&FC Report made the point that careful analyses of these claims are needed before any adjustments should be considered. The OCA also comments that the Commission should not entertain requests to change individual EDC benchmarks and standards through the Tentative Order. In the view of the OCA, these requests are more properly made as a separate petition where the merits and all underlying facts can be thoroughly examined on the record.

   The AFL-CIO and the OCA also offered reply comments addressing the FE Companies' citation of the Commission's 2002 Customer Service Performance Report findings to note customer satisfaction with post-restructuring reliability and the need to adopt the new benchmarks and standards proposed by the FE Companies. The AFL-CIO notes that the Commission's Report evaluates EDC call center operations and has nothing to do with the reliability of distribution service. The OCA comments that the use of customer service data is not an adequate substitute for objective standards for reliability.

Disposition

   First we will address the requests by Allegheny Power and the FE Companies to adjust the Commission's proposed benchmarks and standards or to substitute benchmarks and standards proposed by the FE Companies for those proposed by the Commission in our Tentative Order. We will adopt the position advanced by the OCA that the Commission should not entertain requests to change individual EDC benchmarks and standards through this Tentative/Final Order process. We note that this is a generic proceeding and does not have provisions for the more intensive presentation and review of evidence that the AFL-CIO and the OCA note should accompany a request for changes in benchmarks and standards.

   The data that Allegheny Power and the FE Companies are now claiming is inaccurate was the same data (covering the period of 1994-1998) used to establish the original benchmarks in 1999, and no EDC appealed the Commission's December 16, 1999, Order at M-00991220 which established those benchmarks. The December 16, 1999, Order stemmed from a consensus proceeding as opposed to an evidentiary hearing and at that time the companies represented that those benchmarks were the averages of their indices over a 5-year, precompetition period (from 1994-1998). Based upon the companies' representations, the December 16, 1999, Order was entered establishing the benchmarks and standards. No one appealed said Order and we believe the EDCs cannot now challenge the original benchmarks. However, we will allow the EDCs to challenge the recomputed benchmarks if they have new evidence, such as the impact of OMS implementation on their reliability indices. Utility-specific on-the-record proceedings will afford the parties the opportunity to examine all relevant issues and provide the Commission with a complete factual record upon which to base its decision. The proceedings must be initiated within 30 days of the date of entry of this Order and the burden of proof is to be on the Petitioners. The petition must be served upon all parties of interest including the Pennsylvania AFL-CIO (Utility Caucus), the OCA and the Office of Small Business Advocate (OSBA).

   In the case of the FE Companies' requests for new benchmarks and standards, we believe that a thorough examination of factual data by all interested parties is necessary before any potential revisions to the benchmarks are made. We note that as recently as May 2001, Met-Ed and Penelec reliability metrics were incorporated into a Service Quality Index that was part of the Joint Application for Approval of the Merger of GPU, Inc. with FirstEnergy Corporation approved by the Commission in an Order dated May 24, 2001. It is not clear why the FE Companies' claims regarding the inaccuracy of the metrics were not an issue at that time, but are an issue now.

   Further, we note that other Pennsylvania EDCs have implemented OMS and taken measures to increase connectivity but have not made similar claims of adverse effects on reliability indices. Absent an on-the-record proceeding which can determine the facts that are specific to the FE Companies, it does not appear to be fair to make specific adjustments to the FE Companies' benchmarks and standards that will not also be made to other EDCs' benchmarks and standards. The FE Companies should have expected in advance that implementing OMS had the potential to affect the measurement of reliability performance and thus should have taken steps to conduct parallel measurements of their old and new data gathering systems.

   With parallel measurement and analysis, the FE Companies could then determine the degree of method variance, if any, and have factual information to present to the Commission to support a request for a change in benchmarks and standards. Also, having factual information about the degree of method variance would appear to be necessary to make meaningful comparisons of current performance to past performance so that EDC management could determine if there was any true change in reliability performance over time aside from any change that may have occurred from the implementation of OMS. If parallel measurement and analyses were conducted, this information should be presented in an on-the-record proceeding before the Commission.

   We also want to address the comments of the FE Companies that cite the Commission's 2002 Customer Service Performance Report as evidence of customer satisfaction with service reliability. As noted by the AFL-CIO, the customer survey reported on in the Commission Report does not inquire about satisfaction with the number of service interruptions or with service restoration times. The focus of the survey data is on call center performance such as access, courteousness and knowledge of the call center representatives. We do not view this data as an indication that customers are satisfied with the aspects of Met-Ed's and Penelec's service reliability measured by the benchmarks and standards.

   Going forward, the Commission wishes to stress the importance of EDCs conducting parallel measurements and analyses when implementing changes in reliability monitoring and data gathering methods so that the Commission is provided with accurate information about true reliability performance. Parallel measurement efforts also appear necessary to enable EDC management to fulfill their obligations to effectively maintain good reliability performance.

   Finally, we want to point out that the Commission is providing some degree of latitude to all EDCs by setting the 3-year rolling standard at 110% of the benchmark versus 100% of the benchmark, as discussed in greater detail later in this Order. This latitude should sufficiently account for any typical degree of method variance that may have occurred in the measurement of the benchmarks and performance in the post-restructuring period. Absent a determination from the Commission based on an on-the-record proceeding, the Commission will not permit revisions to individual EDCs' benchmarks and standards to allow for a greater degree of latitude because of reliability measurement method variance.

3.  Revised Performance Standards

   In our Tentative Order, we noted two shortcomings in our existing minimum reliability standards that were established in 1999. The first shortcoming was statistical in nature and related to the establishment of standards that were two standard deviations above the benchmarks. This method of establishing standards yielded a result that enabled an EDC to perform worse on a performance index (such as CAIDI or SAIFI) after 1998 than any year during the 1994-1998 benchmark period and still be within the standard. This wide band of acceptable performance within the standard led to the second shortcoming, an inconsistency with the Commission's policy objective of setting standards for reliability that maintain the same level of reliability after restructuring as was experienced prior to restructuring. We also noted that the LB&FC arrived at a similar conclusion about an overly wide band of acceptable performance with the current performance standards. In our Tentative Order, we showed figures for the major EDCs revealing that our two standard deviation approach to setting standards allowed for average SAIFI values to be 40% greater than the historical benchmark and average CAIDI values to be 24% above the benchmark, but still within standards.

   Based on the shortcomings previously identified, the Commission proposed to set new reliability standards that were more closely tied to the EDCs' historic benchmark performance but also allowed for some degree of variability from year to year. The Commission considered but declined to use the standard deviation approach for setting the proposed new performance standards. A standard deviation measures the degree of variance from an average and can be useful for the establishment of variability standards. However, because the benchmark data currently available consists of only five data points for each reliability index per EDC (the annual average indices for the years 1994, 1995, 1996, 1997 and 1998), we were not confident that the standard deviation statistic would yield a valid result. The standard deviation is typically used to summarize the variability in a large data set. We did not believe that this underlying assumption for the statistic was met with only five data points per EDC for each metric.
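
   A toy illustration (with hypothetical annual values) of this concern: a single unusual year among only five observations can swing the sample standard deviation, and hence any two-standard-deviation band, dramatically:

```python
# Toy illustration with hypothetical data: two 5-point series that differ
# in a single year produce very different two-standard-deviation bands,
# which is why only five data points make the statistic unreliable here.
from statistics import mean, stdev

typical_years = [1.10, 1.15, 1.20, 1.12, 1.18]
one_storm_year = [1.10, 1.15, 1.20, 1.12, 1.60]  # same series, one outlier

for series in (typical_years, one_storm_year):
    m, s = mean(series), stdev(series)
    print(f"mean={m:.3f}  stdev={s:.3f}  two-sigma band={m + 2 * s:.3f}")
# mean=1.150  stdev=0.041  two-sigma band=1.232
# mean=1.234  stdev=0.208  two-sigma band=1.650
```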

   Instead of using the standard deviation approach for setting an acceptable band of performance, the Commission proposed thresholds using a percentage bandwidth above the benchmark for a shorter term standard and another percentage for a longer term standard.[4] The proposed longer term standards were generic in the sense that the proposed percentages above each EDC's benchmarks were the same for all EDCs. However, the generic percentage standard was applied to each EDC's benchmarks, which were based on individual EDC performance from 1994-1998. The proposed longer term standard was that the rolling 3-year average for system-wide reliability indices should be no higher than 10% above the historic benchmark. The proposed rolling 3-year standard was set at 10% above the benchmark to ensure that the rolling 3-year standard is not worse than the worst annual performance experienced during the years prior to restructuring (1994-1998). Rolling 3-year performance was proposed to be measured against the standard at the end of each calendar year.

   The Commission also proposed a short-term standard to monitor performance on a more frequent basis. For the large EDCs[5] (companies with 100,000 or more customers) the Commission proposed that the rolling 12-month averages for the system-wide indices be within 20% of the benchmark. For small EDCs[6] (companies with less than 100,000 customers), the Commission proposed that the rolling 12-month averages for the system-wide indices should be within 35% of the historical benchmarks. A greater degree of short-term latitude was proposed for the small EDCs in the rolling 12-month standard because small EDCs have fewer customers and fewer circuits than the large EDCs, potentially allowing a single event to have a more significant impact on the reliability performance of the small EDCs' distribution systems.
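
   A minimal sketch of the long-term and short-term standards just described follows; the 110%, 120% and 135% thresholds are those proposed in this Order, while the helper function itself is only an illustrative assumption about how they might be computed:

```python
# Sketch of the proposed percentage-based standards. The 110%, 120% and
# 135% thresholds come from this Order; the helper itself is only an
# assumption about how an EDC might compute its compliance ceilings.
def standards(benchmark: float, large_edc: bool) -> dict:
    short_term = 1.20 if large_edc else 1.35  # 12-month rolling, checked quarterly
    long_term = 1.10                          # 3-year rolling, checked at year end
    return {
        "rolling_12_month_max": round(short_term * benchmark, 3),
        "rolling_3_year_max": round(long_term * benchmark, 3),
    }

# Hypothetical SAIFI benchmark of 1.15 for a large EDC:
print(standards(1.15, large_edc=True))
# {'rolling_12_month_max': 1.38, 'rolling_3_year_max': 1.265}
```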

   The distinction between small EDCs and large EDCs is illustrated by the SAIFI calculation. SAIFI is a ratio of customers interrupted divided by customers served. With a much smaller number of customers served, outages that are relatively insignificant for a large EDC's reliability measures will have a more significant impact on small EDCs. Thus, small EDCs have standard deviations that are higher than those of the large EDCs because of small sample sizes. Reducing the former two-standard deviation standard to a 135% standard is a significant tightening of the standard for the small EDCs. The rolling 12-month performance was proposed to be measured against the standard on a quarterly basis.
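
   The arithmetic behind this distinction can be illustrated with hypothetical customer counts:

```python
# Worked example with hypothetical customer counts: the same 5,000-customer
# outage contributes twenty times more to a small EDC's SAIFI than to a
# large EDC's, since SAIFI divides customers interrupted by customers served.
customers_interrupted = 5_000

print(customers_interrupted / 50_000)     # small EDC: adds 0.1 to SAIFI
print(customers_interrupted / 1_000_000)  # large EDC: adds 0.005 to SAIFI
```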

   The proposed long-term and short-term standard set points were selected for a number of reasons. First, the standards allow for some variability from the benchmarks because reliability performance is influenced by weather conditions and other factors that are inherently variable in nature. Second, a review of historical reliability performance levels reveals a certain degree of variance from year to year. However, the use of rolling averages, particularly for the 3-year rolling average standard, will tend to even out some of the inherent variance in performance metrics. The longer the period under review, the more year-to-year high and low variations will tend to cancel each other out. As such, the 3-year rolling average standard should promote reliability performance that is closer to the benchmark over time. Finally, the set points were selected so that the Commission would be more actively involved in monitoring and remedial activities when performance deviates significantly from the benchmark, but would not be as involved when the variations are within the more typical range.

   The Tentative Order also made comparisons of the proposed standards with the standards set by the Commission in 1999. In all cases, the 3-year rolling average standards are tighter than the previous standards that were based on two standard deviations. Comparisons of the proposed 12-month rolling standards to the previous standards revealed that in 32 of 33 cases (11 EDCs with 3 metrics each) the proposed standards are tighter than those established in 1999. Therefore, the Commission concluded that the proposed standards represented a tightening of our reliability standards for electric distribution service.

Positions of the Parties

   The FE Companies and EAP offered comments in support of abandoning the two standard deviation approach for setting reliability standards for the large EDCs. However, numerous comments were filed by the small EDCs (Citizens', Pike County, UGI and Wellsboro) in support of using the standard deviation approach for setting reliability standards for themselves. EAP also supported this approach and joined with Citizens', Pike County and UGI in recommending that the 12-month rolling average standard should be set at 1 1/2 standard deviations above the benchmark and the 3-year rolling standard be set at one standard deviation above the benchmark for the small EDCs only. Pike County recommended the use of standard deviations because of the significant amount of variation in the data caused by small events that skew the statistics. In reply comments, the OCA noted that it could not support the use of the standard deviation approach for the small EDC standards.

   The Commission's proposal to generally tighten the reliability standards received support in comments by the AFL-CIO, the FE Companies and the OCA. However, the AFL-CIO and the OCA commented that the Commission did not go far enough in their efforts to tighten the standards. These commenters pointed out that the Commission's proposals fall short of requiring reliability performance to be at a level experienced prior to restructuring. The OCA recommends an alternative approach whereby the 12-month rolling average standard be established at 10% above the benchmark and the 3-year rolling standard be established at the benchmark.

   Comments filed by PPL recommend a different model of setting reliability standards than that proposed by the Commission in the Tentative Order. PPL comments that there should be a single Statewide standard for the industry. PPL believes that benchmarks and standards should consider an EDC's historical performance and provide additional allowances for those EDCs that have met performance objectives. In PPL's view, the application of their model for a Statewide standard would ensure that better performing EDCs are not penalized for historically good performance and that improvement by those EDCs whose performance has lagged would be encouraged.

   The Commission received supportive comments from several parties (AFL-CIO, the FE Companies, the OCA and PPL) about the overall proposed model whereby we would seek to establish short-term and long-term standards. The FE Companies and PPL also generally supported the percentages proposed for the short-term and long-term standards (20% and 10% above the benchmarks, respectively). EAP and UGI recommended that as an alternative to using standard deviations to set the standards for small EDCs, the Commission should consider setting the 12-month standard at 45% above the benchmark and the 3-year standard at 15% above the benchmark.

   The OCA filed comments recommending that the Commission clarify the regulatory purpose of the short-term, 12-month rolling average standard. The OCA recommends that the 12-month standard be used to ensure that performance does not deteriorate on an annual basis to a level that makes it unlikely that an EDC will meet the requirements of the regulation over time. The OCA suggested that the Commission incorporate language from the 2002 Staff Report that addressed the purpose of the short-term standard.

Disposition

   The Commission will retain its proposal for using percentages to establish standards for electric distribution service reliability. In so doing, we will not adopt the position of the small EDCs who offered comments in support of the alternative of using the standard deviation statistic. With only five data points, a key underlying assumption for the standard deviation statistic is not met, thereby rendering the statistic invalid for our purposes.

   We will also retain our proposal for adopting both a long-term, 3-year rolling average standard and a 12-month rolling average short-term standard for all EDCs. We will not adopt the model advised by PPL for a single, Statewide standard for all EDCs. The intent of the Act is that service ''be maintained at the same level of quality under retail competition.'' 66 Pa.C.S. § 2807(d) (emphasis added). The Act could have required that all EDCs' performance not fall below some absolute standard, but it does not state that. Instead, the language of the Act implicitly recognizes that different EDCs may have had different levels of service reliability, and that each EDC's historic performance prior to electric restructuring would be the minimum performance standard to be maintained for the future. We also recognize that a single, Statewide performance standard may not account for legitimate differences in geography that can affect reliability. Accordingly, we shall, for the time being, retain these standards on a company-specific basis.

   As previously noted, EAP, UGI and other small EDCs provided comments in support of having somewhat more lenient standards for the small EDCs than those proposed by the Commission. Commenters supported 1 1/2 standard deviations or an upper range of 45% above the benchmark for the 12-month rolling standard for small EDCs and advanced either one standard deviation or an upper range of 15% above the benchmark for the 3-year rolling standard for small EDCs. We have already addressed our reservations about using the standard deviation statistic, the logic of which applies to small and large EDCs alike. We are not inclined to set an even wider bandwidth of acceptable performance for the small EDCs than we originally proposed. With regard to the 3-year rolling average, we believe the small EDCs should be within 10% of their benchmark, just like the large EDCs. For the 12-month rolling average, we proposed a somewhat more lenient standard of 135% for the small EDCs versus 120% for the large EDCs. We believe this extra degree of latitude is justified for the small EDCs because of the greater potential impact of single outage events on distribution systems with few circuits. However, we decline to provide even more latitude to the small EDCs. We would prefer to keep the acceptable performance range moderate and to examine specific causes and events on a case-by-case basis should the reported metric values exceed the 135% standard.

   Comments filed by the AFL-CIO and the OCA recommended that the Commission further tighten the standards for EDC reliability performance beyond that proposed in our Tentative Order. We will not adopt standards that are tighter than we proposed in our Tentative Order at this time. We believe that our proposals represent very significant steps to tighten the standards over the next few years and should serve to focus EDC management on achieving benchmark performance in the future. Given the uncertainty of weather and other events that can affect reliability performance, EDCs should set goals to achieve benchmark performance or better to allow for those times when unforeseen circumstances push the indices above the benchmark. By carefully managing performance in this manner, EDCs will have the necessary latitude to occasionally have performance above the benchmark, but still have the 12-month and 3-year averages close to the benchmark and well within the Commission's standards.

   We agree with the OCA that the Commission should clarify the purpose of the short-term 12-month rolling average standard. The primary purpose of the short-term 12-month standard is to ensure that performance does not deteriorate and move too far from the benchmark without Commission attention during the period in which the 3-year average develops. If quarterly monitoring of the 12-month rolling average metric values reveals trends that are incompatible with meeting the long-term standards, the Commission will conduct further reviews and remedial activities with the subject EDC until performance returns to the desirable range.

   Appendix B contains the recomputed benchmarks, rolling 12-month standard and the rolling 3-year standard for each EDC's SAIFI, CAIDI and SAIDI metrics.

4.  Waiver Petitions

   In Ordering Paragraph No. 4 of the Commission's June 27, 2003, Tentative Order, we ordered EDCs to request, in writing to the Commission's Secretary Bureau, any waivers of reliability reporting requirements necessary to fulfill their obligations under 52 Pa. Code Chapter 57, Subchapter N (Electric Reliability Standards). Since there were no adverse comments to this requirement and there were supportive comments filed by the OCA and PPL, we will maintain our initial position requiring the formal filing of waiver requests, and again direct that all requests for waiver shall be made formally in writing to the Commission. EDCs are required to file a petition for waiver of formal reporting requirements under 52 Pa. Code § 5.43 (relating to petitions for waiver of regulations) in a timely manner, in advance of the applicable reporting deadline. The EDCs are directed to disclose, in all waiver petitions submitted to the Commission, the reasons they are not in full conformity with the reliability regulations.

5.  Starting and Ending Times of Major Events

   The LB&FC and the Staff Internal Working Group identified scenarios wherein certain EDCs had inappropriately claimed service interruptions as major events by excluding all outage data for any day on which a major event occurred, regardless of the actual timeframe of the major event. The current definition of a ''major event'' (as defined in 52 Pa. Code § 57.192) indicates that ''The event begins when notification of the first interruption is received and ends when service to all customers affected by the event are restored.'' We agree that the designated starting and ending time of major events should be enforced according to the regulations.
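
   The distinction at issue can be illustrated with a brief sketch (the data model and timestamps are hypothetical): an outage is excludable only if it falls within the event window defined by the regulation, not merely on the same calendar day:

```python
# Sketch of the required event-window test. An outage is excludable only
# if it begins within the major event window -- from first notification to
# final restoration -- rather than merely occurring on a calendar day
# touched by the event. Timestamps below are hypothetical.
from datetime import datetime

def within_major_event(outage_start: datetime,
                       event_start: datetime,
                       event_end: datetime) -> bool:
    return event_start <= outage_start <= event_end

event_start = datetime(2003, 7, 14, 16, 30)  # first interruption reported
event_end = datetime(2003, 7, 15, 2, 10)     # last affected customer restored

print(within_major_event(datetime(2003, 7, 14, 18, 0), event_start, event_end))  # True
print(within_major_event(datetime(2003, 7, 14, 9, 0), event_start, event_end))   # False: same day, outside window
```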

   Although we revised the definition of a major event, there was no change made to the starting and ending times of a major event. The Commission hereby reiterates that there are regulations which define the designated starting and ending times of major events according to 52 Pa. Code § 57.192 and these shall be followed by all EDCs.

Positions of the Parties

   No adverse comments were filed.

Disposition

   We reiterate that the starting and ending times of major events are adequately defined in 52 Pa. Code § 57.192.


_______

1  A performance benchmark is the statistical average of an EDC's annual reliability performance index values for a given time period and is established by the Commission. The benchmark represents company-specific reliability performance for a specific historical period. An EDC's performance benchmark is the average of the historical annual averages of the performance index values for the 5-year time period from 1994-1998; the benchmarks appear in Appendix B.
   A performance standard is a numerical value established by the Commission that represents the minimal performance allowed for each reliability index for a given EDC. Performance standards established by this order are derived from and based on each EDC's historical performance as represented in performance benchmarks. Both long-term and short-term performance standards are established for each EDC. Long-term, 3-year rolling performance standards are based on the three most recent annual index values. Short-term, 12-month rolling performance standards are based on the four most recent quarterly index values. The long-term and short-term performance standards appear in Appendix B.

2  CAIDI is Customer Average Interruption Duration Index. It is the average duration of sustained interruptions for those customers who experience interruptions during the analysis period. CAIDI represents the average time required to restore service to the average customer per sustained interruption. It is determined by dividing the sum of all sustained customer interruption durations, in minutes, by the total number of interrupted customers. SAIFI is System Average Interruption Frequency Index. SAIFI measures the average frequency of sustained interruptions per customer occurring during the analysis period. SAIDI is System Average Interruption Duration Index. SAIDI measures the average duration of sustained customer interruptions per customer occurring during the analysis period. MAIFI is Momentary Average Interruption Frequency Index. MAIFI measures the average frequency of momentary interruptions per customer occurring during the analysis period. These indices are accepted national reliability performance indices as adopted by the Institute of Electrical and Electronics Engineers, Inc. (IEEE), and are defined with formulas at 52 Pa. Code § 57.192.
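
   For reference, the verbal definitions in this footnote can be written compactly as follows (the notation here is ours, for illustration; the controlling formulas are those at 52 Pa. Code § 57.192):

```latex
\begin{aligned}
\text{SAIFI} &= \frac{\sum \text{customers experiencing sustained interruptions}}{\text{total customers served}}\\[4pt]
\text{CAIDI} &= \frac{\sum \text{sustained customer interruption durations (min)}}{\sum \text{customers interrupted}}\\[4pt]
\text{SAIDI} &= \frac{\sum \text{sustained customer interruption durations (min)}}{\text{total customers served}} = \text{SAIFI} \times \text{CAIDI}\\[4pt]
\text{MAIFI} &= \frac{\sum \text{customers experiencing momentary interruptions}}{\text{total customers served}}
\end{aligned}
```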

3  The Tentative Order noted that Citizens' did not exclude major events. However, it is clear that both Citizens' and Wellsboro did not exclude major events in their original calculations.

4  When referring to the establishment of new performance standards based on a percentage of the benchmark, it is important to note that this is the recomputed benchmark based on excluding major event data using the entire service territory criterion.

5  Large EDCs currently include: Allegheny Power, Duquesne Light, Met-Ed, Penelec, Penn Power, PECO and PPL.

6  Small EDCs include: UGI, Citizens', Pike County and Wellsboro.


