********************************************************************************
ATM Forum Document Number: BTD-TEST-TM-PERF.00.03 (96-0810R6)
********************************************************************************
Title: ATM Forum Performance Testing Specification - Baseline Text
********************************************************************************
Abstract: This baseline document includes all text related to performance testing that has been agreed so far by the ATM Forum Testing Working Group.
********************************************************************************
Source: Raj Jain, Gojko Babic, Arjan Durresi, Justin Dolske.
The Ohio State University, Department of CIS
Columbus, OH 43210-1277
Phone: 614-292-3989, Fax: 614-292-2911, Email: Jain@ACM.Org
The presentation of this contribution at the ATM Forum is sponsored by NASA.
********************************************************************************
Date: September 1997
********************************************************************************
Distribution: ATM Forum Technical Working Group Members (AF-TEST, AF-TM)
********************************************************************************
Notice: This contribution has been prepared to assist the ATM Forum. It is offered to the Forum as a basis for discussion and is not a binding proposal on the part of any of the contributing organizations. The statements are subject to change in form and content after further study. Specifically, the contributors reserve the right to add to, amend or modify the statements contained herein.
********************************************************************************

A postscript version of this contribution, including all figures and tables, has been uploaded to the ATM Forum ftp server in the incoming directory. It may be moved from there to the atm documents directory. The postscript version is also available on our web page via:
http://www.cse.wustl.edu/~jain/atmf/bperf03.htm

Technical Committee
ATM Forum Performance Testing Specification
September 1997
BTD-TEST-TM-PERF.00.02 (96-0810R6)

ATM Forum Performance Testing Specifications
Version 1.0, September 1997

(C) 1997 The ATM Forum. All Rights Reserved. No part of this publication may be reproduced in any form or by any means. The information in this publication is believed to be accurate at its publication date. Such information is subject to change without notice and the ATM Forum is not responsible for any errors. The ATM Forum does not assume any responsibility to update or correct any information in this publication. Notwithstanding anything to the contrary, neither The ATM Forum nor the publisher make any representation or warranty, expressed or implied, concerning the completeness, accuracy, or applicability of any information contained in this publication. No liability of any kind shall be assumed by The ATM Forum or the publisher as a result of reliance upon any information contained in this publication.
The receipt or any use of this document or its contents does not in any way create by implication or otherwise:

* Any express or implied license or right to or under any ATM Forum member company's patent, copyright, trademark or trade secret rights which are or may be associated with the ideas, techniques, concepts or expressions contained herein; nor
* Any warranty or representation that any ATM Forum member companies will announce any product(s) and/or service(s) related thereto, or if such announcements are made, that such announced product(s) and/or service(s) embody any or all of the ideas, technologies, or concepts contained herein; nor
* Any form of relationship between any ATM Forum member companies and the recipient or user of this document.

Implementation or use of specific ATM recommendations and/or specifications or recommendations of the ATM Forum or any committee of the ATM Forum will be voluntary, and no company shall agree or be obliged to implement them by virtue of participation in the ATM Forum. The ATM Forum is a non-profit international organization accelerating industry cooperation on ATM technology. The ATM Forum does not, expressly or otherwise, endorse or promote any specific products or services.

Table of Contents

1. Introduction 1
1.1. Scope 1
1.2. Goals of Performance Testing 2
1.3. Non-Goals of Performance Testing 3
1.4. Terminology 3
1.5. Abbreviations 4
2. Classes of Application 4
2.1. Performance Testing Above the ATM Layer 4
2.2. Performance Testing at the ATM Layer 5
3. Performance Metrics 6
3.1. Throughput 6
3.1.1. Definitions 6
3.1.2. Units 7
3.1.3. Statistical Variations 7
3.1.4. Measurement Procedures 7
3.1.5. Foreground Traffic 8
3.1.6. Background Traffic 11
3.1.7. Guidelines For Scaleable Test Configurations 11
3.1.8. Reporting Results 13
3.2. Frame Latency 13
3.2.1. Definition 13
3.2.2. Units 15
3.2.3. Statistical Variations 15
3.2.4. Measurement Procedures 15
3.2.5. Foreground Traffic 16
3.2.6. Background Traffic 16
3.2.7. Guidelines For Scaleable Test Configurations 17
3.2.8. Reporting Results 19
3.3. Throughput Fairness 20
3.3.1. Definition 20
3.3.2. Units 20
3.3.3. Measurement Procedures 21
3.3.4. Statistical Variations 21
3.3.5. Reporting Results 21
3.4. Frame Loss Ratio 21
3.4.1. Definition 21
3.4.2. Units 22
3.4.3. Measurement Procedures 22
3.4.4. Statistical Variations 22
3.4.5. Reporting Results 22
3.5. Maximum Frame Burst Size (MFBS) 22
3.5.1. Definition 22
3.5.2. Units 23
3.5.3. Statistical Variations 23
3.5.4. Traffic Patterns 23
3.5.5. Guidelines For Using This Metric 23
3.6. Call Establishment Latency 23
3.6.1. Definition 23
3.6.2. Units 24
3.6.3. Configurations 24
3.6.4. Statistical Variations 25
3.6.5. Guidelines For Using This Metric 25
3.7. Application Goodput 25
3.7.1. Guidelines For Using This Metric 26
4. References 26
Appendix A: MIMO Latency 27
A.1. Definition 27
A.2. Introduction 27
A.3. Contiguous Frames 29
A.4. Discontiguous Frames 33

1. Introduction

Performance testing in ATM deals with the measurement of the level of quality of a system under test (SUT) or an implementation under test (IUT) under well-known conditions. The level of quality can be expressed in the form of metrics such as latency, end-to-end delay, and effective throughput. Performance testing can be carried out at the end-user application level (e.g., FTP, NFS), or at or above the ATM layer (e.g., cell switching, signaling, etc.). Performance testing also describes in detail the procedures for testing the IUTs in the form of test suites.
These procedures are intended to test the SUT or IUT and do not assume or imply any specific implementation or architecture of these systems. This document highlights the objectives of performance testing and suggests an approach for the development of the test suites.

1.1. Scope

Asynchronous Transfer Mode, as an enabling technology for the integration of services, is gaining increasing interest and popularity. ATM networks are being progressively deployed, and in most cases a smooth migration to ATM is prescribed. This means that most existing applications can still operate over ATM via service emulation or service interworking, along with the proper adaptation of data formats. At the same time, several new applications are being developed to take full advantage of the capabilities of the ATM technology through an Application Programming Interface (API).

While ATM provides an elegant solution to the integration of services and allows for high levels of scalability, the performance of a given application may vary substantially with the IUT or the SUT utilized. The variation in performance is due to the complexity of the dynamic interaction between the different layers. For example, an application running over a TCP/IP stack will yield different levels of performance depending on the interaction between the TCP window flow control mechanism and the ATM network congestion control mechanism used. Hence, the following points and recommendations are made.

First, ATM adopters need guidelines on the measurement of the performance of user applications over different systems. Second, some functions above the ATM layer, e.g., adaptation and signaling, constitute applications (i.e., IUTs) and as such should be considered for performance testing. Also, it is essential that these layers be implemented in compliance with the ATM Forum specifications. Third, performance testing can be executed at the ATM layer in relation to the QoS provided by the different service categories. Finally, because of the large number and diversity of applications, applications should be grouped into generic classes. Each class of applications requires a different testing environment, including metrics, test suites and traffic test patterns.

It is noted that the same application, e.g., ftp, can yield different performance results depending on the underlying layers used (TCP/IP over ATM versus TCP/IP over a MAC layer over ATM). Thus performance results should be compared only when the same protocol stack is used.

Performance testing is related to the user-perceived performance of ATM technology. In other words, the goodness of ATM will be measured not only by cell-level performance but also by frame-level performance and performance perceived at higher layers. Most of the Quality of Service (QoS) metrics, such as cell transfer delay (CTD), cell delay variation (CDV), cell loss ratio (CLR), and so on, may or may not be reflected directly in the performance perceived by the user. For example, while comparing two switches, if one gives a CLR of 0.1% and a frame loss ratio of 0.1% while the other gives a CLR of 1% but a frame loss ratio of 0.05%, the second switch will be considered superior by many users.

The ATM Forum and the ITU have standardized the definitions of ATM layer QoS metrics. The same needs to be done for higher-level performance metrics. Without standard definitions, each vendor will use its own definition of common metrics such as throughput and latency, resulting in confusion in the marketplace. Avoiding such confusion will help buyers, eventually leading to better sales and to the success of the ATM technology.
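The gap between cell-level and frame-level loss can be made concrete with a small calculation. The sketch below is illustrative only and is not part of the agreed baseline text: it assumes AAL5-style frames of a fixed number of cells, where a frame is lost if any one of its cells is lost, and independent random cell loss, which is the worst case for frame loss. The frame size of 20 cells and the loss ratios are hypothetical numbers chosen only to mirror the example above.

    # Illustrative only -- not part of the baseline text.
    # Assumption: a frame occupies a fixed number of cells and is lost if any
    # one of its cells is lost; cell losses are independent (worst case).

    CELLS_PER_FRAME = 20          # hypothetical frame size (about 960 bytes of payload)

    def frame_loss_ratio(clr, cells_per_frame=CELLS_PER_FRAME):
        """Frame loss ratio implied by a cell loss ratio under independent cell loss."""
        return 1.0 - (1.0 - clr) ** cells_per_frame

    for clr in (0.001, 0.01):     # the 0.1% and 1% CLRs from the example above
        print(f"CLR = {clr:.1%}  ->  frame loss ratio up to {frame_loss_ratio(clr):.1%}")

    # Output: CLR = 0.1% gives up to about 2.0% frame loss, CLR = 1.0% up to
    # about 18.2%.  The frame loss ratios quoted in the example (0.1% and
    # 0.05%) are far below these worst-case values, so the cell-level figure
    # alone says little about the frame-level behaviour a user actually sees.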
The initial work at the ATM Forum will be restricted to the native ATM layer and the adaptation layer. Any work on the performance of the higher layers is deferred for further study.

1.2. Goals of Performance Testing

The goal of this effort is to enhance the marketability of ATM technology and equipment. Any additional criterion that helps in achieving that goal can be added later to this list.

a. The ATM Forum shall define metrics that will help compare various ATM equipment in terms of performance.
b. The metrics shall be such that they are independent of switch or NIC architecture.
(i) The same metrics shall apply to all architectures.
c. The metrics can be used to help predict the performance of an application or to design a network configuration to meet specific performance objectives.
d. The ATM Forum will develop a precise methodology for measuring these metrics.
(i) The methodology will include a set of configurations and traffic patterns that will allow vendors as well as users to conduct their own measurements.
e. The testing shall cover all classes of service including CBR, rt-VBR, nrt-VBR, ABR, and UBR.
f. The metrics and methodology for different service classes may be different.
g. The testing shall cover as many protocol stacks and ATM services as possible.
(i) As an example, measurements for verifying the performance of services such as IP, Frame Relay and SMDS over ATM may be included.
h. The testing shall include metrics to measure the performance of network management, connection setup, and normal data transfer.
i. The following objectives are set for ATM performance testing:
(i) Definition of criteria to be used to distinguish classes of applications.
(ii) Definition of classes of applications, at or above the ATM Layer, for which performance metrics are to be provided.
(iii) Identification of the functions at or above the ATM Layer which influence the perceived performance of a given class of applications. Examples of such functions include traffic shaping, quality of service, adaptation, etc. These functions need to be measured in order to assess the performance of the applications within that class.
(iv) Definition of common performance metrics for the assessment of the performance of all applications within a class. The metrics should reflect the effect of the functions identified in (iii).
(v) Provision of detailed test cases for the measurement of the defined performance metrics.

1.3. Non-Goals of Performance Testing

a. The ATM Forum is not responsible for conducting any measurements.
b. The ATM Forum will not certify measurements.
c. The ATM Forum will not set thresholds such that equipment performing below those thresholds is called "unsatisfactory."
d. The ATM Forum will not establish any requirement that dictates a cost versus performance ratio.
e. The following areas are excluded from the scope of ATM performance testing:
(i) Applications whose performance cannot be assessed by common, implementation-independent metrics. In this case the performance is tightly related to the implementation. An example of such applications is network management, whose performance behavior depends on whether the implementation is centralized or distributed.
(ii) Performance metrics which depend on the type of implementation or architecture of the SUT or the IUT.
(iii) Test configurations and methodologies which assume or imply a specific implementation or architecture of the SUT or the IUT.
(iv) Evaluation or assessment of results obtained by companies or other bodies.
(v) Certification of conducted measurements or of bodies conducting the measurements.

1.4. Terminology

The following definitions are used in this document:

* Implementation Under Test (IUT): The part of the system that is to be tested.
* Metric: A variable or a function that can be measured or evaluated and which reflects quantitatively the response or the behavior of an IUT or an SUT.
* System Under Test (SUT): The system in which the IUT resides.
* Test Case: A series of test steps needed to put an IUT into a given state to observe and describe its behavior.
* Test Suite: A complete set of test cases, possibly combined into nested test groups, that is necessary to perform testing for an IUT or a protocol within an IUT.

1.5. Abbreviations

ISO International Organization for Standardization
IUT Implementation Under Test
NP Network Performance
NPC Network Parameter Control
PDU Protocol Data Unit
PVC Permanent Virtual Circuit
QoS Quality of Service
SUT System Under Test
SVC Switched Virtual Circuit
WG Working Group

2. Classes of Application

Developing a test suite for each existing and new application can prove to be a difficult task. Instead, applications should be grouped into categories or classes. Applications in a given class have similar performance requirements and can be characterized by common performance metrics. This way, the defined performance metrics and test suites will be valid for a range of applications. Classes of application can be defined based on one or a combination of criteria. The following criteria can be used in the definition of the classes:

(i) Time or delay requirements: real-time versus non-real-time applications.
(ii) Distance requirements: LAN versus WAN applications.
(iii) Media type: voice, video, data, or multimedia applications.
(iv) Quality level: for example, desktop video versus broadcast-quality video.
(v) ATM service category used: some applications have stringent performance requirements and can only run over a given service category. Others can run on several service categories. An ATM service category relates application aspects to network functionalities.
(vi) Others to be determined.

2.1. Performance Testing Above the ATM Layer

Performance metrics can be measured at the user application layer, and sometimes at the transport layer and the network layer, and can give an accurate assessment of the perceived performance. Since it is difficult to cover all existing applications and all their possible combinations, applications are grouped into classes, and performance metrics and performance test suites can then be provided for each class of applications.

The perceived performance of a user application running over an ATM network depends on many parameters. It can vary substantially by changing the underlying protocol stack, the ATM service category it uses, the congestion control mechanism used in the ATM network, etc. Furthermore, there is no direct and unique relationship between the ATM Layer Quality of Service (QoS) parameters and the perceived application performance. For example, in an ATM network implementing a packet-level discard congestion mechanism, applications using TCP as the transport protocol may see their effective throughput improve even though the measured cell loss ratio may be relatively high.
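The last point can be illustrated with a small sketch. It is not part of the agreed text and makes simplifying assumptions: fixed-size AAL5-style frames of a hypothetical 20 cells each, a fixed number of cells that must be discarded during congestion, and a frame counted as lost if any of its cells is missing. It only contrasts spreading the discarded cells over many frames (plain cell-level drop) with concentrating them in whole frames (packet-level discard); TCP dynamics are not modelled.

    # Illustrative sketch only; hypothetical numbers and a simplified model.
    CELLS_PER_FRAME = 20
    FRAMES_OFFERED = 1000
    CELLS_OFFERED = FRAMES_OFFERED * CELLS_PER_FRAME
    CELLS_TO_DROP = 400                     # same cell loss ratio (2%) in both cases

    # Case 1: cell-level drop spread over many frames -- each dropped cell
    # hits a different frame, so every hit frame is lost (worst case).
    frames_lost_cell_drop = min(FRAMES_OFFERED, CELLS_TO_DROP)

    # Case 2: packet-level discard -- cells are dropped a whole frame at a
    # time, so the same number of dropped cells ruins far fewer frames.
    frames_lost_packet_discard = CELLS_TO_DROP // CELLS_PER_FRAME

    clr = CELLS_TO_DROP / CELLS_OFFERED
    print(f"cell loss ratio in both cases: {clr:.1%}")
    print(f"frames lost, cell-level drop:   {frames_lost_cell_drop} "
          f"({frames_lost_cell_drop / FRAMES_OFFERED:.1%} of frames)")
    print(f"frames lost, packet-level drop: {frames_lost_packet_discard} "
          f"({frames_lost_packet_discard / FRAMES_OFFERED:.1%} of frames)")
    # With the same 2% CLR, plain cell drop can destroy 40% of the frames,
    # while packet-level discard destroys only 2%, which is why TCP goodput
    # can improve even when the measured CLR stays relatively high.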
In practice, it is difficult to carry out measurements in all the layers that span the region between the ATM Layer and the user application layer, given the inaccessibility of testing points. More effort needs to be invested to define the performance at these layers, which include adaptation, signaling, etc.

2.2. Performance Testing at the ATM Layer

The notion of an application at the ATM Layer is related to the service categories provided by the ATM service architecture. The Traffic Management Specification, version 4.0, specifies five service categories: CBR, rt-VBR, nrt-VBR, UBR, and ABR. Each service category defines a relation between the traffic behavior of a connection and a set of QoS performance parameters, and an assessment criterion is associated with each of these parameters. These are summarized below.

QoS PERFORMANCE PARAMETER           QoS ASSESSMENT CRITERIA
Cell Error Ratio                    Accuracy
Severely-Errored Cell Block Ratio   Accuracy
Cell Misinsertion Ratio             Accuracy
Cell Loss Ratio                     Dependability
Cell Transfer Delay                 Speed
Cell Delay Variation                Speed

A few methods for the measurement of the QoS parameters are defined in [2]. However, detailed test cases and procedures, as well as test configurations, are needed for both in-service and out-of-service measurement of QoS parameters. An example of a test configuration for the out-of-service measurement of QoS parameters is given in [1].

Performance testing at the ATM Layer covers the following categories:

(i) In-service and out-of-service measurement of the QoS performance parameters for all five service categories (or application classes in the context of performance testing): CBR, rt-VBR, nrt-VBR, UBR, and ABR. The test configurations assume a non-overloaded SUT.
(ii) Performance of the SUT under overload conditions. In this case, the efficiency of the congestion avoidance and congestion control mechanisms of the SUT is tested.

In order to provide common performance metrics that are applicable to a wide range of SUTs and that can be uniquely interpreted, the following requirements must be satisfied:

(i) Reference load models for the five service categories CBR, rt-VBR, nrt-VBR, UBR, and ABR are required. Reference load models are to be defined by the Traffic Management Working Group.
(ii) Test cases and configurations must not assume or imply any specific implementation or architecture of the SUT.

3. Performance Metrics

In the following description, System Under Test (SUT) refers to an ATM switch. However, the definitions and measurement procedures are general and may be used for other devices or for a network consisting of multiple switches as well.

3.1. Throughput

3.1.1. Definitions

There are three frame-level throughput metrics that are of interest to a user:

* Loss-less throughput - the maximum rate at which none of the offered frames is dropped by the SUT.
* Peak throughput - the maximum rate at which the SUT operates regardless of frames dropped. The maximum rate can actually occur when the loss is not zero.
* Full-load throughput - the rate at which the SUT operates when the input links are loaded at 100% of their capacity.

A model graph of throughput vs. input rate is shown in Figure 3.1. Level X defines the loss-less throughput, level Y defines the peak throughput and level Z defines the full-load throughput.

[Figure 3.1: Peak, loss-less and full-load throughput.]

The loss-less throughput is the highest load at which the count of the output frames equals the count of the input frames. The peak throughput is the maximum throughput that can be achieved in spite of the losses. The full-load throughput is the throughput of the system at 100% load on the input links. Note that the peak throughput may equal the loss-less throughput in some cases.

Only frames that are received completely and without errors are included in the frame-level throughput computation. Partial frames and frames with CRC errors are not included.
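As an informal illustration of these three definitions (not part of the agreed text), the sketch below takes the results of a hypothetical sweep of test runs, each recording the offered load, the frames sent, the frames received and the run duration, and picks out the loss-less, peak and full-load throughput levels of Figure 3.1. The frame size, run records and link loading are invented numbers used only for illustration.

    # Illustrative sketch only; the run records below are invented numbers.
    # Each run: (offered load as a fraction of link rate, frames sent,
    #            frames received, run duration in seconds).
    FRAME_SIZE_BITS = 9180 * 8          # hypothetical frame size
    runs = [
        (0.50, 100_000, 100_000, 60.0),
        (0.70, 140_000, 140_000, 60.0),
        (0.80, 160_000, 159_200, 60.0),
        (0.90, 180_000, 172_000, 60.0),
        (1.00, 200_000, 168_000, 60.0),
    ]

    def throughput_bps(run):
        """Effective throughput of one run: received frame bits per second."""
        _, _, received, duration = run
        return received * FRAME_SIZE_BITS / duration

    # Loss-less throughput: highest throughput among runs with no frame loss.
    lossless = max(throughput_bps(r) for r in runs if r[1] == r[2])
    # Peak throughput: highest throughput achieved regardless of loss.
    peak = max(throughput_bps(r) for r in runs)
    # Full-load throughput: throughput of the run with input links at 100% load.
    full_load = max(throughput_bps(r) for r in runs if r[0] == 1.0)

    print(f"loss-less: {lossless / 1e6:.1f} Mb/s, "
          f"peak: {peak / 1e6:.1f} Mb/s, full-load: {full_load / 1e6:.1f} Mb/s")

With these invented numbers the peak throughput (reached at 90% load) exceeds the loss-less throughput, and the full-load throughput falls below the peak, which is the shape sketched in Figure 3.1.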
3.1.2. Units

Throughput should be expressed in effective bits/sec, counting only bits from frames and excluding the overhead introduced by the ATM technology and the transmission systems. This is preferred over specifying it in frames/sec or cells/sec. Frames/sec requires specifying the frame size, and throughput values in frames/sec at various frame sizes cannot be compared without first being converted into bits/sec. Cells/sec is not a good unit for frame-level performance since cells are not visible to the user.

3.1.3. Statistical Variations

There is no need to obtain more than one sample for any of the three frame-level throughput metrics. Consequently, there is no need to calculate means and/or standard deviations of throughputs.

3.1.4. Measurement Procedures

Before starting measurements, a number of VCCs (or VPCs), henceforth referred to as "foreground VCCs", are established through the SUT. Foreground VCCs are used to transfer only the traffic whose performance is measured. That traffic is referred to as the foreground traffic. Characteristics of foreground traffic are specified in 3.1.5.

The tests can be conducted under two conditions:

* without background traffic;
* with background traffic.

Procedure without background traffic

The procedure to measure throughput in this case includes a number of test runs. A test run starts with the traffic being sent at a given input rate over the foreground VCCs with early packet discard disabled (if this feature is available in the SUT and can be turned off). The average cell transfer delay is constantly monitored. A test run ends and the foreground traffic is stopped when the average cell transfer delay has not changed significantly (not more than 5%) during a period of at least 5 minutes. During the test run period, the total number of frames sent to the SUT and the total number of frames received from the SUT are recorded. The throughput (output rate) is computed based on the duration of the test run and the number of received frames.

If the input frame count and the output frame count are the same, the input rate is increased and the test is conducted again. The loss-less throughput is the highest throughput at which the count of the output frames equals the count of the input frames. The input rate is then increased even further (with early packet discard enabled, if available). Although some frames will be lost, the throughput may increase until it reaches the peak throughput value. After this point, any further increase in the input rate will result in a decrease in the throughput. The input rate is finally increased to 100% of the input link rates and the full-load throughput is recorded.

Procedure with background traffic

Measurements of throughput with background traffic are under study.
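The following sketch (not part of the agreed procedure text) shows one way the rate-stepping loop of 3.1.4 could be driven by a test tool. The function run_at() and its toy SUT model, the frame size and the 10% step size are hypothetical placeholders; only the logic they wire together, run until the average cell transfer delay stabilizes, compare frame counts, then raise the input rate, comes from the procedure above.

    # Sketch of the rate-stepping loop in 3.1.4.  Illustrative only.

    def run_at(rate):
        """Hypothetical stand-in for one test run at `rate` (fraction of link load).

        A real tool would send foreground traffic at this rate, wait until the
        average cell transfer delay changes by no more than 5% over at least
        5 minutes, stop the traffic, and return the frame counts and duration.
        """
        sent = int(1_000_000 * rate)
        received = min(sent, 760_000 - int(120_000 * max(0.0, rate - 0.8)))  # toy SUT
        return sent, received, 300.0                                         # duration (s)

    FRAME_SIZE_BITS = 9180 * 8
    lossless_bps = peak_bps = full_load_bps = 0.0
    rate = 0.1
    while rate <= 1.0:
        sent, received, duration = run_at(rate)
        throughput = received * FRAME_SIZE_BITS / duration   # effective bits/s
        if received == sent:
            lossless_bps = max(lossless_bps, throughput)     # no loss at this rate
        peak_bps = max(peak_bps, throughput)                 # best throughput seen so far
        if rate == 1.0:
            full_load_bps = throughput                       # input links at 100% load
        rate = round(rate + 0.1, 2)

    print(lossless_bps, peak_bps, full_load_bps)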
3.1.5. Foreground Traffic

Foreground traffic is specified by the type of foreground VCCs, connection configuration, service class, arrival patterns, frame length and input rate. Foreground VCCs can be permanent or switched, virtual path or virtual channel connections, established between ports on the same network module on the switch, between ports on different network modules, or between ports on different switching fabrics. A system with n ports can be tested for the following connection configurations:

* n-to-n straight,
* n-to-(n-1) full cross,
* n-to-m partial cross, 1 <= m <= n-1,
* k-to-1, 1

Case 2aC: Contiguous Frames, Input rate < Output rate, Zero-Delay Switch

Here we have:

- FIFO > 0, incorrect; Note that FIFO may change with changing output rate (while not changing the switch latency). So, FIFO does not correctly represent the switch latency.
- LILO = 0, correct
- MIMO = min {LILO, FILO - Frame Size/Output Rate} = min {0, FIFO} = 0, correct

Case 2bC: Contiguous Frames, Input rate < Output rate, Nonzero-Delay Switch

[Figure A.2bC shows the flow in this case.]
[Figure A.2bC: Contiguous frames, Input rate < Output rate, Nonzero-delay switch]

In this case, the switch latency D is determined by the delay of the last bit. Here we have:

- FIFO > D, incorrect; As in Case 2aC, FIFO may change with changing output rate (without changing the switch latency). So, FIFO does not correctly represent the switch latency.
- LILO = D, correct
- MIMO = min {LILO, FILO - Frame Size/Output Rate} = min {D, FIFO} = D, correct

Case 3aC: Contiguous Frames, Input rate > Output rate, Zero-Delay Switch

[Figure A.3aC shows the flow in this case.]
[Figure A.3aC: Contiguous frames, Input rate > Output rate, Zero-delay switch]

In this case, only the first bit on the input appears immediately on the output; the other bits have to be buffered, because the input rate is larger (more bits come in) than the output rate (fewer bits go out). Here we have:

- FIFO = 0, correct
- LILO > 0, incorrect; Note that LILO may change with a change in the output rate without any other change in the switch. So, LILO does not correctly represent the switch latency.
- MIMO = min {LILO, FILO - Frame Size/Output Rate} = min {LILO, FIFO} = 0, correct

Case 3bC: Contiguous Frames, Input rate > Output rate, Nonzero-Delay Switch

[Figure A.3bC shows the flow in this case.]
[Figure A.3bC: Contiguous frames, Input rate > Output rate, Nonzero-delay switch]

In this case, the switch latency D is determined by the delay of the first bit. Here we have:

- FIFO = D, correct
- LILO > D, incorrect; As in Case 3aC, LILO may change with a change in the output rate without any other change in the switch. So, LILO does not correctly represent the switch latency.
- MIMO = min {LILO, FILO - Frame Size/Output Rate} = min {LILO, FIFO} = D, correct
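To make the contiguous-frame cases above concrete, here is a small sketch (not part of the baseline text) that computes FIFO, LILO and MIMO latency directly from the four bit timestamps, using the MIMO formula quoted above. The bit rates, frame size and timestamps are hypothetical numbers; the two scenarios correspond to the zero-delay switches of Cases 2aC and 3aC, where MIMO should come out as 0.

    # Illustrative sketch; times in microseconds, rates in bits per microsecond.
    # The frame size and rates are made-up numbers.

    def latencies(first_in, last_in, first_out, last_out, frame_size, output_rate):
        """FIFO, LILO and MIMO latency for a contiguous frame."""
        fifo = first_out - first_in
        lilo = last_out - last_in
        filo = last_out - first_in
        mimo = min(lilo, filo - frame_size / output_rate)
        return fifo, lilo, mimo

    FRAME = 1200                      # bits

    # Case 2aC: input 50 bit/us < output 100 bit/us, zero-delay switch.
    # The last bit leaves the instant it arrives; the 12 us output burst must
    # therefore start 12 us after the first bit came in.
    print(latencies(0.0, 24.0, 12.0, 24.0, FRAME, 100.0))   # -> (12.0, 0.0, 0.0)

    # Case 3aC: input 100 bit/us > output 50 bit/us, zero-delay switch.
    # The first bit leaves immediately; the rest is buffered and drains slowly.
    print(latencies(0.0, 12.0, 0.0, 24.0, FRAME, 50.0))     # -> (0.0, 12.0, 0.0)

In both scenarios only MIMO reports the zero latency of the switch; FIFO is misleading in the first and LILO in the second, as argued in Cases 2aC and 3aC.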
A.4. Discontiguous Frames

In this section we consider cases where frames on input as well as on output are discontiguous, i.e., there are gaps between the cells of a frame. Depending upon the number of gaps on input and output, there are three possibilities:

- The number of gaps on output is the same as that on input. This is the case of no change in gaps.
- The number of gaps on output is more than that on input. This is the case of expansion of gaps.
- The number of gaps on output is less than that on input. This is the case of compression of gaps.

It should be noted that cases with contiguous frames on input and/or output are special cases of discontiguous frames with no gaps. The nine cases and the applicability of the three metrics (FIFO, LILO and MIMO) to those cases are shown in Table A.2. Each case includes a case with a nonzero-delay switch and (if possible) a case with a zero-delay switch.

[Table A.2: Applicability of Various Latency Definitions For Discontiguous Frames]

Case 1aD: Discontiguous Frames, Input rate = Output rate, No Change in Gaps

[Figure A.1aD shows the flow for a zero-delay switch and a nonzero-delay switch.]
[Figure A.1aD: Discontiguous frames, Input rate = Output rate, No change in gaps]

This case is similar to Cases 1aC and 1bC. The switch latency is determined by the delay of the first bit (or the last bit). Here we have:

- FIFO = D, correct
- LILO = D, correct
- Input rate = Output rate
  * MIMO = min {LILO, FILO - FIT} = min {D, D} = D, correct

Case 1bD: Discontiguous Frames, Input Rate = Output Rate, Expansion of Gaps

[Figure A.1bD shows the flow for a nonzero-delay switch; a zero-delay switch with expansion of gaps is an impossible scenario.]
[Figure A.1bD: Discontiguous frames, Input rate = Output rate, Expansion of gaps]

In this case, the switch latency D is given by:

D = first bit delay + time of additional gaps on output

Here we have:

- FIFO < D, incorrect; FIFO is incorrect because it does not reflect the expansion of gaps. Note that for a nonzero-delay switch, FIFO may be zero (the case of zero delay for the first bit).
- LILO = D, correct
- Input rate = Output rate
  * MIMO = min {LILO, FILO - FIT} = min {D, D} = D, correct

Case 1cD: Discontiguous Frames, Input Rate = Output Rate, Compression of Gaps

[Figure A.1cD shows the flow for a zero-delay and a nonzero-delay switch with compression of gaps.]
[Figure A.1cD: Discontiguous frames, Input rate = Output rate, Compression of gaps.]

In this case, the switch latency D is given by:

D = last bit delay = first bit delay - time of additional gaps on input

Here we have:

- FIFO > D, incorrect; FIFO is incorrect because it does not reflect the compression of gaps.
- LILO = D, correct
- Input rate = Output rate
  * MIMO = min {LILO, FILO - FIT} = min {D, D} = D, correct

Case 2aD: Discontiguous Frames, Input Rate < Output Rate, No Change in Gaps

[Figure A.2aD shows the flow for a zero-delay switch and a nonzero-delay switch.]
[Figure A.2aD: Discontiguous frames, Input rate < Output rate, No change in gaps]

This case is similar to Cases 2aC and 2bC. The switch latency D is determined by the delay of the last bit. Here we have:

- FIFO > D, incorrect; FIFO may change with a change in the output rate without any other change in the switch. So, FIFO does not correctly represent the switch latency.
- LILO = D, correct
- Input rate < Output rate
  * FILO - FIT x Input rate/Output rate > D
  * MIMO = min {LILO, FILO - FIT x Input rate/Output rate} = D, correct

Case 2bD: Discontiguous Frames, Input Rate < Output Rate, Expansion of Gaps

[Figure A.2bD shows the flow for a zero-delay switch and a nonzero-delay switch.]
[Figure A.2bD: Discontiguous frames, Input rate < Output rate, Expansion of gaps]

In this case, the switch latency D is determined by the delay of the last bit. Here we have:

- FIFO is incorrect because:
  a. FIFO may be affected by changing the output rate without any other change in the switch (latency).
  b. FIFO may change with a change in the number of gaps on the output while the switch (latency) is unchanged.
  It should be noted that for this case, with the given input rate and the given number of gaps on input, it is possible to produce cases with an appropriate output rate and an appropriate number of gaps on output such that FIFO > D, FIFO < D, or even FIFO = D, all without changing the switch (latency).
- LILO = D, correct
- Input rate < Output rate
  * FILO - FIT x Input rate/Output rate > D
  * MIMO = min {LILO, FILO - FIT x Input rate/Output rate} = D, correct

Case 2cD: Discontiguous Frames, Input Rate < Output Rate, Compression of Gaps

[Figure A.2cD shows the flow for a zero-delay switch and a nonzero-delay switch.]
[Figure A.2cD: Discontiguous frames, Input rate < Output rate, Compression of gaps.]

In this case, the switch latency D is determined by the delay of the last bit. Here we have:

- FIFO > D, incorrect; FIFO may be affected by changing the output rate and/or the number of gaps on the output while the switch (latency) is unchanged. So, FIFO does not correctly represent the switch latency.
- LILO = D, correct
- Input rate < Output rate
  * FILO - FIT x Input rate/Output rate > D
  * MIMO = min {LILO, FILO - FIT x Input rate/Output rate} = D, correct

Case 3aD: Discontiguous Frames, Input Rate > Output Rate, No Change in Gaps

[Figure A.3aD shows the flow for a zero-delay switch and a nonzero-delay switch.]
[Figure A.3aD: Discontiguous frames, Input rate > Output rate, No change in gaps]

This case is similar to Cases 3aC and 3bC. The switch latency D is determined by the delay of the first bit. Here we have:

- FIFO = D, correct
- LILO > D, incorrect; Note that LILO may change with a change in the output rate without any other change in the switch. So, LILO does not correctly represent the switch latency.
- MIMO = min {LILO, FILO - FIT x Input rate/Output rate} = min {LILO, D} = D, correct

Case 3bD: Discontiguous Frames, Input Rate > Output Rate, Expansion of Gaps

[Figure A.3bD shows the flow for a nonzero-delay switch; a zero-delay switch with expansion of gaps is an impossible scenario.]
[Figure A.3bD: Discontiguous frames, Input rate > Output rate, Expansion of gaps]

In this case, the switch latency D is given by:

D = first bit delay + time of additional gaps on output

Here we have:

- FIFO < D, incorrect; FIFO is incorrect because it does not reflect the expansion of gaps. Note that for a nonzero-delay switch, FIFO may even be zero (the case of zero delay for the first bit).
- LILO > D, incorrect; A similar argument applies as in Case 3aD for LILO incorrectly being influenced by the output rate, with the observation that LILO does correctly account for the time of additional gaps.
- MIMO = min {LILO, FILO - FIT x Input rate/Output rate} = min {LILO, D} = D, correct

Case 3cD: Discontiguous Frames, Input Rate > Output Rate, Compression of Gaps

[Figure A.3cD shows the flow for a zero-delay switch, a positive-delay switch and a speed-up switch.]
[Figure A.3cD: Discontiguous frames, Input rate > Output rate, Compression of gaps.]

In this case, the switch latency D is given by:

D = first bit delay - time of missing gaps on output

Three cases can be distinguished:

a. the case of a zero-delay switch, where: first bit delay = time of missing gaps on output
b. the case of a positive-delay switch, where: first bit delay > time of missing gaps on output
c. the case of a speed-up switch (a negative-delay switch), where: first bit delay < time of missing gaps on output

Here we have:

- FIFO > D, incorrect; FIFO is incorrect because it does not reflect the compression of gaps. Note that here FIFO may be zero (the case of zero delay for the first bit) while the switch latency is negative.
- LILO > D, incorrect; A similar argument applies as in Case 3aD for LILO incorrectly being influenced by the output rate, with the observation that LILO does correctly account for the time of missing gaps.
- MIMO = min {LILO, FILO - FIT x Input rate/Output rate} = min {LILO, D} = D, correct
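As an informal check of the general formula used above (not part of the baseline text), the sketch below computes FIFO, LILO and MIMO latency from the four bit timestamps of a discontiguous frame, with FIT taken as the frame input time (last bit in minus first bit in). The timestamps are hypothetical numbers describing a gap-expansion scenario like Case 1bD: equal input and output rates, a 5 us first-bit delay and 3 us of additional gaps on output, so the switch latency D is 8 us.

    # Illustrative sketch; times in microseconds, rates in bits per microsecond.
    # All numbers are invented to mimic Case 1bD (equal rates, gaps expanded).

    def mimo_latency(first_in, last_in, first_out, last_out, input_rate, output_rate):
        """General MIMO latency: min {LILO, FILO - FIT x Input rate/Output rate}."""
        lilo = last_out - last_in
        filo = last_out - first_in
        fit = last_in - first_in                     # frame input time, gaps included
        return min(lilo, filo - fit * input_rate / output_rate)

    # A 1200-bit frame at 100 bit/us in and out: 12 us of cell time on each side.
    # The output starts 5 us late and carries 3 us of extra gaps, so D = 5 + 3 = 8 us.
    first_in, last_in = 0.0, 12.0
    first_out, last_out = 5.0, 20.0

    fifo = first_out - first_in                      # 5 us: misses the added gaps
    lilo = last_out - last_in                        # 8 us: equals D here
    mimo = mimo_latency(first_in, last_in, first_out, last_out, 100.0, 100.0)

    print(f"FIFO = {fifo} us, LILO = {lilo} us, MIMO = {mimo} us")   # 5.0, 8.0, 8.0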
In summary, MIMO latency is the only metric that applies to all cases.