Experience with ATM Switch Performance Testing

Arjan Durresi, Raj Jain, Gojko Babic

Department of Computer and Information Science, The Ohio State University,

2015 Neil Ave, Columbus, OH 43210-1277
Phone: 614-688-5610, Fax: 614-292-2911
Email: {durresi, jain, babic}@cse.wustl.edu


Abstract

For users to be able to compare the performance of different switches, it is important to have a common set of metrics. Examples of several such metrics and their measurements are presented in this paper. These measurements and metrics have influenced standardization efforts in this area. The key distinguishing feature of our effort is its emphasis on frame-level metrics (rather than the cell-level metrics of the past). In particular, we present definitions, methodologies, and measured results for frame latency, throughput, and maximum frame burst size (MFBS).

Keywords: Performance testing, performance metrics, ATM switches, frame-level performance

Solicited area: testbeds and measurements.

1. Introduction

Asynchronous Transfer Mode (ATM) provides an elegant solution to the integration of services and allows for high levels of scalability. However, the performance of a given application may vary substantially with the switches used in the network. It is therefore important that there be a standard set of metrics on which different switches can be compared. Without standard definitions, each vendor would use its own definition of common metrics such as throughput and latency, resulting in confusion in the marketplace. Avoiding such confusion helps buyers and eventually leads to better sales, contributing to the success of ATM technology.

Therefore, in October 1995, we started an effort at the ATM Forum to develop a standard set of metrics for ATM switches. This paper presents some of the metrics that we proposed as part of that effort [1, 3, 6]. One goal of this effort was that the metrics should be, as much as possible, representative of real network situations. They should also be independent of switch architecture. We emphasized frame-level metrics, because this level is more likely to influence application performance. Cell-level metrics often do not reflect the performance as experienced (or desired) by end users. For example, a video user sending 30 frames/sec would like frames to be completely delivered every 33 ms; it does not matter whether the cells belonging to a frame arrive back-to-back or regularly spaced. Thus, it is the frame delay and its variation that matters, not the cell delay.

A frame is defined here as the ATM Adaptation Layer (AAL) protocol data unit (PDU). One problem in measuring the frame delay in ATM networks is that, when seen inside the network, frames may be discontinuous, with numerous gaps between their cells as well as interleaved cells of other frames. Note that the monitoring equipment, if placed inside the host, will be affected by the performance of the host and may not accurately reflect the performance of the switch. Thus, the test probes of the monitoring equipment should be placed at the entrance and the exit of the system to be measured, as in Figure 1.

fig1

Figure 1. Measurement Point

Although we use the term "switch" throughout this paper, the discussion applies equally well to any network element (including switches, routers, multiplexers, inverse-multiplexers, wires, etc.) or to a network as a whole.

In particular, we present measurement methodologies, results, and analysis for frame latency, throughput, and maximum frame burst size (MFBS).

2. MIMO frame latency

The delay of a switch at the cell level is generally measured by the FILO (first-bit in to last-bit out) latency, as indicated in Figure 2. Other alternative metrics such as FIFO (first-bit in to first-bit out), LILO (last-bit in to last-bit out), and LIFO (last-bit in to first-bit out) latencies can easily be obtained from the figure. A complete analysis of these metrics is given in [2, 3], where it is shown that, unfortunately, none of these four metrics is appropriate for an ATM network: LIFO may result in negative values, FIFO does not reflect the expansion and compression of gaps on output, and FILO is strongly influenced by the frame gap pattern. For this reason we have introduced a new metric to measure frame latency, called MIMO (Message In Message Out) [1, 2, 5, 7, 12]. Most ITU documents measure cell-level delay using the FILO metric; therefore, we use FILO as the starting point for our discussion. FILO frame latency is shown in Figure 3.

fig2

Figure 2. FILO latency at cell level

fig3

Figure 3. FILO latency for the switch B that delays each cell by 1 ms

Generally, the measured performance of a system depends upon the system as well as the workload. Some metrics are highly workload dependent while others are less so. A metric that depends more on the system and less on the workload is generally preferred, particularly if users are interested in comparing systems rather than workloads. It turns out that the FILO frame latency as defined above has the undesirable property that it depends heavily on the workload.

To show the problem in its extreme case, consider the situation in Figure 3, where the two cells of the frame arrive two days apart. The switch delays each cell by 1 ms, but the FILO frame latency is 2 days plus 3 ms. It mostly reflects the arrival gap and is nowhere close to the actual delay introduced by the switch. To avoid this problem, we have proposed a new metric, MIMO latency, which measures the true contribution of the switch to the frame latency and is not affected by the arrival pattern (gaps) of the cells constituting the frame.

2.1 MIMO Latency Definition

We introduce the concept of an ideal switch, one that does the best possible processing of its frames. For any given arrival pattern, the MIMO latency is obtained by subtracting the FILO frame latency of that pattern through the ideal switch (FILO0) from the measured FILO frame latency of the switch under test, i.e.:

MIMO latency = FILO latency - FILO0    (1)

For the example shown in Figure 3, FILO0 is 2 days plus 2 ms, so the MIMO latency is 1 ms. Notice that the MIMO latency reflects the switch behavior.

FILO0 for a given frame is equal to the FILO latency of that frame passing through an ideal switch. An ideal switch is defined as a switch that handles incoming frames in such a way that they are transmitted on the output link without any unnecessary time consumption, i.e., the best any switch can do. By definition, the MIMO latency of an ideal switch is zero. Hence, an ideal switch can also be called a zero-delay switch.

The procedure for calculating FILO0 is as follows:

a. Initially, FILO0 = 0 and time t is measured from the arrival of the first bit of the first cell.

b. For each cell whose first bit arrives at time t, update FILO0 as follows:

FILO0 = max{t, FILO0} + max{CIT, COT}

where:

CIT = cell input time = 424 bits / Input Link Rate in bps

COT = cell output time = 424 bits / Output Link Rate in bps

Note that MIMO latency, as a switch delay metric, accounts only for delays caused by node processing, such as switching, routing, and queueing delays, and not for transmission delays introduced by the communication links.
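To make the procedure concrete, the following Python sketch implements the FILO0 recurrence and equation (1); the function names and units are ours, not part of any test specification. The check at the end reproduces the Figure 3 example under the assumption that both links have a cell time of 1 ms and that the second cell starts arriving two days after the first cell ends.

def filo0(arrivals, cit, cot):
    # FILO latency of the frame through an ideal (zero-delay) switch.
    # arrivals: times of the first bit of each cell, measured from the
    # first bit of the first cell; cit/cot: cell input/output times.
    f = 0.0
    for t in arrivals:
        f = max(t, f) + max(cit, cot)
    return f

def mimo_latency(measured_filo, arrivals, cit, cot):
    # Equation (1): MIMO latency = measured FILO latency - FILO0.
    return measured_filo - filo0(arrivals, cit, cot)

# Figure 3 example (times in ms): the switch under test delays each cell
# by 1 ms, so its FILO frame latency is 2 days + 3 ms.
two_days = 2 * 24 * 3600 * 1000
arrivals = [0.0, two_days + 1.0]
assert filo0(arrivals, cit=1.0, cot=1.0) == two_days + 2.0   # FILO0 = 2 days + 2 ms
assert mimo_latency(two_days + 3.0, arrivals, cit=1.0, cot=1.0) == 1.0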

2.2 Frame Latency through an Ideal Switch

The concept of the ideal switch is explored in this section. In particular, it is shown how an ideal switch handles discontinuous frames in an ATM environment.

Figures 4a and 4b present two possible cases of a frame passing through an ideal switch when the input link rate is higher than the output link rate. Figure 4a illustrates the case when the cells of a frame do not have to wait. The given frame includes two cells and the input link rate is 4 times the output link rate. The two cells start arriving at time t = 0 and t = 5, respectively. An ideal switch will start transmitting the first cell at time t = 0 and finish at time t = 4. The second cell can be transmitted without waiting, and its transmission finishes at t = 9. This is how long an ideal switch takes to transmit this frame. Hence, the FILO latency of an ideal switch for this frame is 9, denoted FILO0.

Figure 4b shows another possible case of a frame passing through an ideal switch with an input link rate higher than the output link rate, in which cells of a frame have to wait. As in Figure 4a, the given frame has two cells and the input link rate is 4 times the output link rate, but the frame has a different gap pattern. The second cell arrives at time t = 2 and thus has to wait. An ideal switch will start transmitting the first cell at time t = 0 and finish at time t = 4. The second cell transmission starts at t = 4 and finishes at t = 8. Hence, the FILO latency of an ideal switch for this frame is 8, i.e., FILO0 = 8. Thus, Figures 4a and 4b illustrate that an incoming cell may be transmitted immediately without waiting, or may have to wait for previously received cells of the same frame to be transmitted.

In general, for a given discontinuous frame, when the input link rate is higher than the output link rate, it is possible that some cells have to wait for previously received cells of the same frame, while other cells can be transmitted without waiting.

fig4a

Figure 4a. No-Cell-Waiting Operation of an Ideal Switch for Input Rate > Output Rate

fig4b

Figure 4b. Cell-Waiting Operation of an Ideal Switch for Input Rate > Output Rate

Also, notice that an ideal switch reduces the size of each input gap on output, with some gaps removed completely.

Figure 5 illustrates the only possible case of a frame passing through an ideal switch when the input rate is lower than the output rate. Again, the frame includes two cells, but the output link rate is now four times the input link rate. The two cells arrive at time t = 0 and t = 5, respectively. An ideal switch will start transmitting the first cell at time t = 3 (not at t = 0, in order to avoid an underrun) and finish at time t = 4. The second cell transmission starts at t = 8 and finishes at t = 9. This is how long an ideal switch takes to transmit this frame. Hence, the FILO latency of an ideal switch for this frame is 9, i.e., FILO0 = 9.

Note that when the input rate is less than or equal to the output rate, a cell never has to wait for the transmissions of previously received cells to complete. FILO0 in such cases is equal to the frame input time (first bit in to last bit in), and the MIMO latency becomes equal to the delay of the last bit of the last cell, i.e., the LILO latency. Thus, when the input link rate is less than or equal to the output link rate, we have:

MIMO latency = LILO latency (2)
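As a quick check, the same recurrence from Section 2.1 reproduces the ideal-switch latencies of Figures 4a, 4b, and 5. The sketch below uses the abstract time units of those figures, in which the faster link has a cell time of 1 and the slower link a cell time of 4:

def filo0(arrivals, cit, cot):
    # FILO0 recurrence from Section 2.1
    f = 0
    for t in arrivals:
        f = max(t, f) + max(cit, cot)
    return f

# Figure 4a: input rate = 4 x output rate, cells arriving at t = 0 and t = 5
assert filo0([0, 5], cit=1, cot=4) == 9
# Figure 4b: same rates, second cell arrives at t = 2 and has to wait
assert filo0([0, 2], cit=1, cot=4) == 8
# Figure 5: output rate = 4 x input rate, cells arriving at t = 0 and t = 5
assert filo0([0, 5], cit=4, cot=1) == 9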

2.3 Measurement Experiences

In this section we describe several measurements performed in our performance laboratory using a commercially available ATM monitor as both traffic generator and traffic analyzer [4, 5, 8]. This monitor, and as far as we are aware all other similar systems, can provide measurement data on delays and inter-arrival times at the cell level.

fig5

Figure 5. Operations of an Ideal Switch for Input Rate < Output Rate

The following two relations, which can be easily derived, are used later in this section for MIMO latency calculation:

FILO latency = first-cell-to-last-cell inter-arrival time at the output + first cell transfer delay    (3)

LILO latency = last cell transfer delay - cell input time    (4)

2.3.1. Tests with Input Rate Higher Than Output Rate

The test configuration for the MIMO latency measurements with the input link rate higher than the output link rate is shown in Figure 6. It uses a 155 Mbps UTP-5 link between monitor port 1 and switch port A1, and a 25 Mbps link between monitor port 2 and switch port D1.

fig6

Figure 6. Test configuration for measurements of MIMO latency

In this configuration, the cell input time is CIT ≈ 2.83 μs and the cell output time is COT ≈ 16.56 μs; these values are used in the calculations below.

We performed all our tests with 32-cell frames. One measurement used contiguous frames, i.e., the cells of the test frame were transmitted back-to-back. In the remaining tests, we introduced identical gaps (unassigned cells or cells of other frames) between the cells of the test frame.

Table 1 presents measurement results for eight test runs, from which MIMO latency is calculated. The first test uses a contiguous test frame on input. All other tests use discontinuous frames on input, with gaps between cells of the test frame, as indicated in the second column.

The third and fourth columns present measurement results for the first cell delay and the inter-arrival time between the first and the last cells. The fifth column contains calculated values of FILO0, obtained from the frame pattern on input as explained in Section 2.1. Here is how we calculate those values. For the first five tests, each cell entering an ideal switch has to wait for the transmission of the previously received cell to finish. Thus, on output we should have back-to-back cells, i.e., a contiguous frame. Therefore, in all those cases we can calculate FILO0 for 32-cell frames as:

FILO0 = 32 × COT = 32 × 16.56 = 530 μs

In the last three tests, the gaps on input are large enough that no cell has to wait for a previously received cell. In the case with 5-cell gaps, the first bit of the 32nd (last) cell arrives at an ideal switch at time t, where:

Table 1: (All times are in μs)

Test No. | Frame Pattern | 1st cell CTD | 1st cell to last cell inter-arrival time | FILO0 | FILO latency (3) | MIMO latency (1)
1 | No gap | 36.8 | 526.5 | 530.0 | 563.3 | 33.3
2 | 1-cell gaps | 35.8 | 526.0 | 530.0 | 561.8 | 31.8
3 | 2-cell gaps | 36.8 | 526.0 | 530.0 | 562.8 | 32.8
4 | 3-cell gaps | 34.8 | 526.5 | 530.0 | 561.3 | 31.3
5 | 4-cell gaps | 40.8 | 519.5 | 530.0 | 560.3 | 30.3
6 | 5-cell gaps | 36.8 | 526.5 | 542.9 | 562.8 | 19.9
7 | 6-cell gaps | 36.8 | 616.0 | 630.6 | 652.8 | 22.2
8 | 7-cell gaps | 35.3 | 705.0 | 718.4 | 740.3 | 21.9

t = (CIT + 5-cell gap) × 31 = 6 × CIT × 31 = 526.4 μs

and then

FILO0 = t + COT = 526.4 + 16.5 = 542.9 μs

Similarly, in the cases with 6-cell gaps and 7-cell gaps, FILO0 is calculated as 630.6 μs and 718.4 μs, respectively.

The sixth column shows the FILO latency calculated, according to expression (3), as the sum of the terms in the third and fourth columns. In the last column, according to expression (1), the MIMO latency values are obtained by subtracting the terms in the fifth column from those in the sixth column.

Note that the switch latency is higher in the first five tests due to cell queueing. In the last three tests, the gap between cells is large and there is no queueing. MIMO latency clearly reflects this effect.
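The arithmetic behind the last two columns of Table 1 is just expressions (3) and (1); a minimal sketch, shown here for tests 1 and 7:

def frame_latencies(first_cell_ctd, first_to_last_interarrival, filo0_us):
    # Expression (3): FILO latency from the measured cell-level quantities.
    filo = round(first_cell_ctd + first_to_last_interarrival, 1)
    # Expression (1): MIMO latency = FILO latency - FILO0.
    mimo = round(filo - filo0_us, 1)
    return filo, mimo

assert frame_latencies(36.8, 526.5, 530.0) == (563.3, 33.3)   # Test 1 (no gaps)
assert frame_latencies(36.8, 616.0, 630.6) == (652.8, 22.2)   # Test 7 (6-cell gaps)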

2.3.2. Tests with Input Link Rate Lower Than Output Link Rate

We also performed tests using the configuration of Figure 6, but with the traffic flowing in the opposite direction, as indicated in the figure. This is therefore the configuration with the input link rate lower than the output link rate. In this case, CIT ≈ 16.56 μs and COT ≈ 2.83 μs.

We performed tests with 32-cell frames, with random idle periods between cells. Table 2 includes measurement data from two tests, for which the MIMO latency is also calculated. Since the input link rate is lower than the output link rate, both expression (1) and expression (2) can be used to calculate the MIMO latency.

The results in Table 2 show clearly that MIMO latency reflects the switch behavior and is not affected by the arrival pattern, whereas FILO latency is strongly affected by the arrival pattern. It can also be observed that the MIMO latency values obtained with the two expressions are in good agreement.

Table 2. (All times are in μs)

Last cell CTD | MIMO latency (2) | 1st cell CTD | 1st cell to last cell inter-arrival time | FILO0 | FILO latency | MIMO latency (1)
32.0 | 15.44 | 31.0 | 535.0 | 550.0 | 566.0 | 16.0
32.5 | 15.94 | 33.0 | 1067.5 | 1082.6 | 1100.5 | 17.9
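For the first test in Table 2, the two ways of computing the MIMO latency work out as follows (using CIT = 16.56 μs on the 25 Mbps input link, as above):

CIT = 16.56  # cell input time on the 25 Mbps link, in microseconds

mimo_via_2 = round(32.0 - CIT, 2)              # expression (2): last cell CTD - CIT
mimo_via_1 = round((31.0 + 535.0) - 550.0, 2)  # expression (1): FILO latency - FILO0

assert mimo_via_2 == 15.44
assert mimo_via_1 == 16.0
# The two estimates agree to within about half a microsecond.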

3. Throughput

There are three frame-level throughput metrics that are of interest to a user: the loss-less throughput, the peak throughput, and the full-load throughput.

A model graph of throughput vs. input rate is shown in Figure 7. Level X defines the loss-less throughput, level Y the peak throughput, and level Z the full-load throughput.

The loss-less throughput is the highest load at which the count of output frames equals the count of input frames. The peak throughput is the maximum throughput that can be achieved in spite of losses. The full-load throughput is the throughput of the system at 100% load on the input links. Note that the peak throughput may equal the loss-less throughput in some cases.

Only frames that are received completely without errors are included in frame-level throughput computation. Partial frames and frames with CRC errors are not included.
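In a test, the three levels can be extracted from a sweep of the offered load. The sketch below only outlines the bookkeeping; run_test() is a hypothetical helper standing in for one fixed-duration test run at a given offered load (expressed as a fraction of the output link rate), returning the numbers of frames offered and of complete, error-free frames delivered.

def classify_throughput(loads, run_test):
    # Sweep offered loads and derive the three levels of Figure 7.
    lossless_load = 0.0         # level X: highest load with no frame loss
    peak_delivered = 0          # level Y: maximum delivered frame count per run
    full_load_delivered = None  # level Z: delivered frame count at 100% load
    for load in sorted(loads):
        offered, delivered = run_test(load)
        if delivered == offered:
            lossless_load = max(lossless_load, load)
        peak_delivered = max(peak_delivered, delivered)
        if load == 1.0:
            full_load_delivered = delivered
    return lossless_load, peak_delivered, full_load_delivered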

3.1 Throughput measurement

In throughput measurements, we use an n-to-1 configuration as given in [1, 3, 5, 11], i.e., n traffic sources generating frames over n input links toward one output link. Since our monitor has only four ports, we were able to test at most the 4-to-1 configuration. We also performed tests with 2-to-1 and 3-to-1 configurations, but the results are similar to those reported here for the 4-to-1 case.

The 4-to-1 configuration for throughput measurements is given in Figure 8. The configuration includes one ATM monitor and one ATM switch with two 155 Mbps UTP-5 links and two 155 Mbps OC-3c links. Four permanent virtual path connections (VPCs) are established between the monitor ports. Note that the link between monitor port 3 and switch port B1 is used in one direction as the output link and in the other direction as one of the input links.

Four traffic sources generate fixed-length frames (106 cells) over the VPCs at identical rates, with equally spaced frames. All frames are generated in a simulated AAL 5 format.

A frame in simulated AAL 5 format is transmitted as 106 back-to-back cells, with the PT field in the ATM header set to 0 in the first 105 cells and to 1 in the last cell. Since we are interested not only in frame losses but also in cell losses for comparison, each cell payload includes a 16-bit cell sequence number and a 10-bit CRC field. With such cells, undetected cell loss is unlikely.
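A sketch of how such a test frame can be built. The payload layout and the CRC convention here are ours (a CRC-10 with generator polynomial x^10 + x^9 + x^5 + x^4 + x + 1 and a zero initial value); they illustrate the idea of a per-cell sequence number plus checksum rather than the exact format produced by the monitor.

def crc10(data):
    # Bitwise CRC, generator x^10 + x^9 + x^5 + x^4 + x + 1 (mask 0x233),
    # zero initial value, no reflection.
    crc = 0
    for byte in data:
        for i in range(7, -1, -1):
            bit = (byte >> i) & 1
            top = (crc >> 9) & 1
            crc = (crc << 1) & 0x3FF
            if top ^ bit:
                crc ^= 0x233
    return crc

def simulated_aal5_frame(n_cells=106):
    # Returns a list of (pt_bit, 48-byte payload) cells: PT = 0 in the first
    # n_cells - 1 cells and PT = 1 in the last cell; each payload carries a
    # 16-bit sequence number and a 10-bit CRC over the rest of the payload.
    cells = []
    for seq in range(n_cells):
        body = seq.to_bytes(2, "big") + bytes(44)
        payload = body + crc10(body).to_bytes(2, "big")   # 48 octets total
        pt = 1 if seq == n_cells - 1 else 0
        cells.append((pt, payload))
    return cells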

The input load is varied by changing the frame rate. Each test run lasts 180 seconds. Our measurements show that as long as the total input load is less than the output link rate, no loss of frames (or cells) is observed. For example, no loss is detected even when the load on each input line is 24.94% of its rate, resulting in a total load of 99.76% (= 4 x 24.94%) of the output link rate.

If the total input rate is even slightly higher than the output link rate, the frames are lost at a high rate.

Table 3 presents measurement results for the case when the total load is 100.32% (= 4 x 25.08%) of the output link rate. The measured results include the cell loss ratio and the frame loss ratio.

fig7

Figure 7: Peak, loss-less and full-load throughput

Table 3: Total offered load = 100.32% of output link rate

Metric | Input 1 | Input 2 | Input 3 | Input 4 | Mean
Cell Loss Ratio | 0.00333 | 0.00381 | 0.00387 | 0.00278 | 0.00345
Frame Loss Ratio | 0.288 | 0.283 | 0.204 | 0.283 | 0.265

Table 4 presents the same results when the total offered load to the output link is 120% (= 4 x 30%) of its rate.

Table 4: Total offered load = 120% of output link rate

Metric | Input 1 | Input 2 | Input 3 | Input 4 | Mean
Cell Loss Ratio | 0.177 | 0.187 | 0.157 | 0.146 | 0.167
Frame Loss Ratio | 0.817 | 0.784 | 0.736 | 0.820 | 0.789

fig8

Figure 8: Throughput measurement configuration

Table 5 presents the same results when the total offered load to the output link is 400% (= 4 x 100%) of its rate.

Table 5: Total offered load = 400% of output link rate

Metric | Input 1 | Input 2 | Input 3 | Input 4 | Mean
Cell Loss Ratio | 0.742 | 0.743 | 0.743 | 0.744 | 0.743
Frame Loss Ratio | 1.0 | 1.0 | 1.0 | 1.0 | 1.0

From Table 3, it is observed that even with a load just slightly over the output link rate, the cell loss ratio is small but the frame loss ratio is high. The frame loss ratio is about two orders of magnitude larger than the cell loss ratio. Note that the frame loss ratio varies between the four traffic sources (within the range 20%-29%), resulting in some unfairness.
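This gap between the two ratios is roughly what one would expect for 106-cell frames: a frame counts as delivered only if every one of its cells arrives, so if cell losses were independent the frame loss ratio would be about 1 - (1 - CLR)^106. With the mean cell loss ratio of Table 3 this back-of-envelope estimate gives roughly 0.31, the same order as the measured 0.265 (the independence assumption is only approximate, since cell losses occur in bursts):

clr = 0.00345                              # mean cell loss ratio, Table 3
flr_if_independent = 1 - (1 - clr) ** 106  # about 0.31
flr_measured = 0.265                       # mean frame loss ratio, Table 3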

From Table 4, it is seen that with an offered load 20% over the output link rate, the frame loss ratio is considerable: 73% to 82% of the input frames are lost.

From Table 5, it is observed that with an offered load 300% over the output link rate (full load on each input), all input frames are lost.

Although the manufacturer of the ATM switch we tested claims that Early Packet Discard (EPD) is implemented, our tests did not show any improvement in frame loss rates with EPD on.

In conclusion, for the n-to-1 configurations, the loss-less throughput of the switch under test is 155 Mbps (i.e., equal to the output link rate). Obviously, in this case the loss-less throughput equals the peak throughput. Also, from the results presented in Table 5, we found that for this particular ATM switch the full-load throughput for the n-to-1 configuration is not meaningful, because even with EPD turned on practically all frames are lost.

4. Maximum Frame Burst Size (MFBS)

Maximum Frame Burst Size (MFBS) is the maximum number of frames that each of the source end systems can send at the peak rate through a system under test (SUT) without incurring any loss. MFBS measures the data buffering capability of the SUT and its ability to handle back-to-back frames [1, 3, 10].

Many applications and transport-layer protocol drivers often present a burst of frames to the AAL for transmission. For such applications, the Maximum Frame Burst Size provides a useful indication.

This metric is particularly relevant to the UBR service category, since UBR sources are always allowed to send a burst at the peak rate, whereas ABR sources may be throttled down to a lower rate if a switch runs out of buffers.

4.1. MFBS Measurement

Four virtual paths were set up inside the switch to carry test traffic from four different input ports to a single output port, as shown in Figure 9. Two of the input ports were 155 Mbps UTP ports; the other two were 155 Mbps OC-3 ports. The first UTP port also served as the output port for the tests. An ATM analyzer was used to generate the four traffic sources. Each source generator produced a burst of back-to-back cells and was coordinated with the other generators to produce identical bursts starting at the same instant. The size of the bursts was increased until losses were observed.

The maximum burst sizes that could be sent over each link without loss, the Maximum Cell Burst Size (MCBS), are summarized in Table 6. Burst sizes were adjusted with a 100-cell granularity, so the precision is +/- 50 cells. Although we repeated the experiments several times, the results were the same; there was no variation.
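The search itself is simple; the sketch below mirrors it. The helper send_coordinated_bursts() is hypothetical, standing for one coordinated run of all generators at a given per-source burst size and returning True if any loss was observed.

def find_mcbs(send_coordinated_bursts, step=100, start=100, limit=1_000_000):
    # Increase the per-source burst size in 100-cell steps until a loss is
    # observed; return the last loss-free size (precision +/- step / 2).
    burst, last_ok = start, 0
    while burst <= limit:
        if send_coordinated_bursts(burst):
            break
        last_ok = burst
        burst += step
    return last_ok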

Table 6. Measured MCBS per source.

Traffic Configuration | MCBS (per source)
2-to-1 | 9,050 cells
3-to-1 | 4,650 cells
4-to-1 | 3,050 cells

The MCBS is the largest number of back-to-back cells that all sources may send simultaneously without loss. The ratio of the measured values is as expected, indicating that the MCBS for all k-to-1 configurations (where k = 2, 3, ...) can be predicted from a single MCBS measurement for a given k. For example, in the 2-to-1 configuration, the results of this test imply that the switch can buffer about 9,050 cells on that output port: during each cell interval of the bursts, one cell can be transmitted by the switch and one cell must be buffered. In the 3-to-1 configuration, one would expect the MCBS to be one-half of the MCBS from the 2-to-1 configuration; while one cell can still be transmitted by the switch during each cell interval, it must now buffer two cells. Similarly, the 4-to-1 configuration would be expected to have an MCBS one-third of the 2-to-1 MCBS, as the switch must now buffer three cells during each cell interval. This is summarized in Table 7. So, given a measured MCBS for k inputs, the MCBS for j inputs can be calculated as:

MCBSj = MCBSk * (k - 1) / (j - 1)
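Applying this relation to the measured 2-to-1 value predicts the other configurations to within a few percent of the Table 6 measurements:

def predict_mcbs(mcbs_k, k, j):
    # MCBSj = MCBSk * (k - 1) / (j - 1)
    return mcbs_k * (k - 1) / (j - 1)

measured = {2: 9050, 3: 4650, 4: 3050}   # cells per source, from Table 6
for j in (3, 4):
    predicted = predict_mcbs(measured[2], k=2, j=j)
    print(f"{j}-to-1: predicted {predicted:.0f}, measured {measured[j]} cells")
# -> 3-to-1: predicted 4525 vs 4650 measured; 4-to-1: predicted 3017 vs 3050 measured.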

fig9

Figure 9: MFBS test configuration

Table 7. How the switch transmits and buffers cells in each cell interval.

Traffic Configuration | Transmits | Buffers
2-to-1 | 1 cell | 1 cell
3-to-1 | 1 cell | 2 cells
4-to-1 | 1 cell | 3 cells
i-to-1 | 1 cell | i-1 cells

The Maximum Frame Burst Size (MFBS) is the number of complete frames of a given size (including the AAL overhead) that fit within the bounds of the MCBS, expressed as a total number of data octets. The MFBS values from this test (assuming no AAL overhead) are summarized in Table 8. Again, the MFBS for various values of k and various frame sizes can be computed from one test; therefore, we conclude that it is not necessary to repeat the experiment for each value of k.

Table 8. MFBS values for each configuration and frame size.

Traffic Configuration | 64 B frames | 1518 B frames | 9188 B frames | 64 kB frames
2-to-1 | 434,368 B | 434,148 B | 431,836 B | 393,216 B
3-to-1 | 223,168 B | 223,146 B | 220,512 B | 196,608 B
4-to-1 | 146,368 B | 145,728 B | 137,820 B | 131,072 B
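The Table 8 entries follow directly from the Table 6 MCBS values: with no AAL overhead, each cell carries 48 payload octets, and the MFBS is that payload rounded down to a whole number of frames. A minimal sketch of the conversion, shown for the 2-to-1 configuration:

CELL_PAYLOAD = 48  # payload octets per ATM cell (no AAL overhead assumed)

def mfbs(mcbs_cells, frame_size):
    # Total octets in complete frames of the given size fitting in the burst.
    payload = mcbs_cells * CELL_PAYLOAD
    return (payload // frame_size) * frame_size

for size in (64, 1518, 9188, 64 * 1024):
    print(size, mfbs(9050, size))
# -> 434,368 B, 434,148 B, 431,836 B and 393,216 B, matching the 2-to-1 row of Table 8.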

5. Conclusions

In this paper we defined the following frame-level performance metrics: MIMO frame latency, throughput, and MFBS. MIMO latency, as a switch delay metric, accounts only for delays caused by node processing, not for transmission delays introduced by the communication links.

Three different types of throughput, namely loss-less, peak, and full-load, were defined. Our measurements show that the loss-less throughput is very close to the peak throughput and that the full-load throughput is generally close to zero.

MFBS measures the data buffering capability of the switch and its ability to handle back-to-back frames. A number of configurations were tested and it was shown that a 2-to-1 configuration is sufficient to evaluate the MFBS.

Methodologies to measure these metrics have also been developed. The experience presented in this paper has been the basis for our contributions to standardization bodies such as the ATM Forum and ANSI [7].

6. References:

[1] ATM Forum/BTD-TEST-TM-PERF.00.12 (Draft), February 1999.

[2] Gojko Babic, Raj Jain, Arjan Durresi, "Frame Delay Through ATM Switches: MIMO Latency," submitted to IEEE Communications Letters, February 1999. Available through http://www1.cse.ohio-state.edu/~jain/papers/index.html

[3] Gojko Babic, Raj Jain, Arjan Durresi, "ATM Performance Testing and QoS Management," in F. Golshani, Ed., "The IEC ATM Handbook," International Engineering Consortium, Chicago, IL, to be published, 1999.

[4] Arjan Durresi, Raj Jain, Gojko Babic, and Bruce Northcote, "Methodology for Implementing Scalable Test Configurations in ATM Switches," submitted to IFIP Broadband Communications'99, February 1999, http://www.cse.wustl.edu/~jain/papers/scalab.htm

[5] Gojko Babic, Arjan Durresi, Raj Jain, Justin Dolske, Shabbir Shahpurwala, "ATM Switch Performance Testing Experiences," ATM Forum/97-0178R1, April 1997, http://www.cse.wustl.edu/~jain/atmf/atm97-0178R1.htm

[6] Raj Jain and Gojko Babic, "Performance Testing Effort at the ATM Forum: An Overview," IEEE Communications Magazine, special issue on ATM performance, August 1997, 11 pp., http://www.cse.wustl.edu/~jain/papers/perf_com.htm

[7] Gojko Babic, Raj Jain, Arjan Durresi, "Frame Delay Through ATM Switches: MIMO Latency," ANSI T1A1.3/98-056, December 1998.

[8] Gojko Babic, Arjan Durresi, Justin Dolske, Raj Jain, "Measurement Experiences with the Revised MIMO Latency Definition," ATM Forum/97-0859, September 1997, http://www.cse.wustl.edu/~jain/atmf/atm97-0859.htm

[9] Arjan Durresi, Gojko Babic, Raj Jain, "Proposed Text for Performance Testing Terminology," ATM Forum/98-0411, July 1998, http://www.cse.wustl.edu/~jain/atmf/atm98-0411.htm

[10] Justin Dolske, Gojko Babic, Arjan Durresi, Raj Jain, "Testing Experiences and Modifications to Maximum Frame Burst Size (MFBS) Section of Performance Testing Baseline Text," ATM Forum/97-0833, September 1997, http://www.cse.wustl.edu/~jain/atmf/atm97-0833.htm

[11] Gojko Babic, Arjan Durresi, Raj Jain, Justin Dolske, "Modifications to the Throughput Section of Performance Testing Baseline," ATM Forum/97-0614, July 1997, http://www.cse.wustl.edu/~jain/atmf/atm97-0614.htm

[12] Gojko Babic, Arjan Durresi, Raj Jain, Justin Dolske, "Revised MIMO Definition," ATM Forum/97-0612, July 1997, http://www.cse.wustl.edu/~jain/atmf/atm97-0612.htm