Overload Based Explicit Rate Switch Schemes with MCR Guarantees

Bobby Vandalore, Sonia Fahmy, Raj Jain, Rohit Goyal, Mukul Goyal
The Ohio State University, Department of Computer and Information Science, Columbus, OH 43210-1277
Phone: 614-688-4482, Fax: 614-292-2911
E-mail: {vandalor, fahmy, jain, goyal, mukul}@cse.wustl.edu

Abstract

An explicit rate switch scheme monitors the load at each link and gives feedback to the sources. We define the overload factor as the ratio of the input rate to the available capacity. In this paper, we present three overload-based switch schemes which provide MCR (minimum cell rate) guarantees for the ATM (asynchronous transfer mode) ABR (available bit rate) service. The proposed switch schemes use the overload factor and other quantities, including the current source rate and the target utilization, to calculate feedback rates. A dynamic queue control mechanism is used to achieve efficient use of the link, control queues, and achieve a constant queuing delay at steady state. The proposed algorithms are studied and compared using several configurations, chosen to test the performance of the algorithms in the presence of link bottlenecks, source bottlenecks and transient sources. Finally, a comparison of the proposed algorithms based on the simulation results is given.

1 Introduction

ATM (asynchronous transfer mode) is the chosen technology for implementing B-ISDN (broadband integrated services digital network). ATM is a connection-oriented cell switching standard which uses fixed-size cells of 53 bytes. ATM offers different service categories for transporting different types of traffic. The ABR (available bit rate) service category in ATM is used to transport data traffic with a minimum rate guarantee. ABR users specify the minimum rate using the MCR parameter during connection setup. The ABR service guarantees that the ACR (allowed cell rate) is never less than the MCR.

(footnote 1: This research was sponsored in part by Rome Laboratory/C3BC Contract #F30602-96-C-0156.)

ABR uses closed-loop feedback to control the source rates (see Figure 1). The source periodically (after every Nrm - 1 data cells) sends an RM (resource management) cell to gather information from the network [1]. The RM cells are turned around at the destination. The switches along the path indicate the rate which they can currently support in the explicit rate (ER) field of the RM cell. When the source receives the backward RM cells, it adjusts its allowed rate accordingly.

Figure 1: ABR flow control. RM cells are sent periodically by the source and turned around at the destination. RM cells in the forward direction are called FRM cells and those in the backward direction are called BRM cells. The switches along the RM cell path indicate the rate which they can currently support.

The specification of the ABR feedback control algorithm (switch scheme) is not yet standardized. An early switch algorithm used binary feedback to advise the source of congestion [2]. Distributed algorithms for congestion control using explicit rate feedback were given in [3, 4]. Improved, simpler distributed algorithms were proposed in [5, 6, 7, 8, 9, 10]. Recently, a generalized definition of max-min fairness and its distributed implementation were presented in [11, 12]. A discussion of a weight-based max-min fairness policy and its implementation in the ABR service is given in [13]. The fairness criteria in the presence of MCR guarantees are discussed in [14, 15]. In related work, we have proposed a general definition of fairness and given an overload-based ABR switch scheme which provides MCR guarantees [16]. In this paper, we propose three additional algorithms which use the overload factor to calculate the explicit rate feedback. All the proposed algorithms provide MCR guarantees and achieve generalized fairness.
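The source-end behavior that all of the schemes below rely on can be sketched in a few lines. This is a simplified, illustrative Python sketch of the rate update at a source when a BRM cell arrives; the full TM 4.0 source rules [1] include further mechanisms (such as use-it-or-lose-it and open-loop reductions) that are omitted here, and all names are ours.

# Simplified sketch of the ABR source rate update on receipt of a backward
# RM (BRM) cell. Illustrative only, not the complete TM 4.0 source behavior.
def update_acr(acr, er, ci, ni, pcr, mcr, rif, rdf):
    if ci:                      # congestion indication: multiplicative decrease
        acr -= acr * rdf
    elif not ni:                # no-increase bit clear: additive increase
        acr += rif * pcr
    acr = min(acr, er, pcr)     # never exceed the explicit rate or the PCR
    return max(acr, mcr)        # the ABR guarantee: ACR never falls below MCR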
The load factor (also referred to as the "overload factor" or simply "overload") is the ratio of the measured input rate to the available ABR (available bit rate) capacity. The ERICA+ switch scheme monitors the load on the link and calculates feedback based on the load. It tries to achieve unit load on the links, in order to use each link efficiently and to converge to max-min fairness [1]. Max-min fairness assumes zero MCR values. In this paper, we use the generalized fairness with MCR as defined in [16].

Section 2 discusses the definition of generalized fairness used in the algorithms. Section 3 gives a brief overview of ERICA+ and then discusses the proposed algorithms. Section 4 describes the configurations used in the simulations. In Section 5 we present the simulation results; the simulations test whether the schemes provide MCR guarantees and converge to generalized fairness. A comparison of the algorithms based on the simulation results is given in Section 6. Finally, Section 7 gives the conclusions of the paper.

2 General Weighted Fairness: Definition

The ATM Forum Traffic Management Specification 4.0 [1] recommends several definitions of fairness to be used when MCRs are non-zero: equal share of the excess bandwidth, excess bandwidth in proportion to MCR, and bandwidth in proportion to predetermined weights. Here we give the definition (as defined in [16]) of generalized weighted fairness, which can realize all of the above fairness definitions. Define the following parameters:

A = available bandwidth to be shared by the connections bottlenecked on this link;
\mu_i = MCR of connection i;
\mu = \sum_{i=1}^{n} \mu_i = sum of the MCRs of the active connections bottlenecked on this link;
w_i = preassigned weight associated with connection i;
g_i = generalized weighted fair allocation for connection i.

The general weighted (GW) fair allocation is defined as follows:

    g_i = \mu_i + \frac{w_i (A - \mu)}{\sum_{j=1}^{n} w_j}

The excess available bandwidth (A - \mu) is divided in proportion to the predetermined weights.
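The GW fair allocation is straightforward to compute once A, the MCRs, and the weights are known. The following Python sketch is our own illustrative code (names are not from [16]) implementing the rule above:

# Illustrative computation of the GW fair allocation of Section 2.
def gw_allocation(A, mcr, w):
    """A: bandwidth available to the connections bottlenecked on this link;
    mcr: list of MCR values (mu_i); w: list of weights (w_i)."""
    excess = A - sum(mcr)                 # (A - mu), the excess bandwidth
    wsum = sum(w)
    return [m + wi * excess / wsum for m, wi in zip(mcr, w)]

# Example: a 149.76 Mbps link, MCRs of 10/30/50 Mbps and equal weights
# give (29.92, 49.92, 69.92) Mbps, matching Section 5.1.
print(gw_allocation(149.76, [10.0, 30.0, 50.0], [1.0, 1.0, 1.0]))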
3 The Switch Schemes

The general structure of the proposed algorithms is similar to that of ERICA+ [5]. We first briefly review the ERICA+ algorithm and then give the general structure of the proposed algorithms. The three proposed switch algorithms have the same structure and differ only in the way end-of-interval accounting is done and in the manner in which the explicit feedback rate is calculated. The switch algorithm with MCR guarantees discussed in [16] also has the same structure as these three algorithms.

3.1 Overview of ERICA+

ERICA+ operates at the output port of a switch. It periodically monitors the load and the number of active VCs, and provides feedback in the BRM (backward RM) cells. The measurement period is called the "averaging interval." Measurements are made in the forward direction and feedback is given in the reverse direction. A complete description of the ERICA+ algorithm can be found in [5].

In overload conditions, the algorithm calculates the feedback rate as the maximum of the FairShare (link capacity divided by the number of VCs) and the VCShare (current cell rate divided by the overload). In underload conditions, the feedback rate is the maximum of MaxAllocPrev (the maximum allocation given in the previous averaging interval) and the previous two terms. Part of the available ABR capacity is used for draining queues. The target ABR capacity is obtained by multiplying the total available ABR capacity by a Fraction term; 1 - Fraction of the link capacity is used to drain the queues. Fraction can be either a constant less than one (e.g., Fraction = 0.9 implies 90% link utilization) or a dynamic function of the switch queue length. ERICA+ uses two hyperbolic curves to control the queue in the underload and overload regions. To ensure that a minimum amount of link capacity is used for data, the Fraction value is limited to a minimum of QDLF (queue drain limit factor); typically a value of QDLF = 0.5 is used. The dynamic queue control function can be expressed as follows:

    f(Q) = \begin{cases} \frac{b Q_0}{(b-1)Q + Q_0}, & 0 \le Q \le Q_0 \\ \max\left(\mathrm{QDLF},\ \frac{a Q_0}{(a-1)Q + Q_0}\right), & Q_0 < Q \end{cases}

The "Target Delay" parameter specifies the desired queue length (Q_0) at steady state. The 'a-curve' (the hyperbola \frac{a Q_0}{(a-1)Q + Q_0}) is used as long as it evaluates to a value greater than QDLF. The design of queue control functions which enable switch algorithms to reduce rate oscillations, achieve constant delay and achieve high (100%) link utilization is discussed in detail in [17].
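The following Python sketch illustrates this dynamic queue control function, assuming the parameter values quoted above (a = 1.15, b = 1, QDLF = 0.5); the function name and signature are ours:

# Illustrative sketch of the two-hyperbola queue control function f(Q).
def queue_control_fraction(q, q0, a=1.15, b=1.0, qdlf=0.5):
    if q <= q0:
        # 'b-curve': equals b at Q = 0 and decays to 1 at Q = Q0
        return b * q0 / ((b - 1.0) * q + q0)
    # 'a-curve': equals 1 at Q = Q0 and decays toward 0, floored at QDLF
    return max(qdlf, a * q0 / ((a - 1.0) * q + q0))

Multiplying the available ABR capacity by f(Q) yields the target ABR capacity; the remaining 1 - f(Q) share of the capacity drains the queue toward the target delay.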
3.2 Overload Based Algorithm: General Structure

The three switch schemes share the following common algorithmic structure; they differ in the manner in which the feedback rate is calculated and in the accounting. The algorithms perform the following steps to calculate the feedback rate:

1. The problem of non-zero MCRs is reduced to one with zero MCRs. The MCR is subtracted from each connection's current cell rate (CCR(i)) to obtain the excess rate of each source over its MCR. If the source is transmitting at a rate less than its MCR, an excess rate of zero is used.

2. The switch algorithm for zero MCRs is applied to these excess rates to obtain the feedback rate. The excess available capacity (A - \mu) is divided in proportion to the predetermined weights.

3. The MCR of each source is added to the explicit feedback rate calculated in the previous step. The resulting rate is indicated in the ER field of BRM cells.

The sources initially transmit at their initial cell rate (ICR). After one round trip time, the feedback from the switches arrives at the sources, and the sources adjust their allowed rates accordingly. When a constant queue control function is used (say Fraction = 0.9), the rates converge to the GW fair allocation. When a dynamic queue control function is used, the available capacity varies with the queue length; the feedback rate therefore also varies until the queues are drained, after which the feedback rates converge to the GW fair allocation. We present simulation results using both the constant and dynamic queue control functions.

Overload Based Algorithm Structure:

At the end of each averaging interval:
    ABR Capacity <- Link Capacity - VBR Capacity - \sum_{i=1}^{n} \min(SR(i), \mu_i)
    TargetABRCap <- f(Q) x ABR Capacity
    Input Rate <- ABR Input Rate - \sum_{i=1}^{n} \min(SR(i), \mu_i)
    z <- Input Rate / TargetABRCap
    End_of_Interval_Accounting()

When an FRM cell is received:
    CCR(i) <- CCR field of the RM cell

When a BRM cell is received:
    Ex_ER <- Calculate_Excess_ER()
    ER <- \mu_i + Ex_ER
    ER field of the RM cell <- min(ER field of the RM cell, ER, TargetABRCap)

Here SR(i) denotes the measured rate of source i. The key steps that differentiate the algorithms are the procedures End_of_Interval_Accounting() and Calculate_Excess_ER().

3.3 Algorithm A: ExcessFairShare/Overload

The ExcessFairShare term is defined as follows:

    ExcessFairShare(i) = \frac{w_i (A - \mu)}{\sum_{j=1}^{n} w_j}

This divides the excess available bandwidth (A - \mu) of the ABR capacity in proportion to the weights w_i. An allocation of \mu_i + ExcessFairShare(i) for each source i is the GW fair allocation. A source might be bottlenecked at some other link and hence use less than its fair share of this link's capacity. Using the notion of activity level, the effective weight of such a bottlenecked source can be calculated. The activity level of a given VC is defined as follows:

    AL(i) = \min\left(1,\ \frac{SourceRate(i) - \mu_i}{ExcessFairShare(i)}\right)

The activity level can be used to accurately estimate the effective number of VCs, which is given by the following expression:

    Effective number of VCs = \sum_{j=1}^{n} AL(j)

The VCs which are bottlenecked by this link have an activity level of 1 and are counted as one VC each. The VCs bottlenecked elsewhere are counted as fractional VCs according to their activity levels. The proof that the above expression accurately estimates the effective number of VCs is given in [18]. We extend the notion of activity level to the weighted case by multiplying the weights by the activity levels in the ExcessFairShare term. The ExcessFairShare then becomes:

    ExcessFairShare(i) = \frac{w_i (A - \mu)}{\sum_{j=1}^{n} w_j AL(j)}

In this algorithm, Ex_ER is calculated from the ExcessFairShare and the overload factor z. For each source, the activity level and the ExcessFairShare are updated at the end of each interval. When a BRM cell arrives, the feedback rate is calculated as the ExcessFairShare term divided by the overload. The source rate (the VCShare term) is not used in the feedback calculation. If the network is overloaded (z > 1), Ex_ER decreases, since the fair share is divided by the overload factor z. In underload conditions (z < 1), the sources are asked to increase their rates. Due to the recursive definition of ExcessFairShare and the activity levels, these values converge. As the network reaches steady state, the overload approaches one and Ex_ER converges to the required ExcessFairShare, achieving the GW fair allocation. The proof of convergence is similar to the proof given in [16].

Since the overload factor varies every averaging interval, the rate oscillations increase for this algorithm. If the "averaging interval" is longer than the feedback loop, the source adjusts to the feedback rate before the next averaging interval. If the averaging interval is shorter, multiple feedbacks are given before the source can adjust to the feedback rate. In this situation the ExcessFairShare calculation is not accurate, since the source rate does not reflect the feedback rate. Exponential averaging of the ExcessFairShare term is used to overcome this problem.
Exponential averaging is done as follows:

    ExcessFairShare = \alpha \cdot ExcessFairShare + (1 - \alpha) \cdot PrevExcessFairShare

where PrevExcessFairShare is the value calculated for the previous averaging interval and the parameter \alpha is called the decay factor. In the simulations a value of \alpha = 0.9 was used.

End_of_Interval_Accounting():
    foreach VC i do
        AL(i) <- \min\left(1,\ \frac{SourceRate(i) - \mu_i}{ExcessFairShare(i)}\right)
        ExcessFairShare(i) <- TargetABRCap \cdot \frac{w_i AL(i)}{\sum_{j=1}^{n} w_j AL(j)}
    endfor

Calculate_Excess_ER():
    Ex_ER <- ExcessFairShare(i) / z

3.4 Algorithm B: MaxAllocation/Overload

Let the GW fair allocation be (g_1, g_2, ..., g_n) for n bottlenecked sources. The excess bandwidth is divided in proportion to the weights: (g_i - \mu_i)/w_i = (g_j - \mu_j)/w_j. Let m be the VC for which the term (g_m - \mu_m)/w_m is the maximum of such terms over all VCs. An allocation which assigns rates \mu_i + w_i (g_m - \mu_m)/w_m achieves GW fairness. The term (g_m - \mu_m)/w_m is defined as the weighted maximum allocation. In algorithm B, the feedback is calculated as a term proportional to the weight of the VC: the weighted maximum allocation is divided by the overload. The overload in the denominator increases or decreases the allocation depending on the load. The source rate is not used in the feedback calculation. This algorithm can give rise to large queues if the weighted maximum allocation is measured incorrectly; this problem of large queues occurred during the simulation of this algorithm on the GFC-2 configuration.

End_of_Interval_Accounting():
    WtMaxAllocPrev <- WtMaxAllocCur
    WtMaxAllocCur <- 0

Calculate_Excess_ER():
    Ex_ER <- w(i) \cdot WtMaxAllocPrev / z
    WtMaxAllocCur <- max(WtMaxAllocCur, Ex_ER / w(i))

3.5 Algorithm C: VCShare and MaxAllocation

In this algorithm, End_of_Interval_Accounting() is the same as in the previous algorithm (algorithm B). Ex_ER is calculated from the weighted maximum previous allocation (WtMaxAllocPrev) and the VCShare under underloaded conditions (z < 1 + \delta). Under overloaded conditions, the VCShare is given as the feedback rate. The problem of large queues described for the previous algorithm is not expected to occur, since the maximum previous allocation is given as feedback only if the link is underloaded. As the overload converges to one, the VCShare converges to w(i)(g_m - \mu_m)/w(m), achieving the GW fair allocation.

Calculate_Excess_ER():
    VCShare <- max(0, SourceRate(i) - \mu_i) / z
    IF (z > 1 + \delta) THEN
        Ex_ER <- VCShare
    ELSE
        Ex_ER <- max(w(i) \cdot WtMaxAllocPrev, VCShare)
    WtMaxAllocCur <- max(WtMaxAllocCur, Ex_ER / w(i))
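For comparison, the feedback computations of the three algorithms can be collected into one illustrative Python sketch. The names mirror the pseudocode above; this sketches only the per-BRM-cell calculation, not a complete switch implementation.

# Illustrative consolidation of the three Calculate_Excess_ER() variants
# of Sections 3.3-3.5.

def excess_er_a(excess_fair_share_i, z):
    # Algorithm A: ExcessFairShare divided by the overload factor
    return excess_fair_share_i / z

def excess_er_b(w_i, wt_max_alloc_prev, z):
    # Algorithm B: weighted maximum previous allocation divided by the overload
    return w_i * wt_max_alloc_prev / z

def excess_er_c(source_rate_i, mcr_i, w_i, wt_max_alloc_prev, z, delta=0.1):
    # Algorithm C: VCShare when overloaded; otherwise the larger of VCShare
    # and the weighted maximum previous allocation
    vc_share = max(0.0, source_rate_i - mcr_i) / z
    if z > 1.0 + delta:
        return vc_share
    return max(w_i * wt_max_alloc_prev, vc_share)

In all three cases the switch then sets ER = \mu_i + Ex_ER and writes min(ER field, ER, TargetABRCap) into the BRM cell, as in the common structure of Section 3.2.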
4 Simulation Configurations

We used simple, link bottleneck and source bottleneck configurations to test the proposed algorithms. Infinite sources (which have an infinite amount of data to send and always send data at ACR) were used in all the simulations. With infinite sources, the rates are expected to converge to the GW fair allocation values. The algorithms are expected to give minimum rate guarantees for Poisson or self-similar sources as well; these types of sources were not used because the GW fair allocation for sources with varying rates also varies dynamically. The data traffic is one way only, from source to destination. Two-way traffic would produce similar results, except that the convergence time would be larger, since the RM cells in the backward direction would travel with the traffic from destination to source. All link bandwidths are 149.76 Mbps (155.52 Mbps less the SONET overhead), except in the GFC-2 configuration.

4.1 Three Sources

This is a simple configuration in which three sources send data to three destinations over two switches and a bottleneck link (see Figure 2). This configuration is used to demonstrate that the switch algorithms can achieve GW fairness.

4.2 Source Bottleneck

In this configuration, the source S1 is bottlenecked to a rate (10 Mbps) below its fair share (50 Mbps) for the first 400 ms of the simulation (see Figure 3). This configuration tests whether the fairness criterion can be achieved in the presence of a source bottleneck.

Figure 2: N Sources - N Destinations Configuration
Figure 3: 3 Sources - Bottleneck Configuration

4.3 Generic Fairness Configuration - 2 (GFC-2)

This configuration is a combination of the upstream and parking lot configurations (see Figure 4). In this configuration, all the links are bottlenecked links. The configuration is explained in [19].

4.4 Simulation Parameters

The simulations were done using an extensively modified version of the NIST ATM simulator [20]. The parameter values for the different configurations are given in Table 1. The algorithms were simulated using both a constant queue control function (Fraction = 0.9) and a dynamic queue control function. For dynamic queue control, hyperbolic functions were used, with curve parameters a = 1.15 and b = 1; the QDLF value was set to 0.5.

Exponential averaging was used to decrease the variation in measured quantities such as the overload and the number of VCs. For algorithm C, exponential averaging of the overload factor and the number of VCs was done with a decay factor of 0.8. Algorithms A and B are more sensitive to the overload factor, so a decay factor of 0.4 was used to average the overload in these algorithms. A weight function of one was used in all configurations; this corresponds to MCR plus equal share of the excess bandwidth. The value \delta = 0.1 was used for algorithm C. In [16] it was shown that an overload-based explicit rate algorithm achieves GW fairness for various weight functions.

Table 1: Simulation Parameter Values

  Configuration Name | Link Dist | Averaging Interval | Target Delay | Wt Func
  Three Sources      | 1000 km   | 5 ms               | 1.5 ms       | 1
  Src Bottleneck     | 1000 km   | 5 ms               | 1.5 ms       | 1
  GFC-2              | 1000 km   | 15 ms              | 1.5 ms       | 1

Table 2: GW fair allocation (Mbps) for different configurations

  Config. and Queue Control        | Src 1 | Src 2 | Src 3
  Simple and CQF                   | 24.93 | 44.93 | 64.93
  Simple and DQF                   | 29.92 | 49.92 | 69.92
  Src Bottleneck + CQF (0-0.4 s)   | 32.39 | 52.39 | 72.39
  Src Bottleneck + DQF (0-0.4 s)   | 39.86 | 59.86 | 79.86
  Src Bottleneck + CQF (0.4-0.8 s) | 24.93 | 44.93 | 64.93
  Src Bottleneck + DQF (0.4-0.8 s) | 29.92 | 49.92 | 69.92

5 Simulation Results

In this section we present the simulation results for the algorithms on the different configurations. Table 2 gives the expected GW fair allocation with the constant queue control function (shown as configuration name and CQF) and with the dynamic queue control function (shown as configuration name and DQF) for each simulation using the three-source configuration. The queues took longer to drain when the constant queue control function was used. The bottleneck link utilization was 90% when constant queue control was used, as expected. With dynamic queue control, the algorithms achieved 100% link utilization on bottlenecked links.
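The constant queue control entries in Table 2 follow directly from the GW rule of Section 2. For the three-source configuration, assuming Fraction = 0.9 is applied to the full 149.76 Mbps link rate:

    \mathrm{TargetABRCap} = 0.9 \times 149.76 = 134.78\ \mathrm{Mbps}
    A - \mu = 134.78 - (10 + 30 + 50) = 44.78\ \mathrm{Mbps}
    g_i = \mu_i + \tfrac{44.78}{3} \Rightarrow (24.93,\ 44.93,\ 64.93)\ \mathrm{Mbps}

With dynamic queue control, f(Q) converges to 1 at steady state, so the full 149.76 Mbps is shared and the same computation gives (29.92, 49.92, 69.92) Mbps.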
5.1 Three Sources: Results

The MCR values for the three-source configuration are 10, 30 and 50 Mbps for source 1, source 2 and source 3 respectively. For the simulation with dynamic queue control, the excess bandwidth (149.76 - 90 =) 59.76 Mbps is divided equally among the three sources. The expected allocation is (10+59.76/3, 30+59.76/3, 50+59.76/3) = (29.92, 49.92, 69.92) Mbps. Figures 5(a)-(c) show the ACRs for algorithms A, B and C (with dynamic queue control) respectively. The graphs show that the expected allocation is achieved by all three algorithms.

5.2 Source Bottleneck: Results

In this configuration the MCR values (10, 30, 50) Mbps were used. The total simulation time was 800 ms. The source S1 is bottlenecked at 10 Mbps for the first 400 ms of the simulation: it sends data at no more than 10 Mbps even when its ACR is larger than 10 Mbps. Figures 6(a)-(c) show the ACRs for algorithms A, B and C with constant queue control respectively. The expected allocation is (32.39, 52.39, 72.39) for the first 400 ms and (29.92, 49.92, 69.92) between 400 ms and 800 ms. The algorithms do converge to the expected allocation. Algorithm A has smaller rate oscillations and converges faster than algorithms B and C.

5.3 GFC-2: Results

An MCR value of zero was used for all sources except the type A sources, which had an MCR of 5 Mbps. The expected allocation for each type of VC using constant queue control and dynamic queue control is given in Table 3. Figures 7(a)-(c) show the ACRs of VC types A through H for algorithms A, B and C with constant queue control respectively. Algorithm A converges to the expected allocation. Algorithm B with constant queue control does not converge to the expected GW fair allocation within the simulation time, as seen in Figure 7(f). This is due to the presence of sources with different round trip times sharing the same bottleneck link. In such a case, the sources with larger round trip times take a long time to adjust their rates compared to the ones with smaller round trip times. For example, the maximum allocation for a type A VC, which has a large round trip time, was assigned to seven VCs of type G, which have small round trip times. This also led to large switch queues and slow convergence for algorithm B. The input rate at the link between SW6 and SW7 is overloaded by a factor of seven, which gives rise to the huge queues.

Table 3: GFC-2 configuration: expected allocations (Mbps) when using DQF and CQF for each type of VC

       | A     | B   | C     | D     | E     | F    | G   | H
  DQF  | 11.25 | 5   | 33.75 | 33.75 | 33.75 | 6.25 | 5   | 50.62
  CQF  | 9     | 4.5 | 31.5  | 31.5  | 31.5  | 9    | 4.5 | 47.25
6 Comparison of Switch Schemes

Table 4 gives a comparison of the algorithms. Algorithm A has higher complexity than the other two algorithms. Algorithm B results in large switch queues in the presence of multiple round trip times. Algorithm C is the best of the proposed algorithms, since it has O(1) complexity both at the end of the interval and when a BRM cell arrives. Also, algorithm C has smaller queues than algorithm B.

Table 4: Comparison of the algorithms

  Algorithm | End of Intrvl Complexity | Feedback Complexity | Max Q Len | Sensitivity to Q Cntrl
  A         | O(N)                     | O(1)                | Med       | High
  B         | O(1)                     | O(1)                | Large     | Low
  C         | O(1)                     | O(1)                | Med       | Low

7 Conclusion

In this paper, we have presented three algorithms which achieve GW fairness and provide MCR guarantees. The algorithms monitor the load on the link and calculate the overload factor. The overload and other quantities (ExcessFairShare or WtMaxAllocPrev) are used to calculate the feedback rates. The proposed algorithms have a similar structure, differing in the end-of-interval accounting and the feedback calculation. Simulations show that the algorithms converge to GW fairness in most cases. Queue control can be done using a constant function or a dynamic (hyperbolic) function. Algorithm A has O(N) complexity for the end-of-interval calculations. Algorithm B can give rise to large queues if the configuration has sources with different round trip times sharing the same link. Algorithm C, which uses the VCShare and WtMaxAllocPrev terms, is the best, since it has O(1) complexity and is less sensitive to the queue control function.

References

[1] Shirish S. Sathaye. "ATM Forum Traffic Management Specification Version 4.0". April 1996.
[2] Nanying Yin and M. G. Hluchyj. "On closed-loop rate control for ATM cell relay networks". In Proc. of IEEE INFOCOM, pp. 99-108, 1994.
[3] Anna Charny. "An algorithm for rate allocation in a packet-switching network with feedback". Master's thesis, MIT, Cambridge, May 1994.
[4] Danny H. K. Tsang and Wales K. F. Wong. "A new rate-based switch algorithm for ABR traffic to achieve max-min fairness with analytical approximation and delay adjustment". In Proc. IEEE Globecom'96.
[5] Shivkumar Kalyanaraman, Raj Jain, Rohit Goyal, Sonia Fahmy, and Bobby Vandalore. "The ERICA switch algorithm for ABR traffic management in ATM networks". (see footnote 2) Submitted to IEEE/ACM Transactions on Networking, April 1999.
[6] C. Fulton, San-Qi Li and C. S. Lim. "UT: ABR feedback control with tracking". Preprint.
[7] L. Kalampoukas, A. Varma and K. K. Ramakrishnan. "An efficient rate allocation algorithm for ATM networks providing max-min fairness". In Proc. of the 6th IFIP International Conference on High Performance Networking, September 1995.
[8] K. Siu and T. Tzeng. "Intelligent congestion control for ABR service in ATM networks". Computer Communication Review, vol. 24, no. 5, pp. 81-106, October 1995.
[9] Y. Afek, Y. Mansour, and Z. Ostfeld. "Phantom: A simple and effective flow control scheme". In Proc. of ACM SIGCOMM, August 1996.
[10] L. Roberts. "Enhanced PCRA (Proportional Rate Control Algorithm)". ATM Forum Contribution AF-TM 94-0735R1, August 1994.
[11] Yiwei T. Hou, Henry H.-Y. Tzeng and Shivendra S. Panwar. "A generalized max-min rate allocation policy and its distributed implementation using the ABR flow control mechanism". In Proc. of IEEE INFOCOM, April 1998.
[12] Santosh P. Abraham and Anurag Kumar. "A stochastic approximation approach for a max-min fair adaptive rate control of ABR sessions with MCRs". In Proc. of IEEE INFOCOM, April 1998.
Panwar. "A Simple ABR switch algo- rithm for the weighted max-min fairness policy". In Proc. IEEE ATM'97 Workshop, pp. 329-338, Lisbon, Portugal, May 25-28, 1997. [14] D. Hughes. "Fair share in the context of MCR". ATM Forum contribution/AF-TM 94-0977, October 1994. [15] N. Yin. "Max-min fairness vs. MCR guarantee on bandwidth allocation for ABR". In Proc. of IEEE ATM'96 Workshop, San Franscisco, CA, August 25-27, 1996. [16] Bobby Vandalore, Sonia Fahmy, Raj Jain, Rohit Goyal, Mukul Goyal, "A Definition Definition of General Weighted Fairness and its Support in Ex- plicit Rate Switch Algorithms", ICNP `98, Austin, Texas, October 1998, pp 22-30. _________________________________________ (footnote 2) All our papers and ATM Forum contributions are available through http://www.cse.ohio- state.edu/"jain/ [17] Bobby Vandalore, Raj Jain, Rohit Goyal, Sonia Fahmy "Design and Analysis of Queue Control Functions for Explicit Rate Switch Schemes", IC3N'98, October 1998, pp 780-786. [18] Sonia Fahmy, Raj Jain, Shivkumar Kalyanaraman, Rohit Goyal and Bobby Vandalore, "On Determining the Fair Bandwidth Share for ABR Connec- tions in ATM Networks," Proc. of the IEEE International Conference on Communications (ICC) 1998, June 1998 [19] Robert J. Simcoe. "Test configurations for fairness and other tests". ATM Forum/94-0557, July 1994. [20] Nada Golmie. "Netsim: network simulator". http://www.nist.gov/. Figure 4: Generic Fairness Configuration - 2 (a) Algorithm A and CQF (b) Algorithm B and CQF (a) Algorithm A and DQF (b) AlgorithmiB and DQF Figure : GFC-2 config:(c)AAlgorithmCCRandgDQFraphs for algorithms A, B, and C. Figure 5: Three Sources: ACR graphs for algorithms A, B, and C. (a) Algorithm A with CQF (b) Algorithm B with CQF (c) Algorithm C with CQF Figure 6: Source Bottleneck: ACR graphs for Algorithm A, B, and C.