Traffic Management for TCP/IP over Satellite-ATM Networks

 

Abstract

Several Ka-band satellite systems have been proposed that will use ATM technology to seamlessly transport Internet traffic. The ATM UBR, GFR and ABR service categories have been designed for data. However, several studies have reported poor TCP performance over satellite-ATM networks. We describe techniques to improve TCP performance over satellite-ATM networks. We first discuss the various design options available for TCP end-systems, IP-ATM edge devices as well as ATM switches for long latency connections. We discuss buffer management policies, guaranteed rate services, and the virtual source/virtual destination option in ATM. We present a comparison of ATM service categories for TCP transport over satellite links. The main goal of this paper is to discuss design and performance issues for the transport of TCP over the UBR, GFR and ABR services in satellite-ATM networks.

  1. Introduction

    ATM technology is expected to provide quality of service based networks that support voice, video and data applications. ATM was originally designed for fiber based terrestrial networks that exhibit low latencies and low error rates. With the widespread availability of multimedia technology, and an increasing demand for electronic connectivity across the world, satellite networks play an indispensable role in the deployment of global networks. Ka-band satellites using the gigahertz frequency spectrum can reach user terminals across most of the populated world. As a result, ATM based satellite networks can effectively provide real time as well as non-real time communications services to remote areas.

    However, satellite systems have several inherent constraints. The resources of the satellite communication network, especially the satellite and the earth station are expensive and typically have low redundancy; these must be robust and be used efficiently. The large delays in GEO systems, and delay variations in LEO systems, affect both real time and non-real time applications. In an acknowledgment and timeout based congestion control mechanism (like TCP), performance is inherently related to the delay-bandwidth product of the connection. Moreover, TCP Round Trip Time (RTT) measurements are sensitive to delay variations that may cause false timeouts and retransmissions. As a result, the congestion control issues for broadband satellite networks are somewhat different from those of low latency terrestrial networks. Both interoperability issues, as well as performance issues need to be addressed before a transport layer protocol like TCP can satisfactorily work over long latency satellite-ATM networks.
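    The dependence on the delay-bandwidth product can be made concrete with a short calculation. The sketch below uses illustrative numbers (a 155 Mb/s link and a 550 ms GEO round trip, not values from a specific system) to show both the window needed to fill such a link and the throughput ceiling imposed by TCP's default 64 KB window:

```python
# Maximum TCP throughput is bounded by window / RTT, so filling a link
# requires a window of at least bandwidth * RTT (the delay-bandwidth product).

def required_window_bytes(bandwidth_bps: float, rtt_s: float) -> float:
    """Delay-bandwidth product: bytes in flight needed to fill the pipe."""
    return bandwidth_bps / 8 * rtt_s

def max_throughput_bps(window_bytes: float, rtt_s: float) -> float:
    """Throughput ceiling imposed by a fixed window."""
    return window_bytes * 8 / rtt_s

# GEO example (illustrative): 155 Mb/s link, ~550 ms round trip.
dbp = required_window_bytes(155e6, 0.550)   # bytes in flight to fill the link
limited = max_throughput_bps(65535, 0.550)  # ceiling with a default 64 KB window
print(f"window needed: {dbp/1e6:.1f} MB, "
      f"64KB-window throughput: {limited/1e6:.2f} Mb/s")
```

    With these numbers the connection needs over 10 MB in flight, while a default 64 KB window caps throughput below 1 Mb/s, which is why window scaling and careful congestion control matter on GEO paths.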

    In this paper, we describe the various design options for improving the performance of TCP/IP over satellite-ATM networks. The next section describes the ATM service categories and options available to TCP/IP traffic. We then describe each ATM design option as well as TCP mechanism, and evaluate their performance over satellite networks. We conclude with a comparison of ATM service categories for TCP transport over satellite links.

  2. Design Issues for TCP/IP over Satellite-ATM

    Satellite-ATM networks can be used to provide broadband access to remote locations, as well as to serve as an alternative to fiber based backbone networks. In either case, a single satellite is designed to support thousands of earth terminals. The earth terminals set up VCs through the on-board satellite switches to transfer ATM cells among one another. Because of the limited capacity of a satellite switch, each earth terminal has a limited number of VCs it can use for TCP/IP data transport. In backbone networks, these earth terminals are IP-ATM edge devices that terminate ATM connections, and route IP traffic in and out of the ATM network. These high capacity backbone routers must handle thousands of simultaneous IP flows. As a result, the routers must be able to aggregate multiple IP flows onto individual VCs. Flow classification may be done by means of a QoS manager that can use IP source-destination address pairs, as well as transport layer port numbers. The QoS manager can further classify IP packets into flows based on the differentiated services priority levels in the TOS byte of the IP header.

    In addition to flow and VC management, the earth terminals must also provide means for congestion control between the IP network and the ATM network. The on-board ATM switches must perform traffic management at the cell and the VC levels. In addition, TCP hosts can implement various TCP flow and congestion control mechanisms for effective network bandwidth utilization. Figure 1 illustrates a framework for the various design options available to networks and TCP hosts for congestion control. The techniques in the figure can be used to implement various ATM services in the network. Enhancements that perform intelligent buffer management policies at the switches can be developed for UBR to improve transport layer throughput and fairness. A policy for selective cell drop based on per-VC accounting can be used to improve fairness.

    Providing a minimum Guaranteed Rate (GR) to the UBR traffic has been discussed as a possible candidate to improve TCP performance over UBR. The goal of providing guaranteed rate is to protect the UBR service category from total bandwidth starvation, and provide a continuous minimum bandwidth guarantee. It has been shown that in the presence of high load of higher priority Constant Bit Rate (CBR), Variable Bit Rate (VBR) and Available Bit Rate (ABR) traffic, TCP congestion control mechanisms benefit from a guaranteed minimum rate.

    Guaranteed Frame Rate (GFR) has been recently proposed in the ATM Forum as an enhancement to the UBR service category. Guaranteed Frame Rate will provide a minimum rate guarantee to VCs at the frame level. The GFR service also allows for the fair usage of any extra network bandwidth. GFR is likely to be used by applications that can neither specify the traffic parameters needed for a VBR VC, nor have capability for ABR (for rate based feedback control). Current internetworking applications fall into this category, and are not designed to run over QoS based networks. Routers separated by satellite-ATM networks can use the GFR service to establish guaranteed rate VCs between one another. GFR and GR can be implemented using per-VC queuing or buffer management.

    The Available Bit Rate (ABR) service category is another option for implementing TCP/IP over ATM. ABR is specified by a Peak Cell Rate (PCR) and a Minimum Cell Rate (MCR), which is guaranteed by the network. ABR connections use a rate-based, closed-loop, end-to-end feedback-control mechanism for congestion control. The network tries to maintain a low Cell Loss Ratio by changing the Allowed Cell Rate (ACR) at which a source can send. Switches can also use the virtual source/virtual destination (VS/VD) feature to segment the ABR control loop into smaller loops. Studies have indicated that ABR with VS/VD can effectively reduce the buffer requirements for TCP over ATM, especially for long delay paths. ABR can be implemented using the feedback control mechanisms in Figure 1.

    In addition to network based drop policies, end-to-end flow control and congestion control policies can be effective in improving TCP performance over UBR. The fast retransmit and recovery mechanism [FRR] can be used in addition to slow start and congestion avoidance to recover quickly from isolated segment losses. The selective acknowledgments (SACK) option has been proposed to recover quickly from multiple segment losses. A change to TCP's fast retransmit and recovery has been suggested in [HOE96] and [FLOYD98]. The use of performance enhancing TCP gateways to improve performance over satellite links has also been proposed in recent studies. The following sections discuss the design and performance issues for TCP over the UBR, GFR and ABR services for satellite networks.

    Figure 1: Design Issues for TCP over ATM

  3. TCP over UBR

    In its simplest form, an ATM switch implements a tail drop policy for the UBR service category. If cells are dropped, the TCP source loses time waiting for the retransmission timeout. Even though TCP congestion mechanisms effectively recover from loss, the link efficiency can be very low, especially for large delay-bandwidth networks. In general, link efficiency increases with increasing buffer size. The performance of TCP over UBR can be improved using buffer management policies. In addition, TCP performance is also affected by TCP congestion control mechanisms and TCP parameters such as segment size, timer granularity, receiver window size, slow start threshold, and initial window size.

    TCP Reno implements the fast retransmit and recovery algorithms that enable the connection to quickly recover from isolated segment losses. However fast retransmit and recovery cannot efficiently recover from multiple packet losses within the same window. A modification to Reno is proposed in [HOE96] so that the sender can recover from multiple packet losses without having to time out.

    TCP with Selective Acknowledgments (SACK TCP) is designed to recover efficiently from multiple segment losses. With SACK, the sender can recover from multiple dropped segments in about one round trip. Comparisons of TCP drop policies for persistent and WWW traffic over satellite-ATM are presented in [GOYAL97, MUKUL98]. These studies show that in low delay networks, the effect of network based buffer management policies is very important and can dominate the effect of SACK. The throughput improvement provided by SACK is very significant for long latency connections. When the propagation delay is large, a timeout results in the loss of a significant amount of time during slow start from a window of one segment. Reno TCP (with fast retransmit and recovery) results in the worst performance for multiple packet losses, because timeout occurs at a much lower window than with vanilla TCP. With SACK TCP, a timeout is avoided most of the time, and recovery completes within a small number of round trips. Even if a timeout occurs, the recovery is as fast as slow start, but some time may be lost in the earlier retransmissions. For lower delay satellite networks (LEOs), both NewReno and SACK TCP provide high throughput, but as the latency increases, SACK significantly outperforms NewReno, Reno and vanilla TCP.
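    The cost of a timeout on a long-latency path can be estimated from slow start's doubling behavior: rebuilding a window of W segments from one segment takes about log2(W) round trips. The sketch below is illustrative; the window size and RTT values are assumptions, not figures from the cited studies:

```python
import math

def slow_start_rtts(target_window_segments: int) -> int:
    """RTTs for slow start to grow from 1 segment to the target window
    (the window roughly doubles each RTT)."""
    return math.ceil(math.log2(target_window_segments))

def time_lost_after_timeout(target_window_segments: int, rtt_s: float) -> float:
    """Approximate time to rebuild the window after a retransmission timeout."""
    return slow_start_rtts(target_window_segments) * rtt_s

# A GEO path (~550 ms RTT) vs. a LEO path (~50 ms RTT); illustrative values.
for rtt in (0.550, 0.050):
    print(f"RTT {rtt*1000:.0f} ms: ~{time_lost_after_timeout(256, rtt):.2f} s "
          f"to rebuild a 256-segment window")
```

    The same timeout that costs a LEO connection a fraction of a second costs a GEO connection several seconds, which is why avoiding timeouts (as SACK does) matters most at high latency.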

    1. UBR+: Enhancements to UBR

      Recent research has focused on fair buffer management for best effort network traffic. In these proposals, packets are dropped when the buffer occupancy exceeds a certain threshold. Most buffer management schemes improve the efficiency of TCP over UBR. However, only some of the schemes affect the fairness properties of TCP over UBR. The proposals for buffer management can be classified into four groups based on whether they maintain multiple buffer occupancies (Multiple Accounting -- MA) or a single global buffer occupancy (Single Accounting -- SA), and whether they use multiple discard thresholds (Multiple Thresholds -- MT) or a single global discard threshold (Single Threshold -- ST). Table 1 lists the four classes of buffer management schemes and examples of schemes for these classes. The schemes are briefly discussed below.

      The SA schemes maintain a single count of the number of cells currently in the buffer. The MA schemes classify the traffic into several classes and maintain a separate count for the number of cells in the buffer for each class. Typically, each class corresponds to a single connection, and these schemes maintain per-connection occupancies. In cases where the number of connections far exceeds the buffer size, the added overhead of per-connection accounting may be very expensive. In this case, a set of active connections can be defined as those connections with at least one packet in the buffer, and only the buffer occupancies of active connections need to be maintained.
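      A minimal sketch of this active-connection bookkeeping, assuming a dictionary keyed by VC identifier (the interface is hypothetical):

```python
class PerVCAccounting:
    """Track buffer occupancy only for 'active' VCs (those with at least one
    cell buffered), so state stays proportional to the number of active VCs
    rather than to all configured connections."""

    def __init__(self):
        self.occupancy = {}   # vc_id -> cells currently buffered

    def cell_enqueued(self, vc_id):
        self.occupancy[vc_id] = self.occupancy.get(vc_id, 0) + 1

    def cell_dequeued(self, vc_id):
        self.occupancy[vc_id] -= 1
        if self.occupancy[vc_id] == 0:
            del self.occupancy[vc_id]   # VC no longer active: drop its state

    def active_vcs(self):
        return len(self.occupancy)
```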

      Table 1: Classification of Buffer Management Schemes

      | Buffer Management Class | Examples          | Threshold Type | Drop Type     | Tag Sensitive | Fairness                          |
      |-------------------------|-------------------|----------------|---------------|---------------|-----------------------------------|
      | SA--ST                  | EPD, PPD          | Static         | Deterministic | No            | None                              |
      | SA--ST                  | RED               | Static         | Probabilistic | No            | Equal allocation in limited cases |
      | MA--ST                  | FRED              | Dynamic        | Probabilistic | No            | Equal allocation                  |
      | MA--ST                  | SD, FBA           | Dynamic        | Deterministic | No            | Equal allocation                  |
      | MA--ST                  | VQ+Dynamic EPD    | Dynamic        | Deterministic | No            | Equal allocation                  |
      | MA--MT                  | PME+ERED          | Static         | Probabilistic | Yes           | MCR guarantee                     |
      | MA--MT                  | DFBA              | Dynamic        | Probabilistic | Yes           | MCR guarantee                     |
      | MA--MT                  | VQ+MCR scheduling | Dynamic        | Deterministic | No            | MCR guarantee                     |
      | SA--MT                  | Priority Drop     | Static         | Deterministic | Yes           | --                                |

      Single threshold (ST) schemes compare the buffer occupancy (or occupancies) with a single threshold and drop packets when the buffer occupancy exceeds the threshold. Multiple thresholds (MT) can be maintained corresponding to classes, connections, or to provide differentiated services. Several modifications to this drop behavior can be implemented, including averaging buffer occupancies, static versus dynamic thresholds, deterministic versus probabilistic discards, and discard levels based on packet tags. Examples of packet tags are the CLP bit in ATM cells or the TOS octet in the IP header of the IETF's differentiated services architecture.

      The SA-ST schemes include Early Packet Discard (EPD), Partial Packet Discard (PPD) [ROMANOV95] and Random Early Detection (RED) [FLOYD93]. EPD and PPD improve network efficiency because they minimize the transmission of partial packets by the network. Since they do not discriminate between connections in dropping packets, these schemes are unfair in allocating bandwidth to competing connections [GOYAL98b],[LI96]. Random Early Detection (RED) maintains a global threshold for the average queue. When the average queue exceeds this threshold, RED drops packets probabilistically using a uniform random variable as the drop probability.
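      The following sketch contrasts the two SA-ST approaches: EPD's deterministic frame-level cutoff and RED's probabilistic drop driven by an exponentially weighted moving average of the queue length. Parameter names and default values are illustrative, not taken from the cited papers:

```python
import random

def epd_accept_frame(queue_len: int, threshold: int) -> bool:
    """Early Packet Discard: once the queue exceeds the threshold, refuse
    entire new frames so the network never forwards partial packets."""
    return queue_len <= threshold

class RED:
    """Minimal RED sketch: probabilistic drop based on an EWMA of queue
    length.  Parameter defaults are illustrative."""

    def __init__(self, min_th, max_th, max_p=0.1, weight=0.002):
        self.min_th, self.max_th = min_th, max_th
        self.max_p, self.weight = max_p, weight
        self.avg = 0.0

    def accept(self, queue_len: int) -> bool:
        self.avg += self.weight * (queue_len - self.avg)   # EWMA update
        if self.avg < self.min_th:
            return True
        if self.avg >= self.max_th:
            return False
        # Drop probability rises linearly between the two thresholds.
        p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
        return random.random() >= p
```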

      However, it has been shown in [LIN97] that RED cannot guarantee equal bandwidth sharing. The paper also contains a proposal for Fair Random Early Drop (FRED). FRED maintains per-connection buffer occupancies and drops packets probabilistically if the per-connection occupancy exceeds the average queue length. In addition, FRED ensures that each connection has at least a minimum number of packets in the queue. FRED can be classified as one that maintains per-connection queue lengths, but has a global threshold (MA-ST).

      The Selective Drop (SD) [GOYAL98b] and Fair Buffer Allocation (FBA) [HEIN] schemes are MA-ST schemes proposed for the ATM UBR service category. These schemes use per-connection accounting to maintain the current buffer utilization of each UBR Virtual Channel (VC). A fair allocation is calculated for each VC, and during congestion (indicated when the total buffer occupancy exceeds a threshold), if the VC's buffer occupancy exceeds its fair allocation, its subsequent incoming packet is dropped. Both Selective Drop and FBA improve the fairness as well as the efficiency of TCP over UBR. This is because cells from overloading connections are dropped in preference to those from underloading ones.
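      A simplified sketch of the Selective Drop decision, assuming the fair allocation is the total occupancy divided by the number of active VCs, scaled by a tuning factor z (z is an illustrative parameter; the cited schemes define the allocation more precisely):

```python
def selective_drop(total_occ, vc_occ, num_active_vcs,
                   congestion_threshold, z=1.0):
    """Selective Drop sketch: under congestion, drop the incoming frame of a
    VC whose buffer share exceeds z times its fair share.  'z' is an
    illustrative tuning parameter, not a value from the cited papers."""
    if total_occ <= congestion_threshold:
        return False                       # not congested: accept everything
    fair_share = total_occ / num_active_vcs
    return vc_occ > z * fair_share         # overloading VC: drop its frame
```

      With two active VCs, a congestion threshold of 80 cells and 100 cells buffered in total, a VC holding 60 cells (above its fair share of 50) has its next frame dropped, while a VC holding 40 cells is left alone.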

      The Virtual Queuing (VQ) [WU97] scheme achieves equal buffer allocation by emulating a per-VC round-robin server on a single FIFO queue. At each cell transmit time, a per-VC variable (g_i) is decremented in round-robin order, and is incremented whenever a cell of that VC is admitted to the buffer. When g_i exceeds a fixed threshold, incoming packets of the i-th VC are dropped. An enhancement called Dynamic EPD changes the above drop threshold to include only those sessions that are sending less than their fair shares.
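      A rough sketch of the VQ mechanism follows. The round-robin decrement here cycles over all configured VCs rather than only backlogged ones, which is a simplification of the scheme in [WU97]:

```python
class VirtualQueuing:
    """VQ sketch: emulate per-VC round-robin service on one FIFO queue.
    g[vc] is incremented when a cell of that VC is admitted and decremented
    in round-robin order at each cell transmit time; a VC whose counter
    exceeds the threshold has its incoming packets dropped."""

    def __init__(self, vc_ids, threshold):
        self.g = {vc: 0 for vc in vc_ids}
        self.threshold = threshold
        self._rr = list(vc_ids)
        self._next = 0

    def admit(self, vc):
        if self.g[vc] > self.threshold:
            return False          # VC over its emulated share: drop
        self.g[vc] += 1
        return True

    def cell_transmitted(self):
        # Decrement one VC's counter per transmit slot, round-robin.
        vc = self._rr[self._next]
        self._next = (self._next + 1) % len(self._rr)
        if self.g[vc] > 0:
            self.g[vc] -= 1
```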

      Since the above MA-ST schemes compare the per-connection queue lengths (or virtual variables with equal weights) with a global threshold, they can only guarantee equal buffer occupancy (and thus throughput) to the competing connections. These schemes do not allow for specifying a guaranteed rate for connections or groups of connections. Moreover, in their present forms, they cannot support packet discard levels based on tagging.

      Another enhancement to VQ, called MCR scheduling [SIU97], proposes the emulation of a weighted scheduler to provide Minimum Cell Rate (MCR) guarantees to ATM connections. In this scheme, a per-VC weighted variable (W_i) is updated in proportion to the VC's MCR, and compared with a global threshold. [FENG] proposes a combination of a Packet Marking Engine (PME) and an Enhanced RED scheme based on per-connection accounting and multiple thresholds (MA-MT). PME+ERED is designed for the IETF's differentiated services architecture, and can provide loose rate guarantees to connections. The PME measures per-connection bandwidths and probabilistically marks packets if the measured bandwidths are lower than the target bandwidths (multiple thresholds). High priority packets are marked, and low priority packets are unmarked. The ERED mechanism is similar to RED except that the probability of discarding marked packets is lower than that of discarding unmarked packets.
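      The PME+ERED interaction can be sketched as follows. The deterministic marking test and the fixed penalty factor for unmarked packets are simplifications of the probabilistic scheme in [FENG]:

```python
import random

def pme_mark(measured_bw: float, target_bw: float) -> bool:
    """Packet Marking Engine sketch: flows running below their target
    bandwidth get their packets marked as high priority.  (The actual PME
    marks probabilistically; this deterministic test is a simplification.)"""
    return measured_bw < target_bw

def ered_accept(avg_queue, min_th, max_th, marked, unmarked_factor=4.0):
    """Enhanced RED sketch: like RED, but unmarked (low-priority) packets
    see a drop probability 'unmarked_factor' times higher.  The factor is
    an illustrative parameter, not a value from the cited scheme."""
    if avg_queue < min_th:
        return True
    if avg_queue >= max_th:
        return False
    p = (avg_queue - min_th) / (max_th - min_th)
    if not marked:
        p = min(1.0, p * unmarked_factor)   # penalize unmarked packets
    return random.random() >= p
```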

      The DFBA scheme [GOYAL98c] proposed for the ATM GFR service provides MCR guarantees for VCs carrying multiple TCP connections. DFBA maintains high and low target buffer occupancy levels for each VC, and performs probabilistic drop based on a VC's buffer occupancy and its target thresholds. The scheme gives priority to CLP=0 packets over CLP=1 packets.
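      A much-simplified sketch of the DFBA thresholding idea, assuming a per-VC low/high target band and a maximum drop probability (both illustrative; the scheme in [GOYAL98c] also conditions on the global buffer occupancy):

```python
import random

def dfba_drop(vc_occ, low, high, clp, drop_p_max=0.5):
    """DFBA sketch: below the low threshold accept everything; at or above
    the high threshold drop (always for tagged CLP=1 frames); in between,
    drop tagged frames with a probability that grows with occupancy while
    protecting CLP=0 frames.  Parameters are illustrative."""
    if vc_occ < low:
        return False                                   # under target: accept
    if vc_occ >= high:
        return clp == 1 or random.random() < drop_p_max
    if clp == 1:                                       # tagged: probabilistic drop
        frac = (vc_occ - low) / (high - low)
        return random.random() < drop_p_max * frac
    return False                                       # CLP=0 protected in band
```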

      A simple SA-MT scheme can be designed that implements multiple thresholds based on the packet discard levels. When the global queue length (single accounting) exceeds the first threshold, packets with the lowest discard level are dropped. When the queue length exceeds the next threshold, packets from the lowest and the next discard level are dropped. This process continues until EPD/PPD is performed on all packets.
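      Such an SA-MT scheme reduces to a per-level threshold comparison, sketched below (thresholds sorted ascending; discard level 0 is dropped first):

```python
def samt_accept(queue_len, thresholds, level):
    """SA-MT sketch: packets of discard level i (0 = least important,
    dropped first) are discarded once the single global queue length
    exceeds thresholds[i].  thresholds must be sorted ascending, one per
    discard level."""
    return queue_len <= thresholds[level]
```

      For example, with thresholds of 100, 200 and 300 cells, a queue of 150 cells discards only level-0 packets, while a queue of 250 cells discards levels 0 and 1.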

      As discussed in the previous section, for satellite-ATM networks, TCP congestion control mechanisms have more effect on TCP throughput than ATM buffer management policies. However, these drop policies are necessary to provide fair allocation of link capacity, to provide differentiated services based on discard levels, and to provide minimum cell rate guarantees to low priority VCs. The Guaranteed Frame Rate service, described in the next section, makes extensive use of the intelligent buffer management policies described here.

  4. Guaranteed Frame Rate

    The GFR service guarantee requires the specification of a minimum cell rate (MCR) and a maximum frame size (MFS) for each VC. If the user sends packets (or frames) of size at most MFS, at a rate less than the MCR, then all the packets are expected to be delivered by the network with low loss. If the user sends packets at a rate higher than the MCR, it should still receive at least the minimum rate. The minimum rate is guaranteed to the untagged (CLP=0) frames of the connection. In addition, a connection sending in excess of the minimum rate should receive a fair share of any unused network capacity. The exact specification of the fair share has been left unspecified by the ATM Forum.
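    The eligibility test for the MCR guarantee can be sketched with a token bucket standing in for the frame-based GCRA (F-GCRA) that the ATM Forum specification uses; the bucket depth and parameter names here are assumptions:

```python
class GFREligibility:
    """Hedged sketch of GFR frame eligibility: a frame is eligible for the
    MCR guarantee if it is untagged (CLP=0), no larger than the MFS, and
    conforms to the MCR as measured by a simple token bucket.  The standard
    uses F-GCRA; this bucket is only an approximation of it."""

    def __init__(self, mcr_cells_per_s, mfs_cells, burst_cells):
        self.rate = mcr_cells_per_s   # token refill rate (cells/second)
        self.mfs = mfs_cells          # maximum frame size, in cells
        self.capacity = burst_cells   # bucket depth (illustrative)
        self.tokens = float(burst_cells)
        self.last = 0.0

    def eligible(self, now, frame_cells, clp):
        # Refill tokens for the elapsed time, capped at the bucket depth.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if clp != 0 or frame_cells > self.mfs or frame_cells > self.tokens:
            return False              # tagged, oversized, or over-rate frame
        self.tokens -= frame_cells
        return True
```

    Frames that fail this test are not dropped outright; they merely compete for the unused capacity rather than the guaranteed minimum rate.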

    There are three basic design options that the network can use to provide per-VC minimum rate guarantees for GFR: tagging, buffer management, and queuing.

  5. ABR over Satellite

    [KALYAN97b] provides a comprehensive study of TCP performance over the ABR service category. We discuss a key feature of ABR called virtual source/virtual destination, and highlight its relevance to long delay paths. Most of the discussion assumes that the switches implement a rate-based switch algorithm like ERICA+.

    In long latency satellite configurations, the feedback delay is the dominant factor (over round trip time) in determining the maximum queue length. A feedback delay of 10 ms corresponds to a queue of about 3670 cells for TCP over ERICA, while a feedback delay of 550 ms corresponds to 201,850 cells. This indicates that satellite switches need to provide at least one feedback delay worth of buffering to avoid loss on these high delay paths. A point to consider is that these large queues should not be seen in downstream workgroup or WAN switches, because they will not provide so much buffering. Satellite switches can isolate downstream switches from such large queues by implementing the virtual source/virtual destination (VS/VD) option.
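    These queue figures follow directly from the feedback delay-bandwidth product. The sketch below roughly reproduces them for an OC-3 (155.52 Mb/s) link; the link rate is an assumption about the cited simulation setup:

```python
CELL_BITS = 53 * 8  # ATM cell size in bits

def queue_cells(feedback_delay_s: float, link_rate_bps: float) -> int:
    """Cells that can arrive during one feedback delay: the minimum
    buffering a switch needs to avoid loss while feedback takes effect."""
    return round(feedback_delay_s * link_rate_bps / CELL_BITS)

# Close to the ~3670 and ~201,850 cell figures quoted in the text:
print(queue_cells(0.010, 155.52e6))   # 10 ms feedback delay
print(queue_cells(0.550, 155.52e6))   # 550 ms feedback delay
```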

    Figure 3: The VS/VD option in ATM-ABR

    [GOYAL98a] has examined some basic issues in designing VS/VD feedback control mechanisms. VS/VD can effectively isolate nodes in different VS/VD loops. As a result, the buffer requirements of a node are bounded by the feedback delay-bandwidth product of the upstream VS/VD loop. However, improper design of VS/VD rate allocation schemes can result in an unstable condition where the switch queues do not drain.

    VS/VD, when implemented correctly, helps in reducing the buffer requirements of terrestrial switches that are connected to satellite gateways. Figure 3 illustrates the results of a simulation experiment showing the effect of VS/VD on the buffer requirements of the terrestrial switch S. In the figure, the link between S and host D is the bottleneck link. The feedback delay-bandwidth product of the satellite hop is about 16000 cells, and dominates the feedback delay-bandwidth product of the terrestrial hop (about 3000 cells). Without VS/VD, the terrestrial switch that is a bottleneck must buffer cells up to the feedback delay-bandwidth product of the entire control loop (including the satellite hop). With a VS/VD loop between the satellite and the terrestrial switch, the queue accumulation due to the satellite feedback delay is confined to the satellite switch. The terrestrial switch only buffers cells that are accumulated due to the feedback delay of the terrestrial link to the satellite switch.

  6. Comparison of ATM Service Categories

    Existing and proposed ATM standards provide several options for TCP/IP data transport over a satellite-ATM network. The three service categories -- ABR, UBR and GFR -- and their various implementation options present a cost-performance tradeoff for TCP/IP over ATM. A comparison of the service categories can be based on the following factors: implementation complexity, buffer requirements, efficient use of network capacity, and fairness.

    Higher complexity arises from resource allocation algorithms for Connection Admission Control (CAC) and Usage Parameter Control (UPC), as well as from sophisticated queuing and feedback control mechanisms. While UPC is performed at the entrance of the ATM network to control the rate of packets entering the network, CAC is performed during connection establishment by each network element. UBR is the least complex service category because it does not require any CAC or UPC. Typical UBR switches are expected to have a single queue for all UBR VCs. Buffer management in switches can vary from a simple tail drop to the more complex per-VC accounting based algorithms such as FBA. An MCR guarantee to the UBR service would require a scheduling algorithm that prevents the starvation of the UBR queue. The GFR service could be implemented by either a single queue using a DFBA like mechanism, or per-VC queues and scheduling. The ABR service can be implemented with a single ABR queue in the network. However, the VS/VD option requires the use of per-VC queuing and increases the implementation complexity of ABR. The CAC requirements for GFR and ABR are similar. However, the tagging option, CLP conformance and MFS conformance tests in GFR add complexity to the UPC function.

    The additional complexity for ABR feedback control presents a tradeoff with ABR buffer requirements. Network buffering is lower for ABR than for UBR or GFR. In addition, ABR has controlled buffer requirements that depend on the bandwidth-delay product of the ABR feedback loop. At the edge of the ATM network, network feedback in the case of ABR can provide information for buffer dimensioning. Large buffers in edge routers can be used when the ABR network is temporarily congested. In the case of UBR and GFR, edge devices do not have network congestion information, and simply send the data out to the ATM network as fast as they can. As a result, extra buffers at the edge of the network do not help for UBR or GFR. This is an important consideration for large delay-bandwidth satellite networks. With ABR, satellite gateways (routers at the edges of a satellite-ATM network) can buffer large amounts of data, while the buffer requirements of the on-board ATM switches can be minimized. The buffer requirements with UBR/GFR are reversed for the gateways and on-board switches.

    The ABR service can make effective use of available network capacity by providing feedback to the sources. Edge devices with buffered data can fill up the bandwidth within one feedback cycle of the bandwidth becoming available. This feedback cycle is large for satellite networks. With UBR and GFR, available bandwidth can be immediately filled up by edge devices that buffer data. However, the edge devices have no control on the sending rate, and data is likely to be dropped during congestion. This data must be retransmitted by TCP, and can result in inefficient use of the satellite capacity.

    In addition to efficient network utilization, a satellite-ATM network must also fairly allocate network bandwidth to the competing VCs. While vanilla UBR has no mechanism for fair bandwidth allocation, UBR or GFR with buffer management can provide per-VC fairness. ABR provides fairness by per-VC rate allocation. A typical satellite-ATM network will carry multiple TCP connections over a single VC. In ABR, most losses are in the routers at the edges of the network, and these routers can perform fair buffer management to ensure IP level fairness. In UBR and GFR, on the other hand, most losses due to congestion are in the satellite-ATM network, where there is no knowledge of the individual IP flows. In this case, fairness can only be provided at the VC level.

  7. Concluding Remarks

    In this paper, we described several techniques to improve TCP throughput over UBR. These include frame-level discard policies, intelligent buffer management policies, SACK, and guaranteed rates. Frame level discard policies such as early packet discard (EPD) improve the throughput significantly over cell-level discard policies. However, fairness is not guaranteed unless intelligent buffer management using per-VC accounting is used. For long delay paths, the throughput improvement due to SACK is greater than that from discard policies and buffer management. When several TCP flows are multiplexed onto a few VCs, fairness among the IP flows can be provided by the routers at the edges of the ATM network, while VC level fairness must be provided by the ATM network using either buffer management or per-VC queuing.

    Another method of improving UBR performance, using guaranteed rates, helps in the presence of a high load of higher priority traffic such as Constant Bit Rate (CBR) or Variable Bit Rate (VBR) traffic. It has been found that TCP connections need a guaranteed minimum cell rate (MCR) for reasonable performance. MCR ensures that the flow of TCP packets and acknowledgements is continuous, and prevents TCP timeouts due to temporary bandwidth starvation of UBR. Minimum rate guarantees can be provided either to the entire UBR service category (UBR with guaranteed rate) or to each ATM VC using the GFR or the ABR service category.

    For TCP over ABR, in addition to the four methods discussed above, VS/VD can be used to isolate long-delay segments from terrestrial segments. This helps in efficiently sizing buffers in routers and ATM switches.

  8. References

    [AKYL97]Ian F. Akyildiz, Seong-Ho Jeong, "Satellite ATM Networks: A Survey," IEEE Communications Magazine, Vol. 35, No. 7, July 1997.

    [FENG]Wu-chang Feng, Dilip Kandlur, Debanjan Saha, Kang G. Shin, "Techniques for Eliminating Packet Loss in Congested TCP/IP Networks," ____________.

    [FLOYD93]Sally Floyd, Van Jacobson, "Random Early Detection Gateways for Congestion Avoidance," IEEE/ACM Transaction on Networking, August 1993.

    [GOYAL98a]Rohit Goyal, Raj Jain, et al., "Per-VC rate allocation techniques for ABR feedback in VS/VD networks," Submitted to Globecom'98.

    [GOYAL98b]Rohit Goyal, Raj Jain, et al., "Improving the Performance of TCP over the ATM-UBR Service," To appear in Computer Communications, 1998.

    [HEIN]Juha Heinanen, Kalevi Kilkki, "A Fair Buffer Allocation Scheme," Unpublished Manuscript.

    [HOE96]Janey C. Hoe, "Improving the Start-up Behavior of a Congestion Control Scheme for TCP," Proceedings of SIGCOMM'96, August 1996.

    [KALYAN97b]Shivkumar Kalyanaraman, " Traffic Management for the Available Bit Rate (ABR) Service in Asynchronous Transfer Mode (ATM) Networks," PhD Dissertation, The Ohio State University, 1997.

    [KALYAN98a]Shivkumar Kalyanaraman, R. Jain, et al., "Performance and Buffering Requirements of Internet Protocols over ATM ABR and UBR Services," To appear, IEEE Computer Communications Magazine.

    [KOTA97]Sastri Kota, R. Goyal, Raj Jain, "Satellite ATM Network Architectural Considerations and TCP/IP Performance," Proceedings of the 3rd Ka-Band Utilization Conference, 1997.

    [LI96]H. Li, K.Y. Siu, H.T. Tzeng, C. Ikeda and H. Suzuki, "TCP over ABR and UBR Services in ATM," Proc. IPCCC'96, March 1996.

    [LIN97]Dong Lin, Robert Morris, "Dynamics of Random Early Detection," Proceedings of SIGCOMM97, 1997.

    [ROMANOV95]Allyn Romanow, Sally Floyd, "Dynamics of TCP Traffic over ATM Networks," IEEE Journal on Selected Areas in Communications, May 1995.

    [SIU97]Kai-Yeung Siu, Yuan Wu, Wenge Ren, "Virtual Queuing Techniques for UBR+ Service in ATM with Fair Access and Minimum Bandwidth Guarantee," Proceedings of Globecom'97, 1997.

    [TCPS98]Mark Allman, Dan Glover, "Enhancing TCP Over Satellite Channels using Standard Mechanisms," IETF draft, February 1998, http://tcpsat.lerc.nasa.gov/tcpsat

    [TM4096]"The ATM Forum Traffic Management Specification Version 4.0," ATM Forum Traffic Management AF-TM-0056.000, April 1996.

    [WU97]Yuan Wu, Kai-Yeung Siu, Wenge Ren, "Improved Virtual Queuing and Dynamic EPD Techniques for TCP over ATM," Proceedings of ICNP97, 1997.