Network Working Group                                 B. Braden, USC/ISI
Request for Comments: 2309                             D. Clark, MIT LCS
Category: Informational                                J. Crowcroft, UCL
                                                 B. Davie, Cisco Systems
                                               S. Deering, Cisco Systems
                                                          D. Estrin, USC
                                                          S. Floyd, LBNL
                                                       V. Jacobson, LBNL
                                                  G. Minshall, Fiberlane
                                                       C. Partridge, BBN
                                      L. Peterson, University of Arizona
                                      K. Ramakrishnan, ATT Labs Research
                                                  S. Shenker, Xerox PARC
                                                  J. Wroclawski, MIT LCS
                                                          L. Zhang, UCLA
                                                              April 1998
        

Recommendations on Queue Management and Congestion Avoidance in the Internet

Status of Memo

This memo provides information for the Internet community. It does not specify an Internet standard of any kind. Distribution of this memo is unlimited.

Copyright Notice

Copyright (C) The Internet Society (1998). All Rights Reserved.

Abstract

This memo presents two recommendations to the Internet community concerning measures to improve and preserve Internet performance. It presents a strong recommendation for testing, standardization, and widespread deployment of active queue management in routers, to improve the performance of today's Internet. It also urges a concerted effort of research, measurement, and ultimate deployment of router mechanisms to protect the Internet from flows that are not sufficiently responsive to congestion notification.

1. INTRODUCTION

The Internet protocol architecture is based on a connectionless end-to-end packet service using the IP protocol. The advantages of its connectionless design, flexibility and robustness, have been amply demonstrated. However, these advantages are not without cost: careful design is required to provide good service under heavy load. In fact, lack of attention to the dynamics of packet forwarding can result in severe service degradation or "Internet meltdown". This phenomenon was first observed during the early growth phase of the Internet of the mid 1980s [Nagle84], and is technically called "congestion collapse".

The original fix for Internet meltdown was provided by Van Jacobson. Beginning in 1986, Jacobson developed the congestion avoidance mechanisms that are now required in TCP implementations [Jacobson88, HostReq89]. These mechanisms operate in the hosts to cause TCP connections to "back off" during congestion. We say that TCP flows are "responsive" to congestion signals (i.e., dropped packets) from the network. It is primarily these TCP congestion avoidance algorithms that prevent the congestion collapse of today's Internet.

However, that is not the end of the story. Considerable research has been done on Internet dynamics since 1988, and the Internet has grown. It has become clear that the TCP congestion avoidance mechanisms [RFC2001], while necessary and powerful, are not sufficient to provide good service in all circumstances. Basically, there is a limit to how much control can be accomplished from the edges of the network. Some mechanisms are needed in the routers to complement the endpoint congestion avoidance mechanisms.

It is useful to distinguish between two classes of router algorithms related to congestion control: "queue management" versus "scheduling" algorithms. To a rough approximation, queue management algorithms manage the length of packet queues by dropping packets when necessary or appropriate, while scheduling algorithms determine which packet to send next and are used primarily to manage the allocation of bandwidth among flows. While these two router mechanisms are closely related, they address rather different performance issues.

This memo highlights two router performance issues. The first issue is the need for an advanced form of router queue management that we call "active queue management." Section 2 summarizes the benefits that active queue management can bring. Section 3 describes a recommended active queue management mechanism, called Random Early Detection or "RED". We expect that the RED algorithm can be used with a wide variety of scheduling algorithms, can be implemented relatively efficiently, and will provide significant Internet performance improvement.

The second issue, discussed in Section 4 of this memo, is the potential for future congestion collapse of the Internet due to flows that are unresponsive, or not sufficiently responsive, to congestion indications. Unfortunately, there is no consensus solution to controlling congestion caused by such aggressive flows; significant research and engineering will be required before any solution will be available. It is imperative that this work be energetically pursued, to ensure the future stability of the Internet.

Section 5 concludes the memo with a set of recommendations to the IETF concerning these topics.

The discussion in this memo applies to "best-effort" traffic. The Internet integrated services architecture, which provides a mechanism for protecting individual flows from congestion, introduces its own queue management and scheduling algorithms [Shenker96, Wroclawski96]. Similarly, the discussion of queue management and congestion control requirements for differential services is a separate issue. However, we do not expect the deployment of integrated services and differential services to significantly diminish the importance of the best-effort traffic issues discussed in this memo.

Preparation of this memo resulted from past discussions of end-to-end performance, Internet congestion, and RED in the End-to-End Research Group of the Internet Research Task Force (IRTF).

2. THE NEED FOR ACTIVE QUEUE MANAGEMENT

The traditional technique for managing router queue lengths is to set a maximum length (in terms of packets) for each queue, accept packets for the queue until the maximum length is reached, then reject (drop) subsequent incoming packets until the queue decreases because a packet from the queue has been transmitted. This technique is known as "tail drop", since the packet that arrived most recently (i.e., the one on the tail of the queue) is dropped when the queue is full. This method has served the Internet well for years, but it has two important drawbacks.

1. Lock-Out

In some situations tail drop allows a single connection or a few flows to monopolize queue space, preventing other connections from getting room in the queue. This "lock-out" phenomenon is often the result of synchronization or other timing effects.

2. Full Queues

The tail drop discipline allows queues to maintain a full (or, almost full) status for long periods of time, since tail drop signals congestion (via a packet drop) only when the queue has become full. It is important to reduce the steady-state queue size, and this is perhaps queue management's most important goal.

The naive assumption might be that there is a simple tradeoff between delay and throughput, and that the recommendation that queues be maintained in a "non-full" state essentially translates to a recommendation that low end-to-end delay is more important than high throughput. However, this does not take into account the critical role that packet bursts play in Internet performance. Even though TCP constrains a flow's window size, packets often arrive at routers in bursts [Leland94]. If the queue is full or almost full, an arriving burst will cause multiple packets to be dropped. This can result in a global synchronization of flows throttling back, followed by a sustained period of lowered link utilization, reducing overall throughput.

The point of buffering in the network is to absorb data bursts and to transmit them during the (hopefully) ensuing bursts of silence. This is essential to permit the transmission of bursty data. It should be clear why we would like to have normally-small queues in routers: we want to have queue capacity to absorb the bursts. The counter-intuitive result is that maintaining normally-small queues can result in higher throughput as well as lower end-to-end delay. In short, queue limits should not reflect the steady state queues we want maintained in the network; instead, they should reflect the size of bursts we need to absorb.

Besides tail drop, two alternative queue disciplines that can be applied when the queue becomes full are "random drop on full" or "drop front on full". Under the random drop on full discipline, a router drops a randomly selected packet from the queue (which can be an expensive operation, since it naively requires an O(N) walk through the packet queue) when the queue is full and a new packet arrives. Under the "drop front on full" discipline [Lakshman96], the router drops the packet at the front of the queue when the queue is full and a new packet arrives. Both of these solve the lock-out problem, but neither solves the full-queues problem described above.

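As an illustration only, the following Python sketch contrasts the three full-queue disciplines on a bounded FIFO; the function and parameter names (enqueue, max_len, and so on) are ours and do not correspond to any particular router implementation.

      import random

      def enqueue(queue, packet, max_len, discipline="tail-drop"):
          # Bounded FIFO (a Python list) under the three full-queue
          # disciplines.  Returns the dropped packet, or None.
          if len(queue) < max_len:
              queue.append(packet)           # room available: accept
              return None
          if discipline == "tail-drop":
              return packet                  # drop the arriving packet
          if discipline == "random-drop-on-full":
              victim = random.randrange(len(queue))
              dropped = queue.pop(victim)    # naive O(N) removal
          else:                              # "drop-front-on-full"
              dropped = queue.pop(0)         # drop the packet at the front
          queue.append(packet)               # the newcomer is admitted
          return dropped

All three disciplines signal congestion only once the queue is already full; the two alternatives merely choose a different victim than the arriving packet, which is why they relieve lock-out but not the full-queues problem.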

We know in general how to solve the full-queues problem for "responsive" flows, i.e., those flows that throttle back in response to congestion notification. In the current Internet, dropped packets serve as a critical mechanism of congestion notification to end nodes. The solution to the full-queues problem is for routers to drop packets before a queue becomes full, so that end nodes can respond to congestion before buffers overflow. We call such a proactive approach "active queue management". By dropping packets before buffers overflow, active queue management allows routers to control when and how many packets to drop. The next section introduces RED, an active queue management mechanism that solves both problems listed above (given responsive flows).

In summary, an active queue management mechanism can provide the following advantages for responsive flows.

1. Reduce number of packets dropped in routers

Packet bursts are an unavoidable aspect of packet networks [Willinger95]. If all the queue space in a router is already committed to "steady state" traffic or if the buffer space is inadequate, then the router will have no ability to buffer bursts. By keeping the average queue size small, active queue management will provide greater capacity to absorb naturally-occurring bursts without dropping packets.

Furthermore, without active queue management, more packets will be dropped when a queue does overflow. This is undesirable for several reasons. First, with a shared queue and the tail drop discipline, an unnecessary global synchronization of flows cutting back can result in lowered average link utilization, and hence lowered network throughput. Second, TCP recovers with more difficulty from a burst of packet drops than from a single packet drop. Third, unnecessary packet drops represent a possible waste of bandwidth on the way to the drop point.

We note that while RED can manage queue lengths and reduce end-to-end latency even in the absence of end-to-end congestion control, RED will be able to reduce packet dropping only in an environment that continues to be dominated by end-to-end congestion control.

2. Provide lower-delay interactive service

By keeping the average queue size small, queue management will reduce the delays seen by flows. This is particularly important for interactive applications such as short Web transfers, Telnet traffic, or interactive audio-video sessions, whose subjective (and objective) performance is better when the end-to-end delay is low.

3. Avoid lock-out behavior

Active queue management can prevent lock-out behavior by ensuring that there will almost always be a buffer available for an incoming packet. For the same reason, active queue management can prevent a router bias against low bandwidth but highly bursty flows.

It is clear that lock-out is undesirable because it constitutes a gross unfairness among groups of flows. However, we stop short of calling this benefit "increased fairness", because general fairness among flows requires per-flow state, which is not provided by queue management. For example, in a router using queue management but only FIFO scheduling, two TCP flows may receive very different bandwidths simply because they have different round-trip times [Floyd91], and a flow that does not use congestion control may receive more bandwidth than a flow that does. Per-flow state to achieve general fairness might be maintained by a per-flow scheduling algorithm such as Fair Queueing (FQ) [Demers90], or a class-based scheduling algorithm such as CBQ [Floyd95], for example.

On the other hand, active queue management is needed even for routers that use per-flow scheduling algorithms such as FQ or class-based scheduling algorithms such as CBQ. This is because per-flow scheduling algorithms by themselves do nothing to control the overall queue size or the size of individual queues. Active queue management is needed to control the overall average queue sizes, so that arriving bursts can be accommodated without dropping packets. In addition, active queue management should be used to control the queue size for each individual flow or class, so that they do not experience unnecessarily high delays. Therefore, active queue management should be applied across the classes or flows as well as within each class or flow.

In short, scheduling algorithms and queue management should be seen as complementary, not as replacements for each other. In particular, there have been implementations of queue management added to FQ, and work is in progress to add RED queue management to CBQ.

3. THE QUEUE MANAGEMENT ALGORITHM "RED"

Random Early Detection, or RED, is an active queue management algorithm for routers that will provide the Internet performance advantages cited in the previous section [RED93]. In contrast to traditional queue management algorithms, which drop packets only when the buffer is full, the RED algorithm drops arriving packets probabilistically. The probability of drop increases as the estimated average queue size grows. Note that RED responds to a time-averaged queue length, not an instantaneous one. Thus, if the queue has been mostly empty in the "recent past", RED won't tend to drop packets (unless the queue overflows, of course!). On the other hand, if the queue has recently been relatively full, indicating persistent congestion, newly arriving packets are more likely to be dropped.

The RED algorithm itself consists of two main parts: estimation of the average queue size and the decision of whether or not to drop an incoming packet.

(a) Estimation of Average Queue Size

RED estimates the average queue size, either in the forwarding path using a simple exponentially weighted moving average (such as presented in Appendix A of [Jacobson88]), or in the background (i.e., not in the forwarding path) using a similar mechanism.

Note: The queue size can be measured either in units of packets or of bytes. This issue is discussed briefly in [RED93] in the "Future Work" section.

Note: when the average queue size is computed in the forwarding path, there is a special case when a packet arrives and the queue is empty. In this case, the computation of the average queue size must take into account how much time has passed since the queue went empty. This is discussed further in [RED93].

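A minimal sketch of such an estimator is given below, in Python; the averaging weight w_q (for example, 0.002 in the simulations of [RED93]) and the treatment of the idle period follow the approach of [RED93], while the function and parameter names are ours.

      def update_avg(avg, q_len, w_q, idle_time=0.0, s=None):
          # Exponentially weighted moving average of the instantaneous
          # queue length q_len; w_q is the averaging weight.  If the
          # packet arrives to an empty queue, idle_time is how long the
          # queue has been empty and s a typical packet transmission
          # time; the average is decayed as though the queue had been
          # sampled empty throughout the idle period.
          if q_len == 0 and s:
              m = idle_time / s
              return ((1.0 - w_q) ** m) * avg
          return (1.0 - w_q) * avg + w_q * q_len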

(b) Packet Drop Decision

In the second portion of the algorithm, RED decides whether or not to drop an incoming packet. It is RED's particular algorithm for dropping that results in performance improvement for responsive flows. Two RED parameters, minth (minimum threshold) and maxth (maximum threshold), figure prominently in this decision process. Minth specifies the average queue size *below which* no packets will be dropped, while maxth specifies the average queue size *above which* all packets will be dropped. As the average queue size varies from minth to maxth, packets will be dropped with a probability that varies linearly from 0 to maxp.

Note: a simplistic method of implementing this would be to calculate a new random number at each packet arrival, then compare that number with the above probability which varies from 0 to maxp. A more efficient implementation, described in [RED93], computes a random number *once* for each dropped packet.

Note: the decision whether or not to drop an incoming packet can be made in "packet mode", ignoring packet sizes, or in "byte mode", taking into account the size of the incoming packet. The performance implications of the choice between packet mode or byte mode is discussed further in [Floyd97].

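The sketch below, in Python, shows this decision in the simplistic per-arrival form described in the first note above; avg is the estimated average queue size, and minth, maxth, and maxp are the RED parameters discussed in this section.

      import random

      def drop_probability(avg, minth, maxth, maxp):
          # Linear drop probability: 0 below minth, 1 at or above
          # maxth, rising from 0 to maxp as avg goes from minth to
          # maxth.
          if avg < minth:
              return 0.0
          if avg >= maxth:
              return 1.0
          return maxp * (avg - minth) / (maxth - minth)

      def red_drop(avg, minth, maxth, maxp):
          # Simplistic per-arrival test; [RED93] describes a more
          # efficient variant that draws one random number per dropped
          # packet and spaces drops more evenly over arrivals.
          return random.random() < drop_probability(avg, minth, maxth, maxp)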

RED effectively controls the average queue size while still accommodating bursts of packets without loss. RED's use of randomness breaks up synchronized processes that lead to lock-out phenomena.

There have been several implementations of RED in routers, and papers have been published reporting on experience with these implementations ([Villamizar94], [Gaynor96]). Additional reports of implementation experience would be welcome, and will be posted on the RED web page [REDWWW].

All available empirical evidence shows that the deployment of active queue management mechanisms in the Internet would have substantial performance benefits. There are seemingly no disadvantages to using the RED algorithm, and numerous advantages. Consequently, we believe that the RED active queue management algorithm should be widely deployed.

We should note that there are some extreme scenarios for which RED will not be a cure, although it won't hurt and may still help. An example of such a scenario would be a very large number of flows, each so tiny that its fair share would be less than a single packet per RTT.

4. MANAGING AGGRESSIVE FLOWS

One of the keys to the success of the Internet has been the congestion avoidance mechanisms of TCP. Because TCP "backs off" during congestion, a large number of TCP connections can share a single, congested link in such a way that bandwidth is shared reasonably equitably among similarly situated flows. The equitable sharing of bandwidth among flows depends on the fact that all flows are running basically the same congestion avoidance algorithms, conformant with the current TCP specification [HostReq89].

We introduce the term "TCP-compatible" for a flow that behaves under congestion like a flow produced by a conformant TCP. A TCP-compatible flow is responsive to congestion notification, and in steady-state it uses no more bandwidth than a conformant TCP running under comparable conditions (drop rate, RTT, MTU, etc.).

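For concreteness only: a widely used rule of thumb, not given in this memo, approximates the steady-state bandwidth of a conformant TCP as (MSS/RTT)*C/sqrt(p), where p is the packet drop rate and C is a constant on the order of sqrt(3/2). The Python sketch below states this rule of thumb; a flow that persistently exceeds this rate under the same drop rate, RTT, and MTU would not be TCP-compatible in the sense used here.

      from math import sqrt

      def tcp_compatible_rate(mss_bytes, rtt_seconds, drop_rate):
          # Rough upper bound (bytes/second) on the steady-state
          # throughput of a conformant TCP seeing the given packet
          # drop rate, using rate ~ (MSS/RTT) * C / sqrt(p) with
          # C = sqrt(3/2).  This formula is an external rule of
          # thumb, not part of this memo.
          C = sqrt(3.0 / 2.0)
          return (mss_bytes / rtt_seconds) * C / sqrt(drop_rate)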

It is convenient to divide flows into three classes: (1) TCP-compatible flows, (2) unresponsive flows, i.e., flows that do not slow down when congestion occurs, and (3) flows that are responsive but are not TCP-compatible. The last two classes contain more aggressive flows that pose significant threats to Internet performance, as we will now discuss.

o Non-Responsive Flows

There is a growing set of UDP-based applications whose congestion avoidance algorithms are inadequate or nonexistent (i.e., the flow does not throttle back upon receipt of congestion notification). Such UDP applications include streaming applications like packet voice and video, and also multicast bulk data transport [SRM96]. If no action is taken, such unresponsive flows could lead to a new congestion collapse.

In general, all UDP-based streaming applications should incorporate effective congestion avoidance mechanisms. For example, recent research has shown the possibility of incorporating congestion avoidance mechanisms such as Receiver-driven Layered Multicast (RLM) within UDP-based streaming applications such as packet video [McCanne96; Bolot94]. Further research and development on ways to accomplish congestion avoidance for streaming applications will be very important.

However, it will also be important for the network to be able to protect itself against unresponsive flows, and mechanisms to accomplish this must be developed and deployed. Deployment of such mechanisms would provide incentive for every streaming application to become responsive by incorporating its own congestion control.

o Non-TCP-Compatible Transport Protocols

The second threat is posed by transport protocol implementations that are responsive to congestion notification but, either deliberately or through faulty implementations, are not TCP-compatible. Such applications can grab an unfair share of the network bandwidth.

For example, the popularity of the Internet has caused a proliferation in the number of TCP implementations. Some of these may fail to implement the TCP congestion avoidance mechanisms correctly because of poor implementation. Others may deliberately be implemented with congestion avoidance algorithms that are more aggressive in their use of bandwidth than other TCP implementations; this would allow a vendor to claim to have a "faster TCP". The logical consequence of such implementations would be a spiral of increasingly aggressive TCP implementations, leading back to the point where there is effectively no congestion avoidance and the Internet is chronically congested.

Note that there is a well-known way to achieve more aggressive TCP performance without even changing TCP: open multiple connections to the same place, as has been done in some Web browsers.

The projected increase in more aggressive flows of both these classes, as a fraction of total Internet traffic, clearly poses a threat to the future Internet. There is an urgent need for measurements of current conditions and for further research into the various ways of managing such flows. There are many difficult issues in identifying and isolating unresponsive or non-TCP-compatible flows at an acceptable router overhead cost. Finally, there is little measurement or simulation evidence available about the rate at which these threats are likely to be realized, or about the expected benefit of router algorithms for managing such flows.

There is an issue about the appropriate granularity of a "flow". There are a few "natural" answers: 1) a TCP or UDP connection (source address/port, destination address/port); 2) a source/destination host pair; 3) a given source host or a given destination host. We would guess that the source/destination host pair gives the most appropriate granularity in many circumstances. However, it is possible that different vendors/providers could set different granularities for defining a flow (as a way of "distinguishing" themselves from one another), or that different granularities could be chosen for different places in the network. It may be the case that the granularity is less important than the fact that we are dealing with more unresponsive flows at *some* granularity. The granularity of flows for congestion management is, at least in part, a policy question that needs to be addressed in the wider IETF community.

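Purely to illustrate these alternatives, the Python sketch below forms a flow identifier at each of the three granularities listed above; the packet field names (src, sport, dst, dport) are hypothetical.

      def flow_key(pkt, granularity="host-pair"):
          # Possible flow identifiers at the three granularities
          # discussed above.
          if granularity == "connection":    # a TCP or UDP connection
              return (pkt["src"], pkt["sport"], pkt["dst"], pkt["dport"])
          if granularity == "host-pair":     # source/destination host pair
              return (pkt["src"], pkt["dst"])
          return pkt["src"]                  # a given source host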

5. CONCLUSIONS AND RECOMMENDATIONS

This discussion leads us to make the following recommendations to the IETF and to the Internet community as a whole.

o RECOMMENDATION 1:

Internet routers should implement some active queue management mechanism to manage queue lengths, reduce end-to-end latency, reduce packet dropping, and avoid lock-out phenomena within the Internet.

The default mechanism for managing queue lengths to meet these goals in FIFO queues is Random Early Detection (RED) [RED93]. Unless a developer has reasons to provide another equivalent mechanism, we recommend that RED be used.

o RECOMMENDATION 2:

It is urgent to begin or continue research, engineering, and measurement efforts contributing to the design of mechanisms to deal with flows that are unresponsive to congestion notification or are responsive but more aggressive than TCP.

Although there has already been some limited deployment of RED in the Internet, we may expect that widespread implementation and deployment of RED in accordance with Recommendation 1 will expose a number of engineering issues. For example, such issues may include: implementation questions for Gigabit routers, the use of RED in layer 2 switches, and the possible use of additional considerations, such as priority, in deciding which packets to drop.

We again emphasize that the widespread implementation and deployment of RED would not, in and of itself, achieve the goals of Recommendation 2.

Widespread implementation and deployment of RED will also enable the introduction of other new functionality into the Internet. One example of an enabled functionality would be the addition of explicit congestion notification [Ramakrishnan97] to the Internet architecture, as a mechanism for congestion notification in addition to packet drops. A second example of new functionality would be implementation of queues with packets of different drop priorities; packets would be transmitted in the order in which they arrived, but during times of congestion packets of the lower drop priority would be preferentially dropped.

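Since [Ramakrishnan97] was a work in progress when this memo was written, the following Python sketch only illustrates the general idea, under assumed helper and field names: when the queue management algorithm decides to signal congestion for an arriving packet, an ECN-capable packet can be marked rather than dropped, while other packets are still dropped.

      def congestion_action(pkt, signal_congestion):
          # signal_congestion is the queue management algorithm's
          # decision to signal congestion for this arrival (for
          # example, the RED drop decision).  The ecn_capable field
          # is an assumed per-packet attribute.
          if not signal_congestion:
              return "forward"
          if pkt.get("ecn_capable"):
              return "mark"                  # mark instead of dropping
          return "drop"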

6. References

[Bolot94] Bolot, J.-C., Turletti, T., and Wakeman, I., Scalable Feedback Control for Multicast Video Distribution in the Internet, ACM SIGCOMM '94, Sept. 1994.

[Demers90] Demers, A., Keshav, S., and Shenker, S., Analysis and Simulation of a Fair Queueing Algorithm, Internetworking: Research and Experience, Vol. 1, 1990, pp. 3-26.

[Floyd91] Floyd, S., Connections with Multiple Congested Gateways in Packet-Switched Networks Part 1: One-way Traffic. Computer Communications Review, Vol.21, No.5, October 1991, pp. 30-47. URL http://ftp.ee.lbl.gov/floyd/.

[Floyd95] Floyd, S., and Jacobson, V., Link-sharing and Resource Management Models for Packet Networks. IEEE/ACM Transactions on Networking, Vol. 3 No. 4, pp. 365-386, August 1995.

[Floyd97] Floyd, S., RED: Discussions of Byte and Packet Modes, March 1997 email, http://www-nrg.ee.lbl.gov/floyd/REDaveraging.txt.

[Gaynor96] Gaynor, M., Proactive Packet Dropping Methods for TCP Gateways, October 1996, URL http://www.eecs.harvard.edu/~gaynor/final.ps.

[HostReq89] Braden, R., Ed., "Requirements for Internet Hosts -- Communication Layers", STD 3, RFC 1122, October 1989.

[Jacobson88] V. Jacobson, Congestion Avoidance and Control, ACM SIGCOMM '88, August 1988.

[Lakshman96] T. V. Lakshman, Arnie Neidhardt, Teunis Ott, The Drop From Front Strategy in TCP Over ATM and Its Interworking with Other Control Features, Infocom 96, MA28.1.

[Leland94] W. Leland, M. Taqqu, W. Willinger, and D. Wilson, On the Self-Similar Nature of Ethernet Traffic (Extended Version), IEEE/ACM Transactions on Networking, 2(1), pp. 1-15, February 1994.

[McCanne96] McCanne, S., Jacobson, V., and M. Vetterli, Receiver-driven Layered Multicast, ACM SIGCOMM '96, August 1996.

[Nagle84] Nagle, J., "Congestion Control in IP/TCP", RFC 896, January 1984.

[Ramakrishnan97] Ramakrishnan, K. K., and S. Floyd, "A Proposal to add Explicit Congestion Notification (ECN) to IPv6 and to TCP", Work in Progress.

[RED93] Floyd, S., and Jacobson, V., Random Early Detection gateways for Congestion Avoidance, IEEE/ACM Transactions on Networking, V.1 N.4, August 1993, pp. 397-413. Also available from http://ftp.ee.lbl.gov/floyd/red.html.

[REDWWW] Floyd, S., The RED Web Page, 1997, URL http://ftp.ee.lbl.gov/floyd/red.html.

[RFC2001] Stevens, W., "TCP Slow Start, Congestion Avoidance, Fast Retransmit, and Fast Recovery Algorithms", RFC 2001, January 1997.

[Shenker96] Shenker, S., Partridge, C., and R. Guerin, "Specification of Guaranteed Quality of Service", Work in Progress.

[SRM96] Floyd, S., Jacobson, V., McCanne, S., Liu, C., and L. Zhang, A Reliable Multicast Framework for Light-weight Sessions and Application Level Framing. ACM SIGCOMM '96, pp. 342-355.

[Villamizar94] Villamizar, C., and Song, C., High Performance TCP in ANSNET. Computer Communications Review, V. 24 N. 5, October 1994, pp. 45-60. URL http://ftp.ans.net/pub/papers/tcp-performance.ps.

[Willinger95] W. Willinger, M. S. Taqqu, R. Sherman, D. V. Wilson, Self-Similarity Through High-Variability: Statistical Analysis of Ethernet LAN Traffic at the Source Level, ACM SIGCOMM '95, pp. 100-113, August 1995.

[Wroclawski96] Wroclawski, J., "Specification of the Controlled-Load Network Element Service", Work in Progress.

Security Considerations

While security is a very important issue, it is largely orthogonal to the performance issues discussed in this memo. We note, however, that denial-of-service attacks may create unresponsive traffic flows that are indistinguishable from flows from normal high-bandwidth isochronous applications, and the mechanism suggested in Recommendation 2 will be equally applicable to such attacks.

Authors' Addresses

Bob Braden
USC Information Sciences Institute
4676 Admiralty Way
Marina del Rey, CA 90292

Phone: 310-822-1511
EMail: Braden@ISI.EDU

David D. Clark
MIT Laboratory for Computer Science
545 Technology Sq.
Cambridge, MA 02139

Phone: 617-253-6003
EMail: DDC@lcs.mit.edu

Jon Crowcroft
University College London
Department of Computer Science
Gower Street
London, WC1E 6BT
ENGLAND

Phone: +44 171 380 7296
EMail: Jon.Crowcroft@cs.ucl.ac.uk

Bruce Davie
Cisco Systems, Inc.
250 Apollo Drive
Chelmsford, MA 01824

Phone:
EMail: bdavie@cisco.com

Steve Deering
Cisco Systems, Inc.
170 West Tasman Drive
San Jose, CA 95134-1706

Phone: 408-527-8213
EMail: deering@cisco.com

Deborah Estrin
USC Information Sciences Institute
4676 Admiralty Way
Marina del Rey, CA 90292

Phone: 310-822-1511
EMail: Estrin@usc.edu

Sally Floyd
Lawrence Berkeley National Laboratory
MS 50B-2239, One Cyclotron Road
Berkeley, CA 94720

Phone: 510-486-7518
EMail: Floyd@ee.lbl.gov

Van Jacobson
Lawrence Berkeley National Laboratory
MS 46A, One Cyclotron Road
Berkeley, CA 94720

Phone: 510-486-7519
EMail: Van@ee.lbl.gov

Greg Minshall
Fiberlane Communications
1399 Charleston Road
Mountain View, CA 94043

Phone: +1 650 237 3164
EMail: Minshall@fiberlane.com

Craig Partridge
BBN Technologies
10 Moulton St.
Cambridge, MA 02138

Phone: 510-558-8675
EMail: craig@bbn.com

Larry Peterson
Department of Computer Science
University of Arizona
Tucson, AZ 85721

Phone: 520-621-4231
EMail: LLP@cs.arizona.edu

K. K. Ramakrishnan
AT&T Labs Research
Rm. A155, 180 Park Avenue
Florham Park, N.J. 07932

Phone: 973-360-8766
EMail: KKRama@research.att.com

Scott Shenker
Xerox PARC
3333 Coyote Hill Road
Palo Alto, CA 94304

Phone: 415-812-4840
EMail: Shenker@parc.xerox.com

John Wroclawski
MIT Laboratory for Computer Science
545 Technology Sq.
Cambridge, MA 02139

Phone: 617-253-7885
EMail: JTW@lcs.mit.edu

Lixia Zhang
UCLA
4531G Boelter Hall
Los Angeles, CA 90024

Phone: 310-825-2695
EMail: Lixia@cs.ucla.edu

Full Copyright Statement

Copyright (C) The Internet Society (1998). All Rights Reserved.

This document and translations of it may be copied and furnished to others, and derivative works that comment on or otherwise explain it or assist in its implementation may be prepared, copied, published and distributed, in whole or in part, without restriction of any kind, provided that the above copyright notice and this paragraph are included on all such copies and derivative works. However, this document itself may not be modified in any way, such as by removing the copyright notice or references to the Internet Society or other Internet organizations, except as needed for the purpose of developing Internet standards in which case the procedures for copyrights defined in the Internet Standards process must be followed, or as required to translate it into languages other than English.

The limited permissions granted above are perpetual and will not be revoked by the Internet Society or its successors or assigns.

This document and the information contained herein is provided on an "AS IS" basis and THE INTERNET SOCIETY AND THE INTERNET ENGINEERING TASK FORCE DISCLAIMS ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
