Network Working Group                                         S. Floyd
Request for Comments: 2914                                       ACIRI
BCP: 41                                                 September 2000
Category: Best Current Practice
        
Congestion Control Principles

Status of this Memo

This document specifies an Internet Best Current Practices for the Internet Community, and requests discussion and suggestions for improvements. Distribution of this memo is unlimited.

Copyright Notice

Copyright (C) The Internet Society (2000). All Rights Reserved.

Abstract

The goal of this document is to explain the need for congestion control in the Internet, and to discuss what constitutes correct congestion control. One specific goal is to illustrate the dangers of neglecting to apply proper congestion control. A second goal is to discuss the role of the IETF in standardizing new congestion control protocols.

1. Introduction

This document draws heavily from earlier RFCs, in some cases reproducing entire sections of the text of earlier documents [RFC2309, RFC2357]. We have also borrowed heavily from earlier publications addressing the need for end-to-end congestion control [FF99].

2. Current standards on congestion control

IETF standards concerning end-to-end congestion control focus either on specific protocols (e.g., TCP [RFC2581], reliable multicast protocols [RFC2357]) or on the syntax and semantics of communications between the end nodes and routers about congestion information (e.g., Explicit Congestion Notification [RFC2481]) or desired quality-of-service (diff-serv). The role of end-to-end congestion control is also discussed in an Informational RFC on "Recommendations on Queue Management and Congestion Avoidance in the Internet" [RFC2309]. RFC 2309 recommends the deployment of active queue management mechanisms in routers, and the continuation of design efforts towards mechanisms in routers to deal with flows that are unresponsive to congestion notification. We freely borrow from RFC 2309 some of their general discussion of end-to-end congestion control.

In contrast to the RFCs discussed above, this document is a more general discussion of the principles of congestion control. One of the keys to the success of the Internet has been the congestion avoidance mechanisms of TCP. While TCP is still the dominant transport protocol in the Internet, it is not ubiquitous, and there are an increasing number of applications that, for one reason or another, choose not to use TCP. Such traffic includes not only multicast traffic, but unicast traffic such as streaming multimedia that does not require reliability; and traffic such as DNS or routing messages that consist of short transfers deemed critical to the operation of the network. Much of this traffic does not use any form of either bandwidth reservations or end-to-end congestion control. The continued use of end-to-end congestion control by best-effort traffic is critical for maintaining the stability of the Internet.

This document also discusses the general role of the IETF in the standardization of new congestion control protocols.

This document does not address congestion control principles for differentiated services or integrated services. Some categories of integrated or differentiated services include a guarantee by the network of end-to-end bandwidth, and as such do not require end-to-end congestion control mechanisms.

3. The development of end-to-end congestion control.

3.1. Preventing congestion collapse.

The Internet protocol architecture is based on a connectionless end-to-end packet service using the IP protocol. The advantages of its connectionless design, flexibility and robustness, have been amply demonstrated. However, these advantages are not without cost: careful design is required to provide good service under heavy load. In fact, lack of attention to the dynamics of packet forwarding can result in severe service degradation or "Internet meltdown". This phenomenon was first observed during the early growth phase of the Internet of the mid 1980s [RFC896], and is technically called "congestion collapse".

The original specification of TCP [RFC793] included window-based flow control as a means for the receiver to govern the amount of data sent by the sender. This flow control was used to prevent overflow of the receiver's data buffer space available for that connection. [RFC793] reported that segments could be lost due either to errors or to network congestion, but did not include dynamic adjustment of the flow-control window in response to congestion.

The original fix for Internet meltdown was provided by Van Jacobson. Beginning in 1986, Jacobson developed the congestion avoidance mechanisms that are now required in TCP implementations [Jacobson88, RFC 2581]. These mechanisms operate in the hosts to cause TCP connections to "back off" during congestion. We say that TCP flows are "responsive" to congestion signals (i.e., dropped packets) from the network. It is these TCP congestion avoidance algorithms that prevent the congestion collapse of today's Internet.

However, that is not the end of the story. Considerable research has been done on Internet dynamics since 1988, and the Internet has grown. It has become clear that the TCP congestion avoidance mechanisms [RFC2581], while necessary and powerful, are not sufficient to provide good service in all circumstances. In addition to the development of new congestion control mechanisms [RFC2357], router-based mechanisms are in development that complement the endpoint congestion avoidance mechanisms.

A major issue that still needs to be addressed is the potential for future congestion collapse of the Internet due to flows that do not use responsible end-to-end congestion control. RFC 896 [RFC896] suggested in 1984 that gateways should detect and `squelch' misbehaving hosts: "Failure to respond to an ICMP Source Quench message, though, should be regarded as grounds for action by a gateway to disconnect a host. Detecting such failure is non-trivial but is a worthwhile area for further research." Current papers still propose that routers detect and penalize flows that are not employing acceptable end-to-end congestion control [FF99].

3.2. Fairness

In addition to a concern about congestion collapse, there is a concern about `fairness' for best-effort traffic. Because TCP "backs off" during congestion, a large number of TCP connections can share a single, congested link in such a way that bandwidth is shared reasonably equitably among similarly situated flows. The equitable sharing of bandwidth among flows depends on the fact that all flows are running compatible congestion control algorithms. For TCP, this means congestion control algorithms conformant with the current TCP specification [RFC793, RFC1122, RFC2581].

The issue of fairness among competing flows has become increasingly important for several reasons. First, using window scaling [RFC1323], individual TCPs can use high bandwidth even over high-propagation-delay paths. Second, with the growth of the web, Internet users increasingly want high-bandwidth and low-delay communications, rather than the leisurely transfer of a long file in the background. The growth of best-effort traffic that does not use TCP underscores this concern about fairness between competing best-effort traffic in times of congestion.

The popularity of the Internet has caused a proliferation in the number of TCP implementations. Some of these may fail to implement the TCP congestion avoidance mechanisms correctly because of poor implementation [RFC2525]. Others may deliberately be implemented with congestion avoidance algorithms that are more aggressive in their use of bandwidth than other TCP implementations; this would allow a vendor to claim to have a "faster TCP". The logical consequence of such implementations would be a spiral of increasingly aggressive TCP implementations, or increasingly aggressive transport protocols, leading back to the point where there is effectively no congestion avoidance and the Internet is chronically congested.

There is a well-known way to achieve more aggressive performance without even changing the transport protocol, by changing the level of granularity: open multiple connections to the same place, as has been done in the past by some Web browsers. Thus, instead of a spiral of increasingly aggressive transport protocols, we would instead have a spiral of increasingly aggressive web browsers, or increasingly aggressive applications.

This raises the issue of the appropriate granularity of a "flow", where we define a `flow' as the level of granularity appropriate for the application of both fairness and congestion control. From RFC 2309: "There are a few `natural' answers: 1) a TCP or UDP connection (source address/port, destination address/port); 2) a source/destination host pair; 3) a given source host or a given destination host. We would guess that the source/destination host pair gives the most appropriate granularity in many circumstances. The granularity of flows for congestion management is, at least in part, a policy question that needs to be addressed in the wider IETF community."

Again borrowing from RFC 2309, we use the term "TCP-compatible" for a flow that behaves under congestion like a flow produced by a conformant TCP. A TCP-compatible flow is responsive to congestion notification, and in steady-state uses no more bandwidth than a conformant TCP running under comparable conditions (drop rate, RTT, MTU, etc.).

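As an informal point of reference for "no more bandwidth than a conformant TCP", the steady-state bandwidth of a conformant TCP is often approximated, for moderate loss rates, by the simple model used in the literature on TCP-friendly flows (e.g., [FF99]):

    T <= 1.22 * MTU / (RTT * sqrt(p))

where T is the maximum sending rate in bytes per second, MTU is the packet size, RTT is the round-trip time, and p is the steady-state packet drop rate. We include this formula only as an illustrative sketch of how a TCP-compatible flow's bandwidth is constrained by the drop rate, RTT, and MTU; it is not a normative definition.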

It is convenient to divide flows into three classes: (1) TCP-compatible flows, (2) unresponsive flows, i.e., flows that do not slow down when congestion occurs, and (3) flows that are responsive but are not TCP-compatible. The last two classes contain more aggressive flows that pose significant threats to Internet performance, as we discuss below.

In addition to steady-state fairness, the fairness of the initial slow-start is also a concern. One concern is the transient effect on other flows of a flow with an overly-aggressive slow-start procedure. Slow-start performance is particularly important for the many flows that are short-lived, and only have a small amount of data to transfer.

3.3. Optimizing performance regarding throughput, delay, and loss.

In addition to the prevention of congestion collapse and concerns about fairness, a third reason for a flow to use end-to-end congestion control can be to optimize its own performance regarding throughput, delay, and loss. In some circumstances, for example in environments of high statistical multiplexing, the delay and loss rate experienced by a flow are largely independent of its own sending rate. However, in environments with lower levels of statistical multiplexing or with per-flow scheduling, the delay and loss rate experienced by a flow is in part a function of the flow's own sending rate. Thus, a flow can use end-to-end congestion control to limit the delay or loss experienced by its own packets. We would note, however, that in an environment like the current best-effort Internet, concerns regarding congestion collapse and fairness with competing flows limit the range of congestion control behaviors available to a flow.

4. The role of the standards process

The standardization of a transport protocol includes not only standardization of aspects of the protocol that could affect interoperability (e.g., information exchanged by the end-nodes), but also standardization of mechanisms deemed critical to performance (e.g., in TCP, reduction of the congestion window in response to a packet drop). At the same time, implementation-specific details and other aspects of the transport protocol that do not affect interoperability and do not significantly interfere with performance do not require standardization. Areas of TCP that do not require standardization include the details of TCP's Fast Recovery procedure after a Fast Retransmit [RFC2582]. The appendix uses examples from TCP to discuss in more detail the role of the standards process in the development of congestion control.

4.1. The development of new transport protocols.

   In addition to addressing the danger of congestion collapse, the
   standardization process for new transport protocols takes care to
   avoid a congestion control `arms race' among competing protocols.  As
   an example, in RFC 2357 [RFC2357] the TSV Area Directors and their
   Directorate outline criteria for the publication as RFCs of
   Internet-Drafts on reliable multicast transport protocols.  From
   [RFC2357]:  "A particular concern for the IETF is the impact of
   reliable multicast traffic on other traffic in the Internet in times
   of congestion, in particular the effect of reliable multicast traffic
   on competing TCP traffic....  The challenge to the IETF is to
   encourage research and implementations of reliable multicast, and to
   enable the needs of applications for reliable multicast to be met as
   expeditiously as possible, while at the same time protecting the
   Internet from the congestion disaster or collapse that could result
   from the widespread use of applications with inappropriate reliable
   multicast mechanisms."
        
The list of technical criteria that must be addressed by RFCs on new reliable multicast transport protocols includes the following: "Is there a congestion control mechanism? How well does it perform? When does it fail? Note that congestion control mechanisms that operate on the network more aggressively than TCP will face a great burden of proof that they don't threaten network stability."

It is reasonable to expect that these concerns about the effect of new transport protocols on competing traffic will apply not only to reliable multicast protocols, but to unreliable unicast, reliable unicast, and unreliable multicast traffic as well.

4.2. Application-level issues that affect congestion control

The specific issue of a browser opening multiple connections to the same destination has been addressed by RFC 2616 [RFC2616], which states in Section 8.1.4 that "Clients that use persistent connections SHOULD limit the number of simultaneous connections that they maintain to a given server. A single-user client SHOULD NOT maintain more than 2 connections with any server or proxy."

4.3. New developments in the standards process

The most obvious developments in the IETF that could affect the evolution of congestion control are the development of integrated and differentiated services [RFC2212, RFC2475] and of Explicit Congestion Notification (ECN) [RFC2481]. However, other less dramatic developments are likely to affect congestion control as well.

One such effort is the work on Endpoint Congestion Management [BS00], which would enable multiple concurrent flows from a sender to the same receiver to share congestion control state. By allowing multiple connections to the same destination to act as one flow in terms of end-to-end congestion control, a Congestion Manager could allow an individual connection in slow-start to take advantage of previous information about the congestion state of the end-to-end path. Further, the use of a Congestion Manager could remove the congestion control dangers of multiple flows being opened between the same source/destination pair, and could perhaps be used to allow a browser to open many simultaneous connections to the same destination.

5. A description of congestion collapse

This section discusses congestion collapse from undelivered packets in some detail, and shows how unresponsive flows could contribute to congestion collapse in the Internet. This section draws heavily on material from [FF99].

Informally, congestion collapse occurs when an increase in the network load results in a decrease in the useful work done by the network. As discussed in Section 3, congestion collapse was first reported in the mid 1980s [RFC896], and was largely due to TCP connections unnecessarily retransmitting packets that were either in transit or had already been received at the receiver. We call the congestion collapse that results from the unnecessary retransmission of packets classical congestion collapse. Classical congestion collapse is a stable condition that can result in throughput that is a small fraction of normal [RFC896]. Problems with classical congestion collapse have generally been corrected by the timer improvements and congestion control mechanisms in modern implementations of TCP [Jacobson88].

A second form of potential congestion collapse occurs due to undelivered packets. Congestion collapse from undelivered packets arises when bandwidth is wasted by delivering packets through the network that are dropped before reaching their ultimate destination. This is probably the largest unresolved danger with respect to congestion collapse in the Internet today. Different scenarios can result in different degrees of congestion collapse, in terms of the fraction of the congested links' bandwidth used for productive work. The danger of congestion collapse from undelivered packets is due primarily to the increasing deployment of open-loop applications not using end-to-end congestion control. Even more destructive would be best-effort applications that *increase* their sending rate in response to an increased packet drop rate (e.g., automatically using an increased level of FEC).

Table 1 gives the results from a scenario with congestion collapse from undelivered packets, where scarce bandwidth is wasted by packets that never reach their destination. The simulation uses a scenario with three TCP flows and one UDP flow competing over a congested 1.5 Mbps link. The access links for all nodes are 10 Mbps, except that the access link to the receiver of the UDP flow is 128 Kbps, only 9% of the bandwidth of the shared link. When the UDP source rate exceeds 128 Kbps, most of the UDP packets will be dropped at the output port to that final link.

        UDP
        Arrival   UDP       TCP       Total
        Rate      Goodput   Goodput   Goodput
       --------------------------------------
         0.7       0.7      98.5      99.2
         1.8       1.7      97.3      99.1
         2.6       2.6      96.0      98.6
         5.3       5.2      92.7      97.9
         8.8       8.4      87.1      95.5
        10.5       8.4      84.8      93.2
        13.1       8.4      81.4      89.8
        17.5       8.4      77.3      85.7
        26.3       8.4      64.5      72.8
        52.6       8.4      38.1      46.4
        58.4       8.4      32.8      41.2
        65.7       8.4      28.5      36.8
        75.1       8.4      19.7      28.1
        87.6       8.4      11.3      19.7
       105.2       8.4       3.4      11.8
       131.5       8.4       2.4      10.7
        
Table 1. A simulation with three TCP flows and one UDP flow.

Table 1 shows the UDP arrival rate from the sender, the UDP goodput (defined as the bandwidth delivered to the receiver), the TCP goodput (as delivered to the TCP receivers), and the aggregate goodput on the congested 1.5 Mbps link. Each rate is given as a fraction of the bandwidth of the congested link. As the UDP source rate increases, the TCP goodput decreases roughly linearly, and the UDP goodput is nearly constant; the 128 Kbps access link limits the UDP goodput to roughly 8.5% of the congested 1.5 Mbps link, which is the plateau of 8.4 seen in the table. Thus, as the UDP flow increases its offered load, its only effect is to hurt the TCP and aggregate goodput. On the congested link, the UDP flow ultimately `wastes' the bandwidth that could have been used by the TCP flow, and reduces the goodput in the network as a whole down to a small fraction of the bandwidth of the congested link.

The simulations in Table 1 illustrate both unfairness and congestion collapse. As [FF99] discusses, compatible congestion control is not the only way to provide fairness; per-flow scheduling at the congested routers is an alternative mechanism at the routers that guarantees fairness. However, as discussed in [FF99], per-flow scheduling can not be relied upon to prevent congestion collapse.

There are only two alternatives for eliminating the danger of congestion collapse from undelivered packets. The first alternative for preventing congestion collapse from undelivered packets is the use of effective end-to-end congestion control by the end nodes. More specifically, the requirement would be that a flow avoid a pattern of significant losses at links downstream from the first congested link on the path. (Here, we would consider any link a `congested link' if any flow is using bandwidth that would otherwise be used by other traffic on the link.) Given that an end-node is generally unable to distinguish between a path with one congested link and a path with multiple congested links, the most reliable way for a flow to avoid a pattern of significant losses at a downstream congested link is for the flow to use end-to-end congestion control, and reduce its sending rate in the presence of loss.

A second alternative for preventing congestion collapse from undelivered packets would be a guarantee by the network that packets accepted at a congested link in the network will be delivered all the way to the receiver [RFC2212, RFC2475]. We note that the choice between the first alternative of end-to-end congestion control and the second alternative of end-to-end bandwidth guarantees does not have to be an either/or decision; congestion collapse can be prevented by the use of effective end-to-end congestion by some of the traffic, and the use of end-to-end bandwidth guarantees from the network for the rest of the traffic.

6. Forms of end-to-end congestion control

This document has discussed concerns about congestion collapse and about fairness with TCP for new forms of congestion control. This does not mean, however, that concerns about congestion collapse and fairness with TCP necessitate that all best-effort traffic deploy congestion control based on TCP's Additive-Increase Multiplicative-Decrease (AIMD) algorithm of reducing the sending rate in half in response to each packet drop. This section separately discusses the implications of these two concerns of congestion collapse and fairness with TCP.

6.1. End-to-end congestion control for avoiding congestion collapse.

The avoidance of congestion collapse from undelivered packets requires that flows avoid a scenario of a high sending rate, multiple congested links, and a persistent high packet drop rate at the downstream link. Because congestion collapse from undelivered packets consists of packets that waste valuable bandwidth only to be dropped downstream, this form of congestion collapse is not possible in an environment where each flow traverses only one congested link, or where only a small number of packets are dropped at links downstream of the first congested link. Thus, any form of congestion control that successfully avoids a high sending rate in the presence of a high packet drop rate should be sufficient to avoid congestion collapse from undelivered packets.

We would note that the addition of Explicit Congestion Notification (ECN) to the IP architecture would not, in and of itself, remove the danger of congestion collapse for best-effort traffic. ECN allows routers to set a bit in packet headers as an indication of congestion to the end-nodes, rather than being forced to rely on packet drops to indicate congestion. However, with ECN, packet-marking would replace packet-dropping only in times of moderate congestion. In particular, when congestion is heavy, and a router's buffers overflow, the router has no choice but to drop arriving packets.

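To make the mark-versus-drop distinction concrete, the following sketch shows one plausible decision an ECN-capable router with RED-style active queue management might make for an arriving packet. It is an illustration only, with hypothetical names and with RED's probabilistic marking omitted; it is not a specification of router behavior.

    def ecn_action(avg_queue, queue_len, buffer_size, min_th, max_th, ect):
        """Return 'enqueue', 'mark', or 'drop' for an arriving packet."""
        if queue_len >= buffer_size:
            return "drop"       # buffer overflow: the router has no choice but to drop
        if avg_queue < min_th:
            return "enqueue"    # no congestion indication needed
        if avg_queue < max_th and ect:
            return "mark"       # moderate congestion, ECN-capable transport: set CE
        return "drop"           # heavy congestion, or the flow is not ECN-capable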

6.2. End-to-end congestion control for fairness with TCP.

The concern expressed in [RFC2357] about fairness with TCP places a significant though not crippling constraint on the range of viable end-to-end congestion control mechanisms for best-effort traffic. An environment with per-flow scheduling at all congested links would isolate flows from each other, and eliminate the need for congestion control mechanisms to be TCP-compatible. An environment with differentiated services, where flows marked as belonging to a certain diff-serv class would be scheduled in isolation from best-effort traffic, could allow the emergence of an entire diff-serv class of traffic where congestion control was not required to be TCP-compatible. Similarly, a pricing-controlled environment, or a diff-serv class with its own pricing paradigm, could supersede the concern about fairness with TCP. However, for the current Internet environment, where other best-effort traffic could compete in a FIFO queue with TCP traffic, the absence of fairness with TCP could lead to one flow `starving out' another flow in a time of high congestion, as was illustrated in Table 1 above.

However, the list of TCP-compatible congestion control procedures is not limited to AIMD with the same increase/decrease parameters as TCP. Other TCP-compatible congestion control procedures include rate-based variants of AIMD; AIMD with different sets of increase/decrease parameters that give the same steady-state behavior; equation-based congestion control where the sender adjusts its sending rate in response to information about the long-term packet drop rate; layered multicast where receivers subscribe and unsubscribe from layered multicast groups; and possibly other forms that we have not yet begun to consider.

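As a concrete illustration of equation-based congestion control (a sketch only; the function name is ours, and actual proposals use a more detailed throughput equation that also accounts for retransmit timeouts), a sender can bound its rate by a simple steady-state model of TCP throughput, recomputing the bound from its measured round-trip time and long-term loss rate rather than reacting to each individual loss:

    import math

    def tcp_friendly_rate(mtu_bytes, rtt_seconds, drop_rate):
        """Bytes per second allowed under the simple model
        T = 1.22 * MTU / (RTT * sqrt(p))."""
        return 1.22 * mtu_bytes / (rtt_seconds * math.sqrt(drop_rate))

    # Example: 1500-byte packets, a 100 ms RTT, and a 1% long-term drop
    # rate give roughly 183,000 bytes/sec, about 1.5 Mbps.
    rate_cap = tcp_friendly_rate(1500, 0.100, 0.01)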

7. Acknowledgements

Much of this document draws directly on previous RFCs addressing end-to-end congestion control. This attempts to be a summary of ideas that have been discussed for many years, and by many people. In particular, acknowledgement is due to the members of the End-to-End Research Group, the Reliable Multicast Research Group, and the Transport Area Directorate. This document has also benefited from discussion and feedback from the Transport Area Working Group. Particular thanks are due to Mark Allman for feedback on an earlier version of this document.

8. References

[BS00] Balakrishnan H. and S. Seshan, "The Congestion Manager", Work in Progress.

[DMKM00] Dawkins, S., Montenegro, G., Kojo, M. and V. Magret, "End-to-end Performance Implications of Slow Links", Work in Progress.

   [FF99]       Floyd, S. and K. Fall, "Promoting the Use of End-to-End
                Congestion Control in the Internet", IEEE/ACM
                Transactions on Networking, August 1999.  URL
                http://www.aciri.org/floyd/end2end-paper.html
        
[HPF00] Handley, M., Padhye, J. and S. Floyd, "TCP Congestion Window Validation", RFC 2861, June 2000.

[Jacobson88] V. Jacobson, Congestion Avoidance and Control, ACM SIGCOMM '88, August 1988.

[RFC793] Postel, J., "Transmission Control Protocol", STD 7, RFC 793, September 1981.

[RFC896] Nagle, J., "Congestion Control in IP/TCP", RFC 896, January 1984.

[RFC1122] Braden, R., Ed., "Requirements for Internet Hosts -- Communication Layers", STD 3, RFC 1122, October 1989.

[RFC1323] Jacobson, V., Braden, R. and D. Borman, "TCP Extensions for High Performance", RFC 1323, May 1992.

[RFC2119] Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, March 1997.

[RFC2212] Shenker, S., Partridge, C. and R. Guerin, "Specification of Guaranteed Quality of Service", RFC 2212, September 1997.

[RFC2309] Braden, R., Clark, D., Crowcroft, J., Davie, B., Deering, S., Estrin, D., Floyd, S., Jacobson, V., Minshall, G., Partridge, C., Peterson, L., Ramakrishnan, K.K., Shenker, S., Wroclawski, J., and L. Zhang, "Recommendations on Queue Management and Congestion Avoidance in the Internet", RFC 2309, April 1998.

[RFC2357] Mankin, A., Romanow, A., Bradner, S. and V. Paxson, "IETF Criteria for Evaluating Reliable Multicast Transport and Application Protocols", RFC 2357, June 1998.

[RFC2414] Allman, M., Floyd, S. and C. Partridge, "Increasing TCP's Initial Window", RFC 2414, September 1998.

[RFC2475] Blake, S., Black, D., Carlson, M., Davies, E., Wang, Z. and W. Weiss, "An Architecture for Differentiated Services", RFC 2475, December 1998.

[RFC2481] Ramakrishnan K. and S. Floyd, "A Proposal to add Explicit Congestion Notification (ECN) to IP", RFC 2481, January 1999.

[RFC2525] Paxson, V., Allman, M., Dawson, S., Fenner, W., Griner, J., Heavens, I., Lahey, K., Semke, J. and B. Volz, "Known TCP Implementation Problems", RFC 2525, March 1999.

[RFC2581] Allman, M., Paxson, V. and W. Stevens, "TCP Congestion Control", RFC 2581, April 1999.

[RFC2582] Floyd, S. and T. Henderson, "The NewReno Modification to TCP's Fast Recovery Algorithm", RFC 2582, April 1999.

[RFC2616] Fielding, R., Gettys, J., Mogul, J., Frystyk, H., Masinter, L., Leach, P. and T. Berners-Lee, "Hypertext Transfer Protocol -- HTTP/1.1", RFC 2616, June 1999.

[SCWA99] S. Savage, N. Cardwell, D. Wetherall, and T. Anderson, TCP Congestion Control with a Misbehaving Receiver, ACM Computer Communications Review, October 1999.

[TCPB98] Hari Balakrishnan, Venkata N. Padmanabhan, Srinivasan Seshan, Mark Stemm, and Randy H. Katz, TCP Behavior of a Busy Internet Server: Analysis and Improvements, IEEE Infocom, March 1998. Available from: "http://www.cs.berkeley.edu/~hari/papers/infocom98.ps.gz".

[TCPF98] Dong Lin and H.T. Kung, TCP Fast Recovery Strategies: Analysis and Improvements, IEEE Infocom, March 1998. Available from: "http://www.eecs.harvard.edu/networking/papers/infocom-tcp-final-198.pdf".

9. TCP-Specific issues

In this section we discuss some of the particulars of TCP congestion control, to illustrate a realization of the congestion control principles, including some of the details that arise when incorporating them into a production transport protocol.

9.1. Slow-start.

The TCP sender can not open a new connection by sending a large burst of data (e.g., a receiver's advertised window) all at once. The TCP sender is limited by a small initial value for the congestion window. During slow-start, the TCP sender can increase its sending rate by at most a factor of two in one roundtrip time. Slow-start ends when congestion is detected, or when the sender's congestion window is greater than the slow-start threshold ssthresh.

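The window growth described above can be sketched as follows (in units of full-sized segments; this is illustrative only, with names of our own choosing rather than from [RFC2581]):

    def on_ack_in_slow_start(cwnd, ssthresh):
        """Congestion window growth, in segments, for each new ACK received.

        Increasing cwnd by one segment per ACK at most doubles the window
        in one round-trip time.  Slow-start ends when congestion is
        detected or when cwnd exceeds ssthresh, after which the linear
        increase of congestion avoidance (Section 9.2) takes over.
        """
        if cwnd < ssthresh:
            cwnd += 1
        return cwnd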

An issue that potentially affects global congestion control, and therefore has been explicitly addressed in the standards process, includes an increase in the value of the initial window [RFC2414,RFC2581].

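For reference, the upper bound on the larger initial window proposed in [RFC2414] is

    min (4*MSS, max (2*MSS, 4380 bytes))

which we mention here only as an example of the kind of parameter change that is judged to affect global congestion control and is therefore taken through the standards process.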

Issues that have not been addressed in the standards process, and are generally considered not to require standardization, include such issues as the use (or non-use) of rate-based pacing, and mechanisms for ending slow-start early, before the congestion window reaches ssthresh. Such mechanisms result in slow-start behavior that is as conservative or more conservative than standard TCP.

9.2. Additive Increase, Multiplicative Decrease.

In the absence of congestion, the TCP sender increases its congestion window by at most one packet per roundtrip time. In response to a congestion indication, the TCP sender decreases its congestion window by half. (More precisely, the new congestion window is half of the minimum of the congestion window and the receiver's advertised window.)

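A minimal sketch of this additive-increase, multiplicative-decrease behavior, in units of segments (illustrative only; a real implementation follows [RFC2581], maintains the window in bytes, and handles many additional details):

    def on_new_ack(cwnd):
        # Additive increase: at most one segment per round-trip time,
        # approximated here by cwnd += 1/cwnd for each ACK received.
        return cwnd + 1.0 / cwnd

    def on_congestion_indication(cwnd, receiver_window):
        # Multiplicative decrease: the new congestion window is half of
        # the minimum of the congestion window and the receiver's
        # advertised window.
        return min(cwnd, receiver_window) / 2.0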

An issue that potentially affects global congestion control, and therefore would be likely to be explicitly addressed in the standards process, would include a proposed addition of congestion control for the return stream of `pure acks'.

An issue that has not been addressed in the standards process, and is generally not considered to require standardization, would be a change to the congestion window to apply as an upper bound on the number of bytes presumed to be in the pipe, instead of applying as a sliding window starting from the cumulative acknowledgement. (Clearly, the receiver's advertised window applies as a sliding window starting from the cumulative acknowledgement field, because packets received above the cumulative acknowledgement field are held in TCP's receive buffer, and have not been delivered to the application. However, the congestion window applies to the number of packets outstanding in the pipe, and does not necessarily have to include packets that have been received out-of-order by the TCP receiver.)

9.3. Retransmit timers.

The TCP sender sets a retransmit timer to infer that a packet has been dropped in the network. When the retransmit timer expires, the sender infers that a packet has been lost, sets ssthresh to half of the current window, and goes into slow-start, retransmitting the lost packet. If the retransmit timer expires because no acknowledgement has been received for a retransmitted packet, the retransmit timer is also "backed-off", doubling the value of the next retransmit timeout interval.

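The response to an expired retransmit timer described above can be sketched as follows (in segments, with hypothetical names; this is an illustration, not a substitute for the TCP specifications):

    def on_retransmit_timeout(cwnd, rto, packet_was_retransmitted):
        """Sketch of the sender's response when the retransmit timer expires."""
        ssthresh = cwnd / 2.0     # ssthresh is set to half of the current window
        cwnd = 1.0                # the sender goes back into slow-start
        if packet_was_retransmitted:
            rto = 2 * rto         # back off: double the next retransmit timeout
        # ... the packet presumed lost is retransmitted (not shown) ...
        return cwnd, ssthresh, rto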

An issue that potentially affects global congestion control, and therefore would be likely to be explicitly addressed in the standards process, might include a modified mechanism for setting the retransmit timer that could significantly increase the number of retransmit timers that expire prematurely, when the acknowledgement has not yet arrived at the sender, but in fact no packets have been dropped. This could be of concern to the Internet standards process because retransmit timers that expire prematurely could lead to an increase in the number of packets unnecessarily transmitted on a congested link.

9.4. Fast Retransmit and Fast Recovery.

After seeing three duplicate acknowledgements, the TCP sender infers a packet loss. The TCP sender sets ssthresh to half of the current window, reduces the congestion window to at most half of the previous window, and retransmits the lost packet.

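The trigger described above can be sketched as follows (illustrative only; the full Fast Retransmit and Fast Recovery procedures in [RFC2581] and [RFC2582] include details, such as temporary window inflation during recovery, that are omitted here):

    DUPACK_THRESHOLD = 3

    def on_duplicate_ack(dupacks, cwnd, ssthresh):
        """Count duplicate ACKs and infer a loss after the third one."""
        dupacks += 1
        if dupacks == DUPACK_THRESHOLD:
            ssthresh = cwnd / 2.0     # ssthresh set to half of the current window
            cwnd = cwnd / 2.0         # congestion window reduced to at most half
            # ... the segment presumed lost is retransmitted (not shown) ...
        return dupacks, cwnd, ssthresh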

An issue that potentially affects global congestion control, and therefore would be likely to be explicitly addressed in the standards process, might include a proposal (if there was one) for inferring a lost packet after only one or two duplicate acknowledgements. If poorly designed, such a proposal could lead to an increase in the number of packets unnecessarily transmitted on a congested path.

An issue that has not been addressed in the standards process, and would not be expected to require standardization, would be a proposal to send a "new" or presumed-lost packet in response to a duplicate or partial acknowledgement, if allowed by the congestion window. An example of this would be sending a new packet in response to a single duplicate acknowledgement, to keep the `ack clock' going in case no further acknowledgements would have arrived. Such a proposal is an example of a beneficial change that does not involve interoperability and does not affect global congestion control, and that therefore could be implemented by vendors without requiring the intervention of the IETF standards process. (This issue has in fact been addressed in [DMKM00], which suggests that "researchers may wish to experiment with injecting new traffic into the network when duplicate acknowledgements are being received, as described in [TCPB98] and [TCPF98].")

9.5. Other aspects of TCP congestion control.

Other aspects of TCP congestion control that have not been discussed in any of the sections above include TCP's recovery from an idle or application-limited period [HPF00].

10. Security Considerations

This document has been about the risks associated with congestion control, or with the absence of congestion control. Section 3.2 discusses the potentials for unfairness if competing flows don't use compatible congestion control mechanisms, and Section 5 considers the dangers of congestion collapse if flows don't use end-to-end congestion control.

Because this document does not propose any specific congestion control mechanisms, it is also not necessary to present specific security measures associated with congestion control. However, we would note that there are a range of security considerations associated with congestion control that should be considered in IETF documents.

For example, individual congestion control mechanisms should be as robust as possible to the attempts of individual end-nodes to subvert end-to-end congestion control [SCWA99]. This is a particular concern in multicast congestion control, because of the far-reaching distribution of the traffic and the greater opportunities for individual receivers to fail to report congestion.

RFC 2309 also discussed the potential dangers to the Internet of unresponsive flows, that is, flows that don't reduce their sending rate in the presence of congestion, and describes the need for mechanisms in the network to deal with flows that are unresponsive to congestion notification. We would note that there is still a need for research, engineering, measurement, and deployment in these areas.

Because the Internet aggregates very large numbers of flows, the risk to the whole infrastructure of subverting the congestion control of a few individual flows is limited. Rather, the risk to the infrastructure would come from the widespread deployment of many end-nodes subverting end-to-end congestion control.

AUTHOR'S ADDRESS

Sally Floyd
AT&T Center for Internet Research at ICSI (ACIRI)

   Phone: +1 (510) 642-4274 x189
   EMail: floyd@aciri.org
   URL: http://www.aciri.org/floyd/
        
Full Copyright Statement

Copyright (C) The Internet Society (2000). All Rights Reserved.

This document and translations of it may be copied and furnished to others, and derivative works that comment on or otherwise explain it or assist in its implementation may be prepared, copied, published and distributed, in whole or in part, without restriction of any kind, provided that the above copyright notice and this paragraph are included on all such copies and derivative works. However, this document itself may not be modified in any way, such as by removing the copyright notice or references to the Internet Society or other Internet organizations, except as needed for the purpose of developing Internet standards in which case the procedures for copyrights defined in the Internet Standards process must be followed, or as required to translate it into languages other than English.

The limited permissions granted above are perpetual and will not be revoked by the Internet Society or its successors or assigns.

This document and the information contained herein is provided on an "AS IS" basis and THE INTERNET SOCIETY AND THE INTERNET ENGINEERING TASK FORCE DISCLAIMS ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Acknowledgement

Funding for the RFC Editor function is currently provided by the Internet Society.
