Network Working Group                                            K. Poduri
Request for Comments: 2415                                      K. Nichols
Category: Informational                                       Bay Networks
                                                            September 1998
        

Simulation Studies of Increased Initial TCP Window Size

Status of this Memo

This memo provides information for the Internet community. It does not specify an Internet standard of any kind. Distribution of this memo is unlimited.

Copyright Notice

Copyright (C) The Internet Society (1998). All Rights Reserved.

Abstract

An increase in the permissible initial window size of a TCP connection, from one segment to three or four segments, has been under discussion in the tcp-impl working group. This document covers some simulation studies of the effects of increasing the initial window size of TCP. Both long-lived TCP connections (file transfers) and short-lived web-browsing style connections were modeled. The simulations were performed using the publicly available ns-2 simulator and our custom models and files are also available.

1. Introduction

We present results from a set of simulations with increased TCP initial window (IW). The main objectives were to explore the conditions under which the larger IW was a "win" and to determine the effects, if any, the larger IW might have on other traffic flows using an IW of one segment.

This study was inspired by discussions at the Munich IETF tcp-impl and tcp-sat meetings. A proposal to increase the IW size to about 4K bytes (4380 bytes in the case of 1460 byte segments) was discussed. Concerns about both the utility of the increase and its effect on other traffic were raised. Some studies were presented showing the positive effects of increased IW on individual connections, but no studies were shown with a wide variety of simultaneous traffic flows. It appeared that some of the questions being raised could be addressed in an ns-2 simulation. Early results from our simulations were previously posted to the tcp-impl mailing list and presented at the tcp-impl WG meeting at the December 1997 IETF.

2. Model and Assumptions

We simulated a network topology with a bottleneck link as shown:

            10Mb (all 4 links)               10Mb (all 4 links)
      C   n2_________                               ______ n6     S
      l   n3_________\                             /______ n7     e
      i              \\              1.5Mb, 50ms   //             r
      e               n0 ------------------------ n1              v
      n   n4__________//                          \ \_____ n8     e
      t   n5__________/                            \______ n9     r
      s                                                           s
        
                    URLs -->          <--- FTP & Web data
        

File downloading and web-browsing clients are attached to the nodes (n2-n5) on the left-hand side. These clients are served by the FTP and Web servers attached to the nodes (n6-n9) on the right-hand side. The links to and from those nodes are at 10 Mbps. The bottleneck link is between n1 and n0. All links are bi-directional, but only ACKs, SYNs, FINs, and URLs are flowing from left to right. Some simulations were also performed with data traffic flowing from right to left simultaneously, but it had no effect on the results.

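For readers who want to reconstruct this topology, the OTcl fragment below is a minimal sketch of the bottleneck and one client-side and one server-side edge link. The node names mirror the figure; the edge-link delay is not specified in the text and 1ms is assumed here. The authors' actual simulation scripts (referenced later in this section) remain the authoritative source.

   # Minimal ns-2 sketch of the simulated topology (illustrative only)
   set ns [new Simulator]
   set n0 [$ns node]                             ;# client-side router
   set n1 [$ns node]                             ;# server-side router
   set n2 [$ns node]                             ;# one client node (of n2-n5)
   set n6 [$ns node]                             ;# one server node (of n6-n9)
   $ns duplex-link $n0 $n1 1.5Mb 50ms DropTail   ;# bottleneck link
   $ns duplex-link $n2 $n0 10Mb 1ms DropTail     ;# 10Mb edge link (1ms assumed)
   $ns duplex-link $n1 $n6 10Mb 1ms DropTail
   $ns queue-limit $n0 $n1 25                    ;# 25-packet buffers (see below)
   $ns queue-limit $n1 $n0 25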

In the simulations we assumed that all ftps transferred 1-MB files and that all web pages had exactly three embedded URLs. The web clients are browsing quite aggressively, requesting a new page after a random delay uniformly distributed between 1 and 5 seconds. This is not meant to realistically model a single user's web-browsing pattern, but to create a reasonably heavy traffic load whose individual tcp connections accurately reflect real web traffic. Some discussion of these models as used in earlier studies is available in references [3] and [4].

The maximum tcp window was set to 11 packets, maximum packet (or segment) size to 1460 bytes, and buffer sizes were set at 25 packets. (The ns-2 TCPs require setting window sizes and buffer sizes in number of packets. In our tcp-full code some of the internal parameters have been set to be byte-oriented, but external values must still be set in number of packets.) In our simulations, we varied the number of data segments sent into a new TCP connection (or initial window) from one to four, keeping all segments at 1460 bytes. A dropped packet causes a restart window of one segment to be used, just as in current practice.

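The TCP settings above would translate into class defaults roughly like the following; the variable names (window_, segsize_, windowInit_) are the standard ns-2 Agent/TCP parameters as we understand them and should be checked against the distributed scripts.

   # Illustrative FullTcp defaults matching the parameters above;
   # assumes the standard ns-2 variable names.
   Agent/TCP/FullTcp set segsize_    1460    ;# segment size in bytes
   Agent/TCP/FullTcp set window_     11      ;# maximum window, in packets
   Agent/TCP/FullTcp set windowInit_ 3       ;# initial window; varied from 1 to 4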

For ns-2 users: The tcp-full code was modified to use an "application" class and three application client-server pairs were written: a simple file transfer (ftp), a model of http1.0 style web connection and a very rough model of http1.1 style web connection. The required files and scripts for these simulations are available under the contributed code section on the ns-simulator web page at the sites ftp://ftp.ee.lbl.gov/IW.{tar, tar.Z} or http://www-nrg.ee.lbl.gov/floyd/tcp_init_win.html.

Simulations were run with 8, 16, and 32 web clients and with the number of ftp clients ranging from 0 to 3. The IW was varied from 1 to 4, though the 4-packet case lies beyond what is currently recommended. The figures of merit used were goodput, the median page delay seen by the web clients, and the median file transfer delay seen by the ftp clients. The simulated run time was rather large, 360 seconds, to ensure an adequate sample. (Median values remained the same for simulations with larger run times and can be considered stable.)

3. Results

In our simulations, we varied the number of file transfer clients in order to change the congestion of the link. Recall that our ftp clients continuously request 1-Mbyte transfers, so the link utilization is over 90% when even a single ftp client is present. When three file transfer clients are running simultaneously, the resultant congestion is somewhat pathological and the recorded values are less stable. Though all connections use the same initial window, the effect of increasing the IW on a 1-Mbyte file transfer is not detectable, so we focus on the web-browsing connections. (In the tables, we use "webs" to indicate the number of web clients and "ftps" to indicate the number of file transfer clients attached.) Table 1 shows the median delays experienced by the web transfers as the TCP IW is increased. There is clearly an improvement in transfer delay for the web connections with an increase in the IW, in many cases on the order of 30%. The steepness of the performance improvement in going from an IW of 1 to an IW of 2 is mainly due to the distribution of files fetched by each URL (see references [1] and [2]); the median size of both primary and in-line URLs fits completely into two packets. If file distributions change, the shape of this curve may also change.

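To see why the first increment matters so much, consider how many round trips idealized slow start needs to deliver a short reply. The small Tcl procedure below (our illustration, not part of the authors' models; it ignores connection setup, delayed ACKs, and losses) shows that a two-segment object needs two round trips with an IW of 1 but only one with an IW of 2.

   # Count the round trips needed to send nsegs segments under idealized,
   # loss-free slow start, starting from an initial window of iw.
   proc rtts_to_send {nsegs iw} {
       set cwnd $iw
       set sent 0
       set rtts 0
       while {$sent < $nsegs} {
           incr sent $cwnd
           incr rtts
           set cwnd [expr {2 * $cwnd}]    ;# cwnd doubles each round trip
       }
       return $rtts
   }
   puts [rtts_to_send 2 1]    ;# -> 2 round trips for a 2-segment object
   puts [rtts_to_send 2 2]    ;# -> 1 round trip once IW=2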

Table 1. Median web page delay

   #Webs   #FTPs   IW=1    IW=2    IW=3    IW=4
                   (s)      (% decrease from IW=1)
   ----------------------------------------------
     8      0      0.56    14.3  17.9   16.1
     8      1      1.06    18.9  25.5   32.1
     8      2      1.18    16.1  17.1   28.9
     8      3      1.26    11.9  19.0   27.0
    16      0      0.64    11.0  15.6   18.8
    16      1      1.04    17.3  24.0   35.6
    16      2      1.22    17.2  20.5   25.4
    16      3      1.31    10.7  21.4   22.1
    32      0      0.92    17.6  28.6   21.0
    32      1      1.19    19.6  25.0   26.1
    32      2      1.43    23.8  35.0   33.6
    32      3      1.56    19.2  29.5   33.3
        

Table 2 shows the bottleneck link utilization and packet drop percentage of the same experiment. Packet drop rates did increase with IW, but in all cases except that of the single most pathological overload, the increase in drop percentage was less than 1%. A decrease in packet drop percentage is observed in some overloaded situations, specifically when ftp transfers consumed most of the link bandwidth and a large number of web transfers shared the remaining bandwidth of the link. In this case, the web transfers experience severe packet loss and some of the IW=4 web clients suffer multiple packet losses from the same window, resulting in longer recovery times than when there is a single packet loss in a window. During the recovery time, the connections are inactive which alleviates congestion and thus results in a decrease in the packet drop percentage. It should be noted that such observations were made only in extremely overloaded scenarios.

Table 2. Link utilization and packet drop rates

         Percentage Link Utilization            |   Packet drop rate (%)
#Webs   #FTPs   IW=1    IW=2    IW=3  IW=4      |IW=1  IW=2  IW=3  IW=4
-----------------------------------------------------------------------
  8     0        34     37      38      39      | 0.0   0.0  0.0   0.0
  8     1        95     92      93      92      | 0.6   1.2  1.4   1.3
  8     2        98     97      97      96      | 1.8   2.3  2.3   2.7
  8     3        98     98      98      98      | 2.6   3.0  3.5   3.5
-----------------------------------------------------------------------
 16     0        67     69      69      67      | 0.1   0.5  0.8   1.0
 16     1        96     95      93      92      | 2.1   2.6  2.9   2.9
 16     2        98     98      97      96      | 3.5   3.6  4.2   4.5
 16     3        99     99      98      98      | 4.5   4.7  5.2   4.9
-----------------------------------------------------------------------
 32     0        92     87      85      84      | 0.1   0.5  0.8   1.0
 32     1        98     97      96      96      | 2.1   2.6  2.9   2.9
 32     2        99     99      98      98      | 3.5   3.6  4.2   4.5
 32     3       100     99      99      98      | 9.3   8.4  7.7   7.6
        

To get a more complete picture of performance, we computed the network power, goodput divided by median delay (in Mbytes/ms), and plotted it against IW for all scenarios. (Each scenario is uniquely identified by its number of webs and number of file transfers.) We plot these values in Figure 1 (in the pdf version), illustrating a general advantage to increasing IW. When a large number of web clients is combined with ftps, particularly multiple ftps, pathological cases result from the extreme congestion. In these cases, there appears to be no particular trend to the results of increasing the IW; in fact, the simulation results are not particularly stable.

To get a clearer picture of what is happening across all the tested scenarios, we normalized the network power values of each non-pathological scenario by the network power of that scenario at an IW of one. These results are plotted in Figure 2. As the IW is increased from one to four, network power increased by at least 15%, even in a congested scenario dominated by bulk-transfer traffic. In simulations where web traffic has the dominant share of the available bandwidth, the increase in network power was up to 60%.

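As a concrete statement of the figure of merit, the fragment below computes network power and the per-scenario normalization used for Figure 2; the numbers shown are placeholders, not simulation output.

   # Network power = goodput / median delay (Mbytes/ms); Figure 2
   # normalizes each scenario by its own power at IW=1.
   proc net_power {goodput_mbytes median_delay_ms} {
       return [expr {double($goodput_mbytes) / $median_delay_ms}]
   }
   set p(iw1) [net_power 36.0 640.0]       ;# placeholder IW=1 point
   set p(iw3) [net_power 38.0 540.0]       ;# placeholder IW=3 point
   set norm [expr {$p(iw3) / $p(iw1)}]     ;# >1 means a gain over IW=1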

The increase in network power at higher initial window sizes is due to an increase in throughput and a decrease in the delay. Since the (slightly) increased drop rates were accompanied by better performance, drop rate is clearly not an indicator of user level performance.

The gains in performance seen by the web clients need to be balanced against the performance the file transfers are seeing. We computed ftp network power and show this in Table 3. It appears that the improvement in network power seen by the web connections has negligible effect on the concurrent file transfers. It can be observed from the table that there is a small variation in the network power of file transfers with an increase in the size of IW but no particular trend can be seen. It can be concluded that the network power of file transfers essentially remained the same. However, it should be noted that a larger IW does allow web transfers to gain slightly more bandwidth than with a smaller IW. This could mean fewer bytes transferred for FTP applications or a slight decrease in network power as computed by us.

Table 3. Network power of file transfers with an increase in the TCP IW size

   #Webs   #FTPs   IW=1    IW=2    IW=3    IW=4
   --------------------------------------------
     8      1      4.7     4.2     4.2     4.2
     8      2      3.0     2.8     3.0     2.8
     8      3      2.2     2.2     2.2     2.2
    16      1      2.3     2.4     2.4     2.5
    16      2      1.8     2.0     1.8     1.9
    16      3      1.4     1.6     1.5     1.7
    32      1      0.7     0.9     1.3     0.9
    32      2      0.8     1.0     1.3     1.1
    32      3      0.7     1.0     1.2     1.0
        

The above simulations all used http1.0 style web connections; thus, a natural question is how the results are affected by migration to http1.1. A rough model of this behavior was simulated by using one connection to send all of the information from both the primary URL and the three embedded, or in-line, URLs. Since the transfer size is now made up of four web files, the steep improvement in performance between an IW of 1 and an IW of 2, noted in the previous results, has been smoothed. Results are shown in Tables 4 and 5 and Figures 3 and 4. Occasionally, an increase in IW from 3 to 4 decreases the network power owing to flat or slightly decreased throughput. TCP connections opening with a larger window into a very congested network may experience packet drops and consequently a slight decrease in throughput. This indicates that increasing the initial window to still larger values (>4) may not always result in favorable network performance. This can be seen clearly in Figure 4, where the network power decreases for the two highly congested cases.

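Reusing the rtts_to_send sketch shown earlier (same idealized, loss-free assumptions), the smoothing is easy to see: under http1.0 each two-segment object restarts slow start at the IW, whereas an http1.1-style connection carries all eight or so segments over one window that keeps growing.

   # Using the rtts_to_send proc defined earlier (idealized slow start):
   puts [rtts_to_send 2 1]    ;# http1.0: 2 round trips per object at IW=1
   puts [rtts_to_send 2 2]    ;# http1.0: 1 round trip per object at IW=2
   puts [rtts_to_send 8 1]    ;# http1.1: 4 round trips for ~8 segments, IW=1
   puts [rtts_to_send 8 2]    ;# http1.1: 3 round trips at IW=2

Going from IW=1 to IW=2 halves the per-object round trips in the http1.0 case but saves only one round trip in four for the single http1.1-style connection.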

Table 4. Median web page delay for http1.1

   #Webs   #FTPs   IW=1    IW=2    IW=3    IW=4
                   (s)      (% decrease from IW=1)
   ----------------------------------------------
     8      0      0.47   14.9   19.1   21.3
     8      1      0.84   17.9   19.0   25.0
     8      2      0.99   11.5   17.3   23.0
     8      3      1.04   12.1   20.2   28.3
    16      0      0.54   07.4   14.8   20.4
    16      1      0.89   14.6   21.3   27.0
    16      2      1.02   14.7   19.6   25.5
    16      3      1.11   09.0   17.0   18.9
    32      0      0.94   16.0   29.8   36.2
    32      1      1.23   12.2   28.5   21.1
    32      2      1.39   06.5   13.7   12.2
    32      3      1.46   04.0   11.0   15.0
        

Table 5. Network power of file transfers with an increase in the TCP IW size (http1.1 case)

   #Webs   #FTPs   IW=1    IW=2    IW=3    IW=4
   --------------------------------------------
     8      1      4.2     4.2     4.2     3.7
     8      2      2.7     2.5     2.6     2.3
     8      3      2.1     1.9     2.0     2.0
    16      1      1.8     1.8     1.5     1.4
    16      2      1.5     1.2     1.1     1.5
    16      3      1.0     1.0     1.0     1.0
    32      1      0.3     0.3     0.5     0.3
    32      2      0.4     0.3     0.4     0.4
    32      3      0.4     0.3     0.4     0.5
        

For further insight, we returned to the http1.0 model and mixed web-browsing connections using an IW of one with connections using an IW of three. In this experiment, we first simulated a total of 16 web-browsing connections, all using an IW of one. Then the clients were split into two groups of 8 each, one of which used IW=1 and the other IW=3.

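In ns-2 terms, the split only requires giving the two client groups different per-connection initial windows; the fragment below is our sketch of how that might be expressed, again assuming the standard windowInit_ variable.

   # Sketch: 16 connections, half opening with IW=1 and half with IW=3
   for {set i 0} {$i < 16} {incr i} {
       set tcp($i) [new Agent/TCP/FullTcp]
       if {$i < 8} {
           $tcp($i) set windowInit_ 1    ;# first group of 8 keeps IW=1
       } else {
           $tcp($i) set windowInit_ 3    ;# second group of 8 uses IW=3
       }
   }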

We repeated the simulations for a total of 32 and 64 web-browsing clients, splitting those into groups of 16 and 32, respectively. Table 6 shows these results. We report the goodput (in Mbytes), the median page delay (in seconds), the percent utilization of the link, and the percent of packets dropped.

Table 6. Results for half-and-half scenario

Median Page Delay (s) and Goodput (MB)  | Link Utilization (%) & Drops (%)
#Webs     IW=1    |     IW=3          |       IW=1    |    IW=3
      G.put   dly |  G.put   dly      |  L.util  Drops| L.util   Drops
------------------|-------------------|---------------|---------------
16      35.5  0.64|  36.4   0.54      |   67      0.1 |   69       0.7
8/8     16.9  0.67|  18.9   0.52      |   68      0.5 |
------------------|-------------------|---------------|---------------
32      48.9  0.91|  44.7   0.68      |   92      3.5 |   85       4.3
16/16   22.8  0.94|  22.9   0.71      |   89      4.6 |
------------------|-------------------|---------------|----------------
64      51.9  1.50|  47.6   0.86      |   98     13.0 |   91       8.6
32/32   29.0  1.40|  22.0   1.20      |   98     12.0 |
        

Unsurprisingly, the non-split experiments are consistent with our earlier results: clients with IW=3 outperform clients with IW=1. The results of the 8/8 and 16/16 splits show that running a mixture of IW=3 and IW=1 has no negative effect on the IW=1 conversations, while the IW=3 conversations maintain their performance. However, the 32/32 split shows that web-browsing connections with IW=3 are adversely affected. We believe this is due to the pathological dynamics of this extremely congested scenario. Since embedded URLs open their connections simultaneously, a very large number of TCP connections arrives at the bottleneck link at once, resulting in multiple packet losses for the IW=3 conversations. The myriad problems of this simultaneous-opening strategy are, of course, part of the motivation for the development of http1.1.

4. Discussion

The indications from these results are that increasing the initial window size to 3 packets (or 4380 bytes) helps to improve perceived performance. Many further variations on these simulation scenarios are possible and we've made our simulation models and scripts available in order to facilitate others' experiments.

We also used the RED queue management included with ns-2 to perform some other simulation studies. We have not reported on those results here since we don't consider the studies complete. We found that by adding RED to the bottleneck link, we achieved similar performance gains (with an IW of 1) to those we found with increased IWs without RED. Others may wish to investigate this further.

Although the simulation sets were run for a T1 link, several scenarios with varying levels of congestion and varying numbers of web and ftp clients were analyzed. It is reasonable to expect that the results would scale for links with higher bandwidth. However, interested readers could investigate this aspect further.

5. References

[1] B. Mah, "An Empirical Model of HTTP Network Traffic", Proceedings of INFOCOM '97, Kobe, Japan, April 7-11, 1997.

[2] C.R. Cunha, A. Bestavros, M.E. Crovella, "Characteristics of WWW Client-based Traces", Boston University Computer Science Technical Report BU-CS-95-010, July 18, 1995.

[3] K.M. Nichols and M. Laubach, "Tiers of Service for Data Access in a HFC Architecture", Proceedings of SCTE Convergence Conference, January, 1997.

[4] K.M. Nichols, "Improving Network Simulation with Feedback", available from knichols@baynetworks.com

6. Acknowledgements

This work benefited from discussions with and comments from Van Jacobson.

7. Security Considerations

This document discusses a simulation study of the effects of a proposed change to TCP. Consequently, there are no security considerations directly related to the document. There are also no known security considerations associated with the proposed change.

8. Authors' Addresses

   Kedarnath Poduri
   Bay Networks
   4401 Great America Parkway
   SC01-04
   Santa Clara, CA 95052-8185

   Phone: +1-408-495-2463
   Fax:   +1-408-495-1299
   EMail: kpoduri@Baynetworks.com
        

   Kathleen Nichols
   Bay Networks
   4401 Great America Parkway
   SC01-04
   Santa Clara, CA 95052-8185

   EMail: knichols@baynetworks.com
        

Full Copyright Statement

Copyright (C) The Internet Society (1998). All Rights Reserved.

This document and translations of it may be copied and furnished to others, and derivative works that comment on or otherwise explain it or assist in its implementation may be prepared, copied, published and distributed, in whole or in part, without restriction of any kind, provided that the above copyright notice and this paragraph are included on all such copies and derivative works. However, this document itself may not be modified in any way, such as by removing the copyright notice or references to the Internet Society or other Internet organizations, except as needed for the purpose of developing Internet standards in which case the procedures for copyrights defined in the Internet Standards process must be followed, or as required to translate it into languages other than English.

The limited permissions granted above are perpetual and will not be revoked by the Internet Society or its successors or assigns.

This document and the information contained herein is provided on an "AS IS" basis and THE INTERNET SOCIETY AND THE INTERNET ENGINEERING TASK FORCE DISCLAIMS ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
