Internet Engineering Task Force (IETF)                         S. Loreto
Request for Comments: 6202                                      Ericsson
Category: Informational                                   P. Saint-Andre
ISSN: 2070-1721                                                    Cisco
                                                              S. Salsano
                                        University of Rome "Tor Vergata"
                                                              G. Wilkins
                                                                 Webtide
                                                              April 2011
        

Known Issues and Best Practices for the Use of Long Polling and Streaming in Bidirectional HTTP

Abstract

On today's Internet, the Hypertext Transfer Protocol (HTTP) is often used (some would say abused) to enable asynchronous, "server-initiated" communication from a server to a client as well as communication from a client to a server. This document describes known issues and best practices related to such "bidirectional HTTP" applications, focusing on the two most common mechanisms: HTTP long polling and HTTP streaming.

Status of This Memo

This document is not an Internet Standards Track specification; it is published for informational purposes.

This document is a product of the Internet Engineering Task Force (IETF). It represents the consensus of the IETF community. It has received public review and has been approved for publication by the Internet Engineering Steering Group (IESG). Not all documents approved by the IESG are a candidate for any level of Internet Standard; see Section 2 of RFC 5741.

Information about the current status of this document, any errata, and how to provide feedback on it may be obtained at http://www.rfc-editor.org/info/rfc6202.

Copyright Notice

Copyright (c) 2011 IETF Trust and the persons identified as the document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.

Table of Contents

   1.  Introduction
   2.  HTTP Long Polling
     2.1.  Definition
     2.2.  HTTP Long Polling Issues
   3.  HTTP Streaming
     3.1.  Definition
     3.2.  HTTP Streaming Issues
   4.  Overview of Technologies
     4.1.  Bayeux
     4.2.  BOSH
     4.3.  Server-Sent Events
   5.  HTTP Best Practices
     5.1.  Limits to the Maximum Number of Connections
     5.2.  Pipelined Connections
     5.3.  Proxies
     5.4.  HTTP Responses
     5.5.  Timeouts
     5.6.  Impact on Intermediary Entities
   6.  Security Considerations
   7.  References
     7.1.  Normative References
     7.2.  Informative References
   8.  Acknowledgments
        
1. Introduction

The Hypertext Transfer Protocol [RFC2616] is a request/response protocol. HTTP defines the following entities: clients, proxies, and servers. A client establishes connections to a server for the purpose of sending HTTP requests. A server accepts connections from clients in order to service HTTP requests by sending back responses. Proxies are intermediate entities that can be involved in the delivery of requests and responses from the client to the server and vice versa.

In the standard HTTP model, a server cannot initiate a connection with a client nor send an unrequested HTTP response to a client; thus, the server cannot push asynchronous events to clients. Therefore, in order to receive asynchronous events as soon as possible, the client needs to poll the server periodically for new content. However, continual polling can consume significant bandwidth by forcing a request/response round trip when no data is available. It can also be inefficient because it reduces the responsiveness of the application since data is queued until the server receives the next poll request from the client.

In order to improve this situation, several server-push programming mechanisms have been implemented in recent years. These mechanisms, which are often grouped under the common label "Comet" [COMET], enable a web server to send updates to clients without waiting for a poll request from the client. Such mechanisms can deliver updates to clients in a more timely manner while avoiding the latency experienced by client applications due to the frequent opening and closing of connections necessary to periodically poll for data.

The two most common server-push mechanisms are HTTP long polling and HTTP streaming:

HTTP Long Polling: The server attempts to "hold open" (not immediately reply to) each HTTP request, responding only when there are events to deliver. In this way, there is always a pending request to which the server can reply for the purpose of delivering events as they occur, thereby minimizing the latency in message delivery.

HTTP Streaming: The server keeps a request open indefinitely; that is, it never terminates the request or closes the connection, even after it pushes data to the client.

It is possible to define other technologies for bidirectional HTTP; however, such technologies typically require changes to HTTP itself (e.g., by defining new HTTP methods). This document focuses only on bidirectional HTTP technologies that work within the current scope of HTTP as defined in [RFC2616] (HTTP 1.1) and [RFC1945] (HTTP 1.0).

The authors acknowledge that both the HTTP long polling and HTTP streaming mechanisms stretch the original semantic of HTTP and that the HTTP protocol was not designed for bidirectional communication. This document neither encourages nor discourages the use of these mechanisms, and takes no position on whether they provide appropriate solutions to the problem of providing bidirectional communication between clients and servers. Instead, this document merely identifies technical issues with these mechanisms and suggests best practices for their deployment.

The remainder of this document is organized as follows. Section 2 analyzes the HTTP long polling technique. Section 3 analyzes the HTTP streaming technique. Section 4 provides an overview of the specific technologies that use the server-push technique. Section 5 lists best practices for bidirectional HTTP using existing technologies.

2. HTTP Long Polling

2.1. Definition

With the traditional or "short polling" technique, a client sends regular requests to the server and each request attempts to "pull" any available events or data. If there are no events or data available, the server returns an empty response and the client waits for some time before sending another poll request. The polling frequency depends on the latency that the client can tolerate in retrieving updated information from the server. This mechanism has the drawback that the consumed resources (server processing and network) strongly depend on the acceptable latency in the delivery of updates from server to client. If the acceptable latency is low (e.g., on the order of seconds), then the polling frequency can cause an unacceptable burden on the server, the network, or both.

In contrast with such "short polling", "long polling" attempts to minimize both the latency in server-client message delivery and the use of processing/network resources. The server achieves these efficiencies by responding to a request only when a particular event, status, or timeout has occurred. Once the server sends a long poll response, typically the client immediately sends a new long poll request. Effectively, this means that at any given time the server will be holding open a long poll request, to which it replies when new information is available for the client. As a result, the server is able to asynchronously "initiate" communication.

The basic life cycle of an application using HTTP long polling is as follows:

1. The client makes an initial request and then waits for a response.

2. The server defers its response until an update is available or until a particular status or timeout has occurred.

3. When an update is available, the server sends a complete response to the client.

4. The client typically sends a new long poll request, either immediately upon receiving a response or after a pause to allow an acceptable latency period.

The HTTP long polling mechanism can be applied to either persistent or non-persistent HTTP connections. The use of persistent HTTP connections will avoid the additional overhead of establishing a new TCP/IP connection [TCP] for every long poll request.

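The four-step life cycle above can be sketched as a minimal in-process simulation. No real sockets are involved, and the `LongPollServer` class with its `poll`/`publish` methods is an illustrative stand-in for an HTTP endpoint, not part of any specification:

```python
import queue
import threading
import time

class LongPollServer:
    """Holds each poll until an event arrives or a timeout fires (step 2)."""

    def __init__(self, timeout=30.0):
        self.events = queue.Queue()
        self.timeout = timeout

    def publish(self, event):
        self.events.put(event)

    def poll(self):
        # Defer the response until data is available or the timeout expires,
        # then send a *complete* response (steps 2-3).
        try:
            return {"status": 200, "body": self.events.get(timeout=self.timeout)}
        except queue.Empty:
            return {"status": 200, "body": None}  # empty response on timeout

def client_loop(server, received, n_messages):
    # Steps 1 and 4: issue a request, wait for the response, and
    # immediately issue a new long poll request after each response.
    while len(received) < n_messages:
        response = server.poll()
        if response["body"] is not None:
            received.append(response["body"])

server = LongPollServer(timeout=1.0)
received = []
t = threading.Thread(target=client_loop, args=(server, received, 2))
t.start()
time.sleep(0.1)
server.publish("event-1")   # the server replies to the request it is holding
server.publish("event-2")   # the next held request picks this up
t.join()
print(received)
```

Because the client re-issues a poll immediately after each response, the server (almost) always holds a pending request it can answer the moment an event occurs, which is the property that gives long polling its low average latency.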
2.2. HTTP Long Polling Issues

The HTTP long polling mechanism introduces the following issues.

Header Overhead: With the HTTP long polling technique, every long poll request and long poll response is a complete HTTP message and thus contains a full set of HTTP headers in the message framing. For small, infrequent messages, the headers can represent a large percentage of the data transmitted. If the network MTU (Maximum Transmission Unit) allows all the information (including the HTTP header) to fit within a single IP packet, this typically does not represent a significant increase in the burden for networking entities. On the other hand, the amount of transferred data can be significantly larger than the real payload carried by HTTP, and this can have a significant impact (e.g., when volume-based charging is in place).

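As a rough illustration of this framing cost, the sketch below compares a hypothetical long poll response's headers against a small payload. All header values are invented for the example and are not taken from any real deployment:

```python
# Hypothetical long poll response carrying a 23-byte JSON payload; the
# header field values below are illustrative only.
headers = (
    "HTTP/1.1 200 OK\r\n"
    "Date: Mon, 04 Apr 2011 10:00:00 GMT\r\n"
    "Server: ExampleServer/1.0\r\n"
    "Content-Type: application/json\r\n"
    "Content-Length: 23\r\n"
    "Connection: keep-alive\r\n"
    "\r\n"
)
payload = '{"event":"user-online"}'
assert len(payload) == 23

overhead = len(headers) / (len(headers) + len(payload))
print(f"{len(headers)} header bytes vs {len(payload)} payload bytes "
      f"({overhead:.0%} of the message is framing)")
```

For a payload this small, well over half of every poll response is HTTP framing rather than application data, which is exactly the concern raised above for volume-based charging.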
Maximal Latency: After a long poll response is sent to a client, the server needs to wait for the next long poll request before another message can be sent to the client. This means that while the average latency of long polling is close to one network transit, the maximal latency is over three network transits (long poll response, next long poll request, long poll response). However, because HTTP is carried over TCP/IP, packet loss and retransmission can occur; therefore, maximal latency for any TCP/IP protocol will be more than three network transits (lost packet, next packet, negative ack, retransmit). When HTTP pipelining (see Section 5.2) is available, the latency due to the server waiting for a new request can be avoided.

Connection Establishment: A common criticism of both short polling and long polling is that these mechanisms frequently open TCP/IP connections and then close them. However, both polling mechanisms work well with persistent HTTP connections that can be reused for many poll requests. Specifically, the short duration of the pause between a long poll response and the next long poll request avoids the closing of idle connections.

Allocated Resources: Operating systems will allocate resources to TCP/IP connections and to HTTP requests outstanding on those connections. The HTTP long polling mechanism requires that for each client both a TCP/IP connection and an HTTP request are held open. Thus, it is important to consider the resources related to both of these when sizing an HTTP long polling application. Typically, the resources used per TCP/IP connection are minimal and can scale reasonably. Frequently, the resources allocated to HTTP requests can be significant, and scaling the total number of requests outstanding can be limited on some gateways, proxies, and servers.

Graceful Degradation: A long polling client or server that is under load has a natural tendency to gracefully degrade in performance at a cost of message latency. If load causes either a client or server to run slowly, then events to be pushed to the client will queue (waiting either for the client to send a long poll request or for the server to free up CPU cycles that can be used to process a long poll request that is being held at the server). If multiple messages are queued for a client, they might be delivered in a batch within a single long poll response. This can significantly reduce the per-message overhead and thus ease the workload of the client or server for the given message load.

Timeouts: Long poll requests need to remain pending or "hanging" until the server has something to send to the client. The timeout issues related to these pending requests are discussed in Section 5.5.

Caching: Caching mechanisms implemented by intermediate entities can interfere with long poll requests. This issue is discussed in Section 5.6.

3. HTTP Streaming

3.1. Definition

The HTTP streaming mechanism keeps a request open indefinitely. It never terminates the request or closes the connection, even after the server pushes data to the client. This mechanism significantly reduces the network latency because the client and the server do not need to open and close the connection.

The basic life cycle of an application using HTTP streaming is as follows:

1. The client makes an initial request and then waits for a response.

2. The server defers the response to a poll request until an update is available, or until a particular status or timeout has occurred.

3. Whenever an update is available, the server sends it back to the client as a part of the response.

4. The data sent by the server does not terminate the request or the connection. The server returns to step 3.

The HTTP streaming mechanism is based on the capability of the server to send several pieces of information in the same response, without terminating the request or the connection. This result can be achieved by both HTTP/1.1 and HTTP/1.0 servers.

An HTTP response content length can be defined using three options:

Content-Length header: This indicates the size of the entity body in the message, in bytes.

Transfer-Encoding header: The 'chunked' value in this header indicates that the message will break into chunks of known size if needed.

End of File (EOF): This is actually the default approach for HTTP/1.0, where the connections are not persistent. Clients do not need to know the size of the body they are reading; instead they expect to read the body until the server closes the connection. Although with HTTP/1.1 the default is for persistent connections, it is still possible to use EOF by setting the 'Connection: close' header in either the request or the response, thereby indicating that the connection is not to be considered 'persistent' after the current request/response is complete. The client's inclusion of the 'Connection: close' header field in the request will also prevent pipelining.

The main issue with EOF is that it is difficult to tell the difference between a connection terminated by a fault and one that is correctly terminated.

An HTTP/1.0 server can use only EOF as a streaming mechanism. In contrast, both EOF and "chunked transfer" are available to an HTTP/1.1 server.

The "chunked transfer" mechanism is the one typically used by HTTP/1.1 servers for streaming. This is accomplished by including the header "Transfer-Encoding: chunked" at the beginning of the response, which enables the server to send the following parts of the response in different "chunks" over the same connection. Each chunk starts with the hexadecimal expression of the length of its data, followed by CR/LF (the end of the response is indicated with a chunk of size 0).

           HTTP/1.1 200 OK
           Content-Type: text/plain
           Transfer-Encoding: chunked

           25
           This is the data in the first chunk

           1C
           and this is the second one

           0

                Figure 1: Transfer-Encoding response

To achieve the same result, an HTTP/1.0 server will omit the Content-Length header in the response. Thus, it will be able to send the subsequent parts of the response on the same connection (in this case, the different parts of the response are not explicitly separated by HTTP protocol, and the end of the response is achieved by closing the connection).

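The sender side of this chunked framing can be sketched as follows. The helper names are illustrative; note that each chunk here carries its line's trailing CRLF as part of the data, which is what makes the lengths come out to the 0x25 and 0x1C shown in Figure 1:

```python
def encode_chunk(data: bytes) -> bytes:
    # Each chunk: hex length of the data, CRLF, the data itself, CRLF.
    return b"%x\r\n%s\r\n" % (len(data), data)

def encode_chunked(*parts: bytes) -> bytes:
    # A zero-length chunk marks the end of the response body.
    return b"".join(encode_chunk(p) for p in parts) + b"0\r\n\r\n"

body = encode_chunked(b"This is the data in the first chunk\r\n",
                      b"and this is the second one\r\n")
print(body.decode())
```

Hexadecimal chunk lengths are case-insensitive in HTTP, so this sketch's lowercase "1c" is equivalent to the "1C" in Figure 1. A streaming server would call `encode_chunk` once per update and flush it on the open connection, sending the terminating zero-length chunk only when it finally ends the response.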
3.2. HTTP Streaming Issues

The HTTP streaming mechanism introduces the following issues.

Network Intermediaries: The HTTP protocol allows for intermediaries (proxies, transparent proxies, gateways, etc.) to be involved in the transmission of a response from the server to the client. There is no requirement for an intermediary to immediately forward a partial response, and it is legal for the intermediary to buffer the entire response before sending any data to the client (e.g., caching transparent proxies). HTTP streaming will not work with such intermediaries.

Maximal Latency: Theoretically, on a perfect network, an HTTP streaming protocol's average and maximal latency is one network transit. However, in practice, the maximal latency is higher due to network and browser limitations. The browser techniques used to terminate HTTP streaming connections are often associated with JavaScript and/or DOM (Document Object Model) elements that will grow in size for every message received. Thus, in order to avoid unlimited growth of memory usage in the client, an HTTP streaming implementation occasionally needs to terminate the streaming response and send a request to initiate a new streaming response (which is essentially equivalent to a long poll). Thus, the maximal latency is at least three network transits. Also, because HTTP is carried over TCP/IP, packet loss and retransmission can occur; therefore maximal latency for any TCP/IP protocol will be more than three network transits (lost packet, next packet, negative ack, retransmit).

Client Buffering: There is no requirement in existing HTTP specifications for a client library to make the data from a partial HTTP response available to the client application. For example, if each response chunk contains a statement of JavaScript, there is no requirement in the browser to execute that JavaScript before the entire response is received. However, in practice, most browsers do execute JavaScript received in partial responses -- although some require a buffer overflow to trigger execution. In most implementations, blocks of white space can be sent to achieve buffer overflow.

Framing Techniques: Using HTTP streaming, several application messages can be sent within a single HTTP response. The separation of the response stream into application messages needs to be performed at the application level and not at the HTTP level. In particular, it is not possible to use the HTTP chunks as application message delimiters, since intermediate proxies might "re-chunk" the message stream (for example, by combining different chunks into a longer one). This issue does not affect the HTTP long polling technique, which provides a canonical framing technique: each application message can be sent in a different HTTP response.

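One simple application-level convention, shown here purely as an illustration (it is not prescribed by this document), is to delimit messages with a character that cannot appear inside them, such as a newline. The sketch below reassembles such messages correctly no matter how intermediaries split or recombine the byte stream:

```python
class NewlineFramer:
    """Reassembles newline-delimited application messages from a byte
    stream, regardless of how intermediaries re-chunk it."""

    def __init__(self):
        self.buffer = b""

    def feed(self, data: bytes):
        # Accumulate bytes; everything before the last newline is a run of
        # complete messages, the remainder stays buffered for the next feed.
        self.buffer += data
        *messages, self.buffer = self.buffer.split(b"\n")
        return messages

framer = NewlineFramer()
stream = b'{"id":1}\n{"id":2}\n{"id":'
# Deliver the stream in an arbitrary split, as a re-chunking proxy might.
messages = framer.feed(stream[:5]) + framer.feed(stream[5:])
print(messages)   # the third, incomplete message stays buffered
```

Because the framing lives entirely in the payload, it survives any re-chunking by intermediate proxies, which is precisely why delimiters must be applied at the application level rather than relying on HTTP chunk boundaries.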
4. Overview of Technologies

This section provides an overview of existing technologies that implement HTTP-based server-push mechanisms to asynchronously deliver messages from the server to the client.

4.1. Bayeux

The Bayeux protocol [BAYEUX] was developed in 2006-2007 by the Dojo Foundation. Bayeux can use both the HTTP long polling and HTTP streaming mechanisms.

In order to achieve bidirectional communications, a Bayeux client will use two HTTP connections to a Bayeux server so that both server-to-client and client-to-server messaging can occur asynchronously.

The Bayeux specification requires that implementations control pipelining of HTTP requests, so that requests are not pipelined inappropriately (e.g., a client-to-server message pipelined behind a long poll request).

In practice, for JavaScript clients, such control over pipelining is not possible in current browsers. Therefore, JavaScript implementations of Bayeux attempt to meet this requirement by limiting themselves to a maximum of two outstanding HTTP requests at any one time, so that browser connection limits will not be applied and the requests will not be queued or pipelined. While broadly effective, this mechanism can be disrupted if non-Bayeux JavaScript clients simultaneously issue requests to the same host.

Bayeux connections are negotiated between client and server with handshake messages that allow the connection type, authentication method, and other parameters to be agreed upon between the client and the server. Furthermore, during the handshake phase, the client and the server reveal to each other their acceptable bidirectional techniques, and the client selects one from the intersection of those sets.

For non-browser or same-domain Bayeux, clients use HTTP POST requests to the server for both the long poll request and the request to send messages to the server. The Bayeux protocol packets are sent as the body of the HTTP messages using the "application/json" Internet media type [RFC4627].

For browsers that are operating in cross-domain mode, Bayeux attempts to use Cross-Origin Resource Sharing [CORS], checking whether the browser and server support it, so that normal HTTP POST requests can be used. If this mechanism fails, Bayeux clients use the "JSONP" mechanism as described in [JSONP]. In this last case, client-to-server messages are sent as encoded JSON on the URL query parameters, and server-to-client messages are sent as a JavaScript program that wraps the message JSON with a JavaScript function call to the already loaded Bayeux implementation.

4.2. BOSH

BOSH, which stands for Bidirectional-streams Over Synchronous HTTP [BOSH], was developed by the XMPP Standards Foundation in 2003-2004. The purpose of BOSH is to emulate normal TCP connections over HTTP (TCP is the standard connection mechanism used in the Extensible Messaging and Presence Protocol as described in [RFC6120]). BOSH employs the HTTP long polling mechanism by allowing the server (called a "BOSH connection manager") to defer its response to a request until it actually has data to send to the client from the application server itself (typically an XMPP server). As soon as the client receives a response from the connection manager, it sends another request to the connection manager, thereby ensuring that the connection manager is (almost) always holding a request that it can use to "push" data to the client.

In some situations, the client needs to send data to the server while it is waiting for data to be pushed from the connection manager. To prevent data from being pipelined behind the long poll request that is on hold, the client can send its outbound data in a second HTTP request over a second TCP connection. BOSH forces the server to respond to the request it has been holding on the first connection as soon as it receives a new request from the client, even if it has no data to send to the client. It does so to make sure that the client can send more data immediately, if necessary -- even in the case where the client is not able to pipeline the requests -- while simultaneously respecting the two-connection limit discussed in Section 5.1.

The number of long poll request-response pairs is negotiated during the first request sent from the client to the connection manager. Typically, BOSH clients and connection managers will negotiate the use of two pairs, although it is possible to use only one pair or more than two pairs.

The roles of the two request-response pairs typically switch whenever the client sends data to the connection manager. This means that when the client issues a new request, the connection manager immediately answers the blocked request on the other TCP connection, thus freeing it; in this way, in a scenario where only the client sends data, the even requests are sent over one connection, and the odd ones are sent over the other connection.

BOSH is able to work reliably both when network conditions force every HTTP request to be made over a different TCP connection and when it is possible to use HTTP/1.1 and then rely on two persistent TCP connections.

If the connection manager has no data to send to the client for an agreed amount of time (also negotiated during the first request), then the connection manager will respond to the request it has been holding with no data, and that response immediately triggers a fresh client request. The connection manager does so to ensure that if a network connection is broken then both parties will realize that fact within a reasonable amount of time.
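
The connection manager's hold-until-data-or-timeout behavior can be sketched with a simple queue; the wait value and the queue below are illustrative stand-ins for the negotiated BOSH wait time and for data arriving from the application server:

```python
import queue

def hold_request(outbound, wait_seconds):
    # Defer the response until application data arrives, or answer with
    # an empty body once the negotiated wait time expires.
    try:
        return outbound.get(timeout=wait_seconds)
    except queue.Empty:
        return ""        # empty response: immediately triggers a fresh client request

pending = queue.Queue()
pending.put("<message/>")
print(hold_request(pending, 0.05))        # <message/>
print(repr(hold_request(pending, 0.05)))  # '' (the wait expired with nothing to send)
```

The empty answer is what bounds the time within which both parties notice a broken network connection.
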

Moreover, BOSH defines the negotiation of an "inactivity period" value that specifies the longest allowable inactivity period (in seconds). This enables the client to ensure that the periods with no requests pending are never too long.

BOSH allows data to be pushed immediately when HTTP pipelining is available. However, if HTTP pipelining is not available and one of the endpoints has just pushed some data, BOSH will usually need to wait for a network round-trip time until the server is able to again push data to the client.

BOSH uses standard HTTP POST request and response bodies to encode all information.

BOSH normally uses HTTP pipelining over a persistent HTTP/1.1 connection. However, a client can deliver its POST requests in any way permitted by HTTP/1.0 or HTTP/1.1. (Although the use of HTTP POST with pipelining is discouraged in RFC 2616, BOSH employs various methods, such as request identifiers, to ensure that this usage does not lead to indeterminate results if the transport connection is terminated prematurely.)

BOSH clients and connection managers are not allowed to use Chunked Transfer Coding, since intermediaries might buffer each partial HTTP request or response and only forward the full request or response once it is available.

BOSH allows the usage of the Accept-Encoding and Content-Encoding headers in the request and in the response, respectively, and then compresses the response body accordingly.

Each BOSH session can share the HTTP connection(s) it uses with other HTTP traffic, including other BOSH sessions and HTTP requests and responses completely unrelated to the BOSH protocol (e.g., Web page downloads).

4.3. Server-Sent Events

The W3C Server-Sent Events specification [WD-eventsource] defines an API that enables servers to push data to Web pages over HTTP in the form of DOM events.

The data is encoded as "text/event-stream" content and pushed using an HTTP streaming mechanism, but the specification suggests disabling HTTP chunking for serving event streams unless the rate of messages is high enough to avoid the possible negative effects of this technique as described in Section 3.2.
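
For illustration, one event in the "text/event-stream" format consists of optional "event" and "id" fields, one "data" line per line of payload, and a terminating blank line; a minimal serializer might look like this:

```python
def format_sse(data, event=None, event_id=None):
    # One event in the "text/event-stream" format: optional "event" and
    # "id" fields, one "data:" line per payload line, blank-line terminator.
    lines = []
    if event is not None:
        lines.append("event: %s" % event)
    if event_id is not None:
        lines.append("id: %s" % event_id)
    lines.extend("data: %s" % part for part in (data.splitlines() or [""]))
    return "\n".join(lines) + "\n\n"

print(format_sse("hello\nworld", event="greeting", event_id="1"))
```

A compliant browser dispatches each such event to the page as a DOM event as it arrives on the open response.
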

However, it is not clear if there are significant benefits to using EOF rather than chunking with regards to intermediaries, unless they support only HTTP/1.0.

5. HTTP Best Practices
5.1. Limits to the Maximum Number of Connections

HTTP [RFC2616], Section 8.1.4, recommends that a single user client not maintain more than two connections to any server or proxy, in order to prevent the server from being overloaded and to avoid unexpected side effects in congested networks. Until recently, this limit was implemented by most commonly deployed browsers, thus making connections a scarce resource that needed to be shared within the browser. Note that the available JavaScript APIs in the browsers hide the connections, and the security model inhibits the sharing of any resource between frames. The new HTTP specification [HTTPBIS] removes the two-connection limitation, only encouraging clients to be conservative when opening multiple connections. In fact, recent browsers have increased this limit to 6 or 8 connections; however, it is still not possible to discover the local limit, and usage of multiple frames and tabs still places 8 connections within easy reach.

Web applications need to limit the number of long poll requests initiated, ideally to a single long poll that is shared between frames, tabs, or windows of the same browser. However, the security constraints of the browsers make such sharing difficult.

A best practice for a server is to use cookies [COOKIE] to detect multiple long poll requests from the same browser and to avoid deferring both requests since this might cause connection starvation and/or pipeline issues.
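
A minimal sketch of this practice follows; the "answer" callback is a hypothetical stand-in for completing a held HTTP response, and the in-memory table would in practice need locking and expiry:

```python
held = {}        # session cookie -> request currently on hold for that browser
answered = []    # stands in for completing a deferred HTTP response

def on_long_poll(cookie, request_id, answer):
    # If this browser already has a poll on hold, release it at once
    # (with an empty body) instead of deferring both requests.
    previous = held.get(cookie)
    if previous is not None:
        answer(previous)
    held[cookie] = request_id

on_long_poll("session-abc", "poll-1", answered.append)
on_long_poll("session-abc", "poll-2", answered.append)
print(answered)  # ['poll-1'] -- released when the second poll arrived
```

Releasing the older poll keeps at most one request per browser deferred, which avoids exhausting the browser's connection limit.
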

5.2. Pipelined Connections

HTTP [RFC2616] permits optional request pipelining over persistent connections. Multiple requests can be enqueued before the responses arrive.

In the case of HTTP long polling, the use of HTTP pipelining can reduce latency when multiple messages need to be sent by a server to a client in a short period of time. With HTTP pipelining, the server can receive and enqueue a set of HTTP requests. Therefore, the server does not need to receive a new HTTP request from the client after it has sent a message to the client within an HTTP response. In principle, HTTP pipelining can be applied to both HTTP GET and HTTP POST requests, but pipelining HTTP POST requests is more problematic: the use of HTTP POST with pipelining is discouraged in RFC 2616 and needs to be handled with special care.

There is an issue regarding the inability to control pipelining. Normal requests can be pipelined behind a long poll, and are thus delayed until the long poll completes.

Mechanisms for bidirectional HTTP that want to exploit HTTP pipelining need to verify that HTTP pipelining is available (e.g., supported by the client, the intermediaries, and the server); if it's not available, they need to fall back to solutions without HTTP pipelining.

5.3. Proxies

Most proxies work well with HTTP long polling because a complete HTTP response will be sent either on an event or a timeout. Proxies are advised to return that response immediately to the user agent, which immediately acts on it.

The HTTP streaming mechanism uses partial responses and sends some JavaScript in an HTTP/1.1 chunk as described in Section 3. This mechanism can face problems caused by two factors: (1) it relies on proxies to forward each chunk (even though there is no requirement for them to do so, and some caching proxies do not), and (2) it relies on user agents to execute the chunk of JavaScript as it arrives (even though there is also no requirement for them to do so).
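
As an illustrative sketch of such a chunk (the "parent.deliver" function name is a hypothetical page-defined callback, not part of any specification):

```python
import json

def script_chunk(message):
    # Wrap one message as a <script> element to be sent in an HTTP/1.1
    # chunk; "parent.deliver" is assumed to be defined by the enclosing
    # page, which executes each script element as it arrives.
    return ("<script>parent.deliver(%s);</script>\n"
            % json.dumps(message)).encode("utf-8")

print(script_chunk({"n": 1}))  # b'<script>parent.deliver({"n": 1});</script>\n'
```

Each chunk only helps if intermediaries forward it promptly and the user agent executes it on arrival, which is exactly the pair of assumptions the text above calls out.
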

A "reverse proxy" is essentially a proxy that presents itself as the actual server (as far as any client or client proxy is concerned), but it passes on the request to the actual server, which usually sits behind another layer of firewalls. Any HTTP short polling or HTTP long polling solution will work fine with this, as will most HTTP streaming solutions. The main downside is performance, since most proxies are not designed to hold many open connections.

Reverse proxies can come to grief when they try to share connections to the servers between multiple clients. As an example, Apache with mod_jk shares a small set of connections (often 8 or 16) between all clients. If long polls are sent on those shared connections, then the proxy can be starved of connections, which means that other requests (either long poll or normal) can be held up. Thus, Comet mechanisms currently need to avoid any connection sharing -- either in the browser or in any intermediary -- because the HTTP assumption is that each request will complete as fast as possible.

One of the main reasons why both HTTP long polling and HTTP streaming are perceived as having a negative impact on servers and proxies is that they use a synchronous programming model for handling requests, since the resources allocated to each request are held for the duration of the request. Asynchronous proxies and servers can handle long polls using slightly more resources than normal HTTP traffic. Unfortunately some synchronous proxies do exist (e.g., Apache mod_jk) and many HTTP application servers also have a blocking model for their request handling (e.g., the Java servlet 2.5 specification).

5.4. HTTP Responses

In accordance with [RFC2616], the server responds to a request it has successfully received by sending a 200 OK answer, but only when a particular event, status, or timeout has occurred. The 200 OK body section contains the actual event, status, or timeout that occurred. This "best practice" is simply standard HTTP.

5.5. Timeouts

The HTTP long polling mechanism allows the server to respond to a request only when a particular event, status, or timeout has occurred. In order to minimize (as much as possible) both latency in server-client message delivery and the processing/network resources needed, the long poll request timeout ought to be set to a high value.

However, the timeout value has to be chosen carefully; indeed, problems can occur if this value is set too high (e.g., the client might receive a 408 Request Timeout answer from the server or a 504 Gateway Timeout answer from a proxy). The default timeout value in a browser is 300 seconds, but most network infrastructures include proxies and servers whose timeouts are not that long.

Several experiments have shown success with timeouts as high as 120 seconds, but generally 30 seconds is a safer value. Therefore, vendors of network equipment wishing to be compatible with the HTTP long polling mechanism are advised to implement a timeout substantially greater than 30 seconds (where "substantially" means several times more than the median network transit time).
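
For illustration only, a client might react to these timeout status codes by backing off toward the safer value; the halving policy below is a sketch under that assumption, not a recommendation from this document:

```python
def next_wait(current_wait, status):
    # A 408 from a server or a 504 from a proxy means some element on the
    # path timed out before the long poll completed, so back off toward
    # the safer 30-second value discussed above.
    if status in (408, 504):
        return max(30.0, current_wait / 2)
    return current_wait   # 200 OK: keep the wait that has been working

print(next_wait(120.0, 504))  # 60.0
print(next_wait(40.0, 408))   # 30.0
```

A client could also probe upward again after a period of successful polls; any such adaptation is application policy.
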

5.6. Impact on Intermediary Entities

There is no way for an end client or host to signal to HTTP intermediaries that long polling is in use; therefore, long poll requests are completely transparent for intermediary entities and are handled as normal requests. This can have an impact on intermediary entities that perform operations that are not useful in case of long polling. However, any capabilities that might interfere with bidirectional flow (e.g., caching) can be controlled with standard headers or cookies.

As a best practice, caching is always intentionally suppressed in a long poll request or response, i.e., the "Cache-Control" header is set to "no-cache".
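
As an illustrative fragment, the headers of a long poll response could be assembled as follows; the "Pragma" line is a common extra aimed at HTTP/1.0 caches and is an assumption beyond the practice stated above:

```python
def long_poll_headers():
    # Suppress caching of the long poll exchange. Only "Cache-Control:
    # no-cache" is the practice named above; "Pragma" is a widely used
    # belt-and-braces addition for HTTP/1.0 caches.
    return [("Cache-Control", "no-cache"),
            ("Pragma", "no-cache")]
```
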

6. Security Considerations

This document is meant to describe current usage of HTTP to enable asynchronous or server-initiated communication. It does not propose any change to the HTTP protocol or to the expected behavior of HTTP entities. Therefore this document does not introduce new security concerns into existing HTTP infrastructure. The considerations reported hereafter refer to the solutions that are already implemented and deployed.

One security concern with cross-domain HTTP long polling is related to the fact that often the mechanism is implemented by executing the JavaScript returned from the long poll request. If the server is prone to injection attacks, then it could be far easier to trick a browser into executing the code [CORS].

Another security concern is that the number of open connections that needs to be maintained by a server in HTTP long polling and HTTP streaming could more easily lead to denial-of-service (DoS) attacks [RFC4732].

7. References
7.1. Normative References

[RFC1945] Berners-Lee, T., Fielding, R., and H. Nielsen, "Hypertext Transfer Protocol -- HTTP/1.0", RFC 1945, May 1996.

[RFC2616] Fielding, R., Gettys, J., Mogul, J., Frystyk, H., Masinter, L., Leach, P., and T. Berners-Lee, "Hypertext Transfer Protocol -- HTTP/1.1", RFC 2616, June 1999.

[RFC4732] Handley, M., Rescorla, E., and IAB, "Internet Denial-of-Service Considerations", RFC 4732, December 2006.

7.2. Informative References

[BAYEUX] Russell, A., Wilkins, G., Davis, D., and M. Nesbitt, "Bayeux Protocol -- Bayeux 1.0.0", 2007, <http://svn.cometd.com/trunk/bayeux/bayeux.html>.

[BOSH] Paterson, I., Smith, D., and P. Saint-Andre, "Bidirectional-streams Over Synchronous HTTP (BOSH)", XSF XEP 0124, February 2007.

[COMET] Russell, A., "Comet: Low Latency Data for the Browser", March 2006, <http://infrequently.org/2006/03/comet-low-latency-data-for-the-browser/>.

[COOKIE] Barth, A., "HTTP State Management Mechanism", Work in Progress, March 2011.

[CORS] van Kesteren, A., "Cross-Origin Resource Sharing", W3C Working Draft WD-cors-20100727, latest version available at <http://www.w3.org/TR/cors/>, July 2010, <http://www.w3.org/TR/2010/WD-cors-20100727/>.

[HTTPBIS] Fielding, R., Ed., Gettys, J., Mogul, J., Nielsen, H., Masinter, L., Leach, P., Berners-Lee, T., Lafon, Y., Ed., and J. Reschke, Ed., "HTTP/1.1, part 1: URIs, Connections, and Message Parsing", Work in Progress, March 2011.

[JSONP] Wikipedia, "JSON with padding", <http://en.wikipedia.org/wiki/JSONP#JSONP>.

[RFC4627] Crockford, D., "The application/json Media Type for JavaScript Object Notation (JSON)", RFC 4627, July 2006.

[RFC6120] Saint-Andre, P., "Extensible Messaging and Presence Protocol (XMPP): Core", RFC 6120, March 2011.

[TCP] Postel, J., "Transmission Control Protocol", STD 7, RFC 793, September 1981.

[WD-eventsource] Hickson, I., "Server-Sent Events", W3C Working Draft WD-eventsource-20091222, latest version available at <http://www.w3.org/TR/eventsource/>, December 2009, <http://www.w3.org/TR/2009/WD-eventsource-20091222/>.

8. Acknowledgments

Thanks to Joe Hildebrand, Julien Laganier, Jack Moffitt, Subramanian Moonesamy, Mark Nottingham, Julian Reschke, Martin Thomson, and Martin Tyler for their feedback.

Authors' Addresses

Salvatore Loreto
Ericsson
Hirsalantie 11
Jorvas 02420
Finland

   EMail: salvatore.loreto@ericsson.com
        

Peter Saint-Andre
Cisco
1899 Wynkoop Street, Suite 600
Denver, CO 80202
USA

   Phone: +1-303-308-3282
   EMail: psaintan@cisco.com
        

Stefano Salsano
University of Rome "Tor Vergata"
Via del Politecnico, 1
Rome 00133
Italy

   EMail: stefano.salsano@uniroma2.it
        

Greg Wilkins
Webtide

   EMail: gregw@webtide.com
        