Internet Engineering Task Force (IETF)                         C. Davids
Request for Comments: 7502              Illinois Institute of Technology
Category: Informational                                       V. Gurbani
ISSN: 2070-1721                        Bell Laboratories, Alcatel-Lucent
                                                             S. Poretsky
                                                    Allot Communications
                                                              April 2015
        

Methodology for Benchmarking Session Initiation Protocol (SIP) Devices: Basic Session Setup and Registration

Abstract

This document provides a methodology for benchmarking the Session Initiation Protocol (SIP) performance of devices. Terminology related to benchmarking SIP devices is described in the companion terminology document (RFC 7501). Using these two documents, benchmarks can be obtained and compared for different types of devices such as SIP Proxy Servers, Registrars, and Session Border Controllers. The term "performance" in this context means the capacity of the Device Under Test (DUT) to process SIP messages. Media streams are used only to study how they impact the signaling behavior. The intent of the two documents is to provide a normalized set of tests that will enable an objective comparison of the capacity of SIP devices. Test setup parameters and a methodology are necessary because SIP allows a wide range of configurations and operational conditions that can influence performance benchmark measurements.

Status of This Memo

This document is not an Internet Standards Track specification; it is published for informational purposes.

This document is a product of the Internet Engineering Task Force (IETF). It represents the consensus of the IETF community. It has received public review and has been approved for publication by the Internet Engineering Steering Group (IESG). Not all documents approved by the IESG are a candidate for any level of Internet Standard; see Section 2 of RFC 5741.

Information about the current status of this document, any errata, and how to provide feedback on it may be obtained at http://www.rfc-editor.org/info/rfc7502.

Copyright Notice

Copyright (c) 2015 IETF Trust and the persons identified as the document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.

Table of Contents

   1.  Introduction  . . . . . . . . . . . . . . . . . . . . . . . .   4
   2.  Terminology . . . . . . . . . . . . . . . . . . . . . . . . .   5
   3.  Benchmarking Topologies . . . . . . . . . . . . . . . . . . .   5
   4.  Test Setup Parameters . . . . . . . . . . . . . . . . . . . .   7
     4.1.  Selection of SIP Transport Protocol . . . . . . . . . . .   7
     4.2.  Connection-Oriented Transport Management  . . . . . . . .   7
     4.3.  Signaling Server  . . . . . . . . . . . . . . . . . . . .   7
     4.4.  Associated Media  . . . . . . . . . . . . . . . . . . . .   8
     4.5.  Selection of Associated Media Protocol  . . . . . . . . .   8
     4.6.  Number of Associated Media Streams per SIP Session  . . .   8
     4.7.  Codec Type  . . . . . . . . . . . . . . . . . . . . . . .   8
     4.8.  Session Duration  . . . . . . . . . . . . . . . . . . . .   8
     4.9.  Attempted Sessions per Second (sps) . . . . . . . . . . .   8
     4.10. Benchmarking Algorithm  . . . . . . . . . . . . . . . . .   9
   5.  Reporting Format  . . . . . . . . . . . . . . . . . . . . . .  11
     5.1.  Test Setup Report . . . . . . . . . . . . . . . . . . . .  11
     5.2.  Device Benchmarks for Session Setup . . . . . . . . . . .  12
     5.3.  Device Benchmarks for Registrations . . . . . . . . . . .  12
   6.  Test Cases  . . . . . . . . . . . . . . . . . . . . . . . . .  13
     6.1.  Baseline Session Establishment Rate of the Testbed  . . .  13
     6.2.  Session Establishment Rate without Media  . . . . . . . .  13
     6.3.  Session Establishment Rate with Media Not on DUT  . . . .  13
     6.4.  Session Establishment Rate with Media on DUT  . . . . . .  14
     6.5.  Session Establishment Rate with TLS-Encrypted SIP . . . .  14
     6.6.  Session Establishment Rate with IPsec-Encrypted SIP . . .  15
     6.7.  Registration Rate . . . . . . . . . . . . . . . . . . . .  15
     6.8.  Re-registration Rate  . . . . . . . . . . . . . . . . . .  16
   7.  Security Considerations . . . . . . . . . . . . . . . . . . .  16
   8.  References  . . . . . . . . . . . . . . . . . . . . . . . . .  17
     8.1.  Normative References  . . . . . . . . . . . . . . . . . .  17
     8.2.  Informative References  . . . . . . . . . . . . . . . . .  17
   Appendix A.  R Code Component to Simulate Benchmarking Algorithm   18
   Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . . .  20
   Authors' Addresses  . . . . . . . . . . . . . . . . . . . . . . .  21
        
1. Introduction

This document describes the methodology for benchmarking Session Initiation Protocol (SIP) performance as described in the Terminology document [RFC7501]. The methodology and terminology are to be used for benchmarking signaling plane performance with varying signaling and media load. Media streams, when used, are used only to study how they impact the signaling behavior. This document concentrates on benchmarking SIP session setup and SIP registrations only.

The Device Under Test (DUT) is a network intermediary that is RFC 3261 [RFC3261] capable and that plays the role of a registrar, redirect server, stateful proxy, Session Border Controller (SBC), or Back-to-Back User Agent (B2BUA). This document does not require the intermediary to assume the role of a stateless proxy. Benchmarks can be obtained and compared for different types of devices, such as a SIP proxy server, an SBC, a SIP registrar, or a SIP proxy server paired with a media relay.

The test cases provide metrics for benchmarking the maximum 'SIP Registration Rate' and maximum 'SIP Session Establishment Rate' that the DUT can sustain over an extended period of time without failures (extended period of time is defined in the algorithm in Section 4.10). Some cases are included to cover encrypted SIP. The test topologies that can be used are described in the Test Setup section. Topologies in which the DUT handles media as well as those in which the DUT does not handle media are both considered. The measurement of the performance characteristics of the media itself is outside the scope of these documents.

Benchmark metrics may be impacted by Associated Media. The selected values for Session Duration and Media Streams per Session make it possible to obtain the benchmark metrics without Associated Media. The Session Setup Rate may also be impacted by the selected value for Maximum Sessions Attempted; the benchmark for Session Establishment Rate is therefore measured with a fixed value for maximum Session Attempts.

Finally, the overall value of these tests is to serve as a comparison function between multiple SIP implementations. One way to use these tests is to derive benchmarks with SIP devices from Vendor-A, derive a new set of benchmarks with similar SIP devices from Vendor-B and perform a comparison on the results of Vendor-A and Vendor-B. This document does not make any claims on the interpretation of such results.

2. Terminology

In this document, the key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and "OPTIONAL" are to be interpreted as described in BCP 14 [RFC2119] and indicate requirement levels for compliant implementations.

RFC 2119 defines the use of these key words to help make the intent of Standards Track documents as clear as possible. While this document uses these keywords, this document is not a Standards Track document.

Terms specific to SIP [RFC3261] performance benchmarking are defined in [RFC7501].

3. Benchmarking Topologies

Test organizations need to be aware that these tests generate large volumes of data and consequently need to ensure that networking devices such as hubs, switches, or routers are able to handle the generated volume.

The test cases enumerated in Sections 6.1 to 6.6 operate on two test topologies: one in which the DUT does not process the media (Figure 1) and the other in which it does process media (Figure 2). In both cases, the tester or Emulated Agent (EA) sends traffic into the DUT and absorbs traffic from the DUT. The diagrams in Figures 1 and 2 represent the logical flow of information and do not dictate a particular physical arrangement of the entities.

Figure 1 depicts a layout in which the DUT is an intermediary between the two interfaces of the EA. If the test case requires the exchange of media, the media does not flow through the DUT but rather passes directly between the two endpoints. Figure 2 shows the DUT as an intermediary between the two interfaces of the EA. If the test case requires the exchange of media, the media flows through the DUT between the endpoints.

      +--------+   Session   +--------+  Session    +--------+
      |        |   Attempt   |        |  Attempt    |        |
      |        |------------>+        |------------>+        |
      |        |             |        |             |        |
      |        |   Response  |        |  Response   |        |
      | Tester +<------------|  DUT   +<------------| Tester |
      |  (EA)  |             |        |             |  (EA)  |
      |        |             |        |             |        |
      +--------+             +--------+             +--------+
         /|\                                            /|\
          |              Media (optional)                |
          +==============================================+
        

Figure 1: DUT as an Intermediary, End-to-End Media

      +--------+   Session   +--------+  Session    +--------+
      |        |   Attempt   |        |  Attempt    |        |
      |        |------------>+        |------------>+        |
      |        |             |        |             |        |
      |        |   Response  |        |  Response   |        |
      | Tester +<------------|  DUT   +<------------| Tester |
      |  (EA)  |             |        |             |  (EA)  |
      |        |<===========>|        |<===========>|        |
      +--------+   Media     +--------+    Media    +--------+
                 (Optional)             (Optional)
        

Figure 2: DUT as an Intermediary Forwarding Media

The test cases enumerated in Sections 6.7 and 6.8 use the topology in Figure 3 below.

      +--------+ Registration +--------+
      |        |   request    |        |
      |        |------------->+        |
      |        |              |        |
      |        |   Response   |        |
      | Tester +<-------------|  DUT   |
      |  (EA)  |              |        |
      |        |              |        |
      +--------+              +--------+
        

Figure 3: Registration and Re-registration Tests

During registration or re-registration, the DUT may involve backend network elements and data stores. These network elements and data stores are not shown in Figure 3, but it is understood that they will impact the time required for the DUT to generate a response.

This document explicitly separates a registration test (Section 6.7) from a re-registration test (Section 6.8) because in certain networks, the time to re-register may vary from the time to perform an initial registration due to the backend processing involved. It is expected that the registration tests and the re-registration test will be performed with the same set of backend network elements in order to derive a stable metric.

4. Test Setup Parameters
4.1. Selection of SIP Transport Protocol

Test cases may be performed with any transport protocol supported by SIP. This includes, but is not limited to, TCP, UDP, TLS, and WebSocket. The transport protocol used for SIP signaling MUST be reported with the benchmarking results.

SIP allows a DUT to use different transports for signaling on either side of its connections to the EAs. By default, this document assumes that the same transport is used on both sides of the connection; if this is not the case in any of the tests, the transport on each side of the connection MUST be reported in the test-reporting template.

4.2. Connection-Oriented Transport Management

SIP allows a device to open one connection and send multiple requests over the same connection (responses are normally received over the same connection that the request was sent out on). The protocol also allows a device to open a new connection for each individual request. A connection management strategy will have an impact on the results obtained from the test cases, especially for connection-oriented transports such as TLS. For such transports, the cryptographic handshake must occur every time a connection is opened.

The connection management strategy, i.e., use of one connection to send all requests or closing an existing connection and opening a new connection to send each request, MUST be reported with the benchmarking result.

4.3. Signaling Server

The Signaling Server is defined in the companion terminology document ([RFC7501], Section 3.2.2). The Signaling Server is a DUT.

4.4. Associated Media

Some tests require Associated Media to be present for each SIP session. The test topologies to be used when benchmarking DUT performance for Associated Media are shown in Figure 1 and Figure 2.

4.5. Selection of Associated Media Protocol

The test cases specified in this document provide SIP performance independent of the protocol used for the media stream. Any media protocol supported by SIP may be used. This includes, but is not limited to, RTP and SRTP. The protocol used for Associated Media MUST be reported with benchmarking results.

4.6. Number of Associated Media Streams per SIP Session

Benchmarking results may vary with the number of media streams per SIP session. When benchmarking a DUT for voice, a single media stream is used. When benchmarking a DUT for voice and video, two media streams are used. The number of Associated Media Streams MUST be reported with benchmarking results.

4.7. Codec Type

The test cases specified in this document provide SIP performance independent of the media stream codec. Any codec supported by the EAs may be used. The codec used for Associated Media MUST be reported with the benchmarking results.

4.8. Session Duration

The value of the DUT's performance benchmarks may vary with the duration of SIP sessions. Session Duration MUST be reported with the benchmarking results. A Session Duration of zero seconds indicates transmission of a BYE immediately following successful SIP session establishment; that is, setting this parameter to the value '0' indicates that the EA sends a BYE immediately after it receives a 200 OK to the INVITE. Setting this parameter to a time value greater than the duration of the test indicates that a BYE is never sent.

4.9. Attempted Sessions per Second (sps)

The value of the DUT's performance benchmarks may vary with the Session Attempt Rate offered by the tester. Session Attempt Rate MUST be reported with the benchmarking results.

The test cases enumerated in Sections 6.1 to 6.6 require that the EA be configured to send the final 2xx-class response as quickly as it can. This document does not require the tester to add any delay between receiving a request and generating a final response.

4.10. Benchmarking Algorithm

In order to benchmark the test cases in Section 6 uniformly, the algorithm described in this section should be used. A prose description of the algorithm and a pseudocode description are provided below, and a simulation written in the R statistical language [Rtool] is provided in Appendix A.

The goal is to find the largest value, R, a SIP Session Attempt Rate measured in sessions per second (sps), that the DUT can process with zero errors over a defined, extended period. This period is defined as the amount of time needed to attempt N SIP sessions, where N is a parameter of the test, at the attempt rate R. An iterative process is used to find this rate, and the algorithm corresponding to this process converges to R.

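For illustration only (using the default N of 50000 from the pseudocode below and the result of the Appendix A simulation, not values mandated by this document): with N = 50000 attempted sessions and a converged rate R of 458 sps, the qualifying run lasts roughly 50000 / 458, or about 109 seconds.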

If the DUT vendor provides a value for R, the tester can use this value. In cases where the DUT vendor does not provide a value for R, or where the tester wants to establish the R of a system using local media characteristics, the algorithm should be run by setting "r", the session attempt rate, equal to a value of the tester's choice. For example, the tester may initialize "r = 100" to start the algorithm and observe the value at convergence. The algorithm dynamically increases and decreases "r" as it converges to the maximum sps value for R. The dynamic increase and decrease rate is controlled by the weights "w" and "d", respectively.

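As a minimal illustration of the rate-update rule (an R sketch that simply mirrors one success step and one failure step of the pseudocode below; the starting values are the defaults, not requirements):

      w = 0.10                 # traffic increase weight
      d = max(0.10, w / 2)     # traffic decrease weight (0.10 here)
      r = 100                  # initial session attempt rate (sps)

      r = floor(r + (w * r))   # all sessions succeeded: r becomes 110
      r = floor(r - (d * r))   # a failure occurred: r drops back to 99
      d = max(0.10, d / 2)     # the weights are halved on failure but
      w = max(0.10, w / 2)     # never fall below 0.10, so both stay 0.10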

The pseudocode corresponding to the description above follows, and a simulation written in the R statistical language is provided in Appendix A.

         ; ---- Parameters of test; adjust as needed
         N  := 50000  ; Global maximum; once largest session rate has
                      ; been established, send this many requests before
                      ; calling the test a success
         m  := {...}  ; Other attributes that affect testing, such
                      ; as media streams, etc.
         r  := 100    ; Initial session attempt rate (in sessions/sec).
                      ; Adjust as needed (for example, if DUT can handle
                      ; thousands of calls in steady state, set to
                      ; appropriate value in the thousands).
         w  := 0.10   ; Traffic increase weight (0 < w <= 1.0)
         d  := max(0.10, w / 2)    ; Traffic decrease weight
        
         ; ---- End of parameters of test
        

         proc find_R

            R = max_sps(r, m, N)  ; Setup r sps, each with m media
            ; characteristics until N sessions have been attempted.
            ; Note that if a DUT vendor provides this number, the tester
            ; can use the number as a Session Attempt Rate, R, instead
            ; of invoking max_sps()
        

         end proc

         ; Iterative process to figure out the largest number of
         ; sps that we can achieve in order to setup n sessions.
         ; This function converges to R, the Session Attempt Rate.
         proc max_sps(r, m, n)
            s     := 0    ; session setup rate
            old_r := 0    ; old session setup rate
            h     := 0    ; Return value, R
            count := 0
        
            ; Note that if w is small (say, 0.10) and r is small
            ; (say, <= 9), the algorithm will not converge since it
            ; uses floor() to increment r dynamically.  It is best
            ; to start with the defaults (w = 0.10 and r >= 100).
        
            while (TRUE) {
               s := send_traffic(r, m, n) ; Send r sps, with m media
               ; characteristics until n sessions have been attempted.
               if (s == n)  {
                   if (r > old_r)  {
                       old_r = r
                   }
                   else  {
                       count = count + 1
        
                        if (count >= 10)  {
                            # We've converged.
                            h := max(r, old_r)
                            break
                        }
                    }
        
                    r  := floor(r + (w * r))
                }
                else  {
                    r := floor(r - (d * r))
                    d := max(0.10, d / 2)
                    w := max(0.10, w / 2)
                }
        

            }
            return h
         end proc

5. Reporting Format
5.1. Test Setup Report
      SIP Transport Protocol = ___________________________
      (valid values: TCP|UDP|TLS|SCTP|websockets|specify-other)
      (Specify if same transport used for connections to the DUT
      and connections from the DUT.  If different transports
      used on each connection, enumerate the transports used.)
        
      Connection management strategy for connection oriented
      transports
         DUT receives requests on one connection = _______
         (Yes or no.  If no, DUT accepts a new connection for
         every incoming request, sends a response on that
         connection, and closes the connection.)
         DUT sends requests on one connection = __________
         (Yes or no.  If no, DUT initiates a new connection to
         send out each request, gets a response on that
         connection, and closes the connection.)
        
      Session Attempt Rate  _______________________________
      (Session attempts/sec)
      (The initial value for "r" in benchmarking algorithm in
      Section 4.10.)
        
      Session Duration = _________________________________
      (In seconds)
        
      Total Sessions Attempted = _________________________
      (Total sessions to be created over duration of test)
        
      Media Streams per Session =  _______________________
      (number of streams per session)
        
      Associated Media Protocol =  _______________________
      (RTP|SRTP|specify-other)
        
      Codec = ____________________________________________
      (Codec type as identified by the organization that
      specifies the codec)
        
      Media Packet Size (audio only) =  __________________
      (Number of bytes in an audio packet)
        
      Establishment Threshold time =  ____________________
      (Seconds)
        
      TLS ciphersuite used
      (for tests involving TLS) = ________________________
      (e.g., TLS_RSA_WITH_AES_128_CBC_SHA)
        
      IPsec profile used
      (For tests involving IPsec) = _____________________
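
For illustration only, a hypothetical completed report might read as follows; every value below is an example chosen for this sketch, not a measurement, a default, or a recommendation:

      SIP Transport Protocol = UDP (same transport on both connections)
      Connection management strategy = not applicable (UDP)
      Session Attempt Rate = 100 session attempts/sec (initial "r")
      Session Duration = 0 seconds
      Total Sessions Attempted = 50000
      Media Streams per Session = 0
      Associated Media Protocol = none (no media in this run)
      Codec = none
      Media Packet Size (audio only) = not applicable
      Establishment Threshold time = 5 seconds
      TLS ciphersuite used = not applicable
      IPsec profile used = not applicable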
        
5.2. Device Benchmarks for Session Setup
      Session Establishment Rate, "R" = __________________
      (sessions per second)
      Is DUT acting as a media relay? (yes/no) = _________
        
5.3. Device Benchmarks for Registrations
      Registration Rate =  ____________________________
      (registrations per second)
        
      Re-registration Rate =  ____________________________
      (registrations per second)
        
      Notes = ____________________________________________
      (List any specific backend processing required or
      other parameters that may impact the rate)
        
6. Test Cases
6.1. Baseline Session Establishment Rate of the Testbed

Objective: To benchmark the Session Establishment Rate of the Emulated Agent (EA) with zero failures.

Procedure:

1. Configure the DUT in the test topology shown in Figure 1.
2. Set Media Streams per Session to 0.
3. Execute the benchmarking algorithm as defined in Section 4.10 to get the baseline Session Establishment Rate. This rate MUST be recorded using any pertinent parameters as shown in the reporting format of Section 5.1.

Expected Results: This is the scenario to obtain the maximum Session Establishment Rate of the EA and the testbed when no DUT is present. The results of this test might be used to normalize test results performed on different testbeds or simply to better understand the impact of the DUT on the testbed in question.

6.2. Session Establishment Rate without Media

Objective: To benchmark the Session Establishment Rate of the DUT with no Associated Media and zero failures.

Procedure:

1. Configure a DUT according to the test topology shown in Figure 1 or Figure 2.
2. Set Media Streams per Session to 0.
3. Execute the benchmarking algorithm as defined in Section 4.10 to get the Session Establishment Rate. This rate MUST be recorded using any pertinent parameters as shown in the reporting format of Section 5.1.

Expected Results: Find the Session Establishment Rate of the DUT when the EA is not sending media streams.

6.3. Session Establishment Rate with Media Not on DUT

Objective: To benchmark the Session Establishment Rate of the DUT with zero failures when Associated Media is included in the benchmark test but the media is not running through the DUT.

Procedure:

1. Configure a DUT according to the test topology shown in Figure 1.
2. Set Media Streams per Session to 1.
3. Execute the benchmarking algorithm as defined in Section 4.10 to get the Session Establishment Rate with media. This rate MUST be recorded using any pertinent parameters as shown in the reporting format of Section 5.1.

Expected Results: Session Establishment Rate results obtained with Associated Media with any number of media streams per SIP session are expected to be identical to the Session Establishment Rate results obtained without media in the case where the DUT is running on a platform separate from the Media Relay.

6.4. Session Establishment Rate with Media on DUT

Objective: To benchmark the Session Establishment Rate of the DUT with zero failures when Associated Media is included in the benchmark test and the media is running through the DUT.

Procedure:

1. Configure a DUT according to the test topology shown in Figure 2.
2. Set Media Streams per Session to 1.
3. Execute the benchmarking algorithm as defined in Section 4.10 to get the Session Establishment Rate with media. This rate MUST be recorded using any pertinent parameters as shown in the reporting format of Section 5.1.

Expected Results: Session Establishment Rate results obtained with Associated Media may be lower than those obtained without media in the case where the DUT and the Media Relay are running on the same platform. It may be helpful for the tester to be aware of the reasons for this degradation, although these reasons are not parameters of the test. For example, the degree of performance degradation may be due to what the DUT does with the media (e.g., relaying vs. transcoding), the type of media (audio vs. video vs. data), and the codec used for the media. There may also be cases where there is no performance impact, if the DUT has dedicated media-path hardware.

6.5. Session Establishment Rate with TLS-Encrypted SIP

Objective: To benchmark the Session Establishment Rate of the DUT with zero failures when using TLS-encrypted SIP signaling.

Procedure:

1. If the DUT is being benchmarked as a proxy or B2BUA, then configure the DUT in the test topology shown in Figure 1 or Figure 2.
2. Configure the tester to enable TLS over the transport being used during benchmarking. Note the ciphersuite being used for TLS and record it in Section 5.1.
3. Set Media Streams per Session to 0 (media is not used in this test).
4. Execute the benchmarking algorithm as defined in Section 4.10 to get the Session Establishment Rate with TLS encryption.

Expected Results: Session Establishment Rate results obtained with TLS-encrypted SIP may be lower than those obtained with plaintext SIP.

6.6. Session Establishment Rate with IPsec-Encrypted SIP

Objective: To benchmark the Session Establishment Rate of the DUT with zero failures when using IPsec-encrypted SIP signaling.

Procedure:

1. Configure a DUT according to the test topology shown in Figure 1 or Figure 2.
2. Set Media Streams per Session to 0 (media is not used in this test).
3. Configure the tester for IPsec. Note the IPsec profile being used and record it in Section 5.1.
4. Execute the benchmarking algorithm as defined in Section 4.10 to get the Session Establishment Rate with encryption.

Expected Results: Session Establishment Rate results obtained with IPsec-encrypted SIP may be lower than those obtained with plaintext SIP.

6.7. Registration Rate

Objective: To benchmark the maximum registration rate the DUT can handle over an extended time period with zero failures.

Procedure:

1. Configure a DUT according to the test topology shown in Figure 3.
2. Set the registration timeout value to at least 3600 seconds.
3. Each register request MUST be made to a distinct Address of Record (AoR). Execute the benchmarking algorithm as defined in Section 4.10 to get the maximum registration rate. This rate MUST be recorded using any pertinent parameters as shown in the reporting format of Section 5.1. For example, the use of TLS or IPsec during registration must be noted in the reporting format. In the same vein, any specific backend processing (use of databases, authentication servers, etc.) SHOULD be recorded as well.

Expected Results: Provides a maximum registration rate.

6.8. Re-registration Rate

Objective: To benchmark the re-registration rate of the DUT with zero failures using the same backend processing and parameters used during Section 6.7.

Procedure:

1. Configure a DUT according to the test topology shown in Figure 3.
2. Execute the test detailed in Section 6.7 to register the endpoints with the registrar and obtain the registration rate.
3. After at least 5 minutes of performing Step 2, but no more than 10 minutes after Step 2 has been performed, re-register the same AoRs used in Step 3 of Section 6.7. This will count as a re-registration because the SIP AoRs have not yet expired.

Expected Results: Note the rate obtained through this test for comparison with the rate obtained in Section 6.7.

7. Security Considerations

Documents of this type do not directly affect the security of the Internet or of corporate networks as long as benchmarking is not performed on devices or systems connected to production networks. Security threats and how to counter them in SIP and in the media layer are discussed in RFC 3261, RFC 3550, RFC 3711, and various other documents. This document attempts to formalize a common set of methodologies for benchmarking the performance of SIP devices in a lab environment.

8. References
8.1. Normative References

[RFC2119] Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, March 1997, <http://www.rfc-editor.org/info/rfc2119>.

[RFC7501] Davids, C., Gurbani, V., and S. Poretsky, "Terminology for Benchmarking Session Initiation Protocol (SIP) Devices: Basic Session Setup and Registration", RFC 7501, April 2015, <http://www.rfc-editor.org/info/rfc7501>.

8.2. Informative References

[RFC3261] Rosenberg, J., Schulzrinne, H., Camarillo, G., Johnston, A., Peterson, J., Sparks, R., Handley, M., and E. Schooler, "SIP: Session Initiation Protocol", RFC 3261, June 2002, <http://www.rfc-editor.org/info/rfc3261>.

[Rtool] R Development Core Team, "R: A Language and Environment for Statistical Computing", R Foundation for Statistical Computing Vienna, Austria, ISBN 3-900051-07-0, 2011, <http://www.R-project.org>.

Appendix A. R Code Component to Simulate Benchmarking Algorithm

      # Copyright (c) 2015 IETF Trust and the persons identified as
      # authors of the code.  All rights reserved.
      #
      # Redistribution and use in source and binary forms, with or
      # without modification, are permitted provided that the following
      # conditions are met:
      #
      # The author of this code is Vijay K. Gurbani.
      #
      # - Redistributions of source code must retain the above copyright
      #   notice, this list of conditions and
      #   the following disclaimer.
      #
      # - Redistributions in binary form must reproduce the above
      #   copyright notice, this list of conditions and the following
      #   disclaimer in the documentation and/or other materials
      #   provided with the distribution.
      #
      # - Neither the name of Internet Society, IETF or IETF Trust,
      #   nor the names of specific contributors, may be used to
      #   endorse or promote products derived from this software
      #   without specific prior written permission.
      #
      # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND
      # CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES,
      # INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
      # MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
      # DISCLAIMED.  IN NO EVENT SHALL THE COPYRIGHT OWNER OR
      # CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
      # INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
      # (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE
      # GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
      # INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
      # WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
      # NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
      # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
      # DAMAGE.

      w = 0.10
      d = max(0.10, w / 2)
      DUT_max_sps = 460     # Change as needed to set the max sps value
                            # for a DUT
        
      # Returns R, given r (initial session attempt rate).
      # E.g., assume that a DUT handles 460 sps in steady state
      # and you have saved this code in a file simulate.r.  Then,
      # start an R session and do the following:
      #
      # > source("simulate.r")
      # > find_R(100)
      # ... debug output omitted ...
      # [1] 458
      #
      # Thus, the max sps that the DUT can handle is 458 sps, which is
      # close to the absolute maximum of 460 sps the DUT is specified to
      # do.
      find_R <- function(r)  {
         s     = 0
         old_r = 0
         h     = 0
         count = 0
        

         # Note that if w is small (say, 0.10) and r is small
         # (say, <= 9), the algorithm will not converge since it
         # uses floor() to increment r dynamically.  It is best
         # to start with the defaults (w = 0.10 and r >= 100).

         cat("r   old_r    w     d \n")
         while (TRUE)  {
            cat(r, ' ', old_r, ' ', w, ' ', d, '\n')
            s = send_traffic(r)
            if (s == TRUE)  {     # All sessions succeeded
        
                if (r > old_r)  {
                    old_r = r
                }
                else  {
                    count = count + 1
        
                    if (count >= 10)  {
                        # We've converged.
                        h = max(r, old_r)
                        break
                    }
                }
        
                r  = floor(r + (w * r))
            }
        
            else  {
                r = floor(r - (d * r))
                d = max(0.10, d / 2)
                w = max(0.10, w / 2)
            }
         }
        

         h
      }

      send_traffic <- function(r)  {
         n = TRUE
        
         if (r > DUT_max_sps)  {
             n = FALSE
         }
        

         n
      }

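As an illustrative invocation of this simulation (the values below are assumptions chosen for this sketch, not measurements or recommendations), one might model a DUT assumed to sustain roughly 1000 sps in steady state and start the search at 250 sps:

      DUT_max_sps = 1000   # assumed steady-state capacity for this example
      find_R(250)          # converges to a value just below 1000 sps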

Acknowledgments

The authors would like to thank Keith Drage and Daryl Malas for their contributions to this document. Dale Worley provided an extensive review that led to improvements in the documents. We are grateful to Barry Constantine, William Cerveny, and Robert Sparks for providing valuable comments during the documents' last calls and expert reviews. Al Morton and Sarah Banks have been exemplary working group chairs; we thank them for tracking this work to completion. Tom Taylor provided an in-depth review and subsequent comments on the benchmarking convergence algorithm in Section 4.10.

Authors' Addresses

Carol Davids
Illinois Institute of Technology
201 East Loop Road
Wheaton, IL 60187
United States

   Phone: +1 630 682 6024
   EMail: davids@iit.edu
        

Vijay K. Gurbani
Bell Laboratories, Alcatel-Lucent
1960 Lucent Lane, Rm 9C-533
Naperville, IL 60566
United States

   Phone: +1 630 224 0216
   EMail: vkg@bell-labs.com
        

Scott Poretsky
Allot Communications
300 TradeCenter, Suite 4680
Woburn, MA 08101
United States

   Phone: +1 508 309 2179
   EMail: sporetsky@allot.com
        