Network Working Group                                         A. Ghanwani
Request for Comments: 2816                                Nortel Networks
Category: Informational                                           W. Pace
                                                                      IBM
                                                            V. Srinivasan
                                                    CoSine Communications
                                                                 A. Smith
                                                         Extreme Networks
                                                                M. Seaman
                                                                  Telseon
                                                                 May 2000
        

A Framework for Integrated Services Over Shared and Switched IEEE 802 LAN Technologies

Status of this Memo

This memo provides information for the Internet community. It does not specify an Internet standard of any kind. Distribution of this memo is unlimited.

Copyright Notice

Copyright (C) The Internet Society (2000). All Rights Reserved.

Abstract

This memo describes a framework for supporting IETF Integrated Services on shared and switched LAN infrastructure. It includes background material on the capabilities of IEEE 802 like networks with regard to parameters that affect Integrated Services such as access latency, delay variation and queuing support in LAN switches. It discusses aspects of IETF's Integrated Services model that cannot easily be accommodated in different LAN environments. It outlines a functional model for supporting the Resource Reservation Protocol (RSVP) in such LAN environments. Details of extensions to RSVP for use over LANs are described in an accompanying memo [14]. Mappings of the various Integrated Services onto IEEE 802 LANs are described in another memo [13].

Contents

   1.  Introduction . . . . . . . . . . . . . . . . . . . . . . .  3
   2.  Document Outline . . . . . . . . . . . . . . . . . . . . .  4
   3.  Definitions  . . . . . . . . . . . . . . . . . . . . . . .  4
   4.  Frame Forwarding in IEEE 802 Networks  . . . . . . . . . .  5
       4.1. General IEEE 802 Service Model  . . . . . . . . . . .  5
       4.2. Ethernet/IEEE 802.3 . . . . . . . . . . . . . . . . .  7
       4.3. Token Ring/IEEE 802.5 . . . . . . . . . . . . . . . .  8
       4.4. Fiber Distributed Data Interface  . . . . . . . . . . 10
       4.5. Demand Priority/IEEE 802.12 . . . . . . . . . . . . . 10
   5.  Requirements and Goals . . . . . . . . . . . . . . . . . . 11
       5.1. Requirements  . . . . . . . . . . . . . . . . . . . . 11
       5.2. Goals . . . . . . . . . . . . . . . . . . . . . . . . 13
       5.3. Non-goals . . . . . . . . . . . . . . . . . . . . . . 14
       5.4. Assumptions . . . . . . . . . . . . . . . . . . . . . 14
   6.  Basic Architecture . . . . . . . . . . . . . . . . . . . . 15
       6.1. Components  . . . . . . . . . . . . . . . . . . . . . 15
             6.1.1. Requester Module  . . . . . . . . . . . . . . 15
             6.1.2. Bandwidth Allocator . . . . . . . . . . . . . 16
             6.1.3. Communication Protocols . . . . . . . . . . . 16
       6.2. Centralized vs.  Distributed Implementations  . . . . 17
   7.  Model of the Bandwidth Manager in a Network  . . . . . . . 18
       7.1. End Station Model . . . . . . . . . . . . . . . . . . 19
             7.1.1. Layer 3 Client Model  . . . . . . . . . . . . 19
             7.1.2. Requests to Layer 2 ISSLL . . . . . . . . . . 19
             7.1.3. At the Layer 3 Sender . . . . . . . . . . . . 20
             7.1.4. At the Layer 3 Receiver . . . . . . . . . . . 21
       7.2. Switch Model  . . . . . . . . . . . . . . . . . . . . 22
             7.2.1. Centralized Bandwidth Allocator . . . . . . . 22
             7.2.2. Distributed Bandwidth Allocator . . . . . . . 23
       7.3. Admission Control . . . . . . . . . . . . . . . . . . 25
       7.4. QoS Signaling . . . . . . . . . . . . . . . . . . . . 26
             7.4.1. Client Service Definitions  . . . . . . . . . 26
             7.4.2. Switch Service Definitions  . . . . . . . . . 27
   8.  Implementation Issues  . . . . . . . . . . . . . . . . . . 28
       8.1. Switch Characteristics  . . . . . . . . . . . . . . . 29
       8.2. Queuing . . . . . . . . . . . . . . . . . . . . . . . 30
       8.3. Mapping of Services to Link Level Priority  . . . . . 31
       8.4. Re-mapping of Non-conforming Aggregated Flows . . . . 31
       8.5. Override of Incoming User Priority  . . . . . . . . . 32
       8.6. Different Reservation Styles  . . . . . . . . . . . . 32
       8.7. Receiver Heterogeneity  . . . . . . . . . . . . . . . 33
   9.  Network Topology Scenarios   . . . . . . . . . . . . . . . 35
       9.1. Full Duplex Switched Networks . . . . . . . . . . . . 36
       9.2. Shared Media Ethernet Networks  . . . . . . . . . . . 37
       9.3. Half Duplex Switched Ethernet Networks  . . . . . . . 38
       9.4. Half Duplex Switched and Shared Token Ring Networks . 39
       9.5. Half Duplex and Shared Demand Priority Networks . . . 40
   10. Justification  . . . . . . . . . . . . . . . . . . . . . . 42
   11. Summary  . . . . . . . . . . . . . . . . . . . . . . . . . 43
   References . . . . . . . . . . . . . . . . . . . . . . . . . . 43
   Security Considerations  . . . . . . . . . . . . . . . . . . . 45
   Acknowledgements . . . . . . . . . . . . . . . . . . . . . . . 45
   Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . 46
   Full Copyright Statement . . . . . . . . . . . . . . . . . . . 47

1. Introduction

The Internet has traditionally provided support for best effort traffic only. However, with the recent advances in link layer technology, and with numerous emerging real time applications such as video conferencing and Internet telephony, there has been much interest in developing mechanisms which enable real time services over the Internet. A framework for meeting these new requirements was set out in RFC 1633 [8] and this has driven the specification of various classes of network service by the Integrated Services working group of the IETF, such as Controlled Load and Guaranteed Service [6,7]. Each of these service classes is designed to provide certain Quality of Service (QoS) to traffic conforming to a specified set of parameters. Applications are expected to choose one of these classes according to their QoS requirements. One mechanism for end stations to utilize such services in an IP network is provided by a QoS signaling protocol, the Resource Reservation Protocol (RSVP) [5] developed by the RSVP working group of the IETF. The IEEE under its Project 802 has defined standards for many different local area network technologies. These all typically offer the same MAC layer datagram service [1] to higher layer protocols such as IP although they often provide different dynamic behavior characteristics -- it is these that are important when considering their ability to support real time services. Later in this memo we describe some of the relevant characteristics of the different MAC layer LAN technologies. In addition, IEEE 802 has defined standards for bridging multiple LAN segments together using devices known as "MAC Bridges" or "Switches" [2]. Recent work has also defined traffic classes, multicast filtering, and virtual LAN capabilities for these devices [3,4]. Such LAN technologies often constitute the last hop(s) between users and the Internet as well as being a primary building block for entire campus networks. It is therefore necessary to provide standardized mechanisms for using these technologies to support end-to-end real time services. In order to do this, there must be some mechanism for resource management at the data link layer. Resource management in this context encompasses the functions of admission control, scheduling, traffic policing, etc. The ISSLL (Integrated Services over Specific Link Layers) working group in the IETF was chartered with the purpose of exploring and standardizing such mechanisms for various link layer technologies.

2. Document Outline

This document is concerned with specifying a framework for providing Integrated Services over shared and switched LAN technologies such as Ethernet/IEEE 802.3, Token Ring/IEEE 802.5, FDDI, etc. We begin in Section 4 with a discussion of the capabilities of various IEEE 802 MAC layer technologies. Section 5 lists the requirements and goals for a mechanism capable of providing Integrated Services in a LAN. The resource management functions outlined in Section 5 are provided by an entity referred to as a Bandwidth Manager (BM). The architectural model of the BM is described in Section 6 and its various components are discussed in Section 7. Some implementation issues with respect to link layer support for Integrated Services are examined in Section 8. Section 9 discusses a taxonomy of topologies for the LAN technologies under consideration with an emphasis on the capabilities of each which can be leveraged for enabling Integrated Services. This framework makes no assumptions about the topology at the link layer. The framework is intended to be as exhaustive as possible; this means that it is possible that all the functions discussed may not be supportable by a particular topology or technology, but this should not preclude the usage of this model for it.

3. Definitions

The following is a list of terms used in this and other ISSLL documents.

- Link Layer or Layer 2 or L2: Data link layer technologies such as Ethernet/IEEE 802.3 and Token Ring/IEEE 802.5 are referred to as Layer 2 or L2.

- Link Layer Domain or Layer 2 Domain or L2 Domain: Refers to a set of nodes and links interconnected without passing through a L3 forwarding function. One or more IP subnets can be overlaid on a L2 domain.

- Layer 2 or L2 Devices: Devices that implement only Layer 2 functionality. These include IEEE 802.1D [2] bridges or switches.

- Internetwork Layer or Layer 3 or L3: Refers to Layer 3 of the ISO OSI model. This memo is primarily concerned with networks that use the Internet Protocol (IP) at this layer.

- Layer 3 Device or L3 Device or End Station: These include hosts and routers that use L3 and higher layer protocols or application programs that need to make resource reservations.

- Segment: A physical L2 segment that is shared by one or more senders. Examples of segments include: (a) a shared Ethernet or Token Ring wire resolving contention for media access using CSMA or token passing; (b) a half duplex link between two stations or switches; (c) one direction of a switched full duplex link.

- Managed Segment: A managed segment is a segment with a DSBM (designated subnet bandwidth manager, see [14]) present and responsible for exercising admission control over requests for resource reservation. A managed segment includes those interconnected parts of a shared LAN that are not separated by DSBMs.

- Traffic Class: Refers to an aggregation of data flows which are given similar service within a switched network.

- Subnet: Used in this memo to indicate a group of L3 devices sharing a common L3 network address prefix along with the set of segments making up the L2 domain in which they are located.

- Bridge/Switch: A Layer 2 forwarding device as defined by IEEE 802.1D [2]. The terms bridge and switch are used synonymously in this memo.

4. Frame Forwarding in IEEE 802 Networks
4.1. General IEEE 802 Service Model

The user_priority is a value associated with the transmission and reception of all frames in the IEEE 802 service model. It is supplied by the sender that is using the MAC service and is provided along with the data to a receiver using the MAC service. It may or may not be actually carried over the network. Token Ring/IEEE 802.5 carries this value encoded in its FC octet while basic Ethernet/IEEE 802.3 does not carry it. IEEE 802.12 may or may not carry it depending on the frame format in use. When the frame format in use is IEEE 802.5, the user_priority is carried explicitly. When the IEEE 802.3 frame format is used, only the two levels of priority (high/low) that are used to determine access priority can be recovered. This is based on the value of priority encoded in the start delimiter of the IEEE 802.12 frame.

NOTE: The original IEEE 802.1D standard [2] contains the specifications for the operation of MAC bridges. This has recently been extended to include support for traffic classes and dynamic multicast filtering [3]. In this document, the reader should be aware that references to the IEEE 802.1D standard refer to [3], unless explicitly noted otherwise.

IEEE 802.1D [3] defines a consistent way for carrying the value of the user_priority over a bridged network consisting of Ethernet, Token Ring, Demand Priority, FDDI or other MAC layer media using an extended frame format. The usage of user_priority is summarized below. We refer the interested reader to the IEEE 802.1D specification for further information.

If the user_priority is carried explicitly in packets, its utility is as a simple label enabling packets within a data stream in different classes to be discriminated easily by downstream nodes without having to parse the packet in more detail.

Apart from making the job of desktop or wiring closet switches easier, an explicit field means they do not have to change hardware or software as the rules for classifying packets evolve; e.g. based on new protocols or new policies. More sophisticated Layer 3 switches, perhaps deployed in the core of a network, may be able to provide added value by performing packet classification more accurately and, hence, utilizing network resources more efficiently and providing better isolation between flows. This appears to be a good economic choice since there are likely to be very many more desktop/wiring closet switches in a network than switches requiring Layer 3 functionality.

The IEEE 802 specifications make no assumptions about how user_priority is to be used by end stations or by the network. Although IEEE 802.1D defines static priority queuing as the default mode of operation of switches that implement multiple queues, the user_priority is really a priority only in a loose sense since it depends on the number of traffic classes actually implemented by a switch. The user_priority is defined as a 3 bit quantity with a value of 7 representing the highest priority and a value of 0 as the lowest. The general switch algorithm is as follows. Packets are queued within a particular traffic class based on the received user_priority, the value of which is either obtained directly from the packet if an IEEE 802.1Q header or IEEE 802.5 network is used, or is assigned according to some local policy. The queue is selected based on a mapping from user_priority (0 through 7) onto the number of available traffic classes. A switch may implement one or more traffic classes. The advertised IntServ parameters and the switch's admission control behavior may be used to determine the mapping from user_priority to traffic classes within the switch. A switch is not precluded from implementing other scheduling algorithms such as weighted fair queuing and round robin.

IEEE 802.1D makes no recommendations about how a sender should select the value for user_priority. One of the primary purposes of this document is to propose such usage rules, and to discuss the communication of the semantics of these values between switches and end stations. In the remainder of this document we use the term traffic class synonymously with user_priority.

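As an informal illustration of the queue selection described above, the following sketch (in Python; it is not part of any IEEE specification) evenly partitions the eight user_priority values over the number of traffic classes a switch implements. An actual switch would more likely use the table-driven mapping recommended by IEEE 802.1D [3], but the structure of the computation is the same.

   def select_traffic_class(user_priority, num_classes):
       """Map a 3 bit user_priority (0 = lowest, 7 = highest) onto one of
       num_classes queues; queue 0 is served at the lowest priority."""
       if not (0 <= user_priority <= 7 and 1 <= num_classes <= 8):
           raise ValueError("user_priority must be 0-7, num_classes 1-8")
       return (user_priority * num_classes) // 8

   # Example: a switch with two traffic classes maps user_priority 0-3
   # onto queue 0 and 4-7 onto queue 1.
   assert [select_traffic_class(p, 2) for p in range(8)] == [0, 0, 0, 0, 1, 1, 1, 1]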

4.2. Ethernet/IEEE 802.3

There is no explicit traffic class or user_priority field carried in Ethernet packets. This means that user_priority must be regenerated at a downstream receiver or switch according to some defaults or by parsing further into higher layer protocol fields in the packet. Alternatively, IEEE 802.1Q encapsulation [4] may be used which provides an explicit user_priority field on top of the basic MAC frame format.

For the different IP packet encapsulations used over Ethernet/IEEE 802.3, it will be necessary to adjust any admission control calculations according to the framing and padding requirements as shown in Table 1. Here, "ip_len" refers to the length of the IP packet including its headers.

Table 1: Ethernet encapsulations

   ---------------------------------------------------------------
   Encapsulation                          Framing Overhead  IP MTU
                                             bytes/pkt       bytes
   ---------------------------------------------------------------
   IP EtherType (ip_len<=46 bytes)             64-ip_len    1500
                (1500>=ip_len>=46 bytes)         18         1500

   IP EtherType over 802.1D/Q (ip_len<=42)     64-ip_len    1500*
                (1500>=ip_len>=42 bytes)         22         1500*

   IP EtherType over LLC/SNAP (ip_len<=40)     64-ip_len    1492
                (1500>=ip_len>=40 bytes)         24         1492
   ---------------------------------------------------------------

*Note that the packet length of an Ethernet frame using the IEEE 802.1Q specification exceeds the current IEEE 802.3 maximum packet length values by 4 bytes. The change of maximum MTU size for IEEE 802.1Q frames is being accommodated by IEEE 802.3ac [21].

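The following sketch shows one way the figures in Table 1 might be applied in an admission control calculation; it computes the number of bytes placed on the wire for an IP packet of length ip_len under each encapsulation, following the framing and padding rules of Table 1. The table and function names are illustrative only.

   # Wire cost per IP packet implied by Table 1 (illustrative only).
   TABLE_1 = {
       # encapsulation: (framing overhead, minimum IP payload, IP MTU)
       "EtherType":          (18, 46, 1500),
       "EtherType 802.1D/Q": (22, 42, 1500),
       "EtherType LLC/SNAP": (24, 40, 1492),
   }

   def wire_bytes(encapsulation, ip_len):
       overhead, min_payload, mtu = TABLE_1[encapsulation]
       if ip_len > mtu:
           raise ValueError("packet exceeds the IP MTU for this encapsulation")
       if ip_len <= min_payload:
           # The 64-ip_len rows of Table 1: short packets are padded so
           # that the frame occupies 64 bytes on the wire.
           return 64
       return ip_len + overhead

   # Example: a 40 byte packet costs 64 bytes on the wire with plain
   # EtherType framing, while a 1500 byte packet costs 1518 bytes.
   assert wire_bytes("EtherType", 40) == 64
   assert wire_bytes("EtherType", 1500) == 1518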

4.3. Token Ring/IEEE 802.5

The Token Ring standard [6] provides a priority mechanism that can be used to control both the queuing of packets for transmission and the access of packets to the shared media. The priority mechanisms are implemented using bits within the Access Control (AC) and the Frame Control (FC) fields of a LLC frame. The first three bits of the AC field, the Token Priority bits, together with the last three bits of the AC field, the Reservation bits, regulate which stations get access to the ring. The last three bits of the FC field of a LLC frame, the User Priority bits, are obtained from the higher layer in the user_priority parameter when it requests transmission of a packet. This parameter also establishes the Access Priority used by the MAC. The user_priority value is conveyed end-to-end by the User Priority bits in the FC field and is typically preserved through Token Ring bridges of all types. In all cases, 0 is the lowest priority.

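The bit positions described above can be illustrated with the following sketch; it is an informal aid only and the authoritative encoding is that of the IEEE 802.5 standard.

   def token_priority(ac_octet):
       # The first three bits of the Access Control (AC) field.
       return (ac_octet >> 5) & 0x7

   def reservation_bits(ac_octet):
       # The last three bits of the Access Control (AC) field.
       return ac_octet & 0x7

   def user_priority_bits(fc_octet):
       # The last three bits of the Frame Control (FC) field of an LLC
       # frame; 0 is the lowest priority.
       return fc_octet & 0x7

   # Example: an AC octet of 0b10000110 carries Token Priority 4 and a
   # Reservation of 6.
   assert token_priority(0b10000110) == 4
   assert reservation_bits(0b10000110) == 6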

Token Ring also uses a concept of Reserved Priority which relates to the value of priority which a station uses to reserve the token for its next transmission on the ring. When a free token is circulating, only a station having an Access Priority greater than or equal to the Reserved Priority in the token will be allowed to seize the token for transmission. Readers are referred to [14] for further discussion of this topic.

A Token Ring station is theoretically capable of separately queuing each of the eight levels of requested user_priority and then transmitting frames in order of priority. A station sets Reservation bits according to the user_priority of frames that are queued for transmission in the highest priority queue. This allows the access mechanism to ensure that the frame with the highest priority throughout the entire ring will be transmitted before any lower priority frame. Annex I to the IEEE 802.5 Token Ring standard recommends that stations send/relay frames as follows.

Table 2: Recommended use of Token Ring User Priority

            -------------------------------------
            Application             User Priority
            -------------------------------------
            Non-time-critical data      0
                  -                     1
                  -                     2
                  -                     3
            LAN management              4
            Time-sensitive data         5
            Real-time-critical data     6
            MAC frames                  7
            -------------------------------------

To reduce frame jitter associated with high priority traffic, the annex also recommends that only one frame be transmitted per token and that the maximum information field size be 4399 octets whenever delay sensitive traffic is traversing the ring. Most existing implementations of Token Ring bridges forward all LLC frames with a default access priority of 4. Annex I recommends that bridges forward LLC frames that have a user_priority greater than 4 with a reservation equal to the user_priority (although IEEE 802.1D [3] permits network management to override this behavior). The capabilities provided by the Token Ring architecture, such as User Priority and Reserved Priority, can provide effective support for Integrated Services flows that require QoS guarantees.

For the different IP packet encapsulations used over Token Ring/IEEE 802.5, it will be necessary to adjust any admission control calculations according to the framing requirements as shown in Table 3.

Table 3: Token Ring encapsulations

   ---------------------------------------------------------------
   Encapsulation                          Framing Overhead  IP MTU
                                              bytes/pkt       bytes
   ---------------------------------------------------------------
   IP EtherType over 802.1D/Q                    29          4370*
   IP EtherType over LLC/SNAP                    25          4370*
   ---------------------------------------------------------------

*The suggested MTU from RFC 1042 [13] is 4464 bytes but there are issues related to discovering the maximum supported MTU between any two points both within and between Token Ring subnets. The MTU reported here is consistent with the IEEE 802.5 Annex I recommendation.

4.4. Fiber Distributed Data Interface

The Fiber Distributed Data Interface (FDDI) standard [16] provides a priority mechanism that can be used to control both the queuing of packets for transmission and the access of packets to the shared media. The priority mechanisms are implemented using mechanisms similar to those of Token Ring described above. The standard also makes provision for "Synchronous" data traffic with strict media access and delay guarantees. This mode of operation is not discussed further here and represents an area within the scope of the ISSLL working group that requires further work. In the remainder of this document, for the discussion of QoS mechanisms, FDDI is treated as a 100 Mbps Token Ring technology using a service interface compatible with IEEE 802 networks.

4.5. Demand Priority/IEEE 802.12

IEEE 802.12 [19] is a standard for a shared 100 Mbps LAN. Data packets are transmitted using either the IEEE 802.3 or IEEE 802.5 frame format. The MAC protocol is called Demand Priority. Its main characteristics with respect to QoS are the support of two service priority levels, normal priority and high priority, and the order of service for each of these. Data packets from all network nodes (end hosts and bridges/switches) are served using a simple round robin algorithm.

If the IEEE 802.3 frame format is used for data transmission then the user_priority is encoded in the starting delimiter of the IEEE 802.12 data packet. If the IEEE 802.5 frame format is used then the user_priority is additionally encoded in the YYY bits of the FC field in the IEEE 802.5 packet header (see also Section 4.3). Furthermore, the IEEE 802.1Q encapsulation with its own user_priority field may also be applied in IEEE 802.12 networks. In all cases, switches are able to recover any user_priority supplied by a sender.

The same rules apply for IEEE 802.12 user_priority mapping in a bridge as with other media types. The only additional information is that normal priority is used by default for user_priority values 0 through 4 inclusive, and high priority is used for user_priority levels 5 through 7. This ensures that the default Token Ring user_priority level of 4 for IEEE 802.5 bridges is mapped to normal priority on IEEE 802.12 segments.

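A trivial sketch of this default mapping follows; it is illustrative only.

   def demand_priority_service(user_priority):
       # Default IEEE 802.12 behavior described above: user_priority
       # values 0 through 4 use the normal priority service, 5 through 7
       # use the high priority service.
       return "high" if user_priority >= 5 else "normal"

   # The default Token Ring user_priority of 4 therefore maps onto the
   # normal priority service on IEEE 802.12 segments.
   assert demand_priority_service(4) == "normal"
   assert demand_priority_service(6) == "high"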

The medium access in IEEE 802.12 LANs is deterministic. The Demand Priority mechanism ensures that, once the normal priority service has been preempted, all high priority packets have strict priority over packets with normal priority. In the event that a normal priority packet has been waiting at the head of line of a MAC transmit queue for a time period longer than PACKET_PROMOTION (200 - 300 ms) [19], its priority is automatically promoted to high priority. Thus, even normal priority packets have a maximum guaranteed access time to the medium.

Integrated Services can be built on top of the IEEE 802.12 medium access mechanism. When combined with admission control and bandwidth enforcement mechanisms, delay guarantees as required for a Guaranteed Service can be provided without any changes to the existing IEEE 802.12 MAC protocol.

Since the IEEE 802.12 standard supports the IEEE 802.3 and IEEE 802.5 frame formats, the same framing overhead as reported in Sections 4.2 and 4.3 must be considered in the admission control computations for IEEE 802.12 links.

5. Requirements and Goals

This section discusses the requirements and goals which should drive the design of an architecture for supporting Integrated Services over LAN technologies. The requirements refer to functions and features which must be supported, while goals refer to functions and features which are desirable, but are not an absolute necessity. Many of the requirements and goals are driven by the functionality supported by Integrated Services and RSVP.

5.1. Requirements

- Resource Reservation: The mechanism must be capable of reserving resources on a single segment or multiple segments and at bridges/switches connecting them. It must be able to provide reservations for both unicast and multicast sessions. It should be possible to change the level of reservation while the session is in progress.

- Admission Control: The mechanism must be able to estimate the level of resources necessary to meet the QoS requested by the session in order to decide whether or not the session can be admitted. For the purpose of management, it is useful to provide the ability to respond to queries about availability of resources. It must be able to make admission control decisions for different types of services such as Guaranteed Service, Controlled Load, etc.

- Flow Separation and Scheduling: It is necessary to provide a mechanism for traffic flow separation so that real time flows can be given preferential treatment over best effort flows. Packets of real time flows can then be isolated and scheduled according to their service requirements.

- Policing/Shaping: Traffic must be shaped and/or policed by end stations (workstations, routers) to ensure conformance to negotiated traffic parameters. Shaping is the recommended behavior for traffic sources. A router initiating an ISSLL session must have implemented traffic control mechanisms according to the IntServ requirements which would ensure that all flows sent by the router are in conformance. The ISSLL mechanisms at the link layer rely heavily on the correct implementation of policing/shaping mechanisms at higher layers by devices capable of doing so. This is necessary because bridges and switches are not typically capable of maintaining per flow state which would be required to check flows for conformance. Policing is left as an option for bridges and switches, which if implemented, may be used to enforce tighter control over traffic flows. This issue is further discussed in Section 8.

- Soft State: The mechanism must maintain soft state information about the reservations. This means that state information must periodically be refreshed if the reservation is to be maintained; otherwise the state information and corresponding reservations will expire after some pre-specified interval. A minimal illustration of such soft state maintenance is sketched at the end of this list.

- Centralized or Distributed Implementation: In the case of a centralized implementation, a single entity manages the resources of the entire subnet. This approach has the advantage of being easier to deploy since bridges and switches may not need to be upgraded with additional functionality. However, this approach scales poorly with geographical size of the subnet and the number of end stations attached. In a fully distributed implementation, each segment will have a local entity managing its resources. This approach has better scalability than the former. However, it requires that all bridges and switches in the network support new mechanisms. It is also possible to have a semi-distributed implementation where there is more than one entity, each managing the resources of a subset of segments and bridges/switches within the subnet. Ideally, implementation should be flexible; i.e. a centralized approach may be used for small subnets and a distributed approach can be used for larger subnets. Examples of centralized and distributed implementations are discussed in Section 6.

- Scalability: The mechanism and protocols should have a low overhead and should scale to the largest receiver groups likely to occur within a single link layer domain.

- Fault Tolerance and Recovery: The mechanism must be able to function in the presence of failures; i.e. there should not be a single point of failure. For instance, in a centralized implementation, some mechanism must be specified for back-up and recovery in the event of failure.

- Interaction with Existing Resource Management Controls: The interaction with existing infrastructure for resource management needs to be specified. For example, FDDI has a resource management mechanism called the "Synchronous Bandwidth Manager". The mechanism must be designed so that it takes advantage of, and specifies the interaction with, existing controls where available.

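As a minimal illustration of the soft state requirement above, the following sketch retains a reservation only while it is periodically refreshed; the class, the method names, and the timeout value are illustrative assumptions and are not part of this framework.

   import time

   REFRESH_TIMEOUT = 30.0          # seconds; an illustrative value only

   class SoftStateTable:
       def __init__(self):
           self._last_refresh = {}  # reservation id -> time of last refresh

       def refresh(self, reservation_id):
           # Called whenever a refresh arrives for an existing reservation.
           self._last_refresh[reservation_id] = time.monotonic()

       def expire_stale(self):
           # Reservations that have not been refreshed within the timeout
           # are removed and their resources are released.
           now = time.monotonic()
           for rid, t in list(self._last_refresh.items()):
               if now - t > REFRESH_TIMEOUT:
                   del self._last_refresh[rid]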

5.2. Goals

- Independence from higher layer protocols: The mechanism should, as far as possible, be independent of higher layer protocols such as RSVP and IP. Independence from RSVP is desirable so that it can interwork with other reservation protocols such as ST2 [10]. Independence from IP is desirable so that it can interwork with other network layer protocols such as IPX, NetBIOS, etc.

- Receiver heterogeneity: this refers to multicast communication where different receivers request different levels of service. For example, in a multicast group with many receivers, it is possible that one of the receivers desires a lower delay bound than the others. A better delay bound may be provided by increasing the amount of resources reserved along the path to that receiver while leaving the reservations for the other receivers unchanged. In its most complex form, receiver heterogeneity implies the ability to simultaneously provide various levels of service as requested by different receivers. In its simplest form, receiver heterogeneity will allow a scenario where some of the receivers use best effort service and those requiring service guarantees make a reservation. Receiver heterogeneity, especially for the reserved/best effort scenario, is a very desirable function. More details on supporting receiver heterogeneity are provided in Section 8.

- Support for different filter styles: It is desirable to provide support for the different filter styles defined by RSVP such as fixed filter, shared explicit and wildcard. Some of the issues with respect to supporting such filter styles in the link layer domain are examined in Section 8.

- Path Selection: In source routed LAN technologies such as Token Ring/IEEE 802.5, it may be useful for the mechanism to incorporate the function of path selection. Using an appropriate path selection mechanism may optimize utilization of network resources.

5.3. Non-goals

This document describes service mappings onto existing IEEE and ANSI defined standard MAC layers and uses standard MAC layer services as in IEEE 802.1 bridging. It does not attempt to make use of or describe the capabilities of other proprietary or standard MAC layer protocols although it should be noted that published work regarding MAC layers suitable for QoS mappings exists. These are outside the scope of the ISSLL working group charter.

5.4. Assumptions

This framework assumes that typical subnetworks that are concerned about QoS will be "switch rich"; i.e. most communication between end stations using integrated services support is expected to pass through at least one switch. The mechanisms and protocols described will be trivially extensible to communicating systems on the same shared medium, but it is important not to allow problem generalization which may complicate the targeted practical application to switch rich LAN topologies. There have also been developments in the area of MAC enhancements to ensure delay deterministic access on network links e.g. IEEE 802.12 [19] and also proprietary schemes.

Although we illustrate most examples for this model using RSVP as the upper layer QoS signaling protocol, there are actually no real dependencies on this protocol. RSVP could be replaced by some other dynamic protocol, or the requests could be made by network management or other policy entities. The SBM signaling protocol [14], which is based upon RSVP, is designed to work seamlessly in the architecture described in this memo.

There may be a heterogeneous mix of switches with different capabilities, all compliant with IEEE 802.1D [2,3], but implementing varied queuing and forwarding mechanisms ranging from simple systems with two queues per port and static priority scheduling, to more complex systems with multiple queues using WFQ or other algorithms.

The problem is decomposed into smaller independent parts which may lead to sub-optimal use of the network resources but we contend that such benefits are often equivalent to very small improvement in network efficiency in a LAN environment. Therefore, it is a goal that the switches in a network operate using a much simpler set of information than the RSVP engine in a router. In particular, it is assumed that such switches do not need to implement per flow queuing and policing (although they are not precluded from doing so).

A fundamental assumption of the IntServ model is that flows are isolated from each other throughout their transit across a network. Intermediate queuing nodes are expected to shape or police the traffic to ensure conformance to the negotiated traffic flow specification. In the architecture proposed here for mapping to Layer 2, we diverge from that assumption in the interest of simplicity. The policing/shaping functions are assumed to be implemented in end stations. In some LAN environments, it is reasonable to assume that end stations are trusted to adhere to their negotiated contracts at the inputs to the network, and that we can afford to over-allocate resources during admission control to compensate for the inevitable packet jitter/bunching introduced by the switched network itself. This divergence has some implications on the types of receiver heterogeneity that can be supported and the statistical multiplexing gains that may be exploited, especially for Controlled Load flows. This is discussed in Section 8.7 of this document.

6. Basic Architecture

The functional requirements described in Section 5 will be performed by an entity which we refer to as the Bandwidth Manager (BM). The BM is responsible for providing mechanisms for an application or higher layer protocol to request QoS from the network. For architectural purposes, the BM consists of the following components.

6.1. Components
6.1.1. Requester Module

The Requester Module (RM) resides in every end station in the subnet. One of its functions is to provide an interface between applications or higher layer protocols such as RSVP, ST2, SNMP, etc. and the BM. An application can invoke the various functions of the BM by using the primitives for communication with the RM and providing it with the appropriate parameters. To initiate a reservation in the link layer domain, the following parameters must be passed to the RM: the service desired (Guaranteed Service or Controlled Load), the traffic descriptors contained in the TSpec, and an RSpec specifying the amount of resources to be reserved [9]. More information on these parameters may be found in the relevant Integrated Services documents [6,7,8,9]. When RSVP is used for signaling at the network layer, this information is available and needs to be extracted from the RSVP PATH and RSVP RESV messages (See [5] for details). In addition to these parameters, the network layer addresses of the end points must be specified. The RM must then translate the network layer addresses to link layer addresses and convert the request into an appropriate format which is understood by other components of the BM responsible for admission control. The RM is also responsible for returning the status of requests processed by the BM to the invoking application or higher layer protocol.

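To make this interface concrete, the following sketch shows the kind of information a Requester Module might accept from a higher layer and translate for admission control; the names and structure are hypothetical and are not defined by this framework or by [14].

   from dataclasses import dataclass

   @dataclass
   class LinkLayerReservationRequest:
       service: str        # "GUARANTEED" or "CONTROLLED_LOAD"
       tspec: dict         # traffic descriptors from the TSpec
       rspec: dict         # amount of resources to reserve (RSpec)
       src_ip: str         # network layer addresses of the end points
       dst_ip: str

   def to_link_layer(request, address_table):
       # The RM translates the network layer addresses into link layer
       # (MAC) addresses before handing the request to the components
       # responsible for admission control; address_table stands in for
       # whatever resolution service is actually used.
       return {
           "service": request.service,
           "tspec": request.tspec,
           "rspec": request.rspec,
           "src_mac": address_table[request.src_ip],
           "dst_mac": address_table[request.dst_ip],
       }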

6.1.2. Bandwidth Allocator

The Bandwidth Allocator (BA) is responsible for performing admission control and maintaining state about the allocation of resources in the subnet. An end station can request various services, e.g. bandwidth reservation, modification of an existing reservation, queries about resource availability, etc. These requests are processed by the BA. The communication between the end station and the BA takes place through the RM. The location of the BA will depend largely on the implementation method. In a centralized implementation, the BA may reside on a single station in the subnet. In a distributed implementation, the functions of the BA may be distributed in all the end stations and bridges/switches as necessary. The BA is also responsible for deciding how to label flows, e.g. based on the admission control decision, the BA may indicate to the RM that packets belonging to a particular flow be tagged with some priority value which maps to the appropriate traffic class.

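A minimal sketch of the admission control and flow labeling role described above follows; the per-segment capacity model, the admission test, and the returned user_priority value are illustrative assumptions rather than part of this framework.

   class BandwidthAllocator:
       def __init__(self, segment_capacity_bps):
           # segment id -> capacity still available for reserved traffic
           self._free = dict(segment_capacity_bps)

       def admit(self, segments, requested_bps, user_priority=5):
           # Admit the flow only if every managed segment on its path has
           # enough unreserved capacity; otherwise reserve nothing.
           if any(self._free.get(s, 0) < requested_bps for s in segments):
               return None
           for s in segments:
               self._free[s] -= requested_bps
           # Returned value tells the RM which traffic class label to
           # apply to packets of the admitted flow.
           return user_priority

   # Example: admit a 1 Mbit/s flow that crosses two managed segments.
   ba = BandwidthAllocator({"seg1": 10_000_000, "seg2": 10_000_000})
   assert ba.admit(["seg1", "seg2"], 1_000_000) == 5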

6.1.3. Communication Protocols

The protocols for communication between the various components of the BM system must be specified. These include the following:

- Communication between the higher layer protocols and the RM: The BM must define primitives for the application to initiate reservations, query the BA about available resources, change or delete reservations, etc. These primitives could be implemented as an API for an application to invoke functions of the BM via the RM.

- Communication between the RM and the BA: A signaling mechanism must be defined for the communication between the RM and the BA. This protocol will specify the messages which must be exchanged between the RM and the BA in order to service various requests by the higher layer entity.

- Communication between peer BAs: If there is more than one BA in the subnet, a means must be specified for inter-BA communication. Specifically, the BAs must be able to decide among themselves about which BA would be responsible for which segments and bridges or switches. Further, if a request is made for resource reservation along the domain of multiple BAs, the BAs must be able to handle such a scenario correctly. Inter-BA communication will also be responsible for back-up and recovery in the event of failure.

6.2. Centralized vs. Distributed Implementations

Example scenarios are provided showing the location of the components of the bandwidth manager in centralized and fully distributed implementations. Note that in either case, the RM must be present in all end stations that need to make reservations. Essentially, centralized or distributed refers to the implementation of the BA, the component responsible for resource reservation and admission control. In the figures below, "App" refers to the application making use of the BM. It could either be a user application, or a higher layer protocol process such as RSVP.

                                +---------+
                            .-->|  BA     |<--.
                           /    +---------+    \
                          / .-->| Layer 2 |<--. \
                         / /    +---------+    \ \
                        / /                     \ \
                       / /                       \ \
   +---------+        / /                         \ \       +---------+
   |  App    |<----- /-/---------------------------\-\----->|  App    |
   +---------+      / /                             \ \     +---------+
   |  RM     |<----. /                               \ .--->|  RM     |
   +---------+      / +---------+        +---------+  \     +---------+
   | Layer 2 |<------>| Layer 2 |<------>| Layer 2 |<------>| Layer 2 |
   +---------+        +---------+        +---------+        +---------+
        
   RSVP Host/         Intermediate       Intermediate       RSVP Host/
      Router          Bridge/Switch      Bridge/Switch         Router
        

Figure 1: Bandwidth Manager with centralized Bandwidth Allocator


Figure 1 shows a centralized implementation where a single BA is responsible for admission control decisions for the entire subnet. Every end station contains an RM. Intermediate bridges and switches in the network need not have any functions of the BM since they will not be actively participating in admission control. The RM at the end station requesting a reservation initiates communication with its BA. For larger subnets, a single BA may not be able to handle the reservations for the entire subnet. In that case it would be necessary to deploy multiple BAs, each managing the resources of a non-overlapping subset of segments. In a centralized implementation, the BA must have some knowledge of the Layer 2 topology of the subnet, e.g. link layer spanning tree information, in order to be able to reserve resources on appropriate segments. Without this topology information, the BM would have to reserve resources on all segments for all flows which, in a switched network, would lead to very inefficient utilization of resources.

   +---------+                                              +---------+
   |  App    |<-------------------------------------------->|  App    |
   +---------+        +---------+        +---------+        +---------+
   |  RM/BA  |<------>|  BA     |<------>|  BA     |<------>|  RM/BA  |
   +---------+        +---------+        +---------+        +---------+
   | Layer 2 |<------>| Layer 2 |<------>| Layer 2 |<------>| Layer 2 |
   +---------+        +---------+        +---------+        +---------+
        
   RSVP Host/         Intermediate       Intermediate       RSVP Host/
      Router          Bridge/Switch      Bridge/Switch         Router
        

Figure 2: Bandwidth Manager with fully distributed Bandwidth Allocator


Figure 2 depicts the scenario of a fully distributed bandwidth manager. In this case, all devices in the subnet have BM functionality. All the end hosts are still required to have a RM. In addition, all stations actively participate in admission control. With this approach, each BA would need only local topology information since it is responsible for the resources on segments that are directly connected to it. This local topology information, such as a list of ports active on the spanning tree and which unicast addresses are reachable from which ports, is readily available in today's switches. Note that in the figures above, the arrows between peer layers are used to indicate logical connectivity.


7. Model of the Bandwidth Manager in a Network

In this section we describe how the model above fits with the existing IETF Integrated Services model of IP hosts and routers. First, we describe Layer 3 host and router implementations. Next, we describe how the model is applied in Layer 2 switches. Throughout we indicate any differences between centralized and distributed implementations. Occasional references are made to terminology from the Subnet Bandwidth Manager specification [14].


7.1. End Station Model
7.1.1. Layer 3 Client Model

We assume the same client model as IntServ and RSVP where we use the term "client" to mean the entity handling QoS in the Layer 3 device at each end of a Layer 2 Domain. In this model, the sending client is responsible for local admission control and packet scheduling onto its link in accordance with the negotiated service. As with the IntServ model, this involves per flow scheduling with possible traffic shaping/policing in every such originating node.


For now, we assume that the client runs an RSVP process which presents a session establishment interface to applications, provides signaling over the network, programs a scheduler and classifier in the driver, and interfaces to a policy control module. In particular, RSVP also interfaces to a local admission control module which is the focus of this section.


The following figure, reproduced from the RSVP specification, depicts the RSVP process in sending hosts.


                     +-----------------------------+
                     | +-------+  +-------+        |   RSVP
                     | |Appli- |  | RSVP  <------------------->
                     | | cation<-->       |        |
                     | |       |  |process| +-----+|
                     | +-+-----+  |       +->Polcy||
                     |   |        +--+--+-+ |Cntrl||
                     |   |data       |  |   +-----+|
                     |===|===========|==|==========|
                     |   |  +--------+  |   +-----+|
                     |   |  |        |  +--->Admis||
                     | +-V--V-+  +---V----+ |Cntrl||
                     | |Class-|  | Packet | +-----+|
                     | | ifier|==>Schedulr|===================>
                     | +------+  +--------+        |    data
                     +-----------------------------+
        

Figure 3: RSVP in Sending Hosts


7.1.2. Requests to Layer 2 ISSLL

The local admission control entity within a client is responsible for mapping Layer 3 session establishment requests into Layer 2 semantics.


The upper layer entity makes a request, in generalized terms, to ISSLL of the form:

      "May I reserve for traffic with <traffic characteristic> with
      <performance requirements> from <here> to <there> and how should I
      label it?"

where


   <traffic characteristic> = Sender Tspec (e.g. bandwidth, burstiness,
   MTU)
   <performance requirements> = FlowSpec (e.g. latency, jitter bounds)
   <here> = IP address(es)
   <there> = IP address(es) - may be multicast
        
7.1.3. At the Layer 3 Sender

The ISSLL functionality in the sender is illustrated in Figure 4.


The functions of the Requester Module may be summarized as follows:


- Maps the endpoints of the conversation to Layer 2 addresses in the LAN, so that the client can determine what traffic is going where. This function probably makes reference to the ARP protocol cache for unicast or performs an algorithmic mapping for multicast destinations.


- Communicates with any local Bandwidth Allocator module for local admission control decisions.


- Formats a SBM request to the network with the mapped addresses and flow/filter specs.


- Receives a response from the network and reports the admission control decision to the higher layer entity, along with any negotiated modifications to the session parameters.


- Saves any returned user_priority to be associated with this session in a "802 header" table. This will be used when constructing the Layer 2 headers for future data packets belonging to this session. This table might, for example, be indexed by the RSVP flow identifier.


                    from IP     from RSVP
                  +----|------------|------------+
                  | +--V----+   +---V---+        |
                  | | Addr  <--->       |        | SBM signaling
                  | |mapping|   |Request|<----------------------->
                  | +---+---+   |Module |        |
                  |     |       |       |        |
                  | +---+---+   |       |        |
                  | |  802  <--->       |        |
                  | | header|   +-+-+-+-+        |
                  | +--+----+    /  | |          |
                  |    |        /   | |  +-----+ |
                  |    | +-----+    | +->|Band-| |
                  |    | |          |    |width| |
                  | +--V-V-+  +-----V--+ |Alloc| |
                  | |Class-|  | Packet | +-----+ |
                  | | ifier|==>Schedulr|=========================>
                  | +------+  +--------+         |  data
                  +------------------------------+
        

Figure 4: ISSLL in a Sending End Station

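The "802 header" table mentioned in the Requester Module functions above can be as simple as an association from an RSVP flow identifier to the user_priority returned by admission control. A minimal sketch follows; the fixed-size linear table and the function names are assumptions for illustration only.

   #include <stdint.h>

   #define MAX_SESSIONS 64

   /* Hypothetical table mapping an RSVP flow identifier to the
    * user_priority to place in the Layer 2 header of its packets.  */
   struct l2_header_entry {
       uint32_t flow_id;
       uint8_t  user_priority;
       uint8_t  in_use;
   };

   static struct l2_header_entry l2_header_table[MAX_SESSIONS];

   /* Called when SBM signaling returns a user_priority for a flow. */
   void save_user_priority(uint32_t flow_id, uint8_t user_priority)
   {
       for (int i = 0; i < MAX_SESSIONS; i++) {
           if (!l2_header_table[i].in_use) {
               l2_header_table[i].flow_id = flow_id;
               l2_header_table[i].user_priority = user_priority;
               l2_header_table[i].in_use = 1;
               return;
           }
       }
   }

   /* Consulted on transmit; 0 (best effort) if no reservation exists. */
   uint8_t lookup_user_priority(uint32_t flow_id)
   {
       for (int i = 0; i < MAX_SESSIONS; i++) {
           if (l2_header_table[i].in_use &&
               l2_header_table[i].flow_id == flow_id)
               return l2_header_table[i].user_priority;
       }
       return 0;
   }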

The Bandwidth Allocator (BA) component is only present when a distributed BA model is implemented. When present, its function is basically to apply local admission control for the outgoing link bandwidth and driver's queuing resources.


7.1.4. At the Layer 3 Receiver

The ISSLL functionality in the receiver is simpler and is illustrated in Figure 5.


The functions of the Requester Module may be summarized as follows:


- Handles any received SBM protocol indications.


- Communicates with any local BA for local admission control decisions.


- Passes indications up to RSVP if OK.


- Accepts confirmations from RSVP and relays them back via SBM signaling towards the requester.


                          to RSVP       to IP
                            ^            ^
                       +----|------------|------+
                       | +--+----+       |      |
         SBM signaling | |Request|   +---+---+  |
         <-------------> |Module |   | Strip |  |
                       | +--+---++   |802 hdr|  |
                       |    |    \   +---^---+  |
                       | +--v----+\      |      |
                       | | Band- | \     |      |
                       | |  width|  \    |      |
                       | | Alloc |   .   |      |
                       | +-------+   |   |      |
                       | +------+   +v---+----+ |
         data          | |Class-|   | Packet  | |
         <==============>| ifier|==>|Scheduler| |
                       | +------+   +---------+ |
                       +------------------------+
        

Figure 5: ISSLL in a Receiving End Station


- May program a receive classifier and scheduler, if used, to identify traffic classes of received packets and accord them appropriate treatment e.g., reservation of buffers for particular traffic classes.


- Programs the receiver to strip away link layer header information from received packets.


The Bandwidth Allocator, present only in a distributed implementation, applies local admission control to see if a request can be supported with appropriate local receive resources.


7.2. Switch Model
7.2.1. Centralized Bandwidth Allocator

Where a centralized Bandwidth Allocator model is implemented, switches do not take part in the admission control process. Admission control is implemented by a centralized BA, e.g., a "Subnet Bandwidth Manager" (SBM) as described in [14]. This centralized BA may actually be co-located with a switch but its functions would not necessarily then be closely tied with the switch's forwarding functions as is the case with the distributed BA described below.


7.2.2. Distributed Bandwidth Allocator

The model of Layer 2 switch behavior described here uses the terminology of the SBM protocol as an example of an admission control protocol. The model is equally applicable when other mechanisms, e.g. static configuration or network management, are in use for admission control. We define the following entities within the switch:


- Local Admission Control Module: One of these on each port accounts for the available bandwidth on the link attached to that port. For half duplex links, this involves taking account of the resources allocated to both transmit and receive flows. For full duplex links, the input port accountant's task is trivial.


- Input SBM Module: One instance on each port performs the "network" side of the signaling protocol for peering with clients or other switches. It also holds knowledge about the mappings of IntServ classes to user_priority.


- SBM Propagation Module: Relays requests that have passed admission control at the input port to the relevant output ports' SBM modules. This will require access to the switch's forwarding table (Layer-2 "routing table" cf. RSVP model) and port spanning tree state.


- Output SBM Module: Forwards requests to the next Layer 2 or Layer 3 hop.


- Classifier, Queue and Scheduler Module: The functions of this module are basically as described by the Forwarding Process of IEEE 802.1D (see Section 3.7 of [3]). The Classifier module identifies the relevant QoS information from incoming packets and uses this, together with the normal bridge forwarding database, to decide at which output port and traffic class to enqueue the packet. Different types of switches will use different techniques for flow identification (see Section 8.1). In IEEE 802.1D switches this information is the regenerated user_priority parameter which has already been decoded by the receiving MAC service and potentially remapped by the forwarding process (see Section 3.7.3 of [3]). This does not preclude more sophisticated classification rules such as the classification of individual IntServ flows. The Queue and Scheduler implement the output queues for ports and provide the algorithm for servicing the queues for transmission onto the output link in order to provide the promised IntServ service. Switches will implement one or more output queues per port and all will implement at least a basic static priority dequeuing algorithm as their default, in accordance with IEEE 802.1D.

- Ingress Traffic Class Mapping and Policing Module: Its functions are as described in IEEE 802.1D Section 3.7. This optional module may police the data within traffic classes for conformance to the negotiated parameters, and may discard packets or re-map the user_priority. The default behavior is to pass things through unchanged.


- Egress Traffic Class Mapping Module: Its functions are as described in IEEE 802.1D Section 3.7. This optional module may perform re-mapping of traffic classes on a per output port basis. The default behavior is to pass things through unchanged.


Figure 6 shows all of the modules in an ISSLL enabled switch. The ISSLL model is a superset of the IEEE 802.1D bridge model.


                     +-------------------------------+
    SBM signaling    | +-----+   +------+   +------+ | SBM signaling
   <------------------>| IN  |<->| SBM  |<->| OUT  |<---------------->
                     | | SBM |   | prop.|   | SBM  | |
                     | +-++--+   +---^--+   /----+-+ |
                     |  / |          |     /     |   |
       ______________| /  |          |     |     |   +-------------+
      | \             /+--V--+       |     |  +--V--+            / |
      |   \      ____/ |Local|       |     |  |Local|          /   |
      |     \   /      |Admis|       |     |  |Admis|        /     |
      |       \/       |Cntrl|       |     |  |Cntrl|      /       |
      | +-----V+\      +-----+       |     |  +-----+    /+-----+  |
      | |traff |  \              +---+--+ +V-------+   /  |egrss|  |
      | |class |    \            |Filter| |Queue & | /    |traff|  |
      | |map & |=====|==========>|Data- |=| Packet |=|===>|class|  |
      | |police|     |           |  base| |Schedule| |    |map  |  |
      | +------+     |           +------+ +--------+ |    +-+---+  |
      +----^---------+-------------------------------+------|------+
   data in |                                                |data out
   ========+                                                +========>
        

Figure 6: ISSLL in a Switch

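For the Classifier, Queue and Scheduler module described above, the essential forwarding-path operation is to turn the regenerated user_priority of a received frame into one of the output queues on each selected port. The sketch below assumes a switch with four queues per port and uses an arbitrary example mapping; the actual number of queues and the mapping are a matter of product design and IEEE 802.1D configuration.

   #include <stdint.h>

   #define NUM_QUEUES 4     /* assumed traffic classes per output port */

   /* Example (not normative) mapping from the eight user_priority
    * values to one of four queues; queue 0 is the lowest priority.  */
   static const uint8_t priority_to_queue[8] = { 1, 0, 0, 1, 2, 2, 3, 3 };

   struct out_port;                       /* opaque per-port state    */
   struct frame {
       uint8_t user_priority;             /* regenerated on reception */
       /* ... remainder of the frame ... */
   };

   /* Stub for the sketch; a real switch would place the frame on the
    * selected hardware queue of the output port here.               */
   static void enqueue(struct out_port *port, int queue, struct frame *f)
   {
       (void)port; (void)queue; (void)f;
   }

   /* Choose the queue from the frame's regenerated user_priority and
    * hand the frame to the output port chosen by the forwarding
    * database.                                                      */
   void classify_and_enqueue(struct out_port *port, struct frame *f)
   {
       int queue = priority_to_queue[f->user_priority & 0x7];
       enqueue(port, queue, f);
   }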

7.3. Admission Control

On receipt of an admission control request, a switch performs the following actions, again using SBM as an example; a sketch summarizing this processing follows the list. The behavior is different depending on whether the "Designated SBM" for this segment is within this switch or not. See [14] for a more detailed specification of the DSBM/SBM actions.


- If the ingress SBM is the "Designated SBM" for this link, it either translates any received user_priority or selects a Layer 2 traffic class which appears compatible with the request and whose use does not violate any administrative policies in force. In effect, it matches the requested service with the available traffic classes and chooses the "best" one. It ensures that, if this reservation is successful, the value of user_priority corresponding to that traffic class is passed back to the client.


- The ingress DSBM observes the current state of allocation of resources on the input port/link and then determines whether the new resource allocation from the mapped traffic class can be accommodated. The request is passed to the reservation propagator if accepted.


- If the ingress SBM is not the "Designated SBM" for this link then it directly passes the request on to the reservation propagator.


- The reservation propagator relays the request to the bandwidth accountants on each of the switch's outbound links to which this reservation would apply. This implies an interface to routing/forwarding database.


- The egress bandwidth accountant observes the current state of allocation of queuing resources on its outbound port and bandwidth on the link itself and determines whether the new allocation can be accommodated. Note that this is only a local decision at this switch hop; further Layer 2 hops through the network may veto the request as it passes along.


- The request, if accepted by this switch, is propagated on each output link selected. Any user_priority described in the forwarded request must be translated according to any egress mapping table.


- If accepted, the switch must notify the client of the user_priority to be used for packets belonging to that flow. Again, this is an optimistic approach assuming that admission control succeeds; downstream switches may refuse the request.


- If this switch wishes to reject the request, it can do so by notifying the client that originated the request by means of its Layer 2 address.

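The per-switch processing above can be summarized in the following sketch, which uses SBM terminology loosely. The helper functions are assumed to exist elsewhere in the switch and their names are invented for this illustration; this is not the protocol machinery specified in [14].

   #include <stdbool.h>

   struct request;     /* carries Tspec/FlowSpec and user_priority */
   struct port;

   /* Helpers assumed to be provided by the rest of the switch. */
   bool is_designated_sbm(const struct port *in);
   int  map_to_traffic_class(const struct request *req);
   bool ingress_resources_ok(struct port *in, const struct request *req, int tc);
   int  egress_ports(const struct request *req, struct port *out[], int max);
   bool egress_resources_ok(struct port *out, const struct request *req, int tc);
   void forward_request(struct port *out, const struct request *req, int tc);
   void reject_to_origin(const struct request *req);

   void handle_admission_request(struct port *in, const struct request *req)
   {
       int tc = map_to_traffic_class(req);   /* pick the "best" class  */

       /* Ingress check applies only if this switch holds the
        * Designated SBM for the inbound segment.                     */
       if (is_designated_sbm(in) && !ingress_resources_ok(in, req, tc)) {
           reject_to_origin(req);
           return;
       }

       /* Reservation propagator: consult the forwarding database for
        * the outbound ports this reservation applies to.             */
       struct port *out[32];
       int n = egress_ports(req, out, 32);

       for (int i = 0; i < n; i++) {
           if (!egress_resources_ok(out[i], req, tc)) {
               reject_to_origin(req);      /* local veto at this hop  */
               return;
           }
       }
       for (int i = 0; i < n; i++)
           forward_request(out[i], req, tc);  /* downstream may still veto */
   }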

7.4. QoS Signaling

The mechanisms described in this document make use of a signaling protocol for devices to communicate their admission control requests across the network. The service definitions to be provided by such a protocol, e.g. [14], are described below. We illustrate the primitives and information that need to be exchanged with such a signaling protocol entity. In all of the examples, appropriate delete/cleanup mechanisms will also have to be provided for tearing down established sessions.


7.4.1. Client Service Definitions

The following interfaces can be identified from Figures 4 and 5.


- SBM <-> Address Mapping


This is a simple lookup function which may require ARP protocol interactions or an algorithmic mapping. The Layer 2 addresses are needed by SBM for inclusion in its signaling messages to avoid requiring that switches participating in the signaling have Layer 3 information to perform the mapping.


l2_addr = map_address( ip_addr )


- SBM <-> Session/Link Layer Header


This is for notifying the transmit path of how to add Layer 2 header information, e.g. user_priority values to the traffic of each outgoing flow. The transmit path will provide the user_priority value when it requests a MAC layer transmit operation for each packet. The user_priority is one of the parameters passed in the packet transmit primitive defined by the IEEE 802 service model.


bind_l2_header( flow_id, user_priority )


- SBM <-> Classifier/Scheduler


This is for notifying transmit classifier/scheduler of any additional Layer 2 information associated with scheduling the transmission of a packet flow. This primitive may be unused in some implementations or it may be used, for example, to provide information to a transmit scheduler that is performing per traffic class scheduling in addition to the per flow scheduling required by IntServ; the Layer 2 header may be a pattern (in addition to the FilterSpec) to be used to identify the flow's traffic.

bind_l2schedulerinfo( flow_id, l2_header, traffic_class )


- SBM <-> Local Admission Control


This is used for applying local admission control for a session e.g. is there enough transmit bandwidth still uncommitted for this new session? Are there sufficient receive buffers? This should commit the necessary resources if it succeeds. It will be necessary to release these resources at a later stage if the admission control fails at a subsequent node. This call would be made, for example, by a segment's Designated SBM.


status = admit_l2session( flow_id, Tspec, FlowSpec )


- SBM <-> RSVP


This is outlined above in Section 7.1.2 and fully described in [14].


- Management Interfaces


Some or all of the modules described by this model will also require configuration management. It is expected that details of the manageable objects will be specified by future work in the ISSLL WG.

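To show how the client service definitions above fit together on the send side, the sketch below strings the primitives into a single reservation attempt. The primitive names are those listed in this section, but the concrete argument types, the return conventions, and the call sequence are assumptions made for illustration.

   #include <stdint.h>

   typedef uint32_t flow_id_t;
   typedef struct { uint8_t addr[6]; } l2_addr_t;
   typedef struct { double rate_bps, bucket_bytes; } Tspec;    /* reduced */
   typedef struct { double max_delay_ms; } FlowSpec;           /* reduced */

   /* Primitives named in this section; signatures are assumed here. */
   l2_addr_t map_address(uint32_t ip_addr);
   int       admit_l2session(flow_id_t flow, Tspec t, FlowSpec f);
   void      bind_l2_header(flow_id_t flow, uint8_t user_priority);
   void      bind_l2schedulerinfo(flow_id_t flow, l2_addr_t l2_header,
                                  int traffic_class);

   /* One possible send-side sequence: map the destination address, run
    * local admission control, then bind the label returned by the BM
    * (assumed to arrive via SBM signaling) to the flow.              */
   int reserve_flow(flow_id_t flow, uint32_t dst_ip, Tspec t, FlowSpec f,
                    uint8_t granted_user_priority, int traffic_class)
   {
       l2_addr_t dst = map_address(dst_ip);

       if (admit_l2session(flow, t, f) != 0)   /* assumed: 0 == admitted */
           return -1;                          /* insufficient resources */

       bind_l2_header(flow, granted_user_priority);
       bind_l2schedulerinfo(flow, dst, traffic_class);
       return 0;
   }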

7.4.2. Switch Service Definitions

The following interfaces are identified from Figure 6.


- SBM <-> Classifier


This is for notifying the receive classifier of how to match incoming Layer 2 information with the associated traffic class. It may in some cases consist of a set of read only default mappings.


bind_l2classifierinfo( flow_id, l2_header, traffic_class )


- SBM <-> Queue and Packet Scheduler


This is for notifying transmit scheduler of additional Layer 2 information associated with a given traffic class. It may be unused in some cases (see discussion in previous section).


bind_l2schedulerinfo( flow_id, l2_header, traffic_class )


- SBM <-> Local Admission Control


Same as for the host discussed above.


- SBM <-> Traffic Class Map and Police


Optional configuration of any user_priority remapping that might be implemented on ingress to and egress from the ports of a switch. For IEEE 802.1D switches, it is likely that these mappings will have to be consistent across all ports.


bind_l2ingressprimap( inport, in_user_pri, internal_priority )
bind_l2egressprimap( outport, internal_priority, out_user_pri )


Optional configuration of any Layer 2 policing function to be applied on a per class basis to traffic matching the Layer 2 header. If the switch is capable of per flow policing then existing IntServ/RSVP models will provide a service definition for that configuration.


bind_l2policing( flow_id, l2_header, Tspec, FlowSpec )


- SBM <-> Filtering Database


SBM propagation rules need access to the Layer 2 forwarding database to determine where to forward SBM messages. This is analogous to the RSRR interface in Layer 3 RSVP.


output_portlist = lookup_l2dest( l2_addr )


- Management Interfaces


Some or all of the modules described by this model will also require configuration management. It is expected that details of the manageable objects will be specified by future work in the ISSLL working group.


8. Implementation Issues

As stated earlier, the Integrated Services working group has defined various service classes offering varying degrees of QoS guarantees. Initial effort will concentrate on enabling the Controlled Load [6] and Guaranteed Service classes [7]. The Controlled Load service provides a loose guarantee, informally stated as "the same as best effort would be on an unloaded network". The Guaranteed Service provides an upper bound on the transit delay of any packet. The extent to which these services can be supported at the link layer will depend on many factors including the topology and technology used. Some of the mapping issues are discussed below in light of the emerging link layer standards and the functions supported by higher layer protocols. Considering the limitations of some of the topologies, it may not be possible to satisfy all the requirements for Integrated Services on a given topology. In such cases, it is useful to consider providing support for an approximation of the service which may suffice in most practical instances. For example, it may not be feasible to provide policing/shaping at each network element (bridge/switch) as required by the Controlled Load specification. But if this task is left to the end stations, a reasonably good approximation to the service can be obtained.

8.1. Switch Characteristics

There are many LAN bridges/switches with varied capabilities for supporting QoS. We discuss below the various kinds of devices that one may expect to find in a LAN environment.


The most basic bridge is one which conforms to the IEEE 802.1D specification of 1993 [2]. This device has a single queue per output port, and uses the spanning tree algorithm to eliminate topology loops. Networks constructed from this kind of device cannot be expected to provide service guarantees of any kind because of the complete lack of traffic isolation.


The next level of bridges/switches are those which conform to the more recently revised IEEE 802.1D specification [3]. They include support for queuing up to eight traffic classes separately. The level of traffic isolation provided is coarse because all flows corresponding to a particular traffic class are aggregated. Further, it is likely that more than one priority will map to a traffic class depending on the number of queues implemented in the switch. It would be difficult for such a device to offer protection against misbehaving flows. The scope of multicast traffic may be limited by using GMRP to only those segments which are on the path to interested receivers.


A next step above these devices are bridges/switches which implement optional parts of the IEEE 802.1D specification such as mapping the received user_priority to some internal set of canonical values on a per-input-port basis. It may also support the mapping of these internal canonical values onto transmitted user_priority on a per-output-port basis. With these extra capabilities, network administrators can perform mapping of traffic classes between specific pairs of ports, and in doing so gain more control over admission of traffic into the protected classes.


Other entirely optional features that some bridges/switches may support include classification of IntServ flows using fields in the network layer header, per-flow policing and/or reshaping which is essential for supporting Guaranteed Service, and more sophisticated scheduling algorithms such as variants of weighted fair queuing to limit the bandwidth consumed by a traffic class. Note that it is advantageous to perform flow isolation and for all network elements to police each flow in order to support the Controlled Load and Guaranteed Service.


8.2. Queuing

Connectionless packet networks in general, and LANs in particular, work today because of scaling choices in network provisioning. Typically, excess bandwidth and buffering is provisioned in the network to absorb the traffic sourced by higher layer protocols, often sufficient to cause their transmission windows to run out on a statistical basis, so that network overloads are rare and transient and the expected loading is very low.


With the advent of time-critical traffic such over-provisioning has become far less easy to achieve. Time-critical frames may be queued for annoyingly long periods of time behind temporary bursts of file transfer traffic, particularly at network bottleneck points, e.g. at the 100 Mbps to 10 Mbps transition that might occur between the riser to the wiring closet and the final link to the user from a desktop switch. In this case, however, if it is known a priori (either by application design, on the basis of statistics, or by administrative control) that time-critical traffic is a small fraction of the total bandwidth, it suffices to give it strict priority over the non-time-critical traffic. The worst case delay experienced by the time-critical traffic is roughly the maximum transmission time of a maximum length non-time-critical frame -- on the order of a millisecond (about 1.2 ms) for 10 Mbps Ethernet, and well below the end to end delay budget based on human perception times.

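The worst case figure quoted above follows directly from the maximum frame length and the link speed. The short program below performs that arithmetic, assuming a 1518 octet maximum IEEE 802.3 frame and ignoring preamble and inter-frame gap.

   #include <stdio.h>

   int main(void)
   {
       const double max_frame_bits = 1518.0 * 8.0;   /* max 802.3 frame */
       const double link_bps[] = { 10e6, 100e6, 1000e6 };

       /* Residual transmission time of one maximum-length frame, i.e.
        * the worst case wait for strict-priority time-critical traffic. */
       for (int i = 0; i < 3; i++)
           printf("%5.0f Mbps: %.3f ms\n", link_bps[i] / 1e6,
                  max_frame_bits / link_bps[i] * 1e3);

       return 0;   /* prints about 1.214 ms for 10 Mbps */
   }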

When more than one priority service is to be offered by a network element e.g. one which supports both Controlled Load as well as Guaranteed Service, the requirements for the scheduling discipline become more complex. In order to provide the required isolation between the service classes, it will probably be necessary to queue them separately. There is then an issue of how to service the queues which requires a combination of admission control and more intelligent queuing disciplines. As with the service specifications themselves, the specification of queuing algorithms is beyond the scope of this document.


8.3. Mapping of Services to Link Level Priority

The number of traffic classes supported and access methods of the technology under consideration will determine how many and what services may be supported. Native Token Ring/IEEE 802.5, for instance, supports eight priority levels which may be mapped to one or more traffic classes. Ethernet/IEEE 802.3 has no support for signaling priorities within frames. However, the IEEE 802 standards committee has recently developed a new standard for bridges/switches related to multimedia traffic expediting and dynamic multicast filtering [3]. A packet format for carrying a user_priority field on all IEEE 802 LAN media types is now defined in [4]. These standards allow for up to eight traffic classes on all media. The user_priority bits carried in the frame are mapped to a particular traffic class within a bridge/switch. The user_priority is signaled on an end-to-end basis, unless overridden by bridge/switch management. The traffic class that is used by a flow should depend on the quality of service desired and whether the reservation is successful or not. Therefore, a sender should use the user_priority value which maps to the best effort traffic class until told otherwise by the BM. The BM will, upon successful completion of resource reservation, specify the value of user_priority to be used by the sender for that session's data. An accompanying memo [13] addresses the issue of mapping the various Integrated Services to appropriate traffic classes.


8.4. Re-mapping of Non-conforming Aggregated Flows

One other topic under discussion in the IntServ context is how to handle the traffic for data flows from sources that exceed their negotiated traffic contract with the network. An approach that shows some promise is to treat such traffic with "somewhat less than best effort" service in order to protect traffic that is normally given "best effort" service from having to back off. Best effort traffic is often adaptive, using TCP or other congestion control algorithms, and it would be unfair to penalize those flows due to badly behaved traffic from reserved flows which are often set up by non-adaptive applications.


A possible solution might be to assign normal best effort traffic to one user_priority and to label excess non-conforming traffic as a lower user_priority although the re-ordering problems that might arise from doing this may make this solution undesirable, particularly if the flows are using TCP. For this reason the controlled load service recommends dropping excess traffic, rather than re-mapping to a lower priority. This is further discussed below.


8.5. Override of Incoming User Priority

In some cases, a network administrator may not trust the user_priority values contained in packets from a source and may wish to map these into some more suitable set of values. Alternatively, due perhaps to equipment limitations or transition periods, the user_priority values may need to be re-mapped as the data flows to/from different regions of a network.


Some switches may implement such a function on input that maps received user_priority to some internal set of values. This function is provided by a table known in IEEE 802.1D as the User Priority Regeneration Table (Table 3-1 in [3]). These values can then be mapped using an output table described above onto outgoing user_priority values. These same mappings must also be used when applying admission control to requests that use the user_priority values (see e.g. [14]). More sophisticated approaches are also possible where a device polices traffic flows and adjusts their onward user_priority based on their conformance to the admitted traffic flow specifications.

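A User Priority Regeneration Table of the kind described above is simply an eight-entry array per input port. The values in the sketch below are an arbitrary example of an administrator collapsing priorities received from an untrusted segment; they are not the default values given in [3].

   #include <stdint.h>

   #define NUM_PORTS 8    /* assumed port count for the sketch */

   /* regen_table[port][received user_priority] = regenerated value.
    * Port 0 maps priorities 6 and 7 down to 0 because the attached
    * segment is not trusted to request the highest classes; port 1
    * passes values through unchanged.  Unlisted ports default to all
    * zeroes until configured.                                       */
   static uint8_t regen_table[NUM_PORTS][8] = {
       { 0, 1, 2, 3, 4, 5, 0, 0 },   /* port 0: untrusted edge segment */
       { 0, 1, 2, 3, 4, 5, 6, 7 },   /* port 1: identity mapping       */
   };

   uint8_t regenerate_priority(int in_port, uint8_t received_priority)
   {
       return regen_table[in_port][received_priority & 0x7];
   }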

8.6. Different Reservation Styles

In the figure below (Figure 7), SW is a bridge/switch in the link layer domain. S1, S2, S3, R1 and R2 are end stations which are members of a group associated with the same RSVP flow. S1, S2 and S3 are upstream end stations. R1 and R2 are the downstream end stations which receive traffic from all the senders. RSVP allows receivers R1 and R2 to specify reservations which can apply to: (a) one specific sender only (fixed filter); (b) any of two or more explicitly specified senders (shared explicit filter); and (c) any sender in the group (shared wildcard filter). Support for the fixed filter style is straightforward; a separate reservation is made for the traffic from each of the senders. However, support for the other two filter styles has implications regarding policing; i.e. the merged flow from the different senders must be policed so that it conforms to the traffic parameters specified in the filter's RSpec. This scenario is further complicated if the services requested by R1 and R2 are different. Therefore, in the absence of policing within bridges/switches, it may be possible to support only fixed filter reservations at the link layer.


              +-----+       +-----+       +-----+
              | S1  |       | S2  |       | S3  |
              +-----+       +-----+       +-----+
                 |             |             |
                 |             v             |
                 |          +-----+          |
                 +--------->| SW  |<---------+
                            +-----+
                             |   |
                        +----+   +----+
                        |             |
                        v             V
                     +-----+       +-----+
                     | R1  |       | R2  |
                     +-----+       +-----+
        

Figure 7: Illustration of filter styles


8.7. Receiver Heterogeneity

At Layer 3, the IntServ model allows heterogeneous receivers for multicast flows where different branches of a tree can have different types of reservations for a given multicast destination. It also supports the notion that trees may have some branches with reserved flows and some using best effort service. If we were to treat a Layer 2 subnet as a single network element as defined in [8], then all of the branches of the distribution tree that lie within the subnet could be assumed to require the same QoS treatment and be treated as an atomic unit as regards admission control, etc. With this assumption, the model and protocols already defined by IntServ and RSVP already provide sufficient support for multicast heterogeneity. Note, however, that an admission control request may well be rejected because just one link in the subnet is oversubscribed leading to rejection of the reservation request for the entire subnet.


As an example, consider Figure 8, where SW is a Layer 2 device (bridge/switch) participating in resource reservation, S is the upstream source end station, and R1 and R2 are downstream end station receivers. R1 would like to make a reservation for the flow while R2 would like to receive the flow using best effort service. S sends RSVP PATH messages which are multicast to both R1 and R2. R1 sends an RSVP RESV message to S requesting the reservation of resources.


                           +-----+
                           |  S  |
                           +-----+
                              |
                              v
              +-----+      +-----+      +-----+
              | R1  |<-----| SW  |----->| R2  |
              +-----+      +-----+      +-----+
        

Figure 8: Example of receiver heterogeneity


If the reservation is successful at Layer 2, the frames addressed to the group will be categorized in the traffic class corresponding to the service requested by R1. At SW, there must be some mechanism which forwards the packet providing service corresponding to the reserved traffic class at the interface to R1 while using the best effort traffic class at the interface to R2. This may involve changing the contents of the frame itself, or ignoring the frame priority at the interface to R2.


Another possibility for supporting heterogeneous receivers would be to have separate groups with distinct MAC addresses, one for each class of service. By default, a receiver would join the "best effort" group where the flow is classified as best effort. If the receiver makes a reservation successfully, it can be transferred to the group for the class of service desired. The dynamic multicast filtering capabilities of bridges and switches implementing the IEEE 802.1D standard would be a very useful feature in such a scenario. A given flow would be transmitted only on those segments which are on the path between the sender and the receivers of that flow. The obvious disadvantage of such an approach is that the sender needs to send out multiple copies of the same packet corresponding to each class of service desired thus potentially duplicating the traffic on a portion of the distribution tree.


The above approaches would provide very sub-optimal utilization of resources given the expected size and complexity of the Layer 2 subnets. Therefore, it is desirable to enable switches to apply QoS differently on different egress branches of a tree that divide at that switch.


IEEE 802.1D specifies a basic model for multicast whereby a switch makes multicast forwarding decisions based on the destination address. This would produce a list of output ports to which the packet should be forwarded. In its default mode, such a switch would use the user_priority value in received packets, or a value regenerated on a per input port basis in the absence of an explicit value, to enqueue the packets at each output port. Any IEEE 802.1D switch which supports multiple traffic classes can support this operation.


If a switch selects per port output queues based only on the incoming user_priority, as described by IEEE 802.1D, it must treat all branches of all multicast sessions within that user_priority class with the same queuing mechanism. Receiver heterogeneity is then not possible and this could well lead to the failure of an admission control request for the whole multicast session due to a single link being oversubscribed. Note that in the Layer 2 case as distinct from the Layer 3 case with RSVP/IntServ, the option of having some receivers getting the session with the requested QoS and some getting it best effort does not exist as basic IEEE 802.1 switches are unable to re-map the user_priority on a per link basis. This could become an issue with heavy use of dynamic multicast sessions. If a switch were to implement a separate user_priority mapping at each output port, then, in some cases, reservations can use a different traffic class on different paths that branch at such a switch in order to provide multiple receivers with different QoS. This is possible if all flows within a traffic class at the ingress to a switch egress in the same traffic class on a port. For example, traffic may be forwarded using user_priority 4 on one branch where receivers have performed admission control and as user_priority 0 on ones where they have not. We assume that per user_priority queuing without taking account of input or output ports is the minimum standard functionality for switches in a LAN environment (IEEE 802.1D) but that more functional Layer 2 or even Layer 3 switches (i.e. routers) can be used if even more flexible forms of heterogeneity are considered necessary to achieve more efficient resource utilization. The behavior of Layer 3 switches in this context is already well standardized by the IETF.


9. Network Topology Scenarios

The extent to which service guarantees can be provided by a network depends to a large degree on the ability to provide the key functions of flow identification and scheduling, in addition to admission control and policing. This section discusses some of the capabilities of the LAN technologies under consideration and provides a taxonomy of possible topologies, emphasizing the capabilities of each with regard to supporting the above functions. For the technologies considered here, the basic topology of a LAN may be shared, switched half duplex, or switched full duplex. In the shared topology, multiple senders share a single segment. Contention for media access is resolved using protocols such as CSMA/CD in Ethernet and token passing in Token Ring and FDDI. Switched half duplex is essentially a shared topology with the restriction that there are only two transmitters contending for resources on any segment. Finally, in a switched full duplex topology, a full bandwidth path is available to the transmitter at each end of the link at all times. Therefore, in this topology, there is no need for any access control mechanism such as CSMA/CD or token passing as there is no contention between the transmitters. Obviously, this topology provides the best QoS capabilities. Another important element in the discussion of topologies is the presence or absence of support for multiple traffic classes. These were discussed earlier in Section 4.1. Depending on the basic topology used and the ability to support traffic classes, we identify six scenarios as follows:

   1. Shared topology without traffic classes.
   2. Shared topology with traffic classes.
   3. Switched half duplex topology without traffic classes.
   4. Switched half duplex topology with traffic classes.
   5. Switched full duplex topology without traffic classes.
   6. Switched full duplex topology with traffic classes.

There is also the possibility of hybrid topologies where two or more of the above coexist. For instance, it is possible that within a single subnet, there are some switches which support traffic classes and some which do not. If the flow in question traverses both kinds of switches in the network, the least common denominator will prevail. In other words, as far as that flow is concerned, the network is of the type corresponding to the least capable topology that is traversed. In the following sections, we present these scenarios in further detail for some of the different IEEE 802 network types with discussion of their abilities to support the IntServ services.
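
To make the "least common denominator" rule concrete, the sketch below treats the six scenarios as a capability ranking and reduces a path to the lowest-numbered scenario it traverses. Treating the scenario numbers as a strict total ordering is an assumption made purely for illustration; the framework itself only requires that the least capable element determines the service a flow can receive.

    # Illustrative only: reduce the segments traversed by a flow to the
    # least capable scenario.  The numbers 1..6 follow the list above.
    SCENARIOS = {
        1: "shared, no traffic classes",
        2: "shared, with traffic classes",
        3: "switched half duplex, no traffic classes",
        4: "switched half duplex, with traffic classes",
        5: "switched full duplex, no traffic classes",
        6: "switched full duplex, with traffic classes",
    }

    def effective_scenario(path):
        """path: scenario numbers of the segments the flow traverses."""
        return min(path)

    # A flow crossing two full duplex switches with traffic classes and
    # one half duplex segment without them behaves like scenario 3:
    s = effective_scenario([6, 3, 6])
    print(s, "-", SCENARIOS[s])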

9.1. Full Duplex Switched Networks

On a full duplex switched LAN, the MAC protocol is unimportant as far as access is concerned, but it must be factored into the characterization parameters advertised by the device since the access latency is equal to the time required to transmit the largest packet. Approximate values for the characteristics on various media are provided in the following tables. These delays should also be considered in the context of the speed of light delay, which is approximately 400 ns for typical 100 m UTP links and 7 us for typical 2 km multimode fiber links.

Table 4: Full duplex switched media access latency

        --------------------------------------------------
        Type               Speed      Max Pkt   Max Access
                                       Length      Latency
        --------------------------------------------------
        Ethernet         10 Mbps       1.2 ms       1.2 ms
                        100 Mbps       120 us       120 us
                          1 Gbps        12 us        12 us
        Token Ring        4 Mbps         9 ms         9 ms
                         16 Mbps         9 ms         9 ms
        FDDI            100 Mbps       360 us       8.4 ms
        Demand Priority 100 Mbps       120 us       120 us
        --------------------------------------------------
        

Full duplex switched network topologies offer good QoS capabilities for both Controlled Load and Guaranteed Service when supported by suitable queuing strategies in the switches.
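
As a worked example of where the Ethernet figures in Table 4 above come from, the short calculation below derives the access latency as the time to transmit one maximum sized frame, with the propagation delays quoted earlier noted for comparison. The frame size of 1518 octets is the usual Ethernet maximum; treat the numbers as illustrative approximations.

    # Illustrative calculation: on a full duplex switched link the access
    # latency is the transmission time of one maximum sized frame.

    def access_latency_us(max_frame_octets, speed_bps):
        """Transmission time of a maximum sized frame, in microseconds."""
        return max_frame_octets * 8.0 / speed_bps * 1e6

    for name, speed in [("10 Mbps", 10e6), ("100 Mbps", 100e6),
                        ("1 Gbps", 1e9)]:
        print("Ethernet %-8s ~%7.1f us" % (name, access_latency_us(1518, speed)))
    # -> ~1214 us (~1.2 ms), ~121 us (~120 us) and ~12 us, matching Table 4.

    # Propagation delay, for comparison (values quoted in the text above):
    # ~400 ns per 100 m UTP link, ~7 us per 2 km multimode fiber link.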

9.2. Shared Media Ethernet Networks

Thus far, we have not discussed the difficulty of dealing with allocation on a single shared CSMA/CD segment. As soon as any CSMA/CD algorithm is introduced, the ability to provide any form of Guaranteed Service is seriously compromised in the absence of any tight coupling between the multiple senders on the link. There are a number of reasons for not offering a better solution to this problem.

Firstly, we do not believe this is a truly solvable problem as it would require changes to the MAC protocol. IEEE 802.1 has examined research showing disappointing simulation results for performance guarantees on shared CSMA/CD Ethernet without MAC enhancements. There have been proposals for enhancements to the MAC layer protocols, e.g. BLAM and enhanced flow control in IEEE 802.3. However, any solution involving an enhanced software MAC running above the traditional IEEE 802.3 MAC, or other proprietary MAC protocols, is outside the scope of the ISSLL working group and this document. Secondly, we are not convinced that it is really an interesting problem. While there will be end stations on shared segments for some time to come, the number of deployed switches is steadily increasing relative to the number of stations on shared segments. This trend is proceeding to the point where it may be satisfactory to have a solution which assumes that any network communication requiring resource reservations will take place through at least one switch or router. Put another way, the easiest upgrade to existing Layer 2 infrastructure for QoS support is the installation of segment switching. Only when this has been done is it worthwhile to investigate more complex solutions involving admission control. Thirdly, the core of campus networks typically consists of solutions based on switches rather than on repeated segments. There may be special circumstances in the future, e.g. Gigabit buffered repeaters, but the characteristics of these devices are different from existing CSMA/CD repeaters anyway.

Table 5: Shared Ethernet media access latency

        --------------------------------------------------
        Type             Speed        Max Pkt   Max Access
                                       Length      Latency
        --------------------------------------------------
        Ethernet       10 Mbps         1.2 ms    unbounded
                      100 Mbps         120 us    unbounded
                        1 Gbps          12 us    unbounded
        --------------------------------------------------
        
9.3. Half Duplex Switched Ethernet Networks

Many of the same arguments regarding the sub-optimal support of Guaranteed Service on shared media Ethernet also apply to half duplex switched Ethernet. In essence, this topology is a medium that is shared between at least two senders contending for packet transmission. Unless these are tightly coupled and cooperative, there is always the chance that the best effort traffic of one will interfere with the reserved traffic of the other. Dealing with such a coupling would require some form of modification to the MAC protocol.

Notwithstanding the above argument, half duplex switched topologies do seem to offer the chance to provide Controlled Load service. With the knowledge that there are exactly two potential senders, both of which prioritize their Controlled Load traffic over best effort flows, and with admission control having been performed for those flows based on that knowledge, the media access characteristics, while not deterministic, are somewhat predictable. This is probably a close enough approximation to be useful for the Controlled Load service.

Table 6: Half duplex switched Ethernet media access latency

        ------------------------------------------
        Type        Speed     Max Pkt   Max Access
                              Length       Latency
        ------------------------------------------
        Ethernet   10 Mbps     1.2 ms    unbounded
                  100 Mbps     120 us    unbounded
                    1 Gbps      12 us    unbounded
        ------------------------------------------
        
9.4. Half Duplex Switched and Shared Token Ring Networks

In a shared Token Ring network, the network access time for high priority traffic at any station is bounded and is given by (N+1)*THTmax, where N is the number of stations sending high priority traffic and THTmax is the maximum token holding time [14]. This assumes that network adapters have priority queues so that reservation of the token is done for traffic with the highest priority currently queued in the adapter. It is easy to see that access times can be improved by reducing N or THTmax. The recommended default for THTmax is 10 ms [6]. N is an integer from 2 to 256 for a shared ring and 2 for a switched half duplex topology. A similar analysis applies for FDDI.
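
As a worked example, with the recommended THTmax default of 10 ms this bound reproduces the Token Ring entries of Table 7 below: a shared ring with N = 256 gives (256 + 1) * 10 ms = 2570 ms, and a switched half duplex link with N = 2 gives (2 + 1) * 10 ms = 30 ms. A minimal sketch:

    # Worst case high priority access time on Token Ring: (N + 1) * THTmax.
    THT_MAX_MS = 10.0                      # recommended default, see text

    def max_access_ms(n_high_priority_stations, tht_max_ms=THT_MAX_MS):
        return (n_high_priority_stations + 1) * tht_max_ms

    print(max_access_ms(256))   # shared ring, N = 256       -> 2570.0 ms
    print(max_access_ms(2))     # switched half duplex, N = 2 ->  30.0 ms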

             Table 7: Half duplex switched and shared Token
                       Ring media access latency
        ----------------------------------------------------
        Type        Speed               Max Pkt   Max Access
                                         Length      Latency
        ----------------------------------------------------
        Token Ring  4/16 Mbps shared       9 ms      2570 ms
                    4/16 Mbps switched     9 ms        30 ms
        FDDI         100 Mbps            360 us         8 ms
        ----------------------------------------------------
        

Given that access time is bounded, it is possible to provide an upper bound for end-to-end delays as required by Guaranteed Service assuming that traffic of this class uses the highest priority allowable for user traffic. The actual number of stations that send traffic mapped into the same traffic class as Guaranteed Service may vary over time but, from an admission control standpoint, this value is needed a priori. The admission control entity must therefore use a fixed value for N, which may be the total number of stations on the ring or some lower value if it is desired to keep the offered delay guarantees smaller. If the value of N used is lower than the total number of stations on the ring, admission control must ensure that the number of stations sending high priority traffic never exceeds this number. This approach allows admission control to estimate worst case access delays assuming that all of the N stations are sending high priority data even though, in most cases, this will mean that delays are significantly overestimated.
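
A minimal sketch of this fixed-N style of admission control is given below. The class and its interface are purely illustrative assumptions; an actual Bandwidth Manager would of course also track bandwidth, as discussed earlier in this document.

    # Hypothetical sketch: Guaranteed Service admission control with a
    # fixed value of N on a Token Ring segment.
    class TokenRingGSAdmission:
        def __init__(self, n_fixed, tht_max_ms=10.0):
            self.n_fixed = n_fixed          # value of N assumed a priori
            self.tht_max_ms = tht_max_ms
            self.senders = set()            # admitted high priority senders

        def worst_case_access_ms(self):
            # Bound used for Guaranteed Service, independent of how many
            # of the N stations happen to be sending at any instant.
            return (self.n_fixed + 1) * self.tht_max_ms

        def admit(self, station):
            if station in self.senders:
                return True
            if len(self.senders) >= self.n_fixed:
                return False    # would exceed the N assumed by the bound
            self.senders.add(station)
            return True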

Assuming that Controlled Load flows use a traffic class lower than that used by Guaranteed Service, no upper bound on access latency can be provided for Controlled Load flows. However, Controlled Load flows will receive better service than best effort flows.

Note that on many existing shared Token Rings, bridges transmit frames using an Access Priority (see Section 4.3) value of 4 irrespective of the user_priority carried in the frame control field of the frame. Therefore, existing bridges would need to be reconfigured or modified before the above access time bounds can actually be used.

9.5. Half Duplex and Shared Demand Priority Networks

In IEEE 802.12 networks, communication between end nodes and hubs and between the hubs themselves is based on the exchange of link control signals. These signals are used to control access to the shared medium. If a hub, for example, receives a high priority request while another hub is in the process of serving normal priority requests, then the service of the latter hub can effectively be preempted in order to serve the high priority request first. After the network has processed all high priority requests, it resumes the normal priority service at the point in the network at which it was interrupted.

The network access time for high priority packets is basically the time needed to preempt normal priority network service. This access time is bounded, and it depends on the physical layer and on the topology of the shared network. The physical layer has a significant impact when operating in half duplex mode, e.g. when used across unshielded twisted pair (UTP) cabling links, because link control signals cannot be exchanged while a packet is transmitted over the link. The network topology must therefore also be considered since, in larger shared networks, the link control signals must potentially traverse several links and hubs before they can reach the hub which has the network control function. This may delay the preemption of the normal priority service and hence increase the upper bound that can be guaranteed.

Upper bounds on the high priority access time are given below for a UTP physical layer and a cable length of 100 m between all end nodes and hubs, using a maximum propagation delay of 570 ns as defined in [19]. These values consider the worst case signaling overhead and assume the transmission of maximum sized normal priority data packets while the normal priority service is being preempted.

Table 8: Half duplex switched Demand Priority UTP access latency

        ------------------------------------------------------------
        Type            Speed                    Max Pkt  Max Access
                                                  Length     Latency
        ------------------------------------------------------------
        Demand Priority 100 Mbps, 802.3 pkt, UTP  120 us      254 us
                                  802.5 pkt, UTP  360 us      733 us
        ------------------------------------------------------------
        

Shared IEEE 802.12 topologies can be classified using the hub cascading level "N". The simplest topology is the single hub network (N = 1). For a UTP physical layer, a maximum cascading level of N = 5 is supported by the standard. Large shared networks with many hundreds of nodes may be built with a level 2 topology. The bandwidth manager could be informed about the actual cascading level by network management mechanisms and can use this information in its admission control algorithms.
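
The sketch below illustrates one way a Bandwidth Manager might use the cascading level, once learned through management, to select the worst case access latency it assumes during admission control. The lookup table simply restates the UTP figures from Table 9 below; the function and its interface are illustrative assumptions, not part of any specified protocol.

    # (packet format, cascading level N) -> max access latency in
    # microseconds, copied from the UTP figures in Table 9.
    UTP_ACCESS_LATENCY_US = {
        ("802.3", 1): 262,  ("802.3", 2): 554,  ("802.3", 3): 878,
        ("802.3", 4): 1240, ("802.3", 5): 1630,
        ("802.5", 1): 722,  ("802.5", 2): 1410, ("802.5", 3): 2320,
        ("802.5", 4): 3160, ("802.5", 5): 4030,
    }

    def access_latency_bound_us(pkt_format, cascading_level):
        """Worst case high priority access latency assumed for the segment."""
        return UTP_ACCESS_LATENCY_US[(pkt_format, cascading_level)]

    print(access_latency_bound_us("802.3", 2))   # level 2 topology -> 554 us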

In contrast to UTP, the fiber optic physical layer operates in dual simplex mode. Upper bounds for the high priority access time are given below for 2 km multimode fiber links with a propagation delay of 10 us.

For shared media with distances of up to 2 km between all end nodes and hubs, the IEEE 802.12 standard allows a maximum cascading level of 2. Higher levels of cascaded topologies are supported but require a reduction of the distances [15].

The bounded access delay and deterministic network access allow the support of service commitments required for Guaranteed Service and Controlled Load, even on shared media topologies. The support of just two priority levels in 802.12, however, limits the number of services that can simultaneously be implemented across the network.

Table 9: Shared Demand Priority UTP access latency

     ----------------------------------------------------------------
     Type            Speed              Max Pkt  Max Access  Topology
                                         Length     Latency
     ----------------------------------------------------------------
     Demand Priority 100 Mbps, 802.3 pkt 120 us      262 us     N = 1
                                         120 us      554 us     N = 2
                                         120 us      878 us     N = 3
                                         120 us     1.24 ms     N = 4
                                         120 us     1.63 ms     N = 5
        
     Demand Priority 100 Mbps, 802.5 pkt 360 us      722 us     N = 1
                                         360 us     1.41 ms     N = 2
                                         360 us     2.32 ms     N = 3
                                         360 us     3.16 ms     N = 4
                                         360 us     4.03 ms     N = 5
     -----------------------------------------------------------------
        
             Table 10: Half duplex switched Demand Priority
                          fiber access latency
     -------------------------------------------------------------
     Type            Speed                     Max Pkt  Max Access
                                                Length     Latency
     -------------------------------------------------------------
     Demand Priority 100 Mbps, 802.3 pkt, fiber 120 us      139 us
                               802.5 pkt, fiber 360 us      379 us
     -------------------------------------------------------------
        

Table 11: Shared Demand Priority fiber access latency

     ---------------------------------------------------------------
     Type            Speed              Max Pkt  Max Access Topology
                                         Length    Latency
     ---------------------------------------------------------------
     Demand Priority 100 Mbps, 802.3 pkt 120 us     160 us     N = 1
                                         120 us     202 us     N = 2
        
     Demand Priority 100 Mbps, 802.5 pkt 360 us     400 us     N = 1
                                         360 us     682 us     N = 2
     ---------------------------------------------------------------
        
10. Justification

An obvious concern is the complexity of this model. It essentially does what RSVP already does at Layer 3, so why do we think we can do better by reinventing the solution to this problem at Layer 2?

The key is that there are a number of simple Layer 2 scenarios that cover a considerable portion of the real QoS problems that will occur. A solution that covers the majority of problems at significantly lower cost is beneficial. Full RSVP/IntServ with per flow queuing in strategically positioned high function switches or routers may be needed to completely resolve all issues, but devices implementing the architecture described herein will allow for a significantly simpler network.

11. Summary

This document has specified a framework for providing Integrated Services over shared and switched LAN technologies. The ability to provide QoS guarantees necessitates some form of admission control and resource management. The requirements and goals of a resource management scheme for subnets have been identified and discussed. We refer to the entire resource management scheme as a Bandwidth Manager. Architectural considerations were discussed and examples were provided to illustrate possible implementations of a Bandwidth Manager. Some of the issues involved in mapping the services from higher layers to the link layer have also been discussed. Accompanying memos from the ISSLL working group address service mapping issues [13] and provide a protocol specification for the Bandwidth Manager protocol [14] based on the requirements and goals discussed in this document.

References

[1] IEEE Standards for Local and Metropolitan Area Networks: Overview and Architecture, ANSI/IEEE Std 802, 1990.

[2] ISO/IEC 10038 Information technology - Telecommunications and information exchange between systems - Local area networks - Media Access Control (MAC) Bridges, (also ANSI/IEEE Std 802.1D-1993), 1993.

[3] ISO/IEC 15802-3 Information technology - Telecommunications and information exchange between systems - Local and metropolitan area networks - Common specifications - Part 3: Media Access Control (MAC) bridges (also ANSI/IEEE Std 802.1D-1998), 1998.

[4] IEEE Standards for Local and Metropolitan Area Networks: Virtual Bridged Local Area Networks, IEEE Std 802.1Q-1998, 1998.

[5] Braden, B., Zhang, L., Berson, S., Herzog, S. and S. Jamin, "Resource Reservation Protocol (RSVP) - Version 1 Functional Specification", RFC 2205, September 1997.

[6] Wroclawski, J., "Specification of the Controlled Load Network Element Service", RFC 2211, September 1997.

[7] Shenker, S., Partridge, C. and R. Guerin, "Specification of Guaranteed Quality of Service", RFC 2212, September 1997.

[8] Braden, R., Clark, D. and S. Shenker, "Integrated Services in the Internet Architecture: An Overview", RFC 1633, June 1994.

[9] Wroclawski, J., "The Use of RSVP with IETF Integrated Services", RFC 2210, September 1997.

[10] Shenker, S. and J. Wroclawski, "Network Element Service Specification Template", RFC 2216, September 1997.

[11] Shenker, S. and J. Wroclawski, "General Characterization Parameters for Integrated Service Network Elements", RFC 2215, September 1997.

[12] Delgrossi, L. and L. Berger (Editors), "Internet Stream Protocol Version 2 (ST2) Protocol Specification - Version ST2+", RFC 1819, August 1995.

[13] Seaman, M., Smith, A. and E. Crawley, "Integrated Service Mappings on IEEE 802 Networks", RFC 2815, May 2000.

[14] Yavatkar, R., Hoffman, D., Bernet, Y. and F. Baker, "SBM (Subnet Bandwidth Manager): A Protocol for RSVP-based Admission Control Over IEEE 802-style Networks", RFC 2814, May 2000.

[15] ISO/IEC 8802-3 Information technology - Telecommunications and information exchange between systems - Local and metropolitan area networks - Common specifications - Part 3: Carrier Sense Multiple Access with Collision Detection (CSMA/CD) Access Method and Physical Layer Specifications, (also ANSI/IEEE Std 802.3- 1996), 1996.

[16] ISO/IEC 8802-5 Information technology - Telecommunications and information exchange between systems - Local and metropolitan area networks - Common specifications - Part 5: Token Ring Access Method and Physical Layer Specifications, (also ANSI/IEEE Std 802.5-1995), 1995.

[17] Postel, J. and J. Reynolds, "A Standard for the Transmission of IP Datagrams over IEEE 802 Networks", STD 43, RFC 1042, February 1988.

[18] Bisdikian, C., Patel, B. V., Schaffa, F. and M. Willebeek-LeMair, "The Use of Priorities on Token Ring Networks for Multimedia Traffic", IEEE Network, Nov/Dec 1995.

[19] IEEE Standards for Local and Metropolitan Area Networks: Demand Priority Access Method, Physical Layer and Repeater Specification for 100 Mb/s Operation, IEEE Std 802.12-1995.

[20] Fiber Distributed Data Interface MAC, ANSI Std. X3.139-1987.

[21] ISO/IEC 15802-3 Information technology - Telecommunications and information exchange between systems - Local and metropolitan area networks - Specific requirements - Supplement to Carrier Sense Multiple Access with Collision Detection (CSMA/CD) Access Method and Physical Layer Specifications - Frame Extensions for Virtual Bridged Local Area Network (VLAN) Tagging on 802.3 Networks, IEEE Std 802.3ac-1998 (Supplement to IEEE 802.3 1998 Edition), 1998.

Security Considerations

Implementation of the model described in this memo creates no known new avenues for malicious attack on the network infrastructure. However, readers are referred to Section 2.8 of the RSVP specification [5] for a discussion of the impact of the use of admission control signaling protocols on network security.

Acknowledgements

Much of the work presented in this document has benefited greatly from discussion held at the meetings of the Integrated Services over Specific Link Layers (ISSLL) working group. We would like to acknowledge contributions from the many participants via discussion at these meetings and on the mailing list. We would especially like to thank Eric Crawley, Don Hoffman and Raj Yavatkar for contributions via previous Internet drafts, and Peter Kim for contributing the text about Demand Priority networks.

Authors' Addresses

   Anoop Ghanwani
   Nortel Networks
   600 Technology Park Dr
   Billerica, MA 01821, USA

   Phone: +1-978-288-4514
   EMail: aghanwan@nortelnetworks.com
        

   Wayne Pace
   IBM Corporation
   P. O. Box 12195
   Research Triangle Park, NC 27709, USA

   Phone: +1-919-254-4930
   EMail: pacew@us.ibm.com
        

   Vijay Srinivasan
   CoSine Communications
   1200 Bridge Parkway
   Redwood City, CA 94065, USA

   Phone: +1-650-628-4892
   EMail: vijay@cosinecom.com
        

   Andrew Smith
   Extreme Networks
   3585 Monroe St
   Santa Clara, CA 95051, USA

   Phone: +1-408-579-2821
   EMail: andrew@extremenetworks.com
        

   Mick Seaman
   Telseon
   480 S. California Ave
   Palo Alto, CA 94306 USA

   Email: mick@telseon.com
        

Full Copyright Statement

Copyright (C) The Internet Society (2000). All Rights Reserved.

This document and translations of it may be copied and furnished to others, and derivative works that comment on or otherwise explain it or assist in its implementation may be prepared, copied, published and distributed, in whole or in part, without restriction of any kind, provided that the above copyright notice and this paragraph are included on all such copies and derivative works. However, this document itself may not be modified in any way, such as by removing the copyright notice or references to the Internet Society or other Internet organizations, except as needed for the purpose of developing Internet standards in which case the procedures for copyrights defined in the Internet Standards process must be followed, or as required to translate it into languages other than English.

The limited permissions granted above are perpetual and will not be revoked by the Internet Society or its successors or assigns.

This document and the information contained herein is provided on an "AS IS" basis and THE INTERNET SOCIETY AND THE INTERNET ENGINEERING TASK FORCE DISCLAIMS ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Acknowledgement

Funding for the RFC Editor function is currently provided by the Internet Society.
