bitpie | ethernet audio video bridging

Author: bitpie
2024-03-14 20:02:44

Introduction to AVB Basics - Zhihu

Introduction to AVB Basics (Zhihu column by 积木家智能科技, whole-vehicle electrical/electronic architecture design)

1 Overview

This article introduces Audio Video Bridging (AVB), a technology for in-vehicle multimedia transport. AVB runs on top of automotive Ethernet and is a common way of handling audio and video streams. Consider a typical vehicle network architecture: Ethernet acts as the backbone, connecting the vehicle to the outside world (for example via OBD) as well as linking the different in-vehicle domains, and its use will keep expanding; in this architecture AVB is mainly used for communication between the head unit and the audio/video nodes. (Figure: vehicle network architecture, from the internet)

AVB's defining property is deterministic latency: a transmission-timing policy must guarantee that every audio and video stream arrives at the far end at a precise time. This matters for time-sensitive communication, such as the audio stream between a microphone and the player in the head unit. Mapped onto the ISO/OSI seven-layer reference model, AVB sits as shown in the figure. (Figure: AVB in the OSI reference model, from the internet)

AVB is a family of protocols (the green and orange boxes in the figure). Below it sit the Ethernet physical layer (not shown) and part of the Ethernet MAC layer; above it sits the audio/video transport protocol IEEE 1722 (AVTP). AVTP encapsulates the audio and video streams, while the AVB system protocols provide the infrastructure that guarantees deterministic delivery of AVTP streams. (Figure: AVTP packet format, from the internet)

The main protocols in the AVB family are:
1. AVB systems, IEEE 802.1BA, which defines what makes up an AVB system.
2. Stream reservation, IEEE 802.1Qat, which defines how much bandwidth the communication path through the AVB network reserves for a Talker/Listener pair.
3. Time synchronization, IEEE 802.1AS, which keeps all AVB nodes on a single time base; when several Listeners must play the same audio, time synchronization keeps them aligned.
4. Traffic shaping in switches, IEEE 802.1Qav, which ensures that once bandwidth has been reserved, frames travelling from Talker to Listener never exceed the maximum allowed latency.

An AVB network consists of end nodes and AVB switches. An end node (the white boxes in the figure) may act as a Talker, as a Listener, or as both at the same time; end nodes are the sources and sinks of time-sensitive streams. An AVB switch (the AVB Bridge in the figure) provides normal vehicle switching plus forwarding of time-sensitive streams.

2 The AVB protocols in detail

1. Systems protocol. 802.1BA defines default configurations (protocol selection, configuration parameters and procedures) so that an AVB network can be created quickly. The protocol is not yet mature and has not seen wide adoption.

2. Stream reservation. To manage the resources reserved along a communication path and guarantee quality of service (QoS) on the LAN, the core of 802.1Qat is the Multiple Stream Registration Protocol, which builds on the Multiple Registration Protocol used to propagate attributes through a LAN. An endpoint registers whether it wants to talk or to listen to a particular stream. In the figure, the red arrows are the Talker Advertise message, which the talker broadcasts to ask every endpoint in the network; when an endpoint wants to listen, the green arrows show that the responder is ready to listen through the AVB cloud, which is made up of AVB switches (bridges). Once a listener wants to receive a stream, the reserved resources and the desired QoS parameters must be guaranteed.

In the next figure, an AVB network consists of many AVB nodes and bridges. When the node at the top left wants to offer a stream to the rest of the network, it acts as the Talker and broadcasts a Talker Advertise message announcing that it has data to send (figure "Talker Advertise"). The end node at the bottom left that wants the stream acts as the Listener and replies with a unicast Listener Ready message telling the Talker it is ready to receive (figure "Listener Ready"). If the required bandwidth is available, every switch and end node along the entire path of that stream locks the bandwidth resource. (Figures: Talker Advertise, Listener Ready)

3. Time synchronization. The goal of 802.1AS is a common time base, so that several players can play the same audio sample at the same instant. It covers two things: selecting the best clock in the network, and distributing that clock's time. In a vehicle the AVB system is simple and the nodes are fixed, so the grandmaster is statically defined rather than elected dynamically. How does the grandmaster distribute its time? Synchronization information flows from the grandmaster to the other devices in the clock tree, sharing the grandmaster's clock.

Consider an AVB system with four devices. The device on the left is the grandmaster (GM). The GM is directly connected to a time-aware system and sends a Sync message at time 20; after a link delay of 2 the Sync is received by system 2. The GM then sends a second message, a Follow-Up, with three fields: the first is the transmit time of the Sync, 20; the second is the delay of the Sync relative to the grandmaster, here 0; the third is the rate ratio, the ratio of the grandmaster's clock rate to the local clock rate, here 1 because the local clock is the grandmaster itself.

After a residence time of 5 the Sync is forwarded to system 3. What transmit time does the Sync sent by system 2 carry? That value is conveyed in system 2's Follow-Up so that system 3 can compute the time: starting from the grandmaster's 20, add the link delay 2 and the residence time 5. Those two values were measured with system 2's clock, whose rate ratio is 1.01, so they are multiplied by 1.01 to convert them to grandmaster time, giving a Sync time of 27.07 for the message sent by system 2. In system 2's Follow-Up the first field is still the grandmaster origin time 20, the second field is the accumulated delay relative to grandmaster time, 7.07, and the third field carries system 2's rate ratio, 1.01, so that system 3 can compute its own rate ratio.

System 3 behaves exactly like system 2: after sending its Sync it also sends a Follow-Up whose first field is still the grandmaster origin time 20, whose second field is system 3's accumulated delay relative to grandmaster time, and whose third field carries the clock rate ratio.

4. Traffic shaping. 802.1Qav is implemented in switches to avoid packet pile-ups and to keep each link within its maximum allowed transmission latency. The example figure shows two ports, an ingress port and an egress port. The switch has two main jobs: assigning frames to queues, and selecting frames from the queues for transmission. In front of the egress port there are three queues for three traffic types: stream class A, with the strictest timing requirements, mapped to queue 2; stream class B, with less strict timing requirements, mapped to queue 1; and best-effort traffic with no timing requirements, mapped to queue 0. The traffic type of an incoming frame is determined by the 3-bit priority in its VLAN tag; priorities map one-to-one to traffic types and queues, so a class-B frame is placed in queue 1.

Transmission selection prefers frames from the higher queues; only when queues 1 and 2 have no AVB frames to send does strict priority transmit non-AVB frames from queue 0. The selection strategies are therefore strict priority for queue 0 and credit-based traffic shaping for queues 1 and 2. The credit-based shaper works as follows. Suppose an orange non-AVB frame is being transmitted when several AVB frames arrive in queue 1: they cannot interrupt the frame already in flight. In addition, an AVB frame may only be transmitted when the credit is greater than zero, so the green AVB frame must wait until the credit is positive. Transmitting the green frame drives the credit negative, so even though more frames are waiting, they must wait for the credit to recover before the red AVB frame can be sent. The remaining red, yellow and blue AVB frames follow the same rule: transmission is strictly governed by credit.
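As a quick numeric check of the 802.1AS example above, the sketch below recomputes system 2's Follow-Up fields from the quoted values (the function and field names are illustrative, not taken from the standard):

```python
# Minimal sketch (not from the article) of how a time-aware bridge builds the
# Follow-Up correction described above. The numbers reuse the worked example:
# grandmaster Sync sent at t=20, link delay 2, residence time 5, local rate
# ratio 1.01 relative to the grandmaster.

def follow_up_fields(origin_ts, upstream_correction, upstream_rate_ratio,
                     link_delay, residence_time, local_rate_ratio):
    """Return (origin timestamp, accumulated correction, cumulative rate ratio)
    that this bridge would place in its own Follow-Up."""
    # Convert the locally measured delays into grandmaster time.
    cumulative_rate_ratio = upstream_rate_ratio * local_rate_ratio
    correction = upstream_correction + (link_delay + residence_time) * cumulative_rate_ratio
    return origin_ts, correction, cumulative_rate_ratio

origin, corr, ratio = follow_up_fields(
    origin_ts=20, upstream_correction=0, upstream_rate_ratio=1.0,
    link_delay=2, residence_time=5, local_rate_ratio=1.01)

print(origin + corr)   # about 27.07: the Sync time system 3 should assume
print(corr)            # about 7.07:  second Follow-Up field
print(ratio)           # 1.01:        third Follow-Up field
```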

A Brief Look at the Digital Audio/Video Transport Network AVB - Tencent Cloud Developer Community

This article covers: digitization of audio signals, how Ethernet transmits data, what digital audio signals require of Ethernet, quality of service (QoS), how traditional Ethernet carries real-time streams, and Ethernet Audio Video Bridging (AVB).

In a traditional sound system the audio signal is analog: its amplitude varies continuously with time. Processing, storing and transmitting an analog audio signal all introduce noise and distortion, and each additional copy adds more, so the signal quality keeps degrading. Digital audio solves these problems. Digital audio technology converts the analog signal into pulses of constant amplitude, with all of the signal's information carried in the pulse-code modulation (PCM); the noise and distortion introduced by processing equipment stay completely separate from the digital information. Digital audio therefore copies without degradation, resists interference, has a wide dynamic range, can be transmitted over long distances, and can be monitored remotely. Today digital audio can also be carried over network transport, with many audio channels multiplexed onto one line, which greatly reduces transmission cost and simplifies the cabling.

Digitizing the audio signal. Converting an analog signal into a digital one takes several steps (figure 1): sample the analog signal, remove the high-frequency distortion produced by sampling with a low-pass filter, round the sampled values to integers (quantization), and finally encode them in binary to produce the digital signal. (Figure 1: digitization of an audio signal)

Sampling takes the instantaneous amplitude of the signal at fixed time intervals; the number of samples taken per second is the sampling frequency. For CD the sampling frequency is 44.1 kHz, i.e. the analog signal is measured 44,100 times per second; after sampling, the signal becomes a dense series of points (figure 2b). The higher the sampling frequency, the denser the points and the more accurately the signal is captured.

(Figure 2a: spectrum of the original analog signal. Figure 2b: spectrum after sampling.) After sampling, the signal contains not only the original spectrum but also additional high-frequency images centred on nfu (n a positive integer) and symmetric about it, with the same shape as the original spectrum. A low-pass filter (LPF) removes these extra spectra and recovers the original one.

According to the Nyquist sampling theorem, if the sampling frequency fs is at least twice the highest frequency fu of the sampled signal, the original signal can be recovered without distortion through a low-pass filter. If fs < 2fu, part of the high-frequency distortion produced by sampling overlaps the original spectrum (aliasing, figure 3), and no low-pass filter can separate the two. (Figure 3: aliasing distortion)

The sampling frequency fs must therefore be more than twice the highest frequency in the original signal so that the new spectra do not overlap the original. The upper limit of human hearing is 20 kHz, so the sampling frequency should be at least 40 kHz. Because a low-pass filter has a finite transition band and attenuates the signal gradually, in practice fs = (2.1 to 2.5) fu is used to suppress high-frequency distortion; the CD rate of 44.1 kHz is 2.205 times 20 kHz.

The sampled amplitudes are not integers and vary randomly. They are rounded to values that can be expressed as binary numbers; this step is quantization, and its unit is the bit (figure 4, sampling and quantization). A sample of 6.4 is quantized to the integer 6, and a sample of 3.6 is quantized to 4. (Figure 4: the three steps of A/D conversion)

Arranging the quantized binary values in time order into a serial pulse sequence is encoding. Because digital circuits are built on two switch states, on and off (1 and 0), binary coding greatly simplifies digital processing and is used throughout digital technology.
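As a tiny illustration of the sample/quantize/encode steps just described (the 4-bit word length is an arbitrary choice for the sketch; 6.4 and 3.6 are the sample values used above):

```python
# Illustrative sketch of the quantize -> encode steps (hypothetical helper,
# not code from the article).

def quantize(sample, bits):
    """Round a sample to the nearest integer level representable in `bits` bits."""
    levels = 2 ** bits                        # e.g. 3 bits -> 8 levels, 16 bits -> 65536
    level = max(0, min(levels - 1, round(sample)))
    return level, format(level, f"0{bits}b")  # integer level and its binary code

print(quantize(6.4, 4))   # (6, '0110')  -> 6.4 rounds to level 6
print(quantize(3.6, 4))   # (4, '0100')  -> 3.6 rounds to level 4
```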

(Figure 5: quantization error versus number of quantization bits.) The more quantization levels there are, the smaller the quantization error and the better the sound quality: 3 bits gives 2^3 levels, 6 bits gives 2^6 levels. Because audio has a wide dynamic range and needs a high signal-to-noise ratio, a relatively large word length is used, typically 16 bits and often 20 to 24 bits.

How Ethernet transmits. Ethernet, created in 1980, lets interconnected devices exchange data; thanks to its low cost, high speed and reliability it is now used everywhere, carrying e-mail, pictures, audio, video and more. Ethernet uses CSMA/CD (carrier-sense multiple access with collision detection), a contention-based medium-access control protocol. It works like this: before sending, a station listens to the channel; if the channel is idle it transmits immediately, and if the channel is busy it waits until the current transmission ends. If, after a transmission ends, two or more stations request to send at the same time, a collision occurs; on detecting a collision a station stops transmitting, waits a random time, and tries again. This is "best effort" delivery: frames are forwarded first-in, first-out (FIFO) with no classification, so when frames arrive faster than the port can send them they simply queue and leave in arrival order. CSMA/CD is simple in principle and easy to implement, treats all stations as equals, needs no central control, and provides no priority mechanism.

"Bandwidth" in Ethernet means the amount of data that can be transferred per unit of time (usually one second), i.e. the data rate from one end to the other. The basic unit of digital information is the bit and of time the second, so bandwidth is measured in bit/s (also written bps), with 1 bit/s as the base unit. Communicating at 1 bit/s would be extremely slow; a 56k dial-up modem over a phone line provides 56,000 bit/s, ADSL broadband ranges from 512 kbit/s to 100 Mbit/s, and modern Ethernet easily exceeds 100 Mbit/s (1 Mbit/s = 1,000 x 1,000 bit/s = 1,000,000 bit/s).

Take gigabit Ethernet (1 Gbit/s) as an example. If a switch port runs at 1 Gbit/s, i.e. 1,000,000,000 bit/s, it can move 10^9 bits per second, so one bit occupies 1 / 10^9 s = 1 ns. As long as successive bits are spaced more than 1 ns apart, no conflict occurs (figure 6). (Figure 6)

Ethernet does not transmit data bit by bit, however, but in frames. As figure 7 shows, a frame carries at least 46 bytes of data, so the smallest Ethernet frame is 72 bytes; with the maximum of 1500 bytes of data, the largest Ethernet frame is 1526 bytes.

Typical Ethernet frame formats (figure 7: Ethernet frame structure; field lengths in bytes):

a) Typical Ethernet (DIX) frame: Preamble 8 | Destination address 6 | Source address 6 | Type 2 | Data 46-1500 | Frame check sequence 4
b) IEEE 802.3 (original): Preamble 7 | Start-of-frame delimiter 1 | Destination address 6 | Source address 6 | Length 2 | Data 46-1500 | Frame check sequence 4
c) IEEE 802.3 (1997 revision): Preamble 7 | Start-of-frame delimiter 1 | Destination address 6 | Source address 6 | Length/Type 2 | Data 46-1500 | Frame check sequence 4

After receiving a frame, network devices and components need a short time to recover and prepare for the next one, so adjacent frames are separated by an inter-frame gap (IFG). The minimum IFG is 12 bytes (figure 8).

(Figure 8.) If these two frames travel on gigabit Ethernet (1 Gbit/s), then as long as the gap between them is more than 96 ns no collision occurs.

As bandwidth grew, gigabit Ethernet modified the frame sizes of classic Ethernet. With carrier extension, the minimum frame is stretched to 512 bytes: any frame shorter than 512 bytes is padded with special characters (0F). When many short frames must be sent, extending each one to 512 bytes wastes a great deal of capacity, so frame bursting was added: the first short frame uses carrier extension, and once it has been sent successfully the following short frames are transmitted back to back up to 1500 bytes. During this time the line stays busy, so no other station can seize the channel.

How does traditional Ethernet carry real-time streams (audio and video)? Ethernet uses RTP (Real-time Transport Protocol) to give data an end-to-end service with real-time characteristics. RTP itself neither guarantees delivery nor prevents out-of-order delivery, so putting the data back in order requires buffering, and buffering introduces a new problem: delay. That is why songs and films on the web buffer before they play; in professional audio and video transport that buffering delay is unacceptable.

What digital audio demands of Ethernet. Take CD again: 44.1 kHz sampling, 16-bit quantization. Each sampling period is 1 / 44,100 s, roughly 22.7 μs. Sound must be continuous, so the gap between transmitting successive samples must not exceed 22.7 μs. On gigabit Ethernet the minimum gap between frames is only 96 ns, far less than the required 22.7 μs, so continuous audio can be carried comfortably at that bandwidth. At 1 Mbit/s, however, one bit takes 1 / 1,000,000 s = 1 μs and the gap between frames becomes 96 μs, and CD audio transmitted this way would stutter.
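A short back-of-the-envelope check of the timing figures quoted above (assuming the 12-byte inter-frame gap; purely illustrative):

```python
# Back-of-the-envelope check of the timing numbers quoted above.

SAMPLE_PERIOD_CD = 1 / 44_100            # ~22.7e-6 s between CD samples

def ifg_seconds(link_bps, ifg_bytes=12):
    """Duration of the 12-byte inter-frame gap on a link of `link_bps`."""
    return ifg_bytes * 8 / link_bps

print(ifg_seconds(1_000_000_000))        # 9.6e-08 s  (96 ns on 1 Gbit/s)
print(ifg_seconds(1_000_000))            # 9.6e-05 s  (96 us on 1 Mbit/s)
print(SAMPLE_PERIOD_CD)                  # ~2.27e-05 s; 96 us > 22.7 us -> stutter
```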

From these two examples it is clear that, as long as the network is fast enough, i.e. has enough bandwidth, digital audio can be carried over it without trouble. In most cases, though, the bandwidth is shared by many devices: we do not send just one audio stream, but several, plus e-mail, web pages, pictures and other data, and none of the senders applies time-based flow control, so every sender always transmits as fast as it can. Streams from different devices then overlap in time, producing the collisions described earlier, which inevitably disturbs the audio. To improve this transmission behaviour and raise the efficiency of part of the traffic, Ethernet forwards according to QoS priorities, which protects a portion of the data.

What is QoS? QoS (Quality of Service) covers guaranteeing transmission bandwidth, reducing transmission delay, reducing packet loss, and reducing delay jitter. By capability, QoS models fall into the following categories:

1. The DiffServ (Differentiated Services) model classifies traffic according to its service requirements, marks each class with a priority, and then serves the classes differently: higher-priority data is forwarded first, while lower-priority data is buffered at the port and forwarded once no higher-priority data remains (figure 9). AVB defines two traffic classes, Class A and Class B; Class A streams use priority 5 and Class B streams use priority 4, so of the two, Class A data is forwarded first.

(Figure 10.) 2. The IntServ (Integrated Services) model requires a node to request a resource reservation from the network before sending data, so that the network can meet the flow's specific service requirements. It offers two services: guaranteed service, which provides guaranteed delay and bandwidth to meet the application's requirements, and controlled-load service, which provides data with service similar to that of an unloaded network even when the network is overloaded.

A large amount of data can reach a port in an instant, and providing guaranteed-service QoS for every single flow quickly becomes unmanageable, so IntServ is hard to deploy on its own in a large network and needs to be combined with traffic shaping.

Traffic shaping. Traffic shaping exists to avoid dropping data on the Ethernet; it is usually implemented with the leaky-bucket algorithm for shaping or rate limiting. Its main purpose is to control the rate at which data is injected into the network and to smooth bursts. (Figure 11: traffic shaping)

The leaky bucket provides a mechanism by which bursty traffic can be shaped into a steady flow for the network. Conceptually, arriving data is placed into a bucket with a hole in the bottom (a data buffer), and data leaks out of the bucket into the network at a constant rate, smoothing the bursty traffic (figure 12).

(Figure 12: the leaky-bucket algorithm.) The host emits one packet per time interval, producing a uniform data flow and smoothing the bursts. AVB defines two traffic classes, A and B; the Class A interval is 125 μs and the Class B interval is 250 μs. Class A streams have tighter latency requirements and a shorter observation interval, which means their packets are smaller and are transmitted more frequently.
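The leaky-bucket idea can be sketched in a few lines (the drain rate, bucket depth and burst sizes are made-up numbers; real AVB shaping uses the credit-based shaper described later):

```python
# Minimal leaky-bucket sketch for the shaping idea described above (illustrative only).

def leaky_bucket(arrivals, drain_per_tick, depth):
    """arrivals[i] = bytes arriving at tick i; returns bytes sent and dropped per tick."""
    bucket, sent, dropped = 0, [], []
    for burst in arrivals:
        accepted = min(burst, depth - bucket)   # anything over the bucket depth is dropped
        dropped.append(burst - accepted)
        bucket += accepted
        out = min(bucket, drain_per_tick)       # constant drain rate smooths the bursts
        bucket -= out
        sent.append(out)
    return sent, dropped

sent, dropped = leaky_bucket([500, 0, 0, 800, 0, 0], drain_per_tick=200, depth=1000)
print(sent)     # [200, 200, 100, 200, 200, 200] -> steady output despite bursty input
print(dropped)  # [0, 0, 0, 0, 0, 0]             -> nothing overflowed in this example
```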

(Figure 13.) When every packet has the same size, sending one packet per time interval works without any problem; for variable-length data this scheme can cause trouble, and in that case it is better to send a fixed number of bytes per interval.

AVB has two stream formats: AM824 and AAF. AM824 supports 24-bit audio, IEC 60958 audio coding (S/PDIF and AES3), SMPTE timecode and MIDI. On the sending side AM824 offers three options: non-blocking (synchronous), non-blocking (asynchronous) and blocking.

AM824 (non-blocking, synchronous) is what a typical AVB audio device uses for transmission: one frame is sent per observation interval and every Ethernet frame carries the same number of samples, 6 samples per frame at 48 kHz sampling and 12 per frame at 96 kHz. AM824 (non-blocking, asynchronous packetization) is used when the packetizer and the transmitter run on unsynchronized observation intervals, so an occasional Ethernet frame carries one or more extra samples; devices whose packetizer handles several clock domains typically use this format. Because it can send those occasional larger frames, it must reserve enough bandwidth for 7 samples per frame at 48 kHz and 13 at 96 kHz; Apple Macs use this mode. AM824 (blocking) is the mode used by some FireWire devices because it is easier to pack and unpack: 8 samples per frame at 48 kHz and 16 at 96 kHz.
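The samples-per-frame figures quoted for the three AM824 modes follow from the 8000 packets-per-second Class A interval; a small sketch (the block-of-8 padding rule for blocking mode is inferred from the 8/16 figures above and is an assumption of the sketch):

```python
# Samples per Ethernet frame for the AM824 packetization modes listed above
# (Class A observation interval of 8000 packets/s; blocking mode is modelled
# as padding to blocks of 8 samples, matching the quoted 8/16 figures).

import math

PACKETS_PER_SECOND = 8000   # one frame per 125 us Class A interval

def am824_samples_per_frame(sample_rate_hz):
    sync = sample_rate_hz // PACKETS_PER_SECOND   # non-blocking, synchronous
    async_worst = sync + 1                        # non-blocking, async (worst case)
    blocking = math.ceil(sync / 8) * 8            # blocking: padded to blocks of 8
    return sync, async_worst, blocking

print(am824_samples_per_frame(48_000))   # (6, 7, 8)
print(am824_samples_per_frame(96_000))   # (12, 13, 16)
```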

AAF is a newer packing format defined in IEEE P1722a. It has lower overhead than AM824; it requires every frame in a stream to have the same size and format, allows 16-, 24- and 32-bit quantization, and lets the number of samples per frame be chosen. The size and format of every frame are always the same.

(Figure 14.) Figure 14 shows how several typical AVB streams behave on 10-gigabit Ethernet (10 Gbit/s). For example, a 48 kHz, 32-bit stereo audio stream actually needs a bandwidth of roughly 3 Mbit/s. With the Class A transmission interval it sends 8000 groups of data per second (1 / 8000 s = 125 μs), each group consisting of at most 80 frames. If every frame were transmitted at the maximum size, 1526 bytes (figure 7) plus the 12-byte inter-frame gap, i.e. 1538 bytes or 12,304 bits, then 80 frames per group would amount to 12,304 x 80 = 984,320 bits and 8000 groups per second to 984,320 x 8000 = 7,874,560,000 bit/s, about 7.87 Gbit/s. On a 10 Gbit/s link, then, with 75% of the bandwidth reserved for AVB streams, 952 such data streams can be carried.

AVB works full duplex. The amount of data per frame depends on the data type and on the time interval, and figure 14 shows that the number of bytes occupied by different data types is not a single fixed value. Traffic shaping fixes the transmission interval and frame size of the real-time streams (audio and video); when traditional asynchronous Ethernet traffic (e-mail, web pages and so on) enters the network, will it disturb the real-time streams?

802.1Qav: Queuing and Forwarding Protocol (Qav). Qav's job is to make sure that traditional asynchronous Ethernet traffic cannot interfere with AVB's real-time streams. An AVB switch classifies the traffic it receives into different forwarding queues and reassigns priorities, with real-time audio/video stream data given the highest priority. Two scheduling algorithms are needed to avoid conflicts: the credit-based shaper transmission-selection algorithm (CBS) and strict priority selection. The various kinds of ordinary data are scheduled by strict priority; when they conflict with stream data, the CBS algorithm is invoked. Time-sensitive real-time streams are forwarded in a pseudo-synchronous mode, a mechanism that relies on the 8 kHz clock provided by the precision time protocol (PTP): every 125 μs (1 / 8000 s) the isochronous Ethernet frames carrying AVB data are forwarded. Ordinary asynchronous traffic continues to be served, but only once the isochronous frames are guaranteed; this is Qav's prioritisation and traffic shaping. When data crosses several switches, different paths lead to different transmission times even at the same bandwidth, so how do we guarantee the same timing interval across the whole network? A clock-synchronization mechanism is needed that locks every device in the network to the same clock and so sharpens the precision of AVB traffic shaping.

802.1AS: Precision Time Protocol (PTP). The purpose of clock synchronization is to maintain a globally consistent physical or logical clock, aligning the clocks spread around the network so that the system's information and events have one globally consistent interpretation. IEEE 802.1AS works peer to peer: time synchronization takes place only between adjacent devices, and a device synchronizes its ports out of band without going through its internal switching fabric. It defines grandmaster selection and negotiation, path-delay measurement and compensation, and clock-frequency matching and adjustment, and can be used to generate clocks and to recover a networked audio/video system. PTP defines an automatic way to negotiate the network's master clock, the Best Master Clock Algorithm (BMCA), which provides the underlying negotiation and signalling used to identify the grandmaster of the AVB LAN. The heart of IEEE 802.1AS is timestamping: when PTP messages enter or leave an 802.1AS-capable port, the local real-time clock (RTC) is sampled as the protocol requires, the local RTC value is compared with the information received from the master on that port, and path-delay measurement and compensation are used to match the local RTC to the time of the PTP domain. Once PTP synchronization covers the whole AVB LAN, the network nodes exchange periodic PTP messages to run precise real-time clock adjustment and frequency matching, and eventually every PTP node is synchronized to the same "wall clock" time, the grandmaster's time. The message exchange proceeds as follows:

1. The Master sends a Sync message and records its local transmit time t1.

2. The Slave receives the Sync message and records its local receive time t2.

3. The Master has two ways to tell the Slave the Sync transmit time t1:

1) embed t1 in the Sync message itself, which requires hardware support to achieve high precision; or

2) send it in a subsequent Follow_Up message.

4. The Slave sends a Delay_Req message to the Master and records its transmit time t3.

5. The Master receives the Delay_Req and records its arrival time t4.

6. The Master sends a Delay_Resp message to tell the Slave t4.

Using these four timestamps, the clock offset between Master and Slave can be computed, provided that the link is symmetric, i.e. the transmit and receive delays are equal. The formulas are:

offset = ((t2 - t1) - (t4 - t3)) / 2

one_way_delay = ((t2 - t1) + (t4 - t3)) / 2
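A small illustrative computation of these two formulas with made-up timestamps (t1..t4 are not from the article):

```python
# Offset / one-way-delay computation for the exchange above, with invented timestamps.

def ptp_offset_and_delay(t1, t2, t3, t4):
    """t1: Sync sent (master), t2: Sync received (slave),
    t3: Delay_Req sent (slave), t4: Delay_Req received (master)."""
    offset = ((t2 - t1) - (t4 - t3)) / 2         # slave clock minus master clock
    one_way_delay = ((t2 - t1) + (t4 - t3)) / 2  # assumes a symmetric link
    return offset, one_way_delay

# Example: master-to-slave delay is 3 units, slave runs 10 units ahead of master.
offset, delay = ptp_offset_and_delay(t1=100, t2=113, t3=200, t4=193)
print(offset, delay)   # 10.0 3.0
```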

(Figure 15.) In a network of at most 7 hops, PTP can in theory keep the clock-synchronization error within 1 μs. Chaining switches in series affects the symmetry of the delays and lowers the synchronization precision, so a symmetric link layout is recommended when building an AVB network.

Once the grandmaster has been chosen, every PTP device in the LAN uses it as the reference; if the grandmaster changes, the AVB network uses the BMCA to determine a new grandmaster in the shortest possible time and keep the whole network time-synchronized. The protocol strictly guarantees synchronous delivery of real-time streams over Ethernet media whose delay is fixed or symmetric. It covers maintaining the time-synchronization mechanism during normal operation and when network components are added, removed or reconfigured or the network fails, giving Ethernet a low-latency, low-jitter clock and thereby high-quality bandwidth and fast service delivery. With timing taken care of, insufficient bandwidth would still fill the leaky bucket quickly and cause overflow; if audio or video data overflows and is dropped, the sound stutters or parts of the sound or picture are lost, so enough bandwidth must also be guaranteed for the real-time audio/video streams.

802.1Qat: Stream Reservation Protocol (SRP). To provide guaranteed QoS, the stream reservation protocol secures end-to-end bandwidth availability between the devices of a real-time stream. If the required bandwidth is available along the path, every device on it (switches and end devices) locks that resource. An SRP-compliant switch can devote up to 75% of the network's available bandwidth to AVB links, leaving the remaining 25% for traditional Ethernet traffic. In SRP the provider of a stream service is called the Talker and the receiver is called the Listener; a stream provided by one Talker can be received by several Listeners at once, and SRP only guarantees one-way data flow from Talker to Listener. As soon as the bandwidth on any path from the Talker to one of the Listeners can be negotiated and locked, the Talker can begin providing the real-time stream. SRP's internal periodic state machines maintain the Talker and Listener registration information, dynamically monitor node state and update the internal registration database, adapting to changes in network topology; Talkers and Listeners alike can join or leave the AVB network at any time without causing irrecoverable damage to the overall function or state of the network.

1722: Audio/Video Bridging Transport Protocol (AVBTP). AVBTP defines the layer-2 data format needed to provide real-time streaming inside a LAN, together with the protocols for setting up, controlling and closing real-time streams. It builds a low-latency virtual link between physically separate audio/video codecs. Compressed and uncompressed raw audio and video streams are packetized by AVBTP (filled in with the stream ID reserved by SRP, a timestamp produced by PTP, the media type and related information), multicast in AVBTP's dedicated Ethernet frame type from the stream provider (Talker), forwarded by the AVB switches, and then received, depacketized, decoded and output by the receivers (Listeners) registered for that real-time stream. AVBTP sends such a frame every 125 μs, always as a stream of the same size; each stream can consist of 1 to 60 channels, and up to 64 streams are supported.

AVB, Ethernet Audio Video Bridging, is a protocol suite for real-time audio/video transport built on a new Ethernet architecture, which the IEEE 802.1 task group began drafting in 2005. Besides the protocols described above it also includes:

802.1BA: Audio Video Bridging Systems. The AVB systems standard defines a set of presets and defaults to be applied when manufacturing AVB-compatible devices, so that users without networking experience can set up and use an AVB network without elaborate configuration.

1733: Real-Time Transport Protocol (RTP). RTP is a layer-3 protocol over UDP/IP; to exploit layer-2 AVB performance from IP-based layer-3 applications, IEEE 1733 extends RTP to provide time synchronization, latency guarantees and bandwidth reservation across bridged and routed LANs for real-time media streams, covering packet formats and the setup, control, synchronization and closing of streams.

1722.1: device discovery, enumeration, connection management and mutual control between 1722-based devices, used for discovering AVB devices, enumeration, connection management, firmware upgrade and so on.

TSN (Time-Sensitive Networking). In November 2012 the IEEE 802.1 task group formally renamed AVB to TSN, Time-Sensitive Networking; in other words, AVB is just one application of TSN. TSN is also applied in automotive control, consumer electronics, and industrial domains that need real-time monitoring or real-time feedback. More information about TSN is available from the AVnu Alliance at http://avnu.org/

Audio Video Bridging - Wikipedia


Specifications for synchronized, low-latency streaming through IEEE 802 networks

AVB (AVnu certification mark)
Manufacturer: IEEE, AVnu
Development date: September 2011
Network compatibility: Switchable: yes; Routable: no; Ethernet data rates: agnostic
Audio specifications: Minimum latency: 2 ms (maximum)[1]; Maximum channels per link: 256; Maximum sampling rate: 192 kHz[2]; Maximum bit depth: 32-bit floating point[2] (clause 8.3)

Audio Video Bridging (AVB) is a common name for the set of technical standards which provide improved synchronization, low-latency, and reliability for switched Ethernet networks.[3] AVB embodies the following technologies and standards:

IEEE 802.1AS-2011: Timing and Synchronization for Time-Sensitive Applications (gPTP);

IEEE 802.1Qav-2009: Forwarding and Queuing for Time-Sensitive Streams (FQTSS);

IEEE 802.1Qat-2010: Stream Reservation Protocol (SRP);

IEEE 802.1BA-2011:[4] Audio Video Bridging (AVB) Systems;

IEEE 1722-2011 Layer 2 Transport Protocol for Time-Sensitive Applications (AV Transport Protocol, AVTP); and

IEEE 1722.1-2013 Device Discovery, Enumeration, Connection Management and Control Protocol (AVDECC).

The IEEE 802.1Qat and 802.1Qav amendments have been incorporated into the base IEEE 802.1Q-2011 document, which specifies the operation of Media Access Control (MAC) Bridges and Virtual Bridged Local Area Networks.

AVB was initially developed by the Institute of Electrical and Electronics Engineers (IEEE) Audio Video Bridging task group of the IEEE 802.1 standards committee. In November 2012, Audio Video Bridging task group was renamed to Time-Sensitive Networking task group to reflect the expanded scope of its work, which is to "provide the specifications that will allow time-synchronized low latency streaming services through IEEE 802 networks".[5] Further standardization efforts are ongoing in IEEE 802.1 TSN task group.

To help ensure interoperability between devices that implement the AVB and TSN standards, the AVnu Alliance develops device certification for the automotive, consumer, and professional audio and video markets.[6]

Background

Analog audio video (AV) equipment historically used one-way, single-purpose, point-to-point connections. Even digital AV standards, such as S/PDIF for audio and the serial digital interface (SDI) for video, retain these properties. This connection model results in large masses of cables, especially in professional applications and high-end audio.[7]

Wiring of a patch bay of an outside broadcasting van

Attempts to solve these problems were based on multi-point network topologies, such as IEEE 1394 (FireWire), and included adaptation of standard switched computer network technologies such as Audio over Ethernet and Audio over IP. Professional, home, and automotive AV solutions came to use specialized protocols that do not interoperate between each other or standard IT protocols, while standard computer networks did not provide tight quality of service with strict timing and predictable or bounded latency.[7]

To overcome these limitations, Audio Video Bridging networks transmit multiple audiovisual streams through standard Ethernet switches (i.e. MAC bridges) connected in a hierarchical tree topology. AVB includes layer 2 protocols to reserve connection bandwidth and prioritise network traffic, which guarantee precise sync clock and low transmission latency for each stream.[7]

Tight sync between multiple AV streams is needed for lip sync between video and related audio streams, to keep multiple digitally connected speakers in phase in a professional environment (which requires 1 μs precision), and to prevent audio or video packets from arriving late to the endpoint, resulting in a dropped frame of video and unwanted audio glitches such as a pop or silence. Worst-case delay, including source and destination buffering, is required to be low and deterministic: the user-interface delay shall be around 50 ms, so that the pressing of a button and the resulting action are perceived as happening instantly, and 2 ms for live performance or studio work.[7]

Summary

Figure 2 - AVB Connections

Audio Video Bridging is implemented as a switched Ethernet network which works by reserving a fraction of the available Ethernet for AV traffic. There are three primary differences introduced by the AVB architecture:

Precise synchronization using Generalized Precision Time Protocol (gPTP) profile (IEEE 802.1AS),

Traffic shaping for AV streams using frame priorities (IEEE 802.1Qav) and VLAN tags (IEEE 802.1Q), and

Admission controls with Stream Reservation Protocol (IEEE 802.1Qat).

The IEEE 802.1BA is an umbrella standard for these three principal technologies, which defines application-specific configurations and operation procedures for devices in switched audio video networks.

The new layer-2 configuration protocols work with backward-compatible extensions to the Ethernet 802.1 frame format; such minimal changes allow AVB devices to coexist and communicate in standard IT networks. However, only AVB-capable switches and endpoints can reserve network resources with admission control and synchronize local time to a master clock, which is required for low-latency, time-sensitive traffic.

AVB traffic is replicated in a multicast manner, with one talker (stream initiator) and multiple listeners. AVB packets are sent at regular intervals in the allocated time slots, preventing collisions for AV traffic.

AVB guarantees a latency of 2 ms for Class A traffic and 50 ms for Class B traffic over a maximum of 7 hops, with a transmission period of 125 μs for Class A and 250 μs for Class B traffic.
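As a rough illustration of what those class measurement intervals imply for a talker, assuming one packet per observation interval (the sample rate and helper names are only for the sketch):

```python
# Packets per second and audio samples per packet implied by the class intervals
# above, assuming one packet per observation interval (illustrative only).

CLASS_INTERVALS_US = {"A": 125, "B": 250}

def packets_per_second(traffic_class):
    return 1_000_000 // CLASS_INTERVALS_US[traffic_class]

def samples_per_packet(traffic_class, sample_rate_hz=48_000):
    return sample_rate_hz // packets_per_second(traffic_class)

print(packets_per_second("A"), samples_per_packet("A"))   # 8000 packets/s, 6 samples
print(packets_per_second("B"), samples_per_packet("B"))   # 4000 packets/s, 12 samples
```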

An IEEE 802.1AS network timing domain includes all devices that communicate using the gPTP protocol. The grandmaster is a device chosen as the reference clock; the 802.1BA specification requires every talker and network bridge to be grandmaster capable.

802.3 link management and 802.1AS link delay measurement protocols calculate the round-trip delay to the AVB endpoint; this needs to be better than worst-case wire delay from the 802.1AS peer delay algorithm.

Higher-level protocols may use 802.1AS clock information to set the exact presentation time for each AV stream.
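A sketch of that idea follows; the helper names and the reuse of the 2 ms Class A bound are assumptions for illustration, not an API defined by the standards:

```python
# Hypothetical sketch of a talker stamping a Class A packet's presentation time
# from the shared gPTP clock. The 2 ms transit bound is the Class A worst case
# quoted earlier; read_gptp_clock_ns() is a stand-in for a real 802.1AS clock
# source, not an actual library call.

import time

CLASS_A_MAX_TRANSIT_NS = 2_000_000        # 2 ms, worst case over 7 hops

def read_gptp_clock_ns():
    """Placeholder: a real implementation would read the 802.1AS-synchronized clock."""
    return time.monotonic_ns()

def presentation_time_ns():
    # Every listener renders the samples when its own gPTP clock reaches this
    # value, so all endpoints play the same packet at the same instant.
    return read_gptp_clock_ns() + CLASS_A_MAX_TRANSIT_NS

print(presentation_time_ns())
```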

AV transport and configuration

IEEE 1722 AVTP

See also: Audio over Ethernet

IEEE Std 1722-2011[8] for a Layer 2 Audio Video Transport Protocol (AVTP) defines details for transmitting IEEE 1394/IEC 61883 streams and other AV formats, setting the presentation time for each AV stream, and managing latencies based on the worst-case delay calculated by the gPTP protocol.

IEEE 1722.1 AVDECC

IEEE Std 1722.1-2013[9] is a standard which allows AVB Discovery, Enumeration, Connection management and Control (AVDECC) of devices using IEEE Std 1722-2011. AVDECC defines operations to discover device addition and removal, retrieve device entity model, connect and disconnect streams, manage device and connection status, and remote control devices.

Interoperability

Higher-layer services can improve the synchronisation and latency of media transmission by mapping the AVB Stream ID to internal stream identifiers and by basing internal timestamps on the gPTP master clock.

IEEE 1733

IEEE Std 1733-2011[10] defines a Layer 3 protocol profile for Real-time Transport Protocol (RTP) applications with an RTCP payload format, which assigns the Stream ID from SRP to the RTP Synchronization source identifier (SSRC) and correlates RTP timestamps for presentation time with the 802.1AS gPTP master clock.

AES67

AES67 is based on standard RTP over UDP/IP and IEEE 1588 Precision Time Protocol (PTPv2) for timing; interoperability with AVB/TSN can be achieved by linking IEEE 802.1AS timing information to AES67 PTPv2 payload data.[11][12][13][14]

AES67 implementation with AVB interoperability has been demoed at InfoComm 2016.[15][16]

Milan

In 2018, the Avnu Alliance announced the Milan initiative to promote interoperability of AVB devices and provide product certification and testing.[17]

The specification requires media clocking based on the AVTP CRF (Clock Reference Format) and sample rate of 48 kHz (optionally 96 and 192 kHz); audio stream format is based on AVTP IEC 61883-6 32-bit Standard AAF Audio Format with 1 to 8 audio channels per stream (optionally, 24- and 32-bit High Capacity Format with 56 and 64 channels). Redundancy is provided with two independent logical networks for every endpoint and a seamless switchover mechanism.[17]

DetNet


The IETF Deterministic Networking (DetNet) Working Group is working to define deterministic data paths with bounds on latency, loss, and packet delay variation (jitter), and high reliability. DetNet shall operate over both Layer 2 bridged segments and Layer 3 routed segments, relying on interoperability with AVB/TSN switches when possible.[18]

One of the possible applications of DetNet is professional audio/video, such as music and film production, broadcast, cinema, live sound, and large-venue (stadiums, halls, conference centers, theme parks, airports, train terminals, etc.) systems for public address, media streaming and emergency announcements. The stated goal is to enable a geographically distributed, campus- or enterprise-wide intranet for content delivery with bounded low latency (10-15 ms). A single network shall handle both A/V and IT traffic, with Layer 3 routing on top of AVB QoS networks to enable sharing content between Layer 2 AVB segments, and provide IntServ and DiffServ integration with AVB where possible. Unused reserved bandwidth shall be released for best-effort traffic. The protocol stack shall have plug-and-play capabilities from top to bottom to reduce manual setup and administration and to allow quick changes of network devices and network topology.[19]

Large-scale AVB networks, like those employed by ESPN SportsCenter "Digital Center 2" broadcast facility which hosts multiple individual studios, are laid with a thousand miles of fiber and have ten Tbps of bandwidth for a hundred thousand signals transmitted simultaneously; in the absence of standards-based solution to interconnect individual AVB segments, a custom software-defined networking router is required.[20][21]

Standardization

The work on A/V streaming started at the IEEE 802.3re 'Residential Ethernet' study group in July 2004.[22] In November 2005, it was moved to the IEEE 802.1 committee responsible for cross-network bridging standards.[23]

Audio Video Bridging standards suite (Standard | Title | Status | Publication date)

Audio Video Bridging (AVB) specifications:
IEEE 802.1BA-2011 | Audio Video Bridging (AVB) Systems | Superseded by IEEE 802.1BA-2021 | 30 September 2011
IEEE 802.1Qav-2009 | Forwarding and Queuing Enhancements for Time-Sensitive Streams (FQTSS) | Incorporated into IEEE 802.1Q-2011 Clause 34 | 5 January 2010
IEEE 802.1Qat-2010 | Stream Reservation Protocol (SRP) | Incorporated into IEEE 802.1Q-2011 Clause 35 | 30 September 2010
IEEE 802.1Q-2011 | Media Access Control (MAC) Bridges and Virtual Bridged Local Area Networks (incorporates the IEEE 802.1Qav and 802.1Qat amendments) | Superseded by IEEE 802.1Q-2014/2018/2022 | 31 August 2011
IEEE 802.1AS-2011 | Timing and Synchronization for Time-Sensitive Applications in Bridged Local Area Networks (gPTP) | Superseded by IEEE 802.1AS-2020 | 30 March 2011

Time-Sensitive Networking (TSN) specifications:
IEEE 802.1AS-2020 | Timing and Synchronization for Time-Sensitive Applications (gPTP) | Current,[24][25] amended by 802.1AS-2020/Cor1-2021[26] | 30 January 2020
IEEE 802.1BA-2021 | TSN profile for Audio Video Bridging (AVB) Systems | Current[27] | 12 December 2021
IEEE 802.1Q-2022 | Bridges and Bridged Networks | Current[28] | 22 December 2022

Audio Video Transport Protocol (AVTP) and AVDECC specifications:
IEEE 1733-2011 | Layer 3 Transport Protocol for Time-Sensitive Applications in Local Area Networks (RTP) | Current | 25 April 2011
IEEE 1722-2011 | Layer 2 Transport Protocol for Time-Sensitive Applications in a Bridged Local Area Network (AVTP) | Superseded by IEEE 1722-2016 | 6 May 2011
IEEE 1722-2016 | Layer 2 Transport Protocol for Time-Sensitive Applications in a Bridged Local Area Network (AVTP) | Current | 16 December 2016
IEEE P1722b | AVTP Amendment: New and Extended Streaming Formats | In preparation | -
IEEE 1722.1-2013 | Device Discovery, Enumeration, Connection Management and Control Protocol (AVDECC) | Current | 23 August 2013

References

^ "An Introduction to AVB Networking". PreSonus. Retrieved December 2, 2020.

^ a b IEC 61883-6

^ Kreifeldt, Rick (30 July 2009). "AVB for Professional A/V Use" (PDF). AVnu Alliance White Paper.

^ "IEEE 802.1: 802.1BA - Audio Video Bridging (AVB) Systems". www.ieee802.org. Retrieved 2019-10-21.

^ "IEEE 802.1 AV Bridging Task Group". ieee802.org. Retrieved 2019-10-21.

^ "AVnu Alliance". Official website. Retrieved September 27, 2011.

^ a b c d Michael Johas Teener; et al. "No-excuses Audio/Video Networking: the Technology Behind AVnu" (PDF). Avnu Alliance. Archived from the original (PDF) on 2014-04-05.

^ "IEEE 1722-2011 - IEEE Standard for Layer 2 Transport Protocol for Time-Sensitive Applications in a Bridged Local Area Network". standards.ieee.org. Retrieved 2019-10-21.

^ "IEEE 1722.1-2013 - IEEE Standard for Device Discovery, Connection Management, and Control Protocol for IEEE 1722(TM) Based Devices". standards.ieee.org. Retrieved 2019-10-21.

^ "IEEE 1733-2011 - IEEE Standard for Layer 3 Transport Protocol for Time-Sensitive Applications in Local Area Networks". standards.ieee.org. Retrieved 2019-10-21.

^ AES67-2018 – Annex D (Informative) – Interfacing to IEEE 802.1AS clock domains

^ AES67-2018 – Annex C (Informative) – AVB network transport

^ Geoffrey M. Garner, Michel Ouellette and Michael Johas Teener (2012-09-27). "Using an IEEE 802.1AS Network as a Distributed IEEE 1588 Boundary, Ordinary, or Transparent Clock". 2010 International IEEE Symposium on Precision Clock Synchronization for Measurement Control and Communication (ISPCS) (IEEE)

^ Amaya, Nestor (March 2016). "AES67 FOR AUDIO PRODUCTION: BACKGROUND, APPLICATIONS AND CHALLENGES" (PDF). smpte.org. Retrieved 2019-10-21.

^ Joao Martins (2016-06-16). "AVB/TSN Momentum and AES67/AVB Harmony at InfoComm 2016". Retrieved 2016-12-08.

^ "BACH ST2110 AES67 Audio Networking Modules, Chips, and Software | OEM Developer Solutions". Ross Video. Retrieved 2019-10-21.

^ a b "Milan | A User-Driven Network Protocol for Professional Media". avnu.org. Retrieved 2019-10-21.

^ "Deterministic Networking (detnet) - Documents". datatracker.ietf.org. Retrieved 2019-10-21.

^ Grossman, Ethan (November 11, 2018). "DetNet Use Cases Overview" (PDF). ieee802.org. Retrieved 2019-10-21.

^ "ESPN Digital Center Ethernet AVB Case Study: Part 1". Digital Design Corporation. 2017-11-10. Retrieved 2019-10-21.

^ Daley, Dan (10 June 2014). "ESPN's DC2 Scales AVB Large". Sports Video Group. Retrieved 2019-10-21.

^ Richard Brand; et al. (July 14, 2004). "Residential Ethernet: IEEE 802.3 Call for Interest" (PDF). IEEE 802.3 standards committee. Retrieved September 27, 2011.

^ "IEEE 802.3 Residential Ethernet Study Group". Official web site. IEEE 802.3 standards committee. January 10, 2006. Retrieved September 27, 2011.

^ IEEE 802.1AS-2020 - IEEE Standard for Local and Metropolitan Area Networks--Timing and Synchronization for Time-Sensitive Applications, IEEE, retrieved 2021-01-26

^ "P802.1AS-2020 – Timing and Synchronization for Time-Sensitive Applications". 1.ieee802.org. Retrieved 2019-10-21.

^ "IEEE 802.1AS-2020 Local and Metropolitan Area Networks - Timing and Synchronization for Time-Sensitive Applications - Corrigendum 1: Technical and Editorial Correction". standards.ieee.org. IEEE.

^ "IEEE 802.1BA-2021 - IEEE Standard for Local and Metropolitan Area Networks--Audio Video Bridging (AVB) Systems".

^ IEEE 802.1Q-2022 - IEEE Standard for Local and Metropolitan Area Networks—Bridges and Bridged Networks, IEEE, retrieved 2021-01-26

External links

Time-Sensitive Networking task group

802.1 Audio/Video Bridging task group (Archived)

IEEE 1722 Layer 2 transport protocol working group for time-sensitive streams

IEEE 1722.1 working group for Device Discovery, Enumeration, Connection Management and Control Protocol for P1722 based devices

IEEE 1733 AVB layer 3 transport working group

Networking and interoperability in AVB-capable devices, by William Gravelle, UNH-IOL

AV Bridging and Ethernet AV – AVB overview presentation

Forum for discussion about AVB


AVB - Baidu Baike

AVB (Ethernet Audio Video Bridging), an internet term: Ethernet audio/video bridging technology. Chinese name: 以太网音视频桥接技术; foreign name: Ethernet Audio/Video Bridging; also known as Ethernet AVB; field: networked audio/video, IEEE 802 standards. Ethernet Audio/Video Bridging ("Ethernet AVB", hereafter AVB) is a new IEEE 802 standard that, on top of traditional Ethernet, supports audio- and video-based networked multimedia applications by guaranteeing bandwidth, bounding latency, and providing precise clock synchronization. AVB focuses on improving the real-time audio/video performance of traditional Ethernet while remaining 100% backward compatible with it, making it a highly promising next-generation technology for real-time networked audio/video transport.

Ethernet AVB (Ethernet Audio Video Bridging) - Zhihu

Ethernet AVB (by RebeccaLi, Zhihu column 蓝葡庄园). Ethernet Audio/Video Bridging is a charter of the IEEE Audio/Video Bridging Task Group; the specifications it produces allow time-synchronized, low-latency streaming services over 802 networks. According to the latest AVB status on the IEEE website (2013-03-20), the specific goals include:

- A layer-2 time-synchronization service suitable for the most demanding consumer-electronics applications. This is done jointly with the IEEE 1588 working group, so the point-to-point 802.3 sublayer of 802.1AS-2011 is a specific profile of IEEE Std 1588-2008. The original version was published as "IEEE Std 802.1AS-2011: IEEE Standard for Local and Metropolitan Area Networks - Timing and Synchronization for Time-Sensitive Applications in Bridged Local Area Networks" and is currently being revised as "IEEE P802.1ASbt".
- An admission-control system that allows bridges to guarantee the resources an AV stream needs. The original version was published as "IEEE Std 802.1Qat-2010: IEEE Standard for Local and Metropolitan Area Networks - Virtual Bridged Local Area Networks - Amendment 11: Stream Reservation Protocol (SRP)". Work is under way on providing multiple paths for redundancy and increased network throughput.
- Enhancements to the standard 802.1 bridge frame-forwarding rules to support AV streams. The original version was published as "IEEE Std 802.1Qav-2010: IEEE Standard for Local and Metropolitan Area Networks - Virtual Bridged Local Area Networks - Amendment 12: Forwarding and Queuing Enhancements for Time-Sensitive Streams". Two new projects aim at even lower network latency: frame preemption (project P802.1Qbu), which lets a very time-sensitive packet interrupt a normal packet already being transmitted on an egress port and resume that packet as soon as the time-sensitive one has been sent; and scheduled traffic enhancements (project P802.1Qbv), which define how bridges and end stations schedule frame transmissions based on timing derived from IEEE Std 802.1AS.
- A set of profiles for specific use cases, to help ensure interoperability between networked devices that use the AV bridging specifications. The original version was published as "IEEE Std 802.1BA-2011: Audio Video Bridging (AVB) Systems"; a project to revise this standard will start once the other new AVB projects are complete.

The original AVB work was done as part of the 802.3 "Residential Ethernet" study group. As stated on that page, the Audio/Video Bridging Task Group was renamed the "Time-Sensitive Networking Task Group" in November 2012.

Reference: IEEE 802.1 AV Bridging Task Group


Ethernet AVB Overview and Status | SMPTE Conference Publication | IEEE Xplore


Understanding Audio Video Bridging | Electronic Design

Jan. 29, 2014

Peter Hedinger, Andrew Lucas, Henk Muller

Audio Video Bridging over Ethernet (AVB) is a set of IEEE standards for transporting audio and other real-time content over Ethernet. More than 20 silicon manufacturers, audio and infotainment companies, and networking vendors have adopted these standards.

AVB often is purported to only serve large-scale applications, such as music venues. However, it also is excellently suited to small-scale applications such as consumer audio, audio conferencing, and in-car entertainment. Daisy-chained AVB fits these applications because it avoids the need for switches without reducing the system's capacity.


AVB In A Nutshell

From a high-level perspective, AVB works by reserving a fraction of the available Ethernet bandwidth for AVB traffic. AVB packets are sent regularly in the allocated slots. As the bandwidth is reserved, there will be no collisions.

All of the nodes in the system share a virtual clock. AVB packets have a presentation time that defines when the media packet should be played out. (AVB packets can include all sorts of time-sensitive data. In this article, we concern ourselves only with audio.)


So, a system may comprise a host node that is delivering data (the talker) and two nodes that comprise the left and right speakers (the listeners). As all three nodes share a single global clock, the left and the right speaker will produce sound synchronously.

The Stream Reservation Protocol

The magic behind AVB is that it splits traffic on the network into two groups: real-time traffic and the rest. All real-time traffic is transmitted on an 8-kHz beat, and the rest is scheduled around it. Every 125 µs, all real-time streams send their data. Other packets are transmitted when no more real-time data is available, holding up the other traffic (Fig. 1).

1. Various traffic-shaping scenarios are possible. For example, 20% can be reserved for AVB. IP and other legacy traffic then can be shaped around the AVB slots (top). Or, 75% can be reserved for AVB. Legacy traffic would be delayed or dropped if it couldn't be shaped around the AVB slots.

To ensure there is sufficient room available for all real-time traffic, a protocol is used to allocate bandwidth. Figure 2 shows a system comprising two switches and four nodes. Nodes A and D reserve a stream between them (say, 45 Mbits/s). Nodes B and C reserve another stream (say, 20 Mbits/s).

2. In a system comprising two switches and four nodes, nodes A and D would reserve a stream between them, while nodes B and C would reserve another stream.

All switches in between those nodes will make sure that sufficient bandwidth is available: 65 Mbits/s will be reserved between switches X and Y since both the traffic from A to D and B to C will travel over this link. If this happens to be a 100-Mbit/s link, then only 35 Mbits/s will be available for other traffic, such as Web traffic or configuration messages. If a large Web page is requested at D from A, then packets may be dropped at X.

Using allocated bandwidth enables AVB to send data from endpoint to endpoint within a 2-ms window. AVB allows for a maximum of seven hops to meet this constraint, where each hop adds at most 125-µs delay. This means that a node can transmit audio requesting that it be played 2 ms in the future, and all samples will arrive in time to be played out at the right time.

The protocol for allocating bandwidth is called the Stream Reservation Protocol (SRP, IEEE 802.1Qat). It forms a fundamental building block of the AVB standard. All nodes in the system (switches and endpoints) must implement SRP and shape traffic by sending real-time traffic at the 8-kHz beat. If one of the nodes were a legacy switch, then it would not treat real-time traffic preferentially, potentially delaying the real-time traffic and causing jitter in the output.

The Precision Time Protocol

All audio traffic in AVB is synchronized to a global clock so audio producers and consumers can play and record sound synchronously. The Precision Time Protocol (PTP), already commonly used in networked computers such as laptops and servers to provide a synchronised clock, implements the AVB clock.

PTP assumes that all nodes have a reasonably good clock such as a crystal clock, preferably of a known accuracy like 25 ppm, equivalent to 2 seconds per day. Specified in IEEE standard 802.1AS, PTP is a second building block of AVB.

PTP nodes that are connected using an Ethernet cable send regular messages to each other, reporting the time and calculating the skew between their respective clocks. The node with the most accurate clock is picked as a "master" node.
All other nodes estimate their skew relative to the master clock, enabling them to compute a local clock that is closely kept in sync with the master clock.

Synchronising the clocks over the network comes at a price. Suppose that a node has an unstable clock (for example, because it is temperature sensitive) and its frequency is changing rapidly. This node will observe that its frequency is changing relative to the master clock. It can gently adjust the local clock to match the new frequency, but this will temporarily cause a phase difference between the master and the local clock. Alternatively, the frequency can be adjusted faster, but this creates a higher-frequency jitter in the clock signal. For audio, one typically allows for a small temporary phase drift, keeping the jitter at very low frequencies.

Streams, Channels, Talkers, And Listeners

AVB is built around streams of data. If the data is audio, a stream comprises multiple channels such as stereo audio. Each AVB packet includes 125 µs worth of samples for all channels that are part of the stream. Streams are produced by talkers, which are the nodes that produce audio. A microphone or a laptop playing MP3 files would be a talker. Listeners can subscribe to a stream. A speaker is an example listener that will typically pick a single channel out of a stream and play it out.

A typical system may comprise, for example:
• A single talker such as a DVD player, with six listeners, for 5.1 surround sound
• Multiple talkers such as a group of microphones, with a set of speakers, for conferencing
• A few dozen microphones, a few dozen speakers, and a massive mixing desk for a music venue

There are no rules on how small or large an AVB system should be, but there are practical limits. AVB streams have a sizeable overhead, limiting the number of streams that an Ethernet cable can carry. A 100-Mbit Ethernet cable can carry nine stereo AVB streams for a total of 18 channels or a single AVB stream with 45 channels.

A discovery protocol, IEEE 1722.1, is used to enumerate, discover, and control attached devices and their capabilities. This protocol is detached from the actual delivery of data. Hosts use it purely to configure the system.

Daisy Chaining

Compared to other mechanisms of digital audio distribution such as USB audio, AVB may appear expensive because of the need for AVB-aware switches. Daisy-chained AVB solves the cost issue. It uses an AVB endpoint with two Ethernet ports (we call them A and B) and a built-in "switch," which really isn't a fully fledged switch.

In a typical layout, a laptop is connected to node 1, which is connected to node 2, which is connected to node 3, where the network ends (Fig. 3). Each node comprises two ports that are symmetrical and logic that connects the ports. If only one port is plugged in, the node acts as an ordinary AVB endpoint.

3. AVB-enabled laptops with an Ethernet port can plug into a daisy chain comprising speakers, microphones, and other devices.

However, if both ports are plugged in, the node mostly acts as a bridge across the two ports. All traffic is passed through as normal. The node itself will tap into any AVB streams that are passing through the device. Occasionally, the node will consume or produce a packet, such as when it's responding to any of the SRP, PTP, or configuration protocols.

The node, then, needs very little in terms of switching capacity. Data that comes in on port A will go to B unless it is destined for the local node.
Traffic that comes in on port B will go to port A unless it was destined for the local node. Occasional packets may be generated locally, and the node must have knowledge as to whether these packets should go to A or B. The software that bridges A and B has to be AVB aware, and it has to participate in, for example, clock synchronisation.

Note that neither routing tables nor buffers are required, and no operating system is needed to implement something that simple. This means that cost-wise, a daisy-chained AVB endpoint is little more than the cost of a normal AVB endpoint plus an extra Ethernet physical layer (PHY) and jack. There are limitations to this approach, though.

First, unlike a switch, a daisy-chained network requires traffic destined for the tail to travel through the whole daisy chain. In a switch with seven nodes, all seven nodes can in theory receive 100 Mbits/s of traffic. In a daisy-chained system, that would require the head of the chain to transport 700 Mbits/s. But in an AVB system, most traffic is multicast audio traffic, and very little traffic is destined to specific nodes. So where the nodes on the chain listen to the same stream, there is little extra traffic in a daisy chain.

Second, the AVB standard does not allow for more than seven switches in a network in order to guarantee a 2-ms end-to-end latency. This limits a single daisy chain to seven nodes. There are two ways around it. First, one can forego the 2-ms guarantee in a closed system. Second, one can use a switch with daisy chains. If a daisy chain of four nodes is connected to each port of the switch, four times as many nodes can be used on a switch, reducing the cost of the infrastructure required.

Because of these limitations, daisy-chained AVB is well suited to deal with small-scale systems.

Example Daisy-Chain Implementation

For example, one daisy-chain AVB node can be based on an XMOS chip with 16 logical cores (Fig. 4). The hardware used for the system comprises an xCORE multicore microcontroller with 16 logical cores, two Ethernet PHYs with magnetics and jacks, a low-jitter phase-locked loop (PLL) for word-clock generation, and a codec with input and output stages.

4. An xCORE multicore microcontroller, two Ethernet PHYs, a low-jitter PLL, and a codec all can play a role in daisy-chained AVB-enabled solutions.

The microcontroller runs seven tasks to control the two Ethernet ports, inputting packets, outputting packets, and routing packets between the two ports. Another six tasks implement the AVB stack: the talker/listener, PTP and media clock recovery, I2S control, SRP/MRP, and 1722.1 discovery and control tasks. All 13 tasks fit in 128 kbytes of on-chip memory, obviating the need for external RAM. An external flash chip is used to hold persistent data and the boot image. The software is very similar to the software found in high-channel-count AVB products, except for the media independent interface (MII) and buffering.

The system is constructed using an XMOS sliceKIT with two Ethernet slices and an audio slice. The stack of daisy-chained nodes is connected to a laptop that uses two of the nodes as "left" and "right" channels (Fig. 5). Our audio slice comes with dual stereo input and dual stereo output as default. For this demonstration, we only use a single audio output.

5. A stack of daisy-chained nodes can be connected to a laptop that uses two of the nodes as "left" and "right" channels.

The laptop can discover the two nodes, and we can redirect our audio output to the two speakers.
A scope probe on each of the clocks shows that the two channels are playing without a discernible phase difference. The same hardware/software architecture can be used to, for example, build a conference system or to drive a P/A system.

Conclusions

You can construct a low-overhead AVB system that obviates the need for full-blown AVB switches. This reduces the cost of AVB and enables daisy-chained systems to be constructed.
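To make the reservation arithmetic from figure 2 concrete, here is a toy sketch of the per-link bookkeeping an SRP-capable switch might perform; the admission rule and the 75% cap are simplified assumptions, not the actual MSRP state machine:

```python
# Toy per-link admission control in the spirit of SRP (simplified; real MSRP
# propagates talker/listener declarations hop by hop and handles failures).

AVB_SHARE = 0.75   # at most 75% of a link may be reserved for AVB streams

class Link:
    def __init__(self, capacity_mbps):
        self.capacity = capacity_mbps
        self.reserved = 0.0

    def reserve(self, stream_mbps):
        if self.reserved + stream_mbps > AVB_SHARE * self.capacity:
            return False                      # admission refused, stream not established
        self.reserved += stream_mbps
        return True

# The X-Y trunk from figure 2: A->D (45 Mbit/s) and B->C (20 Mbit/s) share it.
trunk_xy = Link(capacity_mbps=100)
print(trunk_xy.reserve(45), trunk_xy.reserve(20))    # True True
print(trunk_xy.reserved)                             # 65.0 Mbit/s reserved
print(trunk_xy.capacity - trunk_xy.reserved)         # 35.0 Mbit/s left for other traffic
```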


Andrew Lucas leads AVB applications engineering at XMOS Ltd. He is responsible for the design and development of AVB reference designs and IP for the consumer, professional AV, and automotive markets. He also serves on behalf of XMOS on the AVnu Alliance, a consortium of companies working together to establish and certify the interoperability of AVB standards. He holds a first class (honors) master's degree in computer science and electronics from the University of Bristol.

Peter Hedinger is technical director of applications at XMOS Ltd. He is involved in the development and testing of AVB and USB audio products. Prior to that, he spent 13 years working on software tools, microprocessor architectures, and 10/40Gb Ethernet switches.

Henk Muller is the principal technologist at XMOS Ltd. In that role he has been involved in the design and implementation of hardware and software for real-time systems. Prior to that, he worked in academia for 20 years in computer architecture, compilers, and ubiquitous computing. He holds a doctorate from the University of Amsterdam.

A performance study of Ethernet Audio Video Bridging (AVB) for Industrial real-time communication | IEEE Conference Publication | IEEE Xplore


Schedulability analysis of Ethernet Audio Video Bridging networks with scheduled traffic support | Real-Time Systems

Real-Time Systems, Volume 53, pages 526–577 (2017)
Open access | Published: 02 February 2017

Mohammad Ashjaei, Gaetano Patti, Moris Behnam, Thomas Nolte, Giuliana Alderisi & Lucia Lo Bello


Abstract
The IEEE Audio Video Bridging (AVB) technology is nowadays under consideration in several automation domains, such as automotive, avionics, and industrial communications. AVB offers several benefits, such as open specifications, the existence of multiple providers of electronic components, and real-time support, as AVB provides bounded latency to real-time traffic classes. In addition, in the automotive domain AVB offers, compared with the existing in-vehicle networks, significant advantages in terms of high bandwidth and a substantial reduction of cabling cost, thickness, and weight, while meeting the challenging EMC/EMI requirements. Recently, an improvement of the AVB protocol, called AVB ST, was proposed in the literature, which supports scheduled traffic, i.e., a class of time-sensitive traffic that requires time-driven transmission and low latency. In this paper, we present a schedulability analysis for the real-time traffic crossing the AVB ST network. In addition, we formally prove that, if the bandwidth in the network is allocated according to the AVB standard, the schedulability test based on response time analysis will fail in most cases, even though, in reality, these cases are schedulable. In order to provide guarantees based on the analysis, a bandwidth over-reservation is required. We therefore propose a solution to obtain a minimized bandwidth over-reservation. To the best of our knowledge, this is the first attempt to formally identify this limitation and to propose a solution for overcoming it. The proposed analysis is applied to both the AVB standard and AVB ST. The analysis results are compared with the results of several simulation-based assessments, obtained using OMNeT++, on both automotive and industrial case studies. The comparison between the analytical and simulation results shows the effectiveness of the analysis proposed in this work.


1 IntroductionDistributed real-time systems are nowadays found in many applications in, for example, automotive industry, industrial process control, smart buildings and energy distribution facilities. In these applications the amount of data to be exchanged within the distributed system is growing. Often, these data exchanges have constrains with respect to timing. Ethernet solutions are being considered as promising solutions to handle the mentioned applications due to their features of high bandwidth support and wide availability. In particular, the IEEE 802.1 Audio/Video Bridging (AVB) specifications are being followed by both automotive and industrial control domains.The Ethernet AVB consists of a set of technical standards to allow real-time traffic transmission. For this purpose, the AVB standard divides the traffic into different classes according to their priorities (currently, two real-time classes are defined, i.e., the Stream Reservation (SR) Class A and B), and adds a credit-based shaper (CBS) to prevent traffic bursts. Bandwidth reservation is realized through the Stream Reservation Protocol (SRP), as defined in the AVB standards. The IEEE Time Sensitive Networking (TSN) group is working on several projects aiming to provide the specifications that will allow time-synchronized low latency streaming services through 802 networks. In this regard, Scheduled Traffic (ST) enhancements are addressed in the recently published IEEE 802.1Qbv standard 

(2015). The standard foresees a periodic time window, called the Protected Window, that is reserved for the transmission of ST traffic. The Protected Window is scheduled according to a set of rules associated with the transmission queue rather than with individual ST messages. A different approach to supporting ST traffic in AVB networks (called AVB ST) was presented in Alderisi et al.

(2013), which suggests to handle the Scheduled Traffic in a separate highest-priority class to be added on top of the already defined SR classes, so as to guarantee preferential service to this time-sensitive traffic class, which requires both time-driven transmission and low latency. Also, a Time-Aware Shaper (TAS), which allows traffic transmission based on a time schedule, is adopted to provide temporal isolation between the ST traffic and the other traffic classes, thus avoiding any interference on the ST traffic from the other traffic. The AVB ST proposal brings the following benefits: (i) short and strict latencies for the ST traffic, (ii) time-driven transmission, with off-line scheduling possibility, for the ST traffic, hence meeting the needs of time-sensitive control traffic, (iii) temporal isolation with other traffic classes, and (iv) not significant effect on the other traffic classes. Our focus in this paper is on the AVB ST networks. Moreover, the AVB ST allows for scheduling one time window for each ST message, according to the specific ST message period and length, using offset scheduling techniques. This approach compared to the IEEE 802.1Qbv entails a finer-grained scheduling of ST messages, thus a more optimized bandwidth utilization. More details on the main differences between the IEEE 802.1Qbv standard and the AVB ST are presented in Sect. 3.3.1.1 ContributionsIn all real-time application domains, timeliness guarantees are required. In fact, it is essential to provide an analytical method to achieve the worst-case latency of the messages in the network. Several timing analysis approaches for messages in AVB networks were presented, e.g., Diemer et al.

(2012) and Bordoloi et al.

(2014), however, none of them can support response time computation of messages in the presence of ST messages. In this paper, we identify the elements which influence the delay of messages when ST messages cross through the network. Then, we present a response time analysis for the different classes of traffic in the AVB ST. We follow the same response time analysis method as presented in Bordoloi et al.

(2014). However, our analysis has the following dissimilarities compared to the analyses presented in Diemer et al.

(2012) and Bordoloi et al.

(2014): (i) we consider the effect of ST messages on the analysis, and (ii) we discuss the effect of queuing jitter from higher priority messages by showing potential optimism when jitter is neglected in the analysis, using a counterexample.Furthermore, we focus on bandwidth reservation for messages in AVB networks. We show that the previously presented analyses do not lead to a schedulable result in most of the cases because of the tight bandwidth allocation dictated by the AVB standard. This problem stems from: (i) not considering the blocking by lower priority messages in the bandwidth reservation, (ii) not considering the queuing jitter of a message crossing multiple switches for the bandwidth reservation, and (iii) the inevitable pessimism in the analyses. We formally demonstrate this limitation. A solution is to increase the dictated bandwidth for the traffic classes, known as bandwidth over-reservation. Here, we propose a solution to obtain a minimized bandwidth over-reservation. The solution is general and can be used for both AVB and AVB ST networks. To the best of our knowledge, we formally show the limitation and propose a solution for the first time.Finally, we conduct experiments in two types of application domains, automotive and automation networks. The architecture and traffic of the networks are inspired by close-to-industry case studies. In particular, for the automotive case study we referred to an architecture designed by BMW group (Lim et al. 2011), while for the automation case study we adopted the maximum number of switches that the standard guarantees. Then, we compute the response time of messages using the timing analysis presented in this paper, considering the bandwidth over-reservation. Also, we simulate the networks using OMNeT++ to compare the message response times with the computed ones, in order to show the effectiveness of the presented response time analysis.1.2 OrganizationThe rest of the paper is organized as follows. The next section discusses related works on Ethernet AVB. Then, Sect. 3 presents the Ethernet AVB and AVB ST. Section 4 provides the system model. Section 5 recalls the previous response time analysis, while Sect. 6 presents the response time analysis for the AVB ST networks. Section 7 discusses the limitation of the analysis when allocating the bandwidth based on the AVB standard, and presents a solution to overcome it. The experiments on automation and automotive case studies are conducted in Sect. 8. Finally, Sect. 9 concludes the paper.2 Related workIn this section, we describe research and extensions relevant to Ethernet AVB. Moreover, we provide a brief overview of existing schedulability analysis approaches for AVB networks.2.1 AVB related researchRecently, the real-time performance of IEEE AVB has been investigated extensively in multiple application domains, namely automotive, aeronautics, and industrial automation. For automotive networks, the work in Lo Bello

(2011) and Tuohy et al.

(2015) indicate AVB as one of the possible candidates for real-time communication domain. Moreover, the AVB suitability for supporting traffic flows of both Advanced Driver Assistance Systems (ADAS) and multimedia/infotainment systems was proven in Steinbach et al.

(2012), Alderisi et al.

(2012, 2012). In Lim and Volker

(2011) the capability of AVB to be used as an in-car backbone network for inter domain communication is discussed. As far as the industrial automation communication is concerned, the Ethernet AVB ability to deal with real-time traffic requirements typically found in industrial automation is addressed in Imtiaz et al.

(2011) and Jasperneite et al.

(2009). In Jasperneite et al.

(2009) the latency of forwarding the traffic is pointed out as one of the main challenges. In fact, due to the shaper in IEEE 802.1Q, a real-time message might be delayed in every bridge resulting in a poor performance. Therefore, further improvements are foreseen, including (i) shortening the non-real-time messages that interfere with real-time messages, (ii) allowing only real-time messages or (iii) providing mechanisms to avoid the mentioned interference. Focusing on the avionics application, AVB performance is discussed in Land and Elliott

(2011) and Heidinger et al.

(2012). In Heidinger et al.

(2012) the reliability of AVB was evaluated and results showed that AVB solutions may be applicable to applications belonging to lower safety classes that have less demanding requirements on reliability. This is mainly due to the complexity of dynamic bandwidth reservation in AVB and failure probability of devices evaluated in Heidinger et al.

(2012). In Schneele and Geyer

(2012), the AVB is compared to the AFDX standard and the outcome is that further work is needed for making AVB suitable for the aeronautic industry requirements. In order to tackle the aforementioned improvements several kinds of traffic shapers were analyzed in Thangamuthu et al.

(2015) and the Time-Aware Shaper (TAS) proved to be the one that can offer the lowest latency along with good jitter performance, albeit with an increased configuration cost for the switches. TAS prevents interference on the scheduled traffic, thus the traffic can be delivered faster. The only delay that the scheduled traffic suffers from is the forwarding latency crossing the switches.Approaches to reduce latency for high priority traffic in the AVB networks, such as packet preemption and fragmentation, are discussed in Imtiaz et al.

(2012). Moreover, TAS or time windows were proposed in Pannel

(2012) and Cummings

(2012) for isolating class A streams from the interference due to other traffic types. However, these approaches map all time-sensitive flows on the same class (class A) irrespective of their heterogeneous sizes and time constraints. Such a choice is not beneficial to low latency small-size traffic, which should not be handled in the same queue as large messages in AVB. For this reason, the work in Alderisi et al.

(2013) proposed to add a separate class on top of the AVB Stream Reservation Classes A and B to introduce support for ST traffic, while maintaining the other traffic classes provided by the AVB standard. The work adopted TAS to enforce temporal isolation between ST and other classes of traffic. It also proved that ST traffic achieves both low and predictable latency, without significantly affecting the SR traffic. In this paper, we focus on the proposal in Alderisi et al.

(2013), named AVB ST network, as it introduces relatively high performance in transmission of time-sensitive traffic. The details of the AVB ST are discussed in Sect. 3.2.2.2 Timing analysis approaches for Ethernet AVBA number of timing analysis techniques were proposed for the Ethernet AVB networks, such as Imtiaz et al.

(2009), Lee et al.

(2006) and De Azua and Boyer

(2014), each one using different approaches. For instance, the analysis presented in De Azua and Boyer

(2014) applies the Network Calculus framework (Leboudec and Thiran 2001), while the one presented in Imtiaz et al.

(2009) adopts delay computation. However, these analysis techniques are restricted to the computation of worst-case response time per-class, without distinguishing the individual messages’ response times. It should be noted that in many industrial systems a large number of messages are transmitted. For instance, in a modern truck 6000 messages are exchanged across several networks (Keynote 2013). Therefore, the delays of each individual message should be bounded, but this is not possible using the mentioned analysis approaches. Another analysis framework for the Ethernet AVB is presented in Reimann et al.

(2013) and is based on Modular Performance Analysis (MPA) (Wandeler et al. 2006). In the presented analysis the interference from higher priority messages is not formally considered, i.e., multiple activations of higher priority messages are not taken into account.A formal timing analysis is given in Diemer et al.

(2012), where the response time of each individual message is computed in an Ethernet AVB architecture consisting of multiple switches. The recent work presented in Bordoloi et al.

(2014) showed that the analysis in Diemer et al.

(2012) considers only one blocking factor that results from lower priority messages, which is not the case in the Ethernet AVB, due to the traffic shaper. Thus, a new response time analysis is developed in Bordoloi et al.

(2014). However, the proposed analysis is still limited to the constrained deadline traffic model, and a single-switch architecture. In this paper, we extend the response time analysis presented in Bordoloi et al.

(2014) in two directions: (i) computing the response time of messages in Ethernet AVB when ST traffic is transmitted through the network, and (ii) considering the effect of queuing jitter from higher priority messages in multi-hop architectures.3 Ethernet AVB basicsIn this section, we present the Ethernet AVB protocol. Further, we describe the AVB ST approach.3.1 The Ethernet AVBThe IEEE AVB standard consists of a set of technical standards. For our purposes here we mention the IEEE 802.1AS

(2011) and the IEEE 802.1Q

(2014). The IEEE 802.1AS Time Synchronization protocol is a variation of the IEEE 1588

(2008) standard, which provides precise time synchronization of the network nodes to a reference time with an accuracy better than 1 \(\upmu \)s. The IEEE 802.1Q provides Stream Reservation Protocol (SRP) that allows for reservation of resources (i.e., buffers and queues) within the switches (called bridges in the AVB terminology) along the path between the talker (i.e., the stream source node) and the listener (i.e., the stream final destination node). Moreover, the IEEE 802.1Q provides Queuing and Forwarding mechanism for AV Bridges to split time-critical and non-time-critical traffic into different traffic classes and applies the CBS algorithm that prevents traffic bursts by exploiting traffic shaping at the output ports of bridges and end nodes. The AVB standard guarantees a fixed maximum latency for up to seven hops within the network for two different Stream Reservation (SR) classes, i.e., 2 ms for class A and 50 ms for Class B. According to the CBS algorithm, each SR traffic class has an associate credit parameter, whose value changes within two limits, called loCredit and hiCredit, respectively. Pending messages in the queues may be transmitted only when their associated credit is zero or higher. During the message transmission the credit decreases at the sendSlope rate defined for the class. The credit is replenished at the constant rate idleSlope defined for the class when (i) the messages of that class are waiting for the transmission or (ii) when no more messages of the class are waiting, but credit is negative. If the credit is greater than zero and no more messages of the corresponding traffic class are waiting, the credit is immediately reset to zero. Figure 1 illustrates the operation of the CBS algorithm for classes A and B. At the beginning \(m_2\) is being transmitted, hence its credit (class B) decreases. At time \(t_1\) message \(m_1\) is ready in the queue of class A, thus its credit starts to increase. When at time \(t_3\) the transmission of \(m_2\) finishes, the transmission of \(m_1\) is initiated, as the credit for class A is positive. Moreover, credit of class B starts to increase as there is \(m_3\) pending for transmission. At time \(t_4\) transmission of \(m_1\) finishes and finally \(m_3\) is started for transmission. At time \(t_5\) since there is no pending traffic for class A, the credit immediately becomes zero.Fig. 1Operation of the CBS algorithmFull size image
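To make the CBS behaviour of Fig. 1 concrete, the following is a minimal discrete-time sketch of the credit rules described above (credit falls at sendSlope while a frame is sent, rises at idleSlope while frames wait or the credit is negative, and resets to zero when there is no backlog). The slope values, frame sizes, and time step are illustrative assumptions, not values taken from the standard.

```python
# Minimal discrete-time sketch of the credit-based shaper (CBS) for one class.
IDLE_SLOPE = 10.0   # assumed credit gained per time unit while frames wait
SEND_SLOPE = -90.0  # assumed credit lost per time unit while transmitting
DT = 0.01           # simulation time step (arbitrary units)

def simulate_cbs(queue, horizon):
    """queue: FIFO list of frame transmission times; returns (time, credit) samples."""
    credit, t, remaining, log = 0.0, 0.0, None, []
    while t < horizon:
        if remaining is not None:            # a frame is on the wire
            credit += SEND_SLOPE * DT
            remaining -= DT
            if remaining <= 0:
                remaining = None
        elif queue:
            if credit >= 0:                  # transmission allowed only with credit >= 0
                remaining = queue.pop(0)
            else:
                credit += IDLE_SLOPE * DT    # replenish while frames are waiting
        elif credit < 0:
            credit += IDLE_SLOPE * DT        # replenish after the last frame
        else:
            credit = 0.0                     # no backlog: credit is reset to zero
        log.append((round(t, 2), round(credit, 2)))
        t += DT
    return log

if __name__ == "__main__":
    print(simulate_cbs(queue=[0.12, 0.12], horizon=1.0)[::20])
```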

The AVB ST approach presented in Alderisi et al.

(2013) is summarised in Sect. 3.2; it represents a promising solution towards supporting ST traffic over AVB networks. The analysis and results in this paper are based on the AVB ST design presented in Alderisi et al.

(2013).3.2 Ethernet AVB STThe AVB ST approach introduces a separate traffic class for scheduled traffic, called the Scheduled Traffic Class (ST Class), on top of the AVB SR Classes A and B. ST frames get the highest priority TAG according to the IEEE 802.1Q standard, as scheduled traffic includes time-sensitive high-priority flows (e.g., control traffic) that deserve the best service. For this reason, the ST class not only has a separate queue, but also does not undergo credit-based shaping, this way avoiding the latency increase that traffic shaping introduces. Conversely, SR class A and B take the second and the third highest priority, respectively, and undergo CBS shaper. Finally, best-effort traffic is handled by strict priority (as in the IEEE 802.1Q standard).As ST flows are periodic, with fixed and a priori known period and frame size, they can be scheduled offline. Suitable scheduling techniques, e.g., offset scheduling (Palencia and González Harbour 1998), can be adopted at the network configuration time to ensure, by design, the absence of collisions between ST frames in the whole network (i.e., either in the end stations or in the bridges). The AVB ST approach requires that every node and every switch has to be aware of the right time for transmitting its ST traffic, therefore, synchronization is provided by the IEEE 802.1AS standard.The AVB ST approach is based on two fundamental concepts, i.e., TAS and ST_Window. TAS is a mechanism that, in order to prevent any interference on ST frames from other traffic classes, inhibits the transmission of non-ST traffic that would delay the upcoming ST one. In other words, in AVB ST, the TAS temporally isolates the transmission of ST frames from non-ST frames, thus enabling time-sensitive frames to be transmitted from a bridge port without any interference from other traffic types. In the AVB ST approach the messages belonging to the SR Classes undergo both TAS and credit shaping, while best-effort messages go through TAS only, in compliance with the AVB standard. The traffic shaping in the AVB ST design presented in Alderisi et al.

(2013) is shown in Fig. 2.Fig. 2Traffic classes in AVB ST approach adopted from Lo Bello

(2014).
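The class ordering of Fig. 2 can be summarised as a small table: ST above SR class A, class A above class B, and best effort last, with only the SR classes undergoing credit-based shaping. The sketch below encodes that ordering; the numeric priority values and field names are illustrative assumptions, not the 802.1Q PCP encoding.

```python
# Per-class handling in the AVB ST design of Fig. 2 (illustrative values).
TRAFFIC_CLASSES = {
    # class: (priority, uses_CBS, inhibited_by_TAS_around_ST_windows)
    "ST": (3, False, False),  # highest priority, no credit-based shaping
    "A":  (2, True,  True),   # SR class A: CBS, held back by TAS
    "B":  (1, True,  True),   # SR class B: CBS, held back by TAS
    "BE": (0, False, True),   # best effort: strict priority, TAS only
}

def next_class_to_serve(pending):
    """Pick the highest-priority class with pending frames (strict priority
    between classes; CBS/TAS eligibility is checked separately)."""
    candidates = [c for c in pending if pending[c]]
    return max(candidates, key=lambda c: TRAFFIC_CLASSES[c][0]) if candidates else None

if __name__ == "__main__":
    print(next_class_to_serve({"ST": 0, "A": 2, "B": 1, "BE": 5}))  # -> "A"
```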

According to the AVB ST design in Alderisi et al.

(2013), the CBS and TAS operate as follows. When there is no ST transmission, the CBS operates as in Sect. 3.1. When there are ST messages to transmit, the TAS blocks the other classes sufficiently far in advance to be certain that the ST transmission is fully protected. This protection window, called a guard band, is the worst-case maximum transmission time of the registered SR classes. During the guard band and the ST message transmission, the credit of SR classes that have pending messages increases at the rate of the relevant idleSlope. If there are no pending messages, the credit immediately becomes zero. Fig. 3 depicts an example of AVB ST transmission. An ST message (\(m_{\textit{ST}}\)) is scheduled for transmission at time \(t_3\). Although \(m_2\) from class A could be transmitted at time \(t_1\), as the credit is zero, the TAS prevents it because it would interfere with the \(m_{\textit{ST}}\) transmission. Finally, \(m_2\) is sent after \(m_{\textit{ST}}\).
Fig. 3 Operation of AVB ST shapers
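The TAS rule illustrated in Fig. 3 reduces to a simple admission check: a non-ST frame may start only if it begins before the guard band that protects the next scheduled ST transmission. A minimal sketch is given below; the guard band is taken as the worst-case SR frame transmission time, and the time values are illustrative assumptions (microseconds).

```python
def tas_allows_start(now, next_st_start, guard_band):
    """True if a non-ST frame starting at 'now' cannot delay the next ST window."""
    return now < next_st_start - guard_band

if __name__ == "__main__":
    worst_sr_frame_us = 12.0   # assumed guard band = largest registered SR frame
    print(tas_allows_start(now=80.0, next_st_start=100.0, guard_band=worst_sr_frame_us))  # True
    print(tas_allows_start(now=95.0, next_st_start=100.0, guard_band=worst_sr_frame_us))  # False
```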

It should be noted that the guard band can be assigned more intelligently: if the implementation knows which messages are waiting in the queue, it is sufficient to use the maximum size of those messages as the guard band, instead of the maximum over all messages in that class. The synchronization offered by the IEEE 802.1AS

(2011) makes AVB bridges time-aware nodes (i.e., nodes provided with network timing information). Consequently, to implement TASs, only a suitable mechanism to allow/inhibit transmissions in a given time window and a way to configure the TAS based on the information provided by a Management Information Base (MIB) are needed.The second fundamental concept for AVB ST is the ST_Window of each ST message. Such a window is defined as the time window, at the receiver side, within which the ST frame has to be received. In fact, as in the AVB ST approach ST frames are transmitted at known time instants and do not experience interference from the same class or from other traffic classes, the reception instant for any ST message can be calculated. The calculation has to consider the synchronization error between the nodes. The synchronization error is calculated using the drift of each node.The results of comparative simulations of AVB ST and AVB in Alderisi et al.

(2013) show a positive outcome for AVB ST, as ST traffic obtained low and predictable latency values, without significantly affecting SR traffic. The reason for this result is the combination of three features that are very beneficial for the ST class, namely, the offset-based scheduling, the temporal isolation provided by TAS and the absence of CBS shaping for the ST class.3.3 Overview of the IEEE 802.1Qbv standard and differences with AVB STThe standardization process of the IEEE TSN is in progress and several projects are ongoing. Recently, an amendment of the IEEE 802.1Q, called IEEE 802.1Qbv-2015 

(2015), was released. The IEEE 802.1Qbv introduces the support for scheduled traffic. To achieve this goal, a transmission gate is associated with each queue and the state of the transmission gate determines whether or not queued frames can be selected for transmission. For a given queue the gate can be in one of two states, i.e., open or closed. According to the transmission selection algorithm, a frame waiting in a traffic class queue cannot be transmitted if the transmission gate relevant to the queue is in the closed state or if there is not enough time for transmitting the entire frame before the next gate-close event. The gate operations are contained in a list and are cyclically repeated with a period called OperCycleTime. Two consecutive gate operations (i.e., opening/closing of one or multiple gates) are spaced by an interval called TimeInterval. Such an interval is equal for all the operations contained in the list. The list of operations is configured to create the protected window (PW) for the queues in which the scheduled traffic is transmitted with no interference. This operational approach foresees a unique PW cyclically repeated for each scheduled traffic queue large enough to accommodate the transmission of all the ST messages within a cycle. This results in a non-optimized scheduling in the case of multiple ST messages with different lengths and periods leading to the consequent waste of bandwidth. In fact, the PW should be sized so as to accommodate the transmission of all the ST frames handled by a node within a cycle, regardless of whether some ST messages have a larger period and they are not transmitted in each cycle. Note that the IEEE 802.1Qbv standard does not exclude multiple PWs for one queue. However, scheduling multiple protected windows for each message may result in a very complex and difficult implementation. The main differences between the IEEE 802.1Qbv standard and AVB ST are summarized as follows.

According to the IEEE 802.1Qbv standard, the credit increases at the idleSlope only while the gate is open, i.e., it does not increase during the guard band and the ST transmission. However, according to the design in Alderisi et al.

(2013), the idleSlope for the SR classes increases even during the guard band and ST transmission (see Fig. 3).

The credit is replenished at a higher rate than when no ST transmission is enabled: the duty cycle of the transmission gate is multiplied by the idleSlope, according to Clause 8.6.8.2 of the IEEE 802.1Qbv standard 

(2015).

Unlike the IEEE 802.1Qbv standard, the AVB ST provides scheduled windows that consider the period and the length of each transmitted message. Therefore, ST windows are scheduled only when there are ST messages to be transmitted, and they are sized according to the frame length of the specific ST message, thus entailing more efficient bandwidth utilization.
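For comparison, the IEEE 802.1Qbv transmission-selection rule described in Sect. 3.3 can be sketched as follows: a queued frame may be selected only if its gate is currently open and the whole frame fits before the next gate-close event. The gate-control-list layout below (one open/closed flag per TimeInterval slot of the cycle) is a simplified assumption for illustration, not the exact managed-object encoding of the standard.

```python
def gate_allows(frame_tx_time, now, gate_list, time_interval, cycle_time):
    """gate_list[i] is True (open) or False (closed) for slot i of the gating cycle."""
    t = now % cycle_time
    slot = int(t // time_interval)
    if not gate_list[slot]:
        return False                          # gate closed: hold the frame
    # Find the next gate-close event within this cycle (simplification: no wrap).
    close_slot = slot
    while close_slot < len(gate_list) and gate_list[close_slot]:
        close_slot += 1
    next_close = close_slot * time_interval
    return t + frame_tx_time <= next_close    # the whole frame must fit before the close

if __name__ == "__main__":
    # 8-slot cycle: open for slots 0-5, closed (protected window) for slots 6-7.
    gates = [True] * 6 + [False] * 2
    print(gate_allows(frame_tx_time=20.0, now=510.0,
                      gate_list=gates, time_interval=100.0, cycle_time=800.0))  # True
```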

4 System modelIn this section we describe the system model for the AVB network and the traffic, separately.4.1 Network modelThe AVB switches are considered to be full-duplex, i.e., the input and output of a switch port are isolated. Thus, the receiving message does not delay the transmission of a message. In this paper, we define a link as a connection between a node and a switch, as well as a connection between two switches. The link is denoted by l. Also, the switch has a fabric latency due to the hardware configuration for relaying messages, which is denoted by \(\epsilon \). This delay varies in different switches and usually is accounted for the time that the switch takes to process a received message and to insert it into the output port queue. The link delay due to wire and its physical characteristics is assumed to be very small and negligible.Ethernet AVB uses a credit-based shaping algorithm to regulate the traffic transmission for two traffic classes, A and B, where class A has higher priority than class B. Assuming traffic class X, the replenishment rate (idleSlope) of the credit on link l is denoted by \(\alpha ^+_{X,l}\). Moreover, the credit is consumed when there is a transmission on a link, and the consumption rate (sendSlope) is specified by \(\alpha ^-_{X,l}\). The non-real-time traffic, known as best effort (BE) class, does not undergo the traffic shaper. Moreover, the total network bandwidth is denoted by R. It should also be noted that in the response time analysis the latency due to unprecise clock synchronization among the nodes is neglected.4.2 Traffic modelFor the traffic model, we use the real-time periodic model. A set of messages \(\Gamma \), composed by N messages, is characterized as follows:$$\begin{aligned} \Gamma = \{m_i(C_i, T_i, D_i, P_i, \mathcal {L}_i),\quad i=1\ldots N\} \end{aligned}$$

(1)

In this model, \(C_i\) represents the transmission time of \(m_i\), that is obtained from the message size based on the total network bandwidth (R). In our model, a message is not larger than the maximum possible Ethernet size, hence message fragmentation is not required. Also, \(T_i\) and \(D_i\) denote the period and relative deadline of the message, respectively. In this paper, we consider the constrained deadline model, i.e., \(D_i \le T_i\). A message belongs to a traffic class based on its priority. Several messages in the set may share a priority level and be assigned to the same traffic class. In this case, the FIFO policy applies to them in the queues of the switch. Therefore, \(P_i\) represents the class of \(m_i\), e.g., \(P_i = \textit{class}\;A\). In the analysis, \(\textit{lp}(m_i)\), \(sp(m_i)\) and \(hp(m_i)\) are the sets of the messages with lower, the same and higher priority than that of \(m_i\), respectively. Moreover, \(F_i\) represents the message length, which can be derived as \(F_i = C_i \cdot R\). A message traversing several switches may get variation in delay, which is called queuing jitter and denoted by \(J_i\).A message may traverse multiple switches to arrive to its destination node. A set of links that \(m_i\) passes through is defined by \(\mathcal {L}_i\), where the number of links in the set is defined by \(n_i = |\mathcal {L}_i|\). Each member of the set is a tuple \(l = \langle x, y \rangle \), that represents a link l between the nodes/switches x and y. Note that the sequence in the tuple shows the direction of the message transmission from x to y. In this analysis, we restricted the model to unicast streams, i.e., only one destination per message is assumed. The multicast and broadcast streams can be handled by transforming them into multiple unicast streams, however we leave that case as out of the scope of this paper for the sake of clarity. The response time of \(m_i\) is the temporal interval between the time at which the message is inserted in the queue of its source node (i.e., the time instant when it becomes ready for transmission), and the time at which the message is delivered to its destination node. The response time is specified by \(\textit{RT}_i\). Moreover, the response time of \(m_i\) when it is transmitted from a node/switch to another node/switch through link l in a multi-hop architecture is denoted by \(\textit{RT}^l_i\). Table 1 summarizes the notations that are used in this paper.Table 1 Table of notationsFull size table
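The traffic model above maps directly onto a small data structure; the sketch below mirrors the tuple \(m_i(C_i, T_i, D_i, P_i, \mathcal {L}_i)\) using the paper's notation for the field names, while the container itself is only an illustrative choice.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

Link = Tuple[str, str]  # directed link <x, y> between nodes/switches x and y

@dataclass
class Message:
    name: str
    C: float                 # transmission time on the link (frame length F = C * R)
    T: float                 # period
    D: float                 # relative deadline, D <= T (constrained deadlines)
    P: str                   # traffic class: "ST", "A", "B" or "BE"
    links: List[Link] = field(default_factory=list)  # route L_i; n_i = len(links)

    @property
    def n(self) -> int:
        return len(self.links)

# Example: a class A message routed over two hops (node1 -> sw1 -> node2).
m1 = Message("m1", C=0.12, T=10.0, D=10.0, P="A",
             links=[("node1", "sw1"), ("sw1", "node2")])
```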

5 AVB response time analysis recapIn this section, we recall the response time analysis of messages in single-switch AVB networks (Bordoloi et al. 2014). Note that some optimization methods have been presented in Bordoloi et al.

(2014), however we do not consider them when extending the analysis. The reason is that later in Sect. 7 we use the analysis to find the bandwidth over-reservation, which is not possible with the presented optimizations. New methods are required to achieve tighter analysis, which are left for future work. We present the response time analysis for each traffic class separately, considering that in plain AVB there are no ST messages transmitted in the network. Moreover, the presented analysis considers an architecture with a single switch. Therefore, the notation of link l is discarded from the equations.5.1 Response time of messages belonging to class AIn Ethernet AVB, class A messages have the highest priority, hence there is no interference from higher priority messages for this class. On the other hand, there might be blocking by lower priority messages, e.g., a blocking by class B or BE traffic. As there are only two traffic classes, several messages may be assigned to the same class, therefore a message may be delayed by the messages with the same priority in the FIFO queue. Finally, as the message transmission is controlled by the traffic shaper, even if the message is ready for transmission it may be blocked due to a negative credit. To sum up, three different elements have to be considered in the worst-case response time of a message: (i) blocking by lower priority messages, (ii) interference from the same priority messages in the FIFO queue, and (iii) traffic shaping.5.1.1 Blocking by lower priority messagesIt has been shown in Bordoloi et al.

(2014) that considering at most one lower priority message for the blocking is not enough. This is due to the traffic shaper behavior, as on every replenishment of the credit one message from the lower priority may be ready for transmission. We show the insufficiency of considering one lower priority message using an example illustrated in Fig. 4. In this example, \(m_1\) is the message under analysis, while LP and SP are lower and same priority messages, respectively. Initially, the credit increases as a lower priority message (\(\textit{LP}_1\)) is transmitted on the link. Afterwards, a message with the same priority, which is ahead of \(m_1\) in the FIFO queue, is transmitted. After the transmission of the same priority message, the credit becomes negative, therefore there is room for transmitting another lower priority message (\(\textit{LP}_2\)) with enough credit for transmission. Finally, \(m_1\) has a chance for transmission as the credit is positive. In this example, \(m_1\) experiences blocking by two lower priority messages, i.e., \(\textit{LP}_1\) and \(\textit{LP}_2\), due to traffic shaping.Fig. 4The blocking by lower priority messagesFull size image

However, it has been proved in Bordoloi et al.

(2014) that considering an inflation factor for the same priority messages in the analysis makes it sufficient to take one lower priority message for the blocking term. This inflation factor is calculated by \(\left( 1 + \frac{\alpha ^-_A}{\alpha ^+_A}\right) \). We show the effect of the inflation factor using an example shown in Fig. 4. If we inflate the SP message by the mentioned inflated factor, it becomes the Inflated SP shown in Fig. 4. Thus, it covers \(\textit{LP}_1\), \(\textit{SP}\) and the replenishment of the credit to zero. Therefore, \(\textit{LP}_1\) does not need to be accounted for the analysis. The only lower priority message to consider is \(\textit{LP}_2\), which is transmitted before \(m_1\) in this example. In order to mathematically show the inflation factor, let us assume an interval of time in Fig. 4, in which the Inflated SP is transmitted. This interval is denoted by L in this example and it is the summation of \(\textit{LP}_1\), \(\textit{SP}\) and the replenishment duration denoted by H.$$\begin{aligned} L = C_{\textit{LP}1} + C_{\textit{SP}} + H \end{aligned}$$

(2)

As the credit at the beginning and the end of the interval is zero, we can write:$$\begin{aligned} 0 = C_{\textit{LP}1}.\alpha _A^+ - C_{\textit{SP}}.\alpha _A^- + H.\alpha _A^+ \end{aligned}$$

(3)

Deriving H from above equation and inserting it in the calculation of L in Eq. (2), the interval length can be written as below:$$\begin{aligned} L = C_{\textit{SP}} \left( 1 + \frac{\alpha _A^-}{\alpha _A^+}\right) \end{aligned}$$

(4)

Therefore, \(C_{\textit{SP}}\) can be inflated by the inflation factor, thus making the Inflated SP in Fig. 4, which does not include the transmission time of the lower priority message \(C_{\textit{LP}1}\). For more details the reader is referred to the formal proofs provided in Bordoloi et al.

(2014).5.1.2 Interference from the same priority messagesThe interference from the same priority messages is the sum of the transmission times of all messages in the same traffic class. In a schedulable system where \(D_i \le T_i\), when a message is enqueued in a FIFO queue, at most one instance of the other messages can be ahead of the message in the queue (Davis et al. 2011). This means that if there are two instances of a message ahead of \(m_i\) in the FIFO queue, the system is not schedulable under the mentioned assumptions.5.1.3 Traffic shaper effectIn the worst-case scenario the credit of the traffic shaper must be considered to be as the negative as possible when the message under analysis is ready for transmission (critical instant). In this case, the traffic shaper blocks the message until the credit increases to zero. Then, the worst-case response time is the time between the critical instant and the complete transmission of the message under analysis (WCRT in Fig. 5). In order to reduce the pessimism in the analysis, the traffic shaper effect is considered in the final phase, i.e., after the transmission of the message under analysis. Therefore, the negative credit replenishment time at the critical instant is removed, and the negative credit replenishment time after the transmission is added to the analysis (modified WCRT definition in Fig. 5). For more details the reader is referred to the proofs provided in Bordoloi et al.

(2014).Fig. 5The response time definitionsFull size image

According to the modified WCRT definition, the last interval of the transmission contains the blocking by a lower priority message, the message under analysis and the credit replenishment time. The replenishment time of the credit is computed as \(C_i\left( \frac{\alpha _A^-}{\alpha _A^+}\right) \). Therefore, the last interval is obtained as below:$$\begin{aligned} \max _{m_j \in lp(m_i)} \{C_j\} + C_i + C_i\left( \frac{\alpha _A^-}{\alpha _A^+}\right) = \max _{m_j \in lp(m_i)} \{C_j\} + C_i\left( 1 + \frac{\alpha _A^-}{\alpha _A^+}\right) \end{aligned}$$

(5)

Therefore, not only the same priority messages should be inflated by the inflation factor, but also the message under analysis should be inflated by the same inflation factor.The response time of \(m_i\) in class A is computed using Eq. (6). The first term of the calculation is the blocking by lower priority messages, while the second term is the interference from the same priority messages when they are inflated. The transmission time of the message under analysis is added to the response time calculation, which is included in the second term of Eq. (6).$$\begin{aligned} RT_i = \max _{m_j \in lp(m_i)} \{C_j\}+ \sum _{m_j \in sp(m_i)} \left\{ C_j \left( 1 + \frac{\alpha ^-_A}{\alpha ^+_A}\right) \right\} \end{aligned}$$

(6)
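Eq. (6) can be read as a one-line computation: one lower-priority blocking term plus the inflated transmission times of all class A messages (including the message under analysis). The sketch below assumes the send/idle slopes are passed as positive magnitudes; all numeric values in the example are illustrative.

```python
def rt_class_a(msgs_a, max_lower_prio_c, alpha_minus, alpha_plus):
    """Eq. (6): msgs_a lists the transmission times C_j of all class A messages,
    the message under analysis included; alpha_minus/alpha_plus are |sendSlope|
    and idleSlope of class A."""
    inflation = 1.0 + alpha_minus / alpha_plus
    return max_lower_prio_c + sum(c * inflation for c in msgs_a)

# Example with assumed values (times in ms):
print(rt_class_a(msgs_a=[0.12, 0.25], max_lower_prio_c=0.12,
                 alpha_minus=25.0, alpha_plus=75.0))
```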

We can observe that, according to Eq. (6), the worst case response times of the messages in class A are equal. This scenario occurs due to the FIFO nature of the transmission queue. Basically, in the worst-case a message in a FIFO queue suffers from all other messages in the same queue. Therefore, the interference due to the same priority messages is the same for all messages in the queue. For class A, there is no interference from the higher priority messages and the blocking due to the lower priority messages is constant for all messages in class A. Thus, their worst-case response times are the same. Please note that this scenario occurs only for class A traffic.5.2 Response time of messages belonging to class BA message from class B is not only blocked by lower priority messages (i.e., by the BE traffic), but it can also suffer from the interference of the higher priority messages (i.e., by the traffic in class A). Therefore, besides the three elements mentioned in the class A analysis, the interference from higher priority messages should be also considered. Although we adopt the constrained deadline model, considering one instance of the message under analysis is not sufficient. Instead, the response time of several instances of the message during a busy period (Lehoczky 1990) must be calculated and the maximum among them is the worst-case response time. The busy period is the maximum time interval during which the resource is busy. Note that in Ethernet AVB the resource is busy either when there is an ongoing transmission on the link or when the queue is not empty, but the transmission is prevented due to a negative credit. The reason behind the need for considering multiple instances is the non-preemptive nature of the transmission, which is thoroughly discussed in the Controller Area Network (CAN) response time analysis (Davis et al. 2007). Under high network utilization, a message may delay subsequent transmission of the higher priority messages. Thus, the higher priority interference may be pushed through into the next period of the messages, causing larger response time in the next instance. Let us consider an example in Fig. 6, where we are interested in computing the response time for \(m_3\). In this example, \(m_1\) and \(m_2\) are higher priority messages than \(m_3\). Moreover, \(m_1\) has period of 4 time units, while the period of the other messages is 6 time units. The transmission time of \(m_1\) and \(m_3\) is 2 time units and for \(m_2\) is 1 time unit. The first instance of \(m_3\) is completely sent at time 5, hence its worst-case response time is 5 time units if we consider the first instance only. However, \(m_1\) is ready at time 4, but it cannot preempt \(m_3\) due to the non-preemptive nature of the transmission. Thus, its transmission starts at time 5 and its third transmission starts at time 8, thus pushing the transmission of \(m_3\). Then, transmission of the second instance of \(m_3\) starts at time 10 and completes at time 12, thus making the worst-case response time for the first instance equal to 6 time units instead of 5. Therefore, in the calculation of the worst-case response time, several instances should be examined.Fig. 6An example of multiple instancesFull size image

Given the \(q^{th}\) instance of message \(m_i\) in the busy period, we compute the queuing delay \(w_i(q)\), which is the longest time from the start of the busy period until the beginning of the transmission of the \(q^{th}\) instance, as shown in Eq. (7). The equation is a recursive function that starts with an initial value for \(w_i(q)\) and terminates when the previous value of \(w_i(q)\) equals the new value derived by the equation.$$\begin{aligned} \begin{aligned} w_i(q)&= \max _{m_j \in lp(m_i)} \{C_j\} + (q-1)C_i\left( 1 + \frac{\alpha ^-_B}{\alpha ^+_B} \right) \\&\quad +\sum _{m_j \in sp(m_i), j \ne i} \left\{ \left\lfloor \frac{(q-1)T_i}{T_j} + 1 \right\rfloor C_j \left( 1 + \frac{\alpha ^-_B}{\alpha ^+_B} \right) \right\} \\&\quad + \sum _{m_j \in hp(m_i)} \left\{ C_j \left\lfloor \frac{w_i(q)}{T_j} + 1 \right\rfloor \right\} \end{aligned} \end{aligned}$$

(7)

The first term in Eq. (7) is the blocking by the lower priority messages. The second term is the transmission time of the message itself in the previous \(q-1\) instances. The third term is the interference from the same priority messages in the FIFO queue, excluding the message under analysis. The last term in the calculation is the interference from higher priority messages. Note that the inflation factor, as discussed before, is applied on the same priority messages including the message itself. Finally, the response time of \(m_i\), which is the maximum response time among the examined instances, is computed in Eq. (8). The first term of Eq. (8) is the queuing delay computed iteratively in Eq. (7), the second term is the number of periods for message \(m_i\) that has passed during the busy period, and the last term is the transmission time of the message itself.$$\begin{aligned} RT_i =\max _{q = 1\ldots q_{\textit{max}}} \left\{ w_i(q) - (q-1)T_i + C_i \left( 1 + \frac{\alpha ^-_B}{\alpha ^+_B} \right) \right\} \end{aligned}$$

(8)

The range of q for which the response time must be calculated is \([1, q_{\textit{max}}]\), where \(q_{\textit{max}}\) is the smallest positive integer q derived in Eq. (9). The left side of Eq. (9) is the length of the busy period. Therefore, by dividing the busy period length to the period of the message \(T_i\) (in the right side of Eq. 9), the maximum number of instances during the busy period is derived. The length of busy period is calculated by adding the blocking from lower priority messages, the interference from same and higher priority messages (i.e., the interference that makes the resource busy during the busy period).$$\begin{aligned} \begin{aligned} \max _{m_j \in lp(m_i)} \{C_j\}&+ \sum _{m_j \in hp(m_i)} \left\{ \left\lceil \frac{w_i(q)}{T_j} \right\rceil C_j\right\} \\&+ \sum _{m_j \in sp(m_i)} \left\{ \left\lfloor \frac{(q-1)T_i}{T_j} + 1 \right\rfloor C_j \left( 1 + \frac{\alpha ^-_B}{\alpha ^+_B} \right) \right\} \le q \cdot T_i \end{aligned} \end{aligned}$$

(9)
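The class B recap of Eqs. (7)-(8) amounts to a fixed-point iteration per instance of the busy period, followed by a maximum over the instances. The sketch below follows the floor terms of Eq. (7) literally; for brevity it examines a fixed number of instances instead of deriving \(q_{\textit{max}}\) from Eq. (9), and all parameter names and values are assumptions of the example.

```python
import math

def rt_class_b(C_i, T_i, same_prio, higher_prio, max_lower_c,
               a_minus, a_plus, q_cap=20):
    """same_prio: [(C_j, T_j)] of the other class B messages;
    higher_prio: [(C_j, T_j)] of class A messages; a_minus/a_plus are
    |sendSlope| and idleSlope of class B."""
    infl = 1.0 + a_minus / a_plus
    best = 0.0
    for q in range(1, q_cap + 1):
        w = C_i                                      # initial value for Eq. (7)
        for _ in range(1000):                        # bounded fixed-point iteration
            w_new = (max_lower_c
                     + (q - 1) * C_i * infl
                     + sum(math.floor((q - 1) * T_i / T_j + 1) * C_j * infl
                           for C_j, T_j in same_prio)
                     + sum(C_j * math.floor(w / T_j + 1)
                           for C_j, T_j in higher_prio))
            if w_new == w:
                break
            w = w_new
        best = max(best, w - (q - 1) * T_i + C_i * infl)   # Eq. (8)
    return best

# Example with assumed values (times in ms):
print(rt_class_b(C_i=0.25, T_i=10.0, same_prio=[(0.25, 10.0)],
                 higher_prio=[(0.12, 2.0)], max_lower_c=0.12,
                 a_minus=50.0, a_plus=50.0))
```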

6 Response time analysis for AVB ST networksIn this section, we present the response time analysis for class A and B messages for the case of AVB ST networks, in which the transmission of ST messages has to be taken into account. We also present the transmission delay of ST messages. To do so, we first present the analysis for different classes of messages in one link in the network. Then, we extend that to multi-hop networks.6.1 Response time of messages in class AIn the approach presented in Alderisi et al.

(2013) the messages in the queues associated to the SR classes undergo both TAS and CBS, while the BE messages go through TAS only. According to the TAS mechanism, any non-ST message that is queued and is ready for transmission has to wait not only for the duration of an ST message transmission, but also for an additional time, called a guard band. The guard band is enforced by TAS to avoid the transmission of non-ST traffic that would delay the next ST message. Therefore, when calculating the response time for messages in class A, not only the interference from ST messages should be taken into account, but also the guard band should be considered. For the response time analysis of messages in class A four elements are required. These elements include: (i) interference from higher priority messages (i.e., from ST messages and their guard band), (ii) blocking by lower priority messages, (iii) interference from the same priority messages in the FIFO queue, and (iv) traffic shaper effect.6.1.1 Interference from higher priority messagesIn the higher priority message interference for traffic class A, besides the ST messages, we have to consider the guard band of the ST messages. To do so, we define a virtual message per ST message, whose period and priority are the same as the ST message ones. Note that if ST messages are clustered for transmission, a virtual message for the whole cluster would be sufficient. However, this requires an offset-based scheduling algorithm for ST messages, that is out of the scope of this paper and is left for future work. The transmission time of the virtual message is the maximum transmission time of all messages in classes A, B and BE, whose transmission would not be finished before the starting of the ST message. This is due to the fact that TAS prevents any transmission that can interfere with ST transmission, which in the worst-case is the largest message taking the same route as the message under analysis. Fig. 7 shows a scenario in which a message from class A \(m_A\) could interfere with an \(\textit{ST}\) message \(m_{\textit{ST}}\) scheduled for transmission at time t, but it is prevented from being transmitted by the TAS. The virtual message in this case is depicted by \(C^*_{\textit{ST}}\).Fig. 7Presentation of a virtual messageFull size image

Assuming \(m_k\) as an ST message with transmission time \(C_k\), the virtual message corresponding to \(m_k\) crossing link l is denoted by \(C^*_{k, l}\) and derived in Eq. (10). The equation gives the largest message among other traffic classes (A, B and BE), that traverse the same link as \(m_k\).$$\begin{aligned} C^*_{k, l} = \max _{\begin{array}{c} \forall r \in [1,N] \\ \wedge \; m_r \in lp(m_k) \\ \wedge \; l \in \mathcal {L}_r \end{array}} \{C_r\};\quad \forall k \in \{\textit{class\;ST}\} \end{aligned}$$

(10)
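Eq. (10) is simply a maximum over the lower-priority frames that share the link with the ST message. The sketch below assumes message objects carrying the fields C, P, and links, e.g., the traffic-model sketch from Sect. 4; it is an illustration of the formula rather than part of the original analysis code.

```python
from collections import namedtuple

Msg = namedtuple("Msg", "C P links")   # any object with these fields works

def virtual_message(all_msgs, link):
    """C*_{k,l} of Eq. (10): the largest non-ST frame that traverses link l."""
    lower = [m.C for m in all_msgs if m.P != "ST" and link in m.links]
    return max(lower, default=0.0)

# Example: a class A frame and a best-effort frame share link ("sw1", "sw2").
frames = [Msg(0.12, "A", [("sw1", "sw2")]), Msg(1.2, "BE", [("sw1", "sw2")])]
print(virtual_message(frames, ("sw1", "sw2")))   # -> 1.2
```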

6.1.2 Blocking by lower priority messagesAs it was discussed before, a high priority message may experience multiple instances of blocking by lower priority messages due to the traffic shaper. However, here we show that even when ST messages exist in the network, considering one blocking by the lower priority messages is sufficient if a proper inflation factor is applied to the same priority messages. To do so, we use the same methodology presented in Bordoloi et al.

(2014).Fig. 8Inflation of the same priority messagesFull size image

A scheduling scenario for message \(m_1\) is depicted in Fig. 8 for link l. An interval of time is defined as the duration between the time at which the credit is zero and the time at which the credit is replenished to zero again, after the transmission of the ready messages (Fig. 8). In order to show that inflation of the same priority message covers the blocking time of class A messages by the lower priority message in presence of ST messages, we define an interval where LP, ST and SP messages are transmitted. In this example we consider that the ST message also includes the transmission time of its virtual message. The length of the interval L is calculated in the following equation, where H represents the time needed to replenish the negative credit to zero (see Fig. 8).$$\begin{aligned} L = C_{\textit{LP}} + C_{\textit{ST}} + C_{\textit{SP}} + H \end{aligned}$$

(11)

As the interval is defined between two zero credits, the total credit value remains zero. Thus, the credit value for the phase becomes:$$\begin{aligned} 0 = C_{\textit{LP}} \cdot \alpha ^+_{A,l} + C_{\textit{ST}} \cdot \alpha ^+_{A,l} - C_{\textit{SP}}\cdot \alpha ^-_{A,l} + H \cdot \alpha ^+_{A,l} \end{aligned}$$

(12)

Deriving H from the above equation, and inserting it to the interval length calculation (Eq. 11) we have the following:$$\begin{aligned} L = C_{\textit{SP}} \left( 1 + \frac{\alpha ^-_{A,l}}{\alpha ^+_{A,l}}\right) \end{aligned}$$

(13)

Therefore, the length of the interval only depends on the transmission time of the same priority messages, even when ST messages exist in the network. The Inflated SP message is shown in Fig. 8. Note that there could be several same priority messages in one interval, where in that case \(C_{\textit{SP}}\) is the sum of them.6.1.3 Interference from the same priority messagesIn order to capture the worst-case scenario, we assume that all the same priority messages in the FIFO queue are ahead of the message under analysis. Moreover, as the model is constrained deadline, in a schedulable system, only one instance of the same priority messages can be ahead of the message under analysis in the FIFO queue.6.1.4 Traffic shaper effectSimilar to the discussion for Fig. 5, the negative credit can be removed if the replenishment time after transmission of the message under analysis is taken into account.The response time for messages in class A in link l is calculated in Eq. (14). The iteration starts from \(\textit{RT}^{l,(0)}_i = C_i\) and terminates when \(\textit{RT}^{l,(x)}_i = RT^{l,(x-1)}_i\), where x is the iteration number. The calculation does not need to examine several instances of the message in the busy period. The reason is that the ST messages are the only higher priority messages for class A, and they are strictly periodic. Also, TAS prevents any transmission that can interfere with the ST messages. Therefore, the ST messages cannot be pushed through into the next period of message \(m_i\). This means that the next instances of \(m_i\) cannot have larger response time than the first instance.$$\begin{aligned} \begin{aligned} RT^{l,(x)}_i&= \max _{\begin{array}{c} \forall j \in [1,N] \\ \wedge \; m_j \in lp(m_i) \\ \wedge \; l \in \mathcal {L}_j \end{array}} \{C_j\} + \sum _{\begin{array}{c} \forall j \in [1,N] \\ \wedge \; m_j \in sp(m_i), i \ne j \\ \wedge \; l \in \mathcal {L}_j \end{array}} \left\{ C_j \left( 1 + \frac{\alpha ^-_{A,l}}{\alpha ^+_{A,l}}\right) \right\} \\&\quad + \sum _{\begin{array}{c} \forall j \in [1,N] \\ \wedge \; m_j \in hp(m_i) \\ \wedge \; l \in \mathcal {L}_j \end{array}} \left\{ \left\lceil \frac{RT^{l,(x-1)}_i}{T_j} \right\rceil (C_j + C^*_{j, l})\right\} + \zeta .C_i + \varepsilon ,\\&\textit{where}\; \zeta =\left\{ \begin{array}{ll} 1, &{} \quad \text {if}\; sp(m_i) = \emptyset \\ \left( 1 + \frac{\alpha ^-_{A,l}}{\alpha ^+_{A,l}}\right) , &{} \quad \text {if}\; sp(m_i) \ne \emptyset \end{array}\right. \end{aligned} \end{aligned}$$

(14)
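The iteration of Eq. (14) can be sketched as a short fixed-point loop: higher-priority interference comes only from the ST messages plus their virtual (guard-band) messages, and the loop stops when the response time no longer changes. Parameter names mirror the equation; all values are left to the caller, and the iteration cap is an assumption of the sketch.

```python
import math

def rt_class_a_st(C_i, max_lower_c, same_prio_c, st_msgs,
                  a_minus, a_plus, eps, max_iter=1000):
    """Eq. (14) for one link. same_prio_c: C_j of the other class A messages on
    the link; st_msgs: (C_j, C_star_j, T_j) for the ST messages crossing it;
    eps is the switch fabric latency."""
    infl = 1.0 + a_minus / a_plus
    zeta = 1.0 if not same_prio_c else infl          # per the definition of zeta
    rt = C_i                                         # RT^(0) = C_i
    for _ in range(max_iter):
        rt_new = (max_lower_c
                  + sum(c * infl for c in same_prio_c)
                  + sum(math.ceil(rt / T_j) * (C_j + C_star_j)
                        for C_j, C_star_j, T_j in st_msgs)
                  + zeta * C_i + eps)
        if rt_new == rt:
            return rt_new
        rt = rt_new
    return rt   # did not converge within max_iter (overloaded link)
```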

In Eq. (14), the first term represents the blocking by the lower priority messages, while the second term is the interference from the same priority messages, excluding the message itself. Also, the third term is the interference from the higher priority messages, which is only ST messages for class A. Therefore, the transmission time of virtual messages can be added to the ST transmission times, hence the guard band is also considered in the analysis. The fourth term is the transmission time of the message itself. As mentioned before, the inflation factor for the message under analysis is to cover the negative credit effect. When there is only one message in a class, i.e., \(sp(m_i) = \emptyset \), the credit of the class becomes negative only with that message. In a schedulable system with \(D_i \le T_i\), the credit should become zero at most by the next period of the message. Therefore, there is no negative credit for the message to be accounted for in the analysis. Consequently, the inflation factor of the message under analysis can be removed. Note that the message is delayed by the switch fabric latency (\(\varepsilon \)) accounted for the analysis.6.2 Response time of messages in class BSimilarly to the analysis for class A traffic, blocking times due to lower priority messages and interference from the same and higher priority messages should be considered in the worst-case response time calculation for class B traffic. Following the same proof made in the previous analysis, considering one blocking by the lower priority messages is sufficient if the same priority messages are inflated by \((1 + \frac{\alpha ^-_B}{\alpha ^+_B})\). Moreover, the interference from higher priority messages does not only stem from ST messages, but also from messages in class A. In this analysis, we must consider multiple instances of the message under analysis. Therefore, the queuing delay \(w_i^l(q)\) in link l is calculated in Eq. (15). The first term in Eq. (15) is the blocking by lower priority messages, while the second term is the transmission of \(m_i\) in previous \(q-1\) instances. Again, the transmission time of \(m_i\) is inflated only when there is no same priority messages in the set. The third term is the interference from the same priority messages. Also, the fourth term is the interference from higher priority messages, consisting of classes A and ST messages. Finally, the last term is the interference of virtual messages to consider the guard band in the analysis. Note that the queuing jitter of traffic class A (not by ST) on link l is denoted by \(J_j^l\) and described in the next subsection (Sect. 6.5).$$\begin{aligned} \begin{aligned} w_i^l(q)&= \max _{\begin{array}{c} \forall j \in [1,N] \\ \wedge \; m_j \in lp(m_i) \\ \wedge \; l \in \mathcal {L}_j \end{array}} \{C_j\} + (q - 1) \cdot \zeta . 
C_i \\&\quad + \sum _{\begin{array}{c} \forall j \in [1,N], j \ne i \\ \wedge \; m_j \in sp(m_i) \\ \wedge \; l \in \mathcal {L}_j \end{array}} \left\{ \left\lfloor \frac{(q - 1)T_i}{T_j} + 1\right\rfloor C_j \left( 1 + \frac{\alpha ^-_{B,l}}{\alpha ^+_{B,l}}\right) \right\} \\&\quad + \sum _{\begin{array}{c} \forall j \in [1,N] \\ \wedge \; m_j \in hp(m_i) \\ \wedge \; l \in \mathcal {L}_j \end{array}} \left\{ \left\lfloor \frac{w^l_i(q) + J_j^l}{T_j} + 1 \right\rfloor C_j\right\} \\&\quad + \sum _{\begin{array}{c} \forall j \in [1,N] \\ \wedge \; m_j \in \{class ST\} \\ \wedge \; l \in \mathcal {L}_j \end{array}} \left\{ \left\lfloor \frac{w^l_i(q)}{T_j} + 1 \right\rfloor C^*_{j, l}\right\} \\&\textit{where} \; \zeta = \left\{ \begin{array}{ll} 1, &{} \quad \text {if}\; sp(m_i) = \emptyset \\ \left( 1 + \frac{\alpha ^-_{B,l}}{\alpha ^+_{B,l}}\right) , &{} \quad \text {if}\; sp(m_i) \ne \emptyset \end{array}\right. \end{aligned} \end{aligned}$$

(15)

The maximum response time among the \(q_{\textit{max}}\) instances of the message is the worst-case response time, as calculated in Eq. (16). Note that the switch fabric latency (\(\epsilon \)) is also included in the analysis. Moreover, as for class A, the transmission time of the message under analysis is inflated only when the same priority set is non-empty, to account for the negative credit.$$\begin{aligned} \begin{aligned} RT_i^l = \max _{q = 1\ldots q_{\textit{max}}} \left\{ w_i^l(q) - (q - 1)T_i + \zeta \cdot C_i + \epsilon \right\} ,\\ \textit{where}\; \zeta = \left\{ \begin{array}{ll} 1, &{} \quad \text {if}\; sp(m_i) = \emptyset \\ \left( 1 + \frac{\alpha ^-_{B,l}}{\alpha ^+_{B,l}}\right) , &{} \quad \text {if}\; sp(m_i) \ne \emptyset \end{array}\right. \end{aligned} \end{aligned}$$

(16)

The response time must be examined for instances within the range \([1, q_{\textit{max}}]\), where \(q_{\textit{max}}\) is derived as the smallest positive integer value from Eq. (17). Similarly to Eq. (9), the left side of Eq. (17) is the length of the busy period, hence dividing it by \(T_i\) gives the maximum number of instances that have passed during the busy period. To compute the busy period length, the interference and blocking should be added to the transmission time of the message.$$\begin{aligned} \begin{aligned} \max _{\begin{array}{c} \forall j \in [1,N] \\ \wedge \; m_j \in lp(m_i) \\ \wedge \; l \in \mathcal {L}_j \end{array}} \{C_j\}&+ \sum _{\begin{array}{c} \forall j \in [1,N] \\ \wedge \; m_j \in sp(m_i), j \ne i \\ \wedge \; l \in \mathcal {L}_j \end{array}} \left\{ \left\lfloor \frac{(q - 1)T_i}{T_j} + 1\right\rfloor C_j \left( 1 + \frac{\alpha ^-_{B,l}}{\alpha ^+_{B,l}}\right) \right\} \\&+ \zeta \cdot q \cdot C_i + \sum _{\begin{array}{c} \forall j \in [1,N] \\ \wedge \; m_j \in hp(m_i) \\ \wedge \; l \in \mathcal {L}_j \end{array}} \left\{ \left\lceil \frac{w^l_i(q) + J_j^l}{T_j} \right\rceil C_j\right\} \\&+ \sum _{\begin{array}{c} \forall j \in [1,N] \\ \wedge \; m_j \in \{class ST\} \\ \wedge \; l \in \mathcal {L}_j \end{array}} \left\{ \left\lceil \frac{w^l_i(q)}{T_j} \right\rceil C^*_{j, l}\right\} \le q \cdot T_i \\&\textit{where} \; \zeta = \left\{ \begin{array}{ll} 1, &{} \quad \text {if}\; sp(m_i) = \emptyset \\ \left( 1 + \frac{\alpha ^-_{B,l}}{\alpha ^+_{B,l}}\right) , &{} \quad \text {if}\; sp(m_i) \ne \emptyset \end{array}\right. \end{aligned} \end{aligned}$$

(17)
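To make the use of Eqs. (15)–(17) concrete, the following is a minimal Python sketch of the per-link iteration for one class B message. It is not the authors' implementation; the message structures, field names and the fixed-point initialization are assumptions made for illustration. Each message is a dict with transmission time C and period T; the higher priority (class A) messages carry their queuing jitter J on the link (Eq. 22), and the ST messages carry Cst, standing for \(C^*_{j,l}\). For an overloaded link the fixed-point iterations would not converge, so a real implementation should also bound them.

```python
from math import floor, ceil

def wcrt_class_B(mi, lp, sp, hp, st, alpha_plus, alpha_minus, eps):
    """Worst-case response time of class B message mi on one link (Eqs. 15-17)."""
    infl = 1.0 + alpha_minus / alpha_plus         # (1 + sendSlope/idleSlope)
    zeta = 1.0 if not sp else infl                # no inflation of mi if sp(mi) is empty
    B = max((mj["C"] for mj in lp), default=0.0)  # blocking by one lower priority frame

    def w_of_q(q):
        # fixed-point iteration of Eq. (15); starts from the constant part
        base = B + (q - 1) * zeta * mi["C"]
        w = base
        while True:
            nxt = base
            nxt += sum(floor((q - 1) * mi["T"] / mj["T"] + 1) * mj["C"] * infl for mj in sp)
            nxt += sum(floor((w + mj["J"]) / mj["T"] + 1) * mj["C"] for mj in hp)
            nxt += sum(floor(w / mj["T"] + 1) * mj["Cst"] for mj in st)
            if nxt == w:
                return w
            w = nxt

    def last_instance(q, w):
        # true when the busy period of Eq. (17) fits within q periods of mi
        busy = B + zeta * q * mi["C"]
        busy += sum(floor((q - 1) * mi["T"] / mj["T"] + 1) * mj["C"] * infl for mj in sp)
        busy += sum(ceil((w + mj["J"]) / mj["T"]) * mj["C"] for mj in hp)
        busy += sum(ceil(w / mj["T"]) * mj["Cst"] for mj in st)
        return busy <= q * mi["T"]

    rt, q = 0.0, 1
    while True:
        w = w_of_q(q)
        rt = max(rt, w - (q - 1) * mi["T"] + zeta * mi["C"] + eps)   # Eq. (16)
        if last_instance(q, w):                                      # q reached q_max
            return rt
        q += 1
```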

6.3 Transmission delay of messages in class ST

The ST messages are scheduled offline and the TAS prevents any interference from lower priority messages. Therefore, the transmission delay of ST messages is equal to their transmission time plus the switch fabric latency, as shown in Eq. (18). Note that the switch fabric latency in the last link should be omitted, as the last link is connected to the destination node, not a switch.$$\begin{aligned} RT_i^l =\left\{ \begin{array}{ll} C_i + \epsilon , &{} \quad \text {if}\;l \ne n_i \\ C_i, &{} \quad \text {if}\; l = n_i \end{array}\right. ;\quad \forall m_i \in \{\textit{class\;ST}\} \end{aligned}$$

(18)

6.4 Multi-switch response time

In a multi-switch AVB architecture, messages are buffered in the queues of each switch along their route. Thus, the worst-case response time of a message traversing multiple switches is the sum of the per-hop response times, as shown in Eq. (19). Note that the wire latency is neglected in this calculation, whereas the switch fabric latency is already accounted for in each per-link term; hence \(\epsilon \) does not appear again in Eq. (19). Eq. (19) applies to classes A, B and ST.$$\begin{aligned} RT_i = \sum _{l = 1\ldots n_i} RT_i^l \end{aligned}$$

(19)

6.5 Jitter of the higher priority interference

The response time analysis given in Bordoloi et al. (2014) is presented for a single-switch network without considering the traffic shaper of the nodes. Therefore, messages arrive at the switch every period without variation in their delays, and the queuing jitter due to crossing switches does not appear. The response time analysis presented in Diemer et al. (2012) covers a multi-hop architecture, however the queuing jitter is not considered. Here we show, using a counterexample, that if we do not consider the queuing jitter of a message due to passing through switches, the analysis can give an optimistic result. In AVB ST, the queuing jitter of class A can affect the response time of messages in class B. However, the ST traffic is scheduled offline without interference from other traffic classes, therefore ST messages do not have queuing jitter. In this section, we discuss the effect of the queuing jitter from class A on the class B analysis.

Assume a network with 3 messages, from classes A, B and BE, for the same destination. The parameters of the messages are given in Table 2 (values refer to time units). The idleSlope (\(\alpha ^+_A\)) and sendSlope (\(\alpha ^-_A\)) for class A are 0.4 and 0.6, respectively, while the idleSlope (\(\alpha ^+_B\)) and sendSlope (\(\alpha ^-_B\)) for class B are both equal to 0.5. In this example we assumed \(\epsilon = 0\).

Table 2 Message parameters for an example

A possible scheduling trace with jitter is shown in Fig. 9. In this scenario, we assume that \(m_A\) arrives with a jitter of 4 time units, and \(m_{BE}\) started its transmission slightly before that, as the credit for \(m_B\) was negative. According to the figure, the response time of \(m_B\) is 10 time units. However, when the response time of \(m_B\) is calculated using the analysis presented in this paper without considering jitter, it becomes 8 time units, as shown in Eq. (20), which is less than the 10 time units shown in the figure. This is in contrast with the scheduling scenario shown in Fig. 9. In Eq. (20), \(w_B\) and \(\textit{RT}_B\) are calculated using Eqs. (15) and (16), respectively. Note that the maximum number of instances calculated in Eq. (17) is 1 in this example, i.e., \(w_B\) and \(\textit{RT}_B\) are only calculated for \(q = 1\). Moreover, the inflation factor for the message under analysis \(m_B\) is not considered, as there is no interference from the same priority as \(m_B\).$$\begin{aligned} w_B&= 4 + 2\left\lfloor \frac{2}{10} + 1 \right\rfloor = 6 \nonumber \\ w_B&= 4 + 2\left\lfloor \frac{6}{10} + 1 \right\rfloor = 6 \\ RT_B&= 6 + 2 = 8\nonumber \end{aligned}$$

(20)

Fig. 9 A scenario with jitter for the example depicted in Table 2


Now, when we consider the jitter of \(m_A\) in the calculation (according to Eq. (15)), the response time of message \(m_B\) becomes 10 (see Eq. (21)), which matches the value depicted in the figure. Again, in this calculation \(q = 1\) from Eq. (17).$$\begin{aligned} \begin{aligned} w_B&= 4 + 2\left\lfloor \frac{2 + 4}{10} + 1 \right\rfloor = 6 \\ w_B&= 4 + 2\left\lfloor \frac{6 + 4}{10} + 1 \right\rfloor = 8 \\ w_B&= 4 + 2\left\lfloor \frac{8 + 4}{10} + 1 \right\rfloor = 8 \\ RT_B&= 8 + 2 = 10 \end{aligned} \end{aligned}$$

(21)
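As a sanity check of the two calculations above, the short Python sketch below repeats the fixed-point iteration with and without the jitter of \(m_A\). The concrete values (blocking of 4 by \(m_{BE}\), \(C_A = 2\), \(T_A = 10\), \(C_B = 2\), \(\epsilon = 0\)) are read off Eqs. (20) and (21); the initial value of the iteration is assumed to be \(C_B\).

```python
from math import floor

def w_B(jitter_A, blocking=4, C_A=2, T_A=10, C_B=2):
    """Fixed point of Eq. (15) for m_B with q = 1 and no same priority messages."""
    w = C_B                                       # assumed starting point of the iteration
    while True:
        nxt = blocking + floor((w + jitter_A) / T_A + 1) * C_A
        if nxt == w:
            return nxt
        w = nxt

print(w_B(jitter_A=0) + 2)   # 8  -> Eq. (20), optimistic without the jitter of m_A
print(w_B(jitter_A=4) + 2)   # 10 -> Eq. (21), matches the trace in Fig. 9
```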

In this work, we apply the jitter similarly to other response time analyses for switched Ethernet networks, e.g., Martin and Minet (2006), by adding it to the calculation of the busy period. In order to compute the queuing jitter of a class A message, we need the difference between its worst-case and best-case response times from its source node up to the link on which we are calculating the response time of \(m_i\) in class B. This means that the response times for class A on all hops should be computed before the response time for class B. Equation (22) derives the queuing jitter of \(m_j\) from class A on link l.$$\begin{aligned} \begin{aligned} J_j^l&= \sum _{\begin{array}{c} L = 1 \ldots l \end{array}} \textit{RT}_j^L - \sum _{\begin{array}{c} L = 1 \ldots l \end{array}} \textit{BCRT}_j^L, \\&\quad \sum _{\begin{array}{c} L = 1\ldots l \end{array}} \textit{BCRT}_j^L = l \cdot C_j \end{aligned} \end{aligned}$$

(22)
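The following is a minimal sketch of Eq. (22); the argument names are hypothetical. Given the per-link worst-case response times \(RT_j^1 \ldots RT_j^l\) of a class A message and its transmission time \(C_j\), the best-case term is \(l \cdot C_j\) as in Eq. (22).

```python
def queuing_jitter(rt_per_link, C_j):
    """Eq. (22): jitter of a class A message on link l = len(rt_per_link)."""
    return sum(rt_per_link) - len(rt_per_link) * C_j

# e.g. a class A message with C_j = 2 and per-hop worst-case response times 5 and 6
print(queuing_jitter([5, 6], C_j=2))   # 7
```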

Note that the switch fabric latency (\(\epsilon \)) is considered for both the best- and worst-case response times, however it is subtracted in Eq. (22).

7 Bandwidth reservation for AVB networks

As mentioned before, two formal response time analysis techniques are presented in Diemer et al. (2012) and Bordoloi et al. (2014) to compute the delay of messages in AVB networks. These techniques provide safe upper bounds on the worst-case response time of messages. In the presented analysis, besides the messages' parameters, the idleSlope (reserved bandwidth) is taken into account. The standard defines how to set the idleSlope. Normally, the IEEE 802.1Q standard provides two modes of operation, which are (i) when the SRP is disabled or (ii) when the SRP is enabled. When the SRP is disabled, the idleSlope per class and per link is assigned by management through the adminIdleSlope parameter (see Clause 34.3 in IEEE 2014), which is equal to the operIdleSlope parameter. The operIdleSlope parameter is the actual reserved bandwidth and its calculation is given in the standard (see Clause 34.4 in IEEE 2014). However, when the SRP is enabled, the SRP mechanism uses the Multiple Stream Registration Protocol (MSRP) to register the bandwidth through the operIdleSlope parameter per class and per link. According to the SRP, class A streams should transmit frames at a frequency equal to or a multiple of 8000 frames/s, and class B streams at a frequency equal to or a multiple of 4000 frames/s. Also, in the case of lower message transmission frequencies, the same bandwidth as for 8000 or 4000 frames/s has to be reserved. Such an over-reservation is very pessimistic when the frequency is lower than the one provided by the SR class. For this reason, in this paper, we assume that the SRP is disabled. Therefore, the bandwidth to be set in the operIdleSlope parameter is calculated as the product of the frame size (MaxFrameSize) and the frame transmission rate (maxFrameRate). This calculation, given in Clause 34.4 of the IEEE 802.1Q standard (2014), can be seen as the message utilization. Nevertheless, in most cases the response time analysis cannot converge to a schedulable result if the bandwidth is reserved according to the standard. This is because (i) lower priority blocking is not accounted for in the calculation of the idleSlope, and (ii) the queuing jitter in multi-switch networks is not taken into account.

In the experiments performed in Diemer et al. (2012), bandwidth over-reservation is applied: the required idleSlope for the traffic shaper is multiplied by a value between 2 and 32, i.e., the reserved bandwidth for the messages is increased by 2–32 times. The over-reservation is considered for experimental purposes only, without a formal explanation of why and how to set it. Moreover, in the analysis presented in Bordoloi et al. (2014), the idleSlope is chosen randomly for the experiments. In this section, we show this limitation; for the sake of simplicity, we demonstrate it in the context of plain AVB (i.e., without ST traffic) and for traffic class A. However, the limitation also applies to traffic class B and to AVB ST networks. Then, we propose a solution to find a minimum over-reservation (a new idleSlope) for classes A and B, such that the system becomes schedulable. We present the solution for the case of AVB ST networks. However, the response time analysis of AVB ST is the general form of the AVB analysis. This means that if, in Eq. (15), which computes the queuing delay in an AVB ST network, we set the number of ST messages to zero, we obtain Eq. (7) to calculate the queuing delay in AVB networks.

7.1 Problem formulation

Here, we demonstrate the limitation in two different cases. First, we focus on the effect of lower priority blocking on the bandwidth reservation. Second, we show that even in a network without lower priority messages, the analysis may provide schedulable results only when the periods of all messages are equal. Otherwise, when the bandwidth is reserved according to the standard, the system is not schedulable in any setting.

7.1.1 Lower priority blocking

According to the system model, \(F_i\) is the length of \(m_i\). Moreover, according to the standard (see Clause 34.4 in IEEE 2014), the idleSlope for class A (\(\alpha ^+_A\)) is defined based on the MaxFrameSize (denoted by F in this paper) and maxFrameRate parameters. The maxFrameRate parameter is the transmission rate of the frame and is calculated using the MaxIntervalFrames parameter, which is the maximum number of frames that the sender node may transmit in one “class measurement interval”. The class measurement interval is 125 \(\upmu \)s for class A and 250 \(\upmu \)s for class B (see Clause 34.4 of IEEE 2014). This calculation is given in Clause 34.4 of the standard (IEEE 2014) and is presented below for one message \(m_j\).$$\begin{aligned} \alpha ^+_X = F_j \cdot \textit{maxFrameRate}_j \end{aligned}$$

(23)

$$\begin{aligned} \textit{maxFrameRate}_j = \textit{MaxIntervalFrames}_j \cdot \frac{1}{\textit{classMeasurementInterval}_X} \end{aligned}$$

(24)

Therefore, the idleSlope for all messages in class X is calculated as in Eq. (25).$$\begin{aligned} \alpha ^+_X = \sum _{m_j \in \{class X\}} F_j \cdot \textit{MaxIntervalFrames}_j \cdot \frac{1}{\textit{classMeasurementInterval}_X} \end{aligned}$$

(25)

Since in this paper we characterize a message by its period T, we rewrite the idleSlope based on the periods of the messages. Note that the period is the time interval between two consecutive transmissions of the message from the source node. Therefore, MaxIntervalFrames can be written based on T as below:$$\begin{aligned} \textit{MaxIntervalFrames}_j = \frac{1}{T_j} \cdot \textit{classMeasurementInterval}_X \end{aligned}$$

(26)

Therefore, the idleSlope for class A can be written as in Eq. (27) by inserting MaxIntervalFrames from Eq. (26) into Eq. (25).$$\begin{aligned} \alpha ^+_A = \sum _{m_j \in \{\textit{class A}\}} \frac{F_j}{T_j} \end{aligned}$$

(27)

According to the standard, when the SRP is enabled the MaxIntervalFrames parameter is the maximum number of frames in one class measurement interval, which is a 16-bit unsigned integer value in the traffic specification (TSpec) field (see Clause 35.2.2.8.4 in IEEE 2014). Therefore, any period larger than a class measurement interval is treated as equal to the class measurement interval when computing the idleSlope. However, when the SRP is disabled, as we assume in this paper, the TSpec for registering the bandwidth is not used. Thus, any value of the MaxIntervalFrames parameter can be used to set the idleSlope.

The sendSlope is defined as \(\alpha ^-_A = R - \alpha ^+_A\), according to the standard (see the sendSlope computation in Clause 8.6.8.2 in IEEE 2014). Therefore, the inflation factor discussed in the analysis can be rewritten as in Eq. (28).$$\begin{aligned} \left( 1 + \frac{\alpha ^-_A}{\alpha ^+_A} \right) = \frac{R}{\alpha ^+_A} = \frac{R}{\sum _{m_j \in \{\textit{class A}\}} \frac{F_j}{T_j}} \end{aligned}$$

(28)
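For illustration, the short sketch below evaluates Eqs. (27) and (28) for a hypothetical class A set; the message parameters, and the choice of bits and seconds as units, are assumptions.

```python
R = 100e6                                  # link rate: 100 Mbps
class_A = [
    {"F": 500 * 8, "T": 2000e-6},          # hypothetical: 500-byte frame every 2000 us
    {"F": 300 * 8, "T": 1000e-6},          # hypothetical: 300-byte frame every 1000 us
]

alpha_plus_A = sum(m["F"] / m["T"] for m in class_A)   # Eq. (27), in bit/s
alpha_minus_A = R - alpha_plus_A                       # sendSlope
inflation = 1 + alpha_minus_A / alpha_plus_A           # Eq. (28), equals R / alpha_plus_A

print(alpha_plus_A / 1e6)   # 4.4 Mbps reserved according to the standard
print(inflation)            # ~22.7, the factor applied to same priority interference
```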

We show the limitation by the following lemmas. It should be noted that Lemmas 1 and 2 are valid for the analysis presented in Bordoloi et al. (2014). However, due to the improved inflation factor for the message under analysis, these effects disappear for the analysis presented in this paper. In contrast, Lemmas 3 and 4 are valid for both analyses.

Lemma 1

If there is only one message \(m_i\) from class A in the network, and there is no other traffic from other classes, the response time of \(m_i\) is equal to its period \(T_i\).

Proof

Considering the revised inflation factor in Eq. (28), the response time computation of class A in Eq. (6) can be reformulated in a new form, that is shown in Eq. (29).$$\begin{aligned} RT_i = \max _{m_j \in lp(m_i)} \{C_j\} + \sum _{m_j \in sp(m_i)} \left\{ C_j\cdot \frac{R}{\sum _{m_j \in \{class A\}} \frac{F_j}{T_j}}\right\} \end{aligned}$$

(29)

Then, by replacing C with F (\(F_j = C_j \cdot R\)) in Eq. (29) the response time calculation can be written as in Eq. (30).$$\begin{aligned} RT_i = \max _{m_j \in lp(m_i)} \left\{ \frac{F_j}{R}\right\} + \frac{1}{\sum _{m_j \in \{\textit{class A}\}} \frac{F_j}{T_j}} \sum _{m_j \in \{\textit{class A}\}} F_j \end{aligned}$$

(30)

As there are no messages other than \(m_i\) in the network, the blocking term in the equation is zero, i.e., \(\max _{m_j \in lp(m_i)} \{\frac{F_j}{R}\} = 0\). Therefore, the response time of \(m_i\) is calculated as in Eq. (31).$$\begin{aligned} RT_i = 0 + \frac{1}{\frac{F_i}{T_i}} F_i = T_i \end{aligned}$$

(31)

From above, one can observe that the response time of \(m_i\) is equal to its period. \(\square \)

As one can see from the above lemma, the schedulability test is passed only if we assume implicit deadlines, i.e., \(D_i = T_i\).

Lemma 2

If there is one message \(m_i\) from class A and one message \(m_j\) from class B in the network, the system is not schedulable according to the response time analysis in any setting.

Proof

Using Eq. (30) for the response time analysis and considering \(m_i\) and \(m_j\), the response time of \(m_i\) is computed as below.$$\begin{aligned} RT_i = \frac{F_j}{R} + T_i > T_i \end{aligned}$$

(32)

As the response time analysis is given for a constrained deadline model, i.e., \( D_i \le T_i\), the above system is not schedulable. \(\square \)

We can conclude that, by setting the bandwidth according to the standard, the system cannot become schedulable using the analysis presented in Bordoloi et al. (2014) if there is at least one lower priority message in the network.

7.1.2 Same priority interference

The response time analysis is not only limited because of blocking by lower priority messages. Here, we investigate the schedulability of a system without lower priority messages in two cases: (i) when the periods of all messages are equal, and (ii) when the period of at least one message is larger than the others. The main intention is to show that the presented analysis can only provide schedulable results when the periods of all messages are equal.

Lemma 3

(equal periods) If there are N messages only from class A in the network, and their periods are equal, the response time of all of them is equal to their periods.

Proof

As there are no lower priority messages in the network, the blocking is zero. Also, the periods of the messages are equal, i.e., \(T = T_1 = T_2 = \cdots = T_N\). Note that the response times of all messages in class A are equal, as shown in Eq. (6). Therefore, here we only look at the response time of \(m_N\), i.e., the last message in the set. Using Eq. (30), the response time of \(m_N\) is calculated in Eq. (33).$$\begin{aligned} \textit{RT}_N = \frac{1}{\sum _{k = 1\ldots N} \frac{F_k}{T_k}} \cdot \sum _{k = 1\ldots N} F_k = \frac{T}{\sum _{k = 1\ldots N} F_k} \cdot \sum _{k = 1\ldots N} F_k = T \end{aligned}$$

(33)

One can observe that the response time of the messages is equal to their period. Therefore, the system is always schedulable assuming implicit deadlines for the traffic (\(D_i = T_i\)), for any setting of \(F_i\) and \(T_i\). \(\square \)

Lemma 4

(unequal periods) If there are N messages only from class A in the network, and their periods are equal except for one message with a larger period than the others, the system is not schedulable.

Proof

We assume N messages in the network, where \(T = T_1 = T_2 = \cdots = T_{N-1}\), and \(T_N > T\). As mentioned before, the period can be written as a number of class measurement intervals, i.e., \(T = y \cdot \textit{classMeasurementInterval}\), where \(y > 0\). For instance, if \(y = 1/2\) and assuming class B then \(T = 125\) \(\upmu \)s, which is 2 frames in one class measurement interval. Therefore, \(T_N = z \cdot \textit{classMeasurementInterval}\), where \(z>y\). For example, in class B if \(z = 1\) then \(T_N = 250\) \(\upmu \)s. From the above description for T and \(T_N\), we can derive the following:$$\begin{aligned} \textit{classMeasurementInterval} = \frac{T}{y} = \frac{T_N}{z} \end{aligned}$$

(34)

Therefore, the relation between periods can be written as below, where \(z > y\) or \(z/y > 1\).$$\begin{aligned} T_N = \frac{z}{y}\cdot T \end{aligned}$$

(35)

Redefining the variable as \(x = z/y\), we can rewrite the above equation as below, where \(x>1\).$$\begin{aligned} T_N = x\cdot T \end{aligned}$$

(36)

Therefore, the response time for any message is calculated in Eq. (37).$$\begin{aligned} \begin{aligned} RT_i&= \frac{1}{\sum _{k = 1\ldots N} \frac{F_k}{T_k}} \cdot \sum _{k = 1\ldots N} F_k = \frac{\sum _{k = 1\ldots N} F_k}{\frac{1}{T} \sum _{p = 1\ldots N-1} F_p + \frac{1}{T_N}F_N} \\&= \frac{x\cdot T \cdot \sum _{k = 1\ldots N} F_k}{x \cdot \sum _{p = 1\ldots N-1} F_p + F_N} = \underbrace{\frac{x \cdot \sum _{k = 1\ldots N} F_k}{x\cdot \sum _{p = 1\ldots N-1} F_p + F_N}}_{E} \times T \end{aligned} \end{aligned}$$

(37)

Now, if we show that the computed response time is larger than the message period, then, as the model uses constrained deadlines, the system is not schedulable. Observe that the response time exceeds the period, i.e., \(\textit{RT}_i > T\), exactly when \(E > 1\) (Eq. 38).$$\begin{aligned} \frac{x \cdot \sum _{k = 1\ldots N} F_k}{x \cdot \sum _{p = 1\ldots N-1} F_p + F_N} > 1 \end{aligned}$$

(38)

By reorganizing the above inequality we obtain Eq. (39). Further, we can take \(x \cdot F_N\) out of the summation on the left side of the inequality, which gives Eq. (40).$$\begin{aligned} x \cdot \sum _{k = 1\ldots N} F_k > x \cdot \sum _{p = 1\ldots N-1} F_p + F_N \end{aligned}$$

(39)

$$\begin{aligned} x \cdot \sum _{p = 1\ldots N-1} F_p + x.F_N > x \cdot \sum _{p = 1\ldots N-1} F_p + F_N \end{aligned}$$

(40)

Finally, we reduce the above inequality to reach Eq. (41), as we can remove the summations from both sides.$$\begin{aligned} x \cdot F_N > F_N \end{aligned}$$

(41)

One can observe that the final inequality shown in Eq. (41) is always true, as we defined \(x > 1\). Therefore, Eq. (38) always holds, i.e., the response time of the messages is always larger than their period, hence the system is not schedulable. \(\square \)

To conclude, a system without any lower priority message is schedulable only if the periods of the messages are equal (Lemmas 3, 4). In the case of even one lower priority message, the system is not schedulable in any setting (Lemmas 1, 2) using the analysis presented in Bordoloi et al. (2014).

7.2 Proposed solution

In the previous section, we demonstrated that the system is not schedulable in most cases. Although we showed the limitation for class A traffic, the problem is inherited by the other classes, as well as by AVB ST networks. In order to be able to use the response time analysis, an over-reservation of the reserved bandwidth is essential. On the other hand, over-reservation may waste bandwidth if the reservation is made unnecessarily high. Therefore, we propose a solution to find the minimum required over-reservation for classes A and B. We propose the solution in the context of AVB ST networks, as the general form of the analysis for AVB. For the solution, we define a new idleSlope for the traffic shaper of class X on link l of the network as \(\beta ^+_{X, l}\). Moreover, we define \(\beta ^+_{X,l,i}\) as the idleSlope for \(m_i\) of class X on link l. Intuitively, by increasing \(\beta ^+_{X, l}\) the response time becomes smaller, as the reserved bandwidth is larger. The intention of the solution is to find the minimum \(\beta ^+_{X,l,i}\) such that \(m_i\) meets its deadline. Then, \(\beta ^+_{X,l}\) is derived in Eq. (42) such that all messages in class X meet their deadlines, hence the system becomes schedulable.$$\begin{aligned} \beta ^+_{X, l} = \max _{m_i \in \{class X\}} \{ \beta ^+_{X, l,i}\} \end{aligned}$$

(42)

In addition, according to the standard (IEEE 2014), a maximum reservable bandwidth is defined for each class of traffic, beyond which a reservation cannot be made. Therefore, the over-reservation is limited to the maximum reservable bandwidth. Assuming f is the maximum reservable portion of the bandwidth, the maximum idleSlope for traffic class X (\(\beta ^X_{max, l}\)) is calculated in Eq. (43).$$\begin{aligned} \beta ^X_{max, l} = f\cdot R \end{aligned}$$

(43)

Therefore, the calculated \(\beta ^+_{X, l}\) is valid if it is smaller than or equal to \(\beta ^X_{max, l}\); otherwise the system cannot be made schedulable with any over-reservation using the analysis presented in this paper. It should be noted that the over-reservation is derived based on the response time analysis presented in this paper. Therefore, the over-reservation is directly affected by the level of pessimism in the analysis. Moreover, when there is only one message of a class crossing a link, there is no need for over-reservation of bandwidth for that class on that link. The reason is that the idleSlope does not appear in the analysis when the same priority set is empty, i.e., \(sp(m_i) = \emptyset \). This can be seen in Eq. (14) for class A and in Eq. (16) for class B. Therefore, it is important to mention that the solution presented in this section applies only to the links crossed by traffic classes that have more than one message, i.e., \(sp(m_i) \ne \emptyset \).

To make the system schedulable, the worst-case response time should be less than or equal to the deadline of the message. However, as the response time is computed for one link, it should meet the deadline defined for that link, i.e., \(\textit{RT}_i^l \le D_i^l\). The sum of the deadlines for the links in the route of the message is \(D_i\), i.e., \(\sum _{l = 1\ldots n_i} D_i^l = D_i\). Defining the deadline of a message for each link can be done in several ways. The simple solution is to divide \(D_i\) equally among the \(n_i\) links. However, a smarter solution is to divide the deadline proportionally to the load on the links. Decomposition of the deadline has been studied in the real-time community, e.g., Chatterjee and Strosnider (1995) and Kao and Garcia-Molina (1993). In this paper, we do not focus on optimizing the results based on deadline decomposition and we leave it as future work. For the experiments in this paper, the end-to-end deadlines are divided proportionally to the load on the links. The formulation is discussed in Sect. 8.

7.2.1 Solution for class B

Considering the revised inflation factor in Eq. (28), we can rewrite Eq. (15) for calculating \(w_i^l(q)\) in a new form, which is shown in Eq. (44). For ease of reading, we denote the blocking term and the same priority interference by \(B_i\) and \(A_i\), respectively (see Eq. 44).$$\begin{aligned} \begin{aligned} w_i^l(q)&= \underbrace{\max _{\begin{array}{c} \forall j \in [1,N] \\ \wedge \; m_j \in lp(m_i) \\ \wedge \; l \in \mathcal {L}_j \end{array}} \{C_j\}}_{{B_i}} + \sum _{\begin{array}{c} \forall j \in [1,N] \\ \wedge \; m_j \in hp(m_i) \\ \wedge \; l \in \mathcal {L}_j \end{array}} \left\{ \left\lfloor \frac{w^l_i(q) + J_j^l}{T_j} + 1 \right\rfloor C_j\right\} \\&\quad + \sum _{\begin{array}{c} \forall j \in [1,N] \\ \wedge \; m_j \in \{class ST\} \\ \wedge \; l \in \mathcal {L}_j \end{array}} \left\{ \left\lfloor \frac{w^l_i(q)}{T_j} + 1 \right\rfloor C_{j,l}^*\right\} \\&\quad + \frac{1}{\beta ^+_{B,l,i}} \underbrace{\left( (q-1) F_i + \sum _{\begin{array}{c} \forall j \in [1,N], j \ne i \\ \wedge \; m_j \in sp(m_i) \\ \wedge \; l \in \mathcal {L}_j \end{array}} \left\{ \left\lfloor \frac{(q - 1)T_i}{T_j} + 1\right\rfloor F_j\right\} \right) }_{{A_i}} \end{aligned} \end{aligned}$$

(44)

Eq. (44) is a recursive function that starts with an initial value and continues until it stabilizes, i.e., until the previous value and the new value of \(w_i^l(q)\) become equal. We can reformulate the equation as a function of time, where t evolves until \(w_i^l(t)\) becomes equal to t. This equation is presented in Eq. (45). Note that the equation is presented for a specific instance q.$$\begin{aligned} \begin{aligned} w_i^l(t)&= B_i + \sum _{\begin{array}{c} \forall j \in [1,N] \\ \wedge \; m_j \in hp(m_i) \\ \wedge \; l \in \mathcal {L}_j \end{array}} \left\{ \left\lfloor \frac{t + J_j^l}{T_j} + 1 \right\rfloor C_j\right\} \\&\quad + \sum _{\begin{array}{c} \forall j \in [1,N] \\ \wedge \; m_j \in \{class ST\} \\ \wedge \; l \in \mathcal {L}_j \end{array}} \left\{ \left\lfloor \frac{t}{T_j} + 1 \right\rfloor C_{j,l}^*\right\} + \frac{1}{\beta ^+_{B,l,i}} A_i \end{aligned} \end{aligned}$$

(45)

Equation (45), which is used to evaluate the response time of instance q, is depicted in Fig. 10 as a step function. The first point at which t meets \(w_i^l(t)\) is the queuing delay of \(m_i\), shown as \(Q_i\) in Fig. 10, i.e., \(Q_i = \min (t>0):t = w_i^l(t)\). The intention is to find the minimum \(\beta ^+_{B,l,i}\) from Eq. (45) such that the response time of \(m_i\) in instance q becomes equal to the deadline of \(m_i\). However, this is not trivial, as there are floor operations in the equation. To simplify, we can approximate \(w_i^l(t)\) by removing the floor operations from Eq. (45). The approximation is shown in Eq. (46). Intuitively, \(w_i^{l,apx}(t)\) is always larger than or equal to \(w_i^l(t)\), hence it is still a safe upper bound. This function is depicted in Fig. 10 as a linear function of time. Similarly to the previous equation, we evolve time t until \(w_i^{l,apx}(t) = t\). This point, shown as \(Q_i^{apx}\) in the figure, is the approximated queuing delay of \(m_i\).$$\begin{aligned} \begin{aligned} w_i^{l, apx}(t)&= B_i + \sum _{\begin{array}{c} \forall j \in [1,N] \\ \wedge \; m_j \in hp(m_i) \\ \wedge \; l \in \mathcal {L}_j \end{array}} \left\{ \left( \frac{t + J_j^l}{T_j} + 1\right) C_j\right\} \\&\quad + \sum _{\begin{array}{c} \forall j \in [1,N] \\ \wedge \; m_j \in \{class ST\} \\ \wedge \; l \in \mathcal {L}_j \end{array}} \left\{ \left( \frac{t}{T_j} + 1\right) C_{j,l}^*\right\} + \frac{1}{\beta ^+_{B,l,i}} A_i \end{aligned} \end{aligned}$$

(46)

Fig. 10 The exact busy period and its approximation

With Eqs. (45) and (46) in mind, we continue to find \(\beta ^+_{B,l,i}\), which is presented in the following lemma.

Lemma 5

Assuming that the worst-case response time occurs in \(q'\)th instance, in order for \(m_i\) to meet its deadline using the approximation of queuing delay, \(\beta ^+_{B,l,i}\) should be set as follows:$$\begin{aligned} \begin{aligned} \beta ^+_{B,l,i}&\ge \frac{N}{M},\\ N&= \sum _{\begin{array}{c} \forall j \in [1,N] \\ \wedge \; m_j \in hp(m_i) \\ \wedge \; l \in \mathcal {L}_j \end{array}} \left( \frac{F_i C_j}{T_j}\right) + \sum _{\begin{array}{c} \forall j \in [1,N] \\ \wedge \; m_j \in \{class ST\} \\ \wedge \; l \in \mathcal {L}_j \end{array}} \left( \frac{F_iC_{j,l}^*}{T_j}\right) - F_i - A_i, \\ M&= B_i - D_i^l - (q'-1)T_i + \epsilon + \sum _{\begin{array}{c} \forall j \in [1,N] \\ \wedge \; m_j \in hp(m_i) \\ \wedge \; l \in \mathcal {L}_j \end{array}} \left( \frac{D_i^l + (q' - 1) T_i - \epsilon + J_j^l}{T_j} + 1 \right) C_j \\&\quad + \sum _{\begin{array}{c} \forall j \in [1,N] \\ \wedge \; m_j \in \{class ST\} \\ \wedge \; l \in \mathcal {L}_j \end{array}} \left( \frac{D_i^l + (q'- 1)T_i - \epsilon }{T_j} + 1 \right) C^*_{j,l} \end{aligned} \end{aligned}$$

(47)

Proof

The response time of \(m_i\) in link l is calculated using Eq. (16). As assumed in the lemma, the max operation occurs in the \(q'\)th instance, so we can rewrite Eq. (16) by considering the revised inflation factor and \(w_i^{l, apx}(t)\) as a function of time, which is shown in Eq. (48).$$\begin{aligned} RT_i^l = w_i^{l, apx}(t) - (q' - 1)T_i + \frac{1}{\beta ^+_{B,l,i}} F_i + \epsilon \end{aligned}$$

(48)

In order for \(m_i\) to meet its deadline, \(\textit{RT}_i^l \le D_i^l\) must hold. Let us for now assume that \(\textit{RT}_i^l = D_i^l\). Therefore, Eq. (48) becomes:$$\begin{aligned} D_i^l = w_i^{l, apx}(t) - (q' - 1)T_i + \frac{1}{\beta ^+_{B,l,i}} F_i + \epsilon \end{aligned}$$

(49)

From the above equation we can derive \(w_i^{l, apx}(t)\), which is:$$\begin{aligned} w_i^{l, apx}(t) = D_i^l + (q' - 1)T_i - \frac{1}{\beta ^+_{B,l,i}} F_i - \epsilon \end{aligned}$$

(50)

As mentioned before, in Eq. (46) we have to evolve t until \(w_i^{l, apx}(t) = t\), and this point is \(Q_i^{apx}\) (see Fig. 10). Thus, in a schedulable system \(w_i^{l, apx} = t = Q_i^{apx}\). We can rewrite Eq. (50) assuming that the deadline of \(m_i\) is met, as stated in the lemma, which is shown in Eq. (51).$$\begin{aligned} Q_i^{apx} = D_i^l + (q' - 1)T_i - \frac{1}{\beta ^+_{B,l,i}} F_i - \epsilon \end{aligned}$$

(51)

On the other hand, we can write Eq. (46) as we are aiming \(m_i\) to meet its deadline, i.e., \(w_i^{l,apx} = t = Q_i^{apx}\). Therefore, Eq. (46) becomes:$$\begin{aligned} \begin{aligned} Q_i^{apx}&= B_i + \sum _{\begin{array}{c} \forall j \in [1,N] \\ \wedge \; m_j \in hp(m_i) \\ \wedge \; l \in \mathcal {L}_j \end{array}} \left\{ \left( \frac{Q_i^{apx} + J_j^l}{T_j} + 1\right) C_j\right\} + \\&\sum _{\begin{array}{c} \forall j \in [1,N] \\ \wedge \; m_j \in \{class ST\} \\ \wedge \; l \in \mathcal {L}_j \end{array}} \left\{ \left( \frac{Q_i^{apx}}{T_j} + 1\right) C_{j,l}^*\right\} + \frac{1}{\beta ^+_{B,l,i}} A_i \end{aligned} \end{aligned}$$

(52)

Now we can insert \(Q_i^{apx}\) from Eq. (51) into Eq. (52). By doing so, we obtain Eq. (53).$$\begin{aligned} \begin{aligned}&D_i^l + (q' - 1)T_i - \frac{1}{\beta ^+_{B,l,i}} F_i - \epsilon = B_i \\&\quad + \sum _{\begin{array}{c} \forall j \in [1,N] \\ \wedge \; m_j \in hp(m_i) \\ \wedge \; l \in \mathcal {L}_j \end{array}} \left\{ \left( \frac{D_i^l + (q' - 1)T_i - \frac{1}{\beta ^+_{B,l,i}} F_i - \epsilon + J_j^l}{T_j} + 1\right) C_j\right\} \\&\quad + \sum _{\begin{array}{c} \forall j \in [1,N] \\ \wedge \; m_j \in \{class ST\} \\ \wedge \; l \in \mathcal {L}_j \end{array}} \left\{ \left( \frac{D_i^l + (q' - 1)T_i - \frac{1}{\beta ^+_{B,l,i}} F_i - \epsilon }{T_j} + 1\right) C_{j,l}^*\right\} \\&\quad + \frac{1}{\beta ^+_{B,l,i}} A_i \end{aligned} \end{aligned}$$

(53)

We are interested to find \(\beta ^+_{B,l,i}\), thus we can extract it from Eq. (53), as it is a linear equation. The new idleSlope \(\beta ^+_{B,l,i}\) is calculated in Eq. (54), where for readability we name the numerator and denominator as N and M, respectively.$$\begin{aligned} \begin{array}{ll} \beta ^+_{B,l,i} = \frac{\overbrace{\sum _{\begin{array}{c} \forall j \in [1,N] \\ \wedge \; m_j \in hp(m_i) \\ \wedge \; l \in \mathcal {L}_j \end{array}} \left( \frac{F_i C_j}{T_j}\right) + \sum _{\begin{array}{c} \forall j \in [1,N] \\ \wedge \; m_j \in \{class ST\} \\ \wedge \; l \in \mathcal {L}_j \end{array}} \left( \frac{F_iC_{j,l}^*}{T_j}\right) - F_i - A_i}^{\text {N}}}{\underbrace{B_i - D_i^l - (q'-1)T_i + \epsilon + \sum _{\begin{array}{c} \forall j \in [1,N] \\ \wedge \; m_j \in hp(m_i) \\ \wedge \; l \in \mathcal {L}_j \end{array}} \left( \frac{D_i^l + (q' - 1) T_i - \epsilon + J_j^l}{T_j} + 1 \right) C_j + \sum _{\begin{array}{c} \forall j \in [1,N] \\ \wedge \; m_j \in \{class ST\} \\ \wedge \; l \in \mathcal {L}_j \end{array}} \left( \frac{D_i^l + (q'- 1)T_i - \epsilon }{T_j} + 1 \right) C^*_{j,l}}_{\text {M}}} \end{array} \end{aligned}$$

(54)

We assumed that \(\textit{RT}_i^l = D_i^l\) at the beginning of the proof. By relaxing this assumption and considering \(\textit{RT}_i^l \le D_i^l\), \(\beta ^+_{B,l,i}\) should be computed using Eq. (55), as increasing the bandwidth makes the response time shorter. This proves the lemma.$$\begin{aligned} \beta ^+_{B,l,i} \ge \frac{N}{M} \end{aligned}$$

(55)

\(\square \)

In the above equations, to calculate \(\beta ^+_{B,l,i}\) we need \(q'\), which is unknown. We assumed \(q'\) to be the instance within the range \([1, q_{\textit{max}}]\) that causes the maximum response time in Eq. (16). However, to find the instance \(q'\), we need to examine all the instances within the range, for which the maximum \(q_{\textit{max}}\) is derived according to Eq. (17). Note that Eq. (17) is a function of \(\alpha ^+_{B, l}\), and with the new idleSlope it becomes a function of \(\beta ^+_{B,l}\), which we are aiming to find. This means that while finding \(\beta ^+_{B,l,i}\), \(q_{\textit{max}}\) will change as well.

The algorithm to find \(\beta ^+_{B,l}\) is shown in Algorithm 1. It iterates over all messages in class B crossing link l to calculate the over-reservation for each of them; then the maximum of the idleSlopes among the messages is the final idleSlope. Note that the algorithm finds \(\beta ^+_{B,l}\) for one link of a multi-switch AVB ST architecture, thus it should be executed for all links in the network.

The algorithm starts by finding \(\beta ^+_{B,l,i}\) using Eq. (47) for the first instance of the message, i.e., when \(q = 1\) (line 3). As mentioned before, if the over-reservation exceeds the maximum one, computed in Eq. (43), the system is not schedulable. Thus, the algorithm breaks after examining the maximum possible over-reservation in line (4), and returns the unschedulable flag (sched). Using the derived \(\beta ^+_{B,l,i}\), the maximum number of instances \(q_{\textit{max}}\) is calculated using Eq. (17) in line (7) of the algorithm. Note that we use the new idleSlope \(\beta ^+_{B,l,i}\) instead of \(\alpha ^+_{B,l}\). If the maximum number of instances, on which the calculated \(\beta ^+_{B,l,i}\) is based, is larger than 1, the algorithm continues to find \(\beta ^+_{B,l,i}\) for \(q=2\) to \(q_{\textit{max}}\) in the loop starting from line (10). In each step, if the over-reservation is larger than the maximum possible over-reservation, the algorithm breaks and returns the unschedulable flag (line (12) and the following ones). However, if the \(\beta ^+_{B,l,i}\) computed for a given q is larger than the previously calculated one, the maximum number of instances should be updated again (lines (15) and (16)). If the updated maximum number of instances \(\textit{newQ}\) is smaller than the current \(q_{\textit{max}}\), the algorithm does not need to continue calculating \(\beta ^+_{B,l,i}\), hence it breaks the loop and returns the new idleSlope \(\beta ^+_{B,l,i}\) for \(m_i\). If \(\textit{newQ}\) equals the current \(q_{\textit{max}}\), the loop continues. However, if \(\textit{newQ}\) becomes larger than \(q_{\textit{max}}\), then \(q_{\textit{max}}\) is updated for the continuation of the loop. The loop eventually terminates, as \(\beta ^+_{B,l,i}\) cannot exceed \(\beta ^+_{max, l}\), in which case the loop is broken in line (12). For each message in the loop of the algorithm, the maximum idleSlope is stored in \(\beta ^+_{B,l}\) in line (26). The complexity of the algorithm for link l in the network is \(O(N \times q_{\textit{max}} \times q_{\textit{max}})\), since the function findSlope is linear and the function findQmax iterates at most \(q_{\textit{max}}\) times. Therefore, the algorithm has a polynomial time complexity. A sketch of this iteration is given below.
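The following Python sketch mirrors the structure of Algorithm 1 as described above; it is not the authors' pseudocode. The helpers find_slope(m, q) and find_qmax(m, beta), standing for Eq. (47) and for Eq. (17) evaluated with the new idleSlope, are assumed to be supplied by the caller, and beta_max corresponds to \(f \cdot R\) from Eq. (43).

```python
def beta_B_for_link(class_B_msgs, find_slope, find_qmax, beta_max):
    """Over-reserved idleSlope for class B on one link, or None if unschedulable."""
    beta_B_l = 0.0
    for m in class_B_msgs:
        beta = find_slope(m, 1)                # Eq. (47) for the first instance (q = 1)
        if beta > beta_max:
            return None                        # exceeds the maximum reservable bandwidth
        q_max = find_qmax(m, beta)             # Eq. (17) with the new idleSlope
        q = 2
        while q <= q_max:
            beta_q = find_slope(m, q)
            if beta_q > beta_max:
                return None                    # unschedulable with any over-reservation
            if beta_q > beta:                  # a later instance needs more bandwidth
                beta = beta_q
                new_q = find_qmax(m, beta)
                if new_q < q_max:              # busy period shrank: done with this message
                    break
                q_max = new_q                  # equal: unchanged; larger: extend the range
            q += 1
        beta_B_l = max(beta_B_l, beta)         # Eq. (42): max over the class B messages
    return beta_B_l
```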

7.2.2 Solution for class A

The response time for messages in class A is computed only for one instance, as shown in Eq. (14). We can write the response time equation as a function of time, similarly to the solution for class B. Moreover, we can write the approximation of the equation by removing the ceiling operation. Thus, the response time computation becomes as shown in Eq. (56), where we use the new idleSlope for \(m_i\) (\(\beta ^+_{A,l,i}\)) instead of \(\alpha ^+_{A,l}\). In Eq. (56) we evolve time t until \(RT_i^l(t) = t\), which is the solution of the equation.$$\begin{aligned} \begin{aligned} RT^l_i(t)&= \max _{\begin{array}{c} \forall j \in [1,N] \\ \wedge \; m_j \in lp(m_i) \\ \wedge \; l \in \mathcal {L}_j \end{array}} \{C_j\} + \sum _{\begin{array}{c} \forall j \in [1,N] \\ \wedge \; m_j \in sp(m_i) \\ \wedge \; l \in \mathcal {L}_j \end{array}} \left\{ C_j \left( 1 + \frac{\beta ^-_{A,l,i}}{\beta ^+_{A,l,i}}\right) \right\} \\&\quad + \sum _{\begin{array}{c} \forall j \in [1,N] \\ \wedge \; m_j \in hp(m_i) \\ \wedge \; l \in \mathcal {L}_j \end{array}} \left\{ \left( \frac{t}{T_j} + 1\right) (C_j + C^*_{j, l})\right\} + \epsilon \end{aligned} \end{aligned}$$

(56)

We are interested in finding \(\beta ^+_{A,l,i}\), which we describe in the following lemma. Then, the new idleSlope for all messages, \(\beta ^+_{A,l}\), is calculated by Eq. (42).

Lemma 6

In order for \(m_i\) to meet its deadline using the approximation of the response time, the new idleSlope for \(m_i\) is set by:$$\begin{aligned} \beta ^+_{A,l,i} \ge \frac{\sum _{\begin{array}{c} \forall j \in [1,N] \\ \wedge \; m_j \in sp(m_i) \\ \wedge \; l \in \mathcal {L}_j \end{array}} F_j}{D_i^l - \max _{\begin{array}{c} \forall j \in [1,N] \\ \wedge \; m_j \in lp(m_i) \\ \wedge \; l \in \mathcal {L}_j \end{array}} \{C_j\} - \epsilon - \sum _{\begin{array}{c} \forall j \in [1,N] \\ \wedge \; m_j \in hp(m_i) \\ \wedge \; l \in \mathcal {L}_j \end{array}} \left\{ \left( \frac{D_i^l}{T_j} + 1 \right) (C_j + C^*_{j, l})\right\} } \end{aligned}$$

(57)

Proof

In order for \(m_i\) to meet its deadline, the response time should be less than or equal to the link deadline, i.e., \(\textit{RT}_i^l \le D_i^l\). Let us assume for now that \(\textit{RT}_i^l = D_i^l\). This means that the solution of Eq. (56) should be equal to the link deadline, i.e., \(\textit{RT}_i^l(t) = t = D_i^l\). By inserting \(D_i^l\) for both t and \(\textit{RT}_i^l(t)\) in Eq. (56) (as they are equal once t evolves to the solution), Eq. (56) becomes the linear equation shown in Eq. (58).$$\begin{aligned} \begin{aligned} D_i^l&= \max _{\begin{array}{c} \forall j \in [1,N] \\ \wedge \; m_j \in lp(m_i) \\ \wedge \; l \in \mathcal {L}_j \end{array}} \{C_j\} + \sum _{\begin{array}{c} \forall j \in [1,N] \\ \wedge \; m_j \in sp(m_i) \\ \wedge \; l \in \mathcal {L}_j \end{array}} \left\{ C_j \left( 1 + \frac{\beta ^-_{A,l,i}}{\beta ^+_{A,l,i}}\right) \right\} \\&\quad + \sum _{\begin{array}{c} \forall j \in [1,N] \\ \wedge \; m_j \in hp(m_i) \\ \wedge \; l \in \mathcal {L}_j \end{array}} \left\{ \left( \frac{D_i^l}{T_j} + 1\right) (C_j + C^*_{j, l})\right\} + \epsilon \end{aligned} \end{aligned}$$

(58)

We also use the revised inflation factor for the same priority interference term in Eq. (58), which becomes as Eq. (59).$$\begin{aligned} \begin{aligned} D_i^l&= \max _{\begin{array}{c} \forall j \in [1,N] \\ \wedge \; m_j \in lp(m_i) \\ \wedge \; l \in \mathcal {L}_j \end{array}} \{C_j\} + \sum _{\begin{array}{c} \forall j \in [1,N] \\ \wedge \; m_j \in sp(m_i) \\ \wedge \; l \in \mathcal {L}_j \end{array}} \left\{ \left( \frac{F_j}{\beta ^+_{A,l,i}} \right) \right\} \\&\quad + \sum _{\begin{array}{c} \forall j \in [1,N] \\ \wedge \; m_j \in hp(m_i) \\ \wedge \; l \in \mathcal {L}_j \end{array}} \left\{ \left( \frac{D_i^l}{T_j} + 1\right) (C_j + C^*_{j, l})\right\} + \epsilon \end{aligned} \end{aligned}$$

(59)

We can derive \(\beta ^+_{A,l,i}\) from Eq. (59), as it is a linear equation, that is shown in Eq. (60). Note that at the beginning of the proof we assumed \(\textit{RT}_i^l = D_i^l\), however in order to have \(\textit{RT}_i^l \le D_i^l\) the over-reservation should be equal or larger than the computed one in Eq. (60), which proves the lemma.$$\begin{aligned} \beta ^+_{A,l,i} = \frac{\sum _{\begin{array}{c} \forall j \in [1,N] \\ \wedge \; m_j \in sp(m_i) \\ \wedge \; l \in \mathcal {L}_j \end{array}} F_j}{D_i^l - \max _{\begin{array}{c} \forall j \in [1,N] \\ \wedge \; m_j \in lp(m_i) \\ \wedge \; l \in \mathcal {L}_j \end{array}} \{C_j\} - \epsilon - \sum _{\begin{array}{c} \forall j \in [1,N] \\ \wedge \; m_j \in hp(m_i) \\ \wedge \; l \in \mathcal {L}_j \end{array}} \left\{ \left( \frac{D_i^l}{T_j} + 1 \right) (C_j + C^*_{j, l})\right\} } \end{aligned}$$

(60)

\(\square \)
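For completeness, here is a minimal sketch of Eq. (57) for one class A message on one link; all parameter names are hypothetical. D_l is the link deadline, sp_F the frame sizes of the same priority messages on the link, lp_C the transmission times of the lower priority messages, hp a list of (T_j, C_j + C*_{j,l}) pairs for the higher priority (ST) interference, and eps the switch fabric latency.

```python
def beta_A_min(D_l, sp_F, lp_C, hp, eps):
    """Minimum over-reserved idleSlope for a class A message (Eq. 57)."""
    numerator = sum(sp_F)
    denominator = (D_l
                   - max(lp_C, default=0.0)                     # lower priority blocking
                   - eps                                        # switch fabric latency
                   - sum((D_l / T_j + 1) * C for T_j, C in hp)) # ST interference
    if denominator <= 0:
        return None   # the link deadline cannot be met by over-reservation alone
    return numerator / denominator
```

The class-wide value \(\beta ^+_{A,l}\) is then the maximum of these per-message values over the class A messages crossing the link, as in Eq. (42).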

7.3 Over-reservation in the AVB networks

Algorithm 1 can also be used to find the over-reservation of the idleSlope in AVB networks. The only change is to assume that the set of ST messages is empty in all calculations, i.e., \(\{class\;ST\} = \emptyset \), as the presented analysis is the general form of the AVB analysis.

7.4 Evaluation of bandwidth over-reservation

In this experiment we show the effect of the delay accumulated over multiple hops on the bandwidth over-reservation. For this evaluation we consider a network with six switches, as illustrated in Fig. 11.

Fig. 11 An architecture for the experiment of bandwidth over-reservation

The total bandwidth capacity of the network is 100 Mbps, where 40 Mbps is set for each class of the SR traffic as the maximum reservable bandwidth. We assumed four messages, two from class A and two from class B. All messages have a 500-byte payload and a 2000 \(\upmu \)s period. The source of all four messages is N1, and we change their destination from N2 to N7, i.e., the number of switches that the messages cross changes from one to six. We computed the idleSlopes of classes A and B based on the standard (i.e., \(\alpha ^+_A\) and \(\alpha ^+_B\)) on the destination link. For example, in the case of crossing switch 1, we computed the idleSlopes for link L2, which leads to the messages' destination node N2 (see Fig. 11). In addition, we calculated the new idleSlopes (i.e., \(\beta ^+_A\) and \(\beta ^+_B\)) based on the solution presented in this paper. The idleSlopes are illustrated in Fig. 12. As can be seen in the figure, the idleSlopes based on the standard remain constant (2.56 Mbps) as the number of hops increases. This is because the standard way of reserving bandwidth does not consider the queuing jitter from crossing the previous switches. In contrast, the new idleSlopes increase with the number of hops. This means that the messages experience larger delays after crossing several switches, hence the reserved bandwidth should be higher to make the system schedulable. In this experiment, a higher bandwidth is required for class B, as it is subject to higher priority interference, unlike class A.

Fig. 12 Bandwidth and over-reserved bandwidth based on number of hops

In order to show the effect of the message parameters on the over-reservation, we ran two experiments. In the first experiment, we fixed the payload of the messages at 300 bytes and changed the period of the messages from 1600 to 3000 \(\upmu \)s. The new idleSlopes at every hop up to the destination link, for both classes A and B, are presented in Fig. 13. In the second experiment, we fixed the periods at 2500 \(\upmu \)s and increased the payload of all messages from 100 to 500 bytes. The idleSlopes for the second experiment are illustrated in Fig. 14. In general, increasing the period or decreasing the payload of the messages decreases the idleSlope, due to the decrease in the messages' utilization. However, the trend in Figs. 13 and 14 shows that the messages in class B are affected more than the messages in class A. This is indeed due to the interference from higher priority messages on class B.

Fig. 13 The over-reserved bandwidth based on number of hops—changing the periods

Fig. 14 The over-reserved bandwidth based on number of hops—changing the payloads

8 Experiments

In this section, we conduct simulative assessments of two types of network architectures: the first refers to an industrial network, the second to an automotive network. The message parameters are taken from the automation and automotive application domains, respectively, and the total bandwidth is set to 100 Mbps. The industrial scenario is set up with flow periods in the order of a few milliseconds, typical of microgrid automation applications (Rinaldi et al. 2015), while the topology is chosen to cover the maximum number of hops for which the standard provides guarantees (i.e., 7 hops). Moreover, the traffic parameters in the automotive case study are inspired by the architecture designed by the BMW group (Lim et al. 2011). The response time of the defined messages is computed using the analysis presented in this paper. Then, we simulate the examples using OMNeT++ and measure the response time of the messages at run-time for 500 s. We compare the calculated and measured response times to assess the level of pessimism in the defined case studies. It should be noted that this is not the maximum pessimism, as the maximum is rather difficult to determine.

In the experiments we need to split the deadline of each message into deadlines for each link. In this paper, we decompose the deadlines proportionally to the load on each link. We use Eq. (61) to derive the deadline of \(m_i\) for link l.$$\begin{aligned} \begin{aligned} D_i^l&= \frac{{\textit{load}_i^l}}{\sum _{l = 1\ldots n_i} \textit{load}_i^l}\cdot D_i, \\ \textit{load}_i^l&= \max _{\begin{array}{c} \forall j \in [1,N] \\ \wedge \; m_j \in lp(m_i) \\ \wedge \; l \in \mathcal {L}_j \end{array}} \left\{ \frac{F_j}{T_j}\right\} + \sum _{\begin{array}{c} \forall j \in [1,N] \\ \wedge \; m_j \in (hp(m_i) \vee sp(m_i)) \\ \wedge \; l \in \mathcal {L}_j \end{array}} \left\{ \frac{F_j}{T_j}\right\} + \sum _{\begin{array}{c} \forall j \in [1,N] \\ \wedge \; m_j \in \{class ST\} \\ \wedge \; l \in \mathcal {L}_j \end{array}} \left\{ \frac{F_j + F_{j,l}^*}{T_j}\right\} \end{aligned} \end{aligned}$$

(61)
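A minimal sketch of the deadline decomposition of Eq. (61) is given below; the per-link loads are assumed to be precomputed as in the second part of Eq. (61), and the example numbers are hypothetical.

```python
def decompose_deadline(D_i, load_per_link):
    """Split the end-to-end deadline D_i proportionally to the per-link load (Eq. 61)."""
    total = sum(load_per_link)
    return [D_i * load / total for load in load_per_link]

# e.g. a 1500 us end-to-end deadline over three links with loads 0.1, 0.3 and 0.6
print(decompose_deadline(1500, [0.1, 0.3, 0.6]))   # [150.0, 450.0, 900.0]
```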

The simulation model was implemented using the OMNeT++ framework, which is a discrete-event simulator. For the simulation of the Ethernet physical layer the INET libraries were adopted, while the MAC layer (with the Forwarding and Queuing for Time-Sensitive Streams (FQTSS) defined in the standard (IEEE 2014)), the TAS and the traffic generators were implemented from scratch. In the simulation model no clock synchronization protocol was implemented, as clocks are assumed to be synchronized in the system model. The Stream Reservation Protocol was disabled in order to assess the performance with the idleSlope values calculated offline. Statistics on the delays were taken at the application level and no processing time on the nodes was assumed (only the switch fabric latency of 5.2 \(\upmu \)s is considered). Two kinds of nodes were implemented, the AVB ST host nodes and the AVB ST switches. The simulation model was validated by assessing the behavior of the messages in several scenarios and by comparing the timing parameters, calculated under predictable test scenarios, with those obtained in the simulation.

8.1 Industrial case study

In this case study, we assumed an architecture consisting of six switches connected in a line topology. The architecture is depicted in Fig. 15. In this example, there are eight nodes, seven of which are talkers and one is a listener. This results in a high load on the listener node.

Fig. 15 Architecture of the industrial case study

We defined eight messages in different traffic classes. The parameters of the messages are shown in Table 3. The overhead of SR messages is assumed to be 42 bytes, while the overhead of ST messages is assumed to be 30 bytes, as the 12-byte interpacket gap is removed. Note that the listener for the messages is N8 in this example. In this experiment we do not have any BE traffic, and the switch fabric latency \(\epsilon \) is assumed to be 5.2 \(\upmu \)s.

Table 3 Parameters of the messages in the industrial case study

In the depicted architecture, we computed the idleSlopes for classes A and B (\(\alpha ^+_A\) and \(\alpha ^+_B\)) based on the standard for each output link. Also, we calculated the over-reserved idleSlopes (\(\beta ^+_A\) and \(\beta ^+_B\)) according to the solution presented in this paper. The resulting idleSlopes are shown in Table 4, in Mbps. As mentioned in the system model, each link has two directions (full-duplex ports). Therefore, the idleSlopes of classes A and B are computed for each direction of a link. In Table 4, for instance, L1 shows link L1 with direction from N1 to SW1, hence it presents the idleSlope for the output port of N1.

Table 4 The idleSlope (based on the standard and the over-reservation) for each link in the network—industrial case study

In Table 4, a zero value shows that no traffic of that class goes through that particular link and direction. For example, only a class A message crosses link L1; therefore, the idleSlope for class B is zero. Another observation is that the over-reservation for the links with higher load is larger. For instance, the idleSlope for class A on link L13 is 8.26 Mbps, since N8 is the listener for all defined messages. However, the over-reserved idleSlope is computed as 45.54 Mbps using the solution presented in this paper. This means that, in order to have a schedulable system, the idleSlope should be increased approximately 6 times for class A on link L13. This increase is lower on links with less load. The reason is that, by increasing the number of messages, the pessimism in the analysis becomes higher; therefore, the bandwidth should be increased more to make the system schedulable. Furthermore, on some links no over-reservation is required. For instance, only one message crosses link L1, hence, according to the analysis and the solution, no over-reservation is required.

Fig. 16 Response time of the messages for the industrial case study

Fig. 17 Architecture of the automotive case study

Using the over-reserved idleSlopes, we computed the response times of the messages. Moreover, we measured the response times in a simulation to compare them with the computed results. The simulation was run with both the original idleSlope (\(\alpha ^+\)) and the over-reserved idleSlope (\(\beta ^+\)) to show the effect of over-reservation in the simulation. We used offsets for the ST messages. In the industrial case study there are two ST messages with 4 ms periods; the offsets are set to 0 s and 2 ms, respectively, therefore the interval between two consecutive ST messages is always 2 ms. The results are illustrated in Fig. 16. In the figure, the computed response times are indicated by RTA, while the simulation measurements are identified by Sim Max \(\alpha \) and Sim Max \(\beta \) for the maximum measured values with the original and over-reserved idleSlope, respectively. As can be seen, all measured response times with the over-reserved idleSlope are less than or equal to the computed ones. Moreover, the largest difference between the maximum measured and the calculated response time is for message 2, which shows around 86% pessimism. However, this level of pessimism holds only for this example; showing the level of pessimism of the analysis is rather difficult, as it requires providing a worst-case example and ensuring that the simulation reaches the worst-case results. Note that the measured and computed response times for the ST messages (messages 3 and 4) are equal, as there is no interference or blocking on them. Also, their response times are much smaller than those of the other classes, thus showing the effectiveness of the AVB ST proposal. Another observation is that the measured response time of a message with the original idleSlope can be larger than the deadline of the message. In this case study, the measured response time of message 5 with the original idleSlope is 2033 \(\upmu \)s, which is larger than its deadline of 1875 \(\upmu \)s, hence the system is not schedulable. We can conclude that there are case studies in which the system is not schedulable (with both the response time analysis and the simulation) if the idleSlope is assigned according to the standard. The over-reservation algorithm presented in this paper is very useful for achieving the minimum over-reservation and system schedulability in such scenarios.

Table 5 Parameters of the messages for the automotive case study

Table 6 The idleSlope (based on the standard and the over-reservation) for each link in the network—automotive case study

8.2 Automotive case study

In this case study we consider an example automotive network consisting of two switches in a double star topology, illustrated in Fig. 17. The network supports an ADAS system consisting of three cameras (CAM1–CAM3), which send video frames to a specialized processing unit, named the DACAM. The DACAM processes the video frames and produces both warnings, which are sent to a Control Unit (CU) and to a Head Unit, and aggregated flows, which are displayed by the Head Unit. The network also supports the bidirectional exchange of control messages between the CU, the DACAM, and the Head Unit. In addition to these safety-critical flows, the network also supports some multimedia/infotainment and telematics systems, which are real-time, but not safety-critical.

Fig. 18 Response time of the messages for the automotive case study

For this case study we define 30 messages whose properties are inspired by realistic automotive messages. Audio and video frames are assigned to classes A and B, while the various types of control messages are all assigned to the ST class. The properties of the messages are presented in Table 5. The switch fabric latency is assumed to be 5.2 \(\upmu \)s. Similarly to the previous case study, we computed the idleSlope for classes A and B both according to the standard and according to the over-reservation approach presented in this paper. The results for each link are shown in Table 6.

Considering the over-reserved bandwidth on each link, we calculated the response times of the messages according to the analysis presented in this paper, and we also measured the response times in the simulation. The results are depicted in Fig. 18. As in the previous experiment, RTA denotes the computed response times, while Sim Max \(\alpha \) and Sim Max \(\beta \) denote the maximum measured response times with the original and the over-reserved idleSlope, respectively. As can be seen, the computed response times of the messages belonging to classes A and B are always larger than those measured with the over-reserved idleSlope. Also, the measured and computed response times of the ST messages are always equal, and small compared to the response times of the other classes. The highest pessimism in this example is around 86%, for message 4. In the figure, the measured response times of messages 1, 2 and 3 differ even though their parameters are identical: in the simulation these messages arrive at the switch at the same time, in an order that is preserved throughout the simulation, so the first message has a shorter response time than the others, whereas the analysis considers the worst case for each individual message. This case study confirms the practicality of the ST approach, as the ST messages have very short latency compared to the SR classes. Moreover, the over-reservation algorithm helps reduce the messages' response times, e.g., for message 27.

9 Conclusion and future work

In this work, we presented a response time analysis for multi-hop AVB ST networks, which can also be applied to multi-hop AVB networks. The proposed analysis exploits a bandwidth over-reservation concept to overcome the limitations of state-of-the-art response time analysis approaches for AVB networks. We showed that the analysis based on the proposed bandwidth over-reservation method is effective by comparing the analytical results with the simulation results obtained using OMNeT++. As shown in the experiments, the presented analysis entails a level of pessimism. Since decreasing the pessimism would lead to less bandwidth over-reservation and better resource utilization, future work will address ways to reduce this pessimism, also considering previous findings and approaches on stochastic analysis, such as those in Diaz et al. (2004) and Kaczynski et al. (2007). As future work we will also address a response time analysis for AVB ST combined with the frame preemption mechanism introduced by the IEEE 802.1Qbu standard.

References

Alderisi G, Caltabiano A, Vasta G, Iannizzotto G, Steinbach T, Lo Bello L (2012) Simulative assessments of IEEE 802.1 Ethernet AVB and time-triggered Ethernet for advanced driver assistance systems and in-car infotainment. In: Vehicular networking conference

Alderisi G, Iannizzotto G, Lo Bello L (2012) Towards 802.1 Ethernet AVB for advanced driver assistance systems: a preliminary assessment. In: IEEE 17th conference on emerging technologies and factory automation

Alderisi G, Patti G, Lo Bello L (2013) Introducing support for scheduled traffic over IEEE audio video bridging networks. In: 18th IEEE conference on emerging technologies and factory automation

Bordoloi UD, Aminifar A, Eles P, Peng Z (2014) Schedulability analysis of Ethernet AVB switches. In: 20th IEEE international conference on embedded and real-time computing systems and applications

Chatterjee S, Strosnider J (1995) Distributed pipeline scheduling: end-to-end analysis of heterogeneous, multi-resource real-time systems. In: Proceedings of the 15th international conference on distributed computing systems

Cummings R (2012) 802.1Qbv scheduled traffic: window options. In: 802.1 interim meeting

Davis RI, Burns A, Bril RJ, Lukkien JJ (2007) Controller Area Network (CAN) schedulability analysis: refuted, revisited and revised. Real-Time Syst J 35:239–272

Davis R, Kollmann S, Pollex V, Slomka F (2011) Controller Area Network (CAN) schedulability analysis with FIFO queues. In: 23rd Euromicro conference on real-time systems

De Azua JAR, Boyer M (2014) Complete modelling of AVB in Network Calculus framework. In: 22nd international conference on real-time networks and systems

Diaz JL, Lopez JM, Garcia M, Campos AM, Kanghee K, Lo Bello L (2004) Pessimism in the stochastic analysis of real-time systems: concept and applications. In: Proceedings of the 25th IEEE international real-time systems symposium

Diemer J, Thiele D, Ernst R (2012) Formal worst-case timing analysis of Ethernet topologies with strict-priority and AVB switching. In: 7th IEEE international symposium on industrial embedded systems

Heidinger I, Geyer F, Schneele S, Paulitsch M (2012) A performance study of audio video bridging in aeronautic Ethernet networks. In: 7th IEEE international symposium on industrial embedded systems

IEEE (2008) IEEE standard for a precision clock synchronization protocol for networked measurement and control systems. IEEE Std 1588-2008 (revision of IEEE Std 1588-2002)

IEEE (2011) IEEE Std. 802.1AS-2011, IEEE standard for local and metropolitan area networks—timing and synchronization for time-sensitive applications in bridged local area networks

IEEE (2014) IEEE Std. 802.1Q, IEEE standard for local and metropolitan area networks, bridges and bridged networks

IEEE (2015) IEEE Std. 802.1Qbv, IEEE standard for local and metropolitan area networks, bridges and bridged networks, amendment 25: enhancement for scheduled traffic

Imtiaz J, Jasperneite J, Han L (2009) A performance study of Ethernet Audio Video Bridging (AVB) for industrial real-time communication. In: IEEE conference on emerging technologies and factory automation

Imtiaz J, Jasperneite J, Schriegel S (2011) A proposal to integrate process data communication to IEEE 802.1 Audio Video Bridging (AVB). In: IEEE 16th conference on emerging technologies and factory automation

Imtiaz J, Jasperneite J, Weber K (2012) Approaches to reduce the latency for high priority traffic in IEEE 802.1 AVB networks. In: 9th IEEE international workshop on factory communication systems

Jasperneite J, Schumacher M, Weber K (2009) A proposal for a generic real-time Ethernet system. IEEE Trans Ind Inform 5(2):75–85

Kaczynski GA, Lo Bello L, Nolte T (2007) Deriving exact stochastic response times of periodic tasks in hybrid priority-driven soft real-time systems. In: IEEE conference on emerging technologies and factory automation

Kao B, Garcia-Molina H (1993) Deadline assignment in a distributed soft real-time system. In: Proceedings of the 13th international conference on distributed computing systems

Keynote Talk (2013) Experiences from EAST-ADL use. EAST-ADL open workshop, Gothenburg

Land I, Elliott J (2011) Architecting ARINC 664 (AFDX) solutions

Leboudec J, Thiran P (2001) Network calculus. Springer, Berlin

Lee KC, Lee S, Lee MH (2006) Worst case communication delay of real-time industrial switched Ethernet with multiple levels. IEEE Trans Ind Electron 53(5):1669–1676

Lehoczky J (1990) Fixed priority scheduling of periodic task sets with arbitrary deadlines. In: 11th real-time systems symposium

Lim HT, Zahrer P, Volker L (2011) Performance evaluation of the inter-domain communication in a switched Ethernet based in-car network. In: 36th IEEE conference on local computer networks

Lo Bello L (2011) The case for Ethernet in automotive communications. SIGBED Rev 8(4):7–15

Lo Bello L (2014) Novel trends in automotive networks: a perspective on Ethernet and the IEEE Audio Video Bridging. In: 19th IEEE international conference on emerging technologies and factory automation

Lim HT, Weckemann K, Herrscher D (2011) Performance study of an in-car switched Ethernet network without prioritization. In: Proceedings of the third international conference on communication technologies for vehicles

Martin S, Minet P (2006) Worst case end-to-end response times of flows scheduled with FP/FIFO. In: International conference on networking, international conference on systems and international conference on mobile communications and learning technologies

Palencia JC, González Harbour M (1998) Schedulability analysis for tasks with static and dynamic offsets. In: Proceedings of the IEEE real-time systems symposium

Pannel D (2012) AVB generation 2 latency improvement options. In: 802.1 AVB group

Reimann F, Graf S, Streit F, Glas M, Teich J (2013) Timing analysis of Ethernet AVB-based automotive E/E architectures. In: 18th conference on emerging technologies and factory automation

Rinaldi S, Ferrari P, Ali NM, Gringoli F (2015) IEC 61850 for micro grid automation over heterogeneous network: requirements and real case deployment. In: 13th IEEE international conference on industrial informatics

Schneele S, Geyer F (2012) Comparison of IEEE AVB and AFDX. In: IEEE/AIAA 31st digital avionics systems conference

Steinbach T, Lim HT, Korf F, Schmidt T, Herrscher D, Wolisz A (2012) Tomorrow's in-car interconnect? A competitive evaluation of IEEE 802.1 AVB and time-triggered Ethernet (AS6802). In: Vehicular technology conference

Thangamuthu S, Concer N, Cuijpers P, Lukkien J (2015) Analysis of Ethernet-switch traffic shapers for in-vehicle networking applications. In: Design, automation and test in Europe conference and exhibition

Tuohy S, Glavin M, Hughes C, Jones E, Trivedi M, Kilmartin L (2015) Intra-vehicle networks: a review. IEEE Trans Intell Transp Syst 16(2):1–12

Wandeler E, Thiele L, Verhoef M, Lieverse P (2006) System architecture evaluation using modular performance analysis: a case study. Int J Softw Tools Technol Transf 8(6):649–667

Acknowledgements
This work is supported by the Swedish Foundation for Strategic Research via the PRESS, FiC and LUCIA projects.

Author information
Authors and Affiliations
MRTC/Mälardalen University, Västerås, Sweden: Mohammad Ashjaei, Moris Behnam & Thomas Nolte
University of Catania, Catania, Italy: Gaetano Patti, Giuliana Alderisi & Lucia Lo Bello

Corresponding author
Correspondence to Mohammad Ashjaei.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article
Cite this article
Ashjaei, M., Patti, G., Behnam, M. et al. Schedulability analysis of Ethernet Audio Video Bridging networks with scheduled traffic support. Real-Time Syst 53, 526–577 (2017). https://doi.org/10.1007/s11241-017-9268-5
Published: 02 February 2017
Issue Date: July 2017
DOI: https://doi.org/10.1007/s11241-017-9268-5

Keywords: Ethernet AVB, Scheduled traffic, Schedulability analysis, Response time analysis, Bandwidth reservation, Over-reservation
