
Chapter Five

5.0 VIDEO QUALITY OF SERVICE

The QoS problems associated with video traffic on the internet are easier to understand by first considering VoIP. VoIP aims to overcome the limitations of conventional PSTN networks by improving the network infrastructure and expanding the customer base beyond PSTN subscribers while maintaining service offerings and end-to-end capabilities.

Gheorghe (2006) states that the main goals are to minimize transmission costs, consolidate network costs, optimize equipment and bandwidth usage, and enhance employee productivity. However, measuring the quality of service (QoS) of packet networks carrying real-time video and voice differs significantly from measuring data networks, which focus only on reducing data errors. In voice channels, the quality of the resulting audio output is crucial. Therefore, to compete with switched-telephone networks that offer reliable service and high-quality voice output, internet services must provide the same level of quality and reliability (Gregori, 2002). It is crucial to ensure that VoIP systems do not degrade voice quality or introduce delays. The main concern is how efficiently voice can be transmitted through packet networks without compromising QoS.

This emphasizes the urgent need to define a method for carrying voice calls over IP networks while maintaining quality of service. It also describes a networking environment where data, video, and voice transmissions are integrated into one system. The combination of telephone signaling, call-processing intelligence, and packet-switching technologies has allowed separate data and voice networks to be consolidated and new communication services to be developed. However, shortcomings in these technologies perpetuated separate data and voice networks, which in turn are the basis for converged networks that transmit video, data, and voice over a single packet network.

5.1 Video QoS in DiffServ-Aware Multiprotocol Label Switching Network

The quality of service (QoS) of video transmissions is evaluated in this study using two approaches: (i) Multiprotocol Label Switching (MPLS), which improves routing flexibility, equipment integration, and network performance, and (ii) Differentiated Services (DiffServ), which offers a robust, scalable, and stateless network. The researchers used OPNET with a fixed bandwidth to assess the impact of each architecture on video QoS by studying packet losses, varying video resolutions, and delays.

The study examined the effectiveness of DiffServ, MPLS, integrated MPLS-DiffServ, and Best Effort topologies in the presence of background traffic. Three service classes (AF21, AF11, and Expedited Forwarding) were simulated, and measures were taken to minimize protocol-specific inefficiencies. The findings showed that plain IP networks served FTP, video, and HTTP flows only over the shortest paths, leaving longer paths underutilized even as network resources ran short. The integrated protocol, which combined priority queuing with weighted fair queuing, proved the most efficient, partly because it directed video traffic through the shortest path while routing other traffic over longer paths, thereby increasing speed and resource utilization.
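To make the simulated service classes concrete, the Python sketch below shows how packets might be marked with the standard DSCP codepoints for EF, AF21, and AF11; the queue assignments and weights are illustrative assumptions, not values taken from the study.

    # Illustrative mapping of the three simulated DiffServ classes to their
    # standard DSCP codepoints. The queue types and weights are assumptions
    # for illustration only.
    DSCP_CLASSES = {
        "EF":   {"dscp": 46, "queue": "priority", "weight": None},  # Expedited Forwarding: low-delay video/voice
        "AF21": {"dscp": 18, "queue": "wfq", "weight": 30},         # Assured Forwarding class 2, low drop precedence
        "AF11": {"dscp": 10, "queue": "wfq", "weight": 20},         # Assured Forwarding class 1, low drop precedence
        "BE":   {"dscp": 0,  "queue": "wfq", "weight": 10},         # Best Effort background traffic
    }

    def mark_packet(packet: dict, traffic_class: str) -> dict:
        """Set the DSCP field in a (simplified) packet header."""
        packet["dscp"] = DSCP_CLASSES[traffic_class]["dscp"]
        return packet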

The DiffServ-MPLS protocol offers several advantages over both DiffServ and MPLS alone. It increases video throughput by 40% and reduces end-to-end delays through faster processing rates and per-hop behavior (PHB) servicing. Additionally, it handles FTP and other traffic efficiently by routing video and other traffic separately to avoid congestion. Compared to MPLS, the DiffServ-MPLS protocol has higher packet losses but makes up to 75% of the bandwidth available.

According to Bhaniramka, Sun, & Jain (2009), the research results suggest that increased use of DiffServ-MPLS could improve video quality of service and reduce packet losses. However, to ensure full efficiency, other factors such as queuing, traffic policing, congestion avoidance, and scheduling need to be better understood. DiffServ-aware MPLS is widely adopted in modern and next-generation networking technologies (NGN). NGNs are increasingly developing the capacity for heterogeneous network convergence, making them attractive to network operators (Cho & Okamura, 2010). To support both technologies, a resource and admission control function (RACF) is needed for SIP-based technology convergence, specifically for real-time, per-session services such as video and IP telephony. Research studies on resource-management schemes in internet architectures have shown their effectiveness. Cho & Okamura (2010) propose wider use of a centralized QoS control scheme, known as centralized MPLS Traffic Engineering, for engineering traffic in next-generation core networks.

According to Jaffar, Hashim, & Hamzah (2009), a compromise between MPLS and DiffServ should be used, along with multi-level QoS control architectures for scalability and simplicity. This approach divides and conquers by treating the access networks and the core separately. The research shows that DiffServ-MPLS improves quality of service by reducing packet losses, while also proposing next-generation networks to enhance congestion avoidance. Jaffar, Hashim, & Hamzah (2009) suggest exploring other factors to make DiffServ-MPLS more practical and efficient, a view supported by Bless & Rohricht (2010), who believe the future of telecommunication networks depends on the ITU-T NGN framework. This framework provides end-to-end QoS support through an efficient control architecture based on the IETF's Next Steps in Signalling (NSIS) framework. NGNs are defined in ITU-T Recommendation Y.2011 as packet-based networks that offer telecommunication services over broadband, QoS-enabled transport technologies.

Most of the service functions are independent of the underlying transport technologies, which facilitates convergence and mobility between mobile and fixed networks. The new technology addresses the deficiencies of RSVP signaling by using a two-layer approach that treats transport signaling messages and signaling applications as separate entities (Markopoulou, Tobagi, & Karam, 2003). The NSIS framework is used to provide comprehensive end-to-end QoS support, offering greater flexibility and a closer relationship to the actual data path.

5.2 Data Path Framework/Mechanisms

Different frameworks have been suggested in other research studies to achieve end-to-end QoS on the internet, particularly across diverse administrative domains. These mechanisms fall into two categories: control path and data path. The data path serves as the foundation for internet QoS and implements the actions routers take on individual packets, enabling the enforcement of various service levels (Hentschel, Reinder, & Yiirgwei, 2002).

Control-path mechanisms are typically employed to configure network nodes to give packets special treatment based on resource-utilization rules. Packet classifiers use signature matching and processing, particularly for IntServ-enabled routers, or bit-pattern classification to determine the packet flow according to existing rules. Classification is followed by traffic-stream measurement and profiling before queuing; during congestion, some packets may be dropped. These operations resemble those in Zamora, Jacobs, Eleftheriadis, Chang, & Anastassiou (2000), where specialized programs (SVAs) were used to carry out similar functions.

Despite being less established, SVAs are more adaptive and scalable. Data-path frameworks serve as the building blocks for facilitating QoS and are crucial in implementing the actions that routers must perform on each packet to deliver the various services. These frameworks help configure network nodes to treat different packets differently. The flow of packets is determined by the basic packet-forwarding operation, which in turn is governed by existing policies and header compression (Chiu, Huang, Lo, Hwang, & Shieh, 2003). Data packets are classified through either general classification or bit-pattern classification. General classification involves processing-intensive, transport-level matching based on tuples of packet-header fields.

This function is essential in IntServ-enabled routers and at network boundaries for DiffServ. Bit-pattern classification sorts packets based on a single field in their header. After classification, packets are directed to a logical traffic-conditioning instance comprising a meter, shaper, marker, and dropper. The marker identifies the differentiated packets, which the meter then measures. The packets are then passed to a conditioner that compares them against the traffic profile and either remarks or drops packets that are out-of-profile.

The in-profile packets are queued for further processing, which includes reshaping them into the traffic stream (Nisar, Hasbullah, & Said, 2009).
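The meter/marker/dropper chain described above can be approximated with a single-rate token bucket. The following Python sketch is illustrative only: the class and function names, the rate and burst parameters, and the simplified packet representation are assumptions rather than any cited implementation.

    import time

    class TokenBucketMeter:
        """Single-rate meter: packets within the committed rate are in-profile;
        packets that exceed it are out-of-profile and may be remarked or dropped."""
        def __init__(self, rate_bps: float, burst_bytes: int):
            self.rate = rate_bps / 8.0           # token refill rate, bytes per second
            self.capacity = burst_bytes          # maximum burst size
            self.tokens = float(burst_bytes)
            self.last = time.monotonic()

        def measure(self, packet_len: int) -> str:
            now = time.monotonic()
            # Refill tokens in proportion to the time elapsed, up to the burst size
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if packet_len <= self.tokens:
                self.tokens -= packet_len
                return "in-profile"
            return "out-of-profile"

    def condition(packet: dict, meter: TokenBucketMeter, drop_out_of_profile: bool = False):
        """Marker/dropper stage: remark or drop out-of-profile packets."""
        if meter.measure(packet["length"]) == "in-profile":
            return packet                        # queued for further processing
        if drop_out_of_profile:
            return None                          # the dropper discards the packet
        packet["dscp"] = 0                       # otherwise remark to best effort
        return packet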

5.3 Queuing Management

A central purpose of QoS queue management is to prevent or minimize packet losses. Packet losses can occur due to transit damage or network congestion.

According to Wang (2001), network congestion plays the dominant role in quality of service (QoS) because packet damage is rare, occurring in less than 1% of cases. To manage and prevent network congestion, both intermediate routers and network endpoints rely on the TCP protocol, using adaptive algorithms such as slow start, additive increase, and multiplicative decrease. The routers employ queue-management techniques to optimize throughput and decrease delays, as measured by the network power (the ratio of throughput to delay).
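A minimal sketch of how these adaptive algorithms interact, assuming a unit segment size and treating loss as an externally supplied event per round (both simplifications):

    def aimd_window(loss_events, rounds, ssthresh=16, mss=1):
        """Trace a simplified TCP congestion window: slow start (exponential
        growth) below ssthresh, additive increase above it, and multiplicative
        decrease whenever a round appears in loss_events."""
        cwnd, trace = mss, []
        for r in range(rounds):
            if r in loss_events:
                ssthresh = max(cwnd // 2, mss)   # multiplicative decrease
                cwnd = ssthresh
            elif cwnd < ssthresh:
                cwnd *= 2                        # slow start
            else:
                cwnd += mss                      # additive increase
            trace.append(cwnd)
        return trace

    # e.g. aimd_window(loss_events={6, 12}, rounds=16) shows the window doubling,
    # then growing linearly, and halving at each simulated loss.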

The network's buffer space is designed to absorb short-term bursts of data rather than hold data continuously. If the buffers and queues are full, packets are dropped, typically either the newly arriving packets (tail drop) or those that have been in the queue longest (Bless & Rohricht, 2010). This can let a single connection dominate the network and skew resource utilization. To prevent this, routers must actively decide which packets to drop. Active queue management, such as the Random Early Detection (RED) algorithm, lets routers control the queue size by using a time-decayed average of the queue length to manage incoming packets (Collins, 2001). As the queue grows, RED marks packets according to existing policies and may drop them if the average queue size exceeds the maximum threshold.
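A minimal Python sketch of the RED logic just described, with illustrative thresholds and drop probability (real deployments tune these parameters per link):

    import random

    class REDQueue:
        """Random Early Detection: track an EWMA of the queue size and drop
        arriving packets with rising probability between min_th and max_th."""
        def __init__(self, min_th=5, max_th=15, max_p=0.1, weight=0.002):
            self.queue, self.avg = [], 0.0
            self.min_th, self.max_th = min_th, max_th
            self.max_p, self.weight = max_p, weight

        def enqueue(self, packet) -> bool:
            # Exponentially weighted moving average of the instantaneous queue size
            self.avg = (1 - self.weight) * self.avg + self.weight * len(self.queue)
            if self.avg < self.min_th:
                self.queue.append(packet)        # below min threshold: always accept
                return True
            if self.avg >= self.max_th:
                return False                     # above max threshold: always drop
            # Between thresholds: drop probability grows linearly toward max_p
            p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
            if random.random() < p:
                return False                     # early (probabilistic) drop
            self.queue.append(packet)
            return True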

RED, as described by Martinez, Apostolopoulos, Alfaro, Sanchez, & Duato (2010), avoids TCP global synchronization by using randomization, and it reduces bias against bursty traffic. The paper addresses the challenges of multimedia traffic, which carries requirements and priorities that neither the best-effort service nor individual protocols alone can satisfy. With the increasing demand for internet communication, QoS networks specifically designed to handle video traffic have emerged; these are particularly necessary because video frames are transmitted at regular intervals. Technologies such as the PCI Express Advanced Switching architecture, among others mentioned in Jaffar, Hashim, & Hamzah (2009), provide QoS support for packetized multimedia traffic. To schedule network traffic effectively, traffic-handling policies that prioritize packets according to their deadlines and incorporate random access have proven highly effective.

This applies to video traffic as well, where packets must be categorized by urgency and placed in separate queues that are processed concurrently. As in the study by Jaffar, Hashim, & Hamzah (2009), the researchers ran simulations to assess the effectiveness of the proposed traffic-handling protocols under different conditions, testing three distinct approaches. The outcomes indicate that high-speed connections benefit not only from DiffServ-MPLS but also from deadline-based scheduling policies, as observed by Martinez, Apostolopoulos, Alfaro, Sanchez, & Duato (2010). Deadline-based protocols were found to be better suited to managing specifications like PCI and InfiniBand than traditional protocols, and the proposed policies performed significantly better than randomly accessed buffers. Any protocol seeking optimal efficiency must prioritize traffic differentiation and deadline-based scheduling.
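To illustrate the deadline-based idea, the sketch below implements a simple earliest-deadline-first queue in Python; the class name, the use of a binary heap, and the policy of discarding expired packets are assumptions for illustration, not the exact protocols evaluated in the cited studies.

    import heapq

    class DeadlineScheduler:
        """Earliest-deadline-first: video packets are queued by deadline and
        the most urgent packet is transmitted next; expired packets are dropped."""
        def __init__(self):
            self._heap = []
            self._seq = 0                        # tie-breaker for equal deadlines

        def enqueue(self, packet, deadline: float):
            heapq.heappush(self._heap, (deadline, self._seq, packet))
            self._seq += 1

        def dequeue(self, now: float):
            while self._heap:
                deadline, _, packet = heapq.heappop(self._heap)
                if deadline >= now:              # still useful to the receiver
                    return packet
                # deadline missed: discard rather than waste link capacity
            return None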

5.4 Scheduling

According to Martinez, Apostolopoulos, Alfaro, Sanchez, & Duato (2010), controlling packet delays, including transmission, propagation, and queuing delays, is crucial for maintaining quality of service. Packet scheduling is the process of selecting packets from a queue for transmission, effectively allocating bandwidth among applications, classes, and stations. Scheduling also plays a role in link sharing, since the total bandwidth of a link may need to be divided among multiple organizations or protocols. On overloaded links, special attention is needed to prevent excessive delay or disruption.

Delay guarantees are an effective form of quality guarantee, particularly when the process is kept simple. The specialized scheduling algorithms that handle the buffers possess two key properties: flow isolation and end-to-end deterministic (or guaranteed) QoS. According to Stiller (2009), flow isolation protects conforming traffic from non-conforming traffic by partitioning the network resources, although this can under-utilize them. Isolation also prevents aggressive flows from monopolizing network resources, ensuring fairness.

The end-to-end property, by contrast, ensures that statistical and deterministic constraints hold across the wider network rather than only at intermediate nodes. Scheduling algorithms fall into two main categories: work-conserving scheduling, which transmits packets continuously as long as the buffer is non-empty, and non-work-conserving scheduling, which delays transmissions to guarantee specific delay jitter but may under-utilize resources. Scheduling can be applied per flow, per traffic class, or both, yielding hierarchical schedules with multiple scheduling disciplines. These disciplines include:
- First Come First Serve (FCFS), the simplest policy, with no class or flow differentiation and no rate or delay guarantees.
- Priority Scheduling, which offers separate queues for different classes of traffic or packets. It is an extended FCFS discipline with priority queues for prioritized flows.
- Weighted Fair Queuing (WFQ), which uses weights and reserved rates on certain links to provide end-to-end delay guarantees on a per-flow basis. However, it cannot provide differing rate and delay guarantees independently, resulting in significant delays for low-bandwidth flows (Rosenberg et al., 2002).
- Earliest Deadline First (EDF), which assigns a deadline to every packet that must be sent.

Other scheduling algorithms include Processor Sharing (PS), Generalized Processor Sharing (GPS), strict priority, Weighted Round Robin (WRR), and Earliest Due Date (EDD). Additionally, FIFO has only one queue, in which packets are served in order of arrival.

According to Zeng (2010), although FIFO is work-conserving and shares buffer space, it guarantees neither bit rates nor bounded packet losses. Other disciplines deviate from FIFO in that they can treat data packets differently, prioritizing high-priority packets over low-priority ones while ensuring that low-priority buffers are not completely blocked. The implementation of scheduling plays a crucial role in this process. Gheorghe (2006) explains that strict-priority mechanisms assign specific priority orders to queues, which then determine how packets are transmitted. This enables differentiated services in delay and bandwidth allocation, where high-priority packets take precedence over regular packets. However, without network policing and admission control, aggressive high-priority queues risk completely blocking low-priority queues.
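A minimal sketch of the strict-priority behavior Gheorghe (2006) describes, assuming a fixed number of priority levels; note how, without policing, a saturated high-priority queue would starve the others:

    from collections import deque

    class StrictPriorityScheduler:
        """Always serve the highest-priority non-empty queue; without admission
        control, a saturated high-priority queue starves the lower ones."""
        def __init__(self, levels: int):
            self.queues = [deque() for _ in range(levels)]  # index 0 = highest priority

        def enqueue(self, packet, priority: int):
            self.queues[priority].append(packet)

        def dequeue(self):
            for q in self.queues:                # scan from highest priority down
                if q:
                    return q.popleft()
            return None                          # all queues empty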

Weighted Fair Queuing (WFQ) scheduling assigns each queue a weight ratio determined by the existing network policy. This gives queues proportional time slots and isolates packets in one queue from those in others, preventing congestion in one queue from affecting the rest. This prevents bandwidth abuse and delivers the required delay and bandwidth performance according to the allocated bandwidth, avoiding mismatches between allocation and requirements. However, if sufficient bandwidth is not allocated up front to applications that need it, delays for high-bandwidth applications are inevitable.
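The weight-based sharing can be sketched with virtual finish times, a common textbook simplification of WFQ; the flow names and weights below are illustrative assumptions:

    import heapq
    from collections import defaultdict

    class WeightedFairQueue:
        """Simplified WFQ: each flow has a weight, each packet is stamped with
        a virtual finish time (length / weight past the flow's last finish),
        and the smallest finish time is served first, so the bandwidth share
        tracks the weights."""
        def __init__(self, weights: dict):
            self.weights = weights               # e.g. {"video": 4, "ftp": 1}
            self.last_finish = defaultdict(float)
            self._heap, self._seq = [], 0
            self.vtime = 0.0                     # virtual clock

        def enqueue(self, flow: str, packet: dict):
            start = max(self.vtime, self.last_finish[flow])
            finish = start + packet["length"] / self.weights[flow]
            self.last_finish[flow] = finish
            heapq.heappush(self._heap, (finish, self._seq, packet))
            self._seq += 1

        def dequeue(self):
            if not self._heap:
                return None
            finish, _, packet = heapq.heappop(self._heap)
            self.vtime = max(self.vtime, finish) # advance the virtual clock
            return packet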
