Better Resources Management

Chapter Three

QoS Frameworks

Given that the modern internet spans several administrative domains, end-to-end QoS can be ensured only by concatenating domain-to-domain data forwarding, which in turn can rely on three separate frameworks: (i) over-provisioning; (ii) better resource management, which includes traffic control and generic switch architectures; and (iii) routing and traffic engineering, which comprises two basic technologies, MPLS and BGP (Collins, 2001).

Over Provisioning

The most basic technique, popular with ISPs, is to over-provision bandwidth, buffers, and the capacity for servicing traffic from competing users, so that congestion is avoided in the first place.

This is a resource-intensive approach that is only achievable with high-capacity fiber links supporting 1.6 Tbps; even then, bottlenecks still occur because of limited-capacity electronic switching (Zamora et al., 2000). Resource allocation quality of service (QoS) differentiates and prioritizes traffic through congestion or admission control. Over-provisioning ensures that the available resources meet or exceed the estimated peak traffic loads, which solves the problem efficiently in networks with predictable peak loads. This approach is reasonable for most applications, including demanding ones that compensate for bandwidth delays and variations in special-need traffic such as video streaming (Collins, 2001). However, it may be of limited use with transport protocols that exponentially increase the amount of data sent until the available bandwidth is exhausted and additional packets are dropped, resulting in packet loss for all users.
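To make the provisioning rule concrete, the following Python sketch checks a link's capacity against an estimated peak load; the safety factor of two is an illustrative choice, not a figure from the literature cited above.

    def is_sufficiently_provisioned(link_capacity_bps: float,
                                    estimated_peak_bps: float,
                                    factor: float = 2.0) -> bool:
        # Over-provisioning: capacity must exceed the estimated peak load
        # by a chosen safety factor so congestion is avoided outright.
        return link_capacity_bps >= factor * estimated_peak_bps

    # Example: a 10 Gbps link against a 4 Gbps estimated peak.
    print(is_sufficiently_provisioned(10e9, 4e9))   # True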

The amount of over-provisioning required in interior networks to replace QoS depends on the traffic demands and the number of users, which in turn determines how much excess capacity must be provisioned (Bhaniramka, Sun, & Jain, 2009). This is mainly because bandwidth-intensive applications and growth in the number of users erode the excess capacity provided by over-provisioning. When this occurs, it becomes necessary to physically upgrade the network links, which is both costly and inconvenient. According to Ergin, Gruteser, Luo, Raychaudhuri, and Liu (2008), although over-provisioning is straightforward, its ability to handle the growing demand for network services and multimedia traffic is severely limited.

In addition, the inefficiency of constantly maintaining excess capacity to handle increased traffic is significant at larger scales. The approach remains practical for small networking needs, and it is also feasible for local area networks built on fiber-optic infrastructure, which can carry large amounts of traffic without congestion. On a larger scale, or where there are multiple interconnections, over-provisioning can still be used, but only in conjunction with other technologies (Wang, 2001). In fact, when combined appropriately with traffic engineering and resource-management QoS technologies, over-provisioning provides a resource buffer that ensures QoS guarantees for many applications.

Resource Management

Resource management has two major frameworks: the IntServ per-flow QoS framework and the dynamic allocation of resources. The IntServ per-flow QoS framework follows a philosophy that requires routers to reserve resources in order to provide measurable QoS for specific traffic flows.

The Resource Reservation Signaling Mechanism
The Resource Reservation Protocol (RSVP) serves as a signaling protocol that applications use to reserve resources. RSVP allows receivers in diverse service environments to initiate reservations according to their specific requirements. In essence, the sender dispatches a PATH message to inform the intended receiver about the traffic's characteristics (Wang, 2001). As the PATH message propagates, the network routers along the path gather various resource-related details. The intended receiver then sends a RESV message requesting the necessary resources along the same route back to the initiating router. If resources are available on the routers along the path, they allocate buffer space and bandwidth to the flow and install the essential flow-specific state.
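The PATH/RESV exchange can be pictured with a toy Python model; the Router class, its fields, and the usage figures below are assumptions made for illustration and do not reflect the actual RSVP message formats.

    class Router:
        def __init__(self, free_bw: float, free_buf: float):
            self.free_bw, self.free_buf = free_bw, free_buf
            self.flow_state = {}   # per-flow state installed by RESV

        def reserve(self, flow_id: str, bw: float, buf: float) -> bool:
            # A hop admits the reservation only if it still has both the
            # bandwidth and the buffer space the flow requires.
            if bw > self.free_bw or buf > self.free_buf:
                return False
            self.free_bw -= bw
            self.free_buf -= buf
            self.flow_state[flow_id] = (bw, buf)
            return True

    def rsvp_reserve(flow_id: str, path: list, bw: float, buf: float) -> bool:
        # PATH phase: in real RSVP each hop records path state as the message
        # travels from sender to receiver (omitted here for brevity).
        # RESV phase: the receiver's request walks back toward the sender,
        # reserving resources hop by hop along the same route.
        return all(r.reserve(flow_id, bw, buf) for r in reversed(path))

    # Usage: a 2 Mbps flow with 8 kB of buffer across three routers.
    path = [Router(free_bw=10e6, free_buf=64_000) for _ in range(3)]
    print(rsvp_reserve("flow-1", path, bw=2e6, buf=8_000))   # True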

Per-flow services allow IntServ to provide guaranteed Quality of Service (QoS), although doing so requires a major alteration of the present-day internet architecture by introducing flow-specific state within routers, a change that has largely failed to materialize. According to Chiu, Huang, Lo, Hwang, and Shieh (2003), routers may carry millions of flows, which makes it challenging for them to manage separate flow queues effectively, although flow aggregation can still be utilized. The service is commonly used within a single domain, and its incremental deployment is only viable for controlled-load services.

IntServ offers several advantages: explicit end-to-end resource admission control, a per-request admission policy control, and signaling for dynamic port numbers (Davidson, Fox, et al., 2002). However, IntServ is plagued by continuous signaling due to its stateful architecture, and flow-based approaches are not flexible enough to handle the growing demands of large-scale implementations.

The Differentiated Services (DiffServ) technique for managing resources is similar to IntServ, but with greater scalability. DiffServ is a per-aggregate-class service discrimination technique that uses packet tagging, which utilizes bits in the packet header to prioritize packets via the type-of-service (TOS) byte. The TOS byte originally consisted of a three-bit precedence field, four bits indicating preferences for minimum delay, maximum throughput, maximum reliability, and minimum cost, and one unused bit (Jaffar, Hashim, & Hamzah, 2009). DiffServ redefined this byte as the DS field, with six of its bits forming the Differentiated Services CodePoint (DSCP) and the remaining two bits left unused. The DSCP is used to select the per-hop behavior (PHB) that packets experience at each node.
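As a concrete illustration, a sending host can mark its packets with a DSCP value through the standard socket TOS option. The Python sketch below (Unix-like systems) marks outgoing UDP packets with the Expedited Forwarding code point, DSCP 46; the address and port are placeholders.

    import socket

    # EF (Expedited Forwarding) is a standard PHB for low-latency traffic;
    # its DSCP value is 46. The DSCP occupies the upper six bits of the
    # DS field, so the byte handed to the socket is shifted left by two.
    EF_DSCP = 46
    ds_field = EF_DSCP << 2      # 0b10111000 = 184

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, ds_field)
    sock.sendto(b"voice payload", ("192.0.2.10", 5004))  # illustrative address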

DiffServ is based on two principles: (i) the separation of supporting mechanisms from policy, and (ii) the pushing of complexity to the network boundaries. The network boundaries consist of application hosts, leaf (first-hop) routers, and edge routers. These boundaries carry a limited number of flows, allowing for more fine-grained operations. Network core routers, on the other hand, handle many flows at once, making them better suited to fast and simple operations. By separating the supporting mechanisms from the control policy, the two can evolve independently. DiffServ defines multiple per-hop packet forwarding behaviors that form the basis of QoS provisioning, on top of which further control policies can be built.

Control policies can be changed by the network administrator and client, but the PHBs must remain stable. DiffServ is limited, however, because how individual routers handle the DS field is determined by local configuration, making end-to-end behavior difficult to predict; this becomes more complex when packets cross multiple DiffServ domains. Commercially this is a major drawback, because it makes it technically impossible to offer different classes of end-to-end connectivity. Internet operators could address this by enforcing standardized policies across networks, although many are not interested in adding complexity to existing peering agreements.

The Best Effort technique, by contrast, aims to provide the best possible quality of service without guaranteeing it. This makes it less efficient and less robust, as it relies on a single path for each destination; overall, it is a less efficient model for communication.

Best Effort QoS provides the best possible paths to the destination, requiring an accurate definition of the set of paths as well as a forwarding plane that can efficiently support the assignment and forwarding of traffic over several paths for each destination (Xiao, Chen, & Li, 2010). The path weights are multi-component measurement metrics, which capture crucial performance measures, and the best-path set is defined using a specialized algorithm.
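One plausible form of such an algorithm is a shortest-path search over a composite link weight. In the Python sketch below, the particular combination of delay and loss into a single weight is an illustrative assumption, not the specific metric used in the cited work.

    import heapq

    def path_weight(delay_ms: float, loss_rate: float) -> float:
        # Composite metric: add a heavy penalty for lossy links so the
        # search trades a little delay for much better reliability.
        return delay_ms + 1000.0 * loss_rate

    def best_paths(graph: dict, src: str) -> dict:
        # Dijkstra over composite weights; graph[u] = [(v, delay_ms, loss)].
        dist = {src: 0.0}
        heap = [(0.0, src)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue
            for v, delay, loss in graph.get(u, []):
                nd = d + path_weight(delay, loss)
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
        return dist

    # Usage: links given as (neighbor, delay in ms, loss rate).
    graph = {"A": [("B", 10, 0.0), ("C", 5, 0.02)],
             "B": [("D", 10, 0.0)],
             "C": [("D", 5, 0.0)]}
    print(best_paths(graph, "A"))   # D via B (weight 20) beats D via C (30)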

Traffic Policing refers to the mechanisms that monitor admitted sessions' traffic to ensure the sessions remain within the provisions of their QoS contracts. Policing mechanisms ensure that traffic adheres to the accepted/agreed traffic parameters, and when violations occur, the mechanism must reshape the data packets (Martinez, Apostolopoulos, Alfaro, Sanchez, & Duato, 2010).

Policing in traffic regulation ensures that multimedia applications always comply with the accepted quantitative parameters. Because sound, video, and other signals are generated with standard codecs, their traffic can be described with standard parameters, and policing can be used to separate the different multimedia flows. Non-real-time traffic, in contrast, has no quantifiable traffic parameters and will consume as much bandwidth as it can get. To prioritize real-time traffic, policing is therefore necessary to limit non-real-time traffic from using too much bandwidth, in accordance with the network policy. Policing can be implemented on intermediate or end hosts, the most common forms being the Token Bucket and the Leaky Bucket (Collins, 2001).



Leaky Bucket

The provision of Quality of Service (QoS) is determined by defining flow properties, aggregates, and service needs. These factors depend in part on interactive voice communication delay bounds and on individual or business needs. QoS can be described qualitatively or quantitatively, i.e., as relative or absolute, respectively.

The TSpec token bucket is a popular flow specification that combines a peak rate, a token bucket, a minimum policed unit, and a maximum datagram size (Cho & Okamura, 2010). These specifications are used for packet filtering: once a packet is serviced, its tokens are removed from the bucket. The use of buckets ensures efficiency by queuing excess packets up to a specified volume before processing. Token buckets are often implemented alongside leaky buckets (Ahmad, 2001), which helps smooth out bursty traffic by setting the maximum burst size and peak rate.

The functioning is similar to that of a leaky bucket: traffic parameters such as the burst size (bucket) and maximum rate (hole size) are established to shape traffic into acceptable sizes and rates. The bucket size determines the traffic burst size, beyond which packets will be dropped. Packets enter through the top of a bucket of size b, and traffic exits through a hole at a maximum rate of r per second. If the incoming traffic rate R is lower than the leakage rate r, the outgoing traffic rate matches the incoming rate and the bucket stays empty (R < r).

If the incoming traffic rate R is greater than the leakage rate r, the outgoing traffic rate is reduced to r. If the bucket then fills up, non-conforming traffic is dropped, or, if it is not dropped, it is forwarded on a best-effort basis (Markopoulou, Tobagi, & Karam, 2003).
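A minimal Python sketch of this behavior, assuming the parameters above (bucket size b in bytes, leak rate r in bytes per second), might look as follows.

    import time

    class LeakyBucket:
        # Minimal leaky-bucket shaper: packets queue in a bucket of size b
        # and drain at the constant rate r, smoothing bursts into a steady
        # outgoing stream; packets that would overflow the bucket are dropped.

        def __init__(self, b: float, r: float):
            self.b = b          # bucket capacity (maximum queued burst)
            self.r = r          # constant leak (output) rate
            self.level = 0.0    # current bucket occupancy in bytes
            self.last = time.monotonic()

        def offer(self, packet_bytes: float) -> bool:
            # Return True if the packet conforms (is queued), False if dropped.
            now = time.monotonic()
            # Drain the bucket at rate r since the last arrival.
            self.level = max(0.0, self.level - (now - self.last) * self.r)
            self.last = now
            if self.level + packet_bytes > self.b:
                return False            # bucket full: non-conforming packet
            self.level += packet_bytes  # queued; it leaks out at rate r
            return True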

Token Bucket

This mechanism differs slightly from the leaky bucket mechanism because it preserves bursty traffic. The bucket, of size b, fills with tokens at a rate of r bytes per second. As soon as a packet arrives, it retrieves a token from the bucket (subject to availability) and is then forwarded to the outgoing stream. While there are tokens in the bucket, the outgoing traffic rate is identical to the incoming rate, but if the token bucket runs empty, incoming traffic must wait until the bucket accumulates more tokens.

This mechanism, as described by Rosenberg et al. (2002), permits bursty traffic up to a specified level; beyond that, the outbound traffic cannot exceed the token rate, so the token bucket effectively controls the long-term traffic rate.
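The corresponding Python sketch, again assuming a bucket depth b and token fill rate r in bytes per second, shows how conforming packets consume tokens while bursts of up to b bytes pass unshaped.

    import time

    class TokenBucket:
        # Minimal token-bucket policer: tokens accumulate at rate r up to the
        # depth b, and a packet conforms only if enough tokens are available,
        # so bursts up to b pass while the long-term rate is capped at r.

        def __init__(self, b: float, r: float):
            self.b = b              # maximum tokens (permitted burst size)
            self.r = r              # token refill rate
            self.tokens = b         # start full so an initial burst passes
            self.last = time.monotonic()

        def conforms(self, packet_bytes: float) -> bool:
            # Return True and consume tokens if the packet conforms.
            now = time.monotonic()
            # Refill tokens at rate r, capped at the bucket depth b.
            self.tokens = min(self.b, self.tokens + (now - self.last) * self.r)
            self.last = now
            if packet_bytes > self.tokens:
                return False        # no tokens: packet must wait or be dropped
            self.tokens -= packet_bytes
            return True

Unlike the leaky bucket, which forces a constant output rate, this design deliberately lets short bursts through at line speed, which is why the two are often combined.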

Admission Control

Admission control involves the implementation of decision algorithms by hosts or routers to determine whether new traffic streams can be admitted without impacting the already granted QoS assurances. Each traffic stream requires a specific amount of network resources (router buffer space and link bandwidth) to transfer data from source to receiver. Hence, admission control plays a crucial role in managing the allocation of these network resources as explained by Martinez et al. (2010).

The focus is on accurately determining the admission region to ensure optimal utilization of existing capacity. Admission control uses three main techniques: statistical, deterministic, and measurement-based. Statistical and deterministic approaches rely on prior estimations, while measurement-based techniques use current measurements to aid specialized algorithms in decision-making (Linawati, 2005). Deterministic methods use worst-case scenarios to prevent QoS violations; they are effective for smooth traffic flows but struggle with unpredictable and varied data. In their study on multi-rate wireless mesh networks, Ergin, Gruteser, Luo, Raychaudhuri, and Liu (2008) examined admission control and routing mechanisms. Their technique depends on accurate estimation of the available bandwidth at the involved nodes and of the bandwidth required by new flows.
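The essence of such a bandwidth-based admission test can be sketched in Python as follows; the Node class, the margin parameter, and the figures in the example are illustrative assumptions, since the hard part in practice is the bandwidth estimation itself.

    class Node:
        # Toy stand-in for a mesh node that can report an estimate of its
        # currently available bandwidth in bits per second.
        def __init__(self, available_bps: float):
            self.available_bps = available_bps

    def admit_flow(required_bps: float, path: list, margin: float = 0.1) -> bool:
        # Admit only if every node on the path can carry the new flow with a
        # safety margin left for estimation error and neighbor interference.
        return all(required_bps <= n.available_bps * (1.0 - margin)
                   for n in path)

    # Usage: a 2 Mbps flow over three nodes; the middle node is the bottleneck.
    path = [Node(8e6), Node(2.1e6), Node(6e6)]
    print(admit_flow(2e6, path))   # False: 2 Mbps exceeds 2.1 Mbps less margin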

Estimating these parameters in wireless networks is challenging due to the open and shared nature of the wireless channel. Existing techniques for approximating available bandwidth do not accurately estimate interference from neighboring nodes or the bandwidth requirements of data flows, which limits the potential for parallel transmissions (Ergin, Gruteser, Luo, Raychaudhuri, & Liu, 2008). In their study, the researchers found that implementing admission control had a positive impact on quality of service (QoS). They graphed the per-flow throughput and average delays before and after implementing admission control and found that the arrival of a third flow caused significant congestion and increased delays for all three flows, with large variations.

The delays in the first and second flows, as well as in the third, were high and caused unstable throughput. This is particularly problematic for multimedia traffic, which requires minimal delays for meaningful output (Ergin, Gruteser, Luo, Raychaudhuri, & Liu, 2008). When admission control is enabled, the limited channel capacity results in the third flow being blocked from the network, leading to stable throughput with average delays reduced to roughly a hundredth of a second. The low packet delays and consistent throughput demonstrate that admission control is effective and has a positive impact on real-time multimedia QoS over wireless networks (Ferguson & Huston, 1998).


Policy Control

Policy specifications determine access to network resources and services based on selected administrative criteria. These policies dictate which users, applications, and hosts are allowed access to the network and also determine priorities for differentiated services (Zeng, 2010).

The network can be managed by corporate administrators and ISPs through policy infrastructures, rather than individual network devices. These infrastructures support the implementation of administrative intentions, resulting in differentiated treatment of packet traffic flows. Figure 8 illustrates a typical policy architecture, where multiple policy servers exist within each domain. These servers make decisions regarding both policy and configuration for various network elements. Additionally, the policy servers have access to the policy database, as well as the accounting and authorization databases. Each policy entry includes rules that are determined with the input of human operators (Dar & Latif, 2010).

Policy servers consist of central policy controllers and policy decision points. These decision points are responsible for determining the actions applicable to individual packets to ensure uniformity in decision-making for accessing and utilizing network resources.
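A policy decision point of this kind can be pictured as a rule-matching function. The rule fields and actions in the following Python sketch are hypothetical, chosen only to illustrate first-match semantics over administrator-defined rules.

    from dataclasses import dataclass

    @dataclass
    class Rule:
        app: str         # application the rule applies to ("*" = any)
        user_group: str  # administrative group the rule covers
        action: str      # e.g. "permit", "deny", or a service class

    RULES = [
        Rule(app="voip", user_group="staff", action="expedited-forwarding"),
        Rule(app="*",    user_group="guest", action="best-effort"),
    ]

    def decide(app: str, user_group: str) -> str:
        # Return the action of the first matching rule; deny by default,
        # so every flow gets a uniform, predictable decision.
        for rule in RULES:
            if rule.app in (app, "*") and rule.user_group == user_group:
                return rule.action
        return "deny"

    print(decide("voip", "staff"))   # expedited-forwarding
    print(decide("web", "guest"))    # best-effort
    print(decide("web", "staff"))    # deny (no matching rule)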

Bandwidth Brokerage

Bandwidth brokers are entities for managing logical resources, allocating intra-domain resources, and organizing inter-domain agreements. Each domain's bandwidth broker can be configured with the policies of its organization while also controlling the operations of the edge routers. In terms of the policy framework, bandwidth brokers incorporate the PDP functions and policy databases, while the edge routers function as PEPs (Martinez, Apostolopoulos, Alfaro, Sanchez, & Duato, 2010). Bandwidth brokers play a crucial role in inter-domain communication by negotiating with neighboring domains, establishing bilateral agreements with each of them, and sending the appropriate configuration parameters to the domains' edge routers.

To ensure proper coordination with neighboring domains and maintain end-to-end Quality of Service (QoS), bilateral agreements are necessary. These agreements involve the exchange of bandwidth and must be concatenated across the different domains. Proper allocation of resources within domains is equally crucial; these intra-domain allocations are performed by the bandwidth brokers, which use admission control for this purpose. The choice of intra-domain algorithms depends on negotiations within the domain. Gheorghe (2006) explains that the bandwidth-broker architecture is similar to the current internet routing system, where BGP4 serves as the standard inter-domain routing protocol and offers multiple options.
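The broker's bookkeeping against a bilateral agreement can be sketched as follows; the class and its two-value representation of an agreement (contracted rate, rate in use) are illustrative assumptions, not an actual bandwidth-broker protocol.

    class BandwidthBroker:
        # Toy broker: accepts an inter-domain request only if the bilateral
        # agreement with that neighbor domain still has capacity left.

        def __init__(self, agreements: dict):
            # agreements maps neighbor domain -> (contracted bps, used bps)
            self.agreements = dict(agreements)

        def request(self, neighbor: str, bps: float) -> bool:
            contracted, used = self.agreements[neighbor]
            if used + bps > contracted:
                return False   # exceeds the agreement: renegotiate or reject
            self.agreements[neighbor] = (contracted, used + bps)
            return True

    # Usage: a 10 Mbps agreement with a neighboring domain.
    bb = BandwidthBroker({"domain-B": (10e6, 0.0)})
    print(bb.request("domain-B", 6e6))   # True: fits the agreement
    print(bb.request("domain-B", 6e6))   # False: only 4 Mbps remains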
