Frameworks to Mitigate the Shortcomings of Current QoS Technologies
Creative use of existing technologies is critical to ensuring that the worst shortcomings of the different protocols are kept to an absolute minimum. However, combining technologies involves tradeoffs, and the suitability of a given architecture therefore depends on the QoS objectives set and the resources available (Cao, Ma, Zhang, Wang, & Zhu, 2005). The new architectures that address the challenges of the traditional QoS frameworks comprise existing technologies that have been modified or improved to deal not only with the general QoS challenges of the day but, crucially, with the specific challenges facing an organization's quality of service for video, audio, and other forms of traffic over the internet.
7.1 Congestion Control Requires Scalable Data Algorithms
Dedicated hardware largely represents a choice between two quality needs, prioritized according to the available resources. The nature of internet traffic makes it nearly impossible to predict peak loads, which may overload the system and hurt QoS, or leave resources under-utilized and thus result in inefficiencies. To address these challenges, the available resources can be made more responsive to changes in traffic by using scalable video algorithms (SVAs) (Hentschel, Reinder, & Yiirgwei, 2002). The availability of technologies for processing video traffic makes it straightforward to introduce specialized software that complements the dedicated hardware.
This framework builds on the architecture presented by Hentschel, Reinder, and Yiirgwei (2002). The scalable data architecture is derived from the scalable video architecture presented in the literature review, and extends it to video and any other data processing where such technologies exist. It consists of a set of algorithms for processing different forms of data to determine their QoS and network resource requirements, together with a strategy manager that sets QoS priorities and collects information about the traffic so that network resources can be allocated according to the nature of the demand, the QoS requirements, and other considerations. The strategy manager handles the application-domain semantics, while the quality manager uses general utility notions to determine the individual application requirements and the relative importance of each application, setting weights for the application needs that subsequently inform the strategy manager's allocation of resources among the applications (Martinez, Apostolopoulos, Alfaro, Sanchez, & Duato, 2010).
In this way, resources are managed by controlling admission to the network as well as by flexible scheduling, with greater flexibility in setting QoS priorities than dedicated hardware alone allows (Bhakta, Chakrabory, Mitra, Sanyal, Chattopadhyay, & Chattopadhyay, 2011). The increased efficiency reduces costs, while the flexibility allows the system to scale with changing user needs. In addition, if the load for a given application increases suddenly, the system handles it by detecting the nature of the overload and dynamically changing the mode of that application, or of any other application on the network, to accommodate it. This technology does not dispense with the normal congestion-control technologies, but it reduces the disadvantages associated with them.
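The division of labor between the quality manager (which weighs application importance) and the strategy manager (which allocates resources from those weights) can be sketched as follows. This is an illustrative sketch only: the function names, importance scores, and proportional-allocation policy are assumptions, not details from the cited architecture.

```python
# Hypothetical sketch of the quality-manager / strategy-manager split.
# Importance scores and the proportional policy are illustrative assumptions.

def quality_manager(apps):
    """Derive a relative weight for each application from its importance score."""
    total = sum(a["importance"] for a in apps)
    return {a["name"]: a["importance"] / total for a in apps}

def strategy_manager(apps, weights, capacity_mbps):
    """Allocate link capacity proportionally to the quality manager's weights."""
    return {a["name"]: round(capacity_mbps * weights[a["name"]], 2) for a in apps}

apps = [
    {"name": "video", "importance": 6},
    {"name": "voip",  "importance": 3},
    {"name": "bulk",  "importance": 1},
]
weights = quality_manager(apps)
allocation = strategy_manager(apps, weights, capacity_mbps=100)
print(allocation)  # video receives 60 Mbps, voip 30, bulk 10
```

A real strategy manager would also react to the traffic statistics it collects, but the weight-then-allocate flow above captures the control relationship between the two managers.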
7.2 TEAM: Traffic Engineering Automated Manager
While SVAs are well suited to IntServ, DiffServ, and other frameworks, MPLS works best with TEAM. TEAM automates traffic engineering, which makes MPLS considerably more efficient by building in elements of the SVA as well as the traditional traffic engineering technologies.
The TEAM architecture comprises a server; a simulation tool (ST); a traffic engineering tool (TET); and a measurement and performance evaluation tool (MPET). The MPET and TET interact with the switches and routers within the domain, with the MPET measuring a variety of network and router parameters such as the available bandwidth, jitter, overall delay, queue lengths, and the number of packets dropped by the routers (Ergin, Gruteser, Luo, Raychaudhuri, & Liu, 2008). This information is fed into the TET, which then decides on the action to be taken, such as varying the capacity allocated to each LSP or pre-empting the lowest-priority LSPs to free resources for new ones. The TET also automatically configures the switches and routers across the transmission path, validating its decisions by means of the ST. The ST simulates the present state of the managed network with the TET's decisions applied, in order to verify the performance that would be attained (Gheorghe, 2006). The management tasks of the traffic engineering tool comprise bandwidth management and route management. Prototypes of this architecture have already been implemented in various applications, and it promises enormous potential for MPLS and other protocols.
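The measure-decide-verify cycle described above can be sketched as a simple control loop. Everything below (the statistics, the drop-rate threshold, the resize step, and the function names) is an invented stand-in to show how MPET output flows through the TET and is checked by the ST before being applied.

```python
# Illustrative sketch of one TEAM control cycle: MPET measures, TET decides,
# ST verifies before decisions are pushed to the routers. All thresholds,
# data structures, and names are assumptions made for illustration.

def mpet_measure():
    """Stand-in for the measurement tool: per-LSP network statistics."""
    return {"lsp1": {"drop_rate": 0.08, "reserved_mbps": 40},
            "lsp2": {"drop_rate": 0.00, "reserved_mbps": 20}}

def tet_decide(stats, drop_threshold=0.05, step_mbps=10):
    """Traffic engineering tool: grow any LSP whose drop rate is too high."""
    return {name: s["reserved_mbps"] + step_mbps
            for name, s in stats.items() if s["drop_rate"] > drop_threshold}

def st_verify(decisions, link_capacity_mbps=100):
    """Simulation tool stand-in: accept only decisions that fit the link."""
    return {name: bw for name, bw in decisions.items()
            if bw <= link_capacity_mbps}

stats = mpet_measure()
approved = st_verify(tet_decide(stats))
print(approved)  # only lsp1 is resized, to 50 Mbps
```

A real ST runs a full network simulation rather than a capacity check, but the gating role it plays between decision and deployment is the same.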
7.3 DiffServ MPLS TE
The literature already presented and the critical assessment section have highlighted the clear advantages that Differentiated Services, Multi-Protocol Label Switching, and Traffic Engineering have as QoS frameworks. The potential they hold is enormous, and while each has its individual shortcomings, an architecture that combines all three technologies ensures greater flexibility, scalability, transmission speed, and ultimate quality (Liebehetrr, Patek, & Yilmaz, 2000). The integration of MPLS and DiffServ has been widely discussed as a viable alternative to choosing between the two technologies, the challenge to its implementation being that label switching routers make forwarding decisions on the basis of MPLS headers alone, from which the PHB must be inferred. This difficulty has since been solved by the IETF through the introduction of three experimental (EXP) bits within the MPLS header that carry the DiffServ information.
This effectively resolves the original problem of conveying the required PHB in the header, while at once introducing a different difficulty: how can 6-bit DSCP field values be mapped onto a 3-bit EXP field, which can only represent eight different values (Chakraborty, Sanyal, Chakraborty, Ghosh, Chattopadhyay, & Chattopadhyay, 2010)? In networks that support fewer than eight PHBs, the mapping is straightforward: each DSCP corresponds to a specific EXP combination, which in turn maps onto a specific PHB. In forwarding, the individual labels determine each packet's destination (Jaffar, Hashim, & Hamzah, 2009). The EXP mappings are not signaled when the label switched paths (LSPs) are established; they derive from the configuration.
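One way to picture the 6-bit-to-3-bit mapping is to keep only the three most significant DSCP bits (the class-selector bits). This is a minimal sketch of one common convention, assuming the network deploys no more than eight PHBs; it is not the only mapping an operator may configure.

```python
# Minimal sketch: map a 6-bit DSCP codepoint onto the 3-bit EXP field by
# keeping the three most significant (class-selector) bits. One common
# convention under the "fewer than eight PHBs" assumption, not a mandate.

def dscp_to_exp(dscp: int) -> int:
    """Map a 6-bit DSCP (0-63) to a 3-bit EXP value (0-7)."""
    if not 0 <= dscp <= 63:
        raise ValueError("DSCP must fit in 6 bits")
    return dscp >> 3  # drop the three least significant bits

print(dscp_to_exp(46))  # EF (101110b) -> EXP 5
print(dscp_to_exp(34))  # AF41 (100010b) -> EXP 4
print(dscp_to_exp(0))   # best effort -> EXP 0
```

Because the mapping is pure configuration, both ends of the LSP must be provisioned consistently; nothing in the label distribution signaling carries it.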
7.3.1 MPLS Traffic Engineering
The literature and critical analyses above have shown that the clear weakness of traffic engineering is the need to compute the path from the source through to the ultimate destination. This is limited by a variety of constraints, and it is impossible for plain IP traffic since forwarding decisions are made independently at every hop based on the destination IP address (Gheorghe, 2006). On the other hand, it has been clearly established that MPLS can easily attain packet forwarding along arbitrary paths, with explicit routing capabilities that allow the LSP originator to complete the path computation, establish the MPLS forwarding state across the path, and map packets onto the LSP (Chiu, Huang, Lo, Hwang, & Shieh, 2003).
Mapped packets can subsequently be forwarded according to their respective labels, making it unnecessary for the intermediate hops to make forwarding decisions (Akyildiza, Anjalia, Chena, de Oliveiraa, & Scoglioa, 2003). Traffic engineering with MPLS brings with it the concept of LSP prioritization, which allows some LSPs to be preferentially marked and thereby to obtain resources from lower-priority LSPs. LSP prioritization effectively ensures that resources are allocated differentially to the data packets depending on resource availability and the urgency of the packets (Jaffar, Hashim, & Hamzah, 2009). This comfortably meets the needs of the DiffServ architecture. MPLS-TE defines eight separate priority levels, and each LSP has two priorities, namely a setup priority and a hold priority (Jaffar, Hashim, & Hamzah, 2009). These priorities control access to network resources for established LSPs: if there are inadequate resources for the creation of a new LSP, its setup priority is compared against the hold priorities of established LSPs to determine whether resources can be taken from lower-priority LSPs.
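The setup/hold comparison above can be sketched as an admission check. In MPLS-TE, priority 0 is the highest and 7 the lowest, so a new LSP may preempt established LSPs whose hold priority is numerically greater than its setup priority. The data structures and function below are illustrative assumptions, not an implementation of any particular router's admission control.

```python
# Hedged sketch of setup/hold priority preemption. Priority 0 is highest,
# 7 lowest; an LSP whose hold priority is numerically greater than the new
# LSP's setup priority may be preempted. Structures are illustrative.

def admit_lsp(new_bw, setup_prio, established, link_capacity):
    """Admit a new LSP, preempting lower-priority LSPs if needed.

    `established` is a list of (name, bandwidth, hold_priority) tuples.
    Returns the names of preempted LSPs, or None if admission fails.
    """
    free = link_capacity - sum(bw for _, bw, _ in established)
    if free >= new_bw:
        return []  # fits without preemption
    preempted = []
    # Preempt the least important LSPs (highest hold value) first.
    for name, bw, hold in sorted(established, key=lambda e: -e[2]):
        if hold <= setup_prio:
            break  # remaining LSPs are too important to preempt
        preempted.append(name)
        free += bw
        if free >= new_bw:
            return preempted
    return None  # cannot free enough resources

established = [("gold", 50, 1), ("bronze", 40, 6)]
print(admit_lsp(new_bw=30, setup_prio=3,
                established=established, link_capacity=100))  # ['bronze']
```

A low-numbered setup priority thus lets an important LSP displace bronze-class reservations, while its own low-numbered hold priority protects it from later arrivals.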
Traffic engineering in QoS seeks to ensure efficiency in the face of resource constraints, which makes it natural that those constraints are borne in mind when determining possible data paths. They include, among others, the requested bandwidth, the administrative attributes of the links to be crossed by the traffic, the number of hops, and the LSP priorities (Chiu, Huang, Lo, Hwang, & Shieh, 2003). In addition, the calculation of a path that meets the existing constraints necessitates access to information about the constraints and the available resources, information that is distributed across the transmission nodes. Effectively, the properties of links must be advertised across the network through the addition of TE-specific extensions to the link-state protocols. Once this is accomplished, modified versions of the shortest path first (SPF) algorithm can be employed by the ingress node to calculate the optimal path under a specific set of constraints. This is illustrated in the figure below, depicting a network topology in which a couple of LSPs have specific bandwidth needs. Once a path is calculated, a signaling protocol such as RSVP-TE establishes the MPLS forwarding state along it (Bhaniramka, Sun, & Jain, 2009). This architecture is far more effective at preventing traffic from a specific class from overloading a link than DiffServ, IntServ, or MPLS can ensure on their own (Akyildiza, Anjalia, Chena, de Oliveiraa, & Scoglioa, 2003).
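The constrained SPF computation described above amounts to pruning every link whose advertised available bandwidth falls short of the request, then running an ordinary shortest-path search over what remains. The topology, costs, and function below are invented for illustration; a real ingress node would draw its link data from the TE extensions of OSPF or IS-IS.

```python
# Sketch of constrained shortest-path computation: prune links lacking the
# requested bandwidth, then run Dijkstra (unit link cost) on the remainder.
# Topology and numbers are invented for illustration.
import heapq

def cspf(links, src, dst, need_mbps):
    """links: {(u, v): available_mbps}. Returns the hop list or None."""
    graph = {}
    for (u, v), bw in links.items():
        if bw >= need_mbps:  # constraint: drop links without the bandwidth
            graph.setdefault(u, []).append(v)
            graph.setdefault(v, []).append(u)
    queue, seen = [(0, src, [src])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(queue, (cost + 1, nxt, path + [nxt]))
    return None  # no path satisfies the constraint

links = {("A", "B"): 100, ("B", "D"): 20, ("A", "C"): 80, ("C", "D"): 60}
print(cspf(links, "A", "D", need_mbps=50))  # ['A', 'C', 'D']: B-D is too thin
```

Note how a 50 Mbps request avoids the shorter-looking B-D link entirely, which is exactly the per-class steering that plain destination-based IP forwarding cannot express.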