Analysis of Web Service Efficiency


Abstract


Currently, XML-based web service standards are crucial for facilitating communication between different Internet applications. However, with numerous options available, choosing an efficient web service that meets client requirements can be time-consuming. To tackle this challenge, we suggest employing a Hidden Markov Model (HMM) to optimize the execution of user requests. Our approach aims to help users select the most dependable web service, and focuses on creating a cost-effective servicing mechanism that minimizes the need for network engineers in maintenance tasks. Through parallelism techniques, our analysis shows notable reductions in response time and increased composition speed.

Keywords: Hidden Markov Model (HMM), Extensible Markup Language, Web Services, Service Quality Architecture (SQA)

1. Introduction

...

Customer feedback plays a significant role in determining the trustworthiness and reputation of web services, which in turn affects their future adoption by consumers. In this article, we present an approach to predicting and assessing the various reputations that exist within a service-oriented environment. Web services facilitate computer-to-computer communication in a heterogeneous environment, making them well suited for use on the Internet. The standardized web service model allows individuals to quickly design, implement, and extend applications. Numerous enterprises and corporations offer web services as a means of becoming more responsive and cost-effective.

Control-flow graphs and the associated data-flow graphs define all the activities of a composite service. For a service provider, it is most important to calculate the upper bound and the mean response time (RT) of a request, given the request load and the architectural environment. This calculation should be done prior to service deployment and usage. In exceptional cases, the performance of a composite service depends solely on hypotheses about the invoked elementary services. The component approach provides the additional benefit of reuse. In the Web Service Definition Language, elementary services are limited to simple features modeled by a collection of coexisting operations. Additionally, for certain application types it is necessary to combine a set of web services into a single composite web service. The proposed methodology incorporates ideas from software-architecture and component-based approaches to software design.

The process of web service selection and discovery is crucial to ensure that clients receive satisfactory results that meet their requirements. It is impossible to complete this task without considering the ranking relations among numerous candidates with similar functionalities. Therefore, ranking plays a vital role in a Web service selection system as it integrates results from earlier stages and presents them to the requesting users. This paper focuses on the ranking process, taking into account the users' SQA requirements.

2. Hidden Markov Model

A Hidden Markov Model is a graphical model for studying the likelihoods of events in sequential data. The fundamental idea is that there exists a set of states, but we do not know which state we are actually in (hence "hidden"). Instead, we can make an educated guess about the state, though certainty cannot be achieved. Furthermore, there are probabilistic transitions between states, and these transition probabilities may themselves be known or unknown. The states serve not only for clarity but also for dividing data into smaller segments, and even for generating new data. The "generative" characteristic involves training a model on the data and subsequently sampling transitions and emissions at random, which enables the generation of new data from a hidden Markov model.

2.1 Definition

The variables that define our HMM model are as follows:

The set of states X = {x1, x2, ..., xn}.

The output alphabet Z = {z1, z2, ..., zm}.

The initial probability π(i) of being in state xi at time t = 0.

The transition probabilities A = {aij}, where aij is the probability of moving to state xj at time t + 1 given that we are in state xi at time t. The transition probabilities out of a state do not depend on the states occupied at earlier times.

The output probabilities B = {bj(k)}, where bj(k) is the probability of observing zk at time t given that the system is in state xj at time t.

For illustration, assume we have two biased coins that are being flipped, and an observer records the result of each flip without knowing which coin was flipped. This situation can be represented by Figure 1, in which the states of the Hidden Markov Model (HMM) are q1 and q2 (the two coins), the output symbols are H (heads) and T (tails), and the transition and output probabilities are as indicated. If we assign π(q1) = 1 and π(q2) = 0, then the following is a possible sequence of transitions and outputs for the HMM depicted in the diagram.


There are several events for which we can calculate probabilities with ease.

Pr[x1 x1 x1 x2 x2 x1 x1] = π(x1) · a11 · a11 · a12 · a22 · a21 · a11 ≈ 0.025
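To make the definitions concrete, the following minimal sketch encodes a two-coin HMM in plain Python and evaluates the probability of the state sequence above. Since the figure's actual probability values are not reproduced in this text, the entries of A and B below are illustrative assumptions, chosen so that the product reproduces the ≈ 0.025 value.

```python
# Minimal two-coin HMM sketch. The transition (A) and output (B)
# probabilities are illustrative assumptions, since Figure 1's actual
# values are not reproduced in the text.
states = ["q1", "q2"]          # the two biased coins
outputs = ["H", "T"]           # heads / tails

pi = {"q1": 1.0, "q2": 0.0}    # initial probabilities, as in the example

A = {  # A[i][j] = probability of moving from state i to state j
    "q1": {"q1": 0.7, "q2": 0.3},
    "q2": {"q1": 0.4, "q2": 0.6},
}
B = {  # B[j][k] = probability that state j emits symbol k
    "q1": {"H": 0.9, "T": 0.1},
    "q2": {"H": 0.2, "T": 0.8},
}

def state_sequence_probability(path):
    """Pr[path] = pi(path[0]) times the product of transition probabilities."""
    p = pi[path[0]]
    for prev, cur in zip(path, path[1:]):
        p *= A[prev][cur]
    return p

# Mirrors Pr[x1 x1 x1 x2 x2 x1 x1] = pi(x1) * a11*a11*a12*a22*a21*a11
print(state_sequence_probability(["q1"] * 3 + ["q2"] * 2 + ["q1"] * 2))  # ~0.0247
```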

Calculating the transition probability relies on the specific problem at hand. For certain situations, such as road snapping, it is possible to directly compute this probability using the available data.

If the observation probabilities are known, determining the transition probabilities is simple: find the state path that maximizes the observation probabilities and count the transitions along it to estimate the transition probabilities. The Baum-Welch algorithm is the most popular approach for estimating the observation and transition probabilities of an HMM simultaneously.
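As a sketch of the counting step described above (not the full Baum-Welch procedure), the following assumes a decoded state path is already available and estimates each transition probability as a normalized transition count:

```python
from collections import Counter, defaultdict

def estimate_transitions(path):
    """Estimate a_ij as (count of i -> j transitions) / (count of i as a source)."""
    counts = defaultdict(Counter)
    for prev, cur in zip(path, path[1:]):
        counts[prev][cur] += 1
    return {
        i: {j: c / sum(nxt.values()) for j, c in nxt.items()}
        for i, nxt in counts.items()
    }

# Example: a state path decoded by maximizing the observation probabilities.
path = ["q1", "q1", "q2", "q2", "q1", "q1", "q1", "q2"]
print(estimate_transitions(path))
# {'q1': {'q1': 0.6, 'q2': 0.4}, 'q2': {'q2': 0.5, 'q1': 0.5}}
```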

The key elements of Service Quality Architecture (SQA) discussed in this paper are RT (response time), execution cost, availability, reputation, and successful execution rate. RT can be defined in various ways, for instance as the time from sending a request to receiving the response. This period encompasses the transmission time of the request, QT (queuing time), ET (execution time), and the transmission time of the response back to the requester. However, measuring these time sections is challenging because they depend on network conditions. Another approach is to measure the time between the service provider receiving a request and sending the response back to the service requestor.

This measurement includes only QT and ET, which are affected by the workload of the web service. Because the workload changes over time, this value must be continuously updated for each web service. The execution cost is a fee received by the service provider from the service requestor for each execution; it is determined solely by the service provider and may change according to the provider's financial policy at the time. Availability is the degree to which a web service is accessible and ready for immediate use at any given point.

Table 1 summarizes the Service Quality Architecture employed in this paper.


SQA | Description
RT | Time between receiving a request and sending the response
EC | Execution cost charged per request
Availability | Up time / (Up time + Down time)
Reputation | Σ Repi / Total no. of usages
Successful ER | No. of successful requests / Total no. of requests
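The ratio definitions in Table 1 translate directly into code. The sketch below is illustrative; the function names and raw counters are our own assumptions rather than the paper's notation:

```python
def availability(up_time, down_time):
    """Availability = up time / (up time + down time)."""
    return up_time / (up_time + down_time)

def reputation(rep_scores):
    """Reputation = sum of per-usage reputation scores / total number of usages."""
    return sum(rep_scores) / len(rep_scores)

def successful_er(successful_requests, total_requests):
    """Successful execution rate = successful requests / total requests."""
    return successful_requests / total_requests

# Illustrative values for one candidate web service.
print(availability(980.0, 20.0))        # 0.98
print(reputation([4, 5, 3, 5, 4]))      # 4.2 (e.g., on a 1-5 rating scale)
print(successful_er(940, 1000))         # 0.94
```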

The notations used in the paper are described below.

m: the number of tasks.

n: the number of candidate web services available for each task.

pi: the i-th atomic process of a composition schema (1 ≤ i ≤ m).

wsij: the j-th candidate web service for the i-th atomic process (1 ≤ i ≤ m, 1 ≤ j ≤ n).

d: index of an SQA attribute.

wd: weight of the d-th SQA constraint, defined by a client.

Cond: the permissible value of the d-th SQA constraint.

Aggd: the aggregated value of the d-th SQA attribute in a composition plan.

bij: a binary decision variable (0 or 1); if bij equals 1, the j-th candidate web service is chosen for the i-th process.

In general, web service composition plans involve serial, cycle, XOR-parallel, and AND-parallel execution patterns. The aggregated SQA value of a web service composition is calculated according to its workflow pattern. The discussion below covers the description and aggregation values of the workflow patterns. Negative criteria values are scaled by equation 2, while positive criteria values are scaled by equation 1.
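As a hedged illustration of how Aggd depends on the workflow pattern, the sketch below uses commonly assumed aggregation rules (response times add along a serial path, AND-parallel branches wait for the slowest branch, availabilities multiply along a chain). The preview does not reproduce equations 1 and 2, so the min-max scaling shown is a standard assumption, mapping both positive and negative criteria onto [0, 1] with 1 as the best value:

```python
# Commonly assumed SQA aggregation rules for composition patterns.
# These are standard choices, not necessarily the paper's exact equations.
def serial_rt(rts):
    return sum(rts)            # serial: response times add up

def and_parallel_rt(rts):
    return max(rts)            # AND-parallel: wait for the slowest branch

def serial_availability(avails):
    prod = 1.0
    for a in avails:
        prod *= a              # availability of a chain is the product
    return prod

# Assumed min-max scaling for equations 1 and 2: positive criteria
# (bigger is better) and negative criteria (smaller is better) are both
# mapped onto [0, 1] with 1 as the best value.
def scale_positive(v, vmin, vmax):
    return (v - vmin) / (vmax - vmin) if vmax > vmin else 1.0

def scale_negative(v, vmin, vmax):
    return (vmax - v) / (vmax - vmin) if vmax > vmin else 1.0

print(serial_rt([0.8, 1.2, 0.5]))          # 2.5 s along a sequence
print(and_parallel_rt([0.8, 1.2, 0.5]))    # 1.2 s, bounded by the slowest
print(serial_availability([0.99, 0.95]))   # 0.9405
print(scale_negative(1.2, 0.5, 2.0))       # RT is a negative criterion
```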


The values of the n SQA attributes of a service S are represented as a vector Qs = (Qs1, Qs2, ..., Qsn). Similarly, the SQA requirements requested by a consumer are represented as a vector Qr = (Qr1, Qr2, ..., Qrn). The consumer's preferences are represented as a vector pr = (pr1, pr2, ..., prn), where pri ∈ [1, n]. If a consumer has no preference regarding an attribute, the default preference value of n is used for that parameter.
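A minimal sketch of how the vectors Qs, Qr, and pr could be combined into a single ranking score follows; the inverse-rank weighting used here is an illustrative assumption, not the paper's exact formula:

```python
def rank_score(qs, qr, pr):
    """Score a service whose attribute vector qs is compared against the
    requested vector qr, weighted by the consumer's preferences pr.
    Attributes are assumed pre-scaled so that larger values are better."""
    n = len(qs)
    # Default preference of n for attributes the consumer did not rank.
    prefs = [p if p is not None else n for p in pr]
    # A lower preference rank means more important, so invert into weights.
    weights = [1.0 / p for p in prefs]
    total = sum(weights)
    return sum(w * (s - r) for w, s, r in zip(weights, qs, qr)) / total

qs = [0.9, 0.8, 0.7]      # the service's scaled SQA values
qr = [0.8, 0.6, 0.9]      # the consumer's requested levels
pr = [1, 2, None]         # no stated preference for the 3rd attribute
print(rank_score(qs, qr, pr))  # positive: on balance exceeds the request
```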

The author has thoroughly examined the server times for a composite web services database that follows the fork-join execution model, and proposes eliminating servers with slow response times during join operations in order to maximize server performance. The focus of this work is to study the fork-join model and understand how data from various servers is merged. Performance analysis of Web services in this domain mainly revolves around composite web services and their response times. In the fork-join model, a single Internet application invokes multiple Web services in parallel and gathers their responses, returning all results to the client.

The fork-join system is best explained under certain assumptions: the number of servers equals 2, jobs arrive according to a Poisson process, and task service times are exponentially distributed. Nelson and Tantawi proposed an approximation for the case where the number of servers is greater than or equal to 2 and all servers are homogeneous and exponential. A more general scenario, in which the arrival and service processes are of a general nature, has also been presented.
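A small Monte Carlo sketch of the fork-join behaviour under the stated assumptions (homogeneous servers, exponential service times) is given below. Queuing delay from Poisson arrivals is deliberately ignored here; the sketch isolates the synchronization cost of the join, where a request completes only when the slowest branch responds:

```python
import random

def fork_join_rt(n_servers, mean_service, trials=50_000, seed=42):
    """Simulate fork-join response time: a request is fanned out to all
    servers and completes only when the slowest response arrives.
    Queuing is ignored; only the join synchronization cost is modeled."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        total += max(rng.expovariate(1.0 / mean_service)
                     for _ in range(n_servers))
    return total / trials

# For exponential service times, E[max of k] = mean * (1 + 1/2 + ... + 1/k),
# so the join penalty grows with the number of parallel branches.
print(fork_join_rt(1, 1.0))   # ~1.0
print(fork_join_rt(2, 1.0))   # ~1.5 for the two-server case
```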

This paper focuses on three major evolutionary algorithms: Particle Swarm Optimization (PSO), Interactive Evolutionary Computation (IEC), and Differential Evolution (DE). While IEC is suitable for discrete optimization (DO), PSO and DE are best suited to continuous optimization. The paper introduces all three techniques, highlighting their common computational procedure, discusses their similarities and differences across the various computational steps, and contrasts their performance. The literature summarized covers topics such as location allocation, flexible job shops, multimode resource-constrained project scheduling, and vehicle routing constraints.

The average RT is the time that an Enterprise Server takes to provide the required, accurate result. Several factors affect the RT, including the number of users, the available network bandwidth, the average think time, and the type of request made to the server.

The RT (Response Time) in this section represents the average or mean response time of different requests. Each request type has its own minimum response time. However, when evaluating or testing system performance, the analysis focuses on the average response time of all requests sent to the server. A faster web service response time results in a higher number of processed requests per minute. However, as the user count of the system increases, the response time starts increasing proportionally even if there is a decrease in the number of requests per minute.

The performance graph of all servers shows that as the number of requests per minute decreases, the response time (RT) increases significantly. This inverse relationship becomes more prominent beyond a certain point.

The graph below clearly shows the peak load, the point at which the number of requests per minute starts to decrease. Before this peak is reached, response-time calculations are neither important nor accurate, because the peak numbers are not used in the formula. From this point on the graph, however, the administrator can accurately calculate response time by considering both the maximum number of users and the requests per minute.

The formula is determined by the following method and notations.

The response time RT (in seconds) at peak load is:

RT = (n / r) − Tthink

where n is the number of concurrent users, r is the number of requests per second, and Tthink is the average think time (in seconds) per request. The think time must always be subtracted in the calculation of RT in order to obtain an exact and precise result.

For example, suppose the system supports n = 5,000 concurrent users at r = 1,000 requests per second, with an average think time Tthink of 3 seconds per request. Then:

RT = (5000 / 1000) − 3 sec = 5 − 3 sec

Thus, the response time is 2 seconds.
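The worked example translates directly into code; below is a minimal sketch under the same notation (n concurrent users, r requests per second, Tthink seconds of think time):

```python
def response_time_at_peak(n_users, requests_per_sec, think_time_sec):
    """RT at peak load = n / r - Tthink; the think time must be
    subtracted to obtain an accurate result."""
    return n_users / requests_per_sec - think_time_sec

# The worked example: 5,000 users, 1,000 requests/sec, 3 s think time.
print(response_time_at_peak(5000, 1000, 3.0))  # 2.0 seconds
```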

The critical factors for Application Server performance are the system's RT and throughput; all RT calculations are done at peak load.

In our paper, we propose an optimal web service composition plan over a large search space (n^m possible plans). We present an approach based on an improved genetic algorithm (GA) that converges quickly to an appropriate composition plan. Neighbor plans are generated using Tabu search, and the simulated-annealing heuristic is applied to accept or reject each neighbor plan. During this phase, all services that fall outside the user's requirements are deleted, leaving only the services that fulfill the user request; from these, the service with the highest score is selected.

We have proposed the use of Tabu search and simulated annealing (SA) as a constraint-satisfaction-based approach. However, this approach may not always produce the best result, because it can only work on one composition plan at a time. To address this limitation, we introduced a new method that uses a genetic algorithm to find the optimal composition plan. The SA method incorporates progressive updates and chromosome selection to improve the algorithm's speed. This new approach, called self-orchestration, encompasses all interactions between and within the services that it orchestrates, and performs the execution before taking any further action. A key language for defining self-choreographies is the Web Service Choreography Description Language.

Partial initialization of the chromosomes helps this method avoid local optima. Unlike the Tabu method, which is applied to a test sample of composition plans, the proposed method works in general. Several composition approaches are described, including self-orchestration, self-choreography, self-coordination, and component-based composition. Self-orchestration specifies how the services involved in the composition interact at the message level, including the possible order of interactions and the business logic. A sketch of the accept/reject step appears below.
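The sketch below illustrates the accept/reject step described above: a Tabu-style neighbor plan is proposed, and the simulated-annealing criterion decides whether to move to it. The scoring function, neighbor generator, and cooling schedule are illustrative assumptions, not the paper's exact procedure:

```python
import math
import random

def sa_accept(current_score, neighbor_score, temperature, rng=random):
    """Simulated-annealing acceptance: always accept a better plan,
    accept a worse one with probability exp(delta / T)."""
    delta = neighbor_score - current_score
    if delta >= 0:
        return True
    return rng.random() < math.exp(delta / temperature)

def anneal(initial_plan, score, neighbors, t0=1.0, cooling=0.95, steps=500):
    """Walk from plan to plan, accepting neighbors via sa_accept.
    `score` and `neighbors` are problem-specific callables (assumptions):
    `score(plan)` returns a value to maximize, `neighbors(plan)` returns
    a list of Tabu-style neighbor composition plans."""
    plan, temp, best = initial_plan, t0, initial_plan
    for _ in range(steps):
        candidate = random.choice(neighbors(plan))   # Tabu-style neighbor
        if sa_accept(score(plan), score(candidate), temp):
            plan = candidate
            if score(plan) > score(best):
                best = plan                          # keep the best plan seen
        temp *= cooling                              # cool the temperature
    return best
```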

Fig 1: Values of All Web Services and Tasks

5.1 Proposed Design

The activity diagram below visually represents the flow of activity for each object in the system, utilizing the database to display relevant content to the user. It also showcases the flow from one activity to another using a flow chart-like structure. The diagram includes the Server, User, database, queries, and subqueries. Each actor within the system performs specific functions to achieve their objectives. Initially, the user enters the system by inputting their correct username and password. Once authenticated, they can input their query into the system.

A use case diagram represents a user's interaction with the system and shows the specifications of a use case. Filtering web services involves both functional and non-functional matchmaking: functional matchmaking filters out web services whose functionality differs from what the client requires, while non-functional matchmaking eliminates web services that do not meet the required quality.

At this stage, the candidate web services for each task are chosen, and the user's details are retrieved and stored either in the web agent's memory or in a temporary storage allocation site. The web agents then analyze the different web applications to determine the best web servers, and the resulting information is presented along with user comments and reviews.

Fig 2: Flowchart

The diagram below illustrates the sequence of steps involved in allowing a user to view their related content. The diagram includes different objects such as User, database, Validate, relevant, and web access, and demonstrates the sequence of interactions between these objects. A sequence diagram is a type of interaction diagram that depicts how processes interact with each other and in what order. The user enters their login details and establishes a connection through web access, which is then linked to the time and review request. Subsequently, the web agent analyzes various requests from web applications and provides information on time and review, offering possible details to the user.

Fig 3: Sequence Diagram

Web-based systems have gained popularity in institutions, government agencies, and businesses for effectively meeting service requirements. When choosing a user service, the quality of available web services is crucial. To tackle the challenges of composing web services, there are various methods based on qualitative characteristics. These methods can be classified as exact or approximate. Exact methods evaluate all available designs by analyzing and computing candidate routes to obtain an accurate answer. Conversely, approximate methods aim to select a design that closely approximates the best and most precise solution.

The graph below compares web services in the field and displays their performance based on RT and user reviews.


Fig 4: Resulting comparison graph of web service response times and user reviews

In recent years, considerable attention has been given to the optimal composition of web services, resulting in numerous research efforts. Various methods have been developed to target specific aspects; however, despite the study of different innovative algorithms, challenges persist in web service composition, particularly regarding qualitative characteristics. Several of these methods face obstacles such as local optimality and fundamental problems with genetic algorithms: the crossover and mutation operations frequently act randomly, degrading the method. To tackle these issues and enhance efficiency, combined methods and operators, such as the revolution operator, have been employed along with extra functions to improve performance.

These techniques aim to improve speed, convergence, and efficiency in large search spaces. Previous studies have not established a specific benchmark tool for evaluating the algorithms; some researchers have compared different simulation environments or data to assess performance. The results indicate that each method has its own drawbacks and lacks a standardized approach. The proposed method utilizes the Skyline algorithm and a parallelism technique to achieve optimal composition and minimize response time (RT) in highly scalable scenarios.
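As a sketch of the Skyline step: a candidate service is kept only if no other candidate dominates it, i.e., is at least as good on every attribute and strictly better on at least one. Treating RT and execution cost as the attributes, both to be minimized, is an illustrative choice:

```python
def dominates(a, b):
    """a dominates b if a is no worse on every attribute (smaller is
    better here) and strictly better on at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def skyline(candidates):
    """Keep only the non-dominated candidates (the skyline)."""
    return [c for c in candidates
            if not any(dominates(other, c) for other in candidates if other != c)]

# Each candidate: (response time in seconds, execution cost).
services = [(0.8, 3.0), (1.2, 1.0), (0.9, 2.5), (1.5, 1.5), (0.8, 3.5)]
print(skyline(services))  # [(0.8, 3.0), (1.2, 1.0), (0.9, 2.5)]
```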

The primary focus of all web services is maintaining service quality in order to retain clients. This study addresses the importance of the response times (RTs) of composite Web services in achieving service quality. Our proposal introduces a heuristic model for predicting the RT of a web service, allowing an optimal web service to be selected at runtime from a list of functionally similar web services. To handle the probabilistic instances of web services, we have implemented a Hidden Markov Model. Our model assumes that web services are deployed on a cluster of web servers and occasionally experience delays or crashes during invocation due to a faulty node in the server clustering system. By utilizing the HMM, we can predict the probabilistic behavior of these web servers and ultimately select the most optimal Web Services based on their probabilistic value.

This paper proposes a solution to overcome the obstacles in selecting web services in a manner aware of the Service Quality Architecture. To tackle this issue, an SQA-based algorithm is introduced, which identifies all possible selections that yield results close to the optimal and efficient solution. Notably, this process is accomplished swiftly.

