Reliability and Validity
The human services profession uses an enormous quantity of information to conduct tests in the process of service delivery. The data assembled go to an assessment panel when deciding which option will best fit the interests of the population, or the experimental idea in question. This paper defines and describes the different types of reliability and validity, and presents examples of data collection methods and instruments used in human services and managerial research (UOPX, 2013).
Types of Reliability
Reliability is described as the degree to which a survey, test, instrument, observation, or measurement procedure generates equivalent results each time an examiner performs the experiment. Reliability comes in five types: "alternate-form, internal-consistency, item-to-item, judge-to-judge, and test-retest reliability" (Rosnow & Rosenthal, 2008, p. 125). Alternate-form reliability is the degree of correlation between different forms of the same test.
Internal-consistency reliability is the degree to which the items of a test hang together as a whole. Item-to-item reliability is the reliability of any single item, on average. Judge-to-judge reliability is the reliability of any single judge's ratings, on average. Test-retest reliability is the degree of stability of a measuring instrument over time; it provides reassurance that the results of the test are consistent (Rosnow & Rosenthal, 2008). Each of the types of reliability listed above is commonly used in research connected to human services.
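As an illustrative aside (not drawn from the sources cited here), internal-consistency reliability is commonly quantified with Cronbach's alpha, which compares the summed variances of individual items with the variance of respondents' total scores. A minimal Python sketch, using entirely hypothetical survey data:

```python
from statistics import pvariance

def cronbach_alpha(scores):
    """Cronbach's alpha for internal-consistency reliability.

    `scores` holds one list of item scores per respondent.
    """
    k = len(scores[0])                  # number of items
    items = list(zip(*scores))          # transpose: one tuple per item
    item_var_sum = sum(pvariance(item) for item in items)
    total_var = pvariance([sum(person) for person in scores])
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# Hypothetical 5-respondent, 4-item survey (Likert-style scores)
data = [[3, 4, 3, 4],
        [2, 2, 3, 2],
        [5, 4, 5, 5],
        [1, 2, 1, 2],
        [4, 4, 4, 3]]
print(round(cronbach_alpha(data), 2))   # → 0.95
```

Values near 1.0 indicate that the items hang together as a single construct; values much lower are conventionally read as weak internal consistency.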
Because human services work is primarily concerned with helping individuals through the process of lifestyle modification, it is vital that researchers make certain that the experiments used to establish the theories behind this practice are reliable. Consider test-retest reliability, which assesses consistency across time. Reliability may fluctuate with the various factors that affect how an individual reacts to the test, including the participant's frame of mind, disruptions, or the time of day. A high-quality test will largely withstand these factors and show relatively little variation.
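To make the test-retest idea concrete, the sketch below (hypothetical data, not from the cited sources) correlates scores from two administrations of the same test to the same participants; a Pearson correlation near 1.0 indicates stable, reliable measurement.

```python
def pearson_r(x, y):
    """Pearson correlation between two sets of paired scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical scores from the same five participants, two weeks apart
time1 = [10, 14, 8, 20, 16]
time2 = [11, 13, 9, 19, 17]
print(round(pearson_r(time1, time2), 2))   # → 0.98
```

Here the two administrations track each other closely, which is what a reliable instrument should show; large swings between administrations would instead point to the situational factors described above.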
In contrast, an unstable test is extremely susceptible to these factors and will produce inconsistent results.
Validity
Validity is the degree to which a test measures what it sets out to measure (Rosnow & Rosenthal, 2008). The types of validity include "construct, content, convergent or discriminant, criterion, external, face, internal, and statistical" (Rosnow & Rosenthal, 2008, p. 125). It is important to establish the validity of research outcomes because they cannot leave room for error, nor for unexplained variables.
Validity is not verified by a single statistic; rather, it is established by a body of evidence demonstrating the relationship between the test and the behavior it is intended to measure. It is therefore important for a test to be valid so that its results can be safely and correctly applied and interpreted. Construct validity is the extent to which inferences can legitimately be made from the observations in the research to the theoretical constructs on which those observations are based.
Content validity reflects a more subjective form of measurement because it relies on people's perceptions for measuring constructs that would be difficult to assess with a test-retest approach. Convergent validity is the degree to which measures that should be related in theory are in fact related. Criterion validity is the accuracy a measure exhibits when weighed against other measures that have already been proven valid. External validity refers to the degree to which the results of the research generalize beyond the sample. Face validity is only loosely tied to formal scientific method.
It concerns whether or not the instrument appears, on its face, to measure what it declares. Internal validity takes into account whether the independent variable of the research can correctly be affirmed to have produced the observed results. Statistical validity is the extent to which the results of the research, or its scores, are reliable and convincing. Furthermore, to establish reliability and validity, a process of study should be put into action on the gathered data, governing how the data are used to perform and manage the study according to plan.
The types of reliability mentioned above describe the degree to which repeated measures under unchanged conditions demonstrate the same outcome (Rosnow & Rosenthal, 2008). Data-gathering approaches that support reliability and validity are judged by products that are simple to understand, evaluate, generalize, and link to the hypothesis. Data collection methods play an essential role in human services because of the influence they have on the data used to identify the procedures relating to the monitored results.
Three common methods are focus groups, written records, and surveys. These methods help in determining any possible variation coming from different individuals or group observations. Data collection methods ought to honor the ethical principles behind the study, and the methods employed in assessment fall into three groupings: in-depth interviews, observation methods, and document review (Zaza, Wright-De Aguero, & Briss, 2000). Reliability is not a binary condition with a single expected outcome; a data-gathering procedure may range from sound reliability to feeble reliability.
For example, if the outcomes of a data-gathering method vary, it can largely be stated that the test is inconsistent. In contrast, if the tests from the data-gathering process produce consistent results with no disparity in scores over a period of time, the findings can be determined to be reliable. This concept is a basic formula explaining how these two statistical notions are of particular importance to reliability and validity (Zaza, Wright-De Aguero, & Briss, 2000).
In cross-cultural management research, the data collection methods and instruments emphasized rely on qualitative methods for data collection and testing. This research approach, which stresses developing an understanding of complicated, interconnected, or changing circumstances, is particularly applicable to the challenges of performing management research (Pratama & Firman, 2012). Qualitative methods merged with quantitative methods can offer exceptionally powerful analysis.
This method of research should be implemented with practical precision. For example, the relationships and patterns among issues, or the background in which action happens, are useful in developing the theory or constructs behind the hypotheses that address the circumstances disputed in managerial research. In conclusion, reliability concerns the precision and consistency of a measure, and validity concerns whether the measure assesses what it is intended to assess.
The relationship between these two components of research provides an assessment of the value of a test's measurements. Their respective roles shift according to the factors that influence an experiment and the relationships among its measures. The values of reliability and validity serve the process of data collection in human services and management, providing a powerful tool for applying and practicing theories that prove efficient and adequate through the assessment of reliable sources.