Combo with Research Exam and 1 other – Flashcards

question
Grounded theory method
answer
developing theories
question
atypical of qualitative research
answer
standardized quality of life questionnaire
question
purpose of phenomenological research
answer
describe experiences as they are lived
question
Data collection in many types of qualitative studies is considered complete when
answer
a point of theoretical saturation is reached
question
Qualitative research is based on which of the following?
answer
The belief that multiple "truths" and "realities" exist
question
There are many different approaches to qualitative research. What is one common thread that is evident in all types?
answer
The process in each kind is a little like working a puzzle and putting pieces together to make it whole.
question
What type of research involves at least a minor degree of control by the researcher to implement the study treatment?
answer
quasi-experimental
question
What type of research has a major focus on examining the long-term, short-term, negative, and positive results of care across a variety of settings?
answer
Outcomes research
question
Based on the Medical Outcomes Study Theoretical Framework, which of the following are examples of processes of care that should be considered when designing the study
answer
nurse counseling regarding how to use ibuprofen for arthritis pain
question
dissemination of findings of outcomes research
answer
A series of presentations and publications in many venues
question
How do clinical guideline panels contribute to outcomes research?
answer
They incorporate evidence on health outcomes into recommendations concerning management strategies
question
A financial incentive is a ____ of care that can negatively influence cost outcomes for third-party payers.
answer
structure
question
A researcher is looking particularly at the organizational hierarchy of a health care system. What specific aspect of quality health care would this address
answer
Structure of care
question
Outcomes researchers study ways in which healthcare structure and process influence clinical end points, patient well-being, patient satisfaction, and
answer
functional status
question
Why is it important to widely disseminate the findings of outcomes research
answer
By disseminating the findings of outcomes research, myths can be dispelled or supported
question
A group of experts from around the world have been gathered together. They come from a variety of disciplines and represent many years of experience in the field. Each has read extensively on the topic of interest and shares what they have read. The goal of this meeting is to identify research priorities. What strategy is being employed in this situation
answer
consensus knowledge building
question
A researcher conducting outcomes research is focusing on what component of patient care
answer
end results
question
Which statement best reflects why nurse-sensitive quality indicators are extremely important to the profession
answer
Nurse-sensitive quality indicators are directly related to what nurses do for the patient
question
A researcher designs a study that uses a random sampling method to decrease the likelihood of bias in the study sample. This strategy was used to implement
answer
control
question
The primary purpose of nursing research is to
answer
generate scientific knowledge to guide nursing practice
question
Quantitative and qualitative research approaches are particularly useful in nursing because they
answer
balance each other
question
A nurse who reads research articles and incorporates research findings into nursing practice would demonstrate which of the following research roles
answer
consumer
question
Ethnographic research focuses on
answer
trying to understand cultures from an emic perspective
question
The components of rigor in qualitative research are
answer
openness and adherence to the philosophical orientation.
question
Which of the following is a characteristic of phenomenological research methodology
answer
bracketing
question
A research problem is defined as a(n):
answer
general area of concern requiring study
question
The main reason for not including qualitative studies in reviews for evidence-based practice is
answer
it is difficult to evaluate qualitative studies
question
Perhaps the biggest obstacle to outcomes research
answer
it is extremely complex and difficult to plan and control
question
According to Donabedian's Health Care Quality Theory, what three factors must be considered when designing research focused on assessing healthcare quality
answer
Structure, Process, and Outcomes
question
Cochrane Database
answer
systematic review
question
Which of the following indexes would provide the largest number of relevant nursing sources
answer
Cumulative Index to Nursing and Allied Health Literature
question
Which of the following research approaches would be most reasonable to use if a researcher is interested in finding out what it is like to live with a person who has a terminal illness
answer
phenomenological approach
question
The approach involves studying behaviors from within a culture
answer
emic
question
Population-based studies offer which additional focus to research questions
answer
Conditions are studied in the context of a community
question
Within CINAHL, what search field would you use to limit/revise your search by study type
answer
publication type
question
The statement, "This study explores the experience of caregiving by adult daughters of parents with Alzheimer's disease," is an example of which of the following
answer
research objective
question
"A low-fat diet is related to lower total cholesterol and higher HDL (high-density lipoprotein)."
answer
Complex, directional, associative
question
Cancer patients who receive music therapy complain less frequently of pain and require less pain medication than cancer patients not receiving music therapy
answer
complex, directional
question
Normal saline flush with heparin is more effective than normal saline flush alone in maintaining patency of an intermittent intravenous site
answer
simple, directional
question
In which situation would a null hypothesis be appropriate
answer
The researcher predicts that there will be no difference among the groups
question
The response, behavior, or outcome that the researcher wants to predict or explain.
answer
dependent variable
question
The purpose statement should identify the study variable(s) and what other key aspect of the study
answer
population to be studied
question
Ethical research follows ____.
answer
three principles set forth by the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research.
question
In experimental research, the researcher's control of the "treatment" is referred to as
answer
manipulation
question
A research design is best described as a:
answer
blueprint for conducting the study.
question
Which of the following must be present in experimental research
answer
Random assignment of subjects to groups
question
Control of threats to internal validity is most stringently considered in which type of study?
answer
Quasiexperimental
question
Which of the following are important aspect to consider when critiquing a literature review?
answer
Primary sources cited; references current (no more than 10 years old, except for landmark studies); gaps in knowledge identified in a way that provides a basis for the study
question
The main goal of the Public Health Service (PHS) policy
answer
is to ensure humane treatment of animals
question
Another name for probability sampling
answer
random sampling
question
The population from which the researcher selects the actual study sample is referred to as
answer
accessible population
question
A researcher who does not disclose that a portion of the data from the original study sample was not used in the final data analysis could be accused of:
answer
scientific misconduct
question
The ____ population is the entire set of individuals who meet the sampling criteria.
answer
target
question
A researcher looking over a patient's record for purposes of obtaining research data would be exposing that subject to what degree of harm
answer
No anticipated effects
question
Which of the following will increase the external validity of a study
answer
randomly selected sample
question
a psychological response of subjects who change their behavior simply because they are in a study.
answer
Hawthorne effect
question
The provision that research be differentiated to potential subjects as therapeutic or nontherapeutic is a part of the:
answer
Declaration of Helsinki
question
A researcher wanting to explore the lives of women newly diagnosed with breast cancer obtains a random sample of the population. What part of the study will be strengthened because of the random sample?
answer
external validity
question
A problem for nurses conducting quasi-experimental or experimental research is that many times a patient can not ethically be left without a treatment (i.e., put into a control group that receives nothing). What is the best solution for this problem?
answer
Ensuring all patients receive the standard of care.
question
The concept of causality would be important in
answer
experimental
question
A threat to internal validity of a study using a two-group pretest/posttest design is that:
answer
The pretest may sensitize subjects so they change their behavior or responses on the posttest
question
The term "internal validity" refers to the degree to which
answer
the independent variable can be interpreted as being responsible for the effects on the dependent variable
question
At what point in the research process is the literature review conducted in a qualitative investigation
answer
depends on the type of study
question
Subjects who participate in a study of patients with inflammatory bowel disease are described as the
answer
sample
question
What international document was developed in 1949 to direct investigators in conducting ethical research?
answer
Nuremberg Code
question
Sample size is deemed to be adequate when the researcher is detecting no new knowledge from additional subjects
answer
qualitative sample size
question
Caucasian, African American, Asian American
answer
nominal
question
In the research process, a critical factor in data collection is
answer
consistency
question
Characteristics of the sample may affect the validity of a tool for use in a particular study
answer
validity of a measure
question
The difference between the observed score and what exists in reality (true score) is called:
answer
measurement error
question
The lowest level of measurement is
answer
nominal
question
Is not useful in situations where the variable is dynamic and changing
answer
test-retest reliability
question
Data characteristics that can be ranked are measured on which scale
answer
ordinal
question
Type I errors
answer
Occurs when the researcher states that there is a statistically significant difference between groups (the null is rejected) when actually, there is NO difference between the groups (the null hypothesis should be accepted)
question
Descriptive statistics should be reported in every study for which of the following reasons
answer
to show the sample characteristics
question
What statistical test would you use to test the difference in heart rate response to exercise between a group of cardiac patients involved in a formal cardiac rehabilitation program and another group exercising at home?
answer
t-test
question
The mean scores of two groups participating in a study are exactly the same for a particular variable. This suggests that:
answer
The average score is the same for each group
question
The median of the following set of numbers (12, 4, 13, 20, 4, 10, 14) is
answer
12
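The median on this card can be verified with a short sketch (Python here is purely illustrative; the numbers are the ones from the card):

```python
import statistics

scores = [12, 4, 13, 20, 4, 10, 14]
# Sorted: [4, 4, 10, 12, 13, 14, 20]; with seven values, the 4th is the median
median = statistics.median(scores)
print(median)  # 12
```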
question
Which of the following measures would be helpful in interpreting the relationship of a particular score to the distribution?
answer
standard deviation
question
E. M. Rogers developed a theory concerned with:
answer
research utilization
question
Utilization of research findings in clinical areas is slow because:
answer
Nurses are not usually rewarded for using research findings in practice
question
A model of research utilization such as the Rogers model has been designed to
answer
facilitate critical thinking and assist in decision making for research utilization
question
The Agency for Healthcare Research and Quality (AHRQ) is noted for development of which of the following
answer
Clinical practice guidelines to direct health care practices
question
Which of the following is one of the characteristics of an innovation that determines the probability and speed at which an idea will be adopted?
answer
compatibility
question
What is a metasummary?
answer
a synthesis of multiple primary qualitative studies to produce a narrative about a selected phenomenon
question
Which of the following is true about the relationship between reliability and validity
answer
Unless it is reliable, it cannot be valid
question
Which of the following is true about Likert scales?
answer
Scores from individual items can be summed for a total score.
question
Cronbach's alpha is used in tool development to determine:
answer
internal consistency
question
The semantic differential scale consists of:
answer
sets of bipolar adjectives measuring degree of feeling about a concept.
question
A major advantage of using a questionnaire for data collection is that a questionnaire
answer
can be distributed to large groups of people
question
Which of the following can be measured using direct measures?
answer
Concrete factors, such as age, gender, height, and weight
question
Which level of measurement is indicated when referring to a temperature of 70°
answer
interval
question
Subjects respond to a visual analogue scale by:
answer
drawing a mark across a 100-mm line that has bipolar anchors.
question
Under what condition would the mean, median, and mode be equal?
answer
when scores are normally distributed
question
Which of the following will be most affected by scores that are extremely high or extremely low?
answer
mean
question
A researcher notes that although no statistical significance was found, some premature infants who were exposed to soothing music for 6 hours daily exhibited lower heart rates and less crying. This finding would have which type of significance?
answer
clinical
question
Rogers indicates that the process by which adopters of research modify innovations to best meet their own needs is:
answer
reinvention
question
Which of the following strategies for utilization is most amenable to adoption by baccalaureate nursing students and new graduates?
answer
Reading professional journals critically and reporting the findings at staff meetings
question
During which stage of Rogers' Theory of Diffusion of Innovations is the use of mass media most effective?
answer
knowledge
question
To detect a significant difference between two groups when the effect size is small, what should the researcher do
answer
increase the sample size
question
Which of the following types of sampling is considered to be the weakest?
answer
convenience
question
Which uses the higher level of measurement, temperature in Fahrenheit degrees or weight in kilograms?
answer
weight
question
The aspect of reliability for which interrater reliability is appropriate is:
answer
equivalence
question
The items in nominal-level instruments should be
answer
mutually exclusive and exhaustive
question
One significant criterion to use when critiquing a literature review is to
answer
make note of the source of the articles cited.
question
A researcher wants to obtain a sample of individuals who are HIV positive. Which of the following sampling methods would be the most effective way to obtain a sample?
answer
Network Sampling
question
The first internationally recognized effort to establish ethical standards was the:
answer
Nuremberg Code
question
Which of the following definitions best describes rigor in quantitative research?
answer
Amount of control and precision exerted by the methodology
question
Data Quality
answer
How do you know if the data you have collected are "good"? An ideal data collection procedure measures or captures the constructs in a way that is relevant, credible, accurate, unbiased, and sensitive. Few data collections match this ideal!
question
Measurement
answer
Quantitative studies derive data through measurement of research variables. Measurement consists of rules for assigning numbers to objects to represent quantities of attributes. "Whatever exists, exists in some amount and can be measured." (L.L. Thurstone, American psychologist)
question
Rules for Measurement
answer
The rules for measuring some variables are known to us (temperature, weight); rules for measuring other variables have to be invented. The goal of the rules is to link numeric values to reality as closely as possible, known as "isomorphism" (from Greek, meaning "equal shape").
question
Advantages for Measurement
answer
Removes guesswork (it is objective). Makes it possible to obtain reasonably precise information. Enables us to communicate information without ambiguity.
question
Errors of Measurement
answer
Both the objects being measured and the measurement procedures are subject to influences that render measurement less than accurate. Therefore, conceptually, an observed or obtained score can be represented as having two parts: a true component and an error component. Obtained score = True score +/- Error score, or XO = XT +/- XE.
question
Components of a Score
answer
True score: hypothetical, never known by us; what one hopes to measure. Error score: problematic because it represents a quantity that is unknown, variable, and random.
question
Factors Contributing to Measurement Error
answer
Situational components (being watched), transitory personal factors (hunger), response-set bias (social desirability, extreme responses, etc.), administration variations, instrument clarity, item sampling, instrument format.
question
Types of Error
answer
Error can be divided into two components. Random error: factors that randomly affect measurement of the variable across the sample; for example, a person taking a test may be in a good mood or a bad mood. Systematic error: factors that systematically affect measurement of the variable across the sample; for example, a loud noise in the hall during a test affects all the students' scores, or subjects are weighed with pressure from a hand on the shoulder.
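The random-versus-systematic distinction can be illustrated with a small simulation (a hypothetical sketch; the true score of 50 and the bias of 2 are invented for the example):

```python
import random

random.seed(42)

def observe(true_score, bias=0.0, noise_sd=1.0):
    # Obtained score = true score + systematic bias + random noise
    return true_score + bias + random.gauss(0, noise_sd)

true_score = 50.0
# Random error only: individual scores vary, but the mean converges on the truth
random_only = [observe(true_score) for _ in range(2000)]
# Systematic error: a constant bias (e.g., a hand pressing on the scale)
# shifts every score in the same direction
biased = [observe(true_score, bias=2.0) for _ in range(2000)]

print(sum(random_only) / len(random_only))  # close to 50
print(sum(biased) / len(biased))            # close to 52
```

Averaging washes out random error but not systematic error, which is why systematic error is the more dangerous of the two.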
question
Instruments: Criteria to Assess Adequacy
answer
Assessment of the quality and adequacy of an instrument is necessary. Assessment includes reliability and validity.
question
Approaches to Reliability
answer
There are two ways of approaching the concept of reliability: consistency and accuracy. Errors that affect one of these aspects usually affect the other also.
question
Reliability of Instruments
answer
Reliability refers to the degree of consistency with which the instrument measures the attribute it is supposed to be measuring (the target attribute). In practical terms, if a measure is reliable, you would get the same result over and over again. Reliability is a characteristic of a measure taken across individuals (not a single individual).
question
isomorphism
answer
The goal of the rules is to link numeric values to reality as closely as possible
question
Determination of Reliability
answer
Reliability is related to the variation of an instrument in repeated measurements of the attribute. It can't be calculated exactly but can only be estimated. A reliable measure is one that maximizes the true score component and minimizes the error component. Reliability may be thought of as a fraction or a proportion.
question
Key Aspects of Reliability
answer
Stability, Internal consistency reliability, Equivalence
question
Equivalence
answer
the degree of similarity between alternate forms of the same instrument or between the findings of two researchers. Procedures include parallel-forms reliability and inter-rater reliability (the degree to which different raters give consistent estimates of the same phenomenon).
question
Internal consistency reliability
answer
the homogeneity of an instrument; the extent to which all of its subparts measure the same characteristic. Procedures to assess this include split-half techniques and Cronbach's alpha (coefficient alpha).
question
Stability
answer
the extent to which the same results are obtained on repeated administrations of the instrument. The procedure to assess this aspect is test-retest reliability.
question
Test-Retest Reliability
answer
A procedure to assess the stability of an instrument: administer the same test to the same subjects and compare the scores. The objective comparison procedure is to compute a reliability coefficient, a numeric index of the magnitude of the test's reliability (this requires understanding a correlation coefficient).
question
The Reliability Coefficient
answer
For test-retest, it is the correlation coefficient between the two sets of scores (example, p. 454). Reliability coefficients normally range between 0.00 and +1.00; higher RCs indicate more stable measures, and coefficients > .70 are satisfactory. As a stability index, the RC is most appropriate for relatively enduring attributes such as personality, abilities, or physical characteristics. RCs are usually higher for short-term retests than for long-term retests (1-2 months).
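As a sketch of how a test-retest reliability coefficient is computed (the two score lists are hypothetical, invented for this example):

```python
def pearson_r(x, y):
    # Pearson product-moment correlation between two sets of scores
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sum((a - mean_x) ** 2 for a in x) ** 0.5
    sd_y = sum((b - mean_y) ** 2 for b in y) ** 0.5
    return cov / (sd_x * sd_y)

# Hypothetical scores from two administrations of the same instrument
time1 = [10, 12, 14, 16, 18, 20]
time2 = [11, 12, 15, 15, 19, 21]
rc = pearson_r(time1, time2)
print(round(rc, 2))  # well above the .70 benchmark for a satisfactory measure
```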
question
Correction for Attenuation
answer
An estimation of what the correlation between two variables would be if both instruments were perfectly reliable ("the fiction of perfect reliabilities," Nunnally, 1978). A method used to adjust correlation coefficients upward for errors of measurement when two measured variables are correlated; the errors always lower the correlation coefficient compared with what it would have been if the measurement of the two variables had been perfectly reliable.
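The correction divides the observed correlation by the square root of the product of the two reliabilities; a minimal sketch with invented numbers:

```python
def correct_for_attenuation(r_xy, r_xx, r_yy):
    # Estimated correlation if both instruments were perfectly reliable
    return r_xy / (r_xx * r_yy) ** 0.5

# Hypothetical: observed r = .40, with instrument reliabilities .70 and .80
corrected = correct_for_attenuation(0.40, 0.70, 0.80)
print(round(corrected, 2))  # 0.53, higher than the observed .40
```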
question
Disadvantages of the Test-Retest Method
answer
Some traits DO change over time, independent of the stability of the measure. The test-retest effect is influenced by memory and results in a spuriously high reliability coefficient. Subjects may actually change as a result of the first administration. The method requires two test administrations, and the second test may be done carelessly.
question
Internal Consistency
answer
The aspect of reliability that indicates the homogeneity of the instrument. Judge the reliability of the instrument by estimating how well the items that reflect the same construct yield similar results. It is the most widely used approach to estimating an instrument's reliability and requires only one test administration. It is the best means to assess one of the most important sources of measurement error in psychosocial instruments: sampling of items.
question
Methods for Assessing Internal Consistency
answer
There are a number of internal consistency measures that can be used; two commonly used techniques are split-half and Cronbach's alpha. SPSS procedure: Analyze > Scale > Reliability Analysis, Model: Alpha, and select all individual items.
question
Split-Half Technique
answer
The procedure splits the test into two halves (odd-even) and computes a correlation coefficient between the two half-tests. This procedure tends to underestimate the reliability the test would have at its full length.
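A sketch of the split-half procedure, including the Spearman-Brown step that corrects for the underestimation (the 5 x 4 response matrix is hypothetical):

```python
def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def split_half_reliability(item_scores):
    # Correlate odd-item totals with even-item totals, then apply the
    # Spearman-Brown prophecy formula to project full-length reliability
    odd = [sum(row[0::2]) for row in item_scores]
    even = [sum(row[1::2]) for row in item_scores]
    r_half = pearson_r(odd, even)
    return 2 * r_half / (1 + r_half)

# Hypothetical responses: 5 subjects x 4 items
item_scores = [
    [3, 4, 3, 4],
    [2, 2, 3, 2],
    [5, 4, 4, 5],
    [1, 2, 1, 1],
    [4, 3, 4, 4],
]
rel = split_half_reliability(item_scores)
print(round(rel, 2))
```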
question
Cronbach Alpha
answer
The most widely used internal consistency reliability estimate; most conservative (p. 455). Also known as coefficient alpha. Preferable over the split-half: it gives an estimate of the split-half correlation for all possible ways of dividing the measure into two halves, equivalent to the average of all possible split-half correlations. Ranges between .00 and +1.00; higher values reflect a higher degree of internal consistency. KR-20 is a specialized version of the alpha for dichotomous data.
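Alpha can be computed directly from the item variances and the variance of the total scores; a minimal sketch with a hypothetical 5 x 4 response matrix:

```python
def variance(xs):
    # Sample variance (n - 1 denominator)
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(item_scores):
    # alpha = (k / (k - 1)) * (1 - sum of item variances / variance of totals)
    k = len(item_scores[0])
    item_vars = [variance([row[i] for row in item_scores]) for i in range(k)]
    totals = [sum(row) for row in item_scores]
    return (k / (k - 1)) * (1 - sum(item_vars) / variance(totals))

# Hypothetical responses: 5 subjects x 4 Likert-type items
item_scores = [
    [3, 4, 3, 4],
    [2, 2, 3, 2],
    [5, 4, 4, 5],
    [1, 2, 1, 1],
    [4, 3, 4, 4],
]
alpha = cronbach_alpha(item_scores)
print(round(alpha, 2))
```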
question
Parallel-Forms Reliability (Equivalence)
answer
This approach is used under one of two circumstances: when different observers or researchers are using the same instrument to measure the same phenomena at the same time, or when two parallel instruments are administered to individuals at about the same time. The purpose is to determine the equivalence of the instruments. Compute a correlation coefficient between the two sets of scores as an index of reliability of equivalence.
question
Inter-Rater Reliability
answer
A reliability of equivalence. Procedure: two or more trained observers watch an event simultaneously and independently record the data. An index of equivalence can be computed; this may be a correlation coefficient or another statistic such as Cohen's kappa.
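A sketch of Cohen's kappa for two raters making categorical judgments (the eight yes/no ratings are hypothetical):

```python
def cohens_kappa(rater_a, rater_b):
    # Proportion of agreement between two raters, corrected for the
    # agreement expected by chance alone
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    categories = set(rater_a) | set(rater_b)
    expected = sum(
        (rater_a.count(c) / n) * (rater_b.count(c) / n) for c in categories
    )
    return (observed - expected) / (1 - expected)

# Hypothetical ratings of 8 observed events by two independent observers
rater_a = ["yes", "yes", "no", "yes", "no", "no", "yes", "no"]
rater_b = ["yes", "no", "no", "yes", "no", "yes", "yes", "no"]
kappa = cohens_kappa(rater_a, rater_b)
print(kappa)  # 0.5: observed agreement .75, chance agreement .50
```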
question
Interpretation of Reliability Coefficients
answer
An instrument that is unreliable interferes with adequate testing of a researcher's hypothesis, and reliability affects statistical power. There is no absolute standard for what an acceptable reliability coefficient should be: for group-level comparisons, .70 may be adequate, and coefficients of .80 or greater are highly desirable; for making decisions about individuals, .90 is desirable.
question
Reliability Coefficient as Variability in Scores
answer
The RC has a special interpretation: what was true about decomposing a single observed score is true about decomposing the variability of all the scores, VO = VT + VE. Reliability is the proportion of VT to VO, so RC = VT/VO. If we have an instrument with a reliability coefficient of .85, 85% of the variability in obtained scores could be said to represent true differences, and 15% of the variability would reflect random fluctuations.
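The variance decomposition on this card can be written out directly (the observed variance of 100 is an invented round number):

```python
def decompose_variance(reliability, observed_variance):
    # RC = VT / VO, so VT = RC * VO and VE = VO - VT
    true_var = reliability * observed_variance
    error_var = observed_variance - true_var
    return true_var, error_var

# The card's example: RC = .85 means 85% true-score variance, 15% error
vt, ve = decompose_variance(0.85, 100.0)
print(vt, ve)
```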
question
Factors Affecting Reliability
answer
Reliability is partly a function of the length of the survey; to improve reliability, more items tapping the concept should be added. After doing an item analysis, items that do not discriminate should be replaced. Greater reliability occurs with a more heterogeneous sample because the instrument is designed to measure differences.
question
Using Instruments Previously Developed
answer
Using a previously reliable instrument is no guarantee that it will be reliable in the new study; an instrument's reliability is not a fixed entity. Reliability is not a property of the instrument itself but of the instrument when administered to a certain sample under certain conditions. If the second group is similar, then the reliability estimate will probably be fairly accurate.
question
Validity of Instruments
answer
The second most important criterion for evaluating an instrument: the degree to which an instrument measures what it is supposed to be measuring. Validity also has a number of different approaches and is more difficult to establish than reliability.
question
Types of Instrument Validity
answer
Translation validity (is the operationalization a good reflection of the construct?): face validity, content validity. Criterion-related validity (does the operationalization behave the way it should, given your theory of the construct?): predictive validity, concurrent validity, convergent validity, discriminant validity.
question
Face Validity
answer
Look at the operationalization to see if "on its face" it looks as though it is measuring the appropriate construct. Uses a panel of experts. Provides weak evidence because it relies on a subjective judgment call.
question
Content Validity
answer
Checks the operationalization against the relevant content domain; assumes that you have a good, detailed description of the content domain. For instruments, it determines the sampling adequacy of the items for the construct being measured. Useful for affective and cognitive instruments. For cognitive measures, use a "framework" (example: the Seven Danger Signs for Cancer); for affective measures, a thorough conceptualization of the construct of interest is needed.
question
Establishing Content Validity
answer
Follows an exhaustive literature review or is derived from a qualitative study. There is no completely objective method of assuring content validity. It is necessary that all items are relevant and all relevant items are included.
question
Content Validity Index (CVI)
answer
A method to assess the relevance of the items. Uses a panel of at least three experts in the content area to evaluate the individual items and the whole instrument. Experts rate each item on a 4-point scale, from 1 = not relevant to 4 = very relevant. The CVI is the percentage of total items rated 3 or 4; a CVI > .80 is considered to indicate good content validity.
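The CVI calculation is a simple proportion; a sketch with an invented panel of three experts rating a 5-item instrument:

```python
def content_validity_index(ratings):
    # ratings: all expert ratings on the 1-4 relevance scale;
    # CVI = proportion of ratings that are 3 or 4
    relevant = sum(1 for r in ratings if r >= 3)
    return relevant / len(ratings)

# Hypothetical: three experts each rate a 5-item instrument (15 ratings)
ratings = [4, 4, 3, 4, 2, 3, 4, 4, 3, 3, 4, 2, 4, 3, 4]
cvi = content_validity_index(ratings)
print(round(cvi, 2))  # 13 of 15 ratings are 3 or 4, above the .80 benchmark
```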
question
Criterion-Related Validity
answer
A practical approach that checks the performance of the instrument against some criterion. It is not used to determine whether the instrument measures a particular trait but to establish the relationship between the instrument and some other criterion. A reasonably reliable and valid criterion must be available for comparison; the method of comparison is to correlate scores. Example: measuring self-esteem.
question
Types of Criterion-Related Validity
answer
Predictive validity Concurrent validity
question
Predictive Validity
answer
You assess the operationalization's ability to predict something it should theoretically be able to predict
question
Concurrent Validity
answer
You assess the ability of an instrument to distinguish groups that it should theoretically be able to distinguish between
question
Construct Validity ("Theory-Based")
answer
The overarching quality, with all of the other measurement validity labels falling beneath it; not limited only to measurement. Construct validity is the approximate truth of the conclusion that your operationalization accurately reflects its construct. It is just as much a part of the independent variable as the dependent variable. Strong steps to enhance content validity will also strengthen construct validity.
question
Known Groups Method
answer
Groups known to differ on the attribute being measured are administered the instrument. Examples: to validate a fear-of-labor scale, give it to both primiparas and multiparas (assumes that primiparas fear labor more!); to validate a measure of functional ability, give it to emphysema patients and to patients without emphysema.
question
Hypothesized Relationships
answer
A variant of the known-groups approach that involves hypothesizing relationships on the basis of theory. Example (text, p. 461): for a fear-of-labor instrument, we could contrast the scores of primiparas versus multiparas, expecting that primiparas would be more fearful. The logic is fallible but yields important evidence.
question
Convergent-Discriminant Validity
answer
Subtypes of construct validity; neither alone is sufficient for establishing validity. Construct validity uses correlations: correlations between theoretically similar measures should be "high," and correlations between theoretically dissimilar measures should be "low."
question
Convergent Validity
answer
Measures of constructs that theoretically should be related to each other are, in fact, observed to be related to each other; you are able to show a correspondence or convergence between similar constructs.
question
Discriminant Validity
answer
Measures of constructs that theoretically should not be related to each other are, in fact, observed not to be related to each other; you are able to discriminate between dissimilar constructs.
question
Multitrait-Multimethod Matrix Method (MTMM)
answer
A procedure that uses the concepts of both convergence and discriminability; it examines multiple traits measured by multiple methods.
question
Factor Analysis
answer
Another approach to construct validity is FA, a statistical procedure that identifies clusters of related variables, that is, dimensions underlying a central construct. The dimensions are called factors; each represents a relatively unitary attribute. FA is another means of testing hypotheses about interrelationships and of looking at the convergent and discriminant validity of a large set of items. Confirmatory FA is sometimes used to analyze MTMM data.
question
Interpretation of Validity
answer
Not an all-or-nothing characteristic of an instrument; there are degrees. One cannot say that validity testing proves an instrument's validity, only that its validity is supported to a greater or lesser degree by the evidence. Strictly, a researcher does not validate the instrument itself but rather an application of the instrument
question
Interdependence - Reliability & Validity
answer
R & V - not totally independent An instrument that is not reliable cannot possibly be valid An instrument that is reliable may not necessarily be valid High reliability provides no evidence of validity for intended purpose Low reliability is evidence of low validity
question
Other Criteria for Instruments
answer
Sensitivity, Specificity, Predictive values, Likelihood ratios
question
Sensitivity
answer
The ability of an instrument to identify a "case" correctly, that is, to screen in or diagnose a condition correctly; its rate of yielding "true positives"
question
Specificity
answer
An instrument's ability to identify noncases correctly, that is, to screen out those without the condition; its rate of yielding "true negatives"
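Both indexes above reduce to simple proportions of a screening table. A minimal sketch, using hypothetical counts (90 of 100 true cases flagged; 160 of 200 true noncases cleared):

```python
def sensitivity(true_pos, false_neg):
    """Proportion of actual cases the instrument screens in (true-positive rate)."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """Proportion of actual noncases the instrument screens out (true-negative rate)."""
    return true_neg / (true_neg + false_pos)

# Hypothetical screening results against a gold-standard diagnosis
sens = sensitivity(true_pos=90, false_neg=10)    # 90/100 = 0.9
spec = specificity(true_neg=160, false_pos=40)   # 160/200 = 0.8
```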
question
Likelihood Ratio
answer
Addresses the question: how much more likely is a positive indicator among those with the outcome of concern than among those without it? Summarizes the relationship between sensitivity and specificity in a single number
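The standard formulas combine the two indexes directly: LR+ = sensitivity / (1 − specificity), LR− = (1 − sensitivity) / specificity. A sketch, continuing the hypothetical values of 0.9 sensitivity and 0.8 specificity:

```python
def lr_positive(sens, spec):
    # How much more likely a positive result is among cases than among noncases
    return sens / (1 - spec)

def lr_negative(sens, spec):
    # How much more likely a negative result is among cases than among noncases
    return (1 - sens) / spec

lr_pos = lr_positive(0.9, 0.8)  # 0.9 / 0.2 = 4.5
lr_neg = lr_negative(0.9, 0.8)  # 0.1 / 0.8 = 0.125
```

An LR+ of 4.5 means a positive result is 4.5 times as likely in a case as in a noncase; LR− values near zero similarly favor ruling the condition out.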
question
ROC Curves (Receiver Operating Characteristic)
answer
A graphical plot of sensitivity (the true-positive rate) vs. 1 − specificity (the false-positive rate) for a binary classifier as its discrimination threshold is varied. Used to determine a cutoff point (the critical value) that discriminates between cases and noncases
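The points of an ROC curve can be computed by sweeping the cutoff across the score range and recording the true- and false-positive rates at each threshold. A sketch with hypothetical screening scores (a score at or above the cutoff counts as "positive"):

```python
def roc_points(scores_cases, scores_noncases, thresholds):
    """(false-positive rate, true-positive rate) at each candidate cutoff."""
    points = []
    for t in thresholds:
        tpr = sum(s >= t for s in scores_cases) / len(scores_cases)
        fpr = sum(s >= t for s in scores_noncases) / len(scores_noncases)
        points.append((fpr, tpr))
    return points

# Hypothetical instrument scores for known cases and noncases
cases = [7, 8, 9, 9, 10]
noncases = [3, 4, 5, 6, 8]
curve = roc_points(cases, noncases, thresholds=[2, 5, 7, 9, 11])
```

The cutoff whose point lies closest to the upper-left corner (high sensitivity, low 1 − specificity) is a common choice for the critical value.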
question
Efficiency
answer
Instruments of comparable reliability and validity may differ in their efficiency. In self-reports, closed-ended questions are more efficient than open-ended ones. The Spearman-Brown formula may be used to estimate reliability after a reduction in the number of items (P & B, footnote on p. 467)
question
Spearman-Brown (Prophecy) Formula
answer
Long instruments tend to be more reliable than shorter ones, but there is a point of diminishing returns. A formula, known as the Spearman-Brown prophecy formula, was developed to adjust the correlation to estimate how reliable the scale would be with fewer items
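The prophecy formula is r' = k·r / (1 + (k − 1)·r), where r is the current reliability and k is the factor by which test length changes. A sketch with hypothetical reliabilities:

```python
def spearman_brown(r, length_factor):
    """Predicted reliability when test length is multiplied by length_factor.
    r: reliability of the current instrument (e.g., a reliability coefficient)."""
    return (length_factor * r) / (1 + (length_factor - 1) * r)

# Hypothetical 40-item scale with reliability .90, halved to 20 items:
shortened = spearman_brown(0.90, 0.5)   # about .82
# Hypothetical 10-item scale with reliability .70, doubled to 20 items:
lengthened = spearman_brown(0.70, 2)    # about .82
```

Note the diminishing returns the card describes: doubling an already long scale buys much less reliability than the first items did.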
question
Developing a Data Collection Plan
answer
3 independent decisions 1) design of study (includes sampling) 2) use of existing or new data 3) data collection method
question
Existing Data
answer
Existing records, historical records, secondary analysis, meta-analysis.
question
Meta-analysis
answer
A method that allows researchers to combine the results of several different studies on a similar topic in order to establish the strength of an effect.
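The pooling step at the heart of a meta-analysis can be sketched as an inverse-variance weighted average (one common fixed-effect approach; the effect sizes and standard errors below are hypothetical):

```python
import math

def fixed_effect_pool(effects, std_errors):
    """Inverse-variance weighted (fixed-effect) pooled estimate and its SE.
    Each study is weighted by 1/SE^2, so precise studies count for more."""
    weights = [1 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    return pooled, pooled_se

# Hypothetical standardized mean differences from three studies on one topic
pooled, se = fixed_effect_pool([0.30, 0.50, 0.40], [0.10, 0.20, 0.15])
```

The pooled estimate lands between the individual study effects, pulled toward the most precise study, and its standard error is smaller than any single study's, which is how combining studies establishes the strength of an effect.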
question
4 Dimensions of Data Collection Methods
answer
Type of data; quantifiability; researcher obtrusiveness; objectivity
question
Structured Data Collection
answer
Format is more rigidly fixed Takes time to develop but less time to analyze Collection instrument or tool is the formal written document
question
Unstructured Data Collection
answer
Allows flexibility to reveal information in a naturalistic manner. More appropriate for in-depth examination of a phenomenon. Less up-front work, but more time analyzing. Usually qualitative research; usually no formal instrument
question
Major Types Of Data Collection Methods
answer
Self Reports (surveys, questionnaires, interviews) Observation Biophysical measures
question
Self Report
answer
Obtained by asking people directly. Direct and versatile. Appropriate for learning what people think, feel, or believe. Yields information difficult to obtain any other way
question
Uses of Self Report
answer
Obtain behaviors that one could observe but usually doesn't. Gather retrospective or prospective information. Gather subjects' "state of mind" rather than behaviors
question
Types of Unstructured Self Report
answer
Unstructured interview Focused interview Focus group interview (needs a homogeneous group to promote comfort) Life histories Critical incidents Diaries
question
Gathering Unstructured Data
answer
Researcher needs to overcome communication barriers and enhance the flow of meaning. Needs to be a good listener. Interviews are often long; record the interview and transcribe it
question
Self Report:Structured Interviews
answer
Uses a formal, written instrument known as an interview schedule (a questionnaire if respondents fill it out themselves). Consists of a set of questions in which the wording and most response alternatives are predetermined. The researcher puts a great deal of effort into developing structured instruments
question
Advantages of Interviews
answer
High response rates Clarity - protects against ambiguity Depth of questioning Fewer "I don't knows" Control over the order of questions Sample control Supplementary data - reveals other information through observation
question
Types of Closed Ended Questions
answer
Dichotomous Multiple choice Cafeteria questions Rank-order Rating (issue of a neutral midpoint) Checklists Visual analogue scales (100 mm long)
question
Developing Structured Instruments
answer
Requires considerable time. Cluster related concepts. Decide whether to use an interview or a questionnaire. Arrange a meaningful sequence of modules and of questions within modules, with general questions before specific ones. Preface with introductory comments about the study. Have the instrument critiqued for content and proofed for technical problems. Pretest
question
Advantages of Questionnaires
answer
Cost Anonymity Lacks interviewer bias
question
Observation
answer
Alternative to self-report Characteristics/conditions of individuals (self-grooming, nonverbal communication) Lab or natural setting Directly or with technology (video)
question
Uses of Observational Data
answer
When people cannot be expected to describe their own behaviors (embarrassment, behaviors are emotion-laden, children or mentally ill)
question
Shortcomings of Observational Data Collection
answer
Ethical issues Distorted Behavior High rate of refusals Observer Bias
question
Sources of Observer Bias
answer
Emotions, prejudices, attitudes and values of observer Personal interest and commitment Anticipation of what is observed Hasty decisions
question
Phenomena Amenable to Observation
answer
Characteristics/conditions of individuals Activities Verbal communication behaviors Nonverbal communication Environmental characteristics
question
Units of Observation
answer
Molar Molecular
question
Molar Approach
answer
Observing large segments of behavior and treating them as a whole
question
Molecular Approach
answer
Observing small, highly specific behaviors as the units of observation
question
Observer-Observed Relationship with Intervention
answer
Researcher actually intervenes in the research setting Researcher stages a situation to provoke a behavior
question
Observer-Observed Relationship with Concealment
answer
Varies from partial to total. Participants may be aware of the researcher but unsure of the motive
question
Structured Observations
answer
Used when researcher has previous knowledge of phenomena Researcher develops plan ahead of time of categories and checklists Operational definitions are developed
question
Observational Sampling
answer
Used with structured observations Researcher decides on the time-sampling; the timing of observations; the selection of periods during which the observations will occur Researcher can decide on event sampling; a predetermined behavior or event to be observed
question
Enhancement of Contrasts
answer
Distortion by dividing observations into clear-cut entities
question
Central Tendency
answer
Distortion in which extreme events are pulled toward a middle ground
question
Assimilatory
answer
Distortion in the direction of identity with previous inputs
question
Halo Effect
answer
Tendency of the observer to let one characteristic influence the judgment of unrelated characteristics
question
Error of Leniency or Severity
answer
tendency of the observer to rate everything positively or negatively
question
Observer Biases
answer
Enhancement of contrast Central tendency Assimilatory Halo effect Error of leniency or severity
question
Biophysical Measures
answer
Current trend in nursing research Need specialized training for use and interpretation Equipment is costly but often available
question
Advantages of Biophysical Measures
answer
Objectivity Precision and sensitivity
question
Disadvantages of Biophysical Measures
answer
Failure to understand the limitations of the equipment Interferences in measurements may create artifacts
question
Identify Data Needs
answer
Address all research questions/test the hypotheses Describe the main characteristics of the sample Control for important sources of extraneous variation Analyze potential biases Understand subgroup effects Interpret results Check on the manipulation Obtain administrative information
question
Selecting and Developing Measures
answer
Identify potential instruments. The primary consideration is whether the instrument is conceptually relevant. Does the instrument yield data of sufficiently high quality? Get permission to use/adapt an existing instrument
question
Selecting an Existing Instrument
answer
Resources - direct costs, indirect costs Availability and familiarity Norms and comparability Administration issues - timing, privacy issues, copyright Reputation - seek consultation with experts
question
Pretest Data Collection
answer
Determine the length of time for administration Identify difficult or confusing parts Identify objectionable or offensive parts Determine the sequencing of instruments Determine training needs of staff Determine whether measures yield data with sufficient variability
question
Selecting Research Personnel
answer
For large studies: experience; congruity with sample characteristics; unremarkable appearance; availability through the entire study
question
Training Data Collectors
answer
General procedures Specific procedures Protocols Detailed instructions Trial runs of data collection
question
Response Biases
answer
Tendency to distort responses (usually to present a favorable image) Social desirability response bias (consistent misrepresentation) Acquiescence response set (all yes or all no)
question
Tips For Wording Questions
answer
Wording must be clear. Determine whether respondents can be expected to understand the question and know the response. Wording should minimize the risk of response bias. Develop questions sensitive to the needs and rights of subjects. State questions in the affirmative rather than the negative. Avoid lengthy sentences and medical terminology. Avoid "double-barreled" questions, those containing two ideas. Avoid assuming that respondents will know the information. Avoid leading questions; provide a range of responses. Use closed-ended questions and impersonal wording for sensitive topics. Pretest with "cognitive questioning" to assure understanding
question
Tips for Closed-Ended Response Alternatives
answer
Should cover all significant alternatives Be mutually exclusive Have a rationale for the order of presentation Be of reasonable length
question
Formatting an Instrument
answer
Good eye appeal - enough "white space" Set off response options Include directions as to how to respond Format filter questions carefully using appropriate skip patterns Avoid forcing readers to read through inapplicable questions
question
Biophysical Data Collection Methods
answer
May be in vivo (performed directly within or on the living organism) or in vitro (performed outside the body, e.g., blood values). Selection: use standard equipment; include information about the make/model of the apparatus and about the error of measurement
question
Use of Records
answer
Multitude of sources Advantages - readily available and relatively low cost Disadvantages - bias can enter into their use (sampling bias); questionable accuracy and authenticity
question
Projective Techniques
answer
Obtain psychological data with a minimum of cooperation. One type, pictorial projective, uses pictures: the Rorschach inkblot test; the Thematic Apperception Test (TAT), pictures about which subjects make up stories. Verbal projective techniques use word association, e.g., "When I think of a nurse, I feel..." Advantage - supporters say the technique probes the unconscious mind. Critics suggest that the interpretation of the response is almost as projective as the participant's reaction to the original stimulus
question
Vignettes
answer
Self-reports by participants that involve a stimulus. Brief descriptions of events or situations are given, to which respondents are asked to react. Vignettes may be written narratives or videotaped segments. Advantage - stimuli can be manipulated and subjects randomly assigned. Limitation - validity of responses: would subjects react the same way in an actual situation?
question
Scaling
answer
S.S. Stevens - "Scaling is the assignment of objects to numbers according to a rule." A branch of measurement that involves the construction of an instrument that associates qualitative constructs with quantitative metric units Unidimensional scaling methods were developed in the first half of the 20th century Generally named after their inventors
question
Purposes of Scaling
answer
To test a hypothesis To explore a topic and decide on its dimensionality Need to decide whether the construct is unidimensional (able to be measured with a single number line) or multidimensional (needs more than one number line) Example - "intelligence" requires more than one number line, e.g., verbal and mathematical
question
Composite Scaling
answer
Used for social-psychological measures. Provides a numeric score to place respondents on a continuum with respect to an attitude or attribute. Early (historical) types - Thurstone, Guttman (see footnote, p. 418). Common types - Likert, semantic differential
question
Likert Scaling (Summated Rating Scale)
answer
Most widely used unidimensional scaling technique; named after psychologist Rensis Likert (1903-1981). Consists of several declarative items that express a viewpoint on a topic, with respondents asked to indicate the degree to which they agree or disagree. Initially assemble a large pool of items or statements covering the range of positions. The aim is to spread people out along a continuum; avoid neutral or extreme positions. Some use a 5-point scale, others a 7-point scale. Issue of "uncertain": include it, or code it as missing
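The summated-rating scoring the card describes can be sketched as follows; the items, ratings, and which items are negatively worded are all hypothetical:

```python
def likert_total(responses, reverse_items, points=5):
    """Sum item ratings, reverse-scoring negatively worded items.
    responses: dict of item name -> rating (1..points)
    reverse_items: names of items worded against the attitude's direction"""
    total = 0
    for item, rating in responses.items():
        if item in reverse_items:
            rating = (points + 1) - rating   # e.g., 5 -> 1 on a 5-point scale
        total += rating
    return total

# Hypothetical 4-item attitude scale; items q2 and q4 are negatively worded
answers = {"q1": 4, "q2": 2, "q3": 5, "q4": 1}
score = likert_total(answers, reverse_items={"q2", "q4"})  # 4 + 4 + 5 + 5 = 18
```

Reverse-scoring keeps every item pointing the same direction along the continuum, so a higher total consistently means a more favorable attitude.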
question
Semantic Differential Scale
answer
Developed by Charles Osgood to study the connotative meaning of concepts Used for measuring attitudes or multidimensional psychosocial traits Rating of a given concept on a series of bipolar adjectives Asked to place a check or "x" on a line along a continuum with a 7-point rating Adjectives must be appropriate for concept Adjectives tend to cluster along three independent dimensions Evaluation - effective/ineffective; fair/unfair Potency - strong/weak; large/small Activity - active/passive; fast/slow During construction, researcher decides on dimensions Scoring - assigned a number which is summed to get a total score Advantage - highly flexible
question
Q-Sort Method
answer
Developed by William Stephenson to study people's subjectivity; named after a form of factor analysis called the "Q" method. Participants are presented with a set of cards on which words or phrases are written - typically 50-100 cards to be sorted into 9-11 piles, with the number of cards per pile predetermined by the researcher. Advantage - versatile; can be applied to a wide variety of problems. Limitations - must be done in person and is time-consuming