Research Methods Ch. 5-8 – Flashcards
question
            Interrogate the construct validity of a study's variables.
answer
        Construct validity: an indication of how well a variable was measured (or manipulated) in a study. Measured variables: variables whose levels are observed and recorded (with no manipulation).
question
            Describe the kinds of evidence that support the construct validity of a measured variable.
answer
        Reliability: how consistent is the measurement? Validity: is it measuring what it's supposed to measure?
question
            Explain why a variable will usually have only one conceptual definition but can have multiple operational definitions.
answer
        The conceptual definition is the researcher's definition of the variable in question at a theoretical level. An operational definition represents a researcher's specific decision about how to measure or manipulate the conceptual variable. Because a single conceptual variable can be measured or manipulated in many different ways, it usually has only one conceptual definition but can have multiple operational definitions.
question
            Name three common ways in which researchers operationalize their variables.
answer
        Self-report, observational, and physiological.
question
            Describe the differences between ordinal, interval, and ratio scales.
answer
        Ordinal allows you to say "first, second, third." Interval has equal distances between levels but no true zero, so you can't say "twice as much." Ratio has equal intervals and an absolute zero.
question
            Reliability is about consistency. Define the three kinds of reliability, noting what kind of consistency each is designed to show.
answer
        Test-retest reliability: the test is given twice and the two sets of scores are compared (consistency over time). Interrater reliability: two observers rate the same behavior and their ratings are compared (consistency across observers). Internal reliability: the different items on a measure correlate well with each other (consistency across items).
question
            For each of the three common types of operationalizations (self-report, observational, and physiological), indicate which types of reliability would be relevant.
answer
        Self-report: test-retest and internal reliability. Observational: interrater reliability. Physiological: interrater reliability.
question
            Which of the following correlations is the strongest: r = .25, r = -.65, r = -.01, or r = .43?
answer
        r = -.65, because it is closest to -1.0 (strength depends on the absolute value of r, not its sign).
question
            What do face validity and content validity have in common?
answer
        They both are subjective ways to assess validity.
question
            To establish criterion validity, researchers make sure the scale or measure is correlated with
answer
        Some relevant behavior or outcome.
question
            Which requires stronger correlations for its evidence: convergent validity or discriminant validity?
answer
        Convergent validity
question
            Self-report measure
answer
        People answer questions about themselves in a questionnaire or interview.
question
            Observational measure
answer
        Recording observable behaviors or physical traces of behaviors.
question
            Physiological measure
answer
        Recording biological data.
question
            Categorical variable
answer
        Variables whose levels are categories (e.g., sex or species).
question
            Quantitative variable
answer
        Variables whose levels are coded with meaningful numbers (e.g., height or IQ score).
question
            Ordinal scale
answer
        A quantitative measurement scale whose levels represent a ranked order, in which it is unclear whether the distances between levels are equal
question
            Interval scale
answer
        Quantitative measurement scale that has no "true zero," and in which the numerals represent equal intervals (distances) between levels
question
            Ratio scale
answer
        Quantitative measurement in which the numerals have equal intervals and the value of zero truly means "nothing"
question
            Reliability
answer
        The consistency of the results of a measure
question
            Validity
answer
        The appropriateness of a conclusion or decision.
question
            Test-retest reliability
answer
        The consistency in results every time a measure is used.
question
            Interrater reliability
answer
        The degree to which two or more coders or observers give consistent ratings of a set of targets.
question
            Internal reliability
answer
        In a measure that contains several items, the consistency in a pattern of answers, no matter how a question is phrased.
question
            Correlation coefficient r
answer
        A single number, ranging from -1.0 to 1.0, that indicates the strength and direction of an association between two variables.
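The computation behind r can be sketched in a few lines of Python. This is a minimal hand-rolled version of Pearson's r (covariance divided by the product of the variables' spreads); the study variables and data below are invented for illustration.

```python
# Minimal sketch of Pearson's correlation coefficient r.
# The hours/scores data are made-up example values.
def pearson_r(xs, ys):
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    spread_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    spread_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (spread_x * spread_y)

hours_studied = [1, 2, 3, 4, 5]
exam_scores = [55, 60, 70, 72, 85]
r = pearson_r(hours_studied, exam_scores)  # strong positive association, r ≈ .98
```

The sign of r gives the direction of the association (positive or negative) and its absolute value gives the strength.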
question
            Slope direction
answer
        The upward, downward, or neutral slope of the cluster of data points in a scatterplot.
question
            Strength
answer
        A description of an association indicating how closely the data points in a scatterplot cluster along the line of best fit drawn through them.
question
            Cronbach's alpha
answer
        A correlation-based statistic that measures a scale's internal reliability.
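One common formula for Cronbach's alpha, (k / (k - 1)) * (1 - sum of item variances / variance of total scores), can be sketched in Python. The three-item, four-respondent data set is hypothetical.

```python
# Sketch of Cronbach's alpha from raw item scores.
# Higher alpha (closer to 1.0) means the items hang together better.
def cronbach_alpha(items):
    """items: one list of scores per scale item, all over the same respondents."""
    k, n = len(items), len(items[0])

    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return (k / (k - 1)) * (1 - sum(var(it) for it in items) / var(totals))

# Three hypothetical scale items answered by four respondents:
items = [[4, 2, 3, 5],
         [5, 1, 3, 4],
         [4, 2, 2, 5]]
alpha = cronbach_alpha(items)  # ≈ .93: good internal reliability
```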
question
            Face validity
answer
        Is this a plausible measure of the variable? (does it make sense at a gut level)
question
            Content validity
answer
        Does it capture all parts of a defined construct?
question
            Criterion validity
answer
        The extent to which a measure is related to a relevant behavioral outcome or criterion. (Does it predict/correlate with the outcome it should?)
question
            Known-groups paradigm
answer
        A method for establishing criterion validity, in which a researcher tests two or more groups, who are known to differ on the variable of interest, to ensure that they score differently on a measure of that variable.
question
            Convergent validity
answer
        An empirical test of the extent to which a measure is associated with other measures of a theoretically similar construct.
question
            Discriminant validity
answer
        Empirical test of the extent to which a measure does not associate strongly with measures of other, theoretically different constructs.
question
            Explain how carefully prepared questions improve the construct validity of a poll or survey.
answer
        It is crucial that each question be clear and straightforward to answer so it does not confuse respondents or influence their answers.
question
            Describe how researchers can make observations with good construct validity.
answer
        When they can avoid three problems: observer bias, observer effects, and reactivity
question
            What are three potential problems related to the wording of survey questions? Can they be avoided?
answer
        Leading questions, double-barreled questions, and negative wording. They can be addressed by testing different wordings: if the results are the same no matter the wording, the wording doesn't matter; if the results differ, researchers may need to report results separately for each wording.
question
            For which topics, and in what situations, are people most likely to answer accurately to survey questions?
answer
        When they make an effort to think about each question; when they don't worry about looking good or bad; and when the topic is something only they can report on (e.g., their own thoughts and feelings), in which case self-report is the best way.
question
            What are some ways to ensure that survey questions are answered accurately?
answer
        Ensure anonymity. Include "filler items" (ex: interested in racial attitudes, but also ask about politics, gender roles, and education). Use implicit measures. Ask about actions rather than attitudes.
question
            What is the difference between observer bias and observer effects? How can such biases be prevented?
answer
        Observer bias: observers' expectations influence their interpretation of participants' behavior. Observer effects: participants actually change their behavior to match the observers' expectations. Prevention: train observers well, create clear rating scales (codebooks), use multiple observers, and use masked (blind) designs.
question
            What is reactivity? What three approaches can researchers take to be sure people do not react to being observed?
answer
        Reactivity is a change in participants' behavior because they know they are being watched. Three approaches: blend in (unobtrusive observation); wait it out (let participants get used to the researcher's presence); measure the behavior's results (ex: empty liquor bottles in residential garbage cans indicate how much alcohol is being consumed in a community).
question
            Survey
answer
        A method of posing questions to people on the telephone, in personal interviews, on written questionnaires, or via the internet.
question
            Poll
answer
        A method of posing questions to people on the telephone, in personal interviews, on written questionnaires, or via the Internet.
question
            Open-ended question
answer
        Survey question format that allows respondents to answer any way they like.
question
            Forced-choice format
answer
        Survey question format in which respondents give their opinion by picking the best of two or more options.
question
            Likert scale
answer
        Survey question format; a rating scale containing multiple response options anchored by the terms strongly agree, agree, neither agree nor disagree, disagree, and strongly disagree. A Likert-type scale does not follow this format exactly.
question
            Semantic differential format
answer
        Response scale whose numbers are anchored with contrasting adjectives.
question
            Leading question
answer
        Type of question in a survey or poll that is problematic because its wording encourages only one response, thereby weakening its construct validity.
question
            Double-barreled question
answer
        Type of question in a survey or poll that is problematic because it asks two questions in one, thereby weakening its construct validity.
question
            Negatively worded question
answer
        Question in a survey or poll that contains negatively phrased statements, making its wording complicated or confusing and potentially weakening its construct validity.
question
            Response set
answer
        A shortcut respondents may use to answer items in a long survey, rather than responding to the content of each item (aka nondifferentiation)
question
            Acquiescence
answer
        Answering "yes" or "strongly agree" to every item in a survey or interview
question
            Fence sitting
answer
        Playing it safe by answering in the middle of the scale for every question in a survey or interview.
question
            Socially desirable responding
answer
        Giving answers on a survey that make one look better than one really is.
question
            Faking good
answer
        Same as socially desirable responding.
question
            Faking bad
answer
        Giving answers on a survey that make one look worse than one really is.
question
            Observational research
answer
        The process of watching people or animals and systematically recording how they behave or what they are doing.
question
            Observer bias
answer
        A bias that occurs when observers' expectations influence their interpretation of the participants' behaviors or the outcome of the study.
question
            Observer effect
answer
        A change in behavior of study participants in the direction of an observer's expectation. (aka expectancy effect)
question
            Masked design
answer
        Study design in which the observers are unaware of the experimental conditions to which participants have been assigned (aka blind design)
question
            Reactivity
answer
        A change in behavior of study participants (such as acting less spontaneously) because they are aware they are being watched.
question
            Unobtrusive observation
answer
        An observation in a study made indirectly, through physical traces of behavior, or made by someone who is hidden or is posing as a bystander.
question
            Experimenter expectations
answer
        Experimenter has certain expectations -> expectations alter the experimenter's behavior toward participants -> the expected response is more likely to be shown by participants.
question
            Rosenthal effect
answer
        Rosenthal was one of the researchers for both the intellectual bloomers AND the maze bright/dull studies. Study showed that observers not only see what they expect to see; sometimes they even cause the behavior of those they are observing to conform to their expectations.
question
            Ethics of behavioral observation
answer
        Observing public behavior is considered ethical (no expectation of privacy). Researchers don't report on who they watched specifically. Videotaping in public is usually okay too. Using one-way mirrors or private videotaping generally requires permission in advance.
question
            Explain why external validity often matters for a frequency claim.
answer
        The findings need to generalize to a larger population or to other settings
question
            Describe which sampling techniques allow generalizing from a sample to a population of interest, and which ones do not.
answer
        Allow generalization: simple random, stratified random (proportionate), cluster, and multistage sampling. Do not allow generalization: convenience, purposive, snowball, and self-selected sampling.
question
            What are five techniques for selecting a representative sample of a population of interest? Where does randomness enter into each of these five selection processes?
answer
        Simple random, stratified random, proportionate stratified, cluster, and multistage sampling. Randomness enters when individuals (or clusters) are selected from the population by a random method.
question
            In your own words, describe the difference between random sampling and random assignment.
answer
        Random sampling: Researchers draw a sample randomly Random assignment: only used in experiments; researchers randomly assign participants to groups; helps ensure that the groups are the same at the start of the experiment
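The distinction can be sketched in a few lines of Python; the population size and group sizes here are arbitrary example values.

```python
import random

population = list(range(1000))  # stand-in IDs for a hypothetical population

# Random SAMPLING: who gets INTO the study at all.
# Supports generalizing from sample to population (external validity).
sample = random.sample(population, 50)

# Random ASSIGNMENT: which experimental GROUP each participant lands in.
# Only used in experiments; makes groups comparable at the start (internal validity).
random.shuffle(sample)
treatment, control = sample[:25], sample[25:]
```

A study can use either, both, or neither: a convenience sample can still be randomly assigned to conditions, and a random sample can be studied without any assignment.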
question
            What are four ways of selecting a biased sample of a population of interest? Which subsets are more likely to be selected in each case.
answer
        Convenience sampling: whoever is easiest to access. Purposive sampling: only certain kinds of people the researchers want to study. Snowball sampling: participants are asked to recommend a few acquaintances. Quota sampling: a target number is set for each category and filled nonrandomly.
question
            Why are convenience, purposive, snowball, and quota sampling not examples of representative sampling?
answer
        Because they are not random samples, their results do not generalize to the population.
question
            Why do you think researchers might decide to use an unrepresentative sample, even though a random sample would ensure external validity?
answer
        When studying association or causal claims, where internal validity matters more than generalizing to a population, or when a representative sample is impractical (ex: online reviews of shoes).
question
            When will it be most important for a researcher to use a representative sample?
answer
        For frequency claims. If you want to generalize beyond your sample, representativeness matters. You can't always confirm it (you don't actually take a full census of the population you sampled from), but sometimes you can: election polling is confirmed (or not) by election results.
question
            Which of these samples is more likely to be representative of a population of 100,000?
answer
        A randomly selected sample of 100 people
question
            Explain why a larger sample is not necessarily more representative than a smaller one.
answer
        Representativeness comes from how a sample is selected, not from its size; a large but nonrandom (e.g., self-selected) sample is still biased. What a larger random sample does give you is a smaller margin of error, a statistical term quantifying the degree of error in the study: if 28% of people support a piece of legislation with a margin of error of 3 points, then if you did the poll over and over, 95% of the time the result would fall between 25% and 31%.
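As a sketch of where such a margin comes from, the standard 95% margin of error for a poll proportion is 1.96 * sqrt(p * (1 - p) / n). The 28% support level and sample size of 1,000 below are invented for illustration.

```python
import math

p, n = 0.28, 1000  # hypothetical sample proportion and sample size

# 95% margin of error for a proportion: about 0.028, i.e. roughly 2.8 points.
margin = 1.96 * math.sqrt(p * (1 - p) / n)

# Quadrupling n only halves the margin -- and no amount of n fixes a biased sample.
margin_big = 1.96 * math.sqrt(p * (1 - p) / (4 * n))
```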
question
            Population
answer
        A larger group from which a sample is drawn; the group to which a study's conclusions are intended to be applied.
question
            Sample
answer
        The group of people, animals, or cases used in a study; a subset of the population of interest.
question
            Census
answer
        A set of observations that contains all members of the population of interest.
question
            Biased sample
answer
        A sample in which some members of the population of interest are systematically left out, and as a consequence, the results from the sample cannot generalize to the population of interest (aka unrepresentative sample)
question
            Representative sample
answer
        A sample in which all members of the population of interest are equally likely to be included (usually through some random method) & therefore the results can generalize to the population of interest.
question
            Convenience sampling
answer
        Choosing a sample based on those who are easiest to access and readily available, a biased sampling technique.
question
            Self-selection
answer
        A form of sampling bias that occurs when a sample contains only people who volunteer to participate
question
            Probability sampling
answer
        The process of drawing a sample from a population of interest in such a way that each member of the population has an equal chance of being included in the sample, usually via random selection.
question
            Simple random sampling
answer
        The most basic form of probability sampling, in which the sample is chosen completely at random from the population of interest.
question
            Cluster sampling
answer
        A probability sampling technique in which clusters of participants within the population of interest are selected at random, followed by data collection from all individuals in each cluster.
question
            Multistage sampling
answer
        A probability sampling technique involving at least two stages: a random sample of clusters followed by a random sample of people within the selected clusters
question
            Stratified random sampling
answer
        A form of probability sampling; a random sampling technique in which the researcher identifies particular demographic categories of interest and then randomly selects individuals within each category.
question
            Oversampling
answer
        A form of probability sampling; a variation of stratified random sampling in which the researcher intentionally overrepresents one or more groups.
question
            Systematic sampling
answer
        A probability sampling technique in which the researcher counts off members of a population to achieve a sample, using a randomly chosen interval and a randomly chosen starting point.
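The counting-off procedure can be sketched in Python. This version randomizes the starting point and derives the interval from the desired sample size; the roster and sample size are made-up example values.

```python
import random

def systematic_sample(population, sample_size):
    # Count off every `interval`-th member, starting from a randomly chosen position.
    interval = len(population) // sample_size
    start = random.randrange(interval)
    return population[start::interval][:sample_size]

roster = [f"student_{i}" for i in range(100)]
chosen = systematic_sample(roster, 10)  # every 10th student from a random start
```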
question
            Random assignment
answer
        The use of a random method to assign participants into different experimental groups.
question
            Purposive sampling
answer
        A biased sampling technique in which only certain kinds of people are included in a sample.
question
            Snowball sampling
answer
        A variation on purposive sampling, a biased sampling technique in which participants are asked to recommend acquaintances for the study.
question
            Quota sampling
answer
        A biased sampling technique in which a researcher identifies subsets of the population of interest, sets a target number for each category, and samples nonrandomly until the quotas are filled.
question
            What types of sampling errors could occur through internet research/polling?
answer
        Internet research uses a nonrandom sample Participants are self-selected volunteers Participants know how to use computers Participants have access to computers Participants are Internet savvy (maybe)
question
            Bivariate correlation
answer
        An association that involves exactly two variables
question
            Mean
answer
        An arithmetic average; a measure of central tendency computed from the sum of all the scores in a set of data, divided by the total number of scores.
question
            t test
answer
        A statistical test used to evaluate the size and significance of the difference between two means.
question
            Effect size
answer
        The magnitude of a relationship between two or more variables.
question
            Statistical significance
answer
        A conclusion that a result from a sample (such as an association or a difference between groups) is so extreme that the sample is unlikely to have come from a population in which there is no association or no difference.
question
            Outlier
answer
        A score that stands out as either much higher or much lower than most of the other scores in a sample.
question
            Restriction of range
answer
        A situation involving a bivariate correlation, in which there is not a full range of possible scores on one of the variables in the association, so the relationship from the sample underestimates the true correlation.
question
            Curvilinear association
answer
        An association between two variables that is not a straight line; for instance, as one variable increases, the level of the other variable increases and then decreases (or vice versa).
question
            Directionality problem
answer
        A situation in which it is unclear which variable in an association came first.
question
            Third-variable problem
answer
        A situation in which a plausible alternative explanation exists for the association between two variables.
question
            Spurious association
answer
        A bivariate association that is attributable only to systematic mean differences on subgroups within the sample; the original association is not present within subgroups.
question
            Moderator
answer
        When the relationship between two variables changes depending on the level of a third variable, the third variable is the moderating variable.
question
            At minimum, how many variables are there in an association claim?
answer
        Two
question
            What characteristic of a study's variables makes a study correlational?
answer
        Both variables are measured
question
            Sketch two bar graphs: one showing a correlation and one showing a zero correlation.
answer
        A bar graph that shows a correlation should have bars at different heights A bar graph with zero correlation would show two bars of the same height.
question
            When do researchers typically use a bar graph, as opposed to a scatterplot, to display correlational data?
answer
        When one of the variables is categorical.
question
            In one or two brief sentences, explain how you would interrogate the construct validity of a bivariate correlation.
answer
        Does it have good reliability? Is it measuring what it's intended to measure (validity)?
question
            What are five questions you can ask about the statistical validity of a bivariate correlation? Do all the statistical validity questions apply the same way when bivariate correlations are represented as bar graphs?
answer
        Effect size? Statistical significance? Are there subgroups? Outliers? If it looks like a zero correlation, could it actually be curvilinear? Not all apply the same way to bar graphs: questions about outliers, restriction of range, and curvilinear associations are mainly relevant to scatterplots of two quantitative variables.
question
            Subgroups
answer
        Sometimes there can be an apparent association, or lack thereof, but the association is different when you look more closely at the subgroups. There could be subgroups (males and females, freshmen and seniors, etc.) that do not each follow the pattern of the overall group.
question
            Association between moderators and external validity
answer
        When asking if an association will generalize to another age group, geographic group, etc, we are essentially asking if it could be moderated by that factor.
question
            When is a small effect size still important?
answer
        A small effect size can still be important: taking an aspirin a day lowers heart attack risk with only r = .03. Small effect size, but high consequences and many lives saved.
question
            Correlational studies:  What types of questions can these studies answer? What can't these studies tell us?
answer
        They are useful when gathering data in the early stages of research, when manipulating an independent variable is impossible or unethical, and when studying the relationship between two naturally occurring variables for the purpose of prediction. Correlational relationships can be used for predictive purposes, but they cannot tell us about causation (because of the directionality and third-variable problems).
question
            Understand and be able to recognize on a graph positive, negative, and zero correlations
answer
        Positive correlation: points slope upward. Negative correlation: points slope downward. Zero correlation: a flat cloud of points with no slope.
question
            3rd variable problem?
answer
        There may be an unmeasured variable that actually causes variables to covary (change together)
question
            Directionality problem?
answer
        It is not always possible to specify the direction in which the causal arrow points (which variable causes which).
question
            When are these studies used?
answer
        When gathering data in the early stages of research; when manipulating an independent variable is impossible or unethical; when studying the relationship between two naturally occurring variables for the purpose of prediction.