MKTG 379: Marketing Research Methods — Exam 1

Definition of Marketing Research
BOOK: planning, collection, and analysis of data relevant to marketing decision making and the communication of the results of this analysis to management.

XCOPY: Specifies the information needed, designs the method for collecting it (what do we need to know and how are we going to collect it?), manages the data collection process, and analyzes and communicates the results (emphasis on communicating to the whole company)

When to use/not use research
1. Time Constraints: Is there time?

2. Data Availability: Do you lack adequate info?
Is the data out there already?
Do we lack proper data? Should we conduct our own (primary research) or refer to someone else’s (secondary research)?

3. Routine VS. Non-Routine Decisions:
Whether you should carry drinks in large or small bottles — you already carry your drinks in both, so it’s a routine decision

4. Benefits VS. Costs: Do benefits outweigh cost of research?

When NOT to conduct research:
1. Resources are lacking
2. Research would not be useful
3. Opportunity has passed
4. The decision has already been made
5. Managers cannot agree on what they need to know to make a decision
6. Decision-making info already exists
7. The costs of conducting research outweigh the benefits

Exploratory, descriptive, and causal research – when to use each (p. 7-9)

Exploratory Research: when you don’t know much about problem; maybe you don’t necessarily know what is happening, so you explore it

Descriptive Research: What is the major problem?; phase to be done after exploratory; might do a survey with a representative sample to better understand what the problem is; quantify it

Causal Research: Experiment/Test Market; when you are testing a solution to the problem.


Exploratory Research: preliminary research conducted to increase understanding of a concept, to clarify the exact nature of the problem to be solved, or to identify important variables to the study

Descriptive Research: conducted to answer who, what, where, when, and how questions

Causal Research: those in which the researcher investigates whether one variable (independent) causes or influences another variable (dependent)

4) The research process and the various questions you would ask at each phase (p. 10-17)
1. The research problem/opportunity
• What info is needed to make the decision?
• Does the info exist?
• How will the info be used?

2. creating the research design
• What type of research design best addresses the research problem? (exploratory/descriptive/causal)

3. choosing a basic method
• What methodology best addresses the research question? (exploratory = focus groups/depth interviews, descriptive = surveys/observation, causal = experiments/test markets)

4. selecting the sample procedure
• Who is to be sampled?
• How large of a sample?
• How to choose?

5. collecting the data
• the research design determines how the data will be collected
• data collection must be consistent
• How will the data be gathered?
• Who will gather it?

6. analyze the data
• Logically summarizing the data
• statistical analysis
• determined by research design
• What are the rules for coding/editing
• What analysis techniques to use (why)?

7. prepare research report/presentation
• Who will read the report?
• How will it be structured?



5) The difference between qualitative and quantitative research and when to use each (p. 18-21)
Quantitative Research:
• a research methodology that seeks to quantify data, applying statistical analysis and numbers
• more descriptive
• used to generalize to the population
• Examples: surveys, observation

Qualitative Research:
• unstructured exploratory research methodology; word answers
• Open-ended: getting opinions/ideas to provide insight and understanding (no multiple choice or true/false)
• used to understand a universe (ex: Coke wants to understand who its competitors are according to consumers, so it may do qualitative exploratory research with consumers)
• Examples: focus groups, depth interviews (ex: a restaurant looking at competitor restaurants in the area for the food they sell, prices, types of customers they attract, etc.)
• usually uses small samples (5-50 people)

6) Direct versus indirect research and when to use each (p. 34-35)
Direct Research:
• directly ask respondent what you want to know
• use it when you think they’ll answer truthfully

Indirect Research:
• indirectly ask the question
• respondent doesn’t know what question is really about
• use it when you think they will NOT answer truthfully

7) Types of exploratory research techniques – what they are and when to use each (p. 35-46)
Secondary Research:
• advantages: time, cost, and convenience;
• disadvantages: unavailability of secondary research, fit/relevance of data (wrong units, out-of-date, wrong definitions), inaccuracy (by source, original purpose, quality, collected by someone with a bias?)

Experience Surveys:
• surveys with people who are knowledgeable on the subject
• you are not talking to consumers for this
• ex: want to learn about shelf space in grocery store so you talk to grocery store managers! (people who have experience in the business)

Case Studies:
• observing people and writing up an exemplary case of how they did it
• ex: when TGIF was building a smaller restaurant, the COO visited a Navy crew to explore efficient food service on deck, then tried to emulate that at the new location

Focus Groups:
• a semi-structured, free flowing interview with a small number of people
• #1 type of exploratory research
• semi-structured = have an idea of what we want to cover, but we can vary from it if we need to

Depth Interviews:
• a one-on-one interview in which the interviewer probes a single respondent in depth
• use when the topic is sensitive or when group pressure (as in a focus group) would bias answers

8) Characteristics of the moderator (p. 38-39)
The moderator:
-Quick learner
-Friendly Leader
-Knowledgeable but not all-knowing
-Excellent memory
-Good listener
-A facilitator (not a performer)
-Empathetic (feel what other people feel)
-Big picture thinker
-A good writer

9) What is a cross-sectional versus longitudinal study and when to use each (p. 48)
Cross-Sectional Study:
• measured once / one and done!
• not looking to do research over time
• ex: research for new logo design (don’t really need to observe customer reactions over time)

Longitudinal Study:
• measuring a sample repeatedly over time:
• true panel: when you’re measuring the exact same sample of people every time, over time (Ex: Nielsen television ratings = they install a box on TV’s to measure watch time over a long period) — measuring the SAME people over time
• omnibus panel: use the same TYPE of people over time (ex: phone surveys over time with certain age group over time, different individuals)

10) All errors in survey research (p. 49-55)
Two major types of errors may be encountered in connection with the sampling process

1) Random error: This error can be reduced only by increasing sample size

2) Systematic error: This error can be reduced by minimizing the sample design and measurement errors
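The distinction above can be seen in a quick simulation (a hypothetical sketch, not from the text: the true mean, noise level, and bias values below are made up). The spread of sample means (random error) shrinks as the sample size grows, while a constant bias (systematic error) stays put no matter how large the sample gets.

```python
# Illustrative sketch: random (sampling) error shrinks as sample size grows,
# while systematic error (a constant bias) does not. Values are hypothetical.
import random
import statistics

random.seed(42)
TRUE_MEAN = 50.0   # hypothetical true population value
BIAS = 3.0         # hypothetical systematic measurement error

def sample_mean(n):
    # each observation = truth + random noise + constant bias
    return statistics.mean(TRUE_MEAN + random.gauss(0, 10) + BIAS for _ in range(n))

for n in (10, 100, 10_000):
    means = [sample_mean(n) for _ in range(200)]
    spread = statistics.stdev(means)             # random error: falls as n grows
    offset = statistics.mean(means) - TRUE_MEAN  # stays near BIAS regardless of n
    print(f"n={n:>6}: spread={spread:.2f}, offset={offset:.2f}")
```

Increasing n tightens the spread of the sample means, but the offset hovers near the bias at every sample size, which is why systematic error must be attacked through better sample design and measurement rather than a bigger sample.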

Response bias: when people don’t answer truthfully

Non-response bias: When participants in a sample that respond to a questionnaire are different from those that don’t respond, thereby biasing the data.

Acquiescence bias: Respondents choose a response to please the interviewer

Extremity Bias: When a respondent, faced with a monotonous scaling task, stops considering each item and marks the same answer (one extreme or the middle) all the way down

Interviewer bias: When the interviewer influences the respondent’s answer. This could also include: Body language and cheating

Auspices Bias: When knowledge of the organization conducting the survey biases how the respondent answers

11) Types of surveys and when to use each (p. 56-57)
Close-ended questions:
• either dichotomous (two choices) or multichotomous (multiple choice)
• Ex: What is your gender? ____ M or ____ F

Open-ended questions:
• use when we don’t know all the possible answers
• ex: “What do you think of President Obama?” ______________________

Disguised questions:
• respondent doesn’t know what the research is about
• use when you think they will NOT answer truthfully
• ex: Do you have friends that are prejudiced against illegal immigrants?

Undisguised questions:
• respondent knows what the research is about
• use when you think they WILL answer truthfully
• ex: What is your favorite color? r/o/y/g/b/p (people probably wouldn’t lie about this 🙂 )

12) Telephone vs. personal vs. mail vs. internet questionnaires – When to use each (p. 58-60)
Personal Interview: a survey that gathers info through face-to-face contact
• Pros: good for sensory research, can hold long interviews, can ask complex questions or ask respondents to do complicated tasks
• Cons: most expensive, interviewer influenced (bias), no anonymity (ex: questions about medical problems and social taboos might be difficult to answer)

Telephone Interview: a survey that gathers info through telephone contact w/ individuals
• Pros: fairly fast, less costly than personal interviews, a little less pressure because you are not face-to-face
• Cons: lack of sensory signals, length limited to about 15-20 minutes, and random calling loses representation (Caller ID and the Do Not Call List leave you with fewer respondents)

Mail Questionnaire: a self-administered questionnaire sent to respondents through the mail
• Pros: Geographic flexibility (send anywhere), relatively inexpensive/cost effective, can be filled out at respondent’s convenience
• Cons: low response rate, long time, length of questionnaire

13) How a questionnaire should be organized and errors (p. 61, 63-64)
The Questionnaire Development Process:
1. Determine survey OBJECTIVES(from Research Proposal)
2. Determine DATA COLLECTION method (telephone, email, mail, Internet, etc.)
3. Determine the question RESPONSE FORMAT (open-ended, close-ended multichotomous, scales, etc.)
4. Decide question WORDING
5. Establish questionnaire FLOW (order) and layout
6. PERSONALLY EVALUATE the questionnaire (length, does it meet objectives?, etc.)
7. PRETEST/REVISE (peer review!)

For part 5:
a. Introduction
b. Screener questions (qualifying questions)
c. First few questions
d. First third (transition questions)
e. Middle half to second third (difficult questions)
f. Last section (classification and demographic questions)

14) Types of observation (p. 65-68, 72)
-Structured observation: checklist of behaviors; know exactly what we want to observe; descriptive

-Unstructured observation: open to record everything; exploratory

-Natural observation: observing things that occur naturally; use when what you want to observe happens regularly

-Scientifically contrived observation: the researcher makes the event happen; use when the event does not occur frequently enough or is difficult to observe

-Machine observation: observation done by a machine; use when a human can’t observe or it is inefficient for a human to observe

-Human observation: observation by a human; use when a person can do it

-Disguised observation: respondent doesn’t know they are being observed; use when observation would change the behavior

-Undisguised observation: respondent knows they are being observed; use when you don’t think it will bias their actions

18) The different attitude scales and when to use each (p. 81-84)
Attitude: A feeling with regard to an object (attitudes affect behavior)

The Likert Scale: measure that allows respondents to rate how strongly they agree or disagree with statements
–EX: 1: strongly disagree to 5: strongly agree
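As a worked example of how Likert responses get analyzed (the labels and the responses below are made up for illustration), items are typically coded 1-5 and summarized with a mean:

```python
# Hypothetical 5-point Likert item: code labels as 1-5 and average the scores.
SCALE = {
    "strongly disagree": 1,
    "disagree": 2,
    "neutral": 3,
    "agree": 4,
    "strongly agree": 5,
}

# made-up responses from five respondents
responses = ["agree", "strongly agree", "neutral", "agree", "disagree"]
scores = [SCALE[r] for r in responses]
mean_score = sum(scores) / len(scores)
print(mean_score)  # 3.6 -> leans toward agreement
```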

Semantic differential: seven point scale consisting of bi-polar adjectives (use to measure: image)
–EX: attitude towards McDonald’s
-cheap —– expensive

Constant Sum Scale: a measure of attitudes where respondents are asked to divide a constant sum to indicate the relative importance of attributes
EX:Divide 100 points among the following characteristics

Graphic Ratings Scale: A measure of attitudes that allows respondents to rate an object by choosing any point along a graphic continuum.
EX: Which picture best describes how you feel about CSUF? (you need to be sure each picture means the same thing to all respondents)

Ranking: respondents are asked to rank their preferences.

Perceptual scaling: creating a map on which respondents place brands according to distinguishing dimensions

17) The 3 components of attitude and measuring attitude (p. 80)
The 3 Components of Attitude and Measuring Attitude:

1. Affective (feeling): “I love Burger King.”

2. Cognitive (thinking): “It’s inexpensive and tastes great.”

3. Behavior (doing): “I go to Burger King three times a week.”

Attitudes affect behavior — e.g., if you have positive attitudes toward CSUF, you’re likely to continue attending the university.

15) Measurement: Conceptual level, conceptual definition and operational definition (p. 73-74)
Conceptual Level: What exactly are we measuring? (ex: happiness)

Conceptual Definition: What is the meaning of the concept (ex: define happiness as a state of well-being and contentment)

Operational Definition: How will the concept be measured? (ex: the amount of time that a person smiles in an hour [or looking at other body language such as eye movements])


Measuring Attitude:

Observation: (ex: see that you go to Starbucks 3 times per week, might be able to infer that you have a positive attitude toward it)

Indirect Techniques: sentence completion questions, collages, etc.

Physiological Reaction: changes in the body that show attitude (ex: pupil dilation, body electricity, etc.)

Self-Report Measures: scales we use to measure attitude that our respondents fill out (our focus for this class)

19) The concepts covered in the following homework
-The Research Process and Careers (Answers are posted on Titanium)
{See assignment doc in Titanium}
16) What is validity? reliability? How do you establish both of them? (p. 75-79)
Reliability: concerned with random error.

If we have reliability, we have reduced random error.

Reliable measures yield consistent results.
Test-retest method: Administer the same measurement instrument twice and see if the results correlate

Validity: measuring what we think we are measuring

If we have validity, we have reduced systematic error.

3 ways to establish validity:

1) Face validity: Do experts or judges agree that this is a good measurement? (weakest)

2) Criterion-related validity: Does a measure that is meant to predict actually predict accurately?

3) Construct validity: Do two different measuring instruments correlate on measures of the same item?

20) Exploratory, Descriptive and Causal Research (p. 37, 43, 8th edition)
-Qualitative versus quantitative (p. 80-81, 8th edition)
Qualitative Research: research whose findings are not subject to quantification or quantitative analysis

Quantitative Research: research that uses mathematical analyses

Exploratory Research: preliminary research conducted to increase understanding of a concept, to clarify the exact nature of the problem to be solved, or to identify important variables to be studied

Descriptive Research: research studies that answer the questions of who, what, when, where, and how

Causal Research: research studies that examine whether the value of one variable causes or determines the value of another variable
