Hello ttfish-
I was unable to access the snack food survey at the Greenfield site,
but I did review 3 other questionnaires that were available. So I
have based my comments on those.
My qualifications to answer this question include 25 years of
experience in social science and marketing research at management
levels. I have directed surveys for major U.S. corporations and for
several branches of the U.S. Government.
I shall not attempt to write your paper for you but shall try to give
you some guidance as to how you might approach it and point you to
some sources of more detailed discussion of the topics I mention.
I assume you have textbooks available in which these topics are also
covered.
You might begin your paper with an overview of the differences between
Internet surveys and more traditional forms, focusing on sampling
techniques, instrument design, and the constraints of using a
self-administered questionnaire.
For a discussion of various sampling techniques see
http://www.swcollege.com/marketing/zikmund/essentials_2e/powerpoint/ch12.ppt.
A good discussion of online marketing research is at
http://visionarymarketing.com/articles/cawi-1.html
Next review the specific techniques used by Greenfield and discuss why
they are good or bad.
Define your terms and discuss various sample types and their general
positives and negatives. Then bring that discussion back to the
specific Greenfield example.
For instance:
In the specific case of Greenfield, respondents are being recruited by
self-selection through various advertisements and using an
incentive system that offers both cash rewards and chances to win
prizes.
This creates a non-random universe from which samples for
individual studies may be pulled on a quota basis according to the
demographics supplied when an individual enrolls.
Such samples have a variety of biases. Firstly, the universe
comprises only those who have Internet access, a set of people with
distinct demographic characteristics.
There is no way to verify the demographics of respondents since they
are all self-reported and cannot be observed by an interviewer.
There is no way to confirm that the respondent one believes is
participating is, indeed, the person selected for the sample.
There is no way to prevent the same person from participating in a
survey multiple times simply by using multiple email addresses.
Quota samples have inherent problems.
Quota sampling is a method of stratified sampling in which the
selection within strata is non-random. Selection is normally left to
the discretion of the interviewer, and it is this characteristic that
destroys any pretensions towards randomness.
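To make the mechanics concrete, here is a minimal Python sketch of
quota selection from a self-selected panel. The panel, field names,
and quotas are all hypothetical illustrations, not Greenfield's actual
system:

```python
# Hypothetical self-selected panel; demographics are self-reported.
panel = [
    {"id": 1, "gender": "F", "age_group": "18-34"},
    {"id": 2, "gender": "M", "age_group": "18-34"},
    {"id": 3, "gender": "F", "age_group": "35-54"},
    {"id": 4, "gender": "M", "age_group": "35-54"},
    {"id": 5, "gender": "F", "age_group": "18-34"},
]

# Quotas for one study: how many respondents of each type are wanted.
quotas = {("F", "18-34"): 2, ("M", "35-54"): 1}

def quota_sample(panel, quotas):
    """Fill each demographic cell up to its quota.

    Selection within a cell is arbitrary (here, first come, first
    served), which is the non-random step that voids
    probability-sample theory.
    """
    filled = {cell: 0 for cell in quotas}
    sample = []
    for person in panel:
        cell = (person["gender"], person["age_group"])
        if cell in quotas and filled[cell] < quotas[cell]:
            sample.append(person)
            filled[cell] += 1
    return sample

print([p["id"] for p in quota_sample(panel, quotas)])  # [1, 4, 5]
```

Note that the quotas only balance the sample's demographic profile;
they cannot repair the self-selection bias built into the panel
itself.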
The advantages and disadvantages of quota versus probability samples
have been a subject of controversy for many years. Some practitioners
hold the quota sample method to be so unreliable and prone to bias as
to be almost worthless. Others think that although it is clearly less
sound theoretically than probability sampling, it can be used safely
in certain circumstances. Still others believe that with adequate
safeguards quota sampling can be made highly reliable and that the
extra cost of probability sampling is not worthwhile.
Generally, statisticians criticize the method for its theoretical
weakness while market researchers defend it for its cheapness and
administrative convenience.
The main argument against quota sampling is that it is not possible to
estimate sampling errors because of the absence of randomness. The
results, therefore, cannot be statistically projected to the general
population.
However, some argue that sampling errors are so small compared with
all the other errors and biases that enter into a survey that not
being able to estimate them is no great disadvantage. One does not have the
security, though, of being able to measure and control these errors.
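By way of contrast, with a true probability sample the sampling error
can be quantified. A minimal sketch using the standard formula for the
margin of error of a proportion from a simple random sample (the
figures here are assumed for illustration):

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion from a simple random sample."""
    return z * math.sqrt(p * (1 - p) / n)

# E.g., 40% of 1,000 randomly sampled respondents prefer a given snack:
print(f"+/- {margin_of_error(0.40, 1000):.1%}")  # +/- 3.0%
```

No comparable calculation is defensible for a quota sample, because
the probability of any individual's selection is unknown.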
Now discuss questionnaire design in general, the differences between
those administered by an interviewer and those that are
self-administered and any specific problems/design challenges of an
online questionnaire.
Then review the specific Greenfield example.
Critique of the Greenfield Questionnaires
Design includes pre-coded responses to facilitate analysis.
Questions are worded in an easy-to-read fashion and at an appropriately
low reading level.
However, the constraints of self-administered questionnaires require
that most questions be straightforward Yes/No dichotomies or simple
scales (usually 5 point).
Open-ended questions are of limited use, again because they must
anticipate short answers and there is no opportunity to probe
responses.
The biggest problem in the questionnaire I reviewed is the respondent
burden. A seemingly endless string of questions about purchase
behavior, repeated, and repeated and repeated (I believe the list must
have totaled well over 60 items) will lose respondents and lead to
biased answers to the items listed last simply because of respondent
fatigue.
Contingency questions were not always handled by the computer, thus
increasing respondent burden: e.g., if I respond that there are two
persons in my household, I should not be asked questions about members
3, 4, 5, 6, etc., even if only to answer "does not apply."
Other contingency questions did not have skip instructions for the
respondent: e.g., I respond that I purchased two cameras, but the
following question asks me to name the brand I purchased and permits
me to choose only one.
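An online instrument can enforce this kind of skip logic
automatically. A toy sketch of how the household example above might
be handled (question wording and the six-member cap are hypothetical):

```python
def household_questions(household_size, max_members=6):
    """Yield detail questions only for household members who exist,
    sparing the respondent a string of 'does not apply' answers."""
    for member in range(1, max_members + 1):
        if member <= household_size:
            yield f"Age of household member {member}?"

print(list(household_questions(2)))
# ['Age of household member 1?', 'Age of household member 2?']
```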
Although not apparent in the questionnaire I reviewed, a major mistake
in many online surveys I have seen is that question response
categories break one of instrument design's most important rules:
response categories must be all inclusive and mutually exclusive.
That is, there must be no opportunity for my response to fit into two
answer categories.
E.g., I purchased my camera at a discount department store online.
The question might read:
Where did you purchase your camera? Was it:
1 a camera store
2 a department store
3 a discount store
4 online
Where do I put my response?
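One common repair, sketched here with illustrative wording only, is to
merge the categories that overlap and split the purchase channel into
its own question, so that each answer list is mutually exclusive and
all-inclusive:

```python
# The original list mixes store type with purchase channel, and lets
# a "discount department store" straddle two categories. Merging the
# overlapping categories, splitting off the channel, and adding a
# catch-all makes each list mutually exclusive and all-inclusive.
questions = {
    "What type of retailer sold you the camera?": [
        "Camera specialty store",
        "Department or discount store",
        "Other (please specify)",
    ],
    "How did you make the purchase?": [
        "In person",
        "Online",
        "By phone or mail order",
    ],
}
for question, options in questions.items():
    print(question)
    for number, option in enumerate(options, 1):
        print(f"  {number} {option}")
```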
For a full discussion of questionnaire design see the book Asking
Questions: A Practical Guide to Questionnaire Design by Seymour
Sudman and Norman M. Bradburn
Instrument design for Internet surveys is discussed in:
Design of Web Survey Questionnaires:
Three Basic Experiments
Katja Lozar Manfreda, Zenel Batagelj, & Vasja Vehovar
http://www.swiftinteractive.com/white4.asp
I believe this should put you on track to an acceptable paper.
I do hope you have allowed yourself enough time to write it.
Good luck
Nellie Bly
Search strategy:
Marketing research questionnaire design
Marketing research instrument design
Marketing research instrument design respondent burden
Marketing research sampling
Marketing research sampling quota