lgl1-ga,
1) When and why we use each:
Let's start with a basic definition of non-parametric statistics, from
[ http://www.lakeheadu.ca/~kinesiology/Wmontelp/mannWhit/tsld002.htm ]:
Often referred to as distribution free statistics,
non-parametric statistics are used when the data may not demonstrate
the characteristics of normality (i.e. follow a normal distribution).
Non-parametric statistics are used with nominal data, where the
response set can be converted to counts of events, and the measurement
scale is ignored. Non-parametric statistics can be used when data are
converted to ranks.
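As a small illustration of the "counts of events" idea, here is a sketch of a chi-squared goodness-of-fit test on nominal data in Python, using the SciPy library (my choice of tool, not something the site prescribes; the counts are invented for illustration):

from scipy.stats import chisquare

# Observed counts of events in four nominal categories (e.g. responses A-D).
observed = [18, 22, 30, 30]
# Expected counts if all four categories were equally likely.
expected = [25, 25, 25, 25]

# Null hypothesis: the observed counts follow the expected distribution.
chi2, p_value = chisquare(f_obs=observed, f_exp=expected)
print(f"chi-squared = {chi2:.3f}, p-value = {p_value:.4f}")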
Getting into more detail, a look at non-parametric statistics from
Statistics Glossary, v. 1.1 by Valerie J. Easton & John H. McColl
[ http://www.cas.lancs.ac.uk/glossary_v1.1/nonparam.html#nonparat ]:
Non-Parametric tests are often used in place of their parametric
counterparts when certain assumptions about the underlying population
are questionable. For example, when comparing two independent samples,
the Wilcoxon Mann-Whitney test does not assume that the difference
between the samples is normally distributed whereas its parametric
counterpart, the two sample t-test, does. Non-Parametric tests may be,
and often are, more powerful in detecting population differences when
certain assumptions are not satisfied. All tests involving ranked
data, i.e. data that can be put in order, are non-parametric.
Here's a description of the Wilcoxon Mann-Whitney Test from the same
site at [ http://www.cas.lancs.ac.uk/glossary_v1.1/nonparam.html#wmwt
]:
The Wilcoxon Mann-Whitney Test is one of the most powerful of the
non-parametric tests for comparing two populations. It is used to test
the null hypothesis that two populations have identical distribution
functions against the alternative hypothesis that the two distribution
functions differ only with respect to location (median), if at all.
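To make that concrete, here is a minimal sketch of the Wilcoxon Mann-Whitney test in Python with the SciPy library (the library and the data are my own choices for illustration, not something the site prescribes):

from scipy.stats import mannwhitneyu

group_a = [3.1, 2.8, 4.0, 3.5, 2.9, 3.7]   # hypothetical sample 1
group_b = [4.2, 4.8, 3.9, 5.1, 4.4, 4.6]   # hypothetical sample 2

# Null hypothesis: the two populations have identical distribution functions.
stat, p_value = mannwhitneyu(group_a, group_b, alternative="two-sided")
print(f"U statistic = {stat}, p-value = {p_value:.4f}")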
And, here is a description of the (parametric) two sample t-test, also
from the same site at [
http://www.cas.lancs.ac.uk/glossary_v1.1/hyptest.html#2sampt ]:
A two sample t-test is a hypothesis test for answering questions
about the mean where the data are collected from two random samples of
independent observations, each from an underlying normal distribution:
x_i ~ N(mu_1, sigma_1^2) and y_i ~ N(mu_2, sigma_2^2) [ Note: the
equations appear as graphics on the site, so standard notation is used
here ]. When carrying out a two sample t-test, it is usual to assume
that the variances for the two populations are equal, that is,
sigma_1^2 = sigma_2^2. The null hypothesis for the two sample t-test is
H0: mu_1 = mu_2. That is, the two samples have both been drawn from the
same population. This null hypothesis is tested against one of the
following alternative hypotheses, depending on the question posed:
H1: mu_1 != mu_2, H1: mu_1 > mu_2, or H1: mu_1 < mu_2.
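For comparison with the Mann-Whitney sketch above, here is a similar sketch of the two sample t-test, again using SciPy with invented numbers; equal_var=True corresponds to the equal-variance assumption just described:

from scipy.stats import ttest_ind

sample_1 = [5.2, 4.9, 5.5, 5.0, 5.3, 4.8]   # hypothetical sample 1
sample_2 = [5.8, 6.1, 5.6, 6.0, 5.9, 6.2]   # hypothetical sample 2

# Null hypothesis: both populations have the same mean (mu_1 = mu_2),
# assuming equal variances (equal_var=True).
t_stat, p_value = ttest_ind(sample_1, sample_2, equal_var=True)
print(f"t = {t_stat:.3f}, p-value = {p_value:.4f}")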
Several other non-parametric and parametric statistical analysis
methods are discussed here as well:
[ http://www.cas.lancs.ac.uk/glossary_v1.1/nonparam.html#nonparat ],
including the non-parametric Kolmogorov-Smirnov Test:
For a single sample of data, the Kolmogorov-Smirnov test is used to
test whether or not the sample of data is consistent with a specified
distribution function. When there are two samples of data, it is used
to test whether or not these two samples may reasonably be assumed to
come from the same distribution.
The Kolmogorov-Smirnov test does not require the assumption that the
population is normally distributed.
Compare Chi-Squared Goodness of Fit Test.
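Here is a quick illustration of the Kolmogorov-Smirnov test in both its one-sample and two-sample forms, again a SciPy sketch with randomly generated data purely for illustration:

import numpy as np
from scipy.stats import kstest, ks_2samp

rng = np.random.default_rng(0)
x = rng.normal(loc=0.0, scale=1.0, size=100)
y = rng.normal(loc=0.5, scale=1.0, size=100)

# One sample: is x consistent with a standard normal distribution?
print(kstest(x, "norm"))

# Two samples: could x and y reasonably come from the same distribution?
print(ks_2samp(x, y))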
2) Strengths and weaknesses of using each:
Part of this has already been covered above, in particular:
Non-Parametric tests may be, and often are, more powerful in
detecting population differences when certain assumptions are not
satisfied. All tests involving ranked data, i.e. data that can be put
in order, are non-parametric.
A parametric test, on the other hand, is "a statistical test in which
assumptions are made about the underlying distribution of observed
data." So in situations where these assumptions cannot be made,
non-parametric tests must be used. In cases where you know you can
make certain assumptions, parametric testing is more reliable.
[ http://mathworld.wolfram.com/ParametricTest.html ].
3) The costs and benefits of using each:
Again, this is largely related to the points covered above concerning
strengths, weaknesses and when to apply parametric vs. non-parametric
statistical testing, but I'll go into more detail and provide examples
at this point. Which type of test is better for a particular case
depends on what type of assumptions can be (or cannot be) made.
The following website offers worked-through examples of parametric and
non-parametric statistical tests, such as the sign test and the
Kruskal-Wallis test:
[ http://campus.houghton.edu/orgs/psychology/stat19/ ].
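As a taste of what those worked examples cover, here is a sketch of one of them, the Kruskal-Wallis test (the non-parametric counterpart of one-way ANOVA), in Python/SciPy with invented group scores:

from scipy.stats import kruskal

group_1 = [12, 15, 11, 14, 13]
group_2 = [18, 17, 19, 16, 20]
group_3 = [13, 14, 12, 15, 16]

# Null hypothesis: all three groups come from populations with the same median.
h_stat, p_value = kruskal(group_1, group_2, group_3)
print(f"H = {h_stat:.3f}, p-value = {p_value:.4f}")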
Also, distribution-free tests or non-parametric tests do not rely
on parameter estimation and/or distribution assumptions. That means
that the assumptions made about the distribution of a data set are much
more general than they would be in a parametric test. Normality
assumptions are usually left out altogether. Examples are the Wilcoxon
Signed-Ranks Test, Chi-squared, and Spearman's rank order
correlation.
The above comes from:
[ http://www.psybox.com/web_dictionary/Distfree.htm ].
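Here is a brief sketch of two of those distribution-free tests, the Wilcoxon Signed-Ranks Test and Spearman's rank order correlation, using SciPy with hypothetical before/after scores:

from scipy.stats import wilcoxon, spearmanr

before = [72, 65, 80, 75, 68, 70, 77, 74]   # hypothetical paired scores
after  = [75, 70, 82, 74, 73, 76, 80, 78]

# Wilcoxon Signed-Ranks Test: do the paired differences center on zero?
print(wilcoxon(before, after))

# Spearman's rank order correlation between the two sets of scores.
rho, p_value = spearmanr(before, after)
print(f"rho = {rho:.3f}, p-value = {p_value:.4f}")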
The following observations on parametric tests, by Dr. Chong-ho Yu
[ http://seamonkey.ed.asu.edu/~alex/teaching/WBI/parametric_test.html
] are also valuable:
Restrictions of parametric tests
"Conventional statistical procedures are also called parametric tests.
In a parametric test a sample statistic is obtained to estimate the
population parameter. Because this estimation process involves a
sample, a sampling distribution, and a population, certain parametric
assumptions are required to ensure all components are compatible with
each other. For example, in Analysis of Variance (ANOVA) there are
three assumptions:
Observations are independent.
The sample data have a normal distribution.
Scores in different groups have homogeneous variances."
Another important example from the above site:
Take ANOVA as an example. ANOVA is a procedure of comparing means in
terms of variance with reference to a normal distribution. The
inventor of ANOVA, Sir R. A. Fisher (1935) clearly explained the
relationship among the mean, the variance, and the normal
distribution: "The normal distribution has only two characteristics,
its mean and its variance. The mean determines the bias of our
estimate, and the variance determines its precision." (p.42) It is
generally known that the estimation is more precise as the variance
becomes smaller and smaller.
Put it in another way: the purpose of ANOVA is to extract precise
information out of bias, or to filter signal out of noise. When the
data are skewed (non-normal), the means can no longer reflect the
central location and thus the signal is biased. When the variances are
unequal, not every group has the same level of noise and thus the
comparison is invalid. More importantly, the purpose of a parametric
test is to make inferences from the sample statistic to the population
parameter through sampling distributions. When the assumptions are not
met in the sample data, the statistic may not be a good estimate of
the parameter. It is incorrect to say that the population is assumed
to be normal and equal in variance, therefore the researcher demands
the same properties in the sample. Actually, the population is
infinite and unknown. It may or may not possess those attributes. The
required assumptions are imposed on the data because those attributes
are found in sampling distributions. However, very often the acquired
data do not meet these assumptions.
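In practice, the three ANOVA assumptions quoted above can be checked before running the test itself. The sketch below does that with SciPy (Shapiro-Wilk for normality, Levene's test for equal variances, then the one-way ANOVA F-test); all the group data are invented:

from scipy.stats import shapiro, levene, f_oneway

group_1 = [23.1, 24.5, 22.8, 25.0, 23.7, 24.2]
group_2 = [26.3, 25.8, 27.1, 26.5, 25.9, 26.8]
group_3 = [22.0, 21.5, 23.2, 22.7, 21.9, 22.4]

# Assumption 2: each group's scores are roughly normal (Shapiro-Wilk).
for i, g in enumerate([group_1, group_2, group_3], start=1):
    stat, p = shapiro(g)
    print(f"group {i}: Shapiro-Wilk p = {p:.3f}")

# Assumption 3: the groups have homogeneous variances (Levene's test).
print("Levene p =", levene(group_1, group_2, group_3).pvalue)

# If the assumptions look tenable, run the one-way ANOVA F-test itself.
f_stat, p_value = f_oneway(group_1, group_2, group_3)
print(f"F = {f_stat:.3f}, p-value = {p_value:.4f}")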
The following series of links will lead you to further information on
this topic--
Google search strategy:
Keywords:
nonparametric statistics:
[ http://www.google.com/search?hl=en&lr=&ie=UTF-8&oe=UTF-8&safe=off&q=++nonparametric+statistics&spell=1 ]
parametric statistics:
[ http://www.google.com/search?hl=en&lr=&ie=UTF-8&oe=UTF-8&safe=off&q=parametric+statistics&btnG=Google+Search ]
parametric tests examples:
[ http://www.google.com/search?hl=en&lr=&ie=UTF-8&oe=UTF-8&safe=off&q=parametric+tests+examples ]
I hope this information is more than sufficient to assist you with
your project. If I have left anything out that you feel is important
to you, such as specific examples, please don't hesitate to request
Clarification of this Answer.
Good luck on your project!
Sincerely,
omniscientbeing-ga
Google Answers Researcher