I agree with pinkfreud. The number of citations is dependent on
many, many factors, including how much interest there is in the
subject of the work, whether it is controversial (in which case it may
be widely cited to support one side of an argument), how much press
attention it receives, the reputation of the organization for which
the author works, etc. It may also be widely cited by later studies
refuting it. For instance, a study that suggested a link between
the MMR vaccine and autism is very widely cited, both by those who
are opposed to vaccination and by a number of other researchers who
have published studies refuting it. Interestingly, 10 of the 13
authors of that study recently came out and said they disagreed with
the conclusions of their own original study:
http://www.nytimes.com/2004/03/04/science/04AUTI.html?ex=1083297600&en=c967c6d9c4ccc259&ei=5070
I'm not bringing this up to elicit debate about autism and MMR, but to
show how a study, in which the majority of the authors admit the
conclusion was not fully supported by the data, has become very widely
cited. This is one of the frustrating things about science. In
newspapers, if a claim is published it is assumed to be true, and the
more it is published, the more it is accepted as true. In scientific
journals, publication is more a statement of "this is what I found and
how I found it, what do you all think?"