Q: Statistics ( Answered,   2 Comments )
Question  
Subject: Statistics
Category: Reference, Education and News > Teaching and Research
Asked by: munkeyboy-ga
List Price: $3.00
Posted: 19 Sep 2002 08:06 PDT
Expires: 19 Oct 2002 08:06 PDT
Question ID: 66845
Say I have a team of 15-20 people.  Each person has their work
monitored for accuracy.  There are three categories (1.
Responsiveness, 2. Accuracy 3. Tone).  Each person receives a rating
for each category - each category has a maximum number of points (1.
14 max , 2. 20 max, 3. 9 max).  All points for each category for each
employee are added together to come up with total points earned and
total points possible.  This is then divided out to come up with a
percent of goal. The problem is this: if one person receives a
zero in each category (complete audit failure), the resulting score
for the team is skewed and they receive a very low percent of goal -
which is not an accurate reflection of the entire team's results.

The question is this - how do we count the failures but not have the
isolated failure skew the overall results?
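To make the skew concrete, here is a sketch of the scoring scheme described above, using hypothetical per-employee totals (the numbers are made up for illustration). Each employee is scored out of 14 + 20 + 9 = 43 points in total:

```python
# Hypothetical per-employee point totals; one complete audit failure (the 0).
MAX_POINTS = 14 + 20 + 9  # 43 possible points per employee

team = [40, 41, 38, 42, 39, 0]

earned = sum(team)
possible = MAX_POINTS * len(team)
print(f"with the zero:    {100 * earned / possible:.1f}% of goal")

# The same team with the audit failure excluded, for comparison.
nonzero = [s for s in team if s > 0]
print(f"without the zero: {100 * sum(nonzero) / (MAX_POINTS * len(nonzero)):.1f}% of goal")
```

One zero drops the team from 93.0% to 77.5% of goal, even though five of six members performed well.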
Answer  
Subject: Re: Statistics
Answered By: calebu2-ga on 19 Sep 2002 10:18 PDT
 
I am going to make a couple of assumptions about the typical scores
that people get on this review and then make a few suggestions.

Let's focus on the "Accuracy" category (marks out of 20). Anything I
say here should apply equally well to the other categories.

I assume that most people get a fairly good score on the accuracy
test. In other words, most employees are pretty good and you don't
want to give them a low score because they did a good job.

However, I assume that there are some employees that completely miss
the mark, and to give them anything other than a zero for that
category defeats the purpose of defining that category as such.

If you had a nice even distribution of abilities/scores then the sum
of the team's individual scores would be a fair reflection of their
overall performance.

However, from your comments I think it is fair to deduce that the
typical distribution of scores is neither uniform nor normal.

To solve this problem you have to do some kind of transformation to
better capture the ability/quality of all team members. This can be
done in a number of ways (two of which have been mentioned in the
comments below). I'll start off with the simplest (though perhaps not
the most helpful).

1. Remove 0 and low scores.

If you just add up the scores for the remaining team members, or
replace zero scored members with the team average then you remove
their skewing from the team. However by doing this you completely
ignore their performance and actually reward poor performing teams.
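Both variants of this suggestion can be seen with the same hypothetical totals as before:

```python
# Suggestion 1, two variants, on hypothetical per-employee totals.
scores = [40, 41, 38, 42, 39, 0]
nonzero = [s for s in scores if s > 0]

# (a) drop the zero scores entirely
mean_dropped = sum(nonzero) / len(nonzero)

# (b) replace each zero with the average of the non-zero scores
replaced = [s if s > 0 else mean_dropped for s in scores]
mean_replaced = sum(replaced) / len(replaced)

# Both come out identical -- the audit failure has vanished from the statistic.
print(mean_dropped, mean_replaced)
```

Both averages come out to 40.0, exactly as if the failure never happened, which is the drawback described above.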

2. Base your team statistic on the median score.

Instead of adding up all the scores (which is equivalent to taking the
mean score and multiplying it by the number of team members), consider
taking the median score, i.e. rank all team members and then report the
score of the middle one (for a team of 15 this would be the 8th ranked
team member). By doing this you take into account the 0 scores (they
cause the focus to shift down the scale of the performing team
members) but you do not get the extra skewing weight.

The downside to this approach is that it tells you very little about
how the whole team is doing - you could have a team with two or three
stars and a load of average people and they would be ranked the same
as a team of all average people.
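Here is the median approach on a hypothetical 15-person team with one audit failure (scores out of 20, as in the "Accuracy" category):

```python
import statistics

# A hypothetical 15-person team: mostly good scores, one complete failure.
scores = [0, 12, 13, 14, 14, 15, 15, 16, 16, 17, 17, 18, 18, 19, 20]

print(statistics.mean(scores))    # dragged down by the single zero (~14.93)
print(statistics.median(scores))  # the 8th-ranked score (16); robust to the zero
```

The mean is pulled below every passing member's score by the single zero, while the median (the 8th-ranked member, as described above) is barely affected.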

3. Adjust your target scores

As nellie_bly-ga suggested you can adjust the score of the team to
account for the zero-performers after the fact by subtracting a
smaller number off the total. This is kind of an ad-hoc approach, but
it should work well if you can find a number that seems to be fair.
The main problem with this approach is that it is difficult to
envision how the scoring will pan out for teams before you put it into
practice - you might end up underpenalizing the zeros, in which case
changing the scale later might decrease its value to the employees it
is being used to judge.
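One possible form of this adjustment, assuming a flat percentage-point penalty per audit failure (the penalty size is the ad-hoc number you would have to tune):

```python
# Suggestion 3 as a flat penalty: score the passing members normally,
# then deduct a fixed number of percentage points per zero score.
MAX_POINTS = 43  # 14 + 20 + 9 possible points per employee
PENALTY = 5      # percentage points per audit failure -- a number to tune

scores = [40, 41, 38, 42, 39, 0]
failures = scores.count(0)
nonzero = [s for s in scores if s > 0]

percent = 100 * sum(nonzero) / (MAX_POINTS * len(nonzero)) - PENALTY * failures
print(f"{percent:.1f}% of goal")
```

With these numbers the team lands at 88.0% of goal: 93.0% for its passing members, minus a 5-point penalty for the one failure, rather than the 77.5% the raw sum would give.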

4. Standardize the distribution of scores

Getting a little more technical now. This requires a bit more effort
to set up but is the one that I would recommend as a good medium
difficulty solution. Instead of adding up the raw scores to form your
team statistic, consider producing an adjusted score that brings
everybody's performance closer together. The best way to do this is to
reassign scores so that each adjusted score (perhaps on a scale of 1
to 10) receives an equal number of people across all teams.

So (as an example) suppose you have 10 teams of 15 people and you find
the following raw scores :

0/1 - 10 people
2/3 - 5 people
4/5 - 5 people
6/7 - 5 people
8/9 - 10 people
10/11 - 15 people
12/13 - 20 people
14/15 - 25 people
16/17 - 25 people
18/19 - 20 people
20 - 10 people

You would group the scores approximately every 15 people and assign
the following "adjusted scores" :

Raw score     Adjusted score
  0 - 3              1
  4 - 8              2
  9 - 10             3
  11 - 12            4
  13 - 14            5
  15                 6
  16                 7
  17                 8
  18                 9
  19 - 20           10

By doing this you have taken the extreme weighting off the poor
performers and evened out the scores. Sure, the people who get 12/20
for their raw score would be a little upset to know that they got an
adjusted score of 4, but if you think that is a problem, you can keep
the adjusted scores somewhat secret and just explain that before you
add up the numbers you adjust them carefully.

Now when you add up the adjusted scores for each team, you will get a
number out of 150. A typical team might get 110-120, but the most they
would lose by having a zero-scoring person is 7-10 points.
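The grouping above can be mechanized by ranking the pooled raw scores and assigning each raw score to a 1-10 band by its position in the pool. A rough sketch (the band edges will not match the hand-grouped table exactly, because ties at a raw score are handled mechanically here):

```python
import math

# The pooled raw scores for 150 people, matching the example distribution.
raw = ([0] * 10 + [2] * 5 + [5] * 5 + [7] * 5 + [9] * 10 + [11] * 15 +
       [13] * 20 + [15] * 25 + [17] * 25 + [19] * 20 + [20] * 10)

ranked = sorted(raw)
n = len(ranked)

def adjusted(score):
    """Adjusted score 1-10 from the fraction of the pool at or below `score`."""
    rank = sum(1 for s in ranked if s <= score)
    return min(10, 1 + math.floor(10 * (rank - 1) / n))

for s in (0, 13, 20):
    print(s, "->", adjusted(s))
```

A raw 0 maps to adjusted 1, a raw 13 to adjusted 5, and a raw 20 to adjusted 10, in line with the table above.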

5. Do some serious statistical analysis of your teams' results and
expand the measurement criteria.

This goes beyond the scope of the question, but if you find that the
previous 4 solutions do not satisfy you - you could run a number of
statistical tests on the data - within teams, across teams and on the
whole group of employees. If you find someone with a firm grasp of
statistics and a lot of free time on their hands, they can set up a
statistical model under which you can evaluate the severity of
discrepancies within a team and give you more informative descriptions
than the sum of the scores (which isn't a bad starting place).

So, I'd go with either suggestion 1, 2 or 4. I hope that this answers
your question - be sure to ask for clarification if any point is
unclear or if you think that it may not be suitable. I will try to
clarify as best as I can (within the confines of a suitable answer for
a $3.00 question, of course :)

calebu2-ga

---------------

Potential Google searches and sites to consider (all of what I said
comes from practitioner experience, but you may want to back up my
comments):

Search for: normalizing distribution
            statistics 201

Normalizing a distribution :
http://www.public.asu.edu/~pythagor/example2.htm

Request for Answer Clarification by munkeyboy-ga on 24 Sep 2002 16:26 PDT
Ok - I tried using the median - but I don't think this is
accomplishing what I need it to...

Using this set of scores for 9 employees...each number represents one
employee overall score:

000111122

The median (as I understand it) would be the number directly in the
middle, which in this case would be a 1 - which does not seem like an
accurate reflection of the team when 3 people completely failed.

Am I missing something?

Clarification of Answer by calebu2-ga on 25 Sep 2002 07:34 PDT
munkeyboy,

You are going to need to take a step back from the problem and think
about exactly what it is that you need. Here's my philosophy on
statistics, which I am sure many other people share:

Statistics are tools that you can use to help you answer a
question/solve a problem. By themselves they are as useful as a hammer
- you can hit things, break things and dent things. Where they really
come into their own is when you have a plan of what you hope to
achieve with them.

Now don't get me wrong - you seem to have a pretty clear goal of what
you need : A measure (or 3 measures) of team performance. This is a
general goal and we can answer it with a set of general answers (like
the ones I laid out below). You have specific concerns (that 0 score
people throw off the balance on the team) and as a result I
recommended either rescaling the scores or using the median.

However, whether the median is the best statistic for you to use
really becomes a judgement call on your part. I can give you the tools
and you can try them out to see how they work (a bit like a
salesperson in a Sears hardware store), but at the end of the day you
need to figure out whether the results they give make sense.

For example, take your example: scores out of 3 that read:

000111122

We want to judge the team based on these scores. From a statistician's
point of view (not knowing the team or how the grades are assigned) I
can see:

The mean is 0.8889
The median is 1
The mode is 1

So in terms of capturing the overall performance of the team, 1 seems
a fair score - it's what most team members got; there are some that
got more than 1, there are some that got less. Given that the scores
seem pretty evenly distributed across 0, 1 and 2 there is little
reason to worry that our results are being skewed by people who fail
or that we are ignoring the people who fail.

You would have more of an issue if you saw the following :

000033333

Here the mean would be 1.667, the median would be 3 and the mode would
be 3. Because you have a distribution of scores that focuses on the
extremes, it is tough to know how best to use the different "averages"
available to us. How we would proceed from here would depend on the
situation. If we had other teams we would look and see whether they
too possessed a 0/3 bipolar split. If that was the case, we might look
at a simple "percentage of team that got 2 or 3 vs percentage of team
that got 0 or 1".
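The two patterns discussed above, and the simple "fraction scoring 2 or better" statistic for the bipolar case, can be computed directly:

```python
import statistics

# The two score patterns from the clarification, as lists of 9 scores.
even_spread = [0, 0, 0, 1, 1, 1, 1, 2, 2]  # "000111122"
bipolar     = [0, 0, 0, 0, 3, 3, 3, 3, 3]  # "000033333"

for team in (even_spread, bipolar):
    print(statistics.mean(team), statistics.median(team), statistics.mode(team))

# For the bipolar team, the fraction of members scoring 2 or better:
frac_high = sum(1 for s in bipolar if s >= 2) / len(bipolar)
print(frac_high)
```

The first team gives mean 0.889, median 1, mode 1 (the averages agree); the second gives mean 1.667 against a median and mode of 3, which is the disagreement that signals a bipolar split.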

I guess what I'm saying is that without going into a lot more detail
into your problem (and for a $3 question I doubt that will be
possible), the best I can do is give you the tools, explain when they
work and when they don't and let you play around with them to find out
which works best.

I hope that this clarification has helped - I admit I've hedged my
bets with the answer - but sometimes statistics does not give us the
answer, it just guides us in the right direction.

Regards

calebu2-ga
Comments  
Subject: Re: Statistics
From: nellie_bly-ga on 19 Sep 2002 08:23 PDT
 
You could make all your calculations based only on "non-zero" scores.
This, however, would not reflect the failure at all and, I presume,
not be fair to competing teams.

To adjust this you might then subtract some standard number assigned
for a failure/zero score either from the total before dividing for
your percentage or, probably better, reduce the derived achievement
percentage by some set number, a penalty score, so to speak.
Subject: Re: Statistics
From: aceresearcher-ga on 19 Sep 2002 08:40 PDT
 
Another strategy would be to eliminate the lowest individual score
(and possibly the highest as well) before calculating the team
rankings. This way every team gets cut some slack for their lowest
ranker, rather than one team being helped unfairly.
