Q: Simulation convergence and efficiency/statistics and when to stop (Answered, 5 out of 5 stars, 1 Comment)
Question
Subject: Simulation convergence and efficiency/statistics and when to stop
Category: Science > Math
Asked by: gooseman-ga
List Price: $20.00
Posted: 19 Jun 2002 13:59 PDT
Expires: 26 Jun 2002 13:59 PDT
Question ID: 29363
I have a simulation where various "robots" have to find a target.
The simulation is terminated once the target has been found. The
current measure of performance is "iterations taken", i.e. time. The
simulation settings are kept constant, aside from the starting
positions of the "robots", which are random. The simulation is
repeated until the confidence interval reaches a certain percentage
[this statement may be wrong once you read the next step!]

The simulation then changes a parameter (such as number of "robots",
or grid size) and is then repeated to sample the new population. This
is done quite
a few times.

The question is:

1) Bearing in mind how important computational efficiency is, at which
point do I stop repeating the same scenario? (i.e. measure the time
taken for a preset scenario, repeated until the mean time taken has
converged.) Is confidence in the mean the solution (and how do I
compute it)? Is it comparing various simulation runs together (and
using what method)? Or is it something else, or a combination? Does
having a skewed Poisson-type distribution alter anything?

2) How important is it to prove, with a certain confidence, that
the MEAN time taken from Simulation A comes from a different
population than that from Simulations B, C, D, E, etc.?

From my understanding, this may imply that I need to concentrate on
the accuracy of the mean of each simulation run w.r.t. the real
(unknown) population mean, and then compare this to other simulation
runs with different populations. Some suggestions have included doing
an ANOVA analysis. Comparing multiple variances was also suggested,
but apparently that can only be done with 2 populations.

Does this explain enough? If anyone requires any more info, just ask.
Sorry if this explanation or question sounds vague - I am just
starting to find my way around stats! References and equations to
use are welcome!

Request for Question Clarification by elmarto-ga on 19 Jun 2002 14:33 PDT
Hello gooseman,

I studied this last year, so I don't know why it is proving so
difficult for me to answer :-) I need to know two things:

1) Are you saying that the number of iterations taken to find the
target follows a skewed Poisson distribution?

2) Maybe I didn't understand your question, but you're asking when to
stop running simulations, and then you answer yourself: you stop when
the mean time has converged. According to the Law of Large Numbers,
you can be as near to the population (unknown) mean as you like by
taking the mean of an ever bigger sample. Now, if running each
simulation takes some time, are you asking how many simulations you
should run in order to be within a confidence interval of x% around
the mean?


Looking forward to answering your question,

elmarto-ga

Clarification of Question by gooseman-ga on 19 Jun 2002 16:56 PDT
Hi,

>> 1) Are you saying that the number of iterations taken to find the
target follows a skewed Poisson distribution?

Yep - it seems that the time taken follows a Poisson-like
distribution: the histogram peaks on the left, with a long tail to
the right (i.e. it is skewed to the right).

>> 2) Maybe I didn't understand your question, but you're asking when
to
stop running simulations, and then you answer yourself: you stop when
the mean time has converged. According to the Law of Large Numbers,
you can be as near to the population (unknown) mean as you like by
taking the mean of an ever bigger sample. Now, if running each
simulation takes some time, are you asking how many simulations you
should run in order to be within a confidence interval of x% around
the mean?

Yep - once the mean converges. There are various methods I have seen
to reach this mean convergence _efficiently_. One friend mentioned
batch running: running the same scenario 10 times and getting the
mean, then 20 times, then 40, checking the error margin of the mean
at each point. When it is small, you stop. But I'm not sure this is a
formal method, and frankly it sounds rather inefficient. Also, if
anyone can get me some source code examples of how to deal with this,
I'd be extra grateful!

Request for Question Clarification by elmarto-ga on 19 Jun 2002 19:44 PDT
OK, I think I've got the answer for you. But before anything, let me
say that if you are trying to find what the population mean is, the
only way to do it is what you consider "not efficient"; that is,
running several simulations, taking the sample mean (the mean of
these simulations) and using it as an estimator for the population
mean. There is, however, a way to decide how many simulations you
need to run in order to be as close to the real mean as you want
(obviously, the more simulations, the more precise the sample mean
will be).

So, before I answer, I want to make sure what you want to know:

1) You want to find the population mean with a given error margin, so
you need to know how many simulations you have to run to achieve that
error margin.

2) You want to know whether the means from two different populations
are statistically different.

3) You want some source code examples to run the batch... what
program are you using for the robots? As I haven't used mathematics
software for a year now, I may not be able to provide this source
code... is this OK for you?


Please tell me if what I've written is what you want to know.

Thanks a lot,

elmarto-ga

Clarification of Question by gooseman-ga on 20 Jun 2002 03:07 PDT
1) You want to find the population mean with a given error margin, so
you need to know how many simulations you have to run to achieve that
error margin.

Yes
 
2) You want to know whether the means from two different populations
are statistically different.

Yes
 
3) You want some source code examples to run the batch... what
program are you using for the robots? As I haven't used mathematics
software for a year now, I may not be able to provide this source
code... is this OK for you?

Using Java, but any code would be useful. References would be handy
too, but not essential.
Answer  
Subject: Re: Simulation convergence and efficiency/statistics and when to stop
Answered By: elmarto-ga on 22 Jun 2002 18:43 PDT
Rated: 5 out of 5 stars
 

Clarification of Answer by elmarto-ga on 22 Jun 2002 18:47 PDT
I'm sorry, something went wrong when pasting the text from my Notepad,
and there are line breaks where there shouldn't be. I'll paste it
again hoping this time works.

--BTW, the Google search terms I used were
"sample size" population mean
compare two means
"ANOVA analysis"

---------------------------------------------
ANSWER REPOSTING

Hello again, gooseman!

Let's address your questions one by one. It's going to be a bit
difficult to write mathematical symbols in this format, but I'll do
my best.

If you want to estimate the population mean using the sample mean,
you will need a large number of simulations. The sample size (that
is, the number of simulations you'll have to run) depends on the
accuracy (error margin) you want to achieve. To understand how the
sample size depends on the error margin you set, you must first
understand the Central Limit Theorem.

There are various versions of the Central Limit Theorem. I'll state
the one that is most used and applies to your case:

Central Limit Theorem (adapted from "Statistical Inference", by
Casella and Berger, 1990):
(1) Let X1, X2, ..., Xn be a sequence of n iid (independent and
identically distributed) random variables.
(2) Let E(Xi)=u and Var(Xi)=s^2 for i=1,2,...

Define An to be the sample mean of X1, X2, ..., Xn. Then
   [sqrt(n)*(An-u)]/s    converges to a standard normal distribution
(that is, when n is large, it approximately follows a normal
distribution with mean 0 and variance 1).

sqrt(n) is the square root of n. If you don't see how this applies
to your case, note that if you take n samples WITHOUT changing the
parameters, it seems right to say that each observation is a random
variable, and that they all have the same distribution. This is
because if you don't change any parameters and just run your
simulation over and over, the outcomes of each simulation should
follow the same - unknown - distribution (so the mean and variance of
each outcome are the same, and assumption (2) holds).

What the Central Limit Theorem implies is that the *sample mean* (I
called it An, a sample mean of 'n' observations), minus the
*population* mean and then divided by (s/sqrt(n)), where s is the
*population* standard deviation, follows a standard normal
distribution.

This is a very powerful theorem. It says that no matter what
distribution the sample comes from, if the sample size is large, the
sample mean (with some transformations) will approximately follow a
standard normal distribution.

You can see this theorem in 'action' at

The Central Limit Theorem
http://www.math.csusb.edu/faculty/stanton/m262/central_limit_theorem/clt.html

It's stated a bit differently than I wrote it above. If you divide
the numerator and the denominator by n, you'll get the equation I
stated.
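As a quick empirical check of the theorem, the following Java sketch
(my own illustration, not from the linked page) draws many sample
means from a deliberately skewed distribution - exponential with mean
1 - standardizes them as above, and counts how often they land inside
+/-1.96. For large n the fraction should approach 95%.

```java
import java.util.Random;

public class CltDemo {
    // Empirical CLT check: fraction of standardized sample means
    // [sqrt(n)*(An-u)]/s that fall inside +/-1.96.
    public static double coverage(int reps, int n, long seed) {
        Random rng = new Random(seed);
        int inside = 0;
        for (int r = 0; r < reps; r++) {
            double sum = 0.0;
            for (int i = 0; i < n; i++) {
                // Exponential(1) via inverse transform: u = 1, s = 1,
                // and the distribution has a long right tail (skewed).
                sum += -Math.log(1.0 - rng.nextDouble());
            }
            double z = Math.sqrt(n) * (sum / n - 1.0);  // /s with s = 1
            if (Math.abs(z) <= 1.96) inside++;
        }
        return (double) inside / reps;
    }

    public static void main(String[] args) {
        // Prints a value close to 0.95, as the theorem predicts.
        System.out.println("coverage = " + coverage(10000, 100, 1L));
    }
}
```

The point is that the underlying distribution is strongly skewed, yet
the standardized mean already behaves almost normally at n = 100.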

Now, we have two problems here. First: we don't know the population
mean 'u' (recall from above that you have to subtract it from the
sample mean to form the random variable that converges to standard
normal). Second: we don't know the population variance.

I'll first address the second problem. We don't know the population
variance, but you can calculate the sample variance. You can find
the formula at:

Sample variance
http://mathworld.wolfram.com/SampleVariance.html
(see item (3) )
 
By the Law of Large Numbers, you know that the sample variance
converges to the population variance. So, applying Slutsky's Theorem,
you can directly use the sample variance instead of the population
variance, and the results will be the same for a large 'n'.
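In code, the sample mean and the (unbiased) sample variance from the
MathWorld formula are one-liners. A minimal Java sketch (class and
method names are mine, and the data are invented iteration counts):

```java
public class SampleStats {
    // Sample mean of the observations.
    public static double mean(double[] x) {
        double sum = 0.0;
        for (double v : x) sum += v;
        return sum / x.length;
    }

    // Unbiased sample variance (divisor n-1), as in the MathWorld
    // formula linked above.
    public static double variance(double[] x) {
        double m = mean(x);
        double ss = 0.0;
        for (double v : x) ss += (v - m) * (v - m);
        return ss / (x.length - 1);
    }

    public static void main(String[] args) {
        double[] iterations = {12, 7, 15, 9, 11};  // made-up outcomes
        System.out.println("mean     = " + mean(iterations));
        System.out.println("variance = " + variance(iterations));
    }
}
```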

That solves the second problem. Now you're ready to find the minimum
'n' needed to achieve a certain error margin.

Say you want to choose a sample size (n) so that the probability of
"the difference between the sample mean and the actual mean is no
bigger than 2" is 95%. That is, the sample mean you estimated is in
the interval [u-2, u+2] with 95% confidence.

You would write this event like this:

Prob (|An-u| < 2) = 0.95
where | | denotes absolute value.

This is the same as saying that

Prob (|An-u| > 2) = 0.05

But you also know by the Central Limit Theorem that sqrt(n)*(An-u)/s
has a standard normal distribution (for large n). Thus,

Prob (|An-u| > 2) = 
Prob (sqrt(n)*|An-u| > sqrt(n)*2) =
Prob (sqrt(n)*|An-u|/s > sqrt(n)*2/s) = 0.05

Now, what follows is easy. You know that the left-hand side of the
inequality has a standard normal distribution. Call the left-hand
side 'z'. You have

Prob (|z| > sqrt(n)*2/s) = 0.05

So you check a standard normal distribution table (you can find one
in almost any statistics book, or at the link I provide below), and
find the value that leaves 0.025 (0.05/2) of the probability in the
right tail.

Standard normal distribution table
http://www.math2.org/math/stat/distributions/z-dist.htm

In the example I'm giving you (that is, with 95% confidence), this
value is 1.96.

Finally, you're all set! You just have to solve (in my example)

sqrt(n)*2/s = 1.96

Now you run into another problem (that's statistics :-) ). You don't
have a sample variance until you have actually run some simulations.
The good news is that this sample variance needn't be very precise;
some authors suggest using a sample size of a "few hundred", and I've
found 150 has worked well for me in most studies I've done. So once
you have the sample variance, you take its square root to get the
standard deviation; that's your 's'. Now you have one equation with
one unknown. So, in general, with a confidence level of 95%, the
minimum sample size (n) you need to be within a given error margin
(ER) is:

***   n = [(1.96*s)/ER]^2   ***

That's all. I've explained everything step by step. BUT if you're
solely interested in the sample size itself, you can check the
following site, where you enter the error margin and the sample
standard deviation, and it automatically calculates the needed
sample size.

http://department.obg.cuhk.edu.hk/ResearchSupport/Sample_size_EstMean.asp

If the 'n' you find here is smaller than the 'n' you used to estimate
the sample variance (say, 150), obviously you don't need to run
simulations again: just use the mean of the same sample you used for
the variance.
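The whole procedure - pilot runs to get 's', then the formula above -
can be sketched in Java like this (the pilot standard deviation and
error margin below are made-up numbers for illustration):

```java
public class SampleSize {
    // Minimum n so that the sample mean is within +/- errorMargin of
    // the population mean, at the confidence level implied by zValue
    // (1.96 corresponds to 95%). Implements n = [(z*s)/ER]^2.
    public static int minimumSampleSize(double sampleStdDev,
                                        double errorMargin,
                                        double zValue) {
        double n = Math.pow((zValue * sampleStdDev) / errorMargin, 2);
        return (int) Math.ceil(n);  // round up: n must be an integer
    }

    public static void main(String[] args) {
        double s = 9.5;   // std. dev. from a pilot run of ~150 simulations
        double er = 2.0;  // desired error margin, in iterations
        int n = minimumSampleSize(s, er, 1.96);
        System.out.println("run at least " + n + " simulations");
    }
}
```

Note how the required n grows with the square of 1/ER: halving the
error margin quadruples the number of simulations needed.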


Now, on to your second question. You want to compare the means of
different processes, and see whether or not they are statistically
different. This is somewhat more complicated to explain step by
step, especially using text to write mathematical symbols. So I'll
just tell you what method to use and then mention some books you can
check to find out why it works.

What you need for this kind of comparison is an ANOVA analysis.
It's not true that this analysis can only be done for 2 populations;
you can use it for as many populations as you want. Here, you are
testing the null hypothesis "the means of all groups are equal"
versus "the means of at least two groups are different", where, in
your case, each "group" would be the program run with different
parameters (different grid size, etc.). In order to perform this
test, you have to build an F-statistic in the following way (taken
from the EViews 3.1 help file):

Denote the i-th observation in group g as Xgi, where i=1,2,...,Ng for
groups g=1,2,...,G. The between and within sums of squares are defined
as

SSbetween=  G
           ---
           \   Ng*(Yg-Y)^2
           /
           ---
           g=1

SSwithin=   G   Ng
           --- ---
           \   \     (Xgi-Yg)^2
           /   /
           --- ---
           g=1 i=1

Where Yg is the sample mean of group g, Y is the overall sample mean
(that is, using all observations from all groups), G is the number of
groups, and Ng is the sample size of group g.

The F-statistic is as follows

F= SSb/(G-1)
   ---------
   SSw/(N-G)

where N is the total number of observations. This statistic is
distributed as an F with G-1 numerator degrees of freedom and N-G
denominator degrees of freedom. And now you're all set. Just look up
the area to the right of F in a table; or, if you don't have a book,
just plug the degrees of freedom and F into the following page:

F-table
http://davidmlane.com/hyperstat/F_table.html

Click on Compute and you'll see the p-value for the test. So, a
p-value of .05 or lower means you can be 95% confident that the null
hypothesis can be REJECTED. A higher p-value means you cannot reject
the null hypothesis that the means across different groups are equal.

Needless to say, you don't need to run the simulations again in order
to perform the ANOVA analysis. Using the method I described above, by
now you should have calculated the minimum sample size for each group
and run that many simulations, so you should already have a large
(and efficiently chosen) number of observations.
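The between/within sums of squares and the F-statistic above
translate directly into Java. This sketch computes F only; you would
still look up the p-value in an F(G-1, N-G) table. The sample data
are invented iteration counts for three parameter settings.

```java
public class OneWayAnova {
    // F-statistic for the null "all group means are equal", using the
    // between/within sums of squares defined above.
    public static double fStatistic(double[][] groups) {
        int G = groups.length;          // number of groups
        int N = 0;                      // total number of observations
        double grandSum = 0.0;
        for (double[] g : groups) {
            N += g.length;
            for (double v : g) grandSum += v;
        }
        double grandMean = grandSum / N;  // overall sample mean Y

        double ssBetween = 0.0, ssWithin = 0.0;
        for (double[] g : groups) {
            double gSum = 0.0;
            for (double v : g) gSum += v;
            double gMean = gSum / g.length;  // group mean Yg
            ssBetween += g.length * (gMean - grandMean) * (gMean - grandMean);
            for (double v : g) ssWithin += (v - gMean) * (v - gMean);
        }
        return (ssBetween / (G - 1)) / (ssWithin / (N - G));
    }

    public static void main(String[] args) {
        double[][] groups = {
            {10, 12, 11, 13},   // e.g. 5 robots
            {14, 15, 13, 16},   // e.g. 3 robots
            {11, 10, 12, 11}    // e.g. smaller grid
        };
        System.out.println("F = " + fStatistic(groups));
        // Compare F against an F(G-1, N-G) table to get the p-value.
    }
}
```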


I also found two web pages that can calculate the F-statistic for you:

Lieberman Research Worldwide
http://www.lrwonline.com/stattest/means.asp

Although this one is simpler in that it only asks for the mean and
variance of the groups (you don't have to enter all the data), it
only lets you compare two groups.

The following site is great: it's a statistics Java applet. You can
enter data and it will calculate descriptive statistics (mean,
variance, etc.), graph the data, and compute many statistical
inference functions, including the ANOVA analysis.

WebStat
http://www.stat.sc.edu/webstat/

To learn more about ANOVA analysis check the following site:

http://www.cityu.edu.hk/ma/staff/ychon/3518/wk9-ANOVA.pdf
 
Finally, as I don't know Java programming, I can't help you automate
the calculation of these statistics. However, I was able to locate
some pages that provide Java source code for statistics applications;
I'm sure you can adapt them to your needs. The first site generates
random numbers from a distribution and then, among other things,
calculates their mean and variance. I think this may apply to your
case, in that the random numbers for this applet would be equivalent
to the "number of iterations taken" variable of your process.

http://www.math.uah.edu/psol/objects/experiments/SampleMeanExperiment.html

http://nimitz.mcs.kent.edu/~blewis/stat/anova.html


Well, I hope this explanation has proved useful to you. I know this
is a difficult subject, even more so when you have to write
mathematical symbols in a text editor... I hope I've been clear
enough. In any case, for an in-depth explanation of statistical
methods, I suggest you check the book I quoted above, "Statistical
Inference". It's available at Amazon.com.

Good luck with your project!

elmarto-ga
gooseman-ga rated this answer: 5 out of 5 stars
Thanks

Comments  
Subject: Re: Simulation convergence and efficiency/statistics and when to stop
From: jeanluis-ga on 19 Jun 2002 15:20 PDT
 
This is an interesting question; unfortunately I am totally unable to
answer it. However, in college I studied many of these topics, and I
remember I had a really good book, with many problems to solve at the
end of each chapter. I know this is not what you are really looking
for, but it may help you get an understanding of what to do, and what
the stuff really means. The book is called:
Introduction to Probability and Statistics - Principles and
Applications for Engineering and the Computing Sciences
J. S. Milton, Jesse C. Arnold
Irwin McGraw-Hill
Here is a link to the Barnes & Noble page for the book:
http://btobshop.barnesandnoble.com/textbooks/booksearch/isbninquiry.asp?userid=68XXKH3UU4&mscssid=0FJX64EE5UQ29J6M650L36RTBJCT2S03&btob=Y&isbn=0070427887

Good luck
--jld
