Hi, san007-ga:
The context suggested by statistics/forecasting and error criteria
for trend curves covers a lot of different applications, so I'm
assuming that you are looking for general principles for choosing one
criterion over another.
The trend curve may be fitted to a data set in any of a variety of
ways. There may, for example, be a formula with one or more
parameters, chosen to minimize error as measured in one of the ways
named.
Alternatively the "trend curve" might be a moving average or
"filtering" process applied to the data. Again the error criteria
might be applied to argue that one process or another is best for a
particular purpose.
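As a small sketch of both approaches, here is some Python (using
numpy; the data, the linear form, and the 5-point window are invented
for illustration):

    import numpy as np

    # Invented data: an upward trend plus noise
    t = np.arange(24.0)
    rng = np.random.default_rng(0)
    y = 3.0 + 0.5 * t + rng.normal(0.0, 1.0, size=t.size)

    # (1) Parametric trend curve: a line a + b*t fitted by least squares
    coeffs = np.polyfit(t, y, 1)          # returns [slope, intercept]
    trend_line = np.polyval(coeffs, t)

    # (2) "Filtering" trend curve: a centered 5-point moving average
    #     (edge values are only approximate with mode="same")
    window = np.ones(5) / 5.0
    trend_ma = np.convolve(y, window, mode="same")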
There are then really two sorts of circumstances to keep our eyes on.
The first has to do with characteristics of the data (or perhaps with
the limitations we face in analyzing the data).
The second set of circumstances concerns the purposes to which the
trend curve will be put.
The mean square error is probably the most often used criterion,
though this may be because of the simplicity with which data can be
fitted to curves (often using a linear least squares package). The
history of this approach goes back to Gauss, who found in conducting
landmark surveys in Germany that repeated measurements of the same
quantity produced errors that could be modelled with a normal
distribution. For this case Gauss showed that an "optimal" way of
fitting the data is to minimize the mean square error.
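To connect this with Gauss's setting of repeated measurements of one
quantity: the constant that minimizes the mean square error is
exactly the sample mean. A minimal check in Python (measurement
values invented):

    import numpy as np

    measurements = np.array([10.1, 9.8, 10.3, 10.0, 9.9])

    def mse(c):
        # mean square error of the constant estimate c
        return np.mean((measurements - c) ** 2)

    # A grid search over candidate estimates lands on the sample mean
    grid = np.linspace(9.0, 11.0, 2001)
    best = grid[np.argmin([mse(c) for c in grid])]
    print(best, measurements.mean())   # both are about 10.02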
However in the financial world one often has to work with data
containing noise and observations that cannot be repeated. The mean
(absolute) error criterion is more "robust" than the mean square error
in these circumstances. A single observation containing a large
error can have a significant impact on the mean square error fit,
while a fit under the mean (absolute) error criterion is more or less
"immune" to such effects.
So if the data were known to contain a few very "noisy" observations
(data points with undetermined but possibly large errors), then a mean
(absolute) error criterion is to be preferred to a mean square error.
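A quick way to see the robustness is to fit a single constant to data
with one wild point (Python; numbers invented). Minimizing mean
square error over a constant gives the sample mean, while minimizing
mean absolute error gives the median:

    import numpy as np

    data = np.array([5.0, 5.2, 4.9, 5.1, 5.0, 50.0])  # last point is "noisy"

    print(np.mean(data))    # 12.53... -- the MSE fit, dragged off by one point
    print(np.median(data))  # 5.05     -- the MAE fit, essentially unaffected

The same contrast carries over to fitting lines or other parametric
trend curves: least squares chases the outlier, while a
least-absolute-deviations fit largely ignores it.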
Another consideration that is based on the data concerns the relative
error (mean absolute percentage error) versus the absolute error (mean
absolute error or mean square error). If the data contains small
values, close or perhaps even equal to zero, the relative error
criterion would be tricky or impossible to apply. A relative error
criterion works with data that is bounded away from zero. If the data
is not like this, then an absolute error criterion should be used.
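A short illustration of why (Python; actuals and forecasts invented,
with one actual equal to zero and one close to it):

    import numpy as np

    actual   = np.array([100.0, 0.5, 0.0, 80.0])
    forecast = np.array([ 90.0, 1.0, 1.0, 85.0])

    # Mean absolute error is well behaved: (10 + 0.5 + 1 + 5) / 4 = 4.125
    mae = np.mean(np.abs(actual - forecast))

    # The relative errors divide by the actuals: the 0.5 inflates its
    # term to 100%, and the 0.0 makes its term infinite
    with np.errstate(divide="ignore"):
        ape = np.abs(actual - forecast) / np.abs(actual)
    print(ape)   # [0.1, 1.0, inf, 0.0625]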
On the other hand one should also consider the purpose to which "trend
curves" will be put. Especially one needs to ask the "consequences"
of errors in forecasting. Is it better is use a relative error
criterion or an absolute error criterion? If the data set contains
values of widely different magnitudes, then it may be important to
have relatively good fitting to both high and low values. An example
might be in estimating a water budget for a regional district, where
monthly rainfalls vary by an order of magnitude during the annual
cycle. In financial forecasting, however, an absolute error
criterion might be more appropriate, as a dollar of loss or profit
during low sales activity counts the same as a dollar of loss or
profit during high sales activity.
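To make the contrast concrete (Python; rainfall-like numbers
invented): a miss of 5 units is 50% of a low month but only 5% of a
high month, so a relative criterion weights the low-month miss ten
times as heavily, while an absolute criterion treats the two misses
identically:

    import numpy as np

    actual   = np.array([10.0, 100.0])   # low month, high month
    forecast = np.array([15.0, 105.0])   # each off by 5 units

    print(np.abs(actual - forecast))           # [5. 5.]    -- identical
    print(np.abs(actual - forecast) / actual)  # [0.5 0.05] -- 10x apart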
Also, there may be "catastrophic" consequences for errors in
estimation that are larger than a certain tolerance. As noted before,
the mean square error criterion gives a larger influence to single
observations that lie significantly "outside" the range of the
others than the mean (absolute) error criterion does. If the impact
of those larger
deviations threatens disproportionate consequences, it might be good
to use a mean square error criterion, if only for the sake of
incorporating some of that volatility (increased variance) into the
model, for planning purposes.
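One way to quantify that extra influence (Python; residuals
invented): with residuals of 1, 1, 1 and 10, the large one accounts
for roughly 97% of the total squared error but only about 77% of the
total absolute error, so a fit under mean square error works much
harder to shrink it:

    import numpy as np

    residuals = np.array([1.0, 1.0, 1.0, 10.0])

    print(residuals[-1]**2 / np.sum(residuals**2))         # ~0.97
    print(abs(residuals[-1]) / np.sum(np.abs(residuals)))  # ~0.77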
regards, mathtalk-ga