Hello Davious,
I am glad to have the opportunity to help you again. Thank you for your note.
Again, the answer varies somewhat based on the operating system, but
the basic approach is similar to that for CPU limits. The similar
part is the use of a hard and a soft limit; the differences are in
how you detect the problem and how you must handle it.
First, RLimitMEM should set the value of RLIMIT_DATA, the total
size of the process data segment (initialized data, uninitialized
data, and heap). Per
http://www.die.net/doc/linux/man/man2/getrusage.2.html
it affects calls to brk() and sbrk(), described at
http://www.die.net/doc/linux/man/man2/brk.2.html
Since brk() (and by implication sbrk()) is a system call, the failure
is visible directly: when the limit would be exceeded, brk() returns
-1 and the system error number errno is set to ENOMEM. The operating
system simply refuses to give you the extra memory. What happens next
varies based on the way the application and/or language is
implemented. For example, a CGI implemented in Perl will likely die
with an "Out of Memory" error.
Note that calls to brk() (or sbrk()) are usually made by a run-time
library routine, not directly by your application. For example,
malloc in C or new in C++ will generally allocate a large region
using brk() and then manage that memory with a linked list of
allocated / deallocated regions. When that area runs out, it calls
brk() again to get more. If the brk() call fails, you will get some
error indication, usually a NULL pointer and/or the ENOMEM errno
value. In the Perl example, this is what triggers the "Out of Memory"
condition.
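As a minimal sketch of what that failure looks like from C (the 1 MB
chunk size is arbitrary):

  /* Sketch: allocate until the data size limit is hit.  Under
     RLIMIT_DATA, malloc() eventually returns NULL because the
     underlying brk()/sbrk() call failed with ENOMEM. */
  #include <errno.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  int main(void)
  {
      unsigned long mb = 0;
      for (;;) {
          void *p = malloc(1024 * 1024);    /* one megabyte at a time */
          if (p == NULL) {
              fprintf(stderr, "malloc failed after %lu MB: %s\n",
                      mb, strerror(errno)); /* typically ENOMEM */
              return 1;
          }
          memset(p, 1, 1024 * 1024);        /* touch the pages */
          mb++;
      }
  }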
Again, just like the CPU soft limit, the memory soft limit can be
increased by a call to setrlimit(2), also described at
http://www.die.net/doc/linux/man/man2/getrusage.2.html
as long as the new soft limit is no greater than the hard limit. The
root user can increase the process soft / hard limits beyond the hard
limit setting; other users cannot.
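A minimal C sketch of that call (the function name is mine, and it
assumes RLIMIT_DATA is the limit in play):

  /* Sketch: raise the soft data size limit as far as the hard limit
     allows.  An unprivileged process may move its soft limit anywhere
     at or below the hard limit. */
  #include <sys/resource.h>

  int raise_soft_data_limit(void)
  {
      struct rlimit rl;

      if (getrlimit(RLIMIT_DATA, &rl) != 0)
          return -1;
      rl.rlim_cur = rl.rlim_max;           /* soft limit = hard limit */
      return setrlimit(RLIMIT_DATA, &rl);  /* 0 on success, -1 on error */
  }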
Both the hard and soft limits are on a per-process basis. For that
reason, the behavior you noted (multiple processes whose total
exceeds the hard limit while each individual process stays under the
soft limit) is expected. If you must manage the total amount of
virtual memory, the worst case is the per-process memory limit times
the maximum number of processes allowed. In Apache, the process limit
is set using RLimitNPROC, but note the caution in the documentation:
the CGI user id must be different from that of the web server. If
they are the same, you will limit the number of processes used by
Apache AND the CGIs.
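As a sketch, with invented values, the relevant httpd.conf lines
might look like:

  # Sketch with invented values: each CGI process may grow to 32 MB
  # (soft) / 48 MB (hard), and at most 10 (soft) / 20 (hard) extra
  # processes may run.  Worst case memory use is roughly the soft
  # RLimitMEM value times RLimitNPROC: 32 MB x 10 = 320 MB.
  RLimitMEM   33554432 50331648
  RLimitNPROC 10 20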
If any part of this answer is unclear or if you need some related
guidance on how to handle the out of memory condition (or avoid it to
begin with), please use a clarification request.
--Maniac
Clarification of Answer by maniac-ga on 26 Feb 2004 04:55 PST
Hello Davious,
For your situation, a simple approach such as
- setting the hard and soft limits the same
OR
- using the operating system defaults
is perhaps the best solution.
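In httpd.conf terms (the value here is invented), setting the hard
and soft limits the same looks like this, and it means a script
cannot raise its own limit at run time:

  # Sketch (value invented): soft and hard limits both 32 MB.
  RLimitMEM 33554432 33554432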
It is pretty clear why Apache supports limits on process CPU
(RLimitCPU), process memory (RLimitMEM), and the number of processes
(RLimitNPROC); this gives the system administrator some tools to
manage scarce resources. I say scarce resources because, if your
capacity were far greater than the expected load, you would not need
to set these limits.
As to why the limit is specified as a soft and a hard limit, I
believe that is simply because Apache was developed to run on Unix
systems. Unix allows you to specify a soft and a hard limit, so it is
natural that the configuration file supports that. The origin of hard
and soft limits in Unix goes back at least to the 4.2 BSD version
developed in the early 1980s. At that time, most programs were
written in C (or as simple shell scripts), so the programs could take
action based on the hard and soft limits.
You did not ask this question, but I'll provide a short answer to
How do I avoid "Out of Memory" in Perl?
There are several ways to do that. The most basic is to avoid use of
constructs such as:
@contents = <FH>;
where you read an entire file into memory. For example:
http://forums.devshed.com/t50374/s.html
describes one user's problem with searching a 555 megabyte file using
a simple Perl script. A quick search using phrases such as
perl "Out of Memory"
will provide other examples including this one
http://www.mail-archive.com/perl5-changes@perl.org/msg09075.html
where a recent change to Perl 5 was made to avoid allocating memory
when printing the "Out of Memory" error message (so Perl won't die
silently). I include this latter example since it illustrates one of
the difficulties with responding to these types of problems.
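Whatever the language, the cure is the same: process the file in
bounded pieces rather than loading it whole. As a sketch of that idea
in C terms (the buffer size and function name are mine):

  /* Sketch: scan a large file with a fixed size buffer instead of
     reading the whole file into memory at once. */
  #include <stdio.h>
  #include <string.h>

  int count_matches(const char *path, const char *word)
  {
      char line[4096];                     /* bounded memory use */
      int matches = 0;
      FILE *fp = fopen(path, "r");

      if (fp == NULL)
          return -1;
      while (fgets(line, sizeof line, fp) != NULL)
          if (strstr(line, word) != NULL)
              matches++;
      fclose(fp);
      return matches;
  }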
I'm glad to help. Let me know if you need any further information on this question.
--Maniac
Request for Answer Clarification by davious-ga on 26 Feb 2004 10:28 PST
Ok, so let me see if I've got this right. I'm trying to understand
why Apache lets you set both values and in what situations it would
apply.
Functionally speaking, the hard limit isn't really that useful for
Perl CGI scripts, because Perl dies with "Out of Memory!" when it
hits the soft limit. But if I had a CGI script written in C, I might
be able to detect when malloc (or whatever) fails and then call
setrlimit to increase the soft limit before trying again?
Or with Perl CGI, even running as an unprivileged process, I could
call ulimit to increase my soft limit BEFORE I hit it to give myself
some extra space, so long as the new value is equal to or below the
hard limit.
That makes sense to me, is it right? : )
Clarification of Answer by maniac-ga on 26 Feb 2004 15:31 PST
Hello Davious,
You got it right in the first case and are very close in the second.
The C program can detect the out of memory condition and adjust the
limit before trying again.
The Perl program can increase the limit (if it can call setrlimit)
before it runs out of memory.
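For the C case, a minimal sketch of that detect-and-retry pattern
(the wrapper name is mine; it assumes the soft limit starts below the
hard limit):

  /* Sketch: when malloc() fails with ENOMEM, raise the soft
     RLIMIT_DATA toward the hard limit once and try again. */
  #include <errno.h>
  #include <stdlib.h>
  #include <sys/resource.h>

  void *malloc_with_retry(size_t size)
  {
      void *p = malloc(size);

      if (p == NULL && errno == ENOMEM) {
          struct rlimit rl;
          if (getrlimit(RLIMIT_DATA, &rl) == 0) {
              rl.rlim_cur = rl.rlim_max;   /* soft limit up to hard */
              if (setrlimit(RLIMIT_DATA, &rl) == 0)
                  p = malloc(size);        /* second and last attempt */
          }
      }
      return p;
  }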
If you are trying to use ulimit (the shell command), that will
increase the limit of the shell process and its children, not the
parent Perl process. Let me illustrate:
Apache -> Perl CGI "P" -> shell script (that includes ulimit) "S" -> program "X"
the new limit will apply to "S" and "X" and not "P".
With the right tools, you can call setrlimit from Perl (and increase
"P"'s limit). For example,
http://rpmfind.net/linux/rpm2html/search.php?query=perl-BSD-Resource
is a reference to RPM (RedHat Package Manager) files, source and
otherwise, for a Perl module that implements calls to getrlimit and
setrlimit.
Of course, the hard part is figuring out how much memory you are
using before you hit the limit. On Linux, you should be able to read
/proc/self/statm and go by the first number in that file. The format
is described in
/usr/src/linux/Documentation/filesystems/proc.txt
(if you have kernel sources installed), or at
http://www.die.net/doc/linux/man/man5/proc.5.html
in perhaps a more readable form.
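A minimal C sketch of that check (the function name is mine; the
first statm field is a count of pages):

  /* Sketch: report this process's total size in bytes from the first
     field of /proc/self/statm. */
  #include <stdio.h>
  #include <unistd.h>

  long process_size_bytes(void)
  {
      long pages = -1;
      FILE *fp = fopen("/proc/self/statm", "r");

      if (fp == NULL)
          return -1;
      if (fscanf(fp, "%ld", &pages) != 1)
          pages = -1;
      fclose(fp);
      return pages < 0 ? -1 : pages * sysconf(_SC_PAGESIZE);
  }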
Please let me know if any of this is unclear or you need more
information on this topic.
--Maniac