Clarification of Question by christopher_bell-ga on 06 Aug 2003 05:51 PDT
Thanks, I've got SiSoft Sandra. It is fine for telling me what the
attributes of my system are, but it provides no information about how
it is actually performing. I'll try to clarify what I'm after; sorry
if this becomes a bit verbose:
I write engineering software for a living, and part of the problem is
to display Finite Element data. For typical post-processing of this
type of data an image may contain a million or more "facets"
(typically 3- or 4-sided polygons), lines, text, and other graphical
objects. These may be filled, shaded, use transparency, etc. A
typical transient analysis may contain 50 or more "states", showing
analysis results at successive times, and each state contains
different data.
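To give a concrete (if much simplified) picture, one state of one
model might look something like this - the names are illustrative,
not my actual code:

    /* Much-simplified sketch of the data being displayed; all names
       are illustrative, not my real structures.                    */
    struct Facet {
        int nodes[4];     /* indices of the 3 or 4 corner nodes     */
        int nNodes;       /* 3 = triangle, 4 = quadrilateral        */
    };

    struct State {
        float  time;      /* solution time for this state           */
        float *coords;    /* x,y,z per node (mesh may deform)       */
        float *results;   /* one result value per node, contoured   */
    };

    struct Model {
        int    nNodes, nFacets, nStates;
        Facet *facets;    /* topology, shared by all states         */
        State *states;    /* 50+ of these in a transient analysis   */
    };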
Users want to animate their results, and also to rotate and zoom them,
and obviously they want the fastest possible response, ideally in
real-time. So as you can imagine I am throwing huge amounts of data at
the graphics card. (It is not unusual for the dataset that I am
displaying to be 20+GB on disk, although only a proportion of this
ends up getting displayed.)
This is a slightly different problem from the one faced by "gaming"
applications, where
typically the background scenery will remain constant, and only the
foreground objects will move. In these cases it is practical to cache
(static) background geometry in card memory, and only to retransmit
the things that change between frames.
However in my case the volume of "different" data being sent for each
frame is way beyond the storage capacity of any card, so everything
has to go down the graphics pipeline for every frame.
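In OpenGL terms the contrast is roughly the following (a sketch only,
using the display-list caching a game might use, the structs from the
earlier sketch, and some hypothetical helper names):

    #include <GL/gl.h>

    void drawBackgroundGeometry(void);  /* hypothetical helpers      */
    void drawMovingObjects(void);
    void setContourColour(float value);

    /* Game-style: compile the static scenery into a display list
       once, so it can live in card/driver memory...                 */
    GLuint scenery;

    void cacheScenery(void)
    {
        scenery = glGenLists(1);
        glNewList(scenery, GL_COMPILE);
        drawBackgroundGeometry();
        glEndList();
    }

    /* ...after which each frame costs very little bus traffic.      */
    void drawFrameGame(void)
    {
        glCallList(scenery);            /* cached geometry           */
        drawMovingObjects();            /* small per-frame delta     */
    }

    /* My case: each frame shows a different state, so every facet
       has to be pushed down the pipeline again.                     */
    void drawFrameFE(const Model *m, const State *s)
    {
        for (int i = 0; i < m->nFacets; i++) {
            const Facet *f = &m->facets[i];
            glBegin(f->nNodes == 3 ? GL_TRIANGLES : GL_QUADS);
            for (int j = 0; j < f->nNodes; j++) {
                int n = f->nodes[j];
                setContourColour(s->results[n]);
                glVertex3fv(&s->coords[3 * n]);
            }
            glEnd();
        }
    }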
The actual rendering rates I am achieving are acceptable, but they are
only a fraction of the theoretical capacity of the graphics card, and
obviously I want to do better. However, at present I can only guess at
where a bottleneck is occurring, try changing something, and time the
outcome with a stopwatch. (I feel a bit like a mediaeval "doctor"
trying things out on the patient, but without any knowledge of what is
going on inside!)
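For the record, the "stopwatch" is currently little more than this (a
sketch, Unix/Linux side, reusing drawFrameFE from above; note that
glFinish() drains the pipeline, so the split between "submit" and
"drain" is only approximate and serialises the very CPU/GPU overlap I
am trying to measure):

    #include <stdio.h>
    #include <sys/time.h>     /* gettimeofday()                      */
    #include <GL/gl.h>

    static double seconds(void)
    {
        struct timeval tv;
        gettimeofday(&tv, 0);
        return tv.tv_sec + 1e-6 * tv.tv_usec;
    }

    /* t1-t0: roughly the CPU cost of fetching, converting and
       submitting the data; t2-t1: roughly the time spent waiting
       for the card to catch up.                                     */
    void timeOneFrame(const Model *m, const State *s)
    {
        double t0 = seconds();
        drawFrameFE(m, s);              /* from the earlier sketch   */
        double t1 = seconds();
        glFinish();
        double t2 = seconds();
        printf("submit %.3f s, drain %.3f s\n", t1 - t0, t2 - t1);
    }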
What I would like to be able to do is to monitor the performance of
the various parts of the system I am using, which will eliminate some
of the guesswork. For example, I can see (from Task Manager in Windows,
or "top" in Linux) that my processor is working flat out ... but I
don't know whether that is because:
- It has maxed out my memory bandwidth getting data from main memory
- It is struggling with converting the data into OpenGL graphics calls
- Or something else
I have assumed (because the processor is at 100%) that I am
under-utilising the bandwidth of the pipeline to the graphics card -
but I may be wrong, and it would be nice to know just how hard that is
working.
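The crude eliminations I can try at this level are things like the
following (again only a sketch, built on the helpers above):

    /* Experiment 1: shrink the viewport so almost nothing is
       rasterised. If the frame time barely changes, fill rate is
       not the bottleneck; the cost is on the CPU/geometry/bus side. */
    void fillRateTest(const Model *m, const State *s)
    {
        glViewport(0, 0, 1, 1);
        timeOneFrame(m, s);
    }

    /* Experiment 2: draw the same state repeatedly. If this is much
       faster than stepping through states, the cost is in fetching
       and converting fresh data, not in the OpenGL submission.      */
    void staticDataTest(const Model *m)
    {
        for (int i = 0; i < 50; i++)
            timeOneFrame(m, &m->states[0]);
    }

But this is still guesswork by elimination rather than measurement,
which is why I am after a proper monitoring tool.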
So I need some sort of tool that can tell me what is going on inside
my system. Ideally this would give a real-time display but, as I said
in my original question, I'm equally happy to link in something like a
software library that "piggy-backs" on my application and reports
usage at the end of a run.
I feel sure that this must exist, otherwise how do hardware
manufacturers develop, monitor and benchmark their products?
If hardware and software specs help, I'm running:
HP xw6000 PC, twin 2.8GHz Pentium 4, 2.3GB RAM, NVIDIA Quadro4 980
XGL card.
Operating system is Windows 2000, or Red Hat 8.0 Linux.
I'm using a mixture of Fortran, C and C++, with graphics rendered
through OpenGL.
If there are some hardware specific solutions that are not PC-based I
can also use the following Unix boxes:
HP PA-RISC and IA64 workstations (HP-UX 10.xx and 11.xx)
SGI, various models (IRIX 6.x)
Compaq (Alpha) running Tru64 (OSF) 4 and 5
If there are graphics card manufacturer specific solutions I can
probably get the money to install a different card.