Q: How to monitor graphics pipeline performance (No Answer, 2 Comments)
Question  
Subject: How to monitor graphics pipeline performance
Category: Computers > Graphics
Asked by: christopher_bell-ga
List Price: $25.00
Posted: 05 Aug 2003 09:53 PDT
Expires: 04 Sep 2003 09:53 PDT
Question ID: 240302
I am trying to optimise the performance of a piece of graphics
software. How can I monitor the performance of my system (processor
usage, graphics card usage, the graphics pipeline feeding it, etc) so
that I can see where bottlenecks are occurring?

This needs to be a standalone package, or something I can link into my
software, that either gives me (near) real-time status or which
produces a report after a run a bit like that created by a profiler.

I'd be happy with any solution that works under Win32 or Linux.

Request for Question Clarification by techtor-ga on 06 Aug 2003 03:24 PDT
I wonder what sort of solution you had in mind when you asked this
question. Are you thinking of benchmarking software like SiSoft
Sandra, or something hardware-based?

Clarification of Question by christopher_bell-ga on 06 Aug 2003 05:51 PDT
Thanks, I've got SiSandra.  It is fine for telling me what the
attributes of my system are, but it provides no info about how it is
actually performing. I'll try to clarify what I'm after, sorry if this
becomes a bit verbose:

I write engineering software for a living, and part of the problem is
to display Finite Element data. For typical post-processing of this
type of data an image may contain a million or more "facets"
(typically 3 or 4 sided polygons), lines, text, and other graphical
objects. These may be filled, shaded, use transparency, etc.  A
typical transient analysis may contain 50 or more "states", showing
analysis results at successive times, and each state contains
different data.

Users want to animate their results, and also to rotate and zoom them,
and obviously they want the fastest possible response, ideally in
real-time. So as you can imagine I am throwing huge amounts of data at
the graphics card.  (It is not unusual for the dataset that I am
displaying to be 20+GB on disk, although only a proportion of this
ends up getting displayed.)

This is a slightly different problem to "gaming" applications where
typically the background scenery will remain constant, and only the
foreground objects will move.  In these cases it is practical to cache
(static) background geometry in card memory, and only to retransmit
the things that change between frames.

However in my case the volume of "different" data being sent for each
frame is way beyond the storage capacity of any card, so everything
has to go down the graphics pipeline for every frame.

The actual rendering rates I am achieving are acceptable, but they are
only a fraction of the theoretical capacity of the graphics card, and
obviously I want to do better. However at present I can only guess at
where a bottleneck is occurring, try changing something, and time the
outcome with a stopwatch. (I feel a bit like a mediaeval "doctor"
trying things out on the patient, but without any knowledge of what is
going on inside!)

What I would like to be able to do is to monitor the performance of
the various parts of the system I am using, which will eliminate some
of the guesswork. For example I can see (from Task Manager in Windows,
or "top" in Linux) that my processor is working flat out ... but I
don't know whether that is because:

- It has maxed out my memory bandwidth getting data from main memory
- It is struggling with converting the data into OpenGL graphics calls
- Or something else

I have assumed (because the processor is at 100%) that I am
under-utilising the bandwidth of the pipeline to the graphics card -
but I may be wrong, and it would be nice to know just how hard that is
working.


So I need some sort of tool that can tell me what is going on inside
my system.  Ideally this would give a real-time display but, as I said
in my original question, I'm equally happy to link in something like a
software library that "piggy-backs" on my application and reports
usage at the end of a run.

I feel sure that this must exist, otherwise how do hardware
manufacturers develop, monitor and benchmark their products?


If hardware and software specs help, I'm running:

HP xw6000 PC, twin 2.8GHz Pentium 4, 2.3GB Ram, NVidia Quadro 4 980
XGL card.

Operating system is Windows 2000, or Redhat 8.0 Linux.

I'm using a mixture of Fortran, C and C++ programming; with graphics
rendered through OpenGL.


If there are some hardware specific solutions that are not PC-based I
can also use the following Unix boxes:

HP PA-RISC and IA64 workstations (HP-UX 10.xx and 11.xx)
SGI various (IRIX 6.x)
Compaq (Alpha) running Tru64 (OSF) 4 and 5


If there are graphics card manufacturer specific solutions I can
probably get the money to install a different card.

Request for Question Clarification by alienintelligence-ga on 06 Aug 2003 13:02 PDT
Hi christopher_bell,

I was curious: couldn't the software you are using be driven by a
script that takes a time-dump at the start and at the end of a
process? A little math afterwards and you can benchmark things.

Honestly, your daily gfx workload seems like the best test of your
system's ultimate throughput. But as you said, you would like to
monitor each point of possible bottlenecking, and I'm not sure that's
feasible in a way that doesn't add any extra latency to the system
being tested.

The next best method would be to use a timed graphical output that is
similar to your work, across a range of gfx systems with a mixture of
components. Rearranging the pieces and retesting should show you
which ones are the weak points.

Sorry I don't have an answer at this point.

-AI

Clarification of Question by christopher_bell-ga on 07 Aug 2003 02:34 PDT
What I think you are suggesting is that I send quanta of apples,
bananas, coconuts, etc to the system, and measure how long each one
takes to render. Then use this info to calibrate what I'm likely to
get from a "real" application, adjusting it to use the best method
(fruit).

This doesn't really get me any further, because it doesn't tell me WHY
apples render faster than pears, and what I need to do to speed up the
latter.

Obviously I can form a hypothesis, change the code, and see what
happens; and that is what I've done - which adds a whole new meaning
to the term "computer science"! For example on some cards it turns out
to be faster to render a flat shaded polygon by giving separate (but
identical) outward normal vectors at each vertex, rather than giving a
single normal and telling it to treat the polygon as flat and assume
that the rest are the same. This is totally counter-intuitive since
sending more data down the graphics pipeline gives a faster end
result.
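To make that concrete, the two variants look roughly like this (a
sketch in C with immediate-mode OpenGL; the facet arrays are
placeholders for real FE data):

    #include <GL/gl.h>

    /* Variant A: one normal per facet, GL_FLAT shading -- intuitively
       the "cheap" path, since only a quarter of the normal data is sent. */
    static void facet_single_normal(const float v[4][3], const float n[3])
    {
        glShadeModel(GL_FLAT);
        glBegin(GL_QUADS);
        glNormal3fv(n);                  /* one normal for the whole facet */
        glVertex3fv(v[0]); glVertex3fv(v[1]);
        glVertex3fv(v[2]); glVertex3fv(v[3]);
        glEnd();
    }

    /* Variant B: the same normal re-sent at every vertex -- more data
       down the pipeline, yet faster on some cards. */
    static void facet_repeated_normal(const float v[4][3], const float n[3])
    {
        glBegin(GL_QUADS);
        glNormal3fv(n); glVertex3fv(v[0]);
        glNormal3fv(n); glVertex3fv(v[1]);
        glNormal3fv(n); glVertex3fv(v[2]);
        glNormal3fv(n); glVertex3fv(v[3]);
        glEnd();
    }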


As for adding latency due to the measuring process, I don't think this
matters so long as the timings scale in the unmeasured production
version: it's the relative effects of doing X or Y or Z I'm after.

I suppose it all boils down to the old saw "when you know why you know
how".

It seems that I'm asking the impossible, or maybe what I need is
proprietary and hidden somewhere in the depths of Intel or AMD or
Nvidia.
Answer  
There is no answer at this time.

Comments  
Subject: Re: How to monitor graphics pipeline performance
From: mike260-ga on 16 Nov 2004 10:07 PST
 
nVidia do have such a tool (NVPerfHUD, available free on their
website), and it looks pretty cool too, but it's for D3D apps only.
You might be able to make use of it by recompiling your app to use an
OpenGL->D3D emulation library, but I wouldn't get your hopes up too
high.

Failing that, your first task is to determine if your app is GPU or CPU bound:
 - In your display-settings, disable VSync. Make sure it's really
disabled (rotate a scene around rapidly in your app and you should see
horizontal tearing across the screen).
 - Time how long your end-of-frame SwapBuffers() call takes to
execute, using QueryPerformanceCounter (or similar).
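
For example, on Win32 that measurement boils down to something like
this (a minimal sketch; hook it in wherever your frame actually ends):

    #include <windows.h>
    #include <stdio.h>

    /* Time the end-of-frame swap. With VSync disabled, a swap that
       consistently blocks for more than ~1ms means the driver's command
       queue is full, i.e. the GPU is the limiting factor. */
    static void timed_swap(HDC hdc)
    {
        LARGE_INTEGER freq, t0, t1;
        QueryPerformanceFrequency(&freq);

        QueryPerformanceCounter(&t0);
        SwapBuffers(hdc);                /* the call being measured */
        QueryPerformanceCounter(&t1);

        printf("SwapBuffers: %.3f ms\n",
               1000.0 * (double)(t1.QuadPart - t0.QuadPart)
                      / (double)freq.QuadPart);
    }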

If it's consistently taking longer than a ms or so then the GPU is the
limiting factor. However, from what you said it sounds like the CPU is
the bottleneck. This being the case, check the following:

 - Does your app issue an excessive number of OpenGL function-calls?
These are relatively expensive, and can seriously add up if (eg)
you're pushing a lot of geometry through immediate-mode calls (ie.
glBegin/glEnd). Try to submit your geometry using as few OpenGL calls
as possible (see the first sketch after this list).

 - Are you doing anything that would serialise the GPU with the CPU?
The most common offender here is reading the framebuffer or zbuffer
into system-memory; this forces the entire pipeline to flush before
the call will complete, which is a huge performance killer - GPU
pipelines are very, very deep nowadays, often containing a full
frame's worth of data or more (see the second sketch after this list).

 - If all else fails, start timing individual routines in your code.
This will mostly measure CPU time (since most OpenGL calls are unlikely
to block the CPU to wait for the GPU).
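
To illustrate the first point, here's the same geometry submitted both
ways (a sketch; the flat array layout is an assumption, adapt to your
own data structures):

    #include <GL/gl.h>

    /* Immediate mode: two GL calls per vertex -- with a million facets
       per frame the call overhead alone can swamp the CPU. */
    static void draw_immediate(const float *verts, const float *norms,
                               int nverts)
    {
        int i;
        glBegin(GL_QUADS);
        for (i = 0; i < nverts; ++i) {
            glNormal3fv(norms + 3 * i);
            glVertex3fv(verts + 3 * i);
        }
        glEnd();
    }

    /* Vertex arrays: the same geometry in a handful of calls per frame. */
    static void draw_arrays(const float *verts, const float *norms,
                            int nverts)
    {
        glEnableClientState(GL_VERTEX_ARRAY);
        glEnableClientState(GL_NORMAL_ARRAY);
        glVertexPointer(3, GL_FLOAT, 0, verts);
        glNormalPointer(GL_FLOAT, 0, norms);
        glDrawArrays(GL_QUADS, 0, nverts);
        glDisableClientState(GL_VERTEX_ARRAY);
        glDisableClientState(GL_NORMAL_ARRAY);
    }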
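
And the classic serialising call from the second point looks like this
(dimensions and buffer are placeholders):

    #include <GL/gl.h>

    /* Any mid-frame readback forces the driver to drain the entire
       pipeline before the call returns; the CPU then sits idle waiting
       on the GPU. */
    static void grab_frame(int w, int h,
                           unsigned char *rgba /* w*h*4 bytes */)
    {
        glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, rgba);
    }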


Finally, the theoretical throughput and fillrate figures nVidia claim
are generally 2-3x what's attainable in the real world, and that's
going to be 2-3x again what you'll be able to achieve pushing giant
gobs of geometry across the bus every frame. Don't expect to achieve
the figures nVidia are quoting.

Hope this helps.
Subject: Re: How to monitor graphics pipeline performance
From: christopher_bell-ga on 16 Nov 2004 10:48 PST
 
Thanks, I got all excited when I found out about NVPerfHUD but then,
as you point out, discovered that it only works with DirectX.

So I got in contact with them & discovered that they are building a
similar tool for OpenGL, and I've joined the developer forum for
this. By coincidence I checked it today and, so far, they haven't
posted anything - but I live in hope!


So thanks - right idea, but the technology ain't there yet.

Christopher Bell
