Hello cerebrate-ga,
Performance of internal buses is dependent on many factors, such as
latency within a chipset, depth and implementation of buffers, size of
data transfers, maximum concurrent transfers, etc. For these reasons,
the only way to know for sure what sort of performance your
application will see is to benchmark specific configurations. With
that in mind, I can give you an idea of the maximum transfer rates you
might see.
Here I omit a discussion of the ISA bus. It is similar enough to PCI
that conclusions can be drawn based on the trends seen in my treatment
of the PCI bus. Further, ISA devices are rarely if ever used where
performance is a concern, and an ISA slot has not been standard on
most motherboards for several years.
A standard PCI bus runs at 33 MHz and transfers 32 bits of data on the
rising edge of each clock pulse. This gives a maximum theoretical
burst transfer rate of approximately 125MiB/sec. However, there is
overhead on each command, which lowers the overall data rate
depending on the ratio of commands to data transferred. In the
general case, the performance of the PCI bus cannot be determined
except by direct measurement. However, I suspect you are primarily
concerned with the performance of a single device in the absence of
other PCI devices, or in the presence of devices making a minimum of
PCI bus transfers. In real-world tests on a single active PCI device,
you can expect a maximum of approximately 110MiB/s. See for example
StorageReview.com's review of the Adaptec 2400A RAID controller at
http://www.storagereview.com/articles/200107/200107037410_2.html.
Scroll down to the "Disk/Read Transfer Rate" line under the table
labeled "RAID 0 with 4 Drives".
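For reference, the arithmetic behind these figures is simple enough to
sketch in a few lines of Python. The theoretical rate falls out of the
clock speed and bus width; the 110MiB/s number is just the measured
figure cited above, used here to estimate a rough efficiency:

    # Back-of-the-envelope PCI burst-rate arithmetic: one 32-bit transfer
    # per rising clock edge, ignoring command and arbitration overhead.
    clock_hz = 33 * 1000 * 1000      # standard PCI clock
    width_bytes = 4                  # 32-bit data path

    theoretical_mib = clock_hz * width_bytes / float(2 ** 20)
    print("Theoretical burst rate: %.0f MiB/sec" % theoretical_mib)  # ~126

    measured_mib = 110.0             # single-device figure cited above
    print("Rough efficiency: %.0f%%" % (100 * measured_mib / theoretical_mib))
    # -> roughly 87%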
64 bit, 33MHz; 32 bit, 66MHz; 64 bit, 66MHz; and PCI-X (64 bit, 100
or 133MHz) all have a higher theoretical peak burst speed, but will
show similar efficiency (roughly 220-440MiB/sec) if other factors are held
constant. When dealing with these faster and wider PCI buses, it is
important to keep in mind that it becomes more difficult for a single
device to saturate the bus. This means that either the device in
question becomes the bottleneck, or that you must consider multiple
devices operating concurrently in order to bump up against the PCI
bus' limitations. In either case, it becomes increasingly difficult
to provide meaningful data from the point of view of the PCI bus
itself.
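If it helps, extending the same arithmetic gives the theoretical peaks
for the wider and faster variants. The efficiency factor below is an
assumption, simply carried over from the 32 bit, 33MHz measurement
rather than something measured on these buses:

    # Theoretical peak burst rates for the wider/faster PCI flavors, with
    # an assumed ~88% efficiency borrowed from the 32-bit/33MHz case.
    variants = [
        ("PCI 64-bit/33MHz",     33e6, 8),
        ("PCI 32-bit/66MHz",     66e6, 4),
        ("PCI 64-bit/66MHz",     66e6, 8),
        ("PCI-X 64-bit/100MHz", 100e6, 8),
        ("PCI-X 64-bit/133MHz", 133e6, 8),
    ]
    for name, clock_hz, width_bytes in variants:
        peak = clock_hz * width_bytes / float(2 ** 20)
        print("%-20s peak ~%.0f MiB/sec, ~%.0f MiB/sec at 88%% efficiency"
              % (name, peak, peak * 0.88))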
AGP, which stands for Accelerated Graphics Port, is not a bus at all,
but rather a port. The difference is that a bus allows multiple
devices to share available bandwidth, whereas a port is a
point-to-point connection between two devices, typically a graphics
card and the chipset. AGP video cards are well-engineered to eke the
maximum performance out of the port, and with few exceptions come
extremely close to the theoretical value. For hard
numbers, including the effect of Sideband Addressing (SBA), see this
page at Reactor Critical:
http://www.reactorcritical.com/review-geforce2gts/review-geforce2gts_4.shtml.
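For a ballpark on the theoretical side, AGP uses a 32-bit path at a
nominal 66MHz base clock, with 1, 2, 4, or 8 transfers per clock
depending on the AGP mode; the resulting peaks work out as follows (the
benchmarks at the link above show what sustained numbers look like):

    # Theoretical AGP peak transfer rates: 32-bit path, 66.66MHz base
    # clock, and 1/2/4/8 transfers per clock depending on the AGP mode.
    base_clock_hz = 66666667
    width_bytes = 4
    for mode in (1, 2, 4, 8):
        peak_mb = base_clock_hz * width_bytes * mode / 1000000.0
        print("AGP %dx: ~%d MB/sec peak" % (mode, peak_mb))
    # -> roughly 266, 533, 1066, and 2133 MB/sec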
The amount of bandwidth consumed depends on the device in question and
its workload. 10Mbit Ethernet cards will transfer at most about
1.25MB/sec, 100Mbit cards about 12.5MB/sec, and gigabit cards about
125MB/sec (though at that speed, few cards can achieve over about
800Mbit/sec, even in a 64-bit PCI slot).
Video cards can utilize nearly all the AGP bandwidth provided, but the
specific utilization would depend on the size and number of textures
being transferred, and the number of polygons in each frame being
displayed. A high-end card such as the Radeon 9700 still will not use
much bandwidth if it is displaying a simple spinning cube, for
example.
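Incidentally, the Ethernet figures above are just the nominal link
speed divided by eight bits per byte:

    # Convert nominal Ethernet link speeds into peak payload byte rates.
    # Actual throughput is lower once Ethernet/IP/TCP overhead is counted.
    for mbit in (10, 100, 1000):
        print("%4d Mbit/sec Ethernet: at most %.2f MB/sec" % (mbit, mbit / 8.0))
    # -> 1.25, 12.50, and 125.00 MB/sec respectively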
Sound cards do not transfer much data at all. CD-quality sound at
44.1kHz, with two 16-bit channels (32 bits per sample period), requires
only a fraction of a megabyte per second. Sound cards, however, make
on the order of 44,100 PCI bus transfers every second, and so are
extremely sensitive to PCI bus latency rather than bandwidth.
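The audio arithmetic, for completeness (two 16-bit channels at
44.1kHz, i.e. one 32-bit word per sample period):

    # Bandwidth of a CD-quality audio stream: 44,100 sample periods per
    # second, 2 channels, 16 bits (2 bytes) per channel sample.
    sample_rate = 44100
    channels = 2
    bytes_per_channel_sample = 2
    stream_bytes = sample_rate * channels * bytes_per_channel_sample
    print("CD-quality audio: about %.2f MB/sec" % (stream_bytes / 1000000.0))
    # -> about 0.18 MB/sec, a tiny fraction of what the PCI bus can carry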
If you require links to more sites with supporting documentation or
quotations, please request a follow-up.
Thank you,
Haversian