Q: Measured Ethernet Speeds Compared ( Answered 4 out of 5 stars,   0 Comments )
Question  
Subject: Measured Ethernet Speeds Compared
Category: Computers > Hardware
Asked by: traderjunky-ga
List Price: $5.00
Posted: 31 Jul 2003 22:43 PDT
Expires: 30 Aug 2003 22:43 PDT
Question ID: 237659
What are the typical measured speed differences between 10Mb, 100Mb and
1Gb Ethernet over a simple 5 node LAN configuration using Windows, Cat 5
wire, and TCP/IP? For example, is the difference between 100Mb and
1Gb really 10x in practical applications?
Answer  
Subject: Re: Measured Ethernet Speeds Compared
Answered By: maniac-ga on 01 Aug 2003 15:54 PDT
Rated:4 out of 5 stars
 
Hello Traderjunky,

Most of the information you can find about network speeds and related
performance is independent of the operating system, so some of the
references below are for systems other than Intel/Windows platforms.
I'll start with a few general comments and then list a variety of
sites as backup material.

With current PC hardware, the difference between 10 Mbit and 100 Mbit
Ethernet really is close to a 10x performance improvement. This is
because both
 - the speed of the CPU in modern PCs (> 1 GHz)
 - the speed of the I/O buses and motherboard (32-bit/33 MHz PCI -
100 Mbyte/sec)
are well above the demands of the network interface.
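As a rough sanity check, the comparison above can be put into numbers. This is a sketch using only the figures already mentioned (the ~100 Mbyte/sec practical PCI throughput is taken as given):

```python
# Back-of-the-envelope check: does the I/O bus have headroom for the NIC?
# Uses the figure from the text: 32-bit/33 MHz PCI sustaining ~100 Mbyte/sec.

PCI_32_33_MBYTES_PER_SEC = 100  # practical bus throughput (from the text)

def nic_demand_mbytes(link_mbits):
    """Peak data rate a link can demand in one direction, in Mbyte/sec."""
    return link_mbits / 8

for link in (10, 100, 1000):
    demand = nic_demand_mbytes(link)
    headroom = PCI_32_33_MBYTES_PER_SEC / demand
    print(f"{link:>4} Mbit Ethernet needs {demand:6.2f} Mbyte/sec "
          f"-> bus headroom {headroom:.1f}x")
```

At 10 and 100 Mbit the bus has a large multiple of the required bandwidth to spare; at 1000 Mbit the demand (125 Mbyte/sec) actually exceeds what 32-bit/33 MHz PCI can sustain, which is why the gigabit case behaves differently.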

However, at 1000 Mbit, a number of system effects will drop the
maximum transfer rates to perhaps 40 to 60% of the rated maximum
unless you have better buses such as 64-bit/66 MHz PCI, two or more
CPUs, and more memory. Latency will also be better, but not
necessarily 10x better. A variety of sources back up that result,
including:

Odd that a Russian site has this older data for 100 Mbit network
interfaces. Note that this comparison shows that changing the
interface adapter can result in a change in CPU utilization. Also note
that both ran at about 96 Mbit/sec rate.
  http://developer.intel.ru/download/network/pdf/adapter_performance.pdf

A high end (8 CPU) HP server is able to keep up to four 1000 Mbit
adapters fully utilized. Note that roughly two CPUs per interface are
needed to meet the demand.
http://www.hp.com/products1/servers/rackoptimized/rp7410/infolibrary/perf_wp_022803_FINAL2.pdf

Benchmark results for NetApp file servers (Windows). Note that
swapping 100 Mbit for 1000 Mbit adapters yields about a 2x performance
boost (compare 720 with 740 lines).
  http://www.netapp.com/tech_library/3056.html

Somewhat dated (2000), but a good summary of the capabilities
available with 1000 Mbit adapters and the limiting factors of the
CPUs available then.
  http://www.nwfusion.com/research/2000/0320revgig.html

More recent data (2002) with Sun computers connected through a Cisco
switch. A number of good tables that summarize the results (40 - 80%
bandwidth).
  http://www.sanet.sk/en/vykon_siete.shtm

A look at latency through switches
  http://www.mcclellanconsulting.com/analysis/latanal.html

A comparison of a number of Ethernet interfaces (Linux results)
  http://www.accs.com/p_and_p/GigaBit/server.html

A number of other results are available with searches such as
  measured ethernet performance results
  measured ethernet latency results

Please use a clarification request if you need further explanation of
these results.

  --Maniac

Request for Answer Clarification by traderjunky-ga on 01 Aug 2003 20:09 PDT
The information is somewhat dated as technology goes, but it will
suffice. I'm not surprised by the results of 100Mb vs. 1000Mb, but your
conclusion that the difference between 10 and 100 is close to 10x did
surprise me.

Your reference to the Russian site with performance data indicated
that "enhanced" Cat 5 was used in the evaluation. Thus, it deviates
a bit from the "typical" or standard Cat 5 wiring scenario I was
looking for.

I conducted a small experiment of my own with a Windows 2000 server
and an XP workstation and was only able to realize a slightly better
than 4x difference from 10 to 100Mb using 2.4 GHz Intel P4s, 512MB RAM
in each system, Intel Pro 100+ Management adapters, TCP/IP and a
standard Cat 5 crossover cable (virtual point-to-point). Perhaps my
experiment was somehow compromised by something I overlooked.
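For what it's worth, a point-to-point transfer test like the one described can be sketched with plain sockets. Everything here (port number, block size, total transfer size) is an arbitrary assumption for illustration, not a reference to the actual setup: run receiver() on one machine and sender(host) on the other, then compare the reported rate across the 10 Mbit and 100 Mbit links.

```python
# Minimal one-way TCP throughput test over a direct connection.
# Port, block size, and total bytes are placeholder assumptions.
import socket
import time

PORT = 50007               # arbitrary unused port (assumption)
BLOCK = 64 * 1024          # 64 KB blocks keep per-call overhead small
TOTAL = 100 * 1024 * 1024  # send 100 MB in total

def receiver():
    """Accept one connection and drain it, counting bytes received."""
    with socket.socket() as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("", PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            got = 0
            while True:
                data = conn.recv(BLOCK)
                if not data:
                    break
                got += len(data)
        print(f"received {got} bytes")
        return got

def sender(host):
    """Send TOTAL bytes to host and report the achieved rate."""
    buf = b"\0" * BLOCK
    with socket.create_connection((host, PORT)) as s:
        start = time.perf_counter()
        sent = 0
        while sent < TOTAL:
            s.sendall(buf)
            sent += len(buf)
    elapsed = time.perf_counter() - start
    rate_mbits = sent * 8 / elapsed / 1e6
    print(f"{rate_mbits:.1f} Mbit/s")
    return rate_mbits
```

A tool like this only measures the TCP payload rate, so protocol headers and acknowledgements mean the reported number will always sit somewhat below the wire rate even in a perfect run.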

Regardless, if you could find something that does not include an
enhanced Cat 5 cable, this would be great.

Thank you.

TJ

Clarification of Answer by maniac-ga on 02 Aug 2003 12:22 PDT
Hello Traderjunky,

Hmm. There are a number of factors that can affect the results.
As I noted before, a modern PC should be able to keep a 100 Mbit
Ethernet fully utilized (or close to that). I am including a few more
references at the end that show that kind of result. However, you are
not seeing that kind of performance, so some other factor must apply.

Some factors that can affect the results (reduce peak performance)
include:
 - operating system overhead. For example, if the OS overhead to
initiate a transfer is 1 msec, then you will be limited to less than
1000 messages per second. To send 100 Mbytes in that one second, you
have to use 100 Kbyte blocks.
 - other traffic. In an ideal situation, you would have an idle
interface between the two systems. In the real world, other traffic
will interfere with the messages being measured.
 - switching overhead. In an ideal situation, you would have a direct
connection between the two systems. In the real world, each switch and
router between the two systems will add latency to each and every
message. This may be a significant factor with some network protocols.
 - protocol "rules". Using TCP/IP (the protocol used most often on the
Internet) as an example, there are "fairness" rules that require you
to send a little data, wait for acknowledgement, send more data, wait,
send even more data, wait, and repeat until you hit the limit of the
connection between the two systems. This "slow start" is documented in
  http://www.faqs.org/rfcs/rfc2001.html
along with other congestion avoidance algorithms.
 - multi-path effects. On a local network this is not a problem, but
it can arise when there are two or more links connecting two systems.
The problem is the arrival of packets "out of order" at the
destination. When this occurs, the congestion avoidance algorithms are
triggered, which will slow your transfers.
 - small block / small message effects. If you use small blocks, the
total OS overhead will be far larger than with large blocks. In the
same way, small messages will require a connection to be started and
stopped each time and reduce performance.
 - bottlenecks. Your test / application may be limited by the amount
of free memory or speed of disks (and not the network interface rate).
 - "tuning" effects. The operating system may have some settings that
need to be adjusted to get the peak performance from your system.
Fixing this is usally an iterative process, make a change - run the
test, and repeat.
 - other factors. As a couple of the netpipe results show, performance
may vary in odd ways with block size. It is not clear what the cause
is, but it may be related to data being copied to align buffers.
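The OS-overhead point above can be put into numbers. This sketch assumes a fixed per-transfer cost (the 1 msec figure from the example) and shows how small blocks cap effective throughput well below the wire rate:

```python
# Models the OS-overhead factor: a fixed per-transfer cost means small
# blocks spend most of their time in setup, not on the wire.
# Numbers follow the example in the text (1 msec overhead, 100 Mbit link).

WIRE_MBYTES_PER_SEC = 12.5   # 100 Mbit/sec expressed in Mbyte/sec
OVERHEAD_SEC = 0.001         # assumed per-transfer OS cost (from the text)

def effective_mbytes_per_sec(block_kbytes):
    """Effective throughput when each block pays a fixed setup cost."""
    block_mb = block_kbytes / 1024
    wire_time = block_mb / WIRE_MBYTES_PER_SEC
    return block_mb / (wire_time + OVERHEAD_SEC)

for kb in (1, 4, 16, 64, 256, 1024):
    print(f"{kb:>5} KB blocks -> {effective_mbytes_per_sec(kb):5.2f} Mbyte/sec")
```

With 1 KB blocks the link runs at under 1 Mbyte/sec despite the 12.5 Mbyte/sec wire rate; only with blocks of tens of kilobytes and up does the setup cost become negligible.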

Several more references follow to include:
 - benchmark tools / some analysis
 - tests of switches
 - tests of 100 Mbit network interfaces

Odd - another Russian site with good links to benchmark tools.
  http://www.benchmarkhq.ru/english.html?/be_net.html
Scroll down to "Netbench", hosted at
  http://www.etestinglabs.com/benchmarks/netbench/netbench.asp
to get a file server / client application to measure performance
yourself. There are a number of other tools referenced as well.

For a general explanation of how NetBench is used by PC Magazine,
  http://www.pcmag.com/print_article/0,3048,a=4408,00.asp
and scroll down to "Benchmark Tests: Business Servers". It notes that
the machines were substantially the same yet two products (HP, Dell)
performed more poorly than the others. They reloaded Windows 2000 and
did some other tuning to bring these machines in line with the others.

A series of benchmark tests of switches. If you select the
"Performance Results", note the almost 8 to 1 range of latency, though
most are closely bunched next to the best performers.
  http://www.pcmag.com/print_article/0,3048,a=17628,00.asp

Performance results showing > 80 Mbit/sec on a 100 Mbit/sec Ethernet
include:
800 MHz Pentium III, on-board Intel Ethernet interface
  http://www.acl.lanl.gov/plan9/netpipe/i82557.html
A variety of dual processor machines (1.4 to 2.8 GHz CPUs); note the
odd slowdowns with some block sizes.
  http://www.hpc-design.com/reports/smp-interface/smp-interface.html
Another set of dual processor results (1 GHz each); this does not have
the odd slowdowns of the previous results.
  http://www.plogic.com/bps/bps-logs/index.html
A comparison of a variety of Ethernet cards on slower PCs (400 MHz
and 166 MHz). They also tested some "channel bonded" cases where two
or more Ethernet connections are used in parallel.
  http://www.hpc.sfu.ca/bugaboo/nic-test.html

There are a number of other references available. Search phrases
include
  netpipe performance results
  Windows 2000 XP network performance results
  Windows 2000 XP network benchmarks

  --Maniac
traderjunky-ga rated this answer: 4 out of 5 stars
Maniac- Thank you for your efforts with this question; I am
generally very happy with your answer.

TJ

Comments  
There are no comments at this time.
