Intel Pro/1000 MT - Server vs Desktop?

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
21,737
Location
I am omnipresent
The server adapter has an optional low-profile bracket, driver support for Solaris, *BSD, and NetWare, and advertised compliance with a longer list of buzzwords (it supports hot-swap and load balancing, presumably at the driver or firmware level).

Are you going to use those things at home? No. Is it worth a $100 premium for the server one, if you've got a server? Sure looks like it to me.
 

Stereodude

Not really a
Joined
Jan 22, 2002
Messages
10,865
Location
Michigan
You can get the desktop cards almost as cheaply at Googlegear: $42.50 with free 2-day air shipping. I would be buying them if the 8-port gigabit SMC switch I wanted was in stock anywhere.
 

P5-133XL

Xmas '97
Joined
Jan 15, 2002
Messages
3,173
Location
Salem, Or
Perhaps I need to explain the reason for the question so that you can address it.

I have an unmanaged switched TCP/IP network with multiple machines that back up to a single file server. Some of those machines have many GB of local storage, and on some of them it takes many hours on a 10/100 switched network to back them up. So I have decided to upgrade the infrastructure to GbE to eliminate the network as a possible bottleneck.

My observation with the current setup, with 3Com 10/100 cards and MS's backup program, is that the sender's CPU pegs out at 100% during backups, regardless of the CPU (even on high-performance dual-processor machines). The receiver runs at approx. 70% CPU. The transfer maxes out at approx. 5 MB/s, with a norm of 2 MB/s. This shouldn't be; it's way too slow for the machines and the hardware involved. My best guess is that it is the 3Com cards: normal disk-to-disk transfers do not peg the CPU and run at 20-40 MB/s depending upon the drives involved. Backups to local HDs do peg the CPU, but at a much higher transfer rate than over the network. If it is not the cards, then my next assumption (or thing to test) is that it is the backup program itself.

I've done enough research to figure out that Intel actually makes premium GbE cards. Now, looking at the specifications of the Desktop and Server cards makes me wonder which items on that long list of buzzwords actually make a difference in performance. The first thing I noticed is that I don't know enough about some of those buzzwords. The things I assume matter are the different PCI bus interfaces and their effect on I/O and CPU utilization, interrupt usage, and large-block transfer support.

Thus, even on the branch machines (the senders), if I can get better network performance it will be worth putting server cards in them. I really would like to convert 8-hour backups into backups of 1 hour or less. As for normal (non-backup) network usage, the current 10/100 is fine, and I seriously doubt GbE would produce a noticeable change there because I'm not even coming close to using the bandwidth available with 100 Mbit Ethernet.
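A rough sanity check on those targets (the per-machine data size isn't stated in the post, so it's inferred here from the quoted rates and the 8-hour window):

# Back-of-the-envelope: what data size do 8-hour backups at ~2-3 MB/s imply,
# and what sustained rate would be needed to finish in an hour?
# (The ~2.5 MB/s average and 8-hour window come from the post; the rest is arithmetic.)

hours = 8
avg_rate_mb_s = 2.5                      # observed norm of ~2 MB/s, peaks of 5 MB/s
data_gb = hours * 3600 * avg_rate_mb_s / 1024
print(f"Implied data set: ~{data_gb:.0f} GB per machine")          # ~70 GB

target_hours = 1
needed_mb_s = data_gb * 1024 / (target_hours * 3600)
print(f"Rate needed for a {target_hours}-hour backup: ~{needed_mb_s:.0f} MB/s")  # ~20 MB/s

About 20 MB/s sustained is within what the disks can deliver (20-40 MB/s) but well above 100 Mbit wire speed (~12.5 MB/s), so GbE is a prerequisite for the 1-hour goal, although the disks, CPU, and backup software also have to keep up.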
 

P5-133XL

Xmas '97
Joined
Jan 15, 2002
Messages
3,173
Location
Salem, Or
Stereodude said:
You can get the desktop cards almost as cheaply at Googlegear: $42.50 with free 2-day air shipping. I would be buying them if the 8-port gigabit SMC switch I wanted was in stock anywhere.

I got a Linksys EG008W 8-port GbE switch (please don't confuse it with the EG0801) for approx. $175, which seems to be a good deal, especially since they actually exist, unlike the 8-port GbE SMCs.
 

P5-133XL

Xmas '97
Joined
Jan 15, 2002
Messages
3,173
Location
Salem, Or
Stereodude said:
You can get the desktop cards almost as cheaply at Googlegear: $42.50 with free 2-day air shipping. I would be buying them if the 8-port gigabit SMC switch I wanted was in stock anywhere.

I don't know, but $33.00 shipped for the desktop PRO/1000 MT seems to me significantly cheaper than $42.50 shipped via Googlegear. The catch with the Intel price is a maximum quantity of one.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,782
Location
USA
$33 is a good deal IMHO...I might buy one also. I read their disclaimer and it looks like Intel may contact you regarding the evaluation unit...is there some hidden catch with this?

Also, the shipping appears to be free 2nd-day from Intel:

Item                                  Ordered  Backordered  Price   Total
Intel® PRO/1000 MT Desktop Adapter       1          0       $33.00  $33.00

Subtotal: $33.00
Tax: $1.65
Shipping Method: UPS Blue (2-Day)     Shipping: FREE
TOTAL: $34.65
 

Stereodude

Not really a
Joined
Jan 22, 2002
Messages
10,865
Location
Michigan
P5-133XL said:
I got a Linksys EG008W 8-port GbE switch (please don't confuse it with the EG0801) for approx. $175, which seems to be a good deal, especially since they actually exist, unlike the 8-port GbE SMCs.
Yes, but they also don't pass jumbo frames. Jumbo frames are the key to low CPU usage and higher data rates.
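A minimal illustration of why frame size matters: per-frame work (interrupts, header processing) scales with the number of frames, so at a given rate a 9000-byte MTU cuts that work roughly sixfold compared with the standard 1500 bytes.

# Frames per second needed to move a given rate with standard vs. jumbo frames.
# The per-frame cost (interrupt/DPC, protocol headers) is roughly what the CPU pays.

def frames_per_sec(rate_mbit, mtu_bytes):
    return rate_mbit * 1_000_000 / 8 / mtu_bytes

for mtu in (1500, 9000):
    print(f"MTU {mtu}: {frames_per_sec(1000, mtu):,.0f} frames/s at GbE line rate")

# MTU 1500: ~83,333 frames/s; MTU 9000: ~13,889 frames/s -- about 6x fewer
# frames (and, without moderation, interrupts) for the same payload.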
 

Stereodude

Not really a
Joined
Jan 22, 2002
Messages
10,865
Location
Michigan
Stereodude said:
Yes, but they also don't pass jumbo frames. Jumbo frames are the key to low CPU usage and higher data rates.
See:

[Graphs from the review: speed-iperf-tcp.png (iperf TCP throughput), speed-ntttcp.png (NTttcp throughput), cpu-ntttcp.png (NTttcp CPU utilization)]


I'll wait for the SMC myself.
 

blakerwry

Storage? I am Storage!
Joined
Oct 12, 2002
Messages
4,203
Location
Kansas City, USA
Website
justblake.com
Oh, btw, Mark... I'm assuming that's NTBackup, because I usually get at least 6 MB/s using onboard NICs without pegging the CPU when doing plain SMB transfers... I'm assuming your CPU usage is high because NTBackup is performing some type of compression.


While running a disk defrag at the moment, I got 5 MB/s and CPU was about 66% on the sending computer... didn't check the recipient... the sender was onboard VIA LAN on an Athlon XP 1700+, the recipient an 845GE with a Celeron 2.0... whatever LAN Shuttle uses on the SB51G.
 

Stereodude

Not really a
Joined
Jan 22, 2002
Messages
10,865
Location
Michigan
blakerwry said:
Wow, that Intel really shines... low CPU usage and higher transfer rates... I wonder if that was the server or the workstation version in those graphs.
It's the desktop version. The Intel card didn't look so rosy 6 months earlier. It got beat 6 ways from Sunday. It seems to be all in the drivers so far.
 

P5-133XL

Xmas '97
Joined
Jan 15, 2002
Messages
3,173
Location
Salem, Or
In my backup there is no compression going on at all, and none of the disks have any compression.

I have no idea if the Linksys EG008W switch supports jumbo frames. I can see no reference to them or to MTUs in the specifications, and a Google search turned up nothing of interest.
 

Stereodude

Not really a
Joined
Jan 22, 2002
Messages
10,865
Location
Michigan
P5-133XL said:
I have no idea if the Linksys EG008W switch supports jumbo frames. I can see no reference to them or to MTUs in the specifications, and a Google search turned up nothing of interest.
Well, I would guess that if it supported them they would sell that as a feature. The omission of it as a selling point leads me to believe it doesn't.
 

ihsan

What is this storage?
Joined
Oct 6, 2002
Messages
66
Location
Petaling Jaya, Malaysia
Website
ihsan.synthexp.net

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,782
Location
USA
P5-133XL said:
In my backup there is no compression going on at all, and none of the disks have any compression.

I have no idea if the Linksys EG008W switch supports jumbo frames. I can see no reference to them or to MTUs in the specifications, and a Google search turned up nothing of interest.

It does not support jumbo frames...I sent Linksys an e-mail several months ago and they replied with that answer. I'll dig up the e-mail and paste their answer later tonight if you're interested.
 

P5-133XL

Xmas '97
Joined
Jan 15, 2002
Messages
3,173
Location
Salem, Or
No need to dig up the e-mail. I've spent most of last night testing the network with the new evaluation Intel cards and the Linksys switch, and I've been getting interesting results. My Intel package contained two server cards, one desktop card, and one dual-port server card.

First, the pegging-out of the CPU was definitely the 3Com cards. I haven't tested the desktop card vs. the server card because I've simply been testing a server-to-server configuration (first things first).

The file server is a 500 MHz PIII (Win2003 Server) and I've been testing it against a dual Athlon 1900+ (Win XP). Both machines have server cards in them; the PIII is using a 32x33 PCI slot and the Athlon is using the 64x66 PCI slot. The cards are set to negotiate the speed and set themselves to full-duplex GbE.

The Linksys switch does not do jumbo frames (tested), so I used a crossover cable for a direct connection between the two computers to bypass that issue. To bypass the unknowns within the backup program, I simply copied an 8.7 GB backup file from the workstation to the server.
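For anyone wanting to take the disks out of the picture as well as the backup program, a memory-to-memory TCP test (what the iperf/NTttcp graphs above measure) isolates the NIC, driver, and bus. A minimal sketch of such a test; the port, chunk size, and transfer size below are arbitrary choices, not anything from the thread:

# Poor man's iperf: stream zero-filled buffers over TCP and report MB/s.
import socket, sys, time

PORT = 5001
CHUNK = 64 * 1024
TOTAL_MB = 512

def server():
    with socket.socket() as s:
        s.bind(("", PORT))
        s.listen(1)
        conn, _addr = s.accept()
        with conn:
            received, start = 0, time.time()
            while True:
                data = conn.recv(CHUNK)
                if not data:
                    break
                received += len(data)
            secs = time.time() - start
            print(f"{received / 1e6:.0f} MB in {secs:.1f} s = {received / 1e6 / secs:.1f} MB/s")

def client(host):
    payload = b"\0" * CHUNK
    with socket.socket() as s:
        s.connect((host, PORT))
        for _ in range(TOTAL_MB * 1024 * 1024 // CHUNK):
            s.sendall(payload)

if __name__ == "__main__":
    # Run "python tcptest.py server" on one box, "python tcptest.py client <server-ip>" on the other.
    server() if sys.argv[1] == "server" else client(sys.argv[2])

If this raw test runs much faster than the file copy, the bottleneck is the disks or the copy path, not the network; if it is equally slow, the NIC, driver, or PCI bus is the suspect.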

In that configuration, the best file transfer I could produce, regardless of advanced settings, was 14 MB/s using 11% of the bandwidth from the server (approx. 36% CPU) to the workstation (approx. 15% CPU), and 10.5 MB/s from the workstation (approx. 15%) to the server (approx. 36%). In all tests the advanced settings of both machines were identical. With jumbo frames on (regardless of frame size) the results decreased, and no other advanced setting mattered performance-wise, though I did not test whether any of the CPU offloading settings changed the CPU load (all of them were set to enabled).
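A quick check that those figures are self-consistent (the 14 MB/s and 11% numbers come from the post; the rest is arithmetic):

rate_mb_s = 14
mbit = rate_mb_s * 8                       # 112 Mbit/s
print(f"{rate_mb_s} MB/s = {mbit} Mbit/s = {mbit / 1000:.0%} of a 1000 Mbit/s link")

# Full GbE line rate is ~125 MB/s raw (~118 MB/s of TCP payload at MTU 1500),
# so 14 MB/s leaves roughly 90% of the pipe idle, which matches the
# "11% of the BW" reading from Task Manager.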

I then attached the switch and retested, and got the same results, except that if jumbo frames were enabled the transfer rate dropped to zero. I also experimented with the various load-balancing/redundancy options on the dual-port card and concluded that they all either decreased performance or left it the same. I even played with Win2003's ability to bind the ports to see what effect that had, and it was a decrease in performance.

My conclusion is that whatever is slowing the network down, it is not related to bandwidth. Using the Intel server cards removed the CPU pegging at 100%, but only marginally improved performance beyond what a 10/100 network is capable of.

Any suggestions?
 

CityK

Storage Freak Apprentice
Joined
Sep 2, 2002
Messages
1,719
First, the pegging-out of the CPU was definitely the 3Com cards.
I remember reading in the AnandTech preview and follow-up articles on the nForce2 that they found the 3Com had higher CPU utilization because it lacks interrupt moderation (you can read that here). I wouldn't be surprised if the same is the case with your 3Com cards.
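A very rough illustration of what interrupt moderation buys; the per-interrupt overhead and coalescing factors here are assumed, illustrative values, not measurements from the cards in this thread:

# Model: the NIC coalesces several received packets into one interrupt.
pkts_per_sec = 8500          # roughly 100 Mbit/s of 1500-byte frames
per_intr_overhead_us = 15    # assumed cost of servicing one interrupt

for coalesce in (1, 8, 32):  # packets handled per interrupt
    intr_rate = pkts_per_sec / coalesce
    cpu_pct = intr_rate * per_intr_overhead_us / 1e6 * 100
    print(f"{coalesce:>2} pkt/intr: {intr_rate:>6.0f} intr/s, ~{cpu_pct:4.1f}% CPU spent just entering/leaving the handler")

Without any coalescing, interrupt handling alone can eat a noticeable slice of a slow CPU, which fits the 3Com pegging a sender at 100%.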

My conclusion is that whatever is slowing the network down, it is not related to bandwidth.
Indeed, it sounds like a PCI-related problem. I presume your dual Athlon is an AMD 760MPX. I also speculate that this is the source of the bottleneck. It can have PCI issues similar to those that have plagued VIA chipsets.

What other PCI devices do you have running in the systems? Have you tried stripping them out (if any are present)?

CK
 

Stereodude

Not really a
Joined
Jan 22, 2002
Messages
10,865
Location
Michigan
Sounds like you need either better Ethernet cables, or to find the source of the slowdowns on your cable. Fluorescent lights and ethernet cables don't get along too well. I'm sure there are other things too.
 

P5-133XL

Xmas '97
Joined
Jan 15, 2002
Messages
3,173
Location
Salem, Or
Stereodude said:
Sounds like you need either better Ethernet cables, or to find the source of the slowdowns on your cable. Fluorescent lights and ethernet cables don't get along too well. I'm sure there are other things too.

It is not the cables. I ran the diagnostics for the cards and was getting 25-40 bad packets out of 1,500,000; that is totally within my normal expectation for GbE over Cat-5.
 

Pradeep

Storage? I am Storage!
Joined
Jan 21, 2002
Messages
3,845
Location
Runny glass
CityK said:
My conclusion is that whatever is slowing the network down, it is not related to bandwidth.
Indeed, it sounds like a PCI-related problem. I presume your dual Athlon is an AMD 760MPX. I also speculate that this is the source of the bottleneck. It can have PCI issues similar to those that have plagued VIA chipsets.

What other PCI devices do you have running in the systems? Have you tried stripping them out (if any are present)?

CK

I believe the problem with the MPX chipset was crappy performance on the 32-bit PCI bus; the 64-bit bus was unaffected. Do those server cards work in 5V PCI slots, P5? Just thinking of one for my 760MP (64-bit, 33 MHz slots).
 

P5-133XL

Xmas '97
Joined
Jan 15, 2002
Messages
3,173
Location
Salem, Or
CityK said:
My conclusion is that whatever is slowing the network down, it is not related to bandwidth. Indeed, it sounds like a PCI-related problem. I presume your dual Athlon is an AMD 760MPX. I also speculate that this is the source of the bottleneck. It can have PCI issues similar to those that have plagued VIA chipsets.

What other PCI devices do you have running in the systems? Have you tried stripping them out (if any are present)?

CK

I agree that it does sound like a PCI bus problem, and yes, the AMD dual-processor chipsets have the same limitation as the VIA chipsets. However, the card is in a 64x66 slot and that slot has lots of bandwidth to spare. Basically, on that machine all the slots are filled with stuff. More to the point, does it have problems with the interfaces between the standard PCI bus and the fast one?

Regardless of the above, my thoughts are more on the server side of the network: the 500 MHz P3 and its PCI bus. There I'm limited to a standard PCI bus shared by multiple 3ware controllers in addition to the GbE card. I have no good tools to see what is happening on the PCI bus to find out whether that is the issue. Normally I assume that a file server doesn't require a high-performance CPU and that a normal PCI bus has the bandwidth to deal with the I/O. In this case, I'm starting to suspect that a server upgrade may be necessary.
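For reference, the theoretical PCI numbers behind that suspicion; the twice-across-the-bus point is an assumption about the data path (NIC to memory, then memory to 3ware controller), not something measured here:

def pci_bw_mb_s(width_bits, clock_mhz):
    # Theoretical peak bandwidth of a PCI bus in MB/s.
    return width_bits / 8 * clock_mhz

print(f"32-bit/33 MHz PCI : {pci_bw_mb_s(32, 33):.0f} MB/s peak (shared by all devices)")
print(f"64-bit/66 MHz PCI : {pci_bw_mb_s(64, 66):.0f} MB/s peak")

# ~132 MB/s peak on the server's shared bus, with real-world sustained figures
# typically lower; if backup data crosses that bus twice (NIC in, 3ware out),
# the practical ceiling might be a few tens of MB/s. That is still well above
# the 10-14 MB/s observed, so the bus bandwidth alone doesn't explain the result.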
 

P5-133XL

Xmas '97
Joined
Jan 15, 2002
Messages
3,173
Location
Salem, Or
Pradeep said:
I believe the problem with the MPX chipset was crappy performance on the 32-bit PCI bus; the 64-bit bus was unaffected. Do those server cards work in 5V PCI slots, P5? Just thinking of one for my 760MP (64-bit, 33 MHz slots).

The server adapter should work fine in most PCI slot configurations: specifically, it worked fine in the high-speed slot of the Asus A7M266-D. The desktop adapter is limited to normal PCI (32x33 or 32x66). I really don't know which types of slots are 5V vs. 3.3V.
 

CityK

Storage Freak Apprentice
Joined
Sep 2, 2002
Messages
1,719
Pradeep said:
I believe the problem with the MPX chipset was crappy performance on the 32-bit PCI bus; the 64-bit bus was unaffected.
However, the card is in a 64x66 slot and that slot has lots of bandwidth to spare. Basically, on that machine all the slots are filled with stuff. More to the point, does it have problems with the interfaces between the standard PCI bus and the fast one?
Check out the following diagram of the bus implementation on the MPX:

[Diagram: mpx2.gif (AMD-760 MPX PCI bus implementation)]

Anything that comes off the 33 MHz bus is, in the end, routed through the 66 MHz bus. What I'm thinking is that there is some bus-mastering problem going on; simply put, only one device can own the bus at any one time, whether it sits on the slower 33 MHz bus or on the 66 MHz bus.

[speculation] I think something is not playing well with the other devices on the PCI bus(es) and is constantly interrupting the transfers.
- This may go hand in hand with what I wrote earlier about the 3Com having higher CPU usage, perhaps because of its lack of interrupt moderation.
- Further, I propose that the Intel cards handle interruption better (as evidenced by the low CPU usage), but are nonetheless also having their transfers constantly interrupted (as evidenced by the extremely poor transfer rates) by some maladjusted device on the PCI bus. [/speculation]

CK
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,782
Location
USA
P5-133XL said:
The cards are set to negotiate the speed and set themselves to full-duplex GbE.

Just a thought...have you tried changing the negotiation setting to "auto/auto | auto/auto" or "1000/full | 1000/full"?

I've noticed in our lab at work that our Intel 10/100s can be picky about the combinations of negotiation speeds, especially if they are connected via a switch. I know you tested direct connections, but I'm wondering if Intel has issues with negotiating the speed automatically.
 

Fushigi

Storage Is My Life
Joined
Jan 23, 2002
Messages
2,890
Location
Illinois, USA
Handruin said:
Just a thought...have you tried changing the negotiation setting to "auto/auto | auto/auto" or "1000/full | 1000/full"?
We've seen that in our production server farm. We've changed the servers to force the speed and duplex settings instead of using auto. Dell servers, Cisco switches. Apparently auto-negotiation isn't reliable enough, or causes the occasional burp on the network, or something.
 