Home ESXi

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,741
Location
USA
I'm looking to build a test environment for ESXi, and I was intrigued by this Dell PowerEdge T100, which seems to be at a really low price of $289. When I bump the RAM from 1GB to 8GB it's only another $121, which seems pretty darn cheap at $410 total. I'll need to add a couple more Gigabit NICs for my testing, but it comes with a few PCIe slots, so it's expandable. The T100 is actually listed on VMware's HCL as long as a Xeon 33xx-series CPU is selected. I'm not certain I need that much CPU for what I plan to do.

I also plan to build a NAS array and connect the shared storage via iSCSI so that vMotion will be possible. I'm leaning towards OpenFiler right now because it supports NIC teaming for iSCSI, which leads me to my networking configuration.

For the networking part of this, I'm considering a couple of HP ProCurve 1810G-8 switches to set up some VLANs and also link aggregation for iSCSI. I plan to run two GigE ports in a single aggregated VLAN to the NAS, and the other two will be used for an aggregated public network for ESX.

Has anyone used the Dell T100 series, or can anyone think of a good reason not to use them? For the price it seems hard to build a comparable system. I won't need to add more storage to the systems because I'll be using iSCSI, and I don't plan to need more than 8GB RAM (which I believe is its limit). I don't think I could even buy 4x 2GB sticks for less than $121.

How about the HP ProCurve 1810 series managed switches? From what I've read, most people seem to like them. The product seems to carry a lifetime warranty, and the feature set seems really good for the price.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,741
Location
USA
Are you running ESXi on something other than a Xeon (Core i7, right?)? I'm mostly worried that it won't boot with the Intel E5400. It also looks like Intel sold this CPU both with and without VT-x, which would be a big deal for me; I wouldn't be able to boot 64-bit VMs without that feature in the CPU. The next option would be for me to select a Xeon X3330.
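
If I do end up with an E5400 box in hand, I figure the quick sanity check is to boot a Linux live CD and look for the vmx flag before assuming anything (keeping in mind VT-x can also be switched off in the BIOS even when the CPU has it). Something rough like this would do it:

# Rough check for Intel VT-x from a Linux live CD -- look for "vmx" in the CPU flags.
# Note: the flag can still be hidden if VT-x is disabled in the BIOS.
with open("/proc/cpuinfo") as f:
    flag_lines = [line for line in f if line.startswith("flags")]
print("VT-x exposed:", any("vmx" in line.split() for line in flag_lines))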
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,534
Location
Horsens, Denmark
ESXi is really not fussy. I've run it on all kinds of stuff, including AMD X2 3800s, i7s, Celerons, you name it. I also haven't found a SATA or GbE controller it didn't like.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
21,607
Location
I am omnipresent
ESXi is funny about NICs. I know there's one or two models of 100Mbit Realtek NIC that it absolutely won't work with, which is obnoxious because those models amount to 84% of my classroom computers.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,741
Location
USA
The built-in NIC should be fine since this T100 host is on their HCL. I plan to add supported Intel GigE NICs in the PCIe slots. I just need to decide on a specific model and whether I want single or dual port.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
21,607
Location
I am omnipresent
It's kind of amusing to me that if I configure that T100 with a reasonable selection of hardware (2.6GHz Xeon, 8GB RAM, 2x250GB drives, SBS2008 license) I wind up about $1200 cheaper than if I set up the same machine on a rackmounted platform.

Yes I know it's a different series of Xeon. It's still funny.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,741
Location
USA
I'm shopping for network cards, and I was wondering if you've used or considered the Intel Gigabit ET Dual Port Server Adapter. It looks to be their latest-generation NIC. It's only slightly more expensive than the Intel PRO/1000 PT Dual Port Server Adapter. Both are listed as compatible on the VMware HCL, so I'm thinking of going with the 3rd-generation card. That adapter supports Intel's Virtualization Technology for Connectivity, which I'm not yet familiar with. I can't really judge whether it helps any, but for $4 more than the PT card, it might be worth experimenting with.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,741
Location
USA
Maybe I'll give it a try and see how it works. I've been playing around with iSCSI from my desktop (Win7) over to my cheap Dell SC420 running OpenFiler, and I'm only getting about 40MB/sec. I'm testing on a Samsung F3 1TB drive, which I know can push faster than that. The Dell has a Broadcom BCM5751 GbE controller, and I don't know if it's any good.
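
Back-of-the-envelope, 40MB/sec is nowhere near what a single GigE link should carry, so I don't think the drive is the bottleneck. Rough math (treating ~90% of raw line rate as a realistic ceiling after protocol overhead, which is just my assumption):

# Rough ceiling for iSCSI over one GigE link vs. what I'm seeing.
raw_mb_s = 1000 / 8               # 1Gb/s = 125MB/s raw
practical_mb_s = raw_mb_s * 0.9   # assume ~10% lost to TCP/IP/iSCSI overhead
observed_mb_s = 40.0
print(f"Practical ceiling ~{practical_mb_s:.0f}MB/s, "
      f"using about {observed_mb_s / practical_mb_s:.0%} of the link")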

Unrelated...how much have you worked with VLANs? Do I need two VLAN-capable switches to create proper VLANs, or can I get away with one? I'm thinking of going with a single 24-port ProCurve instead of two 8-port switches.

I'm looking at two ports per ESX server going to the switch for iSCSI, and two ports per ESX server for LAN. Then I might do 2-4 ports from the switch to the NAS (and team them in OpenFiler). I'd like to separate the iSCSI traffic from the LAN traffic. I'd also want one port connecting to my router for internet access to the ESX VMs. I don't know if all that is possible...or routable...which is why I'm building this to learn. I just want to make sure I get the right hardware to learn from.
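
To keep it straight in my head, here's a rough sketch of the port budget I'm picturing (assuming two ESX hosts; the names and counts are just placeholders, not a real switch config):

# Rough port budget for the home lab -- placeholder numbers, not a real config.
plan = {
    "iscsi_vlan": {
        "esx_ports": 2 * 2,    # 2 aggregated ports per ESX host, 2 hosts assumed
        "nas_ports": 2,        # 2-4 teamed ports to the OpenFiler box
    },
    "lan_vlan": {
        "esx_ports": 2 * 2,    # 2 aggregated ports per ESX host for the public side
        "router_uplink": 1,    # internet access for the VMs
    },
}
total = sum(sum(group.values()) for group in plan.values())
print(f"Switch ports needed: {total}")  # 11 here, so a single 8-port switch is already out

That's before I even think about a separate vMotion network, which is part of what's pushing me towards the 24-port switch.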
 

timwhit

Hairy Aussie
Joined
Jan 23, 2002
Messages
5,278
Location
Chicago, IL
By the time you're finished learning, you will have a state-of-the-art data center in your apartment. Then all you'll need is a fat pipe and you'll be set.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,741
Location
USA
Maybe FIOS will be available by then and I can host my own server for SF as a VM. I was thinking I could sell the Dells after I'm done since they'll work fine for basic desktop use. The networking I'll probably save for something else since I also plan to have a NAS server configured for storing a lot more stuff than just this project. Overall the project will probably cost a lot less than what ddrueding has spent on SSDs. :)
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,534
Location
Horsens, Denmark
:p

I was planning on learning about iSCSI by doing something similar, but it won't apply to the environments I like to work in. I do need to learn about VLANs, and I have quite a few 3Com managed switches to play with (some 48-port, some 24-port, all GbE), but I haven't gotten to it yet.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,741
Location
USA
I'm surprised you never tried iSCSI in your environments with vMotion. Once you have it (along with the Storage vMotion feature) you'll wonder why you ever did things the hard way. ;-) I upgraded 10 ESX servers at work to the latest ESX 4.0, and not a single user had a clue it was happening to their VMs until I told them the upgrades were complete. I also migrated everyone off an old storage array onto a new one, again without them ever knowing it happened. I then phased out the old array and life was grand.

I think I'm leaning towards the 24-port switch and keeping it all together. I can split the traffic into separate broadcast domains, which will isolate iSCSI from the public LAN and also from the vMotion traffic. The 1810G-24 is fanless and draws little power. I did some more reading on those Intel dual-port NICs, and apparently they offload some of the overhead of the VMware virtual switches to give more throughput through something they call VMDc. I don't think I'll be able to compare its performance benefit unless I buy a PT version of the card. Their testing examples showed throughput going from 4.0Gb/s up to 9.5Gb/s when using the 10Gb adapter with jumbo frames.
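
The jumbo frames part at least makes sense on paper: the payload efficiency gain is small, but the host has to process far fewer frames per second, which I assume is where a lot of the CPU savings come from. Rough math, assuming plain TCP over IPv4 with no options:

# Payload efficiency and frame rate, standard vs. jumbo frames (plain TCP/IPv4 assumed).
wire_overhead = 14 + 4 + 8 + 12     # Ethernet header + FCS + preamble + inter-frame gap
header_overhead = 20 + 20           # IPv4 + TCP headers
for mtu in (1500, 9000):
    payload = mtu - header_overhead
    wire = mtu + wire_overhead
    print(f"MTU {mtu}: {payload / wire:.1%} efficient, "
          f"{1e9 / 8 / wire:,.0f} frames/sec at GigE line rate")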

I found a VLAN primer article tonight that you might also find useful. There's a lot of information in there, and I need to get some more sleep before reading it again.
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,534
Location
Horsens, Denmark
Thanks for the VLAN stuff. vMotion would be neat, but if I constantly have to deal with the complexity and performance drop attached to iSCSI, it just isn't worth it. I've already migrated everyone from standard servers to virtual machines, and then consolidated virtual machines, without anyone knowing.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,741
Location
USA
I'll see what I can achieve with iSCSI using dual-port GigE adapters. My drives won't be SSDs like yours, but I'm hoping to come close to the performance of a normal SATA drive used locally in a system.

Even in the basic tests I ran yesterday, the seek time was consistent with that drive natively attached to the SATA controller. The reads/writes were about 1/3 the normal performance, but I think it's because of the network I have and the lousy Dell I was testing against.
 