Virtualization project

jlangley

What is this storage?
Joined
Oct 1, 2009
Messages
3
Mercutio highly recommended this as the place to come and talk about this virtualization project. Forgive my slowness, because I'm new to the virtual world. My budget will be roughly $25K, though I haven't been given actual numbers. They'll approve or disapprove as I go along...annoying as hell, but whatever.

The details: We've decided to build the computers in-house. We will have spares on hand. I have a storage infrastructure built on an EMC Clariion.
I am interested in keeping costs down and easily managing guest OSes on a virtual machine platform with enterprise support (Citrix Xen, Hyper-V on Server 2008, or VMware ESXi).

Right now, the first 4 servers expected to be virtualized will be Windows servers we already have (aging hardware and bad configurations mean they'll be rebuilt from scratch, with the applications/databases migrated over), but there may be a need for Linux servers in the future. Also, we will probably build a much-needed test environment at a later date, so a replica of our DC would be very desirable.

I am the leading Windows expert, I have a competing teammate who is a Linux (can I use the phrase delusional crackhead?) guru, and our even more clueless manager doesn't have criteria for evaluating what will be best for us.

Truth be told, we are leaning towards VMware, but none of us knows how to present all of this so that it feels like the right decision, and a solid one at that.

I've installed Hyper-V, and I'm getting ready to create a test VM, but that's as far as I've gotten. The Linux crackhead has installed KVM (I think?) and wants to test Xen next. I will be testing VMware next, but once again...we don't even have criteria from our manager about what needs to be tested.

Please--if you could help me with this, I'd be most grateful. I am also slow at this, so patience would be nice too. Thank you so much in advance, because I know I will have 800 million-bajillion questions.

~Jessica
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
21,612
Location
I am omnipresent
Jessica is kinda-sorta one of my students.
I suggested she bring this here because there might be a wider range of opinions and biases than my own, and because it's an interesting and different thing to discuss.

Hi Jess. :)
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,537
Location
Horsens, Denmark
Hi, and welcome.

I have virtualized all but two servers at our company (about fifteen VMs in all). The only things that didn't get the treatment were our Smoothwall firewall and our fileserver. I'm using VMware ESXi because it is:

1. Free
2. Easy to work with
3. Very fast compared to the competition
4. Compatible with just about everything

I have a pair of machines built in-house (by me) that run them all, one live and one backup that receives a copy of the VMs every night through a script. The hardware is as follows:

Supermicro 3U Chassis with redundant power supplies
5 independent WD VelociRaptors (VMs distributed manually) and a 2TB WD drive for backups
ASUS Dual Socket 771 motherboard
2 Xeon quad-core CPUs
12GB RAM

This machine handles all of our servers without issue. The databases hit the disks pretty hard (one each of SQL Server, Oracle, and Exchange), so they have their own drives, and I am planning on moving them to SSDs (Intel X25-M) soon. CPU usage never goes above 50%, and some of the systems that require multiple servers (DB, app, web) actually run faster since there is no network bottleneck.

Other than the DC and Exchange servers, everything is construction-company-specific, so I don't have anything really good for comparison.
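The nightly copy is nothing fancy. Something along these lines gives the general idea; the hostnames, datastore paths, and guest names are placeholders, and the real thing wants more error handling:

Code:
#!/usr/bin/env python
# Sketch of a nightly VM copy job: pull each guest's directory from the live
# host onto the backup host over scp. Hostnames, paths, and guest names are
# placeholders, not the real environment.
import subprocess
import sys

LIVE_HOST = "root@esxi-live.example.local"        # placeholder hostname
SOURCE_DATASTORE = "/vmfs/volumes/datastore1"     # placeholder datastore path
BACKUP_ROOT = "/vmfs/volumes/backup2tb/nightly"   # placeholder backup path
VMS = ["dc01", "sql01", "oracle01", "exch01"]     # placeholder guest names

def copy_vm(vm_name):
    """Copy one guest's directory from the live host to the local backup store."""
    src = "%s:%s/%s" % (LIVE_HOST, SOURCE_DATASTORE, vm_name)
    print("Copying %s ..." % vm_name)
    return subprocess.call(["scp", "-r", src, BACKUP_ROOT])

if __name__ == "__main__":
    failed = [vm for vm in VMS if copy_vm(vm) != 0]
    if failed:
        print("FAILED: %s" % ", ".join(failed))
        sys.exit(1)
    print("All VMs copied.")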
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
21,612
Location
I am omnipresent
As I understand the situation there are a couple of things going on:

One is that they're looking for something that's going to have enterprise support if need be. ESXi is free, and there's an upgrade path to a full VMware installation with all the attendant management tools if that's ever needed.

Microsoft's solution is "free" in that it's part of Server 2008, and it integrates well with familiar Microsoft technology, but it doesn't have the software ecosystem that VMware does. Microsoft is only just now releasing physical-to-virtual migration software, for example. And from what I've read, the "Easy Migrate" feature that lets a guest image move from one host system to another doesn't work all that well. Hyper-V is apparently lacking in features for Linux guests as well.

Still, I suspect Microsoft's setup will be cheaper as far as licensing, and I'm not sure if the environment Jess is describing will benefit from all the advanced stuff VMware will actually do, and likewise I know it's easier to get an answer out of Technet than it is from EMC's service.

David, are you actually moving your guests among your servers to do maintenance, or do you just down them at times when you need to?

My own experience is primarily in virtualizing single systems either for a specific purpose (spam filters sitting in front of Exchange) or for continuity of a legacy server. It's not the same thing as setting up 10 systems on a single host and expecting everything to play nice.

Another issue, one that I'm hoping Handruin might have a tiny bit of insight with, is using a Clariion NAS over iSCSI to contain the source files for the virtual machines. Jess's organization has centralized its enterprise storage on that device, and they're planning to move things like virtualized database servers into that space, but I have some suspicions that no matter what host OS they're using, it might be a big boat anchor to overall VM performance compared to simple local storage of the VMs.

I've never looked at Xen at all in a commercial environment, either.

Personally, were I in her place, I'd probably just throw up an ESXi system and pound on it until it did what I wanted it to, but no one in Jess's organization is really capable of doing that. They also don't have very good metrics for what is actually necessary or acceptable, but just like mainstream IT everywhere, they don't have the staffing, budget, or management to actually figure that stuff out, which is why I suggested she ask some folks who might actually have some wisdom about this sort of project.
 

P5-133XL

Xmas '97
Joined
Jan 15, 2002
Messages
3,173
Location
Salem, Or
Well, I do have opinions on this subject. There are lots of variables and little details, so it is hard to talk concretely about much of it. However, since this appears to be your first foray, there are some common issues.

The first is that Hyper-V has limited Linux support, so consider that if you're heading in that direction. That being said, Microsoft can change this at the drop of a hat, and they will if it is to their benefit. So far, they are more concerned with locking people into Microsoft products.

Next, treat redundancy as a very important consideration that usually doesn't get its fair share of attention. You will be consolidating lots of servers onto a few. Previously, a hardware failure was limited to a subset of your core infrastructure; once you consolidate onto fewer servers, a hardware failure will potentially affect far more of your business. Depending on your availability requirements, you need to seriously plan for that inevitable failure as you design for virtualization.
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,537
Location
Horsens, Denmark
David, are you actually moving your guests among your servers to do maintenance, or do you just down them at times when you need to?

My maintenance/backup window is about 3 hours these days. The backup takes about 90 minutes. If I can get what I need done in the remaining time, I just shut it down. This is also my first tier of backups; the backup system has a complete "startable" copy of every production machine that is current as of the night before. The only thing I need to do is reprogram the static IP information (after a migration, the guest sees the NIC as new and goes to defaults). The backup system is able to copy its VMs to our off-site location whenever, as it doesn't affect the production systems' performance.
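That reprogramming step is quick. On a Windows guest it boils down to a couple of netsh commands, which you could wrap in a small script; the adapter name and addresses below are just placeholders:

Code:
# Sketch: reapply a Windows guest's static IP and DNS after a migration, when
# the virtual NIC shows up as a new adapter and falls back to DHCP defaults.
# Adapter name and addresses are placeholders.
import subprocess

IFACE = "Local Area Connection"                   # placeholder adapter name
IP, MASK, GATEWAY = "192.168.10.21", "255.255.255.0", "192.168.10.1"
DNS = "192.168.10.5"

subprocess.check_call(["netsh", "interface", "ip", "set", "address",
                       "name=%s" % IFACE, "static", IP, MASK, GATEWAY])
subprocess.check_call(["netsh", "interface", "ip", "set", "dns",
                       "name=%s" % IFACE, "static", DNS])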

Keep in mind, this is a pretty quick time for backing up 15 VMs. Most of them have disks under 8GB, many under 4GB, and even the biggest are under 45GB.
 

Fushigi

Storage Is My Life
Joined
Jan 23, 2002
Messages
2,890
Location
Illinois, USA
It might help us to know the operating systems and kinds of applications that will be run, as well as the relative size or number of users, so we can gauge the expected workload. For instance, while David has virtualized his database servers, my employer cannot, as they fail to perform adequately (we have over 30K employees, plus some apps are used by our clients, who are large global companies). Certain application servers *cough* WebSphere *cough* also don't seem to do well living in a VM.

That said, we use VMware Infrastructure and as a general rule can get 10-16 VMs (or OS installs, if you prefer) per physical server. We have over 1000 server OS installs; the vast majority are Windows. Our standard hardware is a Dell (I think the R710 is the flavor of the month) with dual quad-core Xeons. Some get local storage, but we are shifting more and more to our EMC SAN via Fibre Channel.

A side benefit is that your company's power bill will go down. Virtualization as a rule reduces data center power consumption, which reduces the heat generated, which in turn reduces the AC load, so your operating costs will go down. You can search for numbers, but I'd think the power bill would drop by $20-30 or more per month for each physical server you virtualize.
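Rough back-of-the-envelope math bears that out; the wattage and electricity rate below are just guesses to illustrate:

Code:
# Back-of-the-envelope power savings per decommissioned physical server.
# Wattage and electricity rate are guesses; plug in your own numbers.
SERVER_DRAW_WATTS = 300      # typical draw of an older server (guess)
RATE_PER_KWH = 0.10          # US$ per kWh (guess)
HOURS_PER_MONTH = 24 * 30

kwh_per_month = SERVER_DRAW_WATTS / 1000.0 * HOURS_PER_MONTH
dollars_per_month = kwh_per_month * RATE_PER_KWH
print("~%.0f kWh, ~$%.2f per month per server (before AC savings)"
      % (kwh_per_month, dollars_per_month))
# 300 W * 720 h = 216 kWh, or about $21.60/month, in line with the $20-30 estimate.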

BTW, I just went to Dell's site and configured an R710: dual Xeon X5550s, 16GB RAM, 2x160GB SATA drives (I assumed minimal local disk since you'd use the SAN), an extra quad-port Intel Gb Ethernet card (when possible, give each VM its own NIC), a QLogic dual-channel 4Gb/s Fibre card for the SAN connection, rack mount kit, redundant PSU, VMware ESXi pre-loaded, and a 3-year warranty with 24x7 4-hour response. No guest OSes. All of that was just under $8,900. The QLogic is expensive, but adding a RAID card + local disks would cost even more, so go with the SAN to keep costs down.
 

jlangley

What is this storage?
Joined
Oct 1, 2009
Messages
3
OK, so from what I'm seeing here, I've started a list to organize our project. If you could take a look at this and let me know if there are any additions, clarifications, changes, or comments, I would completely appreciate it. Thanks for the great replies so far.

Virtual Criteria

1. Management
   A. Costs
      1. Up-front licensing
      2. Long-term licensing
      3. Hardware
         a. Specs relative to how many VMs
            1. VM specs
         b. Redundancy
      4. Enterprise support

2. Interface
   A. Ease of use
      1. Creating VMs
         a. From scratch
         b. Replicas
      2. Backups
         a. Software/hardware backup solutions (SAN vs/+ BE)
            1. Time it takes
            2. Migration
            3. Redundancy/failover

3. Performance
   A. Speed
   B. Compatibility across platforms
   C. Features available
      1. Will we use the features?
   D. Does the application virtualize well?
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
21,612
Location
I am omnipresent
Well, let's go down the list:

Management:
* VMware at the moment has much better management in complex environments, and its management scales well to deal with more guest images than you're likely to have in your environment.

* Microsoft really only has a simple offering at the moment, but in a setting that's probably familiar to you and your boss.

* I really have no comment on Citrix-anything. My feeling is that if you're looking at Xen, you might as well be looking at entirely free solutions.

Costs
* "Free" is not an option. Jessica needs to be able to call somebody.

* VMware vSphere Standard Edition + 2 years of 24x7 support = $1400 (per physical CPU, if I'm reading this right)

* Server 2008 R2 x64 OEM = $659 + whatever an incident costs from MS ($250?), or the cost of a Technet subscription ($1200/year for the cheapest one, which includes two support incidents).

* VMware ESXi is legitimately free with no ongoing costs and is compatible with other VMware products. I feel this merits consideration or at least mention.

Ongoing costs
* Looks like 24x7 Commercial Support for VMware is $200 a year. I don't see costs listed for subscription upgrades.

* Server 2008 support is usually doled out on a per-incident basis, unless your employer wants to pay for an MSDN sub, which they should be doing anyway and aren't.
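To put rough numbers side by side, here's the arithmetic using the figures above; the host count, CPUs per host, years, and incident count are just assumptions for illustration:

Code:
# Rough licensing comparison built from the figures quoted above. Host count,
# CPUs per host, years, and support incidents are illustrative assumptions.
HOSTS, CPUS_PER_HOST, YEARS = 2, 2, 3

# VMware: vSphere Standard + 2 years of 24x7 support at $1400 per physical CPU,
# then (assumed) $200/year per CPU for support after that.
vmware = HOSTS * CPUS_PER_HOST * (1400 + 200 * (YEARS - 2))

# Microsoft: Server 2008 R2 OEM at $659 per host, plus an assumed two $250
# support incidents per year (a TechNet sub at ~$1200/year is the alternative).
microsoft = HOSTS * 659 + 2 * 250 * YEARS

print("VMware over %d years:    $%d" % (YEARS, vmware))
print("Microsoft over %d years: $%d" % (YEARS, microsoft))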

Hardware
* Pretty straightforward. I'm guessing we're talking about 1 or 2 quad-core 5000-series Xeons with 8 or 16GB RAM. Whatever. I'll say $2500 per server, and I'm guessing you'll want to buy two or three of them, with at least one serving as an available spare.

* There is some concern in my mind at least over whether it would be better to serve the guest images from local disks or from the NAS. I suspect performance will be better with image files on local drives, but that's not how your storage infrastructure is set up. I think it bears testing, though, even if it's only on a 250GB SATA drive.
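A crude way to get that first-order number: time a big sequential write and read against a local drive, then against a datastore carved from the Clariion, and compare. The path and file size below are placeholders, caching can flatter the read number, and it's no substitute for a real benchmark, but it will show whether there's a big gap:

Code:
# Crude sequential-throughput check: write then read a large file and time it.
# Run it once with TEST_PATH on a local drive and once on the Clariion-backed
# datastore. Path and size are placeholders; read numbers can be inflated by
# caching, so treat this only as a sanity check.
import os
import time

TEST_PATH = "/vmfs/volumes/datastore1/throughput.tmp"   # placeholder path
SIZE_MB = 2048
CHUNK = b"\0" * (1024 * 1024)

start = time.time()
with open(TEST_PATH, "wb") as f:
    for _ in range(SIZE_MB):
        f.write(CHUNK)
    f.flush()
    os.fsync(f.fileno())
print("write: %.1f MB/s" % (SIZE_MB / (time.time() - start)))

start = time.time()
with open(TEST_PATH, "rb") as f:
    while f.read(1024 * 1024):
        pass
print("read:  %.1f MB/s" % (SIZE_MB / (time.time() - start)))
os.remove(TEST_PATH)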

VM Specs
* In general, many servers are drastically overspecified in terms of hardware. You may find that your guest servers perform adequately with very modest allocated resources. I'd start by looking at the amount of RAM actually used by the existing servers. I suspect many of your guests will be in the 512MB - 2GB range, since that's where most Windows servers I deal with sit.

* You're more familiar with the storage requirements than we are.

Redundancy
* At present, the plan is to have guest image files sitting on the Clariion, which provides substantial redundancy on that level.

* I suspect you will need to operate at least two host servers full time. Is one dedicated backup system enough?

* Power supplies, local disk drives, disk controllers, RAM and your fans (i.e. the stuff that breaks) should be purchased for individual component redundancy as well.

* You'll also want some heavy-duty power protection. I'm guessing something north of a 2000VA UPS.
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,537
Location
Horsens, Denmark
Another good point in favor of a VMware solution: compatibility across their product line. I have migrated VMs from VMware Workstation onto VMware Server (the one that runs on top of another OS), and then onto the ESXi machine. I've then pulled a misbehaving VM back onto my local machine running VMware Player to figure out what was going on.

I'm not saying that failing over to running the VMs on a workstation is the best idea, but I've done it, and it saved my bacon when I didn't have a hot spare server.
 

Bozo

Storage? I am Storage!
Joined
Feb 12, 2002
Messages
4,396
Location
Twilight Zone
From my limited experience:

Hyper-V is okay but limited in some respects. It seems tailored to MS products, the screen resolution and display quality of the guest OS are sub-par, and there is no conversion software yet.

Citrix Xen is a bare-metal solution that is free. I found it hard to set up without a DNS server for some reason (we use static IP addresses). Its conversion software is limited in what it can convert and somewhat confusing to set up.

VMware Server V2 is really nice. It was the easiest to install and set up. It even runs on XP, so no server OS is required. Saves $$$. The Linux version is slightly more difficult to install, but worth the effort if you want to save more $$$. Their conversion software converts almost anything, including Acronis files. Acronis runs fine on the guest OS as well.

VMware ESXi is also a bare-metal solution. I didn't do much testing with it, as it didn't recognize our RAID controller (3ware) or network cards (D-Link).

Support: I found all my questions answered on the net, usually in the forums of the product in question.

Hope this helps.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,741
Location
USA
Mercutio, I've only run ESX Server 3.x and vSphere (4.0) on a Clariion over a 4Gb FC SAN. I'm not familiar with the performance of ESX when connected over iSCSI. At work we aren't specifically testing the VMware product, but actually using it, so this isn't something I've been able to try. jlangley, if this is something you really need, calling VMware to get performance numbers might be worth a shot. I'm hesitantly concerned that hosting the VMs over 1Gb iSCSI might have performance implications, but I don't know for certain.


Things to consider:
VMware offers a free tool called VMware Converter to migrate existing physical systems into VMs. I've used this and it works amazingly well, even for machines that have to remain hot during the transition. I took a SQL Server 2005 machine running Server 2003 and migrated it into a VM while it was up and running.

The converter also does a great job of going between products, as ddrueding already mentioned. I've converted VMs from my desktop and moved them to ESX many times.

High availability and maintainability
With vSphere (ESX 4.0) and appropriate licensing, you can implement the VMotion capability. I've used this over the years with great success when performing hardware maintenance. If you're not familiar with it, it allows you to migrate a live VM from one physical ESX system to another with no downtime. If you add in DRS, you can have vSphere migrate VMs to different physical hardware as loads shift; less busy vSphere systems can take on more VMs to distribute load. You can also implement VMware Consolidated Backup to take live backups of VMs if that's something you need.

Cost - support
Since I work for the parent company of VMware, I (or the team I'm on) get their software license for free (actually it's some internal cost that I don't know). Regardless, I've been using ESX since the 2.x version (maybe 4-5 years), and in all that time I've only once had to call support. Even then I wasn't allowed support (for internal company reasons) and had to figure out that one issue myself. In short, like Bozo, I've found all the support I need from Google and the VMware forums, and we've been able to stay self-sufficient and keep a very high-functioning virtualized environment; but the option for paid support is there if you can endure the cost.

Cost - Licensing
For short- and long-term licensing, VMware has been very good about migrating licenses to newer versions. I've seen our 3.5 licenses upgraded to 4.0 licenses with equivalent functionality. You can even call VMware for specifics to ease concerns. vSphere is pretty new, so even if you put some budget into that product line, it will likely stay in the 4.x realm for some time. I don't know any timeline specifics, so I'm just making a guess based on the prior history of their products.

Cost - hardware
One of the nice things about the aforementioned VMotion is that you can roll new hardware into your vSphere farm over time and migrate your VMs (live) as you phase in new hardware and phase out the old. You can buy just what you need today and migrate slowly onto new hardware later. The amount of hardware you need is tough to gauge without knowing more about how you plan to use your VMs.

You can now also do HA clustering with the vSphere software to give true redundancy of virtual machines across multiple physical hosts. If you have a hardware failure, the other VM becomes the master and a new shadow copy is created on the remaining hardware to maintain the HA policy.

There is also another vCenter feature that uses VMotion and DRS to move VMs around so that hardware can be placed into a suspended state during off-peak times. This can save you money through reduced power costs while hardware is automatically suspended. When peak times resume, the hardware comes back online and VMs migrate back over to redistribute the load.

Ease of use
I presently use vCenter with about 14 ESX servers, a mix of vSphere 4.0 and ESX 3.5. One vCenter interface allows me to see and manage the entire infrastructure of somewhere between 20-50 VMs. I have template machines that I roll out when people on my team request them, and it's a breeze to deploy and manage.

Creating VMs can happen in several ways:
1.) A raw install of the core OS is done just like on a physical machine. However, you can use your desktop or laptop to supply the CD-ROM used to install, say, Server 2003, Linux, etc., without having to put that CD-ROM into the physical ESX server.

2.) Clones of a working machine can be created with a simple right-click. A 100% duplicate is made with minimal interaction.

3.) Templates can also be created and rolled out with customization policies. You can deploy from a custom-built template and have the hostname, IP address, etc. already configured without once interacting with the desktop of the guest OS.

Snapshots are a nice management tool that allows you to take a point-in-time snapshot of the current state of the guest OS and then proceed with an experiment (such as a patch or configuration change). If the experiment within your guest OS goes bad, you can roll back the snapshot to the last known good point, and every change is reverted with no leftover mess.
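Snapshots can also be scripted. As a minimal sketch using VMware's Python bindings (pyVmomi), with the vCenter hostname, credentials, and VM name all placeholders, taking a pre-patch snapshot looks roughly like this:

Code:
# Minimal sketch: take a "pre-patch" snapshot of one VM through vCenter with
# pyVmomi. The vCenter hostname, credentials, and VM name are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect

ctx = ssl._create_unverified_context()            # skip cert checks in this sketch
si = SmartConnect(host="vcenter.example.local",   # placeholder vCenter
                  user="administrator", pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    vm = content.searchIndex.FindByDnsName(datacenter=None,
                                           dnsName="sql01.example.local",
                                           vmSearch=True)
    if vm is None:
        raise SystemExit("VM not found")
    # memory=False: don't capture guest RAM; quiesce=True: flush guest file systems
    vm.CreateSnapshot_Task(name="pre-patch",
                           description="before this week's patches",
                           memory=False, quiesce=True)
finally:
    Disconnect(si)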
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,537
Location
Horsens, Denmark
For the record, it looks like you can get VMware's Platinum 24x7 tech support on the free ESXi platform. The only thing I miss with the free management agents is the ability to see multiple servers at the same time. I can do migrations with VMware Converter or scripts, and the free VMware Infrastructure Client can do all kinds of things (one server at a time) and includes an update manager.

I'm considering upgrading to a paid vSphere product, but I would have to go all the way to the "Advanced" product for the good stuff (VMotion). That is $3500/CPU for 3 years and 12x5 tech support.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,741
Location
USA
Don't forget that in order to use VMotion, your VMs must reside on a shared datastore that can be seen by all vSphere servers. You also need a private GigE network connecting all participating systems, and the same processor family across all machines.
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,537
Location
Horsens, Denmark
Don't forget that in order to use VMotion, your VMs must reside on a shared datastore that can be seen by all vSphere servers. You also need a private GigE network connecting all participating systems, and the same processor family across all machines.

Ugh. Screw that. I'm fairly certain I'm getting better performance with local storage. It'll be a sure thing when I get the SSDs in there.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,741
Location
USA
When I say shared storage, I don't mean network storage. You just need multiple machines to have storage access to a common array. This is one area where FC on a SAN, with LUN masking, makes it an easy reality. Performance doesn't have to suffer; you just need a different type of access to it.
 