mouser - so glad you asked
I do some consulting work for a local school system. They have a special grant/fund for technology, so they have a lot more money than most schools. We're in the process of installing a 10 TB SAN. They just purchased 4 HP/Compaq DL380 servers with dual P4 Xeons.
They run Netware for file/print and NDS services. Netware (even OES Linux) doesn't take advantage of 64-bit chips, so I installed CentOS 4.2 x86_64 and VMware Server (beta). I can run 5 virtual servers on this box at about 5% CPU utilization. Using the SAN storage we carved out two 100 GB virtual drives, one for each of the two servers in this configuration. Storing the entire virtual machine on the SAN lets me switch a VM between physical servers. VMware Server can't move a running VM between hosts (the ESX version can), but for our purposes a 5 minute outage is acceptable while the VM spins up on the 2nd VM host.
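For the curious, the switch-over is basically stop-on-one-host, start-on-the-other, since both hosts see the same SAN volume. A rough sketch, assuming VMware Server's `vmware-cmd` tool; the `/vmstore/fileserver` path and VM name are made up for illustration:

```shell
# On the first host (if it's still reachable): shut the VM down cleanly
vmware-cmd /vmstore/fileserver/fileserver.vmx stop

# On the second host, which sees the same SAN volume:
# register the VM's config file with this host's VMware Server...
vmware-cmd -s register /vmstore/fileserver/fileserver.vmx

# ...and power it on. The outage is just a few minutes of boot time.
vmware-cmd /vmstore/fileserver/fileserver.vmx start
```

Nothing to copy, since the whole VM (disks and config) already lives on the shared storage.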
We just finished the testing phase and are starting to really plan VMs. So far we have:
1 Win2K3 VM
1 CentOS i386 VM
1 Windows XP Pro workstation VM
1 Netware 6.5 SP 4 VM
1 OES Linux SP2 VM
These don't really tax the system, so we have room for more. We plan on adding at least 1 more Netware VM and 1 more CentOS VM. That will give us a 3:1 consolidation ratio over physical boxes. With 2 bonded 1 Gbit NICs per host, we shouldn't see any performance lag in any of these systems.
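The NIC bonding is standard CentOS/RHEL 4 fare. A sketch of the config, assuming eth0/eth1 are the two slaves and 10.0.0.10 is the host address (both made up here); balance-alb mode works without any special switch config:

```shell
# /etc/modprobe.conf -- load the bonding driver for bond0
alias bond0 bonding
options bond0 mode=balance-alb miimon=100

# /etc/sysconfig/network-scripts/ifcfg-bond0 -- the bonded interface
DEVICE=bond0
IPADDR=10.0.0.10
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-eth0 -- a slave
# (ifcfg-eth1 is identical except for DEVICE=eth1)
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
```

If your switch supports 802.3ad link aggregation you can use `mode=802.3ad` instead, but that needs the switch ports configured to match.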
One really cool thing about the SAN is the ability to hide virtual drives (LUN masking). The VM servers can ONLY see the two 100 GB virtual drives we created; other virtual drives are available only to other servers. The cool thing about Linux is that I can see both VDrives from both servers, but I only mount a given partition on one server at a time.
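That "one server at a time" discipline matters: ext3 isn't a cluster filesystem, so mounting the same LUN read-write on both hosts at once would corrupt it. A sketch of the hand-off, with a made-up device name (`/dev/sdb1`) and mount point:

```shell
# Server 1 normally owns this virtual drive
mount /dev/sdb1 /vmstore2

# To move it to server 2: unmount on server 1 first...
umount /vmstore2

# ...then, and only then, mount it on server 2 (it shows up there
# under its own SCSI device name thanks to the shared SAN)
mount /dev/sdb1 /vmstore2
```

A shared-disk cluster filesystem (GFS, OCFS) would let both hosts mount it simultaneously, but for a manual failover setup this simpler approach is fine.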
Answer your question?