I’ve had my virtual infrastructure labs in their various incarnations for more than a decade now, and the most recent one is based around a “whitebox” dual quad-core Xeon server with 24 GB RAM. This system was built in 2007, so the processors (while the best of their time) have become a bit of a limitation now. It still runs 15–20 VMs quite happily, but the lack of newer features is becoming an issue, e.g. not being able to run nested x64 VMs on my virtual ESX servers. I run Windows 7 as the host OS with VMware Workstation on top to provide my environment. Before you object: there are good reasons why I still run my environment this way and not directly on ESX, which will become clearer later in the post. As it’s time to build another machine for my lab, I thought I should document my decision points and why I decided to go this way. This could also serve as a reminder of what one should consider when building or refreshing a lab. So, here goes…
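As an aside on the nested x64 point: on newer hardware, VMware Workstation (version 8 and later) can expose hardware virtualization (Intel VT-x/EPT or AMD-V/RVI) to a guest, which is what a virtual ESX server needs in order to run x64 VMs of its own. A minimal sketch of the relevant .vmx entries – the file name is hypothetical, and `vhv.enable` is the key setting:

```
# esx-lab-01.vmx - hypothetical virtual ESX guest on Workstation
# Expose hardware virtualization (VT-x/EPT) to the guest,
# so it can in turn run x64 nested VMs:
vhv.enable = "TRUE"
```

The same option is also exposed in the Workstation UI as “Virtualize Intel VT-x/EPT or AMD-V/RVI” in the VM’s processor settings.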
Why build/maintain one?
As soon as I discovered VMware Workstation (back in 2000), I started using it to have a lab environment in my home. That enabled me to do hands-on learning and study in a way I wasn’t able to before. In addition, it enabled me to create a home “production” environment – something that is always there, that I have to maintain as a service and upgrade when required. True, it doesn’t give exactly the same experience as a real production environment, but once it matures it can come close. That is especially true if you have a family with kids – they can be as tough a customer as you’ll find in the workplace!
I had a few decisions to consider, so I’ll go through them in order, along with the reasoning behind each:
- ESX or Workstation: I always have two main machines and they run 24/7. Not very energy-efficient, I know, but it’s very important if you’re trying to run your lab as if it’s a service environment. There is KVM access to both of them, they are placed in my study, and each has two displays attached. If you’re wondering where this is going – my family is familiar with Microsoft OSes and the rest of that ecosystem, and while they have their own devices, the host environment on these machines is always available (as a backup) in case they need it. ESX wouldn’t be of much use to them. This option also opens up motherboard and processor choices for me, as for ESX one really has to ensure that all hardware is on the hardware compatibility list. Another major reason for choosing a VMware Workstation setup is that, traditionally, features appear first in Workstation, and it’s generally easier and more compatible when working with virtual instances of other hypervisors, e.g. Hyper-V.
- Branded Machine or Whitebox: The availability of cheap HP Proliant Microservers has made branded servers a possibility at a very affordable price, with guaranteed compatibility and minimal hassle. However, ESX compatibility was not a big issue for me, due to point 1 above and also because I can run virtual ESX servers on VMware Workstation as well. I’ve always built my own machines, so hardware has never been an issue, and a whitebox gives me complete freedom over the choice of hardware, casing and motherboard – and therefore expandability. That’s partly the reason why I don’t have to upgrade my lab as often as some people do. Needless to say, I went for the whitebox option.
- Xeon or Core i7: A dual quad-core Xeon environment is pretty powerful and it was tempting to go for a similar setup again, but that can prove quite costly. Processors have also come a long way in the last few years, and with a bit of shopping one could build two reasonably spec’d Core i7 systems for the price of one big dual-Xeon one. That also provides the option of building one first and, if really required, investing in another later. Alternatively, one could load the first one with extras and gain performance that way (see Storage). The option I took was to go with a single Core i7 based system.
- Memory: Two decisions there – which “brand” and how much? Branded memory can be faster and more reliable, but the difference in cost between large amounts of branded and unbranded memory can be huge. In the past, I went the branded route because I wanted stability and performance, especially after investing in dual-Xeon environments. However, after deciding on a Core i7 setup, that was no longer a requirement. The option of buying a large amount of memory for less than half the price was available to me, and I took it. The rationale was that this would let me put twice the amount of memory into my system. True, it would be slightly slower, but I’ve found that for most virtual workloads the bottleneck is storage, not memory speed. Having twice the memory, however, allows me to put more VMs on the same environment without skimping on memory – the main cause of paging!
- Motherboard: There are lots of boards available for the Core i7, which made the decision swing in its favour as well. The money saved by not going dual-Xeon also enabled me to invest in a higher spec’d Core i7 board. Apart from other features, it allows me to put 64 GB RAM into it. This expandability was a key factor in choosing a whitebox solution.
- Storage: My Xeon server has a RAID 10 array of 7200 RPM disks to run its VMs. It’s great but is still limited to a certain number of IOPS. Even with thick-provisioned disks, as the number of VMs on the array grows, the machines start slowing down. For that reason, I decided to spend a bit of money on a 512 GB SSD as VM storage and thin-provision the VMs. With the IOPS provided by SSDs, thin-provisioning shouldn’t be an issue, and 512 GB is big enough to house a good number of VMs. For bigger VMs, I’ll still have my dual-Xeon running. That said, I’ve also bought a new casing for the new motherboard with enough space for storage – another reason why I go for whiteboxes!
- Casing: As mentioned in the previous point, I’ve bought a new casing for my new machine. I could have reused the casing from one of the machines I am retiring (I am using its PSU and video card!), but I wanted more space for peripherals and better cooling options for when I overclock the board (another thing I do with my whiteboxes :-)).
- Networking: One drawback of whiteboxes is that the embedded LAN controller is almost always not certified for ESX. As I plan to run VMware Workstation, it’s not much of an issue, but I have still considered this point in case I change my mind in the future. The contingency is that I’ll add ESX-certified NICs if I ever decide to install ESX on this box. I will also add NICs when I buy more external shared storage – planned for the future.
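To make the memory and storage reasoning above a bit more concrete, here is a rough capacity sketch. The 64 GB board limit and the 512 GB SSD come from the points above; every other number (host OS reserve, per-VM sizes, VM count) is an illustrative assumption, not a measurement:

```python
# Rough home-lab capacity estimate: how many VMs fit in RAM without
# overcommit, and how far thin provisioning stretches a 512 GB SSD.

def vms_by_ram(host_ram_gb, host_os_reserve_gb, avg_vm_ram_gb):
    """VMs that fit in RAM without memory overcommit (the main cause of paging)."""
    return int((host_ram_gb - host_os_reserve_gb) // avg_vm_ram_gb)

def thin_overcommit(ssd_gb, provisioned_per_vm_gb, used_per_vm_gb, vm_count):
    """Provisioned vs. actually-consumed space with thin-provisioned disks."""
    provisioned = provisioned_per_vm_gb * vm_count
    used = used_per_vm_gb * vm_count
    return provisioned, used, used <= ssd_gb

# Assumed numbers: 64 GB board limit, 4 GB reserved for the host OS, 3 GB per VM
print(vms_by_ram(64, 4, 3))  # -> 20 VMs without overcommit

# 20 VMs, each provisioned a 40 GB disk but actually consuming ~15 GB
prov, used, fits = thin_overcommit(512, 40, 15, 20)
print(prov, used, fits)  # -> 800 GB provisioned, 300 GB consumed, fits on 512 GB
```

The point of the second calculation is the one made in the Storage bullet: with thin provisioning you can promise far more disk than you own (800 GB vs. 512 GB here), and the SSD’s IOPS headroom is what makes that overcommit safe for a lab.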
Obviously, there are other points to consider when building or upgrading home virtual lab machines, e.g. noise (I build mine quiet as I keep them in the study), power consumption, remote access (if you’re keeping them in a garage), etc., but they’re more environmental than related to the build itself, so I won’t bore you with those.
I hope this all makes sense and I am open to any questions or comments about what I’ve said above. I don’t want you all to fall asleep (if you haven’t already :-)), so I will close this post here. In the next post, I’ll report on what I bought and built after making the decisions above, and how it went. So, stay tuned!
Update: Part II of the post is here: Building (or Upgrading) a Virtual Home Lab Machine – Part II
Update: Part III of the post is here: Building (or Upgrading) a Virtual Home Lab Machine – Part III