A few months back, I started to think about having another machine built for my lab, as the one I built about a year and a half ago was starting to become a bit cramped. There is a lot going on these days and you do need enough of a lab to have most (if not all) of your virtual machines running at the same time.

I’ve always been in the camp of people who run a hosted VMware Workstation based lab, instead of a native nested ESXi environment (for reasons mentioned in the post linked above). For once, I thought about deviating from that and went for a Shuttle XH61V based setup, with a view to running VSAN as storage on a three-host cluster. Those who follow me on Twitter will have seen me tweet about it at the time.

That didn’t go very well and it’s a story for another day, but suffice it to say, the setup wasn’t stable enough under ESXi 5.5 to run anything reliably and I was forced to abandon that plan. Fortunately, I had a plan B so all is not lost: the hardware is now serving as a resource cluster under ESXi 5.1. A fair amount of investment not going my way was hard to swallow, but you win some and you lose some, so it’s OK.

That means I am now firmly back to my original approach: build the biggest host I can afford, run nested labs on it, and keep adding hosts to my environment every now and then. After investing a lot in the failed Shuttle-based attempt, it was hard to go for another host so soon, but as I need ESXi 5.5 (or above) compatible lab resources, I decided to go for it. I must add, I also have a “very” understanding wife!

Issues to address:

So, after such a long preamble, what did I end up with? Before I go into that, I think it would be useful to go over the considerations behind the purchase. I’ve been very happy with my previous machine and it has served me well. However, there were a couple of things I wanted to improve on:

Storage: I used StarWind iSCSI SAN to create a host-based iSCSI setup and used two SSDs to host my datastores. The software is extremely flexible: you can have pretty much any kind of storage presented to your ESXi hosts and, best of all, it comes up with the OS, guaranteed to be ready for your ESXi hosts (as compared to appliances), which is ideal for my kind of setup. However, I see a limitation when high-throughput tasks are run, e.g. deploying a View pool. All traffic gets bottlenecked because I normally use a bridged networking setup, so the traffic passes through the physical network adapter, which is limited to 1 Gbps.
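
To put rough numbers on that bottleneck, here is a quick back-of-the-envelope sketch (the SSD figure is an assumption for a typical SATA III drive, not a measurement from my setup):

```python
# Why a bridged 1 Gbps NIC caps iSCSI datastore traffic (illustrative figures).
nic_limit_mb_s = 1 * 1000 / 8      # 1 Gbps is roughly 125 MB/s of payload
ssd_seq_mb_s = 500                 # assumed sequential throughput of a SATA III SSD

print(f"NIC ceiling            : {nic_limit_mb_s:.0f} MB/s")
print(f"SSD capable of         : {ssd_seq_mb_s} MB/s")
print(f"Headroom lost to bridge: {ssd_seq_mb_s - nic_limit_mb_s:.0f} MB/s")
```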

CPU: As the number of machines grows in my nested ESXi cluster, a single CPU is becoming a bottleneck as well, especially when high-activity tasks are run. This is understandable, as the nested environment is also dealing with an extra OS layer, i.e. it’s not as efficient as it could be. Plus, there are only so many threads that a single Intel Core i7 based host can have.
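
As a rough illustration of that thread pressure, here is a sketch comparing a typical 4-core/8-thread Core i7 with a hypothetical dual six-core Xeon box (all figures assumed for illustration, not my exact specs):

```python
# Rough vCPU-to-hardware-thread overcommit comparison (illustrative numbers).
def overcommit_ratio(vms, vcpus_per_vm, cores, hyperthreading=True):
    """Ratio of allocated vCPUs to available hardware threads."""
    threads = cores * (2 if hyperthreading else 1)
    return (vms * vcpus_per_vm) / threads

# 20 two-vCPU VMs on a 4C/8T Core i7 vs an assumed dual six-core Xeon setup:
print(f"Core i7   : {overcommit_ratio(20, 2, cores=4):.1f}x overcommit")
print(f"Dual Xeon : {overcommit_ratio(20, 2, cores=12):.1f}x overcommit")
```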

Logical Decision Points:

Following the grand tradition of my previous machine build posts (click here for reference to the first post), I’ll document the decisions I made for this build and why, so that it can help if you’re also looking to build something similar:

ESXi or Workstation: I deliberated on it for a while this time, and ESXi as the base OS would’ve been far more efficient, but in the end, I went for Windows. This is for the very same reasons as before, i.e. the machine sits in my study, so it can also be used by the family if required. Running VMware Workstation on top also means keeping my options open in terms of hardware, VM capabilities and storage, especially when it comes to carving up my SSDs for different use cases.

Branded Machine or Whitebox: This is another thing on which my view hasn’t changed very much. Branded would have been better had I gone for ESXi as the host environment. Once you go for Windows, all that remains is to pick a motherboard that has drivers for the OS you want to run on top. So, my choice is still a whitebox.

Xeon or Core i7: As I mentioned earlier, I found that as the number of machines increased in my nested setup, the CPU started to become a bottleneck. It’s still OK under normal operation but I am beginning to feel the need for more threads. Another consideration was the amount of RAM in the system: I was thinking about having a motherboard with the capability to go to at least 128 GB RAM, even if I don’t populate it fully for now. This would mean being able to fit more machines on the host and, judging by my current machine, a single Core i7 wouldn’t do. So, the decision was to go back to a dual-Xeon motherboard. Sure, it’s a far more expensive setup, but what you need to run these days has grown, and adding a new machine every so often will cost more in the long run, not to mention finding the time to do it all.

Memory: As in the previous point, I want more capacity in my host this time. I have gone for a motherboard that supports up to 256 GB (with registered memory) but I will probably only go up to 128 GB. Higher-density RDIMMs were comparatively far more expensive, and not going for the most advanced Xeons means I might be pushing the CPUs even with just 128 GB worth of VMs.
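
As a quick sanity check on what 128 GB buys in practice, here is a sketch with assumed figures (the host OS overhead and per-VM sizes are illustrative, not my actual allocations):

```python
# How far does 128 GB go? All figures are assumptions for illustration.
host_ram_gb = 128
host_os_overhead_gb = 8            # assumed for Windows + VMware Workstation
ram_per_nested_esxi_gb = 16        # assumed size of each nested ESXi VM

usable_gb = host_ram_gb - host_os_overhead_gb
print(f"Usable RAM       : {usable_gb} GB")
print(f"Nested ESXi hosts: {usable_gb // ram_per_nested_esxi_gb}")
```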

Motherboard: Once I decided on the CPU and RAM requirements, I found that there aren’t many motherboards around that can cater for them. In a way, it’s a good thing as it focusses your mind on a few; however, you do feel a bit limited. In the end, I had another requirement that pretty much eliminated all other motherboards I could find: the need to have more than two SATA III interfaces! I pretty much exclusively use SSDs in my hosts now. They are all SATA III and I didn’t want to lose performance by running them on SATA II. I will have at least three in my system, and pretty much all dual-Xeon boards have just two SATA III interfaces! However, there is one that is essentially the bigger brother of the motherboard I am currently using (ASUS P9X79 Pro) and has an extra SATA III controller, adding four more SATA III connectors. That pretty much sealed it for me.
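
For context on why SATA II would hurt, here is the nominal arithmetic (the link rates and 8b/10b encoding are standard SATA facts; the SSD figure is an assumption):

```python
# Nominal SATA payload rates: 8b/10b encoding puts 10 bits on the wire per data byte.
def sata_payload_mb_s(link_gbps):
    return link_gbps * 1000 / 10

print(f"SATA II  ceiling: {sata_payload_mb_s(3):.0f} MB/s")   # ~300 MB/s
print(f"SATA III ceiling: {sata_payload_mb_s(6):.0f} MB/s")   # ~600 MB/s
# A typical SATA III SSD (assumed ~500 MB/s) would be throttled on a SATA II port.
```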

Storage: In terms of hardware for the storage, I am still going to have a completely SSD-based environment. Having a nested environment means I can be creative with how much SSD is presented to the nested ESXi VMs, and flexible about how much is allocated to which machine. If at a later date I want to introduce regular drives for capacity, I can still do that, as long as I have the space and power in the casing (coming up next).

Casing: Even though I am about to retire my other dual-Xeon based server (circa 2007), which is housed in an E-ATX casing, it’s still running and therefore the casing is not available yet. So, I had to invest in a new one. The main consideration was that, above all, it should fit the motherboard (challenging – don’t miss the next post!). As the motherboard is substantial, the size of the casing means I don’t have to worry about future expansion: it has a more than sufficient number of bays in case I need them.

PSU: I always go for a power supply rated at least 3-4 times what the machine actually requires at build time. One shouldn’t skimp on it, as running a power supply at or near capacity is not only inefficient but also destabilises the system – one thing you definitely don’t want. It also ensures good capacity for future expansion, hopefully for the lifetime of the machine. Given the components at build time, my consumption should be about 180 W, so I decided to go for a power supply rated between 700 and 800 W.
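
Here is that sizing rule worked through with the estimated draw from the paragraph above:

```python
# PSU sizing: 3-4x the estimated draw at build time (figures from the post).
estimated_draw_w = 180
low_w, high_w = estimated_draw_w * 3, estimated_draw_w * 4

print(f"Target PSU range: {low_w}-{high_w} W")   # 540-720 W
# A 700-800 W unit sits comfortably at (and slightly above) the top of that band.
```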

Networking: There was a time when most small-brand motherboards came with cheaper onboard NICs – generally with drivers that are not natively part of ESXi. That is changing now and you can find more and more higher-end whitebox motherboards with compatible onboard NICs. The motherboard I chose also has compatible NICs (at the time of writing – things might change in the future). This was important as I want to keep my options open in case I decide to rebuild with ESXi. Speed is not that important for me at this stage, so an onboard dual 1 Gbps NIC is enough for now. I’ll hardly be using any slots, so if required, I can always add additional NICs.

Cooling: This machine will be in my study, along with the one I built previously. As mentioned in the older posts, it’s right next to my bedroom, so having a quiet machine is important. I am not after a completely silent setup but am happy to spend a bit extra on quieter Xeon fans, as the standard ones are normally quite noisy. I am also happy to replace the stock casing fans if I think it will make a difference, although that’s something I will only be able to ascertain once the system has been running for a bit.

There are still other considerations involved, e.g. power consumption, UPS, remote access, etc., but for me, they weren’t going to influence my buying decisions so I won’t include them here. This post is also getting on a bit anyway!

In my next post, I’ll cover what I bought in the end. I see that people are also generally interested in the cost of the system build so I will include that too. As always, I am interested in what you have to say and if you have any queries so please do feel free to comment and I’ll do my best to answer as promptly as I can.

See you in the next post!

Link to Part II: My New Dual-Xeon Based Nested Home Lab Machine – Part II

Link to Part III: My New Dual-Xeon Based Nested Home Lab Machine – Part III