In the first and second posts of this series, I covered my thought process for deciding what hardware and software to use for my new lab machine and then what I actually bought and built, respectively.  In this last post of the series, I intend to cover some optimizations I did after the build, some benchmarks and a verdict on how I think the machine is performing.  So here goes:

SSD Firmware Upgrade: First things first: whenever using an SSD in a system (especially when it’s also the boot disk), I always check whether a firmware upgrade is available and, if one exists, I apply it.  While installing the OS, I had already noticed that every now and then the system wouldn’t recognise the boot disk and just sat there waiting for it.  A reboot always fixed the problem.  My suspicion immediately fell on the firmware.  Upon checking, I found there was an update available, which I downloaded and applied straight away.  Lo and behold, the problem disappeared completely!  A firmware upgrade is a good idea not just for fixing boot issues; it usually brings performance improvements too.  In case you’re using the same SSD (512GB Crucial m4 2.5-inch SATA 6Gb/s (SATA III)), you can find the upgrade here.

Optimise for Best Performance: By default, Windows 7 chooses a balance between performance and visual appearance.  While animation, fading, thumbnails and the like look good aesthetically, they are fairly taxing on the processor.  If the whole reason this machine exists is to run virtual machines, you are probably not that interested in visuals.  For that reason, I switched those luxuries off and set the machine to “Adjust for best performance”.

Adjusting for best performance
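
If you prefer to script this rather than click through the dialog, the same setting can be flipped in the registry – a rough sketch rather than something from my build notes; on my Windows 7 install a value of 2 corresponds to “Adjust for best performance”, and a log off/on (or re-opening the dialog) may be needed for every effect to pick it up:

    reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\VisualEffects" /v VisualFXSetting /t REG_DWORD /d 2 /f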

Switch off Page File: If I have loaded my machine with lots of RAM and I intend to set VMware Workstation to “Fit all virtual machine memory into reserved host RAM”, then I can’t see why I shouldn’t switch off the page file.  A page file is quite useful if there is a shortage of RAM, but it carries a severe penalty in terms of disk access.  Even more important to me in this case is the waste of precious SSD storage: with 32GB of RAM (and a possibility of 64GB), I get a default page file of exactly that size.  Why should I waste that space?  Sure, switching it off might cost me a virtual machine or two, but I won’t be compromising on speed.  I’ll also be missing out on crash dumps, but let’s hope I don’t end up with an unstable OS.  If I do, I probably won’t be running virtual machines at the time and I can always switch paging back on.

Page File Disabled
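
For reference, the same change can be made from an elevated command prompt instead of the System Properties dialog.  This is just a sketch of the WMIC equivalent on Windows 7 and assumes the page file sits at the default C:\pagefile.sys; a reboot is needed afterwards:

    rem Stop Windows from managing the page file automatically
    wmic computersystem where name="%computername%" set AutomaticManagedPagefile=False
    rem Remove the existing page file on C:
    wmic pagefileset where name="C:\\pagefile.sys" delete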

Switch off Hibernation: Now this is a tricky one.  I prefer to switch it off to save space on my SSD, as the hibernation file consumes space equal to the amount of RAM installed.  However, if you have a UPS connected directly to this machine, its software might complain that it can’t shut the machine down automatically in that state.  So, make your own decision on that.  I do have a UPS connected, but I don’t get many power failures, and when I do get one it usually lasts a minute or so.  So, I’ve decided to take a chance: if the lab does go down some day, it’s just a lab!  I’ll bring it back.  At least I’ll be saving precious SSD space in the meantime.  In case you want to do the same, here is a good link.
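
If you do decide to go ahead, there’s no need to dig through the GUI – hibernation can be switched off (and hiberfil.sys removed) from an elevated command prompt:

    powercfg /hibernate off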

Indexing: Another thing you can quite safely switch off to improve performance is indexing.  Searching for files still works fine – it’s just slower!  For the very rare occasion that I might want to search for something on this machine, I guess I’ll just accept the delay.

Indexing Option Switched Off
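
The screenshot shows the GUI route.  If you prefer the command line, disabling the Windows Search service achieves much the same end – a sketch assuming you want to stop the indexer altogether rather than just un-ticking indexing on individual drives (run elevated):

    rem Prevent the Windows Search service from starting, then stop it now
    sc config WSearch start= disabled
    net stop WSearch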

These are the main optimizations I’ve implemented.  I guess one could keep going forever, but it soon becomes a case of diminishing returns, so for me this was enough.

With this out of the way, I did some quick benchmarking.  Here are the results:

RAID 10 Drive Performance for the Xeon-based Server

SSD Drive Performance for new Core i7 machine

As you can see, the single SSD has much higher read/write performance than the four-drive (7200 RPM) RAID 10 array on the Xeon machine.  Write seems to max out at around 260 MB/sec, while read occasionally goes above 550 MB/sec.  Obviously, these tests are only an indication, as real performance depends on how the virtual machines behave in normal operation, but a like-for-like comparison between the two storage types validates my decision to go SSD for this build.  As SSD prices come down, I intend to build an SSD array, which should give me even more performance.  Another issue an array would address is the risk of disk failure: this is a single-disk setup, so I could suddenly lose all my machines.  That said, I am not too concerned, as this is meant to be just a lab, and while it would be very inconvenient to lose the machines, I can always rebuild them.

Just to be sure, I did another quick test using HD Tune Pro:

SSD Drive Performance for the new Core i7 machine (Using HD Tune Pro)

… and the results are pretty much the same!

Happy with the disk performance, I transferred some of my machines from my Xeon server to this new machine, after converting them to thin-provisioned disks (possibly the subject of another post soon).  The initial set I transferred consists of:

  • Windows 2008 R2 DC
  • Windows 2008 R2 machine with SQL 2008 R2, running vCenter 5.0, Update Manager, Composer
  • Two ESXi 5.0 servers, with a few nested virtual machines running
  • View Connection server
  • Windows 7 desktop

Bringing them up was a joy!  Having all this performance meant that the machines came up lightning fast and settled down very quickly.  Generally, when machines are left running for several days, paging kicks in and they start to become sluggish.  There has been nothing of the sort in this case: running the machines continuously for a couple of weeks has shown no sluggishness, and they are as responsive as they were on day 1.

As I mentioned in the last post, partly due to cable management and partly due to the absence of spinning hard disks, the system runs very cool and I haven’t yet seen a need for extra cooling.  With all these machines running constantly for more than a couple of weeks, the processor sits between 22 and 24 degrees Centigrade and the system remains between 32 and 34 degrees Centigrade.  As a result, the fans run at very low speed and the machine is virtually silent.

Verdict: Needless to say, I am pretty happy with the machine so far.  As you know, I went for a cheap/unknown brand of RAM, so I was a bit worried, but it has worked absolutely fine so far and I’ll most probably go for another 32 GB.  I’ll probably also add a couple of 512GB SSDs in the future and make a 1 TB RAID 5 array – although I suspect that will take a bit longer.

My results have been pretty good and exactly what I was expecting.  I hope this series of posts helps or encourages you to build a system with this or a similar configuration too.  Please feel free to post comments or questions if you have any queries.  I’ll be glad to help!

Update (11/09/2013 20:00 BST): Just thought I should provide an update on what happened next.  As mentioned in the post, I later upgraded the RAM to 64GB.  Komputerbay RAM was out of stock, so I went for a Crucial 32GB kit (8GBx4), Ballistix 240-pin DIMM, DDR3 PC3-12800.  As for SSDs, I dropped the idea of a RAID 5 array of SSDs, given the negative effects RAID might have on them.  Instead, I went for two additional Crucial M500 480GB 2.5-inch internal SSDs (the one I originally used is no longer available) as individual disks.  I am now using StarWind Software’s iSCSI SAN software, hosted on those SSDs, to provide shared storage to my ESXi hosts.  This is far more flexible in terms of the various configurations I might want for different set-ups.  I wrote a post about it here, so have a read if you haven’t already.

Best of all, I can confirm that after using this system for a while now, I am very happy with its power, performance and capacity.