In my previous post, I mentioned that I use StarWind Software’s iSCSI SAN software to provide iSCSI LUNs for my home lab. In this post, I’ll document the process of creating a simple shared iSCSI LUN. Using this method, one can provide as many LUNs as required to an ESXi setup, possibly with multipathing – very useful if one doesn’t have expensive network storage and virtual appliances don’t seem flexible enough. I should apologise in advance: this is a fairly heavy post in terms of pictures, but as I wanted to create an article with step-by-step instructions (making sure they can be followed exactly), it couldn’t be helped.
Stating the obvious here, but the first thing is to have the software installed. If you don’t have the binaries already, go to this link and get the free download. As the focus of this article is not the installation process, all I will say is that installing the software is a breeze – basically a “Next, Next, Install” sort of affair. The only thing to watch for is the “License Key” screen, where you need to choose the correct option for the type of license you have. For this article, I am assuming you have the free version, so choose the option accordingly. All options are shown in the screenshot below:
The next screen asks you to browse to the file containing the license, which you should have by now. Once done, it’s plain sailing after that and the software goes ahead and installs itself.
The “StarWind iSCSI SAN” service is installed during setup and therefore comes up soon after boot. It is accessible through the management console, also installed as part of the setup. The first thing to do is, well, to add a device. To do that, click “Add Device” on the top bar:
Select the “Device Type”. For our example here, choose “Virtual Hard Disk”. Just like various hypervisors, this is a flexible option that lets you create disks with varying capabilities. Other options could also be selected based on requirements, but whether or not you can use them will depend on your license entitlement.
The wizard continues by asking what type of medium should be used for this virtual disk. For the purpose of our example, we’ll choose “Image File Device”. As before, some options might require a different license, as shown in the example below. Here, the “Mirror (RAID-1) device” option is not covered by my license.
The next screen asks if you want to create a new virtual disk or mount a pre-existing disk for this device. We choose the “Create new virtual disk” option.
As we’re creating a new disk, we need to give the software the location and size of the file backing it. Obviously, ensure that the destination location has sufficient free capacity. One could also create a compressed and/or encrypted disk here.
No need to make any changes on the next screen. Just click “Next” to continue:
The next screen is about caching. As with RAID controllers, the caching policy is set here and you can choose whichever option suits your setup best. As my environment is protected by a UPS, I generally choose “Write-back caching” for performance reasons:
Now to create an iSCSI target. Here you can choose to create a new target or attach this device to a pre-existing one. For our example, we’ll create a new one. You can give the target an alias of your choice and, based on that, it automatically comes up with a “Target Name”. You could change the name if you want to, but I didn’t see any particular reason to do so. As we intend to connect to this target from multiple ESXi servers, make sure the “Allow multiple concurrent iSCSI connections (clustering)” option is ticked:
After that, it’s just a confirmation screen. Click “Finish” to complete the creation of the disk and the associated iSCSI target. As you can see, it’s a very flexible piece of software, offering a large number of options for providing anything from basic LUNs to complex HA clusters.
Now, let’s move on to creating a “Targets Group” (as we might have more than one target in it). There is already a default “General” group, but I would advise creating a new one and setting appropriate access policies. To do that, right-click on “Targets” and select “Add Targets Group”:
We’ll call it “ESX Cluster LUN Group”. It’s best to give a good description so that you can remember six months from now why you created it 🙂
One thing to do at this point is to change the “Access Rights” for the “General” Targets Group. The reason is that, by default, it allows any initiator to connect – not a good thing if your storage caters for many different systems.
Once in there, untick the “Set to Allow” box for the “DefaultAccessPolicy” rule, as shown in the screenshot:
Now back to business. Remember we created our first LUN/Target a few steps back. By default, it goes into the “General” group. We’ve just created the “ESX Cluster LUN Group” for our purposes so we’ll move our LUN to that group:
Once the LUN is there, we’ll create a rule to allow access to it. To do that, right-click in the right-hand pane and select “Add Rule”:
When creating access for ESXi iSCSI initiators, the best way is IQN-based access. However, as you can see, one can use IP address or DNS name based rules as well. Give the rule a sensible name too:
Even though it’s well known how to find the IQN of the software iSCSI initiator running on ESXi, and I am sure you know it too, I’ll include it here for the sake of completeness. Like everyone else, I like copying and pasting this information, just to make sure it’s right:
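Incidentally, if you have SSH enabled on the host, the IQN can also be read from the command line via esxcli. A quick sketch – note that the adapter name “vmhba33” is an assumption here, so check the output of the list command for your actual software iSCSI adapter:

```shell
# List iSCSI adapters; identify the software iSCSI initiator (often vmhba3x)
esxcli iscsi adapter list

# Show details for that adapter - the IQN should appear in the output
# NOTE: "vmhba33" is an assumption; substitute your own adapter name
esxcli iscsi adapter get -A vmhba33
```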
That goes into the “Source” tab on the new access rule. Next thing to do is to tick the check box “Set to Allow”:
For “Destination”, we choose the target we created earlier in the process:
Once you click OK and exit rule creation, the configuration on the LUN/Target side is done. All that remains now is to “Discover” the LUN. In order to do that, we need to go back to the ESXi server and follow the standard procedure to discover iSCSI Targets. Here is what you have to do on the ESXi side:
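For those who prefer SSH over the vSphere Client, the discovery procedure can be sketched with esxcli roughly as follows – the adapter name and the StarWind box’s IP address are placeholders for illustration, so use your own values:

```shell
# Add the StarWind server as a dynamic discovery (Send Targets) address
# NOTE: adapter name and IP:port are placeholders - substitute your own
esxcli iscsi adapter discovery sendtarget add -A vmhba33 -a 192.168.0.10:3260

# Rescan the adapter so the newly presented target/LUN shows up
esxcli storage core adapter rescan --adapter vmhba33
```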
If all that I mentioned above worked successfully, you should see this in the bottom pane of the “Storage Adapters” section.
Yes, “ROCKET iSCSI Disk” is correct. That’s the name used by StarWind for the target. Following the same process, one can add as many disks as required to the setup, and with at least two, one can create a multipathing configuration for the iSCSI setup. The StarWind Management console, in that case, should show something similar to this:
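Once multiple paths exist, you can verify and tune the path selection policy on the ESXi side as well. A hedged sketch – the naa device identifier below is purely a placeholder, so take the real one from the device list output:

```shell
# List devices under the native multipathing plugin to verify path counts
esxcli storage nmp device list

# Set Round Robin path selection for a given device
# NOTE: the naa identifier is a placeholder - use your device's actual ID
esxcli storage nmp device set --device naa.xxxxxxxxxxxxxxxx --psp VMW_PSP_RR
```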
I hope this step-by-step guide will be useful to you in creating a shared iSCSI setup for your ESXi (or Hyper-V, for that matter) environment. Hopefully, I will be able to bring you more in the near future – possibly a more complex setup!
Hope this helps!
Hello Ather – Just wanted to drop you a quick note of appreciation for your stellar writeup on the Starwind iSCSI SAN/Vmware collaboration. The document answered just about every question that I had about Starwind’s use. I do have one small issue that I would like to ask you about, but I will first ask your permission whether or not to do so as this form is for comments and not a question/answer forum. Anyway, this was one of the most well written and useful pieces of information that I have ever encountered. And by all means I have already RTFM on this question that I have but cannot seem to get an answer 🙂
Thanks again!
Hi Tom,
First of all, many thanks for the kind words above – it makes all the effort worthwhile! 🙂
By all means, ask the question you have in mind. I’ll be glad to help if I can. I may not know the answer but I need to know the question first 🙂
Ather
[…] I used Starwind iSCSI SAN to create a host-based iSCSI setup and used two SSDs to host my datastores. The software is […]