Using NFS or iSCSI in a VMware vSphere Environment
by Chris Faist

WARNING – this article may dive deeper into geek speak techno jargon than you are prepared for.

I just completed some rather exhaustive (for me, at least) testing of VMware vSphere and Microsoft Storage Server 2008 R2. The purpose of this testing was to evaluate both products and to gain a better understanding of the performance differences between iSCSI and NFS in a SAN environment. I've researched and read extensively on the performance and configuration differences between the two approaches, and candidly, I've come across a lot of conflicting information about which approach is better in a VMware environment. My intent was to get some first-hand experience with both SAN connectivity options before we deploy more of them in our client networks.

To begin with, time constraints kept me from extensively testing a range of operating systems, applications and hardware configurations. Here is the base setup we used:

Storage Server

  • Intel Xeon X3440 Processor, 2.53 GHz (4 cores)
  • Intel S3420GP System Board
  • 8.0 GB RAM
  • (4) 320 GB Seagate SATA2 Disks (7200 RPM)
  • Intel ESRT2 RAID Controller (embedded)
  • (2) Intel Gigabit Ethernet Adapters

Three of the drives were configured in a contiguous striped array providing 891 GB of common storage. An additional 320 GB drive was added later to test VMware's datastore Extend feature. The striped array also served as the system boot and operating system drive. This is not a configuration we would likely deploy in the field, but for our purposes it was sufficient. As with most projects there was a bit of scope creep, and as we added tests and configurations there were a lot of “would have, could have” moments.
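
For anyone doing the math on the 891 GB figure: three striped 320 GB (decimal) disks work out to roughly 894 GiB as Windows reports capacity, before formatting overhead, which lines up with the number above. A quick back-of-the-envelope check, assuming decimal marketing gigabytes:

    # Back-of-the-envelope capacity check for the three-disk striped (RAID 0) array.
    # "320 GB" is taken as the decimal marketing size; the exact usable figure
    # depends on the RAID controller and NTFS formatting overhead.
    disks = 3
    size_gb = 320                          # decimal gigabytes per disk
    raw_bytes = disks * size_gb * 10**9    # striping adds no parity overhead
    reported_gib = raw_bytes / 2**30       # what Windows shows, in binary GiB

    print(f"Raw capacity:  {raw_bytes / 10**9:.0f} GB")
    print(f"Reported size: {reported_gib:.0f} GiB")  # ~894 GiB, close to the 891 GB above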

The system was loaded with Windows Server 2008 R2 Enterprise plus the Storage Server additions. These consist of branding and iSCSI target software, as well as a number of registry changes to improve file system performance. All current patches were applied, along with additional updates from the Storage Server software set.

vSphere Server

  • Intel Xeon X3440 Processor, 2.53 GHz (4 cores)
  • Intel S3420GP System Board
  • 4.0 GB RAM
  • (1) 80 GB SSD Drive
  • Intel ESRT2 RAID Controller (embedded)
  • (2) Intel Gigabit Ethernet Adapters

The system was loaded with the latest vSphere ESXi 4.1 hypervisor. Using the SSD drive as the primary datastore, installation took a matter of minutes and boot time was measured in seconds. Frankly, the POST (power-on self-test) of the Intel server took far longer than vSphere took to boot.

In terms of networking, we used a dedicated Netgear GS108 8-port gigabit switch. Initially, we used only a single NIC in both the storage server and the vSphere server.

Once the hardware was set up and the host operating systems were configured, I created two iSCSI target drives and two NFS directories on the storage server. The first iSCSI target was configured with a 128 GB disk; a second disk was then added and configured as a secondary LUN on the same target.

The NFS directories were simply shared as NFS1 and NFS2. No security was configured for either the NFS or iSCSI drives.

On the vSphere server we configured four datastores pointing to the respective devices or shares on the storage server.
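
For reference, an NFS datastore mount like this can also be scripted against the vSphere API rather than done in the vSphere Client. Here is a minimal sketch using the pyVmomi Python SDK (a more recent tool than the 4.1-era lab described here); the host names, credentials and share path are placeholders, not our actual setup:

    # Minimal pyVmomi sketch: mount an NFS export (e.g. NFS1) as a vSphere datastore.
    # Host names, credentials and paths are placeholders.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Connect directly to the ESXi host (certificate checking disabled for a lab box).
    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="esxi-host.example.local", user="root",
                      pwd="password", sslContext=ctx)
    try:
        # On a standalone host the inventory is one datacenter with one host.
        datacenter = si.content.rootFolder.childEntity[0]
        host = datacenter.hostFolder.childEntity[0].host[0]

        # Describe the NFS export published by the storage server.
        spec = vim.host.NasVolume.Specification(
            remoteHost="storage-server.example.local",  # the Storage Server box
            remotePath="/NFS1",                         # exported share path
            localPath="NFS1",                           # datastore name shown in vSphere
            accessMode="readWrite",
        )
        datastore = host.configManager.datastoreSystem.CreateNasDatastore(spec)
        print(f"Mounted datastore: {datastore.name}")
    finally:
        Disconnect(si)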

We started by loading a new guest virtual machine running Windows Server 2008 R2 Standard Edition on iSCSI datastore #1. I was somewhat surprised that the load process took longer than a comparable installation to a local attached disk, though this may have been a result of where our installation media was located (on my laptop). Next, we repeated the installation, this time placing the virtual guest on the NFS1 datastore. For this run we had placed an ISO image of Windows Server on the SSD datastore, and the installation proceeded as expected.

Testing

As I mentioned earlier, we did not have an extensive series of benchmarks to run, but since we were most interested in SAN performance we measured system boot and load times as well as basic file transfer speeds. You can go to all sorts of effort evaluating synthetic benchmarks, but in the end a system that is a dog to boot and operate never seems to perform well in the field, while systems that are snappy from the start are much more likely to perform well in the real world. Not to oversimplify things: we certainly recognize the other elements that affect processing capability, but those were not the focus of this particular test. We simply wanted to compare how fast each SAN topology handled the basics.
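
For anyone who wants to reproduce the file-transfer portion of the test, a timed copy is all it takes. Below is a minimal Python sketch of that kind of measurement; the paths are placeholders for a large file such as the 678 MB .avi used in the results that follow:

    # Time a single large-file copy and report effective throughput.
    # Source and destination paths are placeholders.
    import shutil
    import time
    from pathlib import Path

    src = Path(r"\\fileserver\share\sample-video.avi")  # file on the remote file server
    dst = Path(r"C:\temp\sample-video.avi")             # folder on the guest's virtual disk

    start = time.perf_counter()
    shutil.copyfile(src, dst)
    elapsed = time.perf_counter() - start

    size_mb = src.stat().st_size / 1_000_000
    print(f"Copied {size_mb:.0f} MB in {elapsed:.1f} s ({size_mb / elapsed:.1f} MB/s)")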

Results

Boot time (3 tests)

                                   Test #1    Test #2    Test #3
    iSCSI datastore                32 sec     29 sec     30 sec
    NFS datastore                  17 sec     17 sec     16 sec
    SSD datastore (local attach)   17 sec     16 sec     16 sec

Shutdown time (3 tests)

                                   Test #1    Test #2    Test #3
    iSCSI datastore                6 sec      6 sec      7 sec
    NFS datastore                  6 sec      6 sec      6 sec
    SSD datastore (local attach)   4 sec      5 sec      4 sec

File transfer of a 678 MB .avi video file to/from a Windows 2003 file server

                                   Download #1   Download #2   Upload #1   Upload #2
    iSCSI datastore                9 sec         8 sec         8 sec       8 sec
    NFS datastore                  9 sec         9 sec         9 sec       9 sec
    SSD datastore (local attach)   9 sec         9 sec         9 sec       9 sec

Conclusions

The first thing that stands out in these results is the difference in performance between iSCSI and NFS. In our tests, NFS consistently outperformed iSCSI in system boot times: booting from the iSCSI datastore took roughly 80 percent longer on average than booting from the NFS datastore (about 30 seconds versus 17). File transfer times, on the other hand, were comparable across all of the configurations. Once systems were loaded, response times were good in every configuration. Subsequent tuning and the use of additional NICs in the servers improved iSCSI performance, but never beyond NFS.
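
For reference, here is how that boot-time comparison works out when you average the three runs from the results table:

    # Average the boot times from the results table and compare the two SAN options.
    iscsi_boot = [32, 29, 30]   # seconds, iSCSI datastore
    nfs_boot = [17, 17, 16]     # seconds, NFS datastore

    iscsi_avg = sum(iscsi_boot) / len(iscsi_boot)   # ~30.3 s
    nfs_avg = sum(nfs_boot) / len(nfs_boot)         # ~16.7 s

    print(f"iSCSI average: {iscsi_avg:.1f} s")
    print(f"NFS average:   {nfs_avg:.1f} s")
    print(f"iSCSI boots took {100 * (iscsi_avg - nfs_avg) / nfs_avg:.0f}% longer")  # ~82%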

This tends to dispel the myth that NFS is inherently slower than iSCSI in SAN deployments. Given how easy NFS file systems are to configure and expand, NFS is certainly a desirable approach.

Most impressive was Microsoft Storage Server itself. With its enhanced file system performance and several features available only in this configuration, it merits serious consideration. HP, Dell and others currently have SAN product offerings based on Storage Server, and we are definitely going to look at developing solutions around it. Stay tuned for future articles on this.

For more information or any questions, please contact Chris Faist, Integrated Computer Systems Support, at 425-284-5410.

 

