Storage to use with VMware Systems

Clients who decide to use VMware with SoftLayer have several options when it comes to storage: private storage, shared storage, or bring-your-own storage. The following information is provided to help you decide which storage solution will work best with your workload.

Table 1 describes the storage tiers and indicates where your workload may fall.

 

| | Tier 1 | Tier 2 | Tier 3 |
|---|---|---|---|
| Business use | High-performance and/or high-availability production applications, databases, and data | Non-mission-critical test and development applications, databases, and data | Non-mission-critical data storage, backup, and archive |
| Performance | High (SSDs, SAS) | Medium (SAS, SATA) | Low (SATA) |
| Guaranteed IOPS | Yes | No | No |
| High availability (HA) | Yes | No | No |
| Replication | Yes | Yes | No |
| Snapshots | Yes | No | No |

Table 1 Storage tiers

VMware Storage Option Decision Tree

Figure 1 is your decision tree to help you determine which storage option will work best with your workload and VMware.

Figure 1 VMware storage decision tree

There are several options from which to choose. You can select local disk options, vSAN, or QuantaStor for private storage, or Endurance or Performance for shared storage. If you decide to bring your own storage, there are several “private” options, including NetApp OnTap Edge, NetApp Private Storage (NPS), IBM Spectrum Accelerate, and software-defined storage. Figure 2 to Figure 8 discuss the different types of private storage, and Figure 9 and Figure 10 discuss the shared storage options. Table 2 and Table 3 offer a side-by-side comparison of the options for your convenience.

Private Storage Options

There are several storage options for connecting to VMware in a single-tenant environment, including the following:

Local Storage

Order a bare metal server with ESX from the SoftLayer customer portal and select the desired disks [SATA, serial-attached SCSI (SAS), or SSD].

  • You can bring your own ESX license; you will need to open a ticket with SoftLayer Support to inform them of the change regarding the license.

• Recommended workloads: Tier 3
• Performance: Limited; depends on RAID level and disk type. SSDs offer better performance at a higher cost.
• Scalability: Limited to the number of drive slots in the chassis
• Protocols: Not applicable
• Cost: Low capital expenditure (CAPEX) and operating expenditure (OPEX)
• HA: Not available
• Replication: vSphere Replication Virtual Appliance
• Reliability: Multiple single points of failure

 

Figure 2 Connecting VMware with local storage

vSAN [1]

   

• Recommended workloads: Tier 1
• Performance: 90K+ IOPS per host, depending on host configuration
• Scalability: v5.5 virtual machine disk (VMDK) up to 2TB; v6.0 VMDK up to 62TB. Scale out with more nodes.
• Protocols: Proprietary
• Cost: Medium for both CAPEX and OPEX
• HA: Supported for both host and disk failures. Failure domains supported in v6 of VMware.
• Replication: vSphere Replication Virtual Appliance. Replication and disaster recovery workflow (a snapshot sketch follows this list):
  - Schedule a VM backup via VMware vCenter Server
  - Create a VM snapshot
  - Data DE created for the VM
  - Restore the VM
• Reliability: Tolerates up to three host failures with seven or more hosts. Failure domains introduced in v6 of VMware.
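To illustrate the "create a VM snapshot" step in the workflow above, here is a minimal sketch using pyVmomi, the Python SDK for the vSphere API. The vCenter address, credentials, and VM name are placeholders, and this is not the specific mechanism the vSphere Replication Virtual Appliance uses; it simply shows how the snapshot step can be scripted against vCenter.

```python
# Minimal pyVmomi sketch: create a quiesced snapshot of one VM as a backup point.
# The vCenter host, credentials, and VM name below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only; validate certificates in production
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

# Locate the VM by walking a container view of all VirtualMachine objects.
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "app-vm-01")

# Take a snapshot without memory state, quiescing the guest file system first.
WaitForTask(vm.CreateSnapshot_Task(name="pre-backup",
                                   description="Scheduled backup point",
                                   memory=False, quiesce=True))
Disconnect(si)
```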

 

Figure 3 Connecting VMware with vSAN

[1] vSAN 6.2 introduces a new feature, stretched clusters, which allows hosts to be in different pods in the same data center (validation tests are in progress).

vSAN 5.5 is only available with Bring Your Own License (BYOL). It is only supported by VMware if you use an Avago LSI MegaRAID SAS 9361-8i disk controller.

vSAN 6.0 will be available directly from the SoftLayer portal with per-CPU license billing.

QuantaStor

The OSNexus Solution Design Guide can be used to help with connecting VMware with QuantaStor.

           
 


 

• Recommended workloads: Tiers 2 and 3
• Performance: Variable; based on the number of drives, RAID level, and use of iSCSI or NFS
• Scalability: v3 single frame supports 128TB; no scale up or scale out
• Protocols: iSCSI, NFS, and SMB
• Cost: High for both CAPEX and OPEX
• HA: Not available
• Replication: Built-in replication features; no SRA available. The vSphere Replication Appliance can also be used.
• Reliability: Single point of failure for the enclosure and RAID controller

 

 

Figure 4 Connecting VMware with QuantaStor

NetApp Data OnTap Edge

You must purchase a NetApp device from NetApp or IBM. You will need to have it installed on a bare metal server in your SoftLayer data center.

Use the following links for more information on how to connect VMware with NetApp.

Figure 5 Using Direct Link Colocation or Direct Link Cloud Exchange to connect to SoftLayer

Figure 6 Connecting VMware with NetApp Data OnTap Edge

• Recommended workloads: Tiers 2 and 3
• Performance: Variable; based on the number of drives and RAID
• Scalability: Supports 110TB; no scale out
• Protocols: iSCSI, NFS, and SMB
• Cost: Medium for both CAPEX and OPEX
• HA: Not available
• Replication: Supports SnapMirror; also achieved using vRealize Automation (vRA)
• Reliability: Single point of failure for the enclosure and RAID controller

NetApp Private Storage

You must purchase a NetApp device from NetApp or IBM. You will need to have it installed in one of the colocation sites near your SoftLayer data center and connect it using Direct Link Colocation or Direct Link Cloud Exchange.

Use the following links for more information on how to connect VMware with NetApp.

Figure 7 Connecting VMware with NPS

• Recommended workloads: Tier 1
• Performance: Dependent on the NetApp model
• Scalability: Supports the addition of drives and frames for increased capacity and IOPS
• Protocols: iSCSI, NFS, and SMB
• Cost: High for both CAPEX and OPEX
• HA: Dual heads and controllers
• Replication: Supports SnapVault and SnapMirror; also achieved using vRA
• Reliability: High redundancy and MPIO support

IBM Spectrum Accelerate

The IBM Spectrum Accelerate private storage option is not available on the SoftLayer customer portal. Click here for instructions on how to build the offering.

 
 

 

Figure 8 Connecting VMware with IBM Spectrum Accelerate

• Recommended workloads: Tier 1
• Performance: Dependent on the number of disks, optional SSD, and the amount of memory given to each “node” VM
• Scalability: Scales from ~8 to 325TB usable
  - Minimum capacity: three VMs x six drives
  - Maximum capacity: 15 VMs x 12 drives
  - Scales up to 144 virtual arrays and more than 40PB usable via IBM Hyper-Scale Manager
  - Non-disruptive capacity expansion by adding more nodes
  - One 500GB or 800GB SSD per node supported
• Protocols: iSCSI only
• Cost: Dependent on the pricing model and the physical machines deployed for nodes. High CAPEX; medium to low OPEX, depending on licensing.
  - Priced per binary terabyte (TiB) of usable capacity
  - Not tied to any specific hardware configuration; for example, a 200TiB license could be deployed as one 200TiB instance, two 100TiB instances, or four 50TiB instances
  - Offered two ways: a perpetual license [includes one year of subscription and support (S&S)] and a monthly license (includes S&S)
• HA: Clustered solution
• Replication: Achieved using vRA or an SRA, which can be used to replicate from a physical IBM XIV with VMware Site Recovery Manager (SRM)
  - http://www.vmware.com/products/vsphere/features/replication
  - https://www304.ibm.com/partnerworld/wps/servlet/download/DownloadServlet?id=VZPYFkT7gvZiPCA$cnt&attachmentName=VMware_vCenter_Site_Recovery_Manager_version_guidelines_IBM_XIV_Storage.pdf&token=MTQzNDU3MTIyNTA5Nw==&locale=en_ALL_ZZ
• Reliability: High redundancy and MPIO support. Any available node can manage the cluster. The following capabilities are not supported by IBM Spectrum Accelerate at this time, compared with hardware-based IBM XIV:
  - Three-site mirroring
  - IBM Hyper-Scale Mobility (iSCSI)
  - USG v6
  - 6TB disk drives
  - Storage Management Initiative Specification (SMI-S) 1.6
  - Data at rest encryption
  - vStorage APIs for Array Integration (VAAI) and Virtual Volumes (VVol)
  - vCenter Operations Manager (vCOps)
  - http://www-01.ibm.com/support/knowledgecenter/STJTAG/hsg/hsg_kcwelcomepage_xiv.html

Shared Storage Options

There are two storage options for connecting to VMware in a multi-tenant environment – Block Storage and File Storage.

Block and File Storage

Order a bare metal server with ESX from the SoftLayer customer portal.

When you authorize the host to Block or File Storage, SoftLayer provides three predefined values on the Host Device Details Storage tab: Username, Password (for CHAP authentication), and Host IQN.

Click here for instructions on how to provision block storage.

Click here for instructions on how to provision file storage.

Figure 9 Connecting VMware with Block or File Storage
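The same provisioning can also be scripted. The following is a minimal sketch that assumes the SoftLayer Python SDK and its BlockStorageManager.order_block_volume method; the data center, size, tier, and OS type shown are illustrative only, so verify the exact parameter names and values against the SDK documentation.

```python
# Hedged sketch: order an Endurance block volume with the SoftLayer Python SDK.
# Assumes credentials are available via environment variables or ~/.softlayer.
import SoftLayer

client = SoftLayer.create_client_from_env()
block = SoftLayer.BlockStorageManager(client)

# Order a 1TB Endurance volume at the 4 IOPS/GB tier in an example data center.
receipt = block.order_block_volume(
    storage_type="endurance",  # "performance" storage takes an explicit IOPS value instead
    location="dal09",          # example data center short name
    size=1000,                 # capacity in GB
    tier_level=4,              # Endurance tier: 0.25, 2, 4, or 10 IOPS per GB
    os_type="VMWARE",          # assumed OS-type keyword for ESXi hosts; confirm in the docs
)
print(receipt)
```

File (NFS) volumes can be ordered the same way with the SDK's FileStorageManager, and the host must still be authorized to the volume (the CHAP values above) before ESX can use it.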

• Recommended workloads: Tiers 1, 2, and 3
• Performance: Two ways to provision performance:
  1. Endurance IOPS tiers: Available in 0.25, 2, 4, or 10 IOPS per GB. IOPS scale with capacity; for example, a 1TB volume at the 4 IOPS per GB tier delivers 4,000 IOPS.
  2. Performance allocated IOPS: Specify IOPS independent of capacity. Recommended for workloads with well-defined performance requirements.
  - Predictable storage performance parameters
  - Multiple volumes may be striped together to achieve higher IOPS and more throughput
  - Latency under 10ms at up to 48,000 IOPS
• Scalability: Order from 20GB to 12TB; you cannot resize the volume once it is ordered
• Protocols: iSCSI and NFS (an NFS datastore-mount sketch follows this list)
• Cost: High for both CAPEX (10x for a SAN of the same size) and OPEX
• HA: Dual heads and controllers
• Replication: Snapshot and Replication provided over the SoftLayer Private Network; also achieved using vRA, but no SRA
• Reliability: High redundancy; iSCSI uses an MPIO connection; NFS-based storage is routed over TCP/IP connections. Snapshot and Replication enabled.
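As noted in the Protocols item above, the sketch below shows one way to mount a provisioned File Storage (NFS) volume as a datastore on an ESXi host using pyVmomi. The vCenter address, credentials, ESXi host name, NFS mount host, and export path are placeholders standing in for the values shown in the portal for your volume; treat this as an illustration, not the official procedure.

```python
# Minimal pyVmomi sketch: mount a SoftLayer File Storage (NFS) export as a datastore.
# All host names, paths, and credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only; validate certificates in production
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

# Pick the target ESXi host from a container view of all HostSystem objects.
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == "esx01.example.com")

# Describe the NFS export (values come from the volume's details page) and mount it.
spec = vim.host.NasVolume.Specification(
    remoteHost="nfs-mount-host.example.com",  # placeholder for the volume's mount host
    remotePath="/IBM01SEV1234567_1",          # placeholder for the volume's export path
    localPath="softlayer-file-ds01",          # datastore name as it will appear in vSphere
    accessMode="readWrite",
)
datastore = host.configManager.datastoreSystem.CreateNasDatastore(spec)
print("Mounted datastore:", datastore.name)
Disconnect(si)
```

iSCSI block volumes follow a similar pattern, but they also require configuring the software iSCSI adapter and the CHAP credentials listed above before the LUN can be added as a VMFS datastore.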

 

Table 2 provides the pros and cons of private storage in a single-tenant environment.

VMware – Private storage (single tenant)

Options compared: Local, Virtual SAN, QuantaStor, NetApp OnTap Edge, NetApp NPS, and IBM Spectrum Accelerate.

Type
• Local: Local
• Virtual SAN: SDS
• QuantaStor: SDS
• NetApp OnTap Edge: SDS
• NetApp NPS: Monolithic
• IBM Spectrum Accelerate: SDS

Performance
• Local: Based on SSD/SA-SCSI specs; further, RAID 5/10 can be used for read/write gains.
• Virtual SAN: 90K+ IOPS per host, depending on host configuration. 100 VMs per host, 32 hosts per cluster, 3,200 VMs per cluster, only 2,048 protected (v5.5). Up to 20K IOPS, 200 VMs per host, 64 hosts per cluster, and 6,000 protected VMs per cluster (v6.0).
• QuantaStor: Based on the types and number of disks selected, as well as RAID configurations and use of iSCSI or NFS.
• NetApp OnTap Edge: Based on the types and number of disks selected, as well as RAID configurations.
• NetApp NPS: Depends on the model.
• IBM Spectrum Accelerate: Based on the types and number of disks selected, as well as RAID configurations and optional use of an SSD disk per hypervisor node.

Scalability
• Local: Limited growth in size and in disk I/O throughput.
• Virtual SAN: Virtual machine disk (VMDK) up to 2TB with v5.5 and up to 62TB with v6.0. Scale out with more nodes.
• QuantaStor: Single QuantaStor frame up to 128TB (v3.x); no scaling up or out.
• NetApp OnTap Edge: Up to 10TB; no scaling out.
• NetApp NPS: Yes; add disk shelves for capacity and IOPS.
• IBM Spectrum Accelerate: Scales from ~8 to 325TB usable space. Minimum capacity is 3 VMs x 6 drives; maximum capacity is 15 VMs x 12 drives. Scales up to 144 virtual arrays and more than 40PB usable via IBM Hyper-Scale Manager. Non-disruptive capacity expansion by adding more nodes.

Protocols
• Local: NA
• Virtual SAN: Proprietary
• QuantaStor: iSCSI/NFS/SMB
• NetApp (OnTap Edge and NPS): iSCSI/NFS/SMB
• IBM Spectrum Accelerate: iSCSI

Use cases
• Local: Tier 2 and 3 workloads
• Virtual SAN: Tier 1 workloads
• QuantaStor: Tier 2 and 3 workloads
• NetApp OnTap Edge: Tier 2 and 3 workloads
• NetApp NPS: Tier 1 workloads
• IBM Spectrum Accelerate: Tier 1 workloads

High availability (HA)
• Local: Available with RAID
• Virtual SAN: Yes; host and disk failures; failure domains (v6.0)
• QuantaStor: Not available in SoftLayer
• NetApp OnTap Edge: NA
• NetApp NPS: Yes; dual heads and controllers
• IBM Spectrum Accelerate: Yes; clustered solution

Configurability
• Local: Number and type of disks; RAID levels
• Virtual SAN: Specific controllers required
• QuantaStor: CPU, memory, cache, number and type of disks, and RAID levels
• NetApp OnTap Edge: CPU, memory, cache, number and type of disks, and RAID levels
• NetApp NPS: TBD
• IBM Spectrum Accelerate: CPU, memory, cache, number and type of disks, SSD caching, iSCSI port configuration, and multi-tenancy QoS

Disaster recovery and replication
• Local: Use vRA to replicate; no SRAs.
• Virtual SAN: Use vRA to replicate.
• QuantaStor: Built-in replication; no SRAs available.
• NetApp OnTap Edge: Can use vRA to replicate; SnapMirror.
• NetApp NPS: Can use vRA to replicate; SnapMirror and SnapVault.
• IBM Spectrum Accelerate: vRA or SRA supported; replication between SDS and/or physical XIV devices. Snapshots supported; application recovery via IBM FlashCopy Manager.

Reliability
• Local: Single point of failure without HA.
• Virtual SAN: Tolerates up to three host failures with seven or more hosts. Failure domains introduced in v6.0.
• QuantaStor: Single point of failure (enclosure and RAID controller) and no HA.
• NetApp OnTap Edge: Single point of failure (enclosure and RAID controller) and no HA.
• NetApp NPS: Highly redundant multipath I/O (MPIO) connection.
• IBM Spectrum Accelerate: Highly redundant iSCSI MPIO connections; any node can manage the cluster.

Table 2 Pros and cons of VMware private storage options


Table 3 provides the pros and cons of shared storage in a multi-tenant environment.

VMware Shared storage (multi-tenant)

Options compared: Block and File Endurance, and Block and File Performance.

Type
• Block and File Endurance: SDS
• Block and File Performance: SDS

Performance
• Block and File Endurance: Available in 0.25, 2, 4, or 10 IOPS per GB. Predictable storage performance parameters. Multiple volumes may be striped together to achieve higher IOPS and more throughput.
• Block and File Performance: Client provisions the desired level of performance based on workload needs or price point. Predictable storage performance parameters. Multiple volumes may be striped together to achieve higher IOPS and more throughput.

Scalability
• Both: Volume sizes range from 20GB to 12TB; volumes cannot be resized once ordered.

Protocols
• Both: iSCSI and NFS

Host Connections
• Both: Maximum of eight for iSCSI and 64 for NFS

Use cases
• Block and File Endurance: Tier 1, 2, and 3 workloads
  - 0.25 IOPS per GB: Low I/O intensity. Example applications include storing mailboxes or department-level file shares.
  - 2 IOPS per GB: General purpose. Example applications include hosting small databases backing web applications or virtual machine disk images for a hypervisor.
  - 4 IOPS per GB: High I/O intensity. Example applications include transactional and other performance-sensitive databases.
  - 10 IOPS per GB: High I/O intensity. Example applications include analytics.
• Block and File Performance: Tier 1, 2, and 3 workloads. MPIO iSCSI-based storage ideally suited for I/O-intensive applications, such as relational databases, that require predictable levels of performance. Volumes come in sizes from 20GB to 12TB with user-selectable IOPS ranging from 100 to 48,000.

HA
• Both: Yes; dual heads and controllers

Configurability
• Both: Size and IOPS only

Disaster recovery and replication
• Both: Snapshot and Replication provided over the SoftLayer Private Network. Can use vRA to replicate at the VM level; no SRA.

Reliability
• Block and File Endurance: Highly redundant; MPIO connection; NFS-based file storage routed over TCP/IP connections. Snapshots and Replication enabled.
• Block and File Performance: Highly redundant; MPIO connection; NFS-based file storage routed over TCP/IP connections.

Table 3 Pros and cons of VMware shared storage options
