Advanced Single-Site VMware Reference Architecture@SoftLayer

Following are the steps to create an advanced single-site vSphere 5.5 or 6.0 environment at SoftLayer. The steps define a VMware@SoftLayer reference architecture deployment using VMware best practices.


The reference architecture is intended for those needing to provision an environment leveraging shared storage, VMware High Availability (HA) and vSphere Distributed Resource Scheduler (DRS), and a SoftLayer gateway/firewall. It is a suitable basis for most production deployments, can be scaled as workload dictates, and can replace an on-premises implementation or extend one into a hybrid IT scenario.

Environment Overview

 

Figure 1: VMware Environment with Endurance / Performance or QuantaStor Storage

The advanced, representative VMware environment outlined here will consist of one data center managing two separate clusters: management and capacity. Depending on requirements, the configuration may instead be set up as a single four-node cluster. The management cluster will contain the following virtual machines (VMs) used for managing the infrastructure:

  • VMware vCenter Server 5.5 or 6.0 Appliance
  • Microsoft Windows 2012 R2 Standard with Active Directory and Domain Name System (DNS) roles

The capacity cluster will contain the resources and infrastructure needed to create and execute VMs. In terms of networking, the environment will consist of three private, internal VLANs along with a single, public VLAN for external communication. Table 1 specifies the VLAN types and VLAN names that will be used throughout the environment. Click here for more information on VLANs at SoftLayer.

VLAN Type VLAN Name Description
Primary Private Management Assigned to manage and access the physical ESXi hosts and virtual/cloud servers.
Primary Private Storage Assigned to manage and access the shared storage attached to each ESXi host.
Primary Private VM Access Assigned to virtual machines running on each ESXi host.
Primary Public Public Assigned to virtual machines or other devices requiring access from the public network.

Table 1: Primary VLANs

For shared storage, users have the option to either use OS Nexus QuantaStor, a single-tenant shared storage server, or SoftLayer’s Endurance or Performance storage services. In either case, the shared storage device will be used to store the VMs on both the management and capacity clusters. Click here for more information on all of SoftLayer’s storage options.

The storage environment will be configured to support NFS volumes.

 

Step 1: Order Primary Public and Private VLANs

Note: VLAN orders are subject to review and approval. There are currently no specific conditions that cause a VLAN order to be auto-approved, and orders may be denied depending on the request.

Four VLANs will be created in this step: one for management, one for storage, one for VMs, and one for public access. It is strongly recommended to create these before ordering your servers to ensure the servers will be placed on the correct VLANs and in the correct data center.

The environment will consist of five ESXi hosts (two for the management cluster and three for the capacity cluster) and one virtual/cloud server. The private management VLAN will be supported by a subnet of 16 IP addresses. The primary private VLANs for storage and VM traffic will each consist of eight addresses, since the environment contains a single storage server and the VMs will utilize a portable subnet. If a larger subnet range is needed for the management network, size it by multiplying the number of ESXi hosts by two. Also, make sure to specify the data center in which these VLANs need to be created.
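The "hosts x 2" sizing rule above can be sketched with Python's standard `ipaddress` module. This is an illustrative helper, not SoftLayer tooling; the host counts and the doubling rule come from the text, and the rest is ordinary IPv4 subnet math.

```python
import ipaddress
import math

def management_subnet_prefix(esxi_hosts, overhead=2):
    """Smallest prefix whose block covers (hosts x overhead) addresses.

    Assumption: the smallest portable block discussed here is a /29
    (8 addresses), so we never return a longer prefix than that.
    """
    needed = esxi_hosts * overhead
    # Round up to the next power of two; /28 = 16 addresses, /29 = 8, etc.
    size = max(2 ** math.ceil(math.log2(needed)), 8)
    return 32 - int(math.log2(size))

# Five ESXi hosts x 2 = 10 addresses -> a 16-address block, i.e. a /28:
print(management_subnet_prefix(5))                          # 28
print(ipaddress.ip_network("10.0.0.0/29").num_addresses)    # 8
```

This matches the sample environment: five hosts yield the 16-address management subnet shown in Table 3.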

After logging into control.softlayer.com, open a support ticket for the management and virtual machine VLANs by selecting Add Ticket under Support. Fill in the fields with the information in Table 2:

Field Value
Subject Private Network Question
Title Order VLANs
Details

Please provision 3 primary private VLANs and 1 primary public VLAN. Associate the following addressing scheme with each VLAN:

  • Associate 1 x /28 (16 addresses) with the first primary private VLAN
  • Associate 2 x /29 (8 addresses each) with the remaining primary private VLANs
  • Associate 1 x /29 (8 addresses) with the primary public VLAN
Why do you need these additional VLANs? To place hosts, storage, and VMs on different networks for a VMware environment.
Do you need public and or private VLANs? Private and Public
How many VLANs? Private = 3, Public = 1
How many IP addresses do you need? 8 to 16 for each VLAN
What router do you need these VLANs behind? It doesn't matter as long as they’re all in the same pod (see below).
Which pod should the VLAN(s) be in? They should all be in the same pod in <DATA CENTER NAME>.

Table 2: Support ticket information

After the VLANs have been provisioned, make note of the VLAN numbers, subnet range, and gateway, and assign them to logical vSphere networks. You can use the worksheet in Appendix A: VLAN Worksheet to record the VLAN and the associated information. For example:

VLAN Type VLAN Number IP Range Gateway Purpose
Primary Private 1101 10.X.Y.Z/28 10.X.Y.1 Management
Primary Private 1102 10.A.B.C/29 10.A.B.1 Storage
Primary Private 1103 10.Q.R.S/29 10.Q.R.1 Virtual Machines
Primary Public 2101 75.S.T.U/29 75.S.T.1 Public Access

Table 3: Primary VLAN sample
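One convenient way to keep the Appendix A worksheet at hand during later steps is as plain data. The sketch below uses the sample values from Table 3; the subnet strings are the same placeholders used there, and the helper name is our own.

```python
# Sample VLAN worksheet (values from Table 3; X/Y/Z etc. are placeholders).
vlan_worksheet = [
    {"type": "Primary Private", "vlan": 1101, "subnet": "10.X.Y.Z/28", "purpose": "Management"},
    {"type": "Primary Private", "vlan": 1102, "subnet": "10.A.B.C/29", "purpose": "Storage"},
    {"type": "Primary Private", "vlan": 1103, "subnet": "10.Q.R.S/29", "purpose": "Virtual Machines"},
    {"type": "Primary Public",  "vlan": 2101, "subnet": "75.S.T.U/29", "purpose": "Public Access"},
]

def vlan_for(purpose):
    """Look up a VLAN number by its purpose column."""
    return next(v["vlan"] for v in vlan_worksheet if v["purpose"] == purpose)

print(vlan_for("Storage"))  # 1102
```

Later steps (trunking tickets, port group VLAN IDs) all reference these numbers, so recording them once avoids transcription errors.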

Note: Do not continue to the next step until the VLANs have been provisioned.

 

Step 2: Order Portable Private IP Addresses

In Step 2, a request will be made to create portable private subnets for use by management VMs, VMkernels accessing storage, and VMs in the capacity cluster. For this step, you will need to determine how many addresses you will need for each VLAN subnet. In our representative environment, we will order the following:

  • Management VLAN: 1x8 addresses /29 – One address for vCenter Appliance; one address for DNS and Active Directory.
  • Storage VLAN: 1x16 addresses /28 – We will create two subnets on the same VLAN for storage, allowing us to create two VMkernel ports on each ESXi host using different subnets to access the shared storage device(s).
  • VM VLAN: 1x32 addresses /27 – These addresses will be used to assign IPs to VMs in the capacity cluster.

When ordering an amount, make sure to take into account how many IPs will actually be needed within the next 30 days and 6 months. It is also important to note that SoftLayer reserves three to four IP addresses per portable subnet block. As a result, ordering four IP addresses will really net one usable IP address, or zero if the pod supports Hot Standby Router Protocol (HSRP).
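The reservation arithmetic above can be made concrete. This is an estimate only; the three-or-four reserved addresses per block are the figures stated in the text, and the function name is our own.

```python
def usable_portable_ips(block_size, hsrp=False):
    """Estimate usable addresses in a portable subnet block.

    SoftLayer reserves 3 addresses per block, or 4 when the pod runs
    HSRP (figures from the text above; treat them as an estimate).
    """
    reserved = 4 if hsrp else 3
    return max(block_size - reserved, 0)

print(usable_portable_ips(4))             # 1  (four ordered nets one usable)
print(usable_portable_ips(4, hsrp=True))  # 0  (HSRP pod consumes the last one)
print(usable_portable_ips(16))            # 13
```

Running the numbers before ordering helps avoid a second ticket when a /30-sized block turns out to have no usable addresses.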

Use the following steps to order a block of portable IP addresses for each VLAN for each subnet you want to create:

  1. Open a browser window, go to control.softlayer.com, and log in.
  2. Select Account > Place an Order.
  3. Select the Network section on the pop-up window and click the Order link under Subnets/IPs.
  4. Click the menu drop-down button and select Portable Private.
  5. Select the XX Portable Private IP Address radio button and click Continue. (XX specifies the number of IPs.)
  6. Select the VLAN to associate with IP address block and click Continue.
  7. Fill in the required information on the screen and click Continue.

The creation of the IP addresses is fairly quick and will be displayed by selecting Subnets from the Network > IP Management menu. You may wish to record these IP addresses in the worksheet found in Appendix A: VLAN Worksheet.

 

Step 3: Order Virtual/Cloud Server

In this step, a Windows 2012 R2 Standard virtual/cloud server will be provisioned as a utility server for hosting ISOs and providing access to the environment.

  1. Open a browser window, go to control.softlayer.com, and log in.
  2. Select Account > Place an Order.
  3. Select the Virtual Server menu and click Hourly or Monthly.
  4. Select the appropriate data center (i.e., where the VLANs were created) to provision the virtual/cloud server and specify the following for each option:
    • Computing Instance: 1x2.0 GHz Cores
    • RAM: 4 GB
    • Operating System: Windows Server 2012 R2 Standard Edition (64-bit)
    • First Disk: 100 GB (SAN)
    • Uplink Port Speeds: 1 Gbps Public and Private Network Uplinks
  5. Click the Continue Your Order button and select the backend and frontend VLANs on the Order Summary and Billing screen. (Note: The selection of VLANs is very important so the utility server can be placed in the correct pod within the data center. For the example environment, the backend VLAN is the management VLAN (i.e., 1101) and the frontend VLAN is the public VLAN (i.e., 2101).)
  6. Enter a hostname and domain name for the server and click the Place Order button.

 

Step 4: Order ESXi Hosts and Gateway/Firewall

In this step, we will order the ESXi hosts and the Brocade vRouter (Vyatta) gateway and firewall appliance while the virtual/cloud server is provisioning. It is important to order all of these servers at the same time so they are placed in the same pod. There is a chance, albeit small, that ordering hardware at separate times will cause hosts and firewalls to land in separate pods within a SoftLayer data center.

ESXi Hosts

For each ESXi host ordered for our environment, we will select VMware ESXi 5.5 as the operating system. If you wish to utilize SoftLayer's vSphere licenses, this will result in monthly charges based upon usage. Click here for more information on VMware.

Another option is to install ESXi using your own ISO. Instructions for this process can be found in Installing VMware vSphere ESXi via Remote Console and Virtual Media. If you wish to utilize this option, make sure to select No Operating System as the operating system for the management and capacity hosts during the ordering process.

Note that this implementation requires Enterprise Plus licensing for the use of vSphere Distributed Virtual Switches. If your license is not valid for Enterprise Plus, it is recommended to use the SoftLayer provided VMware Service Provider Program (VSPP) license.

Ordering Management Hosts

Use the following steps to order the management host servers.

  1. Open a browser window, go to control.softlayer.com, and log in.
  2. Select Account > Place an Order.
  3. Select Monthly under Bare Metal Servers.
  4. Choose an appropriate server that meets the requirements for the management cluster on the server list screen. (At a minimum, ESXi 5.5 requires a single dual-core processor, 4 GB of RAM, and a 1 Gb Ethernet controller. Click here for more information on vSphere ESXi minimums.)
  5. Select the appropriate data center (i.e., where the VLANs were created) to provision the ESXi servers and specify the following for each option:
  6. Click the Continue Your Order button.
  7. Click the Add Server button on the next screen to begin adding ESXi hosts for the capacity cluster to the order.
Order Capacity Hosts
  1. Choose an appropriate server that meets the requirements for the capacity cluster hosts on the server list screen. (At a minimum, ESXi 5.5 requires a single dual-core processor, 4 GB of RAM, and a 1 Gb Ethernet controller. Click here for more information on vSphere ESXi minimums.)
  2. Select the appropriate data center (i.e., where the VLANs were created) to provision the ESXi servers and specify the following for each option:
  3. Click the Continue Your Order button.
Complete Configuration

At this point, you should have ESXi hosts in your shopping cart. In order for the devices to be provisioned correctly, you will need to assign the public (if applicable) VLAN, private VLAN, hostname and domain to the devices.

To do this, assign the following VLANs and create hostnames for the devices:

  • ESXi hosts – Backend VLAN (e.g., 1101 in our environment)
  • Gateway/firewall – Backend VLAN (e.g., 1101) and Frontend VLAN (e.g., 2101 in our environment)

Once done, select your payment method, agree to the terms and conditions, and click the Finalize Your Order button. Do not proceed to Step 5: Trunk VLANs on BCS Switches until the servers have been provisioned and are accessible via the VPN or the virtual server ordered in the previous step.

 

Step 5: Trunk VLANs on BCS Switches

By default, SoftLayer places ports on the backend customer switches [i.e., private network switch in a pod, such as a Backend Customer Switch (BCS)] in access mode. As a result, we will need to trunk the ports attached to the ESXi hosts so the hosts can access storage and the VMs can communicate on the private network.

Before opening the ticket to trunk the VLANs, you will need to determine which network interfaces are on the private network. To do this, navigate to the Device Details for each ESXi server and look at the Private column under the Network section. For our environment, eth0 and eth2 are the private network adapters.

In addition to trunking of the VLANs for the ESXi hosts, we must also unbond the NICs for the management and capacity hosts. This is due to the fact that Link Aggregation Control Protocol (LACP) is not compatible with software iSCSI multipathing.

To trunk the VLANs and unbond the NICs, you will need to open a ticket.

  1. Open a browser window, go to control.softlayer.com, and log in.
  2. Select Support, Add Ticket.
  3. Enter the following information:
    • Subject: Private network question
    • Title: Trunk VLANs and unbond NICs
    • Details: Please trunk VLANs <Management VLAN>, <Storage VLAN>, and <VM VLAN> on eth0 and eth2 NIC pair for the following hosts [list each ESXi host]. Also, unbond (i.e., remove LACP) the private NICs (eth0 and eth2) on the following servers: [list each ESXi host]
  4. Click the Add Ticket button.

Make sure to replace the VLANs designated in <> with your actual VLAN numbers.
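If you are opening this ticket for several hosts, a small convenience snippet (not part of any SoftLayer tooling; the function and hostnames below are our own) can fill in the template from step 3 with your actual VLAN numbers and host list:

```python
def trunk_ticket_details(mgmt_vlan, storage_vlan, vm_vlan, hosts):
    """Render the Step 5 ticket Details text for a given set of hosts."""
    host_list = ", ".join(hosts)
    return (
        f"Please trunk VLANs {mgmt_vlan}, {storage_vlan}, and {vm_vlan} "
        f"on the eth0 and eth2 NIC pair for the following hosts: {host_list}. "
        f"Also, unbond (i.e., remove LACP) the private NICs (eth0 and eth2) "
        f"on the following servers: {host_list}"
    )

# Example with the sample VLANs from Table 3 and hypothetical hostnames:
print(trunk_ticket_details(1101, 1102, 1103,
                           ["esx01.mycompany.local", "esx02.mycompany.local"]))
```

Generating the text once and pasting it into the ticket keeps the VLAN numbers consistent with your worksheet.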

 

Step 6: Configure Management Host Networking

Once the servers have been provisioned and the VLANs are trunked, you will need to set up networking on the hosts in the management cluster. For this configuration, we will utilize vSphere standard switches for the management cluster. With the exception of the vMotion and fault tolerance port groups, we will use portable IP addresses to configure the VMkernel port groups. Note that we will utilize our own IP scheme for vMotion and fault tolerance since that traffic does not need to be routed. However, all hosts in a cluster need to be on the same subnet in order to utilize vMotion and fault tolerance capabilities. Should the requirement arise that the subnet be routed, it is recommended that SoftLayer portable IPs be used.

In order to configure the port groups, you will need to have the vSphere client installed on the utility virtual/cloud server or the workstation accessing the hosts via SoftLayer management VPN at https://vpn.softlayer.com.

  1. Open a browser window, go to https://vpn.softlayer.com, and log in.
  2. Launch the vSphere client when you have connected to the utility/cloud server or SoftLayer VPN.
  3. Enter the IP address, username, and password of one of the management ESXi hosts. (You can find the root password for the ESXi host by selecting Device Details > Passwords.)
  4. Navigate to the Configuration tab and create or modify the following port groups on the vSphere standard switch (most likely vSwitch0) under Networking.

Do this for each host in the management cluster.

vSwitch0 Properties

Network Adapter vmnicX and vmnicY
Load Balancing Route based on the originating virtual port ID
Active Adapters vmnicX and vmnicY

Table 4: vSwitch0 Properties

Edit Existing Virtual Machine Port Group

Network Label vmPG-Management

Table 5: vmPG-Management Port Group

Edit Existing vmkernel Port Group

Network Label vmkPG-Management

Table 6: vmkPG-Management Port Group

Add vMotion vmkernel Port Group

Connection Type VMKernel
Network Label vmkPG-vMotion
VLAN ID <Storage VLAN> (e.g., 1102)
vMotion Traffic Enabled
IP Address

172.16.10.X/24

This is a user-defined address and can be a different subnet if needed. Just make sure the other vMotion addresses on each host are on the same subnet.
Subnet Mask 255.255.255.0

Table 7: vMotion Port Group

Add Fault Tolerance VMkernel Port Group

Connection Type VMKernel
Network Label vmkPG-FT
VLAN ID <Storage VLAN> (e.g., 1102)
Fault Tolerance Logging Enabled
IP Address

172.16.20.X/24

This is a user-defined address and can be a different subnet if needed. Just make sure the other FT addresses on each host are on the same subnet.
Netmask 255.255.255.0

Table 8: FT Port Group
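The same-subnet requirement for the vMotion and FT addresses above is easy to sanity-check with Python's standard `ipaddress` module. The function name and sample addresses are illustrative (the 172.16.10.0/24 range matches Table 7's user-defined scheme):

```python
import ipaddress

def same_subnet(addresses, prefix=24):
    """True when every vMotion (or FT) VMkernel address falls in one subnet."""
    nets = {ipaddress.ip_interface(f"{a}/{prefix}").network for a in addresses}
    return len(nets) == 1

# Per-host vMotion addresses chosen from the 172.16.10.0/24 example range:
print(same_subnet(["172.16.10.11", "172.16.10.12", "172.16.10.13"]))  # True
print(same_subnet(["172.16.10.11", "172.16.20.11"]))                  # False
```

Run the check against the addresses you assign to each host before enabling vMotion or FT; a mismatched subnet is a common cause of migration failures.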

Add Storage VMkernel Port Group
It is strongly suggested to update the Notes section of each Portable IP Address with the name of the host and VMkernel port assigned. The Notes section can be located by navigating to the SoftLayer Management portal, https://control.softlayer.com Network > IP Management > Subnets > [Subnet].

Connection Type VMKernel
Network Label vmkPG-Storage
VLAN ID <Storage VLAN> (e.g., 1102)
IP Address

Portable Private Address

This is an IP address selected from the portable private addresses bound to the storage VLAN.
Netmask Subnet Mask Associated with the IP Range

Table 9: Storage Port Group

Add Public Address Port Group

Connection Type Virtual Machine
Network Label vmPG-Public
VLAN ID <Primary Public VLAN> (e.g., 2101)

Table 10: Public Port Group

 

Step 7: Download/Upload ISO Images and vCenter Virtual Appliance

At this point, the environment is ready to deploy the VMware vCenter Virtual Appliance and install a virtual machine for DNS, Active Directory, or BIND. Before we can do this, however, you will need to download the aforementioned images. To do this, remote desktop to the virtual/cloud server previously provisioned and download the images appropriate for your environment (e.g., the Windows Server or CentOS ISO and the vCenter Server Appliance package).

After downloading the ISO that will be used for DNS, Active Directory or BIND, you will need to upload the image to a management host datastore so that it can be attached to a virtual machine. To do this, use the vSphere client to connect to a management host and create an ISO directory on the local datastore.

Figure 2: Datastore Browser

Step 8: Deploy DNS

In this step, you will deploy a VM (on a management host) and install DNS services. To do this, use the traditional vSphere client to create a VM on the management ESXi host where the Windows or Linux ISO is located. Connect the appropriate ISO (Windows or CentOS) to deploy a DNS server on the VM. Make sure to associate the VM with the vmPG-Management port group created in a previous step.

After the VM is installed, assign an IP address and default gateway from the portable private subnet group. If you are using the VLAN worksheet in Appendix A, this is the Management VM subnet. It is highly suggested to update the Notes section of each Portable IP Address with the name of the VM assigned. The Notes section can be located by navigating to the SoftLayer Management portal, https://control.softlayer.com, Network > IP Management > Subnets > [Subnet].

Although it is beyond the scope of this document to detail the steps needed to enable DNS, we will provide the following guidance:

  1. Set DNS Forwarding to the service.softlayer.com local DNS hosts provided by SoftLayer:
    • rs1.service.softlayer.com 10.0.80.11
    • rs2.service.softlayer.com 10.0.80.12
  2. After DNS is setup, create a local DNS zone (e.g., dal06.mycompany.local) and a reverse lookup zone for all portable and primary subnets that have been provisioned.
  3. Add an A (host) record for each host's management IP address (vmk0 on vmkPG-Management).
  4. Add an A (host) record for your vCenter Virtual Appliance, using an address from the portable private subnet bound to the management VLAN.
  5. Update the Notes section of the Portable IP Subnet that you just assigned to vCenter.

Click here for more information on Windows DNS and Active Directory.
Click here for more information on CentOS BIND.

 

Step 9: Deploy and Configure vCenter Virtual Appliance

Now that DNS has been configured, we can deploy and configure the vCenter Server Appliance. To deploy the appliance:

  1. Remote desktop to the virtual/cloud server and open the vSphere Client.
  2. Connect to a management host and select File, Deploy OVF Template.
  3. Follow the wizard to complete the deployment.

For more information about OVF Template deployment, please visit the VMware documentation.

Since there is no Dynamic Host Configuration Protocol (DHCP) server available to assign the vCenter virtual appliance an IP address upon power-on, we must utilize the root console to configure the appliance. In fact, a NO NETWORKING DETECTED message will display on the vCenter Virtual Appliance console as shown in Figure 3.

Figure 3: vCenter Virtual Appliance Console

To configure the appliance:

  1. Log in to the console with a Username of root and a Password of vmware.
  2. Run /opt/vmware/share/vami/vami_config_net and follow the text wizard to configure the IP, subnet, and DNS properties. Remember to use the IP address of the DNS or BIND server created in Step 8: Deploy DNS.
  3. Save the network settings, exit the console, and open a browser on the virtual/cloud server.
  4. Navigate to the IP address you gave to the vCenter virtual appliance (VCVA) appended with port 5480 (e.g., https://<VCVA IP address>:5480).
  5. Accept the EULA in the wizard, answer the Customer Improvement Experience Program question, and select Configure Options, Set custom configuration.
  6. Click Next and fill in the following values:
    • Wizard Menu Option Value
      Database Settings Database Type embedded
      SSO Settings SSO Deployment Type embedded
      SSO Settings New administrator password (for administrator@vsphere.local) <Enter a password>
      SSO Settings Retype the new password <Enter the same password as above>
      Time synchronization NTP synchronization servertime.service.softlayer.com
      Table 11: VCVA Setup Wizard
  7. Click Start. The VCVA is now configured.
  8. Change the root password using the options under Admin.
  9. Log off the VCVA configuration web page when you are done.
  10. Navigate to the vSphere Web Client by entering the IP address of the VCVA appended with 9443/vsphere-client (e.g., https://<VCVA IP address>:9443/vsphere-client).
  11. Log in as root and select Administration, Licenses.
  12. Enter your VMware vCenter license.

 

Step 10: Create vCenter Clusters and Distributed Virtual Switch

Now that the VCVA is configured and licensed, we can create the data center and cluster constructs and the distributed virtual switch for the capacity cluster.

Create Data center and Clusters

  1. Navigate to the Home screen on the vSphere client.
  2. Select Getting Started and click on click here next to the message Your inventory is empty.
  3. Click on the Create Datacenter link on the next screen (i.e., Getting Started).
  4. Enter a data center name (e.g., our example environment is located in the Toronto 01 data center so we used Toronto 01 as the data center name).
  5. Click on the Create a cluster link on the Getting Started page once the data center is created.
  6. Name the first cluster Management, leave all other options unchecked (to be added later), and click OK.
  7. When you are done, create another cluster named Capacity using the same process as the management cluster.

At this point, you can add the management and capacity hosts to the Management and Capacity clusters, respectively, by using the Add a host link. Be sure to use the fully qualified domain name (FQDN) of the server rather than the IP address so that DNS is utilized when adding each host.

After adding the ESXi hosts to vCenter, you may notice a couple of warning messages relating to enabling the shell and SSH on each ESXi host:

Figure 4: vCenter Suppress Warnings

To suppress these warnings, click on the Suppress Warnings link next to the warning and click Yes on the popup screen. If the Suppress Warnings link is not present:

  1. Go to the ESXi host’s Manage tab.
  2. Select the Settings sub tab and click on Advanced System Settings.
  3. Find the UserVars.SuppressShellWarning key and change the value to 1.
  4. Click OK.

Figure 5: vCenter Suppress Warnings Advanced

After the management and capacity hosts are added to their respective clusters, go to each host and set up DNS and NTP. To set up DNS:

  1. Click on a host and select Manage > Networking.
  2. Select the default system stack (TCP/IP configuration) and click the pencil icon.
  3. Enter the IP address of the DNS server you previously created as well as the host and domain name.

Figure 6: Host DNS Settings

For NTP settings:

  1. Navigate to Manage, Settings, Time Configuration.
  2. Enter servertime.service.softlayer.com as the NTP server.
  3. Set the NTP Service Startup Policy to Start and stop with host.

Figure 7: Host NTP Settings

Create a Distributed Virtual Switch for the Capacity Hosts

  1. Use the vSphere Web Client to navigate to the Networking section and right-click on the data center name.
  2. Select New Distributed Switch.
  3. Name the distributed switch and click Next.

Figure 8: New Distributed Switch

  4. Select the appropriate distributed switch version and click Next. (We selected 5.5 since all of our ESXi hosts are provisioned with ESXi 5.5 in our environment.)
  5. On the edit settings screen, enter 2 as the number of uplinks and uncheck the Create a default port group checkbox.
  6. Click Next and then Finish to create the distributed virtual switch.

Create Port Groups for Distributed Virtual Switch
Now that the distributed virtual switch is present, we must now create port groups for vMotion, fault tolerance, VMs, and storage.

Create dvpg-Private-VM Management Port Group

  1. Navigate to the Networking section using the vSphere web client.
  2. Right click on the distributed virtual switch (i.e., DVS).
  3. Click on New Distributed Port Group… and enter the information in Table 12 for the first port group.
  4. Leave the default values for any options not specified in the table.
New Distributed Port Group Menu Field Value
Name and Location Name dvpg-Private-VM Management
Configure Settings Advanced Check Customize default policies configuration
Configure Policies (Teaming and Failover) Load Balancing Route based on physical NIC load
Configure Policies (Teaming and Failover) Failover Order Active Uplinks: Uplink 1 & Uplink 2

Table 12: DVS VM Management Port Group

When you are done creating the first port group, create the remaining port groups with the following configuration options (Table 13 through Table 17).

Create dvpg-Private-vMotion Port Group

New Distributed Port Group Menu Field Value
Name and Location Name dvpg-Private-vMotion
Configure Settings VLAN Type VLAN
Configure Settings VLAN ID <Storage VLAN>
Configure Settings Advanced Check Customize default policies configuration
Configure Policies (Teaming and Failover) Load Balancing Route based on physical NIC load
Configure Policies (Teaming and Failover) Failover Order Active Uplinks: Uplink 1 & Uplink 2

Table 13: DVS vMotion Port Group

Create dvpg-Private-Fault Tolerance Port Group

New Distributed Port Group Menu Field Value
Name and Location Name dvpg-Private-Fault Tolerance
Configure Settings VLAN Type VLAN
Configure Settings VLAN ID <Storage VLAN>
Configure Settings Advanced Check Customize default policies configuration
Configure Policies (Teaming and Failover) Load Balancing Route based on physical NIC load
Configure Policies (Teaming and Failover) Failover Order Active Uplinks: Uplink 1 & Uplink 2

Table 14: DVS FT Port Group

Create dvpg-Private-VM Access Port Group

New Distributed Port Group Menu Field Value
Name and Location Name dvpg-Private-VM Access
Configure Settings VLAN Type VLAN
Configure Settings VLAN ID <Virtual Machine VLAN>
Configure Settings Advanced Check Customize default policies configuration
Configure Policies (Teaming and Failover) Load Balancing Route based on physical NIC load
Configure Policies (Teaming and Failover) Failover Order Active Uplinks: Uplink 1 & Uplink 2

Table 15: DVS VM Access Port Group

Create dvpg-Private-Storage Path A

New Distributed Port Group Menu Field Value
Name and Location Name dvpg-Private-Storage Path A
Configure Settings VLAN Type VLAN
Configure Settings VLAN ID <Storage VLAN>
Configure Settings Advanced Check Customize default policies configuration
Configure Policies (Teaming and Failover) Load Balancing Route based on physical NIC load
Configure Policies (Teaming and Failover) Failover Order Active Uplinks: Uplink 1 & Uplink 2

Table 16: DVS Storage Port Group

Create dvpg-Primary-Public

New Distributed Port Group Menu Field Value
Name and Location Name dvpg-Primary-Public
Configure Settings VLAN Type VLAN
Configure Settings VLAN ID <Primary Public VLAN>
Configure Settings Advanced Check Customize default policies configuration
Configure Policies (Teaming and Failover) Load Balancing Route based on physical NIC load
Configure Policies (Teaming and Failover) Failover Order Active Uplinks: Uplink 1 & Uplink 2

Table 17: DVS Public Port Group

 

Step 11: Migrate ESXi hosts in Capacity cluster to Distributed Virtual Switch

Now that the capacity hosts have been added to the capacity cluster, we can migrate the virtual standard switch configuration to the distributed virtual switch created in Step 10: Create vCenter Clusters and Distributed Virtual Switch. We will do this for one host and later apply a host profile to configure the rest of the cluster.

Before we begin adding VMkernel adapters, we will assign the vmnics to the uplinks on the distributed virtual switch.

  1. Click on vCenter Inventory Lists, Distributed Switches.
  2. Select the distributed switch for the capacity hosts.
  3. Click on the Add and manage hosts link on the Getting Started page.
  4. Use the following settings to add uplinks and migrate the existing VMkernel associated with management of the host.
    • Menu Field Value
      Select Task Select Tasks Add Hosts
      Select Hosts Click on "New Hosts" Click on Capacity Host
      Select network adapter tasks Select network adapter tasks Select Manage physical adapters and Manage VMkernel adapters
      Table 18: DVS Add Hosts
  5. Select one of the private vmnics and click on Assign uplink on the Manage physical network adapters menu.
  6. Select uplink1 on the pop-up screen and click OK.
  7. Repeat this step for the other private vmnic and assign it to uplink2.

  8. Click Next, highlight the vmk0 VMkernel adapter, and click Assign port group.
  9. Select dvpg-Private-VM Management on the pop-up screen and click OK. Your screen should look similar to Figure 9.
  10. Click the Next button twice and then the Finish button to complete the migration to the distributed virtual switch. Note that you may briefly lose network connectivity to the host, but this should subside quickly.

Figure 9: VMkernel Network Adapters

After migrating the vmk0 adapter to the distributed virtual switch, we can now add VMkernels to each port group in the DVS.

  1. Click Manage > Networking on the host within vCenter.
  2. Select the VMKernel adapters menu.
  3. Click on the Add host networking icon and add the VMkernel adapters in Table 19 to Table 21.

Add vmk1 for vMotion

| Menu | Field | Value |
| --- | --- | --- |
| Select connection type | Select connection type | VMkernel Network Adapter |
| Select target device | Select an existing distributed port group | dvpg-Private-vMotion |
| Select network adapter tasks | Select network adapter tasks | Select Manage physical adapters and Manage VMkernel adapters |
| Port Properties | Enable Services | Check vMotion traffic |
| IPv4 Settings | IPv4 Address | 172.16.10.X/24. This is a user-defined address and can be on a different subnet if needed; just make sure the vMotion addresses on all hosts are on the same subnet. |
| IPv4 Settings | Subnet Mask | 255.255.255.0 |

Table 19: Host Networking vMotion
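The same-subnet requirement called out under IPv4 Settings can be verified up front with Python's standard ipaddress module. The host names and addresses below are illustrative values on the user-defined 172.16.10.0/24 network from Table 19, not values from a real deployment.

```python
import ipaddress

# Example vMotion VMkernel addresses, one per capacity host.
vmotion_addrs = {
    "capacity-host1": "172.16.10.11",
    "capacity-host2": "172.16.10.12",
    "capacity-host3": "172.16.10.13",
}

def same_subnet(addrs: dict, prefix: str = "24") -> bool:
    """True when every address falls inside a single network of the
    given prefix length -- the requirement in Table 19 (and Table 20
    for Fault Tolerance addresses)."""
    nets = {ipaddress.ip_interface(f"{a}/{prefix}").network
            for a in addrs.values()}
    return len(nets) == 1

assert same_subnet(vmotion_addrs)
```

The same check applies unchanged to the Fault Tolerance addresses in Table 20; only the dictionary of addresses differs.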

Add vmk2 for Fault Tolerance

| Menu | Field | Value |
| --- | --- | --- |
| Select connection type | Select connection type | VMkernel Network Adapter |
| Select target device | Select an existing distributed port group | dvpg-Private-Fault Tolerance |
| Select network adapter tasks | Select network adapter tasks | Select Manage physical adapters and Manage VMkernel adapters |
| Port Properties | Enable Services | Check Fault Tolerance Logging |
| IPv4 Settings | IPv4 Address | 172.16.20.X/24. This is a user-defined address and can be on a different subnet if needed; just make sure the FT addresses on all hosts are on the same subnet. |
| IPv4 Settings | Subnet Mask | 255.255.255.0 |

Table 20: Host Networking Fault Tolerance

Add vmk3 for Storage

| Menu | Field | Value |
| --- | --- | --- |
| Select connection type | Select connection type | VMkernel Network Adapter |
| Select target device | Select an existing distributed port group | dvpg-Private-Storage |
| Select network adapter tasks | Select network adapter tasks | Select Manage physical adapters and Manage VMkernel adapters |
| IPv4 Settings | IPv4 Address | Portable private address. This is an IP address selected from the portable private addresses bound to the storage VLAN, and it must be on a different subnet than Storage Path B. |
| IPv4 Settings | Subnet Mask | Subnet mask associated with the IP range |

Table 21: Host Networking Storage
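The requirement that the two storage paths sit on different subnets can likewise be checked with the standard ipaddress module. The addresses and /26 prefix below are placeholders; the real values come from the portable private subnets bound to your storage VLAN.

```python
import ipaddress

# Example portable private addresses for the two iSCSI storage paths.
path_a = ipaddress.ip_interface("10.1.1.10/26")
path_b = ipaddress.ip_interface("10.1.2.10/26")

def paths_are_redundant(a: ipaddress.IPv4Interface,
                        b: ipaddress.IPv4Interface) -> bool:
    """Table 21 requires Path A and Path B on different subnets so
    iSCSI multipathing traverses two distinct networks."""
    return a.network != b.network

assert paths_are_redundant(path_a, path_b)
```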

Create a Host Profile
In this step, we will capture a host profile from the single capacity host that we configured.

  1. Navigate to the Home screen of the vSphere Web Client and click the Host Profiles icon.
  2. Click the green plus (+) sign, Extract profile from a host, and select the previously configured capacity host on the pop-up screen.
  3. Click Next.
  4. Name the host profile (e.g., Capacity01 Host Profile), enter a description, and click Next.
  5. Review the settings and click Finish.

After the host profile is created, we need to modify it so that it does not prompt for MAC addresses when the profile is applied to the rest of the hosts in the capacity cluster.

  1. Right click on the host profile you just created and select Edit Settings.
  2. Select Edit Host Profile, Host virtual NIC.
  3. In the right-hand pane, change Determine how MAC address for vmknic should be decided to User must explicitly choose the policy option.
  4. Click Next, then click Finish.

Attach Host Profile to Capacity Cluster
Now that we have created a host profile, we must attach the host profile to the cluster so that it can be applied to the capacity hosts.

  1. Navigate to the Host and Clusters view in the vSphere Web Client.
  2. Enter maintenance mode for each host in the cluster since profiles can only be applied to hosts in maintenance mode.
  3. Right click on the capacity cluster and select Attach Host Profile from the pop-up menu.
  4. Select the host profile you just finished creating and click Next.
  5. Enter the required information on the Customize host screen and click Finish.

When this step is complete, you may place the hosts back into service (non-maintenance mode).
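Before attaching the profile, it can help to confirm that you have per-host answers ready for every field the Customize host screen will ask for (the VMkernel IPs from Tables 19 through 21). This is a minimal pre-flight sketch; the field names and host names are illustrative bookkeeping, not vCenter API properties.

```python
# Per-host values the Customize host screen will prompt for.
REQUIRED = {"vmotion_ip", "ft_ip", "storage_ip"}

def missing_customizations(hosts: dict) -> dict:
    """Return {host: set of missing fields}; an empty dict means
    every host has a complete set of customization answers."""
    return {h: REQUIRED - set(v) for h, v in hosts.items()
            if REQUIRED - set(v)}

hosts = {
    "capacity-host2": {"vmotion_ip": "172.16.10.12",
                       "ft_ip": "172.16.20.12",
                       "storage_ip": "10.1.1.12"},
    "capacity-host3": {"vmotion_ip": "172.16.10.13"},
}

# capacity-host3 is reported as missing its FT and storage addresses.
print(missing_customizations(hosts))
```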

 

Step 12: Order, Configure, and Attach Shared Storage

At this point, it is imperative that you order, configure, and attach shared storage for use with the management and capacity clusters and hosts within the environment. If you want to utilize SoftLayer’s multi-tenant shared storage solution (File Storage), visit the Architecture Guide for File Storage with VMware.

Step 13: Enable HA/DRS and svMotion vCVA

Enable HA/DRS on Management and Capacity Clusters
Now that shared storage is set up, we should enable HA and DRS to provide additional protection and load-balancing capabilities for the VMs on the management cluster.

  1. Navigate to the vSphere Web Client.
  2. Select Manage, Settings for the management cluster.
  3. Select vSphere DRS and click Edit.
  4. Choose the options outlined in Figure 12.

Figure 12: DRS Settings

  5. Select the options outlined in Figure 13 for the vSphere HA Settings screen.

Figure 13: HA Settings

Note: Repeat the above process for the capacity cluster.

Storage vMotion the vCenter Virtual Appliance
Now that storage is set up on the management cluster and HA and DRS are enabled, we need to migrate the vCenter Virtual Appliance to shared storage.

  1. Right click on the appliance and select Migrate from the pop-up menu.
  2. Select Change Datastore and click Next.
  3. Select the iSCSI volume previously mounted to the management cluster and click Next.
  4. Review the selections on the next screen and click Finish.

At this point, the advanced single-site VMware environment at SoftLayer is complete.

 

Summary

You now have a VMware environment running in a SoftLayer data center that is capable of running production workloads and can supplement an on-premises deployment. It follows VMware best practices and enables features such as VMware DRS, HA, Storage DRS, and networking redundancy. This reference architecture implementation can be extended with additional capacity or management hosts and additional storage as workloads grow.

For more information and FAQs related to VMware at SoftLayer:  

http://knowledgelayer.softlayer.com/procedure/deploy-vmwaresoftlayer  

http://knowledgelayer.softlayer.com/faqs/361

 

Appendix A: VLAN Worksheet

| VLAN Type | VLAN Number | IP Range | Gateway | Purpose |
| --- | --- | --- | --- | --- |
| Primary Private | | | | Management |
| Primary Private | | | | Storage |
| Primary Private | | | | Virtual Machines |
| Primary Public | | | | Public Access |
| Portable Private | | | | Management VMs |
| Portable Private | | | | Storage |
| Portable Private | | | | Virtual Machines |
| vMotion | | | N/A | |
| Fault Tolerance | | | N/A | |