- Identify common virtual switch configurations
Skills and Abilities
- Determine use cases for and apply IPv6
- Configure NetQueue
- Configure SNMP
- Determine use cases for and apply VMware DirectPath I/O
- Migrate a vSS network to a Hybrid or Full vDS solution
- Configure vSS and vDS settings using command line tools
- Analyze command line output to identify vSS and vDS configuration details
- vSphere Command-Line Interface Installation and Scripting Guide
- vNetwork Distributed Switch: Migration and Configuration
- ESX Configuration Guide
- ESXi Configuration Guide
- vSphere Client
- vSphere CLI
Determine use cases for and apply IPv6
To enable IPv6 from the command line:
- To enable IPv6 for the VMkernel, run the command:
- vicfg-vmknic -6 true (or esxcfg-vmknic -6 true from the Service Console)
- To enable IPv6 for the Service Console, run the command:
- esxcfg-vswif -6 true
- To verify that IPv6 has been enabled, run the command:
- vicfg-vmknic --list
A host reboot is required for the IPv6 change to take effect.
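The steps above can be sketched as one short sequence. The host name below is a placeholder; vicfg commands are run from a vCLI/vMA system, while esxcfg-vswif runs on the ESX Service Console itself.

```shell
# Enable IPv6 for the VMkernel networking stack (vCLI, placeholder host)
vicfg-vmknic --server esx01.example.com -6 true

# On the ESX Service Console, enable IPv6 for the vswif interface
esxcfg-vswif -6 true

# Verify IPv6 is enabled and inspect VMkernel NIC addresses
vicfg-vmknic --server esx01.example.com --list

# Reboot the host for the change to take effect
```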
Note the following:
ESX 3.5 supports virtual machines configured for IPv6.
ESX 4.0 supports IPv6 with the following restrictions:
- IPv6 Storage (software iSCSI and NFS) is experimental in ESX 4.0.
- ESX does not support TCP Segmentation Offload (TSO) with IPv6.
- VMware High Availability and Fault Tolerance do not support IPv6.
Configure NetQueue
NetQueue is disabled by default and can be configured from the GUI or the command line.
To enable NetQueue in the VMkernel using the VMware Infrastructure (VI) Client:
- Choose Configuration > Advanced Settings > VMkernel.
- Select the checkbox for VMkernel.Boot.netNetqueueEnabled.
At the command line, you can instead enable NetQueue by adding the following line to /etc/vmware/esx.conf:
- /vmkernel/netNetqueueEnabled = "TRUE"
After you enable NetQueue by either of the above methods, you must enable NetQueue on the adapter module itself using the vicfg-module command.
- Configure a supported NIC to use NetQueue:
- vicfg-module <conn_options> -s "intr_type=2 rx_ring_num=8" s2io
- Verify that NetQueue has been configured:
- vicfg-module <conn_options> -g s2io
- List the set of modules on the host:
- vicfg-module <conn_options> -l
- Changes require a reboot to take effect.
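As a worked example, the adapter-module steps above might look as follows for the s2io (Neterion 10GbE) driver; the host name is a placeholder, and the option string is the one given in the vSphere CLI documentation for this module:

```shell
# Configure the s2io module for NetQueue:
# intr_type=2 selects MSI-X interrupts, rx_ring_num=8 allocates eight receive queues
vicfg-module --server esx01.example.com -s "intr_type=2 rx_ring_num=8" s2io

# Confirm the option string was stored for the module
vicfg-module --server esx01.example.com -g s2io

# List all modules on the host if unsure of the module name
vicfg-module --server esx01.example.com -l

# Reboot the host for the changes to take effect
```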
Configure SNMP
- Configure SNMP Communities
- vicfg-snmp.pl -server <hostname> -username <username> -password <password> -c <community1>
- Each time you run this command with -c, the community list you supply overwrites the previous configuration. To specify multiple communities, separate the community names with commas.
- Configure SNMP Agent to Send Traps
- vicfg-snmp.pl -server <hostname> -username <username> -password <password> -t <target_address>@<port>/<community>
- You can then enable the SNMP agent by typing:
- vicfg-snmp.pl -server <hostname> -username <username> -password <password> --enable
- Then send a test trap by typing:
- vicfg-snmp.pl -server <hostname> -username <username> -password <password> --test
- Configure SNMP Agent for Polling
- vicfg-snmp.pl -server <hostname> -username <username> -password <password> -p <port>
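Putting the SNMP steps together, a complete host configuration might look like the sketch below. The host, credentials, community names, and trap receiver are all placeholders:

```shell
# Shared connection options (placeholder values)
OPTS="--server esx01.example.com --username root --password secret"

# Set the community list; repeating -c overwrites, so pass all names comma-separated
vicfg-snmp.pl $OPTS -c public,monitoring

# Send traps to a receiver, using community "public" on the default trap port 162
vicfg-snmp.pl $OPTS -t trapsink.example.com@162/public

# Enable the agent, send a test trap, and set the polling port
vicfg-snmp.pl $OPTS --enable
vicfg-snmp.pl $OPTS --test
vicfg-snmp.pl $OPTS -p 161
```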
For vCenter Server:
- Select Administration > vCenter Server Settings.
- If the vCenter Server is part of a connected group, in Current vCenter Server, select the appropriate server.
- Click SNMP in the navigation list.
- Enter the primary receiver information. Note that if the port value is empty, vCenter Server uses the default port 162.
- Optionally, enable additional receivers.
- Click OK.
Determine use cases for and apply VMware DirectPath I/O
VMware DirectPath I/O allows a guest VM to access an I/O device directly, bypassing the virtualization layer. This can improve performance; a good use case is 10 Gigabit Ethernet for guests that require high network throughput. Each guest can support up to two passthrough devices.
- Requirements from VMware's DirectPath I/O documentation: VMDirectPath supports a direct device connection for virtual machines running on Intel Xeon 5500 systems, which feature an implementation of the I/O memory management unit (IOMMU) called Virtual Technology for Directed I/O (VT-d). VMDirectPath can work on AMD platforms with I/O Virtualization Technology (AMD IOMMU), but this configuration is offered as experimental support.
- Some machines might not have this technology enabled in the BIOS by default.
- A good guide to setting up DirectPath I/O can be found at the Petri IT Knowledgebase.
Migrate a vSS network to a Hybrid or Full vDS solution
This document from VMware covers the topic in its entirety. Read it to gain a better understanding of vDS and the reasoning behind why a Hybrid solution may or may not work. A good excerpt from the document follows:
In a hybrid environment featuring a mixture of vNetwork Standard Switches and vNetwork Distributed Switches, VM networking should be migrated to vDS in order to take advantage of Network vMotion. As Service Consoles and VMkernel ports do not migrate from host to host, these can remain on a vSS. However, if you wish to use some of the advanced capabilities of the vDS for these ports, such as Private VLANs or bi-directional traffic shaping, or, team with the same NICs as the VMs (for example, in a two port 10GbE environment), then you will need to migrate all ports to the vDS.
Configure vSS and vDS settings using command line tools
Analyze command line output to identify vSS and vDS configuration details
vicfg-vswitch -l (to get DVSwitch, DVPort, port group, and vmnic names)
esxcfg-vswif -l (to get the vswif IP address, netmask, DVPort ID, etc.; ESX only)
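A short command-line sketch for both switch types may help here. The host name, vSwitch and port group names, VLAN ID, DVPort ID, and dvSwitch name below are all placeholder values; the dvp-uplink options are from the vSphere CLI documentation for vicfg-vswitch:

```shell
# Standard vSwitch: create a switch, add a port group, and link an uplink
vicfg-vswitch --server esx01.example.com -a vSwitch1
vicfg-vswitch --server esx01.example.com -A "VM Network 2" vSwitch1
vicfg-vswitch --server esx01.example.com -L vmnic2 vSwitch1

# Tag the port group with VLAN 100
vicfg-vswitch --server esx01.example.com -v 100 -p "VM Network 2" vSwitch1

# Distributed vSwitch: attach a free pNIC to a DVPort on the host's proxy switch
vicfg-vswitch --server esx01.example.com --add-dvp-uplink vmnic3 --dvp 16 dvSwitch0

# Review the resulting vSS and vDS configuration
vicfg-vswitch --server esx01.example.com -l
```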