You can administer networking in a VMware environment in many different configurations. The examples in this section describe some of the VMware networking possibilities.
This section is not a substitute for the VMware documentation. Review the VMware networking best practices before deploying any applications on an ESXi host.
The following are the suggested best practices for configuring a network that supports deployed applications on VMware hosts (a scripted example follows the list):

- Separate the network services to achieve greater security and performance by creating a vSphere standard or distributed switch with dedicated NICs for each service. If you cannot use separate switches, use port groups with different VLAN IDs.
- Configure the vMotion connection on a separate network devoted to vMotion.
- For protection, deploy firewalls in the virtual machines that route between virtual networks that have uplinks to physical networks and pure virtual networks with no uplinks.
- Specify the vmxnet3 virtual machine NIC hardware type for best performance.
- Connect all physical NICs that are connected to the same vSphere standard switch to the same physical network.
- Connect all physical NICs that are connected to the same distributed switch to the same physical network.
- Configure all VMkernel vNICs to use the same IP Maximum Transmission Unit (MTU).
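Several of these practices can be scripted against the vSphere API. The sketch below uses the open-source pyVmomi Python bindings to create a dedicated standard switch, add a VLAN-tagged port group, and align a VMkernel NIC's MTU. The host name, credentials, NIC, switch, and port group names, VLAN ID, and MTU value are illustrative assumptions, not values from this guide; verify the spec fields against the vSphere API reference for your release.

```python
# Minimal pyVmomi sketch: dedicated vSwitch, VLAN port group, consistent
# VMkernel MTU. All names, addresses, and IDs are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; verify certificates in production
si = SmartConnect(host="esxi01.example.com", user="root", pwd="changeme", sslContext=ctx)

# Take the first ESXi host visible to this connection (a standalone host
# connection returns exactly one).
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
host = view.view[0]
net = host.configManager.networkSystem

# Dedicated standard switch with its own physical uplink, so this service's
# traffic is separated from other services.
vss_spec = vim.host.VirtualSwitch.Specification()
vss_spec.numPorts = 128
vss_spec.mtu = 9000
vss_spec.bridge = vim.host.VirtualSwitch.BondBridge(nicDevice=["vmnic2"])
net.AddVirtualSwitch(vswitchName="vSwitch-Apps", spec=vss_spec)

# Port group with its own VLAN ID, for when separate switches are not available.
pg_spec = vim.host.PortGroup.Specification()
pg_spec.name = "VM-Network-VLAN20"
pg_spec.vswitchName = "vSwitch-Apps"
pg_spec.vlanId = 20
pg_spec.policy = vim.host.NetworkPolicy()
net.AddPortGroup(portgrp=pg_spec)

# Keep all VMkernel vNICs at the same MTU (vmk1 is a placeholder device name).
vnic_spec = vim.host.VirtualNic.Specification()
vnic_spec.mtu = 9000
net.UpdateVirtualNic("vmk1", vnic_spec)

Disconnect(si)
```

The same operations are available interactively through the vSphere Client; the scripted form is useful when an identical layout must be repeated across many hosts.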
Example 1 describes a simple configuration for networking Avaya applications within the same ESXi host. Highlights to note:

- Separation of networks: VMware Management, VMware vMotion, iSCSI (SAN traffic), and virtual machine networks are segregated to separate physical NICs.
- Teamed network interfaces: vSwitch3 in Example 1 uses a load-balanced NIC team for the Virtual Machines Network. Load balancing provides additional bandwidth for the Virtual Machines Network, while also preserving network connectivity for the virtual machines if a single NIC fails (see the sketch after this list).
- Virtual networking: The network connectivity between virtual machines that connect to the same vSwitch is entirely virtual. In Example 1, virtual machines on the vSwitch3 network can communicate without entering the physical network. Virtual networks benefit from faster communication speeds and lower management overhead.
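The NIC team on vSwitch3 can be expressed through the same API. The following continuation of the earlier pyVmomi sketch (it reuses the `net` connection object) applies a two-NIC, load-balanced team; the NIC names and the `loadbalance_srcid` policy choice (route based on the originating virtual port ID) are assumptions for illustration, not settings mandated by this guide.

```python
# Illustrative continuation: configure a load-balanced two-NIC team on
# vSwitch3. Assumes `net` from the previous sketch; NIC names are placeholders.
from pyVmomi import vim

vswitch = next(s for s in net.networkInfo.vswitch if s.name == "vSwitch3")
spec = vswitch.spec

# Bond both physical uplinks to the switch and mark them active for teaming.
spec.bridge = vim.host.VirtualSwitch.BondBridge(nicDevice=["vmnic3", "vmnic4"])
if spec.policy is None:
    spec.policy = vim.host.NetworkPolicy()
spec.policy.nicTeaming = vim.host.NetworkPolicy.NicTeamingPolicy(
    policy="loadbalance_srcid",  # route based on originating virtual port ID
    nicOrder=vim.host.NetworkPolicy.NicOrderPolicy(activeNic=["vmnic3", "vmnic4"]),
)
net.UpdateVirtualSwitch(vswitchName="vSwitch3", spec=spec)
```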
Example 2 shows a more complex configuration that uses multiple physical network interface cards. The key differences between Example 1 and Example 2 are:

- VMware Management Network redundancy: Example 2 adds a second VMkernel port on vSwitch2 to handle VMware Management Network traffic. If vmnic0 fails, VMware Management Network operations can continue on this redundant management network (see the sketch after this list).
- Removal of Teaming for Virtual Machines Network: Example 2 removes the teamed physical NICs from vSwitch3. In Example 1, the team provided additional bandwidth and tolerance of a single NIC failure; Example 2 instead reallocates one of those NICs to other workloads.
- Communication Manager Duplex Link: vSwitch4 is dedicated to Communication Manager Software Duplication. The physical NIC assigned to vSwitch4 is on a separate physical network that meets the requirements described in PSN003556u (linked in the table below).
- Session Manager Management Network: Example 2 separates the Session Manager Management network onto its own vSwitch. The vSwitch has a dedicated physical NIC that physically segregates the Session Manager Management network from other network traffic.
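The redundant management VMkernel port from Example 2 can be sketched the same way. This continuation (again reusing `net` and `host` from the first sketch) assumes a port group named Mgmt-Backup already exists on vSwitch2; the port group name and IP addressing are hypothetical placeholders.

```python
# Illustrative continuation: add a second VMkernel port for management
# traffic, mirroring the redundancy in Example 2. Assumes `net` and `host`
# from the first sketch; port group name and addresses are placeholders.
from pyVmomi import vim

vnic_spec = vim.host.VirtualNic.Specification()
vnic_spec.ip = vim.host.IpConfig(dhcp=False,
                                 ipAddress="192.0.2.10",
                                 subnetMask="255.255.255.0")
device = net.AddVirtualNic(portgroup="Mgmt-Backup", nic=vnic_spec)

# Tag the new vmk device for management traffic so ESXi can keep managing
# the host on this interface if the primary management NIC (vmnic0) is lost.
vnm = host.configManager.virtualNicManager
vnm.SelectVnicForNicType(nicType="management", device=device)
```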
| Title | Link |
| --- | --- |
| Product Support Notice PSN003556u | https://downloads.avaya.com/css/P8/documents/100154621 |
| Performance Best Practices for VMware vSphere® 5.5 | http://www.vmware.com/pdf/Perf_Best_Practices_vSphere5.5.pdf |
| Performance Best Practices for VMware vSphere® 6.0 | http://www.vmware.com/files/pdf/techpaper/VMware-PerfBest-Practices-vSphere6-0.pdf |
| VMware vSphere 5.5 Documentation | https://www.vmware.com/support/pubs/vsphere-esxi-vcenter-server-pubs.html |
| VMware vSphere 6.0 Documentation | https://www.vmware.com/support/pubs/vsphere-esxi-vcenter-server-6-pubs.html |
| VMware vSphere 6.5 Documentation | http://pubs.vmware.com/vsphere-65/index.jsp |
| VMware Documentation Sets | https://www.vmware.com/support/pubs/ |