Load Balanced Virtual Switches (Erl, Naserpour)
How can workloads be dynamically balanced on physical network connections to prevent bandwidth bottlenecks?
Problem
When network traffic on the uplink port for a virtual switch increases, it can cause delays, performance issues, and packet loss because the affected virtual servers are sending and receiving traffic via only one uplink.
Solution
Network traffic is balanced across multiple uplinks between the virtual and physical networks.
Application
Extra network interface cards are added to the physical host so that the virtual switch can be configured with multiple physical uplinks.
Mechanisms
Cloud Usage Monitor, Hypervisor, Load Balancer, Logical Network Perimeter, Resource Replication, Virtual Server
Compound Patterns
Burst In, Burst Out to Private Cloud, Burst Out to Public Cloud, Elastic Environment, Infrastructure-as-a-Service (IaaS), Multitenant Environment, Platform-as-a-Service (PaaS), Private Cloud, Public Cloud, Resilient Environment, Software-as-a-Service (SaaS)
Virtual servers are connected to the outside world via virtual switches. When the network traffic on the uplink port increases, bandwidth bottlenecks can occur, resulting in transmission delays, performance issues, packet loss, and lag time because the virtual servers are sending and receiving traffic via the same uplink.
Figure 1 - The sequence of events that can lead to network bandwidth bottlenecks.
- A virtual switch has been created and is being used to interconnect virtual servers.
- A physical network adapter has been attached to the virtual switch to be used as an uplink to the physical (external) network, connecting virtual servers to cloud consumers.
- Cloud consumers can send their requests to virtual servers via the physical uplink. The virtual servers reply via the same uplink.
- When the number of requests and responses increases, the amount of traffic passing through the physical uplink also grows. This further increases the number of packets that need to be processed and forwarded by the physical network adapter.
- When traffic increases beyond the physical adapter’s capacity, the adapter can no longer handle the workload.
- A network bottleneck forms, resulting in performance degradation and the loss of delay-sensitive data packets.
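The saturation described above can be illustrated with a minimal sketch (hypothetical figures, not from the pattern text): once the virtual servers' aggregate demand exceeds the single uplink's capacity, the excess traffic is delayed or dropped.

```python
# Hypothetical model of a single shared uplink. The 1 Gbps capacity and
# per-server demand figures are illustrative assumptions only.

UPLINK_CAPACITY_MBPS = 1000  # assumed 1 Gbps physical network adapter

def dropped_traffic(server_demands_mbps, capacity_mbps=UPLINK_CAPACITY_MBPS):
    """Return the traffic (Mbps) exceeding the uplink's capacity."""
    total = sum(server_demands_mbps)
    return max(0, total - capacity_mbps)

# Three virtual servers share one uplink; demand grows past capacity.
print(dropped_traffic([300, 300, 300]))  # 900 Mbps demand -> 0 dropped
print(dropped_traffic([500, 400, 400]))  # 1300 Mbps demand -> 300 dropped
```

The second call shows the bottleneck condition: 300 Mbps of traffic has nowhere to go on a single uplink, which is what the additional uplinks in the solution absorb.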
A load balancing system is established whereby multiple uplinks are provided to balance network traffic workloads.
Balancing the network traffic load across multiple uplinks or redundant paths can help avoid slow transfers and data loss. Link aggregation can further be used so that the workload is distributed across multiple uplinks simultaneously, ensuring that no single network card is overloaded.
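One common way link aggregation spreads traffic is to hash each flow's addresses and map the result to an uplink, so a given flow always uses the same link (avoiding packet reordering) while different flows spread across the team. The sketch below illustrates this policy; the MAC addresses and uplink names are hypothetical.

```python
# Hypothetical flow-hash policy, similar in spirit to the source-MAC-based
# load balancing used by NIC teams. zlib.crc32 is used so the mapping is
# deterministic across runs.
import zlib

def choose_uplink(src_mac: str, dst_mac: str, uplinks: list[str]) -> str:
    """Deterministically pick an uplink from the flow's MAC address pair."""
    index = zlib.crc32(f"{src_mac}->{dst_mac}".encode()) % len(uplinks)
    return uplinks[index]

uplinks = ["uplink0", "uplink1", "uplink2"]
# The same flow always maps to the same uplink; distinct flows can land
# on distinct uplinks, balancing the aggregate load.
print(choose_uplink("aa:bb:cc:00:00:01", "ff:ee:dd:00:00:09", uplinks))
```

The design trade-off: per-flow hashing never splits one flow across links (so one very large flow can still saturate a single uplink), but it keeps packets in order and balances well when many flows are active.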
Figure 2 - The addition of network interface cards and physical uplinks allows network workloads to be balanced.
- Virtual servers are connected to the external network via a physical uplink, while actively responding to cloud consumer requests.
- An increase in requests leads to increased network traffic, resulting in the physical uplink becoming a bottleneck.
- Additional physical uplinks are added to enable network traffic to be distributed and balanced.
The virtual switch needs to be configured to support multiple physical uplinks. The number of required uplinks can vary on a server-by-server basis. The uplinks generally need to be configured as a NIC team for which traffic shaping policies are defined.
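The NIC team's behavior can be sketched as follows, with a per-uplink bandwidth cap standing in for a traffic shaping policy. All names and capacity figures here are illustrative assumptions, not part of the pattern's specification.

```python
# Hypothetical NIC team: flows are placed round-robin on the next uplink
# with spare capacity, and the per-uplink cap acts as a shaping limit.
from itertools import cycle

class NicTeam:
    def __init__(self, uplink_names: list[str], cap_mbps: int):
        self.cap_mbps = cap_mbps
        self.load = {name: 0 for name in uplink_names}  # current Mbps per uplink
        self._rr = cycle(uplink_names)

    def assign(self, demand_mbps: int):
        """Place a flow on the next uplink with spare capacity, if any."""
        for _ in range(len(self.load)):
            uplink = next(self._rr)
            if self.load[uplink] + demand_mbps <= self.cap_mbps:
                self.load[uplink] += demand_mbps
                return uplink
        return None  # team saturated; a real shaping policy would queue or drop

team = NicTeam(["vmnic0", "vmnic1"], cap_mbps=1000)
print(team.assign(600))  # lands on one uplink
print(team.assign(600))  # spills to the other uplink instead of bottlenecking
```

With a single 1000 Mbps uplink, the second 600 Mbps flow would have caused the bottleneck of Figure 1; the team absorbs it by distributing the workload.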
NIST Reference Architecture Mapping
This pattern relates to the highlighted parts of the NIST reference architecture, as follows: