Load Balanced Virtual Switches (Erl, Naserpour)
How can workloads be dynamically balanced on physical network connections to prevent bandwidth bottlenecks?
Problem
When network traffic on the uplink port for a virtual switch increases, it can cause delays, performance degradation, and packet loss, because the affected virtual servers send and receive all traffic via a single uplink.
Solution
Network traffic is balanced across multiple uplinks between the virtual and physical networks.
Application
Additional network interface cards are added to the physical host, and the virtual switch is configured to use them as multiple physical uplinks.
Mechanisms
Cloud Usage Monitor, Hypervisor, Load Balancer, Logical Network Perimeter, Resource Replication, Virtual Server
Compound Patterns
Burst In, Burst Out to Private Cloud, Burst Out to Public Cloud, Cloud Authentication and Access Management, Elastic Environment, Infrastructure-as-a-Service (IaaS), Multitenant Environment, Platform-as-a-Service (PaaS), Private Cloud, Public Cloud, Resilient Environment, Secure Burst Out to Private Cloud/Public Cloud, Software-as-a-Service (SaaS)
The addition of network interface cards and physical uplinks allows network workloads to be balanced.
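To illustrate how a virtual switch might spread workloads across its uplinks, the following is a minimal Python sketch of a hash-based uplink selection policy, one common teaming approach in which all frames of a given source/destination pair stay on the same physical link (preserving in-order delivery) while distinct flows spread across the available links. The function name and the `vmnic` uplink names are illustrative, not taken from any specific hypervisor API.

```python
import zlib

def select_uplink(src_mac: str, dst_mac: str, uplinks: list) -> str:
    """Pick a physical uplink for a frame by hashing its MAC pair.

    The same MAC pair always maps to the same uplink, so a flow's
    frames are never reordered, while different flows are spread
    across all configured uplinks.
    """
    key = f"{src_mac}-{dst_mac}".encode()
    return uplinks[zlib.crc32(key) % len(uplinks)]

# Example: a host with three physical NICs serving one virtual switch.
uplinks = ["vmnic0", "vmnic1", "vmnic2"]
chosen = select_uplink("00:50:56:aa:bb:01", "00:50:56:aa:bb:02", uplinks)
```

Real implementations offer several such policies (for example, hashing on the source port or on the IP 5-tuple, or picking the least-loaded link); the key property shared by all of them is that adding uplinks increases aggregate bandwidth without reordering any single flow.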
NIST Reference Architecture Mapping
This pattern relates to the highlighted parts of the NIST reference architecture, as follows: