Storage Workload Management (Erl, Naserpour)
How can storage processing workloads be dynamically distributed across multiple storage devices?
Problem: When storage-related processing is limited to one cloud storage device, over-utilization can occur while other storage devices are under-utilized or not utilized at all, resulting in a non-optimized cloud storage architecture.
Solution: A storage capacity system is provided to distribute runtime workloads between different cloud storage devices across the network, and to enable LUNs to be divided and managed.
Application: Cloud storage devices are combined into a resource pool from which they are scaled horizontally, in coordination with the use of a storage capacity monitor and LUN migration.
Mechanisms: Audit Monitor, Automated Scaling Listener, Cloud Storage Device, Cloud Usage Monitor, Load Balancer, Logical Network Perimeter
Compound Patterns: Burst In, Burst Out to Private Cloud, Burst Out to Public Cloud, Cloud Balancing, Elastic Environment, Infrastructure-as-a-Service (IaaS), Multitenant Environment, Platform-as-a-Service (PaaS), Private Cloud, Public Cloud, Resilient Environment, Software-as-a-Service (SaaS)
When cloud storage devices are utilized independently, the chances of some devices becoming over-utilized while others remain under-utilized are significant. Over-utilized storage devices increase the workload on the storage controller and can cause a range of performance challenges. Under-utilized storage devices may be wasteful due to lost processing and storage capacity potential.
Figure 1 - An imbalanced cloud storage architecture where six storage LUNs are located on Storage 1 for use by different cloud consumers, while Storage 2 and Storage 3 each host two additional LUNs. Because it hosts the most LUNs, the majority of the workload ends up with Storage 1.
The LUNs are evenly distributed across available cloud storage devices and a storage capacity system is established to ensure that runtime workloads are evenly distributed across the LUNs.
Figure 2 - LUNs are dynamically distributed across cloud storage devices, resulting in more even distribution of associated types of workloads.
Combining the different storage devices as a group allows LUN data to be spread out equally among available storage hosts. A storage management station is configured and an automated scaling listener is positioned to monitor and equalize runtime workloads among the storage devices in the group.
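The even spreading of LUN data described above can be sketched as a greedy least-loaded placement: each LUN is assigned to whichever device in the pool currently carries the smallest load. The function and device names below are illustrative assumptions, not part of the pattern text.

```python
import heapq

def assign_luns(lun_sizes_gb, device_names):
    """Greedy least-loaded placement of LUNs onto pooled storage devices.

    Returns a mapping of device name -> list of LUN indices assigned to it.
    This is only a sketch of the balancing idea; a real storage management
    station would also consider IOPS, consumer affinity, and capacity limits.
    """
    # Min-heap of (current_load_gb, device_name) so the least-loaded
    # device is always popped first.
    heap = [(0, name) for name in device_names]
    heapq.heapify(heap)
    placement = {name: [] for name in device_names}
    # Place the largest LUNs first, which tends to yield a tighter balance.
    for index, size in sorted(enumerate(lun_sizes_gb), key=lambda p: -p[1]):
        load, name = heapq.heappop(heap)
        placement[name].append(index)
        heapq.heappush(heap, (load + size, name))
    return placement
```

For example, six equally sized LUNs placed across three devices end up two per device, avoiding the imbalanced arrangement shown in Figure 1.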
Figure 3 - A cloud architecture resulting from the application of the Storage Workload Management pattern (Part 1).
- The storage capacity system and the storage capacity monitor are configured to survey three storage devices in real time. As part of this configuration, certain workload and capacity thresholds are defined.
- The storage capacity monitor determines that the workload on Storage 1 is reaching a predefined threshold.
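The threshold check performed by the storage capacity monitor in the steps above can be sketched as follows. The 80% threshold and the data shape are assumptions for illustration only.

```python
# Assumed threshold fraction at which a device counts as over-utilized.
WORKLOAD_THRESHOLD = 0.80

def check_utilization(devices):
    """Return names of devices whose utilization crosses the threshold.

    `devices` maps a device name to a (used_capacity, total_capacity) pair.
    A real monitor would sample these figures continuously rather than
    receive them as a snapshot.
    """
    over_utilized = []
    for name, (used, total) in devices.items():
        if total and used / total >= WORKLOAD_THRESHOLD:
            over_utilized.append(name)
    return over_utilized
```

In the Figure 3 scenario, only Storage 1 would be reported to the storage capacity system.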
Figure 4 - A cloud architecture resulting from the application of the Storage Workload Management pattern (Part 2).
- The storage capacity monitor informs the storage capacity system that Storage 1 is over-utilized.
- The storage capacity system initiates workload balancing via the storage load/capacity manager (not shown).
Figure 5 - A cloud architecture resulting from the application of the Storage Workload Management pattern (Part 3).
- The storage load/capacity manager calls for LUN migration to move some of the storage LUNs from Storage 1 to the other two storage devices.
- LUN migration transitions the LUNs.
- After the LUNs are distributed, the workload is balanced.
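The migration planning performed by the storage load/capacity manager in the steps above can be sketched as a greedy selection: move the largest LUNs off the over-utilized device, each to the least-loaded target, until the source drops below its threshold. All names, sizes, and the greedy strategy are illustrative assumptions.

```python
def plan_lun_migrations(source_luns, source_load, target_loads, capacity, threshold):
    """Pick LUNs to migrate off an over-utilized source device.

    source_luns: dict of lun_id -> size on the over-utilized device.
    target_loads: dict of target device name -> current load.
    Returns a list of (lun_id, target_device) moves.
    """
    moves = []
    targets = dict(target_loads)
    # Move the largest LUNs first so the fewest migrations are needed.
    for lun_id, size in sorted(source_luns.items(), key=lambda p: -p[1]):
        if source_load / capacity < threshold:
            break  # the source device is no longer over-utilized
        # Send each LUN to the currently least-loaded target device.
        destination = min(targets, key=targets.get)
        moves.append((lun_id, destination))
        source_load -= size
        targets[destination] += size
    return moves
```

A single migration of the largest LUN may be enough to bring the source device back under its threshold, which keeps migration traffic to a minimum.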
Note that if some of the LUNs are being accessed less frequently or only at specific times, the storage capacity system can keep the hosting storage device in power-saving mode until it is needed.
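The power-saving behavior described above can be sketched as a simple access-frequency check; the function name, the per-hour metric, and the threshold are assumptions for illustration.

```python
def update_power_state(device_access_counts, idle_threshold):
    """Decide which devices can be kept in power-saving mode.

    `device_access_counts` maps a device name to its recent access rate
    (e.g. accesses per hour). Devices below `idle_threshold` are flagged
    for power-saving mode until their LUNs are needed again.
    """
    states = {}
    for name, accesses_per_hour in device_access_counts.items():
        if accesses_per_hour < idle_threshold:
            states[name] = "power-saving"
        else:
            states[name] = "active"
    return states
```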
NIST Reference Architecture Mapping
This pattern relates to the highlighted parts of the NIST reference architecture, as follows: