Intra-Storage Device Vertical Data Tiering (Erl, Naserpour)
How can the dynamic vertical scaling of data be carried out within a storage device?
Problem: When data must be maintained within a single cloud storage device, its storage and processing capacity is limited to that of the device.
Solution: A cloud storage device capable of supporting multiple disk types is used to enable dynamic vertical scaling confined to the device.
Application: Complex cloud storage technology is utilized to establish storage tiers through which data can be scaled up or down via LUN migration.
Compound Patterns: Burst In, Burst Out to Private Cloud, Burst Out to Public Cloud, Elastic Environment, Infrastructure-as-a-Service (IaaS), Multitenant Environment, Platform-as-a-Service (PaaS), Private Cloud, Public Cloud, Resilient Environment, Software-as-a-Service (SaaS)
When a cloud consumer has a firm requirement to limit the storage of data to a single cloud storage device, the capacity of that device to store and process data can become a source of performance-related challenges. For example, different servers, applications, and cloud services that are forced to use the same device may have data access and I/O requirements that are incompatible with the cloud storage device's capabilities.
A system is established to support vertical scaling within a single cloud storage device. This intra-device scaling system utilizes the availability of different disk types with different capacities.
Figure 1 - A conventional horizontal scaling system involving two cloud storage devices (1, 2) is transitioned to an intra-storage device system (3) capable of vertically scaling through disk types graded into different tiers (4). Each LUN is moved to a tier that corresponds to its processing and storage requirements (5).
The cloud storage architecture requires the use of a complex storage device that supports different types of hard disks with different performance characteristics, such as SATA, SAS, and SSD drives. The disk types are organized into graded tiers, so that LUN migration can vertically scale the device based on the allocation of disk types that align to the processing and capacity requirements at hand.
After the disks are categorized, data load thresholds and conditions are defined so that a LUN moves to a higher or lower grade whenever a pre-defined condition is met. The automated scaling listener uses these thresholds and conditions when monitoring runtime data processing traffic.
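The threshold logic described above can be sketched as a simple decision function. This is a minimal illustration, not an actual product interface: the names (`TierThresholds`, `tier_decision`) and the IOPS figures are hypothetical, and a real scaling listener would evaluate such conditions against live traffic metrics.

```python
from dataclasses import dataclass

@dataclass
class TierThresholds:
    scale_up_iops: int    # above this request rate, move the LUN to a higher-performance tier
    scale_down_iops: int  # below this request rate, move the LUN to a lower-cost tier

def tier_decision(observed_iops: int, t: TierThresholds) -> str:
    """Return the scaling action implied by the observed request rate."""
    if observed_iops >= t.scale_up_iops:
        return "scale_up"
    if observed_iops <= t.scale_down_iops:
        return "scale_down"
    return "stay"

# Example: a LUN receiving 9,500 IOPS against an 8,000 IOPS scale-up threshold
print(tier_decision(9_500, TierThresholds(scale_up_iops=8_000, scale_down_iops=1_000)))
# -> scale_up
```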
Figure 2 - An intra-device cloud storage architecture resulting from the application of this pattern (Part 1).
- A storage device that supports different types of hard disks is installed.
- Different types of hard disks are installed in the enclosures.
- Similar disk types are grouped together to create different grades of disk groups based on their I/O performance.
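The grouping step above can be sketched as a small data structure. The group names, disk types, and grade numbers are illustrative assumptions; the point is only that similar disks form groups, and the groups are ordered into tiers by I/O performance.

```python
# Hypothetical disk groups graded by I/O performance (grade 1 = highest).
disk_groups = {
    "Disk Group 1": {"disk_type": "SATA", "grade": 3},  # capacity tier
    "Disk Group 2": {"disk_type": "SAS",  "grade": 2},  # mid tier
    "Disk Group 3": {"disk_type": "SSD",  "grade": 1},  # performance tier
}

# Order the groups from highest to lowest performance.
tiers = sorted(disk_groups, key=lambda g: disk_groups[g]["grade"])
print(tiers)
# -> ['Disk Group 3', 'Disk Group 2', 'Disk Group 1']
```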
Figure 3 - An intra-device cloud storage architecture resulting from the application of this pattern (Part 2).
- Two LUNs have been created on Disk Group 1: LUN red and LUN yellow.
- The automated scaling listener monitors the requests and compares them with the predefined thresholds.
- The usage monitor tracks the actual amount of disk usage on the red LUN based on free space and disk group performance.
- The automated scaling listener detects that the number of requests arriving at the red LUN is approaching the predefined threshold, determines that the red LUN needs to be moved to a higher-performance disk group, and informs the storage management program.
- The storage management program signals the LUN migration to move the red LUN to a higher-performance disk group.
- The LUN migration works with the storage controller to move the red LUN to the higher-performance disk group.
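The control flow in the steps above can be sketched as follows. All class and function names are hypothetical, and the "migration" here only records a tier change; in a real storage device the controller would relocate the LUN's blocks between disk groups while the LUN remains online.

```python
TIERS = ["SATA", "SAS", "SSD"]  # assumed tier order, lowest to highest performance

class StorageController:
    """Stand-in for the storage controller that carries out LUN migration."""
    def __init__(self) -> None:
        self.lun_tier = {"red": "SATA", "yellow": "SATA"}

    def migrate(self, lun: str, target_tier: str) -> None:
        # A real controller would copy blocks between disk groups; here we
        # only record the LUN's new tier.
        self.lun_tier[lun] = target_tier

def on_threshold_breached(lun: str, controller: StorageController) -> None:
    """Storage management program: promote the LUN one tier up, if possible."""
    idx = TIERS.index(controller.lun_tier[lun])
    if idx < len(TIERS) - 1:
        controller.migrate(lun, TIERS[idx + 1])

ctrl = StorageController()
on_threshold_breached("red", ctrl)  # scaling listener reported the breach
print(ctrl.lun_tier["red"])
# -> SAS
```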
Figure 4 - An intra-device cloud storage architecture resulting from the application of this pattern (Part 3).
- The red LUN is moved to a higher-performance disk group.
- The usage monitor continues to monitor disk usage as before. However, the service price of the red LUN is now higher than before, because it resides on a higher-performance disk group.
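The pricing consequence noted above can be illustrated with a simple per-tier rate table. The rates are invented for illustration; the only point is that billing follows the tier the LUN currently occupies, so a migration to a higher-performance disk group raises the service price.

```python
# Hypothetical per-GB monthly rates for each disk tier.
PRICE_PER_GB_MONTH = {"SATA": 0.03, "SAS": 0.08, "SSD": 0.20}

def monthly_cost(tier: str, used_gb: float) -> float:
    """Service price for a LUN based on the tier it currently occupies."""
    return PRICE_PER_GB_MONTH[tier] * used_gb

print(monthly_cost("SATA", 500))  # before migration -> 15.0
print(monthly_cost("SAS", 500))   # after migration  -> 40.0
```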
NIST Reference Architecture Mapping
This pattern relates to the highlighted parts of the NIST reference architecture, as follows: