
January 18, 2011

3.019: fast vp - world's smartest storage tiering (part 2)

In Part 1 of this article, I discussed how the new VMAX FAST VP is highly differentiated when it comes to implementation, architecture, algorithms and simplicity. In Part 2 I focus on differentiation in the granularity of data management and in the advanced controls for FAST VP.

Before I dive in, I also wanted to reiterate that FAST VP is not the end-game for EMC’s investments in automated tiering. As we’ve said since we introduced the concept back in April 2009, EMC’s FAST Vision (and roadmap) is laid out in 5 stages, of which FAST VP is only the 2nd. Over the coming months and years, you will see EMC extend FAST in a progression:

  1. Thick: VMAX FAST V1 provided policy-based optimization at the Full LUN level
  2. Thin: VMAX FAST VP provides sub-LUN automated optimization
  3. Small: Next up will be the incorporation of data reduction technologies to reduce the footprint of both idle and active data
  4. Green: This phase will take efficiency to another level, moving idle data to spindle groups that will be automatically spun down until the data is actually needed
  5. Gone: Finally, aged data blocks will be archived out of the VMAX itself to external archive platforms (like the one announced during the Record Breakers launch today)

So, in addition to the unique value propositions offered by The World’s Smartest Storage Tiering product, EMC’s larger vision is also highly differentiated. Although I do expect others will try to copy our vision as well…

On to Part 2!

 

Granular Data Management

  • How many tiers are supported?
  • What is the granularity of tiering control?
  • What is the granularity of relocation?
  • How fast does it execute changes?

how many tiers are supported?

FAST VP currently supports 3 distinct tiers (not just 2, like many competitive approaches) - Flash, enterprise hard disk drives (10K and 15K rpm), and high-capacity SATA HDDs. Nothing in the implementation limits it to 3 tiers, but for now 10K and 15K rpm drives are considered "equal" because their performance characteristics are not significantly different when compared to Flash and SATA.

At today’s Flash prices, 3 tiers are often more cost-effective than two – without the middle tier of fast enterprise HDDs, competitors are forced to employ more Flash (at a higher cost), or more SATA (at a rather significant performance penalty). Even MLC can't span the void – in order to deliver the performance and lifetime required by enterprise storage platforms, MLC-based Flash drives must over-provision so much more capacity than SLC that any cost savings of the MLC NAND components are rendered moot. Of course, MLC might be a viable alternative on platforms that cannot sustain such high levels of write destage as the VMAX (such as the Definitively Slow 8000 series from a certain competitor).

FAST VP on VMAX is optimized for significantly more challenging workloads.

what is the granularity of relocation?

When it comes to automated tiering, it is important to understand whether the implementation is limited to moving sand, rocks or boulders.

Based on the workload research mentioned earlier, it was determined that the optimal solution was to have FAST VP employ variable units of relocation. Competitors have been forced (by their architecture) to implement fixed-size relocations – and in some cases, these relocations are so large that 80-90% of the data that is moved may never actually be accessed. Clearly these implementations have been rushed to market – even the most basic analysis would have revealed that this approach is inherently inefficient.

FAST VP basically works on multiples of the Virtual Provisioning "chunk" – 768KB (12 tracks) is the smallest unit of relocation. Competitive enterprise arrays? One uses a minimum of 42MB, the other moves a minimum of 1GB.

Now, in reality, VMAX FAST VP tracks utilization and moves data in increments of 10 VP chunks, or approximately 7.5MB "sub-extents." This is not a random choice – those traces I mentioned earlier led to this decision. It turns out that locality of reference observations indicate that the first access to a previously untouched (512KB) block is almost always followed by accesses to the surrounding/following blocks – an observation highly correlated to the allocation strategies of most modern databases and file systems. So, recognizing I/O requests to data that has not been accessed lately is one predictor of what will be accessed in the future.
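
Purely to make those numbers concrete, here is a minimal sketch (in Python) of what per-sub-extent bookkeeping built on this locality observation could look like. The class and method names are my own invention; only the 12-track/768KB chunk and 10-chunk sub-extent sizes come from the description above.

```python
# Hypothetical sketch of sub-extent bookkeeping; only the sizes are from
# the post (a VP chunk is 12 x 64KB tracks = 768KB; 10 chunks ~= 7.5MB).

TRACK_KB = 64                     # 768KB / 12 tracks
CHUNK_KB = 12 * TRACK_KB          # 768KB VP chunk, smallest relocation unit
SUBEXTENT_KB = 10 * CHUNK_KB      # 7680KB, the ~7.5MB unit FAST VP analyzes

class SubExtent:
    """Activity bookkeeping for one ~7.5MB region of a thin device."""
    def __init__(self):
        self.accesses_this_interval = 0

    def record_io(self):
        # The locality observation above: a first hit on a previously idle
        # region strongly predicts hits on the surrounding blocks, so even
        # a single access to a cold sub-extent flags it as warming up.
        self.accesses_this_interval += 1

    def is_promotion_candidate(self):
        return self.accesses_this_interval > 0
```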

One of the interesting features of FAST VP is that it only physically copies data that has actually been written to by the host at some point. When a LUN is created, VMAX marks all of its blocks as "Never Written By Host" (NWBH), and this flag is reset on the first write to the block. It is thus possible for some of the 768KB VP chunks of a FAST VP sub-extent to have been written, and for others to still be NWBH. All NWBH blocks can be presumed to be 100% filled with zeros (the initial state of all newly created devices on VMAX). Thus, when FAST VP decides to promote a 7.5MB sub-extent to a higher tier, it does not have to actually copy any NWBH chunks – it can merely retarget them to unused capacity in the new (destination) pool, with no data movement at all.
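
Here is a hedged illustration of that NWBH shortcut; the Pool and Chunk classes and the promote_subextent function are hypothetical stand-ins of mine, not the Enginuity implementation.

```python
# Hypothetical illustration of the NWBH optimization described above;
# the class and method names are mine, not actual Enginuity code.

class Pool:
    def __init__(self, name):
        self.name = name
        self.reserved_kb = 0

    def reserve(self, kb):
        self.reserved_kb += kb       # stand-in for allocating free capacity

class Chunk:
    SIZE_KB = 768                    # one VP chunk = 12 x 64KB tracks

    def __init__(self, written=False):
        self.written = written       # False => Never Written By Host (NWBH)
        self.pool = None

def promote_subextent(chunks, dest_pool):
    """Relocate a 10-chunk (~7.5MB) sub-extent to a higher tier."""
    copied_kb = 0
    for chunk in chunks:
        dest_pool.reserve(Chunk.SIZE_KB)
        if chunk.written:
            # Only chunks the host actually wrote get physically copied.
            copied_kb += Chunk.SIZE_KB
        # NWBH chunks are all zeros by definition: they are simply
        # retargeted at the new pool, and no zeros are ever copied.
        chunk.pool = dest_pool
    return copied_kb                 # often much less than the full 7.5MB
```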

Competitive implementations presumably have to copy the entire extent, be it 42MB or 1GB (or other) – even if the entire extent is all zeros.

No matter how you look at it, smaller granularity of relocation is better. Thanks to its granularity and awareness of blocks that have not actually been written to by the hosts, VMAX FAST VP actually has to copy less data to effect a tier relocation than pretty much any competitor in the market today. Smaller moves take less time, which means FAST VP can make changes faster. And smaller moves mean that more I/O density can be moved into a Flash drive than can be attained with larger extent sizes – FAST VP can focus tens of thousands of 7.5MB-wide "hot spots" into Flash, while (for example) IBM Easy Tier can only move hundreds of 1GB regions into the same amount of Flash.
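
The density claim is simple arithmetic; here is a minimal check, assuming a hypothetical 200GB Flash tier (the capacity figure is my example, not from the post):

```python
# Back-of-the-envelope check of the I/O density argument, assuming a
# hypothetical 200GB Flash tier (the capacity figure is my example).

FLASH_GB = 200
fast_vp_hot_spots = int(FLASH_GB * 1024 / 7.5)   # 7.5MB sub-extents -> 27306
easy_tier_regions = FLASH_GB // 1                # 1GB regions       -> 200

print(fast_vp_hot_spots, easy_tier_regions)      # tens of thousands vs hundreds
```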

what is the granularity of control?

FAST VP is controlled by policies, and these policies can be assigned to individual devices. Policies can also be assigned to VMAX Storage Groups, a logical container for 1 or more related LUNs – perhaps all of the LUNs related to a specific database or application, or to multiple different applications that have similar SLA requirements. These policies define the pools that will form the 3 tiers and the maximum amount of space that members of the Storage Group are allowed to consume on each tier. Multiple different storage groups can utilize the same pools, and each can have its own capacity allocations. Conflicts between Storage Groups are resolved with Priorities, ensuring that the applications of highest importance are permitted to consume overcommitted capacity (in the Flash tier, for example) ahead of lower-priority apps.
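
As a rough mental model (the field and function names below are mine, not symcli or Unisphere syntax), the policy-plus-priority scheme could be sketched like this:

```python
# Hypothetical model of per-group policies and priority-based conflict
# resolution; field names are illustrative, not Symmetrix syntax.

from dataclasses import dataclass

@dataclass
class FastPolicy:
    name: str
    tier_limits: dict    # max % of group capacity per tier,
                         # e.g. {"flash": 10, "fc": 50, "sata": 100}

@dataclass
class StorageGroup:
    name: str
    policy: FastPolicy
    priority: int        # lower number = more important

def grant_flash(groups, wanted_gb, free_flash_gb):
    """Serve contending groups in priority order, so the most important
    applications consume overcommitted Flash capacity first."""
    grants = {}
    for g in sorted(groups, key=lambda g: g.priority):
        grant = min(wanted_gb[g.name], free_flash_gb)
        grants[g.name] = grant
        free_flash_gb -= grant
    return grants
```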

Contrast this with some competitor solutions that have no policies whatsoever. In some, the "mix" is pre-defined in a hybrid "pool" of Flash and SATA devices, and then the capacity for all of the LUNs created from this pool will compete for resources based on access patterns. In this type of implementation, there is no way to ensure that an application is getting the appropriate service levels. Consider this: a very important application that performs relatively "slow" I/O requests will likely have less of its data moved into Flash if it shares the same hybrid pool with an application that does I/O as fast as possible – the one with more "misses" wins, even if it is not the important application.

In our research, customers made it clear that they required the ability not only to set different policies per application, but also to CHANGE these policies if things weren't working out as planned.

Which leads to the next question:

how fast will it react to changes?

And of course, being able to change the policies isn't enough – these changes have to take effect quickly, especially if workloads or priorities suddenly change. Taking this requirement very seriously, FAST VP can respond to change in less than 30 minutes. Usage stats are updated at least every 10 minutes and compared to the utilization and priority policies. Relocation activities will typically wait for 2 stats updates (to determine the change rate); these relocations will be scheduled to begin shortly after the second stats update completes.

Faster is of course better here. But attempting to react too quickly can also be detrimental, as the probability of moving more data than needed increases the less analysis that is done. For VMAX FAST VP, the ideal balance (based on the aforementioned workload traces) is the 10/20/30 minute cycle. This allows for relatively quick response to workload surges and bulges, without a significant amount of added/wasted overhead.
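
A minimal sketch of that cadence follows; the decision function and its thresholds are a timing model I have inferred from the description above, not the actual FAST VP scheduler.

```python
# Sketch of the 10/20/30-minute cadence described above; this timing
# model is inferred from the post, not the actual FAST VP scheduler.

STATS_INTERVAL_MIN = 10           # usage stats refresh at least this often

def plan_relocation(history):
    """history: per-interval access counts for one sub-extent, newest last.
    FAST VP waits for two stats updates so it can see the *rate* of change,
    then schedules moves shortly after the second update completes, which
    yields a worst-case reaction time of under ~30 minutes."""
    if len(history) < 2:
        return None                        # need two samples to see a trend
    latest, previous = history[-1], history[-2]
    if latest > 0 and latest >= previous:
        return "promote"                   # sustained or rising activity
    if latest == 0 and previous == 0:
        return "demote"                    # cold across two full intervals
    return None                            # ambiguous: wait for more data
```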

Competitive solutions? More than one of them takes at least 24 hours to react to workload changes and/or policy changes (the ones that actually support policies, that is). Again, customers made it clear that this would be unacceptable. They know that their workloads change daily, and even multiple times per day, so to be viable, tiered storage automation has to be able to react far more quickly than once a day.

Especially if performance suddenly drops and requires a change in policy. “Let’s wait until tomorrow and see if things improve” is rarely a good solution.

Advanced Controls

  • Are all time periods the same?
  • When can the data move?
  • Can the speed of moving be controlled?
  • Can data that is ‘in the right place’ be locked there?
  • Can policies be changed quickly?

are all time periods the same?

FAST VP collects usage statistics only during windows specified by the administrator. For example, if you want to have FAST VP optimize for daily transaction workloads, but not to consider the impact of backups on the access patterns, you can specify that stats should not be collected during the backup window.

when can the data move?

FAST VP also allows you to specify periods of the day when data should (or should not) be relocated. For example, to ensure that 100% of the array's resources are focused on optimizing around market open in a trading application, you might prohibit all moves from (say) 9AM-10AM Eastern US time. You probably would want stats to be collected during that time, but you might not want to allow changes to happen until things settle out.

For other applications, you might decide that you really do want the system to only relocate during the off-hours (if your application actually enjoys such a thing as "off hours" – increasingly rare).

The reaction time of FAST VP is closely coupled to these two time windows – clearly FAST VP cannot react if the relocation window is closed off during the day.
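
A minimal sketch of how the two window types could interact is below; the helper API is mine, and the specific hours just echo the backup and market-open examples above.

```python
# Hypothetical helper showing how stats-collection windows and relocation
# windows interact; the API and the example hours are illustrative only.

from datetime import time

def in_window(now, windows):
    """windows: list of (start, end) times when an activity is allowed."""
    return any(start <= now < end for start, end in windows)

# Example: collect stats all day EXCEPT a 1AM-5AM backup window, and
# allow moves at any time EXCEPT the 9-10AM (market open) hour.
stats_windows = [(time(0, 0), time(1, 0)), (time(5, 0), time(23, 59))]
move_windows  = [(time(0, 0), time(9, 0)), (time(10, 0), time(23, 59))]

def fast_vp_tick(now, collect_stats, relocate):
    if in_window(now, stats_windows):
        collect_stats()          # usage statistics keep accumulating
    if in_window(now, move_windows):
        relocate()               # otherwise pending moves simply wait
```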

can the speed of moves be controlled?

Of course it can – and in fact, it’s EASY!

Data moves only during relocation time windows; closing or moving a window will stop all relocations immediately – that's the on/off switch. You can also control the copy rate (and thus its impact on performance) using Copy QoS (Quality of Service) settings. You can set relative priorities between different policies such that the more important applications get moved/relocated first/fastest, and less important ones wait until nothing else is going on. And you can stop movements for specific applications – but wait, I'm getting ahead of myself.

Many competitors to this day don't offer any type of Copy QoS – not even for local or remote replication. VMAX inherently supports the notion of administrator-controlled throttles on virtually all data copy operations, from TimeFinder and SRDF to Open Replicator and now FAST VP.
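
What a Copy QoS throttle boils down to can be shown with a generic rate-limited copy loop; this is a textbook sketch of mine, not the Enginuity mechanism.

```python
# Generic rate-limited copy loop illustrating what a "Copy QoS" throttle
# does; a textbook sketch, not the actual Enginuity QoS implementation.

import time

def throttled_copy(read, write, total_mb, qos_mb_per_sec):
    """Copy total_mb of data without exceeding qos_mb_per_sec."""
    copied = 0
    while copied < total_mb:
        start = time.monotonic()
        burst = min(qos_mb_per_sec, total_mb - copied)  # one second's budget
        write(read(burst))
        copied += burst
        # Sleep off the rest of the second so sustained throughput stays
        # at or below the configured ceiling.
        elapsed = time.monotonic() - start
        if elapsed < 1.0:
            time.sleep(1.0 - elapsed)
```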

can data be locked in the ‘right place’?

Indeed, this is perhaps one of the most frequently requested features – which is odd, because it is pretty much guaranteed that no mere human will be able to collect and analyze enough data to decide if or when an application should be locked down.

No matter: FAST VP allows you to “pin,” “freeze” or “move” each application/storage group:

  • Pin: stop moving the extents for a specific application/storage group – leave every one of them exactly where it is right now;
  • Freeze: temporarily stop moving EVERYTHING in the array and leave every extent right where it is;
  • Move: relocate an entire storage group back to a single tier within a specific pool (even a different one than it is using) – this is accomplished using the new Virtual LUN V3 capability of 5875 (see the sketch below).
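
A minimal sketch of those three overrides; the class and method names are illustrative only, not Symmetrix management syntax.

```python
# Hypothetical controller showing the pin/freeze/move overrides; the
# class and method names are mine, not Symmetrix management syntax.

class FastVpController:
    def __init__(self):
        self.frozen = False          # array-wide stop switch ("Freeze")
        self.pinned = set()          # storage groups left exactly as-is
        self.forced_pool = {}        # group -> pool for a forced "Move"

    def pin(self, group):            # Pin: stop moving one group's extents
        self.pinned.add(group)

    def freeze(self):                # Freeze: stop ALL movement, array-wide
        self.frozen = True

    def move(self, group, pool):     # Move: force a whole group to one tier
        # Per the post, 5875 would carry this out via Virtual LUN V3.
        self.forced_pool[group] = pool

    def may_relocate(self, group):
        return not self.frozen and group not in self.pinned
```
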
can policies be changed quickly?

At the risk of being repetitive, FAST VP supports multiple dynamic reconfigurations:

  • Tiering policy – per application/storage group
  • Storage group membership
  • Copy QoS (move speeds)
  • Priorities
  • Statistics collection & relocation windows
  • “Pin,” “Freeze” or “Move”

Such dynamic management enables rapid response, automatically adapting to changing workloads while supporting one-off exceptions (fare wars, big news, etc.) or changing business priorities.

And since all of these controls are dynamic, it is easy to adapt and adjust should things stray too far off course - extremely unlikely to happen with FAST VP, but isn't it good to know that you haven't handed control over to an auto-pilot that won't let go of the helm until tomorrow?

FAST VP: world's Smartest Automated Tiering

So – a long post, but allow me to summarize:

FAST VP offers the most intelligent automated tiering available across the entire information storage industry (not just in the Enterprise).

FAST VP is highly differentiated in several areas:

  • Easy and Effective Implementations
    • Per-application Policies, modeled by Tier Advisor
    • Relocations consider other resources, help with reads and writes
    • Works with everything, validated against real workloads
  • Granular Data Management
    • Supports 3 Tiers, dynamically configured with online expansion
    • Multiple applications share common pools, each app with its own policies
    • Moves 7.5MB sub-extents, analyzed every 10 minutes
  • Advanced Controls
    • Time controls for performance monitoring and movement
    • Supports Priorities and Copy Quality of Service
    • Allows for operator override and on-line reconfiguration

No other vendor has the experience that EMC has, or has made the investments that EMC has made, in VMAX FAST VP. Future software releases will continuously improve the FAST VP algorithms, and customer feedback will steer further investments in the product.

But for now, nobody has anything close to the intelligence of VMAX Fully Automated Storage Tiering for Virtual Pools.

And EMC is undeniably the first to deliver intelligent automated storage tiering.

 



Comments


Aran Hoffmann

Barry,
Very cool stuff. I am working on a design for a FAST VP deployment with a customer and had a question about the granularity of control. You state that you can assign a policy to a device, but the interface only allows for assignment to a Storage Group. Since most of the Masking Views in use by my customers contain devices that they would not want under FAST VP control, is it preferable to create a Storage Group that is used just for FAST VP Policy assignment, so that we can have fine control over which devices are assigned to which policies?

Cheers,
Aran

the storage anarchist

Development is working on simplifying this, but yes, in the meantime, your approach will work.

MarkS

Why are your blogs basically EMC sales FUD? There is nothing unbiased, therefore nothing worth reading. You do not appreciate that other vendors are capable of original thought, or that EMC has to build on 10 years of legacy code for each new function.

I never read your blogs because of this but I followed a link from another site. Blogs should be interesting not sales.

the storage anarchist

MarkS -

Thanks for the feedback - I am sorry that you did not find my blog interesting or informative.
