
February 23, 2010

2.042: bring out your dead!

R.I.P. DS6800

My, what a week already.

IBM finally got around to putting the stillborn DS6800 out of its misery – something I had thought they were smart enough to do over two years ago (I was apparently wrong). Not to worry, I guess – if you really want one of these useless beasts, I understand they are still available over on eBay.

Once touted as the entry-level Shark, the DS6800 was purported to share the vast majority of its code with the higher-end DS8000 series. Over time, it became clear that no such miracle had been performed – the DS6800 was even less feature-rich than the DS8K. And with the brand-spanking-new DS8700 lacking several features that were touted as foundational for the DS8000 platform family (e.g. thin provisioning, LPARs and the like), it has got to make you wonder how serious IBM is about this space.

But undoubtedly attracting the most attention have been the comments from NetApp's CEO Tom Georgens late last week that the notion of storage tiering is dead.

Bring out your dead!

There has been a lot of Twitter chatter about Tom's assertion, and at least a few blog posts – e.g., Mark Twomey's (@StorageZilla) Virtual Vs. Static Provisioning, Martin Glassborow's (@storagebod) The Crying Game, and Chris Evans' (@chrismevans) Enterprise Computing – Death of Tiering?. And even today the debate rambles on in Twitterville, with Alex McDonald (@alextangent) in the middle of the debate over whether PAM II + SATA is "tiering" or simply "caching."

All good fun, but I'd like to bring forth a slightly different perspective for why there is more to tiering than simply Flash and SATA.

first, the context

Reviewing the context of Tom's comments from NetApp's Q3F10 Earnings Call Transcript, you'll see that Tom was responding to a question from an analyst from Needham & Co. about EMC's FAST and "what NetApps (sic) is doing in that regard" (you'll have to search for "cheering" instead of "tiering" – sort of a Freudian slip by the transcriptionist, I guess ;-).

Tom's response (edited to correct said Freudian slip):

[…] Frankly I think the entire concept of tiering is dying and I probably don’t want to go into a long speech on that but at the end of the day, the simple fact of the matter is, tiering is a way to manage migration of data between fiber channel based system and serial ATA based systems.

And with the advent of Flash, […] basically these systems are going to go to large amount of Flash which are going to be dynamic with serial ATA behind them and the whole concept of have tiered storage is going to go away.

Now first, you have to realize that Tom was being challenged by the Needham analyst, who was pointedly asking what NetApp is doing in response to EMC's FAST (Fully Automated Storage Tiering). In this situation, CEOs are coached never to respond defensively…and true to form, Tom moves quickly to turn the question into an offensive opportunity (read the transcript for the stuff I omitted to see what I mean). He deftly redefines the question (tiering is about migrating between Fibre Channel and SATA), and then goes on the offensive (it's all going to be just Flash and SATA – no FC, thus no need for tiering).

Now, admit it – had it been Yours Truly spouting something like that, every competitive blogger would have said those words justify calling me THE most notorious FUD-meister on the planet.

But even though it wasn't me that said those words, they are still outright and blatant FUD.

Yes, Virginia, even CEOs will sling the FUD when they have no other answer.

is tiering really dead? i think not!

OK, so now we know WHY Tom Georgens resorted to FUD-slinging. And as I said, the Twitterati have been relentless over the weekend, pointing out the silliness of Tom's assertion from a variety of angles.

But I think they've all missed the most important point about storage tiering:

I will argue that there is much more to storage tiering than simply Flash, Fibre Channel and SATA. Indeed, this holds even if Fibre Channel drives are eliminated from the data center entirely – and I agree that they should be (they are the most cost-inefficient devices, in both $/GB and $/IOPS, in today's data centers).

So, even if there is no more FC (or enterprise SAS, for that matter), there will still be a need for storage tiering – at least in EMC's book. As @StorageZilla discussed, EMC's vision for FAST to the FULLEST is that automated tiering is about automating the continuous optimization of storage cost, performance and footprint: getting the right data to the right tier at the right time.

With a little thought, we can envision even more tiers beyond SATA, for example:

  1. A compressed or de-duped tier (either on-line or backup data)
  2. A tier of static, unused data which is moved to SATA drives and spun-down to reduce power requirements and heat generation
  3. An "archive" tier, where unused files or blocks are transparently off-loaded from primary to archive storage to further reduce the primary data footprint
  4. And how about a "cloud" tier, either as a lower-cost archive or perhaps even in support of applications that have been relocated out of the data center infrastructure for any of a variety of reasons (overflow, just-in-time, load balancing, etc.)
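To make "the right data, to the right tier, at the right time" concrete, here's a minimal sketch of the policy side of such an engine. Everything here – the tier names, the thresholds, the Extent fields – is hypothetical illustration on my part, not FAST or any vendor's actual implementation:

```python
from dataclasses import dataclass

# Hypothetical tier ladder, hottest first; the thresholds are illustrative only.
TIERS = [
    ("flash",      {"min_iops": 500}),               # hot, active data
    ("sata",       {"min_iops": 10}),                # warm data
    ("compressed", {"max_idle_days": 30}),           # cool, rarely read
    ("spun_down",  {"max_idle_days": 90}),           # static, power-saving
    ("archive",    {"max_idle_days": 365}),          # long-term retention
    ("cloud",      {"max_idle_days": float("inf")}), # cheapest, off-premises
]

@dataclass
class Extent:
    name: str
    iops: float        # recent I/O rate observed on this extent
    idle_days: float   # days since the extent was last accessed

def place(extent: Extent) -> str:
    """Pick the first (hottest) tier whose policy the extent satisfies."""
    for tier, policy in TIERS:
        if "min_iops" in policy and extent.iops >= policy["min_iops"]:
            return tier
        if "max_idle_days" in policy and extent.idle_days <= policy["max_idle_days"]:
            return tier
    return TIERS[-1][0]  # fall through to the cheapest tier
```

A real implementation would of course weigh cost, footprint and SLAs alongside activity, and would migrate data continuously in the background rather than evaluating a policy once.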

Perhaps not all environments will require all of these, nor will each necessarily be implemented as a distinct "tier" – indeed, both NetApp and Celerra offer data reduction for all primary data today and not as a tier.

But one thing is for sure – these others aren't going to be implemented as a cache or as an inherent feature. No, these tiers are destinations where data will live for a long time, controlled by dynamic policies about retention, SLAs and availability. These are not going to be implemented as a transient place where extra copies of data are made in hopes of improving performance and response times, as is the case with cache. And while using Flash as an L2/L3 cache (depending upon your perspective) can clearly deliver value for some applications, there is no mandate that Flash-as-cache and Flash-as-tier cannot be combined in a single system, and I expect there to be very practical implementations that do just that at some point in time.
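The cache-versus-tier distinction above boils down to how many copies of the data exist and where the authoritative one lives. A toy sketch (hypothetical, not any shipping design) of the two models:

```python
class Cached:
    """Flash-as-cache: flash holds a transient *extra* copy; SATA stays authoritative."""
    def __init__(self):
        self.sata = {}   # persistent home of the data
        self.flash = {}  # extra copy kept purely for speed

    def write(self, key, value):
        self.sata[key] = value   # data always lands on the backing tier...
        self.flash[key] = value  # ...plus a duplicate in cache

    def evict(self, key):
        # Dropping the cached copy loses nothing; SATA still has the data.
        self.flash.pop(key, None)


class Tiered:
    """Flash-as-tier: each block persists in exactly one (logical) place at a time."""
    def __init__(self):
        self.tiers = {"flash": {}, "sata": {}}

    def write(self, key, value, tier="flash"):
        self.tiers[tier][key] = value  # single copy, on its assigned tier

    def demote(self, key):
        # Migration *moves* the only copy – a relocation, not a duplication.
        self.tiers["sata"][key] = self.tiers["flash"].pop(key)
```

In the cached model, losing the flash copy costs only performance; in the tiered model, the tier is the data's home, which is why policies about retention and SLAs attach to it.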

so that's that

Well, not really. I suspect there is also a technical reason behind Tom's reaction that hasn't yet been uncovered. Judging by the way the NetApp blogger wagons have circled around the topic to assert that Flash+SATA is all that is needed, it's clear they don't want the conversation to expand as I've described above.

I'll admit, though, that I don't know why. Are they just defending their chosen path of Flash-as-cache instead of Flash-as-a-tier? Or is it deeper than that? Is there something inherent in WAFL that makes it difficult to implement multiple different tiers within a single array? Or is it something else?

I'll stop here, lest I be accused of fighting FUD with more FUD. But it does leave the question unanswered:

Why would NetApp declare storage tiering dead when
the world is so clearly headed in that direction?





The interesting thing to me is that there were essentially two contradicting claims...

1.) that tiering is dead/dying
2.) that dynamically moving blocks between SSD and SATA is the future, anything more is pointless.

Regardless of which disk technology you are using (FC, SSD, SATA, CDROM, DVD, etc.), anytime you store data on more than one of them you have tiering. Whether it's SSD+SATA (2 tiers) or SSD+FC+SATA+DDUP (4 tiers), you still have tiering, and automating the movement of data between those tiers (be it 2 or 10) is the future. The underlying technology doesn't really matter.

the storage anarchist


I have long asserted that the only real difference between the -as-cache vs. -as-tier models is whether there is one copy of the data or more than one. With -as-cache, there is more than one; with tiering as a destination, the data persists in only a single (logical) location at a time.

But as you say, in the end it's the same thing with a different implementation.

Alex McDonald

Hi Barry, Alex McDonald aka @alextangent here.

Full details of the non-issue over on Marc Farley's blog -- he's an even bigger conspiracy theorist than you!

Here's the bottom line; Georgens may be right about storage tiering. But he didn't say it because there's something inherent in WAFL that makes it difficult. Quite the reverse.

the storage anarchist

Alex - Sorry, but I'm not convinced yet.

Not that I'm skeptical, cynical or a contrarian or anything, but the deep dive that Farley started just doesn't clearly explain if/how you might implement multiple tiers on NTAP kit.

I'm not saying you can't - we've surprised a lot of Hitachi folks by implementing stuff they asserted our architecture couldn't handle. I'm just observing that you don't today offer multiple tiers in your product, and that your CEO says the idea itself is dead, which is preposterous.

And to my knowledge, nobody from NetApp has said "we could add multiple tiers to our box, but we won't and here's why..."

Alex McDonald

I never expected to convince you. Or Farley for that matter, but I can safely say he's driving down the wrong path. What we're doing with the technology will just have to come as a surprise in that case!

the storage anarchist

Ohhh, goody - I *LOVE* surprises.

But meanwhile, your CEO looks pretty out of touch with the existing reality that tiering is alive and well.

Tony Pearson

You forgot to mention tape as another tier! Put this down as "Tier 5" in your list.

Tony Pearson (IBM)



I am unabashedly an employee of EMC, but the opinions expressed here are entirely my own. I am a blogger who works at EMC, not an EMC blogger. This is my blog, and not EMC's. Content published here is not read or approved in advance by EMC and does not necessarily reflect the views and opinions of EMC.
