2.042: bring out your dead!
My, what a week already.
IBM finally got around to putting the stillborn DS6800 out of its misery – something I thought they were smart enough to do over two years ago (I was apparently wrong). Not to worry, I guess – if you really want one of these useless beasts, I understand they are still available over on eBay.
Once touted as the entry-level Shark, the DS6800 was purported to share the vast majority of its code with the higher-end DS8000 series. Over time, it became clear that no such miracle had been performed – the DS6800 was even less feature-rich than the DS8K. And with the brand-spanking-new DS8700 lacking several features that were touted as foundational for the DS8000 platform family (e.g., thin provisioning, LPARs and the like), you have to wonder how serious IBM is about this space.
But undoubtedly attracting the most attention have been the comments from NetApp's CEO Tom Georgens late last week that the notion of storage tiering is dead.
There has been a lot of Twitter chatter about Tom's assertion, and at least a few blog posts – e.g., Mark Twomey's (@StorageZilla) Virtual Vs. Static Provisioning, Martin Glassborow's (@storagebod) The Crying Game, and Chris Evans' (@chrismevens) Enterprise Computing – Death of Tiering?. And even today the debate rambles on in Twitterville, with Alex McDonald (@alextangent) in the middle of the argument over whether PAM II + SATA is "tiering" or simply "caching."
All good fun, but I'd like to bring forth a slightly different perspective for why there is more to tiering than simply Flash and SATA.
first, the context
Reviewing the context of Tom's comments from NetApp's Q3F10 Earnings Call Transcript, you'll see that Tom was responding to a question from an analyst from Needham & Co. about EMC's FAST and "what NetApps (sic) is doing in that regard" (you'll have to search for "cheering" instead of "tiering" – sort of a Freudian slip by the transcriptionist, I guess).
Tom's response (edited to correct said Freudian slip):
[…] Frankly I think the entire concept of tiering is dying and I probably don’t want to go into a long speech on that but at the end of the day, the simple fact of the matter is, tiering is a way to manage migration of data between fiber channel based system and serial ATA based systems.
And with the advent of Flash, […] basically these systems are going to go to large amount of Flash which are going to be dynamic with serial ATA behind them and the whole concept of have tiered storage is going to go away.
Now first, you have to realize that Tom was being challenged: the Needham analyst was pointedly asking what NetApp is doing in response to EMC's FAST (Fully Automated Storage Tiering). In this situation, CEOs are coached never to respond defensively…and true to form, Tom moves quickly to turn the question into an offensive opportunity (read the transcript for the stuff I omitted to see what I mean). He deftly redefines the question (tiering is about migrating between Fibre Channel and SATA), and then goes on the offensive (it's all going to be just Flash and SATA – no FC, thus no need for tiering).
Now, admit it – had it been Yours Truly spouting something like that, every competitive blogger would have said those words justify calling me THE most notorious FUD-meister on the planet.
But even though it wasn't me that said those words, they are still outright and blatant FUD.
Yes, Virginia, even CEOs will sling the FUD when they have no other answer.
is tiering really dead? i think not!
OK, so now we know WHY Tom Georgens resorted to FUD-slinging. And as I said, the Twitterati have been relentless over the weekend, pointing out the silliness of Tom's assertion from a variety of angles.
But I think they've all missed the most important point about storage tiering:
I will argue that there is much more to storage tiering than simply Flash, Fibre Channel and SATA. Indeed, Fibre Channel drives may well be eliminated from the data center entirely – and I agree that they should be (they are the most cost-inefficient devices, in both $/GB and $/IOPS, used in today's data centers).
So, even if there is no more FC (or enterprise SAS, for that matter), there will still be a need for storage tiering – at least in EMC's book. As @StorageZilla discussed, EMC's vision for FAST to the FULLEST is that automated tiering is about continuously optimizing storage cost, performance and footprint: getting the right data to the right tier at the right time.
With a little thought, we can envision even more tiers beyond SATA, for example:
- A compressed or de-duped tier (either on-line or backup data)
- A tier of static, unused data which is moved to SATA drives and spun-down to reduce power requirements and heat generation
- An "archive" tier, where unused files or blocks are transparently off-loaded from primary to archive storage to further reduce the primary data footprint
- And how about a "cloud" tier, either as a lower-cost archive or perhaps even in support of applications that have been relocated out of the data center infrastructure for any of a variety of reasons (overflow, just-in-time, load balancing, etc.)
Perhaps not all environments will require all of these, nor will each necessarily be implemented as a distinct "tier" – indeed, both NetApp and Celerra offer data reduction for all primary data today and not as a tier.
But one thing is for sure – these other tiers aren't going to be implemented as a cache or as an inherent feature. No, these tiers are destinations where data will live for a long time, controlled by dynamic policies about retention, SLAs and availability. They are not a transient place where extra copies of data are made in hopes of improving performance and response times, as is the case with a cache. And while using Flash as an L2/L3 cache (depending on your perspective) can clearly deliver value for some applications, there is no reason Flash-as-cache and Flash-as-tier cannot be combined in a single system, and I expect there will be very practical implementations that do just that at some point.
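To make the cache-versus-tier distinction concrete, here is a minimal, entirely hypothetical sketch (in Python) of how a policy engine might decide where a chunk of data should live. The tier names, thresholds and the `Extent` structure are my own illustrative inventions – not FAST, not WAFL, not any vendor's actual implementation:

```python
# Hypothetical sketch of policy-driven tier placement -- not any vendor's
# actual implementation. Tier names and thresholds are illustrative only.
from dataclasses import dataclass

@dataclass
class Extent:
    """A chunk of data with simple access statistics."""
    iops: float          # recent I/O rate
    days_idle: int       # days since last access
    compressible: bool   # whether data reduction is worthwhile

def place(extent: Extent) -> str:
    """Return the tier an extent should live on, per a simple policy.

    Policies run continuously, so an extent migrates as its access
    pattern changes. The data *lives* on its tier -- unlike a cache,
    which only holds a transient extra copy for performance.
    """
    if extent.iops > 100:           # hot data: keep on Flash
        return "flash"
    if extent.days_idle > 365:      # long-unused: off-load to archive/cloud
        return "archive"
    if extent.days_idle > 90:       # static: spun-down SATA
        return "sata-spun-down"
    if extent.compressible:         # cool but active: reduce the footprint
        return "compressed"
    return "sata"                   # default capacity tier

print(place(Extent(iops=500, days_idle=0, compressible=False)))  # -> flash
print(place(Extent(iops=2, days_idle=400, compressible=True)))   # -> archive
```

The point of the sketch is that placement is driven by policy over time, not by a cache-miss path – which is exactly why a cache alone cannot substitute for tiering.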
so that's that
Well, not really. I suspect there is also a technical reason behind Tom's reaction that hasn't yet been uncovered. Judging by the way the NetApp blogger wagons have circled around the topic to assert that Flash+SATA is all that is needed, it's clear they don't want the conversation to expand as I've described above.
I'll admit, though, that I don't know why. Are they just defending their chosen path of Flash-as-cache instead of Flash-as-a-tier? Or is it deeper than that? Is there something inherent in WAFL that makes it difficult to implement multiple distinct tiers within a single array? Or is it something else?
I'll stop here, lest I be accused of fighting FUD with more FUD. But it does leave the question unanswered:
Why would NetApp declare storage tiering dead when
the world is so clearly headed in that direction?