
February 15, 2009

1.039: don't miss the amazing vendor flash dance

UPDATED: 17 Feb 2009 - changes in green

Flash dancing was a form of tap dance that evolved in the 1920s and 1930s, combining tap with acrobatics.

That description pretty much sums up what Sun, HP, Hitachi, IBM and NetApp have been doing (and saying) about Flash Storage over the past couple of weeks. Some are tap dancing around their continual delays in getting product to market, while others have resorted to high-wire theatrics to cover up the fact that they’re still nowhere near ready to integrate flash tech.

And almost all of them have finally realized that EMC was right over a year ago – the first place we’re going to see benefits from flash technology is indeed as a new tier in high performance storage arrays. That’s right, after a year of excuses and a cacophony of claims that EMC’s introduction of Enterprise Flash Drives (EFDs) wasn’t innovative, today we find virtually every storage vendor (with one major exception) having announced that they, too, will soon be shipping EFDs in their arrays.

And every one of them has chosen the very supplier (STEC) and the same drive (ZeusIOPS) that EMC introduced to the world over a year ago.

To be honest, I’ve expected all along that this is where we’d be at this point in time, but I surely didn’t think it would take them this long to figure out that array-based EFDs are where they should start.

Where we are today is remarkable, and no one can argue that we’d be here were it not for EMC’s vision and investment in bringing the game-changing NAND technology to market ahead of all expectations.

But though the road we’ve travelled to get where we are today is relatively short, it has been littered with some remarkable Flash Dancing (and FUD) from the competition.

Let’s take a look at each of these vendors’ journeys on this Road to Flash, shall we?

WARNING: this one’s long – probably the longest ever. My apologies…I had lots to say

I’ve attached each vendor to a title from the original soundtrack of the 1983 movie Flashdance. Just seemed fitting (and fun on a Sunday morning). 

hitachi: manhunt

HDS started their flash journey in a state of denial, trotting out their CTO Hu Yoshida to proclaim that flash drives were a figment of EMC’s imagination. Hu dutifully spread the FUD far and wide, telling anyone who’d listen that they couldn’t work because flash wears out and Hitachi wouldn’t support them because customers would lose data (seems Hu overlooked the importance of RAID protection at the time). Hu got a lot of press coverage for his anti-EFD stance, but eventually Hitachi Japan shut him down by leaking that yes, indeed, the world would have EFDs in the USP-V.

And Hu has been pretty much silent on flash ever since. Personally, I know it’s hard to admit when you were wrong, especially when you’re the CTO and supposedly an expert on storage technologies. So when Hu reappeared, he fell back into his tried and true “Virtualization Solves Everything” theme, repeated ad nauseam. I figure he’s hoping that if he replays that tune long enough, his reader’s minds will turn to mush and they’ll forget how he showed the world that he was out of touch with both Flash technology and Hitachi Japan for most of last year.

Of course, like many flame-throwers out there, ole’ Hu can’t leave things well enough alone – in his recent post, 10 Trends for 2009, he includes this little ditty:

Flash-based SSDs will continue to be a niche market. Due to current economic conditions adoption of expensive flash-based SSD disk assemblies will be limited.

What EMC sales teams know (and Hu clearly doesn’t) is that Flash drives actually save people money! And as you can imagine, anything that saves money is hot right now, and adoption is growing faster than you’d (or Hu’d) imagine (among EMC’s customers, at least).

Now, being as I don’t want to help any competitor get up to speed on Flash any quicker (why accept a 12-month lead if you can stretch it out even longer), I’m not going to explain the various ways that EMC customers are reducing their storage and server acquisition and operational costs using EFDs. But believe me, they are, and your local EMC rep will be most happy to show you how you can, too.

Of course, Hu didn’t stop there – he always finds a way to hook in the term “virtualization” (a term which is rapidly becoming so…yesterday).

Virtualization 2.0 with thin provisioning and dynamic tiered services will be needed to maximize the utilization of SSDs.

And that’s what I meant by “flash dancing.” Translated, he’s saying: “we haven’t figured flash out yet, but we’re sure to have something interesting in the future, just as soon as we find it and copy it.”

Not to worry, Hu, EMC is still leading the way on the Flash highway, and many customers have already seen (under NDA) that EMC is alone in the fast lane. And just to one up your 2.0, I’m thinking “Virtualizing in 3D” might be a catchy name for whatever’s next.

Interestingly, Fellow Blogger Claus Mikkelsen was recently revived to add to HDS’ blogging volume. Though his first two posts this year were blatant rip-offs of my past XIV exposés (plagiary is the sincerest form of flattery, so I’m honored), his most recent post, What’s a Flash Drive?, explores the practical application of non-mirrored DRAM SSDs and flash in an almost-coherent manner. So I think the HDS manhunt for a believable flash spokesperson may be over (although I admit I was shocked that Claus would actually acknowledge that he agrees with EMC!)


ibm: (he’s) a dream

As transparent as Hu’s denials of flash were in 2008, IBM inarguably did HDS one better last year.

IBM got off to a bad start with flash, jumping in all tongue-tied early on when they actually suggested that TAPE was the more appropriate alternative. Never in a million years would I have predicted that one, but it happened. At the same time, nearly a year ago, IBM also acknowledged that customers were asking for Thin Provisioning, a feature they thought they indeed would need on the DS8000 in 2008. And here we are, 2009 and IBM’s first platform preview behind us, and still no thin provisioning.

Now that I think of it, I wonder if that little tete-a-tete that had me rotflmao had anything to do with Sir Monshaw’s recent exile back to the selling ranks over in under-performing and economically-torn Japan?

Of course, with 380-something thousand employees world-wide, the smart ones over there at the It’s Better Manually Corp quickly stepped forward to pick up the banner of flash, and they obviously took it very seriously.

At least, they started out with a bang. Who can forget their Frankenstorage science experiment packing Fusion-IO Flash Cards behind an unsupported configuration of SVC nodes to create an honest-to-goodness monster (that’s right, Chuck, you too are a plagiarist). BarryW was all proud, and hasn’t missed an opportunity since to claim whatever mind-share he can, whenever he can (watch – he’ll be one of the first to comment on this post). Heck – just this week he challenged my assumption that they’d be putting PCIe-based Flash inside of the SVC.

Of course, BarryW also spent most of last year denigrating the very notion of putting flash drives into a storage array. His assertion was that storage arrays can’t get all the IOPS out of a fast EFD, and thus he implied it foolhardy to make the investment. What the world needs instead, he asserted, is a storage platform optimized to deliver high bandwidth, low-latency, IOPS optimized I/O. Not surprisingly, he asserts that the SVC is just that platform, and the aforementioned science experiment was proof positive that all other applications of flash were a waste of money.

If you’ve seen how much IBM is charging for the newly announced flash drives in the DS8K, you’d have to agree: the minimum purchase of 16 146GB EFDs for your DS8K will set you back a cool 1.6 MILLION dollars! Hey – that’s probably more than a brand-new DS8K system itself will cost you (field upgrades of EFDs are not supported). And while these are list prices, translated to the typically aggressive discounts IBM customers will see,

the expected street price of IBM’s EFDs is 4-5 times more
than the exact same drives are from EMC

That’s A Dream you shouldn’t let them get away with. No way, no how.
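For scale, the announcement’s own figures break down like this (a quick back-of-envelope sketch; the $1.6M minimum and 16-drive group are from the announcement, the per-drive and per-GB division is just arithmetic):

```python
# List-price arithmetic from the DS8K EFD announcement figures above.
# The $1.6M minimum and the 16 x 146GB group size are the announced
# numbers; the per-drive and per-GB breakdown is simple division.

total_list_price = 1_600_000   # minimum 16-drive EFD purchase, USD list
drives = 16
capacity_gb = 146

per_drive = total_list_price / drives   # $100,000 per drive
per_gb = per_drive / capacity_gb        # ~$685 per GB

print(f"${per_drive:,.0f} per drive, ~${per_gb:,.0f}/GB at list")
# $100,000 per drive, ~$685/GB at list
```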

So, although BarryW spent all last year proclaiming the lunacy of EFDs in enterprise arrays, here now IBM’s gone and done exactly that. And now that the shoe is on the other foot, BarryW won’t answer the FUD question he flung at me all last year - How few of the IOPS in those flash drives is the DS8K actually able to deliver to applications?

So, if BarryW won’t answer the question, then BarryB has to take a stab at it. Let’s see, now:

STEC rates their ZeusIOPS drives at something north of 50,000 read IOPS each, but as I have explained before, this is a misleading number because it’s for 512-byte blocks, read-only, without the overhead of RAID protection. A more realistic expectation is that the drives will deliver somewhere around 5,000-6,000 4K IOPS (4K is a more typical I/O block size).

In the IBM DS8K you buy EFDs in groups of 16, of which 14 are usable and the remaining 2 are hot spares. So 14 EFDs should be able to support 70,000-84,000 4K read miss IOPS. Except that in a DS8K, all of these drives have to be installed behind a single back-end I/O controller pair, and a DS8K controller pair maxes out around 12,000 total 4K read miss IOPS (and somewhere around 3,000-4,000 4K write IOPS, which explains why IBM engineers have been frantically trying to improve their write cache destage algorithms). Suffice to say there’s a sizable gap between the IOPS you pay for and the IOPS you get, even excluding the exorbitant prices IBM is trying to extract from customers for flash.
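The gap can be sketched with the estimates above (my estimates from the preceding paragraphs, not vendor specs – the ~5,500 per-drive figure and the ~12,000 controller ceiling are the numbers this post assumes):

```python
# Sketch of the drives-versus-controller IOPS gap described above,
# using this post's own estimates (NOT vendor-published specs).

GROUP_SIZE = 16          # DS8K EFDs are sold in groups of 16
HOT_SPARES = 2           # 2 of the 16 are hot spares
IOPS_PER_EFD = 5_500     # realistic 4K read IOPS per drive (5-6K range)
CONTROLLER_MAX = 12_000  # estimated 4K read-miss ceiling, one back-end pair

usable = GROUP_SIZE - HOT_SPARES
paid_for = usable * IOPS_PER_EFD            # what the drives could do
delivered = min(paid_for, CONTROLLER_MAX)   # what the controller allows

print(f"drive capability:   {paid_for:,} IOPS")            # 77,000
print(f"controller-limited: {delivered:,} IOPS")           # 12,000
print(f"fraction delivered: {delivered / paid_for:.0%}")   # 16%
```

In other words, by these estimates, the back-end controller pair delivers roughly a sixth of the read IOPS the drives themselves are capable of.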

And for the curious, a DMX4-4500 today supports more than twice as many back-end IOPS as a DS8K while delivering better overall read/write response times than the current DS8K hw & sw, thanks in no small part to the larger cache and already-optimized algorithms. It will be interesting to see how much IBM’s new cache logic will change things – if at all.

a new king on the ibm hill?

As an aside (that’s not a song from the movie), IBM bloggers positioned this week’s DS8000-related announcements as “proof” that the platform isn’t dead. As if adding Flash Drives, SATA drives and new cache algorithms to a platform that hasn’t been significantly refreshed in over 4 years was enough to bring it back to life. And while there are rumors of an upcoming DS8500 and DS8700 announcement (probably before EMC World, I’d guess), I still think the platform is indeed on its deathbed.


Two reasons:

EDIT 17 Feb 2009: Tony has apparently retracted his original, offensive responses to my questions on his blog, and replaced his answer to my questions about DS8K’s RAID 5 support for SATA with a more polite -– and comprehensive -- answer. The following is edited in light of his changes. New text in green.

  1. As TonyP confirmed in his responses to my clarifying questions about IBM’s storage announcements last week, IBM isn’t supporting RAID 5 for SATA drives on the DS8K as a standard feature. While TonyP says they’ll accept RPQs for such support, citing “professional malpractice” for using RAID 5 with large SATA drives, he also freely admits that RAID 5 –is– supported on almost all of the IBM storage arrays that they OEM from other suppliers, including the DS4000 and DS5000. Now, I find it interesting that IBM considers it “professional malpractice” to risk SATA drives on RAID 5 on the DS8K, but not on other platforms – apparently to IBM, information stored on DS8000 SATA is somehow more important than data stored on DS5000 SATA. It seems counter-intuitive that DS8K owners are required to submit an RPQ for a feature that DS5000 customers aren’t – especially since the DS8K shops are usually the ones with a storage staff. I suspect that the real reason for the restriction is that the DS8K engineers at IBM haven’t yet figured out how to improve SATA reliability (and drive rebuild speeds) sufficiently for RAID 5 (that, or IBM just doesn’t trust their DS8K customers to make good decisions). 

    Given a 3-year head start in working with SATA drives (first shipped on CLARiiON in 2005) EMC engineers have significantly improved SATA reliability and shortened rebuild times up and down both the Symmetrix and CLARiiON product lines through a combination of hardware and software improvements. While not “proof positive” that the DS8K is dead, it is yet another indication that IBM isn’t investing enough in this platform to keep it competitive (along with no thin provisioning, insufficient write cache for flash drives, and the like).

  2. BarryW is also gleefully commenting (and twittering) about how the SVC is going to get enough embedded flash so as to offset the paltry real-world performance of Moshe’s heralded XIV platform. Now, while I suspect that will be a huge blow to Moshe’s ego, I also figure he has less than 12 months left on his contract with IBM, with a sizable pay-out at the end, so he’ll probably swallow his pride and watch it happen from the sidelines. But I sincerely doubt that the SVC can save XIV, now that people understand that SATA read-miss performance is never going to be fast enough to run production applications on, and the reliability profile isn’t adequate for a true private cloud platform. Still IBM’s pouring money into both SVC and XIV, and it has to be coming out of somewhere, so expect any new DS8Ks to be lipstick on the pig while IBM irons out the kinks on their next attempt at mastering the art of storage.

Just don’t hold your breath.


netapp: imagination

OK, this is getting long, so I’ll trim back a little here.

Actually, that’s not hard, since NetApp really haven’t done ANYTHING with Flash yet. Oh sure, they’ve qualified their vFiler gateway in front of a Texas Memory Systems RamSAN 500 – news that bored everyone (fortunately, there was a good fight on at the time, so we all didn’t fall asleep). And they also spotlighted Yet Again their SDRAM-based Performance Acceleration Module (PAM) in the same announcement, even though it truly has nothing to do with flash – it’s a non-persistent read cache that in reality is little more than an attempt to catch up with the massive global memories found in Symmetrix DMX and Hitachi USP-V arrays.

What is perhaps most remarkable, though, is what NetApp hasn’t yet announced – 13 months and counting, and still no support for actual flash drives. Now, I have it on reasonably good authority that indeed they have been trying to integrate flash drives into their systems, but not a peep out of them about that. But NetApp is never one to shy away from the acrobatics, and their Flash Dance asks us all to use our Imagination to foresee how WAFL will work better with flash than anything else on the planet. Given their essentially content-free flash announcement a couple of weeks ago, I have to wonder if they’re experiencing conflicts between WAFL and flash drives – could it be that an architecture designed for slow disk drives is struggling to leverage the performance advantages of flash?

I don’t know, but one notoriously Flashy Dancing NetApp blogger went so far as to assert that it was WAFL that was doing the wear-leveling for the RamSAN 500, as if TMS’s product would be useless without NetApp’s vFiler. Being as he’s no stranger to throwing partners under the bus (Symantec was another of his victims), I wasn’t surprised when I got no response from my suggestion that he have a TMS spokesperson explain exactly how that worked.

You see, I am indeed learning not to believe most of what that particular Valentine says.


sun: seduce me tonight

Sun jumped into the World o’ Flash with the “less filling” excuse – Flash belongs in the server, not in the storage (hmm…reminiscent of the Mr. T viral video that got at least one HDS marketing team an early retirement, isn’t it?). We heard all about the seductive wonders of ZFS and Write-Zillas and Read-Zillas…and how putting flash in the storage sacrifices all the performance.

As we see by the growing number of vendors who have announced that they will in fact be adding Flash drives to their storage arrays, we now know that all that “only in the server” walk was just Balderdash. When you come to grips with the facts that the best a 15K drive can deliver is 6ms response, a 10K is 9ms, and a 7200rpm SATA drive is around 12ms, getting less than 1ms consistently from a Flash drive is a HUGE benefit to application performance – even if you do have to tolerate a few microseconds of transport protocol overhead.

But when you step back and realize that those are BEST CASE numbers for spinning rust, obtained only when head movement and contention is virtually non-existent, then it really starts to sink in. A drive under load can quickly degrade to 30-50 milliseconds average response time, while that same workload will stay around 1ms on a Flash drive – that’s when you realize you don’t really have to embed Flash in your servers where it is going to be only fractionally faster than external, yet captive to that single server.
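Those best-case numbers fall straight out of rotational arithmetic: half a revolution of average rotational latency plus an average seek. A quick sketch (the seek times here are illustrative assumptions I’ve chosen to match the 6ms / 9ms / 12ms figures above, not measured values):

```python
# Where the "best case" spinning-disk response times come from:
# half a rotation (average rotational latency) plus an average seek.
# Seek times below are illustrative assumptions, not measured specs.

def avg_rotational_latency_ms(rpm: int) -> float:
    """Half a revolution, on average, in milliseconds."""
    return (60_000 / rpm) / 2

for rpm, seek_ms in [(15_000, 4.0), (10_000, 6.0), (7_200, 8.0)]:
    best_case = avg_rotational_latency_ms(rpm) + seek_ms
    print(f"{rpm:>6} rpm: ~{best_case:.0f} ms best case")

# A flash drive has no rotation and no seek: ~1 ms, load or no load.
```

And that arithmetic is the *floor* for a spinning drive; queuing and head contention under real load only pile more milliseconds on top of it.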

And in an external array, Flash drives have added benefits: you can share a RAID-set of EFDs across multiple applications and even different servers; you can use Virtual Provisioning to maximize utilization; you can replicate the data and the data can be included in multi-volume, multi-host consistency groups. Not to mention, you don’t have to wait around for all the server vendors (and the lone mainframe vendor) to get around to providing host-based flash.

And according to Sun, you may have a very long wait if you want server-based Flash.

Last week I found this white paper on SearchStorage: Solid State Storage in the Enterprise. In this paper (which was clearly sponsored by Sun) we find a NEW excuse for why we don’t yet see Flash storage embedded in servers - a lack of standards.

Yup – it seems that given the premise that the well-known Disk Drive Interface isn’t appropriate for embedded Flash, and since PCIe doesn’t define a standardized manner for talking to low-latency block-oriented storage devices, well…it appears that server vendors don’t want to commit themselves to any approach until there is a common interface standard.

That’s probably not good news for folks like Fusion-IO.

This could also be what’s behind BarryW’s suggestion to me that SVC might not be using Fusion-IO for its internal flash, either. Double-bad for Fusion-IO, if true.

On the other hand, the argument probably has a lot of merit – heck, we’ll probably see the likes of Intel or AMD define and implement a standard flash interface along with the appropriate logic to handle wear-leveling and bad block remapping…and they’ll probably put it right on the motherboard, eliminating the need for the whole PCIe card form factor for storage.

I guess I can see why Sun might want to wait before going full-bore into server-side flash…


hp: maniac

Last, and indeed arguably least, is HP. As you may have seen, in an article titled HP Lays Out SSD Datacenter Ambitions, there are apparently people that HP allows to talk to the press who have been living under a rock for the past year. According to these crazy HP’ers, flash won’t be ready for the enterprise until 2012.

And you know, if that’s what they want to think, who am I to correct them?

But seriously, don’t these Maniacs realize that they ALREADY ship enterprise flash drives? As Storagezilla explains, since HP OEMs the Hitachi USP-V (as the HP XP24000), they already support flash. And the 73GB EFD for the XP24000 is already in their price book!

And while Virtual Geek is understanding of the confusion over at HP (which is made up of mostly printer, server and laptop geeks), he took the time this week to explain why the server guys (at least) ought to be paying more attention to the realities of Enterprise Flash Drives. Turns out that EFDs bring a lot of value to the world of virtualized server farms!

Unfortunately, the HP article is littered with Flash Dancing and misinformation. Perhaps the author misunderstood, or perhaps, indeed, it was an uninformed HP representative who provided the mistaken perceptions. For the record, I’ll correct a few of the most blatant mistakes here:

  1. Enterprise-ready flash, as we all know, has been shipping from EMC for a year, and today it is pretty clear that every major external storage vendor will soon be shipping the same drives from the same supplier that EMC has been using.
  2. These drives didn’t start out in laptops – they were in fact purpose-built to meet the performance, reliability, and data integrity requirements of the world’s most mission-critical applications for the world’s most demanding customers of the world’s most trusted storage vendor (EMC).
  3. The STEC ZeusIOPS drives provide the necessary support for hot-swap, including on-board reserve power sufficient to de-stage data from the drive’s internal SDRAM to the persistent NAND Flash storage.
  4. The ZeusIOPS drive can deliver as much as 30x the IOPS (I/Os per second) of a 15K rpm enterprise-class fibre channel drive. The article used a 5400rpm SATA drive as a reference point; the ZeusIOPS can deliver nearly 100x the IOPS of that class of drive.
  5. The ZeusIOPS drives everyone is standardizing on are available from STEC in standard sizes of 73GB, 146GB and 300GB (although I don’t think any storage vendor has announced using the largest drive yet).
  6. There’s no need for faster storage controllers to capitalize on Flash drives – servers have been receiving performance far faster than a raw disk drive can deliver for more than a decade. Intelligently cached external disk arrays routinely deliver hundreds of thousands of IOPS at average latencies well below that of even the fastest Flash drive. While servers may require multiple HBAs to generate and receive this much I/O, there’s no need for new hardware or even host software to reap the benefits of fast storage.

Oddly, I know personally that there are people at HP that already know all this. One of them, Jieming Zhu, is the Treasurer of SNIA’s Solid State Storage Initiative, for example. But HP’s misinformation campaign is obviously well-funded – spend a few minutes on their HP On Solid State Storage Technology web page and you’ll find even more evidence of the confusion that seemingly runs rampant in the hallowed halls of HP.

That so much incorrect information is published under the banner of “HP technology innovation” probably makes both Bill and Dave roll in their graves!


emc: here where the heart is

Having long ago passed the recommended word length for a blog post, I’ll close out for now with a simple observation:

EMC started the Flash Revolution in hopes of ushering in a new age of Information Technology, where storage efficiency and IO performance are accelerated to cost-effectively meet the demands of exponential growth in processing power being deployed in our world.

The Enterprise Flash Drive is but the first step in this new era, and it is an important one. Over the coming years, we will see many new and exciting uses of solid-state storage across the entire computing landscape, born of wide and varied requirements and innovations, developed by the collective wisdom of all of us who touch this technology as suppliers and consumers.

In the grand scheme of things, we are at the very beginning of this journey.

And EMC is leading the way.

Stay tuned for more solid-state storage announcements from EMC throughout the year – we are indeed Taking Our Passion, And Making It Happen!

Please, don’t support the Flash Dancers…encourage them to (try to) Catch Up, or just Get Out Of The Way!




What is the difference between an EFD and an SSD?
Seems I've been told that flash is considerably slower than SSD. Isn't "EFD" an EMC-specific term? Thanks.

the storage anarchist

Chip -

Thanks for that excellent question.

EFD is the acronym for "Enterprise Flash Drive" - a term that EMC coined to distinguish the drives it uses in its storage arrays from the "lesser" flash drives on the market.

SSD is commonly used to mean "Solid State Drive", although "Solid State Device" is perhaps more appropriate (since it doesn't have to be supplied in a disk drive form factor). SSD is in fact a broader term than EFD - SSDs encompass the entire gamut of flash-based devices (including EFDs), as well as devices built with other solid-state technologies, including SDRAM, PRAM, Memristors, etc.

EMC doesn't lay claim to the term EFD, and although other suppliers might call their flash drives "enterprise class", EMC applies that term only to drives that pass its stringent performance, reliability, longevity and end-to-end data integrity requirements - and to date, the only commercially available EFD as defined by EMC is the STEC ZeusIOPS family.

the storage anarchist

And here is the predicted BarryW comment - coyly placed over on StorageBod's blog, but clearly intended to be a response here. And indeed, one of the first!

(Gotcha again, Sir Whyte ;^)!

Wes Felter

Intel is putting flash on the motherboard as predicted -- it's called NVMHCI -- although I doubt that it will provide enough capacity and performance to be interesting in servers or storage controllers. Perhaps Fusion io and Micron can become NVMHCI compatible in the future and thus not require drivers.


Such a zinger, you...

Don't you think BarryW posts sooner than most because he is several time zones ahead of the rest of us? Did you intentionally post this on a Sunday so he would see it on Monday morning hours and hours before the rest of the usual crowd?

Calculated, cheap, AND intentionally misleading....par for the course, eh BarryB?

And why should anyone else lock themselves into one way of implementing flash, just because EMC only has one way to do it? Now it's a *benefit* that EMC has no viable ("completely failed" might be a more appropriate descriptor) virtualization platform/gateway capability?!?

Or is it getting under your skin that IBM, HDS and NetApp keep gaining footprint by front-ending EMC gear and spoiling your software sales. That old business model of selling the hardware for peanuts in order to make a profit on the software and maintenance doesn't work so well in the new world of virtualization, does it? I bet that intensifies the pain that Invista has caused you guys by being such a complete and utter failure in the marketplace. What were you saying about DEAD PLATFORMS...?

And for all your talk (you're right, that was one LONG post...I had to skim over it) of being first to market with EFDs, I didn't hear anything about being first to market with drive encryption or thin provisioning, or deduplication, or...well, you get the picture.



Maybe you should first investigate the different offerings Sun has. There is no "one size fits all".

Sun is amongst the first, that offer EFDs as cache _AND_ disk replacements.

You shouldn't dismiss the usage of EFDs within servers. You know, there is a latency when using SANs. Why not cache writes locally on fast EFDs first? A brilliant solution for certain workloads/environments.

I just had a talk to an EMC support guy last week. He hasn't seen any customer yet, that uses EFDs in DMXs. And there are a few of those around...

the storage anarchist

Brainy -

I never dismissed the notion of using flash in servers (although I'll not yet buy into calling Sun's ReadZilla and WriteZilla "enterprise-class"). In fact, I've argued all along that we will find flash deployed just about everywhere we see DRAM and hard drives.

But I do note that Sun indeed sponsored the referenced paper, and it basically says that flash-as-disk-drive is not the appropriate way to get flash into servers. So I figure the Read & Write 'zilla's are more marketing hype than real value these days.

As to your last point - indeed, there are more EMC support people than there are DMX's with EFDs installed, so your sample-of-1 isn't all that surprising.

And FWIW, I know a lot of Sun server sites that aren't using flash in their Sun servers, either.


When I first read this, I thought it must have been months old, because there is no mention of Sun's 7000 series of NAS/iSCSI arrays which actually leverage SSDs at the heart of their architecture. First, the 7000s are storage devices, not servers. Second, the use of flash is integral to the 7000s hardware/software architecture. Third, the use of write-oriented flash is an industry first. Fourth, the write oriented cache is a Sun innovation to meet their unique requirements. Fifth, the use of a dual-homed write optimized flash drive in lieu of a write cache mirroring is truly brilliant, eliminating hardware complexity (specialized cables and protocols), controller software complexity, and radically reducing failover controller complexity.

The fact you did not even mention these products, when they had been out for three months, weakens your credibility as an authority on the storage industry. An authority on EMC, perhaps, but the industry, no.

You further weakened your credibility with the snarky comment about the readzilla and writezilla being "marketing terms". That suggests you have absolutely no idea what these devices do in the 7000 architecture. That is odd, because Sun has been very transparent about the development of the 7000, and the role of flash. On the OpenSolaris boards and on the blogs by the 7000 engineering team, you can find test numbers, engineering decisions, etc. The writezilla or logzilla came about out of a public project to test ZFS intent logs on a flash device to accelerate performance of ZFS before anyone outside of Sun had heard of the 7000. Readzilla is a typical use case of flash, basically in this case used as a tier 2 cache, very similar in concept to tier 0 storage, but letting the storage controller manage the population of data onto flash instead of manually provisioning it. Now the term "Hybrid Storage Pool" does sound like a marketing term.

For the record, I do not work for Sun, but I do respect a well thought out product when I see it. Most people familiar with the storage strengths of Solaris 10 had good reason to believe it would find itself in a NAS device at some point, but the challenge for building a controller-based storage system has always been to balance redundant controller failover with write cache acceleration. Write optimized flash solved the problem elegantly for Sun.

As Bill Joy once said, its hard to build something simple.

P.S.: And that AJAX web performance monitoring interface on the 7000 ... wow. Just wow. Suddenly every other storage performance tool looks old and stale--because they are old and stale.

Bryan Cantrill

Our (Sun's) hybrid storage pool approach was first outlined in Adam Leventhal's paper in the Communications of the ACM in July -- so it's hardly "marketing hype." As for not yet calling our read- and write-optimized flash SSDs "enterprise class", you do know who our vendor is, right? I assume that you don't, or else you would realize that to denigrate our flash-based SSDs is to denigrate EMC's own... (If I'm being too oblique: EMC and Sun are both buying our SSDs from STEC, knucklehead! And yes, we're buying the same model, so Sun's got just as much EFD in our SSD as EMC...)

the storage anarchist

Meh130 -

You got me...you obviously know more about Sun storage than I. Thanks for providing a more accurate assessment.

Bryan -

Well, I did indeed know that Sun was using STEC drives, but my understanding is that neither of the 'Zillas are in fact the STEC ZeusIOPS. In fact, I had been told that one was based on STEC's Mach8 family, and the other was from another supplier. Hence my distinction - *I* don't believe the Mach8's yet meet the requirements of the enterprise.

Guess I have a pretty big blind spot when it comes to Sun Storage.

(P.S. Let's try to keep things civil here, OK? "Knucklehead" wasn't necessary.)


Interesting response from one of the designers of the 7000 series at Sun: http://blogs.sun.com/ahl/entry/dancing_with_the_anarchist

Bryan Cantrill

I think of "knucklehead" as being reasonably civil given that the integrity of our data was being questioned, but point taken and accept my apologies. For the record: logzilla is an STEC ZeusIOPS and has been since before EMC had even heard of STEC; we use the MACH-8 IOPS only for readzilla. This decision, by the way, was more around use than reliability: we just don't need the low write latency on the read side because of its use as a read cache. And just to nip any reliability FUD in the bud: thanks to ZFS's practice of checksumming all data, even the worst possible failure mode (e.g. data corruption) on the read side only results in a cache miss. This is the real power of using flash as a tier in the memory hierarchy and not as an HSM tier: we are much less dependent on the flash getting it absolutely correct...
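The "corruption just becomes a cache miss" property Bryan describes can be sketched in a few lines of Python. This is purely an illustration of the concept, not ZFS code: every block carries a checksum, so a corrupt copy in the read cache fails verification and is silently re-read from the authoritative pool.

```python
import hashlib

class ChecksummedReadCache:
    """Toy model of a checksum-verified read cache (the L2ARC idea)."""

    def __init__(self, backing_store):
        self.backing = backing_store      # authoritative copy (the disk pool)
        self.cache = {}                   # block_id -> (data, checksum)

    @staticmethod
    def _checksum(data):
        return hashlib.sha256(data).digest()

    def read(self, block_id):
        entry = self.cache.get(block_id)
        if entry is not None:
            data, cksum = entry
            if self._checksum(data) == cksum:
                return data               # verified cache hit
            del self.cache[block_id]      # corruption detected: treat as a miss
        data = self.backing[block_id]     # fall back to the pool
        self.cache[block_id] = (data, self._checksum(data))
        return data
```

Even if the cached copy is scribbled over, a read still returns correct data; the worst case is the latency of one extra trip to the backing store.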



"As we see by the growing number of vendors who have announced that they will in fact be adding Flash drives to their storage arrays, we now know that all that "only in the server" talk was just Balderdash."

Alternative take: the other vendors are behind Sun in that they lack ZIL/L2ARC-like technology, therefore they force their customers to manage hierarchical storage directly or otherwise go with all-flash solutions. Since flash is much more expensive than rotating rust this means that Sun's competitors' flash solutions are much pricier than Sun's for equivalent performance -- and that's not counting the cost of managing one more storage tier!

Incidentally, you can build all-flash storage pools with OpenSolaris. It'd be so expensive relative to the performance of an HSP that it'd be hard to recommend (you'd need a working set as large as the pool itself, with purely random access patterns, for all-flash to be worthwhile).


I should add that you can certainly build an all flash ZFS pool in Solaris, if you really really want tier-0. That's a luxury, and you'll still want RAID-Z.

With HSP you needn't buy any more flash than is needed to bring your average latency down enough -- you choose what "enough" means, so you choose how much extra $$$ you spend on flash. And there's no need to mirror/RAID-Z the L2ARC: errors from readzilla can cause no data loss or corruption, since all the data is protected by strong checksums.
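The pool layouts being discussed map directly onto ZFS administration commands. A minimal sketch (device names are hypothetical placeholders) of both a hybrid storage pool and an all-flash tier-0 pool:

```shell
# Hybrid storage pool: rotating disks for capacity, flash for latency.
zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0  # bulk capacity on HDDs
zpool add tank log mirror c2t0d0 c2t1d0              # "logzilla": mirrored write-optimized flash for the ZIL
zpool add tank cache c3t0d0 c3t1d0                   # "readzilla": flash L2ARC, no mirroring needed

# All-flash pool, if you really want tier-0 (still RAID-Z protected):
zpool create tier0 raidz c4t0d0 c4t1d0 c4t2d0
```

Note the asymmetry: the log device is mirrored because the write side must survive device failure, while the cache device is not, for exactly the checksum reason given above.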

the storage anarchist

Wow...with all the attention, one might get a SUN-burn around here!

The suddenly-inserted Sun perspective is welcomed, though one does have to wonder why the early days of their relationship with STEC moved so slowly...if events actually followed the timeline that's been described here, then perhaps there wasn't the same sense of urgency over at Sun as there was at EMC during 2006 & 2007.

And amidst this rash of attention from Sun, I can't help but wonder why HP and IBM have been virtually silent. Surely they're not going to stand by and let Sun dominate the discussion here.

Or are they?

Tim Lewis

Don't you think your "blog" is a little disingenuous when there's no disclosure on your page that you are - and I'm quoting your linkedin profile - "Barry Burke. Chief Strategy Officer, Symmetrix Product Group, EMC Storage"?
It's all very well saying "this is my personal opinion and not EMC's", but not disclosing your - presumably - senior position at EMC does rather devalue the "independent voice" your "storage anarchist" tagline suggests in my opinion. It's certainly not how I would define anarchy anyway!
I haven't looked at EMC to see if they (or you) have a corporate blog as well. But, I'm afraid that your blog entry here has coloured my opinion of EMC and of you as being opaque rather than open. I haven't read any more of your posts, and I'm pretty sure I won't now after reading this one.

the storage anarchist

Tim -

I am sorry that you feel that way, but I have never tried to hide my relationship with EMC. My long-time readers know full well I live and bleed EMC, and that my "unique perspective" is indeed as an insider. EMC does list my blog on their corporate site, but my blog itself is not funded, edited or reviewed by EMC - this is all my own opinion.

Thanks for stopping by, sorry that you won't be coming back.



I was wondering why EMC was so fast to incorporate SSDs - or EFDs, as you like them to be named - in their high-end storage. Maybe it is due to the fact that RAID5 really brings poor performance within EMC Symmetrix compared to other storage vendors. You talk about SATA drives, where EMC worked hard on improving their reliability and rebuild time, but most EMC customers are implementing RAID1 rather than RAID5, or converting from RAID5 to RAID1, to get acceptable performance.

Any (technical) explanations would be more than welcome.


the storage anarchist

Steinweo -

Why did EMC add EFDs to Symmetrix, CLARiiON and Celerra?

- To overcome the performance limitations (IOPS and response times) of spinning HDDs
- To reduce the high $/IOPS incurred when using multiple short-stroked HDDs to meet IOPS requirements.
- Coupled with SATA HDDs, to reduce our customers' overall acquisition cost for storage
- To initiate and accelerate the transition to EFDs, thus further reducing costs and increasing availability

As for RAID5 & RAID1 - virtually every single DMX shipped since 2005 uses RAID5 and/or RAID6 for a significant percentage of its usable capacity. RAID1 indeed delivers the best performance, and many customers still deploy RAID1 (or RAID10) when their applications require it.

As to rebuild time, Enginuity 5773 code can rebuild any failed/failing drive under any RAID protection at the maximum write speed of the drive.

Thanks for asking.


Deja Vu - flash as a first-level cache has been around for a decade, not that it was too affordable then. Seems like EFD is a proprietary way to say I've got a JBOF (just a bunch of flash) that is from the top of the quality bin, so my JBOF has higher quality - which is good for enterprise-class needs. Other than that, what am I missing?

The comments to this entry are closed.

I am unabashedly an employee of EMC, but the opinions expressed here are entirely my own. I am a blogger who works at EMC, not an EMC blogger. This is my blog, and not EMC's. Content published here is not read or approved in advance by EMC and does not necessarily reflect the views and opinions of EMC.
