
September 04, 2008

1.023: it's just a flash-y science experiment

And now, my oft-requested take on the 1 Meellyun IOPS flash technology science experiment that IBM is promoting so heavily:

Way Cool. Applause

That's right - Barry Whyte and IBM's Almaden Lab team are to be congratulated for their accomplishment, as I actually did in the first comment to BarryW's boastful blog post on the event. This is indeed an important milestone on the road to wide-scale commercialization of solid-state persistent storage, even if it isn't an actual product announcement (IBM admits you can't buy their experimental configuration for at least 9-12 months).

it's alive

Commendations all around...

But surely you don't think that's all I have to say now, do you...

why can't we just be friends?

What I did find somewhat uncalled for about IBM's bluster around this was, well - the antagonistic tone of IBM's bluster. From BarryW's blog title ("...Actions speak louder than words"), to TonyP's misrepresentations of SPC-1 as the workload that was used (it was actually a non-standard version of the SPC-1 workload, as BarryW so honestly explains in his post) EDIT Sep 5, '08: BarryW has clarified that it wasn't the SPC-1 after all (see comments below), and TonyP has corrected his post (presumably to avoid the Wrath of the SPC), to the quotes from IBM executive Charlie Andrews in the Byte and Switch article where he asserts that putting flash drives into a storage array is "like taking a jet and putting it on a two-lane road."

I guess Charlie was using the aging and decrepit DS8000 as his reference point...oops, sorry.

The posturing and positioning from IBM is totally expected - like everyone else, they're playing catch-up in the flash game, and the obvious response when you get caught flat-footed is to try to redefine the rules. So it's in their best interests to make it sound like flash requires special treatment, belongs in the server, won't work in the array, etc. etc. etc. It all just helps to justify IBM's continuing delay in getting flash to market.

But still, it was an odd thing for Charlie to say, especially since EMC has been delivering jet-speed performance with flash drives in the DMX-4 since the beginning of the year. The fact is that EMC has figured out how to integrate flash drives into both Symmetrix DMX and CLARiiON, accommodating and mitigating the characteristics of NAND flash and the inherent differences between solid state and spinning rust, while IBM has yet to reach that milestone. The advanced architecture of Symmetrix and decades of architectural and algorithmic optimizations in I/O prioritization, queue depths and error correction give EMC more than a head start over the competition.

What IBM is really admitting is that you can't simply install a flash drive in your storage platform and immediately take advantage of it.

Unless, that is, your storage platform's architecture was designed properly in the first place! (Many seem to forget that the predecessor of the Symmetrix was in fact a solid-state storage accelerator (DRAM based), and many of the issues that IBM and others are fretting over today were addressed by EMC more than 2 decades ago.)

ok. it wasn't really real...

When pressed on his blog, BarryW admitted that the experiment wasn't anything close to real-world. It was merely an experiment, performed with an artificial 4KB/block workload (70% read / 30% write), using more than the officially supported number of SVC nodes (max supported is 4 2-node clusters), fronting a home-brewed storage device concocted of P-servers with FusionIO PCI-bus flash devices, operating with ZERO data protection (no RAID5, no mirroring, no Data Integrity Bits, no nothing), and yes, operating without any of the normal software overheads of mirroring, replication, or any of that other messy reality that gets in the way of science.
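For readers unfamiliar with what "an artificial 4KB/block, 70% read / 30% write workload" means in practice, here's a minimal Python sketch of such a generator. This is purely illustrative - it is not IBM's actual test harness (which was an in-house tool), and the device size and seed are my own arbitrary choices:

```python
import random

BLOCK_SIZE = 4096        # 4 KB blocks, as in the reported test
READ_RATIO = 0.70        # 70% reads / 30% writes

def generate_ops(num_ops, device_blocks, seed=42):
    """Generate a random 70/30 read/write mix of 4 KB block operations."""
    rng = random.Random(seed)
    ops = []
    for _ in range(num_ops):
        kind = "read" if rng.random() < READ_RATIO else "write"
        lba = rng.randrange(device_blocks)   # uniformly random block address
        ops.append((kind, lba * BLOCK_SIZE))  # (operation, byte offset)
    return ops

ops = generate_ops(100_000, device_blocks=1_000_000)
reads = sum(1 for kind, _ in ops if kind == "read")
print(f"{reads / len(ops):.2%} reads")  # ~70% by construction
```

The point of such a synthetic generator is repeatability, not realism: uniformly random small-block I/O defeats every cache and prefetch algorithm in the stack, which is exactly why it says little about production workloads.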

Congratulations, indeed.

The most interesting thing is that even in this controlled lab experiment, IBM wasn't able to attain response times from their customized solution that were significantly faster than what EMC has been delivering with the standard STEC ZeusIOPS Enterprise Flash Drives integrated with the DMX-4 for months now. That's right, despite all of the hand-tuning, and putting storage on the PCI bus instead of behind a disk drive interface, the SVC's best 4K read response times were essentially the same as the DMX-4: just slightly less than 1ms.

The only difference? The DMX-4 numbers include the operational overheads of RAID5 and checksum verification that the data being read is actually what was written, while the SVC did not.

FYI - End-to-end data integrity verification is a critical requirement for any storage device to protect against undetected data corruption caused by faulty logic or even sub-atomic particle bombardment (which gets worse at higher elevations). StorageMojo reported last year about CERN's data corruption research, in which they found that 1 in every 1500 data files contained undetected corruptions. Overlapped error correction is required throughout the data path to protect against such corruptions, and this is especially important with flash devices. This is why both Symmetrix and CLARiiON have always employed data integrity validation on every read from storage - an unfortunate but necessary overhead. And I'll note that the SVC has no such data protection - but then, neither do most non-EMC storage platforms. Does yours?
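To show why verify-on-read costs something, here's a toy Python sketch of the idea: store a checksum alongside every 4 KB block at write time, and recompute and compare it on every read. This is a simplification I wrote for illustration (real arrays use stronger per-sector integrity fields, not a dict and CRC32), but the detection principle is the same:

```python
import zlib

BLOCK_SIZE = 4096

def write_block(store, lba, data):
    """Store a 4 KB block alongside a CRC32 of its contents."""
    assert len(data) == BLOCK_SIZE
    store[lba] = (data, zlib.crc32(data))

def read_block(store, lba):
    """Recompute the checksum on every read; fail loudly on a mismatch."""
    data, stored_crc = store[lba]
    if zlib.crc32(data) != stored_crc:
        raise IOError(f"silent corruption detected at LBA {lba}")
    return data

store = {}
write_block(store, 0, b"\xab" * BLOCK_SIZE)
assert read_block(store, 0) == b"\xab" * BLOCK_SIZE

# Simulate undetected media corruption: flip a byte, leave the CRC alone.
data, crc = store[0]
store[0] = (b"\x00" + data[1:], crc)
try:
    read_block(store, 0)
except IOError as e:
    print(e)
```

Without the recompute-and-compare step, the corrupted block would be returned to the application as if nothing were wrong - which is precisely the failure mode the CERN study measured.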

But I digress...

To me, what is most disheartening about IBM's announcement isn't actually what IBM said or didn't say - it is instead the continuing misrepresentation of the technology by the press and analysts covering it. For some reason the Byte and Switch article covering IBM's milestone concluded with findings by a Citigroup analyst who asserts that read/write endurance of MLC flash "remains challenging" - as if that had anything to do with the subject.

For the record, both the STEC ZeusIOPS drive that EMC is shipping today and the FusionIO device that IBM used in their SVC science experiment are SLC-based devices. Read/write endurance is an issue that the suppliers of both devices have solved already. The real challenge is the effective integration of these devices into servers and storage so as to maximize the response time benefits of the drives.

And despite what Charlie asserts on behalf of IBM, flash storage really is all about I/O response time.



the storage anarchist

Related coverage in EETimes: IBM, EMC storage gurus debate future of flash.

marc farley

Thanks Barry for referencing Tony's post. I knew I had read something from IBM about this being an SPC-1 benchmark. I wish Tony and Barry Whyte had been a little more consistent about this.

Barry Whyte

I think there has been general confusion from maybe unclear statements about the actual benchmark that was run. I had intended my blog post to turn up at the same time as the press release so it would clarify the actual benchmark - but even I have to take some time off every so often and was 400 miles from my laptop!

This was intended to be the clarification of the benchmark, and the press release numbers were all done by performing exactly the same benchmark on the known SPC-1 benchmarked SVC configuration, in order to give the 70/30 4K a grounding in something known.

Anyway, just a couple of things I wanted to comment on:

1. ("...Actions speak louder than words") The title was actually aimed more at Sun and HDS, who have made fluffy statements so far; I was implying we were making a statement that was grounded in reality - i.e. it's not just us saying 'yes, we are working on it'. This was not intended in any way to be taken as a snipe at EMC, Barry.

2. We started this project as a proof of concept, hence the lack of RAID used at this stage.

3. Nowhere in the press release did it state that this used a currently shipping SVC cluster - it clearly states using IBM's storage virtualization technology. Since it's not a shipping product at present, I don't really see the relevance of your point here.

4. The 700us response time is with all the tested fusion cards running as close to the wire as we could. If we wanted to have 100us response time, we could do this, but it would reduce the throughput. Since we are talking response time, yes you can keep below 1ms for some workloads, but I look back at your presentation and see a dramatic increase up to well more than 2ms as the load ramps. This is not the case here, the max load ramp stage is 700us.

5. Rest assured we all understand the needs of the enterprise when it comes to storage, and what our customers want, and that's what we will continue to prioritise and deliver. As clearly stated by Andy and Charlie, this isn't just about disk, it's about delivering the best for our customers, from the application, through the server, SAN and storage.

And rest assured when we do deliver on the potential we are discussing here, we will be happy to provide full SPC-1, TPC-C, TPC-H etc with full disclosure to the world... can you say the same - it must be lonely out there being the one major storage vendor left that doesn't publish performance data - I know we've been round that loop again and again, but just making the point... again.


the storage anarchist

BarryW - thanks for clarifying. YOUR honesty was not to be challenged, but some of your coworkers have taken things perhaps a bit too far.

But I do suspect that your publicizing those SPC results is not exactly in line with the Bylaws of SPC membership. Nor are TonyP's extrapolations, given your acknowledgement of running a non-standard setup of SPC-1 (70/30 vs. 60/40).

But breaking (or changing) the SPC rules in order to kick sand in EMC's face didn't stop NetApp - why should it stop IBM?

As to the erosion of response time you note, the tests in my presentation were all run with 8 flash drives, each on individual loops. RAID calculation plays a role in that, as well as the contention of mixed read/write workloads that you noted.

EMC has since tuned the microcode a bit to improve this, whereas you have tuned the benchmark to avoid these impacts. And in any event, matching your benchmark results up against mine is a fool's game, since we both know we're using different workloads - there's no way to do an apples-to-apples comparison.

But let's not joust over this...I'm not accusing you of doing anything wrong...just adding a little balancing perspective to help clarify it for everyone else.

Barry Whyte

Barry, understand your position and not looking for a joust tonight :)

But just to be clear, the tests we ran are NOTHING to do with SPC-1 - standard or otherwise. They are our own tool that generates I/O.

The 3.5x the performance figure is when compared to an SVC 8 node cluster running the same (in-house) 70/30 workload. So we aren't directly comparing this against an SPC-1 workload.

The only SPC-1 measurement that was referenced in the press release is the existing SVC 4.2.0 272K IOPs number.


I am unabashedly an employee of EMC, but the opinions expressed here are entirely my own. I am a blogger who works at EMC, not an EMC blogger. This is my blog, and not EMC's. Content published here is not read or approved in advance by EMC and does not necessarily reflect the views and opinions of EMC.
