
May 19, 2007

0.007: virtual provisioning catch-22

There's been a lot of discussion lately about what I will henceforth refer to as virtual provisioning (as the only appropriate use of the term "virtual" in relation to storage - see here for my reasons). I've seen many a blogger and blog commenter discuss the implementations, implications and merits of this so-called thin provisioning technology, and for the most part, I think people have the basics figured out.

Put simply, virtual provisioning technology presents hosts/applications/file systems the illusion that they have more physical storage than is physically allocated, and allocates physical storage only when it is used (written). The technology thus improves storage utilization and simplifies the tasks of storage administration.
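To make the mechanism concrete, here's a toy allocate-on-write model in Python. It's a sketch of the general technique only - the extent size, class names and pool behavior are my own illustrative assumptions, not any vendor's implementation:

```python
# Toy allocate-on-write model: the host sees the full virtual size, but
# physical extents come out of a shared pool only on first write.
# Extent size, names and behavior are illustrative assumptions.

EXTENT_SIZE = 1 << 20  # 1 MiB extents (assumed allocation granularity)

class Pool:
    def __init__(self, capacity):
        self.capacity = capacity
        self.used = 0

    def allocate(self, size):
        if self.used + size > self.capacity:
            raise RuntimeError("pool exhausted")   # the "bad things" case
        self.used += size

class ThinVolume:
    def __init__(self, virtual_size, pool):
        self.virtual_size = virtual_size   # what the host is told it has
        self.pool = pool                   # shared physical capacity
        self.extents = set()               # extents actually backed by disk

    def write(self, offset, length):
        first = offset // EXTENT_SIZE
        last = (offset + length - 1) // EXTENT_SIZE
        for i in range(first, last + 1):
            if i not in self.extents:      # first touch: allocate for real
                self.pool.allocate(EXTENT_SIZE)
                self.extents.add(i)

    def physical_used(self):
        return len(self.extents) * EXTENT_SIZE

pool = Pool(capacity=10 * (1 << 30))               # 10 GiB of real disk
vol = ThinVolume(virtual_size=1 << 40, pool=pool)  # host sees a full 1 TiB
vol.write(0, 5 * (1 << 20))                        # host writes 5 MiB
print(vol.physical_used())                         # only 5 MiB actually consumed
```

The host believes it owns a terabyte; the pool has parted with 5 MiB. Everything interesting in the rest of this post follows from that gap.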

But does it really?

From my vantage point, I see a few things about virtual storage provisioning that seem to have been overlooked - paradoxes that may well prove to limit the utility and value of this technology to a subset of the storage domain that is smaller than most think.

Will virtually provisioned storage fit in your environment?


If you've read the book or watched the movie, you know that the concept of "Catch-22" is a trap of circular logic: the very thing you do prevents you from doing it (or something like that).

I assert that virtual provisioning provides a Catch-22 paradox along a few different dimensions. In fact, EMC has over a year of practical experience with virtual provisioning on our Celerra platform, and some of the things we've learned can be very enlightening.

catch-22 #1

New chores for the storage administrator. With virtual provisioning, storage administration is supposed to be simplified. The idea is that storage admins can simply over-provision every request for storage without having to bother with the traditional issues of capacity planning, locating unused space, balancing performance, or the inevitable expansion/relocation when the initial allocation is outgrown.

Couldn't make the storage admin's job any easier - point & click for another (virtual) terabyte - just set it, and forget it!

But as many are beginning to understand, it's not quite that simple. Sure, the process of allocation can be dumbed down, but a new problem is created to fill the void (so to speak):

Bad things happen if you ever run out of space!

A runaway application suddenly writes gigabytes of unexpected data. The alarms and alerts go unanswered for one I/O too many. Somebody decides to dump his entire MP3 collection to a share drive. The purchase request for additional storage gets hung up, and the drives aren't ordered or don't arrive in time to meet demand.

Bad things start to happen.

So to avoid this, the storage admin now has a new use for the time freed up by fast-and-simple allocation: monitoring the use of the virtually provisioned devices and their supporting pools. Adding more storage to pools that are getting full. Moving fast-growing virtual devices into new pools and away from critical applications that can't be allowed to EVER get a "disk full" error.
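That monitoring chore boils down to a simple periodic check. The sketch below is illustrative - the thresholds, function name and pool figures are my own assumptions, not recommendations:

```python
# Sketch of the new monitoring chore: check each thin pool's utilization
# and oversubscription, and alert before it fills. Thresholds and pool
# figures are illustrative assumptions.

WARN_PCT = 0.70
CRIT_PCT = 0.90

def check_pool(name, physical_used, physical_capacity, virtual_committed):
    """Return alert strings for one thin pool (capacities in TB)."""
    used_pct = physical_used / physical_capacity
    oversub = virtual_committed / physical_capacity
    if used_pct >= CRIT_PCT:
        return [f"{name}: CRITICAL {used_pct:.0%} full - add disks NOW"]
    if used_pct >= WARN_PCT:
        return [f"{name}: WARNING {used_pct:.0%} full (oversubscribed {oversub:.1f}x)"]
    return []

# Example: an 8 TB pool, 6.1 TB consumed, 20 TB of thin devices carved from it
print(check_pool("pool01", 6.1, 8.0, 20.0))
```

The oversubscription ratio matters as much as the fill percentage: a 2.5x-oversubscribed pool at 76% full can go from "warning" to "bad things" much faster than a fat-provisioned array ever could.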

catch-22 #2

Deleting files doesn't return storage to the un-allocated pool. That's right - for virtually every file system in use today, when you delete a file, the space it uses is not freed up for use by another virtual volume. In many cases (such as NTFS), the file system doesn't even actually delete the file data at all - it simply marks the file name "deleted" so that it no longer appears to be in your folder or directory. Further frustrating the situation, NTFS (and other file systems) will in fact avoid re-using any space used by deleted files so as to improve the success rate of future undelete requests. And Windows Vista actually keeps multiple previous versions of files around, just in case.

Apparently nobody told Microsoft that disks (and disk administrators) still aren't cheap.

Net-net: the file system on your virtually provisioned device can appear to be using far less storage than has actually been physically allocated to it. This means that when your storage admin (or more likely, your server admin) discovers the runaway application or the improper use of corporate resources to store those MP3 files, there's no easy way to reclaim the space. Deleting the extraneous data doesn't do anything to solve the allocation problem - after the delete, the blocks will remain allocated to the file system, and the storage pools supporting the virtual devices won't be a single byte less full.

The only option is to copy the surviving (undeleted) files onto a fresh, clean volume. And since the storage array has no clue which blocks hold good data and which are allocated to deleted files, you can't even use array-based migration tools to relocate the volume - you have to use a host-based approach that can see the data through the file system (e.g. RoboCopy or EMC's Open Migrator).
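A toy model makes the reclamation problem obvious. Everything below is illustrative - a pretend filesystem bitmap and a pretend thin volume, not real NTFS or real array behavior:

```python
# Toy model of why deletes don't shrink the pool: the filesystem marks
# blocks free in its own bitmap, but the array still counts those extents
# as allocated. Names and behavior are illustrative, not real NTFS.

class ToyThinVolume:
    def __init__(self):
        self.allocated = set()       # extents the array has backed with real disk

    def write(self, extent):
        self.allocated.add(extent)

old_vol = ToyThinVolume()
fs_bitmap = {}                       # extent -> "live" / "deleted" (FS-only view)

for e in range(100):                 # the MP3 dump: 100 extents written
    old_vol.write(e)
    fs_bitmap[e] = "live"

for e in range(50, 100):             # the cleanup: admin deletes the MP3s
    fs_bitmap[e] = "deleted"         # FS bookkeeping only - the array never hears about it

print(len(old_vol.allocated))        # still 100: the pool is not one extent less full

# The host-based fix: copy only the live files to a fresh thin volume
new_vol = ToyThinVolume()
for e, state in fs_bitmap.items():
    if state == "live":
        new_vol.write(e)
print(len(new_vol.allocated))        # 50: space reclaimed only via migration
```

Note that only the host-side copy knows which extents hold live data - which is exactly why array-based migration can't help here.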

catch-22 #3

Performance can be significantly reduced. This one could be the most significant of all. I/O performance is often overlooked in discussions of virtual provisioning. In fact, the benefits of improved utilization are usually assumed to come at no cost in performance at all.

But in fact, as the Symmetrix Performance Engineering team pointed out to me just this week, improved capacity utilization means (by definition) fewer disk drives have to support more data - and more I/O!

Sure, you can stripe the storage pools out over a lot of spindles to spread out the load, but unless you never allocate all of the space in the pools or you leave unallocated space on these drives, sooner or later you're going to hit the IOPS limits of the spindles, and then the performance of every application on them will suffer.

People tend to overlook the fact that a single disk drive can support relatively few I/O operations per second - typically in the range of 120-150 IOPS, and maybe as high as 200 IOPS with the assistance of external intelligence to optimize the ordering of I/O requests. This IOPS ceiling is generally irrespective of the drive's capacity. Spindle speed does have some impact - e.g., 15K rpm drives usually support more IOPS than 7200 rpm drives - but it's not anywhere near twice as many.

"Wait a minute," you say, "My array is routinely delivering thousands and thousands of IOPS to multiple applications!" And this is true. Cached disk arrays deliver improved throughput through a variety of techniques that leverage the raw performance advantage of cache memory to mask the limitations of rotating mechanical storage.

But in the end, a cache miss requires a physical disk I/O, and a single disk drive can only support so many I/O requests per second.

So the paradox is this: when you lay out normal "fat" devices (I guess I should call them "real" devices, huh?) the performance load on each spindle is reduced by the overallocated-but-unused storage space. But with virtually provisioned devices, there is less unallocated-and-unused space on each drive, forcing each drive to support a bigger load.
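The paradox is easy to put into numbers. Using the rough ~150 IOPS-per-spindle ceiling, here's an illustrative comparison - the drive counts and workload figures are assumptions for the sake of the example, not measurements:

```python
# The paradox in numbers: same back-end workload, fewer spindles under it.
# All figures are illustrative assumptions, not measurements.

IOPS_PER_DRIVE = 150     # rough random-I/O ceiling per spindle
workload_iops = 4500     # cache-miss I/O the data set generates

fat_drives = 60          # "fat" devices at ~25% utilization: data spread wide
thin_drives = 20         # thin pool at ~75% utilization: same data, 1/3 the spindles

for label, n in (("fat", fat_drives), ("thin", thin_drives)):
    per_drive = workload_iops / n
    print(f"{label}: {per_drive:.0f} IOPS per drive "
          f"({'within' if per_drive <= IOPS_PER_DRIVE else 'OVER'} the ceiling)")
# fat: 75 IOPS per drive (within the ceiling)
# thin: 225 IOPS per drive (OVER the ceiling)
```

Tripling capacity utilization tripled the per-spindle I/O load - and pushed it well past what the drives can deliver.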

Surely there are applications that won't suffer at all from this performance paradox. Our Celerra experience shows this to be true - software development, network shares for office documents, print queues - all work pretty well when virtually provisioned.

Fact is, storage admins are going to have to take performance into consideration when doing their virtual provisioning, and from an entirely different perspective (see catch-22 #1).

And if they don't get it right up front, they may have to spend time to (manually) relocate the virtual volumes to meet performance SLAs.

Don't you just love circular logic?




Thanks for the education and providing the lessons that you guys have learned with your implementations.

Any thoughts on how you can get out of the circular logic of the problem?

Barry Burke

Excellent question - I'll probably blog a response in the not-too-distant future.

I'm also hopeful that once we get over the initial marketing hype for the technology, we'll start seeing discussions of the early adopters' experiences.

Readers with experience using 3par, Compellent, Left-Hand, Celerra, NetApp (et al), please feel free to comment on your own best practices!


Seems to me you misunderstand the intent. The whole point of thin provisioning is that MOST admins far, far overshoot their true capacity needs, and because of this, you have wasted space doing nothing. The problems you bring up aren't an issue of thin provisioning; they're an issue of proper administration.

*A user dumps his entire mp3 collection* - well, why are you allowing your users unlimited space on a share? Quotas, anyone?

It sounds to me like you're making excuses why you don't have thin provisioning yet by throwing out invalid arguments.

NOBODY has claimed that thin provisioning will somehow eliminate the need for proper planning and architecture. The *problems* you introduce have absolutely nothing to do with thin provisioning, those are issues of architecture and proper administration.

For instance, you shouldn't be placing *runaway logs* on a thinly provisioned LUN. Not only that, last I checked, you can set a percentage of the pool a given LUN is allowed to use. I.e.: if I have 100GB of free space and 10 thinly provisioned LUNs, I can say "each one of you only gets 10GB out of that 100GB; when you reach 9GB I'll get a warning and can add more."
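That cap-and-warn scheme can be sketched in a few lines. The function name and thresholds below are hypothetical - a sketch of the idea, not any array's actual feature:

```python
# Hypothetical sketch of the per-LUN cap-and-warn scheme described above;
# the function name and thresholds are illustrative, not a real array API.

def lun_status(used_gb, cap_gb, warn_gb):
    if used_gb >= cap_gb:
        return "BLOCKED: hard limit reached"
    if used_gb >= warn_gb:
        return f"WARNING: {used_gb}GB of {cap_gb}GB cap - time to add capacity"
    return "ok"

# 100GB pool, 10 thin LUNs, each capped at 10GB with a warning at 9GB
print(lun_status(used_gb=5, cap_gb=10, warn_gb=9))   # ok
print(lun_status(used_gb=9, cap_gb=10, warn_gb=9))   # warning fires before the cap
```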

You act as if these are impossible problems to solve... yet none of them are...

Done ranting, I just think your catch-22's are nothing of the sort. That is unless you equate catch-22 with FUD.

Barry Burke

Seems to me that you are in violent agreement with me..."Thin Provisioning [doesn't] eliminate the need for planning and architecture" - couldn't have said it better myself.

Fact is, in my experience, many people have the mistaken belief that Thin Provisioning can improve utilization with virtually no risks or side effects. I have witnessed this perception in communications with and from the press, analysts, competitors and many (many) customers - and admittedly NOT from many true storage admins or people like yourself who obviously fully understand all the ramifications.

My article is aimed at helping these people become aware of some of the issues that are not being included among all the hype. Thanks for helping me improve their understanding.


Fair enough Barry :) Sounds to me like the issue is sales people, but then again, when are they ever NOT the problem?

I guess I just felt like your post was aimed at dismissing a VERY useful technology. Perhaps I misunderstood.


I have to say, I agree with the catch-22's Barry points out. If the problems introduced by implementing thin provisioning can be solved by better storage administration, then couldn't the problem thin provisioning is trying to solve just be resolved with better storage administration too, rather than implementing thin provisioning?

Sounds a little like trading one administrative task for another, or worse, perhaps more tasks? Perhaps the answer lies in determining what set of administrative issues are easier to administer.

Having been a storage admin in SMB and Enterprise environments, I've had my share of failed apps and frozen systems due to out-of-space conditions. Thin provisioning would have added an additional layer of administration for me. And on the mainframe side of the equation, even on my best days, Stopx37 was always my friend.


You do realize that this is old FUD. Back in the old days of STK, EMC used the same line with the V2X Virtual Disk. That was actually the first Thin Provisioning disk to market. Users could over-allocate with no problem, and in all my days I never once heard of a customer running out of real capacity. The alarms and warnings gave you more than enough time to purchase capacity.

Alick Lok

We at Emerging Health IT practice thin provisioning. Our storage admins keep track of storage utilization by storage type and application: how much fibre/SATA storage (by array group/tray) each application purchased, and how much of that storage they have allocated. We send our customers monthly storage ledgers. In addition, we purchased a few array groups and SATA for a Quick Response Pool (QRSP), so we won't have a situation where customers run out of storage.

Storage Pimp

Catch 22 #3.1: Assuming the majority of your storage currently uses Raid 5 for protection, you must consider the "impact" of a double disk failure when moving toward a thin pool configuration.

For example, let's say you use 146 GB drives in 3+1 protection. This would be 146 x 3 = ~438 GB of usable space in a single raid rank. So if you had a double disk failure in this rank, the potential before thin pools was for 438 GB of data loss at the array level.

Now let's say you took this same raid rank and, say, another 159 raid ranks and created a thin pool. After some time there is a good chance that every device (TDEV for VP) has a chunk (extent for VP) on this raid rank.

The actual capacity of this pool is 160 x ~438 GB = ~68 TB. However, if you oversubscribed this pool by, say, 150%, then the potential is there for over 100 TB of virtually allocated space to be lost. Let's not even talk about what happens if these devices are put into logical volumes with devices from another array/pool.
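That blast-radius arithmetic checks out; here it is as a short script, using the commenter's illustrative figures:

```python
# Checking the blast-radius arithmetic (illustrative figures).

gb_per_rank = 146 * 3                 # 3+1 Raid 5 rank of 146 GB drives = 438 GB usable
ranks = 160
pool_tb = gb_per_rank * ranks / 1024  # real capacity of the thin pool
print(round(pool_tb, 1))              # ~68.4 TB

oversub = 1.5                         # 150% oversubscription
exposed_tb = pool_tb * oversub        # virtual capacity with extents at risk
print(round(exposed_tb, 1))           # ~102.7 TB - over 100 TB of virtual devices
                                      # exposed to one double disk failure in any rank
```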

So to mitigate this risk you go to a Raid 6 configuration, which can survive a double disk failure within a single raid rank. However, you now have a double-parity performance hit on writes.

So when going to pools, you have to accept a higher "impact" from the risk of a Raid 5 double disk failure, or lower that risk by moving toward Raid 6 protection.
