
April 23, 2010

2.048: a walk through the clouds

For what may be my final post of my 3rd year of blogging (April 27th is the anniversary of my first post -- not to worry, there will be a fourth year), I present a short story written by fellow EMC employee David Meiri. David has been a member of Symmetrix development for over 12 years. For most of those years he has been a key innovator and developer on EMC's world-renowned SRDF (Symmetrix Remote Data Facility). I only recently learned that he is also an artful author in languages other than C and assembler.

So, forthwith, here's David's (mildly edited) short story:

A Walk through the Clouds

A short story describing how a private cloud might look to users at all levels.


Sometime in the near future . . .

Susan’s day started out easy enough.

As the sole Application Administrator in the IT department of Blue Sky Bank, it was her responsibility to take care of any problems related to the many applications the bank ran. So far this morning there was only one issue: traders complained that the global trading application was not performing fast enough. While on the phone with their manager at headquarters, she took a quick look at the all-green dashboard on her monitor and said: “It looks like your storage demands are exceeding the 5,000 IO/sec you have requested, and as a result the average latency is above your SLA. If you want better performance, I can move your current trading activities to a higher tier and charge you an additional $10,000 a quarter. The next-level service package in the catalog will provide you with a higher IO rate, reduced response time, higher availability and better data protection through more frequent snapshots that are retained for longer periods.” The manager on the other end of the line approved the change, Susan turned to her keyboard to adjust the policy, and within minutes the application’s performance began to climb.
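[Editor's aside: for the technically curious, here is a minimal sketch, in Python, of the kind of check Susan's dashboard might be running under the covers. Only the 5,000 IO/sec quota and the idea of a latency SLA come from the story; the function, field names and sample numbers are hypothetical.]

    # Hypothetical sketch -- not an actual EMC interface.
    # Flags an application whose measured workload has outgrown the
    # service package it purchased.

    def sla_breached(measured_iops, measured_latency_ms, policy):
        """policy is a dict such as {"max_iops": 5000, "target_latency_ms": 5}."""
        over_quota = measured_iops > policy["max_iops"]
        over_latency = measured_latency_ms > policy["target_latency_ms"]
        return over_quota and over_latency

    # The trading application: demand above the purchased 5,000 IO/sec and
    # average latency above the SLA -- time to offer a higher tier.
    print(sla_breached(6200, 8.5, {"max_iops": 5000, "target_latency_ms": 5}))  # True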

 

A knock on her door brought the second issue of the day. Ron from the finance department wanted to create a SAS environment for a new analysis project. After inquiring about the department’s needs, Susan recommended the Gold Package, which included up to 10TB of data, up to 2,000 IO/sec with an average response time of 5ms, and up to 500 MB/sec of throughput. Ron hesitated: “The IO bandwidth looks adequate, but I'm pretty sure we won't need that much storage for at least a year.” Susan reassured him that with metered Virtual Provisioning they were going to pay only for the storage they actually used, and the charge-back would reach the full fee only when they reached 10TB of storage (“pay as you go”). Alternatively, Ron could buy an annual subscription and be charged a reduced price.
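[Editor's aside: a rough sketch of the “pay as you go” arithmetic. Only the 10TB Gold Package ceiling comes from the story; the quarterly full fee and the helper function are assumptions made up for illustration.]

    # Hypothetical sketch of metered ("pay as you go") charge-back.
    # Only the 10TB Gold Package ceiling is from the story; the fee is invented.

    GOLD_CAPACITY_TB = 10.0
    GOLD_FULL_QUARTERLY_FEE = 30000.0  # assumed full fee once 10TB is consumed

    def quarterly_charge(used_tb):
        """Charge in proportion to storage actually consumed, capped at the full fee."""
        used_fraction = min(used_tb, GOLD_CAPACITY_TB) / GOLD_CAPACITY_TB
        return round(GOLD_FULL_QUARTERLY_FEE * used_fraction, 2)

    print(quarterly_charge(1.5))   # 4500.0 -- Ron's project starts small, pays small
    print(quarterly_charge(10.0))  # 30000.0 -- the full fee, reached only at 10TB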

Ron agreed, but was still concerned about security and data availability. “What exactly do you need?” Susan asked. “The same access control rules, or do you need a new user group and policy? Data Encryption? Is Local Replication enough, or do you want Remote Replication? Do you need Continuous Data Protection, is hourly backup sufficient, or something in between? We have it all.” Ron was not sure. “We need to comply with the bank's regulation for sensitive financial information, SFI.” “No problem,” Susan replied, and checked the SFI box in the wizard she was running. “According to the Resource Manager,” she said, “SFI requires access logging, encryption, daily local snapshots retained for 7 days, and remote replication with an RPO of up to 10 seconds. The system will allocate these resources and set up the protection automatically. Anything else I can do for you?”

“By RPO you mean Recovery Point Objective, right?” asked Ron. “Yup!” concurred Susan enthusiastically. “It basically means that if the main data center is lost, operations will resume from the secondary data center with no more than 10 seconds of data loss.” Ron asked to lower it to 5 seconds, to comply with his department’s guidelines. Although the solution Susan was building was based on the SFI template, with a few mouse clicks she modified it to reduce the RPO. The dollar amount in the top right corner of the screen increased by $2,000 per month, but being in Finance, Ron had no problem raising the cash. He approved the change, and within minutes the new environment was set up and ready to use.

“Next time,” added Susan, “you can use the self-service portal to make these changes.” Ron was surprised to hear that he could have made all these changes by himself, through a simple web interface. Susan explained that the same process used to buy equipment from IT could also be used to buy more bandwidth, higher availability, better disaster protection, and so on. Not that long ago, such changes would have taken weeks to navigate the approval path on their way to implementation.

It was time for her lunch break, but on her way to the cafeteria Susan’s iPhone beeped. A Twitter message had arrived from the automated monitoring facility that runs in the datacenter and displays on her desktop. Apparently, the sales department had reached 90% of their storage capacity allocation. At the same time, their IO rate had reached peak levels, indicating their applications required more bandwidth from the system. “I’ll have to call them after lunch and ask them to either buy more storage, start deleting old, unused data, or select it for removal to archival storage,” thought Susan. “I can also ask whether their application response times are still satisfactory. At least I know for sure that their increased drain on the system is not going to adversely affect users in other departments.”

Two months earlier . . .

The IT department of Blue Sky Bank had just acquired a brand new “EMC Virtual Storage System.” The system had all the data storage features they had grown to depend on: from the automation of FAST, to the efficiency of TimeFinder, to the assured data protection of SRDF. Virtual Storage also brought the latest application-centric optimization features such as End-to-End QoS, Dynamic Cache Partitioning and Multi-Tenancy Controls. These enabled different departments to share the same storage arrays without negatively impacting each other. For each department, IT could allocate exactly the resources it needed, with the protection and availability characteristics it required. These came with guaranteed service levels and monitoring tools to alert administrators when any issue occurred. In addition, a new Charge-Back facility enabled IT to charge users for exactly what they bought and used.

Normally, such a gamut of features would have been a problem: it took storage administrators months and months of training and costly mistakes to learn how to operate the system. But EMC’s Virtual Storage had something new and exciting: a management module that tied all these features together into offerings that could be served up as complete bundles. EMC called it the “Virtual Storage Integrator,” but for the IT people it spelled trouble: with a system so simple to use, most of them had to look elsewhere for a job.

For the few who were left to manage the storage, the job became extraordinarily easy. Within a few days they configured different packages, based on the various service-level agreements (SLAs) of their internal customers. They defined policies for each package that followed corporate guidelines for protection and availability, and each had a price tag based on the amount of storage used, the maximum bandwidth guaranteed, the maximum available IO/sec, and the target response times. No more Metas, Hypers, BIN Files or WWNs – just straightforward policies that matched the needs of the business.
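[Editor's aside: a sketch of what such a package catalog might look like when expressed as policies rather than Metas and Hypers. The Gold figures (10TB, 2,000 IO/sec, 5ms, 500 MB/sec) come from the story; the field names, the Silver tier and the provision() helper are invented for illustration.]

    # Hypothetical service-package catalog -- illustrative only.
    PACKAGES = {
        "gold": {
            "capacity_tb": 10,
            "max_iops": 2000,
            "target_latency_ms": 5,
            "max_throughput_mb_s": 500,
            "quarterly_fee_usd": 30000,    # invented price tag
        },
        "silver": {                        # invented lower tier
            "capacity_tb": 5,
            "max_iops": 1000,
            "target_latency_ms": 10,
            "max_throughput_mb_s": 250,
            "quarterly_fee_usd": 12000,
        },
    }

    def provision(department, package_name, capacity_tb):
        """Hypothetical: hand the policy to the array and let it allocate resources."""
        policy = dict(PACKAGES[package_name], capacity_tb=capacity_tb)
        print(f"Provisioning {capacity_tb}TB for {department} under '{package_name}': {policy}")

    provision("finance", "gold", 10)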

These packages were specifically targeted for use by the Application Administrators (“people who are not smart enough to be storage admins” was the common joke among the now-obsolete storage jockeys). Everything was made as simple as possible: the app admins just needed to select the appropriate package, specify the desired capacity and have the users approve the cost. Packages could be easily modified (e.g., to improve protection or increase bandwidth), but most users took the default without any changes, knowing that they could always go back to their app admins and buy more: more IO/sec, more MB/sec, more protection, more storage, etc. The vision of delivering IT as a Service (ITaaS) had been fulfilled.

After they finished configuring the packages, life again became dull and boring for the admins. Even tech refresh array migrations, once a nightmarish drain of their resources and a constant source of strife between IT and its internal customers, became painless. Gone were the days when IT needed to schedule application, database, server and storage admins to agree on one weekend when 10 different departments would be down. Users were not even aware that their data was being migrated across arrays; using the Federation features of EMC Virtual Storage, the data moved seamlessly without any impact to applications and users.

EMC Virtual Storage had changed the way data centers work, and for the better.

-- David Meiri, April 2010

emc's journey to the private cloud

The above is not a fairy tale – it is instead an account of a most probable reality that EMC has already made tremendous progress in delivering. And indeed, it describes a Journey, and not a destination – the evolution from physical to Virtual Storage. If you want to learn more about this Journey, and how EMC's products and capabilities are paving the way to the Private Cloud (and why I keep capitalizing words like Virtual, Storage and Journey), well then, you should get yourself to EMC World on May 10-13, 2010.

 

