
July 28, 2007

0.021 the case against standardized (performance) testing

Fellow blogger Tony Pearson has just completed a week-long series on the values and merits of standardized storage performance benchmarking, in a not-so-subtle attempt to justify his recent assertion that a SPC-2 win for the SVC has awe-inspiring relevance to customers. And he's done so in an eloquent, perhaps even masterful manner, deftly leveraging the subtleties and nuances of the English language (who knew?) to make his case.

But if you ask me, he's failed miserably. Unless his readers get lost in the misdirection and fail to realize that his metaphors are totally unrelated to the world of storage performance. In fact, his tutorial underscores the problems associated with standardized testing.

Elsewhere in the blogosphere, I have offered my own personal perspective on standardized benchmarking, which boils down to this:

  • Standardized benchmarking oversimplifies the complex interactions that make up a real-world environment - the requirement for "controlled and repeatable" forces standardized benchmarks to exclude the chaos of random, but normally occurring, events and overheads, often masking or even intentionally subverting key differentiating capabilities of the test targets
  • The inherent quest to be best in standardized benchmarks inevitably drives participants to optimize their test targets for the test
  • There is very little documented correlation between standardized testing results and the intended real-world application of the test target, and most people don't understand what the tests actually measure
  • The inbred survival instincts of humans lead us to subconsciously establish relationships and hierarchies between similar objects, and in the absence of in-depth situational/contextual understanding, we will assign "better" based solely on whatever limited data points are available to us

I know - heady assertions, and my opinions all. But note that I harbor these opinions for ANY standardized test, be it the SPC, TPC, MPG, EER, SAT or every state's equivalent of MCAS. And my reasoning is simple:

Standardized testing homogenizes comparisons to a meaningless baseline that masks the unique strengths of the test targets, be they cars, servers, storage arrays or high school students. Unless you fully understand the test itself and the relevant requirements of your own application of the test target, you can draw no real conclusions on how standardized test results apply to your expected results.

So when Tony tries to convince readers that the SPC is like MPG, well...you know me, I gotta take exception.

masterfully mixing metaphors

For some reason, Tony tries to make it sound like SPC benchmarks are similar to Miles Per Gallon ratings, when I think it's pretty obvious that the real automotive parallel for storage performance is Miles Per Hour. But when I specifically asked Tony about this on his blog, his response was surprising, if not counterintuitive. He said he was comparing SPC IOPS to MPG because (paraphrasing here):

  • MPG results are consistent across every instance of a particular car model that comes off the production line
  • MPG is standardized and publicly available
  • MPG is usage-based and connected to real-world conditions
  • MPG can be used for cost/benefit analysis

Powerful assertions that seemingly support - well, nothing, really! What does any of that have to do with the fact that MPG measures the number of miles a car is supposedly able to go on one gallon of gasoline under some unknown (but well-labeled) conditions, while the SPC measures how many "SPC IOPS" a certain vendor-selected configuration can achieve?

I know what a mile is, and I know how far I need to travel each day and I even have a pretty good idea how much a gallon of gas is going to cost me, but I have NO CLUE what an SPC IOP is, nor how many of these my storage array needs to be able to do. The metaphor isn't even apples and oranges - it's more like fruit flies and potato peelers!

mpg isn't what you might think

Fact is, there's no real attempt to prove that every single car coming off a production line will get identical MPG in the EPA's test - they take one "production car" off the line, break it in for 4,000-5,000 miles, test it, and publish the results. Done! And while the test is indeed standardized and publicly available, I doubt that most people have taken the time to read or understand these tests, or how they relate to their driving styles. Tony provided excerpts describing the driving patterns for City and Highway (via his link to How Stuff Works), and I'm sure you all immediately recognized (as I did) that you never drive that way - ever. But that's not the only reason for the "your mileage may vary" disclaimer.

For example, I bet most people don't know that the tests are always run with the air conditioners, heaters, radios, fans, GPS's and lights all turned off - because all of these things draw power from the engine, which (you guessed it) lowers the results. And I'll bet you didn't know that the test doesn't actually measure the amount of fuel burned over the (synthetic) test course - it calculates it based on the hydrocarbon output at the exhaust. You and I, well, we calculate MPG based on the amount of fuel we have to put back into the tank to fill it back up. And as Tony points out in his rebuttal to my inquiry, engines are getting more and more efficient at burning fuel more completely, which means the test results are being artificially inflated - a factor that is even greater with hybrid cars, because they don't generate any hydrocarbons while the electric engine is running, but the tests don't sufficiently account for the energy overhead of charging the batteries.

And the EPA tests are run in a temperature-controlled environment using specially-blended fuel with a consistent energy content (unlike what you can get at the pump - did you realize that "10% Ethanol" means that you are only buying 97.5% of the energy you would have gotten if there was no Ethanol added to the gasoline?)
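That ethanol claim is easy to sanity-check with back-of-the-envelope arithmetic. The sketch below uses approximate, commonly cited energy densities - these figures are my own assumptions, not numbers from this post or the EPA, and published values vary, which is why you'll see the E10 energy figure quoted anywhere from roughly 96% to 98%:

```python
# Back-of-the-envelope check on the E10 energy claim.
# Energy densities are approximate (BTU per gallon); published figures
# vary by source, so treat these values as illustrative assumptions.
GASOLINE_BTU_PER_GAL = 114_000
ETHANOL_BTU_PER_GAL = 76_000   # ethanol carries roughly 2/3 the energy of gasoline

def blend_energy_fraction(ethanol_fraction: float) -> float:
    """Energy in a blended gallon relative to a gallon of pure gasoline."""
    blend_btu = (1 - ethanol_fraction) * GASOLINE_BTU_PER_GAL \
                + ethanol_fraction * ETHANOL_BTU_PER_GAL
    return blend_btu / GASOLINE_BTU_PER_GAL

print(f"E10 delivers {blend_energy_fraction(0.10):.1%} of the energy of pure gasoline")
```

With these particular density assumptions the answer comes out just under 97%; the post's 97.5% simply implies a slightly more generous figure for ethanol. Either way, the point stands: a "gallon" at the pump is not a fixed unit of energy.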

To top it all off (so to speak), the EPA is in fact changing the way MPG is measured as we speak (see this article at Edmunds). That's right, they're reworking the benchmark, so the "city" numbers on 2008 model cars and trucks will be about 12% lower than on the identical 2007 models.

your mileage will vary

So I ask you - how the heck are you going to be able to use the 2008 MPG numbers to make an informed car-buying decision? You can't compare this year's MPG ratings to last year's cars, because the tests are different. More importantly, you still can't correlate the new MPG numbers to your own driving habits, because nobody has any hands-on experience to validate the relationship that the EPA asserts they've accommodated. They "say" the new numbers are "more representative," and that they are tacitly credible because they are US Government EPA-sponsored tests and results (this is now being specifically called out on the ratings sheets, in an apparent response to market suspicion that the auto manufacturers have been "doping" the results over the years).

But guess what? I agree that MPG and SPC are indeed similar.

Similar in that both tests are impossible to correlate in advance to expected results! And when, after the fact, the real-world environment doesn't match the results predicted by the test, there's really not much anyone can do about it - either the test workload was misunderstood, or the specifics of the intended real-world workload were (more likely, both weren't understood sufficiently).

And even though the MPG tests are "standardized" (everybody knows what a "mile" and a "gallon" are), they don't necessarily cover my intended use case. If I want a 4x4 to go off-roading, how do I know that the relative EPA "on road" fuel economy ratings of my potential selections are going to be consistent in relationship to one another in "off road" use cases? I don't. And I can't. The EPA tests don't cover my use case, and thus I have no idea how much fuel I should plan on needing.

Just like the SPC tells me nothing except maybe how well the specific tested configuration runs that benchmark. In fact, there's nothing to even explain WHY this specific configuration was chosen instead of one, say, with fewer, larger disk drives. More significantly, as I mentioned before, there is no common understanding of what an "SPC IOP" is (nor an "SPC MB/s," for that matter). Fact is, unless you're intimately involved in benchmarking, the SPC tests and the architectures of the storage itself, there is insufficient data to make any correlation of SPC results to any other real-world environment.

speed vs. efficiency

On top of all this, the fact is that the EPA's MPG is a measure of efficiency, not speed or performance. And the SPC is a measure of performance, not efficiency - the number of these specific "SPC" I/Os that you can get done in a unit of time (e.g., one second: IOPS). And while you can divide SPC IOPS by the list price of the test configuration to get an efficiency rating, it's the wrong one: MPG is Miles per Gallon, not Miles per List Price. The SPC equivalent would have to be SPC IOPS per Watt, but I can't seem to find the measured power utilization of the test configuration in any of the SPC benchmarks.
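To make the distinction concrete, here's a minimal sketch of the two candidate efficiency ratios. Every number below is hypothetical - invented for illustration, not taken from any actual SPC filing (which, as noted, doesn't report power draw anyway). The point is simply that "IOPS per dollar" and "IOPS per watt" can rank the very same configurations in opposite orders:

```python
# Hypothetical figures for two benchmark configurations -- none of these
# numbers come from a real SPC result; they only illustrate why
# "IOPS per list price" and "IOPS per watt" can rank systems differently.
configs = {
    "Array A": {"spc_iops": 200_000, "list_price": 3_000_000, "watts": 12_000},
    "Array B": {"spc_iops": 120_000, "list_price": 1_200_000, "watts": 10_000},
}

for name, c in configs.items():
    iops_per_dollar = c["spc_iops"] / c["list_price"]
    iops_per_watt = c["spc_iops"] / c["watts"]
    print(f"{name}: {iops_per_dollar:.3f} IOPS/$, {iops_per_watt:.1f} IOPS/W")
```

With these made-up numbers, Array B wins on IOPS per dollar while Array A wins on IOPS per watt - which is exactly why "efficiency" is meaningless until you say efficiency *of what, per what*.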

No, the performance metric parallel we're looking for in the automotive world is very clearly Miles Per Hour. But as Tony points out, we all know that MPH isn't really all that useful in choosing the proper car in the real world, since almost all cars go faster than the speed limits (at least here in the US). Truth is, the vast majority of consumers don't buy cars based on which one wins at NASCAR or the Grand Prix.

So it clearly wouldn't have helped his argument any to relate SPC to MPH, because we all know MPH is irrelevant. And by association, that would admit that the SPC tests themselves might in fact be irrelevant.

but how much is good enough?

Here's the thing - neither SPC nor MPG ratings are true "tests" - their results are really only relative metrics, and there is no perfect score. You can't get one - in fact, the test creators don't know what a "good enough" score is, much less the best possible. Unlike so-called "aptitude tests" (SAT, MCAS, IQ), neither SPC nor MPG really tells us anything about the ability (aptitude) of the test target to perform outside of the specific test criteria. And while the SAT or MCAS may provide some insights about a candidate's linguistic and mathematical abilities, neither offers the college registrar any real insight as to a candidate's aptitude for, say, music, technology, or psychology.

Perhaps the biggest challenge with SPC and other standardized performance tests (SpecNFS, IOMeter, etc.) is not knowing what "good" or even "good enough" is. The predominant assertion is that "more is better," and that you always want to buy "the most you can for your money." But how do you know how much you really need? What if you could spend half as much money to get half as much performance and still meet your application's requirements and SLAs? Or say you spend $3.5M to get the top-rated performance configuration, only to find that it costs more to configure, operate, power & cool than you could afford? Or that its performance falls to a tiny fraction of the rated results while a disk drive is rebuilding or under the strain of synchronous remote replication?
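The "good enough" question can be framed as a simple selection rule: instead of buying the benchmark winner, pick the cheapest configuration that still clears your requirement. A minimal sketch, with entirely made-up configurations and an assumed 60,000-IOPS application requirement:

```python
# A sketch of "good enough" selection: rather than buying the benchmark
# winner, pick the cheapest option that still meets the application's
# requirement. All configurations and numbers here are hypothetical.
REQUIRED_IOPS = 60_000   # assumed application requirement / SLA

candidates = [
    {"name": "Top-rated config", "rated_iops": 200_000, "price": 3_500_000},
    {"name": "Mid-range config", "rated_iops": 95_000,  "price": 1_400_000},
    {"name": "Entry config",     "rated_iops": 70_000,  "price": 900_000},
]

# Keep only configurations that meet the requirement, then take the cheapest.
sufficient = [c for c in candidates if c["rated_iops"] >= REQUIRED_IOPS]
choice = min(sufficient, key=lambda c: c["price"])
print(f"'Good enough' pick: {choice['name']} at ${choice['price']:,}")
```

Under these assumptions the "good enough" pick costs roughly a quarter of the top-rated configuration - which is the whole point: a leaderboard tells you who scored highest, not what you should buy.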

No, tests like the SPC just aren't all that helpful in making the appropriate storage selection.

FWIW, David Hitz had a little fun with this topic in his recent Lies, Damned Lies and Benchmark Results blog post. His point was that there are lots of different ways to analyze performance benchmarks, and you can come to different conclusions based on how you interpret them (although, not surprisingly, he was able to derive "NetApp is a little better," "NetApp is a lot better" and "NetApp is infinitely better" out of the same SpecFS results). Go figure. :-)

blogketing gone overboard?

Bottom line: my point is not that the SPC (or any other standardized test, for that matter) is bad. But to promote it as anything other than an interesting data point is to assign more importance to it than it deserves (IMHO).

Of course, that's what marketing is really all about, and given Tony's title and position (brand marketing, IBM storage), I know I really shouldn't expect anything else. But in the grand scheme of things, posting the best results for a benchmark that nobody can relate to the real world barely justifies a press release. The blogketing hype and "get under EMC's skin" response to the relevance challenge, the thinly veiled accusations that EMC is hiding something by not participating in SPC, followed by a week-long (semi-condescending) tutorial on performance metrics - all in defense of one little benchmark - well, I just think that's going more than a bit overboard to create relevance where it simply doesn't exist.

And to try to correlate SPC with MPG (instead of MPH) is really just obfuscation, and I think that approach hurts the relevance case instead of helping it.

At least I'm now more convinced than ever that the SPC benchmark is pretty much as irrelevant as MPG!

But remember - YMMV!



open systems storage guy

A benchmark is useless for knowing how fast a device will go - its main use is to know which device will go faster.

SPC's "meaningless baseline" is not meaningless for comparing systems. If two SANs are benchmarked, and one of them does better, that one will do better in most real life load environments. If I'm trying to decide between competitively priced bids that do everything the same, then this information can be the final differentiator.

That said, I agree that a benchmark is less important than a quantitative load test under a real workload, and I also agree that benchmarks leave out some very important variables when it comes to storage (such as manageability and reliability).

the storage anarchist

I'm still looking for proof of the assertion that the SPC is an accurate representation of "MOST real life load environments" - I think that's a stretch that even the SPC membership wouldn't attempt.

I know first-hand that a system optimized for OLTP workloads won't necessarily be best for DSS workloads (and vice-versa). And I sincerely doubt that the SPC results correlate to Microsoft Exchange workload performance (as an example). At least, I've seen no evidence that the configuration that performs SPC-2 best will also be the best at Exchange (or anything else, for that matter).

Storage just ain't that simple, especially when you have hundreds or thousands of hosts & applications running on a given platform simultaneously. Add in drive rebuilds, local & remote replication and the near constant flow of additional storage allocation, and there's no benchmark in the world that even approximates "most real-life load environments," much less actually covers them all.

If you have data to back up the assertion, I'd love to see it. Until I do, I'll maintain my position: Your Mileage WILL Vary...

Dave Graham


Appreciated this article as it hearkens back to some of the issues I faced in the consulting world. In my case, it was regarding synthetic benchmarking of processor technologies (i.e. Intel and AMD) and how platform differentials, methodologies, and indeed even personal bias could, ultimately, impact the results. This happened on several fronts: psychologically, the marketing impact of "Hey, I won this round!" was utilized to prove that X technology held a certain performance crown for the moment. Statistically, it also skewed the results because there were no checks and balances (code optimizations, "hidden" code checks, etc.) to ensure the data from X technology was pure. So on, so forth. In any case, I continue to read your articles with interest.



open systems storage guy

"If you have data to back up the assertion, I'd love to see it. Until I do, I'll maintain by my position: Your Mileage WILL Vary..."

-I agree. Mileage will vary. Benchmarks are not supposed to tell a company how their workload will do on a particular system. What they do is tell the company whose system will be faster. The difference between their workload and the SPC test workload will be pretty constant for any hardware, and that's why people benchmark.

the storage anarchist

Your comments are welcome, but you still aren't providing any data to back up your perspective.

My own hands-on performance analysis experience (25+ years) has demonstrated repeatedly that no single benchmark data point is EVER sufficient to predict the performance of a complex piece of computer equipment in "ALL" or even "MOST" applications. Different workloads will stress different bottlenecks of the system, and there is no single benchmark that can reproduce and measure this in a meaningful manner.

TPC won't tell you how fast a platform can do FFT's, nor will the fastest TPC system also necessarily be the fastest FFT system (or vice-versa). That's why people looking to purchase a system to do complex graphical modelling and analysis find the TPC completely irrelevant to their decision making process.

SPC is no different. The SPC-2 benchmark mimics some mix of video transfer, file transfer and database query workloads - workloads that have nothing in common with high-transaction, small-block OLTP or Microsoft Exchange workloads (just as two examples).

Yet you seem to be insisting that the best SPC system will also be the best Exchange system...I can confidently assure you that nothing could be further from the truth - Exchange I/O patterns look NOTHING like the large-block transfers and read-only queries that SPC-2 models. Pick your Exchange system based on SPC-2 results and, well...hopefully your replacement won't make the same mistake ;).

But you are indeed making my primary point - people are easily misled into believing that the SPC is a representative test of relative performance for all use cases. Even though the creators of the SPC themselves acknowledge outright that such is not the case (nor the intent, mind you - that's why there is more than one SPC test in the first place!!!).


What I find most frustrating about this SPC debate is that both sides go to extremes and miss the entire middle ground. Yes, any benchmark has limited validity and value. But this does not mean that benchmarks have no value at all.

The reality is that a benchmark is a good tool in helping you to make a decision but it should not be the only tool.

You make a number of comments about testing and benchmarking in general that I feel are as off base as the people you are commenting about. For example to quote you:

"Standardized benchmarking oversimplifies the complex interactions that make up a real-world environment --the requirement for "controlled and repeatable" forces standardized benchmarks to exclude the chaos of random, but normally occurring, events and overheads, often masking or even intentionally subverting key differentiating capabilities of the test targets "

The purpose of a benchmark is to simplify things, but this does not mean that you cannot also use other tests and criteria to make your decision. If a test were devised that tested every possible boundary, then the resultant data would be meaningless, as there would be little or no context to put it into. Simply put, to run a single test and consider it complete proof is as silly as running no tests because they aren't perfect.

"The inherent quest to be best in standardized benchmarks inevitably drives participants in to optimize their test targets for the test"

Totally true, and marketing types are the worst at taking advantage of such things. The key to preventing this is to have a fully described and repeatable test, and to not use a single benchmark or test as your only decision point.

The fact that some people may wish to manipulate the results of benchmarking does not invalidate their value.

"There is very little documented correlation between standardized testing results and the intended real-world application of the test target, and most people don't understand what the tests actually measure"

Totally incorrect. I would submit that the entire body of automotive and aviation safety testing disproves this. Standardized testing on cars and an insistence on mandating improvements in crash test results have without a doubt improved the survivability of passengers. The fact that the automotive industry has used the same arguments you make - that standardized crash testing is unrealistic and not representative of real-world situations - has not negated the fact that the testing and the resultant changes have improved safety.

"The inbred survival instincts of humans leads us to subconsciously establish relationships and hierarchies between similar objects, and in the absence of in-depth situational/contextual understanding, we will assign "better" based solely on whatever limited data points are available to us"

Correct. This is called survival of the fittest. The ones that failed to make accurate judgement calls had a tendency to perish, thus reinforcing this trait. For good reason, I would say.

"I know - heady assertions, and my opinions all. But note that I harbor these opinions for ANY standardized test, be it the SPC, TPC, MPG, EER, SAT or every state's equivalent of MCAS."

Let me ask you a question here. Given a choice, who would you prefer to operate on you: a doctor that has taken standardized tests and passed them, or a doctor that hasn't taken any tests but scored really well with the nurses while going to medical school?

I agree with you that standardized testing does not prove the total ability of an individual, but it does prove that they have the basic knowledge expected of them for a job or skill set. Again, the purpose of the testing is not to be a definitive and complete ranking of an individual, but only to be a single reference point. This is why most universities look not only at SAT scores but at other aspects of the student. But this does NOT invalidate the fact that standardized testing has some value - it must simply be used with judgement.

Equating MPG or even MPH with the SPC numbers is a poor choice and distracts from the underlying issue: does the test provide some bit of information of value to the customer? Would I use MPG results as my only criterion for purchasing a vehicle? Of course not. But they could still be useful in deciding the class or level of product that I might want. If I was looking to purchase a commercial vehicle for transporting goods, I would want to be able to make some sort of decision based on size, capacity and efficiency. Short of hiring every possible vehicle and running tests, an MPG rating would provide an estimate of where different vehicles would sit in relation to each other.

Finally, standardized tests are used even at EMC. Or at least I certainly hope that they are. If not, then how do you determine whether an engineering change or software change improves or hinders performance? Does EMC do no internal testing at all of their products? If they do, then they must find some value in these nonsensical standardized tests. Of course, I expect that they do run tests, and that they run a wide range of them, as it would be silly on their part to try to run one single test that represented everything.

If EMC does not like the SPC tests then publish some other results with testing parameters clearly documented so that others can duplicate the tests to validate the results.

Finally, to repeat myself: people on both sides of this argument are overstating their positions and being a bit silly. The reality is in the middle somewhere, as usual.

the storage anarchist

rwmiller - next time, don't hold back...tell us what you think!

Seriously, I'm not going to pick apart your response, primarily because this whole debate will never end - there will always be people on both sides of the argument, and neither you nor I are ever going to get everyone to agree. But you've given it a valiant effort, I must admit.

But I will make two comments. First, comparing computer performance benchmarking to standardized crash tests is a pretty big stretch. The crash test is insanely simple, easily recreatable and most importantly, obviously applicable to the real world - you don't have to be a physics major to tell the difference between a "good" and a "bad" test result.

But I seriously doubt that Consumer Reports would ever use the SPC tests to rate storage arrays, if only because the average Joe would have no idea what all that meant. And my hope is that Joe Storage knows better.

Second, my choice of doctor would be None Of The Above - I want the doctor who has successfully treated the most people with my ailment before. My wife and I actually spent months chasing references when we had to find a new doctor a couple of years ago. Not once did we ask about test scores; we focused entirely on personality, track record, and respect from peers, employees and patients to make our selection.

Oh, and neither "success with the nurses" nor "crash test ratings" played any part in our doctor selection either :)


First I totally agree with everything you have said. I just disagree with the conclusion you have come to based on that.

And yes, the debate is not likely to end - not because one side or the other is unable to convince the other, but because both sides have different agendas. One side sees no corporate advantage to doing the tests and the other side does. This does not make the tests a bad idea, nor does it make them a good idea. It does, however, make for a lot of finger pointing and the odd bit of fun.

I am happy that you were able to find a qualified doctor that met your needs and I totally agree with your method of choice.

But, (you knew there was one coming didn't you?) what are you going to do when all the qualified doctors retire or die off?

By this I mean that at some point a freshly graduated doctor needs to get his or her first job, and how are they to do this if the only criterion people use is the one you have used?

How is a hospital to choose a good candidate? Purely on the interview? Do their own comprehensive testing to see if he actually attended the classes? Or would it make sense to take a look at and review the comments of his instructors and the medical school that they went to?

And how is a university to pick out who would be a good candidate for medical training? Just let anyone that feels like it give it a shot? Or should they review the standardized testing of the applicant from their previous schools?

My point here is that this wonderful doctor you found is the result not only of his own native abilities, training and work, but also of having been culled from the herd by a progressive series of standardized tests.

I wonder what your doctor would have to say about standardized testing of student doctors having no value?

I also feel that you have oversimplified the "physics" involved in crash testing. Yes, the basic premise is straightforward enough, but how does one analyze the results? You say one need not be a physics major to tell the difference between a good and a bad test result, but how? Just looking at it? How can someone simply look at it and say that in this case the person broke a leg, and in this one they broke two legs and sustained a concussion?

Sorry, but in addition to running high-speed cameras on the crash test, the modern crash test dummy is full of sophisticated electronics to measure impact and g-forces, utilizing state-of-the-art computer systems.

Standardized testing is in use everywhere. Take your own products and company, for example. EMC without a doubt makes very good products and employs some of the best people in their fields. But the parts that make up your product have all been manufactured and tested to certain standards. They are assembled to a certain standard and tested against this standard. When you hire a new person with no previous work experience, you without a doubt require proof of their education and results from their previous standardized tests (i.e. grade scores). You may even require that some people take a standardized test for certain positions in the company. When it is time to review performance, this is all done against a set of standards, which in effect is another standardized test.

Sorry if I am a bit over the top here, but I enjoy your blog and comments and find many good ideas in them - yet I find your position in this case to be a bit less anarchist than it could be.

For example, a more recent post of yours talks about the green power savings of the DMX-3 and DMX-4, and this is great - I have no doubt about what you say. But how did you come up with these numbers and values (do I hear standardized testing)? Are they 100% accurate in all cases? For example, were the power consumption tests run with the storage system driving all drives at 100% random IOP load, or with the system idle? There are some variables here that could produce different results, but that does not invalidate your numbers, nor the fact that they are useful in assisting someone to make a decision.

Just as performance tests should not be the sole criterion of a purchase, neither should power requirements. The final decision should be made by someone who can exercise reasonable judgement based not only on the results of the tests put in front of them, but on their own life experience - which is just another series of tests that they and we are constantly running, from the time we wake up in our house that was inspected and tested, eat our tested and inspected breakfast, drive to work in our tested vehicle, and work with our inspected and tested co-workers as we test and inspect and develop our products.

It's almost enough to make me want to go back to bed and pull the covers over my head - but only after I check to see that they have been properly inspected to meet the fire safety regulations.

Love the blog and thanks for your previous reply.

the storage anarchist

Somehow I still don't think comparing Performance Tests to Safety Tests or Aptitude Tests is appropriate.

With a Safety Test or an Aptitude test, the maximum score is known, and from that "Good Enough" can be determined.

But with the SPC, what's the maximum score? Is a "good" score necessarily the highest? Is today's "good" score still "good" next month/year when somebody beats it?

Should we even care what "best" is if "good enough" is all we need or can afford? And heck, is "best" at a $3M configuration necessarily "best" at a $50K configuration?

And what's "good enough" anyway - colleges don't always get the people who score 100% on the tests...they accept the "top XX%" (however many necessary to fill the seats and the coffers). But how do we know what a "good enough" score on the SPC is?

SPC isn't an aptitude test, an efficiency test, or a pass/fail safety test. It's a performance test - it measures SPEED: how fast a configuration can accomplish an arbitrary set of tasks (which virtually nobody can understand, unless you're a performance guru in your own right). All of your comparisons are totally unrelated to the topic of quantifying performance.

Fun, interesting, and over-the-top, indeed. But totally off-topic.

But I WILL agree with you on one thing. There are two kinds of people in the world - those that think SPC is a meaningful and important measurement of performance, and those that know better :)

I am unabashedly an employee of EMC, but the opinions expressed here are entirely my own. I am a blogger who works at EMC, not an EMC blogger. This is my blog, and not EMC's. Content published here is not read or approved in advance by EMC and does not necessarily reflect the views and opinions of EMC.
