EMC conclusively proves that VNX bottlenecks NAS performance

A bit of a controversial title, no?

Allow me to elaborate.

EMC posted a new SPEC SFS result as part of a marketing stunt (which is working, look at what I’m doing – I’m talking about them, if only to clear the air).

In simple terms, EMC got almost 500,000 SPEC SFS NFS IOPS (not to be confused with, say, block-based SPC-1 IOPS) with the following configuration:

  1. Four (4) totally separate VNX arrays, each loaded with SSD storage, utterly unaware of each other (8 total controllers since each box has 2)
  2. Five (5) Celerra VG8 NAS heads/gateways (1 spare), one on top of each VNX box
  3. 2 Control Stations
  4. 8 exported filesystems (2 per VG8 head/VNX system)
  5. Multiple pools of storage (at least 1 per VG8) – not shared among the various boxes, no data mobility between boxes
  6. Only 60TB NAS space with RAID5 (or 15TB per box)

Now, this post is not about whether this configuration is unrealistic and expensive (almost nobody would pay $6m for merely 60TB of NAS, not today). I get it that EMC is trying to publish the best possible number by loading a bunch of separate arrays with SSD. It’s OK as long as everyone understands the details.

My beef has to do with how it’s marketed.

EMC is very vague about the configuration unless you look at the actual SPEC website. In the marketing materials they just mention VNX, as in “The EMC VNX performed at 497,623 SPECsfs2008_nfs.v3 operations per second”. Kinda like saying it’s OK to take three 5-year-olds and a 6-year-old to a bar because their ages add up to 21.

No – the far more accurate statement is “four separate VNXs, working independently and utterly unaware of each other, did 124,405 SPECsfs2008_nfs.v3 operations per second each”.

All EMC did was add up the result of 4 boxes.

Heck, that’s easy to do!
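To see just how easy, the arithmetic fits in a couple of lines. A throwaway sketch (plain Python, using only the numbers already quoted above):

```python
# The headline number and the per-array contribution – nothing but division.
total_ops = 497_623   # published SPECsfs2008_nfs.v3 result
arrays = 4            # completely independent VNX arrays behind it

print(total_ops / arrays)   # 124405.75 – the "124,405 each" from above
```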

NetApp already has a result for the 6240 (just 2 controllers doing a respectable 190,675 SPEC NFS ops while taking care of NAS and RAID all at once, since they’re actually unified – no cornucopia of boxes there) without using Solid State Drives (common SAS drives plus a large cache were used instead – a standard, realistic config we sell every day, not a “lab queen”).

If all we’re doing is adding up the results of different boxes, simply multiply this by 4 (plus we do have Cluster-Mode for NAS, so it would count as a single clustered system with failover etc. among the nodes) and you end up with the following result:

  1. 762,700 SPEC SFS NFS operations
  2. 8 exported filesystems
  3. 343TB usable with RAID-DP (thousands of times more resilient than RAID5)

So, which one do you think is the better deal? More speed, 343TB and better protection, or less speed, 60TB and far less protection? 🙂

Customers curious about other systems can do the same multiplication trick for other configs – the sky is the limit!
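If anyone wants to play along at home, the “trick” is equally trivial. Another throwaway Python sketch – the only real number in it is the published 6240 result:

```python
# The "multiplication trick": take any single-system result and multiply by
# however many unrelated boxes you feel like adding up.
def aggregate(single_system_ops, boxes):
    return single_system_ops * boxes

fas6240_ops = 190_675             # published 2-controller FAS6240 result
print(aggregate(fas6240_ops, 4))  # 762700 – the figure in the list above
```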

The other, more serious part – and what prompted me to title the post the way I did – is that EMC’s own benchmarking made it pretty clear that the VNX is the bottleneck: each one can only really support a single VG8 head at top speed, which is why 4 separate VNX systems were needed to reach the final result. So the fact that a VNX can have up to 8 Celerra heads on top of it means nothing, since the back end is your limiting factor. You might as well stick to a dual-head VG8 config (1 active, 1 passive), since that’s all a single VNX can comfortably drive (otherwise, why benchmark it that way?).

But with only 1 active NAS head you’d be limited to just 256TB max NAS capacity, since that’s how much total space a Celerra head can address as of the time of this writing. Which is probably enough for most people.

I wonder if the NAS heads that can be bought as a package with VNX are slower than VG8 heads, and by how much. You see, most people buying the VNX will be getting the NAS heads that can be packaged with it since it’s cheaper that way. How fast does that go? I’m sure customers would like to know, since that’s what they will typically buy.

I also wonder how fast it would be with RAID6.

Here’s a novel idea: benchmark what customers will actually buy!

So apples-to-apples comparisons can become easier instead of something like this:

[Photo: two very different apples, side by side]

For the curious: on the left you see an “Autumn Glory” Malus Floribunda (miniature apple). Photo courtesy of John Fullbright.

D


24 Replies to “EMC conclusively proves that VNX bottlenecks NAS performance”

  1. Hi D –

    I scored a 34 on my ACT exam. The first time I took it, I got an 18, and the second time, I got a 16. Do you think I’ll be accepted into some decent colleges?

    Great post!

    thanks,
    paul

  2. D –

    I am 9 feet tall – 6 feet up and 3 feet around. That means, in EMC parlance, I am going to make millions in the NBA.

    It is sad that EMC takes our industry to P.T. Barnum levels: “You can fool some of the people some of the time”.

    – Mike

      1. Yes, here’s the problem with the way EMC did it:

        They are acting as if the back end doesn’t matter.

        Guess what – your back end – you know, your disks that store the stuff and do RAID etc – kinda DOES matter 🙂

        Your speed in the EMC world comes from 2 places: the NAS heads and the back end.

        Which is why I am complaining about it. If you have multiple back ends that have no knowledge of each other, does that count as a single “system”?

        D

  3. @ Jim – at NetApp we currently only benchmark realistic configurations. Look at all current submissions – totally normal systems that people buy all the time. Heck, the very good score for CIFS on a 3210 is done with SATA… hardly an expensive box and something people really do buy for NAS purposes (good amount of space and performance, inexpensive controller).

    A customer looking at NAS and looking at the 3210 result can get an idea of what a “real” config performs like.

    The EMC result is meaningless since nobody would ever buy it as configured.

    Oh, and the 1-million ops NetApp 24-node GX benchmark of old you linked to? People DO buy stuff like that from NetApp for scale out purposes, so that was utterly legit at the time it was written – several customers have that exact box.

    D

  4. Hey Dimitris,

    I have a 15 and 11 year old. I’ve just kicked them out of the house since they are 26 years old and should be living on their own by their age.

    Nice job on pointing out the EMC Jibba-Jabba! Can’t wait to see what’s in store with SPC results.

    Calvin (@HPStorageGuy)

  5. At some point you want benchmarks. At 2 points to be precise:
    1) I want to know what I will buy before I buy it (head-to-head with the competitors)
    2) I want to stress it in my environment once bought (and not yet in use!)

    So benchmarks are not a wild idea, as some suggest (I’ll try not to mention EMC and stay general), but it’s the parameters you attach to those results that count. I don’t want to know how many IOPS you can do in one configuration; I want to know the $/IOP and $/GB for an array that provides that amount of performance. THAT’s a benchmark!

  6. I think the FUD from NETAPP or EMC is best compared apples to apples. Has this been done? Compare a 3000 series vs. a VNX with the same amount of drives, cache, and configuration. Make a real and equal test and post the results. Do block- and file-based tests: reads, writes, OLTP loads, etc. You can even add a Hitachi box to have three baselines. Once the numbers are recorded and really benchmarked, then post real results.

    I don’t think that it is fair to make comments based on Marketing FUDs.

    I have seen AMS beat NETAPP and EMC, EMC beat NETAPP and AMS, and NETAPP beat EMC and AMS.

    just two cents….

  7. Thanks Tony, and unlike others I won’t give you grief for plugging your benchmark on my site. I respect the number, the only nit is the latency/ORT, which is about 3x the NetApp or EMC results.

    At least you guys configured a system with a bunch of usable space. However, at RAID5 how reliable is it with that many drives? Again, NetApp’s default protection is thousands of times better. Maybe test next time with R6?

    If you want you can provide me the cost and I can do the same analysis I did here:

    https://recoverymonkey.org/2011/02/28/examining-value-for-money-regarding-the-spec-benchmarks/
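
    The analysis itself is nothing fancy – just divide the price by what you get. A rough sketch using the ballpark figures from the post above (the ~$6m is my estimate, not a quote, so treat the output as illustrative only):

    ```python
    # Rough value-for-money math: divide price by what you actually get.
    price_usd    = 6_000_000   # ballpark estimate from the post, not a quote
    spec_sfs_ops = 497_623     # published result
    usable_tb    = 60          # usable NAS capacity as configured

    print(f"$ per SPECsfs2008 op: {price_usd / spec_sfs_ops:.2f}")  # ~12.06
    print(f"$ per usable TB:      {price_usd / usable_tb:,.0f}")    # 100,000
    ```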

    D

  8. Dimitris,

    Looks like you struggle to read SPEC reports.
    In this test there is only ONE VG8 gateway with a total of 5 blades.
    All 497,623 SPECsfs2008_nfs.v3 operations are handled by this single VG8 gateway.

    In the press it is compared, e.g., with an IBM 16-node cluster, with the IBM cluster having 3.3 terabytes of cache – also a lab queen.

    And it’s about benchmarking file serving, not back-end storage.

    http://www.spec.org/sfs2008/results/res2011q1/sfs2008-20110207-00177.txt

    1. Hi Johannes,

      Actually I know how to read SPEC reports just fine.

      I just never got the memo that a storage system has nothing to do with the storage itself – which is what you seem to be suggesting.

      You see, what if a mythical NAS gateway existed that needed 1000 VNX boxes behind it to provide the full performance out of a single NAS blade?

      In order for someone to get that level of performance, they’d need to actually buy 1000 VNX boxes to store the data, because the data needs to actually be stored last I checked.

      I didn’t like EMC’s marketing, telling people that the VNX is capable of such performance. The wording is such that people think single VNX. I’ve had several customers show me the benchmark results and when I point out the configuration they were actually upset, feeling they were lied to.

      The IBM cluster was a single system with multiple nodes (designed to work like that in general) and it was even presenting a single filesystem to the outside world. It’s scale-out storage, similar to Isilon.

      Yes, SPEC SFS checks the file serving aspects of a storage system whereas SPC-1 the block serving ones.

      In the end, they both test a system that has some kind of RAID and disks and some method to either present NFS or FC to the outside world. I’ve never heard anyone say “well, but SPC-1 only tests the block aspect of things, so the back-end doesn’t matter”. It always matters.

      EMC’s system just happened to be 4 separate VNX boxes in the back end with 5 Celerra nodes (4 active) on top to provide the NFS duties, since the storage itself can’t do it (not unified).

      Not sure what your affiliation is but if you’re a customer, and you’re looking at buying fast NAS, don’t you want to know how to extrapolate the performance you’d get using a benchmarked configuration as your base? And the cost?

      Think about that one.

      D
