
Got 10GbE working in the lab–first good results

I’ve done a couple of posts recently on the IBM RackSwitch G8124 10GbE switches I picked up. While I have a few more posts to come covering the settings I finally got working and how I figured them out, a few people have asked how well it’s all performing. So here’s a very quick summary of where I’m at, and some results…

What is configured:

  • 4x ESXi hosts running ESXi v5.5 U2 on a 4-node Dell C6100
  • Each node uses a Dell X53DF dual-port 10GbE mezzanine card (with mounting dremeled in, thanks to a DCS case)
  • 2x IBM RackSwitch G8124 10GbE switches
  • 1x Dell R510 running Windows Server 2012 R2 and StarWind SAN v8, with both an SSD+HDD VOL and a 20GB RAMDisk-based VOL, using a BCM57810 dual-port 10GbE NIC

Results:

IOMeter against the RAMDisk VOL, configured with 4 workers, 64 threads each, 4K 50% Read / 50% Write, 100% Random:

[Image: IOMeter results against the RAMDisk VOL]

StarWind side:

[Image: StarWind performance view]

Shows about 32,000 IOPS
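
For context, here’s a quick back-of-the-envelope check on what roughly 32,000 IOPS at a 4K transfer size means in bandwidth terms. This is just the arithmetic on the figures above, not an additional measurement:

```python
# Rough sanity check: what ~32,000 IOPS of 4K I/O means in bandwidth terms.
iops = 32_000           # approximate IOPS reported by StarWind for the RAMDisk VOL
block_bytes = 4 * 1024  # 4K transfer size used in the IOMeter test

throughput_bytes = iops * block_bytes
print(f"~{throughput_bytes / 1e6:.0f} MB/s of data moved")       # ~131 MB/s
print(f"~{throughput_bytes * 8 / 1e9:.2f} Gbit/s on the wire")   # ~1.05 Gbit/s, before iSCSI/TCP overhead
```

In other words, the small-block random test is mostly exercising IOPS and latency; it only pushes around a tenth of what a single 10GbE link can carry, which is why a larger-block sequential run is a better indicator of raw link throughput.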

And an Atto Bench32 run:

[Image: ATTO Bench32 results]

Those numbers seem a little high.

I’ll post more details once I’ve had some sleep; I had to get something out because I was excited. :)

Soon to come are some details on the switches, specifically the iSCSI configuration with no LACP other than for inter-switch traffic over the ISL/VLAG ports, as well as a “First Time, Quick and Dirty Setup for StarWind v8”. I needed something in the lab that could actually DO 10GbE, and had to use SSD and/or RAM backing to give it enough ‘go’ to see whether the 10GbE was working at all.
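
To illustrate why the SSD/RAM backing mattered, here’s a rough comparison of 10GbE line rate against ballpark sequential throughput for common backing stores. The per-device numbers below are generic rule-of-thumb figures, not measurements from this lab:

```python
# Why HDDs alone can't show whether a 10GbE link is really working.
# NOTE: per-device figures are generic ballpark numbers, not measurements from this lab.
LINE_RATE_MB_S = 10_000 / 8  # 10 Gbit/s ~= 1250 MB/s, before protocol overhead

backing_stores = {
    "single 7.2K HDD, sequential": 150,    # ballpark
    "SATA SSD, sequential":        500,    # ballpark
    "RAM disk":                    5000,   # ballpark, memory-speed
}

for name, mb_s in backing_stores.items():
    pct = min(mb_s / LINE_RATE_MB_S * 100, 100)
    print(f"{name:28s} ~{mb_s:4d} MB/s  (~{pct:.0f}% of one 10GbE link)")
```

A single spindle tops out well below line rate, so a 10GbE link fed only by HDDs never looks any different from a 1GbE one; the RAM disk (and to a lesser extent the SSD) is what makes a healthy, saturated link distinguishable from a misbehaving one.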

I wonder what these will look like with some PernixData FVP as well…

UPDATED – 6/10/2015 – I’ve been asked for photos of the work needed to Dremel in the 10GbE Mezz cards on the C6100 server – and have done so!  https://vnetwise.wordpress.com/2015/06/11/modifying-the-dell-c6100-for-10gbe-mezz-cards/

  1. Ben
    November 17, 2014 at 7:53 PM

    Hey, which revision of the C6100 do you have? I’ve got the XS23-TY3 and I recently came across some X53DF mezzanine boards but it doesn’t seem like they fit without modification. It seems like the cage above/next to the I/O board is in the way of the ports. Is that why you had to dremel?

    • November 17, 2014 at 7:57 PM

That’s why I had to dremel ;). It’s worth it.

      • Ben
        November 17, 2014 at 8:07 PM

        You, sir, are a gentleman and a scholar! Much appreciated; breaking out my dremel now…

      • November 17, 2014 at 8:16 PM

Did you pay more than $90 for the NICs? They’re getting cheaper now!

  2. Ben
    November 19, 2014 at 11:07 PM

    Thanks again. Whipped out the dremel after the comments a couple nights ago and got the 10gb card going. The node I dropped it into was already an ESXi host. I haven’t had any time at all to do any real benchmarks or tweaking but connectivity is good.

    I paid $150 shipped for two of them – I plan on getting two of the Infiniband mezzanines to play around with in the other two nodes.

  3. November 19, 2014 at 11:09 PM

That’s a good price, and very hard to beat for a dual-port Intel 10GbE card. I’d like the Infiniband as well, but it’s not what I see in the field, and I wanted my lab to reflect what I work with. That said, it sure would be nice to scoff and be able to say “you ONLY run 10GbE? How quaint…”. Perhaps someday; for now, I still have to properly mount the 10GbE switches!

  4. Eugene
    June 10, 2015 at 4:17 PM

Hello, do you have a picture of your dremel mod? I have an XS23-TY3 and I’d like to know what I need to do if I buy the X53DF mezzanine board.
    Thank you in advance!

  5. Eugene
    June 14, 2015 at 2:18 PM

Thank you for this article! Now I understand what I need. It should be pretty easy.

  6. Javier
    June 20, 2015 at 10:02 PM

Nice post! What type or model of cable do you need for these cards?

