Got 10GbE working in the lab – first good results
I’ve done a couple of posts recently on the IBM RackSwitch G8124 10GbE switches I’ve picked up. I have a few more posts to come covering the settings I finally got working and how I figured them out, but in the meantime a few people have asked how well it’s all performing. So here’s a very quick summary of where I’m at, and some results…
What is configured:
- 4x ESXi hosts running v5.5 U2 on a Dell C6100 4-node chassis
- Each node uses the Dell X53DF dual 10GbE Mezzanine cards (with mounting dremeled in, thanks to a DCS case)
- 2x IBM RackSwitch G8124 10GbE switches
- 1x Dell R510 running Windows Server 2012 R2 and StarWind SAN v8, with both an SSD+HDD VOL and a 20GB RAMDisk-based VOL, using a Broadcom BCM57810 2-port 10GbE NIC
Results:
- IOMeter against the RAMDisk VOL, configured with 4 workers, 64 threads each, 4K 50% Read/50% Write, 100% Random:
StarWind side:
Shows about 32,000 IOPS
And an Atto Bench32 run:
Those numbers seem a little high.
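For context on why ~32,000 IOPS of 4K random I/O is nowhere near stressing the 10GbE links themselves, here’s a quick back-of-the-envelope calculation (my own arithmetic, not output from any of the tools above):

```python
# Back-of-the-envelope: how much bandwidth does 32,000 IOPS of 4K I/O actually use?
iops = 32_000
block_bytes = 4 * 1024  # 4K blocks

throughput_mb_s = iops * block_bytes / 1_000_000    # decimal MB/s
throughput_gbit_s = iops * block_bytes * 8 / 1e9    # payload Gbit/s on the wire

print(f"{throughput_mb_s:.0f} MB/s ≈ {throughput_gbit_s:.2f} Gbit/s")
# prints: 131 MB/s ≈ 1.05 Gbit/s
```

In other words, small random I/O like this is IOPS-bound on the storage side; it only touches about a tenth of a single 10GbE link, which is exactly why the RAMDisk VOL was needed to tell whether the network itself was behaving.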
I’ll post more details once I’ve had some sleep. I had to get something out while I was excited!
Soon to come: details on the switches and the iSCSI configuration (no LACP anywhere except for inter-switch traffic on the ISL/VLAG ports), as well as a “First Time, Quick and Dirty Setup for StarWind v8”. I needed something in the lab that could actually DO 10GbE, and had to use SSD and/or RAM to give it enough ‘go’ to see whether the 10GbE was working at all.
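Since the host-facing ports avoid LACP, the ESXi side of a setup like this would typically lean on software-iSCSI port binding for multipathing instead. A minimal sketch of that with `esxcli` follows – the adapter name (vmhba33), vmkernel interfaces (vmk1/vmk2), and target address are placeholders for illustration, not values from this lab:

```shell
# Enable the software iSCSI adapter (the vmhba name varies per host)
esxcli iscsi software set --enabled=true

# Bind one vmkernel port per 10GbE uplink so each path uses its own NIC
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2

# Point dynamic discovery at the StarWind target (placeholder address) and rescan
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=10.10.10.1
esxcli storage core adapter rescan --adapter=vmhba33
```

With both vmk ports bound, the ESXi path-selection policy (e.g. Round Robin) handles load balancing across the two switches, which is why no LACP is needed on the host ports.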
I wonder what these will look like with some PernixData FVP as well…
UPDATED – 6/10/2015 – I’ve been asked for photos of the work needed to Dremel in the 10GbE Mezz cards on the C6100 server – and have done so! https://vnetwise.wordpress.com/2015/06/11/modifying-the-dell-c6100-for-10gbe-mezz-cards/
Hey, which revision of the C6100 do you have? I’ve got the XS23-TY3 and I recently came across some X53DF mezzanine boards but it doesn’t seem like they fit without modification. It seems like the cage above/next to the I/O board is in the way of the ports. Is that why you had to dremel?
That’s why I had to dremel ;). It’s worth it.
You, sir, are a gentleman and a scholar! Much appreciated; breaking out my dremel now…
Did you pay more than $90 for the NICs? They’re getting cheaper now!
Thanks again. Whipped out the dremel after the comments a couple nights ago and got the 10gb card going. The node I dropped it into was already an ESXi host. I haven’t had any time at all to do any real benchmarks or tweaking but connectivity is good.
I paid $150 shipped for two of them – I plan on getting two of the Infiniband mezzanines to play around with in the other two nodes.
That’s a good price, and very hard to beat for an Intel 2-port 10GbE card. I’d like the Infiniband as well, but it’s not what I see in the field, and I wanted my lab to reflect what I work with. That said, it sure would be nice to scoff and be able to say “you ONLY run 10GbE? How quaint…”. Perhaps someday. I still have to properly mount the 10GbE switches!
Hello, do you have a picture of your dremel mod? I have the XS23-TY3 and I’d like to know what I need to do if I buy the X53DF mezzanine board.
Thank you in advance!
I can do you one better. I have a new C6100 that needs the mod – and I have both the Infiniband and 10GbE Mezz cards. Makes sense for me to take some photos as I do it. Probably be up on the weekend; I just need some time!
Great! Thank you! You can update your original post with photos or send them by email.
I’m asking because there are at least two different versions of the X53DF on eBay: http://www.ebay.com/itm//181751541002 and http://www.ebay.com/itm//261577721856 . They have slightly different mounting brackets, and I don’t know which one I should buy.
Eugene – check out https://vnetwise.wordpress.com/2015/06/11/modifying-the-dell-c6100-for-10gbe-mezz-cards/ As for your links – they’re both the same card, either will do the job. Good luck! Let me know how it goes…
Thank you for this article! Now I understand what I need. It should be pretty easy.
Nice post! What type or model of cable do you need for these cards?