Hi
We've got 2 5530s in a stack with 10 Dell R610s running Debian Squeeze with Xen, connected to them via 4x 1 Gbit links each, 2 links into each switch of the stack.
sysDescr: Ethernet Routing Switch 5530-24TFD
HW:35 FW:6.0.0.6 SW:v6.1.1.017
Initially this was set up with bonding mode 0 (round-robin distribution across the 4 NICs). The throughput was fine: we could get speeds just above 2 Gbit/s in both tx and rx with NFSv3, meaning a single IP on both ends with just one connection.
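For reference, the bonding setup on the servers looks roughly like this (a sketch of a Squeeze-era /etc/network/interfaces with the ifenslave package; the interface names and address are placeholders, not our real config):

```
# /etc/network/interfaces (sketch) - bond0 in balance-rr over four NICs
auto bond0
iface bond0 inet static
    address 10.0.0.11
    netmask 255.255.255.0
    bond-slaves eth0 eth1 eth2 eth3
    bond-mode balance-rr
    bond-miimon 100
```

balance-rr is the only bonding mode that stripes a single TCP connection across all slaves, which is why we see the >2 Gbit/s numbers from one NFS mount.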
However, although the throughput was good, the stability was not. We'd lose connectivity to the NAS randomly, from either one server or all of them. As expected (I suppose), you'd see the MAC address of the bond interface jumping between all 4 connected ports when doing show mac-address-table. My initial thought was "that's not good", and when we shut 3 of the 4 connected switchports the stability issues disappeared.
So the question is: how do we set this up for the best possible performance? From what I can understand, both MLT and LACP configs will hash based on MAC or IP, and in either case that will result in only 1 NIC getting used for a given server/NAS pair.
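To illustrate why MAC/IP hashing caps a single flow at 1 Gbit: a layer-2 style transmit hash picks the egress link from the frame's address pair, so every frame between one server and the NAS lands on the same link. A toy model (the XOR-of-last-octet scheme mirrors Linux bonding's xmit_hash_policy=layer2; the switch's exact hash may differ, and the MACs below are made up):

```python
def l2_hash(src_mac: bytes, dst_mac: bytes, n_links: int) -> int:
    """Pick an egress link by XORing the low MAC octets, mod link count."""
    return (src_mac[-1] ^ dst_mac[-1]) % n_links

server = bytes.fromhex("001122334455")  # hypothetical server MAC
nas    = bytes.fromhex("66778899aabb")  # hypothetical NAS MAC

# Every frame between this one address pair hashes to the same link:
links = {l2_hash(server, nas, 4) for _ in range(1000)}
print(links)  # a single link index, never all four
```

With 10 servers talking to one NAS, different server/NAS pairs can land on different links, so aggregate throughput spreads out; but any single server's flow to the NAS stays pinned to one 1 Gbit link.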
I really hope that someone on here can point us in a useful direction regarding what needs to be done on the switches to support something like bond mode 0 on the server side.
Regards and thanks in advance