Configuring Networking for Nimble-vSphere iSCSI

One of my last tasks for 2014 was integrating a new Nimble Storage array into our environment. This was the first one I'd encountered, and since I hadn't been able to take the free one-day Nimble Installation and Operation Professional (NIOP) course they provide, I was left feeling my way through with great help from their documentation. I only ended up calling support once, to resolve a bug related to upgrading from version 2.14 of the Nimble OS. On the network side our datacenter is powered by Cisco Nexus 3000 series switches, also a recent addition for us. These allowed us to keep our existing Cat6 copper infrastructure while increasing our bandwidth to 10 GbE. In this post I'm going to document some of the setup required to meet the best practices outlined in Nimble's Networking Best Practices Guide when setting up your system with redundant NX-OS switches.

Cabling and VLAN Management

All Nimble arrays come with redundant controllers, each with up to 2 1 GbE management ports and up to 4 10 GbE data ports, in your choice of copper, SFP+, or Fibre Channel. In our case we chose 2 copper ports per controller, making for a grand total of 8 switch connections required. While it is possible to combine your management and data networks and not use the management ports at all, this is not recommended; further, each controller's ports should be spread across redundant switches as shown in the diagrams below.

In addition to the cabling, we need to configure separate VLANs for the management and data network traffic and assign the relevant ports. For our example here we'll use VLAN 20 for management traffic and VLAN 40 for data. On each of our switches, ports 1 and 2 will be assigned to management and ports 3 and 4 to data.

Configure VLANs & Interfaces for Separate Management and Data Networks

switch1# conf t
switch1(config)# vlan 20
switch1(config-vlan)# name Storage-Management
switch1(config-vlan)# vlan 40
switch1(config-vlan)# name Storage-Data
switch1(config-vlan)# int e1/1-2
switch1(config-if-range)# description storage management
switch1(config-if-range)# switchport mode access
switch1(config-if-range)# switchport access vlan 20
switch1(config-if-range)# int e1/3-4
switch1(config-if-range)# description storage data
switch1(config-if-range)# switchport mode access
switch1(config-if-range)# switchport access vlan 40
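
Before moving on it's worth a quick sanity check that the VLANs exist and the ports landed where you expect. A sketch of the commands I'd use (your interface numbering may of course differ):

switch1# show vlan brief
switch1# show interface e1/1-4 switchport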

Configure Flow Control (per interface setting)

switch1# conf t
switch1(config)# int e1/1-4
switch1(config-if-range)# flowcontrol receive on
switch1(config-if-range)# flowcontrol send on
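
Flow control only helps if both ends negotiate it, so after enabling it check that the interfaces actually show it operational. A sketch of the check, assuming the same interface range as above:

switch1# show interface e1/1-4 flowcontrol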

Jumbo Frames (global configuration)

switch1# conf t
switch1(config)# policy-map type network-qos jumbo
switch1(config-pmap-nq)# class type network-qos class-default
switch1(config-pmap-nq-c)# mtu 9216
switch1(config-pmap-nq-c)# exit
switch1(config-pmap-nq)# exit
switch1(config)# system qos
switch1(config-sys-qos)# service-policy type network-qos jumbo
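
One gotcha worth knowing: on some Nexus platforms, show interface will continue to report an MTU of 1500 even after the network-qos policy is applied, since the jumbo MTU is set system-wide rather than per interface. To confirm the 9216-byte MTU took effect, check the queuing information instead (interface numbering assumed for this example):

switch1# show queuing interface e1/1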

Nimble's guide also calls for unicast storm control to be disabled on the storage ports. On NX-OS it is disabled by default, so no configuration is needed here.
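
If you want to confirm nothing has enabled it on your storage ports, the defaults only show up when you dump the full running config. A sketch, assuming the same port range as above:

switch1# show running-config interface e1/1-4 all | include storm-control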

And really that's it. With this configuration you should be able to verify connectivity by using vmkping from your ESXi hosts to perform a jumbo-frame ping to either the data or management IPs you've configured. As long as that succeeds, connectivity should be good.
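
A sketch of that check from an ESXi host (the vmkernel interface vmk1 and the target IP here are assumptions for this example). Note the payload is 8972 bytes rather than 9000, to leave room for the 28 bytes of IP and ICMP headers, and -d sets the don't-fragment bit so the ping fails loudly if jumbo frames aren't working end to end:

~ # vmkping -d -s 8972 -I vmk1 10.0.40.10

If this ping fails but a plain vmkping to the same address succeeds, you almost certainly have an MTU mismatch somewhere along the path.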