So, I recently had a customer who wanted to enable "Jumbo Frames" on a UCS server that had the Cisco Virtual Interface Card (VIC) installed in it (this also applies to the Palo/M81KR, VIC-1240, and VIC-1280). You might also know this process as "maximizing the MTU". In this particular situation, the customer had an iSCSI appliance directly connected to the fabric interconnects (Whiptail in this case, which is not officially supported by Cisco as of this writing, but the process is the same for any iSCSI appliance, supported or unsupported). It's not the first time this has come up, so I thought I'd write it down so that everyone can benefit (including me, when I forget how I did all of this). This article will be helpful if you're using any IP-based storage such as NFS, CIFS/SMB, or iSCSI.
In Cisco UCS, we support certain storage appliances directly connected to the fabric interconnect via an Appliance Port (see supported arrays here: http://bit.ly/teL5Pb). The Appliance Port type was introduced in the 1.4(1) release of UCS Manager; prior to that release, you had to put UCS in "switch mode" to attach an appliance directly to it. As a side note, appliance ports share some characteristics with server ports: they learn MAC addresses and they have an uplink (border) port assigned to them (statically or dynamically). Because they learn MAC addresses like any normal switch port does, as soon as a device is "heard" its MAC is considered "learned" and the switch no longer needs to flood packets destined for that MAC to all ports. This is good to know because when configuring an Appliance Port, you are given the opportunity to add an "Ethernet Target Endpoint", which is the MAC address of the connected appliance. This is an optional field and is not required for most appliances (as they broadcast their MAC address when connected), but if you have an appliance that doesn't, or if you have connectivity issues, you should enter the MAC in this field (see Figure 2 below). It should be noted that appliance ports do not support connected switches at all; the port will shut down when it detects a switch on the far side.
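As a quick aside, you can see for yourself what the fabric interconnect has (or has not) learned by dropping into the NX-OS shell from an SSH session to UCS Manager. This is just a troubleshooting sketch; substitute the slot/port your appliance is actually connected to:
connect nxos a
show mac address-table interface ethernet <slot/port>
If the appliance's MAC shows up in the output, the fabric interconnect has learned it and the Ethernet Target Endpoint field is unnecessary.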
So, let's dig right in and set up an appliance port…step by step.
Step 1: QoS Configuration
- This step is only needed if the appliance uses a protocol or file system that benefits from a large MTU (Jumbo Frames), such as iSCSI or NFS. If your storage does not, you can move on to Step 2 and leave the MTU at the default Ethernet size of 1500 bytes.
- Go to the LAN tab, expand Policies, and look for the QoS Policies item.
- Right click “QoS Policies” and choose Create QoS Policy.
- Give it a name.
- Select an unused Priority from Bronze, Silver, Gold, or Platinum. I will use Bronze (most likely none of these are in use on your system, but make sure).
- Click OK to save the changes to the new policy.
- While still on the LAN tab, select “QoS System Class” under the “LAN Cloud” item.
- You will see a dialog similar to Figure 1.
Figure 1
- Change the MTU from normal to 9000 (again, I used Bronze).
- Check the Enabled box for Bronze.
- Click the Save Changes button.
- Set the MTU on the storage appliance to 9000.
I should mention here that some storage appliances will allow an MTU of 9216, and if so, you can choose that so long as you set the MTU on the UCS priority class to 9216 as well. We won't get into it in this article, but if the appliance is not directly attached and instead sits somewhere north of the fabric interconnects, you would need to match the MTU of the UCS priority class to the MTU of the switch port connected to the fabric interconnect. But that scenario would not involve Appliance Ports, as they require the array to be directly connected.
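If you want to double-check that the new MTU actually took effect, you can verify it from the NX-OS shell on the fabric interconnect. A quick sketch, with the slot/port as a placeholder for any server or appliance port:
connect nxos a
show queuing interface ethernet <slot/port>
Look for the MTU value listed under the qos-group that corresponds to the class you enabled (Bronze in my example); it should read 9000.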
If the appliance is plugged into both fabric interconnects (and it should be unless this is a lab like mine), repeat Step 2 on the opposite fabric interconnect. My suggestion is that you get one side working at a time.
Step 2: Appliance Port Configuration
- On the Equipment tab, select the fabric interconnect that the appliance is plugged into.
- Right-click the correct port from the unconfigured Ethernet ports and select “Configure as Appliance Port” (See Figure 2).
- If you're using iSCSI storage AND you want to maximize the MTU, choose from the drop-down menu the same Priority class that you configured in Step 1 above (we used Bronze in my example). Otherwise, just continue.
- Don’t use Pin Groups unless you’re very familiar with how they work and why you’re doing it.
- There should not be any need for a Network Control Policy here in this example.
- Select the correct port speed based on the speed of your storage appliance.
- Decide if the port is a trunk or access port.
Note: I should mention that if this is a trunk port connected to the appliance, the VLANs you input here are not stored in the standard VLAN database that the fabric interconnect uses for server and uplink traffic. You can see this on the LAN tab, where you will notice VLANs for the Appliances cloud as well as the LAN cloud; these entries are separate from one another. So, if you are trying to put the appliance on an existing VLAN already configured on your servers, you will need to create an identical appliance VLAN for the appliance port using the same VLAN ID you use for the server's vNIC (alternatively, you could create this VLAN ahead of time on the LAN tab under Appliances).
- Click OK to save the changes.
Optional: As explained above, if you know the MAC address of the appliance, input it in the dialog box as the Ethernet Target Endpoint (this simply pre-populates the address into the MAC address table on the FI). Most appliances will broadcast their MAC when they are connected, but it will not hurt to enter it here. My suggestion is to leave the endpoint blank and revisit it if you cannot get it to work.
Figure 2
At this point, assuming you have configured the uplink and appliance VLANs correctly and that you have enabled the chosen storage appliance VLAN within your northbound infrastructure, you should be able to ping the appliance IP address from a workstation outside the UCS system. If you cannot, check the VLAN configuration for both the fabric interconnect LAN Cloud and the Appliances Cloud (both on the LAN tab) as well as the MTU size on the appliance port and the appliance itself. If your intention is just to make the storage available within the UCS system, outside systems may not be able to reach it at all because the appliance VLANs are not accessible outside UCS.
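If you need to dig deeper, both the LAN Cloud and Appliances Cloud VLANs ultimately land in the fabric interconnect's VLAN database, so you can also confirm the appliance port's VLAN membership from the NX-OS shell. The VLAN ID and slot/port below are placeholders for your own values:
connect nxos a
show vlan id <appliance vlan id>
show interface ethernet <slot/port> brief
The appliance port should be listed among the VLAN's member ports, and the interface should show as up.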
It's now time to set up the VIC-enabled servers to access the storage. The plan is to add a vNIC on the server that is in the same VLAN as the storage. If using iSCSI, you need to match the MTU size of the vNIC to the size already configured in Step 1. The steps are as follows:
Step 3: Server Configuration (this is disruptive and will reboot the server). See Figure 3.
- Locate the desired Service Profile on the Servers tab.
- Right-click the vNICs item and choose "Create vNIC".
- Name the vNIC (e.g., "eth2").
- Fill in the dialog with all required information such as the MAC pool, the correct Fabric, and, most importantly, the VLAN that matches the appliance port created in Step 2 above.
- If using iSCSI and maximizing MTU, change the MTU of the vNIC to 9000 (vNIC MTU maximum is 9000, not 9216)
- If using iSCSI and maximizing MTU, select the correct QoS Policy you defined in Step 1
- Click OK to save the changes.
Figure 3
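Once the change is applied and the server comes back up, the new vNIC appears on the fabric interconnect as a virtual Ethernet (Veth) interface. If you like to verify things from the CLI, here is a quick way to check (fabric A shown; use whichever fabric the vNIC lives on):
connect nxos a
show interface brief | include Veth
The Veth that corresponds to the new vNIC should show as up.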
TIP: If this procedure were being done on an HP, IBM, or Dell server, you would have the additional step of going into the OS to set the MTU to match (9000). Depending on the OS/Hypervisor, this involves a registry hack, ifconfig, esxcfg-vswitch, or setting the MTU manually within the Windows adapter properties. This would be required on every server that plans to use the iSCSI appliance. With UCS and the Cisco VIC, it involves none of these because Cisco has strong integration with the OS/Hypervisor and the VIC driver will inform the installed OS/Hypervisor of the new MTU size automatically. Whatever MTU size you designate on the vNIC itself will be used by the OS/Hypervisor. So, in this case, once you create the storage vNIC and reboot the server, the MTU size will already be at the correct value of 9000. How cool is that? Regardless of what OS you install, you don't have to worry about finding the right command to set the MTU!
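For the curious (or if you find yourself on a non-VIC adapter), here is roughly what those manual steps look like. This is just a sketch; the interface name (eth2) and vSwitch name (vSwitch1) are examples, so substitute your own:
Linux:
ifconfig eth2 mtu 9000
ESX:
esxcfg-vswitch -m 9000 vSwitch1
On Windows, it's typically an Advanced property on the adapter, often called something like "Jumbo Packet", depending on the driver.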
The designated MTU can be verified as follows:
Windows:
netsh interface ipv4 show subinterfaces
Linux:
ifconfig
ESX:
esxcfg-vswitch -l
If you would like to test the end-to-end MTU, there is an easy process for that as well:
Windows:
ping -f -l 8000 <storage appliance ip address>
If you get replies, it's working. If you get "Packet needs to be fragmented but DF set", Windows is telling you that the packet is too large to pass through, since you specifically told ping not to fragment it.
Linux:
ping -M do -s 8000 <storage appliance ip address>
ESX:
vmkping -d -s 8000 <storage appliance ip address>
I use 8000 to keep it common between Windows, Linux, and ESX. The largest true payload size is 8972 bytes, which results in a 9000-byte packet once you add in the 28 bytes of IP and ICMP headers (a 20-byte IP header plus an 8-byte ICMP header). Some OSes accept a parameter of 9000 and others max out at 8972, but 8000 works on all of them and demonstrates that the MTU is most likely working as you expect it to.
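And if you would rather prove out the absolute maximum instead of the common 8000-byte payload, the same commands work at 8972:
Windows:
ping -f -l 8972 <storage appliance ip address>
Linux:
ping -M do -s 8972 <storage appliance ip address>
ESX:
vmkping -d -s 8972 <storage appliance ip address>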
Note: This article is written to the lowest common denominator. It does not involve LAN Pin Groups or vNIC Templates. Those are topics that I hope to write articles on in the future, but are covered well in our product documentation.
As always, thanks for stopping by.
-Jeff
P.S. If you’re using a different CNA than the Cisco VIC and would like specific direction on setting the MTU for it (inside the OS/Hypervisor), drop me a note below and let me know which one. I’ll do my best to dig up the instructions.