Monday, January 14, 2013

Dividing Bandwidth of a 10 GB CNA Adapter for ESXi Networking and Storage!!


In most of my recent projects, customers have been moving towards 10G converged adapters to consolidate network and storage traffic, especially on blade server architectures.

I am writing this post to provide guidelines on how you can divide a 10GB CNA card on your ESXi server to meet both the network and the storage requirements. Before that, let’s have a look at what a 10Gig CNA is and which brands are available in the market for this technology.

A CNA, a.k.a. "Converged Network Adapter", is an I/O card in an x86 server that combines the functionality of a host bus adapter (HBA) with that of a network interface controller (NIC). In other words, it "converges" access to a storage area network and a general-purpose computer network. As simple as it sounds, it makes things simpler in the datacenter as well: instead of running separate cables from each NIC, FC HBA or iSCSI card, you can use a single cable for all of these tasks, because the CNA carries all the traffic on a single physical interface.

A number of vendors either manufacture these cards themselves or re-brand them with their own logo and custom firmware. Here are a few examples:-

- Cisco
- Dell
- HP
- Q-Logic
- Emulex
- IBM, etc.

So as a customer you have a number of choices, and it is important to choose what fits your existing infrastructure, or the new hardware if it is a greenfield site.

Let's say you bought a CNA which gives you 4 virtual ports per physical port. Let’s see how we can divide the bandwidth of each physical port amongst the virtual ports for both storage and network communication.

On the physical card, the bandwidth can be divided as shown in the figure below:-



Here, the CNA card has 2 physical ports, each with 10 Gb of bandwidth. I have further divided each physical port into 3 network cards and 1 FC HBA, so I end up with a total of 6 network cards and 2 FC HBAs per CNA card. If you like the concept of No Single Point of Failure (SPOF) and can afford another card, then you would end up with 12 NIC ports and 4 FC HBA ports per blade server.
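
As a rough illustration of the idea, here is a minimal Python sketch of one possible split of a single 10 Gb physical port. The traffic types and the Gb values below are assumptions for illustration only (they are not taken from the figure), so adjust them to your own environment.

```python
# Minimal sketch: one possible way to carve a single 10 Gb CNA port into
# 3 virtual NICs + 1 virtual FC HBA. All names and Gb values below are
# illustrative assumptions, not vendor defaults.
PORT_CAPACITY_GB = 10

per_port_split = {
    "vNIC1 - Management / vMotion": 2,      # assumed value
    "vNIC2 - Virtual Machine traffic": 4,   # assumed value
    "vNIC3 - Fault Tolerance / backup": 1,  # assumed value
    "vHBA - FCoE storage": 3,               # assumed value
}

total = sum(per_port_split.values())
assert total <= PORT_CAPACITY_GB, "Split exceeds the 10 Gb port capacity"

for function, gb in per_port_split.items():
    print(f"{function}: {gb} Gb ({gb / PORT_CAPACITY_GB:.0%} of the port)")
```

With two physical ports per card, the same split simply repeats on the second port, which is where the 6 NICs and 2 FC HBAs per card come from.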

Isn't that cool? A blade server with so many NICs! This can be used on rack servers as well, where it will also reduce the back-end cabling.

Now, a last look at how I would use these NICs and FC ports to configure the networking for the ESXi server. The diagram below shows how I would set up the networking on my ESXi host to get the best possible configuration out of the available hardware resources.



The diagram above shows how we have divided this bandwidth amongst all the required port groups. If you have 2 such cards, you will have high resiliency in your design, and the number of ports will double, providing better performance as well.
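
To make the port-group layout concrete, here is a small Python sketch of how the resulting virtual NICs could be paired per port group, with the partner NIC from the other physical port acting as standby. The vmnic numbering and port-group names are my assumptions; ESXi may enumerate your adapters differently.

```python
# Hypothetical mapping of ESXi port groups to the virtual NICs carved out of
# the two 10 Gb physical ports (vmnic0-2 from port 1, vmnic3-5 from port 2).
portgroup_teaming = {
    "Management Network": {"active": ["vmnic0"], "standby": ["vmnic3"]},
    "vMotion":            {"active": ["vmnic3"], "standby": ["vmnic0"]},
    "VM Network":         {"active": ["vmnic1", "vmnic4"], "standby": []},
    "Fault Tolerance":    {"active": ["vmnic2"], "standby": ["vmnic5"]},
}

for pg, team in portgroup_teaming.items():
    print(f"{pg:<20} active={team['active']} standby={team['standby']}")
```

Splitting the active and standby uplinks across the two physical ports is what keeps each port group alive if one port, or the cable behind it, fails.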

Remember, you are free to adjust the bandwidth of the virtual NICs and virtual FC HBAs based on how much you want for each of your port groups. The values I have mentioned above are a guideline and can be used as-is, since they fit most requirements.
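
If you do change the values, a quick sanity check like the one below keeps a revised split within the 10 Gb port capacity; the adjusted numbers are, again, just assumed examples.

```python
def validate_split(split, capacity_gb=10):
    """Return True if the per-port allocations fit within the port capacity."""
    return sum(split.values()) <= capacity_gb

# Hypothetical adjustment: give VM traffic more headroom and trim storage.
adjusted = {"Management / vMotion": 2, "VM traffic": 5, "FT / backup": 1, "FCoE storage": 2}
print(validate_split(adjusted))  # True -> still fits within the 10 Gb port
```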

Hope this helps you design your network and storage with the 10GB CNA adapter without issues.

**************************************************

Update - 3rd April - Have a look at the new article - Dividing Bandwidth of a 10 GB CNA Adapter for ESXi Networking and Storage using Network I/O Control (NIOC) - which talks about using Network I/O Control to do the network segregation.

4 comments:

  1. Hi Sunny,

    User of Flex Fabric here; it's a relief to see someone else posting an almost exact replica of how we chopped up our CNA config.

    We actually had a few more segregated networks that we couldn't consolidate, so we went with 4 CNA paths across 4 10/24 VC Modules.

    We had a round robin config across 4 x 4 FlexHBAs.

    Something to note though: the vMotion NICs don't necessarily need to be active/standby (sort of).

    We had dual IPs with an active/standby, standby/active type setup, as described by Duncan Epping here:

    http://www.yellow-bricks.com/2011/09/17/multiple-nic-vmotion-in-vsphere-5/

    I know this would complicate your diagram, but it beefs up the vMotion speeds when bringing down a full host.

    CNAs definitely seem the way forward (unless everyone follows the idea of VICs like UCS); it's just a pity people seem to avoid them.

    Articles like this help to increase the understanding.

    Good work.

    Kind Regards

    Dean Ravenscroft

  2. Thanks Dean, cannot agree more. A CNA gives you the option to allocate the desired resources, as shown in the article above. With the robustness of the dvSwitch in 5.1, NIOC is also a great way to do resource allocation; I will soon write about that method as well, since I implemented it at a customer site too and it was way easier that way. However, for architects who believe the control should stay with the hardware, CNA logical partitioning is the way to go.

    Regards
    Sunny

  3. Hi Sunny. Very well explained.

  4. Thanks for your effort. Crystal clear now!
