Friday, May 14, 2010

Cisco UCS – Part 1

In this post I’m going to take a look at the components of a UCS deployment and to that end I’m going to break this into 3 separate blogs.

  1. The first part will cover the hardware components
  2. The second part will cover the software components, and
  3. Finally we’ll look at putting it all together and some of the points to consider in doing that

In this post I’ll deal with part 1 (hardware), drawing together all the references I’ve found since I started looking into UCS (mid-2009), along with everything I’ve gathered from talking to knowledgeable people inside and outside Cisco via various sources (in particular Twitter and Google Wave).

I’ve put links to the documents, videos and 3-D interactive views that cover the components into this post. By including those three elements (where possible) I think you’ll get a better picture of the components and their relationship to each other.

Cisco UCS is an example of deploying FCoE at the server edge as mentioned here and is the first step in FCoE spreading to the data center core.

Hardware components of UCS

So starting with the hardware, the components that make up a UCS deployment are (see the diagram below) –

  • CNA (Converged Network Adapters)
  • Blades (B200/B250 M1)
  • Chassis (5108)
  • Fabric Extender (aka IOM, 2100 series)
  • Fabric Interconnects (6120/6140 models)
  • Expansion Modules (SAN/LAN – For data center connectivity)


Please open the interactive 3-D Model (here) in another window as you go through this section.

CNA (Converged Network Adapters)

In previous posts we mentioned that unified fabric is part of Cisco’s Data Center 3.0 strategy. One of the core elements of this strategy is the CNA (Converged Network Adapter). It enables previously disparate technologies (FC and LAN) to run on the same physical card in the blade (or server), out to an upstream switch over the same cable (in UCS the upstream switch is the Fabric Interconnect), operating at 10GigE speeds.

Not only does this allow the fabrics (FC and LAN) to be unified, it also allows consolidation of a number of 10/100/1000 LAN connections into a single 10GigE pipe.

There are three types of CNA which can be deployed in the blades:

  • Cisco UCS M81KR dual-port 10GigE Virtual Interface Card

This is very much a next-generation adapter: not only does it support Cisco’s tenet of unified fabric at 10GigE speeds, it also provides a layer of virtualisation of the physical adapter to the blade in which it sits. It has 2*10GigE ports that connect via the chassis backplane to the fabric extenders in the chassis (one port to each fabric extender) for uplink purposes.

Because the physical card can be virtualised, it can present up to 128 PCIe virtual interfaces to the blade (58 in practice, due to Fabric Interconnect restrictions), either as vNICs (58, or 56 if vHBAs are used) or as vHBAs (2). These virtual interfaces can be dynamically configured, which is particularly useful for VMware environments.

So for example, if your blade has one of these cards you can present a total of 58 virtual PCIe devices (LAN NICs or SAN HBAs) to a bare-metal OS or to a hypervisor. This is very useful if you want separation of devices at the OS/hypervisor level without the OS or hypervisor ever knowing that the virtual PCIe interfaces actually come from a single physical adapter in the blade. For example, I could present four vNICs (one for backup, one for management and two for user access) to a bare-metal OS without it knowing that in reality they all come from one physical card in the blade.
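To make the numbers above a bit more concrete, here is a minimal Python sketch of the interface budget (the 58-interface ceiling and the 2-vHBA maximum come from the text; the helper function is just illustrative, not anything UCSM exposes):

```python
# A minimal sketch of the M81KR interface budget described above.
# Assumes the 58-interface ceiling imposed by the Fabric Interconnects
# (the card itself supports up to 128) and a maximum of 2 vHBAs.

MAX_VIRTUAL_INTERFACES = 58  # practical per-blade limit noted above

def available_vnics(vhba_count: int) -> int:
    """Return how many vNICs remain once vHBAs are carved out."""
    if not 0 <= vhba_count <= 2:
        raise ValueError("the M81KR presents at most 2 vHBAs")
    return MAX_VIRTUAL_INTERFACES - vhba_count

print(available_vnics(2))  # 56 vNICs alongside 2 vHBAs
print(available_vnics(0))  # 58 vNICs with no vHBAs
```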

Link - here

  • Qlogic/Emulex CNA

These are converged FCoE network adapters which have 2*10GigE ports from the blade, one to each Fabric Extender (IOM) via the chassis backplane for uplink. The card presents (virtually) 2*4Gb FC ports (vHBAs) and 2*10GigE LAN ports (vNICs) down to the OS on the blade.

Emulex M71KR-E Link - here

Qlogic M71KR-Q Link - here

  • Cisco UCS 82598KR-CI 10GigE Adapter

This is a converged FCoE network adapter which has 2*10GigE ports from the blade, one to each Fabric Extender (IOM) via the chassis backplane for uplink, and presents two LAN ports only down to the OS on the blade (designed for low latency: two ports up, two ports down).

Link - here

Blade Servers

There are 2 blade servers that are presently available for deployment into a UCS chassis.

  • B200 M1

This is a half-width server with two Intel Xeon 5500 (Nehalem) sockets and 12 DDR3 DIMM slots, giving a maximum of 96GB of memory. Using any of the CNAs you get 20Gbps of I/O (10Gbps per port) to the Fabric Extenders in the chassis for uplink. This blade takes one dual-port CNA. It also supports two optional SAS drives.

In a blade chassis you could fit 8 of these blades.

Link - here


  • B250 M1

This is a full-width server with two Intel Xeon 5500 (Nehalem) sockets and 48 DDR3 DIMM slots, giving a maximum of 384GB of memory. It supports two dual-port CNAs, meaning 40Gbps of I/O (10Gbps per port), and two optional SAS drives.

It is worth noting that one of the leading features of this blade is the massive increase in memory compared with other blades. Standard blades normally have 9 slots per CPU, so other Xeon 5500 blades have 18 slots (2 CPUs) in total, which with 8GB DIMMs gives a maximum of 144GB. The average memory deployed in most blades today is around 48GB; however, with increasingly large virtualisation projects, the more memory you have the better, as VMs tend to be memory hungry.

What Cisco have done is add 30 more memory slots, making a total of 48 in the server, and present these to an ASIC (Application Specific Integrated Circuit) that sits between the slots and the memory controller. The ASIC presents every group of four 8GB DIMMs to the memory controller as a single 32GB DIMM. So with 48 slots that is (48/4) * 32GB DIMMs presented to the memory controller, which is 384GB in the blade.

An additional benefit of this (aside from the potential for more VMs) is that even if you don’t require 384GB in the server, you can get 192GB by using 4GB DIMMs and thereby achieve a higher memory density than most other blades at a lower cost point.
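As a quick illustration of the memory arithmetic, here is a small Python sketch (slot counts and the 4-to-1 grouping come from the text above; the ASIC behaviour is simplified and the helper is made up for illustration):

```python
# A rough sketch of the B250 M1 memory arithmetic described above.
# Slot count and 4-to-1 grouping come from the text; the ASIC behaviour
# is heavily simplified and the helper name is just illustrative.

SLOTS = 48           # physical DIMM slots on the B250 M1
SLOTS_PER_GROUP = 4  # the ASIC presents each group of 4 DIMMs as one logical DIMM

def total_memory_gb(dimm_size_gb: int) -> int:
    """Total memory when every slot is populated with DIMMs of one size."""
    logical_dimms = SLOTS // SLOTS_PER_GROUP                # 12 logical DIMMs
    logical_dimm_size_gb = dimm_size_gb * SLOTS_PER_GROUP   # e.g. 4 x 8GB -> 32GB
    return logical_dimms * logical_dimm_size_gb

print(total_memory_gb(8))  # 384 (GB) with 8GB DIMMs
print(total_memory_gb(4))  # 192 (GB) with 4GB DIMMs
```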

In a blade chassis you can fit 4 of these blades.

Link Information / Video- here


Blade Chassis UCS 5108

The blade chassis is (as described by Cisco) a crucial building block, as it houses most of the components in a UCS deployment (blades, CNAs and Fabric Extenders). It is a 6U chassis with the ability to house 8 half-width blades (B200 M1), 4 full-width blades (B250 M1) or a combination of the two. It has fewer parts than other blade chassis because the brains/control of the UCS system lies upstream, outside of the chassis, in the fabric interconnects.

This means it requires little management and is more energy efficient: the unified fabric (FC and LAN on the same cable) means less cabling, and with fewer parts there is less power draw (there are no chassis switches/modules as in a traditional blade chassis). The chassis has 8 fans and 4 power supplies, houses 2 Fabric Extenders, and its backplane is 63% open for better airflow.

Link Information / Video - here

Additional Video - here


Fabric Extender (2100)

The fabric extender (aka IOM, Input/Output Module) is one of the new and innovative elements of Cisco UCS. In a traditional blade chassis you would have interconnect modules for Ethernet, InfiniBand, SAS or FC; these modules allow the chassis to be connected to upstream devices running that protocol. The fabric extenders also sit within the chassis, but they differ in that they run only one unified fabric (FCoE) and they do no switching, unlike traditional chassis interconnect modules. They are an extension of the fabric interconnect (the upstream FCoE switch, which physically sits outside of the chassis, i.e. ToR) to which they are connected and in which all the management takes place for the multiple chassis connected to it.

They have been described as a distributed line card: they allow control of the chassis/blades/service profiles to be done from the fabric interconnect. This also means that a UCS system scales with very little effort: connect the new chassis (via its fabric extenders) to the fabric interconnects, acknowledge the new chassis in UCSM, and an inventory of the chassis will automatically take place, after which it is ready to use.

The chassis (5108) supports 2 fabric extenders; each fabric extender has 8 internal 10Gig ports (downlinks) that connect to each of the 8 blade slots and 4 external 10Gig ports (uplinks) that connect up to the Fabric Interconnect. Given that there is more bandwidth within the chassis (16*10GigE) than uplink bandwidth (8*10GigE), that brings us to the question of oversubscription, which I’ll cover in part 3 of this series.
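As a taster for that discussion, here is a quick Python sketch of the bandwidth figures above (it assumes all four uplinks per fabric extender are cabled and every blade pushes a full 10GigE, which is a worst-case assumption rather than a typical design):

```python
# A quick sketch of the chassis bandwidth figures quoted above.
# Assumes all 4 uplinks per fabric extender are cabled and every blade
# can drive its full 10GigE; real designs may cable fewer uplinks.

IOM_COUNT = 2          # fabric extenders per 5108 chassis
DOWNLINKS_PER_IOM = 8  # internal 10GigE ports, one per blade slot
UPLINKS_PER_IOM = 4    # external 10GigE ports to the fabric interconnect
PORT_SPEED_GBPS = 10

server_facing_gbps = IOM_COUNT * DOWNLINKS_PER_IOM * PORT_SPEED_GBPS  # 160
uplink_gbps = IOM_COUNT * UPLINKS_PER_IOM * PORT_SPEED_GBPS           # 80

print(f"Oversubscription: {server_facing_gbps / uplink_gbps:.0f}:1")  # 2:1
```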

Link Information - here


Fabric Interconnect (6100)

The fabric interconnects are also a new and innovative element of UCS. They are the point at which management of the UCS domain occurs and the funnel through which LAN and SAN traffic enters and exits the domain. Unlike other blade designs, the management sits in the fabric interconnects, which sit outside of the individual chassis, allowing the management of multiple chassis across multiple racks in a ToR (Top of Rack) design.

As a brief overview/comparison, there are 2 models of fabric interconnect, the 6120XP and the 6140XP. The former has 20 ports, is 1U and provides 520Gbps of throughput with one expansion slot; the latter has 40 ports, is 2U and provides 1.04Tbps of throughput with two expansion slots. All fixed ports on both models are 10GigE FCoE capable and can be configured as uplink ports (to core network switches) or server ports (to blade chassis), depending on the required number of chassis to be connected and the required uplink bandwidth.
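As a rough illustration of how the fixed port count constrains a design, here is a small Python sketch (it ignores expansion module ports, and the uplink and per-chassis link counts are hypothetical design inputs rather than Cisco recommendations):

```python
# An illustrative port-budget sketch for sizing a fabric interconnect.
# Only the fixed ports listed above are counted (expansion module ports
# are ignored); the uplink and per-chassis link counts are hypothetical
# design inputs, not Cisco recommendations.

FIXED_PORTS = {"6120XP": 20, "6140XP": 40}

def max_chassis(model: str, uplinks_to_core: int, links_per_chassis: int) -> int:
    """How many chassis one fabric interconnect can serve from its fixed ports."""
    server_ports = FIXED_PORTS[model] - uplinks_to_core
    return server_ports // links_per_chassis

# e.g. a 6120XP with 4 fixed ports reserved for LAN uplink and 2 links per chassis
print(max_chassis("6120XP", uplinks_to_core=4, links_per_chassis=2))  # 8 chassis
```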

The fabric interconnects are deployed in pairs; each has an out-of-band management port, and the pair are connected together via cluster ports. The fabric interconnects and the attached chassis form a UCS domain. The fabric interconnects have 3 management IPs (one each, plus a cluster address). One fabric interconnect is active and the other is passive from a management point of view (the passive one is kept up to date via the cluster ports); however, the server and uplink ports on both the active and passive fabric interconnects are active, to allow the most throughput to the core network or SAN.

Link / Video - here

Additional video - here


Expansion Module

The expansion module fits into the fabric interconnect and is the only means by which FC traffic can be broken out to the FC-based infrastructure. If you look at the picture showing the fabric interconnects (above), the right-hand side houses the module (take a look at the 3-D model). There are 4 types of expansion module.

  • 8 port 1/2/4-Gbps FC Expansion module
  • 6 port 1/2/4/8-Gbps FC Expansion Module
  • 4 port FC + 4 port 10GigE Expansion module
  • 6 port 10GigE Expansion module

The 6120 fabric interconnect has 1 slot for the expansion module, whilst the 6140 has 2 slots for expansion modules.

The expansion module therefore gives you flexibility in extending your UCS domain (connecting more chassis) by adding additional downlink (server) connectivity on each fabric interconnect. It is worth noting that FC-based SAN connectivity is only possible via an expansion module, and the likelihood is that you will at least want SAN (FC) connectivity to the present (non-FCoE-ready) data center core.
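Purely as an illustration of weighing up those options, here is a small Python sketch that filters the module list above by the FC and 10GigE port counts you need (the port counts come from the list; the selection helper is hypothetical):

```python
# A small, purely illustrative lookup of the expansion module options
# listed above. Port counts come from the text; the selection helper is
# hypothetical and choosing a module is ultimately a design decision.

EXPANSION_MODULES = {
    "8-port 1/2/4-Gbps FC":      {"fc_ports": 8, "eth_ports": 0},
    "6-port 1/2/4/8-Gbps FC":    {"fc_ports": 6, "eth_ports": 0},
    "4-port FC + 4-port 10GigE": {"fc_ports": 4, "eth_ports": 4},
    "6-port 10GigE":             {"fc_ports": 0, "eth_ports": 6},
}

def candidates(min_fc: int, min_eth: int) -> list:
    """Modules that meet a minimum FC and 10GigE port requirement."""
    return [name for name, ports in EXPANSION_MODULES.items()
            if ports["fc_ports"] >= min_fc and ports["eth_ports"] >= min_eth]

# e.g. at least 4 FC uplinks to the existing SAN, no extra 10GigE needed
print(candidates(min_fc=4, min_eth=0))
```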

In the next post we’ll cover the software element of UCS deployment.
