Monday, December 21, 2009

FCoE, what is it?

In my last post (Converged Infrastructure) I mentioned FCoE as one of the elements used to bring convergence to the data center. In this post I'll explain a little about what FCoE is and why it has a role in bringing convergence to the data center.

What is FCoE?

FCoE is Fibre Channel over Ethernet - the encapsulation of Fibre Channel frames inside Ethernet frames, so that Fibre Channel traffic and LAN traffic are sent over the same cables instead of over separate fibre and LAN cabling. Although this can technically be done over a 1GigE network, vendors are only providing 10GigE devices. This means a certain amount of disruption (more on this below) to the existing network architecture.
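To make the layering concrete, here is a minimal Python sketch of the idea (purely illustrative - the class and field names are made up, and it ignores the real FCoE header details such as the SOF/EOF delimiters, padding and checksums): a Fibre Channel frame simply becomes the payload of an Ethernet frame carrying the FCoE EtherType (0x8906).

```python
# Simplified illustration of FCoE layering - not a real frame encoder.
from dataclasses import dataclass

FCOE_ETHERTYPE = 0x8906  # EtherType assigned to FCoE traffic

@dataclass
class FibreChannelFrame:
    source_id: str        # FC source address (S_ID)
    destination_id: str   # FC destination address (D_ID)
    payload: bytes        # e.g. a SCSI command or its data

@dataclass
class EthernetFrame:
    dst_mac: str
    src_mac: str
    ethertype: int
    payload: object       # the encapsulated FC frame rides here

def encapsulate(fc_frame: FibreChannelFrame, src_mac: str, dst_mac: str) -> EthernetFrame:
    """Wrap an FC frame in an Ethernet frame so it can share the LAN wire."""
    return EthernetFrame(dst_mac=dst_mac, src_mac=src_mac,
                         ethertype=FCOE_ETHERTYPE, payload=fc_frame)

# The same 10GigE link can now carry both storage and ordinary LAN traffic:
storage = encapsulate(FibreChannelFrame("0x010203", "0x040506", b"SCSI WRITE ..."),
                      src_mac="00:25:b5:00:00:01", dst_mac="00:25:b5:00:00:02")
lan = EthernetFrame("00:1b:21:aa:bb:cc", "00:1b:21:dd:ee:ff", 0x0800, b"IP packet ...")
```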

There is also a requirement for what is called DCB - Data Center Bridging - because Ethernet is 'lossy' (packets can get lost and retransmitted) whilst Fibre Channel is 'lossless' (no frames dropped), and you don't want to lose/drop data being transmitted to your SAN!

In order for FC to run over Ethernet we need to ensure that the losslessness of Fibre Channel is retained. Various standards are presently being worked on by the Data Center Bridging task group to enable a low-latency, lossless Ethernet network that allows FCoE frames to be transmitted on the same bits of wire as LAN traffic.
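One of the key DCB pieces is Priority-based Flow Control (PFC, IEEE 802.1Qbb), which pauses an individual traffic class instead of dropping its frames, so the storage class can be made lossless while ordinary LAN classes carry on as normal. The toy Python sketch below is only there to illustrate that concept (the queue sizes and class names are made up); it is nothing like how the standard is implemented in switch hardware.

```python
# Toy illustration of per-priority pause: fill the storage queue and the port
# asks the sender to pause that priority only - nothing is dropped.
from collections import deque

PAUSE_THRESHOLD = 6      # hypothetical buffer level at which we signal a pause

class PriorityQueuePort:
    def __init__(self, priorities):
        self.queues = {p: deque() for p in priorities}
        self.paused = {p: False for p in priorities}

    def receive(self, priority, frame):
        q = self.queues[priority]
        q.append(frame)                      # never dropped - that is the point
        if len(q) >= PAUSE_THRESHOLD:
            self.paused[priority] = True     # "pause" signalled back to the sender
        return self.paused[priority]

    def drain(self, priority, n=1):
        q = self.queues[priority]
        for _ in range(min(n, len(q))):
            q.popleft()
        if len(q) < PAUSE_THRESHOLD:
            self.paused[priority] = False    # resume that priority only

port = PriorityQueuePort(priorities=["fcoe", "lan"])
for i in range(7):
    port.receive("fcoe", f"fc-frame-{i}")
print(port.paused)   # {'fcoe': True, 'lan': False} - LAN traffic is unaffected
```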

Why is FCoE Useful?

Consider a typical data center as depicted in the diagram below



What FCoE gives you is convergence at the adapter level within the server via a CNA (Converged Network Adapter) - a single card for both SAN and LAN connectivity - so a reduction in HBAs and NICs, which in turn means less power used by the server. It also means a reduction in switches, as SAN and LAN switches are no longer separate. So the future converged infrastructure will look something like this (not the best diagram, it is simply there to show the reduction in switches/cables).



There is potential for a 50% reduction in switches and cables in the data center, and less power required (Green I.T.) due to the reduction both in physical equipment and in the power used by servers (with fewer cards).

The convergence of the physical infrastructure takes many shapes (some mentioned here), including for example server virtualisation and/or storage virtualisation. FCoE is another (complementary) method; the other methods might bring just as much benefit, but together they can bring greater convergence to the data center.

The FCoE standard was adopted in June 2009 and details can be found here, here and here (this last entry is the PDF of the standard). There is still work to be done around DCB, however, and that is mentioned in the second entry just provided.

Many of the leading companies (for example Cisco) have FCoE at the center of their Data Center strategy and so it is critical for them that FCoE is adopted widely. It is the means by which Cisco see the convergence of the data center and it is a core part of their Unified Computing System (UCS).

We mentioned earlier that FCoE/DCB will cause some disruption in the data center due to the requirement for 10GigE (your core network may not presently be 10GigE, for example) and for DCB-capable hardware - CNAs and FCoE switches - to take advantage of FCoE/DCB. That is why adoption of the full power of FCoE/DCB will be gradual, rather than what some have termed a rip-and-replace strategy. It will start at the Access layer (server edge) and over time move through the Aggregation layer into the Network Core.

If you look at Cisco's UCS blade infrastructure, that is an example of how this can be achieved NOW - in the blade chassis FCoE is run over 10GigE ports (on the Fabric Extenders) that connect to a Fabric Interconnect (6120 or 6140), allowing Fibre Channel and IP traffic to run over the same cable from the chassis up to the access switch (the Fabric Interconnect).

The Fabric Interconnect has 10GigE ports and can also take a module with Fibre Channel ports. The Fibre Channel ports then uplink to the normal SAN switches using fibre cables, whilst the Ethernet ports uplink to the core switches using LAN cables. So you have 10GigE FCoE over a single cable at the server (chassis) edge, which branches into separate legacy cables as you approach the network core.

I attended the Data Center Of The Future event in which Cisco presented their view on the subject. This included a Q&A session, during which I asked whether FCoE was ready and the response was "yes". I then adjusted my question and asked whether I could use it from the server to the core, and was told that "some standards still needed to be ratified for that to be achieved, but FCoE had been ratified". So there is still some work to be done before FCoE becomes all-embracing from the server edge to the network core in the data center.

Nigel Poulton has a really good series of posts (deep dives) on FCoE here if you want to know more. Dave Convery also has a very good post here on FCoE. And there is a very good post here detailing the savings (including space and power) a hospital made by deploying FCoE.

Saturday, December 19, 2009

Data Center 2.0/3.0

I attended the DCOF (Data Center Of The Future) web conference over two days (15th/16th December). The event had most of the main data center players listed here (with the notable exception of HP) giving presentations on the future data center.

One of the presentations was given by Cisco, who throughout their presentation made reference to Data Center 3.0 (if we don't have numbers, how do we know where we are? Web 2.0, Enterprise 2.0). After a while I had to ask the question:

What is the difference between DC 2.0 and DC 3.0 as Cisco talks about it?

The response:-

"In a nutshell, DC 2.0 refers to the client/server model and distributed resources. DC 3.0 refers to initiatives being taken today around consolidation and virtualization of resources. The goal of DC 3.0 is to be able to manage these resources and leverage them as a service to deploy applications alot more efficiently."

So there you have it: the data center of the future is consolidated and virtualized. No more silos of server, storage and network that are isolated from each other and managed as such. These resources will be tightly integrated, flexible and (in Cisco's eyes) virtualized, managed by integrated tools from a single pane of glass, and service oriented (as opposed to asset oriented).

After the discussion I began to think that there was another difference between Data Center 2.0 and 3.0. My additional thinking is this:

- Data Center 1.0 (the original) was the mainframe era
- Data Center 2.0 was the Unix/proprietary Architecture era
- Data Center 3.0 is the X86/X64 Open Architecture era

I've put it that way because when I originally started in I.T. and walked around a data center, it was nearing the end of mainframe dominance and the start of the explosion in Unix. Over the years there were fewer and fewer mainframes, and more and more Unix (proprietary) platforms. Now when I walk into a data center the Unix platforms are fewer and fewer, whilst x86/x64 rack and blade servers are everywhere - times have changed.

I shall be throwing out some posts around the content of the DCOF event from Intel, Netapp etc., as well as overviews of Cisco UCS, Vblock (Acadia), HP BladeSystem Matrix and some of the small(er) players mentioned here.

What is Converged Infrastructure?

In this post we are going to deal with "What is Converged Infrastructure?"; in later posts I'll be looking at the various companies doing the "How to?". The short answer is the unification of the infrastructure - hardware and software - but I think it leads to a lot more (although it starts with the physical infrastructure).

Let's take a look at the back of a rack in a typical data center - do you recognise this picture?

(Photo: DSCN0123, originally uploaded by alonzoD)


The traditional data center server has multiple physical connections to the LAN carrying iLO (integrated lights-out), backup, management, user access and much more. Typically these would vary from 100Mb for management to maybe 1Gb for backup.

In addition you might have redundant connections for some or all of these. So you might have anywhere from 2 LAN connections (test/development server) to upwards of 8 LAN connections if you want a highly redundant (production) server. Now look at the picture again - how many servers can you get in a 42U rack? Well, it depends: 1U/2U/4U servers are not uncommon.

Let's use 4U servers in a 42U rack and do some sums: say the rack has maybe 8/9 servers (we will use 8) and maybe a ToR (Top of Rack) switch, each server with multiple NICs and multiple LAN cables.

Per Rack LAN cabling

8 servers x 5 cables (1 mgmt, 1 backup, 1 iLO and 2 user access) = 40 cables.
Each server has 2 x 1-port HBAs for fibre connections to the SAN.

Per Rack SAN cabling

8 x 2 = 16 cables for SAN

Total cables for 1 rack = 56 (not including power or cables for uplink connection to Core Network / SAN fabric)
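If you want to play with the assumptions (servers per rack, NICs per server, HBA ports), the sums above are easy to reproduce - a quick sketch using the same numbers:

```python
# Reproduce the per-rack cabling sums above (same illustrative assumptions).
servers_per_rack = 8
lan_cables_per_server = 5   # 1 mgmt, 1 backup, 1 iLO, 2 user access
san_cables_per_server = 2   # 2 x single-port HBAs

lan_cables = servers_per_rack * lan_cables_per_server   # 40
san_cables = servers_per_rack * san_cables_per_server   # 16
total = lan_cables + san_cables                         # 56

print(f"LAN: {lan_cables}, SAN: {san_cables}, total per rack: {total}")
# LAN: 40, SAN: 16, total per rack: 56 (excluding power and uplink cables)
```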

Then you can multiply that across multiple racks. The effects of all these physical cards/cables:-

Power requirement increased
Restricted airflow
Cabling management nightmare (labeling!!!)

Now consider the management (via tools) of physical infrastructure:-

LAN switch ports
VLAN management
SAN Switch ports
SAN Zoning

Using individual management tools - that is (potentially) a lot of management (and a lot of people).

So what is Converged Infrastructure (IMHO)?

Convergence of the physical items - using techniques to reduce the number of physical items:

- Virtualized Servers - VMware, Xen, Hyper-V to reduce physical servers
- Virtualized Network - vSwitch, vNetwork Distributed Switch, Nexus 1000V
- Virtualized Storage - thin provisioning
- FCoE / DCE - converged network adapters that carry both Fibre Channel and LAN traffic on the same physical cable, giving a 2:1 reduction in cabling
- SR-IOV - a PCIe adapter that appears to the OS as multiple adapters

Convergence of management - using:

- A single pane of glass management tool for Server, Storage and Network
- Dynamic, proactive and automatic configuration of Server, Storage and Network
- Same people to do Server, Storage and Network

Some vendors will put emphasis on different elements of the above points. A key point to note is that Converged Infrastructure (IMHO) is more than just the physical; it is also the management tools (which leads, as a consequence, to convergence of I.T. organisation processes).

From a physical view it might look like this - compare this with our picture above.



Note: This is a picture of a Cisco UCS Blade Chassis - 6U in size, so that is 7 chassis (at a push) in a 42U rack. Each chassis has a maximum of 8 cables (normally 4), so the total number of cables is 56 (which doesn't sound much better). However, there would be 56 servers in the rack (slightly more than the 8 in the previous example) and a single pane of management to go with it - more about Cisco UCS in another post.
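As a rough cables-per-server comparison using the same illustrative numbers:

```python
# Cables per server: traditional rack vs UCS blade chassis (illustrative only).
traditional = 56 / 8          # 56 cables for 8 rack servers       -> 7 cables per server
converged   = 56 / (7 * 8)    # 56 cables for 7 chassis of 8 blades -> 1 cable per server
print(traditional, converged)  # 7.0 1.0
```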

Next posts - FCoE, Data Center 2.0/3.0 and Data Challenges

Data Center Of The Future

The data center of the future in terms of technology will be:-

Virtualized
Constituted of private and public (cloud) elements
Converged
Automated
Self-service
Network centric
Energy efficient
Self Managing

It means that:-

Hardware will no longer be unique (it will be commodity based)
Hardware will no longer be under-utilised (it will be running at 70%+)
Reduced Manual Support of Hardware and Software (it will be automated)
IT Focus will no longer be asset based (it will be service based)
Reduced CapEx and Reduced OpEx (hopefully)
Reuse, Reuse, Reuse at every level

The companies/communities driving this change:-

The big(ger) vendors

IBM
HP
Cisco
EMC
Intel
Dell
Oracle/Sun
VMware
Redhat
Citrix
Microsoft
Google
AMD

The small(er) Vendors

Netapp
InteliCloud
LiquidIQ
Panduit
Egenera
Xsigo
and others

The community

Opensource

The purpose of this blog is to:

Focus on the companies/communities driving this change
To be technology focused (because that is what I like)
Starting with "What is Converged Infrastructure?".....next post