
10gb Ethernet – A Year Later / Buy It Carefully


What I am getting at here is: do not underestimate the number of ports you will be using. We implemented Cisco UCS with 6120 switches and also installed a pair of Nexus 5010 switches. Going in, we had planned for only a few devices to be connected to the 10gb Ethernet network outside of the UCS infrastructure. It looks like we underestimated!

Once we got over the fear about FCoE reliability, almost everything is now being ordered with 10gbE. The only cause for worry was that, at the time of this implementation over a year ago, FCoE was not yet mainstream; it was almost “bleeding edge” in the market.

Where are we a year later?

  • We are expanding our Cisco UCS environment by at least one additional chassis
  • EMC CX-4 has 10gbE fiber modules for iSCSI (instead of RDMs for some VMs)
  • DataDomain 670 is 10gbE fiber connected
  • EMC NX4 NAS has 10gbE fiber connectivity
  • We are planning a purchase of either a CX4-480 or a VNX 5700, which will be 10gb FCoE
  • Hoping to order a pair of Cisco Nexus 7000s
  • One of the big killers of ports in our environment is having to tie the Gigabit Ethernet switches into the 5010s. This wastes 4 ports per switch that could be 10gb. Hopefully that will be resolved with a future purchase of Nexus 7000 switches. We also have a pair of fabric extenders connected into the 5010s, which takes up another two 10gbE ports per 5010. (See the rough port-budget sketch after this list.)
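
To make the port math concrete, here is a minimal back-of-the-envelope sketch in Python. The counts come from this post, except the 2-port upstream port-channel, which is my assumption based on the listing further down; swap in your own numbers when planning:

    # Rough 10gbE port-budget sketch for a single Nexus 5010 (20 fixed ports).
    # Counts are illustrative, taken from this post; adjust for your environment.
    TOTAL_PORTS = 20

    consumers = {
        "uplinks to Gigabit Ethernet switches": 4,  # the "port killer" above
        "fabric extender (FEX) uplinks": 2,
        "upstream port-channel (assumed)": 2,       # Po2 in the listing below
        "storage / server devices": 7,
    }

    used = sum(consumers.values())
    print(f"used {used} of {TOTAL_PORTS}, free {TOTAL_PORTS - used}")  # free 5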

Below is a port listing from one of our Nexus 5010 switches. This is after we freed up 3 ports (per switch) by removing physical servers that were tied into the 10gbE infrastructure. As you can see, we only have 5 ports per switch left (10 total if you count the redundant switch)! That is only 5 more devices at most.

    --------------------------------------------------------------------------------
    Ethernet      VLAN   Type Mode   Status  Reason                   Speed     Port
    Interface                                                                   Ch #
    --------------------------------------------------------------------------------
    Eth1/1        x     eth  trunk  up      none                       1000(D) 1
    Eth1/2        x     eth  trunk  up      none                       1000(D) 1
    Eth1/3        x     eth  trunk  up      none                       1000(D) 1
    Eth1/4        x     eth  trunk  up      none                       1000(D) 1
    Eth1/5        x     eth  trunk  up      none                        10G(D) --
    Eth1/6        x     eth  trunk  up      none                        10G(D) --
    Eth1/7        x     eth  access up      none                        10G(D) --
    Eth1/8        x     eth  access up      none                        10G(D) --
    Eth1/9        x     eth  trunk  down    Link not connected          10G(D) --
    Eth1/10       x     eth  trunk  down    Link not connected          10G(D) --
    Eth1/11       x     eth  trunk  down    Link not connected          10G(D) --
    Eth1/12       x     eth  access down    SFP not inserted            10G(D) --
    Eth1/13       x     eth  access up      none                        10G(D) --
    Eth1/14       x     eth  access down    SFP not inserted            10G(D) --
    Eth1/15       x     eth  trunk  up      none                        10G(D) --
    Eth1/16       x     eth  trunk  up      none                        10G(D) --
    Eth1/17       x     eth  fabric up      none                        10G(D) --
    Eth1/18       x     eth  fabric up      none                        10G(D) --
    Eth1/19       x     eth  trunk  up      none                        10G(D) 2
    Eth1/20       x     eth  trunk  up      none                        10G(D) 2
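
If you would rather track this than eyeball it each time, a few lines of Python can tally the free ports from a saved copy of that "show interface brief" output. This is only a sketch: the file name is hypothetical, and it simply treats ports that are down for "Link not connected" or "SFP not inserted" as free.

    import re

    # Count free ports in a saved "show interface brief" capture.
    # "nexus5010_ports.txt" is a hypothetical file holding the output above.
    FREE_REASONS = ("Link not connected", "SFP not inserted")

    free, total = [], 0
    with open("nexus5010_ports.txt") as capture:
        for raw in capture:
            line = raw.strip()
            if not re.match(r"Eth\d+/\d+\s", line):      # interface rows only
                continue
            total += 1
            fields = line.split()
            if fields[4] == "down" and any(r in line for r in FREE_REASONS):
                free.append(fields[0])                    # e.g. "Eth1/9"

    print(f"{len(free)} of {total} ports free: {', '.join(free)}")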

At the 6120s' end (the UCS switches) we are fine. We currently have 14 ports available on each switch. That should give us the ability to wire in 7 more chassis with two 10gbE uplinks per IOM, for a total of 4 links / 40gb of throughput per chassis. Our CIFS and NFS access is a lot faster running through the EMC NX4's 10gbE.
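
For the curious, the chassis math above works out as in this quick sketch (it assumes two IOMs per chassis, one uplinked to each 6120, which is how we get to 4 links per chassis):

    # UCS uplink capacity sketch using the figures from this post.
    free_ports_per_6120 = 14          # available ports on each 6120 today
    uplinks_per_iom     = 2           # 10gbE uplinks per IOM
    ioms_per_chassis    = 2           # assumption: one IOM per fabric

    extra_chassis = free_ports_per_6120 // uplinks_per_iom       # 7 more chassis
    links_per_chassis = uplinks_per_iom * ioms_per_chassis       # 4 links
    throughput_gb = links_per_chassis * 10                       # 40gb per chassis

    print(extra_chassis, links_per_chassis, throughput_gb)       # 7 4 40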

So if you are looking into 10gbE for your network, it would help to have at least a rough draft of where you want your datacenter to be a few years from now. Pretty much all of the major storage companies are offering some sort of 10gbE connectivity, and using it makes administration a lot easier. FCoE simplifies things even more, including eliminating the need for separate Fibre Channel switches. To sum it up, I love 10gbE and would recommend it even if you are not planning on incorporating FCoE or UCS in your environment.


Filed under: Cisco UCS, Data Domain, Datacenter, deduplication, EMC, NAS, SAN (Storage Area Network)
