This is more of an informational update on things I have going on right now. I normally do not publish day-to-day posts, but here we go.
Storage
We have received both the replacement drives for our EMC Clariion CX3-40 and four new DAEs (disk shelves) for our CX4-240.
Clariion CX3
The CX3 was originally bought to speed up our Oracle implementation. This was accomplished by ordering a large number of small, fast disks (spindles). We wound up with six DAEs filled with 73 GB 15k RPM disks, totaling 90 dedicated drives for Oracle.
This was great for the original purpose, but the unit was replaced a year after initial deployment with a RamSan and an EMC CX4. Once the CX3 was decommissioned from production and moved to the tier 2 site, the need for space over IOPS (speed) increased drastically. To keep performance and space requirements in balance, we decided to go with a smaller RamSan for Oracle at the tier 2 site. That frees us to swap out the small 73 GB drives for larger 600 GB 10k RPM disks. Replacing those disks with the same quantity of 600 GB ones will give us roughly eight times as much space.
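For a rough feel of that capacity jump, here is a quick back-of-the-envelope sketch in Python. It uses raw drive sizes and ignores RAID overhead, so treat the numbers as illustrative only.

```python
# Rough capacity comparison for the CX3 Oracle DAEs.
# Raw drive sizes only; RAID overhead is ignored.
drive_count = 90        # 6 DAEs x 15 slots, all dedicated to Oracle
old_drive_gb = 73       # original 15k RPM spindles
new_drive_gb = 600      # replacement 10k RPM drives

old_raw_gb = drive_count * old_drive_gb    # 6,570 GB today
new_raw_gb = drive_count * new_drive_gb    # 54,000 GB after the swap

print(f"Before: {old_raw_gb} GB  After: {new_raw_gb} GB")
print(f"Growth factor: {new_raw_gb / old_raw_gb:.1f}x")   # ~8.2x
```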
The RamSan will almost double the IOPS capacity that the CX3 is able to achieve and speed up our data warehouse even more.
Clariion CX4
So last year we implemented EMC RecoverPoint SAN-based replication. It has been great and served us well! The only drawback was that we were doing “CRR” remote replication only. If a failure occurred and data needed to be recovered, there were no local copies; the snapshot, or “point in time,” would have to be loaded from the tier 2 site and transferred across the datacenter interconnect. At only 150 Mbps, the interconnect slowed this process down.
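To put that bottleneck in perspective, here is a quick back-of-the-envelope calculation. It assumes the interconnect really is 150 Mbps and uses a 1 TB restore as an example, ignoring protocol overhead; the numbers are illustrative, not measured.

```python
# Rough time to pull a snapshot back across the datacenter interconnect,
# assuming a 150 Mbps link and ignoring protocol overhead.
link_mbps = 150                          # datacenter interconnect speed (assumed Mbps)
snapshot_tb = 1                          # example size of the point-in-time copy

snapshot_bits = snapshot_tb * 1e12 * 8   # terabytes -> bits
seconds = snapshot_bits / (link_mbps * 1e6)

print(f"~{seconds / 3600:.1f} hours to restore {snapshot_tb} TB")   # ~14.8 hours
```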
As planned from the beginning, we are now implementing “CLR” local replication as well. This means a copy of the snapshots will be kept locally on the CX4, giving us almost immediate access to them without being slowed down by the interconnect. The catch with RecoverPoint is that if you have a one-terabyte LUN you want to replicate, you must have an extra terabyte of space for the copy. This is not really a problem, but it is a major consideration in the number of drives to buy and the overall expense of the implementation.
In our case, a one-terabyte Oracle LUN will wind up costing three terabytes in the end: one terabyte for the original data, one terabyte for the local copy (CLR), and one terabyte at the remote tier 2 site on the CX3 (CRR).
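As a quick sanity check on that math, here is a minimal sketch of how the provisioned space multiplies when both the local (CLR) and remote (CRR) copies are kept. The helper function is hypothetical, not EMC tooling.

```python
def provisioned_tb(source_tb, local_copy=True, remote_copy=True):
    """Total capacity consumed by a RecoverPoint-protected LUN (replicas only)."""
    total = source_tb          # the production LUN itself
    if local_copy:
        total += source_tb     # full-size local CLR replica on the CX4
    if remote_copy:
        total += source_tb     # full-size remote CRR replica on the CX3
    return total

# A 1 TB Oracle LUN protected with both CLR and CRR consumes 3 TB overall.
print(provisioned_tb(1))   # -> 3
```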
Our virtualization effort is continuing, and it is another huge factor in the storage expansion. Currently we have 16 LUNs dedicated to the VMware environment, each 320 GB in size. Moving forward, we will be doing a virtual desktop deployment as well, and the leftover ~400 GB will not cut it. So of the 60 new disks, 15 or more will have to be dedicated to VMware.
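For context, here is a quick back-of-the-envelope on the current VMware allocation, using the raw LUN sizes from above and ignoring RAID and VMFS formatting overhead.

```python
# Current VMware datastore allocation (raw LUN sizes; RAID and VMFS
# overhead are ignored in this rough estimate).
lun_count = 16
lun_size_gb = 320

allocated_gb = lun_count * lun_size_gb   # 5,120 GB carved out for VMware
free_gb = 400                            # roughly what is left unused today

print(f"Allocated: {allocated_gb} GB, free: {free_gb} GB "
      f"({free_gb / allocated_gb:.0%} headroom)")   # ~8% headroom
```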
Cisco UCS
We have begun our UCS voyage. As of last weekend, we did a “rip and replace” of our network. This included rewiring the main network rack and configuring a new network core. The Cisco Nexus 5010 10-gigabit Ethernet switches are also in, with two 48-port gigabit fabric extenders uplinked to them.
The VMware environment is now connected through this infrastructure via dual 10 GbE links per server, reducing the cable count from six to two per server. So far everything is stable! A purchase order has been sent out, and we should hopefully see two Cisco UCS blade chassis and the accompanying switching infrastructure show up within about 30 days.
Filed under: EMC, Networking, RamSan, SAN (Storage Area Network), VMWare