We ran into an issue where we needed an entire SAN frame retired. Problem is, there are several datastores and several guests running on that frame.
I wanted to script it out, which worked just fine. Then we had more to do, so I edited the script and ran it again. After the third or fourth time, I decided to write a script that takes parameters via the CLI.
Make sure your datastore names follow the same pattern; for instance, mine appends ‘_New’ to the end, so my datastores have to be named like this: ‘vmdatastore’ and ‘vmdatastore_New’. The script gets all guests on the old datastore and migrates them one by one over to the new datastore. When it’s done, just delete the old datastore (or rename it to _Old) and rename the new one to match.
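For flavor, here’s a minimal sketch of the same idea in Python with pyVmomi. The original script isn’t reproduced here, so treat everything below as an illustration: the argument names, the find_datastore helper, and the unverified-SSL connection are all my own placeholders.

```python
#!/usr/bin/env python
"""Sketch: migrate every guest from <datastore> to <datastore>_New,
one at a time. Illustrative only; all names are hypothetical."""
import argparse
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim


def find_datastore(content, name):
    """Return the datastore managed object with the given name."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datastore], True)
    try:
        return next(ds for ds in view.view if ds.name == name)
    finally:
        view.Destroy()


def main():
    parser = argparse.ArgumentParser(
        description="Evacuate a datastore onto its _New twin")
    parser.add_argument("vcenter")
    parser.add_argument("user")
    parser.add_argument("password")
    parser.add_argument("datastore", help="old datastore, e.g. vmdatastore")
    args = parser.parse_args()

    # Lab-style connection; verify certificates properly in production.
    si = SmartConnect(host=args.vcenter, user=args.user, pwd=args.password,
                      sslContext=ssl._create_unverified_context())
    try:
        content = si.RetrieveContent()
        old_ds = find_datastore(content, args.datastore)
        new_ds = find_datastore(content, args.datastore + "_New")

        # Every guest registered on the old datastore, moved one by one.
        for vm in old_ds.vm:
            print("Migrating %s ..." % vm.name)
            spec = vim.vm.RelocateSpec(datastore=new_ds)
            WaitForTask(vm.RelocateVM_Task(spec))  # storage vMotion
    finally:
        Disconnect(si)


if __name__ == "__main__":
    main()
```

You’d run it as something like `python evacuate.py vcenter01 admin secret vmdatastore`, then do the delete-and-rename step by hand once it finishes.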
We all hate that adding DDR3 sticks to a server slows down the QPI speed (or the RAM bus, for lack of a better term).
That changes with the Nehalem-EX processor (and perhaps Westmere), as the CPU governs the speed. You can throw up to 16 sticks of DDR3 RAM per CPU at 800, 978, or 1066 MHz, and the governing factor is the CPU itself, not how many DIMMs you install.
When we’re ready to deploy new ESXi hosts in our environment, we order them from Dell with ESXi pre-loaded on the internal SD card. This is nice and all, but what do you do when you have to go through and configure NTP, users, groups, the scratch directory, lockdown mode, and so on?
You’d have to fire up each server and configure everything by hand, times ten if you had ten new servers.
Since we’re working on a new, rather large virtualization deployment, we were looking at ways to overcome this.
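One way to attack it is to script the checklist against vCenter so every host gets identical settings. As a hedged sketch (pyVmomi again; the vCenter address, credentials, and NTP servers below are placeholders), here’s just the NTP piece applied to every host; the other items follow the same connect, loop, configure pattern:

```python
"""Sketch: push one checklist item (NTP) to every ESXi host via pyVmomi.
The vCenter address, credentials, and NTP servers are placeholders."""
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

NTP_SERVERS = ["0.pool.ntp.org", "1.pool.ntp.org"]  # placeholder sources

si = SmartConnect(host="vcenter.example.com", user="admin", pwd="secret",
                  sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        # Point the host at our NTP servers ...
        cfg = vim.host.DateTimeConfig(
            ntpConfig=vim.host.NtpConfig(server=NTP_SERVERS))
        host.configManager.dateTimeSystem.UpdateDateTimeConfig(config=cfg)
        # ... then have ntpd start with the host and restart it now
        # (use StartService instead if it wasn't already running).
        svc = host.configManager.serviceSystem
        svc.UpdateServicePolicy(id="ntpd", policy="on")
        svc.RestartService(id="ntpd")
        print("NTP configured on %s" % host.name)
    view.Destroy()
finally:
    Disconnect(si)
```

Users, groups, lockdown mode, and the scratch location can be layered into the same loop, and once the list stabilizes it’s worth folding into a host profile so new hosts inherit it automatically.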
Well, Cisco finally came back with an answer to why I was able to break the stuff like clockwork before, and that answer was firmware. New firmware has been released for the chassis, blades, and FEX (and I’m sure I’ve got either the order or the hardware names wrong), but I can’t say I’m excited about it.
We set more time aside to have Cisco come in and upgrade the bits, as if we hadn’t wasted enough time already. This time they sent the big guns to work on it, or gun, rather, as they sent an engineer named Troy. He was a good guy, very knowledgeable, but he can’t help that he works for Cisco; we’ve all gotta eat, right?
I heard today that Cisco will no longer make their Nexus line of switches for non-Cisco-brand blade enclosures.
What does this mean exactly? Those of us with the Dell M1000e blade chassis who are currently using pass-throughs and were waiting for the promised Cisco Nexus 4000 won’t have to wait anymore because it’s NEVER coming out.
Am I the only one who thinks Cisco shot themselves in the foot by doing this?
I mean, I have hands-on experience with UCS and wouldn’t wish that evil on anyone.
I just wanted to add that this affects all OEM vendors like Dell, HP, and IBM, as Cisco dropped production of new switches for other blade systems.
We were among the earliest ‘adopters,’ signing up for a paid POC to get the Cisco UCS system in-house.
Cisco sent the hardware and a group of techs to get this thing off the ground. Newly deployed hardware shouldn’t require a group of people from the vendor to come out and set it up, should it? In this day and age it should be user-friendly and plug-and-play, which we all know Cisco to be, right? Heh, okay, or not. Anyway, our trouble had just begun.