So, I picked this item up from today’s Woot-Off.

Hands-free!
Click here for more information.
Voting ends this Friday at 5pm CST and I’m currently in 3rd place. I’d appreciate any votes I can get, as I doubt my employer will send me to VMworld again this year.
If you view the leaderboard, my entries are the ones by “cougar694u”, and the one I like most says “I told you we should’ve bought the m1000e filled with m610’s instead of this unstable UCS carp!”
I have a feeling the first two entrants have supporters rallying votes for them, since their totals grow daily, but it doesn’t hurt to ask, does it?
I wanted to expand This Script to allow you to specify hosts as well, instead of just vCenter.
This came about because we have 20 new hosts that need storage so we can build our new vCenter server on them, and my old script wouldn’t suffice.
I know you can rescan at the container level (cluster, folder, datacenter), but the process would sometimes hang on large clusters, and other times I’d have to rescan twice. I like this script because it rescans each HBA one by one, then rescans VMFS afterward. One could add the -RunAsync switch, but then it’s no different from the right-click rescan in vCenter.
So, without further ado, here’s the updated script:
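Here’s a minimal sketch of the approach described above. This is not the author’s original script: the parameter names ($vCenter, $TargetHosts) are placeholders, and it assumes the PowerCLI snap-in is available and that you have credentials cached or are prompted at connect time.

```powershell
# Hedged sketch: accept either a vCenter server or a list of ESX/ESXi hosts,
# then rescan each FC HBA one at a time before rescanning VMFS.
param(
    [string]$vCenter,          # placeholder name: a vCenter server
    [string[]]$TargetHosts     # placeholder name: standalone ESX/ESXi hosts
)

Add-PSSnapin VMware.VimAutomation.Core -ErrorAction SilentlyContinue

if ($vCenter) {
    Connect-VIServer -Server $vCenter | Out-Null
    $vmHosts = Get-VMHost
} else {
    $vmHosts = foreach ($h in $TargetHosts) {
        Connect-VIServer -Server $h | Out-Null
        Get-VMHost -Name $h
    }
}

foreach ($vmHost in $vmHosts) {
    # The HostStorageSystem view lets us rescan one HBA at a time,
    # rather than firing everything off at once.
    $storSys = Get-View $vmHost.ExtensionData.ConfigManager.StorageSystem
    foreach ($hba in ($vmHost | Get-VMHostHba -Type FibreChannel)) {
        $storSys.RescanHba($hba.Device)
    }
    $storSys.RescanVmfs()    # rescan VMFS after the HBAs are done
}
```

Because the rescan calls here are synchronous, each host finishes before the next one starts, which is what avoids the hangs mentioned above.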
This past week I’ve been working on putting together a presentation to the CIO and senior business leadership on our plans for a Virtual Desktop Infrastructure (VDI). The presentation will include a short PowerPoint-backed discussion as well as a live demo using VMware View. I’ve been working with Scott Reopelle from the Desktop team, as his team will be the application owner for the broker and will continue the support and development of desktops no matter the platform.
**Update May 07, 2012 – Use the new script here: Updated: Finding WWNs for HBAs in ESXi hosts, now with Get-VMHostHba**
When building a new cluster, your storage team (or you) may need to add several hosts into the shared storage zone. It’s a pain to go to each host, then Configuration, then Storage Adapters, and copy out the WWN.
With this script, you can supply a vCenter server and Cluster/Folder/Datacenter (any logical container) and it will list all the WWNs for Fibre Channel devices. But what if you don’t have vCenter stood up yet? No problem, you can also supply a list of ESX/ESXi hosts to scan.
Shawn & I built this because we have 20 hosts we need the WWNs from to provide to our storage team, and vCenter isn’t alive yet.
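A rough sketch of what a script like this might look like follows. The parameter names ($vCenter, $Container, $TargetHosts) are placeholders rather than the original ones, and the hex formatting assumes PowerCLI returns PortWorldWideName as a decimal Int64 (which it does via Get-VMHostHba).

```powershell
# Hedged sketch: list FC WWNs either for all hosts in a vCenter container,
# or for a supplied list of ESX/ESXi hosts when vCenter isn't up yet.
param(
    [string]$vCenter,          # placeholder: vCenter server name
    [string]$Container,        # placeholder: cluster/folder/datacenter name
    [string[]]$TargetHosts     # placeholder: standalone host names
)

Add-PSSnapin VMware.VimAutomation.Core -ErrorAction SilentlyContinue

if ($vCenter) {
    Connect-VIServer -Server $vCenter | Out-Null
    $vmHosts = Get-Inventory -Name $Container | Get-VMHost
} else {
    $vmHosts = foreach ($h in $TargetHosts) {
        Connect-VIServer -Server $h | Out-Null
        Get-VMHost -Name $h
    }
}

foreach ($vmHost in $vmHosts) {
    foreach ($hba in ($vmHost | Get-VMHostHba -Type FibreChannel)) {
        # PortWorldWideName comes back as a decimal Int64;
        # format it as the colon-separated hex your storage team expects.
        $hex = "{0:x16}" -f $hba.PortWorldWideName
        $wwn = (($hex -split '(..)') | Where-Object { $_ }) -join ':'
        "{0}`t{1}`t{2}" -f $vmHost.Name, $hba.Device, $wwn
    }
}
```

The output is tab-separated (host, HBA device, WWN), which pastes cleanly into an email or spreadsheet for the storage team.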
A while back, a guy at the San Antonio VMUG asked the technical group how to get the actual LUN UUID for a particular datastore. I told him it was available via PowerCLI and to contact me on the VMUG forums. He never did. My storage guy at work loves this script, though, so I thought I’d share it with everybody.
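For reference, here’s a minimal sketch of how you might pull that mapping with PowerCLI. It assumes an existing Connect-VIServer session, and reads the canonical device name (the naa UUID) from each VMFS datastore’s extent list.

```powershell
# Hedged sketch: map each VMFS datastore to the canonical name (naa UUID)
# of its backing LUN(s), assuming you're already connected to vCenter.
Get-Datastore | Where-Object { $_.Type -eq 'VMFS' } | ForEach-Object {
    $ds = $_
    foreach ($extent in $ds.ExtensionData.Info.Vmfs.Extent) {
        New-Object PSObject -Property @{
            Datastore = $ds.Name
            LunUuid   = $extent.DiskName   # canonical name, e.g. naa.6000...
        }
    }
}
```

Multi-extent datastores will show one row per extent, which is exactly what a storage admin wants to see before retiring a LUN.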
Having an ESX cluster is nice, but adding shared LUNs can become click-redundant (host, Configuration, Storage, Rescan; repeat).
Since we’ve been migrating to a new storage array, we’ve been adding quite a few LUNs to different clusters on different vCenter servers, so I wanted an easy way to rescan everything.
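The simplest form of "rescan everything" can be sketched as a one-liner per cluster; the cluster name here is a placeholder, and it assumes an active Connect-VIServer session.

```powershell
# Hedged sketch: rescan all HBAs and VMFS on every host in a cluster.
# 'Prod-Cluster' is a placeholder name.
Get-Cluster -Name 'Prod-Cluster' | Get-VMHost |
    Get-VMHostStorage -RescanAllHba -RescanVmfs | Out-Null
```

Swap Get-Cluster for Get-Datacenter or Get-Folder to cover the other container types, or loop over several Connect-VIServer sessions to hit multiple vCenter servers.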
We ran into an issue where we needed an entire SAN frame retired. Problem is, there are several datastores and several guests running on that frame.
I wanted to script it out, which worked just fine. Then we had more to do, so I edited the script and ran it again. After the third or fourth time, I decided to write a script that takes parameters via the CLI.
Make sure your datastore names are similar, for instance, mine appends ‘_New’ to the end. So my datastores have to be named like this: ‘vmdatastore’ and ‘vmdatastore_New’. It will get all guests on the datastore and migrate them one by one over to the new datastore. When done, just delete the old datastore (or rename it to _Old) and rename the new one to match.
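The migration loop described above can be sketched roughly like this. It is not the original script: the parameter name is a placeholder, the ‘_New’ suffix convention is taken from the post, and it assumes an existing vCenter connection with Storage vMotion licensed.

```powershell
# Hedged sketch: storage-vMotion every guest from <name> to <name>_New,
# one at a time. $OldDatastore is a placeholder parameter name.
param(
    [string]$OldDatastore   # e.g. 'vmdatastore'; target assumed 'vmdatastore_New'
)

Add-PSSnapin VMware.VimAutomation.Core -ErrorAction SilentlyContinue

$newDatastore = Get-Datastore -Name ($OldDatastore + '_New')

# Move every VM registered on the old datastore, one by one;
# Move-VM without -RunAsync waits for each relocation to finish.
foreach ($vm in (Get-Datastore -Name $OldDatastore | Get-VM)) {
    Move-VM -VM $vm -Datastore $newDatastore
}
```

Once the loop finishes, the old datastore should have no registered guests, and you can delete it (or rename it to _Old) and rename the new one to match.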
We all hate that adding DDR3 sticks to a server slows down the QPI speed (or memory bus speed, for lack of a better term).
That changes with the Nehalem-EX proc (and perhaps Westmere), as the CPU governs the speed. You can throw up to 16 sticks of DDR3 RAM per CPU at 800, 978, or 1066MHz, and the governing factor is the CPU:
When planning a new virtualization environment, consolidation numbers are always flying around, and specifically the number of VMs you can run on a host.
According to This Doc, you can have a maximum of 320 VMs per host, but keep in mind the number is different for HA clusters. I was also pleased to find that the numbers changed slightly for Update 1.