A reader on a previous post asked about pulling host UUIDs, so I whipped together this script.
Usage is like this:
Get-VMHostUUID.ps1 -vmhosts ("host1","host2","host3")
or
Get-VMHostUUID.ps1 -vc vcenterserver -container cluster1/folder/dc/etc
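The gist of it is just pulling Hardware.SystemInfo.Uuid from each host's view; a trimmed-down sketch (the -vc/-container handling is left out) looks something like this:
# Trimmed-down sketch, not the full Get-VMHostUUID.ps1: grab the hardware UUID
# for a list of hosts, assuming you're already connected with Connect-VIServer.
param([string[]]$vmhosts)
foreach ($name in $vmhosts) {
    $view = Get-VMHost -Name $name | Get-View
    # Hardware.SystemInfo.Uuid is the SMBIOS UUID of the physical box
    New-Object PSObject -Property @{ Host = $name; UUID = $view.Hardware.SystemInfo.Uuid }
}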
Using DRS in vSphere is great for load balancing your cluster, but what if you need to keep one VM from vMotioning to another host? Everyone mentions host affinity when you search, but digging through the DRS settings doesn't really show much.
The closest thing to ‘Host Affinity’ is this KB article from VMware.
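If all you want is to keep DRS from automatically moving one particular VM, the usual workaround is a per-VM automation override: set that VM's DRS automation level to Disabled under the cluster's DRS settings. A quick sketch with PowerCLI, assuming your build of Set-VM exposes the -DrsAutomationLevel parameter:
# Sketch: stop DRS from automatically migrating a single VM.
# "MyPinnedVM" is a placeholder; assumes Set-VM supports -DrsAutomationLevel.
Get-VM -Name "MyPinnedVM" | Set-VM -DrsAutomationLevel Disabled -Confirm:$false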
ESXi 4.0 Update 1 brought with it one major update (as I pointed out here). Now that 4.1 was released on July 13th, I wanted to take a look and see whether anything else major has changed.
The biggest change is that they lifted the limit of 160 VMs per host in an 8-node HA cluster. The maximum is now 320 VMs per host and 32 nodes per HA cluster. The catch is that they imposed a maximum of 3,000 VMs per cluster (standard, HA, or DRS; they no longer differentiate them), so you have to find the sweet spot for how you want your cluster laid out. Not that 3,000 VMs per cluster is a problem, but if you ran 320 VMs on 75% of a 32-node cluster (leaving 25% for failover), that's 7,680 VMs, a difference of 4,680. At any rate, I'm glad they lifted the 40-VMs-per-host limit for clusters of nine or more hosts.
The Configuration Maximums for 4.1 can be found here.
Here are some of the key features that have changed:
This is a welcome (and much-demanded) addition to ESXi. I talked previously about deploying multiple ESXi hosts and how I laid a golden image down on individual SD cards.
It's currently supported using a boot CD or PXE. However, scripted installation only works for local or remote disks; installation onto USB devices (SD cards, etc.) isn't currently supported.
It's very similar to ESX in that it uses a kickstart file to drive the install, and the file can be pulled from all the typical locations (FTP, HTTP, NFS, USB, etc.).
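To give a feel for it, a bare-bones ks.cfg might look something like the sketch below. I'm writing the directives from memory, so treat them as an approximation and verify against the doc linked at the end of this post:
# Rough sketch of a minimal ESXi 4.1 ks.cfg; directive names and values are
# examples only, so check them against the official install guide.
vmaccepteula
rootpw mypassword
autopart --firstdisk --overwritevmfs
install url http://webserver/esxi41
network --bootproto=static --ip=192.168.1.50 --netmask=255.255.255.0 --gateway=192.168.1.1 --nameserver=192.168.1.10 --hostname=esxi01.example.com --device=vmnic0
reboot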
At least this is a huge step in the right direction.
For more information on the install of ESXi 4.1, see chapter 5 of this doc.
Deploying the Nexus 1000v set our cluster's slot size to 1.5GHz and 2GB of RAM. Not wanting to waste slots on guests that may never reach that size (or that only partially fill a slot), I wanted to carve the cluster into smaller slots, similar to using a smaller block size on a drive to maximize space.
Using percentage-based reservations in vSphere, you can get around slot sizes, but what if you're starting with a small cluster and growing it as resources are needed? How could I carve 25% out of a 2-node cluster? Sure, you can do it, but if you're running at the full 75% (with 25% reserved for failover) and lose a host, you actually don't have enough resources and are over-committed by 25%.
Setting the following advanced options will help reduce your slot size, but it can also bite you by leaving too few reserved resources if you end up in a failover state.
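I'm assuming the settings in question are the standard vSphere 4.x HA slot-size overrides, das.slotCpuInMHz and das.slotMemInMB. They go under the cluster's HA Advanced Options, or via PowerCLI if your build has New-AdvancedSetting; the values below are examples only:
# Sketch: cap the HA slot size with advanced options (example values).
# Assumes New-AdvancedSetting exists in your PowerCLI version.
$cluster = Get-Cluster -Name "Cluster01"
New-AdvancedSetting -Entity $cluster -Type ClusterHA -Name "das.slotCpuInMHz" -Value 500 -Confirm:$false
New-AdvancedSetting -Entity $cluster -Type ClusterHA -Name "das.slotMemInMB" -Value 512 -Confirm:$false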
Let's face it, repetition sucks. When provisioning ESX hosts, tools like the EDA make life easier, but they only do so much for ESXi.
The ESXi install is simple and straightforward, but once it's done you still have to go set everything else (IP, hostname, DNS, local users, etc.). Doing this for 20 hosts could be a PITA (Pain In The A..), so I set out to write a script that does all of it for you.
All you have to do is set the IP and root password, then verify you can ping the host by its hostname (set the host/A record in DNS). Once that's verified, here's what the script does for you:
This is a slightly more advanced script, and it's not fully polished, but it works.
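To give a rough idea of the kind of thing it automates, here's a stripped-down sketch of a few of the steps using stock PowerCLI cmdlets; the hostname, addresses, and password are placeholders, and the real script does quite a bit more:
# Stripped-down sketch of post-install setup for a single host (placeholders throughout).
$hostName = "esxi01.example.com"
Connect-VIServer -Server $hostName -User root -Password "changeme"
# DNS servers and search domain
Get-VMHostNetwork -VMHost $hostName | Set-VMHostNetwork -DnsAddress "192.168.1.10","192.168.1.11" -DomainName "example.com"
# NTP server plus starting the ntpd service
Add-VMHostNtpServer -VMHost $hostName -NtpServer "pool.ntp.org"
Get-VMHostService -VMHost $hostName | Where-Object { $_.Key -eq "ntpd" } | Start-VMHostService
Disconnect-VIServer -Server $hostName -Confirm:$false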
I wanted to expand This Script to allow you to specify hosts as well, instead of just vCenter.
This came about because we have 20 new hosts that need storage so we can build our new vCenter server on them, and my old script wouldn’t suffice.
I know you can rescan at the container level (cluster, folder, datacenter), but sometimes the process would hang on large clusters, and other times I'd have to rescan twice. I like this script because it rescans all HBAs one by one, then rescans VMFS afterward. One could probably add -RunAsync, but then it's the same as the right-click in vCenter.
So, without further ado, here’s the updated script:
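In a nutshell, the heart of it is just this (a simplified sketch; the connection logic and the vCenter/host parameter handling are trimmed out):
# Simplified sketch of the core loop: rescan each host's HBAs, then VMFS.
$targets = Get-VMHost          # or a specific list of hosts
foreach ($esx in $targets) {
    Get-VMHostStorage -VMHost $esx -RescanAllHba | Out-Null
    Get-VMHostStorage -VMHost $esx -RescanVmfs | Out-Null
}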
This past week I've been working on putting together a presentation for the CIO and senior business leadership on our plans for a Virtual Desktop Infrastructure (VDI). The presentation will include a short PowerPoint-backed discussion as well as a live demo using VMware View. I've been working with Scott Reopelle from the Desktop team, since his team will be the application owner for the broker and will continue supporting and developing desktops no matter the platform.
A while back a guy at the San Antonio VMUG asked the technical group how you could get the actual LUN UUID for a particular datastore. I informed him that it was available via PowerCLI and asked him to contact me via the VMUG forums. He never did. My storage guy at work loves this script, though, so I thought I'd share it with everybody.
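The gist is that a VMFS datastore's extent info carries the canonical name (the naa ID) of the LUN backing it, which is typically what you match against on the array side. A stripped-down sketch of the idea (not the exact script) looks something like this:
# Sketch: map each VMFS datastore to the canonical name of its backing LUN.
foreach ($ds in Get-Datastore) {
    $view = $ds | Get-View
    if ($view.Info.Vmfs) {
        foreach ($extent in $view.Info.Vmfs.Extent) {
            "{0} -> {1}" -f $ds.Name, $extent.DiskName
        }
    }
}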
Having an ESX cluster is nice, but adding shared LUNs can get click-redundant (host, configuration, storage, rescan; repeat).
Since we’ve been migrating to a new storage array, we’ve been adding quite a few LUNs to different clusters on different vCenter servers, so I wanted an easy way to rescan everything.