In this post I talked about automated deployment that launches the remote console for me. Since I had 24 hosts that needed the user & role, I created a script that does it for me. Nothing special, just something quick that works…
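As a rough idea of what that kind of script looks like (a hypothetical sketch, not the original; the host list, account names, passwords, and role are all placeholders):

```powershell
# Hypothetical sketch (not the original script): create a local user and
# grant it a role on each host by connecting to them one at a time.
# hosts.txt, the account names, passwords, and role are placeholders.
foreach ($esx in Get-Content .\hosts.txt) {
    Connect-VIServer -Server $esx -User root -Password 'changeme' | Out-Null

    # Local user on the host, then a permission tying it to a role
    New-VMHostAccount -Id consoleuser -Password 'S3cret!' -UserAccount | Out-Null
    New-VIPermission -Entity (Get-VMHost) -Principal consoleuser -Role Admin | Out-Null

    Disconnect-VIServer -Server $esx -Confirm:$false
}
```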
I’ve recently had a ton of requests for information about specific VMs: how many disks each one has, its CPU count, how much RAM, and which environment it resides in.
Instead of constantly searching vCenter, I wrote this quickly during the meeting to query multiple servers.
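Something along these lines (a hypothetical PowerCLI sketch, assuming a vmlist.txt of VM names and that "environment" maps to the host cluster):

```powershell
# Hypothetical sketch: pull disk count, CPU, RAM, and cluster for a list
# of VMs. Assumes an existing Connect-VIServer session to vCenter and a
# vmlist.txt with one VM name per line.
foreach ($name in Get-Content .\vmlist.txt) {
    $vm = Get-VM -Name $name
    New-Object PSObject -Property @{
        Name      = $vm.Name
        NumCPU    = $vm.NumCpu
        MemoryMB  = $vm.MemoryMB
        DiskCount = @($vm | Get-HardDisk).Count
        Cluster   = (Get-Cluster -VM $vm).Name   # stand-in for "environment"
    }
}
```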
Using DRS in vSphere is a great thing for load balancing your cluster, but what if you need to keep one VM from vMotioning to another host? Everyone mentions host affinity when you search, but digging through the DRS settings doesn’t really show much.
The closest thing to ‘Host Affinity’ is this KB article from VMware.
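One common way to approximate host affinity in this release (and possibly what the KB describes) is to exclude just that one VM from DRS automation. A hypothetical sketch via the vSphere API from PowerCLI, with placeholder cluster and VM names:

```powershell
# Hypothetical sketch: disable DRS automation for a single VM so DRS
# never migrates it, leaving the rest of the cluster automated.
# "Prod-Cluster" and "PinnedVM" are placeholder names.
$clusterView = Get-Cluster -Name "Prod-Cluster" | Get-View
$vmView      = Get-VM -Name "PinnedVM" | Get-View

$spec               = New-Object VMware.Vim.ClusterConfigSpecEx
$vmCfg              = New-Object VMware.Vim.ClusterDrsVmConfigSpec
$vmCfg.Operation    = "add"
$vmCfg.Info         = New-Object VMware.Vim.ClusterDrsVmConfigInfo
$vmCfg.Info.Key     = $vmView.MoRef
$vmCfg.Info.Enabled = $false          # opt this VM out of DRS entirely
$spec.DrsVmConfigSpec = @($vmCfg)

# $true = modify the existing cluster config rather than replace it
$clusterView.ReconfigureComputeResource($spec, $true)
```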
I upgraded one of my lab hosts to ESXi 4.1 yesterday and was plagued with this error:
A lot of people are getting this error without a lot of direction. As it turns out, it’s because I upgraded one of my hosts to 4.1 without upgrading vCenter to 4.1 first. Silly me! Who would have thought that vSphere vCenter couldn’t manage a vSphere host because its rev is 0.1 higher?
ESXi 4.0 Update 1 brought with it one major update (as I pointed out here). Now that 4.1 was released on July 13th, I wanted to take a look and see if anything else major has changed.
The biggest change is that they lifted the limit of 160 VMs per host in an 8-node HA cluster. The maximum is now 320 VMs per host and 32 nodes per HA cluster. The problem is they also imposed a maximum of 3,000 VMs per cluster (standard, HA, or DRS; they no longer differentiate them), so you’d just have to find your sweet spot for how you want your cluster set up. Not that 3,000 VMs per cluster is a problem, but if you ran 320 VMs on 75% of a 32-node cluster (leaving 25% for failover), that’s 7,680 VMs, a difference of 4,680. At any rate, I’m glad they lifted the 40-VMs-per-host limit in 9+ node configurations.
The Configuration Maximums for 4.1 can be found here.
Here are some of the key features that have changed:
This is a welcome (and much demanded) addition to ESXi. I previously talked about deploying multiple ESXi hosts and how I used a golden image laid down on individual SD cards.
It’s currently supported using a boot CD or PXE. However, scripted installation only targets local or remote disks; installing to USB devices (SD cards, etc.) isn’t currently supported.
It’s very similar to ESX in that it uses a kickstart file to load the OS, and that file can be pulled from all the typical locations (FTP, HTTP, NFS, USB, etc.).
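For illustration, a minimal ks.cfg along these lines (a sketch based on the 4.1 install guide; the URL, addresses, hostname, and password are placeholders):

```
# Hypothetical minimal ks.cfg for an ESXi 4.1 scripted install.
# All values (URL, IPs, hostname, password) are placeholders.
vmaccepteula
rootpw changeme
autopart --firstdisk --overwritevmfs
install url http://deploy.example.com/esxi41/
network --bootproto=static --device=vmnic0 --ip=192.168.1.50 --netmask=255.255.255.0 --gateway=192.168.1.1 --nameserver=192.168.1.10 --hostname=esx01.example.com
reboot
```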
At least this is a huge step in the right direction.
For more information on the install of ESXi 4.1, see chapter 5 of this doc.
Deploying the Nexus 1000v set our cluster’s slot size to 1.5GHz and 2GB of RAM. Not wanting to waste slots on guests that may never reach that size (or that only partially fill a slot), I wanted to carve the cluster into slots of a lesser size, similar to using smaller block sizes on a drive to maximize space.
Using percentage-based reservations in vSphere, you can get around slot sizes, but what if you’re starting with a small cluster and growing it as resources are needed? How could I carve 25% out of a 2-node cluster? Sure, you can do it, but if you’re running at the full 75% (with 25% reserved for failover) and lose a host, only 50% of your capacity remains; you don’t actually have enough resources and are over-committed by 25%.
Setting the following options will help reduce your slot size, but they can also have a negative impact: if you end up in a failover state, you may not have enough reserved capacity.
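Presumably these are the standard HA slot-size overrides, das.slotCpuInMHz and das.slotMemInMB. A quick hypothetical PowerCLI sketch of setting them (the cluster name and values are placeholders, not recommendations):

```powershell
# Hypothetical sketch: cap the HA slot size with the standard advanced
# settings. "Prod-Cluster" and the values are placeholders; size them
# to your typical guest, not your largest reservation.
$cluster = Get-Cluster -Name "Prod-Cluster"
New-AdvancedSetting -Entity $cluster -Type ClusterHA `
    -Name "das.slotCpuInMHz" -Value 500 -Confirm:$false
New-AdvancedSetting -Entity $cluster -Type ClusterHA `
    -Name "das.slotMemInMB" -Value 1024 -Confirm:$false
```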
Let’s face it, repetition sucks. When provisioning ESX hosts, tools like the EDA make life easier, but they only do so much for ESXi.
The install for ESXi is simple and straightforward, but once it’s done you have to go set everything else (IP, hostname, DNS, local users, etc…). Doing this for 20 hosts could be a PITA (Pain In The A..), so I set out to write a script that does all of this for you.
All you have to do is set the IP & root password, then verify you can ping the host by its hostname (set a host/A record in DNS). Once that’s verified, here’s what the script does for you:
This is a slightly more advanced script, and it’s not fully polished, but it works.
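To give a feel for what it automates, here’s a rough PowerCLI sketch of a couple of those post-install steps (not the actual script; every name and address below is a placeholder):

```powershell
# Hypothetical sketch of post-install host configuration via PowerCLI,
# connecting to one freshly installed host directly. All names and
# addresses below are placeholders.
Connect-VIServer -Server esx01.example.com -User root -Password 'changeme'

# DNS search domain, DNS servers, and hostname
Get-VMHost | Get-VMHostNetwork | Set-VMHostNetwork -DomainName example.com `
    -DnsAddress 192.168.1.10, 192.168.1.11 -HostName esx01

# NTP server, then start the ntpd service
Get-VMHost | Add-VMHostNtpServer -NtpServer ntp.example.com
Get-VMHost | Get-VMHostService | Where-Object { $_.Key -eq "ntpd" } |
    Start-VMHostService

Disconnect-VIServer -Confirm:$false
```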
Click here for more information.
Voting ends this Friday at 5pm CST and I’m currently in 3rd place. I’d appreciate any votes I can get, as I doubt my employer will send me to VMworld again this year.
If you view the leaderboard, my entries are under “cougar694u”, and the one I like most says “I told you we should’ve bought the m1000e filled with m610’s instead of this unstable UCS carp!”
I have a feeling the top two entrants have people gunning for them, as their vote counts grow daily, but it doesn’t hurt to ask, does it?
I wanted to expand This Script to allow you to specify hosts as well, instead of just vCenter.
This came about because we have 20 new hosts that need storage so we can build our new vCenter server on them, and my old script wouldn’t suffice.
I know you can rescan at the container level (cluster, folder, datacenter), but sometimes the process would hang on large clusters; other times I’d have to rescan twice. I like this script because it rescans all HBAs one by one, then rescans VMFS afterward. One could probably add -RunAsync, but then it’s the same as the right-click in vCenter.
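As a rough illustration of that HBA-by-HBA approach (a hypothetical sketch, not the updated script itself, assuming PowerCLI and the HostStorageSystem API):

```powershell
# Hypothetical sketch: rescan each HBA on each host one at a time, then
# rescan VMFS, via the HostStorageSystem API. $VMHosts would come from
# vCenter or be passed in as individual host names.
param([string[]]$VMHosts)

foreach ($name in $VMHosts) {
    $vmhost  = Get-VMHost -Name $name
    $storSys = Get-View ($vmhost | Get-View).ConfigManager.StorageSystem

    # Rescan HBAs individually, synchronously, one by one
    foreach ($hba in $storSys.StorageDeviceInfo.HostBusAdapter) {
        Write-Host "Rescanning $($hba.Device) on $name"
        $storSys.RescanHba($hba.Device)
    }

    # Then pick up any new VMFS volumes
    $storSys.RescanVmfs()
}
```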
So, without further ado, here’s the updated script: