Hello! Long time, no scripting! I’ve been blowing through VCF, deploying and redeploying, and I’ve built some scripts to help me with it. Sharing is caring, read on to see what I’ve done…
At a high level, I need to install five (5) PCIe NVMe SSDs into a homelab server. In this post I cover how the CPU and motherboard both play a role in how and where these PCIe cards can and should be connected. I learned that simply having slots on the motherboard doesn’t mean they’re all capable of the same things. The research was eye-opening and really helped me understand the underlying architecture of the CPU, chipset, and manufacturer-specific motherboard connectivity. It’s a lot to digest at first, but I hope it provides some insight for others to learn from. Before I forget, the info below applies to server motherboards too, and plays a key role in dual-socket boards when only a single CPU is installed.
I’m writing a script to deploy Azure VMware Solution (AVS) and ran into a situation many of us likely have: Some parameters depend on other parameters.
I started with Parameter Sets, with several parameters participating in multiple sets, but that didn’t work the way I thought it would (or should).
Here’s what didn’t work:
[CmdletBinding(DefaultParametersetName="cli")]
param(
    [Parameter(ParameterSetName="cli")][Parameter(ParameterSetName="createVNET")][Parameter(Mandatory,ParameterSetName="VMInternet")][switch]$createVNET,
    [Parameter(ParameterSetName="cli")][Parameter(Mandatory,ParameterSetName="createVNET")][Parameter(Mandatory,ParameterSetName="VMInternet")][string]$vNetIPSubnet,
    [Parameter(ParameterSetName="cli")][Parameter(Mandatory,ParameterSetName="createVNET")][Parameter(Mandatory,ParameterSetName="VMInternet")][string]$vNetGatewaySubnet,
    [Parameter(ParameterSetName="cli")][Parameter(Mandatory,ParameterSetName="createVNET")][Parameter(Mandatory,ParameterSetName="VMInternet")][string]$vNetBastionSubnet,
    [Parameter(ParameterSetName="cli")][Parameter(Mandatory,ParameterSetName="createVNET")][Parameter(Mandatory,ParameterSetName="VMInternet")][string]$vNetManagementSubnet,
    [Parameter(ParameterSetName="cli")][Parameter(ParameterSetName="VMInternet")][switch]$EnableVMInternet,
    [Parameter(ParameterSetName="cli")][Parameter(Mandatory,ParameterSetName="VMInternet")][string]$vNetFirewallSubnet,
    [Parameter(ParameterSetName="cli")][Parameter(Mandatory,ParameterSetName="VMInternet")][string]$vNetHubSubnet
)
My intention was to have additional mandatory parameters based on other switches. For instance, if you add -createVNET, the script needs four additional parameters. Also, if you use -EnableVMInternet without -createVNET, the script needs to recognize that -createVNET wasn’t supplied and still make the parameters that go along with it mandatory. Spoiler: that didn’t work.
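For reference, one common fallback, shown here only as a minimal sketch and not necessarily what my finished script ended up using, is to drop the competing parameter sets entirely and enforce the dependencies at runtime:

[CmdletBinding()]
param(
    [switch]$createVNET,
    [string]$vNetIPSubnet,
    [string]$vNetGatewaySubnet,
    [string]$vNetBastionSubnet,
    [string]$vNetManagementSubnet,
    [switch]$EnableVMInternet,
    [string]$vNetFirewallSubnet,
    [string]$vNetHubSubnet
)

# -createVNET needs the four vNet subnets
if ($createVNET) {
    foreach ($name in 'vNetIPSubnet','vNetGatewaySubnet','vNetBastionSubnet','vNetManagementSubnet') {
        if (-not (Get-Variable -Name $name -ValueOnly)) { throw "-createVNET requires -$name" }
    }
}

# -EnableVMInternet needs the firewall and hub subnets
if ($EnableVMInternet) {
    foreach ($name in 'vNetFirewallSubnet','vNetHubSubnet') {
        if (-not (Get-Variable -Name $name -ValueOnly)) { throw "-EnableVMInternet requires -$name" }
    }
}

PowerShell also has a DynamicParam block for this sort of thing, but that gets verbose quickly.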
I told one of my nodes to enter maintenance mode and it sat overnight like this:
That screenshot was taken almost exactly 26 hours later. There were no running VMs on the host, nothing on the local datastore, no resyncing or rebuilding objects in vSAN, and nearly zero I/O on the network adapters.
I tried canceling the task; it would not cancel.
I rebooted the host; it came back into the cluster with that task still running.
I rebooted my vCenter, and that finally killed the task.
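If you hit the same thing, it may be worth trying to locate and cancel the task from PowerCLI before resorting to reboots. This is just a sketch (the vCenter name is a placeholder, and in my case the task refused to cancel anyway):

# Assumes an existing PowerCLI session: Connect-VIServer vcsa.lab.local
# Find any running "enter maintenance mode" tasks and attempt to cancel them
Get-Task |
    Where-Object { $_.Name -like '*EnterMaintenanceMode*' -and $_.State -eq 'Running' } |
    Stop-Task -Confirm:$false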
Today I am midway through setting up my lab and realized the reason VMware Cloud Foundation (VCF) is failing is that I set the wrong password in my JSON file for the root account on my vCenter appliance.
No big deal, right? Just SSH in and change it. I tried, and got this:
New password:
BAD PASSWORD: it is based on a dictionary word
passwd: Authentication token manipulation error
passwd: password unchanged
The bypass was actually easy. Presumably you’re already SSH’d in as root, so you just need to edit /etc/pam.d/system-password:
# Begin /etc/pam.d/system-password
# use sha512 hash for encryption, use shadow, and try to use any previously
# defined authentication token (chosen password) set by any prior module
password requisite pam_cracklib.so dcredit=-1 ucredit=-1 lcredit=-1 ocredit=-1 minlen=6 difok=4 enforce_for_root
password required pam_pwhistory.so debug use_authtok enforce_for_root remember=5
password required pam_unix.so sha512 use_authtok shadow try_first_pass
# End /etc/pam.d/system-password
Remove enforce_for_root from the first password line, the one loading pam_cracklib.so. Save the file (no need to restart any services) and retry passwd.
New password:
BAD PASSWORD: it is based on a dictionary word
Retype new password:
passwd: password updated successfully
After that, I re-added enforce_for_root to the file, clicked RETRY back in VCF, and all things are happy once again.
I just built a new environment and was greeted by this error. This fix will likely work on other Dell servers, and the settings may apply to other vendors.
At a high level, you need to set TPM2 Algorithm Selection to SHA256 in the BIOS. You MIGHT have to turn on Intel TXT, and then enable Secure Boot. This SHOULD NOT impact the ESXi installation, but there is a chance it might: enabling Secure Boot on a machine with modified or unsigned files carries the risk of rendering your current ESXi installation unbootable.
So, here we go:
I’m blogging about this because I always seem to forget where to find the status of the Tier-0 Logical Router, basically which edge transport node is Active and which is Standby for that specific Tier-0 Gateway. It’s easy once I remember, but hitting the search engines doesn’t show anything useful, so I’ll try to keyword spam this to get more visibility for the next time I forget.
TL;DR: Switch to Manager mode, click the Networking tab, then Tier-0 Logical Routers, and select the T0 you want. Look under High Availability Mode (screenshot below).
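If you’d rather pull this from the API than click through Manager mode, the manager API exposes it too. Treat the sketch below as a rough note from memory: the manager FQDN is a placeholder, and the endpoint and field names (/api/v1/logical-routers, per_node_status, high_availability_status) are assumptions to verify against the API guide for your version.

# Hypothetical manager FQDN; requires PowerShell 7 for -Authentication / -SkipCertificateCheck
$nsx  = 'nsxtmgr01.lab.local'
$cred = Get-Credential

# List the logical routers, keep the Tier-0s, then ask each for its per-node status
$routers = Invoke-RestMethod -Uri "https://$nsx/api/v1/logical-routers" -Credential $cred -Authentication Basic -SkipCertificateCheck
foreach ($lr in ($routers.results | Where-Object { $_.router_type -eq 'TIER0' })) {
    $status = Invoke-RestMethod -Uri "https://$nsx/api/v1/logical-routers/$($lr.id)/status" -Credential $cred -Authentication Basic -SkipCertificateCheck
    # per_node_status should show which edge transport node is ACTIVE and which is STANDBY
    $status.per_node_status | Select-Object transport_node_id, high_availability_status
}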
Recently I had a colleague come to me with a request. They had a Nutanix Prism Central production environment with certain images loaded. The previous administrator failed to document where those images were stored and they could not be located. My colleague wanted to download the images from their production Prism Central so they could upload them to a new test environment. I have written a Python script that will make that quite easy.
I’ve been intending to deploy NSX-T 2.4 since its release a few months ago to check out what’s new.
With that, I learned a little about a repeatable workflow to deploy it in a relatively easy way.
This assumes you already have your vCenter deployed with a vSphere cluster and port groups set up. In NSX-T 2.4 (-T hereafter), you no longer have separate controllers from your manager; you can deploy a single manager and then add additional managers to make it a cluster. You’ll want one or three NSX Managers, depending on whether this is a lab, testing, or production, and if it’s a cluster, you’ll likely want an additional IP to serve as the cluster VIP. If you’re keeping count, that’s four (4) IPs, which is how I’m going to deploy it.
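If you want the first manager deployment to be repeatable, one option (shown here only as a rough sketch; the OVA path, VM name, host, and datastore are placeholders, and the appliance’s OVF properties should be inspected rather than assumed) is to drive the OVA through PowerCLI:

# Assumes PowerCLI and an existing Connect-VIServer session; all names below are placeholders
$ova = 'C:\ova\nsx-unified-appliance-2.4.0.ova'

# Pull the OVF properties the appliance expects (hostname, IPs, passwords, deployment size, etc.)
$ovfConfig = Get-OvfConfiguration -Ovf $ova
$ovfConfig.ToHashTable() | Format-Table -AutoSize   # inspect, then fill in $ovfConfig.Common.* and the network mapping

# Once the properties are filled in, deploy the first manager
Import-VApp -Source $ova -OvfConfiguration $ovfConfig -Name 'nsxtmgr01' `
    -VMHost (Get-VMHost 'esxi01.lab.local') -Datastore (Get-Datastore 'vsanDatastore') `
    -DiskStorageFormat Thin

The second and third managers and the cluster VIP then get added from the first manager afterward, which is where the other three IPs come in.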
VMware has exploded into Software Defined Networking (SDN) with NSX, and it’s no secret why it’s their fastest-growing product. Through the use of all the components within NSX, you can be well on your way to a fully Software Defined Datacenter (SDDC), accomplishing things like automated deployment of networks, edge devices, NAT rules, firewall rules, and the list goes on.