Server Hardware

SCCM 2007 OSD – Failed to get client identity (80004005) and Signature Verification Failed

I’m working on importing drivers for Dell’s new 12G servers into our SCCM server for OSD. I got everything imported yesterday, added the drivers to my boot image, created a new boot ISO for use on non-PXE-enabled networks, and went home for the day.

I get to work today, boot from the ISO I created yesterday, and am greeted with error 80004005 and some nondescript text stating it couldn’t pull a list of tasks. You know, the typical error that gives you no idea what it actually means.

I googled it and found 80004005 means “Failed to get client identity”, and some posts pointed out that the system time being off may be the cause. I rebooted; the BIOS time was maybe 30 seconds off, so I tried again, this time exporting smsts.log (located in X:\windows\temp\smstslog\) to my workstation via net use. I opened it in SMS Trace, and here’s what I found:

SCCM 80004005

Right there in RED is my error, plain as day, but what wasn’t shown to me in WinPE was the “signature varification failed”. I think it’s worthwhile to note Microsoft misspelled vErification, yup, that’s an A in theirs.

Now, if you google that, you’ll find This Post stating they saw the error after moving their SCCM server to new hardware. We didn’t move to new hardware; we went from physical to virtual, in that we P2V’d our SCCM server last night, which indeed changed the signature of the server.

I updated the boot image’s distribution point, which rebuilds the image, then did a refresh for good measure. When that was 100% complete, I recreated the task sequence media boot ISO and all was well again.

Just thought I’d share!

Cisco UCS Blade System – Part 4 – A much needed update

Written May 11th, 2012 by
Categories: Server Hardware, Server Management, Virtualization

A few years ago, we were one of the first/early adopters of UCS. At that time, it was clearly in its infancy and not ready for prime time; our local Cisco guys didn’t even know anything about it. If you care to read those previous posts, they can be found here: Part 1, Part 2, and Part 3. I was fairly bitter when I wrote those, but with good reason. I ‘wasted’ a lot of time (read: weeks or months) jacking with it and had nothing but problems. Read the rest of this entry »

Updated: Finding WWNs for HBAs in ESXi hosts, now with get-vmhosthba

This is an update to my original get-WWN script using Get-View. Get-VMHostHba was pointed out to me by Robert van den Nieuwendijk, vExpert 2012, so I wanted to provide an update to my original post HERE. I attached the ps1 file at the end.

With the addition of Get-VMHostHba in PowerCLI, you can get this information somewhat more easily. At line 46,

$hbas = Get-View (Get-View (Get-VMHost -Name $vmhost).ID).ConfigManager.StorageSystem

becomes

$hbas = Get-VMHostHba -vmhost $vmhost -Type FibreChannel

Since that pulls only Fibre Channel HBAs, the foreach changes to simply $hba in $hbas, and the if statement is no longer needed (lines 47-50):

foreach ($hba in $hbas){
    # PortWorldWideName is stored as a 64-bit integer; format it as hex
    $wwpn = "{0:x}" -f $hba.PortWorldWideName
    Write-Host -ForegroundColor Green `t "World Wide Port Name:" $wwpn
}

Here’s the new version –> Get-WWN.ps1
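As an aside, the `"{0:x}"` format above prints the WWPN as one plain hex string, while storage teams often ask for the colon-separated form. The conversion is trivial; here’s a minimal sketch in Python (illustrative only, not part of the script, and the sample WWPN below is made up):

```python
def format_wwpn(wwpn_int):
    """Format a 64-bit WWPN integer as colon-separated hex octets,
    e.g. 21:00:00:e0:8b:0a:9c:21."""
    hex_str = format(wwpn_int, "016x")  # same idea as "{0:x}" -f, zero-padded to 16 digits
    return ":".join(hex_str[i:i + 2] for i in range(0, 16, 2))

if __name__ == "__main__":
    sample = 0x210000E08B0A9C21  # hypothetical WWPN for illustration
    print(format_wwpn(sample))   # -> 21:00:00:e0:8b:0a:9c:21
```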

How to determine RAM speeds on Dell’s new 12G servers with Intel Sandy Bridge Xeon E5-2600 Procs

I’m sure many of you have dealt with trying to figure out how much RAM you can shove in a box, say an R720, while still keeping RAM speeds up. I actually had some docs from Dell with figures, diagrams, graphs, and a few charts. Even then, it was difficult.

Enter the “Dell 12G Memory Solution Tool”, a website that lets you test RAM & CPU configurations to find optimal speeds. For instance, you can select the R720, 2 CPUs, and 256GB of RAM. That’s a nicely sized box for virtualization. The tool tells me I can get 16x 16GB 2R4 DIMMs at either 1333MHz or 1600MHz. Of course, I’m going with the 1600MHz! What if I want to bump the RAM? I checked out 384GB & 512GB to see how they stack up: 384GB gives me the option of 24x 16GB DIMMs but drops my speed to 1066MHz, while 512GB has two options, either 800MHz or 1333MHz (yes please!).
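To make the capacity-versus-speed tradeoff concrete, here’s a tiny sketch encoding just the R720 figures quoted above (these numbers are what the tool reported for my configuration, not an official Dell spec table):

```python
# Memory speeds (MHz) the tool reported for a 2-CPU R720,
# keyed by total capacity in GB. Figures are the ones quoted above.
speeds_by_capacity = {
    256: [1333, 1600],  # 16x 16GB 2R4 DIMMs
    384: [1066],        # 24x 16GB DIMMs
    512: [800, 1333],
}

def best_speed(capacity_gb):
    """Return the fastest reported memory speed for a target capacity."""
    return max(speeds_by_capacity[capacity_gb])

if __name__ == "__main__":
    for cap in sorted(speeds_by_capacity):
        print(f"{cap}GB -> up to {best_speed(cap)}MHz")
```

The pattern is the usual one: more DIMMs per channel generally costs you memory clock, so past a certain capacity you trade speed for size.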

It also shows you some quick price & power consumption rankings on a 1-5 scale.

Pretty awesome, imo! Here’s the link: http://poweredgecpumemory.com/

Finding WWNs for HBAs in multiple ESX or ESXi hosts, standalone or clustered

**Update May 07, 2012 – Use the new script here: Updated: Finding WWNs for HBAs in ESXi hosts, now with Get-VMHostHba

When building a new cluster, your storage team (or you) may need to add several hosts into the shared storage zone. It’s a pain to go to each host, then Configuration > Storage Adapters, and copy out each WWN.

With this script, you can supply a vCenter server and Cluster/Folder/Datacenter (any logical container) and it will list all the WWNs for Fibre Channel devices. But what if you don’t have vCenter stood up yet? No problem, you can also supply a list of ESX/ESXi hosts to scan.

Shawn & I built this because we need the WWNs from 20 hosts to hand to our storage team, and vCenter isn’t alive yet.

Our script: Read the rest of this entry »

Intel Nehalem EX & DDR3 Speeds

Written February 19th, 2010 by
Categories: Server Hardware

We all hate that adding DDR3 sticks to a server slows down the QPI speed (or RAM bus, for lack of a better term).

That changes with the Nehalem EX proc (and perhaps Westmere), as the CPU governs the speed. You can throw up to 16 sticks of DDR3 RAM per CPU at 800, 978, or 1066MHz, and the governing factor is the CPU: Read the rest of this entry »

Fast deployment of vSphere ESXi 4.0 running on a 1GB SD-Card

Written February 18th, 2010 by
Categories: Server Hardware, Virtualization

When we’re ready to deploy new ESXi hosts in our environment, we order them from Dell with ESXi pre-loaded on the internal SD card. This is nice and all, but what do you do when you have to go through and configure NTP, users, groups, the scratch directory, lockdown mode, and the list goes on?

You’d have to fire up each server, go through and configure everything, x10 if you had 10 new servers.

Since we’re working on a rather large new virtualization deployment, we were looking at ways to overcome this.

Read the rest of this entry »

Cisco UCS Blade System – Part 3 – moar vSphere ESXi & UCS woes

Written February 16th, 2010 by
Categories: Server Hardware, Virtualization

Well, Cisco finally came back with an answer to why I was able to break the stuff like clockwork before, and that answer was firmware. New firmware has been released for the chassis, blades, & FEX (and I’m sure I’ve got that in the wrong order, or the wrong hardware), but I can’t say I’m excited about it.

We set more time aside to have Cisco come in and upgrade the bits, as if we hadn’t wasted enough time already. This time, they sent the big guns to work on it, or gun, rather, as they sent one of their engineers, Troy. He was a good guy, very knowledgeable, but he can’t help that he works for Cisco; we’ve all gotta eat, right?

Read the rest of this entry »

Cisco UCS Blade System – Part 2 – my vSphere ESXi & UCS woes

Okay, now that we’ve tested different OS installations, it’s time to test the real purpose we acquired these blades for: virtualization.

A little info on the hardware: Cisco N20-B6620-1, dual Xeon E5540s, 24GB of RAM, and two 73GB drives.

We’re using VMware ESXi 4.0u1 for our testing, booting from the SAN. Yes, I know, it’s still experimental with vSphere; I don’t like it, but that’s the path I was led down by my superiors.

Read the rest of this entry »

Cisco no longer makes switches for other blade enclosure manufacturers

Written February 16th, 2010 by
Categories: Server Hardware

I heard today that Cisco will no longer make their Nexus-line of switches for non-Cisco brand blade enclosures.

What does this mean exactly? Those of us with the Dell M1000e blade chassis who are currently using pass-throughs and were waiting for the promised Cisco Nexus 4000 won’t have to wait anymore because it’s NEVER coming out.

Am I the only one who thinks Cisco shot themselves in the foot by doing this?

I mean, I have hands-on experience with UCS and wouldn’t wish that evil on anyone.

*EDIT*
I just wanted to add that this affects all OEM vendors like Dell, HP, and IBM, as Cisco dropped production of new switches for other blade systems.
