PowerCLI wrapper for SSH actions

There is a “new” issue in town and this is a story about three colleagues helping us to tackle it.

tldr; go to the bullet points

ESXi installed on SD cards can fail to reboot because the SD medium silently wears out.

My colleague John Nicholson posted about it in March: “Using SD cards for embedded ESXi and vSAN?”

The main problem with this issue is that the SD cards fail silently. Well, not quite silently if you know what to look out for: there are some lines in the log files that give the issue away, but who reads them? You could probably set something up in Log Insight, but that is a topic for another day.

In John's article, he referenced a nice script from another colleague of mine, William Lam. William created a handy ash script which you can run on the ESXi host to check whether the SD card returns the same information from the boot sector area across several reads.

I took William's script and put a PowerCLI wrapper around it (which utilizes plink.exe) because I needed to check 180 servers. During a workshop I talked about it with another colleague, and he suggested I share it with the community. So here it is, I hope you find it useful.

The script…

There are several things about this script you should know:

  1. You can find it here: CheckSDCard Script on Bitbucket
  2. It has some standard functions which I use when I write PowerShell scripts, so you could make it slimmer by removing those and rewriting it a bit
  3. You can check either a single ESXi host or a whole cluster
  4. It requires the root password (either on the command line, via prompt, or hardcoded in the script… DON'T use that last option 🙂)
  5. If SSH is disabled on the host, the script will enable SSH first, do its magic, and disable SSH again afterward
  6. Oh, you need to be connected to the vCenter Server before running the script (a hypothetical invocation is sketched right after this list).
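
Before you dive into the internals, a hypothetical invocation could look like this. The parameter names are assumptions for illustration only; check the script on Bitbucket for the exact syntax.

# connect to vCenter first (parameter names below are assumed, not verified)
Connect-VIServer -Server vcenter.lab.local
# check a single host
.\CheckSDCard.ps1 -VMHost esx01.lab.local
# or check a whole cluster
.\CheckSDCard.ps1 -Cluster Cluster01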

Looking a little bit closer at the script:

Line 60 – 69:
Checks whether plink.exe is in the directory of the script and exits if it is not.
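
Not the original code, but a minimal sketch of such a check (assuming plink.exe should sit next to the script) could look like this:

# abort if plink.exe is not in the script's directory
$plinkPath = Join-Path -Path $PSScriptRoot -ChildPath 'plink.exe'
if (-not (Test-Path -Path $plinkPath)) {
    Write-Warning "plink.exe not found in $PSScriptRoot - please put it next to the script."
    exit
}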

Line 71 – 95:
Determines, based on the ParameterSetName, whether you want to check a single host or a cluster. If it is a cluster, the script gets all the ESXi hosts in the cluster and then runs the function checkSDCard for each of them.
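
A simplified sketch of that branching (not the original code; the parameter names and the checkSDCard call are assumptions):

# pick the hosts depending on which parameter set was used
switch ($PSCmdlet.ParameterSetName) {
    'Host'    { $vmHosts = Get-VMHost -Name $VMHost }
    'Cluster' { $vmHosts = Get-Cluster -Name $Cluster | Get-VMHost }
}
foreach ($esx in $vmHosts) { checkSDCard -VMHost $esx }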

Line 97 – 101:
Exports the result as a CSV file. The filename is based on the ESXi host/cluster name and a timestamp.
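
The export itself boils down to something like this ($targetName and the file name pattern are assumptions for illustration):

# build a timestamped file name from the host/cluster name and export the results
$timestamp = Get-Date -Format 'yyyyMMdd-HHmmss'
$SDCardList | Export-Csv -Path ".\$($targetName)-SDCardCheck-$timestamp.csv" -NoTypeInformation -UseCulture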

Line 102 – 125:
Only executes this part of the script if the ESXi host is at least version 6.x. This check could be removed.
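
A minimal sketch of such a version guard (not the original code):

# only run the check against ESXi 6.x or newer hosts
if ([version]$esx.Version -ge [version]'6.0.0') {
    # ... run the SD card check ...
}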

Line 120 – 123:
Almost the real thing. This part of the code reads the SDCardCheck.txt file. This file is almost the same as the SDCardCheck_debug.txt file: the debug version contains William's original script, while the non-debug version has every output commented out except the important part which detects whether the SD card is corrupted or not.

Line 151 – 208:
This function is almost 100% my default Invoke-SSHCommand.
It checks whether SSH is enabled; if not, it enables SSH and disables it again afterward. It then executes the command via plink and writes the output of that command into $output. I also check whether authentication is possible and exit if it is not.
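
Stripped down to the essentials, the flow of that function looks roughly like this. This is a sketch, not the original function; the remote command is a placeholder, and the plink switches and variable names are assumptions:

# enable SSH if needed, run the command via plink, disable SSH again
$sshService    = Get-VMHostService -VMHost $esx | Where-Object { $_.Key -eq 'TSM-SSH' }
$sshWasRunning = $sshService.Running
if (-not $sshWasRunning) {
    Start-VMHostService -HostService $sshService -Confirm:$false | Out-Null
}
# -batch avoids interactive prompts, -pw passes the root password
$output = & "$PSScriptRoot\plink.exe" -batch -ssh -l root -pw $rootPassword $esx.Name "<command to run>"
if (-not $sshWasRunning) {
    Stop-VMHostService -HostService $sshService -Confirm:$false | Out-Null
}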

The only change can be found in

Line 192 – 193:
Here a new $row object is created with the ESXi name and the stripped-down result of the SDCardCheck script. Then $row is added to the SDCardList, which is exported into the CSV file at the end of the script.
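
In essence it is something like this (the property names are assumptions):

# collect the result per host
$row = [PSCustomObject]@{
    ESXiName = $esx.Name
    Result   = $output
}
$SDCardList += $row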

Bonus Line 324 – 357:
This is an Update-Check function I wrote to check whether the local version differs from the version in your git repository. If there is a difference, a backup is created and the new version is downloaded. After that, you need to rerun the script.
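
The idea behind it, as a rough sketch (the URL, the version handling and the regular expression are assumptions, not the original code):

# compare the local script version against the copy in the git repository
$localVersion  = '1.0.0'
$remoteScript  = (Invoke-WebRequest -Uri 'https://bitbucket.org/<repo>/raw/CheckSDCard.ps1' -UseBasicParsing).Content
$remoteVersion = [regex]::Match($remoteScript, "Version\s*=\s*'([\d\.]+)'").Groups[1].Value
if ($remoteVersion -and ($remoteVersion -ne $localVersion)) {
    Copy-Item -Path $PSCommandPath -Destination "$PSCommandPath.bak"   # backup of the current version
    Set-Content -Path $PSCommandPath -Value $remoteScript              # download the new version
    Write-Warning 'Script was updated - please rerun it.'
}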

I hope you find this script and the functions in it useful.

File too big => number of hosts > 128. Not Supported

You might have stumbled over this error in the sdrsinjector.log.

I did, when I was troubleshooting why sdrsInjector was going haywire on one of the vSAN nodes.

While there is a KB article for this symptom, KB2145247, it doesn't describe the whole solution.

We added an NFS datastore to all our hosts to redirect logs and scratch and we have more than 128 hosts in that vCenter.

So that explains the error, even though we don't have Storage I/O Control enabled.

So I enabled the options:

  • Disable Storage I/O statistics collection
  • Exclude I/O statistics from SDRS

on those datastores.

Unfortunately I had to do that in all datacenters in all vCenters but that wasn't too hard.

Tips & Tricks: Deploying vSphere Replication 6.5

Just a quick note in case you wonder.

The documentation isn’t very clear about which files you need to pick when you want to deploy vSphere Replication 6.5 from the ISO. You will need to select the following 3 files:

  1. vSphere_Replication_OVF10.ovf
  2. vSphere_Replication-support.vmdk
  3. vSphere_Replication-system.vmdk

Might be obvious, just wanted to note that down.

Setting Syslog Configuration via Get-EsxCli -V2

Hi,

after battling a little with Get-EsxCli -V2 over the last couple of months, I needed to write a quick script yesterday to reconfigure the syslog parameters on all ESXi hosts.

A problem when you are coming from the old Get-EsxCli is how to map the old command line to the new version. Luckily for us, we have a helper:

$esxcli.system.syslog.config.set

will display a list of all functions you can use and

$esxcli.system.syslog.config.set.CreateArgs()

will return the hashtable values you can use to create the

$esxcli.system.syslog.config.set.invoke() 

syntax, for example

$esxcli.system.syslog.config.set.invoke(@{
defaultrotate = 8
loghost = 'udp://syslogServer:514'
})
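
Putting it together for all hosts in the connected vCenter, a minimal sketch could look like this (the syslog server and the rotate value are just examples; the reload at the end is how I make the new settings take effect):

foreach ($esx in Get-VMHost) {
    $esxcli = Get-EsxCli -VMHost $esx -V2

    # build the argument hashtable and fill in only the values we care about
    $arguments = $esxcli.system.syslog.config.set.CreateArgs()
    $arguments.defaultrotate = 8
    $arguments.loghost       = 'udp://syslogServer:514'
    $esxcli.system.syslog.config.set.Invoke($arguments) | Out-Null

    # reload syslog so the new configuration becomes active
    $esxcli.system.syslog.reload.Invoke() | Out-Null
}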

Complete Source Code can be found here: syslogConfig.ps1

Robocopy, Error 2, Accessing Destination Directory and The System cannot find the file specified

Isn’t IT fun?

You use a tool and while it normally works you stumble upon an issue which you cannot explain.

While doing some Robocopy jobs (with the Robocopy version from Server 2008) I stumbled upon the following errors:

Error 2 (0x00000002) Accessing Destination Directory TheSourceDirectory The System cannot find the file specified

and

Error 3 (0x0000003) Copying File TheSourceDirectory The System cannot find the file specified

Source and target are accessible by the job, Explorer doesn't show any issues when copying the data, and the directories sometimes contain a .DS_Store file (I haven't checked all of them).

Solution:

Using Robocopy from a 2003 Server.

IT, fun as always.

Powershell and UTF-8

Hi,

long time, no updates, so I thought I'd put up a workaround for UTF-8 issues with text files and PowerShell.

As the German language has umlauts (äöü), and several other languages have similar characters, people stumble again and again over how to handle them.

This is a problem in a lot of programming languages, especially if you have different OS and language versions/settings.

I had the issue that even when a file was saved as UTF-8, the PowerShell script wouldn't process it correctly.

To make matters worse, the file was created in PowerShell 😦

So the solution I found was quite easy:

# generate a random suffix for a temporary file
$rand = Get-Random
# rewrite the file explicitly as UTF-8
Get-Content $csv | Out-File -Encoding UTF8 "$fileLocation\temp-$rand.csv"
# import the re-encoded copy and remove it again
$stuff = Import-Csv "$fileLocation\temp-$rand.csv" -UseCulture -Encoding UTF8
Remove-Item "$fileLocation\temp-$rand.csv"

This will make UTF-8 work for me.

BTW: The original file was created by:

Export-Csv "somefilename" -NoTypeInformation -UseCulture -Encoding UTF8

Circumventing Not Able to Set EVC Mode in vSphere 5.5

During an upgrade from vSphere 5.0 to vSphere 5.5 we stumbled upon a problem.

We needed to redo the vCenter Server (which isn’t really a problem) but the old cluster had EVC Penryn enabled.

We created a new vCenter Server, created a new Cluster and tried to join a host to it…

No chance… vCenter was telling us that it couldn't join the host to the cluster with EVC enabled (as the VMs were still running).

OK, so then disable EVC and join the hosts… no issue.

Now enable EVC again…

No chance… some VMs on some hosts were using a higher level of CPU instructions than the EVC level we wanted to enable would support.

So we created a temporary cluster and moved the 2 ESXi servers with the offending VMs into it, removing them from the old vCenter Server first, as the VMs needed to be kept running.

Then we set the EVC mode on the original cluster and started migrating the VMs from the temp cluster into the EVC-enabled cluster.

We only found 2 VMs which we couldn't migrate, as they were using features which weren't supported in the EVC-enabled cluster; one was an MS SQL Server.

So we needed to shut them down, migrate them and power them up again.

Problem solved.

Yes, this isn't your typical approach to upgrading a vSphere environment, but in the end it did work.
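
For reference, the EVC mode change on the original cluster can also be scripted with PowerCLI (the cluster name and baseline below are just examples):

# enable the Penryn EVC baseline on the cluster
Set-Cluster -Cluster 'OriginalCluster' -EVCMode 'intel-penryn' -Confirm:$false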

Backup Exec 2014, NetApp and File Server Backup

It could be so easy… but it isn’t.

If you want to back up NetApp CIFS shares via the File Server backup option, you need to do several steps:

  1. Disable NDMP on the NetApp
  2. Add the NetApp as FileServer to Backup Exec
  3. Have a Remote Agent for Windows License installed
  4. Have “Enable selection of user shares” Enabled in “Configuration and Settings -> Backup Exec Settings -> Network and Security”

This should do the trick.

In “Backing Up NetApp Filer on Backup Exec 2012”, Jonas Palencia also changes the NDMP port on the Backup Exec server, but this shouldn't be necessary.

Visual C++ 2012 Redistributable fails to install with: The cryptographic operation failed due to a local security option setting

While upgrading Backup Exec to a newer version I stumbled upon the following error:

The Visual C++ 2012 Redistributable didn't install and showed a strange error in the BE installation log.

Running the package manually gave a far better error message:

The cryptographic operation failed due to a local security option setting

There is no direct solution to this problem (at least none that turns up when you search for it).

The install fails because of misconfigured IE security settings.

This knowledge base article provided the solution:

Error message when you try to validate a copy of Windows: The cryptographic operation failed due to a local security option setting