One of the annoying things about installing new builds of Windows 10 is that the installation process resets a number of settings to their defaults.  When you’re on the insider preview “fast ring”, new builds arrive every 1-2 weeks, and changing that handful of settings back to their previous values gets old pretty fast.  After installing a new build today I finally decided to try automating these things using PowerShell.

For me, there are 3 main things that I keep changing back after every build:

  1. Removing unwanted language packs.
  2. Installing the en-AU speech pack so that Cortana works properly. Edit: as of an insider build delivered sometime in May 2016, this now happens automatically.
  3. Disabling devices from waking my computer from sleep.

Unfortunately, I haven’t been able to figure out how to achieve the second one through PowerShell (if I do, I’ll update the article), but the first and third weren’t too tricky.

Removing unwanted language packs

Every time I install a new insider build, it automatically re-installs the en-US language.  I’m in Australia and only want en-AU installed.  The following PowerShell code removes all language packs except the one you specify:
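Here’s a minimal sketch of that approach; en-AU is hard-coded as the language to keep, so substitute your own language tag:

    # Build a language list containing only the language to keep, then apply it.
    # Set-WinUserLanguageList replaces the whole list, which removes every other
    # installed language from the user's language list.
    $keep = New-WinUserLanguageList -Language "en-AU"
    Set-WinUserLanguageList -LanguageList $keep -Force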

The only tricky part was figuring out how to remove an installed language. My first thought was to use Get-WinUserLanguageList en-US and then pipe it to Remove-WinUserLanguageList… but that cmdlet doesn’t exist. It turns out that Set-WinUserLanguageList will remove all language packs except the one(s) you specify.

Disabling devices from waking my computer from sleep

Every time I install a new insider build, it re-enables the option that allows my keyboard, mouse, and NIC to wake the computer from sleep.  This often means that I will put the computer to sleep, and it will then wake up later on because I bumped the mouse or because something tried to ping it over the network.  Since I have configured my power management options to never automatically put the computer to sleep, this typically means the computer will wake up and run for hours before I notice.

Thankfully, there is a command-line utility called powercfg that can be used to query all devices which are allowed to wake the PC. These are basically the devices in Device Manager which have the “Allow this device to wake the computer” box checked under the Power Management tab. We can then use PowerShell to disable that option for each device that has been detected as wake-armed.
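Here’s a minimal sketch of that, assuming the device names reported by powercfg can be passed straight back to it (the quoting handles names that contain spaces):

    # List all devices that are currently allowed to wake the computer.
    $wakeDevices = powercfg -devicequery wake_armed

    # Disable the "Allow this device to wake the computer" option for each one.
    foreach ($device in $wakeDevices) {
        if ($device -and $device -ne "NONE") {
            powercfg -devicedisablewake "$device"
        }
    }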

Note that the PowerShell script will have to be run from an elevated (administrator) session to have the necessary permissions to disable the wake-armed devices.

Now that that’s all scripted, I’ve saved myself about 30 seconds every 1-2 weeks when a new insider build gets installed. I’ll have to install about 120 insider builds to break even on the time spent working out this script and writing this blog post. Hooray.

This is an (extremely) quick post to cover the steps required to decommission a Platform Services Controller (PSC) or vCenter Server from the vSphere single sign-on (SSO) domain.  The steps below are for a VCSA; steps for a Windows VC are very similar, and are contained in the VMware KB article I used as a reference for writing this post: KB 2106736.

Decommission a PSC

    1. Ensure no vCenter server instances are using the PSC that is to be decommissioned.  Instructions on how to query which PSC a vCenter instance is pointing to and subsequently repoint it are listed in my post here.
    2. Shut down the PSC.
    3. Connect to another PSC in the same SSO domain, either by SSH or using the console.  Enter the shell.
    4. Run the cmsso-util unregister command (see the sketch following this list).
    5. Remove the decommissioned PSC from the vSphere inventory.
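
The unregister command itself is a sketch based on KB 2106736; psc01.example.local and the credentials are placeholders, so substitute the FQDN of the PSC being decommissioned and your own SSO administrator account:

    cmsso-util unregister --node-pnid psc01.example.local --username administrator@vsphere.local --passwd 'SSO_password'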

Once these steps have been completed you can verify via the vSphere Web Client that the PSC has been decommissioned successfully by navigating to Administration > System Configuration > Nodes and ensuring that the decommissioned PSC is not present in the list of nodes.

Decommission a vCenter Server (VCSA)

    1. Query the to-be-decommissioned vCenter server to identify the PSC it’s pointing to.  Instructions on how to query the PSC vCenter is pointing to are listed in my post here.
    2. Connect to the PSC the VCSA is pointing to, either by SSH or using the console.  Enter the shell.
    3. Run the cmsso-util unregister command (see the sketch following this list).
    4. Power off the VCSA and remove it from the inventory.
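
Again as a sketch based on KB 2106736, the unregister command is run on the PSC, this time passing the FQDN of the vCenter Server being decommissioned (vc01.example.local and the credentials are placeholders):

    cmsso-util unregister --node-pnid vc01.example.local --username administrator@vsphere.local --passwd 'SSO_password'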

If you have multiple vCenter instances in a single SSO domain and you have just decommissioned one (or more), you may need to log out and log back into the vSphere Web Client before the decommissioned instance(s) disappear from the vSphere inventory tree.

Here’s a quick guide on how to query and change the Platform Services Controller (PSC) being used by vCenter.  Querying for the in-use PSC is possible on vCenter 6.0, but changing the PSC is only possible on 6.0 Update 1 or newer.  Note that I performed these steps on the vCenter Server Appliance (VCSA), and while I have also included some commands for a Windows-based vCenter server, I haven’t tested them myself.

Query the PSC being used by vCenter Server

There are two ways to identify this information: via the appliance console or an SSH session, or via the vSphere Web Client.

Option 1: via the appliance console or SSH session
On a VCSA
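
A sketch of the query, assuming the vmafd-cli tooling is in its usual location on the 6.0 appliance:

    /usr/lib/vmware-vmafd/bin/vmafd-cli get-dc-name --server-name localhost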

On a Windows vCenter
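
The equivalent on a Windows vCenter should look something like the following, assuming the default install path (untested, as noted above):

    "C:\Program Files\VMware\vCenter Server\vmafdd\vmafd-cli.exe" get-dc-name --server-name localhost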

Option 2: via the vSphere Web Client

In the vSphere Web Client, navigate to the vCenter server’s Advanced Settings (vCenter > Manage > Settings > Advanced Settings) and look for the property “config.vpxd.sso.admin.uri”.  The value of this property is the PSC that vCenter is currently using.

[Screenshot: the config.vpxd.sso.admin.uri property in the vCenter Advanced Settings]

Change (repoint) the PSC being used by vCenter Server

This step is a bit more detailed, as it depends on whether you are changing/repointing between PSCs in a single SSO site, between SSO sites, or moving from an embedded PSC to an external PSC, and also on whether you are using a VCSA or a Windows-based vCenter Server.  For this reason I’ll link directly to the VMware documentation for each scenario.

Option 1: Repointing within a site (KB 2113917).

    1. Connect to the VCSA console, or via SSH.
    2. Enable the shell (if necessary) and enter it.
    3. Run the vmafd-cli set-dc-name command (see the sketch following this list).

    4. Restart the vCenter services.
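
The repoint itself is a sketch based on KB 2113917; psc02.example.local is a placeholder for the FQDN of the PSC you want vCenter to point to, and the restart uses the appliance’s service-control tooling:

    # Point the VCSA at the new PSC (placeholder FQDN)
    /usr/lib/vmware-vmafd/bin/vmafd-cli set-dc-name --server-name localhost --dc-name psc02.example.local

    # Restart the services so the change takes effect
    service-control --stop --all
    service-control --start --all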

Option 2: Repointing between sites (KB 2131191).

Review the KB article for a full set of steps.

Option 3: Repointing from an embedded PSC to an external PSC

See Reconfigure vCenter Server with Embedded Platform Services Controller to vCenter Server with External Platform Services Controller in the vSphere 6.0 documentation center.

As VMware continues to push in the direction of Unix-based appliances for their vSphere management components, those of us without a Unix background (like myself) are having to come to grips with the Unix versions of common administrative tasks. Increasing the disk size on a vCenter Server Appliance (VCSA) is one such task.  In vCenter 6.0, VMware introduced Logical Volume Management (LVM), which really simplifies the process of increasing the size of a disk and allows it to be done while the appliance is online.  VMware KB 2126276 covers all the steps required to increase the size of a disk, but this guide will cover them in slightly more detail.

Step 1: Identify which disk (if any) has a problem with free space

To do this, I connect to the appliance via SSH or the console, enable and enter the shell, and use the df -h command.
More information on using command-line tools for working with disk space can be found in my post Useful Unix commands for managing disk space on VMware appliances.
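
As a rough sketch, the commands look like this (shell.set is only needed if the appliance drops you into the appliancesh shell rather than straight into BASH):

    shell.set --enabled True
    shell
    df -h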

[Screenshot: df -h output showing /storage/core and /storage/log at 100% usage]

I can see that both /storage/core and /storage/log are 100% used.  I’m guessing that /storage/core is full of vpxd crashdumps, which are being generated because vCenter is crashing after being unable to write logs to /storage/log.  Based on this guess, I’ll increase the size of /storage/log, then manually delete the crashdumps on /storage/core and monitor the situation.  I won’t cover the steps involved in deleting the vpxd crashdumps in this post, but it basically involves deleting the core.vpxd.* and *.tgz files in the /storage/core directory.

Step 2: Increase the size of the affected disk using the vSphere Web Client

The table in VMware KB 2126276 tells me that the disk mounted at /storage/log is VMDK5.  The way this is presented is a bit confusing in my opinion, because the disk we’re looking for is listed as hard disk 5 in the web client, but the filename of the disk is vmname_4.vmdk (the numbering of virtual disks is thrown off in this way because hard disk 1 is vmname.vmdk and hard disk 2 is vmname_1.vmdk).  Where the KB article says “VMDK5”, it really just means “the fifth VMDK file”.

The reason my /storage/log disk filled up is that I had increased the logging levels on my vCenter appliance to try to catch an issue that had been occurring.  Because of the increased amount of logs being generated, I’m going to increase the size of this VMDK to 25GB.  I don’t want to go overboard, because the disks are thick provisioned by default.

[Screenshot: increasing the size of the VMDK in the vSphere Web Client]

Step 3: Expand the logical drive and confirm that it has grown successfully

Return to the SSH session and expand the logical drive(s) that have been resized.  The following command will expand any disks that have had their vmdk files resized.
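
On the 6.0 appliance this is done with the vpxd_servicecfg tool from the BASH shell (as per KB 2126276):

    vpxd_servicecfg storage lvm autogrow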

If the operation is successful, you should see a message similar to the following.
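
Assuming the standard vpxd_servicecfg behaviour, success is indicated by a zero result code:

    VC_CFG_RESULT=0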

In my case, I did get that message eventually, but I also got a bunch of the following errors:

The reason I saw that error is that my /storage/core disk is 100% used.  As mentioned, I’m going to free up space on that drive manually, so I’ll ignore the error for now.

If I run df -h again, I can see that /storage/log is now 25GB in total size.  Job done!

[Screenshot: df -h output showing /storage/log now at 25GB total]

Note: In the vCenter 5.x appliance, increasing disk sizes was a bit of a pain. The operation had to be performed while vCenter was offline, and involved adding a brand new disk, copying files from the old disk to the new one, and editing mount points.  For anyone who is working with a vCenter 5.x appliance, the steps are in KB 2056764.