Fixing the SSL Certificate with Project Honolulu

So if you haven't heard of it yet, Microsoft is doing some pretty cool stuff in terms of local server management with what they are calling Project Honolulu. The latest version, 1802, was released March 1, 2018, so it's as good a time as any to get off the ground with it if you haven't yet. If you've worked with Server Manager in versions newer than Windows Server 2008 R2, the web interface should be comfortable enough that you can feel your way around, so this post won't be yet another "cool look at Project Honolulu!" piece; rather, it will help you with a hiccup in getting it up and running well.

I was frankly a bit amazed to find that this is evidently a web service from Microsoft not built upon IIS. As such, your only GUI-based opportunity to get the certificate right is during installation, and even that is based on the thumbprint, so it's still not exactly user-friendly. In this post I'm going to cover how to find that thumbprint in a manner that copies well (as opposed to opening the certificate) and then how to replace the certificate on an already up-and-running Honolulu installation. Giving props where they're due, this post was heavily inspired by How to Change the Thumbprint of a Certificate in Microsoft Project Honolulu by Charbel Nemnom.

Step 0: Obtain a certificate: A good place to start is to obtain or import a certificate to the server where you've installed Project Honolulu. If you want to use a public one, fine, but more likely you'll have a certificate authority available to you internally. I'm not going to walk you through this again; my friend Luca Dell'Oca has a good write-up on it here. Just do steps 1-3.

Make note of the Application ID here; you'll use it later

Step 1: Shut it down and gather info: Next we need to shut down the Honolulu service. As most of what we'll be doing here today is going to be in PowerShell, let's do this from the CLI as well.
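
On my installation the Windows service is named ServerManagementGateway; that name is an assumption worth verifying on your version:

    # Stop the Honolulu gateway service (name assumed; confirm with Get-Service *gateway*)
    Stop-Service ServerManagementGateway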

Now let's take a look at what's currently in place. You can do this with the following command; the output should look like the figure to the right. The relevant info to take note of here is 1) the port we've got Honolulu listening on and 2) the Application ID attached to the certificate. I'm just going to reuse the one that's there, but as Charbel points out this ID is generic and you can generate a new one to use with a GUID generator.
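
The query itself is just:

    netsh http show sslcert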

Pick a cert, not any cert

Finally, in our quest to gather info, let's find the thumbprint of our newly loaded certificate. You can do this with the Get-ChildItem cmdlet like this:
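
    # List the local machine's personal certificate store with copy-friendly thumbprints
    Get-ChildItem Cert:\LocalMachine\My | Format-Table Thumbprint, Subject, NotAfter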

As you can see in the second screenshot, that will give you a list of the certificates installed on your server along with their thumbprints. You'll need the thumbprint of the certificate you imported earlier.

Step 2: Make it happen: OK, now that we've got all our information, let's get this thing swapped. All of this seems to need to be done from the legacy command prompt. First, we want to delete the certificate binding currently in place along with its URL ACL. For the example shown above, where I'm using port 443, it would look like this:
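
    netsh http delete sslcert ipport=0.0.0.0:443
    rem use the exact reservation that 'netsh http show urlacl' lists if yours differs
    netsh http delete urlacl url=https://+:443/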

Now we need to put it all back into place and start things back up. Using the port number, certificate thumbprint, and appid from our example, the commands to re-add the SSL certificate and the URL ACL, and then start the service from PowerShell, would look like this. You, of course, would need to sub in your own information.
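
The values below are placeholders from our example; the user on the urlacl line should match whatever account the original reservation was for (Network Service on my install):

    netsh http add sslcert ipport=0.0.0.0:443 certhash=YOUR_NEW_THUMBPRINT appid={YOUR-APP-ID-GUID}
    netsh http add urlacl url=https://+:443/ user="NT AUTHORITY\NETWORK SERVICE"

And then, back in PowerShell:

    Start-Service ServerManagementGateway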

Conclusion

At this point you should be getting a shiny green padlock when you go to the site and no more nags about a bad certificate. I hope this component gets easier as the product progresses out of Tech Preview and into production quality, but at least there's a way.

VVOLs vs. the Expired Certificate

Hi all, I'm writing this to document a fix to an interesting challenge that has pretty much been my life for the last 24 hours or so. Through a comedy of errors and other things happening, we had a situation where the upstream CA above our VMware Certificate Authority (and other things) became unavailable, and the certificate authorizing the VMCA to manage certificates expired. Over the course of the last couple of days I've had to reissue certificates for just about everything, including my Nimble Storage array. As far as vSphere goes, we had to revert all the certificate infrastructure to essentially the same as the out-of-the-box self-signed certs and then reconfigure the VMCA as a subordinate again under the root CA.

Even after all that, I continued to have an issue where my production VVOLs storage was inaccessible to the hosts. That's not to say the VMs weren't working; amazingly, and as a testament to the design of VVOLs, my VMs on that storage ran throughout the process. But I was very limited in terms of managing those VMs: snapshots didn't work, backups didn't work, and for a time even host migrations didn't work, until we reverted to the self-signed certs.

Thanks to a great deal of support and help from both VMware Support and Nimble Storage Support, we were finally able to come up with a runbook for dealing with a VVOLs situation where major certificate changes have occurred on the vSphere side. There is an assumption to this process: that by the time you get here, all of your certificates, both throughout vSphere and on the Nimble arrays, are good and valid.

  1. Unregister the VASA provider and Web Client integration from the Nimble array. This can be done through the GUI in Administration > VMware Integration by editing your vCenter, unchecking the boxes for the Web Client and VASA Provider, and hitting save. It can also be done via the CLI using the command below.
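    The subcommand and flags below are an assumption sketched from NimbleOS 3.x-era syntax; confirm with vcenter --help on your array before running anything:

        # Unregister both extensions; VC_NAME as shown by 'vcenter --list' (assumed syntax)
        vcenter --unregister VC_NAME --extension web
        vcenter --unregister VC_NAME --extension vasa
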
  2. Register the integrations back in. Again, from the GUI simply check the boxes again and hit save. If successful you should see a couple of little green bars briefly appear at the top of the screen saying the process was successful. From the CLI the commands are pretty similar:
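        # Same assumed syntax as above; verify with 'vcenter --help' first
        vcenter --register VC_NAME --extension web
        vcenter --register VC_NAME --extension vasa
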
  3. Verify that your VASA provider is available in vCenter and online. This is just to make sure the integration was successful. In either the Web Client or the HTML5 client, go to vCenter > Configure > Storage Providers and look for the entry that matches the name of your array group and has the IP address of your array's management interface in its URL. This should show as online. As you have been messing with certificates, it's probably worth looking at the Certificate Info tab while you are here to verify that the certificate is what you expect.
  4. Refresh the CA certificates on each of your hosts. Next, we need to ensure that all of the CA certificates are available on the hosts so they can verify the certificates presented to them by the storage array. To do this you can either right-click each host > Certificates > Refresh CA Certificates, or navigate to each host's Configure tab and go to Certificate, where there is a button for this as well. While in that window it is worth looking at the status of each host's certificate and ensuring that it is Good.
  5. Restart the vvold service on each host. This final step was evidently the hardest one to nail down and find in the documentation. The simplest method may be to reboot each of your hosts, as long as you can put them into maintenance mode and evacuate them first. The quicker way, and the way that will let you keep things running, is to enter a shell session on each of your hosts and run the following command:
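
        /etc/init.d/vvold restart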

    Once done you should see a response like the featured image on this post, and a short while later your VVOLs array will again become available for each host as you work through them.

That's about it. I really cannot thank the engineers at VMware (Sujish) and Nimble (Peter) enough for their assistance in getting me back to good. I'd also like to thank Pete Flecha for jumping in at the end, helping me out and reminding me to blog this.

If nothing else, I hope this serves as a reminder to you (as well as myself) that certificates should be well tended to; please watch them carefully. 😉

VMware Tools Security Bug and Finding which VMware Tools components are installed on all VMs

Just a quick post related to today's VMware security advisories. VMware released a pair of advisories today, CVE-2016-5330 and CVE-2016-5331, and while both are nasty, their scopes are somewhat limited. The 5331 issue is only applicable if you are running vCenter or ESXi 6.0 or 6.0 U1; Update 2 patches the bug. The 5330 issue is limited to Windows VMs that are running VMware Tools with the optional HGFS component installed. To find out if you are vulnerable, here's a PowerCLI script to go through all your VMs and list the installed components. Props to Jason Shiplett for giving me some assistance on the code.
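
Here's a minimal sketch of the approach, not the exact script: it assumes an existing Connect-VIServer session, guest admin credentials usable by Invoke-VMScript, and that reading Win32_SystemDriver inside each guest is an acceptable way to spot the vmhgfs component:

    # Sketch: list matching VMware Tools driver components in each powered-on Windows VM
    $componentPattern = "vmhgfs"   # regex of the component(s) to look for
    $guestCred = Get-Credential    # an account with admin rights inside the guests
    $guestScript = "Get-WmiObject Win32_SystemDriver | Where-Object {{ `$_.Name -match '{0}' }} | Select-Object -ExpandProperty Name" -f $componentPattern
    Get-VM |
        Where-Object { $_.PowerState -eq 'PoweredOn' -and $_.Guest.OSFullName -match 'Windows' } |
        ForEach-Object {
            $result = Invoke-VMScript -VM $_ -GuestCredential $guestCred -ScriptText $guestScript
            '{0}: {1}' -f $_.Name, $result.ScriptOutput.Trim()
        }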

While the output is still a little rough, it will get you there. Alternatively, if you are only using this script for the advisory listed, you can set the pattern it matches on (the where-object { $_.Name -match $componentPattern } bit) to simply "vmhgfs". This script is also available on GitHub.

Getting Started with rConfig on CentOS 7

I've been a long-time user of RANCID for change management on network devices, but frankly it's always been a little bit of a pain to use and not particularly modern. I recently decided it was time for my OpenNMS/RANCID server to be rebuilt, moving OpenNMS up to a CentOS 7 installation, and in doing so I thought it was time to look around for a network device configuration management alternative. As is often the way in the SMB space, this isn't a task that actual budgetary dollars are going to go towards, so off to open source land I went! rConfig immediately caught my eye, looking to me like RANCID's hipper, younger brother, what with its built-in web GUI (through which you can actually add your devices), scheduled tasks that don't require you to manually edit cron, etc. The fact that rConfig specifically targets CentOS as its underlying OS was a whole other layer of awesomesauce on top of everything else.

While rConfig's website has a couple of really nice guides (once you create a site login), much to my dismay I found that they hadn't been updated for CentOS 7, and while working through them I found some pretty significant differences that affect the setup of rConfig. Some differences are minor (no more iptables; it's firewalld now), but httpd seems to have had a bit of an overhaul. Luckily I was not walking the virgin trail, and through some trial, error, and most importantly Google, I've now got my system up and running. In this post I'm going to walk through the process of setting up rConfig on a CentOS minimal install with network connectivity, with hopes that 1) it may help you, the two readers I've got, and 2) when I inevitably have to do this again I'll have documentation at hand.

Before we get into it, I will say there are a few artistic licenses I've taken with rConfig's basic setup.

  1. I'll be skipping over the network configuration portion of the basic setup guide. CentOS 7 has done a great job of having a single configuration screen at install where you set up your networking, among other things.
  2. The system is designed to run on MySQL, but for a variety of reasons I prefer MariaDB. The portions of the creator's config guide that deal with these components differ from what you see here, but they will work just fine if you do them the way described.
  3. I'm a virtualization kind of guy, so I'll be installing the newly supported open-vm-tools as part of the config guide. Of course, if you aren't installing on ESXi you won't be needing these.
  4. Finally, before proceeding, please be sure to run a yum update to make sure everything's up to date and you really do have connectivity (see the commands below).
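
For reference, those last two items boil down to:

    yum -y update
    yum -y install open-vm-tools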

Disabling Stuff

Even with the minimal installation there are things you need to stop to make everything play nice, namely the security measures. If you were installing this out in the wild this would be a serious no-no, but for a smaller shop behind a well-configured firewall it should be OK.

vi /etc/sysconfig/selinux

Once in the file you need to change the "SELINUX=enforcing" line to "SELINUX=disabled". To do that, hit "i" and then use vi like Notepad with the arrow keys. When done, hit Esc to exit insert mode and type ":wq" to save and exit.
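
If you'd rather not drive vi, a sed one-liner makes the same change:

    sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/sysconfig/selinux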

Installing the Prerequisites

Since we did the minimal install, there are lots of things we need to install. If you are root on the box you should be able to just cut and paste the following into the CLI and everything gets installed. As mentioned in the original Basic Config Guide, you will probably want to cut and paste each line individually to make sure everything installs smoothly.
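
The exact package list depends on your rConfig version, so treat this as an approximation (attr comes first, since the SELinux cleanup below needs it) and reconcile it against the current install guide:

    yum -y install attr
    yum -y install httpd httpd-devel
    yum -y install mariadb mariadb-server
    yum -y install php php-cli php-common php-devel php-mysql php-pear
    yum -y install vsftpd
    yum -y install wget unzip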

Autostart Services

Now that we've installed all that stuff, it does us no good if it isn't running. CentOS 6 used the command chkconfig <service> on|off to control service autostart. In CentOS 7 all service manipulation is now done with the systemctl command. Don't worry too much; if you use chkconfig or service start at this point, both will still alias to the correct commands.
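
For the services installed above, that works out to:

    systemctl enable httpd mariadb vsftpd
    systemctl start httpd mariadb vsftpd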

Finalize Disable of SELinux

One of the hard parts for me was getting step 5/6 of the build guide to work correctly. If you don't do it the install won't complete, but it also doesn't work right out of the box. To fix this, the first line in the prerequisites installs the attr package, which contains the setfattr executable. Once that's installed, the following checks whether the '.' (the SELinux context marker) is still shown in the directory ACLs and removes it from the /home directory. By all means, if you know of a better way to accomplish this (I thought of putting the install in the /opt directory), please let me know in the comments or on Twitter.
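
Here's a sketch of that check-and-remove against /home:

    # The '.' after the permission bits in 'ls -l' output marks an SELinux context
    ls -ld /home
    # Strip the SELinux extended attribute
    setfattr -x security.selinux /home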

MySQL Secure Installation on MariaDB

MariaDB accepts any commands you would normally use with MySQL. The mysql_secure_installation script is a great way to go from baseline to well secured quickly, and it is installed by default. The script is designed to:

  • Set root password
  • Remove anonymous users
  • Disallow root logon remotely
  • Remove test database and access to it
  • Finally reload the privilege tables

I tend to take all of the defaults, with the exception that I allow root login remotely for easier management. Again, this would be a very bad idea for databases with external access. Kick it off like so:
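
    mysql_secure_installation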

Then follow the prompts from there.

As a follow-up, you may want to allow remote access to the database server for management tools such as Navicat or HeidiSQL. To do so, enter the following from the MySQL prompt, where X.X.X.X is the IP address you will be administering from. Alternatively, you can use 'root'@'%' to allow access from anywhere.
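
A sketch of the grant, with a placeholder password to substitute:

    GRANT ALL PRIVILEGES ON *.* TO 'root'@'X.X.X.X' IDENTIFIED BY 'YourRootPassword' WITH GRANT OPTION;
    FLUSH PRIVILEGES;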


Configure VSFTPd FTP Software

Now that we've got the basics of setting up the OS and the underlying applications out of the way, let's get down to the business of setting up rConfig for the first time. First we need to edit the sudoers file to allow the apache account access to various applications. Begin editing the sudoers file with the visudo command, arrow your way to the bottom of the file, and enter the following:
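
The exact entries belong in rConfig's install guide; what follows is an illustration of the syntax with a deliberately broad grant that you should tighten to the guide's actual command list:

    # Let the apache account sudo without a tty or password prompt (illustrative only)
    Defaults:apache !requiretty
    apache ALL=(ALL) NOPASSWD: ALL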

rConfig Installation

First you are going to need to download the rConfig zip file from their website. Unfortunately, the website doesn't seem to work with wget, so you will need to download it to a computer with a GUI and then upload it via SFTP to your rConfig server (ugh). Once the file is uploaded to your /home directory, back at your server CLI, run the following commands.
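
Something along these lines, assuming the archive landed in /home and extracts to /home/rconfig as the later steps expect (the file name varies by version):

    cd /home
    unzip rconfig-*.zip
    chown -R apache:apache /home/rconfig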

Next we need to copy the httpd.conf file over to the /etc/httpd/conf directory. This is where I had the most issues of all, in that the conf file included is for httpd on CentOS 6, and there are some module differences between 6 and 7. Attached here is a modified version that I was able to get working successfully after a bunch of failures. The file found here (httpd.txt) will need to replace the existing httpd.conf before the webapp will successfully start. If the file is copied to the /home/rconfig directory, the shell commands would be:
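
    # Keep a copy of the stock config, then swap in the modified one
    cp /etc/httpd/conf/httpd.conf /etc/httpd/conf/httpd.conf.orig
    cp /home/rconfig/httpd.txt /etc/httpd/conf/httpd.conf
    systemctl restart httpd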

As long as the httpd service starts back up correctly, you should now be good to go with the web portion of the installation, which is pretty point-and-click. For the sake of brevity, just follow along with the rConfig installation guide starting with the rConfig web installation section and continue to the end. We'll get into setting up devices in a later post, but it is a pretty simple process if you are used to working with networking command lines.

Getting Started with Veeam Endpoint Backup

This week Veeam Software officially released their new Endpoint Backup Free product, introduced at VeeamON last October, after a few months of beta testing. The target for this product is to allow image-based backup of individual physical machines, namely workstations, allowing for Changed Block Tracking much like users of their more mature Backup & Replication product have been used to in virtualized environments. Further, Veeam has made a commitment that the product is and should always be freely available, making it possible for anybody to perform what is frankly enterprise-level backup of their own computers at no cost other than possibly an external USB drive to store the backup data. I've been using the product throughout the beta process, and in this post I'll outline some of the options and features and review how to get started with the product.

Also released this month by Veeam is the related Update 2 for Backup & Replication 8. This update allows a Backup Repository to be selected as a target for your Endpoint Backup job after some configuration, as shown here. Keep in mind that if you are wanting to back up to a local USB drive or a network share this isn't necessary, but if you are already a B&R user this will make managing these backups much better.

Getting Started with Installation

Your installation options

I have to say Veeam did very well keeping the complexity under the water on this one. Once downloaded and run, the installation choices consist entirely of one checkbox and one button. That's it. Veeam Endpoint Backup seems to rely on a local SQL Server Express installation to provide backend services, just like the bigger Backup & Replication install, but it is installed on the fly. I have found that if there are pending Windows updates to complete, the installer will prompt you to restart prior to continuing on to configuring your backup.

Configuring the Job

Once the installation is complete, the installer will take you directly into configuring the backup, as long as you are backing up to an external storage device. If you plan to use a network share or a Veeam Backup Repository, you will need to skip that step and configure the job once in the application. Essentially you have the following options:

  • What you want to back up
    • Entire computer; which is image based backup
    • Specific volumes
    • File level backup
  • Where you want to back it up to (each will generate another step or two in the wizard)
    • Local storage
    • A shared folder
    • Veeam Backup & Replication repository
  • Schedule or trigger for backups
    • Daily at a specific time
    • Triggered by a lock, a logoff, or when the backup target is connected


Personally, I use one of three setups depending on the scenario. For personal computers I use an external USB drive, triggered when the backup target is available but set so that it never backs up more than once every 24 hours. In the enterprise, using Endpoint Backup to deal with those few remaining non-virtualized Windows servers, these are configured to back up to a Veeam Backup Repository on a daily schedule. Finally, I will soon begin rolling this out to key enterprise laptop users, and their backups will go to a B&R repository as well, but triggered on the user locking the workstation with a 24-hour hold-down. Keep in mind all of these options can be tweaked via the Configure backup button in the Veeam Endpoint Backup Control Panel.

Creating the Recovery Media

The last step of installing/configuring Endpoint Backup is to create the recovery media. This creates a handy disc or ISO that you can boot from to do a Bare Metal (or Bare VM :)) recovery of the machine. From an enterprise standpoint, if you are rolling Endpoint Backup out to a fieldful of like machines, I really can't find a good reason to create more than one of these per model of device. Personally, I've been creating the ISOs for each model and using them in conjunction with a Zalman VE-300 based external hard drive to keep from having lots of discs/pen drives around. If you are using this to back up physical servers, it would also be a first step to being able to quickly restore to a VM, if that is part of your disaster recovery plan.

One trick I've found: I installed the product on a VM for no other reason than to create the recovery media. This way I know I'll have the drivers to boot to it if need be. Further, once you boot to the recovery media you'll find all kinds of little goodies that make it a good ISO to have available in your bag.

Conclusion

I've played with lots of options, both paid and free, over the years for backing up a physical computer on a regular basis, and even setting the general Veeam fanboy type stuff aside, this is the slickest solution for this problem I've ever seen. The fact that it is free and integrates into my existing enterprise solution are definitely major added bonuses, but even in a standalone, "I need to make backups of Grandma's computer" situation it is a great choice. If you find you need a little help getting started, Veeam has created a whole Endpoint Backup forum just for this product. My experience both there and with other products is that there is generally a very quick response from very knowledgeable Veeam engineers, developers, and end users happy to lend a hand.

Quick Config: Install ClamAV & configure a daily scan on CentOS 6

I'm pretty well versed in the ways of anti-virus on Windows, but I've wanted to get an AV engine installed on my Linux boxes for a while now. In looking around I found a tried-and-true option in ClamAV, and after a few stops and starts I was able to get something usable. I'd still like to figure out how to have it send me a report by e-mail if it finds something, but that's for another day; I don't have enough Linux in my environment to justify putting the time in for that.

So with that here’s how to quickly get started.

Step 0: If not already there, install the EPEL repository
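
    # epel-release is carried in the CentOS Extras repo, which is enabled by default
    yum -y install epel-release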

Step 1: Install ClamAV
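
The package names here are the EPEL 6 ones (the scanner plus the clamd daemon); adjust if your repo splits them differently:

    yum -y install clamav clamav-db clamd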

Step 2: Perform the 1st update of ClamAV definitions (this will happen daily by default afterwards)
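
    freshclam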

Step 3: Enable and Start Services
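
On CentOS 6 that's the old chkconfig/service pairing:

    chkconfig clamd on
    service clamd start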

Step 4: Configure Daily Cron Job

I chose to have it scan the whole system and only report infected files; you may want to do differently.
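
Create a script under /etc/cron.daily; the name is yours to pick, but I'll assume daily_clamscan here to match the log file mentioned at the end of this post:

    vi /etc/cron.daily/daily_clamscan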

Enter the following:
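
    #!/bin/bash
    # Scan the whole system, appending only infected files to the daily log
    SCAN_DIR="/"
    LOG_FILE="/var/log/clamav/daily_clamscan.log"
    /usr/bin/clamscan -i -r "$SCAN_DIR" >> "$LOG_FILE"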

Note the -i option tells it to only return infected files, and the -r tells it to search recursively. You may want to add the --remove option as well to remove files that are found to be infected.

Step 5: Make the Cron Job Executable
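
    chmod +x /etc/cron.daily/daily_clamscan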

You can then kick off a manual scan if you'd like using:
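
    /etc/cron.daily/daily_clamscan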

That's it! Pretty simple, and all of your output will be logged daily to /var/log/clamav/daily_clamscan.log for review.