DR Scenarios For You and Your Business: Getting Cloudy With It

In the last post we talked about the more traditional models of architecting a disaster recovery plan, covering icky things like tape, dark sites, and split datacenters. If you’d like to catch up you can read it here. All of those are absolutely worthwhile ways to protect your data, but they are slow and limit your organization’s agility in the case of a disaster.

By now we have all heard about the cloud so much that we’ve either gone completely cloud native, dabbled a little, or just completely loathe the word. Another great use for “somebody else’s computer” is to power your disaster recovery plans. By leveraging cloud resources we can effectively get out of the hardware-management business when it comes to DR and have borderline limitless resources if needed. Let’s look at a few ways this can happen.

DRaaS (Disaster Recovery as a Service)

For now this is my personal favorite, but my needs may be, and probably are, different from yours. In a DRaaS model you still take local backups as you normally would, but those backups or replicas are then shipped off to a Managed Service Provider (MSP) aligned with your particular backup software vendor.

I can’t particularly speak to any of the others from experience, but Cloud Connect providers in the Veeam Backup & Replication ecosystem are simple to consume and use. Essentially, once you buy the amount of space you need from a partner, you take the link and credentials you are provided and add them to your backup infrastructure. Once that’s done you create a backup copy job with that repository as the target and let it run. If you are bandwidth-constrained, many providers will even let you seed the job with an external hard drive full of backups that you ship to them, so all you have to transfer over the wire are your daily changes. Meanwhile all of these backups are encrypted with a key that only you and your organization know, so the data is nice and safe sitting elsewhere.
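For the Veeam case all of this is doable from the GUI, but if you like to script your infrastructure the provider can be added from PowerShell as well. A rough sketch only: Add-VBRCloudProvider is the relevant cmdlet, but I’m writing the parameter names from memory of the 9.5-era snap-in, and the endpoint, port, and credential name below are made up, so check Get-Help against your own version before running anything.

    Add-PSSnapin -Name VeeamPSSnapin
    # Endpoint and credentials come from your service provider; these values are hypothetical
    $creds = Get-VBRCredentials -Name 'tenant01@provider'
    Add-VBRCloudProvider -Address 'cloudconnect.provider.example' -Port 6180 -Credentials $creds
    # From here, point a backup copy job at the cloud repository the provider exposes
    # (I still create that job in the GUI)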

This is really great in that it is effectively infinitely scalable (you only pay for what you use) and you don’t have to own any of the hardware or software licenses to support it. If you do have an event you have options: you can either scramble and try to put something together on your own, or, more often, you can leverage the compute capabilities of the provider to power your organization until you can get your on-site resources back. As these providers have their own IT staff, you and your team are freed up to get employees and customers working again while they handle getting your systems restored and back online.

In my mind the drawbacks to this model are minimal. In the case of a disaster you are definitely going to pay more than you would running restored systems on your own hardware, but then you would have had to buy and maintain that hardware, which is expensive too. You will also be in a situation where workers and datacenter systems are not in the same geographical area, which may drive up bandwidth costs as you get back up and running, but that is still nothing compared to maintaining a second site year-round. Probably the only real drawback is that almost all of these providers require long-term agreements, one year or more, for the backup or replication portion of the service. You also need to be sure, if you choose this route, that the provider has enough compute resources available to absorb you if needed. This can be mitigated by working with your provider to do regular restore testing at the far end; it will cost you a bit more, but to me it is truly worth it.

Backup to Public Cloud

Finally we come to what all the backup vendors seem to be going toward these days: public cloud backups. In this model your backups land on premises first (highly recommended) and are then shipped off to the public cloud provider of your choice. Did AWS, Azure, or GCP start messing with their storage pricing and suddenly become cheaper? Simply add the new provider and shift the job over, easy peasy. As with all things cloud you are, in theory, infinitely scalable, so you don’t have to worry about onboarding new workloads except for cost, and who cares about cost anyway?

The upside here is the ability to be agile. Start to finish you can probably be set up to consume this model within minutes, and then your only limit to how fast you can be covered is how much bandwidth you make available for shipping backups. If you are doing this to cover for an external event, like the failure of your passive site, you can tear it back down afterwards just as fast as you built it. Also, you are only ever paying for your actual consumption, so you know what the cost is going to be for any additional workload to be protected; you don’t ever pay for “spare space.”

As far as drawbacks go, I feel like we are still in the early days of this, so there are a few. While you don’t have to maintain far-end equipment for either backup storage or compute, I’m not convinced that this isn’t the most expensive option for traditional virtualized workloads.

Hybrid Archive Approach

One of the biggest challenges of maintaining an on-prem, off-prem backup system is that we all run out of space sometimes. The public cloud gives us the ability to consume only what we need, not paying for any fluff, while letting someone else manage the performance and availability of that storage. One trend I’m seeing more and more is the ability to supplement your on-premises backup storage with public cloud resources to scale out your archives for as long as necessary. There is a tradeoff between locality and performance, but if your most recent backups are on premises or well connected to your production environment, you may never need to touch the backups that were archived off to object storage, so you don’t really care how fast they are to restore; you’ve just checked your policy checkbox and have that “oh no” backup out there.

Once upon a time my employer had a situation where we needed to retain every backup for about five years. Each year we had to buy more and more media to hold backups we would never restore from because they were so old, but we had them and were in compliance. If something like Veeam’s Archive Tier (or the equivalent from another vendor) had existed, I could have said “I want to retain X backups on-prem, but after that shift them to an S3 IA bucket.” In the long term this would have saved quite a bit of money and administrative overhead, and when the requirement went away all I had to do was delete the bucket and reset back to the normal policy.

While this is an excellent use of cloud technology, I don’t consider it a replacement for things like DRaaS or Active/* models. The hoops you need to jump through to restore these backups to a functional VM are still complex and require resources. Rather, I see this as an extension of your on-prem backups that lets you absorb short-term scale issues.

Conclusion

If you’ve followed along for both posts I’ve covered about 5.5 different methods of backing up, replicating, and protecting your datacenter. Which one is right for you? It might be one of these, none of these, or a mash-up of two or more, to be honest. The main thing is to know your business’s needs and its regulatory requirements, and then design a plan that meets them.

DR Scenarios For You and Your Business Part 1: The Old Guard

It is Disaster Recovery review season again here at This Old Datacenter, and reviewing our plans sparked the idea to outline some of the modern strategies for those who are new to the game or looking to modernize. I’m continually amazed by the number of people I talk to who are using modern compute methodologies (virtualization on premises, partner IaaS, public cloud) but are still using the same backup systems they were using in the 2000s.

In this post I’m going to talk about some basic strategies using Veeam Backup and Replication, because that is primarily what I use, but all of these are achievable with any of the current data backup vendors, with varying levels of advantages and disadvantages per vendor. The important part is to understand the different ways of protecting your data first and then pick a vendor that fits your needs.

One constant that you will see here is the idea of each strategy consisting of two parts: first, a local backup to handle basic things like a failing VM, a file restore, and other issues short of an all-systems-down event; second, archiving that backup somewhere outside of your primary location and datacenter to deal with the systems-down or virus scenario. You will often hear this referred to as the 3-2-1 rule:

  • 3 copies of your data
  • 2 copies on different types of physical media or systems
  • 1 copy (at least) in a different geographical location (offsite)
On-Premises Backup/Archive to Removable Media

This is essentially an evolution of your traditional backup system. Each night you take a backup of your critical systems to a local resource and then copy that to something removable so that it can be taken offsite each evening. In the past this was probably only one step: you ran backups to tape and then took that tape somewhere the next morning. Today I would hope the backups land on disk somewhere local and are then copied to tape or a USB hard disk, but everybody has their ways.

This method can get the job done but has a lot of drawbacks. First, you need human intervention to get your backups offsite. Second, restores may be quick if you are restoring from your primary backup copy, but if you have to go to your secondary you first have to physically locate the correct data set, and then, especially in the case of tape, it can take some time to get it back to functional. Finally, you own and have to maintain all the hardware involved in the backup system, hardware that effectively isn’t used for anything else.

Active/Passive Disaster Recovery

Historically the step up from removable media for many organizations is to maintain a set of hardware, or at least a backup location, somewhere else. This could be just a tape library, a NAS, or an old server loaded with disks, either in a remote branch or at a co-location facility. Usually you would have some dark hardware there that could allow systems to be restored if needed. In any case you would still perform backups locally and maintain a set on premises for the primary restore, then leverage the remote location for a systems-down event.

This method definitely has advantages over the first in that you don’t have to dedicate a person’s time to ensuring the backups go offsite, and you might have some resources available to take over in case of a massive issue at your datacenter, but it can get very expensive, very fast. All the hardware is owned by you and used exclusively by you, if it is ever used at all. In many cases datacenter hardware is “retired” to this location, and it may or may not have enough horsepower to cover your needs. Others buy for the dark site at the same time as the primary datacenter, effectively doubling the cost of every refresh. Layer on top of this the cost of connectivity, power, and possibly rack space and you are talking about real money. Further, you are on your own in terms of getting things going if you do have a DR event.

All that being said, this is a true Disaster Recovery model, which differentiates it from the first option. You have everything you need (possibly) if you experience a disaster at your primary site.

Active/Active Disaster Recovery

Does your organization have multiple sites, with datacenter capabilities in each place? If so, then this model might be for you. With Active/Active you design your multisite datacenters with redundant space in mind so that, in the case of an event in either location, you can run both sites’ workloads in a single location. The ability to have “hot” resources available at your DR site is attractive in that you can easily make use of not only backup operations but replication as well, significantly shortening your Recovery Time Objective (RTO), usually with the ability to roll back to production when the event is over.

Think about a case where you have critical customer-facing applications that cannot handle much downtime at all, but you lose connectivity at your primary site. This workload could fairly easily be failed over to the replica in the far-side DC, all the while your replication product (think Veeam Backup & Replication or Zerto) is tracking the changes. When connectivity is restored you tell the application to fail back, and you are running with changes intact back in your primary datacenter.

So what’s the downside? Well, first off it requires you to have multiple locations to support this in the first place. Beyond that, you still need to be able to support the full load in case of an event, so your hardware and software licensing costs will most likely go up to cover an event that may never happen. Also, supporting replication is a good bit more complex than backup once you include things like the need for re-IP, external DNS, and so on, so you should definitely be testing this early and often, maintaining a living document that outlines the steps needed to fail over and fail back.

Conclusion

This post covers what I consider the “old school” models of Disaster Recovery, where your organization owns all the hardware and infrastructure that powers the system. But who wants to own physical things anymore? Aren’t we living in the virtual age? In the next post we’ll look at some more “modern” approaches to the same ol’ concepts.

The Basics of Veeam Backup & Replication 9.5 Update 4 Licensing

Veeam has recently released the long-awaited Update 4 to their Backup & Replication 9.5 product, and with it have come some changes to how they deal with licensing. As workloads that need to be protected/backed up/made available have moved from being 100% on-premises inside our vSphere or Hyper-V environments to mixes of on-prem, off-prem, physical, public cloud, etc., my guess is their customers have asked for a way to make that protection and licensing portable. Veeam has decided this can be solved with per-instance licensing, which is similar to how you consume many other cloud-based services. This rides along with the established perpetual licensing we still have for VBR and the Veeam Availability Suite.

I will be honest and say that the upgrade was not as smooth as I would have hoped. Now that I’ve gotten to the bottom of my own licensing issues, I’ll post here what I’ve learned to hopefully keep you from experiencing the same headaches. It’s worth noting that there is a FAQ on this, but its content is changing quite a bit as this gets rolled out.

How We Got Here

In the past, if you were using nothing but Veeam Backup and Replication (VBR), you did all your licensing by the socket count of protected hypervisors. Then along came the Veeam Agents for Windows and Linux, and with them the additional subscription levels for VAW Server, VAW Workstation, and VAL. As these can be managed and deployed via the Veeam console, that license had to be installed on your VBR server as well, so you now had two separate license files commingled on the server to create the entire solution for protecting VBR and Agent workloads.

Now as we look at the present and future, Veeam has lots of different products that are subscription based. Protecting Office 365, AWS instances, and Veeam’s orchestration product are all per-consumable-unit subscriptions. Further, thanks to Veeam’s Service Provider program, you as an end customer have the option of either buying and subscribing directly through a VAR or “renting” those licenses from a service provider. As you keep counting up you can see where this model needed (and still needs) to be streamlined.

Update 4 License Types

So that brings us to the here and now. For now, and for as far out as I can get anyone to tell me, perpetual (a.k.a. per-socket) licensing for Veeam Backup and Replication and the Veeam Availability Suite (which includes VBR and Veeam ONE) is here to stay. Any new products, though, will be licensed through a per-instance model going forward. In the middle there is some murkiness, so let’s take a look at the options.

  1. Perpetual (per socket) only. This is your traditional Backup and Replication license, licensed per protected socket of hypervisor. You still have to obtain a new Update 4 license from my.veeam.com, but it works exactly the same. If you have a Veeam server without any paid VAW/VAL subscriptions attached you can simply run the installer and continue on your current license. An interesting note is that once you install your Update 4 perpetual license, if you have no instances it will automatically provide you with 1 instance per socket, up to a maximum of 6. That’s actually a nice little freebie for those of us with a one-off physical box here or there or just a couple of cloud instances.
  2. Instance based. These are the “portable licenses” that can be used for VBR-protected VMs, VAW, VAL, Veeam for AWS, etc. If you are an existing customer you can contact licensing support and migrate your per-socket licenses to this if you want, but unless you are looking at a ROBO site, need more cloud protection, or have a very distributed use case for Veeam (small on-prem, workstations, physical servers, cloud instances), I don’t see this being a winner price-wise. For those of us with traditional workloads perpetual makes the most sense, because it doesn’t matter how many VMs we have running on our hypervisors, they are still all covered. If you’d like to do the math for yourself they’ve provided an instance cost calculator.

    I will mention that I think they are missing something in the calculator: unless they are doing something magical, it is based on buying new. Renewals of perpetual licenses should be far cheaper than the given number, and I’ve never heard of a subscription license service having a renewal rate. It is also worth noting that even if you aren’t managing your licensed (as opposed to free) Veeam Agents for Windows and Linux with VBR, you will need to go to the Update 4 license management screen on my.veeam.com and convert your subscription licenses to Update 4 instance ones to be able to use the 3.0 versions of the software. It doesn’t cost anything or make a difference at this point, but while you could buy subscription licenses in any quantity you chose, per-instance licenses have a minimum of 10 and are only sold in packs of 10. So while for now it might be nice that your licenses are rounded up, understand you’ll have to renew at the rounded-up price as well.

    Further, it’s worth noting that back when VAW was subscription-based there were separate lines for workstations and servers, with 1 server license costing the same as 3 workstations. In the new per-instance model this is reflected by consumption: a server of any kind will consume 1 instance, but a workstation will only consume 0.33 of one. Same idea, different way of viewing it.

  3. The Hybrid License. This is what you need if you want to manage both perpetual and instance licensing from the same VBR server. If you previously had per-socket licensing for your VMs and subscription licenses for VAW/VAL, you will need to hit the merge button under your Update 4 license management screen. This only works if you are the primary license administrator for all support IDs you wish to merge.

Just to make sure it’s clear: in previous versions you could have both a per-socket and a subscription license installed at the same time; this is no longer the case, thus the reason for option 3. You cannot have a type 1 and a type 2 license installed on the same server; the type 2 will override the type 1. So if you are consuming both perpetual and per-instance licensing under the same VBR server, you must be sure to merge those licenses on my.veeam.com. In order to do so you will need any and all licenses/Support IDs to be merged to be under the same Primary License Administrator. If you did not do this previously you will need to open a case with support to get a common Primary set for your Support IDs.

Conclusion

As we begin, or continue, to move our production workloads not only from our own datacenters to others’ but also to the public cloud, those workloads will continue to need to be protected. For those of us who use Veeam to do so, handling the licensing has, for now, been made simpler and is still cost effective once you get it lined out for yourself.

Dude, Where’s My Managed Service Accounts?

So I am probably way late to the game but today’s opportunities to learn have included ADFS and with that the concept of Managed Service Accounts.

What’s a Managed Service Account, you ask? We’ve all installed applications and set the service to run either with the local system account or with a standard Active Directory account. Managed Service Accounts, available since Windows Server 2008 R2 (and greatly enhanced as group Managed Service Accounts, or gMSAs, in Windows Server 2012), let you create a special type of account for services where Active Directory itself manages the account’s password, keeping you secure without you having to update passwords regularly.
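To make that concrete, here is roughly what creating and using a gMSA looks like with the ActiveDirectory PowerShell module. This is just a sketch: the account name, DNS name, and the “ADFS-Servers” computer group are placeholders, and the KDS root key step is a one-time, per-forest operation that normally takes up to 10 hours to become effective across domain controllers.

    # One time per forest: create the KDS root key that gMSA passwords are derived from
    Add-KdsRootKey -EffectiveImmediately

    # Create the gMSA and allow a group of hosts to retrieve its password (names are hypothetical)
    New-ADServiceAccount -Name 'svc-adfs' -DNSHostName 'svc-adfs.mydomain.local' `
        -PrincipalsAllowedToRetrieveManagedPassword 'ADFS-Servers'

    # On the member server that will run the service
    Install-ADServiceAccount -Identity 'svc-adfs'
    Test-ADServiceAccount -Identity 'svc-adfs'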

While there are quite a few great step-by-step guides for setting things up and then creating your first Managed Service Account, I almost immediately ran into an issue where my Active Directory didn’t seem to include the Managed Service Accounts container (CN=Managed Service Accounts,DC=mydomain,DC=local). My domain was at the correct functional level, Advanced Features were turned on in AD Users & Computers, everything seemed like it should be just fine, but the container simply wasn’t there. In this post I’ll outline the steps I ultimately took to get the problem fixed.

Step 0: Take A Backup

While you are probably already mashing the “take a snapshot” button or starting a backup job, it’s worth saying anyway: you are messing with your Active Directory, so be sure to take a backup or snapshot of the Domain Controller(s) that hold the various FSMO roles. Once you’ve got that backup, depending on how complex your Active Directory is, it might be worth leveraging something like Veeam’s SureBackup (er, I mean DataLab) like I did to create a test bed where you can try this out on last night’s backups before doing it in production.

Step 1: ADSI Stuff

Now we are going to have to start manually editing Active Directory. This is because you might have references to Managed Service Accounts in your schema but just be missing the container. You also have to tell AD it isn’t up to date so that the adprep utility can be rerun. Be sure you are logged into your Schema Master Domain Controller as an Enterprise Admin and launch the ADSI Edit MMC snap-in.

  1. Right-click ADSI Edit at the top of the tree on the left, click Connect…, and hit OK as long as the path is the default naming context.
  2. Drill down to CN=DomainUpdates, CN=System, DC=<mydomain>,DC=<mytld>
  3. Within the Operations container you will need to delete the following containers entirely.
    1. CN=5e1574f6-55df-493e-a671-aaeffca6a100
    2. CN=d262aae8-41f7-48ed-9f35-56bbb677573d
  4. Now go back up a level, right-click on the CN=ActiveDirectoryUpdate container, and choose Properties
    1. Scroll down until you find the “revision” attribute, click on it and click Edit
    2. Hit the Clear button and then OK
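If you’d rather script those edits, here is a rough PowerShell equivalent using the ActiveDirectory module. Treat it as a sketch only; it assumes the container names shown in the steps above, so verify the distinguished names in your own forest (and have that backup from Step 0) before deleting anything.

    # Run on the Schema Master as an Enterprise Admin
    Import-Module ActiveDirectory
    $domainDN = (Get-ADDomain).DistinguishedName
    $opsDN    = "CN=Operations,CN=DomainUpdates,CN=System,$domainDN"

    # Remove the two operation GUID containers listed above
    $ops = 'CN=5e1574f6-55df-493e-a671-aaeffca6a100',
           'CN=d262aae8-41f7-48ed-9f35-56bbb677573d'
    $ops | ForEach-Object { Remove-ADObject -Identity "$_,$opsDN" -Recursive -Confirm:$false }

    # Clear the revision attribute so adprep /domainPrep can be run again
    Set-ADObject -Identity "CN=ActiveDirectoryUpdate,CN=DomainUpdates,CN=System,$domainDN" -Clear revision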

Step 2: Run ADPrep /domainPrep

So now we’ve cleaned out the bad stuff and we just need to run adprep. If you have upgraded your Active Directory to its current level you have probably done this at least once before, but typically it won’t let you run it against a domain once it’s been done; that’s what clearing the revision attribute above fixed for us. Now we just need to pop in the (probably virtual) installation media and run the command.

Yay! It actually worked!
  1. Mount the ISO file for your given operating system to your domain controller. You can do this either by putting the ISO on the system, right-clicking it and choosing Mount, or by attaching it through your virtualization platform.
  2. Open up a command line or PowerShell prompt and navigate to <CDROOT>:\support\adprep
  3. Issue the .\adprep.exe /domainPrep command. If all goes well it should report back “Adprep successfully updated the domain-wide information.”

Now that the process is complete you should be able to refresh or relaunch your Active Directory Users & Computers window and see that Managed Service Accounts is available right below the root of your domain (as long as Advanced Features is enabled under View), and you are now good to go!
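If you’d rather verify from PowerShell than from the GUI, a quick check like this (assuming the ActiveDirectory module is available) should now return the container instead of an error:

    Get-ADObject -Identity ("CN=Managed Service Accounts," + (Get-ADDomain).DistinguishedName) -Properties whenCreated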

Reboot-VSS Script for Veeam Backup Job Pre-Thaw Processing

One of the issues that Veeam Backup & Replication users face, and really users of any application-aware backup solution, is that the various VSS writers are typically very finicky, to say the least. Often you will get warnings about the services, only to run “vssadmin list writers” and see writers either in a failed state or not there at all. In most of these cases a restart of either the service or the target system itself is a quick, easy fix.

But do you really want to rely on yourself to remember to do this every day? I know I don’t, and going with the mantra of “when in doubt, automate,” here’s a script that will help out. The Reboot-VSS.ps1 script assumes you are using vSphere tags to dynamically identify the VMs included in backup jobs; it looks at the services in a given array and, if they are present on the VM, restarts them.
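The script itself is embedded on the original post and isn’t reproduced here, so below is a minimal sketch of the same idea; it is my reconstruction, not the original. It assumes PowerCLI is installed where the script runs, that a vSphere tag (here called “VeeamBackup”) selects the VMs in the job, that PowerShell Remoting is enabled in the guests, and that VM names resolve in DNS; adjust all of that to your environment.

    # Reboot-VSS.ps1 (sketch): restart finicky VSS-related services on tagged VMs before a backup run
    Import-Module VMware.PowerCLI
    Connect-VIServer -Server 'vcenter.mydomain.local' | Out-Null   # hypothetical vCenter

    $services = @('SQLWriter', 'VSS')        # services to restart if present
    $vms      = Get-VM -Tag 'VeeamBackup'    # hypothetical tag used by the backup job

    foreach ($vm in $vms) {
        Invoke-Command -ComputerName $vm.Name -ScriptBlock {
            param($serviceNames)
            foreach ($name in $serviceNames) {
                # Restart only the services that actually exist on this guest
                if (Get-Service -Name $name -ErrorAction SilentlyContinue) {
                    Restart-Service -Name $name -Force
                }
            }
        } -ArgumentList (,$services)
    }

    Disconnect-VIServer -Confirm:$false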

 

This script was designed to be set in the Windows scripts section of the guest processing settings within a Veeam Backup and Replication job. I typically only need the SQL writer service myself, but I’ve included VSS in the array here as an example of adding more than one. There are quite a few VSS writers and services that VSS-aware backup products can call on; Veeam’s KB 20141 is a great reference for the ones that can be included here based on your needs.

Reinstalling the Veeam Backup & Replication Powershell SnapIn

As somebody who lives by the old mantra of “eat your own dog food” when it comes to the laptops I use both personally and professionally, I tend to be on the early edge of installs. So while I am not at all ready to start deploying Windows 10 1803 to end users, I’ve recently upgraded my Surface Pro to it. In doing so I found that the upgrade broke access to the Veeam PowerShell snap-in on my laptop when trying to run a script. After some Googling I found a very helpful post on the Veeam Forums, so I thought I’d condense the commands to run here for us all. Let me start with a hat tip to James McGuire for finding this solution to the problem.

For those who aren’t familiar with VBR’s PowerShell capabilities, the snap-in is installed either when you run the full installer on your VBR server or, as in my case, when you install the Remote Console component on another Windows system. Don’t get me started on the fact that Veeam is still using a snap-in to provide PowerShell access (that’s a whole different post), but this is where we are.

The sign that this has occurred is the “Get-PSSnapin : No Windows PowerShell snap-ins matching the pattern ‘VeeamPSSnapin’ were found.” error when you try to access the snap-in. To fix it, you need to use the installutil.exe utility from your latest .NET installation; in my example this is C:\windows\Microsoft.NET\Framework64\v4.0.30319\InstallUtil.exe. If you’ve already installed the VBR Remote Console, the snap-in’s DLL should be at C:\Program Files\Veeam\Backup and Replication\Console\Veeam.Backup.PowerShell.dll. So to get the installation fixed and the snap-in re-registered with PowerShell, you just need to do the following from an elevated PoSH prompt:
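That command isn’t shown in this copy of the post, but based on the two paths above it is a one-liner along these lines (adjust the .NET and Veeam paths to match your install):

    & 'C:\windows\Microsoft.NET\Framework64\v4.0.30319\InstallUtil.exe' 'C:\Program Files\Veeam\Backup and Replication\Console\Veeam.Backup.PowerShell.dll'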

Then to load it and be able to use it, simply:
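The snippet isn’t included here either, but loading the snap-in is the standard one-liner:

    Add-PSSnapin -Name VeeamPSSnapin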

From there it’s up to you what comes next. Happy Scripting!

Fixing the SSL Certificate with Project Honolulu

So if you haven’t heard of it yet, Microsoft is doing some pretty cool stuff in terms of local server management with what they are calling Project Honolulu. The latest version, 1802, was released March 1, 2018, so it is as good a time as any to get off the ground with it if you haven’t yet. If you’ve worked with Server Manager in versions newer than Windows Server 2008 R2, the web interface should be comfortable enough that you can feel your way around, so this post won’t be yet another “cool look at Project Honolulu!” Rather, it will help you with a hiccup in getting it up and running well.

I was frankly a bit amazed that this is evidently a web service from Microsoft not built upon IIS. As such, your only GUI-based opportunity to get the certificate right is during installation, and that is based on the thumbprint at that, so it’s still not exactly user-friendly. In this post I’m going to talk about how to find that thumbprint in a manner that copies well (as opposed to opening the certificate) and then how to replace the certificate on an already up-and-running Honolulu installation. Giving props where they are due, this post was heavily inspired by How to Change the Thumbprint of a Certificate in Microsoft Project Honolulu by Charbel Nemnom.

Step 0: Obtain a certificate: A good place to start would be to obtain or import a certificate onto the server where you’ve installed Project Honolulu. If you want to use a public one, fine, but more likely you’ll have a certificate authority available to you internally. I’m not going to walk you through this again; my friend Luca Dell’Oca has a good write-up on it here. Just do steps 1-3.

Make note of the Application ID here, you’ll use it later

Step 1: Shut it down and gather info: Next we need to shut down the Honolulu service. As most of what we’ll be doing here today is going to be in PowerShell, let’s just do this from the CLI as well.
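The command isn’t shown in this copy of the post; stopping the gateway looks something like the lines below. The service name ServerManagementGateway is my assumption based on a default Honolulu install, so confirm it with Get-Service first.

    Get-Service -Name *Gateway*                  # confirm the actual gateway service name
    Stop-Service -Name ServerManagementGateway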

Now let’s take a look at what’s currently in place. You can do this with the following command; the output should look like the figure to the right. The relevant info to take note of here is 1) the port that we’ve got Honolulu listening on and 2) the Application ID attached to the certificate. I’m just going to reuse the one that’s there, but as Charbel points out this value is generic and you can just create a new one with a GUID generator.
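That command isn’t reproduced here, but listing the current HTTP.SYS certificate bindings is standard netsh, run from an elevated prompt:

    netsh http show sslcert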

Pick a cert, not any cert

Finally, in our quest to gather info, let’s find the thumbprint of our newly loaded certificate. You can do this by using the Get-ChildItem command like this:
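Something along these lines will list the machine certificates with their thumbprints; the Select-Object is just to keep the output copy-friendly:

    Get-ChildItem -Path Cert:\LocalMachine\My | Select-Object Thumbprint, Subject, NotAfter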

As you can see in the second screenshot, that will give you a list of the certificates, with thumbprints, installed on your server. You’ll need the thumbprint of the certificate you imported earlier.

Step 2: Make it happen: OK, now that we’ve got all our information, let’s get this thing swapped. All of this seems to need to be done from the legacy command prompt. First, we want to delete the certificate binding currently in place along with its URL ACL. For the example shown above, where I’m using port 443, it would look like this:
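Those two deletions aren’t shown in this copy of the post; for a gateway listening on 443 they would look roughly like this (substitute your own port if you changed it at install time):

    netsh http delete sslcert ipport=0.0.0.0:443
    netsh http delete urlacl url=https://+:443/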

Now we need to put it all back into place and start things back up. Using the port number, certificate thumbprint, and appid from our example, the command to re-add the SSL certificate would look like this; you, of course, would need to sub in your own information. Next, we need to put the URL ACL back in place. Finally, we just need to start the service back up from PowerShell.
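The commands themselves aren’t included here, so this is a sketch with placeholder values: the thumbprint and appid are the ones you gathered earlier, the account on the urlacl line should match whatever was bound before you deleted it, and the service name is the same assumption as in Step 1.

    netsh http add sslcert ipport=0.0.0.0:443 certhash=<your-new-thumbprint> appid={<application-id-from-earlier>}
    netsh http add urlacl url=https://+:443/ user="NT AUTHORITY\NETWORK SERVICE"
    Start-Service -Name ServerManagementGateway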

Conclusion

At this point you should be getting a shiny green padlock when you go to the site and no more nags about a bad certificate. I hope this component gets easier as the product progresses out of Tech Preview and into production quality, but at least there’s a way.

From Zero to PowerCLI: CentOS Edition

Hi all, just a quickie to get everybody off the ground who is looking to use PowerShell and PowerCLI from things that don’t run Windows. Today VMware released version 10 of PowerCLI with support for installation on both Linux and macOS. This was made possible by the also recently released PowerShell Core 6.0, which allows PowerShell to be installed on *nix variants. While the ability to run it on a Mac really doesn’t do anything for me, I do like to use my iPad with a keyboard case as a quick and easy jump box, and it has frustrated me for a while that I needed to open an RDP session and then run a PowerShell session from within that. With these releases I’m now an SSH session away from the vast majority of my scripting needs, with normal-sized text and everything.

In this post I’ll cover getting both PowerShell Core and PowerCLI installed on a CentOS VM. To be honest, installing both on any other variant is pretty trivial as well; the basic framework for the differences can be found in Microsoft Docs.

Step 1: Installing PowerShell Core 6.0

First, you need to add the PowerShell Core repository to your yum configuration. You may need to amend the “/7/” below if you are running a RHEL 6 variant like CentOS 6.
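The snippet isn’t included in this copy of the post; registering the Microsoft repo generally looks like this, with the “/7/” in the URL being the piece to change for a RHEL 6 variant:

    # Add the Microsoft package repository (RHEL/CentOS 7)
    curl https://packages.microsoft.com/config/rhel/7/prod.repo | sudo tee /etc/yum.repos.d/microsoft.repo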

Once you have your repo added simply install from yum
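The install line isn’t shown in this copy of the post, but it is just the powershell package from the repo you added:

    sudo yum install -y powershell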

Congrats! You now have PowerShell on Linux. To run it, simply run pwsh from the command line and do your thing. If you are like me and use unsigned scripts a good deal, you may want to lower your execution policy at launch. You can do so by adding the parameter shown below.
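The exact launch line isn’t shown in this copy; it’s along these lines, with Bypass just as an example policy:

    pwsh -ExecutionPolicy Bypass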

 

Step 2: Installing VMware PowerCLI

Yes, this is the hard part… Just kidding! It’s just like on Windows; enter the simple one-liner to install all available modules.
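The one-liner isn’t reproduced here, but it is the standard PowerShell Gallery install (the -Scope CurrentUser part is my preference so you don’t need elevated rights inside pwsh):

    Install-Module -Name VMware.PowerCLI -Scope CurrentUser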

If you want to check and see what you’ve installed afterward (as shown in the image)
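That check isn’t shown in this copy of the post; it is simply:

    Get-Module -Name VMware.* -ListAvailable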

If you are like me and starting to run this through its paces in your lab, you are going to have to tell it to ignore certificate warnings to be able to connect to your vCenter. This is simple as well; just use this and you’ll be off and running.
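The setting in question (not shown in this copy) is the PowerCLI configuration cmdlet; scope it to the user or session as you prefer:

    Set-PowerCLIConfiguration -InvalidCertificateAction Ignore -Confirm:$false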

 

Step 3: Profit!

Really, that’s it. Now, to be honest, I am still going to need to jump to something Windows-based to use the ActiveDirectory, DNS, or any other native Windows module, but that’s pretty easy through Enter-PSSession.

Finally, if you have gotten through all of the above and just want to cut and paste, here’s everything in one spot to get you installed.
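Since the combined snippet isn’t included in this copy of the post, here is the whole sequence from the sketches above in one place; the vCenter name is a placeholder.

    # From the CentOS shell
    curl https://packages.microsoft.com/config/rhel/7/prod.repo | sudo tee /etc/yum.repos.d/microsoft.repo
    sudo yum install -y powershell
    pwsh -ExecutionPolicy Bypass

    # Then, inside the pwsh session
    Install-Module -Name VMware.PowerCLI -Scope CurrentUser
    Set-PowerCLIConfiguration -InvalidCertificateAction Ignore -Confirm:$false
    Connect-VIServer -Server vcenter.mydomain.local   # hypothetical vCenter name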

 

 

VVOLs vs. the Expired Certificate

Hi all, I’m writing this to document the fix to an interesting challenge that has pretty much been my life for the last 24 hours or so. Through a comedy of errors and other things happening, we had a situation where the upstream CA above our VMware Certificate Authority (and other things) became unavailable and the certificate authorizing it to manage certificates expired. Over the course of the last couple of days I’ve had to reissue certificates for just about everything, including my Nimble Storage array, and as far as vSphere goes we’ve had to revert all the certificate infrastructure to essentially the out-of-the-box self-signed certificates and then reconfigure the VMCA as a subordinate again under the root CA.

Even after all that I continued to have an issue where my production VVols storage was inaccessible to the hosts. That’s not to say it wasn’t working; amazingly, and as a testament to the design of VVols, my VMs on it ran throughout the process, but I was very limited in terms of managing those VMs. Snapshots didn’t work, backups didn’t work, and for a time even host migrations didn’t work until we reverted to the self-signed certs.

Thanks to a great deal of support and help from both VMware Support and Nimble Storage Support, we were finally able to come up with a runbook for dealing with a VVols situation where major certificate changes have occurred on the vSphere side. This process assumes that by the time you get here all of your certificates, both throughout vSphere and on the Nimble arrays, are good and valid.

  1. Unregister the VASA provider and Web Client integration from the Nimble array. This can be done through the GUI in Administration > VMware Integration by editing your vCenter, unchecking the boxes for the Web Client and VASA Provider, and hitting save. It can also be done via the array’s CLI.
  2. Register the integrations back in. Again, from the GUI simply check the boxes again and hit save. If successful you should see a couple of little green bars briefly appear at the top of the screen saying the process was successful. From the CLI the commands are pretty similar to the unregister ones.
  3. Verify that your VASA provider is available in vCenter and online. This is just to make sure that the integration was successful. In either the Web Client or the HTML5 client go to vCenter > Configure > Storage Providers and look for the entry that matches the name of your array group and whose URL contains the IP address of your array’s management interface. This should show as online. As you have been messing with certificates, it’s probably worth looking at the Certificate Info tab as well while you are here to verify that the certificate is what you expect.
  4. Refresh the CA certificates on each of your hosts. Next, we need to ensure that all of the CA certificates are available on the hosts so they can verify the certificates presented to them by the storage array. To do this you can either right-click each host > Certificates > Refresh CA Certificates, or navigate to each host’s Configure tab and go to Certificate, where there is a button as well. While in that window it is worth looking at the status of each host’s certificate and ensuring that it is Good.
  5. Restart the vvold service on each host. This final step was evidently the hardest one to nail down and find in the documentation. The simplest method may be to simply reboot each of your hosts, as long as you can put them into maintenance mode and evacuate them first. The quicker way, and the way that will let you keep things running, is to enter a shell session on each of your hosts and run the command shown in the sketch just after this list.

    Once done you should see a response like the feature image on this post, and a short while later your VVols array will again become available to each host as you work through them.
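The command itself isn’t included in this copy of the post. Restarting the daemon from an ESXi shell is typically done with its init script, something like the line below; treat it as a sketch and check VMware’s or Nimble’s guidance for your ESXi build first.

    /etc/init.d/vvold restart    # restart the VVol daemon on this ESXi host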

That’s about it. I really cannot thank the engineers at VMware (Sujish) and Nimble (Peter) enough for their assistance in getting me back to good. Also I’d like to thank Pete Flecha for jumping in at the end, helping me and reminding me to blog this.

If nothing else I hope this serves as a reminder to you (as well as myself) that certificates should be well tended to, please watch them carefully. 😉

Making Managing Printers Manageable With Security Groups and Group Policy

I don’t know about the rest of you, but printing has long been the bane of my existence as an IT professional. Frankly, I hate it and believe the world should be 100% paperless by this point. That said, throughout my career my users have done a wonderful job of showing me that I am truly in the minority on this matter, so I have to do my part in making sure printers are available.

As any Windows SysAdmin knows, installing the actual print driver and setting up a TCP/IP port aren’t even half the battle. From there you’ve got to get the printers shared and have the users actually connect to them so that they can use them. It’d be awesome if they would all just sit down and say “I have no printers, let me go to Active Directory and find some,” but I’ve yet to have more than a handful of users who see this as a solution; they just want the damned things there and ready to rock and roll.

In the past I’ve always managed this with a series of old VBS scripts, which still work but require tweaks from time to time. It’s possible to do this kind of stuff with PowerShell these days as well, as long as your user has the Active Directory module imported (hint: they probably don’t). There are also any number of third-party and really expensive Microsoft systems (hi, SCCM!) that will do this as well. But luckily we’ve had a little thing called Group Policy Preferences around for a while now too, and it will do everything we need to make this really manageable, with a nice pretty GUI that you can even teach the help desk intern to manage.

  1. Set up the Print Server(s)- This is the same old, same old. Pick a server or set of servers and set up all your printers and share them. This gives you centralized queue management and all the goodies we know and love.
  2. Create Security Groups- Unless you work in a 10 person office most people won’t necessarily need every printer. I like to create security groups, one per printer, and then assign everybody who needs that printer to the security group. I typically also like to set up these groups with a prefix, usually “prnt”, so that they are all grouped together, but that’s just me. Set these up now and we’ll use them in a minute; there’s a quick PowerShell sketch for creating them right after this list.
  3. Create a new GPO- Truthfully this is a personal preference, but I typically like to create a separate GPO for each major task I want to achieve, aside from baseline things I throw in a domain default policy.
  4. Navigate to User Configuration>Preferences>Control Panel Settings>Printers- Cool, it’s a blank screen! Let’s fill this sucker up with some printing goodness. Start by right-clicking the screen and choosing New>Shared Printer.
  5. Once here you will see the default action is Update. While there is an option for Create, we want to leave the setting at the default because this will allow you more flexibility in the future while still letting you accomplish your goal now.
  6. Go ahead and fill in the share path with the full UNC path to the shared printer, leaving everything else blank, then click on the “Common” tab.
  7. This is where the magic happens so everybody only gets what they need. Check the box for “Item-level targeting” at the bottom and then click the now-available Targeting button.
  8. In the now open Targeting Editor window click the “New Item” button and choose “Security Group.” Note: I like to do this task with Security Groups but as you can see there are lots of options to choose from. You may want to do the assignment based on Active Directory Sites if you have a rotating band of workers for example. Do what fits your organization.
  9. Hit the browse “…” button, go find the group you want this printer added for, then hit OK all the way back out to the GPO screen.
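As promised in step 2, the per-printer security groups are easy to script with the ActiveDirectory module. A minimal sketch; the printer names, OU path, and sample user are all hypothetical, so adjust them to your environment.

    # One 'prnt' group per printer; membership drives the item-level targeting above
    $printers = 'Accounting-HP', 'Warehouse-Zebra'
    foreach ($p in $printers) {
        New-ADGroup -Name "prnt-$p" -GroupScope Global -GroupCategory Security `
            -Path 'OU=Printer Groups,DC=mydomain,DC=local'
    }
    Add-ADGroupMember -Identity 'prnt-Accounting-HP' -Members 'jdoe'   # example membership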

That’s it! You can essentially rinse and repeat these instructions for as many printers and print servers as you need to support. There really isn’t any server magic to the printing; for all GP Preferences cares, these could all be printers shared off individual workstations. I wouldn’t do that, but you know… My one real gripe with this is that there doesn’t seem to be a way to script the GPP side of the process yet. I was able to bulk-install the printers and create the ports on the print server, but doing this work outside of the GUI essentially means exporting the preferences list to an XML file, editing it, and then importing it back in. Eww.
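For the print-server side of that bulk work, the PrintManagement cmdlets that ship with Server 2012 and later cover it. A rough sketch driven by a CSV; the file path, column names, and driver names are hypothetical, and the driver must already be installed on the server.

    # printers.csv columns: Name,IP,Driver,Share  (hypothetical layout)
    Import-Csv -Path 'C:\temp\printers.csv' | ForEach-Object {
        Add-PrinterPort -Name "IP_$($_.IP)" -PrinterHostAddress $_.IP
        Add-Printer -Name $_.Name -DriverName $_.Driver -PortName "IP_$($_.IP)" `
            -Shared -ShareName $_.Share -Published   # -Published lists the share in AD
    }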

P.S. ProTip: Use Delete All For Print Server Migrations

So the idea spark for this post was a need to recreate all the logical printers in response to an office reorganization. The old names made no sense, so we just blew them away and created new ones. One thing I did find out is that since Windows Server 2012 you can create a Printer preference with the Delete action and choose “Delete all shared connections.” Coupled with the Common option “Apply once and do not reapply,” this can be a very effective way to manage a print server migration, reorganization, or any number of other goals I can think of. If you do choose to do this, be sure to 1) make sure any version of this you were using for the “old printers” is gone before you set it to run, and 2) adjust the order of the Printer preferences so the delete action is number 1 in the order. In addition, when I was looking to use it I created it and then immediately right-clicked > Disabled the preference until I was really ready for it to go.