The Basics of Veeam Backup & Replication 9.5 Update 4 Licensing
Thu, 07 Feb 2019
Veeam has recently released the long-awaited Update 4 to their Backup and Replication 9.5 product, and with it come some changes to how they handle licensing. As the workloads that need to be protected/backed up/made available have moved from being 100% on-premises inside our vSphere or Hyper-V environments to mixes of on-prem, off-prem, physical, public cloud, and more, my guess is that customers have asked for a way to make that protection and its licensing portable. Veeam has decided this can be solved with per-instance licensing, which is similar to how you consume many other cloud-based services. This rides along with the established perpetual licensing we still have for VBR and the Veeam Availability Suite.

I will be honest and say that the upgrade was not as smooth as I would have hoped. Now that I've gotten to the bottom of my own licensing issues, I'll post what I've learned here to hopefully keep you from experiencing the same headaches. It's worth noting that there is an FAQ on this, but its content is changing quite a bit as the rollout progresses.

How We Got Here

In the past, if you were using nothing but Veeam Backup and Replication (VBR), you did all your licensing by the socket count of protected hypervisors. Then came the Veeam Agents for Windows and Linux, and with them additional subscription levels for VAW Server, VAW Workstation, and VAL. Because these agents can be managed and deployed via the Veeam Console, that license had to be installed on your VBR server as well, so you now had two separate license files commingled on the server to create the entire solution for protecting VBR and agent workloads.

Now, looking at the present and future, Veeam has lots of different products that are subscription based. Protecting Office 365, AWS instances, and Veeam's orchestration product are all licensed per consumable unit. Further, thanks to Veeam's Service Provider program, you as an end customer have the option of either buying a subscription directly from a VAR or "renting" those licenses from a service provider. As you keep counting up the combinations, you can see where this model needed (and still needs) to be streamlined.

Update 4 License Types

So that brings us to the here and now. For now, and for as far out as I can get anyone to tell me, perpetual (a.k.a. per-socket) licensing for Veeam Backup and Replication and for the Veeam Availability Suite (which includes VBR and Veeam ONE) is here to stay. Any new products, though, will be licensed through a per-instance model going forward. In the middle there is some murkiness, so let's take a look at the options.

  1. Perpetual (per socket) only. This is your traditional Backup and Replication license, licensed per protected socket of hypervisor. You still have to obtain a new Update 4 license from my.veeam.com, but it works exactly the same as before. If you have a Veeam server without any paid VAW/VAL subscriptions attached, you can simply run the installer and continue on your current license. An interesting note: once you install your Update 4 perpetual license, if you have no instances it will automatically provide you with 1 instance per socket, up to a maximum of 6. That's actually a nice little freebie for those of us with a one-off physical box here or there or just a couple of cloud instances.
  2. Instance based. These are the "portable licenses" that can be used for VBR-protected VMs, VAW, VAL, Veeam for AWS, etc. If you are an existing customer you can contact licensing support and migrate your per-socket licenses to this model if you want, but unless you are looking at a ROBO site, need more cloud protection, or have a very distributed use case for Veeam (small on-prem footprint, workstations, physical servers, cloud instances), I don't see this being a winner price-wise. For those of us with traditional workloads, perpetual makes the most sense because it doesn't matter how many VMs we run on our hypervisors; they are all still covered. If you'd like to do the math for yourself, they've provided an instance cost calculator.

    I will mention that I think the calculator misses the fact that, unless they are doing something magical, its numbers are based on buying new. Renewals of perpetual licenses should be far cheaper than the given number, and I've never heard of a subscription license service having a separate renewal rate. It is also worth noting that even if you aren't managing your licensed (as opposed to free) Veeam Agents for Windows and Linux with VBR, you will need to go to the Update 4 license management screen on my.veeam.com and convert your subscription licenses to Update 4 instance licenses to be able to use the 3.0 versions of the software. It doesn't cost anything or make a difference at this point, but while you could buy subscription licenses in any quantity you chose, per-instance licenses have a minimum of 10 and are only sold in packs of 10. So while for now it might be nice that your licenses are rounded up, understand that you'll have to renew at the rounded-up price as well.

    Further, it's worth noting that back when VAW was subscription based there were separate lines for workstations and servers, with 1 server license costing the same as 3 workstation licenses. In the new per-instance model this is reflected by consumption: a server of any kind will consume 1 instance, but a workstation will only consume 0.33 of one. Same idea, different way of viewing it.
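    As a rough illustration of the consumption math above, here is a small sketch of my own (not a Veeam tool; only the 1-instance and 0.33-instance weights and the 10-pack minimum are taken from the licensing model):

```
# Hypothetical helper illustrating per-instance consumption:
# servers consume 1 instance each, workstations 0.33 each,
# and instances are only sold in packs of 10.
function Get-InstancePacksNeeded {
    param(
        [int]$Servers,
        [int]$Workstations
    )
    $consumed = ($Servers * 1) + ($Workstations * 0.33)
    $packs = [math]::Ceiling($consumed / 10)
    [pscustomobject]@{
        InstancesConsumed = [math]::Round($consumed, 2)
        TenPacksToBuy     = $packs
    }
}

# Example: 8 servers and 15 workstations consume 12.95 instances,
# which rounds up to two 10-packs.
Get-InstancePacksNeeded -Servers 8 -Workstations 15
```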

  3. The Hybrid License. This is what you need if you want to manage both perpetual licenses and instances from the same VBR server. If you previously had per-socket licensing for your VMs and subscription licenses for VAW/VAL, you will need to hit the merge button on your Update 4 license management screen. This only works if you are the Primary License Administrator for all support IDs you wish to merge.

Just to make sure it's clear: in previous versions you could have both a per-socket and a subscription license installed at the same time. This is no longer the case, hence option 3. You cannot have a type 1 and a type 2 license installed on the same server; the type 2 will override the type 1. So if you are consuming both perpetual and per-instance licensing under the same VBR server, you must be sure to merge those licenses on my.veeam.com. In order to do so, any and all licenses/support IDs to be merged need to be under the same Primary License Administrator. If you did not set this up previously, you will need to open a case with support to get a common Primary set for your support IDs.

Conclusion

As we begin, or continue, to move our production workloads not only out of our own datacenters but also into others' and the public cloud, those workloads will continue to need protection. For those of us that use Veeam to do so, handling the licensing has, for now, been made simpler and is still cost effective once you get it lined out for yourself.

Dude, Where’s My Managed Service Accounts?
Wed, 23 Jan 2019
So I am probably way late to the game but today’s opportunities to learn have included ADFS and with that the concept of Managed Service Accounts.

What's a Managed Service Account, you ask? We've all installed applications and set the service to run either with the local system account or with a standard Active Directory account. Managed Service Accounts, available since Windows Server 2008 R2 (and greatly enhanced in Windows Server 2012 as group Managed Service Accounts, or gMSAs), let you create a special type of account to be used for services, where Active Directory itself manages the account's security, keeping you secure without having to update passwords regularly.

While there are quite a few great step by step guides for setting things up and then creating your first Managed Service account, I almost immediately ran into an issue where my Active Directory didn’t seem to include the Managed Service Accounts container (CN=Managed Service Accounts,DC=mydomain,DC=local). My domain was at the correct level, Advanced Features were turned on in AD Users & Computers, everything seemed like it should be just fine, the container just wasn’t there. In this post I’ll outline the steps I ultimately took that resulted in getting the problem fixed.

Step 0: Take A Backup

While you are probably already mashing the "take a snapshot" button or starting a backup job, it's worth saying anyway: you are messing with your Active Directory, so be sure to take a backup or snapshot of the Domain Controller(s) that hold the various FSMO roles. Once you've got that backup, depending on how complex your Active Directory is, it might be worth leveraging something like Veeam's SureBackup (er, I mean DataLab) like I did and creating a test bed where you can try this out on last night's backups before doing it in production.

Step 1: ADSI Stuff

Now we have to start manually editing Active Directory. This is because you might have references to Managed Service Accounts in your schema while the container itself is missing. You also have to tell AD it isn't up to date so that the adprep utility can be rerun. Be sure you are logged into your Schema Master Domain Controller as an Enterprise Admin, then launch the ADSI Edit MMC.

  1. Right click ADSI Edit at the top of the structure on the left, Click Connect… and hit OK as long as the Path is the default naming context.
  2. Drill down the menu structure to CN=Domain Updates, CN=System, DC=<mydomain>,DC=<mytld>
  3. Within the Operations Container you will need to delete the following containers entirely.
    1. CN=5e1574f6-55df-493e-a671-aaeffca6a100
    2. CN=d262aae8-41f7-48ed-9f35-56bbb677573d
  4. Now go back up a level, right-click on the CN=ActiveDirectoryUpdate container, and choose Properties
    1. Scroll down until you find the “revision” attribute, click on it and click Edit
    2. Hit the Clear button and then OK
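If you prefer to script the same cleanup, it can be sketched in PowerShell with the ActiveDirectory RSAT module. This is my own hedged sketch, not part of the original procedure; the domain DN is a placeholder, and the container names come from the GUI steps above:

```
# Assumes the ActiveDirectory RSAT module and Enterprise Admin rights.
Import-Module ActiveDirectory

# Substitute your own domain's distinguished name here.
$domainDN = "DC=mydomain,DC=local"
$opsDN    = "CN=Operations,CN=Domain Updates,CN=System,$domainDN"

# Remove the two operation markers so adprep will re-run them.
"CN=5e1574f6-55df-493e-a671-aaeffca6a100,$opsDN",
"CN=d262aae8-41f7-48ed-9f35-56bbb677573d,$opsDN" |
    ForEach-Object { Remove-ADObject -Identity $_ -Recursive -Confirm:$false }

# Clear the revision attribute so the domain no longer reports as up to date.
Set-ADObject -Identity "CN=ActiveDirectoryUpdate,CN=Domain Updates,CN=System,$domainDN" -Clear revision
```

As with the GUI steps, do this against a lab copy or verified backup first.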

Step 2: Run ADPrep /domainPrep

So now we've cleaned out the bad stuff and we just need to run adprep. If you have upgraded your Active Directory to its current level, you have probably done this at least once before, but typically it won't let you run against your domain again once it's been done; that's what clearing the revision attribute above did for us. Now we just need to pop in the (probably virtual) CD and run the command.

  1. Mount the ISO file for your given operating system to your domain controller. You can either do this by putting the ISO on the system, right click, mount or do so through your virtualization platform.
  2. Open up a command line or powershell prompt and navigate to <CDROOT>:\support\adprep
  3. Issue the .\adprep.exe /domainPrep command. If all goes well it should report back “Adprep successfully updated the domain-wide information.”

Now that the process is complete, you should be able to refresh or relaunch your Active Directory Users & Computers window and see that Managed Service Accounts is available right below the root of your domain (as long as Advanced Features is enabled under View). You are now good to go!
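To sanity-check the result from PowerShell, something like the following should now succeed. This is a sketch assuming the ActiveDirectory module; the account and host names are made up for illustration:

```
Import-Module ActiveDirectory

# The container adprep should have created.
Get-ADObject -Identity "CN=Managed Service Accounts,DC=mydomain,DC=local"

# Creating a first gMSA requires a KDS root key to exist in the forest.
# (In a lab you can backdate it so it is usable immediately.)
Add-KdsRootKey -EffectiveTime ((Get-Date).AddHours(-10))

# Hypothetical gMSA for an ADFS farm, retrievable by the listed host.
New-ADServiceAccount -Name "svcADFS" `
    -DNSHostName "svcADFS.mydomain.local" `
    -PrincipalsAllowedToRetrieveManagedPassword "ADFS01$"
```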

Reboot-VSS Script for Veeam Backup Job Pre-Thaw Processing
Thu, 01 Nov 2018
One of the issues that Veeam Backup & Replication users face, as do users of any application-aware backup solution, is that the various VSS writers are, to say the least, typically very finicky. Often you will get warnings about the services, only to run "vssadmin list writers" and see writers either in a failed state or missing entirely. In most of these cases a restart of either the service or the target system itself is a quick, easy fix.

But do you really want to rely on yourself to remember to do this every day? I know I don't, so going with the mantra of "When in doubt, automate," here's a script that will help out. The Reboot-VSS.ps1 script assumes that you are using vSphere tags to dynamically identify the VMs included in your backup jobs; it looks at the services in the given services array and, if they are present on a VM, restarts them.

#   Name:   Reboot-VSS.ps1
#   Description: Restarts a list of services on VMs with a given vSphere tag. Helpful for Veeam B&R pre-thaw processing.
#   For more info on VSS services that may cause failures see https://www.veeam.com/kb2041

Import-Module VMware.PowerCLI

$vcenter = "vcenter.domain.com"
$services = @("SQLWriter","VSS")
$tag = "myAwesomeTag"

Connect-VIServer $vcenter

# Select only the VMs carrying the backup tag
$vms = Get-VM -Tag $tag

ForEach ($vm in $vms){
  ForEach ($service in $services){
    # Only restart services that actually exist on this guest
    If (Get-Service -ComputerName $vm.Name -Name $service -ErrorAction SilentlyContinue) {
      Write-Host $service "on computer" $vm "restarting now."
      Restart-Service -InputObject $(Get-Service -ComputerName $vm.Name -Name $service)
    }
  }
}


This script is designed to be set in the Windows scripts section of the guest processing settings within a Veeam Backup and Replication job. I typically only need the SQL writer service myself, but I've included VSS in the array as well as an example of adding more than one. There are quite a few VSS writers that VSS-aware backup products touch; Veeam's KB 2041 is a great reference for all of those that could be included here based on your needs.

Reinstalling the Veeam Backup & Replication Powershell SnapIn
Tue, 05 Jun 2018
As somebody who lives by the old mantra of "eat your own dog food" when it comes to the laptops I use both personally and professionally, I tend to be on the early edge of installs. So while I am not at all ready to start deploying Windows 10 1803 to end users, I've recently upgraded my Surface Pro to it. In doing so I found that the upgrade broke access to the Veeam PowerShell SnapIn on my laptop when trying to run a script. After some Googling I found a very helpful post on the Veeam Forums, so I thought I'd condense the commands to run here for us all. Let me start with a hat tip to James McGuire for finding this solution to the problem.

For those that aren't familiar with VBR's PowerShell capabilities, the SnapIn is installed either when you run the full installer on your VBR server or, as in my case, when you install the Remote Console component on another Windows system. Don't get me started on the fact that Veeam is still using a SnapIn to provide PowerShell access; that's a whole different post, but this is where we are.

The sign that this has occurred is the error "Get-PSSnapin : No Windows PowerShell snap-ins matching the pattern 'VeeamPSSnapin' were found." when trying to access the SnapIn. To fix this, you need to use the installutil.exe utility from your latest .NET installation; in my example, that is C:\windows\Microsoft.NET\Framework64\v4.0.30319\InstallUtil.exe. If you've already installed the VBR Remote Console, the SnapIn's DLL should be at C:\Program Files\Veeam\Backup and Replication\Console\Veeam.Backup.PowerShell.dll. So to get the installation fixed and the SnapIn re-registered with PowerShell, just run the following from an elevated PowerShell prompt:

C:\windows\Microsoft.NET\Framework64\v4.0.30319\InstallUtil.exe "C:\Program Files\Veeam\Backup and Replication\Console\Veeam.Backup.PowerShell.dll"
Add-PSSnapin VeeamPSSnapin

Then, in any future session, to load the SnapIn and connect to your server simply:

Add-PSSnapin VeeamPSSnapin
Connect-VBRServer -Server <serverFQDN>

From there it’s up to you what comes next. Happy Scripting!
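Once connected, the VBR cmdlets should work again. As a quick smoke test, something like the following can confirm everything is healthy (Get-VBRJob is a standard VBR cmdlet; the property names shown are from memory, so adjust to your version as needed):

```
# List backup jobs to confirm the SnapIn and connection are working
Get-VBRJob | Select-Object Name, JobType, IsScheduleEnabled

# Disconnect when done
Disconnect-VBRServer
```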

Fixing the SSL Certificate with Project Honolulu
Fri, 09 Mar 2018
So if you haven’t heard of it yet Microsoft is doing some pretty cool stuff in terms of Local Server management in what they are calling Project Honolulu. The latest version, 1802, was released March 1, 2018, so it is as good a time as any to get off the ground with it if you haven’t yet. If you’ve worked with Server Manager in versions newer than Windows Server 2008 R2 then the web interface should be comfortable enough that you can feel your way around so this post won’t be yet another “cool look at Project Honolulu!” but rather it will help you with a hiccup in getting it up and running well.

I was frankly a bit amazed that this is evidently a web service from Microsoft not built upon IIS. As such, your only GUI-based opportunity to get the certificate right is during installation, and even that is based on the thumbprint, so it's still not exactly user-friendly. In this post I'm going to cover how to find that thumbprint in a manner that copies well (as opposed to opening the certificate) and then how to replace the certificate on an already up-and-running Honolulu installation. Giving props where they're due, this post was heavily inspired by How to Change the Thumbprint of a Certificate in Microsoft Project Honolulu by Charbel Nemnom.

Step 0: Obtain a certificate: A good place to start would be to obtain or import a certificate to the server where you’ve installed Project Honolulu. If you want to do a public one, fine, but more likely you’ll have a certificate authority available to you internally. I’m not going to walk you through this again, my friend Luca Dell’Oca has a good write up on it here. Just do steps 1-3.


Step 1: Shut it down and gather info: Next we need to shut down the Honolulu service. As most of what we’ll be doing here today is going to be in Powershell let’s just do this by CLI as well.

Get-Service *Gateway | Stop-Service

Now let's take a look at what's currently in place using the following command. The relevant info to note in its output is 1) the port we've got Honolulu listening on and 2) the Application ID attached to the certificate. I'm just going to reuse the existing Application ID, but as Charbel points out it is generic and you can generate a new GUID to use instead.

netsh http show sslcert


Finally, in our quest to gather info let’s find the thumbprint of our newly loaded certificate. You can do this by using the Get-ChildItem command like this

Get-ChildItem -path cert:\LocalMachine\my

That will give you a list of the certificates installed on your server along with their thumbprints. You'll need the thumbprint of the certificate you imported earlier.
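To grab that thumbprint in a copy-paste-friendly way, you can filter by subject and store it in a variable. This is a sketch; the subject string is a placeholder for your own certificate's CN:

```
# Store the thumbprint of the imported certificate for reuse below.
# Replace the subject filter with your certificate's actual CN.
$thumb = (Get-ChildItem -Path cert:\LocalMachine\My |
    Where-Object { $_.Subject -like "*honolulu.mydomain.local*" }).Thumbprint
$thumb
```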

Step 2: Make it happen: OK, now that we've got all our information, let's get this thing swapped. All of this seems to need to be done from the legacy command prompt. First, we want to delete the certificate binding currently in place along with its URL ACL. For the example above, where I'm using port 443, it would look like this:

cmd
netsh http delete sslcert ipport=0.0.0.0:443
netsh http delete urlacl url=https://+:443/

Now we need to put it all back into place and start things back up. Using the port number, certificate thumbprint, and appid from our example, the commands to re-add the SSL certificate and the URL ACL would look like this (you would, of course, sub in your own information). Finally, we start the service back up from PowerShell.

netsh http add sslcert ipport=0.0.0.0:443 certhash=C9BB91F7D8755BD217444046A7E68CEF56E15717 appid={1fb046ab-09b5-4029-9ec5-6e17002d495f}
netsh http add urlacl url=https://+:443/ user="NT Authority\Network Service"
Get-Service *Gateway | Start-Service

Conclusion

At this point you should be getting a shiny green padlock when you go to the site, and no more nags about a bad certificate. I hope that as this thing progresses out of tech preview and into production quality this component gets easier, but at least there's a way.

Cisco Live US 2018: CAE Location and Keynotes Announced!
Tue, 06 Mar 2018
Pictured here is the entrance, five years ago, to the Customer Appreciation Event on the last night of Cisco Live US 2013. This was my first Cisco Live and my first tech conference at all. I was exhausted from all I'd learned and excited by all the new people I'd met. The conference was in Orlando, FL that year, and the CAE was held in a portion of the Universal Studios theme park. This all comes full circle because this year:

  1. I will once again be attending Cisco Live 2018
  2. It will once again be held in Orlando, FL
  3. And the Customer Appreciation Event will be held at THE ENTIRE UNIVERSAL STUDIOS FLORIDA PARK!

Customer Appreciation Event Info

You read that right: for one night only, Cisco customers, employees, and other conference attendees will have the whole park to themselves, with food, drink, and all that jazz included. While the party itself runs from 7:30 to 11:30, attendees will also have non-exclusive access to the Islands of Adventure side of the park starting at 6, so you can get there early, hang out in Diagon Alley, and then hop the Hogwarts Express over to the party when the time comes. Can anybody say geek overload? Once the party starts all of the attractions will be available to you, rides like Transformers: 3D, Harry Potter and the Escape from Gringotts, and Race Through New York Starring Jimmy Fallon, just to name a few.

There will also be a “festival style” music line-up to be announced later. Considering Cisco’s recent track record of musical acts (Aerosmith, Maroon 5, Elle King, Bruno Mars), it’s a good guess that those will be great as well.

Keynote Speakers

There are other announcements out now as well, including the guest keynote speakers. This year it appears Cisco is going all in on the future-looking vibe by having Dr. Michio Kaku and Amy Webb as the Thursday speakers. Dr. Kaku is a renowned theoretical physicist and futurist, while Ms. Webb is also a futurist and the founder of the Future Today Institute. While I don’t know much about them at the moment, I look forward to what they have to say.

Sessions, Labs and Seminars

Finally, it looks like the session catalog has quietly gone live today as well. Here you can begin looking for sessions you think you will find helpful, but my suggestion is always to pick them, for now, by the instructors you really want to be able to interact with. All of these sessions will be available online after the conference, so that frees you up to network (socially, not with wires) while you are there.

What you can’t access after the fact is the labs and seminars Cisco puts on the weekend prior to the conference itself. These come in four- and eight-hour flavors, and as someone who has attended a couple myself, I will tell you they are a very fast way to deep-dive into a topic. The catalog of these has been made available as well, so you may want to check them out.

One note for those of you who, like me, are heavy users of ad blocking in your browser: I noticed that uBlock Origin was keeping the actual list from appearing, so you will need to turn it off to see the session catalogs.

Conclusion

As somebody with a small child who has thus spent a good deal of time in the Orlando area 😉 I’ll have some more to share soon in that regard. If you are heading to the show, feel free to reach out or say hi there! These events are much better when you allow yourself to get out and meet others.

Veeam Vanguard 2018 https://www.koolaid.info/veeam-vanguard-2018/ Tue, 06 Mar 2018 13:27:46 +0000

Here in the US, Thanksgiving Day traditionally falls on the fourth Thursday of November. While it is one of my favorite holidays, today is a day of thankfulness for me, as I’ve been honored to be named a Veeam Vanguard for 2018. I’ve been fortunate enough to have been a part of the group since its inception, and it is one of my highest honors. Thanks as always to Rick, Kirsten, Dmitry, Andrew, Niels, Anthony, Michael, Melissa, and Danny for keeping the Vanguards the best group of its kind around.

To those who have also been renewed into the program, please accept heartfelt congratulations; you’ve earned it through your involvement, and I look forward to trolling right along with you for another year.

The e-mails have just been sent, so there aren’t any statistics yet, but I already see quite a few deserving new members popping up on Twitter. Some I know already and others I look forward to getting to know. One of the really nice things about the Vannies is that we are a small group, so everybody pretty much gets to know everybody. If you are looking for success in this group, please don’t be shy; come be social and share the knowledge you have.

Are you just learning about the program, or didn’t you make the cut this year? If you are active with Veeam, join the conversation in the forums, on Twitter, on Reddit, in any of the various Slack communities, or on your own blog, and it will come. It doesn’t matter where you join, it just matters that you do.

Finally to dear, sweet Vanny Vanguard. We all miss you, please come home. 😉

Why I Blog https://www.koolaid.info/why-i-blog/ Wed, 28 Feb 2018 20:31:27 +0000

If you haven’t noticed, new content on this site has gotten a little scarce over the past few months. To be very honest, my life has been a bit crazy, both professionally and personally, and writing, unfortunately, has been back-burnered. On my drive home yesterday I was listening to Paul Woodward‘s excellent ExploreVM podcast episode with Melissa Palmer, where she was speaking about the process of writing her excellent book “IT Architect Series: The Journey“. This served to remind me that I really should get back to writing, as I’ve got a few topics I really need to blog about.

So, as these things often happen in my head, this led me this morning to think about why I blog in the first place. There are a number of reasons I do this, and I thought that for those who are passionate about any topic, in this case technology, a little bit of the why might be the thing to get you started yourself.

1. So I can remember how I did something. Way back in the olden days (2004ish?), when my blog was called tastyyellowsnow.com, this was the main reason I created it. I was three jobs into my career, and all of my notes for how to do anything were saved in Outlook notes that I moved from work account to PST to work account to PST to work account. I was tired of doing it that way, so I thought I’d try putting it out there. That hawtness ran on some ASP-based package with an Access database on the backend (I still have it!), and while some of the content was absolutely horrible, the reason behind it is still my primary driver: to make sure that if I figured out how to do something a certain way, I could remember how to do it when I invariably had to do it again. Looking through those titles, some like “Changing the Music On Hold Volume in Cisco CallManager” and “Recovering from a Bad Domain Controller Demotion” are still actually relevant. It’s nice to know where to find those things.

2. So that others can learn how I did something. I joked on Twitter the other day that, in preparing a Career Day talk for my daughter’s Kindergarten class, I should title it “SysAdmin: I Google things for those who will not”. If you are new to IT or aspire to work as an *Admin, I cannot express how much of my “how in the world do you know that” is simply being good at feeding errors into Google and processing the results. There may be 20 posts on how to do a single task, but one of them will make more sense to me than the others. Because of that, I try to feed many things back into the collective Google, especially the things that I wasn’t able to find much on or that I had to piece together through multiple KB articles and blog posts. In doing so I really do hope that I help others get their job done without having somebody send them the sticker to the right.

3. Writing something down in a manner you expect others to understand can often provide clarity. There’s an old adage that says “the best way to learn something is to teach it.” While yes, it is cliché, speaking as a former adjunct college professor and current internal staff trainer when needed, it is absolutely true. When I am learning something new, or I have just finished working through a complex issue, I find that documenting it, either here or internally, helps to solidify what the core issue was, what components led to it, how it was solved, and finally how it can be prevented in the future.

Conclusion

Those are the reasons why you see new things here from time to time. I do want to mention one thing you did not see above: gaining access to influencer programs. I’ve been very fortunate to be included in the vExpert and Veeam Vanguard communities, and while many will say the way to get there is through blogging, I disagree. I think the best way to achieve those accolades, and keep them, is to develop your own version of commitment to the tech community. If giving back to the community at large is something you find value in, then you will find a way to do it: blogging, tweeting, podcasting, or any other way. If that’s a goal of yours and blogging or writing isn’t your thing, there are any number of ways to meet that goal as long as you focus on why you are in the community to start with.

As life has its ups and downs, so does the regularity of content here. What are your reasons for blogging? If you have thought about it and haven’t done it yet, why not? Let’s continue the discussion on Twitter by reaching out to @k00laidIT and help the distributed mind grow.

From Zero to PowerCLI: CentOS Edition https://www.koolaid.info/zero-powercli-centos-edition/ Wed, 28 Feb 2018 18:28:07 +0000

Hi all, just a quickie to get everybody off the ground who is looking to use PowerShell and PowerCLI from things that don’t run Windows. Today VMware released version 10 of PowerCLI with support for installation on both Linux and macOS. This was made possible by the also recently released PowerShell Core 6.0, which allows PowerShell to be installed on *nix variants. While the ability to run it on a Mac really doesn’t do anything for me, I do like to use my iPad with a keyboard case as a quick and easy jump box, and it’s frustrated me for a while that I needed an RDP session and then a PowerShell session from within that. With these releases I’m now an SSH session away from the vast majority of my scripting needs, with normal-sized text and everything.

In this post I’ll cover getting both PowerShell Core and PowerCLI installed on a CentOS VM. To be honest, installing both on most other variants is pretty trivial as well; the basic framework of the differences can be found in Microsoft Docs.

Step 1: Installing PowerShell Core 6.0

First, you need to add the PowerShell Core repository to your yum configuration. You may need to amend the “/7/” below if you are running a RHEL 6 variant like CentOS 6.

curl https://packages.microsoft.com/config/rhel/7/prod.repo | sudo tee /etc/yum.repos.d/microsoft.repo
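If it helps to see what that "/7/" is doing, here is a tiny sketch of how the repo URL is put together. The version number here is just an illustration; set it to your actual RHEL/CentOS major release.

```shell
# Sketch: build the Microsoft package repo URL for a given RHEL major version.
# "7" matches the command in this post; substitute 6 for a RHEL 6 / CentOS 6 box.
rhel_major=7
repo_url="https://packages.microsoft.com/config/rhel/${rhel_major}/prod.repo"
echo "$repo_url"
# → https://packages.microsoft.com/config/rhel/7/prod.repo
```

You would then feed that URL to the curl | tee command above in place of the hard-coded path.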

Once you have your repo added simply install from yum

sudo yum install -y powershell

Congrats! You now have PowerShell on Linux. To run it, simply run pwsh from the command line and do your thing. If you are like me and use unsigned scripts a good deal, you may want to lower your execution policy on launch. You can do so by adding a parameter:

pwsh -ep Unrestricted


Step 2: Installing VMware PowerCLI

Yes, this is the hard part… Just kidding! It’s just like on Windows, enter the simple one-liner to install all available modules.

Install-Module -Name VMware.PowerCLI -Scope CurrentUser

If you want to check and see what you’ve installed afterward (as shown in the image)

Get-Module VMware.* -ListAvailable

If you are like me and trying this out in your lab first, you are going to have to tell it to ignore certificate warnings so it can connect to your vCenter. This is simple as well; just use this and you’ll be off and running.

Set-PowerCLIConfiguration -InvalidCertificateAction Ignore


Step 3: Profit!

Really, that’s it. Now, to be honest, I will still need to jump to something Windows-based for the usual ActiveDirectory, DNS, or other native Windows modules, but that’s pretty easy through Enter-PSSession.
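For what it’s worth, that jump can be sketched roughly like this. The host name and user are placeholders of mine, not part of the original post, and this assumes SSH-based remoting (the path supported by PowerShell Core 6 on Linux) is enabled on the Windows side.

```powershell
# Hedged sketch: from pwsh on Linux, hop to a Windows box for the
# ActiveDirectory/DNS modules. winadmin01.lab.local and administrator
# are placeholders; adjust for your environment.
$session = New-PSSession -HostName winadmin01.lab.local -UserName administrator
Enter-PSSession -Session $session
# ...run Get-ADUser, DNS cmdlets, etc., then come back:
Exit-PSSession
Remove-PSSession $session
```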

Finally, if you have gotten through all of the above and just want to cut and paste, here’s everything in one spot to get you installed.

curl https://packages.microsoft.com/config/rhel/7/prod.repo | sudo tee /etc/yum.repos.d/microsoft.repo
sudo yum install -y powershell
pwsh -ep Unrestricted
Install-Module -Name VMware.PowerCLI -Scope CurrentUser
Set-PowerCLIConfiguration -InvalidCertificateAction Ignore

VVOLs vs. the Expired Certificate https://www.koolaid.info/vvols-vs-expired-certificate/ Fri, 12 Jan 2018 20:54:47 +0000

Hi all, I’m writing this to document the fix to an interesting challenge that has pretty much been my life for the last 24 hours or so. Through a comedy of errors and other things happening, we had a situation where the upstream CA from our VMware Certificate Authority (and other things) became unavailable and the certificate authorizing it to manage certificates expired. Over the course of the last couple of days I’ve had to reissue certificates for just about everything, including my Nimble Storage array, and as far as vSphere goes we’ve had to revert all of the certificate infrastructure to essentially the same as the out-of-the-box self-signed guys and then reconfigure the VMCA as a subordinate again under the root CA.

Even after all that, I continued to have an issue where my production VVOLs storage was inaccessible to the hosts. That’s not to say the VMs weren’t working; amazingly, and as a testament to the design of VVOLs, my VMs on it ran throughout the process, but I was very limited in terms of managing them. Snapshots didn’t work, backups didn’t work, and for a time even host migrations didn’t work, until we reverted to the self-signed certs.

Thanks to a great deal of support and help from both VMware Support and Nimble Storage Support, we were finally able to come up with a runbook for dealing with a VVOL situation where major certificate changes have occurred on the vSphere side. There is one assumption to this process: that by the time you’ve gotten here, all of your certificates, both throughout vSphere and on the Nimble arrays, are good and valid.

  1. Unregister the VASA provider and Web Client integration from the Nimble array. This can be done through the GUI in Administration > VMware Integration by editing your vCenter, unchecking the boxes for the Web Client and VASA Provider, and hitting save. It can also be done via the CLI using the commands
    vcenter --unregister <vcenter_name> --extension vasa
    vcenter --unregister <vcenter_name> --extension web
  2. Register the integrations again. From the GUI, simply check the boxes back on and hit save. If successful, you should see a couple of little green bars briefly appear at the top of the screen saying the process was successful. From the CLI, the commands are pretty similar:
    vcenter --register <vcenter_name> --extension vasa
    vcenter --register <vcenter_name> --extension web
  3. Verify that your VASA provider is available in vCenter and online. This is just to make sure the integration was successful. In either the Web Client or the HTML5 client, go to vCenter > Configure > Storage Providers and look for the entry that matches the name of your array group and, in the URL, has the IP address of your array’s management interface. This should show as online. As you have been messing with certificates, it’s probably worth looking at the Certificate Info tab while you are here to verify that the certificate is what you expect.
  4. Refresh the CA certificates on each of your hosts. Next, we need to ensure that all of the CA certificates are available on the hosts so they can verify the certificates presented to them by the storage array. To do this you can either right-click each host > Certificates > Refresh CA Certificates, or navigate to each host’s Configure tab, go to Certificate, and use the button there. While in that window it is worth checking the status of each host’s certificate to ensure that it is Good.
  5. Restart the vvold service on each host. This final step was evidently the hardest one to nail down and find in the documentation. The simplest method may be to reboot each of your hosts, as long as you can put them into maintenance mode and evacuate them first. The quicker way, and the way that will let you keep things running, is to open a shell session on each of your hosts and run the following command:
    /etc/init.d/vvold restart

    Once done, you should see a response like the featured image on this post, and a short while later your VVOLs array will again become available to each host as you work through them.
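If you would rather do the step 3 spot-check from PowerCLI than click through the clients, something like this sketch may help. Get-VasaProvider is a real PowerCLI cmdlet, but the vCenter name is a placeholder and the property names shown are assumptions; verify them against your PowerCLI version.

```powershell
# Hedged sketch: confirm the VASA provider is back and online after
# re-registering the Nimble integration. Assumes you are already able
# to authenticate to vCenter.
Connect-VIServer vcenter.lab.local   # placeholder vCenter name
Get-VasaProvider | Select-Object Name, Status, Url
```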

That’s about it. I really cannot thank the engineers at VMware (Sujish) and Nimble (Peter) enough for their assistance in getting me back to good. I’d also like to thank Pete Flecha for jumping in at the end, helping me, and reminding me to blog this.

If nothing else I hope this serves as a reminder to you (as well as myself) that certificates should be well tended to, please watch them carefully. 😉
