A Minor Subset of the Greater Series of Tubes

Reboot-VSS Script for Veeam Backup Job Pre-Thaw Processing
Thu, 01 Nov 2018

One of the issues that Veeam Backup & Replication users face, along with users of any application-aware backup solution, is that the various VSS writers are typically very finicky, to say the least. Often you will get warnings about the services, only to run a “vssadmin list writers” and see writers either in a failed state or not there at all. In most of these cases a restart of either the service or the target system itself is an easy, quick fix.
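
As a quick way to spot trouble, you can filter the vssadmin output down to the interesting fields right from PowerShell. This is just a convenience sketch; the pattern strings simply match the standard vssadmin output labels.

# List each writer with its state and last error; anything other than
# "[1] Stable" / "No error" is a candidate for a restart
vssadmin list writers | Select-String -Pattern "Writer name","State:","Last error"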

But do you really want to rely on yourself to remember to do this every day? I know I don’t, and going with the mantra of “When in doubt, automate,” here’s a script that will help out. The Reboot-VSS.ps1 script assumes that you are using vSphere tags to dynamically identify VMs to be included in backup jobs; it looks at the services in the given services array and, if they are present on a VM, restarts them.

#   Name:   Reboot-VSS.ps1
#   Description: Restarts a list of services on VMs with a given vSphere tag. Helpful for Veeam B&R pre-thaw processing.
#   For more info on Veeam VSS services that may cause failure see https://www.veeam.com/kb2041

Import-Module VMware.PowerCLI

$vcenter = "vcenter.domain.com"
$services = @("SQLWriter","VSS")
$tag = "myAwesomeTag"
Connect-VIServer $vcenter

# Only target the VMs carrying the backup tag
$vms = Get-VM -Tag $tag

ForEach ($vm in $vms){
  ForEach ($service in $services){
    # Assumes the VM name resolves as the guest's computer name
    If (Get-Service -ComputerName $vm.Name -Name $service -ErrorAction SilentlyContinue) {
      Write-Host $service "on computer" $vm "restarting now."
      Restart-Service -InputObject $(Get-Service -ComputerName $vm.Name -Name $service)
    }
  }
}

 

This script was designed to be set in the Windows scripts section of the guest processing settings within a Veeam Backup & Replication job. I typically only need the SQL writer service myself, but I’ve included VSS in the array as well here as an example of adding more than one. There are quite a few VSS services that VSS-aware backup solutions can access; Veeam’s KB 2041 is a great reference for all of those that can be included here based on your need.

Reinstalling the Veeam Backup & Replication Powershell SnapIn
Tue, 05 Jun 2018

As somebody who lives by the old mantra of “Eat your own dog food” when it comes to the laptops I use both personally and professionally, I tend to be on the early edge of installs. So while I am not at all ready to start deploying Windows 10 1803 to the end users, I’ve recently upgraded my Surface Pro to it. In doing so I found that the upgrade broke access to the Veeam Powershell SnapIn on my laptop when trying to run a script. After some Googling I found a very helpful post on the Veeam Forums that I thought I’d condense the commands to run here for us all. Let me start with a hat tip to James McGuire for finding this solution to the problem.

For those that aren’t familiar with VBR’s Powershell capabilities, the SnapIn is installed either when you run the full installer on your VBR server or, as in my case, when you install the Remote Console component on another Windows system. Don’t get me started on the fact that Veeam is still using a SnapIn to provide PowerShell access, that’s a whole different post, but this is where we are.

The sign that this has occurred is when you get the “Get-PSSnapin : No Windows PowerShell snap-ins matching the pattern ‘VeeamPSSnapin’ were found.” error when trying to get access to the SnapIn. In order to fix this, you need to use the installutil.exe utility in your latest .Net installation. In my example, this would be C:\windows\Microsoft.NET\Framework64\v4.0.30319\InstallUtil.exe. If you’ve already installed the VBR Remote Console, the SnapIn’s DLL should be at C:\Program Files\Veeam\Backup and Replication\Console\Veeam.Backup.PowerShell.dll. So to get the SnapIn registered and available to Powershell again, you just need to run the following from an elevated PoSH prompt (note the quotes around the DLL path, which contains spaces):

C:\windows\Microsoft.NET\Framework64\v4.0.30319\InstallUtil.exe "C:\Program Files\Veeam\Backup and Replication\Console\Veeam.Backup.PowerShell.dll"
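
Optionally, you can confirm the registration took before loading anything by listing the registered snap-ins:

# Should return VeeamPSSnapin if InstallUtil did its job
Get-PSSnapin -Registered -Name VeeamPSSnapin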

Then, in any new session, to load it and be able to use it simply:

Add-PSSnapin VeeamPSSnapin
Connect-VBRServer -Server <serverFQDN>

From there it’s up to you what comes next. Happy Scripting!

Fixing the SSL Certificate with Project Honolulu
Fri, 09 Mar 2018

If you haven’t heard of it yet, Microsoft is doing some pretty cool stuff in terms of local server management in what they are calling Project Honolulu. The latest version, 1802, was released March 1, 2018, so it is as good a time as any to get off the ground with it if you haven’t yet. If you’ve worked with Server Manager in versions newer than Windows Server 2008 R2, the web interface should be comfortable enough that you can feel your way around, so this post won’t be yet another “cool look at Project Honolulu!”; rather, it will help you with a hiccup in getting it up and running well.

I was frankly a bit amazed that this is evidently a web service from Microsoft not built upon IIS. As such, your only GUI-based opportunity to get the certificate right is during installation, and that is based on the thumbprint at that, so still not exactly user-friendly. In this post, I’m going to talk about how to find that thumbprint in a manner that copies well (as opposed to opening the certificate) and then how to replace the certificate on an already up-and-running Honolulu installation. Giving props where they're due, this post was heavily inspired by How to Change the Thumbprint of a Certificate in Microsoft Project Honolulu by Charbel Nemnom.

Step 0: Obtain a certificate: A good place to start would be to obtain or import a certificate to the server where you’ve installed Project Honolulu. If you want to do a public one, fine, but more likely you’ll have a certificate authority available to you internally. I’m not going to walk you through this again; my friend Luca Dell’Oca has a good write-up on it here. Just do steps 1-3.

Make note of the Application ID here, you’ll use it later

Step 1: Shut it down and gather info: Next we need to shut down the Honolulu service. As most of what we’ll be doing here today is going to be in Powershell let’s just do this by CLI as well.

Get-Service *Gateway | Stop-Service

Now let’s take a look at what’s currently in place. You can do this with the following command; the output should look like the figure to the right. The relevant info we want to take note of here is 1) the port that we’ve got Honolulu listening on and 2) the Application ID attached to the certificate. I’m just going to reuse the one there, but as Charbel points out, this is generic and you can just generate a new one with a GUID generator.

netsh http show sslcert

Pick a cert, not any cert

Finally, in our quest to gather info, let’s find the thumbprint of our newly loaded certificate. You can do this by using the Get-ChildItem command like this:

Get-ChildItem -path cert:\LocalMachine\my

As you can see in the second screenshot that will give you a list of the certificates with thumbprints installed on your server. You’ll need the thumbprint of the certificate you imported earlier.
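
If you’d rather not copy the thumbprint by hand, you can also pull it straight into a variable. The subject filter here is a hypothetical example; adjust it to match the certificate you imported.

# Grab the thumbprint of the imported certificate by a piece of its subject
$thumb = (Get-ChildItem cert:\LocalMachine\my | Where-Object {$_.Subject -like "*honolulu*"}).Thumbprint
$thumb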

Step 2: Make it happen: OK, now that we’ve got all our information, let’s get this thing swapped. All of this seems to need to be done from the legacy command prompt. First, we want to delete the certificate binding currently in place along with its URL ACL. For the example shown above, where I’m using port 443, it would look like this:

cmd
netsh http delete sslcert ipport=0.0.0.0:443
netsh http delete urlacl url=https://+:443/

Now we need to put it back into place and start things back up. Using the port number, certificate thumbprint, and appid from our example, the command to re-add the SSL certificate would look like this; you, of course, would need to sub in your own information. Next, we need to put the URL ACL back in place. Finally, we just need to start the service back up from PowerShell.

netsh http add sslcert ipport=0.0.0.0:443 certhash=C9BB91F7D8755BD217444046A7E68CEF56E15717 appid={1fb046ab-09b5-4029-9ec5-6e17002d495f}
netsh http add urlacl url=https://+:443/ user="NT Authority\Network Service"
Get-Service *Gateway | Start-Service

Conclusion

At this point, you should be getting a shiny green padlock when you go to the site and no more nags about a bad certificate. I hope that as this thing progresses out of Tech Preview and into production quality this component gets easier, but at least there’s a way.

Cisco Live US 2018: CAE Location and Keynotes Announced!
Tue, 06 Mar 2018

Pictured here is the entrance, five years ago, to the Customer Appreciation Event on the last night of Cisco Live US 2013. This was my first Cisco Live and my first tech conference at all. I was exhausted from all I’d learned and excited by all the new people I’d met. The conference was in Orlando, FL that year and the CAE was held in a portion of the Universal Studios theme park. This all comes full circle because this year I will once again be attending Cisco Live 2018, it will once again be held in Orlando, FL, and the Customer Appreciation Event will be held at THE ENTIRE UNIVERSAL STUDIOS FLORIDA PARK!

Customer Appreciation Event Info

You read that right: for one night only, Cisco customers, employees and other conference attendees will have the whole park to themselves with food, drink, and all that jazz included. While the party itself is from 7:30 to 11:30, attendees will also have non-exclusive access to the Islands of Adventure side of the park starting at 6, so you can get there early, hang out in Diagon Alley and then hop the Hogwarts Express over to the party when the time comes. Can anybody say Geek Overload? Once the party starts all of the attractions will be available to you, rides like Transformers: 3D, Harry Potter and the Escape from Gringotts, and Race Through New York Starring Jimmy Fallon just to name a few.

There will also be a “festival style” music line-up to be announced later. Considering Cisco’s recent track record of musical acts (Aerosmith, Maroon 5, Elle King, Bruno Mars) it’s a good guess that those will be great as well.

Keynote Speakers

There are other announcements out now as well. Included in these are the guest keynote speakers. This year it appears Cisco is going all in on the future-looking vibe by having Dr. Michio Kaku and Amy Webb as the Thursday speakers. Dr. Kaku is a renowned theoretical physicist and futurist, while Ms. Webb is also a futurist and the founder of the Future Today Institute. While I don’t know much about them at the moment, I look forward to what they have to say.

Sessions, Labs and Seminars

Finally, it looks like the session catalog has quietly gone live today as well. Here you can begin looking for sessions you think you will find helpful, but I will tell you it is always my suggestion to pick these, for now, by the instructors you may really want to be able to interact with. All of these sessions will be available online after the conference, so that frees you up to network (socially, not with wires) while you are there.

What you can’t access after the fact are the labs and seminars Cisco puts on the weekend prior to the conference itself. These come in 4- and 8-hour flavors and, as someone who has attended a couple myself, I will tell you they are a very fast way to deep-dive into a topic. The catalog of these has been made available as well, so you may want to check them out.

One note for those of you who, like me, are heavy users of ad blocking in your browser: I noticed that uBlock Origin was keeping the actual list from appearing, so you will need to turn it off to see the session catalogs.

Conclusion

As somebody with a small child who has thus spent a good deal of time in the Orlando area 😉 I’ll have some more to share soon in that regard. If you are heading to the show, feel free to reach out or say hi there! These events are much better when you allow yourself to get out and meet others.

Veeam Vanguard 2018
Tue, 06 Mar 2018

Here in the US, Thanksgiving Day traditionally falls on the fourth Thursday of November. While it is one of my favorite holidays, today is a day of thankfulness for me as I’ve been honored to be named a Veeam Vanguard for 2018. I’ve been fortunate enough to have been a part of the group since its inception and it is one of my highest honors. Thanks as always to Rick, Kirsten, Dmitry, Andrew, Niels, Anthony, Michael, Melissa and Danny for keeping the Vanguards the best program of its kind around.

To those who have also been renewed into the program please accept a heartfelt congratulations as you’ve earned it through your involvement and I look forward to trolling right along with you for another year.

While the e-mails have just been sent, so there aren’t any statistics yet, I see quite a few new members who are quite deserving popping up on Twitter. Some I know already and others I look forward to getting to know. One of the really nice things about the Vannies is we are a small group, so everybody pretty much gets to know everybody. If you are looking for success in this group please don’t be shy; come be social and share the knowledge you have.

Are you just learning about the program or didn’t make the cut this year? If you are active with Veeam, join the conversation in the forums, on Twitter, on Reddit, in any of the various Slack communities, or on your own blog, and it will come. It doesn’t matter where you join, it just matters that you do.

Finally to dear, sweet Vanny Vanguard. We all miss you, please come home. 😉

Why I Blog
Wed, 28 Feb 2018

If you haven’t noticed, new content on this site has gotten a little scarce over the past few months. To be very honest, my life seems to have been a bit crazy, both professionally and personally, and writing, unfortunately, has been back-burnered. On my drive home yesterday I was listening to Paul Woodward’s excellent ExploreVM podcast episode with Melissa Palmer, where she was speaking about the process of writing her excellent book “IT Architect Series: The Journey“. This served to remind me that I really should get back to writing, as I’ve got a few topics I really need to blog about.

So as these things often happen in my head this led me this morning to think about why I blog in the first place. There are a number of reasons I do this and I thought for those that are passionate about any topic but in this case about technology maybe a little bit of the why would be the thing to get you started yourself.

1. So I can remember how I did something. Way back in the olden days (2004ish?) when my blog was called tastyyellowsnow.com, this was the main reason I created the blog. I was 3 jobs into my career and all of my notes for how to do anything were saved in Outlook notes that I moved from a work account to a PST, to a work account, to a PST, to a work account. I was tired of doing it that way so I thought I’d try putting it out there. That hawtness ran on some ASP-based package with an Access database on the backend (I still have it!), and while some of the content was absolutely horrible, the reason behind it is still my primary driver: to make sure that if I figured out how to do something a certain way, I could remember how to do it when I invariably had to do it again. Looking through some of those titles, some like “Changing the Music On Hold Volume in Cisco CallManager” and “Recovering from a Bad Domain Controller Demotion” are still actually relevant. It’s nice to know where to find those things.

2. So that others can learn how I did something. I joked on Twitter the other day, in preparing a Career Day talk for my daughter’s Kindergarten class, that I should title it “SysAdmin: I Google things for those who will not”. If you are new to IT or aspire to work as an *Admin, I cannot express how much of my “how in the world do you know that” is simply being good at feeding errors into Google and processing the results. There may be 20 posts on how to do a single task, but one of them will make more sense to me than the others. Because of that, I try to feed many things back into the collective Google, especially the things that I wasn’t able to find much on or that I had to piece together through multiple KB articles and blog posts. In doing so I really do hope that I help others get their job done without having somebody send them the sticker to the right.

3. Writing something down in a manner you expect others to understand can often provide clarity. There’s an old adage that says “the best way to learn something is to teach it.” While yes, it is cliché, speaking as a former adjunct college professor and current internal staff trainer when needed, it is absolutely true. When I am learning something new or I have just finished working through a complex issue I find that documenting it, either here or internally helps to solidify what the core issue was, what components led to the issue, how the problem was solved and finally how it can be prevented in the future.

Conclusion

Those are the reasons why you see new things here from time to time. I do want to mention one thing you did not see above, and that is gaining access to influencer programs. I’ve been very fortunate to be included in the vExpert and Veeam Vanguard communities, and while many will say the way to get there is through blogging, I disagree. I think the best way to achieve those accolades and keep them is to develop your own version of commitment to the tech community. If you find value in giving things back to the community at large, then you will find a way to do it: blogging, tweeting, podcasting, or any other way. If that’s a goal of yours and blogging or writing isn’t your thing, there are any number of ways to meet that goal as long as you focus on why you are in the community to start with.

As life has its ups and downs so does the regularity of content here. What are your reasons for blogging? If you have thought about it and haven’t done it yet, why not? Let’s continue the discussion on Twitter by reaching out @k00laidIT and help the distributed mind grow.

From Zero to PowerCLI: CentOS Edition
Wed, 28 Feb 2018

Hi all, just a quickie to get everybody off the ground who is looking to use both PowerShell and PowerCLI from things that don’t run Windows. Today VMware released version 10 of PowerCLI with support for installation on both Linux and macOS. This was made possible by the also recently released PowerShell Core 6.0, which allows PowerShell to be installed on *nix variants. While the ability to run it on a Mac really doesn’t do anything for me, I do like to use my iPad with a keyboard case as a quick and easy jump box, and it has frustrated me for a while that I needed to open an RDP session and then run a Powershell session from within that. With these releases I’m now an SSH session away from the vast majority of my scripting needs, with normal-sized text and everything.

In this post I’ll cover getting both Powershell Core and PowerCLI installed on a CentOS VM. To be honest, installing both on any other variant is pretty trivial but the basic framework of the difference can be found in Microsoft Docs.

Step 1: Installing Powershell Core 6.0

First, you need to add the Powershell Core repository to your yum configuration. You may need to amend the “/7/” below if you are running a RHEL 6 variant like CentOS 6.

curl https://packages.microsoft.com/config/rhel/7/prod.repo | sudo tee /etc/yum.repos.d/microsoft.repo

Once you have your repo added simply install from yum

sudo yum install -y powershell

Congrats! You now have PowerShell on Linux. To run it simply run pwsh from the command line and do your thing. If you are like me and use unsigned scripts a good deal, you may want to lower your execution policy on launch. You can do so by adding the ExecutionPolicy (-ep) parameter:

pwsh -ep Unrestricted

 

Step 2: Installing VMware PowerCLI

Yes, this is the hard part… Just kidding! It’s just like on Windows, enter the simple one-liner to install all available modules.

Install-Module -Name VMware.PowerCLI -Scope CurrentUser

If you want to check and see what you’ve installed afterward (as shown in the image)

Get-Module VMware.* -ListAvailable

If you are like me and starting to run this through its paces in your lab, you are going to have to tell it to ignore certificate warnings to be able to connect to your vCenter. This is simple as well; just use this and you’ll be off and running.

Set-PowerCLIConfiguration -InvalidCertificateAction Ignore

 

Step 3: Profit!

Really, that’s it. Now, to be honest, I am still going to need to jump to something Windows-based to run the normal ActiveDirectory, DNS or any other native Windows-type module, but that’s pretty easy through Enter-PSSession.
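
For completeness, here’s roughly what that jump looks like. The server name is a made-up example, and note that WSMan remoting from Linux to Windows generally needs some extra authentication setup (Basic over HTTPS or NTLM support), so treat this as a sketch rather than a turnkey command.

# Hypothetical Windows jump box; prompts for credentials, then gives you
# a remote session where the Windows-only modules live
Enter-PSSession -ComputerName winbox.domain.com -Credential (Get-Credential)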

Finally, if you have got through all of the above and just want to cut and paste, here’s everything in one spot to get you installed.

curl https://packages.microsoft.com/config/rhel/7/prod.repo | sudo tee /etc/yum.repos.d/microsoft.repo
sudo yum install -y powershell
pwsh -ep Unrestricted
Install-Module -Name VMware.PowerCLI -Scope CurrentUser
Set-PowerCLIConfiguration -InvalidCertificateAction Ignore

 

 

VVOLs vs. the Expired Certificate
Fri, 12 Jan 2018

Hi all, I’m writing this to document a fix to an interesting challenge that has pretty much been my life for the last 24 hours or so. Through a comedy of errors and other things happening, we had a situation where the upstream CA from our VMware Certificate Authority (and other things) became very unavailable and the certificate authorizing it to manage certificates expired. Over the course of the last couple of days I’ve had to reissue certificates for just about everything, including my Nimble Storage array, and as far as vSphere goes we’ve had to revert all the certificate infrastructure to essentially the same as the out-of-the-box self-signed guys and then reconfigure the VMCA as a subordinate again under the root CA.

Even after all that I continued to have an issue where my Production VVOLs storage was inaccessible to the hosts. That’s not to say they weren’t working, amazingly and as a testament to the design of how VVOLs works my VMs on it ran throughout the process, but I was very limited in terms of the management of those VMs. Snapshots didn’t work, backups didn’t work, for a time even host migrations didn’t work until we reverted to the self-signed certs.

Thanks to a great deal of support and help from both VMware support and Nimble Storage support, we were finally able to come up with a runbook for dealing with a VVOL situation where major certificate changes occurred on the vSphere side. There is an assumption to this process: that by the time you’ve gotten here, all of your certificates, both throughout vSphere as well as on the Nimble arrays, are good and valid.

  1. Unregister the VASA provider and Web Client integration from the Nimble array. This can be done either through the GUI in Administration>VMware Integration by editing your vCenter, unchecking the boxes for the Web Client and VASA Provider and hitting save. This can also be done via the CLI using the command
    vcenter --unregister <vcenter_name> --extension vasa
    vcenter --unregister <vcenter_name> --extension web
  2. Register the integrations back in. Again, from the GUI simply just check the boxes back and hit save. If successful you should see a couple of little green bars briefly appear at the top of the screen saying the process was successful. From the CLI the commands are pretty similar
    vcenter --register <vcenter_name> --extension vasa
    vcenter --register <vcenter_name> --extension web
  3. Verify that your VASA provider is available in vCenter and online. This is just to make sure that the integration was successful. In either the Web Client or the HTML5 client go to vCenter> Configure> Storage Provider and look for the entry that matches the name of your array group and in the URL has the IP address of your array’s management interface. This should show as online. As you have been messing with certificates its probably worth looking at the Certificate Info tab as well while you are here to verify that the certificate is what you expect.
  4. Refresh the CA certificates on each of your hosts. Next, we need to ensure that all of the CA certificates are available on the hosts so they can verify the certificates presented to them by the storage array. To do this you can either right-click each host > Certificates > Refresh CA Certificates or, if you navigate to the Configure tab of each host and go to Certificate, there is a button there as well. While in the window it is worth looking at the status of each host’s certificate and ensuring that it is Good.
  5. Restart the vvold service on each host. This final step was evidently the hardest one to nail down and find in the documentation. The simplest method may be to simply reboot each of your hosts, as long as you can put them into maintenance mode and evacuate them first. The quicker way, and the way that will let you keep things running, is to enter a shell session on each of your hosts and simply run the following command (a loop for doing this across several hosts is sketched after this list):
    /etc/init.d/vvold restart

    Once done you should see a response like the feature image on this post and a short while later your VVOLs array will again become available for each host as you work on them.
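
If you have more than a couple of hosts, you can drive that restart over SSH in a loop instead of opening a shell on each one. A rough sketch, assuming the SSH service is enabled on the hosts and an OpenSSH client is available where you run it; the host names are placeholders.

# Hypothetical host list; restarts vvold on each host via SSH
$vmhosts = "esx01.domain.com","esx02.domain.com","esx03.domain.com"
foreach ($h in $vmhosts) {
    ssh "root@$h" "/etc/init.d/vvold restart"
}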

That’s about it. I really cannot thank the engineers at VMware (Sujish) and Nimble (Peter) enough for their assistance in getting me back to good. I’d also like to thank Pete Flecha for jumping in at the end, helping me and reminding me to blog this.

If nothing else I hope this serves as a reminder to you (as well as myself) that certificates should be well tended to, please watch them carefully. 😉

Making Managing Printers Manageable With Security Groups and Group Policy
Tue, 31 Oct 2017

I don’t know about the rest of you but printing has long been the bane of my existence as an IT professional. Frankly, I hate it and believe the world should be 100% paperless by this point. That said, throughout my career, my users have done a wonderful job of showing me that I am truly in the minority on this matter so I have to do my part in making sure they are available.

As any Windows SysAdmin knows, installing the actual print driver and setting up a TCP/IP port aren’t even half the battle. From there you’ve got to get the printers shared and have the users actually connect to them so that they can use them. It’d be awesome if they would all just sit down and say “I have no printers, let me go to Active Directory and find some,” but I’ve yet to have more than a handful of users who see this as a solution; they just want the damned things there and ready to rock and roll.

In the past, I’ve always managed this with a series of old VBS scripts, which still works but requires tweaks from time to time. It’s possible to do this kind of stuff with Powershell these days as well, as long as your user has the Active Directory module imported (hint: they probably don’t). There are also any number of other 3rd-party and really expensive Microsoft systems (hi SCCM!) that will do this as well. But luckily we’ve had a little thing called Group Policy Preferences around for a while now too, and it will do everything we need to make this really manageable, with a nice pretty GUI that you can even teach the Help Desk intern how to manage.

  1. Setup the Print Server(s)- This is the same old, same old. Pick a server or set of servers and setup all your printers and share them. This gives you centralized queue management and all the goodies we know and love.
  2. Create Security Groups- Unless you work in a 10-person office, most people won’t necessarily need every printer. I like to create security groups, one per printer, and then assign everybody who needs that printer to the security group. I typically also like to set up these groups with a prefix, usually “prnt”, so that they are all grouped together, but that’s just me. Set these up now and we’ll use them in a minute (a quick PowerShell sketch for creating them in bulk follows this list).
  3. Create a new GPO- Truthfully this a personal preference, but I typically like to create a separate GPO for each major task I want to achieve aside from baseline things I through in a domain default policy.
  4. Navigate to Users>Preferences>Control Panel Settings>Printers- Cool, it’s a blank screen! Let’s fill this sucker up with some printing goodness. Start by right-clicking the screen and choosing New>Shared Printer.
  5. Once here you will see the default action is Update. While there is an option for Create, we want to leave the setting at the default because this will allow you more flexibility in the future while still letting you accomplish your goal now.
  6. Go ahead and fill in the share path with the full UNC path to the shared printer leaving everything else blank then click on the “Common” tab.
  7. This is where the magic happens so everybody only gets what they need. Check the box for “Item-level targeting” at the bottom and then click the now available button
  8. In the now open Targeting Editor window click the “New Item” button and choose “Security Group.” Note: I like to do this task with Security Groups but as you can see there are lots of options to choose from. You may want to do the assignment based on Active Directory Sites if you have a rotating band of workers for example. Do what fits your organization.
  9. Hit the browse “…” button and go find your group you want to have this printer added for then hit OK all the way back out to the GPO screen.
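
As promised in step 2, here’s a minimal sketch of bulk-creating those groups with the ActiveDirectory module. The printer names and OU path are made-up examples; swap in your own.

# Hypothetical printer list and OU; creates one "prnt" security group per printer
Import-Module ActiveDirectory
$printers = "Accounting-HP-M605","FrontDesk-Brother-L2700"
foreach ($p in $printers) {
    New-ADGroup -Name "prnt-$p" -GroupScope Global -GroupCategory Security -Path "OU=Printer Groups,DC=domain,DC=com"
}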

That’s it! You can essentially rinse and repeat these instructions for as many printers and print servers as you need to support. There really isn’t even any server magic to the printing; for all GP Preferences cares, these could all be printers shared off individual workstations. I wouldn’t do that, but you know… My one real gripe with this is there doesn’t seem to be a way to script your way out of the process yet. I was able to bulk install the printers and create the ports on the print server, but doing this work outside of the GUI essentially means exporting the preferences list to an XML file, editing it and then importing it back in. Eww.
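
For what it’s worth, the bulk install side on the print server can be scripted with the PrintManagement module that ships with Server 2012 and later. A minimal sketch, assuming the driver named below is already installed on the server; all names and addresses are examples.

# Create a TCP/IP port and a shared printer on top of it
Add-PrinterPort -Name "IP_10.0.10.51" -PrinterHostAddress "10.0.10.51"
Add-Printer -Name "Accounting-HP-M605" -DriverName "HP Universal Printing PCL 6" -PortName "IP_10.0.10.51" -Shared -ShareName "Accounting-HP-M605"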

P.S. ProTip: Use Delete All For Print Server Migrations

So the idea spark for this post was a need to recreate all the logical printers in response to an office reorganization. The old names made no sense, so we just blew them away and created new ones. One thing I did find out is that since Windows Server 2012 you can create a Printer Preference of type Delete and choose “Delete all shared connections.” Coupled with the common options of “Apply once and do not reapply”, this can be a very effective way to manage a print server migration, reorganization, or any number of other goals I can think of. If you do choose to do this, be sure to 1) make sure any version of this you were using for the “old printers” is gone before you set this to run and 2) adjust the order of the Printer Preferences so it is number 1 in the order. In addition, when I was looking to use it, I created it and then immediately right-click > disabled the preference until I was really ready for it to go.

Creating Staff Notification Mail Contacts in Exchange
Fri, 27 Oct 2017

Just a quick post with a script I’ve just written. Living in WV, we from time to time have to let staff know that the offices will be closed for various reasons, from heavy snow to chemical companies dumping large quantities of chemicals into the area’s water supply. For this reason, we maintain a basic emergency staff notification process that requires an authorized person to send an e-mail to a certain address, which will then carpet-bomb staff who chose to opt in with text messages and e-mails to their personal (as opposed to business) e-mail addresses. This is all powered by creating hidden mail contacts on our Exchange server for the personal address as well as the e-mail address that corresponds to the user’s mobile provider. These addresses are all then dynamically added to a distribution list that is restricted by who can send to it.
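
The distribution list end of this isn’t in the script below, but for illustration, a dynamic distribution group over the contacts OU with a sender restriction could look something like this; the group name, OU and sender address are all hypothetical.

# Dynamic DL that picks up all mail contacts in the OU, locked to one sender
New-DynamicDistributionGroup -Name "Emergency Notify" -RecipientContainer "domain.com/Staff/Contacts" -IncludedRecipients MailContacts
Set-DynamicDistributionGroup "Emergency Notify" -AcceptMessagesOnlyFrom "hr.director@domain.com"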

To be honest, the system is mostly automatic with the exception of needing to make sure new contacts get put in and old contacts get taken out. Taking them out via the GUI is pretty simple, just right-click > delete, but it seems to take lots of steps to add them in. So in the script below I’ve automated the process of interrogating the admin entering them and then using that information to automatically create the contacts and then hide them from the Global Address List.

#New-DRContact.ps1
#Allows for the creation of Exchange 2010-2016 Mail Contacts for both a mobile phone and a personal e-mail address
#If you wish these addresses to be listed in the GAL comment the Set-MailContact lines

$OU = Read-Host -Prompt "OU to put the contact in (format domain/staff/contacts)"
$FirstName = Read-Host -Prompt "What is the user's first name"
$LastName = Read-Host -Prompt "What is the user's last name"
$Mobile = Read-Host -Prompt "What is the mobile number e-mail address (Common Domains- txt.att.net, vtext.com, pcs.ntelos.net, sprintpcs.com)"
$Personal = Read-Host -Prompt "What is the user's personal e-mail address"

if ($Mobile) {
    $MobileCName = $FirstName + " " + $LastName + " Mobile"
    $MCAlias = $FirstName.ToLower().Substring(0,1) + $LastName.ToLower() + "mobile"
    New-MailContact -Name $MobileCName -Alias $MCAlias -ExternalEmailAddress $Mobile -OrganizationalUnit $OU
    Set-MailContact $MCAlias -HiddenFromAddressListsEnabled $true    
}

if ($Personal) {
    $PersonalCName = $FirstName + " " + $LastName + " Personal"
    $PCAlias = $FirstName.ToLower().Substring(0,1) + $LastName.ToLower() + "personal"
    New-MailContact -Name $PersonalCName -Alias $PCAlias -ExternalEmailAddress $Personal -OrganizationalUnit $OU
    Set-MailContact $PCAlias -HiddenFromAddressListsEnabled $true
}

Now in order to make this work you need to either have an Exchange Management Shell window open or be remotely connected. As I’m getting to where I have a nice, neat PowerShell profile on my laptop, I like to stay in it, so I remote in. I’ve automated that process in this Open-Exchange.ps1 script.

#Open-Exchange
#Prompts for servername and credentials and then gives you a remote Exchange Shell connection

$Server = Read-Host -Prompt "What's the server name"
$UserCredential = Get-Credential
$Session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri http://$Server/PowerShell/ -Authentication Kerberos -Credential $UserCredential
Import-PSSession $Session

Now if you’d like to save yourself the need to cut and paste these locally, you can find these scripts and a few others I’ve been writing on my GitHub repo.
