Reinstalling the Veeam Backup & Replication Powershell SnapIn

As somebody who lives by the old mantra of “Eat your own dog food” when it comes to the laptops I use both personally and professionally, I tend to be on the early edge of installs. So while I am not at all ready to start deploying Windows 10 1803 to end users, I’ve recently upgraded my Surface Pro to it. In doing so I found that the upgrade broke access to the Veeam PowerShell SnapIn on my laptop when trying to run a script. After some Googling I found a very helpful post on the Veeam Forums, and I thought I’d condense the commands to run here for us all. Let me start with a hat tip to James McGuire for finding this solution to the problem.

For those that aren’t familiar with VBR’s PowerShell capabilities, the SnapIn is installed either when you run the full installer on your VBR server or, as in my case, when you install the Remote Console component on another Windows system. Don’t get me started on the fact that Veeam is still using a SnapIn to provide PowerShell access; that’s a whole different post, but this is where we are.

The sign that this has occurred is when you get the “Get-PSSnapin : No Windows PowerShell snap-ins matching the pattern ‘VeeamPSSnapin’ were found.” error when trying to access the SnapIn. In order to fix this, you need to use the InstallUtil.exe utility in your latest .NET installation. In my example, this would be C:\Windows\Microsoft.NET\Framework64\v4.0.30319\InstallUtil.exe. If you’ve already installed the VBR Remote Console, the SnapIn’s DLL should be at C:\Program Files\Veeam\Backup and Replication\Console\Veeam.Backup.PowerShell.dll. So to get the installation fixed and the SnapIn re-added as available to PowerShell, you just need to do the following from an elevated PoSH prompt:
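That looks like the line below (a sketch using the default paths mentioned above; adjust to match your .NET version and Veeam install location):

    & "C:\Windows\Microsoft.NET\Framework64\v4.0.30319\InstallUtil.exe" "C:\Program Files\Veeam\Backup and Replication\Console\Veeam.Backup.PowerShell.dll"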

Then you just need to load the snap-in into your session.
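Using the snap-in name from the error message above, that’s simply:

    Add-PSSnapin VeeamPSSnapin

A quick Get-PSSnapin VeeamPSSnapin afterward will confirm it loaded.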

From there it’s up to you what comes next. Happy Scripting!

Fixing the SSL Certificate with Project Honolulu

So if you haven’t heard of it yet, Microsoft is doing some pretty cool stuff in terms of local server management in what they are calling Project Honolulu. The latest version, 1802, was released March 1, 2018, so it is as good a time as any to get off the ground with it if you haven’t yet. If you’ve worked with Server Manager in versions newer than Windows Server 2008 R2, the web interface should be comfortable enough that you can feel your way around, so this post won’t be yet another “cool look at Project Honolulu!” but rather will help you with a hiccup in getting it up and running well.

I was frankly a bit amazed that this is evidently a web service from Microsoft not built upon IIS. As such, your only GUI-based opportunity to get the certificate right is during installation, and that is based on the certificate’s thumbprint at that, so still not exactly user-friendly. In this post I’m going to talk about how to find that thumbprint in a manner that copies well (as opposed to opening the certificate) and then how to replace the certificate on an already up and running Honolulu installation. Giving props where they’re due, this post was heavily inspired by How to Change the Thumbprint of a Certificate in Microsoft Project Honolulu by Charbel Nemnom.

Step 0: Obtain a certificate: A good place to start would be to obtain or import a certificate to the server where you’ve installed Project Honolulu. If you want to use a public one, fine, but more likely you’ll have a certificate authority available to you internally. I’m not going to walk you through this again; my friend Luca Dell’Oca has a good write-up on it here. Just do steps 1-3.

Make note of the Application ID here, you’ll use it later

Step 1: Shut it down and gather info: Next we need to shut down the Honolulu service. As most of what we’ll be doing here today is going to be in PowerShell, let’s just do this by CLI as well.
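A one-liner does it; note that ServerManagementGateway as the gateway service name is an assumption on my part, so check Get-Service if yours differs:

    Stop-Service -Name ServerManagementGateway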

Now let’s take a look at what’s currently in place. You can do this with the following command; the output should look like the figure to the right. The relevant info to take note of here is 1) the port that we’ve got Honolulu listening on and 2) the Application ID attached to the certificate. I’m just going to reuse the one there, but as Charbel points out this is generic and you can generate a new GUID to use instead.
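From an elevated command prompt:

    netsh http show sslcert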

Pick a cert, not any cert

Finally, in our quest to gather info, let’s find the thumbprint of our newly loaded certificate. You can do this by using the Get-ChildItem command like this:
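This assumes the certificate was imported into the local machine’s Personal store:

    Get-ChildItem -Path Cert:\LocalMachine\My | Select-Object Subject, Thumbprint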

As you can see in the second screenshot, that will give you a list of the certificates, with thumbprints, installed on your server. You’ll need the thumbprint of the certificate you imported earlier.

Step 2: Make it happen: OK, now that we’ve got all our information, let’s get this thing swapped. All of this seems to need to be done from the legacy command prompt. First, we want to delete the certificate binding in place now, along with its URL ACL. For the example shown above where I’m using port 443, it would look like this:
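These are the standard netsh http deletions for the binding and the ACL; swap the port if yours differs:

    netsh http delete sslcert ipport=0.0.0.0:443
    netsh http delete urlacl url=https://+:443/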

Now we need to put everything back into place and start things back up. Using the port number, certificate thumbprint, and appid from our example, the commands below re-add the SSL certificate, put the URL ACL back in place, and finally start the service back up from PowerShell. You, of course, will need to sub in your own information.
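The thumbprint and appid below are placeholders for the values you gathered earlier, and the ACL user shown is an assumption; reuse whatever account your original urlacl output listed:

    netsh http add sslcert ipport=0.0.0.0:443 certhash=<YourCertThumbprint> appid={<YourApplicationId>}
    netsh http add urlacl url=https://+:443/ user="NT AUTHORITY\NETWORK SERVICE"

And then, from PoSH:

    Start-Service -Name ServerManagementGateway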

Conclusion

At this point, you should be getting a shiny green padlock when you go to the site and no more nags about a bad certificate. I hope this component gets easier as this thing progresses out of Tech Preview and into production quality, but at least there’s a way.

From Zero to PowerCLI: CentOS Edition

Hi all, just a quickie to get everybody off the ground out there who is looking to use both PowerShell and PowerCLI from things that don’t run Windows. Today VMware released version 10 of PowerCLI with support for installation on both Linux and macOS. This was made possible by the also recently released PowerShell Core 6.0, which allows PowerShell to be installed on *nix variants. While the ability to run it on a Mac really doesn’t do anything for me, I do like to use my iPad with a keyboard case as a quick and easy jump box, and it’s frustrated me for a while that I needed to open an RDP session and then run a PowerShell session from within that. With these releases I’m now an SSH session away from the vast majority of my scripting needs, with normal-sized text and everything.

In this post I’ll cover getting both PowerShell Core and PowerCLI installed on a CentOS VM. To be honest, installing both on any other variant is pretty trivial as well; the basic framework of the differences can be found in Microsoft Docs.

Step 1: Installing Powershell Core 6.0

First, you need to add the PowerShell Core repository to your yum configuration. You may need to amend the “/7/” below if you are running a RHEL 6 variant like CentOS 6.
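This matches Microsoft’s documented repo setup for RHEL/CentOS 7:

    curl https://packages.microsoft.com/config/rhel/7/prod.repo | sudo tee /etc/yum.repos.d/microsoft.repo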

Once you have your repo added, simply install from yum.
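The package in the Microsoft repo is named simply powershell:

    sudo yum install -y powershell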

Congrats! You now have PowerShell on Linux. To run it simply run pwsh from the command line and do your thing. If you are like me and use unsigned scripts a good deal, you may want to lower your execution policy on launch. You can do so by adding the -ExecutionPolicy parameter.
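Bypass is shown here as an example; pick whatever policy fits your environment:

    pwsh -ExecutionPolicy Bypass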


Step 2: Installing VMware PowerCLI

Yes, this is the hard part… Just kidding! It’s just like on Windows; enter the simple one-liner to install all available modules.
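Run this from within pwsh:

    Install-Module -Name VMware.PowerCLI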

If you want to check and see what you’ve installed afterward (as shown in the image):
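Something like this will do it:

    Get-Module -Name VMware.* -ListAvailable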

If you are like me and starting to burn this in through your lab, you are going to have to tell it to ignore certificate warnings to be able to connect to your vCenter. This is simple as well; just use this and you’ll be off and running.
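Set-PowerCLIConfiguration is the documented cmdlet for this; the -Confirm:$false just suppresses the prompt:

    Set-PowerCLIConfiguration -InvalidCertificateAction Ignore -Confirm:$false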


Step 3: Profit!

Really, that’s it. Now, to be honest, I am still going to need to jump to something Windows-based to do the normal ActiveDirectory, DNS, or any other native Windows-type module work, but that’s pretty easy through Enter-PSSession.

Finally, if you have got through all of the above and just want to cut and paste, here’s everything in one spot to get you installed.
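The same assumptions as above apply (RHEL/CentOS 7 repo path; the last two commands run from inside pwsh):

    curl https://packages.microsoft.com/config/rhel/7/prod.repo | sudo tee /etc/yum.repos.d/microsoft.repo
    sudo yum install -y powershell
    pwsh -ExecutionPolicy Bypass
    # then, inside pwsh:
    Install-Module -Name VMware.PowerCLI
    Set-PowerCLIConfiguration -InvalidCertificateAction Ignore -Confirm:$false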


VVOLs vs. the Expired Certificate

Hi all, I’m writing this to document a fix to an interesting challenge that has pretty much been my life for the last 24 hours or so. Through a comedy of errors, we had a situation where the upstream CA above our VMware Certificate Authority (and other things) became unavailable and the certificate authorizing it to manage certificates expired. Over the course of the last couple of days I’ve had to reissue certificates for just about everything, including my Nimble Storage array, and as far as vSphere goes we’ve had to revert all the certificate infrastructure to essentially the same as the out-of-the-box self-signed guys and then reconfigure the VMCA as a subordinate again under the Root CA.

Even after all that, I continued to have an issue where my production VVOLs storage was inaccessible to the hosts. That’s not to say the VMs weren’t working; amazingly, and as a testament to the design of how VVOLs works, my VMs on it ran throughout the process, but I was very limited in terms of the management of those VMs. Snapshots didn’t work, backups didn’t work, and for a time even host migrations didn’t work until we reverted to the self-signed certs.

Thanks to a great deal of support and help from both VMware support and Nimble Storage support, we were finally able to come up with a runbook for dealing with a VVOL situation where major certificate changes have occurred on the vSphere side. There is an assumption to this process: that by the time you’ve got here, all of your certificates, both throughout vSphere as well as on the Nimble arrays, are good and valid.

  1. Unregister the VASA provider and Web Client integration from the Nimble array. This can be done through the GUI in Administration > VMware Integration by editing your vCenter, unchecking the boxes for the Web Client and VASA Provider, and hitting save. It can also be done via the CLI.
  2. Register the integrations back in. Again, from the GUI, simply re-check the boxes and hit save. If successful you should see a couple of little green bars briefly appear at the top of the screen saying the process was successful. From the CLI the commands to re-register are pretty similar.
  3. Verify that your VASA provider is available in vCenter and online. This is just to make sure that the integration was successful. In either the Web Client or the HTML5 client, go to vCenter > Configure > Storage Providers and look for the entry that matches the name of your array group and has the IP address of your array’s management interface in the URL. This should show as online. As you have been messing with certificates, it’s probably worth looking at the Certificate Info tab as well while you are here to verify that the certificate is what you expect.
  4. Refresh the CA certificates on each of your hosts. Next, we need to ensure that all of the CA certificates are available on the hosts so they can verify the certificates presented to them by the storage array. To do this you can either right-click each host > Certificates > Refresh CA Certificates, or navigate to the Configure tab of each host and go to Certificate, where there is a button for this as well. While in that window it is worth looking at the status of each host’s certificate to ensure that it is Good.
  5. Restart the vvold service on each host. This final step was evidently the hardest one to nail down and find in the documentation. The simplest method may be to reboot each of your hosts, as long as you can put them into maintenance mode and evacuate them first. The quicker way, and the way that will let you keep things running, is to enter a shell session on each of your hosts and simply run the following command:
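    From an SSH or ESXi Shell session on the host, the restart goes through the standard vvold init script (a sketch; VMware support may have you run a variant):

        /etc/init.d/vvold restart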

    Once done you should see a response like the feature image on this post and a short while later your VVOLs array will again become available for each host as you work on them.

That’s about it. I really cannot thank the engineers at VMware (Sujish) and Nimble (Peter) enough for their assistance in getting me back to good. I’d also like to thank Pete Flecha for jumping in at the end, helping me, and reminding me to blog this.

If nothing else I hope this serves as a reminder to you (as well as myself) that certificates should be well tended to; please watch them carefully. 😉

Making Managing Printers Manageable With Security Groups and Group Policy

I don’t know about the rest of you, but printing has long been the bane of my existence as an IT professional. Frankly, I hate it and believe the world should be 100% paperless by this point. That said, throughout my career my users have done a wonderful job of showing me that I am truly in the minority on this matter, so I have to do my part in making sure printers are available.

As any Windows SysAdmin knows, installing the actual print driver and setting up a TCP/IP port aren’t even half the battle. From there you’ve got to get the printers shared and have the users actually connect to them so that they can use them. It’d be awesome if they would all just sit down and say “I have no printers, let me go to Active Directory and find some,” but I’ve yet to have more than a handful of users who see this as a solution; they just want the damned things there and ready to rock and roll.

In the past, I’ve always managed this with a series of old VBS scripts, which still work but require tweaks from time to time. It’s possible to do this kind of stuff with PowerShell these days as well, as long as your user has the Active Directory module imported (hint: they probably don’t). There are also any number of other third-party and really expensive Microsoft systems (Hi SCCM!) that will do this as well. But luckily we’ve had a little thing called Group Policy Preferences around for a while now too, and it will do everything we need to make this really manageable, with a nice pretty GUI that you can even teach the Help Desk intern how to manage.

  1. Set up the Print Server(s)- This is the same old, same old. Pick a server or set of servers and set up all your printers and share them. This gives you centralized queue management and all the goodies we know and love.
  2. Create Security Groups- Unless you work in a 10 person office most people won’t necessarily need every printer. I like to create Security groups, 1 per printer, and then assign everybody who needs that printer to the security group. I typically also like to set up these groups with a prefix, usually “prnt” so that they are all grouped together but that’s just me. Set these up now and we’ll use them in a minute.
  3. Create a new GPO- Truthfully this is a personal preference, but I typically like to create a separate GPO for each major task I want to achieve, aside from baseline things I throw in a domain default policy.
  4. Navigate to User Configuration>Preferences>Control Panel Settings>Printers- Cool, it’s a blank screen! Let’s fill this sucker up with some printing goodness. Start by right-clicking the screen and choosing New>Shared Printer.
  5. Once here you will see the default action is Update. While there is an option for Create, we want to leave the setting at the default because this will allow you more flexibility in the future while still letting you accomplish your goal now.
  6. Go ahead and fill in the share path with the full UNC path to the shared printer, leaving everything else blank, then click on the “Common” tab.
  7. This is where the magic happens so everybody only gets what they need. Check the box for “Item-level targeting” at the bottom and then click the now available button.
  8. In the now open Targeting Editor window click the “New Item” button and choose “Security Group.” Note: I like to do this task with Security Groups but as you can see there are lots of options to choose from. You may want to do the assignment based on Active Directory Sites if you have a rotating band of workers for example. Do what fits your organization.
  9. Hit the browse “…” button and go find the group you want to have this printer added for, then hit OK all the way back out to the GPO screen.

That’s it! You can essentially rinse and repeat these instructions for as many printers and print servers as you need to support. There really isn’t even any server magic to the printing; for all GP Preferences cares, these can all be printers shared off individual workstations. I wouldn’t do that, but you know… My one real gripe with this is there doesn’t seem to be a way to script your way out of the process yet. I was able to bulk install the printers and create the ports on the print server, but doing this work outside of the GUI essentially means exporting the preferences list to an XML file, editing it, and then importing it back in. Eww.

P.S. ProTip: Use Delete All For Print Server Migrations

So the idea spark for this post was a need to recreate all the logical printers in response to an office reorganization. The old names made no sense, so we just blew them away and created new ones. One thing I did find out is that since Windows Server 2012 you can create a Printer preference with type Delete and choose “Delete all shared connections.” Coupled with the Common options of “Apply once and do not reapply,” this can be a very effective way to manage a print server migration, reorganization, or any number of other goals I can think of. If you do choose to do this, be sure to 1) make sure any version of this you were using to map the old printers is gone before you set this to run and 2) mess with the order of the Printer preferences so it is number 1 in the order. In addition, when I was looking to use it I created it and then immediately right-clicked > Disabled the preference until I was really ready for it to go.

Creating Staff Notification Mail Contacts in Exchange

Just a quick post with a script I’ve just written. Living in WV, we from time to time have to let staff know that the offices will be closed for various reasons, from heavy snow to chemical companies dumping large quantities of chemicals into the area’s water supply. For this reason we maintain a basic emergency staff notification process that requires an authorized person to send an e-mail to a certain address, which will then carpet bomb staff who chose to opt in with text messages and e-mails to their personal (as opposed to business) e-mail addresses. This is all powered by creating hidden mail contacts on our Exchange server for the personal address as well as the e-mail address that corresponds to the user’s mobile provider. These addresses are all then dynamically added to a distribution list that is restricted by who can send to it.

To be honest, the system is mostly automatic with the exception of needing to make sure new contacts get put in and old contacts get taken out. Taking them out by GUI is pretty simple, just right-click > delete, but it takes quite a few steps to add them in. So in the script below I’ve automated the process of prompting the admin for the new contact’s information and then using that information to automatically create the contacts and hide them from the Global Address List.
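The script boils down to something like the sketch below; the prompts are illustrative rather than my exact wording, and you would run the create/hide pair once per address (personal e-mail and carrier SMS gateway):

    # Gather the new contact's info from the admin
    $name  = Read-Host "Contact display name"
    $email = Read-Host "External address (personal e-mail or carrier SMS gateway address)"
    # Create the mail contact, then hide it from the Global Address List
    New-MailContact -Name $name -ExternalEmailAddress $email
    Set-MailContact -Identity $name -HiddenFromAddressListsEnabled $true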

Now, in order to make this work you need to either have an Exchange Shell window open or be remotely connected. As I am getting to where I have a nice, neat PowerShell profile on my laptop, I like to stay in it, so I remote in. I’ve automated that process in this Open-Exchange.ps1 script.
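The gist of Open-Exchange.ps1 is standard Exchange remoting, along these lines (the server FQDN is a placeholder for your own):

    $session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri http://exch01.yourdomain.com/PowerShell/ -Authentication Kerberos
    Import-PSSession $session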

Now if you’d like to save yourself the need to cut and paste these locally, you can find these scripts and a few others I’ve been writing on my GitHub repo.

Notes on Migrating from an “All in One” Veeam Backup & Replication Server to a Distributed System

One of the biggest headaches I have, and have heard about from other Veeam Backup & Replication administrators, is backup server migrations. In the past I have always gone the “All-in-One” approach: have one beefy physical server with Veeam directly installed and housing all the roles. This is great! It runs fast and it’s a fairly simple system to manage, but the problem is every time you need more space or are upgrading an old server you have to migrate all the parts and all the data. With my latest backup repository upgrade I’ve decided to go to a bit more of a distributed architecture, moving the command and control part out to a VM with an integrated SQL server and letting the physical box handle the repository and proxy functions. This produces a best-of-both-worlds setup: the speed and simplicity of all the data mover and VM access happening on the single physical server, while the setup and brains of the operation reside in a movable, upgradable VM.

This post is mostly composed of my notes from the migration of all parts of VBR. The best way to think of this is to split the migration into three major parts: repository migration, proxy migration, and VBR server migration. These notes are fairly high level, not going too deep into the individual steps. As migrations are complex, if any of these parts don’t make sense to you or do not provide enough detail I would recommend that you give the fine folks at Veeam support a call to ride along as you perform your migration.

I. Migrating the Repository

  1. Setup 1 or more new repository servers
  2. Add a new repository to your existing VBR server, pointing to a separate folder (e.g. D:\ConfigBackups) on the new repository server, exclusively for Configuration Backups. These cannot be included in a SOBR. Change the Config Backup settings (File > Config Backup) to point to the new repository. This is also probably a good time to go ahead and run a manual Config Backup while you are there to snapshot your existing setup.
  3. Add one or more new backup repositories on your new repository server(s) to your existing VBR server configuration.
  4. Create Scale Out Backup Repository (SOBR), adding your existing repository and new repository or repositories as extents.
  5. All of your backup jobs should automatically be changed to point to the SOBR during the setup but check each of your jobs to ensure they are pointing at the SOBR.
  6. If possible go ahead and do a regular run of all jobs or wait until your regularly scheduled run.
  7. After a successful run of the jobs, put the existing extent repository into Maintenance Mode and evacuate backups.
  8. Remove the existing repository from the SOBR configuration and then from the Backup Repositories section. At this point no storage for any jobs should actually be flowing through your old server. From a data locality standpoint it is perfectly fine for a SOBR to contain only a single extent.

II. Migrate the Backup and Guest Interaction Proxies

  1. Go to each of your remaining repositories and set proxy affinity to the new repository server you have created. If you have previously scaled out your backup proxies then you can ignore this step.
  2. Under Backup Proxies in Backup Infrastructure, remove the backup proxy installation on your existing VBR server. Again, if possible you may want to run a job at this point to ensure you haven’t broken anything in the process.
  3. Go to each of your backup jobs that are utilizing the Guest Processing features. Ensure the guest interaction proxy at the bottom of the screen is set to your new repository server, to Automatic, or, if you have scaled out, to another server in your infrastructure.

III. Migrate the Veeam Backup & Replication Server

  1. Disable all backup, Backup Copy and Agent jobs on your old server that have a schedule.
  2. Run a Config Backup on the old server. If you have chosen to encrypt your configuration backup, the process below is going to be a great test of whether you remembered or documented the password. If you don’t know what this is, go ahead and change it under File>Manage Passwords before running this final configuration backup.
  3. Shut down all the Veeam services on your existing backup server, or go ahead and power it down. This ensures you won’t have two servers accessing the same components.
  4. If not already done, create your new Veeam Backup and Replication server/VM. Be sure to follow the guidelines on sizing available in the Best Practices Guide.
  5. Install Veeam Backup & Replication, ensuring that you use the same version and update as the production server. The safest bet is to just have both patched to the latest level of the latest version.
  6. Add a backup repository on your new server pointing to the Config Backup repository folder you created in step 2 of the Migrating the Repository step.
  7. Go to Config Backup and hit the “Restore” button.
  8. As the wizard begins choose the Migrate option.
  9. Change the backup repository to the repository created in step 6 and choose your latest backup file, which should be the same as the one created in step 2 above.
  10. If encrypted, specify your backup password and then choose to overwrite the existing VeeamBackup database created when you installed Veeam in step 5. The defaults should do this.
  11. Choose any Restore Options you may want. I personally chose to check all 4 of the boxes but each job will have its own requirements.
  12. Click the Finish button to begin the migration. From this point, if any screens or messages pop up about errors or issues in processing, it is a good idea to go ahead and contact support. All this process does is move the database from the old server to the new one, changing any references to the old server along the way. If something goes wrong it is most likely going to have a cascade effect, and you are going to want them involved sooner rather than later.

IV. Verification and Cleanup

  1. Now that your server has been migrated it’s a good idea to go through all the tabs in your Backup Infrastructure section, ensuring that all your information looks correct.
  2. Go ahead and run a Config Backup at this point. That’s a nice low-key way to ensure that all of the basic Veeam components are working correctly.
  3. Re-enable your disabled backup, backup copy and Agent jobs. If possible go ahead and run one and ensure that everything is hunky dory there.

Gotchas

This process, when working correctly, is extremely smooth. I’ll be honest and admit that I ran into what I believe is a new bug in the VBR Migration wizard. We had a few SureBackup jobs that had been set up, and while they had been run they had never been modified since install. When this happens VBR notes the job_modified field of the job’s configuration database record as NULL. During the migration the wizard left those fields blank in the restored database, which is evidently something that is checked when you start the Veeam Backup Service. While the service in the basic services.msc screen appears to be running, under the hood you are only getting partial functionality. In my case support was able to go in, modify the database, and restore the NULL data to the field, but if you think you might have this issue it might be worth changing something minor on all of your jobs before the final configuration backup.

Conclusion

If you’ve made it this far, congrats! You should be good to go. While the process seems daunting, it really wasn’t all that bad. If I hadn’t run into an issue it wouldn’t have been bad at all. The good news is that at this point you should be able to scale your backup system much more easily, without the grip and rip that used to be required.

Tech Conferences in Las Vegas for Newbies

As June is here, we are deep into tech conference season already, so I find myself somewhat behind the curve with this post, but here we are. I am extremely fortunate to have an employer who understands the value of attending tech conferences for IT professionals, and I’ve been able to attend at least one each year since 2014, going back and forth between CiscoLive and VMworld with a sprinkling of VeeamON and more local events such as vBrisket and VMUGs for good measure. As a “Hyper-Converged Admin,” my choice of which “biggie” conference to attend is made each year by looking at where my projects land; last year it was CiscoLive due to a lot of voice and security projects, this year VMworld due to lots of updates coming down the pike there and a potential VDI project.

The problem when you have a conference with north of 25,000 attendees is that you are limited in where you can hold it. While Cisco does tend to move around some, VMworld has typically been in either San Francisco or Las Vegas. With the Moscone Center closed again this year for renovation, we find pretty much all of the big guys are back in Las Vegas, with both CiscoLive and VMworld at Mandalay Bay once again, as well as AWS re:Invent and Dell/EMC World in town this year too. If you haven’t been to one of these tech conferences before, or to Las Vegas, both can be exciting and overwhelming, but with a little help from others and some decent tips neither is that big of a deal.

Las Vegas Basics

So for a small-town guy like me, Las Vegas is a very cool town, but tiring. The common thread I feel, and have heard others voice as well, is that Las Vegas is deceptively large because all of the hotels on the strip are so massive. While you can see from your Mandalay Bay window that New York New York is just the next block over, it is probably about a mile away on foot. Why this is important is that if you look at the list of hotels on each conference’s list you’ll see lots of options, but getting to that 8 AM session may require a 30+ minute walk, or an even longer shuttle ride if you chose to stay at the Cosmopolitan (my personal favorite of all Las Vegas hotels but prohibitively far away). Couple that with temperatures in the triple digits during summer, and proximity becomes more important.

Hotel Choices

So the first tip for any of these conferences is to get a hotel as close as possible. For CiscoLive and VMworld, keep in mind that you can move freely between the Mandalay Bay, the Delano, the Luxor, and the conference center without ever stepping foot outside. I would highly recommend trying to be in one of these. If you are booking late and the conference is out of rooms, it’s worth trying to book directly through the hotel, as they don’t let the events have the whole place. That said, you are still going to be in for a hike. For example, I stayed in the Mandalay Bay last year and it was approximately 1800 steps from my room to the entrance of the conference.

Many of the vendor types that seemingly live their lives at these events like to opt for either the nearby Marriott Courtyard Las Vegas South or, for those that like a kitchen, the Holiday Inn at Desert Club Resort. From either of these you’re a quick Uber or Lyft away from the conference center entrance but don’t have to deal with the hustle and bustle of staying on the Strip if you don’t want to.

Getting Around

Speaking of Uber and Lyft, getting around without walking is a bit of a consideration as well, both for the daily commute and for the various events. Traffic from the afternoons into the early morning is pretty impressive on the actual Strip, so to be honest I’ve not heard good things about trying to rely on the conference shuttles when available. Further, I’ve heard many complaints from locals who drive in and try to find parking.

Where that leaves you is 1) ride-sharing services, 2) the monorails, or 3) walking. Uber is nice because the drivers are pretty knowledgeable about routing you around traffic regardless of the time of day. Keep in mind when it comes to this and Mandalay Bay, there are actually two defined Uber pickup/drop-off spots: one outside of the conference center and another around the valet area underneath the hotel drop-off area. These are impressively far apart, so be sure you know where you want to be picked up before you request a ride.

The monorails are also nice but short. For those of you going to CLUS, this is a good way to get to the Customer Appreciation Event, as it will drop you off close to the T-Mobile Arena.

Finally, walking is a decent option, especially after dark for the various vendor events, but I do recommend that if you are going to do it, find a buddy or 3 or 4. I’ve never personally seen violence on the Strip, but you hear about it, and there are lots of “character buskers” dressed like everything from Michael Jackson to Spongebob that will harass you.

One final note: while first impressions are important, there really isn’t any point to being that person in the fancy shoes unless you’ve got booth duty. I typically will go buy a new pair of good running shoes a week or two before the conference so I can break them in, and then that’s what I wear. If you are a step-tracker kind of person like me, expect 20,000 steps and up each day, so take care of your feet.

Things To Do

Seriously, there’s plenty to do in Las Vegas even without a conference already providing lots to do. Regardless of your interests, if the conference doesn’t have you jam-packed enough, you can find something you like here.

If you are new to IT or are just starting to get your name out there, the most important thing to do outside of the sessions is to get out there and be social. Both of the conferences we are talking about here have a great community surrounding them, with some wonderful people in it. The first step, if you aren’t already on it, would be to get yourself on Twitter and follow the hashtag stream for your event (#CLUS for CiscoLive US, #VMworld for VMworld), not only while you are there but before, especially as many outside events will be planned then. Be sure to find the social area for your given conference and go make friends. Outside of the standard conference hours you’ll find that many of the vendors will have events planned for attendees. If you have partners or vendors you work heavily with, it’s worth asking your SE if they are doing anything.

CiscoLive Basics

CiscoLive will be held this year June 25-29 and promises to be a great show once again. While I have really enjoyed all of the conferences I’ve attended, CLUS was my first and is near to my heart. Of all those I’ve been to, this one feels the most academic. There aren’t really as many softball sessions, and the sessions are a bigger part of the focus of the event than at others. That said, they do a very good job of supporting the social community by having a Social Media Hub right in the middle of it all, with special events for the twitterazzi most days. I highly recommend showing up and, if nothing else, walking up and just introducing yourself; trust me, you’ll fit right in there somewhere, especially if you bring a kilt. 😉 If you can come in early on Sunday, the annual Tweetup on Sunday afternoon is always a good time to make friends.

If you are going to CiscoLive, you should have booked most of your sessions by this point. A couple of points here. First, do not overbook yourself on sessions. While the pressure is always there to make sure you are getting as much education out of it as possible, every session these days is recorded and can be watched later. My decision on whether I’m going to attend a particular session is based on whether the subject is directly related to something that’s got me stumped and I want the opportunity to touch base with the speaker. Past that, I’ll watch most after the fact. A better use of your time is getting out and networking, soaking up some of the distributed knowledge there, which will in many cases serve as a resource after the fact. I’ve yet to leave an event and not come home to do some kind of redesign based on things I’ve learned from the community.

A highlight for anybody who’s been to CLUS is always the Customer Appreciation Event. This year Bruno Mars will take over the T-Mobile Arena, and I am legitimately bummed that I will be missing it. The celebrity keynotes are always very good as well and usually provide a different view on how technology interacts with the world. I truly enjoyed listening to Kevin Spacey last year, and this year they’ve booked Bryan Cranston.

Regarding keynotes, I typically like to watch these in the social areas rather than packing myself into the keynote halls. The seating is better, there are fewer people, and usually refreshments are close at hand; plus you can find a surface to put your computer/iPad on to take notes and/or live-tweet the talk.

VMworld Basics

As much as the focus of CiscoLive is on the direct educational benefit, the focus of VMworld is more on learning from the community. With the conference officially running from August 27-31, there are just as many official conference sessions as there are at CiscoLive, but I find there to be more lower-level, marketing-style sessions at VMworld. What makes up for it, though, is any number of community learning opportunities surrounding it. If you can swing coming in either Saturday or very early Sunday, the vBrownbag/VMunderground Opening Acts is always a great place to learn about what is coming next in virtualization and technology. Speaking of vBrownBag, these guys have a stage running concurrent with the conference, with sessions about anything you can conceive of, all week long. Historically the vBrownBag stage has been found in the Hang Space (VMworld-speak for the social media area), but this year’s location is still to be determined.

Another thing you’ll find is that the potential to have your evenings booked is exceptionally high, with multiple vendor events every single night, traditionally starting with vBeers on Saturday evening. At some point as we get closer to the conference, VMworld will fill a website with information and registration links for many of the gatherings to make scheduling easy. The Veeam, VMunderground, and vExpert/VCDX/VMUG parties are always the most talked about. There is also the annual VMworld Party, typically with big-name acts, but at the time of this writing there really isn’t any information about this yet. Be sure to follow along online and on social media to find out soon enough.

Conclusion

With all that being said, just go enjoy yourself as you are meant to do. There’s a reason Denise Fishburne refers to CiscoLive as “Geek Summer Camp”: it does feel that way, regardless of the conference you’re attending. Everybody does things their own way. As I’ll be attending VMworld this year, if you are there and want to say hi, feel free to reach out and find me on Twitter @k00laidIT.

Why Is My Nimble Storage Firmware Update Not Available

Today, like every day as a technology professional, I got the opportunity to learn something new. I had seen posts on social media and articles saying that Nimble Storage, with NimbleOS version 3.6, supports the shiny new features of VMware’s vSphere 6.5 release, including VVOLs 2.0 and VASA 3.0. After reading through the release notes and not seeing anything to really stress me out in the known issues, I went to begin the download for an update in the off hours. To my early-adopter horror, I saw there was no download available! Had I misread the releases? Did I imagine that the release notes really were for 3.6? No, those were real, and it should have been there. After asking around I learned that Nimble, in a notable effort to save us from ourselves, will from time to time blacklist you from receiving updates due to things they observe through their excellent InfoSight analytics system.

The problem with this is that they don’t make it easily apparent anywhere close to the download screen that you are blacklisted. In order to see if you are blacklisted, you have to switch over from the array management screen to InfoSight, go to Manage > Assets > click on the array, and then at the top where it says “Version: …” click on the version link. There you will finally either see the new version in black if you are good to upgrade or, as shown in my image, in red if blacklisted. Even then it still doesn’t tell you why you are blacklisted; you have to call support to learn that.

Blacklisted

Not Blacklisted

Conclusion

The idea of blacklisting arrays that show signs of things known not to play well with future versions of software is a noble one and has the potential to keep the load off of your support staff. The problem is that the current way it is shown to the user almost ensures that a support call is going to have to be made anyway, to either a) find out why the array is blacklisted (OMG, what’s wrong with my array that it can’t be upgraded!?!?) or b) find out why new software isn’t available. I would recommend that if an array is blacklisted and an admin attempts to download software, they be told that they are blacklisted, and why, right there on the array’s download software dialog. This would save everybody a good deal of time.

As an addendum, as I post this I see that 3.6.1 has been released as well and my time on the blacklist is over. Off to upgrade!

Fixing Domain Controller Boot in Veeam SureBackup Labs

We’ve been dealing with an issue for the past few runs of our monthly SureBackup jobs where the Domain Controller boots into Safe Mode and stays there. This is no good, because without the DC booting normally you have no DNS, no Global Catalog, or any of the other Domain Controller goodness for the rest of your servers launching behind it in the lab. All of this seems to have come from a change in how domain controller recovery is done in Veeam Backup & Replication 9.0 Update 2, as discussed in a post on the Veeam Forums. Further, I can verify that if you call Veeam Support you get the same answer as outlined there, but there is no public KB about the issue. There are a couple of ways to deal with this, either each time or permanently, and I’ll outline both in this post.

Booting into Safe Mode is totally expected, as a recovered Domain Controller should boot into Directory Services Restore Mode the first time. What is missing, though, is that as long as you have the Domain Controller box checked for the VM in your application group setup, then once booted, Veeam should modify the boot setup and reboot the system before presenting it to you as a successful launch. This in part explains why checking the Domain Controller box lengthens the allowed boot time from 600 seconds to 1800 seconds by default.

On the Fly Fix

If you are like me and already have the lab up and need to get it fixed without tearing it back down, you simply need to clear the safe boot bit and reboot from the Remote Console. I prefer to do it through the GUI:

  1. Make a Remote Console connection to the lab-booted VM and log in
  2. Go to Start, Run and type “msconfig”
  3. Click on the Boot tab and uncheck the “Safe boot” box. You may notice that the Active Directory repair option is selected
  4. Hit Ok and select to Restart

Alternatively, if you are command-inclined, a method is available via Veeam KB article 1277, where you just run these commands:
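From an elevated prompt inside the lab VM (these are the standard commands for clearing the safe boot flag and restarting; verify against the KB):

    bcdedit /deletevalue safeboot
    shutdown -r -t 00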

It will reboot itself into normal operation. Just to be clear, either of these fixes is temporary. If you tear down the lab and start it back to the same point in time, you will experience the same issue.

The Permanent Fix

The problem with either of the above methods is that while they will get you going on a lab that is already running, about 50% of the time I find that once I have my DC up and running well, I have to reboot all the other VMs in the lab to fix dependency issues. By the time I’m done with that I could have just relaunched the whole thing. To permanently fix the root issue, you can revert the way DCs are handled by creating a single registry entry, as shown below, on the production copy of each Domain Controller you run in the lab.

Once you have this key in place on your production VM, you won’t have any issues with it going forward, as long as the labs you launch are from backups made after the change was put in use. My understanding is this is a known issue and will eventually be fixed, but at least as of 9.5 RTM it is not.