DR Scenarios For You and Your Business: Getting Cloudy With It

In the last post we talked about the more traditional models for architecting a disaster recovery plan, covering icky things like tape, dark sites and split datacenters. If you'd like to catch up you can read it here. All of those are absolutely worthwhile ways to protect your data, but they are slow and limit your organization's agility in the case of a disaster.

By now we have all heard about the cloud so much that we've either gone completely cloud native, dabbled a little, or just loathe the word. Another great use for "somebody else's computer" is to power your disaster recovery plans. By leveraging cloud resources we can effectively get out of the hardware-management business when it comes to DR and have borderline limitless resources if we need them. Let's look at a few ways this can happen.

DRaaS (Disaster Recovery as a Service)

For now this is my personal favorite, but my needs may be, and probably are, different from yours. In a DRaaS model you still take local backups as you normally have, but those backups or replicas are then shipped off to a Managed Service Provider (MSP) aligned with your particular backup software vendor.

I can't speak to any of the others from experience, but Cloud Connect providers in the Veeam Backup & Replication ecosystem are simple to consume and use. Essentially, once you buy the amount of space you need from a partner, you add the link and credentials you are provided to your backup infrastructure. Once that's done you create a backup copy job with that repository as the target and let it run. If you are bandwidth constrained, many providers will even let you seed the job by shipping them an external hard drive full of backups, so all you have to transfer over the wire are your daily changes. Meanwhile all of these backups are encrypted with a key that only you and your organization know, so the data is nice and safe sitting elsewhere.

This is really great in that it is effectively infinitely scalable (you only pay for what you use) and you don't have to own any of the hardware or software licenses to support it. If you do have an event you have options: you can either scramble and try to put something together on your own, or, as is more often the case, leverage the compute capabilities of the provider to power your organization until your on-site resources are available again. Because these providers have their own IT resources, you and your team are freed up to get staff and customers working again while they handle getting you restored and back online.

In my mind the drawbacks to this model are minimal. In the case of a disaster you are definitely going to pay more than you would running restored systems on your own hardware, but then you would have had to buy and maintain that hardware, which is also expensive. Your workers and datacenter systems will also not be in the same geographical area, which may mean increased bandwidth costs as you get back up and running, but that is still nothing compared to maintaining all of this yourself year round. Probably the only real drawback is that almost all of these providers require long-term agreements, one year or more, for the backup or replication portion of the service. You also need to be sure, if you choose this route, that the provider has enough compute resources available to absorb you if needed. This can be mitigated by working with your provider to do regular backup testing at the far end; it will cost you a bit more, but to me it is truly worth it.

Backup to Public Cloud

Finally we come to what all the backup vendors seem to be moving towards these days: public cloud backups. In this situation your backups land on premises first (highly recommended) and are then shipped off to the public cloud provider of your choice. Did AWS, Azure or GCP start messing with their storage pricing models so that another provider is suddenly cheaper? Simply add the new provider and shift the job over, easy peasy. As with all things cloud you are, in theory, infinitely scalable, so you don't have to worry about onboarding new workloads except for the cost, and who cares about cost anyway?

The upside here is agility. Start to finish you can probably be set up to consume this model within minutes, and then your only limit to how fast you can be covered is how much bandwidth you make available for shipping backups. If you are doing this to cover an external event, like the failure of your passive site, you can tear it back down afterwards just as fast as you built it. You are also only ever paying for your actual consumption, so you know what the cost is going to be for any additional workload you protect; you never pay for "spare space."

As far as drawbacks go, I feel like we are still in the early days of this, so there are a few. While you don't have to maintain any far-end equipment for either backup storage or compute, I'm not convinced this isn't the most expensive option for traditional virtualized workloads.

Hybrid Archive Approach

One of the biggest challenges of maintaining an on-prem, off-prem backup system is that we all run out of space sometimes. The public cloud gives us the ability to consume only what we need, not paying for any fluff, while letting someone else manage the performance and availability of that storage. One trend I'm seeing more and more is supplementing your on-premises backup storage with public cloud resources, allowing your archives to scale out for as long as necessary. There is a tradeoff between locality and performance, but if your most recent backups are on premises or well connected to your production environment, you may never need to touch the backups that were archived off to object storage, so you don't really care how fast a restore from them would be; you've checked your policy checkbox and have that "oh no" backup out there.

Once upon a time my employer had a situation where we needed to retain every backup for about 5 years. Each year we had to buy more and more media to store backups we would never restore from because they were so old, but we had them and we were in compliance. If something like Veeam's Archive Tier, or the equivalent from other vendors, had existed, I could have said "I want to retain X backups on-prem, but after that shift them to an S3 IA bucket." In the long term this would have saved quite a bit of money and administrative overhead, and when the requirement went away all I would have had to do is delete the bucket and reset back to the normal policy.

While this is an excellent use of cloud technology, I don't consider it a replacement for things like DRaaS or Active/* models. The hoops you need to jump through to restore these backups to a functional VM are still complex and resource intensive. Rather, I see this as an extension of your on-prem backups to cover short-term scale issues.

Conclusion

If you've followed along for both posts I've covered about 5.5 different methods of backing up, replicating and protecting your datacenter. Which one is right for you? To be honest it might be one of these, none of these, or a mash-up of two or more. The main thing is to know your business's needs, its regulatory requirements and

DR Scenarios For You and Your Business Part 1: The Old Guard

It is Disaster Recovery review season again here at This Old Datacenter, and reviewing our plans sparked the idea to outline some of the modern strategies for those who are new to the game or looking to modernize. I'm continually amazed by the number of people I talk to who are using modern compute methodologies (virtualization on premises, partner IaaS, public cloud) but are still using the same backup systems they were using in the 2000s.

In this post I'm going to talk about some basic strategies using Veeam Backup & Replication because that is primarily what I use, but all of these are achievable with any of the current data protection vendors, with varying advantages and disadvantages per vendor. The important part is to understand the different ways of protecting your data first and then pick a vendor that fits your needs.

One constant that you will see here is the idea of each strategy consisting of two parts: first, a local backup to handle basic things like a failed VM, a file restore, and other events that fall short of all systems being down; second, archiving that backup somewhere outside of your primary location and datacenter to deal with a systems-down or virus scenario. You will often hear this referred to as the 3-2-1 rule:

  • 3 copies of your data
  • 2 copies on different types of physical media or systems
  • 1 copy (at least) in a different geographical location (offsite)

On-Premises Backup/Archive to Removable Media

This is essentially an evolution of your traditional backup system. Each night you take a backup of your critical systems to a local resource and then copy that to something removable so it can be taken somewhere offsite each day. In the past this was probably only one step: you ran backups to tape and then took that tape somewhere the next morning. Today I would hope the backups land on disk somewhere local and are then copied to tape or a USB hard disk, but everybody has their ways.

This method gets the job done but has a lot of drawbacks. First, you need human intervention to get your backups offsite. Second, restores may be quick if you are restoring from your primary backup copy, but if you have to go to your secondary you first have to physically locate the correct data set, and then, especially in the case of tape, it can take some time to get it back to functional. Finally, you own and have to maintain all the hardware involved in the backup system, hardware that effectively isn't used for anything else.

Active/Passive Disaster Recovery

Historically the step up from removable media for many organizations has been to maintain a set of hardware, or at least a backup location, somewhere else. This could be just a tape library, a NAS or an old server loaded with disks, either in a remote branch or at a co-location facility. Usually you would have some dark hardware there that could allow systems to be restored if needed. In any case you would still perform backups locally and maintain a set on premises for the primary restore, then leverage the remote location for a systems-down event.

This method definitely has advantages over the first in that you don't have to dedicate a person's time to ensuring the backups go offsite, and you might have some resources available to take over in case of a massive issue at your datacenter, but it can get very expensive, very fast. All the hardware is owned by you and is used exclusively for you, if it is ever used at all. In many cases datacenter hardware is "retired" to this location and it may or may not have enough horsepower to cover your needs. Others may buy for the dark site at the same time as buying for the primary datacenter, effectively doubling the price of every refresh. Layer on top of this the cost of connectivity, power consumption and possibly rack space and you are talking about real money. Further, you are on your own in terms of getting things going if you do have a DR event.

All that being said, this is a true Disaster Recovery model, which differentiates it from the first option. You have everything you need (possibly) if you experience a disaster at your primary site.

Active/Active Disaster Recovery

Does your organization have multiple sites, with datacenter capabilities in each place? If so then this model might be for you. With Active/Active you design your multisite datacenters with redundant capacity in mind so that, in the case of an event in either location, you can run both sites' workloads in a single location. The ability to have "hot" resources available at your DR site is attractive in that you can easily make use of not only backup operations but replication as well, significantly shortening your Recovery Time Objective (RTO), usually with the ability to roll back to production when the event is over.

Think about a case where you have critical customer-facing applications that cannot handle much downtime at all, but you lose connectivity at your primary site. This workload could fairly easily be failed over to the replica in the far-side DC, all the while your replication product (think Veeam Backup & Replication or Zerto) is tracking the changes. When connectivity is restored you tell the application to fail back and you are running with changes intact back in your primary datacenter.

So what's the downside? Well, first off it requires you to have multiple locations that can support this in the first place. Beyond that you are still in a world of needing to support the full load in case of an event, so your hardware and software licensing costs will most likely go up to support an event that may never happen. Also, supporting replication is a good bit more complex than backup once you include things like the need for re-IP, external DNS, etc., so you should definitely be testing this early and often, maintaining a living document that outlines the steps needed to fail over and fail back.

Conclusion

This post covers what I consider the "old school" models of Disaster Recovery, where your organization owns all the hardware and such to power the system. But who wants to own physical things anymore? Aren't we living in the virtual age? In the next post we'll look at some more "modern" approaches to the same ol' concepts.

Reboot-VSS Script for Veeam Backup Job Pre-Thaw Processing

One of the issues that Veeam Backup & Replication users face, really users of any application-aware backup solution, is that the various VSS writers are typically very finicky, to say the least. Often you will get warnings about the services, only to run a "vssadmin list writers" and see writers either in a failed state or not there at all. In most of these cases a restart of either the service or the target system itself is an easy, quick fix.

But do you really want to rely on yourself to remember to do this every day? I know I don't, and going with the mantra of "when in doubt, automate," here's a script that will help out. The Reboot-VSS.ps1 script assumes you are using vSphere tags to dynamically identify the VMs included in your backup jobs; it looks at the services in the given services array and, if they are present on the VM, restarts them.
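
As a rough sketch of the idea, something like the following would do it. The vCenter name, tag name and service list are placeholders for your own values, and it assumes PowerCLI is installed and WinRM remoting to the guests works from wherever the script runs; the actual Reboot-VSS.ps1 may differ in its details.

    # Reboot-VSS.ps1 (sketch): restart finicky VSS-related services on tagged VMs
    Import-Module VMware.PowerCLI

    $vCenter  = 'vcenter.example.com'   # placeholder vCenter name
    $tag      = 'VeeamBackup'           # placeholder vSphere tag used to select VMs for backup
    $services = @('SQLWriter', 'VSS')   # services to restart if present in the guest

    Connect-VIServer -Server $vCenter | Out-Null

    # Find the Windows VMs carrying the backup tag
    $vms = Get-VM -Tag $tag | Where-Object { $_.Guest.OSFullName -match 'Windows' }

    foreach ($vm in $vms) {
        # Restart each listed service inside the guest, but only if it exists there
        Invoke-Command -ComputerName $vm.Guest.HostName -ScriptBlock {
            param($names)
            foreach ($name in $names) {
                if (Get-Service -Name $name -ErrorAction SilentlyContinue) {
                    Restart-Service -Name $name -Force
                }
            }
        } -ArgumentList (,$services)
    }

    Disconnect-VIServer -Server $vCenter -Confirm:$false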

 

This script was designed to be set in the Windows scripts section of the guest processing settings within a Veeam Backup & Replication job. I typically only need the SQL writer service myself, but I've included VSS in the array here as well as an example of adding more than one. There are quite a few VSS services that VSS-aware backup products interact with; Veeam's KB 20141 is a great reference for all of the ones that can be included here based on your needs.

Veeam Vanguard 2018

Here in the US, Thanksgiving Day traditionally falls on the fourth Thursday of November. While it is one of my favorite holidays anyway, today is a day of thankfulness for me as I've been honored to be named a Veeam Vanguard for 2018. I've been fortunate enough to have been a part of the group since its inception and it is one of my highest honors. Thanks as always to Rick, Kirsten, Dmitry, Andrew, Niels, Anthony, Michael, Melissa and Danny for keeping the Vanguards the best group of its kind around.

To those who have also been renewed into the program please accept a heartfelt congratulations as you’ve earned it through your involvement and I look forward to trolling right along with you for another year.

While the e-mails have just been sent and there aren't any statistics yet, I see quite a few deserving new members popping up on Twitter. Some I know already and others I look forward to getting to know. One of the really nice things about the Vannies is that we are a small group, so everybody pretty much gets to know everybody. If you are looking for success in this group please don't be shy; come be social and share the knowledge you have.

Are you just learning about the program or didn't make the cut this year? If you are active with Veeam, join the conversation in the forums, on Twitter, on Reddit, in any of the various Slack communities, or on your own blog and it will come. It doesn't matter where you join, it just matters that you do.

Finally to dear, sweet Vanny Vanguard. We all miss you, please come home. 😉

From Zero to PowerCLI: CentOS Edition

Hi all, just a quickie to get everybody off the ground who is looking to use both PowerShell and PowerCLI from things that don't run Windows. Today VMware released version 10 of PowerCLI with support for installation on both Linux and macOS. This was made possible by the also recently released PowerShell Core 6.0, which allows PowerShell to be installed on *nix variants. While the ability to run it on a Mac really doesn't do anything for me, I do like to use my iPad with a keyboard case as a quick and easy jump box, and it's frustrated me for a while that I needed to open an RDP session and then run a PowerShell session from within that. With these releases I'm now an SSH session away from the vast majority of my scripting needs, with normal-sized text and everything.

In this post I'll cover getting both PowerShell Core and PowerCLI installed on a CentOS VM. To be honest, installing both on any other variant is pretty trivial as well; the basic framework of the differences can be found in the Microsoft Docs.

Step 1: Installing PowerShell Core 6.0

First, you need to add the PowerShell Core repository to your yum configuration. You may need to amend the "/7/" below if you are running a RHEL 6 variant like CentOS 6.
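
In my case that meant pulling down Microsoft's published repo definition for RHEL/CentOS 7, along these lines (double-check the URL against the current Microsoft Docs for your distribution):

    # Add the Microsoft package repository for RHEL/CentOS 7
    curl https://packages.microsoft.com/config/rhel/7/prod.repo | sudo tee /etc/yum.repos.d/microsoft.repo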

Once you have your repo added, simply install from yum.
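
At the time of writing the package is simply named powershell, so the install is a one-liner:

    sudo yum install -y powershell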

Congrats! You now have PowerShell on Linux. To run it simply run pwsh from the command line and do your thing. If you are like me and use unsigned scripts a good deal you may want to lower your Execution Policy on launch. You can do so by adding a parameter when you start pwsh, as shown below.
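
For example, launching it this way relaxes the policy for that session (pick whichever policy your environment actually allows):

    pwsh -ExecutionPolicy Bypass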

 

Step 2: Installing VMware PowerCLI

Yes, this is the hard part… Just kidding! It's just like on Windows: enter the simple one-liner to install all available modules.
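
From within your pwsh session the one-liner looks like this:

    Install-Module -Name VMware.PowerCLI -Scope CurrentUser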

If you want to check and see what you've installed afterward (as shown in the image), you can list the available VMware modules.
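
One quick way to do that:

    Get-Module -Name VMware.* -ListAvailable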

If you are like me and starting to burn through this in your lab, you are going to have to tell PowerCLI to ignore certificate warnings to be able to connect to your vCenter. This is simple as well; just use the following and you'll be off and running.
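
The setting that handles it is:

    Set-PowerCLIConfiguration -InvalidCertificateAction Ignore -Confirm:$false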

 

Step 3: Profit!

Really, that's it. Now to be honest I am still going to need to jump to something Windows-based to do the normal ActiveDirectory, DNS or any other native Windows type module, but that's pretty easy through Enter-PSSession.
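
For example, something along these lines drops you into a session on a Windows management box (the host name is a placeholder, and PowerShell remoting with an authentication method usable from Linux has to already be configured on the target):

    $cred = Get-Credential
    Enter-PSSession -ComputerName mgmt01.example.local -Credential $cred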

Finally, if you have got through all of the above and just want to cut and paste, here's everything in one spot to get you installed.
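
Rolled together it looks something like this, with the same caveats as above (shell commands first, then the PowerShell lines from within pwsh):

    # From the shell: add the Microsoft repo and install PowerShell Core
    curl https://packages.microsoft.com/config/rhel/7/prod.repo | sudo tee /etc/yum.repos.d/microsoft.repo
    sudo yum install -y powershell

    # Launch PowerShell (optionally relaxing the execution policy)
    pwsh -ExecutionPolicy Bypass

    # From within pwsh: install PowerCLI and ignore self-signed vCenter certificates
    Install-Module -Name VMware.PowerCLI -Scope CurrentUser
    Set-PowerCLIConfiguration -InvalidCertificateAction Ignore -Confirm:$false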

 

 

VVOLs vs. the Expired Certificate

Hi all, I'm writing this to document the fix to an interesting challenge that has pretty much been my life for the last 24 hours or so. Through a comedy of errors and other things happening, we had a situation where the upstream CA above our VMware Certificate Authority (among other things) became unavailable and the certificate authorizing it to manage certificates expired. Over the course of the last couple of days I've had to reissue certificates for just about everything, including my Nimble Storage array, and as far as vSphere goes we've had to revert all of the certificate infrastructure to essentially the same as the out-of-the-box self-signed setup and then reconfigure the VMCA as a subordinate again under the root CA.

Even after all that I continued to have an issue where my production VVOLs storage was inaccessible to the hosts. That's not to say the VMs weren't working; amazingly, and as a testament to the design of VVOLs, my VMs on that storage ran throughout the process, but I was very limited in terms of managing them. Snapshots didn't work, backups didn't work, and for a time even host migrations didn't work until we reverted to the self-signed certs.

Thanks to a great deal of support and help from both VMware Support and Nimble Storage Support, we were finally able to come up with a runbook for dealing with a VVOLs situation where major certificate changes have occurred on the vSphere side. This process assumes that by the time you get here all of your certificates, both throughout vSphere and on the Nimble arrays, are good and valid.

  1. Unregister the VASA provider and Web Client integration from the Nimble array. This can be done through the GUI in Administration > VMware Integration by editing your vCenter, unchecking the boxes for the Web Client and VASA Provider, and hitting save. It can also be done via the array's CLI.
  2. Register the integrations back in. Again, from the GUI simply check the boxes again and hit save. If successful you should see a couple of little green bars briefly appear at the top of the screen saying the process was successful. The CLI commands to re-register are very similar to the ones for unregistering.
  3. Verify that your VASA provider is available and online in vCenter. This is just to make sure the integration was successful. In either the Web Client or the HTML5 client go to vCenter > Configure > Storage Providers and look for the entry that matches the name of your array group and has the IP address of your array's management interface in the URL. This should show as online. As you have been messing with certificates it's probably worth looking at the Certificate Info tab as well while you are here to verify that the certificate is what you expect.
  4. Refresh the CA certificates on each of your hosts. Next, we need to ensure that all of the CA certificates are available on the hosts so they can verify the certificates presented to them by the storage array. To do this you can either right-click each host > Certificates > Refresh CA Certificates, or navigate to each host's Configure tab and go to Certificate, where there is a button for this as well. While you are in that window it is worth looking at the Status of each host's certificate and ensuring that it is Good.
  5. Restart the vvold service on each host. This final step was evidently the hardest one to nail down and find in the documentation. The simplest method may be to reboot each of your hosts, as long as you can put them into maintenance mode and evacuate them first. The quicker way, and the one that lets you keep things running, is to open a shell session on each of your hosts and run a single restart command, shown just after this list.

    Once you run that command you should see a response like the feature image on this post, and a short while later your VVOLs storage will become available again to each host as you work through them.
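
For reference, the restart I'm referring to looks like this from an SSH/ESXi Shell session; it assumes vvold is managed through its init script like other host services, so adjust for your ESXi version if needed:

    # Restart the VVOL daemon on the ESXi host
    /etc/init.d/vvold restart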

That’s about it. I really cannot thank the engineers at VMware (Sujish) and Nimble (Peter) enough for their assistance in getting me back to good. Also I’d like to thank Pete Flecha for jumping in at the end, helping me and reminding me to blog this.

If nothing else I hope this serves as a reminder to you (as well as myself) that certificates should be well tended to, please watch them carefully. 😉

VMworld 2017 US: T -2

I write this while traveling to sunny and amazingly hot Las Vegas for the 2017 edition of VMworld US. I hope to provide feedback and news throughout the conference, highlighting not only the excellent content and programs but also the best the virtualization community has to offer.

Today will be a travel day as well as a day to meet up with friends, new and old. Tomorrow, the Sunday before the conference, is when the real fun begins with things like Opening Acts for me, TAM and partner content for others as well as a number of social events.

What We Know So Far

Yesterday was the day that VMware went on a killing spree, announcing the deprecation of the Windows-based vCenter, the Flash-based vSphere Web Client, and the vmkLinux APIs with their associated driver ecosystem. All of these enter the deprecated state with the next major version of vSphere and then will be gone forever in the release after that. Each of these is a significant step in the evolution of vSphere as we know it, and coupled with the advances in PowerCLI 6.5, the management of our in-house infrastructure has been changed for the better.

These announcements came rapid fire on the Friday before VMworld, with the death of the Windows-based vCenter coming first. As we have had versions of the vCenter Server Appliance (VCSA) with varying degrees of success for over 5 years now, it's been a long time coming. I myself migrated two years ago, and while it was good then, the latest 6.5 version, with its PhotonOS base, excellent migration wizard and in-appliance vCenter Update Manager support, has shown it is definitely the way forward.

The Flash client was the next announcement to come, and again we are looking at a deprecation that needs to happen and is most definitely going to be a good thing, but it does come with some apprehension. With most things VMware has deprecated we've had at least one feature-rich version of the replacement out and stable before they announced the predecessor's demise. This isn't the case with the Flash-based web client. While the latest builds of the replacement are getting very, very good, there are still major things that are either quirky or simply aren't there yet. The good news is that we have been given almost immediate assurances by everyone involved with product management that we vSphere admins will never be left without GUI management for any given task we have today, and I for one believe them. The last components of what is known as the HTML5 client simply can't come soon enough in my opinion; I'm tired of having to hop through multiple GUIs and browsers to perform basic tasks in my daily work life.

Finally, the day was finished with the announced deprecation of the non-native Linux drivers. To be honest I didn't know these were even still a thing, as every Linux VM I've rolled for the past many years has been able to work with the native drivers. I'm sure there are those who may still need additional time, but as the removal is still a couple of versions off, this should be something that can be mitigated now that the end is known.

Conclusion

With all of these preconference announcements related to VMware's flagship product, is this going to be the year where VMworld is chock-full of improvements to vSphere? This will be my 3rd one in 4 years and each year I've felt their focus was elsewhere. While vSAN, NSX, and the like are definitely where the company is seeing growth, all of these things rely on vSphere as an underlay. I for one would be happy to see a little love shown here.

With that happy thought I'm going to shut it down and land. For those coming to VMworld this weekend, safe travels, and for those at home, look for more info as it becomes known here on koolaid.info.

Notes on Migrating from an “All in One” Veeam Backup & Replication Server to a Distributed System

One of the biggest headaches I have, and have heard about from other Veeam Backup & Replication administrators, is backup server migrations. In the past I have always gone with the "all-in-one" approach: one beefy physical server with Veeam installed directly on it and housing all the roles. This is great! It runs fast and it's a fairly simple system to manage, but the problem is that every time you need more space or you're upgrading an old server you have to migrate all the parts and all the data. With my latest backup repository upgrade I decided to go with a bit more of a distributed architecture, moving the command-and-control part out to a VM with an integrated SQL Server instance and letting the physical box handle the repository and proxy functions. This produces a best-of-both-worlds setup: the speed and simplicity of all the data moving and VM access happening on the single physical server, while the setup and brains of the operation reside in a movable, upgradable VM.

This post is mostly composed of my notes from the migration of all parts of VBR. The best way to think of this is to split the migration into three major parts: repository migration, proxy migration, and VBR server migration. These notes are fairly high level, not going too deep into the individual steps. Migrations are complex, so if any of these parts don't make sense to you or don't provide enough detail, I would recommend that you give the fine folks at Veeam support a call to ride along as you perform your migration.

I. Migrating the Repository

  1. Set up one or more new repository servers.
  2. On your existing VBR server, add a new repository pointing to a separate folder (e.g. D:\ConfigBackups) on the new repository server, used exclusively for configuration backups; these cannot be included in a SOBR. Change the Config Backup settings (File > Config Backup) to point to the new repository. This is also probably a good time to go ahead and run a manual config backup while you are there to snapshot your existing setup.
  3. Add one or more new backup repositories on your new repository server(s) to your existing VBR server configuration.
  4. Create Scale Out Backup Repository (SOBR), adding your existing repository and new repository or repositories as extents.
  5. All of your backup jobs should automatically be changed to point to the SOBR during the setup but check each of your jobs to ensure they are pointing at the SOBR.
  6. If possible go ahead and do a regular run of all jobs or wait until your regularly scheduled run.
  7. After a successful run of the jobs, put the existing extent into Maintenance Mode and evacuate its backups.
  8. Remove the existing repository from the SOBR configuration and then from the Backup Repositories section. At this point no job data should actually be flowing through your old server. It is perfectly fine for a SOBR to contain only a single extent from a data locality standpoint.

II. Migrate the Backup and Guest Interaction Proxies

  1. Go to each of your remaining repositories and set proxy affinity to the new repository server you have created. If you have previously scaled out your backup proxies then you can ignore this step.
  2. Under Backup Proxies in Backup Infrastructure, remove the backup proxy installation from your existing VBR server. Again, if possible you may want to run a job at this point to ensure you haven't broken anything in the process.
  3. Go to each of your backup jobs that are utilizing the Guest Processing features. Ensure the guest interaction proxy at the bottom of the screen is set to your new repository server, to automatic selection, or, if you have scaled out, to another server in your infrastructure.

III. Migrate the Veeam Backup & Replication Server

  1. Disable all backup, Backup Copy and Agent jobs on your old server that have a schedule.
  2. Run a config backup on the old server. If you have chosen to encrypt your configuration backups, the process below is going to be a great test of whether you remembered or documented the password. If you don't know it, go ahead and change it under File > Manage Passwords before running this final configuration backup.
  3. Shut down all of the Veeam services on your existing backup server or go ahead and power it down. This ensures you won't have two servers accessing the same components.
  4. If not already done, create your new Veeam Backup and Replication server/VM. Be sure to follow the guidelines on sizing available in the Best Practices Guide.
  5. Install Veeam Backup & Replication, ensuring that you use the same version and update level as the production server. The safest bet is to have both patched to the latest level of the latest version.
  6. Add a backup repository on your new server pointing to the config backup repository folder you created in step 2 of the Migrating the Repository section.
  7. Go to Config Backup and hit the “Restore” button.
  8. As the wizard begins choose the Migrate option.
  9. Change the backup repository to the repository created in step 6 and choose your latest backup file, which should be the same as the one created in step 2 above.
  10. If encrypted, specify your backup password and then choose to overwrite the existing VeeamBackup database you created when you installed Veeam in step 5. The defaults should do this.
  11. Choose any Restore Options you may want. I personally chose to check all 4 of the boxes but each job will have its own requirements.
  12. Click the Finish button to begin the migration. From this point on, if any screens or messages pop up about errors or issues in processing, it is a good idea to go ahead and contact support. All this process does is move the database from the old server to the new one, changing any references to the old server along the way. If something goes wrong it is most likely going to have a cascade effect, and you are going to want support involved sooner rather than later.

IV. Verification and Cleanup

  1. Now that your server has been migrated it’s a good idea to go through all the tabs in your Backup Infrastructure section, ensuring that all your information looks correct.
  2. Go ahead and run a Config Backup at this point. That’s a nice low-key way to ensure that all of the basic Veeam components are working correctly.
  3. Re-enable your disabled backup, backup copy and Agent jobs. If possible go ahead and run one and ensure that everything is hunky dory there.

Gotchas

This process, when working correctly, is extremely smooth. I'll be honest and admit that I ran into what I believe is a new bug in the VBR migration wizard. We had a few SureBackup jobs that had been set up and, while they had been run, had never been modified since creation. When this happens VBR records the job_modified field of the job's configuration database record as NULL. During the migration the wizard left those fields blank in the restored database, which is evidently something that is checked when you start the Veeam Backup Service. While the service appears to be running in the basic services.msc screen, under the hood you are only getting partial functionality. In my case support was able to go in and modify the database to put the NULL values back in those fields, but if you think you might have this issue it might be worth changing something minor on all of your jobs before taking the final configuration backup.

Conclusion

If you've made it this far, congrats! You should be good to go. While the process seems daunting it really wasn't all that bad; if I hadn't run into an issue it wouldn't have been bad at all. The good news is that at this point you should be able to scale your backup system much more easily, without the grip-and-rip that used to be required.

A VMworld US 2017 To Do List

If you work in the virtualization or datacenter field (are they really different anymore?) you probably know that VMworld US 2017 is next week, August 27-31. While VMware may not be the only option out there when it comes to virtualization anymore, VMworld is still the de facto event for people in the field. This conference's definition of community is unrivaled in scope, with just as much, if not more, going on outside of the conference agenda as in it.

As with all things worth doing, conference attendance probably needs a checklist. Have you done yours? If not, here are the high points of mine. I'm not going to bore you with "Jim will be attending session so-and-so" (well, except for VMTN6699U and VMTN6700U; you should totally join me at those sessions), but these are pretty general things I try to do each time.

  • Take Your Vitamins– I hate to say it but the Vegas Flu is a real thing. Between being in the recirculated air of a jumbo jet for however many hours on either end of the event and being in the recirculated air of a Vegas hotel/casino/conference center, I always seem to get at least a mild head cold at some point during the week. Start taking whatever vitamin C supplement you like about now and keep it up throughout the event to help head this issue off.
  • Bring Sharable Power– The average conference attendee has three devices on them at all times: phone, tablet and laptop. These will start to get low on battery about midday and that just won't do. In theory lots of places will have power outlets, but with 25,000+ attendees they are still in short supply. I typically bring a big battery pack, a travel surge protector and USB power cables for everything under the sun so that I can plug in and share at sessions and keynotes.
  • Get There Early and Be Ready To Learn– While the conference doesn't start in earnest until Monday the 28th, I always try to arrive midday Saturday because there is so much going on before the conference starts. One of the highlights of the entire conference for me each year is Opening Acts, a series of panel sessions put on by VMunderground and vBrownBag on Sunday afternoon. These sessions always prove to be insightful and are traditionally more career-centric or wider-ranging than your typical VMworld session. The fact that this is followed by the always awesome VMunderground party that night is not lost on me either. Also, if you are a VMware TAM customer there is exclusive content for you on Sunday afternoon.
  • Be Comfortable Being Yourself– So what do you wear? My friend Matt Crape covered this well in his recent post, but I would add: go with whatever makes you most comfortable while networking with your peers. If you are good with shorts and a t-shirt, go for it. Me personally, I'm a golf shirt and jeans kind of guy, so that's most of what you'll see from me. Your days at VMworld are most likely going to run 15-20 hours, so go with what feels good, unless that's naked. Nobody needs to see that. 😉
  • Get Out and Be Social– This is not a "Woo Hoo, It's Vegas So Let's Party" topic. Yes, you can do that if that's your prerogative, but keep in mind some of the smartest minds in your chosen career are going to be here, both out at events in the evening and in the hang space during the day. Go meet people; they are typically pretty nice and cool. While the VMworld sessions are what's being sold as the content of the conference, I will book very few of those, choosing instead to spend my time learning from others how they are dealing with many of the same issues I have and making connections that can prove helpful down the road.
    Where to go be social? During the day the HangSpace/VM Village is the place to go. In the evenings there is a never-ending list of gatherings to find your way to. I personally will make sure I attend the Veeam party and VMunderground, as they are my two evening must-dos each year and are typically among the biggest. Past that I'll just go with the flow.
  • Be Social Online Too– If you are a tweeter, be sure to use not only the #VMworld hashtag but also that of whatever session or event you are currently in. If you look around, it will typically be on a wall somewhere. This will help you extend the conversation during the session. If you aren't on Twitter yet you may want to consider joining; it is often a great way to see what your colleagues are saying about announcements and such in real time. It also serves as a great way to meet up with others at the conference.
  • Get Some Sleep When Possible– I know this sounds counter-intuitive to the previous topic, but if you are a 40-year-old like me this week will catch up to you. It is definitely possible to do events and conference sessions from 7:30 AM to after midnight each day, and while that's a lot of fun, by Wednesday there are so many zombies walking around Mandalay Bay it looks like an episode of The Walking Dead. If you've been working in the session builder already, take a look at your schedule and leave room to sleep in one morning sometime midweek. You can catch up on the sessions once you get back.

While there's more than that for me, those are the basics. If you are going, please hit me up @k00laidIT on Twitter; I'd love to have a coffee, a beer or just a conversation with you. Have a great time!

P.S. Wear comfortable shoes!

Learning To Pick The Right Tech Conference at vBrisket- TOMORROW!

Hey all, just a quick post to mention that the fine folks at vBrisket will be having a get-together February 24th at 2 PM at Grist House Craft Brewery in Pittsburgh. If you work in the virtualization industry and haven't heard of vBrisket yet, you should get to know them because they have a great thing going. vBrisket takes the typical user group back to its vendor-independence roots, allowing you to focus more on your general virtualization career and less on the path of any particular vendor. At the same time it gives Clint, Gabe, Jaison, and John a great reason to bring out the smokers and prepare enough meat to feed a brewery full of techies.

I’m honored to have been invited to join the panel discussion this time. The topic is “Tech Conferences – What are the right ones for you?” This will be moderated by the vBrisket team and includes myself, John White, Mike Muto, and Justin Paul. As I see my attendance at various conferences as a big driver in the success of my career and my growth as a technology worker I’m excited to be included.

Of course this meeting wouldn't be possible without sponsorship from Zerto. At the meeting I'm sure they'll be talking about their new conference, ZertoCON, coming to Boston May 22-24.

So if you are in the Pittsburgh area tomorrow and would like to attend, just be there at 2; I look forward to meeting up!