With ransomware and cybersecurity attacks in the news almost daily, applications like our disaster recovery tooling have to start taking those protections very seriously. Often the easiest rung to reach in terms of good application security is requiring Multi-Factor Authentication (MFA) for logins. For most of us MFA is old hat, but for those unfamiliar, modern MFA can take various forms, including:
SMS One-Time Passcode (OTP): codes pushed from the website to you via SMS text message. Better than nothing, but easily spoofed.
OTP apps: apps such as Google Authenticator, Authy, or 1Password that generate the codes on your mobile device, typically after scanning a QR code during setup.
Push MFA: apps such as Duo or Gmail that, when you authenticate, trigger a push notification to a particular instance of a mobile app that is also authenticated.
With the upcoming v12 release of Veeam Backup and Replication, Veeam now supports OTP app MFA in the console application. This takes a little bit of setup, and in my particular case was a bit quirky to configure, but it definitely works. Let’s walk through how to put it into effect.
Enable MFA in VBR v12
1. Once you have v12 Console installed, either the local or remote version, open it and navigate to Users and Roles
2. MFA in VBR is not supported with any use of groups. By default, members of the local Administrators group are given the Veeam Backup Administrator role, so we need to start by adding your own login explicitly. You can do this with the Add… button.
If you weren’t already aware, you’ll notice that there are a number of roles that can be selected. More information about what each of these can do is available in the Help Center documentation.
Bug: One other thing I’ll add here is a quirk I noticed in some of our setups. If you use the browse method and for some reason the NetBIOS domain name is not simply the portion before the first dot of the FQDN (for example, prem for prem.lab.internal), then by default the User or group field above is filled in incorrectly. In a few of our test environments we use the full FQDN without the dots as the NetBIOS name, and that led to some issues that were easily fixed by just typing the correct domain into the field. This feedback has been shared, and I imagine it will be fixed before GA.
2a. If by chance you do not remove the groups before enabling MFA and hitting OK you will be presented with an error. Again, simply remove the groups to get past this.
3. It is important to understand that once you protect an account with MFA you will not be able to use it with automation methods such as PowerShell. To allow for automation you will need to create an automation-specific user, preferably with a very robust and frequently changed password, and mark it in VBR as a service account, which disables MFA for that account. Finish adding your needed users and check “Require two-factor authentication for interactive logon” to complete the server setup portion.
4. Once you have set up your accounts in the Veeam console you will need to close it and relaunch to get to the MFA registration wizard. Log in as you normally would, either using Windows session authentication or filling in the username and password.
5. Once you hit Connect you will get the standard MFA QR code for registration. Simply open the app of your choice, add the account by scanning the QR code, then supply an active confirmation code to complete setup.
And with that you have now enabled two-factor authentication for your Veeam Backup and Replication users. You can potentially harden this further by not giving permissions to the user’s Windows logon account but instead to a secondary, application-specific account, making them type in a username and password in step 4 above. With that they would have to authenticate to Windows, authenticate to Veeam, and then provide an MFA code. It all just depends on your organization’s security needs.
Conclusion
At the end of the day security practices should be a part of everything we do as IT and Disaster Recovery administrators. Little things like requiring MFA for our critical backups add up to a well-designed, layered security model.
Recently a good portion of my day job has been focused on learning and providing support for S3-compatible object storage. What is S3 compatible, you say? While Amazon’s AWS may have created the S3 platform, at its root today it is an open framework of API calls and commands known as S3. And while AWS S3 and its many iterations are the 5,000-pound gorilla in the room, many other organizations have created either competing cloud services or storage systems that let you leverage the technology in your own environments.
So why am I focusing on this, you may ask? Today we are seeing more and more enterprise and cloud technologies rely on object storage. Any time you even think of mentioning Kubernetes you are going to be consuming object storage. In the Disaster Recovery landscape we’ve had the capability for a few years now to send our archive or secondary copies of data to object “buckets,” as object storage is both traditionally cheaper than other cloud-based file systems and provides a much larger feature set. With its upcoming v12 release, Veeam is delivering the first iteration of the Backup & Replication product that can write directly to object storage with no need for the first repository to be a Windows or Linux file system.
To focus specifically on the VBR v12 use case, many customers are going to start dipping their toes into on-prem S3-compatible object storage. This can be as full featured as a Cloudian physical appliance or as open and flexible as a MinIO- or Ceph-based architecture. The point is that as Veeam’s and other enterprise technologies’ needs for object storage mature, your systems will grow out of the decisions you make today, so it’s a good time to start learning about the technology and how to do the basics of management from an agnostic point of view.
So please excuse the long-windedness of this post as I dive into the whys and hows of S3-compatible object storage.
Why Object Then?
Before we go further it’s worth taking a minute to talk about the reasons why these technologies are looking to object storage over the traditional block-backed file system (NTFS, ReFS, XFS, etc.) options. Probably first and foremost, object storage is designed to be a scale-out architecture. With block storage you can do things like creating RAID arrays to join multiple disks, but you aren’t really going to make a RAID set that spans multiple servers. So for the backup use case, rather than be limited by the idea of a single server or have to use external constructs such as Veeam’s SOBR to stitch servers together, you can target an object storage gateway that then writes to a much more scalable, much more tunable infrastructure of storage servers underneath.
Beyond the scale-out you have a vast feature set. Things that we use every day such as file versioning, security ACLs, least-privilege design, and the concept of immutability are extremely important in designing a secure storage system in today’s world, and most object storage systems are going to be able to provide these capabilities. Beyond this we can look at capabilities such as multi-region synchronization as a way to ensure that our data is secure and highly available.
Connecting to S3 or S3 Compatible Storage
Regardless of which client you are using, you are going to need four basic pieces of information to connect to the service at all.
Endpoint: an internet-style HTTPS or HTTP URL that defines the address of the gateway your client will connect to.
Region: the datacenter location within the service provider’s system where you will be storing data. For example, the default for AWS S3 is us-east-1, but it can be any number of other locations based on your geography needs and the provider.
Access Key: one half of the credential set you will need to consume your service, roughly akin to a username or, if you are used to consuming Office 365, the AppID.
Secret Key: the other half, essentially the generated password for the access key.
Regardless of the service, you will be consuming all of those parts. With tools that have native AWS integration you may not necessarily be prompted for the endpoint, but be assured it’s being used.
To get started connecting to a service and creating basic, no-frills buckets you can look at some basic GUI clients such as Cyberduck for Windows or macOS, or WinSCP for Windows. Decent primers for using these can be found here and here.
Installing and Configuring AWS CLI Client
If you’ve ever used AWS S3 to create a bucket before, you are probably used to going to the console website and pointy-clicky creating a bucket, setting attributes, uploading files, etc. As we talk more and more about S3-compatible storage, that UI may or may not be there, and if it is, it may be wildly different from what you use at AWS because it’s a different interpretation of the protocol’s uses. What is consistent, and in some situations may be your only option, is consuming S3 via the CLI or the API.
Probably the easiest and most common client for consuming S3 via the CLI is the AWS CLI. This can easily be installed via the package manager of your choice, but for quick and easy access:
Windows via Chocolatey
choco install -y awscli
macOS via Homebrew
brew install awscli
Once you have it installed you are going to need to interact with two local files in the .aws directory of your user profile: config and credentials. You can get these created by using the aws configure command. Further, the AWS CLI supports the concept of profiles, so you can create multiple connections and accounts. To get started you would simply use aws configure --profile obj-test, where obj-test is whatever name you want to use. This will then walk you through prompts for three of those four pieces of information: access key, secret key, and default region. Regardless of OS, this command writes to ~/.aws/config and ~/.aws/credentials, and both files are worth reviewing after you configure to become familiar with the format and the security implications.
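As a rough sketch of what that leaves behind (the profile name, keys, and region below are placeholder values, not anything real), the two files end up looking something like this:

# ~/.aws/credentials
[obj-test]
aws_access_key_id = AKIAEXAMPLEKEY
aws_secret_access_key = exampleSecretKey1234567890

# ~/.aws/config
[profile obj-test]
region = us-east-1
output = json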
Getting Started with CLI
Now that we’ve got our CLI installed and authentication configured, let’s take a look at a few basic commands that will help you get started. As a reference, keep the living command references for the aws s3 and aws s3api command sets handy; you will be using both. First up, let’s create a bucket.
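As a sketch, bucket creation against my lab provider looks something like the following; the jimtest profile and endpoint are the same ones used throughout this post, and the bucket name is just an example:

aws --profile jimtest --endpoint-url=https://us-central-1a.object.ilandcloud.com s3api create-bucket --bucket test-bucket-1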
Awesome! We’ve got our first bucket in our object store. That’s cool, but I want my bucket to be able to leverage this object lock capability Jim keeps going on about. To do that you use the same command but add the --object-lock-enabled-for-bucket parameter.
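Roughly like this, again with my lab profile and endpoint and an example bucket name. Note that object lock has to be enabled at creation time, and enabling it also turns on versioning for the bucket:

aws --profile jimtest --endpoint-url=https://us-central-1a.object.ilandcloud.com s3api create-bucket --bucket test-bucket-2-locked --object-lock-enabled-for-bucket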
So yeah, good to go there. Next let’s dive into that s3api list-buckets command seen in the previous screenshot. Listing buckets is a good example for understanding that when you access S3 or S3-compatible storage you are really working with two command sets: the high-level s3 commands and the lower-level s3api commands. For listing buckets you can use either:
aws --profile jimtest --endpoint-url=https://us-central-1a.object.ilandcloud.com s3 ls
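or the equivalent s3api call (same profile and endpoint):

aws --profile jimtest --endpoint-url=https://us-central-1a.object.ilandcloud.com s3api list-buckets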
While these are similar, it’s worth noting the returns will not be the same. The s3 ls command will return data much like it would in a standard Linux shell, while s3api list-buckets will return JSON-formatted data by default.
Enough About Buckets, Give Me Data
So buckets are great, but they are nothing without data inside them. Let’s get to work writing objects.
Again, writing data to S3 can feel very similar, especially if you are familiar with the *nix methods. I can use s3 cp or s3 mv to copy or move data to my s3://test-bucket-2-locked/ bucket, to any other bucket I’ve created, or between buckets.
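For instance, uploading a local file to the locked bucket is a one-liner; the file name here is just a placeholder:

aws --profile jimtest --endpoint-url=https://us-central-1a.object.ilandcloud.com s3 cp ./backup-notes.txt s3://test-bucket-2-locked/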
Now that we’ve written a couple of files, let’s look at what we have. Once again you can do the same action via both methods (s3 ls against the bucket or s3api list-objects); it’s just that the s3api way will consistently give you more information and more capability. Here’s roughly what the API call and its output look like.
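A minimal sketch, with the jimtest profile again and the output trimmed down to the fields we care about (the key and values shown are illustrative, not copied from a real listing):

aws --profile jimtest --endpoint-url=https://us-central-1a.object.ilandcloud.com s3api list-objects --bucket test-bucket-2-locked
# Illustrative output, trimmed:
# {
#     "Contents": [
#         {
#             "Key": "backup-notes.txt",
#             "LastModified": "…",
#             "Size": 2048,
#             "StorageClass": "STANDARD"
#         }
#     ]
# }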
Take note of a few things here. While the s3 ls command gives more traditional file-system-style output, s3api refers to each object by its entire “path,” which is its key. Essentially, object storage still presents the concept of a file and folder structure for our benefit, but it views each unique object as a single flat thing without a true tree. The key is also important because as we start to consider more advanced object storage capabilities such as object lock, encryption, etc., the key is often what you need to supply to complete the commands.
A Few Notes About Object Lock/Immutability
To round out this post let’s take a look at the “why” we started with: immutability. Sure, we’ve enabled object lock on a bucket above, but that really just enables versioning; it’s not enforcing anything yet. Before we get crazy with creating immutable objects it’s important to understand there are two modes of object lock:
Governance Mode – In Governance Mode users can write data and not be able to truly delete it as expected, but there are roles and permissions that can be set (and that root inherits) which allow the lock to be overridden and data to be removed.
Compliance Mode – This is the firmer option, where even the root account cannot remove data or versions and the retention period is hard set. Further, once a retention date is set on a given object you cannot shorten it in any way, only extend it.
Object lock is actually applied in one of two ways (or a mix of both): by creating a default retention policy and applying it to a bucket so that anything written to that bucket assumes that retention, or by applying a retention period to an object itself, either while writing the object or after the fact.
Let’s start with applying a basic policy to a bucket. In this situation, for my test-bucket-2-locked bucket, I’m going to enable compliance mode and then set retention to 21 days. A full breakdown of the formatting of the object-lock-configuration parameter and the options it provides can be found in the AWS documentation.
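A sketch of that command, again using my lab profile and endpoint; the JSON is the standard object lock configuration structure with compliance mode and a 21-day default retention:

aws --profile jimtest --endpoint-url=https://us-central-1a.object.ilandcloud.com s3api put-object-lock-configuration --bucket test-bucket-2-locked --object-lock-configuration '{"ObjectLockEnabled": "Enabled", "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 21}}}'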
Cool, now to check that configuration we can simply run s3api get-object-lock-configuration against the bucket to verify what we’ve done. I’ll note that for either the “put” above or the “get” below there is no high-level s3 command equivalent; these are some of the more advanced features I’ve been going on about.
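For reference, checking it against the same bucket looks something like this:

aws --profile jimtest --endpoint-url=https://us-central-1a.object.ilandcloud.com s3api get-object-lock-configuration --bucket test-bucket-2-locked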
OK, so we’ve applied a baseline policy of compliance mode with 21 days of retention to our bucket and confirmed that it’s set. Now let’s look at the objects within. You can view a particular object’s retention with the s3api get-object-retention command. As we are dealing with advanced features at the object level, you will need to capture the key of an object to test with. If you’ll remember, we found those using the s3api list-objects command.
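Using the key from the earlier listing (again, the file name is my placeholder), the check looks something like:

aws --profile jimtest --endpoint-url=https://us-central-1a.object.ilandcloud.com s3api get-object-retention --bucket test-bucket-2-locked --key backup-notes.txt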
So as you can see we have both the mode and the retention date set on the individual object. What if we wanted this particular object to have a different retention period than the bucket default? Let’s put the s3api put-object-retention command to work and try to set that down to 14 days instead. While we used a general-purpose number of days when creating the policy, object-level retention is set by specifying the actual date stamp, so we’ll simply pick a day 14 days from today.
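The attempt looks roughly like this; the retain-until date is just an example value 14 days out from when the command is run:

aws --profile jimtest --endpoint-url=https://us-central-1a.object.ilandcloud.com s3api put-object-retention --bucket test-bucket-2-locked --key backup-notes.txt --retention '{"Mode": "COMPLIANCE", "RetainUntilDate": "2022-11-15T00:00:00Z"}'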
D’oh! Remember what we said about compliance mode? That you cannot make the retention shorter than what was previously set? We are running into exactly that here and can see that it does in fact work! Instead let’s try this again and set it to 22 days.
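Same command, just with an example retain-until date that lands past the existing one:

aws --profile jimtest --endpoint-url=https://us-central-1a.object.ilandcloud.com s3api put-object-retention --bucket test-bucket-2-locked --key backup-notes.txt --retention '{"Mode": "COMPLIANCE", "RetainUntilDate": "2022-11-23T00:00:00Z"}'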
As you can see, not only did we not get an error, but when you check the retention again it now shows the new timestamp, so it definitely worked.
This feels like a good time to note that object locking is not the same as deletion protection. If I create an object-lock-enabled bucket and upload some objects to it, setting the object retention flag with the right info along the way, I am still going to be able to use a basic delete command on those files. In fact, if I use Cyberduck or WinSCP to connect to my test bucket I can right-click on any object there and successfully choose delete. What is happening under the covers is that a new version of that object is spawned, one with a delete marker applied to it. To standard clients it will appear that the data is gone, but in reality it’s still there; it just needs to be restored to the previous version. In practice, most of the UIs you are going to use to consume S3-compatible storage, such as Veeam or provider-developed consoles, will recognize what is going on under the covers and essentially “block” you from executing the delete. But feel secure that as long as you have object lock enabled and the data is written with a retention date, the data has not actually gone away and can be recovered.
All of this is a somewhat long-winded answer to the question “How does S3 Object Lock work?”, which Amazon has thoughtfully answered in this post. I recommend you give it a read.
Conclusion
In the end you are most likely NOT going to need to do all of the above steps via the command line. More likely you will be using some form of UI, be it Veeam Backup and Replication, the AWS console, or that of your service provider, but it is very good to know how to do these things, especially if you are considering on-premises object storage as we move into this next evolution of IT and BCDR. Learning and testing the above is a relatively low-cost exercise, as most object services are literally pennies per GB, possibly plus egress data charges depending on your provider (hey, AWS…), but it’s money well spent to get a better understanding.
I have been fortunate to be selected as a delegate for Cloud Field Day 12 next week, November 3-5, 2021. This will be the first Tech Field Day event with in-person attendees since the beginning of the pandemic, and I’m happy to say I am on my way to San Jose, CA to be in attendance. While I am definitely excited about being able to travel again, I am most excited about the slate of vendors that will be presenting on all things cloudy. Thus far the delegate panel appears to be very well attended and includes a number of friends and acquaintances such as Nico Stein and Nathan Bennett.
Disclosure
In the interest of full disclosure it is worth noting that Gestalt IT, the company behind Tech Field Day, is covering the costs of my expenses for this trip, up to and including travel, lodging, and meals while on site. That said, I have had no stipulations or requests on commentary from either Gestalt IT or the presenting sponsors. In the week prior to the event I was invited to be a guest on the On-Premise IT podcast discussing the premise “The Cloud Is Finally Ready for the Enterprise.”
Schedule
All presentations can be viewed live online at the base event link https://techfieldday.com/event/cfd12/, and if you are following along, the other delegates and I will be happy to pass along questions sent via Twitter with the #CFD12 tag. In PST, the presentations will be on this schedule:
In this post we are going to continue our journey of building your own, full-featured Veeam Backup and Replication v10 environment. As a reminder of how this series is laid out, here’s the list:
Episode 2: On-Prem Windows Components (VBR, Windows Proxy, Windows Repo)
Episode 3: On-Prem Linux Components (Proxy, Repo), Create Local Jobs
Episode 4: Build Service Provider pod
Episode 5: Create cloud jobs to both Service Provider and Copy Mode to S3
Episode 6: Veeam Availability Console
Episode 7: Veeam Backup for Office 365
In this installment we are going to focus on building out the Microsoft Windows side of your customer environment, the part that is deployed on-premises alongside the customer data. We will begin by deploying a Veeam Backup and Replication server, followed by setting up a Windows proxy server and a repository based on ReFS.
If you haven’t noticed, new content on this site has gotten a little scarce for the past few months. To be very honest my life seems to have been a bit crazy, both professionally and personally, and writing, unfortunately, has been back-burnered. On my drive home yesterday I was listening to Paul Woodward‘s excellent ExploreVM podcast episode with Melissa Palmer, where she was speaking about the process of writing her excellent book “IT Architect Series: The Journey“. This served to remind me that I really should get back to writing, as I’ve got a few topics I really need to blog about.
As these things often happen in my head, this led me this morning to think about why I blog in the first place. There are a number of reasons I do this, and I thought that for those who are passionate about any topic, in this case technology, a little bit of the “why” might be the thing to get you started yourself.
1. So I can remember how I did something. Way back in the olden days (2004ish?) when my blog was called tastyyellowsnow.com, this was the main reason I created the blog. I was three jobs into my career and all of my notes for how to do anything were saved in Outlook notes that I moved from a work account to a PST, to a work account, to a PST, to a work account. I was tired of doing it that way so I thought I’d try putting it out there. That hawtness ran on some ASP-based package with an Access database on the backend (I still have it!), and while some of the content was absolutely horrible, the reason behind it is still my primary driver: to make sure that if I figured out how to do something a certain way I could remember how to do it when I invariably had to do it again. Looking through some of those titles, some like “Changing the Music On Hold Volume in Cisco CallManager” and “Recovering from a Bad Domain Controller Demotion” are still actually relevant. It’s nice to know where to find those things.
2. So that others can learn how I did something. I joked on Twitter the other day that in preparing a Career Day talk for my daughter’s Kindergarten class I should title it “SysAdmin: I Google things for those who will not”. If you are new to IT or aspire to work as an *Admin, I cannot express how much of my “how in the world do you know that” is simply being good at feeding errors into Google and processing the results. There may be 20 posts on how to do a single task, but one of them will make more sense to me than the others. Because of that, I try to feed many things back into the collective Google, especially the things that I wasn’t able to find much on or that I had to piece together through multiple KB articles and blog posts. In doing so I really do hope that I help others get their job done without having somebody send them the sticker to the right.
3. Writing something down in a manner you expect others to understand can often provide clarity. There’s an old adage that says “the best way to learn something is to teach it.” While yes, it is cliché, speaking as a former adjunct college professor and current internal staff trainer when needed, it is absolutely true. When I am learning something new or I have just finished working through a complex issue I find that documenting it, either here or internally helps to solidify what the core issue was, what components led to the issue, how the problem was solved and finally how it can be prevented in the future.
Conclusion
Those are the reasons why you see new things here from time to time. I do want to mention one thing you did not see above, and that is gaining access to influencer programs. I’ve been very fortunate to be included in the vExpert and Veeam Vanguard communities, and while many will say the way to get there is through blogging, I disagree. I think the best way to achieve those accolades and keep them is to develop your own version of commitment to the tech community. If giving things back to the community at large is something you find value in, then you will find a way to do it: blogging, tweeting, podcasting, or any other way. If that’s a goal of yours and blogging or writing isn’t your thing, there are any number of ways to meet that goal as long as you focus on why you are in the community to start with.
As life has its ups and downs, so does the regularity of content here. What are your reasons for blogging? If you have thought about it and haven’t done it yet, why not? Let’s continue the discussion on Twitter by reaching out to @k00laidIT and help the distributed mind grow.
I don’t know about the rest of you, but printing has long been the bane of my existence as an IT professional. Frankly, I hate it and believe the world should be 100% paperless by this point. That said, throughout my career my users have done a wonderful job of showing me that I am truly in the minority on this matter, so I have to do my part in making sure printers are available.
As any Windows SysAdmin knows, installing the actual print driver and setting up a TCP/IP port aren’t even half the battle. From there you’ve got to get the printers shared and have the users actually connect to them so that they can use them. It’d be awesome if they would all just sit down and say “I have no printers, let me go to Active Directory and find some,” but I’ve yet to have more than a handful of users who see this as a solution; they just want the damned things there and ready to rock and roll.
In the past, I’ve always managed this with a series of old VBS scripts, which still work but require tweaks from time to time. It’s possible to do this kind of stuff with PowerShell these days as well, as long as your user has the Active Directory module imported (hint: they probably don’t). There are also any number of 3rd-party and really expensive Microsoft systems (hi, SCCM!) that will do this as well. But luckily we’ve had a little thing called Group Policy Preferences around for a while now too, and it will do everything we need to make this really manageable, with a nice pretty GUI that you can even teach the Help Desk intern how to manage.
Set up the Print Server(s) – This is the same old, same old. Pick a server or set of servers, set up all your printers, and share them. This gives you centralized queue management and all the goodies we know and love.
Create Security Groups – Unless you work in a 10-person office, most people won’t necessarily need every printer. I like to create security groups, one per printer, and then assign everybody who needs that printer to the security group. I typically also like to set up these groups with a prefix, usually “prnt”, so that they are all grouped together, but that’s just me. Set these up now and we’ll use them in a minute.
Create a new GPO – Truthfully this is a personal preference, but I typically like to create a separate GPO for each major task I want to achieve, aside from baseline things I throw into a domain default policy.
Navigate to User Configuration > Preferences > Control Panel Settings > Printers – Cool, it’s a blank screen! Let’s fill this sucker up with some printing goodness. Start by right-clicking the screen and choosing New > Shared Printer.
Once here you will see the default action is Update. While there is an option for Create, we want to leave the setting at the default because this will allow you more flexibility in the future while still letting you accomplish your goal now.
Go ahead and fill in the share path with the full UNC path to the shared printer, leaving everything else blank, then click on the “Common” tab.
This is where the magic happens so everybody only gets what they need. Check the box for “Item-level targeting” at the bottom and then click the now-available button.
In the now open Targeting Editor window click the “New Item” button and choose “Security Group.” Note: I like to do this task with Security Groups but as you can see there are lots of options to choose from. You may want to do the assignment based on Active Directory Sites if you have a rotating band of workers for example. Do what fits your organization.
Hit the browse “…” button, go find the group you want this printer added for, then hit OK all the way back out to the GPO screen.
That’s it! You can essentially rinse and repeat these instructions for as many printers and print servers as you need to support. There really isn’t even any server magic to the printing; for all GP Preferences cares, these could all be printers shared off individual workstations. I wouldn’t do that, but you know… My one real gripe with this is there doesn’t seem to be a way to script your way out of the process yet. I was able to bulk install the printers and create the ports on the print server, but doing this work outside of the GUI essentially means exporting the preferences list to an XML file, editing it, and then importing it back in. Eww.
P.S. ProTip: Use Delete All For Print Server Migrations
The idea spark for this post was a need to recreate all the logical printers in response to an office reorganization. The old names made no sense, so we just blew them away and created new ones. One thing I did find out is that since Windows Server 2012 you can create a Printer Preference of type Delete and choose “Delete all shared connections.” Coupled with the Common option “Apply once and do not reapply,” this can be a very effective way to manage a print server migration, reorganization, or any number of other goals I can think of. If you do choose to do this, be sure to 1) make sure any version of this preference you were using for the “old printers” is gone before you set it to run, and 2) adjust the order of the Printer Preferences so it is number 1 in the order. In addition, when I was looking to use it I created it and then immediately right-click > Disabled the preference until I was really ready for it to go.
One of the biggest headaches I have, and have heard about from other Veeam Backup & Replication administrators, is backup server migrations. In the past I have always gone the “all-in-one” approach: one beefy physical server with Veeam directly installed and housing all the roles. This is great! It runs fast and it’s a fairly simple system to manage, but the problem is that every time you need more space or you’re upgrading an old server you have to migrate all the parts and all the data. With my latest backup repository upgrade I’ve decided to go to a bit more of a distributed architecture, moving the command-and-control part out to a VM with an integrated SQL Server and letting the physical box handle the repository and proxy functions, producing a best-of-both-worlds setup: the speed and simplicity of all the data mover and VM access happening from the single physical server, while the setup and brains of the operation reside in a movable, upgradable VM.
This post is mostly composed of my notes from the migration of all parts of VBR. The best way to think of this is to split the migration into three major parts: repository migration, proxy migration, and VBR server migration. These notes are fairly high level, not going too deep into the individual steps. As migrations are complex, if any of these parts don’t make sense to you or do not provide enough detail, I would recommend that you give the fine folks at Veeam support a call to ride along as you perform your migration.
I. Migrating the Repository
Set up one or more new repository servers.
On your existing VBR server, add a new repository pointing to a separate folder (e.g., D:\ConfigBackups) on the new repository server, used exclusively for configuration backups; these cannot be included in a SOBR. Change the Config Backup settings (File > Config Backup) to point to the new repository. This is also probably a good time to go ahead and run a manual config backup while you are there to snapshot your existing setup.
Create one or more new backup repositories on your new repository server(s) and add them to your existing VBR server configuration.
Create a Scale-Out Backup Repository (SOBR), adding your existing repository and the new repository or repositories as extents.
All of your backup jobs should automatically be changed to point to the SOBR during the setup but check each of your jobs to ensure they are pointing at the SOBR.
If possible go ahead and do a regular run of all jobs or wait until your regularly scheduled run.
After a successful run of the jobs, put the existing extent repository into Maintenance Mode and evacuate its backups.
Remove the existing repository from the SOBR configuration and then from the Backup Repositories section. At this point no job data should actually be flowing through your old server. From a data locality standpoint it is perfectly fine for a SOBR to contain only a single extent.
II. Migrate the Backup and Guest Interaction Proxies
Go to each of your remaining repositories and set proxy affinity to the new repository server you have created. If you have previously scaled out your backup proxies then you can ignore this step.
Under Backup Proxy in Backup Infrastructure remove the Backup Proxy installation on your existing VBR server. Again, if possible you may want to run a job at this point to ensure you haven’t broken anything in the process.
Go to each of your backup jobs that are utilizing the Guest Processing features. Ensure the guest interaction proxy at the bottom of the screen is set to your new repository server, to automatic selection, or, if you have scaled out, to another server in your infrastructure.
III. Migrate the Veeam Backup & Replication Server
Disable all backup, Backup Copy and Agent jobs on your old server that have a schedule.
Run a Config Backup on the old server. If you have chosen to encrypt your configuration backup, the process below is going to be a great test of whether you remembered or documented the password. If you don’t know what this is, go ahead and change it under File > Manage Passwords before running this final configuration backup.
Shut down all the Veeam services on your existing backup server or go ahead and power it down. This ensures you won’t have two servers accessing the same components.
If not already done, create your new Veeam Backup and Replication server/VM. Be sure to follow the guidelines on sizing available in the Best Practices Guide.
Install Veeam Backup & Replication, ensuring that you use the same version and update level as the production server. The safest bet is to just have both patched to the latest level of the latest version.
Add a backup repository on your new server pointing to the Config Backup repository folder you created in step 2 of the Migrating the Repository section.
Go to Config Backup and hit the “Restore” button.
As the wizard begins choose the Migrate option.
Change the backup repository to the config backup repository you just added and choose your latest backup file, which should be the one created in the final config backup run above.
If encrypted, specify your backup password and then choose to overwrite the existing VeeamBackup database you created when you installed Veeam on the new server. The defaults should do this.
Choose any Restore Options you may want. I personally chose to check all 4 of the boxes but each job will have its own requirements.
Click the Finish button to begin the migration. From this point, if any screens or messages pop up about errors or issues in processing, it is a good idea to go ahead and contact support. All this process does is move the database from the old server to the new one, changing any references to the old server along the way. If something goes wrong it is most likely going to have a cascade effect, and you are going to want them involved sooner rather than later.
IV. Verification and Cleanup
Now that your server has been migrated it’s a good idea to go through all the tabs in your Backup Infrastructure section, ensuring that all your information looks correct.
Go ahead and run a Config Backup at this point. That’s a nice low-key way to ensure that all of the basic Veeam components are working correctly.
Re-enable your disabled backup, backup copy and Agent jobs. If possible go ahead and run one and ensure that everything is hunky dory there.
Gotchas
This process, when working correctly, is extremely smooth. I’ll be honest and admit that I ran into what I believe is a new bug in the VBR Migration wizard. We had a few SureBackup jobs that had been set up and, while they had been run, had never been modified since creation. When this happens VBR stores the job_modified field of the job’s configuration database record as NULL. During the migration the wizard left those fields blank in the restored database, which is evidently something that is checked when you start the Veeam Backup Service. While the service in the basic services.msc screen appears to be running, under the hood you are only getting partial functionality. In my case support was able to go in and modify the database to fix those fields, but if you think you might have this issue it might be worth changing something minor on all of your jobs before the final configuration backup.
Conclusion
If you’ve made it this far, congrats! You should be good to go. While the process seems daunting, it really wasn’t all that bad. If I hadn’t run into an issue it wouldn’t have been bad at all. The good news is that at this point you should be able to scale your backup system much more easily, without the grip-and-rip that used to be required.
Each year many of the major companies in the tech industry allow people to be nominated, by themselves or by others, to be recognized for their contributions to the community that surrounds that company’s products. These people are typically active on social media, in both online and in-person forums and user groups, and often write blogs about their experiences with the products. In return for what is essentially free, grass-roots marketing, the companies provide awardees any number of benefits: access to licenses for homelabbing, sometimes access to engineers, preferred experiences at conferences, NDA-level information, etc., but in some cases the biggest benefit is the recognition itself.
As of today (November 10, 2016) two of the bigger programs, and in my opinion one of the best, are all open for nominations.
I’m honored to be both a vExpert and a Veeam Vanguard and like to think of myself as an honorary Cisco Champion (they can’t accept government employees) so I have some experience with each of these programs. Let’s take a look at all three.
VMware vExpert may not necessarily be the oldest influencer program, but it is probably the one most socially active technical people know of, except possibly the Microsoft MVP program. In many ways vExpert is not only an honor in its own right but a launch pad toward acceptance into other programs. vExperts are, as far as I know, the largest such group with around 1,500 members worldwide, and the program boasts some really good benefits, not only from VMware but from other companies in the virtualization ecosphere. There are many webinars and meet-and-greets throughout the calendar year which are either vExpert-only or vExpert-preferred, and the vExpert party at VMworld is well known as one of the best. The distinction I make most about vExpert is that while it is for and by VMware, some years much of the educational focus is on the ecosphere and community that surrounds it.
The vExpert program offers 4 paths to membership. The one most are in is the Evangelist path. These may be customers, partners or VMware employees themselves, but they are people speaking the good word of VMware. There are also specific paths for Partners and Customers but I don’t know that I’ve ever met anyone who was awarded in those tracks. Finally if you have achieved the highest level of VMware certification, VCDX, you automatically are awarded vExpert status.
Cisco Champions contrasts with vExpert most in that it is a self-contained program, with all the educational opportunities and benefits coming from Cisco Systems itself. With the Champions there aren’t so many freebies, with the notable exception of some nice perks if you attend CiscoLive, but what they do offer is exposure for your personal brand. Between the weekly Cisco Champions Radio podcast and the regularly featured blogs on Cisco’s website, if you are working to make a name for yourself in the industry it is a very good program for that. Further, Cisco gives you access to developers and program managers within the company so that you can not only gain a greater understanding of the products but in many cases have the opportunity to weigh in on technology decisions during the development process.
Cisco breaks their program down into business segments in regard to your qualification, with tracks in Collaboration, Data Center, Enterprise Networks, IoT, and Security. If you have expertise in any of these, by all means apply.
In my mind I’m saving the best for last. The Veeam Vanguard program opened its nominations up today for its third year, and I’ve been honored to have been awarded each year (so far). It is by far the most exclusive; there are currently only 50 members worldwide, and I believe the philosophy is to keep it on the small side, with only people who truly understand what the company is about. There are a lot of swag-type benefits to the Vanguard to be sure, most noticeably something really special that revolves around their VeeamON conference (NOLA this year, baby!), but to be honest what I most get out of the program is the distributed brain of not only the Veeam employees affiliated with the group but the group itself. On a daily basis it seems somebody’s technology issues, Veeam-related or not, are being sorted out through the Vanguard communication channels. Long story short, in the Vanguard program they simply take care of you, and I’m happy to call all of them not just my peers but friends.
Because Veeam is a much tighter set of products than the other two there aren’t any official tracks within the program. That said they are very good about selecting members who affiliate themselves with each of the hypervisor companies they support, VMware’s vSphere and Microsoft’s Hyper-V. This diversity is part of what makes the discussions between us so good.
Conclusion
Over the course of the past week I’ve heard various people talking about strategies for getting awarded any number of these: “I’m not going to do this one so I can focus on that one,” and so forth. Honestly, all I can recommend, if you are interested in applying to any of them, is to look at where your focus is, or where your focus should be, and apply. There is no rule that says “you belong to too many programs” or anything like that; if you feel you are qualified for any of these or any other, by all means go apply. The name of the game is to grow your involvement with the technology community, regardless of what type of technology it is.
So we recently had the joy of upgrading our Cisco Voice setup to version 11, including our Unified Contact Center Express (UCCX) system. In the process we had to do a quick upgrade of UCCX from 9.01 to 9.02 to be eligible to go the rest of the way up to 11, allowing us to run into a nice issue I’m thinking many others are running into.
As far as 11 is concerned, the big difference is that it is the first version where Cisco Agent Desktop (CAD) is not an option, as it has been replaced by the new web-based Finesse client for agents and supervisors. For this reason many voice admins are choosing to take the leap this year to 10.5 instead, as it gives you the option of Cisco Agent Desktop/Cisco Supervisor Desktop (CSD) or Finesse. The problem? These MSI-installed client applications are not Windows 10 compatible. In our case it wasn’t a big deal, as the applications were already installed when we did an in-place upgrade of many of our agents’ desktops to Windows 10, but attempting a fresh installation would error out saying you were running an unsupported operating system.
*DISCLAIMER: While for us this worked just fine I’m sure it is unsupported and may lead to TAC giving you issues on support calls. Use at your own discretion.
Fixing the MSI with Orca
Luckily there is a way around this to allow the installers to run and even allow for automated installation. Orca is one of the tools available within the Windows SDK Components download. It allows you to modify the parameters of Windows MSI packages and either include those changes directly in the MSI or create a transform file (MST) so that the changes are saved out-of-band from the install file and can be applied to different versions as needed. As my needs here are temporary, I’m simply going to modify the MSI in place and not bother with the MST, which would require additional parameters to be passed for remote installation.
Once you have the SDK Components downloaded you can install Orca by running the Orca.msi within and then just run it like any other application. The first step is to open the program and go to File>Open and open the MSI package. In this case we are looking for CiscoAgentDesktop.msi
Once open you will see a number of tables down the left side. The easiest way I know to explain this is that an MSI is simply a sort of database wrapping the installer with parameters. Scroll down the list until you see LaunchCondition and double-click on that. You will now see a list of conditions the MSI package checks before the installer is allowed to launch. Reading the description of the first one, this is our error message, right?
Now we need to remove the offending condition which can be done by simply right clicking on it and choosing “Drop Row.” It will prompt you to confirm, just hit OK to continue.
Finally, before we save our new MSI we need to go to Tools > Options and choose the Database tab. Here we need to check “Copy embedded streams during ‘Save As’” so that our changes will be rolled into the package.
Now simply go to File>Save As… and save as you would any other file. Easy peasy…
Now if we run our new MSI package it will allow the installation to proceed as expected. Again, let me say this won’t magically tell TAC that this is a supported solution. If you run into problems they may still tell you either to upgrade to 10.6 (which supports Windows 10) or later, or to roll your Windows version back to 8.1 or older.
Here at This Old Datacenter we’ve recently made the migration to Cisco UCS for our production compute resources. UCS offers a great number of opportunities for system administrators, both in deployment and in ongoing maintenance, making updates to the physical layer as manageable as we virtualization admins are used to with the virtualized layer of the DC. Of course, like any other deployment, there is always going to be that one “oh yeah, that” moment. In my case, after I had my servers up I realized I needed another virtual NIC, or vNIC in UCS world. This shouldn’t be a big deal because a big part of what UCS does for you is abstract the hardware configuration away from the actual hardware.
For those more familiar with standard server infrastructure: instead of having any number of physical NICs in the back of the host for specific uses (iSCSI, VM traffic, specialized networking, etc.), you have a smaller number of connections as part of the Fabric Interconnect to the blade chassis that are logically split to provide networking to the individual blades. These Fabric Interconnects (FI) not only have multiple very high-speed connections (10 or 40 GbE), but each chassis will typically have multiple FIs to provide redundancy throughout the design. All that being said, here’s a very basic design utilizing a UCS Mini setup with Nexus 3000 switches and a copper-connected storage array:
So are you starting to think this is a UCS geeksplainer? No, no, my good person, this is actually the story of a fairly annoying hiccup in the relationship between UCS and VMware’s ESXi. You see, while adding a vNIC should be as simple as creating your vNICs in the Service Profile, rebooting the affected blades, and seeing the new NIC(s) show up as available within ESXi, it of course is not that simple. What happens in reality when you add new NICs to an existing physical-NIC-to-vSwitch layout is that the relationships get shuffled. So for example, say you started with a vNIC (shown as vmnicX in ESXi) to vSwitch layout that looks like this to start with
After you add NICs and reboot it looks like this
Notice the vmnic-to-MAC-address relationship in the two screenshots. While all the moving pieces are still there, different physical devices now map to different vSwitches than designed. This really matters when you think about all the differences that usually exist in the VLAN design underlying the networking in an ESXi setup. In this example vSwitch0 handles management traffic, HQProd-vDS handles all the VM traffic (so just trunked VLANs), and vSwitch1 handles iSCSI traffic. Especially when things like iSCSI that require specialized networking setup are involved, this becomes a nightmare; frankly I couldn’t imagine having to do this with a more complex design.
The Fix
So I’m sure you are sitting here like I was, thinking, “I’ll call support and they will have some magic that will either a) fix this, b) prevent it from happening in the future, or preferably c) both.” Well, not so much. The answer from both VMware and Cisco support is to figure out which NICs should be assigned to which vSwitch by reviewing the MAC-to-vNIC assignment in UCS Manager as shown, and then manually manage the vSwitch uplink assignment for each host.
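If you would rather do that cross-referencing and reassignment from the ESXi shell than click through the vSphere client, something along these lines works for standard vSwitches; the vmnic and vSwitch names are placeholders for whatever UCS Manager shows your layout should be:

# List vmnics with their MAC addresses to match against the vNIC MACs in UCS Manager
esxcli network nic list
# Move an uplink from the vSwitch it landed on to the one it was designed for
esxcli network vswitch standard uplink remove --uplink-name=vmnic3 --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --uplink-name=vmnic3 --vswitch-name=vSwitch0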
As you may be thinking, yes, this is a pain in the you-know-what. I only had to do this with four hosts; I don’t want to think about what this looks like in a bigger environment. Further, as best I can get answers from either TAC or VMware support, there is no way to make this go better in the future; this was not an issue with my UCS setup, this is just the way it is. I would love it if some of my “Automate All The Things!!!” crew could share a counterpoint on how to automate your way out of this, but I haven’t found it yet. Do you have a better idea? Feel free to share it in the comments or tweet me @k00laidIT.
Hi there and welcome to koolaid.info! My name is Jim Jones, a Geek of Many Hats living in West Virginia.
This site was created for the purpose of being a locker full of all the handy things I’ve learned over the years, know I’m going to need again, and know I’ll forget. It’s morphed a bit over the years as all things do, but that’s still the main purpose. If you’d like to know more about me, check out any of the social links at the top left of the site; I’m pretty much an open book.
If you’ve found this page I hope you find its contents helpful. Finally, anything written here is solely my own view and does not reflect those of my employer.