So yeah, I’m heading to an in-person tech conference next week. Not just any tech conference but arguably my favorite, VeeamON. The event is being held at the Aria Resort in Las Vegas May 16-19 for the in-person aspect, but like so many conferences in the post-COVID era it will be a hybrid event with a free-of-charge online aspect as well. If you have not registered for either in-person or online attendance, there is still plenty of time to get involved.
It’s been since August of 2019 that I last attended any event of real size, and while I am absolutely excited to hear about some wonderful technology and be back around friends, colleagues and fellow Vanguards, I’d be remiss not to acknowledge a healthy amount of hesitance about being around large groups of people again. That said, at this point I’ve done everything I possibly can to protect myself and those around me from the health concerns, so it’s time to get at it.
On a personal note, this time around is going to be a bit different for me as well, as it is the first event I’ve attended as part of a partner organization as opposed to just being there as a customer. There is quite a bit of preparation involved in the lead-up when attending this way, but a great deal to be interested in as well.
What’s Going On?
So a look at the full agenda tells you that Veeam Backup & Replication v12 is going to be a major focus of the conference this time around, and for good reason; from everything I’ve seen so far, v12 is going to be a MASSIVE release in terms of under-the-covers infrastructure. I’ve been working with the beta since its release and honestly I’m rarely leaving the Backup Infrastructure tab because there’s so much that’s different there. Layer in improvements to related products such as the Veeam Agents and the hyperscaler cloud backup products and I’ll be busy long after the event watching recordings.
While v12 will be a major focus it won’t be the only one. Support for backing up SaaS applications will be there as well, with further improvements to Veeam Backup for Microsoft 365 and the upcoming release of Veeam Backup for Salesforce.
I’m personally going to have a couple of speaking sessions myself. The first will be with Josh Liebster as we discuss why “All Good Things Start at 11:11” Monday at 6pm on the partner stage in the expo hall. This quick session will be for my employer iland, an 11:11 Systems company, and will talk about how we can help you with your disaster recovery and cloud workload needs.
The second session I will be involved in will be with Mr. Liebster again as well as Sean Smith from VMware Tuesday at 10:15am titled “VCPP, VMware Cloud Director and Veeam — Bringing It All Together as a Full Stack.” In this session Sean will be talking about VMware Cloud Director and the overall VCPP program of which iland is an “all-in” member. Josh and I will talk about how iland takes what VMware provides and turns it into an award winning set of products.
All in all I’m expecting it to be an excellent event as always. If you are going to be there, please reach out to me on Twitter; I’d love to meet up!
In my last post, Configuring Veeam Backup & Replication SOBR for Non-immutable Object Storage, I covered the basics of how to consume object storage in Veeam Backup & Replication (VBR). In general this is done through the concept of Scale-out Backup Repositories (SOBR). In this post we are going to build upon that and layer in object storage’s object-lock feature, which is commonly referred to in Veeam speak as Immutability.
First, let’s define immutability. What the backup/disaster recovery world thinks of as immutability is much like the old Write Once, Read Many (WORM) technology of the early 00’s: you can write to it, but until it ages out it cannot be deleted or modified in any way. Veeam and other backup vendors definitely treat it this way, but under the covers object-lock actually leverages versioning to make this happen. I can still delete an object through another client, but the net effect is that a new version of that object is created with a delete marker attached to it. This means that if that were to occur, you could simply restore the previous version and it’s like it never happened.
With VBR, once Immutability is enabled, objects are written with Compliance mode retention for the duration of the protected period. VBR recognizes that the bucket has object-lock enabled, and as it writes, each block is written with the retention policy applied directly rather than assuming a bucket-level policy. If you attempt to delete a restore point whose retention policy has not yet expired, VBR won’t let you delete it and instead gives you an error.
Setting up immutability with object storage in Veeam is much the same as without it, but with a few differences. This starts with how we create the bucket. In the last post we simply used the s3 mb command to create a bucket, but when you need to work with object-lock you need to use the s3api create-bucket command.
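As a sketch of that difference, the create would look something like this; the endpoint, profile and bucket name follow the lab examples used in these posts, so substitute your own:

```shell
# Create a bucket with object-lock enabled at creation time.
# Profile, endpoint and bucket name are lab examples.
aws --profile premlab \
    --endpoint-url=https://us-central-1a.object.ilandcloud.com \
    s3api create-bucket \
    --bucket premlab-sobr-locked-copy \
    --object-lock-enabled-for-bucket
```

Note that object-lock can only be enabled this way at creation; you cannot retrofit it onto an existing bucket with the s3 mb style of command.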
Once your bucket is created you will go about adding your backup repository as we’ve done previously but with one difference, when you get to the Bucket portion of the New Object Store Repository wizard you are going to check the box for “Make recent backups immutable” and set the number of days desired.
You now have an immutable object bucket that can be linked to a traditional repository in a SOBR. Once data is written (still in the same modes), anything that is written is un-deletable via the VBR server until the retention period expires. Finally, if I examine any of the objects in the created bucket with the s3api get-object-retention command, I can see that the object’s retention is set.
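As a hedged example of that last check, something like the following would show the mode and retain-until date; the key here is a hypothetical placeholder, and you would pull a real one from an object listing first:

```shell
# Inspect the retention applied to a single object written by VBR.
# Bucket and key are illustrative lab examples.
aws --profile premlab \
    --endpoint-url=https://us-central-1a.object.ilandcloud.com \
    s3api get-object-retention \
    --bucket premlab-sobr-locked-copy \
    --key "Veeam/Archive/example-block.blk"
```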
Veeam Backup & Replication (VBR) currently makes use of object storage through the concept of Scale-Out Backup Repositories, SOBR. A SOBR in VBR version 11 can contain any number of extents as the performance tier (made up of traditional repositories) and a single bucket for the capacity tier (object storage). The purpose of a SOBR from Veeam’s point of view is to allow for multiple on-premises repositories to be combined into a single logical repository to allow for large jobs to be supported and then be extended with cloud based object storage for further scalability and retention.
Copy Mode – any and all data that is written by Veeam to the performance tier extents will be copied to the object storage bucket.
Move Mode – only restore points that are aged out of a defined window will be evacuated to object storage, or, as a failure safeguard, when the performance tier extents reach a used capacity threshold. With Move mode, within the Veeam UI the restore points all still appear as being local, but the local files only contain metadata that points to where the data chunks reside in the bucket. The process of this occurring in Veeam is referred to as dehydration.
In this post let’s demonstrate how to create the necessary buckets and how to create SOBRs for both Copy and Move modes without object-lock (Immutability) enabled. If you haven’t read my previous post about how to configure the AWS CLI for use with object storage, you may want to check that out first.
1. Create buckets that will back our Copy and Move mode SOBRs. In this example I am using AWS CLI with the s3 endpoint to make the buckets.
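A sketch of this step; the endpoint and profile follow my lab examples, and the “copy” bucket name matches the one listed later in this post while the “move” name is illustrative:

```shell
# Create one bucket each for the Copy mode and Move mode SOBRs.
aws --profile premlab --endpoint-url=https://us-central-1a.object.ilandcloud.com \
    s3 mb s3://premlab-sobr-unlocked-copy
aws --profile premlab --endpoint-url=https://us-central-1a.object.ilandcloud.com \
    s3 mb s3://premlab-sobr-unlocked-move
```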
2. Now access your VBR server and start with adding the access key pair provided for the customer. You do this in Menu > Manage Cloud Credentials.
3. Click on Backup Infrastructure then right click on Backup Repositories, selecting Add Backup Repository.
4. Select Object Storage as type.
5. Select S3 Compatible as object storage type
6. Provide a name for your object repository and hit next.
7. For the Account Settings screen enter the endpoint, region and select your created credentials.
8. In the Bucket settings window click Browse and select your created bucket, then click the Browse button beside the Folder field and create a subfolder within your bucket with the “New Folder…” button. I’ll note here: do NOT check the box for “Make recent backups immutable for…”, as the bucket we created above does not support object-lock. Doing so will cause an error.
9. Click Apply.
10. Create or select from existing traditional, Direct Storage repository or repositories to be used in your SOBR. Note: You cannot choose the repository that your Configuration Backups are targeting.
11. Right click on Scale-out Backup Repositories and select “Add Scale-out Backup Repository…”
12. Name your new SOBR.
13. Click Add in your Performance Tier screen and select your repository or repositories desired. Hit Ok and then Next.
14. Leave Data Locality selected as the Placement Policy for most scenarios.
15. In the Capacity Tier section check to Extend capacity with object storage and then select the desired bucket.
16. (Optional but highly recommended): Check the Encrypt data uploaded to object storage and create an encryption password. Hit Apply.
17. This will have the effect of creating an exact copy of any backup jobs that target the SOBR, both on premises and in the object store. To leverage Move mode rather than Copy mode, simply check the other box instead (or in addition) and set the number of days you would like to keep on premises.
Now you simply need to target a job at your new SOBR to have it start working.
In conclusion, let’s cover a bit about how we are going to see our data get written to object storage. In the Copy mode example, it should start writing data to the object store immediately upon completion of each run. In the case of Move mode, you will see objects written on the first run after the specified move window has elapsed. For example, if you set it to move anything older than 7 days on-prem, dehydration will occur after the run on day 8. These operations can be seen in the Storage Management section of the History tab in the VBR console.
Further if I recursively list my bucket via command line I can see lots of data now, data is good. 😉
% aws --endpoint=https://us-central-1a.object.ilandcloud.com --profile=premlab s3 ls s3://premlab-sobr-unlocked-copy/ --recursive
In my last post I worked through quite a few things I’ve learned recently about interacting with S3 Compatible storage via the CLI. Now that we know how to do all that fun stuff, it’s time to put it into action with a significant Service Provider/Disaster Recovery slant. Starting with this post I’m going to highlight how to get started with some common use cases of object storage in Backup/DR scenarios. In this one we’re going to look at a fairly mature use case: object storage backing Veeam Backup for Office (now Microsoft) 365.
Veeam Backup for Microsoft 365 (VBM) v6, which was recently showcased at Cloud Field Day 12, has been leveraging object storage as a way to make its storage consumption more manageable since version 4. Object also provides a couple more advantages in relation to VBM, namely an increase in data compression as well as a method to enable encryption of the data. With the upcoming v6 release they will also support the offload of backups to AWS Glacier for a secondary copy of this data.
VBM exposes its use of object storage under the Object Storage Repositories section of Backup Infrastructure, but it consumes it as a step of the Backup Repository configuration itself, which is nested within a given Backup Proxy. I personally like to at a minimum start with scaling out repositories by workload (Exchange, OneDrive, SharePoint, and Teams) as each data type has a different footprint. When you really need to scale out VBM, say anything north of 5000 users in a single organization, you will want to use that as a starting point for how you break down and customize the proxy servers.
Let’s start by going to the backup proxy server, in this case the VBM server itself, and creating the folder structure for our desired Backup Repositories.
Now that we have folders, let’s go create some corresponding buckets to back them. We’ll do this via the AWS S3 CLI as I showed in my last post. At this point VBM does not support advanced object features such as Immutability, so there’s no need to get fancy and use the s3api; I just prefer the s3 command structure anyway.
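Sketching that out, one bucket per workload type; the endpoint and profile are my lab examples, and the “exch” bucket name matches the one shown later in this post while the other suffixes are illustrative:

```shell
# Create one bucket per VBM workload (Exchange, OneDrive, SharePoint, Teams).
for wl in exch od sp teams; do
  aws --profile premlab --endpoint-url=https://us-central-1a.object.ilandcloud.com \
      s3 mb "s3://premlab-ilandproduct-vbm365-${wl}"
done
```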
Ok, so now we have folders and buckets; time to hop into Veeam. First we need to add our object credentials to the server. This is a simple setup, and most likely you will only need one set of credentials for all your buckets. Because in this example I will be consuming iland Secure Cloud Object Storage, I need to choose “S3 Compatible access key” under the “Add…” button in Cloud Credential Manager (Menu > Cloud Credentials). These should be the access key and secret provided to you by your service provider.
Now we need to go to Backup Infrastructure > Object Storage Repositories to add our various buckets. Start by right clicking and choose “Add Object Storage.”
Now simply repeat the process above for any and all buckets you need for this task.
Now that we have all our object buckets added, we need to pair these up with our on-premises repository folders. It’s worth noting that the on-prem repo is a bit misleading: as long as you use the defaults, no backup data will ever live locally in that repository. Rather, it will hold metadata in the form of a single JetDB file that serves as pointers to the objects that are the actual data. For this reason the storage consumption here is really low and shouldn’t be part of your design constraints.
Under Backup Infrastructure > Backup Repositories we’re going to click “Add Repository..” and let the wizard guide us.
One note on that final step above. Often organizations will take the “Keep Forever” option that is allowed here, and I highly advise against this. You should specify a retention policy that is agreed upon with your business/organization stakeholders, as keeping any backup data longer than needed may have unintended consequences should a legal situation arise; data the organization believes to be long since gone is now discoverable through these backups.
Also worth noting: item-level retention is great if you are using a service provider that does not charge egress fees, because it gives you more granular control over retention. If you use a hyperscaler such as Amazon S3, you may find this option drives your AWS bill up because of a much higher egress load each time the job runs.
Once you’ve got one added again, rinse and repeat for any other repositories you need to add.
Finally the only step left to do is create jobs targeting our newly created repositories. This is going to have way more variables based on your organization size, retention needs, and other factors than I can truly do justice in the space of this blog post but I will show how to create a simple, entire organization, single workload job.
You can start the process under Organizations > Your Organization > Add to backup job…
Once again you’d want to repeat the above steps for all your different workload types, but that’s it! If we do an s3 ls on the full s3://premlab-ilandproduct-vbm365-exch/Veeam/Backup365/ilandproduct-vbm365-exch/ path, we’ll see a full folder structure where it’s working with the backup data, proving that we’re doing what we set out to do!
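For reference, that check would look something like this; the endpoint and profile are my lab examples:

```shell
# Recursively list the backup data VBM has written to the Exchange bucket.
aws --profile premlab --endpoint-url=https://us-central-1a.object.ilandcloud.com \
    s3 ls s3://premlab-ilandproduct-vbm365-exch/Veeam/Backup365/ilandproduct-vbm365-exch/ \
    --recursive
```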
In conclusion, I went way into depth on what is needed here, but in practice it isn’t that difficult considering the benefits you gain by using object storage for Veeam Backup for Microsoft 365: large-scale storage, encryption and better data compression. Hope you find this helpful, and check back soon for more!
Recently a good portion of my day job has been focused on learning and providing support for s3 compatible object storage. What is s3 compatible, you say? While Amazon’s AWS may have created the platform, at its root today s3 is an open framework of API calls and commands. While AWS s3 and its many iterations are the 5000-pound gorilla in the room, many other organizations have created either competing cloud services or storage systems that let you leverage the technology in your own environments.
So why am I focusing on this, you may ask? Today we are seeing more and more enterprise/cloud technologies rely on object storage. Any time you even think of mentioning Kubernetes, you are going to be consuming object. In the Disaster Recovery landscape we’ve had the capability for a few years now to send our archive or secondary copies of data to object “buckets,” as it is both traditionally cheaper than other cloud-based file systems and provides a much larger feature set. With their upcoming v12 release, Veeam is going to be providing the first iteration of their Backup & Replication product that can write directly to object storage with no need for the first repository to be a Windows or Linux file system.
To specifically focus on the VBR v12 use case, many customers are going to start dipping their toes into the idea of on-prem s3 compatible object storage. This can be as full-featured as a Cloudian physical appliance or as open and flexible as a MinIO- or Ceph-based architecture. The point being that as Veeam’s and other enterprise technologies’ needs for object storage mature, your systems will grow out of the decisions you make today, so it’s a good time to start learning about the technology and how to do the basics of management from an agnostic point of view.
So please excuse the long-windedness of this post as I dive into the whys and the hows of s3 compatible object storage.
Why Object Then?
Before we go further it’s worth taking a minute to talk about the reasons why these technologies are looking to object storage over the traditional block (NTFS, ReFS, XFS, etc.) options. Probably first and foremost, it is designed to be a scale-out architecture. With block storage, while you can do things like creating RAID arrays to join multiple disks, you aren’t really going to make a RAID across multiple servers. So for the use case of backup, rather than being limited by the idea of a single server, or having to use external constructs such as Veeam’s SOBR to stitch those together, you can target an object storage gateway that then writes to a much more scalable, much more tunable infrastructure of storage servers underneath.
Beyond the scale-out you have a vast feature set. Things that we use every day such as file versioning, security ACLs, least privilege design and the concept of immutability are extremely important in designing a secure storage system in today’s world and most object storage systems are going to be able to provide these capabilities. Beyond this we can look at capabilities such as multi-region synchronization as a way to ensure that our data is secure and highly available.
Connecting to S3 or S3 Compatible Storage
So regardless of whatever client you are using, you are going to need 4 basic pieces of information to connect to the service at all.
Endpoint: This will be an internet-style https or http URL that defines the address of the gateway your client will connect to.
Region: This defines the datacenter location within the service provider’s system that you will be storing data in. For example the default for AWS s3 is us-east-1 but can be any number of other locations based on your geography needs and the provider.
Access Key: this is one half of the credential set you will need to consume your service and is mostly akin to a username (or, if you are used to consuming Office 365, the AppID).
Secret Key: this is the other half and is essentially the generated password for the access key.
Regardless of the service, you will be consuming all of those parts. With things that have native AWS integration you may not necessarily be prompted for the endpoint, but be assured it’s being used.
To get started at connecting to a service and creating basic, no frills buckets you can look at some basic GUI clients such as CyberDuck for Windows or MacOS or WinSCP for Windows. Decent primers for using these can be found here and here.
Installing and Configuring AWS CLI Client
If you’ve ever used AWS S3 to create a bucket before, you are probably used to going to the console website and pointy-clicky creating buckets, setting attributes, uploading files, etc. As we talk more and more about s3 compatible storage, that UI may or may not be there, and if it is, it may be wildly different from what you use at AWS because it’s a different interpretation of the protocol’s uses. What is consistent, and in some cases may be your only option, is consuming s3 via the CLI or the API.
Probably the easiest and most common client for consuming s3 via CLI is the AWS CLI. This can easily be installed via the package manager of your choice, but for quick and easy access:
Windows via Chocolatey
choco install -y awscli
MacOS via Brew
brew install awscli
Once you have it installed, you are going to need to interact with 2 local files in the .aws directory of your user profile: config and credentials. You can get these created by using the aws configure command. Further, the AWS CLI supports the concept of profiles, so you can create multiple connections and accounts. To get started you would simply use aws configure --profile obj-test, where obj-test is whatever name you want to use. This will then walk through prompting you for 3 of those 4 pieces of information: access key, secret key and default region. As an FYI, this command writes to 2 files within your user profile regardless of OS, ~/.aws/config and ~/.aws/credentials. These are worth reviewing after you configure to become familiar with the format and security implications.
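For reference, here is a sketch of what those two files end up containing for the example profile; the region and key values are placeholders, not real credentials:

```
# ~/.aws/config
[profile obj-test]
region = us-east-1

# ~/.aws/credentials
[obj-test]
aws_access_key_id = AKIAEXAMPLEKEYID
aws_secret_access_key = exampleSecretKeyValue
```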
Getting Started with CLI
Now that we’ve got our CLI installed and authentication configured, let’s take a look at a few basic commands that will help you get started. As a reference, here are the living command references you will be using:
Awesome! We’ve got our first bucket in our repository. That’s cool, but I want my bucket to be able to leverage this object lock capability Jim keeps going on about. To do that you use the same command but add the --object-lock-enabled-for-bucket parameter.
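Sketched out, using the same profile and endpoint as the listing example below and the locked bucket name used later in this post:

```shell
# Same create-bucket call as before, with object lock enabled from the start.
aws --profile jimtest --endpoint-url=https://us-central-1a.object.ilandcloud.com \
    s3api create-bucket \
    --bucket test-bucket-2-locked \
    --object-lock-enabled-for-bucket
```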
So yeah, good to go there. Next let’s dive into that s3api list-buckets command seen in the previous screenshot. Listing buckets is a good example for understanding that when you access s3 or s3 compatible storage you are really talking about 2 things: the s3 protocol and the s3api. For listing buckets you can use either:
aws --profile jimtest --endpoint-url=https://us-central-1a.object.ilandcloud.com s3 ls
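The s3api form of the same listing would be:

```shell
# List all buckets for the account behind this profile, via the s3api.
aws --profile jimtest --endpoint-url=https://us-central-1a.object.ilandcloud.com \
    s3api list-buckets
```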
While these are similar, it’s worth noting the return will not be the same. The ls command will return data much like it would in a standard Linux shell, while s3api list-buckets will return JSON-formatted data by default.
Enough About Buckets, Give Me Data
So buckets are great, but they are nothing without data inside them. Let’s get to work writing objects.
Again, writing data to s3 can feel very similar, especially if you are familiar with the *nix methods. I can use s3 cp or s3 mv to copy or move data to my s3://test-bucket-2-locked/ bucket, to any other I’ve created, or between them.
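For example, something like the following; the local file name and the source bucket in the move are hypothetical placeholders:

```shell
# Copy a local file up to the locked bucket...
aws --profile jimtest --endpoint-url=https://us-central-1a.object.ilandcloud.com \
    s3 cp ./testfile-1.txt s3://test-bucket-2-locked/
# ...and move an object from another bucket into it.
aws --profile jimtest --endpoint-url=https://us-central-1a.object.ilandcloud.com \
    s3 mv s3://test-bucket-1/testfile-2.txt s3://test-bucket-2-locked/
```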
Now that we’ve written a couple of files, let’s look at what we have. Once again you can do the same actions via both methods; it’s just that the s3api way will consistently give you more information and more capability. Here’s what the api call would look like.
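A sketch of that listing, which returns JSON with the full detail for each object:

```shell
# List the objects in the locked bucket with full detail via the s3api.
aws --profile jimtest --endpoint-url=https://us-central-1a.object.ilandcloud.com \
    s3api list-objects --bucket test-bucket-2-locked
```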
Take note of a few things here. While the s3 ls command gives more traditional file system output, s3api refers to each object with its entire “path” as the key. Essentially, object storage still presents the concept of file and folder structure for our benefit, but it views each unique object as a single flat thing on the file system without a true tree. The key is also important because as we start to consider more advanced object storage capabilities such as object lock, encryption, etc., the key is often what you need to supply to complete the commands.
A Few Notes About Object Lock/Immutability
To round out this post, let’s take a look at where we started as the why: immutability. Sure, we’ve enabled object lock on a bucket, but what that really does is just enable versioning; it’s not enforcing anything. Before we get crazy with creating immutable objects, it’s important to understand there are 2 modes of object lock:
Governance Mode – In Governance Mode users can write data and not be able to truly delete it as expected, but there are roles and permissions that can be set (and are inherited by root) that allow that to be overridden and data to be removed.
Compliance Mode – This is the more firm option where even the root account cannot remove data/versions and the retention period is hard set. Further once a retention date is set on a given object you cannot shorten it in any way, only extend it further.
Object Lock is actually applied in one of two ways (or a mix of both): creating a policy and applying it to a bucket so that anything written to that bucket assumes that retention, or applying a retention period to an object itself, either while writing the object or after the fact.
Let’s start with applying a basic policy to a bucket. In this situation, for my test-bucket-2-locked bucket, I’m going to enable Compliance mode and then set retention to 21 days. A full breakdown of the formatting of the object-lock-configuration parameter and the options it provides can be found in the AWS documentation.
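As a sketch, the call would look something like this; the JSON structure follows the AWS documentation for the parameter:

```shell
# Apply a default Compliance-mode retention of 21 days to the bucket.
aws --profile jimtest --endpoint-url=https://us-central-1a.object.ilandcloud.com \
    s3api put-object-lock-configuration \
    --bucket test-bucket-2-locked \
    --object-lock-configuration '{"ObjectLockEnabled": "Enabled", "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 21}}}'
```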
Cool, now to check that configuration we can simply use s3api get-object-lock-configuration against the bucket to verify what we’ve done. I’ll note that for either the “put” above or the “get” below there is no s3 endpoint equivalent; these are some of the more advanced features I’ve been going on about.
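The check itself is a one-liner:

```shell
# Read back the bucket-level object lock configuration we just applied.
aws --profile jimtest --endpoint-url=https://us-central-1a.object.ilandcloud.com \
    s3api get-object-lock-configuration --bucket test-bucket-2-locked
```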
Ok, so we’ve applied a baseline policy of Compliance mode and 21 days’ retention to our bucket and confirmed that it’s set. Now let’s look at the objects within. You can view a particular object’s retention with the s3api get-object-retention command. As we are dealing with advanced features at the object level, you will need to capture the key for an object to test. If you’ll remember, we found those using the s3api list-objects command.
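Putting that together, with a hypothetical key pulled from the earlier listing:

```shell
# Show the retention mode and retain-until date on a single object.
# The key is an example placeholder; use one from s3api list-objects.
aws --profile jimtest --endpoint-url=https://us-central-1a.object.ilandcloud.com \
    s3api get-object-retention \
    --bucket test-bucket-2-locked --key testfile-1.txt
```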
So as you can see, we have both mode and retention date set on the individual object. What if we wanted this particular object to have a different retention period than the bucket itself? Let’s now put the s3api put-object-retention option to work and try to set that down to 14 days instead. While we use a general-purpose number of days when creating the bucket policy, when we set object-level retention it’s done by setting the actual date stamp, so we’ll simply pick a day 14 days from today.
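The attempt would look something like this; the key and timestamp are examples, so compute your own target date 14 days out:

```shell
# Try to shorten object-level retention to a date 14 days from now.
# Key and RetainUntilDate are illustrative placeholders.
aws --profile jimtest --endpoint-url=https://us-central-1a.object.ilandcloud.com \
    s3api put-object-retention \
    --bucket test-bucket-2-locked --key testfile-1.txt \
    --retention '{"Mode": "COMPLIANCE", "RetainUntilDate": "2022-04-05T00:00:00Z"}'
```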
Doh! Remember what we said about Compliance mode? That you cannot make the retention shorter than what was previously set? We are running into that here and can see that the enforcement in fact works! Instead let’s try this again and set it to 22 days.
As you can see, not only did we not get an error, but when you check the retention it now shows the newly defined timestamp, so it definitely worked.
This feels like a good time to note that object locking is not the same as deletion protection. If I create an object-lock-enabled bucket and upload some objects to it, setting the object retention flag with the right info along the way, I am still going to be able to run a basic delete command against a file. In fact, if I use CyberDuck or WinSCP to connect to my test bucket I can right click on any object there and successfully choose delete. What is happening under the covers is that a new version of that object is spawned, one with the delete marker applied to it. To standard clients it will appear that the data is gone, but in reality it’s still there; it just needs to be restored to the previous version. In practice most of the UIs you are going to use to consume s3 compatible storage, such as Veeam or developed consoles, will recognize what is going on under the covers and essentially “block” you from executing the delete. But feel secure that as long as you have object lock enabled and the data is written with a retention date, the data has not actually gone away and can be recovered.
All of this is a somewhat long winded answer to the question “How S3 Object Lock works” which Amazon has thoughtfully well answered in this post. I recommend you give it a read.
In the end you are most likely NOT going to need to know how to do all the above steps via the command line. More likely you will be using some form of a UI, be it Veeam Backup & Replication, the AWS console or that of your service provider, but it is very good to know how to do these things, especially if you are considering on-premises object storage as we move into this next evolution of IT and BCDR. Learning and testing the above is a relatively low-cost exercise, as most object services are literally pennies per GB, possibly plus egress data charges depending on your provider (hey AWS…), but it’s money well spent to get a better understanding.
I am honored to have been chosen as a finalist for the 2021 IT Blog Awards selected by Cisco Systems. This is my first time being a finalist and needless to say I’m a little excited about it. As a finalist, koolaid.info will be put onto the ballot for all those who wish to vote (Vote Here!), with separate categories for blogs and podcasts. If you wish to vote you may do so now until voting closes on Friday, February 18th.
I recently attended Cloud Field Day 12, put on by Gestalt IT. While there were many great presentations and some really exciting technology discussed, I’d like to focus on what I considered the highlight of the week: the Big Memory Cloud announcement by MemVerge.
MemVerge has some very big ideas around taking the “software defined” concept to memory, allowing advanced optimizations and flexibility of instance memory usage. In the past the company has launched Memory Machine products related to making the most of Intel Optane memory capabilities, but the new Big Memory Cloud product set is built all around allowing you to move stateful applications between clouds and instances.
The example given is around the idea of spot public cloud instances. For those unfamiliar with the concept, the idea is to use excess capacity in a given region during non-peak times, allowing the provider to sell those instances at a much lower cost, but subject to instances being torn down quickly as capacity is needed. For that reason spot instances work great for stateless workloads, which can easily be torn down in one place and started in another without issue, but it doesn’t work well for stateful applications, or more to MemVerge’s point, big CPU- and memory-intensive workloads like AI, graphics rendering, and research that may need to run for days or weeks to complete a single process. MemVerge Big Memory Cloud is designed to solve that problem by allowing you to capture the running state of an application at any time, taking a snapshot, and then transfer it to another running instance seamlessly.
The technical concept of this is around what they call an AppCapsule. An AppCapsule is essentially a snapshot of the DRAM memory of one instance that can then be applied via the software to another instance of the same workload elsewhere. I think the concept has dreams of being a “multi-cloud vMotion,” where a workload is running in cloud A and then is “moved” to cloud B. Today it is more along the lines of “multi-cloud Fault Tolerance,” where you have to have 2 running copies of the workload, 1 in each cloud, and the active version of the workload is moved from one to the other (or even to a third instance).
While you can see that this is currently very young, it is still very, very cool. At the moment the supported workloads for the underlying Memory Machine technology are pretty limited, mostly databases and caching systems, but with this new announcement I look for that to grow. I can see this as a big winner for the IaaS and BCDR services industry, allowing hard-to-move workloads to now be on the table. Of course those are down-the-road type things that might come through partnerships, but exciting all the same.
I have been fortunate to be selected as a delegate for Cloud Field Day 12 next week, November 3-5, 2021. This will be the first Tech Field Day event with in-person attendees since the beginning of the pandemic, and I'm happy to say I am on my way to San Jose, CA to attend. While I am definitely excited to be able to travel again, I am most excited about the slate of vendors that will be presenting on all things cloudy. Thus far the delegate panel looks very strong and includes a number of friends and acquaintances such as Nico Stein and Nathan Bennett.
In the interest of full disclosure, it is worth noting that Gestalt IT, the company behind Tech Field Day, is covering my expenses for this trip, including travel, lodging, and meals while on site. That said, I have had no stipulations or requests on commentary from either Gestalt IT or the presenting sponsors. In the week prior to the event I was invited to be a guest on the On-Premise IT podcast, discussing the premise "The Cloud Is Finally Ready for the Enterprise."
All presentations can be viewed live online at the event page, https://techfieldday.com/event/cfd12/, and if you are following along, the other delegates and I will be happy to pass along questions sent via Twitter with the #CFD12 tag. The presentations will run on this schedule (all times PST):
I recently passed the latest version of the Veeam Certified Architect (VMCA) certification exam, and I'm happy to have that one done. I found this newest version much more approachable than the rap on the last (and first) version suggested. I wanted to take a minute to share some thoughts about the credential and pointers on how I prepared.
Unfortunately, one thing that still survives in the latest version of the Veeam certification programs, the VMCE and VMCA, is a hard course requirement for each level of certification. This means that if you want or need to achieve both levels, you are going to need to take two courses and pass two exams. Further, the exams are now versioned annually, and each must be renewed independently. As long as you pass each exam every year you will not be required to retake the class to upgrade, but if you miss a year you will need to retake the course.
I wholeheartedly disagree with this approach and consider it especially burdensome on the certified person. In my mind it is understandable to have ONE course requirement, but not one for each; I struggle to think of another vendor certification program that does this. Moreover, even if you feel you need annual recertification, which I'm not wild about but can understand given the number of new features each release brings, passing the top-level exam should recertify both. The rationale explained to me is that the exams test different skill sets, but the VMCE is still listed on the website as a prerequisite for the VMCA, so I believe this to be a bit too much. At the end of the day this whole setup screams money grab for a company that should be well past the point of needing it.
That said, many of you, like me, may have employer requirements to maintain the credential, so this is for you.
VMCE vs VMCA
You might think that for a skill set as siloed as Veeam's core backup platform, Backup & Replication, there wouldn't be two exams' worth of content to cover, but these really are targeted at different levels of IT professionals. The VMCE exam wants you to know and understand the Veeam Availability Suite of products, requiring memorization of the various components and how they all fit together; think of it as your stereotypical memorization exam. While both are multiple choice exams, with the VMCE the questions all test against a core set of knowledge: if you can memorize, you've got this.
The VMCA, on the other hand, is very light on memorization but very heavy on thinking through how you would scale the core products out to a very large, distributed deployment. The focus here is on looking at a potential customer's scenario and requirements and determining what you need to build or suggest to give them a successful outcome.
My VMCA Back Story
I was lucky enough, through the Veeam Vanguard program, to take the VMCA course free of charge, both in 2017 and again in 2021 while the courses were in beta status. Oh, what a difference four years makes. In 2017 I was well versed in what VBR could do, but at the time I was a Systems Administrator for what was essentially an SMB, protecting 4 hosts and 60 VMs in a single location. While we had requirements, they weren't exactly stressing even the product's most basic capabilities. When I took the course the first time, I'm not ashamed to say it intimidated me to the point where I didn't even consider sitting the exam, because so much of it was not in line with my day-to-day work.
Fast forward to 2021: not only has the course been retooled to be more approachable, but I am now in an architecture role where I work with Veeam at scale every day, so a good deal of it made more sense to me. I say all this to point out that I wish I had taken the exam back then, because I wasn't as far away as I thought I was. Even if a smaller environment is the role you are in now, if you want to do more, this is something you can do; you just have to think differently about it.
The Exam Methodology
The entire exam is based on a single scenario that is, in theory, modeled on a very real Veeam customer design request. There is more than one of these scenarios, so if you have to retake the exam it won't be the same one; don't bother trying to brain dump this. In any case, the scenario is broken up into a number of tabs and will always be present on the left side of your screen as you take the exam, so you can refer back to it as needed. Even with that, I very much recommend taking 15-20 minutes at the beginning of your exam to read through the ENTIRE scenario so you at least know where to look for information and understand the basics of what is being asked for.
Once you get through the scenario there will be a number of multiple choice questions that all relate to it, but one thing I will share from the Exam Guide is that none of the questions build upon other questions; they all independently ask you to provide an answer directly against the scenario. This is nice in that it won't create a cascading problem.
As I stated above, I was lucky enough to take the course while it was in beta, so my impressions of it may not be in line with what is currently being delivered. That said, the core idea of the class is very good: it teaches you the Veeam architect way of thinking through a design based on customer requirements. This is especially on point because the Subject Matter Experts for the course were the Global Solutions Architect group within Veeam, some of the most knowledgeable people I know on the subject. The course walks you through what they consider the six stages of the solution lifecycle, which in turn make up the six sections of your exam, each of which is tested.
Further, the course focuses on the four basic design principles:
All of these are well covered in the course and in the Exam Guide that is part of your course materials. The guide itself is only about 5 pages, but it is jam-packed with information like the above that will really assist you, so definitely give it a read-through.
Once you understand both the lifecycle and the design tenets, there is a requirement to really know how to design the various Veeam components for use at scale, and for this I highly recommend a full read-through of the Veeam Best Practices guide. This again is content created and managed by the Veeam Solutions Architecture group and is exceptional for understanding how to think about things, both for the scope of this exam and for right-sizing your environment.
In the end, if you can conceptually design a BCDR plan based on Veeam solutions at large scale, understand the lifecycle of that plan and the given needs of a customer, and are familiar with the best practices for deploying such systems, this exam is very doable.
I received an email yesterday that the fast track program for VMCE 2021 is available now through December 21, 2021. So what is this? According to the email and a discussion with Rasmus Haslund of Veeam, Fast Track is designed as a self-service resource to allow existing VMCEs (v9 or 2020) or people who took the v10 course to upgrade their certification to the latest version, with access to study materials and a test voucher, all for just a bit more than the voucher itself.
According to the email the program will provide you in total with the following:
The latest Veeam Availability Suite v11 Configuration and Management (VASCM) courseware
13 days access to VASCM Labs for practicing and exam preparation
VMCE 2021 Exam Specification Guide
Access to the ‘Haslund Knowledge Check’
Exam preparation videos
VMCE Exam Voucher to take the exam at Pearson Vue
I will share that in the past (I've been a VMCE on versions 8 and 9 and recently renewed to the 2020 release) I've sworn by Rasmus' always excellent practice exams, so their inclusion here is noteworthy. That said, they seem to remain a community resource provided by Rasmus, so the value is more in the course materials and the videos, but still worth calling out.
If you are not currently 2021 certified and wish to be able to upgrade in the future without retaking a course, you will need to do this to stay within current versions. If you are a standard end customer, it's true your certification never expires, but if you are in the partner space like me, you unfortunately always have to be within the past two versions. In any case this is a pretty good deal for a recertification prep package.
To purchase the fast track package, log into the website where you've accessed your Veeam training materials in the past, veeam.lochoice.com, and click on the latest version of the VMCE materials you have available. Once there you will see a "Buy VMCE Fast Track to v11" button. Once clicked, it's as simple as providing a credit card and you are off and running.
Hi there and welcome to koolaid.info! My name is Jim Jones, a Geek of Many Hats living in West Virginia.
This site was created to be a locker full of all the handy things I've learned over the years, know I'm going to need again, and know I'll forget. It's morphed a bit over the years, as all things do, but that's still the main purpose. If you'd like to know more about me, check out any of the social links at the top left of the site; I'm pretty much an open book.
If you've found this page I hope you find its contents helpful. Finally, anything written here is solely my own view and does not reflect those of my employer.