Movin’ Right Along…

Almost ten years ago I interviewed with this guy named Mike Murphy for a Senior Network Admin job at the West Virginia Housing Development Fund. I’d been laid off from my previous employer due to a plant closure and had been on unemployment for a while. What was supposed to be a one-hour interview took almost two hours, and we hit it off pretty well. He has been one of the best managers I’ve ever had and, more importantly, a friend.

For the past ten years under Mike’s leadership I’ve been allowed to do some very cool things in my role there: learn about technologies like VMware vSphere, Veeam Backup and Replication, and cloud computing, all while making the computing infrastructure for The Fund the best I could make it. I also got to get into this thing called the “tech community,” and it has truly changed my career trajectory.

I’ve been extremely lucky to be included in things like vExpert and the Veeam Vanguards, learning to engage and give back as much as possible to those communities with my own viewpoints and knowledge. Whatever I’ve given will never match what I’ve gotten back: the range of ideas, the challenges to my own beliefs, and frankly the friendships. In short, it has given me the opportunity to be a better version of myself. Thanks in very large part to that tech community, after almost ten years it is finally time for me to move on.

I am extremely excited to say that I will soon become Senior Cloud Architect for OffsiteDataSync, now a part of J2 Global. I’ve been associated with ODS in a few ways for years, but I owe a great deal of thanks to fellow Vanguard Brad Jervis, who made the introduction and who I will soon be working side by side with. My tech loves have been mostly Veeam and VMware for quite some time now, so it’s very exciting to take what I’ve learned and apply it at a much grander scale.

In addition to the new technical challenges, this role puts me in a position to be much more socially active, so expect to see quite a bit more content here in the future as I learn and grow into it. It’s already looking like it’s going to be a fun ride; I can’t wait to get on!

DR Scenarios For You and Your Business: Getting Cloudy With It

In the last post we talked about the more traditional models of architecting a disaster recovery plan. In those we covered icky things like tape, dark sites and split datacenters. If you’d like to catch up you can read it here. All are absolutely worthwhile ways to protect your data, but all of them are slow and limit you and your organization’s agility in the case of a disaster.

By now we have all heard about the cloud so much that we’ve either gone completely cloud native, dabbled a little or just completely loathe the word. Another great use for “somebody else’s computer” is to power your disaster recovery plans. By leveraging cloud resources we can effectively get out of the hardware management business when it comes to DR and have borderline limitless resources if needed. Let’s look at a few ways this can happen.

DRaaS (Disaster Recovery as a Service)

For now this is my personal favorite, but my needs may be, and probably are, different from yours. In a DRaaS model you still take local backups as you normally have, but those backups or replicas are then shipped off to Managed Service Providers (MSPs) aligned with your particular backup software vendor.

I can’t particularly speak to any of the others from experience, but Cloud Connect providers in the Veeam Backup and Replication ecosystem are simple to consume and use. Essentially, once you buy the amount of space you need from a partner, you use the link and credentials you are provided and add them to your backup infrastructure. Once that’s done you create a backup copy job with that repository as the target and let it run. If you are bandwidth constrained, many providers will even let you seed the job with an external hard drive full of backups that you ship to them, so all you have to transfer over the wire is your daily changes. Meanwhile all of these backups are encrypted with a key that only you and your organization know, so the data is nice and safe sitting elsewhere.
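
If you would rather script that setup than click through the console, the same steps can be driven from the Veeam PowerShell SnapIn. Here is a minimal sketch; the provider address and tenant credentials are placeholders, and the parameters are worth checking against the help for your VBR version:

# Store the tenant credentials the provider gave you (address and account below are placeholders)
$creds = Add-VBRCredentials -User "tenant01" -Password "SuperSecretPassword" -Description "Cloud Connect tenant"

# Register the Cloud Connect service provider with this VBR server
Add-VBRCloudProvider -Address "cloudconnect.provider.example" -Credentials $creds

# Confirm the provider (and the cloud repository it exposes) is now visible,
# then point a backup copy job at that repository from the console or PowerShell
Get-VBRCloudProvider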

This is really great in that it is effectively infinitely scalable (you only pay for what you use) and you don’t have to own any of the hardware or software licenses to support it. In the case that you have an event you have options: you can either scramble and try to put something together on your own, or most times you can leverage the compute capabilities of the provider to power your organization until you can get your on-site resources available again. As these providers have their own IT staff, you and your team are freed up to do the work of getting staff and customers back online while they handle getting your systems restored.

In my mind the drawbacks to this model are minimal. In the case of a disaster you are definitely going to be paying more than you would if you were running restored systems on your own hardware, but you would have had to buy that hardware and maintain it as well, which is expensive. You will also be in a situation where workers and datacenter systems are not in the same geographical area, which may drive up bandwidth costs as you get back up and running, but that is still nothing compared to maintaining all of this yourself. Probably the only real drawback is that almost all of these providers require long-term agreements, 1 year or more, for the backup or replication portion of the service. You also need to be sure, if you choose this route, that the provider has enough compute resources available to absorb you if needed. This can be mitigated by working with your provider to do regular backup testing at the far end. It will cost you a bit more, but it is truly worth it to me.

Backup to Public Cloud

Finally we come to what all the backup vendors seem to be going toward these days: public cloud backups. In this situation your backups land on premises first (highly recommended) and are then shipped off to the public cloud provider of your choice. AWS, Azure or GCP start messing with their storage pricing models and one suddenly becomes cheaper? Simply add the new provider and shift the job over, easy peasy. As with all things cloud you are in theory infinitely scalable, so you don’t have to worry about onboarding new workloads except for cost, and who cares about cost anyway?

The upside here is the ability to be agile. Start to finish you can probably be set up to consume this model within minutes, and then your only limit to how fast you can be covered is how much bandwidth you make available for shipping backups. If you are doing this to cover for an external event, like the failure of your passive site, you can simply tear it back down afterwards just as fast as you built it. Also, you are only ever paying for your actual consumption, so you know how much any additional protected workload is going to cost; you don’t ever pay for “spare space.”

As far as drawbacks go, I feel like we are still in the early days of this, so there are a few. While you don’t have to maintain your far-end equipment for either backup storage or compute, I’m not convinced that this isn’t the most expensive option for traditional virtualized workloads.

Hybrid Archive Approach

One of the biggest challenges of maintaining an on-prem, off-prem backup system is that we all run out of space sometimes. The public cloud gives us the ability to consume only what we need, not paying for any fluff, while letting others manage the performance and availability of that storage. One trend I’m seeing more and more is the ability to supplement your on-premises backup storage with public cloud resources so your archives can scale out for as long as necessary. There is a tradeoff between locality and performance, but if your most recent backups are on premises or well connected to your production environment, you may never need to access the backups that were archived off to object storage, so you don’t really care how fast a restore from them would be; you’ve just checked your policy checkbox and have that “oh no” backup out there.

Once upon a time my employer had a situation where we needed to retain every backup for about 5 years. Each year we had to buy more and more media to hold backups we would never restore from because they were so old, but we had them and were in compliance. If something like Veeam’s Archive Tier, or the equivalent from another vendor, had existed, I could have said “I want to retain X backups on-prem, but after that shift them to an S3 IA bucket.” In the long term this would have saved quite a bit of money and administrative overhead, and when the requirement went away all I would have had to do is delete the bucket and reset back to the normal policy.

While this is an excellent use of cloud technology, I don’t consider it a replacement for things like DRaaS or Active/* models. The hoops you need to jump through to restore these archived backups to a functional VM are still complex and require resources. Rather, I see this as an extension of your on-prem backups to handle short-term scale issues.

Conclusion

If you’ve followed along for both posts I’ve covered about 5.5 different methods of backing up, replicating and protecting your datacenter. Which one is right for you? It might be one of these, none of these or a mash-up of two or more, to be honest. The main thing is to know your business’ needs and its regulatory requirements, and then pick the approach, or combination of approaches, that fits them.

DR Scenarios For You and Your Business Part 1: The Old Guard

It is Disaster Recovery review season again here at This Old Datacenter, and reviewing our plans sparked the idea to outline some of the modern strategies for those who are new to the game or looking to modernize. I’m continually amazed by the number of people I talk to who are using modern compute methodologies (virtualization on premises, partner IaaS, public cloud) but are still using the same backup systems they were using in the 2000s.

In this post I’m going to talk about some basic strategies using Veeam Backup and Replication, because that is primarily what I use, but all of these are achievable with any of the current data backup vendors, with varying levels of advantages and disadvantages per vendor. The important part is to understand the different ways of protecting your data to start with and then pick a vendor that fits your needs.

One constant that you will see here is the idea of each strategy consisting of two parts: first, a local backup to handle basic things like a failing VM, a file restore, and other situations that fall short of all systems being down; second, archiving that backup somewhere outside of your primary location and datacenter to deal with a systems-down event or a virus. You will often hear this referred to as the 3-2-1 rule:

  • 3 copies of your data
  • 2 copies on different types of physical media or systems
  • 1 copy (at least) in a different geographical location (offsite)

On-Premises Backup / Archive to Removable Media

This is essentially an evolution of your traditional backup system. Each night you take a backup of your critical systems to a local resource and then copy that to something removable so it can be taken somewhere offsite each evening. In the past this was probably only one step: you ran backups to tape and then you took that tape somewhere the next morning. Today I would hope the backups land on disk somewhere local first and then get copied to tape or a USB hard disk, but everybody has their ways.

This method can get the job done but has a lot of drawbacks. First, you need human intervention to get your backups offsite. Second, restores may be quick if you are restoring from your primary backup copy, but if you have to go to your secondary copy you first have to physically locate the correct data set, and then, especially in the case of tape, it can take some time to get back to functional. Finally, you own and have to maintain all the hardware involved in the backup system, hardware that effectively isn’t used for anything else.

Active/Passive Disaster Recovery

Historically the step up for many organizations from removable media is to maintain a set of hardware or at least a backup location somewhere else. This could be just a tape library, a NAS or an old server loaded with disks either in a remote branch or at a co-location facility. Usually you would have some dark hardware there that could allow systems to be restored if needed. In any case you still would perform backups locally and maintain a set on premises for the primary restore, then leverage the remote location for a systems down event.

This method definitely has advantages over the first in that you don’t have to dedicate a person’s time to ensuring that the backups go offsite and you might have some resources available to take over in case of a massive issue at your datacenter, but this method can get very expensive, very fast. All the hardware is owned by you and is used exclusively for you, if ever used at all. In many cases datacenter hardware is “retired” to this location and it may or may not have enough horsepower to cover your needs. Others may buy for the dark site at the same time as buying for the primary datacenter, effectively doubling the price of updating. Layer on top of this the cost of connectivity, power consumption and possibly rack space and you are talking about real money. Further you are on your own in terms of getting things going if you do have a DR event.

All that being said this is a true Disaster Recovery model, which differentiates from the first option. You have everything you need (possibly) if you experience a disaster at your primary site.

Active/Active Disaster Recovery

Does your organization have multiple sites, with datacenter capabilities in each place? If so then this model might be for you. With Active/Active you design your multisite datacenters with redundant space in mind so that in the case of an event at either location you can run both workloads in a single location. The ability to have “hot” resources available at your DR site is attractive in that you can easily make use of not only backup operations but replication as well, significantly shortening your Recovery Time Objective (RTO), usually with the ability to roll back to production when the event is over.

Think about a case where you have critical customer-facing applications that cannot handle much downtime at all, but you lose connectivity at your primary site. This workload could fairly easily be failed over to the replica in the far-side DC, all the while your replication product (think Veeam Backup & Replication or Zerto) is tracking the changes. When connectivity is restored you tell the application to fail back, and you are running with changes intact back in your primary datacenter.
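
With Veeam, for example, that failover can be driven from PowerShell. A rough sketch is below; the job and VM names are placeholders and the exact parameters should be verified against your version’s documentation:

# Grab the most recent restore point of the replica for our customer-facing app (names are placeholders)
$rp = Get-VBRReplica -Name "CRM Replication Job" |
      Get-VBRRestorePoint -Name "crm-app01" |
      Sort-Object CreationTime |
      Select-Object -Last 1

# Fail the workload over to the replica in the far-side datacenter
Start-VBRViReplicaFailover -RestorePoint $rp

# Once the primary site is healthy again, Start-VBRViReplicaFailback pushes the
# accumulated changes home and returns production to the original VM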

So what’s the downside? Well, first off it requires you to have multiple locations capable of supporting this in the first place. Beyond that you are still in a world of needing to support the full load in case of an event, so your hardware and software licensing costs will most likely go up to cover an event that may never happen. Also, supporting replication is a good bit more complex than backup once you include things like the need for re-IP, external DNS changes, etc., so you should definitely be testing this early and often, maintaining a living document that outlines the steps needed to fail over and fall back.

Conclusion

This post covers what I consider the “old school” models of Disaster Recovery, where your organization owns all the hardware and such that powers the system. But who wants to own physical things anymore? Aren’t we living in the virtual age? In the next post we’ll look at some more “modern” approaches to the same ol’ concepts.

The Basics of Veeam Backup & Replication 9.5 Update 4 Licensing

Veeam has recently released the long-awaited Update 4 to their Backup and Replication 9.5 product, and with it come some changes to how they deal with licensing. As workloads that need to be protected/backed up/made available have moved from being 100% on-premises and inside our vSphere or Hyper-V environments to mixes of on-prem, off-prem, physical, public cloud, etc., my guess is their customers have asked for a way to make that protection and licensing portable. Veeam has decided this can be solved with per-instance licensing, which is similar to how you consume many other cloud-based services. This rides along with the established perpetual licensing we still have for VBR and the Veeam Availability Suite.

I will be honest and say that the upgrade was not as smooth as I would have hoped. Now that I’ve gotten to the bottom of my own licensing issues I’ll post here what I’ve learned, to hopefully keep you from experiencing the same headaches. It’s worth noting that there is a FAQ on this, but the content is varying quite a bit as this gets rolled out.

How We Got Here

In the past, if you were using nothing but Veeam Backup and Replication (VBR), you did all your licensing by the socket count of protected hypervisors. Then along came the Veeam Agents for Windows and Linux, and with them the additional subscription levels for VAW Server, VAW Workstations, and VAL. As these can be managed and deployed via the Veeam console, this license also had to be installed on your VBR server, so you now had two separate license files commingled on the server to create the entire solution for protecting VBR and Agent workloads.

Now, as we look at the present and future, Veeam has lots of different products that are subscription based. Protecting Office 365, AWS instances, and Veeam’s orchestration product are all per-consumable-unit subscriptions. Further, thanks to Veeam’s Service Provider program, you as an end customer have the option of either buying and subscribing directly from a VAR or “renting” those licenses from a service provider. As you keep counting up you can see where this model needed (and still needs) to be streamlined.

Update 4 License Types

So that brings us to the here and now. For now, and for as far as I can get anyone to tell me, perpetual (a.k.a. per-socket) licensing for Veeam Backup and Replication and the Veeam Availability Suite (which includes VBR and VeeamONE) is here to stay. Any new products, though, will be licensed through a per-instance model going forward. In the middle there is some murkiness, so let’s take a look at the options.

  1. Perpetual (per socket) only. This is your traditional Backup and Replication license, licensed per protected socket of hypervisor. You still have to obtain a new Update 4 license from my.veeam.com, but it works exactly the same. If you have a Veeam server without any paid VAW/VAL subscriptions attached you can simply run the installer and continue on your current license. An interesting note is that once you install your Update 4 perpetual license, if you have no instances it will automatically provide you with 1 instance per socket up to a maximum of 6. That’s actually a nice little freebie for those of us with a one-off physical box here or there or just a couple of cloud instances.
  2. Instance based. These are the “portable licenses” that can be used for VBR-protected VMs, VAW, VAL, Veeam for AWS, etc. If you are an existing customer you can contact licensing support and migrate your per-socket licensing to this if you want, but unless you are looking at a ROBO site, need more cloud protection or have a very distributed use case for Veeam (small on-prem, workstations, physical servers, cloud instances) I don’t see this being a winner price-wise. For those of us with traditional workloads perpetual makes the most sense because it doesn’t matter how many VMs we have running on our hypervisors, they are still all covered. If you’d like to do the math for yourself they’ve provided an instance cost calculator.

    I will mention that I think the calculator misses the point that, unless they are doing something magical, it is based on buying new. Renewals of perpetual licenses should be far cheaper than the given number, and I’ve never heard of a subscription license service having a renewal rate. It is also worth noting that even if you aren’t managing your licensed (as opposed to free) Veeam Agents for Windows and Linux with VBR, you will need to go to the Update 4 license management screen in my.veeam.com and convert your subscription licenses to Update 4 instance ones to be able to use the 3.0 versions of the software. It doesn’t cost anything or make a difference at this point, but while you could buy subscription licenses in any quantity you chose, per-instance licenses have a minimum level of 10 and are only sold in 10-packs. So while for now it might be nice that your licenses are rounded up, understand you’ll have to renew at the rounded-up price as well.

    Further, it’s worth noting that back when VAW was subscription-based there were separate lines for workstations and servers, with 1 server license costing the same as 3 workstations. In the new per-instance model this is reflected in consumption: a server of any kind will consume 1 instance, but a workstation will only consume 0.33 of one. Same idea, different way of viewing it; a quick worked example of the math follows below.

  3. The Hybrid License. This is what you need if you want to manage both perpetual and instance licenses from the same VBR server. If you previously had per-socket for your VMs and subscription licenses for VAW/VAL, you will need to hit the merge button under your Update 4 license management screen. This only works if you are the primary license administrator for all support IDs you wish to merge.

Just to make sure it’s clear: in previous versions you could have both a per-socket and a subscription license installed at the same time; this is no longer the case, thus the reason for option 3. You cannot have a type 1 and a type 2 installed on the same server; the 2 will override the 1. So if you are consuming both perpetual and per-instance licensing under the same VBR server you must be sure to merge those licenses on my.veeam.com. In order to do so you will need any and all licenses/Support IDs being merged to be under the same Primary License Administrator. If you did not do this previously you will need to open a case with support to get a common Primary set for your Support IDs.
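
To put some rough numbers on the consumption model, here is a quick back-of-the-napkin calculation; the workload counts are made up, and the rates are the 1 instance per server and 0.33 per workstation described above:

# Hypothetical environment: 12 servers (VMs, physical or cloud) and 30 agent-protected workstations
$servers      = 12
$workstations = 30

$consumed = ($servers * 1.0) + ($workstations * 0.33)    # 12 + 9.9 = 21.9 instances
$toBuy    = [math]::Ceiling($consumed / 10) * 10          # sold in 10-packs, so round up to 30

"Instances consumed: $consumed"
"Instances to purchase: $toBuy"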

Conclusion

As we begin, or continue, to move our production workloads not only out of our own datacenters into others but also into the public cloud, those workloads will still need to be protected. For those of us that use Veeam to do so, handling the licensing has, for now, been made simpler and is still cost effective once you get it lined out for yourself.

Dude, Where’s My Managed Service Accounts?

So I am probably way late to the game but today’s opportunities to learn have included ADFS and with that the concept of Managed Service Accounts.

What’s a Managed Service Account, you ask? We’ve all installed applications and either set the service to run with the local system account or with a standard Active Directory account. Managed Service Accounts, available since Windows Server 2008 R2 and greatly enhanced as group Managed Service Accounts (gMSA) in Windows Server 2012, let you create a special type of account to be used for services where Active Directory itself manages the security of the account, keeping you secure without you having to update passwords regularly.
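
For a sense of what that looks like in practice, here is a minimal gMSA setup sketch; the account and group names are placeholders, and the KDS root key step is a one-time, per-forest operation:

# One time per forest: create the KDS root key that gMSAs depend on
# (-EffectiveImmediately still has a ~10 hour replication wait in production)
Add-KdsRootKey -EffectiveImmediately

# Create a gMSA and allow a group of servers to retrieve its password (names are placeholders)
New-ADServiceAccount -Name "svc-adfs" -DNSHostName "svc-adfs.mydomain.local" `
    -PrincipalsAllowedToRetrieveManagedPassword "ADFS-Servers"

# On the member server that will run the service
Install-ADServiceAccount -Identity "svc-adfs"
Test-ADServiceAccount -Identity "svc-adfs"    # should return True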

While there are quite a few great step-by-step guides for setting things up and then creating your first Managed Service Account, I almost immediately ran into an issue where my Active Directory didn’t seem to include the Managed Service Accounts container (CN=Managed Service Accounts,DC=mydomain,DC=local). My domain was at the correct functional level, Advanced Features were turned on in AD Users & Computers, everything seemed like it should be just fine; the container just wasn’t there. In this post I’ll outline the steps I ultimately took to get the problem fixed.
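
If you want to confirm the container really is missing rather than just hidden by the console view, a quick query does the trick; this sketch assumes the RSAT ActiveDirectory module is available:

# Returns the container object if it exists; returns nothing if it is actually missing
Get-ADObject -Filter 'Name -eq "Managed Service Accounts"' `
    -SearchBase (Get-ADDomain).DistinguishedName -SearchScope OneLevel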

Step 0: Take A Backup

While you are probably already mashing the “take a snapshot” button or starting a backup job, it’s worth saying anyway: you are messing with your Active Directory, so be sure to take a backup or snapshot of the Domain Controller(s) which hold the various FSMO roles. Once you’ve got that backup, depending on how complex your Active Directory is, it might be worth leveraging something like Veeam’s SureBackup (er, I mean DataLab) like I did and creating a test bed where you can try this out on last night’s backups before doing it in production.

Step 1: ADSI Stuff

Now we are going to have to start actually manually editing Active Directory. This is because you might have references to Managed Service Accounts in your Schema but are just missing the container. You also have to tell AD it isn’t up to date so that the adprep utility can be rerun. Be sure you are logged into your Schema Master Domain Controller as an Enterprise Admin and launch the ADSIEdit MMC.

  1. Right click ADSI Edit at the top of the structure on the left, Click Connect… and hit OK as long as the Path is the default naming context.
  2. Drill down the menu structure to CN=Domain Updates, CN=System, DC=<mydomain>,DC=<mytld>
  3. Within the Operations Container you will need to delete the following containers entirely.
    1. CN=5e1574f6-55df-493e-a671-aaeffca6a100
    2. CN=d262aae8-41f7-48ed-9f35-56bbb677573d
  4. Now go back up a level and right click on the CN=ActiveDirectoryUpdates container and choose Properties
    1. Scroll down until you find the “revision” attribute, click on it and click Edit
    2. Hit the Clear button and then OK

Step 2: Run ADPrep /domainPrep

So now we’ve cleaned out the bad stuff and we just need to run adprep. If you have upgraded your Active Directory to its current level you have probably done this at least once before, but typically it won’t let you run it against your domain once it’s been done; that’s what clearing the revision attribute above did for us. Now we just need to pop in the (probably virtual) CD and run the command.

  1. Mount the ISO file for your given operating system to your domain controller. You can either do this by putting the ISO on the system, right click, mount or do so through your virtualization platform.
  2. Open up a command line or powershell prompt and navigate to <CDROOT>:\support\adprep
  3. Issue the .\adprep.exe /domainPrep command. If all goes well it should report back “Adprep successfully updated the domain-wide information.”
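
If you’d rather script the mount-and-run than do it by hand, something along these lines works; the ISO path below is a placeholder:

# Mount the Windows Server ISO and run domainprep from it (ISO path is a placeholder)
$iso   = Mount-DiskImage -ImagePath "C:\ISO\WindowsServer2016.iso" -PassThru
$drive = ($iso | Get-Volume).DriveLetter

& "$($drive):\support\adprep\adprep.exe" /domainPrep

Dismount-DiskImage -ImagePath "C:\ISO\WindowsServer2016.iso"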

Now that the process is complete, you should be able to refresh or relaunch your Active Directory Users & Computers window and see that Managed Service Accounts is available right below the root of your domain (as long as Advanced Features is enabled under View), and you are now good to go!

Reboot-VSS Script for Veeam Backup Job Pre-Thaw Processing

One of the issues that Veeam Backup & Replication users (actually, users of any application-aware backup solution) run into is that the various VSS writers are typically very finicky, to say the least. Often you will get warnings about the services, only to run a “vssadmin list writers” and see writers either in a failed state or not there at all. In most of these cases a restart of either the service or the target system itself is a quick, easy fix.

But do you really want to rely on yourself to remember to do this every day? I know I don’t, and going with the mantra of “When in doubt, automate,” here’s a script that will help out. The Reboot-VSS.ps1 script assumes that you are using vSphere tags to dynamically identify VMs to be included in backup jobs; it looks at the services in the given services array and, if they are present on the VM, restarts them.

#   Name:   Restart-VSS.ps1
#   Description: Restarts list of services in an array on VMs with a given vSphere tag. Helpful for Veeam B&R processing
#   For more info on Veeam VSS services that may cause failure see https://www.veeam.com/kb2041

Import-Module VMware.PowerCLI

$vcenter = "vcenter.domain.com"
$services = @("SQLWriter","VSS")
$tag = "myAwesomeTag"
Connect-VIServer $vcenter
$vms = Get-VM -Tag $tag    # only the VMs that carry the backup job's tag

ForEach ($vm in $vms){
  ForEach ($service in $services){
    # Only act if the service actually exists in this VM's guest OS
    If (Get-Service -ComputerName $vm.Name -Name $service -ErrorAction SilentlyContinue) {
      Write-Host "$service on computer $($vm.Name) restarting now."
      Restart-Service -InputObject (Get-Service -ComputerName $vm.Name -Name $service)
    }
  }
}

 

This script was designed to be set in the Windows scripts section of the guest processing settings within a Veeam Backup and Replication job. I typically only need the SQL writer service myself, but I’ve included VSS in the array as well here as an example of adding more than one. There are quite a few VSS writer services that VSS-aware backup software can interact with; Veeam’s KB2041 is a great reference for all of the ones that could be included here based on your needs.
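
As a quick sanity check before or after the job runs, you can pull the same writer listing remotely; a small sketch, assuming PowerShell remoting is enabled and using a placeholder VM name:

# Remote spot-check of VSS writer state on a protected guest ("sql01" is a placeholder)
Invoke-Command -ComputerName "sql01" -ScriptBlock {
    vssadmin list writers | Select-String -Pattern "Writer name|State"
}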

The post Reboot-VSS Script for Veeam Backup Job Pre-Thaw Processing appeared first on koolaid.info.

]]>
https://www.koolaid.info/reboot-vss-script-for-veeam-backup-job-pre-thaw-processing/feed/ 0 781
Reinstalling the Veeam Backup & Replication Powershell SnapIn https://www.koolaid.info/reinstalling-the-veeam-backup-replication-powershell-snapin/ https://www.koolaid.info/reinstalling-the-veeam-backup-replication-powershell-snapin/#respond Tue, 05 Jun 2018 17:47:21 +0000 https://www.koolaid.info/?p=754

The post Reinstalling the Veeam Backup & Replication Powershell SnapIn appeared first on koolaid.info.

]]>
As somebody who lives by the old mantra of “Eat your own dog food” when it comes to the laptops I use both personally and professionally, I tend to be on the early edge of installs. So while I am not at all ready to start deploying Windows 10 1803 to end users, I’ve recently upgraded my Surface Pro to it. In doing so I found that the upgrade broke access to the Veeam PowerShell SnapIn on my laptop when trying to run a script. After some Googling I found a very helpful post on the Veeam Forums, so I thought I’d condense the commands to run here for us all. Let me start with a hat tip to James McGuire for finding this solution to the problem.

For those who aren’t familiar with VBR’s PowerShell capabilities, the SnapIn is installed either when you run the full installer on your VBR server or, as in my case, when you install the Remote Console component on another Windows system. Don’t get me started on the fact that Veeam is still using a SnapIn to provide PowerShell access, that’s a whole different post, but this is where we are.

The sign that this has occurred is the “Get-PSSnapin : No Windows PowerShell snap-ins matching the pattern ‘VeeamPSSnapin’ were found.” error when trying to access the SnapIn. In order to fix this, you need to use the installutil.exe utility from your latest .NET installation; in my example that is C:\windows\Microsoft.NET\Framework64\v4.0.30319\InstallUtil.exe. If you’ve already installed the VBR Remote Console, the SnapIn’s DLL should be at C:\Program Files\Veeam\Backup and Replication\Console\Veeam.Backup.PowerShell.dll. So to get the installation fixed and the SnapIn re-registered with PowerShell, you just need to run the following from an elevated PowerShell prompt:

C:\windows\Microsoft.NET\Framework64\v4.0.30319\InstallUtil.exe "C:\Program Files\Veeam\Backup and Replication\Console\Veeam.Backup.PowerShell.dll"
Add-PSSnapin VeeamPSSnapin

Then, in any new PowerShell session, to load it and be able to use it, simply:

Add-PSSnapin VeeamPSSnapin
Connect-VBRServer -Server <serverFQDN>
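
Once connected, even something trivial like listing your jobs or kicking one off will confirm everything is back in working order. The job name below is just a placeholder, not something from my environment:

# List the configured jobs and their types
Get-VBRJob | Select-Object Name, JobType

# Start a specific job by name (placeholder job name)
Get-VBRJob -Name "My Backup Job" | Start-VBRJob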

From there it’s up to you what comes next. Happy Scripting!

The post Reinstalling the Veeam Backup & Replication Powershell SnapIn appeared first on koolaid.info.

]]>
https://www.koolaid.info/reinstalling-the-veeam-backup-replication-powershell-snapin/feed/ 0 754
Fixing the SSL Certificate with Project Honolulu https://www.koolaid.info/fixing-the-ssl-certificate-with-project-honolulu/ https://www.koolaid.info/fixing-the-ssl-certificate-with-project-honolulu/#respond Fri, 09 Mar 2018 15:26:19 +0000 https://www.koolaid.info/?p=717

The post Fixing the SSL Certificate with Project Honolulu appeared first on koolaid.info.

]]>
So if you haven’t heard of it yet Microsoft is doing some pretty cool stuff in terms of Local Server management in what they are calling Project Honolulu. The latest version, 1802, was released March 1, 2018, so it is as good a time as any to get off the ground with it if you haven’t yet. If you’ve worked with Server Manager in versions newer than Windows Server 2008 R2 then the web interface should be comfortable enough that you can feel your way around so this post won’t be yet another “cool look at Project Honolulu!” but rather it will help you with a hiccup in getting it up and running well.

I was frankly a bit amazed that this is evidently a web service from Microsoft not built upon IIS. As such, your only GUI-based opportunity to get the certificate right is during installation, and even that is done by thumbprint, so it is still not exactly user-friendly. In this post I’m going to talk about how to find that thumbprint in a manner that copies well (as opposed to opening the certificate) and then how to replace the certificate on an already up and running Honolulu installation. Giving props where they’re due, this post was heavily inspired by How to Change the Thumbprint of a Certificate in Microsoft Project Honolulu by Charbel Nemnom.

Step 0: Obtain a certificate: A good place to start would be to obtain or import a certificate to the server where you’ve installed Project Honolulu. If you want to use a public one, fine, but more likely you’ll have a certificate authority available to you internally. I’m not going to walk you through this again; my friend Luca Dell’Oca has a good write-up on it here. Just do steps 1-3.
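
If your internal CA hands you a PFX file, for example, getting it into the local machine store is a quick job from PowerShell; the file path below is a placeholder, and the prompt just keeps the password out of your command history:

# Prompt for the PFX password and import the certificate into the local machine's Personal store (placeholder path)
$pfxPassword = Read-Host -Prompt "PFX password" -AsSecureString
Import-PfxCertificate -FilePath "C:\certs\honolulu.pfx" -CertStoreLocation Cert:\LocalMachine\My -Password $pfxPassword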

[Screenshot: make note of the Application ID here, you’ll use it later]

Step 1: Shut it down and gather info: Next we need to shut down the Honolulu service. As most of what we’ll be doing here today is going to be in PowerShell, let’s just do this from the CLI as well.

Get-Service *Gateway | Stop-Service

Now let’s take a look at what’s currently in place; you can do this with the following command. The relevant info we want to take note of here is 1) the port that we’ve got Honolulu listening on and 2) the Application ID attached to the certificate. I’m just going to reuse the one already there, but as Charbel points out it is generic and you can generate a new one to use instead.

netsh http show sslcert
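
If you would rather use a fresh Application ID than reuse the existing one, PowerShell will happily generate a GUID for you to drop into the netsh add command later:

# Generate a new GUID to use as the appid value in the SSL binding
[guid]::NewGuid()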

[Screenshot: list of installed certificates with thumbprints. Pick a cert, not any cert.]

Finally, in our quest to gather info let’s find the thumbprint of our newly loaded certificate. You can do this by using the Get-ChildItem command like this

Get-ChildItem -path cert:\LocalMachine\my

This will give you a list of the certificates, with their thumbprints, installed on your server. You’ll need the thumbprint of the certificate you imported earlier.
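
If you don’t want to eyeball and copy the thumbprint by hand, you can also pull it straight into a variable. The subject filter below is an assumption on my part, so adjust it to match however your certificate is actually named:

# Capture the thumbprint of the new certificate by filtering on its subject (filter is an example)
$thumbprint = (Get-ChildItem -Path Cert:\LocalMachine\My | Where-Object { $_.Subject -like "*honolulu*" }).Thumbprint
$thumbprint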

Step 2: Make it happen: OK, now that we’ve got all our information, let’s get this thing swapped. All of this seems to need to be done from the legacy command prompt. First, we want to delete the certificate binding currently in place along with its URL ACL. For the example shown above, where I’m using port 443, it would look like this:

cmd
netsh http delete sslcert ipport=0.0.0.0:443
netsh http delete urlacl url=https://+:443/

Now we need to put it back into place and start things back up. Using the port number, certificate thumbprint, and appid from our example, the command to re-add the SSL certificate looks like the first line below; you, of course, would need to sub in your own information. Next, we put the URL ACL back in place. Finally, we just need to start the service back up from PowerShell.

netsh http add sslcert ipport=0.0.0.0:443 certhash=C9BB91F7D8755BD217444046A7E68CEF56E15717 appid={1fb046ab-09b5-4029-9ec5-6e17002d495f}
netsh http add urlacl url=https://+:443/ user="NT Authority\Network Service"
Get-Service *Gateway | Start-Service

Conclusion

At this point you should be getting a shiny green padlock when you go to the site, and no more nags about a bad certificate. I hope that as this thing progresses out of Tech Preview and into production quality this component gets easier, but at least there’s a way.
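
If you’d like a quick command-line sanity check in addition to the browser test, re-running the show command and hitting the site with PowerShell (which will error out if the certificate is still untrusted) both work; the hostname below is a placeholder for your own:

netsh http show sslcert ipport=0.0.0.0:443
Invoke-WebRequest -Uri https://honolulu.domain.com -UseBasicParsing | Select-Object StatusCode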

The post Fixing the SSL Certificate with Project Honolulu appeared first on koolaid.info.

]]>
https://www.koolaid.info/fixing-the-ssl-certificate-with-project-honolulu/feed/ 0 717
Cisco Live US 2018: CAE Location and Keynotes Announced! https://www.koolaid.info/cisco-live-us-2018-cae-location-and-keynotes-announced/ https://www.koolaid.info/cisco-live-us-2018-cae-location-and-keynotes-announced/#respond Tue, 06 Mar 2018 16:28:30 +0000 https://www.koolaid.info/?p=714

The post Cisco Live US 2018: CAE Location and Keynotes Announced! appeared first on koolaid.info.

]]>
Pictured here is the entrance, five years ago, to the Customer Appreciation Event on the last night of Cisco Live US 2013. This was my first Cisco Live and my first tech conference at all. I was exhausted from all I’d learned and excited by all the new people I’d met. The conference was in Orlando, FL that year and the CAE was held in a portion of the Universal Studios theme park. This all comes full circle because this year I will once again be attending Cisco Live 2018, it will once again be held in Orlando, FL, and the Customer Appreciation Event will be held at THE ENTIRE UNIVERSAL STUDIOS FLORIDA PARK!

Customer Appreciation Event Info

You read that right: for one night only, Cisco customers, employees, and other conference attendees will have the whole park to themselves with food, drink, and all that jazz included. While the party itself runs from 7:30 to 11:30, attendees will also have non-exclusive access to the Islands of Adventure side of the park starting at 6, so you can get there early, hang out in Diagon Alley, and then hop the Hogwarts Express over to the party when the time comes. Can anybody say Geek Overload? Once the party starts all of the attractions will be available to you, rides like Transformers: 3D, Harry Potter and the Escape from Gringotts, and Race Through New York Starring Jimmy Fallon, just to name a few.

There will also be a “festival style” music line-up to be announced later. Considering Cisco’s recent track record of musical acts (Aerosmith,  Maroon 5, Elle King, Bruno Mars) it’s a good guess that those will be great as well.

Keynote Speakers

There are other announcements out now as well, including the guest keynote speakers. This year it appears Cisco is going all in on the future-looking vibe by having Dr. Michio Kaku and Amy Webb as the Thursday speakers. Dr. Kaku is a renowned theoretical physicist and futurist, while Ms. Webb is also a futurist and the founder of the Future Today Institute. While I don’t know much about them at the moment, I look forward to what they have to say.

Sessions, Labs and Seminars

Finally, it looks like the session catalog has quietly gone live today as well. Here you can begin looking for sessions you think you will find helpful, but I will tell you it is always my suggestion to pick these, for now, by the instructors you may really want to be able to interact with. All of these sessions will be available online after the conference, so that frees you up to network (socially, not with wires) while you are there.

What you can’t access after the fact are the labs and seminars Cisco puts on the weekend prior to the conference itself. These come in 4- or 8-hour flavors, and as someone who has attended a couple myself I will tell you they are a very fast way to deep-dive into a topic. The catalog of these has been made available as well, so you may want to check them out.

One note for those of you who, like me, are heavy users of ad blocking in your browser: I noticed that uBlock Origin was keeping the actual list from appearing, so you will need to turn it off to see the session catalogs.

Conclusion

As somebody with a small child who has therefore spent a good deal of time in the Orlando area 😉 I’ll have some more to share soon in that regard. If you are heading to the show feel free to reach out or say hi there! These events are much better when you allow yourself to get out and meet others.

The post Cisco Live US 2018: CAE Location and Keynotes Announced! appeared first on koolaid.info.

]]>
https://www.koolaid.info/cisco-live-us-2018-cae-location-and-keynotes-announced/feed/ 0 714
Veeam Vanguard 2018 https://www.koolaid.info/veeam-vanguard-2018/ https://www.koolaid.info/veeam-vanguard-2018/#respond Tue, 06 Mar 2018 13:27:46 +0000 https://www.koolaid.info/?p=711

The post Veeam Vanguard 2018 appeared first on koolaid.info.

]]>
Here in the US, Thanksgiving Day traditionally falls on the fourth Thursday of November. While it is one of my favorite holidays, today is a day of thankfulness for me too, as I’ve been honored to be named a Veeam Vanguard for 2018. I’ve been fortunate enough to have been a part of the group since its inception and it is one of my highest honors. Thanks as always to Rick, Kirsten, Dmitry, Andrew, Niels, Anthony, Michael, Melissa and Danny for keeping the Vanguards the best program of its kind around.

To those who have also been renewed into the program, please accept my heartfelt congratulations; you’ve earned it through your involvement, and I look forward to trolling right along with you for another year.

While the e-mails have just been sent, so there aren’t any statistics yet, I see quite a few new members who are quite deserving popping up on Twitter. Some I know already and others I look forward to getting to know. One of the really nice things about the Vannies is that we are a small group, so everybody pretty much gets to know everybody. If you are looking for success in this group please don’t be shy; come be social and share the knowledge you have.

Are you just learning about the program, or didn’t make the cut this year? If you are active with Veeam, join the conversation in the forums, on Twitter, on Reddit, in any of the various Slack communities, or on your own blog, and it will come. It doesn’t matter where you join in, it just matters that you do.

Finally to dear, sweet Vanny Vanguard. We all miss you, please come home. 😉

The post Veeam Vanguard 2018 appeared first on koolaid.info.

]]>
https://www.koolaid.info/veeam-vanguard-2018/feed/ 0 711