Hey all, just a quick post to mention that the fine folks at vBrisket will be having a get-together on February 24th at 2 PM at Grist House Craft Brewery in Pittsburgh. If you work in the virtualization industry and haven't heard of vBrisket yet, you should get to know them because they have a great thing going. vBrisket takes the typical user group back to its vendor-independent roots, letting you focus more on your general virtualization career and less on the path of any particular vendor. At the same time it gives Clint, Gabe, Jaison, and John a great reason to bring out the smokers and prepare enough meat to feed a brewery full of techies.
I'm honored to have been invited to join the panel discussion this time. The topic is "Tech Conferences – What are the right ones for you?" It will be moderated by the vBrisket team and includes me, John White, Mike Muto, and Justin Paul. As I see my attendance at various conferences as a big driver of my career success and my growth as a technology worker, I'm excited to be included.
Of course this meeting wouldn't be possible without sponsorship from Zerto. At the meeting I'm sure they'll be talking about their new conference, ZertoCON, in Boston May 22-24.
So if you are in the Pittsburgh area tomorrow and would like to attend, just be there at 2. I look forward to meeting up!
Backup, among other things, is very good at creating multiple copies of giant buckets of data that don't change much and tend to sit for long periods of time. Since we are in modern times, we have a number of technologies to deal with this problem, one of which is deduplication, with quite a few implementations of it. Microsoft has had a server-based storage version since Windows 2008 R2 that has gotten better with each release, but like any technology it still has pitfalls to be mindful of. In this post I'm going to look at a very specific use case of Windows Server deduplication: using it as the storage beneath your Veeam Backup and Replication repositories, covering some basic tips to keep your data healthy and your performance optimized.
What is Deduplication Anyway?
For those that don't work with it much, imagine you had a copy of War and Peace stored as a Word document with an approximate file size of 1 MB. Each day for 30 days you go into the document, change 100 KB worth of text, and save it as a new file on the same volume. With a basic file system like NTFS this results in 31 MB tied up in storing these files: the original plus the full file size of each additional copy.
Now let's look at the same scenario on a volume with deduplication enabled. The basic idea of deduplication is to replace identical blocks of data with very small pointers back to a common copy of the data. In this case, after 30 days, instead of having 31 MB of data sitting on disk you would have approximately 4 MB: the original 1 MB plus the 30 daily 100 KB incremental updates. As far as the … Go Read More
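To make the block-pointer idea concrete, here's a toy sketch in Python. This is my own illustration, not how Windows deduplication is actually implemented: it uses fixed 4 KB chunks, while real dedup engines typically use variable-size chunking, and it ignores the (small) cost of the pointer metadata.

```python
import hashlib

CHUNK = 4096  # fixed-size chunks for simplicity; real engines vary chunk size

def chunk_hashes(data):
    """Hash each fixed-size chunk of a file's contents."""
    return [hashlib.sha256(data[i:i + CHUNK]).digest()
            for i in range(0, len(data), CHUNK)]

def stored_bytes(files):
    """Bytes a dedup volume keeps: one stored copy per unique chunk."""
    unique = set()
    for data in files:
        unique.update(chunk_hashes(data))
    return len(unique) * CHUNK

def edit(day):
    """100 KB (25 chunks) of content unique to this day's revision."""
    return b"".join(hashlib.sha256(f"{day}:{j}".encode()).digest() * 128
                    for j in range(25))

# Toy version of the War and Peace scenario: a 1 MB file plus 30 revisions,
# each replacing a 100 KB region with brand-new content.
base = bytes(1024 * 1024)
files = [base] + [edit(d) + base[102400:] for d in range(30)]

naive = sum(len(f) for f in files)   # what plain NTFS stores
dedup = stored_bytes(files)          # what a dedup volume stores
print(f"plain: {naive // 2**20} MiB, deduplicated: {dedup / 2**20:.1f} MiB")
```

All the savings come from the unchanged chunks collapsing to a single stored copy referenced by pointers; only the genuinely new 100 KB per revision costs real disk.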
It has been a great day here because today I learned that I have once again been awarded acceptance into the excellent Veeam Vanguard program, my third time. This program, above any others that I am or have been involved with, takes a more personal approach to creating a group of awardees who not only deserve anything good they get out of it but give back just as much to the community itself. In only its third year the group has grown from 31 the first year and 50(ish) the second to a total of 62 this year. There are 21 new awardees in that 62, so there really isn't a rubber stamp to stay included; it is legitimately awarded each year. The group has grown each year, but as you can see not by the leaps and bounds others have, and for good reason: there is no way this experience could be had with a giant community.
At this point in the post I would typically tell you a bit about what the Vanguard program is and isn't, but honestly, Veeam's own Dmitry Kniazev really put it best in a couple of recent posts, "Veeam Vanguard Part 1: WTH Is This?" and "Veeam Vanguard Part 2: What It's Not." What I will add is that as nice as some of the perks are, as DK says in the Part 1 post, the true perk is the intangibles: a vibrant community full of some of the smartest, most passionate people in the industry and, in many cases, direct access to the people approving and disapproving changes to their software. These are the things that made me sweat at approval time.
Once again I would give a giant thank you to Veeam Software and especially the whole Vanguard crew. This includes Rick … Go Read More
Today, like every day as a technology professional, I got the opportunity to learn something new. I had seen posts on social media and articles saying that Nimble Storage, with NimbleOS version 3.6, supports the shiny new features of VMware's vSphere 6.5 release, including VVOLs 2.0 and VASA 3.0. After reading through the release notes and not seeing anything in the known issues to really stress me out, I went to begin the download for an off-hours update. To my early-adopter horror I saw there was no download available! Had I misread the releases? Did I imagine that the release notes really were for 3.6? No, those were real and it should have been there. After asking around I learned that Nimble, in a notable effort to save us from ourselves, will from time to time blacklist you from receiving updates due to things they observe through their excellent InfoSight analytics system.
The problem is that they don't make it apparent anywhere near the download screen that you are blacklisted. To see if you are, you have to switch over from the array management screen to InfoSight, go to Manage > Assets, click on the array, and then at the top where it says "Version: …" click the version link. There, finally, you will see the new version in black if you are good to upgrade or, as shown in my image, in red if you are blacklisted. Even then it still doesn't tell you why you are blacklisted; you have to call support to learn that.
The idea of blacklisting arrays that show signs of things known not to play well with future versions of software is a noble idea and has the potential to keep … Go Read More
We've been dealing with an issue for the past few runs of our monthly SureBackup jobs where the Domain Controller boots into Safe Mode and stays there. This is no good, because without the DC booting normally you have no DNS, no Global Catalog, or any of the other Domain Controller goodness for the rest of the servers launching behind it in the lab. All of this seems to have come from a change in how domain controller recovery is done in Veeam Backup and Replication 9.0 Update 2, as discussed in a post on the Veeam Forums. Further, I can verify that if you call Veeam Support you get the same answer as outlined there, but there is no public KB about the issue. There are a couple of ways to deal with this, either each time or permanently, and I'll outline both in this post.
Booting into Safe Mode is actually expected, as a recovered Domain Controller should boot into Directory Services Restore Mode the first time. What is missing is that, as long as you have the Domain Controller box checked for the VM in your application group setup, once it has booted Veeam should modify the boot configuration and reboot the system before presenting it to you as a successful launch. This in part explains why checking the Domain Controller box lengthens the allowed boot time from 600 seconds to 1800 seconds by default.
On the Fly Fix
If you are like me and already have the lab up and need to get it fixed without tearing it back down, you simply need to clear the Safe Boot bit and reboot from the Remote Console. I prefer to:
- Make a Remote Console connection to the lab-booted VM and log in
- Go to … Go Read More
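For reference, clearing the Safe Boot flag from inside the guest usually comes down to a bcdedit one-liner. This is my own sketch, run from an elevated command prompt inside the lab VM, not an official Veeam-documented procedure, so verify against what support tells you:

```
:: Remove the safeboot value from the current boot entry, then restart
bcdedit /deletevalue {current} safeboot
shutdown /r /t 0
```

After the reboot the DC should come up normally and the rest of the application group can launch behind it.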
Each year many of the major companies in the tech industry allow people to be nominated, by themselves or by others, to be recognized for their contributions to the community that surrounds that company's products. These people are typically active on social media, in online and in-person forums and user groups, and often write blogs about their experiences with the products. In return for what is essentially free, grass-roots marketing, the companies provide awardees any number of benefits: access to product licenses for homelabbing, sometimes access to engineers, preferred experiences at conferences, NDA-level information, etc., but in some cases the biggest benefit is the recognition itself.
As of today (November 10, 2016), two of the bigger programs, and in my opinion one of the best, are all open for nominations.
Program Name | Program Leader | Nomination Link
Cisco Champions | Lauren Friedman | Nomination Link
VMware vExpert | Corey Romero | Nominations Accepted until 12/16
Veeam Vanguards | Rick Vanover | Nominations Accepted until 12/9
I'm honored to be both a vExpert and a Veeam Vanguard, and I like to think of myself as an honorary Cisco Champion (they can't accept government employees), so I have some experience with each of these programs. Let's take a look at all three.
VMware vExpert may not necessarily be the oldest influencer program, but it is probably the one most socially active technical people know, except possibly the Microsoft MVP program. In many ways vExpert is not only an honorary … Go Read More
Hi all, just a quick post to serve as both a reminder to me and hopefully something helpful for you. For some reason Microsoft has decided to make installing .NET 3.5 on anything after Windows Server 2012 (or Windows 8 on the client side) harder than it has to be. While it is included in the regular Windows Features GUI, it is not included in the on-disk sources for features to be installed automatically. In a perfect world you just choose to source from Windows Update and go about your day, but in my experience this is a hit-or-miss solution, as many times, for whatever reason, it errors out when attempting to access Windows Update.
The fix is to install via the Deployment Image Servicing and Management tool, better known as DISM, and provide a local source for the files. .NET 3.5 is included on every modern Windows CD/ISO under the sources\sxs directory. When I do this installation I typically use the following command from an elevated command prompt or PowerShell window:
dism /online /enable-feature /featurename:NetFX3 /all /Source:<WIN10DISKLETTER>:\sources\sxs /LimitAccess
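If you prefer native PowerShell over calling dism.exe, the DISM module ships an equivalent cmdlet. This is a sketch using the same feature name and source placeholder as above; adjust the path to your own media:

```
# Same operation via the DISM PowerShell module (Windows 8/Server 2012 and later)
Enable-WindowsOptionalFeature -Online -FeatureName NetFx3 -All `
    -Source "<WIN10DISKLETTER>:\sources\sxs" -LimitAccess
```

Either route does the same thing under the hood, so pick whichever fits your tooling.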
When done, the window should look like the one to the left. Pretty simple, right? While this is all you really need to know to get it installed, let's go over what all these parameters are that you just fed into your computer.
- /online – This refers to the idea that you are changing the installed OS as opposed to an image
- /enable-feature – this is the CLI equivalent of choosing Add Roles and Features from Server Manager
… Go Read More
Hey y'all, happy Friday! One of the things that seems to still really fly under the radar in regards to Veeam Backup & Replication is its SureBackup feature. This feature is designed to allow for automated, script-driven testing of groups of your backups. An example: say you have a critical web application. You can create an application group that includes both the database server and the web server. When the SureBackup job runs, Veeam connects a section of its backup repository to a specified ESXi host as a datastore, starts the VMs within a NAT-protected segment of your vSphere infrastructure, runs either the included role-based scripts or custom ones you specify to ensure the VMs' applications are responding correctly, and then, when done, shuts the lab down and fires off an e-mail.
That workflow is great and all, but it only touches the edge of what SureBackup can do for you. In our environment we not only have a mandate to provide backup tests that allow for end-user interaction, but we also use SureBackup for test-bed applications such as patch tests. An example of the latter: when I was looking to upgrade our internal Windows-based CA to Server 2012 R2, I was able to launch the server in the lab, perform the upgrade, and ensure that it behaved as expected WITHOUT ANY IMPACT ON PRODUCTION, then tear down the lab like it never happened. Allowing the VMs to stay up and running after the job starts requires nothing more than checking a box in your job setup.
By default access to a running lab is fairly limited. When you launch a lab from your Veeam … Go Read More
So we recently had the joy of upgrading our Cisco Voice setup to version 11, including our Unified Contact Center Express (UCCX) system. In the process we had to do a quick upgrade of UCCX from 9.01 to 9.02 to be eligible to go the rest of the way up to 11, which allowed us to run into a nice issue I'm thinking many others are hitting too.
As far as 11 is concerned, the big difference is that it is the first version where the Cisco Agent Desktop (CAD) is not an option, as it has been replaced by the new web-based Finesse client for agents and supervisors. For this reason many voice admins are choosing to take the leap to 10.5 instead this year, as it gives you the option of Cisco Agent Desktop/Cisco Supervisor Desktop (CSD) or Finesse. The problem? These MSI-installed client applications are not Windows 10 compatible. In our case it wasn't a big deal, as the applications were already installed when we did an in-place upgrade of many of our agents' desktops to Windows 10, but attempting a fresh installation would error out saying you were running an unsupported operating system.
*DISCLAIMER: While for us this worked just fine I’m sure it is unsupported and may lead to TAC giving you issues on support calls. Use at your own discretion.
Fixing the MSI with Orca
Luckily there is a way around this that lets the installers run and even allows for automated installation. Orca is one of the tools available in the Windows SDK Components download; it allows you to modify the parameters of Windows MSI packages and either save those changes directly into the MSI or create a transform file (MST) so that the changes can be … Go Read More
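Once you have a transform saved, applying it at install time is a standard msiexec invocation. A sketch, with hypothetical file names you'd swap for your own:

```
:: Apply the Orca-generated transform during a silent install
:: (the .msi and .mst names here are hypothetical examples)
msiexec /i "CiscoAgentDesktop.msi" TRANSFORMS="win10fix.mst" /qn /norestart
```

Because the transform rides along on the command line, this also slots neatly into SCCM or GPO-based software deployment.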
Here at This Old Datacenter we've recently made the migration to Cisco UCS for our production compute resources. UCS offers a great number of opportunities for system administrators, both in deployment and in ongoing maintenance, making updating the physical layer as manageable as we virtualization admins have gotten used to with the virtualized layer of the DC. Of course, like any other deployment, there is always going to be that one "oh yeah, that" moment. In my case, after I had my servers up I realized I needed another virtual NIC, or vNIC in UCS parlance. This shouldn't be a big deal, because a big part of what UCS does for you is abstract the hardware configuration away from the actual hardware.
For those more familiar with standard server infrastructure: instead of having any number of physical NICs in the back of the host for specific uses (iSCSI, VM traffic, specialized networking, etc.), you have a smaller number of connections from the Fabric Interconnects to the blade chassis that are logically split to provide networking to the individual blades. These Fabric Interconnects (FIs) not only have multiple very high-speed connections (10 or 40 GbE), but each chassis typically will have multiple FIs to provide redundancy throughout the design. All that being said, here's a very basic design utilizing a UCS Mini setup with Nexus 3000 switches and a copper-connected storage array:
So are you starting to think this is a UCS geeksplainer? No, no my good person, this … Go Read More