The Most Magical Time of Year: Influencer Program Selection Season!

Each year many of the major companies in the tech industry allow people to be nominated, by themselves or by others, to be recognized for their contributions to the community that surrounds that company's products. These people are typically active on social media and in both online and in-person forums and user groups, and they often write blogs about their experiences with the products. In return for what is essentially free, grass-roots marketing, the companies provide awardees any number of benefits: product licenses for homelabbing, sometimes access to engineers, preferred experiences at conferences, NDA-level information, and so on, but in some cases the biggest benefit is the recognition itself.

As of today (November 10, 2016) two of the bigger programs, and in my opinion one of the best, are all open for nominations.

Program Name    | Program Leader  | Nomination Link
Cisco Champions | Lauren Friedman | Nomination Link
VMware vExpert  | Corey Romero    | Nominations accepted until 12/16
Veeam Vanguards | Rick Vanover    | Nominations accepted until 12/9

I'm honored to be both a vExpert and a Veeam Vanguard, and I like to think of myself as an honorary Cisco Champion (they can't accept government employees), so I have some experience with each of these programs. Let's take a look at all three.

VMware vExpert may not necessarily be the oldest influencer program, but it is probably the one most socially active technical people know, except possibly the Microsoft MVP program. In many ways vExpert is not only an honor in its own right but a launch pad towards acceptance into other programs. vExperts are, as far as I know, the largest such group, with around 1500 members worldwide, and the program boasts some really good benefits, not only from VMware but from other companies in the virtualization ecosphere. There are many webinars and meet-and-greets throughout the calendar year which are either vExpert-only or vExpert-preferred, and the vExpert party at VMworld is well known as one of the best. The distinction I make most about vExpert is that while it is for and by VMware, some years much of the educational focus is on the ecosphere and community that surrounds it.

The vExpert program offers four paths to membership. The one most are in is the Evangelist path. These may be customers, partners, or VMware employees themselves, but they are people spreading the good word of VMware. There are also specific paths for Partners and Customers, but I don't know that I've ever met anyone who was awarded in those tracks. Finally, if you have achieved the highest level of VMware certification, the VCDX, you are automatically awarded vExpert status.

Cisco Champions contrasts with vExpert most in that it is a self-contained program, with all the educational opportunities and benefits coming from Cisco Systems itself. With the Champions there aren't as many freebies, with the notable exception of some nice perks if you attend Cisco Live, but what they do offer is exposure for your personal brand. Between the weekly Cisco Champions Radio podcast and the regularly featured blogs on Cisco's website, if you are working to make a name for yourself in the industry it is a very good program for that. Further, Cisco gives you access to developers and program managers within the company so that you can not only gain greater understanding of the products but in many cases have the opportunity to weigh in on technology decisions during the development process.

Cisco breaks their program down into business segments, with tracks in Collaboration, Data Center, Enterprise Networks, IoT, and Security. If you have expertise in any of these, by all means apply.
In my mind I'm saving the best for last. The Veeam Vanguard program opened its nominations today for its third year, and I've been honored to have been awarded each year (so far). It is by far the most exclusive; there are currently only 50 members worldwide, and I believe the philosophy is to keep it on the small side, with only people who truly understand what the company is about. There are a lot of swag-type benefits to the Vanguard, to be sure, most notably something really special that revolves around their VeeamON conference (NOLA this year baby!), but to be honest what I most get out of the program is the distributed brain of not only the Veeam employees affiliated with the group but the group itself. On a daily basis, it seems, somebody's technology issues, Veeam related or not, are being sorted out through Vanguard communication methods. Long story short, in the Vanguard program they simply take care of you, and I'm happy to call all of them not just my peers but friends.

Because Veeam's product set is much tighter than the other two's, there aren't any official tracks within the program. That said, they are very good about selecting members who affiliate themselves with each of the hypervisors they support, VMware's vSphere and Microsoft's Hyper-V. This diversity is part of what makes the discussions between us so good.

Conclusion

Over the course of the past week I've heard various people talking about strategies for getting awarded to any number of these: "I'm not going to apply for this one so I can focus on that one," and so forth. Honestly, all I can recommend, if you are interested in applying to any of them, is to look at where your focus is, or where your focus should be, and apply. There is nothing that says "you belong to too many programs" or anything like that; if you feel you are qualified for any of these, or any other, by all means go apply. The name of the game is to grow your involvement with the technology community, regardless of what type of technology it is.

Installing .Net 3.5 on Server 2012/ Windows 8 and above

Hi all, just a quick post to serve as both a reminder to me and hopefully something helpful for you. For some reason Microsoft has decided to make installing .NET 3.5 on anything after Windows Server 2012 (or Windows 8 on the client side) harder than it has to be. While it is included in the regular Windows Features GUI, it is not included in the on-disk sources for features to be installed automatically. In a perfect world you just choose to source from Windows Update and go about your day, but in my experience this is a hit-or-miss solution, as many times, for whatever reason, it errors out when attempting to access Windows Update.

The fix is to install via the Deployment Image Servicing and Management tool, better known as DISM, and provide a local source for the files. .NET 3.5 is included in every modern Windows CD/ISO under the sources\sxs directory. When I do this installation I typically use the following command from an elevated command prompt or PowerShell window:
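
Something like this (a minimal example; it assumes the Windows install media is mounted as D:, so adjust /Source to wherever your ISO or disc shows up):

# Assumes the install media is mounted as D:; run from an elevated PowerShell window
DISM /online /enable-feature /featurename:NetFx3 /all /LimitAccess /Source:D:\sources\sxs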

When done, the window should look like the screenshot shown. Pretty simple, right? While this is all you really need to know to get it installed, let's go over what all these parameters are that you just fed into your computer.

  • /online – This refers to the idea that you are changing the installed OS as opposed to an offline image
  • /enable-feature – this is the CLI equivalent of choosing Add Roles and Features from Server Manager
  • /featurename – this is where we specify which role or feature we want to install. This can be used for any Windows feature
  • /all – here we are saying we not only want the base component but all components underneath it
  • /Source:d:\sources\sxs – this specifies where you want DISM to look for the installation media. You could also copy the sources to a network share, map a drive and use that as the source.
  • /LimitAccess – this simply tells DISM not to query Windows Update as a source

While DISM is available both at the command line and within PowerShell, there is also a PowerShell-specific cmdlet that works here and is maybe a little easier to read, but I tend to use DISM just because it's what I'm used to. To do the same in PowerShell you would use:
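
On the server SKUs a rough equivalent is the Server Manager cmdlet below, again assuming D: is the mounted media:

# NET-Framework-Core is the .NET 3.5 feature; -Source points at the sxs folder on the mounted media (D: assumed)
Install-WindowsFeature -Name NET-Framework-Core -Source D:\sources\sxs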


Setting Up External Access To A Veeam SureBackup Virtual Lab

Hey y'all, happy Friday! One of the things that still really flies under the radar in regards to Veeam Backup & Replication is its SureBackup feature. This feature is designed to allow for automated, script-driven testing of groups of your backups. An example would be a critical web application: you can create an application group that includes both the database server and the web server, and when the SureBackup job runs, Veeam will connect a section of its backup repository to a specified ESXi host as a datastore, start the VMs within a NAT-protected segment of your vSphere infrastructure, run either the included role-based scripts or custom ones you specify to ensure that the VMs and their applications are responding correctly, and then, when done, shut the lab down and fire off an e-mail.

That workflow is great and all, but it only touches the edge of what SureBackup can do for you. In our environment not only do we have a mandate to provide backup tests that allow for end-user interaction, but we also use SureBackup for test-bed applications such as patch tests. An example of the latter: when I was looking to upgrade our internal Windows-based CA to Server 2012 R2, I was able to launch the server in the lab, perform the upgrade and ensure that it behaved as expected WITHOUT ANY IMPACT ON PRODUCTION first, then tear down the lab and it was like it never happened. Allowing the VMs to stay up and running after the job starts requires nothing more than checking a box in your job setup.

By default, access to a running lab is fairly limited. When you launch a lab from your Veeam server, a route to the NAT'd network is injected into the Veeam server itself to allow access, but that doesn't help you all that much if you want others to be able to interact; we need to expand that access outwards. This post is going to walk you through the networking setup for a Virtual Lab that can be accessed from whatever level of access you are looking for, in my case from anywhere within my production network.

Setting Up the Virtual Lab

The first step, if you haven't set up SureBackup in your environment at all, is to set up your Virtual Lab. The first of two parts here that are critical to this task is setting up the Proxy IP, which is the equivalent of your outside NAT address if you've ever worked on a firewall. This IP is going to essentially be the production-network side of the lab VM that is created when you set up a Veeam Virtual Lab.

1-set-nat-host

Next we need to set up an isolated network for each production port group you need to support. While I use many VLANs in my datacenter, I try to keep the application groups I need to test on the same VLAN to make this setup simple, but it doesn't need to be; you can support as many as you need. Simply hit Add, browse out and find the production network port group you need to support, give the isolated network a name and specify a VLAN.

2a-setup-vlans

The last step of setting up the Virtual Lab in this regard is creating a virtual NIC to map to each of your isolated networks. Where I see a lot of people get tripped up with this: always make the proxy appliance IP address here map to the default gateway of the production network it is reflecting. If you don't do that, the launched lab VMs will never be able to talk outside of the lab. Second, in regard to the Masquerade IP, try to aim for some consistency. Notice that in my production network I am using a Class B private address space but with a Class C mask. By default this will throw off the automatic generation of the Masquerade IP, and I've found it isn't always consistent across multiple virtual NIC setups. If you set up multiple isolated networks above, you need to repeat this process for each network. Once you are done with this you can complete your lab setup and hit Finish to have it build or rebuild the appliance.

2-create-nat-network

Tweaking the SureBackup Job

For the sake of brevity I'm assuming at this point that you've got your Application Groups set up without issue and are ready to proceed to fixing your SureBackup job to stay up and running. To do so, on the Application Group screen all you have to do is check the "Keep the application group running after the job completes" box. That's it. Really. Once you do that, this lab will stay up and running until you right-click the job in the Veeam Backup & Replication console and choose Stop. I've been lobbying for years for a "stop after X hours" option but still haven't gotten very far with that one; really the concern there is more the performance impact of doubling part of your load, since you are essentially running two copies of a segment of your datacenter. If you have plenty to burn it isn't an issue.

3-keep-lab-up

Fixing the Routing

Now the final step is to either talk to your network guy or go yourself to where your VLAN routing takes place and add a static route for the lab's IP range to the routing table, pointing at the Proxy Appliance's IP. For the example we've been working through in this post, our proxy appliance has an IP of 172.16.3.42 and all of our lab networks are within the 172.31.0.0/16 network. If you are using an IOS-based Cisco switch to handle your VLAN routing, the command would be:
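
Roughly, from global configuration mode, using the example addresses above (substitute your own lab range and proxy IP):

! Send the 172.31.0.0/16 lab range to the Virtual Lab proxy appliance at 172.16.3.42
ip route 172.31.0.0 255.255.0.0 172.16.3.42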

After that is done, from anywhere that route is reachable you should be able to pass whatever traffic you need inbound to the lab network addresses. So, sticking with our example, for a production VM with the IP address 172.16.3.10 you would interact with the IP 172.31.3.10 in whatever way needed. Keep in mind this is, for lack of a better word, one-way traffic. You can connect in to any of the hosts within the lab network, but you can't really have them reach directly out and interact with the production network.

4a-testing

One More Thing…

One final tip I can give you on this, if you are going to let others in to play in your labs, is to have at least one workstation-grade VM included in each of your Application Groups with the software needed for testing loaded. This way you can enable RDP on that VM and the user can just double-click an icon and connect into the lab, running their tests from there. Otherwise, if you have locally installed applications that need to connect to hosts that are now inside the lab, you are either going to need to reconfigure the application with the corrected address or modify the user's hosts file temporarily so that they connect to the right place, neither of which is particularly easy to manage. The other nice thing about a modern RDP session is you can cut and paste files in and out of it, which is handy if the user wants to run reports and the like.

4-connecting-into-the-lab

As an aside, I'm contemplating doing a video run-through of setting up a SureBackup environment to be added to the blog next week. Would you find such a thing helpful? If so please let me know on Twitter @k00laidIT.

Upgrading Cisco Agent Desktop on Windows 10

So we recently had the joy of upgrading our Cisco Voice setup to version 11, including our Unified Contact Center Express (UCCX) system. In the process we had to do a quick upgrade of UCCX from 9.01 to 9.02 to be eligible to go the rest of the way up to 11, which allowed us to run into a nice issue I'm thinking many others are hitting as well.

As far as 11 is concerned, the big difference is that it is the first version where the Cisco Agent Desktop (CAD) is not an option, as it has been replaced by the new web-based Finesse client for Agents and Supervisors. For this reason many Voice Admins are choosing to take the leap this year to 10.5 instead, as it gives you the option of Cisco Agent Desktop/Cisco Supervisor Desktop (CSD) or Finesse. The problem? These MSI-installed client applications are not Windows 10 compatible. In our case it wasn't a big deal, as the applications were already installed when we did an in-place upgrade of many of our agents' desktops to Windows 10, but attempting to do a fresh installation would error out saying you were running an unsupported operating system.

*DISCLAIMER: While for us this worked just fine I’m sure it is unsupported and may lead to TAC giving you issues on support calls. Use at your own discretion.

Fixing the MSI with Orca

Luckily there is a way around this to allow the installers to run and even allow for automated installation. Orca is one of the tools available within the Windows SDK Components download, and it allows you to modify the parameters of Windows MSI packages and either include those changes directly in the MSI or create a transform file (MST) so that the changes are saved out-of-band from the install file and can be applied to different versions as needed. As my needs here are temporary, I'm simply going to modify the MSI in place and not bother with the MST, which would require additional parameters to be passed for remote installation.
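
For reference, if you did go the MST route, applying the transform at install time would look something like this (a hypothetical example; the transform filename is a placeholder):

REM Apply the transform during installation instead of editing the MSI itself; /qn runs it silently
msiexec /i CiscoAgentDesktop.msi TRANSFORMS=Win10Fix.mst /qn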

Once you have the SDK Components downloaded you can install Orca by running the Orca.msi within, and then just run it like any other application. The first step is to open the program, go to File > Open and open the MSI package. In this case we are looking for CiscoAgentDesktop.msi.

orca-open-file

Once open you will see a number of Tables down the left side. The easiest way I know to explain this is that an MSI is simply a sort of database wrapping the installer with parameters. Scroll down the list until you see LaunchCondition and double-click on it. You will now see a list of conditions the MSI package checks before the installer is allowed to launch. Reading the description of the first one, this is our error message, right?

1-orca-find-item

Now we need to remove the offending condition, which can be done by simply right-clicking on it and choosing "Drop Row." It will prompt you to confirm; just hit OK to continue.

2-orca-delete-row

Finally, before we save our new MSI we need to go to Tools > Options and choose the Database tab. Here we need to check the "Copy embedded streams during 'Save As'" option so that our changes will be rolled into the package.

3-orca-options

Now simply go to File>Save As… and save as you would any other file. Easy peasy…

4-orca-save-as

Now if we run our new MSI package it will allow you to proceed with the install as expected. Again, let me say this won't magically tell TAC that this is a supported solution. If you run into problems they may still tell you either to upgrade to 10.6 (which supports Windows 10) or later, or to roll the Windows version back to 8.1 or older.

5-after

Fun with the vNIC Shuffle with Cisco UCS

Here at This Old Datacenter we've recently made the migration to Cisco UCS for our production compute resources. UCS offers a great number of opportunities for system administrators, both in deployment and in ongoing maintenance, making updates to the physical layer as manageable as we virtualization admins have gotten used to with the virtualized layer of the DC. Of course, like any other deployment, there is always going to be that one "oh yeah, that" moment. In my case, after I had my servers up I realized I needed another virtual NIC, or vNIC in UCS terms. This shouldn't be a big deal, because a big part of what UCS does for you is abstract the hardware configuration away from the actual hardware.

For those more familiar with standard server infrastructure: instead of having any number of physical NICs in the back of the host for specific uses (iSCSI, VM traffic, specialized networking, etc.), you have a smaller number of connections as part of the Fabric Interconnect to the blade chassis that are logically split to provide networking to the individual blades. These Fabric Interconnects (FI) not only have multiple very high-speed connections (10 or 40 GbE), but each chassis will typically have multiple FIs to provide redundancy throughout the design. All that being said, here's a very basic design utilizing a UCS Mini setup with Nexus 3000 switches and a copper-connected storage array:

ucs-design

So are you starting to think this is a UCS geeksplainer? No, no my good person, this is actually the story of a fairly annoying hiccup in the relationship between UCS and VMware's ESXi. You see, while adding a vNIC should be as simple as creating your vNICs in the Service Profile, rebooting the affected blades and seeing the new NIC(s) show up as available within ESXi, it of course is not that simple. What happens in reality when you add new NICs to an existing physical-NIC-to-vSwitch layout is that the relationships get shuffled. So for example, if you started with a vNIC-to-vSwitch layout (vNICs show up as vmnicX in ESXi) that looks like this to start with

1-before

After you add the NICs and reboot, it looks like this

2-after

Notice the vmnic-to-MAC-address relationship in the two screenshots. While all the moving pieces are still there, different physical devices now map to different vSwitches than designed. This really matters when you think about all the differences that usually exist in the VLAN design underlying the networking in an ESXi setup. In this example vSwitch0 handles management traffic, HQProd-vDS handles all the VM traffic (so just trunked VLANs) and vSwitch1 handles iSCSI traffic. Especially when things like iSCSI that require specialized networking setup are involved, this becomes a nightmare; frankly I couldn't imagine having to do this with a more complex design.

The Fix

So I'm sure you are sitting here like I was, thinking "I'll call support and they will have some magic that will either a) fix this, b) prevent it from happening in the future, or preferably c) both." Well, not so much. The answer from both VMware and Cisco support is to figure out which NICs should be assigned to which vSwitch by reviewing the MAC-to-vNIC assignment in UCS Manager as shown below, and then manually manage the vSwitch uplink assignment on each host; a command-line sketch of that host-side work follows the screenshots.

3-corrected

4-correctedesx
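
For the host side, here is a rough sketch of checking and re-assigning uplinks with esxcli, assuming standard vSwitches and shell access to the host; the vmnic and vSwitch names are examples only:

# List physical NICs with their MAC addresses to compare against the vNIC list in UCS Manager
esxcli network nic list
# Show which uplinks each standard vSwitch currently owns
esxcli network vswitch standard list
# Move an uplink that landed on the wrong vSwitch (placeholder names)
esxcli network vswitch standard uplink remove --uplink-name=vmnic3 --vswitch-name=vSwitch0
esxcli network vswitch standard uplink add --uplink-name=vmnic3 --vswitch-name=vSwitch1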

As you may be thinking, yes, this is a pain in the you-know-what. I only had to do this with four hosts; I don't want to think about what this looks like in a bigger environment. Further, as best I can get answers from either TAC or VMware support, there is no way to make this go better in the future; this was not an issue with my UCS setup, this is just the way it is. I would love it if some of my "Automate All The Things!!!" crew could share a counterpoint on how to automate your way out of this, but I haven't found it yet. Do you have a better idea? Feel free to share it in the comments or tweet me @k00laidIT.

Getting the Ball Rolling with #vDM30in30

Ahh, that time of year when geeks pull that long-forgotten blog site out of the closet, dust it off and make promises of love and content: #vDM30in30. If you aren't familiar with the idea, vDM30in30 is short for Virtual Design Master 30 blog posts in 30 days, an idea championed by Eric Wright of discoposse fame to get bloggers out there to work their way through regular generation of content. As you can see from this site, new content is pretty rare here, so something like this is a welcome excuse to focus and get some stuff out there. vDM30in30 runs through the month of November and the best way to follow along with the content is to track the hashtag on Twitter.

So What’s the Plan?

I'm a planner by nature, so if I don't at least have a general idea this isn't going to work at all. The good news is I've got quite a few posts that I've been meaning to work on for some time, so I'm going to be cleaning out my closet this week and getting those out there. The full schedule is going to look like this:

  • Week of Nov 1: random posts I’ve never quite finished but need to be released
  • Week of Nov 7: focus on all the new hotness coming from Veeam Software
  • Week of Nov 14: VMware’s upcoming vSphere 6.5 release
  • Week of Nov 21: randomness about community, career and navel gazing in general

I'm really looking forward to participating this year, as I do believe a lot of growth comes from successfully forming our thoughts and putting them down. I hope you find some of this helpful; if there is anything you'd like to see in this space, feel free to comment.

A how-to on cold calling from the customer perspective

Now that I'm back from my second tech conference in less than two months I am fully into cold call season, and I am once again reminded why I keep meaning to buy a burner phone and set up a Gmail account before I register next year. It seems every time I get back I am destined for months of "I am so glad you expressed deep interest in our product and I'd love to tell you more about it," when the reality is "I am calling you because you weren't nimble enough to lunge away from our team of booth people who are paid or retained based on how many scans they can get." Most often when I get these calls or e-mails I'll give each company a courteous thanks-but-no-thanks, and after that the iDivert button gets worn out.

The genesis of this post is two-fold. First, a cold call this morning that was actually destined for my boss but, when informed he wasn't here, turned into telling me how glad the person was that I had personally expressed interest in their product. WTF? That reminded me of a second event: a few months ago I was at a mixer preceding a vendor-supplied training when I was approached by a bevy of 20-something Inside Sales Engineers and asked, "What can I do to actually get you to listen?" From this I thought that, just in case a young Padawan Sales Rep/Engineer happens to come across this, here are some ways to make your job more efficient and to stop alienating your potential customers.

Google Voice is the Devil

I guess the first step for anybody on the calling end of a cold call scenario is to get me to answer the phone. My biggest gripe in this regard, and the quickest way to earn the hang-up achievement, is the current practice of many startups of using Google Voice as their business phone system. In case you don't know, with Google Voice they do local exchange drop-offs when you call outside of your local calling area, meaning that when you call my desk I get a call with no name and a local area code, leaving me with the quandary of "is this a customer needing support or is this a cold call?" I get very few of the former, but on the off chance it is one I will almost always answer, leaving me hearing your happy voices.

I HAVE AN END CALL BUTTON AND I AM NOT AFRAID TO USE IT, GOOD DAY TO YOU SIR/MADAM!

You want to know how to do this better? First, don't just call me. You've got all my contact info, so let's start with being a little more passive: send me an e-mail introducing yourself and asking if I have time to talk to you. Many companies do this already because it brings a good deal of benefits; I've now captured your contact info, we're not wasting a lot of time on each other if there is zero interest, and I don't have to drop what I am dealing with to get your pitch. If this idea just absolutely flies in the face of all that your company holds dear and you really must cold call me, then don't hide behind an anonymous number; call me from your corporate number (or even better, your personal DID) with your company's name plastered on the Caller ID screen so at least I have the option to decide if it's a call I need to deal with.

A Trade Show Badge Scan List Does Not Mean I am (or anybody else is) Buying

I once again had an awesome time at VMworld this year but got to have an experience that I'm sure many other attendees have had variants of. There I was, happily walking my way through the show floor through a throng of people, when out of my peripheral vision a booth person for a vendor not to be named literally stepped through one person and was simultaneously reaching to scan my badge while asking, "Hi, do you mind if I scan you?" Yes, Mr./Ms. Inside Sales person, this is the type of quality customer interaction that resulted in me being put on your list. It really doesn't signify that I have a true interest in your product, so please see item one above regarding how to approach the cold call better.

I understand there is an entire industry built around having people capture attendee information as sales leads, but this just doesn't seem like a very effective way to do it. My likelihood of talking to you more about your product is much higher if someone with working knowledge of it, say an SE, talks to me either in the booth or at a social event, and the communication starts there. Once everybody is back home and doing their thing, that's the call I'm going to take.

Know Your Product Better Than I Do

That leads me to the next item: if by chance you've managed to cold call me, gotten me to pick up, and finally managed to keep me on the line long enough to actually talk about your product, ACTUALLY KNOW YOUR PRODUCT. I can't tell you how many times I've received calls after a show where the person on the other end of the line is so blatantly doing the fake-it-until-you-make-it thing it isn't funny. Keep in mind you are in the tech industry, cold calling people who most likely are fairly tech savvy and capable of logical thought, so that isn't going to work so well for you. Frankly, my time is a very, very finite resource, and even if I am interested in your product, which is why I took your call, if I'm correcting the caller that is an instant turn-off.

I get that the people manning the phones aren't going to be Senior Solutions Architects for your organization, but try this on for size: if you've got me talking to you and you get asked something you don't know, don't be afraid to say you don't know. This is your opportunity to bump me up the chain or to loop a more technical person into the call to get the discussion back on track. I will respect that far more than if you try to throw out a BS answer. Meanwhile, get as much education as you can on what you're selling. I don't care if you are a natural salesperson, you aren't going to be able to sell me my own pen in this market.

Employees != Resources

So you've made it all the way through the gauntlet, you've got me talking and you know your product: please don't tell me how you can get some resources arranged to help me with designing my quote so the deal can move forward. I was actually in a face-to-face meeting once where the salesperson did this, referring to the technical people within the organization as resources, and I think my internal response can best be summed up in GIF form:

obama_kicks_door

This absolutely drives me bonkers. A resource is an inanimate object which can be used repeatedly without consequence, except for the inevitable end result where the resource breaks. What you are calling a resource is a living, breathing, most likely highly intelligent human being who has all kinds of responsibilities, not just to you but to their family, community and any number of other things. By referring to them this way, and therefore showing that you think of them as something that can be used repeatedly without consequence, you are demeaning that person and the skill set he or she has, and trust me, that person is most likely who we as technical professionals are going to connect with far more than we will with you.

So that's it, Jim's guide to getting me on the phone. I'm sure as soon as I post this many other techniques will come to mind and I'll have to update it. If you take this to heart, great, I think that is going to work out for you. If not, well, I still hope I'll remember to buy that burner phone next May, and the Gmail account is already set up. 😉

Veeam Backup Repository Best Practices Session Notes

After a couple days off I'm back to some promised VeeamON content. A nice problem VeeamON had this year is that the session choices were much more diverse and there were a lot more of them. Unfortunately this led to some overlap of some really great sessions. A friend of mine, Jaison Bailey of vBrisket fame and fortune, got tied up in another session and was unable to attend what I considered one of the best breakout sessions all week, Anton Gostev's Backup Repository Best Practices, so he asked me to post my notes.

For those not too familiar with Veeam repos, they can essentially be any manner of addressable disk space, whether local, DAS, NAS, SAN or even cloud, but when you start taking performance into account you have to get much more specific. Gostev, who is the Product Manager for Backup & Replication, lays out the way to do it right.

Anyway, here are the notes, including links to information where possible. Any notations of my own are in bold and italics.

Don’t underestimate the importance of Performance

  • Performance issues may impact RTOs

Five Factors of choosing Storage

  • Reliability
  • Fast backups
  • Fast restores
  • DR from complete storage loss
  • Lowest Cost

Ultimate backup Architecture

  • Fast, reliable primary storage for fastest backups, then backup copy to Secondary storage both onsite AND offsite
  • Limit number of RP on primary, leverage cheap secondary
  • Selectively create offsite copies to tape, dr site, or cloud

Best Repo: Low End

  • Any Windows or Linux Server
    • Can also serve as backup/backup proxy server
  • Physical server storage options
    • Local Storage
    • DAS (JBOD)
    • SAN LUN
  • Virtual
    • iSCSI LUN connected to in guest Volume

Best Backup Repo: High End

Backup Repos to Avoid

  • Low-end NAS & appliances
    • If stuck with it, use iSCSI instead of other protocols * Ran into this myself with a Qnap array as my secondary storage, not really even feasible to run anything I/O heavy on it
  • SMB (CIFS) network shares
    • Lots of issues with existing SMB clients
    • If share is backed up by server, add actual server instead
  • VMDK on VMFS *Nothing wrong with running a repo from a virtual machine, but don’t store backups within, instead connect an iSCSI LUN directly to the VM and format NTFS
    • Extra logic on the data path- more chances for data corruption
    • Dependent on vSphere being functional
  • Windows Server 2012 Deduplication (scalability) *I get his rationale, but honestly I live and die by 2012 R2 deduplication, it just takes more care and feeding than other options. See my session’s slides for notes on how I implement it.

Immediate Future: Technologies to keep in mind

  • Server 2016 Deduplication
    • Same deduplication, far greater performance and scale (64 TB files) *This really will be a big deal in this space, there is a lot of upside to a simple dedupe ability rolled into a Windows server
  • ReFS 2.0
    • Great fit for backup repos because it has built in data corruption protection
    • Veeam is currently working on some things with it

Raw Disk

  • RAID10 whenever you can (2x write penalty, but capacity suffers)
  • RAID5 (4x write penalty, greater risks)
  • RAID6 (severe performance overhead, 6x write penalty)
  • Lookup Maximum performance per spindle
  • A single job can only keep about 6-8 spindles busy- use multiple jobs if you have them to saturate
  • RAID volume
    • Stripe Size
      • Typical I/O for Veeam is 25k-512KB
      • Windows Server 2012 defaults to 64KB
      • At least 128 KB stripe size is highly recommended
        • Huge change for things like Synthetics, etc
    • RAID array
      • Fill as many drives as possible from the start to avoid expansion
      • Low-end storage systems have significant performance problems
    • File System
      • NTFS (Best Option)
        • Larger block size does not affect performance, but it helps avoid excessive fragmentation, so a 64KB block size is recommended
        • Format with /L to enable larger file records
        • 16 TB max file size limit before 2012 (now 256)
        • * Full string of best practices for format NTFS partition from CLI: Format <drive:> /L /Q /FS:NTFS /A:8192
      • ReFS not ready for prime time yet
      • Other
    • Backup Job Settings
      • Always a performance vs disk space choice
      • Reverse incremental backup mode is 3x I/O per block
      • Consider forever incremental instead
      • Evaluate transform performance
      • Repository load
        • Limit concurrent jobs to a reasonable amount
        • Use ingest rate throttling for cross-SAN backups

Dedupe Storage: Pains and Gains

  • Gains
    • True global dedupe
    • Lowest cost/ TB
  • Do not use deduplicating storage as your primary backup repository!
  • But if you must, leverage vendor-specific integrations, use backup modes without full backup transformation, and use active fulls instead of synthetics
  • If backup performance is still bad, consider VTL
  • 16TB+ backup storage optimization for 4MB blocks (new)
  • Parallel processing may impact dedupe ratios

Secondary Storage Best Practices

  • Vendor-specific integrations can make performance better
  • Test Backup Copy retention processing performance. If too slow consider Active Full option of backup copy jobs (new in v9)
  • If already invested and stuck
    • Use as primary storage and leverage native replication to copy backups to DR

Backup Job Settings BP

Built-In deduplication

  • Keep ON for best performance (except lowest end devices) even if it isn’t going to help you with Per VM backup files
  • Compression
    • Instead of disabling compression, keep Optimal enabled in the job and use the "decompress before storing" option, even locally
    • Dedupe-friendly isn’t very friendly any more (new)
      • Will hinder faster recovery in v9
  • Vendor recommendations are sometimes self-serving to achieve higher dedupe ratios but negatively affect performance

Disk-based Storage Gotchas

  • Gostev loves tape
    • Cheaper
    • Reliable
    • Read-only
    • Customer Success is the biggie
    • Tape is dead
      • Amazon, Google & 50% of Veeam customers disagree
  • Storage-level corruption
    • RAID Controllers are your worst enemies
    • Firmware and software bugs are common, too
    • VT402 Data Corruption tomorrow at 1:30 for more
  • Ransomware  possible

The "2" Part of the 3-2-1 Rule

  • 3 copies, 2 different media, 1 offsite
  • Completely different storage type!

Storage based replication

  • Betting exclusively on storage-based replication will cost you your job
  • Pros:
    • Fantastic performance
    • Efficient bandwidth utilization
  • Cons:
    • Replicates bad data too
    • Backups remain in a single fault domain

Backup Copy vs. Storage-Based Copy

  • Pros:
    • Breaks the data loop (isolated source and target storage)
    • Implicitly validates all source data during its operation
    • Includes backup files health check
  • Cons:
    • Higher load on backup storage

Make Tape out of drives

  • Low End:
    • Use rotated drives
    • Supported for both primary & backup copy jobs
  • Mid-range:
    • Keep an off-site copy off-prem (cloud)
  • High End:
    • Use hardware-based WORM solutions

Virtualize your Repository (SOBR)

  • Simplify backup storage and backup job management
  • Reduce storage hardware spending by allowing disks to be fully utilized
  • Improve backup storage performance and scalability

 

Getting Started with rConfig on CentOS 7

I've been a long-time user of RANCID for change management on network devices, but frankly it's always been a little bit of a pain to use and not particularly modern. I recently decided it was time for my OpenNMS/RANCID server to be rebuilt, moving OpenNMS up to a CentOS 7 installation, and in doing so thought it was time to start looking around for a network device configuration management alternative. As is often the way in the SMB space, this isn't a task that actual budgetary dollars are going to go towards, so off to Open Source land I went! rConfig immediately caught my eye, looking to me like RANCID's hipper, younger brother, what with its built-in web GUI (through which you can actually add your devices), scheduled tasks that don't require you to manually edit cron, etc. The fact that rConfig specifically targets CentOS as its underlying OS was just a whole other layer of awesomesauce on top of everything else.

While rConfig's website has a couple of really nice guides once you create a site login, much to my dismay I found that they hadn't been updated for CentOS 7, and while working through them I found that there are actually some pretty significant differences that affect the setup of rConfig. Some differences are minor (no more iptables, it's firewalld), but it seems httpd has had a bit of an overhaul. Luckily I was not walking the virgin trail, and through some trial, error and, most importantly, Google I've now got my system up and running. In this post I'm going to walk through the process of setting up rConfig on a CentOS minimal install with network connectivity, with hopes that 1) it may help you, the two readers I've got, and 2) when I inevitably have to do this again I'll have documentation at hand.

Before we get into it I will say there are a few artistic licenses I've taken with rConfig's basic setup.

  1. I'll be skipping over the network configuration portion of the basic setup guide. CentOS 7 has done a great job of providing a single configuration screen at install where you set up your networking among other things.
  2. The system is designed to run on MySQL, but for a variety of reasons I prefer MariaDB. The portions of the creator's config guide that deal with these components are different from what you see here, but they will work just fine if you do them the way described.
  3. I'm a virtualization kind of guy, so I'll be installing the newly supported open-vm-tools as part of the config guide. Of course, if you aren't installing on ESXi you won't be needing these.
  4. Finally, before proceeding please be sure to go ahead and run a yum update to make sure everything's up to date and you really do have connectivity.

Disabling Stuff

Even with the minimal installation there are things you need to stop to make everything play nice, namely the security measures. If you were installing this out in the wild this would be a serious no-no, but for a smaller shop behind a well-configured firewall it should be OK.

vi /etc/sysconfig/selinux

Once in the file you need to change the "SELINUX=enforcing" line to "SELINUX=disabled". To do that hit "i" and then use vi like Notepad with the arrow keys. When done, hit Esc to exit insert mode and type ":wq" to save and exit.

Installing the Prerequisites

Since we did the minimal install there are lots of things we need to install. If you are root on the box you should be able to just cut and paste the following into the CLI and everything gets installed. As mentioned in the original Basic Config Guide, you will probably want to cut and paste each line individually to make sure everything gets installed smoothly.
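
A representative set, pieced together from what this post references (Apache and PHP, MariaDB in place of MySQL, vsftpd, the attr package for setfattr, and open-vm-tools); check rConfig's own guide for the authoritative list:

# Package list is representative, not exhaustive; see rConfig's Basic Config Guide for the full set
yum -y install httpd php php-cli php-common php-devel php-pear php-gd php-mysql
yum -y install mariadb mariadb-server vsftpd attr wget unzip crontabs
# open-vm-tools only applies if you are running on ESXi
yum -y install open-vm-tools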

Autostart Services

Now that we've installed all that stuff, it does us no good if it isn't running. CentOS 6 used the chkconfig command to control service autostart. In CentOS 7 all service manipulation is now done with the systemctl command. Don't worry too much; if you use chkconfig or service start at this point, both will still alias to the correct commands.
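
Assuming the services installed above, enable them to start at boot and then start them now:

# Enable autostart for the web, database, FTP and cron services, then start them immediately
systemctl enable httpd mariadb vsftpd crond
systemctl start httpd mariadb vsftpd crond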

Finalize Disable of SELinux

One of the hard parts for me was getting step 5/6 in the build guide to work correctly. If you don't do it the install won't complete, but it also doesn't work right out of the box. To deal with this, the first line in the prerequisites installs the attr package, which contains the setfattr executable. Once that's installed, the following checks to see if the '.' is still shown in the root directory's ACLs and then removes it from the /home directory. By all means, if you know of a better way to accomplish this (I thought of putting the install in the /opt directory), please let me know in the comments or on Twitter.
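
Roughly what I ended up running; the target here is /home because that's where rConfig lives in this setup, so adjust to your own layout:

# A trailing '.' in the permissions column indicates a lingering SELinux context
ls -ld / /home
# Strip the SELinux extended attribute from /home (needs the attr package; SELinux must already be disabled)
setfattr -h -x security.selinux /home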

MySQL Secure Installation on MariaDB

MariaDB accepts any commands you would normally use with MySQL. The mysql_secure_installation script is a great way to go from baseline to well secured quickly, and it is installed by default. The script is designed to:

  • Set root password
  • Remove anonymous users
  • Disallow root logon remotely
  • Remove test database and access to it
  • Finally reload the privilege tables

I tend to take all of the defaults, with the exception that I allow root login remotely for easier management. Again, this would be a very bad idea for databases with external access.
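
Kick it off from the shell:

# Walks you through setting the root password and removing the insecure defaults listed above
mysql_secure_installation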

Then follow the prompts from there.

As a follow-up, you may want to allow remote access to the database server for management tools such as Navicat or HeidiSQL. To do so, enter the following where X.X.X.X is the IP address you will be administering from. Alternatively, you can use root@'%' to allow access from anywhere.
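
A sketch, run from the mysql prompt (mysql -u root -p); the password here is a placeholder:

-- Replace X.X.X.X with your admin workstation's IP and use your actual root password
GRANT ALL PRIVILEGES ON *.* TO 'root'@'X.X.X.X' IDENTIFIED BY 'YourPasswordHere' WITH GRANT OPTION;
FLUSH PRIVILEGES;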


Configure VSFTPd FTP Software

Now that we've got the basics of setting up the OS and the underlying applications out of the way, let's get to the business of setting up rConfig for the first time. First we need to edit the sudoers file to allow the apache account access to various applications. Begin editing the sudoers file with the visudo command, arrow your way to the bottom of the file and enter the following:
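
The entries take the general shape below; the exact binaries rConfig needs come from its own install guide, so treat these paths as placeholders:

# Placeholder entries: allow the apache account to run specific binaries without a password
apache ALL=(ALL) NOPASSWD: /usr/bin/crontab
apache ALL=(ALL) NOPASSWD: /usr/sbin/apachectl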

rConfig Installation

First you are going to need to download the rConfig zip file from their website. Unfortunately the website doesn't seem to work with wget, so you will need to download it to a computer with a GUI and then upload it via SFTP to your rConfig server. (Ugh.) Once the file is uploaded to your /home directory, run the following commands back at your server's CLI:
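
Roughly, assuming the zip landed in /home and extracts to an rconfig directory (the exact filename will vary by version):

cd /home
# Extract the download and hand the tree to the apache account the webapp runs as
unzip rconfig-*.zip
chown -R apache /home/rconfig
chmod -R 775 /home/rconfig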

Next we need to copy the httpd.conf file over to the /etc/httpd/conf directory. This is where I had the most issues of all, in that the conf file included is for httpd on CentOS 6 and there are some module differences between 6 and 7. Attached here is a modified version that I was able to get working successfully after a bunch of failures. The file found here (httpd.txt) will need to replace the existing httpd.conf before the webapp will successfully start. If the file is copied to the /home/rconfig directory, the shell commands would be:
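
Assuming the modified file was uploaded as /home/rconfig/httpd.txt, and keeping a backup of the stock config first:

# Keep a copy of the original config, drop in the CentOS 7-friendly version, then restart Apache
cp /etc/httpd/conf/httpd.conf /etc/httpd/conf/httpd.conf.bak
cp /home/rconfig/httpd.txt /etc/httpd/conf/httpd.conf
systemctl restart httpd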

As long as the httpd service starts back up correctly, you should now be good to go with the web portion of the installation, which is pretty point-and-click. Again, for the sake of brevity, just follow along with the rConfig installation guide starting with the rConfig web installation section and follow it to the end. We'll get into setting up devices in a later post, but it is a pretty simple process if you are used to working with networking command lines.

Community and the Rural IT Professional

I was born and raised in a small area between Charleston and Huntington, WV. While I recognized growing up that my hometown, Scott Depot, was a small town, I thought of both those cities as just that: proper cities with all the benefits and drawbacks that go with them. As I grew older and my view of the world grew wider, I came to realize that what I considered the big city was to many a minor suburb, but nevertheless it was and still is my home.

This lack of size and economic opportunity has never stood out more than when I began my career in Information Technology. After we graduated from Marshall University with what I still believe to be a very respectable skill set, many of my fellow graduates flocked to bigger areas such as Columbus, OH, RTP and Atlanta. I chose, for a variety of reasons, to stick around here and make a career of it, and all in all, while not always the most stable, it has been fairly successful.

There are very few large datacenters here, with most being composed of a handful of racks. Some folks go to work for various service providers, others enter the VAR space, and I found my niche in what I like to call the Hyper-Converged Administrator role. The HCA tends to wear most if not all of the hats: virtualization, storage, networking, server administration, etc. I consider myself somewhat blessed that I've managed to avoid the actual desktop admin stuff for most of my career, but there's still some of that too.

In the past couple of years I've gotten more and more active within the social IT community by way of conference attendance, social media and blogging, and while it hasn't necessarily changed the direction my career is going, it has radically changed it in that I have found great opportunities for growing my personal knowledge. In some cases this growth has been strictly technology related, by way of pushing me to explore new facets of IT systems management I hadn't previously considered, as well as giving me access to very knowledgeable people who are usually eager to point you in the right direction when asked. In other ways this knowledge, while IT related, is more oblique, in that I feel like I now have a much better understanding of what life is like on the other side of the various fences (vendors, VARs, datacenter admins, etc.) than I ever did before. This latter knowledge base has greatly changed how I approach some of the more political parts of IT, such as vendor management and internal project pitches.

While the global Internet community is great, I find that the missing piece is still face time. The richness of communication when I'm at conferences is more personal than anything done online, and I find myself somewhat jealous of those in areas large enough to support user groups of any size. In the past year I've gotten to know VMware User Group (VMUG) leaders from Louisville, Kansas City, Phoenix and Portland, as well as the guys behind the excellent career-oriented community vBrisket, and I've enjoyed hearing tales of what's involved in getting their regular meetings together and wish I could do the same here.

Personally, my goal for the coming year is to do a bit of travel and attend the meetings of some of the user groups listed above. If you are local here in the WV IT community, reach out and let's figure out how to do something here. There may not be a lot of us, but that's an even better reason to get to know each other and share the knowledge.