Installing .NET 3.5 on Server 2012/Windows 8 and above

Hi all, just a quick post to serve as both a reminder to me and hopefully something helpful for you. For some reason Microsoft has decided to make installing .NET 3.5 on anything after Windows Server 2012 (or Windows 8 on the client side) harder than it has to be. While it is included in the regular Windows Features GUI, it is not included in the on-disk sources, so the feature cannot be installed automatically. In a perfect world you would just choose to source from Windows Update and go about your day, but in my experience this is a hit-or-miss solution; many times, for whatever reason, it errors out when attempting to access Windows Update.

The fix is to install via the Deployment Image Servicing and Management tool, better known as DISM, and provide a local source for the files. .NET 3.5 is included on every modern Windows CD/ISO under the sources\sxs directory. When I do this installation I typically use the following command from an elevated command prompt or PowerShell window:
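The command itself didn’t survive this capture; based on the parameters described below, it would look like this (assuming your install media is mounted as drive D:):

```
Dism /online /enable-feature /featurename:NetFx3 /all /LimitAccess /Source:d:\sources\sxs
```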

When done, DISM should report that the operation completed successfully. Pretty simple, right? While this is all you really need to know to get it installed, let’s go over what all these parameters are that you just fed into your computer.

  • /online – This tells DISM you are changing the installed, running OS as opposed to an offline image
  • /enable-feature – this is the CLI equivalent of choosing Add Roles and Features from Server Manager
  • /featurename – this is where we specify which role or feature we want to install. This can be used for any Windows feature
  • /all – here we are saying we want not only the base component but all components underneath it
  • /Source:d:\sources\sxs – this specifies where you want DISM to look for the installation media. You could also copy this directory to a network share, map a drive, and use that as the source.
  • /LimitAccess – this simply tells DISM not to query Windows Update as a source

While DISM is available both at the command line and in PowerShell, there is a PowerShell-specific cmdlet that works here as well and is maybe a little easier to read; I tend to use DISM just because it’s what I’m used to. To do the same in PowerShell you would use:
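The cmdlet didn’t survive this capture either; the DISM-equivalent PowerShell would be something like this (again assuming D: holds the media):

```powershell
Enable-WindowsOptionalFeature -Online -FeatureName "NetFx3" -All -LimitAccess -Source "d:\sources\sxs"
```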

VMware Tools Security Bug and Finding which VMware Tools components are installed on all VMs

Just a quick post related to today’s VMware security advisories. VMware released a pair of advisories today, CVE-2016-5330 and CVE-2016-5331, and while both are nasty, their scopes are somewhat limited. The 5331 issue is only applicable if you are running vCenter or ESXi 6.0 or 6.0 U1; Update 2 patches the bug. The 5330 issue is limited to Windows VMs that are running VMware Tools and have the optional HGFS component installed. To find out if you are vulnerable, here’s a PowerCLI script to get all your VMs and list the installed components. Props to Jason Shiplett for giving me some assistance on the code.
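The original script was lost from this capture (the working copy is on GitHub, as noted below). A rough sketch of the approach, reconstructed from the description — the driver names in $componentPattern and the use of Invoke-VMScript are my assumptions, not necessarily the original implementation:

```powershell
# Sketch only -- assumes an open Connect-VIServer session and credentials that
# are valid inside the Windows guests. $componentPattern is illustrative.
$componentPattern = "vmhgfs|vmxnet|vmci"
$guestCred = Get-Credential

Get-VM |
    Where-Object { $_.Guest.OSFullName -match "Windows" } |
    ForEach-Object {
        # List driver names inside the guest and keep the VMware-related ones
        $result = Invoke-VMScript -VM $_ -GuestCredential $guestCred -ScriptText `
            'Get-WmiObject Win32_SystemDriver | Select-Object -ExpandProperty Name'
        [PSCustomObject]@{
            VM         = $_.Name
            Components = ($result.ScriptOutput -split "`r?`n" |
                Where-Object { $_ -match $componentPattern }) -join ", "
        }
    }
```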

While the output is still a little rough, it will get you there. Alternatively, if you are just using this script for the advisory listed, you can change where-object { $_.Name -match $componentPattern } to where-object { $_.Name -match "vmhgfs" }. This script is also available on GitHub.

The Basics of Network Troubleshooting

The following post is something I wrote as an in-house primer for our help desk staff. While it is a bit down-level from a lot of the content here, I find more and more that picking and reliably sticking with a troubleshooting methodology is somewhat of a lost art. If you are just getting started in networking or are troubleshooting connectivity issues at your home or SMB, this would be a great place to start.

We often get issues which are reported as application issues but end up being network related. There are a number of steps and logical thought processes that can make even the most difficult network issues easy to troubleshoot. The purpose of this post is to outline many of the basic steps of troubleshooting network issues; past that, it’s time to reach out and ask for assistance.

  1. Understand the basics of OSI model based troubleshooting

    The conceptual idea of how a network operates within a single node (computer, smartphone, printer, etc.) is defined by something called the OSI reference model. The OSI model breaks down the operations of a network into 7 layers, each of which is reliant on success at the layers below it (inbound traffic) and above it (outbound traffic). The layers (with some corresponding protocols you’ll recognize) are:

    7. Application: app needs to send/receive something (HTTP, HTTPS, FTP, anything that the user touches and begins/ends network transmission)
    6. Presentation: formatting & encryption (VPN and DNS host names)
    5. Session: interhost communication (nothing to see here:))
    4. Transport: end to end negotiations, reliability (the age old TCP vs. UDP debate)
    3. Network: path and logical addressing (IP addresses & routing)
    2. Data Link: physical addressing (MAC addresses & switches)
    1. Physical: physical connectivity (Is it plugged in?)

    The image below is a great cheat card for keeping these somewhat clear:

    (Image: OSI model cheat card)
    Image source: http://www.gargasz.info/osi-model-how-internet-works/

    Today, OSI is used as a template for how to understand and thus troubleshoot networking issues. The best way to troubleshoot any IT problem that could be a network issue is from the bottom of the stack upwards. Here are a few basic steps to get you going.

  2. Is it plugged in?

    This may seem like a smart ass answer, but many times this is just the case. Somebody’s unplugged the cable or the clip has broken off the Cat6 cable and every time somebody touches the desk it wiggles out. Most of the time you will have some form of a light to tell you that you have both connectivity to the network (usually green) and are transmitting on the network (usually orange).

    This is Layer 1 troubleshooting.

  3. Is the network interface enabled?

    So the cable is in and maybe you’ve tried to plug the same cable from the wall into multiple devices; you get link lights on other devices but no love on the device you need. This may represent a Data Link issue where the Network Interface Card (NIC) has been disabled in the OS. From the client standpoint this would be within Windows or Mac OSX or whatever, on the other side it’s possible the physical interface on the switch that represents the other end of the wire may be disabled. Check out the OS first and then reach out to your network guy to check the switch if need be.

  4. Can the user ping it?

    Moving up to the Network layer, the next step is to test if the user can ping the device which they are having an issue with. Have the user bring up a command prompt and ping the IP address of the far end device.
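    For example (the IP address here is just a stand-in for whatever you are testing against):

    ```
    ping 192.168.1.50
    ```

    Replies mean Layer 3 connectivity is good; “Request timed out” or “Destination host unreachable” means it’s time to keep troubleshooting.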

  5. Can you ping it?

    By the very nature of you being an awesomesauce IT person, you are going to have more ability to test than the user. To start with, see if you can ping it from your workstation; this will rule out user error and potentially any number of other issues as well. If you can’t, are you on the same subnet/VLAN as the device you are trying to access? If not, try to access a device in the same subnet as the endpoint you are testing and ping it from there. That may give you some insight into issues with default gateway configuration or underlying routing (aka Layer 3) issues.

  6. Can you ping it by name?

    Let’s say you can ping it by IP address from all of the above. If the user is trying to access something by name, say server1.foo.com, have them ping that as well. It’s possible that while the lower three layers of the stack are operating well, something has gone awry with DNS or other forms of name resolution that happen at the Presentation layer.
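    A quick way to separate name resolution from connectivity, using the example hostname from above:

    ```
    ping server1.foo.com
    nslookup server1.foo.com
    ```

    If ping by IP works but ping by name fails, nslookup will show you whether the name resolves at all, and to which address.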

  7. Application firewalls and the like

    Finally we’ve reached the top of the stack and we need to take a look at the individual applications. So far you’ve verified that the cable’s plugged in, the NICs on both sides are enabled, and you can ping between the user and the far device by both IP and hostname, but still the application won’t work. So now’s when we look at the actual application and immediately start rebooting things.

    Just kidding 🙂 No, now we need to look at the services being presented to the network. If we are troubleshooting an e-mail issue, are the services running on the server and can we connect to them? When talking about TCP/IP-based traffic (meaning all traffic), all application layer traffic occurs over either a TCP or UDP protocol port. This isn’t something you physically plug in, but rather a logical slot that an application is known to talk on, kind of like a CB radio channel. For example, SMTP typically runs on TCP port 25, FTP on 21, and printing usually on 9100. If you are troubleshooting an e-mail issue, bring up a command prompt and try to connect to the device via telnet like “telnet server1.foo.com 25.” If the SMTP server is listening on that port at the far end it will answer; if not, the connection will time out.
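    On newer Windows versions, where the telnet client isn’t installed by default, PowerShell can do the same port test:

    ```powershell
    Test-NetConnection server1.foo.com -Port 25
    ```

    A result of TcpTestSucceeded : True means something answered on that port.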

  8. Call in reinforcements

    If you’ve got this far, it’s going to take a combination of multiple brains and probably some application owners/vendors to untangle the mess those crazy users have made. Reach out to your network and application teams or call in vendor support at this point.

Network troubleshooting isn’t hard, you just have to know where to start.

Quieting the LogPartitionLowWaterMarkExceeded Beast in Cisco IPT 9.0.x Products

As a SysAdmin I’m used to waking up, grabbing my phone, and seeing the 20 or so e-mails that the various systems have sent me overnight; it gives me an idea of how the day will go and what I need to start with. Every so often, though, you get that morning where the 20 becomes 200 and you just want to roll over and go back to bed. This morning I had about 200, the vast majority of which were from my Cisco Unified Contact Center Express server with the subject “LogPartitionLowWaterMarkExceeded.” Luckily I’ve had this before and know what to do with it, but on the chance you are getting it too, here’s what it means and how to deal with it in an efficient manner.

WTF Is This?!?

Or at least that was my response the first time I ran into this. If you are a good little voice administrator, one of the first things you do when installing your phone system, or taking one over due to a job change, is set up the automatic alerting capability in the Cisco Unified Real Time Monitoring Tool (or RTMT; you did install that, right?) so that when things go awry you know, in theory, before the users do. One of the downsides to this system is that alerting is either on or off, meaning whatever log events are saved within the system are automatically e-mailed at the same frequency.

This particular error message is the by-product of a bug (CSCul18667) in the 9.0.x releases of all the Cisco IP Telephony products, in which the JMX logs produced by the (at the time new) Unified Intelligence Center didn’t get automatically deleted to maintain space on the log partition. While this has long since been fixed, phone systems are one of those things that don’t get updated as regularly as they should, and as such it is still an issue. The resulting effect is that when you reach the “warning” level of partition usage (Low Water Mark), it starts logging every 5 minutes that the level has been reached.

Just Make the Screaming Stop

Now that we know what the issue is how do we fix it?

  1. Go back to the RTMT application and connect to the affected component server. Once there, navigate to the Trace & Log Central tool, then double-click on the Remote Browse option.
  2. In the Remote Browse dialog box choose “Trace Files,” and we really only need one service selected, Cisco Unified Intelligence Center Serviceability Service; then Next, Next, Finish.
  3. Once it is done gathering all of the log files it will tell you your browse is ready. Drill all the way down through the menu on each node until you reach “jmx.” Once you double-click on jmx you will see the bonanza of logs. It is best to just click one, press Ctrl+A to select all, and then hit the Delete button.
  4. After you hit Delete it will probably take quite a while to process. Then click on the node name and hit Refresh to check; when done you should be left with just the currently active log file. If you have multiple nodes of the application you will need to repeat this process for each of them.

And that’s it really. Once done, the e-mail bleeding will stop and you can go about the other 20 things you need to get done today. If you are experiencing this, I would recommend being smarter than me and, if possible, just updating your CIPT components to a version newer than 9.0 (11.5 is the current release), something I am hoping to begin in the next month or so.

Updating the Photo Attributes in Active Directory with Powershell

Today I got to experience the joys of once again needing to get caught up on importing employee photos into the Active Directory photo attributes, thumbnailPhoto and jpegPhoto. While this isn’t exactly the most necessary thing on Earth, it does make working in a Windows environment “pretty,” as these images are used by things such as Outlook, Lync and Cisco Jabber among others. In the past the only way I’ve ever known to do this is by using the AD Photo Edit Free utility, which while nice tends to be a bit buggy, and it requires lots of repetitive action as you manually update each user for each attribute. This year I’ve given myself the goal of 1) finally learning PowerShell/PowerCLI to at least the level of mild proficiency and 2) automating as many tasks like this as possible. While I’ve been dutifully working my way through a playlist of great Pluralsight courses on the subject, I’ve had to live dangerously a few times to accomplish tasks like this along the way.

So, long story short, with some help along the way from googling things I’ve managed to put together a script that does the following:

  1. Look in a directory passed to the script via the jpgdir parameter for any images with the file name format <username>.jpg
  2. Do an Active Directory search in an OU specified in the ou parameter for the username included in the image name. This parameter needs to be the full DN path (ex. LDAP://ou=staff,dc=foo,dc=com)
  3. If the user is found then it will make a resized copy of the image file into the “resized” subdirectory to keep the file sizes small
  4. Finally the resized image is set as both the thumbnailPhoto and jpegPhoto attributes for the user’s AD account

So your basic usage would be .\Set-ADPhotos.ps1 -jpgdir "C:\MyPhotos" -OU "LDAP://ou=staff,dc=foo,dc=com". This can easily be set up as a scheduled task to fully automate the process. In our case I’ve got the person in charge of creating security badges feeding the folder with pictures when they’re taken for the badges, then this runs at 5 in the morning each day automatically.

All that said, here’s the actual script code:
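The script body was lost from this capture; the live copy is on GitHub, linked at the end of the post. A rough sketch of its shape, reconstructed from the numbered steps above — the Resize-Image helper and the exact ADSI calls are my assumptions:

```powershell
# Sketch only -- the real Set-ADPhotos.ps1 lives on GitHub. Resize-Image is the
# third-party helper function referenced in the post, not defined here.
param(
    [Parameter(Mandatory=$true)][string]$jpgdir,
    [Parameter(Mandatory=$true)][string]$ou    # full DN path, e.g. LDAP://ou=staff,dc=foo,dc=com
)

$searcher = New-Object System.DirectoryServices.DirectorySearcher
$searcher.SearchRoot = [ADSI]$ou

Get-ChildItem -Path $jpgdir -Filter *.jpg | ForEach-Object {
    $username = $_.BaseName
    $searcher.Filter = "(&(objectClass=user)(sAMAccountName=$username))"
    $result = $searcher.FindOne()
    if ($result) {
        # Resize into the "resized" subdirectory to keep the attribute sizes small
        $resized = Join-Path $jpgdir "resized\$($_.Name)"
        Resize-Image -InputFile $_.FullName -OutputFile $resized -Width 96

        $photo = [System.IO.File]::ReadAllBytes($resized)
        $user = $result.GetDirectoryEntry()
        $user.Properties["thumbnailPhoto"].Value = $photo
        $user.Properties["jpegPhoto"].Value = $photo
        $user.CommitChanges()
    }
}
```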


Did I mention that I had some help from the Googles? I was able to grab some great help (read Ctrl+C, Ctrl+V) in learning how to piece this together from a couple of sites:

The basic idea came from https://coffeefueled.org/powershell/importing-photos-into-ad-with-powershell/

The Powershell Image Resize function: http://www.lewisroberts.com/2015/01/18/powershell-image-resize-function/

Finally I’ve been trying to be all DevOpsy and start using GitHub so a link to the living code can be found here: https://github.com/k00laidIT/Learning-PS/blob/master/Set-ADPhotos.ps1

A how-to on cold calling from the customer perspective

Now that I’m back from my second tech conference in less than two months, I am fully into cold call season and am once again reminded why I keep meaning to buy a burner phone and set up a Gmail account before I register next year. It seems every time I get back I am destined for months of “I am so glad you expressed deep interest in our product and I’d love to tell you more about it” when the reality is “I am calling you because you weren’t nimble enough to lunge away from our team of booth people who are paid or retained based on how many scans they can get.” Most often when I get these calls or e-mails I’ll give each company a courteous thanks-but-no-thanks, and after that the iDivert button gets worn out.

The genesis of this post is two-fold. First, a cold call this morning that was actually destined for my boss: when informed he wasn’t here, the caller went into telling me how glad they were that I had personally expressed interest in their product. WTF? This first event reminded me of a second, where a few months ago I was at a mixer preceding a vendor-supplied training and was approached by a bevy of 20-something Inside Sales Engineers and asked “what can I do to actually get you to listen?” From this I thought that, just in case a young Padawan Sales Rep/Engineer happens to come across this, here are those ways to make your job more efficient and to stop alienating your potential customers.

Google Voice is the Devil

I guess the first step for anybody on the calling end of a cold call scenario is to get me to answer the phone. My biggest gripe in this regard, and the quickest way to earn the hang-up achievement, is the current practice of many startups of using Google Voice as their business phone system. In case you don’t know, Google Voice does local exchange drop-offs when you call outside of your local calling area, meaning that when you call my desk I get a call with no name and a local area code, leaving me with the quandary of “is this a customer needing support or is this a cold call?” I get very few of the former, but on the off-chance it is one I will almost always answer, leaving me hearing your happy voices.

I HAVE AN END CALL BUTTON AND I AM NOT AFRAID TO USE IT, GOOD DAY TO YOU SIR/MADAM!

You want to know how to do this better? First, don’t just call me. You’ve got all my contact info, so let’s start with being a little more passive: send me an e-mail introducing yourself and asking if I have time to talk to you. Many companies do this already because it brings with it a good deal of benefits; I’ve now captured your contact info, we’re not really wasting a lot of time on each other if there is zero interest, and I don’t have to drop what I am dealing with to get your pitch. If this idea just absolutely flies in the face of all that your company holds dear and you really must cold call me, then don’t hide behind an anonymous number; call me from your corporate number (or even better, your personal DID) with your company’s name plastered on the Caller ID screen, so at least I have the option to decide if it’s a call I need to deal with.

A Trade Show Badge Scan List Does Not Mean I am (or anybody else is) Buying

I once again had an awesome time at VMworld this year, but got to have an experience that I’m sure many other attendees have had variants of. There I was, happily walking my way through the show floor through a throng of people, when out of my peripheral vision a booth person for a vendor not to be named literally stepped through one person and was simultaneously reaching to scan my badge while asking “Hi, do you mind if I scan you?” Yes, Mr./Ms. Inside Sales person, this is the type of quality customer interaction that resulted in me being put on your list. It really doesn’t signify that I have a true interest in your product, so please see item one above regarding how to approach the cold call better.

I understand there is an entire industry built around having people capture attendee information as sales leads but this just doesn’t seem like a very effective way to do it. My likelihood of talking to you more about your product is much higher if someone with working knowledge of your product, say an SE, talks to me about your product either in the booth or at a social event and then the communication starts there. Once everybody is back home and doing their thing that’s the call I’m going to take.

Know Your Product Better Than I Do

That leads me to the next item: if by chance you’ve managed to cold call me, get me to pick up, and finally manage to keep me on the line long enough to actually talk about your product, ACTUALLY KNOW YOUR PRODUCT. I can’t tell you how many times I’ve received calls after a show where the person on the other end of the line is so blatantly doing the fake-it-until-you-make-it thing it isn’t funny. Keep in mind you are in the tech industry, cold calling people who most likely are fairly tech savvy and capable of logical thought, so that isn’t going to work so well for you. Frankly, my time is a very, very finite resource, and even if I am interested in your product, which is why I took your call, if I’m the one correcting the caller that is an instant turn-off.

I get that the people manning the phones aren’t going to be Senior Solutions Architects for your organization but try this on for size; if you’ve got me talking to you and you get asked something you don’t know, don’t be afraid to say you don’t know. This is your opportunity to bump me up the chain or to loop in a more technical person to the call to get the discussion back on the right track. I will respect that far more than if you try to throw out a BS answer. Meanwhile get as much education as you can on what you’re selling. I don’t care if you are a natural sales person, you aren’t going to be able to sell me my own pen in this market.

Employees != Resources

So you’ve got yourself all the way through the gauntlet, you’ve got me talking, and you know your product; please don’t tell me how you can get some resources arranged to help me with designing my quote so the deal can move forward. I was actually in a face-to-face meeting once where the sales person did this, referring to the technical people within their organization as resources, and I think my internal response can best be summed up in GIF form:

(GIF: Obama kicking open a door)

This absolutely drives me bonkers. A resource is an inanimate object which can be used repeatedly without consequence, except for the inevitable end result where the resource breaks. What you are calling a resource is a living, breathing, most likely highly intelligent human being who has all kinds of responsibilities, not just to you but to their family, community and any number of other things. By referring to them as a resource, and thereby showing that you think of them as something that can be used repeatedly without consequence, you are demeaning that person and the skill set he or she has; and trust me, that person is most likely who we as technical professionals are going to connect with far more than with you.

So that’s it, Jim’s guide to getting me on the phone. I’m sure as soon as I post this many other techniques will come to my mind and I’ll have to update this. If you take this to heart, great, I think that is going to work out for you. If not, well, I still hope I’ll remember to buy that burner phone next May and the Gmail account is already setup. 😉

Getting Started with rConfig on CentOS 7

I’ve been a long-time user of RANCID for change management on network devices, but frankly it’s always felt like a bit of a pain to use and not particularly modern. I recently decided it was time for my OpenNMS/RANCID server to be rebuilt, moving OpenNMS up to a CentOS 7 installation, and in doing so thought it was time to start looking around for a network device configuration management alternative. As is often the way in the SMB space, this isn’t a task that actual budgetary dollars are going to go towards, so off to Open Source land I went! rConfig immediately caught my eye, looking to me like RANCID’s hipper, younger brother, what with its built-in web GUI (through which you can actually add your devices), scheduled tasks that don’t require you to manually edit cron, etc. The fact that rConfig specifically targets CentOS as its underlying OS was just a whole other layer of awesomesauce on top of everything else.

While rConfig’s website has a couple of really nice guides once you create a site login, much to my dismay I found that they hadn’t been updated for CentOS 7, and while working through them I found that there are actually some pretty significant differences that affect the setup of rConfig. Some differences are minor (no more iptables; it’s firewalld now), but it seems httpd has had a bit of an overhaul. Luckily I was not walking the virgin trail, and through some trial, error and, most importantly, Google I’ve now got my system up and running. In this post I’m going to walk through the process of setting up rConfig on a CentOS minimal install with network connectivity, with hopes that 1) it may help you, the two readers I’ve got, and 2) when I inevitably have to do this again I’ll have documentation at hand.

Before we get into it I will say there are a few artistic licenses I’ve taken with rConfig’s basic setup.

  1. I’ll be skipping over the network configuration portion of the basic setup guide. CentOS 7 does a great job of providing a single configuration screen at install where you set up your networking among other things.
  2. The system is designed to run on MySQL, but for a variety of reasons I prefer MariaDB. The portions of the creator’s config guide that deal with these components differ from what you see here, but will work just fine if you do them the way described.
  3. I’m a virtualized kind of guy, so I’ll be installing the newly supported open-vm-tools as part of the config guide. Of course, if you aren’t installing on ESXi you won’t be needing these.
  4. Finally, before proceeding please be sure to run a yum update to make sure everything’s up to date and you really do have connectivity.

Disabling Stuff

Even with the minimal installation there are things you need to stop to make things work nicely, namely the security measures. If you were installing this in the wild this would be a serious no-no, but for a smaller shop behind a well-configured firewall it should be OK.

vi /etc/sysconfig/selinux

Once in the file you need to change the “SELINUX=enforcing” line to “SELINUX=disabled”. To do that hit “i” and then use vi like Notepad with the arrow keys. When done, hit Esc to exit insert mode and type “:wq” to save and exit.
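If you’d rather skip vi, the same change can be made non-interactively with sed. Shown here against a scratch copy so you can see the effect; on the real box, point it at /etc/sysconfig/selinux instead:

```shell
# Demonstrate on a scratch copy; use /etc/sysconfig/selinux on the actual server.
cfg=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$cfg"
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' "$cfg"
grep '^SELINUX=' "$cfg"
```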

Installing the Prerequisites

Since we did the minimal install there are lots of things we need to install. If you are root on the box you should be able to just cut and paste the following into the CLI and everything gets installed. As mentioned in the original Basic Config Guide, you will probably want to cut and paste each line individually to make sure everything installs smoothly.
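The package list itself was lost from this capture; per the rConfig prerequisites it would be roughly the following (these names are my reconstruction, so verify against the official guide):

```shell
yum install -y attr
yum install -y httpd httpd-devel php php-cli php-common php-mysql
yum install -y mariadb mariadb-server vsftpd crontabs
yum install -y wget unzip telnet openssl openssl-devel open-vm-tools
```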

Autostart Services

Now that we’ve installed all that stuff, it does us no good if it isn’t running. CentOS 6 used the command chkconfig on|off to control service autostart. In CentOS 7 all service manipulation is now done under the systemctl command. Don’t worry too much; if you use chkconfig or service start at this point, both will still alias to the correct commands.
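As a sketch of the new syntax (the service names here match what this build uses; the old chkconfig httpd on becomes systemctl enable httpd):

```shell
systemctl enable httpd mariadb vsftpd
systemctl start httpd mariadb vsftpd
```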

Finalize Disable of SELinux

One of the hard parts for me was getting steps 5/6 in the build guide to work correctly. If you don’t do it the install won’t complete, but it also doesn’t work right out of the box. To fix this, the first line in the prerequisites installs the attr package, which contains the setfattr executable. Once that’s installed, the following checks to see if the ‘.’ is still in the root directory’s ACLs and removes it from the /home directory. By all means, if you know of a better way to accomplish this (I thought of putting the install in the /opt directory), please let me know in the comments or on Twitter.

MySQL Secure Installation on MariaDB

MariaDB accepts any commands you would normally use with MySQL. The mysql_secure_installation script is a great way to go from baseline to well secured quickly and is installed by default. The script is designed to:

  • Set root password
  • Remove anonymous users
  • Disallow root logon remotely
  • Remove test database and access to it
  • Finally reload the privilege tables

I tend to take all of the defaults, with the exception that I allow root login remotely for easier management. Again, this would be a very bad idea for databases with external access.
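Kicking it off is just:

```shell
mysql_secure_installation
```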

Then follow the prompts from there.

As a follow up you may want to allow remote access to the database server for management tools such as Navicat or Heidi SQL. To do so enter the following where X.X.X.X is the IP address you will be administering from. Alternatively you can use root@’%’ to allow access from anywhere.
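A sketch of the grant, run from the mysql client as root; substitute your own password, and your admin workstation’s IP for X.X.X.X:

```sql
GRANT ALL PRIVILEGES ON *.* TO 'root'@'X.X.X.X' IDENTIFIED BY 'your_password' WITH GRANT OPTION;
FLUSH PRIVILEGES;
```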


Configure VSFTPd FTP Software

Now that we’ve got the basics of setting up the OS and the underlying applications out of the way, let’s get to the business of setting up rConfig for the first time. First we need to edit the sudoers file to allow the apache account access to various applications. Begin editing the sudoers file with the visudo command, arrow your way to the bottom of the file, and enter the following:
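The entries themselves didn’t survive this capture; the authoritative list is in the rConfig install guide. Representative (hypothetical) lines look like:

```
# Hypothetical examples -- use the exact list from the rConfig install guide
apache ALL=(ALL) NOPASSWD: /usr/bin/crontab
apache ALL=(ALL) NOPASSWD: /sbin/service
```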

rConfig Installation

First you are going to need to download the rConfig zip file from their website. Unfortunately the website doesn’t seem to work with wget, so you will need to download it to a computer with a GUI and then upload it via SFTP to your rConfig server. (ugh) Once the file is uploaded to your /home directory, back at your server CLI run the following commands:
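The commands were lost from this capture; assuming the zip landed in /home (the filename below is a placeholder for whatever version you downloaded), they would be roughly:

```shell
cd /home
unzip rconfig-*.zip -d /home/rconfig
chown -R apache:apache /home/rconfig
```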

Next we need to copy the httpd.conf file over to the /etc/httpd/conf directory. This is where I had the most issues of all, in that the conf file included is for httpd in CentOS 6 and there are some module differences between 6 and 7. Attached here is a modified version that I was able to get working successfully after a bunch of failures. The file found here (httpd.txt) will need to replace the existing httpd.conf before the webapp will successfully start. If the file is copied to the /home/rconfig directory, the shell commands would be:
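Something along these lines (backing up the original first; assumes httpd.txt was uploaded to /home/rconfig):

```shell
cp /etc/httpd/conf/httpd.conf /etc/httpd/conf/httpd.conf.bak
cp /home/rconfig/httpd.txt /etc/httpd/conf/httpd.conf
systemctl restart httpd
```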

As long as the httpd service starts back up correctly, you should now be good to go with the web portion of the installation, which is pretty point-and-click. Again, for the sake of brevity, just follow along with the rConfig installation guide starting with the “rConfig web installation” section and follow it to the end. We’ll get into setting up devices in a later post, but it is a pretty simple process if you are used to working with networking command lines.

Quick How To: A restart from a previous installation or update is pending.

Just a quickie from an issue I ran into today trying to upgrade vCenter 5.5 to Update 3, or at least the SSO component of it. Immediately after running the installer I was presented with an MSI error: “A restart from a previous installation or update is pending. Please restart your system before you run vCenter Single Sign-On installer.” Trying to be a good little SysAdmin I dutifully rebooted, repeatedly, each reboot having no effect on the issue. I’ve seen different versions of this error in the past so I had an idea of where to go, but it seems to require googling each time. This is caused by data being present in the “PendingFileRenameOperations” value of the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager key. Simply checking this value and clearing out any data within will remove the flag and allow the installation to proceed.
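If you’d rather script the check than dig through regedit, a PowerShell sketch (run elevated) of inspecting and then clearing the value:

```powershell
$key = 'HKLM:\SYSTEM\CurrentControlSet\Control\Session Manager'

# See what rename operations are pending
(Get-ItemProperty -Path $key).PendingFileRenameOperations

# Clear the value (empty the multi-string rather than deleting it)
Set-ItemProperty -Path $key -Name PendingFileRenameOperations -Value ([string[]]@())
```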

In this case I had an HP print driver doing what they do best and gumming up the works. I’d love to say this is the first time I’ve been done in by a print driver but you all would know I’m lying. 🙂

Setting Up Endpoint Backup Access to Backup & Replication 8 Update 2 Repositories

Part of the Veeam Backup & Replication 8 Update 2 release is the ability to let users target repositories specified in your Backup Infrastructure from Endpoint Backup. While this is just one of many, many fixes and upgrades (hello vSphere 6!) in Update 2, this one is important for those looking to use Endpoint Backup in the enterprise: it allows for centralized storage and management, and, equally important, you also get e-mail notifications on these jobs.

Once the update is installed you’ll have to decide which repository or repositories will be available to Endpoint Backup and grant users permission to access them. By default every Backup Repository denies Endpoint Backup access to everyone. To change this for one or more repositories you’ll need to:

  1. Access the Backup Repositories section under Backup Infrastructure, then right click a repository and choose “Permissions.”
  2. Once there you have three options for each repository with regard to Endpoint permissions: Deny to everyone (default), Allow to everyone, and Allow to the following users or groups only. This last option is the most granular and what I use, even if just to select a large group. In the example shown I’ve provided access to the Domain Admins group.
  3. You will also notice that I’ve chosen to encrypt any backups stored in the repository, a nice feature as well of Veeam Backup & Replication 8.

Also of note: no user will be able to select a repository until they have access to it. When setting up the Endpoint Backup job you are given the option to supply credentials at the point where the Veeam server is specified, so you may choose to use alternate credentials there so that the end users themselves don’t actually need access to the destination.

Getting Started with Veeam Endpoint Backup

This week Veeam Software officially released their new Endpoint Backup Free product, introduced at VeeamON last October, after a few months of beta testing. The goal of this product is to allow image-based backup of individual physical machines, namely workstations, with Changed Block Tracking much like users of their more mature Backup & Replication product have been used to in virtualized environments. Further, Veeam has committed that the product is, and should always be, freely available, making it possible for anybody to perform what is frankly enterprise-level backup of their own computers at no cost other than possibly an external USB drive to store the backup data. I’ve been using the product throughout the beta process and in this post I’ll outline some of the options and features and review how to get started with the product.

Also released this month by Veeam is the related Update 2 for Backup & Replication 8. This update allows a Backup Repository to be selected as a target for your Endpoint Backup job after some configuration, as shown here. Keep in mind that if you want to back up to a local USB drive or a network share this isn’t necessary, but if you are already a B&R user this will make managing these backups much easier.

Getting Started with Installation

I have to say Veeam did very well keeping the complexity below the waterline on this one. Once the installer is downloaded and run, the installation choices consist entirely of one checkbox and one button. That’s it. Veeam Endpoint Backup relies on a local SQL Server Express installation to provide backend services, just like the bigger Backup & Replication install, but it is installed on the fly. I have found that if there are pending Windows Updates to complete, the installer will prompt you to restart prior to continuing on to configuring your backup.

Configuring the Job

Once the installation is complete the installer will take you directly into configuring the backup, as long as you are backing up to an external storage device. If you plan to use a network share or a Veeam Backup Repository you will need to skip that step and configure the job once in the application. Essentially you have the following options:

  • What you want to back up
    • Entire computer, which is image-based backup
    • Specific volumes
    • File level backup
  • Where you want to back it up to (each will generate another step or two in the wizard)
    • Local storage
    • A shared folder
    • Veeam Backup & Replication repository
  • Schedule or trigger for backups
    • Daily at a specific time
    • Triggered on lock, log off, or when the backup target is connected


Personally I use one of three setups depending on the scenario. For personal computers I use an external USB drive, triggered when the backup target is available but set so that it never backs up more than once every 24 hours. In the enterprise, where Endpoint Backup handles those few remaining non-virtualized Windows servers, jobs are configured to back up to a Veeam Backup Repository on a daily schedule. Finally, I will soon begin rolling this out to key enterprise laptop users; their backups will go to a B&R Repository as well, but triggered on the user locking the workstation, with a 24-hour hold-down. Keep in mind all of these options can be tweaked via the Configure backup button in the Veeam Endpoint Backup Control Panel.

Creating the Recovery Media

The last step of installing/configuring Endpoint Backup is to create the recovery media. This creates a handy disc or ISO that you can boot off of to do a Bare Metal (or Bare VM :)) recovery of the machine. From an enterprise standpoint, if you are rolling Endpoint Backup out to a fieldful of like machines, I really can’t find a good reason to create more than one of these per model of device. Personally I’ve been creating the ISOs for each model and using them in conjunction with a Zalman VE-300 based external hard drive to keep from having lots of discs/pen drives around. If you are using this to back up physical servers it would also be a first step toward being able to quickly restore to a VM, if that is part of your disaster recovery plan.

One trick I’ve found: I install the product on a VM for no other reason than to create the recovery media. This way I know I’ll have the drivers to boot to it if need be. Further, once you boot to the recovery media you’ll find all kinds of little goodies that make it a good ISO to have available in your bag.

Conclusion

I’ve played with lots of options, both paid and free, over the years for backing up a physical computer on a regular basis, and even setting the general Veeam fanboy type stuff aside, this is the slickest solution for this problem I’ve ever seen. The fact that it is free and integrates into my existing enterprise solution are definitely major added bonuses, but even in a standalone, “I need to make backups of Grandma’s computer” situation it is a great choice. If you find you need a little help getting started, Veeam has created a whole Endpoint Backup forum just for this product. My experience, both here and with other products, is that there is generally a very quick response from very knowledgeable Veeam engineers, developers and end users happy to lend a hand.