Thoughts on Leadership As Told to a 5 Year Old

I am lucky enough to be a father to a wonderful 5-year-old daughter, fresh into her Kindergarten year of school. Recently she came home with the dramatic cry of a 5-year-old, upset that her class has a Leader of the Month award and she didn’t win it. Once the sobbing subsided she got around to asking me how to be a leader, one of those basics of life type questions that all parents know and yet always get thrown by. How do I boil down the essence of leadership to something she not only can understand but can apply herself?

Thanks to the recurring themes of Special Agent Oso I got the idea to try to condense leadership into 3 simple steps. Simplistic I know, but the more I thought about it the more I realized that not only would it get her on the right track but that, to be honest, there are a great number of adults in leadership positions who have varying levels of success with these same basics. So thanks to my daughter and our good friend Oso I present Jim's 3 simple steps to being a good leader.

Step 1! Have A Good Attitude

Seriously, there are so many studies and articles on the effect that a leader's public attitude has on the productivity and efficiency of their team. If those linked articles aren't enough for you, Google it; there are plenty more. I know we all have days when it all falls apart, and I've experienced them myself, but when those days stretch into weeks or months the team you are leading, or even just a part of, is going to start to fall apart as well. Some days it is going to require you to put a good face on what you are internally feeling, but coming in with a bounce in your step and without negativity is a great first step.

One of the realizations that I've had recently is that with any position there comes a time when you have to decide to either have some internal reconciliation with the job you have and make peace with the issues you have with it or decide to make the jump to something new. I'm not saying you shouldn't try to make the things you dislike better, you absolutely should, but staying in something you actively dislike and have lost the willpower to try to fix not only hurts you, it hurts those around you.

Step 2! Listen

So for a 5-year-old trying to get them to listen to anything that isn’t on Netflix or YouTube is a challenge. Guess what? The same is true for me and you and those around you in the workplace from time to time. As framed to her, listening means making sure you pay attention not only to your teachers when they talk (and you absolutely must do this) but also to those around you that may be having issues. The teacher is talking about how to make the perfect “R” but your neighbor is having a problem? If she’s got it down this is a great opportunity to help!

As this applies to us, how many of us go into meetings or conversations with our co-workers with a preset agenda or outcome already in our mind? Worse still is when we try to find the implied meaning of what is unsaid in a conversation. We as workers and leaders do better when we open our minds and actually listen to the words that are being spoken, then ask for clarification as needed. I know I have to work hard to quit relying on my own biases or guessing what the speaker means simply because I don't want to ask, "Ok, I didn't get that, can you expand on this?"

Step 3! Apply What You’ve Learned

As we complete our journey through leadership we have to apply what we've learned in the first two steps. We've spent the last 20 minutes learning how to make that perfect "R" and now we have to practice in class. My 2.0 may have this down, but she sees her neighbor is trying to get help from the teacher. With a great attitude and an understanding of what's expected, this is her time to shine and help out her classmate!

At work this is important too. We have a great attitude now and we really want to be a helper, getting things done, and we've listened to those around us and learned not only about the needs and processes of our organizations but also the needs of our co-workers and staff. With all this ammo we are now in a position to truly make a difference because, regardless of who you are helping, when knowledge gets distributed the potential for greater outcomes goes up. For those of us in traditional IT or other "support departments" this is where the true value lies. If we can't learn, interpret and apply technology to the organization's and staff's needs then we aren't supporting anyone.

Conclusion

So there you have it. Yes I know it’s simple, yes I know there are other factors (ego, time management, etc) that go into it as well but as in all things, you have to master the basics first. How many of you that have made it to this point were shaking your head and considering interactions with others as you read them? How many read these and had a “doh” thought about something you yourself have done? I know I did in thinking about it and there’s nothing wrong with that as long as you learn from it.

Notes on Migrating from an “All in One” Veeam Backup & Replication Server to a Distributed System

One of the biggest headaches I have, and one I regularly hear about from other Veeam Backup & Replication administrators, is backup server migrations. In the past I have always gone with the "All-in-One" approach: one beefy physical server with Veeam installed directly on it, housing all the roles. This is great! It runs fast and it's a fairly simple system to manage, but the problem is that every time you need more space or you're upgrading an old server you have to migrate all the parts and all the data. With my latest backup repository upgrade I've decided to go with a bit more of a distributed architecture, moving the command and control part out to a VM with an integrated SQL Server instance and letting the physical box handle the repository and proxy functions. This produces a best-of-both-worlds setup: the speed and simplicity of all the data movement and VM access happening from the single physical server, while the setup and brains of the operation reside in a movable, upgradable VM.

This post is mostly composed of my notes from the migration of all parts of VBR. The best way to think of this is to split the migration into 3 major parts: repository migration, proxy migration, and VBR server migration. These notes are fairly high level, not going too deep into the individual steps. Migrations are complex, so if any of these parts don't make sense to you or don't provide enough detail I would recommend that you give the fine folks at Veeam support a call to ride along as you perform your migration.

I. Migrating the Repository

  1. Set up one or more new repository servers
  2. Add a new repository to your existing VBR server, pointing to a separate folder (e.g. D:\ConfigBackups) on the new repository server, exclusively for Configuration Backups. These cannot be included in a SOBR. Change the Config Backup settings (File > Config Backup) to point to the new repository. This is also a good time to run a manual Config Backup while you are there to snapshot your existing setup.
  3. Create one or more new backup repositories on your new repository server(s) and add them to your existing VBR server configuration.
  4. Create Scale Out Backup Repository (SOBR), adding your existing repository and new repository or repositories as extents.
  5. All of your backup jobs should automatically be changed to point to the SOBR during the setup but check each of your jobs to ensure they are pointing at the SOBR.
  6. If possible go ahead and do a regular run of all jobs or wait until your regularly scheduled run.
  7. After successful run of jobs put the existing extent repository into Maintenance Mode and evacuate backups.
  8. Remove the existing repository from the SOBR configuration and then from the Backup Repositories section. At this point no job data should actually be flowing through your old server. From a data locality standpoint it is perfectly fine for a SOBR to contain only a single extent.
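If you would rather script steps 3 and 4 than click through the console, the same thing can be done with the Veeam PowerShell snap-in. This is just a minimal sketch; the repository names "OldRepo" and "NewRepo01" and the SOBR name "SOBR01" are placeholders for whatever is registered in your own console, and the cmdlets shown are from the v9.x-era snap-in.

    # Load the Veeam snap-in and connect to the backup server
    Add-PSSnapin VeeamPSSnapin
    Connect-VBRServer -Server "localhost"

    # Grab the old and new repositories that will become SOBR extents
    $extents = Get-VBRBackupRepository | Where-Object { $_.Name -in "OldRepo", "NewRepo01" }

    # Create the Scale-Out Backup Repository with the data locality placement policy
    Add-VBRScaleOutBackupRepository -Name "SOBR01" -Extent $extents -PolicyType DataLocality

I still did the maintenance mode and evacuation steps (7 and 8) through the GUI as described above.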

II. Migrate the Backup and Guest Interaction Proxies

  1. Go to each of your remaining repositories and set proxy affinity to the new repository server you have created. If you have previously scaled out your backup proxies then you can ignore this step.
  2. Under Backup Proxy in Backup Infrastructure remove the Backup Proxy installation on your existing VBR server.  Again, if possible you may want to run a job at this point to ensure you haven’t broken anything in the process.
  3. Go to each of your backup jobs that are utilizing the Guest Processing features. Ensure the guest interaction proxy setting at the bottom of the screen is set to your new repository server, to automatic selection, or, if you have scaled out, to another server in your infrastructure.

III. Migrate the Veeam Backup & Replication Server

  1. Disable all backup, Backup Copy and Agent jobs on your old server that have a schedule.
  2. Run a Config Backup on the old server. If you have chosen to encrypt your configuration backup, the process below is going to be a great test of whether you remembered or documented the password. If you don't know what this is, go ahead and change it under File > Manage Passwords before running this final configuration backup.
  3. Shutdown all the Veeam services on your existing backup server or go ahead and power it down. This ensures you won’t have 2 servers accessing the same components.
  4. If not already done, create your new Veeam Backup and Replication server/VM. Be sure to follow the guidelines on sizing available in the Best Practices Guide.
  5. Install Veeam Backup, ensuring that you use the same version and update as production server. Safest bet is to just have both patched to the latest level of the latest version.
  6. Add a backup repository on your new server pointing to the Config Backup repository folder you created in step 2 of the Migrating the Repository section.
  7. Go to Config Backup and hit the “Restore” button.
  8. As the wizard begins choose the Migrate option.
  9. Change the backup repository to the repository created in step 6 and choose your latest backup file, which should be the one created in step 2 above.
  10. If encrypted, specify your backup password and then choose to overwrite the existing VeeamBackup database you created when you installed Veeam in step 5. The defaults should do this.
  11. Choose any Restore Options you may want. I personally chose to check all 4 of the boxes but each job will have its own requirements.
  12. Click the Finish button to begin the migration. From this point, if any screens or messages pop up about errors or issues in processing it is a good idea to go ahead and contact support. All this process does is move the database from the old server to the new one, changing any references to the old server along the way. If something goes wrong it is most likely going to have a cascade effect and you are going to want them involved sooner rather than later.

IV. Verification and Cleanup

  1. Now that your server has been migrated it’s a good idea to go through all the tabs in your Backup Infrastructure section, ensuring that all your information looks correct.
  2. Go ahead and run a Config Backup at this point. That’s a nice low-key way to ensure that all of the basic Veeam components are working correctly.
  3. Re-enable your disabled backup, backup copy and Agent jobs. If possible go ahead and run one and ensure that everything is hunky dory there.

Gotchas

This process, when working correctly, is extremely smooth. I'll be honest and admit that I ran into what I believe is a new bug in the VBR Migration wizard. We had a few SureBackup jobs that had been set up and, while they had been run, they had never been modified since install. When this happens VBR records the job_modified field of the job's configuration database record as NULL. During the migration the wizard left those fields blank in the restored database, which is evidently something that is checked when you start the Veeam Backup Service. While the service in the basic services.msc screen appears to be running, under the hood you are only getting partial functionality. In my case support was able to go in and modify the database and restore the NULL data in that field, but if you think you might have this issue it might be worth changing something minor on all of your jobs before the final configuration backup.

Conclusion

If you've made it this far, congrats! You should be good to go. While the process seems daunting it really wasn't all that bad. If I hadn't run into an issue it wouldn't have been bad at all. The good news is that at this point you should be able to scale your backup system much more easily, without the grip and rip that used to be required.

Cisco Voice Servers Version 11.5 Could Not Load modules.dep

About 6 months ago we updated 3/4 of our Cisco telephony environment from 8.5 to 11.5. The only reason we didn't do it all is because UCCX 11.5 wasn't out yet, so it went to 11. While there were a few bumps in the road (resizing VMs, some COP files, etc.) the update went well. Unfortunately once it was done we started having a glorious issue where, after a reboot, the servers sometimes failed to boot, presenting "FATAL: Could not load /lib/modules/2.6.32-573.18.1.el6.x86_64/modules.dep: No such file or directory". Any way you put it, this sucked.

The first time this happened I called TAC and, while they had seen it, they had no good answer except to rebuild the VM and restore from backup. Finally, after the 3rd time (approximately 3 months after install), the bug had been officially documented and (yay) it included a workaround. The good news is that the underlying issue has been fixed in 11.5(1.11900.5) and forward, so if you are already there, no problems.

The issue lies with the fact that the locked-down build of RHEL 6 that the Cisco voice server platforms are built on doesn't handle VMware Tools updates well. It's all good when you perform a manual update from the CLI and use the "utils vmtools refresh" utility, but many organizations, mine included, choose to make life easier and enable vCenter Update Manager to automatically upgrade the VMware Tools each time a new version is available and the VM is rebooted.

So how do you fix it? While the bug ID has the fix in it, if you aren’t a VMware regular they’ve left out a few steps and it may not be the easiest thing to follow. So here I’m going to run down the entire process and get you (and chances are, myself when this happens in the future) back up and running.

0. Go out to the cisco.com site and download the recovery CD for 11.5. You should be able to find that here, but if not, or if you need a different version, browse through the downloads to Downloads Home > Products > Unified Communications > Call Control > Unified Communications Manager (CallManager) > Unified Communications Manager Version 11.5 > Recovery Software. Once done, upload the ISO to any datastore available to the host your failing VM resides on.
1. If you’ve still got the VM running, shut it down by right clicking the VM>Power>Power Off in the vCenter Web UI or the ESXi embedded host client.
2. Now we need to make a couple of modifications to the VM's settings: 1) attach the downloaded ISO file and check the "Connected at boot" box, and 2) under VM Options > Boot Options choose to "Force BIOS setup" at next boot. By default VMs do not treat attached ISOs as the first boot device. Once both of these are done it's time to boot the VM.
3. I personally like to launch the VMware Remote Console first and then boot from there, that way I've already got the screen up. After you power on, the BIOS in a VM is the same old Phoenix BIOS we all know and love. Simply tell the VM to boot to CD before the hard drive, move to Exit, choose "Save and Exit" and your VM will boot directly into the recovery ISO.
4. Once you get to the Recovery Disk menu screen we need to get out to a command prompt. To do this hit Alt-F2 and you'll be presented with a standard bash prompt.
5. The root cause of all of this is that the initramfs file is improperly sized after an automatic upgrade of VMware Tools has been processed. Now that we have our prompt we first need to verify that we are actually seeing the issue we expect. To do this run the command "find / -name initramfs*". This should produce the full path and filename of the file. To get the size of that file, run an ls -lh against it; in my example the full command would be "ls -lh /mnt/part1/boot/initramfs-2.6.32-573.18.1.el6.x86_64.img". If you aren't particularly used to the Linux CLI, once you get past …initr you should be able to hit Tab to autocomplete. This should show you that the file is incorrectly sized, somewhere between 11 and 15 MB.
6. Now we need to perform a chroot on the directory that contains the boot objects. In most cases this should simply be "chroot /mnt/part1".

7. Finally we need to manually re-run the VMware Tools installer to get the file properly sized. The installer is included locally on the Recovery Disk, so just run the command "/usr/bin/vmware-config-tools.pl -d". There are various steps throughout the process where it is going to ask for input; unless you know you have a reason to differ just hit Enter at each one until it completes.

Once the VMware Tools installation is done, arrow up to the command where you checked the size of the initramfs…img file above and rerun it. You should now see that the file size has changed to 24 MB or so.

8. Now we just need to do a little cleanup before we reboot. You need to make sure you go into Settings for your VM and tell it not to connect the ISO at boot. Once you make that change you should be able to flip back over to your console and simply type "reboot" or "shutdown -r 0" to reboot back to full functionality.

Windows Server Deduplication, Veeam Repositories, and You!

Backup, among other things, is very good at creating multiple copies of giant buckets of data that don't change much and tend to sit for long periods of time. Since we are in modern times, we have a number of technologies to deal with this problem, one of which is deduplication, and there are quite a few implementations of it. Microsoft has had a server-based storage version since Windows Server 2008 R2 that has gotten better with each release, but like any technology it still has its pitfalls to be mindful of. In this post I'm going to look at a very specific use case of Windows Server deduplication, using it as the storage beneath your Veeam Backup & Replication repositories, and cover some basic tips to keep your data healthy and your performance optimized.

What is Deduplication Anyway?

For those that don't work with it much, imagine you had a copy of War and Peace stored as a Word document with an approximate file size of 1 MB. Each day for 30 days you go into the document, change 100 KB worth of the text and save it as a new file on the same volume. With a basic file system like NTFS this would result in you having 31 MB tied up in the storage of these files: the original and then the full file size of each additional copy.

Now let's look at the same scenario on a volume with deduplication enabled. The basic idea of deduplication is to replace identical blocks of data with very small pointers back to a common copy of the data. In this case, after 30 days, instead of having 31 MB of data sitting on disk you would have approximately 4 MB: the original 1 MB plus roughly 3 MB worth of 100 KB incremental changes. As far as the user experience goes, the user just sees the 31 files they expect to see and they open like they normally would.

So that's great when you are talking about a 1 MB file, but what if we are talking about file storage in the virtualization world, one where we are dealing with terabytes of data and multi-gigabyte changes daily? If you think about the basic layout of a computer's disk it is very similar to our working copy of War and Peace: a base system that rarely changes, things we add that then sit forever, and a comparatively few things we change throughout the course of our day. This is why deduplication works great for virtual machine disk files and backup files, as long as you set it up correctly and maintain it.
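If you want to see what this looks like on a real volume that already has the feature enabled, the Windows Deduplication PowerShell module will show you the savings; the R: drive letter below is just an example.

    # Show space savings for a deduplicated volume, then the overall dedup status
    Get-DedupVolume -Volume "R:" | Select-Object Volume, SavedSpace, SavingsRate
    Get-DedupStatus -Volume "R:"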

Jim’s Basic Rules of Windows Server Deduplication for Backup Repositories

I have repeated these a few times as I've honed them over the years. If you feel like you've read or heard this before, it's been part of my VeeamON presentations in both 2014 and 2015 as well as part of blog posts both here and on 4sysops.com. In any case, here are the basics of caring for and feeding your deduplicated repositories.

  1. Format the Volume Correctly. Large-scale deduplication is not something that should be done without getting it right from the start. Because backup files, or virtual disks in general for that matter, are large files, we always want to format the volume through the command line so we can put some modifiers in there. The two attributes we really want to look at are /L and /A:64K. The /L is an NTFS-only attribute which overrides the default (small) size of the file record. The /A controls the allocation unit size, setting the block size. So for a given partition R: your format string may look like the example shown after this list.
  2. Control File Size As Best You Can. Windows Server 2012 R2 deduplication came with a pretty stringent recommendation for the maximum file size to use with deduplication: 1 TB. With traditional backup files, blowing past that is extremely easy to do when you have all of your VMDKs rolled into a single backup file, even after compression. While I have violated that recommendation in the past without issue, I've also heard many horror stories of people who found themselves with corrupted data because of it. Your best bet is to enable Per-VM backup chains on your Backup Repository (Backup Infrastructure > Backup Repositories > [REPONAME] > Repository > Advanced).
  3. Schedule and Verify Weekly Defragmentation. While Windows schedules weekly defragmentation jobs on all volumes by default these days, the one and only time I came close to getting burnt by using dedupe was when that job was silently failing every week and the fragmentation became too much. I found out because my backup job began failing due to a corrupted backup chain, but after a few passes of defragmenting the drive it was able to continue without error and test restores all worked correctly. For this reason I do recommend keeping the weekly job, but make sure that it is actually happening.
  4. Enable Storage-Level Corruption Guard. Now that all of these things are done we should be good, but a system left untested can never be relied upon. With Veeam Backup & Replication v9 we now have the added tool on our backup jobs of being able to do periodic backup corruption checks. When you are doing anything even remotely risky like this it doesn’t hurt to make sure this is turned on and working. To enable this go to the Maintenance tab of the Advanced Storage settings of your job and check the top box. If you have a shorter retention time frame you may want to consider setting this to weekly.
  5. Modify the Deduplication Schedule To Allow for Synthetic Operations. Finally, the last recommendation has more to do with performance than with integrity of data. If you are going to be doing weekly synthetic fulls, I've found performance is greatly decreased if you leave the default file age before deduplication setting (3 or 5 days depending on the version of Windows) in place. This is because the synthetic operation has to rehydrate each of the deduplicated files before it can run. Instead, set the minimum file age to 8 days so the files are already done with synthetic processing before they are deduplicated (see the commands after this list). For more information on how to enable deduplication as well as how to modify this setting see my blog over on 4sysops.com.
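To put rules 1 and 5 into actual commands, here is roughly what I run from an elevated prompt. R: is a placeholder for your repository volume and the /Q quick-format switch is my own addition, so adjust to taste.

    # Rule 1: format the repository volume with large file records (/L) and a 64 KB allocation unit
    format R: /FS:NTFS /L /A:64K /Q

    # Rule 5: enable deduplication on the volume, then push the minimum file age out to 8 days
    # so weekly synthetic full operations are finished before the files get optimized
    Enable-DedupVolume -Volume "R:"
    Set-DedupVolume -Volume "R:" -MinimumFileAgeDays 8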

Well with that you now know all I know about deduplicating VBR repositories with Windows Server. Although there is currently a bug in the wild with Server 2016 deduplication, with a fix available, the latest version of Windows Server shows a lot of promise in its storage deduplication abilities. Among other things it pushes the file size limit up and does quite a bit to increase performance and stability.

Fixing Domain Controller Boot in Veeam SureBackup Labs

We've been dealing with an issue for the past few runs of our monthly SureBackup jobs where the Domain Controller boots into Safe Mode and stays there. This is no good, because without the DC booting normally you have no DNS, no Global Catalog, or any of the other Domain Controller goodness for the rest of your servers launching behind it in the lab. All of this seems to have come from a change in how domain controller recovery is done in Veeam Backup & Replication 9.0, Update 2, as discussed in a post on the Veeam Forums. Further, I can verify that if you call Veeam Support you get the same answer as outlined here, but there is no public KB about the issue. There are a couple of ways to deal with this, either each time or permanently, and I'll outline both in this post.

Booting into Safe Mode is totally expected, as a recovered Domain Controller object should boot into Directory Services Restore Mode the first time. What should happen next, as long as you have the Domain Controller box checked for the VM in your application group setup, is that once booted Veeam modifies the boot setup and reboots the system before presenting it to you as a successful launch; that step is what's missing here. This in part explains why checking the Domain Controller box lengthens the allowed boot time from 600 seconds to 1800 seconds by default.

On the Fly Fix

If you are like me and already have the lab up and need to get it fixed without tearing it back down, you simply need to clear the Safe Boot bit and reboot from the Remote Console. I prefer to do it through the GUI:

  1. Make a Remote Console connection to the lab-booted VM and log in
  2. Go to Start, Run and type “msconfig”
  3. Click on the Boot tab and uncheck the "Safe boot" box. You may notice that the Active Directory repair option is selected
  4. Hit Ok and select to Restart

Alternatively, if you are command-line inclined, a method is available via Veeam KB article 1277 where you just run these commands:
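The commands boil down to clearing the safe boot value from the boot configuration and restarting. From memory they are along these lines, but double-check them against KB 1277 itself:

    bcdedit /deletevalue safeboot
    shutdown -r -t 00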

After the second command runs the VM will reboot itself into normal operation. Just to be clear, either of these fixes is temporary. If you tear down the lab and start it back up from the same point in time you will experience the same issue.

The Permanent Fix

The problem with either of the above methods is that, while they will get you going on a lab that is already running, about 50% of the time I find that once I have my DC up and running well I have to reboot all the other VMs in the lab to fix dependency issues. By the time I'm done with that I could have just relaunched the whole thing. To permanently fix the root issue you can revert the way DCs are handled by creating a single registry entry (the specific key is covered in the forum post linked above) on the production copy of each Domain Controller you run in the lab.

Once you have this key in place on your production VM you won’t have any issues with it going forward as long as the labs you launch are from backups made after that change is put in use. My understanding is this is a known issue and will eventually be fixed but at least as of 9.5 RTM it is not.

Installing .Net 3.5 on Server 2012/ Windows 8 and above

Hi all, just a quick post to serve as both a reminder to me and hopefully something helpful for you. For some reason Microsoft has decided to make installing .Net 3.5 on Windows Server 2012 (or Windows 8 on the client side) and later harder than it has to be. While it is included in the regular Windows Features GUI, it is not included in the on-disk sources for features to be installed automatically. In a perfect world you just choose to source from Windows Update and go about your day, but in my experience this is a hit-or-miss solution, as many times for whatever reason it errors out when attempting to access it.

The fix is to install via the Deployment Image Servicing and Management tool, better known as DISM, and provide a local source for the files. .Net 3.5 is included on every modern Windows CD/ISO under the sources\sxs directory. When I do this installation I typically use the following command from an elevated command line or PowerShell window:
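Assuming your install media is mounted as D:, the command looks like this; swap the drive letter in /Source for wherever your media actually lives.

    Dism /online /enable-feature /featurename:NetFx3 /All /Source:D:\sources\sxs /LimitAccess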

When done, DISM reports that the operation completed successfully. Pretty simple, right? While this is all you really need to know to get it installed, let's go over what all these parameters are that you just fed into your computer.

  • /online – This refers to the idea that you are changing the installed OS as opposed to an image
  • /enable-feature – this is the CLI equivalent of choosing Add Roles and Features from Server Manager
  • /featurename – this is where we are specifying which role or feature we want to install. This can be used for any Windows feature
  • /all – here we are saying we not only want the base component but all components underneath it
  • /Source:d:\sources\sxs – This is specifying where you want DISM to look for media to install for. You could also copy this to a network share, map a drive and use it as the source.
  • /LimitAccess – This simply tells DISM not to query Windows Update as a source

While DISM is available both at the command line and in PowerShell, there is a PowerShell-specific command that works here as well and is maybe a little easier to read, but I tend to use DISM just because it's what I'm used to. To do the same in PowerShell you would use:
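Mirroring the DISM switches above with the DISM PowerShell module (again assuming the media is mounted as D:):

    Enable-WindowsOptionalFeature -Online -FeatureName "NetFx3" -All -Source "D:\sources\sxs" -LimitAccess

On a server you can also get there with Install-WindowsFeature NET-Framework-Core -Source D:\sources\sxs, which does the same thing through the Server Manager cmdlets.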


VMware Tools Security Bug and Finding which VMware Tools components are installed on all VMs

Just a quick post related to today's VMware security advisories. VMware released a pair of advisories today, CVE-2016-5330 and CVE-2016-5331, and while both are nasty their scopes are somewhat limited. The 5331 issue is only applicable if you are running vCenter or ESXi 6.0 or 6.0 U1; Update 2 patches the bug. The 5330 issue is limited to Windows VMs that are running VMware Tools and have the optional HGFS component installed. To find out if you are vulnerable, here's a PowerCLI script to get all your VMs and list the installed components. Props to Jason Shiplett for giving me some assistance on the code.

While the output is still a little rough, it will get you there. Alternatively, if you are just using this script for the advisory listed, you can change where-object { $_.Name -match $componentPattern } to where-object { $_.Name -match "vmhgfs" }. This script is also available on GitHub.

The Basics of Network Troubleshooting

The following post is something I wrote as an in-house primer for our help desk staff. While it is a bit more entry-level than a lot of the content here, I find more and more that picking and reliably following a troubleshooting methodology is somewhat of a lost art. If you are just getting started in networking or are troubleshooting connectivity issues at your home or SMB, this would be a great place to start.

We often get issues which are reported as application issues but end up being network related. There are a number of steps and logical thought processes that can make even the most difficult network issues easy to troubleshoot. The purpose of this post is to outline many of the basic steps of troubleshooting network issues; past that, it's time to reach out and ask for assistance.

  1. Understand the basics of OSI model based troubleshooting

    The conceptual idea of how a network operates within a single node (computer, smartphone, printer, etc.) is defined by something called the OSI reference model. The OSI model breaks down the operations of a network into 7 layers, each of which is reliant on success at the layers below it (inbound traffic) and above it (outbound traffic). The layers (with some corresponding protocols you’ll recognize) are:

    7. Application: app needs to send/receive something (HTTP, HTTPS, FTP, anything that the user touches and begins/ends network transmission)
    6. Presentation: formatting & encryption (VPN and DNS host names)
    5. Session: interhost communication (nothing to see here:))
    4. Transport: end to end negotiations, reliability (the age old TCP vs. UDP debate)
    3. Network: path and logical addressing (IP addresses & routing)
    2. Data Link: physical addressing (MAC addresses & switches)
    1. Physical: physical connectivity (Is it plugged in?)

    The image below is a great cheat card for keeping these somewhat clear:


    Image source: http://www.gargasz.info/osi-model-how-internet-works/

    How OSI is used today is as a template for how to understand and thus troubleshoot networking issues. The best way to troubleshoot any IT problem that has the potential to have a network issue is from the bottom of the stack upwards. Here are a few basic steps to get you going with troubleshooting.

  2. Is it plugged in?

    This may seem like a smart ass answer, but many times this is just the case. Somebody’s unplugged the cable or the clip has broken off the Cat6 cable and every time somebody touches the desk it wiggles out. Most of the time you will have some form of a light to tell you that you have both connectivity to the network (usually green) and are transmitting on the network (usually orange).

    This represents Layer 1 troubleshooting.

  3. Is the network interface enabled?

    So the cable is in and maybe you've tried to plug the same cable from the wall into multiple devices; you get link lights on other devices but no love on the device you need. This may represent a Data Link issue where the Network Interface Card (NIC) has been disabled in the OS. From the client standpoint this would be within Windows or Mac OSX or whatever; on the other side, it's possible the physical interface on the switch that represents the other end of the wire has been disabled. Check out the OS first and then reach out to your network guy to check the switch if need be.

  4. Can the user ping it?

    Moving up to the Network layer, the next step is to test if the user can ping the device which they are having an issue with. Have the user bring up a command prompt and ping the IP address of the far end device.

  5. Can you ping it?

    By the very nature of you being an awesomesauce IT person you are going to have more ability to test than the user. To start with, see if you can ping it from your workstation. This will rule out user error and potentially any number of other issues as well. Next, if you can't, are you on the same subnet/VLAN as the device you are trying to access? If not, try to access a device in the same subnet as the endpoint device you are testing and ping from there. That may give you some insight into issues with default gateway configuration or underlying routing (aka Layer 3) issues.

  6. Can you ping it by name?

    Let’s say you can ping it by IP address from all of the above. If the user is trying to access something by name, say server1.foo.com have them ping that as well. It’s possible that while the lower three layers of the stack are operating well, something has gone awry with DNS or other forms of naming that happen at the Presentation layer.

  7. Application firewalls and the like

    Finally we’ve reached the top of the stack and we need to take a look at the individual applications. So far you’ve verified that the cable’s plugged in, the NICs on both sides are enabled and you can ping between the user and the far device by both IP and hostname but still the application won’t work so now’s when we look at the actual application and immediately start rebooting things.

    Just kidding 🙂 Now we need to look at the services that are being presented to the network. If we are troubleshooting an e-mail issue, are the services running on the server and can we connect to them? When talking about TCP/IP-based traffic (meaning all traffic), all application layer traffic occurs over either a TCP or UDP protocol port. This isn't something you physically plug in, but rather a logical slot that an application is known to talk on, kind of like a CB radio channel. For example SMTP typically runs on TCP port 25, FTP on 21, and printing usually on 9100. If you are troubleshooting an e-mail issue, bring up a command prompt and try to connect to the device via telnet, like "telnet server1.foo.com 25" (see the PowerShell alternative after this list). If the SMTP server is running on that port at the far end it will answer; if not, the connection will time out.

  8. Call in reinforcements

    If you've gotten this far it's going to take a combination of multiple brains and probably some application owners/vendors to untangle the mess those crazy users have made. Reach out to your network and application teams or call in vendor support at this point.
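As a quick follow-up to the telnet test in step 7: newer Windows builds don't install the telnet client by default, so a handy alternative is PowerShell's Test-NetConnection cmdlet. The host name and port below are just the example values from above.

    # TcpTestSucceeded comes back True only if something is listening on TCP 25 at the far end
    Test-NetConnection -ComputerName server1.foo.com -Port 25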

Network troubleshooting isn’t hard, you just have to know where to start.

Quieting the LogPartitionLowWaterMarkExceeded Beast in Cisco IPT 9.0.x Products

As a SysAdmin I'm used to waking up, grabbing my phone and seeing the 20 or so e-mails that the various systems have sent me overnight; it gives me an idea of how the day will go and what I need to start with. Every so often though you get that morning where the 20 becomes 200 and you just want to roll over and go back to bed. This morning I had about 200, the vast majority of which were from my Cisco Unified Contact Center Express server with the subject "LogPartitionLowWaterMarkExceeded." Luckily I've had this before and know what to do with it, but on the chance you are getting it too here's what it means and how to deal with it in an efficient manner.

WTF Is This?!?

Or at least that was my response the first time I ran into this. If you are a good little voice administrator, one of the first things you do when installing your phone system, or taking one over due to a job change, is set up the automatic alerting capability in the Cisco Unified Real Time Monitoring Tool (or RTMT, you did install that, right?) so that when things go awry you know, in theory, before the users do. One of the downsides to this system is that alerting is either on or off, meaning whatever log events are saved within the system are automatically e-mailed at the same frequency.

This particular error message is the by-product of a bug (CSCul18667) in the 9.0.x releases of all the Cisco IP Telephony products in which the JMX logs produced by the then-new Unified Intelligence Center don't get automatically deleted to maintain space on the log partition. While this has long since been fixed, phone systems are one of those things that don't get updated as regularly as they should, so it is still an issue. The resulting effect is that when you reach the "warning" level of partition usage (the Low Water Mark) it starts logging every 5 minutes that the level has been reached.

Just Make the Screaming Stop

Now that we know what the issue is how do we fix it?

  1. Go back to the RTMT application and connect to the affected component server. Once there, navigate to the Trace & Log Central tool and then double-click on the Remote Browse option.
  2. Once in the Remote Browse dialog box choose "Trace Files", then select just one service, Cisco Unified Intelligence Center Serviceability Service, and click Next, Next, Finish.
  3. Once it is done gathering all of the log files it will tell you your browse is ready. You then need to drill all the way down through the menu on each node until you reach "jmx." Once you double-click on jmx you will see the bonanza of logs. It is best to just click one, hit Ctrl+A to select all and then hit the Delete button.
  4. After you hit Delete it will probably take quite a while to process. You will then want to click on the node name and hit refresh to check, but when done you should be left with just the currently active log file. Afterwards, if you have multiple nodes of the application you will need to repeat this process for each of them.

And that's it really. Once done, the e-mail bleeding will stop and you can go about the other 20 things you need to get done today. If you are experiencing this, and if possible, I would recommend being smarter than me and just updating your CIPT components to a version newer than 9.0 (11.5 is the current release), something I am hoping to begin the process of in the next month or so.

Updating the Photo Attributes in Active Directory with Powershell

Today I got to experience the joy of once again needing to get caught up on importing employee photos into the Active Directory photo attributes, thumbnailPhoto and jpegPhoto. While this isn't exactly the most necessary thing on Earth, it does make working in a Windows environment "pretty", as these images are used by things such as Outlook, Lync and Cisco Jabber, among others. In the past the only way I've ever known how to do this is by using the AD Photo Edit Free utility, which, while nice, tends to be a bit buggy and requires lots of repetitive action as you manually update each user for each attribute. This year I've given myself the goal of 1) finally learning PowerShell/PowerCLI to at least the level of mild proficiency and 2) automating as many tasks like this as possible. While I've been dutifully working my way through a playlist of great Pluralsight courses on the subject, I've had to live dangerously a few times to accomplish tasks like this along the way.

So, long story short, with some help along the way from Googling things I've managed to put together a script that does the following:

  1. Look in a directory passed to the script via the jpgdir parameter for any images with the file name format <username>.jpg
  2. Do an Active Directory search in an OU specified in the ou parameter for the username included in the image name. This parameter needs to be the full DN path (ex. LDAP://ou=staff,dc=foo,dc=com)
  3. If the user is found then it will make a resized copy of the image file into the “resized” subdirectory to keep the file sizes small
  4. Finally, the resized image is set as both the thumbnailPhoto and jpegPhoto attributes for the user's AD account

So your basic usage would be .\Set-ADPhotos.ps1 -jpgdir "C:\MyPhotos" -OU "LDAP://ou=staff,dc=foo,dc=com". This can easily be set up as a scheduled task to fully automate the process. In our case I've got the person in charge of creating security badges feeding the folder with pictures as they are taken for the badges, and then this runs automatically at 5 in the morning each day.

All that said, the full script lives on GitHub (linked at the bottom of this post).
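As a rough sketch of what it does (using the ActiveDirectory module, a fixed 96x96 resize and no error handling, which are my simplifications rather than exactly what the published script does) it looks something like this:

    param(
        [Parameter(Mandatory=$true)][string]$jpgdir,
        [Parameter(Mandatory=$true)][string]$ou   # e.g. "LDAP://ou=staff,dc=foo,dc=com"
    )

    Import-Module ActiveDirectory
    Add-Type -AssemblyName System.Drawing

    # Working folder for the resized copies
    $resizedDir = Join-Path $jpgdir "resized"
    if (-not (Test-Path $resizedDir)) { New-Item -ItemType Directory -Path $resizedDir | Out-Null }

    # Get-ADUser wants a plain DN, so strip the LDAP:// prefix
    $searchBase = $ou -replace "^LDAP://", ""

    foreach ($photo in Get-ChildItem -Path $jpgdir -Filter "*.jpg") {
        $username = [System.IO.Path]::GetFileNameWithoutExtension($photo.Name)

        # Only act on photos that match an account in the specified OU
        $user = Get-ADUser -Filter "SamAccountName -eq '$username'" -SearchBase $searchBase
        if (-not $user) { continue }

        # Resize the picture to keep the attribute payload small
        $src = [System.Drawing.Image]::FromFile($photo.FullName)
        $dst = New-Object System.Drawing.Bitmap 96, 96
        $gfx = [System.Drawing.Graphics]::FromImage($dst)
        $gfx.DrawImage($src, 0, 0, 96, 96)
        $resizedPath = Join-Path $resizedDir $photo.Name
        $dst.Save($resizedPath, [System.Drawing.Imaging.ImageFormat]::Jpeg)
        $gfx.Dispose(); $dst.Dispose(); $src.Dispose()

        # Write the same bytes to both photo attributes
        $bytes = [System.IO.File]::ReadAllBytes($resizedPath)
        Set-ADUser -Identity $user -Replace @{ thumbnailPhoto = $bytes; jpegPhoto = $bytes }
    }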


Did I mention that I had some help from the Googles? I was able to grab some great help (read Ctrl+C, Ctrl+V) in learning how to piece this together from a couple of sites:

The basic idea came from https://coffeefueled.org/powershell/importing-photos-into-ad-with-powershell/

The Powershell Image Resize function: http://www.lewisroberts.com/2015/01/18/powershell-image-resize-function/

Finally I’ve been trying to be all DevOpsy and start using GitHub so a link to the living code can be found here: https://github.com/k00laidIT/Learning-PS/blob/master/Set-ADPhotos.ps1