Tech Conferences in Las Vegas for Newbies

As June is here we are deep into tech conference season already, so I find myself somewhat behind the curve with this post, but here we are. I am extremely fortunate to have an employer who understands the value of tech conferences for IT professionals, and I've been able to attend at least one each year since 2014, going back and forth between CiscoLive and VMworld with a sprinkling of VeeamON and more local events such as vBrisket and VMUGs for good measure. As a "Hyper-Converged Admin" I choose which "biggie" conference to attend each year by looking at where my projects land; last year was CiscoLive due to a lot of voice and security projects, this year VMworld due to lots of updates coming down the pike there and a potential VDI project.

The problem when you have a conference with north of 25,000 attendees is that you are limited in where you can hold it. While Cisco does tend to move around some, VMworld has typically been in either San Francisco or Las Vegas. With the Moscone Center closed again this year for renovation, pretty much all of the big shows are back in Las Vegas, with both CiscoLive and VMworld at Mandalay Bay once again, and AWS re:Invent and Dell/EMC World in town as well. If you haven't been to one of these tech conferences before, or to Las Vegas, both can be exciting and overwhelming, but with a little help from others and some decent tips neither is that big of a deal.

Las Vegas Basics

For a small town guy like me Las Vegas is a very cool town, but a tiring one. The common thread I feel, and have heard others voice as well, is that Las Vegas is deceptively large because all of the hotels on the Strip are so massive. While you can see from your Mandalay Bay window that New York New York is just the next block over, it is probably about a mile away on foot. This matters because each conference's hotel list shows lots of options, but getting to that 8 AM session may require a 30+ minute walk, or an even longer shuttle ride, if you choose to stay at the Cosmopolitan (my personal favorite of all Las Vegas hotels but prohibitively far away). Couple that with temperatures in the triple digits during summer and proximity becomes more important.

Hotel Choices

So the first tip for any of these conferences is to get a hotel as close as possible. For CiscoLive and VMworld keep in mind that you can move freely between the Mandalay Bay, the Delano, the Luxor and the convention center without ever setting foot outside, so I would highly recommend trying to be in one of these. If you are booking late and the conference is out of rooms, it's worth trying to book directly through the hotel, as they don't let the events have the whole place. That said, you are still going to be in for a hike; I stayed in the Mandalay Bay last year and it was approximately 1,800 steps from my room to the entrance to the conference.

Many of the vendor folks who seemingly live their lives at these events like to opt for the nearby Marriott Courtyard Las Vegas South, or the Holiday Inn at Desert Club Resort for those who like a kitchen. From either of these you're a quick Uber or Lyft away from the convention center entrance, but you don't have to deal with the hustle and bustle of staying on the Strip if you don't want to.

Getting Around

Speaking of Uber and Lyft, getting around without walking is a consideration as well, both for the daily commute and for the various events. Traffic on the actual Strip is pretty impressive from the afternoons into the early morning, so to be honest I've not heard good things about relying on the conference shuttles when they're available. Further, I've heard many complaints from locals who drive in and try to find parking.

Where that leaves you is 1) ride sharing services, 2) the monorails, or 3) walking. Uber is nice because the drivers are pretty knowledgeable about routing you around traffic regardless of the time of day. Keep in mind when it comes to Mandalay Bay that there are actually two defined Uber pickup/drop-off spots, one outside the convention center and another around the valet area underneath the hotel drop-off area. These are impressively far apart, so be sure you know where you want to be picked up before you request a ride.

The monorails are also nice but limited in reach. For those of you going to CLUS this is a good way to get to the Customer Appreciation Event, as it will drop you off close to the T-Mobile Arena.

Finally, walking is a decent option, especially after dark for the various vendor events, but if you are going to do it I recommend finding a buddy or three. I've never personally seen violence on the Strip, but you hear about it, and there are lots of "character buskers" dressed as everything from Michael Jackson to Spongebob who will harass you.

One final note: while first impressions are important, there really isn't any point to being that person in the fancy shoes unless you've got booth duty. I typically go buy a new pair of good running shoes a week or two before the conference so I can break them in, and then that's what I wear. If you are a step tracker kind of person like me, expect 20,000 and up each day, so take care of your feet.

Things To Do

Seriously, there’s plenty to do even if you weren’t at a conference already providing lots to do. Regardless of your interest if the conference doesn’t have you jam-packed enough you can find something you like here.

If you are new to IT or are just starting to get your name out there, the most important thing to do outside of the sessions is to get out there and be social. Both of the conferences we are talking about here have a great community surrounding them, with some wonderful people in it. The first step, if you aren't already on it, is to get yourself on Twitter and follow the hashtag stream for your event (#CLUS for CiscoLive US, #VMworld for VMworld), not only while you are there but before, especially as many outside events will be planned in advance. Be sure to find the social area for your given conference and go make friends. Outside of the standard conference hours you'll find that many of the vendors will have events planned for attendees; if you have partners or vendors you work heavily with, it's worth asking your SE if they are doing anything.

CiscoLive Basics

CiscoLive will be held this year June 25-29 and promises to be a great show once again. While I have really enjoyed all of the conferences I've attended, CLUS was my first and is near to my heart. Of all those I've been to, this one feels the most academic. There aren't really as many softball sessions, and the sessions are a bigger part of the focus of the event than at the others. That said, they do a very good job of supporting the social community by having a Social Media Hub right in the middle of it all, with special events for the twitterazzi most days. I highly recommend showing up and, if nothing else, walking up and just introducing yourself; trust me, you'll fit right in there somewhere, especially if you bring a kilt. 😉 If you can come in early on Sunday, the annual Tweetup on Sunday afternoon is always a good time to make friends.

If you are going to CiscoLive you should have booked most of your sessions by this point. A couple of points here. First, do not overbook yourself on sessions. While the pressure is always there to make sure you are getting all the education out of it possible, every session these days is recorded and can be watched later. My decision on whether I'm going to attend a particular session is based on whether the subject is directly related to something that's got me stumped and I want the opportunity to touch base with the speaker; past that, I'll watch most after the fact. A better use of your time is getting out and networking; the community is a huge store of distributed information and will in many cases serve as a resource after the fact. I've yet to leave an event and not come home to do some kind of redesign based on things I've learned from the community.

A highlight for anybody who's been to CLUS is always the Customer Appreciation Event. This year Bruno Mars will take over the T-Mobile Arena and I am legitimately bummed that I will be missing it. The celebrity keynotes are always very good as well and usually provide a different view on how technology interacts with the world. I truly enjoyed listening to Kevin Spacey last year, and this year they've booked Bryan Cranston.

Regarding keynotes, I typically like to watch these in the social areas rather than packing myself into the keynote halls. The seating is better, there are fewer people, and refreshments are usually close at hand; plus you can find a surface to put your computer/iPad on to take notes and/or live tweet the talk.

VMworld Basics

Where the focus of CiscoLive is on the direct educational benefit, the focus of VMworld is more on learning from the community. With the conference officially running August 27-31 there are just as many official conference sessions as there are at CiscoLive, but I find there to be more lower-level, marketing-style sessions at VMworld. What makes up for it is the number of community learning opportunities surrounding the event. If you can swing coming in either Saturday or very early Sunday, the vBrownBag/VMunderground Opening Acts is always a great place to learn about what is coming next in virtualization and technology. Speaking of vBrownBag, these guys have a stage running concurrent to the conference with sessions about anything you can conceive of, all week long. Historically the vBrownBag stage has been found in the Hang Space (VMworld-speak for the social media area), but this year's location is still to be determined.

Another thing you’ll find is the potential to have your evenings books is exceptionally high with multiple vendor events every single night, traditionally starting with vBeers on Saturday evening. At some point as we get closer to the conference VMworld will fill a website with information and registration links for many of the gatherings to make scheduling easy. The Veeam, VMunderground and vExpert/VCDX/VMUG parties are always the most talked about. There is also the annual VMworld Party with typically big name acts but at the time of this writing there really isn’t any information about this yet. Be sure to follow along online and on social media to find out soon enough.

Conclusion

With all that being said, just go enjoy yourself. There's a reason Denise Fishburne refers to CiscoLive as "Geek Summer Camp": it really does feel that way, regardless of the conference you're attending. Everybody does things their own way. As I'll be attending VMworld this year, if you are there and want to say hi feel free to reach out and find me on Twitter @k00laidIT.

Why Is My Nimble Storage Firmware Update Not Available

Today, like every day as a technology professional, I got the opportunity to learn something new. I had seen posts on social media and articles noting that Nimble Storage, with NimbleOS version 3.6, supports the shiny new features of VMware's vSphere 6.5 release, including VVOLs 2.0 and VASA 3.0. After reading through the release notes and not seeing anything to really stress me out in the known issues, I went to begin the download for an update in the off hours. To my early adopter horror I saw there was no download available! Had I misread the releases? Did I imagine that the release notes really were for 3.6? No, those were real and it should be there. After asking around I learned that Nimble, in a notable effort to save us from ourselves, will from time to time blacklist you from receiving updates due to things they observe through their excellent InfoSight analytics system.

The problem with this is that they don't make it easily apparent anywhere near the download screen that you are blacklisted. In order to see if you are blacklisted you have to switch over from the array management screen to InfoSight, go to Manage > Assets > click on the array, and then at the top where it says "Version: …" click on the version link. There, finally, you will see either the new version in black if you are good to upgrade or, as shown in my image, in red if blacklisted. Even then it still doesn't tell you why you are blacklisted; you have to call support to learn that.

Blacklisted

Not Blacklisted

Conclusion

The idea of blacklisting arrays that show signs of things known not to play well with future versions of software is a noble one and has the potential to keep the load off of your support staff. The problem is that the current way it is shown to the user almost ensures that a support call is going to have to be made anyway, to either a) find out why the array is blacklisted (OMG, what's wrong with my array that it can't be upgraded!?!?) or b) find out why new software isn't available. I would recommend that if an array is blacklisted and an admin attempts to download software, the download dialog should tell them that the array is blacklisted, and why, right there. This would save everybody a good deal of time.

As an addendum, as I post this I see that 3.6.1 has been released as well and my time on the blacklist is over. Off to upgrade!

Fixing Domain Controller Boot in Veeam SureBackup Labs

We’ve been dealing with an issue for past few runs of our monthly SureBackup jobs where the Domain Controller boots into Safe Mode and stays there. This is no good because without the DC booting normally you have no DNS, no Global Catalog or any of the other Domain Controller goodness for the rest of your servers launching behind it in the lab. All of this seems to have come from a change in how domain controller recover is done in Veeam Backup and Replication 9.0, Update 2 as discussed in a post on the Veeam Forums. Further I can verify that if you call Veeam Support you get the same answer as outlined here but there is no public KB about the issue. There are a couple of ways to deal with this, either each time or permanently, and I’ll outline both in this post.

Booting into Safe Mode is totally expected, as a recovered Domain Controller object should boot into Directory Services Restore Mode the first time. What is missing is that, as long as you have the Domain Controller box checked for the VM in your application group setup, once booted Veeam should modify the boot setup and reboot the system before presenting it to you as a successful launch. This in part explains why checking the Domain Controller box lengthens the allowed boot time from 600 seconds to 1800 seconds by default.

On the Fly Fix

If you are like me and already have the lab up and need to get it fixed without tearing it back down, you simply need to clear the Safe Boot bit and reboot from Remote Console. I prefer to:

  1. Make a Remote Console connection to the lab-booted VM and log in
  2. Go to Start, Run and type "msconfig"
  3. Click on the Boot tab and uncheck the "Safe boot" box. You may notice that the Active Directory repair option is selected
  4. Hit OK and select Restart

Alternatively, if you are command-line inclined, a method is available via Veeam KB article 1277.
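Per the KB it boils down to clearing the safeboot flag and restarting; from an elevated command prompt inside the lab VM it looks something like this (double-check the KB for the exact syntax):

  bcdedit /deletevalue safeboot
  shutdown -r -t 0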

After the restart it will come back up in normal operation. Just to be clear, either of these fixes is temporary. If you tear down the lab and start it back up from the same point in time you will experience the same issue.

The Permanent Fix

The problem with either of the above methods is that while they will get you going on a lab that is already running, about 50% of the time I find that once I have my DC up and running well I have to reboot all the other VMs in the lab to fix dependency issues. By the time I'm done with that I could have just relaunched the whole thing. To permanently fix the root issue you can revert the way DCs are handled by creating a single registry entry, as shown below, on the production copy of each Domain Controller you run in the lab.
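The specifics of the entry come from Veeam Support and the forum thread above, so treat the names below as placeholders to confirm against those sources; the shape of it is a single DWORD value created on the production DC:

  rem Value name is a placeholder; get the real one from Veeam Support / the forum thread
  reg add "HKLM\SOFTWARE\Veeam\Veeam Backup and Replication" /v <ValueNameFromSupport> /t REG_DWORD /d 1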

Once you have this key in place on your production VM you won’t have any issues with it going forward as long as the labs you launch are from backups made after that change is put in use. My understanding is this is a known issue and will eventually be fixed but at least as of 9.5 RTM it is not.

Installing .Net 3.5 on Server 2012/ Windows 8 and above

Hi all, just a quick post to serve as both a reminder to me and hopefully something helpful for you. For some reason Microsoft has decided to make installing .NET 3.5 on anything after Windows Server 2012 (or Windows 8 on the client side) harder than it has to be. While it is included in the regular Windows Features GUI, it is not included in the on-disk sources for features to be installed automatically. In a perfect world you just choose to source from Windows Update and go about your day, but in my experience this is a hit or miss solution, as many times, for whatever reason, it errors out when attempting to reach Windows Update.

The fix is to install via the Deployment Image Servicing and Management tool, better known as DISM, and provide a local source for the files. .NET 3.5 is included in every modern Windows CD/ISO under the sources\sxs directory. When I do this installation I typically use the following command from an elevated privilege command line or PowerShell window:
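(Reconstructed from the parameter walkthrough below; the D: drive letter assumes your install media is mounted there.)

  DISM /Online /Enable-Feature /FeatureName:NetFx3 /All /LimitAccess /Source:D:\sources\sxs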

When done, the window should look like the one shown to the left. Pretty simple, right? While this is all you really need to know to get it installed, let's go over what all these parameters are that you just fed into your computer.

  • /Online – This refers to the idea that you are changing the installed OS as opposed to an image
  • /Enable-Feature – this is the CLI equivalent of choosing Add Roles and Features from Server Manager
  • /FeatureName – this is where we specify which role or feature we want to install; it can be used for any Windows feature
  • /All – here we are saying we want not only the base component but all components underneath it
  • /Source:D:\sources\sxs – this specifies where you want DISM to look for installation media. You could also copy this to a network share, map a drive and use it as the source.
  • /LimitAccess – this simply tells DISM not to query Windows Update as a source

While DISM is available both at the command line and within PowerShell, there is a PowerShell-specific command that works here as well and is maybe a little easier to read, but I tend to use DISM just because it's what I'm used to. To do the same in PowerShell you would use:
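(The server-side cmdlet; the same caveat about the mounted media path applies.)

  Install-WindowsFeature NET-Framework-Core -Source D:\sources\sxs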

Setting Up External Access To A Veeam SureBackup Virtual Lab

Hey y’all, happy Friday! One of the things that seems to still really fly under the radar in regards to Veeam Backup & Replication is its SureBackup feature. This feature is designed to allow for automated testing via scripts of groups of your backups. An example would be if you have a critical web application. You can create an application group that includes both the database server and the web server and when the SureBackup job is run Veeam will connect a section of its backup repository to a specified ESXi host as a datastore and, start the VMs within a NAT protected segment of your vSphere infrastructure, run either the role based scripts included or custom ones you specify to ensure that the VMs are connecting to the applications correctly, and then when done shut the lab down and fire off an e-mail.

That workflow is great and all, but it only touches the edge of what SureBackup can do for you. In our environment not only do we have a mandate to provide backup tests that allow for end-user interaction, but we also use SureBackup for test bed applications such as patch tests. An example of the latter: when I was looking to upgrade our internal Windows-based CA to Server 2012 R2, I was able to launch the server in the lab, perform the upgrade and ensure that it behaved as expected WITHOUT ANY IMPACT ON PRODUCTION, and then tear down the lab and it was like it never happened. Allowing the VMs to stay up and running after the job starts requires nothing more than checking a box in your job setup.

By default, access to a running lab is fairly limited. When you launch a lab from your Veeam server a route to the NAT'd network is injected into the Veeam server itself to allow access, but that doesn't help you all that much if you want others to be able to interact; we need to expand that access outwards. This post is going to walk you through the networking setup for a Virtual Lab that can be accessed from whatever scope you are looking for, in my case from anywhere within my production network.

Setting Up the Virtual Lab

The first step, if you haven't set up SureBackup in your environment at all, is to set up your Virtual Lab. The first of two parts here that are critical to this task is setting up the Proxy IP, which is the equivalent of your outside NAT address if you've ever worked on a firewall. This IP is essentially going to be the production network side of the lab VM that is created when you set up a Veeam Virtual Lab.

[Screenshot: setting the proxy appliance (NAT) IP]

Next we need to set up an isolated network for each production port group you need to support. While I use many VLANs in my datacenter, I try to keep the application groups I need to test on the same VLAN to make this setup simple; it doesn't have to be that way, though, and you can support as many as you need. Simply hit Add, browse out and find the production network port group you need to support, give the isolated network a name and specify a VLAN.

[Screenshot: adding the isolated networks]

The last step of setting up the Virtual Lab in this regard is creating a virtual NIC to map to each of your isolated networks. Where I see a lot of people get tripped up with this: always make the proxy appliance IP address here match the default gateway of the production network it is reflecting. If you don't do that, the launched lab VMs will never be able to talk outside of the lab. Second, in regard to the Masquerade IP, try to aim for some consistency. Notice that in my production network I am using a Class B private address space but with a Class C mask. By default this will throw off the automatic generation of the Masquerade IP, and I've found it isn't always consistent across multiple virtual NIC setups. If you set up multiple isolated networks above, you need to repeat this process for each network. Once you are done with this you can complete your lab setup and hit Finish to have it build or rebuild the appliance.

[Screenshot: creating the virtual NIC and masquerade network]

Tweaking the SureBackup Job

For the sake of brevity I'm assuming at this point that you've got your Application Groups set up without issue and are ready to proceed to fixing your SureBackup job to stay up and running. To do so, on the Application Group screen all you have to do is check the "Keep the application group running after the job completes" box. That's it. Really. Once you do that, the lab will stay up and running until you right-click on the job in the Veeam Backup & Replication console and choose Stop. I've been lobbying for years for a "stop after X hours" option but still haven't gotten very far with that one. Really the concern there is the performance impact of doubling a part of your load, since you are essentially running two copies of a segment of your datacenter; if you have plenty to burn it isn't an issue.

[Screenshot: the "Keep the application group running" checkbox]

Fixing the Routing

Now the final step is to either talk to your network guy or go yourself to where your VLAN routing takes place and add a static route that sends traffic for the lab's masqueraded IP range through the Proxy Appliance's IP. For the example we've been working through in this post, our proxy appliance has an IP of 172.16.3.42 and all of our lab networks are within the 172.31.0.0/16 network. If you are using an IOS-based Cisco switch to handle your VLAN routing, the command would be:
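(Built straight from the addresses above; adjust the mask if you scope your lab networks differently.)

  ip route 172.31.0.0 255.255.0.0 172.16.3.42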

After that is done, from anywhere that route is accessible you should now be able to pass whatever traffic you like inbound to the lab network addresses. Sticking with our example, for a production VM with the IP address 172.16.3.10 you would interact with the IP 172.31.3.10 in whatever way needed. Keep in mind this is, for lack of a better word, one-way traffic. You can connect in to any of the hosts within the lab network, but they can't really reach directly out and interact with the production network.

[Screenshot: testing access into the running lab]

One More Thing…

One final tip I can give you, if you are going to let others in to play in your labs, is to have at least one workstation-grade VM in each of your Application Groups with the software needed for testing already loaded. This way you can enable RDP on that VM and the user can just double-click an icon and connect into the lab, running their tests from there. Otherwise, if you have locally installed applications that need to connect to hosts that are now inside the lab, you are either going to need to reconfigure the application with the corrected address or temporarily modify the user's hosts file so that they connect to the right place, neither of which is particularly easy to manage. The other nice thing about a modern RDP session is you can cut and paste files in and out of it, which is handy if the user wants to run reports and the like.

[Screenshot: connecting into the lab over RDP]

As an aside I’m contemplating doing a video run through of the setting up a SureBackup environment to be added to the blog next week. Would you find such a thing helpful? If so please let me know on twitter @k00laidIT.

Fun with the vNIC Shuffle with Cisco UCS

Here at This Old Datacenter we've recently made the migration to Cisco UCS for our production compute resources. UCS offers a great number of opportunities for system administrators, both in deployment as well as ongoing maintenance, making updating the physical layer as manageable as we virtualization admins are getting used to with the virtualized layer of the DC. Of course, like any other deployment, there is always going to be that one "oh yeah, that" moment. In my case, after I had my servers up I realized I needed another virtual NIC, or vNIC in UCS world. This shouldn't be a big deal, because a big part of what UCS does for you is abstract the hardware configuration away from the actual hardware.

For those more familiar with standard server infrastructure: instead of having any number of physical NICs in the back of the host for specific uses (iSCSI, VM traffic, specialized networking, etc.) you have a smaller number of connections from the Fabric Interconnects to the blade chassis that are logically split to provide networking to the individual blades. These Fabric Interconnects (FIs) not only have multiple very high-speed connections (10 or 40 GbE), but each setup typically has multiple FIs to provide redundancy throughout the design. All that being said, here's a very basic design utilizing a UCS Mini setup with Nexus 3000 switches and a copper-connected storage array:

[Diagram: basic UCS Mini design with Nexus 3000 switches]

So are you starting to think this is a UCS geeksplainer? No, no my good person, this is actually the story of a fairly annoying hiccup in the relationship between UCS and VMware's ESXi. You see, while adding a vNIC should be as simple as creating your vNICs in the Service Profile, rebooting the affected blades, and having the new NIC(s) show up as available within ESXi, it of course is not that simple. What happens in reality when you add new NICs to an existing physical-NIC-to-vSwitch layout is that the relationships get shuffled. So for example, say you started with a vNIC (shown as vmnicX in ESXi) to vSwitch layout that looks like this:

[Screenshot: vmnic-to-vSwitch layout before adding vNICs]

After you add NICs and reboot, it looks like this:

[Screenshot: vmnic-to-vSwitch layout after adding vNICs]

Notice the vmnic-to-MAC-address relationship in the two screenshots. While all the moving pieces are still there, different physical devices now map to different vSwitches than designed. This really matters when you think about all the differences that usually exist in the VLAN design underlying the networking of an ESXi setup. In this example vSwitch0 handles management traffic, HQProd-vDS handles all the VM traffic (so just trunked VLANs) and vSwitch1 handles iSCSI traffic. This becomes a nightmare especially when things like iSCSI, which require specialized networking setup, are involved; frankly I couldn't imagine having to do this with a more complex design.

The Fix

So I’m sure you are sitting here like I was thinking “I’ll call support and they will have some magic that with either a)fix this, b) prevent it from happening in the future, or preferably c) both. Well, not so much. The answer from both VMware and Cisco support is to figure out which NICs should be assigned to which vSwitch by reviewing the MAC to vNIC assignment in UCS Manager as shown and then manually manage the vSwitch Uplink assignment for each host.

[Screenshot: MAC-to-vNIC assignments in UCS Manager]

[Screenshot: corrected vSwitch uplink assignments in ESXi]

As you may be thinking, yes, this is a pain in the you-know-what. I only had to do this with 4 hosts; I don't want to think about what this looks like in a bigger environment. Further, as best I can get answers from either TAC or VMware support, there is no way to make this go better in the future; this was not an issue with my UCS setup, this is just the way it is. I would love it if some of my "Automate All The Things!!!" crew could share a counterpoint on how to automate your way out of this, but I haven't found it yet. Do you have a better idea? Feel free to share it in the comments or tweet me @k00laidIT.

Lots of new stuff coming from Veeam

Veeam had what they called "THEIR BIGGEST EVENT EVER" and while at times it did seem to be really heavy on sales pitch for the sake of sales pitch, there was a lot of stuff to legitimately be excited about for those of us who use their products. From the features coming in Veeam Backup & Replication 9.5 in a couple of months through the first new feature of next year's version 10, in total there were 5 major announcements today that those of us using the product can make use of. In this post I'm going to run briefly through these, and in the coming months I will provide some deeper insights when possible.

Veeam Backup & Replication / Veeam ONE 9.5 (October 2016)

  • Nimble Storage integration – Nimble will be the next vendor after EMC, NetApp and HP whose storage systems allow Veeam to interact at the array level, allowing for backups from storage snapshots. If you are a Nimble customer (like me) this is going to be some good stuff.
  • Advanced usage of Windows Server 2016 ReFS – This is the real gravy here for anybody who has to work with any kind of synthetic operations on their backup files. Through an integration Veeam has with Microsoft, when ReFS is used to back your Veeam repositories your weekly rollups are going to take a heck of a lot less time, and you'll see less storage consumption for long-term "weekly fulls". This is due to ReFS' basic mechanism whereby file copies and moves never actually move data, they just move the pointers. An example I've seen: on a backup with a 10 GB change rate the weekly full went from 35 minutes on NTFS to 5 minutes on ReFS. Now move that out to a real production dataset and you are really talking about something. There will be a lot more on this in follow-up posts.
  • Direct Restore to Microsoft Azure – If you are resource constrained (which you usually are in a situation where you need a restore) Veeam now has the ability to restore a VM (even if it is vSphere based) directly to Azure. Pretty cool, and I think probably the first of what we'll see on this thread.
  • vCloud Director integration
  • Veeam ONE 9.5 – If your organization needs to work with chargeback, this is something that is directly supported in Veeam ONE. If you haven't played with Veeam ONE yet, please do so; I've yet to meet anyone who hasn't found at least one problem in their virtualization environment when it is first installed.

Veeam Agents (November-December 2016)
[Image: Veeam Agent version comparison]

Expanding on Veeam Endpoint for Windows (and now Linux), Veeam has come out with a Veeam Agents for Windows and Linux product. While Endpoint is and will still be available for standalone installations, we finally have the enterprise-managed version we've been looking for, and we truly can have one centrally managed Veeam installation for our virtual, physical and workstation backups. As you can see there's still a lot to like about the free version, including the new ability to restore directly to Azure or Hyper-V, while the paid versions give us server-grade capabilities such as application-aware processing and transaction log processing. One feature I'm excited about for my mobile workforce is the ability for workstations and remote office servers to cache their backups locally when they aren't connected to the Internet and then ship them back to the corporate office or Cloud Connect repository when once again connected. This is good stuff that has been a long time coming.

Veeam Availability Console (Q1 2017)

I truly want to believe this is the first edge of "one UI to rule them all". The Veeam Availability Console is a web-based console that lets you monitor and manage all of your Veeam resources: VBR, Agents, Cloud Connect, etc. This is an evolution of the managed backup portal that has been available to Service Providers for a bit now, moved downstream to the Enterprise. Let me reinforce the emphasis on Enterprise: while it is included in licensing, you are going to have to be a pretty big organization/installation to be allowed access to it. Hopefully as subsequent versions are released that will trickle down more.

Veeam Availability Orchestrator (Q1 2017, beta soon)

Veeam for a DevOpsy world. VAO will allow you to automate many of the processes you need to perform with Veeam based upon your disaster recovery plan. Let's say your plan requires you to keep so many backups and so many replicas, test regularly, and comply with documentation practices. Orchestrator is going to allow you to take all of that on paper and define it in workflows, so in theory you are always in compliance, and if you aren't, you have the documentation to show you where you fall short. I've seen quite a few pieces of this, which will be available for everybody to test soon, and they are all very powerful.

Veeam Office 365 E-mail Backup (Q4 2016)

Of the new products announced, this is the biggie. For those of us who have already begun or finished Exchange migrations to Office 365, Veeam now has the ability to back up those mailboxes to your local repositories so that you always know that data is there. I don't know how those conversations have gone for you, but this has been a major pain point for us in going to the cloud. Pricing, or even how it is going to be sold, isn't set yet, but what is known is that when released at the end of this year it will be free for one year for all Veeam customers with an active support contract, and for 3 years for those with Enterprise Plus licensing.

Again, while I have no knowledge that it will happen, I have to believe this is the first baby step into a whole host of things to make our cloudy life better in the future with SharePoint, OneDrive and anything else coming down the road.

Veeam Backup & Replication integration with IBM storage (????, preview May 2017)

Finally, the last announcement was the first related to Veeam Backup & Replication version 10: the next storage vendor integration. This integration is going to work with any IBM product based on their Spectrum Virtualize software and should work like any of their other integrations. With this we also got to learn that the first technical preview of v10 will coincide with VeeamON 2017 in New Orleans, so mid-May 2017.

VMware Tools Security Bug and Finding which VMware Tools components are installed on all VMs

Just a quick post related to today's VMware security advisories. VMware released a pair of advisories today, CVE-2016-5330 and CVE-2016-5331, and while both are nasty, their scopes are somewhat limited. The 5331 issue is only applicable if you are running vCenter or ESXi 6.0 or 6.0 U1; Update 2 patches the bug. The 5330 issue is limited to Windows VMs that are running VMware Tools and have the optional HGFS component installed. To find out if you are vulnerable, here's a PowerCLI script to get all your VMs and list the installed components. Props to Jason Shiplett for giving me some assistance on the code.
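The full script lives on GitHub (linked below); as a condensed sketch of the approach, assuming PowerCLI is connected via Connect-VIServer and your account can query the guests over WMI (the component pattern is a hypothetical example list):

  # Pattern of VMware Tools components we care about; trim the list as needed
  $componentPattern = "vmhgfs|vmxnet|vmci"

  # Walk every Windows VM and list matching guest drivers
  Get-VM | Where-Object { $_.Guest.OSFullName -match "Windows" } | ForEach-Object {
      $vmName = $_.Name
      Get-WmiObject -Class Win32_SystemDriver -ComputerName $vmName |
          Where-Object { $_.Name -match $componentPattern } |
          Select-Object @{N="VM";E={$vmName}}, Name, State
  }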

While the output is still a little rough, it will get you there. Alternatively, if you are just using this script for the advisory listed, you can change where-object { $_.Name -match $componentPattern } to where-object { $_.Name -match "vmhgfs" }. This script is also available on GitHub.

Updating the Photo Attributes in Active Directory with Powershell

Today I got to have the joy of needing to once again get caught up on importing employee photos into the Active Directory photo attributes, thumbnailPhoto and jpegPhoto. While this isn't exactly the most necessary thing on Earth, it does make working in a Windows environment "pretty," as these images are used by things such as Outlook, Lync and Cisco Jabber, among others. In the past the only way I've known to do this is with the AD Photo Edit Free utility, which, while nice, tends to be a bit buggy and requires lots of repetitive action as you manually update each user for each attribute. This year I've given myself the goal of 1) finally learning PowerShell/PowerCLI to at least the level of mild proficiency and 2) automating as many tasks like this as possible. While I've been dutifully working my way through a playlist of great Pluralsight courses on the subject, I've had to live dangerously a few times to accomplish tasks like this along the way.

So, long story short, with some help along the way from Googling things I've managed to put together a script that does the following:

  1. Look in a directory, passed to the script via the jpgdir parameter, for any images with the file name format <username>.jpg
  2. Do an Active Directory search, in an OU specified via the ou parameter, for the username included in the image name. This parameter needs to be the full DN path (ex. LDAP://ou=staff,dc=foo,dc=com)
  3. If the user is found, make a resized copy of the image file in the "resized" subdirectory to keep the file sizes small
  4. Finally, set the resized image as both the thumbnailPhoto and jpegPhoto attributes on the user's AD account

So your basic usage would be .\Set-ADPhotos.ps1 -jpgdir "C:\MyPhotos" -OU "LDAP://ou=staff,dc=foo,dc=com" . This is easily set up as a scheduled task to fully automate the process; in our case I've got the person in charge of creating security badges feeding the folder with pictures taken for the badges, and this runs at 5 in the morning each day automatically.

All that said, here’s the actual script code:

Did I mention that I had some help from the Googles? I was able to grab some great help (read Ctrl+C, Ctrl+V) in learning how to piece this together from a couple of sites:

The basic idea came from https://coffeefueled.org/powershell/importing-photos-into-ad-with-powershell/

The Powershell Image Resize function: http://www.lewisroberts.com/2015/01/18/powershell-image-resize-function/

Finally I’ve been trying to be all DevOpsy and start using GitHub so a link to the living code can be found here: https://github.com/k00laidIT/Learning-PS/blob/master/Set-ADPhotos.ps1

Getting Started with rConfig on CentOS 7

I’ve been a long time user of RANCID for change management on network devices but frankly it’s always left me feeling a little bit of a pain to use and not particularly modern. I recently decided it was time for my OpenNMS/RANCID server to be rebuilt, moving OpenNMS up to a CentOS 7 installation and in doing so thought it was time to start looking around for an network device configuration management alternative. As is many times the way in the SMB space, this isn’t a task that actual budgetary dollars are going to go towards so off to Open Source land I went!  rConfig immediately caught my eye, looking to me like RANCID’s hipper, younger brother what with its built in web GUI (through which you can actually add your devices), scheduled tasks that don’t require you to manually edit cron, etc. The fact that rConfig specifically targets CentOS as its underlaying OS was just a whole other layer of awesomesauce on top of everything else.

While rConfig’s website has a couple of really nice guides once you create a site login and use it, much to my dismay I found that they hadn’t been updated for CentOS 7 and while working through them I found that there are actually some pretty significant differences that effect the setup of rConfig. Some difference of minor (no more iptables, it’s firewalld) but it seems httpd has had a bit of an overhaul. Luckily I was not walking the virgin trail and through some trial, error and most importantly google I’ve now got my system up and running. In this post I’m going to walk through the process of setting up rConfig on a CentOS minimal install with network connectivity with hopes that 1) it may help you, the two reader’s I’ve got, and 2) when I inevitably have to do this again I’ll have documentation at hand.

Before we get into it I will say there are a few artistic licenses I've taken with rConfig's basic setup.

  1. I’ll be skipping over the network configuration portion of the basic setup guide. CentOS7 has done a great job of having a single configuration screen at install where you setup your networking among other things.
  2. The system is designed to run on MySQL but for a variety of reasons I prefer MariaDB. The portions of the creator’s config guide that deal with these components are different from what you see here but will work just fine if you do them they way described.
  3. I’m virtualized kind of guy so I’ll be installing the newly supported open-vm-tools as part of the config guide. Of course, if you aren’t installing on ESXi you won’t be needing these.
  4. Finally before proceeding please be sure to go ahead and run a yum update to make sure everything’s up to date and you really do have connectivity.

Disabling Stuff

Even with the minimal installation there are things you need to stop to make everything play nice, namely the security measures. If you were installing this in the wild this would be a serious no-no, but for a smaller shop behind a well configured firewall it should be OK. Start with SELinux:

vi /etc/sysconfig/selinux

Once in the file you need to change the "SELINUX=enforcing" line to "SELINUX=disabled". To do that hit "i" and then use vi like notepad with the arrow keys. When done hit Esc to exit insert mode and ":wq" to save and exit.
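The other security measure in play on CentOS 7 is firewalld, the iptables replacement mentioned above. If you take the same lab-only shortcut of turning it off entirely (an assumption on my part; you could instead open just the ports rConfig needs), it's a pair of systemctl commands:

  systemctl stop firewalld
  systemctl disable firewalld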

Installing the Prerequisites

Since we did the minimal install there are lots of things we need to install. If you are root on the box you should be able to just cut and paste the following into the CLI and everything gets installed. As mentioned in the original Basic Config Guide, you will probably want to cut and paste each line individually to make sure everything gets installed smoothly.
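A representative package list, reconstructed rather than copied verbatim (check the rConfig Basic Config Guide for the authoritative set; attr and open-vm-tools are the additions called out in this post):

  yum -y install httpd mariadb-server mariadb php php-cli php-common php-devel php-pear php-mysql
  yum -y install vsftpd wget unzip crontabs
  yum -y install attr open-vm-tools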

Autostart Services

Now that we’ve installed all that stuff it does us no good if it isn’t running. CentOS 6 used the command chkconfig on|off to control service autostart. In CentOS 7 all service manipulation is now done under the systemctl command. Don’t worry too much, if you use chkconfig or service start both at this point will still alias to the correct commands.

Finalize Disable of SELinux

One of the hard parts for me was getting step 5/6 in the build guide to work correctly. If you don't do it the install won't complete, but it also doesn't work right out of the box. To fix this, the first line in the prerequisites installs the attr package, which contains the setfattr executable. Once that's installed, the following checks to see if the '.' is still in the root directories' ACLs and removes it from the /home directory. By all means, if you know of a better way to accomplish this (I thought of putting the install in the /opt directory) please let me know in the comments or on Twitter.
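A sketch of that check-and-remove, with the attribute name being my assumption of what the build guide strips (the trailing '.' in an ls listing indicates an SELinux security context):

  # The '.' at the end of the permission bits means an SELinux context is set
  ls -la /
  # Strip the SELinux attribute from /home
  setfattr -h -x security.selinux /home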

MySQL Secure Installation on MariaDB

MariaDB accepts any commands you would normally use with MySQL, and the mysql_secure_installation script, installed by default, is a great way to go from baseline to well secured quickly. The script is designed to:

  • Set root password
  • Remove anonymous users
  • Disallow root logon remotely
  • Remove test database and access to it
  • Finally reload the privilege tables

I tend to take all of the defaults, with the exception that I allow root login remotely for easier management. Again, this would be a very bad idea for databases with external access.
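From the shell, simply run:

  mysql_secure_installation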

Then follow the prompts from there.

As a follow-up you may want to allow remote access to the database server for management tools such as Navicat or HeidiSQL. To do so, enter the following, where X.X.X.X is the IP address you will be administering from. Alternatively, you can use root@'%' to allow access from anywhere.
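A sketch of the usual grant, run from the mysql prompt as root, with X.X.X.X and the password as the placeholders to change:

  mysql -u root -p
  GRANT ALL PRIVILEGES ON *.* TO 'root'@'X.X.X.X' IDENTIFIED BY 'YourPasswordHere' WITH GRANT OPTION;
  FLUSH PRIVILEGES;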


Configure VSFTPd FTP Software

Now that we’ve got the basics of setting up the OS and the underlying applications out of the way let’s get to the business of setting up rConfig for the first time. First we need to edit the sudoers file to allow the apache account access to various applications. Begin editing the sudoers file with the visudo  command, arrow your way to the bottom of the file and enter the following:

rConfig Installation

First you are going to need to download the rConfig zip file from their website. Unfortunately the website doesn't seem to work with wget, so you will need to download it to a computer with a GUI and then upload it via SFTP to your rConfig server (ugh). Once the file is uploaded to your /home directory, back at your server CLI run the following commands:
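Assuming the zip landed in /home and extracts to /home/rconfig (the version number in the file name will vary):

  cd /home
  unzip rconfig-*.zip
  chown -R apache /home/rconfig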

Next we need to copy the httpd.conf file over to the /etc/httpd/conf directory. This is where I had the most issues of all, in that the conf file included is for httpd on CentOS 6 and there are some module differences between 6 and 7. Attached here is a modified version that I was able to get working successfully after a bunch of failures. The file found here (httpd.txt) will need to replace the existing httpd.conf before the webapp will successfully start. If the file is copied to the /home/rconfig directory, the shell commands would be:
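(Using the httpd.txt file name from above.)

  cp /home/rconfig/httpd.txt /etc/httpd/conf/httpd.conf
  systemctl restart httpd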

As long as the httpd service starts back up correctly you should now be good to go with the web portion of the installation, which is pretty point-and-click. Again, for the sake of brevity, just follow along with the rConfig installation guide starting with the "rConfig web installation" section and follow it to the end. We'll get into setting up devices in a later post, but it is a pretty simple process if you are used to working with networking command lines.