VMworld 2017 US: T-2

I write this while traveling to sunny and amazingly hot Las Vegas for the 2017 edition of VMworld US. I hope to provide feedback and news throughout the conference, highlighting not only the excellent content and programs but also the best the virtualization community has to offer.

Today will be a travel day as well as a day to meet up with friends, new and old. Tomorrow, the Sunday before the conference, is when the real fun begins with things like Opening Acts for me, TAM and partner content for others, and a number of social events.

What We Know So Far

Yesterday was the day VMware went on a killing spree, announcing the deprecation of the Windows-based vCenter, the Flash-based vSphere Web Client, and the vmkLinux APIs and their associated driver ecosystem. All of these enter the deprecated state with the next major version of vSphere and then will be gone for ever and ever in the revision after that. Each of these is a significant step in the evolution of vSphere as we know it, and when coupled with the advances in PowerCLI 6.5, the management of our in-house infrastructure has changed for the better.

These announcements came rapid-fire on the Friday before VMworld, with the death of the Windows-based vCenter coming first. As we have had vCenter Server Appliance (VCSA) releases of varying success for over five years now, it’s been a long time coming. I myself migrated two years ago, and while it was good then, the latest 6.5 version, with its Photon OS base, excellent migration wizard, and in-appliance vCenter Update Manager support, has shown it is definitely the way forward.

The Flash-based web client was the next announcement to come, and again we are looking at a deprecation that needs to happen and is most definitely going to be a good thing, but it does come with some apprehension. With most things VMware has deprecated, we’ve had at least one feature-rich, stable version of the replacement out before the predecessor’s demise was announced. That isn’t the case with the HTML5 client that replaces the Flash-based one. While the latest builds are getting very, very good, there are still major things that are either quirky or simply aren’t there yet. The good news is that we have been given almost immediate assurances by everyone involved with product management that we vSphere admins will never be left without GUI management for any task we have today, and I for one believe them. The last components of what is known as the HTML5 client simply can’t come soon enough in my opinion; I’m tired of having to hop through multiple GUIs and browsers to perform basic tasks in my daily work life.

Finally, the day finished with the announced deprecation of the non-native, vmkLinux-based drivers. To be honest I didn’t know these were even still a thing, as everything I’ve rolled out for the past many years has been able to work with the native drivers. I’m sure there are those who may still need additional time, but as the removal is still a couple of versions off, this should be something that can be mitigated now that the end is known.

Conclusion

With all of these preconference announcements related to VMware’s flagship product, is this going to be the year where VMworld is chock-full of improvements to vSphere? This will be my third one in four years, and each year I’ve felt their focus was elsewhere. While vSAN, NSX, and the like are definitely where the company is seeing growth, all of these things rely on vSphere as an underlay. I for one would be happy to see a little love shown here.

With that happy thought I’m going to shut it down and land. For those coming to VMworld this weekend, safe travels, and for those at home, look for more info as it’s known here on koolaid.info.

Setting Up External Access To A Veeam SureBackup Virtual Lab

Hey y’all, happy Friday! One of the things that still seems to fly under the radar in regard to Veeam Backup & Replication is its SureBackup feature. This feature is designed to allow for automated, script-driven testing of groups of your backups. An example would be a critical web application: you can create an application group that includes both the database server and the web server, and when the SureBackup job runs, Veeam connects a section of its backup repository to a specified ESXi host as a datastore, starts the VMs within a NAT-protected segment of your vSphere infrastructure, runs either the included role-based scripts or custom ones you specify to ensure the applications are responding correctly, and then, when done, shuts the lab down and fires off an e-mail.

That workflow is great and all, but it only touches the edge of what SureBackup can do for you. In our environment not only do we have a mandate to provide backup tests that allow for end-user interaction, but we also use SureBackup for test-bed applications such as patch testing. An example of the latter: when I was looking to upgrade our internal Windows-based CA to Server 2012 R2, I was able to launch the server in the lab, perform the upgrade, and ensure that it behaved as expected, WITHOUT ANY IMPACT ON PRODUCTION, and then tear down the lab like it never happened. Allowing the VMs to stay up and running after the job starts requires nothing more than checking a box in your job setup.

By default, access to a running lab is fairly limited. When you launch a lab from your Veeam server, a route to the NAT’d network is injected into the Veeam server’s own routing table to allow access, but that doesn’t help much if you want others to be able to interact; we need to expand that access outwards. This post walks you through the networking setup for a Virtual Lab that can be accessed from wherever you need, in my case from anywhere within my production network.

Setting Up the Virtual Lab


The first step, if you haven’t set up SureBackup in your environment at all, is to set up your Virtual Lab. The first of two parts critical to this task is setting the Proxy IP, which is the equivalent of your outside NAT address if you’ve ever worked on a firewall. This IP is essentially the production-network side of the proxy appliance VM that is created when you set up a Veeam Virtual Lab.

1-set-nat-host

Next we need to set up an isolated network for each production port group you need to support. While I use many VLANs in my datacenter, I try to keep the application groups I need to test on the same VLAN to keep this setup simple, but it doesn’t have to be that way; you can support as many as you need. Simply hit Add, browse out and find the production port group you need to support, give the isolated network a name, and specify a VLAN.

2a-setup-vlans

The last step of setting up the Virtual Lab in this regard is creating a virtual NIC to map to each of your isolated networks. Where I see a lot of people get tripped up: always make the proxy appliance IP address here match the default gateway of the production network it reflects. If you don’t, the launched lab VMs will never be able to talk outside of the lab. Second, in regard to the Masquerade IP, aim for some consistency. Notice that in my production network I am using Class B private address space with a Class C mask; by default this throws off the automatic generation of the Masquerade IP, and I’ve found it isn’t always consistent across multiple virtual NIC setups. If you set up multiple isolated networks above, you need to repeat this process for each one. Once you are done you can complete the Lab setup and hit Finish to have it build or rebuild the appliance.

2-create-nat-network

Tweaking the SureBackup Job

For the sake of brevity I’m assuming at this point that you’ve got your Application Groups set up without issue and are ready to configure your SureBackup job to stay up and running. On the Application Group screen of the job, all you have to do is check the “Keep the application group running after the job completes” box. That’s it. Really. Once you do, the lab will stay up and running until you right-click the job in the Veeam Backup & Replication console and choose Stop. I’ve been lobbying for years for a “stop after X hours” option but still haven’t gotten very far with that one (a rough scripted workaround is sketched after the screenshot); really, the concern there is the performance impact of doubling part of your load, since you are essentially running two copies of a segment of your datacenter. If you have plenty of resources to burn it isn’t an issue.

3-keep-lab-up
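In the meantime you can approximate a “stop after X hours” behavior by scheduling a stop from the Veeam server itself. The snippet below is only a sketch: it assumes Veeam’s PowerShell snap-in and its Get-VSBJob/Stop-VSBJob cmdlets (check the cmdlet reference for your version), and “Web App SureBackup” is just a placeholder job name.

  # Run as a Windows scheduled task a few hours after the SureBackup job starts;
  # stopping the job tears the running lab back down.
  Add-PSSnapin VeeamPSSnapin
  Get-VSBJob -Name "Web App SureBackup" | Stop-VSBJob

Pair that with a scheduled task trigger offset from the job’s start time and you get most of the behavior I’ve been asking for.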

Fixing the Routing

Now the final step is to either talk to your network guy or go yourself to where your VLAN routing takes place and add a static route for the lab’s IP range to the routing table, pointing at the Proxy Appliance’s IP. For the example we’ve been working through in this post, our proxy appliance has an IP of 172.16.3.42 and all of our lab networks fall within 172.31.0.0/16. If you are using an IOS-based Cisco switch to handle your VLAN routing, a single static route does the job.
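Something along these lines should do it; adjust the network, mask, and next hop to match your own addressing:

  ! Send the masqueraded lab range to the proxy appliance's production-side IP
  ip route 172.31.0.0 255.255.0.0 172.16.3.42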

After that is done, from anywhere that route is reachable you should be able to pass whatever traffic you need inbound to the lab network addresses. Sticking with our example, for a production VM with the IP address 172.16.3.10, you would interact with 172.31.3.10 in whatever way needed. Keep in mind this is, for lack of a better word, one-way traffic: you can connect in to any of the hosts within the lab network, but they can’t really reach directly out and interact with the production network.
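A quick way to sanity-check the route before handing the lab over is a simple port test from any production machine. The address and port below are just this post’s example values, so substitute whatever host and service you actually care about:

  # Confirm the masqueraded lab address answers on the port you expect
  Test-NetConnection -ComputerName 172.31.3.10 -Port 443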

4a-testing

One More Thing…

One final tip I can give you, if you are going to let others in to play in your labs, is to have at least one workstation-grade VM included in each of your Application Groups with the software needed for testing already loaded. This way you can enable RDP on that VM and the user can just double-click an icon and connect into the lab, running their tests from there. Otherwise, if you have locally installed applications that need to connect to hosts that are now inside the lab, you are either going to need to reconfigure the application with the corrected address or temporarily modify the user’s hosts file so that they connect to the right place, neither of which is particularly easy to manage. The other nice thing about a modern RDP session is that you can cut and paste files in and out of it, which is handy if the user wants to run reports and the like.
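To make that even easier you can drop a shortcut on the tester’s desktop; 172.31.3.50 here is just a made-up masqueraded address for such a workstation VM, so use your own:

  # RDP straight to the lab copy of the test workstation
  mstsc /v:172.31.3.50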

4-connecting-into-the-lab

As an aside, I’m contemplating doing a video run-through of setting up a SureBackup environment to be added to the blog next week. Would you find such a thing helpful? If so please let me know on Twitter @k00laidIT.

Fun with the vNIC Shuffle with Cisco UCS

Here at This Old Datacenter we’ve recently made the migration to Cisco UCS for our production compute resources. UCS offers a great number of opportunities for system administrators, both in deployment and in ongoing maintenance, making updates to the physical layer as manageable as we virtualization admins are used to at the virtualized layer of the DC. Of course, like any other deployment, there is always going to be that one “oh yeah, that” moment. In my case, after I had my servers up I realized I needed another virtual NIC, or vNIC in UCS parlance. This shouldn’t be a big deal, because a big part of what UCS does for you is abstract the hardware configuration away from the actual hardware.

For those more familiar with standard server infrastructure: instead of having any number of physical NICs in the back of the host for specific uses (iSCSI, VM traffic, specialized networking, etc.), you have a smaller number of connections from the Fabric Interconnects to the blade chassis that are logically split to provide networking to the individual blades. These Fabric Interconnects (FIs) not only have multiple very high-speed connections (10 or 40 GbE), but each chassis will typically have multiple FIs to provide redundancy throughout the design. All that being said, here’s a very basic design utilizing a UCS Mini setup with Nexus 3000 switches and a copper-connected storage array:

ucs-design

So are you starting to think this is a UCS geeksplainer? No, no my good person, this is actually the story of a fairly annoying hiccup in the relationship between UCS and VMware’s ESXi. You see, while adding a vNIC should be as simple as creating your vNICs in the Service Profile, rebooting the affected blades, and watching the new NIC(s) show up as available within ESXi, it of course is not that simple. What happens in reality when you add new vNICs to an existing physical-NIC-to-vSwitch layout is that the relationships get shuffled. So for example, say you started with a vNIC (shown as vmnicX in ESXi) to vSwitch layout that looks like this:

1-before

After you add vNICs and reboot, it looks like this:

2-after

Notice the vmnic-to-MAC-address relationships in the second screenshot. While all the moving pieces are still there, different physical devices now map to different vSwitches than designed. This really matters when you think about all the differences that usually exist in the VLAN design underlying the networking in an ESXi setup. In this example vSwitch0 handles management traffic, HQProd-vDS handles all the VM traffic (so just trunked VLANs), and vSwitch1 handles iSCSI traffic. Especially when things like iSCSI, which require specialized network setup, are involved, this becomes a nightmare; frankly I couldn’t imagine having to do this with a more complex design.

The Fix

So I’m sure you are sitting here, like I was, thinking “I’ll call support and they will have some magic that will either a) fix this, b) prevent it from happening in the future, or preferably c) both.” Well, not so much. The answer from both VMware and Cisco support is to figure out which NICs should be assigned to which vSwitch by reviewing the MAC-to-vNIC assignments in UCS Manager, as shown below, and then manually manage the vSwitch uplink assignments on each host; a quick PowerCLI sketch for pulling the same vmnic-to-MAC mapping from the ESXi side follows the screenshots.

3-corrected

4-correctedesx
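If you’d rather not click through each host’s networking screens, something like this read-only PowerCLI snippet will dump the vmnic-to-MAC mapping for every host so it can be lined up against what UCS Manager shows. The vCenter name is a placeholder, and the commented reassignment lines at the end are just one possible next step, so test them carefully before touching an iSCSI uplink:

  # List every physical vmnic and its MAC on each host so the output can be
  # compared against the vNIC MACs shown in UCS Manager.
  Connect-VIServer vcenter.lab.local   # placeholder vCenter name

  Get-VMHost | ForEach-Object {
      $esx = $_
      Get-VMHostNetworkAdapter -VMHost $esx -Physical |
          Select-Object @{N='Host';E={$esx.Name}}, Name, Mac
  } | Sort-Object Host, Name | Format-Table -AutoSize

  # Once you know which vmnic belongs where, uplinks can be moved with, for example:
  # $vmnic = Get-VMHostNetworkAdapter -VMHost $esx -Physical -Name vmnic3
  # Add-VirtualSwitchPhysicalNetworkAdapter -VirtualSwitch (Get-VirtualSwitch -VMHost $esx -Name vSwitch1) -VMHostPhysicalNic $vmnic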

As you may be thinking, yes, this is a pain in the you know what. I only had to do this with four hosts; I don’t want to think about what this looks like in a bigger environment. Further, as best I can get answers from either TAC or VMware support, there is no way to make this go better in the future; this was not an issue with my UCS setup, it is just the way it is. I would love it if some of my “Automate All The Things!!!” crew could share a counterpoint on how to automate your way out of this, but I haven’t found it yet. Do you have a better idea? Feel free to share it in the comments or tweet me @k00laidIT.

VMware Tools Security Bug and Finding which VMware Tools components are installed on all VMs

Just a quick post related to today’s VMware security advisories. VMware released a pair of advisories today, CVE-2016-5330 and CVE-2016-5331, and while both are nasty, their scopes are somewhat limited. The 5331 issue is only applicable if you are running vCenter or ESXi 6.0 or 6.0 U1; Update 2 patches the bug. The 5330 issue is limited to Windows VMs that are running VMware Tools and have the optional HGFS (Shared Folders) component installed. To find out if you are vulnerable, here’s a PowerCLI approach to walk all your VMs and list the installed Tools components. Props to Jason Shiplett for giving me some assistance on the code.
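The general approach looks something like this; the guest-side WMI query and the credential prompt are just one way to get at the component list (VMware Tools has to be running in each guest for Invoke-VMScript to do its thing), so treat it as a sketch:

  # Walk every powered-on Windows VM and list the VMware Tools drivers/components
  # inside the guest, filtered by $componentPattern.
  $guestCred        = Get-Credential -Message "Guest OS administrator credentials"
  $componentPattern = "vm"   # matches everything VMware-ish; see below for vmhgfs only

  Get-VM |
      Where-Object { $_.PowerState -eq "PoweredOn" -and $_.Guest.OSFullName -match "Windows" } |
      ForEach-Object {
          $vm  = $_
          # Win32_SystemDriver includes kernel and file system drivers such as vmhgfs
          $out = Invoke-VMScript -VM $vm -GuestCredential $guestCred -ScriptType PowerShell `
                   -ScriptText "Get-WmiObject Win32_SystemDriver | Select-Object -ExpandProperty Name"
          $out.ScriptOutput -split "`r?`n" |
              ForEach-Object { [pscustomobject]@{ VM = $vm.Name; Name = $_.Trim() } } |
              Where-Object { $_.Name -match $componentPattern }
      }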

While the output is still a little rough, it will get you there. Alternatively, if you are only using this script for the advisory listed, you can change where-object { $_.Name -match $componentPattern } to where-object { $_.Name -match "vmhgfs" }. This script is also available on GitHub.

What’s New in vSphere 6: Licensing

Today's release of vSphere 6 brings about quite a few new technologies worth getting excited about. These include Virtual Volumes (VVOLs), OpenStack integration, the global Content Library, and long-distance vMotion. Now, for many of us, especially in the SMB space, the question is whether we can afford to play with them. As usual VMware very quietly released the licensing-level breakout of these and other new features, and I have to say my first take is that this is another case of the rich getting richer.

If you are already licensed at the Enterprise Plus level you are in great shape, as everything discussed today except VSAN is included. Specifically, this includes:

  • Cross-vCenter and long-distance vMotion
  • Content Library
  • vGPU
  • VMware Integrated OpenStack

While that's great and all, and I applaud the development, quite a few other licensing levels have been left out. Personally, my installations are done at either the Standard or Enterprise level. The only major feature with across-the-product-line support is VVOLs, which is nice, but I honestly expected them to at least move some version 5 features such as Storage DRS down a notch to the Enterprise level, and I figured the Content Library would come in at the Essentials Plus or Enterprise level.

As Mr. Geitner alluded to in his talk, about half of all vSphere licenses are Enterprise Plus, and my guess is the company really wants to see that number grow. Here's hoping that, like vRAM, this recent trend of heavily loading features into the highest level is quickly rectified, because I think it is going to be just as popular.


VMware’s Big February 2nd Announcement

VMware will be having a big announcement event next week, most likely regarding the public release of their vSphere 6 suite of products. Version 6 has been in a “private” beta that anyone can join for the past five months or so and looks to include various features to move the product along. The beta program is still open for enrollment, with the latest version being an RC build; you can sign up here to gain access not only to the bits themselves but also to various documents and recorded webinars regarding the new features.

Just going by what was discussed at VMworld 2014, this version includes:

  • Virtual Volumes: A VMware/storage-vendor interoperability technology that masks much of the complexity of storage management from the vSphere administrator and makes storage more virtualization-centric than it already is. There is a lot of information already out there on this through the power of Google, but the product announcement on the VMware blogs is nice and concise.
  • The death of the fat VI Client: This is the release where we are supposed to be going whole hog on the vSphere Web Client. Can you feel the enthusiasm I have for this?
  • vMotion Enhancements: One feature really worth getting worked up about is the ability to vMotion across both vCenters and datacenters, neither of which was possible in the past. This is great news.
  • Multi-CPU VM Fault Tolerance: While the Fault Tolerance feature, in essence the ability to keep a live replica of a protected VM on a separate host within your datacenter, has been around for years, it has been relegated to the also-ran category due to some pretty stringent requirements for VMs protected in this manner. In vSphere 6, protecting VMs with multiple CPUs will finally be supported.

In any case the announcement will be available for all to attend online. You can register to attend the event at VMware’s website.

Thoughts on the vSphere 6 Open Beta

Ahead of its annual VMworld conference (which I will be attending this year, yay!) VMware has announced version 6.0 of its vSphere line of products, including ESXi, vCenter, and just about every other VMware-related topic I’ve written about here. The company has chosen to mix it up a little this year by making the beta program itself public, but joining the actual program requires signing an NDA keeping anything you learn private. I take this to mean that while the framework is there, this is still very much a work in progress, with the community at large having the opportunity to greatly influence what we will see in the final product.

As I cannot directly talk about anything I’m learning from the beta itself, I highly recommend anybody with a little lab space go sign up for the beta, try it out for themselves, and start providing feedback. Instead, what I’m going to discuss here is my wish list for when 6.0 finally goes gold, as well as the basics of the long-discussed Virtual Volumes feature that was released into beta alongside vSphere.

Wish List

As I mentioned above, the beta for vSphere 6 requires a non-disclosure agreement, even if it is open to the public. To learn what is actually coming in vSphere 6 I urge you to go join the beta for yourself, as there is a great deal of information in there for those who wish to really learn and understand the product(s). Below is a list of things that I, and a great many others, very much wish to see as this release comes to be.

  • Bye Bye VI- Consider this your warning: the desktop Virtual Infrastructure client should be no more this time around. We’ve been warned for a couple of years that when the next major release of vSphere comes, the Web Client will be the only option. While it’s a great idea, and vendor integration with it seems to be becoming very handy, it does make me wish for…
  • HTML5-based web client- Seriously, VMware, 2005 called and wants its website back. The current iteration of the web client is based on Adobe Flash, which means proprietary code, security bugs, and no iPads. In a day and age when open standards are available to provide similar functionality, why aren’t you using them?
  • A full-featured vCenter Appliance- With vSphere 5 we began to see the vCenter Server Appliance (vCSA) presented as a viable alternative to the application running on top of a Windows Server. That said, it still lacks some things that in my opinion are deal breakers in terms of replacing my Windows vCenter boxes. These include:
    • Update Manager support
    • Linked Mode
    • Greater database support (at a bare minimum MS-SQL)
  • Fix SSO/ Directly utilize AD/LDAP for an identity source- SSO got better with vSphere 5.5 as compared to 5.0 and 5.1, but I am still flummoxed by the idea that VMware feels the need to reinvent the authentication wheel. I would guess that the installations where there isn’t already some form of authentication source available, such as Active Directory or Kerberos, are few. Please leverage those systems and cut out the middleman.
  • Virtual Volumes- see below but this is a pretty good bet to be there
  • Greater IPv6 support- IPv6 support has been around for a while, but using it in vSphere 5 will break some things, and it still requires you to at least have an IPv4 loopback configured.
  • Marvin-related things- VMware has been hinting all summer at the super-secret “Project Marvin.” There is a little real information and a lot of speculation going around the internet. Essentially it is described as “the first hyperconverged infrastructure appliance,” leading many to think that VMware is either about to get into the hardware game or partnering with somebody to do the same.

Virtual Volumes

Virtual Volumes is a storage-centric feature that has been discussed and released to the public as a technical preview since at least 2012 and is a spin-off idea from the original concept of VAAI. Typically, when creating a new VM, a VMware admin needs to either contact the storage admin to carve out a LUN each time, do so themselves, or, as many of us (myself included) do, create impossibly large LUNs and place multiple VMs within them, which is actually pretty wasteful and negatively impacts system performance. The goal of VVOLs is to make storage VM-centric rather than LUN-centric by leveraging the vSphere API for Array Integration (VAAI) to make the deployment of storage just a component of deploying a VM, in whatever manner you choose to do so. Put as simply as possible…

VVOLs is the storage of VM files directly on the storage system without a LUN middle man.

If you think about all the different ways you utilize storage in your virtualization strategy, this makes even more sense. You can take snapshots and clones at both the VM and the LUN level today; what if they were one and the same?

Of course this is not going to be possible without support from the vendor ecosystem, and that apparently is coming in droves. As VVOLs enters the beta program alongside vSphere 6 we are seeing demonstrations of support from a variety of storage providers, including Dell, NetApp, EMC, HP, Nimble Storage, SolidFire, and Tintri, and open beta programs from HP, NetApp, IBM, and Dell.

To really take a deep dive into what VVOLs is and how to implement it, I recommend reading these posts from Cormac Hogan and Duncan Epping, as well as enrolling in the beta yourself if you have some supported hardware.