A how-to on cold calling from the customer perspective

Now that I'm back from my second tech conference in less than two months I am fully into cold call season, and I am once again reminded why I keep meaning to buy a burner phone and set up a Gmail account before I register next year. It seems every time I get back I am destined for months of "I am so glad you expressed deep interest in our product and I'd love to tell you more about it," when the reality is "I am calling you because you weren't nimble enough to lunge away from our team of booth people who are paid or retained based on how many scans they can get." Most often when I get these calls or e-mails I'll give each company a courteous thanks-but-no-thanks, and after that the iDivert button gets worn out.

The genesis of this post is two-fold. First, a cold call this morning that was actually destined for my boss; when informed he wasn't here, the caller launched into how glad they were that I had personally expressed interest in their product. WTF? That reminded me of a second event, a few months ago, when I was at a mixer preceding a vendor-supplied training and was approached by a bevy of 20-something Inside Sales Engineers asking "what can I do to actually get you to listen?" So, just in case a young Padawan Sales Rep/Engineer happens to come across this, here are those ways to make your job more efficient and to stop alienating your potential customers.

Google Voice is the Devil

I guess the first step for anybody on the calling end of a cold call scenario is to get me to answer the phone. My biggest gripe in this regard, and the quickest way to earn the hang-up achievement, is the current practice of many startups out there of using Google Voice as their business phone system. In case you don't know, Google Voice does local exchange drop-offs when you call outside of your local calling area, meaning that when you call my desk I get a call with no name and a local area code, leaving me with the quandary of "is this a customer needing support or is this a cold call?" I get very few of the former, but on the off chance it is one I will almost always answer, leaving me hearing your happy voices.

I HAVE AN END CALL BUTTON AND I AM NOT AFRAID TO USE IT, GOOD DAY TO YOU SIR/MADAM!

You want to know how to do this better? First, don't just call me. You've got all my contact info, so let's start by being a little more passive: send me an e-mail introducing yourself and asking if I have time to talk to you. Many companies do this already because it brings a good deal of benefits; I've now captured your contact info, we're not wasting a lot of time on each other if there is zero interest, and I don't have to drop what I am dealing with to get your pitch. If this idea just absolutely flies in the face of all that your company holds dear and you really must cold call me, then don't hide behind an anonymous number; call me from your corporate number (or, even better, your personal DID) with your company's name plastered on the Caller ID screen so at least I have the option to decide if it's a call I need to deal with.

A Trade Show Badge Scan List Does Not Mean I am (or anybody else is) Buying

I once again had an awesome time at VMworld this year but got to have an experience that I'm sure many other attendees have had variants of. There I was, happily making my way through the show floor through a throng of people, when out of my peripheral vision a booth person for a vendor who shall remain nameless practically stepped through another attendee and was reaching to scan my badge while simultaneously asking "Hi, do you mind if I scan you?" Yes, Mr./Ms. Inside Sales person, this is the type of quality customer interaction that resulted in me being put on your list. It really doesn't signify that I have a true interest in your product, so please see item one above regarding how to approach the cold call better.

I understand there is an entire industry built around having people capture attendee information as sales leads, but this just doesn't seem like a very effective way to do it. My likelihood of talking to you more is much higher if someone with working knowledge of your product, say an SE, talks to me about it either in the booth or at a social event, and the communication starts there. Once everybody is back home and doing their thing, that's the call I'm going to take.

Know Your Product Better Than I Do

That leads me to the next item: if by chance you've managed to cold call me, get me to pick up, and finally manage to keep me on the line long enough to actually talk about your product, ACTUALLY KNOW YOUR PRODUCT. I can't tell you how many times I've received calls after a show where the person on the other end of the line is so blatantly doing the fake-it-until-you-make-it thing it isn't funny. Keep in mind you are in the tech industry, cold calling people who most likely are fairly tech savvy and capable of logical thought, so that isn't going to work so well for you. Frankly, my time is a very, very finite resource, and even if I am interested in your product (which is why I took your call), having to correct the caller is an instant turn-off.

I get that the people manning the phones aren't going to be Senior Solutions Architects for your organization, but try this on for size: if you've got me talking to you and you get asked something you don't know, don't be afraid to say you don't know. This is your opportunity to bump me up the chain or to loop a more technical person into the call to get the discussion back on the right track. I will respect that far more than if you try to throw out a BS answer. Meanwhile, get as much education as you can on what you're selling. I don't care if you are a natural salesperson; you aren't going to be able to sell me my own pen in this market.

Employees != Resources

So you've got yourself all the way through the gauntlet, you've got me talking, and you know your product; please don't tell me how you can get some resources arranged to help me with designing my quote so the deal can move forward. I was actually in a face-to-face meeting once where the sales person did this, referring to the technical people within the organization as resources, and I think my internal response can best be summed up in GIF form:

[GIF: Obama kicks a door open]

This absolutely drives me bonkers. A resource is an inanimate object that can be used repeatedly without consequence, right up until the inevitable point where it breaks. What you are calling a resource is a living, breathing, most likely highly intelligent human being who has all kinds of responsibilities, not just to you but to his family, community, and any number of other things. By referring to people this way, and therefore showing that you think of them as something that can be used repeatedly without consequence, you are demeaning that person and the skill set he or she has, and trust me, that person is most likely who we as technical professionals are going to connect with, far more than with you.

So that's it, Jim's guide to getting me on the phone. I'm sure as soon as I post this many other techniques will come to mind and I'll have to update it. If you take this to heart, great; I think that is going to work out for you. If not, well, I still hope I'll remember to buy that burner phone next May, and the Gmail account is already set up. 😉

Veeam Backup Repository Best Practices Session Notes

After a couple days off I'm back to some promised VeeamON content. A nice problem VeeamON had this year is that the session choices were much more diverse and there were a lot more of them. Unfortunately this led to some overlap between really great sessions. A friend of mine, Jaison Bailey of vBrisket fame and fortune, got tied up in another session and was unable to attend what I considered one of the best breakout sessions all week, Anton Gostev's Backup Repository Best Practices, so he asked me to post my notes.

For those not too familiar with them, Veeam repos can essentially be any manner of addressable disk space, whether local, DAS, NAS, SAN or even cloud, but when you start taking performance into account you have to get much more specific. Gostev, who is the Product Manager for Backup & Replication, lays out the way to do it right.

Anyway, here are the notes, including links to further information where possible. Any notations of my own are marked with an asterisk and italicized.

Don’t underestimate the importance of Performance

  • Performance issues may impact RTOs

Five Factors of choosing Storage

  • Reliability
  • Fast backups
  • Fast restores
  • DR from complete storage loss
  • Lowest Cost

Ultimate backup Architecture

  • Fast, reliable primary storage for the fastest backups, then backup copy to secondary storage both onsite AND offsite
  • Limit the number of restore points (RPs) on primary, leverage cheap secondary
  • Selectively create offsite copies to tape, DR site, or cloud

Best Repo: Low End

  • Any Windows or Linux Server
    • Can also serve as the backup/backup proxy server
  • Physical server storage options
    • Local Storage
    • DAS (JBOD)
    • SAN LUN
  • Virtual
    • iSCSI LUN connected to an in-guest volume

Best Backup Repo: High End

Backup Repos to Avoid

  • Low-end NAS & appliances
    • If stuck with it, use iSCSI instead of other protocols *Ran into this myself with a QNAP array as my secondary storage; it's not really even feasible to run anything I/O heavy on it
  • SMB (CIFS) network shares
    • Lots of issues with existing SMB clients
    • If the share is backed by a server, add the actual server as the repository instead
  • VMDK on VMFS *Nothing wrong with running a repo from a virtual machine, but don't store backups within it; instead connect an iSCSI LUN directly to the VM and format it NTFS (see the quick sketch after this list)
    • Extra logic on the data path - more chances for data corruption
    • Dependent on vSphere being functional
  • Windows Server 2012 Deduplication (scalability) *I get his rationale, but honestly I live and die by 2012 R2 deduplication; it just takes more care and feeding than other options. See my session's slides for notes on how I implement it.
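Since a couple of these items come down to "hook an iSCSI LUN straight into the guest," here is a minimal sketch of what that looks like on a Windows repo VM using the built-in iscsicli tool. This is my own illustration rather than anything from the session; the portal IP, target IQN and drive letter are placeholders, and most people will do the same thing through the iSCSI Initiator control panel instead.

   :: Minimal sketch (mine, not Gostev's): present an iSCSI LUN to the in-guest
   :: initiator instead of storing backups inside a VMDK.
   sc config MSiSCSI start= auto
   net start MSiSCSI
   iscsicli QAddTargetPortal 192.168.1.50
   iscsicli ListTargets
   iscsicli QLoginTarget iqn.2015-01.com.example:backup-lun01
   :: Once the new disk is online with a drive letter, format it per the File System notes further down.
   format R: /FS:NTFS /L /Q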

Immediate Future: Technologies to keep in mind

  • Server 2016 Deduplication
    • Same deduplication, far greater performance and scale (64 TB files) *This really will be a big deal in this space; there is a lot of upside to a simple dedupe ability rolled into a Windows server
  • ReFS 2.0
    • Great fit for backup repos because it has built-in data corruption protection
    • Veeam is currently working on some things with it
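The corruption protection called out above comes from ReFS integrity streams, so if you want to kick the tires in a lab ahead of Veeam's official support, here is a minimal sketch of formatting a test volume as ReFS with integrity enabled. This is purely my own illustration (the session didn't cover ReFS setup), and the drive letter and label are placeholders.

   :: Minimal lab sketch (my illustration, not from the session): format a test
   :: volume as ReFS with integrity streams enabled.
   format E: /FS:ReFS /I:enable /Q /V:RepoTest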

Raw Disk

  • RAID10 whenever you can (2x write penalty, but capacity suffers)
  • RAID5 (4x write penalty, greater risks)
  • RAID6: severe performance overhead (6x write penalty; see the quick math after this list)
  • Look up maximum performance per spindle
  • A single job can only keep about 6-8 spindles busy - use multiple jobs if you have them to saturate
  • RAID volume
    • Stripe Size
      • Typical I/O for Veeam is 256KB-512KB
      • Windows Server 2012 defaults to 64KB
      • At least a 128KB stripe size is highly recommended
        • Huge change for things like synthetics, etc.
  • RAID array
    • Fill as many drives as possible from the start to avoid expansion
    • Low-end storage systems have significant performance problems
  • File System
    • NTFS (Best Option)
      • Larger block size does not affect performance, but it helps avoid excessive fragmentation, so a 64KB block size is recommended
      • Format with /L to enable larger file records
      • 16 TB max file size limit before 2012 (now 256 TB)
      • *Full string of best practices for formatting an NTFS partition from the CLI: Format <drive:> /L /Q /FS:NTFS /A:8192
    • ReFS not ready for prime time yet
    • Other
  • Backup Job Settings
    • Always a performance vs. disk space choice
    • Reverse incremental backup mode is 3x I/O per block
    • Consider forever incremental instead
    • Evaluate transform performance
    • Repository load
      • Limit concurrent jobs to a reasonable number
      • Use ingest rate throttling for cross-SAN backups
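To put those write penalties in perspective, here is some quick back-of-the-napkin math using my own assumed numbers, not Gostev's: say 12 NL-SAS spindles at roughly 150 write IOPS each.

   12 spindles x ~150 write IOPS = ~1,800 raw write IOPS
   RAID10 (2x penalty) = ~900 effective write IOPS
   RAID5  (4x penalty) = ~450 effective write IOPS
   RAID6  (6x penalty) = ~300 effective write IOPS

Same shelf of disks, roughly a 3x swing in how fast the repository can ingest writes, which is why the RAID level you pick and the number of jobs you point at it matter so much.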

Dedupe Storage: Pains and Gains

  • Gains
    • True global dedupe
    • Lowest cost/TB
  • Do not use deduplicating storage as your primary backup repository!
  • But if you must: leverage vendor-specific integrations, use backup modes without full backup transformation, and use active fulls instead of synthetics
  • If backup performance is still bad, consider VTL
  • 16TB+ backup storage optimization for 4MB blocks (new)
  • Parallel processing may impact dedupe ratios

Secondary Storage Best Practices

  • Vendor-specific integrations can make performance better
  • Test Backup Copy retention processing performance. If too slow, consider the Active Full option of backup copy jobs (new in v9)
  • If already invested and stuck
    • Use as primary storage and leverage native replication to copy backups to DR

Backup Job Settings BP

Built-In deduplication

  • Keep ON for best performance (except on the lowest-end devices), even if it isn't going to help you with per-VM backup files
  • Compression
    • Instead of disabling it, keep Optimal enabled in the job and use "decompress before storing" - even locally
    • Dedupe-friendly isn't very friendly any more (new)
      • Will hinder faster recovery in v9
  • Vendor recommendations are sometimes self-serving to achieve higher dedupe ratios but negatively affect performance

Disk-based Storage Gotchas

  • Gostev loves tape
    • Cheaper
    • Reliable
    • Read-only
    • Customer Success is the biggie
    • "Tape is dead"
      • Amazon, Google & 50% of Veeam customers disagree
  • Storage-level corruption
    • RAID Controllers are your worst enemies
    • Firmware and software bugs are common, too
    • See VT402 (Data Corruption) tomorrow at 1:30 for more
  • Ransomware possible

The "2" Part of the 3-2-1 Rule

  • 3 copies, 2 different media, 1 offsite
  • Completely different storage type!

Storage-based replication

  • Betting exclusively on storage-based replication will cost you your job
  • Pros:
    • Fantastic performance
    • Efficient bandwidth utilization
  • Cons:
    • Replicates bad data too
    • Backups remain in a single fault domain

Backup Copy vs. Storage-Based Copy

  • Pros:
    • Breaks the data loop (isolated source and target storage)
    • Implicitly validates all source data during its operation
    • Includes backup files health check
  • Cons:
    • Higher load on backup storage

Make Tape out of drives

  • Low End:
    • Use rotated drives
    • Supported for both primary & backup copy jobs
  • Mid-range:
    • Keep an off-site copy off-prem (cloud)
  • High End:
    • Use hardware-based WORM solutions

Virtualize your Repository (SOBR)

  • Simplify backup storage and backup job management
  • Reduce storage hardware spending by allowing disks to be fully utilized
  • Improve backup storage performance and scalability

 

Presenting at VeeamON 2015: Design, Manage and Test Your Data Protection with Veeam Availability Suite

Last week I was presented with the honor of being invited to speak at Veeam Software's annual user conference, VeeamON. While this was not my first time doing so, I was very happy with the end result this year, with 30-40 attendees and positive feedback both from people I knew beforehand and from new acquaintances who attended.

My session is what I like to think of as the 1-1000 MPH with Veeam, specifically targeting the SMB space but with lots of general guidelines for how to get your DR system up and running fast and as error-free as possible. Some of the things I do with Veeam buck the Best Practices guide, but we have been able to maintain high levels of protection over many years without much interruption. The session starts with the basics of designing your DR plan, then moves to designing your Veeam infrastructure components to suit your needs, followed by tips for the actual implementation and other tricks and gotchas I've run into over the years.

Anyway, due to the amount of information that was covered I promised attendees that I would put my slide deck out here for reference, so here it is. If anybody has comments, questions or anything in between, please feel free to reach out to me either through the comments here or on Twitter. For attendees, please keep an eye on your e-mail and the #VeeamON hashtag, as the videos of all presentations should be made available in the coming weeks.

VeeamON 2014: Conference Season Veeam Style

I write this aboard just about the coolest-painted plane I've had the pleasure of flying on, en route to Las Vegas, NV to attend and speak at the inaugural VeeamON conference being held at the Cosmopolitan. The conference is put on by Veeam Software, one of the leaders in virtualization backup, known best for its Veeam Backup & Replication product. The conference itself represents a pretty big milestone for a global company that, in my opinion, has done a very solid job of getting social right from the corporate standpoint. It is also well timed, thanks to the pending version 8 release of Backup & Replication.

I have been working with Veeam's Backup & Replication software for a little over four years now and find it to be both powerful and easy to use, a nice combination when talking about the product responsible for the safety of your data. I will be speaking about my experiences with this software package from the small government organization standpoint and how it helps us deal with some of the particular challenges that come with being in that segment. My session will be on Wednesday at 8:30 AM.

This will be my first time speaking in this type of setting so we'll see how it goes, but there will be no shortage of seasoned veterans providing sessions. Others speaking include a good number of the staff from Veeam, including Anton Gostev, Doug Hazelman, Rick Vanover & Ben Milligan, and those are just the ones I'm personally familiar with. Further, the virtualization industry will also be well represented by the likes of Chris Wahl, Symon Perriman, and Joep Piscaer. Finally, Alexis Ohanian of Reddit will serve as the celebrity speaker. All in all, for a first-time event they seem to have brought in some very strong speakers; we'll see if I can hold up my part.

What To Look For
One of the things that I really like about this conference is the variety of options they are providing attendees to make the most of their time. Monday is Partner Day, open only to their partners, but at the same time they will be having a variety of community-driven Veeam User Group sessions for the rest of us attending. Also from the community side of things, there will be a few vBrownbag sessions sprinkled through Tuesday and Wednesday. These are generally much shorter, 15-20 minutes, and are great for people to share little tips and tricks of the industry. I myself will be providing a session on Physical Backup Strategies on Tuesday at 8:20, talking about how we use the open-source product Areca Backup to handle backing up the few physical machines I have left in my environment.

One of the biggest draws, and one that will be of great importance to both me and my employer, is the ability to take the Veeam Certified Engineer (VMCE) course while attending. This course, a prerequisite to being able to sit the VMCE exam, typically costs $3,000 US and lasts 5 days. At the conference they will be condensing it into 2.5 days, and conference attendees are able to take the course for only $650.

Also going on alongside the sessions are the Lab Warz game and an offsite tour of a modern datacenter. Registrants for Lab Warz will compete against each other to create the ultimate data protection scenario for cash and prizes. The offsite tour will take a group of attendees to the Cobalt Cheyenne datacenter to see how a datacenter is done at large scale.

Keynotes
Even if you are unable to attend yourself, the keynotes on both Tuesday and Wednesday will be streamed live. The big news most likely will be the announcement of the general release of version 8 of Veeam's Availability Suite, which includes the Backup & Replication product as well as the Veeam ONE virtualization infrastructure monitoring package. Both of these products have been in beta for the past few months, and from my own personal experience with them Veeam has done a very good job of making great software better. I wouldn't be surprised if there were a few surprise announcements there as well. It's not every day you get to host your own inaugural global event; might as well take advantage.

Conclusion
I’m going to go ahead and sign out here for now. Be sure to check back later as I plan to update frequently through the week with news and information.