Spam, Whitelists, Blacklists, Greylists, Oh my!

There is much discussion on the internet about the wisdom of running one’s own mail server, and much of the criticism is valid. There are significant security concerns beyond the normal amount of maintenance any system requires. For reasons varied and irrelevant here, I have chosen to do so for over 15 years.

An aspect of doing so that is often not discussed, compared to using commercial services such as Gmail, is that one has to deal with spam entirely on one’s own. This is difficult for (at least) two reasons. The obvious one is that it is a hard problem to deal with; the not-so-obvious one is that my mail server has only two users, so I don’t have a large user base whose input can help identify spam.

For many years I have relied on tools such as SpamAssassin to try to identify spam once it has reached my mail server. I also make use of various blacklists to identify IP addresses that are known to deliver spam; I use Mailspike and Spamhaus.

This is the situation I was in up until this past weekend. Hundreds of emails a day would slip past the blacklists. SpamAssassin is very good, but it was still allowing around 5% of those messages to reach my inbox. And the problem seemed to be getting worse.

In the past, I had used greylisting, but I eventually stopped given the main side effect of that system: messages from new senders, legitimate or otherwise, would be delayed by at least 5 minutes. This is fine for most email, but for things like password resets or confirmation codes, it was just too much of an inconvenience.

What I wanted was a system where messages that are unlikely to be spam make it right through and all others get greylisted.

My boss mentioned a solution he once implemented that decided whether to greylist based on the spam score of the inbound email. This allowed him to greylist only things that looked like they might be spam. Unfortunately, the emails that were slipping through my existing system generally had very low scores (spammers test against SpamAssassin).

So, I pursued a different solution. Several services provide not only blacklists but also whitelists that give a reputation score for various IP addresses around the internet. I chose to use the whitelists from Mailspike and DNSWL.

I implemented a hierarchy:

  • Accept messages from hosts we have manually whitelisted
  • Reject messages from hosts on one of the watched blacklists
  • Accept messages from hosts with a high reputation score
  • Greylist everything else
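Conceptually, the ruleset is just an ordered series of checks. A minimal sketch in Python (the three lookup helpers are hypothetical stand-ins for the manual whitelist, the DNS blacklists, and the reputation whitelists; the threshold is an assumption):

```python
def screen_client(ip, manual_whitelist, is_blacklisted, reputation_score):
    """Decide what to do with a connecting client, per the hierarchy above.

    manual_whitelist is a set of IPs; is_blacklisted and reputation_score
    are placeholder callables standing in for the DNS-based lookups.
    """
    if ip in manual_whitelist:
        return "accept"        # manually whitelisted hosts go right through
    if is_blacklisted(ip):
        return "reject"        # known spam sources are refused outright
    if reputation_score(ip) >= 2:
        return "accept"        # well-reputed hosts skip greylisting
    return "greylist"          # everyone else waits out the greylist delay
```

Note the ordering: a host that is both blacklisted and well-reputed gets rejected, which matters in practice (more on that below).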

When I enabled this ruleset, I thought I had broken things. I stopped getting any email coming into my system. It turns out that I had just stopped all the spam. It was amazing.

In the two days I have been running this system, every legitimate email has made it to my inbox. I have seen 10-15 messages get through the initial screens and be correctly identified as spam by SpamAssassin. (In the early stages I had a few messages make it to my inbox, but I realized that was because I trusted the whitelists more than the blacklists, i.e. hosts were listed as both trustworthy and sending spam. As the blacklists seem to react faster, I decided to switch the order as shown above.)

You can look at the graphs to see when I turned the system on:

This first graph shows the number of messages that were accepted by my server (per second). You can see that the number dropped considerably when I turned on my hybrid solution. Since messages were now being rejected before they were accepted by the system, there are fewer messages for SpamAssassin to investigate.

This can be seen here, where the number of messages identified as spam also went down, because they were stopped before SpamAssassin even needed to look at them.

If you run Postfix and would like to implement a similar system, here is the relevant configuration section from my main.cf:

smtpd_recipient_restrictions = permit_mynetworks,
    check_client_access hash:/usr/local/etc/postfix/rbl_override,
    reject_rbl_client zen.spamhaus.org,
    reject_rbl_client bl.mailspike.net,
    permit_dnswl_client wl.mailspike.net=127.0.0.[18..20],
    permit_dnswl_client list.dnswl.org=127.0.[0..255].[2..3],
    check_policy_service unix:/var/run/postgrey.sock
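For what it’s worth, the bracketed octet ranges above filter on the A records these DNS whitelists return. As I understand the encodings, list.dnswl.org answers with 127.0.<category>.<trust>, where the last octet is a trust level from 0 (none) to 3 (high), so [2..3] admits medium and high trust; Mailspike similarly encodes reputation in the last octet, with [18..20] being the good-or-better range. A small helper to decode a dnswl.org answer (a sketch, only the dnswl.org encoding is shown):

```python
def dnswl_trust(answer: str) -> int:
    """Return the trust level encoded in a list.dnswl.org A-record answer.

    Answers have the form 127.0.<category>.<trust>, where the final octet
    is the trust level: 0 = none, 1 = low, 2 = medium, 3 = high.
    """
    octets = [int(part) for part in answer.split(".")]
    if len(octets) != 4 or octets[:2] != [127, 0]:
        raise ValueError(f"not a dnswl.org answer: {answer!r}")
    return octets[3]
```

So an answer of 127.0.10.3 would pass the [2..3] filter, while 127.0.10.1 would fall through to greylisting.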

So, overall this has been a resounding success. I hope this helps some of you out there with the same challenges.


Books of 2015

I like to read. It is my chosen form of escapism. After participating in the Goodreads 2015 Reading Challenge I thought it might be fun to gather some further statistics about how my reading changed throughout the year. My goal had originally been 25 books, which I way exceeded with a count of 36. So, for 2016, I set it to 35 books. Some observations:

  • I started a new job at the beginning of March, so that kept me busy
  • May was high in page count due to rereading (I can reread faster than I can read a fresh book)
  • Tiffany and I went on vacation in September, so lots of books there


And here is a nice collage of the titles taken from Goodreads.


What is your reading goal for 2016?


Basic road warrior VPN using a Cisco router

Welcome to another of my end of the year, oh-god-I-didn’t-blog-all-year posts. I’ve been trying to go through some of the geeky things I did this year that were a challenge for me and document them so that they might be easier for someone else.

Today’s topic: setting up a VPN server using a Cisco router for “road warrior” clients (i.e., devices that could be connecting from any IP address).

As should come as no surprise to anyone who knows me or who is exposed to my twitter stream, I value privacy and security, both from a philosophical perspective and as fun projects to tackle.

This project arose as an evolution of earlier VPN setups I have had in the past. When I was living in the Linux world (and before I purchased my Cisco router), I used a Linux server as my internet router. If you are in that situation, I highly recommend using the strongSwan VPN server. It is an enterprise-grade VPN server that is also easily configured to handle small situations. I often had multiple VPN tunnels up for fixed connections, both site-to-site and for road warriors, using both pre-shared keys (PSK) and X.509 certificates.

But when I upgraded our home network to using a Cisco 2811 router that I bought from a tech company liquidation auction for $11.57, running the strongSwan VPN from behind the NAT router became much more challenging. (Doable, but required some ugly source routing hacks I never liked.)

My requirements were:

  1. IPsec
  2. Capable of supporting iOS and Mac OS X clients
  3. Clients could be behind NATs (NAT-T support)
  4. Pre-shared Key support (I might do certificates again later, but as there are only 2 users of the VPN, seems like overkill.)
  5. All traffic from the clients will be routed through the VPN (no split-tunnels)
  6. Ability to do hairpin routing. (This means that a VPN client can tunnel all of its traffic, including traffic destined for the rest of the internet, to the VPN server, which will route it back out to the internet. This is critical for protecting your clients on untrusted networks.)

The biggest challenge I ran into was not a lack of capabilities in the Cisco platform, but the fact that it is designed for much, much larger implementations than I was going to do. In addition, most of the examples I found were for site-to-site configurations.

I don’t intend to go through all of the steps needed to set up a Cisco router; that is beyond the scope of this post. So I will be making the following assumptions:

  1. You are familiar working in the IOS command line interface
  2. You already have a working network
  3. It has a single external IP address (preferably a static IP)
  4. You have 1 (or more than 1) internal networks
  5. Internal hosts are NAT translated when communicating with the Internet
  6. You are familiar with setting up your ip access-list commands to protect yourself and allow the appropriate traffic in and out of your networks

OK, let’s go!

Note: For my setup, FastEthernet0/0 is my external interface (set up as ip nat outside)

User & IP address setup

Set up a user (or more than one) that will be used to access the VPN.

aaa new-model
aaa authentication login AUTH local
aaa authorization network NET local 
username vpn-user password 0 VERY-STRONG-PASSWORD

And set up a pool of IP addresses that will be given out to users who connect to the VPN.

ip local pool VPN-POOL FIRST-IP LAST-IP

ISAKMP Key Management

ISAKMP is the protocol used to do the initial negotiation and set up keys for the VPN session. First, we set up the general settings, such as the fact that we will be using 256-bit AES, PSKs, keepalives, etc.

crypto isakmp policy 1
 encr aes 256
 authentication pre-share
 group 2
 lifetime 3600

crypto isakmp keepalive 10

We will then set up the group which represents our clients. This includes setting parameters for your clients, such as the pool of IP addresses they will get (from above), DNS servers, settings for perfect forward secrecy (PFS), etc.

crypto isakmp client configuration group YOUR-VPN-GROUP
 key VERY-STRONG-GROUP-KEY
 dns YOUR-DNS-SERVER
 pool VPN-POOL
 pfs

Finally, we will pull these items into a profile, vpn-profile, that can be used to set up a client.

crypto isakmp profile vpn-profile
   match identity group YOUR-VPN-GROUP
   client authentication list AUTH
   isakmp authorization list NET
   client configuration address respond
   client configuration group YOUR-VPN-GROUP
   virtual-template 1

IPsec Parameters

We set up the parameters that define how IOS transforms (i.e., encrypts and HMACs) the traffic on this tunnel, and give the set the name vpn-transform-set:

crypto ipsec transform-set vpn-transform-set esp-aes esp-sha-hmac 

Full IPsec profile

Finally, we link the ISAKMP (vpn-profile) and IPsec (vpn-transform-set) pieces together and give them a name, ipsecprof, that can be attached to a virtual interface (below).

crypto ipsec profile ipsecprof
 set transform-set vpn-transform-set 
 set isakmp-profile vpn-profile

Virtual Template Interface

This caused me a bunch of confusion. Because we do not have a static site-to-site tunnel, we can’t define a tunnel interface for our VPN clients. What we do is set up a template interface that IOS will use to create the interfaces for our clients when they connect.

This needs to reference your external interface, which in my case is FastEthernet0/0.

interface Virtual-Template1 type tunnel
 ip unnumbered FastEthernet0/0
 ip nat inside
 ip virtual-reassembly
 tunnel source FastEthernet0/0
 tunnel mode ipsec ipv4
 tunnel protection ipsec profile ipsecprof

Other Notes

It is important that you have the appropriate access controls set up to restrict where in your network a VPN client can send packets. That is really beyond the scope of this post as it is very dependent on your configuration.

However, at a minimum, you need to allow the packets that arrive on your external interface for VPN clients to be handled. These packets will show up in a few forms.

You will need to add rules to handle these packets to your external, border, access lists.

ip access-list extended inBorder
 permit esp any host YOUR-EXTERNAL-IP
 permit udp any host YOUR-EXTERNAL-IP eq isakmp
 permit udp any host YOUR-EXTERNAL-IP eq non500-isakmp
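Once this is in place and a client connects, a few standard IOS show commands are handy for verifying that the tunnel actually came up:

```
show crypto isakmp sa      ! ISAKMP (phase 1) security associations
show crypto ipsec sa       ! IPsec (phase 2) SAs and packet counters
show crypto session        ! summary of active crypto sessions
```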

Client Setup

Assuming all of this worked (and I transcribed things properly), you will be all set to configure a client. This should be a relatively easy configuration.

  • VPN Type: IKEv1, in iOS/Mac OS X this is listed as Cisco IPsec or IPsec
  • Server: Your public server IP or hostname
  • Pre shared key: VERY-STRONG-GROUP-KEY
  • User: vpn-user

Final Notes

Even though this setup uses users that are hard-coded on your router, you may still want to set up a RADIUS server to receive accounting information so you can track connections to your VPN. It can also be expanded to do authentication and authorization for your VPN users.

I hope this was helpful to you. If you have any questions, please feel free to contact me via twitter @gothmog.


ZFS mirroring with a restricted SSH account & zxfer

As part of my transition from using a combination of Linux and FreeBSD for our home servers to being exclusively FreeBSD, I wanted to update how I did backups from my public server, bree, to the internal storage server, rivendell. Previously, I had done this with a home-grown script which used rsync to transfer updates to the storage server overnight. This solution worked just fine, but it was not the most efficient (see: ZFS Replication to the cloud is finally here—and it’s fast). While I didn’t intend to replicate to the cloud, I wanted to leverage ZFS since I am now going FreeBSD to FreeBSD.

There are numerous articles about using zxfer to perform backups but there was one big hiccup that I couldn’t get over. Quoting the man page:

zxfer -dFkPv -o copies=2,compression=lzjb -T root@ -R storage backup01/pools

Having to open up the root account on my storage server, no matter how much I restricted it by IP address, keys, or whatever, makes me really uncomfortable and was a show-stopper for me. But I thought I could do better. I had used restricted shells to limit access to servers before, and I knew that ZFS allows delegating permissions to non-root users, so I decided to give it a shot.

TL;DR: It can work.

The configuration had a few phases to it:

  1. Create a new restricted user account on my backup server and configure the commands that zxfer needs access to in the restricted shell
  2. Create the destination zfs filesystem to receive the mirror and configure the delegated permissions for the backup user
  3. Set up access to the backup server from the source server via SSH
  4. Make a slight modification to zxfer to allow it to run the zfs command from the PATH instead of hardcoding its location in the script

Setting up the restricted user

I created a new user on the backup system named zbackup that would be my restricted user for receiving the backups. The goal was for this user to be as limited as possible. It should only be allowed to run the commands necessary for zxfer to do its job. I landed on using rzsh as the restricted shell as it was the first one I got working with the correct environment. I set up a directory to hold binaries that the zbackup user was allowed to use.

root@storage$ mkdir /usr/local/restricted_bin
root@storage$ ln -s /sbin/zfs /usr/local/restricted_bin/zfs
root@storage$ ln -s /usr/bin/uname /usr/local/restricted_bin/uname

I then set up the .zshenv file for the zbackup user to restrict the user to that directory for executables.

export PATH=/usr/local/restricted_bin

Setting up the destination zfs filesystem

I already had a zfs filesystem that was devoted to backups, so I made a new zfs filesystem underneath it to hold these new backups and serve as a point where I could delegate permissions. Then, through trial and error, I figured out all the permissions I had to delegate to the zbackup user on the filesystem to allow zxfer to work:

root@storage$ zfs create nas/backup/bree-zxfer
root@storage$ chown zbackup:zbackup /nas/backup/bree-zxfer
root@storage$ zfs allow -u zbackup atime,canmount,casesensitivity,checksum,compression,copies,create,mount,mountpoint,receive,snapshot_count,snapshot_limit,sync,userprop,utf8only,volmode nas/backup/bree-zxfer

(I figured out the list of actions and properties that I needed to delegate by having zxfer dump the zfs create command it was trying to run on the backup system when it failed.)

Update: I forgot one thing that is critical to making this work. You need to ensure that non-root users are allowed to mount filesystems. This can be accomplished by adding the following line to your /etc/sysctl.conf and rebooting:

vfs.usermount=1

Remote access to the backup server

Nothing fancy here. On my source server, I created a new SSH keypair for the root user (no problem with running the source zfs command as root). I then copied the public half of that key to the authorized_keys file of the zbackup user on the backup server. At this point, I could ssh from my source server to the backup server as the zbackup user. But when logged in to the backup server, the only commands that could be run are those in the /usr/local/restricted_bin directory (zfs and uname).

Tweak zxfer script to remove hard coded path in zfs commands

One of the (intentional) limitations of a restricted shell is that the restricted user is not allowed to specify a full pathname for any command. Only commands located in their PATH can be run. Unfortunately, while the zbackup user has the zfs command in their PATH, it is referenced as /sbin/zfs in the zxfer script. To work around this, I modified the zxfer script to drop the hard-coded path and assume that zfs is in the PATH. This change was needed in only two places in the script; a quick search for /sbin/zfs will find them.
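The patch itself is just a one-line substitution. Here it is demonstrated on a scratch file (the variable name below is made up for illustration, not actual zxfer source); for the real thing, run the same sed expression over your copy of the zxfer script:

```shell
# Demonstrate stripping the hard-coded path so the restricted user's
# PATH resolves the bare `zfs` command instead.
echo 'zfscmd="/sbin/zfs"' > zxfer.scratch
sed 's|/sbin/zfs|zfs|g' zxfer.scratch
```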

Moment of truth!

After all this, I was now able to run any number of commands to mirror my source server’s zfs filesystems (with snapshots) to my backup server.

root@source$ zxfer -dFPv -T zbackup@storage -N zroot/git nas/backup/bree-zxfer
root@source$ zxfer -dFPv -T zbackup@storage -R zroot/var nas/backup/bree-zxfer

And best of all, the storage server does not have SSH enabled for root. Success.


Environment variables for reliable ZSH restrictions

Arghh. I just spent 30 minutes trying to set up a locked-down restricted shell on my FreeBSD box, and I want to help you not do the same. My challenge was properly setting the PATH variable so that the user could not bust out and run arbitrary commands. The problem was ensuring that PATH was set for both interactive and non-interactive shells. The interactive ones were easy, using either .zshrc or .bash_profile. But although the documentation for bash says it reads .bashrc for non-interactive shells, it did not.

But, finally I found that .zshenv worked so now I can use the restricted ZSH. Yay!
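For reference, the whole fix was a one-line .zshenv for the restricted user; .zshenv is the file zsh reads for both interactive and non-interactive shells, which is what makes the restriction stick (directory name from my setup):

```
# ~/.zshenv for the restricted user
export PATH=/usr/local/restricted_bin
```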


Overcoming stubborn OS deployments in VirtualBox

I am a big fan of virtualization of operating systems. It allows for easy testing and obviously running multiple operating systems on one machine. At my company, we use VMWare ESX for infrastructure virtualization, but for my own use (professionally and personally) I really like Oracle’s VirtualBox. It is fast, reliable, and best of all, free.

As I work for a large, centrally managed company, we unsurprisingly use a standard (Windows) operating system across all of our hardware. As a right-thinking computer user, this is clearly not acceptable. While I wish I could just discard the standard company system image, I cannot do so. For my daily work, I am a Linux fan (Fedora is my distribution of choice). Virtualization allows me to merge those two worlds in a relatively harmonious way. My end goal is to run my company’s OS image inside a virtual machine on top of my preferred Linux installation. But getting there can be a challenge.

Installing an OS inside a VM is straightforward and not worthy of a blog post, but that does not help me here, because I need to use the company-provided imaging tool that not only sets up the OS but installs all of the corporate software and settings. This is done using a pretty slick tool (name intentionally withheld) that handles everything once the computer is registered on the back end by our IT staff.

This works great if I am installing onto bare metal. Otherwise, there are challenges. Below is a slightly dramatized version of my install process. I won’t recount every iteration I tried, but hopefully it is helpful to someone.

Once I got my new machine, I happily blew away the company OS install and got Linux working. (After making a backup, what kind of heathen do you think I am?) VirtualBox, check. Got bootable image of system imaging tool, check. Here we go.

Unknown computer

Well, I guess that makes sense. Our IT staff registered the physical machine; their backend would know nothing about a VM running on top of it. I pondered what they could use to identify the machine. Obvious choices included:

  • MAC Address
  • Hardware Serial Number
  • CPU Serial number (ick)

I decided to start with MAC address as that was the easiest to change in the VM. I wanted to make the VM use the same MAC address as the computer itself. In order to do that, however, I had to change the computer to use a different one temporarily, as having duplicate MAC addresses on the same physical network will cause problems. (I am using bridged networking.) So, I changed the MAC address of the computer using ifconfig to something new. (I just incremented the last byte by 1.) And then copied the original one into VirtualBox. This can be done under the advanced settings for the network adapter.

I rebooted into the imaging software again and, success, it started imaging the machine. I was quite pleased with myself. Sadly, it was short-lived. The imaging utility put the OS on the virtual machine but then died once it had booted into Windows and wanted to start installing further software.

In reviewing the logs, I saw the same sort of error as I had gotten originally: the computer was not recognized by the back-end system. This seemed odd, as it got part of the way through the install. It appeared that at this later stage of the install, the tool used a different set of information to identify the computer on which it was running.

A specific section of the log file caught my eye:

Make: Innotek GmbH    Model: VirtualBox    Mfg: 
Serial Number: 0
-  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -

I could see where this might cause a problem, as these values are not representative of the actual hardware. These values are returned to an operating system by examining the Desktop Management Interface (DMI) of the PC. Thankfully, VirtualBox provides a way to set the values that it reports to a guest OS. To determine what values to use, I used the Linux dmidecode tool, which provided a list of the underlying values I would need:

# dmidecode 2.12
SMBIOS 2.7 present.
35 structures occupying 1856 bytes.
Table at 0x54E3F000.

Handle 0x0010, DMI type 0, 24 bytes
BIOS Information
        Vendor: Hewlett-Packard
        Version: L70 Ver. 01.10
        Release Date: 06/24/2014
        Address: 0xF0000
        Runtime Size: 64 kB
        ROM Size: 8192 kB

Buried in the advanced section of the VirtualBox manual is a section entitled Configuring the BIOS DMI information, which outlines the commands to set all of these values. I ended up setting more than I probably needed. (The commands are long; each must be on a single line when you run them.)

VBoxManage setextradata "M3065" "VBoxInternal/Devices/pcbios/0/Config/DmiBIOSVendor" "Hewlett-Packard"
VBoxManage setextradata "M3065" "VBoxInternal/Devices/pcbios/0/Config/DmiBIOSVersion" "L70 Ver. 01.10"
VBoxManage setextradata "M3065" "VBoxInternal/Devices/pcbios/0/Config/DmiBIOSReleaseDate" "06/24/2014"
VBoxManage setextradata "M3065" "VBoxInternal/Devices/pcbios/0/Config/DmiSystemVendor" "Hewlett-Packard"
VBoxManage setextradata "M3065" "VBoxInternal/Devices/pcbios/0/Config/DmiSystemProduct" "HP ZBook 15"
VBoxManage setextradata "M3065" "VBoxInternal/Devices/pcbios/0/Config/DmiSystemVersion" "..."
VBoxManage setextradata "M3065" "VBoxInternal/Devices/pcbios/0/Config/DmiSystemSerial" "..."
VBoxManage setextradata "M3065" "VBoxInternal/Devices/pcbios/0/Config/DmiSystemSKU" "..."
VBoxManage setextradata "M3065" "VBoxInternal/Devices/pcbios/0/Config/DmiSystemFamily" "103C_5336AN G=N L=BUS B=HP S=ELI"
VBoxManage setextradata "M3065" "VBoxInternal/Devices/pcbios/0/Config/DmiSystemUuid" "..."
VBoxManage setextradata "M3065" "VBoxInternal/Devices/pcbios/0/Config/DmiBoardVendor" "Hewlett-Packard"
VBoxManage setextradata "M3065" "VBoxInternal/Devices/pcbios/0/Config/DmiBoardProduct" "..."
VBoxManage setextradata "M3065" "VBoxInternal/Devices/pcbios/0/Config/DmiBoardVersion" "KBC Version 94.51"
VBoxManage setextradata "M3065" "VBoxInternal/Devices/pcbios/0/Config/DmiBoardSerial" "..."

(I removed the actual serial numbers and other machine-specific values from the listing above.)

After this, I reran the imager for what turned out to be the final time and everything worked.

In the end it turned out to be a bit more work than I outlined above, but the critical steps were covered. I found it both a very frustrating and fun experience (once I got it working). A great puzzle to solve. It shows the power of virtualization software and how it is very unwise to trust what hardware tells you about itself as it is easy to manipulate.


On being a leader by John Cleese

After my former boss Susan Evan’s great blog post this morning, In the category of not as easy as it looks: Being Boss, I ran across a Harvard Business Review interview with the amazing John Cleese. It contained a great quote I had to share:

In the book Life and How to Survive It, which I developed with Robin Skynner, we decided that the ideal leader was the one who was trying to make himself dispensable. In other words, he was helping the people around him acquire as many of his skills as possible so he could let everyone else do the work and just keep an eye on things, minimizing his job and the chaos that would come with a transfer of authority.


Overturning Net Neutrality Rules Harms Innovation

Since the recent ruling in Verizon v. FCC, where the US Court of Appeals for the DC Circuit overturned the FCC net neutrality rules (see the EFF Net Neutrality page for background), there has been considerable discussion about the potential harms (or benefits) of this ruling. I have listened and read, and I feel that the mainstream media is missing the large but subtle danger this ruling creates, and why it is critical that the FCC move to reinstate these rules.

The argument I keep hearing for why the net neutrality rules are needed is that if internet carriers are allowed to offer differentiated internet service for a fee, it will harm consumers by raising the prices they pay. For example, ESPN might pay Verizon to let Verizon’s customers stream its video for free, but would then raise its prices to consumers to cover the cost. While overturning the net neutrality rules would allow this, I don’t believe it is the real threat. Both ESPN and Verizon know that consumers will prefer a lower-cost solution, so they will not go for that. And if Verizon and ESPN can make a deal that makes things cheaper for the consumer, it might even be a benefit to the consumer. And here be dragons.

I believe that deals such as the one I outlined could be a short term benefit to consumers, but will change the way the economy of innovation works in a way that will harm consumers in the long term by shifting the cost structure of innovation in the favor of existing, large players.

The history of innovation on the Internet has been driven by the little guys. Google, the giant it is today, started as two guys in a dorm room. Facebook, another giant, started in a dorm room. In these and many other instances, the innovators had very limited resources. But, and this is the critical point, once they started providing a service on the internet, access to their new service was delivered at the same level as the big players’, and consumers could judge, say, Google vs. AltaVista on the merits of the products and choose which was better.

My fear is that without net neutrality rules, the barrier to entry will be increased for new companies that can disrupt the marketplace and bring innovation to all consumers. I am not worried about the ESPNs or Verizons of the world. I am worried that it will make getting started harder for the next Google or Facebook.

So I strongly urge the FCC to reclassify internet service providers as common carriers and re-institute and strengthen the net neutrality rules to ensure that the Internet continues to innovate in a free and fair way.



Who wears the pants in your company?

Whenever there is a group of people who intend to work together, whether a couple through marriage, friends planning an outing, citizens guiding a country, or employees running a company, decisions need to be made. Inevitably there are agreements, disagreements, and compromises. There are thousands of methods by which decisions can be reached, but the way in which a decision is reached and the motivations of the decision makers can indicate much about the health of the partnership.

The cynical question “who wears the pants in the family?” is often used to imply that there is one person in a marriage who is in charge. (We will ignore for this discussion the misogynistic nature of the question.)

The same question can be applied to a company. When there is a conflict, large or small, between parties in the company, who wins? In examining this problem, I divide a company into two main areas: primary functions and support functions. Primary functions provide the stated, outward product or service the company offers, while support functions are required to run a business but are not specific to any particular business sector. For example, at an automobile company, the engineering or assembly departments would be primary functions, while human resources or accounting would be support functions.

I have observed that as a company grows in size, the balance of power shifts from the primary functions to the support functions. In a small company, the majority of the employees are focused on the primary functions, and the support functions are usually very small (often woefully so). This results in a very strong alignment between the public goal of the company and the majority of its employees.

What happens as a company grows? The support functions must grow to answer the needs of a larger organization. No longer can one person handle all the accounting and human resources duties by themselves. Departments must be created and staffed.

This poses a huge risk. A company is an organism made up of people, just as you and I are made up of cells, and organisms desire one thing above all else: survival. The larger the organization, the stronger this survival instinct becomes. And a desire to survive often leads to a high degree of risk-aversion.

Avoiding risk can itself be dangerous, depending on how the organization responds. Sadly, the common way to avoid risk goes something like this:

  • A problem occurs (e.g. a bug in software, a lawsuit)
  • A process or procedure is created that would have caught that particular problem
  • That process is rolled out for everyone to implement

The problem with that methodology is that each process that is created takes time away from the core mission of a company. As an example, let us assume that FooBar Inc. makes widgets. Each widget takes 10 hours to complete, but 1% of the time a widget jams in the machines and causes 20 hours of downtime. This sounds horrible! So FooBar Inc. implements a new process that changes the manufacturing process by introducing a QA step on every widget. Sounds like a great idea. However, it adds 1 hour to each widget manufacture.

  • Old System: 100 widgets takes 1000 hours + 20 hours of downtime
  • New System: 100 widgets takes 1100 hours
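The totals above can be checked with a few lines of arithmetic:

```python
# FooBar Inc.'s widget math: 10 build hours per widget, a 1% jam rate
# costing 20 hours of downtime per jam, versus a 1-hour QA step per widget.
widgets = 100
old_total = widgets * 10 + (widgets * 0.01) * 20  # 1000 build hours + 20 downtime
new_total = widgets * (10 + 1)                    # QA adds an hour to every widget
print(old_total, new_total)
```

The “fix” costs 80 more hours per 100 widgets than the problem it prevents.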

So, in this scenario, a seemingly good idea (extra QA) actually makes the situation worse for making widgets. And this type of decision is made every single day in companies. A single bad thing happens, resulting in a policy that is applied to all scenarios. By not accepting that some risk is unavoidable, or that the cost of avoiding some risks is greater than the risks themselves, companies fall into a spiral of creating more and more time-consuming processes which eventually stifle their ability to achieve their stated goals.

At some point in the life of almost every company there comes a tipping point: a point where the support organizations that oversee these policies and procedures take over. It is hard to see, but it can be detected by asking our original question.

Who wears the pants in your company? When there is a conflict between a support function and a primary function and it is presented to your senior leadership, which way do they decide?

A lot can be judged by that decision.