Role-Based Configuration Management with SaltStack

When I first started using SaltStack, my top.sls looked like this:

```yaml
base:
  '*':
    - defaults
  'web-*.example.com':
    - web
  'worker-*.example.com':
    - worker
  'cache-*.example.com':
    - cache
```

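Conceptually, this top file just maps hostname globs to lists of state files. A rough Python sketch of that matching (the hostnames, state names, and `states_for` helper are illustrative, not Salt internals):

```python
from fnmatch import fnmatch

# Illustrative top-file data: glob pattern -> list of state files to apply
top = {
    "*": ["defaults"],
    "web-*.example.com": ["web"],
    "worker-*.example.com": ["worker"],
    "cache-*.example.com": ["cache"],
}

def states_for(minion_id):
    """Collect every state whose glob matches the minion's hostname."""
    states = []
    for pattern, state_list in top.items():
        if fnmatch(minion_id, pattern):
            states.extend(state_list)
    return states

print(states_for("web-01.example.com"))  # ['defaults', 'web']
```

This is exactly why the scheme is brittle: the only input to the decision is the hostname.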

But I didn’t like having the roles of a machine hard-bound to its hostname. That seemed arbitrary. Also, we ended up wanting to split and merge roles and the machines they ran on (for instance, it became reasonable to merge the cache and worker roles), so we outgrew that.

To solve this problem, we introduced role grains. An example worker minion might have the following `/etc/grains` file:

```yaml
role:
  - worker
  - cache
```

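You can check that a grain took effect from the master with `grains.get` (the minion ID glob here is illustrative; the minion needs to be restarted or have its grains refreshed after editing the file):

```shell
# Query the role grain from the master:
salt 'worker-01*' grains.get role
```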

With this, our topfile turned into this:

```yaml
base:
  'os:Ubuntu':
    - match: grain
    - ubuntu-defaults
  'role:web':
    - match: grain
    - web
  'role:worker':
    - match: grain
    - worker
  'role:cache':
    - match: grain
    - cache
```


Which was nice. For each role grain, it included the necessary role state file, and all was well… for a while.
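The `role:web` matcher succeeds when the minion's `role` grain equals, or (for list grains like ours) contains, the value `web`. A simplified Python sketch of that check (Salt's real grain matcher also supports globs and nested keys; `grain_match` and the sample grains are illustrative):

```python
def grain_match(grains, expr):
    """Return True if a 'key:value' expression matches the grains dict."""
    key, _, value = expr.partition(":")
    actual = grains.get(key)
    if isinstance(actual, list):
        return value in actual  # list grains match on membership
    return actual == value

minion_grains = {"os": "Ubuntu", "role": ["worker", "cache"]}

print(grain_match(minion_grains, "role:cache"))  # True
print(grain_match(minion_grains, "role:web"))    # False
```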

Our environment grew to the point where we had many more roles than we at first anticipated. Every role we added meant another four lines in our top.sls, and the file grew ugly and repetitive. To solve this, we leveraged the Jinja templating system to create roles.sls:

```jinja
{% if 'role' in grains %}
include:
{% for role in salt['grains.get']('role', []) %}
  - {{ role }}
{% endfor %}
{% endif %}
```

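For the example worker minion above (whose `role` grain lists worker and cache), the template renders down to a plain include statement:

```yaml
include:
  - worker
  - cache
```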

This allowed us to condense our top file to the following:

```yaml
base:
  'os:Ubuntu':
    - match: grain
    - ubuntu-defaults
  '*':
    - roles
```


Now each minion tells the master what kind of machine it dreams of being via its role grain, and the salt master tells each minion how to become those things. If you want to add a role to a machine, you simply add it to the role grain on the minion and run highstate.
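For example, to add the cache role to a worker from the master and converge it (the minion ID glob is illustrative; `grains.append` and `state.highstate` are standard Salt execution/state modules):

```shell
# Append "cache" to the minion's role grain, then apply states:
salt 'worker-01*' grains.append role cache
salt 'worker-01*' state.highstate
```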