
Dynamic selection of master #22

Open · jabclab opened this issue May 30, 2014 · 15 comments · May be fixed by #128, #172 or #250

Comments

@jabclab (Contributor) commented May 30, 2014

Hi there,

I've been doing some manual testing with Redis Sentinel and its election of a new master. This all works as expected 😄

The issue I've got is that my inventory is currently as follows:

[redis_master]
redis01.staging

[redis_slaves]
redis02.staging
redis03.staging

If, for example, redis02.staging has previously become the master due to a re-election, then re-running the playbook appears to break my setup. If redis01.staging is still the master, it works fine.

Do you think I need to use a dynamic inventory to calculate which member of redis0[1:3].staging is the master? I think this issue may be down to the redis-server config file being rewritten as part of the Sentinel re-election process.

Thanks!

@jabclab (Contributor, Author) commented May 30, 2014

It might be crazy, but could I use a dynamic inventory and select the master and slaves with something like this:

#!/usr/bin/env bash

# Master: the first line of Sentinel's reply is the master's IP
master=$(ssh redis01.staging 'echo "sentinel get-master-addr-by-name master01" | redis-cli -p 26379' | head -n 1)

# Slaves: pull the ip values out of Sentinel's slave listing
slaves=$(ssh redis01.staging 'echo "sentinel slaves master01" | redis-cli -p 26379' | grep ip -A 1 | grep -v -- -- | awk 'NR % 2 == 0')

Obviously I'd have to account for the case where redis-cli isn't present, but what do you reckon to the idea? 😸

@DavidWittman (Owner)

Ah, gross. That's a legitimate issue. I don't think it makes sense for the role to depend on an external dynamic inventory script though. We can probably solve this by adding some runtime logic which runs the redis-cli commands you used above and sets the master variables appropriately.
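
For illustration, a rough sketch of what that runtime logic might look like; the Sentinel port 26379, the monitored master name master01, and the redis_master fact are placeholders rather than anything the role currently defines:

# Hypothetical sketch: ask Sentinel which node currently holds the master role
# and expose it as a fact for later tasks.
- name: query sentinel for the current master address
  command: redis-cli -p 26379 sentinel get-master-addr-by-name master01
  register: sentinel_master
  changed_when: false

- name: set the master host fact from sentinel's answer
  set_fact:
    redis_master: "{{ sentinel_master.stdout_lines[0] }}"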

Thanks again @jabclab. If you can't find an elegant solution for this, I'll take a crack at it over the weekend.

@jabclab (Contributor, Author) commented May 30, 2014

Sounds good to me. What do you think is the best way of handling it from an inventory perspective? One group or two? I can see arguments for both, but I guess one group makes more semantic sense conceptually: it's a single group of Redis servers with replication and/or failover, and we don't care which one is the master (at least that's the case for my use case).

@jabclab (Contributor, Author) commented Jun 2, 2014

Hey @DavidWittman, did you come up with an elegant solution for this? My current thinking is that I'll just do the sentinel/replication setup once, i.e. if it has already been set up, don't do it again. This is the strategy we take for our MongoDB setup.

However, as we're manipulating the redis-server and/or redis-sentinel config files, it seems like the problem boils down to the lines:

# Generated by CONFIG REWRITE
sentinel config-epoch master01 0
sentinel leader-epoch master01 0
...

added automatically by Redis. I guess the question is: what is the best way to handle, with Ansible, config files which are automatically updated at runtime?

One option for redis-server may be to use include directives in redis.conf. Includes don't seem to be mentioned in sentinel.conf, but apparently (per IRC) it does support them.

If includes are the way you want to go, maybe we could add the include directive via lineinfile before the CONFIG REWRITE section (or at EOF). Then the config we currently write to /etc/redis/{{ redis_port }}.conf could be moved into the file which is included, e.g.

- name: ensure ansible-controlled config file added to config
  template: src=redis.conf.j2 dest=/etc/redis/{{ redis_port }}_ansible.conf owner={{ redis_user }}

- name: ensure redis config file exists
  file: path=/etc/redis/{{ redis_port }}.conf state=touch owner={{ redis_user }}

- name: ensure include of ansible config file present in redis.conf
  lineinfile: dest=/etc/redis/{{ redis_port }}.conf regexp='^include /etc/redis/{{ redis_port }}_ansible' line='include /etc/redis/{{ redis_port }}_ansible.conf' insertbefore='.*CONFIG REWRITE'

I'm pretty certain that this would work, but am definitely open to other approaches!

What do you reckon? 😺

@DavidWittman (Owner)

@jabclab Sorry, turns out this past weekend was my on-call rotation and it wasn't too friendly to my free time.

The "set it and forget it" approach may be the easiest (and safest) route to go here. Do you have any examples from your MongoDB implementation? I'm going to research how other config mgmt tools are doing the same thing w/ Redis so I don't have to reinvent the wheel here.

That said, using includes looks like a pretty decent way to ensure idempotency. I'm almost certain they're supported in sentinel.
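
As a rough illustration of the "set it and forget it" route (just a sketch, not the role's implementation), the simplest version would only lay the config down when no file exists yet:

# Hypothetical sketch: write the config on the first run only, so later runs
# leave the file (and any CONFIG REWRITE output) untouched.
- name: write redis config only if it does not exist yet
  template: src=redis.conf.j2 dest=/etc/redis/{{ redis_port }}.conf owner={{ redis_user }} force=no

The obvious trade-off is that template changes would never reach an existing server, which is why the include approach still looks more attractive.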

@jabclab (Contributor, Author) commented Jun 3, 2014

For Mongo, replication is set up via a script (vs. settings in the config file); I can give you more code-based examples if that'd help.

The more I think about it, the more I like the include approach to ensuring idempotence. It feels like an elegant solution and would mean that we don't have to change too much 😄

e.g.

# Initial config
include /etc/redis/6379_ansible.conf

# After a runtime change
include /etc/redis/6379_ansible.conf
# Generated by CONFIG REWRITE
...

Therefore we're safe to update the ansible-controlled config file whilst not overwriting runtime config.

@jabclab (Contributor, Author) commented Jun 5, 2014

Hey @DavidWittman, do you agree with the above suggestion as a means of guaranteeing idempotence? If so, I'll go ahead and implement it and get a PR your way 😺

@DavidWittman (Owner)

@jabclab Haven't forgotten about this. Going to have a look here sometime in the next few days. Apologies for the delay. Let me know if you have any other changes you'd like to discuss in the interim.

@jabclab (Contributor, Author) commented Jun 27, 2014

Awesome, thanks @DavidWittman. I really appreciate all your work on this repo; it's helped me a lot 😄

@bryanlarsen

Has any progress been made on this issue? We've been running our own home-grown Ansible role successfully for quite a while, but on Friday we took down our website because, after a deployment, every Redis instance thought it was a slave...

@bryanlarsen

Here's an Ansible role that claims to have solved the problem: http://docs.debops.org/en/latest/ansible/roles/debops.redis.html

Because it's based on DebOps, it's not quite as easy to drop into a non-DebOps system as your role is.

@DavidWittman (Owner)

Hey @bryanlarsen. I have a local branch which I've been working on based around @jabclab's previous suggestions. I'll do some cleanup and get it pushed up here for testing soon. Thanks for the link to the DebOps redis role, too!

@danielkza (Contributor)

Has any progress been made on this? DebOps is really intrusive, while this role is much more self-contained. Are there any issues with @jabclab's proposal of using includes to avoid overwriting the dynamic configuration?

@DavidWittman (Owner)

@danielkza No issues with the proposed changes (or something which uses blockinfile); I have just been neglecting this one.
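
For reference, a rough sketch of what the blockinfile variant could look like; the config path, marker text, and reuse of the role's redis.conf.j2 template here are assumptions:

# Hypothetical blockinfile sketch: keep an Ansible-managed block inside the live
# config; anything Redis appends via CONFIG REWRITE sits outside the markers and
# is left alone.
- name: ensure ansible-managed block in redis config
  blockinfile:
    dest: /etc/redis/{{ redis_port }}.conf
    marker: "# {mark} ANSIBLE MANAGED BLOCK"
    insertbefore: '.*CONFIG REWRITE'
    block: "{{ lookup('template', 'redis.conf.j2') }}"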

@danielkza (Contributor) commented Nov 15, 2016

@DavidWittman I'll see if I can freshen up @jabclab's work. Another interesting approach (which this Puppet module uses) is to set options through redis-cli in any runs after the initial setup.
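
For illustration, a rough sketch of that redis-cli approach (maxmemory and the redis_maxmemory variable are just placeholders): read the live value first so the run stays idempotent, then apply the setting only when it differs.

# Hypothetical sketch of configuring a running server via redis-cli instead of
# rewriting its config file.
- name: read the current maxmemory from the running server
  command: redis-cli -p {{ redis_port }} config get maxmemory
  register: current_maxmemory
  changed_when: false

- name: apply maxmemory at runtime when it differs
  command: redis-cli -p {{ redis_port }} config set maxmemory {{ redis_maxmemory }}
  when: current_maxmemory.stdout_lines[1] != redis_maxmemory | string

CONFIG SET only covers parameters Redis allows to be changed at runtime, so this would complement the config file handling rather than replace it.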

joshbenner linked a pull request Sep 21, 2017 that will close this issue
skress linked a pull request Aug 27, 2020 that will close this issue