- Added functionality for installing and clustering multiple ZooKeeper nodes. This will now install ZooKeeper and configure clustering properties for all hosts listed in the "zookeeper-nodes" group by default.

- Added checks for download and unpack to ensure the role is idempotent.
- Changed the reload of the systemd configuration to use a handler instead of a task, to ensure that the role is idempotent.
- Added a variable and configuration for "dataLogDir", which allows disk separation between the snapshots and transaction logs.
- Created a Docker network and 3 containers to run the cluster tests. The Ansible modules used for Docker require at least Ansible 2.2.
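
For illustration, with the three containers defined in tests/inventory below and the default leader and election ports, the clustering entries templated into zoo.cfg render roughly as:

    server.1=zookeeper-1:2888:3888
    server.2=zookeeper-2:2888:3888
    server.3=zookeeper-3:2888:3888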
Simon authored and Simon committed Mar 6, 2017
1 parent 26a8deb commit 2dfb06f
Showing 9 changed files with 198 additions and 27 deletions.
68 changes: 53 additions & 15 deletions .travis.yml
@@ -6,32 +6,70 @@ services:
- docker

before_install:
# Pull a CentOS image with systemd installed
- docker pull centos/systemd
# Update the host with latest versions
- sudo apt-get update -qq

install:
# Run the container in detached mode. The usage of privileged, mounting cgroup volumes and /usr/lib/systemd/systemd are required so that systemd can be
# used without the "Failed to get D-Bus connection: Operation not permitted" error occurring when running commands, e.g. systemctl daemon-reload
- docker run --privileged --detach --volume="${PWD}":/etc/ansible/roles/ansible-zookeeper:ro --volume=/sys/fs/cgroup:/sys/fs/cgroup:ro --name zookeeper centos/systemd /usr/lib/systemd/systemd
# Install the EPEL packages
- docker exec zookeeper yum -y install epel-release
# Install the Ansible packages
- docker exec zookeeper yum -y install ansible
# Install Ansible on host
- pip install ansible

# Add ansible.cfg to pick up roles path.
- "printf '[defaults]\nroles_path = ../' > ansible.cfg"

# Pull a CentOS image with systemd installed for the Docker containers
- docker pull centos/systemd

script:
# Install role dependencies.
- docker exec zookeeper ansible-galaxy install -r /etc/ansible/roles/ansible-zookeeper/tests/requirements.yml
- ansible-galaxy install -r tests/requirements.yml

# Check syntax of Ansible role
- docker exec zookeeper ansible-playbook /etc/ansible/roles/ansible-zookeeper/tests/test.yaml --syntax-check
- ansible-playbook tests/test.yaml -i tests/inventory --syntax-check

# Run Ansible role
- docker exec zookeeper ansible-playbook /etc/ansible/roles/ansible-zookeeper/tests/test.yaml --verbose
- ansible-playbook tests/test.yaml -i tests/inventory --verbose

# Run the playbook and role again to ensure that it is idempotent and nothing has changed
- >
ansible-playbook tests/test.yaml -i tests/inventory --verbose
| grep -q 'changed=0.*failed=0'
&& (echo 'Idempotence test: pass' && exit 0)
|| (echo 'Idempotence test: fail' && exit 1)
# Check that the ZooKeeper service is running
- docker exec zookeeper systemctl status zookeeper 2>&1 | awk 'FNR == 3 {print}' | grep "active (running)" && (echo "Service running - pass" && exit 0) || (echo "Service running - fail" && exit 1)
- >
docker exec zookeeper-1 systemctl status zookeeper 2>&1
| awk 'FNR == 3 {print}' | grep "active (running)"
&& (echo "Service running - pass" && exit 0)
|| (echo "Service running - fail" && exit 1)
# Check that a Znode can be successfully created
- docker exec zookeeper /usr/share/zookeeper/bin/zkCli.sh create /TestZnode1 "test-node-1" 2>&1 | awk -F\" '/Created/ {print $1}' | grep "Created" && (echo "Znode create test - pass" && exit 0) || (echo "Znode create test - fail" && exit 1)
- >
docker exec zookeeper-1 /usr/share/zookeeper/bin/zkCli.sh create /TestZnode1 "test-node-1" 2>&1
| awk -F\" '/Created/ {print $1}' | grep "Created"
&& (echo "Znode ceate test - pass" && exit 0)
|| (echo "Znode create test - fail" && exit 1)
# Check that the Znode is available on all nodes in the cluster
- >
docker exec zookeeper-2 /usr/share/zookeeper/bin/zkCli.sh ls /TestZnode1 2>&1
| awk 'END{print}' | grep 'Node does not exist'
&& (echo "Znode cluster ceate test - fail" && exit 1)
|| (echo "Znode cluster create test - pass" && exit 0)
- >
docker exec zookeeper-3 /usr/share/zookeeper/bin/zkCli.sh ls /TestZnode1 2>&1
| awk 'END{print}' | grep 'Node does not exist'
&& (echo "Znode cluster ceate test - fail" && exit 1)
|| (echo "Znode cluster create test - pass" && exit 0)
after_script:
- docker stop zookeeper && docker rm zookeeper
# Stop and remove the Docker containers
- docker stop zookeeper-1 && docker rm zookeeper-1
- docker stop zookeeper-2 && docker rm zookeeper-2
- docker stop zookeeper-3 && docker rm zookeeper-3

# Remove the Docker network
- docker network rm zookeeper

notifications:
webhooks: https://galaxy.ansible.com/api/v1/notifications/
35 changes: 34 additions & 1 deletion README.md
@@ -4,6 +4,9 @@

Ansible role for installing and configuring Apache ZooKeeper on RHEL / CentOS 7.

This role can be used to install and cluster multiple ZooKeeper nodes. By default it uses all hosts defined in the "zookeeper-nodes" group
in the inventory file. All servers are added to the zoo.cfg file along with the leader and election ports.
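
As a rough sketch, an inventory for a three-node cluster might look like the following (the hostnames and addresses are placeholders; each host needs a unique `zookeeper_id` and an `ansible_host`, which are used to template the `myid` file and the `server.X` entries in `zoo.cfg`):

    [zookeeper-nodes]
    zk-1 ansible_host=10.0.0.11 zookeeper_id=1
    zk-2 ansible_host=10.0.0.12 zookeeper_id=2
    zk-3 ansible_host=10.0.0.13 zookeeper_id=3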

## Requirements

Platform: RHEL / CentOS 7
@@ -23,8 +26,38 @@ The Oracle Java 8 JDK role from Ansible Galaxy can be used if one is needed.
zookeeper_install_dir: '{{ zookeeper_root_dir }}/zookeeper-{{ zookeeper_version }}'
zookeeper_dir: '{{ zookeeper_root_dir }}/zookeeper'
zookeeper_log_dir: /var/log/zookeeper
zookeeper_snapshot_dir: /var/lib/zookeeper/data
zookeeper_data_dir: /var/lib/zookeeper
zookeeper_data_log_dir: /var/lib/zookeeper
zookeeper_client_port: 2181
zookeeper_id: 1
zookeeper_leader_port: 2888
zookeeper_election_port: 3888
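
For example, to take advantage of the dataLogDir separation, the transaction log directory can be pointed at a dedicated disk by overriding the default. A minimal playbook sketch, where the mount point is only a placeholder:

    - hosts: zookeeper-nodes
      roles:
        - role: ansible-zookeeper
          zookeeper_data_log_dir: /data/zookeeper-txlog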


### Default Ports

| Port | Description |
|------|-------------|
| 2181 | Client connection port |
| 2888 | Quorum port for clustering |
| 3888 | Leader election port for clustering |


### Default Directories and Files

| Directory / File | Path |
|------------------|------|
| Installation directory | `/usr/share/zookeeper-<version>` |
| Symlink to install directory | `/usr/share/zookeeper` |
| Symlink to configuration | `/etc/zookeeper/zoo.cfg` |
| Log files | `/var/log/zookeeper` |
| Data directory for snapshots and myid file | `/var/lib/zookeeper` |
| Data directory for transaction log files | `/var/lib/zookeeper` |
| Systemd service | `/usr/lib/systemd/system/zookeeper.service` |

## Starting and Stopping ZooKeeper services
* The ZooKeeper service can be started via: `systemctl start zookeeper`
* The ZooKeeper service can be stopped via: `systemctl stop zookeeper`
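* The ZooKeeper service status can be checked via: `systemctl status zookeeper`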

## Dependencies

10 changes: 9 additions & 1 deletion defaults/main.yaml
@@ -13,5 +13,13 @@ zookeeper_dir: '{{ zookeeper_root_dir }}/zookeeper'
zookeeper_log_dir: /var/log/zookeeper

# Variables for templating zookeeper.conf.j2
zookeeper_snapshot_dir: /var/lib/zookeeper/data
zookeeper_data_dir: /var/lib/zookeeper
zookeeper_data_log_dir: /var/lib/zookeeper
zookeeper_client_port: 2181

# Uniquely identifies the ZooKeeper instance when clustering ZooKeeper nodes.
# This value is placed in the /var/lib/zookeeper/myid file.
zookeeper_id: 1

zookeeper_leader_port: 2888
zookeeper_election_port: 3888
3 changes: 3 additions & 0 deletions handlers/main.yaml
@@ -1,5 +1,8 @@
---

- name: Reload systemd
command: systemctl daemon-reload

- name: Restart ZooKeeper service
service:
name: zookeeper.service
26 changes: 19 additions & 7 deletions tasks/main.yaml
@@ -16,10 +16,16 @@
tags:
- zookeeper_group

- name: Check if ZooKeeper has already been downloaded and unpacked
stat:
path: '{{ zookeeper_root_dir }}/zookeeper-{{ zookeeper_version }}'
register: dir

- name: Download Apache ZooKeeper
get_url:
url: http://www-eu.apache.org/dist/zookeeper/zookeeper-{{ zookeeper_version }}/zookeeper-{{ zookeeper_version }}.tar.gz
dest: /tmp
when: dir.stat.exists == False
tags:
- zookeeper_download

@@ -30,6 +36,7 @@
copy: no
group: '{{ zookeeper_group }}'
owner: '{{ zookeeper_user }}'
when: dir.stat.exists == False
tags:
- zookeeper_unpack

@@ -43,9 +50,9 @@
tags:
- zookeeper_dirs

- name: Create directory for snapshot files
- name: Create directory for snapshot files and myid file
file:
path: '{{ zookeeper_snapshot_dir }}'
path: '{{ zookeeper_data_dir }}'
state: directory
group: '{{ zookeeper_group }}'
owner: '{{ zookeeper_user }}'
@@ -90,6 +97,15 @@
tags:
- zookeeper_config

- name: Template myid to {{ zookeeper_data_dir }}/myid
template:
src: myid.j2
dest: '{{ zookeeper_data_dir }}/myid'
notify:
- Restart ZooKeeper service
tags:
- zookeeper_config

# Uncomment the log4j.properties line for setting the maximum number of logs to rollover and keep
- name: Set maximum log rollover history
replace:
@@ -106,15 +122,11 @@
src: zookeeper.service.j2
dest: /usr/lib/systemd/system/zookeeper.service
notify:
- Reload systemd
- Restart ZooKeeper service
tags:
- zookeeper_service

- name: Reload the services daemon
command: systemctl daemon-reload
tags:
- zookeeper_service

- name: Start the ZooKeeper service
service:
name: zookeeper.service
1 change: 1 addition & 0 deletions templates/myid.j2
@@ -0,0 +1 @@
{{ zookeeper_id }}
12 changes: 11 additions & 1 deletion templates/zoo.cfg.j2
@@ -1,3 +1,5 @@
# {{ ansible_managed }}

# The number of milliseconds of each tick
tickTime=2000

@@ -12,7 +14,11 @@ syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir={{ zookeeper_snapshot_dir }}
dataDir={{ zookeeper_data_dir }}

# Directory to write the transaction log to (dataLogDir), rather than using the dataDir.
# This allows a dedicated log device to be used, and helps avoid competition between logging and snapshots.
dataLogDir={{ zookeeper_data_log_dir }}

# the port at which the clients will connect
clientPort={{ zookeeper_client_port }}
@@ -40,3 +46,7 @@ clientPort={{ zookeeper_client_port }}
# a single unique number, e.g. 1, 2, etc.
#server.1=hostname1:2888:3888
#server.2=hostname2:2888:3888

{% for host in groups['zookeeper-nodes'] %}
server.{{ hostvars[host].zookeeper_id }}={{ hostvars[host]['ansible_host'] }}:{{ zookeeper_leader_port }}:{{ zookeeper_election_port }}
{% endfor %}
9 changes: 8 additions & 1 deletion tests/inventory
@@ -1 +1,8 @@
localhost
zookeeper-1 ansible_host=zookeeper-1 ansible_connection=docker
zookeeper-2 ansible_host=zookeeper-2 ansible_connection=docker
zookeeper-3 ansible_host=zookeeper-3 ansible_connection=docker

[zookeeper-nodes]
zookeeper-1 zookeeper_id=1
zookeeper-2 zookeeper_id=2
zookeeper-3 zookeeper_id=3
61 changes: 60 additions & 1 deletion tests/test.yaml
@@ -1,7 +1,66 @@
---

- hosts: localhost
remote_user: root
# Set the python interpreter so that commands are run from the
# python virtualenv (/tmp/ansible-env) created below.
# If this is not done then the host's python interpreter is used and
# we would be using different versions of the python packages than we require.
vars:
ansible_python_interpreter: /tmp/ansible-env/bin/python

tasks:

# Install docker-py, which is required for the docker_network and docker_container modules.
# It is installed in a virtualenv because the required version must be at least 1.7,
# and due to a version-matching bug, versions 1.10 and above are evaluated as being less than 1.7.
- name: Install docker-py
pip:
name: docker-py
version: 1.9.0
virtualenv: /tmp/ansible-env

# Create a Docker network that the containers will connect to. This enables the
# containers to see and access each other.
# The docker_network module requires at least Ansible 2.2.
- name: Create Docker network
docker_network:
name: zookeeper
ipam_options:
subnet: '172.25.0.0/16'

# The centos/systemd image is used to create these containers so that systemd is
# available for the systemctl commands that install and run the ZooKeeper service
# for this role. The privileged container and the "/sys/fs/cgroup" volume mount are
# also required for systemd support.
# Ports 2181 (client), 2888 (quorum) and 3888 (leader election) are exposed.
# The container needs to be started with "/usr/lib/systemd/systemd" so that
# systemd is initialized.
- name: Create Docker containers
docker_container:
name: '{{ item.1 }}'
hostname: '{{ item.1 }}'
image: centos/systemd
state: started
privileged: yes
volumes:
- /sys/fs/cgroup:/sys/fs/cgroup:ro
networks:
- name: zookeeper
ipv4_address: 172.25.10.{{ item.0 + 1 }}
purge_networks: yes
exposed_ports:
- 2181
- 2888
- 3888
etc_hosts:
zookeeper-1: 172.25.10.1
zookeeper-2: 172.25.10.2
zookeeper-3: 172.25.10.3
command: /usr/lib/systemd/systemd
with_indexed_items: "{{ groups['zookeeper-nodes'] }}"

# Install Java and ZooKeeper on all nodes.
- hosts: zookeeper-nodes
roles:
- java
- ansible-zookeeper
