diff --git a/.gitignore b/.gitignore index 3d612962f6..fc6d3d4d29 100644 --- a/.gitignore +++ b/.gitignore @@ -14,3 +14,4 @@ reports .metadata hosts .vscode +build diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index ac0eded756..bf6e2a1af1 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -1,5 +1,5 @@ -# Submitting Your Code and Becoming a Contributor to Nuage MetroAG -Thank you for your interest! The main steps for submitting your code to Nuage Networks MetroAG are: +# Submitting Your Code and Becoming a Contributor to Nuage Metro Automation Engine +Thank you for your interest! The main steps for submitting your code to Nuage Networks Metro Automation Engine are: [1. Develop code on a fork](#1-develop-code-on-a-fork) [2. Finalize code contribution](#2-finalize-code-contribution) [3. Create pull request (PR)](#3-create-pull-request-pr) @@ -16,7 +16,7 @@ Health | For system-level sanity validation and monitoring. Destroy | For tear down of components and connections. This is one of two hypervisor-dependent roles (predeploy is the other). If you find yourself adding conditional execution based on the hypervisor anywhere else, it's probably a mistake. Upgrade | For upgrading components from one release to another. ## 1. Develop Code on a Fork -1. Before you start developing code, create your own fork from the upstream MetroAG repo. [https://github.com/nuagenetworks/nuage-metro/](https://github.com/nuagenetworks/nuage-metro/) +1. Before you start developing code, create your own fork from the upstream Metro Automation Engine repo. [https://github.com/nuagenetworks/nuage-metro/](https://github.com/nuagenetworks/nuage-metro/) 2. Clone your own fork on your machine and switch to the _dev_ branch. Note: By default the fork clones into `nuage-metro`. Consider creating a separate branch, other than dev, for feature development. Alternatively, you may provide a target dir for the clone, as shown below with `metro-fork`. 
``` @@ -24,10 +24,11 @@ git clone https://github.com/<username>/nuage-metro.git metro-fork/ cd metro-fork/ git checkout dev ``` -3. Develop and test all proposed contributions on the appropriate hypervisors in the `metro-fork` directory. If you choose not to provide support for one or more supported hypervisors, you must provide graceful error handling for those types. Note: All python files modified or submitted must successfully pass a 'flake8 --ignore=E501' test. + +3. Develop and test all proposed contributions on the appropriate hypervisors in the `metro-fork` directory. If you choose not to provide support for one or more supported hypervisors, you must provide graceful error handling for those types. Testing includes running the program `flake8` over all Python files. The only exception to the flake8 rules that we accept is E501, line length. For example: `flake8 --ignore=E501`. 4. If you require any new User Input Variables: - * Extend the MetroAG variable files with sensible example values:
`build_vars.yml` and `user_creds.yml`. + * Extend the Metro Automation Engine variable files with sensible example values:
`build_vars.yml` and `user_creds.yml`. * Ensure that the copies of the variable files in `roles/reset-build/files/` are identical to
`build_vars.yml` and `user_creds.yml`. * Include comments with the variable specifications that explain the variable's purpose and acceptable values. * Variables that are almost never modified may be included in standard Ansible variable locations, e.g. `roles/<role_name>/vars/main.yml`. diff --git a/Documentation/CUSTOMIZE.md b/Documentation/CUSTOMIZE.md index aa61cb3be0..e06d636f02 100644 --- a/Documentation/CUSTOMIZE.md +++ b/Documentation/CUSTOMIZE.md @@ -2,9 +2,9 @@ ## Prerequisites / Requirements -To confirm that your components are supported by MetroAG, see [README.md](../README.md). +To confirm that your components are supported by Metro Automation Engine, see [README.md](../README.md). -If you have not previously set up your MetroAG Ansible environment, see [SETUP.md](SETUP.md) before proceeding. +If you have not previously set up your Metro Automation Engine Ansible environment, see [SETUP.md](SETUP.md) before proceeding. ## Main Steps @@ -19,9 +19,9 @@ Setting variables correctly ensures that when playbooks run they configure compo `user_creds.yml` contains user credentials for VSD, VCIN and VSC. Default values are specified; you can modify them as necessary. ### `build_vars.yml` -`build_vars.yml` contains configuration parameters for each component. You determine which components MetroAG operates on, as well as *how* those components are operated on, by including them or excluding them in this file. +`build_vars.yml` contains configuration parameters for each component. You determine which components Metro Automation Engine operates on, as well as *how* those components are operated on, by including them or excluding them in this file. -If this is your first time deploying or upgrading with MetroAG, and you intend on automatically unzipping the required Nuage software files as described in step 2 below, ensure that you have specified the following source and target directories in `build_vars.yml`. 
+If this is your first time deploying or upgrading with Metro Automation Engine, and you intend on automatically unzipping the required Nuage software files as described in step 2 below, ensure that you have specified the following source and target directories in `build_vars.yml`. ``` nuage_zipped_files_dir: "" @@ -33,7 +33,7 @@ If you intend on deploying VNS with zero factor bootstrapping, you must customiz ## 2. Unzip Nuage Files -Before executing with MetroAG *for the first time*, ensure that the required unzipped Nuage software files (QCOW2, OVA, and Linux Package files) are available for the components being installed. Use one of the two methods below. +Before executing with Metro Automation Engine *for the first time*, ensure that the required unzipped Nuage software files (QCOW2, OVA, and Linux Package files) are available for the components being installed. Use one of the two methods below. ### Automatically Ensure that you have specified the directory paths for zipped and unzipped files in `build_vars.yml`. (See step 1 above.) @@ -98,4 +98,4 @@ Ask questions and get support via email. Report bugs you find and suggest new features and enhancements via the [GitHub Issues](https://github.com/nuagenetworks/nuage-metro/issues "nuage-metro issues") feature. -You may also [contribute](../CONTRIBUTING.md) to Nuage MetroAG by submitting your own code to the project. +You may also [contribute](../CONTRIBUTING.md) to Nuage Metro Automation Engine by submitting your own code to the project. 
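Reviewer note: the unzip settings touched in the CUSTOMIZE.md hunk above might look like the following `build_vars.yml` fragment. The paths are placeholders, and only `nuage_zipped_files_dir` appears verbatim in the documentation; the name of the target-directory variable is an assumption.

```yaml
# Illustrative build_vars.yml fragment for the automatic unzip step.
# nuage_zipped_files_dir is quoted in the docs; the target-directory
# variable name below is an assumption, and both paths are placeholders.
nuage_zipped_files_dir: "/opt/nuage/zipped"
nuage_unzipped_files_dir: "/opt/nuage/unzipped"
```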
diff --git a/Documentation/DEPLOY.md b/Documentation/DEPLOY.md index 0fb6e7bbbb..5fe62bcf42 100644 --- a/Documentation/DEPLOY.md +++ b/Documentation/DEPLOY.md @@ -1,6 +1,6 @@ -# Deploying Nuage Networks Components with MetroAG +# Deploying Nuage Networks Components with Metro Automation Engine -You can execute MetroAG playbooks to perform the following installations: +You can execute Metro Automation Engine playbooks to perform the following installations: * [Deploy All Components](#deploy-all-components) * [Deploy Individual Modules](#deploy-individual-modules) @@ -8,13 +8,13 @@ You can execute MetroAG playbooks to perform the following installations: ## Prerequisites / Requirements -Before deploying any components, you must have previously [set up your Nuage MetroAG Ansible environment](SETUP.md "link to SETUP documentation") and [customized the environment for your target platform](CUSTOMIZE.md "link to CUSTOMIZE documentation"). +Before deploying any components, you must have previously [set up your Nuage Metro Automation Engine Ansible environment](SETUP.md "link to SETUP documentation") and [customized the environment for your target platform](CUSTOMIZE.md "link to CUSTOMIZE documentation"). -Make sure you have unzipped the Nuage Networks *.tar.gz files into their proper locations in the directory structure, so MetroAG can find the path of the Nuage components automatically when running commands. +Make sure you have unzipped the Nuage Networks *.tar.gz files into their proper locations in the directory structure, so Metro Automation Engine can find the path of the Nuage components automatically when running commands. ## Deploy All Components -MetroAG playbooks operate on components as you have defined them in `build_vars.yml`. If you run a playbook for a component not specified in `build_vars.yml`, the playbook skips all tasks associated with that component and runs to completion without error. 
Thus, if you run the `install_everything` playbook when only VRS appears in `build_vars.yml`, the playbook deploys VRS successfully while ignoring the tasks for the other components not specified. Deploy all specified components with one command as follows: +Metro Automation Engine playbooks operate on components as you have defined them in `build_vars.yml`. If you run a playbook for a component not specified in `build_vars.yml`, the playbook skips all tasks associated with that component and runs to completion without error. Thus, if you run the `install_everything` playbook when only VRS appears in `build_vars.yml`, the playbook deploys VRS successfully while ignoring the tasks for the other components not specified. Deploy all specified components with one command as follows: ``` ./metro-ansible install_everything.yml @@ -24,18 +24,17 @@ Note: `metro-ansible` is a shell script that executes `ansible-playbook` with th ## Deploy Individual Modules -MetroAG offers modular execution models in case you don't want to deploy all components together. See modules below. +Metro Automation Engine offers modular execution models in case you don't want to deploy all components together. See modules below. Module | Command | Description ---|---|--- VCS | `./metro-ansible install_vcs` | Installs components for Virtualized Cloud Services VNS | `./metro-ansible install_vns` | Installs VNS component on top of a VSP DNS
(experimental) | `./metro-ansible install_dns` | Installs a DNS server based on `named`, with a zone file containing all necessary entries for VSP -OSC (experimental) | `./metro-ansible install_osc` | Installs an RDO OpenStack environment that is integrated against VSD ## Install a Particular Role or Host -MetroAG has a complete library of [playbooks](/playbooks "link to playbooks directory"), which are directly linked to each individual role. You can limit your deployment to a particular role or component, or you can skip steps you are confident need not be repeated. For example, to deploy only the VSD VM-images and get them ready for VSD software installation, run: +Metro Automation Engine has a complete library of [playbooks](/playbooks "link to playbooks directory"), which are directly linked to each individual role. You can limit your deployment to a particular role or component, or you can skip steps you are confident need not be repeated. For example, to deploy only the VSD VM-images and get them ready for VSD software installation, run: ``` ./metro-ansible vsd_predeploy @@ -51,7 +50,7 @@ MetroAG has a complete library of [playbooks](/playbooks "link to playbooks dire ### NSGV and Bootstrapping -MetroAG can automatically bootstrap (ZFB) a NSGV when deploying a VNS UTIL VM. To direct MetroAG to generate the ISO file needed for zero factor bootstrapping, perform the following tasks before deploying: +Metro Automation Engine can automatically bootstrap (ZFB) a NSGV when deploying a VNS UTIL VM. To direct Metro Automation Engine to generate the ISO file needed for zero factor bootstrapping, perform the following tasks before deploying: * Customize variables in [`zfb_vars.yml`](/zfb_vars.yml "link to zfb_vars.yml file") * Specify `bootstrap_method: zfb_metro,` in mynsgvs parameters in [`build_vars.yml`](/build_vars.yml "link to build_vars.yml file") @@ -86,4 +85,4 @@ Ask questions and get support via email. 
Report bugs you find and suggest new features and enhancements via the [GitHub Issues](https://github.com/nuagenetworks/nuage-metro/issues "nuage-metro issues") feature. -You may also [contribute](../CONTRIBUTING.md) to Nuage MetroAG by submitting your own code to the project. +You may also [contribute](../CONTRIBUTING.md) to Nuage Metro Automation Engine by submitting your own code to the project. diff --git a/Documentation/DESTROY.md b/Documentation/DESTROY.md index 38b288fe66..78f6637595 100644 --- a/Documentation/DESTROY.md +++ b/Documentation/DESTROY.md @@ -1,4 +1,4 @@ -# Removing Nuage Networks Components with MetroAG +# Removing Nuage Networks Components with Metro Automation Engine The main steps for removing a deployment are: @@ -11,9 +11,9 @@ Use this procedure when you have previously deployed VSP components and would li ## 1. Check Existing Configuration -If you have previously deployed components with MetroAG and your configuration has not changed, you may proceed to step **2. Remove Component(s)**. +If you have previously deployed components with Metro Automation Engine and your configuration has not changed, you may proceed to step **2. Remove Component(s)**. -If you have not previously deployed components with MetroAG or your configuration has changed since, you will need to update your `build_vars.yml` file to reflect the new configuration. +If you have not previously deployed components with Metro Automation Engine or your configuration has changed since, you will need to update your `build_vars.yml` file to reflect the new configuration. ## 2. Remove Components @@ -48,4 +48,4 @@ Ask questions and get support via email. Report bugs you find and suggest new features and enhancements via the [GitHub Issues](https://github.com/nuagenetworks/nuage-metro/issues "nuage-metro issues") feature. -You may also [contribute](../CONTRIBUTING.md) to Nuage MetroAG by submitting your own code to the project. 
+You may also [contribute](../CONTRIBUTING.md) to Nuage Metro Automation Engine by submitting your own code to the project. diff --git a/Documentation/GETTING_STARTED.md b/Documentation/GETTING_STARTED.md index 2b8b201fe8..2b7d6d30fe 100644 --- a/Documentation/GETTING_STARTED.md +++ b/Documentation/GETTING_STARTED.md @@ -1,21 +1,21 @@ -# MetroAG Quick Start Guide +# Metro Automation Engine Quick Start Guide ## 1. Read documentation 1.1 [Readme](../README.md) for information on supported components -1.2 [Setup](SETUP.md) for setting up the MetroAG host and enabling SSH +1.2 [Setup](SETUP.md) for setting up the Metro Automation Engine host and enabling SSH 1.3 [Customize](CUSTOMIZE.md) for customizing user data and files 1.4 [Release Notes](RELEASE_NOTES.md) for information on the latest features -## 2. Setup MetroAG Host +## 2. Setup Metro Automation Engine Host -#### What's a MetroAG Host? +#### What's a Metro Automation Engine Host? * It can be a VM, physical server or container. * It requires CentOS 7.x or RHEL 7.x with basic packages. * We recommend that you dedicate a machine (VM) for it. -2.1 Clone the master branch of the repo onto the **MetroAG Host**. Read [Setup](SETUP.md) for details. +2.1 Clone the master branch of the repo onto the **Metro Automation Engine Host**. Read [Setup](SETUP.md) for details. ``` git clone https://github.com/nuagenetworks/nuage-metro.git ``` @@ -26,9 +26,9 @@ $ sudo ./metro-setup.sh ## 3. Enable SSH Access -### 3.1 For MetroAG User +### 3.1 For Metro Automation Engine User -3.1.1 As MetroAG User, generate SSH keys: `ssh-keygen`. +3.1.1 As Metro Automation Engine User, generate SSH keys: `ssh-keygen`. 3.1.2 Copy SSH public key: `ssh-copy-id localhost`. ### 3.2 For Root User @@ -45,19 +45,19 @@ See [Setup](SETUP.md) for more details about enabling SSH Access. ## 4. Install ovftool (for VMware only) -Download and install the [ovftool](https://www.vmware.com/support/developer/ovf/) from VMware. 
MetroAG uses ovftool for OVA operations. +Download and install the [ovftool](https://www.vmware.com/support/developer/ovf/) from VMware. Metro Automation Engine uses ovftool for OVA operations. ## 5. Prepare your environment 5.1 Unzip Nuage files: `./metro-ansible nuage_unzip`. See [CUSTOMIZE](CUSTOMIZE.md) for details. -      Be sure that Nuage packages (tar.gz) are available on localhost (MetroAG host), +      Be sure that Nuage packages (tar.gz) are available on localhost (Metro Automation Engine host),       either in a native directory or NFS-mounted. ## Checklist for Target Servers ### KVM -- [ ] MetroAG host has ability to do a password-less SSH as root. +- [ ] Metro Automation Engine host has ability to do a password-less SSH as root. - [ ] Sufficient disk space / resources exist to create VMs. - [ ] KVM is installed. - [ ] All required management and data bridges are created. @@ -65,8 +65,8 @@ Download and install the [ovftool](https://www.vmware.com/support/developer/ovf/ ### vCenter - [ ] User specified in build_vars.yml has required permissions to create and configure a VM. -- [ ] ovftool has been downloaded from VMware onto the MetroAG Host. -- [ ] pyvmomi has been installed on MetroAG Host: `pip install pyvmomi`. +- [ ] ovftool has been downloaded from VMware onto the Metro Automation Engine Host. +- [ ] pyvmomi has been installed on Metro Automation Engine Host: `pip install pyvmomi`. ## Next Steps @@ -80,4 +80,4 @@ Ask questions and get support via email. Report bugs you find and suggest new features and enhancements via the [GitHub Issues](https://github.com/nuagenetworks/nuage-metro/issues "nuage-metro issues") feature. -You may also [contribute](../CONTRIBUTING.md) to Nuage MetroAG by submitting your own code to the project. +You may also [contribute](../CONTRIBUTING.md) to Nuage Metro Automation Engine by submitting your own code to the project. 
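Reviewer note: the target-server checklists in the GETTING_STARTED.md hunk above lend themselves to a quick pre-flight script. The sketch below is a hypothetical helper, not part of the nuage-metro repo; the tool names (git, ansible-playbook, ovftool) come from the documentation.

```shell
#!/bin/sh
# Hypothetical pre-flight helper for a Metro Automation Engine host.
# It only reports whether each tool from the checklists is on PATH;
# it does not install anything.
check() {
    if command -v "$1" >/dev/null 2>&1; then
        echo "OK: $1"
    else
        echo "MISSING: $1"
    fi
}

check git               # needed to clone nuage-metro
check ansible-playbook  # installed via metro-setup.sh
check ovftool           # VMware/vCenter targets only
```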
diff --git a/Documentation/HOWTO.md b/Documentation/HOWTO.md index 16274ccd63..e08201684d 100644 --- a/Documentation/HOWTO.md +++ b/Documentation/HOWTO.md @@ -2,16 +2,16 @@ ## Prerequisites / Requirements -Before working with MetroAG, please refer to [README.md](README.md), [SETUP.md](/SETUP.md), and [CUSTOMIZE.md](/CUSTOMIZE.md) for information about supported deployments and general guidelines. +Before working with Metro Automation Engine, please refer to [README.md](README.md), [SETUP.md](/SETUP.md), and [CUSTOMIZE.md](/CUSTOMIZE.md) for information about supported deployments and general guidelines. -## A Sample of What MetroAG Can Do +## A Sample of What Metro Automation Engine Can Do [1. Customize the Component Mix](#1-customize-the-component-list) [2. Deploy VRS on Multiple Target Architecture](#2-deploy-vrs-on-multiple-target-architectures) [3. Deploy NSG in AWS](#3-deploy-nsg-in-aws) ## 1. Customize the Component Mix -MetroAG supports customizing the list of components the playbooks operate on. To operationalize two VSCs `build_vars.yml` would contain the following: +Metro Automation Engine supports customizing the list of components the playbooks operate on. To operationalize two VSCs `build_vars.yml` would contain the following: ``` myvscs: - { hostname: jenkinsvsc1.example.com, @@ -41,7 +41,7 @@ myvscs: ``` ### Example -You can use MetroAG to deploy a VSD cluster by itself. The basic pattern described here applies to deploying only VSD, VSD+VSC, VSD+VSC+VRS, VSC only, VSTAT only, and a number of other combination of list components. +You can use Metro Automation Engine to deploy a VSD cluster by itself. The basic pattern described here applies to deploying only VSD, VSD+VSC, VSD+VSC+VRS, VSC only, VSTAT only, and a number of other combinations of components. For deploying a VSD cluster, you must define 3 VSD entries in the `myvsds` dictionary in `build_vars.yml`. You must also have the other required definitions in place. 
Here is an example of the `build_vars.yml` file that deploys a cluster of 3 VSDs: @@ -84,7 +84,7 @@ For deploying a VSD cluster, you must define 3 VSD entries in the `myvsds` dicti Some customer environments use a mix of Debian- and RedHat-family Linux distributions in their compute nodes, where Debian == Ubuntu and Redhat == CentOS or RHEL. -MetroAG supports deploying VRS onto two target architectures by supporting VRS groups in `build_vars.yml`. The following is an example of deloying VRSs on three target architectures using one 'build_vars.yml' file. +Metro Automation Engine supports deploying VRS onto multiple target architectures by supporting VRS groups in `build_vars.yml`. The following is an example of deploying VRSs on three target architectures using one `build_vars.yml` file. ### Example build_vars.yml file for three VRS target architectures @@ -114,7 +114,7 @@ myvrss: ## 3. Deploy NSG in AWS -MetroAG supports the deployment of NSGs in AWS and configuring those as Network Gateways in a particular enterprise of a Nuage Networks installation. +Metro Automation Engine supports the deployment of NSGs in AWS and configuring those as Network Gateways in a particular enterprise of a Nuage Networks installation. It assumes the necessary enterprise and NSG template has been preconfigured: It can either @@ -223,4 +223,4 @@ Ask questions and get support via email. Report bugs you find and suggest new features and enhancements via the [GitHub Issues](https://github.com/nuagenetworks/nuage-metro/issues "nuage-metro issues") feature. -You may also [contribute](../CONTRIBUTING.md) to Nuage MetroAG by submitting your own code to the project. +You may also [contribute](../CONTRIBUTING.md) to Nuage Metro Automation Engine by submitting your own code to the project. 
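Reviewer note: the three-entry `myvsds` pattern called out in the HOWTO.md hunk above can be sketched as a `build_vars.yml` fragment. Only the dictionary name `myvsds` comes from the documentation; the hostnames are placeholders, and real entries carry additional per-host fields shown in the repo's full example.

```yaml
# Illustrative build_vars.yml fragment for a 3-node VSD cluster.
# Shape reminder only -- not a complete configuration.
myvsds:
  - { hostname: vsd1.example.com }
  - { hostname: vsd2.example.com }
  - { hostname: vsd3.example.com }
```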
diff --git a/Documentation/OPENSTACK.md b/Documentation/OPENSTACK.md index 5bc7df6805..4bdaa2b63f 100644 --- a/Documentation/OPENSTACK.md +++ b/Documentation/OPENSTACK.md @@ -1,6 +1,6 @@ -# Deploying Nuage Networks Components in OpenStack with MetroAG (limited support) +# Deploying Nuage Networks Components in OpenStack with Metro Automation Engine (limited support) ## Internal Lab Use Only. Not for Customer Use. -The following components/roles are supported by MetroAG in OpenStack. +The following components/roles are supported by Metro Automation Engine in OpenStack. ### Deploy Infra VM You can deploy an infra VM that acts as a private DNS server and NTP server for VSD and VSC. `vsd-deploy` role and `vsc-deploy` role automatically populate their respective DNS entries with the hostname and IP addr mappings. @@ -303,4 +303,4 @@ Ask questions and get support via email. Report bugs you find and suggest new features and enhancements via the [GitHub Issues](https://github.com/nuagenetworks/nuage-metro/issues "nuage-metro issues") feature. -You may also [contribute](../CONTRIBUTING.md) to Nuage MetroAG by submitting your own code to the project. +You may also [contribute](../CONTRIBUTING.md) to Nuage Metro Automation Engine by submitting your own code to the project. diff --git a/Documentation/RELEASE_NOTES.md b/Documentation/RELEASE_NOTES.md index cfbc8f7fe7..e6c7760a80 100644 --- a/Documentation/RELEASE_NOTES.md +++ b/Documentation/RELEASE_NOTES.md @@ -1,36 +1,34 @@ -# MetroAG Release Notes -## Release 2.4.0 +# Metro Automation Engine Release Notes +## Release 2.4.1 ### New Features and Enhancements -* Eliminate the build_upgrade.yml playbook and consolidate upgrade_vars.yml into build_vars.yml. Enhance user_creds.yml to contain variables for all component logins. There are now only 3 user data files: build_vars.yml, user_creds.yml, and zfb_vars.yml. -* Support in-place ElasticSearch upgrade. Add support for both old and new upgrade procedure in vstat. 
Eliminate upgrade_major_or_minor var. -* Added support to VSS and AAR. New file “roles/vstat-deploy/tasks/aar_vss_enable.yml” creates certificates from VSD and pushes them to the stats VM. Also starts the nginx engine. Modified “roles/vstat-deply/tasks/non_heat.yml” to include file -* Build step is now run automatically when user runs Nuage playbooks directly. -* Refactor build role – VSR+VRS_DNS_Nuage OpenStack Plugins. Removes get_paths.yml from build and build-upgrade role. Includes refactor of VSR, VRS, DNS and Nuage OpenStack plugin files -* Added skipping of VSTAT when iptables/firewalld rules are already in place -* VSC health failure indication and displaying the error messages after all VSC health checks have been done -* Skip all components if healthy. Adds skipping functionality for all components (VSC, VRS, VSTAT, VNSUTILS, NSGV) in predeploy and deploy if the component is already running and healthy. -* Add installation of Nuage-selinux packages for RHEL7 & CentOS7. The Nuage selinux package allows the user to set selinux to enforcing mode on the nodes running the VRS. This is supported for RHEL7 and CentOS7. -* Support some VSCs having system IPs, and some not. Eliminates undefined variable error when some VSCs have system IPs, and some don’t -* Limit VSC name length check to system name, and allow user to specify a shorter one -* Add option to create dvSwitch when deploying VCIN -* Call ‘fallocate’ on VSD VM disk for KVM – adds a variable for the disk size to preallocate, default 285GB -* Execute vsd-health as part of vsd-post-deploy -* In vrs-health, look for interfaces connected to alubr0 instead of named tap* -* In vrs-postdeploy, better regex to determine ovs-vtep ip in case VSC is routed via default route, or indirect route -* For VSC, vsc_mgmt_static_route_list is now optional for cases where no static route list is required. -* Skip predeploy and deploy when already present, allowing, for example, re-running install_everything multiple times. 
+* Support for Nuage Networks version 5.2.3 +* Add check to verify VSDs are connected to VSCs +* Add validation for vsd hostname +* Change remote user from ‘root’ (or nothing) to a variable +* Add support for checking REST and JMS gateway on VSD and check VSTAT web gateway +* Update paramiko version in two files +* Delete all os-compute-*, osc-*, and infra-* from roles and playbooks +* Change ‘vsc_upgrade_backup_and_prep’ to ‘vsc_sa_upgrade_backup_and_prep’ in UPGRADE.md +* Add parameter to specify backup location when upgrading +* Support for master/slave VCIN. +* Remove deprecated `include:` Ansible commands. +* Added yum_proxy support to dns. +* Added static route support for VNSUTIL. +* Added new roles for installation of VRS compute nodes, vrs-vm. ### Resolved Issues -* Fix problems in AAR VSS support. Eliminate installing pip and pexpect, only generate the certificate once -* Fix timeouts on backing up VSC by scp directly -* Add data_bridge support in the vnsutil hosts template and remove waiting for vmware tools for nsgv -* Add paramiko to pip install list. Pinning paramiko version 2.2.1 as there is a bug in the latest version (paramiko 2.3.1) installed by Ansible 2.4. -* Reset VSD keystore password to default when upgrading. In the field, we’ve run into issues where customers configure a non-default password on the keystore, and our VSD upgrade/decouple scripts fail. This task checks if the keystore password is still set to default, and if not asks the user to configure a variable with the current password so we can change it back to default. -* Fix VRS health check for VMs with multiple vnics, or interfaces not named ‘tapxxxx’ -* NSGV bug fix. MAC address of NSGV, which is needed for ZFB, was not getting populated in the build -* Fix VSC user credentials. 
The vrs-postdeploy tasks depend on vsc_username and vsc_password being specified for each VSC -* Remove cloud-init files from Utils VM -* Install missing VRS dependency python-six, .rpm does not list it as dependency -* Fix bug in VSTAT XML definition. Upgrade was not picking up vm_name since vmname is defined in XML definition -* Eliminate check for exactly 3 XMPP users -* Fixed vnsutil-postdeploy to run the install script with the data_fqdn +* Minor correction in ‘hosts.j2’ vsr section +* Correct SROS prompt +* Change ‘inventory hostname’ to ‘vm_name’ for dns image path +* Fix a failure during pip package check +* Add yum update and libguestfs-tools to ‘roles/vrs-vm-deploy/tasks/main.yml’ +* Import validate-build-vars task from common roles +* Add name ‘nsgv_predeploy’ to ‘install_vns.yml’ +* Delete sgt-qos section of config.cfg.j2 +* Add check for DNS qcow2 +* Add guestfish from the libguestfs-tools package as a prerequisite. +* The handle_vars playbook did not take into account custom provided build_vars_files or user_creds_file and calculated/verified the MD5 sum of the wrong files (static build_vars.yml and user_creds.yml instead of the provided values). +* vrs-vr image directory fix. +* Fix error on dns-predeploy when hostname and vmname are the same. +* Fix issue with running metro-ansible without root user. diff --git a/Documentation/SETUP.md b/Documentation/SETUP.md index 8c449cae2d..0cc1d60a04 100644 --- a/Documentation/SETUP.md +++ b/Documentation/SETUP.md @@ -1,19 +1,19 @@ -# Setting Up the Nuage MetroAG Ansible Environment +# Setting Up the Nuage Metro Automation Engine Ansible Environment (4 minute read) ## Prerequisites / Requirements -Before working with MetroAG, please read [README.md](/README.md) for a list of supported VCS/VNS components, supported target server types, and other requirements. 
+Before working with Metro Automation Engine, please read [README.md](/README.md) for a list of supported VCS/VNS components, supported target server types, and other requirements. ## Main steps for setting up the environment -[1. Clone Nuage MetroAG repository](#1-clone-nuage-metroag-repository) +[1. Clone Nuage Metro Automation Engine repository](#1-clone-nuage-metro-automation-engine-repository) [2. Set up Ansible host](#2-set-up-ansible-host) [3. Enable SSH Access](#3-enable-ssh-access) [4. Install ovftool (for VMware only)](#4-install-ovftool-for-vmware-only) -### 1. Clone Nuage MetroAG Repository -The Ansible Host must run el7 Linux host (CentOS 7.* or RHEL 7.*). Using one of the following two methods install a copy of the Nuage MetroAG repository onto the Ansible Host. +### 1. Clone Nuage Metro Automation Engine Repository +The Ansible Host must run el7 Linux host (CentOS 7.* or RHEL 7.*). Using one of the following two methods install a copy of the Nuage Metro Automation Engine repository onto the Ansible Host. #### Method One -Download a zip of the Nuage MetroAG archive from [GitHub.com](https://github.com/nuagenetworks/nuage-metro), and install it onto the Ansible Host. +Download a zip of the Nuage Metro Automation Engine archive from [GitHub.com](https://github.com/nuagenetworks/nuage-metro), and install it onto the Ansible Host. #### Method Two On the Ansible Host, execute the following commands: @@ -22,12 +22,12 @@ yum install -y git git clone https://github.com/nuagenetworks/nuage-metro ``` ### 2. Set Up Ansible Host -Prior to running MetroAG, use one of the two methods below to install the required packages onto the Ansible Host. +Prior to running Metro Automation Engine, use one of the two methods below to install the required packages onto the Ansible Host. #### Method One: Set Up Ansible Host Automatically (recommended) -*metro-setup.sh* is a script provided with the MetroAG code, which installs the packages and modules required for MetroAG. 
If any of the packages or modules are already present, the script does not upgrade or overwrite them. The script can also be run multiple times without affecting the system. The sample below is an example and may not reflect the most recent software. +*metro-setup.sh* is a script provided with the Metro Automation Engine code, which installs the packages and modules required for Metro Automation Engine. If any of the packages or modules are already present, the script does not upgrade or overwrite them. The script can also be run multiple times without affecting the system. The sample below is an example and may not reflect the most recent software. ``` -[JohnDoe@metroag-host ~]$ sudo ./metro-setup.sh +[JohnDoe@metro-host ~]$ sudo ./metro-setup.sh [sudo] password for JohnDoe: Setting up Nuage Metro Automation Engine @@ -57,43 +57,44 @@ The script writes a detailed log into *metro-setup.log*. #### Method Two: Set Up Ansible Host Manually 1. Install the following packages and modules for all setups: -Package or Module | Command -------- | -------- -Epel-release | `yum install -y epel-release` -Python-devel | `yum install -y python-devel.x86_64` -Openssl-devel | `yum install -y openssl-devel` -Python pip | `yum install -y python2-pip ` -Development Tools | `yum install -y "@Development tools"` +Package or Module | Command +------------------------------ | -------- +Epel-release | `yum install -y epel-release` +Python-devel | `yum install -y python-devel.x86_64` +Openssl-devel | `yum install -y openssl-devel` +Python pip | `yum install -y python2-pip` +Development Tools | `yum install -y "@Development tools"` Ansible 2.4 (for full support) | `pip install ansible==2.4` -Netmiko and its dependencies | `pip install netmiko` -Netaddr and its dependencies | `pip install netaddr` +Netmiko and its dependencies | `pip install netmiko` +Netaddr and its dependencies | `pip install netaddr` IPaddress and its dependencies | `pip install ipaddress` -Python pexpect module | `pip 
install pexpect` -VSPK Python module | `pip install vspk` -Paramiko | `pip install paramiko==2.2.1` +Python pexpect module | `pip install pexpect` +VSPK Python module | `pip install vspk` +Paramiko | `pip install paramiko==2.2.1` 2. **For ESXi / vCenter Only**, install the following package: Note: vCenter deployments are supported for Nuage software version 4.0R7 and greater. -Package | Command - -----| ------ - pyvmomi | `pip install pyvmomi` + Package | Command + -------- | ------- + pyvmomi | `pip install pyvmomi` + jmespath | `pip install jmespath` 3. For **OpenStack Only**, install the following module: -Module | Command - -----| ------ + Module | Command + ------------ | ------- shade python | `pip install shade` ### 3. Enable SSH Access -To enable passwordless SSH access, public/private SSH keys must be created and distributed for the MetroAG User and root users. The MetroAG User must be the root user or have *sudo* privileges. -#### For MetroAG User -1. Login to the Ansible Host as the MetroAG User. +To enable passwordless SSH access, public/private SSH keys must be created and distributed for the Metro Automation Engine User and root users. The Metro Automation Engine User must be the root user or have *sudo* privileges. +#### For Metro Automation Engine User +1. Login to the Ansible Host as the Metro Automation Engine User. 2. Generate SSH keys. Execute the command: `ssh-keygen`. 3. Follow the prompts. It is normal to accept all defaults. -4. Copy the SSH public key to the MetroAG User's authorized keys file. +4. Copy the SSH public key to the Metro Automation Engine User's authorized keys file. Execute the command: `ssh-copy-id localhost`. #### For Root User 1. Login to the Ansible Host as the Root User. @@ -108,10 +109,10 @@ To enable passwordless SSH access, public/private SSH keys must be created and d 2. Repeat for every target server. ### 4. 
Install ovftool (for VMware only) - If you are installing VSP components in a VMware environment (ESXi/vCenter) you will also need to download and install the [ovftool](https://www.vmware.com/support/developer/ovf/) from VMware. MetroAG uses ovftool for OVA operations. + If you are installing VSP components in a VMware environment (ESXi/vCenter) you will also need to download and install the [ovftool](https://www.vmware.com/support/developer/ovf/) from VMware. Metro Automation Engine uses ovftool for OVA operations. ## Next Step -After the MetroAG environment is set up, the next step is to customize it for your topology. See [CUSTOMIZE.md](CUSTOMIZE.md) for guidance. +After the Metro Automation Engine environment is set up, the next step is to customize it for your topology. See [CUSTOMIZE.md](CUSTOMIZE.md) for guidance. ## Questions, Feedback, and Contributing Ask questions and get support via email. @@ -120,4 +121,4 @@ Ask questions and get support via email. Report bugs you find and suggest new features and enhancements via the [GitHub Issues](https://github.com/nuagenetworks/nuage-metro/issues "nuage-metro issues") feature. -You may also [contribute](../CONTRIBUTING.md) to Nuage MetroAG by submitting your own code to the project. +You may also [contribute](../CONTRIBUTING.md) to Nuage Metro Automation Engine by submitting your own code to the project. diff --git a/Documentation/UPGRADE.md b/Documentation/UPGRADE.md index e300db9d18..d528450b45 100644 --- a/Documentation/UPGRADE.md +++ b/Documentation/UPGRADE.md @@ -1,8 +1,8 @@ -# Upgrading Nuage Networks Components with MetroAG +# Upgrading Nuage Networks Components with Metro Automation Engine ## Prerequisites / Requirements -Before upgrading any components, you must have previously [set up your Nuage MetroAG Ansible environment](SETUP.md) and [customized the upgrade environment for your target platform](CUSTOMIZE.md). 
+Before upgrading any components, you must have previously [set up your Nuage Metro Automation Engine Ansible environment](SETUP.md) and [customized the upgrade environment for your target platform](CUSTOMIZE.md). ## VSD, VSC, & VSTAT (elasticsearch ) HA/Cluster upgrade at a glance @@ -48,7 +48,7 @@ After all [Prerequisites](#prerequisites) are met, run the following set of comm 4. ./metro-ansible vsd_predeploy -vvvv 5. ./metro-ansible vsd_sa_upgrade_deploy -vvvv 6. ./metro-ansible vsd_upgrade_complete -vvvv -7. ./metro-ansible vsc_upgrade_backup_and_prep -vvvv +7. ./metro-ansible vsc_sa_upgrade_backup_and_prep -vvvv 8. ./metro-ansible vsc_sa_upgrade_deploy -vvvv 9. ./metro-ansible vsc_sa_upgrade_postdeploy -vvvv diff --git a/Documentation/VCENTER.md b/Documentation/VCENTER.md new file mode 100644 index 0000000000..1655fa7037 --- /dev/null +++ b/Documentation/VCENTER.md @@ -0,0 +1,197 @@ +# Using a VMware vCenter environment to deploy Nuage using Metro Automation Engine + +## Table of Contents + +- [Supported versions](#supported-versions) + - [Nuage supported versions and components](#nuage-supported-versions-and-components) + - [vSphere supported versions](#vsphere-supported-versions) +- [Prerequisites / Requirements](#prerequisites---requirements) + - [Required packages](#required-packages) + - [vCenter user requirements](#vcenter-user-requirements) +- [Configuration](#configuration) + - [Specifying the vCenter host to use](#specifying-the-vcenter-host-to-use) + - [Overwriting settings for specific components](#overwriting-settings-for-specific-components) +- [Deploying vCenter Integration Nodes](#deploying-vcenter-integration-nodes) + - [VCIN deployment](#vcin-deployment) + - [VCIN Active/Standby deployment](#vcin-active-standby-deployment) + +## Supported versions + +### Nuage supported versions and components + +All versions starting from 4.0R11 and 5.0.1 are supported.
+ + **Note**: Support for VCIN Active/Standby deployment is only available for Nuage versions 5.2.2 and above. + +The following Nuage components can be deployed on a vSphere environment using Metro Automation Engine: + +* VSD +* VCIN (Active/Standby) +* ElasticSearch +* VSC +* VNS Utils +* NSG-V +* STC-V + +### vSphere supported versions + +The deployment of the Nuage components on a vSphere environment using Metro Automation Engine is supported on the same vSphere version as described in the Nuage Release Notes. + +## Prerequisites / Requirements + +### Required packages + +The following software and python packages are required to be installed on the Metro Automation Engine Host. + + Package | Command + -------- | ------- + ovftool | Download from the [VMware website](https://www.vmware.com/support/developer/ovf/) + pyvmomi | `pip install pyvmomi` + jmespath | `pip install jmespath` + +### vCenter user requirements + +The vCenter user or users used to deploy or upgrade the Nuage components on a vSphere environment will require a minimum of the following permissions: + +* Most of the VM actions for VMs in the resource pool the VMs need to be deployed in. This includes: + * Creating and Deleting VMs + * Changing the Power state of the VM + * Executing commands through VMware tools + * Deploy OVAs and OVFs +* (Optional) Create and update Distributed vSwitches and Distributed vSwitch Port Groups + +## Configuration + +To provide the general configuration for the deployment of the Nuage components on a vSphere environment, a set of configuration values has to be provided in the `build_vars.yml` file (or your own variables file). + +The below example shows and explains the general configuration values needed for a vSphere deployment. 
+ +```yaml +vcenter: + username: administrator@vsphere.local + password: vmware + datacenter: Datacenter + cluster: Management + datastore: Datastore + resource_pool: Resource Pool + ovftool: /usr/bin/ovftool +``` + +* **vcenter.username** + This is the username that will be used to connect to the vSphere environment; typically this is in a username@domain.tld format when connecting to vCenter. +* **vcenter.password** + This is the password that will be used to connect to the vSphere environment, for the user mentioned in `vcenter.username`. +* **vcenter.datacenter** + This is the datacenter in vCenter in which the Nuage components will be deployed by Metro Automation Engine. This name must exactly match the name of the Datacenter in vCenter. +* **vcenter.cluster** + This is the cluster in vCenter, part of the datacenter configured in `vcenter.datacenter`. The Nuage components will be deployed in this cluster by Metro Automation Engine. If the cluster consists of multiple hosts, vCenter decides which host runs each VM, also depending on the configured `vcenter.datastore`. +* **vcenter.datastore** + This is the datastore in vCenter on which the Nuage component files will reside after deployment by Metro Automation Engine. This datastore must be connected to at least one ESXi host in the vCenter cluster configured in `vcenter.cluster`; otherwise deployment will fail because vCenter cannot find a suitable host in the cluster to deploy the VMs in. +* **vcenter.resource_pool** + This optional parameter is the vCenter resource pool in which to deploy the Nuage components. This resource pool needs to be part of the configured vCenter cluster in `vcenter.cluster`. If a resource pool is provided with limitations configured, it is important to make sure the resource pool has sufficient resources available for running all the Nuage components that will be deployed. Otherwise, vCenter will refuse to power on the components.
+* **vcenter.ovftool** + This is the path on the Metro Automation Engine host for the ovftool binary. OVFTool is used to deploy the OVA and OVF images provided for each Nuage component. + + **Note**: Except for the vcenter.resource_pool, all values have to be configured as a general configuration. There are no default values. + +### Specifying the vCenter host to use + +One configuration value that is not present in the general configuration values, is the target vCenter host. This value is provided per component and is referred to in the [HOWTO.md](HOWTO.md) as the `target_server`. This is a configuration value that is set for each individual component of the deployment and has to contain the vCenter FQDN or IP to which that component needs to be deployed to. + +In combination with the `target_server` value per component, the `target_server_type` value needs to be set to `vcenter` for each component that needs to be deployed on a vSphere environment. + +Below is an example of a VSD with the `target_server` and `target_server_type` configured for deployment on a vSphere environment. + +```yaml +myvsds: + - { + hostname: vsd01.nuage.demo, + target_server_type: "vcenter", + target_server: vcenter.nuage.demo, + mgmt_ip: 192.0.2.10, + mgmt_gateway: 192.0.2.1, + mgmt_netmask: 255.255.255.0 + } +``` + +### Overwriting settings for specific components + +The above general configuration values will be used to deploy and manage all Nuage components in your environment by default. + +It is possible to overwrite this behaviour by providing component specific values for the vCenter configuration settings. Below is an example demonstrating this with the same VSD as in the previous example. 
+ +```yaml +myvsds: + - { + hostname: vsd01.nuage.demo, + target_server_type: "vcenter", + target_server: vcenter.nuage.demo, + mgmt_ip: 192.0.2.10, + mgmt_gateway: 192.0.2.1, + mgmt_netmask: 255.255.255.0, + vcenter: { + username: alternative@vsphere.local, + password: alt_vmware, + datacenter: Lab, + cluster: Management, + datastore: LocalSSD01, + resource_pool: Nuage-RP + } + } +``` + +## Deploying vCenter Integration Nodes + +The deployment of one or more vCenter Integration Nodes (VCIN) is supported on a vSphere environment and on KVM, this section applies to both environments. + +### VCIN deployment + +To manage one or more VCINs, two sections in the `build_vars` have to be provided: + +* `vcin_operations_list` + This value can contain either `- install` or `- upgrade`. +* `myvcins` + A list of VCINs that need to be managed. + +The example below shows a single VCIN deployment configuration, it is possible to add as many VCINs as needed. The fields in each VCIN's definition have the same function as with the `myvsds` section described in the [HOWTO.md](HOWTO.md) documentation. + +```yaml +myvcins: + - { + hostname: vcin01.nuage.demo, + target_server_type: "vcenter", + target_server: vcenter.nuage.demo, + mgmt_ip: 192.0.2.20, + mgmt_gateway: 192.0.2.1, + mgmt_netmask: 255.255.255.0 + } +``` + +### VCIN Active/Standby deployment + +The deployment of one or more VCIN Active/Standby pairs is supported through Metro Automation Engine. To achieve this, a new `master_vcin` configuration setting needs to be added in the definition of a VCIN. This `master_vcin` has to contain the `hostname` of another entry in the `myvcins` list. + +The example below shows the deployment of an Active/Standby VCIN pair, where the slave VCIN is pointing to the master VCIN using the `master_vcin` configuration setting. 
+ +```yaml +myvcins: + - { + hostname: master-vcin01.nuage.demo, + target_server_type: "vcenter", + target_server: vcenter.nuage.demo, + mgmt_ip: 192.0.2.20, + mgmt_gateway: 192.0.2.1, + mgmt_netmask: 255.255.255.0 + } + - { + hostname: slave-vcin02.nuage.demo, + master_vcin: master-vcin01.nuage.demo, + target_server_type: "vcenter", + target_server: vcenter.nuage.demo, + mgmt_ip: 192.0.2.21, + mgmt_gateway: 192.0.2.1, + mgmt_netmask: 255.255.255.0 + } +``` + +A combination of multiple Active/Standby VCIN pairs and standalone VCINs can be deployed in the same environment with a single Metro Automation Engine execution. diff --git a/README.md b/README.md index 306410406d..5671bb1560 100644 --- a/README.md +++ b/README.md @@ -1,11 +1,11 @@ -# Nuage Networks MetroAG Automation EnGine (AG) +# Nuage Networks Metro Automation Engine (4 minute read) -MetroAG is an automation engine that deploys and upgrades Nuage Networks components. -After you specify the individual details of your target platform, MetroAG (leveraging Ansible playbooks and roles) sets up the environment as specified. MetroAG can also upgrade, roll-back, and health-check the environment. +Metro is an automation engine that deploys and upgrades Nuage Networks components. +After you specify the individual details of your target platform, Metro Automation Engine (leveraging Ansible playbooks and roles) sets up the environment as specified. Metro Automation Engine can also upgrade, roll-back, and health-check the environment. ## Supported Components for Deployment -MetroAG supports deployment of the following components as VMs on the target server. The same target server types are supported as the VSP platform. +Metro Automation Engine supports deployment of the following components as VMs on the target server. The same target server types are supported as the VSP platform. Component | KVM (el7)
Stand-alone (SA) | KVM (el7)
Clustered (HA) | ESXi
Stand-alone (SA) | ESXi
Clustered (HA) ------- | :---: | :---: | :----: | :---: @@ -29,7 +29,7 @@ NSG-V (Network Services Gateway-Virtual) | X | ![topology](topology.png) ## Supported Components for Upgrade -MetroAG supports upgrade of the following Nuage VSP components. +Metro Automation Engine supports upgrade of the following Nuage VSP components. Component | KVM (el7)
SA | KVM (el7)
HA | ESXi
SA | ESXi
HA ------- | :---: | :---: | :----: | :---: @@ -39,30 +39,30 @@ VSC | X | X | X | X VCIN | X | | X | ## Use of Ansible Playbooks and Roles -**Ansible** provides a method to easily define one or more actions to be performed on one or more computers. These tasks can target the local system Ansible is running from, as well as other systems that Ansible can reach over the network. The Ansible engine has minimal installation requirements. Python, with a few additional libraries, is all that is needed for the core engine. MetroAG includes a few custom Python modules and scripts. Agent software is not required on the hosts to be managed. Communication with target hosts defaults to SSH. Ansible does not require the use of a persistent state engine. Every Ansible run determines state as it goes, and adjusts as necessary given the action requirements. Running Ansible requires only an inventory of potential targets, state directives, either expressed as an ad hoc action, or a series coded in a YAML file, and the credentials necessary to communicate with the target. +**Ansible** provides a method to easily define one or more actions to be performed on one or more computers. These tasks can target the local system Ansible is running from, as well as other systems that Ansible can reach over the network. The Ansible engine has minimal installation requirements. Python, with a few additional libraries, is all that is needed for the core engine. Metro Automation Engine includes a few custom Python modules and scripts. Agent software is not required on the hosts to be managed. Communication with target hosts defaults to SSH. Ansible does not require the use of a persistent state engine. Every Ansible run determines state as it goes, and adjusts as necessary given the action requirements. 
Running Ansible requires only an inventory of potential targets, state directives (either expressed as an ad hoc action or coded as a series in a YAML file), and the credentials necessary to communicate with the target. **Playbooks** are the language by which Ansible orchestrates, configures, administers and deploys systems. They are YAML-formatted files that collect one or more plays. Plays are one or more tasks linked to the hosts that they are to be executed on. **Roles** build on the idea of include files and combine them to form clean, reusable abstractions. Roles are ways of automatically loading certain vars files, tasks, and handlers based on a known file structure. -### MetroAG Playbooks and Roles -MetroAG playbooks and roles fall into the following categories: +### Metro Automation Engine Playbooks and Roles +Metro Automation Engine playbooks and roles fall into the following categories: Playbook/Role | Description | ------------- | ----------- | Predeploy | prepares infrastructure with necessary packages and makes the component(s) reachable | Deploy | installs and configures component(s) | Postdeploy | performs integration checks, and some basic commissioning tests | -Health | checks health for a running component without assuming it was deployed with MetroAG | +Health | checks health for a running component without assuming it was deployed with Metro Automation Engine | Destroy | removes component(s) from the infrastructure | Upgrade | upgrades component(s) from one release to another | ## Nomenclature -**Ansible Host**: The host where MetroAG runs. Ansible and the required packages are installed on this host. The Ansible Host must run el7 Linux host, e.g. Cent)S 7.* or RHEL 7.*. -**MetroAG User**: The user who runs MetroAG to deploy and upgrade components. +**Ansible Host**: The host where Metro Automation Engine runs. Ansible and the required packages are installed on this host. The Ansible Host must be an el7 Linux host, e.g. CentOS 7.* or RHEL 7.*.
+**Metro Automation Engine User**: The user who runs Metro Automation Engine to deploy and upgrade components. **Target Server**: The hypervisor on which one or more VSP components are installed as VMs. Each deployment may contain more than one Target Server. -## Main Steps for Using MetroAG +## Main Steps for Using Metro Automation Engine 1. [Setup](Documentation/SETUP.md) the Ansible Host. @@ -74,12 +74,12 @@ Upgrade | upgrades component(s) from one release to another | ## Documentation -The [Documentation](Documentation/) directory contains the following guides to assist you in successfully working with MetroAG. +The [Documentation](Documentation/) directory contains the following guides to assist you in successfully working with Metro Automation Engine. File name | Description --------- | -------- [RELEASE_NOTES.md](Documentation/RELEASE_NOTES.md) | New features, resolved issues and known limitations and issues -[GETTING_STARTED.md](Documentation/GETTING_STARTED.md) | MetroAG Quick Start Guide +[GETTING_STARTED.md](Documentation/GETTING_STARTED.md) | Metro Automation Engine Quick Start Guide [SETUP.md](Documentation/SETUP.md) | Set up your environment by cloning the repo, installing packages and configuring access. [CUSTOMIZE.md](Documentation/CUSTOMIZE.md) | Customize user data files, unzip Nuage software [DEPLOY.md](Documentation/DEPLOY.md) | Deploy all VSP components or choose components individually. @@ -95,7 +95,7 @@ Ask questions and get support via email. Report bugs you find and suggest new features and enhancements via the [GitHub Issues](https://github.com/nuagenetworks/nuage-metro/issues "nuage-metro issues") feature. -You may also [contribute](CONTRIBUTING.md) to Nuage MetroAG by submitting your own code to the project. +You may also [contribute](CONTRIBUTING.md) to Nuage Metro Automation Engine by submitting your own code to the project. 
## License Apache License 2.0 diff --git a/ansible.cfg b/ansible.cfg index 4e172f0eb7..3e8846ee7b 100644 --- a/ansible.cfg +++ b/ansible.cfg @@ -4,7 +4,6 @@ host_key_checking = False hash_behaviour = merge retry_files_enabled = False callback_whitelist = report_failures -task_includes_static = False callback_plugins = ./callback_plugins/ filter_plugins = ./filter_plugins/ library = ./library/ diff --git a/build_vars.yml b/build_vars.yml index a48aa5d116..965df0bc98 100644 --- a/build_vars.yml +++ b/build_vars.yml @@ -1,6 +1,6 @@ --- ### -# See BUILD.md for details +# See the documentation for details ### ### @@ -18,6 +18,11 @@ ## for all operations. nuage_unzipped_files_dir: "/home/caso/nfs-data/5.2.2/nuage-unpacked" +## Parameter to specify the location for backups during upgrade. +## The default value is nuage_unzipped_files_dir + "/backups". +## Uncomment and set to desired value for backup. +# metro_backup_root: "/home/caso/nfs-data/5.2.2/nuage-unpacked/backups" + ### ## upgrade parameters ### @@ -130,7 +135,10 @@ dns_domain: example.com ## Uncomment and set to 'False' if you want to skip the yum update--acceptable only in ## lab environments. # yum_update: True - +## secure_communication is used to set up TLS on all the communication between the VSD, VSC, +## VRS and NSGV. By default, it is set to 'True', the recommended value. +## Uncomment and set to 'False' if you don't want to use TLS. +#secure_communication: True ### ## Global Vcenter params @@ -185,6 +193,15 @@ dns_domain: example.com ## it will only show up once in vCenter, but the hosts tab will show it is ## available on multiple hosts (view in the screenshot below) ## +## resource_pool +## The vCenter resource pool where the VMs need to be located. A resource pool +## is a logical abstraction of resources. Different resource pools can be +## configured to have different priorities in case of resource contention and +## can have different resource reservations and limitations.
+## In a typical deployment, you will see a resource pool with a high number of +## shares (higher priority) which will be used for the important components of +## Nuage, like the VSDs and VSCs. +## ## ovftool ## Binary location of the ovftool ## @@ -195,7 +212,8 @@ dns_domain: example.com # password: Alcateldc # datacenter: Datacenter # cluster: Management -# datastore: datastore +# datastore: Datastore +# resource_pool: Resource Pool # ovftool: /usr/bin/ovftool ### @@ -366,7 +384,7 @@ vsc_operations_list: ## expected_num_vm_vports ## expected_num_gateway_ports ## Optional: Values to use for this VSC when running a health test. All values are -## set to 0 by default, which means they will be ignored. To use them, uncomment +## set to 0 by default, which means they will be ignored. To use them, uncomment ## and set to the expected values. ## ## vsc_mgmt_static_route_list @@ -397,7 +415,8 @@ myvscs: # expected_num_vm_vports: 0, # expected_num_gateway_ports: 0, # vsc_mgmt_static_route_list: [ 0.0.0.0/1, 128.0.0.1/1 ], - xmpp_username: vsc1 } +# secure_communication: "{{ secure_communication }}", + xmpp_username: vsc1} - { hostname: "vsc1.nuage.met", # vmname: vsc1, target_server_type: "kvm", @@ -416,7 +435,8 @@ myvscs: # expected_num_vm_vports: 0, # expected_num_gateway_ports: 0, # vsc_mgmt_static_route_list: [ 0.0.0.0/1, 128.0.0.1/1 ], - xmpp_username: vsc2 } +# secure_communication: "{{ secure_communication }}", + xmpp_username: vsc2} ### ## VRS params ### @@ -480,8 +500,10 @@ myvrss: - { vrs_os_type: u14.04, # libnetwork_install: False, # dkms_install: False, + secure_openFlow: true, active_controller_ip: 192.168.122.204, standby_controller_ip: 192.168.122.205, +# secure_communication: "{{ secure_communication }}", vrs_ip_list: [ 192.168.122.101] } - { vrs_os_type: el7, @@ -489,6 +511,7 @@ myvrss: # dkms_install: False, active_controller_ip: 192.168.122.204, standby_controller_ip: 192.168.122.205, +# secure_communication: "{{ secure_communication }}",
vrs_ip_list: [ 192.168.122.83, 192.168.122.238 ] } @@ -497,6 +520,7 @@ myvrss: # dkms_install: False, active_controller_ip: 192.168.122.204, standby_controller_ip: 192.168.122.205, +# secure_communication: "{{ secure_communication }}", vrs_ip_list: [ 192.168.122.215 ] } ### @@ -648,12 +672,18 @@ vns_operations_list: ## data_netmask ## Required: The netmask for the data port network. ## +## data_static_route +## Optional: list of eth1 static routes to the data networks +## +## data_gateway +## Optional: will be used as next-hop for the data_static_routes +## ## DHCP Bootstrap support -## Optional: MetroAG supports a special case, automatically bootstrapping +## Optional: Metro Automation Engine supports a special case, automatically bootstrapping ## (ZFB) a single NSGV at time of deployment of the VNS UTIL VM. To enable ## this special case, the variables in this section must be uncommented and -## defined so that MetroAG can configure the DHCP server on the VNS UTIL VM -## to participate in this process. If you are *not* going to use MetroAG's +## defined so that Metro can configure the DHCP server on the VNS UTIL VM +## to participate in this process. If you are *not* going to use Metro's ## automatic bootstrap of a single NSGV, the variables in this section ## must not be defined. 
## @@ -681,10 +711,13 @@ myvnsutils: data_fqdn: "vnsutil1.data.nuage.met", data_ip: 192.168.100.205, data_netmask: 255.255.255.0, +# data_gateway: 192.168.100.1, +# data_static_route: [ 192.168.99.0/24, 192.168.98.0/24, 192.168.97.0/24 ], # data_subnet: 192.168.100.0, # nsgv_ip: 192.168.100.206, # nsgv_mac: '52:54:00:88:85:12', # nsgv_hostname: "nsgv1.{{ dns_domain }}", +# secure_communication: "{{ secure_communication }}", vsd_fqdn: "{{ vsd_fqdn_global }}" } ### @@ -738,6 +771,7 @@ mynsgvs: # iso_file: 'user_img.iso', # nsgv_mac: '52:54:00:88:85:12', # bootstrap_method: none, +# secure_communication: {{ secure_communication }}, target_server: 135.227.181.233 } ### ## VCIN params @@ -757,6 +791,15 @@ vcin_operations_list: ## hostname ## Required always: The FQDN or IP address of the VCIN management port ## +## master_vcin +## Optional: The FQDN or IP address of the Master VCIN in an Active/Standby +## deployment. This must match the hostname of another VCIN in the list of +## myvcins. +## Validation is in place to ensure: +## - Masters cannot be their own slave +## - A Master can only have one slave +## - The Master must be present when configured on a slave +## ## target_server_type ## Required: The type of hypervisor the VCIN will be deployed on. Supported values ## are kvm, vcenter, and heat. @@ -781,7 +824,8 @@ vcin_operations_list: ## The example, below, is for a single VCIN. If deploying stand-alone, ## only one VCIN definition is required.
myvcins: - - { hostname: vcin1.nuage.met, + - { hostname: vcin1.nuage.net, +# master_vcin: vcin2.nuage.net, target_server_type: "vcenter", target_server: 135.227.181.232, # vcenter: { username: administrator@vsphere.local, @@ -828,7 +872,7 @@ myvcins: # data_ip: 10.167.54.3, # data_subnet: 10.167.54.0, # data_netmask: 255.255.255.0, -# data_gateway: 10.167.54.1 +# data_gateway: 10.167.54.1, # data_static_route: [ 10.165.53.0/24, 10.165.54.0/24, 10.165.55.0/24 ], # dns_server: 8.8.8.8, # dns_mgmt: g5dns.mgmt.training.net., @@ -905,3 +949,88 @@ myvcins: # ports_to_hv_bridges: ['br0', 'br1','br0','br1'], # license_file: '/path/on/ansible/deployment/host/license.zip', # deploy_cfg_file: '/path/on/ansible/deployment/host/config_flat.txt'} + +## VRS-VM params +## vrs_vm_operations_list = A list of the operations you intend for the VRS-VM. +## The list can include one or more of the following: +## - install +## myvrs_vms is required when you are operating on VRS-VMs. It is not required if you aren't +## operating on VRS-VMs. It will be ignored if not defined. Each element in the list +## is a dictionary of parameters specific to a single VRS-VM. You can define as many +## VRS-VMs as you want +## +## hostname +## Required always: The FQDN or IP address of the VRS-VM management port +## +## vmname +## Optional, vmname defaults to the hostname. Uncomment and set if you want a +## VM name other than the hostname. +## +## target_server_type +## Optional: The default is 'kvm'. For now only KVM is supported. +## +## target_server +## Required: The hostname or IP address of the hypervisor where this VRS-VM will be +## instantiated. +## +## mgmt_bridge +## Optional: The name of the bridge on the hypervisor to connect the mgmt port to. +## By default, the mgmt port will be connected to the global mgmt_bridge that is +## defined elsewhere in this file. Uncomment and update if you want to use a +## different bridge for this component.
+##
+## mgmt_ip
+## Required: The IP address of the VRS-VM's management port.
+##
+## mgmt_gateway
+## Required: The IP address for the default gateway.
+##
+## mgmt_netmask
+## Required: The netmask for the management port.
+##
+## data_ip
+## Required: The IP address of the data plane.
+##
+## data_netmask
+## Required: The netmask for the data plane network.
+##
+## data_bridge
+## Optional: The name of the bridge on the hypervisor to connect the data port to.
+## By default, the data port will be connected to the global data_bridge that is
+## defined elsewhere in this file. Uncomment and update if you want to use a
+## different bridge for this component.
+##
+## data_gateway
+## Required: The IP address that will be used as the next-hop address of the
+## data_static_route.
+##
+## data_static_route: a list of eth1 static routes to the data networks
+##
+## ram (GB)
+## Optional: The amount of RAM, in GB, allocated to the VRS-VM. The default is 4.
+##
+## vcpu
+## Optional: The vCPU count for the VRS-VM. The default is 2.
+##
+## vrs_vm_qcow2_path
+## Required: The path to the source qcow2 image for the VRS-VM.
+#vrs_vm_operations_list:
+#  - install
+#
+#myvrs_vms:
+#  - { hostname: vrs_vm1,
+#      #vmname: vrs_vm1,
+#      target_server_type: kvm,
+#      target_server: 10.10.13.5,
+#      ram: 12,
+#      vcpu: 4,
+#      mgmt_bridge: br0,
+#      mgmt_ip: 10.10.13.11,
+#      mgmt_gateway: 10.10.13.1,
+#      mgmt_netmask: 255.255.255.0,
+#      data_bridge: br1,
+#      data_ip: 10.9.13.11,
+#      data_netmask: 255.255.255.0,
+#      vrs_vm_qcow2_path: /tmp/images/centos7.qcow2,
+#      data_gateway: 10.9.13.1,
+#      data_static_route: [ 10.13.60.0/24, 10.12.60.0/24, 10.11.60.0/24] }
diff --git a/destroy_everything.yml b/destroy_everything.yml
index 02db2ea5cd..35f9e59b05 100644
--- a/destroy_everything.yml
+++ b/destroy_everything.yml
@@ -1,16 +1,23 @@
 ---
-- include: "playbooks/vsc_destroy.yml"
-- include: "playbooks/vsd_destroy.yml"
-- include: "playbooks/vsd_sa_upgrade_destroy.yml"
-- include: "playbooks/vsd_ha_upgrade_destroy_2_and_3.yml"
-- include: "playbooks/vsd_ha_upgrade_destroy_1.yml"
-- include: "playbooks/vstat_destroy.yml"
-- include: "playbooks/vstat_upgrade_destroy.yml"
-- include: "playbooks/vcin_destroy.yml"
-- include: "playbooks/vnsutil_destroy.yml"
-- include: "playbooks/nsgv_destroy.yml"
-- include: "playbooks/vrs_destroy.yml"
-
-- include: "playbooks/infra_destroy.yml"
-- include: "playbooks/osc_destroy.yml"
-- include: "playbooks/os_compute_destroy.yml"
+- name: vsc_destroy
+  import_playbook: "playbooks/vsc_destroy.yml"
+- name: vsd_destroy
+  import_playbook: "playbooks/vsd_destroy.yml"
+- name: vsd_sa
+  import_playbook: "playbooks/vsd_sa_upgrade_destroy.yml"
+- name: vsd_ha_2_3
+  import_playbook: "playbooks/vsd_ha_upgrade_destroy_2_and_3.yml"
+- name: vsd_ha_1
+  import_playbook: "playbooks/vsd_ha_upgrade_destroy_1.yml"
+- name: vstat_destroy
+  import_playbook: "playbooks/vstat_destroy.yml"
+- name: vstat_up_destroy
+  import_playbook: "playbooks/vstat_upgrade_destroy.yml"
+- name: vcin_destroy
+  import_playbook: "playbooks/vcin_destroy.yml"
+- name: vnsutil_destroy
+  import_playbook: "playbooks/vnsutil_destroy.yml"
+- name: nsgv_destroy
+  import_playbook: "playbooks/nsgv_destroy.yml"
+- name: vrs_destroy
+  import_playbook: "playbooks/vrs_destroy.yml"
diff --git a/examples/build_vars.yml.VRS-VMOnly b/examples/build_vars.yml.VRS-VMOnly
new file mode 100644
index 0000000000..5671f6af26
--- /dev/null
+++ b/examples/build_vars.yml.VRS-VMOnly
@@ -0,0 +1,31 @@
+---
+
+nuage_zipped_files_dir: /SharedNFS/ISOs-and-Software/Nuage_Software/5.2.1/
+nuage_unzipped_files_dir: /SharedNFS/ISOs-and-Software/Nuage_Software/5.2.1/unzip
+target_server_username: root
+ansible_sudo_username: root
+vrs_vm_operations_list: [install]
+myvrs_vms:
+  - { hostname: vrs_vm1,
+      target_server_type: kvm,
+      target_server: 10.10.13.5,
+      ram: 12,
+      vcpu: 4,
+      mgmt_bridge: br0,
+      mgmt_ip: 10.10.13.11,
+      mgmt_gateway: 10.10.13.1,
+      mgmt_netmask: 255.255.255.0,
+      data_bridge: br1,
+      data_ip: 10.9.13.11,
+      data_netmask: 255.255.255.0,
+      vrs_vm_qcow2_path: /tmp/images/centos7.qcow2,
+      data_gateway: 10.9.13.1,
+      data_static_route: [ 10.13.60.0/24, 10.12.60.0/24, 10.11.60.0/24] }
+ansible_deployment_host: 10.10.13.5
+images_path: /var/lib/libvirt/images/
+mgmt_bridge: br0
+data_bridge: brControl
+access_bridge: brControl
+dns_server_list:
+  - 192.168.122.1
+ 
\ No newline at end of file
diff --git a/filter_plugins/sros_filters.py b/filter_plugins/sros_filters.py
new file mode 100644
index 0000000000..e1b57540c5
--- /dev/null
+++ b/filter_plugins/sros_filters.py
@@ -0,0 +1,111 @@
+#!/usr/bin/python
+
+# Copyright 2017 Nokia
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+
+import re
+
+
+def rm_insignificant_lines(in_cfg):
+    """
+    Remove insignificant lines from the input config file.
+    These are empty lines and lines starting with '#' or 'echo'.
+    :param in_cfg: cfg as a multiline string
+    :return: cleaned array of config lines
+    """
+    cfg_arr = in_cfg.splitlines()
+    # iterate over a copy of the array for in-place deletion
+    for line in list(cfg_arr):
+        if not is_cfg_statement(line):
+            cfg_arr.remove(line)
+    return cfg_arr
+
+
+def is_cfg_statement(line):
+    # if the line is empty, a comment, or an echo statement,
+    # consider it for deletion
+    line = line.lstrip()
+    if line.strip() == '' or line.startswith('#') or line.startswith('echo'):
+        return False
+    else:
+        return True
+
+
+def rootify(clean_cfg):
+    cfg_string = ['configure']
+    rootified_cfg = []
+    # init previous indent level as -1 for the /configure line
+    ind_level = [-1]
+
+    for i, line in enumerate(clean_cfg):
+        if line.strip().startswith('exit all'):
+            cfg_string = []
+            ind_level = [-1]
+            continue
+        if line.strip() == 'exit':
+            cfg_string.pop()
+            ind_level.pop()
+            continue
+
+        # calc current indent
+        prev_ind_level = ind_level[-1]
+        cur_ind_level = len(line) - len(line.lstrip())
+        # append a command if it is on a next level of indent
+        if cur_ind_level > prev_ind_level:
+            cfg_string.append(line.strip())
+            ind_level.append(cur_ind_level)
+        # if a command is on the same level of indent,
+        # we delete the prev. command and append the new one to the base string
+        elif cur_ind_level == prev_ind_level:
+            cfg_string.pop()
+            # removing (if any) `customer xxx create` or `create` at the end
+            # of the line since it was previously printed out
+            cfg_string[-1] = re.sub(r'\scustomer\s\d+\screate$|\screate$',
+                                    '', cfg_string[-1])
+            cfg_string.append(line.strip())
+
+        # if we have a next line, go check its indent value
+        if i < len(clean_cfg) - 1:
+            next_ind_level = len(
+                clean_cfg[i + 1]) - len(clean_cfg[i + 1].lstrip())
+            # if the next indent level is deeper (>), we can continue
+            # accumulating commands
+            if next_ind_level > cur_ind_level:
+                continue
+            # if the next level is the same or lower, we must save the line
+            else:
+                rootified_cfg.append(' '.join(cfg_string))
+        else:
+            # otherwise we have the last line here, so print it
+            rootified_cfg.append(' '.join(cfg_string))
+
+    return rootified_cfg
+
+
+def sros_rootify(input_cfg_file):
+    ''' Given a string representation of an SROS config file,
+        return a list of fully rooted one-line commands.
+    '''
+
+    clean_cfg = rm_insignificant_lines(input_cfg_file)
+    return rootify(clean_cfg)
+
+
+class FilterModule(object):
+    ''' Query filter '''
+
+    def filters(self):
+        return {
+            'sros_rootify': sros_rootify
+        }
diff --git a/hosts b/hosts
index ac45abff3f..7bfa986e6a 100644
--- a/hosts
+++ b/hosts
@@ -1,2 +1,7 @@
+# *** WARNING ***
+# This file is automatically generated by build.yml.
+# Changes made to this file may be overwritten.
+#
 [local_host]
 localhost ansible_connection=local
+
diff --git a/install_dns.yml b/install_dns.yml
index 194285928e..e15d769421 100644
--- a/install_dns.yml
+++ b/install_dns.yml
@@ -1,5 +1,5 @@
 ---
-- include: "playbooks/dns_predeploy.yml"
-- include: "playbooks/dns_deploy.yml"
-#- include: "playbooks/dns_postdeploy.yml"
-
+- name: dns_predeploy
+  import_playbook: "playbooks/dns_predeploy.yml"
+- name: dns_deploy
+  import_playbook: "playbooks/dns_deploy.yml"
diff --git a/install_everything.yml b/install_everything.yml
index 3f997361bf..d3164f3bc5 100644
--- a/install_everything.yml
+++ b/install_everything.yml
@@ -1,8 +1,7 @@
 ---
-- include: install_dns.yml
-
-- include: install_vcs.yml
-
-- include: install_vns.yml
-
-- include: install_osc.yml
+- name: Install dns
+  import_playbook: "install_dns.yml"
+- name: Install vcs
+  import_playbook: "install_vcs.yml"
+- name: Install vns
+  import_playbook: "install_vns.yml"
diff --git a/install_osc.yml b/install_osc.yml
index 73ba555642..902cacb8b2 100644
--- a/install_osc.yml
+++ b/install_osc.yml
@@ -1,9 +1,15 @@
 ---
-- include: "playbooks/osc_predeploy.yml"
-- include: "playbooks/osc_deploy.yml"
+- name: osc_predeploy
+  import_playbook: "playbooks/osc_predeploy.yml"
+- name: osc_deploy
+  import_playbook: "playbooks/osc_deploy.yml"
 
-- include: "playbooks/os_compute_predeploy.yml"
-- include: "playbooks/os_compute_deploy.yml"
-- include: "playbooks/os_compute_postdeploy.yml"
+- name: os_compute_predeploy
+  import_playbook: "playbooks/os_compute_predeploy.yml"
+- name: os_compute_deploy
+  import_playbook: "playbooks/os_compute_deploy.yml"
+- name: os_compute_postdeploy
+  import_playbook: "playbooks/os_compute_postdeploy.yml"
 
-- include: "playbooks/vsd_osc_integration.yml"
+- name: vsd_osc_integration
+  import_playbook: "playbooks/vsd_osc_integration.yml"
diff --git a/install_vcs.yml b/install_vcs.yml
index 2f8918f959..abe92dc445 100644
--- a/install_vcs.yml
+++ b/install_vcs.yml
@@ -1,19 +1,33 @@
 ---
-- include: "playbooks/vsd_predeploy.yml"
-- include: "playbooks/vsd_deploy.yml"
-- include: "playbooks/vsd_postdeploy.yml"
+- name: vsd_predeploy
+  import_playbook: "playbooks/vsd_predeploy.yml"
+- name: vsd_deploy
+  import_playbook: "playbooks/vsd_deploy.yml"
+- name: vsd_postdeploy
+  import_playbook: "playbooks/vsd_postdeploy.yml"
 
-- include: "playbooks/vsc_predeploy.yml"
-- include: "playbooks/vsc_deploy.yml"
-- include: "playbooks/vsc_postdeploy.yml"
+- name: vsc_predeploy
+  import_playbook: "playbooks/vsc_predeploy.yml"
+- name: vsc_deploy
+  import_playbook: "playbooks/vsc_deploy.yml"
+- name: vsc_postdeploy
+  import_playbook: "playbooks/vsc_postdeploy.yml"
 
-- include: "playbooks/vrs_predeploy.yml"
-- include: "playbooks/vrs_deploy.yml"
-- include: "playbooks/vrs_postdeploy.yml"
+- name: vrs_predeploy
+  import_playbook: "playbooks/vrs_predeploy.yml"
+- name: vrs_deploy
+  import_playbook: "playbooks/vrs_deploy.yml"
+- name: vrs_postdeploy
+  import_playbook: "playbooks/vrs_postdeploy.yml"
 
-- include: "playbooks/vstat_predeploy.yml"
-- include: "playbooks/vstat_deploy.yml"
-- include: "playbooks/vstat_postdeploy.yml"
+- name: vstat_predeploy
+  import_playbook: "playbooks/vstat_predeploy.yml"
+- name: vstat_deploy
+  import_playbook: "playbooks/vstat_deploy.yml"
+- name: vstat_postdeploy
+  import_playbook: "playbooks/vstat_postdeploy.yml"
 
-- include: "playbooks/vcin_predeploy.yml"
-- include: "playbooks/vcin_deploy.yml"
+- name: vcin_predeploy
+  import_playbook: "playbooks/vcin_predeploy.yml"
+- name: vcin_deploy
+  import_playbook: "playbooks/vcin_deploy.yml"
diff --git a/install_vns.yml b/install_vns.yml
index d60e4439f4..3ed8a63aab 100644
--- a/install_vns.yml
+++ b/install_vns.yml
@@ -1,10 +1,13 @@
 ---
-- include: "playbooks/vsc_vns_deploy.yml"
-- include: "playbooks/vsc_vns_postdeploy.yml"
-- include: "playbooks/vsd_vns_postdeploy.yml"
+- name: vsd_vns_postdeploy
+  import_playbook: "playbooks/vsd_vns_postdeploy.yml"
 
-- include: "playbooks/vnsutil_predeploy.yml"
-- include: "playbooks/vnsutil_deploy.yml"
-- include: "playbooks/vnsutil_postdeploy.yml"
+- name: vnsutil_predeploy
+  import_playbook: "playbooks/vnsutil_predeploy.yml"
+- name: vnsutil_deploy
+  import_playbook: "playbooks/vnsutil_deploy.yml"
+- name: vnsutil_postdeploy
+  import_playbook: "playbooks/vnsutil_postdeploy.yml"
 
-- include: "playbooks/nsgv_predeploy.yml"
+- name: nsgv_predeploy
+  import_playbook: "playbooks/nsgv_predeploy.yml"
diff --git a/metro-setup.sh b/metro-setup.sh
index a72e59ae84..778c5168e2 100755
--- a/metro-setup.sh
+++ b/metro-setup.sh
@@ -1,9 +1,9 @@
 #!/bin/bash
 ###############################################################################
-## Metro Automation enGine Setup
+## Metro Automation Engine Setup
 ##
-## Script to install packages required for Nuage MetroAG. Safe to execute
-## multiple times
+## Script to install packages required for Nuage Metro Automation Engine. Safe
+## to execute multiple times
 ###############################################################################
 
 ###############################################################################
@@ -197,7 +197,7 @@ function main() {
     rm -f $LOG
 
     echo ""
-    print "Setting up Nuage Metro Automation enGine"
+    print "Setting up Nuage Metro Automation Engine"
     echo ""
 
     # Make sure script is being run as root or with sudo
diff --git a/nuage_health.yml b/nuage_health.yml
index 8ed0178e05..8d06f9d476 100644
--- a/nuage_health.yml
+++ b/nuage_health.yml
@@ -1,6 +1,11 @@
-- include: "playbooks/vsd_health.yml"
-- include: "playbooks/vcin_health.yml"
-- include: "playbooks/vstat_health.yml"
-- include: "playbooks/vsc_health.yml"
-- include: "playbooks/vrs_health.yml"
+- name: vsd_health
+  import_playbook: "playbooks/vsd_health.yml"
+- name: vcin_health
+  import_playbook: "playbooks/vcin_health.yml"
+- name: vstat_health
+  import_playbook: "playbooks/vstat_health.yml"
+- name: vsc_health
+  import_playbook: "playbooks/vsc_health.yml"
+- name: vrs_health
+  import_playbook: "playbooks/vrs_health.yml"
 
diff --git a/pip_requirements.txt b/pip_requirements.txt
index 71cc06689a..4007032782 100644
--- a/pip_requirements.txt
+++ b/pip_requirements.txt
@@ -1,9 +1,10 @@
 ansible==2.4.0.0
 ipaddr
 jsonschema
+jmespath
 netaddr
 netmiko
-paramiko==2.2.1
+paramiko==2.4.1
 pexpect
 pyvmomi
 vspk
diff --git a/playbooks/infra_deploy.yml b/playbooks/infra_deploy.yml
deleted file mode 100644
index 8e79c94010..0000000000
--- a/playbooks/infra_deploy.yml
+++ /dev/null
@@ -1,5 +0,0 @@
----
-- hosts: infras
-  gather_facts: no
-  roles:
-    - infra-deploy
diff --git a/playbooks/infra_destroy.yml b/playbooks/infra_destroy.yml
deleted file mode 100644
index 4fab220655..0000000000
--- a/playbooks/infra_destroy.yml
+++ /dev/null
@@ -1,5 +0,0 @@
----
-- hosts: infras
-  gather_facts: no
-  roles:
-    - infra-destroy
diff --git a/playbooks/infra_predeploy.yml b/playbooks/infra_predeploy.yml
deleted file mode 100644
index f1584330dd..0000000000
--- a/playbooks/infra_predeploy.yml
+++ /dev/null
@@ -1,5 +0,0 @@
----
-- hosts: infras
-  gather_facts: no
-  roles:
-    - infra-predeploy
diff --git a/playbooks/os_compute_deploy.yml b/playbooks/os_compute_deploy.yml
deleted file mode 100644
index ec6071bb30..0000000000
--- a/playbooks/os_compute_deploy.yml
+++ /dev/null
@@ -1,5 +0,0 @@
----
-- hosts: os_computes
-  gather_facts: no
-  roles:
-    - os-compute-deploy
diff --git a/playbooks/os_compute_destroy.yml b/playbooks/os_compute_destroy.yml
deleted file mode 100644
index 9cfa52edf7..0000000000
--- a/playbooks/os_compute_destroy.yml
+++ /dev/null
@@ -1,5 +0,0 @@
----
-- hosts: os_computes
-  gather_facts: no
-  roles:
-    - os-compute-destroy
diff --git a/playbooks/os_compute_postdeploy.yml b/playbooks/os_compute_postdeploy.yml
deleted file mode 100644
index e112589579..0000000000
--- a/playbooks/os_compute_postdeploy.yml
+++ /dev/null
@@ -1,6 +0,0 @@
----
-- hosts: os_computes
-  gather_facts: no
-  any_errors_fatal: true
-  roles:
-    - os-compute-postdeploy
diff --git a/playbooks/os_compute_predeploy.yml b/playbooks/os_compute_predeploy.yml
deleted file mode 100644
index f498b9fcb8..0000000000
--- a/playbooks/os_compute_predeploy.yml
+++ /dev/null
@@ -1,5 +0,0 @@
----
-- hosts: os_computes
-  gather_facts: no
-  roles:
-    - os-compute-predeploy
diff --git a/playbooks/osc_deploy.yml b/playbooks/osc_deploy.yml
deleted file mode 100644
index dba507a892..0000000000
--- a/playbooks/osc_deploy.yml
+++ /dev/null
@@ -1,5 +0,0 @@
----
-- hosts: oscs
-  gather_facts: no
-  roles:
-    - osc-deploy
diff --git a/playbooks/osc_destroy.yml b/playbooks/osc_destroy.yml
deleted file mode 100644
index eed3fa1e4e..0000000000
--- a/playbooks/osc_destroy.yml
+++ /dev/null
@@ -1,5 +0,0 @@
----
-- hosts: oscs
-  gather_facts: no
-  roles:
-    - osc-destroy
diff --git a/playbooks/osc_predeploy.yml b/playbooks/osc_predeploy.yml
deleted file mode 100644
index 3ad4474b79..0000000000
--- a/playbooks/osc_predeploy.yml
+++ /dev/null
@@ -1,5 +0,0 @@
----
-- hosts: oscs
-  gather_facts: no
-  roles:
-    - osc-predeploy
diff --git a/playbooks/validate_build_vars.yml b/playbooks/validate_build_vars.yml
index 54956e46f4..ea8ebf5f9a 100644
--- a/playbooks/validate_build_vars.yml
+++ b/playbooks/validate_build_vars.yml
@@ -3,5 +3,7 @@
   pre_tasks:
     - name: Include build variable files
       include_vars: "{{ build_vars_file | default ('build_vars.yml') }}"
-  roles:
-    - validate-build-vars
+  tasks:
+    - include_role:
+        name: common
+        tasks_from: validate-build-vars
diff --git a/playbooks/vrs_vm_deploy.yml b/playbooks/vrs_vm_deploy.yml
new file mode 100644
index 0000000000..ca43426f86
--- /dev/null
+++ b/playbooks/vrs_vm_deploy.yml
@@ -0,0 +1,5 @@
+---
+- hosts: vrs_vms
+  gather_facts: no
+  roles:
+    - vrs-vm-deploy
diff --git a/playbooks/vrs_vm_destroy.yml b/playbooks/vrs_vm_destroy.yml
new file mode 100644
index 0000000000..45ad328530
--- /dev/null
+++ b/playbooks/vrs_vm_destroy.yml
@@ -0,0 +1,5 @@
+---
+- hosts: vrs_vms
+  gather_facts: no
+  roles:
+    - vrs-vm-destroy
diff --git a/playbooks/vrs_vm_predeploy.yml b/playbooks/vrs_vm_predeploy.yml
new file mode 100644
index 0000000000..2f6297093d
--- /dev/null
+++ b/playbooks/vrs_vm_predeploy.yml
@@ -0,0 +1,5 @@
+---
+- hosts: vrs_vms
+  gather_facts: no
+  roles:
+    - vrs-vm-predeploy
diff --git a/playbooks/vsc_vns_deploy.yml b/playbooks/vsc_vns_deploy.yml
deleted file mode 100644
index 0bbe11eea8..0000000000
--- a/playbooks/vsc_vns_deploy.yml
+++ /dev/null
@@ -1,5 +0,0 @@
-- hosts: vscs
-  gather_facts: no
-  serial: 1
-  roles:
-    - { role: vsc-vns-deploy, when: "groups['vnsutils'] is defined and groups['vnsutils']" }
diff --git a/playbooks/vsc_vns_postdeploy.yml b/playbooks/vsc_vns_postdeploy.yml
deleted file mode 100644
index eab3db1d1a..0000000000
--- a/playbooks/vsc_vns_postdeploy.yml
+++ /dev/null
@@ -1,6 +0,0 @@
-- hosts: vscs
-  gather_facts: no
-  roles:
-    - { role: vsc-vns-postdeploy, when: "groups['vnsutils'] is defined and groups['vnsutils']" }
-
-
diff --git a/playbooks/vsd_license.yml b/playbooks/vsd_license.yml
index 2f0ec8aff7..941554f4cd 100644
--- a/playbooks/vsd_license.yml
+++ b/playbooks/vsd_license.yml
@@ -3,4 +3,4 @@
   roles:
     - vsd-license
   become: yes
-  remote_user: "root"
+  remote_user: "{{ vsd_username }}"
diff --git a/playbooks/vsr_deploy.yml b/playbooks/vsr_deploy.yml
index 9a9d6fad5c..1ab1f9aa7e 100644
--- a/playbooks/vsr_deploy.yml
+++ b/playbooks/vsr_deploy.yml
@@ -2,10 +2,5 @@
 - hosts: vsrs
   gather_facts: no
   serial: 1
-  vars:
-    cli:
-      host: "{{ mgmt_ip }}"
-      username: admin
-      password: admin
   roles:
     - vsr-deploy
diff --git a/playbooks/vsr_postdeploy.yml b/playbooks/vsr_postdeploy.yml
new file mode 100644
index 0000000000..751fe644b0
--- /dev/null
+++ b/playbooks/vsr_postdeploy.yml
@@ -0,0 +1,6 @@
+---
+- hosts: vsrs
+  gather_facts: no
+  serial: 1
+  roles:
+    - vsr-postdeploy
diff --git a/roles/build/tasks/main.yml b/roles/build/tasks/main.yml
index b2e854285e..bf5f01b7e0 100644
--- a/roles/build/tasks/main.yml
+++ b/roles/build/tasks/main.yml
@@ -93,6 +93,12 @@
     tasks_from: gvm-process-vars
   tags: gvm
 
+- name: Update vrs_vm variables
+  include_role:
+    name: common
+    tasks_from: vrs-vm-process-vars
+  tags: vrs-vm
+
 - name: Create hosts file
   template: src=hosts.j2 dest="{{ inventory_dir }}/hosts" backup=no
   tags:
diff --git a/roles/build/templates/hosts.j2 b/roles/build/templates/hosts.j2
index ef53dbe21c..fbaa6c2487 100644
--- a/roles/build/templates/hosts.j2
+++ b/roles/build/templates/hosts.j2
@@ -36,12 +36,25 @@ vsd_node3
 {% endif %}
 
 {% if myvcins is defined and myvcins %}
+[vcin_masters]
+{% for vcin in myvcins %}
+{% if 'master_vcin' not in vcin %}
+{{ vcin.hostname }} {% if 'mgmt_bridge' in vcin %} mgmt_bridge={{ vcin.mgmt_bridge }}{% endif %}
+
+{% endif %}
+{% endfor %}
-[vcins]
+[vcin_slaves]
 {% for vcin in myvcins %}
+{% if 'master_vcin' in vcin %}
 {{ vcin.hostname }} {% if 'mgmt_bridge' in vcin %} mgmt_bridge={{ vcin.mgmt_bridge }}{% endif %}
+
+{% endif %}
 {% endfor %}
+[vcins:children]
+vcin_masters
+vcin_slaves
 {% endif %}
 
 {% if myvscs is defined and myvscs %}
@@ -92,6 +105,13 @@ vsc_node2
 {% endfor %}
 {% endif %}
 
+{% if myvrs_vms is defined and myvrs_vms %}
+[vrs_vms]
+{% for vrs_vm in myvrs_vms %}
+{{ vrs_vm.hostname }}
+{% endfor %}
+{% endif %}
+
 {% if myvnsutils is defined and myvnsutils %}
 [vnsutils]
 {% for vnsutil in myvnsutils %}
@@ -173,6 +193,7 @@ vstat_node3
 [vsrs]
 {% for vsr in myvsrs %}
 {{ vsr.hostname }} {% if 'mgmt_bridge' in vsr %} mgmt_bridge={{ vsr.mgmt_bridge }}{% endif %}
+
 {% endfor %}
 {% endif %}
diff --git a/roles/build/templates/vrs_vm.j2 b/roles/build/templates/vrs_vm.j2
new file mode 100644
index 0000000000..df701af521
--- /dev/null
+++ b/roles/build/templates/vrs_vm.j2
@@ -0,0 +1,61 @@
+{#
+    Note: the following warning is for the generated file
+    only, not this source file.
+#}
+# *** WARNING ***
+# This is a generated file. Manual changes to this file
+# will be lost if reset-build or build is run
+#
+target_server_type: {{ item.target_server_type }}
+hostname: {{ item.hostname }}
+{% if item.vmname is defined %}
+vm_name: {{ item.vmname }}
+{% else %}
+vm_name: {{ item.hostname }}
+{% endif %}
+
+{% if item.target_server_type | match("kvm") %}
+target_server: {{ item.target_server }}
+
+{% if item.ram is defined %}
+vrs_vm_ram: {{ item.ram }}
+{% endif %}
+
+{% if item.vcpu is defined %}
+vrs_vm_vcpu: {{ item.vcpu }}
+{% endif %}
+
+{% if item.mgmt_ip is defined and item.mgmt_gateway is defined %}
+mgmt_ip: {{ item.mgmt_ip }}
+mgmt_bridge: {{ item.mgmt_bridge }}
+mgmt_gateway: {{ item.mgmt_gateway }}
+
+{% if item.mgmt_prefix is defined %}
+mgmt_prefix: {{ item.mgmt_prefix }}
+{% else %}
+mgmt_netmask: {{ item.mgmt_netmask }}
+{% endif %}
+{% endif %}
+
+{% if item.data_ip is defined %}
+{% if item.data_ip != '' %}
+data_ip: {{ item.data_ip }}
+data_bridge: {{ item.data_bridge }}
+{% if item.data_prefix is defined %}
+data_prefix: {{ item.data_prefix }}
+{% else %}
+data_netmask: {{ item.data_netmask }}
+{% endif %}
+
+{% endif %}
+{% endif %}
+
+{% if item.data_static_route is defined and item.data_gateway is defined %}
+data_gateway: {{ item.data_gateway }}
+data_static_route: {{ item.data_static_route|to_yaml }}
+{% endif %}
+
+vrs_vm_qcow2_path: {{ item.vrs_vm_qcow2_path | dirname }}
+vrs_vm_qcow2_file_name: {{ item.vrs_vm_qcow2_path | basename }}
+
+{% endif %}
diff --git a/roles/check-node-running/tasks/main.yml b/roles/check-node-running/tasks/main.yml
index 3758731fee..2cb68d0e9d 100644
--- a/roles/check-node-running/tasks/main.yml
+++ b/roles/check-node-running/tasks/main.yml
@@ -1,3 +1,3 @@
 ---
-- include: "{{ target_server_type }}.yml"
+- include_tasks: "{{ target_server_type }}.yml"
 
diff --git a/roles/check_vrs_prereqs/tasks/main.yml b/roles/check_vrs_prereqs/tasks/main.yml
index 0810229ae4..6fd2a7f873 100644
--- a/roles/check_vrs_prereqs/tasks/main.yml
+++ b/roles/check_vrs_prereqs/tasks/main.yml
@@ -6,5 +6,6 @@
   register: docker_installed_version
 
 - name: Check if docker is installed when libnetwork_install is True
-  assert: { that: "not (docker_installed_version | failed)" }
-...
+  assert:
+    that: "not (docker_installed_version | failed)"
+    msg: "Docker is required for libnetwork. Quitting"
diff --git a/roles/common/tasks/check-dns.yml b/roles/common/tasks/check-dns.yml
index 14a314f4c8..ec23ef3d8a 100644
--- a/roles/common/tasks/check-dns.yml
+++ b/roles/common/tasks/check-dns.yml
@@ -3,17 +3,19 @@
   shell: "getent hosts {{ item.hostname }} | awk '{print $1}'"
   register: hostip
 
-- name: Ensuring Host IP from DNS and host_vars is the same
+- name: Ensure that the hostname maps to the proper mgmt IP
   assert:
     that: "'{{ hostip.stdout }}' == '{{ item.mgmt_ip }}'"
-    msg: "Querying {{ dns_server_list[0] }} for IPv4 address for {{item.hostname}} != {{item.mgmt_ip}}"
+    msg: "IPv4 address for {{item.hostname}} != {{item.mgmt_ip}}"
 
 - block:
+
   - shell: "getent hosts {{ item.data_fqdn }} | awk '{print $1}'"
     register: data_fqdn_ip
 
-  - name: Ensuring Host IP from DNS and host_vars is the same
+  - name: Ensure that the data FQDN maps to the proper data IP
     assert:
       that: "'{{ data_fqdn_ip.stdout }}' == '{{ item.data_ip }}'"
-      msg: "Querying {{ dns_server_list[0] }} for IPv4 address for {{item.data_fqdn}} != {{item.data_ip}}"
+      msg: "IPv4 address for {{item.data_fqdn}} != {{item.data_ip}}"
+
   when: myvnsutils is defined
diff --git a/roles/common/tasks/check-md5.yml b/roles/common/tasks/check-md5.yml
index 7d4b06b18d..ec42e89011 100644
--- a/roles/common/tasks/check-md5.yml
+++ b/roles/common/tasks/check-md5.yml
@@ -5,31 +5,74 @@
     get_md5: yes
   register: data_file
 
-- name:
-  assert:
-    that: "data_file.stat.exists"
-    msg: "Required file {{ file_name }} not found"
+- block:
+
+  - name: Stat the source if symlink
+    stat:
+      path: "{{ data_file.stat.lnk_source }}"
+      get_md5: yes
+    register: data_lnk_source
+
+  - name:
+    assert:
+      that: "data_lnk_source.stat.exists"
+      msg: "Required file {{ file_name }} not found"
 
-- name: Find md5 file
-  find: path="{{ inventory_dir }}" pattern="{{ file_name }}.md5"
-  register: md5_file
+  - name: Find md5 file
+    find: path="{{ inventory_dir }}" pattern="{{ file_name }}.md5"
+    register: md5_symlink_file
 
-- name: Set variable to do build if md5 not found
-  set_fact:
-    do_build: True
-  when: md5_file.matched == 0
+  - name: Set variable to do build if md5 not found
+    set_fact:
+      do_build: True
+    when: md5_symlink_file.matched == 0
 
-- block:
+  - block:
+
+    - name: Get the md5 value in the file we found
+      command: "cat {{ md5_symlink_file.files[0].path }}"
+      register: old_md5_symlink_string
+
+    - debug: var=data_lnk_source.stat.md5 verbosity=1
+
+    - name: Set variable to do build if md5s don't match
+      set_fact:
+        do_build: True
+      when: data_lnk_source.stat.md5 != old_md5_symlink_string.stdout
 
-  - name: Get the md5 value in the file we found
-    command: "cat {{ md5_file.files[0].path }}"
-    register: old_md5_string
+    when: md5_symlink_file.matched > 0 and not do_build
+
+  when: data_file.stat.mimetype|match('inode/symlink')
 
-  - debug: var=data_file.stat.md5 verbosity=1
+- block:
 
-  - name: Set variable to do build if md5s don't match
+  - name:
+    assert:
+      that: "data_file.stat.exists"
+      msg: "Required file {{ file_name }} not found"
+
+  - name: Find md5 file
+    find: path="{{ inventory_dir }}" pattern="{{ file_name }}.md5"
+    register: md5_file
+
+  - name: Set variable to do build if md5 not found
     set_fact:
       do_build: True
-    when: data_file.stat.md5 != old_md5_string.stdout
+    when: md5_file.matched == 0
+
+  - block:
+
+    - name: Get the md5 value in the file we found
+      command: "cat {{ md5_file.files[0].path }}"
+      register: old_md5_string
+
+    - debug: var=data_file.stat.md5 verbosity=1
+
+    - name: Set variable to do build if md5s don't match
+      set_fact:
+        do_build: True
+      when: data_file.stat.md5 != old_md5_string.stdout
+
+    when: md5_file.matched > 0 and not do_build
 
-  when: md5_file.matched > 0 and not do_build
+  when: not data_file.stat.mimetype|match('inode/symlink')
diff --git a/roles/common/tasks/check-prereq.yml b/roles/common/tasks/check-prereq.yml
index b67242083e..5e749acc24 100644
--- a/roles/common/tasks/check-prereq.yml
+++ b/roles/common/tasks/check-prereq.yml
@@ -10,8 +10,8 @@
 
 - name: get the paramiko version
   assert:
-    that: "paramiko_version.stdout|search('2.2.1')"
-    msg: "Paramiko version 2.2.1 is required."
+    that: "paramiko_version.stdout|search('2.4.1') or paramiko_version.stdout|search('2.2.1')"
+    msg: "paramiko version 2.2.1 or 2.4.1 is required"
 
 - name: Check for supported host OS on Ansible host
   assert:
@@ -34,7 +34,7 @@
   assert:
     that: "pipoutput.stdout|search('{{ item }}')"
     msg: "Missing required package {{ item }} . Please refer to metro-setup.sh"
-  with_lines: ./pip_requirements.txt | sed 's/==.*//'
+  with_lines: sed 's/==.*//' pip_requirements.txt
 
 - name : Check if all yum packages are installed
   assert:
diff --git a/roles/common/tasks/dns-process-vars.yml b/roles/common/tasks/dns-process-vars.yml
index ebdc296408..d8d07c6ba2 100644
--- a/roles/common/tasks/dns-process-vars.yml
+++ b/roles/common/tasks/dns-process-vars.yml
@@ -19,8 +19,14 @@
         msg: "DNS image is taken from VSTAT, but we can't find the image path. Make sure myvstats is defined in build_vars.yml" }
 
+  - name: Stat the dns qcow2 file
+    stat:
+      path: "{{ nuage_unzipped_files_dir }}/dns/dns.qcow2"
+    register: qcow_file
+
   - name: Copy vstat qcow2 image to dns directory
     copy: src={{ rc_vstat_file.files[0].path }} dest={{ nuage_unzipped_files_dir }}/dns/dns.qcow2 force=yes
+    when: not qcow_file.stat.exists
 
 - name: Find name of DNS VM QCOW2 File
   find: path="{{ nuage_unzipped_files_dir }}/dns" pattern="*.qcow2" recurse=yes
diff --git a/roles/common/tasks/handle-vars.yml b/roles/common/tasks/handle-vars.yml
index daf27cf930..4b48c4817f 100644
--- a/roles/common/tasks/handle-vars.yml
+++ b/roles/common/tasks/handle-vars.yml
@@ -5,8 +5,8 @@
     user_creds: "{{ user_creds_file | default (inventory_dir+'/user_creds.yml') }}"
     do_build: "{{ force_build | default(False) }}"
     data_file_name_list:
-      - build_vars.yml
-      - user_creds.yml
+      - "{{ build_vars_file | default ('build_vars.yml') }}"
+      - "{{ user_creds_file | default ('user_creds.yml') }}"
 
 - name: Include build variable files
   include_vars: "{{ build_vars }}"
diff --git a/roles/common/tasks/linux-ntp-sync.yml b/roles/common/tasks/linux-ntp-sync.yml
index bcbf69a251..1eb49ca25e 100644
--- a/roles/common/tasks/linux-ntp-sync.yml
+++ b/roles/common/tasks/linux-ntp-sync.yml
@@ -81,5 +81,5 @@
     when: not sync_status.stdout | search("synchronized")
 
   # block level parameters
-  remote_user: "root"
+  remote_user: "{{ rem_user }}"
   tags: ntp
diff --git a/roles/common/tasks/set-md5.yml b/roles/common/tasks/set-md5.yml
index b8542ec436..beb0c00a83 100644
--- a/roles/common/tasks/set-md5.yml
+++ b/roles/common/tasks/set-md5.yml
@@ -5,8 +5,24 @@
     get_md5: yes
   register: data_file
 
+- block:
+
+  - name: Stat the source if symlink
+    stat:
+      path: "{{ data_file.stat.lnk_source }}"
+      get_md5: yes
+    register: data_lnk_source
+
+  - name: Write md5 file to disk
+    copy:
+      content: "{{ data_lnk_source.stat.md5 }}"
+      dest: "{{ inventory_dir }}/{{ file_name }}.md5"
+
+  when: data_file.stat.mimetype|match('inode/symlink')
+
 - name: Write md5 file to disk
   copy:
     content: "{{ data_file.stat.md5 }}"
     dest: "{{ inventory_dir }}/{{ file_name }}.md5"
+  when: not data_file.stat.mimetype|match('inode/symlink')
diff --git a/roles/common/tasks/validate-build-vars.yml b/roles/common/tasks/validate-build-vars.yml
index 9e8f5759fc..6409eada5c 100644
--- a/roles/common/tasks/validate-build-vars.yml
+++ b/roles/common/tasks/validate-build-vars.yml
@@ -34,21 +34,23 @@
   when: myvstats is defined
 
 # TODO:
-# Use the floowing block to disable the feature for now. We need to update our
+# Use the following block to disable the feature for now. We need to update our
 # test infrastructure to accomodate.
 - block:
-  - name: Verify VSD DNS entries exist at server {{ dns_server_list[0] }}, and hostnames map to their m
-    include: check-dns.yml
+
+  - name: Verify VSD DNS entries exist and hostnames map to their IPs
+    include_tasks: check-dns.yml
     with_items: "{{ myvsds }}"
    when: dns_server_list is defined and myvsds is defined
 
-  - name: Verify VStat DNS entries exist at server {{ dns_server_list[0] }}, and hostnames map to their management IPs
-    include: check-dns.yml
+  - name: Verify VStat DNS entries exist and hostnames map to their IPs
+    include_tasks: check-dns.yml
    with_items: "{{ myvstats }}"
    when: dns_server_list is defined and myvstats is defined
 
-  - name: Verify VNS Utils DNS entries exist at server {{ dns_server_list[0] }}, and hostnames map to their IPs
-    include: check-dns.yml
+  - name: Verify VNS Utils DNS entries exist and hostnames map to their IPs
+    include_tasks: check-dns.yml
    with_items: "{{ myvnsutils }}"
    when: dns_server_list is defined and myvnsutils is defined
+
  when: false
diff --git a/roles/common/tasks/vcin-process-vars.yml b/roles/common/tasks/vcin-process-vars.yml
index 77ca3b687a..0bde734f8c 100644
--- a/roles/common/tasks/vcin-process-vars.yml
+++ b/roles/common/tasks/vcin-process-vars.yml
@@ -12,6 +12,13 @@
   when: not myvcins_check
 
 - block:
+
+  - name: Verifying VCIN Active/Standby variables
+    include_role:
+      name: common
+      tasks_from: vcin-validate-as-vars
+    with_items: "{{ myvcins | json_query('[?master_vcin].master_vcin') }}"
+
   - name: Disable HA deployment for VCIN
     set_fact:
       disable_vcin_ha: True
diff --git a/roles/common/tasks/vcin-validate-as-vars.yml b/roles/common/tasks/vcin-validate-as-vars.yml
new file mode 100644
index 0000000000..340168d691
--- /dev/null
+++ b/roles/common/tasks/vcin-validate-as-vars.yml
@@ -0,0 +1,21 @@
+---
+- name: Getting the slave and master count
+  set_fact:
+    slave_count: "{{ myvcins | json_query(\"[?master_vcin=='\"+ item + \"']\") | length }}"
+    master_count: "{{ myvcins | json_query(\"[?hostname=='\"+ item + \"']\") | length }}"
+    master_slave_count: "{{ myvcins | json_query(\"[?hostname=='\"+ item + \"'] | [?master_vcin=='\"+ item +\"']\") | length }}"
+
+- name: Verifying the master exists
+  assert:
+    that: master_count|int == 1
+    msg: "{{ item }} does not exist as a master"
+
+- name: Verifying that the master is not the same as the slave
+  assert:
+    that: master_slave_count|int == 0
+    msg: "{{ item }} can not be configured as its own slave"
+
+- name: Verifying there is only one slave
+  assert:
+    that: slave_count|int == 1
+    msg: "{{ item }} has more than one slave, only one slave is allowed per master"
\ No newline at end of file
diff --git a/roles/common/tasks/vrs-process-vars.yml b/roles/common/tasks/vrs-process-vars.yml
index a5fd4abfae..3538fdc22a 100644
--- a/roles/common/tasks/vrs-process-vars.yml
+++ b/roles/common/tasks/vrs-process-vars.yml
@@ -373,7 +373,7 @@
   set_fact:
     myvrss: "{{ myvrss|default({}) }}"
   when: not myvrss_check
-   
+
 - block:
   - name: Create host_vars files for vrs
     template: src=vrs.j2 backup=no dest={{ inventory_dir }}/host_vars/{{ item.1 }}
diff --git a/roles/common/tasks/vrs-vm-process-vars.yml b/roles/common/tasks/vrs-vm-process-vars.yml
new file mode 100644
index 0000000000..2056c7a2c3
--- /dev/null
+++ b/roles/common/tasks/vrs-vm-process-vars.yml
@@ -0,0 +1,12 @@
+---
+- name: Set myvrs_vms
+  set_fact: myvrs_vms_check={{ myvrs_vms is defined }}
+
+- name: Assign empty list to myvrs_vms if it is undefined
+  set_fact: myvrs_vms= default([])
+  when: not myvrs_vms_check
+
+- name: Create host_vars files for vrs_vms
+  template: src=vrs_vm.j2 backup=no dest={{ playbook_dir }}/host_vars/{{ item.hostname }}
+  with_items: "{{ myvrs_vms }}"
+  when: myvrs_vms_check
diff --git a/roles/common/tasks/vsc-process-vars.yml b/roles/common/tasks/vsc-process-vars.yml
index 89d7fa3f9c..8ae3ae0005 100644
--- a/roles/common/tasks/vsc-process-vars.yml
+++ b/roles/common/tasks/vsc-process-vars.yml
@@ -84,6 +84,7 @@
   set_fact: myvscs= default([])
   when: not myvscs_check
 
+
 - name: Create host_vars files for vsc
   template: src=vsc.j2 backup=no dest={{ inventory_dir }}/host_vars/{{ item.hostname }}
   with_items: "{{ myvscs }}"
diff --git a/roles/common/tasks/vsc-tls-setup.yml b/roles/common/tasks/vsc-tls-setup.yml
new file mode 100644
index 0000000000..d367a7bf4b
--- /dev/null
+++ b/roles/common/tasks/vsc-tls-setup.yml
@@ -0,0 +1,43 @@
+- block:
+  - name: Create and transfer certs
+    include_role:
+      name: common
+      tasks_from: vsd-generate-transfer-certificates
+    vars:
+      certificate_password: "{{ vsc_password }}"
+      certificate_username: "{{ xmpp.username }}"
+      commonName: "{{ xmpp.username }}"
+      certificate_type: server
+      scp_user: "{{ vsc_username }}"
+      scp_location: /
+      additional_parameters: -d {{ inventory_hostname }}
+
+  - name: Configure VSC for secure communication
+    sros_config:
+      lines:
+        - configure system security tls-profile vsc-tls-profile own-key cf1:\{{ xmpp.username }}-Key.pem
+        - configure system security tls-profile vsc-tls-profile own-certificate cf1:\{{ xmpp.username }}.pem
+        - configure system security tls-profile vsc-tls-profile ca-certificate cf1:\{{ xmpp.username }}-CA.pem
+        - configure system security tls-profile vsc-tls-profile no shutdown
+        - configure vswitch-controller open-flow tls-profile vsc-tls-profile
+        - configure vswitch-controller xmpp tls-profile vsc-tls-profile
+        - configure system time ntp ntp-server
+        - admin save
+      provider: "{{ vsc_creds }}"
+    delegate_to: localhost
+
+  - name: Check xmpp connectivity between VSC and VSD after enabling TLS
+    sros_command:
+      commands:
+        - show vswitch-controller xmpp-server | match Functional
+      provider: "{{ vsc_creds }}"
+    register: xmpp_status
+    until: xmpp_status.stdout[0].find('Functional') != -1
+    retries: 6
+    delay: 10
+    delegate_to: localhost
+
+  - name: Print output of 'show vswitch-controller xmpp-server' when verbosity >= 1
+    debug: var=xmpp_status verbosity=1
+
+  when: secure_communication
+
diff --git a/roles/common/tasks/vsd-generate-transfer-certificates.yml b/roles/common/tasks/vsd-generate-transfer-certificates.yml
new file mode 100644
index 0000000000..e63ede83ff
--- /dev/null
+++ b/roles/common/tasks/vsd-generate-transfer-certificates.yml
@@ -0,0 +1,61 @@
+- name: Get vsd node(s) information
+  import_role:
+    name: common
+    tasks_from: vsd-node-info.yml
+  vars:
+    vsd_hostname: "{{ vsd_fqdn }}"
+  run_once: true
+
+- name: Get VSD version
+  shell: echo $VSD_VERSION
+  register: vsd_version
+  delegate_to: "{{ vsd_hostname_list[0] }}"
+  remote_user: "{{ vsd_username }}"
+
+- name: Check if the user is already present
+  shell: '/opt/vsd/ejbca/bin/ejbca.sh ra listendentities -S 40 | grep "End Entity: {{ certificate_username }}"'
+  register: userExistsOutput
+  remote_user: "{{ vsd_username }}"
+  delegate_to: "{{ vsd_hostname_list[0] }}"
+  ignore_errors: yes
+
+- name: Create and transfer certs from VSD
+  shell: "/bin/sshpass -p{{ certificate_password }} /opt/vsd/ejbca/deploy/certMgmt.sh -a generate -u {{ certificate_username }} -c {{ commonName }} -o csp -f pem -t {{ certificate_type }} {{ additional_parameters }} "
+  remote_user: "{{ vsd_username }}"
+  delegate_to: "{{ vsd_hostname_list[0] }}"
+  register: created
+  until: "created.rc == 0 or (created.stdout is search('fail adding entity'))"
+  retries: 5
+  delay: 30
+  when: "'4.0.4' not in vsd_version and userExistsOutput.rc != 0 and scp_user is not defined and scp_location is not defined"
+
+- name: Create and transfer certs from VSD
+  shell: "/bin/sshpass -p{{ certificate_password }} /opt/vsd/ejbca/deploy/certMgmt.sh -a generate -u {{ certificate_username }} -c {{ commonName }} -o csp -f pem -t {{ certificate_type }} -s {{ scp_user }}@{{ inventory_hostname }}:{{ scp_location }} {{ additional_parameters }} "
+  remote_user: "{{ vsd_username }}"
+  delegate_to: "{{ vsd_hostname_list[0] }}"
+  register: created
+  until: "created.rc == 0 or (created.stdout is search('fail adding entity'))"
+  retries: 5
+  delay: 30
+  when: "'4.0.4' not in vsd_version and userExistsOutput.rc != 0 and scp_user is defined and scp_location is defined"
+
+- name: Create and transfer certs from 4.0.4 VSD
+  shell: "/bin/sshpass -p{{ certificate_password }} /opt/vsd/ejbca/deploy/certMgmt.sh -a generate -u {{ certificate_username }} -c {{ commonName }} -o csp -f pem -t {{ certificate_type }} -n VSPCA {{ additional_parameters }} "
+  remote_user: "{{ vsd_username }}"
+  delegate_to: "{{ vsd_hostname_list[0] }}"
+  register: created
+  until: "created.rc == 0 or (created.stdout is search('fail adding entity'))"
+  retries: 5
+  delay: 30
+  when: "'4.0.4' in vsd_version and userExistsOutput.rc != 0 and scp_user is not defined and scp_location is not defined"
+
+- name: Create and transfer certs from 4.0.4 VSD
+  shell: "/bin/sshpass -p{{ certificate_password }} /opt/vsd/ejbca/deploy/certMgmt.sh -a generate -u {{ certificate_username }} -c {{ commonName }} -o csp -f pem -t {{ certificate_type }} -s {{ scp_user }}@{{ inventory_hostname }}:{{ scp_location }} -n VSPCA {{ additional_parameters }} "
+  remote_user: "{{ vsd_username }}"
+  delegate_to: "{{ vsd_hostname_list[0] }}"
+  register: created
+  until: "created.rc == 0 or (created.stdout is search('fail adding entity'))"
+  retries: 5
+  delay: 30
+  when: "'4.0.4' in vsd_version
and userExistsOutput.rc != 0 and scp_user is defined and scp_location is defined" + diff --git a/roles/common/tasks/vsd-verify-db-status.yml b/roles/common/tasks/vsd-verify-db-status.yml new file mode 100644 index 0000000000..a3b0af571b --- /dev/null +++ b/roles/common/tasks/vsd-verify-db-status.yml @@ -0,0 +1,22 @@ +- name: Reading the status of the DB upgrade directory + stat: + path: "/var/lib/mysql/nuageDbUpgrade/" + register: db_dir + remote_user: "{{ vsd_username }}" + +- name: Verify that DB upgrade directory exists + assert: + that: + - db_dir.stat.exists == True + msg: "nuageDbUpgrade dir does not exist" + +- name: Check that the database is properly identified by MySQL + shell: "mysql -e 'show databases;' | grep nuageDbUpgrade" + register: db + remote_user: "{{ vsd_username }}" + +- name: Verify the upgrade database name + assert: + that: + - "'nuageDbUpgrade' == db.stdout" + msg: "Could not find nuageDbUpgrade database in mysql" diff --git a/roles/common/tasks/vstat-upgrade-check.yml b/roles/common/tasks/vstat-upgrade-check.yml index b1cd05b6bf..1e0c339d15 100644 --- a/roles/common/tasks/vstat-upgrade-check.yml +++ b/roles/common/tasks/vstat-upgrade-check.yml @@ -1,9 +1,15 @@ - name: Skip vstat upgrade for versions that do not require upgrade set_fact: - upgrade_521_to_522: "{{ upgrade_from_version|version_compare('5.2.1', operator='eq', strict=True) and upgrade_to_version|version_compare('5.2.2', operator='eq', strict=True) }}" - upgrade_521_to_531: "{{ upgrade_from_version|version_compare('5.2.1', operator='eq', strict=True) and upgrade_to_version|version_compare('5.3.1', operator='eq', strict=True) }}" - upgrade_522_to_531: "{{ upgrade_from_version|version_compare('5.2.2', operator='eq', strict=True) and upgrade_to_version|version_compare('5.3.1', operator='eq', strict=True) }}" + upgrade_from_521: "{{ upgrade_from_version|version_compare('5.2.1', operator='eq', strict=True) + and ( upgrade_to_version|version_compare('5.2.2', operator='eq', 
strict=True) + or upgrade_to_version|version_compare('5.2.3', operator='eq', strict=True) + or upgrade_to_version|version_compare('5.3.1', operator='eq', strict=True) ) }}" + upgrade_from_522: "{{ upgrade_from_version|version_compare('5.2.2', operator='eq', strict=True) + and ( upgrade_to_version|version_compare('5.2.3', operator='eq', strict=True) + or upgrade_to_version|version_compare('5.3.1', operator='eq', strict=True) ) }}" + upgrade_from_523: "{{ upgrade_from_version|version_compare('5.2.3', operator='eq', strict=True) + and upgrade_to_version|version_compare('5.3.1', operator='eq', strict=True) }}" - name: Skip vstat upgrade for versions that do not require upgrade set_fact: - skip_vstat_upgrade: "{{ upgrade_521_to_522 or upgrade_521_to_531 or upgrade_522_to_531 }}" + skip_vstat_upgrade: "{{ upgrade_from_521 or upgrade_from_522 or upgrade_from_523 }}" diff --git a/roles/common/templates/vnsutil.j2 b/roles/common/templates/vnsutil.j2 index 42a7b6751d..65321186e2 100644 --- a/roles/common/templates/vnsutil.j2 +++ b/roles/common/templates/vnsutil.j2 @@ -20,6 +20,10 @@ data_fqdn: {{ item.data_fqdn }} {% if item.data_ip is defined %} data_ip: {{ item.data_ip }} data_netmask: {{ item.data_netmask }} +{% if item.data_gateway is defined and item.data_static_route is defined %} +data_gateway: {{ item.data_gateway }} +data_static_route: {{ item.data_static_route|to_yaml }} +{% endif %} {% endif %} {% if item.nsgv_ip is defined %} nsgv_ip: {{ item.nsgv_ip }} diff --git a/roles/common/templates/vrs.j2 b/roles/common/templates/vrs.j2 index fb46f7e6d3..42e4499221 100644 --- a/roles/common/templates/vrs.j2 +++ b/roles/common/templates/vrs.j2 @@ -8,6 +8,9 @@ node_ip_addr: {{ item.1 }} active_controller_addr: {{ item.0.active_controller_ip }} standby_controller_addr: {{ item.0.standby_controller_ip }} +compute_username: {{ compute_username|default('root') }} +compute_password: {{ compute_password|default('caso') }} + {% if item.0.dkms_install is defined %} {% set 
dkms_install = item.0.dkms_install %} {% else %} @@ -15,6 +18,13 @@ standby_controller_addr: {{ item.0.standby_controller_ip }} {% endif %} dkms_install: {{ dkms_install }} +{% if item.0.secure_communication is defined %} +{% set secure_communication = item.0.secure_communication %} +{% else %} +{% set secure_communication = True %} +{% endif %} +secure_communication: {{ secure_communication }} + {% if item.0.uplink_interface is defined %} uplink_interface: {{ item.0.uplink_interface }} {% endif %} @@ -133,4 +143,6 @@ libnetwork_install: true libnetwork_scope: {{ libnetwork.scope }} libnetwork_cluster_store_url: {{ libnetwork.cluster_store_url }} + + {% endif %} diff --git a/roles/common/templates/vsc.j2 b/roles/common/templates/vsc.j2 index 0c65c09c6c..e9f7cdec2a 100644 --- a/roles/common/templates/vsc.j2 +++ b/roles/common/templates/vsc.j2 @@ -72,6 +72,13 @@ vcenter: vsd_fqdn: {{ item.vsd_fqdn }} +{% if item.secure_communication is defined %} +{% set secure_communication = item.secure_communication %} +{% else %} +{% set secure_communication = True %} +{% endif %} +secure_communication: {{ secure_communication }} + {% if item.system_ip is defined %} system_ip: {{ item.system_ip }} {% endif %} diff --git a/roles/common/templates/vsd.j2 b/roles/common/templates/vsd.j2 index 7f9659d295..a655e22bf9 100644 --- a/roles/common/templates/vsd.j2 +++ b/roles/common/templates/vsd.j2 @@ -19,6 +19,9 @@ vmname: {{ item.hostname }} vcin_mode: true vsd_sa_or_ha: sa +{% if item.master_vcin is defined %} +master_vcin: {{ item.master_vcin }} +{% endif %} {% else %} diff --git a/roles/common/templates/vsr.j2 b/roles/common/templates/vsr.j2 index 5b6f7924da..9f4d5efdce 100644 --- a/roles/common/templates/vsr.j2 +++ b/roles/common/templates/vsr.j2 @@ -28,6 +28,7 @@ ports_to_hv_bridges: {% endfor %} {% endif %} + # VSR bof address configuration mgmt_ip: {{ item.mgmt_ip }} mgmt_netmask_prefix: {{ item.mgmt_netmask_prefix }} @@ -38,6 +39,17 @@ mgmt_static_route_list: {% for route in 
item.mgmt_static_route_list %} - {{ route }} {% endfor %} + +# VSR router address configuration +router: +{% if item.router.data_ip is defined %} + data_ip: {{ item.router.data_ip }} +{% endif %} + system_ip: {{ item.router.system_ip }} + +nuage_integration: {{ item.nuage_integration | default(False) }} + +# License and config file locations {% if 'license_file' in item %} license_file: {{ item.license_file }} {% endif %} diff --git a/roles/dns-deploy/tasks/main.yml b/roles/dns-deploy/tasks/main.yml index 9c7cc8600e..a67e128e7d 100644 --- a/roles/dns-deploy/tasks/main.yml +++ b/roles/dns-deploy/tasks/main.yml @@ -7,6 +7,18 @@ ssh_host: "{{ mgmt_ip }}" - block: + - name: Configure yum proxy + lineinfile: + dest: /etc/yum.conf + regexp: "^proxy=" + line: "proxy={{ yum_proxy }}" + when: not yum_proxy | match('NONE') + + - name: Execute a yum update + yum: + name: '*' + state: latest + when: yum_update - name: stop and flush firewall service: name=firewalld state=stopped @@ -25,6 +37,8 @@ include_role: name: common tasks_from: linux-ntp-sync + vars: + rem_user: "{{ dns_username }}" - block: diff --git a/roles/dns-destroy/tasks/kvm.yml b/roles/dns-destroy/tasks/kvm.yml index 211756dd84..e40c1dd3dd 100644 --- a/roles/dns-destroy/tasks/kvm.yml +++ b/roles/dns-destroy/tasks/kvm.yml @@ -5,7 +5,7 @@ delegate_to: "{{ target_server }}" remote_user: "{{ target_server_username }}" -- include: dns_destroy_helper.yml +- import_tasks: dns_destroy_helper.yml when: inventory_hostname in virt_vms.list_vms - name: Destroy the images directory diff --git a/roles/dns-destroy/tasks/main.yml b/roles/dns-destroy/tasks/main.yml index 1d785abd57..70771f6452 100644 --- a/roles/dns-destroy/tasks/main.yml +++ b/roles/dns-destroy/tasks/main.yml @@ -1,5 +1,5 @@ --- -- include: kvm.yml +- import_tasks: kvm.yml when: target_server_type | match("kvm") tags: - dns diff --git a/roles/dns-predeploy/tasks/kvm.yml b/roles/dns-predeploy/tasks/kvm.yml index 4c32accf4f..915e532c48 100644 --- 
a/roles/dns-predeploy/tasks/kvm.yml +++ b/roles/dns-predeploy/tasks/kvm.yml @@ -52,17 +52,17 @@ - name: Check if the VM is already running on {{ target_server }} fail: msg="The VM is already defined on this target_server." - when: inventory_hostname in virt_vms.list_vms + when: vm_name in virt_vms.list_vms - name: Create libvirt image directory on {{ target_server }} - file: path={{ images_path }}/{{ inventory_hostname }} + file: path={{ images_path }}/{{ vm_name }} state=directory owner={{ libvirt.user }} group={{ libvirt.group }} - name: Copy the DNS qcow image to virt images directory on {{ target_server }} copy: src={{ qcow2_path }}/{{ qcow2_file_name }} - dest={{ images_path }}/{{ inventory_hostname }} + dest={{ images_path }}/{{ vm_name }} owner={{ libvirt.user }} group={{ libvirt.group }} @@ -71,7 +71,7 @@ guestfish_dest: "{{ images_path }}/{{ vm_name }}/{{ qcow2_file_name }}" - name: Create a temporary copy of the network script for eth0 on {{ target_server }} - template: src=ifcfg-eth0.j2 backup=no dest={{ images_path }}/{{ inventory_hostname }}/ifcfg-eth0 + template: src=ifcfg-eth0.j2 backup=no dest={{ images_path }}/{{ vm_name }}/ifcfg-eth0 - name: Get list of partitions shell: "guestfish -r -a {{ guestfish_dest }} run : list-filesystems | grep -Ev '(unknown|swap)'" @@ -91,44 +91,44 @@ - debug: var=guestfish_mount verbosity=1 - name: Copy eth0 network script file to the DNS image on {{ target_server }} - command: guestfish --rw -a {{ guestfish_dest }} -m {{ guestfish_mount }} copy-in {{ images_path }}/{{ inventory_hostname }}/ifcfg-eth0 /etc/sysconfig/network-scripts/ + command: guestfish --rw -a {{ guestfish_dest }} -m {{ guestfish_mount }} copy-in {{ images_path }}/{{ vm_name }}/ifcfg-eth0 /etc/sysconfig/network-scripts/ - name: Remove temporary copy of eth0 network script - file: path={{ images_path }}/{{ inventory_hostname }}/ifcfg-eth0 state=absent + file: path={{ images_path }}/{{ vm_name }}/ifcfg-eth0 state=absent - name: Set the owner and group 
on the eth0 network script file in the DNS image command: guestfish --rw -a {{ guestfish_dest }} -m {{ guestfish_mount }} chown 0 0 /etc/sysconfig/network-scripts/ifcfg-eth0 - name: Create a temporary copy of the network script for eth1 on {{ target_server }} - template: src=ifcfg-eth1.j2 backup=no dest={{ images_path }}/{{ inventory_hostname }}/ifcfg-eth1 + template: src=ifcfg-eth1.j2 backup=no dest={{ images_path }}/{{ vm_name }}/ifcfg-eth1 - name: Copy eth1 network script file to the DNS image on {{ target_server }} - command: guestfish --rw -a {{ guestfish_dest }} -m {{ guestfish_mount }} copy-in {{ images_path }}/{{ inventory_hostname }}/ifcfg-eth1 /etc/sysconfig/network-scripts/ + command: guestfish --rw -a {{ guestfish_dest }} -m {{ guestfish_mount }} copy-in {{ images_path }}/{{ vm_name }}/ifcfg-eth1 /etc/sysconfig/network-scripts/ - name: Remove temporary copy of eth1 network script - file: path={{ images_path }}/{{ inventory_hostname }}/ifcfg-eth1 state=absent + file: path={{ images_path }}/{{ vm_name }}/ifcfg-eth1 state=absent - name: Set the owner and group on the eth1 network script file in the DNS image command: guestfish --rw -a {{ guestfish_dest }} -m {{ guestfish_mount }} chown 0 0 /etc/sysconfig/network-scripts/ifcfg-eth1 - name: Create a temporary copy of the syscfg network file on {{ target_server }} - template: src=network.j2 backup=no dest={{ images_path }}/{{ inventory_hostname }}/network + template: src=network.j2 backup=no dest={{ images_path }}/{{ vm_name }}/network - name: Copy network file to the DNS image - command: guestfish --rw -a {{ guestfish_dest }} -m {{ guestfish_mount }} copy-in {{ images_path }}/{{ inventory_hostname }}/network /etc/sysconfig/ + command: guestfish --rw -a {{ guestfish_dest }} -m {{ guestfish_mount }} copy-in {{ images_path }}/{{ vm_name }}/network /etc/sysconfig/ - name: Remove temporary copy of network file - file: path={{ images_path }}/{{ inventory_hostname }}/network state=absent + file: path={{ images_path 
}}/{{ vm_name }}/network state=absent - block: - name: Create a temporary copy of the network script for route-eth1 on {{ target_server }} - template: src=route-eth1.j2 backup=no dest={{ images_path }}/{{ inventory_hostname }}/route-eth1 + template: src=route-eth1.j2 backup=no dest={{ images_path }}/{{ vm_name }}/route-eth1 - name: Copy route-eth1 network script file to the DNS image on {{ target_server }} - command: guestfish --rw -a {{ guestfish_dest }} -m {{ guestfish_mount }} copy-in {{ images_path }}/{{ inventory_hostname }}/route-eth1 /etc/sysconfig/network-scripts/ + command: guestfish --rw -a {{ guestfish_dest }} -m {{ guestfish_mount }} copy-in {{ images_path }}/{{ vm_name }}/route-eth1 /etc/sysconfig/network-scripts/ - name: Remove temporary copy of route-eth1 network script - file: path={{ images_path }}/{{ inventory_hostname }}/route-eth1 state=absent + file: path={{ images_path }}/{{ vm_name }}/route-eth1 state=absent - name: Set the owner and group on the route-eth1 network script file in the DNS image command: guestfish --rw -a {{ guestfish_dest }} -m {{ guestfish_mount }} chown 0 0 /etc/sysconfig/network-scripts/route-eth1 @@ -149,13 +149,13 @@ register: current_user_ssh_key - name: Create a temporary copy of the authorized_keys file - template: src=authorized_keys.j2 backup=no dest={{ images_path }}/{{ inventory_hostname }}/authorized_keys + template: src=authorized_keys.j2 backup=no dest={{ images_path }}/{{ vm_name }}/authorized_keys - name: Copy authorized_keys file to the DNS image - command: guestfish --rw -a {{ guestfish_dest }} -m {{ guestfish_mount }} copy-in {{ images_path }}/{{ inventory_hostname }}/authorized_keys /root/.ssh/ + command: guestfish --rw -a {{ guestfish_dest }} -m {{ guestfish_mount }} copy-in {{ images_path }}/{{ vm_name }}/authorized_keys /root/.ssh/ - name: Remove temporary copy of authorized_keys file - file: path={{ images_path }}/{{ inventory_hostname }}/authorized_keys state=absent + file: path={{ images_path }}/{{ 
vm_name }}/authorized_keys state=absent - name: Set the owner and group for the authorized_keys file on the DNS image command: guestfish --rw -a {{ guestfish_dest }} -m {{ guestfish_mount }} chown 0 0 /root/.ssh/authorized_keys @@ -164,16 +164,15 @@ command: guestfish --rw -a {{ guestfish_dest }} -m {{ guestfish_mount }} chmod 0640 /root/.ssh/authorized_keys - name: "Define new DNS VM" - virt: name="{{ inventory_hostname }}" + virt: name="{{ vm_name }}" command=define xml="{{ lookup('template', 'dns.xml.j2') }}" uri=qemu:///system - name: "Run DNS VM" - virt: name="{{ inventory_hostname }}" + virt: name="{{ vm_name }}" state=running uri=qemu:///system delegate_to: "{{ target_server }}" remote_user: "{{ target_server_username }}" - diff --git a/roles/dns-predeploy/tasks/main.yml b/roles/dns-predeploy/tasks/main.yml index c24dc08d69..fa60542b28 100644 --- a/roles/dns-predeploy/tasks/main.yml +++ b/roles/dns-predeploy/tasks/main.yml @@ -1,5 +1,5 @@ --- -- include: kvm.yml +- import_tasks: kvm.yml when: target_server_type | match("kvm") tags: - dns diff --git a/roles/dns-predeploy/templates/dns.xml.j2 b/roles/dns-predeploy/templates/dns.xml.j2 index 3a76708608..31e8cc7629 100644 --- a/roles/dns-predeploy/templates/dns.xml.j2 +++ b/roles/dns-predeploy/templates/dns.xml.j2 @@ -1,5 +1,5 @@ - {{ inventory_hostname }} + {{ vm_name }} {{ dns_ram }} {{ dns_ram }} 2 diff --git a/roles/gvm-destroy/tasks/kvm.yml b/roles/gvm-destroy/tasks/kvm.yml index deac4ac26f..8ce0fd5213 100644 --- a/roles/gvm-destroy/tasks/kvm.yml +++ b/roles/gvm-destroy/tasks/kvm.yml @@ -5,5 +5,5 @@ delegate_to: "{{ target_server }}" remote_user: "{{ target_server_username }}" -- include: gvm_destroy_helper.yml +- import_tasks: gvm_destroy_helper.yml when: inventory_hostname in virt_vms.list_vms diff --git a/roles/gvm-destroy/tasks/main.yml b/roles/gvm-destroy/tasks/main.yml index 67d45042da..3fc12f1bab 100644 --- a/roles/gvm-destroy/tasks/main.yml +++ b/roles/gvm-destroy/tasks/main.yml @@ -1,5 +1,5 @@ --- 
-- include: kvm.yml +- import_tasks: kvm.yml when: target_server_type | match("kvm") tags: - gvm diff --git a/roles/gvm-predeploy/tasks/main.yml b/roles/gvm-predeploy/tasks/main.yml index 2576430e6a..5e3e6c5cec 100644 --- a/roles/gvm-predeploy/tasks/main.yml +++ b/roles/gvm-predeploy/tasks/main.yml @@ -1,11 +1,11 @@ --- -- include: kvm.yml +- import_tasks: kvm.yml when: target_server_type | match("kvm") tags: - gvm - gvm-predeploy -- include: vcenter.yml +- import_tasks: vcenter.yml when: target_server_type | match("vcenter") tags: - gvm diff --git a/roles/infra-deploy/tasks/main.yml b/roles/infra-deploy/tasks/main.yml deleted file mode 100644 index f80b41a04c..0000000000 --- a/roles/infra-deploy/tasks/main.yml +++ /dev/null @@ -1,69 +0,0 @@ ---- -- name: Get Infra server details from OpenStack - os_server_facts: - auth: - "{{ os_auth }}" - server: "{{ inventory_hostname }}*" - register: infra_server - delegate_to: 127.0.0.1 - -- name: Set Infra mgmt ip - set_fact: - infra_mgmt_ip: "{{ infra_server['ansible_facts']['openstack_servers'][0]['private_v4'] }}" - -- name: Update /etc/hosts file on ansible host - lineinfile: - dest: /etc/hosts - line: "{{ infra_mgmt_ip }} {{ inventory_hostname }}" - delegate_to: 127.0.0.1 - -- name: Clean known_hosts of Infra's - command: ssh-keygen -R "{{ infra_mgmt_ip }}" - delegate_to: localhost - ignore_errors: True - -- name: Wait for INFRA ssh to be ready - include_role: - name: common - tasks_from: wait-for-ssh - vars: - ssh_host: "{{ infra_mgmt_ip }}" - -- name: Pause for cloud-init {{ inventory_hostname }} - pause: - seconds: 10 - -- name: Add nameserver - command: echo "{{ dns_server_list[1] }}" >> /etc/resolv.conf - remote_user: "root" - -- name: Install DNS if not present - yum: - name: dnsmasq - state: latest - remote_user: "root" - -- name: Configure dnsmasq - template: - src: "dnsmasq.conf.j2" - dest: "/etc/dnsmasq.conf" - remote_user: "root" - -- name: Start the DNS service - command: service dnsmasq start - remote_user: 
"root" - -- name: Enable the DNS service - command: chkconfig dnsmasq on - remote_user: "root" - -- name: Install NTP if not present - yum: - name: ntp - state: latest - remote_user: "root" - -- name: Configure ntpd and ntpdate and local time zone - include_role: - name: common - tasks_from: linux-ntp-sync diff --git a/roles/infra-deploy/templates/dnsmasq.conf.j2 b/roles/infra-deploy/templates/dnsmasq.conf.j2 deleted file mode 100644 index bc2836936d..0000000000 --- a/roles/infra-deploy/templates/dnsmasq.conf.j2 +++ /dev/null @@ -1,4 +0,0 @@ -interface=eth0 -no-dhcp-interface=eth0 -domain=example.com -conf-dir=/etc/dnsmasq.d diff --git a/roles/infra-destroy/tasks/heat.yml b/roles/infra-destroy/tasks/heat.yml deleted file mode 100644 index 0f9759dc6b..0000000000 --- a/roles/infra-destroy/tasks/heat.yml +++ /dev/null @@ -1,8 +0,0 @@ ---- -- name: Destroy INFRA heat template - os_stack: - name: "{{ inventory_hostname }}" - auth: - "{{ os_auth }}" - state: absent - delegate_to: 127.0.0.1 diff --git a/roles/infra-destroy/tasks/infra_destroy_helper.yml b/roles/infra-destroy/tasks/infra_destroy_helper.yml deleted file mode 100644 index df8f22df5b..0000000000 --- a/roles/infra-destroy/tasks/infra_destroy_helper.yml +++ /dev/null @@ -1,24 +0,0 @@ ---- -- name: Destroy Infra VM - virt: - name: "{{ inventory_hostname }}" - state: destroyed - uri: qemu:///system - delegate_to: "{{ target_server }}" - remote_user: "{{ target_server_username }}" - -- name: Undefine Infra VM - virt: - name: "{{ inventory_hostname }}" - command: undefine - uri: qemu:///system - delegate_to: "{{ target_server }}" - remote_user: "{{ target_server_username }}" - -- name: Destroy the images directory - file: - path: "{{ images_path }}/{{ inventory_hostname }}" - state: absent - delegate_to: "{{ target_server }}" - remote_user: "{{ target_server_username }}" - diff --git a/roles/infra-destroy/tasks/kvm.yml b/roles/infra-destroy/tasks/kvm.yml deleted file mode 100644 index 2af06d7b75..0000000000 --- 
a/roles/infra-destroy/tasks/kvm.yml +++ /dev/null @@ -1,16 +0,0 @@ ---- -- name: List the Virtual Machines on {{ target_server }} - virt: command=list_vms - register: virt_vms - delegate_to: "{{ target_server }}" - remote_user: "{{ target_server_username }}" - -- include: infra_destroy_helper.yml - when: inventory_hostname in virt_vms.list_vms - -- name: Destroy the images directory - file: path={{ images_path }}/{{ inventory_hostname }} - state=absent - delegate_to: "{{ target_server }}" - remote_user: "{{ target_server_username }}" - diff --git a/roles/infra-destroy/tasks/main.yml b/roles/infra-destroy/tasks/main.yml deleted file mode 100644 index f4a7634ad1..0000000000 --- a/roles/infra-destroy/tasks/main.yml +++ /dev/null @@ -1,13 +0,0 @@ ---- -- include: kvm.yml - when: target_server_type | match("kvm") - tags: - - infra - - infra-destroy - -- include: heat.yml - when: target_server_type | match("heat") - tags: - - infra - - heat - - infra-destroy diff --git a/roles/infra-predeploy/files/infra.yml b/roles/infra-predeploy/files/infra.yml deleted file mode 100644 index 65585750ca..0000000000 --- a/roles/infra-predeploy/files/infra.yml +++ /dev/null @@ -1,37 +0,0 @@ -heat_template_version: '2014-10-16' -parameters: - vm_name: - type: string - infra_image: - type: string - infra_flavor: - type: string - infra_network: - type: string - infra_subnet: - type: string - ssh_key: - type: string - mgmt_ip: - type: string -resources: - mycompute: - type: OS::Nova::Server - properties: - name: {get_param: vm_name} - flavor: {get_param: infra_flavor} - image: {get_param: infra_image} - networks: - - network: {get_param: infra_network} - user_data_format: RAW - user_data: - str_replace: - template: | - #!/bin/bash - echo usr >> /root/.ssh/authorized_keys - params: - usr: {get_param: ssh_key} -outputs: - server_ip: - description: mgmt ip assigned to the server - value: { get_attr: [mycompute, networks, {get_param: infra_network}, 0]} diff --git 
a/roles/infra-predeploy/files/infra_fixed.yml b/roles/infra-predeploy/files/infra_fixed.yml deleted file mode 100644 index 0a16ab8784..0000000000 --- a/roles/infra-predeploy/files/infra_fixed.yml +++ /dev/null @@ -1,42 +0,0 @@ -heat_template_version: '2014-10-16' -parameters: - vm_name: - type: string - infra_image: - type: string - infra_flavor: - type: string - infra_network: - type: string - infra_subnet: - type: string - ssh_key: - type: string - mgmt_ip: - type: string -resources: - mgmt_port: - type: OS::Neutron::Port - properties: - network_id: {get_param: infra_network} - fixed_ips: [{"subnet": {get_param: infra_subnet}, "ip_address": {get_param: mgmt_ip}}] - mycompute: - type: OS::Nova::Server - properties: - name: {get_param: vm_name} - flavor: {get_param: infra_flavor} - image: {get_param: infra_image} - networks: - - port: {get_resource: mgmt_port} - user_data_format: RAW - user_data: - str_replace: - template: | - #!/bin/bash - echo usr >> /root/.ssh/authorized_keys - params: - usr: {get_param: ssh_key} -outputs: - server_ip: - description: mgmt ip assigned to the server - value: { get_attr: [mycompute, networks, {get_param: infra_network}, 0]} diff --git a/roles/infra-predeploy/tasks/heat.yml b/roles/infra-predeploy/tasks/heat.yml deleted file mode 100644 index 202458bc57..0000000000 --- a/roles/infra-predeploy/tasks/heat.yml +++ /dev/null @@ -1,32 +0,0 @@ ---- -- name: Get the public key for the current user - local_action: command cat ~/.ssh/id_rsa.pub - register: current_user_ssh_key - -- name: Get heat stack for fixed ip deployments - set_fact: - infra_heat_template: "{{ role_path }}/files/infra_fixed.yml" - when: dhcp == False - -- name: Get heat stack for dhcp based deployments - set_fact: - infra_heat_template: "{{ role_path }}/files/infra.yml" - when: dhcp == True - -- name: Creating INFRA stack - register: infra_stack - os_stack: - name: "{{ inventory_hostname }}" - template: "{{ infra_heat_template }}" - auth: - "{{ os_auth }}" - parameters: 
- vm_name: "{{ inventory_hostname }}" - infra_image: "{{ infra_image }}" - infra_flavor: "{{ infra_flavor }}" - infra_network: "{{ infra_network }}" - infra_subnet: "{{ infra_subnet | default('NONE') }}" - mgmt_ip: "{{ mgmt_ip | default('NONE') }}" - ssh_key: "{{ current_user_ssh_key.stdout }}" - delegate_to: 127.0.0.1 -- debug: var=infra_stack['stack']['outputs'][0]['output_value'] diff --git a/roles/infra-predeploy/tasks/main.yml b/roles/infra-predeploy/tasks/main.yml deleted file mode 100644 index 63680f9fc4..0000000000 --- a/roles/infra-predeploy/tasks/main.yml +++ /dev/null @@ -1,6 +0,0 @@ ---- -- include: heat.yml - when: target_server_type | match("heat") - tags: - - infra - - infra-predeploy diff --git a/roles/nsgv-destroy/tasks/kvm.yml b/roles/nsgv-destroy/tasks/kvm.yml index 0d484d741b..0163472be6 100644 --- a/roles/nsgv-destroy/tasks/kvm.yml +++ b/roles/nsgv-destroy/tasks/kvm.yml @@ -5,7 +5,7 @@ delegate_to: "{{ target_server }}" remote_user: "{{ target_server_username }}" -- include: nsgv_destroy_helper.yml +- import_tasks: nsgv_destroy_helper.yml when: vmname in virt_vms.list_vms - name: Destroy the images directory diff --git a/roles/nsgv-destroy/tasks/main.yml b/roles/nsgv-destroy/tasks/main.yml index d1230e902e..6357544a59 100644 --- a/roles/nsgv-destroy/tasks/main.yml +++ b/roles/nsgv-destroy/tasks/main.yml @@ -1,11 +1,11 @@ --- -- include: kvm.yml +- import_tasks: kvm.yml when: target_server_type | match("kvm") tags: - nsgv - nsgv-destroy -- include: vcenter.yml +- import_tasks: vcenter.yml when: target_server_type | match("vcenter") tags: - nsgv diff --git a/roles/nsgv-predeploy/tasks/main.yml b/roles/nsgv-predeploy/tasks/main.yml index d3a80d7f8a..dff10f4991 100644 --- a/roles/nsgv-predeploy/tasks/main.yml +++ b/roles/nsgv-predeploy/tasks/main.yml @@ -8,14 +8,14 @@ vars: do_reachability_checks: False -- include: kvm.yml +- import_tasks: kvm.yml when: target_server_type | match("kvm") static: no tags: - nsgv - nsgv-predeploy -- include: aws.yml 
+- import_tasks: aws.yml when: target_server_type | match("aws") static: no tags: @@ -23,7 +23,7 @@ - nsgv-predeploy - aws -- include: vcenter.yml +- import_tasks: vcenter.yml when: target_server_type | match("vcenter") static: no tags: diff --git a/roles/nuage-unzip/tasks/main.yml b/roles/nuage-unzip/tasks/main.yml index be7f3cf738..4dce0251b3 100644 --- a/roles/nuage-unzip/tasks/main.yml +++ b/roles/nuage-unzip/tasks/main.yml @@ -52,21 +52,21 @@ # QCOW2 - block: - name: Find and unzip VSD QCOW2 Archive - include: unpack_actions.yml + import_tasks: unpack_actions.yml vars: unpack_pattern: "Nuage-VSD*QCOW*" unpack_target_folder: "vsd/qcow2" unpack_register_var: "rc_vsd_qcow2_archive_files" - name: Find and unzip VSD OVA Archive - include: unpack_actions.yml + import_tasks: unpack_actions.yml vars: unpack_pattern: "Nuage-VSD*OVA*" unpack_target_folder: "vsd/ova" unpack_register_var: "rc_vsd_ova_archive_files" - name: Find and unzip VSD migration Archive - include: unpack_actions.yml + import_tasks: unpack_actions.yml vars: unpack_pattern: "Nuage-VSD-migration*ISO*" unpack_target_folder: "vsd/migration" @@ -80,7 +80,7 @@ - block: - name: Find and unzip VSTAT Stats Archive - include: unpack_actions.yml + import_tasks: unpack_actions.yml vars: unpack_pattern: "Nuage-elastic-[0-9].*" unpack_pattern_regexp: True @@ -88,7 +88,7 @@ unpack_register_var: "rc_vstat_archive_files" - name: Find and unzip VSTAT Stats upgrade Archive - include: unpack_actions.yml + import_tasks: unpack_actions.yml vars: unpack_pattern: "Nuage-elastic-upgrade-[0-9].*" unpack_pattern_regexp: True @@ -96,7 +96,7 @@ unpack_register_var: "rc_vstat_upgrade_archive_files" - name: Find and unzip VSTAT Stats backup Archive - include: unpack_actions.yml + import_tasks: unpack_actions.yml vars: unpack_pattern: "Nuage-elastic-backup-[0-9].*" unpack_pattern_regexp: True @@ -111,7 +111,7 @@ - block: - name: Find and unzip VSC Archive - include: unpack_actions.yml + import_tasks: unpack_actions.yml vars: 
unpack_pattern: "Nuage-VSC*" unpack_target_folder: "vsc" @@ -125,7 +125,7 @@ - block: - name: Find and unzip hyperv VRS Archive - include: unpack_actions.yml + import_tasks: unpack_actions.yml vars: unpack_pattern: "Nuage-VRS*-hyperV*" unpack_target_folder: "vrs/hyperv" @@ -133,35 +133,35 @@ - name: Find and unzip vmware VRS Archive - include: unpack_actions.yml + import_tasks: unpack_actions.yml vars: unpack_pattern: "Nuage-VRS*-vmware*" unpack_target_folder: "vrs/vmware" unpack_register_var: "rc_vrs_vmware_archive_files" - name: Find and unzip EL7 VRS Archive - include: unpack_actions.yml + import_tasks: unpack_actions.yml vars: unpack_pattern: "Nuage-VRS*-el7*" unpack_target_folder: "vrs/el7" unpack_register_var: "rc_vrs_el7_archive_files" - name: Find and unzip ubuntu-14.04 VRS Archive - include: unpack_actions.yml + import_tasks: unpack_actions.yml vars: unpack_pattern: "Nuage-VRS*-ubuntu.14.04*" unpack_target_folder: "vrs/u14_04" unpack_register_var: "rc_vrs_u14_04_archive_files" - name: Find and unzip ubuntu-16.04 VRS Archive - include: unpack_actions.yml + import_tasks: unpack_actions.yml vars: unpack_pattern: "Nuage-VRS*-ubuntu.16.04*" unpack_target_folder: "vrs/u16_04" unpack_register_var: "rc_vrs_u16_04_archive_files" - name: Find and unzip VMWare VRS Archive - include: unpack_actions.yml + import_tasks: unpack_actions.yml vars: unpack_pattern: "Nuage-VRS*-vmware*" unpack_target_folder: "vrs/vmware" @@ -176,7 +176,7 @@ ################# - block: - name: Find and unzip selinux package - include: unpack_actions.yml + import_tasks: unpack_actions.yml vars: unpack_pattern: "Nuage-selinux-*" unpack_target_folder: "selinux" @@ -192,7 +192,7 @@ - block: - name: Find and unzip Libnetwork Archive - include: unpack_actions.yml + import_tasks: unpack_actions.yml vars: unpack_pattern: "Nuage-libnetwork*" unpack_target_folder: "libnetwork" @@ -213,7 +213,7 @@ # for releases before 4.0.R9 when all VNS packages were archived to one file - name: Find and unzip VNS 
Archive for releases before 4.0.R9 - include: unpack_actions.yml + import_tasks: unpack_actions.yml vars: unpack_pattern: "Nuage-VNS-[0-9].*" unpack_pattern_regexp: True @@ -221,7 +221,7 @@ unpack_register_var: "rc_vns_before_4_0_R9" - name: Find and unzip VNS NSG Archive - include: unpack_actions.yml + import_tasks: unpack_actions.yml vars: unpack_zipped_files_dir: "{{ nuage_unzipped_files_dir }}/vns/" unpack_pattern: "Nuage-VNS-NSG-*" @@ -229,7 +229,7 @@ unpack_register_var: "rc_vns_before_4_0_R9_nsg_archive" - name: Find and unzip VNS Utils Archive - include: unpack_actions.yml + import_tasks: unpack_actions.yml vars: unpack_zipped_files_dir: "{{ nuage_unzipped_files_dir }}/vns/" unpack_pattern: "Nuage-VNS-Utils-*" @@ -239,14 +239,14 @@ # for releases starting from 4.0.R9 when VNS packages were archived to 2 files # + unpack AWS archive - name: Find and unzip VNS NSG Archive - include: unpack_actions.yml + import_tasks: unpack_actions.yml vars: unpack_pattern: "Nuage-VNS-NSG*" unpack_target_folder: "vns/nsg" unpack_register_var: "rc_vns_nsg" - name: Find and unzip VNS AWS NSG Archive - include: unpack_actions.yml + import_tasks: unpack_actions.yml vars: unpack_zipped_files_dir: "{{ nuage_unzipped_files_dir }}/vns/nsg/" unpack_pattern: "Nuage-NSG-*AWS*" @@ -254,7 +254,7 @@ unpack_register_var: "rc_vns_aws_archives" - name: Find and unzip VNS Utils Archive - include: unpack_actions.yml + import_tasks: unpack_actions.yml vars: unpack_pattern: "Nuage-VNS-Utils*" unpack_target_folder: "vns/utils" @@ -269,7 +269,7 @@ - block: - name: Find and unzip the Nuage OpenStack Plugin Archive - include: unpack_actions.yml + import_tasks: unpack_actions.yml vars: unpack_pattern: "Nuage-openstack*" unpack_target_folder: "nuage_os" @@ -282,7 +282,7 @@ ########################## - block: - name: Find and unzip the Nokia VSR Archive - include: unpack_actions_vsr.yml + import_tasks: unpack_actions_vsr.yml vars: unpack_pattern: "Nokia-VSR-VM*" unpack_target_folder: "vsr" diff --git 
a/roles/os-compute-deploy/tasks/main.yml b/roles/os-compute-deploy/tasks/main.yml deleted file mode 100644 index b1cb6d452b..0000000000 --- a/roles/os-compute-deploy/tasks/main.yml +++ /dev/null @@ -1,231 +0,0 @@ ---- -- name: Get OSC facts from "{{ osc_server_name }}" - os_server_facts: - auth: - "{{ os_auth }}" - server: "{{ osc_server_name }}" - register: osc_server - delegate_to: 127.0.0.1 - -- name: Set OSC ip - set_fact: - osc_ip: "{{ osc_server['ansible_facts']['openstack_servers'][0]['private_v4'] }}" - -- name: Get os-compute details from OS facts - os_server_facts: - auth: - "{{ os_auth }}" - server: "{{ inventory_hostname }}" - register: compute_server - delegate_to: 127.0.0.1 - -- name: Save os-compute mgmt ip - set_fact: - compute_mgmt_ip: "{{ compute_server['ansible_facts']['openstack_servers'][0]['networks'][compute_mgmt_network][0] }}" - -- name: Update /etc/hosts file on ansible host - lineinfile: - dest: /etc/hosts - line: "{{ compute_mgmt_ip }} {{ inventory_hostname }}" - delegate_to: 127.0.0.1 - -- block: - - name: Get infra server details from OS server facts - os_server_facts: - auth: - "{{ os_auth }}" - server: "{{ infra_server_name }}" - register: infra_server - delegate_to: 127.0.0.1 - - - name: Set DNS/NTP server ip - set_fact: - infra_ip: "{{ infra_server['ansible_facts']['openstack_servers'][0]['private_v4'] }}" - - - name: Update DNS entries - lineinfile: - line: "{{ compute_mgmt_ip }} {{ inventory_hostname }}" - dest: "/etc/hosts" - delegate_to: "{{ infra_ip }}" - remote_user: "root" - - - name: Restart DNS service - shell: service dnsmasq restart - delegate_to: "{{ infra_ip }}" - remote_user: "root" - when: infra_server_name is defined - -- name: Clean known_hosts of OS_computes - command: ssh-keygen -R "{{ compute_mgmt_ip }}" - delegate_to: localhost - ignore_errors: True - -- name: Wait for os_compute ssh to be ready - include_role: - name: common - tasks_from: wait-for-ssh - vars: - ssh_host: "{{ compute_mgmt_ip }}" - 
ssh_delay_seconds: 50 - -- name: Query {{ target_server }} facts - action: setup - delegate_to: "{{ compute_mgmt_ip }}" - remote_user: "root" - -- name: Update /etc/hosts file on os_compute - lineinfile: - dest: /etc/hosts - line: "{{ compute_mgmt_ip }} {{ inventory_hostname }}" - remote_user: "root" - -- name: Update hostname - template: src=network.j2 backup=no dest=/etc/sysconfig/network - remote_user: "root" - -- name: Add nameserver - command: echo "{{ infra_ip }}" >> /etc/resolv.conf - remote_user: "root" - when: infra_server_name is defined - -- name: Disable firewall - service: - name: firewalld - enabled: no - when: - - ansible_os_family == 'RedHat' - ignore_errors: yes - remote_user: "root" - -- name: Stop firewall - service: - name: firewalld - state: stopped - when: - - ansible_os_family == 'RedHat' - ignore_errors: yes - remote_user: "root" - -- name: Disable NetworkManager - service: - name: NetworkManager - enabled: no - ignore_errors: yes - remote_user: "root" - -- name: Stop NetworkManager - service: - name: NetworkManager - state: stopped - ignore_errors: yes - remote_user: "root" - -- name: Copy eth0 config to os_compute - template: src=ifcfg-eth0.j2 backup=no dest=/etc/sysconfig/network-scripts/ifcfg-eth0 - remote_user: "root" - -- name: Copy eht1 config to os_compute - template: src=ifcfg-eth1.j2 backup=no dest=/etc/sysconfig/network-scripts/ifcfg-eth1 - remote_user: "root" - -- name: Enable network - service: - name: network - enabled: yes - remote_user: "root" - -- name: Start netowrk - service: - name: network - state: restarted - ignore_errors: yes - remote_user: "root" - -- name: Install NTP if not present - yum: - name: ntp - state: latest - remote_user: "root" - -- name: Configure ntpd and ntpdate and local time zone - include_role: - name: common - tasks_from: linux-ntp-sync - -- name: Install EPEL repos only on Centos - yum: - name: epel-release - state: present - remote_user: "root" - when: ansible_distribution == 'CentOS' - -- name: 
Pause - pause: - seconds: 5 - -- name: Load correspoing software repos for OpenStack Centos7 - yum: - name: "{{ os_centos }}{{ nuage_os_release }}" - state: present - remote_user: "root" - when: - - ansible_distribution == 'CentOS' - - ansible_distribution_major_version == '7' - -- name: Copy Redhat repo file for RedHat images - template: - src: "{{ role_path }}/templates/redhat.repo.j2" - dest: "/etc/yum.repos.d/rhel.repo" - remote_user: "root" - when: ansible_distribution == 'RedHat' - -- name: Execute a yum update - yum: - name: '*' - state: latest - remote_user: "root" - -- name: Generate SSH keys on OSC - shell: ssh-keygen -b 2048 -t rsa -f /root/.ssh/id_rsa -q -N "" - args: - creates: /root/.ssh/id_rsa - remote_user: "root" - delegate_to: "{{ osc_ip }}" - -- name: Get generated SSH keys - shell: cat ~/.ssh/id_rsa.pub - register: ssh_key - remote_user: "root" - delegate_to: "{{ osc_ip }}" - -- name: Copy SSH key - shell: "echo {{ssh_key.stdout}} >> /root/.ssh/authorized_keys" - remote_user: "root" - -- name: Find the answers file on the OSC - find: - paths: "/root" - patterns: "packstack*.txt" - register: answer_file - delegate_to: "{{ osc_ip }}" - -- name: Update the answer file with compute node ip - lineinfile: - dest: "{{ answer_file.files[0].path }}" - regexp: CONFIG_COMPUTE_HOSTS= - line: CONFIG_COMPUTE_HOSTS={{ compute_mgmt_ip }} - remote_user: "root" - delegate_to: "{{ osc_ip }}" - -- name: Update the answer file with compute node ip - lineinfile: - dest: "{{ answer_file.files[0].path }}" - regexp: EXCLUDE_SERVERS= - line: EXCLUDE_SERVERS={{ osc_ip }} - remote_user: "root" - delegate_to: "{{ osc_ip }}" - -- name: Add compute node to OSC - command: "packstack --answer-file={{ answer_file.files[0].path }}" - remote_user: "root" - delegate_to: "{{ osc_ip }}" diff --git a/roles/os-compute-deploy/templates/ifcfg-eth0.j2 b/roles/os-compute-deploy/templates/ifcfg-eth0.j2 deleted file mode 100644 index 0807ff869f..0000000000 --- 
a/roles/os-compute-deploy/templates/ifcfg-eth0.j2 +++ /dev/null @@ -1,15 +0,0 @@ -DEVICE="eth0" -IPV6INIT="no" -NM_CONTROLLED="no" -ONBOOT="yes" -TYPE="Ethernet" -BOOTPROTO="dhcp" -{% if infra_ip is defined %} -DNS1="{{ infra_ip }}" -{% endif %} -{% if infra_ip is not defined and dns_server_list[0] is defined %} -DNS1="{{dns_server_list[0]}}" -{% endif %} -{% if infra_ip is not defined and dns_server_list[1] is defined %} -DNS2="{{dns_server_list[1]}}" -{% endif %} diff --git a/roles/os-compute-deploy/templates/network.j2 b/roles/os-compute-deploy/templates/network.j2 deleted file mode 100644 index d764e19f83..0000000000 --- a/roles/os-compute-deploy/templates/network.j2 +++ /dev/null @@ -1,2 +0,0 @@ -NETWORKING=yes -HOSTNAME={{inventory_hostname}} diff --git a/roles/os-compute-deploy/templates/redhat.repo.j2 b/roles/os-compute-deploy/templates/redhat.repo.j2 deleted file mode 100644 index 56bfc0d4e0..0000000000 --- a/roles/os-compute-deploy/templates/redhat.repo.j2 +++ /dev/null @@ -1,28 +0,0 @@ -[rhel-7-server-rpms] -name=Red Hat Enterprise Linux 7 Server -baseurl=http://mirrors.mv.nuagenetworks.net/rhel-7.2/rhel-7-server-rpms -enabled=1 -gpgcheck=1 -gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release - -[rhel-7-server-extra-rpms] -name=Red Hat Enterprise Linux 7 Server - Extras -baseurl=http://mirrors.mv.nuagenetworks.net/rhel-7.2/rhel-7-server-extras-rpms -enabled=1 -gpgcheck=1 -gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release - -[rhel-7-server-rh-common-rpms] -name=Red Hat Enterprise Linux - RH Common -baseurl=http://mirrors.mv.nuagenetworks.net/rhel-7-server-rh-common-rpms -enabled=1 -gpgcheck=1 -gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release - -[rhel-7-server-openstack-{{ os_release_num }}-rpms] -name=Red Hat OpenStack Platform {{ os_release_num }} for RHEL -baseurl=http://mirrors.mv.nuagenetworks.net/rhel-7-server-openstack-{{ os_release_num }}-rpms -enabled=1 -gpgcheck=1 -gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release - 
diff --git a/roles/os-compute-deploy/vars/main.yml b/roles/os-compute-deploy/vars/main.yml deleted file mode 100644 index acb7b9314d..0000000000 --- a/roles/os-compute-deploy/vars/main.yml +++ /dev/null @@ -1,3 +0,0 @@ -os_rhel: "https://rdoproject.org/repos/openstack-" -os_liberty_rpm: "rdo-release-liberty-5.noarch.rpm" -os_centos: "centos-release-openstack-" diff --git a/roles/os-compute-destroy/tasks/heat.yml b/roles/os-compute-destroy/tasks/heat.yml deleted file mode 100644 index 37f83b2054..0000000000 --- a/roles/os-compute-destroy/tasks/heat.yml +++ /dev/null @@ -1,8 +0,0 @@ ---- -- name: Destroy COMPUTE heat template - os_stack: - name: "{{ inventory_hostname }}" - auth: - "{{ os_auth }}" - state: absent - delegate_to: 127.0.0.1 diff --git a/roles/os-compute-destroy/tasks/main.yml b/roles/os-compute-destroy/tasks/main.yml deleted file mode 100644 index 7869f671de..0000000000 --- a/roles/os-compute-destroy/tasks/main.yml +++ /dev/null @@ -1,13 +0,0 @@ ---- -- include: kvm.yml - when: target_server_type | match("kvm") - tags: - - compute - - compute-destroy - -- include: heat.yml - when: target_server_type | match("heat") - tags: - - compute - - heat - - compute-destroy diff --git a/roles/os-compute-destroy/tasks/os_compute_destroy.yml b/roles/os-compute-destroy/tasks/os_compute_destroy.yml deleted file mode 100644 index 2a6b257cdf..0000000000 --- a/roles/os-compute-destroy/tasks/os_compute_destroy.yml +++ /dev/null @@ -1,24 +0,0 @@ ---- -- name: Destroy OpenStack Compute VM - virt: - name: "{{ inventory_hostname }}" - state: destroyed - uri: qemu:///system - delegate_to: "{{ target_server }}" - remote_user: "{{ target_server_username }}" - -- name: Undefine OpenStack Compute VM - virt: - name: "{{ inventory_hostname }}" - command: undefine - uri: qemu:///system - delegate_to: "{{ target_server }}" - remote_user: "{{ target_server_username }}" - -- name: Destroy the images directory - file: - path: "{{ images_path }}/{{ inventory_hostname }}" - state: absent - 
delegate_to: "{{ target_server }}" - remote_user: "{{ target_server_username }}" - diff --git a/roles/os-compute-postdeploy/tasks/main.yml b/roles/os-compute-postdeploy/tasks/main.yml deleted file mode 100644 index aa2844fa0c..0000000000 --- a/roles/os-compute-postdeploy/tasks/main.yml +++ /dev/null @@ -1,90 +0,0 @@ ---- -- block: - - name: Get vsc primary controller ip - os_server_facts: - auth: - "{{ os_auth }}" - server: "{{ vsc_primary_server }}" - register: vsc_primary - delegate_to: 127.0.0.1 - - - name: Set primary_controller ip - set_fact: - primary_controller: "{{ vsc_primary['ansible_facts']['openstack_servers'][0]['networks'][compute_data_network][0] }}" - when: vsc_primary_server is defined - -- block: - - name: Get vsc secondary controller ip - os_server_facts: - auth: - "{{ os_auth }}" - server: "{{ vsc_secondary_server }}" - register: vsc_secondary - delegate_to: 127.0.0.1 - - - name: Set secondary_controller ip - set_fact: - secondary_controller: "{{ vsc_secondary['ansible_facts']['openstack_servers'][0]['networks'][compute_data_network][0] }}" - when: vsc_secondary_server is defined - -- block: - - name: Get Compute IP from OS facts - os_server_facts: - auth: - "{{ os_auth }}" - server: "{{ inventory_hostname }}*" - register: compute_server - delegate_to: 127.0.0.1 - - - name: Save Compute ip - set_fact: - compute_mgmt_ip: "{{ compute_server['ansible_facts']['openstack_servers'][0]['networks'][compute_mgmt_network][0] }}" - - - name: Remove OVS packages installed - command: "yum -y erase {{ item }}" - with_items: - - openvswitch.x86_64 - - openstack-neutron* - - python-openvswitch.noarch - remote_user: "root" - - - name: Set primary_controller ip - set_fact: - primary_controller: "{{ vsc_primary_ip }}" - when: vsc_primary_ip is defined - - - name: Set secondary_controller ip - set_fact: - secondary_controller: "{{ vsc_secondary_ip }}" - when: vsc_secondary_ip is defined - - - name: Install prerequisites for nuage vrs - include_role: - name: 
vrs-predeploy - - - name: Deploy nuage vrs to os_compute hosts - include_role: - name: vrs-deploy - vars: - active_controller_addr: "{{ primary_controller }}" - standby_controller_addr: "{{ secondary_controller | default(primary_controller) }}" - - - name: Configure VNC proxy client ip address - lineinfile: - dest: /etc/nova/nova.conf - regexp: vncserver_proxyclient_address= - line: vncserver_proxyclient_address={{compute_mgmt_ip}} - remote_user: "root" - - - name: Configure cpu_mode to none - lineinfile: - dest: /etc/nova/nova.conf - regexp: cpu_mode=host-model - line: cpu_mode=none - remote_user: "root" - - - name: Edit nova config and restart services - import_role: - name: vrs-oscompute-integration - - when: inventory_hostname in groups['os_computes'] diff --git a/roles/os-compute-postdeploy/vars/main.yml b/roles/os-compute-postdeploy/vars/main.yml deleted file mode 100644 index f716050feb..0000000000 --- a/roles/os-compute-postdeploy/vars/main.yml +++ /dev/null @@ -1 +0,0 @@ -temp_dir: /tmp/vrs_packages diff --git a/roles/os-compute-predeploy/files/os_compute.yml b/roles/os-compute-predeploy/files/os_compute.yml deleted file mode 100644 index 6e8efca853..0000000000 --- a/roles/os-compute-predeploy/files/os_compute.yml +++ /dev/null @@ -1,48 +0,0 @@ -heat_template_version: '2014-10-16' -parameters: - vm_name: - type: string - ssh_key: - type: string - compute_image: - type: string - compute_mgmt_network: - type: string - compute_mgmt_subnet: - type: string - compute_data_network: - type: string - compute_data_subnet: - type: string - compute_flavor: - type: string - mgmt_ip: - type: string - data_ip: - type: string - -resources: - mycompute: - type: OS::Nova::Server - properties: - name: {get_param: vm_name} - flavor: {get_param: compute_flavor} - image: {get_param: compute_image} - networks: - - network: {get_param: compute_mgmt_network} - - network: {get_param: compute_data_network} - user_data_format: RAW - user_data: - str_replace: - template: | - 
#!/bin/bash - echo usr >> /root/.ssh/authorized_keys - params: - usr: {get_param: ssh_key} -outputs: - compute_mgmt_ip: - description: mgmt ip assigned to os_compute - value: { get_attr: [mycompute, networks, {get_param: compute_mgmt_network}, 0]} - compute_data_ip: - description: data ip assigned to os_compute - value: { get_attr: [mycompute, networks, {get_param: compute_data_network}, 0]} diff --git a/roles/os-compute-predeploy/files/os_compute_fixed.yml b/roles/os-compute-predeploy/files/os_compute_fixed.yml deleted file mode 100644 index 78eeb9f3d1..0000000000 --- a/roles/os-compute-predeploy/files/os_compute_fixed.yml +++ /dev/null @@ -1,58 +0,0 @@ -heat_template_version: '2014-10-16' -parameters: - vm_name: - type: string - ssh_key: - type: string - compute_image: - type: string - compute_mgmt_network: - type: string - compute_mgmt_subnet: - type: string - compute_data_network: - type: string - compute_data_subnet: - type: string - compute_flavor: - type: string - mgmt_ip: - type: string - data_ip: - type: string -resources: - mgmt_port: - type: OS::Neutron::Port - properties: - network_id: {get_param: compute_mgmt_network} - fixed_ips: [{"subnet": {get_param: compute_mgmt_subnet}, "ip_address": {get_param: mgmt_ip}}] - data_port: - type: OS::Neutron::Port - properties: - network_id: {get_param: compute_data_network} - fixed_ips: [{"subnet": {get_param: compute_data_subnet}, "ip_address": {get_param: data_ip}}] - mycompute: - type: OS::Nova::Server - properties: - name: {get_param: vm_name} - flavor: {get_param: compute_flavor} - image: {get_param: compute_image} - networks: - - port: {get_resource: mgmt_port} - - port: {get_resource: data_port} - #key_name: {get_resource: vsd_user_key} - user_data_format: RAW - user_data: - str_replace: - template: | - #!/bin/bash - echo usr >> /root/.ssh/authorized_keys - params: - usr: {get_param: ssh_key} -outputs: - compute_mgmt_ip: - description: mgmt ip assigned to os_compute - value: { get_attr: [mycompute, networks, 
{get_param: compute_mgmt_network}, 0]} - compute_data_ip: - description: control ip assigned to os_compute - value: { get_attr: [mycompute, networks, {get_param: compute_data_network}, 0]} diff --git a/roles/os-compute-predeploy/tasks/heat.yml b/roles/os-compute-predeploy/tasks/heat.yml deleted file mode 100644 index 6b5d85b9d0..0000000000 --- a/roles/os-compute-predeploy/tasks/heat.yml +++ /dev/null @@ -1,37 +0,0 @@ ---- -# TODO: -# Check for existing stack or vms -- name: Get the public key for the current user - local_action: command cat ~/.ssh/id_rsa.pub - register: current_user_ssh_key - -- name: Get heat stack for fixed ip deployments - set_fact: - compute_heat_template: "{{ role_path }}/files/os_compute_fixed.yml" - when: dhcp == False - -- name: Get heat stack for dhcp based deployments - set_fact: - compute_heat_template: "{{ role_path }}/files/os_compute.yml" - when: dhcp == True - -- name: Create Compute node - register: create_stack - os_stack: - name: "{{ stack_name | default(inventory_hostname) }}" - template: "{{ compute_heat_template }}" - auth: - "{{ os_auth }}" - parameters: - vm_name: "{{inventory_hostname}}" - compute_image: "{{ compute_image }}" - compute_flavor: "{{ compute_flavor }}" - compute_mgmt_network: "{{ compute_mgmt_network }}" - compute_mgmt_subnet: "{{ compute_mgmt_subnet | default('NONE') }}" - compute_data_network: "{{ compute_data_network }}" - compute_data_subnet: "{{ compute_data_subnet | default('NONE') }}" - mgmt_ip: "{{ mgmt_ip | default('NONE') }}" - data_ip: "{{ data_ip | default('NONE') }}" - ssh_key: "{{ current_user_ssh_key.stdout }}" - delegate_to: 127.0.0.1 -- debug: var=create_stack['stack']['outputs'][0]['output_value'] diff --git a/roles/os-compute-predeploy/tasks/main.yml b/roles/os-compute-predeploy/tasks/main.yml deleted file mode 100644 index 5b3859e5cd..0000000000 --- a/roles/os-compute-predeploy/tasks/main.yml +++ /dev/null @@ -1,14 +0,0 @@ ---- -- include: kvm.yml - when: target_server_type | match("kvm") - 
tags: - - os-compute - - os-compute-predeploy - - nuage-os - -- include: heat.yml - when: target_server_type | match("heat") - tags: - - os-compute - - os-compute-predeploy - - nuage-os diff --git a/roles/osc-deploy/files/del_compute.txt b/roles/osc-deploy/files/del_compute.txt deleted file mode 100644 index 51984a51ce..0000000000 --- a/roles/osc-deploy/files/del_compute.txt +++ /dev/null @@ -1,5 +0,0 @@ -use nova; -DELETE FROM compute_nodes; -SELECT id INTO @sid FROM services where topic='compute' LIMIT 1; -DELETE FROM services WHERE id= @sid; -exit diff --git a/roles/osc-deploy/files/del_network.txt b/roles/osc-deploy/files/del_network.txt deleted file mode 100644 index 46719e9bfd..0000000000 --- a/roles/osc-deploy/files/del_network.txt +++ /dev/null @@ -1,6 +0,0 @@ -use neutron; -DELETE FROM routers; -DELETE FROM ports; -DELETE FROM subnets; -DELETE FROM networks; -exit diff --git a/roles/osc-deploy/tasks/main.yml b/roles/osc-deploy/tasks/main.yml deleted file mode 100644 index bf22d092f9..0000000000 --- a/roles/osc-deploy/tasks/main.yml +++ /dev/null @@ -1,228 +0,0 @@ ---- -- name: Get OSC IP from {{ inventory_hostname }} - os_server_facts: - auth: - "{{ os_auth }}" - server: "{{ inventory_hostname }}" - register: osc_server - delegate_to: 127.0.0.1 - -- name: Set OSC ip - set_fact: - osc_mgmt_ip: "{{ osc_server['ansible_facts']['openstack_servers'][0]['networks'][osc_network][0] }}" - -- block: - - name: Get infra server details from OS server facts - os_server_facts: - auth: - "{{ os_auth }}" - server: "{{ infra_server_name }}" - register: infra_server - delegate_to: 127.0.0.1 - - - name: Set DNS/NTP server ip - set_fact: - infra_ip: "{{ infra_server['ansible_facts']['openstack_servers'][0]['private_v4'] }}" - - - name: Update DNS entries - lineinfile: - line: "{{ osc_mgmt_ip }} {{ inventory_hostname }}" - dest: "/etc/hosts" - delegate_to: "{{ infra_ip }}" - remote_user: "root" - - - name: Restart DNS service - shell: service dnsmasq restart - delegate_to: 
"{{ infra_ip }}" - remote_user: "root" - when: infra_server_name is defined - -- name: Update /etc/hosts file on ansible host - lineinfile: - dest: /etc/hosts - line: "{{ osc_mgmt_ip }} {{ inventory_hostname }}" - delegate_to: 127.0.0.1 - -- name: Clean known_hosts of OSC's - command: ssh-keygen -R "{{ osc_mgmt_ip }}" - delegate_to: localhost - ignore_errors: True - -- name: Wait for OSC ssh to be ready - include_role: - name: common - tasks_from: wait-for-ssh - vars: - ssh_host: "{{ osc_mgmt_ip }}" - -- name: Pause for ssh port to be active - pause: - seconds: 10 - -- name: Query {{ target_server }} facts - action: setup - remote_user: "root" - delegate_to: "{{ osc_mgmt_ip }}" - -- name: Update /etc/hosts file on osc - lineinfile: - dest: /etc/hosts - line: "{{ osc_mgmt_ip }} {{ inventory_hostname }}" - remote_user: "root" - -- name: Update hostname - template: src=network.j2 backup=no dest=/etc/sysconfig/network - -- name: Add nameserver - command: echo "{{ infra_ip }}" >> /etc/resolv.conf - remote_user: "root" - when: infra_server_name is defined - - -- name: Disable firewall - service: - name: firewalld - enabled: no - remote_user: "root" - when: - - ansible_os_family == 'RedHat' - ignore_errors: yes - -- name: Stop firewall - service: - name: firewalld - state: stopped - remote_user: "root" - when: - - ansible_os_family == 'RedHat' - ignore_errors: yes - -- name: Disable NetworkManager - service: - name: NetworkManager - enabled: no - remote_user: "root" - ignore_errors: yes - -- name: Stop NetworkManager - service: - name: NetworkManager - state: stopped - remote_user: "root" - ignore_errors: yes - -- name: Copy eth0 config to osc - template: src=ifcfg-eth0.j2 backup=no dest=/etc/sysconfig/network-scripts/ifcfg-eth0 - remote_user: "root" - -- name: Delete eht1 config on osc - file: - path: "/etc/sysconfig/network-scripts/ifcfg-eth1" - state: absent - remote_user: "root" - -- name: Enable network - service: - name: network - enabled: yes - remote_user: "root" 
- -- name: Start network - service: - name: network - state: started - remote_user: "root" - ignore_errors: yes - -- name: Pause - pause: - seconds: 5 - -- name: Install NTP if not present - yum: - name: ntp - state: latest - remote_user: "root" - -- name: Configure ntpd and ntpdate and local time zone - include_role: - name: common - tasks_from: linux-ntp-sync - -- name: Load correspoing software repos for OpenStack Centos7 - yum: - name: "{{ os_centos }}{{ nuage_os_release }}" - state: present - remote_user: "root" - when: - - ansible_distribution == 'CentOS' - - ansible_distribution_major_version == '7' - -- name: Copy Redhat repo file for RedHat images - template: - src={{ role_path }}/templates/redhat.repo.j2 - dest=/etc/yum.repos.d/rhel.repo - remote_user: "root" - when: - - ansible_distribution == 'RedHat' - -- name: Execute a yum update - yum: - name: '*' - state: latest - remote_user: "root" - -- name: Install packstack packages - yum: - name: openstack-packstack - state: present - remote_user: "root" - -- name: Install OpenStack packstack - command: "{{ install_packstack }}" - remote_user: "root" - -- name: Copy Mysql compute query file to controller - copy: - src={{ role_path }}/files/del_compute.txt - dest=/root/ - remote_user: "root" - -- name: Delete compute node from controller - shell: "mysql -u root < /root/del_compute.txt" - remote_user: "root" - -- name: Disable compute service on controller - service: - name: openstack-nova-compute - enabled: no - remote_user: "root" - -- name: Stop the compute service on controller - service: - name: openstack-nova-compute - state: stopped - remote_user: "root" - -- name: Copy Mysql netowrk query file to controller - copy: - src={{ role_path }}/files/del_network.txt - dest=/root/ - remote_user: "root" - -- name: Delete network,subnets,routers,ports from Neutron - shell: "mysql -u root < /root/del_network.txt" - remote_user: "root" - -- name: Add * to Server Alias list - lineinfile: - dest: 
/etc/httpd/conf.d/15-horizon_vhost.conf - insertafter: '## Server aliases' - line: ' ServerAlias *' - remote_user: "root" - -- name: Restart httpd - service: - name: httpd - state: restarted - remote_user: "root" diff --git a/roles/osc-deploy/templates/ifcfg-eth0.j2 b/roles/osc-deploy/templates/ifcfg-eth0.j2 deleted file mode 100644 index 0807ff869f..0000000000 --- a/roles/osc-deploy/templates/ifcfg-eth0.j2 +++ /dev/null @@ -1,15 +0,0 @@ -DEVICE="eth0" -IPV6INIT="no" -NM_CONTROLLED="no" -ONBOOT="yes" -TYPE="Ethernet" -BOOTPROTO="dhcp" -{% if infra_ip is defined %} -DNS1="{{ infra_ip }}" -{% endif %} -{% if infra_ip is not defined and dns_server_list[0] is defined %} -DNS1="{{dns_server_list[0]}}" -{% endif %} -{% if infra_ip is not defined and dns_server_list[1] is defined %} -DNS2="{{dns_server_list[1]}}" -{% endif %} diff --git a/roles/osc-deploy/templates/network.j2 b/roles/osc-deploy/templates/network.j2 deleted file mode 100644 index d764e19f83..0000000000 --- a/roles/osc-deploy/templates/network.j2 +++ /dev/null @@ -1,2 +0,0 @@ -NETWORKING=yes -HOSTNAME={{inventory_hostname}} diff --git a/roles/osc-deploy/templates/redhat.repo.j2 b/roles/osc-deploy/templates/redhat.repo.j2 deleted file mode 100644 index 56bfc0d4e0..0000000000 --- a/roles/osc-deploy/templates/redhat.repo.j2 +++ /dev/null @@ -1,28 +0,0 @@ -[rhel-7-server-rpms] -name=Red Hat Enterprise Linux 7 Server -baseurl=http://mirrors.mv.nuagenetworks.net/rhel-7.2/rhel-7-server-rpms -enabled=1 -gpgcheck=1 -gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release - -[rhel-7-server-extra-rpms] -name=Red Hat Enterprise Linux 7 Server - Extras -baseurl=http://mirrors.mv.nuagenetworks.net/rhel-7.2/rhel-7-server-extras-rpms -enabled=1 -gpgcheck=1 -gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release - -[rhel-7-server-rh-common-rpms] -name=Red Hat Enterprise Linux - RH Common -baseurl=http://mirrors.mv.nuagenetworks.net/rhel-7-server-rh-common-rpms -enabled=1 -gpgcheck=1 
-gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release - -[rhel-7-server-openstack-{{ os_release_num }}-rpms] -name=Red Hat OpenStack Platform {{ os_release_num }} for RHEL -baseurl=http://mirrors.mv.nuagenetworks.net/rhel-7-server-openstack-{{ os_release_num }}-rpms -enabled=1 -gpgcheck=1 -gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release - diff --git a/roles/osc-deploy/vars/main.yml b/roles/osc-deploy/vars/main.yml deleted file mode 100644 index 4aca025a2e..0000000000 --- a/roles/osc-deploy/vars/main.yml +++ /dev/null @@ -1,4 +0,0 @@ -os_rhel: "https://rdoproject.org/repos/openstack-" -os_liberty_rpm: "rdo-release-liberty-5.noarch.rpm" -os_centos: "centos-release-openstack-" -install_packstack: "packstack --allinone --os-heat-install=y --os-ceilometer-install=n --nagios-install=n --os-sahara-install=n --os-swift-install=n --os-cinder-install=n" diff --git a/roles/osc-destroy/tasks/heat.yml b/roles/osc-destroy/tasks/heat.yml deleted file mode 100644 index 263cc80237..0000000000 --- a/roles/osc-destroy/tasks/heat.yml +++ /dev/null @@ -1,8 +0,0 @@ ---- -- name: Destroy OSC heat template - os_stack: - name: "{{ inventory_hostname }}" - auth: - "{{ os_auth }}" - state: absent - delegate_to: 127.0.0.1 diff --git a/roles/osc-destroy/tasks/kvm.yml b/roles/osc-destroy/tasks/kvm.yml deleted file mode 100644 index 92cec65dc5..0000000000 --- a/roles/osc-destroy/tasks/kvm.yml +++ /dev/null @@ -1,16 +0,0 @@ ---- -- name: List the Virtual Machines on {{ target_server }} - virt: command=list_vms - register: virt_vms - delegate_to: "{{ target_server }}" - remote_user: "{{ target_server_username }}" - -- include: osc_destroy_helper.yml - when: inventory_hostname in virt_vms.list_vms - -- name: Destroy the images directory - file: path={{ images_path }}/{{ inventory_hostname }} - state=absent - delegate_to: "{{ target_server }}" - remote_user: "{{ target_server_username }}" - diff --git a/roles/osc-destroy/tasks/main.yml b/roles/osc-destroy/tasks/main.yml deleted file 
mode 100644 index 43b3e8669d..0000000000 --- a/roles/osc-destroy/tasks/main.yml +++ /dev/null @@ -1,13 +0,0 @@ ---- -- include: kvm.yml - when: target_server_type | match("kvm") - tags: - - osc - - osc-destroy - -- include: heat.yml - when: target_server_type | match("heat") - tags: - - osc - - heat - - osc-destroy diff --git a/roles/osc-destroy/tasks/osc_destroy_helper.yml b/roles/osc-destroy/tasks/osc_destroy_helper.yml deleted file mode 100644 index 1a89af23ec..0000000000 --- a/roles/osc-destroy/tasks/osc_destroy_helper.yml +++ /dev/null @@ -1,24 +0,0 @@ ---- -- name: Destroy OSC VM - virt: - name: "{{ inventory_hostname }}" - state: destroyed - uri: qemu:///system - delegate_to: "{{ target_server }}" - remote_user: "{{ target_server_username }}" - -- name: Undefine OSC VM - virt: - name: "{{ inventory_hostname }}" - command: undefine - uri: qemu:///system - delegate_to: "{{ target_server }}" - remote_user: "{{ target_server_username }}" - -- name: Destroy the images directory - file: - path: "{{ images_path }}/{{ inventory_hostname }}" - state: absent - delegate_to: "{{ target_server }}" - remote_user: "{{ target_server_username }}" - diff --git a/roles/osc-predeploy/files/osc.yml b/roles/osc-predeploy/files/osc.yml deleted file mode 100644 index c2f94ce7b9..0000000000 --- a/roles/osc-predeploy/files/osc.yml +++ /dev/null @@ -1,37 +0,0 @@ -heat_template_version: '2014-10-16' -parameters: - vm_name: - type: string - ssh_key: - type: string - osc_image: - type: string - osc_network: - type: string - osc_subnet: - type: string - osc_flavor: - type: string - mgmt_ip: - type: string -resources: - mycompute: - type: OS::Nova::Server - properties: - name: {get_param: vm_name} - flavor: {get_param: osc_flavor} - image: {get_param: osc_image} - networks: - - network: {get_param: osc_network} - user_data_format: RAW - user_data: - str_replace: - template: | - #!/bin/bash - echo usr >> /root/.ssh/authorized_keys - params: - usr: {get_param: ssh_key} -outputs: - server_ip: 
- description: mgmt ip assigned to osc server - value: { get_attr: [mycompute, networks, {get_param: osc_network}, 0]} diff --git a/roles/osc-predeploy/files/osc_fixed.yml b/roles/osc-predeploy/files/osc_fixed.yml deleted file mode 100644 index 49b14430c2..0000000000 --- a/roles/osc-predeploy/files/osc_fixed.yml +++ /dev/null @@ -1,43 +0,0 @@ -heat_template_version: '2014-10-16' -parameters: - vm_name: - type: string - ssh_key: - type: string - osc_image: - type: string - osc_network: - type: string - osc_subnet: - type: string - osc_flavor: - type: string - mgmt_ip: - type: string - -resources: - mgmt_port: - type: OS::Neutron::Port - properties: - network_id: {get_param: osc_network} - fixed_ips: [{"subnet": {get_param: osc_subnet}, "ip_address": {get_param: mgmt_ip}}] - mycompute: - type: OS::Nova::Server - properties: - name: {get_param: vm_name} - flavor: {get_param: osc_flavor} - image: {get_param: osc_image} - networks: - - port: {get_resource: mgmt_port} - user_data_format: RAW - user_data: - str_replace: - template: | - #!/bin/bash - echo usr >> /root/.ssh/authorized_keys - params: - usr: {get_param: ssh_key} -outputs: - server_ip: - description: mgmt ip assigned to osc server - value: { get_attr: [mycompute, networks, {get_param: osc_network}, 0]} diff --git a/roles/osc-predeploy/tasks/heat.yml b/roles/osc-predeploy/tasks/heat.yml deleted file mode 100644 index 1829c6373a..0000000000 --- a/roles/osc-predeploy/tasks/heat.yml +++ /dev/null @@ -1,44 +0,0 @@ ---- -# TODO: -# Check for existing stack or vms -- name: Get the public key for the current user - local_action: command cat ~/.ssh/id_rsa.pub - register: current_user_ssh_key - -- name: Get heat stack for fixed ip deployments - set_fact: - osc_heat_template: "{{ role_path }}/files/osc_fixed.yml" - when: dhcp == False - -- name: Get heat stack for dhcp based deployments - set_fact: - osc_heat_template: "{{ role_path }}/files/osc.yml" - when: dhcp == True - -- name: Create OSC node - register: 
create_stack - os_stack: - name: "{{ stack_name | default(inventory_hostname) }}" - template: "{{ osc_heat_template }}" - auth: - "{{ os_auth }}" - parameters: - vm_name: "{{ inventory_hostname }}" - osc_image: "{{ osc_image }}" - osc_flavor: "{{ osc_flavor }}" - osc_network: "{{ osc_network }}" - osc_subnet: "{{ osc_subnet | default('NONE') }}" - mgmt_ip: "{{ mgmt_ip | default('NONE') }}" - ssh_key: "{{ current_user_ssh_key.stdout }}" - delegate_to: 127.0.0.1 -- debug: var=create_stack['stack']['outputs'][0]['output_value'] - -- name: Set the OSC mgmt ip - set_fact: - osc_mgmt_ip: "{{ create_stack['stack']['outputs'][0]['output_value'] }}" - -- name: Update /etc/hosts file on ansible host - lineinfile: - dest: /etc/hosts - line: "{{ osc_mgmt_ip }} {{ inventory_hostname }}" - delegate_to: 127.0.0.1 diff --git a/roles/osc-predeploy/tasks/main.yml b/roles/osc-predeploy/tasks/main.yml deleted file mode 100644 index 7d8c2d677e..0000000000 --- a/roles/osc-predeploy/tasks/main.yml +++ /dev/null @@ -1,8 +0,0 @@ ---- -- include: heat.yml - when: target_server_type | match("heat") - tags: - - osc - - heat - - osc-predeploy - - nuage-os diff --git a/roles/reset-build/files/build_vars.yml b/roles/reset-build/files/build_vars.yml index a48aa5d116..965df0bc98 100644 --- a/roles/reset-build/files/build_vars.yml +++ b/roles/reset-build/files/build_vars.yml @@ -1,6 +1,6 @@ --- ### -# See BUILD.md for details +# See the documentation for details ### ### @@ -18,6 +18,11 @@ ## for all operations. nuage_unzipped_files_dir: "/home/caso/nfs-data/5.2.2/nuage-unpacked" +## Parameter to specify the location for backups during upgrade. +## The default value is nuage_unzipped_files_dir + "/backups". +## Uncomment and set to desired value for backup. 
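The new `metro_backup_root` comment above documents a default derived from `nuage_unzipped_files_dir` plus `/backups`. A minimal Python sketch of that default resolution (the helper function is illustrative, not part of the repo):

```python
def resolve_backup_root(nuage_unzipped_files_dir, metro_backup_root=None):
    """Return the backup location; when metro_backup_root is unset,
    default to <nuage_unzipped_files_dir>/backups as documented."""
    if metro_backup_root is not None:
        return metro_backup_root
    return nuage_unzipped_files_dir.rstrip("/") + "/backups"

print(resolve_backup_root("/home/caso/nfs-data/5.2.2/nuage-unpacked"))
# /home/caso/nfs-data/5.2.2/nuage-unpacked/backups
```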
+# metro_backup_root: "/home/caso/nfs-data/5.2.2/nuage-unpacked/backups" + ### ## upgrade parameters ### @@ -130,7 +135,10 @@ dns_domain: example.com ## Uncomment and set to 'False' if you want to skip the yum update--acceptable only in ## lab environments. # yum_update: True - +## secure_communication is used to set up TLS on all the communication between the VSD, VSC, +## VRS and NSGV. By default, it is set to 'True', the recommended value. +## Uncomment and set to 'False' if you don't want to use TLS. +#secure_communication: True ### ## Global Vcenter params @@ -185,6 +193,15 @@ dns_domain: example.com ## it will only show up once in vCenter, but the hosts tab will show it is ## available on multiple hosts (view in the screenshot below) ## +## resource_pool +## The vCenter resource pool where the VMs need to be located. A resource pool +## is a logical abstraction of resources. Different resource pools can be +## configured to have different priorities in case of resource contention and +## can have different resource reservations and limitations. +## In a typical deployment, you will see a resource pool with a high number of +## shares (higher priority) which will be used for the important components of +## Nuage, like the VSD and VSCs. +## ## ovftool ## Binary location of the ovftool ## @@ -195,7 +212,8 @@ dns_domain: example.com # password: Alcateldc # datacenter: Datacenter # cluster: Management -# datastore: datastore +# datastore: Datastore +# resource_pool: Resource Pool # ovftool: /usr/bin/ovftool ### @@ -366,7 +384,7 @@ vsc_operations_list: ## expected_num_vm_vports ## expected_num_gateway_ports ## Optional: Values to use for this VSC when running a health test. All values are -## set to 0 by default, which means they will be ignored. To use them, uncomment +## set to 0 by default, which means they will be ignored. To use them, uncomment ## and set to the expected values. 
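Later in this change set, the `secure_communication` flag is mapped to an OpenFlow connection type with Jinja2's `ternary` filter (`ssl` when TLS is enabled, `tcp` otherwise). The same selection in plain Python, as a sketch (the function name is illustrative):

```python
def conn_type(secure_communication: bool) -> str:
    """Mirror of the Jinja2 expression
    {{ secure_communication | ternary('ssl', 'tcp') }}"""
    return "ssl" if secure_communication else "tcp"

assert conn_type(True) == "ssl"
assert conn_type(False) == "tcp"
```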
## ## vsc_mgmt_static_route_list @@ -397,7 +415,8 @@ myvscs: # expected_num_vm_vports: 0, # expected_num_gateway_ports: 0, # vsc_mgmt_static_route_list: [ 0.0.0.0/1, 128.0.0.1/1 ], - xmpp_username: vsc1 } +# secure_communication: "{{ secure_communication }}", + xmpp_username: vsc1} - { hostname: "vsc1.nuage.met", # vmname: vsc1, target_server_type: "kvm", @@ -416,7 +435,8 @@ myvscs: # expected_num_vm_vports: 0, # expected_num_gateway_ports: 0, # vsc_mgmt_static_route_list: [ 0.0.0.0/1, 128.0.0.1/1 ], - xmpp_username: vsc2 } +# secure_communication: "{{ secure_communication }}", + xmpp_username: vsc2} ### ## VRS params ### @@ -480,8 +500,10 @@ myvrss: - { vrs_os_type: u14.04, # libnetwork_install: False, # dkms_install: False, + secure_openFlow: true, active_controller_ip: 192.168.122.204, standby_controller_ip: 192.168.122.205, +# secure_communication: "{{ secure_communication }}", vrs_ip_list: [ 192.168.122.101] } - { vrs_os_type: el7, @@ -489,6 +511,7 @@ myvrss: # dkms_install: False, active_controller_ip: 192.168.122.204, standby_controller_ip: 192.168.122.205, +# secure_communication: "{{ secure_communication }}", vrs_ip_list: [ 192.168.122.83, 192.168.122.238 ] } @@ -497,6 +520,7 @@ myvrss: # dkms_install: False, active_controller_ip: 192.168.122.204, standby_controller_ip: 192.168.122.205, +# secure_communication: "{{ secure_communication }}", vrs_ip_list: [ 192.168.122.215 ] } ### @@ -648,12 +672,18 @@ vns_operations_list: ## data_netmask ## Required: The netmask for the data port network. ## +## data_static_route +## Optional: list of eth1 static routes to the data networks +## +## data_gateway +## Optional: will be used as next-hop for the data_static_routes +## ## DHCP Bootstrap support -## Optional: MetroAG supports a special case, automatically bootstrapping +## Optional: Metro Automation Engine supports a special case, automatically bootstrapping ## (ZFB) a single NSGV at time of deployment of the VNS UTIL VM. 
To enable ## this special case, the variables in this section must be uncommented and -defined so that MetroAG can configure the DHCP server on the VNS UTIL VM -to participate in this process. If you are *not* going to use MetroAG's +defined so that Metro Automation Engine can configure the DHCP server on the VNS UTIL VM +to participate in this process. If you are *not* going to use Metro Automation Engine's ## automatic bootstrap of a single NSGV, the variables in this section ## must not be defined. ## @@ -681,10 +711,13 @@ myvnsutils: data_fqdn: "vnsutil1.data.nuage.met", data_ip: 192.168.100.205, data_netmask: 255.255.255.0, +# data_gateway: 192.168.100.1, +# data_static_route: [ 192.168.99.0/24, 192.168.98.0/24, 192.168.97.0/24 ], # data_subnet: 192.168.100.0, # nsgv_ip: 192.168.100.206, # nsgv_mac: '52:54:00:88:85:12', # nsgv_hostname: "nsgv1.{{ dns_domain }}", +# secure_communication: "{{ secure_communication }}", vsd_fqdn: "{{ vsd_fqdn_global }}" } ### @@ -738,6 +771,7 @@ mynsgvs: # iso_file: 'user_img.iso', # nsgv_mac: '52:54:00:88:85:12', # bootstrap_method: none, +# secure_communication: "{{ secure_communication }}", target_server: 135.227.181.233 } ### ## VCIN params @@ -757,6 +791,15 @@ vcin_operations_list: ## hostname ## Required always: The FQDN or IP address of the VCIN management port ## +## master_vcin +## Optional: The FQDN or IP address of the Master VCIN in an Active/Standby +## deployment. This must match the hostname of another VCIN in the list of +## myvcins. +## Validation is in place to assure: +## - Masters cannot be their own slave +## - A Master can only have one slave +## - The Master must be present when configured on a slave +## ## target_server_type ## Required: The type of hypervisor the VCIN will be deployed on. Supported values ## are kvm, vcenter, and heat. @@ -781,7 +824,8 @@ vcin_operations_list: ## The example, below, is for a single VCIN. If deploying stand-alone, ## only one VSD defintion is required. 
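The three `master_vcin` validation rules documented above can be sketched as a standalone check over the `myvcins` list. This is a hypothetical helper for illustration, not the repo's actual validation code:

```python
def validate_vcins(myvcins):
    """Check the documented master_vcin constraints:
    a VCIN cannot be its own master, a master has at most one
    slave, and every referenced master must exist in the list."""
    hostnames = {v["hostname"] for v in myvcins}
    slaves_per_master = {}
    errors = []
    for v in myvcins:
        master = v.get("master_vcin")
        if master is None:
            continue
        if master == v["hostname"]:
            errors.append(f"{v['hostname']} cannot be its own master")
        if master not in hostnames:
            errors.append(f"master {master} is not defined in myvcins")
        slaves_per_master.setdefault(master, []).append(v["hostname"])
    for master, slaves in slaves_per_master.items():
        if len(slaves) > 1:
            errors.append(f"master {master} has more than one slave")
    return errors
```

An Active/Standby pair such as `vcin1.nuage.net` with `master_vcin: vcin2.nuage.net` passes the check as long as `vcin2.nuage.net` also appears in `myvcins` and has no second slave.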
myvcins: - - { hostname: vcin1.nuage.met, + - { hostname: vcin1.nuage.net, +# master_vcin: vcin2.nuage.net, target_server_type: "vcenter", target_server: 135.227.181.232, # vcenter: { username: administrator@vsphere.local, @@ -828,7 +872,7 @@ myvcins: # data_ip: 10.167.54.3, # data_subnet: 10.167.54.0, # data_netmask: 255.255.255.0, -# data_gateway: 10.167.54.1 +# data_gateway: 10.167.54.1, # data_static_route: [ 10.165.53.0/24, 10.165.54.0/24, 10.165.55.0/24 ], # dns_server: 8.8.8.8, # dns_mgmt: g5dns.mgmt.training.net., @@ -905,3 +949,88 @@ myvcins: # ports_to_hv_bridges: ['br0', 'br1','br0','br1'], # license_file: '/path/on/ansible/deployment/host/license.zip', # deploy_cfg_file: '/path/on/ansible/deployment/host/config_flat.txt'} + +## VRS-VM params +## vrs_vm_operations_list = A list of the operations you intend for the VRS-VM. +## The list can include one or more of the following: +## - install +## myvrs_vms is required when you are operating on VRS-VMs. It is not required if you aren't +## operating on VRS-VMs. It will be ignored if not defined. Each element in the list +## is a dictionary of parameters specific to a single VRS-VM. You can define as many +## VRS-VMs as you want. +## +## hostname +## Required always: The FQDN or IP address of the VRS-VM management port +## +## vmname +## Optional: vmname defaults to the hostname. Uncomment and set if you want a +## VM name other than the hostname. +## +## target_server_type +## Optional: The default is 'kvm'. For now only KVM is supported. +## +## target_server +## Required: The hostname or IP address of the hypervisor where this VRS-VM will be +## instantiated. +## +## mgmt_bridge +## Optional: The name of the bridge on the hypervisor to connect the mgmt port to. +## By default, the mgmt port will be connected to the global mgmt_bridge that is +## defined elsewhere in this file. Uncomment and update if you want to use a +## different bridge for this component. 
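Several of the VRS-VM fields described in the parameter notes are optional with documented defaults (`vmname` falls back to `hostname`, `target_server_type` to `kvm`, `ram` to 4 GB, `vcpu` to 2). A sketch of that defaulting, assuming a plain dict per `myvrs_vms` entry (the function itself is illustrative):

```python
def with_vrs_vm_defaults(entry):
    """Apply the documented defaults for a myvrs_vms entry."""
    out = dict(entry)
    out.setdefault("vmname", entry["hostname"])   # vmname defaults to hostname
    out.setdefault("target_server_type", "kvm")   # only KVM supported for now
    out.setdefault("ram", 4)                      # GB, per the parameter notes
    out.setdefault("vcpu", 2)
    return out
```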
+## +## mgmt_ip +## Required: The IP address of the VRS-VM's management port. +## +## mgmt_gateway +## Required: The IP address for the default gateway. +## +## mgmt_netmask +## Required: The netmask for the management port. +## +## data_ip +## Required: The IP address of the data plane. +## +## data_netmask +## Required: The netmask for the data plane network. +## +## data_bridge +## Optional: The name of the bridge on the hypervisor to connect the data port to. +## By default, the data port will be connected to the global data_bridge that is +## defined elsewhere in this file. Uncomment and update if you want to use a +## different bridge for this component. +## +## data_gateway +## Required: The IP address that will be used as a next-hop address of the +## data_static_route. +## +## data_static_route: Optional list of eth1 static routes to the data networks. +## +## ram (GB) +## Optional: RAM amount that will be used for the VRS-VM. By default it is 4 (GB). +## +## vcpu +## Optional: VCPU count that will be used for the VRS-VM. 
By default it is 2 +## +## vrs_vm_qcow2_path +## Required: source qcow path of the VRS-VM +#vrs_vm_operations_list: +# - install +# +#myvrs_vms: +# - { hostname: vrs_vm1, +# #vmname: vrs_vm1, +# target_server_type: kvm, +# target_server: 10.10.13.5, +# ram: 12, +# vcpu: 4, +# mgmt_bridge: br0, +# mgmt_ip: 10.10.13.11, +# mgmt_gateway: 10.10.13.1, +# mgmt_netmask: 255.255.255.0, +# data_bridge: br1, +# data_ip: 10.9.13.11, +# data_netmask: 255.255.255.0, +# vrs_vm_qcow2_path: /tmp/images/centos7.qcow2, +# data_gateway: 10.9.13.1, +# data_static_route: [ 10.13.60.0/24, 10.12.60.0/24, 10.11.60.0/24] } diff --git a/roles/stcv-postdeploy/tasks/main.yml b/roles/stcv-postdeploy/tasks/main.yml index c7a4f07fae..9009a4d87d 100644 --- a/roles/stcv-postdeploy/tasks/main.yml +++ b/roles/stcv-postdeploy/tasks/main.yml @@ -1,11 +1,11 @@ --- -- include: kvm.yml +- import_tasks: kvm.yml when: target_server_type | match("kvm") tags: - stcv - stcv-postdeploy -- include: vcenter.yml +- import_tasks: vcenter.yml when: target_server_type | match("vcenter") tags: - stcv diff --git a/roles/stcv-predeploy/tasks/main.yml b/roles/stcv-predeploy/tasks/main.yml index 7caaa69113..d8f6b3d604 100644 --- a/roles/stcv-predeploy/tasks/main.yml +++ b/roles/stcv-predeploy/tasks/main.yml @@ -1,11 +1,11 @@ --- -- include: kvm.yml +- import_tasks: kvm.yml when: target_server_type | match("kvm") tags: - stcv - stcv-predeploy -- include: vcenter.yml +- import_tasks: vcenter.yml when: target_server_type | match("vcenter") tags: - stcv diff --git a/roles/vcin-health/tasks/main.yml b/roles/vcin-health/tasks/main.yml index 72897e1db7..5a28234c69 100644 --- a/roles/vcin-health/tasks/main.yml +++ b/roles/vcin-health/tasks/main.yml @@ -1,5 +1,5 @@ --- -- include: report_header.yml +- import_tasks: report_header.yml - name: Get current version of VSD software command: echo $VSD_VERSION @@ -23,6 +23,6 @@ nuage_append: filename="{{ report_path }}" text="{{ net_conf.info | to_nice_json}}\n" delegate_to: localhost -- 
include: monit_status.yml +- import_tasks: monit_status.yml -- include: report_footer.yml +- import_tasks: report_footer.yml diff --git a/roles/vnsutil-deploy/tasks/main.yml b/roles/vnsutil-deploy/tasks/main.yml index bfe47b94c0..8a30ece03d 100644 --- a/roles/vnsutil-deploy/tasks/main.yml +++ b/roles/vnsutil-deploy/tasks/main.yml @@ -1,5 +1,5 @@ --- -- include: non_heat.yml +- import_tasks: non_heat.yml when: not target_server_type | match("heat") tags: - vnsutil diff --git a/roles/vnsutil-deploy/tasks/non_heat.yml b/roles/vnsutil-deploy/tasks/non_heat.yml index 81800c235c..f3b3a6ec44 100644 --- a/roles/vnsutil-deploy/tasks/non_heat.yml +++ b/roles/vnsutil-deploy/tasks/non_heat.yml @@ -24,6 +24,8 @@ include_role: name: common tasks_from: linux-ntp-sync + vars: + rem_user: "{{ vnsutil_username }}" - block: diff --git a/roles/vnsutil-destroy/tasks/kvm.yml b/roles/vnsutil-destroy/tasks/kvm.yml index d574fd7917..c65c861526 100644 --- a/roles/vnsutil-destroy/tasks/kvm.yml +++ b/roles/vnsutil-destroy/tasks/kvm.yml @@ -5,7 +5,7 @@ delegate_to: "{{ target_server }}" remote_user: "{{ target_server_username }}" -- include: vnsutil_destroy_helper.yml +- import_tasks: vnsutil_destroy_helper.yml when: vmname in virt_vms.list_vms - name: Destroy the images directory diff --git a/roles/vnsutil-destroy/tasks/main.yml b/roles/vnsutil-destroy/tasks/main.yml index 50cfd40c9d..98baacc27a 100644 --- a/roles/vnsutil-destroy/tasks/main.yml +++ b/roles/vnsutil-destroy/tasks/main.yml @@ -1,18 +1,18 @@ --- -- include: kvm.yml +- import_tasks: kvm.yml when: target_server_type | match("kvm") tags: - vnsutil - vnsutil-destroy -- include: heat.yml +- import_tasks: heat.yml when: target_server_type | match("heat") tags: - vnsutil - heat - vnsutil-destroy -- include: vcenter.yml +- import_tasks: vcenter.yml when: target_server_type | match("vcenter") tags: - vnsutil diff --git a/roles/vnsutil-postdeploy/tasks/main.yml b/roles/vnsutil-postdeploy/tasks/main.yml index 88b7d37507..236cd84c12 100644 
--- a/roles/vnsutil-postdeploy/tasks/main.yml +++ b/roles/vnsutil-postdeploy/tasks/main.yml @@ -6,29 +6,19 @@ path: "/opt/proxy/config/keys" state: directory - - name: Get vsd node(s) information - import_role: - name: common - tasks_from: vsd-node-info.yml - vars: - vsd_hostname: "{{ vsd_fqdn }}" - run_once: true - - - name: Get VSD version - shell: echo $VSD_VERSION - register: vsd_version - delegate_to: "{{ vsd_hostname_list[0] }}" - - - name: Create and transfer certs for VSD 4.0.4 - command: "{{ create_certs_404 }}" - delegate_to: "{{ vsd_hostname_list[0] }}" - when: "'4.0.4' in vsd_version.stdout" - - name: Create and transfer certs - command: "{{ create_certs }}" - delegate_to: "{{ vsd_hostname_list[0] }}" - when: "'4.0.4' not in vsd_version.stdout" - + include_role: + name: common + tasks_from: vsd-generate-transfer-certificates + vars: + certificate_password: "{{ vnsutil_password }}" + certificate_username: proxy + commonName: proxy + certificate_type: server + scp_user: "{{ vnsutil_username }}" + scp_location: /opt/proxy/config/keys + additional_parameters: -d {{ inventory_hostname }} + - name: Install supervisord and haproxy command: "{{ install_cmd }}" diff --git a/roles/vnsutil-postdeploy/vars/main.yml b/roles/vnsutil-postdeploy/vars/main.yml index 060a8161a2..e1c1c5f7db 100644 --- a/roles/vnsutil-postdeploy/vars/main.yml +++ b/roles/vnsutil-postdeploy/vars/main.yml @@ -1,8 +1,3 @@ -# Command to Create and trasfer certs -create_certs: "/bin/sshpass -p{{ vsd_password }} /opt/vsd/ejbca/deploy/certMgmt.sh -a generate -u proxy -c proxy -d {{ data_fqdn | default(inventory_hostname) }} -f pem -t server -s {{ vnsutil_username }}@{{inventory_hostname}}:/opt/proxy/config/keys -o csp" - -# Alternate command to Create and trasfer certs for VSD version 4.0.4 -create_certs_404: "/bin/sshpass -p{{ vsd_password }} /opt/vsd/ejbca/deploy/certMgmt.sh -a generate -u proxy -c proxy -d {{ data_fqdn | default(inventory_hostname) }} -f pem -t server -s {{ vnsutil_username 
}}@{{inventory_hostname}}:/opt/proxy/config/keys -o csp -n VSPCA" # install script install_cmd: "./rpms/install.sh -x {{vsd_fqdn}} -u {{ data_fqdn | default(inventory_hostname) }}" diff --git a/roles/vnsutil-predeploy/tasks/kvm.yml b/roles/vnsutil-predeploy/tasks/kvm.yml index 504c9b8c1d..da6f21897e 100644 --- a/roles/vnsutil-predeploy/tasks/kvm.yml +++ b/roles/vnsutil-predeploy/tasks/kvm.yml @@ -141,6 +141,20 @@ - name: Set the owner and group for the network hostname file on the vnsutil image command: guestfish --rw -a {{ guestfish_dest }} -m {{ guestfish_mount }} chown 0 0 /etc/sysconfig/network + - block: + - name: Create a temporary copy of the network script for route-eth1 + template: src=route-eth1.j2 backup=no dest={{ images_path }}/{{ vmname }}/route-eth1 + + - name: Copy route-eth1 network script file to the vnsutil image + command: guestfish --rw -a {{ guestfish_dest }} -m {{ guestfish_mount }} copy-in {{ images_path }}/{{ vmname }}/route-eth1 /etc/sysconfig/network-scripts/ + + - name: Remove temporary copy of route-eth1 network script + file: path={{ images_path }}/{{ vmname }}/route-eth1 state=absent + + - name: Set the owner and group on the route-eth1 network script file in the vnsutil image + command: guestfish --rw -a {{ guestfish_dest }} -m {{ guestfish_mount }} chown 0 0 /etc/sysconfig/network-scripts/route-eth1 + when: data_gateway is defined and data_static_route is defined + - name: Create the directory /root/.ssh for authorized_keys command: guestfish --rw -a {{ guestfish_dest }} -m {{ guestfish_mount }} mkdir-mode /root/.ssh 0700 diff --git a/roles/vnsutil-predeploy/tasks/main.yml b/roles/vnsutil-predeploy/tasks/main.yml index a16fa36a79..02a3d4a4e3 100644 --- a/roles/vnsutil-predeploy/tasks/main.yml +++ b/roles/vnsutil-predeploy/tasks/main.yml @@ -16,13 +16,13 @@ vnsutil_dir.stat.isdir is defined and vnsutil_dir.stat.isdir }}" -- include: kvm.yml +- import_tasks: kvm.yml when: target_server_type | match("kvm") tags: - vnsutil - 
vnsutil-predeploy -- include: vcenter.yml +- import_tasks: vcenter.yml when: target_server_type | match("vcenter") static: no tags: diff --git a/roles/vnsutil-predeploy/templates/route-eth1.j2 b/roles/vnsutil-predeploy/templates/route-eth1.j2 new file mode 100644 index 0000000000..6b71133632 --- /dev/null +++ b/roles/vnsutil-predeploy/templates/route-eth1.j2 @@ -0,0 +1,3 @@ +{% for route in data_static_route %} +{{ route }} via {{ data_gateway }} +{% endfor %} diff --git a/roles/vrs-deploy/tasks/main.yml b/roles/vrs-deploy/tasks/main.yml index 63bc66b3ba..add4c88b3b 100644 --- a/roles/vrs-deploy/tasks/main.yml +++ b/roles/vrs-deploy/tasks/main.yml @@ -14,6 +14,21 @@ openvswitch_file: "/etc/default/openvswitch-switch" when: ansible_os_family == "Debian" + + - name: Set connection type + set_fact: + conn_type: "ssl" + + - name: Set Client key path + set_fact: + client_keys_path: "/etc/default/bootstrap/keys" + + - name: Create Key directory + file: + path: "{{ client_keys_path }}" + state: directory + mode: '0777' + recurse: yes + + - name: Check whether active controller address is already configured command: grep -Fq "ACTIVE_CONTROLLER={{ active_controller_addr }}" {{ openvswitch_file }} register: active_address_result @@ -117,7 +132,7 @@ dest: "{{ openvswitch_file }}" regexp: "^STANDBY_CONTROLLER=" line: "STANDBY_CONTROLLER={{ standby_controller_addr }}" - + - name: Update connection type in {{ openvswitch_file }} file remote_user: "{{ target_server_username }}" lineinfile: @@ -125,6 +140,49 @@ regexp: "^CONN_TYPE=" line: "CONN_TYPE=tcp" + - block: + - name: Generate TLS Certificates for VRS + include_role: + name: common + tasks_from: vsd-generate-transfer-certificates + vars: + certificate_password: "{{ compute_password }}" + certificate_username: vrs-{{ inventory_hostname }} + commonName: vrs-{{ 
inventory_hostname }} + certificate_type: vrs + scp_user: "{{ compute_username }}" + scp_location: "{{ client_keys_path }}" + additional_parameters: -v {{ inventory_hostname }} + + - name: Update client key path in {{ openvswitch_file }} file + remote_user: "{{ target_server_username }}" + lineinfile: + dest: "{{ openvswitch_file }}" + regexp: "^CLIENT_KEY_PATH=" + line: "CLIENT_KEY_PATH={{ client_keys_path }}/vrs-{{ inventory_hostname }}-Key.pem" + + - name: Update client cert path in {{ openvswitch_file }} file + remote_user: "{{ target_server_username }}" + lineinfile: + dest: "{{ openvswitch_file }}" + regexp: "^CLIENT_CERT_PATH=" + line: "CLIENT_CERT_PATH={{ client_keys_path }}/vrs-{{ inventory_hostname }}.pem" + + - name: Update CA cert path in {{ openvswitch_file }} file + remote_user: "{{ target_server_username }}" + lineinfile: + dest: "{{ openvswitch_file }}" + regexp: "^CA_CERT_PATH=" + line: "CA_CERT_PATH={{ client_keys_path }}/vrs-{{ inventory_hostname }}-CA.pem" + + - name: Update connection type in {{ openvswitch_file }} file + remote_user: "{{ target_server_username }}" + lineinfile: + dest: "{{ openvswitch_file }}" + regexp: "^CONN_TYPE=" + line: "CONN_TYPE={{ conn_type }}" + when: secure_communication + - name: Restart OpenVSwitch Service on RedHat OS family distros service: name=openvswitch state=restarted when: ansible_os_family == "RedHat" diff --git a/roles/vrs-health/tasks/main.yml b/roles/vrs-health/tasks/main.yml index 7b2382a636..39a9bc9f37 100644 --- a/roles/vrs-health/tasks/main.yml +++ b/roles/vrs-health/tasks/main.yml @@ -9,28 +9,28 @@ - health - processes -- include: "check_ovs_service.yml" +- import_tasks: "check_ovs_service.yml" static: no tags: - vrs - health - ovs-service -- include: "check_processes.yml" +- import_tasks: "check_processes.yml" static: no tags: - vrs - health - processes -- include: "check_controller.yml" +- import_tasks: "check_controller.yml" static: no tags: - vrs - health - controller -- include: 
"check_vport_resolution.yml" +- import_tasks: "check_vport_resolution.yml" static: no tags: - vrs diff --git a/roles/vrs-postdeploy/tasks/main.yml b/roles/vrs-postdeploy/tasks/main.yml index ea5c66e304..c3b8f1d857 100644 --- a/roles/vrs-postdeploy/tasks/main.yml +++ b/roles/vrs-postdeploy/tasks/main.yml @@ -6,27 +6,32 @@ # TODO: Evalue use of include_role: vrs_health. # This looks like some duplicate code - -- name: Get controller connection info - shell: "ovs-vsctl show | grep -Pzl '(?s)Controller \"ctrl(1|2)\"\\n *target: \"(tcp|ssl):({{ item }}):6633\"\\n *role: (master|slave)\\n *is_connected: true'" - with_items: - - "{{ active_controller_addr }}" - - "{{ standby_controller_addr }}" - register: command_result - remote_user: "{{ target_server_username }}" - ignore_errors: yes - changed_when: false - -- name: Check primary controller - assert: - that: "command_result.results[0].rc == 0" - msg: "Switch not connected to primary controller" - -- name: Check secondary controller - assert: - that: "command_result.results[1].rc == 0" - msg: "Switch not connected to secondary controller" - +- block: + - name: Set transport type + set_fact: + transport_type: "{{ secure_communication | ternary('ssl', 'tcp') }}" + + - name: Get controller connection info + shell: "ovs-vsctl show | grep -Pzl '(?s)Controller \"ctrl(1|2)\"\\n *target: \"{{ transport_type }}:({{ item }}):6633\"\\n *role: (master|slave)\\n *is_connected: true'" + with_items: + - "{{ active_controller_addr }}" + - "{{ standby_controller_addr }}" + register: command_result + remote_user: "{{ target_server_username }}" + ignore_errors: yes + changed_when: false + + + - name: Check primary controller + assert: + that: "command_result.results[0].rc == 0" + msg: "Switch not connected to primary controller" + + - name: Check secondary controller + assert: + that: "command_result.results[1].rc == 0" + msg: "Switch not connected to secondary controller" + - block: - name: Ceate a local temp directory # TODO: 
Use tempfile module in Ansible 2.3+ @@ -46,7 +51,7 @@ local_action: file path={{ mktemp_output.stdout }} state=absent - name: Verify registration on Active and Standby Controller - include: check_vsc_registration.yml controller_addr="{{item}}" + include_tasks: check_vsc_registration.yml controller_addr="{{item}}" with_items: - "{{ active_controller_addr }}" - "{{ standby_controller_addr }}" diff --git a/roles/vrs-vm-deploy/tasks/main.yml b/roles/vrs-vm-deploy/tasks/main.yml new file mode 100644 index 0000000000..384de85591 --- /dev/null +++ b/roles/vrs-vm-deploy/tasks/main.yml @@ -0,0 +1,37 @@ +- name: Wait for VRS VM ssh to be ready + local_action: + module: wait_for + port: "22" + host: "{{ mgmt_ip }}" + search_regex: OpenSSH + delay: 1 + when: mgmt_ip is defined + +- block: + - name: Configure yum proxy + lineinfile: + dest: /etc/yum.conf + regexp: "^proxy=" + line: "proxy={{ yum_proxy }}" + when: not yum_proxy | match('NONE') + + - name: Add epel repository on RedHat OS family distros + yum_repository: + name: epel + description: EPEL YUM repo + baseurl: http://download.fedoraproject.org/pub/epel/$releasever/$basearch/ + when: ansible_os_family == "RedHat" + + - name: Execute a yum update + yum: + name: '*' + state: latest + when: yum_update + + - name: Install required yum packages + yum: name={{ item }} state=present + with_items: + - net-tools + - libguestfs-tools + remote_user: "{{ target_server_username }}" + delegate_to: "{{ mgmt_ip }}" diff --git a/roles/os-compute-destroy/tasks/kvm.yml b/roles/vrs-vm-destroy/tasks/kvm.yml similarity index 50% rename from roles/os-compute-destroy/tasks/kvm.yml rename to roles/vrs-vm-destroy/tasks/kvm.yml index 443649615f..12b1936c8a 100644 --- a/roles/os-compute-destroy/tasks/kvm.yml +++ b/roles/vrs-vm-destroy/tasks/kvm.yml @@ -5,11 +5,5 @@ delegate_to: "{{ target_server }}" remote_user: "{{ target_server_username }}" -- include: os_compute_destroy.yml +- include: vrs_vm_destroy_helper.yml when: inventory_hostname in 
virt_vms.list_vms - -- name: Destroy the images directory - file: path={{ images_path }}/{{ inventory_hostname }} - state=absent - delegate_to: "{{ target_server }}" - remote_user: "{{ target_server_username }}" diff --git a/roles/vrs-vm-destroy/tasks/main.yml b/roles/vrs-vm-destroy/tasks/main.yml new file mode 100644 index 0000000000..bb0f828452 --- /dev/null +++ b/roles/vrs-vm-destroy/tasks/main.yml @@ -0,0 +1,7 @@ +--- +- include: kvm.yml + when: target_server_type | match("kvm") + tags: + - vrs-vm + - vrs-vm-destroy + diff --git a/roles/vrs-vm-destroy/tasks/vrs_vm_destroy_helper.yml b/roles/vrs-vm-destroy/tasks/vrs_vm_destroy_helper.yml new file mode 100644 index 0000000000..e7b20ee61c --- /dev/null +++ b/roles/vrs-vm-destroy/tasks/vrs_vm_destroy_helper.yml @@ -0,0 +1,20 @@ +--- +- block: + - name: Destroy VRS VM + virt: + name: "{{ inventory_hostname }}" + state: destroyed + uri: qemu:///system + + - name: Undefine VRS VM + virt: + name: "{{ inventory_hostname }}" + command: undefine + uri: qemu:///system + + - name: Remove Image Directory + file: + state: absent + path: "{{ images_path }}/{{ vm_name }}" + delegate_to: "{{ target_server }}" + remote_user: "{{ target_server_username }}" diff --git a/roles/vrs-vm-predeploy/tasks/kvm.yml b/roles/vrs-vm-predeploy/tasks/kvm.yml new file mode 100644 index 0000000000..2da24c323e --- /dev/null +++ b/roles/vrs-vm-predeploy/tasks/kvm.yml @@ -0,0 +1,189 @@ +--- +- name: Query {{ target_server }} facts + action: setup + remote_user: "{{ target_server_username }}" + delegate_to: "{{ target_server }}" + +- name: Check target for supported OS + fail: msg="Unsupported OS family ({{ ansible_os_family }})" + when: ansible_os_family not in vrs_vm_target_server_os_family_list + +- name: Include OS-specific variables. 
+ include_vars: "{{ ansible_os_family }}.yml" + +- block: + - name: If RedHat, install packages for RedHat OS family distros + yum: name={{ item }} state=present + with_items: + - qemu-kvm + - libvirt + - bridge-utils + - libvirt-python + when: ansible_os_family == "RedHat" + + - name: If Debian, install packages for Debian OS family distros + apt: name={{ item }} state=present + with_items: + - qemu-kvm + - libvirt-bin + - bridge-utils + - python-libvirt + when: ansible_os_family == "Debian" + + - name: List the Virtual Machines running + virt: command=list_vms + register: virt_vms + delegate_to: "{{ target_server }}" + remote_user: "{{ target_server_username }}" + +- name: Verify that the VM is not already running + assert: + that: "vm_name not in virt_vms.list_vms" + msg: "{{ vm_name }} is already running on {{ target_server }}" + +- name: Set local variable with vrs_vm guestfish destination + set_fact: + guestfish_dest: "{{ images_path }}/{{ vm_name }}/{{ vrs_vm_qcow2_file_name }}" + +- block: + - name: Create libvirt image directory + file: path={{ images_path }}/{{ vm_name }} + state=directory + owner={{ libvirt.user }} + group={{ libvirt.group }} + + - name: Copy the vrs_vm qcow image to virt images directory + copy: src={{ vrs_vm_qcow2_path }}/{{ vrs_vm_qcow2_file_name }} + dest={{ images_path }}/{{ vm_name }} + owner={{ libvirt.user }} + group={{ libvirt.group }} + + - name: Get list of partitions + shell: "guestfish -r -a {{ guestfish_dest }} run : list-filesystems | grep -Ev '(unknown|swap)'" + register: partitions_list + + - name: Check partition content + shell: "guestfish -r -a {{ guestfish_dest }} run : mount {{ item.split(':')[0] }} / : ls /" + register: partitions + with_items: "{{ partitions_list.stdout_lines }}" + remote_user: "{{ target_server_username }}" + delegate_to: "{{ target_server }}" + +- name: Find root partition + set_fact: + guestfish_mount: "{{ item.item.split(':')[0]}}" + with_items: "{{ partitions.results }}" + when: '"proc" in 
item.stdout' + +- debug: var=guestfish_mount verbosity=1 + +- block: + - name: Create a temporary copy of the network script for eth0 + template: src=ifcfg-eth0.j2 backup=no dest={{ images_path }}/{{ vm_name }}/ifcfg-eth0 + + - name: Copy eth0 network script file to the vrs_vm image + command: guestfish --rw -a {{ guestfish_dest }} -m {{ guestfish_mount }} copy-in {{ images_path }}/{{ vm_name }}/ifcfg-eth0 /etc/sysconfig/network-scripts/ + + - name: Remove temporary copy of eth0 network script + file: path={{ images_path }}/{{ vm_name }}/ifcfg-eth0 state=absent + + - name: Set the owner and group on the eth0 network script file in the vrs_vm image + command: guestfish --rw -a {{ guestfish_dest }} -m {{ guestfish_mount }} chown 0 0 /etc/sysconfig/network-scripts/ifcfg-eth0 + delegate_to: "{{ target_server }}" + remote_user: "{{ target_server_username }}" + +- block: + - name: Create a temporary copy of the network script for eth1 + template: src=ifcfg-eth1.j2 backup=no dest={{ images_path }}/{{ vm_name }}/ifcfg-eth1 + + - name: Copy eth1 network script file to the vrs_vm image + command: guestfish --rw -a {{ guestfish_dest }} -m {{ guestfish_mount }} copy-in {{ images_path }}/{{ vm_name }}/ifcfg-eth1 /etc/sysconfig/network-scripts/ + + - name: Remove temporary copy of eth1 network script + file: path={{ images_path }}/{{ vm_name }}/ifcfg-eth1 state=absent + + - name: Set the owner and group on the eth1 network script file in the vrs_vm image + command: guestfish --rw -a {{ guestfish_dest }} -m {{ guestfish_mount }} chown 0 0 /etc/sysconfig/network-scripts/ifcfg-eth1 + delegate_to: "{{ target_server }}" + remote_user: "{{ target_server_username }}" + when: data_ip is defined or data_netmask is defined + +- block: + - name: Create the directory /root/.ssh for authorized_keys + command: guestfish --rw -a {{ guestfish_dest }} -m {{ guestfish_mount }} mkdir-p /root/.ssh + + - name: Set the mode for the .ssh folder on the vrs_vm image + command: guestfish --rw -a {{ 
guestfish_dest }} -m {{ guestfish_mount }} chmod 0700 /root/.ssh + + - name: Set the owner and group for the /root/.ssh directory on the vrs_vm image + command: guestfish --rw -a {{ guestfish_dest }} -m {{ guestfish_mount }} chown 0 0 /root/.ssh + delegate_to: "{{ target_server }}" + remote_user: "{{ target_server_username }}" + +- name: Get the public key for the current user + local_action: command cat "{{ user_ssh_pub_key }}" + register: current_user_ssh_key + +- block: + - name: Create a temporary copy of the authorized_keys file + template: src=authorized_keys.j2 backup=no dest={{ images_path }}/{{ vm_name }}/authorized_keys + + - name: Copy authorized_keys file to the vrs_vm image + command: guestfish --rw -a {{ guestfish_dest }} -m {{ guestfish_mount }} copy-in {{ images_path }}/{{ vm_name }}/authorized_keys /root/.ssh/ + + - name: Remove temporary copy of authorized_keys file + file: path={{ images_path }}/{{ vm_name }}/authorized_keys state=absent + + - name: Set the owner and group for the authorized_keys file on the vrs_vm image + command: guestfish --rw -a {{ guestfish_dest }} -m {{ guestfish_mount }} chown 0 0 /root/.ssh/authorized_keys + + - name: Set the mode for the authorized_keys file on the vrs_vm image + command: guestfish --rw -a {{ guestfish_dest }} -m {{ guestfish_mount }} chmod 0640 /root/.ssh/authorized_keys + + - name: Create a temporary copy of the network script for route-br1 on {{ target_server }} + template: src=route-br1.j2 backup=no dest={{ images_path }}/{{ vm_name }}/route-br1 + + - name: Copy route-br1 network script file to the VRS VM image on {{ target_server }} + command: guestfish --rw -a {{ images_path }}/{{ vm_name }}/{{ vrs_vm_qcow2_file_name }} -m {{ guestfish_mount }} copy-in {{ images_path }}/{{ vm_name }}/route-br1 /etc/sysconfig/network-scripts/ + + - name: Remove temporary copy of route-br1 network script + file: path={{ images_path }}/{{ vm_name }}/route-br1 state=absent + + - name: Set the owner and group on the 
route-br1 network script file in the VRS VM image + command: guestfish --rw -a {{ guestfish_dest }} -m {{ guestfish_mount }} chown 0 0 /etc/sysconfig/network-scripts/route-br1 + + - name: Create a temporary copy of the network script for ifcfg-br0 on {{ target_server }} + template: src=ifcfg-br0.j2 backup=no dest={{ images_path }}/{{ vm_name }}/ifcfg-br0 + + - name: Copy ifcfg-br0 network script file to the VRS VM image on {{ target_server }} + command: guestfish --rw -a {{ images_path }}/{{ vm_name }}/{{ vrs_vm_qcow2_file_name }} -m {{ guestfish_mount }} copy-in {{ images_path }}/{{ vm_name }}/ifcfg-br0 /etc/sysconfig/network-scripts/ + + - name: Remove temporary copy of ifcfg-br0 network script + file: path={{ images_path }}/{{ vm_name }}/ifcfg-br0 state=absent + + - name: Set the owner and group on the ifcfg-br0 network script file in the VRS VM image + command: guestfish --rw -a {{ guestfish_dest }} -m {{ guestfish_mount }} chown 0 0 /etc/sysconfig/network-scripts/ifcfg-br0 + + - name: Create a temporary copy of the network script for ifcfg-br1 on {{ target_server }} + template: src=ifcfg-br1.j2 backup=no dest={{ images_path }}/{{ vm_name }}/ifcfg-br1 + + - name: Copy ifcfg-br1 network script file to the VRS VM image on {{ target_server }} + command: guestfish --rw -a {{ images_path }}/{{ vm_name }}/{{ vrs_vm_qcow2_file_name }} -m {{ guestfish_mount }} copy-in {{ images_path }}/{{ vm_name }}/ifcfg-br1 /etc/sysconfig/network-scripts/ + + - name: Remove temporary copy of ifcfg-br1 network script + file: path={{ images_path }}/{{ vm_name }}/ifcfg-br1 state=absent + + - name: Set the owner and group on the ifcfg-br1 network script file in the VRS VM image + command: guestfish --rw -a {{ guestfish_dest }} -m {{ guestfish_mount }} chown 0 0 /etc/sysconfig/network-scripts/ifcfg-br1 + + - name: "Define new VM" + virt: name="{{ vm_name }}" + command=define + xml="{{ lookup('template', 'vrs_vm.xml.j2') }}" + + - name: "Run VRS VM" + virt: name="{{ vm_name }}" + 
state=running + uri=qemu:///system + delegate_to: "{{ target_server }}" + remote_user: "{{ target_server_username }}" diff --git a/roles/vrs-vm-predeploy/tasks/main.yml b/roles/vrs-vm-predeploy/tasks/main.yml new file mode 100644 index 0000000000..38629db8ed --- /dev/null +++ b/roles/vrs-vm-predeploy/tasks/main.yml @@ -0,0 +1,12 @@ +--- +- include: kvm.yml + when: target_server_type | match("kvm") + tags: + - vrs-vm + - vrs-vm-predeploy + +- include: vcenter.yml + when: target_server_type | match("vcenter") + tags: + - vrs-vm + - vrs-vm-predeploy diff --git a/roles/vrs-vm-predeploy/tasks/vcenter.yml b/roles/vrs-vm-predeploy/tasks/vcenter.yml new file mode 100644 index 0000000000..b46342d192 --- /dev/null +++ b/roles/vrs-vm-predeploy/tasks/vcenter.yml @@ -0,0 +1,5 @@ +--- +- name: Assert feature toggle + assert: + that: "False" + msg: "vcenter deployment is not supported, quitting" diff --git a/roles/vrs-vm-predeploy/templates/authorized_keys.j2 b/roles/vrs-vm-predeploy/templates/authorized_keys.j2 new file mode 100644 index 0000000000..cd6e7699e4 --- /dev/null +++ b/roles/vrs-vm-predeploy/templates/authorized_keys.j2 @@ -0,0 +1 @@ +{{ current_user_ssh_key.stdout }} diff --git a/roles/vrs-vm-predeploy/templates/ifcfg-br0.j2 b/roles/vrs-vm-predeploy/templates/ifcfg-br0.j2 new file mode 100644 index 0000000000..8b96d64e49 --- /dev/null +++ b/roles/vrs-vm-predeploy/templates/ifcfg-br0.j2 @@ -0,0 +1,21 @@ +DEVICE="br0" +IPV6INIT="yes" +NM_CONTROLLED="no" +ONBOOT="yes" +TYPE="Bridge" +{% if mgmt_ip is defined and mgmt_gateway is defined %} +BOOTPROTO="static" +IPADDR="{{ mgmt_ip }}" +GATEWAY="{{ mgmt_gateway }}" +{% if mgmt_prefix is defined %} +PREFIX="{{ mgmt_prefix }}" +{% else %} +NETMASK="{{ mgmt_netmask }}" +{% endif %} +DNS1="{{ dns_server_list[0] }}" +{% if dns_server_list[1] is defined %} +DNS2="{{ dns_server_list[1] }}" +{% endif %} +{% else %} +BOOTPROTO="dhcp" +{% endif %} diff --git a/roles/vrs-vm-predeploy/templates/ifcfg-br1.j2 
b/roles/vrs-vm-predeploy/templates/ifcfg-br1.j2 new file mode 100644 index 0000000000..21e2106ecc --- /dev/null +++ b/roles/vrs-vm-predeploy/templates/ifcfg-br1.j2 @@ -0,0 +1,12 @@ +DEVICE="br1" +IPV6INIT="yes" +NM_CONTROLLED="no" +ONBOOT="yes" +TYPE="Bridge" +BOOTPROTO="static" +IPADDR="{{ data_ip }}" +{% if data_prefix is defined %} +PREFIX="{{ data_prefix }}" +{% else %} +NETMASK="{{ data_netmask }}" +{% endif %} diff --git a/roles/vrs-vm-predeploy/templates/ifcfg-eth0.j2 b/roles/vrs-vm-predeploy/templates/ifcfg-eth0.j2 new file mode 100644 index 0000000000..15ed8bcc7e --- /dev/null +++ b/roles/vrs-vm-predeploy/templates/ifcfg-eth0.j2 @@ -0,0 +1,7 @@ +DEVICE="eth0" +IPV6INIT="yes" +NM_CONTROLLED="no" +ONBOOT="yes" +TYPE="Ethernet" +BOOTPROTO="none" +BRIDGE=br0 diff --git a/roles/os-compute-deploy/templates/ifcfg-eth1.j2 b/roles/vrs-vm-predeploy/templates/ifcfg-eth1.j2 similarity index 57% rename from roles/os-compute-deploy/templates/ifcfg-eth1.j2 rename to roles/vrs-vm-predeploy/templates/ifcfg-eth1.j2 index 1881478d07..0e3cff4da0 100644 --- a/roles/os-compute-deploy/templates/ifcfg-eth1.j2 +++ b/roles/vrs-vm-predeploy/templates/ifcfg-eth1.j2 @@ -1,7 +1,7 @@ DEVICE="eth1" -IPV6INIT="no" +IPV6INIT="yes" NM_CONTROLLED="no" ONBOOT="yes" TYPE="Ethernet" -BOOTPROTO="dhcp" -DEFROUTE="no" +BOOTPROTO="none" +BRIDGE=br1 diff --git a/roles/vrs-vm-predeploy/templates/route-br1.j2 b/roles/vrs-vm-predeploy/templates/route-br1.j2 new file mode 100644 index 0000000000..6b71133632 --- /dev/null +++ b/roles/vrs-vm-predeploy/templates/route-br1.j2 @@ -0,0 +1,3 @@ +{% for route in data_static_route %} +{{ route }} via {{ data_gateway }} +{% endfor %} diff --git a/roles/vrs-vm-predeploy/templates/vrs_vm.xml.j2 b/roles/vrs-vm-predeploy/templates/vrs_vm.xml.j2 new file mode 100644 index 0000000000..fd6f800c69 --- /dev/null +++ b/roles/vrs-vm-predeploy/templates/vrs_vm.xml.j2 @@ -0,0 +1,73 @@ + + {{ vm_name }} + {{ vrs_vm_ram | default("4") }} + {{ vrs_vm_ram | default("4") }} + {{ 
vrs_vm_vcpu | default("2") }} + + /machine + + + hvm + + + + + + + + + + destroy + restart + restart + + {{ libvirt.emulator }} + + + + + + + + + + + + + + + + +{% if data_ip is defined and data_netmask is defined %} + + + + +{% endif %} + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/roles/vrs-vm-predeploy/vars/Debian.yml b/roles/vrs-vm-predeploy/vars/Debian.yml new file mode 100644 index 0000000000..f9b1ae75ea --- /dev/null +++ b/roles/vrs-vm-predeploy/vars/Debian.yml @@ -0,0 +1,4 @@ +libvirt: + emulator: "/usr/bin/kvm" + user: "libvirt-qemu" + group: "kvm" diff --git a/roles/vrs-vm-predeploy/vars/RedHat.yml b/roles/vrs-vm-predeploy/vars/RedHat.yml new file mode 100644 index 0000000000..0e5354ddfc --- /dev/null +++ b/roles/vrs-vm-predeploy/vars/RedHat.yml @@ -0,0 +1,4 @@ +libvirt: + emulator: "/usr/libexec/qemu-kvm" + user: "qemu" + group: "qemu" diff --git a/roles/vrs-vm-predeploy/vars/main.yml b/roles/vrs-vm-predeploy/vars/main.yml new file mode 100644 index 0000000000..3751b948b9 --- /dev/null +++ b/roles/vrs-vm-predeploy/vars/main.yml @@ -0,0 +1,8 @@ +--- +# This file contains common variables that are usually +# not to be modified by users of the playbooks. +# +# Supported operating systems. 
+vrs_vm_target_server_os_family_list: + - Debian + - RedHat diff --git a/roles/vsc-backup/tasks/backup_vsc.yml b/roles/vsc-backup/tasks/backup_vsc.yml index 708760ff42..775bf045e8 100644 --- a/roles/vsc-backup/tasks/backup_vsc.yml +++ b/roles/vsc-backup/tasks/backup_vsc.yml @@ -43,7 +43,7 @@ (?i)yes: "yes" (?i)password: "{{ vsc_password|default('admin') }}" timeout: "{{ vsc_scp_timeout_seconds }}" - + - name: Transfer primary image from VSC to backup_machine expect: command: "{{ vsc_scp_backup_primary_image }}" diff --git a/roles/vsc-backup/tasks/main.yml b/roles/vsc-backup/tasks/main.yml index 2436294a9c..fb1d41239d 100644 --- a/roles/vsc-backup/tasks/main.yml +++ b/roles/vsc-backup/tasks/main.yml @@ -103,4 +103,4 @@ - name: Print 'admin save' when verbosity >= 1 debug: var=conf_save.stdout[1] verbosity=1 -- include: backup_vsc.yml +- import_tasks: backup_vsc.yml diff --git a/roles/vsc-backup/vars/main.yml b/roles/vsc-backup/vars/main.yml index 99b0b6bba5..d9b2813aa0 100644 --- a/roles/vsc-backup/vars/main.yml +++ b/roles/vsc-backup/vars/main.yml @@ -6,4 +6,5 @@ vsc_creds: vsc_scp_backup_primary_image: "scp {{ vsc_user|default('admin') }}@{{ mgmt_ip }}:{{ bof_json['primary_image_unix'] }} /tmp/{{ backup_folder }}/" vsc_scp_backup_bof: "scp {{ vsc_user|default('admin') }}@{{ mgmt_ip }}:/bof.cfg /tmp/{{ backup_folder }}/" vsc_scp_backup_config: "scp {{ vsc_user|default('admin') }}@{{ mgmt_ip }}:/config.cfg /tmp/{{ backup_folder }}/" +vsc_scp_backup_keys: "scp {{ vsc_user|default('admin') }}@{{ mgmt_ip }}:{{ item }} /tmp/{{ backup_folder }}/" vsc_scp_timeout: "{{ vsc_scp_timeout_seconds }}" diff --git a/roles/vsc-deploy/tasks/main.yml b/roles/vsc-deploy/tasks/main.yml index 7955bcacf4..ac933409ed 100644 --- a/roles/vsc-deploy/tasks/main.yml +++ b/roles/vsc-deploy/tasks/main.yml @@ -1,7 +1,38 @@ --- -- include: heat.yml +- import_tasks: heat.yml when: target_server_type | match("heat") tags: - vsc - heat - vsc-deploy + +- block: + - name: Change XMPP connection to 
TLS on VSD + command: /opt/vsd/bin/ejmode allow -y + delegate_to: "{{ item }}" + with_items: "{{ groups['vsds'] }}" + when: secure_communication + + - name: wait for ejabberd-status to become running + monit_waitfor_service: + name: "ejabberd-status" + timeout_seconds: 600 + test_interval_seconds: 30 + delegate_to: "{{ item }}" + with_items: "{{ groups['vsds'] }}" + + - name: wait for ejbca-status to become running + monit_waitfor_service: + name: "ejbca-status" + timeout_seconds: 600 + test_interval_seconds: 30 + delegate_to: "{{ item }}" + with_items: "{{ groups['vsds'] }}" + + remote_user: "{{ vsd_username }}" + +- name: setup TLS + include_role: + name: common + tasks_from: vsc-tls-setup + diff --git a/roles/vsc-deploy/templates/config.cfg.j2 b/roles/vsc-deploy/templates/config.cfg.j2 index f8c6963ff9..4c67ac445a 100644 --- a/roles/vsc-deploy/templates/config.cfg.j2 +++ b/roles/vsc-deploy/templates/config.cfg.j2 @@ -21,7 +21,7 @@ echo "System Configuration" #-------------------------------------------------- system security - tls-profile "vns-tls-profile" create + tls-profile "vsc-tls-profile" create shutdown exit exit diff --git a/roles/vsc-destroy/tasks/main.yml b/roles/vsc-destroy/tasks/main.yml index b04dd59054..ac1c743b5c 100644 --- a/roles/vsc-destroy/tasks/main.yml +++ b/roles/vsc-destroy/tasks/main.yml @@ -1,18 +1,18 @@ --- -- include: kvm.yml +- import_tasks: kvm.yml when: target_server_type | match("kvm") tags: - vsc - vsc-destroy -- include: heat.yml +- import_tasks: heat.yml when: target_server_type | match("heat") tags: - vsc - heat - vsc-destroy -- include: vcenter.yml +- import_tasks: vcenter.yml when: target_server_type | match("vcenter") tags: - vsc diff --git a/roles/vsc-health/tasks/main.yml b/roles/vsc-health/tasks/main.yml index 0fff7e0aa1..6bc4b07b2e 100644 --- a/roles/vsc-health/tasks/main.yml +++ b/roles/vsc-health/tasks/main.yml @@ -1,5 +1,5 @@ --- -- include: report_header.yml +- import_tasks: report_header.yml - block: @@ -271,13 
+271,13 @@ - name: Verify NTP status assert: - that: not ntp_status.stdout[0]|search('none') - msg: "NTP Status not okay: {{ ntp_status.stdout[0] }}. Quitting." + that: not ntp_status.stdout[0]|search('none') + msg: "NTP Status not okay: {{ ntp_status.stdout[0] }}. Quitting." ignore_errors: "{{ not nuage_upgrade }}" - name: Write NTP status to report nuage_append: filename="{{ report_path }}" text="{{ inventory_hostname }} NTP Status {{ ntp_status.stdout[0] }}\n" - - include: report_footer.yml + - import_tasks: report_footer.yml delegate_to: localhost diff --git a/roles/vsc-predeploy/tasks/main.yml b/roles/vsc-predeploy/tasks/main.yml index 09f2156e0e..2dba9bbdde 100644 --- a/roles/vsc-predeploy/tasks/main.yml +++ b/roles/vsc-predeploy/tasks/main.yml @@ -24,20 +24,20 @@ set_fact: node_present: "{{ node_reachable and sh_version is defined }}" -- include: kvm.yml +- import_tasks: kvm.yml when: target_server_type | match("kvm") tags: - vsc - vsc-predeploy -- include: heat.yml +- import_tasks: heat.yml when: target_server_type | match("heat") tags: - vsc - heat - vsc-predeploy -- include: vcenter.yml +- import_tasks: vcenter.yml when: target_server_type | match("vcenter") tags: - vsc diff --git a/roles/vsc-predeploy/templates/config.cfg.j2 b/roles/vsc-predeploy/templates/config.cfg.j2 index 17effbb3e1..2d9973ebf8 100644 --- a/roles/vsc-predeploy/templates/config.cfg.j2 +++ b/roles/vsc-predeploy/templates/config.cfg.j2 @@ -20,7 +20,7 @@ echo "System Configuration" #-------------------------------------------------- system security - tls-profile "vns-tls-profile" create + tls-profile "vsc-tls-profile" create shutdown exit exit @@ -99,14 +99,6 @@ echo "Service Configuration" static-route 0.0.0.0/0 next-hop {{ control_ip.split('.')[0] }}.{{ control_ip.split('.')[1] }}.{{ control_ip.split('.')[2] }}.1 {% endif %} - sgt-qos - application bgp dscp nc2 - application ntp dscp nc2 - application openflow dscp nc2 - application json-rpc dscp nc2 - application dtls-vxlan dscp nc2 - 
application dtls-ipsec dscp nc2 - exit ntp no shutdown exit diff --git a/roles/vsc-upgrade-deploy/tasks/main.yml b/roles/vsc-upgrade-deploy/tasks/main.yml index 4d3c27b781..7976e5b1e4 100644 --- a/roles/vsc-upgrade-deploy/tasks/main.yml +++ b/roles/vsc-upgrade-deploy/tasks/main.yml @@ -41,7 +41,7 @@ - name: Print image version in json when verbosity >= 1 debug: var=version_json verbosity=1 - + - name: Copy new VSC image to VSC nodes expect: command: "{{ vsc_image_copy }}" @@ -69,8 +69,10 @@ when: target_server_type | match('heat') - name: Wait for VSC ssh to be ready - include_role: + include_role: name: common tasks_from: wait-for-ssh vars: ssh_host: "{{ mgmt_ip }}" + + diff --git a/roles/vsc-vns-deploy/tasks/main.yml b/roles/vsc-vns-deploy/tasks/main.yml deleted file mode 100644 index b343919199..0000000000 --- a/roles/vsc-vns-deploy/tasks/main.yml +++ /dev/null @@ -1,84 +0,0 @@ ---- -- block: - - - name: Clean known_hosts - known_hosts: - name: "{{ mgmt_ip }}" - state: absent - delegate_to: localhost - no_log: True - ignore_errors: True - - - block: - - - name: Change XMPP connection to TLS on VSD - command: /opt/vsd/bin/ejmode allow -y - delegate_to: "{{ item }}" - with_items: "{{ groups['vsds'] }}" - - - name: wait for ejabberd-status to become running - monit_waitfor_service: - name: "ejabberd-status" - timeout_seconds: 600 - test_interval_seconds: 30 - delegate_to: "{{ item }}" - with_items: "{{ groups['vsds'] }}" - - - name: wait for ejbca-status to become running - monit_waitfor_service: - name: "ejbca-status" - timeout_seconds: 600 - test_interval_seconds: 30 - delegate_to: "{{ item }}" - with_items: "{{ groups['vsds'] }}" - - remote_user: "{{ vsd_username }}" - - - block: - - - name: Get VSD version - shell: echo $VSD_VERSION - register: vsd_version - delegate_to: "{{ groups['vsds'][0] }}" - - - name: Create and transfer certs from 4.0.4 VSD - command: "{{ create_certs_404 }}" - delegate_to: "{{ groups['vsds'][0] }}" - when: "'4.0.4' in 
vsd_version.stdout" - - - name: Create and transfer certs from VSD - command: "{{ create_certs }}" - delegate_to: "{{ groups['vsds'][0] }}" - when: "'4.0.4' not in vsd_version.stdout" - - remote_user: "{{ vsd_username }}" - - - name: Configure VSC - sros_config: - lines: - - configure system security tls-profile vns-tls-profile - - configure vswitch-controller open-flow tls-profile vns-tls-profile - - configure vswitch-controller xmpp tls-profile vns-tls-profile - - configure system time ntp ntp-server - - configure system security tls-profile vns-tls-profile own-key cf1:\{{ xmpp.username }}-Key.pem - - configure system security tls-profile vns-tls-profile own-certificate cf1:\{{ xmpp.username }}.pem - - configure system security tls-profile vns-tls-profile ca-certificate cf1:\{{ xmpp.username }}-CA.pem - - configure system security tls-profile vns-tls-profile no shutdown - provider: "{{ vsc_creds }}" - delegate_to: localhost - - - name: check xmpp connectivity between VSC and VSD after enabling TLS - sros_command: - commands: - - show vswitch-controller xmpp-server | match Functional - provider: "{{ vsc_creds }}" - register: xmpp_status - until: xmpp_status.stdout[0].find('Functional') != -1 - retries: 6 - delay: 10 - delegate_to: localhost - - - name: Print output of 'show vswitch-controller xmpp-server' when verbosity >= 1 - debug: var=xmpp_status verbosity=1 - - when: groups['vnsutils'] is defined diff --git a/roles/vsc-vns-deploy/vars/main.yml b/roles/vsc-vns-deploy/vars/main.yml deleted file mode 100644 index 45924fa4d3..0000000000 --- a/roles/vsc-vns-deploy/vars/main.yml +++ /dev/null @@ -1,17 +0,0 @@ ---- -# Post Deploy specific Variables - -# Timeout in seconds -retry_timeout: 120 - -# Command to generate and copy certs to vsc -create_certs: "/bin/sshpass -p{{ vsc_password }} /opt/vsd/ejbca/deploy/certMgmt.sh -a generate -u {{xmpp.username}} -c {{xmpp.username}} -d {{inventory_hostname}} -f pem -t server -s admin@{{inventory_hostname}}:/ -o csp" - -# 
Alternate Command to generate and copy certs to vsc on VSD 4.0.4 -create_certs_404: "/bin/sshpass -p{{ vsc_password }} /opt/vsd/ejbca/deploy/certMgmt.sh -a generate -u {{xmpp.username}} -c {{xmpp.username}} -d {{inventory_hostname}} -f pem -t server -s admin@{{inventory_hostname}}:/ -o csp -n VSPCA" - -vsc_creds: - host: "{{ mgmt_ip }}" - username: "{{ vsc_username|default('admin') }}" - password: "{{ vsc_password|default('admin') }}" - timeout: "{{ vsc_command_timeout_seconds }}" diff --git a/roles/vsc-vns-postdeploy/tasks/main.yml b/roles/vsc-vns-postdeploy/tasks/main.yml deleted file mode 100644 index 9ba5e1168b..0000000000 --- a/roles/vsc-vns-postdeploy/tasks/main.yml +++ /dev/null @@ -1,14 +0,0 @@ -- name: Clean known_hosts - known_hosts: - name: "{{ mgmt_ip }}" - state: absent - delegate_to: localhost - no_log: True - ignore_errors: True - -- name: Verify if vsc user is connected to the VSD - command: /opt/ejabberd/bin/ejabberdctl connected_users - register: proxy_user - remote_user: "{{ vsd_username }}" - delegate_to: "{{ groups['vsds'][0] }}" - failed_when: xmpp.username not in proxy_user.stdout diff --git a/roles/vsd-dbbackup/tasks/main.yml b/roles/vsd-dbbackup/tasks/main.yml index 06a4d43970..d523ec37bd 100644 --- a/roles/vsd-dbbackup/tasks/main.yml +++ b/roles/vsd-dbbackup/tasks/main.yml @@ -41,28 +41,11 @@ debug: var=mode_status verbosity=1 when: inventory_hostname in groups['vsds'] -- name: Reading the status of the DB upgrade directory - stat: - path: "/var/lib/mysql/nuageDbUpgrade/" - register: db_dir - remote_user: "{{ vsd_username }}" - -- name: Verify that DB upgrade directory exists - assert: - that: - - db_dir.stat.exists == True - msg: "nuageDbUpgrade dir does not exist" - -- name: Check that the database is properly identified by MySQL - shell: "mysql -e 'show databases;' | grep nuageDbUpgrade" - register: db - remote_user: "{{ vsd_username }}" - -- name: Verify the upgrade database name - assert: - that: - - "'nuageDbUpgrade' == db.stdout" - 
msg: "Could not find nuageDbUpgrade database in mysql" +- name: Read status of the DB upgrade directory and verify it + include_role: + name: common + tasks_from: vsd-verify-db-status + tags: vsd - block: - name: Read gateway purge timer diff --git a/roles/vsd-decouple/tasks/main.yml b/roles/vsd-decouple/tasks/main.yml index 084a79520c..38898e89fc 100644 --- a/roles/vsd-decouple/tasks/main.yml +++ b/roles/vsd-decouple/tasks/main.yml @@ -9,7 +9,7 @@ name: common tasks_from: vsd-reset-keystorepass -- include: report_header.yml +- import_tasks: report_header.yml - name: get the username running the deploy local_action: command whoami diff --git a/roles/vsd-deploy/tasks/heat.yml b/roles/vsd-deploy/tasks/heat.yml index 9ef9cf6c6f..85e68d39ef 100644 --- a/roles/vsd-deploy/tasks/heat.yml +++ b/roles/vsd-deploy/tasks/heat.yml @@ -192,6 +192,8 @@ include_role: name: common tasks_from: linux-ntp-sync + vars: + rem_user: "{{ vsd_username }}" - name: Install VSD software on standalone node command: /opt/vsd/vsd-install.sh -t s -y diff --git a/roles/vsd-deploy/tasks/main.yml b/roles/vsd-deploy/tasks/main.yml index 64fd541f6b..a81e0fd6aa 100644 --- a/roles/vsd-deploy/tasks/main.yml +++ b/roles/vsd-deploy/tasks/main.yml @@ -10,12 +10,12 @@ command: monit summary ignore_errors: yes register: monit_result - remote_user: root + remote_user: "{{ vsd_username }}" - name: Read the VSD version shell: echo $VSD_VERSION register: vsd_version - remote_user: root + remote_user: "{{ vsd_username }}" - name: Set if VSD versions match qcow2 set_fact: vsd_versions_match="{{ vsd_version.stdout == qcow2_file_name | regex_search('([0-9]+\\.[0-9]+\\.[0-9A-Za-z]+)') }}" @@ -36,13 +36,13 @@ - "*************************************************" when: skip_vsd_deploy -- include: non_heat.yml +- import_tasks: non_heat.yml when: not skip_vsd_deploy and not target_server_type | match("heat") tags: - vsd - vsd-deploy -- include: heat.yml +- import_tasks: heat.yml when: not skip_vsd_deploy and 
target_server_type | match("heat") tags: - vsd diff --git a/roles/vsd-deploy/tasks/non_heat.yml b/roles/vsd-deploy/tasks/non_heat.yml index c83cf62a17..a9102cf356 100644 --- a/roles/vsd-deploy/tasks/non_heat.yml +++ b/roles/vsd-deploy/tasks/non_heat.yml @@ -1,55 +1,55 @@ --- - block: - - block: - - - name: Read the VSD version - shell: echo $VSD_VERSION - register: vsd_full_version - - - name: Set Major, Minor and Patch VSD version - set_fact: - vsd_major_version: "{{ vsd_full_version.stdout.split('.')[0] }}" - vsd_minor_version: "{{ vsd_full_version.stdout.split('.')[1] }}" - vsd_patch_version: "{{ vsd_full_version.stdout.split('.')[2].split('U')[0] }}" + - name: Read the VSD version + shell: echo $VSD_VERSION + register: vsd_full_version - - debug: var=vsd_full_version.stdout verbosity=1 + - name: Set Major, Minor and Patch VSD version + set_fact: + vsd_major_version: "{{ vsd_full_version.stdout.split('.')[0] }}" + vsd_minor_version: "{{ vsd_full_version.stdout.split('.')[1] }}" + vsd_patch_version: "{{ vsd_full_version.stdout.split('.')[2].split('U')[0] }}" - - debug: var=vsd_major_version verbosity=1 + - debug: var=vsd_full_version.stdout verbosity=1 - - debug: var=vsd_minor_version verbosity=1 + - debug: var=vsd_major_version verbosity=1 - - debug: var=vsd_patch_version verbosity=1 + - debug: var=vsd_minor_version verbosity=1 - - block: + - debug: var=vsd_patch_version verbosity=1 - - name: Set VSD numbering for install - set_fact: - vsd_cluster_node_1: "{{ groups['vsds'][0] }}" - vsd_cluster_node_2: "{{ groups['vsds'][1] }}" - vsd_cluster_node_3: "{{ groups['vsds'][2] }}" - when: not nuage_upgrade + - name: Set deploy_vcin to false (deploy vsd) + set_fact: + deploy_vcin: false - - name: Set VSD numbering for upgrade - set_fact: - vsd_cluster_node_1: "{{ groups['vsds'][1] }}" - vsd_cluster_node_2: "{{ groups['vsds'][2] }}" - vsd_cluster_node_3: "{{ groups['vsds'][0] }}" - when: nuage_upgrade + - name: Overwrite deploy_vcin to true (deploy vcin) + set_fact: + 
deploy_vcin: true + when: + - vcin_mode is defined + - vcin_mode + - (vsd_major_version|int > 5) or + (vsd_major_version|int >= 5 and vsd_minor_version|int > 2) or + (vsd_major_version|int >= 5 and vsd_minor_version|int >= 2 and vsd_patch_version|int >= 2) - when: vsd_sa_or_ha | match ('ha') + - block: - - name: Set deploy_vcin to false (deploy vsd) + - name: Set VSD numbering for install set_fact: - deploy_vcin: false + vsd_cluster_node_1: "{{ groups['vsds'][0] }}" + vsd_cluster_node_2: "{{ groups['vsds'][1] }}" + vsd_cluster_node_3: "{{ groups['vsds'][2] }}" + when: not nuage_upgrade - - name: Overwrite deploy_vcin to true (deploy vcin) + - name: Set VSD numbering for upgrade set_fact: - deploy_vcin: true - when: - - vcin_mode is defined - - vcin_mode + vsd_cluster_node_1: "{{ groups['vsds'][1] }}" + vsd_cluster_node_2: "{{ groups['vsds'][2] }}" + vsd_cluster_node_3: "{{ groups['vsds'][0] }}" + when: nuage_upgrade + when: vsd_sa_or_ha | match ('ha') run_once: True - block: @@ -116,6 +116,34 @@ when: inventory_hostname == vsd_cluster_node_1 or (inventory_hostname == vsd_cluster_node_3 and nuage_upgrade) when: vsd_sa_or_ha | match('ha') + + - block: + + - name: Generate SSH key on master VCIN + delegate_to: "{{ master_vcin }}" + user: + name: "{{ vsd_username }}" + generate_ssh_key: yes + register: master_vcin_ssh_key + + - name: Generate SSH key on slave VCIN + user: + name: "{{ vsd_username }}" + generate_ssh_key: yes + register: slave_vcin_ssh_key + + - name: Add master VCIN SSH key to slave VCIN + authorized_key: + key: "{{ master_vcin_ssh_key.ssh_public_key }}" + user: "{{ vsd_username }}" + + - name: Add slave VCIN SSH key to master VCIN + delegate_to: "{{ master_vcin }}" + authorized_key: + key: "{{ slave_vcin_ssh_key.ssh_public_key }}" + user: "{{ vsd_username }}" + + when: master_vcin is defined - name: Configure yum proxy lineinfile: @@ -144,6 +172,8 @@ include_role: name: common tasks_from: linux-ntp-sync + vars: + rem_user: "{{ vsd_username }}" - block: 
@@ -157,7 +187,42 @@ command: /opt/vsd/vsd-install.sh -t v -y when: deploy_vcin - when: vsd_sa_or_ha | match ('sa') + when: + - vsd_sa_or_ha | match ('sa') + - master_vcin is not defined + + - block: + + - name: Preparing the master + delegate_to: "{{ master_vcin }}" + command: /opt/vsd/bin/vsd-prepare-replication-master-cluster.sh + + - name: Preparing replication on the master + delegate_to: "{{ master_vcin }}" + command: "/opt/vsd/bin/vsd-prepare-replication-master.sh -a {{ inventory_hostname }}" + + - name: Creating the data folder on the slave + file: + path: /opt/vsd/data/ + state: directory + + - name: Syncing the backup from the master to the local system + delegate_to: "{{ master_vcin }}" + synchronize: + dest: /opt/vsd/data/ + src: /tmp/backup/ + mode: push + + - name: Install VCIN software on slave VCIN node + command: /opt/vsd/vsd-install.sh -t v -y + + - name: Start the replication + command: "/opt/vsd/bin/vsd-start-replication-slave -m {{ master_vcin }}" + + when: + - vsd_sa_or_ha | match ('sa') + - deploy_vcin + - master_vcin is defined - block: diff --git a/roles/vsd-destroy/tasks/main.yml b/roles/vsd-destroy/tasks/main.yml index 7bffdf5804..9b6c26c6ba 100644 --- a/roles/vsd-destroy/tasks/main.yml +++ b/roles/vsd-destroy/tasks/main.yml @@ -1,20 +1,20 @@ --- - block: - - include: kvm.yml + - import_tasks: kvm.yml when: target_server_type | match("kvm") tags: - vsd - vsd-destroy - - include: heat.yml + - import_tasks: heat.yml when: target_server_type | match("heat") tags: - vsd - heat - vsd-destroy - - include: vcenter.yml + - import_tasks: vcenter.yml when: target_server_type | match("vcenter") tags: - vsd diff --git a/roles/vsd-ha-upgrade-block-access/tasks/main.yml b/roles/vsd-ha-upgrade-block-access/tasks/main.yml index 44fc7c6b10..417d78dcbf 100644 --- a/roles/vsd-ha-upgrade-block-access/tasks/main.yml +++ b/roles/vsd-ha-upgrade-block-access/tasks/main.yml @@ -2,5 +2,5 @@ - name: Block access to VSD1 and VSD2 from VSD3 command: "{{ item }}" 
with_items: "{{ iptable_entries }}" - remote_user: root + remote_user: "{{ vsd_username }}" run_once: true diff --git a/roles/vsd-health/tasks/main.yml b/roles/vsd-health/tasks/main.yml index 376965c820..e73411ba2e 100644 --- a/roles/vsd-health/tasks/main.yml +++ b/roles/vsd-health/tasks/main.yml @@ -14,13 +14,23 @@ no_log: True ignore_errors: True -- include: report_header.yml +- import_tasks: report_header.yml - name: Get current version of VSD software command: echo $VSD_VERSION register: vsd_version remote_user: "{{ vsd_username }}" +- name: Get configured VSD Hostname + command: hostname -f + register: hostname_output + remote_user: "{{ vsd_username }}" + +- name: Verify configured VSD hostname + assert: + that: "hostname_output.stdout|search('{{ inventory_hostname }}')" + msg: "Configured VSD hostname does not match expected VSD hostname" + - name: Write VSD version to json file nuage_append: filename="{{ report_path }}" text="{{ vsd_version.stdout | to_nice_json}}\n" delegate_to: localhost @@ -38,7 +48,7 @@ nuage_append: filename="{{ report_path }}" text="{{ net_conf.info | to_nice_json}}\n" delegate_to: localhost -- include: monit_status.yml +- import_tasks: monit_status.yml - name: Execute list_p1db command on VSD(s) command: "{{ p1db_cmd }}" @@ -85,6 +95,16 @@ nuage_append: filename="{{ report_path }}" text="{{ inventory_hostname }} {{ ejabberd_users_json|to_nice_json }}\n" delegate_to: localhost +- name: Verify connected VSCs + assert: + that: "user_list.stdout|search('{{ hostvars[item].xmpp.username }}')" + msg: "{{ hostvars[item].xmpp.username }} could not be found in '/opt/ejabberd/bin/ejabberdctl connected_users'" + remote_user: "{{ vsd_username }}" + with_items: "{{ groups['vscs'] }}" + when: + - groups['vscs'] is defined + - not skip_vsc | default('False') + - name: Get VSD deployment mode include_role: name: common @@ -110,4 +130,23 @@ msg: "keyserver@{{ vsd_fqdn }} could not be found in '/opt/ejabberd/bin/ejabberdctl connected_users'" when: 
vsd_sa_or_ha|match('ha') -- include: report_footer.yml +- block: + - name: Verify that REST and JMS gateway is reachable + uri: + url: https://{{ vsd_fqdn }}:{{ item }} + method: GET + user: "{{ vsd_auth.username }}" + password: "{{ vsd_auth.password }}" + status_code: 200 + validate_certs: False + register: webresult + ignore_errors: yes + with_items: + - "8443/index.html" + - 61619 + + - name: write web interface result + nuage_append: filename="{{ report_path }}" text="{{ webresult | to_nice_json}}\n" + delegate_to: localhost + +- import_tasks: report_footer.yml diff --git a/roles/vsd-health/tasks/monit_status.yml b/roles/vsd-health/tasks/monit_status.yml index 9f4eb20dd1..89f4d2798e 100644 --- a/roles/vsd-health/tasks/monit_status.yml +++ b/roles/vsd-health/tasks/monit_status.yml @@ -1,22 +1,24 @@ -- name: Get monit summary for vsd processes - vsd_monit: - group: all - register: vsd_proc_pre - remote_user: root +--- +- block: -- name: wait for VSD common, core and stats services to become running - monit_waitfor_service: - name: "{{ item }}" - timeout_seconds: 1200 - test_interval_seconds: 30 - with_items: "{{ vsd_proc_pre['state'].keys() }}" - remote_user: "root" + - name: Get monit summary for vsd processes + vsd_monit: + group: all + register: vsd_proc_pre -- name: Get monit summary for vsd processes - vsd_monit: - group: all - register: vsd_proc - remote_user: root + - name: wait for VSD common, core and stats services to become running + monit_waitfor_service: + name: "{{ item }}" + timeout_seconds: 1200 + test_interval_seconds: 30 + with_items: "{{ vsd_proc_pre['state'].keys() }}" + + - name: Get monit summary for vsd processes + vsd_monit: + group: all + register: vsd_proc + + remote_user: "{{ vsd_username }}" - name: Print monit status when verbosity >= 1 debug: var=vsd_proc verbosity=1 diff --git a/roles/vsd-postdeploy/tasks/main.yml b/roles/vsd-postdeploy/tasks/main.yml index 0267041ef4..ee9431df8f 100644 --- a/roles/vsd-postdeploy/tasks/main.yml +++ 
b/roles/vsd-postdeploy/tasks/main.yml @@ -3,3 +3,4 @@ include_role: name="vsd-health" vars: report_filename: vsd-postdeploy-health.yml + skip_vsc: True diff --git a/roles/vsd-predeploy/tasks/main.yml b/roles/vsd-predeploy/tasks/main.yml index 8c61a4ff5e..aabb36a7ad 100644 --- a/roles/vsd-predeploy/tasks/main.yml +++ b/roles/vsd-predeploy/tasks/main.yml @@ -6,7 +6,7 @@ - name: Get VSD directory stat stat: path: /opt/vsd - remote_user: root + remote_user: "{{ vsd_username }}" register: vsd_dir when: node_reachable @@ -17,20 +17,20 @@ vsd_dir.stat.isdir is defined and vsd_dir.stat.isdir }}" -- include: kvm.yml +- import_tasks: kvm.yml when: target_server_type | match("kvm") tags: - vsd - vsd-predeploy -- include: heat.yml +- import_tasks: heat.yml when: target_server_type | match("heat") tags: - vsd - heat - vsd-predeploy -- include: vcenter.yml +- import_tasks: vcenter.yml when: target_server_type | match("vcenter") tags: - vsd diff --git a/roles/vsd-preupgrade/tasks/main.yml b/roles/vsd-preupgrade/tasks/main.yml index ca08cfa426..9cd14b509a 100644 --- a/roles/vsd-preupgrade/tasks/main.yml +++ b/roles/vsd-preupgrade/tasks/main.yml @@ -18,7 +18,7 @@ - name: Print vsd_version output when verbosity >= 1 debug: var=vsd_version verbosity=1 - - name: Check VSD License additionalsupportedversions + - name: Check VSD License check_vsd_license_validity: vsd_auth: "{{ vsd_auth }}" diff --git a/roles/vsd-services-stop/tasks/main.yml b/roles/vsd-services-stop/tasks/main.yml index f5c637b020..2fb0b584cd 100644 --- a/roles/vsd-services-stop/tasks/main.yml +++ b/roles/vsd-services-stop/tasks/main.yml @@ -1,6 +1,8 @@ --- - block: + - block: + - name: Stop vsd statistics services on node1 and node2 shell: "{{ stop_stats }}" @@ -20,51 +22,51 @@ with_items: "{{ list_stats_pids.stdout_lines|default([]) }}" ignore_errors: yes when: list_stats_pids.stdout.strip()!= "" + when: groups['mystats'] is defined and groups['myvstats'] - - name: Stop vsd core services on node1 and node2 - shell: 
"{{ stop_vsd_core }}" - remote_user: "root" - - - name: Pause for proccesses to exit - pause: - seconds: 20 + - block: + + - name: Stop vsd core services + shell: "{{ stop_vsd_core }}" + + - name: Pause for processes to exit + pause: + seconds: 20 + + - name: Check for left over vsd-core processes + shell: "{{ core_pids }}" + register: list_core_pids - - name: Check for left over vsd-core proccesses - shell: "{{ core_pids }}" - register: list_core_pids - remote_user: "root" + - name: Print core_pids output when verbosity >= 1 + debug: var=list_core_pids verbosity=1 - - name: Print core_pids output when verbosity >= 1 - debug: var=list_core_pids verbosity=1 + - name: Kill the vsd-core pids if they exist + shell: "kill -9 {{ item }}" + with_items: "{{ list_core_pids.stdout_lines|default([]) }}" + ignore_errors: yes + when: list_core_pids.stdout.strip()!="" - - name: Kill the vsd-core pids if they exists - shell: "kill -9 {{ item }}" - with_items: "{{ list_core_pids.stdout_lines|default([]) }}" - remote_user: "root" - ignore_errors: yes - when: list_core_pids.stdout.strip()!="" + - name: Stop vsd common services + shell: "{{ stop_vsd_common }}" - - name: Stop vsd common services - shell: "{{ stop_vsd_common }}" - remote_user: "root" + - name: Pause for processes to exit + pause: + seconds: 20 - - name: Pause for proccesses to exit - pause: - seconds: 20 + - name: Check for left over vsd common processes + shell: "{{ common_pids }}" + register: list_common_pids - - name: Check for left over vsd common proccesses - shell: "{{ common_pids }}" - register: list_common_pids - remote_user: "root" + - name: Print common_pids output when verbosity >= 1 + debug: var=list_common_pids verbosity=1 - - name: Print common_pids output when verbosity >= 1 - debug: var=list_common_pids verbosity=1 + - name: Kill the vsd-common pids if they exist + shell: "kill -9 {{ item }}" + with_items: "{{ list_common_pids.stdout_lines|default([]) }}" + ignore_errors: yes + when: 
list_common_pids.stdout.strip()!="" + + remote_user: "{{ vsd_username }}" - - name: Kill the vsd-common pids if they exists - shell: "kill -9 {{ item }}" - with_items: "{{ list_common_pids.stdout_lines|default([]) }}" - remote_user: "root" - ignore_errors: yes - when: list_common_pids.stdout.strip()!="" when: inventory_hostname == groups['vsds'][0] or inventory_hostname == groups['vsds'][1] diff --git a/roles/vsd-upgrade-complete/tasks/main.yml b/roles/vsd-upgrade-complete/tasks/main.yml index 9d1a382d20..01b6194636 100644 --- a/roles/vsd-upgrade-complete/tasks/main.yml +++ b/roles/vsd-upgrade-complete/tasks/main.yml @@ -21,6 +21,16 @@ - name: Print vsd_version output when verbosity >= 1 debug: var=vsd_version verbosity=1 + - name: Check VSD License + check_vsd_license_validity: + vsd_auth: + "{{ vsd_auth }}" + api_version: "{{ vsd_version.stdout }}" + register: license_valid + delegate_to: localhost + + - debug: var=license_valid verbosity=1 + - name: Set upgrade complete flag shell: "{{ upgrade_complete_flag_command }}" register: result diff --git a/roles/vsd-upgrade-postdeploy/tasks/main.yml b/roles/vsd-upgrade-postdeploy/tasks/main.yml index 8d0172cfc2..34b3460b75 100644 --- a/roles/vsd-upgrade-postdeploy/tasks/main.yml +++ b/roles/vsd-upgrade-postdeploy/tasks/main.yml @@ -1,22 +1,23 @@ --- -- name: Get monit summary for all process on VSD - vsd_monit: - group: all - register: proc_list - remote_user: "root" +- block: -- name: Wait for VSD common , core and stats services to become running - monit_waitfor_service: - name: "{{ item }}" - timeout_seconds: 1200 - test_interval_seconds: 30 - with_items: "{{ proc_list['state'].keys() }}" - remote_user: "root" + - name: Get monit summary for all process on VSD + vsd_monit: + group: all + register: proc_list -- name: Get current version of VSD software to use it in loading correct vspk version - command: echo $VSD_VERSION - register: vsd_version - remote_user: "root" + - name: Wait for VSD common, core and stats 
services to become running + monit_waitfor_service: + name: "{{ item }}" + timeout_seconds: 1200 + test_interval_seconds: 30 + with_items: "{{ proc_list['state'].keys() }}" + + - name: Get current version of VSD software to use it in loading correct vspk version + command: echo $VSD_VERSION + register: vsd_version + + remote_user: "{{ vsd_username }}" - name: Print vsd_version output when verbosity >= 1 debug: var=vsd_version verbosity=1 diff --git a/roles/vsd-upgrade-prepare-for-deploy/tasks/main.yml b/roles/vsd-upgrade-prepare-for-deploy/tasks/main.yml index a66e48b8c9..b09a5b213c 100644 --- a/roles/vsd-upgrade-prepare-for-deploy/tasks/main.yml +++ b/roles/vsd-upgrade-prepare-for-deploy/tasks/main.yml @@ -5,30 +5,30 @@ vars: ssh_host: "{{ mgmt_ip }}" -- name: Create the directory on VSD to store backup files in - file: - path: "/opt/vsd/data" - state: directory - remote_user: root - -- name: Copy backup files from backup_machine - copy: - src: "{{ item }}" - dest: "/opt/vsd/data/" - with_fileglob: - - "{{metro_backup_root}}/backup-{{ groups['vsds'][0] }}-latest/*.tar.gz" - remote_user: root - -- name: Delete log files from /opt/vsd/data/ directory - shell: "rm -rf /opt/vsd/data/*.log" - remote_user: root - -- name: Get list of files in /opt/vsd/data/ directory - find: - path: "/opt/vsd/data/" - pattern: "*.tar.gz" - register: lst_files - remote_user: root +- block: + + - name: Create the directory on VSD to store backup files in + file: + path: "/opt/vsd/data" + state: directory + + - name: Copy backup files from backup_machine + copy: + src: "{{ item }}" + dest: "/opt/vsd/data/" + with_fileglob: + - "{{metro_backup_root}}/backup-{{ groups['vsds'][0] }}-latest/*.tar.gz" + + - name: Delete log files from /opt/vsd/data/ directory + shell: "rm -rf /opt/vsd/data/*.log" + + - name: Get list of files in /opt/vsd/data/ directory + find: + path: "/opt/vsd/data/" + pattern: "*.tar.gz" + register: lst_files + + remote_user: "{{ vsd_username }}" - name: Verify 
/opt/vsd/data/ directory contains exactly 3 files assert: diff --git a/roles/vsd-vns-postdeploy/tasks/main.yml b/roles/vsd-vns-postdeploy/tasks/main.yml index eab6b52f4d..bc2ac364d3 100644 --- a/roles/vsd-vns-postdeploy/tasks/main.yml +++ b/roles/vsd-vns-postdeploy/tasks/main.yml @@ -10,7 +10,7 @@ command: /opt/vsd/bin/ejmode status delegate_to: "{{ item }}" register: ejmode - remote_user: root + remote_user: "{{ vsd_username }}" with_items: "{{ groups['vsds'] }}" failed_when: "'allow' not in ejmode.stdout" run_once: true diff --git a/roles/vsr-deploy/README.md b/roles/vsr-deploy/README.md new file mode 100644 index 0000000000..ea0fde3170 --- /dev/null +++ b/roles/vsr-deploy/README.md @@ -0,0 +1,10 @@ +This role configures a number of sections for a 7x50 VSR: +- Sets up connected ports +- Sets up basic system settings +- Sets up router configuration + - System IP + - Dataplane IP, typically wired into a Nuage underlay network + - OSPF + - BGP with a group of other VSRs and optionally Nuage VSCs + +Tested with R15.0 and R16.0 diff --git a/roles/vsr-deploy/tasks/main.yml b/roles/vsr-deploy/tasks/main.yml index d6718d9831..8fa4904fc1 100644 --- a/roles/vsr-deploy/tasks/main.yml +++ b/roles/vsr-deploy/tasks/main.yml @@ -3,7 +3,7 @@ module: sros_command commands: show system license wait_for: result[0] contains sros - provider: "{{ cli }}" + provider: "{{ provider_creds }}" register: vsr_license_info remote_user: "{{ target_server_username }}" delegate_to: "{{ target_server }}" @@ -14,47 +14,59 @@ fail: msg='VSR reports about "missing license record". Try redeploy with valid license file.' 
when: '"License status : card reboot pending, missing license record" == vsr_license_info.stdout_lines[0][3]' -- name: Set system name - local_action: - module: sros_config - lines: - - configure system name "{{ inventory_hostname }}" - provider: "{{ cli }}" -- name: Set ssh preserve-key +- name: Configure DNS in BOF local_action: - module: sros_config - lines: - - configure system security ssh preserve-key - provider: "{{ cli }}" + module: sros_command + commands: + - "bof dns-domain {{ dns_domain }}" + - "bof primary-dns {{ dns_server_list[0] }}" + - "bof save" + provider: "{{ provider_creds }}" -- name: Set ntp servers +- name: Create rollback point local_action: module: sros_config lines: - - configure system time ntp server {{ item }} - provider: "{{ cli }}" - with_items: '{{ ntp_server_list }}' + - "admin rollback save comment \"Before Metro-Config {{ lookup('pipe', 'date -u +%Y-%m-%d-%H:%M:%s') }}\"" + provider: "{{ provider_creds }}" + +- name: Ensure build directory exists to store config fragments in + local_action: + module: file + state: directory + path: "{{ buildpath }}/{{ inventory_hostname }}" + -- name: Configure VSR cards +- name: Set configuration fragments + set_fact: + config_items: + - { file: "system.cfg", prio: "10" } + - { file: "ports.cfg", prio: "20" } + - { file: "router.cfg", prio: "30" } + +- name: Generate configuration fragments local_action: - module: sros_config - lines: - - configure card 1 card-type iom-v - - configure card 1 mda 1 mda-type m20-v - provider: "{{ cli }}" + module: template + src: "{{ item.file }}.j2" + dest: "{{ buildpath }}/{{ inventory_hostname }}/{{ item.prio }}-{{ item.file}}" + with_items: "{{ config_items }}" -- debug: msg='{{ lookup("file", deploy_cfg_file ).split('\n') }}' verbosity=1 +- name: Show rootified commands that will be sent to VSR + debug: msg='{{ lookup("template", "{{ item.file }}.j2" ) | sros_rootify }}' verbosity=1 + with_items: "{{ config_items }}" -- name: Configure VSR from deploy_cfg_file 
+- name: Configure additional configuration to integrate VSR with Nuage VSD local_action: module: sros_config - lines: '{{ lookup("file", deploy_cfg_file ).split("\n") }}' - provider: "{{ cli }}" - when: deploy_cfg_file is defined + lines: '{{ lookup("template", "{{ item.file }}.j2" ) | sros_rootify }}' + provider: "{{ provider_creds }}" + with_items: "{{ config_items }}" + - name: Save VSR config local_action: module: sros_config save: yes - provider: "{{ cli }}" + provider: "{{ provider_creds }}" + diff --git a/roles/vsr-deploy/templates/ports.cfg.j2 b/roles/vsr-deploy/templates/ports.cfg.j2 new file mode 100644 index 0000000000..25e7c2e3bb --- /dev/null +++ b/roles/vsr-deploy/templates/ports.cfg.j2 @@ -0,0 +1,30 @@ +#-------------------------------------------------- +echo "Card Configuration" +#-------------------------------------------------- + card 1 + card-type iom-v + mda 1 + mda-type m20-v + no shutdown + exit + no shutdown + exit + +#-------------------------------------------------- +echo "Port Configuration" +#-------------------------------------------------- +{% for port in ports_to_hv_bridges[1:] %} + port 1/1/{{ loop.index }} + no shutdown + ethernet + lldp + dest-mac nearest-bridge + admin-status tx-rx + tx-tlvs port-desc sys-name sys-desc sys-cap + tx-mgmt-address system + exit + exit + exit + exit +{% endfor %} + diff --git a/roles/vsr-deploy/templates/router.cfg.j2 b/roles/vsr-deploy/templates/router.cfg.j2 new file mode 100644 index 0000000000..a120669eaa --- /dev/null +++ b/roles/vsr-deploy/templates/router.cfg.j2 @@ -0,0 +1,81 @@ +#jinja2:lstrip_blocks: True +#-------------------------------------------------- +echo "Router (Network Side) Configuration" +#-------------------------------------------------- + router Base +{% if router.data_ip is defined %} + interface "data" + address {{ router.data_ip | ipaddr('host/prefix') }} + port 1/1/2 + no shutdown + exit +{% endif %} + interface "system" + address {{ router.system_ip | 
ipaddr('address') }}/32 + no shutdown + exit +{% if as_number is defined %} + autonomous-system {{ as_number }} +{% endif %} +{% if router.system_ip is defined %} + router-id {{ router.system_ip | ipaddr('address') }} +{% endif %} + +#-------------------------------------------------- +echo "OSPFv2 Configuration" +#-------------------------------------------------- + ospf 0 + area 0.0.0.0 + interface "system" + no shutdown + exit +{% if router.data_ip is defined %} + interface "data" + no shutdown + mtu 1500 + exit +{% endif %} + exit + no shutdown + exit +#-------------------------------------------------- +echo "BGP Configuration" +#-------------------------------------------------- + bgp + connect-retry 2 + min-route-advertisement 1 + enable-peer-tracking + rapid-withdrawal + rapid-update evpn + group "vsr" + family evpn + type internal + cluster 1.1.1.1 +{% for dcgw_item in groups['vsrs'] %} + {%- if dcgw_item != inventory_hostname %} + {%- if hostvars[dcgw_item].router.system_ip is defined %} + neighbor {{ hostvars[dcgw_item].router.system_ip | ipaddr('address') }} + no shutdown + exit + exit + {% endif %} + {% endif %} +{% endfor %} + exit +{% if nuage_integration %} + group "nuage_controllers" + family evpn + type internal +{% for vsc_item in groups['vscs'] %} + {%- if hostvars[vsc_item].system_ip is defined %} + neighbor {{ hostvars[vsc_item].system_ip | ipaddr('address') }} + no shutdown + exit + exit + {% endif %} +{% endfor %} +{% endif %} + exit + no shutdown + exit + exit diff --git a/roles/vsr-deploy/templates/system.cfg.j2 b/roles/vsr-deploy/templates/system.cfg.j2 new file mode 100644 index 0000000000..7c8c6b373d --- /dev/null +++ b/roles/vsr-deploy/templates/system.cfg.j2 @@ -0,0 +1,67 @@ +#------------------------------------------------- +echo "System Configuration" +#-------------------------------------------------- + system + name "{{ inventory_hostname.split('.')[0] | lower }}" + snmp + packet-size 9216 + exit + login-control + 
exponential-backoff + exit + time + ntp +{% for ntp_server in ntp_server_list %} + server {{ ntp_server }} +{% endfor %} + no shutdown + exit + sntp + shutdown + exit + exit + lldp + tx-interval 10 + tx-hold-multiplier 3 + reinit-delay 5 + notification-interval 10 + exit + rollback + rollback-location "cf3:/rollback/config.cfg" + exit + netconf + no shutdown + exit + exit +#-------------------------------------------------- +echo "System Security Configuration" +#-------------------------------------------------- + system + security + user "netops" +{# Password : Net0ps #} + password "$2y$10$TLBciWmGqy2Wa5HPQ2vRo.py.eUFOTm8v1dAL3hP8H0AjkTwUh5f." + access console ftp snmp netconf + console + cannot-change-password + member "default" + member "administrative" + exit + exit + snmp + community "uy29ENQixgHSYMeiPthuDk" hash2 rwa version both + exit + ssh + preserve-key + exit + profile "administrative" + netconf + base-op-authorization + lock + exit + exit + exit + exit + exit + + diff --git a/roles/vsr-deploy/vars/main.yml b/roles/vsr-deploy/vars/main.yml new file mode 100644 index 0000000000..86d9f64ad9 --- /dev/null +++ b/roles/vsr-deploy/vars/main.yml @@ -0,0 +1,8 @@ +provider_creds: + host: "{{ mgmt_ip }}" + username: "{{ vsr_user|default('admin') }}" + password: "{{ vsr_password|default('admin') }}" + +rollbackdir: "cf3:/rollback" + +as_number: 65000 diff --git a/roles/vsr-destroy/tasks/main.yml b/roles/vsr-destroy/tasks/main.yml index db791f4c67..7385858d3c 100644 --- a/roles/vsr-destroy/tasks/main.yml +++ b/roles/vsr-destroy/tasks/main.yml @@ -1,5 +1,5 @@ --- -- include: kvm.yml +- import_tasks: kvm.yml when: target_server_type | match("kvm") tags: - vsr diff --git a/roles/vsr-postdeploy/files/l2-aa-with-i-es.py b/roles/vsr-postdeploy/files/l2-aa-with-i-es.py new file mode 100755 index 0000000000..3a4e480cc8 --- /dev/null +++ b/roles/vsr-postdeploy/files/l2-aa-with-i-es.py @@ -0,0 +1,196 @@ +from alc import dyn +# note 1: an I-ES must be manually configured 
upfront (under 'configure service system bgp-evpn ethernet-segment ...' - see I-ES ACG). This only needs to be done once; the I-ES can then be re-used for multiple fully dynamically created services. +# note 2: the script offsets the dyn.select_free_id function with 500 to avoid issues with other L2VXLAN-IRB/L3VXLAN services. Hence the configured I-ES service range starting point should be 500 higher than the VSD service range start point. E.g. VSD service range 64000-64999 and I-ES service range 64500-64999. +# note 3: this script can be used for a redundant DC-GW pair. Make sure you use a different RD on each DC-GW +# note 4: in case you use redundant DC-GWs you also need to provide +# routing policies (manually at bgp level) to avoid routing loops. See +# also ACG. + +# example of metadata to be added in VSD WAN Service: +# "rddc=1:30000,rdwan=10.0.0.1:3000,rtwan=65000:3000" + +# example of tools cli to test this script: tools perform service vsd evaluate-script domain-name "l2dom1" type l2-domain action setup policy "py-l2-I-ES" vni 1234 rt-i target:1:1 rt-e target:1:1 metadata "rddc=1:30000,rdwan=10.0.0.1:3000,rtwan=65000:3000" +# teardown example cli: tools perform service vsd evaluate-script +# domain-name "l2dom1" type l2-domain action teardown policy "py-l2-I-ES" +# vni 1234 rt-i target:1:1 rt-e target:1:1 + + +def setup_script(vsdParams): + + print ("These are the VSD params: " + str(vsdParams)) + servicetype = vsdParams['servicetype'] + vni = vsdParams['vni'] + rtdc = vsdParams['rt'] + +# add "target:" if provisioned by VSD (VSD uses x:x format whereas tools +# command uses target:x:x format) + if not rtdc.startswith('target'): + rtdc = "target:" + rtdc + + metadata = vsdParams['metadata'] + +# remove trailing space at the end of the metadata + metadata = metadata.rstrip() + + print ("VSD metadata" + str(metadata)) + + metadata = dict(e.split('=') for e in metadata.split(',')) + print ("Modified metadata" + str(metadata)) + vplsSvc_id = 
str(int(dyn.select_free_id("service-id")) + 500) + print ("this is the free svc id picked up by the system: " + vplsSvc_id) + + if servicetype == "L2DOMAIN": + + rddc = metadata['rddc'] + rdwan = metadata['rdwan'] + rtwan = metadata['rtwan'] + if not rtwan.startswith('target'): + rtwan = "target:" + rtwan + print ('servicetype, VPLS id, rtdc, vni, rddc, rdwan, rtwan:', + servicetype, vplsSvc_id, rtdc, vni, rddc, rdwan, rtwan) + dyn.add_cli(""" + configure service + vpls %(vplsSvc_id)s customer 1 create + description vpls%(vplsSvc_id)s + bgp + route-distinguisher %(rddc)s + route-target %(rtdc)s + exit + bgp 2 + route-distinguisher %(rdwan)s + route-target %(rtwan)s + exit + vxlan vni %(vni)s create + exit + bgp-evpn + evi %(vplsSvc_id)s + vxlan + no shut + exit + mpls + ingress-replication-bum-label + ecmp 2 + bgp-instance 2 + auto-bind-tunnel + resolution any + exit + no shutdown + exit + exit + no shutdown + exit + exit + exit + """ % {'vplsSvc_id': vplsSvc_id, 'vni': vsdParams['vni'], 'rtdc': rtdc, 'rddc': rddc, 'rdwan': rdwan, 'rtwan': rtwan}) + # L2DOMAIN returns setupParams: vplsSvc_id, servicetype, vni + return {'vplsSvc_id': vplsSvc_id, + 'servicetype': servicetype, 'vni': vni} + +# ------------------------------------------------------------------------------------------------ + + +def modify_script(vsdParams, setup_result): + + print ( + "These are the setup_result params for modify_script: " + + str(setup_result)) + print ("These are the VSD params for modify_script: " + str(vsdParams)) + + # remove trailing space at the end of the metadata + metadata = vsdParams['metadata'].rstrip() + + print ("VSD metadata" + str(metadata)) + metadata = dict(e.split('=') for e in metadata.split(',')) + print ("Modified metadata" + str(metadata)) + + # updating the setup_result dict + setup_result.update(metadata) + params = setup_result + + print ( + "The updated params from metadata and return from the setup result: " + + str(params)) + + dyn.add_cli(""" + configure 
service + vpls %(vplsSvc_id)s + service-mtu %(svc-mtu)s + exit + exit + exit + """ % params) + + # Result is passed to teardown_script + return params + +# ------------------------------------------------------------------------------------------------ + + +def revert_script(vsdParams, setup_result): + print ( + "These are the setup_result params for revert_script: " + + str(setup_result)) + print ("These are the VSD params for revert_script: " + str(vsdParams)) + + # When modify fails, the revert is called and then the teardown is called. + # It is recommended to revert to same value as used in setup for the + # attributes modified in modify_script. + + params = setup_result + + dyn.add_cli(""" + configure service + vpls %(vplsSvc_id)s + service-mtu 2000 + exit + exit + exit + """ % params) + + # Result is passed to teardown_script + return params + +# ------------------------------------------------------------------------------------------------ + + +def teardown_script(setupParams): + print ("These are the teardown_script setupParams: " + str(setupParams)) + servicetype = setupParams['servicetype'] + if servicetype == "L2DOMAIN": + dyn.add_cli(""" + configure service + vpls %(vplsSvc_id)s + no description + bgp-evpn + vxlan + shut + exit + mpls + shut + exit + no evi + exit + no vxlan vni %(vni)s + bgp + no route-distinguisher + no route-target + exit + no bgp + bgp 2 + no route-distinguisher + no route-target + exit + no bgp 2 + no bgp-evpn + shutdown + exit + no vpls %(vplsSvc_id)s + exit + exit + """ % {'vplsSvc_id': setupParams['vplsSvc_id'], 'vni': setupParams['vni']}) + return setupParams + + +d = {"script": (setup_script, modify_script, revert_script, teardown_script)} + +dyn.action(d) diff --git a/roles/vsr-postdeploy/files/l2-as-with-bgp-mh.py b/roles/vsr-postdeploy/files/l2-as-with-bgp-mh.py new file mode 100755 index 0000000000..02254bb7c5 --- /dev/null +++ b/roles/vsr-postdeploy/files/l2-as-with-bgp-mh.py @@ -0,0 +1,182 @@ +from alc import dyn + +# 
example of metadata to be added in VSD WAN Service: +# "rd=1:1,sap=1/1/3:1000,opergroup=group-PE1" + +# example of tools cli to test this script: tools perform service vsd evaluate-script domain-name "l2dom1" type l2-domain action setup policy "py-l2-red" vni 1234 rt-i target:1:1 rt-e target:1:1 metadata "rd=1:1,sap=1/1/3:1000,opergroup=group-PE1" +# teardown example cli: tools perform service vsd evaluate-script +# domain-name "l2dom1" type l2-domain action teardown policy "py-l2-red" +# vni 1234 rt-i target:1:1 rt-e target:1:1 + + +def setup_script(vsdParams): + + print ("These are the VSD params: " + str(vsdParams)) + servicetype = vsdParams['servicetype'] + vni = vsdParams['vni'] + rt = vsdParams['rt'] + +# add "target:" if provisioned by VSD (VSD uses x:x format whereas tools +# command uses target:x:x format) + if not rt.startswith('target'): + rt = "target:" + rt + + metadata = vsdParams['metadata'] + +# remove trailing space at the end of the metadata + metadata = metadata.rstrip() + + print ("VSD metadata" + str(metadata)) + + metadata = dict(e.split('=') for e in metadata.split(',')) + print ("Modified metadata" + str(metadata)) + vplsSvc_id = dyn.select_free_id("service-id") + print ("this is the free svc id picked up by the system: " + vplsSvc_id) + + if servicetype == "L2DOMAIN": + + rd = metadata['rd'] + sap = metadata['sap'] + opergroup = metadata['opergroup'] + print ('servicetype, VPLS id, rt, vni, rd, sap, opergroup:', + servicetype, vplsSvc_id, rt, vni, rd, sap, opergroup) + dyn.add_cli(""" + configure service + vpls %(vplsSvc_id)s customer 1 name evi%(vplsSvc_id)s create + description vpls%(vplsSvc_id)s + proxy-arp + dynamic-arp-populate + no shut + exit + bgp + route-distinguisher %(rd)s + route-target %(rt)s + exit + vxlan vni %(vni)s create + exit + bgp-evpn + evi %(vplsSvc_id)s + vxlan + no shut + exit + exit + sap %(sap)s create + monitor-oper-group %(opergroup)s + no shutdown + exit + no shutdown + exit + exit + exit + """ % {'vplsSvc_id': 
vplsSvc_id, 'vni': vsdParams['vni'], 'rt': rt, 'rd': metadata['rd'], 'sap': sap, 'opergroup': metadata['opergroup']}) + # L2DOMAIN returns setupParams: vplsSvc_id, servicetype, vni, sdp, + # opergroup + return {'vplsSvc_id': vplsSvc_id, 'servicetype': servicetype, + 'vni': vni, 'sap': sap, 'opergroup': opergroup} + +# ------------------------------------------------------------------------------------------------ + + +def modify_script(vsdParams, setup_result): + + print ( + "These are the setup_result params for modify_script: " + + str(setup_result)) + print ("These are the VSD params for modify_script: " + str(vsdParams)) + + # remove trailing space at the end of the metadata + metadata = vsdParams['metadata'].rstrip() + + print ("VSD metadata" + str(metadata)) + metadata = dict(e.split('=') for e in metadata.split(',')) + print ("Modified metadata" + str(metadata)) + + # updating the setup_result dict + setup_result.update(metadata) + params = setup_result + + print ( + "The updated params from metadata and return from the setup result: " + + str(params)) + + dyn.add_cli(""" + configure service + vpls %(vplsSvc_id)s + service-mtu %(svc-mtu)s + exit + exit + exit + """ % params) + + # Result is passed to teardown_script + return params + +# ------------------------------------------------------------------------------------------------ + + +def revert_script(vsdParams, setup_result): + print ( + "These are the setup_result params for revert_script: " + + str(setup_result)) + print ("These are the VSD params for revert_script: " + str(vsdParams)) + + # When modify fails, the revert is called and then the teardown is called. + # It is recommended to revert to same value as used in setup for the + # attributes modified in modify_script. 
+ + params = setup_result + + dyn.add_cli(""" + configure service + vpls %(vplsSvc_id)s + service-mtu 2000 + exit + exit + exit + """ % params) + + # Result is passed to teardown_script + return params + +# ------------------------------------------------------------------------------------------------ + + +def teardown_script(setupParams): + print ("These are the teardown_script setupParams: " + str(setupParams)) + servicetype = setupParams['servicetype'] + if servicetype == "L2DOMAIN": + dyn.add_cli(""" + configure service + vpls %(vplsSvc_id)s + no description + proxy-arp shut + no proxy-arp + bgp-evpn + vxlan + shut + exit + no evi + exit + no vxlan vni %(vni)s + bgp + no route-distinguisher + no route-target + exit + no bgp + no bgp-evpn + sap %(sap)s + shutdown + exit + no sap %(sap)s + shutdown + exit + no vpls %(vplsSvc_id)s + exit + exit + """ % {'vplsSvc_id': setupParams['vplsSvc_id'], 'vni': setupParams['vni'], 'sap': setupParams['sap']}) + return setupParams + + +d = {"script": (setup_script, modify_script, revert_script, teardown_script)} + +dyn.action(d) diff --git a/roles/vsr-postdeploy/files/l2-irb-with-vrrp.py b/roles/vsr-postdeploy/files/l2-irb-with-vrrp.py new file mode 100755 index 0000000000..2a75a4d36d --- /dev/null +++ b/roles/vsr-postdeploy/files/l2-irb-with-vrrp.py @@ -0,0 +1,160 @@ +from alc import dyn + +# example of metadata to be added in VSD WAN Service on PE1: "vprnRD=65000:1,vprnRT=target:65000:100,irbGW=10.2.2.1/24,vrrpID=1,vrrpIP=10.2.2.254,vrrpPRIO=150" +# example of metadata to be added in VSD WAN Service on PE2: +# "vprnRD=65000:2,vprnRT=target:65000:100,irbGW=10.2.2.2/24,vrrpID=1,vrrpIP=10.2.2.254,vrrpPRIO=100" + +# example of tools cli to test this script: tools perform service vsd evaluate-script domain-name "l2domIRB1-red" type l2-domain-irb action setup policy "py-l2-irb-red" vni 1234 rt-i target:2:2 rt-e target:2:2 metadata "vprnRD=65000:1,vprnRT=target:65000:100,irbGW=10.2.2.1/24,vrrpID=1,vrrpIP=10.2.2.254,vrrpPRIO=150" 
+# teardown example cli: tools perform service vsd evaluate-script +# domain-name "l2domIRB1-red" type l2-domain-irb action teardown policy +# "py-l2-irb-red" vni 1234 rt-i target:2:2 rt-e target:2:2 + + +def setup_script(vsdParams): + + print ("These are the VSD params: " + str(vsdParams)) + servicetype = vsdParams.get('servicetype') + vni = vsdParams.get('vni') + rt = vsdParams.get('rt') + +# add "target:" if provisioned by VSD (VSD uses x:x format whereas tools +# command uses target:x:x format) + if not rt.startswith('target'): + rt = "target:" + rt + + metadata = vsdParams['metadata'] + +# remove trailing space at the end of the metadata + metadata = metadata.rstrip() + + print ("VSD metadata" + str(metadata)) + metadata = dict(e.split('=') for e in metadata.split(',')) + print ("Modified metadata" + str(metadata)) + vplsSvc_id = dyn.select_free_id("service-id") + vprnSvc_id = dyn.select_free_id("service-id") + print ("this are the free svc ids picked up by the system: VPLS:" + + vplsSvc_id + " + VPRN:" + vprnSvc_id) + + if servicetype == "L2DOMAIN-IRB": + vprn_RD = metadata['vprnRD'] + vprn_RT = metadata['vprnRT'] + irb_GW = metadata['irbGW'] + vrrp_ID = metadata['vrrpID'] + vrrp_IP = metadata['vrrpIP'] + vrrp_PRIO = metadata['vrrpPRIO'] + print ( + 'servicetype, VPLS id, rt, vni, VPRN id, vprn_RD, vprn_RT, irb_GW, vrrp_ID, vrrp_IP, vrrp_PRIO:', + servicetype, + vplsSvc_id, + rt, + vni, + vprnSvc_id, + vprn_RD, + vprn_RT, + irb_GW, + vrrp_ID, + vrrp_IP, + vrrp_PRIO) + dyn.add_cli(""" + configure service + vpls %(vplsSvc_id)s customer 1 name vpls%(vplsSvc_id)s create + allow-ip-int-bind vxlan-ipv4-tep-ecmp + exit + description vpls%(vplsSvc_id)s + bgp + route-target %(rt)s + exit + vxlan vni %(vni)s create + exit + bgp-evpn + evi %(vplsSvc_id)s + vxlan + no shut + exit + exit + no shutdown + exit + exit + exit + configure service + vprn %(vprnSvc_id)s customer 1 create + auto-bind-tunnel resolution any + route-distinguisher %(vprn_RD)s + vrf-target %(vprn_RT)s 
+ interface "irbvpls-%(vplsSvc_id)s" create + address %(irb_GW)s + vrrp %(vrrp_ID)s + priority %(vrrp_PRIO)s + backup %(vrrp_IP)s + ping-reply + exit + vpls "vpls%(vplsSvc_id)s" + exit + exit + no shutdown + exit + exit + + """ % {'vplsSvc_id': vplsSvc_id, 'vprnSvc_id': vprnSvc_id, 'vni': vsdParams['vni'], 'rt': rt, 'vprn_RD': vprn_RD, 'vprn_RT': vprn_RT, 'irb_GW': irb_GW, 'vrrp_ID': vrrp_ID, 'vrrp_IP': vrrp_IP, 'vrrp_PRIO': vrrp_PRIO}) + # L2DOMAIN-IRB returns setupParams: vplsSvc_id, vprnSvc_id, + # servicetype, vni, vprn_RD, vprn_RT, irb_GW, vrrp_ID, vrrp_IP, + # vrrp_PRIO + return { + 'vplsSvc_id': vplsSvc_id, + 'vprnSvc_id': vprnSvc_id, + 'servicetype': servicetype, + 'vni': vni, + 'vprn_RD': vprn_RD, + 'vprn_RT': vprn_RT, + 'irb_GW': irb_GW, + 'vrrp_ID': vrrp_ID, + 'vrrp_IP': vrrp_IP, + 'vrrp_PRIO': vrrp_PRIO} + +# ------------------------------------------------------------------------------------------------ + + +def teardown_script(setupParams): + print ("These are the teardown_script setupParams: " + str(setupParams)) + servicetype = setupParams.get('servicetype') + if servicetype == "L2DOMAIN-IRB": + dyn.add_cli(""" + configure service + vpls %(vplsSvc_id)s + no description + bgp-evpn + vxlan + shut + exit + no evi + exit + no vxlan vni %(vni)s + bgp + no route-distinguisher + no route-target + exit + no bgp + no bgp-evpn + shutdown + exit + no vpls %(vplsSvc_id)s + vprn %(vprnSvc_id)s + interface "irbvpls-%(vplsSvc_id)s" + no vpls + vrrp %(vrrp_ID)s shut + no vrrp %(vrrp_ID)s + shutdown + exit + no interface "irbvpls-%(vplsSvc_id)s" + shutdown + exit + no vprn %(vprnSvc_id)s + exit + + """ % {'vplsSvc_id': setupParams['vplsSvc_id'], 'vprnSvc_id': setupParams['vprnSvc_id'], 'vni': setupParams['vni'], 'vrrp_ID': setupParams['vrrp_ID']}) + return setupParams + + +d = {"script": (setup_script, None, None, teardown_script)} + +dyn.action(d) diff --git a/roles/vsr-postdeploy/files/l2-vxlan-nosap-bgpad.py b/roles/vsr-postdeploy/files/l2-vxlan-nosap-bgpad.py 
new file mode 100755 index 0000000000..20e019d87e --- /dev/null +++ b/roles/vsr-postdeploy/files/l2-vxlan-nosap-bgpad.py @@ -0,0 +1,111 @@ +from alc import dyn +# example of metadata to be added in VSD WAN Service: "rd=1:1" +# example of tools cli to test this script: tools perform service vsd evaluate-script domain-name "l2dom1" type l2-domain action setup policy "py-l2" vni 1234 rt-i target:1:1 rt-e target:1:1 metadata "rd=1:1" +# teardown example cli: tools perform service vsd evaluate-script +# domain-name "l2dom1" type l2-domain action teardown policy "py-l2" vni +# 1234 rt-i target:1:1 rt-e target:1:1 + + +def setup_script(vsdParams): + print ("These are the VSD params: " + str(vsdParams)) + servicetype = vsdParams['servicetype'] + vni = vsdParams['vni'] + rt = vsdParams['rt'] +# add "target:" if provisioned by VSD (VSD uses x:x format whereas tools +# command uses target:x:x format) + if not rt.startswith('target'): + rt = "target:" + rt + metadata = vsdParams['metadata'] +# remove trailing space at the end of the metadata + metadata = metadata.rstrip() + print ("VSD metadata" + str(metadata)) + metadata = dict(e.split('=') for e in metadata.split(',')) + print ("Modified metadata" + str(metadata)) + vplsSvc_id = dyn.select_free_id("service-id") + # vprnSvc_id = dyn.select_free_id("service-id") + print ("this are the free svc ids picked up by the system: VPLS:" + vplsSvc_id) + + if servicetype == "L2DOMAIN": + rd = metadata['rd'] + # vprn_AS = metadata ['vprnAS'] + # vprn_RD = metadata ['vprnRD'] + # vprn_RT = metadata ['vprnRT'] + # vprn_Lo = metadata ['vprnLo'] + # irb_GW = metadata ['irbGW'] + print ('servicetype, VPLS id, rt, vni, rd', + servicetype, vplsSvc_id, rt, vni, rd) + dyn.add_cli(""" + configure service + vpls %(vplsSvc_id)s customer 1 name vpls%(vplsSvc_id)s create + description vpls%(vplsSvc_id)s + proxy-arp + dynamic-arp-populate + no shutdown + exit + bgp + route-distinguisher %(rd)s + route-target %(rt)s + pw-template-binding 1 import-rt 
%(rt)s + exit + exit + vxlan vni %(vni)s create + exit + bgp-ad + vpls-id %(rd)s + no shut + exit + bgp-evpn + evi %(vplsSvc_id)s + vxlan + no shut + exit + exit + no shutdown + exit + exit + exit + """ % {'vplsSvc_id': vplsSvc_id, 'vni': vsdParams['vni'], 'rt': rt, 'rd': metadata['rd'], }) +# L2DOMAIN returns setupParams: vplsSvc_id, vprnSvc_id, servicetype, vni + return {'vplsSvc_id': vplsSvc_id, + 'servicetype': servicetype, 'vni': vni} +# ------------------------------------------------------------------------------------------------ + + +def teardown_script(setupParams): + print ("These are the teardown_script setupParams: " + str(setupParams)) + servicetype = setupParams['servicetype'] + if servicetype == "L2DOMAIN": + dyn.add_cli(""" + configure service + vpls %(vplsSvc_id)s + no description + proxy-arp shut + no proxy-arp + bgp-evpn + vxlan + shut + exit + no evi + exit + no vxlan vni %(vni)s + bgp + no route-distinguisher + no route-target + exit + bgp-ad + shutdown + exit + no bgp-ad + no bgp + no bgp-evpn + shutdown + exit + no vpls %(vplsSvc_id)s + exit + exit + """ % {'vplsSvc_id': setupParams['vplsSvc_id'], 'vni': setupParams['vni']}) + return setupParams + + +d = {"script": (setup_script, None, None, teardown_script)} +dyn.action(d) diff --git a/roles/vsr-postdeploy/files/l3vxlan-with-bgp-pece.py b/roles/vsr-postdeploy/files/l3vxlan-with-bgp-pece.py new file mode 100755 index 0000000000..92f50b2ce4 --- /dev/null +++ b/roles/vsr-postdeploy/files/l3vxlan-with-bgp-pece.py @@ -0,0 +1,233 @@ +# Metadata example +# +# rd=3:3,vprnAS=65000,vprnRD=65000:1002,vprnRT=target:65000:1002,vprnLo=1.1.1.1,customer=pepsi,customeras=64555,customerip=2.2.2.2,customerpass=password,customersubnet=192.168.1.0/31,customersap=1/1/1:10 + + +from alc import dyn + +# example of metadata to be added in VSD WAN Service: +# 
"rd=3:3,vprnAS=65000,vprnRD=65000:1,vprnRT=target:65000:1,vprnLo=1.1.1.1,customer=pepsi,customeras=640001,customerip=2.2.2.2,customerpass=password,customersubnet=192.168.1.0/31,customersap=1.1.1:10" + +# example of tools cli to test this script: tools perform service vsd evaluate-script domain-name "l3dom1" type vrf-vxlan action setup policy "py-vrf-vxlan" vni 1234 rt-i target:3:3 rt-e target:3:3 metadata "rd=3:3,vprnAS=65000,vprnRD=65000:1,vprnRT=target:65000:1,vprnLo=1.1.1.1" +# teardown example cli: tools perform service vsd evaluate-script +# domain-name "l3dom1" type vrf-vxlan action teardown policy +# "py-vrf-vxlan" vni 1234 rt-i target:3:3 rt-e target:3:3 + + +def setup_script(vsdParams): + + print ("These are the VSD params: " + str(vsdParams)) + servicetype = vsdParams.get('servicetype') + vni = vsdParams.get('vni') + rt = vsdParams.get('rt') + +# add "target:" if provisioned by VSD (VSD uses x:x format whereas tools +# command uses target:x:x format) + if not rt.startswith('target'): + rt = "target:" + rt + + metadata = vsdParams['metadata'] + +# remove trailing space at the end of the metadata + metadata = metadata.rstrip() + + print ("VSD metadata" + str(metadata)) + metadata = dict(e.split('=') for e in metadata.split(',')) + print ("Modified metadata" + str(metadata)) + vplsSvc_id = dyn.select_free_id("service-id") + vprnSvc_id = dyn.select_free_id("service-id") + print ("this are the free svc ids picked up by the system: VPLS:" + + vplsSvc_id + " + VPRN:" + vprnSvc_id) + + if servicetype == "VRF-VXLAN": + + rd = metadata['rd'] + vprn_AS = metadata['vprnAS'] + vprn_RD = metadata['vprnRD'] + vprn_RT = metadata['vprnRT'] + vprn_Lo = metadata['vprnLo'] + customer = metadata['customer'] + customeras = metadata['customeras'] + customerip = metadata['customerip'] + customerpass = metadata['customerpass'] + customersubnet = metadata['customersubnet'] + customersap = metadata['customersap'] + print ( + 'servicetype, VPLS id, rt, vni, rd, VPRN id, vprn_AS, 
vprn_RD, vprn_RT, vprn_Lo, customer, customeras, customerip, customerpass, customersubnet, customersap:', + servicetype, + vplsSvc_id, + rt, + vni, + rd, + vprnSvc_id, + vprn_AS, + vprn_RD, + vprn_RT, + vprn_Lo, + customer, + customeras, + customerip, + customerpass, + customersubnet, + customersap) + dyn.add_cli(""" + configure router policy-options + begin + community _VSD_%(vplsSvc_id)s members %(rt)s + policy-statement vsi_import_%(vplsSvc_id)s + entry 10 + from + family evpn + community _VSD_%(vplsSvc_id)s + exit + action accept + exit + exit + exit + policy-statement vsi_export_%(vplsSvc_id)s + entry 10 + from + family evpn + exit + action accept + community add _VSD_%(vplsSvc_id)s + exit + exit + exit + commit + exit + + configure service + vpls %(vplsSvc_id)s customer 1 name l3-backhaul-vpls%(vplsSvc_id)s create + allow-ip-int-bind vxlan-ipv4-tep-ecmp + exit + description vpls%(vplsSvc_id)s + bgp + route-distinguisher %(rd)s + vsi-import vsi_import_%(vplsSvc_id)s + vsi-export vsi_export_%(vplsSvc_id)s + exit + vxlan vni %(vni)s create + exit + bgp-evpn + ip-route-advertisement + vxlan + no shut + exit + exit + no shutdown + exit + exit + exit + + + configure service + vprn %(vprnSvc_id)s customer 1 create + auto-bind-tunnel resolution any + router-id %(vprn_Lo)s + autonomous-system %(vprn_AS)s + route-distinguisher %(vprn_RD)s + vrf-target %(vprn_RT)s + interface "vpls-%(vplsSvc_id)s" create + vpls "vpls%(vplsSvc_id)s" evpn-tunnel + exit + interface "lo1" create + address %(vprn_Lo)s/32 + loopback + exit + no shutdown + interface %(customer)s create + address %(customersubnet)s + sap %(customersap)s create + exit + exit + bgp group %(customer)s + peer-as %(customeras)s + neighbor %(customerip)s authentication-key %(customerpass)s + exit + exit + + """ % {'vplsSvc_id': vplsSvc_id, 'vprnSvc_id': vprnSvc_id, 'vni': vsdParams['vni'], 'rt': rt, 'rd': metadata['rd'], 'vprn_AS': vprn_AS, 'vprn_RD': vprn_RD, 'vprn_RT': vprn_RT, 'vprn_Lo': vprn_Lo, 'customer': 
customer, 'customeras': customeras, 'customerip': customerip, 'customerpass': customerpass, 'customersubnet': customersubnet, 'customersap': customersap}) + # VRF-VXLAN returns setupParams: vplsSvc_id, vprnSvc_id, servicetype, + # vni, vprn_AS, vprn_RD, vprn_RT, vprn_Lo + return { + 'vplsSvc_id': vplsSvc_id, + 'vprnSvc_id': vprnSvc_id, + 'servicetype': servicetype, + 'vni': vni, + 'vprn_AS': vprn_AS, + 'vprn_RD': vprn_RD, + 'vprn_RT': vprn_RT, + 'vprn_Lo': vprn_Lo, + 'customer': customer, + 'customeras': customeras, + 'customerip': customerip, + 'customerpass': customerpass, + 'customersubnet': customersubnet, + 'customersap': customersap} + +# ------------------------------------------------------------------------------------------------ + + +def teardown_script(setupParams): + print ("These are the teardown_script setupParams: " + str(setupParams)) + servicetype = setupParams.get('servicetype') + if servicetype == "VRF-VXLAN": + dyn.add_cli(""" + configure service + vpls %(vplsSvc_id)s + no description + bgp-evpn + vxlan + shut + exit + no evi + exit + no vxlan vni %(vni)s + bgp + no route-distinguisher + no route-target + exit + no bgp + no bgp-evpn + shutdown + exit + no vpls %(vplsSvc_id)s + vprn %(vprnSvc_id)s + interface lo1 shutdown + no interface lo1 + interface "vpls-%(vplsSvc_id)s" + vpls "vpls%(vplsSvc_id)s" + no evpn-tunnel + exit + no vpls + shutdown + exit + no interface "vpls-%(vplsSvc_id)s" + interface %(customer)s create + sap %(customersap)s shutdown + no sap %(customersap)s + shutdown + exit + no interface %(customer)s + bgp shutdown + no bgp + shutdown + exit + no vprn %(vprnSvc_id)s + exit + configure router policy-options + begin + no community _VSD_%(vplsSvc_id)s + no policy-statement vsi_import_%(vplsSvc_id)s + no policy-statement vsi_export_%(vplsSvc_id)s + commit + exit + + """ % {'vplsSvc_id': setupParams['vplsSvc_id'], 'vprnSvc_id': setupParams['vprnSvc_id'], 'vni': setupParams['vni'], 'customer': setupParams['customer'], 
'customersap': setupParams['customersap']}) + return setupParams + + +d = {"script": (setup_script, None, None, teardown_script)} + +dyn.action(d) diff --git a/roles/vsr-postdeploy/files/l3vxlan-with-lo.py b/roles/vsr-postdeploy/files/l3vxlan-with-lo.py new file mode 100755 index 0000000000..737172dc2e --- /dev/null +++ b/roles/vsr-postdeploy/files/l3vxlan-with-lo.py @@ -0,0 +1,156 @@ +from alc import dyn + +# example of metadata to be added in VSD WAN Service: +# "rd=3:3,vprnAS=65000,vprnRD=65000:1,vprnRT=target:65000:1,vprnLo=1.1.1.1" + +# example of tools cli to test this script: tools perform service vsd evaluate-script domain-name "l3dom1" type vrf-vxlan action setup policy "py-vrf-vxlan" vni 1234 rt-i target:3:3 rt-e target:3:3 metadata "rd=3:3,vprnAS=65000,vprnRD=65000:1,vprnRT=target:65000:1,vprnLo=1.1.1.1" +# teardown example cli: tools perform service vsd evaluate-script +# domain-name "l3dom1" type vrf-vxlan action teardown policy +# "py-vrf-vxlan" vni 1234 rt-i target:3:3 rt-e target:3:3 + + +def setup_script(vsdParams): + + print ("These are the VSD params: " + str(vsdParams)) + servicetype = vsdParams['servicetype'] + vni = vsdParams['vni'] + rt = vsdParams['rt'] + +# add "target:" if provisioned by VSD (VSD uses x:x format whereas tools +# command uses target:x:x format) + if not rt.startswith('target'): + rt = "target:" + rt + + metadata = vsdParams['metadata'] + +# remove trailing space at the end of the metadata + metadata = metadata.rstrip() + + print ("VSD metadata" + str(metadata)) + metadata = dict(e.split('=') for e in metadata.split(',')) + print ("Modified metadata" + str(metadata)) + vplsSvc_id = dyn.select_free_id("service-id") + vprnSvc_id = dyn.select_free_id("service-id") + print ("this are the free svc ids picked up by the system: VPLS:" + + vplsSvc_id + " + VPRN:" + vprnSvc_id) + + if servicetype == "VRF-VXLAN": + + rd = metadata['rd'] + vprn_AS = metadata['vprnAS'] + vprn_RD = metadata['vprnRD'] + vprn_RT = metadata['vprnRT'] + 
vprn_Lo = metadata['vprnLo'] + print ( + 'servicetype, VPLS id, rt, vni, rd, VPRN id, vprn_AS, vprn_RD, vprn_RT, vprn_Lo:', + servicetype, + vplsSvc_id, + rt, + vni, + rd, + vprnSvc_id, + vprn_AS, + vprn_RD, + vprn_RT, + vprn_Lo) + dyn.add_cli(""" + + configure service + vpls %(vplsSvc_id)s customer 1 name l3-backhaul-vpls%(vplsSvc_id)s create + allow-ip-int-bind vxlan-ipv4-tep-ecmp + exit + description vpls%(vplsSvc_id)s + bgp + route-distinguisher %(rd)s + route-target %(rt)s + exit + vxlan vni %(vni)s create + exit + bgp-evpn + ip-route-advertisement + vxlan + no shut + exit + exit + no shutdown + exit + exit + exit + + + configure service + vprn %(vprnSvc_id)s customer 1 create + autonomous-system %(vprn_AS)s + route-distinguisher %(vprn_RD)s + auto-bind-tunnel resolution any + vrf-target %(vprn_RT)s + interface "vpls-%(vplsSvc_id)s" create + vpls "vpls%(vplsSvc_id)s" evpn-tunnel + exit + interface "lo1" create + address %(vprn_Lo)s/32 + loopback + exit + no shutdown + exit + exit + + """ % {'vplsSvc_id': vplsSvc_id, 'vprnSvc_id': vprnSvc_id, 'vni': vsdParams['vni'], 'rt': rt, 'rd': metadata['rd'], 'vprn_AS': vprn_AS, 'vprn_RD': vprn_RD, 'vprn_RT': vprn_RT, 'vprn_Lo': vprn_Lo}) + # VRF-VXLAN returns setupParams: vplsSvc_id, vprnSvc_id, servicetype, + # vni, vprn_AS, vprn_RD, vprn_RT, vprn_Lo + return { + 'vplsSvc_id': vplsSvc_id, + 'vprnSvc_id': vprnSvc_id, + 'servicetype': servicetype, + 'vni': vni, + 'vprn_AS': vprn_AS, + 'vprn_RD': vprn_RD, + 'vprn_RT': vprn_RT, + 'vprn_Lo': vprn_Lo} + +# ------------------------------------------------------------------------------------------------ + + +def teardown_script(setupParams): + print ("These are the teardown_script setupParams: " + str(setupParams)) + servicetype = setupParams['servicetype'] + if servicetype == "VRF-VXLAN": + print ("Test1") + print ("These are the teardown_script setupParams: " + str(setupParams)) + dyn.add_cli(""" + configure service + vpls %(vplsSvc_id)s + bgp-evpn + vxlan + shut + exit + no 
evi + exit + no vxlan vni %(vni)s + no bgp-evpn + shutdown + exit + no vpls %(vplsSvc_id)s + vprn %(vprnSvc_id)s + interface lo1 shutdown + no interface lo1 + interface "vpls-%(vplsSvc_id)s" + vpls "vpls%(vplsSvc_id)s" + no evpn-tunnel + exit + no vpls + shutdown + exit + no interface "vpls-%(vplsSvc_id)s" + shutdown + exit + no vprn %(vprnSvc_id)s + exit + + """ % {'vplsSvc_id': setupParams['vplsSvc_id'], 'vprnSvc_id': setupParams['vprnSvc_id'], 'vni': setupParams['vni']}) + return setupParams + + +d = {"script": (setup_script, None, None, teardown_script)} + +dyn.action(d) diff --git a/roles/vsr-postdeploy/tasks/copy_python_scripts.yml b/roles/vsr-postdeploy/tasks/copy_python_scripts.yml new file mode 100644 index 0000000000..e044a9ac0f --- /dev/null +++ b/roles/vsr-postdeploy/tasks/copy_python_scripts.yml @@ -0,0 +1,62 @@ +# Install Pre-Requisites +- name: Pull facts of localhost + local_action: + module: setup + +- name: Install pip on RedHat OS family distribution + local_action: + module: yum + name: "python-pip" + state: "present" + when: ansible_os_family == "RedHat" + +- name: Install pip on Debian OS family distribution + local_action: + module: apt + name: "python-pip" + state: "present" + when: ansible_os_family == "Debian" + +- name: Install pexpect module via pip + local_action: + module: pip + name: pexpect + state: "present" + +# Create directory on SR to store python policy scripts +- name: Create directory used for storing python policy scripts + sros_command: + commands: + - 'file md {{ scriptdir }}' + provider: "{{ provider_creds }}" + delegate_to: localhost + + +- name: Set local path of python policy scripts + set_fact: local_scripts_path="{{ role_path }}/files" + +- name: Get list of Python scripts + local_action: + module: find + path: "{{ local_scripts_path }}" + pattern: "*.py" + register: rc_pythonscripts + +- debug: var=rc_pythonscripts verbosity=1 + +- name: Copy Python-scripts + local_action: + module: expect + command: "{{ 
vsr_scp_python_scripts }}" + responses: +# (?i)yes: "yes" + (?i)password: "{{ vsr_password|default('admin') }}" + timeout: "{{ vsr_scp_timeout_seconds }}" + with_items: "{{ rc_pythonscripts.files | map(attribute='path') | list }}" + + +- name: Set pythonscripts variable + set_fact: pythonscripts="{{ rc_pythonscripts.files | map(attribute='path') | list | map('basename') | list | map('splitext') | list | map('first') | list }}" + when: rc_pythonscripts.matched > 0 + +- debug: var=pythonscripts verbosity=1 diff --git a/roles/vsr-postdeploy/tasks/main.yml b/roles/vsr-postdeploy/tasks/main.yml new file mode 100644 index 0000000000..39f9ea278b --- /dev/null +++ b/roles/vsr-postdeploy/tasks/main.yml @@ -0,0 +1,44 @@ +- name: Copy Python python scripts + include: "copy_python_scripts.yml" + +- name: Create rollback point + local_action: + module: sros_config + lines: + - "admin rollback save comment \"Before Metro-DCGW Integration {{ lookup('pipe', 'date -u +%Y-%m-%d-%H:%M:%s') }}\"" + provider: "{{ provider_creds }}" + + +- block: + - name: "Check if directory {{ buildpath }}/{{ inventory_hostname }} exists" + local_action: + module: stat + path: "{{ buildpath }}/{{ inventory_hostname }}" + register: builddir + + - name: Generate configuration fragments + local_action: + module: template + src: "{{ item.file }}.j2" + dest: "{{ buildpath }}/{{ inventory_hostname }}/{{ item.prio }}-{{ item.file}}" + with_items: + - { file: "vsd_integration.cfg", prio: "80" } + when: builddir.stat.exists + when: buildpath is defined + +- name: Show rootified commands that will be sent to VSR + debug: msg='{{ lookup("template", "vsd_integration.cfg.j2" ) | sros_rootify }}' verbosity=1 + + +- name: Configure additional configuration to integrate VSR with Nuage VSD + local_action: + module: sros_config + lines: '{{ lookup("template", "vsd_integration.cfg.j2" ) | sros_rootify }}' + provider: "{{ provider_creds }}" + +- name: Save VSR config + local_action: + module: sros_config + save: yes + 
provider: "{{ provider_creds }}" + diff --git a/roles/vsr-postdeploy/templates/vsd_integration.cfg.j2 b/roles/vsr-postdeploy/templates/vsd_integration.cfg.j2 new file mode 100644 index 0000000000..c29050fbfa --- /dev/null +++ b/roles/vsr-postdeploy/templates/vsd_integration.cfg.j2 @@ -0,0 +1,54 @@ +#----------------------------- +echo "VSD Integration" +#---------------------------- + system + vsd + system-id "{{ inventory_hostname.split('.')[0] | lower }}" + exit + + xmpp + server "vsd" domain-name "{{ vsd_fqdn }}" router "management" create username "{{ inventory_hostname.split('.')[0] | lower }}" + no shutdown + exit + exit + exit + +# F-D XMPP provisioning requires a reserved range of Service-IDs that can be used +# for dynamic data services. This configured range is no longer available for regular +# services configured via CLI/SNMP: + + service + vsd + service-range 64000 to 64999 + exit + exit + +# It is possible to edit the dynamic VSD services configuration by entering the +# enable-vsd-config mode. A password is required to enter this mode.
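The `pythonscripts` fact that the loop further down in this template consumes is built in `copy_python_scripts.yml` with a long Jinja2 filter chain (`map(attribute='path') | list | map('basename') | list | map('splitext') | list | map('first') | list`). As a standalone Python sketch of what that chain computes — the file paths below are hypothetical stand-ins for the `find` results:

```python
import os

# Hypothetical `find` results; in the role these come from the
# registered variable rc_pythonscripts.files.
found_files = [
    {'path': '/metro/roles/vsr-postdeploy/files/l2-dom-irb.py'},
    {'path': '/metro/roles/vsr-postdeploy/files/l3vxlan-with-lo.py'},
]

# Equivalent of the Jinja2 chain: take each path, keep the basename,
# then strip the extension, leaving the script/policy name.
pythonscripts = [
    os.path.splitext(os.path.basename(f['path']))[0]
    for f in found_files
]
print(pythonscripts)  # -> ['l2-dom-irb', 'l3vxlan-with-lo']
```

The resulting names are reused twice per script in the loop below this comment: once as the `python-script` name (pointing at the uploaded `.py` file on `cf3:`) and once as the matching `python-policy` name handed to VSD.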
+# In this configuration, "Alcateldc" is configured as vsd-password + system + security + password + vsd-password "vVEv9OfVJMp3K6v33ScWVUcXbIoUY/JoMnsQBE2KtJU" hash2 + exit + exit + exit + + +# Load Python scripts +# The python script that will build the dynamic services based on the VSD parameters +# obtained via XMPP can be stored locally on the CF or on a remote FTP server: + + python +{% for script in pythonscripts %} + python-script "{{ script }}" create + primary-url "{{ scriptdir }}/{{ script }}.py" + no shutdown + exit + python-policy "{{ script }}" create + vsd script "{{ script }}" + exit + +{% endfor %} + exit + diff --git a/roles/vsr-postdeploy/vars/main.yml b/roles/vsr-postdeploy/vars/main.yml new file mode 100644 index 0000000000..79930887ae --- /dev/null +++ b/roles/vsr-postdeploy/vars/main.yml @@ -0,0 +1,8 @@ +provider_creds: + host: "{{ mgmt_ip }}" + username: "{{ vsr_user|default('admin') }}" + password: "{{ vsr_password|default('admin') }}" + +scriptdir: "cf3:/scripts" +vsr_scp_python_scripts: 'scp {{ item }} {{ vsr_user|default("admin") }}@{{ mgmt_ip }}:"{{ scriptdir }}/{{ item | basename }}"' +vsr_scp_timeout_seconds: "{{ vsr_command_timeout_seconds|default(180) }}" diff --git a/roles/vsr-predeploy/tasks/kvm.yml b/roles/vsr-predeploy/tasks/kvm.yml index 3c44a14d95..373af80c0c 100644 --- a/roles/vsr-predeploy/tasks/kvm.yml +++ b/roles/vsr-predeploy/tasks/kvm.yml @@ -10,9 +10,9 @@ vsr_target_qcow2_file_path: '{{ images_path }}/{{ vmname }}/{{ inventory_hostname }}.qcow2' vsr_target_license_file: '{{ images_path }}/{{ vmname }}/license.txt' -- include: kvm_check_hypervisor.yml +- import_tasks: kvm_check_hypervisor.yml -- include: kvm_check_resources.yml +- import_tasks: kvm_check_resources.yml - name: List the Virtual Machines virt: command=list_vms @@ -37,9 +37,9 @@ delegate_to: "{{ target_server }}" remote_user: "{{ target_server_username }}" -- include: kvm_upload_license_file.yml +- import_tasks: kvm_upload_license_file.yml -- include: 
kvm_deploy_image_file.yml +- import_tasks: kvm_deploy_image_file.yml - name: Get license file content command: cat {{ vsr_target_license_file }} @@ -59,7 +59,7 @@ - debug: var=vsr_vm_uuid verbosity=1 -- include: kvm_define_vsr_vm.yml +- import_tasks: kvm_define_vsr_vm.yml - name: Wait for VSR ssh to be ready include_role: diff --git a/roles/vsr-predeploy/tasks/kvm_upload_license_file.yml b/roles/vsr-predeploy/tasks/kvm_upload_license_file.yml index b5776f2d5f..ac3cdbfc15 100644 --- a/roles/vsr-predeploy/tasks/kvm_upload_license_file.yml +++ b/roles/vsr-predeploy/tasks/kvm_upload_license_file.yml @@ -26,10 +26,10 @@ - block: - - name: Copy license file with standard name + - name: Copy license file with standard name out of .zip file copy: src: '{{ rc_vsr_license_file.files[0].path }}' - dest: '{{ images_path }}/{{ vmname }}/license.txt' + dest: '{{ vsr_target_license_file }}' remote_src: yes - name: Delete original license file diff --git a/roles/vsr-predeploy/tasks/main.yml b/roles/vsr-predeploy/tasks/main.yml index 3ae1ff9d67..4b1a74c365 100644 --- a/roles/vsr-predeploy/tasks/main.yml +++ b/roles/vsr-predeploy/tasks/main.yml @@ -1,5 +1,5 @@ --- -- include: kvm.yml +- import_tasks: kvm.yml when: target_server_type | match("kvm") tags: - vsr diff --git a/roles/vsr-predeploy/templates/vsr_xml.j2 b/roles/vsr-predeploy/templates/vsr_xml.j2 index 7386a6855b..24ba9139b7 100644 --- a/roles/vsr-predeploy/templates/vsr_xml.j2 +++ b/roles/vsr-predeploy/templates/vsr_xml.j2 @@ -1,57 +1,54 @@ - - {{ vsr_vm_uuid }} - {{ vmname }} - {{ vsr_memory }} - {{ vsr_vcpu }} - - - - - -{% set _static_routes = [] %} -{% if mgmt_static_route_list is defined %} -{% for route in mgmt_static_route_list %}{{ _static_routes.append('static-route='+route+'@'+mgmt_gateway) }}{% endfor %} -{% endif %} - TIMOS:slot=A chassis=VSR-I card=cpm-v mda/1=m20-v address={{ (mgmt_ip|string + '/' + mgmt_netmask_prefix|string ) | ipaddr() }}@active {{ _static_routes|join(' ') }} license-file=cf3:/license.txt - 
- - - hvm - - - - - - - - - - /usr/libexec/qemu-kvm - - - - - - - - - - -{% for bridge in ports_to_hv_bridges %} - - - - -{% endfor %} - - - - - - - - - - - - + + {{ vsr_vm_uuid }} + {{ vmname }} + {{ vsr_memory }} + {{ vsr_vcpu }} + + + + + +{% set _static_routes = mgmt_static_route_list+[''] %} + TIMOS:slot=A chassis=VSR-I card=cpm-v mda/1=m20-v address={{ (mgmt_ip ~ '/' ~ mgmt_netmask_prefix) }}@active {{ _static_routes|join('@' ~ mgmt_gateway ~ ' ') }} license-file=cf3:/license.txt + + + + hvm + + + + + + + + + + /usr/libexec/qemu-kvm + + + + + + + + + + +{% for bridge in ports_to_hv_bridges %} + + + + +{% endfor %} + + + + + + + + + + + + diff --git a/roles/vstat-data-backup/tasks/main.yml b/roles/vstat-data-backup/tasks/main.yml index 35ac98bd5a..98b197783e 100644 --- a/roles/vstat-data-backup/tasks/main.yml +++ b/roles/vstat-data-backup/tasks/main.yml @@ -1,41 +1,42 @@ +--- - name: Pull facts of localhost action: setup connection: local -- name: Install mount package - yum: name={{ item }} state=present - remote_user: root - with_items: - - libnfsidmap - - nfs-utils - -- name: Set name of vstat data backup dir - set_fact: - vstat_backup_dir: "{{metro_backup_root}}/backup-{{ inventory_hostname }}-{{ ansible_date_time.iso8601_basic_short }}/" - run_once: true - -- name: Create vstat data backup dir on vstat node(s) - file: - dest: "{{ vstat_backup_dir }}" - state: directory - mode: 0777 - recurse: yes - owner: elasticsearch - group: elasticsearch - remote_user: root - -- name: Mount the nfs folder on to vstat vm - mount: - src: "{{ vstat_nfs_server_with_folder }}" - name: "{{ vstat_backup_dir }}" - state: mounted - fstype: nfs4 - remote_user: root - -- name: Get the nfs shared folder details - shell: "mount | grep nfs" - register: nfs_folder - remote_user: root +- block: + + - name: Install mount package + yum: name={{ item }} state=present + with_items: + - libnfsidmap + - nfs-utils + + - name: Set name of vstat data backup dir + set_fact: + vstat_backup_dir: 
"{{metro_backup_root}}/backup-{{ inventory_hostname }}-{{ ansible_date_time.iso8601_basic_short }}/" + run_once: true + + - name: Create vstat data backup dir on vstat node(s) + file: + dest: "{{ vstat_backup_dir }}" + state: directory + mode: 0777 + recurse: yes + owner: elasticsearch + group: elasticsearch + + - name: Mount the nfs folder on to vstat vm + mount: + src: "{{ vstat_nfs_server_with_folder }}" + name: "{{ vstat_backup_dir }}" + state: mounted + fstype: nfs4 + + - name: Get the nfs shared folder details + shell: "mount | grep nfs" + register: nfs_folder + + remote_user: "{{ vstat_username }}" - name: Verify backup folder path is nfs shared assert: @@ -43,6 +44,7 @@ msg: "{{ vstat_backup_dir }} is not nfs shared" - block: + - name: Get the username running the playbooks local_action: command whoami register: username_on_the_host @@ -60,140 +62,135 @@ delegate_to: localhost run_once: true - - name: Cleanup backup dir in elasticseach.yml file - lineinfile: - dest: "/etc/elasticsearch/elasticsearch.yml" - regexp: "path.repo" - state: absent - remote_user: root - - - name: Configure backup location in elasticseach.yml file - lineinfile: - dest: "/etc/elasticsearch/elasticsearch.yml" - line: "path.repo: [{{ vstat_backup_dir }}]" - remote_user: root - - - name: Add user permission to backup location - file: - dest: "{{ vstat_backup_dir }}" - owner: "elasticsearch" - group: "elasticsearch" - recurse: yes - mode: 0777 - remote_user: root - - - name: Restart elasticsearch process - systemd: - name: elasticsearch - state: restarted - remote_user: root - - - name: Wait for elasticsearch process to come up - pause: - seconds: 20 - - - name: Get elasticsearch current status - systemd: - name: elasticsearch - state: started - register: es_status - remote_user: root - - - name: Check elasticsearch status is active - assert: - that: es_status.status.ActiveState == 'active' - msg: "Elasticserach process in not active after restart" - - - name: Check elasticsearch 
process is running - assert: - that: es_status.status.SubState == 'running' - msg: "Elasticsearch process is not running after restart" - - - name: Copy elasticsearch backup scritps - copy: src={{ vstat_backup_scripts_path }}/{{ item }} - dest=/tmp/ - with_items: "{{ vstat_backup_scripts_file_list }}" - remote_user: root - run_once: true - - - name: Set the repo name to be created - set_fact: - repo_name: "{{ ansible_date_time.iso8601_basic_short }}" - run_once: true + - block: - - name: Create a repository to backup ES data - command: "python /tmp/{{ create_repo }}" - remote_user: root - run_once: true + - name: Cleanup backup dir in elasticseach.yml file + lineinfile: + dest: "/etc/elasticsearch/elasticsearch.yml" + regexp: "path.repo" + state: absent + + - name: Configure backup location in elasticseach.yml file + lineinfile: + dest: "/etc/elasticsearch/elasticsearch.yml" + line: "path.repo: [{{ vstat_backup_dir }}]" + + - name: Add user permission to backup location + file: + dest: "{{ vstat_backup_dir }}" + owner: "elasticsearch" + group: "elasticsearch" + recurse: yes + mode: 0777 + + - name: Restart elasticsearch process + systemd: + name: elasticsearch + state: restarted + + - name: Wait for elasticsearch process to come up + pause: + seconds: 20 + + - name: Get elasticsearch current status + systemd: + name: elasticsearch + state: started + register: es_status + + - name: Check elasticsearch status is active + assert: + that: es_status.status.ActiveState == 'active' + msg: "Elasticserach process in not active after restart" + + - name: Check elasticsearch process is running + assert: + that: es_status.status.SubState == 'running' + msg: "Elasticsearch process is not running after restart" - - name: Get the repo created by backup script - command: "python /tmp/{{ show_repo }}" - register: repo_path - remote_user: root - run_once: true + - name: Copy elasticsearch backup scritps + copy: src={{ vstat_backup_scripts_path }}/{{ item }} + dest=/tmp/ + 
with_items: "{{ vstat_backup_scripts_file_list }}"
+      run_once: true
+
+    - name: Set the repo name to be created
+      set_fact:
+        repo_name: "{{ ansible_date_time.iso8601_basic_short }}"
+      run_once: true
-  - name: Print contents of show_repo output when verbosity >= 1
-    debug: var=repo_path verbosity=1
-    run_once: true
+    - name: Create a repository to backup ES data
+      command: "python /tmp/{{ create_repo }}"
+      run_once: true
-  - name: Verify repo is created
-    assert:
-      that: '"Error in getting repo" not in repo_path.stdout'
-      msg: Failed to verify the repo created
-    run_once: true
+    - name: Get the repo created by backup script
+      command: "python /tmp/{{ show_repo }}"
+      register: repo_path
+      run_once: true
-  - name: Set the snapshot name to be created
-    set_fact:
-      snap_name: "{{ ansible_date_time.iso8601_basic_short }}"
-    run_once: true
+    - name: Print contents of show_repo output when verbosity >= 1
+      debug: var=repo_path verbosity=1
+      run_once: true
-  - name: Create snapshot with all indicies
-    command: "python /tmp/{{ create_snapshot }}"
-    register: snapshot
-    remote_user: root
-    run_once: true
+    - name: Verify repo is created
+      assert:
+        that: '"Error in getting repo" not in repo_path.stdout'
+        msg: Failed to verify the repo created
+      run_once: true
-  - name: Print contents of create_snapshot output when verbosity >= 1
-    debug: var=snapshot verbosity=1
-    run_once: true
+    - name: Set the snapshot name to be created
+      set_fact:
+        snap_name: "{{ ansible_date_time.iso8601_basic_short }}"
+      run_once: true
-  - name: Get the contents of created snapshot
-    command: "python /tmp/{{ show_snapshot }}"
-    register: snapshot_contents
-    remote_user: root
-    run_once: true
+    - name: Create snapshot with all indices
+      command: "python /tmp/{{ create_snapshot }}"
+      register: snapshot
+      run_once: true
-  - name: Create local variable with snap_contents output to json
-    set_fact: snapshot_contents_json="{{ snapshot_contents.stdout|snapshot_list_indices_to_json }}"
-    run_once: true
+    - name: Print contents of create_snapshot output when verbosity >= 1
+      debug: var=snapshot verbosity=1
+      run_once: true
-  - name: Print contents of snapshot_contents output when verbosity >= 1
-    debug: var=snapshot_contents verbosity=1
-    run_once: true
+    - name: Get the contents of created snapshot
+      command: "python /tmp/{{ show_snapshot }}"
+      register: snapshot_contents
+      run_once: true
-  - block:
-    - name: Verify the contents of the snapshot created
-      assert:
-        that: '"{{ item }}" in list_of_indices'
-        msg: "{{ item }} index was not found"
-      with_items: "{{ snapshot_contents_json['indices'] }}"
-      when: list_of_indices is defined
-      run_once: true
+    - name: Create local variable with snap_contents output to json
+      set_fact: snapshot_contents_json="{{ snapshot_contents.stdout|snapshot_list_indices_to_json }}"
+      run_once: true
-  - block:
-    - name: Get the list of all indices
-      command: "python /tmp/{{ get_indices }}"
-      remote_user: root
-      register: indices_output
+    - name: Print contents of snapshot_contents output when verbosity >= 1
+      debug: var=snapshot_contents verbosity=1
       run_once: true
-    - name: Verify the contents of the snapshot created
-      assert:
-        that: '"{{ item }}" in indices_output.stdout'
-        msg: "{{ item }} index was not found"
-      with_items: "{{ snapshot_contents_json['indices'] }}"
+    - block:
+        - name: Verify the contents of the snapshot created
+          assert:
+            that: '"{{ item }}" in list_of_indices'
+            msg: "{{ item }} index was not found"
+          with_items: "{{ snapshot_contents_json['indices'] }}"
+          when: list_of_indices is defined
          run_once: true
-    when: list_of_indices is not defined
+
+    - block:
+
+        - name: Get the list of all indices
+          command: "python /tmp/{{ get_indices }}"
+          register: indices_output
+          run_once: true
+
+        - name: Verify the contents of the snapshot created
+          assert:
+            that: '"{{ item }}" in indices_output.stdout'
+            msg: "{{ item }} index was not found"
+          with_items: "{{ snapshot_contents_json['indices'] }}"
+          run_once: true
+
+      when: list_of_indices is not defined
+
+  remote_user: "{{ vsd_username }}"
 - block:
diff --git a/roles/vstat-data-migrate/tasks/handle_upgrade_from_401_version.yml b/roles/vstat-data-migrate/tasks/handle_upgrade_from_401_version.yml
index d64654d030..94c74638da 100644
--- a/roles/vstat-data-migrate/tasks/handle_upgrade_from_401_version.yml
+++ b/roles/vstat-data-migrate/tasks/handle_upgrade_from_401_version.yml
@@ -1,7 +1,8 @@
+---
 - name: Get current VSD version
   command: echo $VSD_VERSION
   delegate_to: "{{ groups['vsds'][0] }}"
-  remote_user: root
+  remote_user: "{{ vsd_username }}"
   run_once: true
   register: vsd_version
@@ -13,16 +14,15 @@
   debug: var=vsd_version verbosity=1
   run_once: true
-- block:
+- block:
+
   - name: Stop vsd-stats group on VSD(s)
     command: monit stop -g vsd-stats
-    remote_user: root
   - name: Get monit state for stat processes
     vsd_monit:
       group: vsd-stats
     register: stats_state
-    remote_user: root
   - name: Verify stats processes are stopped
     assert:
@@ -32,17 +32,14 @@
   - name: Migrate current date data to new schema version
     command: "{{ migrate_current_data }}"
-    remote_user: root
     when: migrate_current_day_data
   - name: Migrate previous day data to new schema version
     command: "{{ migrate_previous_data }}"
-    remote_user: root
     when: not migrate_current_day_data
   - name: Start vsd-stats processess
     command: "monit start -g vsd-stats"
-    remote_user: root
   - name: Fetch stats processess current state
     command: "monit -g vsd-stats summary"
@@ -54,19 +51,19 @@
     retries: 10
     delay: 30
     register: stats_temp_state
-    remote_user: root
   - name: Get monit state for stat processes
     vsd_monit:
       group: vsd-stats
     register: stats_current_state
-    remote_user: root
   - name: Verify stats processes are started/running
     assert:
       that: stats_current_state['state']['{{ item }}'] == 'running' or stats_current_state['state']['{{ item }}'] == 'status ok'
       msg: item is still running
     with_items: "{{ stats_current_state.state.keys() }}"
+
+  remote_user: "{{ vsd_username }}"
   when:
     - vsd_version.stdout in supported_vsd_versions
     - upgrade_from_version == '4.0.1'
diff --git a/roles/vstat-data-migrate/tasks/main.yml b/roles/vstat-data-migrate/tasks/main.yml
index 63dc75f31d..a5c14f8df0 100644
--- a/roles/vstat-data-migrate/tasks/main.yml
+++ b/roles/vstat-data-migrate/tasks/main.yml
@@ -5,6 +5,7 @@
   run_once: true
 - block:
+
   - name: Pull facts of localhost
     action: setup
     connection: local
@@ -14,86 +15,81 @@
     vstat_backup_dir: "{{metro_backup_root}}/backup-{{ inventory_hostname }}-{{ ansible_date_time.iso8601_basic_short }}/"
     run_once: true
-  - name: Install mount packages
-    yum: name={{ item }} state=present
-    remote_user: root
-    with_items:
-      - libnfsidmap
-      - nfs-utils
+  - block:
-  - name: Create dir with vstat data backup path
-    file:
-      dest: "{{ vstat_backup_dir }}"
-      state: directory
-      mode: 0777
-      recurse: yes
-    remote_user: root
-
-  - name: Mount the nfs folder on to vstat vm
-    mount:
-      src: "{{ vstat_nfs_server_with_folder }}"
-      name: "{{ vstat_backup_dir }}"
-      state: mounted
-      fstype: nfs4
-    remote_user: root
-
-  - name: Get the nfs shared folder details
-    shell: "mount | grep nfs"
-    register: nfs_folder
-    remote_user: root
-
-  - name: Verify backup folder path is nfs shared
-    assert:
-      that: vstat_backup_dir[:-1] in nfs_folder.stdout
-      msg: "{{ vstat_backup_dir }} is not nfs shared"
-
-  - name: Copy elasticsearch backup scritps
-    copy: src={{ vstat_backup_scripts_path }}/{{ item }}
-          dest=/tmp/
-    with_items: "{{ vstat_backup_scripts_file_list }}"
-    remote_user: root
-    run_once: true
-
-  - name: Cleanup backup dir in elasticseach.yml file
-    lineinfile:
-      dest: "/etc/elasticsearch/elasticsearch.yml"
-      regexp: "path.repo"
-      state: absent
-    remote_user: root
-
-  - name: Configure backup dir in elasticseach.yml file
-    lineinfile:
-      dest: "/etc/elasticsearch/elasticsearch.yml"
-      line: "path.repo: [{{ vstat_backup_dir }}]"
-    remote_user: root
-
-  - name: Restart elasticsearch process
-    systemd:
-      name: elasticsearch
-      state: restarted
-    remote_user: root
-
-  - name: Wait for elasticsearch process to come up
-    pause:
-      seconds: 20
-
-  - name: Get elasticsearch current status
-    systemd:
-      name: elasticsearch
-      state: started
-    register: es_status
-    remote_user: root
-
-  - name: Check elasticsearch status is active
-    assert:
-      that: es_status.status.ActiveState == 'active'
-      msg: "Elasticserach process in not active after restart"
-
-  - name: Check elasticsearch process is running
-    assert:
-      that: es_status.status.SubState == 'running'
-      msg: "Elasticsearch process is not running after restart"
+      - name: Install mount packages
+        yum: name={{ item }} state=present
+        with_items:
+          - libnfsidmap
+          - nfs-utils
+
+      - name: Create dir with vstat data backup path
+        file:
+          dest: "{{ vstat_backup_dir }}"
+          state: directory
+          mode: 0777
+          recurse: yes
+
+      - name: Mount the nfs folder on to vstat vm
+        mount:
+          src: "{{ vstat_nfs_server_with_folder }}"
+          name: "{{ vstat_backup_dir }}"
+          state: mounted
+          fstype: nfs4
+
+      - name: Get the nfs shared folder details
+        shell: "mount | grep nfs"
+        register: nfs_folder
+
+      - name: Verify backup folder path is nfs shared
+        assert:
+          that: vstat_backup_dir[:-1] in nfs_folder.stdout
+          msg: "{{ vstat_backup_dir }} is not nfs shared"
+
+      - name: Copy elasticsearch backup scripts
+        copy: src={{ vstat_backup_scripts_path }}/{{ item }}
+              dest=/tmp/
+        with_items: "{{ vstat_backup_scripts_file_list }}"
+        run_once: true
+
+      - name: Cleanup backup dir in elasticsearch.yml file
+        lineinfile:
+          dest: "/etc/elasticsearch/elasticsearch.yml"
+          regexp: "path.repo"
+          state: absent
+
+      - name: Configure backup dir in elasticsearch.yml file
+        lineinfile:
+          dest: "/etc/elasticsearch/elasticsearch.yml"
+          line: "path.repo: [{{ vstat_backup_dir }}]"
+
+      - name: Restart elasticsearch process
+        systemd:
+          name: elasticsearch
+          state: restarted
+
+      - name: Wait for elasticsearch process to come up
+        pause:
+          seconds: 20
+
+      - name: Get elasticsearch current status
+        systemd:
+          name: elasticsearch
+          state: started
+        register: es_status
+
+      - name: Check elasticsearch status is active
+        assert:
+          that: es_status.status.ActiveState == 'active'
+          msg: "Elasticsearch process is not active after restart"
+
+      - name: Check elasticsearch process is running
+        assert:
+          that: es_status.status.SubState == 'running'
+          msg: "Elasticsearch process is not running after restart"
+    remote_user: "{{ vstat_username }}"
+
   - name: Read the repo name to be recreated from the file
     command: "cat {{metro_backup_root}}/backup-{{ groups['vstats'][0] }}-latest/repo_snapshot_name"
     register: names
@@ -102,13 +98,13 @@
   - name: Create repo on the new vstat vm
     command: "python /tmp/{{ create_repo }}"
-    remote_user: root
+    remote_user: "{{ vstat_username }}"
     run_once: true
   - name: Get the repo created by backup script
     command: "python /tmp/{{ show_repo }}"
     register: repo_path
-    remote_user: root
+    remote_user: "{{ vstat_username }}"
     run_once: true
   - name: Print contents of show_repo output when verbosity >= 1
@@ -140,13 +136,13 @@
       state: directory
       recurse: yes
       mode: 0777
-    remote_user: root
+    remote_user: "{{ vstat_username }}"
     run_once: true
   - name: Restore the snapshot on the new vstat VM
     command: "python /tmp/{{ restore_snapshot }}"
     register: restore_snap
-    remote_user: root
+    remote_user: "{{ vstat_username }}"
     run_once: true
   - name: Print contents of restore_snapshot output when verbosity >= 1
@@ -156,7 +152,7 @@
   - name: Get the contents of created snapshot
     command: "python /tmp/{{ show_snapshot }}"
     register: snapshot_contents
-    remote_user: root
+    remote_user: "{{ vstat_username }}"
     run_once: true
   - name: Create local variable with snap_contents output to json
@@ -167,19 +163,19 @@
     debug: var=snapshot_contents verbosity=1
     run_once: true
-  - block:
-    - name: Verify the contents of the snapshot created
-      assert:
-        that: '"{{ item }}" in list_of_indices'
-        msg: "{{ item }} index was not found"
-      with_items: "{{ snapshot_contents_json['indices'] }}"
-      run_once: true
+  - name: Verify the contents of the snapshot created
+    assert:
+      that: '"{{ item }}" in list_of_indices'
+      msg: "{{ item }} index was not found"
+    with_items: "{{ snapshot_contents_json['indices'] }}"
+    run_once: true
     when: list_of_indices is defined
   - block:
+
     - name: Get the list of all indices
       command: "python /tmp/{{ get_indices }}"
-      remote_user: root
+      remote_user: "{{ vstat_username }}"
       register: indices_output
       run_once: true
@@ -189,7 +185,9 @@
       msg: "{{ item }} index was not found"
       with_items: "{{ snapshot_contents_json['indices'] }}"
       run_once: true
+
     when: list_of_indices is not defined
+
   when:
     - not vstat_in_place_upgrade
     - from_major_version == 4
@@ -204,7 +202,7 @@
     vsd_hostname: "{{ vsd_fqdn }}"
     run_once: true
-  - include: handle_upgrade_from_401_version.yml
+  - include_tasks: handle_upgrade_from_401_version.yml
     delegate_to: "{{ item }}"
     with_items: "{{ vsd_hostname_list }}"
     when:
diff --git a/roles/vstat-deploy/tasks/aar_vss_enable.yml b/roles/vstat-deploy/tasks/aar_vss_enable.yml
index 2561442952..d34a67da68 100644
--- a/roles/vstat-deploy/tasks/aar_vss_enable.yml
+++ b/roles/vstat-deploy/tasks/aar_vss_enable.yml
@@ -7,9 +7,9 @@
 - name: Revoke old stats cert, if any
   command: "/opt/vsd/ejbca/deploy/certMgmt.sh -a revoke -u elastic"
   remote_user: "{{ vsd_username }}"
-  delegate_to: "{{ groups['vsds'][0] }}"
+  delegate_to: "{{ vsd_fqdn }}"
   ignore_errors: yes
-
+  remote_user: "{{ vstat_username }}"
 - name: Set local variables
@@ -20,13 +20,18 @@
     - elasticCert.pem
     - elastic-Key.pem
     - elastic.pem
-- block:
+- block:
+
   - name: Generate SSL certificates on VSD for the Stats node
-    command: "/bin/sshpass -p{{ vstat_password }} /opt/vsd/ejbca/deploy/certMgmt.sh -a generate -u elastic -c elastic -o csp -f pem -t server -d {{ mgmt_ip }}"
-    register: sslresult
-    remote_user: "{{ vsd_username }}"
-    delegate_to: "{{ groups['vsds'][0] }}"
+    include_role:
+      name: common
+      tasks_from: vsd-generate-transfer-certificates
+    vars:
+      certificate_password: "Alcateldc"
+      certificate_username: elastic
+      commonName: elastic
+      certificate_type: server
+      additional_parameters: -d {{ mgmt_ip }}
   - name: Create temp folder to host the ssl certificates
     local_action: file path={{ dest }} state=directory mode=755
@@ -47,9 +52,10 @@
   - name: Delete temp folder from localhost
     local_action: file path={{ dest }} state=absent
     run_once: True
-
+
   - name: Restart nginx process on Stats node
     systemd:
       name: nginx
       state: restarted
     remote_user: "{{ vstat_username }}"
+
diff --git a/roles/vstat-deploy/tasks/heat.yml b/roles/vstat-deploy/tasks/heat.yml
index 3cfda1ef35..cd4fea59cb 100644
--- a/roles/vstat-deploy/tasks/heat.yml
+++ b/roles/vstat-deploy/tasks/heat.yml
@@ -103,6 +103,8 @@
   include_role:
     name: common
     tasks_from: linux-ntp-sync
+  vars:
+    rem_user: "{{ vstat_username }}"
 - name: Resolve "{{ vsd_fqdn }}" to ip addr
   shell: "getent hosts {{ vsd_fqdn }} | awk '{print $1}'"
diff --git a/roles/vstat-deploy/tasks/main.yml b/roles/vstat-deploy/tasks/main.yml
index 807e6ffde7..4762e770bb 100644
--- a/roles/vstat-deploy/tasks/main.yml
+++ b/roles/vstat-deploy/tasks/main.yml
@@ -1,11 +1,11 @@
 ---
-- include: non_heat.yml
+- import_tasks: non_heat.yml
   when: not target_server_type | match("heat")
   tags:
     - vstat
     - vstat-deploy
-- include: heat.yml
+- import_tasks: heat.yml
   when: target_server_type | match("heat")
   tags:
     - vstat
diff --git a/roles/vstat-deploy/tasks/non_heat.yml b/roles/vstat-deploy/tasks/non_heat.yml
index 9b75eb8243..772e970f58 100644
--- a/roles/vstat-deploy/tasks/non_heat.yml
+++ b/roles/vstat-deploy/tasks/non_heat.yml
@@ -15,22 +15,19 @@
     vsd_hostname: "{{ vsd_fqdn }}"
 - block:
+
   - name: Generate SSH keys
     shell: ssh-keygen -b 2048 -t rsa -f /root/.ssh/id_rsa -q -N ""
     args:
       creates: /root/.ssh/id_rsa
-    remote_user: root
     delegate_to: "{{ item }}"
     with_items: "{{ groups['vstats'] }}"
-    run_once: true
   - name: Get generated SSH keys
     shell: cat ~/.ssh/id_rsa.pub
     register: ssh_key_lst
-    remote_user: root
     delegate_to: "{{ item }}"
     with_items: "{{ groups['vstats'] }}"
-    run_once: true
   - name: Add SSH keys to authorized_keys file
     shell: "echo {{ item[1].stdout }} >> /root/.ssh/authorized_keys"
@@ -38,35 +35,37 @@
     with_nested:
       - "{{ groups['vstats'] }}"
       - "{{ ssh_key_lst.results }}"
-    remote_user: root
-    run_once: true
+
+  remote_user: "{{ vstat_username }}"
+  run_once: true
   when: vstat_sa_or_ha | match('ha')
 - name: check for iptables
   shell: "service iptables status"
   register: _svc_iptables
   ignore_errors: True
-  remote_user: "root"
+  remote_user: "{{ vstat_username }}"
 - name: Print vsd deployment mode when verbosity >= 1
   debug: var="vsd_sa_or_ha"
 - block:
+
   - name: Start iptables
     systemd:
       name: iptables
       state: started
-    remote_user: "root"
+    remote_user: "{{ vstat_username }}"
   - name: Enable iptables on boot
     systemd:
       name: iptables
       enabled: yes
-    remote_user: "root"
+    remote_user: "{{ vstat_username }}"
   - name: Check if iptables is already setup for VSD rules
     shell: iptables -L INPUT | grep 'match-set vsd src'
-    remote_user: "{{ target_server_username }}"
+    remote_user: "{{ vstat_username }}"
     register: vstat_iptables_result
     ignore_errors: True
@@ -99,7 +98,7 @@
     shell: "{{ item }}"
     with_items:
       - "{{ iptables_std_commands }}"
-    remote_user: "root"
+    remote_user: "{{ vstat_username }}"
     register: iptables_results
     ignore_errors: True
@@ -120,7 +119,7 @@
     shell: "{{ item }}"
     with_items:
       - "{{ iptables_cluster_commands }}"
-    remote_user: "root"
+    remote_user: "{{ vstat_username }}"
     register: iptables_results
     ignore_errors: True
@@ -143,20 +142,21 @@
   shell: "service firewalld status"
   register: _svc_firewalld
   ignore_errors: True
-  remote_user: "root"
+  remote_user: "{{ vstat_username }}"
 - block:
+
   - name: Start firewalld
     systemd:
      name: firewalld
      state: started
-    remote_user: "root"
+    remote_user: "{{ vstat_username }}"
   - name: Enable firewalld on boot
     systemd:
       name: firewalld
       enabled: yes
-    remote_user: "root"
+    remote_user: "{{ vstat_username }}"
   - name: Check if firewalld is already setup for VSD rules
     shell: firewall-cmd --list-all | grep 9200 | grep accept
@@ -191,7 +191,7 @@
     shell: "{{ item }}"
     with_items:
       - "{{ firewall_std_commands }}"
-    remote_user: "root"
+    remote_user: "{{ vstat_username }}"
     when: vsd_sa_or_ha | match('sa')
   - name: Config firewall on VSTAT vm to accept conn on ports 9200, 9300 from vsd(s) in cluster setup
@@ -199,7 +199,7 @@
     shell: "{{ item }}"
     with_items:
       - "{{ firewall_cluster_commands }}"
     when: vsd_sa_or_ha | match('ha')
-    remote_user: "root"
+    remote_user: "{{ vstat_username }}"
   when: not skip_vstat_deploy
@@ -213,12 +213,14 @@
   include_role:
     name: common
     tasks_from: linux-ntp-sync
+  vars:
+    rem_user: "{{ vstat_username }}"
 - name: Restart elastic search
   systemd:
     name: elasticsearch
     state: restarted
-  remote_user: "root"
+  remote_user: "{{ vstat_username }}"
 - block:
@@ -241,14 +243,14 @@
     stat: path=/opt/vsd/vsd-es-standalone.sh
     register: es_sa_script
     delegate_to: "{{ vsd_hostname_list[0] }}"
-    remote_user: root
+    remote_user: "{{ vsd_username }}"
     when:
       - vstat_sa_or_ha | match('sa')
   - name: Execute VSTAT standalone script on standalone or clustered vsds
     command: /opt/vsd/vsd-es-standalone.sh -e {{ inventory_hostname }}
     delegate_to: "{{ vsd_hostname_list[0] }}"
-    remote_user: root
+    remote_user: "{{ vsd_username }}"
     environment:
       SSHPASS: "{{ vstat_password }}"
     when:
@@ -258,7 +260,7 @@
   - name: Execute VSTAT cluster script on standalone or clustered vsds
     command: /opt/vsd/vsd-es-cluster-config.sh -e {{ groups['vstats'][0] }},{{ groups['vstats'][1] }},{{ groups['vstats'][2] }}
     delegate_to: "{{ vsd_hostname_list[0] }}"
-    remote_user: root
+    remote_user: "{{ vsd_username }}"
     environment:
       SSHPASS: "{{ vstat_password }}"
     when: vstat_sa_or_ha | match('ha')
@@ -272,5 +274,5 @@
   when: not skip_vstat_deploy
-- include: aar_vss_enable.yml
+- import_tasks: aar_vss_enable.yml
diff --git a/roles/vstat-destroy/tasks/main.yml b/roles/vstat-destroy/tasks/main.yml
index d18000f0a8..09a0b6d9af 100644
--- a/roles/vstat-destroy/tasks/main.yml
+++ b/roles/vstat-destroy/tasks/main.yml
@@ -1,20 +1,20 @@
 ---
 - block:
-  - include: kvm.yml
+  - import_tasks: kvm.yml
     when: target_server_type | match("kvm")
     tags:
       - vstat
       - vstat-destroy
-  - include: heat.yml
+  - import_tasks: heat.yml
     when: target_server_type | match("heat")
     tags:
       - vstat
       - heat
       - vstat-destroy
-  - include: vcenter.yml
+  - import_tasks: vcenter.yml
     when: target_server_type | match("vcenter")
     tags:
       - vstat
diff --git a/roles/vstat-health/tasks/main.yml b/roles/vstat-health/tasks/main.yml
index eecfa1046a..b35203cbe5 100644
--- a/roles/vstat-health/tasks/main.yml
+++ b/roles/vstat-health/tasks/main.yml
@@ -1,5 +1,5 @@
 ---
-- include: report_header.yml
+- import_tasks: report_header.yml
 - block:
   - name: Get current network config of all VSTAT nodes
@@ -14,9 +14,25 @@
   - name: Write network config to json file
     nuage_append: filename="{{ report_path }}" text="{{ net_conf.info | to_nice_json}}\n"
     delegate_to: localhost
+
+  - name: check web interface of vstat
+    uri:
+      url: http://{{ inventory_hostname }}:9200
+      method: GET
+      user: "{{ vstat_username }}"
+      password: "{{ vstat_password }}"
+      status_code: 200
+      validate_certs: False
+    register: webresult
+    ignore_errors: yes
+
+  - name: write web interface result
+    nuage_append: filename="{{ report_path }}" text="{{ webresult | to_nice_json}}\n"
+    delegate_to: localhost
+
   when: inventory_hostname in groups['vstats']
-- include: monit_status.yml
+- import_tasks: monit_status.yml
   when: inventory_hostname in groups['vsds']
-- include: report_footer.yml
+- import_tasks: report_footer.yml
diff --git a/roles/vstat-health/tasks/report_header.yml b/roles/vstat-health/tasks/report_header.yml
index 396b15a7c4..4187c087db 100644
--- a/roles/vstat-health/tasks/report_header.yml
+++ b/roles/vstat-health/tasks/report_header.yml
@@ -18,7 +18,7 @@
   run_once: true
 - name: Write title to report file
-  nuage_append: filename="{{ report_path }}" text="VSD Health Report Start\n"
+  nuage_append: filename="{{ report_path }}" text="VSTAT Health Report Start\n"
   delegate_to: localhost
   run_once: true
diff --git a/roles/vstat-predeploy/tasks/main.yml b/roles/vstat-predeploy/tasks/main.yml
index 31df76426c..6d4aaee824 100644
--- a/roles/vstat-predeploy/tasks/main.yml
+++ b/roles/vstat-predeploy/tasks/main.yml
@@ -21,13 +21,13 @@
     that: "groups['vsds'] is defined"
     msg: "vstat-deploy requires VSD information. Please add VSD information to build_vars.yml, re-run the build, then re-run the vstat-deploy. See examples for details."
-- include: kvm.yml
+- import_tasks: kvm.yml
   when: target_server_type | match("kvm")
   tags:
     - vstat
    - vstat-predeploy
-- include: vcenter.yml
+- import_tasks: vcenter.yml
   when: target_server_type | match("vcenter")
   tags:
     - vstat
diff --git a/roles/vstat-upgrade-backup-and-prep/tasks/main.yml b/roles/vstat-upgrade-backup-and-prep/tasks/main.yml
index a741370578..cac9b9515a 100644
--- a/roles/vstat-upgrade-backup-and-prep/tasks/main.yml
+++ b/roles/vstat-upgrade-backup-and-prep/tasks/main.yml
@@ -12,8 +12,8 @@
   when: skip_vstat_upgrade
 - block:
-  - include: prep_vstat_in_place_upgrade.yml
-    remote_user: root
+  - import_tasks: prep_vstat_in_place_upgrade.yml
+    remote_user: "{{ vstat_username }}"
     when: vstat_in_place_upgrade
   - name: Backup elasticsearch data
diff --git a/roles/vstat-upgrade-wrapup/tasks/main.yml b/roles/vstat-upgrade-wrapup/tasks/main.yml
index cc6fe4057d..9428a899fd 100644
--- a/roles/vstat-upgrade-wrapup/tasks/main.yml
+++ b/roles/vstat-upgrade-wrapup/tasks/main.yml
@@ -12,8 +12,8 @@
   when: skip_vstat_upgrade
 - block:
-  - include: post_upgrade_checks.yml
-    remote_user: root
+  - import_tasks: post_upgrade_checks.yml
+    remote_user: "{{ vstat_username }}"
     when: vstat_in_place_upgrade
   - name: Migrate elasticsearch data to new vstat vm(s)
diff --git a/roles/vstat-upgrade/tasks/main.yml b/roles/vstat-upgrade/tasks/main.yml
index c2e8a2391a..2e0d713866 100644
--- a/roles/vstat-upgrade/tasks/main.yml
+++ b/roles/vstat-upgrade/tasks/main.yml
@@ -12,10 +12,10 @@
   when: skip_vstat_upgrade
 - block:
-  - include: vstat_in_place_upgrade.yml
-    remote_user: root
+  - import_tasks: vstat_in_place_upgrade.yml
+    remote_user: "{{ vstat_username }}"
     when: vstat_in_place_upgrade
-  - include: vstat_out_of_place_upgrade.yml
+  - import_tasks: vstat_out_of_place_upgrade.yml
     when: not vstat_in_place_upgrade
   when: not skip_vstat_upgrade
diff --git a/roles/vstat-upgrade/tasks/vstat_in_place_upgrade.yml b/roles/vstat-upgrade/tasks/vstat_in_place_upgrade.yml
index 4d8b2d720d..bae78f66a3 100644
--- a/roles/vstat-upgrade/tasks/vstat_in_place_upgrade.yml
+++ b/roles/vstat-upgrade/tasks/vstat_in_place_upgrade.yml
@@ -40,12 +40,12 @@
 - block:
   - name: Execute VSTAT standalone script
-    include: execute_sa_script.yml
+    import_tasks: execute_sa_script.yml
     run_once: true
   rescue:
   - name: Wait for shard count go down to zero and status to turn green
-    include: get_health_status.yml
+    import_tasks: get_health_status.yml
     run_once: true
   - name: Check ES Status
@@ -53,7 +53,7 @@
     when: es_status.json.status == 'red'
   - name: Execute VSTAT standalone script after status turns green
-    include: execute_sa_script.yml
+    import_tasks: execute_sa_script.yml
     when:
       - es_status.json.status == 'green'
       - es_status.json.unassigned_shards == 0
@@ -63,13 +63,13 @@
 - block:
   - name: Execute VSTAT clustered script
-    include: execute_ha_script.yml
+    import_tasks: execute_ha_script.yml
     delegate_to: "{{ vsd_hostname_list[0] }}"
     run_once: true
   rescue:
   - name: Wait for shard count go down to zero and status to turn green
-    include: get_health_status.yml
+    import_tasks: get_health_status.yml
     run_once: true
   - name: Check ES status after cluster restart
@@ -77,7 +77,7 @@
     when: es_status.json.status == 'red'
   - name: Execute VSTAT clustered script after status turns green
-    include: execute_ha_script.yml
+    import_tasks: execute_ha_script.yml
     when:
       - es_status.json.status == 'green'
       - es_status.json.unassigned_shards == 0
diff --git a/roles/vstat-vsd-health/tasks/main.yml b/roles/vstat-vsd-health/tasks/main.yml
index 4cafb38cfb..d16a0a0fa8 100644
--- a/roles/vstat-vsd-health/tasks/main.yml
+++ b/roles/vstat-vsd-health/tasks/main.yml
@@ -7,3 +7,9 @@
     - elasticsearch-status
     - tca-daemon-status
     - stats-collector-status
+
+- name: Read the status of the DB upgrade directory and verify it exists
+  include_role:
+    name: common
+    tasks_from: vsd-verify-db-status
+  tags: vsd
diff --git a/user_creds.yml b/user_creds.yml
index 115d071a9d..20f575623f 100644
--- a/user_creds.yml
+++ b/user_creds.yml
@@ -38,3 +38,7 @@ vcin_password: Alcateldc
 # VNSUTIL username and password
 vnsutil_username: root
 vnsutil_password: Alcateldc
+
+# compute node username and password
+compute_username: root
+compute_password: caso
diff --git a/wrapper.yml b/wrapper.yml
index d04b24a52a..105ef5e874 100644
--- a/wrapper.yml
+++ b/wrapper.yml
@@ -1 +1,3 @@
-- include: "{{ playbook }}"
+---
+- name: Include Playbook
+  import_playbook: "{{ playbook }}"