diff --git a/Documentation/RELEASE_NOTES.md b/Documentation/RELEASE_NOTES.md
index dc2999f231..50a84fd782 100644
--- a/Documentation/RELEASE_NOTES.md
+++ b/Documentation/RELEASE_NOTES.md
@@ -1,4 +1,14 @@
 # Metro Automation Engine Release Notes
+## Release 2.4.5
+### Resolved Issues
+* Set validate_certs to no for VMware playbooks
+* Clean socket files to work around Ansible persistent connection bug
+* Fix vsc_health and vstat_health to support custom username and password
+* Remove vstat_health from UPGRADE procedures
+* Fix vstat-vsd-health role to only run when VSTATs are defined
+* Remove unused datafile, upgrade_vars.yml
+* Remove unused playbooks, vsc_ha_node1_upgrade.yml and vsc_ha_node2_upgrade.yml
+
 ## Release 2.4.4
 ### New Features and Enhancements
 * upgrade VSTAT and VSC operating with a user other than root
@@ -13,10 +23,10 @@
 ## Release 2.4.3
 ### New Features and Enhancements
 * add 5.3.1 from version to VSTAT upgrade skip list
-* refactor logic behind destroy during install and upgrade, edit VSTAT list of versions to be skipped during upgrade. 
+* refactor logic behind destroy during install and upgrade, edit VSTAT list of versions to be skipped during upgrade.
-* add vnc console access in VSD template 
+* add vnc console access in VSD template
 * add vault encryption procedure doc
-* check the 'show router interface' command to verify that states for Adm and Oprv4 are correct for control interface 
+* check the 'show router interface' command to verify that states for Adm and Oprv4 are correct for control interface
 * add support for new cloud-init version for 5.3.2
 * add support for upgrade to version 5.3.2
 * add support for non-root usernames for VSD upgrade
@@ -29,7 +39,7 @@
 * add ability to customize passwords for VSD programs and services
 * add playbook to copy qcow2 files before predeploy step, add checks in predeploy step for qcow2 existence if skipCopyImages is set
 ### Resolved Issues
-* user-related fixes 
+* user-related fixes
   - vsd-predeploy role tries to use the password listed in user_creds.yml to authenticate in the vmware_vm_shell tasks, rather than root/Alcateldc, which it should be using for a freshly-deployed OVF.
   - roles/vstat-vsd-health/tasks/main.yml: Needs remote_user: "{{ vstat_username }}" on the monit_waitfor_service task, otherwise it tries to SSH into vstat using the local username on the metro host, not the root user.
   - roles/vstat-health/tasks/main.yml: Needs delegate to localhost.
diff --git a/Documentation/UPGRADE_HA.md b/Documentation/UPGRADE_HA.md
index 8872c7d58f..b87a504e76 100644
--- a/Documentation/UPGRADE_HA.md
+++ b/Documentation/UPGRADE_HA.md
@@ -45,7 +45,7 @@ For this example, our clustered (HA) deployment consists of:
    `./metro-ansible vsd_ha_upgrade_deploy_2_and_3 -vvvv`
 
    The VSD nodes have been upgraded. If you experience a failure before the VSD install script runs, re-execute the command. If it fails a second time or if the failure occurs after the VSD install script runs, destroy the VMs (either manually or with the command `./metro-ansible vsd_ha_upgrade_destroy_2_and_3`) then re-execute the deploy command.
-   
+
 4. Power off VSD node one.
 
    `./metro-ansible vsd_ha_upgrade_shutdown_1 -vvvv`
@@ -83,7 +83,7 @@ For this example, our clustered (HA) deployment consists of:
 2. Backup and prepare VSC node one.
 
    `./metro-ansible vsc_ha_upgrade_backup_and_prep_1 -vvvv`
-   
+
    If you experience a failure, you can re-execute the command.
 
 3. Deploy VSC node one.
@@ -106,7 +106,7 @@ Upgrade your VRS(s) and then continue with this procedure. Do not proceed withou
 1. Backup and prepare VSC node two.
 
    `./metro-ansible vsc_ha_upgrade_backup_and_prep_2 -vvvv`
-   
+
    If you experience a failure, you can re-execute the command.
 
 2. Deploy VSC node two.
@@ -124,37 +124,31 @@ Upgrade your VRS(s) and then continue with this procedure. Do not proceed withou
 ## Upgrade VSTAT
 Our example includes a VSTAT node. If your topology does not include one, proceed to *Finalize the Upgrade* below.
 
-1. Run VSTAT health check (optional).
-
-   `./metro-ansible vstat_health -e report_filename=vstat_preupgrade_health.txt -vvvv`
-
-   You performed health checks during preupgrade preparations, but it is good practice to run the check here as well to make sure the VSD upgrade has not caused any problems.
-
-2. Backup the VSTAT node.
+1. Backup the VSTAT node.
 
    `./metro-ansible vstat_upgrade_data_backup -vvvv`
 
    Data from the VSTAT node is backed up in the NFS shared folder. If you experience a failure, you can re-execute the command.
 
-3. Power off the VSTAT node.
+2. Power off the VSTAT node.
 
    `./metro-ansible vstat_destroy -vvvv`
 
    VSTAT shuts down; it is not deleted. (The new node will be brought up with the new VM name.) You have the option of performing this step manually instead. If you experience a failure you can re-execute the command or power off the VM manually.
 
-4. Predeploy the new VSTAT node.
+3. Predeploy the new VSTAT node.
 
    `./metro-ansible vstat_predeploy`
 
-   The new VSD node is now up and running; it is not yet configured. If you experience a failure, delete the new node by executing the command `./metro-ansible vstat_upgrade_destroy` then re-execute the predeploy command. 
+   The new VSTAT node is now up and running; it is not yet configured. If you experience a failure, delete the new node by executing the command `./metro-ansible vstat_upgrade_destroy` then re-execute the predeploy command.
 
-5. Deploy the new VSTAT node.
+4. Deploy the new VSTAT node.
 
    `./metro-ansible vstat_deploy -vvvv`
 
    The new VSTAT node has been deployed and configured to talk with the VSD node. If you experience a failure, re-execute the command. If it fails a second time, destroy the VMs (either manually or with the command `./metro-ansible vstat_upgrade_destroy`) then proceed from the predeploy step above.
 
-6. Migrate data to new VSTAT node.
+5. Migrate data to new VSTAT node.
 
    `./metro-ansible vstat_upgrade_data_migrate -vvvv`
 
@@ -175,9 +169,9 @@ Our example includes a VSTAT node. If your topology does not include one, procee
 ## Questions, Feedback, and Contributing
 
-Ask questions and get support via email. 
-  Outside Nokia: [devops@nuagenetworks.net](mailto:deveops@nuagenetworks.net "send email to nuage-metro project") 
-  Internal Nokia: [nuage-metro-interest@list.nokia.com](mailto:nuage-metro-interest@list.nokia.com "send email to nuage-metro project") 
+Ask questions and get support via email.
+  Outside Nokia: [devops@nuagenetworks.net](mailto:devops@nuagenetworks.net "send email to nuage-metro project")
+  Internal Nokia: [nuage-metro-interest@list.nokia.com](mailto:nuage-metro-interest@list.nokia.com "send email to nuage-metro project")
 
 Report bugs you find and suggest new features and enhancements via the [GitHub Issues](https://github.com/nuagenetworks/nuage-metro/issues "nuage-metro issues") feature.
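The optional pre-upgrade VSTAT health check is removed from the procedure above, but `playbooks/vstat_health.yml` itself remains in the repo; if you still want the extra check before backing up the VSTAT node, it should still be runnable standalone with the same command the old step used:

   `./metro-ansible vstat_health -e report_filename=vstat_preupgrade_health.txt -vvvv`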
diff --git a/Documentation/UPGRADE_SA.md b/Documentation/UPGRADE_SA.md
index 4b48cbbb89..f5a0632676 100644
--- a/Documentation/UPGRADE_SA.md
+++ b/Documentation/UPGRADE_SA.md
@@ -65,7 +65,7 @@ This example is for one VSC node. If your topology has more than one VSC node, p
 2. Backup and prepare the VSC node.
 
    `./metro-ansible vsc_sa_upgrade_backup_and_prep -vvvv`
-   
+
    If you experience a failure, you can re-execute the command.
 
 3. Deploy VSC.
@@ -86,37 +86,31 @@ Upgrade your VRS(s) and then continue with this procedure. Do not proceed withou
 ## Upgrade VSTAT
 Our example includes a VSTAT node. If your topology does not include one, proceed to *Finalize the Upgrade* below.
 
-1. Run VSTAT health check (optional).
-
-   `./metro-ansible vstat_health -e report_filename=vstat_preupgrade_health.txt -vvvv`
-
-   You performed health checks during preupgrade preparations, but it is good practice to run the check here as well to make sure the VSD upgrade has not caused any problems.
-
-2. Backup the VSTAT node.
+1. Backup the VSTAT node.
 
    `./metro-ansible vstat_upgrade_data_backup -vvvv`
 
   Data from the VSTAT node is backed up in the NFS shared folder. If you experience a failure, you can re-execute the command.
 
-3. Power off the VSTAT node.
+2. Power off the VSTAT node.
 
    `./metro-ansible vstat_destroy -vvvv`
 
   VSTAT shuts down; it is not deleted. (The new node will be brought up with the new VM name.) You have the option of performing this step manually instead. If you experience a failure you can re-execute the command or power off the VM manually.
 
-4. Predeploy the new VSTAT node.
+3. Predeploy the new VSTAT node.
 
    `./metro-ansible vstat_predeploy`
 
-   The new VSD node is now up and running; it is not yet configured. If you experience a failure, delete the new node by executing the command then re-execute the predeploy command. 
+   The new VSTAT node is now up and running; it is not yet configured. If you experience a failure, delete the new node by executing the command `./metro-ansible vstat_upgrade_destroy` then re-execute the predeploy command.
 
-5. Deploy the new VSTAT node.
+4. Deploy the new VSTAT node.
 
   `./metro-ansible vstat_deploy -vvvv`
 
   The new VSTAT node has been deployed and configured to talk with the VSD node. If you experience a failure, re-execute the command. If it fails a second time, destroy the VMs (either manually or with the command `./metro-ansible vstat_upgrade_destroy`) then proceed from the predeploy step above.
 
-6. Migrate data to new VSTAT node.
+5. Migrate data to new VSTAT node.
 
    `./metro-ansible vstat_upgrade_data_migrate -vvvv`
 
@@ -137,9 +131,9 @@ Our example includes a VSTAT node. If your topology does not include one, procee
 ## Questions, Feedback, and Contributing
 
-Ask questions and get support via email. 
-  Outside Nokia: [devops@nuagenetworks.net](mailto:deveops@nuagenetworks.net "send email to nuage-metro project") 
-  Internal Nokia: [nuage-metro-interest@list.nokia.com](mailto:nuage-metro-interest@list.nokia.com "send email to nuage-metro project") 
+Ask questions and get support via email.
+  Outside Nokia: [devops@nuagenetworks.net](mailto:devops@nuagenetworks.net "send email to nuage-metro project")
+  Internal Nokia: [nuage-metro-interest@list.nokia.com](mailto:nuage-metro-interest@list.nokia.com "send email to nuage-metro project")
 
 Report bugs you find and suggest new features and enhancements via the [GitHub Issues](https://github.com/nuagenetworks/nuage-metro/issues "nuage-metro issues") feature.
diff --git a/build_vars.yml b/build_vars.yml
index f2e1286548..a55a8728d1 100644
--- a/build_vars.yml
+++ b/build_vars.yml
@@ -42,7 +42,6 @@ upgrade_from_version: '4.0.11'
 upgrade_to_version: '5.2.1'
 
 ## VSTAT UPGRADE ONLY!
-# Required only when upgarding to 4.0.11 versions and below
 ## NFS export that will be mounted on the VSTAT for backup and restore.
 ## Includes the server ip with the folder path. The default is 'NONE'
 ## Uncomment and provide an NFS export to mount if VSTAT upgrade.
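The build_vars.yml comments above describe `vstat_nfs_server_with_folder` but, after this change, no longer pin it to a particular upgrade path. As a sketch, an uncommented entry would look something like the following (the server address and export path are placeholders, not values from this repo):

```yaml
## VSTAT UPGRADE ONLY!
## NFS export that will be mounted on the VSTAT for backup and restore.
vstat_nfs_server_with_folder: 192.0.2.10:/exports/vstat_backup  # placeholder IP and path
```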
diff --git a/library/monit_waitfor_service.py b/library/monit_waitfor_service.py
index 3481413232..3d16cbc16c 100644
--- a/library/monit_waitfor_service.py
+++ b/library/monit_waitfor_service.py
@@ -92,7 +92,7 @@ def status(proc_name):
         if desired_state:
             module.exit_json(changed=True, name=proc_name, state=monit_stats)
         else:
-            module.fail_json(msg="Process %s did not transitioned to active within %i seconds" % (proc_name, timeout_seconds))
+            module.fail_json(msg="Process %s did not transition to active within %i seconds" % (proc_name, timeout_seconds))
 
 
 # Run the main
diff --git a/library/vmware_autostart.py b/library/vmware_autostart.py
index 2804b1b3e8..4a9c561b95 100644
--- a/library/vmware_autostart.py
+++ b/library/vmware_autostart.py
@@ -1,5 +1,6 @@
 #!/usr/bin/python
 
+import os
 from ansible.module_utils.basic import AnsibleModule
 import sys
 from pyVmomi import vim
@@ -50,6 +51,11 @@
       virtual machine in the order to be started
     required: false
     default: 10
+  validate_certs:
+    description:
+      - Whether Ansible should validate SSL certificates
+    required: false
+    default: yes
 '''
@@ -62,6 +68,7 @@
     username: vCenter_username
     password: vCenter_password
     state: enable
+    validate_certs: no
 
 # Example for disabling or not enabling autostart for vm_1
 - vmware_autostart:
 '''
 
-def get_esxi_host(ipAddr, port, username, password, id):
+def get_esxi_host(ip_addr, port, username, password, id):
     uuid = id
-    try:
-        si = get_connection(ipAddr, username, password, port)
-        vm = si.content.searchIndex.FindByUuid(None,
-                                               uuid,
-                                               True,
-                                               False)
-    except Exception:
-        return None
-
+    si = get_connection(ip_addr, username, password, port)
+    vm = si.content.searchIndex.FindByUuid(None,
+                                           uuid,
+                                           True,
+                                           False)
     if vm is not None:
         host = vm.runtime.host
         if host is not None:
@@ -94,86 +97,79 @@ def get_esxi_host(ipAddr, port, username, password, id):
 
 def get_connection(ip_addr, user, password, port):
-    try:
-        connection = SmartConnect(
-            host=ip_addr, port=port, user=user, pwd=password
-        )
-        return connection
-    except Exception:
-        return None
+    connection = SmartConnect(
+        host=ip_addr, port=port, user=user, pwd=password
+    )
+    return connection
 
 
-def get_hosts(conn):
-    try:
-        content = conn.RetrieveContent()
-        container = content.viewManager.CreateContainerView(
-            content.rootFolder, [vim.HostSystem], True
-        )
-    except Exception:
-        return None
-    obj = [host for host in container.view]
+def get_host_obj(host_name, conn):
+    obj = None
+    content = conn.RetrieveContent()
+    container = content.viewManager.CreateContainerView(
+        content.rootFolder, [vim.HostSystem], True
+    )
+    for c in container.view:
+        if c.name == host_name:
+            obj = c
+            break
     return obj
 
 
-def configure_hosts(commaList, connection, startDelay, vmname, state):
-    try:
-        config_hosts = commaList.split(",")
-        all_hosts = get_hosts(connection)
-        host_names = [h.name for h in all_hosts]
-        for a in config_hosts:
-            if a not in host_names:
-                return None
-        for h in all_hosts:
-            if h.name in config_hosts:
-                return configure_autostart(h, startDelay, vmname, state)
-    except Exception:
-        return None
-
-
-def configure_autostart(host, startDelay, vmname, state):
-    hostDefSettings = vim.host.AutoStartManager.SystemDefaults()
-    hostDefSettings.enabled = True
-    hostDefSettings.startDelay = int(startDelay)
-    order = 1
-    if host is not None:
-        try:
-            for vhost in host.vm:
-                if vhost.name == vmname:
-                    spec = host.configManager.autoStartManager.config
-                    spec.defaults = hostDefSettings
-                    auto_power_info = vim.host.AutoStartManager.AutoPowerInfo()
-                    auto_power_info.key = vhost
-                    auto_power_info.waitForHeartbeat = 'no'
-                    auto_power_info.startDelay = -1
-                    auto_power_info.startOrder = -1
-                    auto_power_info.stopAction = 'None'
-                    auto_power_info.stopDelay = -1
-                    if vhost.runtime.powerState == "poweredOff":
-                        auto_power_info.startAction = 'None'
-                    elif vhost.runtime.powerState == "poweredOn":
-                        auto_power_info.startAction = 'powerOn' if state == 'enable' else 'None'
-                    spec.powerInfo = [auto_power_info]
-                    order = order + 1
-                    host.configManager.autoStartManager.ReconfigureAutostart(spec)
-            return True
-        except Exception:
-            return False
+def configure_autostart(host_name, connection, start_delay, vmname, state):
+    host_obj = get_host_obj(host_name, connection)
+    if host_obj is None:
+        return {'failed': True, 'msg': 'Could not find {0} in list of hosts'.format(host_name)}
+    vm_names = [vm.name for vm in host_obj.vm]
+    if vmname not in vm_names:
+        return {'failed': True, 'msg': 'Could not find {0} in list of VMs'.format(vmname)}
+    host_def_settings = vim.host.AutoStartManager.SystemDefaults()
+    host_def_settings.enabled = True
+    host_def_settings.startDelay = int(start_delay)
+    for vm in host_obj.vm:
+        if vm.name == vmname:
+            spec = host_obj.configManager.autoStartManager.config
+            spec.defaults = host_def_settings
+            auto_power_info = vim.host.AutoStartManager.AutoPowerInfo()
+            auto_power_info.key = vm
+            auto_power_info.waitForHeartbeat = 'no'
+            auto_power_info.startDelay = -1
+            auto_power_info.startOrder = -1
+            auto_power_info.stopAction = 'None'
+            auto_power_info.stopDelay = -1
+            if vm.runtime.powerState == "poweredOff":
+                auto_power_info.startAction = 'None'
+            elif vm.runtime.powerState == "poweredOn":
+                auto_power_info.startAction = 'powerOn' if state == 'enable' else 'None'
+            spec.powerInfo = [auto_power_info]
+            host_obj.configManager.autoStartManager.ReconfigureAutostart(spec)
+    return {'failed': False, 'msg': 'Autostart change initiated for {0}'.format(vmname)}
 
 
 def main():
-    arg_spec = dict(
-        name=dict(required=True, type='str'),
-        uuid=dict(required=True, type='str'),
-        hostname=dict(required=True, type='str'),
-        port=dict(required=False, type=int, default=443),
-        username=dict(required=True, type='str', no_log=True),
-        password=dict(required=True, type='str', no_log=True),
-        state=dict(required=True, type='str'),
-        delay=dict(required=False, type=int, default=10)
+    module = AnsibleModule(
+        argument_spec=dict(
+            hostname=dict(
+                type='str',
+                default=os.environ.get('VMWARE_HOST')
+            ),
+            username=dict(
+                type='str',
+                default=os.environ.get('VMWARE_USER')
+            ),
+            password=dict(
+                type='str', no_log=True,
+                default=os.environ.get('VMWARE_PASSWORD')
+            ),
+            validate_certs=dict(required=False, type='bool', default=True),
+            name=dict(required=True, type='str'),
+            uuid=dict(required=False, type='str'),
+            port=dict(required=False, type=int, default=443),
+            delay=dict(required=False, type=int, default=10),
+            state=dict(required=True, type='str', choices=['enable', 'disable'])
+        ),
     )
 
-    module = AnsibleModule(argument_spec=arg_spec, supports_check_mode=True)
-
     ip_addr = module.params['hostname']
     username = module.params['username']
     password = module.params['password']
@@ -183,22 +179,26 @@ def main():
     port = module.params['port']
     start_delay = module.params['delay']
 
-    connection = get_connection(ip_addr, username, password, port)
+    try:
+        connection = get_connection(ip_addr, username, password, port)
 
-    if connection is None:
-        module.fail_json(changed=False, msg="Could not connect to %s" % ip_addr)
+        if connection is None:
+            module.fail_json(msg="Establishing connection to %s failed" % ip_addr)
 
-    esxi_host = get_esxi_host(ip_addr, port, username, password, uuid)
+        esxi_host = get_esxi_host(ip_addr, port, username, password, uuid)
 
-    if esxi_host is not None:
-        configured = configure_hosts(esxi_host, connection, start_delay, vm_name, state)
-    else:
-        module.fail_json(changed=False, msg="Could not get ESXi host for %s" % vm_name)
+        if esxi_host is None:
+            module.fail_json(msg="Could not find ESXi host using uuid %s" % uuid)
+
+        result = configure_autostart(esxi_host, connection, start_delay, vm_name, state)
+    except Exception:
+        e = sys.exc_info()[1]
+        module.fail_json(msg="Attempt to configure autostart failed with exception: %s" % e)
 
-    if configured:
-        module.exit_json(changed=True, msg="VM %s has been configured" % vm_name)
+    if result['failed']:
+        module.fail_json(**result)
     else:
-        module.fail_json(changed=False, msg="VM %s could not be configured" % vm_name)
+        module.exit_json(**result)
 
 
 if __name__ == "__main__":
     main()
diff --git a/playbooks/vsc_ha_node1_upgrade.yml b/playbooks/vsc_ha_node1_upgrade.yml
deleted file mode 100644
index 9b7474775e..0000000000
--- a/playbooks/vsc_ha_node1_upgrade.yml
+++ /dev/null
@@ -1,30 +0,0 @@
----
-- hosts: vscs
-  gather_facts: no
-  serial: 1
-  connection: local
-  any_errors_fatal: true
-  vars:
-    report_filename: vsc_node1_pre_upgrade_report.txt
-  roles:
-    - vsc-health
-
-- hosts: vsc_node1
-  gather_facts: no
-  roles:
-    - vsc-upgrade
-  vars:
-    vsc_username: "{{ vsc_custom_username | default(vsc_default_username) }}"
-    vsc_password: "{{ vsc_custom_password | default(vsc_default_password) }}"
-
-- hosts: vscs
-  gather_facts: no
-  serial: 1
-  connection: local
-  any_errors_fatal: true
-  vars:
-    report_filename: vsc_node1_post_upgrade_report.txt
-    vsc_username: "{{ vsc_custom_username | default(vsc_default_username) }}"
-    vsc_password: "{{ vsc_custom_password | default(vsc_default_password) }}"
-  roles:
-    - vsc-health
diff --git a/playbooks/vsc_ha_node2_upgrade.yml b/playbooks/vsc_ha_node2_upgrade.yml
deleted file mode 100644
index c79efb6943..0000000000
--- a/playbooks/vsc_ha_node2_upgrade.yml
+++ /dev/null
@@ -1,30 +0,0 @@
----
-- hosts: vscs
-  gather_facts: no
-  serial: 1
-  connection: local
-  any_errors_fatal: true
-  vars:
-    report_filename: vsc_node2_pre_upgrade_report.txt
-  roles:
-    - vsc-health
-
-- hosts: vsc_node2
-  gather_facts: no
-  roles:
-    - vsc-upgrade
-  vars:
-    vsc_username: "{{ vsc_custom_username | default(vsc_default_username) }}"
-    vsc_password: "{{ vsc_custom_password | default(vsc_default_password) }}"
-
-- hosts: vscs
-  gather_facts: no
-  serial: 1
-  connection: local
-  any_errors_fatal: true
-  vars:
-    report_filename: vsc_node2_post_upgrade_report.txt
-    vsc_username: "{{ vsc_custom_username | default(vsc_default_username) }}"
-    vsc_password: "{{ vsc_custom_password | default(vsc_default_password) }}"
-  roles:
-    - vsc-health
diff --git a/playbooks/vsc_health.yml b/playbooks/vsc_health.yml
index 5922dca34b..956c65df3a 100644
--- a/playbooks/vsc_health.yml
+++ b/playbooks/vsc_health.yml
@@ -3,4 +3,8 @@
   any_errors_fatal: true
   roles:
     - vsc-health
+  vars:
+    report_filename: vsc_health_report.txt
+    vsc_username: "{{ vsc_custom_username | default(vsc_default_username) }}"
+    vsc_password: "{{ vsc_custom_password | default(vsc_default_password) }}"
   serial: 1
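With vsc_health.yml now resolving credentials as `{{ vsc_custom_username | default(vsc_default_username) }}`, running the health check against a VSC with non-default credentials should be a matter of passing extra vars, for example:

   `./metro-ansible vsc_health -e "vsc_custom_username=admin vsc_custom_password=secret" -vvvv`

(`admin` and `secret` are placeholder values; `report_filename` can be overridden the same way, as the UPGRADE docs do for `vstat_health`.)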
diff --git a/playbooks/vstat_health.yml b/playbooks/vstat_health.yml
index 21d386440e..341027bd68 100644
--- a/playbooks/vstat_health.yml
+++ b/playbooks/vstat_health.yml
@@ -8,9 +8,12 @@
   gather_facts: no
   roles:
     - vstat-vsc-health
+  vars:
+    report_filename: vstat_vsc_health_report.txt
+    vsc_username: "{{ vsc_custom_username | default(vsc_default_username) }}"
+    vsc_password: "{{ vsc_custom_password | default(vsc_default_password) }}"
 
 - hosts: vrss
   gather_facts: no
   roles:
     - vstat-vrs-health
-
diff --git a/roles/common/templates/vstat.j2 b/roles/common/templates/vstat.j2
index 989fd0c28e..4b7102a826 100644
--- a/roles/common/templates/vstat.j2
+++ b/roles/common/templates/vstat.j2
@@ -83,8 +83,8 @@ infra_server_name: {{ item.infra_server_name }}
 {% if 'upgrade' in vstat_operations_list %}
 upgrade_vmname: {{ item.upgrade_vmname is defined | ternary( item.upgrade_vmname, [ item.vmname | default(item.hostname), "new" ] | join('-') ) }}
 vstat_in_place_upgrade: {{ vstat_in_place_upgrade }}
-{% if vstat_in_place_upgrade == False %}
 vstat_nfs_server_with_folder: {{ vstat_nfs_server_with_folder | default('NONE') }}
+{% if vstat_in_place_upgrade == False %}
 vstat_backup_scripts_path: {{ vstat_backup_scripts_path }}
 vstat_backup_scripts_file_list: {{ vstat_backup_scripts_file_list }}
 {% endif %}
diff --git a/roles/reset-build/files/build_vars.yml b/roles/reset-build/files/build_vars.yml
index f2e1286548..a55a8728d1 100644
--- a/roles/reset-build/files/build_vars.yml
+++ b/roles/reset-build/files/build_vars.yml
@@ -42,7 +42,6 @@ upgrade_from_version: '4.0.11'
 upgrade_to_version: '5.2.1'
 
 ## VSTAT UPGRADE ONLY!
-# Required only when upgarding to 4.0.11 versions and below
 ## NFS export that will be mounted on the VSTAT for backup and restore.
 ## Includes the server ip with the folder path. The default is 'NONE'
 ## Uncomment and provide an NFS export to mount if VSTAT upgrade.
diff --git a/roles/vsc-deploy/tasks/main.yml b/roles/vsc-deploy/tasks/main.yml
index cb97a166ac..fed96ed8dc 100644
--- a/roles/vsc-deploy/tasks/main.yml
+++ b/roles/vsc-deploy/tasks/main.yml
@@ -5,8 +5,9 @@
     - vsc
     - heat
     - vsc-deploy
-  
+
 - block:
+  - name: Change XMPP connection to TLS on VSD
     command: /opt/vsd/bin/ejmode allow -y
     delegate_to: "{{ item }}"
@@ -32,7 +33,7 @@
     remote_user: "{{ vsd_default_username }}"
 
   - name: setup TLS
-    include_role: 
+    include_role:
       name: common
       tasks_from: vsc-tls-setup
-  
+
diff --git a/roles/vsc-destroy/tasks/vcenter.yml b/roles/vsc-destroy/tasks/vcenter.yml
index a6ff4519ff..a5fb3ded28 100644
--- a/roles/vsc-destroy/tasks/vcenter.yml
+++ b/roles/vsc-destroy/tasks/vcenter.yml
@@ -10,7 +10,7 @@
     validate_certs: no
   register: vsc_vm_folder
   ignore_errors: on
-  
+
 - name: Gathering info on VM
   connection: local
   vmware_guest_facts:
@@ -28,24 +28,25 @@
 - debug: var=vsc_vm_facts verbosity=1
 
 - block:
-  
+
   - block:
-    
+
     - name: Get Facts of VM
       vmware_vm_facts:
         hostname: "{{ target_server }}"
         username: "{{ vcenter.username }}"
         password: "{{ vcenter.password }}"
+        validate_certs: no
       delegate_to: localhost
      register: vm_list
-  
+
     - name: Set VM UUID
       set_fact:
         uuid: "{{ vm_list.virtual_machines[vm_name]['uuid'] }}"
      when: vm_name in vm_list.virtual_machines
-  
+
     - debug: var=uuid
-  
+
    - name: Turn off autostart
      connection: local
      vmware_autostart:
@@ -55,6 +56,7 @@
        username: "{{ vcenter.username }}"
        password: "{{ vcenter.password }}"
        state: disable
+       validate_certs: no
 
    - name: Power off the VSC VM
      connection: local
@@ -67,7 +69,7 @@
        folder: "/{{ vcenter.datacenter }}{{ vsc_vm_folder['folders'][0] }}"
        name: "{{ vmname }}"
        state: "poweredoff"
-  
+
    when: vsc_vm_facts['instance']['hw_power_status'] == 'poweredOn'
 
  - name: Removing the VSC VM
@@ -81,5 +83,5 @@
       folder: "/{{ vcenter.datacenter }}{{ vsc_vm_folder['folders'][0] }}"
       name: "{{ vmname }}"
       state: "absent"
-  
+
   when: vsc_vm_folder|succeeded and vsc_vm_facts|succeeded
diff --git a/roles/vsc-predeploy/tasks/vcenter.yml b/roles/vsc-predeploy/tasks/vcenter.yml
index 7d074967a8..858d190265 100644
--- a/roles/vsc-predeploy/tasks/vcenter.yml
+++ b/roles/vsc-predeploy/tasks/vcenter.yml
@@ -115,20 +115,21 @@
       hostname: "{{ target_server }}"
       username: "{{ vcenter.username }}"
       password: "{{ vcenter.password }}"
+      validate_certs: no
     delegate_to: localhost
     register: vm_list
-  
+
   - name: Verify VM exists on Host
     assert:
       that: "vm_name in vm_list.virtual_machines"
       msg: "Desired VM does not exist"
-  
+
   - name: Set VM UUID
     set_fact:
       uuid: "{{ vm_list.virtual_machines[vm_name]['uuid'] }}"
-  
+
   - debug: var=uuid
-  
+
   - name: Turn on autostart
     connection: local
     vmware_autostart:
@@ -138,7 +139,8 @@
       username: "{{ vcenter.username }}"
       password: "{{ vcenter.password }}"
       state: enable
-  
+      validate_certs: no
+
   - name: Verify VM is running
     assert:
       that: "vsc_vm_folder|succeeded and vsc_vm_facts|succeeded"
diff --git a/roles/vsd-decouple/tasks/main.yml b/roles/vsd-decouple/tasks/main.yml
index c2b11f0ced..1fd13934ba 100644
--- a/roles/vsd-decouple/tasks/main.yml
+++ b/roles/vsd-decouple/tasks/main.yml
@@ -10,7 +10,7 @@
   include_role:
     name: common
     tasks_from: vsd-reset-keystorepass
-  
+
 - import_tasks: report_header.yml
 
 - name: get the username running the deploy
@@ -55,6 +55,10 @@
         - "'iso' in mount_file.stdout"
       msg: "Did not find iso file in mount path"
 
+  - name: Pause to allow for mount delay
+    pause:
+      seconds: 5
+
   - name: Decouple VSD Node
     command: "/media/CDROM/decouple.sh -y"
 
@@ -65,7 +69,7 @@
   - name: Execute list_p1db command on node3
     command: "{{ p1db_cmd }}"
    register: list_p1db_output
-    
+
    remote_user: "{{ vsd_username | default(vsd_default_username) }}"
    become: "{{ 'no' if vsd_username | default(vsd_default_username) == 'root' else 'yes' }}"
    vars:
diff --git a/roles/vsd-destroy/tasks/vcenter.yml b/roles/vsd-destroy/tasks/vcenter.yml
index c61acfc4db..ac0f578fdf 100644
--- a/roles/vsd-destroy/tasks/vcenter.yml
+++ b/roles/vsd-destroy/tasks/vcenter.yml
@@ -40,6 +40,7 @@
     hostname: "{{ target_server }}"
     username: "{{ vcenter.username }}"
     password: "{{ vcenter.password }}"
+    validate_certs: no
   delegate_to: localhost
   register: vm_list
 
@@ -58,6 +59,7 @@
     username: "{{ vcenter.username }}"
     password: "{{ vcenter.password }}"
     state: disable
+    validate_certs: no
 
 - name: Power off the vsd VM
   connection: local
diff --git a/roles/vsd-predeploy/tasks/vcenter.yml b/roles/vsd-predeploy/tasks/vcenter.yml
index 035c1f241a..77a96da197 100644
--- a/roles/vsd-predeploy/tasks/vcenter.yml
+++ b/roles/vsd-predeploy/tasks/vcenter.yml
@@ -76,6 +76,7 @@
     hostname: "{{ target_server }}"
     username: "{{ vcenter.username }}"
     password: "{{ vcenter.password }}"
+    validate_certs: no
   delegate_to: localhost
   register: vm_list
 
@@ -83,13 +84,13 @@
   assert:
     that: "vm_name in vm_list.virtual_machines"
     msg: "Desired VM does not exist"
-  
+
 - name: Set VM UUID
   set_fact:
     uuid: "{{ vm_list.virtual_machines[vm_name]['uuid'] }}"
-  
+
 - debug: var=uuid
-  
+
 - name: Turn on autostart
   connection: local
   vmware_autostart:
@@ -99,7 +100,8 @@
     username: "{{ vcenter.username }}"
     password: "{{ vcenter.password }}"
     state: enable
-  
+    validate_certs: no
+
 - name: Disabling cloud-init
   connection: local
   vmware_vm_shell:
diff --git a/roles/vstat-destroy/tasks/vcenter.yml b/roles/vstat-destroy/tasks/vcenter.yml
index 8bfecf0022..18bff04283 100644
--- a/roles/vstat-destroy/tasks/vcenter.yml
+++ b/roles/vstat-destroy/tasks/vcenter.yml
@@ -32,22 +32,23 @@
   when: vstat_vm_facts.exception is defined
 
 - block:
-  
+
   - name: Get Facts of VM
     vmware_vm_facts:
       hostname: "{{ target_server }}"
       username: "{{ vcenter.username }}"
       password: "{{ vcenter.password }}"
-    delegate_to: localhost
-    register: vm_list
+      validate_certs: no
+    delegate_to: localhost
+    register: vm_list
 
   - name: Set VM UUID
     set_fact:
       uuid: "{{ vm_list.virtual_machines[vm_name]['uuid'] }}"
    when: vm_name in vm_list.virtual_machines
-  
+
   - debug: var=uuid
-  
+
   - name: Turn off autostart
     connection: local
     vmware_autostart:
@@ -57,7 +58,8 @@
       username: "{{ vcenter.username }}"
       password: "{{ vcenter.password }}"
       state: disable
-  
+      validate_certs: no
+
   - name: Power off the Stats VM
     connection: local
     vmware_guest:
diff --git a/roles/vstat-predeploy/tasks/vcenter.yml b/roles/vstat-predeploy/tasks/vcenter.yml
index 3c2c41844d..ffaf0150d8 100644
--- a/roles/vstat-predeploy/tasks/vcenter.yml
+++ b/roles/vstat-predeploy/tasks/vcenter.yml
@@ -76,20 +76,21 @@
     hostname: "{{ target_server }}"
     username: "{{ vcenter.username }}"
     password: "{{ vcenter.password }}"
+    validate_certs: no
   delegate_to: localhost
   register: vm_list
-  
+
 - name: Verify VM exists on Host
   assert:
     that: "vm_name in vm_list.virtual_machines"
     msg: "Desired VM does not exist"
-  
+
 - name: Set VM UUID
   set_fact:
     uuid: "{{ vm_list.virtual_machines[vm_name]['uuid'] }}"
-  
+
 - debug: var=uuid
-  
+
 - name: Turn on autostart
   connection: local
   vmware_autostart:
@@ -99,6 +100,7 @@
     username: "{{ vcenter.username }}"
     password: "{{ vcenter.password }}"
     state: enable
+    validate_certs: no
 
 - name: Writing eth0 network script file to the VM
   connection: local
diff --git a/roles/vstat-upgrade-backup-and-prep/tasks/main.yml b/roles/vstat-upgrade-backup-and-prep/tasks/main.yml
index 2139bb2570..43dc6b7141 100644
--- a/roles/vstat-upgrade-backup-and-prep/tasks/main.yml
+++ b/roles/vstat-upgrade-backup-and-prep/tasks/main.yml
@@ -11,9 +11,12 @@
         - "****************************************************"
   when: skip_vstat_upgrade
 
-- block: 
+- block:
   - import_tasks: prep_vstat_in_place_upgrade.yml
   remote_user: "{{ vstat_username | default(vstat_default_username) }}"
+  become: "{{ 'no' if vstat_username | default(vstat_default_username) == 'root' else 'yes' }}"
+  vars:
+    ansible_become_pass: "{{ vstat_password | default(vstat_default_password) }}"
   when: vstat_in_place_upgrade
 
 - name: Backup elasticsearch data
diff --git a/roles/vstat-upgrade-backup-and-prep/tasks/prep_vstat_in_place_upgrade.yml b/roles/vstat-upgrade-backup-and-prep/tasks/prep_vstat_in_place_upgrade.yml
index 8b42228252..929ff3b6ff 100644
--- a/roles/vstat-upgrade-backup-and-prep/tasks/prep_vstat_in_place_upgrade.yml
+++ b/roles/vstat-upgrade-backup-and-prep/tasks/prep_vstat_in_place_upgrade.yml
@@ -22,23 +22,29 @@
     vsd_hostname: "{{ vsd_fqdn }}"
   run_once: true
 
-- name: Disable stats collection on all VSD nodes
-  command: /opt/vsd/vsd-stats.sh -d
-  delegate_to: "{{ item }}"
-  with_items: "{{ vsd_hostname_list }}"
+- block:
 
-- name: Generate ssh key on vsd for vstat upgrade
-  shell: ssh-keygen -b 2048 -t rsa -f /root/.ssh/vstat_rsa -q -N ""
-  args:
-    creates: /root/.ssh/vstat_rsa
-  delegate_to: "{{ vsd_hostname_list[0] }}"
-  run_once: true
+  - name: Disable stats collection on all VSD nodes
+    command: /opt/vsd/vsd-stats.sh -d
+    with_items: "{{ vsd_hostname_list }}"
+    delegate_to: "{{ item }}"
 
-- name: Get generated SSH keys
-  shell: cat ~/.ssh/vstat_rsa.pub
-  register: ssh_key
+  - name: Generate ssh key on vsd for vstat upgrade
+    shell: ssh-keygen -b 2048 -t rsa -f /root/.ssh/vstat_rsa -q -N ""
+    args:
+      creates: /root/.ssh/vstat_rsa
+    delegate_to: "{{ vsd_hostname_list[0] }}"
+
+  - name: Get generated SSH keys
+    shell: cat ~/.ssh/vstat_rsa.pub
+    register: ssh_key
+    delegate_to: "{{ vsd_hostname_list[0] }}"
+
+  remote_user: "{{ vsd_username | default(vsd_default_username) }}"
+  become: "{{ 'no' if vsd_username | default(vsd_default_username) == 'root' else 'yes' }}"
+  vars:
+    ansible_become_pass: "{{ vsd_password | default(vsd_default_password) }}"
   run_once: true
-  delegate_to: "{{ vsd_hostname_list[0] }}"
 
 - name: Copy ssh key from vsd to vstat node(s)
   shell: "echo {{ ssh_key.stdout }} >> /root/.ssh/authorized_keys"
diff --git a/roles/vstat-vsd-health/tasks/main.yml b/roles/vstat-vsd-health/tasks/main.yml
index 8ae9b9a63b..6896dbe584 100644
--- a/roles/vstat-vsd-health/tasks/main.yml
+++ b/roles/vstat-vsd-health/tasks/main.yml
@@ -1,19 +1,23 @@
-- name: Wait for ES VSD processes to become running
-  monit_waitfor_service:
-    name: "{{ item }}"
-    timeout_seconds: 600
-    test_interval_seconds: 30
-  with_items:
-    - elasticsearch-status
-    - tca-daemon-status
-    - stats-collector-status
-  remote_user: "{{ vsd_username | default(vsd_default_username) }}"
-  become: "{{ 'no' if vsd_username | default(vsd_default_username) == 'root' else 'yes' }}"
-  vars:
-    ansible_become_pass: "{{ vsd_password | default(vsd_default_password) }}"
+- block:
 
-- name: Read the status of the DB upgrade directory and verify it exists
-  include_role:
-    name: common
-    tasks_from: vsd-verify-db-status
-  tags: vsd
+  - name: Wait for ES VSD processes to become running
+    monit_waitfor_service:
+      name: "{{ item }}"
+      timeout_seconds: 600
+      test_interval_seconds: 30
+    with_items:
+      - elasticsearch-status
+      - tca-daemon-status
+      - stats-collector-status
+    remote_user: "{{ vsd_username | default(vsd_default_username) }}"
+    become: "{{ 'no' if vsd_username | default(vsd_default_username) == 'root' else 'yes' }}"
+    vars:
+      ansible_become_pass: "{{ vsd_password | default(vsd_default_password) }}"
+
+  - name: Read the status of the DB upgrade directory and verify it exists
+    include_role:
+      name: common
+      tasks_from: vsd-verify-db-status
+    tags: vsd
+
+  when: "groups['vstats'] is defined and groups['vstats']"
diff --git a/upgrade_vars.yml b/upgrade_vars.yml
deleted file mode 100644
index 7371bbbd7a..0000000000
--- a/upgrade_vars.yml
+++ /dev/null
@@ -1,18 +0,0 @@
----
-###
-# See UPGRADE_SA.md or UPGRADE_HA.md for details
-###
-# parameter to determine from which version VCS is being upgraded
-upgrade_from_version: '4.0.11'
-
-# parameter to determine to which version VCS is being upgraded
-upgrade_to_version: '5.2.2'
-
-# NFS server ip with the folder path. This is used to mount nfs folder on vstat node(s)
-# Needed only when upgrading to 4.0.11 version.
-vstat_nfs_server_with_folder: 135.227.181.233:/tmp/vstat/
-
-# number of vrss(vswitches) to be seen by VSCs before and after the upgrade
-expected_num_vswitches: 0
-# number of bgp peers to be seen by VSCs before and after the upgrade
-expected_num_bgp_peers: 0
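Because the refactored vmware_autostart module now defaults `hostname`, `username`, and `password` to the `VMWARE_HOST`, `VMWARE_USER`, and `VMWARE_PASSWORD` environment variables, a task should be able to omit those parameters when the variables are exported on the Ansible control node. A minimal sketch (the VM name is a placeholder, and `uuid` is assumed to come from a prior vmware_vm_facts lookup, as in the vcenter task files above):

```yaml
# Assumes VMWARE_HOST, VMWARE_USER and VMWARE_PASSWORD are exported on the
# Ansible control node; hostname/username/password then fall back to the
# environment via the module's argument_spec defaults.
- name: Turn on autostart using environment-variable credentials (sketch)
  connection: local
  vmware_autostart:
    name: example-vm           # placeholder VM name
    uuid: "{{ uuid }}"         # from a prior vmware_vm_facts lookup
    state: enable
    validate_certs: no
```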