Dockerfiles, documentation, keycloak #911

Merged · 17 commits · Feb 26, 2025
35 changes: 35 additions & 0 deletions .github/workflows/build-image-test.yml
@@ -0,0 +1,35 @@
# Build the Deployer image

name: build-deployer-image-test

# Trigger the workflow on a push to the test branch, or manually via workflow_dispatch
on:
push:
branches:
- test
workflow_dispatch:

# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
build:
# The type of runner that the job will run on
runs-on: ubuntu-latest

# Sequence of tasks that will be executed as part of the job
steps:

- name: Check-out repository
uses: actions/checkout@v3

- name: Build the image
run: |
./cp-deploy.sh build

- name: Push to quay.io
env:
QUAY_IO_USER: ${{ secrets.QUAY_IO_USER }}
QUAY_IO_PASSWORD: ${{ secrets.QUAY_IO_PASSWORD }}
run: |
podman login quay.io -u "${QUAY_IO_USER}" -p "${QUAY_IO_PASSWORD}"
podman tag cloud-pak-deployer:latest quay.io/cloud-pak-deployer/cloud-pak-deployer:test
podman push quay.io/cloud-pak-deployer/cloud-pak-deployer:test
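
Because the workflow declares a `workflow_dispatch` trigger, it can also be started by hand. A minimal sketch, assuming the GitHub CLI (`gh`) is installed and authenticated against this repository:

```bash
# Illustrative only: trigger the test image build on the test branch
gh workflow run build-deployer-image-test --ref test

# List the most recent run of that workflow to check its status
gh run list --workflow=build-deployer-image-test --limit 1
```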
21 changes: 15 additions & 6 deletions Dockerfile
@@ -11,18 +11,26 @@ FROM ${CPD_OLM_UTILS_V3_IMAGE} as olmn-utils-v3
LABEL authors="Arthur Laimbock, \
Markus Wiegleb, \
Frank Ketelaars, \
Jiri Petnik"
Jiri Petnik, \
Jan Dusek"
LABEL product=cloud-pak-deployer

ENV PIP_ROOT_USER_ACTION=ignore

USER 0

# Install required packages, including HashiCorp Vault client
RUN yum install -y yum-utils && \
RUN export PYVER=$(python -c "import sys;print('{}.{}'.format(sys.version_info[0],sys.version_info[1]))") && \
if [ ! $(command -v yum) ];then microdnf install -y yum;fi && \
alternatives --install /usr/bin/python3 python3 /usr/bin/python${PYVER} 1 && \
alternatives --set python3 /usr/bin/python${PYVER} && \
python3 -m ensurepip && \
yum install -y yum-utils && \
yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm && \
yum install -y tar sudo unzip wget jq skopeo httpd-tools git hostname bind-utils iproute procps-ng && \
yum install -y tar sudo unzip wget httpd-tools git hostname bind-utils iproute procps-ng && \
# Need gcc and py-devel to recompile python dependencies on ppc64le (during pip install).
yum install -y gcc python3.11-devel && \
pip3 install jmespath pyyaml argparse python-benedict pyvmomi psutil && \
yum install -y gcc python${PYVER}-devel && \
pip3 install --no-cache-dir jmespath pyyaml argparse python-benedict pyvmomi psutil && \
sed -i 's|#!/usr/bin/python.*|#!/usr/bin/python3.9|g' /usr/bin/yum-config-manager && \
yum-config-manager --add-repo https://rpm.releases.hashicorp.com/RHEL/hashicorp.repo && \
yum install -y vault && \
@@ -48,7 +56,8 @@ RUN cd /opt/ansible && \

# BUG with building wheel
#RUN pip3 install -r /cloud-pak-deployer/deployer-web/requirements.txt > /tmp/deployer-web-pip-install.out 2>&1
RUN pip3 install "cython<3.0.0" wheel && pip3 install PyYAML==6.0 --no-build-isolation && pip3 install -r /cloud-pak-deployer/deployer-web/requirements.txt > /tmp/deployer-web-pip-install.out 2>&1
RUN pip3 install --no-cache-dir "cython<3.0.0" wheel && pip3 install PyYAML==6.0 --no-build-isolation && \
pip3 install --no-cache-dir -r /cloud-pak-deployer/deployer-web/requirements.txt > /tmp/deployer-web-pip-install.out 2>&1

# cli utilities
RUN wget -q -O /tmp/cpd-cli.tar.gz $(curl -s https://api.github.com/repos/IBM/cpd-cli/releases/latest | jq -r '.assets[] | select( .browser_download_url | contains("linux-EE")).browser_download_url') && \
23 changes: 17 additions & 6 deletions Dockerfile.ppc64le
@@ -11,17 +11,26 @@ FROM ${CPD_OLM_UTILS_V3_IMAGE}
LABEL authors="Arthur Laimbock, \
Markus Wiegleb, \
Frank Ketelaars, \
Jiri Petnik"
Jiri Petnik, \
Jan Dusek, \
Sebastien Chabrolles"
LABEL product=cloud-pak-deployer

ENV PIP_ROOT_USER_ACTION=ignore

USER 0

# Install required packages, including HashiCorp Vault client
RUN yum install -y yum-utils && \
RUN export PYVER=$(python -c "import sys;print('{}.{}'.format(sys.version_info[0],sys.version_info[1]))") && \
if [ ! $(command -v yum) ];then microdnf install -y yum;fi && \
alternatives --install /usr/bin/python3 python3 /usr/bin/python${PYVER} 1 && \
alternatives --set python3 /usr/bin/python${PYVER} && \
python3 -m ensurepip && \
yum install -y yum-utils && \
yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm && \
yum install -y tar sudo unzip wget jq skopeo httpd-tools git hostname bind-utils iproute procps-ng && \
yum install -y gcc python3.11-devel && \
pip3 install jmespath pyyaml argparse python-benedict pyvmomi psutil && \
yum install -y tar sudo unzip wget httpd-tools git hostname bind-utils iproute procps-ng && \
yum install -y gcc python${PYVER}-devel && \
pip3 install --no-cache-dir jmespath pyyaml argparse python-benedict pyvmomi psutil && \
curl -O https://downloads.power-devops.com/vault_1.12.4_linux_ppc64le.zip && \
unzip -d /usr/local/bin vault_1.12.4_linux_ppc64le.zip && rm vault_1.12.4_linux_ppc64le.zip && \
yum install -y nginx && \
@@ -46,7 +55,9 @@ RUN cd /opt/ansible && \

# BUG with building wheel
#RUN pip3 install -r /cloud-pak-deployer/deployer-web/requirements.txt > /tmp/deployer-web-pip-install.out 2>&1
RUN pip3 install "cython<3.0.0" wheel && pip3 install PyYAML==6.0 --no-build-isolation && pip3 install -r /cloud-pak-deployer/deployer-web/requirements.txt > /tmp/deployer-web-pip-install.out 2>&1
RUN pip3 install --no-cache-dir "cython<3.0.0" wheel && \
pip3 install --no-cache-dir PyYAML==6.0 --no-build-isolation && \
pip3 install --no-cache-dir -r /cloud-pak-deployer/deployer-web/requirements.txt > /tmp/deployer-web-pip-install.out 2>&1

# cli utilities
RUN wget -q -O /tmp/cpd-cli.tar.gz $(curl -s https://api.github.com/repos/IBM/cpd-cli/releases/latest | jq -r '.assets[] | select( .browser_download_url | contains("ppc64le-EE")).browser_download_url') && \
@@ -7,25 +7,16 @@
path: /tmp/work
state: directory

- name: Generate case-download command preview script to download case files
- name: Generate case-download command to download case files
set_fact:
_case_download_command: "{{ lookup('template', 'case-download.j2') }}"

- name: Show case-download command to download case files
debug:
var: _case_download_command

- name: Write script to "{{ status_dir }}/cp4d/{{ _p_current_cp4d_cluster.cp4d_version }}-case-download.sh"
copy:
content: "{{ _case_download_command }}"
dest: "{{ status_dir }}/cp4d/{{ _p_current_cp4d_cluster.cp4d_version }}-case-download.sh"
mode: u+rwx

- name: Download case files, logs are in {{ status_dir }}/log/{{ _p_current_cp4d_cluster.cp4d_version }}-case-download.log
shell: |
{{ status_dir }}/cp4d/{{ _p_current_cp4d_cluster.cp4d_version }}-case-download.sh > {{ status_dir }}/log/{{ _p_current_cp4d_cluster.cp4d_version }}-case-download.log
args:
chdir: /tmp/work
- include_role:
name: run-command
vars:
_p_command_description: Download case files
_p_command: "{{ _case_download_command }}"
_p_command_log_file: "{{ status_dir }}/log/{{ _p_current_cp4d_cluster.project }}-case-download.log"

- name: Create {{ status_dir }}/work directory if not present yet
file:
@@ -61,4 +61,11 @@
name: cp4d-prepare-openshift
vars:
_p_openshift_cluster_name: "{{ current_cp4d_cluster.openshift_cluster_name }}"
when: ( current_cp4d_cluster.change_node_settings | default(True) | bool )
when: ( current_cp4d_cluster.change_node_settings | default(True) | bool )

# - name: Download case files
# include_role:
# name: cp4d-case-save
# vars:
# _p_current_cp4d_cluster: "{{ current_cp4d_cluster }}"
# when: not (cpd_airgap | bool)
@@ -11,20 +11,24 @@
_p_activity_yaml: "{{ status_dir }}/cp4d/fluent.conf"

- block:
- name: Create zen-audit-config config map if it doesn't exist
shell: |
oc create -n {{ current_cp4d_cluster.project }} cm zen-audit-config | true

- name: Apply audit configuration from {{ status_dir }}/cp4d/fluent.conf
shell:
shell: |
oc set data -n {{ current_cp4d_cluster.project }} cm/zen-audit-config \
--from-file={{ status_dir }}/cp4d/fluent.conf
register: _audit_set_data
changed_when: "'zen-audit-config data updated' in _audit_set_data.stdout"

- name: Restart audit pods if configuration was changed
shell:
shell: |
oc delete po -n {{ current_cp4d_cluster.project }} -l component=zen-audit
when: _audit_set_data.changed

- name: Apply replication factor
shell:
shell: |
oc scale -n {{ current_cp4d_cluster.project }} deploy/zen-audit --replicas={{ _cp4d_audit_config.audit_replicas | default(1) }}

- name: Apply audit output for OpenShift logging
52 changes: 22 additions & 30 deletions cp-deploy.sh
@@ -97,13 +97,6 @@ run_env_logs() {
fi
fi
fi

# Show login info
if [[ "${ACTION}" != "destroy" ]];then
if [ -e ${STATUS_DIR}/cloud-paks/cloud-pak-deployer-info.txt ];then
cat ${STATUS_DIR}/cloud-paks/cloud-pak-deployer-info.txt
fi
fi
}

# --------------------------------------------------------------------------------------------------------- #
@@ -644,29 +637,6 @@ else
IMAGE_ARCH=${ARCH}
fi

# If images have not been overridden, set the variables here
if [ -z $CPD_OLM_UTILS_V2_IMAGE ];then
if [[ "${IMAGE_ARCH}" == "amd64" || "${IMAGE_ARCH}" == "arm64" ]]; then
export CPD_OLM_UTILS_V2_IMAGE=icr.io/cpopen/cpd/olm-utils-v2:latest
else
export CPD_OLM_UTILS_V2_IMAGE=icr.io/cpopen/cpd/olm-utils-v2:latest.${IMAGE_ARCH}
fi
else
echo "Custom olm-utils-v2 image ${CPD_OLM_UTILS_V2_IMAGE} will be used."
fi

# If images have not been overridden, set the variables here
if [ -z $CPD_OLM_UTILS_V3_IMAGE ];then
if [[ "${IMAGE_ARCH}" == "amd64" || "${ARCH}" == "arm64" ]]; then
export CPD_OLM_UTILS_V3_IMAGE=icr.io/cpopen/cpd/olm-utils-v3:latest
else
export CPD_OLM_UTILS_V3_IMAGE=icr.io/cpopen/cpd/olm-utils-v3:latest.${IMAGE_ARCH}
fi
else
echo "Custom olm-utils-v3 image ${CPD_OLM_UTILS_V3_IMAGE} will be used."
fi


if ! $INSIDE_CONTAINER;then
# Check if podman or docker command was found
if [ -z $CPD_CONTAINER_ENGINE ];then
@@ -688,6 +658,28 @@

# If running "build" subcommand, build the image
if [ "$SUBCOMMAND" == "build" ];then
# If images have not been overridden, set the variables here
if [ -z $CPD_OLM_UTILS_V2_IMAGE ];then
if [[ "${IMAGE_ARCH}" == "amd64" || "${IMAGE_ARCH}" == "arm64" ]]; then
export CPD_OLM_UTILS_V2_IMAGE=icr.io/cpopen/cpd/olm-utils-v2:latest
else
export CPD_OLM_UTILS_V2_IMAGE=icr.io/cpopen/cpd/olm-utils-v2:latest.${IMAGE_ARCH}
fi
else
echo "Custom olm-utils-v2 image ${CPD_OLM_UTILS_V2_IMAGE} will be used."
fi

# If images have not been overridden, set the variables here
if [ -z $CPD_OLM_UTILS_V3_IMAGE ];then
if [[ "${IMAGE_ARCH}" == "amd64" || "${ARCH}" == "arm64" ]]; then
export CPD_OLM_UTILS_V3_IMAGE=icr.io/cpopen/cpd/olm-utils-v3:latest
else
export CPD_OLM_UTILS_V3_IMAGE=icr.io/cpopen/cpd/olm-utils-v3:latest.${IMAGE_ARCH}
fi
else
echo "Custom olm-utils-v3 image ${CPD_OLM_UTILS_V3_IMAGE} will be used."
fi

echo "Building Cloud Pak Deployer container image cloud-pak-deployer:${CPD_IMAGE_TAG}"
# Store version info into image
mkdir -p ${SCRIPT_DIR}/.version-info
12 changes: 12 additions & 0 deletions docker-scripts/run_automation.sh
@@ -27,6 +27,15 @@ cd ${SCRIPT_DIR}/..
# Retrieve version info
source ./.version-info/version-info.sh

# Show login info
show_deployer_info() {
if [[ "$SUBCOMMAND" == "environment" && "${ACTION}" == "apply" ]];then
if [ -e ${STATUS_DIR}/cloud-paks/cloud-pak-deployer-info.txt ];then
cat ${STATUS_DIR}/cloud-paks/cloud-pak-deployer-info.txt
fi
fi
}

# Check that subcommand is valid
export SUBCOMMAND=${SUBCOMMAND,,}
export ACTION=${ACTION,,}
@@ -135,6 +144,9 @@ env|environment)
echo "====================================================================================" | tee -a ${STATUS_DIR}/log/cloud-pak-deployer.log
echo "Deployer FAILED. Check previous messages. If command line is not returned, press ^C." | tee -a ${STATUS_DIR}/log/cloud-pak-deployer.log
fi

show_deployer_info

exit ${exit_code}
;;

8 changes: 7 additions & 1 deletion docs/src/05-install/install.md
@@ -58,9 +58,15 @@ First go to the directory where you cloned the GitHub repository, for example `~
cd cloud-pak-deployer
```

### Set path and alias for the deployer

``` { .bash .copy }
source ./set-env.sh
```

Then run the following command to build the container image.
``` { .bash .copy }
./cp-deploy.sh build [--clean-up]
cp-deploy.sh build [--clean-up]
```

This process will take 5-10 minutes to complete and it will install all the pre-requisites needed to run the automation, including Ansible, Python and required operating system packages. For the installation to work, the system on which the image is built must be connected to the internet.
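
The build also honours the `CPD_OLM_UTILS_V2_IMAGE` and `CPD_OLM_UTILS_V3_IMAGE` environment variables (see the `cp-deploy.sh` changes above), so the olm-utils base images can be overridden before building. A sketch, assuming the default registry locations; the tags are examples only:

```bash
# Optional override of the olm-utils base images (example values only)
export CPD_OLM_UTILS_V2_IMAGE=icr.io/cpopen/cpd/olm-utils-v2:latest
export CPD_OLM_UTILS_V3_IMAGE=icr.io/cpopen/cpd/olm-utils-v3:latest
cp-deploy.sh build --clean-up
```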
24 changes: 15 additions & 9 deletions docs/src/10-use-deployer/3-run/aws-rosa.md
@@ -116,7 +116,7 @@ This scenario is supported. To enable this feature, please ensure that you take
2. Create "cluster-admin " password token using the following command:

``` { .bash .copy }
$ ./cp-deploy.sh vault set -vs={{env_id}}-cluster-admin-password=[YOUR PASSWORD]
$ cp-deploy.sh vault set -vs={{env_id}}-cluster-admin-password=[YOUR PASSWORD]
```

Without these changes, the deployer will fail and you will receive the following error message: "Failed to get the cluster-admin password from the vault".
@@ -159,27 +159,33 @@ export AWS_SESSION_TOKEN=your_session_token
In some cases, download of the `cloudctl` and `cpd-cli` clients from https://github.com/IBM will fail because GitHub limits the number of API calls from non-authenticated clients. You can remediate this issue by creating a [Personal Access Token on github.com](https://github.com/settings/tokens) and creating a secret in the vault.

``` { .bash .copy }
./cp-deploy.sh vault set -vs github-ibm-pat=<your PAT>
cp-deploy.sh vault set -vs github-ibm-pat=<your PAT>
```

Alternatively, you can set the secret by adding `-vs github-ibm-pat=<your PAT>` to the `./cp-deploy.sh env apply` command.
Alternatively, you can set the secret by adding `-vs github-ibm-pat=<your PAT>` to the `cp-deploy.sh env apply` command.
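
For example, the secret can be passed inline when starting the deployer (sketch only; `<your PAT>` is a placeholder):

```bash
cp-deploy.sh env apply --accept-all-licenses -vs github-ibm-pat=<your PAT>
```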

## 5. Run the deployer

### Set path and alias for the deployer

``` { .bash .copy }
source ./set-env.sh
```

### Optional: validate the configuration

If you only want to validate the configuration, you can run the deployer with the `--check-only` argument. This will run the first stage to validate variables and vault secrets and then execute the generators.

``` { .bash .copy }
./cp-deploy.sh env apply --check-only --accept-all-licenses
cp-deploy.sh env apply --check-only --accept-all-licenses
```

### Run the Cloud Pak Deployer

To run the container using a local configuration input directory and a data directory where temporary files and state are kept, use the example below. If you don't specify the status directory, the deployer will automatically create a temporary directory. Please note that the status directory will also hold secrets if you have configured a flat file vault. If you lose the directory, you will not be able to make changes to the configuration and adjust the deployment. It is best to specify a permanent directory that you can reuse later. If you specify an existing directory, the current user **must** be the owner of the directory. Failing to do so may cause the container to fail with insufficient permissions.

``` { .bash .copy }
./cp-deploy.sh env apply --accept-all-licenses
cp-deploy.sh env apply --accept-all-licenses
```
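
As a minimal sketch, assuming the configuration lives in `$HOME/cpd-config` and state should be kept in `$HOME/cpd-status`, the directories can be passed via the `CONFIG_DIR` and `STATUS_DIR` environment variables:

```bash
# Illustrative only: use a permanent status directory owned by the current user
export CONFIG_DIR=$HOME/cpd-config
export STATUS_DIR=$HOME/cpd-status
cp-deploy.sh env apply --accept-all-licenses
```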

You can also specify extra variables such as `env_id` to override the names of the objects referenced in the `.yaml` configuration files as `{{ env_id }}-xxxx`. For more information about the extra (dynamic) variables, see [advanced configuration](../../50-advanced/advanced-configuration.md#using-dynamic-variables-extra-variables).
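
For instance, overriding `env_id` from the command line might look like the following (sketch; `pluto-prod` is just a sample value):

```bash
cp-deploy.sh env apply --accept-all-licenses -e env_id=pluto-prod
```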
@@ -191,15 +197,15 @@ When running the command, the container will start as a daemon and the command w
You can return to view the logs as follows:

``` { .bash .copy }
./cp-deploy.sh env logs
cp-deploy.sh env logs
```

Deploying the infrastructure, preparing OpenShift and installing the Cloud Pak will take a long time, typically between 1 and 5 hours, depending on which Cloud Pak cartridges you configured. For estimated duration of the steps, refer to [Timings](../../30-reference/timings).

If you need to interrupt the automation, use CTRL-C to stop the logging output and then use:

``` { .bash .copy }
./cp-deploy.sh env kill
cp-deploy.sh env kill
```

### On failure
@@ -228,7 +234,7 @@ The `admin` password can be retrieved from the vault as follows:
List the secrets in the vault:

``` { .bash .copy }
./cp-deploy.sh vault list
cp-deploy.sh vault list
```

This will show something similar to the following:
@@ -247,7 +253,7 @@ Secret list for group sample:
You can then retrieve the Cloud Pak for Data admin password like this:

``` { .bash .copy }
./cp-deploy.sh vault get --vault-secret cp4d_admin_zen_40_pluto_01
cp-deploy.sh vault get --vault-secret cp4d_admin_zen_40_pluto_01
```

```output