diff --git a/.nojekyll b/.nojekyll new file mode 100644 index 000000000..e69de29bb diff --git a/01-introduction/current-state/index.html b/01-introduction/current-state/index.html new file mode 100644 index 000000000..8ae9b633e --- /dev/null +++ b/01-introduction/current-state/index.html @@ -0,0 +1 @@ + Current state - Cloud Pak Deployer

Current state of the Cloud Pak Deployer🔗

The picture below shows the current state of the Cloud Pak Deployer: the infrastructures on which OpenShift can be provisioned or used, the storage classes that can be controlled, and the Cloud Paks with their cartridges and components. Current state of the deployer

\ No newline at end of file diff --git a/01-introduction/images/cp-deploy-current-state.drawio b/01-introduction/images/cp-deploy-current-state.drawio new file mode 100644 index 000000000..019f9278b --- /dev/null +++ b/01-introduction/images/cp-deploy-current-state.drawio @@ -0,0 +1,133 @@ (drawio diagram XML not shown) diff --git a/01-introduction/images/cp-deploy-current-state.png b/01-introduction/images/cp-deploy-current-state.png new file mode 100644 index 000000000..8830c35dc Binary files /dev/null and b/01-introduction/images/cp-deploy-current-state.png differ diff --git a/05-install/install/index.html b/05-install/install/index.html new file mode 100644 index 000000000..2be606282 --- /dev/null +++ b/05-install/install/index.html @@ -0,0 +1,9 @@ + Installing Cloud Pak Deployer - Cloud Pak Deployer

Installing the Cloud Pak Deployer🔗

Install pre-requisites🔗

The Cloud Pak Deployer requires podman or docker to run. These are available on most Linux distributions, such as Red Hat Enterprise Linux (preferred), Fedora, CentOS and Ubuntu, and on macOS. On Windows, Docker behaves differently than on Linux platforms and this can cause the deployer to fail.

Using a Windows workstation🔗

If you don't have a Linux server available in the cloud, you can use VirtualBox to create a Linux virtual machine on your Windows workstation.

Once the guest operating system is up and running, log on to it as root. For convenience, VirtualBox also supports port forwarding, so you can use PuTTY to access the Linux command line.
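As a minimal sketch of the port-forwarding setup (the VM name `rhel-vm` and host port 2222 are arbitrary example values, not taken from this guide):

```shell
# Forward host port 2222 to the guest's SSH port 22 on the NAT adapter.
# "rhel-vm" and 2222 are example values; adjust them to your own VM.
VM_NAME="rhel-vm"
HOST_PORT=2222
if command -v VBoxManage >/dev/null 2>&1; then
  VBoxManage modifyvm "$VM_NAME" --natpf1 "guestssh,tcp,,${HOST_PORT},,22"
fi
# Afterwards, point PuTTY (or ssh) at localhost port 2222.
```

Note that `modifyvm` requires the VM to be powered off; for a running VM, the equivalent `VBoxManage controlvm "$VM_NAME" natpf1 ...` form can be used instead.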

Install on Linux🔗

On Red Hat Enterprise Linux or CentOS, run the following commands:

yum install -y podman git
+yum clean all
+

On macOS, run the following commands:

brew install podman git
+podman machine init
+podman machine start
+

On Ubuntu, follow the instructions here: https://docs.docker.com/engine/install/ubuntu/

Clone the current repository🔗

Using the command line🔗

If you clone the repository from the command line, you will need to enter a token when you run the git clone command. You can retrieve your token as follows:

Go to a directory where you want to download the Git repo.

git clone --depth=1 https://github.com/IBM/cloud-pak-deployer.git
+

Build the image🔗

First go to the directory where you cloned the GitHub repository, for example ~/cloud-pak-deployer.

cd cloud-pak-deployer
+

Then run the following command to build the container image.

./cp-deploy.sh build
+

This process will take 5-10 minutes to complete and it will install all the pre-requisites needed to run the automation, including Ansible, Python and required operating system packages. For the installation to work, the system on which the image is built must be connected to the internet.
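To confirm that the build succeeded, you can list the local container images; the name filter below is an assumption about how the image is tagged:

```shell
# Show any locally built deployer images (substitute docker for podman if needed).
if command -v podman >/dev/null 2>&1; then
  podman images | grep -i cloud-pak-deployer || true
fi
BUILD_CHECK_DONE=1
```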

\ No newline at end of file diff --git a/10-use-deployer/1-overview/overview/index.html b/10-use-deployer/1-overview/overview/index.html new file mode 100644 index 000000000..a9ab947b7 --- /dev/null +++ b/10-use-deployer/1-overview/overview/index.html @@ -0,0 +1 @@ + Overview - Cloud Pak Deployer

Using Cloud Pak Deployer🔗

Running Cloud Pak Deployer🔗

There are two main steps you need to perform to provision an OpenShift cluster with the desired Cloud Pak(s):

  1. Install the Cloud Pak Deployer
  2. Run the Cloud Pak Deployer to create the cluster and install the Cloud Pak

What will I need?🔗

To complete the deployment, you will or may need the following. Details will be provided when you need them.

  • Your Cloud Pak entitlement key to pull images from the IBM Container Registry
  • IBM Cloud VPC: An IBM Cloud API key that allows you to provision infrastructure
  • vSphere: A vSphere user and password which has infrastructure create permissions
  • AWS ROSA: AWS IAM credentials (access key and secret access key), a ROSA login token and optionally a temporary security token
  • AWS Self-managed: AWS IAM credentials (access key and secret access key) and optionally a temporary security token
  • Azure: Azure service principal with the correct permissions
  • Existing OpenShift: Cluster admin login credentials of the OpenShift cluster

Executing commands on the OpenShift cluster🔗

The server on which you run the Cloud Pak Deployer may not have the necessary clients to interact with the cloud infrastructure, OpenShift, or the installed Cloud Pak. You can run commands using the same container image that runs the deployment of OpenShift and the Cloud Paks through the command line: Open a command line
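A sketch of what this looks like, assuming the `env cmd` subcommand referenced by the "Open a command line" link and example directory locations:

```shell
# Example directories; use the CONFIG_DIR/STATUS_DIR of your own environment.
export CONFIG_DIR=$HOME/cpd-config
export STATUS_DIR=$HOME/cpd-status
# Open an interactive shell inside the deployer container.
if [ -x ./cp-deploy.sh ]; then
  ./cp-deploy.sh env cmd
fi
# Inside the container, clients such as oc are available, e.g.: oc get nodes
```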

Destroying your OpenShift cluster🔗

If you want to destroy the provisioned OpenShift cluster, including the installed Cloud Pak(s), you can do this through the Cloud Pak Deployer. Steps can be found here: Destroy the assets

\ No newline at end of file diff --git a/10-use-deployer/3-run/aws-rosa/index.html b/10-use-deployer/3-run/aws-rosa/index.html new file mode 100644 index 000000000..2bd593e0a --- /dev/null +++ b/10-use-deployer/3-run/aws-rosa/index.html @@ -0,0 +1,40 @@ + AWS ROSA - Cloud Pak Deployer

Running the Cloud Pak Deployer on AWS (ROSA)🔗

On Amazon Web Services (AWS), OpenShift can be set up in various ways, managed by Red Hat (ROSA) or self-managed. The steps below are applicable to the ROSA (Red Hat OpenShift on AWS) installation. More information about ROSA can be found here: https://aws.amazon.com/rosa/

There are 5 main steps to run the deployer for AWS:

  1. Configure deployer
  2. Prepare the cloud environment
  3. Obtain entitlement keys and secrets
  4. Set environment variables and secrets
  5. Run the deployer

Topology🔗

A typical setup of the ROSA cluster is pictured below: ROSA configuration

When deploying ROSA, an external host name and domain name are automatically generated by Amazon Web Services, and both the API and Ingress servers can be resolved by external clients. At this stage, you cannot configure the domain name to be used.

1. Configure deployer🔗

Deployer configuration and status directories🔗

Deployer reads the configuration from a directory you set in the CONFIG_DIR environment variable. A status directory (STATUS_DIR environment variable) is used to log activities and store temporary files and scripts. If you use a File Vault (the default), the secrets are kept in the $STATUS_DIR/vault directory.

You can find OpenShift and Cloud Pak sample configuration (yaml) files here: sample configuration. For ROSA installations, copy one of the ocp-aws-rosa-*.yaml files into the $CONFIG_DIR/config directory. If you also want to install a Cloud Pak, copy one of the cp4*.yaml files.

Example:

mkdir -p $HOME/cpd-config/config
+cp sample-configurations/sample-dynamic/config-samples/ocp-aws-rosa-elastic.yaml $HOME/cpd-config/config/
+cp sample-configurations/sample-dynamic/config-samples/cp4d-471.yaml $HOME/cpd-config/config/
+

Set configuration and status directories environment variables🔗

Cloud Pak Deployer uses the status directory to log its activities and also to keep track of its running state. For a given environment you're provisioning or destroying, you should always specify the same status directory to avoid contention between different deploy runs.

export CONFIG_DIR=$HOME/cpd-config
+export STATUS_DIR=$HOME/cpd-status
+
  • CONFIG_DIR: Directory that holds the configuration; it must have a config subdirectory containing the configuration yaml files.
  • STATUS_DIR: The directory where the Cloud Pak Deployer keeps all status information and log files.
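The two directories can be prepared up front; this sketch assumes the default locations used in the examples above:

```shell
# Fall back to the example locations if the variables are not set yet.
: "${CONFIG_DIR:=$HOME/cpd-config}"
: "${STATUS_DIR:=$HOME/cpd-status}"
# The yaml files must live in the config subdirectory of CONFIG_DIR.
mkdir -p "$CONFIG_DIR/config" "$STATUS_DIR"
ls -d "$CONFIG_DIR/config" "$STATUS_DIR"
```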

Optional: advanced configuration🔗

If the deployer configuration is kept on GitHub, follow the instructions in GitHub configuration.

For special configuration with defaults and dynamic variables, refer to Advanced configuration.

2. Prepare the cloud environment🔗

Enable ROSA on AWS🔗

Before you can use ROSA on AWS, you have to enable the service in your AWS account if this has not been done already.

Obtain the AWS IAM credentials🔗

You will need an Access Key ID and Secret Access Key for the deployer to run rosa commands.
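As an optional sanity check (not required by the deployer), you can verify that the credentials work with the AWS CLI:

```shell
# Print the account and ARN that the credentials resolve to.
if command -v aws >/dev/null 2>&1; then
  aws sts get-caller-identity --output json
fi
CREDS_CHECK_DONE=1
```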

Alternative: Using temporary AWS security credentials (STS)🔗

If your account uses temporary security credentials for AWS resources, you must use the Access Key ID, Secret Access Key and Session Token associated with your temporary credentials.

For more information about using temporary security credentials, see https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_use-resources.html.

The temporary credentials must be issued for an IAM role that has sufficient permissions to provision the infrastructure and all other components. More information about required permissions for ROSA cluster can be found here: https://docs.openshift.com/rosa/rosa_planning/rosa-sts-aws-prereqs.html#rosa-sts-aws-prereqs.

An example of how to retrieve the temporary credentials for a user-defined role:

printf "\nexport AWS_ACCESS_KEY_ID=%s\nexport AWS_SECRET_ACCESS_KEY=%s\nexport AWS_SESSION_TOKEN=%s\n" $(aws sts assume-role \
+--role-arn arn:aws:iam::678256850452:role/ocp-sts-role \
+--role-session-name OCPInstall \
+--query "Credentials.[AccessKeyId,SecretAccessKey,SessionToken]" \
+--output text)
+

This would return something like the below, which you can then paste into the session running the deployer.

export AWS_ACCESS_KEY_ID=ASIxxxxxxAW
+export AWS_SECRET_ACCESS_KEY=jtLxxxxxxxxxxxxxxxGQ
+export AWS_SESSION_TOKEN=IQxxxxxxxxxxxxxbfQ
+

If you need to use temporary security credentials, you must set infrastructure.use_sts to True in the openshift configuration. Cloud Pak Deployer will then run the rosa create cluster command with the appropriate flag.
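As an illustrative (not verbatim) excerpt of what this could look like in the openshift configuration — only the infrastructure.use_sts attribute is taken from the text above, the surrounding structure is hypothetical:

```yaml
# Hypothetical excerpt of an openshift configuration object
openshift:
- name: "{{ env_id }}"
  infrastructure:
    type: rosa
    use_sts: True
```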

Obtain your ROSA login token🔗

To run rosa commands to manage the cluster, the deployer requires the ROSA login token.

  • Go to https://cloud.redhat.com/openshift/token/rosa
  • Log in with your Red Hat user ID and password. If you don't have one yet, you need to create it.
  • Copy the offline access token presented on the screen and store it in a safe place.
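Optionally, you can confirm that the token is valid with the rosa CLI before starting the deployer (the deployer performs the login itself):

```shell
# Log in with the offline access token and show the resulting identity.
if command -v rosa >/dev/null 2>&1; then
  rosa login --token="$ROSA_LOGIN_TOKEN"
  rosa whoami
fi
TOKEN_CHECK_DONE=1
```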

If ROSA is already installed🔗

This scenario is supported. To use an existing ROSA cluster, make sure you take the following steps:

  1. Include the environment ID ({{ env_id }}) in the infrastructure definition so that it matches the existing cluster
  2. Create the "cluster-admin" password secret using the following command:

    $ ./cp-deploy.sh vault set -vs={{env_id}}-cluster-admin-password=[YOUR PASSWORD]
    +

Without these changes, the deployer will fail and you will receive the following error message: "Failed to get the cluster-admin password from the vault".

3. Acquire entitlement keys and secrets🔗

If you want to pull the Cloud Pak images from the entitled registry (i.e. an online install), or if you want to mirror the images to your private registry, you need to download the entitlement key. You can skip this step if you're installing from a private registry and all Cloud Pak images have already been downloaded to the private registry.

Warning

As stated for the API key, you can choose to download the entitlement key to a file. However, when we reference the entitlement key, we mean the 80+ character string that is displayed, not the file.

4. Set environment variables and secrets🔗

export AWS_ACCESS_KEY_ID=your_access_key
+export AWS_SECRET_ACCESS_KEY=your_secret_access_key
+export ROSA_LOGIN_TOKEN="your_rosa_login_token"
+export CP_ENTITLEMENT_KEY=your_cp_entitlement_key
+

Optional: If your user does not have permanent administrator access but uses temporary credentials, you can set AWS_SESSION_TOKEN to be used by the AWS CLI.

export AWS_SESSION_TOKEN=your_session_token
+

  • AWS_ACCESS_KEY_ID: This is the AWS Access Key you retrieved above, often this is something like AK1A2VLMPQWBJJQGD6GV
  • AWS_SECRET_ACCESS_KEY: The secret associated with your AWS Access Key, also retrieved above
  • AWS_SESSION_TOKEN: The session token that will grant temporary elevated permissions
  • ROSA_LOGIN_TOKEN: The offline access token that was retrieved before. This is a very long string (200+ characters). Make sure you enclose the string in single or double quotes as it may hold special characters
  • CP_ENTITLEMENT_KEY: This is the entitlement key you acquired as per the instructions above; it is an 80+ character string

Warning

If your AWS_SESSION_TOKEN expires while the deployer is still running, the deployer may end abnormally. In that case, you can issue new temporary credentials (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_SESSION_TOKEN) and restart the deployer. Alternatively, you can update the 3 vault secrets aws-access-key, aws-secret-access-key and aws-session-token with the new values; they are re-retrieved by the deployer on a regular basis.
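For example, reusing the vault set syntax shown elsewhere in this guide (the values are placeholders for your newly issued credentials):

```shell
# Refresh the three credential secrets with newly issued values.
if [ -x ./cp-deploy.sh ]; then
  ./cp-deploy.sh vault set -vs=aws-access-key=NEW_ACCESS_KEY_ID
  ./cp-deploy.sh vault set -vs=aws-secret-access-key=NEW_SECRET_ACCESS_KEY
  ./cp-deploy.sh vault set -vs=aws-session-token=NEW_SESSION_TOKEN
fi
VAULT_REFRESH_SHOWN=1
```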

5. Run the deployer🔗

Optional: validate the configuration🔗

If you only want to validate the configuration, you can run the deployer with the --check-only argument. This will run the first stage to validate variables and vault secrets and then execute the generators.

./cp-deploy.sh env apply --check-only --accept-all-licenses
+

Run the Cloud Pak Deployer🔗

To run the container using a local configuration input directory and a data directory where temporary files and state are kept, use the example below. If you don't specify the status directory, the deployer will automatically create a temporary directory. Please note that the status directory will also hold secrets if you have configured a flat file vault. If you lose the directory, you will not be able to make changes to the configuration and adjust the deployment, so it is best to specify a permanent directory that you can reuse later. If you specify an existing directory, the current user must be the owner of the directory; otherwise the container may fail with insufficient permissions.

./cp-deploy.sh env apply --accept-all-licenses
+

You can also specify extra variables such as env_id to override the names of the objects referenced in the .yaml configuration files as {{ env_id }}-xxxx. For more information about the extra (dynamic) variables, see advanced configuration.
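For example (the -e flag and the env_id value pluto-01 are assumptions based on the advanced configuration documentation and the cluster name used later in this guide):

```shell
# Apply the configuration with env_id overridden as an extra variable.
if [ -x ./cp-deploy.sh ]; then
  ./cp-deploy.sh env apply -e env_id=pluto-01 --accept-all-licenses
fi
EXTRA_VARS_SHOWN=1
```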

The --accept-all-licenses flag is optional and confirms that you accept all licenses of the installed cartridges and instances. Licenses must be either accepted in the configuration files or at the command line.

When running the command, the container will start as a daemon and the command will tail-follow the logs. You can press Ctrl-C at any time to interrupt the logging but the container will continue to run in the background.

You can return to view the logs as follows:

./cp-deploy.sh env logs
+

Deploying the infrastructure, preparing OpenShift and installing the Cloud Pak will take a long time, typically between 1 and 5 hours, depending on which Cloud Pak cartridges you configured. For the estimated duration of the steps, refer to Timings.

If you need to interrupt the automation, use CTRL-C to stop the logging output and then use:

./cp-deploy.sh env kill
+

On failure🔗

If the Cloud Pak Deployer fails, for example because certain infrastructure components are temporarily unavailable, fix the cause if needed and then re-run it with the same CONFIG_DIR and STATUS_DIR as well as the same extra variables. The provisioning process has been designed to be idempotent and will not redo actions that have already completed successfully.

Finishing up🔗

Once the process has finished, it will output the URLs by which you can access the deployed Cloud Pak. You can also find this information under the cloud-paks directory in the status directory you specified.

To retrieve the Cloud Pak URL(s):

cat $STATUS_DIR/cloud-paks/*
+

This will show the Cloud Pak URLs:

Cloud Pak for Data URL for cluster pluto-01 and project cpd:
+https://cpd-cpd.apps.pluto-01.pmxz.p1.openshiftapps.com
+

The admin password can be retrieved from the vault as follows:

List the secrets in the vault:

./cp-deploy.sh vault list
+

This will show something similar to the following:

Secret list for group sample:
+- aws-access-key
+- aws-secret-access-key
+- ibm_cp_entitlement_key
+- rosa-login-token
+- pluto-01-cluster-admin-password
+- cp4d_admin_zen_40_pluto_01
+- all-config
+

You can then retrieve the Cloud Pak for Data admin password like this:

./cp-deploy.sh vault get --vault-secret cp4d_admin_zen_40_pluto_01
+
PLAY [Secrets] *****************************************************************
+included: /cloud-pak-deployer/automation-roles/99-generic/vault/vault-get-secret/tasks/get-secret-file.yml for localhost
+cp4d_admin_zen_40_pluto_01: gelGKrcgaLatBsnAdMEbmLwGr
+

Post-install configuration🔗

You can find examples of a couple of typical changes you may want to do here: Post-run changes.

\ No newline at end of file diff --git a/10-use-deployer/3-run/aws-self-managed/index.html b/10-use-deployer/3-run/aws-self-managed/index.html new file mode 100644 index 000000000..24b461046 --- /dev/null +++ b/10-use-deployer/3-run/aws-self-managed/index.html @@ -0,0 +1,45 @@ + AWS Self-managed - Cloud Pak Deployer

Running the Cloud Pak Deployer on AWS (Self-managed)🔗

On Amazon Web Services (AWS), OpenShift can be set up in various ways, self-managed or managed by Red Hat (ROSA). The steps below are applicable to a self-managed OpenShift installation. The IPI (Installer Provisioned Infrastructure) installer will be used. More information about IPI installation can be found here: https://docs.openshift.com/container-platform/4.12/installing/installing_aws/installing-aws-customizations.html.

There are 5 main steps to run the deployer for AWS:

  1. Configure deployer
  2. Prepare the cloud environment
  3. Obtain entitlement keys and secrets
  4. Set environment variables and secrets
  5. Run the deployer

See the deployer in action in this video: https://ibm.box.com/v/cpd-aws-self-managed

Topology🔗

A typical setup of the self-managed OpenShift cluster is pictured below: AWS self-managed OpenShift

Single-node OpenShift (SNO) on AWS🔗

Red Hat OpenShift also supports single-node deployments in which control plane and compute are combined into a single node. Obviously, this type of configuration does not cater for any high availability requirements that are usually part of a production installation, but it does offer a more cost-efficient option for development and testing purposes.

Cloud Pak Deployer can deploy a single-node OpenShift with elastic storage and a sample configuration is provided as part of the deployer.

Warning

When deploying the IBM Cloud Paks on single-node OpenShift, there may be intermittent timeouts as pods are starting up. In those cases, just re-run the deployer with the same configuration and check the status of the pods.

1. Configure deployer🔗

Deployer configuration and status directories🔗

Deployer reads the configuration from a directory you set in the CONFIG_DIR environment variable. A status directory (STATUS_DIR environment variable) is used to log activities and store temporary files and scripts. If you use a File Vault (the default), the secrets are kept in the $STATUS_DIR/vault directory.

You can find OpenShift and Cloud Pak sample configuration (yaml) files here: sample configuration. For self-managed OpenShift installations, copy one of the ocp-aws-self-managed-*.yaml files into the $CONFIG_DIR/config directory. If you also want to install a Cloud Pak, copy one of the cp4*.yaml files.

Example:

mkdir -p $HOME/cpd-config/config
+cp sample-configurations/sample-dynamic/config-samples/ocp-aws-self-managed-elastic.yaml $HOME/cpd-config/config/
+cp sample-configurations/sample-dynamic/config-samples/cp4d-471.yaml $HOME/cpd-config/config/
+

Set configuration and status directories environment variables🔗

Cloud Pak Deployer uses the status directory to log its activities and also to keep track of its running state. For a given environment you're provisioning or destroying, you should always specify the same status directory to avoid contention between different deploy runs.

export CONFIG_DIR=$HOME/cpd-config
+export STATUS_DIR=$HOME/cpd-status
+
  • CONFIG_DIR: Directory that holds the configuration; it must have a config subdirectory containing the configuration yaml files.
  • STATUS_DIR: The directory where the Cloud Pak Deployer keeps all status information and log files.

Optional: advanced configuration🔗

If the deployer configuration is kept on GitHub, follow the instructions in GitHub configuration.

For special configuration with defaults and dynamic variables, refer to Advanced configuration.

2. Prepare the cloud environment🔗

Configure Route53 service on AWS🔗

When deploying self-managed OpenShift on Amazon Web Services, a public hosted zone must be created in the same account as your OpenShift cluster. The domain name or subdomain name registered in the Route53 service must be specified in the openshift configuration of the deployer.

For more information on acquiring or specifying a domain on AWS, you can refer to https://github.com/openshift/installer/blob/master/docs/user/aws/route53.md.
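If you still need to create the public hosted zone, a minimal sketch with the AWS CLI (example.com is a placeholder domain):

```shell
# Create a public hosted zone; the caller reference must be unique per request.
DOMAIN=example.com
if command -v aws >/dev/null 2>&1; then
  aws route53 create-hosted-zone \
    --name "$DOMAIN" \
    --caller-reference "cpd-$(date +%s)"
fi
```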

Obtain the AWS IAM credentials🔗

If you can use permanent security credentials for the AWS account, you will need an Access Key ID and Secret Access Key for the deployer to set up an OpenShift cluster on AWS.

Alternative: Using temporary AWS security credentials (STS)🔗

If your account uses temporary security credentials for AWS resources, you must use the Access Key ID, Secret Access Key and Session Token associated with your temporary credentials.

For more information about using temporary security credentials, see https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_use-resources.html.

The temporary credentials must be issued for an IAM role that has sufficient permissions to provision the infrastructure and all other components. More information about required permissions can be found here: https://docs.openshift.com/container-platform/4.10/authentication/managing_cloud_provider_credentials/cco-mode-sts.html#sts-mode-create-aws-resources-ccoctl.

An example of how to retrieve the temporary credentials for a user-defined role:

printf "\nexport AWS_ACCESS_KEY_ID=%s\nexport AWS_SECRET_ACCESS_KEY=%s\nexport AWS_SESSION_TOKEN=%s\n" $(aws sts assume-role \
+--role-arn arn:aws:iam::678256850452:role/ocp-sts-role \
+--role-session-name OCPInstall \
+--query "Credentials.[AccessKeyId,SecretAccessKey,SessionToken]" \
+--output text)
+

This would return something like the below, which you can then paste into the session running the deployer.

export AWS_ACCESS_KEY_ID=ASIxxxxxxAW
+export AWS_SECRET_ACCESS_KEY=jtLxxxxxxxxxxxxxxxGQ
+export AWS_SESSION_TOKEN=IQxxxxxxxxxxxxxbfQ
+

If the openshift configuration has the infrastructure.credentials_mode set to Manual, Cloud Pak Deployer will automatically configure and run the Cloud Credential Operator utility.
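Illustrative only — an excerpt of what the setting could look like in the openshift configuration; apart from infrastructure.credentials_mode the surrounding structure is hypothetical:

```yaml
# Hypothetical excerpt of an openshift configuration object
openshift:
- name: "{{ env_id }}"
  infrastructure:
    type: self-managed
    credentials_mode: Manual
```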

3. Acquire entitlement keys and secrets🔗

Acquire IBM Cloud Pak entitlement key🔗

If you want to pull the Cloud Pak images from the entitled registry (i.e. an online install), or if you want to mirror the images to your private registry, you need to download the entitlement key. You can skip this step if you're installing from a private registry and all Cloud Pak images have already been downloaded to the private registry.

Warning

As stated for the API key, you can choose to download the entitlement key to a file. However, when we reference the entitlement key, we mean the 80+ character string that is displayed, not the file.

Acquire an OpenShift pull secret🔗

To install OpenShift you need an OpenShift pull secret which holds your entitlement.

Optional: Locate or generate a public SSH Key🔗

To obtain access to the OpenShift nodes post-installation, you will need to specify the public SSH key of your server; typically this is ~/.ssh/id_rsa.pub, where ~ is the home directory of your user. If you don't have an SSH key-pair yet, you can generate one using the steps documented here: https://cloud.ibm.com/docs/ssh-keys?topic=ssh-keys-generating-and-using-ssh-keys-for-remote-host-authentication#generating-ssh-keys-on-linux. Alternatively, the deployer can generate an SSH key-pair automatically if the ocp-ssh-pub-key secret is not in the vault.
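A minimal key generation sketch; it writes to a demo path under /tmp so nothing in ~/.ssh is overwritten (drop -f and -N to use the interactive defaults):

```shell
# Generate an RSA key-pair without a passphrase at a demo location.
KEY_FILE=/tmp/cpd-demo-key
rm -f "$KEY_FILE" "$KEY_FILE.pub"
ssh-keygen -q -t rsa -b 4096 -N "" -f "$KEY_FILE"
# The public half is what the deployer needs:
cat "$KEY_FILE.pub"
```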

4. Set environment variables and secrets🔗

Set the Cloud Pak entitlement key🔗

If you want the Cloud Pak images to be pulled from the entitled registry, set the Cloud Pak entitlement key.

export CP_ENTITLEMENT_KEY=your_cp_entitlement_key
+
  • CP_ENTITLEMENT_KEY: This is the entitlement key you acquired as per the instructions above; it is an 80+ character string. You don't need to set this environment variable when you install the Cloud Pak(s) from a private registry

Set the environment variables for AWS self-managed OpenShift deployment🔗

export AWS_ACCESS_KEY_ID=your_access_key
+export AWS_SECRET_ACCESS_KEY=your_secret_access_key
+

Optional: If your user does not have permanent administrator access but uses temporary credentials, you can set AWS_SESSION_TOKEN to be used by the AWS CLI.

export AWS_SESSION_TOKEN=your_session_token
+

  • AWS_ACCESS_KEY_ID: This is the AWS Access Key you retrieved above, often this is something like AK1A2VLMPQWBJJQGD6GV
  • AWS_SECRET_ACCESS_KEY: The secret associated with your AWS Access Key, also retrieved above
  • AWS_SESSION_TOKEN: The session token that will grant temporary elevated permissions

Warning

If your AWS_SESSION_TOKEN expires while the deployer is still running, the deployer may end abnormally. In that case, you can issue new temporary credentials (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_SESSION_TOKEN) and restart the deployer. Alternatively, you can update the 3 vault secrets aws-access-key, aws-secret-access-key and aws-session-token with the new values; they are re-retrieved by the deployer on a regular basis.

Create the secrets needed for self-managed OpenShift cluster🔗

You need to store the below credentials in the vault so that the deployer has access to them when installing self-managed OpenShift cluster on AWS.

./cp-deploy.sh vault set \
+    --vault-secret ocp-pullsecret \
+    --vault-secret-file /tmp/ocp_pullsecret.json
+

Optional: Create secret for public SSH key🔗

If you want to use your SSH key to access nodes in the cluster, set the Vault secret with the public SSH key.

./cp-deploy.sh vault set \
+    --vault-secret ocp-ssh-pub-key \
+    --vault-secret-file ~/.ssh/id_rsa.pub
+

5. Run the deployer🔗

Optional: validate the configuration🔗

If you only want to validate the configuration, you can run the deployer with the --check-only argument. This will run the first stage to validate variables and vault secrets and then execute the generators.

./cp-deploy.sh env apply --check-only --accept-all-licenses
+

Run the Cloud Pak Deployer🔗

To run the container using a local configuration input directory and a data directory where temporary files and state are kept, use the example below. If you don't specify the status directory, the deployer will automatically create a temporary directory. Please note that the status directory will also hold secrets if you have configured a flat file vault. If you lose the directory, you will not be able to make changes to the configuration and adjust the deployment, so it is best to specify a permanent directory that you can reuse later. If you specify an existing directory, the current user must be the owner of the directory; otherwise the container may fail with insufficient permissions.

./cp-deploy.sh env apply --accept-all-licenses
+

You can also specify extra variables such as env_id to override the names of the objects referenced in the .yaml configuration files as {{ env_id }}-xxxx. For more information about the extra (dynamic) variables, see advanced configuration.

The --accept-all-licenses flag is optional and confirms that you accept all licenses of the installed cartridges and instances. Licenses must be either accepted in the configuration files or at the command line.

When running the command, the container will start as a daemon and the command will tail-follow the logs. You can press Ctrl-C at any time to interrupt the logging but the container will continue to run in the background.

You can return to view the logs as follows:

./cp-deploy.sh env logs
+

Deploying the infrastructure, preparing OpenShift and installing the Cloud Pak will take a long time, typically between 1 and 5 hours, depending on which Cloud Pak cartridges you configured. For the estimated duration of the steps, refer to Timings.

If you need to interrupt the automation, use CTRL-C to stop the logging output and then use:

./cp-deploy.sh env kill
+

On failure🔗

If the Cloud Pak Deployer fails, for example because certain infrastructure components are temporarily unavailable, fix the cause if needed and then re-run it with the same CONFIG_DIR and STATUS_DIR as well as the same extra variables. The provisioning process has been designed to be idempotent and will not redo actions that have already completed successfully.

Finishing up🔗

Once the process has finished, it will output the URLs by which you can access the deployed Cloud Pak. You can also find this information under the cloud-paks directory in the status directory you specified.

To retrieve the Cloud Pak URL(s):

cat $STATUS_DIR/cloud-paks/*
+

This will show the Cloud Pak URLs:

Cloud Pak for Data URL for cluster pluto-01 and project cpd (domain name specified was example.com):
+https://cpd-cpd.apps.pluto-01.example.com
+

The admin password can be retrieved from the vault as follows:

List the secrets in the vault:

./cp-deploy.sh vault list
+

This will show something similar to the following:

Secret list for group sample:
+- aws-access-key
+- aws-secret-access-key
+- ocp-pullsecret
+- ocp-ssh-pub-key
+- ibm_cp_entitlement_key
+- pluto-01-cluster-admin-password
+- cp4d_admin_zen_40_pluto_01
+- all-config
+

You can then retrieve the Cloud Pak for Data admin password like this:

./cp-deploy.sh vault get --vault-secret cp4d_admin_zen_40_pluto_01
+
PLAY [Secrets] *****************************************************************
+included: /cloud-pak-deployer/automation-roles/99-generic/vault/vault-get-secret/tasks/get-secret-file.yml for localhost
+cp4d_admin_zen_40_pluto_01: gelGKrcgaLatBsnAdMEbmLwGr
+

Post-install configuration🔗

You can find examples of a couple of typical changes you may want to do here: Post-run changes.

\ No newline at end of file diff --git a/10-use-deployer/3-run/azure-aro/index.html b/10-use-deployer/3-run/azure-aro/index.html new file mode 100644 index 000000000..4546dc93f --- /dev/null +++ b/10-use-deployer/3-run/azure-aro/index.html @@ -0,0 +1,48 @@ + Azure ARO - Cloud Pak Deployer

Running the Cloud Pak Deployer on Microsoft Azure - ARO🔗

On Azure, OpenShift can be set up in various ways, managed by Red Hat (ARO) or self-managed. The steps below are applicable to ARO (Azure Red Hat OpenShift) installations.

There are 5 main steps to run the deployer for Azure:

  1. Configure deployer
  2. Prepare the cloud environment
  3. Obtain entitlement keys and secrets
  4. Set environment variables and secrets
  5. Run the deployer

Topology🔗

A typical setup of the ARO cluster is pictured below: ARO configuration

When deploying ARO, you can configure the domain name by setting the openshift.domain_name attribute. The resulting domain name is managed by Azure, and it must be unique across all ARO instances deployed in Azure. Both the API and Ingress URLs are set to be public in the template, so they can be resolved by external clients. If you want to use a custom domain and don't have one yet, you can buy one from Azure: https://learn.microsoft.com/en-us/azure/app-service/manage-custom-dns-buy-domain.

1. Configure deployer🔗

Deployer configuration and status directories🔗

Deployer reads the configuration from a directory you set in the CONFIG_DIR environment variable. A status directory (STATUS_DIR environment variable) is used to log activities and store temporary files and scripts. If you use a File Vault (default), the secrets are kept in the $STATUS_DIR/vault directory.

You can find OpenShift and Cloud Pak sample configuration (yaml) files here: sample configuration. For ARO installations, copy one of ocp-azure-aro*.yaml files into the $CONFIG_DIR/config directory. If you also want to install a Cloud Pak, copy one of the cp4*.yaml files.

Example:

mkdir -p $HOME/cpd-config/config
+cp sample-configurations/sample-dynamic/config-samples/ocp-azure-aro.yaml $HOME/cpd-config/config/
+cp sample-configurations/sample-dynamic/config-samples/cp4d-471.yaml $HOME/cpd-config/config/
+

Set configuration and status directories environment variables🔗

Cloud Pak Deployer uses the status directory to log its activities and also to keep track of its running state. For a given environment you're provisioning or destroying, you should always specify the same status directory to avoid contention between different deploy runs.

export CONFIG_DIR=$HOME/cpd-config
+export STATUS_DIR=$HOME/cpd-status
+
  • CONFIG_DIR: Directory that holds the configuration, it must have a config subdirectory which contains the configuration yaml files.
  • STATUS_DIR: The directory where the Cloud Pak Deployer keeps all status information and log files.
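The two directories can be prepared and sanity-checked up front. A minimal sketch, using a temporary root so your real $HOME is untouched:

```shell
# Sketch: create and verify the expected directory layout before running
# the deployer. DEMO_HOME is a stand-in for $HOME.
DEMO_HOME=$(mktemp -d)
export CONFIG_DIR=$DEMO_HOME/cpd-config
export STATUS_DIR=$DEMO_HOME/cpd-status
# CONFIG_DIR must contain a "config" subdirectory with the yaml files
mkdir -p "$CONFIG_DIR/config" "$STATUS_DIR"
[ -d "$CONFIG_DIR/config" ] && [ -d "$STATUS_DIR" ] && echo "directory layout OK"
```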

Optional: advanced configuration🔗

If the deployer configuration is kept on GitHub, follow the instructions in GitHub configuration.

For special configuration with defaults and dynamic variables, refer to Advanced configuration.

2. Prepare the cloud environment🔗

Install the Azure CLI tool🔗

Install the Azure CLI tool by running the commands applicable to your operating system.

Verify your quota and permissions in Microsoft Azure🔗

  • Check Azure resource quota of the subscription - Azure Red Hat OpenShift requires a minimum of 40 cores to create and run an OpenShift cluster.
  • The ARO cluster is provisioned using the az command. You should have Contributor permissions on the subscription (Azure resources) and the Application administrator role assigned in Azure Active Directory. See details here.

Set environment variables for Azure🔗

export AZURE_RESOURCE_GROUP=pluto-01-rg
+export AZURE_LOCATION=westeurope
+export AZURE_SP=pluto-01-sp
+
  • AZURE_RESOURCE_GROUP: The Azure resource group that will hold all resources belonging to the cluster: VMs, load balancers, virtual networks, subnets, and so on. Typically you will create a resource group for every OpenShift cluster you provision.
  • AZURE_LOCATION: The Azure location of the resource group, for example eastus or westeurope.
  • AZURE_SP: Azure service principal that is used to create the resources on Azure. You will get the service principal from the Azure administrator.

Store Service Principal credentials🔗

You must run the OpenShift installation using an Azure Service Principal with sufficient permissions. The Azure account administrator will share the SP credentials as a JSON file. If you have subscription-level access you can also create the Service Principal yourself. See steps in Create Azure service principal.

Example output in credentials file:

{
+  "appId": "a4c39ae9-f9d1-4038-b4a4-ab011e769111",
+  "displayName": "pluto-01-sp",
+  "password": "xyz-xyz",
+  "tenant": "869930ac-17ee-4dda-bbad-7354c3e7629c8"
+}
+

Store this file as /tmp/${AZURE_SP}-credentials.json.
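Before logging in, it is worth checking that the stored file is valid JSON and holds the fields the deployer needs. A minimal sketch, assuming python3 is available; the heredoc writes the example file from above, so point CRED_FILE at your real /tmp/${AZURE_SP}-credentials.json instead:

```shell
# Sketch: validate the service principal credentials file.
# The heredoc reproduces the example file; use your real file in practice.
CRED_FILE=$(mktemp)
cat > "$CRED_FILE" <<'EOF'
{
  "appId": "a4c39ae9-f9d1-4038-b4a4-ab011e769111",
  "displayName": "pluto-01-sp",
  "password": "xyz-xyz",
  "tenant": "869930ac-17ee-4dda-bbad-7354c3e7629c8"
}
EOF
check=$(python3 - "$CRED_FILE" <<'EOF'
import json, sys
creds = json.load(open(sys.argv[1]))
missing = [k for k in ("appId", "password", "tenant") if k not in creds]
print("missing fields: " + ", ".join(missing) if missing else "credentials file OK")
EOF
)
echo "$check"
```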

Login as Service Principal🔗

Login as the service principal:

az login --service-principal -u a4c39ae9-f9d1-4038-b4a4-ab011e769111 -p xyz-xyz --tenant 869930ac-17ee-4dda-bbad-7354c3e7629c8
+

Register Resource Providers🔗

Make sure the following Resource Providers are registered for your subscription by running:

az provider register -n Microsoft.RedHatOpenShift --wait
+az provider register -n Microsoft.Compute --wait
+az provider register -n Microsoft.Storage --wait
+az provider register -n Microsoft.Authorization --wait
+

Create the resource group🔗

First, create the resource group; it must match the one configured in your OpenShift yaml config file.

az group create \
+  --name ${AZURE_RESOURCE_GROUP} \
+  --location ${AZURE_LOCATION}
+

3. Acquire entitlement keys and secrets🔗

If you want to pull the Cloud Pak images from the entitled registry (i.e. an online install), or if you want to mirror the images to your private registry, you need to download the entitlement key. You can skip this step if you're installing from a private registry and all Cloud Pak images have already been downloaded to the private registry.

Warning

As stated for the API key, you can choose to download the entitlement key to a file. However, when we reference the entitlement key, we mean the 80+ character string that is displayed, not the file.

Acquire an OpenShift pull secret🔗

To install OpenShift you need an OpenShift pull secret which holds your entitlement.

4. Set environment variables and secrets🔗

Create the secrets needed for ARO deployment🔗

You need to store the OpenShift pull secret and service principal credentials in the vault so that the deployer has access to them.

./cp-deploy.sh vault set \
+    --vault-secret ocp-pullsecret \
+    --vault-secret-file /tmp/ocp_pullsecret.json
+
+
+./cp-deploy.sh vault set \
+    --vault-secret ${AZURE_SP}-credentials \
+    --vault-secret-file /tmp/${AZURE_SP}-credentials.json
+

5. Run the deployer🔗

Optional: validate the configuration🔗

If you only want to validate the configuration, you can run the deployer with the --check-only argument. This will run the first stage to validate variables and vault secrets and then execute the generators.

./cp-deploy.sh env apply --check-only --accept-all-licenses
+

Run the Cloud Pak Deployer🔗

To run the container using a local configuration input directory and a data directory where temporary files and state are kept, use the example below. If you don't specify the status directory, the deployer will automatically create a temporary directory. Please note that the status directory will also hold secrets if you have configured a flat file vault. If you lose the directory, you will not be able to make changes to the configuration and adjust the deployment. It is best to specify a permanent directory that you can reuse later. If you specify an existing directory, the current user must be the owner of the directory; otherwise the container may fail with insufficient permissions.

./cp-deploy.sh env apply --accept-all-licenses
+

You can also specify extra variables such as env_id to override the names of the objects referenced in the .yaml configuration files as {{ env_id }}-xxxx. For more information about the extra (dynamic) variables, see advanced configuration.

The --accept-all-licenses flag is optional and confirms that you accept all licenses of the installed cartridges and instances. Licenses must be either accepted in the configuration files or at the command line.

When running the command, the container will start as a daemon and the command will tail-follow the logs. You can press Ctrl-C at any time to interrupt the logging but the container will continue to run in the background.

You can return to view the logs as follows:

./cp-deploy.sh env logs
+

Deploying the infrastructure, preparing OpenShift and installing the Cloud Pak will take a long time, typically between 1-5 hours, depending on which Cloud Pak cartridges you configured. For the estimated duration of the steps, refer to Timings.

If you need to interrupt the automation, use Ctrl-C to stop the logging output and then use:

./cp-deploy.sh env kill
+

On failure🔗

If the Cloud Pak Deployer fails, for example because certain infrastructure components are temporarily not available, fix the cause if needed and then just re-run it with the same CONFIG_DIR and STATUS_DIR, as well as any extra variables. The provisioning process has been designed to be idempotent and it will not redo actions that have already completed successfully.

Finishing up🔗

Once the process has finished, it will output the URLs by which you can access the deployed Cloud Pak. You can also find this information under the cloud-paks directory in the status directory you specified.

To retrieve the Cloud Pak URL(s):

cat $STATUS_DIR/cloud-paks/*
+

This will show the Cloud Pak URLs:

Cloud Pak for Data URL for cluster pluto-01 and project cpd (domain name specified was example.com):
+https://cpd-cpd.apps.pluto-01.example.com
+

The admin password can be retrieved from the vault as follows:

List the secrets in the vault:

./cp-deploy.sh vault list
+

This will show something similar to the following:

Secret list for group sample:
+- ibm_cp_entitlement_key
+- sample-provision-ssh-key
+- sample-provision-ssh-pub-key
+- cp4d_admin_zen_sample_sample
+

You can then retrieve the Cloud Pak for Data admin password like this:

./cp-deploy.sh vault get --vault-secret cp4d_admin_zen_sample_sample
+
PLAY [Secrets] *****************************************************************
+included: /automation_script/automation-roles/99-generic/vault/vault-get-secret/tasks/get-secret-file.yml for localhost
+cp4d_admin_zen_sample_sample: gelGKrcgaLatBsnAdMEbmLwGr
+

Post-install configuration🔗

You can find examples of a couple of typical changes you may want to make here: Post-run changes.

\ No newline at end of file diff --git a/10-use-deployer/3-run/azure-self-managed/index.html b/10-use-deployer/3-run/azure-self-managed/index.html new file mode 100644 index 000000000..0931a1c91 --- /dev/null +++ b/10-use-deployer/3-run/azure-self-managed/index.html @@ -0,0 +1,48 @@ + Azure Self-managed - Cloud Pak Deployer

Running the Cloud Pak Deployer on Microsoft Azure - Self-managed🔗

On Azure, OpenShift can be set up in various ways, managed by Red Hat (ARO) or self-managed. The steps below are applicable to self-managed Red Hat OpenShift.

There are 5 main steps to run the deployer for Azure:

  1. Configure deployer
  2. Prepare the cloud environment
  3. Obtain entitlement keys and secrets
  4. Set environment variables and secrets
  5. Run the deployer

Topology🔗

A typical setup of the OpenShift cluster on Azure is pictured below: Self-managed configuration

When deploying self-managed OpenShift on Azure, you must configure the domain name by setting openshift.domain_name; this must be a public domain registered with a registrar. OpenShift will create a public DNS zone with additional entries to reach the OpenShift API and the applications (Cloud Paks). If you don't have a domain yet, you can buy one from Azure: https://learn.microsoft.com/en-us/azure/app-service/manage-custom-dns-buy-domain.

1. Configure deployer🔗

Deployer configuration and status directories🔗

Deployer reads the configuration from a directory you set in the CONFIG_DIR environment variable. A status directory (STATUS_DIR environment variable) is used to log activities and store temporary files and scripts. If you use a File Vault (default), the secrets are kept in the $STATUS_DIR/vault directory.

You can find OpenShift and Cloud Pak sample configuration (yaml) files here: sample configuration. For Azure self-managed installations, copy one of ocp-azure-self-managed*.yaml files into the $CONFIG_DIR/config directory. If you also want to install a Cloud Pak, copy one of the cp4*.yaml files.

Example:

mkdir -p $HOME/cpd-config/config
+cp sample-configurations/sample-dynamic/config-samples/ocp-azure-self-managed.yaml $HOME/cpd-config/config/
+cp sample-configurations/sample-dynamic/config-samples/cp4d-471.yaml $HOME/cpd-config/config/
+

Set configuration and status directories environment variables🔗

Cloud Pak Deployer uses the status directory to log its activities and also to keep track of its running state. For a given environment you're provisioning or destroying, you should always specify the same status directory to avoid contention between different deploy runs.

export CONFIG_DIR=$HOME/cpd-config
+export STATUS_DIR=$HOME/cpd-status
+
  • CONFIG_DIR: Directory that holds the configuration, it must have a config subdirectory which contains the configuration yaml files.
  • STATUS_DIR: The directory where the Cloud Pak Deployer keeps all status information and log files.

Optional: advanced configuration🔗

If the deployer configuration is kept on GitHub, follow the instructions in GitHub configuration.

For special configuration with defaults and dynamic variables, refer to Advanced configuration.

2. Prepare the cloud environment🔗

Install the Azure CLI tool🔗

Install the Azure CLI tool by running the commands applicable to your operating system.

Verify your quota and permissions in Microsoft Azure🔗

  • Check Azure resource quota of the subscription - Azure Red Hat OpenShift requires a minimum of 40 cores to create and run an OpenShift cluster.
  • The self-managed cluster is provisioned using the IPI installer command. You should have Contributor permissions on the subscription (Azure resources) and the Application administrator role assigned in Azure Active Directory. See details here.

Set environment variables for Azure🔗

export AZURE_RESOURCE_GROUP=pluto-01-rg
+export AZURE_LOCATION=westeurope
+export AZURE_SP=pluto-01-sp
+
  • AZURE_RESOURCE_GROUP: The Azure resource group that will hold all resources belonging to the cluster: VMs, load balancers, virtual networks, subnets, and so on. Typically you will create a resource group for every OpenShift cluster you provision.
  • AZURE_LOCATION: The Azure location of the resource group, for example eastus or westeurope.
  • AZURE_SP: Azure service principal that is used to create the resources on Azure. You will get the service principal from the Azure administrator.

Store Service Principal credentials🔗

You must run the OpenShift installation using an Azure Service Principal with sufficient permissions. The Azure account administrator will share the SP credentials as a JSON file. If you have subscription-level access you can also create the Service Principal yourself. See steps in Create Azure service principal.

Example output in credentials file:

{
+  "appId": "a4c39ae9-f9d1-4038-b4a4-ab011e769111",
+  "displayName": "pluto-01-sp",
+  "password": "xyz-xyz",
+  "tenant": "869930ac-17ee-4dda-bbad-7354c3e7629c8"
+}
+

Store this file as /tmp/${AZURE_SP}-credentials.json.

Login as Service Principal🔗

Login as the service principal:

az login --service-principal -u a4c39ae9-f9d1-4038-b4a4-ab011e769111 -p xyz-xyz --tenant 869930ac-17ee-4dda-bbad-7354c3e7629c8
+

Create the resource group🔗

First, create the resource group; it must match the one configured in your OpenShift yaml config file.

az group create \
+  --name ${AZURE_RESOURCE_GROUP} \
+  --location ${AZURE_LOCATION}
+

3. Acquire entitlement keys and secrets🔗

Acquire IBM Cloud Pak entitlement key🔗

If you want to pull the Cloud Pak images from the entitled registry (i.e. an online install), or if you want to mirror the images to your private registry, you need to download the entitlement key. You can skip this step if you're installing from a private registry and all Cloud Pak images have already been downloaded to the private registry.

Warning

As stated for the API key, you can choose to download the entitlement key to a file. However, when we reference the entitlement key, we mean the 80+ character string that is displayed, not the file.

Acquire an OpenShift pull secret🔗

To install OpenShift you need an OpenShift pull secret which holds your entitlement.

Optional: Locate or generate a public SSH Key🔗

To obtain access to the OpenShift nodes post-installation, you will need to specify the public SSH key of your server; typically this is ~/.ssh/id_rsa.pub, where ~ is the home directory of your user. If you don't have an SSH key pair yet, you can generate one using the steps documented here: https://cloud.ibm.com/docs/ssh-keys?topic=ssh-keys-generating-and-using-ssh-keys-for-remote-host-authentication#generating-ssh-keys-on-linux. Alternatively, the deployer can generate an SSH key pair automatically if the ocp-ssh-pub-key secret is not in the vault.
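If you prefer to create the key pair yourself, the steps above can be sketched as follows. This writes to a temporary directory so an existing ~/.ssh/id_rsa is not overwritten, and assumes ssh-keygen is installed:

```shell
# Sketch: generate an RSA key pair if none exists yet.
KEY_DIR=$(mktemp -d)      # use $HOME/.ssh for a real key
KEY=$KEY_DIR/id_rsa
# -N "" creates the key without a passphrase; -q suppresses output
[ -f "${KEY}.pub" ] || ssh-keygen -t rsa -b 4096 -N "" -f "$KEY" -q
[ -f "${KEY}.pub" ] && echo "public key ready: ${KEY}.pub"
```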

4. Set environment variables and secrets🔗

Set the Cloud Pak entitlement key🔗

If you want the Cloud Pak images to be pulled from the entitled registry, set the Cloud Pak entitlement key.

export CP_ENTITLEMENT_KEY=your_cp_entitlement_key
+
  • CP_ENTITLEMENT_KEY: This is the entitlement key you acquired as per the instructions above; it is an 80+ character string. You don't need to set this environment variable when you install the Cloud Pak(s) from a private registry.
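A quick sanity check on the key length can catch a truncated paste before the deployer runs. A minimal sketch; the placeholder value below stands in for your real entitlement key:

```shell
# Sketch: verify the entitlement key looks like the expected 80+ character
# string. The placeholder is 90 'x' characters; use your real key instead.
CP_ENTITLEMENT_KEY=$(printf 'x%.0s' $(seq 1 90))
if [ "${#CP_ENTITLEMENT_KEY}" -ge 80 ]; then
  echo "entitlement key length OK (${#CP_ENTITLEMENT_KEY} characters)"
else
  echo "entitlement key looks too short (${#CP_ENTITLEMENT_KEY} characters)"
fi
```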

Create the secrets needed for self-managed OpenShift cluster🔗

You need to store the OpenShift pull secret and service principal credentials in the vault so that the deployer has access to them.

./cp-deploy.sh vault set \
+    --vault-secret ocp-pullsecret \
+    --vault-secret-file /tmp/ocp_pullsecret.json
+
+
+./cp-deploy.sh vault set \
+    --vault-secret ${AZURE_SP}-credentials \
+    --vault-secret-file /tmp/${AZURE_SP}-credentials.json
+

Optional: Create secret for public SSH key🔗

If you want to use your SSH key to access nodes in the cluster, set the Vault secret with the public SSH key.

./cp-deploy.sh vault set \
+    --vault-secret ocp-ssh-pub-key \
+    --vault-secret-file ~/.ssh/id_rsa.pub
+

5. Run the deployer🔗

Optional: validate the configuration🔗

If you only want to validate the configuration, you can run the deployer with the --check-only argument. This will run the first stage to validate variables and vault secrets and then execute the generators.

./cp-deploy.sh env apply --check-only --accept-all-licenses
+

Run the Cloud Pak Deployer🔗

To run the container using a local configuration input directory and a data directory where temporary files and state are kept, use the example below. If you don't specify the status directory, the deployer will automatically create a temporary directory. Please note that the status directory will also hold secrets if you have configured a flat file vault. If you lose the directory, you will not be able to make changes to the configuration and adjust the deployment. It is best to specify a permanent directory that you can reuse later. If you specify an existing directory, the current user must be the owner of the directory; otherwise the container may fail with insufficient permissions.

./cp-deploy.sh env apply --accept-all-licenses
+

You can also specify extra variables such as env_id to override the names of the objects referenced in the .yaml configuration files as {{ env_id }}-xxxx. For more information about the extra (dynamic) variables, see advanced configuration.

The --accept-all-licenses flag is optional and confirms that you accept all licenses of the installed cartridges and instances. Licenses must be either accepted in the configuration files or at the command line.

When running the command, the container will start as a daemon and the command will tail-follow the logs. You can press Ctrl-C at any time to interrupt the logging but the container will continue to run in the background.

You can return to view the logs as follows:

./cp-deploy.sh env logs
+

Deploying the infrastructure, preparing OpenShift and installing the Cloud Pak will take a long time, typically between 1-5 hours, depending on which Cloud Pak cartridges you configured. For the estimated duration of the steps, refer to Timings.

If you need to interrupt the automation, use Ctrl-C to stop the logging output and then use:

./cp-deploy.sh env kill
+

On failure🔗

If the Cloud Pak Deployer fails, for example because certain infrastructure components are temporarily not available, fix the cause if needed and then just re-run it with the same CONFIG_DIR and STATUS_DIR, as well as any extra variables. The provisioning process has been designed to be idempotent and it will not redo actions that have already completed successfully.

Finishing up🔗

Once the process has finished, it will output the URLs by which you can access the deployed Cloud Pak. You can also find this information under the cloud-paks directory in the status directory you specified.

To retrieve the Cloud Pak URL(s):

cat $STATUS_DIR/cloud-paks/*
+

This will show the Cloud Pak URLs:

Cloud Pak for Data URL for cluster pluto-01 and project cpd (domain name specified was example.com):
+https://cpd-cpd.apps.pluto-01.example.com
+

The admin password can be retrieved from the vault as follows:

List the secrets in the vault:

./cp-deploy.sh vault list
+

This will show something similar to the following:

Secret list for group sample:
+- ibm_cp_entitlement_key
+- sample-provision-ssh-key
+- sample-provision-ssh-pub-key
+- cp4d_admin_cpd_demo
+

You can then retrieve the Cloud Pak for Data admin password like this:

./cp-deploy.sh vault get --vault-secret cp4d_admin_cpd_demo
+
PLAY [Secrets] *****************************************************************
+included: /automation_script/automation-roles/99-generic/vault/vault-get-secret/tasks/get-secret-file.yml for localhost
+cp4d_admin_cpd_demo: gelGKrcgaLatBsnAdMEbmLwGr
+

Post-install configuration🔗

You can find examples of a couple of typical changes you may want to make here: Post-run changes.

\ No newline at end of file diff --git a/10-use-deployer/3-run/azure-service-principal/index.html b/10-use-deployer/3-run/azure-service-principal/index.html new file mode 100644 index 000000000..57e5fb0a3 --- /dev/null +++ b/10-use-deployer/3-run/azure-service-principal/index.html @@ -0,0 +1,52 @@ + Create an Azure Service Principal - Cloud Pak Deployer

Create an Azure Service Principal🔗

Login to Azure🔗

Log in to Microsoft Azure using your subscription-level credentials.

az login
+

If you have a subscription with multiple tenants, use:

az login --tenant <TENANT_ID>
+

Example:

az login --tenant 869930ac-17ee-4dda-bbad-7354c3e7629c8
+To sign in, use a web browser to open the page https://microsoft.com/devicelogin and enter the code AXWFQQ5FJ to authenticate.
+[
+  {
+    "cloudName": "AzureCloud",
+    "homeTenantId": "869930ac-17ee-4dda-bbad-7354c3e7629c8",
+    "id": "72281667-6d54-46cb-8423-792d7bcb1234",
+    "isDefault": true,
+    "managedByTenants": [],
+    "name": "Azure Account",
+    "state": "Enabled",
+    "tenantId": "869930ac-17ee-4dda-bbad-7354c3e7629c8",
+    "user": {
+      "name": "your_user@domain.com",
+      "type": "user"
+    }
+  }
+]
+

Set subscription (optional)🔗

If you have multiple Azure subscriptions, specify the relevant subscription ID: az account set --subscription <SUBSCRIPTION_ID>

You can list the subscriptions with the following command:

az account subscription list
+

[
+  {
+    "authorizationSource": "RoleBased",
+    "displayName": "IBM xxx",
+    "id": "/subscriptions/dcexxx",
+    "state": "Enabled",
+    "subscriptionId": "dcexxx",
+    "subscriptionPolicies": {
+      "locationPlacementId": "Public_2014-09-01",
+      "quotaId": "EnterpriseAgreement_2014-09-01",
+      "spendingLimit": "Off"
+    }
+  }
+]
+

Create service principal🔗

Create the service principal that will perform the installation and assign it the Contributor role.

Set environment variables for Azure🔗

export AZURE_SUBSCRIPTION_ID=72281667-6d54-46cb-8423-792d7bcb1234
+export AZURE_LOCATION=westeurope
+export AZURE_SP=pluto-01-sp
+
  • AZURE_SUBSCRIPTION_ID: The id of your Azure subscription. Once logged in, you can retrieve this using the az account show command.
  • AZURE_LOCATION: The Azure location of the resource group, for example eastus or westeurope.
  • AZURE_SP: Azure service principal that is used to create the resources on Azure.

Create the service principal🔗

az ad sp create-for-rbac \
+  --role Contributor \
+  --name ${AZURE_SP} \
+  --scopes /subscriptions/${AZURE_SUBSCRIPTION_ID} | tee /tmp/${AZURE_SP}-credentials.json
+

Example output:

{
+  "appId": "a4c39ae9-f9d1-4038-b4a4-ab011e769111",
+  "displayName": "pluto-01-sp",
+  "password": "xyz-xyz",
+  "tenant": "869930ac-17ee-4dda-bbad-7354c3e7629c8"
+}
+

Set permissions for service principal🔗

Finally, set the permissions of the service principal to allow creation of the OpenShift cluster.

az role assignment create \
+  --role "User Access Administrator" \
+  --assignee-object-id $(az ad sp list --display-name=${AZURE_SP} --query='[].id' -o tsv)
+

\ No newline at end of file diff --git a/10-use-deployer/3-run/existing-openshift/index.html b/10-use-deployer/3-run/existing-openshift/index.html new file mode 100644 index 000000000..60d74b927 --- /dev/null +++ b/10-use-deployer/3-run/existing-openshift/index.html @@ -0,0 +1,44 @@ + Existing OpenShift - Cloud Pak Deployer

Running the Cloud Pak Deployer on an existing OpenShift cluster🔗

When running the Cloud Pak Deployer on an existing OpenShift cluster, the following is assumed:

  • The OpenShift cluster is up and running with sufficient compute nodes
  • The appropriate storage class(es) have been pre-created
  • You have cluster administrator permissions to OpenShift

Info

You can also choose to run Cloud Pak Deployer as a job on the OpenShift cluster. This removes the dependency on a separate server or workstation to run the deployer. Please note that you may need unrestricted OpenShift entitlements for this. To run the deployer on OpenShift via the OpenShift console, see Run on OpenShift using console.

With the Existing OpenShift type of deployment you can install and configure the Cloud Pak(s) on both connected and disconnected (air-gapped) clusters. When using the deployer for a disconnected cluster, make sure you specify --air-gapped for the cp-deploy.sh command.

There are 5 main steps to run the deployer for existing OpenShift:

  1. Configure deployer
  2. Prepare the cloud environment
  3. Obtain entitlement keys and secrets
  4. Set environment variables and secrets
  5. Run the deployer

1. Configure deployer🔗

Deployer configuration and status directories🔗

Deployer reads the configuration from a directory you set in the CONFIG_DIR environment variable. A status directory (STATUS_DIR environment variable) is used to log activities and store temporary files and scripts. If you use a File Vault (default), the secrets are kept in the $STATUS_DIR/vault directory.

You can find OpenShift and Cloud Pak sample configuration (yaml) files here: sample configuration. For existing OpenShift installations, copy one of ocp-existing-ocp-*.yaml files into the $CONFIG_DIR/config directory. If you also want to install a Cloud Pak, copy one of the cp4*.yaml files.

Example:

mkdir -p $HOME/cpd-config/config
+cp sample-configurations/sample-dynamic/config-samples/ocp-existing-ocp-auto.yaml $HOME/cpd-config/config/
+cp sample-configurations/sample-dynamic/config-samples/cp4d-471.yaml $HOME/cpd-config/config/
+

Set configuration and status directories environment variables🔗

Cloud Pak Deployer uses the status directory to log its activities and also to keep track of its running state. For a given environment you're provisioning or destroying, you should always specify the same status directory to avoid contention between different deploy runs.

export CONFIG_DIR=$HOME/cpd-config
+export STATUS_DIR=$HOME/cpd-status
+
  • CONFIG_DIR: Directory that holds the configuration, it must have a config subdirectory which contains the configuration yaml files.
  • STATUS_DIR: The directory where the Cloud Pak Deployer keeps all status information and log files.

Optional: advanced configuration🔗

If the deployer configuration is kept on GitHub, follow the instructions in GitHub configuration.

For special configuration with defaults and dynamic variables, refer to Advanced configuration.

2. Prepare the cloud environment🔗

No steps should be required to prepare the infrastructure; this type of installation expects the OpenShift cluster to be up and running with the supported storage classes.

3. Acquire entitlement keys and secrets🔗

If you want to pull the Cloud Pak images from the entitled registry (i.e. an online install), or if you want to mirror the images to your private registry, you need to download the entitlement key. You can skip this step if you're installing from a private registry and all Cloud Pak images have already been downloaded to the private registry.

Warning

As stated for the API key, you can choose to download the entitlement key to a file. However, when we reference the entitlement key, we mean the 80+ character string that is displayed, not the file.

4. Set environment variables and secrets🔗

Set the Cloud Pak entitlement key🔗

If you want the Cloud Pak images to be pulled from the entitled registry, set the Cloud Pak entitlement key.

export CP_ENTITLEMENT_KEY=your_cp_entitlement_key
+
  • CP_ENTITLEMENT_KEY: This is the entitlement key you acquired as per the instructions above; it is an 80+ character string. You don't need to set this environment variable when you install the Cloud Pak(s) from a private registry.

Store the OpenShift login command or configuration🔗

Because you will be deploying the Cloud Pak on an existing OpenShift cluster, the deployer needs to be able to access OpenShift. There are three methods for passing the login credentials of your OpenShift cluster(s) to the deployer process:

  1. Generic oc login command (preferred)
  2. Specific oc login command(s)
  3. kubeconfig file

Regardless of which authentication option you choose, the deployer will retrieve the secret from the vault when it requires access to OpenShift. If the secret cannot be found or if it is invalid or the OpenShift login token has expired, the deployer will fail and you will need to update the secret of your choice.

For most OpenShift installations, you can retrieve the oc login command with a temporary token from the OpenShift console. Go to the OpenShift console and click on your user at the top right of the page to get the login command. Typically this command looks something like this: oc login --server=https://api.pluto-01.coc.ibm.com:6443 --token=sha256~NQUUMroU4B6q_GTBAMS18Y3EIba1KHnJ08L2rBHvTHA

Before passing the oc login command or the kubeconfig file, make sure you can login to your cluster using the command or the config file. If the cluster's API server has a self-signed certificate, make sure you specify the --insecure-skip-tls-verify flag for the oc login command.

Example:

oc login api.pluto-01.coc.ibm.com:6443 -u kubeadmin -p BmxQ5-KjBFx-FgztG-gpTF3 --insecure-skip-tls-verify
+

Output:

Login successful.
+
+You have access to 65 projects, the list has been suppressed. You can list all projects with 'oc projects'
+
+Using project "default".
+

Option 1 - Generic oc login command🔗

This is the most straightforward option if you only have 1 OpenShift cluster in your configuration.

Set the environment variable for the oc login command

export CPD_OC_LOGIN="oc login api.pluto-01.coc.ibm.com:6443 -u kubeadmin -p BmxQ5-KjBFx-FgztG-gpTF3 --insecure-skip-tls-verify"
+

Info

Make sure you put the oc login command between quotes (single or double) to make sure the full command is stored.

When the deployer is run, it automatically sets the oc-login vault secret to the specified oc login command. When logging in to OpenShift, the deployer first checks if there is a specific oc login secret for the cluster in question (see option 2). If there is not, it will default to the generic oc-login secret (option 1).
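A common mistake is to export only `oc` because the quotes around the login command were missed. This sketch checks that the variable holds the full command, reusing the example credentials from above:

```shell
# Sketch: verify CPD_OC_LOGIN contains the full login command, not just "oc".
export CPD_OC_LOGIN="oc login api.pluto-01.coc.ibm.com:6443 -u kubeadmin -p BmxQ5-KjBFx-FgztG-gpTF3 --insecure-skip-tls-verify"
case "$CPD_OC_LOGIN" in
  "oc login "*" -u "*) result="CPD_OC_LOGIN looks complete" ;;
  *)                   result="CPD_OC_LOGIN is incomplete; check your quoting" ;;
esac
echo "$result"
```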

Option 2 - Specific oc login command(s)🔗

Use this option if you have multiple OpenShift clusters configured in the deployer configuration.

Store the login command in secret <cluster name>-oc-login

./cp-deploy.sh vault set \
  -vs pluto-01-oc-login \
  -vsv "oc login api.pluto-01.coc.ibm.com:6443 -u kubeadmin -p BmxQ5-KjBFx-FgztG-gpTF3 --insecure-skip-tls-verify"

Info

Make sure you enclose the oc login command in quotes (single or double) so that the full command is stored.

Option 3 - Use a kubeconfig file🔗

If you already have a "kubeconfig" file that holds the credentials of your cluster, you can use this. Otherwise:

  • Log in to OpenShift as a cluster administrator using your method of choice
  • Locate the Kubernetes config file. If you have logged in with the OpenShift client, this is typically ~/.kube/config

If you did not just login to the cluster, the current context of the kubeconfig file may not point to your cluster. The deployer will check that the server the current context points to matches the cluster_name and domain_name of the configured openshift object. To check the current context, run the following command:

oc config current-context
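To double-check which API server the current context points to, you can also print the server URL directly (the jsonpath expression below is standard kubectl/oc syntax):

```shell
# Show the active context and the API server URL it targets; the server should
# match the cluster_name and domain_name of the configured openshift object.
oc config current-context
oc config view --minify -o jsonpath='{.clusters[0].cluster.server}'
```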

Now, store the Kubernetes config file as a vault secret.

./cp-deploy.sh vault set \
    --vault-secret kubeconfig \
    --vault-secret-file ~/.kube/config

If the deployer manages multiple OpenShift clusters, you can specify a kubeconfig file for each of the clusters by prefixing the kubeconfig with the name of the openshift object, for example:

./cp-deploy.sh vault set \
    --vault-secret pluto-01-kubeconfig \
    --vault-secret-file /data/pluto-01/kubeconfig

./cp-deploy.sh vault set \
    --vault-secret venus-02-kubeconfig \
    --vault-secret-file /data/venus-02/kubeconfig
When connecting to the OpenShift cluster, a cluster-specific kubeconfig vault secret will take precedence over the generic kubeconfig secret.

5. Run the deployer🔗

Optional: validate the configuration🔗

If you only want to validate the configuration, you can run the deployer with the --check-only argument. This will run the first stage to validate variables and vault secrets and then execute the generators.

./cp-deploy.sh env apply --check-only --accept-all-licenses

Run the Cloud Pak Deployer🔗

To run the container using a local configuration input directory and a data directory where temporary files and state are kept, use the example below. If you don't specify the status directory, the deployer will automatically create a temporary directory. Please note that the status directory will also hold secrets if you have configured a flat file vault. If you lose the directory, you will not be able to make changes to the configuration and adjust the deployment. It is best to specify a permanent directory that you can reuse later. If you specify an existing directory, the current user must be the owner of the directory. Failing to do so may cause the container to fail with insufficient permissions.

./cp-deploy.sh env apply --accept-all-licenses

You can also specify extra variables such as env_id to override the names of the objects referenced in the .yaml configuration files as {{ env_id }}-xxxx. For more information about the extra (dynamic) variables, see advanced configuration.
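For example, assuming the deployer accepts Ansible-style extra variables via the -e flag (the flag usage here is a sketch, not confirmed by this page), env_id could be overridden like this:

```shell
# Hypothetical invocation: objects templated as {{ env_id }}-xxxx in the
# configuration would then resolve to pluto-01-xxxx.
./cp-deploy.sh env apply -e env_id=pluto-01 --accept-all-licenses
```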

The --accept-all-licenses flag is optional and confirms that you accept all licenses of the installed cartridges and instances. Licenses must be either accepted in the configuration files or at the command line.

When running the command, the container will start as a daemon and the command will tail-follow the logs. You can press Ctrl-C at any time to interrupt the logging but the container will continue to run in the background.

You can return to view the logs as follows:

./cp-deploy.sh env logs

Deploying the infrastructure, preparing OpenShift and installing the Cloud Pak will take a long time, typically between 1-5 hours, depending on which Cloud Pak cartridges you configured. For the estimated duration of the steps, refer to Timings.

If you need to interrupt the automation, use CTRL-C to stop the logging output and then use:

./cp-deploy.sh env kill

On failure🔗

If the Cloud Pak Deployer fails, for example because certain infrastructure components are temporarily unavailable, fix the cause if needed and then re-run it with the same CONFIG_DIR and STATUS_DIR, as well as the same extra variables. The provisioning process has been designed to be idempotent and will not redo actions that have already completed successfully.

Finishing up🔗

Once the process has finished, it will output the URLs by which you can access the deployed Cloud Pak. You can also find this information under the cloud-paks directory in the status directory you specified.

To retrieve the Cloud Pak URL(s):

cat $STATUS_DIR/cloud-paks/*

This will show the Cloud Pak URLs:

Cloud Pak for Data URL for cluster pluto-01 and project cpd (domain name specified was example.com):
https://cpd-cpd.apps.pluto-01.example.com

The admin password can be retrieved from the vault as follows:

List the secrets in the vault:

./cp-deploy.sh vault list

This will show something similar to the following:

Secret list for group sample:
- ibm_cp_entitlement_key
- oc-login
- cp4d_admin_cpd_demo

You can then retrieve the Cloud Pak for Data admin password like this:

./cp-deploy.sh vault get --vault-secret cp4d_admin_cpd_demo

PLAY [Secrets] *****************************************************************
included: /cloud-pak-deployer/automation-roles/99-generic/vault/vault-get-secret/tasks/get-secret-file.yml for localhost
cp4d_admin_zen_sample_sample: gelGKrcgaLatBsnAdMEbmLwGr

Post-install configuration🔗

You can find examples of a couple of typical changes you may want to do here: Post-run changes.


Running the Cloud Pak Deployer on IBM Cloud🔗

You can use Cloud Pak Deployer to create a ROKS (Red Hat OpenShift Kubernetes Service) on IBM Cloud.

There are 5 main steps to run the deployer for IBM Cloud:

  1. Configure deployer
  2. Prepare the cloud environment
  3. Obtain entitlement keys and secrets
  4. Set environment variables and secrets
  5. Run the deployer

See the deployer in action in this video: https://ibm.box.com/v/cpd-ibm-cloud-roks

Topology🔗

A typical setup of the ROKS cluster on IBM Cloud VPC is pictured below: ROKS configuration

1. Configure deployer🔗

Deployer configuration and status directories🔗

Deployer reads the configuration from a directory you set in the CONFIG_DIR environment variable. A status directory (STATUS_DIR environment variable) is used to log activities and store temporary files and scripts. If you use a File Vault (default), the secrets are kept in the $STATUS_DIR/vault directory.

You can find OpenShift and Cloud Pak sample configuration (yaml) files here: sample configuration. For IBM Cloud installations, copy one of the ocp-ibm-cloud-roks*.yaml files into the $CONFIG_DIR/config directory. If you also want to install a Cloud Pak, copy one of the cp4*.yaml files.

Example:

mkdir -p $HOME/cpd-config/config
cp sample-configurations/sample-dynamic/config-samples/ocp-ibm-cloud-roks-ocs.yaml $HOME/cpd-config/config/
cp sample-configurations/sample-dynamic/config-samples/cp4d-471.yaml $HOME/cpd-config/config/

Set configuration and status directories environment variables🔗

Cloud Pak Deployer uses the status directory to log its activities and also to keep track of its running state. For a given environment you're provisioning or destroying, you should always specify the same status directory to avoid contention between different deploy runs.

export CONFIG_DIR=$HOME/cpd-config
export STATUS_DIR=$HOME/cpd-status
  • CONFIG_DIR: Directory that holds the configuration, it must have a config subdirectory which contains the configuration yaml files.
  • STATUS_DIR: The directory where the Cloud Pak Deployer keeps all status information and log files.

Optional: advanced configuration🔗

If the deployer configuration is kept on GitHub, follow the instructions in GitHub configuration.

For special configuration with defaults and dynamic variables, refer to Advanced configuration.

2. Prepare the cloud environment🔗

Create an IBM Cloud API Key🔗

In order for the Cloud Pak Deployer to create the infrastructure and deploy IBM Cloud Pak for Data, it must perform tasks on IBM Cloud and therefore requires an IBM Cloud API key. This can be created by following these steps:

  • Go to https://cloud.ibm.com/iam/apikeys and login with your IBMid credentials
  • Ensure you have selected the correct IBM Cloud Account for which you wish to use the Cloud Pak Deployer
  • Click Create an IBM Cloud API Key and provide a name and description
  • Copy the IBM Cloud API key using the Copy button and store it in a safe place, as you will not be able to retrieve it later

Warning

You can choose to download the API key for later reference. However, when we reference the API key, we mean the IBM Cloud API key as a 40+ character string.
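If you prefer the command line and have the IBM Cloud CLI installed, the API key can also be created with the ibmcloud iam api-key-create command (the key name below is illustrative):

```shell
# Log in interactively, then create an API key; the key value is displayed once.
ibmcloud login
ibmcloud iam api-key-create cpd-deployer-key -d "API key for Cloud Pak Deployer"
```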

Set environment variables for IBM Cloud🔗

Set the environment variables specific to IBM Cloud deployments.

export IBM_CLOUD_API_KEY=your_api_key

  • IBM_CLOUD_API_KEY: This is the API key you generated using your IBM Cloud account; it is a 40+ character string

3. Acquire entitlement keys and secrets🔗

If you want to pull the Cloud Pak images from the entitled registry (i.e. an online install), or if you want to mirror the images to your private registry, you need to download the entitlement key. You can skip this step if you're installing from a private registry and all Cloud Pak images have already been downloaded to the private registry.

Warning

As stated for the API key, you can choose to download the entitlement key to a file. However, when we reference the entitlement key, we mean the 80+ character string that is displayed, not the file.

4. Set environment variables and secrets🔗

Set the Cloud Pak entitlement key🔗

If you want the Cloud Pak images to be pulled from the entitled registry, set the Cloud Pak entitlement key.

export CP_ENTITLEMENT_KEY=your_cp_entitlement_key
  • CP_ENTITLEMENT_KEY: This is the entitlement key you acquired as per the instructions above; it is an 80+ character string. You don't need to set this environment variable when you install the Cloud Pak(s) from a private registry
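Before running the deployer, you can optionally verify the entitlement key by authenticating against the IBM entitled registry (the registry host cp.icr.io with user cp is the documented convention for the entitled registry):

```shell
# A successful login confirms the entitlement key is valid; no images are pulled.
podman login cp.icr.io --username cp --password "$CP_ENTITLEMENT_KEY"
```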

5. Run the deployer🔗

Optional: validate the configuration🔗

If you only want to validate the configuration, you can run the deployer with the --check-only argument. This will run the first stage to validate variables and vault secrets and then execute the generators.

./cp-deploy.sh env apply --check-only --accept-all-licenses

Run the Cloud Pak Deployer🔗

To run the container using a local configuration input directory and a data directory where temporary files and state are kept, use the example below. If you don't specify the status directory, the deployer will automatically create a temporary directory. Please note that the status directory will also hold secrets if you have configured a flat file vault. If you lose the directory, you will not be able to make changes to the configuration and adjust the deployment. It is best to specify a permanent directory that you can reuse later. If you specify an existing directory, the current user must be the owner of the directory. Failing to do so may cause the container to fail with insufficient permissions.

./cp-deploy.sh env apply --accept-all-licenses

You can also specify extra variables such as env_id to override the names of the objects referenced in the .yaml configuration files as {{ env_id }}-xxxx. For more information about the extra (dynamic) variables, see advanced configuration.

The --accept-all-licenses flag is optional and confirms that you accept all licenses of the installed cartridges and instances. Licenses must be either accepted in the configuration files or at the command line.

When running the command, the container will start as a daemon and the command will tail-follow the logs. You can press Ctrl-C at any time to interrupt the logging but the container will continue to run in the background.

You can return to view the logs as follows:

./cp-deploy.sh env logs

Deploying the infrastructure, preparing OpenShift and installing the Cloud Pak will take a long time, typically between 1-5 hours, depending on which Cloud Pak cartridges you configured. For the estimated duration of the steps, refer to Timings.

If you need to interrupt the automation, use CTRL-C to stop the logging output and then use:

./cp-deploy.sh env kill

On failure🔗

If the Cloud Pak Deployer fails, for example because certain infrastructure components are temporarily unavailable, fix the cause if needed and then re-run it with the same CONFIG_DIR and STATUS_DIR, as well as the same extra variables. The provisioning process has been designed to be idempotent and will not redo actions that have already completed successfully.

Finishing up🔗

Once the process has finished, it will output the URLs by which you can access the deployed Cloud Pak. You can also find this information under the cloud-paks directory in the status directory you specified.

To retrieve the Cloud Pak URL(s):

cat $STATUS_DIR/cloud-paks/*

This will show the Cloud Pak URLs:

Cloud Pak for Data URL for cluster pluto-01 and project cpd (domain name specified was example.com):
https://cpd-cpd.apps.pluto-01.example.com

The admin password can be retrieved from the vault as follows:

List the secrets in the vault:

./cp-deploy.sh vault list

This will show something similar to the following:

Secret list for group sample:
- ibm_cp_entitlement_key
- sample-provision-ssh-key
- sample-provision-ssh-pub-key
- sample-terraform-tfstate
- cp4d_admin_cpd_demo

You can then retrieve the Cloud Pak for Data admin password like this:

./cp-deploy.sh vault get --vault-secret cp4d_admin_cpd_demo

PLAY [Secrets] *****************************************************************
included: /cloud-pak-deployer/automation-roles/99-generic/vault/vault-get-secret/tasks/get-secret-file.yml for localhost
cp4d_admin_zen_sample_sample: gelGKrcgaLatBsnAdMEbmLwGr

Post-install configuration🔗

You can find examples of a couple of typical changes you may want to do here: Post-run changes.

\ No newline at end of file diff --git a/10-use-deployer/3-run/images/aws-rosa-ocs.png b/10-use-deployer/3-run/images/aws-rosa-ocs.png new file mode 100644 index 000000000..37f2cb217 Binary files /dev/null and b/10-use-deployer/3-run/images/aws-rosa-ocs.png differ diff --git a/10-use-deployer/3-run/images/aws-self-managed-ocs.png b/10-use-deployer/3-run/images/aws-self-managed-ocs.png new file mode 100644 index 000000000..37f2cb217 Binary files /dev/null and b/10-use-deployer/3-run/images/aws-self-managed-ocs.png differ diff --git a/10-use-deployer/3-run/images/azure-aro.png b/10-use-deployer/3-run/images/azure-aro.png new file mode 100644 index 000000000..8d4212c70 Binary files /dev/null and b/10-use-deployer/3-run/images/azure-aro.png differ diff --git a/10-use-deployer/3-run/images/ibm-roks-ocs.png b/10-use-deployer/3-run/images/ibm-roks-ocs.png new file mode 100644 index 000000000..42e568754 Binary files /dev/null and b/10-use-deployer/3-run/images/ibm-roks-ocs.png differ diff --git a/10-use-deployer/3-run/images/vsphere-ocs-nfs.png b/10-use-deployer/3-run/images/vsphere-ocs-nfs.png new file mode 100644 index 000000000..b99c4bfcf Binary files /dev/null and b/10-use-deployer/3-run/images/vsphere-ocs-nfs.png differ diff --git a/10-use-deployer/3-run/run/index.html b/10-use-deployer/3-run/run/index.html new file mode 100644 index 000000000..c4d7d278e --- /dev/null +++ b/10-use-deployer/3-run/run/index.html @@ -0,0 +1 @@ + Running Cloud Pak Deployer - Cloud Pak Deployer
Skip to content
\ No newline at end of file diff --git a/10-use-deployer/3-run/vsphere/index.html b/10-use-deployer/3-run/vsphere/index.html new file mode 100644 index 000000000..7716e833a --- /dev/null +++ b/10-use-deployer/3-run/vsphere/index.html @@ -0,0 +1,35 @@ + vSphere - Cloud Pak Deployer
Skip to content

Running the Cloud Pak Deployer on vSphere🔗

You can use Cloud Pak Deployer to create an OpenShift cluster on VMware infrastructure.

There are 5 main steps to run the deployer for vSphere:

  1. Configure deployer
  2. Prepare the cloud environment
  3. Obtain entitlement keys and secrets
  4. Set environment variables and secrets
  5. Run the deployer

Topology🔗

A typical setup of the vSphere cluster with OpenShift is pictured below: vSphere configuration

When deploying OpenShift and the Cloud Pak(s) on VMware vSphere, there is a dependency on a DHCP server to issue IP addresses to the newly configured cluster nodes. Also, once the OpenShift cluster has been installed, valid fully qualified host names are required to connect to the OpenShift API server at port 6443 and to applications running behind the ingress server at port 443. The Cloud Pak Deployer cannot set up a DHCP or DNS server itself; to be able to connect to OpenShift or reach the Cloud Pak after installation, name entries must be set up.

1. Configure deployer🔗

Deployer configuration and status directories🔗

Deployer reads the configuration from a directory you set in the CONFIG_DIR environment variable. A status directory (STATUS_DIR environment variable) is used to log activities and store temporary files and scripts. If you use a File Vault (default), the secrets are kept in the $STATUS_DIR/vault directory.

You can find OpenShift and Cloud Pak sample configuration (yaml) files here: sample configuration. For vSphere installations, copy one of the ocp-vsphere-*.yaml files into the $CONFIG_DIR/config directory. If you also want to install a Cloud Pak, copy one of the cp4*.yaml files.

Example:

mkdir -p $HOME/cpd-config/config
cp sample-configurations/sample-dynamic/config-samples/ocp-vsphere-ocs-nfs.yaml $HOME/cpd-config/config/
cp sample-configurations/sample-dynamic/config-samples/cp4d-471.yaml $HOME/cpd-config/config/

Set configuration and status directories environment variables🔗

Cloud Pak Deployer uses the status directory to log its activities and also to keep track of its running state. For a given environment you're provisioning or destroying, you should always specify the same status directory to avoid contention between different deploy runs.

export CONFIG_DIR=$HOME/cpd-config
export STATUS_DIR=$HOME/cpd-status
  • CONFIG_DIR: Directory that holds the configuration, it must have a config subdirectory which contains the configuration yaml files.
  • STATUS_DIR: The directory where the Cloud Pak Deployer keeps all status information and log files.

Optional: advanced configuration🔗

If the deployer configuration is kept on GitHub, follow the instructions in GitHub configuration.

For special configuration with defaults and dynamic variables, refer to Advanced configuration.

2. Prepare the cloud environment🔗

Pre-requisites for vSphere🔗

In order to successfully install OpenShift on vSphere infrastructure, the following pre-requisites must be met:

  • Red Hat pull secret: A pull secret is required to download and install OpenShift. See Acquire pull secret
  • IBM Entitlement key: When installing an IBM Cloud Pak, you need an IBM entitlement key. See Acquire IBM Cloud Pak entitlement key
  • vSphere credentials: The OpenShift IPI installer requires vSphere credentials to create VMs and storage
  • Firewall rules: The OpenShift cluster's API server on port 6443 and application server on port 443 must be reachable
  • Whitelisted URLs: The OpenShift and Cloud Pak download locations and registry must be accessible from the vSphere infrastructure. See Whitelisted locations
  • DHCP: When provisioning new VMs, IP addresses must be automatically assigned through DHCP
  • DNS: A DNS server that will resolve the OpenShift API server and applications is required. See DNS configuration
  • Time server: A time server to synchronize the time must be available in the network and configured through the DHCP server

There are also some optional pre-requisites, dependent on the specifics of the installation:

  • Bastion server: It can be useful to have a bastion/installation server to run the deployer. This (virtual) server must reside within the vSphere network
  • NFS details: If an NFS server is used for storage, it must be reachable (firewall) and no_root_squash must be set
  • Private registry: If the installation must use a private registry for the Cloud Pak installation, it must be available and its credentials shared
  • Certificates: If the Cloud Pak URL must have a CA-signed certificate, the key, certificate and CA bundle must be available at installation time
  • Load balancer: The OpenShift IPI install creates 2 VIPs and takes care of the routing to the services. In some implementations, a load balancer provided by the infrastructure team is preferred. This load balancer must be configured externally

DNS configuration🔗

During the provisioning and configuration process, the deployer needs access to the OpenShift API and the ingress server for which the IP addresses are specified in the openshift object.

Ensure that the DNS server has the following entries:

  • api.openshift_name.domain_name → Point to the api_vip address configured in the openshift object
  • *.apps.openshift_name.domain_name → Point to the ingress_vip address configured in the openshift object
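Assuming openshift_name pluto-01 and domain_name example.com (illustrative names), you can verify the DNS entries before running the deployer:

```shell
# The lookups should return the api_vip and ingress_vip addresses respectively.
nslookup api.pluto-01.example.com
nslookup test.apps.pluto-01.example.com   # any name under *.apps should resolve
```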

If you do not configure the DNS entries upfront, the deployer will still run and will "spoof" the required entries in the container's /etc/hosts file. However, to be able to connect to OpenShift and access the Cloud Pak, the DNS entries are required.

Obtain the vSphere user and password🔗

In order for the Cloud Pak Deployer to create the infrastructure and deploy the IBM Cloud Pak, it must have provisioning access to vSphere and it needs the vSphere user and password. The user must have permissions to create VM folders and virtual machines.

Set environment variables for vSphere🔗

export VSPHERE_USER=your_vsphere_user
export VSPHERE_PASSWORD=password_of_the_vsphere_user
  • VSPHERE_USER: This is the user name of the vSphere user, often this is something like admin@vsphere.local
  • VSPHERE_PASSWORD: The password of the vSphere user. Be careful with special characters like $, ! as they are not accepted by the IPI provisioning of OpenShift

3. Acquire entitlement keys and secrets🔗

Acquire IBM Cloud Pak entitlement key🔗

If you want to pull the Cloud Pak images from the entitled registry (i.e. an online install), or if you want to mirror the images to your private registry, you need to download the entitlement key. You can skip this step if you're installing from a private registry and all Cloud Pak images have already been downloaded to the private registry.

Warning

As stated for the API key, you can choose to download the entitlement key to a file. However, when we reference the entitlement key, we mean the 80+ character string that is displayed, not the file.

Acquire an OpenShift pull secret🔗

To install OpenShift you need an OpenShift pull secret which holds your entitlement.

Optional: Locate or generate a public SSH Key🔗

To obtain access to the OpenShift nodes post-installation, you will need to specify the public SSH key of your server; typically this is ~/.ssh/id_rsa.pub, where ~ is the home directory of your user. If you don't have an SSH key-pair yet, you can generate one using the steps documented here: https://cloud.ibm.com/docs/ssh-keys?topic=ssh-keys-generating-and-using-ssh-keys-for-remote-host-authentication#generating-ssh-keys-on-linux. Alternatively, the deployer can generate an SSH key-pair automatically if the ocp-ssh-pub-key secret is not in the vault.
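For example, a new RSA key-pair without a passphrase can be generated as follows (adjust the path if you keep your keys elsewhere):

```shell
# Generate a 4096-bit RSA key-pair; the public key is written to ~/.ssh/id_rsa.pub.
ssh-keygen -t rsa -b 4096 -N "" -f ~/.ssh/id_rsa
```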

4. Set environment variables and secrets🔗

Set the Cloud Pak entitlement key🔗

If you want the Cloud Pak images to be pulled from the entitled registry, set the Cloud Pak entitlement key.

export CP_ENTITLEMENT_KEY=your_cp_entitlement_key
  • CP_ENTITLEMENT_KEY: This is the entitlement key you acquired as per the instructions above; it is an 80+ character string. You don't need to set this environment variable when you install the Cloud Pak(s) from a private registry

Create the secrets needed for vSphere deployment🔗

You need to store the OpenShift pull secret in the vault so that the deployer has access to it.

./cp-deploy.sh vault set \
    --vault-secret ocp-pullsecret \
    --vault-secret-file /tmp/ocp_pullsecret.json

Optional: Create secret for public SSH key🔗

If you want to use your SSH key to access nodes in the cluster, set the Vault secret with the public SSH key.

./cp-deploy.sh vault set \
    --vault-secret ocp-ssh-pub-key \
    --vault-secret-file ~/.ssh/id_rsa.pub

5. Run the deployer🔗

Optional: validate the configuration🔗

If you only want to validate the configuration, you can run the deployer with the --check-only argument. This will run the first stage to validate variables and vault secrets and then execute the generators.

./cp-deploy.sh env apply --check-only --accept-all-licenses

Run the Cloud Pak Deployer🔗

To run the container using a local configuration input directory and a data directory where temporary files and state are kept, use the example below. If you don't specify the status directory, the deployer will automatically create a temporary directory. Please note that the status directory will also hold secrets if you have configured a flat file vault. If you lose the directory, you will not be able to make changes to the configuration and adjust the deployment. It is best to specify a permanent directory that you can reuse later. If you specify an existing directory, the current user must be the owner of the directory. Failing to do so may cause the container to fail with insufficient permissions.

./cp-deploy.sh env apply --accept-all-licenses

You can also specify extra variables such as env_id to override the names of the objects referenced in the .yaml configuration files as {{ env_id }}-xxxx. For more information about the extra (dynamic) variables, see advanced configuration.

The --accept-all-licenses flag is optional and confirms that you accept all licenses of the installed cartridges and instances. Licenses must be either accepted in the configuration files or at the command line.

When running the command, the container will start as a daemon and the command will tail-follow the logs. You can press Ctrl-C at any time to interrupt the logging but the container will continue to run in the background.

You can return to view the logs as follows:

./cp-deploy.sh env logs

Deploying the infrastructure, preparing OpenShift and installing the Cloud Pak will take a long time, typically between 1-5 hours, depending on which Cloud Pak cartridges you configured. For the estimated duration of the steps, refer to Timings.

If you need to interrupt the automation, use CTRL-C to stop the logging output and then use:

./cp-deploy.sh env kill

On failure🔗

If the Cloud Pak Deployer fails, for example because certain infrastructure components are temporarily unavailable, fix the cause if needed and then re-run it with the same CONFIG_DIR and STATUS_DIR, as well as the same extra variables. The provisioning process has been designed to be idempotent and will not redo actions that have already completed successfully.

Finishing up🔗

Once the process has finished, it will output the URLs by which you can access the deployed Cloud Pak. You can also find this information under the cloud-paks directory in the status directory you specified.

To retrieve the Cloud Pak URL(s):

cat $STATUS_DIR/cloud-paks/*

This will show the Cloud Pak URLs:

Cloud Pak for Data URL for cluster pluto-01 and project cpd (domain name specified was example.com):
https://cpd-cpd.apps.pluto-01.example.com

The admin password can be retrieved from the vault as follows:

List the secrets in the vault:

./cp-deploy.sh vault list

This will show something similar to the following:

Secret list for group sample:
- vsphere-user
- vsphere-password
- ocp-pullsecret
- ocp-ssh-pub-key
- ibm_cp_entitlement_key
- sample-kubeadmin-password
- cp4d_admin_cpd_demo

You can then retrieve the Cloud Pak for Data admin password like this:

./cp-deploy.sh vault get --vault-secret cp4d_admin_cpd_demo

PLAY [Secrets] *****************************************************************
included: /cloud-pak-deployer/automation-roles/99-generic/vault/vault-get-secret/tasks/get-secret-file.yml for localhost
cp4d_admin_zen_sample_sample: gelGKrcgaLatBsnAdMEbmLwGr

Post-install configuration🔗

You can find examples of a couple of typical changes you may want to do here: Post-run changes.


Post-run changes🔗

If you want to change the deployed configuration, you can just update the configuration files and re-run the deployer. Make sure that you use the same input configuration and status directories and also the env_id if you specified one, otherwise deployment may fail.

Below are a couple of examples of post-run changes you may want to do.

Change Cloud Pak for Data admin password🔗

When initially installed, the Cloud Pak Deployer will generate a strong password for the Cloud Pak for Data admin user (or cpadmin if you have selected to use Foundational Services IAM). If you want to change the password afterwards, you can do this from the Cloud Pak for Data user interface, but this means that the deployer will no longer be able to make changes to the Cloud Pak for Data configuration.

If you have updated the admin password from the UI, please make sure you also update the secret in the vault.

First, list the secrets in the vault:

./cp-deploy.sh vault list

This will show something similar to the following:

Secret list for group sample:
- ibm_cp_entitlement_key
- sample-provision-ssh-key
- sample-provision-ssh-pub-key
- sample-terraform-tfstate
- cp4d_admin_zen_sample_sample

Then, update the password:

./cp-deploy.sh vault set -vs cp4d_admin_zen_sample_sample -vsv "my Really Sec3re Passw0rd"

Finally, run the deployer again. It will make the necessary changes to the OpenShift secret and check that the admin user can log in. In this case you can speed up the process via the --skip-infra flag.

./cp-deploy.sh env apply --skip-infra [--accept-all-licenses]


Open a command line within the Cloud Pak Deployer container🔗

Sometimes you may need to access the OpenShift cluster using the OpenShift client. For convenience we have made the oc command available in the Cloud Pak Deployer and you can start exploring the current OpenShift cluster immediately without having to install the client on your own workstation.

Prepare for the command line🔗

Set environment variables🔗

Make sure you have set the CONFIG_DIR and STATUS_DIR environment variables to the same values as when you ran the env apply command. This will ensure that the oc command accesses the OpenShift cluster(s) of that configuration.
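For example, using the same directories as in the installation steps:

```shell
export CONFIG_DIR=$HOME/cpd-config
export STATUS_DIR=$HOME/cpd-status
```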

Optional: prepare OpenShift cluster🔗

If you have not run the deployer yet and do not intend to install any Cloud Paks, but you do want to access the OpenShift cluster from the command line to check or prepare items, run the deployer with the --skip-cp-install flag.

./cp-deploy.sh env apply --skip-cp-install
+

The deployer will check the configuration, download the clients, attempt to log in to OpenShift and prepare the OpenShift cluster with the global pull secret and (for Cloud Pak for Data) node settings. After that, the deployer will finish without installing any Cloud Pak.

Run the Cloud Pak Deployer command line🔗

./cp-deploy.sh env cmd 
+

You should see something like this:

-------------------------------------------------------------------------------
+Entering Cloud Pak Deployer command line in a container.
+Use the "exit" command to leave the container and return to the hosting server.
+-------------------------------------------------------------------------------
+Installing OpenShift client
+Current OpenShift context: cpd
+

Now, you can check the OpenShift cluster version:

[root@Cloud Pak Deployer Container ~]$ oc get clusterversion
+NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
+version   4.8.14    True        False         2d3h    Cluster version is 4.8.14
+

Or, display the list of OpenShift projects:

[root@Cloud Pak Deployer Container ~]$ oc get projects | grep -v openshift-
+NAME                                               DISPLAY NAME   STATUS
+calico-system                                                     Active
+default                                                           Active
+ibm-cert-store                                                    Active
+ibm-odf-validation-webhook                                        Active
+ibm-system                                                        Active
+kube-node-lease                                                   Active
+kube-public                                                       Active
+kube-system                                                       Active
+openshift                                                         Active
+services                                                          Active
+tigera-operator                                                   Active
+cpd                                                               Active
+

Exit the command line🔗

Once finished, exit out of the container.

exit
+

Destroy cluster - Cloud Pak Deployer

Destroy the created resources🔗

If you have previously used the Cloud Pak Deployer to create assets on IBM Cloud, AWS or Azure, you can destroy the assets with the same command.

Info

Currently, destroy is only implemented for IBM Cloud ROKS, AWS and Azure ARO, not for other cloud platforms.

Prepare for destroy🔗

Prepare for destroy on IBM Cloud🔗

Set environment variables for IBM Cloud🔗

export IBM_CLOUD_API_KEY=your_api_key
+

Optional: set environment variables for the deployer configuration and status directories. If not specified, $HOME/cpd-config and $HOME/cpd-status will be used, respectively.

export STATUS_DIR=$HOME/cpd-status
+export CONFIG_DIR=$HOME/cpd-config
+

  • IBM_CLOUD_API_KEY: This is the API key you generated using your IBM Cloud account; it is a 40+ character string
  • STATUS_DIR: The directory where the Cloud Pak Deployer keeps all status information and log files. Please note that if you have chosen to use a File Vault, the directory specified must be the one you used when you created the environment
  • CONFIG_DIR: Directory that holds the configuration. This must be the same directory you used when you created the environment

Prepare for destroy on AWS🔗

Set environment variables for AWS🔗

We assume that the vault already holds the mandatory secrets for AWS Access Key, Secret Access Key and ROSA login token.

export STATUS_DIR=$HOME/cpd-status
+export CONFIG_DIR=$HOME/cpd-config
+
  • STATUS_DIR: The directory where the Cloud Pak Deployer keeps all status information and log files. Please note that if you have chosen to use a File Vault, the directory specified must be the one you used when you created the environment
  • CONFIG_DIR: Directory that holds the configuration. This must be the same directory you used when you created the environment

Prepare for destroy on Azure🔗

Set environment variables for Azure🔗

We assume that the vault already holds the mandatory secrets for Azure: the service principal ID and its password, the tenant ID and the ARO login token.

export STATUS_DIR=$HOME/cpd-status
+export CONFIG_DIR=$HOME/cpd-config
+
  • STATUS_DIR: The directory where the Cloud Pak Deployer keeps all status information and log files. Please note that if you have chosen to use a File Vault, the directory specified must be the one you used when you created the environment
  • CONFIG_DIR: Directory that holds the configuration. This must be the same directory you used when you created the environment

Run the Cloud Pak Deployer to destroy the assets🔗

./cp-deploy.sh env destroy --confirm-destroy
+

Please ensure you specify the same extra (dynamic) variables that you used when you ran the env apply command.

When running the command, the container will start as a daemon and the command will tail-follow the logs. You can press Ctrl-C at any time to interrupt the logging but the container will continue to run in the background.

You can return to view the logs as follows:

./cp-deploy.sh env logs
+

If you need to interrupt the process, use Ctrl-C to stop the logging output and then use:

./cp-deploy.sh env kill
+

Finishing up🔗

Once the process has finished successfully, you can delete the status directory.

Cloud Paks - Cloud Pak Deployer

Cloud Paks🔗

Defines the Cloud Pak(s) that are laid out on the OpenShift cluster, typically in one or more OpenShift projects. The Cloud Pak definition represents the instance that users connect to and that is responsible for managing the functional capabilities installed within the application.

Cloud Pak configuration🔗

cp4d🔗

Defines the Cloud Pak for Data instances to be configured on the OpenShift cluster(s).

cp4d:
+- project: cpd
+  openshift_cluster_name: sample
+  cp4d_version: 4.7.3
+  sequential_install: False
+  use_fs_iam: False
+  change_node_settings: True
+  db2u_limited_privileges: False
+  accept_licenses: False
+  openshift_storage_name: nfs-storage
+  cp4d_entitlement: cpd-enterprise
+  cp4d_production_license: True
+  
+  cartridges:
+  - name: cpfs
+  - name: cpd_platform
+

Properties🔗

Property Description Mandatory Allowed values
project Name of the OpenShift project of the Cloud Pak for Data instance Yes
openshift_cluster_name Name of the OpenShift cluster Yes, inferred from openshift Existing openshift cluster
cp4d_version Cloud Pak for Data version to install; this will determine the version for all cartridges that do not specify a version Yes 4.x.x
sequential_install If set to True the deployer will run the OLM utils playbooks to install catalog sources, subscriptions and CRs. If set to False, the deployer will use OLM utils to generate the scripts and then run them, which will cause the catalog sources, subscriptions and CRs to be created immediately and install in parallel No True (default), False
use_fs_iam If set to True the deployer will enable Foundational Services IAM for authentication No False (default), True
change_node_settings Controls whether the node settings using the machine configs will be applied onto the OpenShift cluster. No True, False
db2u_limited_privileges Depicts whether Db2U containers run with limited privileges. If they do (True), Deployer will create KubeletConfig and Tuned OpenShift resources as per the documentation. No False (default), True
accept_licenses Set to 'True' to accept Cloud Pak licenses. Alternatively, the --accept-all-licenses flag can be used with the cp-deploy.sh command No True, False (default)
cp4d_entitlement Set to cpd-enterprise, cpd-standard, watsonx-data, watsonx-ai, watsonx-gov-model-management, watsonx-gov-risk-compliance, dependent on the deployed license No cpd-enterprise (default), cpd-standard, watsonx-data, watsonx-ai, watsonx-gov-model-management, watsonx-gov-risk-compliance
cp4d_production_license Whether the Cloud Pak for Data is a production license No True (default), False
image_registry_name When using private registry, specify name of image_registry No
openshift_storage_name References an openshift_storage element in the OpenShift cluster that was defined for this Cloud Pak for Data instance. The name must exist under `openshift.[openshift_cluster_name].openshift_storage`. No, inferred from openshift->openshift_storage
cartridges List of cartridges to install for this Cloud Pak for Data instance. See Cloud Pak for Data cartridges for more details Yes
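Putting several of the properties above together: a minimal cp4d entry with a cartridge that pins its own version could look as follows. The wkc cartridge name is illustrative; per the cp4d_version description, any cartridge without an explicit version inherits 4.7.3:

```yaml
cp4d:
- project: cpd
  openshift_cluster_name: sample
  cp4d_version: 4.7.3
  accept_licenses: True
  openshift_storage_name: nfs-storage
  cartridges:
  - name: cpfs
  - name: cpd_platform
  # Illustrative cartridge entry; an explicit version overrides cp4d_version
  - name: wkc
    version: 4.7.3
```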

cp4i🔗

Defines the Cloud Pak for Integration installation to be configured on the OpenShift cluster(s).

cp4i:
+
+- project: cp4i
+  openshift_cluster_name: {{ env_id }}
+  openshift_storage_name: nfs-rook-ceph
+  cp4i_version: 2021.4.1
+  accept_licenses: False
+  use_top_level_operator: False
+  top_level_operator_channel: v1.5
+  top_level_operator_case_version: 2.5.0
+  operators_in_all_namespaces: True
+ 
+  instances:
+
+  - name: integration-navigator
+    type: platform-navigator
+    license: L-RJON-C7QG3S
+    channel: v5.2
+    case_version: 1.5.0
+

OpenShift projects🔗

The immediate content of the cp4i object is actually a list of OpenShift projects (namespaces). There can be more than one project and instances can be created in separate projects.

cp4i:
+- project: cp4i
+  ...
+
+- project: cp4i-ace
+  ...
+
+- project: cp4i-apic
+  ...
+

Operator channels, CASE versions, license IDs🔗

Before you run the Cloud Pak Deployer, be sure that the correct operator channels are defined for the selected instance types. Some products require a license ID; please check the documentation of each product for the correct license. If you decide to use CASE files instead of the IBM Operator Catalog (more on that below), make sure that you select the correct CASE versions - please refer to https://github.com/IBM/cloud-pak/tree/master/repo/case

Main properties🔗

The following properties are defined on the project level:

Property Description Mandatory Allowed values
project The name of the OpenShift project that will be created and used for the installation of the defined instances. Yes
openshift_cluster_name Dynamically defined from the env_id parameter during execution. Yes, inferred from openshift Existing openshift cluster
openshift_storage_name Reference to the storage definition that exists in the openshift object (please see above). The definition must include the class name of the file storage type and the class name of the block storage type. No, inferred from openshift->openshift_storage
cp4i_version The version of the Cloud Pak for Integration (e.g. 2021.4.1) Yes
use_case_files Defines whether CASE files are used for the installation. If True, the operator catalogs are created from the CASE files. If False, the IBM Operator Catalog from the entitled registry is used. No True, False (default)
accept_licenses Set to True to accept Cloud Pak licenses. Alternatively, the --accept-all-licenses flag can be used with the cp-deploy.sh command Yes True, False
use_top_level_operator If it is True then the CP4I top-level operator that installs all other operators is used. Otherwise, only the operators for the selected instance types are installed. No True, False (default)
top_level_operator_channel Needed if use_top_level_operator is True; otherwise it is ignored. Specifies the channel of the top-level operator. No
top_level_operator_case_version Needed if use_top_level_operator is True; otherwise it is ignored. Specifies the CASE package version of the top-level operator. No
operators_in_all_namespaces Defines whether the operators are visible in all namespaces or only in the specific namespace where they are needed. No True, False (default)
instances List of the instances that are going to be created (please see below). Yes

Warning

Although the properties use_case_files, use_top_level_operator and operators_in_all_namespaces are defined as optional, they are crucial to how the installation is executed. If any of them is omitted, the default value False is assumed. If none of them is specified, all are False, which means that the IBM Operator Catalog is used and only the operators needed for the specified instance types are installed in the specific namespace.
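As an illustration, the fragment below (values taken from the sample at the top of this section) explicitly switches the installation to CASE files with the top-level operator; if these properties were omitted, the IBM Operator Catalog would be used and only the operators for the listed instance types would be installed:

```yaml
cp4i:
- project: cp4i
  openshift_cluster_name: "{{ env_id }}"
  openshift_storage_name: nfs-rook-ceph
  cp4i_version: 2021.4.1
  accept_licenses: True
  use_case_files: True            # create operator catalogs from CASE files
  use_top_level_operator: True    # the top-level operator installs all other operators
  top_level_operator_channel: v1.5
  top_level_operator_case_version: 2.5.0
  operators_in_all_namespaces: True
  instances:
  - name: integration-navigator
    type: platform-navigator
    license: L-RJON-C7QG3S
    channel: v5.2
    case_version: 1.5.0
```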

Properties of the individual instances🔗

The instances property contains one or more instance definitions. Each instance must have a unique name; there can be more than one instance of the same type.

Naming convention for instance types🔗

For each instance definition, an instance type must be specified. The type names were chosen to match, as closely as possible, the naming convention used in the Platform Navigator user interface. The following table shows all existing types:

Instance type Description/Product name
platform-navigator Platform Navigator
api-management IBM API Connect
automation-assets Automation assets, a.k.a. Asset repo
enterprise-gateway IBM DataPower
event-endpoint-management Event endpoint manager - managing asynchronous APIs
event-streams IBM Event Streams - Kafka
high-speed-transfer-server Aspera HSTS
integration-dashboard IBM App Connect Integration Dashboard
integration-design IBM App Connect Designer
integration-tracing Operations Dashboard
messaging IBM MQ

Platform navigator🔗

The Platform Navigator is defined as one of the instance types. There is typically only one instance of it. The exception would be an installation in two or more completely separate namespaces (see the CP4I documentation). Special attention is paid to the installation of the Navigator. The Cloud Pak Deployer will install the Navigator instance first, before any other instance, and it will wait until the instance is ready (this could take up to 45 minutes).

When the installation is completed, you will find the admin user password in the status/cloud-paks/cp4i--cp4i-PN-access.txt file. Alternatively, you can obtain the password from the platform-auth-idp-credentials secret in the ibm-common-services namespace.

Property Description Sample value for 2021.4.1
name Unique name within the cluster using only lowercase alphanumerics and "-"
type It must be platform-navigator
license License ID L-RJON-C7QG3S
channel Subscription channel v5.2
case_version CASE version 1.5.0

API management (IBM API Connect)🔗

Property Description Sample value for 2021.4.1
name Unique name within the cluster using only lowercase alphanumerics and "-"
type It must be api-management
license License ID L-RJON-C7BJ42
version Version of API Connect 10.0.4.0
channel Subscription channel v2.4
case_version CASE version 3.0.5
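Assembled from the sample values in the table above, an API Connect instance entry for CP4I 2021.4.1 could look as follows (the instance name is illustrative):

```yaml
  instances:
  - name: api-management-01    # illustrative name
    type: api-management
    license: L-RJON-C7BJ42
    version: 10.0.4.0
    channel: v2.4
    case_version: 3.0.5
```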

Automation assets (Asset repo)🔗

Property Description Sample value for 2021.4.1
name Unique name within the cluster using only lowercase alphanumerics and "-"
type It must be automation-assets
license License ID L-PNAA-C68928
version Version of Asset repo 2021.4.1-2
channel Subscription channel v1.4
case_version CASE version 1.4.2

Enterprise gateway (IBM DataPower)🔗

Property Description Sample value for 2021.4.1
name Unique name within the cluster using only lowercase alphanumerics and "-"
type It must be enterprise-gateway
admin_password_secret The name of the secret where the admin password is stored. The default name is used if you leave it empty.
license License ID L-RJON-BYDR3Q
version Version of DataPower 10.0-cd
channel Subscription channel v1.5
case_version CASE version 1.5.0

Event endpoint management🔗

Property Description Sample value for 2021.4.1
name Unique name within the cluster using only lowercase alphanumerics and "-"
type It must be event-endpoint-management
license License ID L-RJON-C7BJ42
version Version of Event endpoint manager 10.0.4.0
channel Subscription channel v2.4
case_version CASE version 3.0.5

Event streams🔗

Property Description Sample value for 2021.4.1
name Unique name within the cluster using only lowercase alphanumerics and "-"
type It must be event-streams
version Version of Event streams 10.5.0
channel Subscription channel v2.5
case_version CASE version 1.5.2

High speed transfer server (Aspera HSTS)🔗

Property Description Sample value for 2021.4.1
name Unique name within the cluster using only lowercase alphanumerics and "-"
type It must be high-speed-transfer-server
aspera_key A license key for the Aspera software
redis_version Version of the Redis database 5.0.9
version Version of Aspera HSTS 4.0.0
channel Subscription channel v1.4
case_version CASE version 1.4.0

Integration dashboard (IBM App Connect Dashboard)🔗

Property Description Sample value for 2021.4.1
name Unique name within the cluster using only lowercase alphanumerics and "-"
type It must be integration-dashboard
license License ID L-APEH-C79J9U
version Version of IBM App Connect 12.0
channel Subscription channel v3.1
case_version CASE version 3.1.0

Integration design (IBM App Connect Designer)🔗

Property Description Sample value for 2021.4.1
name Unique name within the cluster using only lowercase alphanumerics and "-"
type It must be integration-design
license License ID L-KSBM-C87FU2
version Version of IBM App Connect 12.0
channel Subscription channel v3.1
case_version CASE version 3.1.0

Integration tracing (Operations Dashboard)🔗

Property Description Sample value for 2021.4.1
name Unique name within the cluster using only lowercase alphanumerics and "-"
type It must be integration-tracing
version Version of Integration tracing 2021.4.1-2
channel Subscription channel v2.5
case_version CASE version 2.5.2

Messaging (IBM MQ)🔗

Property Description Sample value for 2021.4.1
name Unique name within the cluster using only lowercase alphanumerics and "-"
type It must be messaging
queue_manager_name The name of the initial queue manager. Default is QUICKSTART
license License ID L-RJON-C7QG3S
version Version of IBM MQ 9.2.4.0-r1
channel Subscription channel v1.7
case_version CASE version 1.7.0
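Using the sample values from the table above, an IBM MQ instance entry for CP4I 2021.4.1 could look as follows (the instance name is illustrative):

```yaml
  instances:
  - name: messaging-01         # illustrative name
    type: messaging
    queue_manager_name: QUICKSTART
    license: L-RJON-C7QG3S
    version: 9.2.4.0-r1
    channel: v1.7
    case_version: 1.7.0
```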

cp4waiops🔗

Defines the Cloud Pak for Watson AIOps installation to be configured on the OpenShift cluster(s). The following instances can be installed by the deployer:

  • AI Manager
  • Event Manager
  • Turbonomic
  • Instana
  • Infrastructure Management
  • ELK stack (ElasticSearch, Logstash, Kibana)

Aside from the base install, the deployer can also install ready-to-use demo content for each of the instances.

cp4waiops:
+
+- project: cp4waiops
+  openshift_cluster_name: "{{ env_id }}"
+  openshift_storage_name: auto-storage
+  accept_licenses: False
+ 
+  instances:
+  - name: cp4waiops-aimanager
+    kind: AIManager
+    install: true
+  ...
+

Main properties🔗

The following properties are defined on the project level:

Property Description Mandatory Allowed values
project The name of the OpenShift project that will be created and used for the installation of the defined instances. Yes
openshift_cluster_name Dynamically defined from the env_id parameter during execution. No, only if multiple OpenShift clusters are defined Existing openshift cluster
openshift_storage_name Reference to the storage definition that exists in the openshift object (please see above). No, inferred from openshift->openshift_storage
accept_licenses Set to True to accept Cloud Pak licenses. Alternatively, the --accept-all-licenses flag can be used with the cp-deploy.sh command Yes True, False

Service instances🔗

The project that is specified at the cp4waiops level defines the OpenShift project into which the instances of each of the services will be installed. Below is a list of instance "kinds" that can be installed. For every "service instance" there can also be a "demo content" entry to prepare the demo content for the capability.

AI Manager🔗

  instances:
+  - name: cp4waiops-aimanager
+    kind: AIManager
+    install: true
+
+    waiops_size: small
+    custom_size_file: none
+    waiops_name: ibm-cp-watson-aiops
+    subscription_channel: v3.6
+    freeze_catalog: false
+
Property Description Mandatory Allowed values
name Unique name within the cluster using only lowercase alphanumerics and "-" Yes
kind Service kind to install Yes AIManager
install Must the service be installed? Yes true, false
waiops_size Size of the install Yes small, tall, custom
custom_size_file Name of the file holding the custom sizes if waiops_size is custom No
waiops_name Name of the CP4WAIOPS instance Yes
subscription_channel Subscription channel of the operator Yes
freeze_catalog Freeze the version of the catalog source? Yes false, true
case_install Must AI manager be installed via case files? No false, true
case_github_url GitHub URL to download case file Yes if case_install is true
case_name Name of the case file Yes if case_install is true
case_version Version of the case file to download Yes if case_install is true
case_inventory_setup Case file operation to run for this service Yes if case_install is true cpwaiopsSetup
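When AI Manager must be installed from CASE files rather than from the catalog source, the case_* properties in the table become mandatory. The fragment below is a sketch: the case_github_url, case_name and case_version values are placeholders and must be taken from the IBM CASE repository (see the link in the cp4i section above):

```yaml
  instances:
  - name: cp4waiops-aimanager
    kind: AIManager
    install: true
    waiops_size: small
    waiops_name: ibm-cp-watson-aiops
    subscription_channel: v3.6
    freeze_catalog: false
    case_install: true
    # Placeholder values below; look up the actual CASE details before use
    case_github_url: https://github.com/IBM/cloud-pak/raw/master/repo/case
    case_name: ibm-cp-waiops
    case_version: 3.6.0
    case_inventory_setup: cpwaiopsSetup
```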

AI Manager - Demo Content🔗

  instances:
+  - name: cp4waiops-aimanager-demo-content
+    kind: AIManagerDemoContent
+    install: true
+    ...
+
Property Description Mandatory Allowed values
name Unique name within the cluster using only lowercase alphanumerics and "-" Yes
kind Service kind to install Yes AIManagerDemoContent
install Must the content be installed? Yes true, false

See sample config for remainder of properties.

Event Manager🔗

  instances:
+  - name: cp4waiops-eventmanager
+    kind: EventManager
+    install: true
+    subscription_channel: v1.11
+    starting_csv: noi.v1.7.0
+    noi_version: 1.6.6
+
Property Description Mandatory Allowed values
name Unique name within the cluster using only lowercase alphanumerics and "-" Yes
kind Service kind to install Yes EventManager
install Must the service be installed? Yes true, false
subscription_channel Subscription channel of the operator Yes
starting_csv Starting ClusterServiceVersion (CSV) of the operator Yes
noi_version Version of noi Yes

Event Manager Demo Content🔗

  instances:
+  - name: cp4waiops-eventmanager
+    kind: EventManagerDemoContent
+    install: true
+
Property Description Mandatory Allowed values
name Unique name within the cluster using only lowercase alphanumerics and "-" Yes
kind Service kind to install Yes EventManagerDemoContent
install Must the content be installed? Yes true, false

Infrastructure Management🔗

  instances:
+  - name: cp4waiops-infrastructure-management
+    kind: InfrastructureManagement
+    install: false
+    subscription_channel: v3.5
+
Property Description Mandatory Allowed values
name Unique name within the cluster using only lowercase alphanumerics and "-" Yes
kind Service kind to install Yes InfrastructureManagement
install Must the service be installed? Yes true, false
subscription_channel Subscription channel of the operator Yes

ELK stack🔗

ElasticSearch, Logstash and Kibana stack.

  instances:
+  - name: cp4waiops-elk
+    kind: ELK
+    install: false
+
Property Description Mandatory Allowed values
name Unique name within the cluster using only lowercase alphanumerics and "-" Yes
kind Service kind to install Yes ELK
install Must the service be installed? Yes true, false

Instana🔗

  instances:
+  - name: cp4waiops-instana
+    kind: Instana
+    install: true
+    version: 241-0
+
+    sales_key: 'NONE'
+    agent_key: 'NONE'
+
+    instana_admin_user: "admin@instana.local"
+    #instana_admin_pass: 'P4ssw0rd!'
+    
+    install_agent: true
+
+    integrate_aimanager: true
+    #integrate_turbonomic: true
+
Property Description Mandatory Allowed values
name Unique name within the cluster using only lowercase alphanumerics and "-" Yes
kind Service kind to install Yes Instana
install Must the service be installed? Yes true, false
version Version of Instana to install No
sales_key License key to be configured No
agent_key License key for agent to be configured No
instana_admin_user Instana admin user to be configured Yes
instana_admin_pass Instana admin user password to be set (if different from global password) No
install_agent Must the Instana agent be installed? Yes true, false
integrate_aimanager Must Instana be integrated with AI Manager? Yes true, false
integrate_turbonomic Must Instana be integrated with Turbonomic? No true, false

Turbonomic🔗

  instances:
+  - name: cp4waiops-turbonomic
+    kind: Turbonomic
+    install: true
+    turbo_version: 8.7.0
+
Property Description Mandatory Allowed values
name Unique name within the cluster using only lowercase alphanumerics and "-" Yes
kind Service kind to install Yes Turbonomic
install Must the service be installed? Yes true, false
turbo_version Version of Turbonomic to install Yes

Turbonomic Demo Content🔗

  instances:
+  - name: cp4waiops-turbonomic-demo-content
+    kind: TurbonomicDemoContent
+    install: true
+    #turbo_admin_password: P4ssw0rd!
+    create_user: false
+    demo_user: demo
+    #turbo_demo_password: P4ssw0rd!
+
Property Description Mandatory Allowed values
name Unique name within the cluster using only lowercase alphanumerics and "-" Yes
kind Service kind to install Yes TurbonomicDemoContent
install Must the content be installed? Yes true, false
turbo_admin_password Turbonomic admin user password to be set (if different from global password) No
create_user Must the demo user be created? No false, true
demo_user Name of the demo user No
turbo_demo_password Demo user password if different from global password No

See sample config for remainder of properties.

cp4ba🔗

Defines the Cloud Pak for Business Automation installation to be configured on the OpenShift cluster(s).
See Cloud Pak for Business Automation for additional details.

---
+cp4ba:
+- project: cp4ba
+  collateral_project: cp4ba-collateral
+  openshift_cluster_name: "{{ env_id }}"
+  openshift_storage_name: auto-storage
+  accept_licenses: false
+  state: installed
+  cpfs_profile_size: small # Profile size which affect replicas and resources of Pods of CPFS as per https://www.ibm.com/docs/en/cpfs?topic=operator-hardware-requirements-recommendations-foundational-services
+
+  # Section for Cloud Pak for Business Automation itself
+  cp4ba:
+    # Set to false if you don't want to install (or remove) CP4BA
+    enabled: true # Currently always true
+    profile_size: small # Profile size which affect replicas and resources of Pods as per https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/latest?topic=pcmppd-system-requirements
+    patterns:
+      foundation: # Foundation pattern, always true - https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/latest?topic=deployment-capabilities-production-deployments#concept_c2l_1ks_fnb__foundation
+        optional_components:
+          bas: true # Business Automation Studio (BAS) 
+          bai: true # Business Automation Insights (BAI)
+          ae: true # Application Engine (AE)
+      decisions: # Operational Decision Manager (ODM) - https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/latest?topic=deployment-capabilities-production-deployments#concept_c2l_1ks_fnb__odm
+        enabled: true
+        optional_components:
+          decision_center: true # Decision Center (ODM)
+          decision_runner: true # Decision Runner (ODM)
+          decision_server_runtime: true # Decision Server (ODM)
+        # Additional customization for Operational Decision Management
+        # Contents of the following will be merged into ODM part of CP4BA CR yaml file. Arrays are overwritten.
+        cr_custom:
+          spec:
+            odm_configuration:
+              decisionCenter:
+                # Enable support for decision models
+                disabledDecisionModel: false
+      decisions_ads: # Automation Decision Services (ADS) - https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/latest?topic=deployment-capabilities-production-deployments#concept_c2l_1ks_fnb__ads
+        enabled: true
+        optional_components:
+          ads_designer: true # Designer (ADS)
+          ads_runtime: true # Runtime (ADS)
+      content: # FileNet Content Manager (FNCM) - https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/latest?topic=deployment-capabilities-production-deployments#concept_c2l_1ks_fnb__ecm
+        enabled: true
+        optional_components:
+          cmis: true # Content Management Interoperability Services (FNCM - CMIS)
+          css: true # Content Search Services (FNCM - CSS)
+          es: true # External Share (FNCM - ES)
+          tm: true # Task Manager (FNCM - TM)
+          ier: true # IBM Enterprise Records (FNCM - IER)
+          icc4sap: false # IBM Content Collector for SAP (FNCM - ICC4SAP) - Currently not implemented
+      application: # Business Automation Application (BAA) - https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/latest?topic=deployment-capabilities-production-deployments#concept_c2l_1ks_fnb__baa
+        enabled: true
+        optional_components:
+          app_designer: true # App Designer (BAA)
+          ae_data_persistence: true # App Engine data persistence (BAA)
+      document_processing: # Automation Document Processing (ADP) - https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/latest?topic=deployment-capabilities-production-deployments#concept_c2l_1ks_fnb__adp
+        enabled: true
+        optional_components: 
+          document_processing_designer: true # Designer (ADP)
+        # Additional customization for Automation Document Processing
+        # Contents of the following will be merged into ADP part of CP4BA CR yaml file. Arrays are overwritten.
+        cr_custom:
+          spec:
+            ca_configuration:
+              # GPU config as described on https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/latest?topic=resource-configuring-document-processing
+              deeplearning:
+                gpu_enabled: false
+                nodelabel_key: nvidia.com/gpu.present
+                nodelabel_value: "true"
+              # [Tech Preview] Deploy OCR Engine 2 (IOCR) for ADP - https://www.ibm.com/support/pages/extraction-language-technology-preview-feature-available-automation-document-processing-2301
+              ocrextraction:
+                use_iocr: none # Allowed values: "none" to uninstall, "all" or "auto" to install (these are aliases)
+      workflow: # Business Automation Workflow (BAW) - https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/latest?topic=deployment-capabilities-production-deployments#concept_c2l_1ks_fnb__baw
+        enabled: true
+        optional_components:
+          baw_authoring: true # Workflow Authoring (BAW) - always keep true if workflow pattern is chosen. BAW Runtime is not implemented.
+          kafka: true # Will install a kafka cluster and enable kafka service for workflow authoring.
+  
+  # Section for IBM Process mining
+  pm:
+    # Set to false if you don't want to install (or remove) Process Mining
+    enabled: true
+    # Additional customization for Process Mining
+    # Contents of the following will be merged into PM CR yaml file. Arrays are overwritten.
+    cr_custom:
+      spec:
+        processmining:
+          storage:
+            # Disables redis to spare resources as per https://www.ibm.com/docs/en/process-mining/latest?topic=configurations-custom-resource-definition
+            redis:
+              install: false  
+
+  # Section for IBM Robotic Process Automation
+  rpa:
+    # Set to false if you don't want to install (or remove) RPA
+    enabled: true
+    # Additional customization for Robotic Process Automation
+    # Contents of the following will be merged into RPA CR yaml file. Arrays are overwritten.
+    cr_custom:
+      spec:
+        # Configures the NLP provider component of IBM RPA. You can disable it by specifying 0. https://www.ibm.com/docs/en/rpa/latest?topic=platform-configuring-rpa-custom-resources#basic-setup
+        nlp:
+          replicas: 1
+
+  # Set to false if you don't want to install (or remove) CloudBeaver (PostgreSQL, DB2, MSSQL UI)
+  cloudbeaver_enabled: true
+
+  # Set to false if you don't want to install (or remove) Roundcube
+  roundcube_enabled: true
+
+  # Set to false if you don't want to install (or remove) Cerebro
+  cerebro_enabled: true
+
+  # Set to false if you don't want to install (or remove) AKHQ
+  akhq_enabled: true
+
+  # Set to false if you don't want to install (or remove) Mongo Express
+  mongo_express_enabled: true
+
+  # Set to false if you don't want to install (or remove) phpLDAPAdmin
+  phpldapadmin_enabled: true
+

Main properties🔗

The following properties are defined on the project level.

Property Description Mandatory Allowed values
project The name of the OpenShift project that will be created and used for the installation of the defined instances. Yes Valid OCP project name
collateral_project The name of the OpenShift project that will be created and used for the installation of all collateral (prerequisites and extras). Yes Valid OCP project name
openshift_cluster_name Dynamically defined from the env_id parameter during execution. No, only if multiple OpenShift clusters are defined Existing OpenShift cluster
openshift_storage_name Reference to the storage definition that exists in the openshift object (please see above). No, inferred from openshift->openshift_storage
accept_licenses Set to true to accept Cloud Pak licenses. Alternatively, the --accept-all-licenses flag can be used with the cp-deploy.sh command. Yes true, false
state Set to installed to install enabled capabilities, set to removed to remove enabled capabilities. Yes installed, removed
cpfs_profile_size Profile size which affects replicas and resources of CPFS Pods as per https://www.ibm.com/docs/en/cpfs?topic=operator-hardware-requirements-recommendations-foundational-services Yes starterset, small, medium, large
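As an illustration of how these main properties fit together, a minimal project-level sketch (all values are example assumptions, not defaults):

```yaml
# Illustrative project-level properties; adjust names and sizes to your environment
project: cp4ba                        # OpenShift project for the Cloud Pak capabilities
collateral_project: cp4ba-collateral  # OpenShift project for prerequisites and extras
openshift_storage_name: ocs-storage   # must reference a storage definition in the openshift object
accept_licenses: true                 # or pass --accept-all-licenses to cp-deploy.sh
state: installed                      # set to removed to uninstall enabled capabilities
cpfs_profile_size: small              # starterset, small, medium or large
```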

Cloud Pak for Business Automation properties🔗

Used to configure CP4BA.
Placed in cp4ba key on the project level.

Property Description Mandatory Allowed values
enabled Set to true to enable CP4BA. Currently always true. Yes true
profile_size Profile size which affects replicas and resources of Pods as per https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/latest?topic=pcmppd-system-requirements Yes small, medium, large
patterns Section where CP4BA patterns are configured. Please make sure to select all patterns that are needed as dependencies. Dependencies can be determined from the documentation at https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/latest?topic=deployment-capabilities-production-deployments Yes Object - see details below
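To sketch how pattern dependencies are expressed in the configuration (the specific dependency choices below are assumptions; verify them against the IBM documentation linked above):

```yaml
# Illustrative pattern selection; when enabling workflow, also enable the
# foundation components it depends on
cp4ba:
  patterns:
    foundation:              # always configured
      optional_components:
        bas: true            # Business Automation Studio
        bai: true            # Business Automation Insights
        ae: true             # Application Engine
    workflow:
      enabled: true
      optional_components:
        baw_authoring: true  # keep true when the workflow pattern is chosen
        kafka: true
```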

Foundation pattern properties🔗

Always configure in CP4BA.
https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/latest?topic=deployment-capabilities-production-deployments#concept_c2l_1ks_fnb__foundation
Placed in cp4ba.patterns.foundation key.

Property Description Mandatory Allowed values
optional_components Sub object for definition of optional components for pattern. Yes Object - specific to each pattern
optional_components.bas Set to true to enable Business Automation Studio Yes true, false
optional_components.bai Set to true to enable Business Automation Insights Yes true, false
optional_components.ae Set to true to enable Application Engine Yes true, false

Decisions pattern properties🔗

Used to configure Operation Decision Manager.
https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/latest?topic=deployment-capabilities-production-deployments#concept_c2l_1ks_fnb__odm
Placed in cp4ba.patterns.decisions key.

Property Description Mandatory Allowed values
enabled Set to true to enable decisions pattern. Yes true, false
optional_components Sub object for definition of optional components for pattern. Yes Object - specific to each pattern
optional_components.decision_center Set to true to enable Decision Center Yes true, false
optional_components.decision_runner Set to true to enable Decision Runner Yes true, false
optional_components.decision_server_runtime Set to true to enable Decision Server Yes true, false
cr_custom Additional customization for Operational Decision Management. Contents will be merged into ODM part of CP4BA CR yaml file. Arrays are overwritten. No Object
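To illustrate the cr_custom merge behavior: maps are merged into the generated CP4BA CR while arrays are overwritten wholesale, so any array must be listed in full. A hypothetical fragment (the odm_configuration keys are assumptions; check the ODM CR reference for the actual spec):

```yaml
# Hypothetical cr_custom fragment for the decisions pattern
cr_custom:
  spec:
    odm_configuration:       # assumed key, see the ODM CR documentation
      decisionCenter:
        replicaCount: 2      # merged into the generated CR
```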

Decisions ADS pattern properties🔗

Used to configure Automation Decision Services.
https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/latest?topic=deployment-capabilities-production-deployments#concept_c2l_1ks_fnb__ads
Placed in cp4ba.patterns.decisions_ads key.

Property Description Mandatory Allowed values
enabled Set to true to enable decisions_ads pattern. Yes true, false
optional_components Sub object for definition of optional components for pattern. Yes Object - specific to each pattern
optional_components.ads_designer Set to true to enable Designer Yes true, false
optional_components.ads_runtime Set to true to enable Runtime Yes true, false

Content pattern properties🔗

Used to configure FileNet Content Manager.
https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/latest?topic=deployment-capabilities-production-deployments#concept_c2l_1ks_fnb__ecm
Placed in cp4ba.patterns.content key.

Property Description Mandatory Allowed values
enabled Set to true to enable content pattern. Yes true, false
optional_components Sub object for definition of optional components for pattern. Yes Object - specific to each pattern
optional_components.cmis Set to true to enable CMIS Yes true, false
optional_components.css Set to true to enable Content Search Services Yes true, false
optional_components.es Set to true to enable External Share. Currently not functional. Yes true, false
optional_components.tm Set to true to enable Task Manager Yes true, false
optional_components.ier Set to true to enable IBM Enterprise Records Yes true, false
optional_components.icc4sap Set to true to enable IBM Content Collector for SAP. Currently not functional. Always false. Yes false

Application pattern properties🔗

Used to configure Business Automation Application.
https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/latest?topic=deployment-capabilities-production-deployments#concept_c2l_1ks_fnb__baa
Placed in cp4ba.patterns.application key.

Property Description Mandatory Allowed values
enabled Set to true to enable application pattern. Yes true, false
optional_components Sub object for definition of optional components for pattern. Yes Object - specific to each pattern
optional_components.app_designer Set to true to enable Application Designer Yes true, false
optional_components.ae_data_persistence Set to true to enable App Engine data persistence Yes true, false

Document Processing pattern properties🔗

Used to configure Automation Document Processing.
https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/latest?topic=deployment-capabilities-production-deployments#concept_c2l_1ks_fnb__baa
Placed in cp4ba.patterns.document_processing key.

Property Description Mandatory Allowed values
enabled Set to true to enable document_processing pattern. Yes true, false
optional_components Sub object for definition of optional components for pattern. Yes Object - specific to each pattern
optional_components.document_processing_designer Set to true to enable Designer Yes true
cr_custom Additional customization for Automation Document Processing. Contents will be merged into ADP part of CP4BA CR yaml file. Arrays are overwritten. No Object

Workflow pattern properties🔗

Used to configure Business Automation Workflow.
https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/latest?topic=deployment-capabilities-production-deployments#concept_c2l_1ks_fnb__baw
Placed in cp4ba.patterns.workflow key.

Property Description Mandatory Allowed values
enabled Set to true to enable workflow pattern. Yes true, false
optional_components Sub object for definition of optional components for pattern. Yes Object - specific to each pattern
optional_components.baw_authoring Set to true to enable Workflow Authoring. Currently always true. Yes true
optional_components.kafka Set to true to install a kafka cluster and enable kafka service for workflow authoring. Yes true, false

Process Mining properties🔗

Used to configure IBM Process Mining.
Placed in pm key on the project level.

Property Description Mandatory Allowed values
enabled Set to true to enable process mining. Yes true, false
cr_custom Additional customization for Process Mining. Contents will be merged into PM CR yaml file. Arrays are overwritten. No Object

Robotic Process Automation properties🔗

Used to configure IBM Robotic Process Automation.
Placed in rpa key on the project level.

Property Description Mandatory Allowed values
enabled Set to true to enable rpa. Yes true, false
cr_custom Additional customization for Robotic Process Automation. Contents will be merged into RPA CR yaml file. Arrays are overwritten. No Object

Other properties🔗

Used to configure extra UIs.
The following properties are defined on the project level.

Property Description Mandatory Allowed values
cloudbeaver_enabled Set to true to enable CloudBeaver (PostgreSQL, DB2, MSSQL UI). Yes true, false
roundcube_enabled Set to true to enable Roundcube. Client for mail. Yes true, false
cerebro_enabled Set to true to enable Cerebro. Client for ElasticSearch in CP4BA. Yes true, false
akhq_enabled Set to true to enable AKHQ. Client for Kafka in CP4BA. Yes true, false
mongo_express_enabled Set to true to enable Mongo Express. Client for MongoDB. Yes true, false
phpldapadmin_enabled Set to true to enable phpLDAPadmin. Client for OpenLDAP. Yes true, false
\ No newline at end of file diff --git a/30-reference/configuration/cp4ba/index.html b/30-reference/configuration/cp4ba/index.html new file mode 100644 index 000000000..6e37d686f --- /dev/null +++ b/30-reference/configuration/cp4ba/index.html @@ -0,0 +1 @@ + Cloud Pak for Business Automation - Cloud Pak Deployer
Skip to content

Cloud Pak for Business Automation🔗

Contains CP4BA version 23.0.2 iFix 2.
RPA and Process Mining are currently not deployed due to a discrepancy in the Cloud Pak Foundational Services version.
Contains IPM version 1.14.3 and RPA version 23.0.14.

Disclaimer ✋🔗

This is not official IBM documentation.
Absolutely no warranties, no support, no responsibility for anything.
Use it at your own risk and always follow the official IBM documentation.
It is always your responsibility to make sure you are license compliant when using this repository to install IBM Cloud Pak for Business Automation.

Please do not hesitate to create an issue here if needed. Your feedback is appreciated.

Not for production use (neither dev, test, nor prod environments). Suitable for demo and PoC environments - albeit with a Production deployment.

Important - Keep in mind that this deployment contains capabilities (the ones which are not bundled with CP4BA) that are not eligible to run on Worker Nodes covered by CP4BA OCP Restricted licenses. More info at https://www.ibm.com/docs/en/cloud-paks/1.0?topic=clusters-restricted-openshift-entitlement.

Documentation base 📝🔗

Deploying CP4BA is based on official documentation which is located at https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/latest.

Deployment of other parts is also based on respective official documentations.

Benefits 🚀🔗

  • Automatic deployment of the whole platform, where you need to take care of almost no prerequisites
  • The OCP Ingress certificate is used for Routes, so there is only one certificate you need to trust on your local machine to trust all URLs of the whole platform
  • A trusted certificate in the browser also enables you to save passwords
  • Wherever possible, a common admin user cpadmin with an adjustable password is used, so you don't need to remember multiple credentials to access the platform (convenience also comes with responsibility - don't expose your platform to the whole world)
  • The whole platform runs in containers, so you don't need to manually prepare anything on traditional VMs or take care of them and their prerequisites
  • Many otherwise manual post-deployment steps have been automated
  • Pre-integrated and automatically connected extras are deployed in the platform for easier access/management/troubleshooting
  • You get a working Production deployment which you can use as a reference for further custom deployments

General information 📢🔗

What is not included:

  • ICCs - not covered.
  • FNCM External Share - currently not supported with ZEN & IAM as per the FNCM limitations
  • Asset Repository - it is more part of CP4I.
  • Workflow Server and Workstream Services - this is a dev deployment. BAW Authoring and (BAW + IAWS) are mutually exclusive in a single project.
  • ADP Runtime deployment - this is a dev deployment.

What is in the package 📦🔗

When you perform a full deployment, you get the full CP4BA platform as seen in the picture. You can also omit some capabilities - this is covered later in this document.

More details about each section of the picture follow below it.

images/cp4ba-installation.png

Extras section🔗

Contains extra software which makes working with the platform even easier.

  • phpLDAPadmin - Web UI for OpenLDAP directory making it easier to admin and troubleshoot the LDAP.
  • Gitea - Contains a Git server with a web UI and is used by ADS and ADP for project sharing and publishing. Organizations for ADS and ADP are automatically created. Gitea is connected to OpenLDAP for authentication and authorization.
  • Nexus - Repository manager which contains pushed ADS java libraries needed for custom development and also for publishing custom ADS jars. Nexus is connected to OpenLDAP for authentication and authorization.
  • Roundcube - Web UI for included Mail server to be able to browse incoming emails.
  • Cerebro - Web UI Elasticsearch browser automatically connected to the ES instance deployed with CP4BA.
  • AKHQ - Web UI Kafka browser automatically connected to the Kafka instance deployed with CP4BA.
  • Kibana - Web UI Elasticsearch dashboard tool automatically connected to the ES instance deployed with CP4BA.
  • Mail server - For various mail integrations e.g. from BAN, BAW and RPA.
  • Mongo Express - Web UI for the MongoDB databases of CP4BA and Process Mining, making it easier to troubleshoot the DBs.
  • CloudBeaver - Web UI for PostgreSQL and MSSQL databases, making it easier to admin and troubleshoot the DBs.

CP4BA (Cloud Pak for Business Automation) section🔗

CP4BA capabilities🔗

CP4BA capabilities are in purple color.

More info for CP4BA capabilities is available in official docs at https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/latest.

More specifically in overview of patterns at https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/latest?topic=deployment-capabilities-production-deployments.

Pink color is used for CPFS dedicated capabilities.

More info for CPFS dedicated capabilities is available in official docs at https://www.ibm.com/docs/en/cloud-paks/foundational-services/latest.

Magenta color is used for additional capabilities.

More info for Process Mining is available in official docs at https://www.ibm.com/docs/en/process-mining/latest.

More info for RPA is available in official docs at https://www.ibm.com/docs/en/rpa/latest.

Assets are currently not deployed.

CPFS (Cloud Pak Foundational Services) section🔗

Contains services which are reused by Cloud Paks.

More info available in official docs at https://www.ibm.com/docs/en/cpfs.

  • License metering - Tracks license usage.
  • Certificate Manager - Provides certificate handling.

Pre-requisites section🔗

Contains prerequisites for the whole platform.

  • PostgreSQL - Database storage for the majority of capabilities.
  • OpenLDAP - Directory solution for users and groups definition.
  • MSSQL server - Database storage for RPA server.
  • MongoDB - Database storage for ADS and Process Mining.

Environments used for installation 💻🔗

With proper sizing of the cluster and provided RWX File and RWO Block Storage Classes, CP4BA deployed with the Deployer should work on any OpenShift 4.12 cluster whose Worker Nodes have a total of 60 CPUs and 128 GB of memory.

Automated post-deployment tasks ✅🔗

For your convenience, the following post-deployment setup tasks have been automated:

Post installation steps ➡️🔗

CP4BA
Review and perform the post-deploy manual steps for CP4BA as specified in Project cloud-pak-deployer in ConfigMap cp4ba-postdeploy in the postdeploy.md file. It is best to copy the contents and open them in a Markdown editor such as VS Code.

RPA
Review and perform the post-deploy manual steps for RPA as specified in Project cloud-pak-deployer in ConfigMap cp4ba-rpa-postdeploy in the postdeploy.md file. It is best to copy the contents and open them in a Markdown editor such as VS Code.

Process Mining
Review and perform the post-deploy manual steps for IPM as specified in Project cloud-pak-deployer in ConfigMap cp4ba-pm-postdeploy in the postdeploy.md file. It is best to copy the contents and open them in a Markdown editor such as VS Code.

Usage & operations 📇🔗

Endpoints, access info and other useful information is available after installation in Project cloud-pak-deployer in ConfigMap cp4ba-usage in the usage.md file. It is best to copy the contents and open them in a Markdown editor such as VS Code.

\ No newline at end of file diff --git a/30-reference/configuration/cp4d-assets/index.html b/30-reference/configuration/cp4d-assets/index.html new file mode 100644 index 000000000..a9265c93f --- /dev/null +++ b/30-reference/configuration/cp4d-assets/index.html @@ -0,0 +1,103 @@ + Assets - Cloud Pak Deployer
Skip to content

Cloud Pak Asset configuration🔗

The Cloud Pak Deployer can implement demo assets and accelerators as part of the deployment process to standardize standing up fully-featured demo environments, or to test patches or new versions of the Cloud Pak using pre-defined assets.

Node changes for ROKS and Satellite clusters🔗

If you put a script named apply-custom-node-settings.sh in the CONFIG_DIR/assets directory, it will be run as part of applying the node settings. This way you can override the existing node settings applied by the deployer or update the compute nodes with new settings. For more information regarding the apply-custom-node-settings.sh script, go to Prepare OpenShift cluster on IBM Cloud and IBM Cloud Satellite.

cp4d_asset🔗

A cp4d_asset entry defines one or more assets to be deployed for a specific Cloud Pak for Data instance (OpenShift project). In the configuration, a directory relative to the configuration directory (CONFIG_DIR) is specified. For example, if the directory where the configuration is stored is $HOME/cpd-config/sample and you specify assets as the asset directory, all assets under $HOME/cpd-config/sample/assets are processed.

You can create one or more subdirectories under the specified location, each holding an asset to be deployed. The deployer finds all cp4d-asset.sh scripts and cp4d-asset.yaml Ansible task files and runs them.

The following runtime attributes will be set prior to running the shell script or the Ansible task:

  • If the Cloud Pak for Data instance has the Common Core Services (CCS) custom resource installed, cpdctl is configured for the current Cloud Pak for Data instance and the current context is set to the admin user of the instance. This means you can run all cpdctl commands without first having to log in to Cloud Pak for Data.
  • The current working directory is set to the directory holding the cp4d-asset.sh script.
  • When running the cp4d-asset.sh shell script, the following environment variables are available:
    - CP4D_URL: Cloud Pak for Data URL
    - CP4D_ADMIN_PASSWORD: Cloud Pak for Data admin password
    - CP4D_OCP_PROJECT: OpenShift project that holds the Cloud Pak for Data instance
    - KUBECONFIG: OpenShift configuration file that allows you to run oc commands for the cluster

cp4d_asset:
+- name: sample-asset
+  project: cpd
+  asset_location: cp4d-assets
+

Property explanation🔗

Property Description Mandatory Allowed values
name Name of the asset to be deployed. You can specify as many assets as you want. Yes
project Name of OpenShift project of the matching cp4d entry. The cp4d project must exist. Yes
asset_location Directory holding the asset(s). This is a directory relative to the config directory (CONFIG_DIR) that was passed to the deployer Yes
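In addition to cp4d-asset.sh shell scripts, the deployer also runs cp4d-asset.yaml Ansible task files found under the asset location. A minimal sketch of such a task file (the task content itself is an assumption; it relies only on cpdctl already being configured for the instance, as described above):

```yaml
# Hypothetical cp4d-asset.yaml - Ansible tasks executed by the deployer;
# cpdctl is already logged in to the current Cloud Pak for Data instance
- name: List Cloud Pak for Data projects
  ansible.builtin.command: cpdctl project list --output json
  register: _cpd_projects
  changed_when: false

- name: Show the project list
  ansible.builtin.debug:
    msg: "{{ _cpd_projects.stdout }}"
```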

Asset example🔗

Below is an example asset that implements the Customer Attrition industry accelerator, which can be found here: https://github.com/IBM/Industry-Accelerators/blob/master/CPD%204.0.1.0/utilities-customer-attrition-prediction-industry-accelerator.tar.gz

To implement:

  • Download the tar.gz file to the cp4d-assets directory in the specified configuration directory
  • Create the cp4d-asset.sh shell script (example below)
  • Add a cp4d_asset entry to the Cloud Pak for Data config file in the config directory (or in any other file with extension .yaml)

cp4d-asset.sh shell script:

#!/bin/bash
+SCRIPT_DIR=$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )
+
+# Function to retrieve project by name
+function retrieve_project {
+    project_name=$1
+
+    # First check if project already exists
+    project_id=$(cpdctl project list \
+        --output json | \
+        jq -r --arg project_name $project_name \
+        'if .total_results==0 then "" else .resources[] | select(.entity.name == $project_name) | .metadata.guid end')
+
+    echo $project_id
+}
+
+# Function to create a project
+function create_project {
+    project_name=$1
+
+    retrieve_project $project_name
+
+    if [ "$project_id" != "" ];then
+        echo "Project $project_name already exists"
+        return
+    else
+        echo "Creating project $project_name"
+        storage_id=$(uuidgen)
+        storage=$(jq --arg storage_id $storage_id '. | .guid=$storage_id | .type="assetfiles"' <<< '{}')
+        cpdctl project create --name $project_name --storage "$storage"
+    fi
+
+    # Find project_id to return
+    project_id=$(cpdctl project list \
+        --output json | \
+        jq -r --arg project_name $project_name \
+        'if .total_results==0 then "" else .resources[] | select(.entity.name == $project_name) | .metadata.guid end')
+}
+
+# Function to import a project
+function import_project {
+    project_id=$1
+    zip_file=$2
+    import_id=$(cpdctl asset import start \
+        --project-id $project_id --import-file $zip_file \
+        --output json --jmes-query "metadata.id" --raw-output)
+    
+    cpdctl asset import get --project-id $project_id --import-id $import_id --output json
+
+}
+
+# Function to run jobs
+function run_jobs {
+    project_id=$1
+    for job in $(cpdctl job list --project-id $project_id \
+        --output json | jq -r '.results[] | .metadata.asset_id');do
+        cpdctl job run create --project-id $project_id --job-id $job --job-run "{}"
+    done
+}
+
+#
+# Start of the asset code
+#
+
+# Unpack the utilities-customer-attrition-prediction-industry-accelerator directory
+rm -rf /tmp/utilities-customer-attrition-prediction-industry-accelerator
+tar xzf utilities-customer-attrition-prediction-industry-accelerator.tar.gz -C /tmp
+asset_dir=/tmp/customer-attrition-prediction-industry-accelerator
+
+# Change to the asset directory
+pushd ${asset_dir} > /dev/null
+
+# Log on to Cloud Pak for Data with the admin user
+cp4d_token=$(curl -s -k -H 'Content-Type: application/json' -X POST $CP4D_URL/icp4d-api/v1/authorize -d '{"username": "admin", "password": "'$CP4D_ADMIN_PASSWORD'"}' | jq -r .token)
+
+# Import categories
+curl -s -k -H 'accept: application/json' -H "Authorization: Bearer ${cp4d_token}" -H "content-type: multipart/form-data" -X POST $CP4D_URL/v3/governance_artifact_types/category/import?merge_option=all -F "file=@./utilities-customer-attrition-prediction-glossary-categories.csv;type=text/csv"
+
+# Import glossary terms
+curl -s -k -H 'accept: application/json' -H "Authorization: Bearer ${cp4d_token}" -H "content-type: multipart/form-data" -X POST $CP4D_URL/v3/governance_artifact_types/glossary_term/import?merge_option=all -F "file=@./utilities-customer-attrition-prediction-glossary-terms.csv;type=text/csv"
+
+# Check if customer-attrition project already exists. If so, do nothing
+project_id=$(retrieve_project "customer-attrition")
+
+# If project does not exist, import it and run jobs
+if [ "$project_id" == "" ];then
+    create_project "customer-attrition"
+    import_project $project_id \
+        /tmp/utilities-customer-attrition-prediction-industry-accelerator/utilities-customer-attrition-prediction-analytics-project.zip
+    run_jobs $project_id
+else
+    echo "Skipping deployment of CP4D asset, project customer-attrition already exists"
+fi
+
+# Return to original directory
+popd > /dev/null
+
+exit 0
+

\ No newline at end of file diff --git a/30-reference/configuration/cp4d-cartridges/index.html b/30-reference/configuration/cp4d-cartridges/index.html new file mode 100644 index 000000000..92f76edfe --- /dev/null +++ b/30-reference/configuration/cp4d-cartridges/index.html @@ -0,0 +1,31 @@ + Cartridges - Cloud Pak Deployer
Skip to content

Cloud Pak for Data cartridges🔗

Defines the services (cartridges) which must be installed into the Cloud Pak for Data instances. The cartridges will be configured with the storage class defined at the Cloud Pak for Data object level. For each cartridge you can specify whether it must be installed or removed by specifying the state. If a cartridge is installed and the state is changed to removed, the cartridge and all of its instances are removed by the deployer when it is run.

An example Cloud Pak for Data object with cartridges is below:

cp4d:
+- project: cpd-instance
+  cp4d_version: 4.6.3
+  sequential_install: False
+
+  cartridges:
+  - name: cpfs
+
+  - name: cpd_platform
+
+  - name: db2oltp
+    size: small
+    instances:
+    - name: db2-instance
+      metadata_size_gb: 20
+      data_size_gb: 20
+      backup_size_gb: 20
+      transactionlog_size_gb: 20
+    state: installed
+
+  - name: wkc
+    size: small
+    state: removed
+
+  - name: wml
+    size: small
+    state: installed
+
+  - name: ws
+    state: installed
+

When run, the deployer installs the Db2 OLTP (db2oltp), Watson Machine Learning (wml) and Watson Studio (ws) cartridges. If the Watson Knowledge Catalog (wkc) is installed in the cpd-instance OpenShift project, it is removed.

After the deployer installs Db2 OLTP, a new Db2 instance is created with the specified attributes.

Cloud Pak for Data cartridges🔗

cp4d.cartridges🔗

This is a list of cartridges that will be installed in the Cloud Pak for Data instance. Every cartridge is identified by its name.

Some cartridges may require additional information to correctly install or to create an instance for the cartridge. Below you will find a list of all tested Cloud Pak for Data cartridges and their specific properties.

Properties for all cartridges🔗

Property Description Mandatory Allowed values
name Name of the cartridge Yes
state Whether the cartridge must be installed or removed. If not specified, the cartridge will be installed No installed, removed
installation_options Record of properties that will be applied to the spec of the OpenShift Custom Resource No

Cartridge cpfs or cp-foundation🔗

Defines the Cloud Pak Foundational Services (fka Common Services) which are required for all Cloud Pak for Data installations. Cloud Pak for Data Foundational Services provide functionalities around certificate management, license service, identity and access management (IAM), etc.

This cartridge is mandatory for every Cloud Pak for Data instance.

Cartridge cpd_platform or lite🔗

Defines the Cloud Pak for Data platform operator (fka "lite") which installs the base services needed to operate Cloud Pak for Data, such as the Zen metastore, Zen watchdog and the user interface.

This cartridge is mandatory for every Cloud Pak for Data instance.

Cartridge wkc🔗

Manages the Watson Knowledge Catalog installation for the Cloud Pak for Data instance.

Additional properties for cartridge wkc🔗

Property Description Mandatory Allowed values
size Scale configuration of the cartridge No small (default), medium, large
installation_options.install_wkc_core_only Install only the core of WKC? No True, False (default)
installation_options.enableKnowledgeGraph Enable the knowledge graph for business lineage? No True, False (default)
installation_options.enableDataQuality Enable data quality for WKC? No True, False (default)
installation_options.enableMANTA Enable MANTA? No True, False (default)
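Putting the above together, an example wkc cartridge entry with installation_options (the values are illustrative; the options are applied to the spec of the WKC Custom Resource):

```yaml
# Illustrative wkc cartridge entry
cartridges:
- name: wkc
  size: small
  state: installed
  installation_options:
    install_wkc_core_only: False
    enableKnowledgeGraph: True
    enableDataQuality: True
    enableMANTA: False
```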
\ No newline at end of file diff --git a/30-reference/configuration/cp4d-connections/index.html b/30-reference/configuration/cp4d-connections/index.html new file mode 100644 index 000000000..7088735e8 --- /dev/null +++ b/30-reference/configuration/cp4d-connections/index.html @@ -0,0 +1,20 @@ + Platform connections - Cloud Pak Deployer
Skip to content

Cloud Pak for Data platform connections🔗

Cloud Pak for Data platform connection - cp4d_connection🔗

The cp4d_connection object can be used to create Global Platform connections.

cp4d_connection:
+- name: connection_name                                 # Name of the connection, must be unique
+  type: database                                        # Type, currently supported: [database]
+  cp4d_instance: cpd                                    # CP4D instance on which the connection must be created
+  openshift_cluster_name: cluster_name                  # OpenShift cluster name on which the cp4d_instance is deployed
+  database_type: db2                                    # Type of connection
+  database_hostname: hostname                           # Hostname of the connection
+  database_port: 30556                                  # Port of the connection
+  database_name: bludb                                  # Database name of the connection
+  database_port_ssl: true                               # enable ssl flag
+  database_credentials_username: 77066f69               # Username of the datasource
+  database_credentials_password_secret: db-credentials  # Vault lookup name to contain the password
+  database_ssl_certificate_secret: db-ssl-cert          # Vault lookup name to contain the SSL certificate
+

Cloud Pak for Data backup and restore platform connections - cp4d_backup_restore_connections🔗

The cp4d_backup_restore_connections can be used to backup all current configured Global Platform connections, which are either created by the Cloud Pak Deployer or added manually. The backup is stored in the status/cp4d/exports folder as a json file.

A backup file can be used to restore Global Platform connections. A flag can be used to indicate that the restore of a connection is skipped if a Global Platform connection with the same name already exists.

Using the Cloud Pak Deployer cp4d_backup_restore_connections capability implements the following:

  • Connect to the IBM Cloud Pak for Data instance specified using cp4d_instance and openshift_cluster_name
  • If connections_backup_file is specified, export all Global Platform connections to the specified file in the status/cp4d/export/connections folder
  • If connections_restore_file is specified, load the file and restore the Global Platform connections
  • The connections_restore_overwrite flag (true/false) indicates whether a Global Platform Connection will be replaced if one with the same name already exists

cp4d_backup_restore_connections:
+- cp4d_instance: cpd
+  openshift_cluster_name: "{{ env_id }}"
+  connections_backup_file: "{{ env_id }}_cpd_connections.json"
+  connections_restore_file: "{{ env_id }}_cpd_connection.json"
+  connections_restore_overwrite: false
+
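For example, to only restore previously backed-up connections and replace any Global Platform connection that already exists with the same name, an entry could look like this (the file name is illustrative):

```yaml
cp4d_backup_restore_connections:
- cp4d_instance: cpd
  openshift_cluster_name: "{{ env_id }}"
  connections_restore_file: "{{ env_id }}_cpd_connections.json"
  connections_restore_overwrite: true
```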
\ No newline at end of file diff --git a/30-reference/configuration/cp4d-instances/index.html b/30-reference/configuration/cp4d-instances/index.html new file mode 100644 index 000000000..4748b9d0a --- /dev/null +++ b/30-reference/configuration/cp4d-instances/index.html @@ -0,0 +1,107 @@ + Instances - Cloud Pak Deployer

Cloud Pak for Data instances🔗

Manage Cloud Pak for Data instances🔗

Some cartridges have the ability to create one or more instances to run an isolated installation of the cartridge. If instances have been configured for the cartridge, the deployer can manage creating and deleting the instances.

The following Cloud Pak for Data cartridges are currently supported for managing instances:
- analytics-engine
- datastage-ent-plus
- db2
- dv
- ca
- edb_cp4d
- openpages

Analytics engine powered by Apache Spark Instances🔗

Analytics Engine instances can be defined by adding the instances section to the cartridges entry of cartridge analytics-engine. The following example shows the configuration to define an instance.

cp4d:
+- project: cpd-instance
+  openshift_cluster_name: "{{ env_id }}"
+...
+  cartridges:
+  - name: analytics-engine
+    size: small
+    state: installed
+    instances:
+    - name: analyticsengine-instance
+      storage_size_gb: 50
+
Property Description Mandatory Allowed Values
name Name of the instance Yes
storage_size_gb Size of the storage allocated to the instance Yes numeric value

DataStage instances🔗

DataStage instances can be defined by adding the instances section to the cartridges entry of cartridge datastage-ent-plus. The following example shows the configuration to define an instance.

DataStage, upon deployment, always creates a default instance called ds-px-default. This instance cannot be configured in the instances section.

cp4d:
+- project: cpd-instance
+  openshift_cluster_name: "{{ env_id }}"
+...
+  cartridges:
+  - name: datastage-ent-plus
+    state: installed
+    instances:
+    - name: ds-instance
+      # Optional settings
+      description: "datastage ds-instance"
+      size: medium
+      storage_class: efs-nfs-client
+      storage_size_gb: 60
+      # Optional Custom Scale options
+      scale_px_runtime:
+        replicas: 2
+        cpu_request: 500m
+        cpu_limit: 2
+        memory_request: 2Gi
+        memory_limit: 4Gi
+      scale_px_compute:
+        replicas: 2
+        cpu_request: 1
+        cpu_limit: 3
+        memory_request: 4Gi
+        memory_limit: 12Gi   
+
Property Description Mandatory Allowed Values
name Name of the instance Yes
description Description of the instance No
size Size of the DataStage instance No small (default), medium, large
storage_class Override the default storage class No
storage_size_gb Storage size allocated to the DataStage instance No numeric

Optionally, the default px_runtime and px_compute instances of the DataStage instance can be tweaked. Both scale_px_runtime and scale_px_compute must be specified when used, and all properties must be specified.

Property Description Mandatory
replicas Number of replicas Yes
cpu_request CPU Request value Yes
memory_request Memory Request value Yes
cpu_limit CPU limit value Yes
memory_limit Memory limit value Yes

Db2 OLTP Instances🔗

DB2 OLTP instances can be defined by adding the instances section to the cartridges entry of cartridge db2. The following example shows the configuration to define an instance.

cp4d:
+- project: cpd-instance
+  openshift_cluster_name: "{{ env_id }}"
+...
+  cartridges:
+  - name: db2
+    size: small
+    state: installed
+    instances:
+    - name: db2-instance
+      metadata_size_gb: 20
+      data_size_gb: 20
+      backup_size_gb: 20  
+      transactionlog_size_gb: 20
+    
+
Property Description Mandatory Allowed Values
name Name of the instance Yes
metadata_size_gb Size of the metadata store Yes numeric value
data_size_gb Size of the data store Yes numeric value
backup_size_gb Size of the backup store Yes numeric value
transactionlog_size_gb Size of the transactionlog store Yes numeric value

Data Virtualization Instances🔗

Data Virtualization instances can be defined by adding the instances section to the cartridges entry of cartridge dv. The following example shows the configuration to define an instance.

cp4d:
+- project: cpd-instance
+  openshift_cluster_name: "{{ env_id }}"
+...
+  cartridges:
+  - name: dv
+    size: small
+    state: installed
+    instances:
+    - name: data-virtualization
+
Property Description Mandatory Allowed Values
name Name of the instance Yes

Cognos Analytics Instance🔗

A Cognos Analytics instance can be defined by adding the instances section to the cartridges entry of cartridge ca. The following example shows the configuration to define an instance.

cp4d:
+- project: cpd-instance
+  openshift_cluster_name: "{{ env_id }}"
+...
+  cartridges:
+  - name: ca
+    size: small
+    state: installed
+    instances:
+    - name: ca-instance
+      metastore_ref: ca-metastore
+
Property Description Mandatory
name Name of the instance Yes
metastore_ref Name of the DB2 instance used for the Cognos Repository database Yes

The Cognos Content Repository database can use an IBM Cloud Pak for Data Db2 OLTP instance. The Cloud Pak Deployer first determines whether a Db2 OLTP instance with the name specified in metastore_ref already exists. If so, this Db2 OLTP instance is used and the database is prepared using the Cognos Db2 script prior to provisioning the Cognos instance.
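As an illustration, the snippet below combines the Db2 OLTP and Cognos Analytics examples from this page into one configuration; the instance name ca-metastore (a name chosen for this sketch) ties the metastore_ref of the ca instance to the db2 instance:

```yaml
cp4d:
- project: cpd-instance
  openshift_cluster_name: "{{ env_id }}"
  cartridges:
  - name: db2
    size: small
    state: installed
    instances:
    - name: ca-metastore
      metadata_size_gb: 20
      data_size_gb: 20
      backup_size_gb: 20
      transactionlog_size_gb: 20
  - name: ca
    size: small
    state: installed
    instances:
    - name: ca-instance
      metastore_ref: ca-metastore
```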

EDB Postgres for Cloud Pak for Data instances🔗

EnterpriseDB instances can be defined by adding the instances section to the cartridges entry of cartridge edb_cp4d. The following example shows the configuration to define an instance.

cp4d:
+- project: cpd-instance
+  openshift_cluster_name: "{{ env_id }}"
+...
+  cartridges:
+
+  # Please note that for EDB Postgres, a secret edb-postgres-license-key must be created in the vault
+  # before deploying
+  - name: edb_cp4d
+    size: small
+    state: installed
+    instances:
+    - name: instance1
+      version: "13.5"
+      #Optional Parameters
+      type: Standard
+      members: 1
+      size_gb: 50
+      resource_request_cpu: 1000m
+      resource_request_memory: 4Gi
+      resource_limit_cpu: 1000m
+      resource_limit_memory: 4Gi
+
Property Description Mandatory Allowed Values
name Name of the instance Yes
version Version of the EDB Postgres instance Yes 12.11, 13.5
type Enterprise or Standard version No Standard (default), Enterprise
members Number of members of the instance No number, 1 (default)
size_gb Storage Size allocated to the instance No number, 50 (default)
resource_request_cpu Request CPU of the instance No 1000m (default)
resource_request_memory Request Memory of the instance No 4Gi (default)
resource_limit_cpu Limit CPU of the instance No 1000m (default)
resource_limit_memory Limit Memory of the instance No 4Gi (default)

OpenPages Instance🔗

An OpenPages instance can be defined by adding the instances section to the cartridges entry of cartridge openpages. The following example shows the configuration to define an instance.

cp4d:
+- project: cpd-instance
+  openshift_cluster_name: "{{ env_id }}"
+...
+  cartridges:
+  - name: openpages
+    state: installed
+    instances:
+    - name: openpages-instance
+      size: xsmall
+
Property Description Mandatory
name Name of the instance Yes
size The size of the OpenPages instance, default is xsmall No
\ No newline at end of file diff --git a/30-reference/configuration/cp4d-ldap/index.html b/30-reference/configuration/cp4d-ldap/index.html new file mode 100644 index 000000000..c544a85ae --- /dev/null +++ b/30-reference/configuration/cp4d-ldap/index.html @@ -0,0 +1,46 @@ + LDAP - Cloud Pak Deployer

Cloud Pak for Data LDAP🔗

Cloud Pak for Data can connect to an LDAP user registry for identity and access management (IAM). When LDAP is configured for a Cloud Pak for Data instance, users must authenticate with the user name and password stored in the LDAP server.

If SAML is also configured for the Cloud Pak for Data instance, authentication (identity) is managed by the SAML server but access management (groups, roles) can still be served by LDAP.

Cloud Pak for Data LDAP configuration🔗

LDAP_Overview

IBM Cloud Pak for Data can connect to an LDAP user registry so that users can log on with their LDAP credentials. The configuration of LDAP can be specified in a separate yaml file in the config folder, or included in an existing yaml file.

LDAP configuration - cp4d_ldap_config🔗

A cp4d_ldap_config entry contains the connectivity information for the LDAP user registry. The project and openshift_cluster_name values uniquely identify the Cloud Pak for Data instance. The ldap_domain_search_password_vault entry contains a reference to the vault; as a preparation, the LDAP bind user password must be stored in the vault used by the Cloud Pak Deployer under the key referenced in the configuration. If the password is not available, the Cloud Pak Deployer will fail and will not be able to configure the LDAP connectivity.

# Each Cloud Pak for Data Deployment deployed in an OpenShift Project of an OpenShift cluster can have its own LDAP configuration
+cp4d_ldap_config:
+- project: cpd-instance
+  openshift_cluster_name: sample                                         # Mandatory
+  ldap_host: ldaps://ldap-host                                           # Mandatory
+  ldap_port: 636                                                         # Mandatory
+  ldap_user_search_base: ou=users,dc=ibm,dc=com                          # Mandatory
+  ldap_user_search_field: uid                                            # Mandatory
+  ldap_domain_search_user: uid=ibm_roks_bind_user,ou=users,dc=ibm,dc=com # Mandatory
+  ldap_domain_search_password_vault: ldap_bind_password                  # Mandatory, Password vault reference
+  auto_signup: "false"                                                   # Mandatory
+  ldap_group_search_base: ou=groups,dc=ibm,dc=com                        # Optional, but mandatory when using user groups
+  ldap_group_search_field: cn                                            # Optional, but mandatory when using user groups
+  ldap_mapping_first_name: cn                                            # Optional, but mandatory when using user groups
+  ldap_mapping_last_name: sn                                             # Optional, but mandatory when using user groups
+  ldap_mapping_email: mail                                               # Optional, but mandatory when using user groups
+  ldap_mapping_group_membership: memberOf                                # Optional, but mandatory when using user groups
+  ldap_mapping_group_member: member                                      # Optional, but mandatory when using user groups
+

The above configuration uses the LDAPS protocol to connect to port 636 on the ldap-host server. This server can be a private server if an upstream DNS server is also defined for the OpenShift cluster that runs Cloud Pak for Data. Common Name uid=ibm_roks_bind_user,ou=users,dc=ibm,dc=com is used as the bind user for the LDAP server and its password is retrieved from vault secret ldap_bind_password.
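As a preparation, the bind user's password must exist in the vault under the key referenced by ldap_domain_search_password_vault. A sketch using the deployer's vault CLI (the exact flag names may differ per deployer version; check the Vault configuration documentation):

```shell
# Store the LDAP bind password in the vault under key ldap_bind_password
./cp-deploy.sh vault set \
  --vault-secret ldap_bind_password \
  --vault-secret-value "my-bind-password"
```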

User Group configuration - cp4d_user_group_configuration🔗

The cp4d_user_group_configuration can optionally create User Group(s) with references to LDAP Group(s). A user_groups entry must contain at least one role_assignments entry and one ldap_groups entry.

# Each Cloud Pak for Data Deployment deployed in an OpenShift Project of an OpenShift cluster can have its own User Groups configuration
+cp4d_user_group_configuration:
+- project: zen-sample                                                    # Mandatory
+  openshift_cluster_name: sample                                         # Mandatory
+  user_groups:
+  - name: CA_Analytics_Viewer
+    description: User Group for Cognos Analytics Viewers
+    role_assignments:
+    - name: zen_administrator_role
+    ldap_groups:
+    - name: cn=ca_viewers,ou=groups,dc=ibm,dc=com
+  - name: CA_Analytics_Administrators
+    description: User Group for Cognos Analytics Administrators
+    role_assignments:
+    - name: zen_administrator_role
+    ldap_groups:
+    - name: cn=ca_admins,ou=groups,dc=ibm,dc=com
+

Role Assignment values:
- zen_administrator_role
- zen_user_role
- wkc_data_scientist_role
- zen_developer_role
- zen_data_engineer_role (requires installation of the DataStage cartridge to become available)

During the creation of User Group(s) the following validations are performed:
- LDAP configuration is completed
- The provided role assignment(s) are available in Cloud Pak for Data
- The provided LDAP group(s) are available in the LDAP registry
- If the User Group already exists, it ensures the provided LDAP Group(s) are assigned, but no changes to the existing role assignments are performed and no LDAP groups are removed from the User Group

Provisioned instance authorization - cp4d_instance_configuration🔗

When using Cloud Pak for Data LDAP connectivity and User Groups, the User Groups can be assigned to authorize the users of the LDAP groups to access the provisioned instance(s).

Currently supported instance authorization:
- Cognos Analytics (ca)

Cognos Analytics instance authorization🔗

Cognos Analytics Authorization

cp4d_instance_configuration:
+- project: zen-sample                # Mandatory
+  openshift_cluster_name: sample     # Mandatory
+  cartridges:
+  - name: cognos_analytics
+    manage_access:                                  # Optional, requires LDAP connectivity
+    - ca_role: Analytics Viewer                     # Mandatory, one of the CA Access roles
+      cp4d_user_group: CA_Analytics_Viewer          # Mandatory, the CP4D User Group Name
+    - ca_role: Analytics Administrators             # Mandatory, one of the CA Access roles
+      cp4d_user_group: CA_Analytics_Administrators  # Mandatory, the CP4D User Group Name
+

A Cognos Analytics (ca) instance can have multiple manage_access entries. Each entry consists of 1 ca_role and 1 cp4d_user_group element. The ca_role must be one of the following possible values:
- Analytics Administrators
- Analytics Explorers
- Analytics Users
- Analytics Viewer

During the configuration of the instance authorization the following validations are performed:
- LDAP configuration is completed
- The provided ca_role is valid
- The provided cp4d_user_group exists

\ No newline at end of file diff --git a/30-reference/configuration/cp4d-saml/index.html b/30-reference/configuration/cp4d-saml/index.html new file mode 100644 index 000000000..c5ce9294c --- /dev/null +++ b/30-reference/configuration/cp4d-saml/index.html @@ -0,0 +1,10 @@ + SAML - Cloud Pak Deployer

Cloud Pak for Data SAML configuration🔗

You can configure Single Sign-on (SSO) by specifying a SAML server for the Cloud Pak for Data instance, which will take care of authenticating users. SAML configuration can be used in combination with the Cloud Pak for Data LDAP configuration, in which case LDAP complements the identity with access management (groups) for users.

SAML configuration - cp4d_saml_config🔗

A cp4d_saml_config entry holds the connection information, certificates and field configuration needed in the exchange between Cloud Pak for Data user management and the identity provider (idP). An entry must be created for every Cloud Pak for Data project that requires SAML authentication.

When a cp4d_saml_config entry exists for a certain cp4d project, the user management pods are updated with a samlConfig.json file and then restarted. If an entry is removed later, the file is removed and the pods restarted again. When no changes are needed, the file in the pod is left untouched and no restart takes place.

For more information regarding the Cloud Pak for Data SAML configuration, check the single sign-on documentation: https://www.ibm.com/docs/en/cloud-paks/cp-data/4.0?topic=client-configuring-sso

cp4d_saml_config:
+- project: cpd
+  entrypoint: "https://prepiam.ice.ibmcloud.com/saml/sps/saml20ip/saml20/login"
+  field_to_authenticate: email
+  sp_cert_secret: "{{ env_id }}-cpd-sp-cert"
+  idp_cert_secret: "{{ env_id }}-cpd-idp-cert"
+  issuer: "cp4d"
+  identifier_format: ""
+  callback_url: ""
+

The above configuration delegates authentication to the IBM preproduction IAM server; users authenticate via their e-mail address. An issuer must be configured in the identity provider (idP) and the idP's certificate must be kept in the vault so that Cloud Pak for Data can confirm the idP's identity.

Property explanation🔗

Property Description Mandatory Allowed values
project Name of OpenShift project of the matching cp4d entry. The cp4d project must exist. Yes
entrypoint URL of the identity provider (idP) login page Yes
field_to_authenticate Name of the parameter to authenticate with the idP Yes
sp_cert_secret Vault secret that holds the private certificate to authenticate to the idP. If not specified, requests will not be signed. No
idp_cert_secret Vault secret that holds the public certificate of the idP. This confirms the identity of the idP Yes
issuer The name you chose to register the Cloud Pak for Data instance with your idP Yes
identifier_format Format of the requests from Cloud Pak for Data to the idP. If not specified, urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress is used No
callback_url Specify the callback URL if you want to override the default of cp4d_url/auth/login/sso/callback No

The callbackUrl field in the samlConfig.json file is automatically populated by the deployer if it is not specified by the cp4d_saml_config entry. It then consists of the Cloud Pak for Data base URL appended with /auth/login/sso/callback.

Before running the deployer with SAML configuration, ensure that the secret configured for idp_cert_secret exists in the vault. Check Vault configuration for instructions on adding secrets to the vault.
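A sketch of adding the idP certificate to the vault with the deployer CLI, assuming the sample env_id pluto-01 and that a --vault-secret-file flag is available in your deployer version:

```shell
# Store the idP's public certificate under the secret name matching idp_cert_secret
./cp-deploy.sh vault set \
  --vault-secret pluto-01-cpd-idp-cert \
  --vault-secret-file /tmp/idp-cert.pem
```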

\ No newline at end of file diff --git a/30-reference/configuration/cpd-global-config/index.html b/30-reference/configuration/cpd-global-config/index.html new file mode 100644 index 000000000..751273807 --- /dev/null +++ b/30-reference/configuration/cpd-global-config/index.html @@ -0,0 +1,8 @@ + Global config - Cloud Pak Deployer

Global configuration for Cloud Pak Deployer🔗

global_config🔗

Cloud Pak Deployer can use properties set in the global configuration (global_config) during the deployment process and also as substitution variables in the configuration, such as {{ env_id }} and {{ ibm_cloud_region }}.

The following global_config variables are automatically copied into a "simple" form so they can be referenced in the configuration file(s) and also overridden using the command line.

Variable name Description
environment_name Name used to group secrets, typically you will specify sample
cloud_platform Cloud platform applicable to configuration, such as ibm-cloud, aws, azure
env_id Environment ID used in various other configuration objects
ibm_cloud_region When Cloud Platform is ibm-cloud, the region into which the ROKS cluster is deployed
aws_region When Cloud Platform is aws, the region into which the ROSA/self-managed OpenShift cluster is deployed
azure_location When Cloud Platform is azure, the region into which the ARO OpenShift cluster is deployed
universal_admin_user User name to be used for admin user (currently not used)
universal_password Password to be used for all (admin) users if not specified in the vault
confirm_destroy Is destroying of clusters, services/cartridges and instances allowed?

For all other variables, you can refer to the qualified form, for example: "{{ global_config.division }}"

Sample global configuration:

global_config:
+  environment_name: sample
+  cloud_platform: ibm-cloud
+  env_id: pluto-01
+  ibm_cloud_region: eu-de
+  universal_password: very_secure_Passw0rd$
+  confirm_destroy: False
+

If you run the cp-deploy.sh command and specify -e env_id=jupiter-03, this will override the value in the global_config object. The same applies to the other variables.
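A sketch of overriding several global_config variables on the command line; the env apply subcommand is assumed here, the -e flag is as described above:

```shell
./cp-deploy.sh env apply \
  -e env_id=jupiter-03 \
  -e ibm_cloud_region=us-south
```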

\ No newline at end of file diff --git a/30-reference/configuration/cpd-objects/index.html b/30-reference/configuration/cpd-objects/index.html new file mode 100644 index 000000000..4fcd88dd7 --- /dev/null +++ b/30-reference/configuration/cpd-objects/index.html @@ -0,0 +1 @@ + Objects overview - Cloud Pak Deployer

Configuration objects🔗

All objects used by the Cloud Pak Deployer are defined in a yaml format in files in the config directory. You can create a single yaml file holding all objects, or group objects in individual yaml files. At deployment time, all yaml files in the config directory are merged.

To make it easier to navigate the different object types, they have been grouped into different tabs. You can also use the index below to find the definitions.

Configuration🔗

Infrastructure🔗

  • Infrastructure objects
  • Provider
  • Resource groups
  • Virtual Private Clouds (VPCs)
  • Security groups
  • Security rules
  • Address prefixes
  • Subnets
  • Floating IPs
  • Virtual Server Instances (VSIs)
  • NFS Servers
  • SSH keys
  • Transit Gateways

OpenShift object types🔗

Cloud Pak for Data Cartridges object types🔗

\ No newline at end of file diff --git a/30-reference/configuration/dns/index.html b/30-reference/configuration/dns/index.html new file mode 100644 index 000000000..305e486b7 --- /dev/null +++ b/30-reference/configuration/dns/index.html @@ -0,0 +1,11 @@ + DNS - Cloud Pak Deployer

Upstream DNS servers for OpenShift🔗

When deploying OpenShift in a private network, one may want to reach additional private network services by their host name. Examples could be a database server, a Hadoop cluster or an LDAP server. OpenShift provides a DNS operator which deploys and manages CoreDNS, which takes care of name resolution for pods running inside the container platform; forwarding name resolution for specific zones to other DNS servers is known as DNS forwarding.

If the services that need to be reachable are registered on public DNS servers, you typically do not have to configure upstream DNS servers.

The upstream DNS used for a particular OpenShift cluster is configured like this:

openshift:
+- name: sample
+...
+  upstream_dns:
+  - name: sample-dns
+    zones:
+    - example.com
+    dns_servers:
+    - 172.31.2.73:53
+

The zones which have been defined for each of the upstream_dns configurations control which DNS server(s) will be used for name resolution. For example, if example.com is given as the zone and an upstream DNS server of 172.31.2.73:53, any host name matching *.example.com will be resolved using DNS server 172.31.2.73 and port 53.
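Multiple upstream DNS entries can be specified, each with their own zones; the entry names, zones and server addresses below are illustrative:

```yaml
openshift:
- name: sample
  upstream_dns:
  - name: corporate-dns
    zones:
    - example.com
    - example.org
    dns_servers:
    - 172.31.2.73:53
  - name: hadoop-dns
    zones:
    - hadoop.internal
    dns_servers:
    - 10.0.0.10:53
```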

If you want to remove the upstream DNS that was previously configured, you can change the deployer configuration as below and run the deployer. Removing the upstream_dns element altogether will not make changes to the OpenShift DNS operator.

  upstream_dns: []
+

See https://docs.openshift.com/container-platform/4.8/networking/dns-operator.html for more information about the operator that is configured by specifying upstream DNS servers.

Property explanation🔗

Property Description Mandatory Allowed values
upstream_dns[] List of alternative upstream DNS server(s) for OpenShift No
name Name of the upstream DNS entry Yes
zones Specification of one or more zones for which the DNS server is applicable Yes
dns_servers One or more DNS servers (host:port) that will resolve host names in the specified zone Yes
aYmC3gobkXRc1/1qxNn14ShBOoQIbPQBeutNaDFs3QINuqNLXLe3AGWHHLfePdU7wpOLu+vBecvzMSRCmc+QYWdgsG5jUHs48+z5WFa0xkJdI4wbash053viREW6JMGfqCeblr6+If8SBYZTSMHuja6HHg3+I4qf13fuo95yEpztC+1EwPJduQEnrJXafGlOU1UE29LZ3HrkhIpNdBuHdf3nvzDD+oypBP4IbcdhXVKEt56ObyoP1JQmEx38bi0WhXwj4X4PSfpoYkB9BWjgtU5mAiPu/EKIhP0I0GP5JMv4rINpPBf5jdeV/lcAx4n+awPEfBG70gIQ0fT7/b5D4DyYO/Shx4vFvFPiPHOR/l8AxDP7l8T9L5D+WZ+g1KZdvQs9LgJv/Orn/pCLyrxb5P2q7/0TgCPbvjCq/hrDfifx3RS/o9deL/ZZxRD6wB/Sj7DMyTuN/WXwh/ih7hPxJZQn7F1U5EKfUi4gSegia/97CyTvW4l8j+u/Emt7Z0/r+sZ+W4ub1Xdhwv43+qeD22zGvHpDSL3FX6bKc39tN4br0f1TGT4q84K5/EOpP5Dz36xSn/8yQfm1hhVOeLv8MqSE/V9OUNuFSbn98kp/J/OtUeprC83cHDH3ZLfPvrmyAgd9hVuRP2sf/1FH6T47/tRjym76/PcFv2v/HVP4PnPHH8gY9DE0Zh181i68o+DfQ6Zv2cEpA8fh/oVtiP7ol9ZOIiP3LIuKPVOJHP+0SGnRggSyacJ7L+I+i/aOX/sHlbklNpwf2/AJj5K8DPtj7yz/2P4/vp377dP7+k5FO5T1TIP9/XnT/Tz3x19j/J///UVW/707+RBW/jv33PPgHl8Mg4ufJ8ddLfItB38/6J76LPf54IfTxpwt9k8wPF/rL3Pi/0Hn+v8KC/qcYBon+RYZBIr/AoHr+/Qf+o5lAyL/VTJAfodcPZjLvZduEXzh1vh9n+Z7Wf7CJ30fs3wrX+RQm5a3UPyHevyBq/7kCBP0YtMmfrmMg/2Vh+8eFDP+XOd0/NZL/Ia74gKBfSOpPbkP8gv2p3/NfdUhwOQj6zSH/lAEQCP3h0v+BU/538eAD+hPsQP8N+O5Xk/yP8N0d1lAYGMc3nBf3TZPGS/8jzvu/vsf1kkBLi/re4/JQNqXuvznNffW4sE4/k+SxJwHqgVI3kzyCNUpXFP1MqdSe+31C7kezBDH3XZja56CRzrlakou3fdIrxzcjzdA5zNClbPnXUDC9qk2556kM/xqk5QyUeV6UdQGF+SANiFv4fKoHrbeh0bRayeh8yBiU12cv2q4BBvdX0/hxXes9Gksk9jiuB40hDRfwYCQZyHIjg3ZqERy5iETNItDITU6bvLaOk8USfbRcuwiV4RPiHJA53PQi6EdQT3tLhYvAhHlp5blLCTJjEUECj5Td/wNUwZlYGwi5EU7Qx0E5rjjeHSmD5h185hroA9hiM42FbZSgweMbSZcUYHirhZXwUFF9rbZuaCJhP8TsdYVFvlGyRtcOsr0F1C8nodSzeBfwiLZ1nebTd1wjLYtGxBF1ksbqT05ycXZDMsJWKPPFFx5od7U4STSzB28o/3Hia1dGu1fRsOUILIyv3Lu5itybxIvLQUcGdlRtRkPbh5ZCVdpT1tRDrUAnQ819xIy07Jod7W0MUUF+zoxG3EzJ49vPjtvm+eK+whVj56NTG63XUC5nU5irRbRZgmoC3RUe49t5uYqC9j8cpuzvxnSlHGqf+9ymtGwsgS9dcLjzlWQyW5HbJWMvgy5SkhS9QGNdJD5hrdS8009QZYOuC+YVjW4YapKoeVKnQKVU8A6I+Sj6WFc5Wpk2L4wwQ+VZfmuQitCv6Mj8ZLNPPLaNKLnPCM8pSj10+urorcQsO49yXJzxqsZZS/CaY/l3W9GSw0823ITFTOi26c8uRKHBNYuvhmD63TvdFz9wJjy/SGKG6AxruDRZwvzj42T80dSK8fgOsv0o4Nf3E62j1Lns1BViyxTmd2U
KPT2rrtbu6UPm4Zaorm5xyFnwE5fwSROo07AXHyxbyLtu0J5OX+wt6Q1ucXLjgq4XeebmJBp8asVIZl1Jl0Jzq2XaHq/e3L4CZ3jsMDvqetZLnwLHC/aTtPXzc8nAlued2uhtl0PQ4MITqudeKDVuLjrOFfIBtgwMdvdTNJ7j7IMal+glt1toU5qW5vO9qx+s0YLK7jnZypO3qUuhZHKOxbjSck0Fk/bMs7fMpLFbjnfaWKZzcbKUufJ5+abCTyeUxIfnurqO9h/5wKmyTJO0HLvO22PymkjItPMnnQuP7Hz0LVNyiAmMyhNtf8oKJg5yiLbeMpNze87aQ0vz4juUdrrc48NcQWBYmGlCj2cLiqb8ByIXwdeyDHqOLM3xpszL5qkM+mOJ6ZdQPYFBI7NKhFaCDbWQbOwsnnHeY+3RTmE8u9Vp1BCGdWVM4e9LT+M+MHThzdcDplNHmJMjh5QNL71aDAZ3dAItK73TohZ8A+tP/P3+JTgi0sDidWBTbYEAomkB3UdIMvqMBRqvSwjikOp9YFT8JHZ2nHqwVNLuOwzPjP7berstP1OYoz7LGKc7XUPXHoYrRfnYxTSpeVPlKfO+/EbBdbZMF3fmr6sjcfu6rz6V5nG7DcPe4Yp/SpuHl6/ek+lLxCUsA6sE4pzx4JV9GvynmhrEeiSNqnfMxxo8RiklToRBK/k48XXFILAudcNGUVw6aecKlpMd2TpFgf1EtNsv1rjiGNocPa+bVLkHDNNZuTrNxGG9j9eIs+qt6HA1VJD++aKISQoEoDv2leMLmZS8iuH2+RIDbOLcPnuHqUelKEVUNjS+oyx4x/Qodpcy7TSCTjro/27b/G21j44LLtXeEwoPa3oP59hL0pvLRHa8d0JHwys8SUv9YNImN2kb+tmqKB4FweJgS3lOUn+mlaAt5STJZ3/sh8q/ViPnQT9efFxoLZjHFedemIvvt94YeUCIPNPlhlQguHZYwMckGbI6tsCBF1fyJ8pOwvx6OKaQl3V7q8XBtRIcjXsaIH305EHP+/OeHMxShFP7dugBcuqzMg5BNZtMolQMe8FoD0GS34hudO4tkmbAds0UrKCQvJiaTHLPmRo3+/htMrqeJAmJ5jkT1aLS8rKojpVi7xtIZw/t8605j81ZnICR58HhRb4PIRMwQ5jqMY8o+dG5+8lbcQFkN5+CetCxJznvtQiADddsLs3NZIaFZznX8Gr3zTCw4eUBrbad0XLDHSEJOqSNiDEeOxPU9TGqmpM76vtF4lqCkjgFZnGEKOo5KE4XydQyRy+9WO9JY2/BM5/5x+U/w6xsin7RPCOu8ae2e88H3XzpBEsEmAuW+srTBYdjgtvbmCqndXQJLJzEvjSHT44HgMSk3L98E/MTAoicqDpdoHkkqtTnO4jBg2TPzABQjyf5LZrK5JDDfhGD0pYOO58+w/W2imuM59aivVZb2YkZocgy5tVgJGL4vLOW70RXSvTyZCzoDicVSInrw0E3Q+E9Q1FKOEcM6FXxWKB5brfycl+1nM4azzuyGSU2Ga7KoMLb7eOX9pUILKNzPmCpUaKgxM5CoZo9ylICyZxWUZ/9bjBOhQA9OWfbcSDJBSLQcV+Ebzy/rdk8ELzjgPYZdgDgMNs77mr2j6MP00sEKx5axqBKmaZaGWOmrTPXJO2LOkHTR3Xlw61x3spyxFXk2LeYmmaAnlcTN7YomAgGCYGoR1QiZFeoV3bXDeqc7gQqBll6RaTdTKXTN7SsmPTnTZ6iYejYhm6bu+m6KBW+Sbej5ewjLXX9jTohhI1MveoQVBUcYS0zEUOQ18GWnOu+AJA5czAZ0jZITM88SXGQD28ft7PF/rWv52vWXj5uOeW7ogkLLfgLeVTZ3iwx2cJ+br+P8K17q0HWT/ueSLdVBowaiKCzlcSb5KskXBpAHoPxboDKa6gI8ifUor0Ayc7KrSr/aRetLnjkjv69bFZgnZFOZ0AbJv4wHlfOscOLR8HThlWBcXgaU5G
c6QfVEOJYS27PKFI+QCZBWKeWrZnIJN3TDVh2X7VlTgoG781E9a4eVEn4xsAJUg1bhWW35WMYwVU/6HTI3/M59+mD6lUDrEKRJoNojZfUm+4rKeuVaLNge0YASNnohtWO13GYzDGzrahcWDykXoDzFrnxMIBGVzQZ0kMiq4efv2gpfrPmyq4FEe3Lt5BBCM+MwiweGt6zLB1yFWKcWlshn3o+n0ZDfa4tEBvh5LqWzSwxdyMbf9KU0uSP5UMWP6MqgNDeNoPVWBueZJAwMRX/4RXmo5grJn4FhSq/dN3hEVbqzj71tm92PuggW5Lfoj2jgWnl4bCIoSVLkePm0o0mZANNmIubrzrlz7cMJVote1fenov5TlmpdJdn90RGK9tXEEaoDte9JZXV91g+VY8A6g4Qc17J2/mEYcZoWFCcPZVexgACB+5fAFswIXlIhVvCXWSteu/FS1guqo+IAhATcEuFiZqpjiqviYtxToj58dSd3X9ZW4Fo7mtNXyBfHMRVVSvH9UGEazkEVuA1EkPuQ+9p9dle89VN9DE8HmbXOYOlI0z3TC/ilDOCAsIgn8Mb6z93nlZFvmqWrUKE+G0dJTdoTqHXdyIFGAB5fx6AW+c1p7q5Zm2zSga74fdOiaNLtqYe0ltPLz1tyVlznrN4/fm2wq4Wn1/xAldfIGGgFto3uasO+aN6+nkedhShAHpovPdGoaGGRXziJL2MegYZiXg1xw+m/+EFzGyFIfhSo/XuT1iyA3fdHnsZ8YzlIISnkNSMeHGGLgklultQmR8uXzDaaijrikqJqvQqj3VO9RX5fSA6sAZ0WxFwc/mbddzZ3RL23qRujOIVKXUUDJpnBWEgEGIYrMdBM5eHc4qcHHAoQkuJrunyVHabQfbkEO5pRT2jFo3PEsA2lDT9hVf8QHGcsaCild+BM1Mkdt5Pri2ZHTyji2nOcB6zjgvX/b0dlry/TdZJkTbAHuaWHsMgLjX3rt99PQ1Yp9r+hXXinKKlbpFAJR2kSbOlPTfDtUCkF46EOFmOJ4vTrkWrcJWUJyWSwC5DgbquyjtBiaX+eE0pPt2uwHyo7GggY5pVgYtEGBWURiRS8XGY8RYAa97h4lWWQd+SCTC5zfnmTOmz22pH+Tz1srZA8lh1kotkSq7UT6FS1Ma2e1C8KAynqONAIcMBzIlurLp88jjzRs1iMeyQIsXXHXuugMHe1RjRQbbOTBjqS3wpKYjRwKNIjEqAsy5KCNkF737FyaC9U4MXiStHfTqIqj+nTXvElwcmBwYUjNymMe8uI3fcGyn5sKE88dkJ495P+ttxsY9VVrSSecqTwxUk9S7MAaotRxjEEKZIeUHJp1jOGNWdWcNXEXI2SWzr0SHtcz4rIwo9zROQSszCXkjUKF4gKRFxS16tuBRAN/XNme+3JeXX53PFxwXcgeSaNFYfB85a7XDTtMenyY4g7Vf704Y0ALYK/LojrOW2xRfSAutVeOz5FUTBv8tgm65xfGw5sFEL7Qpab3iVd6LqP7DKZnQVal4DhqPgvLNF0SgOBc5p0sYT1QYBwVp8qque6ip9Z47uXZEwRG3YooGJP8eGaom30ZPrBcmYSMf4i3lhLxkBD9+/UW/zC3jFd5kbbVkJ3GG6DYDx9Fo1tiBwMIHX5sd4U07AB59RSj14o8Z881O7aqWGPaSnAZNN9DR2+/0Eg+TX9iS5OzTjT+HOFhXwSNR375BOu77eeg3efx5F9XzmzeEVCYozPPeMgCRYC9RLYD+KUBmzPp9Cst8A3FCCL61l+4WjczXi+jf2cArVGt2mvXnIGrRGpgcnVyKAUCmk5TylSQVVQ5Xt5Gc6vLB1lqpdnIENFt2aujT1MqGX8HQECY+YI/OxZ7f3HfMGwcMTSuX1QcnYBc9uAkGtRrAalxrZpArPCF1feO7AySYeRWFhhpR5M/Ut8GSs93n2uOC3k87OPWs+ux4C2RgrSO+Rq9nS3TNhGs2c+FDlLFC/5aEsiI1WTHA
Yz098Nm39qVbwxJVv+rTfxqtY5eOboz5eFFg3630tTJUG6/02UEp9woK1og/8A+oGlPlG6e0Y4sRRWOuQwhOaM/zwKoIcY8o2Exp/6RJ3BHpGO0t15Hag+zFWEK4Lq8LKP4m9BCZwbl4qSXRBfbzUjK0diWPkjbVxlqada6485Hc8bH54MnNB+raicFDfhHhGwSAJ2nRdjZl8tqfRKq/QvjJgp+QBJ2VnoA+Kdtp4H29kMJqjGJlQHDtbtMo3YqncA2mXPWrCOnOD1OgveS6twa+jGnvRC/EgAIThScKCHWQah6ONWH8JLpZEE2ibJkFQbbdakJPMAQ6Ngg7Qaq6va+0EQHVmHmDlMqrcjHV9sYmNEmuIXL7Y6h/APi9t4CoVYyCTazXTj+0xiWCSqpfMfGRRbL4woTUvkY9opVlmhycZB0PNlvd76iWZN6xSOOUztmTpR/aAvWo1mc8zGt3xlsdYDbHXtA/IVqkBYtX+Iy6gUgHUllDES62VeQ0TnADCcofwkvNcRlq1fjAmdvNiABJGg9pMRuB5DcvVDtOEnIztzzLWuV2Iva+ebXPDIGEyXQfBGbIBWTG9mb/fru3VW6Dgc7Oca/EVJ3RtkAQunOkeJ5VAfpu0gB6jm3pDmO0xaN4RmebBUqsbJ/yEzpfTTk/rybbeLMf5fY0QWeyMIpygvlDZ1cxz0RTTNId6pJ9dimS8H5uF7eqLKsS+dLqLxuQHND1fuvM6z4KjmytJIMexKvsD6/FcNoHvH+ygwH2/lKVcgzhpo8CB5vnRxu3laHPuX/psHjcD5yLim9uZUV86fE3VZBQjF7qaEYRGWPwpEj2U3nP2LdSensq8cPE1PxZF3rbXgy0VFH6xyLJ/yrjU0vwOAaVqR7zMBB1X0S63z1iOKu6hmH1u09IovQvnnqAyOFKO0X3ZMGF/KRqhXz4Jp9MnkG5mjtkVVwdigZVVo5U+LNyOnKXMlKawuXCc1Uuq5TcfmtPRCwVcdEAUohHsZr8iEHKeyVSZzOCjn/DKdjgGafAS3C3bv/iZryiVcz7QGFZeu/mI4Gl3c6vD/B5YSI8BmUx52Hd+nM9nETokLAMXbUWJzJ8MYxZsX4sjYsnlHQic0HYiWn5bbb3iMAtqpDO5LkPoRsdOx7P7cm257+rZe7KGDCqhGhG+6yRScCcnJSk2b4EUvK8MtqlUKLnD7viRuUbChooTcl9kjjqtki6AAVgHQLXkUoQYtW6AqUgroTaFqCyzoefNa0MMvA/Tu2LGlJLEfT5so/p3TjlY4uIV5ySdGELejkMjAO7U5zftT5pCdH1hUblppj6iPNmiaCb78s6VMbaiDxB6mAUs3rL1JpXYG4Q3bEMMUJPnegYW8kxH60Xqr2IVCt+amlJxXmtT7j5zIzQxhkbsEZ+mcG+OZZFpweyCzOSlIOMkPJUKz3kS0sdQaXa/9MTx0S6BOTUn56YDIB/JBfBrMRfa07V9u1OIE2nto0IoPxWbkqjyzDPUg6IAQqfhFaYI7fpYqNOboXTq5VkknyfEfSxdiaeJqJbLxurn02/l0Go/yKtcvl6SoB5ohkE3HjXbkTvVti4ffDt26Fr0cJxRwi106HEhF/EtnfCk5n3b+HrDpkN1+bVpo3CzWKQZNRSdw3FxSOngAPHVAmCqZvrR1nGvpjeWD+q1havA9XB0NrJuPLDdBvpz+x5Oj0QUogja3rH0MOmbDUVk7/nSjcC3I7vNmE8JiLY69E58TbIqStAF7YXGDtcck3bAzZuOtB3dixvaP0iCINYMd78xz3r3nAhJMKKQVclzsYqt3sKNz27io+Ealx8CxPJsJhAtyrh4+x4QA2VbKIfKvDCFFSiCQ3VrmGBF25cAFGuRGFVadNCbK4QLTFLlzumHiGPd/WYNcvrV14pR6hA+CoEdgCzqKEVweXmfy6+vc343znwHH2bDTE9MXilK6u/LXgUyJokSd9NkD/tmcF62kMer7W+s+unFwraFVILmncvZLUJ
vb8DJ1kQokBOqRUCCXRcib2qgO2JDh/E81SXwa8u+ye+MNR+8cUb+PS6vECxGGyDKM8/p0qPC31yn8JI9ffGTY4cduQKLR3n2bQnmoZeqaoYodSVBdMFOyjKJr8UpqYEwY0SCCxpwzNHMgcjgwixZzlnkMiSUKmW6/MHzUBIMy/sT1USqOmc5KYP5+mwt/FVaGz0SvHC5ZcWtwucdsp4PrxZBSBLMpDZwOuTZkJclesvS1fOMqMhw0bpZnjPo+s3WMGbGOhyXZxAIU3OzGJob1FrC43ytqkfJgKlQGHOk5HJ0cWIVqO+5tgVLeEOyk/3GlTNac4nzxe7AdIyaM+kURbgzdowk9zXQWrhEowI+6oo+b5+TOp5OtE9pv9xxHj7v0Q+ghe/achAKuo2pdokIp2juSbKtOr4sp1QUGl78szz8DZPyqmqf/Dw6MUCX/V41xYOV0rXl4hodrMN1K1UkY8/8iCFLOM+baOYS8HL8YX+AxL6/lRboSEs45YM7XfNW4n18cQEy4kHe6FK8awLUQ1v5x2en56RgRvrqwP5aOE5847SKuwXPURHfXS+zf72YkuvV17u7IXxQb6lloNPXi09TZ5ZQZ7OfTJtZVKqOZ88yGZ+L56y3YAoNSCJN6G1hGcPFO3/n0vOFK3X2YoIRIduACVcjY54JgFeg0rgCgDVGkWug73SKIOqlUMuDGv28eALWoD6etO5Abbk4oUxxZhulOzZbazcHcEIyC54wq+oBJILxKlPe+HEky9nIJ1Vv1eDzVgFmmXJMd18663+cZKcaPT3Z+qZIm9L4CyUda7SkJPxcY9W5GRqItdFxM+HBA4ZXmuhFkcCpzh5NnbieidAdyQWrSR4EcPzUo9QAFSRyGC6tH/nQxMxy1IlwT77IatMDFKU3vKkFb43tQtuvNGuCjQXqITfm8Id3Ji9k5gQE9BzI8L5gvBBG/x5BPh3eg4xyvkw1b9M3G173YYWGTF6kcZVBvM5OAQKspJQiGP+dwpzMDN5xmvHJKh+xIagX8QH1gUp+gShftuuyfvUNqRKJjyQs3u+memPlR3mKDGvOni/PUZvoDpbs5CQQmXHBIoZ9lEJkWsmqhVZT8xoWXftr6QSi63qgdCAdOsOms2xAvMaaJhSshb/ekeYSDzZAuiianMqGZNPjCsQIDoWphwD4UVwWvW+znJF91LK9FCx/I7idIgpFbnat6d6bsVCS6pYs+LSduALDvzQgG4hCo0fuPFR4dOYr7wYu7JlZATWm9XVzfPTyfOvGt4pxQJafJAbrFhb7XpjSutFfmhmTRxLilyeMZ89oT4mzTHYR6uHdU8w0gU4XRUgwUCOBJ6C6zxvLU8LqntWUhx1AriqF+flh5K5+hFQCUtPr7X1u+tq0LfAZ0i2tp5bNsSRGb/O12BMI+wkjN77RJp64IqCMUjdmTN3pTDdGbm8D01dvKjw71ofroGEz5icAsdH5VTiMx9dtzPwi2oDUWTeD4VY3YnXl89ZLduerzLn5Fnjyp7+u6U0d0TVAuuqMmvGS1ALLP96OXJtL9i9voyF7EfQz3XTbTwINvHH4+ThWLZXvBE9vpA08VXiBIMyoJ/fMRW4iQui8hJVDgeTUHXhf1QGhvyJAj7Ba8++w04CeWt/4xTGM86WsC3WzMwiCH0lPDl0MvcfamtrpqwtkyB/tjoJmaQ555bp1yDF9w2KYcXYkjqMb1lWf8kG7e+l0EDezI5yfqjC9NsgxCBQ3Z1kR8A67CfwwR7fF7usel5N9TkS3EJL+MdH8YlT9AWXfIX89PvNX5Wjj5nr7UsVNEhMztfavFH8dmUQL3+vUwHIDyzpW9hkTEEc2ROuZfSy60bgDRaaAgofUQL/lDmJqWXly+gDKEdFXt8XhgcrGZ3aVCxGbS4xXsPbQwsK5ZZFbAQk1lUW22HkMHG7FW8o1oI9gJ3G6P3Ay2o12dwEzv/UsC4N2Y36+UOBrTcWNKAWeQMnEu7aASm7aqejO3L8S4eg
5rzL0G2IAc/Ulgq20BfRn/bdddcF1Ru+iOBAKMuIFXp4Pja1cy21RWYOlwL0i5dq4uZ7h2Yn0UQTaRVtmNOOr2bFwHpdlKP0hhvBbQ8/K7lD0pT5SnuPrE2CbsQzyyvlMmPtV+wiQr1dcY1tW8Oe0da9Z9Tf/6aygaMnoSWAMBOBpxvm1SodERTSRtz5bku3Km3JIGQ3IGTjWA/88mrdqy1dR4foRvzVQVnp8wUHKe6zrBjG2EppxyDxtdypa1Qh1w6glqJ3os2qOBr/W56gnkRoYmkr1yLlN6joHBvmAzFwWOAwC+t7gZ9Bk6DYPHq+/HAn4a1tSHGin8VMFJFltAfrU6VGi62kldN/1ealWwiG3isfCPhskXVH4ojOl8mmD+kYgo5Thxs8pkD6WMAyWUGBZirhREPkyioC/6nU0MwClW+gdtZ+xENbVkIhsvGHeTTiYtzR9zoMXHD0PFxz0ez0ue9GD8squGabSfbFIfgiDOIIQbpiTpfLIzaRvqnh9M3rYvflVIQgGMAYecHvRHmWzwxw0P8iHtuu6oT96TLACrSj4jpX3HJ7uw2JzZnszh3YHu4LN1z8VmyW4CvXFyyw6rreH/SY6YqxylMjQZK1wQe9MnEpY8FPMogA+qN7L9LT3hGiCb6wH4qm+pIjVahvCderobJK/GB/QbMyoLLE0u1xcelfcT9WdnPp+mJYuqC8PITV09T/M1/KdY0mO3Tb0YCw25cG/dta3Se71Vctn9Ool2olFcNikvN+XvyiwjHwt4lEtUNvFPIOYsQGUlHg77TVa2bNGZaX3294mDZ7tRwfJ4Q7LnfjSnlFq0freenDDxODst5OvHtpVYude4SI7+8tY+WIBvgKnpoaaparZeK18Xr5fD8SXpsnPDGJCtrq2Z/TeBD0tyfdBoMlpSWZlJ4jfPPMc2HFnz3Ck7U/UaeNpNwj1KJHl3ctu2XAKOva94lriGoY8LqU4CFPy8SmNDbSc0Vn2jK1Z13oNqmu8QB7QQgmLYvGpp5RgMgh0bivc0z5MFvZHhtb4tUVbTgE3Gsg08pLTOkaAmtxp28Uv53Jhi3kW6vyC8kB+yqXMnQ0nYzc/Ti8D1EcoEOvJbH91dZSbazel/lGPMvl+Y8yT/bSQfbHBrPvsjQne7Y6uAJDerMSlQrZI2noMOTfPK8lpRIHpxoBgNv3h5JjteBuWZ9l2A/40ybbBskiy71pdOhRkCFR63OrIneJPdIdp2y0lwp9b1dOo2zjgNDxaZVFCpXBwfmghpntIqLb5GEHAi+ZIU04rU/SY6zEb94JjU3d4HJACoia5L9R1UBDAdWq9TasPFtVoaw5Fgo8t3OC9/Xddm7SsDSLamgNGqY0gpbeHCUDJYXQM5hq1DtdWGqbRtPUZmxlHPhz2SE7v1oT9Xg2DJKvmfNTcPfVQcDQr0/ydqV4gCMXdzfFiG6f9Aulf7CPnJWY8ch35EE+Xy9+FBOlCCGIo2Zhpml5HGNcPwi7WUOSg19zp+keyiA9OFWgcLIX+Wf07Bbf2A1Fl4gCmyuTHBrflYyskhx44XcLkfC/talv9+zFBaeQTU0vkxZu+UOiLquwv2A7o/+pnzvNJAVFBXHwSTAkzBeLe7EaS6FMB9RWrBmi2frKLrq41VrBycCqWLI+PvH69K4yWP6cpJJECkSRumjoo7kB1t521elXLILtQseVSMPnpi3OcmxwfdV/fmewtmxnKOt5ZW8zOAXAigxu58fUAt/WxohKwl0W/OmMSPTKnP6Pjkc8QZhn8tVsPtHnYbG4XxwLixFuQqtuKeKlwReSMl0T5WsEx4a7zLm/UjjSy/y5tyAmfpcD5fnFpX98BIYsmRyrnBSoWHjkDFsMLWnluIAWCnI/mKEWBDU68zRd7aC66+bPI2Ya9EXMVxLQnjtYB9RorKac7RkUs+kwYQyNoDpKsdxxUEnbvvXZnq8UKnp6a8gh3Webvu5aY4MRxlhedeCmlecJh+Xk9kXFbH4qK6KD
Yjz/a8QoJr0q8O891wZLxqw7R52765BwPgfWyGUtiDyQ8XeqQj1ezxMrwDL0AvMfFxJVTf4SusCx+1F7sG9k8QPDC4gZseDiJxtISbCYsoUTN82JsBLgpObo8YAL5vKuBr6kJdg7few/xVnWGwGzXgjUnwu5l10Pt1TWI+7VkAMaYs6pVN0Ffvfr1dS5rTJLtkfuCFCz6HvmIvdFInblAyaX7ym7Y2j7tYc76b98RwnTuCJ1qpYAepL0qyzuYXmGkw/nMdjeHZmi+EDTxpuCCIjVD6A5VZoC2EF8e8bkR6+u9U8EJVmkurwSJCct0EQh07roZ2yjNVqNbCZoOw2fOUTUNkah6mGPHt773ICEFxevhsBTbBBA6kj5du8fq0MhObz7ruVz4cW8RFCU/twcQTZTlbVruCHEldllCxIhf0cyIwJ6vT/LojcShIm9SNLA2mEBS4XE8Q9+B4W6hTUYV2q1eQC+daTrP5HU8zWbG7gzVe2IuVcPxay8VLZzM160oKxFJ1Qcm2pl6zQxm2DhS3lfn4NJx864IPQ1ksRsbtT2VVKntOoJdY0G0IWiomHJE2tb5EvtWbuBhaLFVAr80Y5hlJxNSb8PllLzjHSqKRlgoqem188fOh7Z41eiz+YakMdUA9iNx56OnaRZ8n4nJfAQvSmb55PEGZNWaO90P4aMce83khyuvL6BgsNCS+K+J/kyiwydH9zLzJpcDn1dGnAlPtHwtfjSwstRaNxKqNdU73E8s+AnAldBRzV1ikzljHC2UcvAzjYlKfSig+mBlStbFa5XtPXBk2zAYj8E1jcjMLr/nB4UWXRsTD4M0eJbcI7o6KrHFJ8Kcj2/dkRSspQZJVoKtSU09hGtr+B2YT3aI/EWOzwvModPJp0OXdb9ovj95dypuNnP1RFh6DACa2agsYlI/Nyd6VMKLx4+5RcraIW+MsrBFVNGMWoVFCAhpjXQsjWwFtENl2Pmf8UBpzWiNqJdQqp4X9GG/So45Bx2E5zpCX2776Nsxnx7zTff344VOXstNEVQlLD70qqmld24Dlh7SoFoJQj97x7dOQXLpvSbi+gxT1s9v4khVRMSWME145QNUVh7Lh5X2AR7LpTp2BSONJsUwPB/kS9c40OOd0egpX5ipGWoyMJ+OA3I3hfy1uBNOEE6nz8OTDB7MiIqNgsLgAa71WN4h2WNNpR/TLtlV3PfLg78yQjtXm+1Y5QtpoU3RDTXsDBCpxEJfvrY1xWjHkl+D5TYaMpd7FKTVALKkP/PX4VBQvPjbxfhtwTdhJry2gT+QLnPHbcZeodXChpq6VKRil6pzaX1c3ovybiD+6VW2GzM0Kouk886G6SNBmL6+6grdxGS3K1qAq4tkaUzaKQVCb/h4Advnd8VOlwasZpr9NFmh4OCKaBWA8bTaPOshhm/rGIMlh6KvP+c0oShEpJ3QWl+5GyAg28S+GiVUIJuw1l9+DQe9NqXAndCbbt0Ikj8mU9ce5fObsYH8hL4fXAPCG/uiYJYvntO+n8V46xoRSkDSl7LrQojk3PRR+fAEgjQz2RhObajhuV1MXrBkay0RNBgCahYMZWzYJvpNgwtFqzmchE60zXnjFII4xEk1Lj+naBo/6RdCkKzxo2bVHcjZU7heh3UZQxUUHGT4qNzhi9zHxkssUK9s6RZ5RCeOfK2fENyliqz0ZQ/4No8Y8K36W4BPVT2AFsFIUn6SJifnqS2D1CfHVHRcydviOGkXQbnBqSPRuM95EZvLGrh3oVCGq5EvHOCqSwWYDpa2HPLOUUAxDVOnqpFn9bqZNCuFhHtJNgtqW5XrfOrTDfZZ/SIOLFOhOIslRW5LA6gkhzKFjbAg4Cix2kBEr6hS15XYMOLmsfHX6tID4iIdDWxQpNuuNqpXsWHQyRzUR+nAink1ZVV3d3KSLOTOOweGOihJgu9TQ2t+kCqgXScabLZUA5Mgziy9QbNODetrlNo7bXVSV3aYFEZgWgJ4vQiIql5CIwkoRn1
D2uLR2wd4N1nD5O7mzX6dpI0ur8YEmTeW3tsEOO1aRyI7hBq383BcPqc0AUUnpjfR+hWRS3HjpPh8W6FwYy1JJYkj1kkd7V5f7wTo8fCAuZ5jRcmvJfwKvt4/0BHP/iwhKdhOI6thPo7BRASwzqqMvQErCWqlplBvTB/oTJHbTYPqY0j9W46m1z+v6XUDVPTLniY+pZnXtMbx1/LT5kp1Z3q1wR5rz2yzItiZkAVwGK1Nee1Zngbdx5/NH6BN1Qwg/UgaPahyjTtxWmA5jO41L1p4Zm3hrqT80KOilHlV5YDqM5RS2UxjwQLqzAyQd0WfF74xz9WwLPhwvAYuRHa6tv0btZbmBygEqV+vGQD3SwGRn4EMIBDwiLStkZJ97mAN68BWLE4v23qJAPTy/YnrmPyo91L18ZH0P02n+RgXF6ExmgpkL6CAbNQwaLEI6xPF3fhBoRwoVJM3LLeO9yK3xRgXXko+Ymh2E0YTJQCTADCmT1tywxDH3SCU/JZ6hrw+qVxGb4gK1L6NYfz+6DgKv9tVCx4C3HQvL+v6ip6Gd/G84iqXOJLCdwXXyWo+HibNJCcRFDOKij4o7mTQlw2gCp9d0YISN62Nk0tvu24+7RusJjJ+I9eQE0XIjASTlprYxxwtEInsxVZi2xZB30fuW6ht38kXu3QkrhOkdU2zVc0b9HlHSdNODgLFjDheNhA7MyjJEDspeqRmFemZdZlN2Cey+OmTRx3T4pjR17yTnzESQ4Fvkg8j4wY7al3i7OrHBMPHZXTWMsMlDgXt3OqRBsXSEmHuYtgzyUJarnRL7tTCemQnZSOUl4FKyp0JVlnLkjkQ5uVoRudZdWTNOKAYRdJhJPfvhpYEZg93syzhpBCkvb3DBgg7IA10YKUI8/UOESAPiQ9rHitZNztevQIYC8v1AKyVCKsbgdGe+aPKoHhkGRPieusxa28uRJJU86uv6wXexVshESUd5LGUwvUPOGyaXJfcRN8zH0o2AMlrJFqBe8DC+ggymzQMQkltn1oCMtOzznT2upFWF8zC6o6teXDY7cnaWQuWPDJsJenPGwPmrHhEZQR/bjnmjwPTIlTHCBCjtJtrldV9ra3DZmefj/fN+5Epc+2vepnqX5zlBqju2GOEgqLS1xf6pXzRZSdEPurSPrzKMBuu/9z3sisYlXErzXYV+FW8rwn7iNmyHoIenqGz6fD6ENdVJJ8LKK8Rz+5xPt9THwpLcWYnn4gfdG7twF8jg0CiBzk6MyhRMzTzXnE8JeZBWacaGocbnSusoxbsdH7qJbkDMvBXM3/s7JA7Y1n7NCp+uDF1+4QvzD6cCL1aAncznlWwolxhXBCGRMmD9WAyc4lgHogUrUixcBIJOkJOShz24SkhFZ+88BF81p2l3tdT/UCrA3Qk+a/21P5pIWo13vlMIaptNC0sgLIRrdCxnR8P6ZFjNx9KQveat9XedgYg5q4XFVbzaUh/7g9CUVwj4Hv4RGCzfkGw5rx5aur6eajVpuAoipJU9QmkIOII/ihRmeBP5MNufN+8LREqPDau6updc29GOlyfClakfzwSdvvqIAH4XnLacJG3Ce6eC32kFaIm992sTRHmDsRLIwc5zCv1JBmFx3eqvsbkTeyHYnNlfOy4XsmiqJLik8Xu8Aye+hB5pS3Aa0uVqcczbrsBNnrW/ST9zRS2BDV0FS9uCeRzEnn+O7JkIcDgK0hJ5QDVCKMiQJ/QCQKO0KQnVb6eFzPGuF170JrYs2MvtGy6JSktZl+30xhA4q1p1z6BQzr5AJKL4iC2NEso5yXoVN+ZmTdefbwAhKtjqEGunbFR8Ov76vEXVeL+tYFq4bhgtJIlNt2G8ljBkqI4rhTDdojso9s+QeXabmjdb+fg5jUDHlKtHDLzVb5EP3oKuYr2XefoTgg9HV2h3vVYNZasiaHEJkqj1ixi0XnDWEmoooyRTZ6ODfL42T8SiBAiH4dFHYr9u34Mz4ZFq6SFv1Ig9dg51/0q3gY
vkQjc12Lgq5s8d114yUkYndt3Xramh3TYTfeG5ZtdLUp9Be/z7SGVORm7rwxvs3qWB2MW5/1gnG/Ci5yCeEJ4JMXzX10hUOW4ckD0r1YDH4re9iZBwrQhLD+z/JZdPrqQ56hpVlvNUaq8cdiURlvgrJKHeonzZWgL8PnGHmFKu3sIzSJYneOeLX9rGefZHS6kTnE4vy8vTpaq/lbXgwPrkorF9N/WcRrOiOi18pYJC/rYVXXGMoESQnKSzrwFqldyOkwLmKaGiLca6bhl3NE41nm7Bdk7Y3HNEJKJhSeegU085OKc0zTejyGCyJfOuwg1VOohbKD64EXrgIaQNltxa48l1UvFRxSpXidTCiMpDD+WdP8itKucZN4NOigKTKfp5iyVqnR3xPJgubFvrLGXrXZIDqvSuNsf1UHJC8n+TPWLnbrVJGCZ6YeSVkZL1c6rFvJyc9+xhU1xq1PCCamQ03/eJVFCjPSu8IizTDc62mp5s0fShUgcp/LRiWJJBsDO0UUF5G7Jq1E2pz04yOnzjF9t687RQYxCNsDZ+jx8HmFnG1WqeAWF4PLWj7t84rN733gePalq85wFQizFFuZqxPcbqicXWw+PFCFxjZ5aNJ9Wb9bxyDiVlzDSM4Y1oikaur7UV7URAR1YqVGZon3uQYGgaIaJcHVC1Rwj0Y31rp3A0ykpJ06X96p4cZKvHrl2U7bPmyEw2tcIaE4B0Frj44awTWhKRQa1Zuzi0GzZN2OzzSs5BgHarSZhQr+0Wj6wp8gF7/YzCD26L77jOAx75I2m6wNXVdcZ7zux9py1MBDwnEOG4W2LpG1Cv17uCQ3WfdQjppFvN2NsWVfbWuu0bG05ymoj3SaLFdfN4TGR9KsmnNY5C12dkTHmnksPihFsDWhecGGteK1ZRTqXSF8TzhH2gs4CTLwwakXnvnoCnME+SjhCIrnRd+2Z76dW3SDUEzwPY5ihrqTwwC1Z9h92x8gQfEde7XPHktXxSiq4/aQsRdmVrGFCpr5QYE6OrH1tnBm8THbnQF8DeTnKH/HmU611mMd58pL7elQGVLqv8MY2Hc1x70UgZBE0KTJQksPR0+jojleacMY+uAsA5vpAJUKwcuXO/erAPKVIFmQl7Ffw3e9MSRMwSgc87n8+Sh+342CnQx3Gk/982YyD59HG5fWlH+LLhYSvt4cwk9pdN0664OCVr7dGuBvxjuxS+0lsdd0T4xpFf2ci5BcS/NbIdaeekmJAzx22JtbRqx079YO1hXUXbVjg1Y75xPGnJnxFgFFOGT6KbClbI+jsR0vhlSd6DHGblUJhyxC85P9l6i2WZEeibcGvucP7TAzDkEJSiBknbWJm1te3PE+9tq5BnUxlhMg3rLXJe7mpzzr/B9CAvHRkp8UlH41mosfCI5d8SMu+uktQ0W1zgc/0lDOhm7zYC9f7zyuBcaHxaG0i65NfU0MbqtR+4GAd5NrTdHthQOeFYPuvOQZs5qSboHBmfBCGX3XnAvwAYpAu63aT2SPX+s8k8VFe4zoubdxgz0JWm80voVaaqPwozIW0V1zRH1MRrsOKYqipOuM3tMuz8qE1V0Ccv70h5pboPkX/q4fNvEM2ZiH3iGulCOTfIoOs4PYV6jbL65XFlw1hs4x2wkKlaI/utTwgYVAQzXS1Nq/fMWP9gKkLlihfzrHYTmi7rvTzn53BpPf0uSkn9KJPD+5/e3hLkiYPgEMs54O7Hv/nKvcX5Hz1Js5Eai/uptqlL9QLAb6z0afo+NcbO0OAkynA70KqmDMhy4GwVG3aiboUzc+lIpzlSk2v14AOBYpx9MLafrDfoOYZkt4TRto3r0XwvzACn/j8NF9TnWuk5inFzFYrsWYbwpGlSVxXxxewoO/hnczdju59++HnvYEZ7AL5fKYUMrqCiEJx8lDorRgnC2bRil6NGvUwnzWpvc/IhZgT6WlB5TmKJ/B7H08KpeWGt9SPeCZgSrwHlMoy6IeoPOCEQWDCKAiSeoY
oG6YqCe5hSIxjezELgU9UkzXXnPRBCLv0K4wjitBdVGQ9G4ev5ouYEXB1+JoA9skNnlU/gyA15eBkmQFaNMv9u3kTyhPIS6bpMGkiQwIVqPDTB4DsGhHXCMHrE03v2UoLFSvb2u4G4jWvThye7MWR7vAc2nKzS4qf5SjcqiO1RD9Mxy4B5qwGLUdL5fDFsRE7+aGu0bWA3iw1Jx4WHYjmqX7Lb/Q4x+fc++cbD2RG/tWTMxhwMVA56FEffqIK/Z3bGmSQGoUla7nZz4KNX7IsCEJKHYMJvdJJHkN0DyDF55CrxBwuVjzF+XCd1FJACgnZcQqgwffxHBxmX4XPXuhb0PEeHiH5bc/2VBK9iTQAO3+W73xlySVnUseHAM2hAn0cY0GGyX9ZwK8S+Z51pPn3Pi3cMx51iQqe/yUhlDyn+65oFhnkVlhDwAwFJga6S6/HdhwUdZ4ZdJjAJNnBmk2499SjfCJrFvAFUewPytCjzSNWUBb6fCAR8Ka4UGS4v0acTSzMo9/TqNW3269Be9kkf3E0CJswTEJDD7Dfm+xUluXE+zlLrinI6vJsK0dNey/SFPZVckpTHlZRGdniijswMF+hsQB59PWOF0EX0vMW3S12HnbWo+Pyyr5ZnBxDpCSXiA5+MnCR4L68b5Y9oKt8/Trq6mAoBNThWDHResEUkyV/xBinSjcUhokd6WV1vo6g3g8InTN5gBxxyUgcHh7BerPOBHtSj2BoGUJLJARKN308eOVb2AGluBDPN277Go558CjxE1lqPQ8PPjykTMA08Xu1bAp+/UScSr6UVO79qo4Nr2PnsXA5TR1PeWZ+voJKQxTxynz6YCQaJi6e7sKnuEx7diQJZ7prnn13xUViEvlLZraQoh5j1Ihz+H5YHyaOg+TdoM+SED8hTgGBnqrsJCS/t8Hl7ap0rXQ1XqgM0EHff5MP68al6d7uDYpGXjEi7wDHaVKtJ7hlZhdqvCQYTkgmfR3KameTIAcSRU6Yc7qrZEr81i3cxtX8Ef7a3XRt8OOEAH0VcL811QWYNm9BPulj6cXaL3SW3bO0Ey/V0eOHfGGkRnFladw0559n97YEhf4GoiCkqd/rH6Hp3a/zfYSr+6lpuqVzkvdamFrqAsVhkJ/8eQncshuDzpzt8ssaF0WS19Z8EdWOeKt+fH2XraB4NlD+TfEL2Zy2aQrBZcVzfkt69IT+gocjLzDRmpro7yg+9INCzIsSj9D2TnRLLF83BJOfsRcYqqqhY2VZr/VEpUgj+L9icA6P0wYFJDCOm/pJgsdnH3HCKJy3RD6oStLXUuCgDSCU+jLH5IDs3YZfTTcApwJCHUUqFp7ZCrtu/TGh+K+aq8QbhNzEe0YfG7RBSizuVTCicpiofZlN1xNNkYKHDM6jazE8tEBmKAuNhZrS7OvupQR3rhfqn9kT28qxmk501UYxb9owDBHw5CIzFJlfRWr2t5NeRXvUHk9XXT50qf0nANEgyRsIzUMQ2x81wLTFFk+et0XCFLgL9n99hmYX+hl+FK4MRrQHR8FM2ydj22qSHSfgPNHrQQylVpu9Ld1Fp48Y3lMOk+GuYK7sZaG+/SC2esDf2j31cPv8XSqqjQANygZUwj0NGl/RiiWbjkb2vFgsrEb0TBc3Iej0T5ExiKbyEW7Mj0uBrKxjgpdekAbrHmSATU+eqbPIIT8ix1c8r5vv4b93kYAwVtppSN9Ng0rgZY1juC78XBbnLzbqA3rzrpGxy7yYv0A87VdSKdQ5cnNZhSFLatOpDJvqM4IzKyBnjQDiQMP92lKtqiEaxInsHn1qzynXy73zeIceJMXmQiYBKCVB5Hbadnzbgt/q/tDD84fBIdTU9M11cMT+QKUx7cBZGZyBU3ZnL3SZsND7vN+sogZzE8WkHpjuPuG6uo4Fq/zNlzFnZfszYgQHx3BPFpVqLAMS0taKrO9zf/0nAzEuioWyjKsUGJHkyZK+JmVw8SswtmxN/QrH60vwQLJ3TcsQF50
UhDIm5sDmf/ufMCZeSS4c5oVTKju80EfA7z6IWlubPPZfKP3lHJ/vAnjMHp2X3v/t7F8sNtEWvagX+MhRLl6sKzCeOkJ/yBruF7/KKS1Nw+ab73uJPCyR/8WvbweIw4EbdTpnmeIYJtJrtg9bGQvKWX6F4U4hoX+nS9r1HqT6yQH7KxkauH9nOH8HuYIgKM/rzAdd7Q+J6bx2eLYGaT+rGcv6+/G7dY4dthqTAG5/tvl48SqNw1c8DbX5YJ+SzNH0Vr9fle4hUmDONNQP/zdML1qQu+hME62H20ip5La21YY3K5vK5btGoCylGpZV7HiaruuGp3hvii1y0eOINYWHkPf51bGIN+62ntzza67HC+np1A+7ziNtylLb4WcnLDIHIqCU08pUdwd3JrhNfPLYZp2QKvp/yexr4P7GfJz6jH6V7fRTF4lblsk9EajIHBynzR/n9/AS8KJCsXWWT93IyUOjQOnbuvr3yurhBTM1L9Rc6li/U1GFexP4ZaeWnx7Gq7CYRqxXqSXBLqg/J1djzM5fbrTKME4d1WOZhz7/bZDyKAdKAExprn5rVcruPYiQWTEH1WLif39evHmOMInQaLZZ6uitMyddjt/Xj1fJjfTsoxW6evowt+fOrqTYFt/6dndxkxrWpHrpXF5WulAFUdVW36yg+uVv/kcCckSnFUpZQ1yxp9mZOLeeAjPiQ3/2VPVH0HeTw9WsG79wRDqrZarRWomYRG+ZJFHk4jUDG1IUfV0nMlZNeTfdYbIPqWLixn51l4FyTBpeG8g5DYyrlUR1FNVFlm7YDLPOL+LHb3K4rT+2uSV+1hZkLm7VAdXX4PS97YnvyzctS/BXZ/3E2+bSil2i3azKJ6iPezIGwBdmXnWfSk3HQ+Rto4vlylpWQQcLkJbaSFDESJxdcGXXzw7T4Hes7WNW2GRiGZHMqo0IRKIjU+Czav3kMS3VUrTM/Lh9cDHsaBDxZuwI5hecXkdEqobZIncXgiisarKMqsb90F/u2CpCcwqmvhHfx0q4lfpLHHxIuXQhxy23bbH8vNDGSf9sDEePYTantFDg9t+Upk7nZibURG5dY7WY5iwKc0KWLAWKpPMshgFTkAJDkdW0KlB5/fwF442l8+ivrY2Ux1DoMOR/mVg83P38WpS6KnyZdpNdQzWaZEGkj8RBGOGJ11A/Y9BSyCqhxPrqPZmEvjd/5cb5X4co9NcW/9wT0mor5J1PryjOgt6U0af5NoOTGXfsE8PIT+bd/ITyVQaWxtu0LzZJkdkH/W6Fpwlt1YkhPwRdxn5GHPqsJ/aSCs53w7Svc+eZ5rw4xz2lgXrpRdaMGxfV82X337uKVw7XUHqhJ843vcQNf1UDHHsvuWuQKCKIIUvU4ERao0x4Y10jt39b7C8PQzHuj8q/QjDKZ/Xj9+aTNVvPqsv1s5AskWc7zwiAQwB2HcXGjxFEMJq0I0lmmbn5Kf9KRdIjAEv5ijI8tbxwXPH5wi91sYDtD84rQsGf96UYW1l7H64rW6kZfOPpKnw/DCj3uwMkeXX0LxMoJ6D6j++qlKLK29ZrWyoqAn3v8us4D2L9ZeQIaTrHgaUustOdkJ+7sbYLuLh1u+9Xzf7Ei83oIX7Bl+ZWlxyF7t8At+gsfr+hPa/rghmVxoj+Xas0kha+Rla57EQrgmapUS47ewRJtWtV7MbnuRpGm15sN6eRZ8+Z6XIoi2qK+Bq1gFndtLhKY9mI90gaQh7ENOBxOGWsat0wLhOUTU8vm+8zBJST2suJga2cmGV5IcxfhX9I2F2WVJGEtZD4GhAgO/q4iea3NYPjSVeyGc9Flb+QuHfskCfKA6EgMNDk/0zuo+KnvBvSD4SwNfx57q/6ORUvfjh+J38uhGHVFskWL2b0gHhrgPmyTNTT0caMv0OrxUpral2NFsfwFsosuZKxQG9Y/u3sZyBuVsXIrW5elzWK8Ukk7LAtGbmuVFwuqRuV58L6XbSJSqvkK8n6Yc65LMn0cV6UnVx
mGgONh94G4npJKMAHoaenAQJ2ZmPoe6B2rUDLCyRdGSVjSAdssmWPFu//gvCU/L/kYVv+97BfAG4qmREQAgclFAhZ9cMScCehj5J/1OEp9o2FTmq575/+BUHCUATWxCnrwroi71I4AXUU8RzFRCDP3zzKInNyI9mvDi+UXVwHA726I65umOrYv4FKeGsBdV9tGMM92IzXJoEny3eX++TKkoASWsxfP71lPxta/aYqlCVG4J6IvpyCD/6TaJ3N1+QY7r192VdkgDw2ko6oCnDIx701kJ7rndAoV7D8OPmivqiX0fJ4kpfoeXLoHCNAgAN266tgMRqS50Uv+pch7Wcf9a/kNM7tmI3rd5dTsZLz4JukRSpoUE1+qS6oBuDn/tdQGqX6MIRZK/q10zgh59OgfMq1So2UnmGBsAy0vvs0bPTsjmo8IUndQ6VE+/wNMzV1u6k/UuXKoGzrNY8wQUcPYii7XuG/bKNWhX/MpwoA9mInb7zK9PdiQ/oOQw7RV8liPEoZvqpspcpm+EgbT4vQLWPwDaLYi4tPzMOKPPERE9fQ5rzyFzNxQv3Ov25/vBi8Zgs6FJS08skawGuw93XiwDYUQfjXMeMPbELa1zCQdSJ5LFi+r2Gcrmn12g+aCGOp8e+Jf2Bdh/Jh8+IpgSCmjMdc5EWDWZK9jeZo5lkxzyc0/mHmEkMS7VkJggqr5w8cpIU7i/I8fhXfdc7OhSsYtqWXNrbG4vj059RWZLnh+7cg72upVen3Cc0MGORdx+laMXtZBfUwzi8t8wfEHyM2fakFReSU4zPGwNZQHn885iWObsNw2Bk/Y4WHkgd24GQw/uJH0XqAPZxdrPV3B0XibwspxSRsMYoS0NfQjUqXKM1daIrT7EnkFFecbs5HpRKaexRny0iwuIQeIUyxe5NG7lozyGCy4XJ7aUsK+V7/qROsusvtYdhSmXRWPbM27G4nTBitcVW876EXQPzGh55QHS4N2Q/MT2iV5iwmROr0nmN2jTAtwWFbvnRINim10a6sfwbZs10Eb0QpsSCP03XE5J2HfGhZAsUxDKFm6ULcgqFLXgcv1OcXT/QF7216DGZOkKFmaiXxJHlyWs4u5+myTQcHFNoyMSZzXpMhMH724lRMNEfoPau6CfLzMO7A+aNYDjD99JdANaco2+lE3P2wDcHKfMb2QO1xnzFjADapNlzSFBOreqRuJeQXVj5wmWQAFv8cY88dmQMmn/RFTo/4jkqhqYsaJY5SzK3yrfkoOXw0N5J1GCisOYvE+/KvMTa/9579hoYIMsN3XDknf+YjrCP8eK1HSCLW/16HPyq+Mx8NB2VucxWG6KH9gYGCQt4iXtuq7Jg3DIPaR4PJLX/dSdBs+Gg5xgIz7psszMBuaJlXZxWRKbwf9HO/4u63VC4X1bb+vHLz2RP+t02ubU12uymV6Sp4/wPI+ztqOQb31RKirtIG12tRqJZp2toh4SYIXImOav/rVEpk36IbDCDEfyQurz2B8zhGEt1ooYMG84o7ucA4cNaHq8QmGvw+ww6MkmLibZjMJxlYmf82L8hQz9exHKUgZmGJIL4x+Av6zBqUyyTWzMPv/RvRd/N361SXHEn96mFL+LWPOksvtMPTCZ9bdN72AUFjfWNzM1KHlrqePrDS96kAWlmTdPeqQBdlTWyx8HJAy7EDgUQWr5TdHyCBMOoT39HSGVoAeddJXht+3Wl6nLLTAFgu8DSUV1ER17sJ6ks+T7QT8W8m9v+GMOg+PBw/ZOI1PTQENilfPMH4Ne9NjGZvfjJDAPKsCAqLEiyMqhsDmjZdSU54OesCjoBPddPstmhehbsM3n5dZW8JWC0IDKZxEcc9WeT8tkP9RiJJejfZ9Vea3spwZKvYeIxAiv0vZ0H+2pllVhRYjSG6r/L0WJVXr9d0iVl2T/lqcWfifRk/6VNbkOxp1I0+yH65Wzflh1+ypiuGYXb50tdcciltDBv+M1tn5ZvrzSb2S1j8pGZF03I/H7l
kO5l351HgT5njrLF/WBbgcmrUq4n0aUFbnNcRt4iUeO9CoWgk7FSQq36EY04Lz+KS01m6F9lMC4y0nb+p55ruJ0e/D/wQ4sIbxkFNlzgi6PAon89HOGjJ4ni35hQJWRWvfFet13OSz/rFIjp37Jfq7PP0WoQ/y1/sWUhZ25DgVmM/qWBQ9qWYEF38mqTIb+9nGHGAaJBvNUT6t1fy/Bdtk2OCpqUX3TydT6fPJnc2XBPxF4/j2rsuAklXtDD4mj8uSk3+xgdoCfzakN+3Tt3iLss97yvvzDh4qZceJCqU8U/6gvIgaZoP8iNoNXCL6IMkpsoWwDk5kuy3L1NljA5Hbnv719Iap78ehK54f4DIoabXxt6uihqvkDuGBlfIYR1x+vc3EvDrTzkc0U32UQeohdQJEp8dWQdlGNL8v5EXkA8Gs0mb1O5f9sVy7yFgTTUKw5bDRfoFk64q6NNgH4kP1ID8lakHbgGCJYvHKA4MuJqIKYz5HZ3/dED57TmbtuRXGJNA+dQnDOOY6xkD/aE9oG9UhH/jq9wCugD9U0ujHyTV+PQQqqXN2B5IqLxgQGEZ0IyrooKQf07KZpWdBitSoNvwqz3Ji4MLRddDbKRlivueRepksELzW8p3LZfip3zB635Mn9TkrS5imJL7RZxleYJpMX5UfrggHSzeDTWLsSzEciyGiXK4jwmWIWeCXuvxwRYILcgnVOt9mYlp+Uv28CAyK0GZ5/UvmW/5p9h3akP3HBpQJFiqFMVi4Uzt9dGlse8b+SwfQbuHBUeSZFIAsNz2ALS5bfxUv7xmxpwzrX+xEPfH8EWD4iENjX7GwsM/iekr6bwd34h6l/xUivXMmNKgpgXf/Fz0IuRbaHeG916Q6isBENvrFYMM2NkBsbtc8hz0638vVVerCKCHE2Mot11mcq/lI0m+yafgEeQrAEMetrJo2TrEu46dxWRKTyBYR9Gc4/Sa5tzDLo10H/+mT1aR0mPR3r08Ry0aET0EWrJbx7IEelg8P8lNlVSmaUuAqLHdcQvtfvp1PzRXqbreUMPa9NrvB4wz4q9HpmK+dwZsQyw7uvw1efwTJ1hvYwxTDqU1m4DJL8h/bLbJQMDDC/I9o/OvESFY0L6GOezr1evZAfsbxzqCKJ74/Rtc+qG/BNcWFaskVUaLFXQZmmPg/13CU7H/wpWa3nahXPEMd83s6sipv/vJgn4mqJz2k/Lj8aCHVYuitoW9hJqgaBBYvJk2twos6a6CLNXiP7ymcGD6GqENlYJVdoTCrncHW5D4rC5L5FhZ0QxRGx++VCkT0F9q4XFPjqoptxAlOfeCtgnAu4TVACXgzg+GR2lVkoni4nj4N/0TSOISSf79+KYFCE/tC1ZlK+B7IEVSUHTzFGbJgyype8JzfMaF8r5/DXpX6sGeV2TU6bJuoTGLmQjaKLDLupGLMoerGmJdLizk1rRE63uYF3Skkt+j3klmwE6cc/aizXk5H6vtNDMAdaDuPsg7Re7FbA7SQiUFkHEZRpo43SwuMr+99lWEX3hu+tEtzf3fQ9xDSVPNH2agbh2K1wTtoTRNHkJW3/XWyyvbVbTTzb+x+w5MvTCoS5R9/CDqnPOSrznXIEhUxbPJfkY9ePEzBhoxMRjMw+Mx0jECEQRr+zZ+NL4lVwyePPSVGj927Ffu/lt/HUdTzQgeSlAy7uEkjp1NcxxIwpVFXcgrPHpODC1wHPv9kB9/b56LjffHURM17quD3ar9hsC5ct2qfkoAFf6TI+G4/7E+GkTgvD+Pb1pwhxyRZvleiAozm5/06FeCqv02f5CVFUe8IGjtb4p8/uZEfD8PCI8GNN2CLSJ42vpvAm3yp5+7InvwX8F8HPtakyqriju5MjvixF8wcY+SHN63kDw92cA4iMJemZ8+d78EqKNLIV7aYrUyvI+OMDY2Bioi/83yo+PYKbjuaCuVfQnC+hywJRJELH0H67msQsdFu3uRCZKZ2CQGZiifHmN5G3VAnzbUhqd
wnpPSaKBq6tBlRoC4zDKGMEka32RDxRMll7P59/oJXL1W9KVe64w1D69GXAzqChhJ4kHE+tjhcD7Finz1f1w7oUpwNLeDroqwKAhAPgrwK6oL1iKXWJI8oOdlF/TnQLtWjpHcrCvux0ey9w2oFo7t36n6gkl52eszhujC4szALHX4ie7ZjbLKwYJzyhU31LVah3V7gd1p+Hy6jeGlplJfDlxFxoJ8S9jBpwT2/pwmrC+aOwQrJvp73/RfTAzfEVMYbknGXqdQhOM2jS/Ec+Kcob2kaEBBWx/sOHnzmyhB1FwCuonytl2lFRd/qAeC5S02H1t4UpLtZlTe59R8pM3UXVo+xJF8ZhLSHdgK4E74Ruu9gVjR4TyGsWDc81AblWH50T+HMvc1JTCiuWiO6fBTPwQC3ooV/xnlvCwCVU+yjfD/Ip1tXkqpzKulNCAtcYHdLoSz32SkjFyGP6GtK7AzR9G2e6w79X6WxOediL6s06+wW4SW8fCx1NM2aMT+KznluyBf12PhGO97RuoU+mfKU0BfjL88Md7pr6ytcAjZLYmn4aBWUK8+M4Ab33sufzHSTZOyxoeDYUf2+Lwg9YvNyK2fNfrP+0X7NM2JzcljnKOWQB8KAvqzYnL7m+3lkQ9H3VuOq13nkEaNe8VEVEg7I+knsQObxRnbrUtojK17LW5oFZp/jSj871OBXIkLf25vX5ppn+kLN3CS67+pOag7l3WR8gVJRVJug+qpWsLsKZin/7wKJ9ix2Kw2DgSd17DIxlViytnRqjCPG+lMMfngd0xWci3oBY2Sgq+J9GMmBT8DEChS/XzHPuvWM1wWMgwf1iHZ9JMvS3U5Mo1H4YUIKEzqZhn8cp3Z3w63WtxzzX6F9oKjud8Snw8XXrShdbHbjjJQ9iEJi7cuvhIEQZU2OcZivmLK8CewowZlWTCCmpajock8WJdOLdXvBzeFv8ZUKNNUVBdtdcV5ln3xOT8VE/2RXfz69XCtT9vnh1One92Y6Zvdd/Yp5GcM5ejB8K2s84eXwboXksRYC3BYcqBu2KhJ1gKGjTKhUaQbFihtijyU+lLSVm9H613WcJPdHhJaKrIYx1I446/jGsacAw3bokRAjd+s1QLwml9F2qMhbXxhqJPyQvUIltvC6Lh+TxnGz/1neEXxn+0ZeDCGkn88JiXXphU7KVJ0a6cGbjrs17cW8Ek1y6C0OVd3LgE8G2HmX/kTOezXowOfHczO+hoU/fGG1ySyXl6Ikq/MVtWsnWL4dnw3sepK5RItsh71JI1+wx1FVJPnQj/Ta8noXIC5rUwkzVPSv18JRj9k8p8EglQE/MG/FytTqRgGEmHvRmYvUbjS2SDYYDRnw5qdH1Lr96c0x/BXE07Pw3r06mB55xHU7gp0qeS3D5Ftnw8LI3uowNJlZfQXtxjDnpyD+dk/zXhGHM2e+3VDlHbLXfok+ZewshTJ9Tq+J0wmrBtpIVRROkB++zrC7KMNd1OlGBrUP0qjM2uJeDGykoZ05bt5R3Z3C5zPqfDK6z+Mx6ilNn389vGsHmtjuG67db0mkF1rHjR4gl3XIw7+XPHyKa98n5nIu+XB6CmtDT3xXsvNTfoidwMJyv8xl78tIOC/QcFhtEHSu0KslVdbSDi2gh1jWipLL45oSYOEPTv8pSS7vf/Lyz/UP2Fg6hvhQpN52YKmv1YigUQzEvvl79QnxKyzeDptBGTtQr5HPIV57jBBEIA4/5PJ3odNechbVNWWfqmjww3ovPXhcTjRAHQu8OgPBGkALGD5PXXu8RE3c+lCxNis7+tFUnesu++7nGYv7FV/ptFGKuhPqS7o/JHrVmw7qTibOc2sL4xKR/XoLmYSIz9TlqgweRzK+ZevKoAxV0LoxYLWy/a3CGl+we3JqvaV5x9lyTjYdkAiy3pez9D2f5dhlbHQMxb+gJhcXmior1nWKczKAr0f14SXZunsrfbNPp/dNWavP1xxUJNCdgNWUhq4MBvlzFxkxLX+wld
err2Q0UfF3Y8Ah+Y+ytNX+Cof1NpREFGkrqG/ESTU+X5IbEhjOfoKmHBMcrK5sjxQX85PkhzRePRrgvhZZiOzU5+rfAH3a2ZN62NWuyCO17Gd7Vq5G/Uazlo8XugtJ9F92wK68xHL4T7xsnEiQb9g7syyVl+/5cNt2YwsFw1G0SNs/gCDO672gxN6SR3+yTM1rRI2BF8oS5H4IBhObzZNvKK2LEbw5nd2dmMy5HEiK6vfRqhkXLqu3Imc3JI8vFgY16VwV9+/askibeQtwKuUwtjx8BpipIl1L7YEWTwTNMNm9kvRB2wdSs6ab4rZU2+kgDy3Q7s4JmIqvoj6XCoe7if5A82FohjDWQfWz5gXkzRsHXTspew0NuKe5ThMOVdUrZYtqUgDjZMd2haaoy+jo06phv3bjZO9gKc2NhvhdbMyojScDXBnPkx+mpvg5lCjgYFeN7ZTKaSgPHTYq7NEaz8QmYim5wsD4C0p3jpKiVn+eUOEEqKMpoK7ZcO2FuhU1mVPDtPMmsmPPwld00NAyl7UEC2dO3UM+eVCKw4Vh7OqeGms/Ymmm//OuunqmTjw9tAkYBQR77gGRgDQZ5ez8Jdei4pC8AujfyKaSy1SVD817PoLEnhN0v9laOc2iCkYxYl0aiTxwYO9ztjzobKjpIchENNwKQ0FGjx8V1gOgDfwEhQIHuONWOlb9YRX5Fz982yEo42uXKXjhs33hbjJV9iyn9eo25Tajsu0ecjjS2mvxx42G0+Gtpfe3FcViHcZ5dq7H5Co6e2Fevn//VmH8fcAHQLMrhnQ4+Oa38CyXRupRCJXlLVelm4zQwkq16SQ4/xlp0VXeJu/XeuB26OvLosfIJ66zROpxT5vI4tPWZXefciiejGuTYwTL4zzw3bnTxY+ilwYyM3SXQND/h41iCnrHnDu3Axcc9+gW4jvuxo+7vKeHda8sOXk2k8r9feVGD+7b+1AIqU6NNc3Mt47sMG5/2hIjRC5fpgumt3E1OGEfGjS8rVemCB67CdgeQNyvi8PpQudoBz0hwEfnovVuwjo88usx3Q42WlqwrES7eQ0+Lo4s1gPy/h+aAjiPWdW72zj4rbjw0sZBXWVxIObmEXH23pLb/wCAzg9iw4QczlriFHwMhU+Ho7DsDCnaX6Vd9OS+vRVkZCIRAsh02T/TJUWrrGYJstyc6/1wj6YTs8wV0OuoRzINiuQ69W56o+pecc478HNax9qa2GRzrpLqCb2jagSzdBx7nGO8ePmZWwjBSE4uZbm1uz9EJUPLaEkMizqfB4QWvQhimtD3JmaxtcgflcaZHXzOm6o2o/NzvOPa6bCMuTjvI98/j6uNUZ+jvDKaSD9CGWlF/YRdx398FQp85elC0fjH9TaOc1esOlfj9VGdwuoHA/QjBGIbgmfXFdbLkzCK/qw4+MlytmATzb5ARDFVqcpwYVReacDj2EJBVU2cR0oXyuO5X9YNcUSXHE5jVYY1plr+XyQZU9U6CH1Kk/zBk8PhpmuJ04KVn1kuS4u64G606sR32JOiPSDFUw+y5c7XelcXix1ReNEL/Zyt3whRJEt2/w+2q0HrbVZEb+tKHaoEkSBjYPgkKEbI6Q8L+gglAa2X48C7I9+TdEwelJ7iX0WJyHm20tDBC4VV0q4EgeLEdGi6Ib9Oawufe2f7DFFHV0xoe0ERc65me1mU/tSraN05HXDwOyy2StOGH+q5Ld/2b8KwqrhdT9c56uKyJ4KrOZ9wWAnl7SGbb0cF36VK3qsC46bykH9B+FQf/9Hg7ijlaFM/CLqxxcg+VI9283g1kiC3Z6J2uu23t1r6bN9QhNU2yR9rdejj1HXv+1HtAETvh1pjCVYUSieomDya1aihGbLa5VGKd0pd4H5NdEEwAbrsSBO+82BZ06ex905mbqintUfbTxjyN6rsIXMeoYy1hCOYy6ABBr2jNAty/nXbMGVmMixqg4BAo+Ai9cnbFikKz7yR21JBOaKX01iazuJQ9qjC9w
/JDRpGrLbCxcj4g0eA64dDVF2hKKtUNn1WK7pjoQJ9MKzWz5WShPXjLf6SrtCyfu4CvMEKXUzbRCoN6gdHit2tZCv9LPgcSmJb1qhzXwubSzEtSbHLxtx/TyK+7GKisnxw3pc1d4JTmtWuCrp7GImPfRn/tW21lBNZ+cvVpCJ2zHeySznUPrvxvHN6wpRjU67KdlAWSF2kyI8/Ibccp4TJooTVIJlDMTZuNvGrfHKLRrKUsjaFnAWsA6GMbatfc2wQFbXEoRTePxNnXEIbniWaEdJ+3ipdWv8TPg5D/4Hgpl4JNG8/xGG1P/WdrE6WOFrqjVVyPOBamDN/DW2qnkBptLxQllRQDA8BxF08sQxu7a1b9psofc3z9VPvXbgyN3eXEe/2/6huld8vcQxREWNSfoF2csZsRfTpcOX9iziJbyoZMBL0CfKa7ow9G9i4jmJDbovY9yLD4cXX2n0qvTkn2ot+x9hX6EIMzKobRjV8RTr8vwa5AIbNciWIY1LbaSoLGTriZBLKSuv66r5tGtzvZjCUVO5pMqASFOxHaAzepw8/+5KRz9tLTpf5opiO6kxudRpnCxDRGggqmKJAOguWMKGiii8BlMqIXJlo6l1mB1WWI31r+4k/dFyl7YSWFwWLcYd26SJ6MzNZSUgd//fNMgGM2tRKGe3Jn6iGqLxrTDxF7+BF2SqmvfNKmWfZw3OLT2oyBT3kbLziepZ6dgzRRJ4TuiUfrL1XgZS8atexrEuEwD71CihD1BPx8FdGR8/ZWDbOE6ttqcHEd8CSLURVqd/d7doByHwYsax6/AFwV+oFhnlBDsg8Nl1/dWnu4hvFOqrtCEiTn/RMDWFDdZHXfVUjs1ErjhxLswR1jMxSiG9NUSHaMOD4rnTrCyH1TD5ZLiWndAsxXLDnWZOdel/Mx3LCwmS8+OinBL0Zo3ivgFi2CDkkVgMHraN8IGbpahC2TuSz+VlGSB92X6MsC1LPCNDk6ydKeENhRr39WPwBjHDD03RtRiYL/2ROat34lv4vhyvSmaQK9K6ZQoIe2LTzRPtJNIFD/cQqHygBYkH0oouuH6ZRWHRFfFZEbU0c0Wc74h//WBkQyz8nRsTQq5GOWuL6w7hbO7qDkD1mlUOa56Hg3IM3FaQzfBxRbfo1MVvK5a0rlj0/ZWeSsspmbC3N1ZYaUvqxrRZy3nbPlKfFndPpM54iTh42eXHQkonLkNh3zrodEOg6gRnpPp0/ZDkyUbfEGAYDKhmlo8l5i14wljS9J/hA30UB3nDUtUQmCz5uB+w+qcLzfuB6MA1NAKYscx3ZZr7XDyyYl1jXEKyIHK/t/TfxFwrT4sW0XkSBXU4/JkUdWN+55+85GkTpPj3/64Y+2y9c1GCpZYllfFVv4PsYrD1sSG5OdVeuq7jcwbsBD/RX1QWU7f06nvDyEmFJz/K6hewETj/jThH4CBCdnQbkosKNJ3y6YUGf7stCoZhjJwYyiX0UfwUfTGdEYuD1tQqE7N12h8VVzbvoqqEcsKPpOH42Kp2Kng39omZYybAzP4+VI28WgegM1lFb4PHZTUXOie0Dgug060ZzLRa/F6dhqu19nC/rIubT3E0rK0a4V3sir8s91+gDhP/Ba4ZyEZTmmabr76y68aq02Vb20oEWhlPkhaL9IttPL/srBcgRtK5ptmrWrbdJZ7COo1vEQ8yL4mH2U2f7UbVewuI3cn0NaLqdP7wbYfWpPrbA9iAau9HyUkGrqs6ANRAoVqhI5vG3wjfsc0DsYbafMaeae8lgc+E69dwtGTpGrIcmH9wcuPb/7v3H8iFMHgH52NevpY32Fggbxsfway0FKLz4tAtdunMa5lw+AgabO8nNa1rDcascY8cu8tiOcKT3vuXUfUO5oSsq6mvm+ZFzSjosIYLUP37Byq50UjTqlovSMQXPaG1czl27N/xY+1Jv+Z5FAPEuuC2/ii3WTnTc9a5D3yP7mcZ+iN/wAU1o/vSdMIEyvLxqbBSWQX3mcX
Ors6CwUMtv+mCWnUom6bmEr8yFSfCOabjrREjbOpu9W/IiYTWLxVZHPvzt9OEWZF0bWy1VAbp1rqzi4RznPNup+BJecGunW/s/hgVF4Wj2FoqCU9E1jojaGSr+IiDqN9Jm0Y3fdDnA4I7eiknB+qBWx6+aPGHiLTWqyTug+c8yYTO7dZ7pnG6FrDlX+0IkPkv60L1Wv6Nrnlsp/dn9ktFuTpul6RM7nvvJV2OFlcpObIqO4leX+vBmiW7qQ9/RFS+4U1X/VRW4FtKhGl2seR5JJFJabvmZNqmvYw8JorFJgLcFXwjJb8d59ei+clgmitoLKz+E44DzlDj+Ij3RdJt4NMfoqS8E7N+FVaZcu4y1cLafyNaWeZjUaTn8qJtWUJOwjIliNAJTy5+DQS0Gl5AFt3jfX1YnDmJytXTl00Y6eb6bj7IT5WdboaCZJ3+9hJ+QOgQ3ETSPcVB3VF+IPDgmWf/Nxwa+8IHOThOsqx862Zp/WBRTkIPFqidFo9QB1UgUiyW6Qc5TuYLEtSGCBuGbEgkKK/hxwzDNW7IdfLPwZWFwgi+iZvFrtRPZ/813RoWpRHbN9eq0DNbRwxhQdBfC6ar/CiJcgyh89Y2FbP7mNWSlcX42hWVYnwi1rfiBfdsl03xYUgbotuLkkMkXqoDInFoENEf9Hh1SFX088B2PedS9Oej+XWonJ0nyW/qvZOS9GxuVC6unzAeg9vlRtMHRdOlYOFM96D2lRjtYxjBuj4PAn9VavVPfIYTFPpu4dRY6VRd01/asy6MTOBCeq/ZY7CYuz/hiisYtasn32o21rqq6mu63YtoZyihLdnGDcOb6eT+EMqGCL8r20GIN0mS9UXfiVdjIlfDg8oDM+BMcDoih3SkHOqzYiLeUSt+YU9QEY/6+fvR1tdolpInzmJvZM9TsRbZb/tF577SyziMY33mZUbgwLQizZ6Td6elZIaDuPYBu/Kqu/YchMX7ehXQbcKPfLm6denGAryNCUIPxfBohEAfEao7zhSqPyr3bzy+RQVf/MfFynJJtRDpvahv8nVxNVJM8VLVqrLW8E9J2gnXmH8jW1C9Vrau8IrrIVP/Wd0g0Lc9qX62n60yyJdtgZl6EHfJvoP4qdQW++mvxIZed8o1thgwOdx5hMryfdTXnTd0vijx5kekui4eFio9aGNaDNlSHKPsRLlO69C4Fu4WXn4bSwzOmDC3/O2mp/1CYmAKu7+GFEqXdh40WkTxP+6Hsdzg8rQZkm6EN77W9p76xSBJQHU7Ch5UckbinAR9+xTSdIK5s0ym24dwJOHovwL0SPOBDzD6mYwIcdYtwJoHQcS+h76X54RQGVsCXZYQy9wfUpfnQloFbKeZPuRGFKi8c7kxv/TM1xQ1MKLDjBkG/4QI+7Uo/BP1jKZlnW53c2ZFyi6kNu0FBCmWgnv8wnj2qju6jzR5fP94BdYaLBBggwuX/IFtXhj4VjSG+M11oWi0icBirzb430AZJv6bjw62QWMKwsBeDNg2kKZ8ri+bIYOyiC8f4kt45YSKU6wlFLVPmZZqwjATl/Ts9/yNjnXhkckTy9B0Mf+ix+NvA1Ak8Gss8YHKHBvn/oU8/ZL/9cX5OpJKItroF8duHCMtDGGHRYeJGKMwBEtbF9qemopNG5+Do+cbqS3fnPv2hqWGMxzKwIsNIGzJTOcZmnrkDHR+qW4ERErjIszehlm3hTGBseuvwYSg8yaYbHXlkw/4j+t4p7V3s2fZ/0FfV/FKKzTFSz5s/4O+gAwR1RmV/x/ujrs5/l/tKb7BV/1fBPr3wSNftvz690EYHEK5/0HZ/hLysc+35YXY0H9/BVsUgG/c/35FYOr/gAw9OHTW2Vb9O4xB/+e/g1Vel9V/t4Ch/wf77+vx+u9Q+f9d4G8g0t9lgXG/2PzFeP/dxd/PCFRn/13TrfUqoYURgtb/7eHMTLX0f8n/HiTu9vzfxz57Vr8GFerGF/MQ/4O8C/G+mHTsujzdwHbS4PPrdnf
/fX6t4gn8WPdx+f7LgHdSp3GnxEneGeNab/U4vH9Pxm0b+/cDHfgDE6dtuYz7kLFj954V/WZ5Ee/d9v87w6erS/DNbZzeo/E6vTfw/lrUV/4+EvN3wc//PQr93yPgVPEW/w/6+ffrC5GH90nY2mN064RkoRzBumu2W3Fu+f6kiO//WJr9hOB4gLI5/f5bvpLBmZ6FDfqdZfiZRWgANITJ8GhP8h1FvSUX+/t8v1CGySpCzF+xfchB86fkWlGqTOf+7BzfzR/mU8LMp5bs8Jmql2prSxkEKsMrk7jdkbyum7yDshAmyiOwcRGf61EfHGiy7HY2ux71tyfBGiTHM8Hg+mqe4s8DnHoqUhh+PfgHQzouAvgizSaqftluv/QIgTxkphYJyCFlt0M9L5+Sfi8/7Ll+ExojJH9rRJVwNwKgBNNf58iFh8SEdeuldchJqmARQQS3BHK8ESoTTKpNpNQJoPgnQTmuusyBAv5Kh+9SAzM/nV+3zJVj/HVqhEY2ZBU4fLTCTgboT1V2Rze0H+ngv0J54qo8aEn7tC5ymMLLCRbhhbLpKRDJx9H1D5+baYv0LJqQVzKIGqt/OdEH024L0pFpS+GrAND0nqDIbg3gA+U9N31OeXZGFY17jsTi9CkDBCqk0SIVrgSxDdhVtRWNX5O8Varc35KmXirA8C9NDREr0YrXSWmmMSUV5d3FB/ELuUxfBfsrYfpjDSl248MLjUYN5Uo2h7n2h3Zb1CwCUEWM79ftqapP6HGYfJqd5Ysl1H/Ptc8/krFFofjA8ck3osUcVenUjLNN+o8WxeSvUf5HenErt7w7LlDjgAlsWFB1umGoWaaWWZv/bT0QmRG5XtWY6ir3kZcjiBPMUHmWPzqkIfUnuYowO5z7ZcLG3+zQ+GUveYD+Tf3LwP63Ll7Pmzs/zbxqGdFyLG/2zUd0+cWBu7haSd2xwH55NBo9YCY1yYxncPsKP3EWvCoUuUKfAuu4PNvi0gsJKvU0tWECfoCcMIn43fyibZK7j5P7r7uyhNVsLGH8rKqv9Wf+twMQ2TzD5lKrEGY+GVIWWE7D+ZvcuZfDMGlfd6zOngomv7pBt8f+UHdpLT+Dz+0UKewnG3Jo7bVCO9M9WHslcif8hNlZ14tR9CqCqFgv69uv9/xNiFxP+vgcpwSK7Ewio0dOQen58NF5bf4az3QgsGeYo+maFh5qPL8ge9VCW/K8tr7mqXpYp0WNM3KSXWampYuxaHGuzfji9iwVk4/Md7StrHN6jnf7VPqUv8WW1ybkpbOqv24s/vDA93UdHT3pIui6zrO8nochOFPqWSjIcsrvpxTw4sZf0llzyN9Iz+DnhEtRMWn0cnbblJiSO0vWmfoP/zNj8fzUZ3pZf2h/Y5blZTE92N2b9yBqE0KtKKDvzH443pJ4ybrlSce39KMIDQg9PMiqkrGdYVMrZAe7/u60HLH+6pc4Xf3mNloIw4Y6pQnz0fN0jAxdMPl2wnT6iktq5pC640Wlx2BwRTfSijq4bXojDhAyDwFRE9wf0sG/58KW1gYGRNOizws1szlkbJC72GJgh1SwycHPy5ziuvVoa8QzdBmemUPTNv2eX2nMVb91SnwGXUP3EYYbWfacalnUsmvKnDGfsJMJna3zzV/55xko4q8ndamtC4TFWBB/+YpHQNTKGEif50eIWAHqM9KSCeCd/Rq81ywdYuNZp+oD49lTwMi1yP0AgAyum9h3DAK0/8Dm328bxJOrWE5yJfv+CayXfPxxs+edwNDuGnndouszYpjBLtVlJS/bvJSZYNV3oePdUCEwPqGqUooGBui1ffWsIItcNincf5VfhC2cPxZmnAd0jtJk40CzmRSRmX7m3/DIy/lB0EUHdc3Hsf5LUOiE4NOgQCG+7MWc7nkU/1+WrmLBUSWK/hIaYBkI7i473N35+kf1vN3M9HQCVVfOuSqabCEwIAwPXR0nc+RXHCfra7GL9vLpo0nSmedtFrbl3yKOd97w2lYvonS
P13mpnLIbJQcSqgL+oC1vXU9aBnEpmKbeGWVECBw9lIZYIR/tsoGOiRJkD0z1+dt4LHlJcRPWPwJcSdt+mGp1sb0IJ/OZR8iY/DiQ0ffMxcVsmb+1f//1Aj711xgXr1pdIVIqhikw+iIYMewEP7nPHsn/YlnWX9mIGKTUYpFnSbcfa0xNi9b1LMtItCzppBXknpMEdW5k5zz+OtM07/+g01qkGfiX38V+qvKcYjqipzjXUw6Ry2vwz5uzU9DPC603r17fNBBdc6/+dka1TCmu3WLFVWC7z6T052EY2KSAsG/VD0bPTq+FJL7x10hoAz/pqG2vWdXc0lVNhfxoGUp+KPAWV4yigYt+vlX2UoBrFBUm+H0xkw+sX+n5nDet8iHrz5ejhT31WmcM/mY7izcYmUU/sDg2gc67LB0Bct6UXx3dIvtDYn8391nc4G+9IcD4oYWFGSDtHtEMOv/lkKRRf2b0N1Kg+BUGgHocyR3JUmeXFI+bENWOeDnl4k2PaVfPnK69/Q16bWcW+uW3trHuBi0Sk2cWPTcIvpjp9U3b0GtOwLA3bsdd9DBkLjBkuYZLxICUhsMiLfCHnZPGpmd1xvi9ls2oscXwVRrlwaojRftzBLYxuH+7rDIZJU4GitUCr2sROPOviobM/wLjNgi4J/fuBxY4uUgAdzxWsfkpX2m2LuQzsH+jm5m/VFxxDuzTnZ6rT4vyt5m2pw2qlr5UL2H08vL6PcvHqs3QHG+ecgJhc7soEV+W0tCm2+9fqfpufYwjiRaCRmJw1DMqEpLPtztz6gZ1L68DFaIiB1OXu6V2x+4rydbXM8lbMAwdO0AJz6HrgliF1refbfecv+IwvqgTQpjE0psBQVXe5fe6EDAEUS6mZv2/RtcXdIGXIR2DxPQiEGUX8V4W+SpbGj7nfiurpoQf263N5kvYaMU9CN4UZ7elZA+HpWNesakHu0G2v7/hgkdjwKiB8DrTiJxFKjXh/xXgG3QAgmcaCuK8K9SjIw9J7s7uKuf1m9ZWHPJa/1GyGtAao38LcBvWBzfwp2SZSeFQ8LRxU2HsJ0+pRCr0i+oIYW5Ff6RlsZwgiyDsWyv2QqCz4edHDHPu2rZmFf0ZrUwNnhEC5rczPgSpxr3MMMfmGUb0tPg3n0pzvdcxx6lRNQD7ExeD6A1FHC1fyep2J/oiOsDabRpz0ANr3WBgMYmlV0dW2bjCxZGHyx4R/taAZE+yGCIukg0elspXTE3G2pm9IpLz/7ptgv8VFGZz0GSuknhJTYyxamvHXB6EXJ5M7b2DUJtBuKWuFStDrMPMpF6eU5rk2SFkcysKeGsSHCvIUh6frID4hW44j5NpT7Z2TPgzCk356LrLIYw43GMe/D/7dQJZxL82IaD5GnitMp42IbYlMXH9UnzRhGSgGf2w69Pm3G1KUKa1UvCU/b1ZZs6Itb/9hh8y28W5AzNCDR892HJJNef6pwYEuO4IsdadfJWPn1bsC/Oye+aiYkx/CwDDB2ALOiYvsfJreEjsXR+DdIvrTQ0RgQfHBNRSppNuaZMm6NJqXjNixX+6e4aKfVSI5it7rgB/cRFP0+wsO0bJRyv/AgedSJPnNAZae/fP+gzL95pw3BoGd7L1l/j/8oe4pYKgwGGQv8nERu/106rANd12NAifmvZVs5PmVnr7OlKAARDTwwFnLltW9UvNPlaVjE4jHN36g27FngfIaP+C/HZEdy851ub0n2nHQyv8S91/wLYTDkNtdOxKX51KvPmFZRkPFCEDemiYZyd/oY5BQuImg4L6RQWJBC3LTVbocTxm9fwU/V2jbY43LDqRvx/4WSccbbsIEcgktSJBWqBbRgn+ETWWx5Yb9rU7yn6SWqQavSlTnVVDWTIvRAfSgB7737Yx6f8ybZmz+XO0qBejBFVOXRWNlkVFGAiEGAYTsNDKlvGaIzcLFIrQcmLohjKX/G6SAimGx6+s3kmPpjdo9+RQ0go3Tg4j2XXnikp27gTKTJHY/T6
5thVO9EseurvjdS4GNt5P87hs6TQtxs1BgxpuHfk1TcLWsmZrju0yYYPqhA82CGuO1roNlvC9jEETV1v7HYZvA0vPXxlxMyxHVrfTCnblyzlHiiSBPYYMgZFZAy+D+ZXKkn8WkBvwqOLqIGNZVZ5NBBjl5U4gcgG/rPQAudjohCulrqOxJzMgcof7T5ny33C0ruz99LoF9VPurpNsIlFSo3qVSlEH059RpVDYh6KuC4UMFzCnb2e39Y/70CZqVZvhxBQpKK/teSIaM5s5+UbFvtJxrG/pA8aH0n8l8iT2t4y13OQYcirO/7OTUf+6hiARdpbyBohqvdv5BsSfBmYXKD1AkVc01tOnpYE1kZqLOyoQfgM/n+Oim66PeXbdfOUikH/sR0by4MH+Yp31DAMbQlc5x8vlkkoFrforY4QqQq4WiR0jOuVjyRV1QqG39TciBbMxBUk6OYhEOSHek1cbNgfQTTVZyzRtsXw870mvB6gDyXZ5quLXh7H76aVpuNcVV5SPu+P18RcAWxlWXgtr+331h7RKgFCx37+09V/yjemGzg2x7cJmLXYaaH/hVTkIaohjjUPrKtQpE/b5Kwi+exRN0phn3S7vAkHtEGCshZ+667mufl/PMZgNCUPUgW0aePHf3FE9YRojuT+QhAnf9KPQCqZICHj40USDI6zg/XNK7OxIcuRPCygdCPRWNY4ocjGe01Z8fikn4IO/JKdwzmix0PJaH2yVGSE9j+hi+S7zcL5PMIlh6yyif0Lr58e/3qIBGomG/mvSv36o90H3GT28an6/sruCKkM/NMf+EnASjA3iJXCYJKiE2Z5XiQ7IHn8pPhT3uv/D0aWasKOJ4W6l2rPf9R8w17E3Cj26WdApRWsyabs/cQGVVF+VGaRfPinYvorNKYDaVLIa9tz/UooFKfzP5cVPQl9FiP2GcxxoExiPgK9lxUPJ1AfPbv1tzzSi3XjUxCFVeEW+7fMpXTg7hKuqbMwQi2D9v9uoYALvN374sF90Zh0Z6zeMEPDGWEUGeKkW2/C+Cd1p1sLFKmuT4PegIkqNXsg+8Ke8P6vl6D+1gRe2Nr+3YxpKtUv/Vr3TuEKBASIBgGmmONmmaaCU+oN5e0fxD9hPylGWiX6Pa0ozV2bsS4xvaC0+V9AQ5JxSjpV9P4ouslekF193a67SifQwxSrC92GV37kfcdZABO4jyEXxW1FekFupfSJpiphYnxZ5PvjWzkHhwMGWx5EFCJlzdhJPqkkIdxJNIq8tz9NZmXf8jF5WYucpgJySFwzWPKI49XX79JxfZDBbs5BYUJq6R7JLL2Jp/AvptzPp4rbwo9wYH2mt7SlskxZTvhuBEwDCcCRhwy6yzNPVJ0y4RQ9Dohl0LAvPq47fbMhNlgCHJhEYlMuxY9tqNwCqK42DWltUfhnrrjCZgxJ7jDyh0OseYJ+PNrGNitGQxfaaFabOnCUwSbVbAdoTUkvB+N56BC75yt22uhxJuxhq9Vw4UopovbBKZmVv7sk6TJwJU1o1W+87mf35PY+5mdKg63HIUakJYtTRE0DLBg2uLaMIRW3ldY+zD6ibof0pfqSylJBebXHawri/WSLXbFCHRfMcp2GlOmAaX5Kp421zWzqVMIbq3XcvDOIXy3eRD03+2431Mv+w3/tn/Jv0+rKcZwtlN/b/5kI9H3rAbyqDwj7rAT1GD/WFMAc+acGVWNbFULufZtyCro/bLz/7x/TBKqXl+xkxsjkFRbhR+6CSr1n3psmWZU3t/P0NOVJwYWpVjq9vKp+G4u1vGl1e0PJTdFe574r9dk+WQa5rN44H6+lad1EYXswkw+O41bXUAjvpoECB1hXv0/5xtbUMH321rpeBswnxT+2sZKxdrqVaMkmRB92tBEITLPWqTI9Fcy3+mdo7UGnlIygrvsnScSg4U8sorDDIdnp1Wmt5+ZqAWnUSTqKjgW2+PnuuWInK/iVbY+l8xVk0K/d9QXlyxRL7jnVHx+Mja4T+hCS
cL14kvswccxq2jYQKq5tOq0OYfxW5yOklz2FrY1l7FFU77Lwvq6MPCrjohMhExzvd+STA5PyypbHoKUS9+ClOOAVu8OH9ozj/+FkIxnzdOJrCsnJaeAIvp1/aAxaOQEJGDJzJUsbjEKblelexS8ISUNFeEMnyR9NWxYytMCO2VL+GwI0dN/lKpt23+wcG45C5ldy3KfaT6/ymq6/4jjQO7Rr8GEMCkVCNiM02S+SPW5KimFrvgVRcKE+OJTcoecL+7ElsJ2JTw/JlKNBXmzfZEMEArAOgWrM5QszaMMFUotVQn0NUUTjQ7+W1MQYaKUdfKOhaFFnPYzo1fH3KxRAPJ7s36aYQYrruFwFwp/1/aMWiycQwVjZVWlYeIvKPqapucZ7g3mnjqMYI+U4rj6VHsb+kEjOBecMOBJSIHuxIw3xZ6Gi7ieNT7XwV2ktXy66yd/UZ0i9CE1JoxvD0/qvnmOuq0KLVB54pyIHHyTgq53/rwuf41GjOuI3E5WkPT9+aW7LLBZCP6AP4tVnbN9C183hdiJtoPd4gVJgLXU00ZREY6kVRAKF/4R2mCO3xbNQdrVi89fquMu8HsZ6ty+myEM32OFj7+4W9FNu9B6qQOhABoXC0wKAXj1r9zN5q39Y4188DulcjnBYU/x46hD/IQ/xzJxyp/T9GofnrLkR1STm0mX9ZLNLNGoqu8by5pHiBuglNi4CoWrmn7fPZLCZWTupzxDvPjnByd5Ju4NjpgPvzxxHOr0zgkwQ6zFTEre/LhhJyDELxReDHVfytiiagrz2gr+Prsl2WoyHqHzR12e5atAvuzG+inehZvdAeJwmC2IuP/495tmfgJkiGEZWkioGPNUxj8qBoTla1j8aWFw8xHFPwRI/S/qc3J8RAmR4qobqsLB4kXzcW1e1pgWXt3CIQrEVSVO7RSe+eGK4wUZUGd5wSlvHPlzVI+V9eK0Wpi/dkArsAWdRRimBBSpnjduVezc5dX+NDH5gVCJmSo6RuPs7OkylJ1B8/z8547CZXcfgy3Z3wYFRvFCrH4XMRWk+2ZI4EfbXhQ/YWQgGf0Gw8Ep06nwRLB70WG7qM361uUdjazkt+V6zzPp07c+a8KTFIVk4QFVj38uhJFR6+WwXZmSvc4jrxQO5A4lGOMW3euvRaVa0YpZ4sSh7YzRk6C7U0J7W/0R8J74MEHH11ayTQH34VbfeuSgnia5WyfO7iOCiLps30kpbIVfeuF3myFO/o4b/Q2hyQoLb1KKr3Cn+vyfrhQQuW1nO8lbXG5xtzTMxJ4vco8j0IjKQqPoL9sjx30vWXrWH0ig2fj7QCQ5hbh01/2UltxU9a7k2D12BPD01h9JWT2zWkmV2hYeA7Nix+OpJZHPMj38leimwoDBemY9RaiLcgwINxYiR57pHWwzWaVPDVNt/71Tlx4L6Z5tWO4s/r5JlzGEEbN/T1xFffPqX6LSHcqntfkunVWbHdWpa/8Bbe9RUemFg2Tf/j1tlNAbocz6arcEbM955NW3SyL99vVIFMA8sTYoZwfy/RLEWg5R/c8cCJOf+EOtKRnnBrnL19673E9/9XDyAjARTMPsX5oDCT+9qlFzLLb5ExI1cG8POWv+7PwWoN+x48SyXc8CjWqCh0zY6qYg4vhI/aI7cNdAGBG34ZrBoaHMYrtJVBxeb6jQxdcKVwr3oPXqEDTqSLgyOuU7gyS7MUf8pHbguFjmaE7CM63o2C/v2tIAaRxh0ArDlJfAM18yWBKEWmNpyaw7L6Adag4r+v7kJ9vbmxRLFWn+Qnttr7sEZwRtLbJ6N3NQBIBONUun7x40zWq1Euqt6rkWeqALMsf6uZdCb03OykOj2/mfalSIfchRslXnuy5ST821PVfRkasLXJ9TLhKQCCV1voQ5FAqe4Rzd20XYnYn8kNa8m/SvPPrSe5ASJI5DQ92jhzsYVZ9awT8Zn9kdUOlHHmesdZWmRqzBA7YaPZC2xs0Aj5KfvBgztTkJXlEZBzIOP
3A9ONMEZzBv50MicJZUOJ6kwrtDpOD2H5C1mc8P2oNBIMTg4QYCPmFEGHZg6zEj0F122lNyN7QkdQCvFXRNRICrDydb9v+1/ekKqR9MriyjS7xsRqT/4JNGOtQSitSZ/pLpad5MIThfHAAoZ5ciXQvWi3fK+pZQsLvvNXM4Houh7JA3CH7nToDBMRytx+CRnr4QWkYtksgEFnF1l1JVVM2aGnDbARLApTOA/4UVpXY+gwrFF4at0/MlaayMfJEZkiD6fV9MCkbZSkhq2IvH4Q/iZCPBo4G4hCE7x0cRWe3fUph4mNR3qVQYxpV16Ojz5BaL/4VjYuyA6zzGD8ymbMja7tF/3lhbEEJCH8acJ8j7T2E1nbYja+ncyRopcFZLooQoTBNRKfDET3OWP7iVg7MpqMOxHkq2Jc3h4tDS0eUxlwTYoZeC997foe6Azp1/ZPK9ZUFBLTUjYHbOHkMlrqQqPPAmFHQBil7ayUet2Zbszs2UdWqL5UeHVtjx2g6TDWHwCxyV8d9CedlVeYuU1wAKmzXwbD7n7C6LJn6jVzck3hvnwLPPkv3Pf8pY7oHiFDcyfd/IhqhZVecCLP4ZOjEhxfyNl4/c4P3Qmzf2OWPM+1W7E2s0/+Im2gqbwi/O2IutlfKbALEUP3w+8sKMvh1BNoXwPWsnFKAugR1mrha3Y6kFMbu7C6pnl95H2jXnYGQTCejeQ0pJA5t/bSL39ZIEPytNcKWrU1lY3vtzFLjx2DYcY9kJ8PemBD49X41z9rd4DYlZnh8lb5RTkg1yDQj/U3V3TAXgI/rckrsed+pvXi3AsxbISoexZaPrSq41DxP+Rv51+pNC5oVgnOrUm7LCVWah+V/KNchfjl/49TA8mNbPvamV9KQCzZEX1gjangJ/MJLvKv7Tqmpq8pDRDdSvKP1UFPBpP8ZVtc7q849Fc89Uak1pZ+GljDtbhy37Mo7YiEusYme+y+JvZjp0fOdiCP4GRpfuIfMjmN/vQBM3/vWeIn7cX8XCXDz54LB1HzHIGSWfAcEZW9tFPW3XVUMv4a2aAx9BdiAHENRYJptL9RvqHpNEP03IlZVRdCQUa6wdsP15jGt/0elTRYjPwnkZ+DXdsVXt1En4V/vSL0bKVPd2LxOm/bVIdTCn3eG/o1zoCiiornHMu1N8A2cx2Vjest2N9svlfy/irjUkeSP7/lGJRVDY/w5+4gaEnrWWRMBOBpxv1XpUOiAppJx1hs2fGUXT3ltAbOGSgW/vHwzlQd6amaj36lJig1P/E/OEgF+L4fEO3IsZXG9M/xl6pXjVg3jFaE+uV7N93VfZ79N+tZokaGplIjch+Luq+RQeKQVUo8i0Hgvg/4F3UFeqxTwOmKC2oAs76mWJBO45bmbxPiEaE//TuL33bZCT30Q05s5Xgq7QrfGDACd0fh51vITfg1/u8ySXKanb2bJ0Mso2kso0BZinBQEKkYVcQ97T5bBYDSPWQmvTdX/L4bIlHML8x7CQdtiot3Xxzv6mW8fUC+N2AL5TvJSvGsMJWfm01yUxylCYSw05ptTUAe1velis8/oYf9l19VPG8AYeAAtxecWbIGzEXLi8S1U9cNHR8x3o60quIGRjrLvyFwqbUyo1VCp4s90RHqXsMU2UeFxkqxqoEdnel8iY6Qqiwl0F+yldlodBdWJWz4JxRJBF/UGBR6PgZ8ssAv1gP2VN9yxO61A2EHdXYPMdwMDyQbC6rIbM2pN/97yr7XDDermrhl67yqBAipoXvo0X/lO9eWXadj6NFcHTLOKScTOiSr/MXyab1RBCezCRZbZNN8wk2GJeSviEe1QWwXCwxixSYQUuKcfNS+8lmAXYCm6RyLBq8OPkBSfMLSICjaL8ntr372AdzRKfht0y33AB0aYfCfeJPcUzF2rtr+9lnnloZatao5n1b2lDBsJ+Lvpknvryy92H0nMMZggX62GIbA0JRfUWIkN0pNjv5NzHwydzx/HS8ZtPl2OoTCa2QzR8m
vO1ZG53GUfVvY45j7iPkHmCnp8mrjAClndJUC4+j2vd2j5pkf4Ae0WMSSVPjpOcVbNALdxw6P3xAmK8eToD1VjuQo/9YATGSeBNltX2DyyeIvxyn8KZcP2/SvUlcFKiPpJ9USe3eshL38OH8MEB+hgK0ni1MZ2qS09mHJw6udJdI0MfrHeD3kPEy06iHzYgKzP9EdANKXlfhUzFRZ384xCwZWiW4n8PQwRwR96LhbYo4bHFhZFMcL+POsOCbbJslx6HXxkpEpUr/z0Sb+knrJa6YdvxaJcO3VQKNe4YDz+OrlTY7lyv1wUw/RAy6i2hFiBAFvmisu5VdeEnxt52I+K5bJ/Qm/IBlYTfLcqOeiIIDr1PZY9hAU1Wh7CSV8iG3sFJih2bbWV9ImAe2tCaPUjhfzV8N4cMlxck3WnvQu2zcapn2/tjd36wfxWAzP7uC9CcfcDYMkm+7GW/Z99Zh3NbvQwpNuFGCE0uHleKnz+YYVMioMXnIiPV+ljnjEz2dLsxIhnQezw2mys/I8f644bXHCqfZYYCFlHXTdE23C+1AVmkZbpXt7+Lrg3sERVSIuIKp0eR1wX+NHJbrfidVFTCrP2mmOPXwfE4RGvJTakiA99I1CFapx/mD7317wsHB/P1DBzUFsehN0DdMV4r/sRhS/twziK3YL0Gz7YzZd3VusYqTolm1JmvGyVcwG+0rebfFZIkMk+bGsvzk2UDscd6s+zTZJPlQdpRgtYa6wrvuS46sd29eTmZJVoIwb3K1Nn6CEnJfAF/npg4OvDbGq4THF/iqDsQgBWX692Q3IXwwz9Ec5bRztcIcpneragJ0webF5pYgTK19A7nTL5L8KjuXju2b9onakk0KzdiA3/tU8G4bVo80A1kmCxZLy/YCIRUCugMVwvFbfB3CBwOejJUpR4A+s8Iovhms+eoSrwDqGcxBrE6XfQJjtCxo1RpRvf06qVAjpOIVmkBwkmeC6qCwezLP1V7vHKu67dPUVn5LEvd9aY7ybpkVZDcIj19YNx7Wn/JD52HFZRcDCCvyD9/MTE0GTBa+fGyKw3UqHvvdpheSaTpGtOLQtMhcS3z51SZfSbak8/eIgIkD6OG3c1uOHyra5WVMYEzkCQPDi6gVsn3gRjK0nmILfYpFa1804CPCl5OyDLWh0uZ5qFGpqht3T/7mH9GgGg6ePZ8O6G2HOehih/hk6xP8rGYAx+m5a1c9QZVRBrK/fU5LsrzLkxWjTzyREnOOLtIX/t7TLV4oXtvY/Z1qL8d/yDHrwZ+hWG/lvQNMub2a0KHGiw+XKDC+Hpr9cxWvCS8F5Weym2J+awgBpIa6+0vsgdsU8qegGVZqbkiEpYVs+Av0thVqxg9IcNXkvQdNh+C5Zqv1CJKpe1jxwfRjgJCSjn3a6bNmxAIRORG/oz1SdOskdrV+71hs3nz2CoqT3agDRJUXZ5/WJEE/m1DVEzJ8nWWkByPPjZfhoZC6VBIusgdpgAsl5/PrFoQvDw/a1aJXvj3YDuXS6GwKL0z95sdLOYKjBD/OpFk6Vs5a1eLGU96LsTCDV8G+opKW39GTFnSuWY3NP/jftzIbQ80gShrlT+1vO5dZpE9g3NkSboo5KKVf4OjpXY//CDRwMbY5KfB7NmFbJLfg8OD5STr72DhUEI67k3Ar61XPKqa+UFv39v6EWUw0gPyJ74+P3y7wW0bVojw+SbJVu7vO3aLVlb98jQpRlnpX02Pr5AwoGA21ZqCxfbxFcLrsGxSq7UopCTp4/dHyjtbKFycRIYm+/SKjV1ODyvZQPM4AroatZh8whS9q4eihn4V+eEo2KyyD6YBdyMaR7U5wjUGTHMOiA/mgaUVhD+b4fFNvf1lg4sP6Mu2sWT56Byhzhh9A3/i87koNaauBkRdhe1DxA2L6Fzcj6MVMSblJ6P+AdBp38ud+6HTctDJfgdcXdYe2BAIv4BKCZg0oCJo5rd6NXwyvc51p7pG5d8sUoG1MlzZdWm7iKASF
tkYH5IkcFnVAdD6E3X+hXM3ojGUWUatcNxR2lZul70oF5bhNU8Xt87OdywdeX7p+Xgi5Bzy4J1GTMZxpVS8tf3wYkPf6CaCUw/cxr3wYZKUVzz4T9F+dMWL7EkWqIhKnhLxHUOIis4JvHiOcEz/XWXKeMkUaXY9innKRH11iQ413R5Cc9mKUZajbR3sCCc7f4UgGT6QjCHfR1+pERTs+o0MkoGPRNP/u1mTE5Yl2jX8spOk06jhvOPQWh3bvDDIz8h7TQrhqmFnYniJRTfqyVY8+xr2tLymT7nYas9ZlEeTMBLxmu3HO5FJRu4fHQYV9xXVzwyjFxFzIU/nysmBLbPWyouU8lKvaoOpu31xMoVPACcW9UmWEu0KSusiG4O3pMeH75m4+HHkJ2Os2Xh5uHZL6YeFIyhL7w8QGyz52yk28dqGZawzzboehiq2TngfD02rrqMfY59jkFJYdCqP/WPKMoRPi6sb0rpR8hwNukoZpkVCRZsDY+YQtHo7bkQJ3Ql26BbXLXYukaXv/+CRvwT6iJsx0wb4xCwQxX/ZbzvKv5vWuEB71M2FYPQwyRrJ/jTQgvwEjTi4N9qAM1An9IyQcWHa0nog5DQMyCpowDO4Sw6z581WsuK6LL12GDeYmBHWLF9iP9lmSZvfwPIYj27KlF8xpy5uYf5bIfY2qiioWMEJWGzyaNqaEIFRrU/bdH8OT+IH/1E7y/NYmdK870OdYZA7rV/jPwuapH0MYbWc4t4uKWHHUUkPpj6eabNtKxuW4+JFBpsOpMdP5v3YTusSfWrGTK8DVS+QBc9agA08HiUULBPfMopmHq0nTSqj4vk2bEmPAf0WFAbKvxXa+9/ehc1T/iwNAN+mGwrCodcQKR5FiisBnm+Q9K7A44IiVp1H0nDox4eWz6V116QWyio5EDgnTH0yftLnQ0uliTitcuLFtPVzft8Don0UZev3NhqIuSJJjkibbcJDbgdt1kcphajSyCuIv8Bc06Ne3KLPav2xrEoR4wMU7Aa/HIvz5wut1iI4soWjUhbQu+hwe0m2xh8vTL7nxu0kE3pbOA501F81gAp93bRGCmWGNPDk7r35JnIOhEjxbaKgm5VS9OSm/TjvkXa4kqSVypTurooPz1BOjphMPsyDKCGLbi54n++g90JHC8LSZ5x+0kNS7nOVqICNYZlXbACkwuauWWQoM5x9GVIo+XBrXXlIfvOVrB+HsW5QWo6J88LVz+pZVlT9O/8tPuyXV3UfroTLVfcdgJ7C7IBjiM1uec9qtv4zum3hFO0KFqxt/SWHEOoMY3Xsdpg3IYPeiUL/8r+srfSQnXk6qWOFVlwdUXKKUyhcaAAurCihCz+d7P56B/u2Hb8OUGHVwJzPIc5z9qLa44CASpf20GQP1yQORXcAYQMHhE3rdIzfxOUMM6MQ3z+W7H/ggA9HLj/dExCW/PWg0/Mxl63aCFGJtWsTFbMuRsIIBstH/7gvj9h378FKdQFgSqyReW25e5SX01p1WQk3gKrX5GawKYSlIDYPy9HdGP48/Hj2Ix7KlfzOmLyhbfA1HBtR9znJqe/kFhs9+1COfhblCCYhib7zKZ1e9Jm1JkSepzyh+dbNYLt750dhNRtaKoEP5tq4P+ZACVueJJNpR4aW2aPXo/DOvtvGA1kz4vco1ZQYCshLe+YpeGmKtFAlEoTCP0fRWNY+KbfOuEbrk5tSuyAy/ue17satmhv9dKWk52EShmpOl2ANtZQFmBOFk1Ii0ji79iKBzCuZEtzH8c6lo2S8+hFtzcipEYCnSTxI2CnZyk94l7aPEFhq/HGOxthesPFPVrrycalIpbgvmb4awkA2mlPGyl2/L7VdyUg1BB8bcgVWV2SSuyNeLX7epm99cMZAuW5tEn+Y0TaTS7r8jTZ3xadQ1nFS+e/Ws2gNkBbmAAlSL0Xw8RIA9ZCGsBI9ovO94DMFGBY9jxb2MVwuhGZPR3iTcFlM4MbUHsaOOrZrIxkuVa2Px9XhQ
8nB0TSTZAAUPJ7IjDcdeVuuhn+lmEUHYASN4iyQ7UA+Z3PCoc0jAIOXdCaovIQi8Gyz3bTtx98Bb2cB0dzmKvJmt3y9vSTDONqP9eDFgywpXUCey951jiF6YlqI4RwEZpL9eqm/ezjgFb3XO9zJf3I0vhO3/xMjV8WNuPULBVMflbVgaq2/Ccq4bihki8rZ0raAwLbIR8v8tpYFT62HlxqkCv0nPPGDxl6naKRniF7m74tJew7wL520B4jfgN+P0zlzHmt+oubi4TPHTtnSjcE4NAEpyc3RWEqOkvbe6fT06sk7wvLTRPLzqXGVetmOX22i17DTLQV6vET2Yq3bluwy8qeOyc+2PGVdYYL4TebJF/GL8m2lG2Mh4IQ5IMZwKYLHwiWiciRxtSqNxMhK6YFTOXwQM5ptKb4z0+ZPxVHEM91y+0uUBGkvtLT51eD1G7YZYrhaiO0fUwD8JGX/mbOuWFi3iJvXwoi/1nPXbnOGmAmIdRkBkt/EL678QJWfaNiBvhG4GtVoFgzTU5ahnGdWrVrmIpihJVFczfpIUP8sFrVCK4G/GYgxs70xagKmDSpm3MljVp8fJDKtqREccz5vjLIAH4XrPa9JCvCJ6BD3niDlGLb3Z7V8WlC3HizEIureSBKKHwbOaqMmcmcV6yw9bpdX70RhIElRR+DPaaZ/DUl8DJfQXalhpLT9eP40fYHNjvk4wvUzgy1NDVT/WeQLlmSRCaiS3xEQY/UU7KF4hGGA0B8oRuFLGEJv6oWvk99Jx+nDaA9sxZXWf7SpZfk+JmjW2/zBEkvDftOzdQSLf8mxIru4gjriLKBhm6tK9n5gxlTLe/RbsYapD7YBwU/P8YZ1qh6k/4HCBaOG/YVy4y59vH0tzAoiy7vpjCToycs9//QOTa6b562K/Ry2umT0z1UkyvT60IYfLjSxUdh8HV3Rj6ubpMme3cdLakCbHIZHKntgxif8uOtrNYRWmjWAIdm6TZOz3xb6odl8ZVGwuj2eLTr2PQJuv/evR7Cj9Z3/8L3kaKQES+shmf3c9+p84rUhYn9/E/L9vzS7ycbjBh6WVXm9w+kXmbAdJYi3GG8mRaza++aKu63wdjQwvepBzYEyIgKY77ywqBKMdTAqL/9Br4SzU6wcKLmDbFtbdKpuRzyYP8Zk2z+2ZNctn8wJY4Ozxr1xw0imwoQUf0WV/sEedf/4yhVQDVOf7dc+8tfzjmhCtxkF02HOuHlcRmfK8LZ0FdUrVZoWlft+HOiN7KpkTYkOc0zZ1KBErw2U266xGpQc3q8JfHNDVGgt3I56Ngr86171ctyNGdq2eFkEKoAuGOHAKXqnvN8/S8pgQiFZ3zEWpq1Is/QPQhSPYJjSFttdPemWtqFCtPEKhRJ3MKIynsc235+Udodykrghd0UH87e7thLXIwlsAV6oth57Gz51Gy+ym77EZjX31UJ7msRMdbWoVZht0iYIkep/orz7aq3U/Ll/Xhm6mNLWmvU/wNqZA7emZN1BAtms0nYW3LT66+2UzmyoYYSdNcugZBqMkIyDm6qYDcbWUzS9ZyRhe5eL9U6Xt/TS5i5osJLvbfFXIIszqo3KQ7CATX7/34m5feg/niefSmmiNwNwixZYdfm/lzvlA9e5h2wnOE/GjfpUfLZQ9W/ZMYt6zw83fFsE6wBEPXt/ZpDiL6RnZuNJbg3GdUIShaYALc3FCzpkjyYr3nJD75ktULq0tnUymsGKpXqb2UzTNpAvuGGgGtOQBae3q9ELaLLbEqoN5K/Q+02s7L2Bzrya6Jh067y+g4rO2ei5wl8UFTP418Z1/hBpbFMLzsNF2f2KZ57vQ8iX1k7Y2GgOZcEgwfRyIeC/rX3BMbjI+3M6aRpl/QjqSrfasNWrH3LGX3ie6Q1f7RrQlfyK/SEm7v3pWursicsr9tBMEIBixcoqMH64VnLxrSfYTvs3xYwtnQlYcJBaN2dB2bH8AZDF7DCZJInX5qv/K8teYFoQEfBBhNT20
jxtfHlqQQdwZaguDX8mrea0t2N6ip6NWTuhYkX7SnBVnGSoZZKbHPvXNX0Ez2+sBQA345KfH0CKnevqzrvjnRV/DGgGpfiV9sM3xZ1tx4QhJAkqIAIbkPehvDd+DkLl4x7+MDgLnjqEjwdim/vl+d6J+YSLwkx+P++dvl+iVg9Btxn9Dz5DHt58nJpzZOl/CnOLT7KZODLdtHvwTFh/i/7iHMok7fT7Mhujj5r2uEfRHvzGxtmKX2MPwwtpN1sxCgsBJhUyP3k/qJsgH9TtheGFdvTuzWL8bh91NwYJ5TB9pLU68lQpmHUVaePFmy5aPjdcbTcnjniBFD/G6nUNg2+CDr5aY+6/wfQAPy0hGdFpdcNJqJHvOPXHIhJfvqLkFFt80FPlNTTodu8mIvXO+/rwTGhcahtYmsT35NDWWoUvuFg3WQa0/T7YUGnRe87b/m+G8JMdUEhTPjAz8I1Z3z8AOIQbqs201kj1zrgkngo7zGdVzauMGchaw2m19CrTSR+VGYC2GvuKI/psJfhxXFUFN1hjC0y7NyoTVXQJx/vSHmlug+RS/Uw2beIRMzkHvEtVIEsrDIICu4/fi6zfJ6ZfBlQ5gso5ywUEnKo3otDwgYFETTXa3N62/MGD+g64L5lC/nWGwntF1XEvxnpzHp/fjclBNq0acH9389vCVJkwd/Q4zmg70eX3CV+wdyvnoTZyK5F3dT7dIP6vkA35noW3Tc642dIcCJFOB3PlXM+SPLAb9UbdqJuhTNz6UirOVKTa/XgA4FinH0/Np+MWFQ8wxJ7wkj7JvTIvhfGIFLfG6ar6nONULzlGJmqvWzZhvCEqX5ua6OK2Be38M7mbsd3fv2y817A9PYBfL5dMlnVAV9CsXJQ763Ypwo6EUrejVq1MN81qT2viMbYk6kpwWZ5yiewO9zPCmUlhveksLnmYAp8R5QKkuj308FNnyAmDlnFB+CfIYoG6YqCe5hSIxjezHLB5/IJmuuOemDEHapVxhHFKG6qMh6Jg5fzRcxI2Dr8DUBzJMbHKN+B15qysHJMgO0aJb7b/MmlPsgL5mmwqSJDAlUoMJPHwCya0RswwevTzS9ZystVKxsa7sbiNO8OnE4ohdHqsNzaMvNLikEy1HYVUdqiXrojlkCzFkNSo6WyuGKY/vsxJe8RtcCerPUrHhYVCCap/orf9HjHN9z759fPBDZ3zIpmsaAi4HKQY/68BtVqHBua5BBahSWjOVmggUbQrIsCEJIHY3xvdJJHv3pHkCKzyFXP3O4WPEU58N1kksBKQRkxymABr/Hc3CYeRU+e6FvQcV7eITErz3bU0n0JtIA7BQs3/nJkkvMhI4PAZpDBfo4xoIMk/+yAKESuZ5xpFl43xbuaY+8RAXP/5IQSp5TfVc0iwxyK4zBY4YCfwaqS6/HdhwUdZ4ZdJjABNHBmv1x76lHuUTWLOALotgflKFHm0esoCz0uUD6wJviQpHhCo04m1iYR8LTqNWv269Be9kkd7EUCJvQdEJBD7Dfm+xUluXE+zlLrsnL6vJsK0tOey9SJPZTclJTHkZRadliizswMF+hsAB59PWOF17n0/MW3S12HmbWo+Pyyr5ZnBxDpCSXPh38ZOBLgvvyfln2gK7y9eeoq4OhEFCHY8VE6wVTdJb8EWOcLN2QHyZmpJbV+Tm8ej8gdE7nAXLEJS2xeHgE6804E+xJPYKhZQgtER8o3fT14JVrYQeU4kIc17jtazjmwSPFb2Sp9Tw8+PAQ8gemPsKrZVMg9NPnVPKlJHNPqDomvI6dw8LlNHU85ej5+fEqBZGfV+bTByPQMHHxdOe/xWXasyNJON1d8+y7Ky5+JpG7ZHoLSfIxRu1zDr8v48Of4yA4N+izJMRPiFVAoKcqOwnJ721wObsqXStdjRcqA3TQ97/ky7hxabq3e4OikVeMiDvAcYpQ6wlu6dmFGi8JhhOSCV+HstrZJMiBRJHl55zqKpkUf3ULt3E1f/m/djddG/w4+YC
+CrjfmuoCTJuzIJ/wsfRi7Bc6y+5Z2omX6ughID8YqVFcWRo3zbnn2b0tQaG/gSgIYer3+kdoevfn/B7+6gQ1Tbd0TvJeC1NLXaA4DPKTOy+eXXZj0OmzXYSscVEkeW3ND1HtiLPqx9d32QqKZ/ubzMotRHPapskHlxXP+S3p0RP6Cx6OHE9Ha2qiwlF8qQeF6BclHqHtneiWWL5u8CY3Yy8wVFVDx8qyXuuJTJGG94VicA6P1QYFJDCOmxQk3uOyrzhhJM5ZIhdUJeFrKXDQBhBKfZljYkD2bsOvphuAUwGhjiIVC89s+V23/phQ/FfNVeINQmziPaOPDdogJQb3KhhRWUzUfvSm64mmSMFDBOfRtRgeWiAzlIXGQk5p9nP3UoI71wv17+yJbeVYTSe6aqOYN2UYhvg3XTkzFJlbRXL2t5NaRXvUHk9XXS50yV3ggWgQxA2E5vl8tj9qgGmLLZ4cZ4sfk2cv2Bf6DM0u9DsIJK4MRrQHR0FP2zdj2mqSHSdgPdHrQQylVpu9Ld1Fp44Y3lMWk+GuoK/sZaG+/SC2esC/2j31cPv+fVVUGwEalA2ohHsaNL6iFUs2HY3sebEYWI2omSruD69TgiJjEEXmI9yYX/dv7rcD1i/SBWEw7kEE2PTkmTqLLCJ8cnzF87r5Hf77FAkIY6WdhvTdNKgfvKxxDNd5wWVw7mKiPqA27xppu8yL+QfE034llUSdIzeXlR+ypDadyrDJPvuwZgXkrOH/NpTcry3VqhqiQJzI7tGn9pxyvdw7j3foQVJsLmQCgFICRG6nbce3LRBWV0APzx8G56Ompm+ugyP2ByqNaQc+lcZpOGV25kKXCQu97/ubVdRgbqKY5ANT3TdcV9exYJW7uTJmrWx/RuzDwjHcE0WlGsuAhJS1Iuv73j//yUCMi2SgLGMrBUYkebKkn0kabPwKjC1bU7/C8foSPJDsXdMyxEUnBaGMiT6w+fiHOk28klw4zAunVHZ4oY6A230QtbY2eex/UCrkLJfvPHjNHp2X3hd25i8Wm2iLXtQLfOQoGy/WFRhPHaECsob7xa1ySknTsPnmey6RhyXyv/j17QBxOHCjTucsUxzDRHrN9mErY0A5i1AY7hR+9N90Sbveg1Q/MWB/JUMD++8TTuEgVhAE5Tid/qKr/SUwndMOz9YgTbCasax/X79b59hhqhEsoxNs8/HiVRqHn3gaavPFviWRo+mt/n4q1UMET59pqB++MEwvWpC76EwTrYfbSKnktrbVhjMrm8zlu0agLCUbhlHseJqu64aneG+KLXLR44g1hYOQ9/3VsYg39rae3PNrtscL6enUL7POI2XKUtvhZ8cvMpjlSJNOK5PdHdwZ7zbxyWGbdUKq6P8ls6+B/Rvzceoz+lO2009dJG4ZOvdEoCJzcJw2d5y/w0vAQYVi6yzfupGTh0KB0rd19e/I6uEFMzXH12zqWMKpqPy98dyyk4ugh/HKL6YR61VqSbD7t6ZnNcbsFHKjVYZx6sgeyzz0+Vf8wD3KgX4ApjRXv7UqZfcehM+smIVqMfF/ghdvnsNPIjSabZY6euvMSZfj9yVwKrERnn20fFdPX/r23NmVFNviWt/uLnZSw5pQL53Ny0rnqyCq2uqXFWS//M3/SP6Wg1uhlDWfK/Y0OxPn1lNgWnyo756q/gj6bnK4mnVDCEcE7DaqRmv9xAR6ywSBIhenGdiQoujrOpGxasq76Q6TeQgVEzfmp4OtwJg0vDaQdRoYVyuJ7EiyiyzdsGl6nV/Ej9/EcFt/bHNL/KwtiFzcqgOqr8Hpe9sT38M3LYv3V2f9xtvmUopdot2syieoj3syGsAXel51n0xNx0PkbaOK5cpaRkEHC5CW2khQxEicnXdl188O0+B2rO1jht/kzzIimVUbEYhERybPZdX6zWNKqqVomblx++Ji2FEg4k3bEcwtOLWOiFQNs0XsLgSRWNVkGVmN+6G/3LFV+ObkTX37/B4rYVfyL3HwJeT
ShRy33LbF8vNCGyf9u9EsNYbZnFJ8gdt/U5o6nZ3pUBPZdY3VYpqzKMw/smQpUCSdZzEMmIIUGIqsplWByuvnLxhvLJ1H/WxtJD2aRIch/8vE4uHu59ei1FXhy5Sb7BqqUQQDIn0EDsIIT7yG+hmDlkJGCSXGV+/J/Oh781dunP91iEJ/bfHPPSGttkLe+fSK4izoTRp9mm9/e2mMO/Y/w8hN5t0IfPkqA0PhbdoXm6TIzIP+tsLT+LbqxJAbgi5jviMOfdcTe0kF67th2te580xzXpzjnlJAvfQia8aNjer5svvfXcUri2sotVAT65te4oZC1QDH3kvuGiSKCGLIEjk4kdYoE95Y18juvxb7y8OQtCuQ+Y8PRvmsBG5vvlmz9Yy6XIKFZIk823n2ATgEYNdRbPwYQXijSTuCoJeZnZ/yr1QkPQJwla8ow1PL8ccVny/8Upe/LYnBeUUo+PG+FGMra+/LdWUrNYNvPF2F74cB5X53/C3qRv8ygXICqv+4rkpJsrxtvbalovqg71P+HOdBrL+M3EeaznFgyIvodCfk5m6s7QIubt3u+1Wzv/Fi03qIX/CludUlR6H7N8AtOgtBGNrzui6YVins0793lUbSwtXIKpedaEXQLDXK9V9t77E0ObScBz4Nl2TAm2XBFry3mwl47z2eXkB1U+TVpRgTI02vus6PQgEn3ZfmZFrpzYuKVSlCO9z3WVPq+GC7KQlda0oNh4VpWJWFR6n51OIk+Vno84o9K0kAuABVv6/DykNZabp+Gm/Z9Ph4810KveWk1nwg+uvlzfMDYX4V/gFmtWlchiLSAMKjQF7e0YZVMJjG8Pc7WfB6OGZFYgBha+k+i+UbgN/AQJ39Ubm3gh7SpovfN4Stovd9McrnkN3oZrkN/zoAgpRrKJmckJI95C4+4kkSVo17E1HeBiwmLS6JedZqFIFrINH4gkc8uSIZ01p3j120guBrVT8maxCiA4vpfp1TfFmIqJgTJyyOmfbacBXkRs4WnPaCjHVonOqiLC9aqUhVChh2rfGF5RThFx8Erpb40PFqFvLqiU3N4eJ8k66UnFK4rRl0Zw0m53394BC9X/KwKf6+LPOCm1KieAhD3xIKCC+7fvbZA9MG0dur4BC62oRHpdi2T/eAIL7P/XeQ7jLTjsA5BIoBLYHdez5i0P3rR5mndqbH29miubwJS6/DZ7tH5QUSLf1rqIQ25ivuiwUiqAsa0VLH4Gh6znwdbFFgQEwK2WOn1/RrAYtXl7k8RxDYYSHDymjv3bHaWlyFD8HWWad1hvqbx4aSAVZ+M2SdS33Tc50d6MXykh/FH9QXdhJc7Hf8OHquFNj78CLAHrm0hTcpFcqyvBO8Uxe3owu7h3Nq+7KN2vHa0y5p0b7RVVRD5T2gGn8TjVf01z/3GF2u5fJDYUYla+dGopiUjb38KZYy0RNyAnnM1OHq6pKg1tIrrNAYxzUXFmP182tmamhWXX3E0pHesq1HPYIYGd6QLm9aiX7TlVhk7jbu8p0nwNGjO5xF8n2wIXkFAQtpi2hSLiH3jCKZibzqHtRE48y38+Azfhi5Uf6JOFCWRi6kogpY7Yf/IiqKie/xO+2P5r1br34LvyWtXLz44OJvXRXboAWEAMrYRvQBDUBldB1aRpxD/Jl5FON4jovbfOCYHwqVe278mx/fFzed5XfxMmJCudSJn+TbS7Kz4AxOXTPiuJhEP9RUIFCs3guGEUF5/8BBkjuTIE0DI3uOfbQOWIKgJT5uY6PPtkd+DnWB5gu8vjP0bEuliN9PYKSvQt40lKxko5OUtx7G/iZFdr/xx5BOHteCwDLC9ii9pysgiz4u9TiOTk2xyBHdQ4kGovubRI1wJzcI5v3qw8lBGm+zYShiGkDOR36NYBgDGF3TS00kVGcmCVa1RoGVHWG8WA8WC2DqYJQuQt5kY3IAENnqDBK6KlXH/dECi/VxWxLAc7tPFSPlVaw3RRfyqNHKkTZBe9lBTKm1887
5Ax4A8R1ucoQ1sNAlzzc+gVkYkxBjid25ttHW/Dj7u2V64i5auNiEm7z8FLJrORBaC2JsAi6raZDB2Td+k5L4FsdQmJImM3bxuia6LTgTn280kie4NcneGxmGB6qhFtgdZ/Fh2puUJfM67uwr0KaBUKn9qAye8tIHpyKCMQDPXZWVl+6bcnrWG4SiB8m7O3miPgTJSkbs6vq19xfqMzQ7bA3bhOj9q5Mq3cENITbLW2wXTHpg5Q0WcfrC4q+tb5ktsa/Kxz2B1UKuJRJgbMNajsIEccpsrT9yBu71BaUt8hbWHHnsMtyjjA3m2tJvX2N+qnu2I2X417j5ZQBvt3ExUUC672PwB9mzp71mgdSpz1wXXLjbkbegkDOxR7fKG+L2fa90YW+w8+90EjDpHlwMEU8N2yrx06s31NSt0hJLZc7zu6lbUIcp5NOB1bU7zsy4t5j7rqNjmaPVrHJpODLafV/kzQxqhoBdOQewIzf++WgUoqHqprJxsPZ9RyTDymPsUg6tS3D8/g3x77HDqbdv37Yehxec/6aAl+zB+vqO0h5YxhZWo9cRtG8rKSpa+9G4456WOKZ+QIZyPIZlL3ghDQoI8vTem+F7UoFMwpF66r/P37CunZi1VRx8wLWzA03+29zKJD7QDk1GdGrgad16CI60lc6MUOkb4rw730zQ3/QHcYmTzS19TZBUoUGC036PHNvAm8ji5KL9ARIAIT7RFc6trvqAex74uaLnlST7IdnvaAOK50ggK8M8qjbjrS/53OGGRd8J2/42YdA8sN+/0MipWqDzdFw8eILyKs4dKdVavXgCXsizQDAoiCA/KE70myN+xhnmZrTz+gjoWNX1ZgnGmTtz727nWXQmj1Q8TyEqG7LsnYb2d90VJhRw3L3wtjuT5JL7PV2E2qV4XOi+Gf3mr+1JogWeVimsZeS7Q8qsfKymg02Sc0hng9oj50noQR7qDKV3razkjnfz1TgJ13/jJVkQBLGKx33NRIdQh6DmPpN5lJ6xXHRsPQ6LF1e0YJjO5yMVdCtxzjTw3CGxrDl0N02/uJwYtHLEPZJXZ/sxxA0kxu5DKBgO+Y3wM8ULUcRuwEmYMzJNtjydSJ4S1+M7dmzdfqXw+wFvTJg5Xd+J8RQGCO5v+fP58DspmiznVKwsQovsFg/VOi3DubSbTax1hm4ujy5Lzpn/af58SwPCXPsYNWvrTnidsE7ZAMj8W8d5drlfXY98SAU8s8aSl5bb9Iu2SRFGkuKDbu7WI5N7lVoLrLCIQaOocs8Tg5IFznWu4vaTUOJf+wA1Bh8d8mWqxMmvotiyrnSPlAXnau7eRIU8/LjPL3acJDk/2/1GfR8RvqHYUOj8NU62KHnN46lSeotCl7X+OdIaJd/uDV1xXg/gfUUutbWeJTGcAbv3NSrj/TKg5PfXEpDxxgwMyTr9KD3QAMoICPcGLb3c90n2t+UF4L2N2cRVbDaGfrDcs/Rn/i6CzPs7xRgRz9LvEn8bsA9Qv/krQ/Od/A2WzC4l2+DrqwmITBnMYP+VAfm7ZXTS4Aw/xL78qQ4QRBHH1XvyQ7qvvBEhykRnsfpk/p6fmmttx4naI/tAKSzKct+EygMGZJp6D+MqMM9nn4OwaHkjX4rk8Np/K1d0I/+E4WUXanEeo66joSruzcBgCumqpEL4FA943fbxkxic2YYUVbDfkDVNlzdMyguLD+snvck5gWpSpgmZtklRYQZ2EUZT+ISRSzXcyAzAOX4HSrXNEzbOv2QP90ZmRSB13e5x5hvuzreNWOEtA3oY8ucygZGIPxJruTVx6LpaOoqbV69+RqE4HuUXWK6b/x5zW7mxevyaCbGPpPpGfNTtPQP7+Y3rKnkPuYt+YsOTk2ndmZB4SH7I+XKkVKET44yuXia4IcTk6pWinesn2oK9iO2xin766tkestpMdG2Y8ZhT0ZQy/E2DQyjCaeYJ3yppj2Mm/uQcBDH8q8iDRhJMSwM4x7bSCE/I8Q3WESRr252
q2le/iQPZRd/xk5a4eJuke833Xgl6SPa+Gm/mPs++FuT3V3QSOZFI0uQBYmg21ITbr3ZeN8mWiqbVRL/Unfr9vsoZ8pY9VRDPPXy6xuYNnn+HPP6wE6g1EYLIu9wYtU9lJ+DdFl2nb8DD9bMtJTNGDyHEbx7FHHTV4nZ0j/zasQ5vFE9gfo1LPySDsU1e0nJcpqRQAqeu2jr69ydcBfkbrlS1pg2kkqPYc6IXW0q8zYtn+DMCxbgdhBcNO9kvahg2DejGxAiEPU+j9bg6pW+KV+mniRr98JrMvt3XMLUvZaS0Qhh03Mtf/dijNUnEh9IMJ4BYueBxlVIe/iYmGnX4oBhSAxCifc1wE794FzPrVwjY44OgYVIWeCw7KBr8un++nDiHonfdnmG+Dk/l8WZpye/33hRJTpD1nRsF92ZJnQOcoiPK5Wf/VeCh1I3cD8so42lefG3kE+Y3oW8VVS3lRQaWFUA7bJBLjWEKJrMbJ7AnotfB7oGnr544pvRBm9N83GbTqob/1oE6Wy9tBL7lk9GLMxHnL49LIFRHyWqyocF0KiPz3+BYtb2d6+vvS1x9QRL1DzMQlwZESwx3QJLENyYpD7214kw3BW4149d23waJBwa1sbwNH0iZMk70VPvseZEoOTrejrB7N35C3oOYCPj2w+MQ3NZ94Q3Wdk10q1yDLwg4uvDDNV5kWw/f/aW/hsKJqvs3wcspe7MiS0+GMfQ45kiCxmclGt4HAucoiny/0Je7VtdBhutjK7ESdeVOr+V2Ae+9Ms0sv7IP5N6dQcGw/bw+8o3AuT+Lb5hgC+2hanpuAPMTnR3k4JW8on5Xr5fkBYVc328sJoE+vz4RzOd+w6M+STbviAiONP92oI1/8rnJkvsb2dtEkafWibwoqJ3Jky2M3Ali1yBKwXXx8d3hNYi+Udgz9ZL76mYftjUxQAtLKBeK8+ABRIZahwXoby8/MorsnG33plTox0FY7h00BQyLRKY379PMNVSw2geZQKmBjIJvBNLhUqa7EjvwaQK1v3P7PgiVfEVN6dtU9yGHmocAxHGdiVdYOGB8Puo/24+hyrnAj+u1TEh9c0rIRm9dASWKv1FS+wYG0yGU+CP/w9LyZYzCmeW3ZYiEvv/mo17/imj9Jc9EGsd34H68C/Kzw20jRVBmVCX75ULJZXyiASPreygebxBu+tiMPjyRKNURU+m/gnO0g6SwIG8fUsn2VaVUQdWc73QXLhsvvX9cU7ErerbEI166RGTnEgx5/p/EtCcYGwDKBvx9dvoXE0M3yOD7S5SQxyjkwbCOwwPx7CijSDfO67egrfM3FL+4VRABYipedxPmLKtMSjb6EDcASmtk3BZ/JzjdTrC0TYlxi6uhOaS0CwN+Tzig2aDpgy3PhMu1vrGi3b51fUbY+yZWIkWyvbt3eeoqgqcEY1Ztw+bGrvd5tBFK7jNIWZH7ihanK+b9Ip1NVoiJxCmF2EMNdr7TLvijWyWoCB2KO4C1zZEjg+Gmvc0rcb+myGWtAD9ep1cilwDMw+4hiauuwID8LTnlWj9bln1mKZc5QmUMvCPhiFde9F+eGG21h9cWMACsBkeToFdKoFPu6YUbzDUV3whqx1Feot1GkD29PY4Xu9mipMZLa+3rfsNtHKfYYqUhymCTJ3cZes9nRfj66+3l4jdLXGuGKm1r43qFuvmIlVAzQckntnyLRinLqQpgiMxryS9g4eu/Awm/n/LNlTjg53K3uR63iTxRHcXZjkmMXtnYtA3l36x0XGr88i4bzOgIkCN/VoXlrUioFwt9GZ1TkdBCFWzM6MEsEZcdyFQ2OP+7j2Z8zvAJDKKMLrH4pUYZPfw3UKR42YZ8lrWj2DSgKC6oArzuRk8Sq2KgapdAc+F1YRInTcHH15m8dXfK2TmW9JurDziaujX2uGDmBAtYZqtpCR2mbxwzOfPkSp7nFXGVIiTiSqoIvjw9qECa+sNb07LXJJ75y9wqhcJ8UIP/HUwFUlWBNcF
SFvSdBzdz3JiP5Edy0PPbgZU2rp8vShzOeSGGZ7TM5BHQV++LwQXBS16mDye9dM9FkTLn12BJvrIigyqa89tslAr0PFkRX24S6CaUxyVttGYwH7IGq+R0AN8QoUnZpszqvxPXIGLvcNDkBfTW+E1qxb9Wk5HFLeyT2uP7Ki5OWAtBqcn1lu22hKK8zLv7hxX/6J6ee9tQcrdLJfhSN0IrhrJmbkTPjrv12NYcPIh67uUmY6vWwV7LhhkZI31Cm2Zc0vfo3mhNRifIj9s/KpF2s1wQPXkyy3ppZd2zoquOFEcs5nCWtLDDSZgJNhhSDI4NvFSrRL11XsxtpgJuHKLGMCIIf/D4Lwe+qQjwgzInLRGJEPgiZm16as3hO+e75623NWdNG60XEAvzleu9/9WEk1O/7J3Sm+6x+5WzvLJUcOsHS9fPhwahLZBB8TRTkkFNSrdGe6e+1lfV7wGF0/t6zBChXlKb3HHGYGaaQJlWRdeISJh5QQ0Ay3L7Or9dFSLW3gSboRAU+dY/ioM9qbFwUpKcBGTpOVmLt1fzGp9D5uTHfui3XolNcnvN7Zod0kRg1bTLco5vdq2+Yf/2N00LWfBzRvOnOLNtokL3knq9I9QmcIVrKVYn7vLM8UUg++O5/EZAgL9GwUG4AuJDIdrMyjXAbEtG9iEp5LkTBrgg34Q93f9Sku3W/fLyN/GHGajqgtjAoB5vQdUeLREDghEK3fy79QFQyyQcdhO+vHZCzB6NQZbZlO/7b5z/TiX3Qycc4M6KYonfxNbA+j1564FDf8D+e3KBg79vkOY335HbEvsabmE15jaA9NVkHiuSOEPVMg85jY7fyu5IwhWX4a9cnsDxxZc1XzdctldjnGiPH+SW6OBNSEVKusc0VkB83+Xjl6/KX2UuB8CDBc3H219DqP76lyspKiNNX8KU0HfsgIgX1bQcgeV9T90sIr6jTPR+Y3JZrsKeapoHP8kz8Fyu8o+bpdGX0tXbdLTnkD72cEHfmhS87ZGCUN8fpsOMmvIUO5dv8PDLueUSfCuo8+HBwNgGaWR4Rv7A5ga/EUXi7LsLggKN6/rYAlSaJU+fCoY4w+szzXzl8flxnMVql3xUEDdJdGi0yn0WD+B+1Kxhfoxy44Xh3NejWUpnJR7FWQn7A72lOLwui4c3LqRZ1MMebxyLYebtOzMvJeM1XLDOq55mgk7JWohMn1fhDot1o5hWELt3cFRFKpgFgCdMEzja87rdGXUdLbAlCSG4eq2VXogEuKxASwpT86WEiueZ2aGdmaKL5jPlOATqaBujFDTUhO78WpWCH1oOXAIEN5D2wZZvFs94D8Om1uOi98jSF6w5XQS1Je5AvPzc9M1sG5AhewLssYmwO5/fGD0il2W9Pyrf/OrTbOC6pb0n9hJ6HGphSzMUJOwzLBfTEhWoBobRCiwTzuDHoyMOsQK9y4niLQfHJjJq/jGzEiTXrPXizqwfvSQz3oeD9Rp85bq27FLGefkmg06ZRFL9vpGJcLyZdwQfJ8ruMoixUfysIUTwYUoS/tXQQVPxZCJpkisFSWpO+Mcb+bbugJfLHtQQzq0zthTOsIEZBbLNmmU01+Z2h+PFMZNmOFoq9JzV1/HbioizHR3BXtBnFRP/S6+Fec57ud7dIckmJi4onwp0vBny3TrufhnaqfEjAoRRLBlrUbhRf6tS+riJdC/IvveFJJgLXQZ6F91kmn3B27sJMgAO0Yot5KW4/MNyjva5V8xWB0cqk2FFpuuEnJjh1/Tr1so6JpbtUE0WcOhcWMu+BfXK4YHlJhfLKDz2kFGq3Ot+EzWdNROP/399ln743q8MvZ5d3cP7xzEY37QcCyoFLJPlpZrndjUCESiWOJei7PFO8zZ3V289lx21Bk+ZZ8+HXGWdRlyNPM6CZo8wS6394Hn5YFwLG0aOH6abbo+vxH9kKdehiybbGgS8LawhQ9LemekcO72muavhNUC3TQluZ37uDqpu0LBS5SWl8mVEyku
vS92hUC531fH0lHN3pLevnxtSQVim7YYDpxc2tigm7ao4M+YDEwSX/vg0pwM28/ihZK5hhA1/kdeGZ0L5EAG+v6l5GzYr2XWF2WasHqwKnidr5Mtu6syHBADOtSflSlc2alouOOWBVxZR2NmRmjW0qdbkQs+3Aadrkj5kzEcFUDJaJPzHRVEQ5Kckyc7iqhtcGxkFCrBQMCE8ibfPWKrBEglJPM8X+2gv5INo5ASyFeDo8g6tkww4bpUp3pAYV4RyLlg/+qEyZxpqzasAKmxbsTJWdQ1lb3sfPk5WRBaUY7ydqUlmTu4XUrjA5AssRcLW416HFr6x/FwhZyLG4VGIzEK+Wd2simqi8iKjdb39nIigCLgo60KPu/ZziaCvzT986otfTF7Imb6FTYM/HFFI3Glq/F57O7G0dr3ldPI7Y7WS7fxWjvtwSvFYOwd3pikNG8TBGX7o4XZj+ajfK+tsfxHFWiUJxgZhcSU9hyAxAZQWdu4wV8m26X1oJUFiVHZYlZQp2p4q6biheYsV4Ma1MkuyGk12ihrPO4pzWrklqcpP8wbaw60gz6QOAPf85e18ls1XspCZNJvKAkexlm/FZnp8AENrunpduJk3XKmTLDCNILSw7Ie+hb7BIV3TB0C+H9CByTVoPRbl1T/aOYb94IrNKXRpFAeIZ8015jtEVMrBgu00goWzrOnWZzfb5NF/kkvlVXhGmLphBD5lRroZdeWJlQaTodv2PbVJRifbQfQp4+/G0L8KwrLmNC9YprMM8Y7wzfrZ4HeSS1KBllYMM7dIJTlUOcuORa/8gXCwt/1xg9i9kYBUYCDl4/GAdCqu5aRgo8f+Zk1Y5bZr52yV+Fk/gfFW28RdpVWDhxDnn/Ejao/wTIvrQ/FSFIjG0B+9ihYJvl6zSiFhQrOLjae+dTi+YIN26TdOy/zmZMf37WysRJxhR2u3OhwRYG1l0ABGNQEprfP7PuUvB+rWBJENzXrnZIKlEEuRovQ+BA6vL14doG7ijnBLH6XBIZDNvxWOLM0o9EkHz2B348CoqtBmzWwECdf7GmBlq5C8QQRpBvKmRVJFtjiIwSeaXtK+EKqwpJzZleoZiO7HkanbT4iLanxfud7a4aGkFxNixK8JDnOBMUkJ19MxNxEfVaoUPd6I42Vh1A1lmI+2F1TDonS2f5iTzJZxa+UT7sJf41fbWgEVmR7fSIZGdkM4OzXtXe6YleXqxxTCKpm0Y7y+ZYXIhQtg/+0z074PEMuPtxIspQDWQp0mavSHb+FAEgPaMl9jAWpvM8amsc4J5PHynP1gDPZf1xkbY/t7DjcYt/bHtW70rwHex85932AmGook5334PvGYysoXG8k9VTHHEro/QPVqM2+JzHKaX1Vpu4Eky28wPHsj6PiBIlZlqUxSr4H76+fqJW7Ts/hmrY6tXU13E+3Dvm5s64KsRDj5gOz5COmTapOeIV0TexxeWNTB2e9i+VFdCPzrmHiMQg1v8xB1ws2iOSMObpkc3F0uRffFrDMQQEp6axsGZTiEqjgYHZ9BvXqzZVDtECsuyDPeuALgEPLCaZpi3M1Snw+msJVEKojCx5JEaHrgCG87y5hNbsm7qQSboc4wsuIKkQqNRPEigPgaIEoa81/ZfUlYEyGBVm+XSgBf6HBsbGoDZVqlvbM9cG8wnbkpeRqVBJNyhiauQzJ1Mkn28c370w2yRoxK4IvJqbCvoARwdMlUxKDXawWpsuI8o0zo+178Y012IjSEbSCsbCQ6Wty3VBZ5juVbuRstrZNerviWj8exzOML9olBhO9XPG0bdSR0+BS+ZaEosViu5odc80KqFTNbjdmcvOl5340o26qCBwQzQCVQ8vFOQODS8/zVpzuQp+fKI7QBJIy/aJiSgDrtwY5yyPtqQGcU2ydi88sR6wWfXCqkAaTuAtHUqmaagUoQf1JUTQ9gEiOpZg8jI9rkb0/H4oT8+Pg4MCv7nVHBqKe/Mew35BGbFBo0Nf8B6zkvA8n
d48/ppunr9KXbPoCWJHKUBIySeiSY2+dK1FW3zunYBN4kQVaCbzzuj8SanR1dPPP4eGU8vbkitZ1HH7NGOlldwYpDjXdRFwKKG5ihqMfN8ASrx7PITbLEPgukFEYmC9MVco8dDC2ABpmpNgDorOWjMtl254/6Ki//rV4zi37JsqCX955dc7zuP47g5K0ye01J4+YZCZ63kGNh2gUVdNZK8wtpiu2Q1EsxretH7JL86rDEHk4B/U2x/5hQYUdFwG9rCxxO8Io6xuqJNp5fKL7TwdN5EHwbVFPzxxSy5n3DSFS1r+698ij00ookis5TafxxPi/1DweYth3SXtNQ82+PZa4tksxjo4EWqgphY5x+I/dbQ/465ppZkjeQxuHwW4fDHXFe1QYzfaU5S2o/QZl/pxh9r519ErypFAWRcmW3vdlFf+0iXXQyojk1TUOn9NUT3EgysCQkTuFW14rgowKOXphWD2DDUI4JWZtnAUyyNQuQ8vI9dMolJ+z/pi3yuq4PrBBIBfCRvQR+MJ0eCb1aVwoV0VXS7SVb1A9RFUw+wFtUUXRoFCvh3Qv5RNQ+YW/P/i5Q9Kxc+ldm0pJce5dNKzawD2Dp59edbgx/IpX8+8g0WC6Vi3pFlV9cgsJBZVYQ5yBnxNDs30AdIvwJXFOABSckSdeMttDLSivjaZnrgvlqEY2iGgnkg21cr2jNByCG4rEk6SNaltXGrkzbtWdiNzTNsYtYdZduetm58xu7k8hzgJXx+KLrBixx+ZsBrAOV+yWkOH1/V7FfUAMESgkPdBIxIbohq/vGGirjHjqqueYYPGK2W4LBlMSzT7NX/b8315nuz7N/31wIhbZgNmTFo3n9lX75beVCkBbnXLAfHLpGDpm6DRX0H14Fre0gxmWp3jZr7C1FzjybNn8n18ZQitaCLJ+2FcE4SZZXlAz3SzC/on9930puOFTVstJyHPIEl2+sTIps67t/aWvUzmkaBB8yT7CpPvJllPZ4H1XmvbZH89IU/uLf1wTVg/O46ZjxCsvHI4JSoWXUo2YrPVsTfF9q/o4n0Ch9UdcVG3ulIdshylItZw4IZhFXozEBK2BqN5d4vm/3b9KEUeJkpa+VWPjJ2jiTAwVTlHFOK6NxcYKOla30duslGwaD0JgKDo5Y2tjDe5Ct5EIWIL4Haejt+IHvzxvc0Qop3mH3feSegfMfIlIbtxTZD5pxOBXYl1Ntqcpqqk8Xv9qRl+cZ2gGqpfi1rrktu/MmmiHCTBnWU5RH53n2giwGky3lDFrkDYdPxryRek4v4sPtIZGtaN2WX4XmuYYQQJKeTWkacGiUm7Y+qKZuTj2LsHy2MB91eE9PcKZlvUowPilIsjmJBOVf5tjBFNb3j3CdONn4HvnBCsI9EPNbIqUhZQ5VzrT1a9FKUx+TwF2HEyzT5DMclAheAA5wdNCzx4BFd308b2+X8UBhYkUiUw5PMkConaqr/kBfRbLbCfDjZfzNEr7f0OH7EHF75ztxhdkOgb1rHN2vOTTCgDve23Y8L1zjpEl1I2GGAzfiK60aDUALlG+kWCiSD7QfFPMmqHUB1HVJF/G3vIYbUgRV2T7T8J+BK3KZ4j0DNfJNru7W+h261U1CxVYmU8vANRpbCECe1x4NpincIApSBMDT2tQltXmI2eClSXnqGRZCdEAmU3K8czTzKnsgoPbh5YbxLmCPq/NG4mA/JD/w/siQImvHjmxaxibw14Ozc1dYK4vj79i5ByFq6VQrbFTdQTT4l8MOhvcWTRe8iVLtDVtnrDe3rvvLct8QyCjE4h3oBMYwwKzBWJvJWJ7jL+1Z5XrKswG5VfTem9TVHWDJ5pTSVqNn1ittnmXJGE77INoJiElTslBddycyvj6YvEL890y3N8Qbx/HyoO/YrRCBrcBe4V41YI9gMkC7uCcs7NFCLFxhIzCgyyuQS3y9bW+qczAK0RUmodPT+y5pE+/W7SQzT+4kFESRLnVT3bdBUhYn187aw5RT3YY
cawcdadEca/KD/Nle+T1twg1csThV4UQ8uA4xRPZ5f6sYT+4hrNn2GChfIvMuNLoEGZ2920GKYk7UAOrcsKuzZXZUXEjQQlHLolLRT4FbMVsbv5YtsFbJa5u7+XnjiXcvju9r6xaXX8tLF+nNl62+kbgAe0qeDXmJ2OTb4S3YCp9XwtaW4FMZ2LqYQnNd2FWt27eeIHLGRyDaNupnItlJfZx1yZRtvWgFqUqqQD9n9uIf/zYSKZQyQHb+TdNTvwHWU7nVnX0CJHMz9SopwOjP90Notnc40ghwJ0RrT206V2EQQORhzQr9GxbtATtGXls/uTgeb99ZKtWsnd/jYPAeBrrFaUd7EP6MegjZy+oj9Q1BQtcBzOnaAVBEJk8WBUBT1wfXpCkXFx7ZSKoL2AF+Rd4+nYibO4oxBPVt0WFEFIV+AohmTAL9hB2lqmmrWe2UmqG88YlFuj6GCwXv7N8gmtzyCq89iW/P2x+GNXvzDbCBuYN/3zEvFHjJKoV9pyqXVdKAQKFTavTXUIaKfv3R3zFoVI7pyIMBmxpQ5c/J0CnUy7Pw+ENcAS4sX7KyOQeC+imSQokpamTjjmaO72CbJxoaHDb3dRtxD3rcfwNAId+rkNh7RWZfWecX8vQK7tvlx2NIShFrwm8UOVEENSCA7CYZxEIEgwAorm1guUoi1E109LaWrbg6MxnLdLqpBBMYSK8V69+wJTUeR2Bood2T2ak44ctSKhsi1tpPmsUPMYicvwMmGJnV/mgpCxd/3n9sy9mNtRkdTf8L/JiKh1uBPZvX7PwX+AFkb8r4XRqjOevXP0uCMsHS/8NeUTtF/6reOeMzyr9CwHshzP4LTHcnnw1dts4PxAb+3uYdUfDe5/rzkQT+fDyqdC3/LCHAv5F/FsusKsq/v4XA/4b8/Wq0/Fkq/ufNf82Qfj/5KvaTzh589/cJfv+HgCr98x3IqbQyJvkBAJZ/7cDUSNTkX/++2h61W/bnsj8Ly3q1fxfmYXvwzXsT8Nmdo6zWzBqj5P3rMUfjs1auXfv3z8s6D01GD+0wPyv90D+XUXnVtv++9C8QnEYZkSfPetRWRf+sJc+2ZvP/260H/9tNxiDgH3YZgbB/2maQAP55l//9e//3txj4py3N0iKz/n4c5rUciqGPWvY/Vqn/2HTg+fQf18jDMP7d6jpb18uq7vcm0bYO/0iIZ7vmy//PH4L3Zv+G/vtH5vx78z+frr+f0mgp/yex/zz4+7T/PTWelxu2Ocn+m134y8BrNBfZ+t9ch//X1J2zNlqr/R+f478i1e+rn3mOrv90wThU/br8pzvr78J/MA0KoP/ANBAI/S9k/3PH/2CC//lo/9/5Av4n0ftsafU+VTs8rgaQD/MRzekjGP8XJfIRv5hAERT4Z7HMiSRL/v8SSwT7R7F8VN0/iSX5X0gl8n9BKv9LXQ2i/7T91hr16bPjz+qwreP2OgrZPA//hwT4p+38BxH7z0RIszza2ufNqWKO0uq5/n9RpP9b8fsn2vzvjRCJ/Ns/MjuM/jMpYAz996v+D3Xkm6IdhvU/i86zQ6UypNl7xf8A \ No newline at end of file diff --git a/30-reference/configuration/images/cloud-pak-deployer-logging.png b/30-reference/configuration/images/cloud-pak-deployer-logging.png new file mode 100644 index 000000000..d42c0bafa Binary files /dev/null and b/30-reference/configuration/images/cloud-pak-deployer-logging.png differ diff --git a/30-reference/configuration/images/cloud-pak-deployer-monitors.drawio 
b/30-reference/configuration/images/cloud-pak-deployer-monitors.drawio new file mode 100644 index 000000000..4afd4ed0c --- /dev/null +++ b/30-reference/configuration/images/cloud-pak-deployer-monitors.drawio @@ -0,0 +1 @@ +7L3XsqRImi76NH05ZWhxCQQ6CLS8OYYWgQi0ePqNr8zqrqysM9N7T1ef3n1mmVUF4RC488vvF07+BeW6Q5ziT6UNWd7+BYGy4y/o4y8IAsMQcX+AkfPbCEUj3wbKqc6+X/S3Abu+8u+D0PfRtc7y+YcLl2Fol/rz42A69H2eLj+MxdM07D9eVgztj7N+4jL/acBO4/bnUb/Olur7KIFjfzsh5XVZ/To1TNDfznTxr1d/f5S5irNh/80Qyv8F5aZhWL4ddQeXt4B6vxLm2++E/5ezf13ZlPfL3/MDV6G0fhgINzyKtpwkS7qW/0C/3WWL2/X7E39f7HL+SoJpWPssBzeB/oKye1Uvuf2JU3B2v5l+j1VL197f4Pvw50V9X+eWT0t+/Gbo+yLFfOjyZTrvS76fRejvBPsuMvCvBNz/xgCc+D5W/Yb22K8Xxt+ZXv713n8jy33wnTL/G1TCfqKSsSZtnd43M6Z6i5f8Psu1w5rdn0YbL8UwdT8R8n7+5Udqzcs0vHNuaIfpHumH/r6SLeq2/d1Q3NZlf39Nb6rm9zgLqFnfUsp8P9HVWQam+UP2/MjAfwCHUAT9LzkEE+TPHEL/LAaR/2Ax/pExf0FQQYDuv/tMFs/V113+QdKOYj9KOwL9TEuM+gNpR6g/i5jUT8S0ciDYUrz8BXCaaIEYJ/dBCQ70T97bVV0sv565J/3ryd/+YPrb4LeRrN5+PzR/4v7XMW7ol7jub4H/rVL9dY7fXvqb4R9u+g/XwDYvlv/v9Y8k/kuZuaXmn6h/CP6TzPzVHMbv/0QX4f8DXSRSKk+K3/HpHs/inCrSPzKX/wiaEz/SHMOIn/UU+yM9/bNoTvwdRu/GHB9wWHdfOOevcvuMk7w1hrle6gFQKhmWZejuC1pwgo3Td/nFo99Qt/j6+wPZXwbAsHj+fMNfRX0AzrJfUzK/jkK/jtzH1bIA9MaAx0eE9dMOcfbLXr/rLs/q+JdhKu9h8P0Dvt/H6dB1Qz/fR0u1drdtEVDwHzj1V+PzH8+hHJzzk/8yb+D3t0+CPsd//PH5Xz59+c8z3+gfSAWK/AKh/33BkLURVf8f/ozbMf6P11U8gof2H/A/Ty6yvIjX9o8s4v+2VGTxEt9C8e0rIgAOIVztsbq1Q6pYDsz997LdinfL+4ibwfeRY8KvTyZWWjCKvlve9KywypHQBmQkdNaSnItVBl2wQp4vBp0gCKxYrL8g7DqOi7mWqlz7bws7nLKjx4PRsJVl5eY+Dx2CdNa3nrP6NOJOTxy3axQuKjHegsAMAtfTsVwjKwHj75SSru1FNRzMUJlsNDpTVjyvlJPtKfy430GBcINWuCLmFS781Do5WX+AW38+yIrBdr0IQ/WQukl9iqJi2YckaloeFR/z/qWaf3Bp8uLmfiQWqVtNiXNSmQfvEQaeonrj7NmRWi4ZQZIrEybyRm5vymQftZ7ScpiIC94Pxe0XgYN0yCIY6WAcLPv0psn1piaybSqoKE9Ms4MQ0merT7BZc4w4L+CJuxTxLzGdWj5G0fgSceU9mEO9NbfVZ3OpxUP3bTvPJ1cLw0IvkF5BcWPGzyNU0eUqROwSKlGNCH9sb18gYNih9K/+fnjBJS4/dr6tayHVj6veBwFRnL7D5Y+9q8enG+Je9CoXjpMnQR5hv0YI1CYMyxS3F1hA1ketJPtOKDMruZJTja+xFCUPOXyw9ct6nz1EI4lirvhZhSrrXQGeT2P9HtuyVVWNOS23wmCavPWKTT6eXUiAWTzkWtO62CPtD7dCs
bNbUwWsvgo9HpHoAvNSRQBVlcq66iJ9MoFVtWh71bqGuSSc6uQWJXJd15pQ7vo2qQQ5QWsLxY84afmhlbRC8YB+B1dHKBjeRImWSMeeL96wJQi8UChRBMQpts4Lrbynqt/+XDia+Tgv3eQ66OyF8CNJB5RRKJLTQaNHDtM8UvFeGnPa15RbxMsadKnC8qbEdrPsVdxhL4nlJQaiNYf8RvZthYBs+uFTey0Gsd0cYmG/gT6Z+a7ewnPFuhwP1CieYhU2BgC8dLezWe7dAg76scZcN7Ujd4Rdv949+5zgSSP8FQ+eSvooKWCmS+QYOdiqW5h64e48hyHWVJ3xoS6JIQy043pqvPEjK94UYc88EK0rXJ+JgKeRYdTyZw5G30YPXMAlyetC/77KnLP49EdoPypW4PMNMGm9pN57Y6q6usiG3vIuIDdEZIvLIIP7UwjOAhM8TY2GEUzEY19WhbFdT7dUnAtlGZjHf1oUi0K/kPCfg9L+0DEgP4O0eFqmOgMW998QoiE/QzQY/pMg2h8S/Ofsyr8VwTHyX43gPydq/q0IjlP/agT/g7jv34ngBP2vRnDiJ4JrQ18vNzX+Dcj9kwWHfiY3if8Tqf1ziP1vRG0Mg3/B/7Xo/XPq8d+I3jhM/KvRm/6J3j8ROu8zBhTSADHaeJ7r9Efa/pjd/CE9fpNqOoPvZ76+hODLTYTvXx/Hb08+zr/8FynSJZ7KfPmvrWOe/VDV+5kjvy0l/YH1/nVsytt4qbcfa4F/xITvMxhD3S+/iQGQH2MADCZ+vMU8rFOaf//V31j5041g7Pc3on680TfC/HSjL5n462P/NzJK0E9y8pusP/RdR2uQsvmHqmlC4RgO/aymBZXm6Z/mhMjfpfSoP8iuY3+QXf/TtBT+OaNnTODeVb7OQNz77PNX1v+7UZ/4mfrwH2VU/zzq/xw2/zlG8hcYo35rKOFf4P/CUH59M/Kpvp8UMOC/aT1/VfN/EfOJQeQfe8v/XfOJ4T/eCMV/d6M/23z+HV0W/1dI0L+KYFDoP0gwKOQXGFSKvv/BP4oJhPxzxeTn7MX/iMl/Q0xw/B8kJjj+LyUmP+dc9GTOpy0GRbv5vwcCfnLpP4jQb4HA30p/5RRn9X3971om/hFggMZ+FzGh+B/AARL65Z8ZNME/R6l/jp7+s6Im+F9Lb38fNiHk7zj5d4dN+O/CJuSfGzYhP4dNPwnKvNddG38pzHwvZ/neAvuTVPyzVe/35as/6Gug/rAJ8x/RlvbH1Pz/Q1+Dvd//4+dvfQ0sXoQ4fB+UDP/V14AFK5KZgZy+8FiUcPW9TzINFUon1/vmcLBGhzsEpa5Tvz7Ii/Ns4WBhpuV7TXI590z33BQPSwogb3Sdkd0bOrzeVVwLeNhEnXxaa529tYXitXGTx3oveark+Du6vyBY8vGe2UE3DniMUjUfGAMabWSe4WSWURiLV6SEc9h7vRZTcmzMgQOZHThePYaA8R6myfCyUMv3MFP6pWw+uTq1XI01ebMUmYp938+syhWjierRursALpRVQbyHebm+P8xJbAvWBXOYpR/edDE5JmYqbo6FXTAhnmE5NqzvYUYW7om5feh578G49+/Ze8IIkLiWzQdbz5H3NTHDcZV1k97i5Oq+n3p8gt1jTAYstZPviZlSvH85iVXKui/2a2K5vIfvJ2QqZlaVQzDBUm+ecbwIfjjUPLOHk+g5YOL7WQf3Zidv1nJ4r2+OPUow7+F7qVH9PzT6Hxr9SiMvmTXQEdKCXgY3HPMHX/nVCTeP/luvHPQ097wCPTzYW+erUKJQg4Jwb72HSDabQh6eAY21oB5dFtladHreZoIIo5KoWKvCHCpAhY/eG9hbaQhLcpiyHwK+/4BGpGwueLL5nHORzNDAlI0/9SMaTVM29e5auNOwdEIUQvJQak7MjW074/6iFRwX5qXG5phi+Zx/gq74N8PqS7r2yAmB7p4t64jeYheufrqILErds+zHYEE3kkxYaWnMB6CMl
GByWHrpNN24nd37TIH4BPgd4fYAbtVdOEKEZv4skOVJ5NZ9zWOTAJ9w+oEzhET0gxPW6oRfD5SlDY4EDSAR6WbvV4hD1i0ZuEmDfpCyB70vNpSvNtRV/iYMpMVfkvlhUXScGOrZPnw04bCoXDL48VhPB3SLdPfdepoxtnfmOsa1Ez0OGsxAD8xyLXMnrFurL7mgByPzxHa9rG1brtnw7dTj9Gm3pA9WnhZ9+nJHvF+eDmilIRG7XHlAshHpwHqlilKHFd3obTbI7B4QRJK/1/4Q8EZ45KA1iVozvcqL0bK1hlJT+ptsiNVnI6mPirb0SGqtLV3bmqoKNulQLb7ig1KAZ2btZDLvDw62r83qPuRMT5HM8/cNcpb2+JsOWzLvBw5XaGsOuzM8Tnx5fmsVgqUmZEUlhJyblnCzHEXlIosXInD4YDzJIFBKllvLsRKEHCVceIEntOIxYipfNVSdQGCE66vxHn5J9v3/YT0SZ3pNiPQxDbmSstfLYfmMhacSuylXWmg5bCPrP+ApLAcsqUG/0nNFUQVVcXQ54qgqlm4itKfeQ2Wxh5oER6rKv4BUJMWDEx4htHoYvGZqVhxv0EK6FcGIIyuvPxFBg0/QUPepkHVEUsXxlVtbvY8OSVnu1XiAQYF4HLO7KNIhtObL2wmskVfMrpNNbD1wu1rzsBgxl4IROdhV+06pXPOwtHryiKgDels50T0JWG3bZcB3puLbVKTTcYlzwj+5e3FiHUPdaxWjrFKZ8Bk4T8LzF1ecP7LUlksQKZV8xkkAJWdZNzFMhsEnxCPFB7ox8yMdrJ1jrgHCBE/6U2aftoysdXaBWCCXhlePpdh6pD4gJ2KX12AakzffMuXtVdJ9/PheH7vJjdIM0CnEhS2+B7fv6HXZg3Ij3h/Lm/WdsqRCO92K5A6DhxZX3T+jFGDP9fGG2OK+xYt2ezUdPR4ju4jajBwJ+u1jumn8weRAQfPnowptfw5EYrdZxGVd/twlXhk+2JZNEViFDLnP95d2fr6JHPaZIXxJlHwx2UH1oozozQ/pTW1FGfQN2oX4odC0poegXXMjgTKCp47ilyvtt4V9EINekxdOB7pSbkcJllkvqfuCiUhj40KbmhzVm3XMZK+231ZzixiLOccHUyL4RbkuTQOvAHrL9jhPjI8VxDgvkTARbk59ZlC+v17ubuUTmvgVy5IDyamZbzdr5ukNZmOQ2bhmHWjb/Zs4oZKhI93YTZmyCySgCGLfXYNGklZVxV2eTEOXEpnWnGN+aV2MvhRH31DIaeDkwhVlfpYb5GwTYY9MyKMS6EPcEtnF7s9QD3sPmCTkQEjdDdDFhxpMvYkTfHZ/fN9qG1Jv5UCkMZaCUIpcnXGevNsER2jFNZtoDxInkYXKE5e2dMFz1unWEeK2aJNAbsmHwgwXUIIbejnzq7p97AWWlWyQs8jAoCMO9z0g39MV8pl42k5sZsiACxZlQ856MM3Z8zJo8ptupcT3uAr5EXNzt4bdF2J3r6qFaG7JezcYIdSf370t716hoMdnHMkhjjReOjza0017qluOKVD6AhQ0OBqKSsenvb6j2v5Aoqn1ulXdVKnHhMkYCVEa3Vc0SsL8gMeneuNSe4ps3lbWmaxgBkmbz1OHmQJpc4EqbG2SB8uR+tWD3aXZsGw4D0l+QQFDCeeF2iM/6EtgiV7zOXRox8fnezcsCOsIiwkjF/LMxqC10p6sHHvpyVCrLS1fkM3HsQcWTDA6DnO889KqsqeOg6sny0GbizVlb9hSfrMGLBwt9wyxg7aXnHqyrnw9m9GMMjdo8W7bilkLzC8NYeUYNpk9HtgrTSc5AZ2RXW55drFoPOR5BRPfth7N59YQta0cRtDlCsFQRg3amQJKKJFk0kzSQ348vd9edgvJieTZgCCRKkaKgvYEWTyM4Lrdz5d9fK+13O4jg0urVvZBrNuBVW0s1/TjpVsjrpYZfB5m0EAAsRGocn71XsqF5Bh0ZgQfDvQ9B8Z+VKfj3
xaVFNQPy8KgH3h0G2RMNGAm13t5e/A04OfEvxXQ/Nrhpf4YJDmzL4c/CPHWmpLykMQRtyV+7vvL5GV4MeXPSZX4ba+dCF+CrqiMxewoVQg0G0mNCc/a8sGm7+6K2r5ylnneiHGKej3djoSIb3AVo5e+p3BZWty889hrfd0wYvY/G1/WmkIAHVgeky+bb6iJedMPHyy37iqk5ovQwbzz3rvFGE2CPHA3cZELGCShJ66ISXmVZFCTwAvYeXdKaa/QjUuzeDeDtgg4cFpeTQJOOxHnH9jR6pydrnPXxM9FWywXTbKMHEuEEWipGQ6tCllumz8bQ5miUgkOl4rEOy4XV456tx2cpX6quggp5+nZR8pa8f7BD+9xz0PIiWlHYmeREG8ploeaEyv3OJYl3wyvEE1CHlNFWbk785Y3ZU7Py3wvpNTJGL9Uu4qffuPeN2KC7HULOuLZT1m+3VZ1beHQRATyQZjWe+zmJBg9A5Oaoa8i/YkLug8ZuVFHMWTbSq9Sw0wYbm3a96ONNAg/TeubZKdVmYk+I5BclWT+Lt6rYkMFXp9OAbRo8LSHJVY8SYvKKOAvAbGY9r5eonmB2GtloKaY14mRJZ/mLB64LLpDdj11mpkZhRgrn4MLdQsIgH0c+RTZe6WhmEYeg97srB9ZWfkplL4RQtYLK/IhD5PbEZJQhzK+yWXDL/c1/kkn8csYuO79OjppoV1ebtucwkOfFM4diDpfDaBhXFOy0C4F/zm+2dteftZwQ/MWyAc7FFMcKbed8aCCR8WMNe6rArxugCJQnxufKhlSeNi5VBpGbCPjc0LGvMjSe2559gp6ucK7WmADPzzCwaR1mJ/xvm2cEOyzYeXwtqDv1he4Xv5w8dki0RsRQ6U9qG13WW2+sOezLmyL1zbdGT/LJ1In5nO8Ytnls7xAqZR1/LY06GMqd+OgIr3wtGdvwGld1iy3lM7VTeIgexADUCbLz8jtLyJcUlx5zG0BWB3ZZi0qTLdzeuND4xIDwLs2CloIWKRvjq2mb2TlrgiJ3+5caKDFQwt19o6BHw8MVUD+CD8vntiOBS4yyEdzC455gouh8FnegpKpl8HBztQ85dtHsfYCtgpZgknuKaBByWIeM7MH06uhnNycXZDdKqyP+AHxE9F1z6D34wQZ0P5iWmbWjigPehvHYdCTw3auw9EF+06pZrQsAImFLbBL/jzGCDkKCp26APeyTCu6E58i74qK2xUFVyNbIeXJ11axxO6NvXujYckfdLCHSYTtxw69o4785rnhA4nR51hvD4On0nqhnDewGFPB021kvDpg3dfg2qlgpzkZ9x1/V6XUNlxz4sO9hYs6PdBp0aMmGAkHpk3OEAankDBPUiGXgIulNm8lLRcKSvcOpdv4OkYZdnQQFu2eN4e3hzwS6P3QHFkol5TProVS9uBgYZ9+vEcXLocm2MLmtu6AJp/e4u6gt/IZvRHUspAYbXX4jTSZ/lXut2hQeM0gAJ3c4S8HH4/t5TBLAi2VfD0ekc3iioskonPfwYavTwLTlX0M4XD0muudD6E24rWJHrM+2mAnCpvLMPU0spIeIDd/2jaVs4SddMYp7be/zchztTL99dGj4j2GL3hz8dfYpbycWLqykqY8BtftjQQFxt2h7DGQCZCNSxRouidYnU5QSQr79anPsaze1r/deiGGcv62VIRIaJMJcy9jT9i4OvA89PLk+rJiODgnQwcrDUrJpkIejB0qMKncDiKqJcNZss7Mm5WvGQ5RT1D7fqu7OiEvyLut9MgY5pPjsCULyDWQhZsWY1IkabeYNqSfgh+xszBV7ORaE5yDjQzQ0+zxM+I0yR46u8uOoltlnVKRwsJHt+SUMJhEDOylKd5bb3n68dDQj+8rvgbMUb8ldK9Nn4uvr7I9gP1zYOOAuoT3yd49uDnbphbmxqDF5JNyBW55GadSrRG49i3MPMkL6Cw2EYdi9Y0I6i11RYKniLZiRXg935bGRnNNxmF+ryVpWvwpW
mfzFGn22PhrevcGxnfzVInw6Ox3xLVSri8/nf61k0QVGeuWGzGMtEzDxkNaNOq1ty0bjwKsv6Fq757uKlQLBz0z0VQjgmO62gX7OZ70Q74hxwNbFxe5ic9uJ4gq/ByFbLlMsqILNy4zpqTcj3ORQ4er4BjvZaFZqHfevhz5CTYi6Qrx6F8X/HTFdey61x0hdJRQp57KP9yiFBqPfeU2fLJ3qCbyjdW+aiGWnzbAViTS8AIkJqq49fMXJccWqicQ7M6+0W4J2VgkgiMuCVW9Il9Td4NVsiP50xWTmbll5GBIWPWG8ywszpyX17QZWFfitVhVx/t2ItVteMh5SRexkjhJSgpPj0XEx9FuuBGGeKTVbW3clQ9l/02S7BrAlGGlw9d2mGUCm8AKE5a7Gt1OHCIEnEoVWAqezyt0caSrzlaviYddbT3D8DEW1vtMxTyABLaWodBSxE59BIoKHPS2BqbQDJ/DTjQ6csUdWbWngkcykzM4FPhJoXWSPHowfGqLgyiwIih5iiATDzaOlW/1rdyS31kLCBuY6Vzi647vqcsyj8yZFHrLErzn20UnHOClgT98WQWiJ+6zct4NowlndV/Gpj1dQVYN7Iw32ChGC7yTcHu/IOXnoYzm5wyph48RjgkCJTkPjA/YdSZo2fhE1gDvnqFSXvExUVj4skTFeRtNnngG/lZq4qlWqBxLvUdONAMHuFb1kAsNqCWBuF5wb54kH99KbZLb2wUZxt6/XfBkv2/0e3Y2k68KPG/d+FpANmCD5oUP3DNKn2JzR0bV2HOPK6VEqEr8cZclLwXWXI7ZisBWHY0iSIkMrHjGVll7Hnksz+Pk4CA08kTZKNlbbxCa0p4tIIhceSKOW+0Q3x7ZPsj3eaPrAyxTMP33pwYWO6g3ZiOrNTsDtxosMhPxUc5bljeA7BlP8kzWtoSfAft8RVwdpa5qi5hF5RBpLxHsLLKncy/mjhxFesQ4itDzple29Vw0efaaXUEO7aQrtZEU2n7Ympu9SGKpkAeuNHuD+5lzIK/YYV8x79pBrfYPhQ2Xir86e/MU98b5rPL46G7+uWzZtEarVN+DH91+e+AKjxjfHlVF3tbjRGeganNhW9AhZX/bCQyOS5nlldpmGGuyBxaTTIsZHoAPV+erWgdyrap5e3Y15OtXxt7RTyzvj/+D3xaJDdKPweJQ67ttsGxxON0ljJvSU0COC52PN+YX73hEWJUmusO8dC2ItmZig6xNoujbLcqiFLyqgMVqvRTsy5uO1/1tzJWLkjpElGU4p0mD1bdgyeepf7F0OA3vF7nFL0WT7kspzf4sZJATNJJkpDCfj/K9M/xtODJN9Ppy9KDkxt2d/t75xKH9Yqogr8hKSoutnPF4iRUZ+dk+RG+6YeHo3+FL2HfIQjZoVuQtfKwPH9KsarJtECe2PhGddfXY8zsk15+zV1nsCzpGIXXQrScprk23Ln1wGoUBrU7lHc4HId47o+tBCNQ3dYePNL3F2OiBZPbsevz8jrnCxzFJVVeNpXgAtCxQVppxvzTex/Et8cpma7CeVIE4IVd23nMyUIOAslD4gOznTDA2da1vUeoVq2g3usWW5aCEp7g/y4KOVjrE7dRZWql9cS/Ryr4S7CALKROsmXOwFRvrZDxqn3ldAcjzFh0svbKJg19ctlP4wwYB6RboWX374xCnpAv2YUhtm+j1kHB3xEHw6otzvIZ9ex9m+eT5SXlmTMnUXGU3Jh8wo/9UbRrN1zPZ1qKu/Pwh0RTr2ZLFyUpplj4Xu9eaXlfpTj5Fq/hC+Wk4ukn9fGHBLnY+Mt3O9tYw0qAoTFAPunVinF1ZZtbzgALix28FNaX7982dwCFNJIrkH6ECKbKOrpAX70XC4unGHcJKtTqul4j6GgYl4hVr8GcLssWjQ9edRGUunijaU/TSvSz6tW2mrDydT2R3xAhPJOkAlwJe68FKiP5x4+TAIH8vWkBBuMo4XJNB6P/Slu1Jg+3HAj6Tj55dqSCdg
AgEEw2/aieWAmw/N1sBeQENbVp47jA5Lo53b/pz4A8n0cLPRZbQZHIEuoVY5Enl3a6JlzGz0Cm4pnyj77DojzB6a50wdwGNt3iJWWGvM+UbzQVEEnHiZIKRo9f5YE/DU2+L1EHFRbRB9IJA/fnmsKIFC41lMxAQ2AjPj8K8ATLP+xNdKLTeD207nurkt5+9T4XahF+G5An2QKdXxYla2nEvKGnyCcSwPh435Dni78/H8Sz69KJkPi4F6fg+T9FWOfERyPsBpi5UVS2AgeayAkGH9w2nsoQHtO1lpkeQmvsWFKOCrjRyW+IWxW5bndv8zqd45xAddojrpUisUbrOcpmxRn9Sxn1on/lWkyCLnl8vTACx1tE91+NQ+x6PfO+Ne4P8TVyAxKygR13IgNdRetvYMmDoOmBuCI8DYMpJfDEWlH3c7fH5hOF+7EaDy14FCQMtN+4giCGuZVE/tdp2+tY6XlZpZEXR2k0PgD9pCtQr4tKJPu6yVrctwZRWO/1UUC8KAAEqcgzKDK4L+Ojt4zZP7gwgmM3xiNXSSMc/zKK9oEu0b+hGzTwAuqsnb2LjxA85whYuINPUBnvN/SdafBVXlqTAYD3w2bBv9tSeHqdT2zmSxlIj9waAI8ZLr1+7Hn/UIztzJ02X/XGGETygdZp9rDT2AMREnQfsFsRnf7bj5JxzMm3Vc7ytNN3nC7l2nfRU6bkSAm/0XX872WNp2uTNH7rEf1VzpEZT2qWgU4FFps5Mp0zHqSY+AwfTAbbH5Ob2sck8WhhMbeJ5kS1Irg+niMawaFgJ3toXi6q6NhwlqYlkWEGZmZkTs1JDDhdYuaH0qMIRu/W+M2iD6FJuxfUGEXJ6JJxjkb0vSSkkzjJOPOD8MGG5OSCMd6DOeF84j+XGohk/7ZjQrLHHNcCTqNInEIvkdrnzEj434sPKGxYpq63bbXhYkjFmDzmzSM94u3jgqXR3BDxhGyyRxCMEDccpFx8MWTbPQtaxge2ZcS40ET6yoa1pNmT3LHfYUHzEoX9vrQJ1UtF3SdhbMo175COUHZBCeYvQqebFidZbps8wD4Ns1Ryg86FbU5mdsieSE9zL4pBUKdkRhBNhW/p6AKln51RpXGSqztert+8AG6joaFXI6NTbckrEbC2Fwm6YeRiw05T9M1fcj15pXP9qm9mlEIkAIkDFKiZwD9JKldqlsvPNyW8YeT/zEHVX8kKpLokesHlTdUn6874rHsO6pWORhD9uszNibxjYPjQc/dc7dmwlv6Pf1432l3AQczsgPCwTxfMNihl58alWUA0JPrkVkHz9XsT4jNXpCTcF7Xne62Y+1r6GA+PSoD0zYHUSXX8tdSd/AB7en3bVHaPb2oaNnXM3nMcrAiFwFlyfGUe4txpSldStUmwXoO768faDbT1J98ErMPoSA3m3zufcLD8megMquYcgQcvGX+XKofBuN21Si69M0KfWLanP72ek1E13uizj6f7ChopuJYHWjPoajzyhQdUiLuLyKuVyNPBWyDR8PfCQ30e9EqLW4/0PF2bT294b/TKduJUskMLh6xhL3xjlEd2juaNep8GRUcKgC0oHvbL2QnX4InjhZBb7J0ARW9o1+rCJH2CaiPpF0vmWuZ2MHC9EJPanusHlzVm0hgpvUkUu8hvWrWNTxD6QH1xvBSNGYTCXQYwYq1AfxLoEnpl+bpCVoBxMAxFKJRBCA9v5bjUXA2xjYQqo8iE5zKAZnztQSptW7oRWsyLnLRat/1K7g9JMeyH1UuyTk+WjxHqicHU8auvBQ6LkPYRdsl39JOI3WP1Mne+NQEnN3Fc/Haz1HY5vgjgb6zSDokSfQMg3A0RWV5dMDy3I+GxELld3zOapbXQFmFuYAeiKZQtxOq29BbnB3r7BftAkT/dGVn1FkOs8eYx3+0Q8shaFJeev2u8yPfaHQUwjjO2aOmbcVvoT8c5cxEo8Vd1qFtpxt3FN3Q/osYa3/oHbOIg9FBrSnu2NMANf3cyo0
KSFf8JLb+pAnd6STIOynOU3AVOHPXy9m2Px3L3JYvU5w4QR6qLbnwd2SrQ/gEttIpGyQ0azTwVrajv6TfZak+SxxL0P2+pHnPi1O0PBeCPeHa2g0NP7QJZToYeHjeJuVs7h6bjRXqd5KPH8sqbhoaKQjj+zSBA+PhepoLQ4pFeY2SccZrVmo1HB6gOPG+URSErmUPEyvrFXB2e8p3vvbptE2Lqg7CWsA6DQY/g4BDE2Yb8/vTagz0UYi1Z6Hq/iOU+KO3tITy7aZ5mDOOgaFuoJZqaOheaWEPEL5E0g47SzXorq562E4zarZp83kaP2avVifVmya4zY08PNX/PmD186FaBVJF5QHkhACt0k1geYP7wPAccPEO5+/GdyIwX+kVnInnBxeV7iXKkRvsvRM9ZE27Groibc5cYzaJI4XMx/Qi4qPrq5jUreH02Htus7iZoXB4XcIVxoGQaNZKd2CDVNbCL7R7S8jxg5dWiwrD2rTZe3ux8dXwVaF5WcogkNVXOu26TfQRC1KbyobYL+Hj7ZG7cbd+Zjpb91eZg/fQwHdWG/n0i5+rR2TrEFEaMWMklpPl/zGcgwlMgq8ba76jCr53ZEA4T5Y39BRTn6udkmzhws3EHKrzt4TtxP6Oivj935eThLlXkfwTeai8MM1Iwna2AReXyptZEs5S63/bB9QOlXaPDwsQeKtKRQM4n8xOIgXUptDXR6J3KbsA59A32fVZL0H+lUNt7RMxCvYhTfnV2lfIqlVqo3PI2XQUmSiARP9oqml++Jq3W0CE8pra3N+Hm1psmPbE0m5DjOhT6HCRwq3lfXChq9zafkc58HLh2J0xNWFNtHvjyzEEeasPWewa3yq616uoMjkT+rUWhcRvXaRk3FRStKhtnG4wAlVS9RFBl/4jcg4x+fV/82Up3zIhKLK5lZ6sxADExdIGTPjMigzj6vJOMQTZC3nhqSLF0xytrGG3BEEE/TNrsX2jzZw1OWRqt9dYSq/oMiHqCJXYlescZGngm0LJTOmTwdfDKibKKjO7Y2sZEix/ipSzF4cQ0r3mAVe137NI5vWFUNPL7uiCeWEX8U+nzSP88zV3w1k/uBddeD9f3YMdLxGD8gwjwC1r1j3GBLS8tb9XVZJ2I4fTjOXg4wjw/RRxyieD4dpvbJpdAiSR+fFQcx3IxIULr52kDHdNA/6RYZubjbCVxiJwB7+zwKO7/FiU94Jc+YeviwqbDGkyG4842J/RIYUJG70vBVquwOlIUWiEiCr/dIUSZRr/HQTWqTdlH1iDcduAs9MPPVDm+fk9bveBnOjHSL2/OiC4boWtpu4XQHja9JD0k3lZBFsx8yKc0G5MGlX8+oPPPvTCpaob3mK3ScuLGuIMkqMsHpryA2O6Cgy2KxHfdbxzzFjYzUJqhHuqEZgb9IXYeoO/T9vOzqEfhdS7hNjL3CPnpbXoWl9xQqUojtZhgvDC5QRzmXbp/Ucucmq01Xa08rNl5mDJk4kLhZ4KXjovlBOY7mW+YFzHv+Gkr/NT4EdjYnvydM2m21lxI8S++Z7exA2YTcWfJMpgKZgyTusHHxh+SGCdo9KoKejQpY+8SE7KnYMXGe0hjIEgIgDCimo0fEBQ+etgUhAIUkaBh3hPuCzOY40XV0nsoL9VCKDei0DzdFgtt9UmaDMCi+oEBRoaiY1hNz1KLl5eGfXEHINUMvzoLKwGr66QqiEjcBSngBwqZ95NCh3wtuoXOxmoq2q02GFWcPftlbfMW4Gk2ftCEwaO28kBDafDO+0fm+dRh2qpgMi09xbPYwR/WwYQ2XdyM6bAT4VPtOrThS4a6X9EBT0PvmuquYe/ayZjMuhzfkHmXaqG+yO4Ewi/ZT8YoHk9P6k9pffh6ly8dg+wV2rxFv5fq9U3ZQyz7V3pYdf4Qg/5lEc95QnfTeaT8pqOTsj2GuKG8tJSJ9E3XSv7BW6nC3mBp2n67gNKWIM4MwTiTpEr+1m63OBFLAI0mAF0bx2pmGgq+0M
+paPPbIkgwltDFSUn7skChJ5z1/HUiLb77fb8DaekMsiQnofrYetmuxngQ5Isb1fY4nZJxepbgRS0zXcCtEgAwPCVmj+XmjR0LyH5Xydp96J+ymX6KR/Hy+xywK1KYeR8a39DmRrB2E7aO/SR9lxZBaW6+LRS4EIOTPiraCJ4Sv8pHDyJE9u9cLpg7+k6GR1+pb/qqH+BXy8AXEKCzdyUE17ZxDBsnewExhrfE61k6yJ1JpMtNaQOYf5zUURPJC6YGwnTF2oPvO83ImrvWQWPUr6cNlWbd7JIRh/hvhQH9fBHDZHCP08+nOkFLn5zXjBgT8TjauunRQS9Q8uDfq60UBe14d3ED1MD0CfbQupGi0VJNPTBpQIgAY93m80a6bEiWZb5N8VslFaOtRQzjcwnqcCWMSIZbZf5ihF+so2O/QACK9me5aMe3vuDhetR4wFoYeFm59uJZWHXRyYHvgZ2SToyJvElbJKBR2cXXCYZJtFxdN6JyXJyQYJX2VDT8E+XzjsS3yhlwRvSHhR8Ap8MJHShmbx8UUuGG+KFN+EgKtGzo1CSHltlNm8Iu9FrepV2t4BbUYFyDUroiUmtGtuic93xO8wuaM8aLWtFFRq+kNG1XoRGNvQScTICvIMHIE9Uqkfnp94c/bJ524g8vC8gAJiIPsbhF67BaQvlpmQBev8JxZHnQBg6oCJ8sMw/HWg5Flr4mXUmVq0Cr81MyvZvcXyJW/VIIb7Hu4YuaDYeSSZyDQdwglpVoNMgO6hnlGKxnmaYGuYSvXRpEZODCfWZ7gh7y5c5wQIlPI3ZEQw/DcxTP3VObD5O+J7bUc+XtY/u3EtzW99WQA3fX3xCzzAEuFQNcvHzRq9W2puyuDiWWL15jyXuqIfE3MsL9dqhBC06BWX4/624mFxlqZUSxLMDFjMmCpC+isZgXsHL4t9abR439o9D80+k9oBFpf6eNjH2gb8mXw9aY9VrFcnJ/eSlmWYMcO+sdv2/vPd1P93XuYft1Qe/64pem3W5j+YAfTn7aBCf5P3iYE/fC6ZCjJq3irwYnfvzt7nb9eswHlx5JPfQwm7X7z+o2vfx3h5z2g/5ovvv5HsfnHLaIIhP6CoT/vEiX+YI/on/eqxb/jpRH/129We8rgnbv095fwBiiX07/drNbrZ5bhexahAWg+YDM8WpN8RVFvyuXu3O8flGEyyxALkjrvkIdGpuTfslKZzsmsvNCOt6EpYZapFTu8PhU7aK+pDAKNFZ4feTkjdZ4XdV0A7o/yiAT19VyPumBDk2m1s9H1KJA2LOYg2a4PDObX8hS/LpBTTWUKw48LZzCk5SNQGEmzD1VvVNRNHUIgF5lpRQK6tLLToa6t5xWpRvGO7xaxMcI7EIuoEm4HUBuB6Yez5eJFYuK8dMrc5yRVcIgogyWBAkSEqgSbvj6k0oqgJpagPF8dZk+BuEmHz/IFogtHaqexcowaRBmhkfVZBYa3t7iSASppz9XRjZdEOrhUPK+4KjdaeTFvF9lMEQ3rSaz1It1FImEcXWeE3EzfSMehCXkkvfzi9Acv+wS3IQXpqLT1FKoA1Eg6giLbOYA3VPDc9NrV0Rk0NO54EgNAPECgQhks8smXoEwEu9prRmMnhJZKU7s7utIODbQpIloZIlbyKq7ZfZnGJ7lDirNgEL9Qy/RWoQMEUwDsXil24r3WvoYXypdcDvNvCW1vlDuB178KmNDNy1VVTOjxmLqbreXLJdQ99rnLGcVYolC+4HgXGtlit6p0atZZPrpEy3LyBDlfifTit/oW3DuEbBxQyMOCqtUNQ8syrczeOWApHZkROR/VkOoaz6jTFsQJZtweW9hapCH1KzmKMNuck0gdIwE4PT6nJA/Q6euVwys5Ky5ej4s7Xs04vzLizXOC2TWM7Ao3Wm7jaiZ1xwpnH6LR6JqlZ0uywx6c/lP48BY8PylyhpgCa/k8W+LSCwkq9V5awwZCDzlhEgmr+UDfSe5eTu6LqW2Js9lY4sDMm
v/q9hxXBLgjm6tfXGoWw8wnQ8oC7DScBbRPrmXff14Pd6j2jgo+fnXy44KuF3WW1iQZQm6nSGFfWZ9Dc/cqXnu6BnP3jNwPvsPcqOvFIHsVQVScl3Xvh3cpQJbnnd6YbVdASsQkMnrgnyg9bj46zg0C4v1vnYR3ZJvOaeGhxiUF2a0WrynPa+th7pqHta+ocQZescs7dNLl+EYXrs368nJNFZsP7GOwrax1Ol5wu1RhSmmy1bkJBeX2MQ83liU88H1dRwdPOQi6rvMsr8e+D/aUuiYKspzywZQiXpz40LE1j1hAqALJCaeiYtOohBjbVNiS30vO+XSMIJmxvDP1nh7WCgzDwk4Tejw60PUpeBC1iOGrKKDHyDG8YCmCYp3qR8eXlHmKDUgrXciskbGdYZ+3mG3cLJ1pOWDd0U1xOvvNabzveK+vU5owLz1Ph8jQRVN4fzCdPuKSGnmkbgX52WHAkwlu9Crq4LTphdhApB2CTbeiKyEtLF0HNr3tr6L6K2KGBMnGkLXBm6GXGNghLfBgVPIypzhOPVoaeQ9dVmDH0LRNvxNmGnO1R50STK+/0HWA4UZVPaeaJq1smzJnzStsVULn6nzxZ+G6eopwQPFhqq0DlA1Amkd4yFtA1M8hUJhLImSsAGX7tGQDeOUehuA1U4vYeNZqes969idg1VrmJdAhERwnsa4YBPa7bdgoSUsv73zF8Yqr2Kckcl7C+MNijyuBoe0xCLpF13vEsr1datNMHrZ5PEeC025Gx6uhfW0OrKqUooEBum1fPT6RSS2bFO4eTynCJt4fCjPOAzpHabJxoNFMishMmVHqL3XaGQSddFDH2rb5W15DJ0SfBh0n8WFP5uccB1k2+ULiQLIJOlpBFShGHj4WY/HTa0O9rUnSURRtHrbVxyQPZ96Ir6WeZOUcjv3QhOdqlALo8ZbwC32L1nGlZRCXkmnqrVFGpCSwfWnIFUK8DhvomKxAds9VxFcTtOIlxUla3yr3lbKsm6lVB9/JcDLueYQMyUMAL+X2zMnFbFU8X98u/drq9GiMQ9SstpBpDcOeMDpAkBy2kp+ce4fkILtxWTnIgclBSk8WtZfsm7CG1LRYXc+yO8wvSzZ5S2onKJI2Nqqzb8Cd4S/vW8kLm4s0AyOPgyeqcv/EbMR+4lxPBUQtj97fT8FOq692n1PUDiYNZNdcK1CCZN93tDK3kxVXge1en2e3b4aBfZ4gkVF1vdHxn9tCkkzMGAlr4Dsbvd/HqL3c0tXMJ0W8MpQiaPAUR4yigYsSTJVNHXsM8pMLHgxmioH1KD1f8D6zuqn6xQistKbe2xkCUJAQ5RN0O7AXLA9NoIsuz0agYb8pGR1dIpugsC/OEZMbACDxlfIKLSzMQH3FI5teFxkBSRrtYUYpWEjxKAwA2wVK2JKpzg4lHhYpqh35cMrJ+1ymXV1jOnc2E3SvlZvYEUpsY14NViY/nll0Qi/5cqbXJ2tDtzlpgEtccRfdDFUIDFWt4RIxoGcjYNEr8PtVUIam43XOeNyWzaixyfA1FhVNf0ifry9HYBu9CyrORKai5M5BsVbgdf3V88FoaMh9Fxi3QQCf3LPrv9qwIgnweKhikyhvabYOhOh5wH2W+wBwWOw9f7W75+qf6SmBV7J3rEHXCkN3CsZOW2+tWT5U7wzN8eYqPyBjaxcl4qtKGtrsm2EBn1eLMLYkmkgWiQGpR1QmFV98r9yuG/Q53Q5Uior8SiinnWp3aBlFtRjPpE7JMHRsQ7fN33RdkqvQYrrRdveRkfvhRp0QwiWW3vQIqomuuNaFhCHI8+Bq3vdB9/4NusDDUI5BYXoRyKqLeIJz3MqWhte+ns/59QwJ263NhiFttBIuBG+KvV1SqoPD0jGP2NSD1aDeD1AU77fGgFEDEXWukQWLetakzwDIY7AgMS28UAn4T6hDBxFS3JVfNcHrlte7EpDb+g+K1YCt0jpTAG5YBG7gV8lzn6eAgtXGTYXxRJ7SiVLoB92S0
viW/YFV5fIDWSRpn69iLSQ26x9+xHH7+lrmrGKJwcq04BrAO4uE1iBISos7leO2xTOM6HrjTP4pzfmchxynB80AJRt5MsjOeMqD5T+z+r2SXRFtD9BpgTnohr3doOcxhWdnR9X4uMLlQYTLDrnxMIBGVzIZMi5TDR6WT0ZOTc5aubUik335ZjJI8VHQmC1AH3NW5ENpYozX3nYs5EEo5Mnnfa6ggcwg3VJ/FTNHzv3IpV6e0y/Fs0PIFmYUZPGSYJtB589GZAUkTmwjeILKeqq1YtKXUWjKS9ddAeHk/hzyYPsm5x9Qevn6Zy2A5r/AY5XxZ5FiW5ET1y/lG00oBpqxFz9f71w4TQXKXm8luMruXCwz5+TaXx79AxntYl+BGaF7Qg+WXNHMsX5oAQnYHSHWvFK38omfGWNgUXX3XH4aH2A4iPD62lESU4dc+TXcJ/aqD0G6xPWihYgkAjIBtVTZpJ3eSRO0aTXOGTnjD93dw6e9VcjLf675E/iLg7yaZuX5IUqIVwl6yJBWZqn9MwSv99ld89VPzPHBcavv3Y+tI2z/yC/yVAqSBsSgHh8TG7zbT2uS0LTL1iBiatpHzX9ebqW/b0cKMABiejiIiss3r/nly95mjYp2IxzcmkCXYs0DZLAfQX46sruWAm8L+sO04/4tPb7sBaGBhC6G2ujQlr72KfHmEZZl3NOkCsJDw9xblYFaDgnJkwoK+hEVFBK8eeFjhZ4gYlYnfqIvNtrmcMKyE/nrhu91IrC2i5CBStEzEqQFumS05G9RY3l8uWCM3dL2ldQy3ehNmeq8FqqKeSA6kAZ0WxEwufJNOm7vbov7YNE3RgmqnD4qFi2LijQQCDEMLuChmS/jOUdOHigU+crJvu3LXPHbjxIoMTwwqnYmHZqeNYBtKGWFi6CGkeq6Y0Unq7A/vzrgsPNe+WspnOiRXGx7xvNY9Hy87uZ22MpuWpybI12E4daWH5+PtLx5820O7+mD9ZoTXlgvzTla6zYFWNJDL3m2X4/N8G1g6cUjI0+OF6jqdN6SXflqLlAyRWKXoUJ935S9qKbycDynnJhAccuji6OFjGnWRD6RYFRUW4nMJfyw0g20cEQ7XD3rOho6KvvqYne/KVP+6Le3q3oPvX6DDf3uqlN8otBKo3mVRtMb1+1R9aQxgqaPA4UMF0ROTGu/64dAsCZqVYvhxDQlPW/bc0UsZjZjwkTFOrNxrC/ppYJOIwpoFIXRoHhTLmoMOZXgf9nJqLtdQ5BIK097PUS/vdNhAvJLA7MDFLtQ5BaNefdZpedNpBbilg6kRy+O+zDpputjnl03jFoE6oMnVCQPLswFrK1HGNgQtsoFUS2nVClYzZ85I9QQarYobBvQTz6UQlEnNHpaJwgqMRt7IkmrBpGsJuRNea3hwXsqBM3kLdO05fLyvCs9LqAOFN/mqYYfBGd3nztMw722OKJ8WB2vi8EmOkGFn7eFtf2u+kJaJUCo2OMv3zs12cvg2r51Q2w5sPEVOw203vCq7CUtxLHGYXUNap8fjEDB784ORZM0Fnm3zdtA0loEGGvpoa16rv8vlq5jwVEdCP4SmHw0mJxzuJFzznz9Q7PvNruza4PU6q6qDlK/b+QYzIaEIepANw28+G/uqJ4wjZHcH0hChW+KK7SCKtIHPPxoIsERVvCOnxI7O5Ic+dMCKkUDvVWNI4pclOe0FZtfygn44C/JKYwzWjS0vNZXGzUeIT2P6GL5LvNwvk8wiWHrLKJ/Qiv+499o0YATiYT+69K/fqj3QYePHlY1v1/ZXUGVITjNsb8ErARjA70EDpMEkVDb8yrRAc3RX4oPxb3u/3B0qSbsaKKYW6n27Hf9y0P2qDcKPbrZv8YRTSZt9ycuYALLV2UG6ZdPCrqvYnMKoOOXrIY997+UYkEK/3N5EU/oqwjR33COA20C5xHwtax4CJn64NmtvyZmI9qNR00cUoXXz7d98NKFs0O4qspGDbEI1v/HdxRM4P1GnA/7R
WfWkbF+wwj91YNWZICVarEN75vQnWYtXKyyNgn+H1REqdELGQ7jJag0dPSf2sALW5vf2zENpdql699BxRQK5GmDv5tzxMk2TQOh1B/M2zuC4R7QDSjLRL7HNaWZKzP2JcY3tBb4FTQEOaeUY2VfXNFF9or04utuzVU6kR6maEX4PqzyO/cjzhqYwH0EuSh+K8oLciu1z0+afky0T4s8H3xr56Bw4GDL48jCB+HbTuJJNQnhTqJJ5LXleTor846f0ctK7DwFsFPygrN6MBCM+rp9es4vMpitWUgsKE3dI9mlF7E0/vXptzPp4rbwo9wYH2mt7SlskxZVvhuBEQDCcCRhw+5nmaerT5hwix6GRDLoWBaeVx2/2T43WT5/SfIB0Gp2bFvtBkB1pTGQ+kXkl7HuCpM5CLHHnycUet0D7PPRJrZRURqy2F6zwtSZswQmqXYrLKxIUktB+d56BC75yt22uhxJuyhi9Vw4UopovbBKZmVv7sk6TJwJVVo1W+87mf35XY+5mdKg6zHIUakJYtTREzagVIBtyyhCUVt53eMMJ8Bi+VP8SGUpfXq1xWgLfXkxAAmzQR0WzXOchpbqgGp8SaaOt81t6VTCGKp3370wiF8s3/3gNNmBqJi/zD/s9/4ZbSD4vCzn2ULZjX0HBIEHpwfspjIo7LP+rxbjUF8Ic2CTFlyJZV0Mtftpxi3I+rj98rN/TB+sUlq+nxF/NqegCDdqH0TyNeveNNmyrKmdv78h/xRcmFqV4+ubyqehePubRpcXtPwU3VXuu2K/3ZNlkOvajePBerrWXRSGFzPJ8DhudS21wE86yF9V1or1af+42lqGj75a18vA2YT4d+ysZKxdrqVaMkk/D7JbCYQkaOpVmR6L5lr8c7V3oNIKLigrtsnScSgYU8sIrDCf7fTqtNby8nUBteoknERHA9t8ffZc0RKR/Uu2xtL5irNoVu77gvLkiiX6HeuOjsdH1gj9CUk4X7xIfJk56jRsGwkVWjedVocw/x7kIqeXPIetjWXtUVTtsPO+rI48COCi00cmOt7pzicBLueXLY1FTyHixU9xwikIgw/vH8X5x89CWW7cG0NSWFZOC0vg5fRLe0DDEVjI+Fe6upTxOIRpud5V7JIw6PPjekEkyx9NWxUztsL8saX6dQRu7LjJVzLtvt1x+K+WYSX3bYr95Dq/6eorviONQ7sGP8aQgBKqEbHZZomMuyUpiqn1LkjFhfLkWHKDkCfsz57EdiI6NSxfhgJ9tXmTDRGYt8kBoFqz+YeYtWGCqUSroT6HqKJwoN/La2MUFFuMvlDQtSiynsd0avjGlIshHk52b9JNoY/pul8wApJu73+7v2gyMYyVTZWWlYcf+cdUVbc4T3DvtHFUY/T5TiuPpkexv6QSNYF7Q48PaBc42JGG+bLQkXYTx6fa+Sq0l66WXWXv6jOkX4QmpNCMYult8e+Pc10VWrT6IDIFOYg4GUfl/G9d+BybGs0Zt5G4PO3h6VtzS3a5APIRfQC/Nmv7Brp2Hm8IcROtx5oPFeZCVxNNWQSGelEUQOhfeIcpQns8G3FHKxZvvb6rzPtBrGfrcrosRLM9Dtr+fmEvxXbvfZR6+7vFjcKQAoVePGr1M3urfVtjXD8PyF6NcFpQ/LvoEPZ8HuJfOOFILfj3w98VgAOiS8qhzfzLYj/drCHIGs+bS4oXSAJrWgRM1co9bZ/PZjHRclKfI955doSTu5N0A0NPB+yfP45wfmUCnyTQYaYiZn1fNpSQYxCKLwI/rgJMPMoJ6GsPyBv4umyX5WiI+gdJXba7Fu2CO/ObaCdyVi+0x0iCIPYC9/8xz/YM3OSToUQlqWLgow3TmDwo4ZZVDdfY8uIhhmMKnugR2sd7c/oYCNNDJVSXlcWDIQEbi+j2tMCydm7RX8d+isg9MundE8MVKqrS4I5TwjL++bIGKf/La6UIdfGeTKB/RbA6QhFsCSZZcbtyr2bnrq/zoQ/UCoRMyRFSNx9n5
8mUJGrcz7MzHrvJVRy+THcnPBjVG4XKcfhchNaTLZkjQd7TgJO99aFATGg2/hOdOp8ESwe9Hhu6jN+tblHY2s5Lfle08/DOnTlz3pT4A3XjBFGBdS+PnlTh4btVkJ25wi2uEw/kDiwe4RjT5q1Lr1XVihHqyaLkgd2cobNQS3NSA27GSHgfJODoq1sjgcb5VbTduyoliK9VyvK5i+OgLJo200taIlfdu17kyVK8o4f/pLU5+CsQOorq3cLf67J+WND+1THzVtYa+DfmmJiTxO9R5HsQGElV4IL9sjx30vWXraH0ig44Lq3AEebWYdNfdlJbEU/LvWmwmgavQqH0lZPbNaSZXSFh4Ds2LOIdySyOict3spciGwrDheootRbiLQjwYJwoSZ57pPVwjSQVfLXN937PnDhw30zzakfx53XyzDmMoI0b+nriq2+fUv2WEG7VvS/J9Oqs2G4ty194C+/6Cg9ULJum/3Hr7KYAXY5n01UYI+Z7z6YtMtmX7zeqQKaB5QkxQ7i/l2iW4l8bA+Z4YMX+vzYz0j894dYYe/vWu4nvv68eQEYCKJh9ivMtgHq+dumFzPJbZNTIlQH8vuWvGz9YrWHfhWephBsexRoVha7ZUVXM4YXwUXvktoEsfzczLoNVQ4PDeIW2MojYXL+RoQuuFO5V78ErgJpLrouDI65TuDJLsxR/Ci63hUJH84fsIzrejYL+ZQBeAaVxBwBrThLfQMx8SSBKkakNo+awrH6ANajY76u7UF9vbixRrNUn+Ymu9j6sEZyR9IZn9K4GAImgnErXL36cyXo1ykXVezXyTBVglqVEdV/RmdBzs5Pq9Pxm2pciHXIXbpR47cmWk/BvT1X3ZWjA1ybXy4SnABhebSEPRYJDdY9I7qbtSsT+TG5oS3LAgeO3nuQGUJDIaXq0ceZiC7XqWSfiM/sjqx1oWsr1jrO0yNSYIXbCRrMX2NigEfJTFseCO1M+K8t/QM6BjN8PTDfCGM0ZxNPJnCSEDSWqM63Q6jg9hOUvZHHCF1fpTzA4OUCAjZhTBB2aOcxK9BRct5XejOwJHUEphAf0gUZSgJev+33b//KGVP1JryyuTLNrTLT25J9AM9YahNKa9JnuotlJLjxRGA8soKgnVwLdi3bL95patrDgO39VER9d1yP5r3vOnQ6dYSJCmdsvIaM9/HeJM5sFMGhKJauupIopO/S0AT6CRWAK4/9m4dXVGDoMaxSeWvePjJbmB3fyj0yRh9NqemDSNkJSw1ZEXj8IOzD8RwNrA1FIgpUupsKzuz7lMLHxSK8y0Jh25eX4yBOE9otvZeOC7DDLDMavbMbc6Np+0V9eGEtAEsLfSZjvkdZ+ImtbzMa3kzlS9LKATBdFiDDYRgLPgLrPGdtPRNuR0WTMiSBfFePy9mhpaLGYykBoUszAe+lr1/fgzJB+bf+0Yk1FITEtZXMW4PYzWupCo88CYf8AGaXtrJR6w5luzOzZR1aovlR4dW2PHaDpMNYfALHJ/SccprPyGjO3CQ4gdfbLYNjdTxhd9ky9Zk6uKdyXb4En/4X7nr/UEdmjz9DcSTc/olqhpRecn+fwyVEJji/kbLx+54fuhFmkgQ5Tz3PtVqzNDM9fpA1OKq8AJ0yrN/srBXYhYuh++J1FwMqpJzh9zQAWXUn+KmNbLXzdTgdyamMXVtc0r4+8b9TLziAIxrKRnIYUMufWXvrlLwtkSJ72ekGrtqay8f02ZumxY1DUuAcSx5EDHRqvxr7+WbsDxK7MDJe3yi/KAbkGgeDWKsk8PqAvgZ/W5LXYcz/TenHuhRg2QtQ9CykfWtUxqPgf8rfzr1QaV5sPPzi3Ju2ylFipfVRyXLkK8cv/r1MDy41s+9qZX0pALNkRfWCNqeAn8wk28m8YZ0xNX1MaILqV5B+rg7GFTPKXbXE5sGXzr3jqjUitLcUbWMO0uHLftSjtiIS6xiZ79L4mFrfTI2c7kEdwsjQ/MZxMTqM/fcDM332W+El7M
T9XyfCz58JB1DxHIGQWPEdEZS/tlHV3HZWMv0Y2aAz9hRjAXEORYBptA/nZ0HSaIXruxKyq60NBRrrB2w/TmMa3/R6RNFiM/CeRn4Nd2xVe3USfBbC7SE/PVvp0Jxqv87ZNdTilEP7u0K9xBgRRVCznWK69AbaZ66hsXG9B/T/tI/r8DR1JHUnGf8sxKKsaHuHP3YFoSetZZEwE4GnG/VelQyICkknHWGzZ8ZRdPeW0BtYZHCwM97DOVB3pqRpcv1ITzEo4sT84SAXYvh8Q7cixlcb0z/GXqleNWDeMVoT65Xs33dXhz/6b9SxRI0NTqfFzH4u6r5FBgtGwEs+iENjvA/5FXYEc6xRwuuKK4Lz2NcWCdBq3NGAlmyNCfvp3Fr/tshN66Iec2MrxVNoVtjG/7pPvCPx8C7kJvwb1j0AmOc3O3s2TIZrRNJpRoCxFOCiIVIwq4p52n63ib8gUZCa9N1f8vhsiUcwvzHsJB22Ki3dfHO/qZbzhIN8bsIXynWSleFaYys/NJrkpjtIE+rDTmm1NQB7W96WKzz+jh/2XX1U8bwBj4AC3F5xZsgbURcqLxLRT1w0dG1HejrSq4gZGOksYlLGn1sqMVgmdLvpER6h7DVNkuAqNlWJVAzs60/kSHSFVWUqgv2Qrs9HoLqxK2PBPKJIIvqgxKPR8DPhkgV+sB/ypvuVg4OPxYQd1dg8x3AwPJBsLqshszak3/3vKvtcMN6uamGXrvKoEH1JD9tCj/8p3ri27TsfQo7k6ZIxTTiZ0SFb50/JpvVEEJ7MJFl1k03zCTYalz18Rj2oDbRcNDGJFJyApcU4+al/5LDqVEU3TORYNXh1sgKT4hKVBULRfkttf/ewDuKNT8L9Nt9wDZGiEwX/iTXJPxQDTiMBZgXNLQ6xa1Ry8lT0lDNuJ+Ntp0luBTyh23wmMMVigny2GIXA05VeUGMmNUpOjfxMzn8wdz1/HSwZtvp3uQ2H1ZzNHya87VkbmcZR9W9jjmMPFHAduSrq82jhAyhlZpcA4un1v96h55gfEAS0W0SQVfnpO8Rb9ge5jh8dvCJOV40nQnipHcpQUOEYTmSdBdtsX6GVY/OU4hb/D5cM2/avUVYHKSPpJtcTeHSuhLz/OHwPoIxTw9WRxKkOblNY+LHl4tbNEmiZK/xivh5yHiVY9ZF5MYPYnsgNA+rISn4qZKuvbOWb9smxEtxN4epgjgj50zC1Rxw0OtCyK4wX8eVYck22T5Dj0unjJnylSv/PRJv6Sesnrph2/Folw7dVAo17jgPP46uVNjuXKxbmph+gBExHtCFGCgDfNFZfyKy8JtrZzMZ8Vy+T+hF2QDLwmeW7Uc1EQwHVqeyx7CIpqtL2EEj5EN3YKzNBsW+sraZOA9NaEUmrHi/l7wniwyXFyTdae9C7bNxqqfb+2N3cr/vFYFMvu4N0Jx9wNgySb7sZa9n31mHc1u9DCk24U4ITS4eV4qYN/w+ozKgxWciI9X6X+8Yifz5ZmJUI6HwMfSnZWnufPFactRjjVHgsspKyDrnuiTXg4VSFptFW6t4dvCO4d7KNKxLX9zcC+DrivsaMS3e/E6iIqlWftNMcevo8JpBEvpbYkSA99oxCFapw/2A7o/x4W7u9HgaWC2PQm6Bqmq4//shtR/N4y0FfsFqDZ9sdsurq3aMVI0S3bkjRjZauYDfqVvNvis0SGSBK3LB2IO1A7HHerPs02ST5UHaUYLWGusK77kuOrHds3kpmSVSCMG9ytTZ+g05mXwBf56YOBrw3RquFRxf4qg7EIAVl+vdkNyF8MMzSunDaGdJjDlE51bcBPmLzYvFbEiZUvfO50y+S/Co4F912zflH7p5NCs3YgN/7VPBuG1aP9XVIvCRZLyvcDFIuAXAGL4Xitvg8QAkHMR0qEosAPrPCaL4ppPnKEq8A6hnMQaxOl30CY7QsaNUaUb39OqlQI6TiFZpAcJJnguqgsHsyz9Ve7Ryvuu3T1FZ+SxL3fWqO8m
\ No newline at end of file diff --git a/30-reference/configuration/images/cloud-pak-deployer-monitors.png b/30-reference/configuration/images/cloud-pak-deployer-monitors.png new file mode 100644 index 000000000..77cc0d0b2 Binary files /dev/null and b/30-reference/configuration/images/cloud-pak-deployer-monitors.png differ diff --git a/30-reference/configuration/images/cognos_authorization.png b/30-reference/configuration/images/cognos_authorization.png new file mode 100644 index 000000000..6f042f56f Binary files /dev/null and b/30-reference/configuration/images/cognos_authorization.png differ diff --git a/30-reference/configuration/images/cp4ba-installation.png b/30-reference/configuration/images/cp4ba-installation.png new file mode 100644 index 000000000..d38e454dc Binary files /dev/null and b/30-reference/configuration/images/cp4ba-installation.png differ diff --git a/30-reference/configuration/images/cp4d_events.png b/30-reference/configuration/images/cp4d_events.png new file mode 100644 index 000000000..a94020363 Binary files /dev/null and b/30-reference/configuration/images/cp4d_events.png differ diff --git a/30-reference/configuration/images/cp4d_monitors.png b/30-reference/configuration/images/cp4d_monitors.png new file mode 100644 index 000000000..c5f86d62d Binary files /dev/null and b/30-reference/configuration/images/cp4d_monitors.png differ diff --git a/30-reference/configuration/images/ldap_user_groups.png b/30-reference/configuration/images/ldap_user_groups.png new file mode 100644 index 000000000..2fed31bff Binary files /dev/null and b/30-reference/configuration/images/ldap_user_groups.png differ diff --git a/30-reference/configuration/infrastructure/index.html
b/30-reference/configuration/infrastructure/index.html new file mode 100644 index 000000000..9643130db --- /dev/null +++ b/30-reference/configuration/infrastructure/index.html @@ -0,0 +1,162 @@ + Infrastructure - Cloud Pak Deployer

Infrastructure🔗

For some cloud platforms you must explicitly specify the infrastructure layer on which the OpenShift cluster(s) will be provisioned; for others you can override the defaults.

For IBM Cloud, you can configure the VPC, subnets, NFS server(s), other Virtual Server Instances (VSIs) and a number of other objects. When provisioning OpenShift on vSphere, you can configure the data center, data store, network and virtual machine definitions. For Azure ARO, you configure a single object with information about the virtual network (VNet) to be used and the node server profiles. When deploying OpenShift on AWS, you can specify an EFS server if you want to use elastic storage.

This page lists all the objects you can configure for each of the supported cloud providers:

- IBM Cloud
- Microsoft Azure
- Amazon AWS
- vSphere

IBM Cloud🔗

For IBM Cloud, the following object types are supported:

IBM Cloud provider🔗

Defines the provider that Terraform will use for managing the IBM Cloud assets.

provider:
- name: ibm
  region: eu-de

Property explanation🔗

| Property | Description | Mandatory | Allowed values |
| -------- | ----------- | --------- | -------------- |
| name | Name of the provider | No | ibm |
| region | Region to connect to | Yes | Any IBM Cloud region |

IBM Cloud resource_group🔗

The resource group is used to group cloud assets. You can define multiple resource groups in your IBM Cloud account to group the provisioned assets. If you do not need to group your assets, choose default.

resource_group:
- name: default

Property explanation🔗

| Property | Description | Mandatory | Allowed values |
| -------- | ----------- | --------- | -------------- |
| name | Name of the existing resource group | Yes | |

IBM Cloud ssh_keys🔗

If you have Virtual Server Instances (VSIs) in your VPC, you need an SSH key to connect to them. SSH keys defined here are looked up in the vault and created if they don't exist already.

ssh_keys:
+- name: vsi-access
+  managed: True
+

Property explanation🔗

Property Description Mandatory Allowed values
name Name of the SSH key in IBM Cloud Yes
managed Determines if the SSH key will be created if it doesn't exist No True (default), False

IBM Cloud security_rule🔗

Defines the services (or ports) which are allowed within the context of a VPC and/or VSI.

security_rule:
+- name: https
+  tcp: {port_min: 443, port_max: 443}
+- name: ssh
+  tcp: {port_min: 22, port_max: 22}
+

Property explanation🔗

Property Description Mandatory Allowed values
name Name of the security rule Yes
tcp Range of tcp ports (port_min and port_max) to allow No 1-65535
udp Range of udp ports (port_min and port_max) to allow No 1-65535
icmp ICMP Type and Code for IPv4 (code and type) to allow No 0-255 for code, 0-254 for type
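The example above shows only tcp rules; udp and icmp rules follow the same structure. A minimal sketch with illustrative rule names, assuming the icmp properties take type and code as listed in the table:

```yaml
security_rule:
# allow DNS lookups over UDP
- name: dns
  udp: {port_min: 53, port_max: 53}
# allow ICMP echo requests (ping)
- name: ping
  icmp: {type: 8, code: 0}
```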

IBM Cloud vpc🔗

Defines the virtual private cloud which groups the provisioned objects (including VSIs and OpenShift cluster).

vpc:
+- name: sample
+  allow_inbound: ['ssh', 'https']
+  classic_access: false
+

Property explanation🔗

Property Description Mandatory Allowed values
name Name of the Virtual Private Cloud Yes
managed Controls whether the VPC is managed. The default is True. Only set to False if the VPC is not managed but only referenced by other objects such as transit gateways. No True (default), False
allow_inbound Security rules which are allowed for inbound traffic No Existing security_rule
classic_access Connect VPC to IBM Cloud classic infrastructure resources No false (default), true

IBM Cloud address_prefix🔗

Defines the zones used within the VPC, along with the subnet the addresses will be issued for.

address_prefix:
+- name: sample-zone-1
+  vpc: sample
+  zone: eu-de-1
+  cidr: 10.27.0.0/26
+- name: sample-zone-2
+  vpc: sample
+  zone: eu-de-2
+  cidr: 10.27.0.64/26
+- name: sample-zone-3
+  vpc: sample
+  zone: eu-de-3
+  cidr: 10.27.0.128/26
+

Property explanation🔗

Property Description Mandatory Allowed values
name Name of the zone Yes
zone Zone in the IBM Cloud Yes
cidr Address range that IPs in this zone will fall into Yes
vpc Virtual Private Cloud this address prefix belongs to Yes, inferred from vpc Existing vpc

IBM Cloud subnet🔗

Defines the subnet that Virtual Server Instances and ROKS compute nodes will be attached to.

subnet:
+- name: sample-subnet-zone-1
+  address_prefix: sample-zone-1
+  ipv4_cidr_block: 10.27.0.0/26
+  zone: eu-de-1
+  vpc: sample
+  network_acl: sample-acl
+

Property explanation🔗

Property Description Mandatory Allowed values
name Name of the subnet Yes
zone Zone this subnet belongs to Yes, inferred from address_prefix->zone
ipv4_cidr_block Address range that IPs in this subnet will fall into Yes, inferred from address_prefix->cidr Range of subrange of zone
address_prefix Zone of the address prefix definition Yes, inferred from address_prefix Existing address_prefix
vpc Virtual Private Cloud this subnet prefix belongs to Yes, inferred from address_prefix->vpc Existing vpc
network_acl Reference to the network access control list protecting this subnet No

IBM Cloud network_acl🔗

Defines the network access control list to be associated with subnets to allow or deny traffic from or to external connections. The rules are processed in sequence per direction. Rules that appear higher in the list will be processed first.

network_acl:
+- name: "{{ env_id }}-acl"
+  vpc_name: "{{ env_id }}"
+  rules:
+  - name: inbound-ssh
+    action: allow               # Can be allow or deny
+    source: "0.0.0.0/0"
+    destination: "0.0.0.0/0"
+    direction: inbound
+    tcp:
+      source_port_min: 1        # optional
+      source_port_max: 65535    # optional
+      dest_port_min: 22         # optional
+      dest_port_max: 22         # optional
+  - name: output-udp
+    action: deny                # Can be allow or deny
+    source: "0.0.0.0/0"
+    destination: "0.0.0.0/0"
+    direction: outbound
+    udp:
+      source_port_min: 1        # optional
+      source_port_max: 65535    # optional
+      dest_port_min: 1000       # optional
+      dest_port_max: 2000       # optional
+  - name: output-icmp
+    action: allow               # Can be allow or deny
+    source: "0.0.0.0/0"
+    destination: "0.0.0.0/0"
+    direction: outbound
+    icmp:
+      code: 1
+      type: 1
+

Property explanation🔗

Property Description Mandatory Allowed values
name Name of the network access control list Yes
vpc_name Virtual Private Cloud this network ACL belongs to Yes
rules Rules to be applied, every rule is an entry in the list Yes
rules.name Unique name of the rule Yes
rules.action Defines whether the traffic is allowed or denied Yes allow, deny
rules.source Source address range that defines the rule Yes
rules.destination Destination address range that defines the rule Yes
rules.direction Inbound or outbound direction of the traffic Yes inbound, outbound
rules.tcp Rule for TCP traffic No
rules.tcp.source_port_min Low value of the source port range No, default=1 1-65535
rules.tcp.source_port_max High value of the source port range No, default=65535 1-65535
rules.tcp.dest_port_min Low value of the destination port range No, default=1 1-65535
rules.tcp.dest_port_max High value of the destination port range No, default=65535 1-65535
rules.udp Rule for UDP traffic No
rules.udp.source_port_min Low value of the source port range No, default=1 1-65535
rules.udp.source_port_max High value of the source port range No, default=65535 1-65535
rules.udp.dest_port_min Low value of the destination port range No, default=1 1-65535
rules.udp.dest_port_max High value of the destination port range No, default=65535 1-65535
rules.icmp Rule for ICMP traffic No
rules.icmp.code ICMP traffic code No, default=all 0-255
rules.icmp.type ICMP traffic type No, default=all 0-254

IBM Cloud vsi🔗

Defines a Virtual Server Instance within the VPC.

vsi:
+- name: sample-bastion
+  infrastructure:
+    type: vpc
+    keys:
+    - "vsi-access"
+    image: ibm-redhat-8-3-minimal-amd64-3
+    profile: cx2-2x4
+    subnet: sample-subnet-zone-1
+    primary_ipv4_address: 10.27.0.4
+    public_ip: True
+    vpc_name: sample
+    zone: eu-de-3
+

Property explanation🔗

Property Description Mandatory Allowed values
name Name of the Virtual Server Instance Yes
infrastructure Infrastructure attributes Yes
infrastructure.type Infrastructure type Yes vpc
infrastructure.allow_ip_spoofing Decide if IP spoofing is allowed for the interface or not No False (default), True
infrastructure.keys List of SSH keys to attach to the VSI Yes, inferred from ssh_keys Existing ssh_keys
infrastructure.image Operating system image to be used Yes Existing image in IBM Cloud
infrastructure.profile Server profile to be used, for example cx2-2x4 Yes Existing profile in IBM Cloud
infrastructure.subnet Subnet the VSI will be connected to Yes, inferred from subnet Existing subnet
infrastructure.primary_ipv4_address IP v4 address that will be assigned to the VSI No If specified, address in the subnet range
infrastructure.public_ip Must a public IP address be attached to this VSI? No False (default), True
infrastructure.vpc_name Virtual Private Cloud this VSI belongs to Yes, inferred from vpc Existing vpc
infrastructure.zone Zone the VSI will be placed into Yes, inferred from subnet->zone

IBM Cloud transit_gateway🔗

Connects two or more VPCs to each other.

transit_gateway:
+- name: sample-tgw
+  location: eu-de
+  connections:
+  - vpc: other-vpc
+  - vpc: sample
+

Property explanation🔗

Property Description Mandatory Allowed values
name Name of the transit gateway Yes
location IBM Cloud location of the transit gateway Yes
connections Defines which VPCs must be included in the transit gateway Yes
connections.vpc Defines the VPC to include. Every VPC must exist in the configuration, even if not managed by this configuration. When referencing an existing VPC, make sure that there is a vpc object of that name with managed set to False. Yes Existing vpc
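As described above, referencing an existing VPC requires an unmanaged vpc object of the same name. A minimal sketch (other-vpc matches the transit gateway example):

```yaml
vpc:
- name: other-vpc
  managed: False    # VPC exists already; only referenced, not provisioned
```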

IBM Cloud nfs_server🔗

Defines a Virtual Server Instance within the VPC that will be used as an NFS server.

nfs_server:
+- name: sample-nfs
+  infrastructure:
+    type: vpc
+    vpc_name: sample
+    subnet: sample-subnet-zone-1
+    zone: eu-de-1
+    primary_ipv4_address: 10.27.0.5
+    image: ibm-redhat-8-3-minimal-amd64-3
+    profile: cx2-2x4
+    bastion_host: sample-bastion
+    storage_folder: /data/nfs
+    storage_profile: 10iops-tier
+    keys:
+      - "sample-nfs-provision"
+

Property explanation🔗

Property Description Mandatory Allowed values
name Name of the Virtual Server Instance Yes
infrastructure Infrastructure attributes Yes
infrastructure.image Operating system image to be used Yes Existing image in IBM Cloud
infrastructure.profile Server profile to be used, for example cx2-2x4 Yes Existing profile in IBM Cloud
infrastructure.type Type of infrastructure for the NFS server Yes vpc
infrastructure.vpc_name Virtual Private Cloud this VSI belongs to Yes, inferred from vpc Existing vpc
infrastructure.subnet Subnet the VSI will be connected to Yes, inferred from subnet Existing subnet
infrastructure.zone Zone the VSI will be placed into Yes, inferred from subnet->zone
infrastructure.primary_ipv4_address IP v4 address that will be assigned to the VSI No If specified, address in the subnet range
infrastructure.bastion_host Specify the VSI of the bastion to reach this NFS server No
infrastructure.storage_profile Storage profile that will be used Yes 3iops-tier, 5iops-tier, 10iops-tier
infrastructure.volume_size_gb Size of the NFS server data volume Yes
infrastructure.storage_folder Folder that holds the data, this will be mounted from the NFS storage class Yes
infrastructure.keys List of SSH keys to attach to the NFS server VSI Yes, inferred from ssh_keys Existing ssh_keys
infrastructure.allow_ip_spoofing Decide if IP spoofing is allowed for the interface or not No False (default), True

IBM Cloud cos🔗

Defines an IBM Cloud Object Storage instance and allows you to create buckets.

cos:
+- name: "{{ env_id }}-cos"
+  plan: standard
+  location: global
+  serviceids:
+  - name: "{{ env_id }}-cos-serviceid"
+    roles: ["Manager", "Viewer", "Administrator"]
+  buckets:
+  - name: bucketone6c9d6840
+    cross_region_location: eu
+

Property explanation🔗

Property Description Mandatory Allowed values
name Name of the Cloud Object Storage instance Yes
plan Plan of the Cloud Object Storage instance Yes standard
location Location of the Cloud Object Storage instance Yes
serviceids Collection of references to defined serviceids No
serviceids.name Name of the serviceid Yes
serviceids.roles An array of strings to define which role should be granted to the serviceid Yes
buckets Collection of buckets that should be created inside the cos instance No
buckets[].name Name of the bucket No
buckets[].storage_class Storage class of the bucket No standard (default), vault, cold, flex, smart
buckets[].endpoint_type Endpoint type of the bucket No public (default), private
buckets[].cross_region_location If you use this parameter, do not set single_site_location or region_location at the same time. Yes (one of) us, eu, ap
buckets[].region_location If you set this parameter, do not set single_site_location or cross_region_location at the same time. Yes (one of) au-syd, eu-de, eu-gb, jp-tok, us-east, us-south, ca-tor, jp-osa, br-sao
buckets[].single_site_location If you set this parameter, do not set region_location or cross_region_location at the same time. Yes (one of) ams03, che01, hkg02, mel01, mex01, mil01, mon01, osl01, par01, sjc04, sao01, seo01, sng01, and tor01
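For example, a bucket pinned to a single region rather than a cross-region location could be defined as follows (bucket name and region are illustrative):

```yaml
buckets:
- name: regional-bucket-6c9d6840
  storage_class: smart
  region_location: eu-de    # mutually exclusive with cross_region_location and single_site_location
```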

serviceid🔗

Defines an iam_service_id that can be granted role-based access rights by attaching iam_policies to it.

serviceid:
+- name: sample-serviceid
+  description: to access ibmcloud services from external
+  servicekeys:
+  - name: primarykey
+

Property explanation🔗

Property Description Mandatory Allowed values
name Name of the serviceid Yes
description Short description of the serviceid No
servicekeys Collection of servicekeys that should be created for the parent serviceid No
servicekeys.name Name of the servicekey Yes

Microsoft Azure🔗

For Microsoft Azure, the following object type is supported:

Azure🔗

Defines an infrastructure configuration onto which OpenShift will be provisioned.

azure:
+- name: sample
+  resource_group:
+    name: sample
+    location: westeurope
+  vnet:
+    name: vnet
+    address_space: 10.0.0.0/22
+  control_plane:
+    subnet:
+      name: control-plane-subnet
+      address_prefixes: 10.0.0.0/23
+  compute:
+    subnet:
+      name: compute-subnet
+      address_prefixes: 10.0.2.0/23
+

Properties explanation🔗

Property Description Mandatory Allowed values
name Name of the azure definition object, will be referenced by openshift Yes
resource_group Resource group attributes Yes
resource_group.name Name of the resource group (will be provisioned) Yes unique value, it must not exist
resource_group.location Azure location Yes to pick a different location, run: az account list-locations -o table
vnet Virtual network attributes Yes
vnet.name Name of the virtual network Yes
vnet.address_space Address space of the virtual network Yes
control_plane Control plane (master) nodes attributes Yes
control_plane.subnet Control plane nodes subnet attributes Yes
control_plane.subnet.name Name of the control plane nodes subnet Yes
control_plane.subnet.address_prefixes Address prefixes of the control plane nodes subnet (comma-separated, if more than one) Yes
control_plane.vm Control plane nodes virtual machine attributes Yes
control_plane.vm.size Virtual machine size (aka flavour) of the control plane nodes Yes Standard_D8s_v3, Standard_D16s_v3, Standard_D32s_v3
compute Compute (worker) nodes attributes Yes
compute.subnet Compute nodes subnet attributes Yes
compute.subnet.name Name of the compute nodes subnet Yes
compute.subnet.address_prefixes Address prefixes of the compute nodes subnet (comma-separated, if more than one) Yes
compute.vm Compute nodes virtual machine attributes Yes
compute.vm.size Virtual machine size (aka flavour) of the compute nodes Yes See the full list of supported virtual machine sizes
compute.vm.disk_size_gb Disk size in GBs of the compute nodes virtual machine Yes minimum value is 128
compute.vm.count Number of compute nodes virtual machines Yes minimum value is 3

Amazon🔗

For Amazon AWS, the following object types are supported:

AWS EFS Server nfs_server🔗

Defines a new Elastic File Storage (EFS) service that is connected to the OpenShift cluster within the same VPC. The file storage will be used as the back-end for the efs-nfs-client OpenShift storage class.

nfs_server:
+- name: sample-elastic
+  infrastructure:
+    aws_region: eu-west-1
+

Property explanation🔗

Property Description Mandatory Allowed values
name Name of the EFS File System service to be created Yes
infrastructure Infrastructure attributes Yes
infrastructure.aws_region AWS region where the storage will be provisioned Yes

vSphere🔗

For vSphere, the following object types are supported:

vSphere vsphere🔗

Defines the vSphere vCenter onto which OpenShift will be provisioned.

vsphere:
+- name: sample
+  vcenter: 10.99.92.13
+  datacenter: Datacenter1
+  datastore: Datastore1
+  cluster: Cluster1
+  network: "VM Network"
+  folder: /Datacenter1/vm/sample
+

Property explanation🔗

Property Description Mandatory Allowed values
name Name of the vSphere definition, will be referenced by openshift Yes
vcenter Host or IP address of the vSphere Center Yes
datacenter vSphere Data Center to be used for the virtual machines Yes
datastore vSphere Datastore to be used for the virtual machines Yes
cluster vSphere cluster to be used for the virtual machines Yes
resource_pool vSphere resource pool No
network vSphere network to be used for the virtual machines Yes
folder Fully qualified folder name into which the OpenShift cluster will be placed Yes

vSphere vm_definition🔗

Defines the virtual machine properties to be used for the control-plane nodes and compute nodes.

vm_definition:
+- name: control-plane
+  vcpu: 8
+  memory_mb: 32768
+  boot_disk_size_gb: 100
+- name: compute
+  vcpu: 16
+  memory_mb: 65536
+  boot_disk_size_gb: 200
+  # Optional overrides for vsphere properties
+  # datastore: Datastore1
+  # network: "VM Network"
+

Property explanation🔗

Property Description Mandatory Allowed values
name Name of the VM definition, will be referenced by openshift Yes
vcpu Number of virtual CPUs to be assigned to the VMs Yes
memory_mb Amount of memory in MiB of the virtual machines Yes
boot_disk_size_gb Size of the virtual machine boot disk in GiB Yes
datastore vSphere Datastore to be used for the virtual machines, overrides vsphere.datastore No
network vSphere network to be used for the virtual machines, overrides vsphere.network No

vSphere nfs_server🔗

Defines an existing NFS server that will be used for the OpenShift NFS storage class.

nfs_server:
+- name: sample-nfs
+  infrastructure:
+    host_ip: 10.99.92.31
+    storage_folder: /data/nfs
+

Property explanation🔗

Property Description Mandatory Allowed values
name Name of the NFS server Yes
infrastructure Infrastructure attributes Yes
infrastructure.host_ip Host or IP address of the NFS server Yes
infrastructure.storage_folder Folder that holds the data, this will be mounted from the NFS storage class Yes
\ No newline at end of file diff --git a/30-reference/configuration/logging-auditing/index.html b/30-reference/configuration/logging-auditing/index.html new file mode 100644 index 000000000..dd4282b65 --- /dev/null +++ b/30-reference/configuration/logging-auditing/index.html @@ -0,0 +1,42 @@ + Logging and auditing - Cloud Pak Deployer

Logging and auditing for Cloud Paks🔗

For logging and auditing of Cloud Pak for Data we make use of the OpenShift logging framework, which delivers a lot of flexibility in capturing logs from applications, storing them in an ElasticSearch datastore in the cluster (currently not supported by the deployer), or forwarding the log entries to external log collectors such as ElasticSearch, Fluentd, Loki and others.

Logging overview

OpenShift logging captures 3 types of logging entries from workloads running on the cluster:

  • infrastructure - logs generated by OpenShift processes
  • audit - audit logs generated by applications as well as OpenShift
  • application - all other applications on the cluster

Logging configuration - openshift_logging🔗

Defines how OpenShift forwards the logs to external log collectors. Currently, the following log collector types are supported:

  • loki

When OpenShift logging is activated via the openshift_logging object, all 3 logging types are activated automatically. You can specify logging_output items to forward log records to the log collector of your choice. In the below example, the application logs are forwarded to a loki server https://loki-application.sample.com and audit logs to https://loki-audit.sample.com; both use the same certificates to connect:

openshift_logging:
+- openshift_cluster_name: pluto-01
+  configure_es_log_store: False
+  cluster_wide_logging:
+  - input: application
+    logging_name: loki-application
+  - input: infrastructure
+    logging_name: loki-application
+  - input: audit
+    logging_name: loki-audit
+  logging_output:
+  - name: loki-application
+    type: loki
+    url: https://loki-application.sample.com
+    certificates:
+      cert: pluto-01-loki-cert
+      key: pluto-01-loki-key
+      ca: pluto-01-loki-ca
+  - name: loki-audit
+    type: loki
+    url: https://loki-audit.sample.com
+    certificates:
+      cert: pluto-01-loki-cert
+      key: pluto-01-loki-key
+      ca: pluto-01-loki-ca
+

Cloud Pak for Data and Foundational Services application logs are automatically picked up and forwarded to the loki-application logging destination and no additional configuration is needed.

Property explanation🔗

Property Description Mandatory Allowed values
openshift_cluster_name Name of the OpenShift cluster to configure the logging for Yes
configure_es_log_store Must the internal ElasticSearch log store and Kibana be provisioned? No True, False (default)
cluster_wide_logging Defines which classes of log records will be sent to the log collectors No
cluster_wide_logging.input Specifies the OpenShift log records class to forward Yes application, infrastructure, audit
cluster_wide_logging.logging_name Specifies the logging_output to send the records to. If not specified, records will be sent to the internal log only No
cluster_wide_logging.labels Specify your own labels to be added to the log records. Every logging input/output combination can have its own labels No
logging_output Defines the log collectors. If configure_es_log_store is True, output will always be sent to the internal ES log store No
logging_output.name Log collector name, referenced by cluster_wide_logging or cp4d_audit Yes
logging_output.type Type of the log collector, currently only loki is possible Yes loki
logging_output.url URL of the log collector; this URL must be reachable from within the cluster Yes
logging_output.certificates Defines the vault secrets that hold the certificate elements Yes, if url is https
logging_output.certificates.cert Public certificate to connect to the URL Yes
logging_output.certificates.key Private key to connect to the URL Yes
logging_output.certificates.ca Certificate Authority bundle to connect to the URL Yes

If you also want to activate audit logging for Cloud Pak for Data, you can do this by adding a cp4d_audit_config object to your configuration. With the below example, the Cloud Pak for Data audit logger is configured to write log records to the standard output (stdout) of the pods, after which they are forwarded to the loki-audit logging destination by a ClusterLogForwarder custom resource. Optionally labels can be specified which are added to the ClusterLogForwarder custom resource pipeline entry.

cp4d_audit_config:
+- project: cpd
+  audit_replicas: 2
+  audit_output:
+  - type: openshift-logging
+    logging_name: loki-audit
+    labels:
+      cluster_name: "{{ env_id }}"    
+

Info

Because audit log entries are written to the standard output, they will also be picked up by the generic application log forwarder and will therefore also appear in the application logging destination.

Cloud Pak for Data audit configuration🔗

IBM Cloud Pak for Data has a centralized auditing component for auditable events of the base platform and services. Audit events include login and logout to the platform, creation and deletion of connections, and many more. Services that support auditing are documented here: https://www.ibm.com/docs/en/cloud-paks/cp-data/4.0?topic=data-services-that-support-audit-logging

The Cloud Pak Deployer simplifies the recording of audit log entries by means of the OpenShift logging framework, which can in turn be configured to forward entries to various log collectors such as Fluentd, Loki and ElasticSearch.

Audit configuration - cp4d_audit_config🔗

A cp4d_audit_config entry defines the audit configuration for a Cloud Pak for Data instance (OpenShift project). The main configuration items are the number of replicas and the output. Currently only one output type is supported: openshift-logging, which allows the OpenShift logging framework to pick up audit entries and forward to the designated collectors.

When a cp4d_audit_config entry exists for a certain cp4d project, the zen-audit-config ConfigMap is updated and then the audit logging deployment is restarted. If no configuration changes have been made, no restart is done.

Additionally, for the audit_output entries, the OpenShift logging ClusterLogForwarder instance is updated to forward audit entries to the designated logging output. In the example below, auditing is configured with 2 replicas, and an input and pipeline are added to the ClusterLogForwarder instance to send output to the matching channel defined in openshift_logging.logging_output.

cp4d_audit_config:
+- project: cpd
+  audit_replicas: 2
+  audit_output:
+  - type: openshift-logging
+    logging_name: loki-audit
+    labels:
+      cluster_name: "{{ env_id }}"
+

Property explanation🔗

Property Description Mandatory Allowed values
project Name of OpenShift project of the matching cp4d entry. The cp4d project must exist. Yes
audit_replicas Number of replicas for the Cloud Pak for Data audit logger. No (default 1)
audit_output Defines where the audit logs should be written to Yes
audit_output.type Type of auditing output, defines where audit logging entries will be written Yes openshift-logging
audit_output.logging_name Name of the logging_output entry in the openshift_logging object. This logging_output entry must exist. Yes
audit_output.labels Optional list of labels set to the ClusterLogForwarder custom resource pipeline No
\ No newline at end of file diff --git a/30-reference/configuration/monitoring/index.html b/30-reference/configuration/monitoring/index.html new file mode 100644 index 000000000..fcfa74439 --- /dev/null +++ b/30-reference/configuration/monitoring/index.html @@ -0,0 +1,75 @@ + Monitoring - Cloud Pak Deployer

Monitoring OpenShift and Cloud Paks🔗

For monitoring of Cloud Pak for Data we make use of the OpenShift Monitoring framework. The observations generated by Cloud Pak for Data are pushed to the OpenShift Monitoring Prometheus endpoint. This will allow (external) monitoring tools to combine the observations from the OpenShift platform and Cloud Pak for Data from a single source.

Monitoring overview

OpenShift monitoring🔗

To deploy Cloud Pak for Data monitors, it is mandatory to also enable OpenShift monitoring. OpenShift monitoring is activated via the openshift_monitoring object.

openshift_monitoring:
+- openshift_cluster_name: pluto-01
+  user_workload: enabled
+  remote_rewrite_url: http://www.example.com:1234/receive
+  retention_period: 15d
+  pvc_storage_class: ibmc-vpc-block-retain-general-purpose
+  pvc_storage_size_gb: 100
+  grafana_operator: enabled
+  grafana_project: grafana
+  labels:
+    cluster_name: pluto-01
+
Property Description Mandatory Allowed values
user_workload Allow pushing Prometheus metrics to OpenShift (must be set to True for monitoring to work) Yes True, False
pvc_storage_class Storage class to keep persistent monitoring data No Valid storage class
pvc_storage_size_gb Size of the PVC holding the monitoring data Yes, if pvc_storage_class is set
remote_rewrite_url Set this value to redirect metrics to a remote Prometheus No
retention_period Number of seconds (s), minutes (m), hours (h), days (d), weeks (w), years (y) to retain monitoring data. Default is 15d No
labels Additional labels to be added to the metrics No
grafana_operator Enable Grafana community operator? No False (default), True
grafana_project If enabled, project in which to enable the Grafana operator Yes, if grafana_operator enabled

Note Labels must be specified as a YAML record where each line is a key-value. The labels will be added to the prometheus key of the user-workload-monitoring-config ConfigMap and to the prometheusK8S key of the cluster-monitoring-config ConfigMap.

Note When the Grafana operator is enabled, you can build your own Grafana dashboard based on the metrics collected by Prometheus. When installed, Grafana creates a local admin user with user name root and password secret. Grafana can be accessed using the OpenShift route that is created in the project specified by grafana_project.

Cloud Pak for Data monitoring🔗

The observations of Cloud Pak for Data are generated using the zen-watchdog component, which is part of the cpd_platform cartridge and therefore available on each instance of Cloud Pak for Data. Part of the zen-watchdog installation is a set of monitors which focus on the technical deployment of Cloud Pak for Data (e.g. running pods and bound Persistent Volume Claims (pvcs)).

Additional monitors which focus more on the operational usage of Cloud Pak for Data can be deployed as well. These monitors are maintained in a separate Git repository and can be accessed at IBM/cp4d-monitors. Using the Cloud Pak Deployer, monitors that use the Cloud Pak for Data zen-watchdog monitor framework can be deployed. This allows adding custom monitors to the zen-watchdog, making these custom monitors visible in the Cloud Pak for Data metrics.

Cloud Pak for Data Monitors Overview

The Cloud Pak Deployer cp4d_monitors capability implements the following:

  • Create a Cloud Pak for Data ServiceMonitor endpoint to forward zen-watchdog monitor events to OpenShift cluster monitoring
  • Create source repository auth secrets (optional, if pulling monitors from a secure repo)
  • Create target container registry auth secrets (optional, if pushing monitor images to a secure container registry)
  • Deploy custom monitors, which will be added to the zen-watchdog monitor framework

For custom monitors to be deployed, it is mandatory to enable the OpenShift user-workload monitoring, as specified in OpenShift monitoring.

The Cloud Pak for Data monitors are specified in a cp4d_monitors definition.

cp4d_monitors:
+- name: cp4d-monitor-set-1
+  cp4d_instance: zen-45
+  openshift_cluster_name: pluto-01
+  default_monitor_source_repo: https://github.com/IBM/cp4d-monitors
+  #default_monitor_source_token_secret: monitors_source_repo_secret
+  #default_monitor_target_cr: de.icr.io/monitorrepo  
+  #default_monitor_target_cr_user_secret: monitors_target_cr_username
+  #default_monitor_target_cr_password_secret: monitors_target_cr_password
+  # List of monitors
+  monitors:
+  - name: cp4dplatformcognosconnectionsinfo
+    context: cp4d-cognos-connections-info
+    label: latest
+    schedule: "*/15 * * * *"
+  - name: cp4dplatformcognostaskinfo
+    context: cp4d-cognos-task-info
+    label: latest
+    schedule: "*/15 * * * *"
+  - name: cp4dplatformglobalconnections
+    context: cp4d-platform-global-connections
+    label: latest
+    schedule: "*/15 * * * *"
+  - name: cp4dplatformwatsonstudiojobinfo
+    context: cp4d-watsonstudio-job-info
+    label: latest
+    schedule: "*/15 * * * *"
+  - name: cp4dplatformwatsonstudiojobscheduleinfo
+    context: cp4d-watsonstudio-job-schedule-info
+    label: latest
+    schedule: "*/15 * * * *"
+  - name: cp4dplatformwatsonstudioruntimeusage
+    context: cp4d-watsonstudio-runtime-usage
+    label: latest
+    schedule: "*/15 * * * *"
+  - name: cp4dplatformwatsonknowledgecataloginfo
+    context: cp4d-wkc-info
+    label: latest
+    schedule: "*/15 * * * *"
+  - name: cp4dplatformwmldeploymentspaceinfo
+    context: cp4d-wml-deployment-space-info
+    label: latest  
+    schedule: "*/15 * * * *"
+  - name: cp4dplatformwmldeploymentspacejobinfo
+    context: cp4d-wml-deployment-space-job-info
+    label: latest
+    schedule: "*/15 * * * *"
+

Each cp4d_monitors entry contains a set of default settings, which are applicable to the monitors list. These defaults can be overwritten per monitor if needed.

Property Description Mandatory Allowed values
name The name of the monitor set Yes lowercase RFC 1123 subdomain (1)
cp4d_instance The OpenShift project (namespace) on which the Cloud Pak for Data instance resides Yes
openshift_cluster_name The OpenShift cluster name Yes
default_monitor_source_repo The default repository location of all monitors located in the monitors section No
default_monitor_source_token_secret The default repo access token secret name, must be available in the vault No
default_monitor_target_cr The default target container registry (cr) for the monitor image to be pushed. When omitted, the OpenShift internal registry is used No
default_monitor_target_cr_user_secret The default target container registry user name secret name used to push the monitor image. Must be available in the vault No
default_monitor_target_cr_password_secret The default target container registry password secret name used to push the monitor image. Must be available in the vault No
monitors List of monitors Yes

Per monitors entry, the following settings are specified:

Property Description Mandatory Allowed values
name The name of the monitor entry Yes lowercase RFC 1123 subdomain (1)
monitor_source_repo Overrides default_monitor_source_repo for this single monitor No
monitor_source_token_secret Overrides default_monitor_source_token_secret for this single monitor No
monitor_target_cr Overrides default_monitor_target_cr for this single monitor No
monitor_target_cr_user_secret Overrides default_monitor_target_cr_user_secret for this single monitor No
monitor_target_cr_user_password Overrides default_monitor_target_cr_user_password for this single monitor No
context Sets the context of the monitor in the source repo (subfolder name) Yes
label Sets the label of the pushed image, defaults to 'latest' No
schedule Sets the schedule of the generated Cloud Pak for Data monitor cronjob Yes
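
As an illustrative sketch, a single monitor entry can override the set-level defaults. All repository URLs and secret names below are hypothetical; only the property names come from the tables above.

```yaml
  monitors:
  - name: cp4dplatformcustommonitor
    # Per-monitor overrides; values below are examples only
    monitor_source_repo: https://github.com/example-org/cp4d-monitors
    monitor_source_token_secret: example-github-token-secret
    monitor_target_cr: de.icr.io/example-namespace
    monitor_target_cr_user_secret: example-cr-user-secret
    monitor_target_cr_password_secret: example-cr-password-secret
    context: cp4d-custom-monitor
    label: latest
    schedule: "*/30 * * * *"
```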

Each monitor has a set of event_types, which contain the observations generated by the monitor. These event types are retrieved directly from the GitHub repository; each context is expected to contain a file called event_types.yml. During deployment of the monitor, this file is retrieved and used to populate the monitor's event_types.
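
As a minimal sketch, an event_types.yml for the global platform connections monitor could look like the fragment below. The exact schema is defined by the monitor framework; the field names here are illustrative, based on the event types shown later in this page.

```yaml
# Illustrative event_types.yml - the actual schema is defined
# by the monitor framework in the source repository
event_types:
- name: global_connections_count
  description: Cloud Pak for Data Global Connections Count
- name: global_connection_valid
  description: Test result of a global platform connection
```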

If the deployer runs and the monitor is already deployed, the following process is used:

  • The build process is restarted to ensure the latest image of the monitor is used.
  • A comparison is made between the monitor's current configuration and the configuration created by the deployer. If these are identical, the monitor's configuration is left as-is; if they differ, the configuration is rebuilt and the monitor is re-deployed.

Example monitor - global platform connections🔗

This monitor counts the number of Global Platform connections and, for each Global Platform Connection, executes a test to verify that the connection can still be established.

Generated metrics🔗

Once the monitor is deployed, the following metrics are available in IBM Cloud Pak for Data.

Overview Events and Alerts

On the Platform Management Events page the following entries are added:

  • Cloud Pak for Data Global Connections Count
  • Global Connection - <Global Connection Name> (for each connection)

Using the IBM Cloud Pak for Data Prometheus endpoint🔗

https://<CP4D-BASE-URL>/zen/metrics

It will generate 2 types of metrics:

  • global_connections_count
    Provides the number of available connections
  • global_connection_valid
    For each connection, a test action is performed
    • 1 (Test Connection success)
    • 0 (Test connection failed)
# HELP global_connections_count 
# TYPE global_connections_count gauge
global_connections_count{event_type="global_connections_count",monitor_type="cp4d_platform_global_connections",reference="Cloud Pak for Data Global Connections Count"} 2

# HELP global_connection_valid 
# TYPE global_connection_valid gauge
global_connection_valid{event_type="global_connection_valid",monitor_type="cp4d_platform_global_connections",reference="Cognos MetaStore Connection"} 1
global_connection_valid{event_type="global_connection_valid",monitor_type="cp4d_platform_global_connections",reference="Cognos non-shared"} 0

Zen Watchdog metrics (used in platform management events):

  • watchdog_cp4d_platform_global_connections_global_connections_count
  • watchdog_cp4d_platform_global_connections_global_connection_valid (for each connection)

Zen Watchdog metrics can have the following values:

  • 2 (info)
  • 1 (warning)
  • 0 (critical)

# HELP watchdog_cp4d_platform_global_connections_global_connection_valid 
# TYPE watchdog_cp4d_platform_global_connections_global_connection_valid gauge
watchdog_cp4d_platform_global_connections_global_connection_valid{event_type="global_connection_valid",monitor_type="cp4d_platform_global_connections",reference="Cognos MetaStore Connection"} 2
watchdog_cp4d_platform_global_connections_global_connection_valid{event_type="global_connection_valid",monitor_type="cp4d_platform_global_connections",reference="Cognos non-shared"} 1

# HELP watchdog_cp4d_platform_global_connections_global_connections_count 
# TYPE watchdog_cp4d_platform_global_connections_global_connections_count gauge
watchdog_cp4d_platform_global_connections_global_connections_count{event_type="global_connections_count",monitor_type="cp4d_platform_global_connections",reference="Cloud Pak for Data Global Connections Count"} 2
\ No newline at end of file diff --git a/30-reference/configuration/openshift/index.html b/30-reference/configuration/openshift/index.html new file mode 100644 index 000000000..8d95ce14a --- /dev/null +++ b/30-reference/configuration/openshift/index.html @@ -0,0 +1,211 @@ + OpenShift - Cloud Pak Deployer

OpenShift cluster(s)🔗

You can configure one or more OpenShift clusters that will be laid down on the specified infrastructure, or that already exist.

Depending on the cloud platform on which the OpenShift cluster will be provisioned, different installation methods apply. For IBM Cloud, Terraform is used, whereas for vSphere the IPI installer is used. On AWS (ROSA), the rosa CLI is used to create and modify ROSA clusters. Each of the platforms has slightly different properties for the openshift objects.

openshift🔗

For OpenShift, there are 5 flavours:

  • OpenShift on IBM Cloud (ROKS)
  • OpenShift on vSphere
  • OpenShift on AWS - self-managed
  • OpenShift on AWS - ROSA
  • OpenShift on Microsoft Azure (ARO)

Every OpenShift cluster definition has a few mandatory properties that control which version of OpenShift is installed, the number and flavour of control plane and compute nodes, and the underlying infrastructure, depending on the cloud platform on which it is provisioned. Storage is a mandatory element for every openshift definition. For a list of supported storage types per cloud platform, refer to Supported storage types.

Additionally, one can configure Upstream DNS Servers and OpenShift logging.

The Multicloud Object Gateway (MCG) provides access to s3-compatible object storage via an underpinning block/file storage class, through the Noobaa operator. Some Cloud Pak for Data services such as Watson Assistant need object storage to run. MCG does not need to be installed if OpenShift Data Foundation (fka OCS) is also installed, as that operator already includes Noobaa.
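
A minimal mcg block, as a sketch that assumes a pre-existing managed-nfs-storage storage class on the cluster:

```yaml
  mcg:
    install: True                  # False does not uninstall once installed
    storage_type: storage-class    # currently the only supported type
    storage_class: managed-nfs-storage  # must be an existing storage class
```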

OpenShift on IBM Cloud (ROKS)🔗

VPC-based OpenShift cluster on IBM Cloud, using the Red Hat OpenShift Kubernetes Services (ROKS).

openshift:
- name: sample
  managed: True
  ocp_version: 4.8
  compute_flavour: bx2.16x64
  compute_nodes: 3
  cloud_native_toolkit: False
  oadp: False
  infrastructure:
    type: vpc
    vpc_name: sample
    subnets:
    - sample-subnet-zone-1
    - sample-subnet-zone-2
    - sample-subnet-zone-3
    cos_name: sample-cos
    private_only: False
    deny_node_ports: False
  upstream_dns:
  - name: sample-dns
    zones:
    - example.com
    dns_servers:
    - 172.31.2.73:53
  mcg:
    install: True
    storage_type: storage-class
    storage_class: managed-nfs-storage
  openshift_storage:
  - storage_name: nfs-storage
    storage_type: nfs
    nfs_server_name: sample-nfs
  - storage_name: ocs-storage
    storage_type: ocs
    ocs_storage_label: ocs
    ocs_storage_size_gb: 500
    ocs_version: 4.8.0
  - storage_name: pwx-storage
    storage_type: pwx
    pwx_etcd_location: {{ ibm_cloud_region }}
    pwx_storage_size_gb: 200
    pwx_storage_iops: 10
    pwx_storage_profile: "10iops-tier"
    stork_version: 2.6.2
    portworx_version: 2.7.2

Property explanation OpenShift clusters on IBM Cloud (ROKS)🔗

Property Description Mandatory Allowed values
name Name of the OpenShift cluster Yes
managed Is the ROKS cluster managed by this deployer? See note below. No True (default), False
ocp_version ROKS Kubernetes version. If you want to install 4.10, specify "4.10" Yes >= 4.6
compute_flavour Type of compute node to be used Yes Node flavours
compute_nodes Total number of compute nodes. This must be a multiple of the number of subnets Yes Integer
resource_group IBM Cloud resource group for the ROKS cluster Yes
cloud_native_toolkit Must the Cloud Native Toolkit (OpenShift GitOps) be installed? No True, False (default)
oadp Must the OpenShift Advanced Data Protection operator be installed No True, False (default)
infrastructure.type Type of infrastructure to provision ROKS cluster on No vpc
infrastructure.vpc_name Name of the VPC if type is vpc Yes, inferred from vpc Existing VPC
infrastructure.subnets List of subnets within the VPC to use. Either 1 or 3 subnets must be specified Yes Existing subnet
infrastructure.cos_name Reference to the cos object created for this cluster Yes Existing cos object
infrastructure.private_only If true, it indicates that the ROKS cluster must be provisioned without public endpoints No True, False (default)
infrastructure.deny_node_ports If true, the Allow ICMP, TCP and UDP rules for the security group associated with the ROKS cluster are removed if present. If false, the Allow ICMP, TCP and UDP rules are added if not present. No True, False (default)
infrastructure.secondary_storage Reference to the storage flavour to be used as secondary storage, for example "900gb.5iops-tier" No Valid secondary storage flavour
openshift_logging[] Logging attributes for OpenShift cluster, see OpenShift logging No
upstream_dns[] Upstream DNS server(s), see Upstream DNS Servers No
mcg Multicloud Object Gateway properties No
mcg.install Must Multicloud Object Gateway be installed (Once installed, False does not uninstall) Yes True, False
mcg.storage_type Type of storage supporting the object Noobaa object storage Yes storage-class
mcg.storage_class Storage class supporting the Noobaa object storage Yes Existing storage class
openshift_storage[] List of storage definitions to be defined on OpenShift, see below for further explanation Yes

The managed attribute indicates whether the ROKS cluster is managed by the Cloud Pak Deployer. If set to False, the deployer will not provision the ROKS cluster but expects it to already be available in the VPC. You can still use the deployer to create the VPC, the subnets, NFS servers and other infrastructure, but first run it without an openshift element. Once the VPC has been created, manually create an OpenShift cluster in the VPC and then add the openshift element with managed set to False. If you intend to use OpenShift Container Storage, you must also activate the add-on and create the OcsCluster custom resource.
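
For instance, an openshift entry referencing a ROKS cluster that was created manually could look like the sketch below, based on the ROKS sample above; only managed differs.

```yaml
openshift:
- name: sample
  managed: False          # cluster already exists in the VPC; deployer will not provision it
  ocp_version: 4.8
  compute_flavour: bx2.16x64
  compute_nodes: 3
  infrastructure:
    type: vpc
    vpc_name: sample      # VPC previously created by the deployer
    subnets:
    - sample-subnet-zone-1
    cos_name: sample-cos
  openshift_storage:
  - storage_name: nfs-storage
    storage_type: nfs
    nfs_server_name: sample-nfs
```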

Warning

If you set infrastructure.private_only to True, the server from which you run the deployer must be able to access the ROKS cluster via its private endpoint, either by establishing a VPN to the cluster's VPC, or by making sure the deployer runs on a server that has a connection with the ROKS VPC via a transit gateway.

openshift_storage[] - OpenShift storage definitions🔗
Property Description Mandatory Allowed values
openshift_storage[] List of storage definitions to be defined on OpenShift Yes
storage_name Name of the storage definition, to be referenced by the Cloud Pak Yes
storage_type Type of storage class to create in the OpenShift cluster Yes nfs, ocs or pwx
nfs_server_name Name of the NFS server within the VPC Yes if storage_type is nfs Existing nfs_server
ocs_storage_label Label to be used for the dedicated OCS nodes in the cluster Yes if storage_type is ocs
ocs_storage_size_gb Size of the OCS storage in Gibibytes (Gi) Yes if storage_type is ocs
ocs_version Version of OCS (ODF) to be deployed. If left empty, the latest version will be deployed No >= 4.6
pwx_etcd_location Location where the etcd service will be deployed, typically the same region as the ROKS cluster Yes if storage_type is pwx
pwx_storage_size_gb Size of the Portworx storage that will be provisioned Yes if storage_type is pwx
pwx_storage_iops IOPS for the storage volumes that will be provisioned Yes if storage_type is pwx
pwx_storage_profile IOPS storage tier for the storage volumes that will be provisioned Yes if storage_type is pwx
stork_version Version of the Portworx storage orchestration layer for Kubernetes Yes if storage_type is pwx
portworx_version Version of the Portworx storage provider Yes if storage_type is pwx

Warning

When deploying a ROKS cluster with OpenShift Data Foundation (fka OpenShift Container Storage/OCS), the minimum version of OpenShift is 4.7.

OpenShift on vSphere🔗

openshift:
- name: sample
  domain_name: example.com
  vsphere_name: sample
  ocp_version: 4.8
  control_plane_nodes: 3
  control_plane_vm_definition: control-plane
  compute_nodes: 3
  compute_vm_definition: compute
  api_vip: 10.99.92.51
  ingress_vip: 10.99.92.52
  cloud_native_toolkit: False
  oadp: False
  infrastructure:
    openshift_cluster_network_cidr: 10.128.0.0/14
  upstream_dns:
  - name: sample-dns
    zones:
    - example.com
    dns_servers:
    - 172.31.2.73:53
  mcg:
    install: True
    storage_type: storage-class
    storage_class: thin
  openshift_storage:
  - storage_name: nfs-storage
    storage_type: nfs
    nfs_server_name: sample-nfs
  - storage_name: ocs-storage
    storage_type: ocs
    ocs_storage_label: ocs
    ocs_storage_size_gb: 512
    ocs_dynamic_storage_class: thin

Property explanation OpenShift clusters on vSphere🔗

Property Description Mandatory Allowed values
name Name of the OpenShift cluster Yes
domain_name Domain name of the cluster; this also determines the route to the API and ingress endpoints Yes
ocp_version OpenShift version. If you want to install 4.10, specify "4.10" Yes >= 4.6
control_plane_nodes Total number of control plane nodes, typically 3 Yes Integer
control_plane_vm_definition vm_definition object that will be used to define number of vCPUs and memory for the control plane nodes Yes Existing vm_definition
compute_nodes Total number of compute nodes Yes Integer
compute_vm_definition vm_definition object that will be used to define number of vCPUs and memory for the compute nodes Yes Existing vm_definition
api_vip Virtual IP address that the installer will provision for the API server Yes
ingress_vip Virtual IP address that the installer will provision for the ingress server Yes
cloud_native_toolkit Must the Cloud Native Toolkit (OpenShift GitOps) be installed? No True, False (default)
oadp Must the OpenShift Advanced Data Protection operator be installed No True, False (default)
infrastructure Infrastructure properties No
infrastructure.openshift_cluster_network_cidr Network CIDR used by the OpenShift pods. Normally you would not have to change this, unless other systems in the network are in the 10.128.0.0/14 subnet. No CIDR
openshift_logging[] Logging attributes for OpenShift cluster, see OpenShift logging No
upstream_dns[] Upstream DNS server(s), see Upstream DNS Servers No
mcg Multicloud Object Gateway properties No
mcg.install Must Multicloud Object Gateway be installed (Once installed, False does not uninstall) Yes True, False
mcg.storage_type Type of storage supporting the object Noobaa object storage Yes storage-class
mcg.storage_class Storage class supporting the Noobaa object storage Yes Existing storage class
openshift_storage[] List of storage definitions to be defined on OpenShift, see below for further explanation Yes
openshift_storage[] - OpenShift storage definitions🔗
Property Description Mandatory Allowed values
openshift_storage[] List of storage definitions to be defined on OpenShift Yes
storage_name Name of the storage definition, to be referenced by the Cloud Pak Yes
storage_type Type of storage class to create in the OpenShift cluster Yes nfs or ocs
nfs_server_name Name of the NFS server within the VPC Yes if storage_type is nfs Existing nfs_server
ocs_version Version of the OCS operator. If not specified, this will default to the ocp_version No >= 4.6
ocs_storage_label Label to be used for the dedicated OCS nodes in the cluster Yes if storage_type is ocs
ocs_storage_size_gb Size of the OCS storage in Gibibytes (Gi) Yes if storage_type is ocs
ocs_dynamic_storage_class Storage class that will be used for provisioning OCS. On vSphere clusters, thin is usually available after OpenShift installation Yes if storage_type is ocs
storage_vm_definition VM Definition that defines the virtual machine attributes for the OCS nodes Yes if storage_type is ocs
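
The sample above omits storage_vm_definition; an ocs entry with dedicated storage nodes could look like the sketch below, where the storage vm_definition name is an assumption and must match an existing vm_definition object.

```yaml
  openshift_storage:
  - storage_name: ocs-storage
    storage_type: ocs
    ocs_storage_label: ocs
    ocs_storage_size_gb: 512
    ocs_dynamic_storage_class: thin
    storage_vm_definition: storage   # hypothetical existing vm_definition
```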

OpenShift on AWS - self-managed🔗

nfs_server:
- name: sample-elastic
  infrastructure:
    aws_region: eu-west-1

openshift:
- name: sample
  ocp_version: 4.10.34
  domain_name: cp-deployer.eu
  compute_flavour: m5.4xlarge
  compute_nodes: 3
  cloud_native_toolkit: False
  oadp: False
  infrastructure:
    type: self-managed
    aws_region: eu-central-1
    multi_zone: True
    credentials_mode: Manual
    private_only: True
    machine_cidr: 10.2.1.0/24
    openshift_cluster_network_cidr: 10.128.0.0/14
    subnet_ids:
    - subnet-06bbef28f585a0dd3
    - subnet-0ea5ac344c0fbadf5
    hosted_zone_id: Z08291873MCIC4TMIK4UP
    ami_id: ami-09249dd86b1933dd5
  mcg:
    install: True
    storage_type: storage-class
    storage_class: gp3-csi
  openshift_storage:
  - storage_name: ocs-storage
    storage_type: ocs
    ocs_storage_label: ocs
    ocs_storage_size_gb: 512
  - storage_name: sample-elastic
    storage_type: aws-elastic

Property explanation OpenShift clusters on AWS (self-managed)🔗

Property Description Mandatory Allowed values
name Name of the OpenShift cluster Yes
ocp_version OpenShift version, specified as x.y.z Yes >= 4.6
domain_name Base domain name of the cluster. Together with the name, this will be the domain of the OpenShift cluster. Yes
control_plane_flavour Flavour of the AWS servers used for the control plane nodes. m5.xlarge is the recommended value Yes
control_plane_nodes Total number of control plane nodes Yes Integer
compute_flavour Flavour of the AWS servers used for the compute nodes. m5.4xlarge is a large node with 16 cores and 64 GB of memory Yes
compute_nodes Total number of compute nodes Yes Integer
cloud_native_toolkit Must the Cloud Native Toolkit (OpenShift GitOps) be installed? No True, False (default)
oadp Must the OpenShift Advanced Data Protection operator be installed No True, False (default)
infrastructure Infrastructure properties Yes
infrastructure.type Type of OpenShift cluster on AWS. Yes rosa or self-managed
infrastructure.aws_region Region of AWS where cluster is deployed. Yes
infrastructure.multi_zone Determines whether the OpenShift cluster is deployed across multiple availability zones. Default is True. No True (default), False
infrastructure.credentials_mode Security requirement of the Cloud Credential Operator (CCO) when doing installations with temporary AWS security credentials. Default (omit) is automatically handled by CCO. No Manual, Mint
infrastructure.machine_cidr Machine CIDR. This value will be used to create the VPC and its subnets. In case of an existing VPC, specify the CIDR of that VPC. No CIDR
infrastructure.openshift_cluster_network_cidr Network CIDR used by the OpenShift pods. Normally you would not have to change this, unless other systems in the network are in the 10.128.0.0/14 subnet. No CIDR
infrastructure.subnet_ids Existing public and private subnet IDs in the VPC to be used for the OpenShift cluster. Must be specified in combination with machine_cidr and hosted_zone_id. No Existing subnet IDs
infrastructure.private_only Indicates whether the OpenShift cluster can be accessed from the internet. Default is True No True, False
infrastructure.hosted_zone_id ID of the AWS Route 53 hosted zone that controls the DNS entries. If not specified, the OpenShift installer will create a hosted zone for the specified domain_name. This attribute is only needed if you create the OpenShift cluster in an existing VPC No
infrastructure.control_plane_iam_role If not standard, specify the IAM role that the OpenShift installer must use for the control plane nodes during cluster creation No
infrastructure.compute_iam_role If not standard, specify the IAM role that the OpenShift installer must use for the compute nodes during cluster creation No
infrastructure.ami_id ID of the AWS AMI to boot all images No
openshift_logging[] Logging attributes for OpenShift cluster, see OpenShift logging No
mcg Multicloud Object Gateway properties No
mcg.install Must Multicloud Object Gateway be installed (Once installed, False does not uninstall) Yes True, False
mcg.storage_type Type of storage supporting the object Noobaa object storage Yes storage-class
mcg.storage_class Storage class supporting the Noobaa object storage Yes Existing storage class
openshift_storage[] List of storage definitions to be defined on OpenShift, see below for further explanation Yes

When deploying the OpenShift cluster within an existing VPC, you must specify the machine_cidr that covers all subnets and the subnet IDs within the VPC. For example:

    machine_cidr: 10.243.0.0/24
    subnet_ids:
    - subnet-0e63f662bb1842e8a
    - subnet-0673351cd49877269
    - subnet-00b007a7c2677cdbc
    - subnet-02b676f92c83f4422
    - subnet-0f1b03a02973508ed
    - subnet-027ca7cc695ce8515

openshift_storage[] - OpenShift storage definitions🔗
Property Description Mandatory Allowed values
openshift_storage[] List of storage definitions to be defined on OpenShift Yes
storage_name Name of the storage definition, to be referenced by the Cloud Pak Yes
storage_type Type of storage class to create in the OpenShift cluster Yes ocs, aws-elastic
ocs_version Version of the OCS operator. If not specified, this will default to the ocp_version No
ocs_storage_label Label to be used for the dedicated OCS nodes in the cluster Yes if storage_type is ocs
ocs_storage_size_gb Size of the OCS storage in Gibibytes (Gi) Yes if storage_type is ocs
ocs_dynamic_storage_class Storage class that will be used for provisioning ODF. gp3-csi is usually available after OpenShift installation No

OpenShift on AWS - ROSA🔗

nfs_server:
- name: sample-elastic
  infrastructure:
    aws_region: eu-west-1

openshift:
- name: sample
  ocp_version: 4.10.34
  compute_flavour: m5.4xlarge
  compute_nodes: 3
  cloud_native_toolkit: False
  oadp: False
  infrastructure:
    type: rosa
    aws_region: eu-central-1
    multi_zone: True
    use_sts: False
    credentials_mode: Manual
  upstream_dns:
  - name: sample-dns
    zones:
    - example.com
    dns_servers:
    - 172.31.2.73:53
  mcg:
    install: True
    storage_type: storage-class
    storage_class: gp3-csi
  openshift_storage:
  - storage_name: ocs-storage
    storage_type: ocs
    ocs_storage_label: ocs
    ocs_storage_size_gb: 512
  - storage_name: sample-elastic
    storage_type: aws-elastic

Property explanation OpenShift clusters on AWS (ROSA)🔗

Property Description Mandatory Allowed values
name Name of the OpenShift cluster Yes
ocp_version OpenShift version, specified as x.y.z Yes >= 4.6
compute_flavour Flavour of the AWS servers used for the compute nodes. m5.4xlarge is a large node with 16 cores and 64 GB of memory Yes
cloud_native_toolkit Must the Cloud Native Toolkit (OpenShift GitOps) be installed? No True, False (default)
oadp Must the OpenShift Advanced Data Protection operator be installed No True, False (default)
infrastructure Infrastructure properties Yes
infrastructure.type Type of OpenShift cluster on AWS. Yes rosa or self-managed
infrastructure.aws_region Region of AWS where cluster is deployed. Yes
infrastructure.multi_zone Determines whether the OpenShift cluster is deployed across multiple availability zones. Default is True. No True (default), False
infrastructure.use_sts Determines whether AWS Security Token Service must be used by the ROSA installer. Default is False. No True, False (default)
infrastructure.credentials_mode Change the security requirement of the Cloud Credential Operator (CCO). Default (omit) is automatically handled by CCO. No Manual, Mint
infrastructure.machine_cidr Machine CIDR, for example 10.243.0.0/16. No CIDR
infrastructure.subnet_ids Existing public and private subnet IDs in the VPC to be used for the OpenShift cluster. Must be specified in combination with machine_cidr. No Existing subnet IDs
compute_nodes Total number of compute nodes Yes Integer
openshift_logging[] Logging attributes for OpenShift cluster, see OpenShift logging No
upstream_dns[] Upstream DNS server(s), see Upstream DNS Servers No
mcg Multicloud Object Gateway properties No
mcg.install Must Multicloud Object Gateway be installed (Once installed, False does not uninstall) Yes True, False
mcg.storage_type Type of storage supporting the object Noobaa object storage Yes storage-class
mcg.storage_class Storage class supporting the Noobaa object storage Yes Existing storage class
openshift_storage[] List of storage definitions to be defined on OpenShift, see below for further explanation Yes

When deploying the OpenShift cluster within an existing VPC, you must specify the machine_cidr that covers all subnets and the subnet IDs within the VPC. For example:

    machine_cidr: 10.243.0.0/24
    subnet_ids:
    - subnet-0e63f662bb1842e8a
    - subnet-0673351cd49877269
    - subnet-00b007a7c2677cdbc
    - subnet-02b676f92c83f4422
    - subnet-0f1b03a02973508ed
    - subnet-027ca7cc695ce8515

openshift_storage[] - OpenShift storage definitions🔗
Property Description Mandatory Allowed values
openshift_storage[] List of storage definitions to be defined on OpenShift Yes
storage_name Name of the storage definition, to be referenced by the Cloud Pak Yes
storage_type Type of storage class to create in the OpenShift cluster Yes ocs, aws-elastic
ocs_version Version of the OCS operator. If not specified, this will default to the ocp_version No
ocs_storage_label Label to be used for the dedicated OCS nodes in the cluster Yes if storage_type is ocs
ocs_storage_size_gb Size of the OCS storage in Gibibytes (Gi) Yes if storage_type is ocs
ocs_dynamic_storage_class Storage class that will be used for provisioning ODF. gp3-csi is usually available after OpenShift installation No

OpenShift on Microsoft Azure (ARO)🔗

openshift:
- name: sample
  azure_name: sample
  domain_name: example.com
  ocp_version: 4.10.54
  cloud_native_toolkit: False
  oadp: False
  network:
    pod_cidr: "10.128.0.0/14"
    service_cidr: "172.30.0.0/16"
  openshift_storage:
  - storage_name: ocs-storage
    storage_type: ocs
    ocs_storage_label: ocs
    ocs_storage_size_gb: 512
    ocs_dynamic_storage_class: managed-premium

Property explanation for OpenShift cluster on Microsoft Azure (ARO)🔗

Warning

You cannot choose the OCP version of the ARO cluster; the latest current version is provisioned automatically, regardless of the value specified in the ocp_version parameter. The ocp_version parameter is still mandatory because other layers of the provisioning depend on it, such as the OpenShift client; for instance, the value is used by the process that downloads and installs the oc client. Specify the value according to the OCP version that will be provisioned.

Property Description Mandatory Allowed values
name Name of the OpenShift cluster Yes
azure_name Name of the azure element in the configuration Yes
domain_name Domain name of the cluster, if you want to override the name generated by Azure No
ocp_version The OpenShift version. If you want to install 4.10, specify "4.10" Yes >= 4.6
cloud_native_toolkit Must the Cloud Native Toolkit (OpenShift GitOps) be installed? No True, False (default)
oadp Must the OpenShift Advanced Data Protection operator be installed No True, False (default)
network Cluster network attributes Yes
network.pod_cidr CIDR of pod network Yes Must be /18 or larger
network.service_cidr CIDR of service network Yes Must be /18 or larger
openshift_logging[] Logging attributes for OpenShift cluster, see OpenShift logging No
upstream_dns[] Upstream DNS server(s), see Upstream DNS Servers No
mcg Multicloud Object Gateway properties No
mcg.install Must Multicloud Object Gateway be installed (Once installed, False does not uninstall) Yes True, False
mcg.storage_type Type of storage supporting the object Noobaa object storage Yes storage-class
mcg.storage_class Storage class supporting the Noobaa object storage Yes Existing storage class
openshift_storage[] List of storage definitions to be defined on OpenShift, see below for further explanation Yes
openshift_storage[] - OpenShift storage definitions🔗
Property Description Mandatory Allowed values
openshift_storage[] List of storage definitions to be defined on OpenShift Yes
storage_name Name of the storage Yes
storage_type Type of storage class to create in the OpenShift cluster Yes ocs or nfs
ocs_version Version of the OCS operator. If not specified, this will default to the ocp_version No
ocs_storage_label Label (or rather a name) to be used for the dedicated OCS nodes in the cluster - together with the combination of Azure location and zone id Yes if storage_type is ocs
ocs_storage_size_gb Size of the OCS storage in Gibibytes (Gi) Yes if storage_type is ocs
ocs_dynamic_storage_class Storage class that will be used for provisioning OCS. In Azure, you must select managed-premium Yes if storage_type is ocs managed-premium

Existing OpenShift🔗

When using the Cloud Pak Deployer on an existing OpenShift cluster, the scripts assume that the cluster is already operational and that any storage classes have been pre-created. The deployer accesses the cluster through a vault secret with the kubeconfig information; the name of the secret is <name>-kubeconfig.

openshift:
- name: sample
  ocp_version: 4.8
  cluster_name: sample
  domain_name: example.com
  cloud_native_toolkit: False
  oadp: False
  infrastructure:
    type: standard
    processor_architecture: amd64
  upstream_dns:
  - name: sample-dns
    zones:
    - example.com
    dns_servers:
    - 172.31.2.73:53
  gpu:
    install: False
  mcg:
    install: True
    storage_type: storage-class
    storage_class: managed-nfs-storage
  openshift_storage:
  - storage_name: nfs-storage
    storage_type: nfs
    # ocp_storage_class_file: managed-nfs-storage
    # ocp_storage_class_block: managed-nfs-storage

Property explanation for existing OpenShift clusters🔗

Property Description Mandatory Allowed values
name Name of the OpenShift cluster Yes
ocp_version OpenShift version of the cluster, used to download the client. If you want to install 4.10, specify "4.10" Yes >= 4.6
cluster_name Name of the cluster (part of the FQDN) Yes
domain_name Domain name of the cluster (part of the FQDN) Yes
cloud_native_toolkit Must the Cloud Native Toolkit (OpenShift GitOps) be installed? No True, False (default)
oadp Must the OpenShift Advanced Data Protection operator be installed No True, False (default)
infrastructure.type Infrastructure OpenShift is deployed on. See below for additional explanation No detect (default)
infrastructure.processor_architecture Architecture of the processor that the OpenShift cluster is deployed on No amd64 (default), ppc64le, s390x
openshift_logging[] Logging attributes for OpenShift cluster, see OpenShift logging No
upstream_dns[] Upstream DNS server(s), see Upstream DNS Servers No
gpu Control Node Feature Discovery and NVIDIA GPU operators No
gpu.install Must Node Feature Discovery and NVIDIA GPU operators be installed (Once installed, False does not uninstall) Yes True, False
mcg Multicloud Object Gateway properties No
mcg.install Must Multicloud Object Gateway be installed (Once installed, False does not uninstall) Yes True, False
mcg.storage_type Type of storage supporting the object Noobaa object storage Yes storage-class
mcg.storage_class Storage class supporting the Noobaa object storage Yes Existing storage class
openshift_storage[] List of storage definitions to be defined on OpenShift, see below for further explanation Yes
infrastructure.type - Type of infrastructure🔗

When deploying on existing OpenShift, the underlying infrastructure can pose restrictions on the capabilities available. For example, Red Hat OpenShift on IBM Cloud (aka ROKS) does not include the Machine Config Operator, and ROSA on AWS does not allow setting labels for Machine Config Pools. This means that node settings required for Cloud Pak for Data must be applied in a non-standard manner.

The following values are allowed for infrastructure.type:

  • detect (default): The deployer will attempt to detect the underlying cloud infrastructure. This is done by retrieving the existing storage classes and then inferring the cloud type.
  • standard: The deployer will assume a standard OpenShift cluster with no further restrictions. This is the fallback value for detect if the underlying infra cannot be detected.
  • aws-self-managed: A self-managed OpenShift cluster on AWS. No restrictions.
  • aws-rosa: Managed Red Hat OpenShift on AWS. Some restrictions with regards to Machine Config Pools apply.
  • azure-aro: Managed Red Hat OpenShift on Azure. No known restrictions.
  • vsphere: OpenShift on vSphere. No known restrictions.
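
For illustration, the infrastructure type can be set explicitly inside the openshift object. The sketch below uses the properties documented above; all names and values are placeholders, not a complete working configuration:

```yaml
openshift:
- name: sample-cluster
  ocp_version: "4.10"
  cluster_name: sample-cluster
  domain_name: example.com
  infrastructure:
    # one of: detect, standard, aws-self-managed, aws-rosa, azure-aro, vsphere
    type: aws-rosa
    processor_architecture: amd64
  openshift_storage:
  - storage_name: auto-storage
    storage_type: auto
```

If infrastructure.type is omitted, the deployer behaves as if detect was specified and infers the cloud type from the existing storage classes.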
openshift_storage[] - OpenShift storage definitions🔗
Property Description Mandatory Allowed values
storage_name Name of the storage definition, to be referenced by the Cloud Pak Yes
storage_type Type of storage class to use in the OpenShift cluster Yes nfs, ocs, aws-elastic, auto, custom
ocp_storage_class_file OpenShift storage class to use for file storage if different from default for storage_type Yes if storage_type is custom
ocp_storage_class_block OpenShift storage class to use for block storage if different from default for storage_type Yes if storage_type is custom

Info

The custom storage_type can be used in case you want to use non-standard storage class(es). In this case the storage class(es) must already be configured on the OCP cluster and set in the respective ocp_storage_class_file and ocp_storage_class_block properties.

Info

The auto storage_type will let the deployer automatically detect the storage type based on the existing storage classes in the OpenShift cluster.
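
As an example, a custom storage definition with explicit file and block storage classes might look like the sketch below. The storage class names are placeholders for classes that must already exist on the cluster:

```yaml
  openshift_storage:
  - storage_name: custom-storage
    storage_type: custom
    ocp_storage_class_file: my-file-storage-class
    ocp_storage_class_block: my-block-storage-class
```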

Supported storage types🔗

An openshift definition always includes the type(s) of storage that it will provide. When the OpenShift cluster is provisioned by the deployer, the necessary infrastructure and storage class(es) are also configured. In case an existing OpenShift cluster is referenced by the configuration, the storage classes are expected to exist already.

The table below indicates which storage classes are supported by the Cloud Pak Deployer per cloud infrastructure.

Warning

The ability to provision or use certain storage types does not imply support by the Cloud Paks or by OpenShift itself. There are several restrictions for production use of OpenShift Data Foundation, for example when running on ROSA.

Cloud Provider NFS Storage OCS/ODF Storage Portworx Elastic Custom (2)
ibm-cloud Yes Yes Yes No Yes
vsphere Yes (1) Yes No No Yes
aws No Yes No Yes (3) Yes
azure No Yes No No Yes
existing-ocp Yes Yes No Yes Yes
  • (1) An existing NFS server can be specified so that the deployer configures the managed-nfs-storage storage class. The deployer will not provision or change the NFS server itself.
  • (2) If you specify a custom storage type, you must specify the storage class to be used for block (RWO) and file (RWX) storage.
  • (3) Specifying this storage type means that Elastic File Storage (EFS) and Elastic Block Storage (EBS) storage classes will be used. For EFS, an nfs_server object is required to define the "file server" storage on AWS.
\ No newline at end of file diff --git a/30-reference/configuration/private-registry/index.html b/30-reference/configuration/private-registry/index.html new file mode 100644 index 000000000..5c2f199c0 --- /dev/null +++ b/30-reference/configuration/private-registry/index.html @@ -0,0 +1,58 @@ + Private registries - Cloud Pak Deployer
Skip to content

Private registry🔗

In cases where the OpenShift cluster is in an environment with limited internet connectivity, you may want OpenShift to pull Cloud Pak images from a private image registry (aka container registry). There may also be other reasons for choosing a private registry over the entitled registry.

Configuring a private registry🔗

The below steps outline how to configure a private registry for a Cloud Pak deployment. When the image_registry object is referenced by the Cloud Pak object (such as cp4d), the deployer makes the following changes in OpenShift so that images are pulled from the private registry:

  • Global pull secret: The image registry's credentials are retrieved from the vault (the secret name must be image-registry-<name>) and an entry for the registry is added to the global pull secret (secret pull-secret in project openshift-config).
  • ImageContentSourcePolicy: This is a mapping between the original location of the image, for example quay.io/opencloudio/zen-metastoredb@sha256:582cac2366dda8520730184dec2c430e51009a854ed9ccea07db9c3390e13b29 is mapped to registry.coc.uk.ibm.com:15000/opencloudio/zen-metastoredb@sha256:582cac2366dda8520730184dec2c430e51009a854ed9ccea07db9c3390e13b29.
  • Image registry settings: OpenShift keeps image registry settings in the custom resource image.config.openshift.io/cluster. If a private registry with a self-signed certificate is configured, the certificate authority's PEM bundle must be created as a configmap in the openshift-config project. The deployer uses the vault secret referenced in the registry_trusted_ca_secret property to create or update this configmap so that OpenShift can connect to the registry in a secure manner. Alternatively, you can add the registry_insecure: true property to pull images without checking the certificate.
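
For the image mapping in the example above, the resulting ImageContentSourcePolicy would look roughly like the following sketch (the resource name is an assumption; the deployer may use a different one):

```yaml
apiVersion: operator.openshift.io/v1alpha1
kind: ImageContentSourcePolicy
metadata:
  name: cloud-pak-registry-mirror
spec:
  repositoryDigestMirrors:
  - mirrors:
    # private registry location that images are pulled from
    - registry.coc.uk.ibm.com:15000/opencloudio
    # original location referenced by the Cloud Pak manifests
    source: quay.io/opencloudio
```

Because the mapping is digest-based, image references by digest (@sha256:...) are transparently redirected to the mirror.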

image_registry🔗

Defines a private registry from which the Cloud Pak container images will be pulled. Additionally, if the Cloud Pak entitlement key was specified at deployer run time, the images defined by the CASE files will be mirrored to this private registry.

image_registry:
+- name: cpd463
+  registry_host_name: registry.example.com
+  registry_port: 5000
+  registry_insecure: false
+  registry_trusted_ca_secret: cpd463-ca-bundle
+

Properties🔗

Property Description Mandatory Allowed values
name Name by which the image registry is identified. Yes
registry_host_name Host name or IP address of the registry server Yes
registry_port Port that the image registry listens on. Default is the https port (443) No
registry_namespace Namespace (path) within the registry that holds the Cloud Pak images. Mandatory only when using the IBM Cloud Container Registry (ICR) No
registry_insecure Defines whether insecure registry access with a self-signed certificate is allowed No True, False (default)
registry_trusted_ca_secret Defines the vault secret which holds the certificate authority bundle that must be used when connecting to this private registry. This parameter cannot be specified if registry_insecure is also specified. No

Warning

The registry_host_name you specify in the image_registry definition must also be available for DNS lookup within OpenShift. If the registry runs on a server that is not registered in the DNS, use its IP address instead of a host name.

When mirroring images, the deployer connects to the registry using the host name and port. If the port is omitted, the standard https protocol (443) is used. If a registry_namespace is specified, for example when using the IBM Container Registry on IBM Cloud, it will be appended to the registry URL.

The user and password to connect to the registry will be retrieved from the vault, using secret image-registry-<your_image_registry_name> and must be stored in the format registry_user:registry_password. For example, if you want to connect to the image registry cpd404 with user admin and password very_s3cret, you would create a secret as follows:

./cp-deploy.sh vault set \
+  -vs image-registry-cpd463 \
+  -vsv "admin:very_s3cret"
+

If you need to connect to a private registry which is not signed by a public certificate authority, you have two choices:

  • Store the PEM certificate that holds the CA bundle in a vault secret and specify that secret for the registry_trusted_ca_secret property. This is the recommended method for private registries.
  • Specify registry_insecure: true (not recommended): This means that the registry (and port) will be marked as insecure and OpenShift will pull images from it, even if its certificate is self-signed.

For example, if you have a file /tmp/ca.crt with the PEM certificate for the certificate authority, you can do the following:

./cp-deploy.sh vault set \
+  -vs cpd463-ca-bundle \
+  -vsf /tmp/ca.crt
+

This will create a vault secret which the deployer will use to populate a configmap in the openshift-config project, which in turn is referenced by the image.config.openshift.io/cluster custom resource. For the above configuration, configmap cpd463-ca-bundle would be created and the image.config.openshift.io/cluster resource would look something like this:

apiVersion: config.openshift.io/v1
+kind: Image
+metadata:
+...
+...
+  name: cluster
+spec:
+  additionalTrustedCA:
+    name: cpd463-ca-bundle
+

Using the IBM Container Registry as a private registry🔗

If you want to use a private registry when running the deployer for a ROKS cluster on IBM Cloud, you must use the IBM Container Registry (ICR) service. The deployer will automatically create the specified namespace in the ICR and set up the credentials accordingly. Configure an image_registry object with the host name of the private registry and the namespace that holds the images. An example of using the ICR as a private registry:

image_registry:
+- name: cpd463
+  registry_host_name: de.icr.io
+  registry_namespace: cpd463
+

The registry host name must end with icr.io and the registry namespace is mandatory. No other properties are needed; the deployer will retrieve them from IBM Cloud.

If you have already created the ICR namespace, create a vault secret for the image registry credentials:

./cp-deploy.sh vault set \
+  -vs image-registry-cpd463 \
+  -vsv "admin:very_s3cret"
+

An example of configuring the private registry for a cp4d object is below:

cp4d:
+- project: cpd-instance
+  openshift_cluster_name: {{ env_id }}
+  cp4d_version: 4.6.3
+  image_registry_name: cpd463
+

The Cloud Pak for Data installation refers to the cpd463 image_registry object.

If the ibm_cp_entitlement_key secret is in the vault at the time of running the deployer, the required images will be mirrored from the entitled registry to the private registry. If all images are already available in the private registry, just specify the --skip-mirror-images flag when you run the deployer.

Using a private registry for the Cloud Pak installation (non-IBM Cloud)🔗

Configure an image_registry object with the host name of the private registry and some optional properties such as port number, CA certificate and whether insecure access to the registry is allowed.

Example:

image_registry:
+- name: cpd463
+  registry_host_name: registry.example.com
+  registry_port: 5000
+  registry_insecure: false
+  registry_trusted_ca_secret: cpd463-ca-bundle
+

Warning

The registry_host_name you specify in the image_registry definition must also be available for DNS lookup within OpenShift. If the registry runs on a server that is not registered in the DNS, use its IP address instead of a host name.

To create the vault secret for the image registry credentials:

./cp-deploy.sh vault set \
+  -vs image-registry-cpd463 \
+  -vsv "admin:very_s3cret"
+

To create the vault secret for the CA bundle:

./cp-deploy.sh vault set \
+  -vs cpd463-ca-bundle \
+  -vsf /tmp/ca.crt
+

Where ca.crt looks something like this:

-----BEGIN CERTIFICATE-----
+MIIFszCCA5ugAwIBAgIUT02v9OdgdvjgQVslCuL0wwCVaE8wDQYJKoZIhvcNAQEL
+BQAwaTELMAkGA1UEBhMCVVMxETAPBgNVBAgMCE5ldyBZb3JrMQ8wDQYDVQQHDAZB
+cm1vbmsxFjAUBgNVBAoMDUlCTSBDbG91ZCBQYWsxHjAcBgNVBAMMFUlCTSBDbG91
+...
+mcutkgtbkq31XYZj0CiM451Qp8KnTx0=
+-----END CERTIFICATE-----
+

An example of configuring the private registry for a cp4d object is below:

cp4d:
+- project: cpd-instance
+  openshift_cluster_name: {{ env_id }}
+  cp4d_version: 4.6.3
+  image_registry_name: cpd463
+

The Cloud Pak for Data installation refers to the cpd463 image_registry object.

If the ibm_cp_entitlement_key secret is in the vault at the time of running the deployer, the required images will be mirrored from the entitled registry to the private registry. If all images are already available in the private registry, just specify the --skip-mirror-images flag when you run the deployer.

\ No newline at end of file diff --git a/30-reference/configuration/topologies/index.html b/30-reference/configuration/topologies/index.html new file mode 100644 index 000000000..459c3fdaf --- /dev/null +++ b/30-reference/configuration/topologies/index.html @@ -0,0 +1 @@ + Topologies - Cloud Pak Deployer
Skip to content

Deployment topologies🔗

Configuration of the topology to be deployed typically boils down to choosing the cloud infrastructure you want to deploy, then choosing the type of OpenShift and storage, integrating with infrastructure services and then setting up the Cloud Pak(s). For most initial implementations, a basic deployment will suffice and later this can be extended with additional configuration.

Depicted below is the basic deployment topology, followed by a topology with all bells and whistles.

Basic deployment🔗

Basic deployment

For more details on each of the configuration elements, refer to:

Extended deployment🔗

Extended deployment

For more details about extended deployment, refer to:

\ No newline at end of file diff --git a/30-reference/configuration/vault/index.html b/30-reference/configuration/vault/index.html new file mode 100644 index 000000000..1ef3c37a6 --- /dev/null +++ b/30-reference/configuration/vault/index.html @@ -0,0 +1,4 @@ + Vault - Cloud Pak Deployer
Skip to content

Vault configuration🔗

Throughout the deployment process, the Cloud Pak Deployer will create secrets in a vault and retrieve them later. Examples of secrets are SSH keys and the Cloud Pak for Data admin password. Additionally, when provisioning infrastructure on IBM Cloud, the resulting Terraform state file is also stored in the vault so it can be used later if the configuration needs to be changed.

Configuration of the vault is done through a vault object in the configuration. If you want to use the file-based vault in the status directory, you do not need to configure anything.

The following Vault implementations can be used to store and retrieve secrets:

  • File Vault (no encryption)
  • IBM Cloud Secrets Manager
  • Hashicorp Vault (token authentication)
  • Hashicorp Vault (certificate authentication)

The File Vault is the default vault and also the simplest. It does not require a password and all secrets are stored in base-64 encoding in a properties file under the <status_directory>/vault directory. The name of the vault file is the environment_name you specified in the global configuration, inventory file or at the command line.

All of the other vault options require some secret manager (IBM Cloud service or Hashicorp Vault) to be available and you need to specify a password or provide a certificate.

Sample Vault config:

vault:
+  vault_type: file-vault
+  vault_authentication_type: none
+

Properties for all vault implementations🔗

Property Description Mandatory Allowed values
vault_type Chosen implementation of the vault Yes file-vault, ibmcloud-vault, hashicorp-vault

Properties for file-vault🔗

Property Description Mandatory Allowed values
vault_authentication_type Authentication method for the file vault No none

Properties for ibmcloud-vault🔗

Property Description Mandatory Allowed values
vault_authentication_type Authentication method for the IBM Cloud Secrets Manager vault No api-key
vault_url URL for the IBM Cloud secrets manager instance Yes
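
Based on the properties above, an ibmcloud-vault configuration might look like the following sketch. The URL is a placeholder for your own Secrets Manager instance endpoint:

```yaml
vault:
  vault_type: ibmcloud-vault
  vault_authentication_type: api-key
  vault_url: https://<your-secrets-manager-instance-url>
```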

Properties for hashicorp-vault🔗

Property Description Mandatory Allowed values
vault_authentication_type Authentication method for the Hashicorp Vault No api-key, certificate
vault_url URL for the Hashicorp vault, this is typically https://hostname:8200 Yes
vault_api_key When the authentication type is api-key, the API key (token) to authenticate with Yes
vault_secret_path Default secret path to store and retrieve secrets into/from Yes
vault_secret_field Default field to store or retrieve secrets Yes
vault_secret_path_append_group Determines whether or not the secret group will be appended to the path Yes True (default), False
vault_secret_base64 Indicates whether secrets are stored in base64 format in the Hashicorp Vault Yes True (default), False
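
Combining the properties in the table above, a hashicorp-vault configuration with token (api-key) authentication might look like this sketch; all values are placeholders:

```yaml
vault:
  vault_type: hashicorp-vault
  vault_authentication_type: api-key
  vault_url: https://vault.example.com:8200
  vault_api_key: <vault-token>
  vault_secret_path: secret/cloud-pak-deployer
  vault_secret_field: value
  vault_secret_path_append_group: True
  vault_secret_base64: True
```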
\ No newline at end of file diff --git a/30-reference/process/configure-cloud-pak/index.html b/30-reference/process/configure-cloud-pak/index.html new file mode 100644 index 000000000..966e9929f --- /dev/null +++ b/30-reference/process/configure-cloud-pak/index.html @@ -0,0 +1 @@ + Configure Cloud Paks - Cloud Pak Deployer
Skip to content

Configure the Cloud Pak(s)🔗

This stage focuses on post-installation configuration of the Cloud Paks and cartridges.

Cloud Pak for Data🔗

Web interface certificate🔗

When provisioning on IBM Cloud ROKS, a CA-signed certificate for the ingress subdomain is automatically generated in the IBM Cloud certificate manager. The deployer retrieves the certificate and adds it to the secret that stores the certificate key. This will avoid getting a warning when opening the Cloud Pak for Data home page.

Configure identity and access management🔗

For Cloud Pak for Data you can configure:

  • SAML for Single Sign-on. When specified in the cp4d_saml_config object, the deployer configures the user management pods to redirect logins to the identity provider (idP) of choice.
  • LDAP configuration. LDAP can be used both for authentication (if no SSO has been configured) and for access management by mapping LDAP groups to Cloud Pak for Data user groups. Specify the LDAP or LDAPS properties in the cp4d_ldap_config object so that the deployer configures it for Cloud Pak for Data. If SAML has been configured for authentication, the configured LDAP server is only used for access management.
  • User group configuration. This creates user-defined user groups in Cloud Pak for Data to match the LDAP configuration. The configuration object used for this is cp4d_user_group_configuration.

Provision instances🔗

Some cartridges such as Data Virtualization have the ability to create one or more instances to run an isolated installation of the cartridge. If instances have been configured for the cartridge, this step provisions them. The following Cloud Pak for Data cartridges are currently supported for creating instances:

  • Analytics engine powered by Apache Spark (analytics-engine)
  • Db2 OLTP (db2)
  • Cognos Analytics (ca)
  • Data Virtualization (dv)

Configure instance access🔗

Cloud Pak for Data does not support group-defined access to cartridge instances. After creation of the instances (and also when the deployer is run with the --cp-config-only flag), the permissions of users accessing the instances are configured.

For Cognos Analytics, the Cognos Authorization process is run to apply user group permissions to the Cognos Analytics instance.

Create or change platform connections🔗

Cloud Pak for Data defines data source connections at the platform level and these can be reused in some cartridges like Watson Knowledge Catalog and Watson Studio. The cp4d_connection object defines each of the platform connections that must be managed by the deployer.

Backup and restore connections🔗

If you want to back up or restore platform connections, the cp4d_backup_restore_connections object defines the JSON file that will be used for backup and restore.

\ No newline at end of file diff --git a/30-reference/process/configure-infra/index.html b/30-reference/process/configure-infra/index.html new file mode 100644 index 000000000..df9cc2c5d --- /dev/null +++ b/30-reference/process/configure-infra/index.html @@ -0,0 +1 @@ + Configure infrastructure - Cloud Pak Deployer
Skip to content

Configure infrastructure🔗

This stage focuses on the configuration of the provisioned infrastructure.

Configure infrastructure for IBM Cloud🔗

Configure the VPC bastion server(s)🔗

In a configuration scenario where NFS is used for OpenShift storage, the NFS server must be provisioned as a VSI within the VPC that contains the OpenShift cluster. It is best practice to shield off the NFS server from the outside world by using a jump host (bastion) to access it.

This step configures the bastion host, which has a public IP address, to serve as a jump host to access other servers and services within the VPC.

Configure the VPC NFS server(s)🔗

Configures the NFS server using the specs in the nfs_server configuration object(s). It installs the required packages and sets up the NFSv4 service. Additionally, it will format the empty volume as xfs and export it so it can be used by the managed-nfs-storage storage class in the OpenShift cluster.

Configure the OpenShift storage classes🔗

This step takes care of configuring the storage classes in the OpenShift cluster. Storage classes are an abstraction of the underlying physical and virtual storage. When run, it processes the openshift_storage elements within the current openshift configuration object.

Two types of storage classes can be automatically created and configured:

NFS Storage🔗

Creates the managed-nfs-storage OpenShift storage class using the specified nfs_server_name which references an nfs_server configuration object.
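
For illustration, an openshift_storage entry of type nfs referencing an nfs_server configuration object might look like this sketch (the storage and server names are assumptions):

```yaml
  openshift_storage:
  - storage_name: nfs-storage
    storage_type: nfs
    nfs_server_name: sample-nfs-server
```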

OCS Storage🔗

Activates the ROKS cluster's OpenShift Container Storage add-on to install the operator into the cluster. Once finished with the preparation, the OcsCluster OpenShift object is created to provision the storage cluster. As the backing storage the ibmc-vpc-block-metro-10iops-tier storage class is used, which has the appropriate IO characteristics for the Cloud Paks.

Info

Both NFS and OCS storage classes can be created, but currently only one storage class of each type can exist in the cluster. If more than one storage class of the same type is specified, the configuration will fail.

\ No newline at end of file diff --git a/30-reference/process/cp4d-cartridges/cognos-authorization/index.html b/30-reference/process/cp4d-cartridges/cognos-authorization/index.html new file mode 100644 index 000000000..d2e7ceee8 --- /dev/null +++ b/30-reference/process/cp4d-cartridges/cognos-authorization/index.html @@ -0,0 +1,22 @@ + Automated Cognos Authorization using LDAP groups - Cloud Pak Deployer
Skip to content

Automated Cognos Authorization using LDAP groups🔗

Authorization Overview

Description🔗

The automated Cognos authorization capability uses LDAP groups to assign users to a Cognos Analytics role, which allows these users to log in to IBM Cloud Pak for Data and access the Cognos Analytics instance. This capability performs the following tasks:

  • Create a User Group and assign the associated LDAP Group(s) and Cloud Pak for Data role(s)
  • For each member of the LDAP Group(s) that are part of the User Group, create the user as a Cloud Pak for Data user and assign the Cloud Pak for Data role(s)
  • For each member of the LDAP Group(s) that are part of the User Group, assign membership to the Cognos Analytics instance and authorize for the Cognos Analytics role

If the User Group is already present, validate that all LDAP Group(s) are associated with the User Group. Add any LDAP Group(s) not yet associated to the User Group. Existing LDAP groups will not be removed from the User Group.

If a User is already present in Cloud Pak for Data, it will not be updated.

If a user is already associated with the Cognos Analytics instance, its original membership is kept and the membership is not updated.

Pre-requisites🔗

Prior to running the script, ensure:

  • LDAP configuration in IBM Cloud Pak for Data is completed and validated
  • The Cognos Analytics instance is provisioned and running in IBM Cloud Pak for Data
  • The role(s) that will be associated with the User Group are present in IBM Cloud Pak for Data

Usage of the Script🔗

The script is available in automation-roles/50-install-cloud-pak/cp4d-service/files/assign_CA_authorization.sh.

Run the script without arguments to show its usage help.

# ./assign_CA_authorization.sh                                                                               
+Usage:
+
+assign_CA_authorization.sh
+  <CLOUD_PAK_FOR_DATA_URL>
+  <CLOUD_PAK_FOR_DATA_LOGIN_USER>
+  <CLOUD_PAK_FOR_DATA_LOGIN_PASSWORD>
+  <CLOUD_PAK_FOR_DATA_USER_GROUP_NAME>
+  <CLOUD_PAK_FOR_DATA_USER_GROUP_DESCRIPTION>
+  <CLOUD_PAK_FOR_DATA_USER_GROUP_ROLES_ASSIGNMENT>
+  <CLOUD_PAK_FOR_DATA_USER_GROUP_LDAP_GROUPS_MAPPING>
+  <CLOUD_PAK_FOR_DATA_COGNOS_ANALYTICS_ROLE>
+

  • The URL to the IBM Cloud Pak for Data instance
  • The login user to IBM Cloud Pak for Data, e.g. the admin user
  • The login password to IBM Cloud Pak for Data
  • The Cloud Pak for Data User Group Name
  • The Cloud Pak for Data User Group Description
  • The Cloud Pak for Data roles associated to the User Group. Use a ;-separated list to assign multiple roles
  • The LDAP Groups associated to the User Group. Use a ;-separated list to assign multiple LDAP groups
  • The Cognos Analytics Role each member of the User Group will be associated with, which must be one of:
      • Analytics Administrators
      • Analytics Explorers
      • Analytics Users
      • Analytics Viewer

Running the script🔗

Using the usage example provided by running the ./assign_CA_authorization.sh command without arguments, run the script with its arguments:

# ./assign_CA_authorization.sh \
+  https://...... \
+  admin \
+  ******** \
+  "Cognos User Group" \
+  "Cognos User Group Description" \
+  "wkc_data_scientist_role;zen_administrator_role" \
+  "cn=ca_group,ou=groups,dc=ibm,dc=com" \
+  "Analytics Viewer"
+
The script execution will run through the following tasks:

Validation
Confirm all required arguments are provided.
Confirm at least 1 User Group Role assignment is provided.
Confirm at least 1 LDAP Group is provided.

Login to Cloud Pak for Data and generate a Bearer token
Using the provided IBM Cloud Pak for Data URL, username and password, log in to Cloud Pak for Data and generate the Bearer token used for subsequent commands. Exit with an error if the login to IBM Cloud Pak for Data fails.

Confirm the provided User Group role(s) are present in Cloud Pak for Data
Acquire all Cloud Pak for Data roles and confirm the provided User Group role(s) are one of the existing Cloud Pak for Data roles. Exit with an error if a role is provided which is not currently present in IBM Cloud Pak for Data.

Confirm the provided Cognos Analytics role is valid
Ensure the provided Cognos Analytics role is one of the available Cognos Analytics roles. Exit with an error if a Cognos Analytics role is provided that does not match with the available Cognos Analytics roles.

Confirm LDAP is configured in IBM Cloud Pak for Data
Ensures the LDAP configuration is completed. Exit with an error if there is no current LDAP configuration.

Confirm the provided LDAP groups are present in the LDAP User Registry
Using IBM Cloud Pak for Data, query whether the provided LDAP groups are present in the LDAP User registry. Exit with an error if an LDAP Group is not available.

Confirm if the IBM Cloud Pak for Data User Group exists
Queries the IBM Cloud Pak for Data User Groups. If the provided User Group exists, acquire the Group ID.

If the IBM Cloud Pak for Data User Group does not exist, create it
If the User Group does not exist, create it, and assign the IBM Cloud Pak for Data Roles and LDAP Groups to the new User Group

If the IBM Cloud Pak for Data User Group does exist, validate the associated LDAP Groups
If the User Group already exists, confirm all provided LDAP groups are associated with the User Group. Add LDAP groups that are not yet associated.

Get the Cognos Analytics instance ID
Queries the IBM Cloud Pak for Data service instances and acquires the Cognos Analytics instance ID. Exit with an error if no Cognos Analytics instance is available.

Ensure each user member of the IBM Cloud Pak for Data User Group is an existing user
For each user that is a member of the provided LDAP groups, ensure this member is an IBM Cloud Pak for Data user. Create a new user with the provided User Group role(s) if the user is not yet available. Any existing user(s) will not be updated. If users are removed from an LDAP Group, these users will not be removed from Cloud Pak for Data.

Ensure each user member of the IBM Cloud Pak for Data User Group is associated to the Cognos Analytics instance
For each user that is a member of the provided LDAP groups, ensure this member is associated to the Cognos Analytics instance with the provided Cognos Analytics role. Any user that is already associated to the Cognos Analytics instance will have its Cognos Analytics role updated to the provided Cognos Analytics role.

\ No newline at end of file diff --git a/30-reference/process/cp4d-cartridges/cognos_authorization.png b/30-reference/process/cp4d-cartridges/cognos_authorization.png new file mode 100644 index 000000000..6f042f56f Binary files /dev/null and b/30-reference/process/cp4d-cartridges/cognos_authorization.png differ diff --git a/30-reference/process/deploy-assets/index.html b/30-reference/process/deploy-assets/index.html new file mode 100644 index 000000000..7c91cdd84 --- /dev/null +++ b/30-reference/process/deploy-assets/index.html @@ -0,0 +1 @@ + Deploy assets - Cloud Pak Deployer
Skip to content

Deploy Cloud Pak assets🔗

Cloud Pak for Data🔗

For Cloud Pak for Data, this stage does the following:

  • Deploy Cloud Pak for Data assets which are defined with object cp4d_asset
  • Deploy the Cloud Pak for Data monitors identified with cp4d_monitors elements.

Deploy Cloud Pak for Data assets🔗

See cp4d_asset for more details.

Cloud Pak for Data monitors🔗

See cp4d_monitors for more details.

\ No newline at end of file diff --git a/30-reference/process/images/provisioning-process.drawio b/30-reference/process/images/provisioning-process.drawio new file mode 100644 index 000000000..e61717386 --- /dev/null +++ b/30-reference/process/images/provisioning-process.drawio @@ -0,0 +1 @@ +7Zldb5swFIZ/TS5b8ZHwcZmk7Vapkyql2rqryg0OuDUcZEwC+/WzwQkwKKXS2kQiuUh8Xn/EPi/PISITcxlm3xiKgx/gYToxNC+bmFcTw9Bd0xQfUslLxXGNUvAZ8dSgSliRP1iJmlJT4uGkMZADUE7ipriGKMJr3tAQY7BrDtsAbX5rjHzcElZrRNvqL+LxQJ3CsCv9OyZ+sP9m3XLLnhDtB6uTJAHyYFeTzOuJuWQAvGyF2RJTmbx9Xsp5N2/0HjbGcMSHTMiMe3b3RNhDPH/IYf38sshuLnRlT8Lz/YmxJxKgQmA8AB8iRK8rdcEgjTwsl9VEVI25A4iFqAvxBXOeKzdRykFIAQ+p6sUZ4Y+19m+51OVMRVeZWrkIchVsIOJqQUNX8RIosGLX5k3xEnp5HnmIN/OkpARStsZ9yVHXG2I+5j3jrIObAgMMIeYsF/MYpoiTbXMfSF2P/mFcZZloKNc+4mC57hbR9IDO5cSwqNjx4pmJli9bPxElntgORC3DKztlXncB4XgVoyIzOwF107qaDbrVY8OGUFrTNc11NWmj2IYfCW0t3MCic7HFjBPB21x1hMTziqtMnUt046zfynbq1YQL21bwqepjz1S8q1jWTaUFNY4t7ZPsmrXsmkcJeRYn/tcWUS9i2UxI5FM8l7VsgD3NtFviVdhRlMJ9AROo2frUcKr3/QjlrHZpaM0hHwbw//rmdPrWZZz9WcaJJJ0rZW8FHFAp7WNWSquFntFVKe8ZlikYU5m0tFMrk/rsTFs/RQNoc45Jm92izeymDbYkKX6WaLfRhqExcTc1T44768xdP08DuHOPyZ3T4m7axd0Sog3xU3mfGx93B35Ohzv7zF0/TwO407Vu278GPLcF3qwLvNso4ahYbUkh9cTnPXodE3zuybHnnNl7h6kh8L1h+xc9BdNa9Fnv3fbGyZ9zavgdno+fPH4iwyx/rAe1WTKsphXRMbEd+uxaN46Kbfvptd2F7RWOKchtzZME82REuBqzU+O1vGIanjldnq1CeJVl9gEno3JsanydYyKs/kAs+mp/w5rXfwE= \ No newline at end of file diff --git a/30-reference/process/images/provisioning-process.png b/30-reference/process/images/provisioning-process.png new file mode 100644 index 000000000..382f40168 Binary files /dev/null and b/30-reference/process/images/provisioning-process.png differ diff --git a/30-reference/process/install-cloud-pak/index.html b/30-reference/process/install-cloud-pak/index.html new file mode 100644 index 000000000..58faf5d47 --- /dev/null +++ b/30-reference/process/install-cloud-pak/index.html @@ -0,0 +1,13 @@ + Install the Cloud Pak - Cloud Pak Deployer

Install the Cloud Pak(s)🔗

This stage prepares the OpenShift cluster for installing the Cloud Pak(s) and then proceeds with the installation of the Cloud Paks and their cartridges. The documentation below starts with the steps that are executed for all Cloud Paks, then continues with Cloud Pak specific activities. The actual execution order may differ slightly from the sequence in the documentation.

Sections:

Remove Cloud Pak for Data🔗

Before going ahead with the mirroring of container images and installation of Cloud Pak for Data, the previous configuration (if any) is retrieved from the vault to determine if a Cloud Pak for Data instance has been removed. If a previously installed cp4d object no longer exists in the current configuration, its associated instance is removed from the OpenShift cluster.

First, the custom resources are removed from the OpenShift project. This happens with a grace period of 5 minutes. After the grace period has expired, OpenShift automatically forcefully deletes the custom resource and its associated definitions. Then, the control plane custom resource Ibmcpd is removed and finally the namespace (project). For the namespace deletion, a grace period of 10 minutes is applied.

Prepare private image registry🔗

When installing the Cloud Paks, images must be pulled from an image registry. All Cloud Paks support pulling images directly from the IBM Entitled Registry using the entitlement key, but there may be situations in which this is not possible, for example in air-gapped environments, or when images must be scanned for vulnerabilities before they are allowed to be used. In those cases, a private registry will have to be set up.

The Cloud Pak Deployer can mirror images to a private registry from the entitled registry. On IBM Cloud, the deployer is also capable of creating a namespace in the IBM Container Registry and mirror the images to that namespace.

When a private registry has been specified in the Cloud Pak entry (using the image_registry_name property), the necessary OpenShift configuration changes will also be made.
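As an illustration, the relationship between the two configuration objects might look like the sketch below. Only the image_registry_name property is confirmed by the text above; the other property names and values are assumptions for illustration purposes.

```yaml
# Hypothetical sketch: a Cloud Pak entry referencing a private registry.
image_registry:
- name: cpd-registry                 # referenced via image_registry_name
  registry_host_name: registry.example.com   # illustrative
  registry_namespace: cp4d                   # illustrative

cp4d:
- project: cpd-instance
  image_registry_name: cpd-registry  # must match an image_registry entry
```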

Create IBM Container Registry namespace (IBM Cloud only)🔗

If OpenShift is deployed on IBM Cloud (ROKS), the IBM Container Registry should be used as the private registry from which the images will be pulled. Images in the ICR are organized by namespace and can be accessed using an API key issued for a service account. If an image_registry object is specified in the configuration, this process will take care of creating the service account, then the API key and it will store the API key in the vault.

Connect to the specified private image registry🔗

If an image registry has been specified for the Cloud Pak using the image_registry_name property, the referenced image_registry entry is looked up in the configuration and the credentials are retrieved from the vault. Then the connection to the registry is tested by logging on.

Install Cloud Pak for Data and cartridges🔗

Prepare OpenShift cluster for Cloud Pak installation🔗

Cloud Pak for Data requires a number of cluster-wide settings:

  • Create an ImageContentSourcePolicy if images must be pulled from a private registry
  • Set the global pull secret with the credentials to pull images from the entitled or private image registry
  • Create a Tuned object to set kernel semaphores and other properties of CoreOS containers being spun up
  • Allow unsafe system controls in the Kubelet configuration
  • Set PIDs limit and default ulimit for the CRI-O configuration

For all OpenShift clusters, except ROKS on IBM Cloud, these settings are applied using OpenShift configuration objects and then picked up by the Machine Config Operator. This operator will then apply the settings to the control plane and compute nodes as appropriate and reload them one by one.

To avoid having to reload the nodes more than once, the Machine Config Operator is paused before the settings are applied. After all setup, the Machine Config Operator is released and the deployment process will then wait until all nodes are ready with the configuration applied.

Prepare OpenShift cluster on IBM Cloud and IBM Cloud Satellite🔗

As mentioned before, ROKS on IBM Cloud does not include the Machine Config Operator and would normally require the compute nodes to be reloaded (classic ROKS) or replaced (ROKS on VPC) to make the changes effective. While implementing this process, we have experienced intermittent reliability issues where replacement of nodes never finished or the cluster ended up in an unusable state. To avoid this, the process applies the settings in a different manner.

On every node, a cron job is created which starts every 5 minutes. It runs a script that checks if any of the cluster-wide settings must be (re-)applied, then updates the local system and restarts the crio and kubelet daemons. If no settings are to be adjusted, the daemons will not be restarted and therefore the cron job has minimal or no effect on the running applications.

Compute node changes made by the cron job:

  • ImageContentSourcePolicy: file /etc/containers/registries.conf is updated to include registry mirrors for the private registry.
  • Kubelet: file /etc/kubernetes/kubelet.conf is appended with the allowedUnsafeSysctls entries.
  • CRI-O: pids_limit and default_ulimit changes are made to the /etc/crio/crio.conf file.
  • Pull secret: the registry and credentials are appended to the /.docker/config.json configuration.
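The "only restart when something changed" pattern can be sketched as follows. The file names match those described above, but the pids_limit value (the one commonly documented for Cloud Pak for Data) and the use of a scratch file are assumptions; the real cron job operates on /etc/crio/crio.conf directly.

```shell
# Sketch of the cron job's check-then-apply idempotency pattern.
# CRIO_CONF points at a scratch file here so the sketch can run anywhere;
# the real cron job edits /etc/crio/crio.conf in place.
CRIO_CONF=$(mktemp)
NODE_UPDATED=0

# Apply the CRI-O pids_limit setting only if it is not already present
if ! grep -q '^pids_limit' "$CRIO_CONF" 2>/dev/null; then
  echo 'pids_limit = 12288' >> "$CRIO_CONF"
  NODE_UPDATED=1
fi

# ...the same check-then-apply pattern is repeated for kubelet.conf,
# registries.conf and the pull secret...

if [ "$NODE_UPDATED" = "1" ]; then
  echo "crio and kubelet restart required"  # the real job restarts the daemons here
fi
```

Because every check is conditional, re-running the script on an already-configured node changes nothing and the daemons are left alone.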

There are scenarios, especially on IBM Cloud Satellite, where custom changes must be applied to the compute nodes. This is possible by adding the apply-custom-node-settings.sh to the assets directory within the CONFIG_DIR directory. Once Kubelet, CRI-O and other changes have been applied, this script (if existing) is run to apply any additional configuration changes to the compute node.

By setting the NODE_UPDATED script variable to 1 you can tell the deployer to restart the crio and kubelet daemons.

WARNING: You should never set the NODE_UPDATED script variable to 0 as this will cause previous changes to the pull secret, ImageContentSourcePolicy and others not to become effective.

WARNING: Do not end the script with the exit command; this will stop the calling script from running and therefore not restart the daemons.

Sample script:

#!/bin/bash

#
# This is a sample script that will cause the crio and kubelet daemons to be restarted once by checking
# file /tmp/apply-custom-node-settings-run. If the file doesn't exist, it creates it and sets NODE_UPDATED to 1.
# The deployer will observe that the node has been updated and restart the daemons.
#

if [ ! -e /tmp/apply-custom-node-settings-run ];then
    touch /tmp/apply-custom-node-settings-run
    NODE_UPDATED=1
fi

Mirror images to the private registry🔗

If a private image registry is specified, and if the IBM Cloud Pak entitlement key is available in the vault (cp_entitlement_key secret), the Cloud Pak case files for the Foundational Services, the Cloud Pak control plane and cartridges are downloaded to a subdirectory of the specified status directory. Then all images defined for the cartridges are mirrored from the entitled registry to the private image registry. Depending on network speed and how many cartridges have been configured, the mirroring can take a very long time (12+ hours). All images which have already been mirrored to the private registry are skipped by the mirroring process.

Even if all images have been mirrored, the act of checking existence and digest can still take a bit of time (10-15 minutes). To avoid this, you can remove the cp_entitlement_key secret from the vault and unset the CP_ENTITLEMENT_KEY environment variable before running the Cloud Pak Deployer.

Create catalog sources🔗

The images of the operators which control the Cloud Pak are defined in OpenShift CatalogSource objects which reside in the openshift-marketplace project. Operator subscriptions subsequently reference the catalog source and define the update channel. When images are pulled from the entitled registry, most subscriptions reference the same ibm-operator-catalog catalog source (and also a Db2U catalog source). If images are pulled from a private registry, the control plane and also each cartridge reference their own catalog source in the openshift-marketplace project.

This step creates the necessary catalog sources, dependent on whether the entitled registry or a private registry is used. For the entitled registry, it creates the catalog source directly using a YAML template; when using a private registry, the cloudctl case command is used for the control plane and every cartridge to install the catalog sources and their dependencies.
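For reference, the ibm-operator-catalog catalog source corresponds to the YAML published by IBM, along these lines:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: ibm-operator-catalog
  namespace: openshift-marketplace
spec:
  displayName: IBM Operator Catalog
  publisher: IBM
  sourceType: grpc
  image: icr.io/cpopen/ibm-operator-catalog:latest
  updateStrategy:
    registryPoll:
      interval: 45m
```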

Get OpenShift storage classes🔗

Most custom resources defined by the cartridge operators require some back-end storage. To be able to reference the correct OpenShift storage classes, they are retrieved based on the openshift_storage_name property of the Cloud Pak object.

Prepare the Cloud Pak for Data operator🔗

When using express install, the Cloud Pak for Data operator also installs the Cloud Pak Foundational Services. Subsequently, this part of the deployer:

  • Creates the operator project if it doesn't exist already
  • Creates an OperatorGroup
  • Installs the license service and certificate manager
  • Creates the platform operator subscription
  • Waits until the ClusterServiceVersion objects for the platform operator and Operand Deployment Lifecycle Manager have been created

Install the Cloud Pak for Data control plane🔗

When the Cloud Pak for Data operator has been installed, the process continues by creating an OperandRequest object for the platform operator, which manages the project in which the Cloud Pak for Data instance is installed. Then it creates an Ibmcpd custom resource in the project, which installs the control plane with nginx, the metastore, etc.

The Cloud Pak for Data control plane is a prerequisite for all cartridges, so at this stage the deployer waits until the Ibmcpd status reaches the Completed state.

Once the control plane has been installed successfully, the deployer generates a new strong 25-character password for the Cloud Pak for Data admin user and stores this into the vault. Additionally, the admin-user-details secret in the OpenShift project is updated with the new password.
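The way such a 25-character password is generated is internal to the deployer, but the idea can be sketched with standard tools:

```shell
# Sketch only: generate a 25-character alphanumeric password.
# The deployer's actual generation method and character set may differ.
ADMIN_PASSWORD=$(LC_ALL=C tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 25)
echo "$ADMIN_PASSWORD"
```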

Install the specified Cloud Pak for Data cartridges🔗

Now that the control plane has been installed in the specified OpenShift project, cartridges can be installed. Every cartridge is controlled by its own operator subscription in the operators project and a custom resource. The deployer iterates twice over the specified cartridges, first to create the operator subscriptions, then to create the custom resources.

Create cartridge operator subscriptions🔗

This step creates subscription objects for each cartridge in the operators project, using a YAML template that is included in the deployer code and the subscription_channel specified in the cartridge definition. Keeping the subscription channel separate provides flexibility when new subscription channels become available over time.

Once the subscription has been created, the deployer waits for the associated CSV(s) to be created and reach the Installed state.

Delete obsolete cartridges🔗

If this is not the first installation, earlier configured cartridges may have been removed. This step iterates over all supported cartridges and checks whether the cartridge has been installed and whether it still exists in the configuration of the current cp4d object. If the cartridge is no longer defined, its custom resource is removed; the operator will then take care of removing all OpenShift configuration.

Install the cartridges🔗

This step creates the custom resource for each cartridge; this is the actual installation of the cartridge. Cartridges can be installed in parallel to a certain extent and the operator will wait for the dependencies to be installed first before starting the processes. For example, if Watson Studio and Watson Machine Learning are installed, both have a dependency on the Common Core Services (CCS) and will wait for the CCS object to reach the Completed state before proceeding with the install. Once that is the case, both WS and WML will run the installation process in parallel.

Wait until all cartridges are ready🔗

Installation of the cartridges can take a very long time; up to 5 hours for Watson Knowledge Catalog. While cartridges are being installed, the deployer checks the states of all cartridges on a regular basis and reports these in a log file. The deployer will retry until all specified cartridges have reached the Completed state.

Configure LDAP authentication for Cloud Pak for Data🔗

If LDAP has been configured for the Cloud Pak for Data element, it will be configured after all cartridges have finished installing.

\ No newline at end of file diff --git a/30-reference/process/overview/index.html b/30-reference/process/overview/index.html new file mode 100644 index 000000000..189febf29 --- /dev/null +++ b/30-reference/process/overview/index.html @@ -0,0 +1 @@ + Overview - Cloud Pak Deployer

Deployment process overview🔗

Deployment process overview

When running the Cloud Pak Deployer (cp-deploy env apply), a series of pre-defined stages are followed to arrive at the desired end-state.

10 - Validation🔗

In this stage, the following activities are executed:

  • Is the specified cloud platform in the inventory file supported?
  • Are the mandatory variables defined?
  • Can the deployer connect to the specified vault?

20 - Prepare🔗

In this stage, the following activities are executed:

  • Read the configuration files from the config directory
  • Replace variable placeholders in the configuration with the extra parameters passed to the cp-deploy command
  • Expand the configuration with defaults from the defaults directory
  • Run the "linter" to check the object attributes in the configuration and their relations
  • Generate the Terraform scripts to provision the infrastructure (IBM Cloud only)
  • Download all CLIs needed for the selected cloud platform and cloud pak(s), if not air-gapped
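The placeholder replacement in the second step can be illustrated as follows; the {{ env_id }} marker style and the sed one-liner are a simplified stand-in for the deployer's internal templating.

```shell
# Sketch: an extra parameter such as "-e env_id=pluto-01" replaces the
# {{ env_id }} placeholder in the configuration before it is processed.
echo 'name: "{{ env_id }}-cluster"' | sed 's/{{ env_id }}/pluto-01/'
# prints: name: "pluto-01-cluster"
```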

30 - Provision infra🔗

In this stage, the following activities are executed:

  • Run Terraform to create or change the infrastructure components for IBM cloud
  • Run the OpenShift installer-provisioned infrastructure (IPI) installer for AWS (ROSA), Azure (ARO) or vSphere

40 - Configure infra🔗

In this stage, the following activities are executed:

  • Configure the VPC bastion and NFS server(s) for IBM Cloud
  • Configure the OpenShift storage classes, or validate the existing storage classes if an existing OpenShift cluster is used
  • Configure OpenShift logging

50 - Install Cloud Pak🔗

In this stage, the following activities are executed:

  • Create the IBM Container Registry namespace for IBM Cloud
  • Connect to the specified image registry and create ImageContentSourcePolicy
  • Prepare OpenShift cluster for Cloud Pak for Data installation
  • Mirror images to the private registry
  • Install Cloud Pak for Data control plane
  • Configure Foundational Services license service
  • Install specified Cloud Pak for Data cartridges

60 - Configure Cloud Pak🔗

In this stage, the following activities are executed:

  • Add OpenShift signed certificate to Cloud Pak for Data web server when on IBM Cloud
  • Configure LDAP for Cloud Pak for Data
  • Configure SAML authentication for Cloud Pak for Data
  • Configure auditing for Cloud Pak for Data
  • Configure instance for the cartridges (Analytics engine, Db2, Cognos Analytics, Data Virtualization, …)
  • Configure instance authorization using the LDAP group mapping

70 - Deploy Assets🔗

  • Configure Cloud Pak for Data monitors
  • Install Cloud Pak for Data assets

80 - Smoke Tests🔗

In this stage, the following activities are executed:

  • Show the Cloud Pak for Data URL and admin password
\ No newline at end of file diff --git a/30-reference/process/prepare/index.html b/30-reference/process/prepare/index.html new file mode 100644 index 000000000..437b40542 --- /dev/null +++ b/30-reference/process/prepare/index.html @@ -0,0 +1 @@ + Prepare deployment - Cloud Pak Deployer

Prepare the deployer🔗

This stage mainly takes care of checking the configuration and expanding it where necessary so it can be used by subsequent stages. Additionally, the preparation also calls the roles that will generate Terraform or other configuration files which are needed for provisioning and configuration.

Generator🔗

All YAML files in the config directory of the specified CONFIG_DIR are processed and a composite JSON object, all_config, is created, which contains the full configuration.

While processing the objects defined in the config directory files, the defaults directory is also processed to determine if any supplemental "default" variables must be added to the configuration objects. This makes it easy, for example, to ensure VSIs always use the correct Red Hat Enterprise Linux image available on IBM Cloud.

You will find the generator roles under the automation-generators directory. There are cloud-provider dependent roles such as openshift which have a structure dependent on the chosen cloud provider and there are generic roles such as cp4d which are not dependent on the cloud provider.

To find the appropriate role for the object, the generator first checks if the role is found under the specified cloud provider directory. If not found, it will call the role under generic.

Linting🔗

Each of the objects has a syntax checking module called preprocessor.py. This Python program checks the attributes of the object in question and can also add defaults for properties which are missing. All errors found are collected and displayed at the end of the generator.

\ No newline at end of file diff --git a/30-reference/process/provision-infra/index.html b/30-reference/process/provision-infra/index.html new file mode 100644 index 000000000..c110dd4dd --- /dev/null +++ b/30-reference/process/provision-infra/index.html @@ -0,0 +1 @@ + Provision infrastructure - Cloud Pak Deployer

Provision infrastructure🔗

This stage will provision the infrastructure that was defined in the input configuration files. Currently, this has only been implemented for IBM Cloud.

IBM Cloud🔗

The IBM Cloud infrastructure provisioning runs Terraform to initially provision the infrastructure components such as VPC, VSIs, security groups, ROKS cluster and others. Also, if changes have been made in the configuration, Terraform will attempt to make the changes to reach the desired end-state.

Based on the chosen action (apply or destroy), Terraform is instructed to provision or change the infrastructure components or to destroy everything.

The Terraform state file (tfstate) is maintained in the vault and is critical to enable dynamic updates to the infrastructure. If the state file is lost or corrupted, updates to the infrastructure will have to be done manually. The Ansible tasks have been built in a way that the Terraform state file is always persisted into the vault, even if the apply or destroy process has failed.

There are 3 main steps:

Terraform init🔗

This step initializes the Terraform provider (ibm) with the correct version. If needed, the Terraform modules for the provider are downloaded or updated.

Terraform plan🔗

Applying changes to the infrastructure using Terraform based on the input configuration files may cause critical components to be replaced (destroyed and recreated). The plan step checks what will be changed. If infrastructure components would be destroyed and the --confirm-destroy parameter has not been specified for the deployer, the process is aborted.

Terraform apply or Terraform destroy🔗

This is the execution of the plan and will provision new infrastructure (apply) or destroy everything (destroy).

While the Terraform apply or destroy process is running, a .tfstate file is updated on disk. When the command completes, the deployer writes this as a secret to the vault so it can be used next time to update (or destroy) the infrastructure components.

\ No newline at end of file diff --git a/30-reference/process/smoke-tests/index.html b/30-reference/process/smoke-tests/index.html new file mode 100644 index 000000000..4402e511e --- /dev/null +++ b/30-reference/process/smoke-tests/index.html @@ -0,0 +1,2 @@ + Smoke tests - Cloud Pak Deployer

Smoke tests🔗

This is the final stage before returning control to the process that started the deployer. Here, tests are run to check that the Cloud Pak and its cartridges have been deployed correctly and that everything is running as expected.

The method for smoke tests should be dynamic, for example by referencing a Git repository and context (directory within the repository); the code within that directory then deploys the asset(s).

Cloud Pak for Data smoke tests🔗

Show the Cloud Pak for Data URL and admin password🔗

This "smoke test" finds the route of the Cloud Pak for Data instance(s) and retrieves the admin password from the vault which is then displayed.

Example:

['CP4D URL: https://cpd-cpd.fke09-10-a939e0e6a37f1ce85dbfddbb7ab97418-0000.eu-gb.containers.appdomain.cloud', 'CP4D admin password: ITnotgXcMTcGliiPvVLwApmsV']

With this information you can go to the Cloud Pak for Data URL and login using the admin user.

\ No newline at end of file diff --git a/30-reference/process/validate/index.html b/30-reference/process/validate/index.html new file mode 100644 index 000000000..649fed5eb --- /dev/null +++ b/30-reference/process/validate/index.html @@ -0,0 +1 @@ + Validate - Cloud Pak Deployer

10 - Validation - Validate the configuration🔗

In this stage, the following activities are executed:

  • Is the specified cloud platform in the inventory file supported?
  • Are the mandatory variables defined?
  • Can the deployer connect to the specified vault?
\ No newline at end of file diff --git a/30-reference/timings/index.html b/30-reference/timings/index.html new file mode 100644 index 000000000..7a41bfce3 --- /dev/null +++ b/30-reference/timings/index.html @@ -0,0 +1 @@ + Timings - Cloud Pak Deployer

Timings for the deployment🔗

Duration of the overall deployment process🔗

| Phase | Step | Time in minutes | Comments |
|---|---|---|---|
| 10 - Validation | | 3 | |
| 20 - Prepare | Generators | 3 | |
| 30 - Provision infrastructure | Create VPC | 1 | |
| | Create VSI without storage | 5 | |
| | Create VSI with storage | 10 | |
| | Create VPC ROKS cluster | 45 | |
| | Install ROKS OCS add-on and create storage classes | 45 | |
| 40 - Configure infrastructure | Install NFS on VSIs | 10 | |
| | Create NFS storage classes | 5 | |
| | Create private container registry namespace | 5 | |
| 50 - Install Cloud Pak | Prepare OpenShift for Cloud Pak for Data install | 60 | During this step, the compute nodes may be replaced and the Kubernetes services may be restarted. |
| | Mirror Cloud Pak for Data images to private registry | 30-600 | Only done when using a private registry; if the entitled registry is used, this step is skipped. When using a private registry, if images have already been mirrored, the duration will be much shorter, approximately 10 minutes. |
| | Install Cloud Pak for Data control plane | 20 | |
| | Create Cloud Pak for Data subscriptions for cartridges | 15 | |
| | Install cartridges | 20-300 | The amount of time really depends on the cartridges being installed. In the table below you will find an estimate of the installation time for each cartridge. Cartridges will be installed in parallel through the operators. |
| 60 - Configure Cloud Pak | Configure Cloud Pak for Data LDAP | 5 | |
| | Provision instances for cartridges | 30-60 | For cartridges that have instances defined. Creation of the instances will run in parallel where possible. |
| | Configure cartridge and instance permissions based on LDAP config | 10 | |
| 70 - Deploy assets | No activities yet | 0 | |
| 80 - Smoke tests | Show Cloud Pak for Data cluster details | 1 | |

Cloud Pak for Data cartridge deployment🔗

| Cartridge | Full name | Installation time (minutes) | Instance provisioning time (minutes) | Dependencies |
|---|---|---|---|---|
| cpd_platform | Cloud Pak for Data control plane | 20 | N/A | |
| ccs | Common Core Services | 75 | N/A | |
| db2aas | Db2 as a Service | 30 | N/A | |
| iis | Information Server | 60 | N/A | ccs, db2aas |
| ca | Cognos Analytics | 20 | 45 | ccs |
| planning-analytics | Planning Analytics | 15 | N/A | |
| watson_assistant | Watson Assistant | 70 | N/A | |
| watson-discovery | Watson Discovery | 100 | N/A | |
| watson-ks | Watson Knowledge Studio | 20 | N/A | |
| watson-speech | Watson Speech to Text and Text to Speech | 20 | N/A | |
| wkc | Watson Knowledge Catalog | 90 | N/A | ccs, db2aas, iis |
| wml | Watson Machine Learning | 45 | N/A | ccs |
| ws | Watson Studio | 30 | N/A | ccs |

Examples:

  • Cloud Pak for Data installation with just Cognos Analytics will take 20 (control plane) + 75 (ccs) + 20 (ca) + 45 (ca instance) = ~160 minutes
  • Cloud Pak for Data installation with Cognos Analytics and Watson Studio will take 20 (control plane) + 75 (ccs) + 45 (ws+ca) + 45 (ca instance) = ~185 minutes
  • Cloud Pak for Data installation with just Watson Knowledge Catalog will take 20 (control plane) + 75 (ccs) + 30 (db2aas) + 60 (iis) + 90 (wkc) = ~275 minutes
  • Cloud Pak for Data installation with Watson Knowledge Catalog and Watson Studio will take the same time because WS will finish 30 minutes after installing CCS, while WKC will take a lot longer to complete
\ No newline at end of file diff --git a/40-troubleshooting/cp4d-uninstall/index.html b/40-troubleshooting/cp4d-uninstall/index.html new file mode 100644 index 000000000..eb3d74c63 --- /dev/null +++ b/40-troubleshooting/cp4d-uninstall/index.html @@ -0,0 +1 @@ + Cloud Pak for Data uninstall - Cloud Pak Deployer

Uninstall Cloud Pak for Data and Foundational Services🔗

For convenience, the Cloud Pak Deployer includes a script that removes the Cloud Pak for Data instance from the OpenShift cluster, then Cloud Pak Foundational Services and finally the catalog sources and CRDs.

Steps:

  • Make sure you are connected to the OpenShift cluster
  • Run script ./scripts/cp4d/cp4d-delete-instance.sh <CP4D_project>

You will have to confirm that you want to delete the instance and all other artifacts.

Warning

Please be very careful with this command. Ensure you are connected to the correct OpenShift cluster and that no other Cloud Paks use the operators namespace. The action cannot be undone.

\ No newline at end of file diff --git a/40-troubleshooting/ibm-cloud-access-nfs-server/index.html b/40-troubleshooting/ibm-cloud-access-nfs-server/index.html new file mode 100644 index 000000000..16f18d339 --- /dev/null +++ b/40-troubleshooting/ibm-cloud-access-nfs-server/index.html @@ -0,0 +1,69 @@ + Access NFS server provisioned on IBM Cloud - Cloud Pak Deployer

Access NFS server provisioned on IBM Cloud🔗

When choosing the "simple" sample configuration for ROKS VPC on IBM Cloud, the deployer also provisions a Virtual Server Instance and installs a standard NFS server on it. In some cases you may want to get access to the NFS server for troubleshooting.

For security reasons, the NFS server can only be reached via the bastion server, which is connected to the internet; the bastion acts as a jump host. This avoids exposing NFS volumes to the outside world and provides an extra layer of protection. Additionally, password login is disabled on both the bastion and NFS servers, so you must use the private SSH key to connect.

Start the command line within the container🔗

Getting SSH access to the NFS server is easiest from within the deployer container as it has all tools installed to extract the IP addresses from the Terraform state file.

Optional: Ensure that the environment variables for the configuration and status directories are set. If not specified, the directories are assumed to be $HOME/cpd-config and $HOME/cpd-status.

export STATUS_DIR=$HOME/cpd-status
export CONFIG_DIR=$HOME/cpd-config

Start the deployer command line.

./cp-deploy.sh env command

-------------------------------------------------------------------------------
Entering Cloud Pak Deployer command line in a container.
Use the "exit" command to leave the container and return to the hosting server.
-------------------------------------------------------------------------------
Installing OpenShift client
Current OpenShift context: pluto-01

Obtain private SSH key🔗

Access to both the bastion and NFS servers is typically protected by the same SSH key, which is stored in the vault. To list all vault secrets, run the command below.

cd /cloud-pak-deployer
./cp-deploy.sh vault list

Starting Automation script...

PLAY [Secrets] *****************************************************************
Secret list for group sample:
- ibm_cp_entitlement_key
- sample-terraform-tfstate
- cp4d_admin_zen_40_fke34d
- sample-all-config
- pluto-01-provision-ssh-key
- pluto-01-provision-ssh-pub-key

PLAY RECAP *********************************************************************
localhost                  : ok=11   changed=0    unreachable=0    failed=0    skipped=21   rescued=0    ignored=0

Then, retrieve the private key (in the above example pluto-01-provision-ssh-key) to an output file in your ~/.ssh directory, make sure it has the correct private key format (new line at the end) and permissions (600).

SSH_FILE=~/.ssh/pluto-01-rsa
mkdir -p ~/.ssh
chmod 700 ~/.ssh
./cp-deploy.sh vault get -vs pluto-01-provision-ssh-key \
    -vsf $SSH_FILE
echo -e "\n" >> $SSH_FILE
chmod 600 $SSH_FILE

Find the IP addresses🔗

To connect to the NFS server, you need the public IP address of the bastion server and the private IP address of the NFS server. These can be retrieved from the IBM Cloud resource list (https://cloud.ibm.com/resources), but they are also kept in the Terraform "tfstate" file.

./cp-deploy.sh vault get -vs sample-terraform-tfstate \
    -vsf /tmp/sample-terraform-tfstate

The commands below do not provide the prettiest output, but you should be able to extract the IP addresses from them.

For the bastion node public (floating) IP address:

cat /tmp/sample-terraform-tfstate | jq -r '.resources[]' | grep -A 10 -E "ibm_is_float"

  "type": "ibm_is_floating_ip",
  "name": "pluto_01_bastion",
  "provider": "provider[\"registry.terraform.io/ibm-cloud/ibm\"]",
  "instances": [
    {
      "schema_version": 0,
      "attributes": {
        "address": "149.81.215.172",
...
        "name": "pluto-01-bastion",

For the NFS server:

cat /tmp/sample-terraform-tfstate | jq -r '.resources[]' | grep -A 10 -E "ibm_is_instance|primary_network_interface"

...
--
  "type": "ibm_is_instance",
  "name": "pluto_01_nfs",
  "provider": "provider[\"registry.terraform.io/ibm-cloud/ibm\"]",
  "instances": [
...
--
        "primary_network_interface": [
...
            "name": "pluto-01-nfs-nic",
            "port_speed": 0,
            "primary_ipv4_address": "10.227.0.138",

In the above examples, the IP addresses are:

  • Bastion public IP address: 149.81.215.172
  • NFS server private IP address: 10.227.0.138
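If you prefer a more targeted query, jq select filters can pull the two addresses out directly. This is a sketch that assumes the tfstate layout shown in the snippets above; it writes a miniature state file first so the commands can be tried anywhere.

```shell
# Create a miniature tfstate with the same shape as the real one (demo only)
cat > /tmp/demo-tfstate <<'EOF'
{"resources":[
  {"type":"ibm_is_floating_ip","instances":[{"attributes":{"address":"149.81.215.172"}}]},
  {"type":"ibm_is_instance","instances":[{"attributes":
    {"primary_network_interface":[{"primary_ipv4_address":"10.227.0.138"}]}}]}
]}
EOF

# Bastion public (floating) IP address
BASTION_IP=$(jq -r '.resources[] | select(.type=="ibm_is_floating_ip")
  | .instances[].attributes.address' /tmp/demo-tfstate)

# NFS server private IP address
NFS_IP=$(jq -r '.resources[] | select(.type=="ibm_is_instance")
  | .instances[].attributes.primary_network_interface[0].primary_ipv4_address' /tmp/demo-tfstate)

echo "$BASTION_IP $NFS_IP"   # prints: 149.81.215.172 10.227.0.138
```

Against the real file, replace /tmp/demo-tfstate with /tmp/sample-terraform-tfstate.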

SSH to the NFS server🔗

Finally, to get command line access to the NFS server:

BASTION_IP=149.81.215.172
NFS_IP=10.227.0.138
ssh -i $SSH_FILE \
  -o ProxyCommand="ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
  -i $SSH_FILE -W %h:%p -q $BASTION_IP" \
  root@$NFS_IP

Stopping the session🔗

Once you've finished exploring the NFS server, you can exit from it:

exit

Finally, exit from the deployer container which is then terminated.

exit

\ No newline at end of file diff --git a/404.html b/404.html new file mode 100644 index 000000000..721da0b33 --- /dev/null +++ b/404.html @@ -0,0 +1 @@ + Cloud Pak Deployer
\ No newline at end of file diff --git a/50-advanced/advanced-configuration/index.html b/50-advanced/advanced-configuration/index.html new file mode 100644 index 000000000..7a2f11c4e --- /dev/null +++ b/50-advanced/advanced-configuration/index.html @@ -0,0 +1,59 @@ + Advanced configuration - Cloud Pak Deployer

Cloud Pak Deployer Advanced Configuration🔗

The Cloud Pak Deployer includes several samples which you can use to build your own configuration. You can find sample configuration yaml files in the sub-directories of the sample-configurations directory of the repository. Descriptions and topologies are also included in the sub-directories.

Warning

Do not make changes to the sample configurations in the cloud-pak-deployer directory, but rather copy it to your own home directory or somewhere else and then make changes. If you store your own configuration under the repository's clone, you may not be able to update (pull) the repository with changes applied on GitHub, or accidentally overwrite it.

Warning

The deployer expects to manage all objects referenced in the configuration files, including the referenced OpenShift cluster and Cloud Pak installation. If you have already pre-provisioned the OpenShift cluster, choose a configuration with existing-ocp cloud platform. If the Cloud Pak has already been installed, unexpected and undesired activities may happen. The deployer has not been designed to alter a pre-provisioned OpenShift cluster or existing Cloud Pak installation.

Configuration steps - static sample configuration🔗

  1. Copy the static sample configuration directory to your own directory:
    mkdir -p $HOME/cpd-config/config
    +cp -r ./sample-configurations/roks-ocs-cp4d/config/* $HOME/cpd-config/config/
    +cd $HOME/cpd-config/config
    +
  2. Edit the "cp4d-....yaml" file and select the cartridges to be installed by changing the state to installed. Additionally you can accept the Cloud Pak license in the config file by specifying accept_licenses: True.
    nano ./cp4d-450.yaml
    +

The configuration typically works without any changes and will create all referenced objects, including the Virtual Private Cloud, subnets, SSH keys, ROKS cluster and OCS storage nodes. There is typically no need to change address prefixes and subnets. The IP addresses used by the provisioned components are private to the VPC and are not externally exposed.
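For illustration, after selecting cartridges and accepting the licenses, the cp4d object in the edited file might look like the following fragment (the cartridge list is abbreviated; names and version follow the sample configurations):

```yaml
cp4d:
- project: cpd-instance
  openshift_cluster_name: "{{ env_id }}"
  cp4d_version: 4.5.0
  accept_licenses: True
  cartridges:
  - name: cpfs
  - name: cpd_platform
  - name: ws
    state: installed
```

Cartridges without `state: installed` are left uninstalled by the deployer.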

Configuration steps - dynamically choose OpenShift and Cloud Pak🔗

  1. Copy the sample configuration directory to your own directory:
    mkdir -p $HOME/cpd-config/config
    +
  2. Copy the relevant OpenShift configuration file from the sample-configurations directory to the config directory, for example:
    cp ./sample-configurations/sample-dynamic/config-samples/ocp-ibm-cloud-roks-ocs.yaml $HOME/cpd-config/config/
    +
  3. Copy the relevant "cp4d-…" file from the sample-configurations directory to the config directory, for example:

    cp ./sample-configurations/sample-dynamic/config-samples/cp4d-463.yaml $HOME/cpd-config/config/
    +

  4. Edit the "$HOME/cpd-config/config/cp4d-....yaml" file and select the cartridges to be installed by changing the state to installed. Additionally you can accept the Cloud Pak license in the config file by specifying accept_licenses: True.

    nano $HOME/cpd-config/config/cp4d-463.yaml
    +

For more advanced configuration topics such as using a private registry, setting up transit gateways between VPCs, etc., go to the Advanced configuration section.

Directory structure🔗

Every configuration has a fixed directory structure, consisting of mandatory and optional subdirectories. Directory structure

Mandatory subdirectories:

  • config: Keeps one or more yaml files with your OpenShift and Cloud Pak configuration

Additionally, there are 3 optional subdirectories:

  • defaults: Directory that keeps the defaults which will be merged with your configuration
  • inventory: Keep global settings for the configuration such as environment name or other variables used in the configs
  • assets: Keeps directories of assets which must be deployed onto the Cloud Pak

config directory🔗

You can choose to keep only a single file per subdirectory or, for more complex configurations, you can create multiple yaml files. You can find a full list of all supported object types here: Configuration objects. The generator automatically merges all .yaml files in the config and defaults directories. Files with different extensions are ignored. In the sample configurations we split configuration of the OpenShift ocp-... and Cloud Pak cp4d-... objects.

For example, your config directory could hold the following files:

cp4d-463.yaml
+ocp-ibm-cloud-roks-ocs.yaml
+

This will provision a ROKS cluster on IBM Cloud with OpenShift Data Foundation (fka OCS) and Cloud Pak for Data 4.6.3.

defaults directory (optional)🔗

Holds the defaults for all object types. If a certain object property has not been specified in the config directory, it will be retrieved from the defaults directory using the flavour specified in the configured object. If no flavour has been selected, the default flavour will be chosen.

You should not need this subdirectory in most circumstances.

assets directory (optional)🔗

Optional directory holding the assets you wish to deploy for the Cloud Pak. More information about Cloud Pak for Data assets which can be deployed can be found in object definition cp4d_asset. The directory can be named differently as well, for example cp4d-assets or customer-churn-demo.

inventory directory (optional)🔗

The Cloud Pak Deployer pipeline has been built using Ansible and it can be configured using "inventory" files. Inventory files allow you to specify global variables used throughout Ansible playbooks. In the current version of the Cloud Pak Deployer, the inventory directory has become fully optional as the global_config and vault objects have taken over its role. However, if there are certain global variables such as env_id you want to pass via an inventory file, you can also do this.

Vault secrets🔗

User passwords, certificates and other "secret" information is kept in the vault, which can be either a flat file (not encrypted), HashiCorp Vault or the IBM Cloud Secrets Manager service. Some of the deployment configurations require that the vault is pre-populated with secrets which are needed during the deployment. For example, a vSphere deployment needs the vSphere user and password to authenticate to vSphere, and Cloud Pak for Data SAML configuration requires the IdP certificate.

All samples default to the File Vault, meaning that the vault will be kept in the vault directory under the status directory you specify when you run the deployer. Detailed descriptions of the vault settings can be found in the sample inventory file and also here: vault settings.

Optional: Ensure that the environment variables for the configuration and status directories are set. If not specified, the directories are assumed to be $HOME/cpd-config and $HOME/cpd-status.

export STATUS_DIR=$HOME/cpd-status
+export CONFIG_DIR=$HOME/cpd-config
+

Set vSphere user secret:

./cp-deploy.sh vault set \
+    --vault-secret vsphere-user \
+    --vault-secret-value super_user@vsphere.local
+

Or, if you want to create the secret from an input file:

./cp-deploy.sh vault set \
+    --vault-secret kubeconfig \
+    --vault-secret-file ~/.kube/config
+

Using a GitHub repository for the configuration🔗

If the configuration is kept in a GitHub repository, you can set environment variables to have the deployer pull the GitHub repository to the current server before starting the process.

Set environment variables.

export CPD_CONFIG_GIT_REPO="https://github.com/IBM/cloud-pak-deployer-config.git"
+export CPD_CONFIG_GIT_REF="main"
+export CPD_CONFIG_GIT_CONTEXT=""
+

  • CPD_CONFIG_GIT_REPO: The clone URL of the GitHub repository that holds the configuration.
  • CPD_CONFIG_GIT_REF: The branch, tag or commit ID to be cloned. If not specified, the repository's default branch will be cloned.
  • CPD_CONFIG_GIT_CONTEXT: The directory within the GitHub repository that holds the configuration. This directory must contain the config directory under which the YAML files are kept.

Info

When specifying a GitHub repository, the contents will be copied under $STATUS_DIR/cpd-config and this directory is then set as the configuration directory.

Using dynamic variables (extra variables)🔗

In some situations you may want to use a single configuration for deployment in different environments, such as development, acceptance test and production. The Cloud Pak Deployer uses the Jinja2 templating engine which is included in Ansible to pre-process the configuration. This allows you to dynamically adjust the configuration based on extra variables you specify at the command line.

Example:

./cp-deploy.sh env apply \
+  -e ibm_cloud_region=eu-de \
+  -e env_id=jupiter-03 [--accept-all-licenses]
+

This passes the env_id and ibm_cloud_region variables to the Cloud Pak Deployer, which can then populate variables in the configuration. In the sample configurations, the env_id is used to specify the name of the VPC, ROKS cluster and others, and overrides the value specified in the global_config definition. The ibm_cloud_region overrides the region specified in the inventory file.

...
+vpc:
+- name: "{{ env_id }}"
+  allow_inbound: ['ssh']
+
+address_prefix:
+### Prefixes for the client environment
+- name: "{{ env_id }}-zone-1"
+  vpc: "{{ env_id }}"
+  zone: {{ ibm_cloud_region }}-1
+  cidr: 10.231.0.0/26
+...
+

When running with the above cp-deploy.sh command, the snippet would be generated as:

...
+vpc:
+- name: "jupiter-03"
+  allow_inbound: ['ssh']
+
+address_prefix:
+### Prefixes for the client environment
+- name: "jupiter-03-zone-1"
+  vpc: "jupiter-03"
+  zone: eu-de-1
+  cidr: 10.231.0.0/26
+...
+

The ibm_cloud_region variable is specified in the inventory file. This is another method of specifying variables for dynamic configuration.

You can even include more complex constructs for dynamic configuration, with if statements, for loops and others.

An example where the OpenShift OCS storage classes would only be generated for a specific environment (jupiter-prod) would be:

  openshift_storage:
+  - storage_name: nfs-storage
+    storage_type: nfs
+    nfs_server_name: "{{ env_id }}-nfs"
+{% if env_id == 'jupiter-prod' %}
+  - storage_name: ocs-storage
+    storage_type: ocs
+    ocs_storage_label: ocs
+    ocs_storage_size_gb: 500
+{% endif %}
+
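Similarly, a for loop can expand one definition into several objects. A hypothetical fragment that generates an address prefix per zone, reusing the env_id and ibm_cloud_region variables from above:

```
address_prefix:
{% for zone in [1, 2, 3] %}
- name: "{{ env_id }}-zone-{{ zone }}"
  vpc: "{{ env_id }}"
  zone: {{ ibm_cloud_region }}-{{ zone }}
  cidr: 10.231.{{ zone }}.0/26
{% endfor %}
```

With env_id set to jupiter-03, this would render three address_prefix entries, one per zone.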

For a more comprehensive overview of Jinja2 templating, see https://docs.ansible.com/ansible/latest/user_guide/playbooks_templating.html

\ No newline at end of file diff --git a/50-advanced/alternative-repo-reg/index.html b/50-advanced/alternative-repo-reg/index.html new file mode 100644 index 000000000..1bb5fd5aa --- /dev/null +++ b/50-advanced/alternative-repo-reg/index.html @@ -0,0 +1,31 @@ + Using alternative CASE repositories and registries - Cloud Pak Deployer

Using alternative repositories and registries🔗

Warning

In most scenarios you will not need this type of configuration.

Alternative repositories and registries are mainly geared towards pre-GA use of the Cloud Paks, where CASE files are downloaded from internal repositories and staging container image registries must be used because the images have not been released yet.

Building the Cloud Pak Deployer image🔗

By default the Cloud Pak Deployer image is built on top of the olm-utils images in icr.io. If you're working with a pre-release of the Cloud Pak OLM utils image, you can override the setting as follows:

export CPD_OLM_UTILS_V2_IMAGE=cp.staging.acme.com:4.8.0
+

Subsequently, run the build command:

./cp-deploy.sh build
+

Configuring the alternative repositories and registries🔗

When specifying a cp_alt_repo object in a YAML file, this is used for all Cloud Paks. The object triggers the following steps:

  • The following files are created in the /tmp/work directory in the container: play_env.sh, resolvers.yaml and resolvers_auth.
  • When downloading CASE files using the ibm-pak plug-in, the play_env sets the locations of the resolvers and authorization files.
  • The locations of the CASE files for the Cloud Pak, Foundational Services and Open Content are set in an environment variable.
  • Registry mirrors are configured using an ImageContentSourcePolicy resource in the OpenShift cluster.
  • Registry credentials are added to the OpenShift cluster's global pull secret.

The cp_alt_repo is configured like this:

cp_alt_repo:
+  repo:
+    token_secret: github-internal-repo
+    cp_path: https://raw.internal-repo.acme.com/cpd-case-repo/4.8.0/promoted/case-repo-promoted
+    fs_path: https://raw.internal-repo.acme.com/cloud-pak-case-repo/main/repo/case
+    opencontent_path: https://raw.internal-repo.acme.com/cloud-pak-case-repo/main/repo/case
+  registry_pull_secrets:
+  - registry: cp.staging.acme.com
+    pull_secret: cp-staging
+  - registry: fs.staging.acme.com
+    pull_secret: cp-fs-staging
+  registry_mirrors:
+  - source: cp.icr.io/cp
+    mirrors:
+    - cp.staging.acme.com/cp
+  - source: cp.icr.io/cp/cpd
+    mirrors:
+    - cp.staging.acme.com/cp/cpd
+  - source: icr.io/cpopen
+    mirrors:
+    - fs.staging.acme.com/cp
+  - source: icr.io/cpopen/cpfs
+    mirrors:
+    - fs.staging.acme.com/cp
+

Property explanation🔗

| Property | Description | Mandatory | Allowed values |
| --- | --- | --- | --- |
| repo | Repositories to be accessed and the Git token | Yes | |
| repo.token_secret | Secret in the vault that holds the Git login token | Yes | |
| repo.cp_path | Repository path where to find the Cloud Pak CASE files | Yes | |
| repo.fs_path | Repository path where to find the Foundational Services CASE files | Yes | |
| repo.opencontent_path | Repository path where to find the Open Content CASE files | Yes | |
| registry_pull_secrets | List of registries and their pull secrets, used to configure the global pull secret | Yes | |
| .registry | Registry host name | Yes | |
| .pull_secret | Vault secret that holds the pull secret (user:password) for the registry | Yes | |
| registry_mirrors | List of registries and their mirrors, used to configure the ImageContentSourcePolicy | Yes | |
| .source | Registry and path referenced by the Cloud Pak/FS pod | Yes | |
| .mirrors | List of alternate registry locations for this source | Yes | |
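Each registry_mirrors entry maps onto an OpenShift ImageContentSourcePolicy. A sketch of the resource corresponding to a mirror of cp.icr.io/cp (the metadata name is an assumption; the deployer's actual resource name may differ):

```yaml
apiVersion: operator.openshift.io/v1alpha1
kind: ImageContentSourcePolicy
metadata:
  name: cloud-pak-registry-mirrors   # hypothetical name
spec:
  repositoryDigestMirrors:
  - source: cp.icr.io/cp
    mirrors:
    - cp.staging.acme.com/cp
```

OpenShift nodes will then attempt to pull images referenced by digest from the mirror before falling back to the original source registry.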

Configuring the secrets🔗

Before running the deployer with a cp_alt_repo object, you need to ensure the referenced secrets are present in the vault.

For the GitHub token, you need to set the token (typically a deploy key) to login to GitHub or GitHub Enterprise.

./cp-deploy.sh vault set -vs github-internal-repo=abc123def456
+

For the registry credentials, specify the user and password separated by a colon (:):

./cp-deploy.sh vault set -vs cp-staging="cp-staging-user:cp-staging-password"
+

You can also set these tokens on the cp-deploy.sh env apply command line.

./cp-deploy.sh env apply -f -vs github-internal-repo=abc123def456 -vs cp-staging="cp-staging-user:cp-staging-password"
+

Running the deploy🔗

To run the deployer you can now use the standard process:

./cp-deploy.sh env apply -v
+

\ No newline at end of file diff --git a/50-advanced/apply-node-settings-non-mco/index.html b/50-advanced/apply-node-settings-non-mco/index.html new file mode 100644 index 000000000..d597b2ce0 --- /dev/null +++ b/50-advanced/apply-node-settings-non-mco/index.html @@ -0,0 +1,44 @@ + Apply OpenShift node settings when machine config operator does not exist - Cloud Pak Deployer

Apply OpenShift node settings when machine config operator does not exist🔗

Cloud Pak Deployer automatically applies cluster and node settings before installing the Cloud Pak(s). Sometimes you may also want to automate applying these node settings without installing the Cloud Pak. For convenience, the repository includes a script that makes the same changes normally done through automation: scripts/cp4d/cp4d-apply-non-mco-cluster-settings.sh.

To apply the node settings, do the following:

  • If images are pulled from the entitled registry, set the CP_ENTITLEMENT_KEY environment variable
  • If images are to be pulled from a private registry, set both the CPD_PRIVATE_REGISTRY and CPD_PRIVATE_REGISTRY_CREDS environment variables
  • Log in to the OpenShift cluster with cluster-admin permissions
  • Run the scripts/cp4d/cp4d-apply-non-mco-cluster-settings.sh script.

The CPD_PRIVATE_REGISTRY value must reference the registry host name and optionally the port and namespace that must prefix the images. For example, if the images are kept in https://de.icr.io/cp4d-470, you must specify de.icr.io/cp4d-470 for the CPD_PRIVATE_REGISTRY environment variable. If images are kept in https://cust-reg:5000, you must specify cust-reg:5000 for the CPD_PRIVATE_REGISTRY environment variable.

For the CPD_PRIVATE_REGISTRY_CREDS value, specify both the user and password in a single string, separated by a colon (:). For example: admin:secret_passw0rd.
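Because the user name cannot contain a colon, the credentials string is split on the first colon. A sketch in shell of how such a value can be taken apart (the credential values are hypothetical):

```shell
# Hypothetical credentials in the user:password format described above
CPD_PRIVATE_REGISTRY_CREDS="admin:secret_passw0rd"

# Everything before the first colon is the user, everything after it the password
REG_USER="${CPD_PRIVATE_REGISTRY_CREDS%%:*}"
REG_PASSWORD="${CPD_PRIVATE_REGISTRY_CREDS#*:}"
echo "user=$REG_USER password=$REG_PASSWORD"
```

Splitting on the first colon keeps passwords that themselves contain colons intact.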

Warning

When setting the private registry and its credentials, the script automatically creates the configuration that sets up the ImageContentSourcePolicy and global pull secret alternatives. This change cannot be undone using the script; it is not possible to set the private registry and later revert to the entitled registry. The private registry's credentials can be changed by re-running the script with the new credentials.

Example🔗

export CPD_PRIVATE_REGISTRY=de.icr.io/cp4d-470
+export CPD_PRIVATE_REGISTRY_CREDS="iamapikey:U97KLPYF663AE4XAQL0"
+./scripts/cp4d/cp4d-apply-non-mco-cluster-settings.sh
+
Creating ConfigMaps and secret
+configmap "cloud-pak-node-fix-scripts" deleted
+configmap/cloud-pak-node-fix-scripts created
+configmap "cloud-pak-node-fix-config" deleted
+configmap/cloud-pak-node-fix-config created
+secret "cloud-pak-node-fix-secrets" deleted
+secret/cloud-pak-node-fix-secrets created
+Setting global pull secret
+/tmp/.dockerconfigjson
+info: pull-secret was not changed
+secret/cloud-pak-node-fix-secrets data updated
+Private registry specified, creating ImageContentSourcePolicy for registry de.icr.io/cp4d-470
+Generating Tuned config
+tuned.tuned.openshift.io/cp4d-ipc unchanged
+Writing fix scripts to config map
+configmap/cloud-pak-node-fix-scripts data updated
+configmap/cloud-pak-node-fix-scripts data updated
+configmap/cloud-pak-node-fix-scripts data updated
+configmap/cloud-pak-node-fix-scripts data updated
+Creating service account for DaemonSet
+serviceaccount/cloud-pak-crontab-sa unchanged
+clusterrole.rbac.authorization.k8s.io/system:openshift:scc:privileged added: "cloud-pak-crontab-sa"
+Recreate DaemonSet
+daemonset.apps "cloud-pak-crontab-ds" deleted
+daemonset.apps/cloud-pak-crontab-ds created
+Showing running DaemonSet pods
+NAME                         READY   STATUS              RESTARTS   AGE
+cloud-pak-crontab-ds-b92f9   0/1     Terminating         0          12m
+cloud-pak-crontab-ds-f85lf   0/1     ContainerCreating   0          0s
+cloud-pak-crontab-ds-jlbvm   0/1     ContainerCreating   0          0s
+cloud-pak-crontab-ds-rbj65   1/1     Terminating         0          12m
+cloud-pak-crontab-ds-vckrs   0/1     ContainerCreating   0          0s
+cloud-pak-crontab-ds-x288p   1/1     Terminating         0          12m
+Waiting for 5 seconds for pods to start
+
+Showing running DaemonSet pods
+NAME                         READY   STATUS    RESTARTS   AGE
+cloud-pak-crontab-ds-f85lf   1/1     Running   0          5s
+cloud-pak-crontab-ds-jlbvm   1/1     Running   0          5s
+cloud-pak-crontab-ds-vckrs   1/1     Running   0          5s
+
\ No newline at end of file diff --git a/50-advanced/gitops/index.html b/50-advanced/gitops/index.html new file mode 100644 index 000000000..d732f8734 --- /dev/null +++ b/50-advanced/gitops/index.html @@ -0,0 +1,15 @@ + Continuous Adoption using GitOps - Cloud Pak Deployer

GitOps

The process of supporting multiple products, releases and patch levels within a release has great similarity to the git-flow model, well described by Vincent Driessen in his blog post: https://nvie.com/posts/a-successful-git-branching-model/. This model has been, and still is, very popular with many software development teams.

Below is a description of how a git-flow could be implemented with the Cloud Pak Deployer. The following steps are covered:

  • Setting up the company's Git and image registry for the Cloud Paks
  • The git-flow change process
  • Feeding Cloud Pak changes into the process
  • Deploying the Cloud Pak changes

Environments, Git and registry🔗

Governed Process with Continuous Adoption.

There are 4 Cloud Pak environments within the company's domain: Dev, UAT, Pre-prod and Prod. Each of these environments has a namespace in the company's registry (or an isolated registry could be created per environment) and the Cloud Pak release installed is represented by manifests in a branch of the Git repository, respectively dev, uat, pp and prod.

Organizing registries by namespace has the advantage that duplication of images can be avoided. Each of the namespaces can have their own set of images that have been approved for running in the associated environment. The image itself is referenced by digest (i.e., checksum) and organized on disk as such. If one tries to copy an image to a different namespace within the same registry, only a new entry is created, the image itself is not duplicated because it already exists.

The manifests (CASE files) representing the Cloud Pak components are present in each of the branches of the Git repository, or there is a configuration file that references the location of the case file, including the exact version number.

In the Cloud Pak Deployer, we have chosen to reference the CASE versions in the configuration, for example:

cp4d:
+- project: cpd-instance
+  openshift_cluster_name: {{ env_id }}
+  cp4d_version: 4.6.0
+  openshift_storage_name: ocs-storage
+  sequential_install: True
+  cartridges:
+  - name: cpfs
+  - name: cpd_platform
+  - name: ws
+    state: installed
+  - name: wml
+    size: small
+    state: installed
+

If Cloud Pak for Data has been configured with a private registry in the deployer config, the deployer will mirror images from the IBM entitled registry to the private registry. In the above configuration, no private registry has been specified. The deployer will automatically download and use the CASE files to create the catalog sources.

Change process using git-flow🔗

With the initial status in place, the continuous adoption process may commence, using the principles of git-flow.

Git-flow addresses a couple of needs for continuous adoption:

  • Control and visibility over what software (version) runs in which environment; there is a central truth which describes the state of every environment managed
  • New features (in case of the deployer: new operator versions and custom resources) can be tested without affecting the pending releases or production implementation
  • While preparing for a new release, hot fixes can still be applied to the production environments

git-flow

The Git repository consists of 4 branches: dev, uat, pp and prd. At the start, release 4.0.0 is being implemented and it will go through the stages from dev to prd. When the installation has been tested in development, a pull request (PR) is done to promote to the uat branch. The PR is reviewed, and changes are then merged into the uat branch. After testing in the uat branch, the steps are repeated until the 4.0.0 release is eventually in production.

With each of the implementation and promotion steps, the registry namespaces associated with the particular branch are updated with the images described in the manifests kept in the Git repository. Additionally, the changes are installed in the respective environments. The details of these processes will be outlined later.

New patches are received, committed and installed on the dev branch on a regular basis and when no issues are found, the changes are gathered into a PR for uat. When no issues are found for 2 weeks, another PR is done for the pp branch and eventually for prd. During this promotion flow, new patches are still being received in dev.

While version 4.0.2 is running in production, a critical defect is found for which a hot fix is developed. The hot fix is first committed to the pp branch and tested and then a PR is made to promote it to the prd branch. In the meantime, the dev and uat branches continue with their own release schedule. The hot fix is included in 4.0.4 which will be promoted as part of the 4.0.5 release.

The uat, pp and prd branches can be protected by a branch protection rule so that changes from dev can only be promoted (via a pull request) after an approving review or, when the intention is to promote changes in a fully automated manner, after passing status checks and testing. Read Managing a branch protection rule to put these controls in place in GitHub, or Protected branches for GitLab.

With this flow, there is control over patches, promotion approvals and releases installed in each of the environments. Additional branches could be introduced if additional environments are in play or if different releases are being managed using the git-flow.

Feeding patches and releases into the flow🔗

As discussed above, patches are first "developed" in the dev branch, i.e., changes are fed into the Git repository, images are loaded into the company's registry (dev namespace) and then installed into the Dev environment.

The process of receiving and installing the patches is common for all Cloud Paks: the cloudctl case tool downloads the CASE file associated with the operator version and the same CASE file can be used to upload images into the company's registry. Then a Catalog Source is created which makes the images available to the operator subscriptions, which in turn manage the various custom resources in the Cloud Pak instance. For example, the ws operator manages the Ws custom resource and this CR ensures that OpenShift deployments, secrets, Config Maps, Stateful Sets, and so forth are managed within the Cloud Pak for Data instance project.
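As a sketch, the Catalog Source created from a mirrored catalog image could look like this (the name, namespace, image reference and display name are all hypothetical; the real values come from the CASE file):

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: ibm-cp-data-operator-catalog     # hypothetical name
  namespace: openshift-marketplace
spec:
  sourceType: grpc
  image: registry.acme.com/dev/ibm-cp-datacore-catalog:4.0.2   # hypothetical mirrored image
  displayName: CPD Catalog
  publisher: IBM
```

The operator subscriptions then resolve their operator bundles from this catalog, and promoting a release amounts to pointing the Catalog Source at the image mirrored into the next environment's namespace.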

In the git-flow example, Watson Studio release 4.0.2 is installed by updating the Catalog Source. Detailed installation steps for Cloud Pak for Data can be found in the IBM documentation.

Deploying the Cloud Pak changes🔗

Now that the hard work of managing changes to the Git repository branches and image registry namespaces has been done, we can look at the (automatic) deployment of the changes.

In a continuous adoption workflow, the implementation of new releases and patches is automated by means of a pipeline, which allows for deployment and testing in a predictable and controlled manner. A pipeline executes a series of steps to inspect the change and then runs the command to install it in the respective environment. Moreover, after installation, tests can be executed automatically. The most popular tools for pipelines are ArgoCD, GitLab pipelines and Tekton (serverless).

To link the execution of a pipeline with the git-flow pull request, one can use ArgoCD or a GitHub/GitLab webhook. As soon as a PR is accepted and changes are applied to the Git branch, the pipeline is triggered and will run the Cloud Pak Deployer to automatically apply the changes according to the latest version.

\ No newline at end of file diff --git a/50-advanced/images/air-gapped-overview.drawio b/50-advanced/images/air-gapped-overview.drawio new file mode 100644 index 000000000..1d645e3da --- /dev/null +++ b/50-advanced/images/air-gapped-overview.drawio @@ -0,0 +1 @@
U1u7a0cGht0NZaMI34bWEKlPkxhKs7U/7zlL7e7fi953Z/mGem+GMvgJa7UUpTkkdZh/1rdfT+hciDiUjo0w8dFHxB2vDNSaLkzlIhgim6IZoJnnVHrz1gaPlfqZUSrtvGrZUZgY1aG3oFBthwB5ZGFimJK7kkgF9FddeSUQlW2tb8HB1iCBVLud+g5t23kdxeqXGzKMfzhkCw4uVTJvEgdB6f+uNyFhQc/3nslf4nGTv66NbfoP0BV4IhFZgD28fGJLUo9mo4L+S73ejCc4/mi1Op1YA8WF3QyRnTXO4l3ThuhsNCQK1yjBMTUv/xtoV1d3Z43h8m7YJc7Aq10TcfZJqHoXFGyHWu0n1yPEiiEVQy/2jgOAn+J4juGAZ9K1jLvGjkA6KUzXypS7KSXBoqYpHC5SAsjhQlf0uipxQ2kAVw04VDFsuWhfkY4zqnQ+WTsheLkpi5s+SEguGNrTeE5r6CgNgL4IE+WUqTr0cV6s3jocKUkNpuhOxpaXSG/q4CkC1bi2chUQHZILxjS1LFvEXAICU35ixeXf4ojKQ+i8v6zu+KDquyHARzINS8SArkpAkCUVS2cfvSGIfoBtPI0EKUevADtjxgYlJUpOaBrJ0trNCrIi0iSIjPAKWRE4HFJeIahzQIijOk0gxwMF68WsWmVg4cIGSzEFrKiysEHVzRLiBmkEKrFUHGUo/zhwEk6pB92J4QMRZfa4ESRJaDHrW1WwWkry1tCtRux5HyJYrQpMQgza0gQwSWNQowQsSXlHpJ2DEI48M2sq9PfYOCjxeaI6VZqIcHKy7VGdcTKLGGmaonEyZWmKqiCiirmBT8SIopg9o10BRvbI3mv1afRM6DlOrlaGDLXFWaIdw2m9Kp9FNQVgdEHow6XEN6wQIEdFFlsdlszPqo5AtO0ei3VQAhGNDBHu0BtNUcKmf8HyaWBRjSMDi2YIcDjA5l8uQ/J+ced36DZa0Kq36JrcWlrcIpW3QSdfOlHPuyxp6dH9A2VMYB7XvvCJNDczGzRAcWJyEEdRr50CIwmLhaNx3BONvTCkSQoJhmLV3pKzXRvzIBof2KsoIqalWK5m+8Tsclk2XQqijYAXlDYrrmJ3PcnXL0mVWakr7GIfBVI6hBBUBCoBiegz9Im+xo+YW9Lc0jXeprRUKxZvLYnRTsmAZriIZZweTmFQ2n7R4XKyza+aki0t4inSlm7t5NdWtk0k2QwV+SRwT2mZIwWIb0eR5xSFVbRIe5mXhLU3mRdJpcjGJONWO7Qt1FT2Yxm28Hyk2+Czq2vslXAhDj/8ZRK5VGf2JGEQnAc+tL+UGVq3k/IhM6RZzf0gksW4xTOqs0vGMfidZInQMqsno41cQRltrirK4lZaUralds2Qpk3LZqEDObl8FqS5m5MrXIfPxZrKbrRBaN9e5oYxDVzPcmu88nXz4sen81rhPZlBuei3BA47PSp9WKnAY0ED5PrwPmjn7ebGkr+qoW3wkORdnF2xzVdYCEpSEraFCTeVjfMyrY3jK8K2WDd6Goa+GCEcr3HO1PMYYsTMbTcKa6lyWK3QVO9Qx/V1bPfHzfCFObKAp83G4X33fgLv4EIxd+a0tObqy3CaZRTdfJP3DT+J0xrqfuPTAqRKOVDyLlqx1DhOM6zNqk08EuuiuPZniOCE4msMxMFhlsnJle3yg4gasdOVcHuV7xFzcspCR8TJqQ/OoJUt+wHSepoIsMsLsXpJul9SSKaVtX2b3wnitl/seaTXHd8T815G/EP6yUDXA6u0Xn77aKc6S12WblOUVkupssCkpjbk7FLYz2FrlU/pNLV6VTs6ppgfOQZznb65lb2t9cgGrn9zizZyKeI9zfdGPSovl7ev/eZdduuD9rvGZ/v4uoTy0nQ+9G9er29frlQXjM3lpTJzDclbqceAwlWA29LVvONIjYJubPEd4xaPyEIy6OPw3Og1Hk/IXBY8La4mWHg9ujx4Ls373z/+7TzPHfy7Bnvfbh6eXiTwfI+D
Fk59P++eZZ4VP3zlaCnzMQjoUOhKLhuRxxVfZzyANyYCdqM27+18KXU95dyj2DSVBnU7cBd1SMI93Yp3C4/PoRqG6G2osi8fUPk9mo94G5vi35yo2nYkrTIq/XWF9zl8Gxd6u8P3eRIWPX1BtH8LuLYXcO25/ulR/lW6uli/JdvCXX0TROn4+Nxy4b/4KL6De3z4MD4HH3/kO04lkQj/TuRubxwYJViY/ky7ve48POEH5yZYnt9h4Fzu4u78reYqvZqrJCxxBcUSj1CX8U0JWHrs/3q5efh2hh5+z29/jwaNa7sl8QdPHarU5XyfhfSWooOkCvhlEN3ueApAF8ihrTca66yh+JAbhfTht6izTY3cMsveYyvDKZU+geh0SNI7X1zmO7wYdlCZi4Zc/9NkrnLFOM0D4rz2pjpP3fn18iF8ea39PvvxK7qTmMY/TuZCWahyOJlfnD/1bq2fb7/Gy2dtYgxfkTuS8LmkVvBry9zkN72r4xbSXH35apIgW32FrX72fw== \ No newline at end of file diff --git a/50-advanced/images/air-gapped-portable.png b/50-advanced/images/air-gapped-portable.png new file mode 100644 index 000000000..b809ed17a Binary files /dev/null and b/50-advanced/images/air-gapped-portable.png differ diff --git a/50-advanced/images/directory-structure.drawio b/50-advanced/images/directory-structure.drawio new file mode 100644 index 000000000..1ae684fe8 --- /dev/null +++ b/50-advanced/images/directory-structure.drawio @@ -0,0 +1 @@ 
+7Z3dk5s2EMD/Gj+eB0l8PsZ3l2SmaebayzTTvnQ4kG1yGFGQ7yN/fYUtsJFkm/MZUAuTh9hrWQjtT6tld+WboOvVy6fMT5e/khDHE2iELxN0M4EQAMNm/xWS163E9eBWsMiikDfaCe6jn5gLDS5dRyHOaw0pITGN0rowIEmCA1qT+VlGnuvN5iSuXzX1F1gS3Ad+LEu/RyFd8ruAzk7+GUeLZXllYHvbT1Z+2ZjfSb70Q/K8J0K3E3SdEUK3r1Yv1zguJq+cl+33Ph74tBpYhhPa5AvBX/iPr9+u5vZv6Vf0w0xurF++XPFenvx4zW+YD5a+ljOQP2IaFLdiTNAsJVFCcXb7xC5azCxgsurGigahny9xyN8s6SouG9GMPOJrEpOMSRKSsN5nsf+A4zuSRzQiCRMHuOicffCEMxoxJXwRGjwQSslqr8GHOFoUH1CSMilZ0zhK2FVKFopB+LxJ1Tkbb1rc2+plUTA7JfN5FOApIyjAKc2nBSObhvMojssRTyCChgNnLpPLM19OIxsVftkTcU18wmSFafbKmpSf2pwKviygyd8/7yADBpct9wBzuMznXC+qrneqZy+49t9AAhxJ6IUEhOokoN5JQCMJjUnwvOvrjx/bIcFx+ybBHEnQggQb9k2C1SsJwlRbRvFPW0LSjKRF94XbegkaTLdOgwlkGiwFDKbXEgz2CIM+MDg9w+CMMGgDg4V6hsEdYdAGBscA/cLgjTBoA4Pb9zZR+ix7NOT+Ko3xVZCa4fTVZ9oU6WD3SuuaLuc1xnOq0MgqCsPiy7MM59FP/2HTUYXW5pas2cS6KXpaU6biTbSxEUJcdAHVWNZpdw4AhW5gW849kKOAXDdRMs/8QStHsW46Vo4cmOPKIUE6ZNWonJ2OVSNHykI899cxM+gDVoyriFt5nepFjlttl8w0Sp4GqhRHscs4XSoFNUgw4CT8UOTsdjOxpxUcLvA9b0syuiQLkvjx7U56eNpyss4CfIyYbTvqZwtMj7TjYBVDOaqEE15WKctw7NPoqZ5qVM08v8JdAdjkYBIJeYLytvfNvwX3UoNCR1X2qdz1LKGj7cRIHTFl+a97zfgCODjgKqtbxjXtWsqSvdj2uKOsmtN3gNcgn9EWeCeBgloBJeoHgTOBQgJQtml1AhRyuwCqQVpEe0uG9AIPCnoUt6FzwZNMYkvgubAL8BpkYfbAC2I/z6NgP3AC6iAW8jufUpwlGwk0UOWllNUusGVgvYbA2loBawtbr4POBNYRt0S3HWAtw6pdx0PG8XEJC+mt7U3P6mBBNEg+9GaJYUOwLa3A/s9ZYiQmxFAX4DVIdGgPnl4WteKj1OO54JliRy35nhJ4nVi8BkkV7cFzhgFeteW2DJ5ldQBeac21BM9sCJ6rFXim+JBsnwueAITtwWk36DnQ7gC9BkXqvaHX9PFFr+dtUY/uuY8vSHx8ebeXVwsyX4IeKNETkGQeLSSG5ESAGOAXwvlSlv9gpkDODCii/srsQEbWSVgVPRzk9A21pa5X0xhS1JaqyGstMWAezqINVUWOU7eylqwiu1MVyRHPKCnKdAjvfNSR5fWtIzk4WBbYDNncAWGngwpz11Zy2r+ls3vq3PyD6Ozzt8fP5Hf7z/HEnSZnKjyFUW3tTIWSBNkxkUjQKY10jGZN3NrWgpeumAhtKY3keZdNIyl1Jvtbfp7j4Xpbbn0n9xoW0be2Q4yn7vrZIcQwCjAUJKhqpsRqmYuRILt0RbH0QNepBSxBPYqSNrPLhdrvObjhLlTbEBYqaFjc2NpC7ffc00hCRYLqdzQ6JUHOz22cKyDxMAyjbSNxT1XESTr1rhQHkTYaklfsQDWkipB0q6F+QyTDPjjoiDgAhUXtFocGcZIRh65wgA0PkraHgxy+KB6KrjZGfJovB2rGxTpSAHoPdMtxjD1FNTwf979UlVCKC2DTUHR7unpbZXlrsehjMeaTsWi9DmeJJRa7aNKby3tssScxJdFSdQ8wLlxZpla
vJkXcx5g6yZ5etbSm5MRBoR6sKXuW1JNYpHap8wlidK/0O9tlT5M67nexp1dVowXFMO25p1JtyaVp6ViqxF7p5LbLnial3Mfs2en8r14JYK3he0dlo7p+QJOS7GNG6TRAUCuAbNEFgsaZAIkH4gBwbd0AGiMrGkVWgOJnUzqNrEA5svKc/83u+webtunPKJXgGMjzurQV2LKmOg2tQE1+geBdrkM5Zl0tP0BnPjPJlr+lZ6aDQz7o1EhfuHC9mbyhrKI8j5LFBNpxYSoeMvZqsflVpaLNUGvOREXYtiVZFFX1uBhCamBR2Nvd3xjZqnT3l1rQ7b8= \ No newline at end of file diff --git a/50-advanced/images/directory-structure.png b/50-advanced/images/directory-structure.png new file mode 100644 index 000000000..024b17017 Binary files /dev/null and b/50-advanced/images/directory-structure.png differ diff --git a/50-advanced/images/git-flow.png b/50-advanced/images/git-flow.png new file mode 100644 index 000000000..2ead3b759 Binary files /dev/null and b/50-advanced/images/git-flow.png differ diff --git a/50-advanced/images/gitops-pictures.pptx b/50-advanced/images/gitops-pictures.pptx new file mode 100644 index 000000000..0e2912e49 Binary files /dev/null and b/50-advanced/images/gitops-pictures.pptx differ diff --git a/50-advanced/images/governed-process-ca.png b/50-advanced/images/governed-process-ca.png new file mode 100644 index 000000000..6332f8ec7 Binary files /dev/null and b/50-advanced/images/governed-process-ca.png differ diff --git a/50-advanced/images/not-air-gapped.png b/50-advanced/images/not-air-gapped.png new file mode 100644 index 000000000..f37c2b395 Binary files /dev/null and b/50-advanced/images/not-air-gapped.png differ diff --git a/50-advanced/images/semi-air-gapped.png b/50-advanced/images/semi-air-gapped.png new file mode 100644 index 000000000..0100abc05 Binary files /dev/null and b/50-advanced/images/semi-air-gapped.png differ diff --git a/50-advanced/locations-to-whitelist/index.html b/50-advanced/locations-to-whitelist/index.html new file mode 100644 index 000000000..aadb4cc29 --- /dev/null +++ b/50-advanced/locations-to-whitelist/index.html @@ -0,0 +1 @@ + Locations to whitelist on bastion - Cloud Pak Deployer

Locations to whitelist on bastion🔗

When building or running the deployer in an environment with strict policies for internet access, you may have to specify the list of URLs that need to be accessed by the deployer.

Locations to whitelist when building the deployer image.🔗

Location Used for
registry.access.redhat.com Base image
icr.io olm-utils base image
cdn.redhat.com Installing operating system packages
cdn-ubi.redhat.com Installing operating system packages
rpm.releases.hashicorp.com Hashicorp Vault integration
dl.fedoraproject.org Extra Packages for Enterprise Linux (EPEL)
mirrors.fedoraproject.org EPEL mirror site
fedora.mirrorservice.org EPEL mirror site
pypi.org Python packages for deployer
galaxy.ansible.com Ansible Galaxy packages

Locations to whitelist when running the deployer for existing OpenShift.🔗

Location Used for
github.com Case files, Cloud Pak clients: cloudctl, cpd-cli, cpdctl
gcr.io Google Container Registry (GCR)
objects.githubusercontent.com Binary content for github.com
raw.githubusercontent.com Binary content for github.com
mirror.openshift.com OpenShift client
ocsp.digicert.com Certificate checking
subscription.rhsm.redhat.com OpenShift subscriptions
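The two tables above can be combined into a single allowlist file for a proxy or firewall; a minimal sketch (the file path and the DNS resolution check are illustrative, not part of the deployer):

```shell
# Write the whitelisted domains to a file (path is an example).
cat > /tmp/deployer-allowlist.txt << 'EOF'
registry.access.redhat.com
icr.io
cdn.redhat.com
cdn-ubi.redhat.com
rpm.releases.hashicorp.com
dl.fedoraproject.org
mirrors.fedoraproject.org
fedora.mirrorservice.org
pypi.org
galaxy.ansible.com
github.com
gcr.io
objects.githubusercontent.com
raw.githubusercontent.com
mirror.openshift.com
ocsp.digicert.com
subscription.rhsm.redhat.com
EOF

# Optionally check that each domain resolves from the bastion:
while read -r host; do
  getent hosts "$host" > /dev/null && echo "OK   $host" || echo "FAIL $host"
done < /tmp/deployer-allowlist.txt
```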
\ No newline at end of file diff --git a/50-advanced/private-registry-and-air-gapped/index.html b/50-advanced/private-registry-and-air-gapped/index.html new file mode 100644 index 000000000..36e925c15 --- /dev/null +++ b/50-advanced/private-registry-and-air-gapped/index.html @@ -0,0 +1,44 @@ + Private registry and air-gapped - Cloud Pak Deployer

Using a private registry🔗

Some environments, especially in situations where the OpenShift cannot directly connect to the internet, require a private registry for OpenShift to pull the Cloud Pak images from. The Cloud Pak Deployer can mirror images from the entitled registry to a private registry that you want to use for the Cloud Pak(s). Also, if infrastructure which holds the OpenShift cluster is fully disconnected from the internet, the Cloud Pak Deployer can build a registry which can be stored on a portable hard disk or pen drive and then shipped to the site.

Info

Note: In all cases, the deployer can work behind a proxy to access the internet. Go to Running behind a proxy for more information.

The instructions below are not limited to disconnected (air-gapped) OpenShift clusters; they apply generically to any deployment that uses a private registry.

There are three use cases for mirroring images to a private registry and using it to install the Cloud Pak(s):

  • Use case 1 - Mirror images and install using a bastion server
  • Use case 2 - Mirror images with an internet-connected server, install using a bastion
  • Use case 3 - Mirror images using a portable image registry

Use cases 1 and 3 are also outlined in the Cloud Pak for Data installation documentation: https://www.ibm.com/docs/en/cloud-paks/cp-data/4.5.x?topic=tasks-mirroring-images-your-private-container-registry

For specifying a private registry in the Cloud Pak Deployer configuration, please see Private registry. Example of specifying a private registry with a self-signed certificate in the configuration:

image_registry:
+- name: cpd453
+  registry_host_name: registry.coc.ibm.com
+  registry_port: 5000
+  registry_insecure: True
+

The cp4d instance must reference the image_registry object using the image_registry_name:

cp4d:
+- project: zen-45
+  openshift_cluster_name: {{ env_id }}
+  cp4d_version: 4.5.3
+  openshift_storage_name: ocs-storage
+  image_registry_name: cpd453
+

Info

The deployer only supports using a private registry for the Cloud Pak images, not for OpenShift itself. Air-gapped installation of OpenShift is currently not in scope for the deployer.

Warning

The registry_host_name you specify in the image_registry definition must also be available for DNS lookup within OpenShift. If the registry runs on a server that is not registered in the DNS, use its IP address instead of a host name.
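If the registry host is not resolvable inside OpenShift, the same definition can reference the registry by IP address; a sketch based on the earlier example (the IP address is a placeholder):

```yaml
image_registry:
- name: cpd453
  registry_host_name: 10.0.0.12   # IP address of the registry server
  registry_port: 5000
  registry_insecure: True
```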

The 3 main directories needed for both types of air-gapped installation are:

  • Cloud Pak Deployer directory: cloud-pak-deployer
  • Configuration directory: The directory that holds all the Cloud Pak Deployer configuration
  • Status directory: The directory that will hold all downloads, vault secrets and the portable registry when applicable (use case 3)

For use cases 2 and 3, where the directories must be shipped to the air-gapped cluster, the Cloud Pak Deployer and Configuration directories are stored in the Status directory for simplicity.
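As a sketch, the environment variables that tie these directories together (paths are placeholders):

```shell
# Configuration and status directories (example paths).
export CONFIG_DIR=$HOME/cpd-config
export STATUS_DIR=$HOME/cpd-status
# The deployer scripts themselves live in the cloud-pak-deployer directory,
# which for use cases 2 and 3 is archived inside $STATUS_DIR for shipping.
```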

Use case 1 - Mirror images and install using a bastion server🔗

This is effectively the "not-air-gapped" scenario, where the following conditions apply:

  • The private registry is hosted inside the private cloud
  • The bastion server can connect to the internet and mirror images to the private image registry
  • The bastion server is optionally connected to the internet via a proxy server. See Running behind a proxy for more details
  • The bastion server can connect to OpenShift

Not-air-gapped

On the bastion server🔗

The bastion server is connected to the internet and OpenShift cluster.

  • If there are restrictions regarding the internet sites that can be reached, ensure that the website domains the deployer needs are whitelisted. For a list of domains, check locations to whitelist
  • If a proxy server is configured for the bastion node, check the settings (http_proxy, https_proxy, no_proxy environment variables)
  • Build the Cloud Pak Deployer image using ./cp-deploy.sh build
  • Create or update the directory with the configuration; make sure all your Cloud Paks and cartridges are specified as well as an image_registry entry to identify the private registry
  • Export the CONFIG_DIR and STATUS_DIR environment variables to respectively point to the configuration directory and the status directory
  • Export the CP_ENTITLEMENT_KEY environment variable with your Cloud Pak entitlement key
  • Create a vault secret image-registry-<name> holding the connection credentials for the private registry specified in the configuration (image_registry). For example, for a registry definition named cpd453, create secret image-registry-cpd453.

    ./cp-deploy.sh vault set \
    +    -vs image-registry-cpd453 \
    +    -vsv "admin:very_s3cret"
    +

  • Set the environment variable for the oc login command. For example:

    export CPD_OC_LOGIN="oc login api.pluto-01.coc.ibm.com:6443 -u kubeadmin -p BmxQ5-KjBFx-FgztG-gpTF3 --insecure-skip-tls-verify"
    +

  • Run the ./cp-deploy.sh env apply command to start deployment of the Cloud Pak to the OpenShift cluster. For example:

    ./cp-deploy.sh env apply
    +
    The existence of the image_registry definition and its reference in the cp4d definition instruct the deployer to mirror images to the private registry and to configure the OpenShift cluster to pull images from the private registry. If you have already mirrored the Cloud Pak images, you can add the --skip-mirror-images parameter to speed up the deployment process.
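The vault secret created in the steps above holds the registry credentials as a single user:password string; a minimal sketch of composing that value (user name and password are examples):

```shell
# Compose the "user:password" value for the image-registry-<name> secret.
REGISTRY_USER="admin"
REGISTRY_PASSWORD="very_s3cret"
REGISTRY_CREDS="${REGISTRY_USER}:${REGISTRY_PASSWORD}"

# The secret name must match the image_registry definition name, e.g.:
# ./cp-deploy.sh vault set -vs image-registry-cpd453 -vsv "$REGISTRY_CREDS"
echo "$REGISTRY_CREDS"
```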

Use case 2 - Mirror images with an internet-connected server, install using a bastion🔗

This use case is sometimes referred to as "semi-air-gapped", where the following conditions apply:

  • The private registry is hosted outside of the private cloud that hosts the bastion server and OpenShift
  • An internet-connected server external to the private cloud can reach the entitled registry and the private registry
  • The internet-connected server is optionally connected to the internet via a proxy server. See Running behind a proxy for more details
  • The bastion server cannot connect to the internet
  • The bastion server can connect to OpenShift

Semi-air-gapped

Warning

Please note that in this case the Cloud Pak Deployer expects an OpenShift cluster to be available already and will only work with an existing-ocp configuration. The bastion server does not have access to the internet and can therefore not instantiate an OpenShift cluster.

On the internet-connected server🔗

  • If there are restrictions regarding the internet sites that can be reached, ensure that the website domains the deployer needs are whitelisted. For a list of domains, check locations to whitelist
  • If a proxy server is configured for the internet-connected server, check the settings (http_proxy, https_proxy, no_proxy environment variables)
  • Build the Cloud Pak Deployer image using ./cp-deploy.sh build
  • Create or update the directory with the configuration; make sure all your Cloud Paks and cartridges are specified as well as an image_registry entry to identify the private registry
  • Export the CONFIG_DIR and STATUS_DIR environment variables to respectively point to the configuration directory and the status directory
  • Export the CP_ENTITLEMENT_KEY environment variable with your Cloud Pak entitlement key
  • Create a vault secret image-registry-<name> holding the connection credentials for the private registry specified in the configuration (image_registry). For example, for a registry definition named cpd453, create secret image-registry-cpd453.
    ./cp-deploy.sh vault set \
    +    -vs image-registry-cpd453 \
    +    -vsv "admin:very_s3cret"
    +
    If the status directory does not exist, it is created at this point.

Diagram step 1🔗

  • Run the deployer using the ./cp-deploy.sh env download --skip-portable-registry command. For example:

    ./cp-deploy.sh env download \
    +    --skip-portable-registry
    +
    This will download all clients to the status directory and then mirror images from the entitled registry to the private registry. If mirroring fails, fix the issue and just run the env download again.

  • Before saving the status directory, you can optionally remove the entitlement key from the vault:

    ./cp-deploy.sh vault delete \
    +    -vs ibm_cp_entitlement_key
    +

Diagram step 2🔗

When the download has finished successfully, the status directory holds the deployer scripts, the configuration directory and the deployer container image.

Diagram step 3🔗

Ship the status directory from the internet-connected server to the bastion server.

You can use tar with gzip mode or any other compression technique. The total size of the directories should be relatively small, typically < 5 GB.
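A minimal sketch of packaging the status directory for shipping (paths are examples; the demo creates the directory so the commands run end-to-end):

```shell
# Package the status directory into a single gzipped tar archive.
STATUS_DIR=${STATUS_DIR:-/tmp/cpd-status-demo}
mkdir -p "$STATUS_DIR"   # demo only; your status directory already exists
tar czf /tmp/cpd-status.tar.gz -C "$(dirname "$STATUS_DIR")" "$(basename "$STATUS_DIR")"
ls -lh /tmp/cpd-status.tar.gz
```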

On the bastion server🔗

The bastion server is not connected to the internet but is connected to the private registry and the OpenShift cluster.

Diagram step 4🔗

We're using the instructions in Run on existing OpenShift, adding the CPD_AIRGAP environment variable and the --skip-mirror-images flag, to start the deployer:

  • Restore the status directory onto the bastion server
  • Export the STATUS_DIR environment variable to point to the status directory
  • Untar the cloud-pak-deployer scripts, for example:

    tar xvzf $STATUS_DIR/cloud-pak-deployer.tar.gz
    +

  • Set the CPD_AIRGAP environment variable to true

    export CPD_AIRGAP=true
    +

  • Set the environment variable for the oc login command. For example:

    export CPD_OC_LOGIN="oc login api.pluto-01.coc.ibm.com:6443 -u kubeadmin -p BmxQ5-KjBFx-FgztG-gpTF3 --insecure-skip-tls-verify"
    +

  • Run the cp-deploy.sh env apply --skip-mirror-images command to start deployment of the Cloud Pak to the OpenShift cluster. For example:

    cd cloud-pak-deployer
    +./cp-deploy.sh env apply \
    +    --skip-mirror-images
    +

The CPD_AIRGAP environment variable tells the deployer it will not download anything from the internet; --skip-mirror-images indicates that images are already available in the private registry that is included in the configuration (image_registry).

Use case 3 - Mirror images using a portable image registry🔗

This use case is usually referred to as "air-gapped", where the following conditions apply:

  • The private registry is hosted in the private cloud that hosts the bastion server and OpenShift
  • The bastion server cannot connect to the internet
  • The bastion server can connect to the private registry and the OpenShift cluster
  • The internet-connected server cannot connect to the private cloud
  • The internet-connected server is optionally connected to the internet via a proxy server. See Running behind a proxy for more details
  • You need a portable registry to fill the private registry with the Cloud Pak images

Air-gapped using portable registry

Warning

Please note that in this case the Cloud Pak Deployer expects an OpenShift cluster to be available already and will only work with an existing-ocp configuration. The bastion server does not have access to the internet and can therefore not instantiate an OpenShift cluster.

On the internet-connected server🔗

  • If there are restrictions regarding the internet sites that can be reached, ensure that the website domains the deployer needs are whitelisted. For a list of domains, check locations to whitelist
  • If a proxy server is configured for the bastion node, check the settings (http_proxy, https_proxy, no_proxy environment variables)
  • Build the Cloud Pak Deployer image using cp-deploy.sh build
  • Create or update the directory with the configuration, making sure all your Cloud Paks and cartridges are specified
  • Export the CONFIG_DIR and STATUS_DIR environment variables to respectively point to the configuration directory and the status directory
  • Export the CP_ENTITLEMENT_KEY environment variable with your Cloud Pak entitlement key

Diagram step 1🔗

  • Run the deployer using the ./cp-deploy.sh env download command. For example:

    ./cp-deploy.sh env download
    +
    This will download all clients, start the portable registry and then mirror images from the entitled registry to the portable registry. The portable registry data is kept in the status directory. If mirroring fails, fix the issue and just run the env download again.

  • Before saving the status directory, you can optionally remove the entitlement key from the vault:

    ./cp-deploy.sh vault delete \
    +    -vs ibm_cp_entitlement_key
    +

See the download of watsonx.ai in action: https://ibm.box.com/v/cpd-air-gapped-download

Diagram step 2🔗

When the download has finished successfully, the status directory holds the deployer scripts, the configuration directory, the deployer container image and the portable registry.

Diagram step 3🔗

Ship the status directory from the internet-connected server to the bastion server.

You can use tar with gzip mode or any other compression technique. The status directory now holds all assets required for the air-gapped installation and its size can be substantial (100+ GB). You may want to use multi-volume tar files if you are using network transfer.
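When network transfer limits the size of a single file, split/cat offers a simple multi-volume approach; a sketch (directory, chunk size and paths are examples, with a tiny demo directory so the commands run end-to-end):

```shell
# Split the status archive into fixed-size chunks for transfer.
STATUS_DIR=${STATUS_DIR:-/tmp/cpd-status-demo}
mkdir -p "$STATUS_DIR" && echo demo > "$STATUS_DIR/marker.txt"
tar czf - -C "$(dirname "$STATUS_DIR")" "$(basename "$STATUS_DIR")" \
  | split -b 1G - /tmp/cpd-status.tar.gz.part-

# On the bastion server, reassemble and extract:
mkdir -p /tmp/restore-demo
cat /tmp/cpd-status.tar.gz.part-* | tar xzf - -C /tmp/restore-demo
```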

On the bastion server🔗

The bastion server is not connected to the internet but is connected to the private registry and OpenShift cluster.

Diagram step 4🔗

See the air-gapped installation of Cloud Pak for Data in action: https://ibm.box.com/v/cpd-air-gapped-install. For the demonstration video, the download of the previous step has first been re-run to only download the Cloud Pak for Data control plane to avoid having to ship and upload ~700 GB.

We're using the instructions in Run on existing OpenShift, adding the CPD_AIRGAP environment variable.

  • Restore the status directory onto the bastion server. Make sure the volume to which you restore has enough space to hold the entire status directory, which includes the portable registry.
  • Export the STATUS_DIR environment variable to point to the status directory
  • Untar the cloud-pak-deployer scripts, for example:

    tar xvzf $STATUS_DIR/cloud-pak-deployer.tar.gz
    +cd cloud-pak-deployer
    +

  • Set the CPD_AIRGAP environment variable to true

    export CPD_AIRGAP=true
    +

  • Set the environment variable for the oc login command. For example:

    export CPD_OC_LOGIN="oc login api.pluto-01.coc.ibm.com:6443 -u kubeadmin -p BmxQ5-KjBFx-FgztG-gpTF3 --insecure-skip-tls-verify"
    +

  • Create a vault secret image-registry-<name> holding the connection credentials for the private registry specified in the configuration (image_registry). For example, for a registry definition named cpd453, create secret image-registry-cpd453.

    ./cp-deploy.sh vault set \
    +    -vs image-registry-cpd453 \
    +    -vsv "admin:very_s3cret"
    +

  • Run the ./cp-deploy.sh env apply command to start deployment of the Cloud Pak to the OpenShift cluster. For example:

    ./cp-deploy.sh env apply
    +
    The CPD_AIRGAP environment variable tells the deployer it will not download anything from the internet. As a first action, the deployer mirrors images from the portable registry to the private registry included in the configuration (image_registry).

Running behind a proxy🔗

If the Cloud Pak Deployer is run from a server on which the HTTP proxy environment variables are set up, i.e. the "proxy" environment variables are configured on the server and in the terminal session, the deployer also applies these settings in the deployer container.

The following environment variables are automatically applied to the deployer container if set up in the session running the cp-deploy.sh command:

  • http_proxy
  • https_proxy
  • no_proxy
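For example (proxy host, port and exclusion list are placeholders):

```shell
# Proxy settings inherited by the deployer container.
export http_proxy="http://proxy.example.com:3128"
export https_proxy="http://proxy.example.com:3128"
export no_proxy="localhost,127.0.0.1,.example.com"
```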

If you do not want the deployer to use the proxy environment variables, you must remove them before running the cp-deploy.sh command:

unset http_proxy
+unset https_proxy
+unset no_proxy
+

Special settings for debug and DaemonSet images in air-gapped mode🔗

Specifically when running the deployer on IBM Cloud ROKS, certain OpenShift settings must be applied using DaemonSets in the kube-system namespace. Additionally, the deployer uses the oc debug node commands to retrieve kubelet and crio configuration files from the compute nodes.

The default container images used by the DaemonSets and oc debug node commands are based on Red Hat's Universal Base Image and will be pulled from Red Hat registries. This is typically not possible in air-gapped installations, hence different images must be used. It is your responsibility to copy suitable (preferably UBI) images to an image registry that is connected to the OpenShift cluster. Also, if a pull secret is needed to pull the image(s) from the registry, you must create the associated secret in the kube-system OpenShift project.

To configure alternative container images for the deployer to use, set the following properties in the .inv file kept in your configuration's inventory directory, or specify them as additional command line parameters for the cp-deploy.sh command.

If you do not set these values, the deployer assumes that the default images are used for DaemonSet and oc debug node.

Property Description Example
cpd_oc_debug_image Container image to be used for the oc debug command. registry.redhat.io/rhel8/support-tools:latest
cpd_ds_image Container image to be used for the DaemonSets that configure Kubelet, etc. registry.access.redhat.com/ubi8/ubi:latest
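A sketch of the corresponding entries in the inventory (.inv) file, using the example images from the table (the key=value form is an assumption):

```
cpd_oc_debug_image=registry.redhat.io/rhel8/support-tools:latest
cpd_ds_image=registry.access.redhat.com/ubi8/ubi:latest
```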
\ No newline at end of file diff --git a/50-advanced/run-on-openshift/build-image-and-run-deployer-on-openshift/index.html b/50-advanced/run-on-openshift/build-image-and-run-deployer-on-openshift/index.html new file mode 100644 index 000000000..2d130522a --- /dev/null +++ b/50-advanced/run-on-openshift/build-image-and-run-deployer-on-openshift/index.html @@ -0,0 +1,339 @@ + Build image and run deployer on OpenShift - Cloud Pak Deployer

Build image and run deployer on OpenShift🔗

Create configuration🔗

export CONFIG_DIR=$HOME/cpd-config && mkdir -p $CONFIG_DIR/config
+
+cat << EOF > $CONFIG_DIR/config/cpd-config.yaml
+---
+global_config:
+  environment_name: demo
+  cloud_platform: existing-ocp
+  confirm_destroy: False
+
+openshift:
+- name: cpd-demo
+  ocp_version: "4.10"
+  cluster_name: cpd-demo
+  domain_name: example.com
+  openshift_storage:
+  - storage_name: nfs-storage
+    storage_type: nfs
+
+cp4d:
+- project: cpd-instance
+  openshift_cluster_name: cpd-demo
+  cp4d_version: 4.6.0
+  sequential_install: True
+  accept_licenses: True
+  cartridges:
+  - name: cp-foundation
+    license_service:
+      state: disabled
+      threads_per_core: 2
+  - name: lite
+
+#
+# All tested cartridges. To install, change the "state" property to "installed". To uninstall, change the state
+# to "removed" or comment out the entire cartridge. Make sure that the "-" and properties are aligned with the lite
+# cartridge; the "-" is at position 3 and the property starts at position 5.
+#
+
+  - name: analyticsengine 
+    size: small 
+    state: removed
+
+  - name: bigsql
+    state: removed
+
+  - name: ca
+    size: small
+    instances:
+    - name: ca-instance
+      metastore_ref: ca-metastore
+    state: removed
+
+  - name: cde
+    state: removed
+
+  - name: datagate
+    state: removed
+
+  - name: datastage-ent-plus
+    state: removed
+    # instances:
+    #   - name: ds-instance
+    #     # Optional settings
+    #     description: "datastage ds-instance"
+    #     size: medium
+    #     storage_class: efs-nfs-client
+    #     storage_size_gb: 60
+    #     # Custom Scale options
+    #     scale_px_runtime:
+    #       replicas: 2
+    #       cpu_request: 500m
+    #       cpu_limit: 2
+    #       memory_request: 2Gi
+    #       memory_limit: 4Gi
+    #     scale_px_compute:
+    #       replicas: 2
+    #       cpu_request: 1
+    #       cpu_limit: 3
+    #       memory_request: 4Gi
+    #       memory_limit: 12Gi    
+
+  - name: db2
+    size: small
+    instances:
+    - name: ca-metastore
+      metadata_size_gb: 20
+      data_size_gb: 20
+      backup_size_gb: 20  
+      transactionlog_size_gb: 20
+    state: removed
+
+  - name: db2wh
+    state: removed
+
+  - name: dmc
+    state: removed
+
+  - name: dods
+    size: small
+    state: removed
+
+  - name: dp
+    size: small
+    state: removed
+
+  - name: dv
+    size: small 
+    instances:
+    - name: data-virtualization
+    state: removed
+
+  - name: hadoop
+    size: small
+    state: removed
+
+  - name: mdm
+    size: small
+    wkc_enabled: true
+    state: removed
+
+  - name: openpages
+    state: removed
+
+  - name: planning-analytics
+    state: removed
+
+  - name: rstudio
+    size: small
+    state: removed
+
+  - name: spss
+    state: removed
+
+  - name: voice-gateway
+    replicas: 1
+    state: removed
+
+  - name: watson-assistant
+    size: small
+    state: removed
+
+  - name: watson-discovery
+    state: removed
+
+  - name: watson-ks
+    size: small
+    state: removed
+
+  - name: watson-openscale
+    size: small
+    state: removed
+
+  - name: watson-speech
+    stt_size: xsmall
+    tts_size: xsmall
+    state: removed
+
+  - name: wkc
+    size: small
+    state: removed
+
+  - name: wml
+    size: small
+    state: installed
+
+  - name: wml-accelerator
+    replicas: 1
+    size: small
+    state: removed
+
+  - name: wsl
+    state: installed
+
+EOF
+

Log in to the OpenShift cluster🔗

Log in as a cluster administrator to be able to run the deployer with the correct permissions.

Prepare the deployer project🔗

oc new-project cloud-pak-deployer 
+
+oc project cloud-pak-deployer
+oc create serviceaccount cloud-pak-deployer-sa
+oc adm policy add-scc-to-user privileged -z cloud-pak-deployer-sa
+oc adm policy add-cluster-role-to-user cluster-admin -z cloud-pak-deployer-sa
+
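
As an optional sanity check, you can verify that the service account exists and that the cluster-admin binding took effect (output wording may vary between oc versions):

```shell
oc get sa cloud-pak-deployer-sa -n cloud-pak-deployer

# Should print "yes" if the cluster-admin role was granted
oc auth can-i '*' '*' --all-namespaces \
  --as=system:serviceaccount:cloud-pak-deployer:cloud-pak-deployer-sa
```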

Build deployer image and push to the internal registry🔗

Building the deployer image typically takes ~5 minutes. Only do this if the image has not been built yet.

cat << EOF | oc apply -f -
+apiVersion: image.openshift.io/v1
+kind: ImageStream
+metadata:
+  name: cloud-pak-deployer
+spec:
+  lookupPolicy:
+    local: true
+EOF
+
+cat << EOF | oc create -f -
+kind: Build
+apiVersion: build.openshift.io/v1
+metadata:
+  generateName: cloud-pak-deployer-bc-
+  namespace: cloud-pak-deployer
+spec:
+  serviceAccount: builder
+  source:
+    type: Git
+    git:
+      uri: 'https://github.com/IBM/cloud-pak-deployer'
+      ref: wizard
+  strategy:
+    type: Docker
+    dockerStrategy:
+      buildArgs:
+      - name: CPD_OLM_UTILS_V1_IMAGE
+        value: icr.io/cpopen/cpd/olm-utils:latest
+      - name: CPD_OLM_UTILS_V2_IMAGE
+        value: icr.io/cpopen/cpd/olm-utils-v2:latest
+  output:
+    to:
+      kind: ImageStreamTag
+      name: 'cloud-pak-deployer:latest'
+  triggeredBy:
+    - message: Manually triggered
+EOF
+

Now, wait until the deployer image has been built.

oc get build -n cloud-pak-deployer -w
+
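
If you prefer a non-interactive wait, the build phase can be polled in a small script. This is a sketch, not part of the deployer itself; the build name argument and the namespace are assumptions based on the manifests above:

```shell
#!/bin/sh
# Poll an OpenShift build until it reaches a terminal phase.
wait_for_build() {
  build=$1
  while true; do
    phase=$(oc get "build/$build" -n cloud-pak-deployer -o jsonpath='{.status.phase}')
    case "$phase" in
      Complete) echo "Build $build complete"; return 0 ;;
      Failed|Error|Cancelled) echo "Build $build ended: $phase" >&2; return 1 ;;
      *) sleep 10 ;;
    esac
  done
}

# Hypothetical usage with the most recently created build:
# wait_for_build "$(oc get build -n cloud-pak-deployer -o name | tail -1 | cut -d/ -f2)"
```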

Set configuration🔗

oc create cm -n cloud-pak-deployer cloud-pak-deployer-config
+oc set data -n cloud-pak-deployer cm/cloud-pak-deployer-config \
+  --from-file=$CONFIG_DIR/config
+
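
To double-check what the deployer will read, you can dump the ConfigMap contents back out; oc extract writes each key as a file, or to stdout with --to=-:

```shell
# Print the configuration files stored in the ConfigMap
oc extract cm/cloud-pak-deployer-config -n cloud-pak-deployer --to=-
```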

Start the deployer job🔗

export CP_ENTITLEMENT_KEY=your_entitlement_key
+
+cat << EOF | oc apply -f -
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: cloud-pak-deployer-status
+  namespace: cloud-pak-deployer
+spec:
+  accessModes:
+  - ReadWriteMany
+  resources:
+    requests:
+      storage: 10Gi
+EOF
+
+cat << EOF | oc apply -f -
+apiVersion: batch/v1
+kind: Job
+metadata:
+  labels:
+    app: cloud-pak-deployer
+  name: cloud-pak-deployer
+  namespace: cloud-pak-deployer
+spec:
+  parallelism: 1
+  completions: 1
+  backoffLimit: 0
+  template:
+    metadata:
+      name: cloud-pak-deployer
+      labels:
+        app: cloud-pak-deployer
+    spec:
+      containers:
+      - name: cloud-pak-deployer
+        image: cloud-pak-deployer:latest
+        imagePullPolicy: Always
+        terminationMessagePath: /dev/termination-log
+        terminationMessagePolicy: File
+        env:
+        - name: CONFIG_DIR
+          value: /Data/cpd-config
+        - name: STATUS_DIR
+          value: /Data/cpd-status
+        - name: CP_ENTITLEMENT_KEY
+          value: ${CP_ENTITLEMENT_KEY}
+        volumeMounts:
+        - name: config-volume
+          mountPath: /Data/cpd-config/config
+        - name: status-volume
+          mountPath: /Data/cpd-status
+        command: ["/bin/sh","-xc"]
+        args: 
+          - /cloud-pak-deployer/cp-deploy.sh env apply -v
+      restartPolicy: Never
+      securityContext:
+        runAsUser: 0
+      serviceAccountName: cloud-pak-deployer-sa
+      volumes:
+      - name: config-volume
+        configMap:
+          name: cloud-pak-deployer-config
+      - name: status-volume
+        persistentVolumeClaim:
+          claimName: cloud-pak-deployer-status        
+EOF
+
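
Before following the logs, you can confirm that the job and its pod have started; the label selector matches the app label in the job manifest above:

```shell
oc get job,pod -n cloud-pak-deployer -l app=cloud-pak-deployer
```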

Optional: start debug job🔗

The debug job is useful for accessing the deployer's status directory, for example when the deployer job has failed.

cat << EOF | oc apply -f -
+apiVersion: batch/v1
+kind: Job
+metadata:
+  labels:
+    app: cloud-pak-deployer-debug
+  name: cloud-pak-deployer-debug
+  namespace: cloud-pak-deployer
+spec:
+  parallelism: 1
+  completions: 1
+  backoffLimit: 0
+  template:
+    metadata:
+      name: cloud-pak-deployer-debug
+      labels:
+        app: cloud-pak-deployer-debug
+    spec:
+      containers:
+      - name: cloud-pak-deployer-debug
+        image: cloud-pak-deployer:latest
+        imagePullPolicy: Always
+        terminationMessagePath: /dev/termination-log
+        terminationMessagePolicy: File
+        env:
+        - name: CONFIG_DIR
+          value: /Data/cpd-config
+        - name: STATUS_DIR
+          value: /Data/cpd-status
+        volumeMounts:
+        - name: config-volume
+          mountPath: /Data/cpd-config/config
+        - name: status-volume
+          mountPath: /Data/cpd-status
+        command: ["/bin/sh","-xc"]
+        args: 
+          - sleep infinity
+      restartPolicy: Never
+      securityContext:
+        runAsUser: 0
+      serviceAccountName: cloud-pak-deployer-sa
+      volumes:
+      - name: config-volume
+        configMap:
+          name: cloud-pak-deployer-config
+      - name: status-volume
+        persistentVolumeClaim:
+          claimName: cloud-pak-deployer-status        
+EOF
+

Follow the logs of the deployment🔗

oc logs -f -n cloud-pak-deployer job/cloud-pak-deployer
+

In some cases, especially if the OpenShift cluster is remote from where the oc command is running, the oc logs -f command may terminate abruptly.
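
A simple workaround, sketched below and not part of the deployer itself, is to re-attach to the logs in a loop until the job reports completion:

```shell
#!/bin/sh
# Re-attach to the job logs until the job has a Complete condition.
follow_job_logs() {
  ns=$1; job=$2
  until oc get "job/$job" -n "$ns" \
      -o jsonpath='{.status.conditions[?(@.type=="Complete")].status}' \
      | grep -q True; do
    oc logs -f -n "$ns" "job/$job" || true
    sleep 5
  done
  echo "Job $job complete"
}

# Hypothetical usage:
# follow_job_logs cloud-pak-deployer cloud-pak-deployer
```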

\ No newline at end of file diff --git a/50-advanced/run-on-openshift/run-deployer-on-openshift-using-console/index.html b/50-advanced/run-on-openshift/run-deployer-on-openshift-using-console/index.html new file mode 100644 index 000000000..4aca6ab2c --- /dev/null +++ b/50-advanced/run-on-openshift/run-deployer-on-openshift-using-console/index.html @@ -0,0 +1,379 @@ + Run deployer on OpenShift using Console - Cloud Pak Deployer

Running deployer on OpenShift using console🔗

See the deployer in action deploying IBM watsonx.ai on an existing OpenShift cluster in this video: https://ibm.box.com/v/cpd-wxai-existing-ocp

Log in to the OpenShift cluster🔗

Log in as a cluster administrator so that the deployer runs with the correct permissions.

Prepare the deployer project and the storage🔗

  • Go to the OpenShift console
  • Click the "+" sign at the top of the page
  • Paste the following block (exactly) into the window
    ---
    +apiVersion: v1
    +kind: Namespace
    +metadata:
    +  creationTimestamp: null
    +  name: cloud-pak-deployer
    +---
    +apiVersion: v1
    +kind: ServiceAccount
    +metadata:
    +  name: cloud-pak-deployer-sa
    +  namespace: cloud-pak-deployer
    +---
    +apiVersion: rbac.authorization.k8s.io/v1
    +kind: RoleBinding
    +metadata:
    +  name: system:openshift:scc:privileged
    +  namespace: cloud-pak-deployer
    +roleRef:
    +  apiGroup: rbac.authorization.k8s.io
    +  kind: ClusterRole
    +  name: system:openshift:scc:privileged
    +subjects:
    +- kind: ServiceAccount
    +  name: cloud-pak-deployer-sa
    +  namespace: cloud-pak-deployer
    +---
    +apiVersion: rbac.authorization.k8s.io/v1
    +kind: ClusterRoleBinding
    +metadata:
    +  name: cloud-pak-deployer-cluster-admin
    +roleRef:
    +  apiGroup: rbac.authorization.k8s.io
    +  kind: ClusterRole
    +  name: cluster-admin
    +subjects:
    +- kind: ServiceAccount
    +  name: cloud-pak-deployer-sa
    +  namespace: cloud-pak-deployer
    +

Set the entitlement key🔗

  • Update the secret below with your Cloud Pak entitlement key. Make sure the key is indented exactly as below.
  • Go to the OpenShift console
  • Click the "+" sign at the top of the page
  • Paste the following block, adjusting where needed
    ---
    +apiVersion: v1
    +kind: Secret
    +metadata:
    +  name: cloud-pak-entitlement-key
    +  namespace: cloud-pak-deployer
    +type: Opaque
    +stringData:
    +  cp-entitlement-key: |
    +    YOUR_ENTITLEMENT_KEY
    +
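
You can verify that the key was stored as expected by decoding it back; this prints only the first characters to avoid leaking the full key, and uses bracket notation because the key name contains a hyphen:

```shell
oc get secret cloud-pak-entitlement-key -n cloud-pak-deployer \
  -o jsonpath="{.data['cp-entitlement-key']}" | base64 -d | cut -c1-8
```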

Configure the Cloud Paks and service to be deployed🔗

  • Update the configuration below to match what you want to deploy; do not change the indentation
  • Go to the OpenShift console
  • Click the "+" sign at the top of the page
  • Paste the following block (exactly) into the window
    ---
    +apiVersion: v1
    +kind: ConfigMap
    +metadata:
    +  name: cloud-pak-deployer-config
    +  namespace: cloud-pak-deployer
    +data:
    +  cpd-config.yaml: |
    +    global_config:
    +      environment_name: demo
    +      cloud_platform: existing-ocp
    +      confirm_destroy: False
    +
    +    openshift:
    +    - name: cpd-demo
    +      ocp_version: "4.12"
    +      cluster_name: cpd-demo
    +      domain_name: example.com
    +      mcg:
    +        install: False
    +        storage_type: storage-class
    +        storage_class: managed-nfs-storage
    +      gpu:
    +        install: False
    +      openshift_storage:
    +      - storage_name: auto-storage
    +        storage_type: auto
    +
    +    cp4d:
    +    - project: cpd
    +      openshift_cluster_name: cpd-demo
    +      cp4d_version: 4.8.1
    +      sequential_install: False
    +      accept_licenses: True
    +      cartridges:
    +      - name: cp-foundation
    +        license_service:
    +          state: disabled
    +          threads_per_core: 2
    +      
    +      - name: lite
    +
    +      - name: scheduler 
    +        state: removed
    +        
    +      - name: analyticsengine 
    +        description: Analytics Engine Powered by Apache Spark 
    +        size: small 
    +        state: removed
    +
    +      - name: bigsql
    +        description: Db2 Big SQL
    +        state: removed
    +
    +      - name: ca
    +        description: Cognos Analytics
    +        size: small
    +        instances:
    +        - name: ca-instance
    +          metastore_ref: ca-metastore
    +        state: removed
    +
    +      - name: dashboard
    +        description: Cognos Dashboards
    +        state: removed
    +
    +      - name: datagate
    +        description: Db2 Data Gate
    +        state: removed
    +
    +      - name: datastage-ent
    +        description: DataStage Enterprise
    +        state: removed
    +
    +      - name: datastage-ent-plus
    +        description: DataStage Enterprise Plus
    +        state: removed
    +        # instances:
    +        #   - name: ds-instance
    +        #     # Optional settings
    +        #     description: "datastage ds-instance"
    +        #     size: medium
    +        #     storage_class: efs-nfs-client
    +        #     storage_size_gb: 60
    +        #     # Custom Scale options
    +        #     scale_px_runtime:
    +        #       replicas: 2
    +        #       cpu_request: 500m
    +        #       cpu_limit: 2
    +        #       memory_request: 2Gi
    +        #       memory_limit: 4Gi
    +        #     scale_px_compute:
    +        #       replicas: 2
    +        #       cpu_request: 1
    +        #       cpu_limit: 3
    +        #       memory_request: 4Gi
    +        #       memory_limit: 12Gi    
    +
    +      - name: db2
    +        description: Db2 OLTP
    +        size: small
    +        instances:
    +        - name: ca-metastore
    +          metadata_size_gb: 20
    +          data_size_gb: 20
    +          backup_size_gb: 20  
    +          transactionlog_size_gb: 20
    +        state: removed
    +
    +      - name: db2wh
    +        description: Db2 Warehouse
    +        state: removed
    +
    +      - name: dmc
    +        description: Db2 Data Management Console
    +        state: removed
    +
    +      - name: dods
    +        description: Decision Optimization
    +        size: small
    +        state: removed
    +
    +      - name: dp
    +        description: Data Privacy
    +        size: small
    +        state: removed
    +
    +      - name: dpra
    +        description: Data Privacy Risk Assessment
    +        state: removed
    +
    +      - name: dv
    +        description: Data Virtualization
    +        size: small 
    +        instances:
    +        - name: data-virtualization
    +        state: removed
    +
    +      # Please note that for EDB Postgres, a secret edb-postgres-license-key must be created in the vault
    +      # before deploying
    +      - name: edb_cp4d
    +        description: EDB Postgres
    +        state: removed
    +        instances:
    +        - name: instance1
    +          version: "15.4"
    +          #type: Standard
    +          #members: 1
    +          #size_gb: 50
    +          #resource_request_cpu: 1
    +          #resource_request_memory: 4Gi
    +          #resource_limit_cpu: 1
    +          #resource_limit_memory: 4Gi
    +
    +      - name: factsheet
    +        description: AI Factsheets
    +        size: small
    +        state: removed
    +
    +      - name: hadoop
    +        description: Execution Engine for Apache Hadoop
    +        size: small
    +        state: removed
    +
    +      - name: mantaflow
    +        description: MANTA Automated Lineage
    +        size: small
    +        state: removed
    +
    +      - name: match360
    +        description: IBM Match 360
    +        size: small
    +        wkc_enabled: true
    +        state: removed
    +
    +      - name: openpages
    +        description: OpenPages
    +        state: removed
    +
    +      # For Planning Analytics, the case version is needed due to a defect in olm-utils
    +      - name: planning-analytics
    +        description: Planning Analytics
    +        state: removed
    +
    +      - name: replication
    +        description: Data Replication
    +        license: IDRC
    +        size: small
    +        state: removed
    +
    +      - name: rstudio
    +        description: RStudio Server with R 3.6
    +        size: small
    +        state: removed
    +
    +      - name: spss
    +        description: SPSS Modeler
    +        state: removed
    +
    +      - name: voice-gateway
    +        description: Voice Gateway
    +        replicas: 1
    +        state: removed
    +
    +      - name: watson-assistant
    +        description: Watson Assistant
    +        size: small
    +        # noobaa_account_secret: noobaa-admin
    +        # noobaa_cert_secret: noobaa-s3-serving-cert
    +        state: removed
    +
    +      - name: watson-discovery
    +        description: Watson Discovery
    +        # noobaa_account_secret: noobaa-admin
    +        # noobaa_cert_secret: noobaa-s3-serving-cert
    +        state: removed
    +
    +      - name: watson-ks
    +        description: Watson Knowledge Studio
    +        size: small
    +        # noobaa_account_secret: noobaa-admin
    +        # noobaa_cert_secret: noobaa-s3-serving-cert
    +        state: removed
    +
    +      - name: watson-openscale
    +        description: Watson OpenScale
    +        size: small
    +        state: removed
    +
    +      - name: watson-speech
    +        description: Watson Speech (STT and TTS)
    +        stt_size: xsmall
    +        tts_size: xsmall
    +        # noobaa_account_secret: noobaa-admin
    +        # noobaa_cert_secret: noobaa-s3-serving-cert
    +        state: removed
    +
    +      # Please note that for watsonx.ai foundation models, you need to install the
    +      # Node Feature Discovery and NVIDIA GPU operators. You can do so by setting the openshift.gpu.install property to True
    +      - name: watsonx_ai
    +        description: watsonx.ai
    +        state: removed
    +        models:
    +        - model_id: google-flan-t5-xxl
    +          state: removed
    +        - model_id: google-flan-ul2
    +          state: removed
    +        - model_id: eleutherai-gpt-neox-20b
    +          state: removed
    +        - model_id: ibm-granite-13b-chat-v1
    +          state: removed
    +        - model_id: ibm-granite-13b-instruct-v1
    +          state: removed
    +        - model_id: meta-llama-llama-2-70b-chat
    +          state: removed
    +        - model_id: ibm-mpt-7b-instruct2
    +          state: removed
    +        - model_id: bigscience-mt0-xxl
    +          state: removed
    +        - model_id: bigcode-starcoder
    +          state: removed
    +
    +      - name: watsonx_data
    +        description: watsonx.data
    +        state: removed
    +
    +      - name: wkc
    +        description: Watson Knowledge Catalog
    +        size: small
    +        state: removed
    +        installation_options:
    +          install_wkc_core_only: False
    +          enableKnowledgeGraph: False
    +          enableDataQuality: False
    +          enableFactSheet: False
    +
    +      - name: wml
    +        description: Watson Machine Learning
    +        size: small
    +        state: installed
    +
    +      - name: wml-accelerator
    +        description: Watson Machine Learning Accelerator
    +        replicas: 1
    +        size: small
    +        state: removed
    +
    +      - name: ws
    +        description: Watson Studio
    +        state: installed
    +
    +      - name: ws-pipelines
    +        description: Watson Studio Pipelines
    +        state: removed
    +
    +      - name: ws-runtimes
    +        description: Watson Studio Runtimes
    +        runtimes:
    +        - ibm-cpd-ws-runtime-py39
    +        - ibm-cpd-ws-runtime-222-py
    +        - ibm-cpd-ws-runtime-py39gpu
    +        - ibm-cpd-ws-runtime-222-pygpu
    +        - ibm-cpd-ws-runtime-231-pygpu
    +        - ibm-cpd-ws-runtime-r36
    +        - ibm-cpd-ws-runtime-222-r
    +        - ibm-cpd-ws-runtime-231-r
    +        state: removed 
    +

Start the deployer🔗

  • Go to the OpenShift console
  • Click the "+" sign at the top of the page
  • Paste the following block (exactly) into the window
    apiVersion: v1
    +kind: Pod
    +metadata:
    +  labels:
    +    app: cloud-pak-deployer-start
    +  generateName: cloud-pak-deployer-start-
    +  namespace: cloud-pak-deployer
    +spec:
    +  containers:
    +  - name: cloud-pak-deployer
    +    image: quay.io/cloud-pak-deployer/cloud-pak-deployer:latest
    +    imagePullPolicy: Always
    +    terminationMessagePath: /dev/termination-log
    +    terminationMessagePolicy: File
    +    command: ["/bin/sh","-xc"]
    +    args: 
    +      - /cloud-pak-deployer/scripts/deployer/cpd-start-deployer.sh
    +  restartPolicy: Never
    +  securityContext:
    +    runAsUser: 0
    +  serviceAccountName: cloud-pak-deployer-sa
    +

Follow the logs of the deployment🔗

  • Open the OpenShift console
  • Go to Workloads → Pods
  • Select cloud-pak-deployer as the project at the top of the page
  • Click the deployer pod
  • Click logs

Info

When running the deployer installing Cloud Pak for Data, the first run will fail. This is because the deployer applies the node configuration to OpenShift, which will cause all nodes to restart one by one, including the node that runs the deployer. Because of the job setting, a new deployer pod will automatically start and resume from where it was stopped.
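
If you want to monitor the node restarts from a terminal while this happens (an optional aside, assuming oc access to the cluster), you can watch the machine config pools and nodes:

```shell
# UPDATING turns back to False once all nodes have applied the new configuration
oc get mcp -w

# Or watch individual nodes become Ready again
oc get nodes -w
```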

Re-run deployer when failed or if you want to update the configuration🔗

If the deployer has failed or if you want to make changes to the configuration after a successful run, do the following:

  • Open the OpenShift console
  • Go to Workloads → Jobs
  • Check the logs of the cloud-pak-deployer job
  • If needed, make changes to the cloud-pak-deployer-config Config Map by going to Workloads → ConfigMaps
  • Re-run the deployer
\ No newline at end of file diff --git a/50-advanced/run-on-openshift/run-deployer-wizard-on-openshift/index.html b/50-advanced/run-on-openshift/run-deployer-wizard-on-openshift/index.html new file mode 100644 index 000000000..9c50a8544 --- /dev/null +++ b/50-advanced/run-on-openshift/run-deployer-wizard-on-openshift/index.html @@ -0,0 +1,141 @@ + Run deployer wizard on OpenShift - Cloud Pak Deployer

Run deployer wizard on OpenShift🔗

Log in to the OpenShift cluster🔗

Log in as a cluster administrator so that the deployer runs with the correct permissions.

Prepare the deployer project and the storage🔗

  • Go to the OpenShift console
  • Click the "+" sign at the top of the page
  • Paste the following block (exactly) into the window
    ---
    +apiVersion: v1
    +kind: Namespace
    +metadata:
    +  creationTimestamp: null
    +  name: cloud-pak-deployer
    +---
    +apiVersion: v1
    +kind: ServiceAccount
    +metadata:
    +  name: cloud-pak-deployer-sa
    +  namespace: cloud-pak-deployer
    +---
    +apiVersion: rbac.authorization.k8s.io/v1
    +kind: RoleBinding
    +metadata:
    +  name: system:openshift:scc:privileged
    +  namespace: cloud-pak-deployer
    +roleRef:
    +  apiGroup: rbac.authorization.k8s.io
    +  kind: ClusterRole
    +  name: system:openshift:scc:privileged
    +subjects:
    +- kind: ServiceAccount
    +  name: cloud-pak-deployer-sa
    +  namespace: cloud-pak-deployer
    +---
    +apiVersion: rbac.authorization.k8s.io/v1
    +kind: ClusterRoleBinding
    +metadata:
    +  name: cloud-pak-deployer-cluster-admin
    +roleRef:
    +  apiGroup: rbac.authorization.k8s.io
    +  kind: ClusterRole
    +  name: cluster-admin
    +subjects:
    +- kind: ServiceAccount
    +  name: cloud-pak-deployer-sa
    +  namespace: cloud-pak-deployer
    +---
    +apiVersion: v1
    +kind: PersistentVolumeClaim
    +metadata:
    +  name: cloud-pak-deployer-config
    +  namespace: cloud-pak-deployer
    +spec:
    +  accessModes:
    +  - ReadWriteMany
    +  resources:
    +    requests:
    +      storage: 1Gi
    +---
    +apiVersion: v1
    +kind: PersistentVolumeClaim
    +metadata:
    +  name: cloud-pak-deployer-status
    +  namespace: cloud-pak-deployer
    +spec:
    +  accessModes:
    +  - ReadWriteMany
    +  resources:
    +    requests:
    +      storage: 10Gi
    +
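
Before continuing, it is worth checking that both persistent volume claims can be bound by your storage class:

```shell
# Both PVCs should reach status Bound (or Pending until first consumer,
# depending on the storage class)
oc get pvc -n cloud-pak-deployer
```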

Run the deployer wizard and expose route🔗

  • Go to the OpenShift console
  • Click the "+" sign at the top of the page
  • Paste the following block (exactly) into the window
    apiVersion: apps/v1
    +kind: Deployment
    +metadata:
    +  name: cloud-pak-deployer-wizard
    +  namespace: cloud-pak-deployer
    +spec:
    +  replicas: 1
    +  selector:
    +    matchLabels:
    +      app: cloud-pak-deployer-wizard
    +  template:
    +    metadata:
    +      name: cloud-pak-deployer-wizard
    +      labels:
    +        app: cloud-pak-deployer-wizard
    +    spec:
    +      containers:
    +      - name: cloud-pak-deployer
    +        image: quay.io/cloud-pak-deployer/cloud-pak-deployer:latest
    +        imagePullPolicy: Always
    +        terminationMessagePath: /dev/termination-log
    +        terminationMessagePolicy: File
    +        ports:
    +        - containerPort: 8080
    +          protocol: TCP
    +        env:
    +        - name: CONFIG_DIR
    +          value: /Data/cpd-config
    +        - name: STATUS_DIR
    +          value: /Data/cpd-status
    +        - name: CPD_WIZARD_PAGE_TITLE
    +          value: "Cloud Pak Deployer"
    +#        - name: CPD_WIZARD_MODE
    +#          value: existing-ocp
    +        volumeMounts:
    +        - name: config-volume
    +          mountPath: /Data/cpd-config
    +        - name: status-volume
    +          mountPath: /Data/cpd-status
    +        command: ["/bin/sh","-xc"]
    +        args: 
    +          - mkdir -p /Data/cpd-config/config && /cloud-pak-deployer/cp-deploy.sh env wizard -v
    +      securityContext:
    +        runAsUser: 0
    +      serviceAccountName: cloud-pak-deployer-sa
    +      volumes:
    +      - name: config-volume
    +        persistentVolumeClaim:
    +          claimName: cloud-pak-deployer-config   
    +      - name: status-volume
    +        persistentVolumeClaim:
    +          claimName: cloud-pak-deployer-status        
    +---
    +apiVersion: v1
    +kind: Service
    +metadata:
    +  name: cloud-pak-deployer-wizard-svc
    +  namespace: cloud-pak-deployer    
    +spec:
    +  selector:                  
    +    app: cloud-pak-deployer-wizard
    +  ports:
    +  - nodePort: 0
    +    port: 8080            
    +    protocol: TCP
    +---
    +apiVersion: route.openshift.io/v1
    +kind: Route
    +metadata:
    +  name: cloud-pak-deployer-wizard
    +spec:
    +  tls:
    +    termination: edge
    +  to:
    +    kind: Service
    +    name: cloud-pak-deployer-wizard-svc
    +    weight: null
    +

Open the wizard🔗

Now you can access the deployer wizard using the route created in the cloud-pak-deployer project.

  • Open the OpenShift console
  • Go to Networking → Routes
  • Click the Cloud Pak Deployer wizard route
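
If you prefer the command line, the route host can also be retrieved with oc, assuming the manifests above were applied unchanged:

```shell
echo "https://$(oc get route cloud-pak-deployer-wizard \
  -n cloud-pak-deployer -o jsonpath='{.spec.host}')"
```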

\ No newline at end of file diff --git a/80-development/deployer-development-setup/index.html b/80-development/deployer-development-setup/index.html new file mode 100644 index 000000000..6a6c9a915 --- /dev/null +++ b/80-development/deployer-development-setup/index.html @@ -0,0 +1,56 @@ + Deployer development setup - Cloud Pak Deployer

Deployer Development Setup🔗

Setting up a virtual machine or server to develop the Cloud Pak Deployer code. Focuses on initial setup of a server to run the deployer container, setting up Visual Studio Code, issuing GPG keys and running the deployer in development mode.

Set up a server for development🔗

We recommend using a Red Hat Linux server for developing the Cloud Pak Deployer, either a virtual server in the cloud or a virtual machine on your workstation. Ideally, you run Visual Studio Code on your workstation and connect it to the remote Red Hat Linux server, updating the code and running it immediately on that server.

Install required packages🔗

To allow for remote development, a number of packages must be installed on the Linux server. Without them, VSCode will not work and the resulting error messages are difficult to debug. To install these packages, run the following as the root user:

yum install -y git podman wget unzip tar gpg pinentry
+

Additionally, you can also install EPEL and screen to make it easier to keep your session if it gets disconnected.

yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
+yum install -y screen
+

Set up development user🔗

It is recommended to use a dedicated development user (your user name) on the Linux server rather than root. Not only is this more secure; it also prevents destructive mistakes. In the steps below, we create a user fk-dev and give it sudo permissions.

useradd -G wheel fk-dev
+

To give fk-dev permission to run commands as root, change the sudo settings.

visudo
+

Scroll down until you see the following line:

# %wheel        ALL=(ALL)       NOPASSWD: ALL
+

Change the line to look like this:

%wheel        ALL=(ALL)       NOPASSWD: ALL
+

Now, save the file by pressing Esc, followed by : and x.
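
To verify the change, log in as fk-dev and run a command through sudo; it should succeed without prompting for a password:

```shell
# Run as fk-dev; should print "root" without a password prompt
sudo whoami
```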

Configure password-less SSH for development user🔗

Especially when the virtual server runs in the cloud, users typically log on using their SSH key. This requires the public key of the workstation to be added to the development user's SSH configuration.

Make sure you run the following commands as the development user (fk-dev):

mkdir -p ~/.ssh
+chmod 700 ~/.ssh
+touch ~/.ssh/authorized_keys
+chmod 600 ~/.ssh/authorized_keys
+

Then, add the public key of your workstation to the authorized_keys file.

vi ~/.ssh/authorized_keys
+

Press i to enter vi's insert mode. Then paste the public SSH key, for example:

ssh-rsa AAAAB3NzaC1yc2EAAAADAXABAAABAQEGUeXJr0ZHy1SPGOntmr/7ixmK3KV8N3q/+0eSfKVTyGbhUO9lC1+oYcDvwMrizAXBJYWkIIwx4WgC77a78....fP3S5WYgqL fk-dev
+

Finally save the file by pressing Esc, followed by : and x.
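
From your workstation you can now test the connection; the host name below is a placeholder for your server's address:

```shell
ssh fk-dev@server.example.com 'echo SSH connection OK'
```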

Configure Git for the development user🔗

Run the following commands as the development user (fk-dev):

git config --global user.name "Your full name"
+git config --global user.email "your_email_address"
+git config --global credential.helper "cache --timeout=86400"
+

Set up GPG for the development user🔗

We also want to ensure that commits are verified (trusted) by signing them with a GPG key. This requires setup on both the development server and your Git account.

First, set up a new GPG key:

gpg --default-new-key-algo rsa4096 --gen-key
+

You will be prompted to specify your user information:

  • Real name: Enter your full name
  • Email address: Your e-mail address that will be used to sign the commits

Press o at the following prompt:

Change (N)ame, (E)mail, or (O)kay/(Q)uit?
+

Then, you will be prompted for a passphrase. You cannot use a passphrase for your GPG key if you want to use it for automatic signing of commits. Just press Enter multiple times until the GPG key has been generated.

List the signatures of the known keys. You will use the signature to sign the commits and to retrieve the public key.

gpg --list-signatures
+

Output will look something like this:

/home/fk-dev/.gnupg/pubring.kbx
+-----------------------------------
+pub   rsa4096 2022-10-30 [SC] [expires: 2024-10-29]
+      BC83E8A97538EDD4E01DC05EA83C67A6D7F71756
+uid           [ultimate] FK Developer <fk-dev@ibm.com>
+sig 3        A83C67A6D7F71756 2022-10-30  FK Developer <fk-dev@ibm.com>
+

You will use the signature to retrieve the public key:

gpg --armor --export A83C67A6D7F71756
+

The public key will look something like below:

-----BEGIN PGP PUBLIC KEY BLOCK-----
+
+mQINBGNeGNQBEAC/y2tovX5s0Z+onUpisnMMleG94nqOtajXG1N0UbHAUQyKfirt
+O8t91ek+e5PEsVkR/RLIM1M1YkiSV4irxW/uFPucXHZDVH8azfnJjf6j6cXWt/ra
+1I2vGV3dIIQ6aJIBEEXC+u+N6rWpCOF5ERVrumGFlDhL/PY8Y9NM0cNQCbOcciTV
+5a5DrqyHC3RD5Bcn5EA0/5ISTCGQyEbJe45G8L+a5yRchn4ACVEztR2B/O5iOZbM
+.
+.
+.
+4ojOJPu0n5QLA5cI3RyZFw==
+=sx91
+-----END PGP PUBLIC KEY BLOCK-----
+

Now that you have the signature, you can configure Git to sign commits:

git config --global user.signingkey A83C67A6D7F71756
+

Next, add your GPG key to your Git user.

  • Go to IBM/cloud-pak-deployer.git
  • Log in using your public GitHub user
  • Click on your user at the top right of the page
  • Click Settings
  • In the left menu, select SSH and GPG keys
  • Click New GPG key
  • Enter a meaningful title for your GPG key, for example: FK Development Server
  • Paste the public GPG key
  • Confirm by pushing the Add GPG key button

Commits done on your development server will now be signed with your user name and e-mail address and will show as Verified when listing the commits.
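
The steps above register the signing key with Git and GitHub. If you also want Git to sign every commit automatically, rather than passing -S each time (an optional extra, not covered above), enable commit.gpgsign:

```shell
# Sign all commits by default with the configured user.signingkey
git config --global commit.gpgsign true
```

You can then inspect signatures with git log --show-signature inside a repository.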

Clone the repository🔗

Clone the repository using a git command. The command below is the clone of the main Cloud Pak Deployer repository. If you have forked the repository to develop features, you will have to use the URL of your own fork.

git clone https://github.com/IBM/cloud-pak-deployer.git
+

Connect VSCode to the development server🔗

  • Install the Remote - SSH extension in VSCode
  • Click on the green icon in the lower left of VSCode
  • Open SSH Config file, choose the one in your home directory
  • Add the following lines:
    Host nickname_of_your_server
    +   HostName ip_address_of_your_server
    +   User fk-dev
    +

Once you have set up this server in the SSH config file, you can connect to it and start remote development.

  • Open a folder on the remote server
  • Select the cloud-pak-deployer directory (this is the cloned repository)
  • As the directory is a cloned Git repo, VSCode will automatically open the default branch

From that point forward you can use VSCode as if you were working on your laptop, make changes and use a separate terminal to test your changes.

Cloud Pak Deployer developer command line option🔗

The Cloud Pak Deployer runs as a container on the server. When you're in the process of developing new features, having to always rebuild the image is a bit of a pain, hence we've introduced a special command line parameter.

./cp-deploy.sh env apply .... --cpd-develop [--accept-all-licenses]
+

When adding the --cpd-develop parameter to the command line, the current directory is mapped as a volume to the /cloud-pak-deployer directory within the container. This means that any latest changes you've done to the Ansible playbooks or other commands will take effect immediately.

Warning

Even though it is possible to run the deployer multiple times in parallel for different environments, please be aware that this is NOT possible when you use the --cpd-develop parameter. If you run two deploy processes with this parameter, you will see permission errors.

Cloud Pak Deployer developer container image tag🔗

When working on multiple changes concurrently, you may have to switch between branches or tags. By default, the Cloud Pak Deployer image is built with the latest tag, but you can override this by setting the CPD_IMAGE_TAG environment variable in your session.

export CPD_IMAGE_TAG=cp4d-460
./cp-deploy.sh build

When building the deployer, the image is now tagged:

podman image ls

REPOSITORY                           TAG         IMAGE ID      CREATED        SIZE
localhost/cloud-pak-deployer         cp4d-460    8b08cb2f9a2e  8 minutes ago  1.92 GB

When running the deployer with the same environment variable set, you will see an additional message in the output.

./cp-deploy.sh env apply

Cloud Pak Deployer image tag cp4d-460 will be used.
...

Cloud Pak Deployer podman or docker command🔗

By default, the cp-deploy.sh command detects if podman (preferred) or docker is found on the system. In case both are present, podman is used. You can override this behaviour by setting the CPD_CONTAINER_ENGINE environment variable.

export CPD_CONTAINER_ENGINE=docker
./cp-deploy.sh build

Container engine docker will be used.
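The selection logic can be pictured as the following sketch (an approximation, not the actual code of cp-deploy.sh); here the override variable is set so the outcome is deterministic:

```shell
# Sketch (assumption): how the container engine is selected.
CPD_CONTAINER_ENGINE=docker                     # explicit override, as in the example above

if [ -n "$CPD_CONTAINER_ENGINE" ]; then
  engine="$CPD_CONTAINER_ENGINE"                # user override always wins
elif command -v podman >/dev/null 2>&1; then
  engine=podman                                 # podman preferred when both are installed
elif command -v docker >/dev/null 2>&1; then
  engine=docker
fi
echo "Container engine $engine will be used."
```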

Documentation Development setup🔗

MkDocs themes encapsulate all of the configuration and implementation details of static documentation sites. This GitHub repository has been built with a dependency on the MkDocs tool and is connected to GitHub Actions; any commit to the main branch triggers a build of the GitHub Pages site. The preferred way of working while developing documentation is to use the tooling from a local system.

Local tooling installation🔗

If you want to test the documentation pages you're developing, it is best to run MkDocs in a container and map your local docs folder to a folder inside the container. This avoids having to install nvm and many modules on your workstation.

Do the following:

  • Make sure you have cloned this repository to your development server
  • Start from the main directory of the cloud-pak-deployer repository
    cd docs
    ./dev-doc-build.sh

This will build a Red Hat UBI image with all requirements pre-installed. This step takes ~2-10 minutes to complete, depending on your network bandwidth.

Running the documentation image🔗

./dev-doc-run.sh

This will start the container as a daemon and tail the logs. Once running, you will see the following message:

...
INFO     -  Documentation built in 3.32 seconds
INFO     -  [11:55:49] Watching paths for changes: 'src', 'mkdocs.yml'
INFO     -  [11:55:49] Serving on http://0.0.0.0:8000/cloud-pak-deployer/...
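For orientation, dev-doc-run.sh boils down to starting the container with the documentation folder mounted and port 8000 published. The image name, mount path and options in the sketch below are assumptions, and the command is printed rather than executed:

```shell
# Sketch (assumption): the essence of what ./dev-doc-run.sh starts.
doc_image="cpd-doc:latest"                      # image built by ./dev-doc-build.sh
run_cmd="podman run -d --name cpd-doc -p 8000:8000 -v $(pwd):/docs:z $doc_image"
echo "$run_cmd"                                 # inspect/adapt before running it yourself
```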

Starting the browser🔗

Now that the container has fully started, it automatically tracks all changes under the docs folder and rebuilds the pages site. You can view the site by opening a browser at the URL:

http://localhost:8000

Stopping the documentation container🔗

If you no longer want to test your changes locally, stop the container.

podman kill cpd-doc

Next time you want to test your changes, re-run ./dev-doc-run.sh, which will delete the container, clear the cache and rebuild the documentation.

Removing the docker container and image🔗

If you want to remove everything from your development server, do the following:

podman rm -f cpd-doc
podman rmi -f cpd-doc:latest

Note that after merging your updated documentation with the main branch, the pages site will be rendered by a GitHub action. Go to GitHub Actions if you want to monitor the build process.


Deployer documentation guidelines

Documentation guidelines🔗

This document contains a few formatting rules/requirements to maintain uniformity and structure across our documentation.

Formatting🔗

Code block input🔗

Code block inputs should be created by surrounding the code text with three backticks (```). For example, to create the following code block:

oc get nodes

Your markdown input would look like:

```
oc get nodes
```

Code block output🔗

Code block outputs should specify the output language. This can be done by putting the language after the opening tick marks. For example, to create the following code block:

{
    "cloudName": "AzureCloud",
    "homeTenantId": "fcf67057-50c9-4ad4-98f3-ffca64add9e9",
    "id": "d604759d-4ce2-4dbc-b012-b9d7f1d0c185",
    "isDefault": true,
    "managedByTenants": [],
    "name": "Microsoft Azure Enterprise",
    "state": "Enabled",
    "tenantId": "fcf67057-50c9-4ad4-98f3-ffca64add9e9",
    "user": {
    "name": "example@example.com",
    "type": "user"
    }
}

Your markdown input would look like:

```output
{
    "cloudName": "AzureCloud",
    "homeTenantId": "fcf67057-50c9-4ad4-98f3-ffca64add9e9",
    "id": "d604759d-4ce2-4dbc-b012-b9d7f1d0c185",
    "isDefault": true,
    "managedByTenants": [],
    "name": "Microsoft Azure Enterprise",
    "state": "Enabled",
    "tenantId": "fcf67057-50c9-4ad4-98f3-ffca64add9e9",
    "user": {
    "name": "example@example.com",
    "type": "user"
    }
}
```

Information block (inline notifications)🔗

If you want to highlight something to the reader using an information or warning block, use the following code:

!!! warning
    Warning: please do not shut down the cluster at this stage.

This will show up as:

Warning

Warning: please do not shut down the cluster at this stage.

You can also use info and error blocks.
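An info block follows the same pattern; the note text below is just a placeholder:

```
!!! info
    This is an informational note for the reader.
```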

MutationObserver(this.refresh),this.mutationsObserver_.observe(document,{attributes:!0,childList:!0,characterData:!0,subtree:!0})):(document.addEventListener("DOMSubtreeModified",this.refresh),this.mutationEventsAdded_=!0),this.connected_=!0)},e.prototype.disconnect_=function(){!qr||!this.connected_||(document.removeEventListener("transitionend",this.onTransitionEnd_),window.removeEventListener("resize",this.refresh),this.mutationsObserver_&&this.mutationsObserver_.disconnect(),this.mutationEventsAdded_&&document.removeEventListener("DOMSubtreeModified",this.refresh),this.mutationsObserver_=null,this.mutationEventsAdded_=!1,this.connected_=!1)},e.prototype.onTransitionEnd_=function(t){var r=t.propertyName,n=r===void 0?"":r,o=qa.some(function(i){return!!~n.indexOf(i)});o&&this.refresh()},e.getInstance=function(){return this.instance_||(this.instance_=new e),this.instance_},e.instance_=null,e}(),Eo=function(e,t){for(var r=0,n=Object.keys(t);r0},e}(),Oo=typeof WeakMap!="undefined"?new WeakMap:new wo,_o=function(){function e(t){if(!(this instanceof e))throw new TypeError("Cannot call a class as a function.");if(!arguments.length)throw new TypeError("1 argument required, but only 0 present.");var r=Qa.getInstance(),n=new ns(t,r,this);Oo.set(this,n)}return e}();["observe","unobserve","disconnect"].forEach(function(e){_o.prototype[e]=function(){var t;return(t=Oo.get(this))[e].apply(t,arguments)}});var os=function(){return typeof ir.ResizeObserver!="undefined"?ir.ResizeObserver:_o}(),To=os;var Mo=new E,is=P(()=>I(new To(e=>{for(let t of e)Mo.next(t)}))).pipe(S(e=>A(Se,I(e)).pipe(C(()=>e.disconnect()))),X(1));function he(e){return{width:e.offsetWidth,height:e.offsetHeight}}function ve(e){return is.pipe(w(t=>t.observe(e)),S(t=>Mo.pipe(x(({target:r})=>r===e),C(()=>t.unobserve(e)),m(()=>he(e)))),z(he(e)))}function mt(e){return{width:e.scrollWidth,height:e.scrollHeight}}function cr(e){let 
t=e.parentElement;for(;t&&(e.scrollWidth<=t.scrollWidth&&e.scrollHeight<=t.scrollHeight);)t=(e=t).parentElement;return t?e:void 0}var Lo=new E,as=P(()=>I(new IntersectionObserver(e=>{for(let t of e)Lo.next(t)},{threshold:0}))).pipe(S(e=>A(Se,I(e)).pipe(C(()=>e.disconnect()))),X(1));function fr(e){return as.pipe(w(t=>t.observe(e)),S(t=>Lo.pipe(x(({target:r})=>r===e),C(()=>t.unobserve(e)),m(({isIntersecting:r})=>r))))}function Ao(e,t=16){return pt(e).pipe(m(({y:r})=>{let n=he(e),o=mt(e);return r>=o.height-n.height-t}),B())}var ur={drawer:K("[data-md-toggle=drawer]"),search:K("[data-md-toggle=search]")};function Co(e){return ur[e].checked}function Ke(e,t){ur[e].checked!==t&&ur[e].click()}function dt(e){let t=ur[e];return v(t,"change").pipe(m(()=>t.checked),z(t.checked))}function ss(e,t){switch(e.constructor){case HTMLInputElement:return e.type==="radio"?/^Arrow/.test(t):!0;case HTMLSelectElement:case HTMLTextAreaElement:return!0;default:return e.isContentEditable}}function Ro(){return v(window,"keydown").pipe(x(e=>!(e.metaKey||e.ctrlKey)),m(e=>({mode:Co("search")?"search":"global",type:e.key,claim(){e.preventDefault(),e.stopPropagation()}})),x(({mode:e,type:t})=>{if(e==="global"){let r=Ie();if(typeof r!="undefined")return!ss(r,t)}return!0}),ie())}function Oe(){return new URL(location.href)}function pr(e){location.href=e.href}function ko(){return new E}function Ho(e,t){if(typeof t=="string"||typeof t=="number")e.innerHTML+=t.toString();else if(t instanceof Node)e.appendChild(t);else if(Array.isArray(t))for(let r of t)Ho(e,r)}function M(e,t,...r){let n=document.createElement(e);if(t)for(let o of Object.keys(t))typeof t[o]!="undefined"&&(typeof t[o]!="boolean"?n.setAttribute(o,t[o]):n.setAttribute(o,""));for(let o of r)Ho(n,o);return n}function Po(e,t){let r=t;if(e.length>r){for(;e[r]!==" "&&--r>0;);return`${e.substring(0,r)}...`}return e}function lr(e){if(e>999){let t=+((e-950)%1e3>99);return`${((e+1e-6)/1e3).toFixed(t)}k`}else return e.toString()}function $o(){return 
location.hash.substring(1)}function Io(e){let t=M("a",{href:e});t.addEventListener("click",r=>r.stopPropagation()),t.click()}function cs(){return v(window,"hashchange").pipe(m($o),z($o()),x(e=>e.length>0),X(1))}function jo(){return cs().pipe(m(e=>pe(`[id="${e}"]`)),x(e=>typeof e!="undefined"))}function Kr(e){let t=matchMedia(e);return rr(r=>t.addListener(()=>r(t.matches))).pipe(z(t.matches))}function Fo(){let e=matchMedia("print");return A(v(window,"beforeprint").pipe(m(()=>!0)),v(window,"afterprint").pipe(m(()=>!1))).pipe(z(e.matches))}function Qr(e,t){return e.pipe(S(r=>r?t():R))}function mr(e,t={credentials:"same-origin"}){return ue(fetch(`${e}`,t)).pipe(ce(()=>R),S(r=>r.status!==200?Ot(()=>new Error(r.statusText)):I(r)))}function je(e,t){return mr(e,t).pipe(S(r=>r.json()),X(1))}function Uo(e,t){let r=new DOMParser;return mr(e,t).pipe(S(n=>n.text()),m(n=>r.parseFromString(n,"text/xml")),X(1))}function Do(e){let t=M("script",{src:e});return P(()=>(document.head.appendChild(t),A(v(t,"load"),v(t,"error").pipe(S(()=>Ot(()=>new ReferenceError(`Invalid script: ${e}`))))).pipe(m(()=>{}),C(()=>document.head.removeChild(t)),oe(1))))}function Wo(){return{x:Math.max(0,scrollX),y:Math.max(0,scrollY)}}function Vo(){return A(v(window,"scroll",{passive:!0}),v(window,"resize",{passive:!0})).pipe(m(Wo),z(Wo()))}function No(){return{width:innerWidth,height:innerHeight}}function zo(){return v(window,"resize",{passive:!0}).pipe(m(No),z(No()))}function qo(){return Y([Vo(),zo()]).pipe(m(([e,t])=>({offset:e,size:t})),X(1))}function dr(e,{viewport$:t,header$:r}){let n=t.pipe(J("size")),o=Y([n,r]).pipe(m(()=>qe(e)));return Y([r,t,o]).pipe(m(([{height:i},{offset:s,size:a},{x:c,y:f}])=>({offset:{x:s.x-c,y:s.y-f+i},size:a})))}function Ko(e,{tx$:t}){let r=v(e,"message").pipe(m(({data:n})=>n));return t.pipe(Lt(()=>r,{leading:!0,trailing:!0}),w(n=>e.postMessage(n)),S(()=>r),ie())}var fs=K("#__config"),ht=JSON.parse(fs.textContent);ht.base=`${new URL(ht.base,Oe())}`;function le(){return 
ht}function Z(e){return ht.features.includes(e)}function re(e,t){return typeof t!="undefined"?ht.translations[e].replace("#",t.toString()):ht.translations[e]}function _e(e,t=document){return K(`[data-md-component=${e}]`,t)}function te(e,t=document){return Q(`[data-md-component=${e}]`,t)}function us(e){let t=K(".md-typeset > :first-child",e);return v(t,"click",{once:!0}).pipe(m(()=>K(".md-typeset",e)),m(r=>({hash:__md_hash(r.innerHTML)})))}function Qo(e){return!Z("announce.dismiss")||!e.childElementCount?R:P(()=>{let t=new E;return t.pipe(z({hash:__md_get("__announce")})).subscribe(({hash:r})=>{var n;r&&r===((n=__md_get("__announce"))!=null?n:r)&&(e.hidden=!0,__md_set("__announce",r))}),us(e).pipe(w(r=>t.next(r)),C(()=>t.complete()),m(r=>H({ref:e},r)))})}function ps(e,{target$:t}){return t.pipe(m(r=>({hidden:r!==e})))}function Yo(e,t){let r=new E;return r.subscribe(({hidden:n})=>{e.hidden=n}),ps(e,t).pipe(w(n=>r.next(n)),C(()=>r.complete()),m(n=>H({ref:e},n)))}var ii=Ye(Br());function Gr(e){return M("div",{class:"md-tooltip",id:e},M("div",{class:"md-tooltip__inner md-typeset"}))}function Bo(e,t){if(t=t?`${t}_annotation_${e}`:void 0,t){let r=t?`#${t}`:void 0;return M("aside",{class:"md-annotation",tabIndex:0},Gr(t),M("a",{href:r,class:"md-annotation__index",tabIndex:-1},M("span",{"data-md-annotation-id":e})))}else return M("aside",{class:"md-annotation",tabIndex:0},Gr(t),M("span",{class:"md-annotation__index",tabIndex:-1},M("span",{"data-md-annotation-id":e})))}function Go(e){return M("button",{class:"md-clipboard md-icon",title:re("clipboard.copy"),"data-clipboard-target":`#${e} > code`})}function Jr(e,t){let r=t&2,n=t&1,o=Object.keys(e.terms).filter(a=>!e.terms[a]).reduce((a,c)=>[...a,M("del",null,c)," "],[]).slice(0,-1),i=new URL(e.location);Z("search.highlight")&&i.searchParams.set("h",Object.entries(e.terms).filter(([,a])=>a).reduce((a,[c])=>`${a} ${c}`.trim(),""));let{tags:s}=le();return 
M("a",{href:`${i}`,class:"md-search-result__link",tabIndex:-1},M("article",{class:["md-search-result__article",...r?["md-search-result__article--document"]:[]].join(" "),"data-md-score":e.score.toFixed(2)},r>0&&M("div",{class:"md-search-result__icon md-icon"}),M("h1",{class:"md-search-result__title"},e.title),n>0&&e.text.length>0&&M("p",{class:"md-search-result__teaser"},Po(e.text,320)),e.tags&&M("div",{class:"md-typeset"},e.tags.map(a=>{let c=a.replace(/<[^>]+>/g,""),f=s?c in s?`md-tag-icon md-tag-icon--${s[c]}`:"md-tag-icon":"";return M("span",{class:`md-tag ${f}`},a)})),n>0&&o.length>0&&M("p",{class:"md-search-result__terms"},re("search.result.term.missing"),": ",...o)))}function Jo(e){let t=e[0].score,r=[...e],n=r.findIndex(f=>!f.location.includes("#")),[o]=r.splice(n,1),i=r.findIndex(f=>f.scoreJr(f,1)),...a.length?[M("details",{class:"md-search-result__more"},M("summary",{tabIndex:-1},a.length>0&&a.length===1?re("search.result.more.one"):re("search.result.more.other",a.length)),...a.map(f=>Jr(f,1)))]:[]];return M("li",{class:"md-search-result__item"},c)}function Xo(e){return M("ul",{class:"md-source__facts"},Object.entries(e).map(([t,r])=>M("li",{class:`md-source__fact md-source__fact--${t}`},typeof r=="number"?lr(r):r)))}function Xr(e){let t=`tabbed-control tabbed-control--${e}`;return M("div",{class:t,hidden:!0},M("button",{class:"tabbed-button",tabIndex:-1}))}function Zo(e){return M("div",{class:"md-typeset__scrollwrap"},M("div",{class:"md-typeset__table"},e))}function ls(e){let t=le(),r=new URL(`../${e.version}/`,t.base);return M("li",{class:"md-version__item"},M("a",{href:`${r}`,class:"md-version__link"},e.title))}function ei(e,t){return M("div",{class:"md-version"},M("button",{class:"md-version__current","aria-label":re("select.version.title")},t.title),M("ul",{class:"md-version__list"},e.map(ls)))}function ms(e,t){let r=P(()=>Y([yo(e),pt(t)])).pipe(m(([{x:n,y:o},i])=>{let{width:s,height:a}=he(e);return{x:n-i.x+s/2,y:o-i.y+a/2}}));return 
nr(e).pipe(S(n=>r.pipe(m(o=>({active:n,offset:o})),oe(+!n||1/0))))}function ti(e,t,{target$:r}){let[n,o]=Array.from(e.children);return P(()=>{let i=new E,s=i.pipe(de(1));return i.subscribe({next({offset:a}){e.style.setProperty("--md-tooltip-x",`${a.x}px`),e.style.setProperty("--md-tooltip-y",`${a.y}px`)},complete(){e.style.removeProperty("--md-tooltip-x"),e.style.removeProperty("--md-tooltip-y")}}),fr(e).pipe(ee(s)).subscribe(a=>{e.toggleAttribute("data-md-visible",a)}),A(i.pipe(x(({active:a})=>a)),i.pipe(Re(250),x(({active:a})=>!a))).subscribe({next({active:a}){a?e.prepend(n):n.remove()},complete(){e.prepend(n)}}),i.pipe(Ae(16,xe)).subscribe(({active:a})=>{n.classList.toggle("md-tooltip--active",a)}),i.pipe(Nr(125,xe),x(()=>!!e.offsetParent),m(()=>e.offsetParent.getBoundingClientRect()),m(({x:a})=>a)).subscribe({next(a){a?e.style.setProperty("--md-tooltip-0",`${-a}px`):e.style.removeProperty("--md-tooltip-0")},complete(){e.style.removeProperty("--md-tooltip-0")}}),v(o,"click").pipe(ee(s),x(a=>!(a.metaKey||a.ctrlKey))).subscribe(a=>a.preventDefault()),v(o,"mousedown").pipe(ee(s),ae(i)).subscribe(([a,{active:c}])=>{var f;if(a.button!==0||a.metaKey||a.ctrlKey)a.preventDefault();else if(c){a.preventDefault();let u=e.parentElement.closest(".md-annotation");u instanceof HTMLElement?u.focus():(f=Ie())==null||f.blur()}}),r.pipe(ee(s),x(a=>a===n),ke(125)).subscribe(()=>e.focus()),ms(e,t).pipe(w(a=>i.next(a)),C(()=>i.complete()),m(a=>H({ref:e},a)))})}function ds(e){let t=[];for(let r of Q(".c, .c1, .cm",e)){let n=[],o=document.createNodeIterator(r,NodeFilter.SHOW_TEXT);for(let i=o.nextNode();i;i=o.nextNode())n.push(i);for(let i of n){let s;for(;s=/(\(\d+\))(!)?/.exec(i.textContent);){let[,a,c]=s;if(typeof c=="undefined"){let f=i.splitText(s.index);i=f.splitText(a.length),t.push(f)}else{i.textContent=a,t.push(i);break}}}}return t}function ri(e,t){t.append(...Array.from(e.childNodes))}function ni(e,t,{target$:r,print$:n}){let o=t.closest("[id]"),i=o==null?void 0:o.id,s=new 
Map;for(let a of ds(t)){let[,c]=a.textContent.match(/\((\d+)\)/);pe(`li:nth-child(${c})`,e)&&(s.set(c,Bo(c,i)),a.replaceWith(s.get(c)))}return s.size===0?R:P(()=>{let a=new E,c=[];for(let[f,u]of s)c.push([K(".md-typeset",u),K(`li:nth-child(${f})`,e)]);return n.pipe(ee(a.pipe(de(1)))).subscribe(f=>{e.hidden=!f;for(let[u,p]of c)f?ri(u,p):ri(p,u)}),A(...[...s].map(([,f])=>ti(f,t,{target$:r}))).pipe(C(()=>a.complete()),ie())})}var hs=0;function ai(e){if(e.nextElementSibling){let t=e.nextElementSibling;if(t.tagName==="OL")return t;if(t.tagName==="P"&&!t.children.length)return ai(t)}}function oi(e){return ve(e).pipe(m(({width:t})=>({scrollable:mt(e).width>t})),J("scrollable"))}function si(e,t){let{matches:r}=matchMedia("(hover)"),n=P(()=>{let o=new E;if(o.subscribe(({scrollable:s})=>{s&&r?e.setAttribute("tabindex","0"):e.removeAttribute("tabindex")}),ii.default.isSupported()){let s=e.closest("pre");s.id=`__code_${++hs}`,s.insertBefore(Go(s.id),e)}let i=e.closest(".highlight");if(i instanceof HTMLElement){let s=ai(i);if(typeof s!="undefined"&&(i.classList.contains("annotate")||Z("content.code.annotate"))){let a=ni(s,e,t);return oi(e).pipe(w(c=>o.next(c)),C(()=>o.complete()),m(c=>H({ref:e},c)),et(ve(i).pipe(m(({width:c,height:f})=>c&&f),B(),S(c=>c?a:R))))}}return oi(e).pipe(w(s=>o.next(s)),C(()=>o.complete()),m(s=>H({ref:e},s)))});return Z("content.lazy")?fr(e).pipe(x(o=>o),oe(1),S(()=>n)):n}var ci=".node circle,.node ellipse,.node path,.node polygon,.node rect{fill:var(--md-mermaid-node-bg-color);stroke:var(--md-mermaid-node-fg-color)}marker{fill:var(--md-mermaid-edge-color)!important}.edgeLabel .label rect{fill:transparent}.label{color:var(--md-mermaid-label-fg-color);font-family:var(--md-mermaid-font-family)}.label foreignObject{line-height:normal;overflow:visible}.label div .edgeLabel{color:var(--md-mermaid-label-fg-color)}.edgeLabel,.edgeLabel rect,.label div .edgeLabel{background-color:var(--md-mermaid-label-bg-color)}.edgeLabel,.edgeLabel 
rect{fill:var(--md-mermaid-label-bg-color);color:var(--md-mermaid-edge-color)}.edgePath .path,.flowchart-link{stroke:var(--md-mermaid-edge-color)}.edgePath .arrowheadPath{fill:var(--md-mermaid-edge-color);stroke:none}.cluster rect{fill:var(--md-default-fg-color--lightest);stroke:var(--md-default-fg-color--lighter)}.cluster span{color:var(--md-mermaid-label-fg-color);font-family:var(--md-mermaid-font-family)}defs #flowchart-circleEnd,defs #flowchart-circleStart,defs #flowchart-crossEnd,defs #flowchart-crossStart,defs #flowchart-pointEnd,defs #flowchart-pointStart{stroke:none}g.classGroup line,g.classGroup rect{fill:var(--md-mermaid-node-bg-color);stroke:var(--md-mermaid-node-fg-color)}g.classGroup text{fill:var(--md-mermaid-label-fg-color);font-family:var(--md-mermaid-font-family)}.classLabel .box{fill:var(--md-mermaid-label-bg-color);background-color:var(--md-mermaid-label-bg-color);opacity:1}.classLabel .label{fill:var(--md-mermaid-label-fg-color);font-family:var(--md-mermaid-font-family)}.node .divider{stroke:var(--md-mermaid-node-fg-color)}.relation{stroke:var(--md-mermaid-edge-color)}.cardinality{fill:var(--md-mermaid-label-fg-color);font-family:var(--md-mermaid-font-family)}.cardinality text{fill:inherit!important}defs #classDiagram-compositionEnd,defs #classDiagram-compositionStart,defs #classDiagram-dependencyEnd,defs #classDiagram-dependencyStart,defs #classDiagram-extensionEnd,defs #classDiagram-extensionStart{fill:var(--md-mermaid-edge-color)!important;stroke:var(--md-mermaid-edge-color)!important}defs #classDiagram-aggregationEnd,defs #classDiagram-aggregationStart{fill:var(--md-mermaid-label-bg-color)!important;stroke:var(--md-mermaid-edge-color)!important}g.stateGroup rect{fill:var(--md-mermaid-node-bg-color);stroke:var(--md-mermaid-node-fg-color)}g.stateGroup .state-title{fill:var(--md-mermaid-label-fg-color)!important;font-family:var(--md-mermaid-font-family)}g.stateGroup 
.composit{fill:var(--md-mermaid-label-bg-color)}.nodeLabel{color:var(--md-mermaid-label-fg-color);font-family:var(--md-mermaid-font-family)}.node circle.state-end,.node circle.state-start,.start-state{fill:var(--md-mermaid-edge-color);stroke:none}.end-state-inner,.end-state-outer{fill:var(--md-mermaid-edge-color)}.end-state-inner,.node circle.state-end{stroke:var(--md-mermaid-label-bg-color)}.transition{stroke:var(--md-mermaid-edge-color)}[id^=state-fork] rect,[id^=state-join] rect{fill:var(--md-mermaid-edge-color)!important;stroke:none!important}.statediagram-cluster.statediagram-cluster .inner{fill:var(--md-default-bg-color)}.statediagram-cluster rect{fill:var(--md-mermaid-node-bg-color);stroke:var(--md-mermaid-node-fg-color)}.statediagram-state rect.divider{fill:var(--md-default-fg-color--lightest);stroke:var(--md-default-fg-color--lighter)}defs #statediagram-barbEnd{stroke:var(--md-mermaid-edge-color)}.entityBox{fill:var(--md-mermaid-label-bg-color);stroke:var(--md-mermaid-node-fg-color)}.entityLabel{fill:var(--md-mermaid-label-fg-color);font-family:var(--md-mermaid-font-family)}.relationshipLabelBox{fill:var(--md-mermaid-label-bg-color);fill-opacity:1;background-color:var(--md-mermaid-label-bg-color);opacity:1}.relationshipLabel{fill:var(--md-mermaid-label-fg-color)}.relationshipLine{stroke:var(--md-mermaid-edge-color)}defs #ONE_OR_MORE_END *,defs #ONE_OR_MORE_START *,defs #ONLY_ONE_END *,defs #ONLY_ONE_START *,defs #ZERO_OR_MORE_END *,defs #ZERO_OR_MORE_START *,defs #ZERO_OR_ONE_END *,defs #ZERO_OR_ONE_START *{stroke:var(--md-mermaid-edge-color)!important}.actor,defs #ZERO_OR_MORE_END circle,defs #ZERO_OR_MORE_START 
circle{fill:var(--md-mermaid-label-bg-color)}.actor{stroke:var(--md-mermaid-node-fg-color)}text.actor>tspan{fill:var(--md-mermaid-label-fg-color);font-family:var(--md-mermaid-font-family)}line{stroke:var(--md-default-fg-color--lighter)}.messageLine0,.messageLine1{stroke:var(--md-mermaid-edge-color)}.loopText>tspan,.messageText{font-family:var(--md-mermaid-font-family)!important}#arrowhead path,.loopText>tspan,.messageText{fill:var(--md-mermaid-edge-color);stroke:none}.loopLine{stroke:var(--md-mermaid-node-fg-color)}.labelBox,.loopLine{fill:var(--md-mermaid-node-bg-color)}.labelBox{stroke:none}.labelText,.labelText>span{fill:var(--md-mermaid-node-fg-color);font-family:var(--md-mermaid-font-family)}";var Zr,vs=0;function gs(){return typeof mermaid=="undefined"||mermaid instanceof Element?Do("https://unpkg.com/mermaid@9.1.7/dist/mermaid.min.js"):I(void 0)}function fi(e){return e.classList.remove("mermaid"),Zr||(Zr=gs().pipe(w(()=>mermaid.initialize({startOnLoad:!1,themeCSS:ci})),m(()=>{}),X(1))),Zr.subscribe(()=>{e.classList.add("mermaid");let t=`__mermaid_${vs++}`,r=M("div",{class:"mermaid"});mermaid.mermaidAPI.render(t,e.textContent,n=>{let o=r.attachShadow({mode:"closed"});o.innerHTML=n,e.replaceWith(r)})}),Zr.pipe(m(()=>({ref:e})))}function ys(e,{target$:t,print$:r}){let n=!0;return A(t.pipe(m(o=>o.closest("details:not([open])")),x(o=>e===o),m(()=>({action:"open",reveal:!0}))),r.pipe(x(o=>o||!n),w(()=>n=e.open),m(o=>({action:o?"open":"close"}))))}function ui(e,t){return P(()=>{let r=new E;return r.subscribe(({action:n,reveal:o})=>{e.toggleAttribute("open",n==="open"),o&&e.scrollIntoView()}),ys(e,t).pipe(w(n=>r.next(n)),C(()=>r.complete()),m(n=>H({ref:e},n)))})}var pi=M("table");function li(e){return e.replaceWith(pi),pi.replaceWith(Zo(e)),I({ref:e})}function xs(e){let t=Q(":scope > input",e),r=t.find(n=>n.checked)||t[0];return A(...t.map(n=>v(n,"change").pipe(m(()=>K(`label[for="${n.id}"]`))))).pipe(z(K(`label[for="${r.id}"]`)),m(n=>({active:n})))}function 
mi(e,{viewport$:t}){let r=Xr("prev");e.append(r);let n=Xr("next");e.append(n);let o=K(".tabbed-labels",e);return P(()=>{let i=new E,s=i.pipe(de(1));return Y([i,ve(e)]).pipe(Ae(1,xe),ee(s)).subscribe({next([{active:a},c]){let f=qe(a),{width:u}=he(a);e.style.setProperty("--md-indicator-x",`${f.x}px`),e.style.setProperty("--md-indicator-width",`${u}px`);let p=or(o);(f.xp.x+c.width)&&o.scrollTo({left:Math.max(0,f.x-16),behavior:"smooth"})},complete(){e.style.removeProperty("--md-indicator-x"),e.style.removeProperty("--md-indicator-width")}}),Y([pt(o),ve(o)]).pipe(ee(s)).subscribe(([a,c])=>{let f=mt(o);r.hidden=a.x<16,n.hidden=a.x>f.width-c.width-16}),A(v(r,"click").pipe(m(()=>-1)),v(n,"click").pipe(m(()=>1))).pipe(ee(s)).subscribe(a=>{let{width:c}=he(o);o.scrollBy({left:c*a,behavior:"smooth"})}),Z("content.tabs.link")&&i.pipe(He(1),ae(t)).subscribe(([{active:a},{offset:c}])=>{let f=a.innerText.trim();if(a.hasAttribute("data-md-switching"))a.removeAttribute("data-md-switching");else{let u=e.offsetTop-c.y;for(let l of Q("[data-tabs]"))for(let d of Q(":scope > input",l)){let h=K(`label[for="${d.id}"]`);if(h!==a&&h.innerText.trim()===f){h.setAttribute("data-md-switching",""),d.click();break}}window.scrollTo({top:e.offsetTop-u});let p=__md_get("__tabs")||[];__md_set("__tabs",[...new Set([f,...p])])}}),xs(e).pipe(w(a=>i.next(a)),C(()=>i.complete()),m(a=>H({ref:e},a)))}).pipe(Je(fe))}function di(e,{viewport$:t,target$:r,print$:n}){return A(...Q("pre:not(.mermaid) > code",e).map(o=>si(o,{target$:r,print$:n})),...Q("pre.mermaid",e).map(o=>fi(o)),...Q("table:not([class])",e).map(o=>li(o)),...Q("details",e).map(o=>ui(o,{target$:r,print$:n})),...Q("[data-tabs]",e).map(o=>mi(o,{viewport$:t})))}function ws(e,{alert$:t}){return t.pipe(S(r=>A(I(!0),I(!1).pipe(ke(2e3))).pipe(m(n=>({message:r,active:n})))))}function hi(e,t){let r=K(".md-typeset",e);return P(()=>{let n=new E;return 
n.subscribe(({message:o,active:i})=>{e.classList.toggle("md-dialog--active",i),r.textContent=o}),ws(e,t).pipe(w(o=>n.next(o)),C(()=>n.complete()),m(o=>H({ref:e},o)))})}function Es({viewport$:e}){if(!Z("header.autohide"))return I(!1);let t=e.pipe(m(({offset:{y:o}})=>o),Ce(2,1),m(([o,i])=>[oMath.abs(i-o.y)>100),m(([,[o]])=>o),B()),n=dt("search");return Y([e,n]).pipe(m(([{offset:o},i])=>o.y>400&&!i),B(),S(o=>o?r:I(!1)),z(!1))}function bi(e,t){return P(()=>Y([ve(e),Es(t)])).pipe(m(([{height:r},n])=>({height:r,hidden:n})),B((r,n)=>r.height===n.height&&r.hidden===n.hidden),X(1))}function vi(e,{header$:t,main$:r}){return P(()=>{let n=new E,o=n.pipe(de(1));return n.pipe(J("active"),Ze(t)).subscribe(([{active:i},{hidden:s}])=>{e.classList.toggle("md-header--shadow",i&&!s),e.hidden=s}),r.subscribe(n),t.pipe(ee(o),m(i=>H({ref:e},i)))})}function Ss(e,{viewport$:t,header$:r}){return dr(e,{viewport$:t,header$:r}).pipe(m(({offset:{y:n}})=>{let{height:o}=he(e);return{active:n>=o}}),J("active"))}function gi(e,t){return P(()=>{let r=new E;r.subscribe(({active:o})=>{e.classList.toggle("md-header__title--active",o)});let n=pe("article h1");return typeof n=="undefined"?R:Ss(n,t).pipe(w(o=>r.next(o)),C(()=>r.complete()),m(o=>H({ref:e},o)))})}function yi(e,{viewport$:t,header$:r}){let n=r.pipe(m(({height:i})=>i),B()),o=n.pipe(S(()=>ve(e).pipe(m(({height:i})=>({top:e.offsetTop,bottom:e.offsetTop+i})),J("bottom"))));return Y([n,o,t]).pipe(m(([i,{top:s,bottom:a},{offset:{y:c},size:{height:f}}])=>(f=Math.max(0,f-Math.max(0,s-c,i)-Math.max(0,f+c-a)),{offset:s-i,height:f,active:s-i<=c})),B((i,s)=>i.offset===s.offset&&i.height===s.height&&i.active===s.active))}function Os(e){let t=__md_get("__palette")||{index:e.findIndex(r=>matchMedia(r.getAttribute("data-md-color-media")).matches)};return 
I(...e).pipe(se(r=>v(r,"change").pipe(m(()=>r))),z(e[Math.max(0,t.index)]),m(r=>({index:e.indexOf(r),color:{scheme:r.getAttribute("data-md-color-scheme"),primary:r.getAttribute("data-md-color-primary"),accent:r.getAttribute("data-md-color-accent")}})),X(1))}function xi(e){return P(()=>{let t=new E;t.subscribe(n=>{document.body.setAttribute("data-md-color-switching","");for(let[o,i]of Object.entries(n.color))document.body.setAttribute(`data-md-color-${o}`,i);for(let o=0;o{document.body.removeAttribute("data-md-color-switching")});let r=Q("input",e);return Os(r).pipe(w(n=>t.next(n)),C(()=>t.complete()),m(n=>H({ref:e},n)))})}var en=Ye(Br());function _s(e){e.setAttribute("data-md-copying","");let t=e.innerText;return e.removeAttribute("data-md-copying"),t}function wi({alert$:e}){en.default.isSupported()&&new F(t=>{new en.default("[data-clipboard-target], [data-clipboard-text]",{text:r=>r.getAttribute("data-clipboard-text")||_s(K(r.getAttribute("data-clipboard-target")))}).on("success",r=>t.next(r))}).pipe(w(t=>{t.trigger.focus()}),m(()=>re("clipboard.copied"))).subscribe(e)}function Ts(e){if(e.length<2)return[""];let[t,r]=[...e].sort((o,i)=>o.length-i.length).map(o=>o.replace(/[^/]+$/,"")),n=0;if(t===r)n=t.length;else for(;t.charCodeAt(n)===r.charCodeAt(n);)n++;return e.map(o=>o.replace(t.slice(0,n),""))}function hr(e){let t=__md_get("__sitemap",sessionStorage,e);if(t)return I(t);{let r=le();return Uo(new URL("sitemap.xml",e||r.base)).pipe(m(n=>Ts(Q("loc",n).map(o=>o.textContent))),ce(()=>R),De([]),w(n=>__md_set("__sitemap",n,sessionStorage,e)))}}function Ei({document$:e,location$:t,viewport$:r}){let n=le();if(location.protocol==="file:")return;"scrollRestoration"in history&&(history.scrollRestoration="manual",v(window,"beforeunload").subscribe(()=>{history.scrollRestoration="auto"}));let o=pe("link[rel=icon]");typeof o!="undefined"&&(o.href=o.href);let i=hr().pipe(m(f=>f.map(u=>`${new 
URL(u,n.base)}`)),S(f=>v(document.body,"click").pipe(x(u=>!u.metaKey&&!u.ctrlKey),S(u=>{if(u.target instanceof Element){let p=u.target.closest("a");if(p&&!p.target){let l=new URL(p.href);if(l.search="",l.hash="",l.pathname!==location.pathname&&f.includes(l.toString()))return u.preventDefault(),I({url:new URL(p.href)})}}return Se}))),ie()),s=v(window,"popstate").pipe(x(f=>f.state!==null),m(f=>({url:new URL(location.href),offset:f.state})),ie());A(i,s).pipe(B((f,u)=>f.url.href===u.url.href),m(({url:f})=>f)).subscribe(t);let a=t.pipe(J("pathname"),S(f=>mr(f.href).pipe(ce(()=>(pr(f),Se)))),ie());i.pipe(ut(a)).subscribe(({url:f})=>{history.pushState({},"",`${f}`)});let c=new DOMParser;a.pipe(S(f=>f.text()),m(f=>c.parseFromString(f,"text/html"))).subscribe(e),e.pipe(He(1)).subscribe(f=>{for(let u of["title","link[rel=canonical]","meta[name=author]","meta[name=description]","[data-md-component=announce]","[data-md-component=container]","[data-md-component=header-topic]","[data-md-component=outdated]","[data-md-component=logo]","[data-md-component=skip]",...Z("navigation.tabs.sticky")?["[data-md-component=tabs]"]:[]]){let p=pe(u),l=pe(u,f);typeof p!="undefined"&&typeof l!="undefined"&&p.replaceWith(l)}}),e.pipe(He(1),m(()=>_e("container")),S(f=>Q("script",f)),Ir(f=>{let u=M("script");if(f.src){for(let p of f.getAttributeNames())u.setAttribute(p,f.getAttribute(p));return f.replaceWith(u),new F(p=>{u.onload=()=>p.complete()})}else return u.textContent=f.textContent,f.replaceWith(u),R})).subscribe(),A(i,s).pipe(ut(e)).subscribe(({url:f,offset:u})=>{f.hash&&!u?Io(f.hash):window.scrollTo(0,(u==null?void 0:u.y)||0)}),r.pipe(Mt(i),Re(250),J("offset")).subscribe(({offset:f})=>{history.replaceState(f,"")}),A(i,s).pipe(Ce(2,1),x(([f,u])=>f.url.pathname===u.url.pathname),m(([,f])=>f)).subscribe(({offset:f})=>{window.scrollTo(0,(f==null?void 0:f.y)||0)})}var As=Ye(tn());var Oi=Ye(tn());function rn(e,t){let r=new RegExp(e.separator,"img"),n=(o,i,s)=>`${i}${s}`;return 
o=>{o=o.replace(/[\s*+\-:~^]+/g," ").trim();let i=new RegExp(`(^|${e.separator})(${o.replace(/[|\\{}()[\]^$+*?.-]/g,"\\$&").replace(r,"|")})`,"img");return s=>(t?(0,Oi.default)(s):s).replace(i,n).replace(/<\/mark>(\s+)]*>/img,"$1")}}function _i(e){return e.split(/"([^"]+)"/g).map((t,r)=>r&1?t.replace(/^\b|^(?![^\x00-\x7F]|$)|\s+/g," +"):t).join("").replace(/"|(?:^|\s+)[*+\-:^~]+(?=\s+|$)/g,"").trim()}function bt(e){return e.type===1}function Ti(e){return e.type===2}function vt(e){return e.type===3}function Rs({config:e,docs:t}){e.lang.length===1&&e.lang[0]==="en"&&(e.lang=[re("search.config.lang")]),e.separator==="[\\s\\-]+"&&(e.separator=re("search.config.separator"));let n={pipeline:re("search.config.pipeline").split(/\s*,\s*/).filter(Boolean),suggestions:Z("search.suggest")};return{config:e,docs:t,options:n}}function Mi(e,t){let r=le(),n=new Worker(e),o=new E,i=Ko(n,{tx$:o}).pipe(m(s=>{if(vt(s))for(let a of s.data.items)for(let c of a)c.location=`${new URL(c.location,r.base)}`;return s}),ie());return ue(t).pipe(m(s=>({type:0,data:Rs(s)}))).subscribe(o.next.bind(o)),{tx$:o,rx$:i}}function Li({document$:e}){let t=le(),r=je(new URL("../versions.json",t.base)).pipe(ce(()=>R)),n=r.pipe(m(o=>{let[,i]=t.base.match(/([^/]+)\/?$/);return o.find(({version:s,aliases:a})=>s===i||a.includes(i))||o[0]}));r.pipe(m(o=>new Map(o.map(i=>[`${new URL(`../${i.version}/`,t.base)}`,i]))),S(o=>v(document.body,"click").pipe(x(i=>!i.metaKey&&!i.ctrlKey),ae(n),S(([i,s])=>{if(i.target instanceof Element){let a=i.target.closest("a");if(a&&!a.target&&o.has(a.href)){let c=a.href;return!i.target.closest(".md-version")&&o.get(c)===s?R:(i.preventDefault(),I(c))}}return R}),S(i=>{let{version:s}=o.get(i);return hr(new URL(i)).pipe(m(a=>{let f=Oe().href.replace(t.base,"");return a.includes(f.split("#")[0])?new URL(`../${s}/${f}`,t.base):new URL(i)}))})))).subscribe(o=>pr(o)),Y([r,n]).subscribe(([o,i])=>{K(".md-header__topic").appendChild(ei(o,i))}),e.pipe(S(()=>n)).subscribe(o=>{var s;let 
i=__md_get("__outdated",sessionStorage);if(i===null){let a=((s=t.version)==null?void 0:s.default)||"latest";i=!o.aliases.includes(a),__md_set("__outdated",i,sessionStorage)}if(i)for(let a of te("outdated"))a.hidden=!1})}function ks(e,{rx$:t}){let r=(__search==null?void 0:__search.transform)||_i,{searchParams:n}=Oe();n.has("q")&&Ke("search",!0);let o=t.pipe(x(bt),oe(1),m(()=>n.get("q")||""));dt("search").pipe(x(a=>!a),oe(1)).subscribe(()=>{let a=new URL(location.href);a.searchParams.delete("q"),history.replaceState({},"",`${a}`)}),o.subscribe(a=>{a&&(e.value=a,e.focus())});let i=nr(e),s=A(v(e,"keyup"),v(e,"focus").pipe(ke(1)),o).pipe(m(()=>r(e.value)),z(""),B());return Y([s,i]).pipe(m(([a,c])=>({value:a,focus:c})),X(1))}function Ai(e,{tx$:t,rx$:r}){let n=new E,o=n.pipe(de(1));return n.pipe(J("value"),m(({value:i})=>({type:2,data:i}))).subscribe(t.next.bind(t)),n.pipe(J("focus")).subscribe(({focus:i})=>{i?(Ke("search",i),e.placeholder=""):e.placeholder=re("search.placeholder")}),v(e.form,"reset").pipe(ee(o)).subscribe(()=>e.focus()),ks(e,{tx$:t,rx$:r}).pipe(w(i=>n.next(i)),C(()=>n.complete()),m(i=>H({ref:e},i)),ie())}function Ci(e,{rx$:t},{query$:r}){let n=new E,o=Ao(e.parentElement).pipe(x(Boolean)),i=K(":scope > :first-child",e),s=K(":scope > :last-child",e),a=t.pipe(x(bt),oe(1));return n.pipe(ae(r),Mt(a)).subscribe(([{items:f},{value:u}])=>{if(u)switch(f.length){case 0:i.textContent=re("search.result.none");break;case 1:i.textContent=re("search.result.one");break;default:i.textContent=re("search.result.other",lr(f.length))}else i.textContent=re("search.result.placeholder")}),n.pipe(w(()=>s.innerHTML=""),S(({items:f})=>A(I(...f.slice(0,10)),I(...f.slice(10)).pipe(Ce(4),zr(o),S(([u])=>u))))).subscribe(f=>s.appendChild(Jo(f))),t.pipe(x(vt),m(({data:f})=>f)).pipe(w(f=>n.next(f)),C(()=>n.complete()),m(f=>H({ref:e},f)))}function Hs(e,{query$:t}){return t.pipe(m(({value:r})=>{let n=Oe();return 
n.hash="",n.searchParams.delete("h"),n.searchParams.set("q",r),{url:n}}))}function Ri(e,t){let r=new E;return r.subscribe(({url:n})=>{e.setAttribute("data-clipboard-text",e.href),e.href=`${n}`}),v(e,"click").subscribe(n=>n.preventDefault()),Hs(e,t).pipe(w(n=>r.next(n)),C(()=>r.complete()),m(n=>H({ref:e},n)))}function ki(e,{rx$:t},{keyboard$:r}){let n=new E,o=_e("search-query"),i=A(v(o,"keydown"),v(o,"focus")).pipe(Le(fe),m(()=>o.value),B());return n.pipe(Ze(i),m(([{suggestions:a},c])=>{let f=c.split(/([\s-]+)/);if((a==null?void 0:a.length)&&f[f.length-1]){let u=a[a.length-1];u.startsWith(f[f.length-1])&&(f[f.length-1]=u)}else f.length=0;return f})).subscribe(a=>e.innerHTML=a.join("").replace(/\s/g," ")),r.pipe(x(({mode:a})=>a==="search")).subscribe(a=>{switch(a.type){case"ArrowRight":e.innerText.length&&o.selectionStart===o.value.length&&(o.value=e.innerText);break}}),t.pipe(x(vt),m(({data:a})=>a)).pipe(w(a=>n.next(a)),C(()=>n.complete()),m(()=>({ref:e})))}function Hi(e,{index$:t,keyboard$:r}){let n=le();try{let o=(__search==null?void 0:__search.worker)||n.search,i=Mi(o,t),s=_e("search-query",e),a=_e("search-result",e),{tx$:c,rx$:f}=i;c.pipe(x(Ti),ut(f.pipe(x(bt))),oe(1)).subscribe(c.next.bind(c)),r.pipe(x(({mode:l})=>l==="search")).subscribe(l=>{let d=Ie();switch(l.type){case"Enter":if(d===s){let h=new Map;for(let b of Q(":first-child [href]",a)){let U=b.firstElementChild;h.set(b,parseFloat(U.getAttribute("data-md-score")))}if(h.size){let[[b]]=[...h].sort(([,U],[,G])=>G-U);b.click()}l.claim()}break;case"Escape":case"Tab":Ke("search",!1),s.blur();break;case"ArrowUp":case"ArrowDown":if(typeof d=="undefined")s.focus();else{let h=[s,...Q(":not(details) > [href], summary, details[open] 
[href]",a)],b=Math.max(0,(Math.max(0,h.indexOf(d))+h.length+(l.type==="ArrowUp"?-1:1))%h.length);h[b].focus()}l.claim();break;default:s!==Ie()&&s.focus()}}),r.pipe(x(({mode:l})=>l==="global")).subscribe(l=>{switch(l.type){case"f":case"s":case"/":s.focus(),s.select(),l.claim();break}});let u=Ai(s,i),p=Ci(a,i,{query$:u});return A(u,p).pipe(et(...te("search-share",e).map(l=>Ri(l,{query$:u})),...te("search-suggest",e).map(l=>ki(l,i,{keyboard$:r}))))}catch(o){return e.hidden=!0,Se}}function Pi(e,{index$:t,location$:r}){return Y([t,r.pipe(z(Oe()),x(n=>!!n.searchParams.get("h")))]).pipe(m(([n,o])=>rn(n.config,!0)(o.searchParams.get("h"))),m(n=>{var s;let o=new Map,i=document.createNodeIterator(e,NodeFilter.SHOW_TEXT);for(let a=i.nextNode();a;a=i.nextNode())if((s=a.parentElement)!=null&&s.offsetHeight){let c=a.textContent,f=n(c);f.length>c.length&&o.set(a,f)}for(let[a,c]of o){let{childNodes:f}=M("span",null,c);a.replaceWith(...Array.from(f))}return{ref:e,nodes:o}}))}function Ps(e,{viewport$:t,main$:r}){let n=e.parentElement,o=n.offsetTop-n.parentElement.offsetTop;return Y([r,t]).pipe(m(([{offset:i,height:s},{offset:{y:a}}])=>(s=s+Math.min(o,Math.max(0,a-i))-o,{height:s,locked:a>=i+o})),B((i,s)=>i.height===s.height&&i.locked===s.locked))}function nn(e,n){var o=n,{header$:t}=o,r=un(o,["header$"]);let i=K(".md-sidebar__scrollwrap",e),{y:s}=qe(i);return P(()=>{let a=new E;return a.pipe(Ae(0,xe),ae(t)).subscribe({next([{height:c},{height:f}]){i.style.height=`${c-2*s}px`,e.style.top=`${f}px`},complete(){i.style.height="",e.style.top=""}}),a.pipe(Le(xe),oe(1)).subscribe(()=>{for(let c of Q(".md-nav__link--active[href]",e)){let f=cr(c);if(typeof f!="undefined"){let u=c.offsetTop-f.offsetTop,{height:p}=he(f);f.scrollTo({top:u-p/2})}}}),Ps(e,r).pipe(w(c=>a.next(c)),C(()=>a.complete()),m(c=>H({ref:e},c)))})}function $i(e,t){if(typeof t!="undefined"){let r=`https://api.github.com/repos/${e}/${t}`;return 
_t(je(`${r}/releases/latest`).pipe(ce(()=>R),m(n=>({version:n.tag_name})),De({})),je(r).pipe(ce(()=>R),m(n=>({stars:n.stargazers_count,forks:n.forks_count})),De({}))).pipe(m(([n,o])=>H(H({},n),o)))}else{let r=`https://api.github.com/users/${e}`;return je(r).pipe(m(n=>({repositories:n.public_repos})),De({}))}}function Ii(e,t){let r=`https://${e}/api/v4/projects/${encodeURIComponent(t)}`;return je(r).pipe(ce(()=>R),m(({star_count:n,forks_count:o})=>({stars:n,forks:o})),De({}))}function ji(e){let[t]=e.match(/(git(?:hub|lab))/i)||[];switch(t.toLowerCase()){case"github":let[,r,n]=e.match(/^.+github\.com\/([^/]+)\/?([^/]+)?/i);return $i(r,n);case"gitlab":let[,o,i]=e.match(/^.+?([^/]*gitlab[^/]+)\/(.+?)\/?$/i);return Ii(o,i);default:return R}}var $s;function Is(e){return $s||($s=P(()=>{let t=__md_get("__source",sessionStorage);if(t)return I(t);if(te("consent").length){let n=__md_get("__consent");if(!(n&&n.github))return R}return ji(e.href).pipe(w(n=>__md_set("__source",n,sessionStorage)))}).pipe(ce(()=>R),x(t=>Object.keys(t).length>0),m(t=>({facts:t})),X(1)))}function Fi(e){let t=K(":scope > :last-child",e);return P(()=>{let r=new E;return r.subscribe(({facts:n})=>{t.appendChild(Xo(n)),t.classList.add("md-source__repository--active")}),Is(e).pipe(w(n=>r.next(n)),C(()=>r.complete()),m(n=>H({ref:e},n)))})}function js(e,{viewport$:t,header$:r}){return ve(document.body).pipe(S(()=>dr(e,{header$:r,viewport$:t})),m(({offset:{y:n}})=>({hidden:n>=10})),J("hidden"))}function Ui(e,t){return P(()=>{let r=new E;return r.subscribe({next({hidden:n}){e.hidden=n},complete(){e.hidden=!1}}),(Z("navigation.tabs.sticky")?I({hidden:!1}):js(e,t)).pipe(w(n=>r.next(n)),C(()=>r.complete()),m(n=>H({ref:e},n)))})}function Fs(e,{viewport$:t,header$:r}){let n=new Map,o=Q("[href^=\\#]",e);for(let a of o){let c=decodeURIComponent(a.hash.substring(1)),f=pe(`[id="${c}"]`);typeof f!="undefined"&&n.set(a,f)}let i=r.pipe(J("height"),m(({height:a})=>{let c=_e("main"),f=K(":scope > :first-child",c);return 
a+.8*(f.offsetTop-c.offsetTop)}),ie());return ve(document.body).pipe(J("height"),S(a=>P(()=>{let c=[];return I([...n].reduce((f,[u,p])=>{for(;c.length&&n.get(c[c.length-1]).tagName>=p.tagName;)c.pop();let l=p.offsetTop;for(;!l&&p.parentElement;)p=p.parentElement,l=p.offsetTop;return f.set([...c=[...c,u]].reverse(),l)},new Map))}).pipe(m(c=>new Map([...c].sort(([,f],[,u])=>f-u))),Ze(i),S(([c,f])=>t.pipe(Ur(([u,p],{offset:{y:l},size:d})=>{let h=l+d.height>=Math.floor(a.height);for(;p.length;){let[,b]=p[0];if(b-f=l&&!h)p=[u.pop(),...p];else break}return[u,p]},[[],[...c]]),B((u,p)=>u[0]===p[0]&&u[1]===p[1])))))).pipe(m(([a,c])=>({prev:a.map(([f])=>f),next:c.map(([f])=>f)})),z({prev:[],next:[]}),Ce(2,1),m(([a,c])=>a.prev.length{let o=new E,i=o.pipe(de(1));if(o.subscribe(({prev:s,next:a})=>{for(let[c]of a)c.classList.remove("md-nav__link--passed"),c.classList.remove("md-nav__link--active");for(let[c,[f]]of s.entries())f.classList.add("md-nav__link--passed"),f.classList.toggle("md-nav__link--active",c===s.length-1)}),Z("toc.follow")){let s=A(t.pipe(Re(1),m(()=>{})),t.pipe(Re(250),m(()=>"smooth")));o.pipe(x(({prev:a})=>a.length>0),ae(s)).subscribe(([{prev:a},c])=>{let[f]=a[a.length-1];if(f.offsetHeight){let u=cr(f);if(typeof u!="undefined"){let p=f.offsetTop-u.offsetTop,{height:l}=he(u);u.scrollTo({top:p-l/2,behavior:c})}}})}return Z("navigation.tracking")&&t.pipe(ee(i),J("offset"),Re(250),He(1),ee(n.pipe(He(1))),Tt({delay:250}),ae(o)).subscribe(([,{prev:s}])=>{let a=Oe(),c=s[s.length-1];if(c&&c.length){let[f]=c,{hash:u}=new URL(f.href);a.hash!==u&&(a.hash=u,history.replaceState({},"",`${a}`))}else a.hash="",history.replaceState({},"",`${a}`)}),Fs(e,{viewport$:t,header$:r}).pipe(w(s=>o.next(s)),C(()=>o.complete()),m(s=>H({ref:e},s)))})}function Us(e,{viewport$:t,main$:r,target$:n}){let o=t.pipe(m(({offset:{y:s}})=>s),Ce(2,1),m(([s,a])=>s>a&&a>0),B()),i=r.pipe(m(({active:s})=>s));return 
Y([i,o]).pipe(m(([s,a])=>!(s&&a)),B(),ee(n.pipe(He(1))),Fr(!0),Tt({delay:250}),m(s=>({hidden:s})))}function Wi(e,{viewport$:t,header$:r,main$:n,target$:o}){let i=new E,s=i.pipe(de(1));return i.subscribe({next({hidden:a}){e.hidden=a,a?(e.setAttribute("tabindex","-1"),e.blur()):e.removeAttribute("tabindex")},complete(){e.style.top="",e.hidden=!0,e.removeAttribute("tabindex")}}),r.pipe(ee(s),J("height")).subscribe(({height:a})=>{e.style.top=`${a+16}px`}),Us(e,{viewport$:t,main$:n,target$:o}).pipe(w(a=>i.next(a)),C(()=>i.complete()),m(a=>H({ref:e},a)))}function Vi({document$:e,tablet$:t}){e.pipe(S(()=>Q(".md-toggle--indeterminate, [data-md-state=indeterminate]")),w(r=>{r.indeterminate=!0,r.checked=!1}),se(r=>v(r,"change").pipe(Wr(()=>r.classList.contains("md-toggle--indeterminate")),m(()=>r))),ae(t)).subscribe(([r,n])=>{r.classList.remove("md-toggle--indeterminate"),n&&(r.checked=!1)})}function Ds(){return/(iPad|iPhone|iPod)/.test(navigator.userAgent)}function Ni({document$:e}){e.pipe(S(()=>Q("[data-md-scrollfix]")),w(t=>t.removeAttribute("data-md-scrollfix")),x(Ds),se(t=>v(t,"touchstart").pipe(m(()=>t)))).subscribe(t=>{let r=t.scrollTop;r===0?t.scrollTop=1:r+t.offsetHeight===t.scrollHeight&&(t.scrollTop=r-1)})}function zi({viewport$:e,tablet$:t}){Y([dt("search"),t]).pipe(m(([r,n])=>r&&!n),S(r=>I(r).pipe(ke(r?400:100))),ae(e)).subscribe(([r,{offset:{y:n}}])=>{if(r)document.body.setAttribute("data-md-scrolllock",""),document.body.style.top=`-${n}px`;else{let o=-1*parseInt(document.body.style.top,10);document.body.removeAttribute("data-md-scrolllock"),document.body.style.top="",o&&window.scrollTo(0,o)}})}Object.entries||(Object.entries=function(e){let t=[];for(let r of Object.keys(e))t.push([r,e[r]]);return t});Object.values||(Object.values=function(e){let t=[];for(let r of Object.keys(e))t.push(e[r]);return t});typeof Element!="undefined"&&(Element.prototype.scrollTo||(Element.prototype.scrollTo=function(e,t){typeof 
e=="object"?(this.scrollLeft=e.left,this.scrollTop=e.top):(this.scrollLeft=e,this.scrollTop=t)}),Element.prototype.replaceWith||(Element.prototype.replaceWith=function(...e){let t=this.parentNode;if(t){e.length===0&&t.removeChild(this);for(let r=e.length-1;r>=0;r--){let n=e[r];typeof n!="object"?n=document.createTextNode(n):n.parentNode&&n.parentNode.removeChild(n),r?t.insertBefore(this.previousSibling,n):t.replaceChild(n,this)}}}));document.documentElement.classList.remove("no-js");document.documentElement.classList.add("js");var tt=go(),vr=ko(),gt=jo(),on=Ro(),we=qo(),gr=Kr("(min-width: 960px)"),Ki=Kr("(min-width: 1220px)"),Qi=Fo(),Yi=le(),Bi=document.forms.namedItem("search")?(__search==null?void 0:__search.index)||je(new URL("search/search_index.json",Yi.base)):Se,an=new E;wi({alert$:an});Z("navigation.instant")&&Ei({document$:tt,location$:vr,viewport$:we});var qi;((qi=Yi.version)==null?void 0:qi.provider)==="mike"&&Li({document$:tt});A(vr,gt).pipe(ke(125)).subscribe(()=>{Ke("drawer",!1),Ke("search",!1)});on.pipe(x(({mode:e})=>e==="global")).subscribe(e=>{switch(e.type){case"p":case",":let t=pe("[href][rel=prev]");typeof t!="undefined"&&t.click();break;case"n":case".":let r=pe("[href][rel=next]");typeof r!="undefined"&&r.click();break}});Vi({document$:tt,tablet$:gr});Ni({document$:tt});zi({viewport$:we,tablet$:gr});var 
Qe=bi(_e("header"),{viewport$:we}),br=tt.pipe(m(()=>_e("main")),S(e=>yi(e,{viewport$:we,header$:Qe})),X(1)),Ws=A(...te("consent").map(e=>Yo(e,{target$:gt})),...te("dialog").map(e=>hi(e,{alert$:an})),...te("header").map(e=>vi(e,{viewport$:we,header$:Qe,main$:br})),...te("palette").map(e=>xi(e)),...te("search").map(e=>Hi(e,{index$:Bi,keyboard$:on})),...te("source").map(e=>Fi(e))),Vs=P(()=>A(...te("announce").map(e=>Qo(e)),...te("content").map(e=>di(e,{viewport$:we,target$:gt,print$:Qi})),...te("content").map(e=>Z("search.highlight")?Pi(e,{index$:Bi,location$:vr}):R),...te("header-title").map(e=>gi(e,{viewport$:we,header$:Qe})),...te("sidebar").map(e=>e.getAttribute("data-md-type")==="navigation"?Qr(Ki,()=>nn(e,{viewport$:we,header$:Qe,main$:br})):Qr(gr,()=>nn(e,{viewport$:we,header$:Qe,main$:br}))),...te("tabs").map(e=>Ui(e,{viewport$:we,header$:Qe})),...te("toc").map(e=>Di(e,{viewport$:we,header$:Qe,target$:gt})),...te("top").map(e=>Wi(e,{viewport$:we,header$:Qe,main$:br,target$:gt})))),Gi=tt.pipe(S(()=>Vs),et(Ws),X(1));Gi.subscribe();window.document$=tt;window.location$=vr;window.target$=gt;window.keyboard$=on;window.viewport$=we;window.tablet$=gr;window.screen$=Ki;window.print$=Qi;window.alert$=an;window.component$=Gi;})(); +//# sourceMappingURL=bundle.37e9125f.min.js.map + diff --git a/assets/javascripts/bundle.37e9125f.min.js.map b/assets/javascripts/bundle.37e9125f.min.js.map new file mode 100644 index 000000000..3fd65f05a --- /dev/null +++ b/assets/javascripts/bundle.37e9125f.min.js.map @@ -0,0 +1,8 @@ +{ + "version": 3, + "sources": ["node_modules/focus-visible/dist/focus-visible.js", "node_modules/url-polyfill/url-polyfill.js", "node_modules/rxjs/node_modules/tslib/tslib.js", "node_modules/clipboard/dist/clipboard.js", "node_modules/escape-html/index.js", "node_modules/array-flat-polyfill/index.mjs", "src/assets/javascripts/bundle.ts", "node_modules/unfetch/polyfill/index.js", "node_modules/rxjs/node_modules/tslib/modules/index.js", 
"node_modules/rxjs/src/internal/util/isFunction.ts", "node_modules/rxjs/src/internal/util/createErrorClass.ts", "node_modules/rxjs/src/internal/util/UnsubscriptionError.ts", "node_modules/rxjs/src/internal/util/arrRemove.ts", "node_modules/rxjs/src/internal/Subscription.ts", "node_modules/rxjs/src/internal/config.ts", "node_modules/rxjs/src/internal/scheduler/timeoutProvider.ts", "node_modules/rxjs/src/internal/util/reportUnhandledError.ts", "node_modules/rxjs/src/internal/util/noop.ts", "node_modules/rxjs/src/internal/NotificationFactories.ts", "node_modules/rxjs/src/internal/util/errorContext.ts", "node_modules/rxjs/src/internal/Subscriber.ts", "node_modules/rxjs/src/internal/symbol/observable.ts", "node_modules/rxjs/src/internal/util/identity.ts", "node_modules/rxjs/src/internal/util/pipe.ts", "node_modules/rxjs/src/internal/Observable.ts", "node_modules/rxjs/src/internal/util/lift.ts", "node_modules/rxjs/src/internal/operators/OperatorSubscriber.ts", "node_modules/rxjs/src/internal/scheduler/animationFrameProvider.ts", "node_modules/rxjs/src/internal/util/ObjectUnsubscribedError.ts", "node_modules/rxjs/src/internal/Subject.ts", "node_modules/rxjs/src/internal/scheduler/dateTimestampProvider.ts", "node_modules/rxjs/src/internal/ReplaySubject.ts", "node_modules/rxjs/src/internal/scheduler/Action.ts", "node_modules/rxjs/src/internal/scheduler/intervalProvider.ts", "node_modules/rxjs/src/internal/scheduler/AsyncAction.ts", "node_modules/rxjs/src/internal/Scheduler.ts", "node_modules/rxjs/src/internal/scheduler/AsyncScheduler.ts", "node_modules/rxjs/src/internal/scheduler/async.ts", "node_modules/rxjs/src/internal/scheduler/AnimationFrameAction.ts", "node_modules/rxjs/src/internal/scheduler/AnimationFrameScheduler.ts", "node_modules/rxjs/src/internal/scheduler/animationFrame.ts", "node_modules/rxjs/src/internal/observable/empty.ts", "node_modules/rxjs/src/internal/util/isScheduler.ts", "node_modules/rxjs/src/internal/util/args.ts", 
"node_modules/rxjs/src/internal/util/isArrayLike.ts", "node_modules/rxjs/src/internal/util/isPromise.ts", "node_modules/rxjs/src/internal/util/isInteropObservable.ts", "node_modules/rxjs/src/internal/util/isAsyncIterable.ts", "node_modules/rxjs/src/internal/util/throwUnobservableError.ts", "node_modules/rxjs/src/internal/symbol/iterator.ts", "node_modules/rxjs/src/internal/util/isIterable.ts", "node_modules/rxjs/src/internal/util/isReadableStreamLike.ts", "node_modules/rxjs/src/internal/observable/innerFrom.ts", "node_modules/rxjs/src/internal/util/executeSchedule.ts", "node_modules/rxjs/src/internal/operators/observeOn.ts", "node_modules/rxjs/src/internal/operators/subscribeOn.ts", "node_modules/rxjs/src/internal/scheduled/scheduleObservable.ts", "node_modules/rxjs/src/internal/scheduled/schedulePromise.ts", "node_modules/rxjs/src/internal/scheduled/scheduleArray.ts", "node_modules/rxjs/src/internal/scheduled/scheduleIterable.ts", "node_modules/rxjs/src/internal/scheduled/scheduleAsyncIterable.ts", "node_modules/rxjs/src/internal/scheduled/scheduleReadableStreamLike.ts", "node_modules/rxjs/src/internal/scheduled/scheduled.ts", "node_modules/rxjs/src/internal/observable/from.ts", "node_modules/rxjs/src/internal/observable/of.ts", "node_modules/rxjs/src/internal/observable/throwError.ts", "node_modules/rxjs/src/internal/util/isDate.ts", "node_modules/rxjs/src/internal/operators/map.ts", "node_modules/rxjs/src/internal/util/mapOneOrManyArgs.ts", "node_modules/rxjs/src/internal/util/argsArgArrayOrObject.ts", "node_modules/rxjs/src/internal/util/createObject.ts", "node_modules/rxjs/src/internal/observable/combineLatest.ts", "node_modules/rxjs/src/internal/operators/mergeInternals.ts", "node_modules/rxjs/src/internal/operators/mergeMap.ts", "node_modules/rxjs/src/internal/operators/mergeAll.ts", "node_modules/rxjs/src/internal/operators/concatAll.ts", "node_modules/rxjs/src/internal/observable/concat.ts", "node_modules/rxjs/src/internal/observable/defer.ts", 
"node_modules/rxjs/src/internal/observable/fromEvent.ts", "node_modules/rxjs/src/internal/observable/fromEventPattern.ts", "node_modules/rxjs/src/internal/observable/timer.ts", "node_modules/rxjs/src/internal/observable/merge.ts", "node_modules/rxjs/src/internal/observable/never.ts", "node_modules/rxjs/src/internal/util/argsOrArgArray.ts", "node_modules/rxjs/src/internal/operators/filter.ts", "node_modules/rxjs/src/internal/observable/zip.ts", "node_modules/rxjs/src/internal/operators/audit.ts", "node_modules/rxjs/src/internal/operators/auditTime.ts", "node_modules/rxjs/src/internal/operators/bufferCount.ts", "node_modules/rxjs/src/internal/operators/catchError.ts", "node_modules/rxjs/src/internal/operators/scanInternals.ts", "node_modules/rxjs/src/internal/operators/combineLatest.ts", "node_modules/rxjs/src/internal/operators/combineLatestWith.ts", "node_modules/rxjs/src/internal/operators/concatMap.ts", "node_modules/rxjs/src/internal/operators/debounceTime.ts", "node_modules/rxjs/src/internal/operators/defaultIfEmpty.ts", "node_modules/rxjs/src/internal/operators/take.ts", "node_modules/rxjs/src/internal/operators/ignoreElements.ts", "node_modules/rxjs/src/internal/operators/mapTo.ts", "node_modules/rxjs/src/internal/operators/delayWhen.ts", "node_modules/rxjs/src/internal/operators/delay.ts", "node_modules/rxjs/src/internal/operators/distinctUntilChanged.ts", "node_modules/rxjs/src/internal/operators/distinctUntilKeyChanged.ts", "node_modules/rxjs/src/internal/operators/endWith.ts", "node_modules/rxjs/src/internal/operators/finalize.ts", "node_modules/rxjs/src/internal/operators/takeLast.ts", "node_modules/rxjs/src/internal/operators/merge.ts", "node_modules/rxjs/src/internal/operators/mergeWith.ts", "node_modules/rxjs/src/internal/operators/repeat.ts", "node_modules/rxjs/src/internal/operators/sample.ts", "node_modules/rxjs/src/internal/operators/scan.ts", "node_modules/rxjs/src/internal/operators/share.ts", 
"node_modules/rxjs/src/internal/operators/shareReplay.ts", "node_modules/rxjs/src/internal/operators/skip.ts", "node_modules/rxjs/src/internal/operators/skipUntil.ts", "node_modules/rxjs/src/internal/operators/startWith.ts", "node_modules/rxjs/src/internal/operators/switchMap.ts", "node_modules/rxjs/src/internal/operators/takeUntil.ts", "node_modules/rxjs/src/internal/operators/takeWhile.ts", "node_modules/rxjs/src/internal/operators/tap.ts", "node_modules/rxjs/src/internal/operators/throttle.ts", "node_modules/rxjs/src/internal/operators/throttleTime.ts", "node_modules/rxjs/src/internal/operators/withLatestFrom.ts", "node_modules/rxjs/src/internal/operators/zip.ts", "node_modules/rxjs/src/internal/operators/zipWith.ts", "src/assets/javascripts/browser/document/index.ts", "src/assets/javascripts/browser/element/_/index.ts", "src/assets/javascripts/browser/element/focus/index.ts", "src/assets/javascripts/browser/element/offset/_/index.ts", "src/assets/javascripts/browser/element/offset/content/index.ts", "node_modules/resize-observer-polyfill/dist/ResizeObserver.es.js", "src/assets/javascripts/browser/element/size/_/index.ts", "src/assets/javascripts/browser/element/size/content/index.ts", "src/assets/javascripts/browser/element/visibility/index.ts", "src/assets/javascripts/browser/toggle/index.ts", "src/assets/javascripts/browser/keyboard/index.ts", "src/assets/javascripts/browser/location/_/index.ts", "src/assets/javascripts/utilities/h/index.ts", "src/assets/javascripts/utilities/string/index.ts", "src/assets/javascripts/browser/location/hash/index.ts", "src/assets/javascripts/browser/media/index.ts", "src/assets/javascripts/browser/request/index.ts", "src/assets/javascripts/browser/script/index.ts", "src/assets/javascripts/browser/viewport/offset/index.ts", "src/assets/javascripts/browser/viewport/size/index.ts", "src/assets/javascripts/browser/viewport/_/index.ts", "src/assets/javascripts/browser/viewport/at/index.ts", 
"src/assets/javascripts/browser/worker/index.ts", "src/assets/javascripts/_/index.ts", "src/assets/javascripts/components/_/index.ts", "src/assets/javascripts/components/announce/index.ts", "src/assets/javascripts/components/consent/index.ts", "src/assets/javascripts/components/content/code/_/index.ts", "src/assets/javascripts/templates/tooltip/index.tsx", "src/assets/javascripts/templates/annotation/index.tsx", "src/assets/javascripts/templates/clipboard/index.tsx", "src/assets/javascripts/templates/search/index.tsx", "src/assets/javascripts/templates/source/index.tsx", "src/assets/javascripts/templates/tabbed/index.tsx", "src/assets/javascripts/templates/table/index.tsx", "src/assets/javascripts/templates/version/index.tsx", "src/assets/javascripts/components/content/annotation/_/index.ts", "src/assets/javascripts/components/content/annotation/list/index.ts", "src/assets/javascripts/components/content/code/mermaid/index.ts", "src/assets/javascripts/components/content/details/index.ts", "src/assets/javascripts/components/content/table/index.ts", "src/assets/javascripts/components/content/tabs/index.ts", "src/assets/javascripts/components/content/_/index.ts", "src/assets/javascripts/components/dialog/index.ts", "src/assets/javascripts/components/header/_/index.ts", "src/assets/javascripts/components/header/title/index.ts", "src/assets/javascripts/components/main/index.ts", "src/assets/javascripts/components/palette/index.ts", "src/assets/javascripts/integrations/clipboard/index.ts", "src/assets/javascripts/integrations/sitemap/index.ts", "src/assets/javascripts/integrations/instant/index.ts", "src/assets/javascripts/integrations/search/document/index.ts", "src/assets/javascripts/integrations/search/highlighter/index.ts", "src/assets/javascripts/integrations/search/query/transform/index.ts", "src/assets/javascripts/integrations/search/worker/message/index.ts", "src/assets/javascripts/integrations/search/worker/_/index.ts", 
"src/assets/javascripts/integrations/version/index.ts", "src/assets/javascripts/components/search/query/index.ts", "src/assets/javascripts/components/search/result/index.ts", "src/assets/javascripts/components/search/share/index.ts", "src/assets/javascripts/components/search/suggest/index.ts", "src/assets/javascripts/components/search/_/index.ts", "src/assets/javascripts/components/search/highlight/index.ts", "src/assets/javascripts/components/sidebar/index.ts", "src/assets/javascripts/components/source/facts/github/index.ts", "src/assets/javascripts/components/source/facts/gitlab/index.ts", "src/assets/javascripts/components/source/facts/_/index.ts", "src/assets/javascripts/components/source/_/index.ts", "src/assets/javascripts/components/tabs/index.ts", "src/assets/javascripts/components/toc/index.ts", "src/assets/javascripts/components/top/index.ts", "src/assets/javascripts/patches/indeterminate/index.ts", "src/assets/javascripts/patches/scrollfix/index.ts", "src/assets/javascripts/patches/scrolllock/index.ts", "src/assets/javascripts/polyfills/index.ts"], + "sourceRoot": "../../../..", + "sourcesContent": ["(function (global, factory) {\n typeof exports === 'object' && typeof module !== 'undefined' ? factory() :\n typeof define === 'function' && define.amd ? 
define(factory) :\n (factory());\n}(this, (function () { 'use strict';\n\n /**\n * Applies the :focus-visible polyfill at the given scope.\n * A scope in this case is either the top-level Document or a Shadow Root.\n *\n * @param {(Document|ShadowRoot)} scope\n * @see https://github.com/WICG/focus-visible\n */\n function applyFocusVisiblePolyfill(scope) {\n var hadKeyboardEvent = true;\n var hadFocusVisibleRecently = false;\n var hadFocusVisibleRecentlyTimeout = null;\n\n var inputTypesAllowlist = {\n text: true,\n search: true,\n url: true,\n tel: true,\n email: true,\n password: true,\n number: true,\n date: true,\n month: true,\n week: true,\n time: true,\n datetime: true,\n 'datetime-local': true\n };\n\n /**\n * Helper function for legacy browsers and iframes which sometimes focus\n * elements like document, body, and non-interactive SVG.\n * @param {Element} el\n */\n function isValidFocusTarget(el) {\n if (\n el &&\n el !== document &&\n el.nodeName !== 'HTML' &&\n el.nodeName !== 'BODY' &&\n 'classList' in el &&\n 'contains' in el.classList\n ) {\n return true;\n }\n return false;\n }\n\n /**\n * Computes whether the given element should automatically trigger the\n * `focus-visible` class being added, i.e. 
whether it should always match\n * `:focus-visible` when focused.\n * @param {Element} el\n * @return {boolean}\n */\n function focusTriggersKeyboardModality(el) {\n var type = el.type;\n var tagName = el.tagName;\n\n if (tagName === 'INPUT' && inputTypesAllowlist[type] && !el.readOnly) {\n return true;\n }\n\n if (tagName === 'TEXTAREA' && !el.readOnly) {\n return true;\n }\n\n if (el.isContentEditable) {\n return true;\n }\n\n return false;\n }\n\n /**\n * Add the `focus-visible` class to the given element if it was not added by\n * the author.\n * @param {Element} el\n */\n function addFocusVisibleClass(el) {\n if (el.classList.contains('focus-visible')) {\n return;\n }\n el.classList.add('focus-visible');\n el.setAttribute('data-focus-visible-added', '');\n }\n\n /**\n * Remove the `focus-visible` class from the given element if it was not\n * originally added by the author.\n * @param {Element} el\n */\n function removeFocusVisibleClass(el) {\n if (!el.hasAttribute('data-focus-visible-added')) {\n return;\n }\n el.classList.remove('focus-visible');\n el.removeAttribute('data-focus-visible-added');\n }\n\n /**\n * If the most recent user interaction was via the keyboard;\n * and the key press did not include a meta, alt/option, or control key;\n * then the modality is keyboard. 
Otherwise, the modality is not keyboard.\n * Apply `focus-visible` to any current active element and keep track\n * of our keyboard modality state with `hadKeyboardEvent`.\n * @param {KeyboardEvent} e\n */\n function onKeyDown(e) {\n if (e.metaKey || e.altKey || e.ctrlKey) {\n return;\n }\n\n if (isValidFocusTarget(scope.activeElement)) {\n addFocusVisibleClass(scope.activeElement);\n }\n\n hadKeyboardEvent = true;\n }\n\n /**\n * If at any point a user clicks with a pointing device, ensure that we change\n * the modality away from keyboard.\n * This avoids the situation where a user presses a key on an already focused\n * element, and then clicks on a different element, focusing it with a\n * pointing device, while we still think we're in keyboard modality.\n * @param {Event} e\n */\n function onPointerDown(e) {\n hadKeyboardEvent = false;\n }\n\n /**\n * On `focus`, add the `focus-visible` class to the target if:\n * - the target received focus as a result of keyboard navigation, or\n * - the event target is an element that will likely require interaction\n * via the keyboard (e.g. 
a text box)\n * @param {Event} e\n */\n function onFocus(e) {\n // Prevent IE from focusing the document or HTML element.\n if (!isValidFocusTarget(e.target)) {\n return;\n }\n\n if (hadKeyboardEvent || focusTriggersKeyboardModality(e.target)) {\n addFocusVisibleClass(e.target);\n }\n }\n\n /**\n * On `blur`, remove the `focus-visible` class from the target.\n * @param {Event} e\n */\n function onBlur(e) {\n if (!isValidFocusTarget(e.target)) {\n return;\n }\n\n if (\n e.target.classList.contains('focus-visible') ||\n e.target.hasAttribute('data-focus-visible-added')\n ) {\n // To detect a tab/window switch, we look for a blur event followed\n // rapidly by a visibility change.\n // If we don't see a visibility change within 100ms, it's probably a\n // regular focus change.\n hadFocusVisibleRecently = true;\n window.clearTimeout(hadFocusVisibleRecentlyTimeout);\n hadFocusVisibleRecentlyTimeout = window.setTimeout(function() {\n hadFocusVisibleRecently = false;\n }, 100);\n removeFocusVisibleClass(e.target);\n }\n }\n\n /**\n * If the user changes tabs, keep track of whether or not the previously\n * focused element had .focus-visible.\n * @param {Event} e\n */\n function onVisibilityChange(e) {\n if (document.visibilityState === 'hidden') {\n // If the tab becomes active again, the browser will handle calling focus\n // on the element (Safari actually calls it twice).\n // If this tab change caused a blur on an element with focus-visible,\n // re-apply the class when the user switches back to the tab.\n if (hadFocusVisibleRecently) {\n hadKeyboardEvent = true;\n }\n addInitialPointerMoveListeners();\n }\n }\n\n /**\n * Add a group of listeners to detect usage of any pointing devices.\n * These listeners will be added when the polyfill first loads, and anytime\n * the window is blurred, so that they are active when the window regains\n * focus.\n */\n function addInitialPointerMoveListeners() {\n document.addEventListener('mousemove', onInitialPointerMove);\n 
document.addEventListener('mousedown', onInitialPointerMove);\n document.addEventListener('mouseup', onInitialPointerMove);\n document.addEventListener('pointermove', onInitialPointerMove);\n document.addEventListener('pointerdown', onInitialPointerMove);\n document.addEventListener('pointerup', onInitialPointerMove);\n document.addEventListener('touchmove', onInitialPointerMove);\n document.addEventListener('touchstart', onInitialPointerMove);\n document.addEventListener('touchend', onInitialPointerMove);\n }\n\n function removeInitialPointerMoveListeners() {\n document.removeEventListener('mousemove', onInitialPointerMove);\n document.removeEventListener('mousedown', onInitialPointerMove);\n document.removeEventListener('mouseup', onInitialPointerMove);\n document.removeEventListener('pointermove', onInitialPointerMove);\n document.removeEventListener('pointerdown', onInitialPointerMove);\n document.removeEventListener('pointerup', onInitialPointerMove);\n document.removeEventListener('touchmove', onInitialPointerMove);\n document.removeEventListener('touchstart', onInitialPointerMove);\n document.removeEventListener('touchend', onInitialPointerMove);\n }\n\n /**\n * When the polfyill first loads, assume the user is in keyboard modality.\n * If any event is received from a pointing device (e.g. mouse, pointer,\n * touch), turn off keyboard modality.\n * This accounts for situations where focus enters the page from the URL bar.\n * @param {Event} e\n */\n function onInitialPointerMove(e) {\n // Work around a Safari quirk that fires a mousemove on whenever the\n // window blurs, even if you're tabbing out of the page. \u00AF\\_(\u30C4)_/\u00AF\n if (e.target.nodeName && e.target.nodeName.toLowerCase() === 'html') {\n return;\n }\n\n hadKeyboardEvent = false;\n removeInitialPointerMoveListeners();\n }\n\n // For some kinds of state, we are interested in changes at the global scope\n // only. 
For example, global pointer input, global key presses and global\n // visibility change should affect the state at every scope:\n document.addEventListener('keydown', onKeyDown, true);\n document.addEventListener('mousedown', onPointerDown, true);\n document.addEventListener('pointerdown', onPointerDown, true);\n document.addEventListener('touchstart', onPointerDown, true);\n document.addEventListener('visibilitychange', onVisibilityChange, true);\n\n addInitialPointerMoveListeners();\n\n // For focus and blur, we specifically care about state changes in the local\n // scope. This is because focus / blur events that originate from within a\n // shadow root are not re-dispatched from the host element if it was already\n // the active element in its own scope:\n scope.addEventListener('focus', onFocus, true);\n scope.addEventListener('blur', onBlur, true);\n\n // We detect that a node is a ShadowRoot by ensuring that it is a\n // DocumentFragment and also has a host property. This check covers native\n // implementation and polyfill implementation transparently. If we only cared\n // about the native implementation, we could just check if the scope was\n // an instance of a ShadowRoot.\n if (scope.nodeType === Node.DOCUMENT_FRAGMENT_NODE && scope.host) {\n // Since a ShadowRoot is a special kind of DocumentFragment, it does not\n // have a root element to add a class to. 
So, we add this attribute to the\n // host element instead:\n scope.host.setAttribute('data-js-focus-visible', '');\n } else if (scope.nodeType === Node.DOCUMENT_NODE) {\n document.documentElement.classList.add('js-focus-visible');\n document.documentElement.setAttribute('data-js-focus-visible', '');\n }\n }\n\n // It is important to wrap all references to global window and document in\n // these checks to support server-side rendering use cases\n // @see https://github.com/WICG/focus-visible/issues/199\n if (typeof window !== 'undefined' && typeof document !== 'undefined') {\n // Make the polyfill helper globally available. This can be used as a signal\n // to interested libraries that wish to coordinate with the polyfill for e.g.,\n // applying the polyfill to a shadow root:\n window.applyFocusVisiblePolyfill = applyFocusVisiblePolyfill;\n\n // Notify interested libraries of the polyfill's presence, in case the\n // polyfill was loaded lazily:\n var event;\n\n try {\n event = new CustomEvent('focus-visible-polyfill-ready');\n } catch (error) {\n // IE11 does not support using CustomEvent as a constructor directly:\n event = document.createEvent('CustomEvent');\n event.initCustomEvent('focus-visible-polyfill-ready', false, false, {});\n }\n\n window.dispatchEvent(event);\n }\n\n if (typeof document !== 'undefined') {\n // Apply the polyfill to the global document, so that no JavaScript\n // coordination is required to use the polyfill in the top-level document:\n applyFocusVisiblePolyfill(document);\n }\n\n})));\n", "(function(global) {\r\n /**\r\n * Polyfill URLSearchParams\r\n *\r\n * Inspired from : https://github.com/WebReflection/url-search-params/blob/master/src/url-search-params.js\r\n */\r\n\r\n var checkIfIteratorIsSupported = function() {\r\n try {\r\n return !!Symbol.iterator;\r\n } catch (error) {\r\n return false;\r\n }\r\n };\r\n\r\n\r\n var iteratorSupported = checkIfIteratorIsSupported();\r\n\r\n var createIterator = function(items) {\r\n var 
iterator = {\r\n next: function() {\r\n var value = items.shift();\r\n return { done: value === void 0, value: value };\r\n }\r\n };\r\n\r\n if (iteratorSupported) {\r\n iterator[Symbol.iterator] = function() {\r\n return iterator;\r\n };\r\n }\r\n\r\n return iterator;\r\n };\r\n\r\n /**\r\n * Search param name and values should be encoded according to https://url.spec.whatwg.org/#urlencoded-serializing\r\n * encodeURIComponent() produces the same result except encoding spaces as `%20` instead of `+`.\r\n */\r\n var serializeParam = function(value) {\r\n return encodeURIComponent(value).replace(/%20/g, '+');\r\n };\r\n\r\n var deserializeParam = function(value) {\r\n return decodeURIComponent(String(value).replace(/\\+/g, ' '));\r\n };\r\n\r\n var polyfillURLSearchParams = function() {\r\n\r\n var URLSearchParams = function(searchString) {\r\n Object.defineProperty(this, '_entries', { writable: true, value: {} });\r\n var typeofSearchString = typeof searchString;\r\n\r\n if (typeofSearchString === 'undefined') {\r\n // do nothing\r\n } else if (typeofSearchString === 'string') {\r\n if (searchString !== '') {\r\n this._fromString(searchString);\r\n }\r\n } else if (searchString instanceof URLSearchParams) {\r\n var _this = this;\r\n searchString.forEach(function(value, name) {\r\n _this.append(name, value);\r\n });\r\n } else if ((searchString !== null) && (typeofSearchString === 'object')) {\r\n if (Object.prototype.toString.call(searchString) === '[object Array]') {\r\n for (var i = 0; i < searchString.length; i++) {\r\n var entry = searchString[i];\r\n if ((Object.prototype.toString.call(entry) === '[object Array]') || (entry.length !== 2)) {\r\n this.append(entry[0], entry[1]);\r\n } else {\r\n throw new TypeError('Expected [string, any] as entry at index ' + i + ' of URLSearchParams\\'s input');\r\n }\r\n }\r\n } else {\r\n for (var key in searchString) {\r\n if (searchString.hasOwnProperty(key)) {\r\n this.append(key, searchString[key]);\r\n }\r\n }\r\n }\r\n 
} else {\r\n throw new TypeError('Unsupported input\\'s type for URLSearchParams');\r\n }\r\n };\r\n\r\n var proto = URLSearchParams.prototype;\r\n\r\n proto.append = function(name, value) {\r\n if (name in this._entries) {\r\n this._entries[name].push(String(value));\r\n } else {\r\n this._entries[name] = [String(value)];\r\n }\r\n };\r\n\r\n proto.delete = function(name) {\r\n delete this._entries[name];\r\n };\r\n\r\n proto.get = function(name) {\r\n return (name in this._entries) ? this._entries[name][0] : null;\r\n };\r\n\r\n proto.getAll = function(name) {\r\n return (name in this._entries) ? this._entries[name].slice(0) : [];\r\n };\r\n\r\n proto.has = function(name) {\r\n return (name in this._entries);\r\n };\r\n\r\n proto.set = function(name, value) {\r\n this._entries[name] = [String(value)];\r\n };\r\n\r\n proto.forEach = function(callback, thisArg) {\r\n var entries;\r\n for (var name in this._entries) {\r\n if (this._entries.hasOwnProperty(name)) {\r\n entries = this._entries[name];\r\n for (var i = 0; i < entries.length; i++) {\r\n callback.call(thisArg, entries[i], name, this);\r\n }\r\n }\r\n }\r\n };\r\n\r\n proto.keys = function() {\r\n var items = [];\r\n this.forEach(function(value, name) {\r\n items.push(name);\r\n });\r\n return createIterator(items);\r\n };\r\n\r\n proto.values = function() {\r\n var items = [];\r\n this.forEach(function(value) {\r\n items.push(value);\r\n });\r\n return createIterator(items);\r\n };\r\n\r\n proto.entries = function() {\r\n var items = [];\r\n this.forEach(function(value, name) {\r\n items.push([name, value]);\r\n });\r\n return createIterator(items);\r\n };\r\n\r\n if (iteratorSupported) {\r\n proto[Symbol.iterator] = proto.entries;\r\n }\r\n\r\n proto.toString = function() {\r\n var searchArray = [];\r\n this.forEach(function(value, name) {\r\n searchArray.push(serializeParam(name) + '=' + serializeParam(value));\r\n });\r\n return searchArray.join('&');\r\n };\r\n\r\n\r\n global.URLSearchParams = 
URLSearchParams;\r\n };\r\n\r\n var checkIfURLSearchParamsSupported = function() {\r\n try {\r\n var URLSearchParams = global.URLSearchParams;\r\n\r\n return (\r\n (new URLSearchParams('?a=1').toString() === 'a=1') &&\r\n (typeof URLSearchParams.prototype.set === 'function') &&\r\n (typeof URLSearchParams.prototype.entries === 'function')\r\n );\r\n } catch (e) {\r\n return false;\r\n }\r\n };\r\n\r\n if (!checkIfURLSearchParamsSupported()) {\r\n polyfillURLSearchParams();\r\n }\r\n\r\n var proto = global.URLSearchParams.prototype;\r\n\r\n if (typeof proto.sort !== 'function') {\r\n proto.sort = function() {\r\n var _this = this;\r\n var items = [];\r\n this.forEach(function(value, name) {\r\n items.push([name, value]);\r\n if (!_this._entries) {\r\n _this.delete(name);\r\n }\r\n });\r\n items.sort(function(a, b) {\r\n if (a[0] < b[0]) {\r\n return -1;\r\n } else if (a[0] > b[0]) {\r\n return +1;\r\n } else {\r\n return 0;\r\n }\r\n });\r\n if (_this._entries) { // force reset because IE keeps keys index\r\n _this._entries = {};\r\n }\r\n for (var i = 0; i < items.length; i++) {\r\n this.append(items[i][0], items[i][1]);\r\n }\r\n };\r\n }\r\n\r\n if (typeof proto._fromString !== 'function') {\r\n Object.defineProperty(proto, '_fromString', {\r\n enumerable: false,\r\n configurable: false,\r\n writable: false,\r\n value: function(searchString) {\r\n if (this._entries) {\r\n this._entries = {};\r\n } else {\r\n var keys = [];\r\n this.forEach(function(value, name) {\r\n keys.push(name);\r\n });\r\n for (var i = 0; i < keys.length; i++) {\r\n this.delete(keys[i]);\r\n }\r\n }\r\n\r\n searchString = searchString.replace(/^\\?/, '');\r\n var attributes = searchString.split('&');\r\n var attribute;\r\n for (var i = 0; i < attributes.length; i++) {\r\n attribute = attributes[i].split('=');\r\n this.append(\r\n deserializeParam(attribute[0]),\r\n (attribute.length > 1) ? 
deserializeParam(attribute[1]) : ''\r\n );\r\n }\r\n }\r\n });\r\n }\r\n\r\n // HTMLAnchorElement\r\n\r\n})(\r\n (typeof global !== 'undefined') ? global\r\n : ((typeof window !== 'undefined') ? window\r\n : ((typeof self !== 'undefined') ? self : this))\r\n);\r\n\r\n(function(global) {\r\n /**\r\n * Polyfill URL\r\n *\r\n * Inspired from : https://github.com/arv/DOM-URL-Polyfill/blob/master/src/url.js\r\n */\r\n\r\n var checkIfURLIsSupported = function() {\r\n try {\r\n var u = new global.URL('b', 'http://a');\r\n u.pathname = 'c d';\r\n return (u.href === 'http://a/c%20d') && u.searchParams;\r\n } catch (e) {\r\n return false;\r\n }\r\n };\r\n\r\n\r\n var polyfillURL = function() {\r\n var _URL = global.URL;\r\n\r\n var URL = function(url, base) {\r\n if (typeof url !== 'string') url = String(url);\r\n if (base && typeof base !== 'string') base = String(base);\r\n\r\n // Only create another document if the base is different from current location.\r\n var doc = document, baseElement;\r\n if (base && (global.location === void 0 || base !== global.location.href)) {\r\n base = base.toLowerCase();\r\n doc = document.implementation.createHTMLDocument('');\r\n baseElement = doc.createElement('base');\r\n baseElement.href = base;\r\n doc.head.appendChild(baseElement);\r\n try {\r\n if (baseElement.href.indexOf(base) !== 0) throw new Error(baseElement.href);\r\n } catch (err) {\r\n throw new Error('URL unable to set base ' + base + ' due to ' + err);\r\n }\r\n }\r\n\r\n var anchorElement = doc.createElement('a');\r\n anchorElement.href = url;\r\n if (baseElement) {\r\n doc.body.appendChild(anchorElement);\r\n anchorElement.href = anchorElement.href; // force href to refresh\r\n }\r\n\r\n var inputElement = doc.createElement('input');\r\n inputElement.type = 'url';\r\n inputElement.value = url;\r\n\r\n if (anchorElement.protocol === ':' || !/:/.test(anchorElement.href) || (!inputElement.checkValidity() && !base)) {\r\n throw new TypeError('Invalid URL');\r\n }\r\n\r\n 
Object.defineProperty(this, '_anchorElement', {\r\n value: anchorElement\r\n });\r\n\r\n\r\n // create a linked searchParams which reflect its changes on URL\r\n var searchParams = new global.URLSearchParams(this.search);\r\n var enableSearchUpdate = true;\r\n var enableSearchParamsUpdate = true;\r\n var _this = this;\r\n ['append', 'delete', 'set'].forEach(function(methodName) {\r\n var method = searchParams[methodName];\r\n searchParams[methodName] = function() {\r\n method.apply(searchParams, arguments);\r\n if (enableSearchUpdate) {\r\n enableSearchParamsUpdate = false;\r\n _this.search = searchParams.toString();\r\n enableSearchParamsUpdate = true;\r\n }\r\n };\r\n });\r\n\r\n Object.defineProperty(this, 'searchParams', {\r\n value: searchParams,\r\n enumerable: true\r\n });\r\n\r\n var search = void 0;\r\n Object.defineProperty(this, '_updateSearchParams', {\r\n enumerable: false,\r\n configurable: false,\r\n writable: false,\r\n value: function() {\r\n if (this.search !== search) {\r\n search = this.search;\r\n if (enableSearchParamsUpdate) {\r\n enableSearchUpdate = false;\r\n this.searchParams._fromString(this.search);\r\n enableSearchUpdate = true;\r\n }\r\n }\r\n }\r\n });\r\n };\r\n\r\n var proto = URL.prototype;\r\n\r\n var linkURLWithAnchorAttribute = function(attributeName) {\r\n Object.defineProperty(proto, attributeName, {\r\n get: function() {\r\n return this._anchorElement[attributeName];\r\n },\r\n set: function(value) {\r\n this._anchorElement[attributeName] = value;\r\n },\r\n enumerable: true\r\n });\r\n };\r\n\r\n ['hash', 'host', 'hostname', 'port', 'protocol']\r\n .forEach(function(attributeName) {\r\n linkURLWithAnchorAttribute(attributeName);\r\n });\r\n\r\n Object.defineProperty(proto, 'search', {\r\n get: function() {\r\n return this._anchorElement['search'];\r\n },\r\n set: function(value) {\r\n this._anchorElement['search'] = value;\r\n this._updateSearchParams();\r\n },\r\n enumerable: true\r\n });\r\n\r\n 
Object.defineProperties(proto, {\r\n\r\n 'toString': {\r\n get: function() {\r\n var _this = this;\r\n return function() {\r\n return _this.href;\r\n };\r\n }\r\n },\r\n\r\n 'href': {\r\n get: function() {\r\n return this._anchorElement.href.replace(/\\?$/, '');\r\n },\r\n set: function(value) {\r\n this._anchorElement.href = value;\r\n this._updateSearchParams();\r\n },\r\n enumerable: true\r\n },\r\n\r\n 'pathname': {\r\n get: function() {\r\n return this._anchorElement.pathname.replace(/(^\\/?)/, '/');\r\n },\r\n set: function(value) {\r\n this._anchorElement.pathname = value;\r\n },\r\n enumerable: true\r\n },\r\n\r\n 'origin': {\r\n get: function() {\r\n // get expected port from protocol\r\n var expectedPort = { 'http:': 80, 'https:': 443, 'ftp:': 21 }[this._anchorElement.protocol];\r\n // add port to origin if, expected port is different than actual port\r\n // and it is not empty f.e http://foo:8080\r\n // 8080 != 80 && 8080 != ''\r\n var addPortToOrigin = this._anchorElement.port != expectedPort &&\r\n this._anchorElement.port !== '';\r\n\r\n return this._anchorElement.protocol +\r\n '//' +\r\n this._anchorElement.hostname +\r\n (addPortToOrigin ? 
(':' + this._anchorElement.port) : '');\r\n },\r\n enumerable: true\r\n },\r\n\r\n 'password': { // TODO\r\n get: function() {\r\n return '';\r\n },\r\n set: function(value) {\r\n },\r\n enumerable: true\r\n },\r\n\r\n 'username': { // TODO\r\n get: function() {\r\n return '';\r\n },\r\n set: function(value) {\r\n },\r\n enumerable: true\r\n },\r\n });\r\n\r\n URL.createObjectURL = function(blob) {\r\n return _URL.createObjectURL.apply(_URL, arguments);\r\n };\r\n\r\n URL.revokeObjectURL = function(url) {\r\n return _URL.revokeObjectURL.apply(_URL, arguments);\r\n };\r\n\r\n global.URL = URL;\r\n\r\n };\r\n\r\n if (!checkIfURLIsSupported()) {\r\n polyfillURL();\r\n }\r\n\r\n if ((global.location !== void 0) && !('origin' in global.location)) {\r\n var getOrigin = function() {\r\n return global.location.protocol + '//' + global.location.hostname + (global.location.port ? (':' + global.location.port) : '');\r\n };\r\n\r\n try {\r\n Object.defineProperty(global.location, 'origin', {\r\n get: getOrigin,\r\n enumerable: true\r\n });\r\n } catch (e) {\r\n setInterval(function() {\r\n global.location.origin = getOrigin();\r\n }, 100);\r\n }\r\n }\r\n\r\n})(\r\n (typeof global !== 'undefined') ? global\r\n : ((typeof window !== 'undefined') ? window\r\n : ((typeof self !== 'undefined') ? self : this))\r\n);\r\n", "/*! *****************************************************************************\r\nCopyright (c) Microsoft Corporation.\r\n\r\nPermission to use, copy, modify, and/or distribute this software for any\r\npurpose with or without fee is hereby granted.\r\n\r\nTHE SOFTWARE IS PROVIDED \"AS IS\" AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH\r\nREGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY\r\nAND FITNESS. 
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT,\r\nINDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM\r\nLOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR\r\nOTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR\r\nPERFORMANCE OF THIS SOFTWARE.\r\n***************************************************************************** */\r\n/* global global, define, System, Reflect, Promise */\r\nvar __extends;\r\nvar __assign;\r\nvar __rest;\r\nvar __decorate;\r\nvar __param;\r\nvar __metadata;\r\nvar __awaiter;\r\nvar __generator;\r\nvar __exportStar;\r\nvar __values;\r\nvar __read;\r\nvar __spread;\r\nvar __spreadArrays;\r\nvar __spreadArray;\r\nvar __await;\r\nvar __asyncGenerator;\r\nvar __asyncDelegator;\r\nvar __asyncValues;\r\nvar __makeTemplateObject;\r\nvar __importStar;\r\nvar __importDefault;\r\nvar __classPrivateFieldGet;\r\nvar __classPrivateFieldSet;\r\nvar __createBinding;\r\n(function (factory) {\r\n var root = typeof global === \"object\" ? global : typeof self === \"object\" ? self : typeof this === \"object\" ? this : {};\r\n if (typeof define === \"function\" && define.amd) {\r\n define(\"tslib\", [\"exports\"], function (exports) { factory(createExporter(root, createExporter(exports))); });\r\n }\r\n else if (typeof module === \"object\" && typeof module.exports === \"object\") {\r\n factory(createExporter(root, createExporter(module.exports)));\r\n }\r\n else {\r\n factory(createExporter(root));\r\n }\r\n function createExporter(exports, previous) {\r\n if (exports !== root) {\r\n if (typeof Object.create === \"function\") {\r\n Object.defineProperty(exports, \"__esModule\", { value: true });\r\n }\r\n else {\r\n exports.__esModule = true;\r\n }\r\n }\r\n return function (id, v) { return exports[id] = previous ? 
previous(id, v) : v; };\r\n }\r\n})\r\n(function (exporter) {\r\n var extendStatics = Object.setPrototypeOf ||\r\n ({ __proto__: [] } instanceof Array && function (d, b) { d.__proto__ = b; }) ||\r\n function (d, b) { for (var p in b) if (Object.prototype.hasOwnProperty.call(b, p)) d[p] = b[p]; };\r\n\r\n __extends = function (d, b) {\r\n if (typeof b !== \"function\" && b !== null)\r\n throw new TypeError(\"Class extends value \" + String(b) + \" is not a constructor or null\");\r\n extendStatics(d, b);\r\n function __() { this.constructor = d; }\r\n d.prototype = b === null ? Object.create(b) : (__.prototype = b.prototype, new __());\r\n };\r\n\r\n __assign = Object.assign || function (t) {\r\n for (var s, i = 1, n = arguments.length; i < n; i++) {\r\n s = arguments[i];\r\n for (var p in s) if (Object.prototype.hasOwnProperty.call(s, p)) t[p] = s[p];\r\n }\r\n return t;\r\n };\r\n\r\n __rest = function (s, e) {\r\n var t = {};\r\n for (var p in s) if (Object.prototype.hasOwnProperty.call(s, p) && e.indexOf(p) < 0)\r\n t[p] = s[p];\r\n if (s != null && typeof Object.getOwnPropertySymbols === \"function\")\r\n for (var i = 0, p = Object.getOwnPropertySymbols(s); i < p.length; i++) {\r\n if (e.indexOf(p[i]) < 0 && Object.prototype.propertyIsEnumerable.call(s, p[i]))\r\n t[p[i]] = s[p[i]];\r\n }\r\n return t;\r\n };\r\n\r\n __decorate = function (decorators, target, key, desc) {\r\n var c = arguments.length, r = c < 3 ? target : desc === null ? desc = Object.getOwnPropertyDescriptor(target, key) : desc, d;\r\n if (typeof Reflect === \"object\" && typeof Reflect.decorate === \"function\") r = Reflect.decorate(decorators, target, key, desc);\r\n else for (var i = decorators.length - 1; i >= 0; i--) if (d = decorators[i]) r = (c < 3 ? d(r) : c > 3 ? 
d(target, key, r) : d(target, key)) || r;\r\n return c > 3 && r && Object.defineProperty(target, key, r), r;\r\n };\r\n\r\n __param = function (paramIndex, decorator) {\r\n return function (target, key) { decorator(target, key, paramIndex); }\r\n };\r\n\r\n __metadata = function (metadataKey, metadataValue) {\r\n if (typeof Reflect === \"object\" && typeof Reflect.metadata === \"function\") return Reflect.metadata(metadataKey, metadataValue);\r\n };\r\n\r\n __awaiter = function (thisArg, _arguments, P, generator) {\r\n function adopt(value) { return value instanceof P ? value : new P(function (resolve) { resolve(value); }); }\r\n return new (P || (P = Promise))(function (resolve, reject) {\r\n function fulfilled(value) { try { step(generator.next(value)); } catch (e) { reject(e); } }\r\n function rejected(value) { try { step(generator[\"throw\"](value)); } catch (e) { reject(e); } }\r\n function step(result) { result.done ? resolve(result.value) : adopt(result.value).then(fulfilled, rejected); }\r\n step((generator = generator.apply(thisArg, _arguments || [])).next());\r\n });\r\n };\r\n\r\n __generator = function (thisArg, body) {\r\n var _ = { label: 0, sent: function() { if (t[0] & 1) throw t[1]; return t[1]; }, trys: [], ops: [] }, f, y, t, g;\r\n return g = { next: verb(0), \"throw\": verb(1), \"return\": verb(2) }, typeof Symbol === \"function\" && (g[Symbol.iterator] = function() { return this; }), g;\r\n function verb(n) { return function (v) { return step([n, v]); }; }\r\n function step(op) {\r\n if (f) throw new TypeError(\"Generator is already executing.\");\r\n while (_) try {\r\n if (f = 1, y && (t = op[0] & 2 ? y[\"return\"] : op[0] ? 
y[\"throw\"] || ((t = y[\"return\"]) && t.call(y), 0) : y.next) && !(t = t.call(y, op[1])).done) return t;\r\n if (y = 0, t) op = [op[0] & 2, t.value];\r\n switch (op[0]) {\r\n case 0: case 1: t = op; break;\r\n case 4: _.label++; return { value: op[1], done: false };\r\n case 5: _.label++; y = op[1]; op = [0]; continue;\r\n case 7: op = _.ops.pop(); _.trys.pop(); continue;\r\n default:\r\n if (!(t = _.trys, t = t.length > 0 && t[t.length - 1]) && (op[0] === 6 || op[0] === 2)) { _ = 0; continue; }\r\n if (op[0] === 3 && (!t || (op[1] > t[0] && op[1] < t[3]))) { _.label = op[1]; break; }\r\n if (op[0] === 6 && _.label < t[1]) { _.label = t[1]; t = op; break; }\r\n if (t && _.label < t[2]) { _.label = t[2]; _.ops.push(op); break; }\r\n if (t[2]) _.ops.pop();\r\n _.trys.pop(); continue;\r\n }\r\n op = body.call(thisArg, _);\r\n } catch (e) { op = [6, e]; y = 0; } finally { f = t = 0; }\r\n if (op[0] & 5) throw op[1]; return { value: op[0] ? op[1] : void 0, done: true };\r\n }\r\n };\r\n\r\n __exportStar = function(m, o) {\r\n for (var p in m) if (p !== \"default\" && !Object.prototype.hasOwnProperty.call(o, p)) __createBinding(o, m, p);\r\n };\r\n\r\n __createBinding = Object.create ? (function(o, m, k, k2) {\r\n if (k2 === undefined) k2 = k;\r\n Object.defineProperty(o, k2, { enumerable: true, get: function() { return m[k]; } });\r\n }) : (function(o, m, k, k2) {\r\n if (k2 === undefined) k2 = k;\r\n o[k2] = m[k];\r\n });\r\n\r\n __values = function (o) {\r\n var s = typeof Symbol === \"function\" && Symbol.iterator, m = s && o[s], i = 0;\r\n if (m) return m.call(o);\r\n if (o && typeof o.length === \"number\") return {\r\n next: function () {\r\n if (o && i >= o.length) o = void 0;\r\n return { value: o && o[i++], done: !o };\r\n }\r\n };\r\n throw new TypeError(s ? 
\"Object is not iterable.\" : \"Symbol.iterator is not defined.\");\r\n };\r\n\r\n __read = function (o, n) {\r\n var m = typeof Symbol === \"function\" && o[Symbol.iterator];\r\n if (!m) return o;\r\n var i = m.call(o), r, ar = [], e;\r\n try {\r\n while ((n === void 0 || n-- > 0) && !(r = i.next()).done) ar.push(r.value);\r\n }\r\n catch (error) { e = { error: error }; }\r\n finally {\r\n try {\r\n if (r && !r.done && (m = i[\"return\"])) m.call(i);\r\n }\r\n finally { if (e) throw e.error; }\r\n }\r\n return ar;\r\n };\r\n\r\n /** @deprecated */\r\n __spread = function () {\r\n for (var ar = [], i = 0; i < arguments.length; i++)\r\n ar = ar.concat(__read(arguments[i]));\r\n return ar;\r\n };\r\n\r\n /** @deprecated */\r\n __spreadArrays = function () {\r\n for (var s = 0, i = 0, il = arguments.length; i < il; i++) s += arguments[i].length;\r\n for (var r = Array(s), k = 0, i = 0; i < il; i++)\r\n for (var a = arguments[i], j = 0, jl = a.length; j < jl; j++, k++)\r\n r[k] = a[j];\r\n return r;\r\n };\r\n\r\n __spreadArray = function (to, from, pack) {\r\n if (pack || arguments.length === 2) for (var i = 0, l = from.length, ar; i < l; i++) {\r\n if (ar || !(i in from)) {\r\n if (!ar) ar = Array.prototype.slice.call(from, 0, i);\r\n ar[i] = from[i];\r\n }\r\n }\r\n return to.concat(ar || Array.prototype.slice.call(from));\r\n };\r\n\r\n __await = function (v) {\r\n return this instanceof __await ? 
(this.v = v, this) : new __await(v);\r\n };\r\n\r\n __asyncGenerator = function (thisArg, _arguments, generator) {\r\n if (!Symbol.asyncIterator) throw new TypeError(\"Symbol.asyncIterator is not defined.\");\r\n var g = generator.apply(thisArg, _arguments || []), i, q = [];\r\n return i = {}, verb(\"next\"), verb(\"throw\"), verb(\"return\"), i[Symbol.asyncIterator] = function () { return this; }, i;\r\n function verb(n) { if (g[n]) i[n] = function (v) { return new Promise(function (a, b) { q.push([n, v, a, b]) > 1 || resume(n, v); }); }; }\r\n function resume(n, v) { try { step(g[n](v)); } catch (e) { settle(q[0][3], e); } }\r\n function step(r) { r.value instanceof __await ? Promise.resolve(r.value.v).then(fulfill, reject) : settle(q[0][2], r); }\r\n function fulfill(value) { resume(\"next\", value); }\r\n function reject(value) { resume(\"throw\", value); }\r\n function settle(f, v) { if (f(v), q.shift(), q.length) resume(q[0][0], q[0][1]); }\r\n };\r\n\r\n __asyncDelegator = function (o) {\r\n var i, p;\r\n return i = {}, verb(\"next\"), verb(\"throw\", function (e) { throw e; }), verb(\"return\"), i[Symbol.iterator] = function () { return this; }, i;\r\n function verb(n, f) { i[n] = o[n] ? function (v) { return (p = !p) ? { value: __await(o[n](v)), done: n === \"return\" } : f ? f(v) : v; } : f; }\r\n };\r\n\r\n __asyncValues = function (o) {\r\n if (!Symbol.asyncIterator) throw new TypeError(\"Symbol.asyncIterator is not defined.\");\r\n var m = o[Symbol.asyncIterator], i;\r\n return m ? m.call(o) : (o = typeof __values === \"function\" ? 
__values(o) : o[Symbol.iterator](), i = {}, verb(\"next\"), verb(\"throw\"), verb(\"return\"), i[Symbol.asyncIterator] = function () { return this; }, i);\r\n function verb(n) { i[n] = o[n] && function (v) { return new Promise(function (resolve, reject) { v = o[n](v), settle(resolve, reject, v.done, v.value); }); }; }\r\n function settle(resolve, reject, d, v) { Promise.resolve(v).then(function(v) { resolve({ value: v, done: d }); }, reject); }\r\n };\r\n\r\n __makeTemplateObject = function (cooked, raw) {\r\n if (Object.defineProperty) { Object.defineProperty(cooked, \"raw\", { value: raw }); } else { cooked.raw = raw; }\r\n return cooked;\r\n };\r\n\r\n var __setModuleDefault = Object.create ? (function(o, v) {\r\n Object.defineProperty(o, \"default\", { enumerable: true, value: v });\r\n }) : function(o, v) {\r\n o[\"default\"] = v;\r\n };\r\n\r\n __importStar = function (mod) {\r\n if (mod && mod.__esModule) return mod;\r\n var result = {};\r\n if (mod != null) for (var k in mod) if (k !== \"default\" && Object.prototype.hasOwnProperty.call(mod, k)) __createBinding(result, mod, k);\r\n __setModuleDefault(result, mod);\r\n return result;\r\n };\r\n\r\n __importDefault = function (mod) {\r\n return (mod && mod.__esModule) ? mod : { \"default\": mod };\r\n };\r\n\r\n __classPrivateFieldGet = function (receiver, state, kind, f) {\r\n if (kind === \"a\" && !f) throw new TypeError(\"Private accessor was defined without a getter\");\r\n if (typeof state === \"function\" ? receiver !== state || !f : !state.has(receiver)) throw new TypeError(\"Cannot read private member from an object whose class did not declare it\");\r\n return kind === \"m\" ? f : kind === \"a\" ? f.call(receiver) : f ? 
f.value : state.get(receiver);\r\n };\r\n\r\n __classPrivateFieldSet = function (receiver, state, value, kind, f) {\r\n if (kind === \"m\") throw new TypeError(\"Private method is not writable\");\r\n if (kind === \"a\" && !f) throw new TypeError(\"Private accessor was defined without a setter\");\r\n if (typeof state === \"function\" ? receiver !== state || !f : !state.has(receiver)) throw new TypeError(\"Cannot write private member to an object whose class did not declare it\");\r\n return (kind === \"a\" ? f.call(receiver, value) : f ? f.value = value : state.set(receiver, value)), value;\r\n };\r\n\r\n exporter(\"__extends\", __extends);\r\n exporter(\"__assign\", __assign);\r\n exporter(\"__rest\", __rest);\r\n exporter(\"__decorate\", __decorate);\r\n exporter(\"__param\", __param);\r\n exporter(\"__metadata\", __metadata);\r\n exporter(\"__awaiter\", __awaiter);\r\n exporter(\"__generator\", __generator);\r\n exporter(\"__exportStar\", __exportStar);\r\n exporter(\"__createBinding\", __createBinding);\r\n exporter(\"__values\", __values);\r\n exporter(\"__read\", __read);\r\n exporter(\"__spread\", __spread);\r\n exporter(\"__spreadArrays\", __spreadArrays);\r\n exporter(\"__spreadArray\", __spreadArray);\r\n exporter(\"__await\", __await);\r\n exporter(\"__asyncGenerator\", __asyncGenerator);\r\n exporter(\"__asyncDelegator\", __asyncDelegator);\r\n exporter(\"__asyncValues\", __asyncValues);\r\n exporter(\"__makeTemplateObject\", __makeTemplateObject);\r\n exporter(\"__importStar\", __importStar);\r\n exporter(\"__importDefault\", __importDefault);\r\n exporter(\"__classPrivateFieldGet\", __classPrivateFieldGet);\r\n exporter(\"__classPrivateFieldSet\", __classPrivateFieldSet);\r\n});\r\n", "/*!\n * clipboard.js v2.0.11\n * https://clipboardjs.com/\n *\n * Licensed MIT \u00A9 Zeno Rocha\n */\n(function webpackUniversalModuleDefinition(root, factory) {\n\tif(typeof exports === 'object' && typeof module === 'object')\n\t\tmodule.exports = factory();\n\telse 
if(typeof define === 'function' && define.amd)\n\t\tdefine([], factory);\n\telse if(typeof exports === 'object')\n\t\texports[\"ClipboardJS\"] = factory();\n\telse\n\t\troot[\"ClipboardJS\"] = factory();\n})(this, function() {\nreturn /******/ (function() { // webpackBootstrap\n/******/ \tvar __webpack_modules__ = ({\n\n/***/ 686:\n/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) {\n\n\"use strict\";\n\n// EXPORTS\n__webpack_require__.d(__webpack_exports__, {\n \"default\": function() { return /* binding */ clipboard; }\n});\n\n// EXTERNAL MODULE: ./node_modules/tiny-emitter/index.js\nvar tiny_emitter = __webpack_require__(279);\nvar tiny_emitter_default = /*#__PURE__*/__webpack_require__.n(tiny_emitter);\n// EXTERNAL MODULE: ./node_modules/good-listener/src/listen.js\nvar listen = __webpack_require__(370);\nvar listen_default = /*#__PURE__*/__webpack_require__.n(listen);\n// EXTERNAL MODULE: ./node_modules/select/src/select.js\nvar src_select = __webpack_require__(817);\nvar select_default = /*#__PURE__*/__webpack_require__.n(src_select);\n;// CONCATENATED MODULE: ./src/common/command.js\n/**\n * Executes a given operation type.\n * @param {String} type\n * @return {Boolean}\n */\nfunction command(type) {\n try {\n return document.execCommand(type);\n } catch (err) {\n return false;\n }\n}\n;// CONCATENATED MODULE: ./src/actions/cut.js\n\n\n/**\n * Cut action wrapper.\n * @param {String|HTMLElement} target\n * @return {String}\n */\n\nvar ClipboardActionCut = function ClipboardActionCut(target) {\n var selectedText = select_default()(target);\n command('cut');\n return selectedText;\n};\n\n/* harmony default export */ var actions_cut = (ClipboardActionCut);\n;// CONCATENATED MODULE: ./src/common/create-fake-element.js\n/**\n * Creates a fake textarea element with a value.\n * @param {String} value\n * @return {HTMLElement}\n */\nfunction createFakeElement(value) {\n var isRTL = document.documentElement.getAttribute('dir') === 
'rtl';\n var fakeElement = document.createElement('textarea'); // Prevent zooming on iOS\n\n fakeElement.style.fontSize = '12pt'; // Reset box model\n\n fakeElement.style.border = '0';\n fakeElement.style.padding = '0';\n fakeElement.style.margin = '0'; // Move element out of screen horizontally\n\n fakeElement.style.position = 'absolute';\n fakeElement.style[isRTL ? 'right' : 'left'] = '-9999px'; // Move element to the same position vertically\n\n var yPosition = window.pageYOffset || document.documentElement.scrollTop;\n fakeElement.style.top = \"\".concat(yPosition, \"px\");\n fakeElement.setAttribute('readonly', '');\n fakeElement.value = value;\n return fakeElement;\n}\n;// CONCATENATED MODULE: ./src/actions/copy.js\n\n\n\n/**\n * Create fake copy action wrapper using a fake element.\n * @param {String} target\n * @param {Object} options\n * @return {String}\n */\n\nvar fakeCopyAction = function fakeCopyAction(value, options) {\n var fakeElement = createFakeElement(value);\n options.container.appendChild(fakeElement);\n var selectedText = select_default()(fakeElement);\n command('copy');\n fakeElement.remove();\n return selectedText;\n};\n/**\n * Copy action wrapper.\n * @param {String|HTMLElement} target\n * @param {Object} options\n * @return {String}\n */\n\n\nvar ClipboardActionCopy = function ClipboardActionCopy(target) {\n var options = arguments.length > 1 && arguments[1] !== undefined ? arguments[1] : {\n container: document.body\n };\n var selectedText = '';\n\n if (typeof target === 'string') {\n selectedText = fakeCopyAction(target, options);\n } else if (target instanceof HTMLInputElement && !['text', 'search', 'url', 'tel', 'password'].includes(target === null || target === void 0 ? void 0 : target.type)) {\n // If input type doesn't support `setSelectionRange`. Simulate it. 
https://developer.mozilla.org/en-US/docs/Web/API/HTMLInputElement/setSelectionRange\n selectedText = fakeCopyAction(target.value, options);\n } else {\n selectedText = select_default()(target);\n command('copy');\n }\n\n return selectedText;\n};\n\n/* harmony default export */ var actions_copy = (ClipboardActionCopy);\n;// CONCATENATED MODULE: ./src/actions/default.js\nfunction _typeof(obj) { \"@babel/helpers - typeof\"; if (typeof Symbol === \"function\" && typeof Symbol.iterator === \"symbol\") { _typeof = function _typeof(obj) { return typeof obj; }; } else { _typeof = function _typeof(obj) { return obj && typeof Symbol === \"function\" && obj.constructor === Symbol && obj !== Symbol.prototype ? \"symbol\" : typeof obj; }; } return _typeof(obj); }\n\n\n\n/**\n * Inner function which performs selection from either `text` or `target`\n * properties and then executes copy or cut operations.\n * @param {Object} options\n */\n\nvar ClipboardActionDefault = function ClipboardActionDefault() {\n var options = arguments.length > 0 && arguments[0] !== undefined ? arguments[0] : {};\n // Defines base properties passed from constructor.\n var _options$action = options.action,\n action = _options$action === void 0 ? 'copy' : _options$action,\n container = options.container,\n target = options.target,\n text = options.text; // Sets the `action` to be performed which can be either 'copy' or 'cut'.\n\n if (action !== 'copy' && action !== 'cut') {\n throw new Error('Invalid \"action\" value, use either \"copy\" or \"cut\"');\n } // Sets the `target` property using an element that will be have its content copied.\n\n\n if (target !== undefined) {\n if (target && _typeof(target) === 'object' && target.nodeType === 1) {\n if (action === 'copy' && target.hasAttribute('disabled')) {\n throw new Error('Invalid \"target\" attribute. 
Please use \"readonly\" instead of \"disabled\" attribute');\n }\n\n if (action === 'cut' && (target.hasAttribute('readonly') || target.hasAttribute('disabled'))) {\n throw new Error('Invalid \"target\" attribute. You can\\'t cut text from elements with \"readonly\" or \"disabled\" attributes');\n }\n } else {\n throw new Error('Invalid \"target\" value, use a valid Element');\n }\n } // Define selection strategy based on `text` property.\n\n\n if (text) {\n return actions_copy(text, {\n container: container\n });\n } // Defines which selection strategy based on `target` property.\n\n\n if (target) {\n return action === 'cut' ? actions_cut(target) : actions_copy(target, {\n container: container\n });\n }\n};\n\n/* harmony default export */ var actions_default = (ClipboardActionDefault);\n;// CONCATENATED MODULE: ./src/clipboard.js\nfunction clipboard_typeof(obj) { \"@babel/helpers - typeof\"; if (typeof Symbol === \"function\" && typeof Symbol.iterator === \"symbol\") { clipboard_typeof = function _typeof(obj) { return typeof obj; }; } else { clipboard_typeof = function _typeof(obj) { return obj && typeof Symbol === \"function\" && obj.constructor === Symbol && obj !== Symbol.prototype ? 
\"symbol\" : typeof obj; }; } return clipboard_typeof(obj); }\n\nfunction _classCallCheck(instance, Constructor) { if (!(instance instanceof Constructor)) { throw new TypeError(\"Cannot call a class as a function\"); } }\n\nfunction _defineProperties(target, props) { for (var i = 0; i < props.length; i++) { var descriptor = props[i]; descriptor.enumerable = descriptor.enumerable || false; descriptor.configurable = true; if (\"value\" in descriptor) descriptor.writable = true; Object.defineProperty(target, descriptor.key, descriptor); } }\n\nfunction _createClass(Constructor, protoProps, staticProps) { if (protoProps) _defineProperties(Constructor.prototype, protoProps); if (staticProps) _defineProperties(Constructor, staticProps); return Constructor; }\n\nfunction _inherits(subClass, superClass) { if (typeof superClass !== \"function\" && superClass !== null) { throw new TypeError(\"Super expression must either be null or a function\"); } subClass.prototype = Object.create(superClass && superClass.prototype, { constructor: { value: subClass, writable: true, configurable: true } }); if (superClass) _setPrototypeOf(subClass, superClass); }\n\nfunction _setPrototypeOf(o, p) { _setPrototypeOf = Object.setPrototypeOf || function _setPrototypeOf(o, p) { o.__proto__ = p; return o; }; return _setPrototypeOf(o, p); }\n\nfunction _createSuper(Derived) { var hasNativeReflectConstruct = _isNativeReflectConstruct(); return function _createSuperInternal() { var Super = _getPrototypeOf(Derived), result; if (hasNativeReflectConstruct) { var NewTarget = _getPrototypeOf(this).constructor; result = Reflect.construct(Super, arguments, NewTarget); } else { result = Super.apply(this, arguments); } return _possibleConstructorReturn(this, result); }; }\n\nfunction _possibleConstructorReturn(self, call) { if (call && (clipboard_typeof(call) === \"object\" || typeof call === \"function\")) { return call; } return _assertThisInitialized(self); }\n\nfunction _assertThisInitialized(self) { if 
(self === void 0) { throw new ReferenceError(\"this hasn't been initialised - super() hasn't been called\"); } return self; }\n\nfunction _isNativeReflectConstruct() { if (typeof Reflect === \"undefined\" || !Reflect.construct) return false; if (Reflect.construct.sham) return false; if (typeof Proxy === \"function\") return true; try { Date.prototype.toString.call(Reflect.construct(Date, [], function () {})); return true; } catch (e) { return false; } }\n\nfunction _getPrototypeOf(o) { _getPrototypeOf = Object.setPrototypeOf ? Object.getPrototypeOf : function _getPrototypeOf(o) { return o.__proto__ || Object.getPrototypeOf(o); }; return _getPrototypeOf(o); }\n\n\n\n\n\n\n/**\n * Helper function to retrieve attribute value.\n * @param {String} suffix\n * @param {Element} element\n */\n\nfunction getAttributeValue(suffix, element) {\n var attribute = \"data-clipboard-\".concat(suffix);\n\n if (!element.hasAttribute(attribute)) {\n return;\n }\n\n return element.getAttribute(attribute);\n}\n/**\n * Base class which takes one or more elements, adds event listeners to them,\n * and instantiates a new `ClipboardAction` on each click.\n */\n\n\nvar Clipboard = /*#__PURE__*/function (_Emitter) {\n _inherits(Clipboard, _Emitter);\n\n var _super = _createSuper(Clipboard);\n\n /**\n * @param {String|HTMLElement|HTMLCollection|NodeList} trigger\n * @param {Object} options\n */\n function Clipboard(trigger, options) {\n var _this;\n\n _classCallCheck(this, Clipboard);\n\n _this = _super.call(this);\n\n _this.resolveOptions(options);\n\n _this.listenClick(trigger);\n\n return _this;\n }\n /**\n * Defines if attributes would be resolved using internal setter functions\n * or custom functions that were passed in the constructor.\n * @param {Object} options\n */\n\n\n _createClass(Clipboard, [{\n key: \"resolveOptions\",\n value: function resolveOptions() {\n var options = arguments.length > 0 && arguments[0] !== undefined ? 
arguments[0] : {};\n this.action = typeof options.action === 'function' ? options.action : this.defaultAction;\n this.target = typeof options.target === 'function' ? options.target : this.defaultTarget;\n this.text = typeof options.text === 'function' ? options.text : this.defaultText;\n this.container = clipboard_typeof(options.container) === 'object' ? options.container : document.body;\n }\n /**\n * Adds a click event listener to the passed trigger.\n * @param {String|HTMLElement|HTMLCollection|NodeList} trigger\n */\n\n }, {\n key: \"listenClick\",\n value: function listenClick(trigger) {\n var _this2 = this;\n\n this.listener = listen_default()(trigger, 'click', function (e) {\n return _this2.onClick(e);\n });\n }\n /**\n * Defines a new `ClipboardAction` on each click event.\n * @param {Event} e\n */\n\n }, {\n key: \"onClick\",\n value: function onClick(e) {\n var trigger = e.delegateTarget || e.currentTarget;\n var action = this.action(trigger) || 'copy';\n var text = actions_default({\n action: action,\n container: this.container,\n target: this.target(trigger),\n text: this.text(trigger)\n }); // Fires an event based on the copy operation result.\n\n this.emit(text ? 
'success' : 'error', {\n action: action,\n text: text,\n trigger: trigger,\n clearSelection: function clearSelection() {\n if (trigger) {\n trigger.focus();\n }\n\n window.getSelection().removeAllRanges();\n }\n });\n }\n /**\n * Default `action` lookup function.\n * @param {Element} trigger\n */\n\n }, {\n key: \"defaultAction\",\n value: function defaultAction(trigger) {\n return getAttributeValue('action', trigger);\n }\n /**\n * Default `target` lookup function.\n * @param {Element} trigger\n */\n\n }, {\n key: \"defaultTarget\",\n value: function defaultTarget(trigger) {\n var selector = getAttributeValue('target', trigger);\n\n if (selector) {\n return document.querySelector(selector);\n }\n }\n /**\n * Allow fire programmatically a copy action\n * @param {String|HTMLElement} target\n * @param {Object} options\n * @returns Text copied.\n */\n\n }, {\n key: \"defaultText\",\n\n /**\n * Default `text` lookup function.\n * @param {Element} trigger\n */\n value: function defaultText(trigger) {\n return getAttributeValue('text', trigger);\n }\n /**\n * Destroy lifecycle.\n */\n\n }, {\n key: \"destroy\",\n value: function destroy() {\n this.listener.destroy();\n }\n }], [{\n key: \"copy\",\n value: function copy(target) {\n var options = arguments.length > 1 && arguments[1] !== undefined ? arguments[1] : {\n container: document.body\n };\n return actions_copy(target, options);\n }\n /**\n * Allow fire programmatically a cut action\n * @param {String|HTMLElement} target\n * @returns Text cutted.\n */\n\n }, {\n key: \"cut\",\n value: function cut(target) {\n return actions_cut(target);\n }\n /**\n * Returns the support of the given action, or all actions if no action is\n * given.\n * @param {String} [action]\n */\n\n }, {\n key: \"isSupported\",\n value: function isSupported() {\n var action = arguments.length > 0 && arguments[0] !== undefined ? arguments[0] : ['copy', 'cut'];\n var actions = typeof action === 'string' ? 
[action] : action;\n var support = !!document.queryCommandSupported;\n actions.forEach(function (action) {\n support = support && !!document.queryCommandSupported(action);\n });\n return support;\n }\n }]);\n\n return Clipboard;\n}((tiny_emitter_default()));\n\n/* harmony default export */ var clipboard = (Clipboard);\n\n/***/ }),\n\n/***/ 828:\n/***/ (function(module) {\n\nvar DOCUMENT_NODE_TYPE = 9;\n\n/**\n * A polyfill for Element.matches()\n */\nif (typeof Element !== 'undefined' && !Element.prototype.matches) {\n var proto = Element.prototype;\n\n proto.matches = proto.matchesSelector ||\n proto.mozMatchesSelector ||\n proto.msMatchesSelector ||\n proto.oMatchesSelector ||\n proto.webkitMatchesSelector;\n}\n\n/**\n * Finds the closest parent that matches a selector.\n *\n * @param {Element} element\n * @param {String} selector\n * @return {Function}\n */\nfunction closest (element, selector) {\n while (element && element.nodeType !== DOCUMENT_NODE_TYPE) {\n if (typeof element.matches === 'function' &&\n element.matches(selector)) {\n return element;\n }\n element = element.parentNode;\n }\n}\n\nmodule.exports = closest;\n\n\n/***/ }),\n\n/***/ 438:\n/***/ (function(module, __unused_webpack_exports, __webpack_require__) {\n\nvar closest = __webpack_require__(828);\n\n/**\n * Delegates event to a selector.\n *\n * @param {Element} element\n * @param {String} selector\n * @param {String} type\n * @param {Function} callback\n * @param {Boolean} useCapture\n * @return {Object}\n */\nfunction _delegate(element, selector, type, callback, useCapture) {\n var listenerFn = listener.apply(this, arguments);\n\n element.addEventListener(type, listenerFn, useCapture);\n\n return {\n destroy: function() {\n element.removeEventListener(type, listenerFn, useCapture);\n }\n }\n}\n\n/**\n * Delegates event to a selector.\n *\n * @param {Element|String|Array} [elements]\n * @param {String} selector\n * @param {String} type\n * @param {Function} callback\n * @param {Boolean} 
useCapture\n * @return {Object}\n */\nfunction delegate(elements, selector, type, callback, useCapture) {\n // Handle the regular Element usage\n if (typeof elements.addEventListener === 'function') {\n return _delegate.apply(null, arguments);\n }\n\n // Handle Element-less usage, it defaults to global delegation\n if (typeof type === 'function') {\n // Use `document` as the first parameter, then apply arguments\n // This is a short way to .unshift `arguments` without running into deoptimizations\n return _delegate.bind(null, document).apply(null, arguments);\n }\n\n // Handle Selector-based usage\n if (typeof elements === 'string') {\n elements = document.querySelectorAll(elements);\n }\n\n // Handle Array-like based usage\n return Array.prototype.map.call(elements, function (element) {\n return _delegate(element, selector, type, callback, useCapture);\n });\n}\n\n/**\n * Finds closest match and invokes callback.\n *\n * @param {Element} element\n * @param {String} selector\n * @param {String} type\n * @param {Function} callback\n * @return {Function}\n */\nfunction listener(element, selector, type, callback) {\n return function(e) {\n e.delegateTarget = closest(e.target, selector);\n\n if (e.delegateTarget) {\n callback.call(element, e);\n }\n }\n}\n\nmodule.exports = delegate;\n\n\n/***/ }),\n\n/***/ 879:\n/***/ (function(__unused_webpack_module, exports) {\n\n/**\n * Check if argument is a HTML element.\n *\n * @param {Object} value\n * @return {Boolean}\n */\nexports.node = function(value) {\n return value !== undefined\n && value instanceof HTMLElement\n && value.nodeType === 1;\n};\n\n/**\n * Check if argument is a list of HTML elements.\n *\n * @param {Object} value\n * @return {Boolean}\n */\nexports.nodeList = function(value) {\n var type = Object.prototype.toString.call(value);\n\n return value !== undefined\n && (type === '[object NodeList]' || type === '[object HTMLCollection]')\n && ('length' in value)\n && (value.length === 0 || 
exports.node(value[0]));\n};\n\n/**\n * Check if argument is a string.\n *\n * @param {Object} value\n * @return {Boolean}\n */\nexports.string = function(value) {\n return typeof value === 'string'\n || value instanceof String;\n};\n\n/**\n * Check if argument is a function.\n *\n * @param {Object} value\n * @return {Boolean}\n */\nexports.fn = function(value) {\n var type = Object.prototype.toString.call(value);\n\n return type === '[object Function]';\n};\n\n\n/***/ }),\n\n/***/ 370:\n/***/ (function(module, __unused_webpack_exports, __webpack_require__) {\n\nvar is = __webpack_require__(879);\nvar delegate = __webpack_require__(438);\n\n/**\n * Validates all params and calls the right\n * listener function based on its target type.\n *\n * @param {String|HTMLElement|HTMLCollection|NodeList} target\n * @param {String} type\n * @param {Function} callback\n * @return {Object}\n */\nfunction listen(target, type, callback) {\n if (!target && !type && !callback) {\n throw new Error('Missing required arguments');\n }\n\n if (!is.string(type)) {\n throw new TypeError('Second argument must be a String');\n }\n\n if (!is.fn(callback)) {\n throw new TypeError('Third argument must be a Function');\n }\n\n if (is.node(target)) {\n return listenNode(target, type, callback);\n }\n else if (is.nodeList(target)) {\n return listenNodeList(target, type, callback);\n }\n else if (is.string(target)) {\n return listenSelector(target, type, callback);\n }\n else {\n throw new TypeError('First argument must be a String, HTMLElement, HTMLCollection, or NodeList');\n }\n}\n\n/**\n * Adds an event listener to a HTML element\n * and returns a remove listener function.\n *\n * @param {HTMLElement} node\n * @param {String} type\n * @param {Function} callback\n * @return {Object}\n */\nfunction listenNode(node, type, callback) {\n node.addEventListener(type, callback);\n\n return {\n destroy: function() {\n node.removeEventListener(type, callback);\n }\n }\n}\n\n/**\n * Add an event listener 
to a list of HTML elements\n * and returns a remove listener function.\n *\n * @param {NodeList|HTMLCollection} nodeList\n * @param {String} type\n * @param {Function} callback\n * @return {Object}\n */\nfunction listenNodeList(nodeList, type, callback) {\n Array.prototype.forEach.call(nodeList, function(node) {\n node.addEventListener(type, callback);\n });\n\n return {\n destroy: function() {\n Array.prototype.forEach.call(nodeList, function(node) {\n node.removeEventListener(type, callback);\n });\n }\n }\n}\n\n/**\n * Add an event listener to a selector\n * and returns a remove listener function.\n *\n * @param {String} selector\n * @param {String} type\n * @param {Function} callback\n * @return {Object}\n */\nfunction listenSelector(selector, type, callback) {\n return delegate(document.body, selector, type, callback);\n}\n\nmodule.exports = listen;\n\n\n/***/ }),\n\n/***/ 817:\n/***/ (function(module) {\n\nfunction select(element) {\n var selectedText;\n\n if (element.nodeName === 'SELECT') {\n element.focus();\n\n selectedText = element.value;\n }\n else if (element.nodeName === 'INPUT' || element.nodeName === 'TEXTAREA') {\n var isReadOnly = element.hasAttribute('readonly');\n\n if (!isReadOnly) {\n element.setAttribute('readonly', '');\n }\n\n element.select();\n element.setSelectionRange(0, element.value.length);\n\n if (!isReadOnly) {\n element.removeAttribute('readonly');\n }\n\n selectedText = element.value;\n }\n else {\n if (element.hasAttribute('contenteditable')) {\n element.focus();\n }\n\n var selection = window.getSelection();\n var range = document.createRange();\n\n range.selectNodeContents(element);\n selection.removeAllRanges();\n selection.addRange(range);\n\n selectedText = selection.toString();\n }\n\n return selectedText;\n}\n\nmodule.exports = select;\n\n\n/***/ }),\n\n/***/ 279:\n/***/ (function(module) {\n\nfunction E () {\n // Keep this empty so it's easier to inherit from\n // (via https://github.com/lipsmack from 
https://github.com/scottcorgan/tiny-emitter/issues/3)\n}\n\nE.prototype = {\n on: function (name, callback, ctx) {\n var e = this.e || (this.e = {});\n\n (e[name] || (e[name] = [])).push({\n fn: callback,\n ctx: ctx\n });\n\n return this;\n },\n\n once: function (name, callback, ctx) {\n var self = this;\n function listener () {\n self.off(name, listener);\n callback.apply(ctx, arguments);\n };\n\n listener._ = callback\n return this.on(name, listener, ctx);\n },\n\n emit: function (name) {\n var data = [].slice.call(arguments, 1);\n var evtArr = ((this.e || (this.e = {}))[name] || []).slice();\n var i = 0;\n var len = evtArr.length;\n\n for (i; i < len; i++) {\n evtArr[i].fn.apply(evtArr[i].ctx, data);\n }\n\n return this;\n },\n\n off: function (name, callback) {\n var e = this.e || (this.e = {});\n var evts = e[name];\n var liveEvents = [];\n\n if (evts && callback) {\n for (var i = 0, len = evts.length; i < len; i++) {\n if (evts[i].fn !== callback && evts[i].fn._ !== callback)\n liveEvents.push(evts[i]);\n }\n }\n\n // Remove event from queue to prevent memory leak\n // Suggested by https://github.com/lazd\n // Ref: https://github.com/scottcorgan/tiny-emitter/commit/c6ebfaa9bc973b33d110a84a307742b7cf94c953#commitcomment-5024910\n\n (liveEvents.length)\n ? 
e[name] = liveEvents\n : delete e[name];\n\n return this;\n }\n};\n\nmodule.exports = E;\nmodule.exports.TinyEmitter = E;\n\n\n/***/ })\n\n/******/ \t});\n/************************************************************************/\n/******/ \t// The module cache\n/******/ \tvar __webpack_module_cache__ = {};\n/******/ \t\n/******/ \t// The require function\n/******/ \tfunction __webpack_require__(moduleId) {\n/******/ \t\t// Check if module is in cache\n/******/ \t\tif(__webpack_module_cache__[moduleId]) {\n/******/ \t\t\treturn __webpack_module_cache__[moduleId].exports;\n/******/ \t\t}\n/******/ \t\t// Create a new module (and put it into the cache)\n/******/ \t\tvar module = __webpack_module_cache__[moduleId] = {\n/******/ \t\t\t// no module.id needed\n/******/ \t\t\t// no module.loaded needed\n/******/ \t\t\texports: {}\n/******/ \t\t};\n/******/ \t\n/******/ \t\t// Execute the module function\n/******/ \t\t__webpack_modules__[moduleId](module, module.exports, __webpack_require__);\n/******/ \t\n/******/ \t\t// Return the exports of the module\n/******/ \t\treturn module.exports;\n/******/ \t}\n/******/ \t\n/************************************************************************/\n/******/ \t/* webpack/runtime/compat get default export */\n/******/ \t!function() {\n/******/ \t\t// getDefaultExport function for compatibility with non-harmony modules\n/******/ \t\t__webpack_require__.n = function(module) {\n/******/ \t\t\tvar getter = module && module.__esModule ?\n/******/ \t\t\t\tfunction() { return module['default']; } :\n/******/ \t\t\t\tfunction() { return module; };\n/******/ \t\t\t__webpack_require__.d(getter, { a: getter });\n/******/ \t\t\treturn getter;\n/******/ \t\t};\n/******/ \t}();\n/******/ \t\n/******/ \t/* webpack/runtime/define property getters */\n/******/ \t!function() {\n/******/ \t\t// define getter functions for harmony exports\n/******/ \t\t__webpack_require__.d = function(exports, definition) {\n/******/ \t\t\tfor(var key in definition) 
{\n/******/ \t\t\t\tif(__webpack_require__.o(definition, key) && !__webpack_require__.o(exports, key)) {\n/******/ \t\t\t\t\tObject.defineProperty(exports, key, { enumerable: true, get: definition[key] });\n/******/ \t\t\t\t}\n/******/ \t\t\t}\n/******/ \t\t};\n/******/ \t}();\n/******/ \t\n/******/ \t/* webpack/runtime/hasOwnProperty shorthand */\n/******/ \t!function() {\n/******/ \t\t__webpack_require__.o = function(obj, prop) { return Object.prototype.hasOwnProperty.call(obj, prop); }\n/******/ \t}();\n/******/ \t\n/************************************************************************/\n/******/ \t// module exports must be returned from runtime so entry inlining is disabled\n/******/ \t// startup\n/******/ \t// Load entry module and return exports\n/******/ \treturn __webpack_require__(686);\n/******/ })()\n.default;\n});", "/*!\n * escape-html\n * Copyright(c) 2012-2013 TJ Holowaychuk\n * Copyright(c) 2015 Andreas Lubbe\n * Copyright(c) 2015 Tiancheng \"Timothy\" Gu\n * MIT Licensed\n */\n\n'use strict';\n\n/**\n * Module variables.\n * @private\n */\n\nvar matchHtmlRegExp = /[\"'&<>]/;\n\n/**\n * Module exports.\n * @public\n */\n\nmodule.exports = escapeHtml;\n\n/**\n * Escape special characters in the given string of html.\n *\n * @param {string} string The string to escape for inserting into HTML\n * @return {string}\n * @public\n */\n\nfunction escapeHtml(string) {\n var str = '' + string;\n var match = matchHtmlRegExp.exec(str);\n\n if (!match) {\n return str;\n }\n\n var escape;\n var html = '';\n var index = 0;\n var lastIndex = 0;\n\n for (index = match.index; index < str.length; index++) {\n switch (str.charCodeAt(index)) {\n case 34: // \"\n escape = '"';\n break;\n case 38: // &\n escape = '&';\n break;\n case 39: // '\n escape = ''';\n break;\n case 60: // <\n escape = '<';\n break;\n case 62: // >\n escape = '>';\n break;\n default:\n continue;\n }\n\n if (lastIndex !== index) {\n html += str.substring(lastIndex, index);\n }\n\n lastIndex = 
index + 1;\n html += escape;\n }\n\n return lastIndex !== index\n ? html + str.substring(lastIndex, index)\n : html;\n}\n", "Array.prototype.flat||Object.defineProperty(Array.prototype,\"flat\",{configurable:!0,value:function r(){var t=isNaN(arguments[0])?1:Number(arguments[0]);return t?Array.prototype.reduce.call(this,function(a,e){return Array.isArray(e)?a.push.apply(a,r.call(e,t-1)):a.push(e),a},[]):Array.prototype.slice.call(this)},writable:!0}),Array.prototype.flatMap||Object.defineProperty(Array.prototype,\"flatMap\",{configurable:!0,value:function(r){return Array.prototype.map.apply(this,arguments).flat()},writable:!0})\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport \"array-flat-polyfill\"\nimport \"focus-visible\"\nimport \"unfetch/polyfill\"\nimport \"url-polyfill\"\n\nimport {\n EMPTY,\n NEVER,\n Subject,\n defer,\n delay,\n filter,\n map,\n merge,\n mergeWith,\n shareReplay,\n switchMap\n} from \"rxjs\"\n\nimport { configuration, feature } from \"./_\"\nimport {\n at,\n getOptionalElement,\n requestJSON,\n setToggle,\n watchDocument,\n watchKeyboard,\n watchLocation,\n watchLocationTarget,\n watchMedia,\n watchPrint,\n watchViewport\n} from \"./browser\"\nimport {\n getComponentElement,\n getComponentElements,\n mountAnnounce,\n mountBackToTop,\n mountConsent,\n mountContent,\n mountDialog,\n mountHeader,\n mountHeaderTitle,\n mountPalette,\n mountSearch,\n mountSearchHiglight,\n mountSidebar,\n mountSource,\n mountTableOfContents,\n mountTabs,\n watchHeader,\n watchMain\n} from \"./components\"\nimport {\n SearchIndex,\n setupClipboardJS,\n setupInstantLoading,\n setupVersionSelector\n} from \"./integrations\"\nimport {\n patchIndeterminate,\n patchScrollfix,\n patchScrolllock\n} from \"./patches\"\nimport \"./polyfills\"\n\n/* ----------------------------------------------------------------------------\n * Application\n * ------------------------------------------------------------------------- */\n\n/* Yay, JavaScript is available */\ndocument.documentElement.classList.remove(\"no-js\")\ndocument.documentElement.classList.add(\"js\")\n\n/* Set up navigation observables and subjects */\nconst document$ = watchDocument()\nconst location$ = watchLocation()\nconst target$ = watchLocationTarget()\nconst keyboard$ = watchKeyboard()\n\n/* Set up media observables */\nconst viewport$ = watchViewport()\nconst tablet$ = watchMedia(\"(min-width: 
960px)\")\nconst screen$ = watchMedia(\"(min-width: 1220px)\")\nconst print$ = watchPrint()\n\n/* Retrieve search index, if search is enabled */\nconst config = configuration()\nconst index$ = document.forms.namedItem(\"search\")\n ? __search?.index || requestJSON(\n new URL(\"search/search_index.json\", config.base)\n )\n : NEVER\n\n/* Set up Clipboard.js integration */\nconst alert$ = new Subject()\nsetupClipboardJS({ alert$ })\n\n/* Set up instant loading, if enabled */\nif (feature(\"navigation.instant\"))\n setupInstantLoading({ document$, location$, viewport$ })\n\n/* Set up version selector */\nif (config.version?.provider === \"mike\")\n setupVersionSelector({ document$ })\n\n/* Always close drawer and search on navigation */\nmerge(location$, target$)\n .pipe(\n delay(125)\n )\n .subscribe(() => {\n setToggle(\"drawer\", false)\n setToggle(\"search\", false)\n })\n\n/* Set up global keyboard handlers */\nkeyboard$\n .pipe(\n filter(({ mode }) => mode === \"global\")\n )\n .subscribe(key => {\n switch (key.type) {\n\n /* Go to previous page */\n case \"p\":\n case \",\":\n const prev = getOptionalElement(\"[href][rel=prev]\")\n if (typeof prev !== \"undefined\")\n prev.click()\n break\n\n /* Go to next page */\n case \"n\":\n case \".\":\n const next = getOptionalElement(\"[href][rel=next]\")\n if (typeof next !== \"undefined\")\n next.click()\n break\n }\n })\n\n/* Set up patches */\npatchIndeterminate({ document$, tablet$ })\npatchScrollfix({ document$ })\npatchScrolllock({ viewport$, tablet$ })\n\n/* Set up header and main area observable */\nconst header$ = watchHeader(getComponentElement(\"header\"), { viewport$ })\nconst main$ = document$\n .pipe(\n map(() => getComponentElement(\"main\")),\n switchMap(el => watchMain(el, { viewport$, header$ })),\n shareReplay(1)\n )\n\n/* Set up control component observables */\nconst control$ = merge(\n\n /* Consent */\n ...getComponentElements(\"consent\")\n .map(el => mountConsent(el, { target$ })),\n\n /* Dialog 
*/\n ...getComponentElements(\"dialog\")\n .map(el => mountDialog(el, { alert$ })),\n\n /* Header */\n ...getComponentElements(\"header\")\n .map(el => mountHeader(el, { viewport$, header$, main$ })),\n\n /* Color palette */\n ...getComponentElements(\"palette\")\n .map(el => mountPalette(el)),\n\n /* Search */\n ...getComponentElements(\"search\")\n .map(el => mountSearch(el, { index$, keyboard$ })),\n\n /* Repository information */\n ...getComponentElements(\"source\")\n .map(el => mountSource(el))\n)\n\n/* Set up content component observables */\nconst content$ = defer(() => merge(\n\n /* Announcement bar */\n ...getComponentElements(\"announce\")\n .map(el => mountAnnounce(el)),\n\n /* Content */\n ...getComponentElements(\"content\")\n .map(el => mountContent(el, { viewport$, target$, print$ })),\n\n /* Search highlighting */\n ...getComponentElements(\"content\")\n .map(el => feature(\"search.highlight\")\n ? mountSearchHiglight(el, { index$, location$ })\n : EMPTY\n ),\n\n /* Header title */\n ...getComponentElements(\"header-title\")\n .map(el => mountHeaderTitle(el, { viewport$, header$ })),\n\n /* Sidebar */\n ...getComponentElements(\"sidebar\")\n .map(el => el.getAttribute(\"data-md-type\") === \"navigation\"\n ? 
at(screen$, () => mountSidebar(el, { viewport$, header$, main$ }))\n : at(tablet$, () => mountSidebar(el, { viewport$, header$, main$ }))\n ),\n\n /* Navigation tabs */\n ...getComponentElements(\"tabs\")\n .map(el => mountTabs(el, { viewport$, header$ })),\n\n /* Table of contents */\n ...getComponentElements(\"toc\")\n .map(el => mountTableOfContents(el, { viewport$, header$, target$ })),\n\n /* Back-to-top button */\n ...getComponentElements(\"top\")\n .map(el => mountBackToTop(el, { viewport$, header$, main$, target$ }))\n))\n\n/* Set up component observables */\nconst component$ = document$\n .pipe(\n switchMap(() => content$),\n mergeWith(control$),\n shareReplay(1)\n )\n\n/* Subscribe to all components */\ncomponent$.subscribe()\n\n/* ----------------------------------------------------------------------------\n * Exports\n * ------------------------------------------------------------------------- */\n\nwindow.document$ = document$ /* Document observable */\nwindow.location$ = location$ /* Location subject */\nwindow.target$ = target$ /* Location target observable */\nwindow.keyboard$ = keyboard$ /* Keyboard observable */\nwindow.viewport$ = viewport$ /* Viewport observable */\nwindow.tablet$ = tablet$ /* Media tablet observable */\nwindow.screen$ = screen$ /* Media screen observable */\nwindow.print$ = print$ /* Media print observable */\nwindow.alert$ = alert$ /* Alert subject */\nwindow.component$ = component$ /* Component observable */\n", "self.fetch||(self.fetch=function(e,n){return n=n||{},new Promise(function(t,s){var r=new XMLHttpRequest,o=[],u=[],i={},a=function(){return{ok:2==(r.status/100|0),statusText:r.statusText,status:r.status,url:r.responseURL,text:function(){return Promise.resolve(r.responseText)},json:function(){return Promise.resolve(r.responseText).then(JSON.parse)},blob:function(){return Promise.resolve(new Blob([r.response]))},clone:a,headers:{keys:function(){return o},entries:function(){return u},get:function(e){return 
i[e.toLowerCase()]},has:function(e){return e.toLowerCase()in i}}}};for(var c in r.open(n.method||\"get\",e,!0),r.onload=function(){r.getAllResponseHeaders().replace(/^(.*?):[^\\S\\n]*([\\s\\S]*?)$/gm,function(e,n,t){o.push(n=n.toLowerCase()),u.push([n,t]),i[n]=i[n]?i[n]+\",\"+t:t}),t(a())},r.onerror=s,r.withCredentials=\"include\"==n.credentials,n.headers)r.setRequestHeader(c,n.headers[c]);r.send(n.body||null)})});\n", "import tslib from '../tslib.js';\r\nconst {\r\n __extends,\r\n __assign,\r\n __rest,\r\n __decorate,\r\n __param,\r\n __metadata,\r\n __awaiter,\r\n __generator,\r\n __exportStar,\r\n __createBinding,\r\n __values,\r\n __read,\r\n __spread,\r\n __spreadArrays,\r\n __spreadArray,\r\n __await,\r\n __asyncGenerator,\r\n __asyncDelegator,\r\n __asyncValues,\r\n __makeTemplateObject,\r\n __importStar,\r\n __importDefault,\r\n __classPrivateFieldGet,\r\n __classPrivateFieldSet,\r\n} = tslib;\r\nexport {\r\n __extends,\r\n __assign,\r\n __rest,\r\n __decorate,\r\n __param,\r\n __metadata,\r\n __awaiter,\r\n __generator,\r\n __exportStar,\r\n __createBinding,\r\n __values,\r\n __read,\r\n __spread,\r\n __spreadArrays,\r\n __spreadArray,\r\n __await,\r\n __asyncGenerator,\r\n __asyncDelegator,\r\n __asyncValues,\r\n __makeTemplateObject,\r\n __importStar,\r\n __importDefault,\r\n __classPrivateFieldGet,\r\n __classPrivateFieldSet,\r\n};\r\n", null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, 
null, null, null, null, null, null, null, null, null, null, null, null, "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n ReplaySubject,\n Subject,\n fromEvent\n} from \"rxjs\"\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch document\n *\n * Documents are implemented as subjects, so all downstream observables are\n * automatically updated when a new document is emitted.\n *\n * @returns Document subject\n */\nexport function watchDocument(): Subject {\n const document$ = new ReplaySubject(1)\n fromEvent(document, \"DOMContentLoaded\", { once: true })\n .subscribe(() => document$.next(document))\n\n /* Return document */\n return document$\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, 
free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Retrieve all elements matching the query selector\n *\n * @template T - Element type\n *\n * @param selector - Query selector\n * @param node - Node of reference\n *\n * @returns Elements\n */\nexport function getElements(\n selector: T, node?: ParentNode\n): HTMLElementTagNameMap[T][]\n\nexport function getElements(\n selector: string, node?: ParentNode\n): T[]\n\nexport function getElements(\n selector: string, node: ParentNode = document\n): T[] {\n return Array.from(node.querySelectorAll(selector))\n}\n\n/**\n * Retrieve an element matching a query selector or throw a reference error\n *\n * Note that this function assumes that the element is present. 
If unsure if an\n * element is existent, use the `getOptionalElement` function instead.\n *\n * @template T - Element type\n *\n * @param selector - Query selector\n * @param node - Node of reference\n *\n * @returns Element\n */\nexport function getElement(\n selector: T, node?: ParentNode\n): HTMLElementTagNameMap[T]\n\nexport function getElement(\n selector: string, node?: ParentNode\n): T\n\nexport function getElement(\n selector: string, node: ParentNode = document\n): T {\n const el = getOptionalElement(selector, node)\n if (typeof el === \"undefined\")\n throw new ReferenceError(\n `Missing element: expected \"${selector}\" to be present`\n )\n\n /* Return element */\n return el\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Retrieve an optional element matching the query selector\n *\n * @template T - Element type\n *\n * @param selector - Query selector\n * @param node - Node of reference\n *\n * @returns Element or nothing\n */\nexport function getOptionalElement(\n selector: T, node?: ParentNode\n): HTMLElementTagNameMap[T] | undefined\n\nexport function getOptionalElement(\n selector: string, node?: ParentNode\n): T | undefined\n\nexport function getOptionalElement(\n selector: string, node: ParentNode = document\n): T | undefined {\n return node.querySelector(selector) || undefined\n}\n\n/**\n * Retrieve the currently active element\n *\n * @returns Element or nothing\n */\nexport function getActiveElement(): HTMLElement | undefined {\n return document.activeElement instanceof HTMLElement\n ? 
document.activeElement || undefined\n : undefined\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n Observable,\n debounceTime,\n distinctUntilChanged,\n fromEvent,\n map,\n merge,\n startWith\n} from \"rxjs\"\n\nimport { getActiveElement } from \"../_\"\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch element focus\n *\n * Previously, this function used `focus` and `blur` events to determine whether\n * an element is focused, but this doesn't work if there are focusable elements\n * within the elements itself. 
A better solutions are `focusin` and `focusout`\n * events, which bubble up the tree and allow for more fine-grained control.\n *\n * `debounceTime` is necessary, because when a focus change happens inside an\n * element, the observable would first emit `false` and then `true` again.\n *\n * @param el - Element\n *\n * @returns Element focus observable\n */\nexport function watchElementFocus(\n el: HTMLElement\n): Observable {\n return merge(\n fromEvent(document.body, \"focusin\"),\n fromEvent(document.body, \"focusout\")\n )\n .pipe(\n debounceTime(1),\n map(() => {\n const active = getActiveElement()\n return typeof active !== \"undefined\"\n ? el.contains(active)\n : false\n }),\n startWith(el === getActiveElement()),\n distinctUntilChanged()\n )\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n Observable,\n animationFrameScheduler,\n auditTime,\n fromEvent,\n map,\n merge,\n startWith\n} from \"rxjs\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Element offset\n */\nexport interface ElementOffset {\n x: number /* Horizontal offset */\n y: number /* Vertical offset */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Retrieve element offset\n *\n * @param el - Element\n *\n * @returns Element offset\n */\nexport function getElementOffset(\n el: HTMLElement\n): ElementOffset {\n return {\n x: el.offsetLeft,\n y: el.offsetTop\n }\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Watch element offset\n *\n * @param el - Element\n *\n * @returns Element offset observable\n */\nexport function watchElementOffset(\n el: HTMLElement\n): Observable {\n return merge(\n fromEvent(window, \"load\"),\n fromEvent(window, \"resize\")\n )\n .pipe(\n auditTime(0, animationFrameScheduler),\n map(() => getElementOffset(el)),\n startWith(getElementOffset(el))\n )\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the 
Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n Observable,\n animationFrameScheduler,\n auditTime,\n fromEvent,\n map,\n merge,\n startWith\n} from \"rxjs\"\n\nimport { ElementOffset } from \"../_\"\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Retrieve element content offset (= scroll offset)\n *\n * @param el - Element\n *\n * @returns Element content offset\n */\nexport function getElementContentOffset(\n el: HTMLElement\n): ElementOffset {\n return {\n x: el.scrollLeft,\n y: el.scrollTop\n }\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Watch element content offset\n *\n * @param el - Element\n *\n * @returns Element content offset observable\n */\nexport function watchElementContentOffset(\n el: HTMLElement\n): Observable {\n return merge(\n fromEvent(el, \"scroll\"),\n fromEvent(window, \"resize\")\n )\n .pipe(\n auditTime(0, animationFrameScheduler),\n map(() => getElementContentOffset(el)),\n startWith(getElementContentOffset(el))\n )\n}\n", "/**\r\n * A collection of shims that provide minimal functionality of the ES6 collections.\r\n *\r\n * These 
implementations are not meant to be used outside of the ResizeObserver\r\n * modules as they cover only a limited range of use cases.\r\n */\r\n/* eslint-disable require-jsdoc, valid-jsdoc */\r\nvar MapShim = (function () {\r\n if (typeof Map !== 'undefined') {\r\n return Map;\r\n }\r\n /**\r\n * Returns index in provided array that matches the specified key.\r\n *\r\n * @param {Array} arr\r\n * @param {*} key\r\n * @returns {number}\r\n */\r\n function getIndex(arr, key) {\r\n var result = -1;\r\n arr.some(function (entry, index) {\r\n if (entry[0] === key) {\r\n result = index;\r\n return true;\r\n }\r\n return false;\r\n });\r\n return result;\r\n }\r\n return /** @class */ (function () {\r\n function class_1() {\r\n this.__entries__ = [];\r\n }\r\n Object.defineProperty(class_1.prototype, \"size\", {\r\n /**\r\n * @returns {boolean}\r\n */\r\n get: function () {\r\n return this.__entries__.length;\r\n },\r\n enumerable: true,\r\n configurable: true\r\n });\r\n /**\r\n * @param {*} key\r\n * @returns {*}\r\n */\r\n class_1.prototype.get = function (key) {\r\n var index = getIndex(this.__entries__, key);\r\n var entry = this.__entries__[index];\r\n return entry && entry[1];\r\n };\r\n /**\r\n * @param {*} key\r\n * @param {*} value\r\n * @returns {void}\r\n */\r\n class_1.prototype.set = function (key, value) {\r\n var index = getIndex(this.__entries__, key);\r\n if (~index) {\r\n this.__entries__[index][1] = value;\r\n }\r\n else {\r\n this.__entries__.push([key, value]);\r\n }\r\n };\r\n /**\r\n * @param {*} key\r\n * @returns {void}\r\n */\r\n class_1.prototype.delete = function (key) {\r\n var entries = this.__entries__;\r\n var index = getIndex(entries, key);\r\n if (~index) {\r\n entries.splice(index, 1);\r\n }\r\n };\r\n /**\r\n * @param {*} key\r\n * @returns {void}\r\n */\r\n class_1.prototype.has = function (key) {\r\n return !!~getIndex(this.__entries__, key);\r\n };\r\n /**\r\n * @returns {void}\r\n */\r\n class_1.prototype.clear = function () {\r\n 
this.__entries__.splice(0);\r\n };\r\n /**\r\n * @param {Function} callback\r\n * @param {*} [ctx=null]\r\n * @returns {void}\r\n */\r\n class_1.prototype.forEach = function (callback, ctx) {\r\n if (ctx === void 0) { ctx = null; }\r\n for (var _i = 0, _a = this.__entries__; _i < _a.length; _i++) {\r\n var entry = _a[_i];\r\n callback.call(ctx, entry[1], entry[0]);\r\n }\r\n };\r\n return class_1;\r\n }());\r\n})();\n\n/**\r\n * Detects whether window and document objects are available in current environment.\r\n */\r\nvar isBrowser = typeof window !== 'undefined' && typeof document !== 'undefined' && window.document === document;\n\n// Returns global object of a current environment.\r\nvar global$1 = (function () {\r\n if (typeof global !== 'undefined' && global.Math === Math) {\r\n return global;\r\n }\r\n if (typeof self !== 'undefined' && self.Math === Math) {\r\n return self;\r\n }\r\n if (typeof window !== 'undefined' && window.Math === Math) {\r\n return window;\r\n }\r\n // eslint-disable-next-line no-new-func\r\n return Function('return this')();\r\n})();\n\n/**\r\n * A shim for the requestAnimationFrame which falls back to the setTimeout if\r\n * first one is not supported.\r\n *\r\n * @returns {number} Requests' identifier.\r\n */\r\nvar requestAnimationFrame$1 = (function () {\r\n if (typeof requestAnimationFrame === 'function') {\r\n // It's required to use a bounded function because IE sometimes throws\r\n // an \"Invalid calling object\" error if rAF is invoked without the global\r\n // object on the left hand side.\r\n return requestAnimationFrame.bind(global$1);\r\n }\r\n return function (callback) { return setTimeout(function () { return callback(Date.now()); }, 1000 / 60); };\r\n})();\n\n// Defines minimum timeout before adding a trailing call.\r\nvar trailingTimeout = 2;\r\n/**\r\n * Creates a wrapper function which ensures that provided callback will be\r\n * invoked only once during the specified delay period.\r\n *\r\n * @param {Function} 
callback - Function to be invoked after the delay period.\r\n * @param {number} delay - Delay after which to invoke callback.\r\n * @returns {Function}\r\n */\r\nfunction throttle (callback, delay) {\r\n var leadingCall = false, trailingCall = false, lastCallTime = 0;\r\n /**\r\n * Invokes the original callback function and schedules new invocation if\r\n * the \"proxy\" was called during current request.\r\n *\r\n * @returns {void}\r\n */\r\n function resolvePending() {\r\n if (leadingCall) {\r\n leadingCall = false;\r\n callback();\r\n }\r\n if (trailingCall) {\r\n proxy();\r\n }\r\n }\r\n /**\r\n * Callback invoked after the specified delay. It will further postpone\r\n * invocation of the original function delegating it to the\r\n * requestAnimationFrame.\r\n *\r\n * @returns {void}\r\n */\r\n function timeoutCallback() {\r\n requestAnimationFrame$1(resolvePending);\r\n }\r\n /**\r\n * Schedules invocation of the original function.\r\n *\r\n * @returns {void}\r\n */\r\n function proxy() {\r\n var timeStamp = Date.now();\r\n if (leadingCall) {\r\n // Reject immediately following calls.\r\n if (timeStamp - lastCallTime < trailingTimeout) {\r\n return;\r\n }\r\n // Schedule new call to be in invoked when the pending one is resolved.\r\n // This is important for \"transitions\" which never actually start\r\n // immediately so there is a chance that we might miss one if change\r\n // happens amids the pending invocation.\r\n trailingCall = true;\r\n }\r\n else {\r\n leadingCall = true;\r\n trailingCall = false;\r\n setTimeout(timeoutCallback, delay);\r\n }\r\n lastCallTime = timeStamp;\r\n }\r\n return proxy;\r\n}\n\n// Minimum delay before invoking the update of observers.\r\nvar REFRESH_DELAY = 20;\r\n// A list of substrings of CSS properties used to find transition events that\r\n// might affect dimensions of observed elements.\r\nvar transitionKeys = ['top', 'right', 'bottom', 'left', 'width', 'height', 'size', 'weight'];\r\n// Check if MutationObserver is 
available.\r\nvar mutationObserverSupported = typeof MutationObserver !== 'undefined';\r\n/**\r\n * Singleton controller class which handles updates of ResizeObserver instances.\r\n */\r\nvar ResizeObserverController = /** @class */ (function () {\r\n /**\r\n * Creates a new instance of ResizeObserverController.\r\n *\r\n * @private\r\n */\r\n function ResizeObserverController() {\r\n /**\r\n * Indicates whether DOM listeners have been added.\r\n *\r\n * @private {boolean}\r\n */\r\n this.connected_ = false;\r\n /**\r\n * Tells that controller has subscribed for Mutation Events.\r\n *\r\n * @private {boolean}\r\n */\r\n this.mutationEventsAdded_ = false;\r\n /**\r\n * Keeps reference to the instance of MutationObserver.\r\n *\r\n * @private {MutationObserver}\r\n */\r\n this.mutationsObserver_ = null;\r\n /**\r\n * A list of connected observers.\r\n *\r\n * @private {Array}\r\n */\r\n this.observers_ = [];\r\n this.onTransitionEnd_ = this.onTransitionEnd_.bind(this);\r\n this.refresh = throttle(this.refresh.bind(this), REFRESH_DELAY);\r\n }\r\n /**\r\n * Adds observer to observers list.\r\n *\r\n * @param {ResizeObserverSPI} observer - Observer to be added.\r\n * @returns {void}\r\n */\r\n ResizeObserverController.prototype.addObserver = function (observer) {\r\n if (!~this.observers_.indexOf(observer)) {\r\n this.observers_.push(observer);\r\n }\r\n // Add listeners if they haven't been added yet.\r\n if (!this.connected_) {\r\n this.connect_();\r\n }\r\n };\r\n /**\r\n * Removes observer from observers list.\r\n *\r\n * @param {ResizeObserverSPI} observer - Observer to be removed.\r\n * @returns {void}\r\n */\r\n ResizeObserverController.prototype.removeObserver = function (observer) {\r\n var observers = this.observers_;\r\n var index = observers.indexOf(observer);\r\n // Remove observer if it's present in registry.\r\n if (~index) {\r\n observers.splice(index, 1);\r\n }\r\n // Remove listeners if controller has no connected observers.\r\n if (!observers.length 
&& this.connected_) {\r\n this.disconnect_();\r\n }\r\n };\r\n /**\r\n * Invokes the update of observers. It will continue running updates insofar\r\n * it detects changes.\r\n *\r\n * @returns {void}\r\n */\r\n ResizeObserverController.prototype.refresh = function () {\r\n var changesDetected = this.updateObservers_();\r\n // Continue running updates if changes have been detected as there might\r\n // be future ones caused by CSS transitions.\r\n if (changesDetected) {\r\n this.refresh();\r\n }\r\n };\r\n /**\r\n * Updates every observer from observers list and notifies them of queued\r\n * entries.\r\n *\r\n * @private\r\n * @returns {boolean} Returns \"true\" if any observer has detected changes in\r\n * dimensions of it's elements.\r\n */\r\n ResizeObserverController.prototype.updateObservers_ = function () {\r\n // Collect observers that have active observations.\r\n var activeObservers = this.observers_.filter(function (observer) {\r\n return observer.gatherActive(), observer.hasActive();\r\n });\r\n // Deliver notifications in a separate cycle in order to avoid any\r\n // collisions between observers, e.g. when multiple instances of\r\n // ResizeObserver are tracking the same element and the callback of one\r\n // of them changes content dimensions of the observed target. Sometimes\r\n // this may result in notifications being blocked for the rest of observers.\r\n activeObservers.forEach(function (observer) { return observer.broadcastActive(); });\r\n return activeObservers.length > 0;\r\n };\r\n /**\r\n * Initializes DOM listeners.\r\n *\r\n * @private\r\n * @returns {void}\r\n */\r\n ResizeObserverController.prototype.connect_ = function () {\r\n // Do nothing if running in a non-browser environment or if listeners\r\n // have been already added.\r\n if (!isBrowser || this.connected_) {\r\n return;\r\n }\r\n // Subscription to the \"Transitionend\" event is used as a workaround for\r\n // delayed transitions. 
This way it's possible to capture at least the\r\n // final state of an element.\r\n document.addEventListener('transitionend', this.onTransitionEnd_);\r\n window.addEventListener('resize', this.refresh);\r\n if (mutationObserverSupported) {\r\n this.mutationsObserver_ = new MutationObserver(this.refresh);\r\n this.mutationsObserver_.observe(document, {\r\n attributes: true,\r\n childList: true,\r\n characterData: true,\r\n subtree: true\r\n });\r\n }\r\n else {\r\n document.addEventListener('DOMSubtreeModified', this.refresh);\r\n this.mutationEventsAdded_ = true;\r\n }\r\n this.connected_ = true;\r\n };\r\n /**\r\n * Removes DOM listeners.\r\n *\r\n * @private\r\n * @returns {void}\r\n */\r\n ResizeObserverController.prototype.disconnect_ = function () {\r\n // Do nothing if running in a non-browser environment or if listeners\r\n // have been already removed.\r\n if (!isBrowser || !this.connected_) {\r\n return;\r\n }\r\n document.removeEventListener('transitionend', this.onTransitionEnd_);\r\n window.removeEventListener('resize', this.refresh);\r\n if (this.mutationsObserver_) {\r\n this.mutationsObserver_.disconnect();\r\n }\r\n if (this.mutationEventsAdded_) {\r\n document.removeEventListener('DOMSubtreeModified', this.refresh);\r\n }\r\n this.mutationsObserver_ = null;\r\n this.mutationEventsAdded_ = false;\r\n this.connected_ = false;\r\n };\r\n /**\r\n * \"Transitionend\" event handler.\r\n *\r\n * @private\r\n * @param {TransitionEvent} event\r\n * @returns {void}\r\n */\r\n ResizeObserverController.prototype.onTransitionEnd_ = function (_a) {\r\n var _b = _a.propertyName, propertyName = _b === void 0 ? 
'' : _b;\r\n // Detect whether transition may affect dimensions of an element.\r\n var isReflowProperty = transitionKeys.some(function (key) {\r\n return !!~propertyName.indexOf(key);\r\n });\r\n if (isReflowProperty) {\r\n this.refresh();\r\n }\r\n };\r\n /**\r\n * Returns instance of the ResizeObserverController.\r\n *\r\n * @returns {ResizeObserverController}\r\n */\r\n ResizeObserverController.getInstance = function () {\r\n if (!this.instance_) {\r\n this.instance_ = new ResizeObserverController();\r\n }\r\n return this.instance_;\r\n };\r\n /**\r\n * Holds reference to the controller's instance.\r\n *\r\n * @private {ResizeObserverController}\r\n */\r\n ResizeObserverController.instance_ = null;\r\n return ResizeObserverController;\r\n}());\n\n/**\r\n * Defines non-writable/enumerable properties of the provided target object.\r\n *\r\n * @param {Object} target - Object for which to define properties.\r\n * @param {Object} props - Properties to be defined.\r\n * @returns {Object} Target object.\r\n */\r\nvar defineConfigurable = (function (target, props) {\r\n for (var _i = 0, _a = Object.keys(props); _i < _a.length; _i++) {\r\n var key = _a[_i];\r\n Object.defineProperty(target, key, {\r\n value: props[key],\r\n enumerable: false,\r\n writable: false,\r\n configurable: true\r\n });\r\n }\r\n return target;\r\n});\n\n/**\r\n * Returns the global object associated with provided element.\r\n *\r\n * @param {Object} target\r\n * @returns {Object}\r\n */\r\nvar getWindowOf = (function (target) {\r\n // Assume that the element is an instance of Node, which means that it\r\n // has the \"ownerDocument\" property from which we can retrieve a\r\n // corresponding global object.\r\n var ownerGlobal = target && target.ownerDocument && target.ownerDocument.defaultView;\r\n // Return the local global object if it's not possible extract one from\r\n // provided element.\r\n return ownerGlobal || global$1;\r\n});\n\n// Placeholder of an empty content rectangle.\r\nvar 
emptyRect = createRectInit(0, 0, 0, 0);\r\n/**\r\n * Converts provided string to a number.\r\n *\r\n * @param {number|string} value\r\n * @returns {number}\r\n */\r\nfunction toFloat(value) {\r\n return parseFloat(value) || 0;\r\n}\r\n/**\r\n * Extracts borders size from provided styles.\r\n *\r\n * @param {CSSStyleDeclaration} styles\r\n * @param {...string} positions - Borders positions (top, right, ...)\r\n * @returns {number}\r\n */\r\nfunction getBordersSize(styles) {\r\n var positions = [];\r\n for (var _i = 1; _i < arguments.length; _i++) {\r\n positions[_i - 1] = arguments[_i];\r\n }\r\n return positions.reduce(function (size, position) {\r\n var value = styles['border-' + position + '-width'];\r\n return size + toFloat(value);\r\n }, 0);\r\n}\r\n/**\r\n * Extracts paddings sizes from provided styles.\r\n *\r\n * @param {CSSStyleDeclaration} styles\r\n * @returns {Object} Paddings box.\r\n */\r\nfunction getPaddings(styles) {\r\n var positions = ['top', 'right', 'bottom', 'left'];\r\n var paddings = {};\r\n for (var _i = 0, positions_1 = positions; _i < positions_1.length; _i++) {\r\n var position = positions_1[_i];\r\n var value = styles['padding-' + position];\r\n paddings[position] = toFloat(value);\r\n }\r\n return paddings;\r\n}\r\n/**\r\n * Calculates content rectangle of provided SVG element.\r\n *\r\n * @param {SVGGraphicsElement} target - Element content rectangle of which needs\r\n * to be calculated.\r\n * @returns {DOMRectInit}\r\n */\r\nfunction getSVGContentRect(target) {\r\n var bbox = target.getBBox();\r\n return createRectInit(0, 0, bbox.width, bbox.height);\r\n}\r\n/**\r\n * Calculates content rectangle of provided HTMLElement.\r\n *\r\n * @param {HTMLElement} target - Element for which to calculate the content rectangle.\r\n * @returns {DOMRectInit}\r\n */\r\nfunction getHTMLElementContentRect(target) {\r\n // Client width & height properties can't be\r\n // used exclusively as they provide rounded values.\r\n var clientWidth = 
target.clientWidth, clientHeight = target.clientHeight;\r\n // By this condition we can catch all non-replaced inline, hidden and\r\n // detached elements. Though elements with width & height properties less\r\n // than 0.5 will be discarded as well.\r\n //\r\n // Without it we would need to implement separate methods for each of\r\n // those cases and it's not possible to perform a precise and performance\r\n // effective test for hidden elements. E.g. even jQuery's ':visible' filter\r\n // gives wrong results for elements with width & height less than 0.5.\r\n if (!clientWidth && !clientHeight) {\r\n return emptyRect;\r\n }\r\n var styles = getWindowOf(target).getComputedStyle(target);\r\n var paddings = getPaddings(styles);\r\n var horizPad = paddings.left + paddings.right;\r\n var vertPad = paddings.top + paddings.bottom;\r\n // Computed styles of width & height are being used because they are the\r\n // only dimensions available to JS that contain non-rounded values. It could\r\n // be possible to utilize the getBoundingClientRect if only it's data wasn't\r\n // affected by CSS transformations let alone paddings, borders and scroll bars.\r\n var width = toFloat(styles.width), height = toFloat(styles.height);\r\n // Width & height include paddings and borders when the 'border-box' box\r\n // model is applied (except for IE).\r\n if (styles.boxSizing === 'border-box') {\r\n // Following conditions are required to handle Internet Explorer which\r\n // doesn't include paddings and borders to computed CSS dimensions.\r\n //\r\n // We can say that if CSS dimensions + paddings are equal to the \"client\"\r\n // properties then it's either IE, and thus we don't need to subtract\r\n // anything, or an element merely doesn't have paddings/borders styles.\r\n if (Math.round(width + horizPad) !== clientWidth) {\r\n width -= getBordersSize(styles, 'left', 'right') + horizPad;\r\n }\r\n if (Math.round(height + vertPad) !== clientHeight) {\r\n height -= 
getBordersSize(styles, 'top', 'bottom') + vertPad;\r\n }\r\n }\r\n // Following steps can't be applied to the document's root element as its\r\n // client[Width/Height] properties represent viewport area of the window.\r\n // Besides, it's as well not necessary as the <html> itself neither has\r\n // rendered scroll bars nor it can be clipped.\r\n if (!isDocumentElement(target)) {\r\n // In some browsers (only in Firefox, actually) CSS width & height\r\n // include scroll bars size which can be removed at this step as scroll\r\n // bars are the only difference between rounded dimensions + paddings\r\n // and \"client\" properties, though that is not always true in Chrome.\r\n var vertScrollbar = Math.round(width + horizPad) - clientWidth;\r\n var horizScrollbar = Math.round(height + vertPad) - clientHeight;\r\n // Chrome has a rather weird rounding of \"client\" properties.\r\n // E.g. for an element with content width of 314.2px it sometimes gives\r\n // the client width of 315px and for the width of 314.7px it may give\r\n // 314px. And it doesn't happen all the time. 
So just ignore this delta\r\n // as a non-relevant.\r\n if (Math.abs(vertScrollbar) !== 1) {\r\n width -= vertScrollbar;\r\n }\r\n if (Math.abs(horizScrollbar) !== 1) {\r\n height -= horizScrollbar;\r\n }\r\n }\r\n return createRectInit(paddings.left, paddings.top, width, height);\r\n}\r\n/**\r\n * Checks whether provided element is an instance of the SVGGraphicsElement.\r\n *\r\n * @param {Element} target - Element to be checked.\r\n * @returns {boolean}\r\n */\r\nvar isSVGGraphicsElement = (function () {\r\n // Some browsers, namely IE and Edge, don't have the SVGGraphicsElement\r\n // interface.\r\n if (typeof SVGGraphicsElement !== 'undefined') {\r\n return function (target) { return target instanceof getWindowOf(target).SVGGraphicsElement; };\r\n }\r\n // If it's so, then check that element is at least an instance of the\r\n // SVGElement and that it has the \"getBBox\" method.\r\n // eslint-disable-next-line no-extra-parens\r\n return function (target) { return (target instanceof getWindowOf(target).SVGElement &&\r\n typeof target.getBBox === 'function'); };\r\n})();\r\n/**\r\n * Checks whether provided element is a document element (<html>).\r\n *\r\n * @param {Element} target - Element to be checked.\r\n * @returns {boolean}\r\n */\r\nfunction isDocumentElement(target) {\r\n return target === getWindowOf(target).document.documentElement;\r\n}\r\n/**\r\n * Calculates an appropriate content rectangle for provided html or svg element.\r\n *\r\n * @param {Element} target - Element content rectangle of which needs to be calculated.\r\n * @returns {DOMRectInit}\r\n */\r\nfunction getContentRect(target) {\r\n if (!isBrowser) {\r\n return emptyRect;\r\n }\r\n if (isSVGGraphicsElement(target)) {\r\n return getSVGContentRect(target);\r\n }\r\n return getHTMLElementContentRect(target);\r\n}\r\n/**\r\n * Creates rectangle with an interface of the DOMRectReadOnly.\r\n * Spec: https://drafts.fxtf.org/geometry/#domrectreadonly\r\n *\r\n * @param {DOMRectInit} rectInit - Object 
with rectangle's x/y coordinates and dimensions.\r\n * @returns {DOMRectReadOnly}\r\n */\r\nfunction createReadOnlyRect(_a) {\r\n var x = _a.x, y = _a.y, width = _a.width, height = _a.height;\r\n // If DOMRectReadOnly is available use it as a prototype for the rectangle.\r\n var Constr = typeof DOMRectReadOnly !== 'undefined' ? DOMRectReadOnly : Object;\r\n var rect = Object.create(Constr.prototype);\r\n // Rectangle's properties are not writable and non-enumerable.\r\n defineConfigurable(rect, {\r\n x: x, y: y, width: width, height: height,\r\n top: y,\r\n right: x + width,\r\n bottom: height + y,\r\n left: x\r\n });\r\n return rect;\r\n}\r\n/**\r\n * Creates DOMRectInit object based on the provided dimensions and the x/y coordinates.\r\n * Spec: https://drafts.fxtf.org/geometry/#dictdef-domrectinit\r\n *\r\n * @param {number} x - X coordinate.\r\n * @param {number} y - Y coordinate.\r\n * @param {number} width - Rectangle's width.\r\n * @param {number} height - Rectangle's height.\r\n * @returns {DOMRectInit}\r\n */\r\nfunction createRectInit(x, y, width, height) {\r\n return { x: x, y: y, width: width, height: height };\r\n}\n\n/**\r\n * Class that is responsible for computations of the content rectangle of\r\n * provided DOM element and for keeping track of it's changes.\r\n */\r\nvar ResizeObservation = /** @class */ (function () {\r\n /**\r\n * Creates an instance of ResizeObservation.\r\n *\r\n * @param {Element} target - Element to be observed.\r\n */\r\n function ResizeObservation(target) {\r\n /**\r\n * Broadcasted width of content rectangle.\r\n *\r\n * @type {number}\r\n */\r\n this.broadcastWidth = 0;\r\n /**\r\n * Broadcasted height of content rectangle.\r\n *\r\n * @type {number}\r\n */\r\n this.broadcastHeight = 0;\r\n /**\r\n * Reference to the last observed content rectangle.\r\n *\r\n * @private {DOMRectInit}\r\n */\r\n this.contentRect_ = createRectInit(0, 0, 0, 0);\r\n this.target = target;\r\n }\r\n /**\r\n * Updates content rectangle and 
tells whether it's width or height properties\r\n * have changed since the last broadcast.\r\n *\r\n * @returns {boolean}\r\n */\r\n ResizeObservation.prototype.isActive = function () {\r\n var rect = getContentRect(this.target);\r\n this.contentRect_ = rect;\r\n return (rect.width !== this.broadcastWidth ||\r\n rect.height !== this.broadcastHeight);\r\n };\r\n /**\r\n * Updates 'broadcastWidth' and 'broadcastHeight' properties with a data\r\n * from the corresponding properties of the last observed content rectangle.\r\n *\r\n * @returns {DOMRectInit} Last observed content rectangle.\r\n */\r\n ResizeObservation.prototype.broadcastRect = function () {\r\n var rect = this.contentRect_;\r\n this.broadcastWidth = rect.width;\r\n this.broadcastHeight = rect.height;\r\n return rect;\r\n };\r\n return ResizeObservation;\r\n}());\n\nvar ResizeObserverEntry = /** @class */ (function () {\r\n /**\r\n * Creates an instance of ResizeObserverEntry.\r\n *\r\n * @param {Element} target - Element that is being observed.\r\n * @param {DOMRectInit} rectInit - Data of the element's content rectangle.\r\n */\r\n function ResizeObserverEntry(target, rectInit) {\r\n var contentRect = createReadOnlyRect(rectInit);\r\n // According to the specification following properties are not writable\r\n // and are also not enumerable in the native implementation.\r\n //\r\n // Property accessors are not being used as they'd require to define a\r\n // private WeakMap storage which may cause memory leaks in browsers that\r\n // don't support this type of collections.\r\n defineConfigurable(this, { target: target, contentRect: contentRect });\r\n }\r\n return ResizeObserverEntry;\r\n}());\n\nvar ResizeObserverSPI = /** @class */ (function () {\r\n /**\r\n * Creates a new instance of ResizeObserver.\r\n *\r\n * @param {ResizeObserverCallback} callback - Callback function that is invoked\r\n * when one of the observed elements changes it's content dimensions.\r\n * @param {ResizeObserverController} 
controller - Controller instance which\r\n * is responsible for the updates of observer.\r\n * @param {ResizeObserver} callbackCtx - Reference to the public\r\n * ResizeObserver instance which will be passed to callback function.\r\n */\r\n function ResizeObserverSPI(callback, controller, callbackCtx) {\r\n /**\r\n * Collection of resize observations that have detected changes in dimensions\r\n * of elements.\r\n *\r\n * @private {Array}\r\n */\r\n this.activeObservations_ = [];\r\n /**\r\n * Registry of the ResizeObservation instances.\r\n *\r\n * @private {Map}\r\n */\r\n this.observations_ = new MapShim();\r\n if (typeof callback !== 'function') {\r\n throw new TypeError('The callback provided as parameter 1 is not a function.');\r\n }\r\n this.callback_ = callback;\r\n this.controller_ = controller;\r\n this.callbackCtx_ = callbackCtx;\r\n }\r\n /**\r\n * Starts observing provided element.\r\n *\r\n * @param {Element} target - Element to be observed.\r\n * @returns {void}\r\n */\r\n ResizeObserverSPI.prototype.observe = function (target) {\r\n if (!arguments.length) {\r\n throw new TypeError('1 argument required, but only 0 present.');\r\n }\r\n // Do nothing if current environment doesn't have the Element interface.\r\n if (typeof Element === 'undefined' || !(Element instanceof Object)) {\r\n return;\r\n }\r\n if (!(target instanceof getWindowOf(target).Element)) {\r\n throw new TypeError('parameter 1 is not of type \"Element\".');\r\n }\r\n var observations = this.observations_;\r\n // Do nothing if element is already being observed.\r\n if (observations.has(target)) {\r\n return;\r\n }\r\n observations.set(target, new ResizeObservation(target));\r\n this.controller_.addObserver(this);\r\n // Force the update of observations.\r\n this.controller_.refresh();\r\n };\r\n /**\r\n * Stops observing provided element.\r\n *\r\n * @param {Element} target - Element to stop observing.\r\n * @returns {void}\r\n */\r\n ResizeObserverSPI.prototype.unobserve = function 
(target) {\r\n if (!arguments.length) {\r\n throw new TypeError('1 argument required, but only 0 present.');\r\n }\r\n // Do nothing if current environment doesn't have the Element interface.\r\n if (typeof Element === 'undefined' || !(Element instanceof Object)) {\r\n return;\r\n }\r\n if (!(target instanceof getWindowOf(target).Element)) {\r\n throw new TypeError('parameter 1 is not of type \"Element\".');\r\n }\r\n var observations = this.observations_;\r\n // Do nothing if element is not being observed.\r\n if (!observations.has(target)) {\r\n return;\r\n }\r\n observations.delete(target);\r\n if (!observations.size) {\r\n this.controller_.removeObserver(this);\r\n }\r\n };\r\n /**\r\n * Stops observing all elements.\r\n *\r\n * @returns {void}\r\n */\r\n ResizeObserverSPI.prototype.disconnect = function () {\r\n this.clearActive();\r\n this.observations_.clear();\r\n this.controller_.removeObserver(this);\r\n };\r\n /**\r\n * Collects observation instances the associated element of which has changed\r\n * it's content rectangle.\r\n *\r\n * @returns {void}\r\n */\r\n ResizeObserverSPI.prototype.gatherActive = function () {\r\n var _this = this;\r\n this.clearActive();\r\n this.observations_.forEach(function (observation) {\r\n if (observation.isActive()) {\r\n _this.activeObservations_.push(observation);\r\n }\r\n });\r\n };\r\n /**\r\n * Invokes initial callback function with a list of ResizeObserverEntry\r\n * instances collected from active resize observations.\r\n *\r\n * @returns {void}\r\n */\r\n ResizeObserverSPI.prototype.broadcastActive = function () {\r\n // Do nothing if observer doesn't have active observations.\r\n if (!this.hasActive()) {\r\n return;\r\n }\r\n var ctx = this.callbackCtx_;\r\n // Create ResizeObserverEntry instance for every active observation.\r\n var entries = this.activeObservations_.map(function (observation) {\r\n return new ResizeObserverEntry(observation.target, observation.broadcastRect());\r\n });\r\n 
this.callback_.call(ctx, entries, ctx);\r\n this.clearActive();\r\n };\r\n /**\r\n * Clears the collection of active observations.\r\n *\r\n * @returns {void}\r\n */\r\n ResizeObserverSPI.prototype.clearActive = function () {\r\n this.activeObservations_.splice(0);\r\n };\r\n /**\r\n * Tells whether observer has active observations.\r\n *\r\n * @returns {boolean}\r\n */\r\n ResizeObserverSPI.prototype.hasActive = function () {\r\n return this.activeObservations_.length > 0;\r\n };\r\n return ResizeObserverSPI;\r\n}());\n\n// Registry of internal observers. If WeakMap is not available use current shim\r\n// for the Map collection as it has all required methods and because WeakMap\r\n// can't be fully polyfilled anyway.\r\nvar observers = typeof WeakMap !== 'undefined' ? new WeakMap() : new MapShim();\r\n/**\r\n * ResizeObserver API. Encapsulates the ResizeObserver SPI implementation\r\n * exposing only those methods and properties that are defined in the spec.\r\n */\r\nvar ResizeObserver = /** @class */ (function () {\r\n /**\r\n * Creates a new instance of ResizeObserver.\r\n *\r\n * @param {ResizeObserverCallback} callback - Callback that is invoked when\r\n * dimensions of the observed elements change.\r\n */\r\n function ResizeObserver(callback) {\r\n if (!(this instanceof ResizeObserver)) {\r\n throw new TypeError('Cannot call a class as a function.');\r\n }\r\n if (!arguments.length) {\r\n throw new TypeError('1 argument required, but only 0 present.');\r\n }\r\n var controller = ResizeObserverController.getInstance();\r\n var observer = new ResizeObserverSPI(callback, controller, this);\r\n observers.set(this, observer);\r\n }\r\n return ResizeObserver;\r\n}());\r\n// Expose public methods of ResizeObserver.\r\n[\r\n 'observe',\r\n 'unobserve',\r\n 'disconnect'\r\n].forEach(function (method) {\r\n ResizeObserver.prototype[method] = function () {\r\n var _a;\r\n return (_a = observers.get(this))[method].apply(_a, arguments);\r\n };\r\n});\n\nvar index = 
(function () {\r\n // Export existing implementation if available.\r\n if (typeof global$1.ResizeObserver !== 'undefined') {\r\n return global$1.ResizeObserver;\r\n }\r\n return ResizeObserver;\r\n})();\n\nexport default index;\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport ResizeObserver from \"resize-observer-polyfill\"\nimport {\n NEVER,\n Observable,\n Subject,\n defer,\n filter,\n finalize,\n map,\n merge,\n of,\n shareReplay,\n startWith,\n switchMap,\n tap\n} from \"rxjs\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Element offset\n */\nexport interface ElementSize {\n width: number /* Element width */\n height: number /* Element height */\n}\n\n/* ----------------------------------------------------------------------------\n * Data\n * ------------------------------------------------------------------------- */\n\n/**\n * Resize observer entry subject\n */\nconst entry$ = new Subject()\n\n/**\n * Resize observer observable\n *\n * This observable will create a `ResizeObserver` on the first subscription\n * and will automatically terminate it when there are no more subscribers.\n * It's quite important to centralize observation in a single `ResizeObserver`,\n * as the performance difference can be quite dramatic, as the link shows.\n *\n * @see https://bit.ly/3iIYfEm - Google Groups on performance\n */\nconst observer$ = defer(() => of(\n new ResizeObserver(entries => {\n for (const entry of entries)\n entry$.next(entry)\n })\n))\n .pipe(\n switchMap(observer => merge(NEVER, of(observer))\n .pipe(\n finalize(() => observer.disconnect())\n )\n ),\n shareReplay(1)\n )\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Retrieve element size\n *\n * @param el - Element\n 
*\n * @returns Element size\n */\nexport function getElementSize(\n el: HTMLElement\n): ElementSize {\n return {\n width: el.offsetWidth,\n height: el.offsetHeight\n }\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Watch element size\n *\n * This function returns an observable that subscribes to a single internal\n * instance of `ResizeObserver` upon subscription, and emit resize events until\n * termination. Note that this function should not be called with the same\n * element twice, as the first unsubscription will terminate observation.\n *\n * Sadly, we can't use the `DOMRect` objects returned by the observer, because\n * we need the emitted values to be consistent with `getElementSize`, which will\n * return the used values (rounded) and not actual values (unrounded). Thus, we\n * use the `offset*` properties. See the linked GitHub issue.\n *\n * @see https://bit.ly/3m0k3he - GitHub issue\n *\n * @param el - Element\n *\n * @returns Element size observable\n */\nexport function watchElementSize(\n el: HTMLElement\n): Observable {\n return observer$\n .pipe(\n tap(observer => observer.observe(el)),\n switchMap(observer => entry$\n .pipe(\n filter(({ target }) => target === el),\n finalize(() => observer.unobserve(el)),\n map(() => getElementSize(el))\n )\n ),\n startWith(getElementSize(el))\n )\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or 
substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport { ElementSize } from \"../_\"\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Retrieve element content size (= scroll width and height)\n *\n * @param el - Element\n *\n * @returns Element content size\n */\nexport function getElementContentSize(\n el: HTMLElement\n): ElementSize {\n return {\n width: el.scrollWidth,\n height: el.scrollHeight\n }\n}\n\n/**\n * Retrieve the overflowing container of an element, if any\n *\n * @param el - Element\n *\n * @returns Overflowing container or nothing\n */\nexport function getElementContainer(\n el: HTMLElement\n): HTMLElement | undefined {\n let parent = el.parentElement\n while (parent)\n if (\n el.scrollWidth <= parent.scrollWidth &&\n el.scrollHeight <= parent.scrollHeight\n )\n parent = (el = parent).parentElement\n else\n break\n\n /* Return overflowing container */\n return parent ? 
el : undefined\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n NEVER,\n Observable,\n Subject,\n defer,\n distinctUntilChanged,\n filter,\n finalize,\n map,\n merge,\n of,\n shareReplay,\n switchMap,\n tap\n} from \"rxjs\"\n\nimport {\n getElementContentSize,\n getElementSize,\n watchElementContentOffset\n} from \"~/browser\"\n\n/* ----------------------------------------------------------------------------\n * Data\n * ------------------------------------------------------------------------- */\n\n/**\n * Intersection observer entry subject\n */\nconst entry$ = new Subject()\n\n/**\n * Intersection observer observable\n *\n * This observable will create an `IntersectionObserver` on first subscription\n * and will automatically terminate it when there are no more subscribers.\n *\n * @see https://bit.ly/3iIYfEm - Google Groups on performance\n */\nconst observer$ 
= defer(() => of(\n new IntersectionObserver(entries => {\n for (const entry of entries)\n entry$.next(entry)\n }, {\n threshold: 0\n })\n))\n .pipe(\n switchMap(observer => merge(NEVER, of(observer))\n .pipe(\n finalize(() => observer.disconnect())\n )\n ),\n shareReplay(1)\n )\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch element visibility\n *\n * @param el - Element\n *\n * @returns Element visibility observable\n */\nexport function watchElementVisibility(\n el: HTMLElement\n): Observable {\n return observer$\n .pipe(\n tap(observer => observer.observe(el)),\n switchMap(observer => entry$\n .pipe(\n filter(({ target }) => target === el),\n finalize(() => observer.unobserve(el)),\n map(({ isIntersecting }) => isIntersecting)\n )\n )\n )\n}\n\n/**\n * Watch element boundary\n *\n * This function returns an observable which emits whether the bottom content\n * boundary (= scroll offset) of an element is within a certain threshold.\n *\n * @param el - Element\n * @param threshold - Threshold\n *\n * @returns Element boundary observable\n */\nexport function watchElementBoundary(\n el: HTMLElement, threshold = 16\n): Observable {\n return watchElementContentOffset(el)\n .pipe(\n map(({ y }) => {\n const visible = getElementSize(el)\n const content = getElementContentSize(el)\n return y >= (\n content.height - visible.height - threshold\n )\n }),\n distinctUntilChanged()\n )\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software 
is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n Observable,\n fromEvent,\n map,\n startWith\n} from \"rxjs\"\n\nimport { getElement } from \"../element\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Toggle\n */\nexport type Toggle =\n | \"drawer\" /* Toggle for drawer */\n | \"search\" /* Toggle for search */\n\n/* ----------------------------------------------------------------------------\n * Data\n * ------------------------------------------------------------------------- */\n\n/**\n * Toggle map\n */\nconst toggles: Record = {\n drawer: getElement(\"[data-md-toggle=drawer]\"),\n search: getElement(\"[data-md-toggle=search]\")\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Retrieve the value of a toggle\n *\n * @param name - Toggle\n *\n * @returns Toggle value\n */\nexport function getToggle(name: Toggle): boolean {\n return toggles[name].checked\n}\n\n/**\n * Set toggle\n *\n * Simulating a click event seems to be the most cross-browser compatible way\n * of changing the value while also emitting a `change` event. 
Before, Material\n * used `CustomEvent` to programmatically change the value of a toggle, but this\n * is a much simpler and cleaner solution which doesn't require a polyfill.\n *\n * @param name - Toggle\n * @param value - Toggle value\n */\nexport function setToggle(name: Toggle, value: boolean): void {\n if (toggles[name].checked !== value)\n toggles[name].click()\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Watch toggle\n *\n * @param name - Toggle\n *\n * @returns Toggle value observable\n */\nexport function watchToggle(name: Toggle): Observable {\n const el = toggles[name]\n return fromEvent(el, \"change\")\n .pipe(\n map(() => el.checked),\n startWith(el.checked)\n )\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n Observable,\n filter,\n fromEvent,\n map,\n share\n} from \"rxjs\"\n\nimport { getActiveElement } from \"../element\"\nimport { getToggle } from \"../toggle\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Keyboard mode\n */\nexport type KeyboardMode =\n | \"global\" /* Global */\n | \"search\" /* Search is open */\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Keyboard\n */\nexport interface Keyboard {\n mode: KeyboardMode /* Keyboard mode */\n type: string /* Key type */\n claim(): void /* Key claim */\n}\n\n/* ----------------------------------------------------------------------------\n * Helper functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Check whether an element may receive keyboard input\n *\n * @param el - Element\n * @param type - Key type\n *\n * @returns Test result\n */\nfunction isSusceptibleToKeyboard(\n el: HTMLElement, type: string\n): boolean {\n switch (el.constructor) {\n\n /* Input elements */\n case HTMLInputElement:\n /* @ts-expect-error - omit unnecessary type cast */\n if (el.type === \"radio\")\n return /^Arrow/.test(type)\n else\n return true\n\n /* Select element and textarea */\n case HTMLSelectElement:\n case HTMLTextAreaElement:\n return true\n\n /* Everything else */\n default:\n return el.isContentEditable\n }\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch 
keyboard\n *\n * @returns Keyboard observable\n */\nexport function watchKeyboard(): Observable {\n return fromEvent(window, \"keydown\")\n .pipe(\n filter(ev => !(ev.metaKey || ev.ctrlKey)),\n map(ev => ({\n mode: getToggle(\"search\") ? \"search\" : \"global\",\n type: ev.key,\n claim() {\n ev.preventDefault()\n ev.stopPropagation()\n }\n } as Keyboard)),\n filter(({ mode, type }) => {\n if (mode === \"global\") {\n const active = getActiveElement()\n if (typeof active !== \"undefined\")\n return !isSusceptibleToKeyboard(active, type)\n }\n return true\n }),\n share()\n )\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport { Subject } from \"rxjs\"\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Retrieve location\n *\n * This function returns a `URL` object (and not `Location`) to normalize the\n * typings across the application. Furthermore, locations need to be tracked\n * without setting them and `Location` is a singleton which represents the\n * current location.\n *\n * @returns URL\n */\nexport function getLocation(): URL {\n return new URL(location.href)\n}\n\n/**\n * Set location\n *\n * @param url - URL to change to\n */\nexport function setLocation(url: URL): void {\n location.href = url.href\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Watch location\n *\n * @returns Location subject\n */\nexport function watchLocation(): Subject {\n return new Subject()\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, 
INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport { JSX as JSXInternal } from \"preact\"\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * HTML attributes\n */\ntype Attributes =\n & JSXInternal.HTMLAttributes\n & JSXInternal.SVGAttributes\n & Record\n\n/**\n * Child element\n */\ntype Child =\n | HTMLElement\n | Text\n | string\n | number\n\n/* ----------------------------------------------------------------------------\n * Helper functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Append a child node to an element\n *\n * @param el - Element\n * @param child - Child node(s)\n */\nfunction appendChild(el: HTMLElement, child: Child | Child[]): void {\n\n /* Handle primitive types (including raw HTML) */\n if (typeof child === \"string\" || typeof child === \"number\") {\n el.innerHTML += child.toString()\n\n /* Handle nodes */\n } else if (child instanceof Node) {\n el.appendChild(child)\n\n /* Handle nested children */\n } else if (Array.isArray(child)) {\n for (const node of child)\n appendChild(el, node)\n }\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * JSX factory\n *\n * @template T - Element type\n *\n * @param tag - HTML tag\n * @param attributes - HTML attributes\n * @param children - Child elements\n *\n * @returns Element\n */\nexport function h(\n tag: T, 
attributes?: Attributes | null, ...children: Child[]\n): HTMLElementTagNameMap[T]\n\nexport function h(\n tag: string, attributes?: Attributes | null, ...children: Child[]\n): T\n\nexport function h(\n tag: string, attributes?: Attributes | null, ...children: Child[]\n): T {\n const el = document.createElement(tag)\n\n /* Set attributes, if any */\n if (attributes)\n for (const attr of Object.keys(attributes)) {\n if (typeof attributes[attr] === \"undefined\")\n continue\n\n /* Set default attribute or boolean */\n if (typeof attributes[attr] !== \"boolean\")\n el.setAttribute(attr, attributes[attr])\n else\n el.setAttribute(attr, \"\")\n }\n\n /* Append child nodes */\n for (const child of children)\n appendChild(el, child)\n\n /* Return element */\n return el as T\n}\n\n/* ----------------------------------------------------------------------------\n * Namespace\n * ------------------------------------------------------------------------- */\n\nexport declare namespace h {\n namespace JSX {\n type Element = HTMLElement\n type IntrinsicElements = JSXInternal.IntrinsicElements\n }\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Truncate a string after the given number of characters\n *\n * This is not a very reasonable approach, since the summaries kind of suck.\n * It would be better to create something more intelligent, highlighting the\n * search occurrences and making a better summary out of it, but this note was\n * written three years ago, so who knows if we'll ever fix it.\n *\n * @param value - Value to be truncated\n * @param n - Number of characters\n *\n * @returns Truncated value\n */\nexport function truncate(value: string, n: number): string {\n let i = n\n if (value.length > i) {\n while (value[i] !== \" \" && --i > 0) { /* keep eating */ }\n return `${value.substring(0, i)}...`\n }\n return value\n}\n\n/**\n * Round a number for display with repository facts\n *\n * This is a reverse-engineered version of GitHub's weird rounding algorithm\n * for stars, forks and all other numbers. 
While all numbers below `1,000` are\n * returned as-is, bigger numbers are converted to fixed numbers:\n *\n * - `1,049` => `1k`\n * - `1,050` => `1.1k`\n * - `1,949` => `1.9k`\n * - `1,950` => `2k`\n *\n * @param value - Original value\n *\n * @returns Rounded value\n */\nexport function round(value: number): string {\n if (value > 999) {\n const digits = +((value - 950) % 1000 > 99)\n return `${((value + 0.000001) / 1000).toFixed(digits)}k`\n } else {\n return value.toString()\n }\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n Observable,\n filter,\n fromEvent,\n map,\n shareReplay,\n startWith\n} from \"rxjs\"\n\nimport { getOptionalElement } from \"~/browser\"\nimport { h } from \"~/utilities\"\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Retrieve location hash\n *\n * @returns Location hash\n */\nexport function getLocationHash(): string {\n return location.hash.substring(1)\n}\n\n/**\n * Set location hash\n *\n * Setting a new fragment identifier via `location.hash` will have no effect\n * if the value doesn't change. When a new fragment identifier is set, we want\n * the browser to target the respective element at all times, which is why we\n * use this dirty little trick.\n *\n * @param hash - Location hash\n */\nexport function setLocationHash(hash: string): void {\n const el = h(\"a\", { href: hash })\n el.addEventListener(\"click\", ev => ev.stopPropagation())\n el.click()\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Watch location hash\n *\n * @returns Location hash observable\n */\nexport function watchLocationHash(): Observable {\n return fromEvent(window, \"hashchange\")\n .pipe(\n map(getLocationHash),\n startWith(getLocationHash()),\n filter(hash => hash.length > 0),\n shareReplay(1)\n )\n}\n\n/**\n * Watch location target\n *\n * @returns Location target observable\n */\nexport function watchLocationTarget(): Observable {\n return watchLocationHash()\n .pipe(\n map(id => getOptionalElement(`[id=\"${id}\"]`)!),\n filter(el => typeof el !== \"undefined\")\n )\n}\n", "/*\n * Copyright (c) 
2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n EMPTY,\n Observable,\n fromEvent,\n fromEventPattern,\n map,\n merge,\n startWith,\n switchMap\n} from \"rxjs\"\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch media query\n *\n * Note that although `MediaQueryList.addListener` is deprecated we have to\n * use it, because it's the only way to ensure proper downward compatibility.\n *\n * @see https://bit.ly/3dUBH2m - GitHub issue\n *\n * @param query - Media query\n *\n * @returns Media observable\n */\nexport function watchMedia(query: string): Observable {\n const media = matchMedia(query)\n return fromEventPattern(next => (\n media.addListener(() => next(media.matches))\n ))\n .pipe(\n startWith(media.matches)\n )\n}\n\n/**\n * Watch print mode\n 
*\n * @returns Print observable\n */\nexport function watchPrint(): Observable {\n const media = matchMedia(\"print\")\n return merge(\n fromEvent(window, \"beforeprint\").pipe(map(() => true)),\n fromEvent(window, \"afterprint\").pipe(map(() => false))\n )\n .pipe(\n startWith(media.matches)\n )\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Toggle an observable with a media observable\n *\n * @template T - Data type\n *\n * @param query$ - Media observable\n * @param factory - Observable factory\n *\n * @returns Toggled observable\n */\nexport function at(\n query$: Observable, factory: () => Observable\n): Observable {\n return query$\n .pipe(\n switchMap(active => active ? factory() : EMPTY)\n )\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n EMPTY,\n Observable,\n catchError,\n from,\n map,\n of,\n shareReplay,\n switchMap,\n throwError\n} from \"rxjs\"\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Fetch the given URL\n *\n * If the request fails (e.g. when dispatched from `file://` locations), the\n * observable will complete without emitting a value.\n *\n * @param url - Request URL\n * @param options - Options\n *\n * @returns Response observable\n */\nexport function request(\n url: URL | string, options: RequestInit = { credentials: \"same-origin\" }\n): Observable {\n return from(fetch(`${url}`, options))\n .pipe(\n catchError(() => EMPTY),\n switchMap(res => res.status !== 200\n ? 
throwError(() => new Error(res.statusText))\n : of(res)\n )\n )\n}\n\n/**\n * Fetch JSON from the given URL\n *\n * @template T - Data type\n *\n * @param url - Request URL\n * @param options - Options\n *\n * @returns Data observable\n */\nexport function requestJSON(\n url: URL | string, options?: RequestInit\n): Observable {\n return request(url, options)\n .pipe(\n switchMap(res => res.json()),\n shareReplay(1)\n )\n}\n\n/**\n * Fetch XML from the given URL\n *\n * @param url - Request URL\n * @param options - Options\n *\n * @returns Data observable\n */\nexport function requestXML(\n url: URL | string, options?: RequestInit\n): Observable {\n const dom = new DOMParser()\n return request(url, options)\n .pipe(\n switchMap(res => res.text()),\n map(res => dom.parseFromString(res, \"text/xml\")),\n shareReplay(1)\n )\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n Observable,\n defer,\n finalize,\n fromEvent,\n map,\n merge,\n switchMap,\n take,\n throwError\n} from \"rxjs\"\n\nimport { h } from \"~/utilities\"\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Create and load a `script` element\n *\n * This function returns an observable that will emit when the script was\n * successfully loaded, or throw an error if it didn't.\n *\n * @param src - Script URL\n *\n * @returns Script observable\n */\nexport function watchScript(src: string): Observable {\n const script = h(\"script\", { src })\n return defer(() => {\n document.head.appendChild(script)\n return merge(\n fromEvent(script, \"load\"),\n fromEvent(script, \"error\")\n .pipe(\n switchMap(() => (\n throwError(() => new ReferenceError(`Invalid script: ${src}`))\n ))\n )\n )\n .pipe(\n map(() => undefined),\n finalize(() => document.head.removeChild(script)),\n take(1)\n )\n })\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE 
IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n Observable,\n fromEvent,\n map,\n merge,\n startWith\n} from \"rxjs\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Viewport offset\n */\nexport interface ViewportOffset {\n x: number /* Horizontal offset */\n y: number /* Vertical offset */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Retrieve viewport offset\n *\n * On iOS Safari, viewport offset can be negative due to overflow scrolling.\n * As this may induce strange behaviors downstream, we'll just limit it to 0.\n *\n * @returns Viewport offset\n */\nexport function getViewportOffset(): ViewportOffset {\n return {\n x: Math.max(0, scrollX),\n y: Math.max(0, scrollY)\n }\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Watch viewport offset\n *\n * @returns Viewport offset observable\n */\nexport function watchViewportOffset(): Observable {\n return merge(\n fromEvent(window, \"scroll\", { passive: true }),\n fromEvent(window, \"resize\", { passive: true })\n )\n .pipe(\n map(getViewportOffset),\n startWith(getViewportOffset())\n )\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and 
associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n Observable,\n fromEvent,\n map,\n startWith\n} from \"rxjs\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Viewport size\n */\nexport interface ViewportSize {\n width: number /* Viewport width */\n height: number /* Viewport height */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Retrieve viewport size\n *\n * @returns Viewport size\n */\nexport function getViewportSize(): ViewportSize {\n return {\n width: innerWidth,\n height: innerHeight\n }\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Watch viewport size\n *\n * @returns Viewport size observable\n */\nexport function watchViewportSize(): Observable {\n return fromEvent(window, \"resize\", { 
passive: true })\n .pipe(\n map(getViewportSize),\n startWith(getViewportSize())\n )\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n Observable,\n combineLatest,\n map,\n shareReplay\n} from \"rxjs\"\n\nimport {\n ViewportOffset,\n watchViewportOffset\n} from \"../offset\"\nimport {\n ViewportSize,\n watchViewportSize\n} from \"../size\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Viewport\n */\nexport interface Viewport {\n offset: ViewportOffset /* Viewport offset */\n size: ViewportSize /* Viewport size */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch viewport\n *\n * 
@returns Viewport observable\n */\nexport function watchViewport(): Observable {\n return combineLatest([\n watchViewportOffset(),\n watchViewportSize()\n ])\n .pipe(\n map(([offset, size]) => ({ offset, size })),\n shareReplay(1)\n )\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n Observable,\n combineLatest,\n distinctUntilKeyChanged,\n map\n} from \"rxjs\"\n\nimport { Header } from \"~/components\"\n\nimport { getElementOffset } from \"../../element\"\nimport { Viewport } from \"../_\"\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch options\n */\ninterface WatchOptions {\n viewport$: Observable /* Viewport observable */\n header$: Observable
/* Header observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch viewport relative to element\n *\n * @param el - Element\n * @param options - Options\n *\n * @returns Viewport observable\n */\nexport function watchViewportAt(\n el: HTMLElement, { viewport$, header$ }: WatchOptions\n): Observable {\n const size$ = viewport$\n .pipe(\n distinctUntilKeyChanged(\"size\")\n )\n\n /* Compute element offset */\n const offset$ = combineLatest([size$, header$])\n .pipe(\n map(() => getElementOffset(el))\n )\n\n /* Compute relative viewport, return hot observable */\n return combineLatest([header$, viewport$, offset$])\n .pipe(\n map(([{ height }, { offset, size }, { x, y }]) => ({\n offset: {\n x: offset.x - x,\n y: offset.y - y + height\n },\n size\n }))\n )\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n Observable,\n Subject,\n fromEvent,\n map,\n share,\n switchMap,\n tap,\n throttle\n} from \"rxjs\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Worker message\n */\nexport interface WorkerMessage {\n type: unknown /* Message type */\n data?: unknown /* Message data */\n}\n\n/**\n * Worker handler\n *\n * @template T - Message type\n */\nexport interface WorkerHandler<\n T extends WorkerMessage\n> {\n tx$: Subject /* Message transmission subject */\n rx$: Observable /* Message receive observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch options\n *\n * @template T - Worker message type\n */\ninterface WatchOptions {\n tx$: Observable /* Message transmission observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch a web worker\n *\n * This function returns an observable that sends all values emitted by the\n * message observable to the web worker. Web worker communication is expected\n * to be bidirectional (request-response) and synchronous. 
Messages that are\n * emitted during a pending request are throttled, the last one is emitted.\n *\n * @param worker - Web worker\n * @param options - Options\n *\n * @returns Worker message observable\n */\nexport function watchWorker(\n worker: Worker, { tx$ }: WatchOptions\n): Observable {\n\n /* Intercept messages from worker-like objects */\n const rx$ = fromEvent(worker, \"message\")\n .pipe(\n map(({ data }) => data as T)\n )\n\n /* Send and receive messages, return hot observable */\n return tx$\n .pipe(\n throttle(() => rx$, { leading: true, trailing: true }),\n tap(message => worker.postMessage(message)),\n switchMap(() => rx$),\n share()\n )\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
[Build artifact: source-map `sourcesContent` of the Material for MkDocs theme bundle (Copyright (c) 2016-2022 Martin Donath, MIT License). Contains escaped TypeScript modules — global configuration (`configuration`, `feature`, `translation`), component lookup (`getComponentElement`, `getComponentElements`), announcement bar (`watchAnnounce`, `mountAnnounce`), consent (`watchConsent`, `mountConsent`), code blocks (`watchCodeBlock`, `mountCodeBlock`), and template renderers (`renderTooltip`, `renderAnnotation`, `renderClipboardButton`, `renderSearchResultItem`, `renderSourceFacts`, `renderTabbedControl`, `renderTable`, `renderVersion`). The embedded JSX markup was stripped during extraction and is not recoverable here.]
- see https://bit.ly/3rL5u3f */\n const url = new URL(`../${version.version}/`, config.base)\n return (\n
  • \n \n {version.title}\n \n
  • \n )\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Render a version selector\n *\n * @param versions - Versions\n * @param active - Active version\n *\n * @returns Element\n */\nexport function renderVersionSelector(\n versions: Version[], active: Version\n): HTMLElement {\n return (\n
    \n \n {active.title}\n \n
      \n {versions.map(renderVersion)}\n
    \n
    \n )\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n Observable,\n Subject,\n animationFrameScheduler,\n auditTime,\n combineLatest,\n debounceTime,\n defer,\n delay,\n filter,\n finalize,\n fromEvent,\n map,\n merge,\n switchMap,\n take,\n takeLast,\n takeUntil,\n tap,\n throttleTime,\n withLatestFrom\n} from \"rxjs\"\n\nimport {\n ElementOffset,\n getActiveElement,\n getElementSize,\n watchElementContentOffset,\n watchElementFocus,\n watchElementOffset,\n watchElementVisibility\n} from \"~/browser\"\n\nimport { Component } from \"../../../_\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Annotation\n */\nexport interface Annotation {\n active: boolean /* Annotation is active */\n offset: ElementOffset /* Annotation offset 
*/\n}\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Mount options\n */\ninterface MountOptions {\n target$: Observable /* Location target observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch annotation\n *\n * @param el - Annotation element\n * @param container - Containing element\n *\n * @returns Annotation observable\n */\nexport function watchAnnotation(\n el: HTMLElement, container: HTMLElement\n): Observable {\n const offset$ = defer(() => combineLatest([\n watchElementOffset(el),\n watchElementContentOffset(container)\n ]))\n .pipe(\n map(([{ x, y }, scroll]): ElementOffset => {\n const { width, height } = getElementSize(el)\n return ({\n x: x - scroll.x + width / 2,\n y: y - scroll.y + height / 2\n })\n })\n )\n\n /* Actively watch annotation on focus */\n return watchElementFocus(el)\n .pipe(\n switchMap(active => offset$\n .pipe(\n map(offset => ({ active, offset })),\n take(+!active || Infinity)\n )\n )\n )\n}\n\n/**\n * Mount annotation\n *\n * @param el - Annotation element\n * @param container - Containing element\n * @param options - Options\n *\n * @returns Annotation component observable\n */\nexport function mountAnnotation(\n el: HTMLElement, container: HTMLElement, { target$ }: MountOptions\n): Observable> {\n const [tooltip, index] = Array.from(el.children)\n\n /* Mount component on subscription */\n return defer(() => {\n const push$ = new Subject()\n const done$ = push$.pipe(takeLast(1))\n push$.subscribe({\n\n /* Handle emission */\n next({ offset }) {\n el.style.setProperty(\"--md-tooltip-x\", `${offset.x}px`)\n el.style.setProperty(\"--md-tooltip-y\", `${offset.y}px`)\n },\n\n /* Handle complete */\n complete() {\n 
el.style.removeProperty(\"--md-tooltip-x\")\n el.style.removeProperty(\"--md-tooltip-y\")\n }\n })\n\n /* Start animation only when annotation is visible */\n watchElementVisibility(el)\n .pipe(\n takeUntil(done$)\n )\n .subscribe(visible => {\n el.toggleAttribute(\"data-md-visible\", visible)\n })\n\n /* Toggle tooltip presence to mitigate empty lines when copying */\n merge(\n push$.pipe(filter(({ active }) => active)),\n push$.pipe(debounceTime(250), filter(({ active }) => !active))\n )\n .subscribe({\n\n /* Handle emission */\n next({ active }) {\n if (active)\n el.prepend(tooltip)\n else\n tooltip.remove()\n },\n\n /* Handle complete */\n complete() {\n el.prepend(tooltip)\n }\n })\n\n /* Toggle tooltip visibility */\n push$\n .pipe(\n auditTime(16, animationFrameScheduler)\n )\n .subscribe(({ active }) => {\n tooltip.classList.toggle(\"md-tooltip--active\", active)\n })\n\n /* Track relative origin of tooltip */\n push$\n .pipe(\n throttleTime(125, animationFrameScheduler),\n filter(() => !!el.offsetParent),\n map(() => el.offsetParent!.getBoundingClientRect()),\n map(({ x }) => x)\n )\n .subscribe({\n\n /* Handle emission */\n next(origin) {\n if (origin)\n el.style.setProperty(\"--md-tooltip-0\", `${-origin}px`)\n else\n el.style.removeProperty(\"--md-tooltip-0\")\n },\n\n /* Handle complete */\n complete() {\n el.style.removeProperty(\"--md-tooltip-0\")\n }\n })\n\n /* Allow to copy link without scrolling to anchor */\n fromEvent(index, \"click\")\n .pipe(\n takeUntil(done$),\n filter(ev => !(ev.metaKey || ev.ctrlKey))\n )\n .subscribe(ev => ev.preventDefault())\n\n /* Allow to open link in new tab or blur on close */\n fromEvent(index, \"mousedown\")\n .pipe(\n takeUntil(done$),\n withLatestFrom(push$)\n )\n .subscribe(([ev, { active }]) => {\n\n /* Open in new tab */\n if (ev.button !== 0 || ev.metaKey || ev.ctrlKey) {\n ev.preventDefault()\n\n /* Close annotation */\n } else if (active) {\n ev.preventDefault()\n\n /* Focus parent annotation, if any */\n 
const parent = el.parentElement!.closest(\".md-annotation\")\n if (parent instanceof HTMLElement)\n parent.focus()\n else\n getActiveElement()?.blur()\n }\n })\n\n /* Open and focus annotation on location target */\n target$\n .pipe(\n takeUntil(done$),\n filter(target => target === tooltip),\n delay(125)\n )\n .subscribe(() => el.focus())\n\n /* Create and return component */\n return watchAnnotation(el, container)\n .pipe(\n tap(state => push$.next(state)),\n finalize(() => push$.complete()),\n map(state => ({ ref: el, ...state }))\n )\n })\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n EMPTY,\n Observable,\n Subject,\n defer,\n finalize,\n merge,\n share,\n takeLast,\n takeUntil\n} from \"rxjs\"\n\nimport {\n getElement,\n getElements,\n getOptionalElement\n} from \"~/browser\"\nimport { renderAnnotation } from \"~/templates\"\n\nimport { Component } from \"../../../_\"\nimport {\n Annotation,\n mountAnnotation\n} from \"../_\"\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Mount options\n */\ninterface MountOptions {\n target$: Observable /* Location target observable */\n print$: Observable /* Media print observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Helper functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Find all annotation markers in the given code block\n *\n * @param container - Containing element\n *\n * @returns Annotation markers\n */\nfunction findAnnotationMarkers(container: HTMLElement): Text[] {\n const markers: Text[] = []\n for (const el of getElements(\".c, .c1, .cm\", container)) {\n const nodes: Text[] = []\n\n /* Find all text nodes in current element */\n const it = document.createNodeIterator(el, NodeFilter.SHOW_TEXT)\n for (let node = it.nextNode(); node; node = it.nextNode())\n nodes.push(node as Text)\n\n /* Find all markers in each text node */\n for (let text of nodes) {\n let match: RegExpExecArray | null\n\n /* Split text at marker and add to list */\n while ((match = /(\\(\\d+\\))(!)?/.exec(text.textContent!))) {\n const [, id, force] = match\n if (typeof force === 
\"undefined\") {\n const marker = text.splitText(match.index)\n text = marker.splitText(id.length)\n markers.push(marker)\n\n /* Replace entire text with marker */\n } else {\n text.textContent = id\n markers.push(text)\n break\n }\n }\n }\n }\n return markers\n}\n\n/**\n * Swap the child nodes of two elements\n *\n * @param source - Source element\n * @param target - Target element\n */\nfunction swap(source: HTMLElement, target: HTMLElement): void {\n target.append(...Array.from(source.childNodes))\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Mount annotation list\n *\n * This function analyzes the containing code block and checks for markers\n * referring to elements in the given annotation list. If no markers are found,\n * the list is left untouched. Otherwise, list elements are rendered as\n * annotations inside the code block.\n *\n * @param el - Annotation list element\n * @param container - Containing element\n * @param options - Options\n *\n * @returns Annotation component observable\n */\nexport function mountAnnotationList(\n el: HTMLElement, container: HTMLElement, { target$, print$ }: MountOptions\n): Observable> {\n\n /* Compute prefix for tooltip anchors */\n const parent = container.closest(\"[id]\")\n const prefix = parent?.id\n\n /* Find and replace all markers with empty annotations */\n const annotations = new Map()\n for (const marker of findAnnotationMarkers(container)) {\n const [, id] = marker.textContent!.match(/\\((\\d+)\\)/)!\n if (getOptionalElement(`li:nth-child(${id})`, el)) {\n annotations.set(id, renderAnnotation(id, prefix))\n marker.replaceWith(annotations.get(id)!)\n }\n }\n\n /* Keep list if there are no annotations to render */\n if (annotations.size === 0)\n return EMPTY\n\n /* Mount component on subscription */\n return defer(() => {\n const done$ = new Subject()\n\n /* Retrieve 
container pairs for swapping */\n const pairs: [HTMLElement, HTMLElement][] = []\n for (const [id, annotation] of annotations)\n pairs.push([\n getElement(\".md-typeset\", annotation),\n getElement(`li:nth-child(${id})`, el)\n ])\n\n /* Handle print mode - see https://bit.ly/3rgPdpt */\n print$\n .pipe(\n takeUntil(done$.pipe(takeLast(1)))\n )\n .subscribe(active => {\n el.hidden = !active\n\n /* Show annotations in code block or list (print) */\n for (const [inner, child] of pairs)\n if (!active)\n swap(child, inner)\n else\n swap(inner, child)\n })\n\n /* Create and return component */\n return merge(...[...annotations]\n .map(([, annotation]) => (\n mountAnnotation(annotation, container, { target$ })\n ))\n )\n .pipe(\n finalize(() => done$.complete()),\n share()\n )\n })\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n Observable,\n map,\n of,\n shareReplay,\n tap\n} from \"rxjs\"\n\nimport { watchScript } from \"~/browser\"\nimport { h } from \"~/utilities\"\n\nimport { Component } from \"../../../_\"\n\nimport themeCSS from \"./index.css\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Mermaid diagram\n */\nexport interface Mermaid {}\n\n/* ----------------------------------------------------------------------------\n * Data\n * ------------------------------------------------------------------------- */\n\n/**\n * Mermaid instance observable\n */\nlet mermaid$: Observable\n\n/**\n * Global sequence number for diagrams\n */\nlet sequence = 0\n\n/* ----------------------------------------------------------------------------\n * Helper functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Fetch Mermaid script\n *\n * @returns Mermaid scripts observable\n */\nfunction fetchScripts(): Observable {\n return typeof mermaid === \"undefined\" || mermaid instanceof Element\n ? 
watchScript(\"https://unpkg.com/mermaid@9.1.7/dist/mermaid.min.js\")\n : of(undefined)\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Mount Mermaid diagram\n *\n * @param el - Code block element\n *\n * @returns Mermaid diagram component observable\n */\nexport function mountMermaid(\n el: HTMLElement\n): Observable> {\n el.classList.remove(\"mermaid\") // Hack: mitigate https://bit.ly/3CiN6Du\n mermaid$ ||= fetchScripts()\n .pipe(\n tap(() => mermaid.initialize({\n startOnLoad: false,\n themeCSS\n })),\n map(() => undefined),\n shareReplay(1)\n )\n\n /* Render diagram */\n mermaid$.subscribe(() => {\n el.classList.add(\"mermaid\") // Hack: mitigate https://bit.ly/3CiN6Du\n const id = `__mermaid_${sequence++}`\n const host = h(\"div\", { class: \"mermaid\" })\n mermaid.mermaidAPI.render(id, el.textContent, (svg: string) => {\n\n /* Create a shadow root and inject diagram */\n const shadow = host.attachShadow({ mode: \"closed\" })\n shadow.innerHTML = svg\n\n /* Replace code block with diagram */\n el.replaceWith(host)\n })\n })\n\n /* Create and return component */\n return mermaid$\n .pipe(\n map(() => ({ ref: el }))\n )\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT 
WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n Observable,\n Subject,\n defer,\n filter,\n finalize,\n map,\n merge,\n tap\n} from \"rxjs\"\n\nimport { Component } from \"../../_\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Details\n */\nexport interface Details {\n action: \"open\" | \"close\" /* Details state */\n reveal?: boolean /* Details is revealed */\n}\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch options\n */\ninterface WatchOptions {\n target$: Observable /* Location target observable */\n print$: Observable /* Media print observable */\n}\n\n/**\n * Mount options\n */\ninterface MountOptions {\n target$: Observable /* Location target observable */\n print$: Observable /* Media print observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch details\n *\n * @param el - Details element\n * @param options - Options\n *\n * @returns Details observable\n */\nexport function watchDetails(\n el: HTMLDetailsElement, { target$, print$ }: WatchOptions\n): Observable
    {\n let open = true\n return merge(\n\n /* Open and focus details on location target */\n target$\n .pipe(\n map(target => target.closest(\"details:not([open])\")!),\n filter(details => el === details),\n map(() => ({\n action: \"open\", reveal: true\n }) as Details)\n ),\n\n /* Open details on print and close afterwards */\n print$\n .pipe(\n filter(active => active || !open),\n tap(() => open = el.open),\n map(active => ({\n action: active ? \"open\" : \"close\"\n }) as Details)\n )\n )\n}\n\n/**\n * Mount details\n *\n * This function ensures that `details` tags are opened on anchor jumps and\n * prior to printing, so the whole content of the page is visible.\n *\n * @param el - Details element\n * @param options - Options\n *\n * @returns Details component observable\n */\nexport function mountDetails(\n el: HTMLDetailsElement, options: MountOptions\n): Observable> {\n return defer(() => {\n const push$ = new Subject
    ()\n push$.subscribe(({ action, reveal }) => {\n el.toggleAttribute(\"open\", action === \"open\")\n if (reveal)\n el.scrollIntoView()\n })\n\n /* Create and return component */\n return watchDetails(el, options)\n .pipe(\n tap(state => push$.next(state)),\n finalize(() => push$.complete()),\n map(state => ({ ref: el, ...state }))\n )\n })\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport { Observable, of } from \"rxjs\"\n\nimport { renderTable } from \"~/templates\"\nimport { h } from \"~/utilities\"\n\nimport { Component } from \"../../_\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Data table\n */\nexport interface DataTable {}\n\n/* ----------------------------------------------------------------------------\n * Data\n * ------------------------------------------------------------------------- */\n\n/**\n * Sentinel for replacement\n */\nconst sentinel = h(\"table\")\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Mount data table\n *\n * This function wraps a data table in another scrollable container, so it can\n * be smoothly scrolled on smaller screen sizes and won't break the layout.\n *\n * @param el - Data table element\n *\n * @returns Data table component observable\n */\nexport function mountDataTable(\n el: HTMLElement\n): Observable> {\n el.replaceWith(sentinel)\n sentinel.replaceWith(renderTable(el))\n\n /* Create and return component */\n return of({ ref: el })\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to 
permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n Observable,\n Subject,\n animationFrameScheduler,\n asyncScheduler,\n auditTime,\n combineLatest,\n defer,\n finalize,\n fromEvent,\n map,\n merge,\n skip,\n startWith,\n subscribeOn,\n takeLast,\n takeUntil,\n tap,\n withLatestFrom\n} from \"rxjs\"\n\nimport { feature } from \"~/_\"\nimport {\n Viewport,\n getElement,\n getElementContentOffset,\n getElementContentSize,\n getElementOffset,\n getElementSize,\n getElements,\n watchElementContentOffset,\n watchElementSize\n} from \"~/browser\"\nimport { renderTabbedControl } from \"~/templates\"\n\nimport { Component } from \"../../_\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Content tabs\n */\nexport interface ContentTabs {\n active: HTMLLabelElement /* Active tab label */\n}\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Mount options\n */\ninterface MountOptions {\n viewport$: Observable /* Viewport observable */\n}\n\n/* 
[generated build artifact — content omitted: JSON-escaped sourcemap `sourcesContent` for the Material for MkDocs theme bundle, containing the TypeScript sources of the content-tabs, dialog, header, header-title, main-area, color-palette, Clipboard.js, sitemap and instant-loading components (MIT-licensed, Copyright (c) 2016-2022 Martin Donath)]
.pipe(\n distinctUntilKeyChanged(\"pathname\"),\n switchMap(url => request(url.href)\n .pipe(\n catchError(() => {\n setLocation(url)\n return NEVER\n })\n )\n ),\n share()\n )\n\n /* Set new location via `history.pushState` */\n push$\n .pipe(\n sample(response$)\n )\n .subscribe(({ url }) => {\n history.pushState({}, \"\", `${url}`)\n })\n\n /* Parse and emit fetched document */\n const dom = new DOMParser()\n response$\n .pipe(\n switchMap(res => res.text()),\n map(res => dom.parseFromString(res, \"text/html\"))\n )\n .subscribe(document$)\n\n /* Replace meta tags and components */\n document$\n .pipe(\n skip(1)\n )\n .subscribe(replacement => {\n for (const selector of [\n\n /* Meta tags */\n \"title\",\n \"link[rel=canonical]\",\n \"meta[name=author]\",\n \"meta[name=description]\",\n\n /* Components */\n \"[data-md-component=announce]\",\n \"[data-md-component=container]\",\n \"[data-md-component=header-topic]\",\n \"[data-md-component=outdated]\",\n \"[data-md-component=logo]\",\n \"[data-md-component=skip]\",\n ...feature(\"navigation.tabs.sticky\")\n ? 
[\"[data-md-component=tabs]\"]\n : []\n ]) {\n const source = getOptionalElement(selector)\n const target = getOptionalElement(selector, replacement)\n if (\n typeof source !== \"undefined\" &&\n typeof target !== \"undefined\"\n ) {\n source.replaceWith(target)\n }\n }\n })\n\n /* Re-evaluate scripts */\n document$\n .pipe(\n skip(1),\n map(() => getComponentElement(\"container\")),\n switchMap(el => getElements(\"script\", el)),\n concatMap(el => {\n const script = h(\"script\")\n if (el.src) {\n for (const name of el.getAttributeNames())\n script.setAttribute(name, el.getAttribute(name)!)\n el.replaceWith(script)\n\n /* Complete when script is loaded */\n return new Observable(observer => {\n script.onload = () => observer.complete()\n })\n\n /* Complete immediately */\n } else {\n script.textContent = el.textContent\n el.replaceWith(script)\n return EMPTY\n }\n })\n )\n .subscribe()\n\n /* Emit history state change */\n merge(push$, pop$)\n .pipe(\n sample(document$)\n )\n .subscribe(({ url, offset }) => {\n if (url.hash && !offset) {\n setLocationHash(url.hash)\n } else {\n window.scrollTo(0, offset?.y || 0)\n }\n })\n\n /* Debounce update of viewport offset */\n viewport$\n .pipe(\n skipUntil(push$),\n debounceTime(250),\n distinctUntilKeyChanged(\"offset\")\n )\n .subscribe(({ offset }) => {\n history.replaceState(offset, \"\")\n })\n\n /* Set viewport offset from history */\n merge(push$, pop$)\n .pipe(\n bufferCount(2, 1),\n filter(([a, b]) => a.url.pathname === b.url.pathname),\n map(([, state]) => state)\n )\n .subscribe(({ offset }) => {\n window.scrollTo(0, offset?.y || 0)\n })\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * 
sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport escapeHTML from \"escape-html\"\n\nimport { SearchIndexDocument } from \"../_\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Search document\n */\nexport interface SearchDocument extends SearchIndexDocument {\n parent?: SearchIndexDocument /* Parent article */\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Search document mapping\n */\nexport type SearchDocumentMap = Map\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Create a search document mapping\n *\n * @param docs - Search index documents\n *\n * @returns Search document map\n */\nexport function setupSearchDocumentMap(\n docs: SearchIndexDocument[]\n): SearchDocumentMap {\n const documents = new Map()\n const parents = new Set()\n for (const doc of docs) {\n const [path, hash] = doc.location.split(\"#\")\n\n /* Extract location, title and tags */\n const location = doc.location\n const title = doc.title\n const 
tags = doc.tags\n\n /* Escape and cleanup text */\n const text = escapeHTML(doc.text)\n .replace(/\\s+(?=[,.:;!?])/g, \"\")\n .replace(/\\s+/g, \" \")\n\n /* Handle section */\n if (hash) {\n const parent = documents.get(path)!\n\n /* Ignore first section, override article */\n if (!parents.has(parent)) {\n parent.title = doc.title\n parent.text = text\n\n /* Remember that we processed the article */\n parents.add(parent)\n\n /* Add subsequent section */\n } else {\n documents.set(location, {\n location,\n title,\n text,\n parent\n })\n }\n\n /* Add article */\n } else {\n documents.set(location, {\n location,\n title,\n text,\n ...tags && { tags }\n })\n }\n }\n return documents\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport escapeHTML from \"escape-html\"\n\nimport { SearchIndexConfig } from \"../_\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Search highlight function\n *\n * @param value - Value\n *\n * @returns Highlighted value\n */\nexport type SearchHighlightFn = (value: string) => string\n\n/**\n * Search highlight factory function\n *\n * @param query - Query value\n *\n * @returns Search highlight function\n */\nexport type SearchHighlightFactoryFn = (query: string) => SearchHighlightFn\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Create a search highlighter\n *\n * @param config - Search index configuration\n * @param escape - Whether to escape HTML\n *\n * @returns Search highlight factory function\n */\nexport function setupSearchHighlighter(\n config: SearchIndexConfig, escape: boolean\n): SearchHighlightFactoryFn {\n const separator = new RegExp(config.separator, \"img\")\n const highlight = (_: unknown, data: string, term: string) => {\n return `${data}<mark data-md-highlight>${term}</mark>`\n }\n\n /* Return factory function */\n return (query: string) => {\n query = query\n .replace(/[\s*+\-:~^]+/g, \" \")\n .trim()\n\n /* Create search term match expression */\n const match = new RegExp(`(^|${config.separator})(${\n query\n .replace(/[|\\{}()[\]^$+*?.-]/g, \"\\$&\")\n .replace(separator, \"|\")\n })`, \"img\")\n\n /* Highlight string value */\n return value => (\n escape\n ? 
escapeHTML(value)\n : value\n )\n .replace(match, highlight)\n .replace(/<\/mark>(\s+)<mark[^>]*>/img, \"$1\")\n }\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Search transformation function\n *\n * @param value - Query value\n *\n * @returns Transformed query value\n */\nexport type SearchTransformFn = (value: string) => string\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Default transformation function\n *\n * 1. 
Search for terms in quotation marks and prepend a `+` modifier to denote\n * that the resulting document must contain all terms, converting the query\n * to an `AND` query (as opposed to the default `OR` behavior). While users\n * may expect terms enclosed in quotation marks to map to span queries, i.e.\n * for which order is important, Lunr.js doesn't support them, so the best\n * we can do is to convert the terms to an `AND` query.\n *\n * 2. Replace control characters which are not located at the beginning of the\n * query or preceded by white space, or are not followed by a non-whitespace\n * character or are at the end of the query string. Furthermore, filter\n * unmatched quotation marks.\n *\n * 3. Trim excess whitespace from left and right.\n *\n * @param query - Query value\n *\n * @returns Transformed query value\n */\nexport function defaultTransform(query: string): string {\n return query\n .split(/\"([^\"]+)\"/g) /* => 1 */\n .map((terms, index) => index & 1\n ? terms.replace(/^\\b|^(?![^\\x00-\\x7F]|$)|\\s+/g, \" +\")\n : terms\n )\n .join(\"\")\n .replace(/\"|(?:^|\\s+)[*+\\-:^~]+(?=\\s+|$)/g, \"\") /* => 2 */\n .trim() /* => 3 */\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A 
PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport { SearchIndex, SearchResult } from \"../../_\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Search message type\n */\nexport const enum SearchMessageType {\n SETUP, /* Search index setup */\n READY, /* Search index ready */\n QUERY, /* Search query */\n RESULT /* Search results */\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Message containing the data necessary to setup the search index\n */\nexport interface SearchSetupMessage {\n type: SearchMessageType.SETUP /* Message type */\n data: SearchIndex /* Message data */\n}\n\n/**\n * Message indicating the search index is ready\n */\nexport interface SearchReadyMessage {\n type: SearchMessageType.READY /* Message type */\n}\n\n/**\n * Message containing a search query\n */\nexport interface SearchQueryMessage {\n type: SearchMessageType.QUERY /* Message type */\n data: string /* Message data */\n}\n\n/**\n * Message containing results for a search query\n */\nexport interface SearchResultMessage {\n type: SearchMessageType.RESULT /* Message type */\n data: SearchResult /* Message data */\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Message exchanged with the search worker\n */\nexport type SearchMessage =\n | SearchSetupMessage\n | SearchReadyMessage\n | SearchQueryMessage\n | SearchResultMessage\n\n/* ----------------------------------------------------------------------------\n * Functions\n * 
------------------------------------------------------------------------- */\n\n/**\n * Type guard for search setup messages\n *\n * @param message - Search worker message\n *\n * @returns Test result\n */\nexport function isSearchSetupMessage(\n message: SearchMessage\n): message is SearchSetupMessage {\n return message.type === SearchMessageType.SETUP\n}\n\n/**\n * Type guard for search ready messages\n *\n * @param message - Search worker message\n *\n * @returns Test result\n */\nexport function isSearchReadyMessage(\n message: SearchMessage\n): message is SearchReadyMessage {\n return message.type === SearchMessageType.READY\n}\n\n/**\n * Type guard for search query messages\n *\n * @param message - Search worker message\n *\n * @returns Test result\n */\nexport function isSearchQueryMessage(\n message: SearchMessage\n): message is SearchQueryMessage {\n return message.type === SearchMessageType.QUERY\n}\n\n/**\n * Type guard for search result messages\n *\n * @param message - Search worker message\n *\n * @returns Test result\n */\nexport function isSearchResultMessage(\n message: SearchMessage\n): message is SearchResultMessage {\n return message.type === SearchMessageType.RESULT\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE 
WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n ObservableInput,\n Subject,\n from,\n map,\n share\n} from \"rxjs\"\n\nimport { configuration, feature, translation } from \"~/_\"\nimport { WorkerHandler, watchWorker } from \"~/browser\"\n\nimport { SearchIndex } from \"../../_\"\nimport {\n SearchOptions,\n SearchPipeline\n} from \"../../options\"\nimport {\n SearchMessage,\n SearchMessageType,\n SearchSetupMessage,\n isSearchResultMessage\n} from \"../message\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Search worker\n */\nexport type SearchWorker = WorkerHandler<SearchMessage>\n\n/* ----------------------------------------------------------------------------\n * Helper functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Set up search index\n *\n * @param data - Search index\n *\n * @returns Search index\n */\nfunction setupSearchIndex({ config, docs }: SearchIndex): SearchIndex {\n\n /* Override default language with value from translation */\n if (config.lang.length === 1 && config.lang[0] === \"en\")\n config.lang = [\n translation(\"search.config.lang\")\n ]\n\n /* Override default separator with value from translation */\n if (config.separator === \"[\\\s\\\-]+\")\n config.separator = translation(\"search.config.separator\")\n\n /* Set pipeline from translation */\n const pipeline = translation(\"search.config.pipeline\")\n .split(/\s*,\s*/)\n .filter(Boolean) as SearchPipeline\n\n /* Determine search options */\n const options: SearchOptions = {\n 
pipeline,\n suggestions: feature(\"search.suggest\")\n }\n\n /* Return search index after defaulting */\n return { config, docs, options }\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Set up search worker\n *\n * This function creates a web worker to set up and query the search index,\n * which is done using Lunr.js. The index must be passed as an observable to\n * enable hacks like _localsearch_ via search index embedding as JSON.\n *\n * @param url - Worker URL\n * @param index - Search index observable input\n *\n * @returns Search worker\n */\nexport function setupSearchWorker(\n url: string, index: ObservableInput\n): SearchWorker {\n const config = configuration()\n const worker = new Worker(url)\n\n /* Create communication channels and resolve relative links */\n const tx$ = new Subject()\n const rx$ = watchWorker(worker, { tx$ })\n .pipe(\n map(message => {\n if (isSearchResultMessage(message)) {\n for (const result of message.data.items)\n for (const document of result)\n document.location = `${new URL(document.location, config.base)}`\n }\n return message\n }),\n share()\n )\n\n /* Set up search index */\n from(index)\n .pipe(\n map(data => ({\n type: SearchMessageType.SETUP,\n data: setupSearchIndex(data)\n } as SearchSetupMessage))\n )\n .subscribe(tx$.next.bind(tx$))\n\n /* Return search worker */\n return { tx$, rx$ }\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the 
following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n EMPTY,\n Subject,\n catchError,\n combineLatest,\n filter,\n fromEvent,\n map,\n of,\n switchMap,\n withLatestFrom\n} from \"rxjs\"\n\nimport { configuration } from \"~/_\"\nimport {\n getElement,\n getLocation,\n requestJSON,\n setLocation\n} from \"~/browser\"\nimport { getComponentElements } from \"~/components\"\nimport {\n Version,\n renderVersionSelector\n} from \"~/templates\"\n\nimport { fetchSitemap } from \"../sitemap\"\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Setup options\n */\ninterface SetupOptions {\n document$: Subject /* Document subject */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Set up version selector\n *\n * @param options - Options\n */\nexport function setupVersionSelector(\n { document$ }: SetupOptions\n): void {\n const config = configuration()\n const versions$ = requestJSON(\n new URL(\"../versions.json\", config.base)\n )\n .pipe(\n catchError(() => EMPTY) // @todo refactor instant loading\n )\n\n /* Determine current version */\n const current$ = versions$\n .pipe(\n 
map(versions => {\n const [, current] = config.base.match(/([^/]+)\\/?$/)!\n return versions.find(({ version, aliases }) => (\n version === current || aliases.includes(current)\n )) || versions[0]\n })\n )\n\n /* Intercept inter-version navigation */\n versions$\n .pipe(\n map(versions => new Map(versions.map(version => [\n `${new URL(`../${version.version}/`, config.base)}`,\n version\n ]))),\n switchMap(urls => fromEvent(document.body, \"click\")\n .pipe(\n filter(ev => !ev.metaKey && !ev.ctrlKey),\n withLatestFrom(current$),\n switchMap(([ev, current]) => {\n if (ev.target instanceof Element) {\n const el = ev.target.closest(\"a\")\n if (el && !el.target && urls.has(el.href)) {\n const url = el.href\n // This is a temporary hack to detect if a version inside the\n // version selector or on another part of the site was clicked.\n // If we're inside the version selector, we definitely want to\n // find the same page, as we might have different deployments\n // due to aliases. However, if we're outside the version\n // selector, we must abort here, because we might otherwise\n // interfere with instant loading. We need to refactor this\n // at some point together with instant loading.\n //\n // See https://github.com/squidfunk/mkdocs-material/issues/4012\n if (!ev.target.closest(\".md-version\")) {\n const version = urls.get(url)!\n if (version === current)\n return EMPTY\n }\n ev.preventDefault()\n return of(url)\n }\n }\n return EMPTY\n }),\n switchMap(url => {\n const { version } = urls.get(url)!\n return fetchSitemap(new URL(url))\n .pipe(\n map(sitemap => {\n const location = getLocation()\n const path = location.href.replace(config.base, \"\")\n return sitemap.includes(path.split(\"#\")[0])\n ? 
new URL(`../${version}/${path}`, config.base)\n : new URL(url)\n })\n )\n })\n )\n )\n )\n .subscribe(url => setLocation(url))\n\n /* Render version selector and warning */\n combineLatest([versions$, current$])\n .subscribe(([versions, current]) => {\n const topic = getElement(\".md-header__topic\")\n topic.appendChild(renderVersionSelector(versions, current))\n })\n\n /* Integrate outdated version banner with instant loading */\n document$.pipe(switchMap(() => current$))\n .subscribe(current => {\n\n /* Check if version state was already determined */\n let outdated = __md_get(\"__outdated\", sessionStorage)\n if (outdated === null) {\n const latest = config.version?.default || \"latest\"\n outdated = !current.aliases.includes(latest)\n\n /* Persist version state in session storage */\n __md_set(\"__outdated\", outdated, sessionStorage)\n }\n\n /* Unhide outdated version banner */\n if (outdated)\n for (const warning of getComponentElements(\"outdated\"))\n warning.hidden = false\n })\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n Observable,\n Subject,\n combineLatest,\n delay,\n distinctUntilChanged,\n distinctUntilKeyChanged,\n filter,\n finalize,\n fromEvent,\n map,\n merge,\n share,\n shareReplay,\n startWith,\n take,\n takeLast,\n takeUntil,\n tap\n} from \"rxjs\"\n\nimport { translation } from \"~/_\"\nimport {\n getLocation,\n setToggle,\n watchElementFocus,\n watchToggle\n} from \"~/browser\"\nimport {\n SearchMessageType,\n SearchQueryMessage,\n SearchWorker,\n defaultTransform,\n isSearchReadyMessage\n} from \"~/integrations\"\n\nimport { Component } from \"../../_\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Search query\n */\nexport interface SearchQuery {\n value: string /* Query value */\n focus: boolean /* Query focus */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch search query\n *\n * Note that the focus event which triggers re-reading the current query value\n * is delayed by `1ms` so the input's empty state is allowed to propagate.\n *\n * @param el - Search query element\n * @param worker - Search worker\n *\n * @returns Search query observable\n */\nexport function watchSearchQuery(\n el: HTMLInputElement, { rx$ }: SearchWorker\n): Observable {\n const fn = __search?.transform || defaultTransform\n\n /* Immediately show search dialog */\n const { searchParams } = getLocation()\n if (searchParams.has(\"q\"))\n setToggle(\"search\", true)\n\n /* Intercept query parameter (deep link) */\n const 
param$ = rx$\n .pipe(\n filter(isSearchReadyMessage),\n take(1),\n map(() => searchParams.get(\"q\") || \"\")\n )\n\n /* Remove query parameter when search is closed */\n watchToggle(\"search\")\n .pipe(\n filter(active => !active),\n take(1)\n )\n .subscribe(() => {\n const url = new URL(location.href)\n url.searchParams.delete(\"q\")\n history.replaceState({}, \"\", `${url}`)\n })\n\n /* Set query from parameter */\n param$.subscribe(value => { // TODO: not ideal - find a better way\n if (value) {\n el.value = value\n el.focus()\n }\n })\n\n /* Intercept focus and input events */\n const focus$ = watchElementFocus(el)\n const value$ = merge(\n fromEvent(el, \"keyup\"),\n fromEvent(el, \"focus\").pipe(delay(1)),\n param$\n )\n .pipe(\n map(() => fn(el.value)),\n startWith(\"\"),\n distinctUntilChanged(),\n )\n\n /* Combine into single observable */\n return combineLatest([value$, focus$])\n .pipe(\n map(([value, focus]) => ({ value, focus })),\n shareReplay(1)\n )\n}\n\n/**\n * Mount search query\n *\n * @param el - Search query element\n * @param worker - Search worker\n *\n * @returns Search query component observable\n */\nexport function mountSearchQuery(\n el: HTMLInputElement, { tx$, rx$ }: SearchWorker\n): Observable> {\n const push$ = new Subject()\n const done$ = push$.pipe(takeLast(1))\n\n /* Handle value changes */\n push$\n .pipe(\n distinctUntilKeyChanged(\"value\"),\n map(({ value }): SearchQueryMessage => ({\n type: SearchMessageType.QUERY,\n data: value\n }))\n )\n .subscribe(tx$.next.bind(tx$))\n\n /* Handle focus changes */\n push$\n .pipe(\n distinctUntilKeyChanged(\"focus\")\n )\n .subscribe(({ focus }) => {\n if (focus) {\n setToggle(\"search\", focus)\n el.placeholder = \"\"\n } else {\n el.placeholder = translation(\"search.placeholder\")\n }\n })\n\n /* Handle reset */\n fromEvent(el.form!, \"reset\")\n .pipe(\n takeUntil(done$)\n )\n .subscribe(() => el.focus())\n\n /* Create and return component */\n return watchSearchQuery(el, { tx$, rx$ 
})\n .pipe(\n tap(state => push$.next(state)),\n finalize(() => push$.complete()),\n map(state => ({ ref: el, ...state })),\n share()\n )\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n Observable,\n Subject,\n bufferCount,\n filter,\n finalize,\n map,\n merge,\n of,\n skipUntil,\n switchMap,\n take,\n tap,\n withLatestFrom,\n zipWith\n} from \"rxjs\"\n\nimport { translation } from \"~/_\"\nimport {\n getElement,\n watchElementBoundary\n} from \"~/browser\"\nimport {\n SearchResult,\n SearchWorker,\n isSearchReadyMessage,\n isSearchResultMessage\n} from \"~/integrations\"\nimport { renderSearchResultItem } from \"~/templates\"\nimport { round } from \"~/utilities\"\n\nimport { Component } from \"../../_\"\nimport { SearchQuery } from \"../query\"\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Mount options\n */\ninterface MountOptions {\n query$: Observable /* Search query observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Mount search result list\n *\n * This function performs a lazy rendering of the search results, depending on\n * the vertical offset of the search result container.\n *\n * @param el - Search result list element\n * @param worker - Search worker\n * @param options - Options\n *\n * @returns Search result list component observable\n */\nexport function mountSearchResult(\n el: HTMLElement, { rx$ }: SearchWorker, { query$ }: MountOptions\n): Observable> {\n const push$ = new Subject()\n const boundary$ = watchElementBoundary(el.parentElement!)\n .pipe(\n filter(Boolean)\n )\n\n /* Retrieve nested components */\n const meta = getElement(\":scope > 
:first-child\", el)\n const list = getElement(\":scope > :last-child\", el)\n\n /* Wait until search is ready */\n const ready$ = rx$\n .pipe(\n filter(isSearchReadyMessage),\n take(1)\n )\n\n /* Update search result metadata */\n push$\n .pipe(\n withLatestFrom(query$),\n skipUntil(ready$)\n )\n .subscribe(([{ items }, { value }]) => {\n if (value) {\n switch (items.length) {\n\n /* No results */\n case 0:\n meta.textContent = translation(\"search.result.none\")\n break\n\n /* One result */\n case 1:\n meta.textContent = translation(\"search.result.one\")\n break\n\n /* Multiple result */\n default:\n meta.textContent = translation(\n \"search.result.other\",\n round(items.length)\n )\n }\n } else {\n meta.textContent = translation(\"search.result.placeholder\")\n }\n })\n\n /* Update search result list */\n push$\n .pipe(\n tap(() => list.innerHTML = \"\"),\n switchMap(({ items }) => merge(\n of(...items.slice(0, 10)),\n of(...items.slice(10))\n .pipe(\n bufferCount(4),\n zipWith(boundary$),\n switchMap(([chunk]) => chunk)\n )\n ))\n )\n .subscribe(result => list.appendChild(\n renderSearchResultItem(result)\n ))\n\n /* Filter search result message */\n const result$ = rx$\n .pipe(\n filter(isSearchResultMessage),\n map(({ data }) => data)\n )\n\n /* Create and return component */\n return result$\n .pipe(\n tap(state => push$.next(state)),\n finalize(() => push$.complete()),\n map(state => ({ ref: el, ...state }))\n )\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice 
and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n Observable,\n Subject,\n finalize,\n fromEvent,\n map,\n tap\n} from \"rxjs\"\n\nimport { getLocation } from \"~/browser\"\n\nimport { Component } from \"../../_\"\nimport { SearchQuery } from \"../query\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Search sharing\n */\nexport interface SearchShare {\n url: URL /* Deep link for sharing */\n}\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch options\n */\ninterface WatchOptions {\n query$: Observable /* Search query observable */\n}\n\n/**\n * Mount options\n */\ninterface MountOptions {\n query$: Observable /* Search query observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Mount search sharing\n *\n * @param _el - Search sharing element\n * @param options - Options\n *\n * @returns Search sharing observable\n */\nexport function watchSearchShare(\n _el: HTMLElement, { query$ }: WatchOptions\n): Observable {\n return query$\n .pipe(\n map(({ value }) => {\n const url = 
getLocation()\n url.hash = \"\"\n url.searchParams.delete(\"h\")\n url.searchParams.set(\"q\", value)\n return { url }\n })\n )\n}\n\n/**\n * Mount search sharing\n *\n * @param el - Search sharing element\n * @param options - Options\n *\n * @returns Search sharing component observable\n */\nexport function mountSearchShare(\n el: HTMLAnchorElement, options: MountOptions\n): Observable> {\n const push$ = new Subject()\n push$.subscribe(({ url }) => {\n el.setAttribute(\"data-clipboard-text\", el.href)\n el.href = `${url}`\n })\n\n /* Prevent following of link */\n fromEvent(el, \"click\")\n .subscribe(ev => ev.preventDefault())\n\n /* Create and return component */\n return watchSearchShare(el, options)\n .pipe(\n tap(state => push$.next(state)),\n finalize(() => push$.complete()),\n map(state => ({ ref: el, ...state }))\n )\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n Observable,\n Subject,\n asyncScheduler,\n combineLatestWith,\n distinctUntilChanged,\n filter,\n finalize,\n fromEvent,\n map,\n merge,\n observeOn,\n tap\n} from \"rxjs\"\n\nimport { Keyboard } from \"~/browser\"\nimport {\n SearchResult,\n SearchWorker,\n isSearchResultMessage\n} from \"~/integrations\"\n\nimport { Component, getComponentElement } from \"../../_\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Search suggestions\n */\nexport interface SearchSuggest {}\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Mount options\n */\ninterface MountOptions {\n keyboard$: Observable /* Keyboard observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Mount search suggestions\n *\n * This function will perform a lazy rendering of the search results, depending\n * on the vertical offset of the search result container.\n *\n * @param el - Search result list element\n * @param worker - Search worker\n * @param options - Options\n *\n * @returns Search result list component observable\n */\nexport function mountSearchSuggest(\n el: HTMLElement, { rx$ }: SearchWorker, { keyboard$ }: MountOptions\n): Observable> {\n const push$ = new Subject()\n\n /* Retrieve query component and track all changes */\n const query = getComponentElement(\"search-query\")\n 
const query$ = merge(\n fromEvent(query, \"keydown\"),\n fromEvent(query, \"focus\")\n )\n .pipe(\n observeOn(asyncScheduler),\n map(() => query.value),\n distinctUntilChanged(),\n )\n\n /* Update search suggestions */\n push$\n .pipe(\n combineLatestWith(query$),\n map(([{ suggestions }, value]) => {\n const words = value.split(/([\\s-]+)/)\n if (suggestions?.length && words[words.length - 1]) {\n const last = suggestions[suggestions.length - 1]\n if (last.startsWith(words[words.length - 1]))\n words[words.length - 1] = last\n } else {\n words.length = 0\n }\n return words\n })\n )\n .subscribe(words => el.innerHTML = words\n .join(\"\")\n .replace(/\\s/g, \" \")\n )\n\n /* Set up search keyboard handlers */\n keyboard$\n .pipe(\n filter(({ mode }) => mode === \"search\")\n )\n .subscribe(key => {\n switch (key.type) {\n\n /* Right arrow: accept current suggestion */\n case \"ArrowRight\":\n if (\n el.innerText.length &&\n query.selectionStart === query.value.length\n )\n query.value = el.innerText\n break\n }\n })\n\n /* Filter search result message */\n const result$ = rx$\n .pipe(\n filter(isSearchResultMessage),\n map(({ data }) => data)\n )\n\n /* Create and return component */\n return result$\n .pipe(\n tap(state => push$.next(state)),\n finalize(() => push$.complete()),\n map(() => ({ ref: el }))\n )\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE 
SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n NEVER,\n Observable,\n ObservableInput,\n filter,\n merge,\n mergeWith,\n sample,\n take\n} from \"rxjs\"\n\nimport { configuration } from \"~/_\"\nimport {\n Keyboard,\n getActiveElement,\n getElements,\n setToggle\n} from \"~/browser\"\nimport {\n SearchIndex,\n SearchResult,\n isSearchQueryMessage,\n isSearchReadyMessage,\n setupSearchWorker\n} from \"~/integrations\"\n\nimport {\n Component,\n getComponentElement,\n getComponentElements\n} from \"../../_\"\nimport {\n SearchQuery,\n mountSearchQuery\n} from \"../query\"\nimport { mountSearchResult } from \"../result\"\nimport {\n SearchShare,\n mountSearchShare\n} from \"../share\"\nimport {\n SearchSuggest,\n mountSearchSuggest\n} from \"../suggest\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Search\n */\nexport type Search =\n | SearchQuery\n | SearchResult\n | SearchShare\n | SearchSuggest\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Mount options\n */\ninterface MountOptions {\n index$: ObservableInput /* Search index observable */\n keyboard$: Observable /* Keyboard observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * 
------------------------------------------------------------------------- */\n\n/**\n * Mount search\n *\n * This function sets up the search functionality, including the underlying\n * web worker and all keyboard bindings.\n *\n * @param el - Search element\n * @param options - Options\n *\n * @returns Search component observable\n */\nexport function mountSearch(\n el: HTMLElement, { index$, keyboard$ }: MountOptions\n): Observable> {\n const config = configuration()\n try {\n const url = __search?.worker || config.search\n const worker = setupSearchWorker(url, index$)\n\n /* Retrieve query and result components */\n const query = getComponentElement(\"search-query\", el)\n const result = getComponentElement(\"search-result\", el)\n\n /* Re-emit query when search is ready */\n const { tx$, rx$ } = worker\n tx$\n .pipe(\n filter(isSearchQueryMessage),\n sample(rx$.pipe(filter(isSearchReadyMessage))),\n take(1)\n )\n .subscribe(tx$.next.bind(tx$))\n\n /* Set up search keyboard handlers */\n keyboard$\n .pipe(\n filter(({ mode }) => mode === \"search\")\n )\n .subscribe(key => {\n const active = getActiveElement()\n switch (key.type) {\n\n /* Enter: go to first (best) result */\n case \"Enter\":\n if (active === query) {\n const anchors = new Map()\n for (const anchor of getElements(\n \":first-child [href]\", result\n )) {\n const article = anchor.firstElementChild!\n anchors.set(anchor, parseFloat(\n article.getAttribute(\"data-md-score\")!\n ))\n }\n\n /* Go to result with highest score, if any */\n if (anchors.size) {\n const [[best]] = [...anchors].sort(([, a], [, b]) => b - a)\n best.click()\n }\n\n /* Otherwise omit form submission */\n key.claim()\n }\n break\n\n /* Escape or Tab: close search */\n case \"Escape\":\n case \"Tab\":\n setToggle(\"search\", false)\n query.blur()\n break\n\n /* Vertical arrows: select previous or next search result */\n case \"ArrowUp\":\n case \"ArrowDown\":\n if (typeof active === \"undefined\") {\n query.focus()\n } else {\n 
const els = [query, ...getElements(\n \":not(details) > [href], summary, details[open] [href]\",\n result\n )]\n const i = Math.max(0, (\n Math.max(0, els.indexOf(active)) + els.length + (\n key.type === \"ArrowUp\" ? -1 : +1\n )\n ) % els.length)\n els[i].focus()\n }\n\n /* Prevent scrolling of page */\n key.claim()\n break\n\n /* All other keys: hand to search query */\n default:\n if (query !== getActiveElement())\n query.focus()\n }\n })\n\n /* Set up global keyboard handlers */\n keyboard$\n .pipe(\n filter(({ mode }) => mode === \"global\"),\n )\n .subscribe(key => {\n switch (key.type) {\n\n /* Open search and select query */\n case \"f\":\n case \"s\":\n case \"/\":\n query.focus()\n query.select()\n\n /* Prevent scrolling of page */\n key.claim()\n break\n }\n })\n\n /* Create and return component */\n const query$ = mountSearchQuery(query, worker)\n const result$ = mountSearchResult(result, worker, { query$ })\n return merge(query$, result$)\n .pipe(\n mergeWith(\n\n /* Search sharing */\n ...getComponentElements(\"search-share\", el)\n .map(child => mountSearchShare(child, { query$ })),\n\n /* Search suggestions */\n ...getComponentElements(\"search-suggest\", el)\n .map(child => mountSearchSuggest(child, worker, { keyboard$ }))\n )\n )\n\n /* Gracefully handle broken search */\n } catch (err) {\n el.hidden = true\n return NEVER\n }\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial 
portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n Observable,\n ObservableInput,\n combineLatest,\n filter,\n map,\n startWith\n} from \"rxjs\"\n\nimport { getLocation } from \"~/browser\"\nimport {\n SearchIndex,\n setupSearchHighlighter\n} from \"~/integrations\"\nimport { h } from \"~/utilities\"\n\nimport { Component } from \"../../_\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Search highlighting\n */\nexport interface SearchHighlight {\n nodes: Map /* Map of replacements */\n}\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Mount options\n */\ninterface MountOptions {\n index$: ObservableInput /* Search index observable */\n location$: Observable /* Location observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Mount search highlighting\n *\n * @param el - Content element\n * @param options - Options\n *\n * @returns Search highlighting component observable\n */\nexport function mountSearchHiglight(\n el: HTMLElement, { index$, location$ }: MountOptions\n): Observable> {\n return combineLatest([\n index$,\n location$\n .pipe(\n 
startWith(getLocation()),\n filter(url => !!url.searchParams.get(\"h\"))\n )\n ])\n .pipe(\n map(([index, url]) => setupSearchHighlighter(index.config, true)(\n url.searchParams.get(\"h\")!\n )),\n map(fn => {\n const nodes = new Map()\n\n /* Traverse text nodes and collect matches */\n const it = document.createNodeIterator(el, NodeFilter.SHOW_TEXT)\n for (let node = it.nextNode(); node; node = it.nextNode()) {\n if (node.parentElement?.offsetHeight) {\n const original = node.textContent!\n const replaced = fn(original)\n if (replaced.length > original.length)\n nodes.set(node as ChildNode, replaced)\n }\n }\n\n /* Replace original nodes with matches */\n for (const [node, text] of nodes) {\n const { childNodes } = h(\"span\", null, text)\n node.replaceWith(...Array.from(childNodes))\n }\n\n /* Return component */\n return { ref: el, nodes }\n })\n )\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n Observable,\n Subject,\n animationFrameScheduler,\n auditTime,\n combineLatest,\n defer,\n distinctUntilChanged,\n finalize,\n map,\n observeOn,\n take,\n tap,\n withLatestFrom\n} from \"rxjs\"\n\nimport {\n Viewport,\n getElement,\n getElementContainer,\n getElementOffset,\n getElementSize,\n getElements\n} from \"~/browser\"\n\nimport { Component } from \"../_\"\nimport { Header } from \"../header\"\nimport { Main } from \"../main\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Sidebar\n */\nexport interface Sidebar {\n height: number /* Sidebar height */\n locked: boolean /* Sidebar is locked */\n}\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch options\n */\ninterface WatchOptions {\n viewport$: Observable /* Viewport observable */\n main$: Observable
    /* Main area observable */\n}\n\n/**\n * Mount options\n */\ninterface MountOptions {\n viewport$: Observable /* Viewport observable */\n header$: Observable
    /* Header observable */\n main$: Observable
    /* Main area observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch sidebar\n *\n * This function returns an observable that computes the visual parameters of\n * the sidebar which depends on the vertical viewport offset, as well as the\n * height of the main area. When the page is scrolled beyond the header, the\n * sidebar is locked and fills the remaining space.\n *\n * @param el - Sidebar element\n * @param options - Options\n *\n * @returns Sidebar observable\n */\nexport function watchSidebar(\n el: HTMLElement, { viewport$, main$ }: WatchOptions\n): Observable {\n const parent = el.parentElement!\n const adjust =\n parent.offsetTop -\n parent.parentElement!.offsetTop\n\n /* Compute the sidebar's available height and if it should be locked */\n return combineLatest([main$, viewport$])\n .pipe(\n map(([{ offset, height }, { offset: { y } }]) => {\n height = height\n + Math.min(adjust, Math.max(0, y - offset))\n - adjust\n return {\n height,\n locked: y >= offset + adjust\n }\n }),\n distinctUntilChanged((a, b) => (\n a.height === b.height &&\n a.locked === b.locked\n ))\n )\n}\n\n/**\n * Mount sidebar\n *\n * This function doesn't set the height of the actual sidebar, but of its first\n * child \u2013 the `.md-sidebar__scrollwrap` element in order to mitigiate jittery\n * sidebars when the footer is scrolled into view. At some point we switched\n * from `absolute` / `fixed` positioning to `sticky` positioning, significantly\n * reducing jitter in some browsers (respectively Firefox and Safari) when\n * scrolling from the top. 
However, top-aligned sticky positioning means that\n * the sidebar snaps to the bottom when the end of the container is reached.\n * This is what leads to the mentioned jitter, as the sidebar's height may be\n * updated too slowly.\n *\n * This behaviour can be mitigiated by setting the height of the sidebar to `0`\n * while preserving the padding, and the height on its first element.\n *\n * @param el - Sidebar element\n * @param options - Options\n *\n * @returns Sidebar component observable\n */\nexport function mountSidebar(\n el: HTMLElement, { header$, ...options }: MountOptions\n): Observable> {\n const inner = getElement(\".md-sidebar__scrollwrap\", el)\n const { y } = getElementOffset(inner)\n return defer(() => {\n const push$ = new Subject()\n push$\n .pipe(\n auditTime(0, animationFrameScheduler),\n withLatestFrom(header$)\n )\n .subscribe({\n\n /* Handle emission */\n next([{ height }, { height: offset }]) {\n inner.style.height = `${height - 2 * y}px`\n el.style.top = `${offset}px`\n },\n\n /* Handle complete */\n complete() {\n inner.style.height = \"\"\n el.style.top = \"\"\n }\n })\n\n /* Bring active item into view on initial load */\n push$\n .pipe(\n observeOn(animationFrameScheduler),\n take(1)\n )\n .subscribe(() => {\n for (const item of getElements(\".md-nav__link--active[href]\", el)) {\n const container = getElementContainer(item)\n if (typeof container !== \"undefined\") {\n const offset = item.offsetTop - container.offsetTop\n const { height } = getElementSize(container)\n container.scrollTo({\n top: offset - height / 2\n })\n }\n }\n })\n\n /* Create and return component */\n return watchSidebar(el, options)\n .pipe(\n tap(state => push$.next(state)),\n finalize(() => push$.complete()),\n map(state => ({ ref: el, ...state }))\n )\n })\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the 
\"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport { Repo, User } from \"github-types\"\nimport {\n EMPTY,\n Observable,\n catchError,\n defaultIfEmpty,\n map,\n zip\n} from \"rxjs\"\n\nimport { requestJSON } from \"~/browser\"\n\nimport { SourceFacts } from \"../_\"\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * GitHub release (partial)\n */\ninterface Release {\n tag_name: string /* Tag name */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Fetch GitHub repository facts\n *\n * @param user - GitHub user or organization\n * @param repo - GitHub repository\n *\n * @returns Repository facts observable\n */\nexport function fetchSourceFactsFromGitHub(\n user: string, repo?: string\n): Observable {\n if (typeof repo !== \"undefined\") {\n const url = 
`https://api.github.com/repos/${user}/${repo}`\n return zip(\n\n /* Fetch version */\n requestJSON(`${url}/releases/latest`)\n .pipe(\n catchError(() => EMPTY), // @todo refactor instant loading\n map(release => ({\n version: release.tag_name\n })),\n defaultIfEmpty({})\n ),\n\n /* Fetch stars and forks */\n requestJSON(url)\n .pipe(\n catchError(() => EMPTY), // @todo refactor instant loading\n map(info => ({\n stars: info.stargazers_count,\n forks: info.forks_count\n })),\n defaultIfEmpty({})\n )\n )\n .pipe(\n map(([release, info]) => ({ ...release, ...info }))\n )\n\n /* User or organization */\n } else {\n const url = `https://api.github.com/users/${user}`\n return requestJSON(url)\n .pipe(\n map(info => ({\n repositories: info.public_repos\n })),\n defaultIfEmpty({})\n )\n }\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport { ProjectSchema } from \"gitlab\"\nimport {\n EMPTY,\n Observable,\n catchError,\n defaultIfEmpty,\n map\n} from \"rxjs\"\n\nimport { requestJSON } from \"~/browser\"\n\nimport { SourceFacts } from \"../_\"\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Fetch GitLab repository facts\n *\n * @param base - GitLab base\n * @param project - GitLab project\n *\n * @returns Repository facts observable\n */\nexport function fetchSourceFactsFromGitLab(\n base: string, project: string\n): Observable {\n const url = `https://${base}/api/v4/projects/${encodeURIComponent(project)}`\n return requestJSON(url)\n .pipe(\n catchError(() => EMPTY), // @todo refactor instant loading\n map(({ star_count, forks_count }) => ({\n stars: star_count,\n forks: forks_count\n })),\n defaultIfEmpty({})\n )\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT 
LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport { EMPTY, Observable } from \"rxjs\"\n\nimport { fetchSourceFactsFromGitHub } from \"../github\"\nimport { fetchSourceFactsFromGitLab } from \"../gitlab\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Repository facts for repositories\n */\nexport interface RepositoryFacts {\n stars?: number /* Number of stars */\n forks?: number /* Number of forks */\n version?: string /* Latest version */\n}\n\n/**\n * Repository facts for organizations\n */\nexport interface OrganizationFacts {\n repositories?: number /* Number of repositories */\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Repository facts\n */\nexport type SourceFacts =\n | RepositoryFacts\n | OrganizationFacts\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Fetch repository facts\n *\n * @param url - Repository URL\n *\n * @returns Repository facts observable\n */\nexport function fetchSourceFacts(\n url: string\n): Observable {\n const [type] = url.match(/(git(?:hub|lab))/i) || []\n switch (type.toLowerCase()) {\n\n /* GitHub repository */\n case \"github\":\n const [, user, repo] = url.match(/^.+github\\.com\\/([^/]+)\\/?([^/]+)?/i)!\n return fetchSourceFactsFromGitHub(user, repo)\n\n /* GitLab repository */\n case \"gitlab\":\n const [, base, slug] = 
url.match(/^.+?([^/]*gitlab[^/]+)\\/(.+?)\\/?$/i)!\n return fetchSourceFactsFromGitLab(base, slug)\n\n /* Everything else */\n default:\n return EMPTY\n }\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n EMPTY,\n Observable,\n Subject,\n catchError,\n defer,\n filter,\n finalize,\n map,\n of,\n shareReplay,\n tap\n} from \"rxjs\"\n\nimport { getElement } from \"~/browser\"\nimport { ConsentDefaults } from \"~/components/consent\"\nimport { renderSourceFacts } from \"~/templates\"\n\nimport {\n Component,\n getComponentElements\n} from \"../../_\"\nimport {\n SourceFacts,\n fetchSourceFacts\n} from \"../facts\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Repository information\n */\nexport interface Source {\n facts: SourceFacts /* Repository facts */\n}\n\n/* ----------------------------------------------------------------------------\n * Data\n * ------------------------------------------------------------------------- */\n\n/**\n * Repository information observable\n */\nlet fetch$: Observable\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch repository information\n *\n * This function tries to read the repository facts from session storage, and\n * if unsuccessful, fetches them from the underlying provider.\n *\n * @param el - Repository information element\n *\n * @returns Repository information observable\n */\nexport function watchSource(\n el: HTMLAnchorElement\n): Observable {\n return fetch$ ||= defer(() => {\n const cached = __md_get(\"__source\", sessionStorage)\n if (cached) {\n return of(cached)\n } else {\n\n /* Check if consent is configured and was given */\n const els = 
getComponentElements(\"consent\")\n if (els.length) {\n const consent = __md_get(\"__consent\")\n if (!(consent && consent.github))\n return EMPTY\n }\n\n /* Fetch repository facts */\n return fetchSourceFacts(el.href)\n .pipe(\n tap(facts => __md_set(\"__source\", facts, sessionStorage))\n )\n }\n })\n .pipe(\n catchError(() => EMPTY),\n filter(facts => Object.keys(facts).length > 0),\n map(facts => ({ facts })),\n shareReplay(1)\n )\n}\n\n/**\n * Mount repository information\n *\n * @param el - Repository information element\n *\n * @returns Repository information component observable\n */\nexport function mountSource(\n el: HTMLAnchorElement\n): Observable> {\n const inner = getElement(\":scope > :last-child\", el)\n return defer(() => {\n const push$ = new Subject()\n push$.subscribe(({ facts }) => {\n inner.appendChild(renderSourceFacts(facts))\n inner.classList.add(\"md-source__repository--active\")\n })\n\n /* Create and return component */\n return watchSource(el)\n .pipe(\n tap(state => push$.next(state)),\n finalize(() => push$.complete()),\n map(state => ({ ref: el, ...state }))\n )\n })\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n Observable,\n Subject,\n defer,\n distinctUntilKeyChanged,\n finalize,\n map,\n of,\n switchMap,\n tap\n} from \"rxjs\"\n\nimport { feature } from \"~/_\"\nimport {\n Viewport,\n watchElementSize,\n watchViewportAt\n} from \"~/browser\"\n\nimport { Component } from \"../_\"\nimport { Header } from \"../header\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Navigation tabs\n */\nexport interface Tabs {\n hidden: boolean /* Navigation tabs are hidden */\n}\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch options\n */\ninterface WatchOptions {\n viewport$: Observable /* Viewport observable */\n header$: Observable
    /* Header observable */\n}\n\n/**\n * Mount options\n */\ninterface MountOptions {\n viewport$: Observable /* Viewport observable */\n header$: Observable
    /* Header observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch navigation tabs\n *\n * @param el - Navigation tabs element\n * @param options - Options\n *\n * @returns Navigation tabs observable\n */\nexport function watchTabs(\n el: HTMLElement, { viewport$, header$ }: WatchOptions\n): Observable {\n return watchElementSize(document.body)\n .pipe(\n switchMap(() => watchViewportAt(el, { header$, viewport$ })),\n map(({ offset: { y } }) => {\n return {\n hidden: y >= 10\n }\n }),\n distinctUntilKeyChanged(\"hidden\")\n )\n}\n\n/**\n * Mount navigation tabs\n *\n * This function hides the navigation tabs when scrolling past the threshold\n * and makes them reappear in a nice CSS animation when scrolling back up.\n *\n * @param el - Navigation tabs element\n * @param options - Options\n *\n * @returns Navigation tabs component observable\n */\nexport function mountTabs(\n el: HTMLElement, options: MountOptions\n): Observable> {\n return defer(() => {\n const push$ = new Subject()\n push$.subscribe({\n\n /* Handle emission */\n next({ hidden }) {\n el.hidden = hidden\n },\n\n /* Handle complete */\n complete() {\n el.hidden = false\n }\n })\n\n /* Create and return component */\n return (\n feature(\"navigation.tabs.sticky\")\n ? 
of({ hidden: false })\n : watchTabs(el, options)\n )\n .pipe(\n tap(state => push$.next(state)),\n finalize(() => push$.complete()),\n map(state => ({ ref: el, ...state }))\n )\n })\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n Observable,\n Subject,\n bufferCount,\n combineLatestWith,\n debounceTime,\n defer,\n distinctUntilChanged,\n distinctUntilKeyChanged,\n filter,\n finalize,\n map,\n merge,\n of,\n repeat,\n scan,\n share,\n skip,\n startWith,\n switchMap,\n takeLast,\n takeUntil,\n tap,\n withLatestFrom\n} from \"rxjs\"\n\nimport { feature } from \"~/_\"\nimport {\n Viewport,\n getElement,\n getElementContainer,\n getElementSize,\n getElements,\n getLocation,\n getOptionalElement,\n watchElementSize\n} from \"~/browser\"\n\nimport {\n Component,\n getComponentElement\n} from \"../_\"\nimport { Header } from \"../header\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Table of contents\n */\nexport interface TableOfContents {\n prev: HTMLAnchorElement[][] /* Anchors (previous) */\n next: HTMLAnchorElement[][] /* Anchors (next) */\n}\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch options\n */\ninterface WatchOptions {\n viewport$: Observable /* Viewport observable */\n header$: Observable
    /* Header observable */\n}\n\n/**\n * Mount options\n */\ninterface MountOptions {\n viewport$: Observable /* Viewport observable */\n header$: Observable
    /* Header observable */\n target$: Observable /* Location target observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch table of contents\n *\n * This is effectively a scroll spy implementation which will account for the\n * fixed header and automatically re-calculate anchor offsets when the viewport\n * is resized. The returned observable will only emit if the table of contents\n * needs to be repainted.\n *\n * This implementation tracks an anchor element's entire path starting from its\n * level up to the top-most anchor element, e.g. `[h3, h2, h1]`. Although the\n * Material theme currently doesn't make use of this information, it enables\n * the styling of the entire hierarchy through customization.\n *\n * Note that the current anchor is the last item of the `prev` anchor list.\n *\n * @param el - Table of contents element\n * @param options - Options\n *\n * @returns Table of contents observable\n */\nexport function watchTableOfContents(\n el: HTMLElement, { viewport$, header$ }: WatchOptions\n): Observable {\n const table = new Map()\n\n /* Compute anchor-to-target mapping */\n const anchors = getElements(\"[href^=\\\\#]\", el)\n for (const anchor of anchors) {\n const id = decodeURIComponent(anchor.hash.substring(1))\n const target = getOptionalElement(`[id=\"${id}\"]`)\n if (typeof target !== \"undefined\")\n table.set(anchor, target)\n }\n\n /* Compute necessary adjustment for header */\n const adjust$ = header$\n .pipe(\n distinctUntilKeyChanged(\"height\"),\n map(({ height }) => {\n const main = getComponentElement(\"main\")\n const grid = getElement(\":scope > :first-child\", main)\n return height + 0.8 * (\n grid.offsetTop -\n main.offsetTop\n )\n }),\n share()\n )\n\n /* Compute partition of previous and next anchors */\n const partition$ = watchElementSize(document.body)\n .pipe(\n 
distinctUntilKeyChanged(\"height\"),\n\n /* Build index to map anchor paths to vertical offsets */\n switchMap(body => defer(() => {\n let path: HTMLAnchorElement[] = []\n return of([...table].reduce((index, [anchor, target]) => {\n while (path.length) {\n const last = table.get(path[path.length - 1])!\n if (last.tagName >= target.tagName) {\n path.pop()\n } else {\n break\n }\n }\n\n /* If the current anchor is hidden, continue with its parent */\n let offset = target.offsetTop\n while (!offset && target.parentElement) {\n target = target.parentElement\n offset = target.offsetTop\n }\n\n /* Map reversed anchor path to vertical offset */\n return index.set(\n [...path = [...path, anchor]].reverse(),\n offset\n )\n }, new Map()))\n })\n .pipe(\n\n /* Sort index by vertical offset (see https://bit.ly/30z6QSO) */\n map(index => new Map([...index].sort(([, a], [, b]) => a - b))),\n combineLatestWith(adjust$),\n\n /* Re-compute partition when viewport offset changes */\n switchMap(([index, adjust]) => viewport$\n .pipe(\n scan(([prev, next], { offset: { y }, size }) => {\n const last = y + size.height >= Math.floor(body.height)\n\n /* Look forward */\n while (next.length) {\n const [, offset] = next[0]\n if (offset - adjust < y || last) {\n prev = [...prev, next.shift()!]\n } else {\n break\n }\n }\n\n /* Look backward */\n while (prev.length) {\n const [, offset] = prev[prev.length - 1]\n if (offset - adjust >= y && !last) {\n next = [prev.pop()!, ...next]\n } else {\n break\n }\n }\n\n /* Return partition */\n return [prev, next]\n }, [[], [...index]]),\n distinctUntilChanged((a, b) => (\n a[0] === b[0] &&\n a[1] === b[1]\n ))\n )\n )\n )\n )\n )\n\n /* Compute and return anchor list migrations */\n return partition$\n .pipe(\n map(([prev, next]) => ({\n prev: prev.map(([path]) => path),\n next: next.map(([path]) => path)\n })),\n\n /* Extract anchor list migrations */\n startWith({ prev: [], next: [] }),\n bufferCount(2, 1),\n map(([a, b]) => {\n\n /* Moving down 
*/\n if (a.prev.length < b.prev.length) {\n return {\n prev: b.prev.slice(Math.max(0, a.prev.length - 1), b.prev.length),\n next: []\n }\n\n /* Moving up */\n } else {\n return {\n prev: b.prev.slice(-1),\n next: b.next.slice(0, b.next.length - a.next.length)\n }\n }\n })\n )\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Mount table of contents\n *\n * @param el - Table of contents element\n * @param options - Options\n *\n * @returns Table of contents component observable\n */\nexport function mountTableOfContents(\n el: HTMLElement, { viewport$, header$, target$ }: MountOptions\n): Observable> {\n return defer(() => {\n const push$ = new Subject()\n const done$ = push$.pipe(takeLast(1))\n push$.subscribe(({ prev, next }) => {\n\n /* Look forward */\n for (const [anchor] of next) {\n anchor.classList.remove(\"md-nav__link--passed\")\n anchor.classList.remove(\"md-nav__link--active\")\n }\n\n /* Look backward */\n for (const [index, [anchor]] of prev.entries()) {\n anchor.classList.add(\"md-nav__link--passed\")\n anchor.classList.toggle(\n \"md-nav__link--active\",\n index === prev.length - 1\n )\n }\n })\n\n /* Set up following, if enabled */\n if (feature(\"toc.follow\")) {\n\n /* Toggle smooth scrolling only for anchor clicks */\n const smooth$ = merge(\n viewport$.pipe(debounceTime(1), map(() => undefined)),\n viewport$.pipe(debounceTime(250), map(() => \"smooth\" as const))\n )\n\n /* Bring active anchor into view */\n push$\n .pipe(\n filter(({ prev }) => prev.length > 0),\n withLatestFrom(smooth$)\n )\n .subscribe(([{ prev }, behavior]) => {\n const [anchor] = prev[prev.length - 1]\n if (anchor.offsetHeight) {\n\n /* Retrieve overflowing container and scroll */\n const container = getElementContainer(anchor)\n if (typeof container !== \"undefined\") {\n const offset = anchor.offsetTop - container.offsetTop\n const { height } = getElementSize(container)\n container.scrollTo({\n top: offset - height / 2,\n 
behavior\n })\n }\n }\n })\n }\n\n /* Set up anchor tracking, if enabled */\n if (feature(\"navigation.tracking\"))\n viewport$\n .pipe(\n takeUntil(done$),\n distinctUntilKeyChanged(\"offset\"),\n debounceTime(250),\n skip(1),\n takeUntil(target$.pipe(skip(1))),\n repeat({ delay: 250 }),\n withLatestFrom(push$)\n )\n .subscribe(([, { prev }]) => {\n const url = getLocation()\n\n /* Set hash fragment to active anchor */\n const anchor = prev[prev.length - 1]\n if (anchor && anchor.length) {\n const [active] = anchor\n const { hash } = new URL(active.href)\n if (url.hash !== hash) {\n url.hash = hash\n history.replaceState({}, \"\", `${url}`)\n }\n\n /* Reset anchor when at the top */\n } else {\n url.hash = \"\"\n history.replaceState({}, \"\", `${url}`)\n }\n })\n\n /* Create and return component */\n return watchTableOfContents(el, { viewport$, header$ })\n .pipe(\n tap(state => push$.next(state)),\n finalize(() => push$.complete()),\n map(state => ({ ref: el, ...state }))\n )\n })\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n Observable,\n Subject,\n bufferCount,\n combineLatest,\n distinctUntilChanged,\n distinctUntilKeyChanged,\n endWith,\n finalize,\n map,\n repeat,\n skip,\n takeLast,\n takeUntil,\n tap\n} from \"rxjs\"\n\nimport { Viewport } from \"~/browser\"\n\nimport { Component } from \"../_\"\nimport { Header } from \"../header\"\nimport { Main } from \"../main\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Back-to-top button\n */\nexport interface BackToTop {\n hidden: boolean /* Back-to-top button is hidden */\n}\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch options\n */\ninterface WatchOptions {\n viewport$: Observable /* Viewport observable */\n main$: Observable
    /* Main area observable */\n target$: Observable /* Location target observable */\n}\n\n/**\n * Mount options\n */\ninterface MountOptions {\n viewport$: Observable /* Viewport observable */\n header$: Observable
    /* Header observable */\n main$: Observable
    /* Main area observable */\n target$: Observable /* Location target observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Watch back-to-top\n *\n * @param _el - Back-to-top element\n * @param options - Options\n *\n * @returns Back-to-top observable\n */\nexport function watchBackToTop(\n _el: HTMLElement, { viewport$, main$, target$ }: WatchOptions\n): Observable {\n\n /* Compute direction */\n const direction$ = viewport$\n .pipe(\n map(({ offset: { y } }) => y),\n bufferCount(2, 1),\n map(([a, b]) => a > b && b > 0),\n distinctUntilChanged()\n )\n\n /* Compute whether main area is active */\n const active$ = main$\n .pipe(\n map(({ active }) => active)\n )\n\n /* Compute threshold for hiding */\n return combineLatest([active$, direction$])\n .pipe(\n map(([active, direction]) => !(active && direction)),\n distinctUntilChanged(),\n takeUntil(target$.pipe(skip(1))),\n endWith(true),\n repeat({ delay: 250 }),\n map(hidden => ({ hidden }))\n )\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Mount back-to-top\n *\n * @param el - Back-to-top element\n * @param options - Options\n *\n * @returns Back-to-top component observable\n */\nexport function mountBackToTop(\n el: HTMLElement, { viewport$, header$, main$, target$ }: MountOptions\n): Observable> {\n const push$ = new Subject()\n const done$ = push$.pipe(takeLast(1))\n push$.subscribe({\n\n /* Handle emission */\n next({ hidden }) {\n el.hidden = hidden\n if (hidden) {\n el.setAttribute(\"tabindex\", \"-1\")\n el.blur()\n } else {\n el.removeAttribute(\"tabindex\")\n }\n },\n\n /* Handle complete */\n complete() {\n el.style.top = \"\"\n el.hidden = true\n el.removeAttribute(\"tabindex\")\n }\n })\n\n /* Watch header height */\n header$\n .pipe(\n takeUntil(done$),\n distinctUntilKeyChanged(\"height\")\n )\n 
.subscribe(({ height }) => {\n el.style.top = `${height + 16}px`\n })\n\n /* Create and return component */\n return watchBackToTop(el, { viewport$, main$, target$ })\n .pipe(\n tap(state => push$.next(state)),\n finalize(() => push$.complete()),\n map(state => ({ ref: el, ...state }))\n )\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n Observable,\n fromEvent,\n map,\n mergeMap,\n switchMap,\n takeWhile,\n tap,\n withLatestFrom\n} from \"rxjs\"\n\nimport { getElements } from \"~/browser\"\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Patch options\n */\ninterface PatchOptions {\n document$: Observable /* Document observable */\n tablet$: Observable /* Media tablet observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Patch indeterminate checkboxes\n *\n * This function replaces the indeterminate \"pseudo state\" with the actual\n * indeterminate state, which is used to keep navigation always expanded.\n *\n * @param options - Options\n */\nexport function patchIndeterminate(\n { document$, tablet$ }: PatchOptions\n): void {\n document$\n .pipe(\n switchMap(() => getElements(\n // @todo `data-md-state` is deprecated and removed in v9\n \".md-toggle--indeterminate, [data-md-state=indeterminate]\"\n )),\n tap(el => {\n el.indeterminate = true\n el.checked = false\n }),\n mergeMap(el => fromEvent(el, \"change\")\n .pipe(\n takeWhile(() => el.classList.contains(\"md-toggle--indeterminate\")),\n map(() => el)\n )\n ),\n withLatestFrom(tablet$)\n )\n .subscribe(([el, tablet]) => {\n el.classList.remove(\"md-toggle--indeterminate\")\n if (tablet)\n el.checked = false\n })\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this 
software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n Observable,\n filter,\n fromEvent,\n map,\n mergeMap,\n switchMap,\n tap\n} from \"rxjs\"\n\nimport { getElements } from \"~/browser\"\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Patch options\n */\ninterface PatchOptions {\n document$: Observable /* Document observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Helper functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Check whether the given device is an Apple device\n *\n * @returns Test result\n */\nfunction isAppleDevice(): boolean {\n return /(iPad|iPhone|iPod)/.test(navigator.userAgent)\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- 
*/\n\n/**\n * Patch all elements with `data-md-scrollfix` attributes\n *\n * This is a year-old patch which ensures that overflow scrolling works at the\n * top and bottom of containers on iOS by ensuring a `1px` scroll offset upon\n * the start of a touch event.\n *\n * @see https://bit.ly/2SCtAOO - Original source\n *\n * @param options - Options\n */\nexport function patchScrollfix(\n { document$ }: PatchOptions\n): void {\n document$\n .pipe(\n switchMap(() => getElements(\"[data-md-scrollfix]\")),\n tap(el => el.removeAttribute(\"data-md-scrollfix\")),\n filter(isAppleDevice),\n mergeMap(el => fromEvent(el, \"touchstart\")\n .pipe(\n map(() => el)\n )\n )\n )\n .subscribe(el => {\n const top = el.scrollTop\n\n /* We're at the top of the container */\n if (top === 0) {\n el.scrollTop = 1\n\n /* We're at the bottom of the container */\n } else if (top + el.offsetHeight === el.scrollHeight) {\n el.scrollTop = top - 1\n }\n })\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n Observable,\n combineLatest,\n delay,\n map,\n of,\n switchMap,\n withLatestFrom\n} from \"rxjs\"\n\nimport {\n Viewport,\n watchToggle\n} from \"~/browser\"\n\n/* ----------------------------------------------------------------------------\n * Helper types\n * ------------------------------------------------------------------------- */\n\n/**\n * Patch options\n */\ninterface PatchOptions {\n viewport$: Observable /* Viewport observable */\n tablet$: Observable /* Media tablet observable */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Patch the document body to lock when search is open\n *\n * For mobile and tablet viewports, the search is rendered full screen, which\n * leads to scroll leaking when at the top or bottom of the search result. This\n * function locks the body when the search is in full screen mode, and restores\n * the scroll position when leaving.\n *\n * @param options - Options\n */\nexport function patchScrolllock(\n { viewport$, tablet$ }: PatchOptions\n): void {\n combineLatest([watchToggle(\"search\"), tablet$])\n .pipe(\n map(([active, tablet]) => active && !tablet),\n switchMap(active => of(active)\n .pipe(\n delay(active ? 
400 : 100)\n )\n ),\n withLatestFrom(viewport$)\n )\n .subscribe(([active, { offset: { y }}]) => {\n if (active) {\n document.body.setAttribute(\"data-md-scrolllock\", \"\")\n document.body.style.top = `-${y}px`\n } else {\n const value = -1 * parseInt(document.body.style.top, 10)\n document.body.removeAttribute(\"data-md-scrolllock\")\n document.body.style.top = \"\"\n if (value)\n window.scrollTo(0, value)\n }\n })\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\n/* ----------------------------------------------------------------------------\n * Polyfills\n * ------------------------------------------------------------------------- */\n\n/* Polyfill `Object.entries` */\nif (!Object.entries)\n Object.entries = function (obj: object) {\n const data: [string, string][] = []\n for (const key of Object.keys(obj))\n // @ts-expect-error - ignore property access warning\n data.push([key, obj[key]])\n\n /* Return entries */\n return data\n }\n\n/* Polyfill `Object.values` */\nif (!Object.values)\n Object.values = function (obj: object) {\n const data: string[] = []\n for (const key of Object.keys(obj))\n // @ts-expect-error - ignore property access warning\n data.push(obj[key])\n\n /* Return values */\n return data\n }\n\n/* ------------------------------------------------------------------------- */\n\n/* Polyfills for `Element` */\nif (typeof Element !== \"undefined\") {\n\n /* Polyfill `Element.scrollTo` */\n if (!Element.prototype.scrollTo)\n Element.prototype.scrollTo = function (\n x?: ScrollToOptions | number, y?: number\n ): void {\n if (typeof x === \"object\") {\n this.scrollLeft = x.left!\n this.scrollTop = x.top!\n } else {\n this.scrollLeft = x!\n this.scrollTop = y!\n }\n }\n\n /* Polyfill `Element.replaceWith` */\n if (!Element.prototype.replaceWith)\n Element.prototype.replaceWith = function (\n ...nodes: Array\n ): void {\n const parent = this.parentNode\n if (parent) {\n if (nodes.length === 0)\n parent.removeChild(this)\n\n /* Replace children and create text nodes */\n for (let i = nodes.length - 1; i >= 0; i--) {\n let node = nodes[i]\n if (typeof node !== \"object\")\n node = document.createTextNode(node)\n else if 
(node.parentNode)\n node.parentNode.removeChild(node)\n\n /* Replace child or insert before previous sibling */\n if (!i)\n parent.replaceChild(node, this)\n else\n parent.insertBefore(this.previousSibling!, node)\n }\n }\n }\n}\n"], + "mappings": "6+BAAA,IAAAA,GAAAC,GAAA,CAAAC,GAAAC,KAAA,EAAC,SAAUC,EAAQC,EAAS,CAC1B,OAAOH,IAAY,UAAY,OAAOC,IAAW,YAAcE,EAAQ,EACvE,OAAO,QAAW,YAAc,OAAO,IAAM,OAAOA,CAAO,EAC1DA,EAAQ,CACX,GAAEH,GAAO,UAAY,CAAE,aASrB,SAASI,EAA0BC,EAAO,CACxC,IAAIC,EAAmB,GACnBC,EAA0B,GAC1BC,EAAiC,KAEjCC,EAAsB,CACxB,KAAM,GACN,OAAQ,GACR,IAAK,GACL,IAAK,GACL,MAAO,GACP,SAAU,GACV,OAAQ,GACR,KAAM,GACN,MAAO,GACP,KAAM,GACN,KAAM,GACN,SAAU,GACV,iBAAkB,EACpB,EAOA,SAASC,EAAmBC,EAAI,CAC9B,MACE,GAAAA,GACAA,IAAO,UACPA,EAAG,WAAa,QAChBA,EAAG,WAAa,QAChB,cAAeA,GACf,aAAcA,EAAG,UAKrB,CASA,SAASC,EAA8BD,EAAI,CACzC,IAAIE,GAAOF,EAAG,KACVG,GAAUH,EAAG,QAUjB,MARI,GAAAG,KAAY,SAAWL,EAAoBI,KAAS,CAACF,EAAG,UAIxDG,KAAY,YAAc,CAACH,EAAG,UAI9BA,EAAG,kBAKT,CAOA,SAASI,EAAqBJ,EAAI,CAC5BA,EAAG,UAAU,SAAS,eAAe,IAGzCA,EAAG,UAAU,IAAI,eAAe,EAChCA,EAAG,aAAa,2BAA4B,EAAE,EAChD,CAOA,SAASK,EAAwBL,EAAI,CAC/B,CAACA,EAAG,aAAa,0BAA0B,IAG/CA,EAAG,UAAU,OAAO,eAAe,EACnCA,EAAG,gBAAgB,0BAA0B,EAC/C,CAUA,SAASM,EAAUC,EAAG,CAChBA,EAAE,SAAWA,EAAE,QAAUA,EAAE,UAI3BR,EAAmBL,EAAM,aAAa,GACxCU,EAAqBV,EAAM,aAAa,EAG1CC,EAAmB,GACrB,CAUA,SAASa,EAAcD,EAAG,CACxBZ,EAAmB,EACrB,CASA,SAASc,EAAQF,EAAG,CAEd,CAACR,EAAmBQ,EAAE,MAAM,IAI5BZ,GAAoBM,EAA8BM,EAAE,MAAM,IAC5DH,EAAqBG,EAAE,MAAM,CAEjC,CAMA,SAASG,EAAOH,EAAG,CACb,CAACR,EAAmBQ,EAAE,MAAM,IAK9BA,EAAE,OAAO,UAAU,SAAS,eAAe,GAC3CA,EAAE,OAAO,aAAa,0BAA0B,KAMhDX,EAA0B,GAC1B,OAAO,aAAaC,CAA8B,EAClDA,EAAiC,OAAO,WAAW,UAAW,CAC5DD,EAA0B,EAC5B,EAAG,GAAG,EACNS,EAAwBE,EAAE,MAAM,EAEpC,CAOA,SAASI,EAAmBJ,EAAG,CACzB,SAAS,kBAAoB,WAK3BX,IACFD,EAAmB,IAErBiB,EAA+B,EAEnC,CAQA,SAASA,GAAiC,CACxC,SAAS,iBAAiB,YAAaC,CAAoB,EAC3D,SAAS,iBAAiB,YAAaA,CAAoB,EAC3D,SAAS,iBAAiB,UAAWA,CAAoB,EACzD,SAAS,iBAAiB,cAAeA,CAAoB,EAC7D,SAAS,iBAAiB,cAAeA,CAAoB,EAC7D,SAAS,iBAAiB,YAAaA,CAAoB,EAC3D,SAAS,iBAAiB,YAAaA,CAAoB,EAC3D,SAAS,iBAAiB,aAAcA,CAAo
B,EAC5D,SAAS,iBAAiB,WAAYA,CAAoB,CAC5D,CAEA,SAASC,GAAoC,CAC3C,SAAS,oBAAoB,YAAaD,CAAoB,EAC9D,SAAS,oBAAoB,YAAaA,CAAoB,EAC9D,SAAS,oBAAoB,UAAWA,CAAoB,EAC5D,SAAS,oBAAoB,cAAeA,CAAoB,EAChE,SAAS,oBAAoB,cAAeA,CAAoB,EAChE,SAAS,oBAAoB,YAAaA,CAAoB,EAC9D,SAAS,oBAAoB,YAAaA,CAAoB,EAC9D,SAAS,oBAAoB,aAAcA,CAAoB,EAC/D,SAAS,oBAAoB,WAAYA,CAAoB,CAC/D,CASA,SAASA,EAAqBN,EAAG,CAG3BA,EAAE,OAAO,UAAYA,EAAE,OAAO,SAAS,YAAY,IAAM,SAI7DZ,EAAmB,GACnBmB,EAAkC,EACpC,CAKA,SAAS,iBAAiB,UAAWR,EAAW,EAAI,EACpD,SAAS,iBAAiB,YAAaE,EAAe,EAAI,EAC1D,SAAS,iBAAiB,cAAeA,EAAe,EAAI,EAC5D,SAAS,iBAAiB,aAAcA,EAAe,EAAI,EAC3D,SAAS,iBAAiB,mBAAoBG,EAAoB,EAAI,EAEtEC,EAA+B,EAM/BlB,EAAM,iBAAiB,QAASe,EAAS,EAAI,EAC7Cf,EAAM,iBAAiB,OAAQgB,EAAQ,EAAI,EAOvChB,EAAM,WAAa,KAAK,wBAA0BA,EAAM,KAI1DA,EAAM,KAAK,aAAa,wBAAyB,EAAE,EAC1CA,EAAM,WAAa,KAAK,gBACjC,SAAS,gBAAgB,UAAU,IAAI,kBAAkB,EACzD,SAAS,gBAAgB,aAAa,wBAAyB,EAAE,EAErE,CAKA,GAAI,OAAO,QAAW,aAAe,OAAO,UAAa,YAAa,CAIpE,OAAO,0BAA4BD,EAInC,IAAIsB,EAEJ,GAAI,CACFA,EAAQ,IAAI,YAAY,8BAA8B,CACxD,OAASC,EAAP,CAEAD,EAAQ,SAAS,YAAY,aAAa,EAC1CA,EAAM,gBAAgB,+BAAgC,GAAO,GAAO,CAAC,CAAC,CACxE,CAEA,OAAO,cAAcA,CAAK,CAC5B,CAEI,OAAO,UAAa,aAGtBtB,EAA0B,QAAQ,CAGtC,CAAE,ICvTF,IAAAwB,GAAAC,GAAAC,IAAA,EAAC,SAASC,EAAQ,CAOhB,IAAIC,EAA6B,UAAW,CAC1C,GAAI,CACF,MAAO,CAAC,CAAC,OAAO,QAClB,OAASC,EAAP,CACA,MAAO,EACT,CACF,EAGIC,EAAoBF,EAA2B,EAE/CG,EAAiB,SAASC,EAAO,CACnC,IAAIC,EAAW,CACb,KAAM,UAAW,CACf,IAAIC,EAAQF,EAAM,MAAM,EACxB,MAAO,CAAE,KAAME,IAAU,OAAQ,MAAOA,CAAM,CAChD,CACF,EAEA,OAAIJ,IACFG,EAAS,OAAO,UAAY,UAAW,CACrC,OAAOA,CACT,GAGKA,CACT,EAMIE,EAAiB,SAASD,EAAO,CACnC,OAAO,mBAAmBA,CAAK,EAAE,QAAQ,OAAQ,GAAG,CACtD,EAEIE,EAAmB,SAASF,EAAO,CACrC,OAAO,mBAAmB,OAAOA,CAAK,EAAE,QAAQ,MAAO,GAAG,CAAC,CAC7D,EAEIG,EAA0B,UAAW,CAEvC,IAAIC,EAAkB,SAASC,EAAc,CAC3C,OAAO,eAAe,KAAM,WAAY,CAAE,SAAU,GAAM,MAAO,CAAC,CAAE,CAAC,EACrE,IAAIC,EAAqB,OAAOD,EAEhC,GAAIC,IAAuB,YAEpB,GAAIA,IAAuB,SAC5BD,IAAiB,IACnB,KAAK,YAAYA,CAAY,UAEtBA,aAAwBD,EAAiB,CAClD,IAAIG,EAAQ,KACZF,EAAa,QAAQ,SAASL,EAAOQ,EAAM,CACzCD,EAAM,OAAOC,EAAMR,CAAK,CAC1B,CAAC,CACH,SAAYK,IAAiB,MAAUC,IAAuB,SA
C5D,GAAI,OAAO,UAAU,SAAS,KAAKD,CAAY,IAAM,iBACnD,QAASI,EAAI,EAAGA,EAAIJ,EAAa,OAAQI,IAAK,CAC5C,IAAIC,EAAQL,EAAaI,GACzB,GAAK,OAAO,UAAU,SAAS,KAAKC,CAAK,IAAM,kBAAsBA,EAAM,SAAW,EACpF,KAAK,OAAOA,EAAM,GAAIA,EAAM,EAAE,MAE9B,OAAM,IAAI,UAAU,4CAA8CD,EAAI,6BAA8B,CAExG,KAEA,SAASE,KAAON,EACVA,EAAa,eAAeM,CAAG,GACjC,KAAK,OAAOA,EAAKN,EAAaM,EAAI,MAKxC,OAAM,IAAI,UAAU,8CAA+C,CAEvE,EAEIC,EAAQR,EAAgB,UAE5BQ,EAAM,OAAS,SAASJ,EAAMR,EAAO,CAC/BQ,KAAQ,KAAK,SACf,KAAK,SAASA,GAAM,KAAK,OAAOR,CAAK,CAAC,EAEtC,KAAK,SAASQ,GAAQ,CAAC,OAAOR,CAAK,CAAC,CAExC,EAEAY,EAAM,OAAS,SAASJ,EAAM,CAC5B,OAAO,KAAK,SAASA,EACvB,EAEAI,EAAM,IAAM,SAASJ,EAAM,CACzB,OAAQA,KAAQ,KAAK,SAAY,KAAK,SAASA,GAAM,GAAK,IAC5D,EAEAI,EAAM,OAAS,SAASJ,EAAM,CAC5B,OAAQA,KAAQ,KAAK,SAAY,KAAK,SAASA,GAAM,MAAM,CAAC,EAAI,CAAC,CACnE,EAEAI,EAAM,IAAM,SAASJ,EAAM,CACzB,OAAQA,KAAQ,KAAK,QACvB,EAEAI,EAAM,IAAM,SAASJ,EAAMR,EAAO,CAChC,KAAK,SAASQ,GAAQ,CAAC,OAAOR,CAAK,CAAC,CACtC,EAEAY,EAAM,QAAU,SAASC,EAAUC,EAAS,CAC1C,IAAIC,EACJ,QAASP,KAAQ,KAAK,SACpB,GAAI,KAAK,SAAS,eAAeA,CAAI,EAAG,CACtCO,EAAU,KAAK,SAASP,GACxB,QAASC,EAAI,EAAGA,EAAIM,EAAQ,OAAQN,IAClCI,EAAS,KAAKC,EAASC,EAAQN,GAAID,EAAM,IAAI,CAEjD,CAEJ,EAEAI,EAAM,KAAO,UAAW,CACtB,IAAId,EAAQ,CAAC,EACb,YAAK,QAAQ,SAASE,EAAOQ,EAAM,CACjCV,EAAM,KAAKU,CAAI,CACjB,CAAC,EACMX,EAAeC,CAAK,CAC7B,EAEAc,EAAM,OAAS,UAAW,CACxB,IAAId,EAAQ,CAAC,EACb,YAAK,QAAQ,SAASE,EAAO,CAC3BF,EAAM,KAAKE,CAAK,CAClB,CAAC,EACMH,EAAeC,CAAK,CAC7B,EAEAc,EAAM,QAAU,UAAW,CACzB,IAAId,EAAQ,CAAC,EACb,YAAK,QAAQ,SAASE,EAAOQ,EAAM,CACjCV,EAAM,KAAK,CAACU,EAAMR,CAAK,CAAC,CAC1B,CAAC,EACMH,EAAeC,CAAK,CAC7B,EAEIF,IACFgB,EAAM,OAAO,UAAYA,EAAM,SAGjCA,EAAM,SAAW,UAAW,CAC1B,IAAII,EAAc,CAAC,EACnB,YAAK,QAAQ,SAAShB,EAAOQ,EAAM,CACjCQ,EAAY,KAAKf,EAAeO,CAAI,EAAI,IAAMP,EAAeD,CAAK,CAAC,CACrE,CAAC,EACMgB,EAAY,KAAK,GAAG,CAC7B,EAGAvB,EAAO,gBAAkBW,CAC3B,EAEIa,EAAkC,UAAW,CAC/C,GAAI,CACF,IAAIb,EAAkBX,EAAO,gBAE7B,OACG,IAAIW,EAAgB,MAAM,EAAE,SAAS,IAAM,OAC3C,OAAOA,EAAgB,UAAU,KAAQ,YACzC,OAAOA,EAAgB,UAAU,SAAY,UAElD,OAASc,EAAP,CACA,MAAO,EACT,CACF,EAEKD,EAAgC,GACnCd,EAAwB,EAG1B,IAAIS,EAAQnB,EAAO,gBAAgB,UAE
/B,OAAOmB,EAAM,MAAS,aACxBA,EAAM,KAAO,UAAW,CACtB,IAAIL,EAAQ,KACRT,EAAQ,CAAC,EACb,KAAK,QAAQ,SAASE,EAAOQ,EAAM,CACjCV,EAAM,KAAK,CAACU,EAAMR,CAAK,CAAC,EACnBO,EAAM,UACTA,EAAM,OAAOC,CAAI,CAErB,CAAC,EACDV,EAAM,KAAK,SAASqB,EAAGC,EAAG,CACxB,OAAID,EAAE,GAAKC,EAAE,GACJ,GACED,EAAE,GAAKC,EAAE,GACX,EAEA,CAEX,CAAC,EACGb,EAAM,WACRA,EAAM,SAAW,CAAC,GAEpB,QAASE,EAAI,EAAGA,EAAIX,EAAM,OAAQW,IAChC,KAAK,OAAOX,EAAMW,GAAG,GAAIX,EAAMW,GAAG,EAAE,CAExC,GAGE,OAAOG,EAAM,aAAgB,YAC/B,OAAO,eAAeA,EAAO,cAAe,CAC1C,WAAY,GACZ,aAAc,GACd,SAAU,GACV,MAAO,SAASP,EAAc,CAC5B,GAAI,KAAK,SACP,KAAK,SAAW,CAAC,MACZ,CACL,IAAIgB,EAAO,CAAC,EACZ,KAAK,QAAQ,SAASrB,EAAOQ,EAAM,CACjCa,EAAK,KAAKb,CAAI,CAChB,CAAC,EACD,QAASC,EAAI,EAAGA,EAAIY,EAAK,OAAQZ,IAC/B,KAAK,OAAOY,EAAKZ,EAAE,CAEvB,CAEAJ,EAAeA,EAAa,QAAQ,MAAO,EAAE,EAG7C,QAFIiB,EAAajB,EAAa,MAAM,GAAG,EACnCkB,EACKd,EAAI,EAAGA,EAAIa,EAAW,OAAQb,IACrCc,EAAYD,EAAWb,GAAG,MAAM,GAAG,EACnC,KAAK,OACHP,EAAiBqB,EAAU,EAAE,EAC5BA,EAAU,OAAS,EAAKrB,EAAiBqB,EAAU,EAAE,EAAI,EAC5D,CAEJ,CACF,CAAC,CAKL,GACG,OAAO,QAAW,YAAe,OAC5B,OAAO,QAAW,YAAe,OACjC,OAAO,MAAS,YAAe,KAAO/B,EAC9C,GAEC,SAASC,EAAQ,CAOhB,IAAI+B,EAAwB,UAAW,CACrC,GAAI,CACF,IAAIC,EAAI,IAAIhC,EAAO,IAAI,IAAK,UAAU,EACtC,OAAAgC,EAAE,SAAW,MACLA,EAAE,OAAS,kBAAqBA,EAAE,YAC5C,OAASP,EAAP,CACA,MAAO,EACT,CACF,EAGIQ,EAAc,UAAW,CAC3B,IAAIC,EAAOlC,EAAO,IAEdmC,EAAM,SAASC,EAAKC,EAAM,CACxB,OAAOD,GAAQ,WAAUA,EAAM,OAAOA,CAAG,GACzCC,GAAQ,OAAOA,GAAS,WAAUA,EAAO,OAAOA,CAAI,GAGxD,IAAIC,EAAM,SAAUC,EACpB,GAAIF,IAASrC,EAAO,WAAa,QAAUqC,IAASrC,EAAO,SAAS,MAAO,CACzEqC,EAAOA,EAAK,YAAY,EACxBC,EAAM,SAAS,eAAe,mBAAmB,EAAE,EACnDC,EAAcD,EAAI,cAAc,MAAM,EACtCC,EAAY,KAAOF,EACnBC,EAAI,KAAK,YAAYC,CAAW,EAChC,GAAI,CACF,GAAIA,EAAY,KAAK,QAAQF,CAAI,IAAM,EAAG,MAAM,IAAI,MAAME,EAAY,IAAI,CAC5E,OAASC,EAAP,CACA,MAAM,IAAI,MAAM,0BAA4BH,EAAO,WAAaG,CAAG,CACrE,CACF,CAEA,IAAIC,EAAgBH,EAAI,cAAc,GAAG,EACzCG,EAAc,KAAOL,EACjBG,IACFD,EAAI,KAAK,YAAYG,CAAa,EAClCA,EAAc,KAAOA,EAAc,MAGrC,IAAIC,EAAeJ,EAAI,cAAc,OAAO,EAI5C,GAHAI,EAAa,KAAO,MACpBA,EAAa,MAAQN,EAEjBK,EAAc,WAAa,KAAO,CAAC,IAAI,KAAKA,EAAc,IAAI,GAAM,CAACC,EA
Aa,cAAc,GAAK,CAACL,EACxG,MAAM,IAAI,UAAU,aAAa,EAGnC,OAAO,eAAe,KAAM,iBAAkB,CAC5C,MAAOI,CACT,CAAC,EAID,IAAIE,EAAe,IAAI3C,EAAO,gBAAgB,KAAK,MAAM,EACrD4C,EAAqB,GACrBC,EAA2B,GAC3B/B,EAAQ,KACZ,CAAC,SAAU,SAAU,KAAK,EAAE,QAAQ,SAASgC,EAAY,CACvD,IAAIC,GAASJ,EAAaG,GAC1BH,EAAaG,GAAc,UAAW,CACpCC,GAAO,MAAMJ,EAAc,SAAS,EAChCC,IACFC,EAA2B,GAC3B/B,EAAM,OAAS6B,EAAa,SAAS,EACrCE,EAA2B,GAE/B,CACF,CAAC,EAED,OAAO,eAAe,KAAM,eAAgB,CAC1C,MAAOF,EACP,WAAY,EACd,CAAC,EAED,IAAIK,EAAS,OACb,OAAO,eAAe,KAAM,sBAAuB,CACjD,WAAY,GACZ,aAAc,GACd,SAAU,GACV,MAAO,UAAW,CACZ,KAAK,SAAWA,IAClBA,EAAS,KAAK,OACVH,IACFD,EAAqB,GACrB,KAAK,aAAa,YAAY,KAAK,MAAM,EACzCA,EAAqB,IAG3B,CACF,CAAC,CACH,EAEIzB,EAAQgB,EAAI,UAEZc,EAA6B,SAASC,EAAe,CACvD,OAAO,eAAe/B,EAAO+B,EAAe,CAC1C,IAAK,UAAW,CACd,OAAO,KAAK,eAAeA,EAC7B,EACA,IAAK,SAAS3C,EAAO,CACnB,KAAK,eAAe2C,GAAiB3C,CACvC,EACA,WAAY,EACd,CAAC,CACH,EAEA,CAAC,OAAQ,OAAQ,WAAY,OAAQ,UAAU,EAC5C,QAAQ,SAAS2C,EAAe,CAC/BD,EAA2BC,CAAa,CAC1C,CAAC,EAEH,OAAO,eAAe/B,EAAO,SAAU,CACrC,IAAK,UAAW,CACd,OAAO,KAAK,eAAe,MAC7B,EACA,IAAK,SAASZ,EAAO,CACnB,KAAK,eAAe,OAAYA,EAChC,KAAK,oBAAoB,CAC3B,EACA,WAAY,EACd,CAAC,EAED,OAAO,iBAAiBY,EAAO,CAE7B,SAAY,CACV,IAAK,UAAW,CACd,IAAIL,EAAQ,KACZ,OAAO,UAAW,CAChB,OAAOA,EAAM,IACf,CACF,CACF,EAEA,KAAQ,CACN,IAAK,UAAW,CACd,OAAO,KAAK,eAAe,KAAK,QAAQ,MAAO,EAAE,CACnD,EACA,IAAK,SAASP,EAAO,CACnB,KAAK,eAAe,KAAOA,EAC3B,KAAK,oBAAoB,CAC3B,EACA,WAAY,EACd,EAEA,SAAY,CACV,IAAK,UAAW,CACd,OAAO,KAAK,eAAe,SAAS,QAAQ,SAAU,GAAG,CAC3D,EACA,IAAK,SAASA,EAAO,CACnB,KAAK,eAAe,SAAWA,CACjC,EACA,WAAY,EACd,EAEA,OAAU,CACR,IAAK,UAAW,CAEd,IAAI4C,EAAe,CAAE,QAAS,GAAI,SAAU,IAAK,OAAQ,EAAG,EAAE,KAAK,eAAe,UAI9EC,EAAkB,KAAK,eAAe,MAAQD,GAChD,KAAK,eAAe,OAAS,GAE/B,OAAO,KAAK,eAAe,SACzB,KACA,KAAK,eAAe,UACnBC,EAAmB,IAAM,KAAK,eAAe,KAAQ,GAC1D,EACA,WAAY,EACd,EAEA,SAAY,CACV,IAAK,UAAW,CACd,MAAO,EACT,EACA,IAAK,SAAS7C,EAAO,CACrB,EACA,WAAY,EACd,EAEA,SAAY,CACV,IAAK,UAAW,CACd,MAAO,EACT,EACA,IAAK,SAASA,EAAO,CACrB,EACA,WAAY,EACd,CACF,CAAC,EAED4B,EAAI,gBAAkB,SAASkB,EAAM,CACnC,OAAOnB,EAAK,gBAAgB,MAAMA,EAAM,SAAS,CACnD,EAEAC,EAAI,gBAAkB,SAASC,EAAK
,CAClC,OAAOF,EAAK,gBAAgB,MAAMA,EAAM,SAAS,CACnD,EAEAlC,EAAO,IAAMmC,CAEf,EAMA,GAJKJ,EAAsB,GACzBE,EAAY,EAGTjC,EAAO,WAAa,QAAW,EAAE,WAAYA,EAAO,UAAW,CAClE,IAAIsD,EAAY,UAAW,CACzB,OAAOtD,EAAO,SAAS,SAAW,KAAOA,EAAO,SAAS,UAAYA,EAAO,SAAS,KAAQ,IAAMA,EAAO,SAAS,KAAQ,GAC7H,EAEA,GAAI,CACF,OAAO,eAAeA,EAAO,SAAU,SAAU,CAC/C,IAAKsD,EACL,WAAY,EACd,CAAC,CACH,OAAS7B,EAAP,CACA,YAAY,UAAW,CACrBzB,EAAO,SAAS,OAASsD,EAAU,CACrC,EAAG,GAAG,CACR,CACF,CAEF,GACG,OAAO,QAAW,YAAe,OAC5B,OAAO,QAAW,YAAe,OACjC,OAAO,MAAS,YAAe,KAAOvD,EAC9C,IC5eA,IAAAwD,GAAAC,GAAA,CAAAC,GAAAC,KAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA,gFAeA,IAAIC,GACAC,GACAC,GACAC,GACAC,GACAC,GACAC,GACAC,GACAC,GACAC,GACAC,GACAC,GACAC,GACAC,GACAC,GACAC,GACAC,GACAC,GACAC,GACAC,GACAC,GACAC,GACAC,GACAC,IACH,SAAUC,EAAS,CAChB,IAAIC,EAAO,OAAO,QAAW,SAAW,OAAS,OAAO,MAAS,SAAW,KAAO,OAAO,MAAS,SAAW,KAAO,CAAC,EAClH,OAAO,QAAW,YAAc,OAAO,IACvC,OAAO,QAAS,CAAC,SAAS,EAAG,SAAU3B,EAAS,CAAE0B,EAAQE,EAAeD,EAAMC,EAAe5B,CAAO,CAAC,CAAC,CAAG,CAAC,EAEtG,OAAOC,IAAW,UAAY,OAAOA,GAAO,SAAY,SAC7DyB,EAAQE,EAAeD,EAAMC,EAAe3B,GAAO,OAAO,CAAC,CAAC,EAG5DyB,EAAQE,EAAeD,CAAI,CAAC,EAEhC,SAASC,EAAe5B,EAAS6B,EAAU,CACvC,OAAI7B,IAAY2B,IACR,OAAO,OAAO,QAAW,WACzB,OAAO,eAAe3B,EAAS,aAAc,CAAE,MAAO,EAAK,CAAC,EAG5DA,EAAQ,WAAa,IAGtB,SAAU8B,EAAIC,EAAG,CAAE,OAAO/B,EAAQ8B,GAAMD,EAAWA,EAASC,EAAIC,CAAC,EAAIA,CAAG,CACnF,CACJ,GACC,SAAUC,EAAU,CACjB,IAAIC,EAAgB,OAAO,gBACtB,CAAE,UAAW,CAAC,CAAE,YAAa,OAAS,SAAUC,EAAGC,EAAG,CAAED,EAAE,UAAYC,CAAG,GAC1E,SAAUD,EAAGC,EAAG,CAAE,QAASC,KAAKD,EAAO,OAAO,UAAU,eAAe,KAAKA,EAAGC,CAAC,IAAGF,EAAEE,GAAKD,EAAEC,GAAI,EAEpGlC,GAAY,SAAUgC,EAAGC,EAAG,CACxB,GAAI,OAAOA,GAAM,YAAcA,IAAM,KACjC,MAAM,IAAI,UAAU,uBAAyB,OAAOA,CAAC,EAAI,+BAA+B,EAC5FF,EAAcC,EAAGC,CAAC,EAClB,SAASE,GAAK,CAAE,KAAK,YAAcH,CAAG,CACtCA,EAAE,UAAYC,IAAM,KAAO,OAAO,OAAOA,CAAC,GAAKE,EAAG,UAAYF,EAAE,UAAW,IAAIE,EACnF,EAEAlC,GAAW,OAAO,QAAU,SAAUmC,EAAG,CACrC,QAASC,EAAG,EAAI,EAAGC,EAAI,UAAU,OAAQ,EAAIA,EAAG,IAAK,CACjDD,EAAI,UAAU,GACd,QAASH,KAAKG,EAAO,OAAO,UAAU,eAAe,KAAKA,EAAGH,CAAC,IAAGE,EAAEF,GAAKG,EAAEH,
GAC9E,CACA,OAAOE,CACX,EAEAlC,GAAS,SAAUmC,EAAGE,EAAG,CACrB,IAAIH,EAAI,CAAC,EACT,QAASF,KAAKG,EAAO,OAAO,UAAU,eAAe,KAAKA,EAAGH,CAAC,GAAKK,EAAE,QAAQL,CAAC,EAAI,IAC9EE,EAAEF,GAAKG,EAAEH,IACb,GAAIG,GAAK,MAAQ,OAAO,OAAO,uBAA0B,WACrD,QAASG,EAAI,EAAGN,EAAI,OAAO,sBAAsBG,CAAC,EAAGG,EAAIN,EAAE,OAAQM,IAC3DD,EAAE,QAAQL,EAAEM,EAAE,EAAI,GAAK,OAAO,UAAU,qBAAqB,KAAKH,EAAGH,EAAEM,EAAE,IACzEJ,EAAEF,EAAEM,IAAMH,EAAEH,EAAEM,KAE1B,OAAOJ,CACX,EAEAjC,GAAa,SAAUsC,EAAYC,EAAQC,EAAKC,EAAM,CAClD,IAAIC,EAAI,UAAU,OAAQC,EAAID,EAAI,EAAIH,EAASE,IAAS,KAAOA,EAAO,OAAO,yBAAyBF,EAAQC,CAAG,EAAIC,EAAMZ,EAC3H,GAAI,OAAO,SAAY,UAAY,OAAO,QAAQ,UAAa,WAAYc,EAAI,QAAQ,SAASL,EAAYC,EAAQC,EAAKC,CAAI,MACxH,SAASJ,EAAIC,EAAW,OAAS,EAAGD,GAAK,EAAGA,KAASR,EAAIS,EAAWD,MAAIM,GAAKD,EAAI,EAAIb,EAAEc,CAAC,EAAID,EAAI,EAAIb,EAAEU,EAAQC,EAAKG,CAAC,EAAId,EAAEU,EAAQC,CAAG,IAAMG,GAChJ,OAAOD,EAAI,GAAKC,GAAK,OAAO,eAAeJ,EAAQC,EAAKG,CAAC,EAAGA,CAChE,EAEA1C,GAAU,SAAU2C,EAAYC,EAAW,CACvC,OAAO,SAAUN,EAAQC,EAAK,CAAEK,EAAUN,EAAQC,EAAKI,CAAU,CAAG,CACxE,EAEA1C,GAAa,SAAU4C,EAAaC,EAAe,CAC/C,GAAI,OAAO,SAAY,UAAY,OAAO,QAAQ,UAAa,WAAY,OAAO,QAAQ,SAASD,EAAaC,CAAa,CACjI,EAEA5C,GAAY,SAAU6C,EAASC,EAAYC,EAAGC,EAAW,CACrD,SAASC,EAAMC,EAAO,CAAE,OAAOA,aAAiBH,EAAIG,EAAQ,IAAIH,EAAE,SAAUI,EAAS,CAAEA,EAAQD,CAAK,CAAG,CAAC,CAAG,CAC3G,OAAO,IAAKH,IAAMA,EAAI,UAAU,SAAUI,EAASC,EAAQ,CACvD,SAASC,EAAUH,EAAO,CAAE,GAAI,CAAEI,EAAKN,EAAU,KAAKE,CAAK,CAAC,CAAG,OAASjB,EAAP,CAAYmB,EAAOnB,CAAC,CAAG,CAAE,CAC1F,SAASsB,EAASL,EAAO,CAAE,GAAI,CAAEI,EAAKN,EAAU,MAASE,CAAK,CAAC,CAAG,OAASjB,EAAP,CAAYmB,EAAOnB,CAAC,CAAG,CAAE,CAC7F,SAASqB,EAAKE,EAAQ,CAAEA,EAAO,KAAOL,EAAQK,EAAO,KAAK,EAAIP,EAAMO,EAAO,KAAK,EAAE,KAAKH,EAAWE,CAAQ,CAAG,CAC7GD,GAAMN,EAAYA,EAAU,MAAMH,EAASC,GAAc,CAAC,CAAC,GAAG,KAAK,CAAC,CACxE,CAAC,CACL,EAEA7C,GAAc,SAAU4C,EAASY,EAAM,CACnC,IAAIC,EAAI,CAAE,MAAO,EAAG,KAAM,UAAW,CAAE,GAAI5B,EAAE,GAAK,EAAG,MAAMA,EAAE,GAAI,OAAOA,EAAE,EAAI,EAAG,KAAM,CAAC,EAAG,IAAK,CAAC,CAAE,EAAG6B,EAAGC,EAAG9B,EAAG+B,EAC/G,OAAOA,EAAI,CAAE,KAAMC,EAAK,CAAC,EAAG,MAASA,EAAK,CAAC,EAAG,OAAUA,EAAK,CAAC,CAAE,EAAG,OAAO,QAAW,aAAe
D,EAAE,OAAO,UAAY,UAAW,CAAE,OAAO,IAAM,GAAIA,EACvJ,SAASC,EAAK9B,EAAG,CAAE,OAAO,SAAUT,EAAG,CAAE,OAAO+B,EAAK,CAACtB,EAAGT,CAAC,CAAC,CAAG,CAAG,CACjE,SAAS+B,EAAKS,EAAI,CACd,GAAIJ,EAAG,MAAM,IAAI,UAAU,iCAAiC,EAC5D,KAAOD,GAAG,GAAI,CACV,GAAIC,EAAI,EAAGC,IAAM9B,EAAIiC,EAAG,GAAK,EAAIH,EAAE,OAAYG,EAAG,GAAKH,EAAE,SAAc9B,EAAI8B,EAAE,SAAc9B,EAAE,KAAK8B,CAAC,EAAG,GAAKA,EAAE,OAAS,EAAE9B,EAAIA,EAAE,KAAK8B,EAAGG,EAAG,EAAE,GAAG,KAAM,OAAOjC,EAE3J,OADI8B,EAAI,EAAG9B,IAAGiC,EAAK,CAACA,EAAG,GAAK,EAAGjC,EAAE,KAAK,GAC9BiC,EAAG,GAAI,CACX,IAAK,GAAG,IAAK,GAAGjC,EAAIiC,EAAI,MACxB,IAAK,GAAG,OAAAL,EAAE,QAAgB,CAAE,MAAOK,EAAG,GAAI,KAAM,EAAM,EACtD,IAAK,GAAGL,EAAE,QAASE,EAAIG,EAAG,GAAIA,EAAK,CAAC,CAAC,EAAG,SACxC,IAAK,GAAGA,EAAKL,EAAE,IAAI,IAAI,EAAGA,EAAE,KAAK,IAAI,EAAG,SACxC,QACI,GAAM5B,EAAI4B,EAAE,KAAM,EAAA5B,EAAIA,EAAE,OAAS,GAAKA,EAAEA,EAAE,OAAS,MAAQiC,EAAG,KAAO,GAAKA,EAAG,KAAO,GAAI,CAAEL,EAAI,EAAG,QAAU,CAC3G,GAAIK,EAAG,KAAO,IAAM,CAACjC,GAAMiC,EAAG,GAAKjC,EAAE,IAAMiC,EAAG,GAAKjC,EAAE,IAAM,CAAE4B,EAAE,MAAQK,EAAG,GAAI,KAAO,CACrF,GAAIA,EAAG,KAAO,GAAKL,EAAE,MAAQ5B,EAAE,GAAI,CAAE4B,EAAE,MAAQ5B,EAAE,GAAIA,EAAIiC,EAAI,KAAO,CACpE,GAAIjC,GAAK4B,EAAE,MAAQ5B,EAAE,GAAI,CAAE4B,EAAE,MAAQ5B,EAAE,GAAI4B,EAAE,IAAI,KAAKK,CAAE,EAAG,KAAO,CAC9DjC,EAAE,IAAI4B,EAAE,IAAI,IAAI,EACpBA,EAAE,KAAK,IAAI,EAAG,QACtB,CACAK,EAAKN,EAAK,KAAKZ,EAASa,CAAC,CAC7B,OAASzB,EAAP,CAAY8B,EAAK,CAAC,EAAG9B,CAAC,EAAG2B,EAAI,CAAG,QAAE,CAAUD,EAAI7B,EAAI,CAAG,CACzD,GAAIiC,EAAG,GAAK,EAAG,MAAMA,EAAG,GAAI,MAAO,CAAE,MAAOA,EAAG,GAAKA,EAAG,GAAK,OAAQ,KAAM,EAAK,CACnF,CACJ,EAEA7D,GAAe,SAAS8D,EAAG,EAAG,CAC1B,QAASpC,KAAKoC,EAAOpC,IAAM,WAAa,CAAC,OAAO,UAAU,eAAe,KAAK,EAAGA,CAAC,GAAGX,GAAgB,EAAG+C,EAAGpC,CAAC,CAChH,EAEAX,GAAkB,OAAO,OAAU,SAASgD,EAAGD,EAAGE,EAAGC,EAAI,CACjDA,IAAO,SAAWA,EAAKD,GAC3B,OAAO,eAAeD,EAAGE,EAAI,CAAE,WAAY,GAAM,IAAK,UAAW,CAAE,OAAOH,EAAEE,EAAI,CAAE,CAAC,CACvF,EAAM,SAASD,EAAGD,EAAGE,EAAGC,EAAI,CACpBA,IAAO,SAAWA,EAAKD,GAC3BD,EAAEE,GAAMH,EAAEE,EACd,EAEA/D,GAAW,SAAU8D,EAAG,CACpB,IAAIlC,EAAI,OAAO,QAAW,YAAc,OAAO,SAAUiC,EAAIjC,GAAKkC,EAAElC,GAAIG,EAAI,E
AC5E,GAAI8B,EAAG,OAAOA,EAAE,KAAKC,CAAC,EACtB,GAAIA,GAAK,OAAOA,EAAE,QAAW,SAAU,MAAO,CAC1C,KAAM,UAAY,CACd,OAAIA,GAAK/B,GAAK+B,EAAE,SAAQA,EAAI,QACrB,CAAE,MAAOA,GAAKA,EAAE/B,KAAM,KAAM,CAAC+B,CAAE,CAC1C,CACJ,EACA,MAAM,IAAI,UAAUlC,EAAI,0BAA4B,iCAAiC,CACzF,EAEA3B,GAAS,SAAU6D,EAAGjC,EAAG,CACrB,IAAIgC,EAAI,OAAO,QAAW,YAAcC,EAAE,OAAO,UACjD,GAAI,CAACD,EAAG,OAAOC,EACf,IAAI/B,EAAI8B,EAAE,KAAKC,CAAC,EAAGzB,EAAG4B,EAAK,CAAC,EAAGnC,EAC/B,GAAI,CACA,MAAQD,IAAM,QAAUA,KAAM,IAAM,EAAEQ,EAAIN,EAAE,KAAK,GAAG,MAAMkC,EAAG,KAAK5B,EAAE,KAAK,CAC7E,OACO6B,EAAP,CAAgBpC,EAAI,CAAE,MAAOoC,CAAM,CAAG,QACtC,CACI,GAAI,CACI7B,GAAK,CAACA,EAAE,OAASwB,EAAI9B,EAAE,SAAY8B,EAAE,KAAK9B,CAAC,CACnD,QACA,CAAU,GAAID,EAAG,MAAMA,EAAE,KAAO,CACpC,CACA,OAAOmC,CACX,EAGA/D,GAAW,UAAY,CACnB,QAAS+D,EAAK,CAAC,EAAGlC,EAAI,EAAGA,EAAI,UAAU,OAAQA,IAC3CkC,EAAKA,EAAG,OAAOhE,GAAO,UAAU8B,EAAE,CAAC,EACvC,OAAOkC,CACX,EAGA9D,GAAiB,UAAY,CACzB,QAASyB,EAAI,EAAGG,EAAI,EAAGoC,EAAK,UAAU,OAAQpC,EAAIoC,EAAIpC,IAAKH,GAAK,UAAUG,GAAG,OAC7E,QAASM,EAAI,MAAMT,CAAC,EAAGmC,EAAI,EAAGhC,EAAI,EAAGA,EAAIoC,EAAIpC,IACzC,QAASqC,EAAI,UAAUrC,GAAIsC,EAAI,EAAGC,EAAKF,EAAE,OAAQC,EAAIC,EAAID,IAAKN,IAC1D1B,EAAE0B,GAAKK,EAAEC,GACjB,OAAOhC,CACX,EAEAjC,GAAgB,SAAUmE,EAAIC,EAAMC,EAAM,CACtC,GAAIA,GAAQ,UAAU,SAAW,EAAG,QAAS1C,EAAI,EAAG2C,EAAIF,EAAK,OAAQP,EAAIlC,EAAI2C,EAAG3C,KACxEkC,GAAM,EAAElC,KAAKyC,MACRP,IAAIA,EAAK,MAAM,UAAU,MAAM,KAAKO,EAAM,EAAGzC,CAAC,GACnDkC,EAAGlC,GAAKyC,EAAKzC,IAGrB,OAAOwC,EAAG,OAAON,GAAM,MAAM,UAAU,MAAM,KAAKO,CAAI,CAAC,CAC3D,EAEAnE,GAAU,SAAUe,EAAG,CACnB,OAAO,gBAAgBf,IAAW,KAAK,EAAIe,EAAG,MAAQ,IAAIf,GAAQe,CAAC,CACvE,EAEAd,GAAmB,SAAUoC,EAASC,EAAYE,EAAW,CACzD,GAAI,CAAC,OAAO,cAAe,MAAM,IAAI,UAAU,sCAAsC,EACrF,IAAIa,EAAIb,EAAU,MAAMH,EAASC,GAAc,CAAC,CAAC,EAAGZ,EAAG4C,EAAI,CAAC,EAC5D,OAAO5C,EAAI,CAAC,EAAG4B,EAAK,MAAM,EAAGA,EAAK,OAAO,EAAGA,EAAK,QAAQ,EAAG5B,EAAE,OAAO,eAAiB,UAAY,CAAE,OAAO,IAAM,EAAGA,EACpH,SAAS4B,EAAK9B,EAAG,CAAM6B,EAAE7B,KAAIE,EAAEF,GAAK,SAAUT,EAAG,CAAE,OAAO,IAAI,QAAQ,SAAUgD,EAAG5C,EAAG,CAAEmD,EAAE,KAAK,CAAC9C,EAAGT,EAAGgD,EAAG5C,CAAC,CAAC,EAAI,GAA
KoD,EAAO/C,EAAGT,CAAC,CAAG,CAAC,CAAG,EAAG,CACzI,SAASwD,EAAO/C,EAAGT,EAAG,CAAE,GAAI,CAAE+B,EAAKO,EAAE7B,GAAGT,CAAC,CAAC,CAAG,OAASU,EAAP,CAAY+C,EAAOF,EAAE,GAAG,GAAI7C,CAAC,CAAG,CAAE,CACjF,SAASqB,EAAKd,EAAG,CAAEA,EAAE,iBAAiBhC,GAAU,QAAQ,QAAQgC,EAAE,MAAM,CAAC,EAAE,KAAKyC,EAAS7B,CAAM,EAAI4B,EAAOF,EAAE,GAAG,GAAItC,CAAC,CAAI,CACxH,SAASyC,EAAQ/B,EAAO,CAAE6B,EAAO,OAAQ7B,CAAK,CAAG,CACjD,SAASE,EAAOF,EAAO,CAAE6B,EAAO,QAAS7B,CAAK,CAAG,CACjD,SAAS8B,EAAOrB,EAAGpC,EAAG,CAAMoC,EAAEpC,CAAC,EAAGuD,EAAE,MAAM,EAAGA,EAAE,QAAQC,EAAOD,EAAE,GAAG,GAAIA,EAAE,GAAG,EAAE,CAAG,CACrF,EAEApE,GAAmB,SAAUuD,EAAG,CAC5B,IAAI/B,EAAGN,EACP,OAAOM,EAAI,CAAC,EAAG4B,EAAK,MAAM,EAAGA,EAAK,QAAS,SAAU7B,EAAG,CAAE,MAAMA,CAAG,CAAC,EAAG6B,EAAK,QAAQ,EAAG5B,EAAE,OAAO,UAAY,UAAY,CAAE,OAAO,IAAM,EAAGA,EAC1I,SAAS4B,EAAK9B,EAAG2B,EAAG,CAAEzB,EAAEF,GAAKiC,EAAEjC,GAAK,SAAUT,EAAG,CAAE,OAAQK,EAAI,CAACA,GAAK,CAAE,MAAOpB,GAAQyD,EAAEjC,GAAGT,CAAC,CAAC,EAAG,KAAMS,IAAM,QAAS,EAAI2B,EAAIA,EAAEpC,CAAC,EAAIA,CAAG,EAAIoC,CAAG,CAClJ,EAEAhD,GAAgB,SAAUsD,EAAG,CACzB,GAAI,CAAC,OAAO,cAAe,MAAM,IAAI,UAAU,sCAAsC,EACrF,IAAID,EAAIC,EAAE,OAAO,eAAgB,EACjC,OAAOD,EAAIA,EAAE,KAAKC,CAAC,GAAKA,EAAI,OAAO9D,IAAa,WAAaA,GAAS8D,CAAC,EAAIA,EAAE,OAAO,UAAU,EAAG,EAAI,CAAC,EAAGH,EAAK,MAAM,EAAGA,EAAK,OAAO,EAAGA,EAAK,QAAQ,EAAG,EAAE,OAAO,eAAiB,UAAY,CAAE,OAAO,IAAM,EAAG,GAC9M,SAASA,EAAK9B,EAAG,CAAE,EAAEA,GAAKiC,EAAEjC,IAAM,SAAUT,EAAG,CAAE,OAAO,IAAI,QAAQ,SAAU4B,EAASC,EAAQ,CAAE7B,EAAI0C,EAAEjC,GAAGT,CAAC,EAAGyD,EAAO7B,EAASC,EAAQ7B,EAAE,KAAMA,EAAE,KAAK,CAAG,CAAC,CAAG,CAAG,CAC/J,SAASyD,EAAO7B,EAASC,EAAQ1B,EAAGH,EAAG,CAAE,QAAQ,QAAQA,CAAC,EAAE,KAAK,SAASA,EAAG,CAAE4B,EAAQ,CAAE,MAAO5B,EAAG,KAAMG,CAAE,CAAC,CAAG,EAAG0B,CAAM,CAAG,CAC/H,EAEAxC,GAAuB,SAAUsE,EAAQC,EAAK,CAC1C,OAAI,OAAO,eAAkB,OAAO,eAAeD,EAAQ,MAAO,CAAE,MAAOC,CAAI,CAAC,EAAYD,EAAO,IAAMC,EAClGD,CACX,EAEA,IAAIE,EAAqB,OAAO,OAAU,SAASnB,EAAG1C,EAAG,CACrD,OAAO,eAAe0C,EAAG,UAAW,CAAE,WAAY,GAAM,MAAO1C,CAAE,CAAC,CACtE,EAAK,SAAS0C,EAAG1C,EAAG,CAChB0C,EAAE,QAAa1C,CACnB,EAEAV,GAAe,SAAUwE,EAAK,CAC1B,GAAIA,GAAOA,EAAI,WAAY,OAAOA,EAClC,IAA
I7B,EAAS,CAAC,EACd,GAAI6B,GAAO,KAAM,QAASnB,KAAKmB,EAASnB,IAAM,WAAa,OAAO,UAAU,eAAe,KAAKmB,EAAKnB,CAAC,GAAGjD,GAAgBuC,EAAQ6B,EAAKnB,CAAC,EACvI,OAAAkB,EAAmB5B,EAAQ6B,CAAG,EACvB7B,CACX,EAEA1C,GAAkB,SAAUuE,EAAK,CAC7B,OAAQA,GAAOA,EAAI,WAAcA,EAAM,CAAE,QAAWA,CAAI,CAC5D,EAEAtE,GAAyB,SAAUuE,EAAUC,EAAOC,EAAM7B,EAAG,CACzD,GAAI6B,IAAS,KAAO,CAAC7B,EAAG,MAAM,IAAI,UAAU,+CAA+C,EAC3F,GAAI,OAAO4B,GAAU,WAAaD,IAAaC,GAAS,CAAC5B,EAAI,CAAC4B,EAAM,IAAID,CAAQ,EAAG,MAAM,IAAI,UAAU,0EAA0E,EACjL,OAAOE,IAAS,IAAM7B,EAAI6B,IAAS,IAAM7B,EAAE,KAAK2B,CAAQ,EAAI3B,EAAIA,EAAE,MAAQ4B,EAAM,IAAID,CAAQ,CAChG,EAEAtE,GAAyB,SAAUsE,EAAUC,EAAOrC,EAAOsC,EAAM7B,EAAG,CAChE,GAAI6B,IAAS,IAAK,MAAM,IAAI,UAAU,gCAAgC,EACtE,GAAIA,IAAS,KAAO,CAAC7B,EAAG,MAAM,IAAI,UAAU,+CAA+C,EAC3F,GAAI,OAAO4B,GAAU,WAAaD,IAAaC,GAAS,CAAC5B,EAAI,CAAC4B,EAAM,IAAID,CAAQ,EAAG,MAAM,IAAI,UAAU,yEAAyE,EAChL,OAAQE,IAAS,IAAM7B,EAAE,KAAK2B,EAAUpC,CAAK,EAAIS,EAAIA,EAAE,MAAQT,EAAQqC,EAAM,IAAID,EAAUpC,CAAK,EAAIA,CACxG,EAEA1B,EAAS,YAAa9B,EAAS,EAC/B8B,EAAS,WAAY7B,EAAQ,EAC7B6B,EAAS,SAAU5B,EAAM,EACzB4B,EAAS,aAAc3B,EAAU,EACjC2B,EAAS,UAAW1B,EAAO,EAC3B0B,EAAS,aAAczB,EAAU,EACjCyB,EAAS,YAAaxB,EAAS,EAC/BwB,EAAS,cAAevB,EAAW,EACnCuB,EAAS,eAAgBtB,EAAY,EACrCsB,EAAS,kBAAmBP,EAAe,EAC3CO,EAAS,WAAYrB,EAAQ,EAC7BqB,EAAS,SAAUpB,EAAM,EACzBoB,EAAS,WAAYnB,EAAQ,EAC7BmB,EAAS,iBAAkBlB,EAAc,EACzCkB,EAAS,gBAAiBjB,EAAa,EACvCiB,EAAS,UAAWhB,EAAO,EAC3BgB,EAAS,mBAAoBf,EAAgB,EAC7Ce,EAAS,mBAAoBd,EAAgB,EAC7Cc,EAAS,gBAAiBb,EAAa,EACvCa,EAAS,uBAAwBZ,EAAoB,EACrDY,EAAS,eAAgBX,EAAY,EACrCW,EAAS,kBAAmBV,EAAe,EAC3CU,EAAS,yBAA0BT,EAAsB,EACzDS,EAAS,yBAA0BR,EAAsB,CAC7D,CAAC,ICjTD,IAAAyE,GAAAC,GAAA,CAAAC,GAAAC,KAAA;AAAA;AAAA;AAAA;AAAA;AAAA,IAMC,SAA0CC,EAAMC,EAAS,CACtD,OAAOH,IAAY,UAAY,OAAOC,IAAW,SACnDA,GAAO,QAAUE,EAAQ,EAClB,OAAO,QAAW,YAAc,OAAO,IAC9C,OAAO,CAAC,EAAGA,CAAO,EACX,OAAOH,IAAY,SAC1BA,GAAQ,YAAiBG,EAAQ,EAEjCD,EAAK,YAAiBC,EAAQ,CAChC,GAAGH,GAAM,UAAW,CACpB,OAAiB,UAAW,CAClB,IAAII,EAAuB,CAE/B,IACC,SAASC,EAAyBC,EAAqBC,EAAqB,CAEnF,aAGAA,EAAoB,EAAED,EAAqB,CACzC,QAAW,UAAW,CAAE,OAAqBE,EAAW,CAC1D,CAAC,EAGD,IAAIC
,EAAeF,EAAoB,GAAG,EACtCG,EAAoCH,EAAoB,EAAEE,CAAY,EAEtEE,EAASJ,EAAoB,GAAG,EAChCK,EAA8BL,EAAoB,EAAEI,CAAM,EAE1DE,EAAaN,EAAoB,GAAG,EACpCO,EAA8BP,EAAoB,EAAEM,CAAU,EAOlE,SAASE,EAAQC,EAAM,CACrB,GAAI,CACF,OAAO,SAAS,YAAYA,CAAI,CAClC,OAASC,EAAP,CACA,MAAO,EACT,CACF,CAUA,IAAIC,EAAqB,SAA4BC,EAAQ,CAC3D,IAAIC,EAAeN,EAAe,EAAEK,CAAM,EAC1C,OAAAJ,EAAQ,KAAK,EACNK,CACT,EAEiCC,EAAeH,EAOhD,SAASI,EAAkBC,EAAO,CAChC,IAAIC,EAAQ,SAAS,gBAAgB,aAAa,KAAK,IAAM,MACzDC,EAAc,SAAS,cAAc,UAAU,EAEnDA,EAAY,MAAM,SAAW,OAE7BA,EAAY,MAAM,OAAS,IAC3BA,EAAY,MAAM,QAAU,IAC5BA,EAAY,MAAM,OAAS,IAE3BA,EAAY,MAAM,SAAW,WAC7BA,EAAY,MAAMD,EAAQ,QAAU,QAAU,UAE9C,IAAIE,EAAY,OAAO,aAAe,SAAS,gBAAgB,UAC/D,OAAAD,EAAY,MAAM,IAAM,GAAG,OAAOC,EAAW,IAAI,EACjDD,EAAY,aAAa,WAAY,EAAE,EACvCA,EAAY,MAAQF,EACbE,CACT,CAYA,IAAIE,EAAiB,SAAwBJ,EAAOK,EAAS,CAC3D,IAAIH,EAAcH,EAAkBC,CAAK,EACzCK,EAAQ,UAAU,YAAYH,CAAW,EACzC,IAAIL,EAAeN,EAAe,EAAEW,CAAW,EAC/C,OAAAV,EAAQ,MAAM,EACdU,EAAY,OAAO,EACZL,CACT,EASIS,EAAsB,SAA6BV,EAAQ,CAC7D,IAAIS,EAAU,UAAU,OAAS,GAAK,UAAU,KAAO,OAAY,UAAU,GAAK,CAChF,UAAW,SAAS,IACtB,EACIR,EAAe,GAEnB,OAAI,OAAOD,GAAW,SACpBC,EAAeO,EAAeR,EAAQS,CAAO,EACpCT,aAAkB,kBAAoB,CAAC,CAAC,OAAQ,SAAU,MAAO,MAAO,UAAU,EAAE,SAASA,GAAW,KAA4B,OAASA,EAAO,IAAI,EAEjKC,EAAeO,EAAeR,EAAO,MAAOS,CAAO,GAEnDR,EAAeN,EAAe,EAAEK,CAAM,EACtCJ,EAAQ,MAAM,GAGTK,CACT,EAEiCU,EAAgBD,EAEjD,SAASE,EAAQC,EAAK,CAA6B,OAAI,OAAO,QAAW,YAAc,OAAO,OAAO,UAAa,SAAYD,EAAU,SAAiBC,EAAK,CAAE,OAAO,OAAOA,CAAK,EAAYD,EAAU,SAAiBC,EAAK,CAAE,OAAOA,GAAO,OAAO,QAAW,YAAcA,EAAI,cAAgB,QAAUA,IAAQ,OAAO,UAAY,SAAW,OAAOA,CAAK,EAAYD,EAAQC,CAAG,CAAG,CAUzX,IAAIC,GAAyB,UAAkC,CAC7D,IAAIL,EAAU,UAAU,OAAS,GAAK,UAAU,KAAO,OAAY,UAAU,GAAK,CAAC,EAE/EM,EAAkBN,EAAQ,OAC1BO,EAASD,IAAoB,OAAS,OAASA,EAC/CE,EAAYR,EAAQ,UACpBT,EAASS,EAAQ,OACjBS,GAAOT,EAAQ,KAEnB,GAAIO,IAAW,QAAUA,IAAW,MAClC,MAAM,IAAI,MAAM,oDAAoD,EAItE,GAAIhB,IAAW,OACb,GAAIA,GAAUY,EAAQZ,CAAM,IAAM,UAAYA,EAAO,WAAa,EAAG,CACnE,GAAIgB,IAAW,QAAUhB,EAAO,aAAa,UAAU,EACrD,MAAM,IAAI,MAAM,mFAAmF,EAGrG,GAAIgB,IAAW,QAAUhB,EAAO,aAAa,UAAU,GAAKA,EAAO,aAAa,UAAU,GACxF,MAAM,IAAI,MAAM,uGAAwG,CAE5H,
KACE,OAAM,IAAI,MAAM,6CAA6C,EAKjE,GAAIkB,GACF,OAAOP,EAAaO,GAAM,CACxB,UAAWD,CACb,CAAC,EAIH,GAAIjB,EACF,OAAOgB,IAAW,MAAQd,EAAYF,CAAM,EAAIW,EAAaX,EAAQ,CACnE,UAAWiB,CACb,CAAC,CAEL,EAEiCE,GAAmBL,GAEpD,SAASM,GAAiBP,EAAK,CAA6B,OAAI,OAAO,QAAW,YAAc,OAAO,OAAO,UAAa,SAAYO,GAAmB,SAAiBP,EAAK,CAAE,OAAO,OAAOA,CAAK,EAAYO,GAAmB,SAAiBP,EAAK,CAAE,OAAOA,GAAO,OAAO,QAAW,YAAcA,EAAI,cAAgB,QAAUA,IAAQ,OAAO,UAAY,SAAW,OAAOA,CAAK,EAAYO,GAAiBP,CAAG,CAAG,CAE7Z,SAASQ,GAAgBC,EAAUC,EAAa,CAAE,GAAI,EAAED,aAAoBC,GAAgB,MAAM,IAAI,UAAU,mCAAmC,CAAK,CAExJ,SAASC,GAAkBxB,EAAQyB,EAAO,CAAE,QAASC,EAAI,EAAGA,EAAID,EAAM,OAAQC,IAAK,CAAE,IAAIC,EAAaF,EAAMC,GAAIC,EAAW,WAAaA,EAAW,YAAc,GAAOA,EAAW,aAAe,GAAU,UAAWA,IAAYA,EAAW,SAAW,IAAM,OAAO,eAAe3B,EAAQ2B,EAAW,IAAKA,CAAU,CAAG,CAAE,CAE5T,SAASC,GAAaL,EAAaM,EAAYC,EAAa,CAAE,OAAID,GAAYL,GAAkBD,EAAY,UAAWM,CAAU,EAAOC,GAAaN,GAAkBD,EAAaO,CAAW,EAAUP,CAAa,CAEtN,SAASQ,GAAUC,EAAUC,EAAY,CAAE,GAAI,OAAOA,GAAe,YAAcA,IAAe,KAAQ,MAAM,IAAI,UAAU,oDAAoD,EAAKD,EAAS,UAAY,OAAO,OAAOC,GAAcA,EAAW,UAAW,CAAE,YAAa,CAAE,MAAOD,EAAU,SAAU,GAAM,aAAc,EAAK,CAAE,CAAC,EAAOC,GAAYC,GAAgBF,EAAUC,CAAU,CAAG,CAEhY,SAASC,GAAgBC,EAAGC,EAAG,CAAE,OAAAF,GAAkB,OAAO,gBAAkB,SAAyBC,EAAGC,EAAG,CAAE,OAAAD,EAAE,UAAYC,EAAUD,CAAG,EAAUD,GAAgBC,EAAGC,CAAC,CAAG,CAEzK,SAASC,GAAaC,EAAS,CAAE,IAAIC,EAA4BC,GAA0B,EAAG,OAAO,UAAgC,CAAE,IAAIC,EAAQC,GAAgBJ,CAAO,EAAGK,EAAQ,GAAIJ,EAA2B,CAAE,IAAIK,EAAYF,GAAgB,IAAI,EAAE,YAAaC,EAAS,QAAQ,UAAUF,EAAO,UAAWG,CAAS,CAAG,MAASD,EAASF,EAAM,MAAM,KAAM,SAAS,EAAK,OAAOI,GAA2B,KAAMF,CAAM,CAAG,CAAG,CAExa,SAASE,GAA2BC,EAAMC,EAAM,CAAE,OAAIA,IAAS3B,GAAiB2B,CAAI,IAAM,UAAY,OAAOA,GAAS,YAAsBA,EAAeC,GAAuBF,CAAI,CAAG,CAEzL,SAASE,GAAuBF,EAAM,CAAE,GAAIA,IAAS,OAAU,MAAM,IAAI,eAAe,2DAA2D,EAAK,OAAOA,CAAM,CAErK,SAASN,IAA4B,CAA0E,GAApE,OAAO,SAAY,aAAe,CAAC,QAAQ,WAA6B,QAAQ,UAAU,KAAM,MAAO,GAAO,GAAI,OAAO,OAAU,WAAY,MAAO,GAAM,GAAI,CAAE,YAAK,UAAU,SAAS,KAAK,QAAQ,UAAU,KAAM,CAAC,EAAG,UAAY,CAAC,CAAC,CAAC,EAAU,EAAM,OAASS,EAAP,CAAY,MAAO,EAAO,CAAE,CAEnU,SAASP,GAAgBP,EAAG,CAAE,OAAAO,GAAkB,OAAO,eAAiB,OAAO,eAAiB,SAAyBP,EAAG,CAAE,OAAOA,EAAE,WAAa,OAAO,eAA
eA,CAAC,CAAG,EAAUO,GAAgBP,CAAC,CAAG,CAa5M,SAASe,GAAkBC,EAAQC,EAAS,CAC1C,IAAIC,EAAY,kBAAkB,OAAOF,CAAM,EAE/C,GAAI,EAACC,EAAQ,aAAaC,CAAS,EAInC,OAAOD,EAAQ,aAAaC,CAAS,CACvC,CAOA,IAAIC,GAAyB,SAAUC,EAAU,CAC/CxB,GAAUuB,EAAWC,CAAQ,EAE7B,IAAIC,EAASnB,GAAaiB,CAAS,EAMnC,SAASA,EAAUG,EAAShD,EAAS,CACnC,IAAIiD,EAEJ,OAAArC,GAAgB,KAAMiC,CAAS,EAE/BI,EAAQF,EAAO,KAAK,IAAI,EAExBE,EAAM,eAAejD,CAAO,EAE5BiD,EAAM,YAAYD,CAAO,EAElBC,CACT,CAQA,OAAA9B,GAAa0B,EAAW,CAAC,CACvB,IAAK,iBACL,MAAO,UAA0B,CAC/B,IAAI7C,EAAU,UAAU,OAAS,GAAK,UAAU,KAAO,OAAY,UAAU,GAAK,CAAC,EACnF,KAAK,OAAS,OAAOA,EAAQ,QAAW,WAAaA,EAAQ,OAAS,KAAK,cAC3E,KAAK,OAAS,OAAOA,EAAQ,QAAW,WAAaA,EAAQ,OAAS,KAAK,cAC3E,KAAK,KAAO,OAAOA,EAAQ,MAAS,WAAaA,EAAQ,KAAO,KAAK,YACrE,KAAK,UAAYW,GAAiBX,EAAQ,SAAS,IAAM,SAAWA,EAAQ,UAAY,SAAS,IACnG,CAMF,EAAG,CACD,IAAK,cACL,MAAO,SAAqBgD,EAAS,CACnC,IAAIE,EAAS,KAEb,KAAK,SAAWlE,EAAe,EAAEgE,EAAS,QAAS,SAAUR,GAAG,CAC9D,OAAOU,EAAO,QAAQV,EAAC,CACzB,CAAC,CACH,CAMF,EAAG,CACD,IAAK,UACL,MAAO,SAAiBA,EAAG,CACzB,IAAIQ,EAAUR,EAAE,gBAAkBA,EAAE,cAChCjC,GAAS,KAAK,OAAOyC,CAAO,GAAK,OACjCvC,GAAOC,GAAgB,CACzB,OAAQH,GACR,UAAW,KAAK,UAChB,OAAQ,KAAK,OAAOyC,CAAO,EAC3B,KAAM,KAAK,KAAKA,CAAO,CACzB,CAAC,EAED,KAAK,KAAKvC,GAAO,UAAY,QAAS,CACpC,OAAQF,GACR,KAAME,GACN,QAASuC,EACT,eAAgB,UAA0B,CACpCA,GACFA,EAAQ,MAAM,EAGhB,OAAO,aAAa,EAAE,gBAAgB,CACxC,CACF,CAAC,CACH,CAMF,EAAG,CACD,IAAK,gBACL,MAAO,SAAuBA,EAAS,CACrC,OAAOP,GAAkB,SAAUO,CAAO,CAC5C,CAMF,EAAG,CACD,IAAK,gBACL,MAAO,SAAuBA,EAAS,CACrC,IAAIG,EAAWV,GAAkB,SAAUO,CAAO,EAElD,GAAIG,EACF,OAAO,SAAS,cAAcA,CAAQ,CAE1C,CAQF,EAAG,CACD,IAAK,cAML,MAAO,SAAqBH,EAAS,CACnC,OAAOP,GAAkB,OAAQO,CAAO,CAC1C,CAKF,EAAG,CACD,IAAK,UACL,MAAO,UAAmB,CACxB,KAAK,SAAS,QAAQ,CACxB,CACF,CAAC,EAAG,CAAC,CACH,IAAK,OACL,MAAO,SAAczD,EAAQ,CAC3B,IAAIS,EAAU,UAAU,OAAS,GAAK,UAAU,KAAO,OAAY,UAAU,GAAK,CAChF,UAAW,SAAS,IACtB,EACA,OAAOE,EAAaX,EAAQS,CAAO,CACrC,CAOF,EAAG,CACD,IAAK,MACL,MAAO,SAAaT,EAAQ,CAC1B,OAAOE,EAAYF,CAAM,CAC3B,CAOF,EAAG,CACD,IAAK,cACL,MAAO,UAAuB,CAC5B,IAAIgB,EAAS,UAAU,OAAS,GAAK,UAAU,KAAO,OAAY,UAAU,GAAK,CAAC,OAAQ,KAAK,EAC3F6C,EAAU,OAAO7C,G
AAW,SAAW,CAACA,CAAM,EAAIA,EAClD8C,GAAU,CAAC,CAAC,SAAS,sBACzB,OAAAD,EAAQ,QAAQ,SAAU7C,GAAQ,CAChC8C,GAAUA,IAAW,CAAC,CAAC,SAAS,sBAAsB9C,EAAM,CAC9D,CAAC,EACM8C,EACT,CACF,CAAC,CAAC,EAEKR,CACT,EAAG/D,EAAqB,CAAE,EAEOF,GAAaiE,EAExC,EAEA,IACC,SAASxE,EAAQ,CAExB,IAAIiF,EAAqB,EAKzB,GAAI,OAAO,SAAY,aAAe,CAAC,QAAQ,UAAU,QAAS,CAC9D,IAAIC,EAAQ,QAAQ,UAEpBA,EAAM,QAAUA,EAAM,iBACNA,EAAM,oBACNA,EAAM,mBACNA,EAAM,kBACNA,EAAM,qBAC1B,CASA,SAASC,EAASb,EAASQ,EAAU,CACjC,KAAOR,GAAWA,EAAQ,WAAaW,GAAoB,CACvD,GAAI,OAAOX,EAAQ,SAAY,YAC3BA,EAAQ,QAAQQ,CAAQ,EAC1B,OAAOR,EAETA,EAAUA,EAAQ,UACtB,CACJ,CAEAtE,EAAO,QAAUmF,CAGX,EAEA,IACC,SAASnF,EAAQoF,EAA0B9E,EAAqB,CAEvE,IAAI6E,EAAU7E,EAAoB,GAAG,EAYrC,SAAS+E,EAAUf,EAASQ,EAAU/D,EAAMuE,EAAUC,EAAY,CAC9D,IAAIC,EAAaC,EAAS,MAAM,KAAM,SAAS,EAE/C,OAAAnB,EAAQ,iBAAiBvD,EAAMyE,EAAYD,CAAU,EAE9C,CACH,QAAS,UAAW,CAChBjB,EAAQ,oBAAoBvD,EAAMyE,EAAYD,CAAU,CAC5D,CACJ,CACJ,CAYA,SAASG,EAASC,EAAUb,EAAU/D,EAAMuE,EAAUC,EAAY,CAE9D,OAAI,OAAOI,EAAS,kBAAqB,WAC9BN,EAAU,MAAM,KAAM,SAAS,EAItC,OAAOtE,GAAS,WAGTsE,EAAU,KAAK,KAAM,QAAQ,EAAE,MAAM,KAAM,SAAS,GAI3D,OAAOM,GAAa,WACpBA,EAAW,SAAS,iBAAiBA,CAAQ,GAI1C,MAAM,UAAU,IAAI,KAAKA,EAAU,SAAUrB,EAAS,CACzD,OAAOe,EAAUf,EAASQ,EAAU/D,EAAMuE,EAAUC,CAAU,CAClE,CAAC,EACL,CAWA,SAASE,EAASnB,EAASQ,EAAU/D,EAAMuE,EAAU,CACjD,OAAO,SAASnB,EAAG,CACfA,EAAE,eAAiBgB,EAAQhB,EAAE,OAAQW,CAAQ,EAEzCX,EAAE,gBACFmB,EAAS,KAAKhB,EAASH,CAAC,CAEhC,CACJ,CAEAnE,EAAO,QAAU0F,CAGX,EAEA,IACC,SAAStF,EAAyBL,EAAS,CAQlDA,EAAQ,KAAO,SAASuB,EAAO,CAC3B,OAAOA,IAAU,QACVA,aAAiB,aACjBA,EAAM,WAAa,CAC9B,EAQAvB,EAAQ,SAAW,SAASuB,EAAO,CAC/B,IAAIP,EAAO,OAAO,UAAU,SAAS,KAAKO,CAAK,EAE/C,OAAOA,IAAU,SACTP,IAAS,qBAAuBA,IAAS,4BACzC,WAAYO,IACZA,EAAM,SAAW,GAAKvB,EAAQ,KAAKuB,EAAM,EAAE,EACvD,EAQAvB,EAAQ,OAAS,SAASuB,EAAO,CAC7B,OAAO,OAAOA,GAAU,UACjBA,aAAiB,MAC5B,EAQAvB,EAAQ,GAAK,SAASuB,EAAO,CACzB,IAAIP,EAAO,OAAO,UAAU,SAAS,KAAKO,CAAK,EAE/C,OAAOP,IAAS,mBACpB,CAGM,EAEA,IACC,SAASf,EAAQoF,EAA0B9E,EAAqB,CAEvE,IAAIsF,EAAKtF,EAAoB,GAAG,EAC5BoF,EAAWpF,EAAoB,GAAG,EAWtC,SAASI,EAAOQ,EAAQH,EAAMuE,EAAU,CACpC,GAAI,CAACpE,GAAU,CAACH,GAAQ,C
AACuE,EACrB,MAAM,IAAI,MAAM,4BAA4B,EAGhD,GAAI,CAACM,EAAG,OAAO7E,CAAI,EACf,MAAM,IAAI,UAAU,kCAAkC,EAG1D,GAAI,CAAC6E,EAAG,GAAGN,CAAQ,EACf,MAAM,IAAI,UAAU,mCAAmC,EAG3D,GAAIM,EAAG,KAAK1E,CAAM,EACd,OAAO2E,EAAW3E,EAAQH,EAAMuE,CAAQ,EAEvC,GAAIM,EAAG,SAAS1E,CAAM,EACvB,OAAO4E,EAAe5E,EAAQH,EAAMuE,CAAQ,EAE3C,GAAIM,EAAG,OAAO1E,CAAM,EACrB,OAAO6E,EAAe7E,EAAQH,EAAMuE,CAAQ,EAG5C,MAAM,IAAI,UAAU,2EAA2E,CAEvG,CAWA,SAASO,EAAWG,EAAMjF,EAAMuE,EAAU,CACtC,OAAAU,EAAK,iBAAiBjF,EAAMuE,CAAQ,EAE7B,CACH,QAAS,UAAW,CAChBU,EAAK,oBAAoBjF,EAAMuE,CAAQ,CAC3C,CACJ,CACJ,CAWA,SAASQ,EAAeG,EAAUlF,EAAMuE,EAAU,CAC9C,aAAM,UAAU,QAAQ,KAAKW,EAAU,SAASD,EAAM,CAClDA,EAAK,iBAAiBjF,EAAMuE,CAAQ,CACxC,CAAC,EAEM,CACH,QAAS,UAAW,CAChB,MAAM,UAAU,QAAQ,KAAKW,EAAU,SAASD,EAAM,CAClDA,EAAK,oBAAoBjF,EAAMuE,CAAQ,CAC3C,CAAC,CACL,CACJ,CACJ,CAWA,SAASS,EAAejB,EAAU/D,EAAMuE,EAAU,CAC9C,OAAOI,EAAS,SAAS,KAAMZ,EAAU/D,EAAMuE,CAAQ,CAC3D,CAEAtF,EAAO,QAAUU,CAGX,EAEA,IACC,SAASV,EAAQ,CAExB,SAASkG,EAAO5B,EAAS,CACrB,IAAInD,EAEJ,GAAImD,EAAQ,WAAa,SACrBA,EAAQ,MAAM,EAEdnD,EAAemD,EAAQ,cAElBA,EAAQ,WAAa,SAAWA,EAAQ,WAAa,WAAY,CACtE,IAAI6B,EAAa7B,EAAQ,aAAa,UAAU,EAE3C6B,GACD7B,EAAQ,aAAa,WAAY,EAAE,EAGvCA,EAAQ,OAAO,EACfA,EAAQ,kBAAkB,EAAGA,EAAQ,MAAM,MAAM,EAE5C6B,GACD7B,EAAQ,gBAAgB,UAAU,EAGtCnD,EAAemD,EAAQ,KAC3B,KACK,CACGA,EAAQ,aAAa,iBAAiB,GACtCA,EAAQ,MAAM,EAGlB,IAAI8B,EAAY,OAAO,aAAa,EAChCC,EAAQ,SAAS,YAAY,EAEjCA,EAAM,mBAAmB/B,CAAO,EAChC8B,EAAU,gBAAgB,EAC1BA,EAAU,SAASC,CAAK,EAExBlF,EAAeiF,EAAU,SAAS,CACtC,CAEA,OAAOjF,CACX,CAEAnB,EAAO,QAAUkG,CAGX,EAEA,IACC,SAASlG,EAAQ,CAExB,SAASsG,GAAK,CAGd,CAEAA,EAAE,UAAY,CACZ,GAAI,SAAUC,EAAMjB,EAAUkB,EAAK,CACjC,IAAIrC,EAAI,KAAK,IAAM,KAAK,EAAI,CAAC,GAE7B,OAACA,EAAEoC,KAAUpC,EAAEoC,GAAQ,CAAC,IAAI,KAAK,CAC/B,GAAIjB,EACJ,IAAKkB,CACP,CAAC,EAEM,IACT,EAEA,KAAM,SAAUD,EAAMjB,EAAUkB,EAAK,CACnC,IAAIxC,EAAO,KACX,SAASyB,GAAY,CACnBzB,EAAK,IAAIuC,EAAMd,CAAQ,EACvBH,EAAS,MAAMkB,EAAK,SAAS,CAC/B,CAEA,OAAAf,EAAS,EAAIH,EACN,KAAK,GAAGiB,EAAMd,EAAUe,CAAG,CACpC,EAEA,KAAM,SAAUD,EAAM,CACpB,IAAIE,EAAO,CAAC,EAAE,MAAM,KAAK,UAAW,CAAC,EACjCC,IAAW,KAAK,IAAM,KAAK,EAA
I,CAAC,IAAIH,IAAS,CAAC,GAAG,MAAM,EACvD3D,EAAI,EACJ+D,EAAMD,EAAO,OAEjB,IAAK9D,EAAGA,EAAI+D,EAAK/D,IACf8D,EAAO9D,GAAG,GAAG,MAAM8D,EAAO9D,GAAG,IAAK6D,CAAI,EAGxC,OAAO,IACT,EAEA,IAAK,SAAUF,EAAMjB,EAAU,CAC7B,IAAInB,EAAI,KAAK,IAAM,KAAK,EAAI,CAAC,GACzByC,EAAOzC,EAAEoC,GACTM,EAAa,CAAC,EAElB,GAAID,GAAQtB,EACV,QAAS1C,EAAI,EAAG+D,EAAMC,EAAK,OAAQhE,EAAI+D,EAAK/D,IACtCgE,EAAKhE,GAAG,KAAO0C,GAAYsB,EAAKhE,GAAG,GAAG,IAAM0C,GAC9CuB,EAAW,KAAKD,EAAKhE,EAAE,EAQ7B,OAACiE,EAAW,OACR1C,EAAEoC,GAAQM,EACV,OAAO1C,EAAEoC,GAEN,IACT,CACF,EAEAvG,EAAO,QAAUsG,EACjBtG,EAAO,QAAQ,YAAcsG,CAGvB,CAEI,EAGIQ,EAA2B,CAAC,EAGhC,SAASxG,EAAoByG,EAAU,CAEtC,GAAGD,EAAyBC,GAC3B,OAAOD,EAAyBC,GAAU,QAG3C,IAAI/G,EAAS8G,EAAyBC,GAAY,CAGjD,QAAS,CAAC,CACX,EAGA,OAAA5G,EAAoB4G,GAAU/G,EAAQA,EAAO,QAASM,CAAmB,EAGlEN,EAAO,OACf,CAIA,OAAC,UAAW,CAEXM,EAAoB,EAAI,SAASN,EAAQ,CACxC,IAAIgH,EAAShH,GAAUA,EAAO,WAC7B,UAAW,CAAE,OAAOA,EAAO,OAAY,EACvC,UAAW,CAAE,OAAOA,CAAQ,EAC7B,OAAAM,EAAoB,EAAE0G,EAAQ,CAAE,EAAGA,CAAO,CAAC,EACpCA,CACR,CACD,EAAE,EAGD,UAAW,CAEX1G,EAAoB,EAAI,SAASP,EAASkH,EAAY,CACrD,QAAQC,KAAOD,EACX3G,EAAoB,EAAE2G,EAAYC,CAAG,GAAK,CAAC5G,EAAoB,EAAEP,EAASmH,CAAG,GAC/E,OAAO,eAAenH,EAASmH,EAAK,CAAE,WAAY,GAAM,IAAKD,EAAWC,EAAK,CAAC,CAGjF,CACD,EAAE,EAGD,UAAW,CACX5G,EAAoB,EAAI,SAASyB,EAAKoF,EAAM,CAAE,OAAO,OAAO,UAAU,eAAe,KAAKpF,EAAKoF,CAAI,CAAG,CACvG,EAAE,EAMK7G,EAAoB,GAAG,CAC/B,EAAG,EACX,OACD,CAAC,ICz3BD,IAAA8G,GAAAC,GAAA,CAAAC,GAAAC,KAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA,GAeA,IAAIC,GAAkB,UAOtBD,GAAO,QAAUE,GAUjB,SAASA,GAAWC,EAAQ,CAC1B,IAAIC,EAAM,GAAKD,EACXE,EAAQJ,GAAgB,KAAKG,CAAG,EAEpC,GAAI,CAACC,EACH,OAAOD,EAGT,IAAIE,EACAC,EAAO,GACPC,EAAQ,EACRC,EAAY,EAEhB,IAAKD,EAAQH,EAAM,MAAOG,EAAQJ,EAAI,OAAQI,IAAS,CACrD,OAAQJ,EAAI,WAAWI,CAAK,EAAG,CAC7B,IAAK,IACHF,EAAS,SACT,MACF,IAAK,IACHA,EAAS,QACT,MACF,IAAK,IACHA,EAAS,QACT,MACF,IAAK,IACHA,EAAS,OACT,MACF,IAAK,IACHA,EAAS,OACT,MACF,QACE,QACJ,CAEIG,IAAcD,IAChBD,GAAQH,EAAI,UAAUK,EAAWD,CAAK,GAGxCC,EAAYD,EAAQ,EACpBD,GAAQD,CACV,CAEA,OAAOG,IAAcD,EACjBD,EAAOH,EAAI,UAAUK,EAAWD,CAAK,EACrCD,CACN,IC7EA,MAAM,UAAU,MAAM,OAA
O,eAAe,MAAM,UAAU,OAAO,CAAC,aAAa,GAAG,MAAM,SAASG,GAAG,CAAC,IAAI,EAAE,MAAM,UAAU,EAAE,EAAE,EAAE,OAAO,UAAU,EAAE,EAAE,OAAO,EAAE,MAAM,UAAU,OAAO,KAAK,KAAK,SAASC,EAAEC,EAAE,CAAC,OAAO,MAAM,QAAQA,CAAC,EAAED,EAAE,KAAK,MAAMA,EAAED,EAAE,KAAKE,EAAE,EAAE,CAAC,CAAC,EAAED,EAAE,KAAKC,CAAC,EAAED,CAAC,EAAE,CAAC,CAAC,EAAE,MAAM,UAAU,MAAM,KAAK,IAAI,CAAC,EAAE,SAAS,EAAE,CAAC,EAAE,MAAM,UAAU,SAAS,OAAO,eAAe,MAAM,UAAU,UAAU,CAAC,aAAa,GAAG,MAAM,SAASD,EAAE,CAAC,OAAO,MAAM,UAAU,IAAI,MAAM,KAAK,SAAS,EAAE,KAAK,CAAC,EAAE,SAAS,EAAE,CAAC,ECuBxf,IAAAG,GAAO,SCvBP,KAAK,QAAQ,KAAK,MAAM,SAAS,EAAEC,EAAE,CAAC,OAAOA,EAAEA,GAAG,CAAC,EAAE,IAAI,QAAQ,SAASC,EAAEC,EAAE,CAAC,IAAIC,EAAE,IAAI,eAAeC,EAAE,CAAC,EAAEC,EAAE,CAAC,EAAEC,EAAE,CAAC,EAAEC,EAAE,UAAU,CAAC,MAAM,CAAC,IAAOJ,EAAE,OAAO,IAAI,IAAjB,EAAoB,WAAWA,EAAE,WAAW,OAAOA,EAAE,OAAO,IAAIA,EAAE,YAAY,KAAK,UAAU,CAAC,OAAO,QAAQ,QAAQA,EAAE,YAAY,CAAC,EAAE,KAAK,UAAU,CAAC,OAAO,QAAQ,QAAQA,EAAE,YAAY,EAAE,KAAK,KAAK,KAAK,CAAC,EAAE,KAAK,UAAU,CAAC,OAAO,QAAQ,QAAQ,IAAI,KAAK,CAACA,EAAE,QAAQ,CAAC,CAAC,CAAC,EAAE,MAAMI,EAAE,QAAQ,CAAC,KAAK,UAAU,CAAC,OAAOH,CAAC,EAAE,QAAQ,UAAU,CAAC,OAAOC,CAAC,EAAE,IAAI,SAASG,EAAE,CAAC,OAAOF,EAAEE,EAAE,YAAY,EAAE,EAAE,IAAI,SAASA,EAAE,CAAC,OAAOA,EAAE,YAAY,IAAIF,CAAC,CAAC,CAAC,CAAC,EAAE,QAAQG,KAAKN,EAAE,KAAKH,EAAE,QAAQ,MAAM,EAAE,EAAE,EAAEG,EAAE,OAAO,UAAU,CAACA,EAAE,sBAAsB,EAAE,QAAQ,+BAA+B,SAASK,EAAER,EAAEC,EAAE,CAACG,EAAE,KAAKJ,EAAEA,EAAE,YAAY,CAAC,EAAEK,EAAE,KAAK,CAACL,EAAEC,CAAC,CAAC,EAAEK,EAAEN,GAAGM,EAAEN,GAAGM,EAAEN,GAAG,IAAIC,EAAEA,CAAC,CAAC,EAAEA,EAAEM,EAAE,CAAC,CAAC,EAAEJ,EAAE,QAAQD,EAAEC,EAAE,gBAA2BH,EAAE,aAAb,UAAyBA,EAAE,QAAQG,EAAE,iBAAiBM,EAAET,EAAE,QAAQS,EAAE,EAAEN,EAAE,KAAKH,EAAE,MAAM,IAAI,CAAC,CAAC,CAAC,GDyBj5B,IAAAU,GAAO,SEzBP,IAAAC,GAAkB,WACZ,CACF,UAAAC,GACA,SAAAC,GACA,OAAAC,GACA,WAAAC,GACA,QAAAC,GACA,WAAAC,GACA,UAAAC,GACA,YAAAC,GACA,aAAAC,GACA,gBAAAC,GACA,SAAAC,GACA,OAAAC,EACA,SAAAC,GACA,eAAAC,GACA,cAAAC,EACA,QAAAC,GACA,iBAAAC,GACA,iBAAAC,GACA,cAAAC,GACA,qBAAAC,GACA,aAAAC,GACA,gBAAAC,GACA,uBAAAC,GACA,uBAAAC,EACJ,EAAI,GAAAC,QCtBE,SAAUC,EAAWC,EAAU,CAC
nC,OAAO,OAAOA,GAAU,UAC1B,CCGM,SAAUC,GAAoBC,EAAgC,CAClE,IAAMC,EAAS,SAACC,EAAa,CAC3B,MAAM,KAAKA,CAAQ,EACnBA,EAAS,MAAQ,IAAI,MAAK,EAAG,KAC/B,EAEMC,EAAWH,EAAWC,CAAM,EAClC,OAAAE,EAAS,UAAY,OAAO,OAAO,MAAM,SAAS,EAClDA,EAAS,UAAU,YAAcA,EAC1BA,CACT,CCDO,IAAMC,GAA+CC,GAC1D,SAACC,EAAM,CACL,OAAA,SAA4CC,EAA0B,CACpED,EAAO,IAAI,EACX,KAAK,QAAUC,EACRA,EAAO,OAAM;EACxBA,EAAO,IAAI,SAACC,EAAKC,EAAC,CAAK,OAAGA,EAAI,EAAC,KAAKD,EAAI,SAAQ,CAAzB,CAA6B,EAAE,KAAK;GAAM,EACzD,GACJ,KAAK,KAAO,sBACZ,KAAK,OAASD,CAChB,CARA,CAQC,ECvBC,SAAUG,GAAaC,EAA6BC,EAAO,CAC/D,GAAID,EAAK,CACP,IAAME,EAAQF,EAAI,QAAQC,CAAI,EAC9B,GAAKC,GAASF,EAAI,OAAOE,EAAO,CAAC,EAErC,CCOA,IAAAC,GAAA,UAAA,CAyBE,SAAAA,EAAoBC,EAA4B,CAA5B,KAAA,gBAAAA,EAdb,KAAA,OAAS,GAER,KAAA,WAAmD,KAMnD,KAAA,YAAqD,IAMV,CAQnD,OAAAD,EAAA,UAAA,YAAA,UAAA,aACME,EAEJ,GAAI,CAAC,KAAK,OAAQ,CAChB,KAAK,OAAS,GAGN,IAAAC,EAAe,KAAI,WAC3B,GAAIA,EAEF,GADA,KAAK,WAAa,KACd,MAAM,QAAQA,CAAU,MAC1B,QAAqBC,EAAAC,GAAAF,CAAU,EAAAG,EAAAF,EAAA,KAAA,EAAA,CAAAE,EAAA,KAAAA,EAAAF,EAAA,KAAA,EAAE,CAA5B,IAAMG,EAAMD,EAAA,MACfC,EAAO,OAAO,IAAI,yGAGpBJ,EAAW,OAAO,IAAI,EAIlB,IAAiBK,EAAqB,KAAI,gBAClD,GAAIC,EAAWD,CAAgB,EAC7B,GAAI,CACFA,EAAgB,QACTE,EAAP,CACAR,EAASQ,aAAaC,GAAsBD,EAAE,OAAS,CAACA,CAAC,EAIrD,IAAAE,EAAgB,KAAI,YAC5B,GAAIA,EAAa,CACf,KAAK,YAAc,SACnB,QAAwBC,EAAAR,GAAAO,CAAW,EAAAE,EAAAD,EAAA,KAAA,EAAA,CAAAC,EAAA,KAAAA,EAAAD,EAAA,KAAA,EAAE,CAAhC,IAAME,EAASD,EAAA,MAClB,GAAI,CACFE,GAAcD,CAAS,QAChBE,EAAP,CACAf,EAASA,GAAM,KAANA,EAAU,CAAA,EACfe,aAAeN,GACjBT,EAAMgB,EAAAA,EAAA,CAAA,EAAAC,EAAOjB,CAAM,CAAA,EAAAiB,EAAKF,EAAI,MAAM,CAAA,EAElCf,EAAO,KAAKe,CAAG,sGAMvB,GAAIf,EACF,MAAM,IAAIS,GAAoBT,CAAM,EAG1C,EAoBAF,EAAA,UAAA,IAAA,SAAIoB,EAAuB,OAGzB,GAAIA,GAAYA,IAAa,KAC3B,GAAI,KAAK,OAGPJ,GAAcI,CAAQ,MACjB,CACL,GAAIA,aAAoBpB,EAAc,CAGpC,GAAIoB,EAAS,QAAUA,EAAS,WAAW,IAAI,EAC7C,OAEFA,EAAS,WAAW,IAAI,GAEzB,KAAK,aAAcC,EAAA,KAAK,eAAW,MAAAA,IAAA,OAAAA,EAAI,CAAA,GAAI,KAAKD,CAAQ,EAG/D,EAOQpB,EAAA,UAAA,WAAR,SAAmBsB,EAAoB,CAC7B,IAAAnB,EAAe,KAAI,WAC3B,OAAOA,IAAemB,GAAW,MAAM,QAAQnB,CAAU,GAAKA,EAAW,SAASmB,CAAM,CAC1F,EASQtB,EAAA,
UAAA,WAAR,SAAmBsB,EAAoB,CAC7B,IAAAnB,EAAe,KAAI,WAC3B,KAAK,WAAa,MAAM,QAAQA,CAAU,GAAKA,EAAW,KAAKmB,CAAM,EAAGnB,GAAcA,EAAa,CAACA,EAAYmB,CAAM,EAAIA,CAC5H,EAMQtB,EAAA,UAAA,cAAR,SAAsBsB,EAAoB,CAChC,IAAAnB,EAAe,KAAI,WACvBA,IAAemB,EACjB,KAAK,WAAa,KACT,MAAM,QAAQnB,CAAU,GACjCoB,GAAUpB,EAAYmB,CAAM,CAEhC,EAgBAtB,EAAA,UAAA,OAAA,SAAOoB,EAAsC,CACnC,IAAAR,EAAgB,KAAI,YAC5BA,GAAeW,GAAUX,EAAaQ,CAAQ,EAE1CA,aAAoBpB,GACtBoB,EAAS,cAAc,IAAI,CAE/B,EAlLcpB,EAAA,MAAS,UAAA,CACrB,IAAMwB,EAAQ,IAAIxB,EAClB,OAAAwB,EAAM,OAAS,GACRA,CACT,EAAE,EA+KJxB,GArLA,EAuLO,IAAMyB,GAAqBC,GAAa,MAEzC,SAAUC,GAAeC,EAAU,CACvC,OACEA,aAAiBF,IAChBE,GAAS,WAAYA,GAASC,EAAWD,EAAM,MAAM,GAAKC,EAAWD,EAAM,GAAG,GAAKC,EAAWD,EAAM,WAAW,CAEpH,CAEA,SAASE,GAAcC,EAAwC,CACzDF,EAAWE,CAAS,EACtBA,EAAS,EAETA,EAAU,YAAW,CAEzB,CChNO,IAAMC,GAAuB,CAClC,iBAAkB,KAClB,sBAAuB,KACvB,QAAS,OACT,sCAAuC,GACvC,yBAA0B,ICGrB,IAAMC,GAAmC,CAG9C,WAAA,SAAWC,EAAqBC,EAAgB,SAAEC,EAAA,CAAA,EAAAC,EAAA,EAAAA,EAAA,UAAA,OAAAA,IAAAD,EAAAC,EAAA,GAAA,UAAAA,GACxC,IAAAC,EAAaL,GAAe,SACpC,OAAIK,GAAQ,MAARA,EAAU,WACLA,EAAS,WAAU,MAAnBA,EAAQC,EAAA,CAAYL,EAASC,CAAO,EAAAK,EAAKJ,CAAI,CAAA,CAAA,EAE/C,WAAU,MAAA,OAAAG,EAAA,CAACL,EAASC,CAAO,EAAAK,EAAKJ,CAAI,CAAA,CAAA,CAC7C,EACA,aAAA,SAAaK,EAAM,CACT,IAAAH,EAAaL,GAAe,SACpC,QAAQK,GAAQ,KAAA,OAARA,EAAU,eAAgB,cAAcG,CAAa,CAC/D,EACA,SAAU,QCjBN,SAAUC,GAAqBC,EAAQ,CAC3CC,GAAgB,WAAW,UAAA,CACjB,IAAAC,EAAqBC,GAAM,iBACnC,GAAID,EAEFA,EAAiBF,CAAG,MAGpB,OAAMA,CAEV,CAAC,CACH,CCtBM,SAAUI,IAAI,CAAK,CCMlB,IAAMC,GAAyB,UAAA,CAAM,OAAAC,GAAmB,IAAK,OAAW,MAAS,CAA5C,EAAsE,EAO5G,SAAUC,GAAkBC,EAAU,CAC1C,OAAOF,GAAmB,IAAK,OAAWE,CAAK,CACjD,CAOM,SAAUC,GAAoBC,EAAQ,CAC1C,OAAOJ,GAAmB,IAAKI,EAAO,MAAS,CACjD,CAQM,SAAUJ,GAAmBK,EAAuBD,EAAYF,EAAU,CAC9E,MAAO,CACL,KAAIG,EACJ,MAAKD,EACL,MAAKF,EAET,CCrCA,IAAII,GAAuD,KASrD,SAAUC,GAAaC,EAAc,CACzC,GAAIC,GAAO,sCAAuC,CAChD,IAAMC,EAAS,CAACJ,GAKhB,GAJII,IACFJ,GAAU,CAAE,YAAa,GAAO,MAAO,IAAI,GAE7CE,EAAE,EACEE,EAAQ,CACJ,IAAAC,EAAyBL,GAAvBM,EAAWD,EAAA,YAAEE,EAAKF,EAAA,MAE1B,GADAL,GAAU,KACNM,EACF,MAAMC,QAMVL,EAAE,CAEN,CAMM,SAAUM,GAAaC,EAAQ,CAC/BN
,GAAO,uCAAyCH,KAClDA,GAAQ,YAAc,GACtBA,GAAQ,MAAQS,EAEpB,CCrBA,IAAAC,GAAA,SAAAC,EAAA,CAAmCC,GAAAF,EAAAC,CAAA,EA6BjC,SAAAD,EAAYG,EAA6C,CAAzD,IAAAC,EACEH,EAAA,KAAA,IAAA,GAAO,KATC,OAAAG,EAAA,UAAqB,GAUzBD,GACFC,EAAK,YAAcD,EAGfE,GAAeF,CAAW,GAC5BA,EAAY,IAAIC,CAAI,GAGtBA,EAAK,YAAcE,IAEvB,CAzBO,OAAAN,EAAA,OAAP,SAAiBO,EAAwBC,EAA2BC,EAAqB,CACvF,OAAO,IAAIC,GAAeH,EAAMC,EAAOC,CAAQ,CACjD,EAgCAT,EAAA,UAAA,KAAA,SAAKW,EAAS,CACR,KAAK,UACPC,GAA0BC,GAAiBF,CAAK,EAAG,IAAI,EAEvD,KAAK,MAAMA,CAAM,CAErB,EASAX,EAAA,UAAA,MAAA,SAAMc,EAAS,CACT,KAAK,UACPF,GAA0BG,GAAkBD,CAAG,EAAG,IAAI,GAEtD,KAAK,UAAY,GACjB,KAAK,OAAOA,CAAG,EAEnB,EAQAd,EAAA,UAAA,SAAA,UAAA,CACM,KAAK,UACPY,GAA0BI,GAAuB,IAAI,GAErD,KAAK,UAAY,GACjB,KAAK,UAAS,EAElB,EAEAhB,EAAA,UAAA,YAAA,UAAA,CACO,KAAK,SACR,KAAK,UAAY,GACjBC,EAAA,UAAM,YAAW,KAAA,IAAA,EACjB,KAAK,YAAc,KAEvB,EAEUD,EAAA,UAAA,MAAV,SAAgBW,EAAQ,CACtB,KAAK,YAAY,KAAKA,CAAK,CAC7B,EAEUX,EAAA,UAAA,OAAV,SAAiBc,EAAQ,CACvB,GAAI,CACF,KAAK,YAAY,MAAMA,CAAG,UAE1B,KAAK,YAAW,EAEpB,EAEUd,EAAA,UAAA,UAAV,UAAA,CACE,GAAI,CACF,KAAK,YAAY,SAAQ,UAEzB,KAAK,YAAW,EAEpB,EACFA,CAAA,EApHmCiB,EAAY,EA2H/C,IAAMC,GAAQ,SAAS,UAAU,KAEjC,SAASC,GAAyCC,EAAQC,EAAY,CACpE,OAAOH,GAAM,KAAKE,EAAIC,CAAO,CAC/B,CAMA,IAAAC,GAAA,UAAA,CACE,SAAAA,EAAoBC,EAAqC,CAArC,KAAA,gBAAAA,CAAwC,CAE5D,OAAAD,EAAA,UAAA,KAAA,SAAKE,EAAQ,CACH,IAAAD,EAAoB,KAAI,gBAChC,GAAIA,EAAgB,KAClB,GAAI,CACFA,EAAgB,KAAKC,CAAK,QACnBC,EAAP,CACAC,GAAqBD,CAAK,EAGhC,EAEAH,EAAA,UAAA,MAAA,SAAMK,EAAQ,CACJ,IAAAJ,EAAoB,KAAI,gBAChC,GAAIA,EAAgB,MAClB,GAAI,CACFA,EAAgB,MAAMI,CAAG,QAClBF,EAAP,CACAC,GAAqBD,CAAK,OAG5BC,GAAqBC,CAAG,CAE5B,EAEAL,EAAA,UAAA,SAAA,UAAA,CACU,IAAAC,EAAoB,KAAI,gBAChC,GAAIA,EAAgB,SAClB,GAAI,CACFA,EAAgB,SAAQ,QACjBE,EAAP,CACAC,GAAqBD,CAAK,EAGhC,EACFH,CAAA,EArCA,EAuCAM,GAAA,SAAAC,EAAA,CAAuCC,GAAAF,EAAAC,CAAA,EACrC,SAAAD,EACEG,EACAN,EACAO,EAA8B,CAHhC,IAAAC,EAKEJ,EAAA,KAAA,IAAA,GAAO,KAEHN,EACJ,GAAIW,EAAWH,CAAc,GAAK,CAACA,EAGjCR,EAAkB,CAChB,KAAOQ,GAAc,KAAdA,EAAkB,OACzB,MAAON,GAAK,KAALA,EAAS,OAChB,SAAUO,GAAQ,KAARA,EAAY,YAEnB,CAEL,IAAIG,EACAF,GAAQG,GAAO,0BAIjBD,EAAU,OA
AO,OAAOJ,CAAc,EACtCI,EAAQ,YAAc,UAAA,CAAM,OAAAF,EAAK,YAAW,CAAhB,EAC5BV,EAAkB,CAChB,KAAMQ,EAAe,MAAQZ,GAAKY,EAAe,KAAMI,CAAO,EAC9D,MAAOJ,EAAe,OAASZ,GAAKY,EAAe,MAAOI,CAAO,EACjE,SAAUJ,EAAe,UAAYZ,GAAKY,EAAe,SAAUI,CAAO,IAI5EZ,EAAkBQ,EAMtB,OAAAE,EAAK,YAAc,IAAIX,GAAiBC,CAAe,GACzD,CACF,OAAAK,CAAA,EAzCuCS,EAAU,EA2CjD,SAASC,GAAqBC,EAAU,CAClCC,GAAO,sCACTC,GAAaF,CAAK,EAIlBG,GAAqBH,CAAK,CAE9B,CAQA,SAASI,GAAoBC,EAAQ,CACnC,MAAMA,CACR,CAOA,SAASC,GAA0BC,EAA2CC,EAA2B,CAC/F,IAAAC,EAA0BR,GAAM,sBACxCQ,GAAyBC,GAAgB,WAAW,UAAA,CAAM,OAAAD,EAAsBF,EAAcC,CAAU,CAA9C,CAA+C,CAC3G,CAOO,IAAMG,GAA6D,CACxE,OAAQ,GACR,KAAMC,GACN,MAAOR,GACP,SAAUQ,ICjRL,IAAMC,GAA+B,UAAA,CAAM,OAAC,OAAO,QAAW,YAAc,OAAO,YAAe,cAAvD,EAAsE,ECyClH,SAAUC,GAAYC,EAAI,CAC9B,OAAOA,CACT,CCiCM,SAAUC,IAAI,SAACC,EAAA,CAAA,EAAAC,EAAA,EAAAA,EAAA,UAAA,OAAAA,IAAAD,EAAAC,GAAA,UAAAA,GACnB,OAAOC,GAAcF,CAAG,CAC1B,CAGM,SAAUE,GAAoBF,EAA+B,CACjE,OAAIA,EAAI,SAAW,EACVG,GAGLH,EAAI,SAAW,EACVA,EAAI,GAGN,SAAeI,EAAQ,CAC5B,OAAOJ,EAAI,OAAO,SAACK,EAAWC,EAAuB,CAAK,OAAAA,EAAGD,CAAI,CAAP,EAAUD,CAAY,CAClF,CACF,CC9EA,IAAAG,EAAA,UAAA,CAkBE,SAAAA,EAAYC,EAA6E,CACnFA,IACF,KAAK,WAAaA,EAEtB,CA4BA,OAAAD,EAAA,UAAA,KAAA,SAAQE,EAAyB,CAC/B,IAAMC,EAAa,IAAIH,EACvB,OAAAG,EAAW,OAAS,KACpBA,EAAW,SAAWD,EACfC,CACT,EA8IAH,EAAA,UAAA,UAAA,SACEI,EACAC,EACAC,EAA8B,CAHhC,IAAAC,EAAA,KAKQC,EAAaC,GAAaL,CAAc,EAAIA,EAAiB,IAAIM,GAAeN,EAAgBC,EAAOC,CAAQ,EAErH,OAAAK,GAAa,UAAA,CACL,IAAAC,EAAuBL,EAArBL,EAAQU,EAAA,SAAEC,EAAMD,EAAA,OACxBJ,EAAW,IACTN,EAGIA,EAAS,KAAKM,EAAYK,CAAM,EAChCA,EAIAN,EAAK,WAAWC,CAAU,EAG1BD,EAAK,cAAcC,CAAU,CAAC,CAEtC,CAAC,EAEMA,CACT,EAGUR,EAAA,UAAA,cAAV,SAAwBc,EAAmB,CACzC,GAAI,CACF,OAAO,KAAK,WAAWA,CAAI,QACpBC,EAAP,CAIAD,EAAK,MAAMC,CAAG,EAElB,EA6DAf,EAAA,UAAA,QAAA,SAAQgB,EAA0BC,EAAoC,CAAtE,IAAAV,EAAA,KACE,OAAAU,EAAcC,GAAeD,CAAW,EAEjC,IAAIA,EAAkB,SAACE,EAASC,EAAM,CAC3C,IAAMZ,EAAa,IAAIE,GAAkB,CACvC,KAAM,SAACW,EAAK,CACV,GAAI,CACFL,EAAKK,CAAK,QACHN,EAAP,CACAK,EAAOL,CAAG,EACVP,EAAW,YAAW,EAE1B,EACA,MAAOY,EACP,SAAUD,EACX,EACDZ,EAAK,UAAUC,CAAU,CAC3B,CAAC,CACH,EAGUR,EAAA,UAAA,WAAV,SAAqBQ,EAA
2B,OAC9C,OAAOI,EAAA,KAAK,UAAM,MAAAA,IAAA,OAAA,OAAAA,EAAE,UAAUJ,CAAU,CAC1C,EAOAR,EAAA,UAACG,IAAD,UAAA,CACE,OAAO,IACT,EA4FAH,EAAA,UAAA,KAAA,UAAA,SAAKsB,EAAA,CAAA,EAAAC,EAAA,EAAAA,EAAA,UAAA,OAAAA,IAAAD,EAAAC,GAAA,UAAAA,GACH,OAAOC,GAAcF,CAAU,EAAE,IAAI,CACvC,EA6BAtB,EAAA,UAAA,UAAA,SAAUiB,EAAoC,CAA9C,IAAAV,EAAA,KACE,OAAAU,EAAcC,GAAeD,CAAW,EAEjC,IAAIA,EAAY,SAACE,EAASC,EAAM,CACrC,IAAIC,EACJd,EAAK,UACH,SAACkB,EAAI,CAAK,OAACJ,EAAQI,CAAT,EACV,SAACV,EAAQ,CAAK,OAAAK,EAAOL,CAAG,CAAV,EACd,UAAA,CAAM,OAAAI,EAAQE,CAAK,CAAb,CAAc,CAExB,CAAC,CACH,EA3aOrB,EAAA,OAAkC,SAAIC,EAAwD,CACnG,OAAO,IAAID,EAAcC,CAAS,CACpC,EA0aFD,GA/cA,EAwdA,SAAS0B,GAAeC,EAA+C,OACrE,OAAOC,EAAAD,GAAW,KAAXA,EAAeE,GAAO,WAAO,MAAAD,IAAA,OAAAA,EAAI,OAC1C,CAEA,SAASE,GAAcC,EAAU,CAC/B,OAAOA,GAASC,EAAWD,EAAM,IAAI,GAAKC,EAAWD,EAAM,KAAK,GAAKC,EAAWD,EAAM,QAAQ,CAChG,CAEA,SAASE,GAAgBF,EAAU,CACjC,OAAQA,GAASA,aAAiBG,IAAgBJ,GAAWC,CAAK,GAAKI,GAAeJ,CAAK,CAC7F,CC1eM,SAAUK,GAAQC,EAAW,CACjC,OAAOC,EAAWD,GAAM,KAAA,OAANA,EAAQ,IAAI,CAChC,CAMM,SAAUE,EACdC,EAAqF,CAErF,OAAO,SAACH,EAAqB,CAC3B,GAAID,GAAQC,CAAM,EAChB,OAAOA,EAAO,KAAK,SAA+BI,EAA2B,CAC3E,GAAI,CACF,OAAOD,EAAKC,EAAc,IAAI,QACvBC,EAAP,CACA,KAAK,MAAMA,CAAG,EAElB,CAAC,EAEH,MAAM,IAAI,UAAU,wCAAwC,CAC9D,CACF,CCjBM,SAAUC,EACdC,EACAC,EACAC,EACAC,EACAC,EAAuB,CAEvB,OAAO,IAAIC,GAAmBL,EAAaC,EAAQC,EAAYC,EAASC,CAAU,CACpF,CAMA,IAAAC,GAAA,SAAAC,EAAA,CAA2CC,GAAAF,EAAAC,CAAA,EAiBzC,SAAAD,EACEL,EACAC,EACAC,EACAC,EACQC,EACAI,EAAiC,CAN3C,IAAAC,EAoBEH,EAAA,KAAA,KAAMN,CAAW,GAAC,KAfV,OAAAS,EAAA,WAAAL,EACAK,EAAA,kBAAAD,EAeRC,EAAK,MAAQR,EACT,SAAuCS,EAAQ,CAC7C,GAAI,CACFT,EAAOS,CAAK,QACLC,EAAP,CACAX,EAAY,MAAMW,CAAG,EAEzB,EACAL,EAAA,UAAM,MACVG,EAAK,OAASN,EACV,SAAuCQ,EAAQ,CAC7C,GAAI,CACFR,EAAQQ,CAAG,QACJA,EAAP,CAEAX,EAAY,MAAMW,CAAG,UAGrB,KAAK,YAAW,EAEpB,EACAL,EAAA,UAAM,OACVG,EAAK,UAAYP,EACb,UAAA,CACE,GAAI,CACFA,EAAU,QACHS,EAAP,CAEAX,EAAY,MAAMW,CAAG,UAGrB,KAAK,YAAW,EAEpB,EACAL,EAAA,UAAM,WACZ,CAEA,OAAAD,EAAA,UAAA,YAAA,UAAA,OACE,GAAI,CAAC,KAAK,mBAAqB,KAAK,kBAAiB,EAAI,CAC/C,IAAAO,EAAW,KAAI,OACvBN,EAAA,UAAM,YAAW,KAAA,IAAA
,EAEjB,CAACM,KAAUC,EAAA,KAAK,cAAU,MAAAA,IAAA,QAAAA,EAAA,KAAf,IAAI,GAEnB,EACFR,CAAA,EAnF2CS,EAAU,ECd9C,IAAMC,GAAiD,CAG5D,SAAA,SAASC,EAAQ,CACf,IAAIC,EAAU,sBACVC,EAAkD,qBAC9CC,EAAaJ,GAAsB,SACvCI,IACFF,EAAUE,EAAS,sBACnBD,EAASC,EAAS,sBAEpB,IAAMC,EAASH,EAAQ,SAACI,EAAS,CAI/BH,EAAS,OACTF,EAASK,CAAS,CACpB,CAAC,EACD,OAAO,IAAIC,GAAa,UAAA,CAAM,OAAAJ,GAAM,KAAA,OAANA,EAASE,CAAM,CAAf,CAAgB,CAChD,EACA,sBAAqB,UAAA,SAACG,EAAA,CAAA,EAAAC,EAAA,EAAAA,EAAA,UAAA,OAAAA,IAAAD,EAAAC,GAAA,UAAAA,GACZ,IAAAL,EAAaJ,GAAsB,SAC3C,QAAQI,GAAQ,KAAA,OAARA,EAAU,wBAAyB,uBAAsB,MAAA,OAAAM,EAAA,CAAA,EAAAC,EAAIH,CAAI,CAAA,CAAA,CAC3E,EACA,qBAAoB,UAAA,SAACA,EAAA,CAAA,EAAAC,EAAA,EAAAA,EAAA,UAAA,OAAAA,IAAAD,EAAAC,GAAA,UAAAA,GACX,IAAAL,EAAaJ,GAAsB,SAC3C,QAAQI,GAAQ,KAAA,OAARA,EAAU,uBAAwB,sBAAqB,MAAA,OAAAM,EAAA,CAAA,EAAAC,EAAIH,CAAI,CAAA,CAAA,CACzE,EACA,SAAU,QCrBL,IAAMI,GAAuDC,GAClE,SAACC,EAAM,CACL,OAAA,UAAoC,CAClCA,EAAO,IAAI,EACX,KAAK,KAAO,0BACZ,KAAK,QAAU,qBACjB,CAJA,CAIC,ECXL,IAAAC,EAAA,SAAAC,EAAA,CAAgCC,GAAAF,EAAAC,CAAA,EAwB9B,SAAAD,GAAA,CAAA,IAAAG,EAEEF,EAAA,KAAA,IAAA,GAAO,KAzBT,OAAAE,EAAA,OAAS,GAEDA,EAAA,iBAAyC,KAGjDA,EAAA,UAA2B,CAAA,EAE3BA,EAAA,UAAY,GAEZA,EAAA,SAAW,GAEXA,EAAA,YAAmB,MAenB,CAGA,OAAAH,EAAA,UAAA,KAAA,SAAQI,EAAwB,CAC9B,IAAMC,EAAU,IAAIC,GAAiB,KAAM,IAAI,EAC/C,OAAAD,EAAQ,SAAWD,EACZC,CACT,EAGUL,EAAA,UAAA,eAAV,UAAA,CACE,GAAI,KAAK,OACP,MAAM,IAAIO,EAEd,EAEAP,EAAA,UAAA,KAAA,SAAKQ,EAAQ,CAAb,IAAAL,EAAA,KACEM,GAAa,UAAA,SAEX,GADAN,EAAK,eAAc,EACf,CAACA,EAAK,UAAW,CACdA,EAAK,mBACRA,EAAK,iBAAmB,MAAM,KAAKA,EAAK,SAAS,OAEnD,QAAuBO,EAAAC,GAAAR,EAAK,gBAAgB,EAAAS,EAAAF,EAAA,KAAA,EAAA,CAAAE,EAAA,KAAAA,EAAAF,EAAA,KAAA,EAAE,CAAzC,IAAMG,EAAQD,EAAA,MACjBC,EAAS,KAAKL,CAAK,qGAGzB,CAAC,CACH,EAEAR,EAAA,UAAA,MAAA,SAAMc,EAAQ,CAAd,IAAAX,EAAA,KACEM,GAAa,UAAA,CAEX,GADAN,EAAK,eAAc,EACf,CAACA,EAAK,UAAW,CACnBA,EAAK,SAAWA,EAAK,UAAY,GACjCA,EAAK,YAAcW,EAEnB,QADQC,EAAcZ,EAAI,UACnBY,EAAU,QACfA,EAAU,MAAK,EAAI,MAAMD,CAAG,EAGlC,CAAC,CACH,EAEAd,EAAA,UAAA,SAAA,UAAA,CAAA,IAAAG,EAAA,KACEM,GAAa,UAAA,CAEX,GADAN,EAAK,eAAc,EACf,CAACA,EAAK,UAAW,CACnBA,E
AAK,UAAY,GAEjB,QADQY,EAAcZ,EAAI,UACnBY,EAAU,QACfA,EAAU,MAAK,EAAI,SAAQ,EAGjC,CAAC,CACH,EAEAf,EAAA,UAAA,YAAA,UAAA,CACE,KAAK,UAAY,KAAK,OAAS,GAC/B,KAAK,UAAY,KAAK,iBAAmB,IAC3C,EAEA,OAAA,eAAIA,EAAA,UAAA,WAAQ,KAAZ,UAAA,OACE,QAAOgB,EAAA,KAAK,aAAS,MAAAA,IAAA,OAAA,OAAAA,EAAE,QAAS,CAClC,kCAGUhB,EAAA,UAAA,cAAV,SAAwBiB,EAAyB,CAC/C,YAAK,eAAc,EACZhB,EAAA,UAAM,cAAa,KAAA,KAACgB,CAAU,CACvC,EAGUjB,EAAA,UAAA,WAAV,SAAqBiB,EAAyB,CAC5C,YAAK,eAAc,EACnB,KAAK,wBAAwBA,CAAU,EAChC,KAAK,gBAAgBA,CAAU,CACxC,EAGUjB,EAAA,UAAA,gBAAV,SAA0BiB,EAA2B,CAArD,IAAAd,EAAA,KACQa,EAAqC,KAAnCE,EAAQF,EAAA,SAAEG,EAASH,EAAA,UAAED,EAASC,EAAA,UACtC,OAAIE,GAAYC,EACPC,IAET,KAAK,iBAAmB,KACxBL,EAAU,KAAKE,CAAU,EAClB,IAAII,GAAa,UAAA,CACtBlB,EAAK,iBAAmB,KACxBmB,GAAUP,EAAWE,CAAU,CACjC,CAAC,EACH,EAGUjB,EAAA,UAAA,wBAAV,SAAkCiB,EAA2B,CACrD,IAAAD,EAAuC,KAArCE,EAAQF,EAAA,SAAEO,EAAWP,EAAA,YAAEG,EAASH,EAAA,UACpCE,EACFD,EAAW,MAAMM,CAAW,EACnBJ,GACTF,EAAW,SAAQ,CAEvB,EAQAjB,EAAA,UAAA,aAAA,UAAA,CACE,IAAMwB,EAAkB,IAAIC,EAC5B,OAAAD,EAAW,OAAS,KACbA,CACT,EAxHOxB,EAAA,OAAkC,SAAI0B,EAA0BC,EAAqB,CAC1F,OAAO,IAAIrB,GAAoBoB,EAAaC,CAAM,CACpD,EAuHF3B,GA7IgCyB,CAAU,EAkJ1C,IAAAG,GAAA,SAAAC,EAAA,CAAyCC,GAAAF,EAAAC,CAAA,EACvC,SAAAD,EAESG,EACPC,EAAsB,CAHxB,IAAAC,EAKEJ,EAAA,KAAA,IAAA,GAAO,KAHA,OAAAI,EAAA,YAAAF,EAIPE,EAAK,OAASD,GAChB,CAEA,OAAAJ,EAAA,UAAA,KAAA,SAAKM,EAAQ,UACXC,GAAAC,EAAA,KAAK,eAAW,MAAAA,IAAA,OAAA,OAAAA,EAAE,QAAI,MAAAD,IAAA,QAAAA,EAAA,KAAAC,EAAGF,CAAK,CAChC,EAEAN,EAAA,UAAA,MAAA,SAAMS,EAAQ,UACZF,GAAAC,EAAA,KAAK,eAAW,MAAAA,IAAA,OAAA,OAAAA,EAAE,SAAK,MAAAD,IAAA,QAAAA,EAAA,KAAAC,EAAGC,CAAG,CAC/B,EAEAT,EAAA,UAAA,SAAA,UAAA,UACEO,GAAAC,EAAA,KAAK,eAAW,MAAAA,IAAA,OAAA,OAAAA,EAAE,YAAQ,MAAAD,IAAA,QAAAA,EAAA,KAAAC,CAAA,CAC5B,EAGUR,EAAA,UAAA,WAAV,SAAqBU,EAAyB,SAC5C,OAAOH,GAAAC,EAAA,KAAK,UAAM,MAAAA,IAAA,OAAA,OAAAA,EAAE,UAAUE,CAAU,KAAC,MAAAH,IAAA,OAAAA,EAAII,EAC/C,EACFX,CAAA,EA1ByCY,CAAO,EC5JzC,IAAMC,GAA+C,CAC1D,IAAG,UAAA,CAGD,OAAQA,GAAsB,UAAY,MAAM,IAAG,CACrD,EACA,SAAU,QCwBZ,IAAAC,GAAA,SAAAC,EAAA,CAAsCC,GAAAF,EAAAC,CAAA,EAUpC,SAAAD,EACUG,EACAC,EACAC,EAA6D,
CAF7DF,IAAA,SAAAA,EAAA,KACAC,IAAA,SAAAA,EAAA,KACAC,IAAA,SAAAA,EAAAC,IAHV,IAAAC,EAKEN,EAAA,KAAA,IAAA,GAAO,KAJC,OAAAM,EAAA,YAAAJ,EACAI,EAAA,YAAAH,EACAG,EAAA,mBAAAF,EAZFE,EAAA,QAA0B,CAAA,EAC1BA,EAAA,oBAAsB,GAc5BA,EAAK,oBAAsBH,IAAgB,IAC3CG,EAAK,YAAc,KAAK,IAAI,EAAGJ,CAAW,EAC1CI,EAAK,YAAc,KAAK,IAAI,EAAGH,CAAW,GAC5C,CAEA,OAAAJ,EAAA,UAAA,KAAA,SAAKQ,EAAQ,CACL,IAAAC,EAA+E,KAA7EC,EAASD,EAAA,UAAEE,EAAOF,EAAA,QAAEG,EAAmBH,EAAA,oBAAEJ,EAAkBI,EAAA,mBAAEL,EAAWK,EAAA,YAC3EC,IACHC,EAAQ,KAAKH,CAAK,EAClB,CAACI,GAAuBD,EAAQ,KAAKN,EAAmB,IAAG,EAAKD,CAAW,GAE7E,KAAK,YAAW,EAChBH,EAAA,UAAM,KAAI,KAAA,KAACO,CAAK,CAClB,EAGUR,EAAA,UAAA,WAAV,SAAqBa,EAAyB,CAC5C,KAAK,eAAc,EACnB,KAAK,YAAW,EAQhB,QANMC,EAAe,KAAK,gBAAgBD,CAAU,EAE9CJ,EAAmC,KAAjCG,EAAmBH,EAAA,oBAAEE,EAAOF,EAAA,QAG9BM,EAAOJ,EAAQ,MAAK,EACjBK,EAAI,EAAGA,EAAID,EAAK,QAAU,CAACF,EAAW,OAAQG,GAAKJ,EAAsB,EAAI,EACpFC,EAAW,KAAKE,EAAKC,EAAO,EAG9B,YAAK,wBAAwBH,CAAU,EAEhCC,CACT,EAEQd,EAAA,UAAA,YAAR,UAAA,CACQ,IAAAS,EAAoE,KAAlEN,EAAWM,EAAA,YAAEJ,EAAkBI,EAAA,mBAAEE,EAAOF,EAAA,QAAEG,EAAmBH,EAAA,oBAK/DQ,GAAsBL,EAAsB,EAAI,GAAKT,EAK3D,GAJAA,EAAc,KAAYc,EAAqBN,EAAQ,QAAUA,EAAQ,OAAO,EAAGA,EAAQ,OAASM,CAAkB,EAIlH,CAACL,EAAqB,CAKxB,QAJMM,EAAMb,EAAmB,IAAG,EAC9Bc,EAAO,EAGFH,EAAI,EAAGA,EAAIL,EAAQ,QAAWA,EAAQK,IAAiBE,EAAKF,GAAK,EACxEG,EAAOH,EAETG,GAAQR,EAAQ,OAAO,EAAGQ,EAAO,CAAC,EAEtC,EACFnB,CAAA,EAzEsCoB,CAAO,EClB7C,IAAAC,GAAA,SAAAC,EAAA,CAA+BC,GAAAF,EAAAC,CAAA,EAC7B,SAAAD,EAAYG,EAAsBC,EAAmD,QACnFH,EAAA,KAAA,IAAA,GAAO,IACT,CAWO,OAAAD,EAAA,UAAA,SAAP,SAAgBK,EAAWC,EAAiB,CAAjB,OAAAA,IAAA,SAAAA,EAAA,GAClB,IACT,EACFN,CAAA,EAjB+BO,EAAY,ECHpC,IAAMC,GAAqC,CAGhD,YAAA,SAAYC,EAAqBC,EAAgB,SAAEC,EAAA,CAAA,EAAAC,EAAA,EAAAA,EAAA,UAAA,OAAAA,IAAAD,EAAAC,EAAA,GAAA,UAAAA,GACzC,IAAAC,EAAaL,GAAgB,SACrC,OAAIK,GAAQ,MAARA,EAAU,YACLA,EAAS,YAAW,MAApBA,EAAQC,EAAA,CAAaL,EAASC,CAAO,EAAAK,EAAKJ,CAAI,CAAA,CAAA,EAEhD,YAAW,MAAA,OAAAG,EAAA,CAACL,EAASC,CAAO,EAAAK,EAAKJ,CAAI,CAAA,CAAA,CAC9C,EACA,cAAA,SAAcK,EAAM,CACV,IAAAH,EAAaL,GAAgB,SACrC,QAAQK,GAAQ,KAAA,OAARA,EAAU,gBAAiB,eAAeG,CAAa,CACjE,EACA,SAAU,QCtBZ,IAA
AC,GAAA,SAAAC,EAAA,CAAoCC,GAAAF,EAAAC,CAAA,EAOlC,SAAAD,EAAsBG,EAAqCC,EAAmD,CAA9G,IAAAC,EACEJ,EAAA,KAAA,KAAME,EAAWC,CAAI,GAAC,KADF,OAAAC,EAAA,UAAAF,EAAqCE,EAAA,KAAAD,EAFjDC,EAAA,QAAmB,IAI7B,CAEO,OAAAL,EAAA,UAAA,SAAP,SAAgBM,EAAWC,EAAiB,CAC1C,GADyBA,IAAA,SAAAA,EAAA,GACrB,KAAK,OACP,OAAO,KAIT,KAAK,MAAQD,EAEb,IAAME,EAAK,KAAK,GACVL,EAAY,KAAK,UAuBvB,OAAIK,GAAM,OACR,KAAK,GAAK,KAAK,eAAeL,EAAWK,EAAID,CAAK,GAKpD,KAAK,QAAU,GAEf,KAAK,MAAQA,EAEb,KAAK,GAAK,KAAK,IAAM,KAAK,eAAeJ,EAAW,KAAK,GAAII,CAAK,EAE3D,IACT,EAEUP,EAAA,UAAA,eAAV,SAAyBG,EAA2BM,EAAWF,EAAiB,CAAjB,OAAAA,IAAA,SAAAA,EAAA,GACtDG,GAAiB,YAAYP,EAAU,MAAM,KAAKA,EAAW,IAAI,EAAGI,CAAK,CAClF,EAEUP,EAAA,UAAA,eAAV,SAAyBW,EAA4BH,EAASD,EAAwB,CAEpF,GAF4DA,IAAA,SAAAA,EAAA,GAExDA,GAAS,MAAQ,KAAK,QAAUA,GAAS,KAAK,UAAY,GAC5D,OAAOC,EAITE,GAAiB,cAAcF,CAAE,CAEnC,EAMOR,EAAA,UAAA,QAAP,SAAeM,EAAUC,EAAa,CACpC,GAAI,KAAK,OACP,OAAO,IAAI,MAAM,8BAA8B,EAGjD,KAAK,QAAU,GACf,IAAMK,EAAQ,KAAK,SAASN,EAAOC,CAAK,EACxC,GAAIK,EACF,OAAOA,EACE,KAAK,UAAY,IAAS,KAAK,IAAM,OAc9C,KAAK,GAAK,KAAK,eAAe,KAAK,UAAW,KAAK,GAAI,IAAI,EAE/D,EAEUZ,EAAA,UAAA,SAAV,SAAmBM,EAAUO,EAAc,CACzC,IAAIC,EAAmB,GACnBC,EACJ,GAAI,CACF,KAAK,KAAKT,CAAK,QACRU,EAAP,CACAF,EAAU,GAIVC,EAAaC,GAAQ,IAAI,MAAM,oCAAoC,EAErE,GAAIF,EACF,YAAK,YAAW,EACTC,CAEX,EAEAf,EAAA,UAAA,YAAA,UAAA,CACE,GAAI,CAAC,KAAK,OAAQ,CACV,IAAAiB,EAAoB,KAAlBT,EAAES,EAAA,GAAEd,EAASc,EAAA,UACbC,EAAYf,EAAS,QAE7B,KAAK,KAAO,KAAK,MAAQ,KAAK,UAAY,KAC1C,KAAK,QAAU,GAEfgB,GAAUD,EAAS,IAAI,EACnBV,GAAM,OACR,KAAK,GAAK,KAAK,eAAeL,EAAWK,EAAI,IAAI,GAGnD,KAAK,MAAQ,KACbP,EAAA,UAAM,YAAW,KAAA,IAAA,EAErB,EACFD,CAAA,EA3IoCoB,EAAM,ECiB1C,IAAAC,GAAA,UAAA,CAGE,SAAAA,EAAoBC,EAAoCC,EAAiC,CAAjCA,IAAA,SAAAA,EAAoBF,EAAU,KAAlE,KAAA,oBAAAC,EAClB,KAAK,IAAMC,CACb,CA6BO,OAAAF,EAAA,UAAA,SAAP,SAAmBG,EAAqDC,EAAmBC,EAAS,CAA5B,OAAAD,IAAA,SAAAA,EAAA,GAC/D,IAAI,KAAK,oBAAuB,KAAMD,CAAI,EAAE,SAASE,EAAOD,CAAK,CAC1E,EAnCcJ,EAAA,IAAoBM,GAAsB,IAoC1DN,GArCA,ECpBA,IAAAO,GAAA,SAAAC,EAAA,CAAoCC,GAAAF,EAAAC,CAAA,EAkBlC,SAAAD,EAAYG,EAAgCC,EAAiC,CAAjCA,IAAA,SAAAA,EAAoBC,GAAU,KAA1E,IAAAC,EACEL,EAAA,
KAAA,KAAME,EAAiBC,CAAG,GAAC,KAlBtB,OAAAE,EAAA,QAAmC,CAAA,EAOnCA,EAAA,QAAmB,GAQnBA,EAAA,WAAkB,QAIzB,CAEO,OAAAN,EAAA,UAAA,MAAP,SAAaO,EAAwB,CAC3B,IAAAC,EAAY,KAAI,QAExB,GAAI,KAAK,QAAS,CAChBA,EAAQ,KAAKD,CAAM,EACnB,OAGF,IAAIE,EACJ,KAAK,QAAU,GAEf,EACE,IAAKA,EAAQF,EAAO,QAAQA,EAAO,MAAOA,EAAO,KAAK,EACpD,YAEMA,EAASC,EAAQ,MAAK,GAIhC,GAFA,KAAK,QAAU,GAEXC,EAAO,CACT,KAAQF,EAASC,EAAQ,MAAK,GAC5BD,EAAO,YAAW,EAEpB,MAAME,EAEV,EACFT,CAAA,EAhDoCK,EAAS,EC8CtC,IAAMK,GAAiB,IAAIC,GAAeC,EAAW,EAK/CC,GAAQH,GClDrB,IAAAI,GAAA,SAAAC,EAAA,CAA6CC,GAAAF,EAAAC,CAAA,EAC3C,SAAAD,EAAsBG,EAA8CC,EAAmD,CAAvH,IAAAC,EACEJ,EAAA,KAAA,KAAME,EAAWC,CAAI,GAAC,KADF,OAAAC,EAAA,UAAAF,EAA8CE,EAAA,KAAAD,GAEpE,CAEU,OAAAJ,EAAA,UAAA,eAAV,SAAyBG,EAAoCG,EAAUC,EAAiB,CAEtF,OAFqEA,IAAA,SAAAA,EAAA,GAEjEA,IAAU,MAAQA,EAAQ,EACrBN,EAAA,UAAM,eAAc,KAAA,KAACE,EAAWG,EAAIC,CAAK,GAGlDJ,EAAU,QAAQ,KAAK,IAAI,EAIpBA,EAAU,aAAeA,EAAU,WAAaK,GAAuB,sBAAsB,UAAA,CAAM,OAAAL,EAAU,MAAM,MAAS,CAAzB,CAA0B,GACtI,EACUH,EAAA,UAAA,eAAV,SAAyBG,EAAoCG,EAAUC,EAAiB,CAItF,GAJqEA,IAAA,SAAAA,EAAA,GAIhEA,GAAS,MAAQA,EAAQ,GAAOA,GAAS,MAAQ,KAAK,MAAQ,EACjE,OAAON,EAAA,UAAM,eAAc,KAAA,KAACE,EAAWG,EAAIC,CAAK,EAK7CJ,EAAU,QAAQ,KAAK,SAACM,EAAM,CAAK,OAAAA,EAAO,KAAOH,CAAd,CAAgB,IACtDE,GAAuB,qBAAqBF,CAAE,EAC9CH,EAAU,WAAa,OAI3B,EACFH,CAAA,EAlC6CU,EAAW,ECFxD,IAAAC,GAAA,SAAAC,EAAA,CAA6CC,GAAAF,EAAAC,CAAA,EAA7C,SAAAD,GAAA,+CAkCA,CAjCS,OAAAA,EAAA,UAAA,MAAP,SAAaG,EAAyB,CACpC,KAAK,QAAU,GAUf,IAAMC,EAAU,KAAK,WACrB,KAAK,WAAa,OAEV,IAAAC,EAAY,KAAI,QACpBC,EACJH,EAASA,GAAUE,EAAQ,MAAK,EAEhC,EACE,IAAKC,EAAQH,EAAO,QAAQA,EAAO,MAAOA,EAAO,KAAK,EACpD,aAEMA,EAASE,EAAQ,KAAOF,EAAO,KAAOC,GAAWC,EAAQ,MAAK,GAIxE,GAFA,KAAK,QAAU,GAEXC,EAAO,CACT,MAAQH,EAASE,EAAQ,KAAOF,EAAO,KAAOC,GAAWC,EAAQ,MAAK,GACpEF,EAAO,YAAW,EAEpB,MAAMG,EAEV,EACFN,CAAA,EAlC6CO,EAAc,ECgCpD,IAAMC,GAA0B,IAAIC,GAAwBC,EAAoB,EC8BhF,IAAMC,EAAQ,IAAIC,EAAkB,SAACC,EAAU,CAAK,OAAAA,EAAW,SAAQ,CAAnB,CAAqB,EC9D1E,SAAUC,GAAYC,EAAU,CACpC,OAAOA,GAASC,EAAWD,EAAM,QAAQ,CAC3C,CCDA,SAASE,GAAQC,EAAQ,CACvB,OAAOA,EAAIA,EAAI,OAAS,EAC1B,CAEM,SAAUC,GAAkBC,EAAW,CAC3C,OAAO
C,EAAWJ,GAAKG,CAAI,CAAC,EAAIA,EAAK,IAAG,EAAK,MAC/C,CAEM,SAAUE,GAAaF,EAAW,CACtC,OAAOG,GAAYN,GAAKG,CAAI,CAAC,EAAIA,EAAK,IAAG,EAAK,MAChD,CAEM,SAAUI,GAAUJ,EAAaK,EAAoB,CACzD,OAAO,OAAOR,GAAKG,CAAI,GAAM,SAAWA,EAAK,IAAG,EAAMK,CACxD,CClBO,IAAMC,GAAe,SAAIC,EAAM,CAAwB,OAAAA,GAAK,OAAOA,EAAE,QAAW,UAAY,OAAOA,GAAM,UAAlD,ECMxD,SAAUC,GAAUC,EAAU,CAClC,OAAOC,EAAWD,GAAK,KAAA,OAALA,EAAO,IAAI,CAC/B,CCHM,SAAUE,GAAoBC,EAAU,CAC5C,OAAOC,EAAWD,EAAME,GAAkB,CAC5C,CCLM,SAAUC,GAAmBC,EAAQ,CACzC,OAAO,OAAO,eAAiBC,EAAWD,GAAG,KAAA,OAAHA,EAAM,OAAO,cAAc,CACvE,CCAM,SAAUE,GAAiCC,EAAU,CAEzD,OAAO,IAAI,UACT,iBACEA,IAAU,MAAQ,OAAOA,GAAU,SAAW,oBAAsB,IAAIA,EAAK,KAAG,0HACwC,CAE9H,CCXM,SAAUC,IAAiB,CAC/B,OAAI,OAAO,QAAW,YAAc,CAAC,OAAO,SACnC,aAGF,OAAO,QAChB,CAEO,IAAMC,GAAWD,GAAiB,ECJnC,SAAUE,GAAWC,EAAU,CACnC,OAAOC,EAAWD,GAAK,KAAA,OAALA,EAAQE,GAAgB,CAC5C,CCHM,SAAiBC,GAAsCC,EAAqC,mGAC1FC,EAASD,EAAe,UAAS,2DAGX,MAAA,CAAA,EAAAE,GAAMD,EAAO,KAAI,CAAE,CAAA,gBAArCE,EAAkBC,EAAA,KAAA,EAAhBC,EAAKF,EAAA,MAAEG,EAAIH,EAAA,KACfG,iBAAA,CAAA,EAAA,CAAA,SACF,MAAA,CAAA,EAAAF,EAAA,KAAA,CAAA,qBAEIC,CAAM,CAAA,SAAZ,MAAA,CAAA,EAAAD,EAAA,KAAA,CAAA,SAAA,OAAAA,EAAA,KAAA,mCAGF,OAAAH,EAAO,YAAW,6BAIhB,SAAUM,GAAwBC,EAAQ,CAG9C,OAAOC,EAAWD,GAAG,KAAA,OAAHA,EAAK,SAAS,CAClC,CCPM,SAAUE,EAAaC,EAAyB,CACpD,GAAIA,aAAiBC,EACnB,OAAOD,EAET,GAAIA,GAAS,KAAM,CACjB,GAAIE,GAAoBF,CAAK,EAC3B,OAAOG,GAAsBH,CAAK,EAEpC,GAAII,GAAYJ,CAAK,EACnB,OAAOK,GAAcL,CAAK,EAE5B,GAAIM,GAAUN,CAAK,EACjB,OAAOO,GAAYP,CAAK,EAE1B,GAAIQ,GAAgBR,CAAK,EACvB,OAAOS,GAAkBT,CAAK,EAEhC,GAAIU,GAAWV,CAAK,EAClB,OAAOW,GAAaX,CAAK,EAE3B,GAAIY,GAAqBZ,CAAK,EAC5B,OAAOa,GAAuBb,CAAK,EAIvC,MAAMc,GAAiCd,CAAK,CAC9C,CAMM,SAAUG,GAAyBY,EAAQ,CAC/C,OAAO,IAAId,EAAW,SAACe,EAAyB,CAC9C,IAAMC,EAAMF,EAAIG,IAAkB,EAClC,GAAIC,EAAWF,EAAI,SAAS,EAC1B,OAAOA,EAAI,UAAUD,CAAU,EAGjC,MAAM,IAAI,UAAU,gEAAgE,CACtF,CAAC,CACH,CASM,SAAUX,GAAiBe,EAAmB,CAClD,OAAO,IAAInB,EAAW,SAACe,EAAyB,CAU9C,QAASK,EAAI,EAAGA,EAAID,EAAM,QAAU,CAACJ,EAAW,OAAQK,IACtDL,EAAW,KAAKI,EAAMC,EAAE,EAE1BL,EAAW,SAAQ,CACrB,CAAC,CACH,CAEM,SAAUT,GAAee,EAAuB,CACpD,OAAO,IAAIrB,EAA
W,SAACe,EAAyB,CAC9CM,EACG,KACC,SAACC,EAAK,CACCP,EAAW,SACdA,EAAW,KAAKO,CAAK,EACrBP,EAAW,SAAQ,EAEvB,EACA,SAACQ,EAAQ,CAAK,OAAAR,EAAW,MAAMQ,CAAG,CAApB,CAAqB,EAEpC,KAAK,KAAMC,EAAoB,CACpC,CAAC,CACH,CAEM,SAAUd,GAAgBe,EAAqB,CACnD,OAAO,IAAIzB,EAAW,SAACe,EAAyB,aAC9C,QAAoBW,EAAAC,GAAAF,CAAQ,EAAAG,EAAAF,EAAA,KAAA,EAAA,CAAAE,EAAA,KAAAA,EAAAF,EAAA,KAAA,EAAE,CAAzB,IAAMJ,EAAKM,EAAA,MAEd,GADAb,EAAW,KAAKO,CAAK,EACjBP,EAAW,OACb,yGAGJA,EAAW,SAAQ,CACrB,CAAC,CACH,CAEM,SAAUP,GAAqBqB,EAA+B,CAClE,OAAO,IAAI7B,EAAW,SAACe,EAAyB,CAC9Ce,GAAQD,EAAed,CAAU,EAAE,MAAM,SAACQ,EAAG,CAAK,OAAAR,EAAW,MAAMQ,CAAG,CAApB,CAAqB,CACzE,CAAC,CACH,CAEM,SAAUX,GAA0BmB,EAAqC,CAC7E,OAAOvB,GAAkBwB,GAAmCD,CAAc,CAAC,CAC7E,CAEA,SAAeD,GAAWD,EAAiCd,EAAyB,uIACxDkB,EAAAC,GAAAL,CAAa,gFAIrC,GAJeP,EAAKa,EAAA,MACpBpB,EAAW,KAAKO,CAAK,EAGjBP,EAAW,OACb,MAAA,CAAA,CAAA,6RAGJ,OAAAA,EAAW,SAAQ,WChHf,SAAUqB,GACdC,EACAC,EACAC,EACAC,EACAC,EAAc,CADdD,IAAA,SAAAA,EAAA,GACAC,IAAA,SAAAA,EAAA,IAEA,IAAMC,EAAuBJ,EAAU,SAAS,UAAA,CAC9CC,EAAI,EACAE,EACFJ,EAAmB,IAAI,KAAK,SAAS,KAAMG,CAAK,CAAC,EAEjD,KAAK,YAAW,CAEpB,EAAGA,CAAK,EAIR,GAFAH,EAAmB,IAAIK,CAAoB,EAEvC,CAACD,EAKH,OAAOC,CAEX,CCeM,SAAUC,GAAaC,EAA0BC,EAAS,CAAT,OAAAA,IAAA,SAAAA,EAAA,GAC9CC,EAAQ,SAACC,EAAQC,EAAU,CAChCD,EAAO,UACLE,EACED,EACA,SAACE,EAAK,CAAK,OAAAC,GAAgBH,EAAYJ,EAAW,UAAA,CAAM,OAAAI,EAAW,KAAKE,CAAK,CAArB,EAAwBL,CAAK,CAA1E,EACX,UAAA,CAAM,OAAAM,GAAgBH,EAAYJ,EAAW,UAAA,CAAM,OAAAI,EAAW,SAAQ,CAAnB,EAAuBH,CAAK,CAAzE,EACN,SAACO,EAAG,CAAK,OAAAD,GAAgBH,EAAYJ,EAAW,UAAA,CAAM,OAAAI,EAAW,MAAMI,CAAG,CAApB,EAAuBP,CAAK,CAAzE,CAA0E,CACpF,CAEL,CAAC,CACH,CCPM,SAAUQ,GAAeC,EAA0BC,EAAiB,CAAjB,OAAAA,IAAA,SAAAA,EAAA,GAChDC,EAAQ,SAACC,EAAQC,EAAU,CAChCA,EAAW,IAAIJ,EAAU,SAAS,UAAA,CAAM,OAAAG,EAAO,UAAUC,CAAU,CAA3B,EAA8BH,CAAK,CAAC,CAC9E,CAAC,CACH,CC7DM,SAAUI,GAAsBC,EAA6BC,EAAwB,CACzF,OAAOC,EAAUF,CAAK,EAAE,KAAKG,GAAYF,CAAS,EAAGG,GAAUH,CAAS,CAAC,CAC3E,CCFM,SAAUI,GAAmBC,EAAuBC,EAAwB,CAChF,OAAOC,EAAUF,CAAK,EAAE,KAAKG,GAAYF,CAAS,EAAGG,GAAUH,CAAS,CAAC,CAC3E,CCJM,SAAUI,GAAiBC,EAAqBC,EAAwB,CAC5E,OAAO,IAAIC,EAAc,SAACC,EAAU,CAElC
,IAAIC,EAAI,EAER,OAAOH,EAAU,SAAS,UAAA,CACpBG,IAAMJ,EAAM,OAGdG,EAAW,SAAQ,GAInBA,EAAW,KAAKH,EAAMI,IAAI,EAIrBD,EAAW,QACd,KAAK,SAAQ,EAGnB,CAAC,CACH,CAAC,CACH,CCfM,SAAUE,GAAoBC,EAAoBC,EAAwB,CAC9E,OAAO,IAAIC,EAAc,SAACC,EAAU,CAClC,IAAIC,EAKJ,OAAAC,GAAgBF,EAAYF,EAAW,UAAA,CAErCG,EAAYJ,EAAcI,IAAgB,EAE1CC,GACEF,EACAF,EACA,UAAA,OACMK,EACAC,EACJ,GAAI,CAEDC,EAAkBJ,EAAS,KAAI,EAA7BE,EAAKE,EAAA,MAAED,EAAIC,EAAA,WACPC,EAAP,CAEAN,EAAW,MAAMM,CAAG,EACpB,OAGEF,EAKFJ,EAAW,SAAQ,EAGnBA,EAAW,KAAKG,CAAK,CAEzB,EACA,EACA,EAAI,CAER,CAAC,EAMM,UAAA,CAAM,OAAAI,EAAWN,GAAQ,KAAA,OAARA,EAAU,MAAM,GAAKA,EAAS,OAAM,CAA/C,CACf,CAAC,CACH,CCvDM,SAAUO,GAAyBC,EAAyBC,EAAwB,CACxF,GAAI,CAACD,EACH,MAAM,IAAI,MAAM,yBAAyB,EAE3C,OAAO,IAAIE,EAAc,SAACC,EAAU,CAClCC,GAAgBD,EAAYF,EAAW,UAAA,CACrC,IAAMI,EAAWL,EAAM,OAAO,eAAc,EAC5CI,GACED,EACAF,EACA,UAAA,CACEI,EAAS,KAAI,EAAG,KAAK,SAACC,EAAM,CACtBA,EAAO,KAGTH,EAAW,SAAQ,EAEnBA,EAAW,KAAKG,EAAO,KAAK,CAEhC,CAAC,CACH,EACA,EACA,EAAI,CAER,CAAC,CACH,CAAC,CACH,CCzBM,SAAUC,GAA8BC,EAA8BC,EAAwB,CAClG,OAAOC,GAAsBC,GAAmCH,CAAK,EAAGC,CAAS,CACnF,CCoBM,SAAUG,GAAaC,EAA2BC,EAAwB,CAC9E,GAAID,GAAS,KAAM,CACjB,GAAIE,GAAoBF,CAAK,EAC3B,OAAOG,GAAmBH,EAAOC,CAAS,EAE5C,GAAIG,GAAYJ,CAAK,EACnB,OAAOK,GAAcL,EAAOC,CAAS,EAEvC,GAAIK,GAAUN,CAAK,EACjB,OAAOO,GAAgBP,EAAOC,CAAS,EAEzC,GAAIO,GAAgBR,CAAK,EACvB,OAAOS,GAAsBT,EAAOC,CAAS,EAE/C,GAAIS,GAAWV,CAAK,EAClB,OAAOW,GAAiBX,EAAOC,CAAS,EAE1C,GAAIW,GAAqBZ,CAAK,EAC5B,OAAOa,GAA2Bb,EAAOC,CAAS,EAGtD,MAAMa,GAAiCd,CAAK,CAC9C,CCoDM,SAAUe,GAAQC,EAA2BC,EAAyB,CAC1E,OAAOA,EAAYC,GAAUF,EAAOC,CAAS,EAAIE,EAAUH,CAAK,CAClE,CCxBM,SAAUI,GAAE,SAAIC,EAAA,CAAA,EAAAC,EAAA,EAAAA,EAAA,UAAA,OAAAA,IAAAD,EAAAC,GAAA,UAAAA,GACpB,IAAMC,EAAYC,GAAaH,CAAI,EACnC,OAAOI,GAAKJ,EAAaE,CAAS,CACpC,CCsCM,SAAUG,GAAWC,EAA0BC,EAAyB,CAC5E,IAAMC,EAAeC,EAAWH,CAAmB,EAAIA,EAAsB,UAAA,CAAM,OAAAA,CAAA,EAC7EI,EAAO,SAACC,EAA6B,CAAK,OAAAA,EAAW,MAAMH,EAAY,CAAE,CAA/B,EAChD,OAAO,IAAII,EAAWL,EAAY,SAACI,EAAU,CAAK,OAAAJ,EAAU,SAASG,EAAa,EAAGC,CAAU,CAA7C,EAAiDD,CAAI,CACzG,CCrHM,SAAUG,GAAYC,EAAU,CACpC,OAAOA,aAAiB,MAAQ,CAAC,MAAMA,CAAY,CAC
rD,CCsCM,SAAUC,EAAUC,EAAyCC,EAAa,CAC9E,OAAOC,EAAQ,SAACC,EAAQC,EAAU,CAEhC,IAAIC,EAAQ,EAGZF,EAAO,UACLG,EAAyBF,EAAY,SAACG,EAAQ,CAG5CH,EAAW,KAAKJ,EAAQ,KAAKC,EAASM,EAAOF,GAAO,CAAC,CACvD,CAAC,CAAC,CAEN,CAAC,CACH,CC1DQ,IAAAG,GAAY,MAAK,QAEzB,SAASC,GAAkBC,EAA6BC,EAAW,CAC/D,OAAOH,GAAQG,CAAI,EAAID,EAAE,MAAA,OAAAE,EAAA,CAAA,EAAAC,EAAIF,CAAI,CAAA,CAAA,EAAID,EAAGC,CAAI,CAChD,CAMM,SAAUG,GAAuBJ,EAA2B,CAC9D,OAAOK,EAAI,SAAAJ,EAAI,CAAI,OAAAF,GAAYC,EAAIC,CAAI,CAApB,CAAqB,CAC5C,CCfQ,IAAAK,GAAY,MAAK,QACjBC,GAA0D,OAAM,eAArCC,GAA+B,OAAM,UAAlBC,GAAY,OAAM,KAQlE,SAAUC,GAAqDC,EAAuB,CAC1F,GAAIA,EAAK,SAAW,EAAG,CACrB,IAAMC,EAAQD,EAAK,GACnB,GAAIL,GAAQM,CAAK,EACf,MAAO,CAAE,KAAMA,EAAO,KAAM,IAAI,EAElC,GAAIC,GAAOD,CAAK,EAAG,CACjB,IAAME,EAAOL,GAAQG,CAAK,EAC1B,MAAO,CACL,KAAME,EAAK,IAAI,SAACC,EAAG,CAAK,OAAAH,EAAMG,EAAN,CAAU,EAClC,KAAID,IAKV,MAAO,CAAE,KAAMH,EAAa,KAAM,IAAI,CACxC,CAEA,SAASE,GAAOG,EAAQ,CACtB,OAAOA,GAAO,OAAOA,GAAQ,UAAYT,GAAeS,CAAG,IAAMR,EACnE,CC7BM,SAAUS,GAAaC,EAAgBC,EAAa,CACxD,OAAOD,EAAK,OAAO,SAACE,EAAQC,EAAKC,EAAC,CAAK,OAAEF,EAAOC,GAAOF,EAAOG,GAAKF,CAA5B,EAAqC,CAAA,CAAS,CACvF,CCsMM,SAAUG,GAAa,SAAoCC,EAAA,CAAA,EAAAC,EAAA,EAAAA,EAAA,UAAA,OAAAA,IAAAD,EAAAC,GAAA,UAAAA,GAC/D,IAAMC,EAAYC,GAAaH,CAAI,EAC7BI,EAAiBC,GAAkBL,CAAI,EAEvCM,EAA8BC,GAAqBP,CAAI,EAA/CQ,EAAWF,EAAA,KAAEG,EAAIH,EAAA,KAE/B,GAAIE,EAAY,SAAW,EAIzB,OAAOE,GAAK,CAAA,EAAIR,CAAgB,EAGlC,IAAMS,EAAS,IAAIC,EACjBC,GACEL,EACAN,EACAO,EAEI,SAACK,EAAM,CAAK,OAAAC,GAAaN,EAAMK,CAAM,CAAzB,EAEZE,EAAQ,CACb,EAGH,OAAOZ,EAAkBO,EAAO,KAAKM,GAAiBb,CAAc,CAAC,EAAsBO,CAC7F,CAEM,SAAUE,GACdL,EACAN,EACAgB,EAAiD,CAAjD,OAAAA,IAAA,SAAAA,EAAAF,IAEO,SAACG,EAA2B,CAGjCC,GACElB,EACA,UAAA,CAaE,QAZQmB,EAAWb,EAAW,OAExBM,EAAS,IAAI,MAAMO,CAAM,EAG3BC,EAASD,EAITE,EAAuBF,aAGlBG,EAAC,CACRJ,GACElB,EACA,UAAA,CACE,IAAMuB,EAASf,GAAKF,EAAYgB,GAAItB,CAAgB,EAChDwB,EAAgB,GACpBD,EAAO,UACLE,EACER,EACA,SAACS,EAAK,CAEJd,EAAOU,GAAKI,EACPF,IAEHA,EAAgB,GAChBH,KAEGA,GAGHJ,EAAW,KAAKD,EAAeJ,EAAO,MAAK,CAAE,CAAC,CAElD,EACA,UAAA,CACO,EAAEQ,GAGLH,EAAW,SAAQ,CAEvB,CAAC,CACF,CAEL,EACAA,CAAU,GAjCLK,EAAI,EA
AGA,EAAIH,EAAQG,MAAnBA,CAAC,CAoCZ,EACAL,CAAU,CAEd,CACF,CAMA,SAASC,GAAclB,EAAsC2B,EAAqBC,EAA0B,CACtG5B,EACF6B,GAAgBD,EAAc5B,EAAW2B,CAAO,EAEhDA,EAAO,CAEX,CC3RM,SAAUG,GACdC,EACAC,EACAC,EACAC,EACAC,EACAC,EACAC,EACAC,EAAgC,CAGhC,IAAMC,EAAc,CAAA,EAEhBC,EAAS,EAETC,EAAQ,EAERC,EAAa,GAKXC,EAAgB,UAAA,CAIhBD,GAAc,CAACH,EAAO,QAAU,CAACC,GACnCR,EAAW,SAAQ,CAEvB,EAGMY,EAAY,SAACC,EAAQ,CAAK,OAACL,EAASN,EAAaY,EAAWD,CAAK,EAAIN,EAAO,KAAKM,CAAK,CAA5D,EAE1BC,EAAa,SAACD,EAAQ,CAI1BT,GAAUJ,EAAW,KAAKa,CAAY,EAItCL,IAKA,IAAIO,EAAgB,GAGpBC,EAAUf,EAAQY,EAAOJ,GAAO,CAAC,EAAE,UACjCQ,EACEjB,EACA,SAACkB,EAAU,CAGTf,GAAY,MAAZA,EAAee,CAAU,EAErBd,EAGFQ,EAAUM,CAAiB,EAG3BlB,EAAW,KAAKkB,CAAU,CAE9B,EACA,UAAA,CAGEH,EAAgB,EAClB,EAEA,OACA,UAAA,CAIE,GAAIA,EAKF,GAAI,CAIFP,IAKA,qBACE,IAAMW,EAAgBZ,EAAO,MAAK,EAI9BF,EACFe,GAAgBpB,EAAYK,EAAmB,UAAA,CAAM,OAAAS,EAAWK,CAAa,CAAxB,CAAyB,EAE9EL,EAAWK,CAAa,GARrBZ,EAAO,QAAUC,EAASN,OAYjCS,EAAa,QACNU,EAAP,CACArB,EAAW,MAAMqB,CAAG,EAG1B,CAAC,CACF,CAEL,EAGA,OAAAtB,EAAO,UACLkB,EAAyBjB,EAAYY,EAAW,UAAA,CAE9CF,EAAa,GACbC,EAAa,CACf,CAAC,CAAC,EAKG,UAAA,CACLL,GAAmB,MAAnBA,EAAmB,CACrB,CACF,CClEM,SAAUgB,GACdC,EACAC,EACAC,EAA6B,CAE7B,OAFAA,IAAA,SAAAA,EAAA,KAEIC,EAAWF,CAAc,EAEpBF,GAAS,SAACK,EAAGC,EAAC,CAAK,OAAAC,EAAI,SAACC,EAAQC,EAAU,CAAK,OAAAP,EAAeG,EAAGG,EAAGF,EAAGG,CAAE,CAA1B,CAA2B,EAAEC,EAAUT,EAAQI,EAAGC,CAAC,CAAC,CAAC,CAAjF,EAAoFH,CAAU,GAC/G,OAAOD,GAAmB,WACnCC,EAAaD,GAGRS,EAAQ,SAACC,EAAQC,EAAU,CAAK,OAAAC,GAAeF,EAAQC,EAAYZ,EAASE,CAAU,CAAtD,CAAuD,EAChG,CChCM,SAAUY,GAAyCC,EAA6B,CAA7B,OAAAA,IAAA,SAAAA,EAAA,KAChDC,GAASC,GAAUF,CAAU,CACtC,CCNM,SAAUG,IAAS,CACvB,OAAOC,GAAS,CAAC,CACnB,CCmDM,SAAUC,IAAM,SAACC,EAAA,CAAA,EAAAC,EAAA,EAAAA,EAAA,UAAA,OAAAA,IAAAD,EAAAC,GAAA,UAAAA,GACrB,OAAOC,GAAS,EAAGC,GAAKH,EAAMI,GAAaJ,CAAI,CAAC,CAAC,CACnD,CC9DM,SAAUK,EAAsCC,EAA0B,CAC9E,OAAO,IAAIC,EAA+B,SAACC,EAAU,CACnDC,EAAUH,EAAiB,CAAE,EAAE,UAAUE,CAAU,CACrD,CAAC,CACH,CChDA,IAAME,GAA0B,CAAC,cAAe,gBAAgB,EAC1DC,GAAqB,CAAC,mBAAoB,qBAAqB,EAC/DC,GAAgB,CAAC,KAAM,KAAK,EA8N5B,SAAUC,EACdC,EACAC,EACAC,EACAC,EAAsC,CAMtC,GAJIC,EAAWF,
CAAO,IACpBC,EAAiBD,EACjBA,EAAU,QAERC,EACF,OAAOJ,EAAaC,EAAQC,EAAWC,CAA+B,EAAE,KAAKG,GAAiBF,CAAc,CAAC,EAUzG,IAAAG,EAAAC,EAEJC,GAAcR,CAAM,EAChBH,GAAmB,IAAI,SAACY,EAAU,CAAK,OAAA,SAACC,EAAY,CAAK,OAAAV,EAAOS,GAAYR,EAAWS,EAASR,CAA+B,CAAtE,CAAlB,CAAyF,EAElIS,GAAwBX,CAAM,EAC5BJ,GAAwB,IAAIgB,GAAwBZ,EAAQC,CAAS,CAAC,EACtEY,GAA0Bb,CAAM,EAChCF,GAAc,IAAIc,GAAwBZ,EAAQC,CAAS,CAAC,EAC5D,CAAA,EAAE,CAAA,EATDa,EAAGR,EAAA,GAAES,EAAMT,EAAA,GAgBlB,GAAI,CAACQ,GACCE,GAAYhB,CAAM,EACpB,OAAOiB,GAAS,SAACC,EAAc,CAAK,OAAAnB,EAAUmB,EAAWjB,EAAWC,CAA+B,CAA/D,CAAgE,EAClGiB,EAAUnB,CAAM,CAAC,EAOvB,GAAI,CAACc,EACH,MAAM,IAAI,UAAU,sBAAsB,EAG5C,OAAO,IAAIM,EAAc,SAACC,EAAU,CAIlC,IAAMX,EAAU,UAAA,SAACY,EAAA,CAAA,EAAAC,EAAA,EAAAA,EAAA,UAAA,OAAAA,IAAAD,EAAAC,GAAA,UAAAA,GAAmB,OAAAF,EAAW,KAAK,EAAIC,EAAK,OAASA,EAAOA,EAAK,EAAE,CAAhD,EAEpC,OAAAR,EAAIJ,CAAO,EAEJ,UAAA,CAAM,OAAAK,EAAQL,CAAO,CAAf,CACf,CAAC,CACH,CASA,SAASE,GAAwBZ,EAAaC,EAAiB,CAC7D,OAAO,SAACQ,EAAkB,CAAK,OAAA,SAACC,EAAY,CAAK,OAAAV,EAAOS,GAAYR,EAAWS,CAAO,CAArC,CAAlB,CACjC,CAOA,SAASC,GAAwBX,EAAW,CAC1C,OAAOI,EAAWJ,EAAO,WAAW,GAAKI,EAAWJ,EAAO,cAAc,CAC3E,CAOA,SAASa,GAA0Bb,EAAW,CAC5C,OAAOI,EAAWJ,EAAO,EAAE,GAAKI,EAAWJ,EAAO,GAAG,CACvD,CAOA,SAASQ,GAAcR,EAAW,CAChC,OAAOI,EAAWJ,EAAO,gBAAgB,GAAKI,EAAWJ,EAAO,mBAAmB,CACrF,CC/LM,SAAUwB,GACdC,EACAC,EACAC,EAAsC,CAEtC,OAAIA,EACKH,GAAoBC,EAAYC,CAAa,EAAE,KAAKE,GAAiBD,CAAc,CAAC,EAGtF,IAAIE,EAAoB,SAACC,EAAU,CACxC,IAAMC,EAAU,UAAA,SAACC,EAAA,CAAA,EAAAC,EAAA,EAAAA,EAAA,UAAA,OAAAA,IAAAD,EAAAC,GAAA,UAAAA,GAAc,OAAAH,EAAW,KAAKE,EAAE,SAAW,EAAIA,EAAE,GAAKA,CAAC,CAAzC,EACzBE,EAAWT,EAAWM,CAAO,EACnC,OAAOI,EAAWT,CAAa,EAAI,UAAA,CAAM,OAAAA,EAAcK,EAASG,CAAQ,CAA/B,EAAmC,MAC9E,CAAC,CACH,CCtBM,SAAUE,GACdC,EACAC,EACAC,EAAyC,CAFzCF,IAAA,SAAAA,EAAA,GAEAE,IAAA,SAAAA,EAAAC,IAIA,IAAIC,EAAmB,GAEvB,OAAIH,GAAuB,OAIrBI,GAAYJ,CAAmB,EACjCC,EAAYD,EAIZG,EAAmBH,GAIhB,IAAIK,EAAW,SAACC,EAAU,CAI/B,IAAIC,EAAMC,GAAYT,CAAO,EAAI,CAACA,EAAUE,EAAW,IAAG,EAAKF,EAE3DQ,EAAM,IAERA,EAAM,GAIR,IAAIE,EAAI,EAGR,OAAOR,EAAU,SAAS,UAAA,CACnBK,EAAW,SAEdA,EAAW,KAAKG,GAAG,EAEf,GAAKN,EAGP,KA
AK,SAAS,OAAWA,CAAgB,EAGzCG,EAAW,SAAQ,EAGzB,EAAGC,CAAG,CACR,CAAC,CACH,CChGM,SAAUG,GAAK,SAACC,EAAA,CAAA,EAAAC,EAAA,EAAAA,EAAA,UAAA,OAAAA,IAAAD,EAAAC,GAAA,UAAAA,GACpB,IAAMC,EAAYC,GAAaH,CAAI,EAC7BI,EAAaC,GAAUL,EAAM,GAAQ,EACrCM,EAAUN,EAChB,OAAQM,EAAQ,OAGZA,EAAQ,SAAW,EAEnBC,EAAUD,EAAQ,EAAE,EAEpBE,GAASJ,CAAU,EAAEK,GAAKH,EAASJ,CAAS,CAAC,EAL7CQ,CAMN,CCjEO,IAAMC,GAAQ,IAAIC,EAAkBC,EAAI,ECpCvC,IAAAC,GAAY,MAAK,QAMnB,SAAUC,GAAkBC,EAAiB,CACjD,OAAOA,EAAK,SAAW,GAAKF,GAAQE,EAAK,EAAE,EAAIA,EAAK,GAAMA,CAC5D,CCoDM,SAAUC,EAAUC,EAAiDC,EAAa,CACtF,OAAOC,EAAQ,SAACC,EAAQC,EAAU,CAEhC,IAAIC,EAAQ,EAIZF,EAAO,UAILG,EAAyBF,EAAY,SAACG,EAAK,CAAK,OAAAP,EAAU,KAAKC,EAASM,EAAOF,GAAO,GAAKD,EAAW,KAAKG,CAAK,CAAhE,CAAiE,CAAC,CAEtH,CAAC,CACH,CCxBM,SAAUC,IAAG,SAACC,EAAA,CAAA,EAAAC,EAAA,EAAAA,EAAA,UAAA,OAAAA,IAAAD,EAAAC,GAAA,UAAAA,GAClB,IAAMC,EAAiBC,GAAkBH,CAAI,EAEvCI,EAAUC,GAAeL,CAAI,EAEnC,OAAOI,EAAQ,OACX,IAAIE,EAAsB,SAACC,EAAU,CAGnC,IAAIC,EAAuBJ,EAAQ,IAAI,UAAA,CAAM,MAAA,CAAA,CAAA,CAAE,EAK3CK,EAAYL,EAAQ,IAAI,UAAA,CAAM,MAAA,EAAA,CAAK,EAGvCG,EAAW,IAAI,UAAA,CACbC,EAAUC,EAAY,IACxB,CAAC,EAKD,mBAASC,EAAW,CAClBC,EAAUP,EAAQM,EAAY,EAAE,UAC9BE,EACEL,EACA,SAACM,EAAK,CAKJ,GAJAL,EAAQE,GAAa,KAAKG,CAAK,EAI3BL,EAAQ,MAAM,SAACM,EAAM,CAAK,OAAAA,EAAO,MAAP,CAAa,EAAG,CAC5C,IAAMC,EAAcP,EAAQ,IAAI,SAACM,EAAM,CAAK,OAAAA,EAAO,MAAK,CAAZ,CAAe,EAE3DP,EAAW,KAAKL,EAAiBA,EAAc,MAAA,OAAAc,EAAA,CAAA,EAAAC,EAAIF,CAAM,CAAA,CAAA,EAAIA,CAAM,EAI/DP,EAAQ,KAAK,SAACM,EAAQI,EAAC,CAAK,MAAA,CAACJ,EAAO,QAAUL,EAAUS,EAA5B,CAA8B,GAC5DX,EAAW,SAAQ,EAGzB,EACA,UAAA,CAGEE,EAAUC,GAAe,GAIzB,CAACF,EAAQE,GAAa,QAAUH,EAAW,SAAQ,CACrD,CAAC,CACF,GA9BIG,EAAc,EAAG,CAACH,EAAW,QAAUG,EAAcN,EAAQ,OAAQM,MAArEA,CAAW,EAmCpB,OAAO,UAAA,CACLF,EAAUC,EAAY,IACxB,CACF,CAAC,EACDU,CACN,CC9DM,SAAUC,GAASC,EAAoD,CAC3E,OAAOC,EAAQ,SAACC,EAAQC,EAAU,CAChC,IAAIC,EAAW,GACXC,EAAsB,KACtBC,EAA6C,KAC7CC,EAAa,GAEXC,EAAc,UAAA,CAGlB,GAFAF,GAAkB,MAAlBA,EAAoB,YAAW,EAC/BA,EAAqB,KACjBF,EAAU,CACZA,EAAW,GACX,IAAMK,EAAQJ,EACdA,EAAY,KACZF,EAAW,KAAKM,CAAK,EAEvBF,GAAcJ,EAAW,SAAQ,CACnC,EAEMO,EAAkB,UAAA,CACtBJ,EAAqB,KAC
rBC,GAAcJ,EAAW,SAAQ,CACnC,EAEAD,EAAO,UACLS,EACER,EACA,SAACM,EAAK,CACJL,EAAW,GACXC,EAAYI,EACPH,GACHM,EAAUZ,EAAiBS,CAAK,CAAC,EAAE,UAChCH,EAAqBK,EAAyBR,EAAYK,EAAaE,CAAe,CAAE,CAG/F,EACA,UAAA,CACEH,EAAa,IACZ,CAACH,GAAY,CAACE,GAAsBA,EAAmB,SAAWH,EAAW,SAAQ,CACxF,CAAC,CACF,CAEL,CAAC,CACH,CC3CM,SAAUU,GAAaC,EAAkBC,EAAyC,CAAzC,OAAAA,IAAA,SAAAA,EAAAC,IACtCC,GAAM,UAAA,CAAM,OAAAC,GAAMJ,EAAUC,CAAS,CAAzB,CAA0B,CAC/C,CCEM,SAAUI,GAAeC,EAAoBC,EAAsC,CAAtC,OAAAA,IAAA,SAAAA,EAAA,MAGjDA,EAAmBA,GAAgB,KAAhBA,EAAoBD,EAEhCE,EAAQ,SAACC,EAAQC,EAAU,CAChC,IAAIC,EAAiB,CAAA,EACjBC,EAAQ,EAEZH,EAAO,UACLI,EACEH,EACA,SAACI,EAAK,aACAC,EAAuB,KAKvBH,IAAUL,IAAsB,GAClCI,EAAQ,KAAK,CAAA,CAAE,MAIjB,QAAqBK,EAAAC,GAAAN,CAAO,EAAAO,EAAAF,EAAA,KAAA,EAAA,CAAAE,EAAA,KAAAA,EAAAF,EAAA,KAAA,EAAE,CAAzB,IAAMG,EAAMD,EAAA,MACfC,EAAO,KAAKL,CAAK,EAMbR,GAAca,EAAO,SACvBJ,EAASA,GAAM,KAANA,EAAU,CAAA,EACnBA,EAAO,KAAKI,CAAM,qGAItB,GAAIJ,MAIF,QAAqBK,EAAAH,GAAAF,CAAM,EAAAM,EAAAD,EAAA,KAAA,EAAA,CAAAC,EAAA,KAAAA,EAAAD,EAAA,KAAA,EAAE,CAAxB,IAAMD,EAAME,EAAA,MACfC,GAAUX,EAASQ,CAAM,EACzBT,EAAW,KAAKS,CAAM,oGAG5B,EACA,UAAA,aAGE,QAAqBI,EAAAN,GAAAN,CAAO,EAAAa,EAAAD,EAAA,KAAA,EAAA,CAAAC,EAAA,KAAAA,EAAAD,EAAA,KAAA,EAAE,CAAzB,IAAMJ,EAAMK,EAAA,MACfd,EAAW,KAAKS,CAAM,oGAExBT,EAAW,SAAQ,CACrB,EAEA,OACA,UAAA,CAEEC,EAAU,IACZ,CAAC,CACF,CAEL,CAAC,CACH,CCbM,SAAUc,GACdC,EAAgD,CAEhD,OAAOC,EAAQ,SAACC,EAAQC,EAAU,CAChC,IAAIC,EAAgC,KAChCC,EAAY,GACZC,EAEJF,EAAWF,EAAO,UAChBK,EAAyBJ,EAAY,OAAW,OAAW,SAACK,EAAG,CAC7DF,EAAgBG,EAAUT,EAASQ,EAAKT,GAAWC,CAAQ,EAAEE,CAAM,CAAC,CAAC,EACjEE,GACFA,EAAS,YAAW,EACpBA,EAAW,KACXE,EAAc,UAAUH,CAAU,GAIlCE,EAAY,EAEhB,CAAC,CAAC,EAGAA,IAMFD,EAAS,YAAW,EACpBA,EAAW,KACXE,EAAe,UAAUH,CAAU,EAEvC,CAAC,CACH,CC/HM,SAAUO,GACdC,EACAC,EACAC,EACAC,EACAC,EAAqC,CAErC,OAAO,SAACC,EAAuBC,EAA2B,CAIxD,IAAIC,EAAWL,EAIXM,EAAaP,EAEbQ,EAAQ,EAGZJ,EAAO,UACLK,EACEJ,EACA,SAACK,EAAK,CAEJ,IAAMC,EAAIH,IAEVD,EAAQD,EAEJP,EAAYQ,EAAOG,EAAOC,CAAC,GAIzBL,EAAW,GAAOI,GAGxBR,GAAcG,EAAW,KAAKE,CAAK,CACrC,EAGAJ,GACG,UAAA,CACCG,GAAYD,EAAW,KAAKE,CAAK,EACjCF,EAAW,SAAQ,CACrB,CAAE,CACL,C
AEL,CACF,CCnCM,SAAUO,IAAa,SAAOC,EAAA,CAAA,EAAAC,EAAA,EAAAA,EAAA,UAAA,OAAAA,IAAAD,EAAAC,GAAA,UAAAA,GAClC,IAAMC,EAAiBC,GAAkBH,CAAI,EAC7C,OAAOE,EACHE,GAAKL,GAAa,MAAA,OAAAM,EAAA,CAAA,EAAAC,EAAKN,CAAoC,CAAA,CAAA,EAAGO,GAAiBL,CAAc,CAAC,EAC9FM,EAAQ,SAACC,EAAQC,EAAU,CACzBC,GAAiBN,EAAA,CAAEI,CAAM,EAAAH,EAAKM,GAAeZ,CAAI,CAAC,CAAA,CAAA,EAAGU,CAAU,CACjE,CAAC,CACP,CCUM,SAAUG,IAAiB,SAC/BC,EAAA,CAAA,EAAAC,EAAA,EAAAA,EAAA,UAAA,OAAAA,IAAAD,EAAAC,GAAA,UAAAA,GAEA,OAAOC,GAAa,MAAA,OAAAC,EAAA,CAAA,EAAAC,EAAIJ,CAAY,CAAA,CAAA,CACtC,CC+BM,SAAUK,GACdC,EACAC,EAA6G,CAE7G,OAAOC,EAAWD,CAAc,EAAIE,GAASH,EAASC,EAAgB,CAAC,EAAIE,GAASH,EAAS,CAAC,CAChG,CCpBM,SAAUI,GAAgBC,EAAiBC,EAAyC,CAAzC,OAAAA,IAAA,SAAAA,EAAAC,IACxCC,EAAQ,SAACC,EAAQC,EAAU,CAChC,IAAIC,EAAkC,KAClCC,EAAsB,KACtBC,EAA0B,KAExBC,EAAO,UAAA,CACX,GAAIH,EAAY,CAEdA,EAAW,YAAW,EACtBA,EAAa,KACb,IAAMI,EAAQH,EACdA,EAAY,KACZF,EAAW,KAAKK,CAAK,EAEzB,EACA,SAASC,GAAY,CAInB,IAAMC,EAAaJ,EAAYR,EACzBa,EAAMZ,EAAU,IAAG,EACzB,GAAIY,EAAMD,EAAY,CAEpBN,EAAa,KAAK,SAAS,OAAWM,EAAaC,CAAG,EACtDR,EAAW,IAAIC,CAAU,EACzB,OAGFG,EAAI,CACN,CAEAL,EAAO,UACLU,EACET,EACA,SAACK,EAAQ,CACPH,EAAYG,EACZF,EAAWP,EAAU,IAAG,EAGnBK,IACHA,EAAaL,EAAU,SAASU,EAAcX,CAAO,EACrDK,EAAW,IAAIC,CAAU,EAE7B,EACA,UAAA,CAGEG,EAAI,EACJJ,EAAW,SAAQ,CACrB,EAEA,OACA,UAAA,CAEEE,EAAYD,EAAa,IAC3B,CAAC,CACF,CAEL,CAAC,CACH,CCpFM,SAAUS,GAAqBC,EAAe,CAClD,OAAOC,EAAQ,SAACC,EAAQC,EAAU,CAChC,IAAIC,EAAW,GACfF,EAAO,UACLG,EACEF,EACA,SAACG,EAAK,CACJF,EAAW,GACXD,EAAW,KAAKG,CAAK,CACvB,EACA,UAAA,CACOF,GACHD,EAAW,KAAKH,CAAa,EAE/BG,EAAW,SAAQ,CACrB,CAAC,CACF,CAEL,CAAC,CACH,CCXM,SAAUI,GAAQC,EAAa,CACnC,OAAOA,GAAS,EAEZ,UAAA,CAAM,OAAAC,CAAA,EACNC,EAAQ,SAACC,EAAQC,EAAU,CACzB,IAAIC,EAAO,EACXF,EAAO,UACLG,EAAyBF,EAAY,SAACG,EAAK,CAIrC,EAAEF,GAAQL,IACZI,EAAW,KAAKG,CAAK,EAIjBP,GAASK,GACXD,EAAW,SAAQ,EAGzB,CAAC,CAAC,CAEN,CAAC,CACP,CC9BM,SAAUI,IAAc,CAC5B,OAAOC,EAAQ,SAACC,EAAQC,EAAU,CAChCD,EAAO,UAAUE,EAAyBD,EAAYE,EAAI,CAAC,CAC7D,CAAC,CACH,CCCM,SAAUC,GAASC,EAAQ,CAC/B,OAAOC,EAAI,UAAA,CAAM,OAAAD,CAAA,CAAK,CACxB,CC2BM,SAAUE,GACdC,EACAC,EAAmC,CAEnC,OA
AIA,EAEK,SAACC,EAAqB,CAC3B,OAAAC,GAAOF,EAAkB,KAAKG,GAAK,CAAC,EAAGC,GAAc,CAAE,EAAGH,EAAO,KAAKH,GAAUC,CAAqB,CAAC,CAAC,CAAvG,EAGGM,GAAS,SAACC,EAAOC,EAAK,CAAK,OAAAR,EAAsBO,EAAOC,CAAK,EAAE,KAAKJ,GAAK,CAAC,EAAGK,GAAMF,CAAK,CAAC,CAA9D,CAA+D,CACnG,CCxBM,SAAUG,GAASC,EAAoBC,EAAyC,CAAzCA,IAAA,SAAAA,EAAAC,IAC3C,IAAMC,EAAWC,GAAMJ,EAAKC,CAAS,EACrC,OAAOI,GAAU,UAAA,CAAM,OAAAF,CAAA,CAAQ,CACjC,CC0EM,SAAUG,EACdC,EACAC,EAA0D,CAA1D,OAAAA,IAAA,SAAAA,EAA+BC,IAK/BF,EAAaA,GAAU,KAAVA,EAAcG,GAEpBC,EAAQ,SAACC,EAAQC,EAAU,CAGhC,IAAIC,EAEAC,EAAQ,GAEZH,EAAO,UACLI,EAAyBH,EAAY,SAACI,EAAK,CAEzC,IAAMC,EAAaV,EAAYS,CAAK,GAKhCF,GAAS,CAACR,EAAYO,EAAaI,CAAU,KAM/CH,EAAQ,GACRD,EAAcI,EAGdL,EAAW,KAAKI,CAAK,EAEzB,CAAC,CAAC,CAEN,CAAC,CACH,CAEA,SAASP,GAAeS,EAAQC,EAAM,CACpC,OAAOD,IAAMC,CACf,CCjHM,SAAUC,EAA8CC,EAAQC,EAAuC,CAC3G,OAAOC,EAAqB,SAACC,EAAMC,EAAI,CAAK,OAAAH,EAAUA,EAAQE,EAAEH,GAAMI,EAAEJ,EAAI,EAAIG,EAAEH,KAASI,EAAEJ,EAAjD,CAAqD,CACnG,CCLM,SAAUK,IAAO,SAAIC,EAAA,CAAA,EAAAC,EAAA,EAAAA,EAAA,UAAA,OAAAA,IAAAD,EAAAC,GAAA,UAAAA,GACzB,OAAO,SAACC,EAAqB,CAAK,OAAAC,GAAOD,EAAQE,EAAE,MAAA,OAAAC,EAAA,CAAA,EAAAC,EAAIN,CAAM,CAAA,CAAA,CAAA,CAA3B,CACpC,CCHM,SAAUO,EAAYC,EAAoB,CAC9C,OAAOC,EAAQ,SAACC,EAAQC,EAAU,CAGhC,GAAI,CACFD,EAAO,UAAUC,CAAU,UAE3BA,EAAW,IAAIH,CAAQ,EAE3B,CAAC,CACH,CC9BM,SAAUI,GAAYC,EAAa,CACvC,OAAOA,GAAS,EACZ,UAAA,CAAM,OAAAC,CAAA,EACNC,EAAQ,SAACC,EAAQC,EAAU,CAKzB,IAAIC,EAAc,CAAA,EAClBF,EAAO,UACLG,EACEF,EACA,SAACG,EAAK,CAEJF,EAAO,KAAKE,CAAK,EAGjBP,EAAQK,EAAO,QAAUA,EAAO,MAAK,CACvC,EACA,UAAA,aAGE,QAAoBG,EAAAC,GAAAJ,CAAM,EAAAK,EAAAF,EAAA,KAAA,EAAA,CAAAE,EAAA,KAAAA,EAAAF,EAAA,KAAA,EAAE,CAAvB,IAAMD,EAAKG,EAAA,MACdN,EAAW,KAAKG,CAAK,oGAEvBH,EAAW,SAAQ,CACrB,EAEA,OACA,UAAA,CAEEC,EAAS,IACX,CAAC,CACF,CAEL,CAAC,CACP,CC1DM,SAAUM,IAAK,SAAIC,EAAA,CAAA,EAAAC,EAAA,EAAAA,EAAA,UAAA,OAAAA,IAAAD,EAAAC,GAAA,UAAAA,GACvB,IAAMC,EAAYC,GAAaH,CAAI,EAC7BI,EAAaC,GAAUL,EAAM,GAAQ,EAC3C,OAAAA,EAAOM,GAAeN,CAAI,EAEnBO,EAAQ,SAACC,EAAQC,EAAU,CAChCC,GAASN,CAAU,EAAEO,GAAIC,EAAA,CAAEJ,CAAM,EAAAK,EAAMb,CAA6B,CAAA,EAAGE,CAAS,CAAC,EAAE,UAAUO,CAAU,CACzG,C
AAC,CACH,CCcM,SAAUK,IAAS,SACvBC,EAAA,CAAA,EAAAC,EAAA,EAAAA,EAAA,UAAA,OAAAA,IAAAD,EAAAC,GAAA,UAAAA,GAEA,OAAOC,GAAK,MAAA,OAAAC,EAAA,CAAA,EAAAC,EAAIJ,CAAY,CAAA,CAAA,CAC9B,CCmEM,SAAUK,GAAUC,EAAqC,OACzDC,EAAQ,IACRC,EAEJ,OAAIF,GAAiB,OACf,OAAOA,GAAkB,UACxBG,EAA4BH,EAAa,MAAzCC,EAAKE,IAAA,OAAG,IAAQA,EAAED,EAAUF,EAAa,OAE5CC,EAAQD,GAILC,GAAS,EACZ,UAAA,CAAM,OAAAG,CAAA,EACNC,EAAQ,SAACC,EAAQC,EAAU,CACzB,IAAIC,EAAQ,EACRC,EAEEC,EAAc,UAAA,CAGlB,GAFAD,GAAS,MAATA,EAAW,YAAW,EACtBA,EAAY,KACRP,GAAS,KAAM,CACjB,IAAMS,EAAW,OAAOT,GAAU,SAAWU,GAAMV,CAAK,EAAIW,EAAUX,EAAMM,CAAK,CAAC,EAC5EM,EAAqBC,EAAyBR,EAAY,UAAA,CAC9DO,EAAmB,YAAW,EAC9BE,EAAiB,CACnB,CAAC,EACDL,EAAS,UAAUG,CAAkB,OAErCE,EAAiB,CAErB,EAEMA,EAAoB,UAAA,CACxB,IAAIC,EAAY,GAChBR,EAAYH,EAAO,UACjBS,EAAyBR,EAAY,OAAW,UAAA,CAC1C,EAAEC,EAAQP,EACRQ,EACFC,EAAW,EAEXO,EAAY,GAGdV,EAAW,SAAQ,CAEvB,CAAC,CAAC,EAGAU,GACFP,EAAW,CAEf,EAEAM,EAAiB,CACnB,CAAC,CACP,CC7HM,SAAUE,GAAUC,EAAyB,CACjD,OAAOC,EAAQ,SAACC,EAAQC,EAAU,CAChC,IAAIC,EAAW,GACXC,EAAsB,KAC1BH,EAAO,UACLI,EAAyBH,EAAY,SAACI,EAAK,CACzCH,EAAW,GACXC,EAAYE,CACd,CAAC,CAAC,EAEJP,EAAS,UACPM,EACEH,EACA,UAAA,CACE,GAAIC,EAAU,CACZA,EAAW,GACX,IAAMG,EAAQF,EACdA,EAAY,KACZF,EAAW,KAAKI,CAAK,EAEzB,EACAC,EAAI,CACL,CAEL,CAAC,CACH,CCgBM,SAAUC,GAAcC,EAA6DC,EAAQ,CAMjG,OAAOC,EAAQC,GAAcH,EAAaC,EAAW,UAAU,QAAU,EAAG,EAAI,CAAC,CACnF,CCgDM,SAAUG,GAASC,EAA4B,CAA5BA,IAAA,SAAAA,EAAA,CAAA,GACf,IAAAC,EAAgHD,EAAO,UAAvHE,EAASD,IAAA,OAAG,UAAA,CAAM,OAAA,IAAIE,CAAJ,EAAgBF,EAAEG,EAA4EJ,EAAO,aAAnFK,EAAYD,IAAA,OAAG,GAAIA,EAAEE,EAAuDN,EAAO,gBAA9DO,EAAeD,IAAA,OAAG,GAAIA,EAAEE,EAA+BR,EAAO,oBAAtCS,EAAmBD,IAAA,OAAG,GAAIA,EAUnH,OAAO,SAACE,EAAa,CACnB,IAAIC,EACAC,EACAC,EACAC,EAAW,EACXC,EAAe,GACfC,EAAa,GAEXC,EAAc,UAAA,CAClBL,GAAe,MAAfA,EAAiB,YAAW,EAC5BA,EAAkB,MACpB,EAGMM,EAAQ,UAAA,CACZD,EAAW,EACXN,EAAaE,EAAU,OACvBE,EAAeC,EAAa,EAC9B,EACMG,EAAsB,UAAA,CAG1B,IAAMC,EAAOT,EACbO,EAAK,EACLE,GAAI,MAAJA,EAAM,YAAW,CACnB,EAEA,OAAOC,EAAc,SAACC,EAAQC,GAAU,CACtCT,IACI,CAACE,GAAc,CAACD,GAClBE,EAAW,EAOb,IAAMO,GAAQX,EAAUA,GAAO,KAAPA,EAAWX,EAAS,EAO5CqB,GAAW,IAAI,UAAA
,CACbT,IAKIA,IAAa,GAAK,CAACE,GAAc,CAACD,IACpCH,EAAkBa,GAAYN,EAAqBV,CAAmB,EAE1E,CAAC,EAIDe,GAAK,UAAUD,EAAU,EAGvB,CAACZ,GAIDG,EAAW,IAOXH,EAAa,IAAIe,GAAe,CAC9B,KAAM,SAACC,GAAK,CAAK,OAAAH,GAAK,KAAKG,EAAK,CAAf,EACjB,MAAO,SAACC,GAAG,CACTZ,EAAa,GACbC,EAAW,EACXL,EAAkBa,GAAYP,EAAOb,EAAcuB,EAAG,EACtDJ,GAAK,MAAMI,EAAG,CAChB,EACA,SAAU,UAAA,CACRb,EAAe,GACfE,EAAW,EACXL,EAAkBa,GAAYP,EAAOX,CAAe,EACpDiB,GAAK,SAAQ,CACf,EACD,EACDK,EAAUP,CAAM,EAAE,UAAUX,CAAU,EAE1C,CAAC,EAAED,CAAa,CAClB,CACF,CAEA,SAASe,GACPP,EACAY,EAA+C,SAC/CC,EAAA,CAAA,EAAAC,EAAA,EAAAA,EAAA,UAAA,OAAAA,IAAAD,EAAAC,EAAA,GAAA,UAAAA,GAEA,GAAIF,IAAO,GAAM,CACfZ,EAAK,EACL,OAGF,GAAIY,IAAO,GAIX,KAAMG,EAAe,IAAIP,GAAe,CACtC,KAAM,UAAA,CACJO,EAAa,YAAW,EACxBf,EAAK,CACP,EACD,EAED,OAAOY,EAAE,MAAA,OAAAI,EAAA,CAAA,EAAAC,EAAIJ,CAAI,CAAA,CAAA,EAAE,UAAUE,CAAY,EAC3C,CClHM,SAAUG,EACdC,EACAC,EACAC,EAAyB,WAErBC,EACAC,EAAW,GACf,OAAIJ,GAAsB,OAAOA,GAAuB,UACnDK,EAA8EL,EAAkB,WAAhGG,EAAUE,IAAA,OAAG,IAAQA,EAAEC,EAAuDN,EAAkB,WAAzEC,EAAUK,IAAA,OAAG,IAAQA,EAAEC,EAAgCP,EAAkB,SAAlDI,EAAQG,IAAA,OAAG,GAAKA,EAAEL,EAAcF,EAAkB,WAEnGG,EAAcH,GAAkB,KAAlBA,EAAsB,IAE/BQ,GAAS,CACd,UAAW,UAAA,CAAM,OAAA,IAAIC,GAAcN,EAAYF,EAAYC,CAAS,CAAnD,EACjB,aAAc,GACd,gBAAiB,GACjB,oBAAqBE,EACtB,CACH,CCvIM,SAAUM,GAAQC,EAAa,CACnC,OAAOC,EAAO,SAACC,EAAGC,EAAK,CAAK,OAAAH,GAASG,CAAT,CAAc,CAC5C,CCWM,SAAUC,GAAaC,EAAyB,CACpD,OAAOC,EAAQ,SAACC,EAAQC,EAAU,CAChC,IAAIC,EAAS,GAEPC,EAAiBC,EACrBH,EACA,UAAA,CACEE,GAAc,MAAdA,EAAgB,YAAW,EAC3BD,EAAS,EACX,EACAG,EAAI,EAGNC,EAAUR,CAAQ,EAAE,UAAUK,CAAc,EAE5CH,EAAO,UAAUI,EAAyBH,EAAY,SAACM,EAAK,CAAK,OAAAL,GAAUD,EAAW,KAAKM,CAAK,CAA/B,CAAgC,CAAC,CACpG,CAAC,CACH,CCRM,SAAUC,GAAS,SAAOC,EAAA,CAAA,EAAAC,EAAA,EAAAA,EAAA,UAAA,OAAAA,IAAAD,EAAAC,GAAA,UAAAA,GAC9B,IAAMC,EAAYC,GAAaH,CAAM,EACrC,OAAOI,EAAQ,SAACC,EAAQC,EAAU,EAI/BJ,EAAYK,GAAOP,EAAQK,EAAQH,CAAS,EAAIK,GAAOP,EAAQK,CAAM,GAAG,UAAUC,CAAU,CAC/F,CAAC,CACH,CCmBM,SAAUE,EACdC,EACAC,EAA6G,CAE7G,OAAOC,EAAQ,SAACC,EAAQC,EAAU,CAChC,IAAIC,EAAyD,KACzDC,EAAQ,EAERC,EAAa,GAIXC,EAAgB,UAAA,CAAM,OAAAD,GAAc,CAACF,GAAmBD,EAAW,SAAQ,CAArD,
EAE5BD,EAAO,UACLM,EACEL,EACA,SAACM,EAAK,CAEJL,GAAe,MAAfA,EAAiB,YAAW,EAC5B,IAAIM,EAAa,EACXC,EAAaN,IAEnBO,EAAUb,EAAQU,EAAOE,CAAU,CAAC,EAAE,UACnCP,EAAkBI,EACjBL,EAIA,SAACU,EAAU,CAAK,OAAAV,EAAW,KAAKH,EAAiBA,EAAeS,EAAOI,EAAYF,EAAYD,GAAY,EAAIG,CAAU,CAAzG,EAChB,UAAA,CAIET,EAAkB,KAClBG,EAAa,CACf,CAAC,CACD,CAEN,EACA,UAAA,CACED,EAAa,GACbC,EAAa,CACf,CAAC,CACF,CAEL,CAAC,CACH,CCvFM,SAAUO,GAAaC,EAA8B,CACzD,OAAOC,EAAQ,SAACC,EAAQC,EAAU,CAChCC,EAAUJ,CAAQ,EAAE,UAAUK,EAAyBF,EAAY,UAAA,CAAM,OAAAA,EAAW,SAAQ,CAAnB,EAAuBG,EAAI,CAAC,EACrG,CAACH,EAAW,QAAUD,EAAO,UAAUC,CAAU,CACnD,CAAC,CACH,CCIM,SAAUI,GAAaC,EAAiDC,EAAiB,CAAjB,OAAAA,IAAA,SAAAA,EAAA,IACrEC,EAAQ,SAACC,EAAQC,EAAU,CAChC,IAAIC,EAAQ,EACZF,EAAO,UACLG,EAAyBF,EAAY,SAACG,EAAK,CACzC,IAAMC,EAASR,EAAUO,EAAOF,GAAO,GACtCG,GAAUP,IAAcG,EAAW,KAAKG,CAAK,EAC9C,CAACC,GAAUJ,EAAW,SAAQ,CAChC,CAAC,CAAC,CAEN,CAAC,CACH,CCyCM,SAAUK,EACdC,EACAC,EACAC,EAA8B,CAK9B,IAAMC,EACJC,EAAWJ,CAAc,GAAKC,GAASC,EAElC,CAAE,KAAMF,EAA2E,MAAKC,EAAE,SAAQC,CAAA,EACnGF,EAEN,OAAOG,EACHE,EAAQ,SAACC,EAAQC,EAAU,QACzBC,EAAAL,EAAY,aAAS,MAAAK,IAAA,QAAAA,EAAA,KAArBL,CAAW,EACX,IAAIM,EAAU,GACdH,EAAO,UACLI,EACEH,EACA,SAACI,EAAK,QACJH,EAAAL,EAAY,QAAI,MAAAK,IAAA,QAAAA,EAAA,KAAhBL,EAAmBQ,CAAK,EACxBJ,EAAW,KAAKI,CAAK,CACvB,EACA,UAAA,OACEF,EAAU,IACVD,EAAAL,EAAY,YAAQ,MAAAK,IAAA,QAAAA,EAAA,KAApBL,CAAW,EACXI,EAAW,SAAQ,CACrB,EACA,SAACK,EAAG,OACFH,EAAU,IACVD,EAAAL,EAAY,SAAK,MAAAK,IAAA,QAAAA,EAAA,KAAjBL,EAAoBS,CAAG,EACvBL,EAAW,MAAMK,CAAG,CACtB,EACA,UAAA,SACMH,KACFD,EAAAL,EAAY,eAAW,MAAAK,IAAA,QAAAA,EAAA,KAAvBL,CAAW,IAEbU,EAAAV,EAAY,YAAQ,MAAAU,IAAA,QAAAA,EAAA,KAApBV,CAAW,CACb,CAAC,CACF,CAEL,CAAC,EAIDW,EACN,CC9IO,IAAMC,GAAwC,CACnD,QAAS,GACT,SAAU,IAiDN,SAAUC,GACdC,EACAC,EAA8C,CAA9C,OAAAA,IAAA,SAAAA,EAAAH,IAEOI,EAAQ,SAACC,EAAQC,EAAU,CACxB,IAAAC,EAAsBJ,EAAM,QAAnBK,EAAaL,EAAM,SAChCM,EAAW,GACXC,EAAsB,KACtBC,EAAiC,KACjCC,EAAa,GAEXC,EAAgB,UAAA,CACpBF,GAAS,MAATA,EAAW,YAAW,EACtBA,EAAY,KACRH,IACFM,EAAI,EACJF,GAAcN,EAAW,SAAQ,EAErC,EAEMS,EAAoB,UAAA,CACxBJ,EAAY,KACZC,GAAcN,EAAW,SAAQ,CACnC,EAEMU,EAAgB,SAACC,EAAQ,CAC7B
,OAACN,EAAYO,EAAUhB,EAAiBe,CAAK,CAAC,EAAE,UAAUE,EAAyBb,EAAYO,EAAeE,CAAiB,CAAC,CAAhI,EAEID,EAAO,UAAA,CACX,GAAIL,EAAU,CAIZA,EAAW,GACX,IAAMQ,EAAQP,EACdA,EAAY,KAEZJ,EAAW,KAAKW,CAAK,EACrB,CAACL,GAAcI,EAAcC,CAAK,EAEtC,EAEAZ,EAAO,UACLc,EACEb,EAMA,SAACW,EAAK,CACJR,EAAW,GACXC,EAAYO,EACZ,EAAEN,GAAa,CAACA,EAAU,UAAYJ,EAAUO,EAAI,EAAKE,EAAcC,CAAK,EAC9E,EACA,UAAA,CACEL,EAAa,GACb,EAAEJ,GAAYC,GAAYE,GAAa,CAACA,EAAU,SAAWL,EAAW,SAAQ,CAClF,CAAC,CACF,CAEL,CAAC,CACH,CCvEM,SAAUc,GACdC,EACAC,EACAC,EAA8B,CAD9BD,IAAA,SAAAA,EAAAE,IACAD,IAAA,SAAAA,EAAAE,IAEA,IAAMC,EAAYC,GAAMN,EAAUC,CAAS,EAC3C,OAAOM,GAAS,UAAA,CAAM,OAAAF,CAAA,EAAWH,CAAM,CACzC,CCJM,SAAUM,IAAc,SAAOC,EAAA,CAAA,EAAAC,EAAA,EAAAA,EAAA,UAAA,OAAAA,IAAAD,EAAAC,GAAA,UAAAA,GACnC,IAAMC,EAAUC,GAAkBH,CAAM,EAExC,OAAOI,EAAQ,SAACC,EAAQC,EAAU,CAehC,QAdMC,EAAMP,EAAO,OACbQ,EAAc,IAAI,MAAMD,CAAG,EAI7BE,EAAWT,EAAO,IAAI,UAAA,CAAM,MAAA,EAAA,CAAK,EAGjCU,EAAQ,cAMHC,EAAC,CACRC,EAAUZ,EAAOW,EAAE,EAAE,UACnBE,EACEP,EACA,SAACQ,EAAK,CACJN,EAAYG,GAAKG,EACb,CAACJ,GAAS,CAACD,EAASE,KAEtBF,EAASE,GAAK,IAKbD,EAAQD,EAAS,MAAMM,EAAQ,KAAON,EAAW,MAEtD,EAGAO,EAAI,CACL,GAnBIL,EAAI,EAAGA,EAAIJ,EAAKI,MAAhBA,CAAC,EAwBVN,EAAO,UACLQ,EAAyBP,EAAY,SAACQ,EAAK,CACzC,GAAIJ,EAAO,CAET,IAAMO,EAAMC,EAAA,CAAIJ,CAAK,EAAAK,EAAKX,CAAW,CAAA,EACrCF,EAAW,KAAKJ,EAAUA,EAAO,MAAA,OAAAgB,EAAA,CAAA,EAAAC,EAAIF,CAAM,CAAA,CAAA,EAAIA,CAAM,EAEzD,CAAC,CAAC,CAEN,CAAC,CACH,CCxFM,SAAUG,IAAG,SAAOC,EAAA,CAAA,EAAAC,EAAA,EAAAA,EAAA,UAAA,OAAAA,IAAAD,EAAAC,GAAA,UAAAA,GACxB,OAAOC,EAAQ,SAACC,EAAQC,EAAU,CAChCL,GAAS,MAAA,OAAAM,EAAA,CAACF,CAA8B,EAAAG,EAAMN,CAAuC,CAAA,CAAA,EAAE,UAAUI,CAAU,CAC7G,CAAC,CACH,CCCM,SAAUG,IAAO,SAAkCC,EAAA,CAAA,EAAAC,EAAA,EAAAA,EAAA,UAAA,OAAAA,IAAAD,EAAAC,GAAA,UAAAA,GACvD,OAAOC,GAAG,MAAA,OAAAC,EAAA,CAAA,EAAAC,EAAIJ,CAAW,CAAA,CAAA,CAC3B,CCYO,SAASK,IAAmC,CACjD,IAAMC,EAAY,IAAIC,GAAwB,CAAC,EAC/C,OAAAC,EAAU,SAAU,mBAAoB,CAAE,KAAM,EAAK,CAAC,EACnD,UAAU,IAAMF,EAAU,KAAK,QAAQ,CAAC,EAGpCA,CACT,CCHO,SAASG,EACdC,EAAkBC,EAAmB,SAChC,CACL,OAAO,MAAM,KAAKA,EAAK,iBAAoBD,CAAQ,CAAC,CACtD,CAuBO,SAASE,EACdF,EAAkBC,EAAmB,S
AClC,CACH,IAAME,EAAKC,GAAsBJ,EAAUC,CAAI,EAC/C,GAAI,OAAOE,GAAO,YAChB,MAAM,IAAI,eACR,8BAA8BH,kBAChC,EAGF,OAAOG,CACT,CAsBO,SAASC,GACdJ,EAAkBC,EAAmB,SACtB,CACf,OAAOA,EAAK,cAAiBD,CAAQ,GAAK,MAC5C,CAOO,SAASK,IAA4C,CAC1D,OAAO,SAAS,yBAAyB,aACrC,SAAS,eAAiB,MAEhC,CClEO,SAASC,GACdC,EACqB,CACrB,OAAOC,EACLC,EAAU,SAAS,KAAM,SAAS,EAClCA,EAAU,SAAS,KAAM,UAAU,CACrC,EACG,KACCC,GAAa,CAAC,EACdC,EAAI,IAAM,CACR,IAAMC,EAASC,GAAiB,EAChC,OAAO,OAAOD,GAAW,YACrBL,EAAG,SAASK,CAAM,EAClB,EACN,CAAC,EACDE,EAAUP,IAAOM,GAAiB,CAAC,EACnCE,EAAqB,CACvB,CACJ,CChBO,SAASC,GACdC,EACe,CACf,MAAO,CACL,EAAGA,EAAG,WACN,EAAGA,EAAG,SACR,CACF,CAWO,SAASC,GACdD,EAC2B,CAC3B,OAAOE,EACLC,EAAU,OAAQ,MAAM,EACxBA,EAAU,OAAQ,QAAQ,CAC5B,EACG,KACCC,GAAU,EAAGC,EAAuB,EACpCC,EAAI,IAAMP,GAAiBC,CAAE,CAAC,EAC9BO,EAAUR,GAAiBC,CAAE,CAAC,CAChC,CACJ,CCxCO,SAASQ,GACdC,EACe,CACf,MAAO,CACL,EAAGA,EAAG,WACN,EAAGA,EAAG,SACR,CACF,CAWO,SAASC,GACdD,EAC2B,CAC3B,OAAOE,EACLC,EAAUH,EAAI,QAAQ,EACtBG,EAAU,OAAQ,QAAQ,CAC5B,EACG,KACCC,GAAU,EAAGC,EAAuB,EACpCC,EAAI,IAAMP,GAAwBC,CAAE,CAAC,EACrCO,EAAUR,GAAwBC,CAAE,CAAC,CACvC,CACJ,CCpEA,IAAIQ,GAAW,UAAY,CACvB,GAAI,OAAO,KAAQ,YACf,OAAO,IASX,SAASC,EAASC,EAAKC,EAAK,CACxB,IAAIC,EAAS,GACb,OAAAF,EAAI,KAAK,SAAUG,EAAOC,EAAO,CAC7B,OAAID,EAAM,KAAOF,GACbC,EAASE,EACF,IAEJ,EACX,CAAC,EACMF,CACX,CACA,OAAsB,UAAY,CAC9B,SAASG,GAAU,CACf,KAAK,YAAc,CAAC,CACxB,CACA,cAAO,eAAeA,EAAQ,UAAW,OAAQ,CAI7C,IAAK,UAAY,CACb,OAAO,KAAK,YAAY,MAC5B,EACA,WAAY,GACZ,aAAc,EAClB,CAAC,EAKDA,EAAQ,UAAU,IAAM,SAAUJ,EAAK,CACnC,IAAIG,EAAQL,EAAS,KAAK,YAAaE,CAAG,EACtCE,EAAQ,KAAK,YAAYC,GAC7B,OAAOD,GAASA,EAAM,EAC1B,EAMAE,EAAQ,UAAU,IAAM,SAAUJ,EAAKK,EAAO,CAC1C,IAAIF,EAAQL,EAAS,KAAK,YAAaE,CAAG,EACtC,CAACG,EACD,KAAK,YAAYA,GAAO,GAAKE,EAG7B,KAAK,YAAY,KAAK,CAACL,EAAKK,CAAK,CAAC,CAE1C,EAKAD,EAAQ,UAAU,OAAS,SAAUJ,EAAK,CACtC,IAAIM,EAAU,KAAK,YACfH,EAAQL,EAASQ,EAASN,CAAG,EAC7B,CAACG,GACDG,EAAQ,OAAOH,EAAO,CAAC,CAE/B,EAKAC,EAAQ,UAAU,IAAM,SAAUJ,EAAK,CACnC,MAAO,CAAC,CAAC,CAACF,EAAS,KAAK,YAAaE,CAAG,CAC5C,EAIAI,EAAQ,UAAU,MAAQ,UAAY,CAClC,KAAK,YAAY,OAAO,CAAC,CAC7B,EAMAA,EAAQ,UAAU,QAAU,SAAUG
,EAAUC,EAAK,CAC7CA,IAAQ,SAAUA,EAAM,MAC5B,QAASC,EAAK,EAAGC,EAAK,KAAK,YAAaD,EAAKC,EAAG,OAAQD,IAAM,CAC1D,IAAIP,EAAQQ,EAAGD,GACfF,EAAS,KAAKC,EAAKN,EAAM,GAAIA,EAAM,EAAE,CACzC,CACJ,EACOE,CACX,EAAE,CACN,EAAG,EAKCO,GAAY,OAAO,QAAW,aAAe,OAAO,UAAa,aAAe,OAAO,WAAa,SAGpGC,GAAY,UAAY,CACxB,OAAI,OAAO,QAAW,aAAe,OAAO,OAAS,KAC1C,OAEP,OAAO,MAAS,aAAe,KAAK,OAAS,KACtC,KAEP,OAAO,QAAW,aAAe,OAAO,OAAS,KAC1C,OAGJ,SAAS,aAAa,EAAE,CACnC,EAAG,EAQCC,GAA2B,UAAY,CACvC,OAAI,OAAO,uBAA0B,WAI1B,sBAAsB,KAAKD,EAAQ,EAEvC,SAAUL,EAAU,CAAE,OAAO,WAAW,UAAY,CAAE,OAAOA,EAAS,KAAK,IAAI,CAAC,CAAG,EAAG,IAAO,EAAE,CAAG,CAC7G,EAAG,EAGCO,GAAkB,EAStB,SAASC,GAAUR,EAAUS,EAAO,CAChC,IAAIC,EAAc,GAAOC,EAAe,GAAOC,EAAe,EAO9D,SAASC,GAAiB,CAClBH,IACAA,EAAc,GACdV,EAAS,GAETW,GACAG,EAAM,CAEd,CAQA,SAASC,GAAkB,CACvBT,GAAwBO,CAAc,CAC1C,CAMA,SAASC,GAAQ,CACb,IAAIE,EAAY,KAAK,IAAI,EACzB,GAAIN,EAAa,CAEb,GAAIM,EAAYJ,EAAeL,GAC3B,OAMJI,EAAe,EACnB,MAEID,EAAc,GACdC,EAAe,GACf,WAAWI,EAAiBN,CAAK,EAErCG,EAAeI,CACnB,CACA,OAAOF,CACX,CAGA,IAAIG,GAAgB,GAGhBC,GAAiB,CAAC,MAAO,QAAS,SAAU,OAAQ,QAAS,SAAU,OAAQ,QAAQ,EAEvFC,GAA4B,OAAO,kBAAqB,YAIxDC,GAA0C,UAAY,CAMtD,SAASA,GAA2B,CAMhC,KAAK,WAAa,GAMlB,KAAK,qBAAuB,GAM5B,KAAK,mBAAqB,KAM1B,KAAK,WAAa,CAAC,EACnB,KAAK,iBAAmB,KAAK,iBAAiB,KAAK,IAAI,EACvD,KAAK,QAAUZ,GAAS,KAAK,QAAQ,KAAK,IAAI,EAAGS,EAAa,CAClE,CAOA,OAAAG,EAAyB,UAAU,YAAc,SAAUC,EAAU,CAC5D,CAAC,KAAK,WAAW,QAAQA,CAAQ,GAClC,KAAK,WAAW,KAAKA,CAAQ,EAG5B,KAAK,YACN,KAAK,SAAS,CAEtB,EAOAD,EAAyB,UAAU,eAAiB,SAAUC,EAAU,CACpE,IAAIC,EAAY,KAAK,WACjB1B,EAAQ0B,EAAU,QAAQD,CAAQ,EAElC,CAACzB,GACD0B,EAAU,OAAO1B,EAAO,CAAC,EAGzB,CAAC0B,EAAU,QAAU,KAAK,YAC1B,KAAK,YAAY,CAEzB,EAOAF,EAAyB,UAAU,QAAU,UAAY,CACrD,IAAIG,EAAkB,KAAK,iBAAiB,EAGxCA,GACA,KAAK,QAAQ,CAErB,EASAH,EAAyB,UAAU,iBAAmB,UAAY,CAE9D,IAAII,EAAkB,KAAK,WAAW,OAAO,SAAUH,EAAU,CAC7D,OAAOA,EAAS,aAAa,EAAGA,EAAS,UAAU,CACvD,CAAC,EAMD,OAAAG,EAAgB,QAAQ,SAAUH,EAAU,CAAE,OAAOA,EAAS,gBAAgB,CAAG,CAAC,EAC3EG,EAAgB,OAAS,CACpC,EAOAJ,EAAyB,UAAU,SAAW,UAAY,CAGlD,CAAChB,IAAa,KAAK,aAMvB,SAAS,iBAAiB,gBAAiB,KAAK,gBAAgB,EAChE,OAAO,iBAAiB,SAAU,KAAK,OAAO,EAC1Ce,IA
CA,KAAK,mBAAqB,IAAI,iBAAiB,KAAK,OAAO,EAC3D,KAAK,mBAAmB,QAAQ,SAAU,CACtC,WAAY,GACZ,UAAW,GACX,cAAe,GACf,QAAS,EACb,CAAC,IAGD,SAAS,iBAAiB,qBAAsB,KAAK,OAAO,EAC5D,KAAK,qBAAuB,IAEhC,KAAK,WAAa,GACtB,EAOAC,EAAyB,UAAU,YAAc,UAAY,CAGrD,CAAChB,IAAa,CAAC,KAAK,aAGxB,SAAS,oBAAoB,gBAAiB,KAAK,gBAAgB,EACnE,OAAO,oBAAoB,SAAU,KAAK,OAAO,EAC7C,KAAK,oBACL,KAAK,mBAAmB,WAAW,EAEnC,KAAK,sBACL,SAAS,oBAAoB,qBAAsB,KAAK,OAAO,EAEnE,KAAK,mBAAqB,KAC1B,KAAK,qBAAuB,GAC5B,KAAK,WAAa,GACtB,EAQAgB,EAAyB,UAAU,iBAAmB,SAAUjB,EAAI,CAChE,IAAIsB,EAAKtB,EAAG,aAAcuB,EAAeD,IAAO,OAAS,GAAKA,EAE1DE,EAAmBT,GAAe,KAAK,SAAUzB,EAAK,CACtD,MAAO,CAAC,CAAC,CAACiC,EAAa,QAAQjC,CAAG,CACtC,CAAC,EACGkC,GACA,KAAK,QAAQ,CAErB,EAMAP,EAAyB,YAAc,UAAY,CAC/C,OAAK,KAAK,YACN,KAAK,UAAY,IAAIA,GAElB,KAAK,SAChB,EAMAA,EAAyB,UAAY,KAC9BA,CACX,EAAE,EASEQ,GAAsB,SAAUC,EAAQC,EAAO,CAC/C,QAAS5B,EAAK,EAAGC,EAAK,OAAO,KAAK2B,CAAK,EAAG5B,EAAKC,EAAG,OAAQD,IAAM,CAC5D,IAAIT,EAAMU,EAAGD,GACb,OAAO,eAAe2B,EAAQpC,EAAK,CAC/B,MAAOqC,EAAMrC,GACb,WAAY,GACZ,SAAU,GACV,aAAc,EAClB,CAAC,CACL,CACA,OAAOoC,CACX,EAQIE,GAAe,SAAUF,EAAQ,CAIjC,IAAIG,EAAcH,GAAUA,EAAO,eAAiBA,EAAO,cAAc,YAGzE,OAAOG,GAAe3B,EAC1B,EAGI4B,GAAYC,GAAe,EAAG,EAAG,EAAG,CAAC,EAOzC,SAASC,GAAQrC,EAAO,CACpB,OAAO,WAAWA,CAAK,GAAK,CAChC,CAQA,SAASsC,GAAeC,EAAQ,CAE5B,QADIC,EAAY,CAAC,EACRpC,EAAK,EAAGA,EAAK,UAAU,OAAQA,IACpCoC,EAAUpC,EAAK,GAAK,UAAUA,GAElC,OAAOoC,EAAU,OAAO,SAAUC,EAAMC,EAAU,CAC9C,IAAI1C,EAAQuC,EAAO,UAAYG,EAAW,UAC1C,OAAOD,EAAOJ,GAAQrC,CAAK,CAC/B,EAAG,CAAC,CACR,CAOA,SAAS2C,GAAYJ,EAAQ,CAGzB,QAFIC,EAAY,CAAC,MAAO,QAAS,SAAU,MAAM,EAC7CI,EAAW,CAAC,EACPxC,EAAK,EAAGyC,EAAcL,EAAWpC,EAAKyC,EAAY,OAAQzC,IAAM,CACrE,IAAIsC,EAAWG,EAAYzC,GACvBJ,EAAQuC,EAAO,WAAaG,GAChCE,EAASF,GAAYL,GAAQrC,CAAK,CACtC,CACA,OAAO4C,CACX,CAQA,SAASE,GAAkBf,EAAQ,CAC/B,IAAIgB,EAAOhB,EAAO,QAAQ,EAC1B,OAAOK,GAAe,EAAG,EAAGW,EAAK,MAAOA,EAAK,MAAM,CACvD,CAOA,SAASC,GAA0BjB,EAAQ,CAGvC,IAAIkB,EAAclB,EAAO,YAAamB,EAAenB,EAAO,aAS5D,GAAI,CAACkB,GAAe,CAACC,EACjB,OAAOf,GAEX,IAAII,EAASN,GAAYF,CAAM,EAAE,iBAAiBA,CAAM,EACpDa,EAAWD,GAAYJ,CAAM,EAC7BY,EAAWP,EAAS,KAAOA,EAAS,MA
CpCQ,EAAUR,EAAS,IAAMA,EAAS,OAKlCS,EAAQhB,GAAQE,EAAO,KAAK,EAAGe,EAASjB,GAAQE,EAAO,MAAM,EAqBjE,GAlBIA,EAAO,YAAc,eAOjB,KAAK,MAAMc,EAAQF,CAAQ,IAAMF,IACjCI,GAASf,GAAeC,EAAQ,OAAQ,OAAO,EAAIY,GAEnD,KAAK,MAAMG,EAASF,CAAO,IAAMF,IACjCI,GAAUhB,GAAeC,EAAQ,MAAO,QAAQ,EAAIa,IAOxD,CAACG,GAAkBxB,CAAM,EAAG,CAK5B,IAAIyB,EAAgB,KAAK,MAAMH,EAAQF,CAAQ,EAAIF,EAC/CQ,EAAiB,KAAK,MAAMH,EAASF,CAAO,EAAIF,EAMhD,KAAK,IAAIM,CAAa,IAAM,IAC5BH,GAASG,GAET,KAAK,IAAIC,CAAc,IAAM,IAC7BH,GAAUG,EAElB,CACA,OAAOrB,GAAeQ,EAAS,KAAMA,EAAS,IAAKS,EAAOC,CAAM,CACpE,CAOA,IAAII,GAAwB,UAAY,CAGpC,OAAI,OAAO,oBAAuB,YACvB,SAAU3B,EAAQ,CAAE,OAAOA,aAAkBE,GAAYF,CAAM,EAAE,kBAAoB,EAKzF,SAAUA,EAAQ,CAAE,OAAQA,aAAkBE,GAAYF,CAAM,EAAE,YACrE,OAAOA,EAAO,SAAY,UAAa,CAC/C,EAAG,EAOH,SAASwB,GAAkBxB,EAAQ,CAC/B,OAAOA,IAAWE,GAAYF,CAAM,EAAE,SAAS,eACnD,CAOA,SAAS4B,GAAe5B,EAAQ,CAC5B,OAAKzB,GAGDoD,GAAqB3B,CAAM,EACpBe,GAAkBf,CAAM,EAE5BiB,GAA0BjB,CAAM,EAL5BI,EAMf,CAQA,SAASyB,GAAmBvD,EAAI,CAC5B,IAAIwD,EAAIxD,EAAG,EAAGyD,EAAIzD,EAAG,EAAGgD,EAAQhD,EAAG,MAAOiD,EAASjD,EAAG,OAElD0D,EAAS,OAAO,iBAAoB,YAAc,gBAAkB,OACpEC,EAAO,OAAO,OAAOD,EAAO,SAAS,EAEzC,OAAAjC,GAAmBkC,EAAM,CACrB,EAAGH,EAAG,EAAGC,EAAG,MAAOT,EAAO,OAAQC,EAClC,IAAKQ,EACL,MAAOD,EAAIR,EACX,OAAQC,EAASQ,EACjB,KAAMD,CACV,CAAC,EACMG,CACX,CAWA,SAAS5B,GAAeyB,EAAGC,EAAGT,EAAOC,EAAQ,CACzC,MAAO,CAAE,EAAGO,EAAG,EAAGC,EAAG,MAAOT,EAAO,OAAQC,CAAO,CACtD,CAMA,IAAIW,GAAmC,UAAY,CAM/C,SAASA,EAAkBlC,EAAQ,CAM/B,KAAK,eAAiB,EAMtB,KAAK,gBAAkB,EAMvB,KAAK,aAAeK,GAAe,EAAG,EAAG,EAAG,CAAC,EAC7C,KAAK,OAASL,CAClB,CAOA,OAAAkC,EAAkB,UAAU,SAAW,UAAY,CAC/C,IAAID,EAAOL,GAAe,KAAK,MAAM,EACrC,YAAK,aAAeK,EACZA,EAAK,QAAU,KAAK,gBACxBA,EAAK,SAAW,KAAK,eAC7B,EAOAC,EAAkB,UAAU,cAAgB,UAAY,CACpD,IAAID,EAAO,KAAK,aAChB,YAAK,eAAiBA,EAAK,MAC3B,KAAK,gBAAkBA,EAAK,OACrBA,CACX,EACOC,CACX,EAAE,EAEEC,GAAqC,UAAY,CAOjD,SAASA,EAAoBnC,EAAQoC,EAAU,CAC3C,IAAIC,EAAcR,GAAmBO,CAAQ,EAO7CrC,GAAmB,KAAM,CAAE,OAAQC,EAAQ,YAAaqC,CAAY,CAAC,CACzE,CACA,OAAOF,CACX,EAAE,EAEEG,GAAmC,UAAY,CAW/C,SAASA,EAAkBnE,EAAUoE,EAAYC,EAAa,CAc1D,GAPA,KAAK,oBAAsB,CAAC,EAM5B,KAAK,cAAgB,IAAI/E,GACr
B,OAAOU,GAAa,WACpB,MAAM,IAAI,UAAU,yDAAyD,EAEjF,KAAK,UAAYA,EACjB,KAAK,YAAcoE,EACnB,KAAK,aAAeC,CACxB,CAOA,OAAAF,EAAkB,UAAU,QAAU,SAAUtC,EAAQ,CACpD,GAAI,CAAC,UAAU,OACX,MAAM,IAAI,UAAU,0CAA0C,EAGlE,GAAI,SAAO,SAAY,aAAe,EAAE,mBAAmB,SAG3D,IAAI,EAAEA,aAAkBE,GAAYF,CAAM,EAAE,SACxC,MAAM,IAAI,UAAU,uCAAuC,EAE/D,IAAIyC,EAAe,KAAK,cAEpBA,EAAa,IAAIzC,CAAM,IAG3ByC,EAAa,IAAIzC,EAAQ,IAAIkC,GAAkBlC,CAAM,CAAC,EACtD,KAAK,YAAY,YAAY,IAAI,EAEjC,KAAK,YAAY,QAAQ,GAC7B,EAOAsC,EAAkB,UAAU,UAAY,SAAUtC,EAAQ,CACtD,GAAI,CAAC,UAAU,OACX,MAAM,IAAI,UAAU,0CAA0C,EAGlE,GAAI,SAAO,SAAY,aAAe,EAAE,mBAAmB,SAG3D,IAAI,EAAEA,aAAkBE,GAAYF,CAAM,EAAE,SACxC,MAAM,IAAI,UAAU,uCAAuC,EAE/D,IAAIyC,EAAe,KAAK,cAEpB,CAACA,EAAa,IAAIzC,CAAM,IAG5ByC,EAAa,OAAOzC,CAAM,EACrByC,EAAa,MACd,KAAK,YAAY,eAAe,IAAI,GAE5C,EAMAH,EAAkB,UAAU,WAAa,UAAY,CACjD,KAAK,YAAY,EACjB,KAAK,cAAc,MAAM,EACzB,KAAK,YAAY,eAAe,IAAI,CACxC,EAOAA,EAAkB,UAAU,aAAe,UAAY,CACnD,IAAII,EAAQ,KACZ,KAAK,YAAY,EACjB,KAAK,cAAc,QAAQ,SAAUC,EAAa,CAC1CA,EAAY,SAAS,GACrBD,EAAM,oBAAoB,KAAKC,CAAW,CAElD,CAAC,CACL,EAOAL,EAAkB,UAAU,gBAAkB,UAAY,CAEtD,GAAI,EAAC,KAAK,UAAU,EAGpB,KAAIlE,EAAM,KAAK,aAEXF,EAAU,KAAK,oBAAoB,IAAI,SAAUyE,EAAa,CAC9D,OAAO,IAAIR,GAAoBQ,EAAY,OAAQA,EAAY,cAAc,CAAC,CAClF,CAAC,EACD,KAAK,UAAU,KAAKvE,EAAKF,EAASE,CAAG,EACrC,KAAK,YAAY,EACrB,EAMAkE,EAAkB,UAAU,YAAc,UAAY,CAClD,KAAK,oBAAoB,OAAO,CAAC,CACrC,EAMAA,EAAkB,UAAU,UAAY,UAAY,CAChD,OAAO,KAAK,oBAAoB,OAAS,CAC7C,EACOA,CACX,EAAE,EAKE7C,GAAY,OAAO,SAAY,YAAc,IAAI,QAAY,IAAIhC,GAKjEmF,GAAgC,UAAY,CAO5C,SAASA,EAAezE,EAAU,CAC9B,GAAI,EAAE,gBAAgByE,GAClB,MAAM,IAAI,UAAU,oCAAoC,EAE5D,GAAI,CAAC,UAAU,OACX,MAAM,IAAI,UAAU,0CAA0C,EAElE,IAAIL,EAAahD,GAAyB,YAAY,EAClDC,EAAW,IAAI8C,GAAkBnE,EAAUoE,EAAY,IAAI,EAC/D9C,GAAU,IAAI,KAAMD,CAAQ,CAChC,CACA,OAAOoD,CACX,EAAE,EAEF,CACI,UACA,YACA,YACJ,EAAE,QAAQ,SAAUC,EAAQ,CACxBD,GAAe,UAAUC,GAAU,UAAY,CAC3C,IAAIvE,EACJ,OAAQA,EAAKmB,GAAU,IAAI,IAAI,GAAGoD,GAAQ,MAAMvE,EAAI,SAAS,CACjE,CACJ,CAAC,EAED,IAAIP,GAAS,UAAY,CAErB,OAAI,OAAOS,GAAS,gBAAmB,YAC5BA,GAAS,eAEboE,EACX,EAAG,EAEIE,GAAQ/E,GCr2Bf,IAAMgF,GAAS,IAAIC,EAYbC,GAAYC,EAAM,I
AAMC,EAC5B,IAAIC,GAAeC,GAAW,CAC5B,QAAWC,KAASD,EAClBN,GAAO,KAAKO,CAAK,CACrB,CAAC,CACH,CAAC,EACE,KACCC,EAAUC,GAAYC,EAAMC,GAAOP,EAAGK,CAAQ,CAAC,EAC5C,KACCG,EAAS,IAAMH,EAAS,WAAW,CAAC,CACtC,CACF,EACAI,EAAY,CAAC,CACf,EAaK,SAASC,GACdC,EACa,CACb,MAAO,CACL,MAAQA,EAAG,YACX,OAAQA,EAAG,YACb,CACF,CAuBO,SAASC,GACdD,EACyB,CACzB,OAAOb,GACJ,KACCe,EAAIR,GAAYA,EAAS,QAAQM,CAAE,CAAC,EACpCP,EAAUC,GAAYT,GACnB,KACCkB,EAAO,CAAC,CAAE,OAAAC,CAAO,IAAMA,IAAWJ,CAAE,EACpCH,EAAS,IAAMH,EAAS,UAAUM,CAAE,CAAC,EACrCK,EAAI,IAAMN,GAAeC,CAAE,CAAC,CAC9B,CACF,EACAM,EAAUP,GAAeC,CAAE,CAAC,CAC9B,CACJ,CC1GO,SAASO,GACdC,EACa,CACb,MAAO,CACL,MAAQA,EAAG,YACX,OAAQA,EAAG,YACb,CACF,CASO,SAASC,GACdD,EACyB,CACzB,IAAIE,EAASF,EAAG,cAChB,KAAOE,IAEHF,EAAG,aAAeE,EAAO,aACzBF,EAAG,cAAgBE,EAAO,eAE1BA,GAAUF,EAAKE,GAAQ,cAK3B,OAAOA,EAASF,EAAK,MACvB,CCfA,IAAMG,GAAS,IAAIC,EAUbC,GAAYC,EAAM,IAAMC,EAC5B,IAAI,qBAAqBC,GAAW,CAClC,QAAWC,KAASD,EAClBL,GAAO,KAAKM,CAAK,CACrB,EAAG,CACD,UAAW,CACb,CAAC,CACH,CAAC,EACE,KACCC,EAAUC,GAAYC,EAAMC,GAAON,EAAGI,CAAQ,CAAC,EAC5C,KACCG,EAAS,IAAMH,EAAS,WAAW,CAAC,CACtC,CACF,EACAI,EAAY,CAAC,CACf,EAaK,SAASC,GACdC,EACqB,CACrB,OAAOZ,GACJ,KACCa,EAAIP,GAAYA,EAAS,QAAQM,CAAE,CAAC,EACpCP,EAAUC,GAAYR,GACnB,KACCgB,EAAO,CAAC,CAAE,OAAAC,CAAO,IAAMA,IAAWH,CAAE,EACpCH,EAAS,IAAMH,EAAS,UAAUM,CAAE,CAAC,EACrCI,EAAI,CAAC,CAAE,eAAAC,CAAe,IAAMA,CAAc,CAC5C,CACF,CACF,CACJ,CAaO,SAASC,GACdN,EAAiBO,EAAY,GACR,CACrB,OAAOC,GAA0BR,CAAE,EAChC,KACCI,EAAI,CAAC,CAAE,EAAAK,CAAE,IAAM,CACb,IAAMC,EAAUC,GAAeX,CAAE,EAC3BY,EAAUC,GAAsBb,CAAE,EACxC,OAAOS,GACLG,EAAQ,OAASF,EAAQ,OAASH,CAEtC,CAAC,EACDO,EAAqB,CACvB,CACJ,CCjFA,IAAMC,GAA4C,CAChD,OAAQC,EAAW,yBAAyB,EAC5C,OAAQA,EAAW,yBAAyB,CAC9C,EAaO,SAASC,GAAUC,EAAuB,CAC/C,OAAOH,GAAQG,GAAM,OACvB,CAaO,SAASC,GAAUD,EAAcE,EAAsB,CACxDL,GAAQG,GAAM,UAAYE,GAC5BL,GAAQG,GAAM,MAAM,CACxB,CAWO,SAASG,GAAYH,EAAmC,CAC7D,IAAMI,EAAKP,GAAQG,GACnB,OAAOK,EAAUD,EAAI,QAAQ,EAC1B,KACCE,EAAI,IAAMF,EAAG,OAAO,EACpBG,EAAUH,EAAG,OAAO,CACtB,CACJ,CClCA,SAASI,GACPC,EAAiBC,EACR,CACT,OAAQD,EAAG,YAAa,CAGtB,KAAK,iBAEH,OAAIA,EAAG,OAAS,QACP,SAAS,KAAKC
,CAAI,EAElB,GAGX,KAAK,kBACL,KAAK,oBACH,MAAO,GAGT,QACE,OAAOD,EAAG,iBACd,CACF,CAWO,SAASE,IAAsC,CACpD,OAAOC,EAAyB,OAAQ,SAAS,EAC9C,KACCC,EAAOC,GAAM,EAAEA,EAAG,SAAWA,EAAG,QAAQ,EACxCC,EAAID,IAAO,CACT,KAAME,GAAU,QAAQ,EAAI,SAAW,SACvC,KAAMF,EAAG,IACT,OAAQ,CACNA,EAAG,eAAe,EAClBA,EAAG,gBAAgB,CACrB,CACF,EAAc,EACdD,EAAO,CAAC,CAAE,KAAAI,EAAM,KAAAP,CAAK,IAAM,CACzB,GAAIO,IAAS,SAAU,CACrB,IAAMC,EAASC,GAAiB,EAChC,GAAI,OAAOD,GAAW,YACpB,MAAO,CAACV,GAAwBU,EAAQR,CAAI,CAChD,CACA,MAAO,EACT,CAAC,EACDU,GAAM,CACR,CACJ,CCpFO,SAASC,IAAmB,CACjC,OAAO,IAAI,IAAI,SAAS,IAAI,CAC9B,CAOO,SAASC,GAAYC,EAAgB,CAC1C,SAAS,KAAOA,EAAI,IACtB,CASO,SAASC,IAA8B,CAC5C,OAAO,IAAIC,CACb,CCLA,SAASC,GAAYC,EAAiBC,EAA8B,CAGlE,GAAI,OAAOA,GAAU,UAAY,OAAOA,GAAU,SAChDD,EAAG,WAAaC,EAAM,SAAS,UAGtBA,aAAiB,KAC1BD,EAAG,YAAYC,CAAK,UAGX,MAAM,QAAQA,CAAK,EAC5B,QAAWC,KAAQD,EACjBF,GAAYC,EAAIE,CAAI,CAE1B,CAyBO,SAASC,EACdC,EAAaC,KAAmCC,EAC7C,CACH,IAAMN,EAAK,SAAS,cAAcI,CAAG,EAGrC,GAAIC,EACF,QAAWE,KAAQ,OAAO,KAAKF,CAAU,EACnC,OAAOA,EAAWE,IAAU,cAI5B,OAAOF,EAAWE,IAAU,UAC9BP,EAAG,aAAaO,EAAMF,EAAWE,EAAK,EAEtCP,EAAG,aAAaO,EAAM,EAAE,GAI9B,QAAWN,KAASK,EAClBP,GAAYC,EAAIC,CAAK,EAGvB,OAAOD,CACT,CChFO,SAASQ,GAASC,EAAeC,EAAmB,CACzD,IAAIC,EAAID,EACR,GAAID,EAAM,OAASE,EAAG,CACpB,KAAOF,EAAME,KAAO,KAAO,EAAEA,EAAI,GAAG,CACpC,MAAO,GAAGF,EAAM,UAAU,EAAGE,CAAC,MAChC,CACA,OAAOF,CACT,CAkBO,SAASG,GAAMH,EAAuB,CAC3C,GAAIA,EAAQ,IAAK,CACf,IAAMI,EAAS,GAAGJ,EAAQ,KAAO,IAAO,IACxC,MAAO,KAAKA,EAAQ,MAAY,KAAM,QAAQI,CAAM,IACtD,KACE,QAAOJ,EAAM,SAAS,CAE1B,CC5BO,SAASK,IAA0B,CACxC,OAAO,SAAS,KAAK,UAAU,CAAC,CAClC,CAYO,SAASC,GAAgBC,EAAoB,CAClD,IAAMC,EAAKC,EAAE,IAAK,CAAE,KAAMF,CAAK,CAAC,EAChCC,EAAG,iBAAiB,QAASE,GAAMA,EAAG,gBAAgB,CAAC,EACvDF,EAAG,MAAM,CACX,CASO,SAASG,IAAwC,CACtD,OAAOC,EAA2B,OAAQ,YAAY,EACnD,KACCC,EAAIR,EAAe,EACnBS,EAAUT,GAAgB,CAAC,EAC3BU,EAAOR,GAAQA,EAAK,OAAS,CAAC,EAC9BS,EAAY,CAAC,CACf,CACJ,CAOO,SAASC,IAA+C,CAC7D,OAAON,GAAkB,EACtB,KACCE,EAAIK,GAAMC,GAAmB,QAAQD,KAAM,CAAE,EAC7CH,EAAOP,GAAM,OAAOA,GAAO,WAAW,CACxC,CACJ,CC1CO,SAASY,GAAWC,EAAoC,CAC7D,IAAMC,EAAQ,WAAWD,CAAK,EAC9B,OAAO
E,GAA0BC,GAC/BF,EAAM,YAAY,IAAME,EAAKF,EAAM,OAAO,CAAC,CAC5C,EACE,KACCG,EAAUH,EAAM,OAAO,CACzB,CACJ,CAOO,SAASI,IAAkC,CAChD,IAAMJ,EAAQ,WAAW,OAAO,EAChC,OAAOK,EACLC,EAAU,OAAQ,aAAa,EAAE,KAAKC,EAAI,IAAM,EAAI,CAAC,EACrDD,EAAU,OAAQ,YAAY,EAAE,KAAKC,EAAI,IAAM,EAAK,CAAC,CACvD,EACG,KACCJ,EAAUH,EAAM,OAAO,CACzB,CACJ,CAcO,SAASQ,GACdC,EAA6BC,EACd,CACf,OAAOD,EACJ,KACCE,EAAUC,GAAUA,EAASF,EAAQ,EAAIG,CAAK,CAChD,CACJ,CC7CO,SAASC,GACdC,EAAmBC,EAAuB,CAAE,YAAa,aAAc,EACjD,CACtB,OAAOC,GAAK,MAAM,GAAGF,IAAOC,CAAO,CAAC,EACjC,KACCE,GAAW,IAAMC,CAAK,EACtBC,EAAUC,GAAOA,EAAI,SAAW,IAC5BC,GAAW,IAAM,IAAI,MAAMD,EAAI,UAAU,CAAC,EAC1CE,EAAGF,CAAG,CACV,CACF,CACJ,CAYO,SAASG,GACdT,EAAmBC,EACJ,CACf,OAAOF,GAAQC,EAAKC,CAAO,EACxB,KACCI,EAAUC,GAAOA,EAAI,KAAK,CAAC,EAC3BI,EAAY,CAAC,CACf,CACJ,CAUO,SAASC,GACdX,EAAmBC,EACG,CACtB,IAAMW,EAAM,IAAI,UAChB,OAAOb,GAAQC,EAAKC,CAAO,EACxB,KACCI,EAAUC,GAAOA,EAAI,KAAK,CAAC,EAC3BO,EAAIP,GAAOM,EAAI,gBAAgBN,EAAK,UAAU,CAAC,EAC/CI,EAAY,CAAC,CACf,CACJ,CClDO,SAASI,GAAYC,EAA+B,CACzD,IAAMC,EAASC,EAAE,SAAU,CAAE,IAAAF,CAAI,CAAC,EAClC,OAAOG,EAAM,KACX,SAAS,KAAK,YAAYF,CAAM,EACzBG,EACLC,EAAUJ,EAAQ,MAAM,EACxBI,EAAUJ,EAAQ,OAAO,EACtB,KACCK,EAAU,IACRC,GAAW,IAAM,IAAI,eAAe,mBAAmBP,GAAK,CAAC,CAC9D,CACH,CACJ,EACG,KACCQ,EAAI,IAAG,EAAY,EACnBC,EAAS,IAAM,SAAS,KAAK,YAAYR,CAAM,CAAC,EAChDS,GAAK,CAAC,CACR,EACH,CACH,CCfO,SAASC,IAAoC,CAClD,MAAO,CACL,EAAG,KAAK,IAAI,EAAG,OAAO,EACtB,EAAG,KAAK,IAAI,EAAG,OAAO,CACxB,CACF,CASO,SAASC,IAAkD,CAChE,OAAOC,EACLC,EAAU,OAAQ,SAAU,CAAE,QAAS,EAAK,CAAC,EAC7CA,EAAU,OAAQ,SAAU,CAAE,QAAS,EAAK,CAAC,CAC/C,EACG,KACCC,EAAIJ,EAAiB,EACrBK,EAAUL,GAAkB,CAAC,CAC/B,CACJ,CC3BO,SAASM,IAAgC,CAC9C,MAAO,CACL,MAAQ,WACR,OAAQ,WACV,CACF,CASO,SAASC,IAA8C,CAC5D,OAAOC,EAAU,OAAQ,SAAU,CAAE,QAAS,EAAK,CAAC,EACjD,KACCC,EAAIH,EAAe,EACnBI,EAAUJ,GAAgB,CAAC,CAC7B,CACJ,CCXO,SAASK,IAAsC,CACpD,OAAOC,EAAc,CACnBC,GAAoB,EACpBC,GAAkB,CACpB,CAAC,EACE,KACCC,EAAI,CAAC,CAACC,EAAQC,CAAI,KAAO,CAAE,OAAAD,EAAQ,KAAAC,CAAK,EAAE,EAC1CC,EAAY,CAAC,CACf,CACJ,CCVO,SAASC,GACdC,EAAiB,CAAE,UAAAC,EAAW,QAAAC,CAAQ,EAChB,CACtB,IAAMC,EAAQF,EACX,KA
CCG,EAAwB,MAAM,CAChC,EAGIC,EAAUC,EAAc,CAACH,EAAOD,CAAO,CAAC,EAC3C,KACCK,EAAI,IAAMC,GAAiBR,CAAE,CAAC,CAChC,EAGF,OAAOM,EAAc,CAACJ,EAASD,EAAWI,CAAO,CAAC,EAC/C,KACCE,EAAI,CAAC,CAAC,CAAE,OAAAE,CAAO,EAAG,CAAE,OAAAC,EAAQ,KAAAC,CAAK,EAAG,CAAE,EAAAC,EAAG,EAAAC,CAAE,CAAC,KAAO,CACjD,OAAQ,CACN,EAAGH,EAAO,EAAIE,EACd,EAAGF,EAAO,EAAIG,EAAIJ,CACpB,EACA,KAAAE,CACF,EAAE,CACJ,CACJ,CCIO,SAASG,GACdC,EAAgB,CAAE,IAAAC,CAAI,EACP,CAGf,IAAMC,EAAMC,EAAwBH,EAAQ,SAAS,EAClD,KACCI,EAAI,CAAC,CAAE,KAAAC,CAAK,IAAMA,CAAS,CAC7B,EAGF,OAAOJ,EACJ,KACCK,GAAS,IAAMJ,EAAK,CAAE,QAAS,GAAM,SAAU,EAAK,CAAC,EACrDK,EAAIC,GAAWR,EAAO,YAAYQ,CAAO,CAAC,EAC1CC,EAAU,IAAMP,CAAG,EACnBQ,GAAM,CACR,CACJ,CCCA,IAAMC,GAASC,EAAW,WAAW,EAC/BC,GAAiB,KAAK,MAAMF,GAAO,WAAY,EACrDE,GAAO,KAAO,GAAG,IAAI,IAAIA,GAAO,KAAMC,GAAY,CAAC,IAW5C,SAASC,IAAwB,CACtC,OAAOF,EACT,CASO,SAASG,EAAQC,EAAqB,CAC3C,OAAOJ,GAAO,SAAS,SAASI,CAAI,CACtC,CAUO,SAASC,GACdC,EAAkBC,EACV,CACR,OAAO,OAAOA,GAAU,YACpBP,GAAO,aAAaM,GAAK,QAAQ,IAAKC,EAAM,SAAS,CAAC,EACtDP,GAAO,aAAaM,EAC1B,CCjCO,SAASE,GACdC,EAASC,EAAmB,SACP,CACrB,OAAOC,EAAW,sBAAsBF,KAASC,CAAI,CACvD,CAYO,SAASE,GACdH,EAASC,EAAmB,SACL,CACvB,OAAOG,EAAY,sBAAsBJ,KAASC,CAAI,CACxD,CC1EO,SAASI,GACdC,EACsB,CACtB,IAAMC,EAASC,EAAW,6BAA8BF,CAAE,EAC1D,OAAOG,EAAUF,EAAQ,QAAS,CAAE,KAAM,EAAK,CAAC,EAC7C,KACCG,EAAI,IAAMF,EAAW,cAAeF,CAAE,CAAC,EACvCI,EAAIC,IAAY,CAAE,KAAM,UAAUA,EAAQ,SAAS,CAAE,EAAE,CACzD,CACJ,CASO,SAASC,GACdN,EACiC,CACjC,MAAI,CAACO,EAAQ,kBAAkB,GAAK,CAACP,EAAG,kBAC/BQ,EAGFC,EAAM,IAAM,CACjB,IAAMC,EAAQ,IAAIC,EAClB,OAAAD,EACG,KACCE,EAAU,CAAE,KAAM,SAAiB,YAAY,CAAE,CAAC,CACpD,EACG,UAAU,CAAC,CAAE,KAAAC,CAAK,IAAM,CA5FjC,IAAAC,EA6FcD,GAAQA,MAAUC,EAAA,SAAiB,YAAY,IAA7B,KAAAA,EAAkCD,KACtDb,EAAG,OAAS,GAGZ,SAAiB,aAAca,CAAI,EAEvC,CAAC,EAGEd,GAAcC,CAAE,EACpB,KACCe,EAAIC,GAASN,EAAM,KAAKM,CAAK,CAAC,EAC9BC,EAAS,IAAMP,EAAM,SAAS,CAAC,EAC/BN,EAAIY,GAAUE,EAAA,CAAE,IAAKlB,GAAOgB,EAAQ,CACtC,CACJ,CAAC,CACH,CC5BO,SAASG,GACdC,EAAiB,CAAE,QAAAC,CAAQ,EACN,CACrB,OAAOA,EACJ,KACCC,EAAIC,IAAW,CAAE,OAAQA,IAAWH,CAAG,EAAE,CAC3C,CACJ,CAYO,SAASI,GACdJ,EAAiBK,EACe,C
AChC,IAAMC,EAAY,IAAIC,EACtB,OAAAD,EAAU,UAAU,CAAC,CAAE,OAAAE,CAAO,IAAM,CAClCR,EAAG,OAASQ,CACd,CAAC,EAGMT,GAAaC,EAAIK,CAAO,EAC5B,KACCI,EAAIC,GAASJ,EAAU,KAAKI,CAAK,CAAC,EAClCC,EAAS,IAAML,EAAU,SAAS,CAAC,EACnCJ,EAAIQ,GAAUE,EAAA,CAAE,IAAKZ,GAAOU,EAAQ,CACtC,CACJ,CC7FA,IAAAG,GAAwB,SCajB,SAASC,GAAcC,EAA0B,CACtD,OACEC,EAAC,OAAI,MAAM,aAAa,GAAID,GAC1BC,EAAC,OAAI,MAAM,+BAA+B,CAC5C,CAEJ,CCHO,SAASC,GACdC,EAAqBC,EACR,CAIb,GAHAA,EAASA,EAAS,GAAGA,gBAAqBD,IAAO,OAG7CC,EAAQ,CACV,IAAMC,EAASD,EAAS,IAAIA,IAAW,OACvC,OACEE,EAAC,SAAM,MAAM,gBAAgB,SAAU,GACpCC,GAAcH,CAAM,EACrBE,EAAC,KAAE,KAAMD,EAAQ,MAAM,uBAAuB,SAAU,IACtDC,EAAC,QAAK,wBAAuBH,EAAI,CACnC,CACF,CAEJ,KACE,QACEG,EAAC,SAAM,MAAM,gBAAgB,SAAU,GACpCC,GAAcH,CAAM,EACrBE,EAAC,QAAK,MAAM,uBAAuB,SAAU,IAC3CA,EAAC,QAAK,wBAAuBH,EAAI,CACnC,CACF,CAGN,CC5BO,SAASK,GAAsBC,EAAyB,CAC7D,OACEC,EAAC,UACC,MAAM,uBACN,MAAOC,GAAY,gBAAgB,EACnC,wBAAuB,IAAIF,WAC5B,CAEL,CCYA,SAASG,GACPC,EAA2CC,EAC9B,CACb,IAAMC,EAASD,EAAO,EAChBE,EAASF,EAAO,EAGhBG,EAAU,OAAO,KAAKJ,EAAS,KAAK,EACvC,OAAOK,GAAO,CAACL,EAAS,MAAMK,EAAI,EAClC,OAAyB,CAACC,EAAMD,IAAQ,CACvC,GAAGC,EAAMC,EAAC,WAAKF,CAAI,EAAQ,GAC7B,EAAG,CAAC,CAAC,EACJ,MAAM,EAAG,EAAE,EAGRG,EAAM,IAAI,IAAIR,EAAS,QAAQ,EACjCS,EAAQ,kBAAkB,GAC5BD,EAAI,aAAa,IAAI,IAAK,OAAO,QAAQR,EAAS,KAAK,EACpD,OAAO,CAAC,CAAC,CAAEU,CAAK,IAAMA,CAAK,EAC3B,OAAO,CAACC,EAAW,CAACC,CAAK,IAAM,GAAGD,KAAaC,IAAQ,KAAK,EAAG,EAAE,CACpE,EAGF,GAAM,CAAE,KAAAC,CAAK,EAAIC,GAAc,EAC/B,OACEP,EAAC,KAAE,KAAM,GAAGC,IAAO,MAAM,yBAAyB,SAAU,IAC1DD,EAAC,WACC,MAAO,CAAC,4BAA6B,GAAGL,EACpC,CAAC,qCAAqC,EACtC,CAAC,CACL,EAAE,KAAK,GAAG,EACV,gBAAeF,EAAS,MAAM,QAAQ,CAAC,GAEtCE,EAAS,GAAKK,EAAC,OAAI,MAAM,iCAAiC,EAC3DA,EAAC,MAAG,MAAM,2BAA2BP,EAAS,KAAM,EACnDG,EAAS,GAAKH,EAAS,KAAK,OAAS,GACpCO,EAAC,KAAE,MAAM,4BACNQ,GAASf,EAAS,KAAM,GAAG,CAC9B,EAEDA,EAAS,MACRO,EAAC,OAAI,MAAM,cACRP,EAAS,KAAK,IAAIgB,GAAO,CACxB,IAAMC,EAAKD,EAAI,QAAQ,WAAY,EAAE,EAC/BE,EAAOL,EACTI,KAAMJ,EACJ,4BAA4BA,EAAKI,KACjC,cACF,GACJ,OACEV,EAAC,QAAK,MAAO,UAAUW,KAASF,CAAI,CAExC,CAAC,CACH,EAEDb,EAAS,GAAKC,EAAQ,OAAS,GAC9BG,EAAC,KAAE,MAAM,2BACNY,
GAAY,4BAA4B,EAAE,KAAG,GAAGf,CACnD,CAEJ,CACF,CAEJ,CAaO,SAASgB,GACdC,EACa,CACb,IAAMC,EAAYD,EAAO,GAAG,MACtBE,EAAO,CAAC,GAAGF,CAAM,EAGjBnB,EAASqB,EAAK,UAAUC,GAAO,CAACA,EAAI,SAAS,SAAS,GAAG,CAAC,EAC1D,CAACC,CAAO,EAAIF,EAAK,OAAOrB,EAAQ,CAAC,EAGnCwB,EAAQH,EAAK,UAAUC,GAAOA,EAAI,MAAQF,CAAS,EACnDI,IAAU,KACZA,EAAQH,EAAK,QAGf,IAAMI,EAAOJ,EAAK,MAAM,EAAGG,CAAK,EAC1BE,EAAOL,EAAK,MAAMG,CAAK,EAGvBG,EAAW,CACf9B,GAAqB0B,EAAS,EAAc,EAAE,CAACvB,GAAUwB,IAAU,EAAE,EACrE,GAAGC,EAAK,IAAIG,GAAW/B,GAAqB+B,EAAS,CAAW,CAAC,EACjE,GAAGF,EAAK,OAAS,CACfrB,EAAC,WAAQ,MAAM,0BACbA,EAAC,WAAQ,SAAU,IAChBqB,EAAK,OAAS,GAAKA,EAAK,SAAW,EAChCT,GAAY,wBAAwB,EACpCA,GAAY,2BAA4BS,EAAK,MAAM,CAEzD,EACC,GAAGA,EAAK,IAAIE,GAAW/B,GAAqB+B,EAAS,CAAW,CAAC,CACpE,CACF,EAAI,CAAC,CACP,EAGA,OACEvB,EAAC,MAAG,MAAM,0BACPsB,CACH,CAEJ,CC1IO,SAASE,GAAkBC,EAAiC,CACjE,OACEC,EAAC,MAAG,MAAM,oBACP,OAAO,QAAQD,CAAK,EAAE,IAAI,CAAC,CAACE,EAAKC,CAAK,IACrCF,EAAC,MAAG,MAAO,oCAAoCC,KAC5C,OAAOC,GAAU,SAAWC,GAAMD,CAAK,EAAIA,CAC9C,CACD,CACH,CAEJ,CCAO,SAASE,GACdC,EACa,CACb,IAAMC,EAAU,kCAAkCD,IAClD,OACEE,EAAC,OAAI,MAAOD,EAAS,OAAM,IACzBC,EAAC,UAAO,MAAM,gBAAgB,SAAU,GAAI,CAC9C,CAEJ,CCpBO,SAASC,GAAYC,EAAiC,CAC3D,OACEC,EAAC,OAAI,MAAM,0BACTA,EAAC,OAAI,MAAM,qBACRD,CACH,CACF,CAEJ,CCMA,SAASE,GAAcC,EAA+B,CACpD,IAAMC,EAASC,GAAc,EAGvBC,EAAM,IAAI,IAAI,MAAMH,EAAQ,WAAYC,EAAO,IAAI,EACzD,OACEG,EAAC,MAAG,MAAM,oBACRA,EAAC,KAAE,KAAM,GAAGD,IAAO,MAAM,oBACtBH,EAAQ,KACX,CACF,CAEJ,CAcO,SAASK,GACdC,EAAqBC,EACR,CACb,OACEH,EAAC,OAAI,MAAM,cACTA,EAAC,UACC,MAAM,sBACN,aAAYI,GAAY,sBAAsB,GAE7CD,EAAO,KACV,EACAH,EAAC,MAAG,MAAM,oBACPE,EAAS,IAAIP,EAAa,CAC7B,CACF,CAEJ,CCCO,SAASU,GACdC,EAAiBC,EACO,CACxB,IAAMC,EAAUC,EAAM,IAAMC,EAAc,CACxCC,GAAmBL,CAAE,EACrBM,GAA0BL,CAAS,CACrC,CAAC,CAAC,EACC,KACCM,EAAI,CAAC,CAAC,CAAE,EAAAC,EAAG,EAAAC,CAAE,EAAGC,CAAM,IAAqB,CACzC,GAAM,CAAE,MAAAC,EAAO,OAAAC,CAAO,EAAIC,GAAeb,CAAE,EAC3C,MAAQ,CACN,EAAGQ,EAAIE,EAAO,EAAIC,EAAQ,EAC1B,EAAGF,EAAIC,EAAO,EAAIE,EAAS,CAC7B,CACF,CAAC,CACH,EAGF,OAAOE,GAAkBd,CAAE,EACxB,KACCe,EAAUC,GAAUd,EACjB,KACCK,EAAIU,IAAW,CAAE,OAAAD,EAAQ,OAAAC,CA
AO,EAAE,EAClCC,GAAK,CAAC,CAACF,GAAU,GAAQ,CAC3B,CACF,CACF,CACJ,CAWO,SAASG,GACdnB,EAAiBC,EAAwB,CAAE,QAAAmB,CAAQ,EAChB,CACnC,GAAM,CAACC,EAASC,CAAK,EAAI,MAAM,KAAKtB,EAAG,QAAQ,EAG/C,OAAOG,EAAM,IAAM,CACjB,IAAMoB,EAAQ,IAAIC,EACZC,EAAQF,EAAM,KAAKG,GAAS,CAAC,CAAC,EACpC,OAAAH,EAAM,UAAU,CAGd,KAAK,CAAE,OAAAN,CAAO,EAAG,CACfjB,EAAG,MAAM,YAAY,iBAAkB,GAAGiB,EAAO,KAAK,EACtDjB,EAAG,MAAM,YAAY,iBAAkB,GAAGiB,EAAO,KAAK,CACxD,EAGA,UAAW,CACTjB,EAAG,MAAM,eAAe,gBAAgB,EACxCA,EAAG,MAAM,eAAe,gBAAgB,CAC1C,CACF,CAAC,EAGD2B,GAAuB3B,CAAE,EACtB,KACC4B,GAAUH,CAAK,CACjB,EACG,UAAUI,GAAW,CACpB7B,EAAG,gBAAgB,kBAAmB6B,CAAO,CAC/C,CAAC,EAGLC,EACEP,EAAM,KAAKQ,EAAO,CAAC,CAAE,OAAAf,CAAO,IAAMA,CAAM,CAAC,EACzCO,EAAM,KAAKS,GAAa,GAAG,EAAGD,EAAO,CAAC,CAAE,OAAAf,CAAO,IAAM,CAACA,CAAM,CAAC,CAC/D,EACG,UAAU,CAGT,KAAK,CAAE,OAAAA,CAAO,EAAG,CACXA,EACFhB,EAAG,QAAQqB,CAAO,EAElBA,EAAQ,OAAO,CACnB,EAGA,UAAW,CACTrB,EAAG,QAAQqB,CAAO,CACpB,CACF,CAAC,EAGHE,EACG,KACCU,GAAU,GAAIC,EAAuB,CACvC,EACG,UAAU,CAAC,CAAE,OAAAlB,CAAO,IAAM,CACzBK,EAAQ,UAAU,OAAO,qBAAsBL,CAAM,CACvD,CAAC,EAGLO,EACG,KACCY,GAAa,IAAKD,EAAuB,EACzCH,EAAO,IAAM,CAAC,CAAC/B,EAAG,YAAY,EAC9BO,EAAI,IAAMP,EAAG,aAAc,sBAAsB,CAAC,EAClDO,EAAI,CAAC,CAAE,EAAAC,CAAE,IAAMA,CAAC,CAClB,EACG,UAAU,CAGT,KAAK4B,EAAQ,CACPA,EACFpC,EAAG,MAAM,YAAY,iBAAkB,GAAG,CAACoC,KAAU,EAErDpC,EAAG,MAAM,eAAe,gBAAgB,CAC5C,EAGA,UAAW,CACTA,EAAG,MAAM,eAAe,gBAAgB,CAC1C,CACF,CAAC,EAGLqC,EAAsBf,EAAO,OAAO,EACjC,KACCM,GAAUH,CAAK,EACfM,EAAOO,GAAM,EAAEA,EAAG,SAAWA,EAAG,QAAQ,CAC1C,EACG,UAAUA,GAAMA,EAAG,eAAe,CAAC,EAGxCD,EAAsBf,EAAO,WAAW,EACrC,KACCM,GAAUH,CAAK,EACfc,GAAehB,CAAK,CACtB,EACG,UAAU,CAAC,CAACe,EAAI,CAAE,OAAAtB,CAAO,CAAC,IAAM,CAvOzC,IAAAwB,EA0OU,GAAIF,EAAG,SAAW,GAAKA,EAAG,SAAWA,EAAG,QACtCA,EAAG,eAAe,UAGTtB,EAAQ,CACjBsB,EAAG,eAAe,EAGlB,IAAMG,EAASzC,EAAG,cAAe,QAAQ,gBAAgB,EACrDyC,aAAkB,YACpBA,EAAO,MAAM,GAEbD,EAAAE,GAAiB,IAAjB,MAAAF,EAAoB,MACxB,CACF,CAAC,EAGLpB,EACG,KACCQ,GAAUH,CAAK,EACfM,EAAOY,GAAUA,IAAWtB,CAAO,EACnCuB,GAAM,GAAG,CACX,EACG,UAAU,IAAM5C,EAAG,MAAM,CAAC,EAGxBD,GAAgBC,EAAIC,CAAS,EACjC,KACC4C,EAAIC,GAASvB,
EAAM,KAAKuB,CAAK,CAAC,EAC9BC,EAAS,IAAMxB,EAAM,SAAS,CAAC,EAC/BhB,EAAIuC,GAAUE,EAAA,CAAE,IAAKhD,GAAO8C,EAAQ,CACtC,CACJ,CAAC,CACH,CCrMA,SAASG,GAAsBC,EAAgC,CAC7D,IAAMC,EAAkB,CAAC,EACzB,QAAWC,KAAMC,EAAY,eAAgBH,CAAS,EAAG,CACvD,IAAMI,EAAgB,CAAC,EAGjBC,EAAK,SAAS,mBAAmBH,EAAI,WAAW,SAAS,EAC/D,QAASI,EAAOD,EAAG,SAAS,EAAGC,EAAMA,EAAOD,EAAG,SAAS,EACtDD,EAAM,KAAKE,CAAY,EAGzB,QAASC,KAAQH,EAAO,CACtB,IAAII,EAGJ,KAAQA,EAAQ,gBAAgB,KAAKD,EAAK,WAAY,GAAI,CACxD,GAAM,CAAC,CAAEE,EAAIC,CAAK,EAAIF,EACtB,GAAI,OAAOE,GAAU,YAAa,CAChC,IAAMC,EAASJ,EAAK,UAAUC,EAAM,KAAK,EACzCD,EAAOI,EAAO,UAAUF,EAAG,MAAM,EACjCR,EAAQ,KAAKU,CAAM,CAGrB,KAAO,CACLJ,EAAK,YAAcE,EACnBR,EAAQ,KAAKM,CAAI,EACjB,KACF,CACF,CACF,CACF,CACA,OAAON,CACT,CAQA,SAASW,GAAKC,EAAqBC,EAA2B,CAC5DA,EAAO,OAAO,GAAG,MAAM,KAAKD,EAAO,UAAU,CAAC,CAChD,CAoBO,SAASE,GACdb,EAAiBF,EAAwB,CAAE,QAAAgB,EAAS,OAAAC,CAAO,EACxB,CAGnC,IAAMC,EAASlB,EAAU,QAAQ,MAAM,EACjCmB,EAASD,GAAA,YAAAA,EAAQ,GAGjBE,EAAc,IAAI,IACxB,QAAWT,KAAUZ,GAAsBC,CAAS,EAAG,CACrD,GAAM,CAAC,CAAES,CAAE,EAAIE,EAAO,YAAa,MAAM,WAAW,EAChDU,GAAmB,gBAAgBZ,KAAOP,CAAE,IAC9CkB,EAAY,IAAIX,EAAIa,GAAiBb,EAAIU,CAAM,CAAC,EAChDR,EAAO,YAAYS,EAAY,IAAIX,CAAE,CAAE,EAE3C,CAGA,OAAIW,EAAY,OAAS,EAChBG,EAGFC,EAAM,IAAM,CACjB,IAAMC,EAAQ,IAAIC,EAGZC,EAAsC,CAAC,EAC7C,OAAW,CAAClB,EAAImB,CAAU,IAAKR,EAC7BO,EAAM,KAAK,CACTE,EAAW,cAAeD,CAAU,EACpCC,EAAW,gBAAgBpB,KAAOP,CAAE,CACtC,CAAC,EAGH,OAAAe,EACG,KACCa,GAAUL,EAAM,KAAKM,GAAS,CAAC,CAAC,CAAC,CACnC,EACG,UAAUC,GAAU,CACnB9B,EAAG,OAAS,CAAC8B,EAGb,OAAW,CAACC,EAAOC,CAAK,IAAKP,EACtBK,EAGHpB,GAAKqB,EAAOC,CAAK,EAFjBtB,GAAKsB,EAAOD,CAAK,CAGvB,CAAC,EAGEE,EAAM,GAAG,CAAC,GAAGf,CAAW,EAC5B,IAAI,CAAC,CAAC,CAAEQ,CAAU,IACjBQ,GAAgBR,EAAY5B,EAAW,CAAE,QAAAgB,CAAQ,CAAC,CACnD,CACH,EACG,KACCqB,EAAS,IAAMZ,EAAM,SAAS,CAAC,EAC/Ba,GAAM,CACR,CACJ,CAAC,CACH,CV9GA,IAAIC,GAAW,EAaf,SAASC,GAAkBC,EAA0C,CACnE,GAAIA,EAAG,mBAAoB,CACzB,IAAMC,EAAUD,EAAG,mBACnB,GAAIC,EAAQ,UAAY,KACtB,OAAOA,EAGJ,GAAIA,EAAQ,UAAY,KAAO,CAACA,EAAQ,SAAS,OACpD,OAAOF,GAAkBE,CAAO,CACpC,CAIF,CAgBO,SAASC,GACdF,EACuB,CACvB,OAAOG,GAAiBH,CAAE,EACvB,KACCI,EAA
I,CAAC,CAAE,MAAAC,CAAM,KAEJ,CACL,WAFcC,GAAsBN,CAAE,EAElB,MAAQK,CAC9B,EACD,EACDE,EAAwB,YAAY,CACtC,CACJ,CAoBO,SAASC,GACdR,EAAiBS,EAC8B,CAC/C,GAAM,CAAE,QAASC,CAAM,EAAI,WAAW,SAAS,EAGzCC,EAAWC,EAAM,IAAM,CAC3B,IAAMC,EAAQ,IAAIC,EASlB,GARAD,EAAM,UAAU,CAAC,CAAE,WAAAE,CAAW,IAAM,CAC9BA,GAAcL,EAChBV,EAAG,aAAa,WAAY,GAAG,EAE/BA,EAAG,gBAAgB,UAAU,CACjC,CAAC,EAGG,GAAAgB,QAAY,YAAY,EAAG,CAC7B,IAAMC,EAASjB,EAAG,QAAQ,KAAK,EAC/BiB,EAAO,GAAK,UAAU,EAAEnB,KACxBmB,EAAO,aACLC,GAAsBD,EAAO,EAAE,EAC/BjB,CACF,CACF,CAGA,IAAMmB,EAAYnB,EAAG,QAAQ,YAAY,EACzC,GAAImB,aAAqB,YAAa,CACpC,IAAMC,EAAOrB,GAAkBoB,CAAS,EAGxC,GAAI,OAAOC,GAAS,cAClBD,EAAU,UAAU,SAAS,UAAU,GACvCE,EAAQ,uBAAuB,GAC9B,CACD,IAAMC,EAAeC,GAAoBH,EAAMpB,EAAIS,CAAO,EAG1D,OAAOP,GAAeF,CAAE,EACrB,KACCwB,EAAIC,GAASZ,EAAM,KAAKY,CAAK,CAAC,EAC9BC,EAAS,IAAMb,EAAM,SAAS,CAAC,EAC/BT,EAAIqB,GAAUE,EAAA,CAAE,IAAK3B,GAAOyB,EAAQ,EACpCG,GACEzB,GAAiBgB,CAAS,EACvB,KACCf,EAAI,CAAC,CAAE,MAAAC,EAAO,OAAAwB,CAAO,IAAMxB,GAASwB,CAAM,EAC1CC,EAAqB,EACrBC,EAAUC,GAAUA,EAASV,EAAeW,CAAK,CACnD,CACJ,CACF,CACJ,CACF,CAGA,OAAO/B,GAAeF,CAAE,EACrB,KACCwB,EAAIC,GAASZ,EAAM,KAAKY,CAAK,CAAC,EAC9BC,EAAS,IAAMb,EAAM,SAAS,CAAC,EAC/BT,EAAIqB,GAAUE,EAAA,CAAE,IAAK3B,GAAOyB,EAAQ,CACtC,CACJ,CAAC,EAGD,OAAIJ,EAAQ,cAAc,EACjBa,GAAuBlC,CAAE,EAC7B,KACCmC,EAAOC,GAAWA,CAAO,EACzBC,GAAK,CAAC,EACNN,EAAU,IAAMpB,CAAQ,CAC1B,EAGGA,CACT,4uJWpLA,IAAI2B,GAKAC,GAAW,EAWf,SAASC,IAAiC,CACxC,OAAO,OAAO,SAAY,aAAe,mBAAmB,QACxDC,GAAY,qDAAqD,EACjEC,EAAG,MAAS,CAClB,CAaO,SAASC,GACdC,EACgC,CAChC,OAAAA,EAAG,UAAU,OAAO,SAAS,EAC7BN,QAAaE,GAAa,EACvB,KACCK,EAAI,IAAM,QAAQ,WAAW,CAC3B,YAAa,GACb,SAAAC,EACF,CAAC,CAAC,EACFC,EAAI,IAAG,EAAY,EACnBC,EAAY,CAAC,CACf,GAGFV,GAAS,UAAU,IAAM,CACvBM,EAAG,UAAU,IAAI,SAAS,EAC1B,IAAMK,EAAK,aAAaV,OAClBW,EAAOC,EAAE,MAAO,CAAE,MAAO,SAAU,CAAC,EAC1C,QAAQ,WAAW,OAAOF,EAAIL,EAAG,YAAcQ,GAAgB,CAG7D,IAAMC,EAASH,EAAK,aAAa,CAAE,KAAM,QAAS,CAAC,EACnDG,EAAO,UAAYD,EAGnBR,EAAG,YAAYM,CAAI,CACrB,CAAC,CACH,CAAC,EAGMZ,GACJ,KACCS,EAAI,KAAO,CAAE,IAAKH,CAAG,EAAE,CACzB,CACJ,CC1CO,SAASU,GACdC,EAAwB,CAAE,QAAAC,EAAS,OAAAC,CAAO,EACrB,
CACrB,IAAIC,EAAO,GACX,OAAOC,EAGLH,EACG,KACCI,EAAIC,GAAUA,EAAO,QAAQ,qBAAqB,CAAE,EACpDC,EAAOC,GAAWR,IAAOQ,CAAO,EAChCH,EAAI,KAAO,CACT,OAAQ,OAAQ,OAAQ,EAC1B,EAAa,CACf,EAGFH,EACG,KACCK,EAAOE,GAAUA,GAAU,CAACN,CAAI,EAChCO,EAAI,IAAMP,EAAOH,EAAG,IAAI,EACxBK,EAAII,IAAW,CACb,OAAQA,EAAS,OAAS,OAC5B,EAAa,CACf,CACJ,CACF,CAaO,SAASE,GACdX,EAAwBY,EACQ,CAChC,OAAOC,EAAM,IAAM,CACjB,IAAMC,EAAQ,IAAIC,EAClB,OAAAD,EAAM,UAAU,CAAC,CAAE,OAAAE,EAAQ,OAAAC,CAAO,IAAM,CACtCjB,EAAG,gBAAgB,OAAQgB,IAAW,MAAM,EACxCC,GACFjB,EAAG,eAAe,CACtB,CAAC,EAGMD,GAAaC,EAAIY,CAAO,EAC5B,KACCF,EAAIQ,GAASJ,EAAM,KAAKI,CAAK,CAAC,EAC9BC,EAAS,IAAML,EAAM,SAAS,CAAC,EAC/BT,EAAIa,GAAUE,EAAA,CAAE,IAAKpB,GAAOkB,EAAQ,CACtC,CACJ,CAAC,CACH,CC5FA,IAAMG,GAAWC,EAAE,OAAO,EAgBnB,SAASC,GACdC,EACkC,CAClC,OAAAA,EAAG,YAAYH,EAAQ,EACvBA,GAAS,YAAYI,GAAYD,CAAE,CAAC,EAG7BE,EAAG,CAAE,IAAKF,CAAG,CAAC,CACvB,CCuBO,SAASG,GACdC,EACyB,CACzB,IAAMC,EAASC,EAA8B,iBAAkBF,CAAE,EAC3DG,EAAUF,EAAO,KAAKG,GAASA,EAAM,OAAO,GAAKH,EAAO,GAC9D,OAAOI,EAAM,GAAGJ,EAAO,IAAIG,GAASE,EAAUF,EAAO,QAAQ,EAC1D,KACCG,EAAI,IAAMC,EAA6B,cAAcJ,EAAM,MAAM,CAAC,CACpE,CACF,CAAC,EACE,KACCK,EAAUD,EAA6B,cAAcL,EAAQ,MAAM,CAAC,EACpEI,EAAIG,IAAW,CAAE,OAAAA,CAAO,EAAE,CAC5B,CACJ,CAeO,SAASC,GACdX,EAAiB,CAAE,UAAAY,CAAU,EACO,CAGpC,IAAMC,EAAOC,GAAoB,MAAM,EACvCd,EAAG,OAAOa,CAAI,EAGd,IAAME,EAAOD,GAAoB,MAAM,EACvCd,EAAG,OAAOe,CAAI,EAGd,IAAMC,EAAYR,EAAW,iBAAkBR,CAAE,EACjD,OAAOiB,EAAM,IAAM,CACjB,IAAMC,EAAQ,IAAIC,EACZC,EAAQF,EAAM,KAAKG,GAAS,CAAC,CAAC,EACpC,OAAAC,EAAc,CAACJ,EAAOK,GAAiBvB,CAAE,CAAC,CAAC,EACxC,KACCwB,GAAU,EAAGC,EAAuB,EACpCC,GAAUN,CAAK,CACjB,EACG,UAAU,CAGT,KAAK,CAAC,CAAE,OAAAV,CAAO,EAAGiB,CAAI,EAAG,CACvB,IAAMC,EAASC,GAAiBnB,CAAM,EAChC,CAAE,MAAAoB,CAAM,EAAIC,GAAerB,CAAM,EAGvCV,EAAG,MAAM,YAAY,mBAAoB,GAAG4B,EAAO,KAAK,EACxD5B,EAAG,MAAM,YAAY,uBAAwB,GAAG8B,KAAS,EAGzD,IAAME,EAAUC,GAAwBjB,CAAS,GAE/CY,EAAO,EAAYI,EAAQ,GAC3BJ,EAAO,EAAIE,EAAQE,EAAQ,EAAIL,EAAK,QAEpCX,EAAU,SAAS,CACjB,KAAM,KAAK,IAAI,EAAGY,EAAO,EAAI,EAAE,EAC/B,SAAU,QACZ,CAAC,CACL,EAGA,UAAW,CACT5B,EAAG,MAAM,eAAe,kBAAkB,EAC1CA,EAAG,MAAM,eAAe,sBAAsB,
CAChD,CACF,CAAC,EAGLsB,EAAc,CACZY,GAA0BlB,CAAS,EACnCO,GAAiBP,CAAS,CAC5B,CAAC,EACE,KACCU,GAAUN,CAAK,CACjB,EACG,UAAU,CAAC,CAACQ,EAAQD,CAAI,IAAM,CAC7B,IAAMK,EAAUG,GAAsBnB,CAAS,EAC/CH,EAAK,OAASe,EAAO,EAAI,GACzBb,EAAK,OAASa,EAAO,EAAII,EAAQ,MAAQL,EAAK,MAAQ,EACxD,CAAC,EAGLtB,EACEC,EAAUO,EAAM,OAAO,EAAE,KAAKN,EAAI,IAAM,EAAE,CAAC,EAC3CD,EAAUS,EAAM,OAAO,EAAE,KAAKR,EAAI,IAAM,CAAE,CAAC,CAC7C,EACG,KACCmB,GAAUN,CAAK,CACjB,EACG,UAAUgB,GAAa,CACtB,GAAM,CAAE,MAAAN,CAAM,EAAIC,GAAef,CAAS,EAC1CA,EAAU,SAAS,CACjB,KAAMc,EAAQM,EACd,SAAU,QACZ,CAAC,CACH,CAAC,EAGDC,EAAQ,mBAAmB,GAC7BnB,EAAM,KACJoB,GAAK,CAAC,EACNC,GAAe3B,CAAS,CAC1B,EACG,UAAU,CAAC,CAAC,CAAE,OAAAF,CAAO,EAAG,CAAE,OAAAkB,CAAO,CAAC,IAAM,CACvC,IAAMY,EAAM9B,EAAO,UAAU,KAAK,EAClC,GAAIA,EAAO,aAAa,mBAAmB,EACzCA,EAAO,gBAAgB,mBAAmB,MAGrC,CACL,IAAM+B,EAAIzC,EAAG,UAAY4B,EAAO,EAGhC,QAAWc,KAAOxC,EAAY,aAAa,EACzC,QAAWE,KAASF,EAClB,iBAAkBwC,CACpB,EAAG,CACD,IAAMC,EAAQnC,EAAW,cAAcJ,EAAM,MAAM,EACnD,GACEuC,IAAUjC,GACViC,EAAM,UAAU,KAAK,IAAMH,EAC3B,CACAG,EAAM,aAAa,oBAAqB,EAAE,EAC1CvC,EAAM,MAAM,EACZ,KACF,CACF,CAGF,OAAO,SAAS,CACd,IAAKJ,EAAG,UAAYyC,CACtB,CAAC,EAGD,IAAMG,EAAO,SAAmB,QAAQ,GAAK,CAAC,EAC9C,SAAS,SAAU,CAAC,GAAG,IAAI,IAAI,CAACJ,EAAK,GAAGI,CAAI,CAAC,CAAC,CAAC,CACjD,CACF,CAAC,EAGE7C,GAAiBC,CAAE,EACvB,KACC6C,EAAIC,GAAS5B,EAAM,KAAK4B,CAAK,CAAC,EAC9BC,EAAS,IAAM7B,EAAM,SAAS,CAAC,EAC/BX,EAAIuC,GAAUE,EAAA,CAAE,IAAKhD,GAAO8C,EAAQ,CACtC,CACJ,CAAC,EACE,KACCG,GAAYC,EAAc,CAC5B,CACJ,CCtKO,SAASC,GACdC,EAAiB,CAAE,UAAAC,EAAW,QAAAC,EAAS,OAAAC,CAAO,EACd,CAChC,OAAOC,EAGL,GAAGC,EAAY,2BAA4BL,CAAE,EAC1C,IAAIM,GAASC,GAAeD,EAAO,CAAE,QAAAJ,EAAS,OAAAC,CAAO,CAAC,CAAC,EAG1D,GAAGE,EAAY,cAAeL,CAAE,EAC7B,IAAIM,GAASE,GAAaF,CAAK,CAAC,EAGnC,GAAGD,EAAY,qBAAsBL,CAAE,EACpC,IAAIM,GAASG,GAAeH,CAAK,CAAC,EAGrC,GAAGD,EAAY,UAAWL,CAAE,EACzB,IAAIM,GAASI,GAAaJ,EAAO,CAAE,QAAAJ,EAAS,OAAAC,CAAO,CAAC,CAAC,EAGxD,GAAGE,EAAY,cAAeL,CAAE,EAC7B,IAAIM,GAASK,GAAiBL,EAAO,CAAE,UAAAL,CAAU,CAAC,CAAC,CACxD,CACF,CClCO,SAASW,GACdC,EAAkB,CAAE,OAAAC,CAAO,EACP,CACpB,OAAOA,EACJ,KACCC,EAAUC,GAAWC,EACnBC,EAAG,EAAI,EACPA
,EAAG,EAAK,EAAE,KAAKC,GAAM,GAAI,CAAC,CAC5B,EACG,KACCC,EAAIC,IAAW,CAAE,QAAAL,EAAS,OAAAK,CAAO,EAAE,CACrC,CACF,CACF,CACJ,CAaO,SAASC,GACdC,EAAiBC,EACc,CAC/B,IAAMC,EAAQC,EAAW,cAAeH,CAAE,EAC1C,OAAOI,EAAM,IAAM,CACjB,IAAMC,EAAQ,IAAIC,EAClB,OAAAD,EAAM,UAAU,CAAC,CAAE,QAAAZ,EAAS,OAAAK,CAAO,IAAM,CACvCE,EAAG,UAAU,OAAO,oBAAqBF,CAAM,EAC/CI,EAAM,YAAcT,CACtB,CAAC,EAGMJ,GAAYW,EAAIC,CAAO,EAC3B,KACCM,EAAIC,GAASH,EAAM,KAAKG,CAAK,CAAC,EAC9BC,EAAS,IAAMJ,EAAM,SAAS,CAAC,EAC/BR,EAAIW,GAAUE,EAAA,CAAE,IAAKV,GAAOQ,EAAQ,CACtC,CACJ,CAAC,CACH,CC9BA,SAASG,GAAS,CAAE,UAAAC,CAAU,EAAsC,CAClE,GAAI,CAACC,EAAQ,iBAAiB,EAC5B,OAAOC,EAAG,EAAK,EAGjB,IAAMC,EAAaH,EAChB,KACCI,EAAI,CAAC,CAAE,OAAQ,CAAE,EAAAC,CAAE,CAAE,IAAMA,CAAC,EAC5BC,GAAY,EAAG,CAAC,EAChBF,EAAI,CAAC,CAACG,EAAGC,CAAC,IAAM,CAACD,EAAIC,EAAGA,CAAC,CAAU,EACnCC,EAAwB,CAAC,CAC3B,EAGIC,EAAUC,EAAc,CAACX,EAAWG,CAAU,CAAC,EAClD,KACCS,EAAO,CAAC,CAAC,CAAE,OAAAC,CAAO,EAAG,CAAC,CAAER,CAAC,CAAC,IAAM,KAAK,IAAIA,EAAIQ,EAAO,CAAC,EAAI,GAAG,EAC5DT,EAAI,CAAC,CAAC,CAAE,CAACU,CAAS,CAAC,IAAMA,CAAS,EAClCC,EAAqB,CACvB,EAGIC,EAAUC,GAAY,QAAQ,EACpC,OAAON,EAAc,CAACX,EAAWgB,CAAO,CAAC,EACtC,KACCZ,EAAI,CAAC,CAAC,CAAE,OAAAS,CAAO,EAAGK,CAAM,IAAML,EAAO,EAAI,KAAO,CAACK,CAAM,EACvDH,EAAqB,EACrBI,EAAUC,GAAUA,EAASV,EAAUR,EAAG,EAAK,CAAC,EAChDmB,EAAU,EAAK,CACjB,CACJ,CAcO,SAASC,GACdC,EAAiBC,EACG,CACpB,OAAOC,EAAM,IAAMd,EAAc,CAC/Be,GAAiBH,CAAE,EACnBxB,GAASyB,CAAO,CAClB,CAAC,CAAC,EACC,KACCpB,EAAI,CAAC,CAAC,CAAE,OAAAuB,CAAO,EAAGC,CAAM,KAAO,CAC7B,OAAAD,EACA,OAAAC,CACF,EAAE,EACFb,EAAqB,CAACR,EAAGC,IACvBD,EAAE,SAAWC,EAAE,QACfD,EAAE,SAAWC,EAAE,MAChB,EACDqB,EAAY,CAAC,CACf,CACJ,CAaO,SAASC,GACdP,EAAiB,CAAE,QAAAQ,EAAS,MAAAC,CAAM,EACH,CAC/B,OAAOP,EAAM,IAAM,CACjB,IAAMQ,EAAQ,IAAIC,EACZC,EAAQF,EAAM,KAAKG,GAAS,CAAC,CAAC,EACpC,OAAAH,EACG,KACCxB,EAAwB,QAAQ,EAChC4B,GAAkBN,CAAO,CAC3B,EACG,UAAU,CAAC,CAAC,CAAE,OAAAX,CAAO,EAAG,CAAE,OAAAQ,CAAO,CAAC,IAAM,CACvCL,EAAG,UAAU,OAAO,oBAAqBH,GAAU,CAACQ,CAAM,EAC1DL,EAAG,OAASK,CACd,CAAC,EAGLI,EAAM,UAAUC,CAAK,EAGdF,EACJ,KACCO,GAAUH,CAAK,EACf/B,EAAImC,GAAUC,EAAA,CAAE,IAAKjB,GAAOg
B,EAAQ,CACtC,CACJ,CAAC,CACH,CChHO,SAASE,GACdC,EAAiB,CAAE,UAAAC,EAAW,QAAAC,CAAQ,EACb,CACzB,OAAOC,GAAgBH,EAAI,CAAE,UAAAC,EAAW,QAAAC,CAAQ,CAAC,EAC9C,KACCE,EAAI,CAAC,CAAE,OAAQ,CAAE,EAAAC,CAAE,CAAE,IAAM,CACzB,GAAM,CAAE,OAAAC,CAAO,EAAIC,GAAeP,CAAE,EACpC,MAAO,CACL,OAAQK,GAAKC,CACf,CACF,CAAC,EACDE,EAAwB,QAAQ,CAClC,CACJ,CAaO,SAASC,GACdT,EAAiBU,EACmB,CACpC,OAAOC,EAAM,IAAM,CACjB,IAAMC,EAAQ,IAAIC,EAClBD,EAAM,UAAU,CAAC,CAAE,OAAAE,CAAO,IAAM,CAC9Bd,EAAG,UAAU,OAAO,2BAA4Bc,CAAM,CACxD,CAAC,EAGD,IAAMC,EAAUC,GAAmB,YAAY,EAC/C,OAAI,OAAOD,GAAY,YACdE,EAGFlB,GAAiBgB,EAASL,CAAO,EACrC,KACCQ,EAAIC,GAASP,EAAM,KAAKO,CAAK,CAAC,EAC9BC,EAAS,IAAMR,EAAM,SAAS,CAAC,EAC/BR,EAAIe,GAAUE,EAAA,CAAE,IAAKrB,GAAOmB,EAAQ,CACtC,CACJ,CAAC,CACH,CCvDO,SAASG,GACdC,EAAiB,CAAE,UAAAC,EAAW,QAAAC,CAAQ,EACpB,CAGlB,IAAMC,EAAUD,EACb,KACCE,EAAI,CAAC,CAAE,OAAAC,CAAO,IAAMA,CAAM,EAC1BC,EAAqB,CACvB,EAGIC,EAAUJ,EACb,KACCK,EAAU,IAAMC,GAAiBT,CAAE,EAChC,KACCI,EAAI,CAAC,CAAE,OAAAC,CAAO,KAAO,CACnB,IAAQL,EAAG,UACX,OAAQA,EAAG,UAAYK,CACzB,EAAE,EACFK,EAAwB,QAAQ,CAClC,CACF,CACF,EAGF,OAAOC,EAAc,CAACR,EAASI,EAASN,CAAS,CAAC,EAC/C,KACCG,EAAI,CAAC,CAACQ,EAAQ,CAAE,IAAAC,EAAK,OAAAC,CAAO,EAAG,CAAE,OAAQ,CAAE,EAAAC,CAAE,EAAG,KAAM,CAAE,OAAAV,CAAO,CAAE,CAAC,KAChEA,EAAS,KAAK,IAAI,EAAGA,EACjB,KAAK,IAAI,EAAGQ,EAASE,EAAIH,CAAM,EAC/B,KAAK,IAAI,EAAGP,EAASU,EAAID,CAAM,CACnC,EACO,CACL,OAAQD,EAAMD,EACd,OAAAP,EACA,OAAQQ,EAAMD,GAAUG,CAC1B,EACD,EACDT,EAAqB,CAACU,EAAGC,IACvBD,EAAE,SAAWC,EAAE,QACfD,EAAE,SAAWC,EAAE,QACfD,EAAE,SAAWC,EAAE,MAChB,CACH,CACJ,CClDO,SAASC,GACdC,EACqB,CACrB,IAAMC,EAAU,SAAkB,WAAW,GAAK,CAChD,MAAOD,EAAO,UAAUE,GAAS,WAC/BA,EAAM,aAAa,qBAAqB,CAC1C,EAAE,OAAO,CACX,EAGA,OAAOC,EAAG,GAAGH,CAAM,EAChB,KACCI,GAASF,GAASG,EAAUH,EAAO,QAAQ,EACxC,KACCI,EAAI,IAAMJ,CAAK,CACjB,CACF,EACAK,EAAUP,EAAO,KAAK,IAAI,EAAGC,EAAQ,KAAK,EAAE,EAC5CK,EAAIJ,IAAU,CACZ,MAAOF,EAAO,QAAQE,CAAK,EAC3B,MAAO,CACL,OAASA,EAAM,aAAa,sBAAsB,EAClD,QAASA,EAAM,aAAa,uBAAuB,EACnD,OAASA,EAAM,aAAa,sBAAsB,CACpD,CACF,EAAa,EACbM,EAAY,CAAC,CACf,CACJ,CASO,SAASC,GACdC,EACgC,CAChC,OAAOC,EAAM,IAAM,CACjB,IAAM
C,EAAQ,IAAIC,EAClBD,EAAM,UAAUE,GAAW,CACzB,SAAS,KAAK,aAAa,0BAA2B,EAAE,EAGxD,OAAW,CAACC,EAAKC,CAAK,IAAK,OAAO,QAAQF,EAAQ,KAAK,EACrD,SAAS,KAAK,aAAa,iBAAiBC,IAAOC,CAAK,EAG1D,QAASC,EAAQ,EAAGA,EAAQjB,EAAO,OAAQiB,IAAS,CAClD,IAAMC,EAAQlB,EAAOiB,GAAO,mBACxBC,aAAiB,cACnBA,EAAM,OAASJ,EAAQ,QAAUG,EACrC,CAGA,SAAS,YAAaH,CAAO,CAC/B,CAAC,EAGDF,EAAM,KAAKO,GAAUC,EAAc,CAAC,EACjC,UAAU,IAAM,CACf,SAAS,KAAK,gBAAgB,yBAAyB,CACzD,CAAC,EAGH,IAAMpB,EAASqB,EAA8B,QAASX,CAAE,EACxD,OAAOX,GAAaC,CAAM,EACvB,KACCsB,EAAIC,GAASX,EAAM,KAAKW,CAAK,CAAC,EAC9BC,EAAS,IAAMZ,EAAM,SAAS,CAAC,EAC/BN,EAAIiB,GAAUE,EAAA,CAAE,IAAKf,GAAOa,EAAQ,CACtC,CACJ,CAAC,CACH,CC/HA,IAAAG,GAAwB,SAiCxB,SAASC,GAAQC,EAAyB,CACxCA,EAAG,aAAa,kBAAmB,EAAE,EACrC,IAAMC,EAAOD,EAAG,UAChB,OAAAA,EAAG,gBAAgB,iBAAiB,EAC7BC,CACT,CAWO,SAASC,GACd,CAAE,OAAAC,CAAO,EACH,CACF,GAAAC,QAAY,YAAY,GAC1B,IAAIC,EAA8BC,GAAc,CAC9C,IAAI,GAAAF,QAAY,iDAAkD,CAChE,KAAMJ,GACJA,EAAG,aAAa,qBAAqB,GACrCD,GAAQQ,EACNP,EAAG,aAAa,uBAAuB,CACzC,CAAC,CAEL,CAAC,EACE,GAAG,UAAWQ,GAAMF,EAAW,KAAKE,CAAE,CAAC,CAC5C,CAAC,EACE,KACCC,EAAID,GAAM,CACQA,EAAG,QACX,MAAM,CAChB,CAAC,EACDE,EAAI,IAAMC,GAAY,kBAAkB,CAAC,CAC3C,EACG,UAAUR,CAAM,CAEzB,CCrCA,SAASS,GAAWC,EAAwB,CAC1C,GAAIA,EAAK,OAAS,EAChB,MAAO,CAAC,EAAE,EAGZ,GAAM,CAACC,EAAMC,CAAI,EAAI,CAAC,GAAGF,CAAI,EAC1B,KAAK,CAACG,EAAGC,IAAMD,EAAE,OAASC,EAAE,MAAM,EAClC,IAAIC,GAAOA,EAAI,QAAQ,SAAU,EAAE,CAAC,EAGnCC,EAAQ,EACZ,GAAIL,IAASC,EACXI,EAAQL,EAAK,WAEb,MAAOA,EAAK,WAAWK,CAAK,IAAMJ,EAAK,WAAWI,CAAK,GACrDA,IAGJ,OAAON,EAAK,IAAIK,GAAOA,EAAI,QAAQJ,EAAK,MAAM,EAAGK,CAAK,EAAG,EAAE,CAAC,CAC9D,CAaO,SAASC,GAAaC,EAAiC,CAC5D,IAAMC,EAAS,SAAkB,YAAa,eAAgBD,CAAI,EAClE,GAAIC,EACF,OAAOC,EAAGD,CAAM,EACX,CACL,IAAME,EAASC,GAAc,EAC7B,OAAOC,GAAW,IAAI,IAAI,cAAeL,GAAQG,EAAO,IAAI,CAAC,EAC1D,KACCG,EAAIC,GAAWhB,GAAWiB,EAAY,MAAOD,CAAO,EACjD,IAAIE,GAAQA,EAAK,WAAY,CAChC,CAAC,EACDC,GAAW,IAAMC,CAAK,EACtBC,GAAe,CAAC,CAAC,EACjBC,EAAIN,GAAW,SAAS,YAAaA,EAAS,eAAgBP,CAAI,CAAC,CACrE,CACJ,CACF,CCIO,SAASc,GACd,CAAE,UAAAC,EAAW,UAAAC,EAAW,UAAAC,CAAU,EAC5B,CACN,IAAMC,EAASC,GAAc,EAC7B,GAAI,SAAS,WAAa,QA
CxB,OAGE,sBAAuB,UACzB,QAAQ,kBAAoB,SAG5BC,EAAU,OAAQ,cAAc,EAC7B,UAAU,IAAM,CACf,QAAQ,kBAAoB,MAC9B,CAAC,GAIL,IAAMC,EAAUC,GAAoC,gBAAgB,EAChE,OAAOD,GAAY,cACrBA,EAAQ,KAAOA,EAAQ,MAGzB,IAAME,EAAQC,GAAa,EACxB,KACCC,EAAIC,GAASA,EAAM,IAAIC,GAAQ,GAAG,IAAI,IAAIA,EAAMT,EAAO,IAAI,GAAG,CAAC,EAC/DU,EAAUC,GAAQT,EAAsB,SAAS,KAAM,OAAO,EAC3D,KACCU,EAAOC,GAAM,CAACA,EAAG,SAAW,CAACA,EAAG,OAAO,EACvCH,EAAUG,GAAM,CACd,GAAIA,EAAG,kBAAkB,QAAS,CAChC,IAAMC,EAAKD,EAAG,OAAO,QAAQ,GAAG,EAChC,GAAIC,GAAM,CAACA,EAAG,OAAQ,CACpB,IAAMC,EAAM,IAAI,IAAID,EAAG,IAAI,EAO3B,GAJAC,EAAI,OAAS,GACbA,EAAI,KAAO,GAITA,EAAI,WAAa,SAAS,UAC1BJ,EAAK,SAASI,EAAI,SAAS,CAAC,EAE5B,OAAAF,EAAG,eAAe,EACXG,EAAG,CACR,IAAK,IAAI,IAAIF,EAAG,IAAI,CACtB,CAAC,CAEL,CACF,CACA,OAAOG,EACT,CAAC,CACH,CACF,EACAC,GAAoB,CACtB,EAGIC,EAAOjB,EAAyB,OAAQ,UAAU,EACrD,KACCU,EAAOC,GAAMA,EAAG,QAAU,IAAI,EAC9BN,EAAIM,IAAO,CACT,IAAK,IAAI,IAAI,SAAS,IAAI,EAC1B,OAAQA,EAAG,KACb,EAAE,EACFK,GAAoB,CACtB,EAGFE,EAAMf,EAAOc,CAAI,EACd,KACCE,EAAqB,CAACC,EAAGC,IAAMD,EAAE,IAAI,OAASC,EAAE,IAAI,IAAI,EACxDhB,EAAI,CAAC,CAAE,IAAAQ,CAAI,IAAMA,CAAG,CACtB,EACG,UAAUjB,CAAS,EAGxB,IAAM0B,EAAY1B,EACf,KACC2B,EAAwB,UAAU,EAClCf,EAAUK,GAAOW,GAAQX,EAAI,IAAI,EAC9B,KACCY,GAAW,KACTC,GAAYb,CAAG,EACRE,GACR,CACH,CACF,EACAC,GAAM,CACR,EAGFb,EACG,KACCwB,GAAOL,CAAS,CAClB,EACG,UAAU,CAAC,CAAE,IAAAT,CAAI,IAAM,CACtB,QAAQ,UAAU,CAAC,EAAG,GAAI,GAAGA,GAAK,CACpC,CAAC,EAGL,IAAMe,EAAM,IAAI,UAChBN,EACG,KACCd,EAAUqB,GAAOA,EAAI,KAAK,CAAC,EAC3BxB,EAAIwB,GAAOD,EAAI,gBAAgBC,EAAK,WAAW,CAAC,CAClD,EACG,UAAUlC,CAAS,EAGxBA,EACG,KACCmC,GAAK,CAAC,CACR,EACG,UAAUC,GAAe,CACxB,QAAWC,IAAY,CAGrB,QACA,sBACA,oBACA,yBAGA,+BACA,gCACA,mCACA,+BACA,2BACA,2BACA,GAAGC,EAAQ,wBAAwB,EAC/B,CAAC,0BAA0B,EAC3B,CAAC,CACP,EAAG,CACD,IAAMC,EAAShC,GAAmB8B,CAAQ,EACpCG,EAASjC,GAAmB8B,EAAUD,CAAW,EAErD,OAAOG,GAAW,aAClB,OAAOC,GAAW,aAElBD,EAAO,YAAYC,CAAM,CAE7B,CACF,CAAC,EAGLxC,EACG,KACCmC,GAAK,CAAC,EACNzB,EAAI,IAAM+B,GAAoB,WAAW,CAAC,EAC1C5B,EAAUI,GAAMyB,EAAY,SAAUzB,CAAE,CAAC,EACzC0B,GAAU1B,GAAM,CACd,IAAM2B,EAASC,EAAE,QAAQ,EACzB,GAAI5B,EAAG,IAAK,CACV,QAAW6B,KAA
Q7B,EAAG,kBAAkB,EACtC2B,EAAO,aAAaE,EAAM7B,EAAG,aAAa6B,CAAI,CAAE,EAClD,OAAA7B,EAAG,YAAY2B,CAAM,EAGd,IAAIG,EAAWC,GAAY,CAChCJ,EAAO,OAAS,IAAMI,EAAS,SAAS,CAC1C,CAAC,CAGH,KACE,QAAAJ,EAAO,YAAc3B,EAAG,YACxBA,EAAG,YAAY2B,CAAM,EACdK,CAEX,CAAC,CACH,EACG,UAAU,EAGf1B,EAAMf,EAAOc,CAAI,EACd,KACCU,GAAOhC,CAAS,CAClB,EACG,UAAU,CAAC,CAAE,IAAAkB,EAAK,OAAAgC,CAAO,IAAM,CAC1BhC,EAAI,MAAQ,CAACgC,EACfC,GAAgBjC,EAAI,IAAI,EAExB,OAAO,SAAS,GAAGgC,GAAA,YAAAA,EAAQ,IAAK,CAAC,CAErC,CAAC,EAGLhD,EACG,KACCkD,GAAU5C,CAAK,EACf6C,GAAa,GAAG,EAChBzB,EAAwB,QAAQ,CAClC,EACG,UAAU,CAAC,CAAE,OAAAsB,CAAO,IAAM,CACzB,QAAQ,aAAaA,EAAQ,EAAE,CACjC,CAAC,EAGL3B,EAAMf,EAAOc,CAAI,EACd,KACCgC,GAAY,EAAG,CAAC,EAChBvC,EAAO,CAAC,CAACU,EAAGC,CAAC,IAAMD,EAAE,IAAI,WAAaC,EAAE,IAAI,QAAQ,EACpDhB,EAAI,CAAC,CAAC,CAAE6C,CAAK,IAAMA,CAAK,CAC1B,EACG,UAAU,CAAC,CAAE,OAAAL,CAAO,IAAM,CACzB,OAAO,SAAS,GAAGA,GAAA,YAAAA,EAAQ,IAAK,CAAC,CACnC,CAAC,CACP,CCzSA,IAAAM,GAAuB,SCAvB,IAAAC,GAAuB,SAsChB,SAASC,GACdC,EAA2BC,EACD,CAC1B,IAAMC,EAAY,IAAI,OAAOF,EAAO,UAAW,KAAK,EAC9CG,EAAY,CAACC,EAAYC,EAAcC,IACpC,GAAGD,4BAA+BC,WAI3C,OAAQC,GAAkB,CACxBA,EAAQA,EACL,QAAQ,gBAAiB,GAAG,EAC5B,KAAK,EAGR,IAAMC,EAAQ,IAAI,OAAO,MAAMR,EAAO,cACpCO,EACG,QAAQ,uBAAwB,MAAM,EACtC,QAAQL,EAAW,GAAG,KACtB,KAAK,EAGV,OAAOO,IACLR,KACI,GAAAS,SAAWD,CAAK,EAChBA,GAED,QAAQD,EAAOL,CAAS,EACxB,QAAQ,8BAA+B,IAAI,CAClD,CACF,CC9BO,SAASQ,GAAiBC,EAAuB,CACtD,OAAOA,EACJ,MAAM,YAAY,EAChB,IAAI,CAACC,EAAOC,IAAUA,EAAQ,EAC3BD,EAAM,QAAQ,+BAAgC,IAAI,EAClDA,CACJ,EACC,KAAK,EAAE,EACT,QAAQ,kCAAmC,EAAE,EAC7C,KAAK,CACV,CCoCO,SAASE,GACdC,EAC+B,CAC/B,OAAOA,EAAQ,OAAS,CAC1B,CASO,SAASC,GACdD,EAC+B,CAC/B,OAAOA,EAAQ,OAAS,CAC1B,CASO,SAASE,GACdF,EACgC,CAChC,OAAOA,EAAQ,OAAS,CAC1B,CCvEA,SAASG,GAAiB,CAAE,OAAAC,EAAQ,KAAAC,CAAK,EAA6B,CAGhED,EAAO,KAAK,SAAW,GAAKA,EAAO,KAAK,KAAO,OACjDA,EAAO,KAAO,CACZE,GAAY,oBAAoB,CAClC,GAGEF,EAAO,YAAc,cACvBA,EAAO,UAAYE,GAAY,yBAAyB,GAQ1D,IAAMC,EAAyB,CAC7B,SANeD,GAAY,wBAAwB,EAClD,MAAM,SAAS,EACf,OAAO,OAAO,EAKf,YAAaE,EAAQ,gBAAgB,CACvC,EAGA,MAAO,CAAE,OAAAJ,EAAQ,KAAAC,EAAM,QAAAE,CAAQ,CACjC,CAkBO,SAASE,GACdC,EAAa
C,EACC,CACd,IAAMP,EAASQ,GAAc,EACvBC,EAAS,IAAI,OAAOH,CAAG,EAGvBI,EAAM,IAAIC,EACVC,EAAMC,GAAYJ,EAAQ,CAAE,IAAAC,CAAI,CAAC,EACpC,KACCI,EAAIC,GAAW,CACb,GAAIC,GAAsBD,CAAO,EAC/B,QAAWE,KAAUF,EAAQ,KAAK,MAChC,QAAWG,KAAYD,EACrBC,EAAS,SAAW,GAAG,IAAI,IAAIA,EAAS,SAAUlB,EAAO,IAAI,IAEnE,OAAOe,CACT,CAAC,EACDI,GAAM,CACR,EAGF,OAAAC,GAAKb,CAAK,EACP,KACCO,EAAIO,IAAS,CACX,OACA,KAAMtB,GAAiBsB,CAAI,CAC7B,EAAwB,CAC1B,EACG,UAAUX,EAAI,KAAK,KAAKA,CAAG,CAAC,EAG1B,CAAE,IAAAA,EAAK,IAAAE,CAAI,CACpB,CCvEO,SAASU,GACd,CAAE,UAAAC,CAAU,EACN,CACN,IAAMC,EAASC,GAAc,EACvBC,EAAYC,GAChB,IAAI,IAAI,mBAAoBH,EAAO,IAAI,CACzC,EACG,KACCI,GAAW,IAAMC,CAAK,CACxB,EAGIC,EAAWJ,EACd,KACCK,EAAIC,GAAY,CACd,GAAM,CAAC,CAAEC,CAAO,EAAIT,EAAO,KAAK,MAAM,aAAa,EACnD,OAAOQ,EAAS,KAAK,CAAC,CAAE,QAAAE,EAAS,QAAAC,CAAQ,IACvCD,IAAYD,GAAWE,EAAQ,SAASF,CAAO,CAChD,GAAKD,EAAS,EACjB,CAAC,CACH,EAGFN,EACG,KACCK,EAAIC,GAAY,IAAI,IAAIA,EAAS,IAAIE,GAAW,CAC9C,GAAG,IAAI,IAAI,MAAMA,EAAQ,WAAYV,EAAO,IAAI,IAChDU,CACF,CAAC,CAAC,CAAC,EACHE,EAAUC,GAAQC,EAAsB,SAAS,KAAM,OAAO,EAC3D,KACCC,EAAOC,GAAM,CAACA,EAAG,SAAW,CAACA,EAAG,OAAO,EACvCC,GAAeX,CAAQ,EACvBM,EAAU,CAAC,CAACI,EAAIP,CAAO,IAAM,CAC3B,GAAIO,EAAG,kBAAkB,QAAS,CAChC,IAAME,EAAKF,EAAG,OAAO,QAAQ,GAAG,EAChC,GAAIE,GAAM,CAACA,EAAG,QAAUL,EAAK,IAAIK,EAAG,IAAI,EAAG,CACzC,IAAMC,EAAMD,EAAG,KAWf,MAAI,CAACF,EAAG,OAAO,QAAQ,aAAa,GAClBH,EAAK,IAAIM,CAAG,IACZV,EACPJ,GAEXW,EAAG,eAAe,EACXI,EAAGD,CAAG,EACf,CACF,CACA,OAAOd,CACT,CAAC,EACDO,EAAUO,GAAO,CACf,GAAM,CAAE,QAAAT,CAAQ,EAAIG,EAAK,IAAIM,CAAG,EAChC,OAAOE,GAAa,IAAI,IAAIF,CAAG,CAAC,EAC7B,KACCZ,EAAIe,GAAW,CAEb,IAAMC,EADWC,GAAY,EACP,KAAK,QAAQxB,EAAO,KAAM,EAAE,EAClD,OAAOsB,EAAQ,SAASC,EAAK,MAAM,GAAG,EAAE,EAAE,EACtC,IAAI,IAAI,MAAMb,KAAWa,IAAQvB,EAAO,IAAI,EAC5C,IAAI,IAAImB,CAAG,CACjB,CAAC,CACH,CACJ,CAAC,CACH,CACF,CACF,EACG,UAAUA,GAAOM,GAAYN,CAAG,CAAC,EAGtCO,EAAc,CAACxB,EAAWI,CAAQ,CAAC,EAChC,UAAU,CAAC,CAACE,EAAUC,CAAO,IAAM,CACpBkB,EAAW,mBAAmB,EACtC,YAAYC,GAAsBpB,EAAUC,CAAO,CAAC,CAC5D,CAAC,EAGHV,EAAU,KAAKa,EAAU,IAAMN,CAAQ,CAAC,EACrC,UAAUG,GAAW,CA5J1B,IAAAoB,EA+JM,IAAIC,EAAW,SAAS,aAAc,c
AAc,EACpD,GAAIA,IAAa,KAAM,CACrB,IAAMC,IAASF,EAAA7B,EAAO,UAAP,YAAA6B,EAAgB,UAAW,SAC1CC,EAAW,CAACrB,EAAQ,QAAQ,SAASsB,CAAM,EAG3C,SAAS,aAAcD,EAAU,cAAc,CACjD,CAGA,GAAIA,EACF,QAAWE,KAAWC,GAAqB,UAAU,EACnDD,EAAQ,OAAS,EACvB,CAAC,CACL,CCtFO,SAASE,GACdC,EAAsB,CAAE,IAAAC,CAAI,EACH,CACzB,IAAMC,GAAK,+BAAU,YAAaC,GAG5B,CAAE,aAAAC,CAAa,EAAIC,GAAY,EACjCD,EAAa,IAAI,GAAG,GACtBE,GAAU,SAAU,EAAI,EAG1B,IAAMC,EAASN,EACZ,KACCO,EAAOC,EAAoB,EAC3BC,GAAK,CAAC,EACNC,EAAI,IAAMP,EAAa,IAAI,GAAG,GAAK,EAAE,CACvC,EAGFQ,GAAY,QAAQ,EACjB,KACCJ,EAAOK,GAAU,CAACA,CAAM,EACxBH,GAAK,CAAC,CACR,EACG,UAAU,IAAM,CACf,IAAMI,EAAM,IAAI,IAAI,SAAS,IAAI,EACjCA,EAAI,aAAa,OAAO,GAAG,EAC3B,QAAQ,aAAa,CAAC,EAAG,GAAI,GAAGA,GAAK,CACvC,CAAC,EAGLP,EAAO,UAAUQ,GAAS,CACpBA,IACFf,EAAG,MAAQe,EACXf,EAAG,MAAM,EAEb,CAAC,EAGD,IAAMgB,EAASC,GAAkBjB,CAAE,EAC7BkB,EAASC,EACbC,EAAUpB,EAAI,OAAO,EACrBoB,EAAUpB,EAAI,OAAO,EAAE,KAAKqB,GAAM,CAAC,CAAC,EACpCd,CACF,EACG,KACCI,EAAI,IAAMT,EAAGF,EAAG,KAAK,CAAC,EACtBsB,EAAU,EAAE,EACZC,EAAqB,CACvB,EAGF,OAAOC,EAAc,CAACN,EAAQF,CAAM,CAAC,EAClC,KACCL,EAAI,CAAC,CAACI,EAAOU,CAAK,KAAO,CAAE,MAAAV,EAAO,MAAAU,CAAM,EAAE,EAC1CC,EAAY,CAAC,CACf,CACJ,CAUO,SAASC,GACd3B,EAAsB,CAAE,IAAA4B,EAAK,IAAA3B,CAAI,EACqB,CACtD,IAAM4B,EAAQ,IAAIC,EACZC,EAAQF,EAAM,KAAKG,GAAS,CAAC,CAAC,EAGpC,OAAAH,EACG,KACCI,EAAwB,OAAO,EAC/BtB,EAAI,CAAC,CAAE,MAAAI,CAAM,KAA2B,CACtC,OACA,KAAMA,CACR,EAAE,CACJ,EACG,UAAUa,EAAI,KAAK,KAAKA,CAAG,CAAC,EAGjCC,EACG,KACCI,EAAwB,OAAO,CACjC,EACG,UAAU,CAAC,CAAE,MAAAR,CAAM,IAAM,CACpBA,GACFnB,GAAU,SAAUmB,CAAK,EACzBzB,EAAG,YAAc,IAEjBA,EAAG,YAAckC,GAAY,oBAAoB,CAErD,CAAC,EAGLd,EAAUpB,EAAG,KAAO,OAAO,EACxB,KACCmC,GAAUJ,CAAK,CACjB,EACG,UAAU,IAAM/B,EAAG,MAAM,CAAC,EAGxBD,GAAiBC,EAAI,CAAE,IAAA4B,EAAK,IAAA3B,CAAI,CAAC,EACrC,KACCmC,EAAIC,GAASR,EAAM,KAAKQ,CAAK,CAAC,EAC9BC,EAAS,IAAMT,EAAM,SAAS,CAAC,EAC/BlB,EAAI0B,GAAUE,EAAA,CAAE,IAAKvC,GAAOqC,EAAQ,EACpCG,GAAM,CACR,CACJ,CCrHO,SAASC,GACdC,EAAiB,CAAE,IAAAC,CAAI,EAAiB,CAAE,OAAAC,CAAO,EACZ,CACrC,IAAMC,EAAQ,IAAIC,EACZC,EAAYC,GAAqBN,EAAG,aAAc,EACrD,KACCO,EAAO,OAAO,CAChB,EAGIC,EAAOC,EAAW,wBAAyBT
,CAAE,EAC7CU,EAAOD,EAAW,uBAAwBT,CAAE,EAG5CW,EAASV,EACZ,KACCM,EAAOK,EAAoB,EAC3BC,GAAK,CAAC,CACR,EAGF,OAAAV,EACG,KACCW,GAAeZ,CAAM,EACrBa,GAAUJ,CAAM,CAClB,EACG,UAAU,CAAC,CAAC,CAAE,MAAAK,CAAM,EAAG,CAAE,MAAAC,CAAM,CAAC,IAAM,CACrC,GAAIA,EACF,OAAQD,EAAM,OAAQ,CAGpB,IAAK,GACHR,EAAK,YAAcU,GAAY,oBAAoB,EACnD,MAGF,IAAK,GACHV,EAAK,YAAcU,GAAY,mBAAmB,EAClD,MAGF,QACEV,EAAK,YAAcU,GACjB,sBACAC,GAAMH,EAAM,MAAM,CACpB,CACJ,MAEAR,EAAK,YAAcU,GAAY,2BAA2B,CAE9D,CAAC,EAGLf,EACG,KACCiB,EAAI,IAAMV,EAAK,UAAY,EAAE,EAC7BW,EAAU,CAAC,CAAE,MAAAL,CAAM,IAAMM,EACvBC,EAAG,GAAGP,EAAM,MAAM,EAAG,EAAE,CAAC,EACxBO,EAAG,GAAGP,EAAM,MAAM,EAAE,CAAC,EAClB,KACCQ,GAAY,CAAC,EACbC,GAAQpB,CAAS,EACjBgB,EAAU,CAAC,CAACK,CAAK,IAAMA,CAAK,CAC9B,CACJ,CAAC,CACH,EACG,UAAUC,GAAUjB,EAAK,YACxBkB,GAAuBD,CAAM,CAC/B,CAAC,EAGW1B,EACb,KACCM,EAAOsB,EAAqB,EAC5BC,EAAI,CAAC,CAAE,KAAAC,CAAK,IAAMA,CAAI,CACxB,EAIC,KACCX,EAAIY,GAAS7B,EAAM,KAAK6B,CAAK,CAAC,EAC9BC,EAAS,IAAM9B,EAAM,SAAS,CAAC,EAC/B2B,EAAIE,GAAUE,EAAA,CAAE,IAAKlC,GAAOgC,EAAQ,CACtC,CACJ,CC1FO,SAASG,GACdC,EAAkB,CAAE,OAAAC,CAAO,EACF,CACzB,OAAOA,EACJ,KACCC,EAAI,CAAC,CAAE,MAAAC,CAAM,IAAM,CACjB,IAAMC,EAAMC,GAAY,EACxB,OAAAD,EAAI,KAAO,GACXA,EAAI,aAAa,OAAO,GAAG,EAC3BA,EAAI,aAAa,IAAI,IAAKD,CAAK,EACxB,CAAE,IAAAC,CAAI,CACf,CAAC,CACH,CACJ,CAUO,SAASE,GACdC,EAAuBC,EACa,CACpC,IAAMC,EAAQ,IAAIC,EAClB,OAAAD,EAAM,UAAU,CAAC,CAAE,IAAAL,CAAI,IAAM,CAC3BG,EAAG,aAAa,sBAAuBA,EAAG,IAAI,EAC9CA,EAAG,KAAO,GAAGH,GACf,CAAC,EAGDO,EAAUJ,EAAI,OAAO,EAClB,UAAUK,GAAMA,EAAG,eAAe,CAAC,EAG/Bb,GAAiBQ,EAAIC,CAAO,EAChC,KACCK,EAAIC,GAASL,EAAM,KAAKK,CAAK,CAAC,EAC9BC,EAAS,IAAMN,EAAM,SAAS,CAAC,EAC/BP,EAAIY,GAAUE,EAAA,CAAE,IAAKT,GAAOO,EAAQ,CACtC,CACJ,CCtCO,SAASG,GACdC,EAAiB,CAAE,IAAAC,CAAI,EAAiB,CAAE,UAAAC,CAAU,EACd,CACtC,IAAMC,EAAQ,IAAIC,EAGZC,EAASC,GAAoB,cAAc,EAC3CC,EAASC,EACbC,EAAUJ,EAAO,SAAS,EAC1BI,EAAUJ,EAAO,OAAO,CAC1B,EACG,KACCK,GAAUC,EAAc,EACxBC,EAAI,IAAMP,EAAM,KAAK,EACrBQ,EAAqB,CACvB,EAGF,OAAAV,EACG,KACCW,GAAkBP,CAAM,EACxBK,EAAI,CAAC,CAAC,CAAE,YAAAG,CAAY,EAAGC,CAAK,IAAM,CAChC,IAAMC,EAAQD,EAAM,MAAM,UAAU,EACpC,IAAID,GAAA,YAA
AA,EAAa,SAAUE,EAAMA,EAAM,OAAS,GAAI,CAClD,IAAMC,EAAOH,EAAYA,EAAY,OAAS,GAC1CG,EAAK,WAAWD,EAAMA,EAAM,OAAS,EAAE,IACzCA,EAAMA,EAAM,OAAS,GAAKC,EAC9B,MACED,EAAM,OAAS,EAEjB,OAAOA,CACT,CAAC,CACH,EACG,UAAUA,GAASjB,EAAG,UAAYiB,EAChC,KAAK,EAAE,EACP,QAAQ,MAAO,QAAQ,CAC1B,EAGJf,EACG,KACCiB,EAAO,CAAC,CAAE,KAAAC,CAAK,IAAMA,IAAS,QAAQ,CACxC,EACG,UAAUC,GAAO,CAChB,OAAQA,EAAI,KAAM,CAGhB,IAAK,aAEDrB,EAAG,UAAU,QACbK,EAAM,iBAAmBA,EAAM,MAAM,SAErCA,EAAM,MAAQL,EAAG,WACnB,KACJ,CACF,CAAC,EAGWC,EACb,KACCkB,EAAOG,EAAqB,EAC5BV,EAAI,CAAC,CAAE,KAAAW,CAAK,IAAMA,CAAI,CACxB,EAIC,KACCC,EAAIC,GAAStB,EAAM,KAAKsB,CAAK,CAAC,EAC9BC,EAAS,IAAMvB,EAAM,SAAS,CAAC,EAC/BS,EAAI,KAAO,CAAE,IAAKZ,CAAG,EAAE,CACzB,CACJ,CC9CO,SAAS2B,GACdC,EAAiB,CAAE,OAAAC,EAAQ,UAAAC,CAAU,EACN,CAC/B,IAAMC,EAASC,GAAc,EAC7B,GAAI,CACF,IAAMC,GAAM,+BAAU,SAAUF,EAAO,OACjCG,EAASC,GAAkBF,EAAKJ,CAAM,EAGtCO,EAASC,GAAoB,eAAgBT,CAAE,EAC/CU,EAASD,GAAoB,gBAAiBT,CAAE,EAGhD,CAAE,IAAAW,EAAK,IAAAC,CAAI,EAAIN,EACrBK,EACG,KACCE,EAAOC,EAAoB,EAC3BC,GAAOH,EAAI,KAAKC,EAAOG,EAAoB,CAAC,CAAC,EAC7CC,GAAK,CAAC,CACR,EACG,UAAUN,EAAI,KAAK,KAAKA,CAAG,CAAC,EAGjCT,EACG,KACCW,EAAO,CAAC,CAAE,KAAAK,CAAK,IAAMA,IAAS,QAAQ,CACxC,EACG,UAAUC,GAAO,CAChB,IAAMC,EAASC,GAAiB,EAChC,OAAQF,EAAI,KAAM,CAGhB,IAAK,QACH,GAAIC,IAAWZ,EAAO,CACpB,IAAMc,EAAU,IAAI,IACpB,QAAWC,KAAUC,EACnB,sBAAuBd,CACzB,EAAG,CACD,IAAMe,EAAUF,EAAO,kBACvBD,EAAQ,IAAIC,EAAQ,WAClBE,EAAQ,aAAa,eAAe,CACtC,CAAC,CACH,CAGA,GAAIH,EAAQ,KAAM,CAChB,GAAM,CAAC,CAACI,CAAI,CAAC,EAAI,CAAC,GAAGJ,CAAO,EAAE,KAAK,CAAC,CAAC,CAAEK,CAAC,EAAG,CAAC,CAAEC,CAAC,IAAMA,EAAID,CAAC,EAC1DD,EAAK,MAAM,CACb,CAGAP,EAAI,MAAM,CACZ,CACA,MAGF,IAAK,SACL,IAAK,MACHU,GAAU,SAAU,EAAK,EACzBrB,EAAM,KAAK,EACX,MAGF,IAAK,UACL,IAAK,YACH,GAAI,OAAOY,GAAW,YACpBZ,EAAM,MAAM,MACP,CACL,IAAMsB,EAAM,CAACtB,EAAO,GAAGgB,EACrB,wDACAd,CACF,CAAC,EACKqB,EAAI,KAAK,IAAI,GACjB,KAAK,IAAI,EAAGD,EAAI,QAAQV,CAAM,CAAC,EAAIU,EAAI,QACrCX,EAAI,OAAS,UAAY,GAAK,IAE9BW,EAAI,MAAM,EACdA,EAAIC,GAAG,MAAM,CACf,CAGAZ,EAAI,MAAM,EACV,MAGF,QACMX,IAAUa,GAAiB,GAC7Bb,EAAM,MAAM,CAClB,CACF,CAAC,EAGLN,EACG,KACCW,EAAO,CAA
C,CAAE,KAAAK,CAAK,IAAMA,IAAS,QAAQ,CACxC,EACG,UAAUC,GAAO,CAChB,OAAQA,EAAI,KAAM,CAGhB,IAAK,IACL,IAAK,IACL,IAAK,IACHX,EAAM,MAAM,EACZA,EAAM,OAAO,EAGbW,EAAI,MAAM,EACV,KACJ,CACF,CAAC,EAGL,IAAMa,EAAUC,GAAiBzB,EAAOF,CAAM,EACxC4B,EAAUC,GAAkBzB,EAAQJ,EAAQ,CAAE,OAAA0B,CAAO,CAAC,EAC5D,OAAOI,EAAMJ,EAAQE,CAAO,EACzB,KACCG,GAGE,GAAGC,GAAqB,eAAgBtC,CAAE,EACvC,IAAIuC,GAASC,GAAiBD,EAAO,CAAE,OAAAP,CAAO,CAAC,CAAC,EAGnD,GAAGM,GAAqB,iBAAkBtC,CAAE,EACzC,IAAIuC,GAASE,GAAmBF,EAAOjC,EAAQ,CAAE,UAAAJ,CAAU,CAAC,CAAC,CAClE,CACF,CAGJ,OAASwC,EAAP,CACA,OAAA1C,EAAG,OAAS,GACL2C,EACT,CACF,CCtKO,SAASC,GACdC,EAAiB,CAAE,OAAAC,EAAQ,UAAAC,CAAU,EACG,CACxC,OAAOC,EAAc,CACnBF,EACAC,EACG,KACCE,EAAUC,GAAY,CAAC,EACvBC,EAAOC,GAAO,CAAC,CAACA,EAAI,aAAa,IAAI,GAAG,CAAC,CAC3C,CACJ,CAAC,EACE,KACCC,EAAI,CAAC,CAACC,EAAOF,CAAG,IAAMG,GAAuBD,EAAM,OAAQ,EAAI,EAC7DF,EAAI,aAAa,IAAI,GAAG,CAC1B,CAAC,EACDC,EAAIG,GAAM,CA1FhB,IAAAC,EA2FQ,IAAMC,EAAQ,IAAI,IAGZC,EAAK,SAAS,mBAAmBd,EAAI,WAAW,SAAS,EAC/D,QAASe,EAAOD,EAAG,SAAS,EAAGC,EAAMA,EAAOD,EAAG,SAAS,EACtD,IAAIF,EAAAG,EAAK,gBAAL,MAAAH,EAAoB,aAAc,CACpC,IAAMI,EAAWD,EAAK,YAChBE,EAAWN,EAAGK,CAAQ,EACxBC,EAAS,OAASD,EAAS,QAC7BH,EAAM,IAAIE,EAAmBE,CAAQ,CACzC,CAIF,OAAW,CAACF,EAAMG,CAAI,IAAKL,EAAO,CAChC,GAAM,CAAE,WAAAM,CAAW,EAAIC,EAAE,OAAQ,KAAMF,CAAI,EAC3CH,EAAK,YAAY,GAAG,MAAM,KAAKI,CAAU,CAAC,CAC5C,CAGA,MAAO,CAAE,IAAKnB,EAAI,MAAAa,CAAM,CAC1B,CAAC,CACH,CACJ,CCbO,SAASQ,GACdC,EAAiB,CAAE,UAAAC,EAAW,MAAAC,CAAM,EACf,CACrB,IAAMC,EAASH,EAAG,cACZI,EACJD,EAAO,UACPA,EAAO,cAAe,UAGxB,OAAOE,EAAc,CAACH,EAAOD,CAAS,CAAC,EACpC,KACCK,EAAI,CAAC,CAAC,CAAE,OAAAC,EAAQ,OAAAC,CAAO,EAAG,CAAE,OAAQ,CAAE,EAAAC,CAAE,CAAE,CAAC,KACzCD,EAASA,EACL,KAAK,IAAIJ,EAAQ,KAAK,IAAI,EAAGK,EAAIF,CAAM,CAAC,EACxCH,EACG,CACL,OAAAI,EACA,OAAQC,GAAKF,EAASH,CACxB,EACD,EACDM,EAAqB,CAACC,EAAGC,IACvBD,EAAE,SAAWC,EAAE,QACfD,EAAE,SAAWC,EAAE,MAChB,CACH,CACJ,CAuBO,SAASC,GACdb,EAAiBc,EACe,CADf,IAAAC,EAAAD,EAAE,SAAAE,CAtJrB,EAsJmBD,EAAcE,EAAAC,GAAdH,EAAc,CAAZ,YAEnB,IAAMI,EAAQC,EAAW,0BAA2BpB,CAAE,EAChD,CAAE,EAAAS,CAAE,EAAIY,GAAiBF,CAAK,EACpC,OAAOG,EAAM,IAAM,CA
CjB,IAAMC,EAAQ,IAAIC,EAClB,OAAAD,EACG,KACCE,GAAU,EAAGC,EAAuB,EACpCC,GAAeX,CAAO,CACxB,EACG,UAAU,CAGT,KAAK,CAAC,CAAE,OAAAR,CAAO,EAAG,CAAE,OAAQD,CAAO,CAAC,EAAG,CACrCY,EAAM,MAAM,OAAS,GAAGX,EAAS,EAAIC,MACrCT,EAAG,MAAM,IAAY,GAAGO,KAC1B,EAGA,UAAW,CACTY,EAAM,MAAM,OAAS,GACrBnB,EAAG,MAAM,IAAY,EACvB,CACF,CAAC,EAGLuB,EACG,KACCK,GAAUF,EAAuB,EACjCG,GAAK,CAAC,CACR,EACG,UAAU,IAAM,CACf,QAAWC,KAAQC,EAAY,8BAA+B/B,CAAE,EAAG,CACjE,IAAMgC,EAAYC,GAAoBH,CAAI,EAC1C,GAAI,OAAOE,GAAc,YAAa,CACpC,IAAMzB,EAASuB,EAAK,UAAYE,EAAU,UACpC,CAAE,OAAAxB,CAAO,EAAI0B,GAAeF,CAAS,EAC3CA,EAAU,SAAS,CACjB,IAAKzB,EAASC,EAAS,CACzB,CAAC,CACH,CACF,CACF,CAAC,EAGET,GAAaC,EAAIiB,CAAO,EAC5B,KACCkB,EAAIC,GAASb,EAAM,KAAKa,CAAK,CAAC,EAC9BC,EAAS,IAAMd,EAAM,SAAS,CAAC,EAC/BjB,EAAI8B,GAAUE,EAAA,CAAE,IAAKtC,GAAOoC,EAAQ,CACtC,CACJ,CAAC,CACH,CChJO,SAASG,GACdC,EAAcC,EACW,CACzB,GAAI,OAAOA,GAAS,YAAa,CAC/B,IAAMC,EAAM,gCAAgCF,KAAQC,IACpD,OAAOE,GAGLC,GAAqB,GAAGF,mBAAqB,EAC1C,KACCG,GAAW,IAAMC,CAAK,EACtBC,EAAIC,IAAY,CACd,QAASA,EAAQ,QACnB,EAAE,EACFC,GAAe,CAAC,CAAC,CACnB,EAGFL,GAAkBF,CAAG,EAClB,KACCG,GAAW,IAAMC,CAAK,EACtBC,EAAIG,IAAS,CACX,MAAOA,EAAK,iBACZ,MAAOA,EAAK,WACd,EAAE,EACFD,GAAe,CAAC,CAAC,CACnB,CACJ,EACG,KACCF,EAAI,CAAC,CAACC,EAASE,CAAI,IAAOC,IAAA,GAAKH,GAAYE,EAAO,CACpD,CAGJ,KAAO,CACL,IAAMR,EAAM,gCAAgCF,IAC5C,OAAOI,GAAkBF,CAAG,EACzB,KACCK,EAAIG,IAAS,CACX,aAAcA,EAAK,YACrB,EAAE,EACFD,GAAe,CAAC,CAAC,CACnB,CACJ,CACF,CCvDO,SAASG,GACdC,EAAcC,EACW,CACzB,IAAMC,EAAM,WAAWF,qBAAwB,mBAAmBC,CAAO,IACzE,OAAOE,GAA2BD,CAAG,EAClC,KACCE,GAAW,IAAMC,CAAK,EACtBC,EAAI,CAAC,CAAE,WAAAC,EAAY,YAAAC,CAAY,KAAO,CACpC,MAAOD,EACP,MAAOC,CACT,EAAE,EACFC,GAAe,CAAC,CAAC,CACnB,CACJ,CCOO,SAASC,GACdC,EACyB,CACzB,GAAM,CAACC,CAAI,EAAID,EAAI,MAAM,mBAAmB,GAAK,CAAC,EAClD,OAAQC,EAAK,YAAY,EAAG,CAG1B,IAAK,SACH,GAAM,CAAC,CAAEC,EAAMC,CAAI,EAAIH,EAAI,MAAM,qCAAqC,EACtE,OAAOI,GAA2BF,EAAMC,CAAI,EAG9C,IAAK,SACH,GAAM,CAAC,CAAEE,EAAMC,CAAI,EAAIN,EAAI,MAAM,oCAAoC,EACrE,OAAOO,GAA2BF,EAAMC,CAAI,EAG9C,QACE,OAAOE,CACX,CACF,CCpBA,IAAIC,GAgBG,SAASC,GACdC,EACoB,CACpB,OAAOF,QAAWG,EAAM,IAAM,CAC5B
,IAAMC,EAAS,SAAsB,WAAY,cAAc,EAC/D,GAAIA,EACF,OAAOC,EAAGD,CAAM,EAKhB,GADYE,GAAqB,SAAS,EAClC,OAAQ,CACd,IAAMC,EAAU,SAA0B,WAAW,EACrD,GAAI,EAAEA,GAAWA,EAAQ,QACvB,OAAOC,CACX,CAGA,OAAOC,GAAiBP,EAAG,IAAI,EAC5B,KACCQ,EAAIC,GAAS,SAAS,WAAYA,EAAO,cAAc,CAAC,CAC1D,CAEN,CAAC,EACE,KACCC,GAAW,IAAMJ,CAAK,EACtBK,EAAOF,GAAS,OAAO,KAAKA,CAAK,EAAE,OAAS,CAAC,EAC7CG,EAAIH,IAAU,CAAE,MAAAA,CAAM,EAAE,EACxBI,EAAY,CAAC,CACf,EACJ,CASO,SAASC,GACdd,EAC+B,CAC/B,IAAMe,EAAQC,EAAW,uBAAwBhB,CAAE,EACnD,OAAOC,EAAM,IAAM,CACjB,IAAMgB,EAAQ,IAAIC,EAClB,OAAAD,EAAM,UAAU,CAAC,CAAE,MAAAR,CAAM,IAAM,CAC7BM,EAAM,YAAYI,GAAkBV,CAAK,CAAC,EAC1CM,EAAM,UAAU,IAAI,+BAA+B,CACrD,CAAC,EAGMhB,GAAYC,CAAE,EAClB,KACCQ,EAAIY,GAASH,EAAM,KAAKG,CAAK,CAAC,EAC9BC,EAAS,IAAMJ,EAAM,SAAS,CAAC,EAC/BL,EAAIQ,GAAUE,EAAA,CAAE,IAAKtB,GAAOoB,EAAQ,CACtC,CACJ,CAAC,CACH,CCtDO,SAASG,GACdC,EAAiB,CAAE,UAAAC,EAAW,QAAAC,CAAQ,EACpB,CAClB,OAAOC,GAAiB,SAAS,IAAI,EAClC,KACCC,EAAU,IAAMC,GAAgBL,EAAI,CAAE,QAAAE,EAAS,UAAAD,CAAU,CAAC,CAAC,EAC3DK,EAAI,CAAC,CAAE,OAAQ,CAAE,EAAAC,CAAE,CAAE,KACZ,CACL,OAAQA,GAAK,EACf,EACD,EACDC,EAAwB,QAAQ,CAClC,CACJ,CAaO,SAASC,GACdT,EAAiBU,EACY,CAC7B,OAAOC,EAAM,IAAM,CACjB,IAAMC,EAAQ,IAAIC,EAClB,OAAAD,EAAM,UAAU,CAGd,KAAK,CAAE,OAAAE,CAAO,EAAG,CACfd,EAAG,OAASc,CACd,EAGA,UAAW,CACTd,EAAG,OAAS,EACd,CACF,CAAC,GAICe,EAAQ,wBAAwB,EAC5BC,EAAG,CAAE,OAAQ,EAAM,CAAC,EACpBjB,GAAUC,EAAIU,CAAO,GAExB,KACCO,EAAIC,GAASN,EAAM,KAAKM,CAAK,CAAC,EAC9BC,EAAS,IAAMP,EAAM,SAAS,CAAC,EAC/BN,EAAIY,GAAUE,EAAA,CAAE,IAAKpB,GAAOkB,EAAQ,CACtC,CACJ,CAAC,CACH,CCpBO,SAASG,GACdC,EAAiB,CAAE,UAAAC,EAAW,QAAAC,CAAQ,EACT,CAC7B,IAAMC,EAAQ,IAAI,IAGZC,EAAUC,EAA+B,cAAeL,CAAE,EAChE,QAAWM,KAAUF,EAAS,CAC5B,IAAMG,EAAK,mBAAmBD,EAAO,KAAK,UAAU,CAAC,CAAC,EAChDE,EAASC,GAAmB,QAAQF,KAAM,EAC5C,OAAOC,GAAW,aACpBL,EAAM,IAAIG,EAAQE,CAAM,CAC5B,CAGA,IAAME,EAAUR,EACb,KACCS,EAAwB,QAAQ,EAChCC,EAAI,CAAC,CAAE,OAAAC,CAAO,IAAM,CAClB,IAAMC,EAAOC,GAAoB,MAAM,EACjCC,EAAOC,EAAW,wBAAyBH,CAAI,EACrD,OAAOD,EAAS,IACdG,EAAK,UACLF,EAAK,UAET,CAAC,EACDI,GAAM,CACR,EAgFF,OA7EmBC,GAAiB,SAAS,IAAI,EAC9C,KACCR,EAAwB,QAAQ,EAGhCS,EAA
UC,GAAQC,EAAM,IAAM,CAC5B,IAAIC,EAA4B,CAAC,EACjC,OAAOC,EAAG,CAAC,GAAGrB,CAAK,EAAE,OAAO,CAACsB,EAAO,CAACnB,EAAQE,CAAM,IAAM,CACvD,KAAOe,EAAK,QACGpB,EAAM,IAAIoB,EAAKA,EAAK,OAAS,EAAE,EACnC,SAAWf,EAAO,SACzBe,EAAK,IAAI,EAOb,IAAIG,EAASlB,EAAO,UACpB,KAAO,CAACkB,GAAUlB,EAAO,eACvBA,EAASA,EAAO,cAChBkB,EAASlB,EAAO,UAIlB,OAAOiB,EAAM,IACX,CAAC,GAAGF,EAAO,CAAC,GAAGA,EAAMjB,CAAM,CAAC,EAAE,QAAQ,EACtCoB,CACF,CACF,EAAG,IAAI,GAAkC,CAAC,CAC5C,CAAC,EACE,KAGCd,EAAIa,GAAS,IAAI,IAAI,CAAC,GAAGA,CAAK,EAAE,KAAK,CAAC,CAAC,CAAEE,CAAC,EAAG,CAAC,CAAEC,CAAC,IAAMD,EAAIC,CAAC,CAAC,CAAC,EAC9DC,GAAkBnB,CAAO,EAGzBU,EAAU,CAAC,CAACK,EAAOK,CAAM,IAAM7B,EAC5B,KACC8B,GAAK,CAAC,CAACC,EAAMC,CAAI,EAAG,CAAE,OAAQ,CAAE,EAAAC,CAAE,EAAG,KAAAC,CAAK,IAAM,CAC9C,IAAMC,EAAOF,EAAIC,EAAK,QAAU,KAAK,MAAMd,EAAK,MAAM,EAGtD,KAAOY,EAAK,QAAQ,CAClB,GAAM,CAAC,CAAEP,CAAM,EAAIO,EAAK,GACxB,GAAIP,EAASI,EAASI,GAAKE,EACzBJ,EAAO,CAAC,GAAGA,EAAMC,EAAK,MAAM,CAAE,MAE9B,MAEJ,CAGA,KAAOD,EAAK,QAAQ,CAClB,GAAM,CAAC,CAAEN,CAAM,EAAIM,EAAKA,EAAK,OAAS,GACtC,GAAIN,EAASI,GAAUI,GAAK,CAACE,EAC3BH,EAAO,CAACD,EAAK,IAAI,EAAI,GAAGC,CAAI,MAE5B,MAEJ,CAGA,MAAO,CAACD,EAAMC,CAAI,CACpB,EAAG,CAAC,CAAC,EAAG,CAAC,GAAGR,CAAK,CAAC,CAAC,EACnBY,EAAqB,CAACV,EAAGC,IACvBD,EAAE,KAAOC,EAAE,IACXD,EAAE,KAAOC,EAAE,EACZ,CACH,CACF,CACF,CACF,CACF,EAIC,KACChB,EAAI,CAAC,CAACoB,EAAMC,CAAI,KAAO,CACrB,KAAMD,EAAK,IAAI,CAAC,CAACT,CAAI,IAAMA,CAAI,EAC/B,KAAMU,EAAK,IAAI,CAAC,CAACV,CAAI,IAAMA,CAAI,CACjC,EAAE,EAGFe,EAAU,CAAE,KAAM,CAAC,EAAG,KAAM,CAAC,CAAE,CAAC,EAChCC,GAAY,EAAG,CAAC,EAChB3B,EAAI,CAAC,CAAC,EAAGgB,CAAC,IAGJ,EAAE,KAAK,OAASA,EAAE,KAAK,OAClB,CACL,KAAMA,EAAE,KAAK,MAAM,KAAK,IAAI,EAAG,EAAE,KAAK,OAAS,CAAC,EAAGA,EAAE,KAAK,MAAM,EAChE,KAAM,CAAC,CACT,EAIO,CACL,KAAMA,EAAE,KAAK,MAAM,EAAE,EACrB,KAAMA,EAAE,KAAK,MAAM,EAAGA,EAAE,KAAK,OAAS,EAAE,KAAK,MAAM,CACrD,CAEH,CACH,CACJ,CAYO,SAASY,GACdxC,EAAiB,CAAE,UAAAC,EAAW,QAAAC,EAAS,QAAAuC,CAAQ,EACP,CACxC,OAAOnB,EAAM,IAAM,CACjB,IAAMoB,EAAQ,IAAIC,EACZC,EAAQF,EAAM,KAAKG,GAAS,CAAC,CAAC,EAoBpC,GAnBAH,EAAM,UAAU,CAAC,CAAE,KAAAV,EAAM,KAAAC,CAAK,IAAM,CAGlC,OAAW,CAAC
3B,CAAM,IAAK2B,EACrB3B,EAAO,UAAU,OAAO,sBAAsB,EAC9CA,EAAO,UAAU,OAAO,sBAAsB,EAIhD,OAAW,CAACmB,EAAO,CAACnB,CAAM,CAAC,IAAK0B,EAAK,QAAQ,EAC3C1B,EAAO,UAAU,IAAI,sBAAsB,EAC3CA,EAAO,UAAU,OACf,uBACAmB,IAAUO,EAAK,OAAS,CAC1B,CAEJ,CAAC,EAGGc,EAAQ,YAAY,EAAG,CAGzB,IAAMC,EAAUC,EACd/C,EAAU,KAAKgD,GAAa,CAAC,EAAGrC,EAAI,IAAG,EAAY,CAAC,EACpDX,EAAU,KAAKgD,GAAa,GAAG,EAAGrC,EAAI,IAAM,QAAiB,CAAC,CAChE,EAGA8B,EACG,KACCQ,EAAO,CAAC,CAAE,KAAAlB,CAAK,IAAMA,EAAK,OAAS,CAAC,EACpCmB,GAAeJ,CAAO,CACxB,EACG,UAAU,CAAC,CAAC,CAAE,KAAAf,CAAK,EAAGoB,CAAQ,IAAM,CACnC,GAAM,CAAC9C,CAAM,EAAI0B,EAAKA,EAAK,OAAS,GACpC,GAAI1B,EAAO,aAAc,CAGvB,IAAM+C,EAAYC,GAAoBhD,CAAM,EAC5C,GAAI,OAAO+C,GAAc,YAAa,CACpC,IAAM3B,EAASpB,EAAO,UAAY+C,EAAU,UACtC,CAAE,OAAAxC,CAAO,EAAI0C,GAAeF,CAAS,EAC3CA,EAAU,SAAS,CACjB,IAAK3B,EAASb,EAAS,EACvB,SAAAuC,CACF,CAAC,CACH,CACF,CACF,CAAC,CACP,CAGA,OAAIN,EAAQ,qBAAqB,GAC/B7C,EACG,KACCuD,GAAUZ,CAAK,EACfjC,EAAwB,QAAQ,EAChCsC,GAAa,GAAG,EAChBQ,GAAK,CAAC,EACND,GAAUf,EAAQ,KAAKgB,GAAK,CAAC,CAAC,CAAC,EAC/BC,GAAO,CAAE,MAAO,GAAI,CAAC,EACrBP,GAAeT,CAAK,CACtB,EACG,UAAU,CAAC,CAAC,CAAE,CAAE,KAAAV,CAAK,CAAC,IAAM,CAC3B,IAAM2B,EAAMC,GAAY,EAGlBtD,EAAS0B,EAAKA,EAAK,OAAS,GAClC,GAAI1B,GAAUA,EAAO,OAAQ,CAC3B,GAAM,CAACuD,CAAM,EAAIvD,EACX,CAAE,KAAAwD,CAAK,EAAI,IAAI,IAAID,EAAO,IAAI,EAChCF,EAAI,OAASG,IACfH,EAAI,KAAOG,EACX,QAAQ,aAAa,CAAC,EAAG,GAAI,GAAGH,GAAK,EAIzC,MACEA,EAAI,KAAO,GACX,QAAQ,aAAa,CAAC,EAAG,GAAI,GAAGA,GAAK,CAEzC,CAAC,EAGA5D,GAAqBC,EAAI,CAAE,UAAAC,EAAW,QAAAC,CAAQ,CAAC,EACnD,KACC6D,EAAIC,GAAStB,EAAM,KAAKsB,CAAK,CAAC,EAC9BC,EAAS,IAAMvB,EAAM,SAAS,CAAC,EAC/B9B,EAAIoD,GAAUE,EAAA,CAAE,IAAKlE,GAAOgE,EAAQ,CACtC,CACJ,CAAC,CACH,CCpRO,SAASG,GACdC,EAAkB,CAAE,UAAAC,EAAW,MAAAC,EAAO,QAAAC,CAAQ,EACvB,CAGvB,IAAMC,EAAaH,EAChB,KACCI,EAAI,CAAC,CAAE,OAAQ,CAAE,EAAAC,CAAE,CAAE,IAAMA,CAAC,EAC5BC,GAAY,EAAG,CAAC,EAChBF,EAAI,CAAC,CAACG,EAAGC,CAAC,IAAMD,EAAIC,GAAKA,EAAI,CAAC,EAC9BC,EAAqB,CACvB,EAGIC,EAAUT,EACb,KACCG,EAAI,CAAC,CAAE,OAAAO,CAAO,IAAMA,CAAM,CAC5B,EAGF,OAAOC,EAAc,CAACF,EAASP,CAAU,CAAC,EACvC,KACCC,EAAI,CAAC,CAACO,EAAQE,CAAS,IAAM,EAAE
F,GAAUE,EAAU,EACnDJ,EAAqB,EACrBK,GAAUZ,EAAQ,KAAKa,GAAK,CAAC,CAAC,CAAC,EAC/BC,GAAQ,EAAI,EACZC,GAAO,CAAE,MAAO,GAAI,CAAC,EACrBb,EAAIc,IAAW,CAAE,OAAAA,CAAO,EAAE,CAC5B,CACJ,CAYO,SAASC,GACdC,EAAiB,CAAE,UAAApB,EAAW,QAAAqB,EAAS,MAAApB,EAAO,QAAAC,CAAQ,EACpB,CAClC,IAAMoB,EAAQ,IAAIC,EACZC,EAAQF,EAAM,KAAKG,GAAS,CAAC,CAAC,EACpC,OAAAH,EAAM,UAAU,CAGd,KAAK,CAAE,OAAAJ,CAAO,EAAG,CACfE,EAAG,OAASF,EACRA,GACFE,EAAG,aAAa,WAAY,IAAI,EAChCA,EAAG,KAAK,GAERA,EAAG,gBAAgB,UAAU,CAEjC,EAGA,UAAW,CACTA,EAAG,MAAM,IAAM,GACfA,EAAG,OAAS,GACZA,EAAG,gBAAgB,UAAU,CAC/B,CACF,CAAC,EAGDC,EACG,KACCP,GAAUU,CAAK,EACfE,EAAwB,QAAQ,CAClC,EACG,UAAU,CAAC,CAAE,OAAAC,CAAO,IAAM,CACzBP,EAAG,MAAM,IAAM,GAAGO,EAAS,MAC7B,CAAC,EAGE7B,GAAesB,EAAI,CAAE,UAAApB,EAAW,MAAAC,EAAO,QAAAC,CAAQ,CAAC,EACpD,KACC0B,EAAIC,GAASP,EAAM,KAAKO,CAAK,CAAC,EAC9BC,EAAS,IAAMR,EAAM,SAAS,CAAC,EAC/BlB,EAAIyB,GAAUE,EAAA,CAAE,IAAKX,GAAOS,EAAQ,CACtC,CACJ,CCpHO,SAASG,GACd,CAAE,UAAAC,EAAW,QAAAC,CAAQ,EACf,CACND,EACG,KACCE,EAAU,IAAMC,EAEd,0DACF,CAAC,EACDC,EAAIC,GAAM,CACRA,EAAG,cAAgB,GACnBA,EAAG,QAAU,EACf,CAAC,EACDC,GAASD,GAAME,EAAUF,EAAI,QAAQ,EAClC,KACCG,GAAU,IAAMH,EAAG,UAAU,SAAS,0BAA0B,CAAC,EACjEI,EAAI,IAAMJ,CAAE,CACd,CACF,EACAK,GAAeT,CAAO,CACxB,EACG,UAAU,CAAC,CAACI,EAAIM,CAAM,IAAM,CAC3BN,EAAG,UAAU,OAAO,0BAA0B,EAC1CM,IACFN,EAAG,QAAU,GACjB,CAAC,CACP,CC/BA,SAASO,IAAyB,CAChC,MAAO,qBAAqB,KAAK,UAAU,SAAS,CACtD,CAiBO,SAASC,GACd,CAAE,UAAAC,CAAU,EACN,CACNA,EACG,KACCC,EAAU,IAAMC,EAAY,qBAAqB,CAAC,EAClDC,EAAIC,GAAMA,EAAG,gBAAgB,mBAAmB,CAAC,EACjDC,EAAOP,EAAa,EACpBQ,GAASF,GAAMG,EAAUH,EAAI,YAAY,EACtC,KACCI,EAAI,IAAMJ,CAAE,CACd,CACF,CACF,EACG,UAAUA,GAAM,CACf,IAAMK,EAAML,EAAG,UAGXK,IAAQ,EACVL,EAAG,UAAY,EAGNK,EAAML,EAAG,eAAiBA,EAAG,eACtCA,EAAG,UAAYK,EAAM,EAEzB,CAAC,CACP,CCpCO,SAASC,GACd,CAAE,UAAAC,EAAW,QAAAC,CAAQ,EACf,CACNC,EAAc,CAACC,GAAY,QAAQ,EAAGF,CAAO,CAAC,EAC3C,KACCG,EAAI,CAAC,CAACC,EAAQC,CAAM,IAAMD,GAAU,CAACC,CAAM,EAC3CC,EAAUF,GAAUG,EAAGH,CAAM,EAC1B,KACCI,GAAMJ,EAAS,IAAM,GAAG,CAC1B,CACF,EACAK,GAAeV,CAAS,CAC1B,EACG,UAAU,CAAC,CAACK,EAAQ,CAAE,OAAQ,CAAE,EAAAM,CAAE,CAAC,CAAC,
IAAM,CACzC,GAAIN,EACF,SAAS,KAAK,aAAa,qBAAsB,EAAE,EACnD,SAAS,KAAK,MAAM,IAAM,IAAIM,UACzB,CACL,IAAMC,EAAQ,GAAK,SAAS,SAAS,KAAK,MAAM,IAAK,EAAE,EACvD,SAAS,KAAK,gBAAgB,oBAAoB,EAClD,SAAS,KAAK,MAAM,IAAM,GACtBA,GACF,OAAO,SAAS,EAAGA,CAAK,CAC5B,CACF,CAAC,CACP,CC7DK,OAAO,UACV,OAAO,QAAU,SAAUC,EAAa,CACtC,IAAMC,EAA2B,CAAC,EAClC,QAAWC,KAAO,OAAO,KAAKF,CAAG,EAE/BC,EAAK,KAAK,CAACC,EAAKF,EAAIE,EAAI,CAAC,EAG3B,OAAOD,CACT,GAGG,OAAO,SACV,OAAO,OAAS,SAAUD,EAAa,CACrC,IAAMC,EAAiB,CAAC,EACxB,QAAWC,KAAO,OAAO,KAAKF,CAAG,EAE/BC,EAAK,KAAKD,EAAIE,EAAI,EAGpB,OAAOD,CACT,GAKE,OAAO,SAAY,cAGhB,QAAQ,UAAU,WACrB,QAAQ,UAAU,SAAW,SAC3BE,EAA8BC,EACxB,CACF,OAAOD,GAAM,UACf,KAAK,WAAaA,EAAE,KACpB,KAAK,UAAYA,EAAE,MAEnB,KAAK,WAAaA,EAClB,KAAK,UAAYC,EAErB,GAGG,QAAQ,UAAU,cACrB,QAAQ,UAAU,YAAc,YAC3BC,EACG,CACN,IAAMC,EAAS,KAAK,WACpB,GAAIA,EAAQ,CACND,EAAM,SAAW,GACnBC,EAAO,YAAY,IAAI,EAGzB,QAASC,EAAIF,EAAM,OAAS,EAAGE,GAAK,EAAGA,IAAK,CAC1C,IAAIC,EAAOH,EAAME,GACb,OAAOC,GAAS,SAClBA,EAAO,SAAS,eAAeA,CAAI,EAC5BA,EAAK,YACZA,EAAK,WAAW,YAAYA,CAAI,EAG7BD,EAGHD,EAAO,aAAa,KAAK,gBAAkBE,CAAI,EAF/CF,EAAO,aAAaE,EAAM,IAAI,CAGlC,CACF,CACF,IjMDJ,SAAS,gBAAgB,UAAU,OAAO,OAAO,EACjD,SAAS,gBAAgB,UAAU,IAAI,IAAI,EAG3C,IAAMC,GAAYC,GAAc,EAC1BC,GAAYC,GAAc,EAC1BC,GAAYC,GAAoB,EAChCC,GAAYC,GAAc,EAG1BC,GAAYC,GAAc,EAC1BC,GAAYC,GAAW,oBAAoB,EAC3CC,GAAYD,GAAW,qBAAqB,EAC5CE,GAAYC,GAAW,EAGvBC,GAASC,GAAc,EACvBC,GAAS,SAAS,MAAM,UAAU,QAAQ,GAC5C,+BAAU,QAASC,GACnB,IAAI,IAAI,2BAA4BH,GAAO,IAAI,CACjD,EACEI,GAGEC,GAAS,IAAIC,EACnBC,GAAiB,CAAE,OAAAF,EAAO,CAAC,EAGvBG,EAAQ,oBAAoB,GAC9BC,GAAoB,CAAE,UAAAxB,GAAW,UAAAE,GAAW,UAAAM,EAAU,CAAC,EA1HzD,IAAAiB,KA6HIA,GAAAV,GAAO,UAAP,YAAAU,GAAgB,YAAa,QAC/BC,GAAqB,CAAE,UAAA1B,EAAU,CAAC,EAGpC2B,EAAMzB,GAAWE,EAAO,EACrB,KACCwB,GAAM,GAAG,CACX,EACG,UAAU,IAAM,CACfC,GAAU,SAAU,EAAK,EACzBA,GAAU,SAAU,EAAK,CAC3B,CAAC,EAGLvB,GACG,KACCwB,EAAO,CAAC,CAAE,KAAAC,CAAK,IAAMA,IAAS,QAAQ,CACxC,EACG,UAAUC,GAAO,CAChB,OAAQA,EAAI,KAAM,CAGhB,IAAK,IACL,IAAK,IACH,IAAMC,EAAOC,GAAmB,kBAAkB,EAC9C,OAAOD,GAAS,aAClBA,EAAK,MAAM,EACb,MAGF,IAAK,IACL,IAAK,IACH,IAAME,EAAOD,GAAmB,k
BAAkB,EAC9C,OAAOC,GAAS,aAClBA,EAAK,MAAM,EACb,KACJ,CACF,CAAC,EAGLC,GAAmB,CAAE,UAAApC,GAAW,QAAAU,EAAQ,CAAC,EACzC2B,GAAe,CAAE,UAAArC,EAAU,CAAC,EAC5BsC,GAAgB,CAAE,UAAA9B,GAAW,QAAAE,EAAQ,CAAC,EAGtC,IAAM6B,GAAUC,GAAYC,GAAoB,QAAQ,EAAG,CAAE,UAAAjC,EAAU,CAAC,EAClEkC,GAAQ1C,GACX,KACC2C,EAAI,IAAMF,GAAoB,MAAM,CAAC,EACrCG,EAAUC,GAAMC,GAAUD,EAAI,CAAE,UAAArC,GAAW,QAAA+B,EAAQ,CAAC,CAAC,EACrDQ,EAAY,CAAC,CACf,EAGIC,GAAWrB,EAGf,GAAGsB,GAAqB,SAAS,EAC9B,IAAIJ,GAAMK,GAAaL,EAAI,CAAE,QAAAzC,EAAQ,CAAC,CAAC,EAG1C,GAAG6C,GAAqB,QAAQ,EAC7B,IAAIJ,GAAMM,GAAYN,EAAI,CAAE,OAAAzB,EAAO,CAAC,CAAC,EAGxC,GAAG6B,GAAqB,QAAQ,EAC7B,IAAIJ,GAAMO,GAAYP,EAAI,CAAE,UAAArC,GAAW,QAAA+B,GAAS,MAAAG,EAAM,CAAC,CAAC,EAG3D,GAAGO,GAAqB,SAAS,EAC9B,IAAIJ,GAAMQ,GAAaR,CAAE,CAAC,EAG7B,GAAGI,GAAqB,QAAQ,EAC7B,IAAIJ,GAAMS,GAAYT,EAAI,CAAE,OAAA5B,GAAQ,UAAAX,EAAU,CAAC,CAAC,EAGnD,GAAG2C,GAAqB,QAAQ,EAC7B,IAAIJ,GAAMU,GAAYV,CAAE,CAAC,CAC9B,EAGMW,GAAWC,EAAM,IAAM9B,EAG3B,GAAGsB,GAAqB,UAAU,EAC/B,IAAIJ,GAAMa,GAAcb,CAAE,CAAC,EAG9B,GAAGI,GAAqB,SAAS,EAC9B,IAAIJ,GAAMc,GAAad,EAAI,CAAE,UAAArC,GAAW,QAAAJ,GAAS,OAAAS,EAAO,CAAC,CAAC,EAG7D,GAAGoC,GAAqB,SAAS,EAC9B,IAAIJ,GAAMtB,EAAQ,kBAAkB,EACjCqC,GAAoBf,EAAI,CAAE,OAAA5B,GAAQ,UAAAf,EAAU,CAAC,EAC7C2D,CACJ,EAGF,GAAGZ,GAAqB,cAAc,EACnC,IAAIJ,GAAMiB,GAAiBjB,EAAI,CAAE,UAAArC,GAAW,QAAA+B,EAAQ,CAAC,CAAC,EAGzD,GAAGU,GAAqB,SAAS,EAC9B,IAAIJ,GAAMA,EAAG,aAAa,cAAc,IAAM,aAC3CkB,GAAGnD,GAAS,IAAMoD,GAAanB,EAAI,CAAE,UAAArC,GAAW,QAAA+B,GAAS,MAAAG,EAAM,CAAC,CAAC,EACjEqB,GAAGrD,GAAS,IAAMsD,GAAanB,EAAI,CAAE,UAAArC,GAAW,QAAA+B,GAAS,MAAAG,EAAM,CAAC,CAAC,CACrE,EAGF,GAAGO,GAAqB,MAAM,EAC3B,IAAIJ,GAAMoB,GAAUpB,EAAI,CAAE,UAAArC,GAAW,QAAA+B,EAAQ,CAAC,CAAC,EAGlD,GAAGU,GAAqB,KAAK,EAC1B,IAAIJ,GAAMqB,GAAqBrB,EAAI,CAAE,UAAArC,GAAW,QAAA+B,GAAS,QAAAnC,EAAQ,CAAC,CAAC,EAGtE,GAAG6C,GAAqB,KAAK,EAC1B,IAAIJ,GAAMsB,GAAetB,EAAI,CAAE,UAAArC,GAAW,QAAA+B,GAAS,MAAAG,GAAO,QAAAtC,EAAQ,CAAC,CAAC,CACzE,CAAC,EAGKgE,GAAapE,GAChB,KACC4C,EAAU,IAAMY,EAAQ,EACxBa,GAAUrB,EAAQ,EAClBD,EAAY,CAAC,CACf,EAGFqB,GAAW,UAAU,EAMrB,OAAO,UAAapE,GACpB,OAAO,UAAaE,GACpB,OAAO,QAAa
E,GACpB,OAAO,UAAaE,GACpB,OAAO,UAAaE,GACpB,OAAO,QAAaE,GACpB,OAAO,QAAaE,GACpB,OAAO,OAAaC,GACpB,OAAO,OAAaO,GACpB,OAAO,WAAagD", + "names": ["require_focus_visible", "__commonJSMin", "exports", "module", "global", "factory", "applyFocusVisiblePolyfill", "scope", "hadKeyboardEvent", "hadFocusVisibleRecently", "hadFocusVisibleRecentlyTimeout", "inputTypesAllowlist", "isValidFocusTarget", "el", "focusTriggersKeyboardModality", "type", "tagName", "addFocusVisibleClass", "removeFocusVisibleClass", "onKeyDown", "e", "onPointerDown", "onFocus", "onBlur", "onVisibilityChange", "addInitialPointerMoveListeners", "onInitialPointerMove", "removeInitialPointerMoveListeners", "event", "error", "require_url_polyfill", "__commonJSMin", "exports", "global", "checkIfIteratorIsSupported", "error", "iteratorSupported", "createIterator", "items", "iterator", "value", "serializeParam", "deserializeParam", "polyfillURLSearchParams", "URLSearchParams", "searchString", "typeofSearchString", "_this", "name", "i", "entry", "key", "proto", "callback", "thisArg", "entries", "searchArray", "checkIfURLSearchParamsSupported", "e", "a", "b", "keys", "attributes", "attribute", "checkIfURLIsSupported", "u", "polyfillURL", "_URL", "URL", "url", "base", "doc", "baseElement", "err", "anchorElement", "inputElement", "searchParams", "enableSearchUpdate", "enableSearchParamsUpdate", "methodName", "method", "search", "linkURLWithAnchorAttribute", "attributeName", "expectedPort", "addPortToOrigin", "blob", "getOrigin", "require_tslib", "__commonJSMin", "exports", "module", "__extends", "__assign", "__rest", "__decorate", "__param", "__metadata", "__awaiter", "__generator", "__exportStar", "__values", "__read", "__spread", "__spreadArrays", "__spreadArray", "__await", "__asyncGenerator", "__asyncDelegator", "__asyncValues", "__makeTemplateObject", "__importStar", "__importDefault", "__classPrivateFieldGet", "__classPrivateFieldSet", "__createBinding", "factory", "root", "createExporter", "previous", "id", "v", 
"exporter", "extendStatics", "d", "b", "p", "__", "t", "s", "n", "e", "i", "decorators", "target", "key", "desc", "c", "r", "paramIndex", "decorator", "metadataKey", "metadataValue", "thisArg", "_arguments", "P", "generator", "adopt", "value", "resolve", "reject", "fulfilled", "step", "rejected", "result", "body", "_", "f", "y", "g", "verb", "op", "m", "o", "k", "k2", "ar", "error", "il", "a", "j", "jl", "to", "from", "pack", "l", "q", "resume", "settle", "fulfill", "cooked", "raw", "__setModuleDefault", "mod", "receiver", "state", "kind", "require_clipboard", "__commonJSMin", "exports", "module", "root", "factory", "__webpack_modules__", "__unused_webpack_module", "__webpack_exports__", "__webpack_require__", "clipboard", "tiny_emitter", "tiny_emitter_default", "listen", "listen_default", "src_select", "select_default", "command", "type", "err", "ClipboardActionCut", "target", "selectedText", "actions_cut", "createFakeElement", "value", "isRTL", "fakeElement", "yPosition", "fakeCopyAction", "options", "ClipboardActionCopy", "actions_copy", "_typeof", "obj", "ClipboardActionDefault", "_options$action", "action", "container", "text", "actions_default", "clipboard_typeof", "_classCallCheck", "instance", "Constructor", "_defineProperties", "props", "i", "descriptor", "_createClass", "protoProps", "staticProps", "_inherits", "subClass", "superClass", "_setPrototypeOf", "o", "p", "_createSuper", "Derived", "hasNativeReflectConstruct", "_isNativeReflectConstruct", "Super", "_getPrototypeOf", "result", "NewTarget", "_possibleConstructorReturn", "self", "call", "_assertThisInitialized", "e", "getAttributeValue", "suffix", "element", "attribute", "Clipboard", "_Emitter", "_super", "trigger", "_this", "_this2", "selector", "actions", "support", "DOCUMENT_NODE_TYPE", "proto", "closest", "__unused_webpack_exports", "_delegate", "callback", "useCapture", "listenerFn", "listener", "delegate", "elements", "is", "listenNode", "listenNodeList", "listenSelector", "node", "nodeList", 
"select", "isReadOnly", "selection", "range", "E", "name", "ctx", "data", "evtArr", "len", "evts", "liveEvents", "__webpack_module_cache__", "moduleId", "getter", "definition", "key", "prop", "require_escape_html", "__commonJSMin", "exports", "module", "matchHtmlRegExp", "escapeHtml", "string", "str", "match", "escape", "html", "index", "lastIndex", "r", "a", "e", "import_focus_visible", "n", "t", "s", "r", "o", "u", "i", "a", "e", "c", "import_url_polyfill", "import_tslib", "__extends", "__assign", "__rest", "__decorate", "__param", "__metadata", "__awaiter", "__generator", "__exportStar", "__createBinding", "__values", "__read", "__spread", "__spreadArrays", "__spreadArray", "__await", "__asyncGenerator", "__asyncDelegator", "__asyncValues", "__makeTemplateObject", "__importStar", "__importDefault", "__classPrivateFieldGet", "__classPrivateFieldSet", "tslib", "isFunction", "value", "createErrorClass", "createImpl", "_super", "instance", "ctorFunc", "UnsubscriptionError", "createErrorClass", "_super", "errors", "err", "i", "arrRemove", "arr", "item", "index", "Subscription", "initialTeardown", "errors", "_parentage", "_parentage_1", "__values", "_parentage_1_1", "parent_1", "initialFinalizer", "isFunction", "e", "UnsubscriptionError", "_finalizers", "_finalizers_1", "_finalizers_1_1", "finalizer", "execFinalizer", "err", "__spreadArray", "__read", "teardown", "_a", "parent", "arrRemove", "empty", "EMPTY_SUBSCRIPTION", "Subscription", "isSubscription", "value", "isFunction", "execFinalizer", "finalizer", "config", "timeoutProvider", "handler", "timeout", "args", "_i", "delegate", "__spreadArray", "__read", "handle", "reportUnhandledError", "err", "timeoutProvider", "onUnhandledError", "config", "noop", "COMPLETE_NOTIFICATION", "createNotification", "errorNotification", "error", "nextNotification", "value", "kind", "context", "errorContext", "cb", "config", "isRoot", "_a", "errorThrown", "error", "captureError", "err", "Subscriber", "_super", "__extends", 
"destination", "_this", "isSubscription", "EMPTY_OBSERVER", "next", "error", "complete", "SafeSubscriber", "value", "handleStoppedNotification", "nextNotification", "err", "errorNotification", "COMPLETE_NOTIFICATION", "Subscription", "_bind", "bind", "fn", "thisArg", "ConsumerObserver", "partialObserver", "value", "error", "handleUnhandledError", "err", "SafeSubscriber", "_super", "__extends", "observerOrNext", "complete", "_this", "isFunction", "context_1", "config", "Subscriber", "handleUnhandledError", "error", "config", "captureError", "reportUnhandledError", "defaultErrorHandler", "err", "handleStoppedNotification", "notification", "subscriber", "onStoppedNotification", "timeoutProvider", "EMPTY_OBSERVER", "noop", "observable", "identity", "x", "pipe", "fns", "_i", "pipeFromArray", "identity", "input", "prev", "fn", "Observable", "subscribe", "operator", "observable", "observerOrNext", "error", "complete", "_this", "subscriber", "isSubscriber", "SafeSubscriber", "errorContext", "_a", "source", "sink", "err", "next", "promiseCtor", "getPromiseCtor", "resolve", "reject", "value", "operations", "_i", "pipeFromArray", "x", "getPromiseCtor", "promiseCtor", "_a", "config", "isObserver", "value", "isFunction", "isSubscriber", "Subscriber", "isSubscription", "hasLift", "source", "isFunction", "operate", "init", "liftedSource", "err", "createOperatorSubscriber", "destination", "onNext", "onComplete", "onError", "onFinalize", "OperatorSubscriber", "_super", "__extends", "shouldUnsubscribe", "_this", "value", "err", "closed_1", "_a", "Subscriber", "animationFrameProvider", "callback", "request", "cancel", "delegate", "handle", "timestamp", "Subscription", "args", "_i", "__spreadArray", "__read", "ObjectUnsubscribedError", "createErrorClass", "_super", "Subject", "_super", "__extends", "_this", "operator", "subject", "AnonymousSubject", "ObjectUnsubscribedError", "value", "errorContext", "_b", "__values", "_c", "observer", "err", "observers", "_a", "subscriber", 
"hasError", "isStopped", "EMPTY_SUBSCRIPTION", "Subscription", "arrRemove", "thrownError", "observable", "Observable", "destination", "source", "AnonymousSubject", "_super", "__extends", "destination", "source", "_this", "value", "_b", "_a", "err", "subscriber", "EMPTY_SUBSCRIPTION", "Subject", "dateTimestampProvider", "ReplaySubject", "_super", "__extends", "_bufferSize", "_windowTime", "_timestampProvider", "dateTimestampProvider", "_this", "value", "_a", "isStopped", "_buffer", "_infiniteTimeWindow", "subscriber", "subscription", "copy", "i", "adjustedBufferSize", "now", "last", "Subject", "Action", "_super", "__extends", "scheduler", "work", "state", "delay", "Subscription", "intervalProvider", "handler", "timeout", "args", "_i", "delegate", "__spreadArray", "__read", "handle", "AsyncAction", "_super", "__extends", "scheduler", "work", "_this", "state", "delay", "id", "_id", "intervalProvider", "_scheduler", "error", "_delay", "errored", "errorValue", "e", "_a", "actions", "arrRemove", "Action", "Scheduler", "schedulerActionCtor", "now", "work", "delay", "state", "dateTimestampProvider", "AsyncScheduler", "_super", "__extends", "SchedulerAction", "now", "Scheduler", "_this", "action", "actions", "error", "asyncScheduler", "AsyncScheduler", "AsyncAction", "async", "AnimationFrameAction", "_super", "__extends", "scheduler", "work", "_this", "id", "delay", "animationFrameProvider", "action", "AsyncAction", "AnimationFrameScheduler", "_super", "__extends", "action", "flushId", "actions", "error", "AsyncScheduler", "animationFrameScheduler", "AnimationFrameScheduler", "AnimationFrameAction", "EMPTY", "Observable", "subscriber", "isScheduler", "value", "isFunction", "last", "arr", "popResultSelector", "args", "isFunction", "popScheduler", "isScheduler", "popNumber", "defaultValue", "isArrayLike", "x", "isPromise", "value", "isFunction", "isInteropObservable", "input", "isFunction", "observable", "isAsyncIterable", "obj", "isFunction", 
"createInvalidObservableTypeError", "input", "getSymbolIterator", "iterator", "isIterable", "input", "isFunction", "iterator", "readableStreamLikeToAsyncGenerator", "readableStream", "reader", "__await", "_a", "_b", "value", "done", "isReadableStreamLike", "obj", "isFunction", "innerFrom", "input", "Observable", "isInteropObservable", "fromInteropObservable", "isArrayLike", "fromArrayLike", "isPromise", "fromPromise", "isAsyncIterable", "fromAsyncIterable", "isIterable", "fromIterable", "isReadableStreamLike", "fromReadableStreamLike", "createInvalidObservableTypeError", "obj", "subscriber", "obs", "observable", "isFunction", "array", "i", "promise", "value", "err", "reportUnhandledError", "iterable", "iterable_1", "__values", "iterable_1_1", "asyncIterable", "process", "readableStream", "readableStreamLikeToAsyncGenerator", "asyncIterable_1", "__asyncValues", "asyncIterable_1_1", "executeSchedule", "parentSubscription", "scheduler", "work", "delay", "repeat", "scheduleSubscription", "observeOn", "scheduler", "delay", "operate", "source", "subscriber", "createOperatorSubscriber", "value", "executeSchedule", "err", "subscribeOn", "scheduler", "delay", "operate", "source", "subscriber", "scheduleObservable", "input", "scheduler", "innerFrom", "subscribeOn", "observeOn", "schedulePromise", "input", "scheduler", "innerFrom", "subscribeOn", "observeOn", "scheduleArray", "input", "scheduler", "Observable", "subscriber", "i", "scheduleIterable", "input", "scheduler", "Observable", "subscriber", "iterator", "executeSchedule", "value", "done", "_a", "err", "isFunction", "scheduleAsyncIterable", "input", "scheduler", "Observable", "subscriber", "executeSchedule", "iterator", "result", "scheduleReadableStreamLike", "input", "scheduler", "scheduleAsyncIterable", "readableStreamLikeToAsyncGenerator", "scheduled", "input", "scheduler", "isInteropObservable", "scheduleObservable", "isArrayLike", "scheduleArray", "isPromise", "schedulePromise", "isAsyncIterable", 
"scheduleAsyncIterable", "isIterable", "scheduleIterable", "isReadableStreamLike", "scheduleReadableStreamLike", "createInvalidObservableTypeError", "from", "input", "scheduler", "scheduled", "innerFrom", "of", "args", "_i", "scheduler", "popScheduler", "from", "throwError", "errorOrErrorFactory", "scheduler", "errorFactory", "isFunction", "init", "subscriber", "Observable", "isValidDate", "value", "map", "project", "thisArg", "operate", "source", "subscriber", "index", "createOperatorSubscriber", "value", "isArray", "callOrApply", "fn", "args", "__spreadArray", "__read", "mapOneOrManyArgs", "map", "isArray", "getPrototypeOf", "objectProto", "getKeys", "argsArgArrayOrObject", "args", "first_1", "isPOJO", "keys", "key", "obj", "createObject", "keys", "values", "result", "key", "i", "combineLatest", "args", "_i", "scheduler", "popScheduler", "resultSelector", "popResultSelector", "_a", "argsArgArrayOrObject", "observables", "keys", "from", "result", "Observable", "combineLatestInit", "values", "createObject", "identity", "mapOneOrManyArgs", "valueTransform", "subscriber", "maybeSchedule", "length", "active", "remainingFirstValues", "i", "source", "hasFirstValue", "createOperatorSubscriber", "value", "execute", "subscription", "executeSchedule", "mergeInternals", "source", "subscriber", "project", "concurrent", "onBeforeNext", "expand", "innerSubScheduler", "additionalFinalizer", "buffer", "active", "index", "isComplete", "checkComplete", "outerNext", "value", "doInnerSub", "innerComplete", "innerFrom", "createOperatorSubscriber", "innerValue", "bufferedValue", "executeSchedule", "err", "mergeMap", "project", "resultSelector", "concurrent", "isFunction", "a", "i", "map", "b", "ii", "innerFrom", "operate", "source", "subscriber", "mergeInternals", "mergeAll", "concurrent", "mergeMap", "identity", "concatAll", "mergeAll", "concat", "args", "_i", "concatAll", "from", "popScheduler", "defer", "observableFactory", "Observable", "subscriber", "innerFrom", 
"nodeEventEmitterMethods", "eventTargetMethods", "jqueryMethods", "fromEvent", "target", "eventName", "options", "resultSelector", "isFunction", "mapOneOrManyArgs", "_a", "__read", "isEventTarget", "methodName", "handler", "isNodeStyleEventEmitter", "toCommonHandlerRegistry", "isJQueryStyleEventEmitter", "add", "remove", "isArrayLike", "mergeMap", "subTarget", "innerFrom", "Observable", "subscriber", "args", "_i", "fromEventPattern", "addHandler", "removeHandler", "resultSelector", "mapOneOrManyArgs", "Observable", "subscriber", "handler", "e", "_i", "retValue", "isFunction", "timer", "dueTime", "intervalOrScheduler", "scheduler", "async", "intervalDuration", "isScheduler", "Observable", "subscriber", "due", "isValidDate", "n", "merge", "args", "_i", "scheduler", "popScheduler", "concurrent", "popNumber", "sources", "innerFrom", "mergeAll", "from", "EMPTY", "NEVER", "Observable", "noop", "isArray", "argsOrArgArray", "args", "filter", "predicate", "thisArg", "operate", "source", "subscriber", "index", "createOperatorSubscriber", "value", "zip", "args", "_i", "resultSelector", "popResultSelector", "sources", "argsOrArgArray", "Observable", "subscriber", "buffers", "completed", "sourceIndex", "innerFrom", "createOperatorSubscriber", "value", "buffer", "result", "__spreadArray", "__read", "i", "EMPTY", "audit", "durationSelector", "operate", "source", "subscriber", "hasValue", "lastValue", "durationSubscriber", "isComplete", "endDuration", "value", "cleanupDuration", "createOperatorSubscriber", "innerFrom", "auditTime", "duration", "scheduler", "asyncScheduler", "audit", "timer", "bufferCount", "bufferSize", "startBufferEvery", "operate", "source", "subscriber", "buffers", "count", "createOperatorSubscriber", "value", "toEmit", "buffers_1", "__values", "buffers_1_1", "buffer", "toEmit_1", "toEmit_1_1", "arrRemove", "buffers_2", "buffers_2_1", "catchError", "selector", "operate", "source", "subscriber", "innerSub", "syncUnsub", "handledResult", 
"createOperatorSubscriber", "err", "innerFrom", "scanInternals", "accumulator", "seed", "hasSeed", "emitOnNext", "emitBeforeComplete", "source", "subscriber", "hasState", "state", "index", "createOperatorSubscriber", "value", "i", "combineLatest", "args", "_i", "resultSelector", "popResultSelector", "pipe", "__spreadArray", "__read", "mapOneOrManyArgs", "operate", "source", "subscriber", "combineLatestInit", "argsOrArgArray", "combineLatestWith", "otherSources", "_i", "combineLatest", "__spreadArray", "__read", "concatMap", "project", "resultSelector", "isFunction", "mergeMap", "debounceTime", "dueTime", "scheduler", "asyncScheduler", "operate", "source", "subscriber", "activeTask", "lastValue", "lastTime", "emit", "value", "emitWhenIdle", "targetTime", "now", "createOperatorSubscriber", "defaultIfEmpty", "defaultValue", "operate", "source", "subscriber", "hasValue", "createOperatorSubscriber", "value", "take", "count", "EMPTY", "operate", "source", "subscriber", "seen", "createOperatorSubscriber", "value", "ignoreElements", "operate", "source", "subscriber", "createOperatorSubscriber", "noop", "mapTo", "value", "map", "delayWhen", "delayDurationSelector", "subscriptionDelay", "source", "concat", "take", "ignoreElements", "mergeMap", "value", "index", "mapTo", "delay", "due", "scheduler", "asyncScheduler", "duration", "timer", "delayWhen", "distinctUntilChanged", "comparator", "keySelector", "identity", "defaultCompare", "operate", "source", "subscriber", "previousKey", "first", "createOperatorSubscriber", "value", "currentKey", "a", "b", "distinctUntilKeyChanged", "key", "compare", "distinctUntilChanged", "x", "y", "endWith", "values", "_i", "source", "concat", "of", "__spreadArray", "__read", "finalize", "callback", "operate", "source", "subscriber", "takeLast", "count", "EMPTY", "operate", "source", "subscriber", "buffer", "createOperatorSubscriber", "value", "buffer_1", "__values", "buffer_1_1", "merge", "args", "_i", "scheduler", "popScheduler", "concurrent", 
"popNumber", "argsOrArgArray", "operate", "source", "subscriber", "mergeAll", "from", "__spreadArray", "__read", "mergeWith", "otherSources", "_i", "merge", "__spreadArray", "__read", "repeat", "countOrConfig", "count", "delay", "_a", "EMPTY", "operate", "source", "subscriber", "soFar", "sourceSub", "resubscribe", "notifier", "timer", "innerFrom", "notifierSubscriber_1", "createOperatorSubscriber", "subscribeToSource", "syncUnsub", "sample", "notifier", "operate", "source", "subscriber", "hasValue", "lastValue", "createOperatorSubscriber", "value", "noop", "scan", "accumulator", "seed", "operate", "scanInternals", "share", "options", "_a", "connector", "Subject", "_b", "resetOnError", "_c", "resetOnComplete", "_d", "resetOnRefCountZero", "wrapperSource", "connection", "resetConnection", "subject", "refCount", "hasCompleted", "hasErrored", "cancelReset", "reset", "resetAndUnsubscribe", "conn", "operate", "source", "subscriber", "dest", "handleReset", "SafeSubscriber", "value", "err", "innerFrom", "on", "args", "_i", "onSubscriber", "__spreadArray", "__read", "shareReplay", "configOrBufferSize", "windowTime", "scheduler", "bufferSize", "refCount", "_a", "_b", "_c", "share", "ReplaySubject", "skip", "count", "filter", "_", "index", "skipUntil", "notifier", "operate", "source", "subscriber", "taking", "skipSubscriber", "createOperatorSubscriber", "noop", "innerFrom", "value", "startWith", "values", "_i", "scheduler", "popScheduler", "operate", "source", "subscriber", "concat", "switchMap", "project", "resultSelector", "operate", "source", "subscriber", "innerSubscriber", "index", "isComplete", "checkComplete", "createOperatorSubscriber", "value", "innerIndex", "outerIndex", "innerFrom", "innerValue", "takeUntil", "notifier", "operate", "source", "subscriber", "innerFrom", "createOperatorSubscriber", "noop", "takeWhile", "predicate", "inclusive", "operate", "source", "subscriber", "index", "createOperatorSubscriber", "value", "result", "tap", "observerOrNext", "error", 
"complete", "tapObserver", "isFunction", "operate", "source", "subscriber", "_a", "isUnsub", "createOperatorSubscriber", "value", "err", "_b", "identity", "defaultThrottleConfig", "throttle", "durationSelector", "config", "operate", "source", "subscriber", "leading", "trailing", "hasValue", "sendValue", "throttled", "isComplete", "endThrottling", "send", "cleanupThrottling", "startThrottle", "value", "innerFrom", "createOperatorSubscriber", "throttleTime", "duration", "scheduler", "config", "asyncScheduler", "defaultThrottleConfig", "duration$", "timer", "throttle", "withLatestFrom", "inputs", "_i", "project", "popResultSelector", "operate", "source", "subscriber", "len", "otherValues", "hasValue", "ready", "i", "innerFrom", "createOperatorSubscriber", "value", "identity", "noop", "values", "__spreadArray", "__read", "zip", "sources", "_i", "operate", "source", "subscriber", "__spreadArray", "__read", "zipWith", "otherInputs", "_i", "zip", "__spreadArray", "__read", "watchDocument", "document$", "ReplaySubject", "fromEvent", "getElements", "selector", "node", "getElement", "el", "getOptionalElement", "getActiveElement", "watchElementFocus", "el", "merge", "fromEvent", "debounceTime", "map", "active", "getActiveElement", "startWith", "distinctUntilChanged", "getElementOffset", "el", "watchElementOffset", "merge", "fromEvent", "auditTime", "animationFrameScheduler", "map", "startWith", "getElementContentOffset", "el", "watchElementContentOffset", "merge", "fromEvent", "auditTime", "animationFrameScheduler", "map", "startWith", "MapShim", "getIndex", "arr", "key", "result", "entry", "index", "class_1", "value", "entries", "callback", "ctx", "_i", "_a", "isBrowser", "global$1", "requestAnimationFrame$1", "trailingTimeout", "throttle", "delay", "leadingCall", "trailingCall", "lastCallTime", "resolvePending", "proxy", "timeoutCallback", "timeStamp", "REFRESH_DELAY", "transitionKeys", "mutationObserverSupported", "ResizeObserverController", "observer", "observers", 
"changesDetected", "activeObservers", "_b", "propertyName", "isReflowProperty", "defineConfigurable", "target", "props", "getWindowOf", "ownerGlobal", "emptyRect", "createRectInit", "toFloat", "getBordersSize", "styles", "positions", "size", "position", "getPaddings", "paddings", "positions_1", "getSVGContentRect", "bbox", "getHTMLElementContentRect", "clientWidth", "clientHeight", "horizPad", "vertPad", "width", "height", "isDocumentElement", "vertScrollbar", "horizScrollbar", "isSVGGraphicsElement", "getContentRect", "createReadOnlyRect", "x", "y", "Constr", "rect", "ResizeObservation", "ResizeObserverEntry", "rectInit", "contentRect", "ResizeObserverSPI", "controller", "callbackCtx", "observations", "_this", "observation", "ResizeObserver", "method", "ResizeObserver_es_default", "entry$", "Subject", "observer$", "defer", "of", "ResizeObserver_es_default", "entries", "entry", "switchMap", "observer", "merge", "NEVER", "finalize", "shareReplay", "getElementSize", "el", "watchElementSize", "tap", "filter", "target", "map", "startWith", "getElementContentSize", "el", "getElementContainer", "parent", "entry$", "Subject", "observer$", "defer", "of", "entries", "entry", "switchMap", "observer", "merge", "NEVER", "finalize", "shareReplay", "watchElementVisibility", "el", "tap", "filter", "target", "map", "isIntersecting", "watchElementBoundary", "threshold", "watchElementContentOffset", "y", "visible", "getElementSize", "content", "getElementContentSize", "distinctUntilChanged", "toggles", "getElement", "getToggle", "name", "setToggle", "value", "watchToggle", "el", "fromEvent", "map", "startWith", "isSusceptibleToKeyboard", "el", "type", "watchKeyboard", "fromEvent", "filter", "ev", "map", "getToggle", "mode", "active", "getActiveElement", "share", "getLocation", "setLocation", "url", "watchLocation", "Subject", "appendChild", "el", "child", "node", "h", "tag", "attributes", "children", "attr", "truncate", "value", "n", "i", "round", "digits", "getLocationHash", 
"setLocationHash", "hash", "el", "h", "ev", "watchLocationHash", "fromEvent", "map", "startWith", "filter", "shareReplay", "watchLocationTarget", "id", "getOptionalElement", "watchMedia", "query", "media", "fromEventPattern", "next", "startWith", "watchPrint", "merge", "fromEvent", "map", "at", "query$", "factory", "switchMap", "active", "EMPTY", "request", "url", "options", "from", "catchError", "EMPTY", "switchMap", "res", "throwError", "of", "requestJSON", "shareReplay", "requestXML", "dom", "map", "watchScript", "src", "script", "h", "defer", "merge", "fromEvent", "switchMap", "throwError", "map", "finalize", "take", "getViewportOffset", "watchViewportOffset", "merge", "fromEvent", "map", "startWith", "getViewportSize", "watchViewportSize", "fromEvent", "map", "startWith", "watchViewport", "combineLatest", "watchViewportOffset", "watchViewportSize", "map", "offset", "size", "shareReplay", "watchViewportAt", "el", "viewport$", "header$", "size$", "distinctUntilKeyChanged", "offset$", "combineLatest", "map", "getElementOffset", "height", "offset", "size", "x", "y", "watchWorker", "worker", "tx$", "rx$", "fromEvent", "map", "data", "throttle", "tap", "message", "switchMap", "share", "script", "getElement", "config", "getLocation", "configuration", "feature", "flag", "translation", "key", "value", "getComponentElement", "type", "node", "getElement", "getComponentElements", "getElements", "watchAnnounce", "el", "button", "getElement", "fromEvent", "map", "content", "mountAnnounce", "feature", "EMPTY", "defer", "push$", "Subject", "startWith", "hash", "_a", "tap", "state", "finalize", "__spreadValues", "watchConsent", "el", "target$", "map", "target", "mountConsent", "options", "internal$", "Subject", "hidden", "tap", "state", "finalize", "__spreadValues", "import_clipboard", "renderTooltip", "id", "h", "renderAnnotation", "id", "prefix", "anchor", "h", "renderTooltip", "renderClipboardButton", "id", "h", "translation", "renderSearchDocument", "document", "flag", 
"parent", "teaser", "missing", "key", "list", "h", "url", "feature", "match", "highlight", "value", "tags", "configuration", "truncate", "tag", "id", "type", "translation", "renderSearchResultItem", "result", "threshold", "docs", "doc", "article", "index", "best", "more", "children", "section", "renderSourceFacts", "facts", "h", "key", "value", "round", "renderTabbedControl", "type", "classes", "h", "renderTable", "table", "h", "renderVersion", "version", "config", "configuration", "url", "h", "renderVersionSelector", "versions", "active", "translation", "watchAnnotation", "el", "container", "offset$", "defer", "combineLatest", "watchElementOffset", "watchElementContentOffset", "map", "x", "y", "scroll", "width", "height", "getElementSize", "watchElementFocus", "switchMap", "active", "offset", "take", "mountAnnotation", "target$", "tooltip", "index", "push$", "Subject", "done$", "takeLast", "watchElementVisibility", "takeUntil", "visible", "merge", "filter", "debounceTime", "auditTime", "animationFrameScheduler", "throttleTime", "origin", "fromEvent", "ev", "withLatestFrom", "_a", "parent", "getActiveElement", "target", "delay", "tap", "state", "finalize", "__spreadValues", "findAnnotationMarkers", "container", "markers", "el", "getElements", "nodes", "it", "node", "text", "match", "id", "force", "marker", "swap", "source", "target", "mountAnnotationList", "target$", "print$", "parent", "prefix", "annotations", "getOptionalElement", "renderAnnotation", "EMPTY", "defer", "done$", "Subject", "pairs", "annotation", "getElement", "takeUntil", "takeLast", "active", "inner", "child", "merge", "mountAnnotation", "finalize", "share", "sequence", "findCandidateList", "el", "sibling", "watchCodeBlock", "watchElementSize", "map", "width", "getElementContentSize", "distinctUntilKeyChanged", "mountCodeBlock", "options", "hover", "factory$", "defer", "push$", "Subject", "scrollable", "ClipboardJS", "parent", "renderClipboardButton", "container", "list", "feature", 
"annotations$", "mountAnnotationList", "tap", "state", "finalize", "__spreadValues", "mergeWith", "height", "distinctUntilChanged", "switchMap", "active", "EMPTY", "watchElementVisibility", "filter", "visible", "take", "mermaid$", "sequence", "fetchScripts", "watchScript", "of", "mountMermaid", "el", "tap", "mermaid_default", "map", "shareReplay", "id", "host", "h", "svg", "shadow", "watchDetails", "el", "target$", "print$", "open", "merge", "map", "target", "filter", "details", "active", "tap", "mountDetails", "options", "defer", "push$", "Subject", "action", "reveal", "state", "finalize", "__spreadValues", "sentinel", "h", "mountDataTable", "el", "renderTable", "of", "watchContentTabs", "el", "inputs", "getElements", "initial", "input", "merge", "fromEvent", "map", "getElement", "startWith", "active", "mountContentTabs", "viewport$", "prev", "renderTabbedControl", "next", "container", "defer", "push$", "Subject", "done$", "takeLast", "combineLatest", "watchElementSize", "auditTime", "animationFrameScheduler", "takeUntil", "size", "offset", "getElementOffset", "width", "getElementSize", "content", "getElementContentOffset", "watchElementContentOffset", "getElementContentSize", "direction", "feature", "skip", "withLatestFrom", "tab", "y", "set", "label", "tabs", "tap", "state", "finalize", "__spreadValues", "subscribeOn", "asyncScheduler", "mountContent", "el", "viewport$", "target$", "print$", "merge", "getElements", "child", "mountCodeBlock", "mountMermaid", "mountDataTable", "mountDetails", "mountContentTabs", "watchDialog", "_el", "alert$", "switchMap", "message", "merge", "of", "delay", "map", "active", "mountDialog", "el", "options", "inner", "getElement", "defer", "push$", "Subject", "tap", "state", "finalize", "__spreadValues", "isHidden", "viewport$", "feature", "of", "direction$", "map", "y", "bufferCount", "a", "b", "distinctUntilKeyChanged", "hidden$", "combineLatest", "filter", "offset", "direction", "distinctUntilChanged", "search$", "watchToggle", 
"search", "switchMap", "active", "startWith", "watchHeader", "el", "options", "defer", "watchElementSize", "height", "hidden", "shareReplay", "mountHeader", "header$", "main$", "push$", "Subject", "done$", "takeLast", "combineLatestWith", "takeUntil", "state", "__spreadValues", "watchHeaderTitle", "el", "viewport$", "header$", "watchViewportAt", "map", "y", "height", "getElementSize", "distinctUntilKeyChanged", "mountHeaderTitle", "options", "defer", "push$", "Subject", "active", "heading", "getOptionalElement", "EMPTY", "tap", "state", "finalize", "__spreadValues", "watchMain", "el", "viewport$", "header$", "adjust$", "map", "height", "distinctUntilChanged", "border$", "switchMap", "watchElementSize", "distinctUntilKeyChanged", "combineLatest", "header", "top", "bottom", "y", "a", "b", "watchPalette", "inputs", "current", "input", "of", "mergeMap", "fromEvent", "map", "startWith", "shareReplay", "mountPalette", "el", "defer", "push$", "Subject", "palette", "key", "value", "index", "label", "observeOn", "asyncScheduler", "getElements", "tap", "state", "finalize", "__spreadValues", "import_clipboard", "extract", "el", "text", "setupClipboardJS", "alert$", "ClipboardJS", "Observable", "subscriber", "getElement", "ev", "tap", "map", "translation", "preprocess", "urls", "root", "next", "a", "b", "url", "index", "fetchSitemap", "base", "cached", "of", "config", "configuration", "requestXML", "map", "sitemap", "getElements", "node", "catchError", "EMPTY", "defaultIfEmpty", "tap", "setupInstantLoading", "document$", "location$", "viewport$", "config", "configuration", "fromEvent", "favicon", "getOptionalElement", "push$", "fetchSitemap", "map", "paths", "path", "switchMap", "urls", "filter", "ev", "el", "url", "of", "NEVER", "share", "pop$", "merge", "distinctUntilChanged", "a", "b", "response$", "distinctUntilKeyChanged", "request", "catchError", "setLocation", "sample", "dom", "res", "skip", "replacement", "selector", "feature", "source", "target", 
"getComponentElement", "getElements", "concatMap", "script", "h", "name", "Observable", "observer", "EMPTY", "offset", "setLocationHash", "skipUntil", "debounceTime", "bufferCount", "state", "import_escape_html", "import_escape_html", "setupSearchHighlighter", "config", "escape", "separator", "highlight", "_", "data", "term", "query", "match", "value", "escapeHTML", "defaultTransform", "query", "terms", "index", "isSearchReadyMessage", "message", "isSearchQueryMessage", "isSearchResultMessage", "setupSearchIndex", "config", "docs", "translation", "options", "feature", "setupSearchWorker", "url", "index", "configuration", "worker", "tx$", "Subject", "rx$", "watchWorker", "map", "message", "isSearchResultMessage", "result", "document", "share", "from", "data", "setupVersionSelector", "document$", "config", "configuration", "versions$", "requestJSON", "catchError", "EMPTY", "current$", "map", "versions", "current", "version", "aliases", "switchMap", "urls", "fromEvent", "filter", "ev", "withLatestFrom", "el", "url", "of", "fetchSitemap", "sitemap", "path", "getLocation", "setLocation", "combineLatest", "getElement", "renderVersionSelector", "_a", "outdated", "latest", "warning", "getComponentElements", "watchSearchQuery", "el", "rx$", "fn", "defaultTransform", "searchParams", "getLocation", "setToggle", "param$", "filter", "isSearchReadyMessage", "take", "map", "watchToggle", "active", "url", "value", "focus$", "watchElementFocus", "value$", "merge", "fromEvent", "delay", "startWith", "distinctUntilChanged", "combineLatest", "focus", "shareReplay", "mountSearchQuery", "tx$", "push$", "Subject", "done$", "takeLast", "distinctUntilKeyChanged", "translation", "takeUntil", "tap", "state", "finalize", "__spreadValues", "share", "mountSearchResult", "el", "rx$", "query$", "push$", "Subject", "boundary$", "watchElementBoundary", "filter", "meta", "getElement", "list", "ready$", "isSearchReadyMessage", "take", "withLatestFrom", "skipUntil", "items", "value", "translation", 
"round", "tap", "switchMap", "merge", "of", "bufferCount", "zipWith", "chunk", "result", "renderSearchResultItem", "isSearchResultMessage", "map", "data", "state", "finalize", "__spreadValues", "watchSearchShare", "_el", "query$", "map", "value", "url", "getLocation", "mountSearchShare", "el", "options", "push$", "Subject", "fromEvent", "ev", "tap", "state", "finalize", "__spreadValues", "mountSearchSuggest", "el", "rx$", "keyboard$", "push$", "Subject", "query", "getComponentElement", "query$", "merge", "fromEvent", "observeOn", "asyncScheduler", "map", "distinctUntilChanged", "combineLatestWith", "suggestions", "value", "words", "last", "filter", "mode", "key", "isSearchResultMessage", "data", "tap", "state", "finalize", "mountSearch", "el", "index$", "keyboard$", "config", "configuration", "url", "worker", "setupSearchWorker", "query", "getComponentElement", "result", "tx$", "rx$", "filter", "isSearchQueryMessage", "sample", "isSearchReadyMessage", "take", "mode", "key", "active", "getActiveElement", "anchors", "anchor", "getElements", "article", "best", "a", "b", "setToggle", "els", "i", "query$", "mountSearchQuery", "result$", "mountSearchResult", "merge", "mergeWith", "getComponentElements", "child", "mountSearchShare", "mountSearchSuggest", "err", "NEVER", "mountSearchHiglight", "el", "index$", "location$", "combineLatest", "startWith", "getLocation", "filter", "url", "map", "index", "setupSearchHighlighter", "fn", "_a", "nodes", "it", "node", "original", "replaced", "text", "childNodes", "h", "watchSidebar", "el", "viewport$", "main$", "parent", "adjust", "combineLatest", "map", "offset", "height", "y", "distinctUntilChanged", "a", "b", "mountSidebar", "_a", "_b", "header$", "options", "__objRest", "inner", "getElement", "getElementOffset", "defer", "push$", "Subject", "auditTime", "animationFrameScheduler", "withLatestFrom", "observeOn", "take", "item", "getElements", "container", "getElementContainer", "getElementSize", "tap", "state", "finalize", 
"__spreadValues", "fetchSourceFactsFromGitHub", "user", "repo", "url", "zip", "requestJSON", "catchError", "EMPTY", "map", "release", "defaultIfEmpty", "info", "__spreadValues", "fetchSourceFactsFromGitLab", "base", "project", "url", "requestJSON", "catchError", "EMPTY", "map", "star_count", "forks_count", "defaultIfEmpty", "fetchSourceFacts", "url", "type", "user", "repo", "fetchSourceFactsFromGitHub", "base", "slug", "fetchSourceFactsFromGitLab", "EMPTY", "fetch$", "watchSource", "el", "defer", "cached", "of", "getComponentElements", "consent", "EMPTY", "fetchSourceFacts", "tap", "facts", "catchError", "filter", "map", "shareReplay", "mountSource", "inner", "getElement", "push$", "Subject", "renderSourceFacts", "state", "finalize", "__spreadValues", "watchTabs", "el", "viewport$", "header$", "watchElementSize", "switchMap", "watchViewportAt", "map", "y", "distinctUntilKeyChanged", "mountTabs", "options", "defer", "push$", "Subject", "hidden", "feature", "of", "tap", "state", "finalize", "__spreadValues", "watchTableOfContents", "el", "viewport$", "header$", "table", "anchors", "getElements", "anchor", "id", "target", "getOptionalElement", "adjust$", "distinctUntilKeyChanged", "map", "height", "main", "getComponentElement", "grid", "getElement", "share", "watchElementSize", "switchMap", "body", "defer", "path", "of", "index", "offset", "a", "b", "combineLatestWith", "adjust", "scan", "prev", "next", "y", "size", "last", "distinctUntilChanged", "startWith", "bufferCount", "mountTableOfContents", "target$", "push$", "Subject", "done$", "takeLast", "feature", "smooth$", "merge", "debounceTime", "filter", "withLatestFrom", "behavior", "container", "getElementContainer", "getElementSize", "takeUntil", "skip", "repeat", "url", "getLocation", "active", "hash", "tap", "state", "finalize", "__spreadValues", "watchBackToTop", "_el", "viewport$", "main$", "target$", "direction$", "map", "y", "bufferCount", "a", "b", "distinctUntilChanged", "active$", "active", 
"combineLatest", "direction", "takeUntil", "skip", "endWith", "repeat", "hidden", "mountBackToTop", "el", "header$", "push$", "Subject", "done$", "takeLast", "distinctUntilKeyChanged", "height", "tap", "state", "finalize", "__spreadValues", "patchIndeterminate", "document$", "tablet$", "switchMap", "getElements", "tap", "el", "mergeMap", "fromEvent", "takeWhile", "map", "withLatestFrom", "tablet", "isAppleDevice", "patchScrollfix", "document$", "switchMap", "getElements", "tap", "el", "filter", "mergeMap", "fromEvent", "map", "top", "patchScrolllock", "viewport$", "tablet$", "combineLatest", "watchToggle", "map", "active", "tablet", "switchMap", "of", "delay", "withLatestFrom", "y", "value", "obj", "data", "key", "x", "y", "nodes", "parent", "i", "node", "document$", "watchDocument", "location$", "watchLocation", "target$", "watchLocationTarget", "keyboard$", "watchKeyboard", "viewport$", "watchViewport", "tablet$", "watchMedia", "screen$", "print$", "watchPrint", "config", "configuration", "index$", "requestJSON", "NEVER", "alert$", "Subject", "setupClipboardJS", "feature", "setupInstantLoading", "_a", "setupVersionSelector", "merge", "delay", "setToggle", "filter", "mode", "key", "prev", "getOptionalElement", "next", "patchIndeterminate", "patchScrollfix", "patchScrolllock", "header$", "watchHeader", "getComponentElement", "main$", "map", "switchMap", "el", "watchMain", "shareReplay", "control$", "getComponentElements", "mountConsent", "mountDialog", "mountHeader", "mountPalette", "mountSearch", "mountSource", "content$", "defer", "mountAnnounce", "mountContent", "mountSearchHiglight", "EMPTY", "mountHeaderTitle", "at", "mountSidebar", "mountTabs", "mountTableOfContents", "mountBackToTop", "component$", "mergeWith"] +} diff --git a/assets/javascripts/lunr/min/lunr.ar.min.js b/assets/javascripts/lunr/min/lunr.ar.min.js new file mode 100644 index 000000000..248ddc5d1 --- /dev/null +++ b/assets/javascripts/lunr/min/lunr.ar.min.js @@ -0,0 +1 @@ 
+!function(e,r){"function"==typeof define&&define.amd?define(r):"object"==typeof exports?module.exports=r():r()(e.lunr)}(this,function(){return function(e){if(void 0===e)throw new Error("Lunr is not present. Please include / require Lunr before this script.");if(void 0===e.stemmerSupport)throw new Error("Lunr stemmer support is not present. Please include / require Lunr stemmer support before this script.");e.ar=function(){this.pipeline.reset(),this.pipeline.add(e.ar.trimmer,e.ar.stopWordFilter,e.ar.stemmer),this.searchPipeline&&(this.searchPipeline.reset(),this.searchPipeline.add(e.ar.stemmer))},e.ar.wordCharacters="ء-ٛٱـ",e.ar.trimmer=e.trimmerSupport.generateTrimmer(e.ar.wordCharacters),e.Pipeline.registerFunction(e.ar.trimmer,"trimmer-ar"),e.ar.stemmer=function(){var e=this;return e.result=!1,e.preRemoved=!1,e.sufRemoved=!1,e.pre={pre1:"ف ك ب و س ل ن ا ي ت",pre2:"ال لل",pre3:"بال وال فال تال كال ولل",pre4:"فبال كبال وبال وكال"},e.suf={suf1:"ه ك ت ن ا ي",suf2:"نك نه ها وك يا اه ون ين تن تم نا وا ان كم كن ني نن ما هم هن تك ته ات يه",suf3:"تين كهم نيه نهم ونه وها يهم ونا ونك وني وهم تكم تنا تها تني تهم كما كها ناه نكم هنا تان يها",suf4:"كموه ناها ونني ونهم تكما تموه تكاه كماه ناكم ناهم نيها 
وننا"},e.patterns=JSON.parse('{"pt43":[{"pt":[{"c":"ا","l":1}]},{"pt":[{"c":"ا,ت,ن,ي","l":0}],"mPt":[{"c":"ف","l":0,"m":1},{"c":"ع","l":1,"m":2},{"c":"ل","l":2,"m":3}]},{"pt":[{"c":"و","l":2}],"mPt":[{"c":"ف","l":0,"m":0},{"c":"ع","l":1,"m":1},{"c":"ل","l":2,"m":3}]},{"pt":[{"c":"ا","l":2}]},{"pt":[{"c":"ي","l":2}],"mPt":[{"c":"ف","l":0,"m":0},{"c":"ع","l":1,"m":1},{"c":"ا","l":2},{"c":"ل","l":3,"m":3}]},{"pt":[{"c":"م","l":0}]}],"pt53":[{"pt":[{"c":"ت","l":0},{"c":"ا","l":2}]},{"pt":[{"c":"ا,ن,ت,ي","l":0},{"c":"ت","l":2}],"mPt":[{"c":"ا","l":0},{"c":"ف","l":1,"m":1},{"c":"ت","l":2},{"c":"ع","l":3,"m":3},{"c":"ا","l":4},{"c":"ل","l":5,"m":4}]},{"pt":[{"c":"ا","l":0},{"c":"ا","l":2}],"mPt":[{"c":"ا","l":0},{"c":"ف","l":1,"m":1},{"c":"ع","l":2,"m":3},{"c":"ل","l":3,"m":4},{"c":"ا","l":4},{"c":"ل","l":5,"m":4}]},{"pt":[{"c":"ا","l":0},{"c":"ا","l":3}],"mPt":[{"c":"ف","l":0,"m":1},{"c":"ع","l":1,"m":2},{"c":"ل","l":2,"m":4}]},{"pt":[{"c":"ا","l":3},{"c":"ن","l":4}]},{"pt":[{"c":"ت","l":0},{"c":"ي","l":3}]},{"pt":[{"c":"م","l":0},{"c":"و","l":3}]},{"pt":[{"c":"ا","l":1},{"c":"و","l":3}]},{"pt":[{"c":"و","l":1},{"c":"ا","l":2}]},{"pt":[{"c":"م","l":0},{"c":"ا","l":3}]},{"pt":[{"c":"م","l":0},{"c":"ي","l":3}]},{"pt":[{"c":"ا","l":2},{"c":"ن","l":3}]},{"pt":[{"c":"م","l":0},{"c":"ن","l":1}],"mPt":[{"c":"ا","l":0},{"c":"ن","l":1},{"c":"ف","l":2,"m":2},{"c":"ع","l":3,"m":3},{"c":"ا","l":4},{"c":"ل","l":5,"m":4}]},{"pt":[{"c":"م","l":0},{"c":"ت","l":2}],"mPt":[{"c":"ا","l":0},{"c":"ف","l":1,"m":1},{"c":"ت","l":2},{"c":"ع","l":3,"m":3},{"c":"ا","l":4},{"c":"ل","l":5,"m":4}]},{"pt":[{"c":"م","l":0},{"c":"ا","l":2}]},{"pt":[{"c":"م","l":1},{"c":"ا","l":3}]},{"pt":[{"c":"ي,ت,ا,ن","l":0},{"c":"ت","l":1}],"mPt":[{"c":"ف","l":0,"m":2},{"c":"ع","l":1,"m":3},{"c":"ا","l":2},{"c":"ل","l":3,"m":4}]},{"pt":[{"c":"ت,ي,ا,ن","l":0},{"c":"ت","l":2}],"mPt":[{"c":"ا","l":0},{"c":"ف","l":1,"m":1},{"c":"ت","l":2},{"c":"ع","l":3,"m":3},{"c":"ا","l":4},{"c":"ل","l":5,"m":4}]},{"pt":[{"c":"ا","l":2}
,{"c":"ي","l":3}]},{"pt":[{"c":"ا,ي,ت,ن","l":0},{"c":"ن","l":1}],"mPt":[{"c":"ا","l":0},{"c":"ن","l":1},{"c":"ف","l":2,"m":2},{"c":"ع","l":3,"m":3},{"c":"ا","l":4},{"c":"ل","l":5,"m":4}]},{"pt":[{"c":"ا","l":3},{"c":"ء","l":4}]}],"pt63":[{"pt":[{"c":"ا","l":0},{"c":"ت","l":2},{"c":"ا","l":4}]},{"pt":[{"c":"ا,ت,ن,ي","l":0},{"c":"س","l":1},{"c":"ت","l":2}],"mPt":[{"c":"ا","l":0},{"c":"س","l":1},{"c":"ت","l":2},{"c":"ف","l":3,"m":3},{"c":"ع","l":4,"m":4},{"c":"ا","l":5},{"c":"ل","l":6,"m":5}]},{"pt":[{"c":"ا,ن,ت,ي","l":0},{"c":"و","l":3}]},{"pt":[{"c":"م","l":0},{"c":"س","l":1},{"c":"ت","l":2}],"mPt":[{"c":"ا","l":0},{"c":"س","l":1},{"c":"ت","l":2},{"c":"ف","l":3,"m":3},{"c":"ع","l":4,"m":4},{"c":"ا","l":5},{"c":"ل","l":6,"m":5}]},{"pt":[{"c":"ي","l":1},{"c":"ي","l":3},{"c":"ا","l":4},{"c":"ء","l":5}]},{"pt":[{"c":"ا","l":0},{"c":"ن","l":1},{"c":"ا","l":4}]}],"pt54":[{"pt":[{"c":"ت","l":0}]},{"pt":[{"c":"ا,ي,ت,ن","l":0}],"mPt":[{"c":"ا","l":0},{"c":"ف","l":1,"m":1},{"c":"ع","l":2,"m":2},{"c":"ل","l":3,"m":3},{"c":"ر","l":4,"m":4},{"c":"ا","l":5},{"c":"ر","l":6,"m":4}]},{"pt":[{"c":"م","l":0}],"mPt":[{"c":"ا","l":0},{"c":"ف","l":1,"m":1},{"c":"ع","l":2,"m":2},{"c":"ل","l":3,"m":3},{"c":"ر","l":4,"m":4},{"c":"ا","l":5},{"c":"ر","l":6,"m":4}]},{"pt":[{"c":"ا","l":2}]},{"pt":[{"c":"ا","l":0},{"c":"ن","l":2}]}],"pt64":[{"pt":[{"c":"ا","l":0},{"c":"ا","l":4}]},{"pt":[{"c":"م","l":0},{"c":"ت","l":1}]}],"pt73":[{"pt":[{"c":"ا","l":0},{"c":"س","l":1},{"c":"ت","l":2},{"c":"ا","l":5}]}],"pt75":[{"pt":[{"c":"ا","l":0},{"c":"ا","l":5}]}]}'),e.execArray=["cleanWord","removeDiacritics","cleanAlef","removeStopWords","normalizeHamzaAndAlef","removeStartWaw","removePre432","removeEndTaa","wordCheck"],e.stem=function(){var r=0;for(e.result=!1,e.preRemoved=!1,e.sufRemoved=!1;r=0)return!0},e.normalizeHamzaAndAlef=function(){return 
e.word=e.word.replace("ؤ","ء"),e.word=e.word.replace("ئ","ء"),e.word=e.word.replace(/([\u0627])\1+/gi,"ا"),!1},e.removeEndTaa=function(){return!(e.word.length>2)||(e.word=e.word.replace(/[\u0627]$/,""),e.word=e.word.replace("ة",""),!1)},e.removeStartWaw=function(){return e.word.length>3&&"و"==e.word[0]&&"و"==e.word[1]&&(e.word=e.word.slice(1)),!1},e.removePre432=function(){var r=e.word;if(e.word.length>=7){var t=new RegExp("^("+e.pre.pre4.split(" ").join("|")+")");e.word=e.word.replace(t,"")}if(e.word==r&&e.word.length>=6){var c=new RegExp("^("+e.pre.pre3.split(" ").join("|")+")");e.word=e.word.replace(c,"")}if(e.word==r&&e.word.length>=5){var l=new RegExp("^("+e.pre.pre2.split(" ").join("|")+")");e.word=e.word.replace(l,"")}return r!=e.word&&(e.preRemoved=!0),!1},e.patternCheck=function(r){for(var t=0;t3){var t=new RegExp("^("+e.pre.pre1.split(" ").join("|")+")");e.word=e.word.replace(t,"")}return r!=e.word&&(e.preRemoved=!0),!1},e.removeSuf1=function(){var r=e.word;if(0==e.sufRemoved&&e.word.length>3){var t=new RegExp("("+e.suf.suf1.split(" ").join("|")+")$");e.word=e.word.replace(t,"")}return r!=e.word&&(e.sufRemoved=!0),!1},e.removeSuf432=function(){var r=e.word;if(e.word.length>=6){var t=new RegExp("("+e.suf.suf4.split(" ").join("|")+")$");e.word=e.word.replace(t,"")}if(e.word==r&&e.word.length>=5){var c=new RegExp("("+e.suf.suf3.split(" ").join("|")+")$");e.word=e.word.replace(c,"")}if(e.word==r&&e.word.length>=4){var l=new RegExp("("+e.suf.suf2.split(" ").join("|")+")$");e.word=e.word.replace(l,"")}return r!=e.word&&(e.sufRemoved=!0),!1},e.wordCheck=function(){for(var r=(e.word,[e.removeSuf432,e.removeSuf1,e.removePre1]),t=0,c=!1;e.word.length>=7&&!e.result&&t=f.limit)return;f.cursor++}for(;!f.out_grouping(w,97,248);){if(f.cursor>=f.limit)return;f.cursor++}d=f.cursor,d=d&&(r=f.limit_backward,f.limit_backward=d,f.ket=f.cursor,e=f.find_among_b(c,32),f.limit_backward=r,e))switch(f.bra=f.cursor,e){case 1:f.slice_del();break;case 
2:f.in_grouping_b(p,97,229)&&f.slice_del()}}function t(){var e,r=f.limit-f.cursor;f.cursor>=d&&(e=f.limit_backward,f.limit_backward=d,f.ket=f.cursor,f.find_among_b(l,4)?(f.bra=f.cursor,f.limit_backward=e,f.cursor=f.limit-r,f.cursor>f.limit_backward&&(f.cursor--,f.bra=f.cursor,f.slice_del())):f.limit_backward=e)}function s(){var e,r,i,n=f.limit-f.cursor;if(f.ket=f.cursor,f.eq_s_b(2,"st")&&(f.bra=f.cursor,f.eq_s_b(2,"ig")&&f.slice_del()),f.cursor=f.limit-n,f.cursor>=d&&(r=f.limit_backward,f.limit_backward=d,f.ket=f.cursor,e=f.find_among_b(m,5),f.limit_backward=r,e))switch(f.bra=f.cursor,e){case 1:f.slice_del(),i=f.limit-f.cursor,t(),f.cursor=f.limit-i;break;case 2:f.slice_from("løs")}}function o(){var e;f.cursor>=d&&(e=f.limit_backward,f.limit_backward=d,f.ket=f.cursor,f.out_grouping_b(w,97,248)?(f.bra=f.cursor,u=f.slice_to(u),f.limit_backward=e,f.eq_v_b(u)&&f.slice_del()):f.limit_backward=e)}var a,d,u,c=[new r("hed",-1,1),new r("ethed",0,1),new r("ered",-1,1),new r("e",-1,1),new r("erede",3,1),new r("ende",3,1),new r("erende",5,1),new r("ene",3,1),new r("erne",3,1),new r("ere",3,1),new r("en",-1,1),new r("heden",10,1),new r("eren",10,1),new r("er",-1,1),new r("heder",13,1),new r("erer",13,1),new r("s",-1,2),new r("heds",16,1),new r("es",16,1),new r("endes",18,1),new r("erendes",19,1),new r("enes",18,1),new r("ernes",18,1),new r("eres",18,1),new r("ens",16,1),new r("hedens",24,1),new r("erens",24,1),new r("ers",16,1),new r("ets",16,1),new r("erets",28,1),new r("et",-1,1),new r("eret",30,1)],l=[new r("gd",-1,-1),new r("dt",-1,-1),new r("gt",-1,-1),new r("kt",-1,-1)],m=[new r("ig",-1,1),new r("lig",0,1),new r("elig",1,1),new r("els",-1,1),new r("løst",-1,2)],w=[17,65,16,1,0,0,0,0,0,0,0,0,0,0,0,0,48,0,128],p=[239,254,42,3,0,0,0,0,0,0,0,0,0,0,0,0,16],f=new i;this.setCurrent=function(e){f.setCurrent(e)},this.getCurrent=function(){return f.getCurrent()},this.stem=function(){var r=f.cursor;return 
e(),f.limit_backward=r,f.cursor=f.limit,n(),f.cursor=f.limit,t(),f.cursor=f.limit,s(),f.cursor=f.limit,o(),!0}};return function(e){return"function"==typeof e.update?e.update(function(e){return n.setCurrent(e),n.stem(),n.getCurrent()}):(n.setCurrent(e),n.stem(),n.getCurrent())}}(),e.Pipeline.registerFunction(e.da.stemmer,"stemmer-da"),e.da.stopWordFilter=e.generateStopWordFilter("ad af alle alt anden at blev blive bliver da de dem den denne der deres det dette dig din disse dog du efter eller en end er et for fra ham han hans har havde have hende hendes her hos hun hvad hvis hvor i ikke ind jeg jer jo kunne man mange med meget men mig min mine mit mod ned noget nogle nu når og også om op os over på selv sig sin sine sit skal skulle som sådan thi til ud under var vi vil ville vor være været".split(" ")),e.Pipeline.registerFunction(e.da.stopWordFilter,"stopWordFilter-da")}}); \ No newline at end of file diff --git a/assets/javascripts/lunr/min/lunr.de.min.js b/assets/javascripts/lunr/min/lunr.de.min.js new file mode 100644 index 000000000..f3b5c108c --- /dev/null +++ b/assets/javascripts/lunr/min/lunr.de.min.js @@ -0,0 +1,18 @@ +/*! + * Lunr languages, `German` language + * https://github.com/MihaiValentin/lunr-languages + * + * Copyright 2014, Mihai Valentin + * http://www.mozilla.org/MPL/ + */ +/*! + * based on + * Snowball JavaScript Library v0.3 + * http://code.google.com/p/urim/ + * http://snowball.tartarus.org/ + * + * Copyright 2010, Oleg Mazko + * http://www.mozilla.org/MPL/ + */ + +!function(e,r){"function"==typeof define&&define.amd?define(r):"object"==typeof exports?module.exports=r():r()(e.lunr)}(this,function(){return function(e){if(void 0===e)throw new Error("Lunr is not present. Please include / require Lunr before this script.");if(void 0===e.stemmerSupport)throw new Error("Lunr stemmer support is not present. 
Please include / require Lunr stemmer support before this script.");e.de=function(){this.pipeline.reset(),this.pipeline.add(e.de.trimmer,e.de.stopWordFilter,e.de.stemmer),this.searchPipeline&&(this.searchPipeline.reset(),this.searchPipeline.add(e.de.stemmer))},e.de.wordCharacters="A-Za-zªºÀ-ÖØ-öø-ʸˠ-ˤᴀ-ᴥᴬ-ᵜᵢ-ᵥᵫ-ᵷᵹ-ᶾḀ-ỿⁱⁿₐ-ₜKÅℲⅎⅠ-ↈⱠ-ⱿꜢ-ꞇꞋ-ꞭꞰ-ꞷꟷ-ꟿꬰ-ꭚꭜ-ꭤff-stA-Za-z",e.de.trimmer=e.trimmerSupport.generateTrimmer(e.de.wordCharacters),e.Pipeline.registerFunction(e.de.trimmer,"trimmer-de"),e.de.stemmer=function(){var r=e.stemmerSupport.Among,n=e.stemmerSupport.SnowballProgram,i=new function(){function e(e,r,n){return!(!v.eq_s(1,e)||(v.ket=v.cursor,!v.in_grouping(p,97,252)))&&(v.slice_from(r),v.cursor=n,!0)}function i(){for(var r,n,i,s,t=v.cursor;;)if(r=v.cursor,v.bra=r,v.eq_s(1,"ß"))v.ket=v.cursor,v.slice_from("ss");else{if(r>=v.limit)break;v.cursor=r+1}for(v.cursor=t;;)for(n=v.cursor;;){if(i=v.cursor,v.in_grouping(p,97,252)){if(s=v.cursor,v.bra=s,e("u","U",i))break;if(v.cursor=s,e("y","Y",i))break}if(i>=v.limit)return void(v.cursor=n);v.cursor=i+1}}function s(){for(;!v.in_grouping(p,97,252);){if(v.cursor>=v.limit)return!0;v.cursor++}for(;!v.out_grouping(p,97,252);){if(v.cursor>=v.limit)return!0;v.cursor++}return!1}function t(){m=v.limit,l=m;var e=v.cursor+3;0<=e&&e<=v.limit&&(d=e,s()||(m=v.cursor,m=v.limit)return;v.cursor++}}}function c(){return m<=v.cursor}function u(){return l<=v.cursor}function a(){var e,r,n,i,s=v.limit-v.cursor;if(v.ket=v.cursor,(e=v.find_among_b(w,7))&&(v.bra=v.cursor,c()))switch(e){case 1:v.slice_del();break;case 2:v.slice_del(),v.ket=v.cursor,v.eq_s_b(1,"s")&&(v.bra=v.cursor,v.eq_s_b(3,"nis")&&v.slice_del());break;case 3:v.in_grouping_b(g,98,116)&&v.slice_del()}if(v.cursor=v.limit-s,v.ket=v.cursor,(e=v.find_among_b(f,4))&&(v.bra=v.cursor,c()))switch(e){case 1:v.slice_del();break;case 2:if(v.in_grouping_b(k,98,116)){var 
t=v.cursor-3;v.limit_backward<=t&&t<=v.limit&&(v.cursor=t,v.slice_del())}}if(v.cursor=v.limit-s,v.ket=v.cursor,(e=v.find_among_b(_,8))&&(v.bra=v.cursor,u()))switch(e){case 1:v.slice_del(),v.ket=v.cursor,v.eq_s_b(2,"ig")&&(v.bra=v.cursor,r=v.limit-v.cursor,v.eq_s_b(1,"e")||(v.cursor=v.limit-r,u()&&v.slice_del()));break;case 2:n=v.limit-v.cursor,v.eq_s_b(1,"e")||(v.cursor=v.limit-n,v.slice_del());break;case 3:if(v.slice_del(),v.ket=v.cursor,i=v.limit-v.cursor,!v.eq_s_b(2,"er")&&(v.cursor=v.limit-i,!v.eq_s_b(2,"en")))break;v.bra=v.cursor,c()&&v.slice_del();break;case 4:v.slice_del(),v.ket=v.cursor,e=v.find_among_b(b,2),e&&(v.bra=v.cursor,u()&&1==e&&v.slice_del())}}var d,l,m,h=[new r("",-1,6),new r("U",0,2),new r("Y",0,1),new r("ä",0,3),new r("ö",0,4),new r("ü",0,5)],w=[new r("e",-1,2),new r("em",-1,1),new r("en",-1,2),new r("ern",-1,1),new r("er",-1,1),new r("s",-1,3),new r("es",5,2)],f=[new r("en",-1,1),new r("er",-1,1),new r("st",-1,2),new r("est",2,1)],b=[new r("ig",-1,1),new r("lich",-1,1)],_=[new r("end",-1,1),new r("ig",-1,2),new r("ung",-1,1),new r("lich",-1,3),new r("isch",-1,2),new r("ik",-1,2),new r("heit",-1,3),new r("keit",-1,4)],p=[17,65,16,1,0,0,0,0,0,0,0,0,0,0,0,0,8,0,32,8],g=[117,30,5],k=[117,30,4],v=new n;this.setCurrent=function(e){v.setCurrent(e)},this.getCurrent=function(){return v.getCurrent()},this.stem=function(){var e=v.cursor;return i(),v.cursor=e,t(),v.limit_backward=e,v.cursor=v.limit,a(),v.cursor=v.limit_backward,o(),!0}};return function(e){return"function"==typeof e.update?e.update(function(e){return i.setCurrent(e),i.stem(),i.getCurrent()}):(i.setCurrent(e),i.stem(),i.getCurrent())}}(),e.Pipeline.registerFunction(e.de.stemmer,"stemmer-de"),e.de.stopWordFilter=e.generateStopWordFilter("aber alle allem allen aller alles als also am an ander andere anderem anderen anderer anderes anderm andern anderr anders auch auf aus bei bin bis bist da damit dann das dasselbe dazu daß dein deine deinem deinen deiner deines dem demselben den denn 
denselben der derer derselbe derselben des desselben dessen dich die dies diese dieselbe dieselben diesem diesen dieser dieses dir doch dort du durch ein eine einem einen einer eines einig einige einigem einigen einiger einiges einmal er es etwas euch euer eure eurem euren eurer eures für gegen gewesen hab habe haben hat hatte hatten hier hin hinter ich ihm ihn ihnen ihr ihre ihrem ihren ihrer ihres im in indem ins ist jede jedem jeden jeder jedes jene jenem jenen jener jenes jetzt kann kein keine keinem keinen keiner keines können könnte machen man manche manchem manchen mancher manches mein meine meinem meinen meiner meines mich mir mit muss musste nach nicht nichts noch nun nur ob oder ohne sehr sein seine seinem seinen seiner seines selbst sich sie sind so solche solchem solchen solcher solches soll sollte sondern sonst um und uns unse unsem unsen unser unses unter viel vom von vor war waren warst was weg weil weiter welche welchem welchen welcher welches wenn werde werden wie wieder will wir wird wirst wo wollen wollte während würde würden zu zum zur zwar zwischen über".split(" ")),e.Pipeline.registerFunction(e.de.stopWordFilter,"stopWordFilter-de")}}); \ No newline at end of file diff --git a/assets/javascripts/lunr/min/lunr.du.min.js b/assets/javascripts/lunr/min/lunr.du.min.js new file mode 100644 index 000000000..49a0f3f0a --- /dev/null +++ b/assets/javascripts/lunr/min/lunr.du.min.js @@ -0,0 +1,18 @@ +/*! + * Lunr languages, `Dutch` language + * https://github.com/MihaiValentin/lunr-languages + * + * Copyright 2014, Mihai Valentin + * http://www.mozilla.org/MPL/ + */ +/*! 
+ * based on + * Snowball JavaScript Library v0.3 + * http://code.google.com/p/urim/ + * http://snowball.tartarus.org/ + * + * Copyright 2010, Oleg Mazko + * http://www.mozilla.org/MPL/ + */ + +!function(e,r){"function"==typeof define&&define.amd?define(r):"object"==typeof exports?module.exports=r():r()(e.lunr)}(this,function(){return function(e){if(void 0===e)throw new Error("Lunr is not present. Please include / require Lunr before this script.");if(void 0===e.stemmerSupport)throw new Error("Lunr stemmer support is not present. Please include / require Lunr stemmer support before this script.");console.warn('[Lunr Languages] Please use the "nl" instead of the "du". The "nl" code is the standard code for Dutch language, and "du" will be removed in the next major versions.'),e.du=function(){this.pipeline.reset(),this.pipeline.add(e.du.trimmer,e.du.stopWordFilter,e.du.stemmer),this.searchPipeline&&(this.searchPipeline.reset(),this.searchPipeline.add(e.du.stemmer))},e.du.wordCharacters="A-Za-zªºÀ-ÖØ-öø-ʸˠ-ˤᴀ-ᴥᴬ-ᵜᵢ-ᵥᵫ-ᵷᵹ-ᶾḀ-ỿⁱⁿₐ-ₜKÅℲⅎⅠ-ↈⱠ-ⱿꜢ-ꞇꞋ-ꞭꞰ-ꞷꟷ-ꟿꬰ-ꭚꭜ-ꭤff-stA-Za-z",e.du.trimmer=e.trimmerSupport.generateTrimmer(e.du.wordCharacters),e.Pipeline.registerFunction(e.du.trimmer,"trimmer-du"),e.du.stemmer=function(){var r=e.stemmerSupport.Among,i=e.stemmerSupport.SnowballProgram,n=new function(){function e(){for(var e,r,i,o=C.cursor;;){if(C.bra=C.cursor,e=C.find_among(b,11))switch(C.ket=C.cursor,e){case 1:C.slice_from("a");continue;case 2:C.slice_from("e");continue;case 3:C.slice_from("i");continue;case 4:C.slice_from("o");continue;case 5:C.slice_from("u");continue;case 6:if(C.cursor>=C.limit)break;C.cursor++;continue}break}for(C.cursor=o,C.bra=o,C.eq_s(1,"y")?(C.ket=C.cursor,C.slice_from("Y")):C.cursor=o;;)if(r=C.cursor,C.in_grouping(q,97,232)){if(i=C.cursor,C.bra=i,C.eq_s(1,"i"))C.ket=C.cursor,C.in_grouping(q,97,232)&&(C.slice_from("I"),C.cursor=r);else if(C.cursor=i,C.eq_s(1,"y"))C.ket=C.cursor,C.slice_from("Y"),C.cursor=r;else if(n(r))break}else 
if(n(r))break}function n(e){return C.cursor=e,e>=C.limit||(C.cursor++,!1)}function o(){_=C.limit,f=_,t()||(_=C.cursor,_<3&&(_=3),t()||(f=C.cursor))}function t(){for(;!C.in_grouping(q,97,232);){if(C.cursor>=C.limit)return!0;C.cursor++}for(;!C.out_grouping(q,97,232);){if(C.cursor>=C.limit)return!0;C.cursor++}return!1}function s(){for(var e;;)if(C.bra=C.cursor,e=C.find_among(p,3))switch(C.ket=C.cursor,e){case 1:C.slice_from("y");break;case 2:C.slice_from("i");break;case 3:if(C.cursor>=C.limit)return;C.cursor++}}function u(){return _<=C.cursor}function c(){return f<=C.cursor}function a(){var e=C.limit-C.cursor;C.find_among_b(g,3)&&(C.cursor=C.limit-e,C.ket=C.cursor,C.cursor>C.limit_backward&&(C.cursor--,C.bra=C.cursor,C.slice_del()))}function l(){var e;w=!1,C.ket=C.cursor,C.eq_s_b(1,"e")&&(C.bra=C.cursor,u()&&(e=C.limit-C.cursor,C.out_grouping_b(q,97,232)&&(C.cursor=C.limit-e,C.slice_del(),w=!0,a())))}function m(){var e;u()&&(e=C.limit-C.cursor,C.out_grouping_b(q,97,232)&&(C.cursor=C.limit-e,C.eq_s_b(3,"gem")||(C.cursor=C.limit-e,C.slice_del(),a())))}function d(){var e,r,i,n,o,t,s=C.limit-C.cursor;if(C.ket=C.cursor,e=C.find_among_b(h,5))switch(C.bra=C.cursor,e){case 1:u()&&C.slice_from("heid");break;case 2:m();break;case 3:u()&&C.out_grouping_b(z,97,232)&&C.slice_del()}if(C.cursor=C.limit-s,l(),C.cursor=C.limit-s,C.ket=C.cursor,C.eq_s_b(4,"heid")&&(C.bra=C.cursor,c()&&(r=C.limit-C.cursor,C.eq_s_b(1,"c")||(C.cursor=C.limit-r,C.slice_del(),C.ket=C.cursor,C.eq_s_b(2,"en")&&(C.bra=C.cursor,m())))),C.cursor=C.limit-s,C.ket=C.cursor,e=C.find_among_b(k,6))switch(C.bra=C.cursor,e){case 1:if(c()){if(C.slice_del(),i=C.limit-C.cursor,C.ket=C.cursor,C.eq_s_b(2,"ig")&&(C.bra=C.cursor,c()&&(n=C.limit-C.cursor,!C.eq_s_b(1,"e")))){C.cursor=C.limit-n,C.slice_del();break}C.cursor=C.limit-i,a()}break;case 2:c()&&(o=C.limit-C.cursor,C.eq_s_b(1,"e")||(C.cursor=C.limit-o,C.slice_del()));break;case 3:c()&&(C.slice_del(),l());break;case 4:c()&&C.slice_del();break;case 
5:c()&&w&&C.slice_del()}C.cursor=C.limit-s,C.out_grouping_b(j,73,232)&&(t=C.limit-C.cursor,C.find_among_b(v,4)&&C.out_grouping_b(q,97,232)&&(C.cursor=C.limit-t,C.ket=C.cursor,C.cursor>C.limit_backward&&(C.cursor--,C.bra=C.cursor,C.slice_del())))}var f,_,w,b=[new r("",-1,6),new r("á",0,1),new r("ä",0,1),new r("é",0,2),new r("ë",0,2),new r("í",0,3),new r("ï",0,3),new r("ó",0,4),new r("ö",0,4),new r("ú",0,5),new r("ü",0,5)],p=[new r("",-1,3),new r("I",0,2),new r("Y",0,1)],g=[new r("dd",-1,-1),new r("kk",-1,-1),new r("tt",-1,-1)],h=[new r("ene",-1,2),new r("se",-1,3),new r("en",-1,2),new r("heden",2,1),new r("s",-1,3)],k=[new r("end",-1,1),new r("ig",-1,2),new r("ing",-1,1),new r("lijk",-1,3),new r("baar",-1,4),new r("bar",-1,5)],v=[new r("aa",-1,-1),new r("ee",-1,-1),new r("oo",-1,-1),new r("uu",-1,-1)],q=[17,65,16,1,0,0,0,0,0,0,0,0,0,0,0,0,128],j=[1,0,0,17,65,16,1,0,0,0,0,0,0,0,0,0,0,0,0,128],z=[17,67,16,1,0,0,0,0,0,0,0,0,0,0,0,0,128],C=new i;this.setCurrent=function(e){C.setCurrent(e)},this.getCurrent=function(){return C.getCurrent()},this.stem=function(){var r=C.cursor;return e(),C.cursor=r,o(),C.limit_backward=r,C.cursor=C.limit,d(),C.cursor=C.limit_backward,s(),!0}};return function(e){return"function"==typeof e.update?e.update(function(e){return n.setCurrent(e),n.stem(),n.getCurrent()}):(n.setCurrent(e),n.stem(),n.getCurrent())}}(),e.Pipeline.registerFunction(e.du.stemmer,"stemmer-du"),e.du.stopWordFilter=e.generateStopWordFilter(" aan al alles als altijd andere ben bij daar dan dat de der deze die dit doch doen door dus een eens en er ge geen geweest haar had heb hebben heeft hem het hier hij hoe hun iemand iets ik in is ja je kan kon kunnen maar me meer men met mij mijn moet na naar niet niets nog nu of om omdat onder ons ook op over reeds te tegen toch toen tot u uit uw van veel voor want waren was wat werd wezen wie wil worden wordt zal ze zelf zich zij zijn zo zonder zou".split(" ")),e.Pipeline.registerFunction(e.du.stopWordFilter,"stopWordFilter-du")}}); \ 
No newline at end of file diff --git a/assets/javascripts/lunr/min/lunr.es.min.js b/assets/javascripts/lunr/min/lunr.es.min.js new file mode 100644 index 000000000..2989d3426 --- /dev/null +++ b/assets/javascripts/lunr/min/lunr.es.min.js @@ -0,0 +1,18 @@ +/*! + * Lunr languages, `Spanish` language + * https://github.com/MihaiValentin/lunr-languages + * + * Copyright 2014, Mihai Valentin + * http://www.mozilla.org/MPL/ + */ +/*! + * based on + * Snowball JavaScript Library v0.3 + * http://code.google.com/p/urim/ + * http://snowball.tartarus.org/ + * + * Copyright 2010, Oleg Mazko + * http://www.mozilla.org/MPL/ + */ + +!function(e,s){"function"==typeof define&&define.amd?define(s):"object"==typeof exports?module.exports=s():s()(e.lunr)}(this,function(){return function(e){if(void 0===e)throw new Error("Lunr is not present. Please include / require Lunr before this script.");if(void 0===e.stemmerSupport)throw new Error("Lunr stemmer support is not present. Please include / require Lunr stemmer support before this script.");e.es=function(){this.pipeline.reset(),this.pipeline.add(e.es.trimmer,e.es.stopWordFilter,e.es.stemmer),this.searchPipeline&&(this.searchPipeline.reset(),this.searchPipeline.add(e.es.stemmer))},e.es.wordCharacters="A-Za-zªºÀ-ÖØ-öø-ʸˠ-ˤᴀ-ᴥᴬ-ᵜᵢ-ᵥᵫ-ᵷᵹ-ᶾḀ-ỿⁱⁿₐ-ₜKÅℲⅎⅠ-ↈⱠ-ⱿꜢ-ꞇꞋ-ꞭꞰ-ꞷꟷ-ꟿꬰ-ꭚꭜ-ꭤff-stA-Za-z",e.es.trimmer=e.trimmerSupport.generateTrimmer(e.es.wordCharacters),e.Pipeline.registerFunction(e.es.trimmer,"trimmer-es"),e.es.stemmer=function(){var s=e.stemmerSupport.Among,r=e.stemmerSupport.SnowballProgram,n=new function(){function e(){if(A.out_grouping(x,97,252)){for(;!A.in_grouping(x,97,252);){if(A.cursor>=A.limit)return!0;A.cursor++}return!1}return!0}function n(){if(A.in_grouping(x,97,252)){var s=A.cursor;if(e()){if(A.cursor=s,!A.in_grouping(x,97,252))return!0;for(;!A.out_grouping(x,97,252);){if(A.cursor>=A.limit)return!0;A.cursor++}}return!1}return!0}function i(){var 
s,r=A.cursor;if(n()){if(A.cursor=r,!A.out_grouping(x,97,252))return;if(s=A.cursor,e()){if(A.cursor=s,!A.in_grouping(x,97,252)||A.cursor>=A.limit)return;A.cursor++}}g=A.cursor}function a(){for(;!A.in_grouping(x,97,252);){if(A.cursor>=A.limit)return!1;A.cursor++}for(;!A.out_grouping(x,97,252);){if(A.cursor>=A.limit)return!1;A.cursor++}return!0}function t(){var e=A.cursor;g=A.limit,p=g,v=g,i(),A.cursor=e,a()&&(p=A.cursor,a()&&(v=A.cursor))}function o(){for(var e;;){if(A.bra=A.cursor,e=A.find_among(k,6))switch(A.ket=A.cursor,e){case 1:A.slice_from("a");continue;case 2:A.slice_from("e");continue;case 3:A.slice_from("i");continue;case 4:A.slice_from("o");continue;case 5:A.slice_from("u");continue;case 6:if(A.cursor>=A.limit)break;A.cursor++;continue}break}}function u(){return g<=A.cursor}function w(){return p<=A.cursor}function c(){return v<=A.cursor}function m(){var e;if(A.ket=A.cursor,A.find_among_b(y,13)&&(A.bra=A.cursor,(e=A.find_among_b(q,11))&&u()))switch(e){case 1:A.bra=A.cursor,A.slice_from("iendo");break;case 2:A.bra=A.cursor,A.slice_from("ando");break;case 3:A.bra=A.cursor,A.slice_from("ar");break;case 4:A.bra=A.cursor,A.slice_from("er");break;case 5:A.bra=A.cursor,A.slice_from("ir");break;case 6:A.slice_del();break;case 7:A.eq_s_b(1,"u")&&A.slice_del()}}function l(e,s){if(!c())return!0;A.slice_del(),A.ket=A.cursor;var r=A.find_among_b(e,s);return r&&(A.bra=A.cursor,1==r&&c()&&A.slice_del()),!1}function d(e){return!c()||(A.slice_del(),A.ket=A.cursor,A.eq_s_b(2,e)&&(A.bra=A.cursor,c()&&A.slice_del()),!1)}function b(){var e;if(A.ket=A.cursor,e=A.find_among_b(S,46)){switch(A.bra=A.cursor,e){case 1:if(!c())return!1;A.slice_del();break;case 2:if(d("ic"))return!1;break;case 3:if(!c())return!1;A.slice_from("log");break;case 4:if(!c())return!1;A.slice_from("u");break;case 5:if(!c())return!1;A.slice_from("ente");break;case 
6:if(!w())return!1;A.slice_del(),A.ket=A.cursor,e=A.find_among_b(C,4),e&&(A.bra=A.cursor,c()&&(A.slice_del(),1==e&&(A.ket=A.cursor,A.eq_s_b(2,"at")&&(A.bra=A.cursor,c()&&A.slice_del()))));break;case 7:if(l(P,3))return!1;break;case 8:if(l(F,3))return!1;break;case 9:if(d("at"))return!1}return!0}return!1}function f(){var e,s;if(A.cursor>=g&&(s=A.limit_backward,A.limit_backward=g,A.ket=A.cursor,e=A.find_among_b(W,12),A.limit_backward=s,e)){if(A.bra=A.cursor,1==e){if(!A.eq_s_b(1,"u"))return!1;A.slice_del()}return!0}return!1}function _(){var e,s,r,n;if(A.cursor>=g&&(s=A.limit_backward,A.limit_backward=g,A.ket=A.cursor,e=A.find_among_b(L,96),A.limit_backward=s,e))switch(A.bra=A.cursor,e){case 1:r=A.limit-A.cursor,A.eq_s_b(1,"u")?(n=A.limit-A.cursor,A.eq_s_b(1,"g")?A.cursor=A.limit-n:A.cursor=A.limit-r):A.cursor=A.limit-r,A.bra=A.cursor;case 2:A.slice_del()}}function h(){var e,s;if(A.ket=A.cursor,e=A.find_among_b(z,8))switch(A.bra=A.cursor,e){case 1:u()&&A.slice_del();break;case 2:u()&&(A.slice_del(),A.ket=A.cursor,A.eq_s_b(1,"u")&&(A.bra=A.cursor,s=A.limit-A.cursor,A.eq_s_b(1,"g")&&(A.cursor=A.limit-s,u()&&A.slice_del())))}}var v,p,g,k=[new s("",-1,6),new s("á",0,1),new s("é",0,2),new s("í",0,3),new s("ó",0,4),new s("ú",0,5)],y=[new s("la",-1,-1),new s("sela",0,-1),new s("le",-1,-1),new s("me",-1,-1),new s("se",-1,-1),new s("lo",-1,-1),new s("selo",5,-1),new s("las",-1,-1),new s("selas",7,-1),new s("les",-1,-1),new s("los",-1,-1),new s("selos",10,-1),new s("nos",-1,-1)],q=[new s("ando",-1,6),new s("iendo",-1,6),new s("yendo",-1,7),new s("ándo",-1,2),new s("iéndo",-1,1),new s("ar",-1,6),new s("er",-1,6),new s("ir",-1,6),new s("ár",-1,3),new s("ér",-1,4),new s("ír",-1,5)],C=[new s("ic",-1,-1),new s("ad",-1,-1),new s("os",-1,-1),new s("iv",-1,1)],P=[new s("able",-1,1),new s("ible",-1,1),new s("ante",-1,1)],F=[new s("ic",-1,1),new s("abil",-1,1),new s("iv",-1,1)],S=[new s("ica",-1,1),new s("ancia",-1,2),new s("encia",-1,5),new s("adora",-1,2),new s("osa",-1,1),new 
s("ista",-1,1),new s("iva",-1,9),new s("anza",-1,1),new s("logía",-1,3),new s("idad",-1,8),new s("able",-1,1),new s("ible",-1,1),new s("ante",-1,2),new s("mente",-1,7),new s("amente",13,6),new s("ación",-1,2),new s("ución",-1,4),new s("ico",-1,1),new s("ismo",-1,1),new s("oso",-1,1),new s("amiento",-1,1),new s("imiento",-1,1),new s("ivo",-1,9),new s("ador",-1,2),new s("icas",-1,1),new s("ancias",-1,2),new s("encias",-1,5),new s("adoras",-1,2),new s("osas",-1,1),new s("istas",-1,1),new s("ivas",-1,9),new s("anzas",-1,1),new s("logías",-1,3),new s("idades",-1,8),new s("ables",-1,1),new s("ibles",-1,1),new s("aciones",-1,2),new s("uciones",-1,4),new s("adores",-1,2),new s("antes",-1,2),new s("icos",-1,1),new s("ismos",-1,1),new s("osos",-1,1),new s("amientos",-1,1),new s("imientos",-1,1),new s("ivos",-1,9)],W=[new s("ya",-1,1),new s("ye",-1,1),new s("yan",-1,1),new s("yen",-1,1),new s("yeron",-1,1),new s("yendo",-1,1),new s("yo",-1,1),new s("yas",-1,1),new s("yes",-1,1),new s("yais",-1,1),new s("yamos",-1,1),new s("yó",-1,1)],L=[new s("aba",-1,2),new s("ada",-1,2),new s("ida",-1,2),new s("ara",-1,2),new s("iera",-1,2),new s("ía",-1,2),new s("aría",5,2),new s("ería",5,2),new s("iría",5,2),new s("ad",-1,2),new s("ed",-1,2),new s("id",-1,2),new s("ase",-1,2),new s("iese",-1,2),new s("aste",-1,2),new s("iste",-1,2),new s("an",-1,2),new s("aban",16,2),new s("aran",16,2),new s("ieran",16,2),new s("ían",16,2),new s("arían",20,2),new s("erían",20,2),new s("irían",20,2),new s("en",-1,1),new s("asen",24,2),new s("iesen",24,2),new s("aron",-1,2),new s("ieron",-1,2),new s("arán",-1,2),new s("erán",-1,2),new s("irán",-1,2),new s("ado",-1,2),new s("ido",-1,2),new s("ando",-1,2),new s("iendo",-1,2),new s("ar",-1,2),new s("er",-1,2),new s("ir",-1,2),new s("as",-1,2),new s("abas",39,2),new s("adas",39,2),new s("idas",39,2),new s("aras",39,2),new s("ieras",39,2),new s("ías",39,2),new s("arías",45,2),new s("erías",45,2),new s("irías",45,2),new s("es",-1,1),new s("ases",49,2),new 
s("ieses",49,2),new s("abais",-1,2),new s("arais",-1,2),new s("ierais",-1,2),new s("íais",-1,2),new s("aríais",55,2),new s("eríais",55,2),new s("iríais",55,2),new s("aseis",-1,2),new s("ieseis",-1,2),new s("asteis",-1,2),new s("isteis",-1,2),new s("áis",-1,2),new s("éis",-1,1),new s("aréis",64,2),new s("eréis",64,2),new s("iréis",64,2),new s("ados",-1,2),new s("idos",-1,2),new s("amos",-1,2),new s("ábamos",70,2),new s("áramos",70,2),new s("iéramos",70,2),new s("íamos",70,2),new s("aríamos",74,2),new s("eríamos",74,2),new s("iríamos",74,2),new s("emos",-1,1),new s("aremos",78,2),new s("eremos",78,2),new s("iremos",78,2),new s("ásemos",78,2),new s("iésemos",78,2),new s("imos",-1,2),new s("arás",-1,2),new s("erás",-1,2),new s("irás",-1,2),new s("ís",-1,2),new s("ará",-1,2),new s("erá",-1,2),new s("irá",-1,2),new s("aré",-1,2),new s("eré",-1,2),new s("iré",-1,2),new s("ió",-1,2)],z=[new s("a",-1,1),new s("e",-1,2),new s("o",-1,1),new s("os",-1,1),new s("á",-1,1),new s("é",-1,2),new s("í",-1,1),new s("ó",-1,1)],x=[17,65,16,0,0,0,0,0,0,0,0,0,0,0,0,0,1,17,4,10],A=new r;this.setCurrent=function(e){A.setCurrent(e)},this.getCurrent=function(){return A.getCurrent()},this.stem=function(){var e=A.cursor;return t(),A.limit_backward=e,A.cursor=A.limit,m(),A.cursor=A.limit,b()||(A.cursor=A.limit,f()||(A.cursor=A.limit,_())),A.cursor=A.limit,h(),A.cursor=A.limit_backward,o(),!0}};return function(e){return"function"==typeof e.update?e.update(function(e){return n.setCurrent(e),n.stem(),n.getCurrent()}):(n.setCurrent(e),n.stem(),n.getCurrent())}}(),e.Pipeline.registerFunction(e.es.stemmer,"stemmer-es"),e.es.stopWordFilter=e.generateStopWordFilter("a al algo algunas algunos ante antes como con contra cual cuando de del desde donde durante e el ella ellas ellos en entre era erais eran eras eres es esa esas ese eso esos esta estaba estabais estaban estabas estad estada estadas estado estados estamos estando estar estaremos estará estarán estarás estaré estaréis estaría estaríais 
estaríamos estarían estarías estas este estemos esto estos estoy estuve estuviera estuvierais estuvieran estuvieras estuvieron estuviese estuvieseis estuviesen estuvieses estuvimos estuviste estuvisteis estuviéramos estuviésemos estuvo está estábamos estáis están estás esté estéis estén estés fue fuera fuerais fueran fueras fueron fuese fueseis fuesen fueses fui fuimos fuiste fuisteis fuéramos fuésemos ha habida habidas habido habidos habiendo habremos habrá habrán habrás habré habréis habría habríais habríamos habrían habrías habéis había habíais habíamos habían habías han has hasta hay haya hayamos hayan hayas hayáis he hemos hube hubiera hubierais hubieran hubieras hubieron hubiese hubieseis hubiesen hubieses hubimos hubiste hubisteis hubiéramos hubiésemos hubo la las le les lo los me mi mis mucho muchos muy más mí mía mías mío míos nada ni no nos nosotras nosotros nuestra nuestras nuestro nuestros o os otra otras otro otros para pero poco por porque que quien quienes qué se sea seamos sean seas seremos será serán serás seré seréis sería seríais seríamos serían serías seáis sido siendo sin sobre sois somos son soy su sus suya suyas suyo suyos sí también tanto te tendremos tendrá tendrán tendrás tendré tendréis tendría tendríais tendríamos tendrían tendrías tened tenemos tenga tengamos tengan tengas tengo tengáis tenida tenidas tenido tenidos teniendo tenéis tenía teníais teníamos tenían tenías ti tiene tienen tienes todo todos tu tus tuve tuviera tuvierais tuvieran tuvieras tuvieron tuviese tuvieseis tuviesen tuvieses tuvimos tuviste tuvisteis tuviéramos tuviésemos tuvo tuya tuyas tuyo tuyos tú un una uno unos vosotras vosotros vuestra vuestras vuestro vuestros y ya yo él éramos".split(" ")),e.Pipeline.registerFunction(e.es.stopWordFilter,"stopWordFilter-es")}}); \ No newline at end of file diff --git a/assets/javascripts/lunr/min/lunr.fi.min.js b/assets/javascripts/lunr/min/lunr.fi.min.js new file mode 100644 index 000000000..29f5dfcea --- /dev/null +++ 
b/assets/javascripts/lunr/min/lunr.fi.min.js @@ -0,0 +1,18 @@ +/*! + * Lunr languages, `Finnish` language + * https://github.com/MihaiValentin/lunr-languages + * + * Copyright 2014, Mihai Valentin + * http://www.mozilla.org/MPL/ + */ +/*! + * based on + * Snowball JavaScript Library v0.3 + * http://code.google.com/p/urim/ + * http://snowball.tartarus.org/ + * + * Copyright 2010, Oleg Mazko + * http://www.mozilla.org/MPL/ + */ + +!function(i,e){"function"==typeof define&&define.amd?define(e):"object"==typeof exports?module.exports=e():e()(i.lunr)}(this,function(){return function(i){if(void 0===i)throw new Error("Lunr is not present. Please include / require Lunr before this script.");if(void 0===i.stemmerSupport)throw new Error("Lunr stemmer support is not present. Please include / require Lunr stemmer support before this script.");i.fi=function(){this.pipeline.reset(),this.pipeline.add(i.fi.trimmer,i.fi.stopWordFilter,i.fi.stemmer),this.searchPipeline&&(this.searchPipeline.reset(),this.searchPipeline.add(i.fi.stemmer))},i.fi.wordCharacters="A-Za-zªºÀ-ÖØ-öø-ʸˠ-ˤᴀ-ᴥᴬ-ᵜᵢ-ᵥᵫ-ᵷᵹ-ᶾḀ-ỿⁱⁿₐ-ₜKÅℲⅎⅠ-ↈⱠ-ⱿꜢ-ꞇꞋ-ꞭꞰ-ꞷꟷ-ꟿꬰ-ꭚꭜ-ꭤff-stA-Za-z",i.fi.trimmer=i.trimmerSupport.generateTrimmer(i.fi.wordCharacters),i.Pipeline.registerFunction(i.fi.trimmer,"trimmer-fi"),i.fi.stemmer=function(){var e=i.stemmerSupport.Among,r=i.stemmerSupport.SnowballProgram,n=new function(){function i(){f=A.limit,d=f,n()||(f=A.cursor,n()||(d=A.cursor))}function n(){for(var i;;){if(i=A.cursor,A.in_grouping(W,97,246))break;if(A.cursor=i,i>=A.limit)return!0;A.cursor++}for(A.cursor=i;!A.out_grouping(W,97,246);){if(A.cursor>=A.limit)return!0;A.cursor++}return!1}function t(){return d<=A.cursor}function s(){var i,e;if(A.cursor>=f)if(e=A.limit_backward,A.limit_backward=f,A.ket=A.cursor,i=A.find_among_b(h,10)){switch(A.bra=A.cursor,A.limit_backward=e,i){case 1:if(!A.in_grouping_b(x,97,246))return;break;case 2:if(!t())return}A.slice_del()}else A.limit_backward=e}function o(){var 
i,e,r;if(A.cursor>=f)if(e=A.limit_backward,A.limit_backward=f,A.ket=A.cursor,i=A.find_among_b(v,9))switch(A.bra=A.cursor,A.limit_backward=e,i){case 1:r=A.limit-A.cursor,A.eq_s_b(1,"k")||(A.cursor=A.limit-r,A.slice_del());break;case 2:A.slice_del(),A.ket=A.cursor,A.eq_s_b(3,"kse")&&(A.bra=A.cursor,A.slice_from("ksi"));break;case 3:A.slice_del();break;case 4:A.find_among_b(p,6)&&A.slice_del();break;case 5:A.find_among_b(g,6)&&A.slice_del();break;case 6:A.find_among_b(j,2)&&A.slice_del()}else A.limit_backward=e}function l(){return A.find_among_b(q,7)}function a(){return A.eq_s_b(1,"i")&&A.in_grouping_b(L,97,246)}function u(){var i,e,r;if(A.cursor>=f)if(e=A.limit_backward,A.limit_backward=f,A.ket=A.cursor,i=A.find_among_b(C,30)){switch(A.bra=A.cursor,A.limit_backward=e,i){case 1:if(!A.eq_s_b(1,"a"))return;break;case 2:case 9:if(!A.eq_s_b(1,"e"))return;break;case 3:if(!A.eq_s_b(1,"i"))return;break;case 4:if(!A.eq_s_b(1,"o"))return;break;case 5:if(!A.eq_s_b(1,"ä"))return;break;case 6:if(!A.eq_s_b(1,"ö"))return;break;case 7:if(r=A.limit-A.cursor,!l()&&(A.cursor=A.limit-r,!A.eq_s_b(2,"ie"))){A.cursor=A.limit-r;break}if(A.cursor=A.limit-r,A.cursor<=A.limit_backward){A.cursor=A.limit-r;break}A.cursor--,A.bra=A.cursor;break;case 8:if(!A.in_grouping_b(W,97,246)||!A.out_grouping_b(W,97,246))return}A.slice_del(),k=!0}else A.limit_backward=e}function c(){var i,e,r;if(A.cursor>=d)if(e=A.limit_backward,A.limit_backward=d,A.ket=A.cursor,i=A.find_among_b(P,14)){if(A.bra=A.cursor,A.limit_backward=e,1==i){if(r=A.limit-A.cursor,A.eq_s_b(2,"po"))return;A.cursor=A.limit-r}A.slice_del()}else A.limit_backward=e}function m(){var i;A.cursor>=f&&(i=A.limit_backward,A.limit_backward=f,A.ket=A.cursor,A.find_among_b(F,2)?(A.bra=A.cursor,A.limit_backward=i,A.slice_del()):A.limit_backward=i)}function w(){var 
i,e,r,n,t,s;if(A.cursor>=f){if(e=A.limit_backward,A.limit_backward=f,A.ket=A.cursor,A.eq_s_b(1,"t")&&(A.bra=A.cursor,r=A.limit-A.cursor,A.in_grouping_b(W,97,246)&&(A.cursor=A.limit-r,A.slice_del(),A.limit_backward=e,n=A.limit-A.cursor,A.cursor>=d&&(A.cursor=d,t=A.limit_backward,A.limit_backward=A.cursor,A.cursor=A.limit-n,A.ket=A.cursor,i=A.find_among_b(S,2))))){if(A.bra=A.cursor,A.limit_backward=t,1==i){if(s=A.limit-A.cursor,A.eq_s_b(2,"po"))return;A.cursor=A.limit-s}return void A.slice_del()}A.limit_backward=e}}function _(){var i,e,r,n;if(A.cursor>=f){for(i=A.limit_backward,A.limit_backward=f,e=A.limit-A.cursor,l()&&(A.cursor=A.limit-e,A.ket=A.cursor,A.cursor>A.limit_backward&&(A.cursor--,A.bra=A.cursor,A.slice_del())),A.cursor=A.limit-e,A.ket=A.cursor,A.in_grouping_b(y,97,228)&&(A.bra=A.cursor,A.out_grouping_b(W,97,246)&&A.slice_del()),A.cursor=A.limit-e,A.ket=A.cursor,A.eq_s_b(1,"j")&&(A.bra=A.cursor,r=A.limit-A.cursor,A.eq_s_b(1,"o")?A.slice_del():(A.cursor=A.limit-r,A.eq_s_b(1,"u")&&A.slice_del())),A.cursor=A.limit-e,A.ket=A.cursor,A.eq_s_b(1,"o")&&(A.bra=A.cursor,A.eq_s_b(1,"j")&&A.slice_del()),A.cursor=A.limit-e,A.limit_backward=i;;){if(n=A.limit-A.cursor,A.out_grouping_b(W,97,246)){A.cursor=A.limit-n;break}if(A.cursor=A.limit-n,A.cursor<=A.limit_backward)return;A.cursor--}A.ket=A.cursor,A.cursor>A.limit_backward&&(A.cursor--,A.bra=A.cursor,b=A.slice_to(),A.eq_v_b(b)&&A.slice_del())}}var k,b,d,f,h=[new e("pa",-1,1),new e("sti",-1,2),new e("kaan",-1,1),new e("han",-1,1),new e("kin",-1,1),new e("hän",-1,1),new e("kään",-1,1),new e("ko",-1,1),new e("pä",-1,1),new e("kö",-1,1)],p=[new e("lla",-1,-1),new e("na",-1,-1),new e("ssa",-1,-1),new e("ta",-1,-1),new e("lta",3,-1),new e("sta",3,-1)],g=[new e("llä",-1,-1),new e("nä",-1,-1),new e("ssä",-1,-1),new e("tä",-1,-1),new e("ltä",3,-1),new e("stä",3,-1)],j=[new e("lle",-1,-1),new e("ine",-1,-1)],v=[new e("nsa",-1,3),new e("mme",-1,3),new e("nne",-1,3),new e("ni",-1,2),new e("si",-1,1),new e("an",-1,4),new 
e("en",-1,6),new e("än",-1,5),new e("nsä",-1,3)],q=[new e("aa",-1,-1),new e("ee",-1,-1),new e("ii",-1,-1),new e("oo",-1,-1),new e("uu",-1,-1),new e("ää",-1,-1),new e("öö",-1,-1)],C=[new e("a",-1,8),new e("lla",0,-1),new e("na",0,-1),new e("ssa",0,-1),new e("ta",0,-1),new e("lta",4,-1),new e("sta",4,-1),new e("tta",4,9),new e("lle",-1,-1),new e("ine",-1,-1),new e("ksi",-1,-1),new e("n",-1,7),new e("han",11,1),new e("den",11,-1,a),new e("seen",11,-1,l),new e("hen",11,2),new e("tten",11,-1,a),new e("hin",11,3),new e("siin",11,-1,a),new e("hon",11,4),new e("hän",11,5),new e("hön",11,6),new e("ä",-1,8),new e("llä",22,-1),new e("nä",22,-1),new e("ssä",22,-1),new e("tä",22,-1),new e("ltä",26,-1),new e("stä",26,-1),new e("ttä",26,9)],P=[new e("eja",-1,-1),new e("mma",-1,1),new e("imma",1,-1),new e("mpa",-1,1),new e("impa",3,-1),new e("mmi",-1,1),new e("immi",5,-1),new e("mpi",-1,1),new e("impi",7,-1),new e("ejä",-1,-1),new e("mmä",-1,1),new e("immä",10,-1),new e("mpä",-1,1),new e("impä",12,-1)],F=[new e("i",-1,-1),new e("j",-1,-1)],S=[new e("mma",-1,1),new e("imma",0,-1)],y=[17,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,8],W=[17,65,16,1,0,0,0,0,0,0,0,0,0,0,0,0,8,0,32],L=[17,65,16,0,0,0,0,0,0,0,0,0,0,0,0,0,8,0,32],x=[17,97,24,1,0,0,0,0,0,0,0,0,0,0,0,0,8,0,32],A=new r;this.setCurrent=function(i){A.setCurrent(i)},this.getCurrent=function(){return A.getCurrent()},this.stem=function(){var e=A.cursor;return i(),k=!1,A.limit_backward=e,A.cursor=A.limit,s(),A.cursor=A.limit,o(),A.cursor=A.limit,u(),A.cursor=A.limit,c(),A.cursor=A.limit,k?(m(),A.cursor=A.limit):(A.cursor=A.limit,w(),A.cursor=A.limit),_(),!0}};return function(i){return"function"==typeof i.update?i.update(function(i){return n.setCurrent(i),n.stem(),n.getCurrent()}):(n.setCurrent(i),n.stem(),n.getCurrent())}}(),i.Pipeline.registerFunction(i.fi.stemmer,"stemmer-fi"),i.fi.stopWordFilter=i.generateStopWordFilter("ei eivät emme en et ette että he heidän heidät heihin heille heillä heiltä heissä heistä heitä hän häneen hänelle hänellä 
häneltä hänen hänessä hänestä hänet häntä itse ja johon joiden joihin joiksi joilla joille joilta joina joissa joista joita joka joksi jolla jolle jolta jona jonka jos jossa josta jota jotka kanssa keiden keihin keiksi keille keillä keiltä keinä keissä keistä keitä keneen keneksi kenelle kenellä keneltä kenen kenenä kenessä kenestä kenet ketkä ketkä ketä koska kuin kuka kun me meidän meidät meihin meille meillä meiltä meissä meistä meitä mihin miksi mikä mille millä miltä minkä minkä minua minulla minulle minulta minun minussa minusta minut minuun minä minä missä mistä mitkä mitä mukaan mutta ne niiden niihin niiksi niille niillä niiltä niin niin niinä niissä niistä niitä noiden noihin noiksi noilla noille noilta noin noina noissa noista noita nuo nyt näiden näihin näiksi näille näillä näiltä näinä näissä näistä näitä nämä ole olemme olen olet olette oli olimme olin olisi olisimme olisin olisit olisitte olisivat olit olitte olivat olla olleet ollut on ovat poikki se sekä sen siihen siinä siitä siksi sille sillä sillä siltä sinua sinulla sinulle sinulta sinun sinussa sinusta sinut sinuun sinä sinä sitä tai te teidän teidät teihin teille teillä teiltä teissä teistä teitä tuo tuohon tuoksi tuolla tuolle tuolta tuon tuona tuossa tuosta tuota tähän täksi tälle tällä tältä tämä tämän tänä tässä tästä tätä vaan vai vaikka yli".split(" ")),i.Pipeline.registerFunction(i.fi.stopWordFilter,"stopWordFilter-fi")}}); \ No newline at end of file diff --git a/assets/javascripts/lunr/min/lunr.fr.min.js b/assets/javascripts/lunr/min/lunr.fr.min.js new file mode 100644 index 000000000..68cd0094a --- /dev/null +++ b/assets/javascripts/lunr/min/lunr.fr.min.js @@ -0,0 +1,18 @@ +/*! + * Lunr languages, `French` language + * https://github.com/MihaiValentin/lunr-languages + * + * Copyright 2014, Mihai Valentin + * http://www.mozilla.org/MPL/ + */ +/*! 
+ * based on + * Snowball JavaScript Library v0.3 + * http://code.google.com/p/urim/ + * http://snowball.tartarus.org/ + * + * Copyright 2010, Oleg Mazko + * http://www.mozilla.org/MPL/ + */ + +!function(e,r){"function"==typeof define&&define.amd?define(r):"object"==typeof exports?module.exports=r():r()(e.lunr)}(this,function(){return function(e){if(void 0===e)throw new Error("Lunr is not present. Please include / require Lunr before this script.");if(void 0===e.stemmerSupport)throw new Error("Lunr stemmer support is not present. Please include / require Lunr stemmer support before this script.");e.fr=function(){this.pipeline.reset(),this.pipeline.add(e.fr.trimmer,e.fr.stopWordFilter,e.fr.stemmer),this.searchPipeline&&(this.searchPipeline.reset(),this.searchPipeline.add(e.fr.stemmer))},e.fr.wordCharacters="A-Za-zªºÀ-ÖØ-öø-ʸˠ-ˤᴀ-ᴥᴬ-ᵜᵢ-ᵥᵫ-ᵷᵹ-ᶾḀ-ỿⁱⁿₐ-ₜKÅℲⅎⅠ-ↈⱠ-ⱿꜢ-ꞇꞋ-ꞭꞰ-ꞷꟷ-ꟿꬰ-ꭚꭜ-ꭤff-stA-Za-z",e.fr.trimmer=e.trimmerSupport.generateTrimmer(e.fr.wordCharacters),e.Pipeline.registerFunction(e.fr.trimmer,"trimmer-fr"),e.fr.stemmer=function(){var r=e.stemmerSupport.Among,s=e.stemmerSupport.SnowballProgram,i=new function(){function e(e,r,s){return!(!W.eq_s(1,e)||(W.ket=W.cursor,!W.in_grouping(F,97,251)))&&(W.slice_from(r),W.cursor=s,!0)}function i(e,r,s){return!!W.eq_s(1,e)&&(W.ket=W.cursor,W.slice_from(r),W.cursor=s,!0)}function n(){for(var r,s;;){if(r=W.cursor,W.in_grouping(F,97,251)){if(W.bra=W.cursor,s=W.cursor,e("u","U",r))continue;if(W.cursor=s,e("i","I",r))continue;if(W.cursor=s,i("y","Y",r))continue}if(W.cursor=r,W.bra=r,!e("y","Y",r)){if(W.cursor=r,W.eq_s(1,"q")&&(W.bra=W.cursor,i("u","U",r)))continue;if(W.cursor=r,r>=W.limit)return;W.cursor++}}}function t(){for(;!W.in_grouping(F,97,251);){if(W.cursor>=W.limit)return!0;W.cursor++}for(;!W.out_grouping(F,97,251);){if(W.cursor>=W.limit)return!0;W.cursor++}return!1}function u(){var 
e=W.cursor;if(q=W.limit,g=q,p=q,W.in_grouping(F,97,251)&&W.in_grouping(F,97,251)&&W.cursor=W.limit){W.cursor=q;break}W.cursor++}while(!W.in_grouping(F,97,251))}q=W.cursor,W.cursor=e,t()||(g=W.cursor,t()||(p=W.cursor))}function o(){for(var e,r;;){if(r=W.cursor,W.bra=r,!(e=W.find_among(h,4)))break;switch(W.ket=W.cursor,e){case 1:W.slice_from("i");break;case 2:W.slice_from("u");break;case 3:W.slice_from("y");break;case 4:if(W.cursor>=W.limit)return;W.cursor++}}}function c(){return q<=W.cursor}function a(){return g<=W.cursor}function l(){return p<=W.cursor}function w(){var e,r;if(W.ket=W.cursor,e=W.find_among_b(C,43)){switch(W.bra=W.cursor,e){case 1:if(!l())return!1;W.slice_del();break;case 2:if(!l())return!1;W.slice_del(),W.ket=W.cursor,W.eq_s_b(2,"ic")&&(W.bra=W.cursor,l()?W.slice_del():W.slice_from("iqU"));break;case 3:if(!l())return!1;W.slice_from("log");break;case 4:if(!l())return!1;W.slice_from("u");break;case 5:if(!l())return!1;W.slice_from("ent");break;case 6:if(!c())return!1;if(W.slice_del(),W.ket=W.cursor,e=W.find_among_b(z,6))switch(W.bra=W.cursor,e){case 1:l()&&(W.slice_del(),W.ket=W.cursor,W.eq_s_b(2,"at")&&(W.bra=W.cursor,l()&&W.slice_del()));break;case 2:l()?W.slice_del():a()&&W.slice_from("eux");break;case 3:l()&&W.slice_del();break;case 4:c()&&W.slice_from("i")}break;case 7:if(!l())return!1;if(W.slice_del(),W.ket=W.cursor,e=W.find_among_b(y,3))switch(W.bra=W.cursor,e){case 1:l()?W.slice_del():W.slice_from("abl");break;case 2:l()?W.slice_del():W.slice_from("iqU");break;case 3:l()&&W.slice_del()}break;case 8:if(!l())return!1;if(W.slice_del(),W.ket=W.cursor,W.eq_s_b(2,"at")&&(W.bra=W.cursor,l()&&(W.slice_del(),W.ket=W.cursor,W.eq_s_b(2,"ic")))){W.bra=W.cursor,l()?W.slice_del():W.slice_from("iqU");break}break;case 9:W.slice_from("eau");break;case 10:if(!a())return!1;W.slice_from("al");break;case 11:if(l())W.slice_del();else{if(!a())return!1;W.slice_from("eux")}break;case 12:if(!a()||!W.out_grouping_b(F,97,251))return!1;W.slice_del();break;case 13:return 
c()&&W.slice_from("ant"),!1;case 14:return c()&&W.slice_from("ent"),!1;case 15:return r=W.limit-W.cursor,W.in_grouping_b(F,97,251)&&c()&&(W.cursor=W.limit-r,W.slice_del()),!1}return!0}return!1}function f(){var e,r;if(W.cursor=q){if(s=W.limit_backward,W.limit_backward=q,W.ket=W.cursor,e=W.find_among_b(P,7))switch(W.bra=W.cursor,e){case 1:if(l()){if(i=W.limit-W.cursor,!W.eq_s_b(1,"s")&&(W.cursor=W.limit-i,!W.eq_s_b(1,"t")))break;W.slice_del()}break;case 2:W.slice_from("i");break;case 3:W.slice_del();break;case 4:W.eq_s_b(2,"gu")&&W.slice_del()}W.limit_backward=s}}function b(){var e=W.limit-W.cursor;W.find_among_b(U,5)&&(W.cursor=W.limit-e,W.ket=W.cursor,W.cursor>W.limit_backward&&(W.cursor--,W.bra=W.cursor,W.slice_del()))}function d(){for(var e,r=1;W.out_grouping_b(F,97,251);)r--;if(r<=0){if(W.ket=W.cursor,e=W.limit-W.cursor,!W.eq_s_b(1,"é")&&(W.cursor=W.limit-e,!W.eq_s_b(1,"è")))return;W.bra=W.cursor,W.slice_from("e")}}function k(){if(!w()&&(W.cursor=W.limit,!f()&&(W.cursor=W.limit,!m())))return W.cursor=W.limit,void _();W.cursor=W.limit,W.ket=W.cursor,W.eq_s_b(1,"Y")?(W.bra=W.cursor,W.slice_from("i")):(W.cursor=W.limit,W.eq_s_b(1,"ç")&&(W.bra=W.cursor,W.slice_from("c")))}var p,g,q,v=[new r("col",-1,-1),new r("par",-1,-1),new r("tap",-1,-1)],h=[new r("",-1,4),new r("I",0,1),new r("U",0,2),new r("Y",0,3)],z=[new r("iqU",-1,3),new r("abl",-1,3),new r("Ièr",-1,4),new r("ièr",-1,4),new r("eus",-1,2),new r("iv",-1,1)],y=[new r("ic",-1,2),new r("abil",-1,1),new r("iv",-1,3)],C=[new r("iqUe",-1,1),new r("atrice",-1,2),new r("ance",-1,1),new r("ence",-1,5),new r("logie",-1,3),new r("able",-1,1),new r("isme",-1,1),new r("euse",-1,11),new r("iste",-1,1),new r("ive",-1,8),new r("if",-1,8),new r("usion",-1,4),new r("ation",-1,2),new r("ution",-1,4),new r("ateur",-1,2),new r("iqUes",-1,1),new r("atrices",-1,2),new r("ances",-1,1),new r("ences",-1,5),new r("logies",-1,3),new r("ables",-1,1),new r("ismes",-1,1),new r("euses",-1,11),new r("istes",-1,1),new r("ives",-1,8),new 
r("ifs",-1,8),new r("usions",-1,4),new r("ations",-1,2),new r("utions",-1,4),new r("ateurs",-1,2),new r("ments",-1,15),new r("ements",30,6),new r("issements",31,12),new r("ités",-1,7),new r("ment",-1,15),new r("ement",34,6),new r("issement",35,12),new r("amment",34,13),new r("emment",34,14),new r("aux",-1,10),new r("eaux",39,9),new r("eux",-1,1),new r("ité",-1,7)],x=[new r("ira",-1,1),new r("ie",-1,1),new r("isse",-1,1),new r("issante",-1,1),new r("i",-1,1),new r("irai",4,1),new r("ir",-1,1),new r("iras",-1,1),new r("ies",-1,1),new r("îmes",-1,1),new r("isses",-1,1),new r("issantes",-1,1),new r("îtes",-1,1),new r("is",-1,1),new r("irais",13,1),new r("issais",13,1),new r("irions",-1,1),new r("issions",-1,1),new r("irons",-1,1),new r("issons",-1,1),new r("issants",-1,1),new r("it",-1,1),new r("irait",21,1),new r("issait",21,1),new r("issant",-1,1),new r("iraIent",-1,1),new r("issaIent",-1,1),new r("irent",-1,1),new r("issent",-1,1),new r("iront",-1,1),new r("ît",-1,1),new r("iriez",-1,1),new r("issiez",-1,1),new r("irez",-1,1),new r("issez",-1,1)],I=[new r("a",-1,3),new r("era",0,2),new r("asse",-1,3),new r("ante",-1,3),new r("ée",-1,2),new r("ai",-1,3),new r("erai",5,2),new r("er",-1,2),new r("as",-1,3),new r("eras",8,2),new r("âmes",-1,3),new r("asses",-1,3),new r("antes",-1,3),new r("âtes",-1,3),new r("ées",-1,2),new r("ais",-1,3),new r("erais",15,2),new r("ions",-1,1),new r("erions",17,2),new r("assions",17,3),new r("erons",-1,2),new r("ants",-1,3),new r("és",-1,2),new r("ait",-1,3),new r("erait",23,2),new r("ant",-1,3),new r("aIent",-1,3),new r("eraIent",26,2),new r("èrent",-1,2),new r("assent",-1,3),new r("eront",-1,2),new r("ât",-1,3),new r("ez",-1,2),new r("iez",32,2),new r("eriez",33,2),new r("assiez",33,3),new r("erez",32,2),new r("é",-1,2)],P=[new r("e",-1,3),new r("Ière",0,2),new r("ière",0,2),new r("ion",-1,1),new r("Ier",-1,2),new r("ier",-1,2),new r("ë",-1,4)],U=[new r("ell",-1,-1),new r("eill",-1,-1),new r("enn",-1,-1),new r("onn",-1,-1),new 
r("ett",-1,-1)],F=[17,65,16,1,0,0,0,0,0,0,0,0,0,0,0,128,130,103,8,5],S=[1,65,20,0,0,0,0,0,0,0,0,0,0,0,0,0,128],W=new s;this.setCurrent=function(e){W.setCurrent(e)},this.getCurrent=function(){return W.getCurrent()},this.stem=function(){var e=W.cursor;return n(),W.cursor=e,u(),W.limit_backward=e,W.cursor=W.limit,k(),W.cursor=W.limit,b(),W.cursor=W.limit,d(),W.cursor=W.limit_backward,o(),!0}};return function(e){return"function"==typeof e.update?e.update(function(e){return i.setCurrent(e),i.stem(),i.getCurrent()}):(i.setCurrent(e),i.stem(),i.getCurrent())}}(),e.Pipeline.registerFunction(e.fr.stemmer,"stemmer-fr"),e.fr.stopWordFilter=e.generateStopWordFilter("ai aie aient aies ait as au aura aurai auraient aurais aurait auras aurez auriez aurions aurons auront aux avaient avais avait avec avez aviez avions avons ayant ayez ayons c ce ceci celà ces cet cette d dans de des du elle en es est et eu eue eues eurent eus eusse eussent eusses eussiez eussions eut eux eûmes eût eûtes furent fus fusse fussent fusses fussiez fussions fut fûmes fût fûtes ici il ils j je l la le les leur leurs lui m ma mais me mes moi mon même n ne nos notre nous on ont ou par pas pour qu que quel quelle quelles quels qui s sa sans se sera serai seraient serais serait seras serez seriez serions serons seront ses soi soient sois soit sommes son sont soyez soyons suis sur t ta te tes toi ton tu un une vos votre vous y à étaient étais était étant étiez étions été étée étées étés êtes".split(" ")),e.Pipeline.registerFunction(e.fr.stopWordFilter,"stopWordFilter-fr")}}); \ No newline at end of file diff --git a/assets/javascripts/lunr/min/lunr.hi.min.js b/assets/javascripts/lunr/min/lunr.hi.min.js new file mode 100644 index 000000000..7dbc41402 --- /dev/null +++ b/assets/javascripts/lunr/min/lunr.hi.min.js @@ -0,0 +1 @@ +!function(e,r){"function"==typeof define&&define.amd?define(r):"object"==typeof exports?module.exports=r():r()(e.lunr)}(this,function(){return function(e){if(void 0===e)throw new 
Error("Lunr is not present. Please include / require Lunr before this script.");if(void 0===e.stemmerSupport)throw new Error("Lunr stemmer support is not present. Please include / require Lunr stemmer support before this script.");e.hi=function(){this.pipeline.reset(),this.pipeline.add(e.hi.trimmer,e.hi.stopWordFilter,e.hi.stemmer),this.searchPipeline&&(this.searchPipeline.reset(),this.searchPipeline.add(e.hi.stemmer))},e.hi.wordCharacters="ऀ-ःऄ-एऐ-टठ-यर-िी-ॏॐ-य़ॠ-९॰-ॿa-zA-Za-zA-Z0-90-9",e.hi.trimmer=e.trimmerSupport.generateTrimmer(e.hi.wordCharacters),e.Pipeline.registerFunction(e.hi.trimmer,"trimmer-hi"),e.hi.stopWordFilter=e.generateStopWordFilter("अत अपना अपनी अपने अभी अंदर आदि आप इत्यादि इन इनका इन्हीं इन्हें इन्हों इस इसका इसकी इसके इसमें इसी इसे उन उनका उनकी उनके उनको उन्हीं उन्हें उन्हों उस उसके उसी उसे एक एवं एस ऐसे और कई कर करता करते करना करने करें कहते कहा का काफ़ी कि कितना किन्हें किन्हों किया किर किस किसी किसे की कुछ कुल के को कोई कौन कौनसा गया घर जब जहाँ जा जितना जिन जिन्हें जिन्हों जिस जिसे जीधर जैसा जैसे जो तक तब तरह तिन तिन्हें तिन्हों तिस तिसे तो था थी थे दबारा दिया दुसरा दूसरे दो द्वारा न नके नहीं ना निहायत नीचे ने पर पहले पूरा पे फिर बनी बही बहुत बाद बाला बिलकुल भी भीतर मगर मानो मे में यदि यह यहाँ यही या यिह ये रखें रहा रहे ऱ्वासा लिए लिये लेकिन व वग़ैरह वर्ग वह वहाँ वहीं वाले वुह वे वो सकता सकते सबसे सभी साथ साबुत साभ सारा से सो संग ही हुआ हुई हुए है हैं हो होता होती होते होना होने".split(" ")),e.hi.stemmer=function(){return function(e){return"function"==typeof e.update?e.update(function(e){return e}):e}}();var r=e.wordcut;r.init(),e.hi.tokenizer=function(i){if(!arguments.length||null==i||void 0==i)return[];if(Array.isArray(i))return i.map(function(r){return isLunr2?new e.Token(r.toLowerCase()):r.toLowerCase()});var t=i.toString().toLowerCase().replace(/^\s+/,"");return r.cut(t).split("|")},e.Pipeline.registerFunction(e.hi.stemmer,"stemmer-hi"),e.Pipeline.registerFunction(e.hi.stopWordFilter,"stopWordFilter-hi")}}); \ No newline at end of file 
diff --git a/assets/javascripts/lunr/min/lunr.hu.min.js b/assets/javascripts/lunr/min/lunr.hu.min.js new file mode 100644 index 000000000..ed9d909f7 --- /dev/null +++ b/assets/javascripts/lunr/min/lunr.hu.min.js @@ -0,0 +1,18 @@ +/*! + * Lunr languages, `Hungarian` language + * https://github.com/MihaiValentin/lunr-languages + * + * Copyright 2014, Mihai Valentin + * http://www.mozilla.org/MPL/ + */ +/*! + * based on + * Snowball JavaScript Library v0.3 + * http://code.google.com/p/urim/ + * http://snowball.tartarus.org/ + * + * Copyright 2010, Oleg Mazko + * http://www.mozilla.org/MPL/ + */ + +!function(e,n){"function"==typeof define&&define.amd?define(n):"object"==typeof exports?module.exports=n():n()(e.lunr)}(this,function(){return function(e){if(void 0===e)throw new Error("Lunr is not present. Please include / require Lunr before this script.");if(void 0===e.stemmerSupport)throw new Error("Lunr stemmer support is not present. Please include / require Lunr stemmer support before this script.");e.hu=function(){this.pipeline.reset(),this.pipeline.add(e.hu.trimmer,e.hu.stopWordFilter,e.hu.stemmer),this.searchPipeline&&(this.searchPipeline.reset(),this.searchPipeline.add(e.hu.stemmer))},e.hu.wordCharacters="A-Za-zªºÀ-ÖØ-öø-ʸˠ-ˤᴀ-ᴥᴬ-ᵜᵢ-ᵥᵫ-ᵷᵹ-ᶾḀ-ỿⁱⁿₐ-ₜKÅℲⅎⅠ-ↈⱠ-ⱿꜢ-ꞇꞋ-ꞭꞰ-ꞷꟷ-ꟿꬰ-ꭚꭜ-ꭤff-stA-Za-z",e.hu.trimmer=e.trimmerSupport.generateTrimmer(e.hu.wordCharacters),e.Pipeline.registerFunction(e.hu.trimmer,"trimmer-hu"),e.hu.stemmer=function(){var n=e.stemmerSupport.Among,r=e.stemmerSupport.SnowballProgram,i=new function(){function e(){var e,n=L.cursor;if(d=L.limit,L.in_grouping(W,97,252))for(;;){if(e=L.cursor,L.out_grouping(W,97,252))return L.cursor=e,L.find_among(g,8)||(L.cursor=e,e=L.limit)return void(d=e);L.cursor++}if(L.cursor=n,L.out_grouping(W,97,252)){for(;!L.in_grouping(W,97,252);){if(L.cursor>=L.limit)return;L.cursor++}d=L.cursor}}function i(){return d<=L.cursor}function a(){var e;if(L.ket=L.cursor,(e=L.find_among_b(h,2))&&(L.bra=L.cursor,i()))switch(e){case 
1:L.slice_from("a");break;case 2:L.slice_from("e")}}function t(){var e=L.limit-L.cursor;return!!L.find_among_b(p,23)&&(L.cursor=L.limit-e,!0)}function s(){if(L.cursor>L.limit_backward){L.cursor--,L.ket=L.cursor;var e=L.cursor-1;L.limit_backward<=e&&e<=L.limit&&(L.cursor=e,L.bra=e,L.slice_del())}}function c(){var e;if(L.ket=L.cursor,(e=L.find_among_b(_,2))&&(L.bra=L.cursor,i())){if((1==e||2==e)&&!t())return;L.slice_del(),s()}}function o(){L.ket=L.cursor,L.find_among_b(v,44)&&(L.bra=L.cursor,i()&&(L.slice_del(),a()))}function w(){var e;if(L.ket=L.cursor,(e=L.find_among_b(z,3))&&(L.bra=L.cursor,i()))switch(e){case 1:L.slice_from("e");break;case 2:case 3:L.slice_from("a")}}function l(){var e;if(L.ket=L.cursor,(e=L.find_among_b(y,6))&&(L.bra=L.cursor,i()))switch(e){case 1:case 2:L.slice_del();break;case 3:L.slice_from("a");break;case 4:L.slice_from("e")}}function u(){var e;if(L.ket=L.cursor,(e=L.find_among_b(j,2))&&(L.bra=L.cursor,i())){if((1==e||2==e)&&!t())return;L.slice_del(),s()}}function m(){var e;if(L.ket=L.cursor,(e=L.find_among_b(C,7))&&(L.bra=L.cursor,i()))switch(e){case 1:L.slice_from("a");break;case 2:L.slice_from("e");break;case 3:case 4:case 5:case 6:case 7:L.slice_del()}}function k(){var e;if(L.ket=L.cursor,(e=L.find_among_b(P,12))&&(L.bra=L.cursor,i()))switch(e){case 1:case 4:case 7:case 9:L.slice_del();break;case 2:case 5:case 8:L.slice_from("e");break;case 3:case 6:L.slice_from("a")}}function f(){var e;if(L.ket=L.cursor,(e=L.find_among_b(F,31))&&(L.bra=L.cursor,i()))switch(e){case 1:case 4:case 7:case 8:case 9:case 12:case 13:case 16:case 17:case 18:L.slice_del();break;case 2:case 5:case 10:case 14:case 19:L.slice_from("a");break;case 3:case 6:case 11:case 15:case 20:L.slice_from("e")}}function b(){var e;if(L.ket=L.cursor,(e=L.find_among_b(S,42))&&(L.bra=L.cursor,i()))switch(e){case 1:case 4:case 5:case 6:case 9:case 10:case 11:case 14:case 15:case 16:case 17:case 20:case 21:case 24:case 25:case 26:case 29:L.slice_del();break;case 2:case 7:case 12:case 
18:case 22:case 27:L.slice_from("a");break;case 3:case 8:case 13:case 19:case 23:case 28:L.slice_from("e")}}var d,g=[new n("cs",-1,-1),new n("dzs",-1,-1),new n("gy",-1,-1),new n("ly",-1,-1),new n("ny",-1,-1),new n("sz",-1,-1),new n("ty",-1,-1),new n("zs",-1,-1)],h=[new n("á",-1,1),new n("é",-1,2)],p=[new n("bb",-1,-1),new n("cc",-1,-1),new n("dd",-1,-1),new n("ff",-1,-1),new n("gg",-1,-1),new n("jj",-1,-1),new n("kk",-1,-1),new n("ll",-1,-1),new n("mm",-1,-1),new n("nn",-1,-1),new n("pp",-1,-1),new n("rr",-1,-1),new n("ccs",-1,-1),new n("ss",-1,-1),new n("zzs",-1,-1),new n("tt",-1,-1),new n("vv",-1,-1),new n("ggy",-1,-1),new n("lly",-1,-1),new n("nny",-1,-1),new n("tty",-1,-1),new n("ssz",-1,-1),new n("zz",-1,-1)],_=[new n("al",-1,1),new n("el",-1,2)],v=[new n("ba",-1,-1),new n("ra",-1,-1),new n("be",-1,-1),new n("re",-1,-1),new n("ig",-1,-1),new n("nak",-1,-1),new n("nek",-1,-1),new n("val",-1,-1),new n("vel",-1,-1),new n("ul",-1,-1),new n("nál",-1,-1),new n("nél",-1,-1),new n("ból",-1,-1),new n("ról",-1,-1),new n("tól",-1,-1),new n("bõl",-1,-1),new n("rõl",-1,-1),new n("tõl",-1,-1),new n("ül",-1,-1),new n("n",-1,-1),new n("an",19,-1),new n("ban",20,-1),new n("en",19,-1),new n("ben",22,-1),new n("képpen",22,-1),new n("on",19,-1),new n("ön",19,-1),new n("képp",-1,-1),new n("kor",-1,-1),new n("t",-1,-1),new n("at",29,-1),new n("et",29,-1),new n("ként",29,-1),new n("anként",32,-1),new n("enként",32,-1),new n("onként",32,-1),new n("ot",29,-1),new n("ért",29,-1),new n("öt",29,-1),new n("hez",-1,-1),new n("hoz",-1,-1),new n("höz",-1,-1),new n("vá",-1,-1),new n("vé",-1,-1)],z=[new n("án",-1,2),new n("én",-1,1),new n("ánként",-1,3)],y=[new n("stul",-1,2),new n("astul",0,1),new n("ástul",0,3),new n("stül",-1,2),new n("estül",3,1),new n("éstül",3,4)],j=[new n("á",-1,1),new n("é",-1,2)],C=[new n("k",-1,7),new n("ak",0,4),new n("ek",0,6),new n("ok",0,5),new n("ák",0,1),new n("ék",0,2),new n("ök",0,3)],P=[new n("éi",-1,7),new n("áéi",0,6),new n("ééi",0,5),new n("é",-1,9),new 
n("ké",3,4),new n("aké",4,1),new n("eké",4,1),new n("oké",4,1),new n("áké",4,3),new n("éké",4,2),new n("öké",4,1),new n("éé",3,8)],F=[new n("a",-1,18),new n("ja",0,17),new n("d",-1,16),new n("ad",2,13),new n("ed",2,13),new n("od",2,13),new n("ád",2,14),new n("éd",2,15),new n("öd",2,13),new n("e",-1,18),new n("je",9,17),new n("nk",-1,4),new n("unk",11,1),new n("ánk",11,2),new n("énk",11,3),new n("ünk",11,1),new n("uk",-1,8),new n("juk",16,7),new n("ájuk",17,5),new n("ük",-1,8),new n("jük",19,7),new n("éjük",20,6),new n("m",-1,12),new n("am",22,9),new n("em",22,9),new n("om",22,9),new n("ám",22,10),new n("ém",22,11),new n("o",-1,18),new n("á",-1,19),new n("é",-1,20)],S=[new n("id",-1,10),new n("aid",0,9),new n("jaid",1,6),new n("eid",0,9),new n("jeid",3,6),new n("áid",0,7),new n("éid",0,8),new n("i",-1,15),new n("ai",7,14),new n("jai",8,11),new n("ei",7,14),new n("jei",10,11),new n("ái",7,12),new n("éi",7,13),new n("itek",-1,24),new n("eitek",14,21),new n("jeitek",15,20),new n("éitek",14,23),new n("ik",-1,29),new n("aik",18,26),new n("jaik",19,25),new n("eik",18,26),new n("jeik",21,25),new n("áik",18,27),new n("éik",18,28),new n("ink",-1,20),new n("aink",25,17),new n("jaink",26,16),new n("eink",25,17),new n("jeink",28,16),new n("áink",25,18),new n("éink",25,19),new n("aitok",-1,21),new n("jaitok",32,20),new n("áitok",-1,22),new n("im",-1,5),new n("aim",35,4),new n("jaim",36,1),new n("eim",35,4),new n("jeim",38,1),new n("áim",35,2),new n("éim",35,3)],W=[17,65,16,0,0,0,0,0,0,0,0,0,0,0,0,0,1,17,52,14],L=new r;this.setCurrent=function(e){L.setCurrent(e)},this.getCurrent=function(){return L.getCurrent()},this.stem=function(){var n=L.cursor;return e(),L.limit_backward=n,L.cursor=L.limit,c(),L.cursor=L.limit,o(),L.cursor=L.limit,w(),L.cursor=L.limit,l(),L.cursor=L.limit,u(),L.cursor=L.limit,k(),L.cursor=L.limit,f(),L.cursor=L.limit,b(),L.cursor=L.limit,m(),!0}};return function(e){return"function"==typeof e.update?e.update(function(e){return 
i.setCurrent(e),i.stem(),i.getCurrent()}):(i.setCurrent(e),i.stem(),i.getCurrent())}}(),e.Pipeline.registerFunction(e.hu.stemmer,"stemmer-hu"),e.hu.stopWordFilter=e.generateStopWordFilter("a abban ahhoz ahogy ahol aki akik akkor alatt amely amelyek amelyekben amelyeket amelyet amelynek ami amikor amit amolyan amíg annak arra arról az azok azon azonban azt aztán azután azzal azért be belül benne bár cikk cikkek cikkeket csak de e ebben eddig egy egyes egyetlen egyik egyre egyéb egész ehhez ekkor el ellen elsõ elég elõ elõször elõtt emilyen ennek erre ez ezek ezen ezt ezzel ezért fel felé hanem hiszen hogy hogyan igen ill ill. illetve ilyen ilyenkor ismét ison itt jobban jó jól kell kellett keressünk keresztül ki kívül között közül legalább legyen lehet lehetett lenne lenni lesz lett maga magát majd majd meg mellett mely melyek mert mi mikor milyen minden mindenki mindent mindig mint mintha mit mivel miért most már más másik még míg nagy nagyobb nagyon ne nekem neki nem nincs néha néhány nélkül olyan ott pedig persze rá s saját sem semmi sok sokat sokkal szemben szerint szinte számára talán tehát teljes tovább továbbá több ugyanis utolsó után utána vagy vagyis vagyok valaki valami valamint való van vannak vele vissza viszont volna volt voltak voltam voltunk által általában át én éppen és így õ õk õket össze úgy új újabb újra".split(" ")),e.Pipeline.registerFunction(e.hu.stopWordFilter,"stopWordFilter-hu")}}); \ No newline at end of file diff --git a/assets/javascripts/lunr/min/lunr.it.min.js b/assets/javascripts/lunr/min/lunr.it.min.js new file mode 100644 index 000000000..344b6a3c0 --- /dev/null +++ b/assets/javascripts/lunr/min/lunr.it.min.js @@ -0,0 +1,18 @@ +/*! + * Lunr languages, `Italian` language + * https://github.com/MihaiValentin/lunr-languages + * + * Copyright 2014, Mihai Valentin + * http://www.mozilla.org/MPL/ + */ +/*! 
+ * based on + * Snowball JavaScript Library v0.3 + * http://code.google.com/p/urim/ + * http://snowball.tartarus.org/ + * + * Copyright 2010, Oleg Mazko + * http://www.mozilla.org/MPL/ + */ + +!function(e,r){"function"==typeof define&&define.amd?define(r):"object"==typeof exports?module.exports=r():r()(e.lunr)}(this,function(){return function(e){if(void 0===e)throw new Error("Lunr is not present. Please include / require Lunr before this script.");if(void 0===e.stemmerSupport)throw new Error("Lunr stemmer support is not present. Please include / require Lunr stemmer support before this script.");e.it=function(){this.pipeline.reset(),this.pipeline.add(e.it.trimmer,e.it.stopWordFilter,e.it.stemmer),this.searchPipeline&&(this.searchPipeline.reset(),this.searchPipeline.add(e.it.stemmer))},e.it.wordCharacters="A-Za-zªºÀ-ÖØ-öø-ʸˠ-ˤᴀ-ᴥᴬ-ᵜᵢ-ᵥᵫ-ᵷᵹ-ᶾḀ-ỿⁱⁿₐ-ₜKÅℲⅎⅠ-ↈⱠ-ⱿꜢ-ꞇꞋ-ꞭꞰ-ꞷꟷ-ꟿꬰ-ꭚꭜ-ꭤff-stA-Za-z",e.it.trimmer=e.trimmerSupport.generateTrimmer(e.it.wordCharacters),e.Pipeline.registerFunction(e.it.trimmer,"trimmer-it"),e.it.stemmer=function(){var r=e.stemmerSupport.Among,n=e.stemmerSupport.SnowballProgram,i=new function(){function e(e,r,n){return!(!x.eq_s(1,e)||(x.ket=x.cursor,!x.in_grouping(L,97,249)))&&(x.slice_from(r),x.cursor=n,!0)}function i(){for(var r,n,i,o,t=x.cursor;;){if(x.bra=x.cursor,r=x.find_among(h,7))switch(x.ket=x.cursor,r){case 1:x.slice_from("à");continue;case 2:x.slice_from("è");continue;case 3:x.slice_from("ì");continue;case 4:x.slice_from("ò");continue;case 5:x.slice_from("ù");continue;case 6:x.slice_from("qU");continue;case 7:if(x.cursor>=x.limit)break;x.cursor++;continue}break}for(x.cursor=t;;)for(n=x.cursor;;){if(i=x.cursor,x.in_grouping(L,97,249)){if(x.bra=x.cursor,o=x.cursor,e("u","U",i))break;if(x.cursor=o,e("i","I",i))break}if(x.cursor=i,x.cursor>=x.limit)return void(x.cursor=n);x.cursor++}}function o(e){if(x.cursor=e,!x.in_grouping(L,97,249))return!1;for(;!x.out_grouping(L,97,249);){if(x.cursor>=x.limit)return!1;x.cursor++}return!0}function 
t(){if(x.in_grouping(L,97,249)){var e=x.cursor;if(x.out_grouping(L,97,249)){for(;!x.in_grouping(L,97,249);){if(x.cursor>=x.limit)return o(e);x.cursor++}return!0}return o(e)}return!1}function s(){var e,r=x.cursor;if(!t()){if(x.cursor=r,!x.out_grouping(L,97,249))return;if(e=x.cursor,x.out_grouping(L,97,249)){for(;!x.in_grouping(L,97,249);){if(x.cursor>=x.limit)return x.cursor=e,void(x.in_grouping(L,97,249)&&x.cursor=x.limit)return;x.cursor++}k=x.cursor}function a(){for(;!x.in_grouping(L,97,249);){if(x.cursor>=x.limit)return!1;x.cursor++}for(;!x.out_grouping(L,97,249);){if(x.cursor>=x.limit)return!1;x.cursor++}return!0}function u(){var e=x.cursor;k=x.limit,p=k,g=k,s(),x.cursor=e,a()&&(p=x.cursor,a()&&(g=x.cursor))}function c(){for(var e;;){if(x.bra=x.cursor,!(e=x.find_among(q,3)))break;switch(x.ket=x.cursor,e){case 1:x.slice_from("i");break;case 2:x.slice_from("u");break;case 3:if(x.cursor>=x.limit)return;x.cursor++}}}function w(){return k<=x.cursor}function l(){return p<=x.cursor}function m(){return g<=x.cursor}function f(){var e;if(x.ket=x.cursor,x.find_among_b(C,37)&&(x.bra=x.cursor,(e=x.find_among_b(z,5))&&w()))switch(e){case 1:x.slice_del();break;case 2:x.slice_from("e")}}function v(){var e;if(x.ket=x.cursor,!(e=x.find_among_b(S,51)))return!1;switch(x.bra=x.cursor,e){case 1:if(!m())return!1;x.slice_del();break;case 2:if(!m())return!1;x.slice_del(),x.ket=x.cursor,x.eq_s_b(2,"ic")&&(x.bra=x.cursor,m()&&x.slice_del());break;case 3:if(!m())return!1;x.slice_from("log");break;case 4:if(!m())return!1;x.slice_from("u");break;case 5:if(!m())return!1;x.slice_from("ente");break;case 6:if(!w())return!1;x.slice_del();break;case 7:if(!l())return!1;x.slice_del(),x.ket=x.cursor,e=x.find_among_b(P,4),e&&(x.bra=x.cursor,m()&&(x.slice_del(),1==e&&(x.ket=x.cursor,x.eq_s_b(2,"at")&&(x.bra=x.cursor,m()&&x.slice_del()))));break;case 8:if(!m())return!1;x.slice_del(),x.ket=x.cursor,e=x.find_among_b(F,3),e&&(x.bra=x.cursor,1==e&&m()&&x.slice_del());break;case 
9:if(!m())return!1;x.slice_del(),x.ket=x.cursor,x.eq_s_b(2,"at")&&(x.bra=x.cursor,m()&&(x.slice_del(),x.ket=x.cursor,x.eq_s_b(2,"ic")&&(x.bra=x.cursor,m()&&x.slice_del())))}return!0}function b(){var e,r;x.cursor>=k&&(r=x.limit_backward,x.limit_backward=k,x.ket=x.cursor,e=x.find_among_b(W,87),e&&(x.bra=x.cursor,1==e&&x.slice_del()),x.limit_backward=r)}function d(){var e=x.limit-x.cursor;if(x.ket=x.cursor,x.in_grouping_b(y,97,242)&&(x.bra=x.cursor,w()&&(x.slice_del(),x.ket=x.cursor,x.eq_s_b(1,"i")&&(x.bra=x.cursor,w()))))return void x.slice_del();x.cursor=x.limit-e}function _(){d(),x.ket=x.cursor,x.eq_s_b(1,"h")&&(x.bra=x.cursor,x.in_grouping_b(U,99,103)&&w()&&x.slice_del())}var g,p,k,h=[new r("",-1,7),new r("qu",0,6),new r("á",0,1),new r("é",0,2),new r("í",0,3),new r("ó",0,4),new r("ú",0,5)],q=[new r("",-1,3),new r("I",0,1),new r("U",0,2)],C=[new r("la",-1,-1),new r("cela",0,-1),new r("gliela",0,-1),new r("mela",0,-1),new r("tela",0,-1),new r("vela",0,-1),new r("le",-1,-1),new r("cele",6,-1),new r("gliele",6,-1),new r("mele",6,-1),new r("tele",6,-1),new r("vele",6,-1),new r("ne",-1,-1),new r("cene",12,-1),new r("gliene",12,-1),new r("mene",12,-1),new r("sene",12,-1),new r("tene",12,-1),new r("vene",12,-1),new r("ci",-1,-1),new r("li",-1,-1),new r("celi",20,-1),new r("glieli",20,-1),new r("meli",20,-1),new r("teli",20,-1),new r("veli",20,-1),new r("gli",20,-1),new r("mi",-1,-1),new r("si",-1,-1),new r("ti",-1,-1),new r("vi",-1,-1),new r("lo",-1,-1),new r("celo",31,-1),new r("glielo",31,-1),new r("melo",31,-1),new r("telo",31,-1),new r("velo",31,-1)],z=[new r("ando",-1,1),new r("endo",-1,1),new r("ar",-1,2),new r("er",-1,2),new r("ir",-1,2)],P=[new r("ic",-1,-1),new r("abil",-1,-1),new r("os",-1,-1),new r("iv",-1,1)],F=[new r("ic",-1,1),new r("abil",-1,1),new r("iv",-1,1)],S=[new r("ica",-1,1),new r("logia",-1,3),new r("osa",-1,1),new r("ista",-1,1),new r("iva",-1,9),new r("anza",-1,1),new r("enza",-1,5),new r("ice",-1,1),new r("atrice",7,1),new r("iche",-1,1),new 
r("logie",-1,3),new r("abile",-1,1),new r("ibile",-1,1),new r("usione",-1,4),new r("azione",-1,2),new r("uzione",-1,4),new r("atore",-1,2),new r("ose",-1,1),new r("ante",-1,1),new r("mente",-1,1),new r("amente",19,7),new r("iste",-1,1),new r("ive",-1,9),new r("anze",-1,1),new r("enze",-1,5),new r("ici",-1,1),new r("atrici",25,1),new r("ichi",-1,1),new r("abili",-1,1),new r("ibili",-1,1),new r("ismi",-1,1),new r("usioni",-1,4),new r("azioni",-1,2),new r("uzioni",-1,4),new r("atori",-1,2),new r("osi",-1,1),new r("anti",-1,1),new r("amenti",-1,6),new r("imenti",-1,6),new r("isti",-1,1),new r("ivi",-1,9),new r("ico",-1,1),new r("ismo",-1,1),new r("oso",-1,1),new r("amento",-1,6),new r("imento",-1,6),new r("ivo",-1,9),new r("ità",-1,8),new r("istà",-1,1),new r("istè",-1,1),new r("istì",-1,1)],W=[new r("isca",-1,1),new r("enda",-1,1),new r("ata",-1,1),new r("ita",-1,1),new r("uta",-1,1),new r("ava",-1,1),new r("eva",-1,1),new r("iva",-1,1),new r("erebbe",-1,1),new r("irebbe",-1,1),new r("isce",-1,1),new r("ende",-1,1),new r("are",-1,1),new r("ere",-1,1),new r("ire",-1,1),new r("asse",-1,1),new r("ate",-1,1),new r("avate",16,1),new r("evate",16,1),new r("ivate",16,1),new r("ete",-1,1),new r("erete",20,1),new r("irete",20,1),new r("ite",-1,1),new r("ereste",-1,1),new r("ireste",-1,1),new r("ute",-1,1),new r("erai",-1,1),new r("irai",-1,1),new r("isci",-1,1),new r("endi",-1,1),new r("erei",-1,1),new r("irei",-1,1),new r("assi",-1,1),new r("ati",-1,1),new r("iti",-1,1),new r("eresti",-1,1),new r("iresti",-1,1),new r("uti",-1,1),new r("avi",-1,1),new r("evi",-1,1),new r("ivi",-1,1),new r("isco",-1,1),new r("ando",-1,1),new r("endo",-1,1),new r("Yamo",-1,1),new r("iamo",-1,1),new r("avamo",-1,1),new r("evamo",-1,1),new r("ivamo",-1,1),new r("eremo",-1,1),new r("iremo",-1,1),new r("assimo",-1,1),new r("ammo",-1,1),new r("emmo",-1,1),new r("eremmo",54,1),new r("iremmo",54,1),new r("immo",-1,1),new r("ano",-1,1),new r("iscano",58,1),new r("avano",58,1),new r("evano",58,1),new 
r("ivano",58,1),new r("eranno",-1,1),new r("iranno",-1,1),new r("ono",-1,1),new r("iscono",65,1),new r("arono",65,1),new r("erono",65,1),new r("irono",65,1),new r("erebbero",-1,1),new r("irebbero",-1,1),new r("assero",-1,1),new r("essero",-1,1),new r("issero",-1,1),new r("ato",-1,1),new r("ito",-1,1),new r("uto",-1,1),new r("avo",-1,1),new r("evo",-1,1),new r("ivo",-1,1),new r("ar",-1,1),new r("ir",-1,1),new r("erà",-1,1),new r("irà",-1,1),new r("erò",-1,1),new r("irò",-1,1)],L=[17,65,16,0,0,0,0,0,0,0,0,0,0,0,0,128,128,8,2,1],y=[17,65,0,0,0,0,0,0,0,0,0,0,0,0,0,128,128,8,2],U=[17],x=new n;this.setCurrent=function(e){x.setCurrent(e)},this.getCurrent=function(){return x.getCurrent()},this.stem=function(){var e=x.cursor;return i(),x.cursor=e,u(),x.limit_backward=e,x.cursor=x.limit,f(),x.cursor=x.limit,v()||(x.cursor=x.limit,b()),x.cursor=x.limit,_(),x.cursor=x.limit_backward,c(),!0}};return function(e){return"function"==typeof e.update?e.update(function(e){return i.setCurrent(e),i.stem(),i.getCurrent()}):(i.setCurrent(e),i.stem(),i.getCurrent())}}(),e.Pipeline.registerFunction(e.it.stemmer,"stemmer-it"),e.it.stopWordFilter=e.generateStopWordFilter("a abbia abbiamo abbiano abbiate ad agl agli ai al all alla alle allo anche avemmo avendo avesse avessero avessi avessimo aveste avesti avete aveva avevamo avevano avevate avevi avevo avrai avranno avrebbe avrebbero avrei avremmo avremo avreste avresti avrete avrà avrò avuta avute avuti avuto c che chi ci coi col come con contro cui da dagl dagli dai dal dall dalla dalle dallo degl degli dei del dell della delle dello di dov dove e ebbe ebbero ebbi ed era erano eravamo eravate eri ero essendo faccia facciamo facciano facciate faccio facemmo facendo facesse facessero facessi facessimo faceste facesti faceva facevamo facevano facevate facevi facevo fai fanno farai faranno farebbe farebbero farei faremmo faremo fareste faresti farete farà farò fece fecero feci fosse fossero fossi fossimo foste fosti fu fui fummo furono gli ha 
hai hanno ho i il in io l la le lei li lo loro lui ma mi mia mie miei mio ne negl negli nei nel nell nella nelle nello noi non nostra nostre nostri nostro o per perché più quale quanta quante quanti quanto quella quelle quelli quello questa queste questi questo sarai saranno sarebbe sarebbero sarei saremmo saremo sareste saresti sarete sarà sarò se sei si sia siamo siano siate siete sono sta stai stando stanno starai staranno starebbe starebbero starei staremmo staremo stareste staresti starete starà starò stava stavamo stavano stavate stavi stavo stemmo stesse stessero stessi stessimo steste stesti stette stettero stetti stia stiamo stiano stiate sto su sua sue sugl sugli sui sul sull sulla sulle sullo suo suoi ti tra tu tua tue tuo tuoi tutti tutto un una uno vi voi vostra vostre vostri vostro è".split(" ")),e.Pipeline.registerFunction(e.it.stopWordFilter,"stopWordFilter-it")}}); \ No newline at end of file diff --git a/assets/javascripts/lunr/min/lunr.ja.min.js b/assets/javascripts/lunr/min/lunr.ja.min.js new file mode 100644 index 000000000..5f254ebe9 --- /dev/null +++ b/assets/javascripts/lunr/min/lunr.ja.min.js @@ -0,0 +1 @@ +!function(e,r){"function"==typeof define&&define.amd?define(r):"object"==typeof exports?module.exports=r():r()(e.lunr)}(this,function(){return function(e){if(void 0===e)throw new Error("Lunr is not present. Please include / require Lunr before this script.");if(void 0===e.stemmerSupport)throw new Error("Lunr stemmer support is not present. 
Please include / require Lunr stemmer support before this script.");var r="2"==e.version[0];e.ja=function(){this.pipeline.reset(),this.pipeline.add(e.ja.trimmer,e.ja.stopWordFilter,e.ja.stemmer),r?this.tokenizer=e.ja.tokenizer:(e.tokenizer&&(e.tokenizer=e.ja.tokenizer),this.tokenizerFn&&(this.tokenizerFn=e.ja.tokenizer))};var t=new e.TinySegmenter;e.ja.tokenizer=function(i){var n,o,s,p,a,u,m,l,c,f;if(!arguments.length||null==i||void 0==i)return[];if(Array.isArray(i))return i.map(function(t){return r?new e.Token(t.toLowerCase()):t.toLowerCase()});for(o=i.toString().toLowerCase().replace(/^\s+/,""),n=o.length-1;n>=0;n--)if(/\S/.test(o.charAt(n))){o=o.substring(0,n+1);break}for(a=[],s=o.length,c=0,l=0;c<=s;c++)if(u=o.charAt(c),m=c-l,u.match(/\s/)||c==s){if(m>0)for(p=t.segment(o.slice(l,c)).filter(function(e){return!!e}),f=l,n=0;n=C.limit)break;C.cursor++;continue}break}for(C.cursor=o,C.bra=o,C.eq_s(1,"y")?(C.ket=C.cursor,C.slice_from("Y")):C.cursor=o;;)if(e=C.cursor,C.in_grouping(q,97,232)){if(i=C.cursor,C.bra=i,C.eq_s(1,"i"))C.ket=C.cursor,C.in_grouping(q,97,232)&&(C.slice_from("I"),C.cursor=e);else if(C.cursor=i,C.eq_s(1,"y"))C.ket=C.cursor,C.slice_from("Y"),C.cursor=e;else if(n(e))break}else if(n(e))break}function n(r){return C.cursor=r,r>=C.limit||(C.cursor++,!1)}function o(){_=C.limit,d=_,t()||(_=C.cursor,_<3&&(_=3),t()||(d=C.cursor))}function t(){for(;!C.in_grouping(q,97,232);){if(C.cursor>=C.limit)return!0;C.cursor++}for(;!C.out_grouping(q,97,232);){if(C.cursor>=C.limit)return!0;C.cursor++}return!1}function s(){for(var r;;)if(C.bra=C.cursor,r=C.find_among(p,3))switch(C.ket=C.cursor,r){case 1:C.slice_from("y");break;case 2:C.slice_from("i");break;case 3:if(C.cursor>=C.limit)return;C.cursor++}}function u(){return _<=C.cursor}function c(){return d<=C.cursor}function a(){var r=C.limit-C.cursor;C.find_among_b(g,3)&&(C.cursor=C.limit-r,C.ket=C.cursor,C.cursor>C.limit_backward&&(C.cursor--,C.bra=C.cursor,C.slice_del()))}function l(){var 
r;w=!1,C.ket=C.cursor,C.eq_s_b(1,"e")&&(C.bra=C.cursor,u()&&(r=C.limit-C.cursor,C.out_grouping_b(q,97,232)&&(C.cursor=C.limit-r,C.slice_del(),w=!0,a())))}function m(){var r;u()&&(r=C.limit-C.cursor,C.out_grouping_b(q,97,232)&&(C.cursor=C.limit-r,C.eq_s_b(3,"gem")||(C.cursor=C.limit-r,C.slice_del(),a())))}function f(){var r,e,i,n,o,t,s=C.limit-C.cursor;if(C.ket=C.cursor,r=C.find_among_b(h,5))switch(C.bra=C.cursor,r){case 1:u()&&C.slice_from("heid");break;case 2:m();break;case 3:u()&&C.out_grouping_b(j,97,232)&&C.slice_del()}if(C.cursor=C.limit-s,l(),C.cursor=C.limit-s,C.ket=C.cursor,C.eq_s_b(4,"heid")&&(C.bra=C.cursor,c()&&(e=C.limit-C.cursor,C.eq_s_b(1,"c")||(C.cursor=C.limit-e,C.slice_del(),C.ket=C.cursor,C.eq_s_b(2,"en")&&(C.bra=C.cursor,m())))),C.cursor=C.limit-s,C.ket=C.cursor,r=C.find_among_b(k,6))switch(C.bra=C.cursor,r){case 1:if(c()){if(C.slice_del(),i=C.limit-C.cursor,C.ket=C.cursor,C.eq_s_b(2,"ig")&&(C.bra=C.cursor,c()&&(n=C.limit-C.cursor,!C.eq_s_b(1,"e")))){C.cursor=C.limit-n,C.slice_del();break}C.cursor=C.limit-i,a()}break;case 2:c()&&(o=C.limit-C.cursor,C.eq_s_b(1,"e")||(C.cursor=C.limit-o,C.slice_del()));break;case 3:c()&&(C.slice_del(),l());break;case 4:c()&&C.slice_del();break;case 5:c()&&w&&C.slice_del()}C.cursor=C.limit-s,C.out_grouping_b(z,73,232)&&(t=C.limit-C.cursor,C.find_among_b(v,4)&&C.out_grouping_b(q,97,232)&&(C.cursor=C.limit-t,C.ket=C.cursor,C.cursor>C.limit_backward&&(C.cursor--,C.bra=C.cursor,C.slice_del())))}var d,_,w,b=[new e("",-1,6),new e("á",0,1),new e("ä",0,1),new e("é",0,2),new e("ë",0,2),new e("í",0,3),new e("ï",0,3),new e("ó",0,4),new e("ö",0,4),new e("ú",0,5),new e("ü",0,5)],p=[new e("",-1,3),new e("I",0,2),new e("Y",0,1)],g=[new e("dd",-1,-1),new e("kk",-1,-1),new e("tt",-1,-1)],h=[new e("ene",-1,2),new e("se",-1,3),new e("en",-1,2),new e("heden",2,1),new e("s",-1,3)],k=[new e("end",-1,1),new e("ig",-1,2),new e("ing",-1,1),new e("lijk",-1,3),new e("baar",-1,4),new e("bar",-1,5)],v=[new e("aa",-1,-1),new e("ee",-1,-1),new 
e("oo",-1,-1),new e("uu",-1,-1)],q=[17,65,16,1,0,0,0,0,0,0,0,0,0,0,0,0,128],z=[1,0,0,17,65,16,1,0,0,0,0,0,0,0,0,0,0,0,0,128],j=[17,67,16,1,0,0,0,0,0,0,0,0,0,0,0,0,128],C=new i;this.setCurrent=function(r){C.setCurrent(r)},this.getCurrent=function(){return C.getCurrent()},this.stem=function(){var e=C.cursor;return r(),C.cursor=e,o(),C.limit_backward=e,C.cursor=C.limit,f(),C.cursor=C.limit_backward,s(),!0}};return function(r){return"function"==typeof r.update?r.update(function(r){return n.setCurrent(r),n.stem(),n.getCurrent()}):(n.setCurrent(r),n.stem(),n.getCurrent())}}(),r.Pipeline.registerFunction(r.nl.stemmer,"stemmer-nl"),r.nl.stopWordFilter=r.generateStopWordFilter(" aan al alles als altijd andere ben bij daar dan dat de der deze die dit doch doen door dus een eens en er ge geen geweest haar had heb hebben heeft hem het hier hij hoe hun iemand iets ik in is ja je kan kon kunnen maar me meer men met mij mijn moet na naar niet niets nog nu of om omdat onder ons ook op over reeds te tegen toch toen tot u uit uw van veel voor want waren was wat werd wezen wie wil worden wordt zal ze zelf zich zij zijn zo zonder zou".split(" ")),r.Pipeline.registerFunction(r.nl.stopWordFilter,"stopWordFilter-nl")}}); \ No newline at end of file diff --git a/assets/javascripts/lunr/min/lunr.no.min.js b/assets/javascripts/lunr/min/lunr.no.min.js new file mode 100644 index 000000000..92bc7e4e8 --- /dev/null +++ b/assets/javascripts/lunr/min/lunr.no.min.js @@ -0,0 +1,18 @@ +/*! + * Lunr languages, `Norwegian` language + * https://github.com/MihaiValentin/lunr-languages + * + * Copyright 2014, Mihai Valentin + * http://www.mozilla.org/MPL/ + */ +/*! 
+ * based on + * Snowball JavaScript Library v0.3 + * http://code.google.com/p/urim/ + * http://snowball.tartarus.org/ + * + * Copyright 2010, Oleg Mazko + * http://www.mozilla.org/MPL/ + */ + +!function(e,r){"function"==typeof define&&define.amd?define(r):"object"==typeof exports?module.exports=r():r()(e.lunr)}(this,function(){return function(e){if(void 0===e)throw new Error("Lunr is not present. Please include / require Lunr before this script.");if(void 0===e.stemmerSupport)throw new Error("Lunr stemmer support is not present. Please include / require Lunr stemmer support before this script.");e.no=function(){this.pipeline.reset(),this.pipeline.add(e.no.trimmer,e.no.stopWordFilter,e.no.stemmer),this.searchPipeline&&(this.searchPipeline.reset(),this.searchPipeline.add(e.no.stemmer))},e.no.wordCharacters="A-Za-zªºÀ-ÖØ-öø-ʸˠ-ˤᴀ-ᴥᴬ-ᵜᵢ-ᵥᵫ-ᵷᵹ-ᶾḀ-ỿⁱⁿₐ-ₜKÅℲⅎⅠ-ↈⱠ-ⱿꜢ-ꞇꞋ-ꞭꞰ-ꞷꟷ-ꟿꬰ-ꭚꭜ-ꭤff-stA-Za-z",e.no.trimmer=e.trimmerSupport.generateTrimmer(e.no.wordCharacters),e.Pipeline.registerFunction(e.no.trimmer,"trimmer-no"),e.no.stemmer=function(){var r=e.stemmerSupport.Among,n=e.stemmerSupport.SnowballProgram,i=new function(){function e(){var e,r=w.cursor+3;if(a=w.limit,0<=r||r<=w.limit){for(s=r;;){if(e=w.cursor,w.in_grouping(d,97,248)){w.cursor=e;break}if(e>=w.limit)return;w.cursor=e+1}for(;!w.out_grouping(d,97,248);){if(w.cursor>=w.limit)return;w.cursor++}a=w.cursor,a=a&&(r=w.limit_backward,w.limit_backward=a,w.ket=w.cursor,e=w.find_among_b(m,29),w.limit_backward=r,e))switch(w.bra=w.cursor,e){case 1:w.slice_del();break;case 2:n=w.limit-w.cursor,w.in_grouping_b(c,98,122)?w.slice_del():(w.cursor=w.limit-n,w.eq_s_b(1,"k")&&w.out_grouping_b(d,97,248)&&w.slice_del());break;case 3:w.slice_from("er")}}function t(){var e,r=w.limit-w.cursor;w.cursor>=a&&(e=w.limit_backward,w.limit_backward=a,w.ket=w.cursor,w.find_among_b(u,2)?(w.bra=w.cursor,w.limit_backward=e,w.cursor=w.limit-r,w.cursor>w.limit_backward&&(w.cursor--,w.bra=w.cursor,w.slice_del())):w.limit_backward=e)}function o(){var 
e,r;w.cursor>=a&&(r=w.limit_backward,w.limit_backward=a,w.ket=w.cursor,e=w.find_among_b(l,11),e?(w.bra=w.cursor,w.limit_backward=r,1==e&&w.slice_del()):w.limit_backward=r)}var s,a,m=[new r("a",-1,1),new r("e",-1,1),new r("ede",1,1),new r("ande",1,1),new r("ende",1,1),new r("ane",1,1),new r("ene",1,1),new r("hetene",6,1),new r("erte",1,3),new r("en",-1,1),new r("heten",9,1),new r("ar",-1,1),new r("er",-1,1),new r("heter",12,1),new r("s",-1,2),new r("as",14,1),new r("es",14,1),new r("edes",16,1),new r("endes",16,1),new r("enes",16,1),new r("hetenes",19,1),new r("ens",14,1),new r("hetens",21,1),new r("ers",14,1),new r("ets",14,1),new r("et",-1,1),new r("het",25,1),new r("ert",-1,3),new r("ast",-1,1)],u=[new r("dt",-1,-1),new r("vt",-1,-1)],l=[new r("leg",-1,1),new r("eleg",0,1),new r("ig",-1,1),new r("eig",2,1),new r("lig",2,1),new r("elig",4,1),new r("els",-1,1),new r("lov",-1,1),new r("elov",7,1),new r("slov",7,1),new r("hetslov",9,1)],d=[17,65,16,1,0,0,0,0,0,0,0,0,0,0,0,0,48,0,128],c=[119,125,149,1],w=new n;this.setCurrent=function(e){w.setCurrent(e)},this.getCurrent=function(){return w.getCurrent()},this.stem=function(){var r=w.cursor;return e(),w.limit_backward=r,w.cursor=w.limit,i(),w.cursor=w.limit,t(),w.cursor=w.limit,o(),!0}};return function(e){return"function"==typeof e.update?e.update(function(e){return i.setCurrent(e),i.stem(),i.getCurrent()}):(i.setCurrent(e),i.stem(),i.getCurrent())}}(),e.Pipeline.registerFunction(e.no.stemmer,"stemmer-no"),e.no.stopWordFilter=e.generateStopWordFilter("alle at av bare begge ble blei bli blir blitt både båe da de deg dei deim deira deires dem den denne der dere deres det dette di din disse ditt du dykk dykkar då eg ein eit eitt eller elles en enn er et ett etter for fordi fra før ha hadde han hans har hennar henne hennes her hjå ho hoe honom hoss hossen hun hva hvem hver hvilke hvilken hvis hvor hvordan hvorfor i ikke ikkje ikkje ingen ingi inkje inn inni ja jeg kan kom korleis korso kun kunne kva kvar kvarhelst kven kvi 
kvifor man mange me med medan meg meget mellom men mi min mine mitt mot mykje ned no noe noen noka noko nokon nokor nokre nå når og også om opp oss over på samme seg selv si si sia sidan siden sin sine sitt sjøl skal skulle slik so som som somme somt så sånn til um upp ut uten var vart varte ved vere verte vi vil ville vore vors vort vår være være vært å".split(" ")),e.Pipeline.registerFunction(e.no.stopWordFilter,"stopWordFilter-no")}}); \ No newline at end of file diff --git a/assets/javascripts/lunr/min/lunr.pt.min.js b/assets/javascripts/lunr/min/lunr.pt.min.js new file mode 100644 index 000000000..6c16996d6 --- /dev/null +++ b/assets/javascripts/lunr/min/lunr.pt.min.js @@ -0,0 +1,18 @@ +/*! + * Lunr languages, `Portuguese` language + * https://github.com/MihaiValentin/lunr-languages + * + * Copyright 2014, Mihai Valentin + * http://www.mozilla.org/MPL/ + */ +/*! + * based on + * Snowball JavaScript Library v0.3 + * http://code.google.com/p/urim/ + * http://snowball.tartarus.org/ + * + * Copyright 2010, Oleg Mazko + * http://www.mozilla.org/MPL/ + */ + +!function(e,r){"function"==typeof define&&define.amd?define(r):"object"==typeof exports?module.exports=r():r()(e.lunr)}(this,function(){return function(e){if(void 0===e)throw new Error("Lunr is not present. Please include / require Lunr before this script.");if(void 0===e.stemmerSupport)throw new Error("Lunr stemmer support is not present. 
Please include / require Lunr stemmer support before this script.");e.pt=function(){this.pipeline.reset(),this.pipeline.add(e.pt.trimmer,e.pt.stopWordFilter,e.pt.stemmer),this.searchPipeline&&(this.searchPipeline.reset(),this.searchPipeline.add(e.pt.stemmer))},e.pt.wordCharacters="A-Za-zªºÀ-ÖØ-öø-ʸˠ-ˤᴀ-ᴥᴬ-ᵜᵢ-ᵥᵫ-ᵷᵹ-ᶾḀ-ỿⁱⁿₐ-ₜKÅℲⅎⅠ-ↈⱠ-ⱿꜢ-ꞇꞋ-ꞭꞰ-ꞷꟷ-ꟿꬰ-ꭚꭜ-ꭤff-stA-Za-z",e.pt.trimmer=e.trimmerSupport.generateTrimmer(e.pt.wordCharacters),e.Pipeline.registerFunction(e.pt.trimmer,"trimmer-pt"),e.pt.stemmer=function(){var r=e.stemmerSupport.Among,s=e.stemmerSupport.SnowballProgram,n=new function(){function e(){for(var e;;){if(z.bra=z.cursor,e=z.find_among(k,3))switch(z.ket=z.cursor,e){case 1:z.slice_from("a~");continue;case 2:z.slice_from("o~");continue;case 3:if(z.cursor>=z.limit)break;z.cursor++;continue}break}}function n(){if(z.out_grouping(y,97,250)){for(;!z.in_grouping(y,97,250);){if(z.cursor>=z.limit)return!0;z.cursor++}return!1}return!0}function i(){if(z.in_grouping(y,97,250))for(;!z.out_grouping(y,97,250);){if(z.cursor>=z.limit)return!1;z.cursor++}return g=z.cursor,!0}function o(){var e,r,s=z.cursor;if(z.in_grouping(y,97,250))if(e=z.cursor,n()){if(z.cursor=e,i())return}else g=z.cursor;if(z.cursor=s,z.out_grouping(y,97,250)){if(r=z.cursor,n()){if(z.cursor=r,!z.in_grouping(y,97,250)||z.cursor>=z.limit)return;z.cursor++}g=z.cursor}}function t(){for(;!z.in_grouping(y,97,250);){if(z.cursor>=z.limit)return!1;z.cursor++}for(;!z.out_grouping(y,97,250);){if(z.cursor>=z.limit)return!1;z.cursor++}return!0}function a(){var e=z.cursor;g=z.limit,b=g,h=g,o(),z.cursor=e,t()&&(b=z.cursor,t()&&(h=z.cursor))}function u(){for(var e;;){if(z.bra=z.cursor,e=z.find_among(q,3))switch(z.ket=z.cursor,e){case 1:z.slice_from("ã");continue;case 2:z.slice_from("õ");continue;case 3:if(z.cursor>=z.limit)break;z.cursor++;continue}break}}function w(){return g<=z.cursor}function m(){return b<=z.cursor}function c(){return h<=z.cursor}function l(){var 
e;if(z.ket=z.cursor,!(e=z.find_among_b(F,45)))return!1;switch(z.bra=z.cursor,e){case 1:if(!c())return!1;z.slice_del();break;case 2:if(!c())return!1;z.slice_from("log");break;case 3:if(!c())return!1;z.slice_from("u");break;case 4:if(!c())return!1;z.slice_from("ente");break;case 5:if(!m())return!1;z.slice_del(),z.ket=z.cursor,e=z.find_among_b(j,4),e&&(z.bra=z.cursor,c()&&(z.slice_del(),1==e&&(z.ket=z.cursor,z.eq_s_b(2,"at")&&(z.bra=z.cursor,c()&&z.slice_del()))));break;case 6:if(!c())return!1;z.slice_del(),z.ket=z.cursor,e=z.find_among_b(C,3),e&&(z.bra=z.cursor,1==e&&c()&&z.slice_del());break;case 7:if(!c())return!1;z.slice_del(),z.ket=z.cursor,e=z.find_among_b(P,3),e&&(z.bra=z.cursor,1==e&&c()&&z.slice_del());break;case 8:if(!c())return!1;z.slice_del(),z.ket=z.cursor,z.eq_s_b(2,"at")&&(z.bra=z.cursor,c()&&z.slice_del());break;case 9:if(!w()||!z.eq_s_b(1,"e"))return!1;z.slice_from("ir")}return!0}function f(){var e,r;if(z.cursor>=g){if(r=z.limit_backward,z.limit_backward=g,z.ket=z.cursor,e=z.find_among_b(S,120))return z.bra=z.cursor,1==e&&z.slice_del(),z.limit_backward=r,!0;z.limit_backward=r}return!1}function d(){var e;z.ket=z.cursor,(e=z.find_among_b(W,7))&&(z.bra=z.cursor,1==e&&w()&&z.slice_del())}function v(e,r){if(z.eq_s_b(1,e)){z.bra=z.cursor;var s=z.limit-z.cursor;if(z.eq_s_b(1,r))return z.cursor=z.limit-s,w()&&z.slice_del(),!1}return!0}function p(){var e;if(z.ket=z.cursor,e=z.find_among_b(L,4))switch(z.bra=z.cursor,e){case 1:w()&&(z.slice_del(),z.ket=z.cursor,z.limit-z.cursor,v("u","g")&&v("i","c"));break;case 2:z.slice_from("c")}}function _(){if(!l()&&(z.cursor=z.limit,!f()))return z.cursor=z.limit,void d();z.cursor=z.limit,z.ket=z.cursor,z.eq_s_b(1,"i")&&(z.bra=z.cursor,z.eq_s_b(1,"c")&&(z.cursor=z.limit,w()&&z.slice_del()))}var h,b,g,k=[new r("",-1,3),new r("ã",0,1),new r("õ",0,2)],q=[new r("",-1,3),new r("a~",0,1),new r("o~",0,2)],j=[new r("ic",-1,-1),new r("ad",-1,-1),new r("os",-1,-1),new r("iv",-1,1)],C=[new r("ante",-1,1),new r("avel",-1,1),new 
r("ível",-1,1)],P=[new r("ic",-1,1),new r("abil",-1,1),new r("iv",-1,1)],F=[new r("ica",-1,1),new r("ância",-1,1),new r("ência",-1,4),new r("ira",-1,9),new r("adora",-1,1),new r("osa",-1,1),new r("ista",-1,1),new r("iva",-1,8),new r("eza",-1,1),new r("logía",-1,2),new r("idade",-1,7),new r("ante",-1,1),new r("mente",-1,6),new r("amente",12,5),new r("ável",-1,1),new r("ível",-1,1),new r("ución",-1,3),new r("ico",-1,1),new r("ismo",-1,1),new r("oso",-1,1),new r("amento",-1,1),new r("imento",-1,1),new r("ivo",-1,8),new r("aça~o",-1,1),new r("ador",-1,1),new r("icas",-1,1),new r("ências",-1,4),new r("iras",-1,9),new r("adoras",-1,1),new r("osas",-1,1),new r("istas",-1,1),new r("ivas",-1,8),new r("ezas",-1,1),new r("logías",-1,2),new r("idades",-1,7),new r("uciones",-1,3),new r("adores",-1,1),new r("antes",-1,1),new r("aço~es",-1,1),new r("icos",-1,1),new r("ismos",-1,1),new r("osos",-1,1),new r("amentos",-1,1),new r("imentos",-1,1),new r("ivos",-1,8)],S=[new r("ada",-1,1),new r("ida",-1,1),new r("ia",-1,1),new r("aria",2,1),new r("eria",2,1),new r("iria",2,1),new r("ara",-1,1),new r("era",-1,1),new r("ira",-1,1),new r("ava",-1,1),new r("asse",-1,1),new r("esse",-1,1),new r("isse",-1,1),new r("aste",-1,1),new r("este",-1,1),new r("iste",-1,1),new r("ei",-1,1),new r("arei",16,1),new r("erei",16,1),new r("irei",16,1),new r("am",-1,1),new r("iam",20,1),new r("ariam",21,1),new r("eriam",21,1),new r("iriam",21,1),new r("aram",20,1),new r("eram",20,1),new r("iram",20,1),new r("avam",20,1),new r("em",-1,1),new r("arem",29,1),new r("erem",29,1),new r("irem",29,1),new r("assem",29,1),new r("essem",29,1),new r("issem",29,1),new r("ado",-1,1),new r("ido",-1,1),new r("ando",-1,1),new r("endo",-1,1),new r("indo",-1,1),new r("ara~o",-1,1),new r("era~o",-1,1),new r("ira~o",-1,1),new r("ar",-1,1),new r("er",-1,1),new r("ir",-1,1),new r("as",-1,1),new r("adas",47,1),new r("idas",47,1),new r("ias",47,1),new r("arias",50,1),new r("erias",50,1),new r("irias",50,1),new r("aras",47,1),new 
r("eras",47,1),new r("iras",47,1),new r("avas",47,1),new r("es",-1,1),new r("ardes",58,1),new r("erdes",58,1),new r("irdes",58,1),new r("ares",58,1),new r("eres",58,1),new r("ires",58,1),new r("asses",58,1),new r("esses",58,1),new r("isses",58,1),new r("astes",58,1),new r("estes",58,1),new r("istes",58,1),new r("is",-1,1),new r("ais",71,1),new r("eis",71,1),new r("areis",73,1),new r("ereis",73,1),new r("ireis",73,1),new r("áreis",73,1),new r("éreis",73,1),new r("íreis",73,1),new r("ásseis",73,1),new r("ésseis",73,1),new r("ísseis",73,1),new r("áveis",73,1),new r("íeis",73,1),new r("aríeis",84,1),new r("eríeis",84,1),new r("iríeis",84,1),new r("ados",-1,1),new r("idos",-1,1),new r("amos",-1,1),new r("áramos",90,1),new r("éramos",90,1),new r("íramos",90,1),new r("ávamos",90,1),new r("íamos",90,1),new r("aríamos",95,1),new r("eríamos",95,1),new r("iríamos",95,1),new r("emos",-1,1),new r("aremos",99,1),new r("eremos",99,1),new r("iremos",99,1),new r("ássemos",99,1),new r("êssemos",99,1),new r("íssemos",99,1),new r("imos",-1,1),new r("armos",-1,1),new r("ermos",-1,1),new r("irmos",-1,1),new r("ámos",-1,1),new r("arás",-1,1),new r("erás",-1,1),new r("irás",-1,1),new r("eu",-1,1),new r("iu",-1,1),new r("ou",-1,1),new r("ará",-1,1),new r("erá",-1,1),new r("irá",-1,1)],W=[new r("a",-1,1),new r("i",-1,1),new r("o",-1,1),new r("os",-1,1),new r("á",-1,1),new r("í",-1,1),new r("ó",-1,1)],L=[new r("e",-1,1),new r("ç",-1,2),new r("é",-1,1),new r("ê",-1,1)],y=[17,65,16,0,0,0,0,0,0,0,0,0,0,0,0,0,3,19,12,2],z=new s;this.setCurrent=function(e){z.setCurrent(e)},this.getCurrent=function(){return z.getCurrent()},this.stem=function(){var r=z.cursor;return e(),z.cursor=r,a(),z.limit_backward=r,z.cursor=z.limit,_(),z.cursor=z.limit,p(),z.cursor=z.limit_backward,u(),!0}};return function(e){return"function"==typeof e.update?e.update(function(e){return 
n.setCurrent(e),n.stem(),n.getCurrent()}):(n.setCurrent(e),n.stem(),n.getCurrent())}}(),e.Pipeline.registerFunction(e.pt.stemmer,"stemmer-pt"),e.pt.stopWordFilter=e.generateStopWordFilter("a ao aos aquela aquelas aquele aqueles aquilo as até com como da das de dela delas dele deles depois do dos e ela elas ele eles em entre era eram essa essas esse esses esta estamos estas estava estavam este esteja estejam estejamos estes esteve estive estivemos estiver estivera estiveram estiverem estivermos estivesse estivessem estivéramos estivéssemos estou está estávamos estão eu foi fomos for fora foram forem formos fosse fossem fui fôramos fôssemos haja hajam hajamos havemos hei houve houvemos houver houvera houveram houverei houverem houveremos houveria houveriam houvermos houverá houverão houveríamos houvesse houvessem houvéramos houvéssemos há hão isso isto já lhe lhes mais mas me mesmo meu meus minha minhas muito na nas nem no nos nossa nossas nosso nossos num numa não nós o os ou para pela pelas pelo pelos por qual quando que quem se seja sejam sejamos sem serei seremos seria seriam será serão seríamos seu seus somos sou sua suas são só também te tem temos tenha tenham tenhamos tenho terei teremos teria teriam terá terão teríamos teu teus teve tinha tinham tive tivemos tiver tivera tiveram tiverem tivermos tivesse tivessem tivéramos tivéssemos tu tua tuas tém tínhamos um uma você vocês vos à às éramos".split(" ")),e.Pipeline.registerFunction(e.pt.stopWordFilter,"stopWordFilter-pt")}}); \ No newline at end of file diff --git a/assets/javascripts/lunr/min/lunr.ro.min.js b/assets/javascripts/lunr/min/lunr.ro.min.js new file mode 100644 index 000000000..727714018 --- /dev/null +++ b/assets/javascripts/lunr/min/lunr.ro.min.js @@ -0,0 +1,18 @@ +/*! + * Lunr languages, `Romanian` language + * https://github.com/MihaiValentin/lunr-languages + * + * Copyright 2014, Mihai Valentin + * http://www.mozilla.org/MPL/ + */ +/*! 
+ * based on + * Snowball JavaScript Library v0.3 + * http://code.google.com/p/urim/ + * http://snowball.tartarus.org/ + * + * Copyright 2010, Oleg Mazko + * http://www.mozilla.org/MPL/ + */ + +!function(e,i){"function"==typeof define&&define.amd?define(i):"object"==typeof exports?module.exports=i():i()(e.lunr)}(this,function(){return function(e){if(void 0===e)throw new Error("Lunr is not present. Please include / require Lunr before this script.");if(void 0===e.stemmerSupport)throw new Error("Lunr stemmer support is not present. Please include / require Lunr stemmer support before this script.");e.ro=function(){this.pipeline.reset(),this.pipeline.add(e.ro.trimmer,e.ro.stopWordFilter,e.ro.stemmer),this.searchPipeline&&(this.searchPipeline.reset(),this.searchPipeline.add(e.ro.stemmer))},e.ro.wordCharacters="A-Za-zªºÀ-ÖØ-öø-ʸˠ-ˤᴀ-ᴥᴬ-ᵜᵢ-ᵥᵫ-ᵷᵹ-ᶾḀ-ỿⁱⁿₐ-ₜKÅℲⅎⅠ-ↈⱠ-ⱿꜢ-ꞇꞋ-ꞭꞰ-ꞷꟷ-ꟿꬰ-ꭚꭜ-ꭤff-stA-Za-z",e.ro.trimmer=e.trimmerSupport.generateTrimmer(e.ro.wordCharacters),e.Pipeline.registerFunction(e.ro.trimmer,"trimmer-ro"),e.ro.stemmer=function(){var i=e.stemmerSupport.Among,r=e.stemmerSupport.SnowballProgram,n=new function(){function e(e,i){L.eq_s(1,e)&&(L.ket=L.cursor,L.in_grouping(W,97,259)&&L.slice_from(i))}function n(){for(var i,r;;){if(i=L.cursor,L.in_grouping(W,97,259)&&(r=L.cursor,L.bra=r,e("u","U"),L.cursor=r,e("i","I")),L.cursor=i,L.cursor>=L.limit)break;L.cursor++}}function t(){if(L.out_grouping(W,97,259)){for(;!L.in_grouping(W,97,259);){if(L.cursor>=L.limit)return!0;L.cursor++}return!1}return!0}function a(){if(L.in_grouping(W,97,259))for(;!L.out_grouping(W,97,259);){if(L.cursor>=L.limit)return!0;L.cursor++}return!1}function o(){var e,i,r=L.cursor;if(L.in_grouping(W,97,259)){if(e=L.cursor,!t())return void(h=L.cursor);if(L.cursor=e,!a())return 
void(h=L.cursor)}L.cursor=r,L.out_grouping(W,97,259)&&(i=L.cursor,t()&&(L.cursor=i,L.in_grouping(W,97,259)&&L.cursor=L.limit)return!1;L.cursor++}for(;!L.out_grouping(W,97,259);){if(L.cursor>=L.limit)return!1;L.cursor++}return!0}function c(){var e=L.cursor;h=L.limit,k=h,g=h,o(),L.cursor=e,u()&&(k=L.cursor,u()&&(g=L.cursor))}function s(){for(var e;;){if(L.bra=L.cursor,e=L.find_among(z,3))switch(L.ket=L.cursor,e){case 1:L.slice_from("i");continue;case 2:L.slice_from("u");continue;case 3:if(L.cursor>=L.limit)break;L.cursor++;continue}break}}function w(){return h<=L.cursor}function m(){return k<=L.cursor}function l(){return g<=L.cursor}function f(){var e,i;if(L.ket=L.cursor,(e=L.find_among_b(C,16))&&(L.bra=L.cursor,m()))switch(e){case 1:L.slice_del();break;case 2:L.slice_from("a");break;case 3:L.slice_from("e");break;case 4:L.slice_from("i");break;case 5:i=L.limit-L.cursor,L.eq_s_b(2,"ab")||(L.cursor=L.limit-i,L.slice_from("i"));break;case 6:L.slice_from("at");break;case 7:L.slice_from("aţi")}}function p(){var e,i=L.limit-L.cursor;if(L.ket=L.cursor,(e=L.find_among_b(P,46))&&(L.bra=L.cursor,m())){switch(e){case 1:L.slice_from("abil");break;case 2:L.slice_from("ibil");break;case 3:L.slice_from("iv");break;case 4:L.slice_from("ic");break;case 5:L.slice_from("at");break;case 6:L.slice_from("it")}return _=!0,L.cursor=L.limit-i,!0}return!1}function d(){var e,i;for(_=!1;;)if(i=L.limit-L.cursor,!p()){L.cursor=L.limit-i;break}if(L.ket=L.cursor,(e=L.find_among_b(F,62))&&(L.bra=L.cursor,l())){switch(e){case 1:L.slice_del();break;case 2:L.eq_s_b(1,"ţ")&&(L.bra=L.cursor,L.slice_from("t"));break;case 3:L.slice_from("ist")}_=!0}}function b(){var e,i,r;if(L.cursor>=h){if(i=L.limit_backward,L.limit_backward=h,L.ket=L.cursor,e=L.find_among_b(q,94))switch(L.bra=L.cursor,e){case 1:if(r=L.limit-L.cursor,!L.out_grouping_b(W,97,259)&&(L.cursor=L.limit-r,!L.eq_s_b(1,"u")))break;case 2:L.slice_del()}L.limit_backward=i}}function v(){var 
e;L.ket=L.cursor,(e=L.find_among_b(S,5))&&(L.bra=L.cursor,w()&&1==e&&L.slice_del())}var _,g,k,h,z=[new i("",-1,3),new i("I",0,1),new i("U",0,2)],C=[new i("ea",-1,3),new i("aţia",-1,7),new i("aua",-1,2),new i("iua",-1,4),new i("aţie",-1,7),new i("ele",-1,3),new i("ile",-1,5),new i("iile",6,4),new i("iei",-1,4),new i("atei",-1,6),new i("ii",-1,4),new i("ului",-1,1),new i("ul",-1,1),new i("elor",-1,3),new i("ilor",-1,4),new i("iilor",14,4)],P=[new i("icala",-1,4),new i("iciva",-1,4),new i("ativa",-1,5),new i("itiva",-1,6),new i("icale",-1,4),new i("aţiune",-1,5),new i("iţiune",-1,6),new i("atoare",-1,5),new i("itoare",-1,6),new i("ătoare",-1,5),new i("icitate",-1,4),new i("abilitate",-1,1),new i("ibilitate",-1,2),new i("ivitate",-1,3),new i("icive",-1,4),new i("ative",-1,5),new i("itive",-1,6),new i("icali",-1,4),new i("atori",-1,5),new i("icatori",18,4),new i("itori",-1,6),new i("ători",-1,5),new i("icitati",-1,4),new i("abilitati",-1,1),new i("ivitati",-1,3),new i("icivi",-1,4),new i("ativi",-1,5),new i("itivi",-1,6),new i("icităi",-1,4),new i("abilităi",-1,1),new i("ivităi",-1,3),new i("icităţi",-1,4),new i("abilităţi",-1,1),new i("ivităţi",-1,3),new i("ical",-1,4),new i("ator",-1,5),new i("icator",35,4),new i("itor",-1,6),new i("ător",-1,5),new i("iciv",-1,4),new i("ativ",-1,5),new i("itiv",-1,6),new i("icală",-1,4),new i("icivă",-1,4),new i("ativă",-1,5),new i("itivă",-1,6)],F=[new i("ica",-1,1),new i("abila",-1,1),new i("ibila",-1,1),new i("oasa",-1,1),new i("ata",-1,1),new i("ita",-1,1),new i("anta",-1,1),new i("ista",-1,3),new i("uta",-1,1),new i("iva",-1,1),new i("ic",-1,1),new i("ice",-1,1),new i("abile",-1,1),new i("ibile",-1,1),new i("isme",-1,3),new i("iune",-1,2),new i("oase",-1,1),new i("ate",-1,1),new i("itate",17,1),new i("ite",-1,1),new i("ante",-1,1),new i("iste",-1,3),new i("ute",-1,1),new i("ive",-1,1),new i("ici",-1,1),new i("abili",-1,1),new i("ibili",-1,1),new i("iuni",-1,2),new i("atori",-1,1),new i("osi",-1,1),new i("ati",-1,1),new 
i("itati",30,1),new i("iti",-1,1),new i("anti",-1,1),new i("isti",-1,3),new i("uti",-1,1),new i("işti",-1,3),new i("ivi",-1,1),new i("ităi",-1,1),new i("oşi",-1,1),new i("ităţi",-1,1),new i("abil",-1,1),new i("ibil",-1,1),new i("ism",-1,3),new i("ator",-1,1),new i("os",-1,1),new i("at",-1,1),new i("it",-1,1),new i("ant",-1,1),new i("ist",-1,3),new i("ut",-1,1),new i("iv",-1,1),new i("ică",-1,1),new i("abilă",-1,1),new i("ibilă",-1,1),new i("oasă",-1,1),new i("ată",-1,1),new i("ită",-1,1),new i("antă",-1,1),new i("istă",-1,3),new i("ută",-1,1),new i("ivă",-1,1)],q=[new i("ea",-1,1),new i("ia",-1,1),new i("esc",-1,1),new i("ăsc",-1,1),new i("ind",-1,1),new i("ând",-1,1),new i("are",-1,1),new i("ere",-1,1),new i("ire",-1,1),new i("âre",-1,1),new i("se",-1,2),new i("ase",10,1),new i("sese",10,2),new i("ise",10,1),new i("use",10,1),new i("âse",10,1),new i("eşte",-1,1),new i("ăşte",-1,1),new i("eze",-1,1),new i("ai",-1,1),new i("eai",19,1),new i("iai",19,1),new i("sei",-1,2),new i("eşti",-1,1),new i("ăşti",-1,1),new i("ui",-1,1),new i("ezi",-1,1),new i("âi",-1,1),new i("aşi",-1,1),new i("seşi",-1,2),new i("aseşi",29,1),new i("seseşi",29,2),new i("iseşi",29,1),new i("useşi",29,1),new i("âseşi",29,1),new i("işi",-1,1),new i("uşi",-1,1),new i("âşi",-1,1),new i("aţi",-1,2),new i("eaţi",38,1),new i("iaţi",38,1),new i("eţi",-1,2),new i("iţi",-1,2),new i("âţi",-1,2),new i("arăţi",-1,1),new i("serăţi",-1,2),new i("aserăţi",45,1),new i("seserăţi",45,2),new i("iserăţi",45,1),new i("userăţi",45,1),new i("âserăţi",45,1),new i("irăţi",-1,1),new i("urăţi",-1,1),new i("ârăţi",-1,1),new i("am",-1,1),new i("eam",54,1),new i("iam",54,1),new i("em",-1,2),new i("asem",57,1),new i("sesem",57,2),new i("isem",57,1),new i("usem",57,1),new i("âsem",57,1),new i("im",-1,2),new i("âm",-1,2),new i("ăm",-1,2),new i("arăm",65,1),new i("serăm",65,2),new i("aserăm",67,1),new i("seserăm",67,2),new i("iserăm",67,1),new i("userăm",67,1),new i("âserăm",67,1),new i("irăm",65,1),new i("urăm",65,1),new 
i("ârăm",65,1),new i("au",-1,1),new i("eau",76,1),new i("iau",76,1),new i("indu",-1,1),new i("ându",-1,1),new i("ez",-1,1),new i("ească",-1,1),new i("ară",-1,1),new i("seră",-1,2),new i("aseră",84,1),new i("seseră",84,2),new i("iseră",84,1),new i("useră",84,1),new i("âseră",84,1),new i("iră",-1,1),new i("ură",-1,1),new i("âră",-1,1),new i("ează",-1,1)],S=[new i("a",-1,1),new i("e",-1,1),new i("ie",1,1),new i("i",-1,1),new i("ă",-1,1)],W=[17,65,16,0,0,0,0,0,0,0,0,0,0,0,0,0,2,32,0,0,4],L=new r;this.setCurrent=function(e){L.setCurrent(e)},this.getCurrent=function(){return L.getCurrent()},this.stem=function(){var e=L.cursor;return n(),L.cursor=e,c(),L.limit_backward=e,L.cursor=L.limit,f(),L.cursor=L.limit,d(),L.cursor=L.limit,_||(L.cursor=L.limit,b(),L.cursor=L.limit),v(),L.cursor=L.limit_backward,s(),!0}};return function(e){return"function"==typeof e.update?e.update(function(e){return n.setCurrent(e),n.stem(),n.getCurrent()}):(n.setCurrent(e),n.stem(),n.getCurrent())}}(),e.Pipeline.registerFunction(e.ro.stemmer,"stemmer-ro"),e.ro.stopWordFilter=e.generateStopWordFilter("acea aceasta această aceea acei aceia acel acela acele acelea acest acesta aceste acestea aceşti aceştia acolo acord acum ai aia aibă aici al ale alea altceva altcineva am ar are asemenea asta astea astăzi asupra au avea avem aveţi azi aş aşadar aţi bine bucur bună ca care caut ce cel ceva chiar cinci cine cineva contra cu cum cumva curând curînd când cât câte câtva câţi cînd cît cîte cîtva cîţi că căci cărei căror cărui către da dacă dar datorită dată dau de deci deja deoarece departe deşi din dinaintea dintr- dintre doi doilea două drept după dă ea ei el ele eram este eu eşti face fata fi fie fiecare fii fim fiu fiţi frumos fără graţie halbă iar ieri la le li lor lui lângă lîngă mai mea mei mele mereu meu mi mie mine mult multă mulţi mulţumesc mâine mîine mă ne nevoie nici nicăieri nimeni nimeri nimic nişte noastre noastră noi noroc nostru nouă noştri nu opt ori oricare orice oricine oricum oricând 
oricât oricînd oricît oriunde patra patru patrulea pe pentru peste pic poate pot prea prima primul prin puţin puţina puţină până pînă rog sa sale sau se spate spre sub sunt suntem sunteţi sută sînt sîntem sînteţi să săi său ta tale te timp tine toate toată tot totuşi toţi trei treia treilea tu tăi tău un una unde undeva unei uneia unele uneori unii unor unora unu unui unuia unul vi voastre voastră voi vostru vouă voştri vreme vreo vreun vă zece zero zi zice îi îl îmi împotriva în înainte înaintea încotro încât încît între întrucât întrucît îţi ăla ălea ăsta ăstea ăştia şapte şase şi ştiu ţi ţie".split(" ")),e.Pipeline.registerFunction(e.ro.stopWordFilter,"stopWordFilter-ro")}}); \ No newline at end of file diff --git a/assets/javascripts/lunr/min/lunr.ru.min.js b/assets/javascripts/lunr/min/lunr.ru.min.js new file mode 100644 index 000000000..186cc485c --- /dev/null +++ b/assets/javascripts/lunr/min/lunr.ru.min.js @@ -0,0 +1,18 @@ +/*! + * Lunr languages, `Russian` language + * https://github.com/MihaiValentin/lunr-languages + * + * Copyright 2014, Mihai Valentin + * http://www.mozilla.org/MPL/ + */ +/*! + * based on + * Snowball JavaScript Library v0.3 + * http://code.google.com/p/urim/ + * http://snowball.tartarus.org/ + * + * Copyright 2010, Oleg Mazko + * http://www.mozilla.org/MPL/ + */ + +!function(e,n){"function"==typeof define&&define.amd?define(n):"object"==typeof exports?module.exports=n():n()(e.lunr)}(this,function(){return function(e){if(void 0===e)throw new Error("Lunr is not present. Please include / require Lunr before this script.");if(void 0===e.stemmerSupport)throw new Error("Lunr stemmer support is not present. 
Please include / require Lunr stemmer support before this script.");e.ru=function(){this.pipeline.reset(),this.pipeline.add(e.ru.trimmer,e.ru.stopWordFilter,e.ru.stemmer),this.searchPipeline&&(this.searchPipeline.reset(),this.searchPipeline.add(e.ru.stemmer))},e.ru.wordCharacters="Ѐ-҄҇-ԯᴫᵸⷠ-ⷿꙀ-ꚟ︮︯",e.ru.trimmer=e.trimmerSupport.generateTrimmer(e.ru.wordCharacters),e.Pipeline.registerFunction(e.ru.trimmer,"trimmer-ru"),e.ru.stemmer=function(){var n=e.stemmerSupport.Among,r=e.stemmerSupport.SnowballProgram,t=new function(){function e(){for(;!W.in_grouping(S,1072,1103);){if(W.cursor>=W.limit)return!1;W.cursor++}return!0}function t(){for(;!W.out_grouping(S,1072,1103);){if(W.cursor>=W.limit)return!1;W.cursor++}return!0}function w(){b=W.limit,_=b,e()&&(b=W.cursor,t()&&e()&&t()&&(_=W.cursor))}function i(){return _<=W.cursor}function u(e,n){var r,t;if(W.ket=W.cursor,r=W.find_among_b(e,n)){switch(W.bra=W.cursor,r){case 1:if(t=W.limit-W.cursor,!W.eq_s_b(1,"а")&&(W.cursor=W.limit-t,!W.eq_s_b(1,"я")))return!1;case 2:W.slice_del()}return!0}return!1}function o(){return u(h,9)}function s(e,n){var r;return W.ket=W.cursor,!!(r=W.find_among_b(e,n))&&(W.bra=W.cursor,1==r&&W.slice_del(),!0)}function c(){return s(g,26)}function m(){return!!c()&&(u(C,8),!0)}function f(){return s(k,2)}function l(){return u(P,46)}function a(){s(v,36)}function p(){var e;W.ket=W.cursor,(e=W.find_among_b(F,2))&&(W.bra=W.cursor,i()&&1==e&&W.slice_del())}function d(){var e;if(W.ket=W.cursor,e=W.find_among_b(q,4))switch(W.bra=W.cursor,e){case 1:if(W.slice_del(),W.ket=W.cursor,!W.eq_s_b(1,"н"))break;W.bra=W.cursor;case 2:if(!W.eq_s_b(1,"н"))break;case 3:W.slice_del()}}var _,b,h=[new n("в",-1,1),new n("ив",0,2),new n("ыв",0,2),new n("вши",-1,1),new n("ивши",3,2),new n("ывши",3,2),new n("вшись",-1,1),new n("ившись",6,2),new n("ывшись",6,2)],g=[new n("ее",-1,1),new n("ие",-1,1),new n("ое",-1,1),new n("ые",-1,1),new n("ими",-1,1),new n("ыми",-1,1),new n("ей",-1,1),new n("ий",-1,1),new n("ой",-1,1),new 
n("ый",-1,1),new n("ем",-1,1),new n("им",-1,1),new n("ом",-1,1),new n("ым",-1,1),new n("его",-1,1),new n("ого",-1,1),new n("ему",-1,1),new n("ому",-1,1),new n("их",-1,1),new n("ых",-1,1),new n("ею",-1,1),new n("ою",-1,1),new n("ую",-1,1),new n("юю",-1,1),new n("ая",-1,1),new n("яя",-1,1)],C=[new n("ем",-1,1),new n("нн",-1,1),new n("вш",-1,1),new n("ивш",2,2),new n("ывш",2,2),new n("щ",-1,1),new n("ющ",5,1),new n("ующ",6,2)],k=[new n("сь",-1,1),new n("ся",-1,1)],P=[new n("ла",-1,1),new n("ила",0,2),new n("ыла",0,2),new n("на",-1,1),new n("ена",3,2),new n("ете",-1,1),new n("ите",-1,2),new n("йте",-1,1),new n("ейте",7,2),new n("уйте",7,2),new n("ли",-1,1),new n("или",10,2),new n("ыли",10,2),new n("й",-1,1),new n("ей",13,2),new n("уй",13,2),new n("л",-1,1),new n("ил",16,2),new n("ыл",16,2),new n("ем",-1,1),new n("им",-1,2),new n("ым",-1,2),new n("н",-1,1),new n("ен",22,2),new n("ло",-1,1),new n("ило",24,2),new n("ыло",24,2),new n("но",-1,1),new n("ено",27,2),new n("нно",27,1),new n("ет",-1,1),new n("ует",30,2),new n("ит",-1,2),new n("ыт",-1,2),new n("ют",-1,1),new n("уют",34,2),new n("ят",-1,2),new n("ны",-1,1),new n("ены",37,2),new n("ть",-1,1),new n("ить",39,2),new n("ыть",39,2),new n("ешь",-1,1),new n("ишь",-1,2),new n("ю",-1,2),new n("ую",44,2)],v=[new n("а",-1,1),new n("ев",-1,1),new n("ов",-1,1),new n("е",-1,1),new n("ие",3,1),new n("ье",3,1),new n("и",-1,1),new n("еи",6,1),new n("ии",6,1),new n("ами",6,1),new n("ями",6,1),new n("иями",10,1),new n("й",-1,1),new n("ей",12,1),new n("ией",13,1),new n("ий",12,1),new n("ой",12,1),new n("ам",-1,1),new n("ем",-1,1),new n("ием",18,1),new n("ом",-1,1),new n("ям",-1,1),new n("иям",21,1),new n("о",-1,1),new n("у",-1,1),new n("ах",-1,1),new n("ях",-1,1),new n("иях",26,1),new n("ы",-1,1),new n("ь",-1,1),new n("ю",-1,1),new n("ию",30,1),new n("ью",30,1),new n("я",-1,1),new n("ия",33,1),new n("ья",33,1)],F=[new n("ост",-1,1),new n("ость",-1,1)],q=[new n("ейше",-1,1),new n("н",-1,2),new n("ейш",-1,1),new 
n("ь",-1,3)],S=[33,65,8,232],W=new r;this.setCurrent=function(e){W.setCurrent(e)},this.getCurrent=function(){return W.getCurrent()},this.stem=function(){return w(),W.cursor=W.limit,!(W.cursor=i&&(e-=i,t[e>>3]&1<<(7&e)))return this.cursor++,!0}return!1},in_grouping_b:function(t,i,s){if(this.cursor>this.limit_backward){var e=r.charCodeAt(this.cursor-1);if(e<=s&&e>=i&&(e-=i,t[e>>3]&1<<(7&e)))return this.cursor--,!0}return!1},out_grouping:function(t,i,s){if(this.cursors||e>3]&1<<(7&e)))return this.cursor++,!0}return!1},out_grouping_b:function(t,i,s){if(this.cursor>this.limit_backward){var e=r.charCodeAt(this.cursor-1);if(e>s||e>3]&1<<(7&e)))return this.cursor--,!0}return!1},eq_s:function(t,i){if(this.limit-this.cursor>1),f=0,l=o0||e==s||c)break;c=!0}}for(;;){var _=t[s];if(o>=_.s_size){if(this.cursor=n+_.s_size,!_.method)return _.result;var b=_.method();if(this.cursor=n+_.s_size,b)return _.result}if((s=_.substring_i)<0)return 0}},find_among_b:function(t,i){for(var s=0,e=i,n=this.cursor,u=this.limit_backward,o=0,h=0,c=!1;;){for(var a=s+(e-s>>1),f=0,l=o=0;m--){if(n-l==u){f=-1;break}if(f=r.charCodeAt(n-1-l)-_.s[m])break;l++}if(f<0?(e=a,h=l):(s=a,o=l),e-s<=1){if(s>0||e==s||c)break;c=!0}}for(;;){var _=t[s];if(o>=_.s_size){if(this.cursor=n-_.s_size,!_.method)return _.result;var b=_.method();if(this.cursor=n-_.s_size,b)return _.result}if((s=_.substring_i)<0)return 0}},replace_s:function(t,i,s){var e=s.length-(i-t),n=r.substring(0,t),u=r.substring(i);return r=n+s+u,this.limit+=e,this.cursor>=i?this.cursor+=e:this.cursor>t&&(this.cursor=t),e},slice_check:function(){if(this.bra<0||this.bra>this.ket||this.ket>this.limit||this.limit>r.length)throw"faulty slice operation"},slice_from:function(r){this.slice_check(),this.replace_s(this.bra,this.ket,r)},slice_del:function(){this.slice_from("")},insert:function(r,t,i){var s=this.replace_s(r,t,i);r<=this.bra&&(this.bra+=s),r<=this.ket&&(this.ket+=s)},slice_to:function(){return 
this.slice_check(),r.substring(this.bra,this.ket)},eq_v_b:function(r){return this.eq_s_b(r.length,r)}}}},r.trimmerSupport={generateTrimmer:function(r){var t=new RegExp("^[^"+r+"]+"),i=new RegExp("[^"+r+"]+$");return function(r){return"function"==typeof r.update?r.update(function(r){return r.replace(t,"").replace(i,"")}):r.replace(t,"").replace(i,"")}}}}}); \ No newline at end of file diff --git a/assets/javascripts/lunr/min/lunr.sv.min.js b/assets/javascripts/lunr/min/lunr.sv.min.js new file mode 100644 index 000000000..3e5eb6400 --- /dev/null +++ b/assets/javascripts/lunr/min/lunr.sv.min.js @@ -0,0 +1,18 @@ +/*! + * Lunr languages, `Swedish` language + * https://github.com/MihaiValentin/lunr-languages + * + * Copyright 2014, Mihai Valentin + * http://www.mozilla.org/MPL/ + */ +/*! + * based on + * Snowball JavaScript Library v0.3 + * http://code.google.com/p/urim/ + * http://snowball.tartarus.org/ + * + * Copyright 2010, Oleg Mazko + * http://www.mozilla.org/MPL/ + */ + +!function(e,r){"function"==typeof define&&define.amd?define(r):"object"==typeof exports?module.exports=r():r()(e.lunr)}(this,function(){return function(e){if(void 0===e)throw new Error("Lunr is not present. Please include / require Lunr before this script.");if(void 0===e.stemmerSupport)throw new Error("Lunr stemmer support is not present. 
Please include / require Lunr stemmer support before this script.");e.sv=function(){this.pipeline.reset(),this.pipeline.add(e.sv.trimmer,e.sv.stopWordFilter,e.sv.stemmer),this.searchPipeline&&(this.searchPipeline.reset(),this.searchPipeline.add(e.sv.stemmer))},e.sv.wordCharacters="A-Za-zªºÀ-ÖØ-öø-ʸˠ-ˤᴀ-ᴥᴬ-ᵜᵢ-ᵥᵫ-ᵷᵹ-ᶾḀ-ỿⁱⁿₐ-ₜKÅℲⅎⅠ-ↈⱠ-ⱿꜢ-ꞇꞋ-ꞭꞰ-ꞷꟷ-ꟿꬰ-ꭚꭜ-ꭤff-stA-Za-z",e.sv.trimmer=e.trimmerSupport.generateTrimmer(e.sv.wordCharacters),e.Pipeline.registerFunction(e.sv.trimmer,"trimmer-sv"),e.sv.stemmer=function(){var r=e.stemmerSupport.Among,n=e.stemmerSupport.SnowballProgram,t=new function(){function e(){var e,r=w.cursor+3;if(o=w.limit,0<=r||r<=w.limit){for(a=r;;){if(e=w.cursor,w.in_grouping(l,97,246)){w.cursor=e;break}if(w.cursor=e,w.cursor>=w.limit)return;w.cursor++}for(;!w.out_grouping(l,97,246);){if(w.cursor>=w.limit)return;w.cursor++}o=w.cursor,o=o&&(w.limit_backward=o,w.cursor=w.limit,w.ket=w.cursor,e=w.find_among_b(u,37),w.limit_backward=r,e))switch(w.bra=w.cursor,e){case 1:w.slice_del();break;case 2:w.in_grouping_b(d,98,121)&&w.slice_del()}}function i(){var e=w.limit_backward;w.cursor>=o&&(w.limit_backward=o,w.cursor=w.limit,w.find_among_b(c,7)&&(w.cursor=w.limit,w.ket=w.cursor,w.cursor>w.limit_backward&&(w.bra=--w.cursor,w.slice_del())),w.limit_backward=e)}function s(){var e,r;if(w.cursor>=o){if(r=w.limit_backward,w.limit_backward=o,w.cursor=w.limit,w.ket=w.cursor,e=w.find_among_b(m,5))switch(w.bra=w.cursor,e){case 1:w.slice_del();break;case 2:w.slice_from("lös");break;case 3:w.slice_from("full")}w.limit_backward=r}}var a,o,u=[new r("a",-1,1),new r("arna",0,1),new r("erna",0,1),new r("heterna",2,1),new r("orna",0,1),new r("ad",-1,1),new r("e",-1,1),new r("ade",6,1),new r("ande",6,1),new r("arne",6,1),new r("are",6,1),new r("aste",6,1),new r("en",-1,1),new r("anden",12,1),new r("aren",12,1),new r("heten",12,1),new r("ern",-1,1),new r("ar",-1,1),new r("er",-1,1),new r("heter",18,1),new r("or",-1,1),new r("s",-1,2),new r("as",21,1),new r("arnas",22,1),new 
r("ernas",22,1),new r("ornas",22,1),new r("es",21,1),new r("ades",26,1),new r("andes",26,1),new r("ens",21,1),new r("arens",29,1),new r("hetens",29,1),new r("erns",21,1),new r("at",-1,1),new r("andet",-1,1),new r("het",-1,1),new r("ast",-1,1)],c=[new r("dd",-1,-1),new r("gd",-1,-1),new r("nn",-1,-1),new r("dt",-1,-1),new r("gt",-1,-1),new r("kt",-1,-1),new r("tt",-1,-1)],m=[new r("ig",-1,1),new r("lig",0,1),new r("els",-1,1),new r("fullt",-1,3),new r("löst",-1,2)],l=[17,65,16,1,0,0,0,0,0,0,0,0,0,0,0,0,24,0,32],d=[119,127,149],w=new n;this.setCurrent=function(e){w.setCurrent(e)},this.getCurrent=function(){return w.getCurrent()},this.stem=function(){var r=w.cursor;return e(),w.limit_backward=r,w.cursor=w.limit,t(),w.cursor=w.limit,i(),w.cursor=w.limit,s(),!0}};return function(e){return"function"==typeof e.update?e.update(function(e){return t.setCurrent(e),t.stem(),t.getCurrent()}):(t.setCurrent(e),t.stem(),t.getCurrent())}}(),e.Pipeline.registerFunction(e.sv.stemmer,"stemmer-sv"),e.sv.stopWordFilter=e.generateStopWordFilter("alla allt att av blev bli blir blivit de dem den denna deras dess dessa det detta dig din dina ditt du där då efter ej eller en er era ert ett från för ha hade han hans har henne hennes hon honom hur här i icke ingen inom inte jag ju kan kunde man med mellan men mig min mina mitt mot mycket ni nu när någon något några och om oss på samma sedan sig sin sina sitta själv skulle som så sådan sådana sådant till under upp ut utan vad var vara varför varit varje vars vart vem vi vid vilka vilkas vilken vilket vår våra vårt än är åt över".split(" ")),e.Pipeline.registerFunction(e.sv.stopWordFilter,"stopWordFilter-sv")}}); \ No newline at end of file diff --git a/assets/javascripts/lunr/min/lunr.th.min.js b/assets/javascripts/lunr/min/lunr.th.min.js new file mode 100644 index 000000000..dee3aac6e --- /dev/null +++ b/assets/javascripts/lunr/min/lunr.th.min.js @@ -0,0 +1 @@ +!function(e,r){"function"==typeof define&&define.amd?define(r):"object"==typeof 
exports?module.exports=r():r()(e.lunr)}(this,function(){return function(e){if(void 0===e)throw new Error("Lunr is not present. Please include / require Lunr before this script.");if(void 0===e.stemmerSupport)throw new Error("Lunr stemmer support is not present. Please include / require Lunr stemmer support before this script.");var r="2"==e.version[0];e.th=function(){this.pipeline.reset(),this.pipeline.add(e.th.trimmer),r?this.tokenizer=e.th.tokenizer:(e.tokenizer&&(e.tokenizer=e.th.tokenizer),this.tokenizerFn&&(this.tokenizerFn=e.th.tokenizer))},e.th.wordCharacters="[฀-๿]",e.th.trimmer=e.trimmerSupport.generateTrimmer(e.th.wordCharacters),e.Pipeline.registerFunction(e.th.trimmer,"trimmer-th");var t=e.wordcut;t.init(),e.th.tokenizer=function(i){if(!arguments.length||null==i||void 0==i)return[];if(Array.isArray(i))return i.map(function(t){return r?new e.Token(t):t});var n=i.toString().replace(/^\s+/,"");return t.cut(n).split("|")}}}); \ No newline at end of file diff --git a/assets/javascripts/lunr/min/lunr.tr.min.js b/assets/javascripts/lunr/min/lunr.tr.min.js new file mode 100644 index 000000000..563f6ec1f --- /dev/null +++ b/assets/javascripts/lunr/min/lunr.tr.min.js @@ -0,0 +1,18 @@ +/*! + * Lunr languages, `Turkish` language + * https://github.com/MihaiValentin/lunr-languages + * + * Copyright 2014, Mihai Valentin + * http://www.mozilla.org/MPL/ + */ +/*! + * based on + * Snowball JavaScript Library v0.3 + * http://code.google.com/p/urim/ + * http://snowball.tartarus.org/ + * + * Copyright 2010, Oleg Mazko + * http://www.mozilla.org/MPL/ + */ + +!function(r,i){"function"==typeof define&&define.amd?define(i):"object"==typeof exports?module.exports=i():i()(r.lunr)}(this,function(){return function(r){if(void 0===r)throw new Error("Lunr is not present. Please include / require Lunr before this script.");if(void 0===r.stemmerSupport)throw new Error("Lunr stemmer support is not present. 
Please include / require Lunr stemmer support before this script.");r.tr=function(){this.pipeline.reset(),this.pipeline.add(r.tr.trimmer,r.tr.stopWordFilter,r.tr.stemmer),this.searchPipeline&&(this.searchPipeline.reset(),this.searchPipeline.add(r.tr.stemmer))},r.tr.wordCharacters="A-Za-zªºÀ-ÖØ-öø-ʸˠ-ˤᴀ-ᴥᴬ-ᵜᵢ-ᵥᵫ-ᵷᵹ-ᶾḀ-ỿⁱⁿₐ-ₜKÅℲⅎⅠ-ↈⱠ-ⱿꜢ-ꞇꞋ-ꞭꞰ-ꞷꟷ-ꟿꬰ-ꭚꭜ-ꭤff-stA-Za-z",r.tr.trimmer=r.trimmerSupport.generateTrimmer(r.tr.wordCharacters),r.Pipeline.registerFunction(r.tr.trimmer,"trimmer-tr"),r.tr.stemmer=function(){var i=r.stemmerSupport.Among,e=r.stemmerSupport.SnowballProgram,n=new function(){function r(r,i,e){for(;;){var n=Dr.limit-Dr.cursor;if(Dr.in_grouping_b(r,i,e)){Dr.cursor=Dr.limit-n;break}if(Dr.cursor=Dr.limit-n,Dr.cursor<=Dr.limit_backward)return!1;Dr.cursor--}return!0}function n(){var i,e;i=Dr.limit-Dr.cursor,r(Wr,97,305);for(var n=0;nDr.limit_backward&&(Dr.cursor--,e=Dr.limit-Dr.cursor,i()))?(Dr.cursor=Dr.limit-e,!0):(Dr.cursor=Dr.limit-n,r()?(Dr.cursor=Dr.limit-n,!1):(Dr.cursor=Dr.limit-n,!(Dr.cursor<=Dr.limit_backward)&&(Dr.cursor--,!!i()&&(Dr.cursor=Dr.limit-n,!0))))}function u(r){return t(r,function(){return Dr.in_grouping_b(Wr,97,305)})}function o(){return u(function(){return Dr.eq_s_b(1,"n")})}function s(){return u(function(){return Dr.eq_s_b(1,"s")})}function c(){return u(function(){return Dr.eq_s_b(1,"y")})}function l(){return t(function(){return Dr.in_grouping_b(Lr,105,305)},function(){return Dr.out_grouping_b(Wr,97,305)})}function a(){return Dr.find_among_b(ur,10)&&l()}function m(){return n()&&Dr.in_grouping_b(Lr,105,305)&&s()}function d(){return Dr.find_among_b(or,2)}function f(){return n()&&Dr.in_grouping_b(Lr,105,305)&&c()}function b(){return n()&&Dr.find_among_b(sr,4)}function w(){return n()&&Dr.find_among_b(cr,4)&&o()}function _(){return n()&&Dr.find_among_b(lr,2)&&c()}function k(){return n()&&Dr.find_among_b(ar,2)}function p(){return n()&&Dr.find_among_b(mr,4)}function g(){return n()&&Dr.find_among_b(dr,2)}function y(){return 
n()&&Dr.find_among_b(fr,4)}function z(){return n()&&Dr.find_among_b(br,2)}function v(){return n()&&Dr.find_among_b(wr,2)&&c()}function h(){return Dr.eq_s_b(2,"ki")}function q(){return n()&&Dr.find_among_b(_r,2)&&o()}function C(){return n()&&Dr.find_among_b(kr,4)&&c()}function P(){return n()&&Dr.find_among_b(pr,4)}function F(){return n()&&Dr.find_among_b(gr,4)&&c()}function S(){return Dr.find_among_b(yr,4)}function W(){return n()&&Dr.find_among_b(zr,2)}function L(){return n()&&Dr.find_among_b(vr,4)}function x(){return n()&&Dr.find_among_b(hr,8)}function A(){return Dr.find_among_b(qr,2)}function E(){return n()&&Dr.find_among_b(Cr,32)&&c()}function j(){return Dr.find_among_b(Pr,8)&&c()}function T(){return n()&&Dr.find_among_b(Fr,4)&&c()}function Z(){return Dr.eq_s_b(3,"ken")&&c()}function B(){var r=Dr.limit-Dr.cursor;return!(T()||(Dr.cursor=Dr.limit-r,E()||(Dr.cursor=Dr.limit-r,j()||(Dr.cursor=Dr.limit-r,Z()))))}function D(){if(A()){var r=Dr.limit-Dr.cursor;if(S()||(Dr.cursor=Dr.limit-r,W()||(Dr.cursor=Dr.limit-r,C()||(Dr.cursor=Dr.limit-r,P()||(Dr.cursor=Dr.limit-r,F()||(Dr.cursor=Dr.limit-r))))),T())return!1}return!0}function G(){if(W()){Dr.bra=Dr.cursor,Dr.slice_del();var r=Dr.limit-Dr.cursor;return Dr.ket=Dr.cursor,x()||(Dr.cursor=Dr.limit-r,E()||(Dr.cursor=Dr.limit-r,j()||(Dr.cursor=Dr.limit-r,T()||(Dr.cursor=Dr.limit-r)))),nr=!1,!1}return!0}function H(){if(!L())return!0;var r=Dr.limit-Dr.cursor;return!E()&&(Dr.cursor=Dr.limit-r,!j())}function I(){var r,i=Dr.limit-Dr.cursor;return!(S()||(Dr.cursor=Dr.limit-i,F()||(Dr.cursor=Dr.limit-i,P()||(Dr.cursor=Dr.limit-i,C()))))||(Dr.bra=Dr.cursor,Dr.slice_del(),r=Dr.limit-Dr.cursor,Dr.ket=Dr.cursor,T()||(Dr.cursor=Dr.limit-r),!1)}function J(){var 
r,i=Dr.limit-Dr.cursor;if(Dr.ket=Dr.cursor,nr=!0,B()&&(Dr.cursor=Dr.limit-i,D()&&(Dr.cursor=Dr.limit-i,G()&&(Dr.cursor=Dr.limit-i,H()&&(Dr.cursor=Dr.limit-i,I()))))){if(Dr.cursor=Dr.limit-i,!x())return;Dr.bra=Dr.cursor,Dr.slice_del(),Dr.ket=Dr.cursor,r=Dr.limit-Dr.cursor,S()||(Dr.cursor=Dr.limit-r,W()||(Dr.cursor=Dr.limit-r,C()||(Dr.cursor=Dr.limit-r,P()||(Dr.cursor=Dr.limit-r,F()||(Dr.cursor=Dr.limit-r))))),T()||(Dr.cursor=Dr.limit-r)}Dr.bra=Dr.cursor,Dr.slice_del()}function K(){var r,i,e,n;if(Dr.ket=Dr.cursor,h()){if(r=Dr.limit-Dr.cursor,p())return Dr.bra=Dr.cursor,Dr.slice_del(),i=Dr.limit-Dr.cursor,Dr.ket=Dr.cursor,W()?(Dr.bra=Dr.cursor,Dr.slice_del(),K()):(Dr.cursor=Dr.limit-i,a()&&(Dr.bra=Dr.cursor,Dr.slice_del(),Dr.ket=Dr.cursor,W()&&(Dr.bra=Dr.cursor,Dr.slice_del(),K()))),!0;if(Dr.cursor=Dr.limit-r,w()){if(Dr.bra=Dr.cursor,Dr.slice_del(),Dr.ket=Dr.cursor,e=Dr.limit-Dr.cursor,d())Dr.bra=Dr.cursor,Dr.slice_del();else{if(Dr.cursor=Dr.limit-e,Dr.ket=Dr.cursor,!a()&&(Dr.cursor=Dr.limit-e,!m()&&(Dr.cursor=Dr.limit-e,!K())))return!0;Dr.bra=Dr.cursor,Dr.slice_del(),Dr.ket=Dr.cursor,W()&&(Dr.bra=Dr.cursor,Dr.slice_del(),K())}return!0}if(Dr.cursor=Dr.limit-r,g()){if(n=Dr.limit-Dr.cursor,d())Dr.bra=Dr.cursor,Dr.slice_del();else if(Dr.cursor=Dr.limit-n,m())Dr.bra=Dr.cursor,Dr.slice_del(),Dr.ket=Dr.cursor,W()&&(Dr.bra=Dr.cursor,Dr.slice_del(),K());else if(Dr.cursor=Dr.limit-n,!K())return!1;return!0}}return!1}function M(r){if(Dr.ket=Dr.cursor,!g()&&(Dr.cursor=Dr.limit-r,!k()))return!1;var i=Dr.limit-Dr.cursor;if(d())Dr.bra=Dr.cursor,Dr.slice_del();else if(Dr.cursor=Dr.limit-i,m())Dr.bra=Dr.cursor,Dr.slice_del(),Dr.ket=Dr.cursor,W()&&(Dr.bra=Dr.cursor,Dr.slice_del(),K());else if(Dr.cursor=Dr.limit-i,!K())return!1;return!0}function N(r){if(Dr.ket=Dr.cursor,!z()&&(Dr.cursor=Dr.limit-r,!b()))return!1;var i=Dr.limit-Dr.cursor;return!(!m()&&(Dr.cursor=Dr.limit-i,!d()))&&(Dr.bra=Dr.cursor,Dr.slice_del(),Dr.ket=Dr.cursor,W()&&(Dr.bra=Dr.cursor,Dr.slice_del(),K()),!0)}function 
O(){var r,i=Dr.limit-Dr.cursor;return Dr.ket=Dr.cursor,!(!w()&&(Dr.cursor=Dr.limit-i,!v()))&&(Dr.bra=Dr.cursor,Dr.slice_del(),r=Dr.limit-Dr.cursor,Dr.ket=Dr.cursor,!(!W()||(Dr.bra=Dr.cursor,Dr.slice_del(),!K()))||(Dr.cursor=Dr.limit-r,Dr.ket=Dr.cursor,!(a()||(Dr.cursor=Dr.limit-r,m()||(Dr.cursor=Dr.limit-r,K())))||(Dr.bra=Dr.cursor,Dr.slice_del(),Dr.ket=Dr.cursor,W()&&(Dr.bra=Dr.cursor,Dr.slice_del(),K()),!0)))}function Q(){var r,i,e=Dr.limit-Dr.cursor;if(Dr.ket=Dr.cursor,!p()&&(Dr.cursor=Dr.limit-e,!f()&&(Dr.cursor=Dr.limit-e,!_())))return!1;if(Dr.bra=Dr.cursor,Dr.slice_del(),Dr.ket=Dr.cursor,r=Dr.limit-Dr.cursor,a())Dr.bra=Dr.cursor,Dr.slice_del(),i=Dr.limit-Dr.cursor,Dr.ket=Dr.cursor,W()||(Dr.cursor=Dr.limit-i);else if(Dr.cursor=Dr.limit-r,!W())return!0;return Dr.bra=Dr.cursor,Dr.slice_del(),Dr.ket=Dr.cursor,K(),!0}function R(){var r,i,e=Dr.limit-Dr.cursor;if(Dr.ket=Dr.cursor,W())return Dr.bra=Dr.cursor,Dr.slice_del(),void K();if(Dr.cursor=Dr.limit-e,Dr.ket=Dr.cursor,q())if(Dr.bra=Dr.cursor,Dr.slice_del(),r=Dr.limit-Dr.cursor,Dr.ket=Dr.cursor,d())Dr.bra=Dr.cursor,Dr.slice_del();else{if(Dr.cursor=Dr.limit-r,Dr.ket=Dr.cursor,!a()&&(Dr.cursor=Dr.limit-r,!m())){if(Dr.cursor=Dr.limit-r,Dr.ket=Dr.cursor,!W())return;if(Dr.bra=Dr.cursor,Dr.slice_del(),!K())return}Dr.bra=Dr.cursor,Dr.slice_del(),Dr.ket=Dr.cursor,W()&&(Dr.bra=Dr.cursor,Dr.slice_del(),K())}else if(Dr.cursor=Dr.limit-e,!M(e)&&(Dr.cursor=Dr.limit-e,!N(e))){if(Dr.cursor=Dr.limit-e,Dr.ket=Dr.cursor,y())return Dr.bra=Dr.cursor,Dr.slice_del(),Dr.ket=Dr.cursor,i=Dr.limit-Dr.cursor,void(a()?(Dr.bra=Dr.cursor,Dr.slice_del(),Dr.ket=Dr.cursor,W()&&(Dr.bra=Dr.cursor,Dr.slice_del(),K())):(Dr.cursor=Dr.limit-i,W()?(Dr.bra=Dr.cursor,Dr.slice_del(),K()):(Dr.cursor=Dr.limit-i,K())));if(Dr.cursor=Dr.limit-e,!O()){if(Dr.cursor=Dr.limit-e,d())return Dr.bra=Dr.cursor,void 
Dr.slice_del();Dr.cursor=Dr.limit-e,K()||(Dr.cursor=Dr.limit-e,Q()||(Dr.cursor=Dr.limit-e,Dr.ket=Dr.cursor,(a()||(Dr.cursor=Dr.limit-e,m()))&&(Dr.bra=Dr.cursor,Dr.slice_del(),Dr.ket=Dr.cursor,W()&&(Dr.bra=Dr.cursor,Dr.slice_del(),K()))))}}}function U(){var r;if(Dr.ket=Dr.cursor,r=Dr.find_among_b(Sr,4))switch(Dr.bra=Dr.cursor,r){case 1:Dr.slice_from("p");break;case 2:Dr.slice_from("ç");break;case 3:Dr.slice_from("t");break;case 4:Dr.slice_from("k")}}function V(){for(;;){var r=Dr.limit-Dr.cursor;if(Dr.in_grouping_b(Wr,97,305)){Dr.cursor=Dr.limit-r;break}if(Dr.cursor=Dr.limit-r,Dr.cursor<=Dr.limit_backward)return!1;Dr.cursor--}return!0}function X(r,i,e){if(Dr.cursor=Dr.limit-r,V()){var n=Dr.limit-Dr.cursor;if(!Dr.eq_s_b(1,i)&&(Dr.cursor=Dr.limit-n,!Dr.eq_s_b(1,e)))return!0;Dr.cursor=Dr.limit-r;var t=Dr.cursor;return Dr.insert(Dr.cursor,Dr.cursor,e),Dr.cursor=t,!1}return!0}function Y(){var r=Dr.limit-Dr.cursor;(Dr.eq_s_b(1,"d")||(Dr.cursor=Dr.limit-r,Dr.eq_s_b(1,"g")))&&X(r,"a","ı")&&X(r,"e","i")&&X(r,"o","u")&&X(r,"ö","ü")}function $(){for(var r,i=Dr.cursor,e=2;;){for(r=Dr.cursor;!Dr.in_grouping(Wr,97,305);){if(Dr.cursor>=Dr.limit)return Dr.cursor=r,!(e>0)&&(Dr.cursor=i,!0);Dr.cursor++}e--}}function rr(r,i,e){for(;!Dr.eq_s(i,e);){if(Dr.cursor>=Dr.limit)return!0;Dr.cursor++}return(tr=i)!=Dr.limit||(Dr.cursor=r,!1)}function ir(){var r=Dr.cursor;return!rr(r,2,"ad")||(Dr.cursor=r,!rr(r,5,"soyad"))}function er(){var r=Dr.cursor;return!ir()&&(Dr.limit_backward=r,Dr.cursor=Dr.limit,Y(),Dr.cursor=Dr.limit,U(),!0)}var nr,tr,ur=[new i("m",-1,-1),new i("n",-1,-1),new i("miz",-1,-1),new i("niz",-1,-1),new i("muz",-1,-1),new i("nuz",-1,-1),new i("müz",-1,-1),new i("nüz",-1,-1),new i("mız",-1,-1),new i("nız",-1,-1)],or=[new i("leri",-1,-1),new i("ları",-1,-1)],sr=[new i("ni",-1,-1),new i("nu",-1,-1),new i("nü",-1,-1),new i("nı",-1,-1)],cr=[new i("in",-1,-1),new i("un",-1,-1),new i("ün",-1,-1),new i("ın",-1,-1)],lr=[new i("a",-1,-1),new i("e",-1,-1)],ar=[new i("na",-1,-1),new 
i("ne",-1,-1)],mr=[new i("da",-1,-1),new i("ta",-1,-1),new i("de",-1,-1),new i("te",-1,-1)],dr=[new i("nda",-1,-1),new i("nde",-1,-1)],fr=[new i("dan",-1,-1),new i("tan",-1,-1),new i("den",-1,-1),new i("ten",-1,-1)],br=[new i("ndan",-1,-1),new i("nden",-1,-1)],wr=[new i("la",-1,-1),new i("le",-1,-1)],_r=[new i("ca",-1,-1),new i("ce",-1,-1)],kr=[new i("im",-1,-1),new i("um",-1,-1),new i("üm",-1,-1),new i("ım",-1,-1)],pr=[new i("sin",-1,-1),new i("sun",-1,-1),new i("sün",-1,-1),new i("sın",-1,-1)],gr=[new i("iz",-1,-1),new i("uz",-1,-1),new i("üz",-1,-1),new i("ız",-1,-1)],yr=[new i("siniz",-1,-1),new i("sunuz",-1,-1),new i("sünüz",-1,-1),new i("sınız",-1,-1)],zr=[new i("lar",-1,-1),new i("ler",-1,-1)],vr=[new i("niz",-1,-1),new i("nuz",-1,-1),new i("nüz",-1,-1),new i("nız",-1,-1)],hr=[new i("dir",-1,-1),new i("tir",-1,-1),new i("dur",-1,-1),new i("tur",-1,-1),new i("dür",-1,-1),new i("tür",-1,-1),new i("dır",-1,-1),new i("tır",-1,-1)],qr=[new i("casına",-1,-1),new i("cesine",-1,-1)],Cr=[new i("di",-1,-1),new i("ti",-1,-1),new i("dik",-1,-1),new i("tik",-1,-1),new i("duk",-1,-1),new i("tuk",-1,-1),new i("dük",-1,-1),new i("tük",-1,-1),new i("dık",-1,-1),new i("tık",-1,-1),new i("dim",-1,-1),new i("tim",-1,-1),new i("dum",-1,-1),new i("tum",-1,-1),new i("düm",-1,-1),new i("tüm",-1,-1),new i("dım",-1,-1),new i("tım",-1,-1),new i("din",-1,-1),new i("tin",-1,-1),new i("dun",-1,-1),new i("tun",-1,-1),new i("dün",-1,-1),new i("tün",-1,-1),new i("dın",-1,-1),new i("tın",-1,-1),new i("du",-1,-1),new i("tu",-1,-1),new i("dü",-1,-1),new i("tü",-1,-1),new i("dı",-1,-1),new i("tı",-1,-1)],Pr=[new i("sa",-1,-1),new i("se",-1,-1),new i("sak",-1,-1),new i("sek",-1,-1),new i("sam",-1,-1),new i("sem",-1,-1),new i("san",-1,-1),new i("sen",-1,-1)],Fr=[new i("miş",-1,-1),new i("muş",-1,-1),new i("müş",-1,-1),new i("mış",-1,-1)],Sr=[new i("b",-1,1),new i("c",-1,2),new i("d",-1,3),new 
i("ğ",-1,4)],Wr=[17,65,16,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,32,8,0,0,0,0,0,0,1],Lr=[1,16,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,8,0,0,0,0,0,0,1],xr=[1,64,16,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1],Ar=[17,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,130],Er=[1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1],jr=[17],Tr=[65],Zr=[65],Br=[["a",xr,97,305],["e",Ar,101,252],["ı",Er,97,305],["i",jr,101,105],["o",Tr,111,117],["ö",Zr,246,252],["u",Tr,111,117]],Dr=new e;this.setCurrent=function(r){Dr.setCurrent(r)},this.getCurrent=function(){return Dr.getCurrent()},this.stem=function(){return!!($()&&(Dr.limit_backward=Dr.cursor,Dr.cursor=Dr.limit,J(),Dr.cursor=Dr.limit,nr&&(R(),Dr.cursor=Dr.limit_backward,er())))}};return function(r){return"function"==typeof r.update?r.update(function(r){return n.setCurrent(r),n.stem(),n.getCurrent()}):(n.setCurrent(r),n.stem(),n.getCurrent())}}(),r.Pipeline.registerFunction(r.tr.stemmer,"stemmer-tr"),r.tr.stopWordFilter=r.generateStopWordFilter("acaba altmış altı ama ancak arada aslında ayrıca bana bazı belki ben benden beni benim beri beş bile bin bir biri birkaç birkez birçok birşey birşeyi biz bizden bize bizi bizim bu buna bunda bundan bunlar bunları bunların bunu bunun burada böyle böylece da daha dahi de defa değil diye diğer doksan dokuz dolayı dolayısıyla dört edecek eden ederek edilecek ediliyor edilmesi ediyor elli en etmesi etti ettiği ettiğini eğer gibi göre halen hangi hatta hem henüz hep hepsi her herhangi herkesin hiç hiçbir iki ile ilgili ise itibaren itibariyle için işte kadar karşın katrilyon kendi kendilerine kendini kendisi kendisine kendisini kez ki kim kimden kime kimi kimse kırk milyar milyon mu mü mı nasıl ne neden nedenle nerde nerede nereye niye niçin o olan olarak oldu olduklarını olduğu olduğunu olmadı olmadığı olmak olması olmayan olmaz olsa olsun olup olur olursa oluyor on ona ondan onlar onlardan onları onların onu onun otuz oysa pek rağmen sadece sanki sekiz seksen sen senden seni senin siz sizden sizi sizin tarafından 
trilyon tüm var vardı ve veya ya yani yapacak yapmak yaptı yaptıkları yaptığı yaptığını yapılan yapılması yapıyor yedi yerine yetmiş yine yirmi yoksa yüz zaten çok çünkü öyle üzere üç şey şeyden şeyi şeyler şu şuna şunda şundan şunları şunu şöyle".split(" ")),r.Pipeline.registerFunction(r.tr.stopWordFilter,"stopWordFilter-tr")}}); \ No newline at end of file diff --git a/assets/javascripts/lunr/min/lunr.vi.min.js b/assets/javascripts/lunr/min/lunr.vi.min.js new file mode 100644 index 000000000..22aed28c4 --- /dev/null +++ b/assets/javascripts/lunr/min/lunr.vi.min.js @@ -0,0 +1 @@ +!function(e,r){"function"==typeof define&&define.amd?define(r):"object"==typeof exports?module.exports=r():r()(e.lunr)}(this,function(){return function(e){if(void 0===e)throw new Error("Lunr is not present. Please include / require Lunr before this script.");if(void 0===e.stemmerSupport)throw new Error("Lunr stemmer support is not present. Please include / require Lunr stemmer support before this script.");e.vi=function(){this.pipeline.reset(),this.pipeline.add(e.vi.stopWordFilter,e.vi.trimmer)},e.vi.wordCharacters="[A-Za-ẓ̀͐́͑̉̃̓ÂâÊêÔôĂ-ăĐ-đƠ-ơƯ-ư]",e.vi.trimmer=e.trimmerSupport.generateTrimmer(e.vi.wordCharacters),e.Pipeline.registerFunction(e.vi.trimmer,"trimmer-vi"),e.vi.stopWordFilter=e.generateStopWordFilter("là cái nhưng mà".split(" "))}}); \ No newline at end of file diff --git a/assets/javascripts/lunr/min/lunr.zh.min.js b/assets/javascripts/lunr/min/lunr.zh.min.js new file mode 100644 index 000000000..7727bbe24 --- /dev/null +++ b/assets/javascripts/lunr/min/lunr.zh.min.js @@ -0,0 +1 @@ +!function(e,r){"function"==typeof define&&define.amd?define(r):"object"==typeof exports?module.exports=r(require("nodejieba")):r()(e.lunr)}(this,function(e){return function(r,t){if(void 0===r)throw new Error("Lunr is not present. Please include / require Lunr before this script.");if(void 0===r.stemmerSupport)throw new Error("Lunr stemmer support is not present. 
Please include / require Lunr stemmer support before this script.");var i="2"==r.version[0];r.zh=function(){this.pipeline.reset(),this.pipeline.add(r.zh.trimmer,r.zh.stopWordFilter,r.zh.stemmer),i?this.tokenizer=r.zh.tokenizer:(r.tokenizer&&(r.tokenizer=r.zh.tokenizer),this.tokenizerFn&&(this.tokenizerFn=r.zh.tokenizer))},r.zh.tokenizer=function(n){if(!arguments.length||null==n||void 0==n)return[];if(Array.isArray(n))return n.map(function(e){return i?new r.Token(e.toLowerCase()):e.toLowerCase()});t&&e.load(t);var o=n.toString().trim().toLowerCase(),s=[];e.cut(o,!0).forEach(function(e){s=s.concat(e.split(" "))}),s=s.filter(function(e){return!!e});var u=0;return s.map(function(e,t){if(i){var n=o.indexOf(e,u),s={};return s.position=[n,e.length],s.index=t,u=n,new r.Token(e,s)}return e})},r.zh.wordCharacters="\\w一-龥",r.zh.trimmer=r.trimmerSupport.generateTrimmer(r.zh.wordCharacters),r.Pipeline.registerFunction(r.zh.trimmer,"trimmer-zh"),r.zh.stemmer=function(){return function(e){return e}}(),r.Pipeline.registerFunction(r.zh.stemmer,"stemmer-zh"),r.zh.stopWordFilter=r.generateStopWordFilter("的 一 不 在 人 有 是 为 以 于 上 他 而 后 之 来 及 了 因 下 可 到 由 这 与 也 此 但 并 个 其 已 无 小 我 们 起 最 再 今 去 好 只 又 或 很 亦 某 把 那 你 乃 它 吧 被 比 别 趁 当 从 到 得 打 凡 儿 尔 该 各 给 跟 和 何 还 即 几 既 看 据 距 靠 啦 了 另 么 每 们 嘛 拿 哪 那 您 凭 且 却 让 仍 啥 如 若 使 谁 虽 随 同 所 她 哇 嗡 往 哪 些 向 沿 哟 用 于 咱 则 怎 曾 至 致 着 诸 自".split(" ")),r.Pipeline.registerFunction(r.zh.stopWordFilter,"stopWordFilter-zh")}}); \ No newline at end of file diff --git a/assets/javascripts/lunr/tinyseg.js b/assets/javascripts/lunr/tinyseg.js new file mode 100644 index 000000000..167fa6dd6 --- /dev/null +++ b/assets/javascripts/lunr/tinyseg.js @@ -0,0 +1,206 @@ +/** + * export the module via AMD, CommonJS or as a browser global + * Export code from https://github.com/umdjs/umd/blob/master/returnExports.js + */ +;(function (root, factory) { + if (typeof define === 'function' && define.amd) { + // AMD. Register as an anonymous module. 
+ define(factory) + } else if (typeof exports === 'object') { + /** + * Node. Does not work with strict CommonJS, but + * only CommonJS-like environments that support module.exports, + * like Node. + */ + module.exports = factory() + } else { + // Browser globals (root is window) + factory()(root.lunr); + } +}(this, function () { + /** + * Just return a value to define the module export. + * This example returns an object, but the module + * can return a function as the exported value. + */ + + return function(lunr) { + // TinySegmenter 0.1 -- Super compact Japanese tokenizer in Javascript + // (c) 2008 Taku Kudo + // TinySegmenter is freely distributable under the terms of a new BSD licence. + // For details, see http://chasen.org/~taku/software/TinySegmenter/LICENCE.txt + + function TinySegmenter() { + var patterns = { + "[一二三四五六七八九十百千万億兆]":"M", + "[一-龠々〆ヵヶ]":"H", + "[ぁ-ん]":"I", + "[ァ-ヴーア-ン゙ー]":"K", + "[a-zA-Za-zA-Z]":"A", + "[0-90-9]":"N" + } + this.chartype_ = []; + for (var i in patterns) { + var regexp = new RegExp(i); + this.chartype_.push([regexp, patterns[i]]); + } + + this.BIAS__ = -332 + this.BC1__ = {"HH":6,"II":2461,"KH":406,"OH":-1378}; + this.BC2__ = {"AA":-3267,"AI":2744,"AN":-878,"HH":-4070,"HM":-1711,"HN":4012,"HO":3761,"IA":1327,"IH":-1184,"II":-1332,"IK":1721,"IO":5492,"KI":3831,"KK":-8741,"MH":-3132,"MK":3334,"OO":-2920}; + this.BC3__ = {"HH":996,"HI":626,"HK":-721,"HN":-1307,"HO":-836,"IH":-301,"KK":2762,"MK":1079,"MM":4034,"OA":-1652,"OH":266}; + this.BP1__ = {"BB":295,"OB":304,"OO":-125,"UB":352}; + this.BP2__ = {"BO":60,"OO":-1762}; + this.BQ1__ = {"BHH":1150,"BHM":1521,"BII":-1158,"BIM":886,"BMH":1208,"BNH":449,"BOH":-91,"BOO":-2597,"OHI":451,"OIH":-296,"OKA":1851,"OKH":-1020,"OKK":904,"OOO":2965}; + this.BQ2__ = {"BHH":118,"BHI":-1159,"BHM":466,"BIH":-919,"BKK":-1720,"BKO":864,"OHH":-1139,"OHM":-181,"OIH":153,"UHI":-1146}; + this.BQ3__ = 
{"BHH":-792,"BHI":2664,"BII":-299,"BKI":419,"BMH":937,"BMM":8335,"BNN":998,"BOH":775,"OHH":2174,"OHM":439,"OII":280,"OKH":1798,"OKI":-793,"OKO":-2242,"OMH":-2402,"OOO":11699}; + this.BQ4__ = {"BHH":-3895,"BIH":3761,"BII":-4654,"BIK":1348,"BKK":-1806,"BMI":-3385,"BOO":-12396,"OAH":926,"OHH":266,"OHK":-2036,"ONN":-973}; + this.BW1__ = {",と":660,",同":727,"B1あ":1404,"B1同":542,"、と":660,"、同":727,"」と":1682,"あっ":1505,"いう":1743,"いっ":-2055,"いる":672,"うし":-4817,"うん":665,"から":3472,"がら":600,"こう":-790,"こと":2083,"こん":-1262,"さら":-4143,"さん":4573,"した":2641,"して":1104,"すで":-3399,"そこ":1977,"それ":-871,"たち":1122,"ため":601,"った":3463,"つい":-802,"てい":805,"てき":1249,"でき":1127,"です":3445,"では":844,"とい":-4915,"とみ":1922,"どこ":3887,"ない":5713,"なっ":3015,"など":7379,"なん":-1113,"にし":2468,"には":1498,"にも":1671,"に対":-912,"の一":-501,"の中":741,"ませ":2448,"まで":1711,"まま":2600,"まる":-2155,"やむ":-1947,"よっ":-2565,"れた":2369,"れで":-913,"をし":1860,"を見":731,"亡く":-1886,"京都":2558,"取り":-2784,"大き":-2604,"大阪":1497,"平方":-2314,"引き":-1336,"日本":-195,"本当":-2423,"毎日":-2113,"目指":-724,"B1あ":1404,"B1同":542,"」と":1682}; + this.BW2__ = 
{"..":-11822,"11":-669,"――":-5730,"−−":-13175,"いう":-1609,"うか":2490,"かし":-1350,"かも":-602,"から":-7194,"かれ":4612,"がい":853,"がら":-3198,"きた":1941,"くな":-1597,"こと":-8392,"この":-4193,"させ":4533,"され":13168,"さん":-3977,"しい":-1819,"しか":-545,"した":5078,"して":972,"しな":939,"その":-3744,"たい":-1253,"たた":-662,"ただ":-3857,"たち":-786,"たと":1224,"たは":-939,"った":4589,"って":1647,"っと":-2094,"てい":6144,"てき":3640,"てく":2551,"ては":-3110,"ても":-3065,"でい":2666,"でき":-1528,"でし":-3828,"です":-4761,"でも":-4203,"とい":1890,"とこ":-1746,"とと":-2279,"との":720,"とみ":5168,"とも":-3941,"ない":-2488,"なが":-1313,"など":-6509,"なの":2614,"なん":3099,"にお":-1615,"にし":2748,"にな":2454,"によ":-7236,"に対":-14943,"に従":-4688,"に関":-11388,"のか":2093,"ので":-7059,"のに":-6041,"のの":-6125,"はい":1073,"はが":-1033,"はず":-2532,"ばれ":1813,"まし":-1316,"まで":-6621,"まれ":5409,"めて":-3153,"もい":2230,"もの":-10713,"らか":-944,"らし":-1611,"らに":-1897,"りし":651,"りま":1620,"れた":4270,"れて":849,"れば":4114,"ろう":6067,"われ":7901,"を通":-11877,"んだ":728,"んな":-4115,"一人":602,"一方":-1375,"一日":970,"一部":-1051,"上が":-4479,"会社":-1116,"出て":2163,"分の":-7758,"同党":970,"同日":-913,"大阪":-2471,"委員":-1250,"少な":-1050,"年度":-8669,"年間":-1626,"府県":-2363,"手権":-1982,"新聞":-4066,"日新":-722,"日本":-7068,"日米":3372,"曜日":-601,"朝鮮":-2355,"本人":-2697,"東京":-1543,"然と":-1384,"社会":-1276,"立て":-990,"第に":-1612,"米国":-4268,"11":-669}; + this.BW3__ = 
{"あた":-2194,"あり":719,"ある":3846,"い.":-1185,"い。":-1185,"いい":5308,"いえ":2079,"いく":3029,"いた":2056,"いっ":1883,"いる":5600,"いわ":1527,"うち":1117,"うと":4798,"えと":1454,"か.":2857,"か。":2857,"かけ":-743,"かっ":-4098,"かに":-669,"から":6520,"かり":-2670,"が,":1816,"が、":1816,"がき":-4855,"がけ":-1127,"がっ":-913,"がら":-4977,"がり":-2064,"きた":1645,"けど":1374,"こと":7397,"この":1542,"ころ":-2757,"さい":-714,"さを":976,"し,":1557,"し、":1557,"しい":-3714,"した":3562,"して":1449,"しな":2608,"しま":1200,"す.":-1310,"す。":-1310,"する":6521,"ず,":3426,"ず、":3426,"ずに":841,"そう":428,"た.":8875,"た。":8875,"たい":-594,"たの":812,"たり":-1183,"たる":-853,"だ.":4098,"だ。":4098,"だっ":1004,"った":-4748,"って":300,"てい":6240,"てお":855,"ても":302,"です":1437,"でに":-1482,"では":2295,"とう":-1387,"とし":2266,"との":541,"とも":-3543,"どう":4664,"ない":1796,"なく":-903,"など":2135,"に,":-1021,"に、":-1021,"にし":1771,"にな":1906,"には":2644,"の,":-724,"の、":-724,"の子":-1000,"は,":1337,"は、":1337,"べき":2181,"まし":1113,"ます":6943,"まっ":-1549,"まで":6154,"まれ":-793,"らし":1479,"られ":6820,"るる":3818,"れ,":854,"れ、":854,"れた":1850,"れて":1375,"れば":-3246,"れる":1091,"われ":-605,"んだ":606,"んで":798,"カ月":990,"会議":860,"入り":1232,"大会":2217,"始め":1681,"市":965,"新聞":-5055,"日,":974,"日、":974,"社会":2024,"カ月":990}; + this.TC1__ = {"AAA":1093,"HHH":1029,"HHM":580,"HII":998,"HOH":-390,"HOM":-331,"IHI":1169,"IOH":-142,"IOI":-1015,"IOM":467,"MMH":187,"OOI":-1832}; + this.TC2__ = {"HHO":2088,"HII":-1023,"HMM":-1154,"IHI":-1965,"KKH":703,"OII":-2649}; + this.TC3__ = {"AAA":-294,"HHH":346,"HHI":-341,"HII":-1088,"HIK":731,"HOH":-1486,"IHH":128,"IHI":-3041,"IHO":-1935,"IIH":-825,"IIM":-1035,"IOI":-542,"KHH":-1216,"KKA":491,"KKH":-1217,"KOK":-1009,"MHH":-2694,"MHM":-457,"MHO":123,"MMH":-471,"NNH":-1689,"NNO":662,"OHO":-3393}; + this.TC4__ = {"HHH":-203,"HHI":1344,"HHK":365,"HHM":-122,"HHN":182,"HHO":669,"HIH":804,"HII":679,"HOH":446,"IHH":695,"IHO":-2324,"IIH":321,"III":1497,"IIO":656,"IOO":54,"KAK":4845,"KKA":3386,"KKK":3065,"MHH":-405,"MHI":201,"MMH":-241,"MMM":661,"MOM":841}; + this.TQ1__ = 
{"BHHH":-227,"BHHI":316,"BHIH":-132,"BIHH":60,"BIII":1595,"BNHH":-744,"BOHH":225,"BOOO":-908,"OAKK":482,"OHHH":281,"OHIH":249,"OIHI":200,"OIIH":-68}; + this.TQ2__ = {"BIHH":-1401,"BIII":-1033,"BKAK":-543,"BOOO":-5591}; + this.TQ3__ = {"BHHH":478,"BHHM":-1073,"BHIH":222,"BHII":-504,"BIIH":-116,"BIII":-105,"BMHI":-863,"BMHM":-464,"BOMH":620,"OHHH":346,"OHHI":1729,"OHII":997,"OHMH":481,"OIHH":623,"OIIH":1344,"OKAK":2792,"OKHH":587,"OKKA":679,"OOHH":110,"OOII":-685}; + this.TQ4__ = {"BHHH":-721,"BHHM":-3604,"BHII":-966,"BIIH":-607,"BIII":-2181,"OAAA":-2763,"OAKK":180,"OHHH":-294,"OHHI":2446,"OHHO":480,"OHIH":-1573,"OIHH":1935,"OIHI":-493,"OIIH":626,"OIII":-4007,"OKAK":-8156}; + this.TW1__ = {"につい":-4681,"東京都":2026}; + this.TW2__ = {"ある程":-2049,"いった":-1256,"ころが":-2434,"しょう":3873,"その後":-4430,"だって":-1049,"ていた":1833,"として":-4657,"ともに":-4517,"もので":1882,"一気に":-792,"初めて":-1512,"同時に":-8097,"大きな":-1255,"対して":-2721,"社会党":-3216}; + this.TW3__ = {"いただ":-1734,"してい":1314,"として":-4314,"につい":-5483,"にとっ":-5989,"に当た":-6247,"ので,":-727,"ので、":-727,"のもの":-600,"れから":-3752,"十二月":-2287}; + this.TW4__ = {"いう.":8576,"いう。":8576,"からな":-2348,"してい":2958,"たが,":1516,"たが、":1516,"ている":1538,"という":1349,"ました":5543,"ません":1097,"ようと":-4258,"よると":5865}; + this.UC1__ = {"A":484,"K":93,"M":645,"O":-505}; + this.UC2__ = {"A":819,"H":1059,"I":409,"M":3987,"N":5775,"O":646}; + this.UC3__ = {"A":-1370,"I":2311}; + this.UC4__ = {"A":-2643,"H":1809,"I":-1032,"K":-3450,"M":3565,"N":3876,"O":6646}; + this.UC5__ = {"H":313,"I":-1238,"K":-799,"M":539,"O":-831}; + this.UC6__ = {"H":-506,"I":-253,"K":87,"M":247,"O":-387}; + this.UP1__ = {"O":-214}; + this.UP2__ = {"B":69,"O":935}; + this.UP3__ = {"B":189}; + this.UQ1__ = {"BH":21,"BI":-12,"BK":-99,"BN":142,"BO":-56,"OH":-95,"OI":477,"OK":410,"OO":-2422}; + this.UQ2__ = {"BH":216,"BI":113,"OK":1759}; + this.UQ3__ = {"BA":-479,"BH":42,"BI":1913,"BK":-7198,"BM":3160,"BN":6427,"BO":14761,"OI":-827,"ON":-3212}; + this.UW1__ = 
{",":156,"、":156,"「":-463,"あ":-941,"う":-127,"が":-553,"き":121,"こ":505,"で":-201,"と":-547,"ど":-123,"に":-789,"の":-185,"は":-847,"も":-466,"や":-470,"よ":182,"ら":-292,"り":208,"れ":169,"を":-446,"ん":-137,"・":-135,"主":-402,"京":-268,"区":-912,"午":871,"国":-460,"大":561,"委":729,"市":-411,"日":-141,"理":361,"生":-408,"県":-386,"都":-718,"「":-463,"・":-135}; + this.UW2__ = {",":-829,"、":-829,"〇":892,"「":-645,"」":3145,"あ":-538,"い":505,"う":134,"お":-502,"か":1454,"が":-856,"く":-412,"こ":1141,"さ":878,"ざ":540,"し":1529,"す":-675,"せ":300,"そ":-1011,"た":188,"だ":1837,"つ":-949,"て":-291,"で":-268,"と":-981,"ど":1273,"な":1063,"に":-1764,"の":130,"は":-409,"ひ":-1273,"べ":1261,"ま":600,"も":-1263,"や":-402,"よ":1639,"り":-579,"る":-694,"れ":571,"を":-2516,"ん":2095,"ア":-587,"カ":306,"キ":568,"ッ":831,"三":-758,"不":-2150,"世":-302,"中":-968,"主":-861,"事":492,"人":-123,"会":978,"保":362,"入":548,"初":-3025,"副":-1566,"北":-3414,"区":-422,"大":-1769,"天":-865,"太":-483,"子":-1519,"学":760,"実":1023,"小":-2009,"市":-813,"年":-1060,"強":1067,"手":-1519,"揺":-1033,"政":1522,"文":-1355,"新":-1682,"日":-1815,"明":-1462,"最":-630,"朝":-1843,"本":-1650,"東":-931,"果":-665,"次":-2378,"民":-180,"気":-1740,"理":752,"発":529,"目":-1584,"相":-242,"県":-1165,"立":-763,"第":810,"米":509,"自":-1353,"行":838,"西":-744,"見":-3874,"調":1010,"議":1198,"込":3041,"開":1758,"間":-1257,"「":-645,"」":3145,"ッ":831,"ア":-587,"カ":306,"キ":568}; + this.UW3__ = 
{",":4889,"1":-800,"−":-1723,"、":4889,"々":-2311,"〇":5827,"」":2670,"〓":-3573,"あ":-2696,"い":1006,"う":2342,"え":1983,"お":-4864,"か":-1163,"が":3271,"く":1004,"け":388,"げ":401,"こ":-3552,"ご":-3116,"さ":-1058,"し":-395,"す":584,"せ":3685,"そ":-5228,"た":842,"ち":-521,"っ":-1444,"つ":-1081,"て":6167,"で":2318,"と":1691,"ど":-899,"な":-2788,"に":2745,"の":4056,"は":4555,"ひ":-2171,"ふ":-1798,"へ":1199,"ほ":-5516,"ま":-4384,"み":-120,"め":1205,"も":2323,"や":-788,"よ":-202,"ら":727,"り":649,"る":5905,"れ":2773,"わ":-1207,"を":6620,"ん":-518,"ア":551,"グ":1319,"ス":874,"ッ":-1350,"ト":521,"ム":1109,"ル":1591,"ロ":2201,"ン":278,"・":-3794,"一":-1619,"下":-1759,"世":-2087,"両":3815,"中":653,"主":-758,"予":-1193,"二":974,"人":2742,"今":792,"他":1889,"以":-1368,"低":811,"何":4265,"作":-361,"保":-2439,"元":4858,"党":3593,"全":1574,"公":-3030,"六":755,"共":-1880,"円":5807,"再":3095,"分":457,"初":2475,"別":1129,"前":2286,"副":4437,"力":365,"動":-949,"務":-1872,"化":1327,"北":-1038,"区":4646,"千":-2309,"午":-783,"協":-1006,"口":483,"右":1233,"各":3588,"合":-241,"同":3906,"和":-837,"員":4513,"国":642,"型":1389,"場":1219,"外":-241,"妻":2016,"学":-1356,"安":-423,"実":-1008,"家":1078,"小":-513,"少":-3102,"州":1155,"市":3197,"平":-1804,"年":2416,"広":-1030,"府":1605,"度":1452,"建":-2352,"当":-3885,"得":1905,"思":-1291,"性":1822,"戸":-488,"指":-3973,"政":-2013,"教":-1479,"数":3222,"文":-1489,"新":1764,"日":2099,"旧":5792,"昨":-661,"時":-1248,"曜":-951,"最":-937,"月":4125,"期":360,"李":3094,"村":364,"東":-805,"核":5156,"森":2438,"業":484,"氏":2613,"民":-1694,"決":-1073,"法":1868,"海":-495,"無":979,"物":461,"特":-3850,"生":-273,"用":914,"町":1215,"的":7313,"直":-1835,"省":792,"県":6293,"知":-1528,"私":4231,"税":401,"立":-960,"第":1201,"米":7767,"系":3066,"約":3663,"級":1384,"統":-4229,"総":1163,"線":1255,"者":6457,"能":725,"自":-2869,"英":785,"見":1044,"調":-562,"財":-733,"費":1777,"車":1835,"軍":1375,"込":-1504,"通":-1136,"選":-681,"郎":1026,"郡":4404,"部":1200,"金":2163,"長":421,"開":-1432,"間":1302,"関":-1282,"雨":2009,"電":-1045,"非":2066,"駅":1620,"1":-800,"」":2670,"・":-3794,"ッ":-1350,"ア":551,"グ":1319,"ス":874,"ト":521,"ム":1109,"ル":1591,"ロ":2201,"ン":278}; + this.UW4__ = 
{",":3930,".":3508,"―":-4841,"、":3930,"。":3508,"〇":4999,"「":1895,"」":3798,"〓":-5156,"あ":4752,"い":-3435,"う":-640,"え":-2514,"お":2405,"か":530,"が":6006,"き":-4482,"ぎ":-3821,"く":-3788,"け":-4376,"げ":-4734,"こ":2255,"ご":1979,"さ":2864,"し":-843,"じ":-2506,"す":-731,"ず":1251,"せ":181,"そ":4091,"た":5034,"だ":5408,"ち":-3654,"っ":-5882,"つ":-1659,"て":3994,"で":7410,"と":4547,"な":5433,"に":6499,"ぬ":1853,"ね":1413,"の":7396,"は":8578,"ば":1940,"ひ":4249,"び":-4134,"ふ":1345,"へ":6665,"べ":-744,"ほ":1464,"ま":1051,"み":-2082,"む":-882,"め":-5046,"も":4169,"ゃ":-2666,"や":2795,"ょ":-1544,"よ":3351,"ら":-2922,"り":-9726,"る":-14896,"れ":-2613,"ろ":-4570,"わ":-1783,"を":13150,"ん":-2352,"カ":2145,"コ":1789,"セ":1287,"ッ":-724,"ト":-403,"メ":-1635,"ラ":-881,"リ":-541,"ル":-856,"ン":-3637,"・":-4371,"ー":-11870,"一":-2069,"中":2210,"予":782,"事":-190,"井":-1768,"人":1036,"以":544,"会":950,"体":-1286,"作":530,"側":4292,"先":601,"党":-2006,"共":-1212,"内":584,"円":788,"初":1347,"前":1623,"副":3879,"力":-302,"動":-740,"務":-2715,"化":776,"区":4517,"協":1013,"参":1555,"合":-1834,"和":-681,"員":-910,"器":-851,"回":1500,"国":-619,"園":-1200,"地":866,"場":-1410,"塁":-2094,"士":-1413,"多":1067,"大":571,"子":-4802,"学":-1397,"定":-1057,"寺":-809,"小":1910,"屋":-1328,"山":-1500,"島":-2056,"川":-2667,"市":2771,"年":374,"庁":-4556,"後":456,"性":553,"感":916,"所":-1566,"支":856,"改":787,"政":2182,"教":704,"文":522,"方":-856,"日":1798,"時":1829,"最":845,"月":-9066,"木":-485,"来":-442,"校":-360,"業":-1043,"氏":5388,"民":-2716,"気":-910,"沢":-939,"済":-543,"物":-735,"率":672,"球":-1267,"生":-1286,"産":-1101,"田":-2900,"町":1826,"的":2586,"目":922,"省":-3485,"県":2997,"空":-867,"立":-2112,"第":788,"米":2937,"系":786,"約":2171,"経":1146,"統":-1169,"総":940,"線":-994,"署":749,"者":2145,"能":-730,"般":-852,"行":-792,"規":792,"警":-1184,"議":-244,"谷":-1000,"賞":730,"車":-1481,"軍":1158,"輪":-1433,"込":-3370,"近":929,"道":-1291,"選":2596,"郎":-4866,"都":1192,"野":-1100,"銀":-2213,"長":357,"間":-2344,"院":-2297,"際":-2604,"電":-878,"領":-1659,"題":-792,"館":-1984,"首":1749,"高":2120,"「":1895,"」":3798,"・":-4371,"ッ":-724,"ー":-11870,"カ":2145,"コ":1789,"セ":1287,"ト":-403,"メ":-1635,"ラ":-8
81,"リ":-541,"ル":-856,"ン":-3637}; + this.UW5__ = {",":465,".":-299,"1":-514,"E2":-32768,"]":-2762,"、":465,"。":-299,"「":363,"あ":1655,"い":331,"う":-503,"え":1199,"お":527,"か":647,"が":-421,"き":1624,"ぎ":1971,"く":312,"げ":-983,"さ":-1537,"し":-1371,"す":-852,"だ":-1186,"ち":1093,"っ":52,"つ":921,"て":-18,"で":-850,"と":-127,"ど":1682,"な":-787,"に":-1224,"の":-635,"は":-578,"べ":1001,"み":502,"め":865,"ゃ":3350,"ょ":854,"り":-208,"る":429,"れ":504,"わ":419,"を":-1264,"ん":327,"イ":241,"ル":451,"ン":-343,"中":-871,"京":722,"会":-1153,"党":-654,"務":3519,"区":-901,"告":848,"員":2104,"大":-1296,"学":-548,"定":1785,"嵐":-1304,"市":-2991,"席":921,"年":1763,"思":872,"所":-814,"挙":1618,"新":-1682,"日":218,"月":-4353,"査":932,"格":1356,"機":-1508,"氏":-1347,"田":240,"町":-3912,"的":-3149,"相":1319,"省":-1052,"県":-4003,"研":-997,"社":-278,"空":-813,"統":1955,"者":-2233,"表":663,"語":-1073,"議":1219,"選":-1018,"郎":-368,"長":786,"間":1191,"題":2368,"館":-689,"1":-514,"E2":-32768,"「":363,"イ":241,"ル":451,"ン":-343}; + this.UW6__ = {",":227,".":808,"1":-270,"E1":306,"、":227,"。":808,"あ":-307,"う":189,"か":241,"が":-73,"く":-121,"こ":-200,"じ":1782,"す":383,"た":-428,"っ":573,"て":-1014,"で":101,"と":-105,"な":-253,"に":-149,"の":-417,"は":-236,"も":-206,"り":187,"る":-135,"を":195,"ル":-673,"ン":-496,"一":-277,"中":201,"件":-800,"会":624,"前":302,"区":1792,"員":-1212,"委":798,"学":-960,"市":887,"広":-695,"後":535,"業":-697,"相":753,"社":-507,"福":974,"空":-822,"者":1811,"連":463,"郎":1082,"1":-270,"E1":306,"ル":-673,"ン":-496}; + + return this; + } + TinySegmenter.prototype.ctype_ = function(str) { + for (var i in this.chartype_) { + if (str.match(this.chartype_[i][0])) { + return this.chartype_[i][1]; + } + } + return "O"; + } + + TinySegmenter.prototype.ts_ = function(v) { + if (v) { return v; } + return 0; + } + + TinySegmenter.prototype.segment = function(input) { + if (input == null || input == undefined || input == "") { + return []; + } + var result = []; + var seg = ["B3","B2","B1"]; + var ctype = ["O","O","O"]; + var o = input.split(""); + for (i = 0; i < o.length; ++i) { + seg.push(o[i]); + 
ctype.push(this.ctype_(o[i])) + } + seg.push("E1"); + seg.push("E2"); + seg.push("E3"); + ctype.push("O"); + ctype.push("O"); + ctype.push("O"); + var word = seg[3]; + var p1 = "U"; + var p2 = "U"; + var p3 = "U"; + for (var i = 4; i < seg.length - 3; ++i) { + var score = this.BIAS__; + var w1 = seg[i-3]; + var w2 = seg[i-2]; + var w3 = seg[i-1]; + var w4 = seg[i]; + var w5 = seg[i+1]; + var w6 = seg[i+2]; + var c1 = ctype[i-3]; + var c2 = ctype[i-2]; + var c3 = ctype[i-1]; + var c4 = ctype[i]; + var c5 = ctype[i+1]; + var c6 = ctype[i+2]; + score += this.ts_(this.UP1__[p1]); + score += this.ts_(this.UP2__[p2]); + score += this.ts_(this.UP3__[p3]); + score += this.ts_(this.BP1__[p1 + p2]); + score += this.ts_(this.BP2__[p2 + p3]); + score += this.ts_(this.UW1__[w1]); + score += this.ts_(this.UW2__[w2]); + score += this.ts_(this.UW3__[w3]); + score += this.ts_(this.UW4__[w4]); + score += this.ts_(this.UW5__[w5]); + score += this.ts_(this.UW6__[w6]); + score += this.ts_(this.BW1__[w2 + w3]); + score += this.ts_(this.BW2__[w3 + w4]); + score += this.ts_(this.BW3__[w4 + w5]); + score += this.ts_(this.TW1__[w1 + w2 + w3]); + score += this.ts_(this.TW2__[w2 + w3 + w4]); + score += this.ts_(this.TW3__[w3 + w4 + w5]); + score += this.ts_(this.TW4__[w4 + w5 + w6]); + score += this.ts_(this.UC1__[c1]); + score += this.ts_(this.UC2__[c2]); + score += this.ts_(this.UC3__[c3]); + score += this.ts_(this.UC4__[c4]); + score += this.ts_(this.UC5__[c5]); + score += this.ts_(this.UC6__[c6]); + score += this.ts_(this.BC1__[c2 + c3]); + score += this.ts_(this.BC2__[c3 + c4]); + score += this.ts_(this.BC3__[c4 + c5]); + score += this.ts_(this.TC1__[c1 + c2 + c3]); + score += this.ts_(this.TC2__[c2 + c3 + c4]); + score += this.ts_(this.TC3__[c3 + c4 + c5]); + score += this.ts_(this.TC4__[c4 + c5 + c6]); + // score += this.ts_(this.TC5__[c4 + c5 + c6]); + score += this.ts_(this.UQ1__[p1 + c1]); + score += this.ts_(this.UQ2__[p2 + c2]); + score += this.ts_(this.UQ3__[p3 + c3]); + score += 
this.ts_(this.BQ1__[p2 + c2 + c3]); + score += this.ts_(this.BQ2__[p2 + c3 + c4]); + score += this.ts_(this.BQ3__[p3 + c2 + c3]); + score += this.ts_(this.BQ4__[p3 + c3 + c4]); + score += this.ts_(this.TQ1__[p2 + c1 + c2 + c3]); + score += this.ts_(this.TQ2__[p2 + c2 + c3 + c4]); + score += this.ts_(this.TQ3__[p3 + c1 + c2 + c3]); + score += this.ts_(this.TQ4__[p3 + c2 + c3 + c4]); + var p = "O"; + if (score > 0) { + result.push(word); + word = ""; + p = "B"; + } + p1 = p2; + p2 = p3; + p3 = p; + word += seg[i]; + } + result.push(word); + + return result; + } + + lunr.TinySegmenter = TinySegmenter; + }; + +})); \ No newline at end of file diff --git a/assets/javascripts/lunr/wordcut.js b/assets/javascripts/lunr/wordcut.js new file mode 100644 index 000000000..146f4b44b --- /dev/null +++ b/assets/javascripts/lunr/wordcut.js @@ -0,0 +1,6708 @@ +(function(f){if(typeof exports==="object"&&typeof module!=="undefined"){module.exports=f()}else if(typeof define==="function"&&define.amd){define([],f)}else{var g;if(typeof window!=="undefined"){g=window}else if(typeof global!=="undefined"){g=global}else if(typeof self!=="undefined"){g=self}else{g=this}(g.lunr || (g.lunr = {})).wordcut = f()}})(function(){var define,module,exports;return (function e(t,n,r){function s(o,u){if(!n[o]){if(!t[o]){var a=typeof require=="function"&&require;if(!u&&a)return a(o,!0);if(i)return i(o,!0);var f=new Error("Cannot find module '"+o+"'");throw f.code="MODULE_NOT_FOUND",f}var l=n[o]={exports:{}};t[o][0].call(l.exports,function(e){var n=t[o][1][e];return s(n?n:e)},l,l.exports,e,t,n,r)}return n[o].exports}var i=typeof require=="function"&&require;for(var o=0;o 1; + }) + this.addWords(words, false) + } + if(finalize){ + this.finalizeDict(); + } + }, + + dictSeek: function (l, r, ch, strOffset, pos) { + var ans = null; + while (l <= r) { + var m = Math.floor((l + r) / 2), + dict_item = this.dict[m], + len = dict_item.length; + if (len <= strOffset) { + l = m + 1; + } else { + var ch_ = 
dict_item[strOffset]; + if (ch_ < ch) { + l = m + 1; + } else if (ch_ > ch) { + r = m - 1; + } else { + ans = m; + if (pos == LEFT) { + r = m - 1; + } else { + l = m + 1; + } + } + } + } + return ans; + }, + + isFinal: function (acceptor) { + return this.dict[acceptor.l].length == acceptor.strOffset; + }, + + createAcceptor: function () { + return { + l: 0, + r: this.dict.length - 1, + strOffset: 0, + isFinal: false, + dict: this, + transit: function (ch) { + return this.dict.transit(this, ch); + }, + isError: false, + tag: "DICT", + w: 1, + type: "DICT" + }; + }, + + transit: function (acceptor, ch) { + var l = this.dictSeek(acceptor.l, + acceptor.r, + ch, + acceptor.strOffset, + LEFT); + if (l !== null) { + var r = this.dictSeek(l, + acceptor.r, + ch, + acceptor.strOffset, + RIGHT); + acceptor.l = l; + acceptor.r = r; + acceptor.strOffset++; + acceptor.isFinal = this.isFinal(acceptor); + } else { + acceptor.isError = true; + } + return acceptor; + }, + + sortuniq: function(a){ + return a.sort().filter(function(item, pos, arr){ + return !pos || item != arr[pos - 1]; + }) + }, + + flatten: function(a){ + //[[1,2],[3]] -> [1,2,3] + return [].concat.apply([], a); + } +}; +module.exports = WordcutDict; + +}).call(this,"/dist/tmp") +},{"glob":16,"path":22}],3:[function(require,module,exports){ +var WordRule = { + createAcceptor: function(tag) { + if (tag["WORD_RULE"]) + return null; + + return {strOffset: 0, + isFinal: false, + transit: function(ch) { + var lch = ch.toLowerCase(); + if (lch >= "a" && lch <= "z") { + this.isFinal = true; + this.strOffset++; + } else { + this.isError = true; + } + return this; + }, + isError: false, + tag: "WORD_RULE", + type: "WORD_RULE", + w: 1}; + } +}; + +var NumberRule = { + createAcceptor: function(tag) { + if (tag["NUMBER_RULE"]) + return null; + + return {strOffset: 0, + isFinal: false, + transit: function(ch) { + if (ch >= "0" && ch <= "9") { + this.isFinal = true; + this.strOffset++; + } else { + this.isError = true; + } + 
return this; + }, + isError: false, + tag: "NUMBER_RULE", + type: "NUMBER_RULE", + w: 1}; + } +}; + +var SpaceRule = { + tag: "SPACE_RULE", + createAcceptor: function(tag) { + + if (tag["SPACE_RULE"]) + return null; + + return {strOffset: 0, + isFinal: false, + transit: function(ch) { + if (ch == " " || ch == "\t" || ch == "\r" || ch == "\n" || + ch == "\u00A0" || ch=="\u2003"//nbsp and emsp + ) { + this.isFinal = true; + this.strOffset++; + } else { + this.isError = true; + } + return this; + }, + isError: false, + tag: SpaceRule.tag, + w: 1, + type: "SPACE_RULE"}; + } +} + +var SingleSymbolRule = { + tag: "SINSYM", + createAcceptor: function(tag) { + return {strOffset: 0, + isFinal: false, + transit: function(ch) { + if (this.strOffset == 0 && ch.match(/^[\@\(\)\/\,\-\."`]$/)) { + this.isFinal = true; + this.strOffset++; + } else { + this.isError = true; + } + return this; + }, + isError: false, + tag: "SINSYM", + w: 1, + type: "SINSYM"}; + } +} + + +var LatinRules = [WordRule, SpaceRule, SingleSymbolRule, NumberRule]; + +module.exports = LatinRules; + +},{}],4:[function(require,module,exports){ +var _ = require("underscore") + , WordcutCore = require("./wordcut_core"); +var PathInfoBuilder = { + + /* + buildByPartAcceptors: function(path, acceptors, i) { + var + var genInfos = partAcceptors.reduce(function(genInfos, acceptor) { + + }, []); + + return genInfos; + } + */ + + buildByAcceptors: function(path, finalAcceptors, i) { + var self = this; + var infos = finalAcceptors.map(function(acceptor) { + var p = i - acceptor.strOffset + 1 + , _info = path[p]; + + var info = {p: p, + mw: _info.mw + (acceptor.mw === undefined ? 0 : acceptor.mw), + w: acceptor.w + _info.w, + unk: (acceptor.unk ? 
acceptor.unk : 0) + _info.unk, + type: acceptor.type}; + + if (acceptor.type == "PART") { + for(var j = p + 1; j <= i; j++) { + path[j].merge = p; + } + info.merge = p; + } + + return info; + }); + return infos.filter(function(info) { return info; }); + }, + + fallback: function(path, leftBoundary, text, i) { + var _info = path[leftBoundary]; + if (text[i].match(/[\u0E48-\u0E4E]/)) { + if (leftBoundary != 0) + leftBoundary = path[leftBoundary].p; + return {p: leftBoundary, + mw: 0, + w: 1 + _info.w, + unk: 1 + _info.unk, + type: "UNK"}; +/* } else if(leftBoundary > 0 && path[leftBoundary].type !== "UNK") { + leftBoundary = path[leftBoundary].p; + return {p: leftBoundary, + w: 1 + _info.w, + unk: 1 + _info.unk, + type: "UNK"}; */ + } else { + return {p: leftBoundary, + mw: _info.mw, + w: 1 + _info.w, + unk: 1 + _info.unk, + type: "UNK"}; + } + }, + + build: function(path, finalAcceptors, i, leftBoundary, text) { + var basicPathInfos = this.buildByAcceptors(path, finalAcceptors, i); + if (basicPathInfos.length > 0) { + return basicPathInfos; + } else { + return [this.fallback(path, leftBoundary, text, i)]; + } + } +}; + +module.exports = function() { + return _.clone(PathInfoBuilder); +} + +},{"./wordcut_core":8,"underscore":25}],5:[function(require,module,exports){ +var _ = require("underscore"); + + +var PathSelector = { + selectPath: function(paths) { + var path = paths.reduce(function(selectedPath, path) { + if (selectedPath == null) { + return path; + } else { + if (path.unk < selectedPath.unk) + return path; + if (path.unk == selectedPath.unk) { + if (path.mw < selectedPath.mw) + return path + if (path.mw == selectedPath.mw) { + if (path.w < selectedPath.w) + return path; + } + } + return selectedPath; + } + }, null); + return path; + }, + + createPath: function() { + return [{p:null, w:0, unk:0, type: "INIT", mw:0}]; + } +}; + +module.exports = function() { + return _.clone(PathSelector); +}; + +},{"underscore":25}],6:[function(require,module,exports){ 
+function isMatch(pat, offset, ch) { + if (pat.length <= offset) + return false; + var _ch = pat[offset]; + return _ch == ch || + (_ch.match(/[กข]/) && ch.match(/[ก-ฮ]/)) || + (_ch.match(/[มบ]/) && ch.match(/[ก-ฮ]/)) || + (_ch.match(/\u0E49/) && ch.match(/[\u0E48-\u0E4B]/)); +} + +var Rule0 = { + pat: "เหก็ม", + createAcceptor: function(tag) { + return {strOffset: 0, + isFinal: false, + transit: function(ch) { + if (isMatch(Rule0.pat, this.strOffset,ch)) { + this.isFinal = (this.strOffset + 1 == Rule0.pat.length); + this.strOffset++; + } else { + this.isError = true; + } + return this; + }, + isError: false, + tag: "THAI_RULE", + type: "THAI_RULE", + w: 1}; + } +}; + +var PartRule = { + createAcceptor: function(tag) { + return {strOffset: 0, + patterns: [ + "แก", "เก", "ก้", "กก์", "กา", "กี", "กิ", "กืก" + ], + isFinal: false, + transit: function(ch) { + var offset = this.strOffset; + this.patterns = this.patterns.filter(function(pat) { + return isMatch(pat, offset, ch); + }); + + if (this.patterns.length > 0) { + var len = 1 + offset; + this.isFinal = this.patterns.some(function(pat) { + return pat.length == len; + }); + this.strOffset++; + } else { + this.isError = true; + } + return this; + }, + isError: false, + tag: "PART", + type: "PART", + unk: 1, + w: 1}; + } +}; + +var ThaiRules = [Rule0, PartRule]; + +module.exports = ThaiRules; + +},{}],7:[function(require,module,exports){ +var sys = require("sys") + , WordcutDict = require("./dict") + , WordcutCore = require("./wordcut_core") + , PathInfoBuilder = require("./path_info_builder") + , PathSelector = require("./path_selector") + , Acceptors = require("./acceptors") + , latinRules = require("./latin_rules") + , thaiRules = require("./thai_rules") + , _ = require("underscore"); + + +var Wordcut = Object.create(WordcutCore); +Wordcut.defaultPathInfoBuilder = PathInfoBuilder; +Wordcut.defaultPathSelector = PathSelector; +Wordcut.defaultAcceptors = Acceptors; +Wordcut.defaultLatinRules = latinRules; 
+Wordcut.defaultThaiRules = thaiRules; +Wordcut.defaultDict = WordcutDict; + + +Wordcut.initNoDict = function(dict_path) { + var self = this; + self.pathInfoBuilder = new self.defaultPathInfoBuilder; + self.pathSelector = new self.defaultPathSelector; + self.acceptors = new self.defaultAcceptors; + self.defaultLatinRules.forEach(function(rule) { + self.acceptors.creators.push(rule); + }); + self.defaultThaiRules.forEach(function(rule) { + self.acceptors.creators.push(rule); + }); +}; + +Wordcut.init = function(dict_path, withDefault, additionalWords) { + withDefault = withDefault || false; + this.initNoDict(); + var dict = _.clone(this.defaultDict); + dict.init(dict_path, withDefault, additionalWords); + this.acceptors.creators.push(dict); +}; + +module.exports = Wordcut; + +},{"./acceptors":1,"./dict":2,"./latin_rules":3,"./path_info_builder":4,"./path_selector":5,"./thai_rules":6,"./wordcut_core":8,"sys":28,"underscore":25}],8:[function(require,module,exports){ +var WordcutCore = { + + buildPath: function(text) { + var self = this + , path = self.pathSelector.createPath() + , leftBoundary = 0; + self.acceptors.reset(); + for (var i = 0; i < text.length; i++) { + var ch = text[i]; + self.acceptors.transit(ch); + + var possiblePathInfos = self + .pathInfoBuilder + .build(path, + self.acceptors.getFinalAcceptors(), + i, + leftBoundary, + text); + var selectedPath = self.pathSelector.selectPath(possiblePathInfos) + + path.push(selectedPath); + if (selectedPath.type !== "UNK") { + leftBoundary = i; + } + } + return path; + }, + + pathToRanges: function(path) { + var e = path.length - 1 + , ranges = []; + + while (e > 0) { + var info = path[e] + , s = info.p; + + if (info.merge !== undefined && ranges.length > 0) { + var r = ranges[ranges.length - 1]; + r.s = info.merge; + s = r.s; + } else { + ranges.push({s:s, e:e}); + } + e = s; + } + return ranges.reverse(); + }, + + rangesToText: function(text, ranges, delimiter) { + return ranges.map(function(r) { + return 
text.substring(r.s, r.e); + }).join(delimiter); + }, + + cut: function(text, delimiter) { + var path = this.buildPath(text) + , ranges = this.pathToRanges(path); + return this + .rangesToText(text, ranges, + (delimiter === undefined ? "|" : delimiter)); + }, + + cutIntoRanges: function(text, noText) { + var path = this.buildPath(text) + , ranges = this.pathToRanges(path); + + if (!noText) { + ranges.forEach(function(r) { + r.text = text.substring(r.s, r.e); + }); + } + return ranges; + }, + + cutIntoArray: function(text) { + var path = this.buildPath(text) + , ranges = this.pathToRanges(path); + + return ranges.map(function(r) { + return text.substring(r.s, r.e) + }); + } +}; + +module.exports = WordcutCore; + +},{}],9:[function(require,module,exports){ +// http://wiki.commonjs.org/wiki/Unit_Testing/1.0 +// +// THIS IS NOT TESTED NOR LIKELY TO WORK OUTSIDE V8! +// +// Originally from narwhal.js (http://narwhaljs.org) +// Copyright (c) 2009 Thomas Robinson <280north.com> +// +// Permission is hereby granted, free of charge, to any person obtaining a copy +// of this software and associated documentation files (the 'Software'), to +// deal in the Software without restriction, including without limitation the +// rights to use, copy, modify, merge, publish, distribute, sublicense, and/or +// sell copies of the Software, and to permit persons to whom the Software is +// furnished to do so, subject to the following conditions: +// +// The above copyright notice and this permission notice shall be included in +// all copies or substantial portions of the Software. +// +// THE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE +// AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +// ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION +// WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. + +// when used in node, this will actually load the util module we depend on +// versus loading the builtin util module as happens otherwise +// this is a bug in node module loading as far as I am concerned +var util = require('util/'); + +var pSlice = Array.prototype.slice; +var hasOwn = Object.prototype.hasOwnProperty; + +// 1. The assert module provides functions that throw +// AssertionError's when particular conditions are not met. The +// assert module must conform to the following interface. + +var assert = module.exports = ok; + +// 2. The AssertionError is defined in assert. +// new assert.AssertionError({ message: message, +// actual: actual, +// expected: expected }) + +assert.AssertionError = function AssertionError(options) { + this.name = 'AssertionError'; + this.actual = options.actual; + this.expected = options.expected; + this.operator = options.operator; + if (options.message) { + this.message = options.message; + this.generatedMessage = false; + } else { + this.message = getMessage(this); + this.generatedMessage = true; + } + var stackStartFunction = options.stackStartFunction || fail; + + if (Error.captureStackTrace) { + Error.captureStackTrace(this, stackStartFunction); + } + else { + // non v8 browsers so we can have a stacktrace + var err = new Error(); + if (err.stack) { + var out = err.stack; + + // try to strip useless frames + var fn_name = stackStartFunction.name; + var idx = out.indexOf('\n' + fn_name); + if (idx >= 0) { + // once we have located the function frame + // we need to strip out everything before it (and its line) + var next_line = out.indexOf('\n', idx + 1); + out = out.substring(next_line + 1); + } + + this.stack = out; + } + } +}; + +// assert.AssertionError instanceof Error 
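The non-V8 branch of the `AssertionError` constructor above trims the stack string by finding the frame of `stackStartFunction` and dropping it along with everything before it. The idea in isolation (a hypothetical standalone helper, not the bundle's code):

```javascript
// Sketch of the frame-stripping logic in the AssertionError constructor:
// locate the named function in a stack string and drop everything up to
// and including its line, so the reported stack starts at the caller.
function stripFramesAbove(stack, fnName) {
  var idx = stack.indexOf('\n' + fnName);
  if (idx < 0) return stack; // frame not found: leave the stack alone
  var nextLine = stack.indexOf('\n', idx + 1);
  return stack.substring(nextLine + 1);
}

var fakeStack = 'Error\nfail (assert.js:10)\nuserCode (app.js:3)';
stripFramesAbove(fakeStack, 'fail'); // 'userCode (app.js:3)'
```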
+util.inherits(assert.AssertionError, Error); + +function replacer(key, value) { + if (util.isUndefined(value)) { + return '' + value; + } + if (util.isNumber(value) && !isFinite(value)) { + return value.toString(); + } + if (util.isFunction(value) || util.isRegExp(value)) { + return value.toString(); + } + return value; +} + +function truncate(s, n) { + if (util.isString(s)) { + return s.length < n ? s : s.slice(0, n); + } else { + return s; + } +} + +function getMessage(self) { + return truncate(JSON.stringify(self.actual, replacer), 128) + ' ' + + self.operator + ' ' + + truncate(JSON.stringify(self.expected, replacer), 128); +} + +// At present only the three keys mentioned above are used and +// understood by the spec. Implementations or sub modules can pass +// other keys to the AssertionError's constructor - they will be +// ignored. + +// 3. All of the following functions must throw an AssertionError +// when a corresponding condition is not met, with a message that +// may be undefined if not provided. All assertion methods provide +// both the actual and expected values to the assertion error for +// display purposes. + +function fail(actual, expected, message, operator, stackStartFunction) { + throw new assert.AssertionError({ + message: message, + actual: actual, + expected: expected, + operator: operator, + stackStartFunction: stackStartFunction + }); +} + +// EXTENSION! allows for well behaved errors defined elsewhere. +assert.fail = fail; + +// 4. Pure assertion tests whether a value is truthy, as determined +// by !!guard. +// assert.ok(guard, message_opt); +// This statement is equivalent to assert.equal(true, !!guard, +// message_opt);. To test strictly for the value true, use +// assert.strictEqual(true, guard, message_opt);. + +function ok(value, message) { + if (!value) fail(value, true, message, '==', assert.ok); +} +assert.ok = ok; + +// 5. The equality assertion tests shallow, coercive equality with +// ==. 
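The `replacer` defined above exists because plain `JSON.stringify` drops `undefined` and turns non-finite numbers into `null`, which would make assertion messages useless; converting them to strings first keeps them visible before `truncate` caps the message at 128 characters. A quick illustration (the object literal is an arbitrary example):

```javascript
// JSON.stringify normally loses undefined and non-finite numbers; the
// replacer used by getMessage stringifies them so they survive into
// the AssertionError message.
function replacer(key, value) {
  if (value === undefined) return '' + value;
  if (typeof value === 'number' && !isFinite(value)) return value.toString();
  return value;
}

JSON.stringify({a: undefined, b: Infinity}, replacer);
// '{"a":"undefined","b":"Infinity"}'
```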
+// assert.equal(actual, expected, message_opt); + +assert.equal = function equal(actual, expected, message) { + if (actual != expected) fail(actual, expected, message, '==', assert.equal); +}; + +// 6. The non-equality assertion tests for whether two objects are not equal +// with != assert.notEqual(actual, expected, message_opt); + +assert.notEqual = function notEqual(actual, expected, message) { + if (actual == expected) { + fail(actual, expected, message, '!=', assert.notEqual); + } +}; + +// 7. The equivalence assertion tests a deep equality relation. +// assert.deepEqual(actual, expected, message_opt); + +assert.deepEqual = function deepEqual(actual, expected, message) { + if (!_deepEqual(actual, expected)) { + fail(actual, expected, message, 'deepEqual', assert.deepEqual); + } +}; + +function _deepEqual(actual, expected) { + // 7.1. All identical values are equivalent, as determined by ===. + if (actual === expected) { + return true; + + } else if (util.isBuffer(actual) && util.isBuffer(expected)) { + if (actual.length != expected.length) return false; + + for (var i = 0; i < actual.length; i++) { + if (actual[i] !== expected[i]) return false; + } + + return true; + + // 7.2. If the expected value is a Date object, the actual value is + // equivalent if it is also a Date object that refers to the same time. + } else if (util.isDate(actual) && util.isDate(expected)) { + return actual.getTime() === expected.getTime(); + + // 7.3 If the expected value is a RegExp object, the actual value is + // equivalent if it is also a RegExp object with the same source and + // properties (`global`, `multiline`, `lastIndex`, `ignoreCase`). + } else if (util.isRegExp(actual) && util.isRegExp(expected)) { + return actual.source === expected.source && + actual.global === expected.global && + actual.multiline === expected.multiline && + actual.lastIndex === expected.lastIndex && + actual.ignoreCase === expected.ignoreCase; + + // 7.4. 
Other pairs that do not both pass typeof value == 'object', + // equivalence is determined by ==. + } else if (!util.isObject(actual) && !util.isObject(expected)) { + return actual == expected; + + // 7.5 For all other Object pairs, including Array objects, equivalence is + // determined by having the same number of owned properties (as verified + // with Object.prototype.hasOwnProperty.call), the same set of keys + // (although not necessarily the same order), equivalent values for every + // corresponding key, and an identical 'prototype' property. Note: this + // accounts for both named and indexed properties on Arrays. + } else { + return objEquiv(actual, expected); + } +} + +function isArguments(object) { + return Object.prototype.toString.call(object) == '[object Arguments]'; +} + +function objEquiv(a, b) { + if (util.isNullOrUndefined(a) || util.isNullOrUndefined(b)) + return false; + // an identical 'prototype' property. + if (a.prototype !== b.prototype) return false; + // if one is a primitive, the other must be same + if (util.isPrimitive(a) || util.isPrimitive(b)) { + return a === b; + } + var aIsArgs = isArguments(a), + bIsArgs = isArguments(b); + if ((aIsArgs && !bIsArgs) || (!aIsArgs && bIsArgs)) + return false; + if (aIsArgs) { + a = pSlice.call(a); + b = pSlice.call(b); + return _deepEqual(a, b); + } + var ka = objectKeys(a), + kb = objectKeys(b), + key, i; + // having the same number of owned properties (keys incorporates + // hasOwnProperty) + if (ka.length != kb.length) + return false; + //the same set of keys (although not necessarily the same order), + ka.sort(); + kb.sort(); + //~~~cheap key test + for (i = ka.length - 1; i >= 0; i--) { + if (ka[i] != kb[i]) + return false; + } + //equivalent values for every corresponding key, and + //~~~possibly expensive deep test + for (i = ka.length - 1; i >= 0; i--) { + key = ka[i]; + if (!_deepEqual(a[key], b[key])) return false; + } + return true; +} + +// 8. 
The non-equivalence assertion tests for any deep inequality. +// assert.notDeepEqual(actual, expected, message_opt); + +assert.notDeepEqual = function notDeepEqual(actual, expected, message) { + if (_deepEqual(actual, expected)) { + fail(actual, expected, message, 'notDeepEqual', assert.notDeepEqual); + } +}; + +// 9. The strict equality assertion tests strict equality, as determined by ===. +// assert.strictEqual(actual, expected, message_opt); + +assert.strictEqual = function strictEqual(actual, expected, message) { + if (actual !== expected) { + fail(actual, expected, message, '===', assert.strictEqual); + } +}; + +// 10. The strict non-equality assertion tests for strict inequality, as +// determined by !==. assert.notStrictEqual(actual, expected, message_opt); + +assert.notStrictEqual = function notStrictEqual(actual, expected, message) { + if (actual === expected) { + fail(actual, expected, message, '!==', assert.notStrictEqual); + } +}; + +function expectedException(actual, expected) { + if (!actual || !expected) { + return false; + } + + if (Object.prototype.toString.call(expected) == '[object RegExp]') { + return expected.test(actual); + } else if (actual instanceof expected) { + return true; + } else if (expected.call({}, actual) === true) { + return true; + } + + return false; +} + +function _throws(shouldThrow, block, expected, message) { + var actual; + + if (util.isString(expected)) { + message = expected; + expected = null; + } + + try { + block(); + } catch (e) { + actual = e; + } + + message = (expected && expected.name ? ' (' + expected.name + ').' : '.') + + (message ? 
' ' + message : '.'); + + if (shouldThrow && !actual) { + fail(actual, expected, 'Missing expected exception' + message); + } + + if (!shouldThrow && expectedException(actual, expected)) { + fail(actual, expected, 'Got unwanted exception' + message); + } + + if ((shouldThrow && actual && expected && + !expectedException(actual, expected)) || (!shouldThrow && actual)) { + throw actual; + } +} + +// 11. Expected to throw an error: +// assert.throws(block, Error_opt, message_opt); + +assert.throws = function(block, /*optional*/error, /*optional*/message) { + _throws.apply(this, [true].concat(pSlice.call(arguments))); +}; + +// EXTENSION! This is annoying to write outside this module. +assert.doesNotThrow = function(block, /*optional*/message) { + _throws.apply(this, [false].concat(pSlice.call(arguments))); +}; + +assert.ifError = function(err) { if (err) {throw err;}}; + +var objectKeys = Object.keys || function (obj) { + var keys = []; + for (var key in obj) { + if (hasOwn.call(obj, key)) keys.push(key); + } + return keys; +}; + +},{"util/":28}],10:[function(require,module,exports){ +'use strict'; +module.exports = balanced; +function balanced(a, b, str) { + if (a instanceof RegExp) a = maybeMatch(a, str); + if (b instanceof RegExp) b = maybeMatch(b, str); + + var r = range(a, b, str); + + return r && { + start: r[0], + end: r[1], + pre: str.slice(0, r[0]), + body: str.slice(r[0] + a.length, r[1]), + post: str.slice(r[1] + b.length) + }; +} + +function maybeMatch(reg, str) { + var m = str.match(reg); + return m ? 
m[0] : null; +} + +balanced.range = range; +function range(a, b, str) { + var begs, beg, left, right, result; + var ai = str.indexOf(a); + var bi = str.indexOf(b, ai + 1); + var i = ai; + + if (ai >= 0 && bi > 0) { + begs = []; + left = str.length; + + while (i >= 0 && !result) { + if (i == ai) { + begs.push(i); + ai = str.indexOf(a, i + 1); + } else if (begs.length == 1) { + result = [ begs.pop(), bi ]; + } else { + beg = begs.pop(); + if (beg < left) { + left = beg; + right = bi; + } + + bi = str.indexOf(b, i + 1); + } + + i = ai < bi && ai >= 0 ? ai : bi; + } + + if (begs.length) { + result = [ left, right ]; + } + } + + return result; +} + +},{}],11:[function(require,module,exports){ +var concatMap = require('concat-map'); +var balanced = require('balanced-match'); + +module.exports = expandTop; + +var escSlash = '\0SLASH'+Math.random()+'\0'; +var escOpen = '\0OPEN'+Math.random()+'\0'; +var escClose = '\0CLOSE'+Math.random()+'\0'; +var escComma = '\0COMMA'+Math.random()+'\0'; +var escPeriod = '\0PERIOD'+Math.random()+'\0'; + +function numeric(str) { + return parseInt(str, 10) == str + ? 
parseInt(str, 10) + : str.charCodeAt(0); +} + +function escapeBraces(str) { + return str.split('\\\\').join(escSlash) + .split('\\{').join(escOpen) + .split('\\}').join(escClose) + .split('\\,').join(escComma) + .split('\\.').join(escPeriod); +} + +function unescapeBraces(str) { + return str.split(escSlash).join('\\') + .split(escOpen).join('{') + .split(escClose).join('}') + .split(escComma).join(',') + .split(escPeriod).join('.'); +} + + +// Basically just str.split(","), but handling cases +// where we have nested braced sections, which should be +// treated as individual members, like {a,{b,c},d} +function parseCommaParts(str) { + if (!str) + return ['']; + + var parts = []; + var m = balanced('{', '}', str); + + if (!m) + return str.split(','); + + var pre = m.pre; + var body = m.body; + var post = m.post; + var p = pre.split(','); + + p[p.length-1] += '{' + body + '}'; + var postParts = parseCommaParts(post); + if (post.length) { + p[p.length-1] += postParts.shift(); + p.push.apply(p, postParts); + } + + parts.push.apply(parts, p); + + return parts; +} + +function expandTop(str) { + if (!str) + return []; + + // I don't know why Bash 4.3 does this, but it does. + // Anything starting with {} will have the first two bytes preserved + // but *only* at the top level, so {},a}b will not expand to anything, + // but a{},b}c will be expanded to [a}c,abc]. 
+ // One could argue that this is a bug in Bash, but since the goal of + // this module is to match Bash's rules, we escape a leading {} + if (str.substr(0, 2) === '{}') { + str = '\\{\\}' + str.substr(2); + } + + return expand(escapeBraces(str), true).map(unescapeBraces); +} + +function identity(e) { + return e; +} + +function embrace(str) { + return '{' + str + '}'; +} +function isPadded(el) { + return /^-?0\d/.test(el); +} + +function lte(i, y) { + return i <= y; +} +function gte(i, y) { + return i >= y; +} + +function expand(str, isTop) { + var expansions = []; + + var m = balanced('{', '}', str); + if (!m || /\$$/.test(m.pre)) return [str]; + + var isNumericSequence = /^-?\d+\.\.-?\d+(?:\.\.-?\d+)?$/.test(m.body); + var isAlphaSequence = /^[a-zA-Z]\.\.[a-zA-Z](?:\.\.-?\d+)?$/.test(m.body); + var isSequence = isNumericSequence || isAlphaSequence; + var isOptions = m.body.indexOf(',') >= 0; + if (!isSequence && !isOptions) { + // {a},b} + if (m.post.match(/,.*\}/)) { + str = m.pre + '{' + m.body + escClose + m.post; + return expand(str); + } + return [str]; + } + + var n; + if (isSequence) { + n = m.body.split(/\.\./); + } else { + n = parseCommaParts(m.body); + if (n.length === 1) { + // x{{a,b}}y ==> x{a}y x{b}y + n = expand(n[0], false).map(embrace); + if (n.length === 1) { + var post = m.post.length + ? expand(m.post, false) + : ['']; + return post.map(function(p) { + return m.pre + n[0] + p; + }); + } + } + } + + // at this point, n is the parts, and we know it's not a comma set + // with a single entry. + + // no need to expand pre, since it is guaranteed to be free of brace-sets + var pre = m.pre; + var post = m.post.length + ? expand(m.post, false) + : ['']; + + var N; + + if (isSequence) { + var x = numeric(n[0]); + var y = numeric(n[1]); + var width = Math.max(n[0].length, n[1].length) + var incr = n.length == 3 + ? 
Math.abs(numeric(n[2])) + : 1; + var test = lte; + var reverse = y < x; + if (reverse) { + incr *= -1; + test = gte; + } + var pad = n.some(isPadded); + + N = []; + + for (var i = x; test(i, y); i += incr) { + var c; + if (isAlphaSequence) { + c = String.fromCharCode(i); + if (c === '\\') + c = ''; + } else { + c = String(i); + if (pad) { + var need = width - c.length; + if (need > 0) { + var z = new Array(need + 1).join('0'); + if (i < 0) + c = '-' + z + c.slice(1); + else + c = z + c; + } + } + } + N.push(c); + } + } else { + N = concatMap(n, function(el) { return expand(el, false) }); + } + + for (var j = 0; j < N.length; j++) { + for (var k = 0; k < post.length; k++) { + var expansion = pre + N[j] + post[k]; + if (!isTop || isSequence || expansion) + expansions.push(expansion); + } + } + + return expansions; +} + + +},{"balanced-match":10,"concat-map":13}],12:[function(require,module,exports){ + +},{}],13:[function(require,module,exports){ +module.exports = function (xs, fn) { + var res = []; + for (var i = 0; i < xs.length; i++) { + var x = fn(xs[i], i); + if (isArray(x)) res.push.apply(res, x); + else res.push(x); + } + return res; +}; + +var isArray = Array.isArray || function (xs) { + return Object.prototype.toString.call(xs) === '[object Array]'; +}; + +},{}],14:[function(require,module,exports){ +// Copyright Joyent, Inc. and other Node contributors. +// +// Permission is hereby granted, free of charge, to any person obtaining a +// copy of this software and associated documentation files (the +// "Software"), to deal in the Software without restriction, including +// without limitation the rights to use, copy, modify, merge, publish, +// distribute, sublicense, and/or sell copies of the Software, and to permit +// persons to whom the Software is furnished to do so, subject to the +// following conditions: +// +// The above copyright notice and this permission notice shall be included +// in all copies or substantial portions of the Software. 
+// +// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS +// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN +// NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, +// DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR +// OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE +// USE OR OTHER DEALINGS IN THE SOFTWARE. + +function EventEmitter() { + this._events = this._events || {}; + this._maxListeners = this._maxListeners || undefined; +} +module.exports = EventEmitter; + +// Backwards-compat with node 0.10.x +EventEmitter.EventEmitter = EventEmitter; + +EventEmitter.prototype._events = undefined; +EventEmitter.prototype._maxListeners = undefined; + +// By default EventEmitters will print a warning if more than 10 listeners are +// added to it. This is a useful default which helps finding memory leaks. +EventEmitter.defaultMaxListeners = 10; + +// Obviously not all Emitters should be limited to 10. This function allows +// that to be increased. Set to zero for unlimited. +EventEmitter.prototype.setMaxListeners = function(n) { + if (!isNumber(n) || n < 0 || isNaN(n)) + throw TypeError('n must be a positive number'); + this._maxListeners = n; + return this; +}; + +EventEmitter.prototype.emit = function(type) { + var er, handler, len, args, i, listeners; + + if (!this._events) + this._events = {}; + + // If there is no 'error' event listener then throw. 
+ if (type === 'error') { + if (!this._events.error || + (isObject(this._events.error) && !this._events.error.length)) { + er = arguments[1]; + if (er instanceof Error) { + throw er; // Unhandled 'error' event + } + throw TypeError('Uncaught, unspecified "error" event.'); + } + } + + handler = this._events[type]; + + if (isUndefined(handler)) + return false; + + if (isFunction(handler)) { + switch (arguments.length) { + // fast cases + case 1: + handler.call(this); + break; + case 2: + handler.call(this, arguments[1]); + break; + case 3: + handler.call(this, arguments[1], arguments[2]); + break; + // slower + default: + len = arguments.length; + args = new Array(len - 1); + for (i = 1; i < len; i++) + args[i - 1] = arguments[i]; + handler.apply(this, args); + } + } else if (isObject(handler)) { + len = arguments.length; + args = new Array(len - 1); + for (i = 1; i < len; i++) + args[i - 1] = arguments[i]; + + listeners = handler.slice(); + len = listeners.length; + for (i = 0; i < len; i++) + listeners[i].apply(this, args); + } + + return true; +}; + +EventEmitter.prototype.addListener = function(type, listener) { + var m; + + if (!isFunction(listener)) + throw TypeError('listener must be a function'); + + if (!this._events) + this._events = {}; + + // To avoid recursion in the case that type === "newListener"! Before + // adding it to the listeners, first emit "newListener". + if (this._events.newListener) + this.emit('newListener', type, + isFunction(listener.listener) ? + listener.listener : listener); + + if (!this._events[type]) + // Optimize the case of one listener. Don't need the extra array object. + this._events[type] = listener; + else if (isObject(this._events[type])) + // If we've already got an array, just append. + this._events[type].push(listener); + else + // Adding the second element, need to change to array. 
+ this._events[type] = [this._events[type], listener]; + + // Check for listener leak + if (isObject(this._events[type]) && !this._events[type].warned) { + var m; + if (!isUndefined(this._maxListeners)) { + m = this._maxListeners; + } else { + m = EventEmitter.defaultMaxListeners; + } + + if (m && m > 0 && this._events[type].length > m) { + this._events[type].warned = true; + console.error('(node) warning: possible EventEmitter memory ' + + 'leak detected. %d listeners added. ' + + 'Use emitter.setMaxListeners() to increase limit.', + this._events[type].length); + if (typeof console.trace === 'function') { + // not supported in IE 10 + console.trace(); + } + } + } + + return this; +}; + +EventEmitter.prototype.on = EventEmitter.prototype.addListener; + +EventEmitter.prototype.once = function(type, listener) { + if (!isFunction(listener)) + throw TypeError('listener must be a function'); + + var fired = false; + + function g() { + this.removeListener(type, g); + + if (!fired) { + fired = true; + listener.apply(this, arguments); + } + } + + g.listener = listener; + this.on(type, g); + + return this; +}; + +// emits a 'removeListener' event iff the listener was removed +EventEmitter.prototype.removeListener = function(type, listener) { + var list, position, length, i; + + if (!isFunction(listener)) + throw TypeError('listener must be a function'); + + if (!this._events || !this._events[type]) + return this; + + list = this._events[type]; + length = list.length; + position = -1; + + if (list === listener || + (isFunction(list.listener) && list.listener === listener)) { + delete this._events[type]; + if (this._events.removeListener) + this.emit('removeListener', type, listener); + + } else if (isObject(list)) { + for (i = length; i-- > 0;) { + if (list[i] === listener || + (list[i].listener && list[i].listener === listener)) { + position = i; + break; + } + } + + if (position < 0) + return this; + + if (list.length === 1) { + list.length = 0; + delete 
this._events[type]; + } else { + list.splice(position, 1); + } + + if (this._events.removeListener) + this.emit('removeListener', type, listener); + } + + return this; +}; + +EventEmitter.prototype.removeAllListeners = function(type) { + var key, listeners; + + if (!this._events) + return this; + + // not listening for removeListener, no need to emit + if (!this._events.removeListener) { + if (arguments.length === 0) + this._events = {}; + else if (this._events[type]) + delete this._events[type]; + return this; + } + + // emit removeListener for all listeners on all events + if (arguments.length === 0) { + for (key in this._events) { + if (key === 'removeListener') continue; + this.removeAllListeners(key); + } + this.removeAllListeners('removeListener'); + this._events = {}; + return this; + } + + listeners = this._events[type]; + + if (isFunction(listeners)) { + this.removeListener(type, listeners); + } else { + // LIFO order + while (listeners.length) + this.removeListener(type, listeners[listeners.length - 1]); + } + delete this._events[type]; + + return this; +}; + +EventEmitter.prototype.listeners = function(type) { + var ret; + if (!this._events || !this._events[type]) + ret = []; + else if (isFunction(this._events[type])) + ret = [this._events[type]]; + else + ret = this._events[type].slice(); + return ret; +}; + +EventEmitter.listenerCount = function(emitter, type) { + var ret; + if (!emitter._events || !emitter._events[type]) + ret = 0; + else if (isFunction(emitter._events[type])) + ret = 1; + else + ret = emitter._events[type].length; + return ret; +}; + +function isFunction(arg) { + return typeof arg === 'function'; +} + +function isNumber(arg) { + return typeof arg === 'number'; +} + +function isObject(arg) { + return typeof arg === 'object' && arg !== null; +} + +function isUndefined(arg) { + return arg === void 0; +} + +},{}],15:[function(require,module,exports){ +(function (process){ +exports.alphasort = alphasort +exports.alphasorti = alphasorti 
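glob's `common.js` exports two sort comparators: `alphasort` compares match paths case-sensitively via `localeCompare`, while `alphasorti` lowercases both sides first (used when the `nocase` option is set). Restated in isolation for illustration:

```javascript
// The two comparators glob uses to sort its results: case-sensitive
// localeCompare, and a case-insensitive variant that lowercases first.
function alphasort(a, b) { return a.localeCompare(b); }
function alphasorti(a, b) { return a.toLowerCase().localeCompare(b.toLowerCase()); }

// With nocase, 'A.txt' and 'a.txt' compare equal:
alphasorti('A.txt', 'a.txt'); // 0
```

Note the exact ordering of mixed-case strings under `alphasort` is locale-dependent, since `localeCompare` consults the runtime's collation rules.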
+exports.setopts = setopts +exports.ownProp = ownProp +exports.makeAbs = makeAbs +exports.finish = finish +exports.mark = mark +exports.isIgnored = isIgnored +exports.childrenIgnored = childrenIgnored + +function ownProp (obj, field) { + return Object.prototype.hasOwnProperty.call(obj, field) +} + +var path = require("path") +var minimatch = require("minimatch") +var isAbsolute = require("path-is-absolute") +var Minimatch = minimatch.Minimatch + +function alphasorti (a, b) { + return a.toLowerCase().localeCompare(b.toLowerCase()) +} + +function alphasort (a, b) { + return a.localeCompare(b) +} + +function setupIgnores (self, options) { + self.ignore = options.ignore || [] + + if (!Array.isArray(self.ignore)) + self.ignore = [self.ignore] + + if (self.ignore.length) { + self.ignore = self.ignore.map(ignoreMap) + } +} + +function ignoreMap (pattern) { + var gmatcher = null + if (pattern.slice(-3) === '/**') { + var gpattern = pattern.replace(/(\/\*\*)+$/, '') + gmatcher = new Minimatch(gpattern) + } + + return { + matcher: new Minimatch(pattern), + gmatcher: gmatcher + } +} + +function setopts (self, pattern, options) { + if (!options) + options = {} + + // base-matching: just use globstar for that. 
+ if (options.matchBase && -1 === pattern.indexOf("/")) { + if (options.noglobstar) { + throw new Error("base matching requires globstar") + } + pattern = "**/" + pattern + } + + self.silent = !!options.silent + self.pattern = pattern + self.strict = options.strict !== false + self.realpath = !!options.realpath + self.realpathCache = options.realpathCache || Object.create(null) + self.follow = !!options.follow + self.dot = !!options.dot + self.mark = !!options.mark + self.nodir = !!options.nodir + if (self.nodir) + self.mark = true + self.sync = !!options.sync + self.nounique = !!options.nounique + self.nonull = !!options.nonull + self.nosort = !!options.nosort + self.nocase = !!options.nocase + self.stat = !!options.stat + self.noprocess = !!options.noprocess + + self.maxLength = options.maxLength || Infinity + self.cache = options.cache || Object.create(null) + self.statCache = options.statCache || Object.create(null) + self.symlinks = options.symlinks || Object.create(null) + + setupIgnores(self, options) + + self.changedCwd = false + var cwd = process.cwd() + if (!ownProp(options, "cwd")) + self.cwd = cwd + else { + self.cwd = options.cwd + self.changedCwd = path.resolve(options.cwd) !== cwd + } + + self.root = options.root || path.resolve(self.cwd, "/") + self.root = path.resolve(self.root) + if (process.platform === "win32") + self.root = self.root.replace(/\\/g, "/") + + self.nomount = !!options.nomount + + // disable comments and negation unless the user explicitly + // passes in false as the option. + options.nonegate = options.nonegate === false ? false : true + options.nocomment = options.nocomment === false ? 
false : true + deprecationWarning(options) + + self.minimatch = new Minimatch(pattern, options) + self.options = self.minimatch.options +} + +// TODO(isaacs): remove entirely in v6 +// exported to reset in tests +exports.deprecationWarned +function deprecationWarning(options) { + if (!options.nonegate || !options.nocomment) { + if (process.noDeprecation !== true && !exports.deprecationWarned) { + var msg = 'glob WARNING: comments and negation will be disabled in v6' + if (process.throwDeprecation) + throw new Error(msg) + else if (process.traceDeprecation) + console.trace(msg) + else + console.error(msg) + + exports.deprecationWarned = true + } + } +} + +function finish (self) { + var nou = self.nounique + var all = nou ? [] : Object.create(null) + + for (var i = 0, l = self.matches.length; i < l; i ++) { + var matches = self.matches[i] + if (!matches || Object.keys(matches).length === 0) { + if (self.nonull) { + // do like the shell, and spit out the literal glob + var literal = self.minimatch.globSet[i] + if (nou) + all.push(literal) + else + all[literal] = true + } + } else { + // had matches + var m = Object.keys(matches) + if (nou) + all.push.apply(all, m) + else + m.forEach(function (m) { + all[m] = true + }) + } + } + + if (!nou) + all = Object.keys(all) + + if (!self.nosort) + all = all.sort(self.nocase ? 
alphasorti : alphasort) + + // at *some* point we statted all of these + if (self.mark) { + for (var i = 0; i < all.length; i++) { + all[i] = self._mark(all[i]) + } + if (self.nodir) { + all = all.filter(function (e) { + return !(/\/$/.test(e)) + }) + } + } + + if (self.ignore.length) + all = all.filter(function(m) { + return !isIgnored(self, m) + }) + + self.found = all +} + +function mark (self, p) { + var abs = makeAbs(self, p) + var c = self.cache[abs] + var m = p + if (c) { + var isDir = c === 'DIR' || Array.isArray(c) + var slash = p.slice(-1) === '/' + + if (isDir && !slash) + m += '/' + else if (!isDir && slash) + m = m.slice(0, -1) + + if (m !== p) { + var mabs = makeAbs(self, m) + self.statCache[mabs] = self.statCache[abs] + self.cache[mabs] = self.cache[abs] + } + } + + return m +} + +// lotta situps... +function makeAbs (self, f) { + var abs = f + if (f.charAt(0) === '/') { + abs = path.join(self.root, f) + } else if (isAbsolute(f) || f === '') { + abs = f + } else if (self.changedCwd) { + abs = path.resolve(self.cwd, f) + } else { + abs = path.resolve(f) + } + return abs +} + + +// Return true, if pattern ends with globstar '**', for the accompanying parent directory. +// Ex:- If node_modules/** is the pattern, add 'node_modules' to ignore list along with it's contents +function isIgnored (self, path) { + if (!self.ignore.length) + return false + + return self.ignore.some(function(item) { + return item.matcher.match(path) || !!(item.gmatcher && item.gmatcher.match(path)) + }) +} + +function childrenIgnored (self, path) { + if (!self.ignore.length) + return false + + return self.ignore.some(function(item) { + return !!(item.gmatcher && item.gmatcher.match(path)) + }) +} + +}).call(this,require('_process')) +},{"_process":24,"minimatch":20,"path":22,"path-is-absolute":23}],16:[function(require,module,exports){ +(function (process){ +// Approach: +// +// 1. Get the minimatch set +// 2. For each pattern in the set, PROCESS(pattern, false) +// 3. 
Store matches per-set, then uniq them +// +// PROCESS(pattern, inGlobStar) +// Get the first [n] items from pattern that are all strings +// Join these together. This is PREFIX. +// If there is no more remaining, then stat(PREFIX) and +// add to matches if it succeeds. END. +// +// If inGlobStar and PREFIX is symlink and points to dir +// set ENTRIES = [] +// else readdir(PREFIX) as ENTRIES +// If fail, END +// +// with ENTRIES +// If pattern[n] is GLOBSTAR +// // handle the case where the globstar match is empty +// // by pruning it out, and testing the resulting pattern +// PROCESS(pattern[0..n] + pattern[n+1 .. $], false) +// // handle other cases. +// for ENTRY in ENTRIES (not dotfiles) +// // attach globstar + tail onto the entry +// // Mark that this entry is a globstar match +// PROCESS(pattern[0..n] + ENTRY + pattern[n .. $], true) +// +// else // not globstar +// for ENTRY in ENTRIES (not dotfiles, unless pattern[n] is dot) +// Test ENTRY against pattern[n] +// If fails, continue +// If passes, PROCESS(pattern[0..n] + item + pattern[n+1 .. $]) +// +// Caveat: +// Cache all stats and readdirs results to minimize syscall. Since all +// we ever care about is existence and directory-ness, we can just keep +// `true` for files, and [children,...] for directories, or `false` for +// things that don't exist. 
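+//
+// Hypothetical usage sketch (not part of the original bundle) of the API
+// that the PROCESS algorithm above implements; the `glob` and `glob.sync`
+// names assume the exports set up below:
+//
+//   var glob = require('glob')
+//   glob('lib/**/*.js', { nodir: true }, function (er, files) {
+//     if (er) throw er
+//     // `files` holds the unique, sorted matches assembled by finish()
+//   })
+//   var same = glob.sync('lib/**/*.js', { nodir: true })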
+ +module.exports = glob + +var fs = require('fs') +var minimatch = require('minimatch') +var Minimatch = minimatch.Minimatch +var inherits = require('inherits') +var EE = require('events').EventEmitter +var path = require('path') +var assert = require('assert') +var isAbsolute = require('path-is-absolute') +var globSync = require('./sync.js') +var common = require('./common.js') +var alphasort = common.alphasort +var alphasorti = common.alphasorti +var setopts = common.setopts +var ownProp = common.ownProp +var inflight = require('inflight') +var util = require('util') +var childrenIgnored = common.childrenIgnored +var isIgnored = common.isIgnored + +var once = require('once') + +function glob (pattern, options, cb) { + if (typeof options === 'function') cb = options, options = {} + if (!options) options = {} + + if (options.sync) { + if (cb) + throw new TypeError('callback provided to sync glob') + return globSync(pattern, options) + } + + return new Glob(pattern, options, cb) +} + +glob.sync = globSync +var GlobSync = glob.GlobSync = globSync.GlobSync + +// old api surface +glob.glob = glob + +glob.hasMagic = function (pattern, options_) { + var options = util._extend({}, options_) + options.noprocess = true + + var g = new Glob(pattern, options) + var set = g.minimatch.set + if (set.length > 1) + return true + + for (var j = 0; j < set[0].length; j++) { + if (typeof set[0][j] !== 'string') + return true + } + + return false +} + +glob.Glob = Glob +inherits(Glob, EE) +function Glob (pattern, options, cb) { + if (typeof options === 'function') { + cb = options + options = null + } + + if (options && options.sync) { + if (cb) + throw new TypeError('callback provided to sync glob') + return new GlobSync(pattern, options) + } + + if (!(this instanceof Glob)) + return new Glob(pattern, options, cb) + + setopts(this, pattern, options) + this._didRealPath = false + + // process each pattern in the minimatch set + var n = this.minimatch.set.length + + // The matches are 
stored as {<filename>: true,...} so that
+  // duplicates are automagically pruned.
+  // Later, we do an Object.keys() on these.
+  // Keep them as a list so we can fill in when nonull is set.
+  this.matches = new Array(n)
+
+  if (typeof cb === 'function') {
+    cb = once(cb)
+    this.on('error', cb)
+    this.on('end', function (matches) {
+      cb(null, matches)
+    })
+  }
+
+  var self = this
+  var n = this.minimatch.set.length
+  this._processing = 0
+  this.matches = new Array(n)
+
+  this._emitQueue = []
+  this._processQueue = []
+  this.paused = false
+
+  if (this.noprocess)
+    return this
+
+  if (n === 0)
+    return done()
+
+  for (var i = 0; i < n; i ++) {
+    this._process(this.minimatch.set[i], i, false, done)
+  }
+
+  function done () {
+    --self._processing
+    if (self._processing <= 0)
+      self._finish()
+  }
+}
+
+Glob.prototype._finish = function () {
+  assert(this instanceof Glob)
+  if (this.aborted)
+    return
+
+  if (this.realpath && !this._didRealpath)
+    return this._realpath()
+
+  common.finish(this)
+  this.emit('end', this.found)
+}
+
+Glob.prototype._realpath = function () {
+  if (this._didRealpath)
+    return
+
+  this._didRealpath = true
+
+  var n = this.matches.length
+  if (n === 0)
+    return this._finish()
+
+  var self = this
+  for (var i = 0; i < this.matches.length; i++)
+    this._realpathSet(i, next)
+
+  function next () {
+    if (--n === 0)
+      self._finish()
+  }
+}
+
+Glob.prototype._realpathSet = function (index, cb) {
+  var matchset = this.matches[index]
+  if (!matchset)
+    return cb()
+
+  var found = Object.keys(matchset)
+  var self = this
+  var n = found.length
+
+  if (n === 0)
+    return cb()
+
+  var set = this.matches[index] = Object.create(null)
+  found.forEach(function (p, i) {
+    // If there's a problem with the stat, then it means that
+    // one or more of the links in the realpath couldn't be
+    // resolved. just return the abs value in that case.
+ p = self._makeAbs(p) + fs.realpath(p, self.realpathCache, function (er, real) { + if (!er) + set[real] = true + else if (er.syscall === 'stat') + set[p] = true + else + self.emit('error', er) // srsly wtf right here + + if (--n === 0) { + self.matches[index] = set + cb() + } + }) + }) +} + +Glob.prototype._mark = function (p) { + return common.mark(this, p) +} + +Glob.prototype._makeAbs = function (f) { + return common.makeAbs(this, f) +} + +Glob.prototype.abort = function () { + this.aborted = true + this.emit('abort') +} + +Glob.prototype.pause = function () { + if (!this.paused) { + this.paused = true + this.emit('pause') + } +} + +Glob.prototype.resume = function () { + if (this.paused) { + this.emit('resume') + this.paused = false + if (this._emitQueue.length) { + var eq = this._emitQueue.slice(0) + this._emitQueue.length = 0 + for (var i = 0; i < eq.length; i ++) { + var e = eq[i] + this._emitMatch(e[0], e[1]) + } + } + if (this._processQueue.length) { + var pq = this._processQueue.slice(0) + this._processQueue.length = 0 + for (var i = 0; i < pq.length; i ++) { + var p = pq[i] + this._processing-- + this._process(p[0], p[1], p[2], p[3]) + } + } + } +} + +Glob.prototype._process = function (pattern, index, inGlobStar, cb) { + assert(this instanceof Glob) + assert(typeof cb === 'function') + + if (this.aborted) + return + + this._processing++ + if (this.paused) { + this._processQueue.push([pattern, index, inGlobStar, cb]) + return + } + + //console.error('PROCESS %d', this._processing, pattern) + + // Get the first [n] parts of pattern that are all strings. + var n = 0 + while (typeof pattern[n] === 'string') { + n ++ + } + // now n is the index of the first one that is *not* a string. + + // see if there's anything else + var prefix + switch (n) { + // if not, then this is rather simple + case pattern.length: + this._processSimple(pattern.join('/'), index, cb) + return + + case 0: + // pattern *starts* with some non-trivial item. 
+ // going to readdir(cwd), but not include the prefix in matches. + prefix = null + break + + default: + // pattern has some string bits in the front. + // whatever it starts with, whether that's 'absolute' like /foo/bar, + // or 'relative' like '../baz' + prefix = pattern.slice(0, n).join('/') + break + } + + var remain = pattern.slice(n) + + // get the list of entries. + var read + if (prefix === null) + read = '.' + else if (isAbsolute(prefix) || isAbsolute(pattern.join('/'))) { + if (!prefix || !isAbsolute(prefix)) + prefix = '/' + prefix + read = prefix + } else + read = prefix + + var abs = this._makeAbs(read) + + //if ignored, skip _processing + if (childrenIgnored(this, read)) + return cb() + + var isGlobStar = remain[0] === minimatch.GLOBSTAR + if (isGlobStar) + this._processGlobStar(prefix, read, abs, remain, index, inGlobStar, cb) + else + this._processReaddir(prefix, read, abs, remain, index, inGlobStar, cb) +} + +Glob.prototype._processReaddir = function (prefix, read, abs, remain, index, inGlobStar, cb) { + var self = this + this._readdir(abs, inGlobStar, function (er, entries) { + return self._processReaddir2(prefix, read, abs, remain, index, inGlobStar, entries, cb) + }) +} + +Glob.prototype._processReaddir2 = function (prefix, read, abs, remain, index, inGlobStar, entries, cb) { + + // if the abs isn't a dir, then nothing can match! + if (!entries) + return cb() + + // It will only match dot entries if it starts with a dot, or if + // dot is set. Stuff like @(.foo|.bar) isn't allowed. + var pn = remain[0] + var negate = !!this.minimatch.negate + var rawGlob = pn._glob + var dotOk = this.dot || rawGlob.charAt(0) === '.' + + var matchedEntries = [] + for (var i = 0; i < entries.length; i++) { + var e = entries[i] + if (e.charAt(0) !== '.' 
|| dotOk) { + var m + if (negate && !prefix) { + m = !e.match(pn) + } else { + m = e.match(pn) + } + if (m) + matchedEntries.push(e) + } + } + + //console.error('prd2', prefix, entries, remain[0]._glob, matchedEntries) + + var len = matchedEntries.length + // If there are no matched entries, then nothing matches. + if (len === 0) + return cb() + + // if this is the last remaining pattern bit, then no need for + // an additional stat *unless* the user has specified mark or + // stat explicitly. We know they exist, since readdir returned + // them. + + if (remain.length === 1 && !this.mark && !this.stat) { + if (!this.matches[index]) + this.matches[index] = Object.create(null) + + for (var i = 0; i < len; i ++) { + var e = matchedEntries[i] + if (prefix) { + if (prefix !== '/') + e = prefix + '/' + e + else + e = prefix + e + } + + if (e.charAt(0) === '/' && !this.nomount) { + e = path.join(this.root, e) + } + this._emitMatch(index, e) + } + // This was the last one, and no stats were needed + return cb() + } + + // now test all matched entries as stand-ins for that part + // of the pattern. 
+ remain.shift() + for (var i = 0; i < len; i ++) { + var e = matchedEntries[i] + var newPattern + if (prefix) { + if (prefix !== '/') + e = prefix + '/' + e + else + e = prefix + e + } + this._process([e].concat(remain), index, inGlobStar, cb) + } + cb() +} + +Glob.prototype._emitMatch = function (index, e) { + if (this.aborted) + return + + if (this.matches[index][e]) + return + + if (isIgnored(this, e)) + return + + if (this.paused) { + this._emitQueue.push([index, e]) + return + } + + var abs = this._makeAbs(e) + + if (this.nodir) { + var c = this.cache[abs] + if (c === 'DIR' || Array.isArray(c)) + return + } + + if (this.mark) + e = this._mark(e) + + this.matches[index][e] = true + + var st = this.statCache[abs] + if (st) + this.emit('stat', e, st) + + this.emit('match', e) +} + +Glob.prototype._readdirInGlobStar = function (abs, cb) { + if (this.aborted) + return + + // follow all symlinked directories forever + // just proceed as if this is a non-globstar situation + if (this.follow) + return this._readdir(abs, false, cb) + + var lstatkey = 'lstat\0' + abs + var self = this + var lstatcb = inflight(lstatkey, lstatcb_) + + if (lstatcb) + fs.lstat(abs, lstatcb) + + function lstatcb_ (er, lstat) { + if (er) + return cb() + + var isSym = lstat.isSymbolicLink() + self.symlinks[abs] = isSym + + // If it's not a symlink or a dir, then it's definitely a regular file. + // don't bother doing a readdir in that case. 
+ if (!isSym && !lstat.isDirectory()) { + self.cache[abs] = 'FILE' + cb() + } else + self._readdir(abs, false, cb) + } +} + +Glob.prototype._readdir = function (abs, inGlobStar, cb) { + if (this.aborted) + return + + cb = inflight('readdir\0'+abs+'\0'+inGlobStar, cb) + if (!cb) + return + + //console.error('RD %j %j', +inGlobStar, abs) + if (inGlobStar && !ownProp(this.symlinks, abs)) + return this._readdirInGlobStar(abs, cb) + + if (ownProp(this.cache, abs)) { + var c = this.cache[abs] + if (!c || c === 'FILE') + return cb() + + if (Array.isArray(c)) + return cb(null, c) + } + + var self = this + fs.readdir(abs, readdirCb(this, abs, cb)) +} + +function readdirCb (self, abs, cb) { + return function (er, entries) { + if (er) + self._readdirError(abs, er, cb) + else + self._readdirEntries(abs, entries, cb) + } +} + +Glob.prototype._readdirEntries = function (abs, entries, cb) { + if (this.aborted) + return + + // if we haven't asked to stat everything, then just + // assume that everything in there exists, so we can avoid + // having to stat it a second time. + if (!this.mark && !this.stat) { + for (var i = 0; i < entries.length; i ++) { + var e = entries[i] + if (abs === '/') + e = abs + e + else + e = abs + '/' + e + this.cache[e] = true + } + } + + this.cache[abs] = entries + return cb(null, entries) +} + +Glob.prototype._readdirError = function (f, er, cb) { + if (this.aborted) + return + + // handle errors, and cache the information + switch (er.code) { + case 'ENOTSUP': // https://github.com/isaacs/node-glob/issues/205 + case 'ENOTDIR': // totally normal. means it *does* exist. + this.cache[this._makeAbs(f)] = 'FILE' + break + + case 'ENOENT': // not terribly unusual + case 'ELOOP': + case 'ENAMETOOLONG': + case 'UNKNOWN': + this.cache[this._makeAbs(f)] = false + break + + default: // some unusual error. Treat as failure. 
+ this.cache[this._makeAbs(f)] = false + if (this.strict) { + this.emit('error', er) + // If the error is handled, then we abort + // if not, we threw out of here + this.abort() + } + if (!this.silent) + console.error('glob error', er) + break + } + + return cb() +} + +Glob.prototype._processGlobStar = function (prefix, read, abs, remain, index, inGlobStar, cb) { + var self = this + this._readdir(abs, inGlobStar, function (er, entries) { + self._processGlobStar2(prefix, read, abs, remain, index, inGlobStar, entries, cb) + }) +} + + +Glob.prototype._processGlobStar2 = function (prefix, read, abs, remain, index, inGlobStar, entries, cb) { + //console.error('pgs2', prefix, remain[0], entries) + + // no entries means not a dir, so it can never have matches + // foo.txt/** doesn't match foo.txt + if (!entries) + return cb() + + // test without the globstar, and with every child both below + // and replacing the globstar. + var remainWithoutGlobStar = remain.slice(1) + var gspref = prefix ? [ prefix ] : [] + var noGlobStar = gspref.concat(remainWithoutGlobStar) + + // the noGlobStar pattern exits the inGlobStar state + this._process(noGlobStar, index, false, cb) + + var isSym = this.symlinks[abs] + var len = entries.length + + // If it's a symlink, and we're in a globstar, then stop + if (isSym && inGlobStar) + return cb() + + for (var i = 0; i < len; i++) { + var e = entries[i] + if (e.charAt(0) === '.' && !this.dot) + continue + + // these two cases enter the inGlobStar state + var instead = gspref.concat(entries[i], remainWithoutGlobStar) + this._process(instead, index, true, cb) + + var below = gspref.concat(entries[i], remain) + this._process(below, index, true, cb) + } + + cb() +} + +Glob.prototype._processSimple = function (prefix, index, cb) { + // XXX review this. Shouldn't it be doing the mounting etc + // before doing stat? kinda weird? 
+ var self = this + this._stat(prefix, function (er, exists) { + self._processSimple2(prefix, index, er, exists, cb) + }) +} +Glob.prototype._processSimple2 = function (prefix, index, er, exists, cb) { + + //console.error('ps2', prefix, exists) + + if (!this.matches[index]) + this.matches[index] = Object.create(null) + + // If it doesn't exist, then just mark the lack of results + if (!exists) + return cb() + + if (prefix && isAbsolute(prefix) && !this.nomount) { + var trail = /[\/\\]$/.test(prefix) + if (prefix.charAt(0) === '/') { + prefix = path.join(this.root, prefix) + } else { + prefix = path.resolve(this.root, prefix) + if (trail) + prefix += '/' + } + } + + if (process.platform === 'win32') + prefix = prefix.replace(/\\/g, '/') + + // Mark this as a match + this._emitMatch(index, prefix) + cb() +} + +// Returns either 'DIR', 'FILE', or false +Glob.prototype._stat = function (f, cb) { + var abs = this._makeAbs(f) + var needDir = f.slice(-1) === '/' + + if (f.length > this.maxLength) + return cb() + + if (!this.stat && ownProp(this.cache, abs)) { + var c = this.cache[abs] + + if (Array.isArray(c)) + c = 'DIR' + + // It exists, but maybe not how we need it + if (!needDir || c === 'DIR') + return cb(null, c) + + if (needDir && c === 'FILE') + return cb() + + // otherwise we have to stat, because maybe c=true + // if we know it exists, but not what it is. + } + + var exists + var stat = this.statCache[abs] + if (stat !== undefined) { + if (stat === false) + return cb(null, stat) + else { + var type = stat.isDirectory() ? 'DIR' : 'FILE' + if (needDir && type === 'FILE') + return cb() + else + return cb(null, type, stat) + } + } + + var self = this + var statcb = inflight('stat\0' + abs, lstatcb_) + if (statcb) + fs.lstat(abs, statcb) + + function lstatcb_ (er, lstat) { + if (lstat && lstat.isSymbolicLink()) { + // If it's a symlink, then treat it as the target, unless + // the target does not exist, then treat it as a file. 
+ return fs.stat(abs, function (er, stat) { + if (er) + self._stat2(f, abs, null, lstat, cb) + else + self._stat2(f, abs, er, stat, cb) + }) + } else { + self._stat2(f, abs, er, lstat, cb) + } + } +} + +Glob.prototype._stat2 = function (f, abs, er, stat, cb) { + if (er) { + this.statCache[abs] = false + return cb() + } + + var needDir = f.slice(-1) === '/' + this.statCache[abs] = stat + + if (abs.slice(-1) === '/' && !stat.isDirectory()) + return cb(null, false, stat) + + var c = stat.isDirectory() ? 'DIR' : 'FILE' + this.cache[abs] = this.cache[abs] || c + + if (needDir && c !== 'DIR') + return cb() + + return cb(null, c, stat) +} + +}).call(this,require('_process')) +},{"./common.js":15,"./sync.js":17,"_process":24,"assert":9,"events":14,"fs":12,"inflight":18,"inherits":19,"minimatch":20,"once":21,"path":22,"path-is-absolute":23,"util":28}],17:[function(require,module,exports){ +(function (process){ +module.exports = globSync +globSync.GlobSync = GlobSync + +var fs = require('fs') +var minimatch = require('minimatch') +var Minimatch = minimatch.Minimatch +var Glob = require('./glob.js').Glob +var util = require('util') +var path = require('path') +var assert = require('assert') +var isAbsolute = require('path-is-absolute') +var common = require('./common.js') +var alphasort = common.alphasort +var alphasorti = common.alphasorti +var setopts = common.setopts +var ownProp = common.ownProp +var childrenIgnored = common.childrenIgnored + +function globSync (pattern, options) { + if (typeof options === 'function' || arguments.length === 3) + throw new TypeError('callback provided to sync glob\n'+ + 'See: https://github.com/isaacs/node-glob/issues/167') + + return new GlobSync(pattern, options).found +} + +function GlobSync (pattern, options) { + if (!pattern) + throw new Error('must provide pattern') + + if (typeof options === 'function' || arguments.length === 3) + throw new TypeError('callback provided to sync glob\n'+ + 'See: 
https://github.com/isaacs/node-glob/issues/167') + + if (!(this instanceof GlobSync)) + return new GlobSync(pattern, options) + + setopts(this, pattern, options) + + if (this.noprocess) + return this + + var n = this.minimatch.set.length + this.matches = new Array(n) + for (var i = 0; i < n; i ++) { + this._process(this.minimatch.set[i], i, false) + } + this._finish() +} + +GlobSync.prototype._finish = function () { + assert(this instanceof GlobSync) + if (this.realpath) { + var self = this + this.matches.forEach(function (matchset, index) { + var set = self.matches[index] = Object.create(null) + for (var p in matchset) { + try { + p = self._makeAbs(p) + var real = fs.realpathSync(p, self.realpathCache) + set[real] = true + } catch (er) { + if (er.syscall === 'stat') + set[self._makeAbs(p)] = true + else + throw er + } + } + }) + } + common.finish(this) +} + + +GlobSync.prototype._process = function (pattern, index, inGlobStar) { + assert(this instanceof GlobSync) + + // Get the first [n] parts of pattern that are all strings. + var n = 0 + while (typeof pattern[n] === 'string') { + n ++ + } + // now n is the index of the first one that is *not* a string. + + // See if there's anything else + var prefix + switch (n) { + // if not, then this is rather simple + case pattern.length: + this._processSimple(pattern.join('/'), index) + return + + case 0: + // pattern *starts* with some non-trivial item. + // going to readdir(cwd), but not include the prefix in matches. + prefix = null + break + + default: + // pattern has some string bits in the front. + // whatever it starts with, whether that's 'absolute' like /foo/bar, + // or 'relative' like '../baz' + prefix = pattern.slice(0, n).join('/') + break + } + + var remain = pattern.slice(n) + + // get the list of entries. + var read + if (prefix === null) + read = '.' 
+ else if (isAbsolute(prefix) || isAbsolute(pattern.join('/'))) { + if (!prefix || !isAbsolute(prefix)) + prefix = '/' + prefix + read = prefix + } else + read = prefix + + var abs = this._makeAbs(read) + + //if ignored, skip processing + if (childrenIgnored(this, read)) + return + + var isGlobStar = remain[0] === minimatch.GLOBSTAR + if (isGlobStar) + this._processGlobStar(prefix, read, abs, remain, index, inGlobStar) + else + this._processReaddir(prefix, read, abs, remain, index, inGlobStar) +} + + +GlobSync.prototype._processReaddir = function (prefix, read, abs, remain, index, inGlobStar) { + var entries = this._readdir(abs, inGlobStar) + + // if the abs isn't a dir, then nothing can match! + if (!entries) + return + + // It will only match dot entries if it starts with a dot, or if + // dot is set. Stuff like @(.foo|.bar) isn't allowed. + var pn = remain[0] + var negate = !!this.minimatch.negate + var rawGlob = pn._glob + var dotOk = this.dot || rawGlob.charAt(0) === '.' + + var matchedEntries = [] + for (var i = 0; i < entries.length; i++) { + var e = entries[i] + if (e.charAt(0) !== '.' || dotOk) { + var m + if (negate && !prefix) { + m = !e.match(pn) + } else { + m = e.match(pn) + } + if (m) + matchedEntries.push(e) + } + } + + var len = matchedEntries.length + // If there are no matched entries, then nothing matches. + if (len === 0) + return + + // if this is the last remaining pattern bit, then no need for + // an additional stat *unless* the user has specified mark or + // stat explicitly. We know they exist, since readdir returned + // them. 
+ + if (remain.length === 1 && !this.mark && !this.stat) { + if (!this.matches[index]) + this.matches[index] = Object.create(null) + + for (var i = 0; i < len; i ++) { + var e = matchedEntries[i] + if (prefix) { + if (prefix.slice(-1) !== '/') + e = prefix + '/' + e + else + e = prefix + e + } + + if (e.charAt(0) === '/' && !this.nomount) { + e = path.join(this.root, e) + } + this.matches[index][e] = true + } + // This was the last one, and no stats were needed + return + } + + // now test all matched entries as stand-ins for that part + // of the pattern. + remain.shift() + for (var i = 0; i < len; i ++) { + var e = matchedEntries[i] + var newPattern + if (prefix) + newPattern = [prefix, e] + else + newPattern = [e] + this._process(newPattern.concat(remain), index, inGlobStar) + } +} + + +GlobSync.prototype._emitMatch = function (index, e) { + var abs = this._makeAbs(e) + if (this.mark) + e = this._mark(e) + + if (this.matches[index][e]) + return + + if (this.nodir) { + var c = this.cache[this._makeAbs(e)] + if (c === 'DIR' || Array.isArray(c)) + return + } + + this.matches[index][e] = true + if (this.stat) + this._stat(e) +} + + +GlobSync.prototype._readdirInGlobStar = function (abs) { + // follow all symlinked directories forever + // just proceed as if this is a non-globstar situation + if (this.follow) + return this._readdir(abs, false) + + var entries + var lstat + var stat + try { + lstat = fs.lstatSync(abs) + } catch (er) { + // lstat failed, doesn't exist + return null + } + + var isSym = lstat.isSymbolicLink() + this.symlinks[abs] = isSym + + // If it's not a symlink or a dir, then it's definitely a regular file. + // don't bother doing a readdir in that case. 
+ if (!isSym && !lstat.isDirectory()) + this.cache[abs] = 'FILE' + else + entries = this._readdir(abs, false) + + return entries +} + +GlobSync.prototype._readdir = function (abs, inGlobStar) { + var entries + + if (inGlobStar && !ownProp(this.symlinks, abs)) + return this._readdirInGlobStar(abs) + + if (ownProp(this.cache, abs)) { + var c = this.cache[abs] + if (!c || c === 'FILE') + return null + + if (Array.isArray(c)) + return c + } + + try { + return this._readdirEntries(abs, fs.readdirSync(abs)) + } catch (er) { + this._readdirError(abs, er) + return null + } +} + +GlobSync.prototype._readdirEntries = function (abs, entries) { + // if we haven't asked to stat everything, then just + // assume that everything in there exists, so we can avoid + // having to stat it a second time. + if (!this.mark && !this.stat) { + for (var i = 0; i < entries.length; i ++) { + var e = entries[i] + if (abs === '/') + e = abs + e + else + e = abs + '/' + e + this.cache[e] = true + } + } + + this.cache[abs] = entries + + // mark and cache dir-ness + return entries +} + +GlobSync.prototype._readdirError = function (f, er) { + // handle errors, and cache the information + switch (er.code) { + case 'ENOTSUP': // https://github.com/isaacs/node-glob/issues/205 + case 'ENOTDIR': // totally normal. means it *does* exist. + this.cache[this._makeAbs(f)] = 'FILE' + break + + case 'ENOENT': // not terribly unusual + case 'ELOOP': + case 'ENAMETOOLONG': + case 'UNKNOWN': + this.cache[this._makeAbs(f)] = false + break + + default: // some unusual error. Treat as failure. 
+ this.cache[this._makeAbs(f)] = false + if (this.strict) + throw er + if (!this.silent) + console.error('glob error', er) + break + } +} + +GlobSync.prototype._processGlobStar = function (prefix, read, abs, remain, index, inGlobStar) { + + var entries = this._readdir(abs, inGlobStar) + + // no entries means not a dir, so it can never have matches + // foo.txt/** doesn't match foo.txt + if (!entries) + return + + // test without the globstar, and with every child both below + // and replacing the globstar. + var remainWithoutGlobStar = remain.slice(1) + var gspref = prefix ? [ prefix ] : [] + var noGlobStar = gspref.concat(remainWithoutGlobStar) + + // the noGlobStar pattern exits the inGlobStar state + this._process(noGlobStar, index, false) + + var len = entries.length + var isSym = this.symlinks[abs] + + // If it's a symlink, and we're in a globstar, then stop + if (isSym && inGlobStar) + return + + for (var i = 0; i < len; i++) { + var e = entries[i] + if (e.charAt(0) === '.' && !this.dot) + continue + + // these two cases enter the inGlobStar state + var instead = gspref.concat(entries[i], remainWithoutGlobStar) + this._process(instead, index, true) + + var below = gspref.concat(entries[i], remain) + this._process(below, index, true) + } +} + +GlobSync.prototype._processSimple = function (prefix, index) { + // XXX review this. Shouldn't it be doing the mounting etc + // before doing stat? kinda weird? 
+ var exists = this._stat(prefix) + + if (!this.matches[index]) + this.matches[index] = Object.create(null) + + // If it doesn't exist, then just mark the lack of results + if (!exists) + return + + if (prefix && isAbsolute(prefix) && !this.nomount) { + var trail = /[\/\\]$/.test(prefix) + if (prefix.charAt(0) === '/') { + prefix = path.join(this.root, prefix) + } else { + prefix = path.resolve(this.root, prefix) + if (trail) + prefix += '/' + } + } + + if (process.platform === 'win32') + prefix = prefix.replace(/\\/g, '/') + + // Mark this as a match + this.matches[index][prefix] = true +} + +// Returns either 'DIR', 'FILE', or false +GlobSync.prototype._stat = function (f) { + var abs = this._makeAbs(f) + var needDir = f.slice(-1) === '/' + + if (f.length > this.maxLength) + return false + + if (!this.stat && ownProp(this.cache, abs)) { + var c = this.cache[abs] + + if (Array.isArray(c)) + c = 'DIR' + + // It exists, but maybe not how we need it + if (!needDir || c === 'DIR') + return c + + if (needDir && c === 'FILE') + return false + + // otherwise we have to stat, because maybe c=true + // if we know it exists, but not what it is. + } + + var exists + var stat = this.statCache[abs] + if (!stat) { + var lstat + try { + lstat = fs.lstatSync(abs) + } catch (er) { + return false + } + + if (lstat.isSymbolicLink()) { + try { + stat = fs.statSync(abs) + } catch (er) { + stat = lstat + } + } else { + stat = lstat + } + } + + this.statCache[abs] = stat + + var c = stat.isDirectory() ? 
'DIR' : 'FILE' + this.cache[abs] = this.cache[abs] || c + + if (needDir && c !== 'DIR') + return false + + return c +} + +GlobSync.prototype._mark = function (p) { + return common.mark(this, p) +} + +GlobSync.prototype._makeAbs = function (f) { + return common.makeAbs(this, f) +} + +}).call(this,require('_process')) +},{"./common.js":15,"./glob.js":16,"_process":24,"assert":9,"fs":12,"minimatch":20,"path":22,"path-is-absolute":23,"util":28}],18:[function(require,module,exports){ +(function (process){ +var wrappy = require('wrappy') +var reqs = Object.create(null) +var once = require('once') + +module.exports = wrappy(inflight) + +function inflight (key, cb) { + if (reqs[key]) { + reqs[key].push(cb) + return null + } else { + reqs[key] = [cb] + return makeres(key) + } +} + +function makeres (key) { + return once(function RES () { + var cbs = reqs[key] + var len = cbs.length + var args = slice(arguments) + + // XXX It's somewhat ambiguous whether a new callback added in this + // pass should be queued for later execution if something in the + // list of callbacks throws, or if it should just be discarded. + // However, it's such an edge case that it hardly matters, and either + // choice is likely as surprising as the other. + // As it happens, we do go ahead and schedule it for later execution. + try { + for (var i = 0; i < len; i++) { + cbs[i].apply(null, args) + } + } finally { + if (cbs.length > len) { + // added more in the interim. + // de-zalgo, just in case, but don't call again. 
+ cbs.splice(0, len) + process.nextTick(function () { + RES.apply(null, args) + }) + } else { + delete reqs[key] + } + } + }) +} + +function slice (args) { + var length = args.length + var array = [] + + for (var i = 0; i < length; i++) array[i] = args[i] + return array +} + +}).call(this,require('_process')) +},{"_process":24,"once":21,"wrappy":29}],19:[function(require,module,exports){ +if (typeof Object.create === 'function') { + // implementation from standard node.js 'util' module + module.exports = function inherits(ctor, superCtor) { + ctor.super_ = superCtor + ctor.prototype = Object.create(superCtor.prototype, { + constructor: { + value: ctor, + enumerable: false, + writable: true, + configurable: true + } + }); + }; +} else { + // old school shim for old browsers + module.exports = function inherits(ctor, superCtor) { + ctor.super_ = superCtor + var TempCtor = function () {} + TempCtor.prototype = superCtor.prototype + ctor.prototype = new TempCtor() + ctor.prototype.constructor = ctor + } +} + +},{}],20:[function(require,module,exports){ +module.exports = minimatch +minimatch.Minimatch = Minimatch + +var path = { sep: '/' } +try { + path = require('path') +} catch (er) {} + +var GLOBSTAR = minimatch.GLOBSTAR = Minimatch.GLOBSTAR = {} +var expand = require('brace-expansion') + +var plTypes = { + '!': { open: '(?:(?!(?:', close: '))[^/]*?)'}, + '?': { open: '(?:', close: ')?' }, + '+': { open: '(?:', close: ')+' }, + '*': { open: '(?:', close: ')*' }, + '@': { open: '(?:', close: ')' } +} + +// any single thing other than / +// don't need to escape / when using new RegExp() +var qmark = '[^/]' + +// * => any number of characters +var star = qmark + '*?' + +// ** when dots are allowed. Anything goes, except .. and . +// not (^ or / followed by one or two dots followed by $ or /), +// followed by anything, any number of times. +var twoStarDot = '(?:(?!(?:\\\/|^)(?:\\.{1,2})($|\\\/)).)*?' 
+ +// not a ^ or / followed by a dot, +// followed by anything, any number of times. +var twoStarNoDot = '(?:(?!(?:\\\/|^)\\.).)*?' + +// characters that need to be escaped in RegExp. +var reSpecials = charSet('().*{}+?[]^$\\!') + +// "abc" -> { a:true, b:true, c:true } +function charSet (s) { + return s.split('').reduce(function (set, c) { + set[c] = true + return set + }, {}) +} + +// normalizes slashes. +var slashSplit = /\/+/ + +minimatch.filter = filter +function filter (pattern, options) { + options = options || {} + return function (p, i, list) { + return minimatch(p, pattern, options) + } +} + +function ext (a, b) { + a = a || {} + b = b || {} + var t = {} + Object.keys(b).forEach(function (k) { + t[k] = b[k] + }) + Object.keys(a).forEach(function (k) { + t[k] = a[k] + }) + return t +} + +minimatch.defaults = function (def) { + if (!def || !Object.keys(def).length) return minimatch + + var orig = minimatch + + var m = function minimatch (p, pattern, options) { + return orig.minimatch(p, pattern, ext(def, options)) + } + + m.Minimatch = function Minimatch (pattern, options) { + return new orig.Minimatch(pattern, ext(def, options)) + } + + return m +} + +Minimatch.defaults = function (def) { + if (!def || !Object.keys(def).length) return Minimatch + return minimatch.defaults(def).Minimatch +} + +function minimatch (p, pattern, options) { + if (typeof pattern !== 'string') { + throw new TypeError('glob pattern string required') + } + + if (!options) options = {} + + // shortcut: comments match nothing. 
+ if (!options.nocomment && pattern.charAt(0) === '#') { + return false + } + + // "" only matches "" + if (pattern.trim() === '') return p === '' + + return new Minimatch(pattern, options).match(p) +} + +function Minimatch (pattern, options) { + if (!(this instanceof Minimatch)) { + return new Minimatch(pattern, options) + } + + if (typeof pattern !== 'string') { + throw new TypeError('glob pattern string required') + } + + if (!options) options = {} + pattern = pattern.trim() + + // windows support: need to use /, not \ + if (path.sep !== '/') { + pattern = pattern.split(path.sep).join('/') + } + + this.options = options + this.set = [] + this.pattern = pattern + this.regexp = null + this.negate = false + this.comment = false + this.empty = false + + // make the set of regexps etc. + this.make() +} + +Minimatch.prototype.debug = function () {} + +Minimatch.prototype.make = make +function make () { + // don't do it more than once. + if (this._made) return + + var pattern = this.pattern + var options = this.options + + // empty patterns and comments match nothing. + if (!options.nocomment && pattern.charAt(0) === '#') { + this.comment = true + return + } + if (!pattern) { + this.empty = true + return + } + + // step 1: figure out negation, etc. + this.parseNegate() + + // step 2: expand braces + var set = this.globSet = this.braceExpand() + + if (options.debug) this.debug = console.error + + this.debug(this.pattern, set) + + // step 3: now we have a set, so turn each one into a series of path-portion + // matching patterns. 
+ // These will be regexps, except in the case of "**", which is + // set to the GLOBSTAR object for globstar behavior, + // and will not contain any / characters + set = this.globParts = set.map(function (s) { + return s.split(slashSplit) + }) + + this.debug(this.pattern, set) + + // glob --> regexps + set = set.map(function (s, si, set) { + return s.map(this.parse, this) + }, this) + + this.debug(this.pattern, set) + + // filter out everything that didn't compile properly. + set = set.filter(function (s) { + return s.indexOf(false) === -1 + }) + + this.debug(this.pattern, set) + + this.set = set +} + +Minimatch.prototype.parseNegate = parseNegate +function parseNegate () { + var pattern = this.pattern + var negate = false + var options = this.options + var negateOffset = 0 + + if (options.nonegate) return + + for (var i = 0, l = pattern.length + ; i < l && pattern.charAt(i) === '!' + ; i++) { + negate = !negate + negateOffset++ + } + + if (negateOffset) this.pattern = pattern.substr(negateOffset) + this.negate = negate +} + +// Brace expansion: +// a{b,c}d -> abd acd +// a{b,}c -> abc ac +// a{0..3}d -> a0d a1d a2d a3d +// a{b,c{d,e}f}g -> abg acdfg acefg +// a{b,c}d{e,f}g -> abdeg acdeg abdeg abdfg +// +// Invalid sets are not expanded. +// a{2..}b -> a{2..}b +// a{b}c -> a{b}c +minimatch.braceExpand = function (pattern, options) { + return braceExpand(pattern, options) +} + +Minimatch.prototype.braceExpand = braceExpand + +function braceExpand (pattern, options) { + if (!options) { + if (this instanceof Minimatch) { + options = this.options + } else { + options = {} + } + } + + pattern = typeof pattern === 'undefined' + ? this.pattern : pattern + + if (typeof pattern === 'undefined') { + throw new TypeError('undefined pattern') + } + + if (options.nobrace || + !pattern.match(/\{.*\}/)) { + // shortcut. no need to expand. + return [pattern] + } + + return expand(pattern) +} + +// parse a component of the expanded set. 
+// At this point, no pattern may contain "/" in it +// so we're going to return a 2d array, where each entry is the full +// pattern, split on '/', and then turned into a regular expression. +// A regexp is made at the end which joins each array with an +// escaped /, and another full one which joins each regexp with |. +// +// Following the lead of Bash 4.1, note that "**" only has special meaning +// when it is the *only* thing in a path portion. Otherwise, any series +// of * is equivalent to a single *. Globstar behavior is enabled by +// default, and can be disabled by setting options.noglobstar. +Minimatch.prototype.parse = parse +var SUBPARSE = {} +function parse (pattern, isSub) { + if (pattern.length > 1024 * 64) { + throw new TypeError('pattern is too long') + } + + var options = this.options + + // shortcuts + if (!options.noglobstar && pattern === '**') return GLOBSTAR + if (pattern === '') return '' + + var re = '' + var hasMagic = !!options.nocase + var escaping = false + // ? => one single character + var patternListStack = [] + var negativeLists = [] + var stateChar + var inClass = false + var reClassStart = -1 + var classStart = -1 + // . and .. never match anything that doesn't start with ., + // even when options.dot is set. + var patternStart = pattern.charAt(0) === '.' ? '' // anything + // not (start or / followed by . or .. followed by / or end) + : options.dot ? '(?!(?:^|\\\/)\\.{1,2}(?:$|\\\/))' + : '(?!\\.)' + var self = this + + function clearStateChar () { + if (stateChar) { + // we had some state-tracking character + // that wasn't consumed by this pass. 
+ switch (stateChar) { + case '*': + re += star + hasMagic = true + break + case '?': + re += qmark + hasMagic = true + break + default: + re += '\\' + stateChar + break + } + self.debug('clearStateChar %j %j', stateChar, re) + stateChar = false + } + } + + for (var i = 0, len = pattern.length, c + ; (i < len) && (c = pattern.charAt(i)) + ; i++) { + this.debug('%s\t%s %s %j', pattern, i, re, c) + + // skip over any that are escaped. + if (escaping && reSpecials[c]) { + re += '\\' + c + escaping = false + continue + } + + switch (c) { + case '/': + // completely not allowed, even escaped. + // Should already be path-split by now. + return false + + case '\\': + clearStateChar() + escaping = true + continue + + // the various stateChar values + // for the "extglob" stuff. + case '?': + case '*': + case '+': + case '@': + case '!': + this.debug('%s\t%s %s %j <-- stateChar', pattern, i, re, c) + + // all of those are literals inside a class, except that + // the glob [!a] means [^a] in regexp + if (inClass) { + this.debug(' in class') + if (c === '!' && i === classStart + 1) c = '^' + re += c + continue + } + + // if we already have a stateChar, then it means + // that there was something like ** or +? in there. + // Handle the stateChar, then proceed with this one. + self.debug('call clearStateChar %j', stateChar) + clearStateChar() + stateChar = c + // if extglob is disabled, then +(asdf|foo) isn't a thing. + // just clear the statechar *now*, rather than even diving into + // the patternList stuff. + if (options.noext) clearStateChar() + continue + + case '(': + if (inClass) { + re += '(' + continue + } + + if (!stateChar) { + re += '\\(' + continue + } + + patternListStack.push({ + type: stateChar, + start: i - 1, + reStart: re.length, + open: plTypes[stateChar].open, + close: plTypes[stateChar].close + }) + // negation is (?:(?!js)[^/]*) + re += stateChar === '!' ? 
'(?:(?!(?:' : '(?:' + this.debug('plType %j %j', stateChar, re) + stateChar = false + continue + + case ')': + if (inClass || !patternListStack.length) { + re += '\\)' + continue + } + + clearStateChar() + hasMagic = true + var pl = patternListStack.pop() + // negation is (?:(?!js)[^/]*) + // The others are (?:) + re += pl.close + if (pl.type === '!') { + negativeLists.push(pl) + } + pl.reEnd = re.length + continue + + case '|': + if (inClass || !patternListStack.length || escaping) { + re += '\\|' + escaping = false + continue + } + + clearStateChar() + re += '|' + continue + + // these are mostly the same in regexp and glob + case '[': + // swallow any state-tracking char before the [ + clearStateChar() + + if (inClass) { + re += '\\' + c + continue + } + + inClass = true + classStart = i + reClassStart = re.length + re += c + continue + + case ']': + // a right bracket shall lose its special + // meaning and represent itself in + // a bracket expression if it occurs + // first in the list. -- POSIX.2 2.8.3.2 + if (i === classStart + 1 || !inClass) { + re += '\\' + c + escaping = false + continue + } + + // handle the case where we left a class open. + // "[z-a]" is valid, equivalent to "\[z-a\]" + if (inClass) { + // split where the last [ was, make sure we don't have + // an invalid re. if so, re-walk the contents of the + // would-be class to re-translate any characters that + // were passed through as-is + // TODO: It would probably be faster to determine this + // without a try/catch and a new RegExp, but it's tricky + // to do safely. For now, this is safe and works. + var cs = pattern.substring(classStart + 1, i) + try { + RegExp('[' + cs + ']') + } catch (er) { + // not a valid class! + var sp = this.parse(cs, SUBPARSE) + re = re.substr(0, reClassStart) + '\\[' + sp[0] + '\\]' + hasMagic = hasMagic || sp[1] + inClass = false + continue + } + } + + // finish up the class. 
+ hasMagic = true + inClass = false + re += c + continue + + default: + // swallow any state char that wasn't consumed + clearStateChar() + + if (escaping) { + // no need + escaping = false + } else if (reSpecials[c] + && !(c === '^' && inClass)) { + re += '\\' + } + + re += c + + } // switch + } // for + + // handle the case where we left a class open. + // "[abc" is valid, equivalent to "\[abc" + if (inClass) { + // split where the last [ was, and escape it + // this is a huge pita. We now have to re-walk + // the contents of the would-be class to re-translate + // any characters that were passed through as-is + cs = pattern.substr(classStart + 1) + sp = this.parse(cs, SUBPARSE) + re = re.substr(0, reClassStart) + '\\[' + sp[0] + hasMagic = hasMagic || sp[1] + } + + // handle the case where we had a +( thing at the *end* + // of the pattern. + // each pattern list stack adds 3 chars, and we need to go through + // and escape any | chars that were passed through as-is for the regexp. + // Go through and escape them, taking care not to double-escape any + // | chars that were already escaped. + for (pl = patternListStack.pop(); pl; pl = patternListStack.pop()) { + var tail = re.slice(pl.reStart + pl.open.length) + this.debug('setting tail', re, pl) + // maybe some even number of \, then maybe 1 \, followed by a | + tail = tail.replace(/((?:\\{2}){0,64})(\\?)\|/g, function (_, $1, $2) { + if (!$2) { + // the | isn't already escaped, so escape it. + $2 = '\\' + } + + // need to escape all those slashes *again*, without escaping the + // one that we need for escaping the | character. As it works out, + // escaping an even number of slashes can be done by simply repeating + // it exactly after itself. That's why this trick works. + // + // I am sorry that you have to see this. + return $1 + $1 + $2 + '|' + }) + + this.debug('tail=%j\n %s', tail, tail, pl, re) + var t = pl.type === '*' ? star + : pl.type === '?' ? 
qmark + : '\\' + pl.type + + hasMagic = true + re = re.slice(0, pl.reStart) + t + '\\(' + tail + } + + // handle trailing things that only matter at the very end. + clearStateChar() + if (escaping) { + // trailing \\ + re += '\\\\' + } + + // only need to apply the nodot start if the re starts with + // something that could conceivably capture a dot + var addPatternStart = false + switch (re.charAt(0)) { + case '.': + case '[': + case '(': addPatternStart = true + } + + // Hack to work around lack of negative lookbehind in JS + // A pattern like: *.!(x).!(y|z) needs to ensure that a name + // like 'a.xyz.yz' doesn't match. So, the first negative + // lookahead, has to look ALL the way ahead, to the end of + // the pattern. + for (var n = negativeLists.length - 1; n > -1; n--) { + var nl = negativeLists[n] + + var nlBefore = re.slice(0, nl.reStart) + var nlFirst = re.slice(nl.reStart, nl.reEnd - 8) + var nlLast = re.slice(nl.reEnd - 8, nl.reEnd) + var nlAfter = re.slice(nl.reEnd) + + nlLast += nlAfter + + // Handle nested stuff like *(*.js|!(*.json)), where open parens + // mean that we should *not* include the ) in the bit that is considered + // "after" the negated section. + var openParensBefore = nlBefore.split('(').length - 1 + var cleanAfter = nlAfter + for (i = 0; i < openParensBefore; i++) { + cleanAfter = cleanAfter.replace(/\)[+*?]?/, '') + } + nlAfter = cleanAfter + + var dollar = '' + if (nlAfter === '' && isSub !== SUBPARSE) { + dollar = '$' + } + var newRe = nlBefore + nlFirst + nlAfter + dollar + nlLast + re = newRe + } + + // if the re is not "" at this point, then we need to make sure + // it doesn't match against an empty path part. + // Otherwise a/* will match a/, which it should not. + if (re !== '' && hasMagic) { + re = '(?=.)' + re + } + + if (addPatternStart) { + re = patternStart + re + } + + // parsing just a piece of a larger pattern. 
+ if (isSub === SUBPARSE) { + return [re, hasMagic] + } + + // skip the regexp for non-magical patterns + // unescape anything in it, though, so that it'll be + // an exact match against a file etc. + if (!hasMagic) { + return globUnescape(pattern) + } + + var flags = options.nocase ? 'i' : '' + try { + var regExp = new RegExp('^' + re + '$', flags) + } catch (er) { + // If it was an invalid regular expression, then it can't match + // anything. This trick looks for a character after the end of + // the string, which is of course impossible, except in multi-line + // mode, but it's not a /m regex. + return new RegExp('$.') + } + + regExp._glob = pattern + regExp._src = re + + return regExp +} + +minimatch.makeRe = function (pattern, options) { + return new Minimatch(pattern, options || {}).makeRe() +} + +Minimatch.prototype.makeRe = makeRe +function makeRe () { + if (this.regexp || this.regexp === false) return this.regexp + + // at this point, this.set is a 2d array of partial + // pattern strings, or "**". + // + // It's better to use .match(). This function shouldn't + // be used, really, but it's pretty convenient sometimes, + // when you just want to work with a regex. + var set = this.set + + if (!set.length) { + this.regexp = false + return this.regexp + } + var options = this.options + + var twoStar = options.noglobstar ? star + : options.dot ? twoStarDot + : twoStarNoDot + var flags = options.nocase ? 'i' : '' + + var re = set.map(function (pattern) { + return pattern.map(function (p) { + return (p === GLOBSTAR) ? twoStar + : (typeof p === 'string') ? regExpEscape(p) + : p._src + }).join('\\\/') + }).join('|') + + // must match entire pattern + // ending in a * or ** will make it less strict. + re = '^(?:' + re + ')$' + + // can match anything, as long as it's not this. + if (this.negate) re = '^(?!' 
+ re + ').*$' + + try { + this.regexp = new RegExp(re, flags) + } catch (ex) { + this.regexp = false + } + return this.regexp +} + +minimatch.match = function (list, pattern, options) { + options = options || {} + var mm = new Minimatch(pattern, options) + list = list.filter(function (f) { + return mm.match(f) + }) + if (mm.options.nonull && !list.length) { + list.push(pattern) + } + return list +} + +Minimatch.prototype.match = match +function match (f, partial) { + this.debug('match', f, this.pattern) + // short-circuit in the case of busted things. + // comments, etc. + if (this.comment) return false + if (this.empty) return f === '' + + if (f === '/' && partial) return true + + var options = this.options + + // windows: need to use /, not \ + if (path.sep !== '/') { + f = f.split(path.sep).join('/') + } + + // treat the test path as a set of pathparts. + f = f.split(slashSplit) + this.debug(this.pattern, 'split', f) + + // just ONE of the pattern sets in this.set needs to match + // in order for it to be valid. If negating, then just one + // match means that we have failed. + // Either way, return on the first hit. + + var set = this.set + this.debug(this.pattern, 'set', set) + + // Find the basename of the path by looking for the last non-empty segment + var filename + var i + for (i = f.length - 1; i >= 0; i--) { + filename = f[i] + if (filename) break + } + + for (i = 0; i < set.length; i++) { + var pattern = set[i] + var file = f + if (options.matchBase && pattern.length === 1) { + file = [filename] + } + var hit = this.matchOne(file, pattern, partial) + if (hit) { + if (options.flipNegate) return true + return !this.negate + } + } + + // didn't get any hits. this is success if it's a negative + // pattern, failure otherwise. 
+ if (options.flipNegate) return false + return this.negate +} + +// set partial to true to test if, for example, +// "/a/b" matches the start of "/*/b/*/d" +// Partial means, if you run out of file before you run +// out of pattern, then that's fine, as long as all +// the parts match. +Minimatch.prototype.matchOne = function (file, pattern, partial) { + var options = this.options + + this.debug('matchOne', + { 'this': this, file: file, pattern: pattern }) + + this.debug('matchOne', file.length, pattern.length) + + for (var fi = 0, + pi = 0, + fl = file.length, + pl = pattern.length + ; (fi < fl) && (pi < pl) + ; fi++, pi++) { + this.debug('matchOne loop') + var p = pattern[pi] + var f = file[fi] + + this.debug(pattern, p, f) + + // should be impossible. + // some invalid regexp stuff in the set. + if (p === false) return false + + if (p === GLOBSTAR) { + this.debug('GLOBSTAR', [pattern, p, f]) + + // "**" + // a/**/b/**/c would match the following: + // a/b/x/y/z/c + // a/x/y/z/b/c + // a/b/x/b/x/c + // a/b/c + // To do this, take the rest of the pattern after + // the **, and see if it would match the file remainder. + // If so, return success. + // If not, the ** "swallows" a segment, and try again. + // This is recursively awful. + // + // a/**/b/**/c matching a/b/x/y/z/c + // - a matches a + // - doublestar + // - matchOne(b/x/y/z/c, b/**/c) + // - b matches b + // - doublestar + // - matchOne(x/y/z/c, c) -> no + // - matchOne(y/z/c, c) -> no + // - matchOne(z/c, c) -> no + // - matchOne(c, c) yes, hit + var fr = fi + var pr = pi + 1 + if (pr === pl) { + this.debug('** at the end') + // a ** at the end will just swallow the rest. + // We have found a match. + // however, it will not swallow /.x, unless + // options.dot is set. + // . and .. are *never* matched by **, for explosively + // exponential reasons. + for (; fi < fl; fi++) { + if (file[fi] === '.' || file[fi] === '..' 
|| + (!options.dot && file[fi].charAt(0) === '.')) return false + } + return true + } + + // ok, let's see if we can swallow whatever we can. + while (fr < fl) { + var swallowee = file[fr] + + this.debug('\nglobstar while', file, fr, pattern, pr, swallowee) + + // XXX remove this slice. Just pass the start index. + if (this.matchOne(file.slice(fr), pattern.slice(pr), partial)) { + this.debug('globstar found match!', fr, fl, swallowee) + // found a match. + return true + } else { + // can't swallow "." or ".." ever. + // can only swallow ".foo" when explicitly asked. + if (swallowee === '.' || swallowee === '..' || + (!options.dot && swallowee.charAt(0) === '.')) { + this.debug('dot detected!', file, fr, pattern, pr) + break + } + + // ** swallows a segment, and continue. + this.debug('globstar swallow a segment, and continue') + fr++ + } + } + + // no match was found. + // However, in partial mode, we can't say this is necessarily over. + // If there's more *pattern* left, then + if (partial) { + // ran out of file + this.debug('\n>>> no match, partial?', file, fr, pattern, pr) + if (fr === fl) return true + } + return false + } + + // something other than ** + // non-magic patterns just have to match exactly + // patterns with magic have been turned into regexps. + var hit + if (typeof p === 'string') { + if (options.nocase) { + hit = f.toLowerCase() === p.toLowerCase() + } else { + hit = f === p + } + this.debug('string match', p, f, hit) + } else { + hit = f.match(p) + this.debug('pattern match', p, f, hit) + } + + if (!hit) return false + } + + // Note: ending in / means that we'll get a final "" + // at the end of the pattern. This can only match a + // corresponding "" at the end of the file. + // If the file ends in /, then it can only match a + // a pattern that ends in /, unless the pattern just + // doesn't have any more for it. But, a/b/ should *not* + // match "a/b/*", even though "" matches against the + // [^/]*? 
pattern, except in partial mode, where it might + // simply not be reached yet. + // However, a/b/ should still satisfy a/* + + // now either we fell off the end of the pattern, or we're done. + if (fi === fl && pi === pl) { + // ran out of pattern and filename at the same time. + // an exact hit! + return true + } else if (fi === fl) { + // ran out of file, but still had pattern left. + // this is ok if we're doing the match as part of + // a glob fs traversal. + return partial + } else if (pi === pl) { + // ran out of pattern, still have file left. + // this is only acceptable if we're on the very last + // empty segment of a file with a trailing slash. + // a/* should match a/b/ + var emptyFileEnd = (fi === fl - 1) && (file[fi] === '') + return emptyFileEnd + } + + // should be unreachable. + throw new Error('wtf?') +} + +// replace stuff like \* with * +function globUnescape (s) { + return s.replace(/\\(.)/g, '$1') +} + +function regExpEscape (s) { + return s.replace(/[-[\]{}()*+?.,\\^$|#\s]/g, '\\$&') +} + +},{"brace-expansion":11,"path":22}],21:[function(require,module,exports){ +var wrappy = require('wrappy') +module.exports = wrappy(once) +module.exports.strict = wrappy(onceStrict) + +once.proto = once(function () { + Object.defineProperty(Function.prototype, 'once', { + value: function () { + return once(this) + }, + configurable: true + }) + + Object.defineProperty(Function.prototype, 'onceStrict', { + value: function () { + return onceStrict(this) + }, + configurable: true + }) +}) + +function once (fn) { + var f = function () { + if (f.called) return f.value + f.called = true + return f.value = fn.apply(this, arguments) + } + f.called = false + return f +} + +function onceStrict (fn) { + var f = function () { + if (f.called) + throw new Error(f.onceError) + f.called = true + return f.value = fn.apply(this, arguments) + } + var name = fn.name || 'Function wrapped with `once`' + f.onceError = name + " shouldn't be called more than once" + f.called = false 
+ return f +} + +},{"wrappy":29}],22:[function(require,module,exports){ +(function (process){ +// Copyright Joyent, Inc. and other Node contributors. +// +// Permission is hereby granted, free of charge, to any person obtaining a +// copy of this software and associated documentation files (the +// "Software"), to deal in the Software without restriction, including +// without limitation the rights to use, copy, modify, merge, publish, +// distribute, sublicense, and/or sell copies of the Software, and to permit +// persons to whom the Software is furnished to do so, subject to the +// following conditions: +// +// The above copyright notice and this permission notice shall be included +// in all copies or substantial portions of the Software. +// +// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS +// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN +// NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, +// DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR +// OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE +// USE OR OTHER DEALINGS IN THE SOFTWARE. + +// resolves . and .. 
elements in a path array with directory names there +// must be no slashes, empty elements, or device names (c:\) in the array +// (so also no leading and trailing slashes - it does not distinguish +// relative and absolute paths) +function normalizeArray(parts, allowAboveRoot) { + // if the path tries to go above the root, `up` ends up > 0 + var up = 0; + for (var i = parts.length - 1; i >= 0; i--) { + var last = parts[i]; + if (last === '.') { + parts.splice(i, 1); + } else if (last === '..') { + parts.splice(i, 1); + up++; + } else if (up) { + parts.splice(i, 1); + up--; + } + } + + // if the path is allowed to go above the root, restore leading ..s + if (allowAboveRoot) { + for (; up--; up) { + parts.unshift('..'); + } + } + + return parts; +} + +// Split a filename into [root, dir, basename, ext], unix version +// 'root' is just a slash, or nothing. +var splitPathRe = + /^(\/?|)([\s\S]*?)((?:\.{1,2}|[^\/]+?|)(\.[^.\/]*|))(?:[\/]*)$/; +var splitPath = function(filename) { + return splitPathRe.exec(filename).slice(1); +}; + +// path.resolve([from ...], to) +// posix version +exports.resolve = function() { + var resolvedPath = '', + resolvedAbsolute = false; + + for (var i = arguments.length - 1; i >= -1 && !resolvedAbsolute; i--) { + var path = (i >= 0) ? arguments[i] : process.cwd(); + + // Skip empty and invalid entries + if (typeof path !== 'string') { + throw new TypeError('Arguments to path.resolve must be strings'); + } else if (!path) { + continue; + } + + resolvedPath = path + '/' + resolvedPath; + resolvedAbsolute = path.charAt(0) === '/'; + } + + // At this point the path should be resolved to a full absolute path, but + // handle relative paths to be safe (might happen when process.cwd() fails) + + // Normalize the path + resolvedPath = normalizeArray(filter(resolvedPath.split('/'), function(p) { + return !!p; + }), !resolvedAbsolute).join('/'); + + return ((resolvedAbsolute ? 
'/' : '') + resolvedPath) || '.'; +}; + +// path.normalize(path) +// posix version +exports.normalize = function(path) { + var isAbsolute = exports.isAbsolute(path), + trailingSlash = substr(path, -1) === '/'; + + // Normalize the path + path = normalizeArray(filter(path.split('/'), function(p) { + return !!p; + }), !isAbsolute).join('/'); + + if (!path && !isAbsolute) { + path = '.'; + } + if (path && trailingSlash) { + path += '/'; + } + + return (isAbsolute ? '/' : '') + path; +}; + +// posix version +exports.isAbsolute = function(path) { + return path.charAt(0) === '/'; +}; + +// posix version +exports.join = function() { + var paths = Array.prototype.slice.call(arguments, 0); + return exports.normalize(filter(paths, function(p, index) { + if (typeof p !== 'string') { + throw new TypeError('Arguments to path.join must be strings'); + } + return p; + }).join('/')); +}; + + +// path.relative(from, to) +// posix version +exports.relative = function(from, to) { + from = exports.resolve(from).substr(1); + to = exports.resolve(to).substr(1); + + function trim(arr) { + var start = 0; + for (; start < arr.length; start++) { + if (arr[start] !== '') break; + } + + var end = arr.length - 1; + for (; end >= 0; end--) { + if (arr[end] !== '') break; + } + + if (start > end) return []; + return arr.slice(start, end - start + 1); + } + + var fromParts = trim(from.split('/')); + var toParts = trim(to.split('/')); + + var length = Math.min(fromParts.length, toParts.length); + var samePartsLength = length; + for (var i = 0; i < length; i++) { + if (fromParts[i] !== toParts[i]) { + samePartsLength = i; + break; + } + } + + var outputParts = []; + for (var i = samePartsLength; i < fromParts.length; i++) { + outputParts.push('..'); + } + + outputParts = outputParts.concat(toParts.slice(samePartsLength)); + + return outputParts.join('/'); +}; + +exports.sep = '/'; +exports.delimiter = ':'; + +exports.dirname = function(path) { + var result = splitPath(path), + root = result[0], + 
dir = result[1]; + + if (!root && !dir) { + // No dirname whatsoever + return '.'; + } + + if (dir) { + // It has a dirname, strip trailing slash + dir = dir.substr(0, dir.length - 1); + } + + return root + dir; +}; + + +exports.basename = function(path, ext) { + var f = splitPath(path)[2]; + // TODO: make this comparison case-insensitive on windows? + if (ext && f.substr(-1 * ext.length) === ext) { + f = f.substr(0, f.length - ext.length); + } + return f; +}; + + +exports.extname = function(path) { + return splitPath(path)[3]; +}; + +function filter (xs, f) { + if (xs.filter) return xs.filter(f); + var res = []; + for (var i = 0; i < xs.length; i++) { + if (f(xs[i], i, xs)) res.push(xs[i]); + } + return res; +} + +// String.prototype.substr - negative index don't work in IE8 +var substr = 'ab'.substr(-1) === 'b' + ? function (str, start, len) { return str.substr(start, len) } + : function (str, start, len) { + if (start < 0) start = str.length + start; + return str.substr(start, len); + } +; + +}).call(this,require('_process')) +},{"_process":24}],23:[function(require,module,exports){ +(function (process){ +'use strict'; + +function posix(path) { + return path.charAt(0) === '/'; +} + +function win32(path) { + // https://github.com/nodejs/node/blob/b3fcc245fb25539909ef1d5eaa01dbf92e168633/lib/path.js#L56 + var splitDeviceRe = /^([a-zA-Z]:|[\\\/]{2}[^\\\/]+[\\\/]+[^\\\/]+)?([\\\/])?([\s\S]*?)$/; + var result = splitDeviceRe.exec(path); + var device = result[1] || ''; + var isUnc = Boolean(device && device.charAt(1) !== ':'); + + // UNC paths are always absolute + return Boolean(result[2] || isUnc); +} + +module.exports = process.platform === 'win32' ? 
win32 : posix;
+module.exports.posix = posix;
+module.exports.win32 = win32;
+
+}).call(this,require('_process'))
+},{"_process":24}],24:[function(require,module,exports){
+// shim for using process in browser
+var process = module.exports = {};
+
+// cached from whatever global is present so that test runners that stub it
+// don't break things. But we need to wrap it in a try catch in case it is
+// wrapped in strict mode code which doesn't define any globals. It's inside a
+// function because try/catches deoptimize in certain engines.
+
+var cachedSetTimeout;
+var cachedClearTimeout;
+
+function defaultSetTimout() {
+    throw new Error('setTimeout has not been defined');
+}
+function defaultClearTimeout () {
+    throw new Error('clearTimeout has not been defined');
+}
+(function () {
+    try {
+        if (typeof setTimeout === 'function') {
+            cachedSetTimeout = setTimeout;
+        } else {
+            cachedSetTimeout = defaultSetTimout;
+        }
+    } catch (e) {
+        cachedSetTimeout = defaultSetTimout;
+    }
+    try {
+        if (typeof clearTimeout === 'function') {
+            cachedClearTimeout = clearTimeout;
+        } else {
+            cachedClearTimeout = defaultClearTimeout;
+        }
+    } catch (e) {
+        cachedClearTimeout = defaultClearTimeout;
+    }
+} ())
+function runTimeout(fun) {
+    if (cachedSetTimeout === setTimeout) {
+        // normal environments in sane situations
+        return setTimeout(fun, 0);
+    }
+    // if setTimeout wasn't available but was later defined
+    if ((cachedSetTimeout === defaultSetTimout || !cachedSetTimeout) && setTimeout) {
+        cachedSetTimeout = setTimeout;
+        return setTimeout(fun, 0);
+    }
+    try {
+        // when somebody has screwed with setTimeout but no I.E. madness
+        return cachedSetTimeout(fun, 0);
+    } catch(e){
+        try {
+            // When we are in I.E. but the script has been evaled so I.E. doesn't trust the global object when called normally
+            return cachedSetTimeout.call(null, fun, 0);
+        } catch(e){
+            // same as above but when it's a version of I.E.
that must have the global object for 'this', hopfully our context correct otherwise it will throw a global error + return cachedSetTimeout.call(this, fun, 0); + } + } + + +} +function runClearTimeout(marker) { + if (cachedClearTimeout === clearTimeout) { + //normal enviroments in sane situations + return clearTimeout(marker); + } + // if clearTimeout wasn't available but was latter defined + if ((cachedClearTimeout === defaultClearTimeout || !cachedClearTimeout) && clearTimeout) { + cachedClearTimeout = clearTimeout; + return clearTimeout(marker); + } + try { + // when when somebody has screwed with setTimeout but no I.E. maddness + return cachedClearTimeout(marker); + } catch (e){ + try { + // When we are in I.E. but the script has been evaled so I.E. doesn't trust the global object when called normally + return cachedClearTimeout.call(null, marker); + } catch (e){ + // same as above but when it's a version of I.E. that must have the global object for 'this', hopfully our context correct otherwise it will throw a global error. + // Some versions of I.E. 
have different rules for clearTimeout vs setTimeout + return cachedClearTimeout.call(this, marker); + } + } + + + +} +var queue = []; +var draining = false; +var currentQueue; +var queueIndex = -1; + +function cleanUpNextTick() { + if (!draining || !currentQueue) { + return; + } + draining = false; + if (currentQueue.length) { + queue = currentQueue.concat(queue); + } else { + queueIndex = -1; + } + if (queue.length) { + drainQueue(); + } +} + +function drainQueue() { + if (draining) { + return; + } + var timeout = runTimeout(cleanUpNextTick); + draining = true; + + var len = queue.length; + while(len) { + currentQueue = queue; + queue = []; + while (++queueIndex < len) { + if (currentQueue) { + currentQueue[queueIndex].run(); + } + } + queueIndex = -1; + len = queue.length; + } + currentQueue = null; + draining = false; + runClearTimeout(timeout); +} + +process.nextTick = function (fun) { + var args = new Array(arguments.length - 1); + if (arguments.length > 1) { + for (var i = 1; i < arguments.length; i++) { + args[i - 1] = arguments[i]; + } + } + queue.push(new Item(fun, args)); + if (queue.length === 1 && !draining) { + runTimeout(drainQueue); + } +}; + +// v8 likes predictible objects +function Item(fun, array) { + this.fun = fun; + this.array = array; +} +Item.prototype.run = function () { + this.fun.apply(null, this.array); +}; +process.title = 'browser'; +process.browser = true; +process.env = {}; +process.argv = []; +process.version = ''; // empty string to avoid regexp issues +process.versions = {}; + +function noop() {} + +process.on = noop; +process.addListener = noop; +process.once = noop; +process.off = noop; +process.removeListener = noop; +process.removeAllListeners = noop; +process.emit = noop; +process.prependListener = noop; +process.prependOnceListener = noop; + +process.listeners = function (name) { return [] } + +process.binding = function (name) { + throw new Error('process.binding is not supported'); +}; + +process.cwd = function () { return 
'/' }; +process.chdir = function (dir) { + throw new Error('process.chdir is not supported'); +}; +process.umask = function() { return 0; }; + +},{}],25:[function(require,module,exports){ +// Underscore.js 1.8.3 +// http://underscorejs.org +// (c) 2009-2015 Jeremy Ashkenas, DocumentCloud and Investigative Reporters & Editors +// Underscore may be freely distributed under the MIT license. + +(function() { + + // Baseline setup + // -------------- + + // Establish the root object, `window` in the browser, or `exports` on the server. + var root = this; + + // Save the previous value of the `_` variable. + var previousUnderscore = root._; + + // Save bytes in the minified (but not gzipped) version: + var ArrayProto = Array.prototype, ObjProto = Object.prototype, FuncProto = Function.prototype; + + // Create quick reference variables for speed access to core prototypes. + var + push = ArrayProto.push, + slice = ArrayProto.slice, + toString = ObjProto.toString, + hasOwnProperty = ObjProto.hasOwnProperty; + + // All **ECMAScript 5** native function implementations that we hope to use + // are declared here. + var + nativeIsArray = Array.isArray, + nativeKeys = Object.keys, + nativeBind = FuncProto.bind, + nativeCreate = Object.create; + + // Naked function reference for surrogate-prototype-swapping. + var Ctor = function(){}; + + // Create a safe reference to the Underscore object for use below. + var _ = function(obj) { + if (obj instanceof _) return obj; + if (!(this instanceof _)) return new _(obj); + this._wrapped = obj; + }; + + // Export the Underscore object for **Node.js**, with + // backwards-compatibility for the old `require()` API. If we're in + // the browser, add `_` as a global object. + if (typeof exports !== 'undefined') { + if (typeof module !== 'undefined' && module.exports) { + exports = module.exports = _; + } + exports._ = _; + } else { + root._ = _; + } + + // Current version. 
+ _.VERSION = '1.8.3'; + + // Internal function that returns an efficient (for current engines) version + // of the passed-in callback, to be repeatedly applied in other Underscore + // functions. + var optimizeCb = function(func, context, argCount) { + if (context === void 0) return func; + switch (argCount == null ? 3 : argCount) { + case 1: return function(value) { + return func.call(context, value); + }; + case 2: return function(value, other) { + return func.call(context, value, other); + }; + case 3: return function(value, index, collection) { + return func.call(context, value, index, collection); + }; + case 4: return function(accumulator, value, index, collection) { + return func.call(context, accumulator, value, index, collection); + }; + } + return function() { + return func.apply(context, arguments); + }; + }; + + // A mostly-internal function to generate callbacks that can be applied + // to each element in a collection, returning the desired result — either + // identity, an arbitrary callback, a property matcher, or a property accessor. + var cb = function(value, context, argCount) { + if (value == null) return _.identity; + if (_.isFunction(value)) return optimizeCb(value, context, argCount); + if (_.isObject(value)) return _.matcher(value); + return _.property(value); + }; + _.iteratee = function(value, context) { + return cb(value, context, Infinity); + }; + + // An internal function for creating assigner functions. + var createAssigner = function(keysFunc, undefinedOnly) { + return function(obj) { + var length = arguments.length; + if (length < 2 || obj == null) return obj; + for (var index = 1; index < length; index++) { + var source = arguments[index], + keys = keysFunc(source), + l = keys.length; + for (var i = 0; i < l; i++) { + var key = keys[i]; + if (!undefinedOnly || obj[key] === void 0) obj[key] = source[key]; + } + } + return obj; + }; + }; + + // An internal function for creating a new object that inherits from another. 
+ var baseCreate = function(prototype) { + if (!_.isObject(prototype)) return {}; + if (nativeCreate) return nativeCreate(prototype); + Ctor.prototype = prototype; + var result = new Ctor; + Ctor.prototype = null; + return result; + }; + + var property = function(key) { + return function(obj) { + return obj == null ? void 0 : obj[key]; + }; + }; + + // Helper for collection methods to determine whether a collection + // should be iterated as an array or as an object + // Related: http://people.mozilla.org/~jorendorff/es6-draft.html#sec-tolength + // Avoids a very nasty iOS 8 JIT bug on ARM-64. #2094 + var MAX_ARRAY_INDEX = Math.pow(2, 53) - 1; + var getLength = property('length'); + var isArrayLike = function(collection) { + var length = getLength(collection); + return typeof length == 'number' && length >= 0 && length <= MAX_ARRAY_INDEX; + }; + + // Collection Functions + // -------------------- + + // The cornerstone, an `each` implementation, aka `forEach`. + // Handles raw objects in addition to array-likes. Treats all + // sparse array-likes as if they were dense. + _.each = _.forEach = function(obj, iteratee, context) { + iteratee = optimizeCb(iteratee, context); + var i, length; + if (isArrayLike(obj)) { + for (i = 0, length = obj.length; i < length; i++) { + iteratee(obj[i], i, obj); + } + } else { + var keys = _.keys(obj); + for (i = 0, length = keys.length; i < length; i++) { + iteratee(obj[keys[i]], keys[i], obj); + } + } + return obj; + }; + + // Return the results of applying the iteratee to each element. + _.map = _.collect = function(obj, iteratee, context) { + iteratee = cb(iteratee, context); + var keys = !isArrayLike(obj) && _.keys(obj), + length = (keys || obj).length, + results = Array(length); + for (var index = 0; index < length; index++) { + var currentKey = keys ? keys[index] : index; + results[index] = iteratee(obj[currentKey], currentKey, obj); + } + return results; + }; + + // Create a reducing function iterating left or right. 
+ function createReduce(dir) { + // Optimized iterator function as using arguments.length + // in the main function will deoptimize the, see #1991. + function iterator(obj, iteratee, memo, keys, index, length) { + for (; index >= 0 && index < length; index += dir) { + var currentKey = keys ? keys[index] : index; + memo = iteratee(memo, obj[currentKey], currentKey, obj); + } + return memo; + } + + return function(obj, iteratee, memo, context) { + iteratee = optimizeCb(iteratee, context, 4); + var keys = !isArrayLike(obj) && _.keys(obj), + length = (keys || obj).length, + index = dir > 0 ? 0 : length - 1; + // Determine the initial value if none is provided. + if (arguments.length < 3) { + memo = obj[keys ? keys[index] : index]; + index += dir; + } + return iterator(obj, iteratee, memo, keys, index, length); + }; + } + + // **Reduce** builds up a single result from a list of values, aka `inject`, + // or `foldl`. + _.reduce = _.foldl = _.inject = createReduce(1); + + // The right-associative version of reduce, also known as `foldr`. + _.reduceRight = _.foldr = createReduce(-1); + + // Return the first value which passes a truth test. Aliased as `detect`. + _.find = _.detect = function(obj, predicate, context) { + var key; + if (isArrayLike(obj)) { + key = _.findIndex(obj, predicate, context); + } else { + key = _.findKey(obj, predicate, context); + } + if (key !== void 0 && key !== -1) return obj[key]; + }; + + // Return all the elements that pass a truth test. + // Aliased as `select`. + _.filter = _.select = function(obj, predicate, context) { + var results = []; + predicate = cb(predicate, context); + _.each(obj, function(value, index, list) { + if (predicate(value, index, list)) results.push(value); + }); + return results; + }; + + // Return all the elements for which a truth test fails. + _.reject = function(obj, predicate, context) { + return _.filter(obj, _.negate(cb(predicate)), context); + }; + + // Determine whether all of the elements match a truth test. 
+ // Aliased as `all`. + _.every = _.all = function(obj, predicate, context) { + predicate = cb(predicate, context); + var keys = !isArrayLike(obj) && _.keys(obj), + length = (keys || obj).length; + for (var index = 0; index < length; index++) { + var currentKey = keys ? keys[index] : index; + if (!predicate(obj[currentKey], currentKey, obj)) return false; + } + return true; + }; + + // Determine if at least one element in the object matches a truth test. + // Aliased as `any`. + _.some = _.any = function(obj, predicate, context) { + predicate = cb(predicate, context); + var keys = !isArrayLike(obj) && _.keys(obj), + length = (keys || obj).length; + for (var index = 0; index < length; index++) { + var currentKey = keys ? keys[index] : index; + if (predicate(obj[currentKey], currentKey, obj)) return true; + } + return false; + }; + + // Determine if the array or object contains a given item (using `===`). + // Aliased as `includes` and `include`. + _.contains = _.includes = _.include = function(obj, item, fromIndex, guard) { + if (!isArrayLike(obj)) obj = _.values(obj); + if (typeof fromIndex != 'number' || guard) fromIndex = 0; + return _.indexOf(obj, item, fromIndex) >= 0; + }; + + // Invoke a method (with arguments) on every item in a collection. + _.invoke = function(obj, method) { + var args = slice.call(arguments, 2); + var isFunc = _.isFunction(method); + return _.map(obj, function(value) { + var func = isFunc ? method : value[method]; + return func == null ? func : func.apply(value, args); + }); + }; + + // Convenience version of a common use case of `map`: fetching a property. + _.pluck = function(obj, key) { + return _.map(obj, _.property(key)); + }; + + // Convenience version of a common use case of `filter`: selecting only objects + // containing specific `key:value` pairs. 
+ _.where = function(obj, attrs) { + return _.filter(obj, _.matcher(attrs)); + }; + + // Convenience version of a common use case of `find`: getting the first object + // containing specific `key:value` pairs. + _.findWhere = function(obj, attrs) { + return _.find(obj, _.matcher(attrs)); + }; + + // Return the maximum element (or element-based computation). + _.max = function(obj, iteratee, context) { + var result = -Infinity, lastComputed = -Infinity, + value, computed; + if (iteratee == null && obj != null) { + obj = isArrayLike(obj) ? obj : _.values(obj); + for (var i = 0, length = obj.length; i < length; i++) { + value = obj[i]; + if (value > result) { + result = value; + } + } + } else { + iteratee = cb(iteratee, context); + _.each(obj, function(value, index, list) { + computed = iteratee(value, index, list); + if (computed > lastComputed || computed === -Infinity && result === -Infinity) { + result = value; + lastComputed = computed; + } + }); + } + return result; + }; + + // Return the minimum element (or element-based computation). + _.min = function(obj, iteratee, context) { + var result = Infinity, lastComputed = Infinity, + value, computed; + if (iteratee == null && obj != null) { + obj = isArrayLike(obj) ? obj : _.values(obj); + for (var i = 0, length = obj.length; i < length; i++) { + value = obj[i]; + if (value < result) { + result = value; + } + } + } else { + iteratee = cb(iteratee, context); + _.each(obj, function(value, index, list) { + computed = iteratee(value, index, list); + if (computed < lastComputed || computed === Infinity && result === Infinity) { + result = value; + lastComputed = computed; + } + }); + } + return result; + }; + + // Shuffle a collection, using the modern version of the + // [Fisher-Yates shuffle](http://en.wikipedia.org/wiki/Fisher–Yates_shuffle). + _.shuffle = function(obj) { + var set = isArrayLike(obj) ? 
obj : _.values(obj); + var length = set.length; + var shuffled = Array(length); + for (var index = 0, rand; index < length; index++) { + rand = _.random(0, index); + if (rand !== index) shuffled[index] = shuffled[rand]; + shuffled[rand] = set[index]; + } + return shuffled; + }; + + // Sample **n** random values from a collection. + // If **n** is not specified, returns a single random element. + // The internal `guard` argument allows it to work with `map`. + _.sample = function(obj, n, guard) { + if (n == null || guard) { + if (!isArrayLike(obj)) obj = _.values(obj); + return obj[_.random(obj.length - 1)]; + } + return _.shuffle(obj).slice(0, Math.max(0, n)); + }; + + // Sort the object's values by a criterion produced by an iteratee. + _.sortBy = function(obj, iteratee, context) { + iteratee = cb(iteratee, context); + return _.pluck(_.map(obj, function(value, index, list) { + return { + value: value, + index: index, + criteria: iteratee(value, index, list) + }; + }).sort(function(left, right) { + var a = left.criteria; + var b = right.criteria; + if (a !== b) { + if (a > b || a === void 0) return 1; + if (a < b || b === void 0) return -1; + } + return left.index - right.index; + }), 'value'); + }; + + // An internal function used for aggregate "group by" operations. + var group = function(behavior) { + return function(obj, iteratee, context) { + var result = {}; + iteratee = cb(iteratee, context); + _.each(obj, function(value, index) { + var key = iteratee(value, index, obj); + behavior(result, value, key); + }); + return result; + }; + }; + + // Groups the object's values by a criterion. Pass either a string attribute + // to group by, or a function that returns the criterion. + _.groupBy = group(function(result, value, key) { + if (_.has(result, key)) result[key].push(value); else result[key] = [value]; + }); + + // Indexes the object's values by a criterion, similar to `groupBy`, but for + // when you know that your index values will be unique. 
+ _.indexBy = group(function(result, value, key) { + result[key] = value; + }); + + // Counts instances of an object that group by a certain criterion. Pass + // either a string attribute to count by, or a function that returns the + // criterion. + _.countBy = group(function(result, value, key) { + if (_.has(result, key)) result[key]++; else result[key] = 1; + }); + + // Safely create a real, live array from anything iterable. + _.toArray = function(obj) { + if (!obj) return []; + if (_.isArray(obj)) return slice.call(obj); + if (isArrayLike(obj)) return _.map(obj, _.identity); + return _.values(obj); + }; + + // Return the number of elements in an object. + _.size = function(obj) { + if (obj == null) return 0; + return isArrayLike(obj) ? obj.length : _.keys(obj).length; + }; + + // Split a collection into two arrays: one whose elements all satisfy the given + // predicate, and one whose elements all do not satisfy the predicate. + _.partition = function(obj, predicate, context) { + predicate = cb(predicate, context); + var pass = [], fail = []; + _.each(obj, function(value, key, obj) { + (predicate(value, key, obj) ? pass : fail).push(value); + }); + return [pass, fail]; + }; + + // Array Functions + // --------------- + + // Get the first element of an array. Passing **n** will return the first N + // values in the array. Aliased as `head` and `take`. The **guard** check + // allows it to work with `_.map`. + _.first = _.head = _.take = function(array, n, guard) { + if (array == null) return void 0; + if (n == null || guard) return array[0]; + return _.initial(array, array.length - n); + }; + + // Returns everything but the last entry of the array. Especially useful on + // the arguments object. Passing **n** will return all the values in + // the array, excluding the last N. + _.initial = function(array, n, guard) { + return slice.call(array, 0, Math.max(0, array.length - (n == null || guard ? 1 : n))); + }; + + // Get the last element of an array. 
Passing **n** will return the last N + // values in the array. + _.last = function(array, n, guard) { + if (array == null) return void 0; + if (n == null || guard) return array[array.length - 1]; + return _.rest(array, Math.max(0, array.length - n)); + }; + + // Returns everything but the first entry of the array. Aliased as `tail` and `drop`. + // Especially useful on the arguments object. Passing an **n** will return + // the rest N values in the array. + _.rest = _.tail = _.drop = function(array, n, guard) { + return slice.call(array, n == null || guard ? 1 : n); + }; + + // Trim out all falsy values from an array. + _.compact = function(array) { + return _.filter(array, _.identity); + }; + + // Internal implementation of a recursive `flatten` function. + var flatten = function(input, shallow, strict, startIndex) { + var output = [], idx = 0; + for (var i = startIndex || 0, length = getLength(input); i < length; i++) { + var value = input[i]; + if (isArrayLike(value) && (_.isArray(value) || _.isArguments(value))) { + //flatten current level of array or arguments object + if (!shallow) value = flatten(value, shallow, strict); + var j = 0, len = value.length; + output.length += len; + while (j < len) { + output[idx++] = value[j++]; + } + } else if (!strict) { + output[idx++] = value; + } + } + return output; + }; + + // Flatten out an array, either recursively (by default), or just one level. + _.flatten = function(array, shallow) { + return flatten(array, shallow, false); + }; + + // Return a version of the array that does not contain the specified value(s). + _.without = function(array) { + return _.difference(array, slice.call(arguments, 1)); + }; + + // Produce a duplicate-free version of the array. If the array has already + // been sorted, you have the option of using a faster algorithm. + // Aliased as `unique`. 
+ _.uniq = _.unique = function(array, isSorted, iteratee, context) { + if (!_.isBoolean(isSorted)) { + context = iteratee; + iteratee = isSorted; + isSorted = false; + } + if (iteratee != null) iteratee = cb(iteratee, context); + var result = []; + var seen = []; + for (var i = 0, length = getLength(array); i < length; i++) { + var value = array[i], + computed = iteratee ? iteratee(value, i, array) : value; + if (isSorted) { + if (!i || seen !== computed) result.push(value); + seen = computed; + } else if (iteratee) { + if (!_.contains(seen, computed)) { + seen.push(computed); + result.push(value); + } + } else if (!_.contains(result, value)) { + result.push(value); + } + } + return result; + }; + + // Produce an array that contains the union: each distinct element from all of + // the passed-in arrays. + _.union = function() { + return _.uniq(flatten(arguments, true, true)); + }; + + // Produce an array that contains every item shared between all the + // passed-in arrays. + _.intersection = function(array) { + var result = []; + var argsLength = arguments.length; + for (var i = 0, length = getLength(array); i < length; i++) { + var item = array[i]; + if (_.contains(result, item)) continue; + for (var j = 1; j < argsLength; j++) { + if (!_.contains(arguments[j], item)) break; + } + if (j === argsLength) result.push(item); + } + return result; + }; + + // Take the difference between one array and a number of other arrays. + // Only the elements present in just the first array will remain. + _.difference = function(array) { + var rest = flatten(arguments, true, true, 1); + return _.filter(array, function(value){ + return !_.contains(rest, value); + }); + }; + + // Zip together multiple lists into a single array -- elements that share + // an index go together. + _.zip = function() { + return _.unzip(arguments); + }; + + // Complement of _.zip. 
Unzip accepts an array of arrays and groups + // each array's elements on shared indices + _.unzip = function(array) { + var length = array && _.max(array, getLength).length || 0; + var result = Array(length); + + for (var index = 0; index < length; index++) { + result[index] = _.pluck(array, index); + } + return result; + }; + + // Converts lists into objects. Pass either a single array of `[key, value]` + // pairs, or two parallel arrays of the same length -- one of keys, and one of + // the corresponding values. + _.object = function(list, values) { + var result = {}; + for (var i = 0, length = getLength(list); i < length; i++) { + if (values) { + result[list[i]] = values[i]; + } else { + result[list[i][0]] = list[i][1]; + } + } + return result; + }; + + // Generator function to create the findIndex and findLastIndex functions + function createPredicateIndexFinder(dir) { + return function(array, predicate, context) { + predicate = cb(predicate, context); + var length = getLength(array); + var index = dir > 0 ? 0 : length - 1; + for (; index >= 0 && index < length; index += dir) { + if (predicate(array[index], index, array)) return index; + } + return -1; + }; + } + + // Returns the first index on an array-like that passes a predicate test + _.findIndex = createPredicateIndexFinder(1); + _.findLastIndex = createPredicateIndexFinder(-1); + + // Use a comparator function to figure out the smallest index at which + // an object should be inserted so as to maintain order. Uses binary search. 
+ _.sortedIndex = function(array, obj, iteratee, context) { + iteratee = cb(iteratee, context, 1); + var value = iteratee(obj); + var low = 0, high = getLength(array); + while (low < high) { + var mid = Math.floor((low + high) / 2); + if (iteratee(array[mid]) < value) low = mid + 1; else high = mid; + } + return low; + }; + + // Generator function to create the indexOf and lastIndexOf functions + function createIndexFinder(dir, predicateFind, sortedIndex) { + return function(array, item, idx) { + var i = 0, length = getLength(array); + if (typeof idx == 'number') { + if (dir > 0) { + i = idx >= 0 ? idx : Math.max(idx + length, i); + } else { + length = idx >= 0 ? Math.min(idx + 1, length) : idx + length + 1; + } + } else if (sortedIndex && idx && length) { + idx = sortedIndex(array, item); + return array[idx] === item ? idx : -1; + } + if (item !== item) { + idx = predicateFind(slice.call(array, i, length), _.isNaN); + return idx >= 0 ? idx + i : -1; + } + for (idx = dir > 0 ? i : length - 1; idx >= 0 && idx < length; idx += dir) { + if (array[idx] === item) return idx; + } + return -1; + }; + } + + // Return the position of the first occurrence of an item in an array, + // or -1 if the item is not included in the array. + // If the array is large and already in sort order, pass `true` + // for **isSorted** to use binary search. + _.indexOf = createIndexFinder(1, _.findIndex, _.sortedIndex); + _.lastIndexOf = createIndexFinder(-1, _.findLastIndex); + + // Generate an integer Array containing an arithmetic progression. A port of + // the native Python `range()` function. See + // [the Python documentation](http://docs.python.org/library/functions.html#range). 
+ _.range = function(start, stop, step) { + if (stop == null) { + stop = start || 0; + start = 0; + } + step = step || 1; + + var length = Math.max(Math.ceil((stop - start) / step), 0); + var range = Array(length); + + for (var idx = 0; idx < length; idx++, start += step) { + range[idx] = start; + } + + return range; + }; + + // Function (ahem) Functions + // ------------------ + + // Determines whether to execute a function as a constructor + // or a normal function with the provided arguments + var executeBound = function(sourceFunc, boundFunc, context, callingContext, args) { + if (!(callingContext instanceof boundFunc)) return sourceFunc.apply(context, args); + var self = baseCreate(sourceFunc.prototype); + var result = sourceFunc.apply(self, args); + if (_.isObject(result)) return result; + return self; + }; + + // Create a function bound to a given object (assigning `this`, and arguments, + // optionally). Delegates to **ECMAScript 5**'s native `Function.bind` if + // available. + _.bind = function(func, context) { + if (nativeBind && func.bind === nativeBind) return nativeBind.apply(func, slice.call(arguments, 1)); + if (!_.isFunction(func)) throw new TypeError('Bind must be called on a function'); + var args = slice.call(arguments, 2); + var bound = function() { + return executeBound(func, bound, context, this, args.concat(slice.call(arguments))); + }; + return bound; + }; + + // Partially apply a function by creating a version that has had some of its + // arguments pre-filled, without changing its dynamic `this` context. _ acts + // as a placeholder, allowing any combination of arguments to be pre-filled. + _.partial = function(func) { + var boundArgs = slice.call(arguments, 1); + var bound = function() { + var position = 0, length = boundArgs.length; + var args = Array(length); + for (var i = 0; i < length; i++) { + args[i] = boundArgs[i] === _ ? 
arguments[position++] : boundArgs[i]; + } + while (position < arguments.length) args.push(arguments[position++]); + return executeBound(func, bound, this, this, args); + }; + return bound; + }; + + // Bind a number of an object's methods to that object. Remaining arguments + // are the method names to be bound. Useful for ensuring that all callbacks + // defined on an object belong to it. + _.bindAll = function(obj) { + var i, length = arguments.length, key; + if (length <= 1) throw new Error('bindAll must be passed function names'); + for (i = 1; i < length; i++) { + key = arguments[i]; + obj[key] = _.bind(obj[key], obj); + } + return obj; + }; + + // Memoize an expensive function by storing its results. + _.memoize = function(func, hasher) { + var memoize = function(key) { + var cache = memoize.cache; + var address = '' + (hasher ? hasher.apply(this, arguments) : key); + if (!_.has(cache, address)) cache[address] = func.apply(this, arguments); + return cache[address]; + }; + memoize.cache = {}; + return memoize; + }; + + // Delays a function for the given number of milliseconds, and then calls + // it with the arguments supplied. + _.delay = function(func, wait) { + var args = slice.call(arguments, 2); + return setTimeout(function(){ + return func.apply(null, args); + }, wait); + }; + + // Defers a function, scheduling it to run after the current call stack has + // cleared. + _.defer = _.partial(_.delay, _, 1); + + // Returns a function, that, when invoked, will only be triggered at most once + // during a given window of time. Normally, the throttled function will run + // as much as it can, without ever going more than once per `wait` duration; + // but if you'd like to disable the execution on the leading edge, pass + // `{leading: false}`. To disable execution on the trailing edge, ditto. 
+ _.throttle = function(func, wait, options) { + var context, args, result; + var timeout = null; + var previous = 0; + if (!options) options = {}; + var later = function() { + previous = options.leading === false ? 0 : _.now(); + timeout = null; + result = func.apply(context, args); + if (!timeout) context = args = null; + }; + return function() { + var now = _.now(); + if (!previous && options.leading === false) previous = now; + var remaining = wait - (now - previous); + context = this; + args = arguments; + if (remaining <= 0 || remaining > wait) { + if (timeout) { + clearTimeout(timeout); + timeout = null; + } + previous = now; + result = func.apply(context, args); + if (!timeout) context = args = null; + } else if (!timeout && options.trailing !== false) { + timeout = setTimeout(later, remaining); + } + return result; + }; + }; + + // Returns a function, that, as long as it continues to be invoked, will not + // be triggered. The function will be called after it stops being called for + // N milliseconds. If `immediate` is passed, trigger the function on the + // leading edge, instead of the trailing. + _.debounce = function(func, wait, immediate) { + var timeout, args, context, timestamp, result; + + var later = function() { + var last = _.now() - timestamp; + + if (last < wait && last >= 0) { + timeout = setTimeout(later, wait - last); + } else { + timeout = null; + if (!immediate) { + result = func.apply(context, args); + if (!timeout) context = args = null; + } + } + }; + + return function() { + context = this; + args = arguments; + timestamp = _.now(); + var callNow = immediate && !timeout; + if (!timeout) timeout = setTimeout(later, wait); + if (callNow) { + result = func.apply(context, args); + context = args = null; + } + + return result; + }; + }; + + // Returns the first function passed as an argument to the second, + // allowing you to adjust arguments, run code before and after, and + // conditionally execute the original function. 
+ _.wrap = function(func, wrapper) { + return _.partial(wrapper, func); + }; + + // Returns a negated version of the passed-in predicate. + _.negate = function(predicate) { + return function() { + return !predicate.apply(this, arguments); + }; + }; + + // Returns a function that is the composition of a list of functions, each + // consuming the return value of the function that follows. + _.compose = function() { + var args = arguments; + var start = args.length - 1; + return function() { + var i = start; + var result = args[start].apply(this, arguments); + while (i--) result = args[i].call(this, result); + return result; + }; + }; + + // Returns a function that will only be executed on and after the Nth call. + _.after = function(times, func) { + return function() { + if (--times < 1) { + return func.apply(this, arguments); + } + }; + }; + + // Returns a function that will only be executed up to (but not including) the Nth call. + _.before = function(times, func) { + var memo; + return function() { + if (--times > 0) { + memo = func.apply(this, arguments); + } + if (times <= 1) func = null; + return memo; + }; + }; + + // Returns a function that will be executed at most one time, no matter how + // often you call it. Useful for lazy initialization. + _.once = _.partial(_.before, 2); + + // Object Functions + // ---------------- + + // Keys in IE < 9 that won't be iterated by `for key in ...` and thus missed. + var hasEnumBug = !{toString: null}.propertyIsEnumerable('toString'); + var nonEnumerableProps = ['valueOf', 'isPrototypeOf', 'toString', + 'propertyIsEnumerable', 'hasOwnProperty', 'toLocaleString']; + + function collectNonEnumProps(obj, keys) { + var nonEnumIdx = nonEnumerableProps.length; + var constructor = obj.constructor; + var proto = (_.isFunction(constructor) && constructor.prototype) || ObjProto; + + // Constructor is a special case. 
+ var prop = 'constructor'; + if (_.has(obj, prop) && !_.contains(keys, prop)) keys.push(prop); + + while (nonEnumIdx--) { + prop = nonEnumerableProps[nonEnumIdx]; + if (prop in obj && obj[prop] !== proto[prop] && !_.contains(keys, prop)) { + keys.push(prop); + } + } + } + + // Retrieve the names of an object's own properties. + // Delegates to **ECMAScript 5**'s native `Object.keys` + _.keys = function(obj) { + if (!_.isObject(obj)) return []; + if (nativeKeys) return nativeKeys(obj); + var keys = []; + for (var key in obj) if (_.has(obj, key)) keys.push(key); + // Ahem, IE < 9. + if (hasEnumBug) collectNonEnumProps(obj, keys); + return keys; + }; + + // Retrieve all the property names of an object. + _.allKeys = function(obj) { + if (!_.isObject(obj)) return []; + var keys = []; + for (var key in obj) keys.push(key); + // Ahem, IE < 9. + if (hasEnumBug) collectNonEnumProps(obj, keys); + return keys; + }; + + // Retrieve the values of an object's properties. + _.values = function(obj) { + var keys = _.keys(obj); + var length = keys.length; + var values = Array(length); + for (var i = 0; i < length; i++) { + values[i] = obj[keys[i]]; + } + return values; + }; + + // Returns the results of applying the iteratee to each element of the object + // In contrast to _.map it returns an object + _.mapObject = function(obj, iteratee, context) { + iteratee = cb(iteratee, context); + var keys = _.keys(obj), + length = keys.length, + results = {}, + currentKey; + for (var index = 0; index < length; index++) { + currentKey = keys[index]; + results[currentKey] = iteratee(obj[currentKey], currentKey, obj); + } + return results; + }; + + // Convert an object into a list of `[key, value]` pairs. + _.pairs = function(obj) { + var keys = _.keys(obj); + var length = keys.length; + var pairs = Array(length); + for (var i = 0; i < length; i++) { + pairs[i] = [keys[i], obj[keys[i]]]; + } + return pairs; + }; + + // Invert the keys and values of an object. The values must be serializable. 
+ _.invert = function(obj) { + var result = {}; + var keys = _.keys(obj); + for (var i = 0, length = keys.length; i < length; i++) { + result[obj[keys[i]]] = keys[i]; + } + return result; + }; + + // Return a sorted list of the function names available on the object. + // Aliased as `methods` + _.functions = _.methods = function(obj) { + var names = []; + for (var key in obj) { + if (_.isFunction(obj[key])) names.push(key); + } + return names.sort(); + }; + + // Extend a given object with all the properties in passed-in object(s). + _.extend = createAssigner(_.allKeys); + + // Assigns a given object with all the own properties in the passed-in object(s) + // (https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Object/assign) + _.extendOwn = _.assign = createAssigner(_.keys); + + // Returns the first key on an object that passes a predicate test + _.findKey = function(obj, predicate, context) { + predicate = cb(predicate, context); + var keys = _.keys(obj), key; + for (var i = 0, length = keys.length; i < length; i++) { + key = keys[i]; + if (predicate(obj[key], key, obj)) return key; + } + }; + + // Return a copy of the object only containing the whitelisted properties. + _.pick = function(object, oiteratee, context) { + var result = {}, obj = object, iteratee, keys; + if (obj == null) return result; + if (_.isFunction(oiteratee)) { + keys = _.allKeys(obj); + iteratee = optimizeCb(oiteratee, context); + } else { + keys = flatten(arguments, false, false, 1); + iteratee = function(value, key, obj) { return key in obj; }; + obj = Object(obj); + } + for (var i = 0, length = keys.length; i < length; i++) { + var key = keys[i]; + var value = obj[key]; + if (iteratee(value, key, obj)) result[key] = value; + } + return result; + }; + + // Return a copy of the object without the blacklisted properties. 
+ _.omit = function(obj, iteratee, context) { + if (_.isFunction(iteratee)) { + iteratee = _.negate(iteratee); + } else { + var keys = _.map(flatten(arguments, false, false, 1), String); + iteratee = function(value, key) { + return !_.contains(keys, key); + }; + } + return _.pick(obj, iteratee, context); + }; + + // Fill in a given object with default properties. + _.defaults = createAssigner(_.allKeys, true); + + // Creates an object that inherits from the given prototype object. + // If additional properties are provided then they will be added to the + // created object. + _.create = function(prototype, props) { + var result = baseCreate(prototype); + if (props) _.extendOwn(result, props); + return result; + }; + + // Create a (shallow-cloned) duplicate of an object. + _.clone = function(obj) { + if (!_.isObject(obj)) return obj; + return _.isArray(obj) ? obj.slice() : _.extend({}, obj); + }; + + // Invokes interceptor with the obj, and then returns obj. + // The primary purpose of this method is to "tap into" a method chain, in + // order to perform operations on intermediate results within the chain. + _.tap = function(obj, interceptor) { + interceptor(obj); + return obj; + }; + + // Returns whether an object has a given set of `key:value` pairs. + _.isMatch = function(object, attrs) { + var keys = _.keys(attrs), length = keys.length; + if (object == null) return !length; + var obj = Object(object); + for (var i = 0; i < length; i++) { + var key = keys[i]; + if (attrs[key] !== obj[key] || !(key in obj)) return false; + } + return true; + }; + + + // Internal recursive comparison function for `isEqual`. + var eq = function(a, b, aStack, bStack) { + // Identical objects are equal. `0 === -0`, but they aren't identical. + // See the [Harmony `egal` proposal](http://wiki.ecmascript.org/doku.php?id=harmony:egal). + if (a === b) return a !== 0 || 1 / a === 1 / b; + // A strict comparison is necessary because `null == undefined`. 
+ if (a == null || b == null) return a === b; + // Unwrap any wrapped objects. + if (a instanceof _) a = a._wrapped; + if (b instanceof _) b = b._wrapped; + // Compare `[[Class]]` names. + var className = toString.call(a); + if (className !== toString.call(b)) return false; + switch (className) { + // Strings, numbers, regular expressions, dates, and booleans are compared by value. + case '[object RegExp]': + // RegExps are coerced to strings for comparison (Note: '' + /a/i === '/a/i') + case '[object String]': + // Primitives and their corresponding object wrappers are equivalent; thus, `"5"` is + // equivalent to `new String("5")`. + return '' + a === '' + b; + case '[object Number]': + // `NaN`s are equivalent, but non-reflexive. + // Object(NaN) is equivalent to NaN + if (+a !== +a) return +b !== +b; + // An `egal` comparison is performed for other numeric values. + return +a === 0 ? 1 / +a === 1 / b : +a === +b; + case '[object Date]': + case '[object Boolean]': + // Coerce dates and booleans to numeric primitive values. Dates are compared by their + // millisecond representations. Note that invalid dates with millisecond representations + // of `NaN` are not equivalent. + return +a === +b; + } + + var areArrays = className === '[object Array]'; + if (!areArrays) { + if (typeof a != 'object' || typeof b != 'object') return false; + + // Objects with different constructors are not equivalent, but `Object`s or `Array`s + // from different frames are. + var aCtor = a.constructor, bCtor = b.constructor; + if (aCtor !== bCtor && !(_.isFunction(aCtor) && aCtor instanceof aCtor && + _.isFunction(bCtor) && bCtor instanceof bCtor) + && ('constructor' in a && 'constructor' in b)) { + return false; + } + } + // Assume equality for cyclic structures. The algorithm for detecting cyclic + // structures is adapted from ES 5.1 section 15.12.3, abstract operation `JO`. + + // Initializing stack of traversed objects. 
+ // It's done here since we only need them for objects and arrays comparison. + aStack = aStack || []; + bStack = bStack || []; + var length = aStack.length; + while (length--) { + // Linear search. Performance is inversely proportional to the number of + // unique nested structures. + if (aStack[length] === a) return bStack[length] === b; + } + + // Add the first object to the stack of traversed objects. + aStack.push(a); + bStack.push(b); + + // Recursively compare objects and arrays. + if (areArrays) { + // Compare array lengths to determine if a deep comparison is necessary. + length = a.length; + if (length !== b.length) return false; + // Deep compare the contents, ignoring non-numeric properties. + while (length--) { + if (!eq(a[length], b[length], aStack, bStack)) return false; + } + } else { + // Deep compare objects. + var keys = _.keys(a), key; + length = keys.length; + // Ensure that both objects contain the same number of properties before comparing deep equality. + if (_.keys(b).length !== length) return false; + while (length--) { + // Deep compare each member + key = keys[length]; + if (!(_.has(b, key) && eq(a[key], b[key], aStack, bStack))) return false; + } + } + // Remove the first object from the stack of traversed objects. + aStack.pop(); + bStack.pop(); + return true; + }; + + // Perform a deep comparison to check if two objects are equal. + _.isEqual = function(a, b) { + return eq(a, b); + }; + + // Is a given array, string, or object empty? + // An "empty" object has no enumerable own-properties. + _.isEmpty = function(obj) { + if (obj == null) return true; + if (isArrayLike(obj) && (_.isArray(obj) || _.isString(obj) || _.isArguments(obj))) return obj.length === 0; + return _.keys(obj).length === 0; + }; + + // Is a given value a DOM element? + _.isElement = function(obj) { + return !!(obj && obj.nodeType === 1); + }; + + // Is a given value an array? 
+ // Delegates to ECMA5's native Array.isArray + _.isArray = nativeIsArray || function(obj) { + return toString.call(obj) === '[object Array]'; + }; + + // Is a given variable an object? + _.isObject = function(obj) { + var type = typeof obj; + return type === 'function' || type === 'object' && !!obj; + }; + + // Add some isType methods: isArguments, isFunction, isString, isNumber, isDate, isRegExp, isError. + _.each(['Arguments', 'Function', 'String', 'Number', 'Date', 'RegExp', 'Error'], function(name) { + _['is' + name] = function(obj) { + return toString.call(obj) === '[object ' + name + ']'; + }; + }); + + // Define a fallback version of the method in browsers (ahem, IE < 9), where + // there isn't any inspectable "Arguments" type. + if (!_.isArguments(arguments)) { + _.isArguments = function(obj) { + return _.has(obj, 'callee'); + }; + } + + // Optimize `isFunction` if appropriate. Work around some typeof bugs in old v8, + // IE 11 (#1621), and in Safari 8 (#1929). + if (typeof /./ != 'function' && typeof Int8Array != 'object') { + _.isFunction = function(obj) { + return typeof obj == 'function' || false; + }; + } + + // Is a given object a finite number? + _.isFinite = function(obj) { + return isFinite(obj) && !isNaN(parseFloat(obj)); + }; + + // Is the given value `NaN`? (NaN is the only number which does not equal itself). + _.isNaN = function(obj) { + return _.isNumber(obj) && obj !== +obj; + }; + + // Is a given value a boolean? + _.isBoolean = function(obj) { + return obj === true || obj === false || toString.call(obj) === '[object Boolean]'; + }; + + // Is a given value equal to null? + _.isNull = function(obj) { + return obj === null; + }; + + // Is a given variable undefined? + _.isUndefined = function(obj) { + return obj === void 0; + }; + + // Shortcut function for checking if an object has a given property directly + // on itself (in other words, not on a prototype). 
+ _.has = function(obj, key) { + return obj != null && hasOwnProperty.call(obj, key); + }; + + // Utility Functions + // ----------------- + + // Run Underscore.js in *noConflict* mode, returning the `_` variable to its + // previous owner. Returns a reference to the Underscore object. + _.noConflict = function() { + root._ = previousUnderscore; + return this; + }; + + // Keep the identity function around for default iteratees. + _.identity = function(value) { + return value; + }; + + // Predicate-generating functions. Often useful outside of Underscore. + _.constant = function(value) { + return function() { + return value; + }; + }; + + _.noop = function(){}; + + _.property = property; + + // Generates a function for a given object that returns a given property. + _.propertyOf = function(obj) { + return obj == null ? function(){} : function(key) { + return obj[key]; + }; + }; + + // Returns a predicate for checking whether an object has a given set of + // `key:value` pairs. + _.matcher = _.matches = function(attrs) { + attrs = _.extendOwn({}, attrs); + return function(obj) { + return _.isMatch(obj, attrs); + }; + }; + + // Run a function **n** times. + _.times = function(n, iteratee, context) { + var accum = Array(Math.max(0, n)); + iteratee = optimizeCb(iteratee, context, 1); + for (var i = 0; i < n; i++) accum[i] = iteratee(i); + return accum; + }; + + // Return a random integer between min and max (inclusive). + _.random = function(min, max) { + if (max == null) { + max = min; + min = 0; + } + return min + Math.floor(Math.random() * (max - min + 1)); + }; + + // A (possibly faster) way to get the current timestamp as an integer. + _.now = Date.now || function() { + return new Date().getTime(); + }; + + // List of HTML entities for escaping. + var escapeMap = { + '&': '&amp;', + '<': '&lt;', + '>': '&gt;', + '"': '&quot;', + "'": '&#x27;', + '`': '&#x60;' + }; + var unescapeMap = _.invert(escapeMap); + + // Functions for escaping and unescaping strings to/from HTML interpolation.
+ var createEscaper = function(map) { + var escaper = function(match) { + return map[match]; + }; + // Regexes for identifying a key that needs to be escaped + var source = '(?:' + _.keys(map).join('|') + ')'; + var testRegexp = RegExp(source); + var replaceRegexp = RegExp(source, 'g'); + return function(string) { + string = string == null ? '' : '' + string; + return testRegexp.test(string) ? string.replace(replaceRegexp, escaper) : string; + }; + }; + _.escape = createEscaper(escapeMap); + _.unescape = createEscaper(unescapeMap); + + // If the value of the named `property` is a function then invoke it with the + // `object` as context; otherwise, return it. + _.result = function(object, property, fallback) { + var value = object == null ? void 0 : object[property]; + if (value === void 0) { + value = fallback; + } + return _.isFunction(value) ? value.call(object) : value; + }; + + // Generate a unique integer id (unique within the entire client session). + // Useful for temporary DOM ids. + var idCounter = 0; + _.uniqueId = function(prefix) { + var id = ++idCounter + ''; + return prefix ? prefix + id : id; + }; + + // By default, Underscore uses ERB-style template delimiters, change the + // following template settings to use alternative delimiters. + _.templateSettings = { + evaluate : /<%([\s\S]+?)%>/g, + interpolate : /<%=([\s\S]+?)%>/g, + escape : /<%-([\s\S]+?)%>/g + }; + + // When customizing `templateSettings`, if you don't want to define an + // interpolation, evaluation or escaping regex, we need one that is + // guaranteed not to match. + var noMatch = /(.)^/; + + // Certain characters need to be escaped so that they can be put into a + // string literal. 
+ var escapes = { + "'": "'", + '\\': '\\', + '\r': 'r', + '\n': 'n', + '\u2028': 'u2028', + '\u2029': 'u2029' + }; + + var escaper = /\\|'|\r|\n|\u2028|\u2029/g; + + var escapeChar = function(match) { + return '\\' + escapes[match]; + }; + + // JavaScript micro-templating, similar to John Resig's implementation. + // Underscore templating handles arbitrary delimiters, preserves whitespace, + // and correctly escapes quotes within interpolated code. + // NB: `oldSettings` only exists for backwards compatibility. + _.template = function(text, settings, oldSettings) { + if (!settings && oldSettings) settings = oldSettings; + settings = _.defaults({}, settings, _.templateSettings); + + // Combine delimiters into one regular expression via alternation. + var matcher = RegExp([ + (settings.escape || noMatch).source, + (settings.interpolate || noMatch).source, + (settings.evaluate || noMatch).source + ].join('|') + '|$', 'g'); + + // Compile the template source, escaping string literals appropriately. + var index = 0; + var source = "__p+='"; + text.replace(matcher, function(match, escape, interpolate, evaluate, offset) { + source += text.slice(index, offset).replace(escaper, escapeChar); + index = offset + match.length; + + if (escape) { + source += "'+\n((__t=(" + escape + "))==null?'':_.escape(__t))+\n'"; + } else if (interpolate) { + source += "'+\n((__t=(" + interpolate + "))==null?'':__t)+\n'"; + } else if (evaluate) { + source += "';\n" + evaluate + "\n__p+='"; + } + + // Adobe VMs need the match returned to produce the correct offset. + return match; + }); + source += "';\n"; + + // If a variable is not specified, place data values in local scope.
+ if (!settings.variable) source = 'with(obj||{}){\n' + source + '}\n'; + + source = "var __t,__p='',__j=Array.prototype.join," + + "print=function(){__p+=__j.call(arguments,'');};\n" + + source + 'return __p;\n'; + + try { + var render = new Function(settings.variable || 'obj', '_', source); + } catch (e) { + e.source = source; + throw e; + } + + var template = function(data) { + return render.call(this, data, _); + }; + + // Provide the compiled source as a convenience for precompilation. + var argument = settings.variable || 'obj'; + template.source = 'function(' + argument + '){\n' + source + '}'; + + return template; + }; + + // Add a "chain" function. Start chaining a wrapped Underscore object. + _.chain = function(obj) { + var instance = _(obj); + instance._chain = true; + return instance; + }; + + // OOP + // --------------- + // If Underscore is called as a function, it returns a wrapped object that + // can be used OO-style. This wrapper holds altered versions of all the + // underscore functions. Wrapped objects may be chained. + + // Helper function to continue chaining intermediate results. + var result = function(instance, obj) { + return instance._chain ? _(obj).chain() : obj; + }; + + // Add your own custom functions to the Underscore object. + _.mixin = function(obj) { + _.each(_.functions(obj), function(name) { + var func = _[name] = obj[name]; + _.prototype[name] = function() { + var args = [this._wrapped]; + push.apply(args, arguments); + return result(this, func.apply(_, args)); + }; + }); + }; + + // Add all of the Underscore functions to the wrapper object. + _.mixin(_); + + // Add all mutator Array functions to the wrapper. 
+ _.each(['pop', 'push', 'reverse', 'shift', 'sort', 'splice', 'unshift'], function(name) { + var method = ArrayProto[name]; + _.prototype[name] = function() { + var obj = this._wrapped; + method.apply(obj, arguments); + if ((name === 'shift' || name === 'splice') && obj.length === 0) delete obj[0]; + return result(this, obj); + }; + }); + + // Add all accessor Array functions to the wrapper. + _.each(['concat', 'join', 'slice'], function(name) { + var method = ArrayProto[name]; + _.prototype[name] = function() { + return result(this, method.apply(this._wrapped, arguments)); + }; + }); + + // Extracts the result from a wrapped and chained object. + _.prototype.value = function() { + return this._wrapped; + }; + + // Provide unwrapping proxy for some methods used in engine operations + // such as arithmetic and JSON stringification. + _.prototype.valueOf = _.prototype.toJSON = _.prototype.value; + + _.prototype.toString = function() { + return '' + this._wrapped; + }; + + // AMD registration happens at the end for compatibility with AMD loaders + // that may not enforce next-turn semantics on modules. Even though general + // practice for AMD registration is to be anonymous, underscore registers + // as a named module because, like jQuery, it is a base library that is + // popular enough to be bundled in a third party lib, but not be part of + // an AMD load request. Those cases could generate an error when an + // anonymous define() is called outside of a loader request. 
+ if (typeof define === 'function' && define.amd) { + define('underscore', [], function() { + return _; + }); + } +}.call(this)); + +},{}],26:[function(require,module,exports){ +arguments[4][19][0].apply(exports,arguments) +},{"dup":19}],27:[function(require,module,exports){ +module.exports = function isBuffer(arg) { + return arg && typeof arg === 'object' + && typeof arg.copy === 'function' + && typeof arg.fill === 'function' + && typeof arg.readUInt8 === 'function'; +} +},{}],28:[function(require,module,exports){ +(function (process,global){ +// Copyright Joyent, Inc. and other Node contributors. +// +// Permission is hereby granted, free of charge, to any person obtaining a +// copy of this software and associated documentation files (the +// "Software"), to deal in the Software without restriction, including +// without limitation the rights to use, copy, modify, merge, publish, +// distribute, sublicense, and/or sell copies of the Software, and to permit +// persons to whom the Software is furnished to do so, subject to the +// following conditions: +// +// The above copyright notice and this permission notice shall be included +// in all copies or substantial portions of the Software. +// +// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS +// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN +// NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, +// DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR +// OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE +// USE OR OTHER DEALINGS IN THE SOFTWARE. 
+ +var formatRegExp = /%[sdj%]/g; +exports.format = function(f) { + if (!isString(f)) { + var objects = []; + for (var i = 0; i < arguments.length; i++) { + objects.push(inspect(arguments[i])); + } + return objects.join(' '); + } + + var i = 1; + var args = arguments; + var len = args.length; + var str = String(f).replace(formatRegExp, function(x) { + if (x === '%%') return '%'; + if (i >= len) return x; + switch (x) { + case '%s': return String(args[i++]); + case '%d': return Number(args[i++]); + case '%j': + try { + return JSON.stringify(args[i++]); + } catch (_) { + return '[Circular]'; + } + default: + return x; + } + }); + for (var x = args[i]; i < len; x = args[++i]) { + if (isNull(x) || !isObject(x)) { + str += ' ' + x; + } else { + str += ' ' + inspect(x); + } + } + return str; +}; + + +// Mark that a method should not be used. +// Returns a modified function which warns once by default. +// If --no-deprecation is set, then it is a no-op. +exports.deprecate = function(fn, msg) { + // Allow for deprecating things in the process of starting up. 
+ if (isUndefined(global.process)) { + return function() { + return exports.deprecate(fn, msg).apply(this, arguments); + }; + } + + if (process.noDeprecation === true) { + return fn; + } + + var warned = false; + function deprecated() { + if (!warned) { + if (process.throwDeprecation) { + throw new Error(msg); + } else if (process.traceDeprecation) { + console.trace(msg); + } else { + console.error(msg); + } + warned = true; + } + return fn.apply(this, arguments); + } + + return deprecated; +}; + + +var debugs = {}; +var debugEnviron; +exports.debuglog = function(set) { + if (isUndefined(debugEnviron)) + debugEnviron = process.env.NODE_DEBUG || ''; + set = set.toUpperCase(); + if (!debugs[set]) { + if (new RegExp('\\b' + set + '\\b', 'i').test(debugEnviron)) { + var pid = process.pid; + debugs[set] = function() { + var msg = exports.format.apply(exports, arguments); + console.error('%s %d: %s', set, pid, msg); + }; + } else { + debugs[set] = function() {}; + } + } + return debugs[set]; +}; + + +/** + * Echoes the value of a value. Tries to print the value out + * in the best way possible given the different types. + * + * @param {Object} obj The object to print out. + * @param {Object} opts Optional options object that alters the output. + */ +/* legacy: obj, showHidden, depth, colors*/ +function inspect(obj, opts) { + // default options + var ctx = { + seen: [], + stylize: stylizeNoColor + }; + // legacy... + if (arguments.length >= 3) ctx.depth = arguments[2]; + if (arguments.length >= 4) ctx.colors = arguments[3]; + if (isBoolean(opts)) { + // legacy...
+ ctx.showHidden = opts; + } else if (opts) { + // got an "options" object + exports._extend(ctx, opts); + } + // set default options + if (isUndefined(ctx.showHidden)) ctx.showHidden = false; + if (isUndefined(ctx.depth)) ctx.depth = 2; + if (isUndefined(ctx.colors)) ctx.colors = false; + if (isUndefined(ctx.customInspect)) ctx.customInspect = true; + if (ctx.colors) ctx.stylize = stylizeWithColor; + return formatValue(ctx, obj, ctx.depth); +} +exports.inspect = inspect; + + +// http://en.wikipedia.org/wiki/ANSI_escape_code#graphics +inspect.colors = { + 'bold' : [1, 22], + 'italic' : [3, 23], + 'underline' : [4, 24], + 'inverse' : [7, 27], + 'white' : [37, 39], + 'grey' : [90, 39], + 'black' : [30, 39], + 'blue' : [34, 39], + 'cyan' : [36, 39], + 'green' : [32, 39], + 'magenta' : [35, 39], + 'red' : [31, 39], + 'yellow' : [33, 39] +}; + +// Don't use 'blue'; not visible on cmd.exe +inspect.styles = { + 'special': 'cyan', + 'number': 'yellow', + 'boolean': 'yellow', + 'undefined': 'grey', + 'null': 'bold', + 'string': 'green', + 'date': 'magenta', + // "name": intentionally not styling + 'regexp': 'red' +}; + + +function stylizeWithColor(str, styleType) { + var style = inspect.styles[styleType]; + + if (style) { + return '\u001b[' + inspect.colors[style][0] + 'm' + str + + '\u001b[' + inspect.colors[style][1] + 'm'; + } else { + return str; + } +} + + +function stylizeNoColor(str, styleType) { + return str; +} + + +function arrayToHash(array) { + var hash = {}; + + array.forEach(function(val, idx) { + hash[val] = true; + }); + + return hash; +} + + +function formatValue(ctx, value, recurseTimes) { + // Provide a hook for user-specified inspect functions. + // Check that value is an object with an inspect function on it + if (ctx.customInspect && + value && + isFunction(value.inspect) && + // Filter out the util module, its inspect function is special + value.inspect !== exports.inspect && + // Also filter out any prototype objects using the circular check. 
+ !(value.constructor && value.constructor.prototype === value)) { + var ret = value.inspect(recurseTimes, ctx); + if (!isString(ret)) { + ret = formatValue(ctx, ret, recurseTimes); + } + return ret; + } + + // Primitive types cannot have properties + var primitive = formatPrimitive(ctx, value); + if (primitive) { + return primitive; + } + + // Look up the keys of the object. + var keys = Object.keys(value); + var visibleKeys = arrayToHash(keys); + + if (ctx.showHidden) { + keys = Object.getOwnPropertyNames(value); + } + + // IE doesn't make error fields non-enumerable + // http://msdn.microsoft.com/en-us/library/ie/dww52sbt(v=vs.94).aspx + if (isError(value) + && (keys.indexOf('message') >= 0 || keys.indexOf('description') >= 0)) { + return formatError(value); + } + + // Some type of object without properties can be shortcutted. + if (keys.length === 0) { + if (isFunction(value)) { + var name = value.name ? ': ' + value.name : ''; + return ctx.stylize('[Function' + name + ']', 'special'); + } + if (isRegExp(value)) { + return ctx.stylize(RegExp.prototype.toString.call(value), 'regexp'); + } + if (isDate(value)) { + return ctx.stylize(Date.prototype.toString.call(value), 'date'); + } + if (isError(value)) { + return formatError(value); + } + } + + var base = '', array = false, braces = ['{', '}']; + + // Make Array say that they are Array + if (isArray(value)) { + array = true; + braces = ['[', ']']; + } + + // Make functions say that they are functions + if (isFunction(value)) { + var n = value.name ? 
': ' + value.name : ''; + base = ' [Function' + n + ']'; + } + + // Make RegExps say that they are RegExps + if (isRegExp(value)) { + base = ' ' + RegExp.prototype.toString.call(value); + } + + // Make dates with properties first say the date + if (isDate(value)) { + base = ' ' + Date.prototype.toUTCString.call(value); + } + + // Make error with message first say the error + if (isError(value)) { + base = ' ' + formatError(value); + } + + if (keys.length === 0 && (!array || value.length == 0)) { + return braces[0] + base + braces[1]; + } + + if (recurseTimes < 0) { + if (isRegExp(value)) { + return ctx.stylize(RegExp.prototype.toString.call(value), 'regexp'); + } else { + return ctx.stylize('[Object]', 'special'); + } + } + + ctx.seen.push(value); + + var output; + if (array) { + output = formatArray(ctx, value, recurseTimes, visibleKeys, keys); + } else { + output = keys.map(function(key) { + return formatProperty(ctx, value, recurseTimes, visibleKeys, key, array); + }); + } + + ctx.seen.pop(); + + return reduceToSingleString(output, base, braces); +} + + +function formatPrimitive(ctx, value) { + if (isUndefined(value)) + return ctx.stylize('undefined', 'undefined'); + if (isString(value)) { + var simple = '\'' + JSON.stringify(value).replace(/^"|"$/g, '') + .replace(/'/g, "\\'") + .replace(/\\"/g, '"') + '\''; + return ctx.stylize(simple, 'string'); + } + if (isNumber(value)) + return ctx.stylize('' + value, 'number'); + if (isBoolean(value)) + return ctx.stylize('' + value, 'boolean'); + // For some reason typeof null is "object", so special case here. 
+ if (isNull(value)) + return ctx.stylize('null', 'null'); +} + + +function formatError(value) { + return '[' + Error.prototype.toString.call(value) + ']'; +} + + +function formatArray(ctx, value, recurseTimes, visibleKeys, keys) { + var output = []; + for (var i = 0, l = value.length; i < l; ++i) { + if (hasOwnProperty(value, String(i))) { + output.push(formatProperty(ctx, value, recurseTimes, visibleKeys, + String(i), true)); + } else { + output.push(''); + } + } + keys.forEach(function(key) { + if (!key.match(/^\d+$/)) { + output.push(formatProperty(ctx, value, recurseTimes, visibleKeys, + key, true)); + } + }); + return output; +} + + +function formatProperty(ctx, value, recurseTimes, visibleKeys, key, array) { + var name, str, desc; + desc = Object.getOwnPropertyDescriptor(value, key) || { value: value[key] }; + if (desc.get) { + if (desc.set) { + str = ctx.stylize('[Getter/Setter]', 'special'); + } else { + str = ctx.stylize('[Getter]', 'special'); + } + } else { + if (desc.set) { + str = ctx.stylize('[Setter]', 'special'); + } + } + if (!hasOwnProperty(visibleKeys, key)) { + name = '[' + key + ']'; + } + if (!str) { + if (ctx.seen.indexOf(desc.value) < 0) { + if (isNull(recurseTimes)) { + str = formatValue(ctx, desc.value, null); + } else { + str = formatValue(ctx, desc.value, recurseTimes - 1); + } + if (str.indexOf('\n') > -1) { + if (array) { + str = str.split('\n').map(function(line) { + return ' ' + line; + }).join('\n').substr(2); + } else { + str = '\n' + str.split('\n').map(function(line) { + return ' ' + line; + }).join('\n'); + } + } + } else { + str = ctx.stylize('[Circular]', 'special'); + } + } + if (isUndefined(name)) { + if (array && key.match(/^\d+$/)) { + return str; + } + name = JSON.stringify('' + key); + if (name.match(/^"([a-zA-Z_][a-zA-Z_0-9]*)"$/)) { + name = name.substr(1, name.length - 2); + name = ctx.stylize(name, 'name'); + } else { + name = name.replace(/'/g, "\\'") + .replace(/\\"/g, '"') + .replace(/(^"|"$)/g, "'"); + name = 
ctx.stylize(name, 'string'); + } + } + + return name + ': ' + str; +} + + +function reduceToSingleString(output, base, braces) { + var numLinesEst = 0; + var length = output.reduce(function(prev, cur) { + numLinesEst++; + if (cur.indexOf('\n') >= 0) numLinesEst++; + return prev + cur.replace(/\u001b\[\d\d?m/g, '').length + 1; + }, 0); + + if (length > 60) { + return braces[0] + + (base === '' ? '' : base + '\n ') + + ' ' + + output.join(',\n ') + + ' ' + + braces[1]; + } + + return braces[0] + base + ' ' + output.join(', ') + ' ' + braces[1]; +} + + +// NOTE: These type checking functions intentionally don't use `instanceof` +// because it is fragile and can be easily faked with `Object.create()`. +function isArray(ar) { + return Array.isArray(ar); +} +exports.isArray = isArray; + +function isBoolean(arg) { + return typeof arg === 'boolean'; +} +exports.isBoolean = isBoolean; + +function isNull(arg) { + return arg === null; +} +exports.isNull = isNull; + +function isNullOrUndefined(arg) { + return arg == null; +} +exports.isNullOrUndefined = isNullOrUndefined; + +function isNumber(arg) { + return typeof arg === 'number'; +} +exports.isNumber = isNumber; + +function isString(arg) { + return typeof arg === 'string'; +} +exports.isString = isString; + +function isSymbol(arg) { + return typeof arg === 'symbol'; +} +exports.isSymbol = isSymbol; + +function isUndefined(arg) { + return arg === void 0; +} +exports.isUndefined = isUndefined; + +function isRegExp(re) { + return isObject(re) && objectToString(re) === '[object RegExp]'; +} +exports.isRegExp = isRegExp; + +function isObject(arg) { + return typeof arg === 'object' && arg !== null; +} +exports.isObject = isObject; + +function isDate(d) { + return isObject(d) && objectToString(d) === '[object Date]'; +} +exports.isDate = isDate; + +function isError(e) { + return isObject(e) && + (objectToString(e) === '[object Error]' || e instanceof Error); +} +exports.isError = isError; + +function isFunction(arg) { + return 
typeof arg === 'function'; +} +exports.isFunction = isFunction; + +function isPrimitive(arg) { + return arg === null || + typeof arg === 'boolean' || + typeof arg === 'number' || + typeof arg === 'string' || + typeof arg === 'symbol' || // ES6 symbol + typeof arg === 'undefined'; +} +exports.isPrimitive = isPrimitive; + +exports.isBuffer = require('./support/isBuffer'); + +function objectToString(o) { + return Object.prototype.toString.call(o); +} + + +function pad(n) { + return n < 10 ? '0' + n.toString(10) : n.toString(10); +} + + +var months = ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', + 'Oct', 'Nov', 'Dec']; + +// 26 Feb 16:19:34 +function timestamp() { + var d = new Date(); + var time = [pad(d.getHours()), + pad(d.getMinutes()), + pad(d.getSeconds())].join(':'); + return [d.getDate(), months[d.getMonth()], time].join(' '); +} + + +// log is just a thin wrapper to console.log that prepends a timestamp +exports.log = function() { + console.log('%s - %s', timestamp(), exports.format.apply(exports, arguments)); +}; + + +/** + * Inherit the prototype methods from one constructor into another. + * + * The Function.prototype.inherits from lang.js rewritten as a standalone + * function (not on Function.prototype). NOTE: If this file is to be loaded + * during bootstrapping this function needs to be rewritten using some native + * functions as prototype setup using normal JavaScript does not work as + * expected during bootstrapping (see mirror.js in r114903). + * + * @param {function} ctor Constructor function which needs to inherit the + * prototype. + * @param {function} superCtor Constructor function to inherit prototype from. 
+ */ +exports.inherits = require('inherits'); + +exports._extend = function(origin, add) { + // Don't do anything if add isn't an object + if (!add || !isObject(add)) return origin; + + var keys = Object.keys(add); + var i = keys.length; + while (i--) { + origin[keys[i]] = add[keys[i]]; + } + return origin; +}; + +function hasOwnProperty(obj, prop) { + return Object.prototype.hasOwnProperty.call(obj, prop); +} + +}).call(this,require('_process'),typeof global !== "undefined" ? global : typeof self !== "undefined" ? self : typeof window !== "undefined" ? window : {}) +},{"./support/isBuffer":27,"_process":24,"inherits":26}],29:[function(require,module,exports){ +// Returns a wrapper function that returns a wrapped callback +// The wrapper function should do some stuff, and return a +// presumably different callback function. +// This makes sure that own properties are retained, so that +// decorations and such are not lost along the way. +module.exports = wrappy +function wrappy (fn, cb) { + if (fn && cb) return wrappy(fn)(cb) + + if (typeof fn !== 'function') + throw new TypeError('need wrapper function') + + Object.keys(fn).forEach(function (k) { + wrapper[k] = fn[k] + }) + + return wrapper + + function wrapper() { + var args = new Array(arguments.length) + for (var i = 0; i < args.length; i++) { + args[i] = arguments[i] + } + var ret = fn.apply(this, args) + var cb = args[args.length-1] + if (typeof ret === 'function' && ret !== cb) { + Object.keys(cb).forEach(function (k) { + ret[k] = cb[k] + }) + } + return ret + } +} + +},{}]},{},[7])(7) +}); \ No newline at end of file diff --git a/assets/javascripts/workers/search.5bf1dace.min.js b/assets/javascripts/workers/search.5bf1dace.min.js new file mode 100644 index 000000000..5b80d43c0 --- /dev/null +++ b/assets/javascripts/workers/search.5bf1dace.min.js @@ -0,0 +1,48 @@ +"use strict";(()=>{var ge=Object.create;var 
W=Object.defineProperty,ye=Object.defineProperties,me=Object.getOwnPropertyDescriptor,ve=Object.getOwnPropertyDescriptors,xe=Object.getOwnPropertyNames,G=Object.getOwnPropertySymbols,Se=Object.getPrototypeOf,X=Object.prototype.hasOwnProperty,Qe=Object.prototype.propertyIsEnumerable;var J=(t,e,r)=>e in t?W(t,e,{enumerable:!0,configurable:!0,writable:!0,value:r}):t[e]=r,M=(t,e)=>{for(var r in e||(e={}))X.call(e,r)&&J(t,r,e[r]);if(G)for(var r of G(e))Qe.call(e,r)&&J(t,r,e[r]);return t},Z=(t,e)=>ye(t,ve(e));var K=(t,e)=>()=>(e||t((e={exports:{}}).exports,e),e.exports);var be=(t,e,r,n)=>{if(e&&typeof e=="object"||typeof e=="function")for(let i of xe(e))!X.call(t,i)&&i!==r&&W(t,i,{get:()=>e[i],enumerable:!(n=me(e,i))||n.enumerable});return t};var H=(t,e,r)=>(r=t!=null?ge(Se(t)):{},be(e||!t||!t.__esModule?W(r,"default",{value:t,enumerable:!0}):r,t));var z=(t,e,r)=>new Promise((n,i)=>{var s=u=>{try{a(r.next(u))}catch(c){i(c)}},o=u=>{try{a(r.throw(u))}catch(c){i(c)}},a=u=>u.done?n(u.value):Promise.resolve(u.value).then(s,o);a((r=r.apply(t,e)).next())});var re=K((ee,te)=>{/** + * lunr - http://lunrjs.com - A bit like Solr, but much smaller and not as bright - 2.3.9 + * Copyright (C) 2020 Oliver Nightingale + * @license MIT + */(function(){var t=function(e){var r=new t.Builder;return r.pipeline.add(t.trimmer,t.stopWordFilter,t.stemmer),r.searchPipeline.add(t.stemmer),e.call(r,r),r.build()};t.version="2.3.9";/*! + * lunr.utils + * Copyright (C) 2020 Oliver Nightingale + */t.utils={},t.utils.warn=function(e){return function(r){e.console&&console.warn&&console.warn(r)}}(this),t.utils.asString=function(e){return e==null?"":e.toString()},t.utils.clone=function(e){if(e==null)return e;for(var r=Object.create(null),n=Object.keys(e),i=0;i0){var h=t.utils.clone(r)||{};h.position=[a,c],h.index=s.length,s.push(new t.Token(n.slice(a,o),h))}a=o+1}}return s},t.tokenizer.separator=/[\s\-]+/;/*! 
+ * lunr.Pipeline + * Copyright (C) 2020 Oliver Nightingale + */t.Pipeline=function(){this._stack=[]},t.Pipeline.registeredFunctions=Object.create(null),t.Pipeline.registerFunction=function(e,r){r in this.registeredFunctions&&t.utils.warn("Overwriting existing registered function: "+r),e.label=r,t.Pipeline.registeredFunctions[e.label]=e},t.Pipeline.warnIfFunctionNotRegistered=function(e){var r=e.label&&e.label in this.registeredFunctions;r||t.utils.warn(`Function is not registered with pipeline. This may cause problems when serialising the index. +`,e)},t.Pipeline.load=function(e){var r=new t.Pipeline;return e.forEach(function(n){var i=t.Pipeline.registeredFunctions[n];if(i)r.add(i);else throw new Error("Cannot load unregistered function: "+n)}),r},t.Pipeline.prototype.add=function(){var e=Array.prototype.slice.call(arguments);e.forEach(function(r){t.Pipeline.warnIfFunctionNotRegistered(r),this._stack.push(r)},this)},t.Pipeline.prototype.after=function(e,r){t.Pipeline.warnIfFunctionNotRegistered(r);var n=this._stack.indexOf(e);if(n==-1)throw new Error("Cannot find existingFn");n=n+1,this._stack.splice(n,0,r)},t.Pipeline.prototype.before=function(e,r){t.Pipeline.warnIfFunctionNotRegistered(r);var n=this._stack.indexOf(e);if(n==-1)throw new Error("Cannot find existingFn");this._stack.splice(n,0,r)},t.Pipeline.prototype.remove=function(e){var r=this._stack.indexOf(e);r!=-1&&this._stack.splice(r,1)},t.Pipeline.prototype.run=function(e){for(var r=this._stack.length,n=0;n1&&(oe&&(n=s),o!=e);)i=n-r,s=r+Math.floor(i/2),o=this.elements[s*2];if(o==e||o>e)return s*2;if(ou?h+=2:a==u&&(r+=n[c+1]*i[h+1],c+=2,h+=2);return r},t.Vector.prototype.similarity=function(e){return this.dot(e)/this.magnitude()||0},t.Vector.prototype.toArray=function(){for(var e=new Array(this.elements.length/2),r=1,n=0;r0){var o=s.str.charAt(0),a;o in s.node.edges?a=s.node.edges[o]:(a=new 
t.TokenSet,s.node.edges[o]=a),s.str.length==1&&(a.final=!0),i.push({node:a,editsRemaining:s.editsRemaining,str:s.str.slice(1)})}if(s.editsRemaining!=0){if("*"in s.node.edges)var u=s.node.edges["*"];else{var u=new t.TokenSet;s.node.edges["*"]=u}if(s.str.length==0&&(u.final=!0),i.push({node:u,editsRemaining:s.editsRemaining-1,str:s.str}),s.str.length>1&&i.push({node:s.node,editsRemaining:s.editsRemaining-1,str:s.str.slice(1)}),s.str.length==1&&(s.node.final=!0),s.str.length>=1){if("*"in s.node.edges)var c=s.node.edges["*"];else{var c=new t.TokenSet;s.node.edges["*"]=c}s.str.length==1&&(c.final=!0),i.push({node:c,editsRemaining:s.editsRemaining-1,str:s.str.slice(1)})}if(s.str.length>1){var h=s.str.charAt(0),y=s.str.charAt(1),g;y in s.node.edges?g=s.node.edges[y]:(g=new t.TokenSet,s.node.edges[y]=g),s.str.length==1&&(g.final=!0),i.push({node:g,editsRemaining:s.editsRemaining-1,str:h+s.str.slice(2)})}}}return n},t.TokenSet.fromString=function(e){for(var r=new t.TokenSet,n=r,i=0,s=e.length;i=e;r--){var n=this.uncheckedNodes[r],i=n.child.toString();i in this.minimizedNodes?n.parent.edges[n.char]=this.minimizedNodes[i]:(n.child._str=i,this.minimizedNodes[i]=n.child),this.uncheckedNodes.pop()}};/*! 
+ * lunr.Index + * Copyright (C) 2020 Oliver Nightingale + */t.Index=function(e){this.invertedIndex=e.invertedIndex,this.fieldVectors=e.fieldVectors,this.tokenSet=e.tokenSet,this.fields=e.fields,this.pipeline=e.pipeline},t.Index.prototype.search=function(e){return this.query(function(r){var n=new t.QueryParser(e,r);n.parse()})},t.Index.prototype.query=function(e){for(var r=new t.Query(this.fields),n=Object.create(null),i=Object.create(null),s=Object.create(null),o=Object.create(null),a=Object.create(null),u=0;u1?this._b=1:this._b=e},t.Builder.prototype.k1=function(e){this._k1=e},t.Builder.prototype.add=function(e,r){var n=e[this._ref],i=Object.keys(this._fields);this._documents[n]=r||{},this.documentCount+=1;for(var s=0;s=this.length)return t.QueryLexer.EOS;var e=this.str.charAt(this.pos);return this.pos+=1,e},t.QueryLexer.prototype.width=function(){return this.pos-this.start},t.QueryLexer.prototype.ignore=function(){this.start==this.pos&&(this.pos+=1),this.start=this.pos},t.QueryLexer.prototype.backup=function(){this.pos-=1},t.QueryLexer.prototype.acceptDigitRun=function(){var e,r;do e=this.next(),r=e.charCodeAt(0);while(r>47&&r<58);e!=t.QueryLexer.EOS&&this.backup()},t.QueryLexer.prototype.more=function(){return this.pos1&&(e.backup(),e.emit(t.QueryLexer.TERM)),e.ignore(),e.more())return t.QueryLexer.lexText},t.QueryLexer.lexEditDistance=function(e){return e.ignore(),e.acceptDigitRun(),e.emit(t.QueryLexer.EDIT_DISTANCE),t.QueryLexer.lexText},t.QueryLexer.lexBoost=function(e){return e.ignore(),e.acceptDigitRun(),e.emit(t.QueryLexer.BOOST),t.QueryLexer.lexText},t.QueryLexer.lexEOS=function(e){e.width()>0&&e.emit(t.QueryLexer.TERM)},t.QueryLexer.termSeparator=t.tokenizer.separator,t.QueryLexer.lexText=function(e){for(;;){var r=e.next();if(r==t.QueryLexer.EOS)return t.QueryLexer.lexEOS;if(r.charCodeAt(0)==92){e.escapeCharacter();continue}if(r==":")return t.QueryLexer.lexField;if(r=="~")return 
e.backup(),e.width()>0&&e.emit(t.QueryLexer.TERM),t.QueryLexer.lexEditDistance;if(r=="^")return e.backup(),e.width()>0&&e.emit(t.QueryLexer.TERM),t.QueryLexer.lexBoost;if(r=="+"&&e.width()===1||r=="-"&&e.width()===1)return e.emit(t.QueryLexer.PRESENCE),t.QueryLexer.lexText;if(r.match(t.QueryLexer.termSeparator))return t.QueryLexer.lexTerm}},t.QueryParser=function(e,r){this.lexer=new t.QueryLexer(e),this.query=r,this.currentClause={},this.lexemeIdx=0},t.QueryParser.prototype.parse=function(){this.lexer.run(),this.lexemes=this.lexer.lexemes;for(var e=t.QueryParser.parseClause;e;)e=e(this);return this.query},t.QueryParser.prototype.peekLexeme=function(){return this.lexemes[this.lexemeIdx]},t.QueryParser.prototype.consumeLexeme=function(){var e=this.peekLexeme();return this.lexemeIdx+=1,e},t.QueryParser.prototype.nextClause=function(){var e=this.currentClause;this.query.clause(e),this.currentClause={}},t.QueryParser.parseClause=function(e){var r=e.peekLexeme();if(r!=null)switch(r.type){case t.QueryLexer.PRESENCE:return t.QueryParser.parsePresence;case t.QueryLexer.FIELD:return t.QueryParser.parseField;case t.QueryLexer.TERM:return t.QueryParser.parseTerm;default:var n="expected either a field or a term, found "+r.type;throw r.str.length>=1&&(n+=" with value '"+r.str+"'"),new t.QueryParseError(n,r.start,r.end)}},t.QueryParser.parsePresence=function(e){var r=e.consumeLexeme();if(r!=null){switch(r.str){case"-":e.currentClause.presence=t.Query.presence.PROHIBITED;break;case"+":e.currentClause.presence=t.Query.presence.REQUIRED;break;default:var n="unrecognised presence operator'"+r.str+"'";throw new t.QueryParseError(n,r.start,r.end)}var i=e.peekLexeme();if(i==null){var n="expecting term or field, found nothing";throw new t.QueryParseError(n,r.start,r.end)}switch(i.type){case t.QueryLexer.FIELD:return t.QueryParser.parseField;case t.QueryLexer.TERM:return t.QueryParser.parseTerm;default:var n="expecting term or field, found '"+i.type+"'";throw new 
t.QueryParseError(n,i.start,i.end)}}},t.QueryParser.parseField=function(e){var r=e.consumeLexeme();if(r!=null){if(e.query.allFields.indexOf(r.str)==-1){var n=e.query.allFields.map(function(o){return"'"+o+"'"}).join(", "),i="unrecognised field '"+r.str+"', possible fields: "+n;throw new t.QueryParseError(i,r.start,r.end)}e.currentClause.fields=[r.str];var s=e.peekLexeme();if(s==null){var i="expecting term, found nothing";throw new t.QueryParseError(i,r.start,r.end)}switch(s.type){case t.QueryLexer.TERM:return t.QueryParser.parseTerm;default:var i="expecting term, found '"+s.type+"'";throw new t.QueryParseError(i,s.start,s.end)}}},t.QueryParser.parseTerm=function(e){var r=e.consumeLexeme();if(r!=null){e.currentClause.term=r.str.toLowerCase(),r.str.indexOf("*")!=-1&&(e.currentClause.usePipeline=!1);var n=e.peekLexeme();if(n==null){e.nextClause();return}switch(n.type){case t.QueryLexer.TERM:return e.nextClause(),t.QueryParser.parseTerm;case t.QueryLexer.FIELD:return e.nextClause(),t.QueryParser.parseField;case t.QueryLexer.EDIT_DISTANCE:return t.QueryParser.parseEditDistance;case t.QueryLexer.BOOST:return t.QueryParser.parseBoost;case t.QueryLexer.PRESENCE:return e.nextClause(),t.QueryParser.parsePresence;default:var i="Unexpected lexeme type '"+n.type+"'";throw new t.QueryParseError(i,n.start,n.end)}}},t.QueryParser.parseEditDistance=function(e){var r=e.consumeLexeme();if(r!=null){var n=parseInt(r.str,10);if(isNaN(n)){var i="edit distance must be numeric";throw new t.QueryParseError(i,r.start,r.end)}e.currentClause.editDistance=n;var s=e.peekLexeme();if(s==null){e.nextClause();return}switch(s.type){case t.QueryLexer.TERM:return e.nextClause(),t.QueryParser.parseTerm;case t.QueryLexer.FIELD:return e.nextClause(),t.QueryParser.parseField;case t.QueryLexer.EDIT_DISTANCE:return t.QueryParser.parseEditDistance;case t.QueryLexer.BOOST:return t.QueryParser.parseBoost;case t.QueryLexer.PRESENCE:return e.nextClause(),t.QueryParser.parsePresence;default:var i="Unexpected lexeme 
type '"+s.type+"'";throw new t.QueryParseError(i,s.start,s.end)}}},t.QueryParser.parseBoost=function(e){var r=e.consumeLexeme();if(r!=null){var n=parseInt(r.str,10);if(isNaN(n)){var i="boost must be numeric";throw new t.QueryParseError(i,r.start,r.end)}e.currentClause.boost=n;var s=e.peekLexeme();if(s==null){e.nextClause();return}switch(s.type){case t.QueryLexer.TERM:return e.nextClause(),t.QueryParser.parseTerm;case t.QueryLexer.FIELD:return e.nextClause(),t.QueryParser.parseField;case t.QueryLexer.EDIT_DISTANCE:return t.QueryParser.parseEditDistance;case t.QueryLexer.BOOST:return t.QueryParser.parseBoost;case t.QueryLexer.PRESENCE:return e.nextClause(),t.QueryParser.parsePresence;default:var i="Unexpected lexeme type '"+s.type+"'";throw new t.QueryParseError(i,s.start,s.end)}}},function(e,r){typeof define=="function"&&define.amd?define(r):typeof ee=="object"?te.exports=r():e.lunr=r()}(this,function(){return t})})()});var q=K((Re,ne)=>{"use strict";/*! + * escape-html + * Copyright(c) 2012-2013 TJ Holowaychuk + * Copyright(c) 2015 Andreas Lubbe + * Copyright(c) 2015 Tiancheng "Timothy" Gu + * MIT Licensed + */var Le=/["'&<>]/;ne.exports=we;function we(t){var e=""+t,r=Le.exec(e);if(!r)return e;var n,i="",s=0,o=0;for(s=r.index;s=0;r--){let n=t[r];typeof n!="object"?n=document.createTextNode(n):n.parentNode&&n.parentNode.removeChild(n),r?e.insertBefore(this.previousSibling,n):e.replaceChild(n,this)}}}));var ie=H(q());function se(t){let e=new Map,r=new Set;for(let n of t){let[i,s]=n.location.split("#"),o=n.location,a=n.title,u=n.tags,c=(0,ie.default)(n.text).replace(/\s+(?=[,.:;!?])/g,"").replace(/\s+/g," ");if(s){let h=e.get(i);r.has(h)?e.set(o,{location:o,title:a,text:c,parent:h}):(h.title=n.title,h.text=c,r.add(h))}else e.set(o,M({location:o,title:a,text:c},u&&{tags:u}))}return e}var oe=H(q());function ae(t,e){let r=new RegExp(t.separator,"img"),n=(i,s,o)=>`${s}${o}`;return i=>{i=i.replace(/[\s*+\-:~^]+/g," ").trim();let s=new 
RegExp(`(^|${t.separator})(${i.replace(/[|\\{}()[\]^$+*?.-]/g,"\\$&").replace(r,"|")})`,"img");return o=>(e?(0,oe.default)(o):o).replace(s,n).replace(/<\/mark>(\s+)]*>/img,"$1")}}function ue(t){let e=new lunr.Query(["title","text"]);return new lunr.QueryParser(t,e).parse(),e.clauses}function ce(t,e){var i;let r=new Set(t),n={};for(let s=0;s!n.has(i)))]}var U=class{constructor({config:e,docs:r,options:n}){this.options=n,this.documents=se(r),this.highlight=ae(e,!1),lunr.tokenizer.separator=new RegExp(e.separator),this.index=lunr(function(){e.lang.length===1&&e.lang[0]!=="en"?this.use(lunr[e.lang[0]]):e.lang.length>1&&this.use(lunr.multiLanguage(...e.lang));let i=Ee(["trimmer","stopWordFilter","stemmer"],n.pipeline);for(let s of e.lang.map(o=>o==="en"?lunr:lunr[o]))for(let o of i)this.pipeline.remove(s[o]),this.searchPipeline.remove(s[o]);this.ref("location"),this.field("title",{boost:1e3}),this.field("text"),this.field("tags",{boost:1e6,extractor:s=>{let{tags:o=[]}=s;return o.reduce((a,u)=>[...a,...lunr.tokenizer(u)],[])}});for(let s of r)this.add(s,{boost:s.boost})})}search(e){if(e)try{let r=this.highlight(e),n=ue(e).filter(o=>o.presence!==lunr.Query.presence.PROHIBITED),i=this.index.search(`${e}*`).reduce((o,{ref:a,score:u,matchData:c})=>{let h=this.documents.get(a);if(typeof h!="undefined"){let{location:y,title:g,text:b,tags:m,parent:Q}=h,p=ce(n,Object.keys(c.metadata)),d=+!Q+ +Object.values(p).every(w=>w);o.push(Z(M({location:y,title:r(g),text:r(b)},m&&{tags:m.map(r)}),{score:u*(1+d),terms:p}))}return o},[]).sort((o,a)=>a.score-o.score).reduce((o,a)=>{let u=this.documents.get(a.location);if(typeof u!="undefined"){let c="parent"in u?u.parent.location:u.location;o.set(c,[...o.get(c)||[],a])}return o},new Map),s;if(this.options.suggestions){let o=this.index.query(a=>{for(let u of n)a.term(u.term,{fields:["title"],presence:lunr.Query.presence.REQUIRED,wildcard:lunr.Query.wildcard.TRAILING})});s=o.length?Object.keys(o[0].matchData.metadata):[]}return 
M({items:[...i.values()]},typeof s!="undefined"&&{suggestions:s})}catch(r){console.warn(`Invalid query: ${e} \u2013 see https://bit.ly/2s3ChXG`)}return{items:[]}}};var Y;function ke(t){return z(this,null,function*(){let e="../lunr";if(typeof parent!="undefined"&&"IFrameWorker"in parent){let n=document.querySelector("script[src]"),[i]=n.src.split("/worker");e=e.replace("..",i)}let r=[];for(let n of t.lang){switch(n){case"ja":r.push(`${e}/tinyseg.js`);break;case"hi":case"th":r.push(`${e}/wordcut.js`);break}n!=="en"&&r.push(`${e}/min/lunr.${n}.min.js`)}t.lang.length>1&&r.push(`${e}/min/lunr.multi.min.js`),r.length&&(yield importScripts(`${e}/min/lunr.stemmer.support.min.js`,...r))})}function Te(t){return z(this,null,function*(){switch(t.type){case 0:return yield ke(t.data.config),Y=new U(t.data),{type:1};case 2:return{type:3,data:Y?Y.search(t.data):{items:[]}};default:throw new TypeError("Invalid message type")}})}self.lunr=le.default;addEventListener("message",t=>z(void 0,null,function*(){postMessage(yield Te(t.data))}));})(); +//# sourceMappingURL=search.5bf1dace.min.js.map + diff --git a/assets/javascripts/workers/search.5bf1dace.min.js.map b/assets/javascripts/workers/search.5bf1dace.min.js.map new file mode 100644 index 000000000..1df8be0ef --- /dev/null +++ b/assets/javascripts/workers/search.5bf1dace.min.js.map @@ -0,0 +1,8 @@ +{ + "version": 3, + "sources": ["node_modules/lunr/lunr.js", "node_modules/escape-html/index.js", "src/assets/javascripts/integrations/search/worker/main/index.ts", "src/assets/javascripts/polyfills/index.ts", "src/assets/javascripts/integrations/search/document/index.ts", "src/assets/javascripts/integrations/search/highlighter/index.ts", "src/assets/javascripts/integrations/search/query/_/index.ts", "src/assets/javascripts/integrations/search/_/index.ts"], + "sourceRoot": "../../../..", + "sourcesContent": ["/**\n * lunr - http://lunrjs.com - A bit like Solr, but much smaller and not as bright - 2.3.9\n * Copyright (C) 2020 Oliver 
Nightingale\n * @license MIT\n */\n\n;(function(){\n\n/**\n * A convenience function for configuring and constructing\n * a new lunr Index.\n *\n * A lunr.Builder instance is created and the pipeline setup\n * with a trimmer, stop word filter and stemmer.\n *\n * This builder object is yielded to the configuration function\n * that is passed as a parameter, allowing the list of fields\n * and other builder parameters to be customised.\n *\n * All documents _must_ be added within the passed config function.\n *\n * @example\n * var idx = lunr(function () {\n * this.field('title')\n * this.field('body')\n * this.ref('id')\n *\n * documents.forEach(function (doc) {\n * this.add(doc)\n * }, this)\n * })\n *\n * @see {@link lunr.Builder}\n * @see {@link lunr.Pipeline}\n * @see {@link lunr.trimmer}\n * @see {@link lunr.stopWordFilter}\n * @see {@link lunr.stemmer}\n * @namespace {function} lunr\n */\nvar lunr = function (config) {\n var builder = new lunr.Builder\n\n builder.pipeline.add(\n lunr.trimmer,\n lunr.stopWordFilter,\n lunr.stemmer\n )\n\n builder.searchPipeline.add(\n lunr.stemmer\n )\n\n config.call(builder, builder)\n return builder.build()\n}\n\nlunr.version = \"2.3.9\"\n/*!\n * lunr.utils\n * Copyright (C) 2020 Oliver Nightingale\n */\n\n/**\n * A namespace containing utils for the rest of the lunr library\n * @namespace lunr.utils\n */\nlunr.utils = {}\n\n/**\n * Print a warning message to the console.\n *\n * @param {String} message The message to be printed.\n * @memberOf lunr.utils\n * @function\n */\nlunr.utils.warn = (function (global) {\n /* eslint-disable no-console */\n return function (message) {\n if (global.console && console.warn) {\n console.warn(message)\n }\n }\n /* eslint-enable no-console */\n})(this)\n\n/**\n * Convert an object to a string.\n *\n * In the case of `null` and `undefined` the function returns\n * the empty string, in all other cases the result of calling\n * `toString` on the passed object is returned.\n *\n * @param {Any} 
obj The object to convert to a string.\n * @return {String} string representation of the passed object.\n * @memberOf lunr.utils\n */\nlunr.utils.asString = function (obj) {\n if (obj === void 0 || obj === null) {\n return \"\"\n } else {\n return obj.toString()\n }\n}\n\n/**\n * Clones an object.\n *\n * Will create a copy of an existing object such that any mutations\n * on the copy cannot affect the original.\n *\n * Only shallow objects are supported, passing a nested object to this\n * function will cause a TypeError.\n *\n * Objects with primitives, and arrays of primitives are supported.\n *\n * @param {Object} obj The object to clone.\n * @return {Object} a clone of the passed object.\n * @throws {TypeError} when a nested object is passed.\n * @memberOf Utils\n */\nlunr.utils.clone = function (obj) {\n if (obj === null || obj === undefined) {\n return obj\n }\n\n var clone = Object.create(null),\n keys = Object.keys(obj)\n\n for (var i = 0; i < keys.length; i++) {\n var key = keys[i],\n val = obj[key]\n\n if (Array.isArray(val)) {\n clone[key] = val.slice()\n continue\n }\n\n if (typeof val === 'string' ||\n typeof val === 'number' ||\n typeof val === 'boolean') {\n clone[key] = val\n continue\n }\n\n throw new TypeError(\"clone is not deep and does not support nested objects\")\n }\n\n return clone\n}\nlunr.FieldRef = function (docRef, fieldName, stringValue) {\n this.docRef = docRef\n this.fieldName = fieldName\n this._stringValue = stringValue\n}\n\nlunr.FieldRef.joiner = \"/\"\n\nlunr.FieldRef.fromString = function (s) {\n var n = s.indexOf(lunr.FieldRef.joiner)\n\n if (n === -1) {\n throw \"malformed field ref string\"\n }\n\n var fieldRef = s.slice(0, n),\n docRef = s.slice(n + 1)\n\n return new lunr.FieldRef (docRef, fieldRef, s)\n}\n\nlunr.FieldRef.prototype.toString = function () {\n if (this._stringValue == undefined) {\n this._stringValue = this.fieldName + lunr.FieldRef.joiner + this.docRef\n }\n\n return this._stringValue\n}\n/*!\n * lunr.Set\n 
* Copyright (C) 2020 Oliver Nightingale\n */\n\n/**\n * A lunr set.\n *\n * @constructor\n */\nlunr.Set = function (elements) {\n this.elements = Object.create(null)\n\n if (elements) {\n this.length = elements.length\n\n for (var i = 0; i < this.length; i++) {\n this.elements[elements[i]] = true\n }\n } else {\n this.length = 0\n }\n}\n\n/**\n * A complete set that contains all elements.\n *\n * @static\n * @readonly\n * @type {lunr.Set}\n */\nlunr.Set.complete = {\n intersect: function (other) {\n return other\n },\n\n union: function () {\n return this\n },\n\n contains: function () {\n return true\n }\n}\n\n/**\n * An empty set that contains no elements.\n *\n * @static\n * @readonly\n * @type {lunr.Set}\n */\nlunr.Set.empty = {\n intersect: function () {\n return this\n },\n\n union: function (other) {\n return other\n },\n\n contains: function () {\n return false\n }\n}\n\n/**\n * Returns true if this set contains the specified object.\n *\n * @param {object} object - Object whose presence in this set is to be tested.\n * @returns {boolean} - True if this set contains the specified object.\n */\nlunr.Set.prototype.contains = function (object) {\n return !!this.elements[object]\n}\n\n/**\n * Returns a new set containing only the elements that are present in both\n * this set and the specified set.\n *\n * @param {lunr.Set} other - set to intersect with this set.\n * @returns {lunr.Set} a new set that is the intersection of this and the specified set.\n */\n\nlunr.Set.prototype.intersect = function (other) {\n var a, b, elements, intersection = []\n\n if (other === lunr.Set.complete) {\n return this\n }\n\n if (other === lunr.Set.empty) {\n return other\n }\n\n if (this.length < other.length) {\n a = this\n b = other\n } else {\n a = other\n b = this\n }\n\n elements = Object.keys(a.elements)\n\n for (var i = 0; i < elements.length; i++) {\n var element = elements[i]\n if (element in b.elements) {\n intersection.push(element)\n }\n }\n\n return new lunr.Set 
(intersection)\n}\n\n/**\n * Returns a new set combining the elements of this and the specified set.\n *\n * @param {lunr.Set} other - set to union with this set.\n * @return {lunr.Set} a new set that is the union of this and the specified set.\n */\n\nlunr.Set.prototype.union = function (other) {\n if (other === lunr.Set.complete) {\n return lunr.Set.complete\n }\n\n if (other === lunr.Set.empty) {\n return this\n }\n\n return new lunr.Set(Object.keys(this.elements).concat(Object.keys(other.elements)))\n}\n/**\n * A function to calculate the inverse document frequency for\n * a posting. This is shared between the builder and the index\n *\n * @private\n * @param {object} posting - The posting for a given term\n * @param {number} documentCount - The total number of documents.\n */\nlunr.idf = function (posting, documentCount) {\n var documentsWithTerm = 0\n\n for (var fieldName in posting) {\n if (fieldName == '_index') continue // Ignore the term index, its not a field\n documentsWithTerm += Object.keys(posting[fieldName]).length\n }\n\n var x = (documentCount - documentsWithTerm + 0.5) / (documentsWithTerm + 0.5)\n\n return Math.log(1 + Math.abs(x))\n}\n\n/**\n * A token wraps a string representation of a token\n * as it is passed through the text processing pipeline.\n *\n * @constructor\n * @param {string} [str=''] - The string token being wrapped.\n * @param {object} [metadata={}] - Metadata associated with this token.\n */\nlunr.Token = function (str, metadata) {\n this.str = str || \"\"\n this.metadata = metadata || {}\n}\n\n/**\n * Returns the token string that is being wrapped by this object.\n *\n * @returns {string}\n */\nlunr.Token.prototype.toString = function () {\n return this.str\n}\n\n/**\n * A token update function is used when updating or optionally\n * when cloning a token.\n *\n * @callback lunr.Token~updateFunction\n * @param {string} str - The string representation of the token.\n * @param {Object} metadata - All metadata associated with this 
token.\n */\n\n/**\n * Applies the given function to the wrapped string token.\n *\n * @example\n * token.update(function (str, metadata) {\n * return str.toUpperCase()\n * })\n *\n * @param {lunr.Token~updateFunction} fn - A function to apply to the token string.\n * @returns {lunr.Token}\n */\nlunr.Token.prototype.update = function (fn) {\n this.str = fn(this.str, this.metadata)\n return this\n}\n\n/**\n * Creates a clone of this token. Optionally a function can be\n * applied to the cloned token.\n *\n * @param {lunr.Token~updateFunction} [fn] - An optional function to apply to the cloned token.\n * @returns {lunr.Token}\n */\nlunr.Token.prototype.clone = function (fn) {\n fn = fn || function (s) { return s }\n return new lunr.Token (fn(this.str, this.metadata), this.metadata)\n}\n/*!\n * lunr.tokenizer\n * Copyright (C) 2020 Oliver Nightingale\n */\n\n/**\n * A function for splitting a string into tokens ready to be inserted into\n * the search index. Uses `lunr.tokenizer.separator` to split strings, change\n * the value of this property to change how strings are split into tokens.\n *\n * This tokenizer will convert its parameter to a string by calling `toString` and\n * then will split this string on the character in `lunr.tokenizer.separator`.\n * Arrays will have their elements converted to strings and wrapped in a lunr.Token.\n *\n * Optional metadata can be passed to the tokenizer, this metadata will be cloned and\n * added as metadata to every token that is created from the object to be tokenized.\n *\n * @static\n * @param {?(string|object|object[])} obj - The object to convert into tokens\n * @param {?object} metadata - Optional metadata to associate with every token\n * @returns {lunr.Token[]}\n * @see {@link lunr.Pipeline}\n */\nlunr.tokenizer = function (obj, metadata) {\n if (obj == null || obj == undefined) {\n return []\n }\n\n if (Array.isArray(obj)) {\n return obj.map(function (t) {\n return new lunr.Token(\n 
lunr.utils.asString(t).toLowerCase(),\n lunr.utils.clone(metadata)\n )\n })\n }\n\n var str = obj.toString().toLowerCase(),\n len = str.length,\n tokens = []\n\n for (var sliceEnd = 0, sliceStart = 0; sliceEnd <= len; sliceEnd++) {\n var char = str.charAt(sliceEnd),\n sliceLength = sliceEnd - sliceStart\n\n if ((char.match(lunr.tokenizer.separator) || sliceEnd == len)) {\n\n if (sliceLength > 0) {\n var tokenMetadata = lunr.utils.clone(metadata) || {}\n tokenMetadata[\"position\"] = [sliceStart, sliceLength]\n tokenMetadata[\"index\"] = tokens.length\n\n tokens.push(\n new lunr.Token (\n str.slice(sliceStart, sliceEnd),\n tokenMetadata\n )\n )\n }\n\n sliceStart = sliceEnd + 1\n }\n\n }\n\n return tokens\n}\n\n/**\n * The separator used to split a string into tokens. Override this property to change the behaviour of\n * `lunr.tokenizer` behaviour when tokenizing strings. By default this splits on whitespace and hyphens.\n *\n * @static\n * @see lunr.tokenizer\n */\nlunr.tokenizer.separator = /[\\s\\-]+/\n/*!\n * lunr.Pipeline\n * Copyright (C) 2020 Oliver Nightingale\n */\n\n/**\n * lunr.Pipelines maintain an ordered list of functions to be applied to all\n * tokens in documents entering the search index and queries being ran against\n * the index.\n *\n * An instance of lunr.Index created with the lunr shortcut will contain a\n * pipeline with a stop word filter and an English language stemmer. Extra\n * functions can be added before or after either of these functions or these\n * default functions can be removed.\n *\n * When run the pipeline will call each function in turn, passing a token, the\n * index of that token in the original list of all tokens and finally a list of\n * all the original tokens.\n *\n * The output of functions in the pipeline will be passed to the next function\n * in the pipeline. 
To exclude a token from entering the index the function\n * should return undefined, the rest of the pipeline will not be called with\n * this token.\n *\n * For serialisation of pipelines to work, all functions used in an instance of\n * a pipeline should be registered with lunr.Pipeline. Registered functions can\n * then be loaded. If trying to load a serialised pipeline that uses functions\n * that are not registered an error will be thrown.\n *\n * If not planning on serialising the pipeline then registering pipeline functions\n * is not necessary.\n *\n * @constructor\n */\nlunr.Pipeline = function () {\n this._stack = []\n}\n\nlunr.Pipeline.registeredFunctions = Object.create(null)\n\n/**\n * A pipeline function maps lunr.Token to lunr.Token. A lunr.Token contains the token\n * string as well as all known metadata. A pipeline function can mutate the token string\n * or mutate (or add) metadata for a given token.\n *\n * A pipeline function can indicate that the passed token should be discarded by returning\n * null, undefined or an empty string. This token will not be passed to any downstream pipeline\n * functions and will not be added to the index.\n *\n * Multiple tokens can be returned by returning an array of tokens. 
Each token will be passed\n * to any downstream pipeline functions and all returned tokens will be added to the index.\n *\n * Any number of pipeline functions may be chained together using a lunr.Pipeline.\n *\n * @interface lunr.PipelineFunction\n * @param {lunr.Token} token - A token from the document being processed.\n * @param {number} i - The index of this token in the complete list of tokens for this document/field.\n * @param {lunr.Token[]} tokens - All tokens for this document/field.\n * @returns {(?lunr.Token|lunr.Token[])}\n */\n\n/**\n * Register a function with the pipeline.\n *\n * Functions that are used in the pipeline should be registered if the pipeline\n * needs to be serialised, or a serialised pipeline needs to be loaded.\n *\n * Registering a function does not add it to a pipeline, functions must still be\n * added to instances of the pipeline for them to be used when running a pipeline.\n *\n * @param {lunr.PipelineFunction} fn - The function to register.\n * @param {String} label - The label to register this function with\n */\nlunr.Pipeline.registerFunction = function (fn, label) {\n if (label in this.registeredFunctions) {\n lunr.utils.warn('Overwriting existing registered function: ' + label)\n }\n\n fn.label = label\n lunr.Pipeline.registeredFunctions[fn.label] = fn\n}\n\n/**\n * Warns if the function is not registered as a Pipeline function.\n *\n * @param {lunr.PipelineFunction} fn - The function to check for.\n * @private\n */\nlunr.Pipeline.warnIfFunctionNotRegistered = function (fn) {\n var isRegistered = fn.label && (fn.label in this.registeredFunctions)\n\n if (!isRegistered) {\n lunr.utils.warn('Function is not registered with pipeline. 
This may cause problems when serialising the index.\\n', fn)\n }\n}\n\n/**\n * Loads a previously serialised pipeline.\n *\n * All functions to be loaded must already be registered with lunr.Pipeline.\n * If any function from the serialised data has not been registered then an\n * error will be thrown.\n *\n * @param {Object} serialised - The serialised pipeline to load.\n * @returns {lunr.Pipeline}\n */\nlunr.Pipeline.load = function (serialised) {\n var pipeline = new lunr.Pipeline\n\n serialised.forEach(function (fnName) {\n var fn = lunr.Pipeline.registeredFunctions[fnName]\n\n if (fn) {\n pipeline.add(fn)\n } else {\n throw new Error('Cannot load unregistered function: ' + fnName)\n }\n })\n\n return pipeline\n}\n\n/**\n * Adds new functions to the end of the pipeline.\n *\n * Logs a warning if the function has not been registered.\n *\n * @param {lunr.PipelineFunction[]} functions - Any number of functions to add to the pipeline.\n */\nlunr.Pipeline.prototype.add = function () {\n var fns = Array.prototype.slice.call(arguments)\n\n fns.forEach(function (fn) {\n lunr.Pipeline.warnIfFunctionNotRegistered(fn)\n this._stack.push(fn)\n }, this)\n}\n\n/**\n * Adds a single function after a function that already exists in the\n * pipeline.\n *\n * Logs a warning if the function has not been registered.\n *\n * @param {lunr.PipelineFunction} existingFn - A function that already exists in the pipeline.\n * @param {lunr.PipelineFunction} newFn - The new function to add to the pipeline.\n */\nlunr.Pipeline.prototype.after = function (existingFn, newFn) {\n lunr.Pipeline.warnIfFunctionNotRegistered(newFn)\n\n var pos = this._stack.indexOf(existingFn)\n if (pos == -1) {\n throw new Error('Cannot find existingFn')\n }\n\n pos = pos + 1\n this._stack.splice(pos, 0, newFn)\n}\n\n/**\n * Adds a single function before a function that already exists in the\n * pipeline.\n *\n * Logs a warning if the function has not been registered.\n *\n * @param {lunr.PipelineFunction} 
existingFn - A function that already exists in the pipeline.\n * @param {lunr.PipelineFunction} newFn - The new function to add to the pipeline.\n */\nlunr.Pipeline.prototype.before = function (existingFn, newFn) {\n lunr.Pipeline.warnIfFunctionNotRegistered(newFn)\n\n var pos = this._stack.indexOf(existingFn)\n if (pos == -1) {\n throw new Error('Cannot find existingFn')\n }\n\n this._stack.splice(pos, 0, newFn)\n}\n\n/**\n * Removes a function from the pipeline.\n *\n * @param {lunr.PipelineFunction} fn The function to remove from the pipeline.\n */\nlunr.Pipeline.prototype.remove = function (fn) {\n var pos = this._stack.indexOf(fn)\n if (pos == -1) {\n return\n }\n\n this._stack.splice(pos, 1)\n}\n\n/**\n * Runs the current list of functions that make up the pipeline against the\n * passed tokens.\n *\n * @param {Array} tokens The tokens to run through the pipeline.\n * @returns {Array}\n */\nlunr.Pipeline.prototype.run = function (tokens) {\n var stackLength = this._stack.length\n\n for (var i = 0; i < stackLength; i++) {\n var fn = this._stack[i]\n var memo = []\n\n for (var j = 0; j < tokens.length; j++) {\n var result = fn(tokens[j], j, tokens)\n\n if (result === null || result === void 0 || result === '') continue\n\n if (Array.isArray(result)) {\n for (var k = 0; k < result.length; k++) {\n memo.push(result[k])\n }\n } else {\n memo.push(result)\n }\n }\n\n tokens = memo\n }\n\n return tokens\n}\n\n/**\n * Convenience method for passing a string through a pipeline and getting\n * strings out. 
This method takes care of wrapping the passed string in a\n * token and mapping the resulting tokens back to strings.\n *\n * @param {string} str - The string to pass through the pipeline.\n * @param {?object} metadata - Optional metadata to associate with the token\n * passed to the pipeline.\n * @returns {string[]}\n */\nlunr.Pipeline.prototype.runString = function (str, metadata) {\n var token = new lunr.Token (str, metadata)\n\n return this.run([token]).map(function (t) {\n return t.toString()\n })\n}\n\n/**\n * Resets the pipeline by removing any existing processors.\n *\n */\nlunr.Pipeline.prototype.reset = function () {\n this._stack = []\n}\n\n/**\n * Returns a representation of the pipeline ready for serialisation.\n *\n * Logs a warning if the function has not been registered.\n *\n * @returns {Array}\n */\nlunr.Pipeline.prototype.toJSON = function () {\n return this._stack.map(function (fn) {\n lunr.Pipeline.warnIfFunctionNotRegistered(fn)\n\n return fn.label\n })\n}\n/*!\n * lunr.Vector\n * Copyright (C) 2020 Oliver Nightingale\n */\n\n/**\n * A vector is used to construct the vector space of documents and queries. These\n * vectors support operations to determine the similarity between two documents or\n * a document and a query.\n *\n * Normally no parameters are required for initializing a vector, but in the case of\n * loading a previously dumped vector the raw elements can be provided to the constructor.\n *\n * For performance reasons vectors are implemented with a flat array, where an elements\n * index is immediately followed by its value. E.g. [index, value, index, value]. 
This\n * allows the underlying array to be as sparse as possible and still offer decent\n * performance when being used for vector calculations.\n *\n * @constructor\n * @param {Number[]} [elements] - The flat list of element index and element value pairs.\n */\nlunr.Vector = function (elements) {\n this._magnitude = 0\n this.elements = elements || []\n}\n\n\n/**\n * Calculates the position within the vector to insert a given index.\n *\n * This is used internally by insert and upsert. If there are duplicate indexes then\n * the position is returned as if the value for that index were to be updated, but it\n * is the caller's responsibility to check whether there is a duplicate at that index.\n *\n * @param {Number} index - The index to calculate the insert position for.\n * @returns {Number}\n */\nlunr.Vector.prototype.positionForIndex = function (index) {\n // For an empty vector the tuple can be inserted at the beginning\n if (this.elements.length == 0) {\n return 0\n }\n\n var start = 0,\n end = this.elements.length / 2,\n sliceLength = end - start,\n pivotPoint = Math.floor(sliceLength / 2),\n pivotIndex = this.elements[pivotPoint * 2]\n\n while (sliceLength > 1) {\n if (pivotIndex < index) {\n start = pivotPoint\n }\n\n if (pivotIndex > index) {\n end = pivotPoint\n }\n\n if (pivotIndex == index) {\n break\n }\n\n sliceLength = end - start\n pivotPoint = start + Math.floor(sliceLength / 2)\n pivotIndex = this.elements[pivotPoint * 2]\n }\n\n if (pivotIndex == index) {\n return pivotPoint * 2\n }\n\n if (pivotIndex > index) {\n return pivotPoint * 2\n }\n\n if (pivotIndex < index) {\n return (pivotPoint + 1) * 2\n }\n}\n\n/**\n * Inserts an element at an index within the vector.\n *\n * Does not allow duplicates, will throw an error if there is already an entry\n * for this index.\n *\n * @param {Number} insertIdx - The index at which the element should be inserted.\n * @param {Number} val - The value to be inserted into the vector.\n 
*/\nlunr.Vector.prototype.insert = function (insertIdx, val) {\n this.upsert(insertIdx, val, function () {\n throw \"duplicate index\"\n })\n}\n\n/**\n * Inserts or updates an existing index within the vector.\n *\n * @param {Number} insertIdx - The index at which the element should be inserted.\n * @param {Number} val - The value to be inserted into the vector.\n * @param {function} fn - A function that is called for updates, the existing value and the\n * requested value are passed as arguments\n */\nlunr.Vector.prototype.upsert = function (insertIdx, val, fn) {\n this._magnitude = 0\n var position = this.positionForIndex(insertIdx)\n\n if (this.elements[position] == insertIdx) {\n this.elements[position + 1] = fn(this.elements[position + 1], val)\n } else {\n this.elements.splice(position, 0, insertIdx, val)\n }\n}\n\n/**\n * Calculates the magnitude of this vector.\n *\n * @returns {Number}\n */\nlunr.Vector.prototype.magnitude = function () {\n if (this._magnitude) return this._magnitude\n\n var sumOfSquares = 0,\n elementsLength = this.elements.length\n\n for (var i = 1; i < elementsLength; i += 2) {\n var val = this.elements[i]\n sumOfSquares += val * val\n }\n\n return this._magnitude = Math.sqrt(sumOfSquares)\n}\n\n/**\n * Calculates the dot product of this vector and another vector.\n *\n * @param {lunr.Vector} otherVector - The vector to compute the dot product with.\n * @returns {Number}\n */\nlunr.Vector.prototype.dot = function (otherVector) {\n var dotProduct = 0,\n a = this.elements, b = otherVector.elements,\n aLen = a.length, bLen = b.length,\n aVal = 0, bVal = 0,\n i = 0, j = 0\n\n while (i < aLen && j < bLen) {\n aVal = a[i], bVal = b[j]\n if (aVal < bVal) {\n i += 2\n } else if (aVal > bVal) {\n j += 2\n } else if (aVal == bVal) {\n dotProduct += a[i + 1] * b[j + 1]\n i += 2\n j += 2\n }\n }\n\n return dotProduct\n}\n\n/**\n * Calculates the similarity between this vector and another vector.\n *\n * @param {lunr.Vector} otherVector - The other 
vector to calculate the\n * similarity with.\n * @returns {Number}\n */\nlunr.Vector.prototype.similarity = function (otherVector) {\n return this.dot(otherVector) / this.magnitude() || 0\n}\n\n/**\n * Converts the vector to an array of the elements within the vector.\n *\n * @returns {Number[]}\n */\nlunr.Vector.prototype.toArray = function () {\n var output = new Array (this.elements.length / 2)\n\n for (var i = 1, j = 0; i < this.elements.length; i += 2, j++) {\n output[j] = this.elements[i]\n }\n\n return output\n}\n\n/**\n * A JSON serializable representation of the vector.\n *\n * @returns {Number[]}\n */\nlunr.Vector.prototype.toJSON = function () {\n return this.elements\n}\n/* eslint-disable */\n/*!\n * lunr.stemmer\n * Copyright (C) 2020 Oliver Nightingale\n * Includes code from - http://tartarus.org/~martin/PorterStemmer/js.txt\n */\n\n/**\n * lunr.stemmer is an english language stemmer, this is a JavaScript\n * implementation of the PorterStemmer taken from http://tartarus.org/~martin\n *\n * @static\n * @implements {lunr.PipelineFunction}\n * @param {lunr.Token} token - The string to stem\n * @returns {lunr.Token}\n * @see {@link lunr.Pipeline}\n * @function\n */\nlunr.stemmer = (function(){\n var step2list = {\n \"ational\" : \"ate\",\n \"tional\" : \"tion\",\n \"enci\" : \"ence\",\n \"anci\" : \"ance\",\n \"izer\" : \"ize\",\n \"bli\" : \"ble\",\n \"alli\" : \"al\",\n \"entli\" : \"ent\",\n \"eli\" : \"e\",\n \"ousli\" : \"ous\",\n \"ization\" : \"ize\",\n \"ation\" : \"ate\",\n \"ator\" : \"ate\",\n \"alism\" : \"al\",\n \"iveness\" : \"ive\",\n \"fulness\" : \"ful\",\n \"ousness\" : \"ous\",\n \"aliti\" : \"al\",\n \"iviti\" : \"ive\",\n \"biliti\" : \"ble\",\n \"logi\" : \"log\"\n },\n\n step3list = {\n \"icate\" : \"ic\",\n \"ative\" : \"\",\n \"alize\" : \"al\",\n \"iciti\" : \"ic\",\n \"ical\" : \"ic\",\n \"ful\" : \"\",\n \"ness\" : \"\"\n },\n\n c = \"[^aeiou]\", // consonant\n v = \"[aeiouy]\", // vowel\n C = c + \"[^aeiouy]*\", // consonant 
sequence\n V = v + \"[aeiou]*\", // vowel sequence\n\n mgr0 = \"^(\" + C + \")?\" + V + C, // [C]VC... is m>0\n meq1 = \"^(\" + C + \")?\" + V + C + \"(\" + V + \")?$\", // [C]VC[V] is m=1\n mgr1 = \"^(\" + C + \")?\" + V + C + V + C, // [C]VCVC... is m>1\n s_v = \"^(\" + C + \")?\" + v; // vowel in stem\n\n var re_mgr0 = new RegExp(mgr0);\n var re_mgr1 = new RegExp(mgr1);\n var re_meq1 = new RegExp(meq1);\n var re_s_v = new RegExp(s_v);\n\n var re_1a = /^(.+?)(ss|i)es$/;\n var re2_1a = /^(.+?)([^s])s$/;\n var re_1b = /^(.+?)eed$/;\n var re2_1b = /^(.+?)(ed|ing)$/;\n var re_1b_2 = /.$/;\n var re2_1b_2 = /(at|bl|iz)$/;\n var re3_1b_2 = new RegExp(\"([^aeiouylsz])\\\\1$\");\n var re4_1b_2 = new RegExp(\"^\" + C + v + \"[^aeiouwxy]$\");\n\n var re_1c = /^(.+?[^aeiou])y$/;\n var re_2 = /^(.+?)(ational|tional|enci|anci|izer|bli|alli|entli|eli|ousli|ization|ation|ator|alism|iveness|fulness|ousness|aliti|iviti|biliti|logi)$/;\n\n var re_3 = /^(.+?)(icate|ative|alize|iciti|ical|ful|ness)$/;\n\n var re_4 = /^(.+?)(al|ance|ence|er|ic|able|ible|ant|ement|ment|ent|ou|ism|ate|iti|ous|ive|ize)$/;\n var re2_4 = /^(.+?)(s|t)(ion)$/;\n\n var re_5 = /^(.+?)e$/;\n var re_5_1 = /ll$/;\n var re3_5 = new RegExp(\"^\" + C + v + \"[^aeiouwxy]$\");\n\n var porterStemmer = function porterStemmer(w) {\n var stem,\n suffix,\n firstch,\n re,\n re2,\n re3,\n re4;\n\n if (w.length < 3) { return w; }\n\n firstch = w.substr(0,1);\n if (firstch == \"y\") {\n w = firstch.toUpperCase() + w.substr(1);\n }\n\n // Step 1a\n re = re_1a\n re2 = re2_1a;\n\n if (re.test(w)) { w = w.replace(re,\"$1$2\"); }\n else if (re2.test(w)) { w = w.replace(re2,\"$1$2\"); }\n\n // Step 1b\n re = re_1b;\n re2 = re2_1b;\n if (re.test(w)) {\n var fp = re.exec(w);\n re = re_mgr0;\n if (re.test(fp[1])) {\n re = re_1b_2;\n w = w.replace(re,\"\");\n }\n } else if (re2.test(w)) {\n var fp = re2.exec(w);\n stem = fp[1];\n re2 = re_s_v;\n if (re2.test(stem)) {\n w = stem;\n re2 = re2_1b_2;\n re3 = re3_1b_2;\n re4 = re4_1b_2;\n if 
(re2.test(w)) { w = w + \"e\"; }\n else if (re3.test(w)) { re = re_1b_2; w = w.replace(re,\"\"); }\n else if (re4.test(w)) { w = w + \"e\"; }\n }\n }\n\n // Step 1c - replace suffix y or Y by i if preceded by a non-vowel which is not the first letter of the word (so cry -> cri, by -> by, say -> say)\n re = re_1c;\n if (re.test(w)) {\n var fp = re.exec(w);\n stem = fp[1];\n w = stem + \"i\";\n }\n\n // Step 2\n re = re_2;\n if (re.test(w)) {\n var fp = re.exec(w);\n stem = fp[1];\n suffix = fp[2];\n re = re_mgr0;\n if (re.test(stem)) {\n w = stem + step2list[suffix];\n }\n }\n\n // Step 3\n re = re_3;\n if (re.test(w)) {\n var fp = re.exec(w);\n stem = fp[1];\n suffix = fp[2];\n re = re_mgr0;\n if (re.test(stem)) {\n w = stem + step3list[suffix];\n }\n }\n\n // Step 4\n re = re_4;\n re2 = re2_4;\n if (re.test(w)) {\n var fp = re.exec(w);\n stem = fp[1];\n re = re_mgr1;\n if (re.test(stem)) {\n w = stem;\n }\n } else if (re2.test(w)) {\n var fp = re2.exec(w);\n stem = fp[1] + fp[2];\n re2 = re_mgr1;\n if (re2.test(stem)) {\n w = stem;\n }\n }\n\n // Step 5\n re = re_5;\n if (re.test(w)) {\n var fp = re.exec(w);\n stem = fp[1];\n re = re_mgr1;\n re2 = re_meq1;\n re3 = re3_5;\n if (re.test(stem) || (re2.test(stem) && !(re3.test(stem)))) {\n w = stem;\n }\n }\n\n re = re_5_1;\n re2 = re_mgr1;\n if (re.test(w) && re2.test(w)) {\n re = re_1b_2;\n w = w.replace(re,\"\");\n }\n\n // and turn initial Y back to y\n\n if (firstch == \"y\") {\n w = firstch.toLowerCase() + w.substr(1);\n }\n\n return w;\n };\n\n return function (token) {\n return token.update(porterStemmer);\n }\n})();\n\nlunr.Pipeline.registerFunction(lunr.stemmer, 'stemmer')\n/*!\n * lunr.stopWordFilter\n * Copyright (C) 2020 Oliver Nightingale\n */\n\n/**\n * lunr.generateStopWordFilter builds a stopWordFilter function from the provided\n * list of stop words.\n *\n * The built in lunr.stopWordFilter is built using this generator and can be used\n * to generate custom stopWordFilters for applications or non 
English languages.\n *\n * @function\n * @param {Array} stopWords - The list of stop words to filter out.\n * @returns {lunr.PipelineFunction}\n * @see lunr.Pipeline\n * @see lunr.stopWordFilter\n */\nlunr.generateStopWordFilter = function (stopWords) {\n var words = stopWords.reduce(function (memo, stopWord) {\n memo[stopWord] = stopWord\n return memo\n }, {})\n\n return function (token) {\n if (token && words[token.toString()] !== token.toString()) return token\n }\n}\n\n/**\n * lunr.stopWordFilter is an English language stop word list filter; any words\n * contained in the list will not be passed through the filter.\n *\n * This is intended to be used in the Pipeline. If the token does not pass the\n * filter then undefined will be returned.\n *\n * @function\n * @implements {lunr.PipelineFunction}\n * @param {lunr.Token} token - A token to check for being a stop word.\n * @returns {lunr.Token}\n * @see {@link lunr.Pipeline}\n */\nlunr.stopWordFilter = lunr.generateStopWordFilter([\n 'a',\n 'able',\n 'about',\n 'across',\n 'after',\n 'all',\n 'almost',\n 'also',\n 'am',\n 'among',\n 'an',\n 'and',\n 'any',\n 'are',\n 'as',\n 'at',\n 'be',\n 'because',\n 'been',\n 'but',\n 'by',\n 'can',\n 'cannot',\n 'could',\n 'dear',\n 'did',\n 'do',\n 'does',\n 'either',\n 'else',\n 'ever',\n 'every',\n 'for',\n 'from',\n 'get',\n 'got',\n 'had',\n 'has',\n 'have',\n 'he',\n 'her',\n 'hers',\n 'him',\n 'his',\n 'how',\n 'however',\n 'i',\n 'if',\n 'in',\n 'into',\n 'is',\n 'it',\n 'its',\n 'just',\n 'least',\n 'let',\n 'like',\n 'likely',\n 'may',\n 'me',\n 'might',\n 'most',\n 'must',\n 'my',\n 'neither',\n 'no',\n 'nor',\n 'not',\n 'of',\n 'off',\n 'often',\n 'on',\n 'only',\n 'or',\n 'other',\n 'our',\n 'own',\n 'rather',\n 'said',\n 'say',\n 'says',\n 'she',\n 'should',\n 'since',\n 'so',\n 'some',\n 'than',\n 'that',\n 'the',\n 'their',\n 'them',\n 'then',\n 'there',\n 'these',\n 'they',\n 'this',\n 'tis',\n 'to',\n 'too',\n 'twas',\n 'us',\n 'wants',\n 'was',\n 'we',\n 
'were',\n 'what',\n 'when',\n 'where',\n 'which',\n 'while',\n 'who',\n 'whom',\n 'why',\n 'will',\n 'with',\n 'would',\n 'yet',\n 'you',\n 'your'\n])\n\nlunr.Pipeline.registerFunction(lunr.stopWordFilter, 'stopWordFilter')\n/*!\n * lunr.trimmer\n * Copyright (C) 2020 Oliver Nightingale\n */\n\n/**\n * lunr.trimmer is a pipeline function for trimming non word\n * characters from the beginning and end of tokens before they\n * enter the index.\n *\n * This implementation may not work correctly for non latin\n * characters and should either be removed or adapted for use\n * with languages with non-latin characters.\n *\n * @static\n * @implements {lunr.PipelineFunction}\n * @param {lunr.Token} token The token to pass through the filter\n * @returns {lunr.Token}\n * @see lunr.Pipeline\n */\nlunr.trimmer = function (token) {\n return token.update(function (s) {\n return s.replace(/^\\W+/, '').replace(/\\W+$/, '')\n })\n}\n\nlunr.Pipeline.registerFunction(lunr.trimmer, 'trimmer')\n/*!\n * lunr.TokenSet\n * Copyright (C) 2020 Oliver Nightingale\n */\n\n/**\n * A token set is used to store the unique list of all tokens\n * within an index. 
Token sets are also used to represent an\n * incoming query to the index, this query token set and index\n * token set are then intersected to find which tokens to look\n * up in the inverted index.\n *\n * A token set can hold multiple tokens, as in the case of the\n * index token set, or it can hold a single token as in the\n * case of a simple query token set.\n *\n * Additionally token sets are used to perform wildcard matching.\n * Leading, contained and trailing wildcards are supported, and\n * from this edit distance matching can also be provided.\n *\n * Token sets are implemented as a minimal finite state automata,\n * where both common prefixes and suffixes are shared between tokens.\n * This helps to reduce the space used for storing the token set.\n *\n * @constructor\n */\nlunr.TokenSet = function () {\n this.final = false\n this.edges = {}\n this.id = lunr.TokenSet._nextId\n lunr.TokenSet._nextId += 1\n}\n\n/**\n * Keeps track of the next, auto increment, identifier to assign\n * to a new tokenSet.\n *\n * TokenSets require a unique identifier to be correctly minimised.\n *\n * @private\n */\nlunr.TokenSet._nextId = 1\n\n/**\n * Creates a TokenSet instance from the given sorted array of words.\n *\n * @param {String[]} arr - A sorted array of strings to create the set from.\n * @returns {lunr.TokenSet}\n * @throws Will throw an error if the input array is not sorted.\n */\nlunr.TokenSet.fromArray = function (arr) {\n var builder = new lunr.TokenSet.Builder\n\n for (var i = 0, len = arr.length; i < len; i++) {\n builder.insert(arr[i])\n }\n\n builder.finish()\n return builder.root\n}\n\n/**\n * Creates a token set from a query clause.\n *\n * @private\n * @param {Object} clause - A single clause from lunr.Query.\n * @param {string} clause.term - The query clause term.\n * @param {number} [clause.editDistance] - The optional edit distance for the term.\n * @returns {lunr.TokenSet}\n */\nlunr.TokenSet.fromClause = function (clause) {\n if ('editDistance' 
in clause) {\n return lunr.TokenSet.fromFuzzyString(clause.term, clause.editDistance)\n } else {\n return lunr.TokenSet.fromString(clause.term)\n }\n}\n\n/**\n * Creates a token set representing a single string with a specified\n * edit distance.\n *\n * Insertions, deletions, substitutions and transpositions are each\n * treated as an edit distance of 1.\n *\n * Increasing the allowed edit distance will have a dramatic impact\n * on the performance of both creating and intersecting these TokenSets.\n * It is advised to keep the edit distance less than 3.\n *\n * @param {string} str - The string to create the token set from.\n * @param {number} editDistance - The allowed edit distance to match.\n * @returns {lunr.TokenSet}\n */\nlunr.TokenSet.fromFuzzyString = function (str, editDistance) {\n var root = new lunr.TokenSet\n\n var stack = [{\n node: root,\n editsRemaining: editDistance,\n str: str\n }]\n\n while (stack.length) {\n var frame = stack.pop()\n\n // no edit\n if (frame.str.length > 0) {\n var char = frame.str.charAt(0),\n noEditNode\n\n if (char in frame.node.edges) {\n noEditNode = frame.node.edges[char]\n } else {\n noEditNode = new lunr.TokenSet\n frame.node.edges[char] = noEditNode\n }\n\n if (frame.str.length == 1) {\n noEditNode.final = true\n }\n\n stack.push({\n node: noEditNode,\n editsRemaining: frame.editsRemaining,\n str: frame.str.slice(1)\n })\n }\n\n if (frame.editsRemaining == 0) {\n continue\n }\n\n // insertion\n if (\"*\" in frame.node.edges) {\n var insertionNode = frame.node.edges[\"*\"]\n } else {\n var insertionNode = new lunr.TokenSet\n frame.node.edges[\"*\"] = insertionNode\n }\n\n if (frame.str.length == 0) {\n insertionNode.final = true\n }\n\n stack.push({\n node: insertionNode,\n editsRemaining: frame.editsRemaining - 1,\n str: frame.str\n })\n\n // deletion\n // can only do a deletion if we have enough edits remaining\n // and if there are characters left to delete in the string\n if (frame.str.length > 1) {\n stack.push({\n 
node: frame.node,\n editsRemaining: frame.editsRemaining - 1,\n str: frame.str.slice(1)\n })\n }\n\n // deletion\n // just removing the last character from the str\n if (frame.str.length == 1) {\n frame.node.final = true\n }\n\n // substitution\n // can only do a substitution if we have enough edits remaining\n // and if there are characters left to substitute\n if (frame.str.length >= 1) {\n if (\"*\" in frame.node.edges) {\n var substitutionNode = frame.node.edges[\"*\"]\n } else {\n var substitutionNode = new lunr.TokenSet\n frame.node.edges[\"*\"] = substitutionNode\n }\n\n if (frame.str.length == 1) {\n substitutionNode.final = true\n }\n\n stack.push({\n node: substitutionNode,\n editsRemaining: frame.editsRemaining - 1,\n str: frame.str.slice(1)\n })\n }\n\n // transposition\n // can only do a transposition if there are edits remaining\n // and there are enough characters to transpose\n if (frame.str.length > 1) {\n var charA = frame.str.charAt(0),\n charB = frame.str.charAt(1),\n transposeNode\n\n if (charB in frame.node.edges) {\n transposeNode = frame.node.edges[charB]\n } else {\n transposeNode = new lunr.TokenSet\n frame.node.edges[charB] = transposeNode\n }\n\n if (frame.str.length == 1) {\n transposeNode.final = true\n }\n\n stack.push({\n node: transposeNode,\n editsRemaining: frame.editsRemaining - 1,\n str: charA + frame.str.slice(2)\n })\n }\n }\n\n return root\n}\n\n/**\n * Creates a TokenSet from a string.\n *\n * The string may contain one or more wildcard characters (*)\n * that will allow wildcard matching when intersecting with\n * another TokenSet.\n *\n * @param {string} str - The string to create a TokenSet from.\n * @returns {lunr.TokenSet}\n */\nlunr.TokenSet.fromString = function (str) {\n var node = new lunr.TokenSet,\n root = node\n\n /*\n * Iterates through all characters within the passed string\n * appending a node for each character.\n *\n * When a wildcard character is found then a self\n * referencing edge is introduced to 
continually match\n * any number of any characters.\n */\n for (var i = 0, len = str.length; i < len; i++) {\n var char = str[i],\n final = (i == len - 1)\n\n if (char == \"*\") {\n node.edges[char] = node\n node.final = final\n\n } else {\n var next = new lunr.TokenSet\n next.final = final\n\n node.edges[char] = next\n node = next\n }\n }\n\n return root\n}\n\n/**\n * Converts this TokenSet into an array of strings\n * contained within the TokenSet.\n *\n * This is not intended to be used on a TokenSet that\n * contains wildcards, in these cases the results are\n * undefined and are likely to cause an infinite loop.\n *\n * @returns {string[]}\n */\nlunr.TokenSet.prototype.toArray = function () {\n var words = []\n\n var stack = [{\n prefix: \"\",\n node: this\n }]\n\n while (stack.length) {\n var frame = stack.pop(),\n edges = Object.keys(frame.node.edges),\n len = edges.length\n\n if (frame.node.final) {\n /* In Safari, at this point the prefix is sometimes corrupted, see:\n * https://github.com/olivernn/lunr.js/issues/279 Calling any\n * String.prototype method forces Safari to \"cast\" this string to what\n * it's supposed to be, fixing the bug. */\n frame.prefix.charAt(0)\n words.push(frame.prefix)\n }\n\n for (var i = 0; i < len; i++) {\n var edge = edges[i]\n\n stack.push({\n prefix: frame.prefix.concat(edge),\n node: frame.node.edges[edge]\n })\n }\n }\n\n return words\n}\n\n/**\n * Generates a string representation of a TokenSet.\n *\n * This is intended to allow TokenSets to be used as keys\n * in objects, largely to aid the construction and minimisation\n * of a TokenSet. 
As such it is not designed to be a human\n * friendly representation of the TokenSet.\n *\n * @returns {string}\n */\nlunr.TokenSet.prototype.toString = function () {\n // NOTE: Using Object.keys here as this.edges is very likely\n // to enter 'hash-mode' with many keys being added\n //\n // avoiding a for-in loop here as it leads to the function\n // being de-optimised (at least in V8). From some simple\n // benchmarks the performance is comparable, but allowing\n // V8 to optimize may mean easy performance wins in the future.\n\n if (this._str) {\n return this._str\n }\n\n var str = this.final ? '1' : '0',\n labels = Object.keys(this.edges).sort(),\n len = labels.length\n\n for (var i = 0; i < len; i++) {\n var label = labels[i],\n node = this.edges[label]\n\n str = str + label + node.id\n }\n\n return str\n}\n\n/**\n * Returns a new TokenSet that is the intersection of\n * this TokenSet and the passed TokenSet.\n *\n * This intersection will take into account any wildcards\n * contained within the TokenSet.\n *\n * @param {lunr.TokenSet} b - An other TokenSet to intersect with.\n * @returns {lunr.TokenSet}\n */\nlunr.TokenSet.prototype.intersect = function (b) {\n var output = new lunr.TokenSet,\n frame = undefined\n\n var stack = [{\n qNode: b,\n output: output,\n node: this\n }]\n\n while (stack.length) {\n frame = stack.pop()\n\n // NOTE: As with the #toString method, we are using\n // Object.keys and a for loop instead of a for-in loop\n // as both of these objects enter 'hash' mode, causing\n // the function to be de-optimised in V8\n var qEdges = Object.keys(frame.qNode.edges),\n qLen = qEdges.length,\n nEdges = Object.keys(frame.node.edges),\n nLen = nEdges.length\n\n for (var q = 0; q < qLen; q++) {\n var qEdge = qEdges[q]\n\n for (var n = 0; n < nLen; n++) {\n var nEdge = nEdges[n]\n\n if (nEdge == qEdge || qEdge == '*') {\n var node = frame.node.edges[nEdge],\n qNode = frame.qNode.edges[qEdge],\n final = node.final && qNode.final,\n next = 
undefined\n\n if (nEdge in frame.output.edges) {\n // an edge already exists for this character\n // no need to create a new node, just set the finality\n // bit unless this node is already final\n next = frame.output.edges[nEdge]\n next.final = next.final || final\n\n } else {\n // no edge exists yet, must create one\n // set the finality bit and insert it\n // into the output\n next = new lunr.TokenSet\n next.final = final\n frame.output.edges[nEdge] = next\n }\n\n stack.push({\n qNode: qNode,\n output: next,\n node: node\n })\n }\n }\n }\n }\n\n return output\n}\nlunr.TokenSet.Builder = function () {\n this.previousWord = \"\"\n this.root = new lunr.TokenSet\n this.uncheckedNodes = []\n this.minimizedNodes = {}\n}\n\nlunr.TokenSet.Builder.prototype.insert = function (word) {\n var node,\n commonPrefix = 0\n\n if (word < this.previousWord) {\n throw new Error (\"Out of order word insertion\")\n }\n\n for (var i = 0; i < word.length && i < this.previousWord.length; i++) {\n if (word[i] != this.previousWord[i]) break\n commonPrefix++\n }\n\n this.minimize(commonPrefix)\n\n if (this.uncheckedNodes.length == 0) {\n node = this.root\n } else {\n node = this.uncheckedNodes[this.uncheckedNodes.length - 1].child\n }\n\n for (var i = commonPrefix; i < word.length; i++) {\n var nextNode = new lunr.TokenSet,\n char = word[i]\n\n node.edges[char] = nextNode\n\n this.uncheckedNodes.push({\n parent: node,\n char: char,\n child: nextNode\n })\n\n node = nextNode\n }\n\n node.final = true\n this.previousWord = word\n}\n\nlunr.TokenSet.Builder.prototype.finish = function () {\n this.minimize(0)\n}\n\nlunr.TokenSet.Builder.prototype.minimize = function (downTo) {\n for (var i = this.uncheckedNodes.length - 1; i >= downTo; i--) {\n var node = this.uncheckedNodes[i],\n childKey = node.child.toString()\n\n if (childKey in this.minimizedNodes) {\n node.parent.edges[node.char] = this.minimizedNodes[childKey]\n } else {\n // Cache the key for this node since\n // we know it can't change 
anymore\n node.child._str = childKey\n\n this.minimizedNodes[childKey] = node.child\n }\n\n this.uncheckedNodes.pop()\n }\n}\n/*!\n * lunr.Index\n * Copyright (C) 2020 Oliver Nightingale\n */\n\n/**\n * An index contains the built index of all documents and provides a query interface\n * to the index.\n *\n * Usually instances of lunr.Index will not be created using this constructor, instead\n * lunr.Builder should be used to construct new indexes, or lunr.Index.load should be\n * used to load previously built and serialized indexes.\n *\n * @constructor\n * @param {Object} attrs - The attributes of the built search index.\n * @param {Object} attrs.invertedIndex - An index of term/field to document reference.\n * @param {Object} attrs.fieldVectors - Field vectors\n * @param {lunr.TokenSet} attrs.tokenSet - An set of all corpus tokens.\n * @param {string[]} attrs.fields - The names of indexed document fields.\n * @param {lunr.Pipeline} attrs.pipeline - The pipeline to use for search terms.\n */\nlunr.Index = function (attrs) {\n this.invertedIndex = attrs.invertedIndex\n this.fieldVectors = attrs.fieldVectors\n this.tokenSet = attrs.tokenSet\n this.fields = attrs.fields\n this.pipeline = attrs.pipeline\n}\n\n/**\n * A result contains details of a document matching a search query.\n * @typedef {Object} lunr.Index~Result\n * @property {string} ref - The reference of the document this result represents.\n * @property {number} score - A number between 0 and 1 representing how similar this document is to the query.\n * @property {lunr.MatchData} matchData - Contains metadata about this match including which term(s) caused the match.\n */\n\n/**\n * Although lunr provides the ability to create queries using lunr.Query, it also provides a simple\n * query language which itself is parsed into an instance of lunr.Query.\n *\n * For programmatically building queries it is advised to directly use lunr.Query, the query language\n * is best used for human entered text rather 
than program generated text.\n *\n * At its simplest queries can just be a single term, e.g. `hello`, multiple terms are also supported\n * and will be combined with OR, e.g `hello world` will match documents that contain either 'hello'\n * or 'world', though those that contain both will rank higher in the results.\n *\n * Wildcards can be included in terms to match one or more unspecified characters, these wildcards can\n * be inserted anywhere within the term, and more than one wildcard can exist in a single term. Adding\n * wildcards will increase the number of documents that will be found but can also have a negative\n * impact on query performance, especially with wildcards at the beginning of a term.\n *\n * Terms can be restricted to specific fields, e.g. `title:hello`, only documents with the term\n * hello in the title field will match this query. Using a field not present in the index will lead\n * to an error being thrown.\n *\n * Modifiers can also be added to terms, lunr supports edit distance and boost modifiers on terms. A term\n * boost will make documents matching that term score higher, e.g. `foo^5`. Edit distance is also supported\n * to provide fuzzy matching, e.g. 'hello~2' will match documents with hello with an edit distance of 2.\n * Avoid large values for edit distance to improve query performance.\n *\n * Each term also supports a presence modifier. By default a term's presence in document is optional, however\n * this can be changed to either required or prohibited. For a term's presence to be required in a document the\n * term should be prefixed with a '+', e.g. `+foo bar` is a search for documents that must contain 'foo' and\n * optionally contain 'bar'. Conversely a leading '-' sets the terms presence to prohibited, i.e. it must not\n * appear in a document, e.g. 
`-foo bar` is a search for documents that do not contain 'foo' but may contain 'bar'.\n *\n * To escape special characters the backslash character '\\' can be used, this allows searches to include\n * characters that would normally be considered modifiers, e.g. `foo\\~2` will search for a term \"foo~2\" instead\n * of attempting to apply a boost of 2 to the search term \"foo\".\n *\n * @typedef {string} lunr.Index~QueryString\n * @example Simple single term query\n * hello\n * @example Multiple term query\n * hello world\n * @example term scoped to a field\n * title:hello\n * @example term with a boost of 10\n * hello^10\n * @example term with an edit distance of 2\n * hello~2\n * @example terms with presence modifiers\n * -foo +bar baz\n */\n\n/**\n * Performs a search against the index using lunr query syntax.\n *\n * Results will be returned sorted by their score, the most relevant results\n * will be returned first. For details on how the score is calculated, please see\n * the {@link https://lunrjs.com/guides/searching.html#scoring|guide}.\n *\n * For more programmatic querying use lunr.Index#query.\n *\n * @param {lunr.Index~QueryString} queryString - A string containing a lunr query.\n * @throws {lunr.QueryParseError} If the passed query string cannot be parsed.\n * @returns {lunr.Index~Result[]}\n */\nlunr.Index.prototype.search = function (queryString) {\n return this.query(function (query) {\n var parser = new lunr.QueryParser(queryString, query)\n parser.parse()\n })\n}\n\n/**\n * A query builder callback provides a query object to be used to express\n * the query to perform on the index.\n *\n * @callback lunr.Index~queryBuilder\n * @param {lunr.Query} query - The query object to build up.\n * @this lunr.Query\n */\n\n/**\n * Performs a query against the index using the yielded lunr.Query object.\n *\n * If performing programmatic queries against the index, this method is preferred\n * over lunr.Index#search so as to avoid the additional query parsing 
overhead.\n *\n * A query object is yielded to the supplied function which should be used to\n * express the query to be run against the index.\n *\n * Note that although this function takes a callback parameter it is _not_ an\n * asynchronous operation, the callback is just yielded a query object to be\n * customized.\n *\n * @param {lunr.Index~queryBuilder} fn - A function that is used to build the query.\n * @returns {lunr.Index~Result[]}\n */\nlunr.Index.prototype.query = function (fn) {\n // for each query clause\n // * process terms\n // * expand terms from token set\n // * find matching documents and metadata\n // * get document vectors\n // * score documents\n\n var query = new lunr.Query(this.fields),\n matchingFields = Object.create(null),\n queryVectors = Object.create(null),\n termFieldCache = Object.create(null),\n requiredMatches = Object.create(null),\n prohibitedMatches = Object.create(null)\n\n /*\n * To support field level boosts a query vector is created per\n * field. An empty vector is eagerly created to support negated\n * queries.\n */\n for (var i = 0; i < this.fields.length; i++) {\n queryVectors[this.fields[i]] = new lunr.Vector\n }\n\n fn.call(query, query)\n\n for (var i = 0; i < query.clauses.length; i++) {\n /*\n * Unless the pipeline has been disabled for this term, which is\n * the case for terms with wildcards, we need to pass the clause\n * term through the search pipeline. A pipeline returns an array\n * of processed terms. 
Pipeline functions may expand the passed\n * term, which means we may end up performing multiple index lookups\n * for a single query term.\n */\n var clause = query.clauses[i],\n terms = null,\n clauseMatches = lunr.Set.empty\n\n if (clause.usePipeline) {\n terms = this.pipeline.runString(clause.term, {\n fields: clause.fields\n })\n } else {\n terms = [clause.term]\n }\n\n for (var m = 0; m < terms.length; m++) {\n var term = terms[m]\n\n /*\n * Each term returned from the pipeline needs to use the same query\n * clause object, e.g. the same boost and or edit distance. The\n * simplest way to do this is to re-use the clause object but mutate\n * its term property.\n */\n clause.term = term\n\n /*\n * From the term in the clause we create a token set which will then\n * be used to intersect the indexes token set to get a list of terms\n * to lookup in the inverted index\n */\n var termTokenSet = lunr.TokenSet.fromClause(clause),\n expandedTerms = this.tokenSet.intersect(termTokenSet).toArray()\n\n /*\n * If a term marked as required does not exist in the tokenSet it is\n * impossible for the search to return any matches. 
We set all the field\n * scoped required matches set to empty and stop examining any further\n * clauses.\n */\n if (expandedTerms.length === 0 && clause.presence === lunr.Query.presence.REQUIRED) {\n for (var k = 0; k < clause.fields.length; k++) {\n var field = clause.fields[k]\n requiredMatches[field] = lunr.Set.empty\n }\n\n break\n }\n\n for (var j = 0; j < expandedTerms.length; j++) {\n /*\n * For each term get the posting and termIndex, this is required for\n * building the query vector.\n */\n var expandedTerm = expandedTerms[j],\n posting = this.invertedIndex[expandedTerm],\n termIndex = posting._index\n\n for (var k = 0; k < clause.fields.length; k++) {\n /*\n * For each field that this query term is scoped by (by default\n * all fields are in scope) we need to get all the document refs\n * that have this term in that field.\n *\n * The posting is the entry in the invertedIndex for the matching\n * term from above.\n */\n var field = clause.fields[k],\n fieldPosting = posting[field],\n matchingDocumentRefs = Object.keys(fieldPosting),\n termField = expandedTerm + \"/\" + field,\n matchingDocumentsSet = new lunr.Set(matchingDocumentRefs)\n\n /*\n * if the presence of this term is required ensure that the matching\n * documents are added to the set of required matches for this clause.\n *\n */\n if (clause.presence == lunr.Query.presence.REQUIRED) {\n clauseMatches = clauseMatches.union(matchingDocumentsSet)\n\n if (requiredMatches[field] === undefined) {\n requiredMatches[field] = lunr.Set.complete\n }\n }\n\n /*\n * if the presence of this term is prohibited ensure that the matching\n * documents are added to the set of prohibited matches for this field,\n * creating that set if it does not yet exist.\n */\n if (clause.presence == lunr.Query.presence.PROHIBITED) {\n if (prohibitedMatches[field] === undefined) {\n prohibitedMatches[field] = lunr.Set.empty\n }\n\n prohibitedMatches[field] = prohibitedMatches[field].union(matchingDocumentsSet)\n\n /*\n * 
Prohibited matches should not be part of the query vector used for\n * similarity scoring and no metadata should be extracted so we continue\n * to the next field\n */\n continue\n }\n\n /*\n * The query field vector is populated using the termIndex found for\n * the term and a unit value with the appropriate boost applied.\n * Using upsert because there could already be an entry in the vector\n * for the term we are working with. In that case we just add the scores\n * together.\n */\n queryVectors[field].upsert(termIndex, clause.boost, function (a, b) { return a + b })\n\n /**\n * If we've already seen this term, field combo then we've already collected\n * the matching documents and metadata, no need to go through all that again\n */\n if (termFieldCache[termField]) {\n continue\n }\n\n for (var l = 0; l < matchingDocumentRefs.length; l++) {\n /*\n * All metadata for this term/field/document triple\n * are then extracted and collected into an instance\n * of lunr.MatchData ready to be returned in the query\n * results\n */\n var matchingDocumentRef = matchingDocumentRefs[l],\n matchingFieldRef = new lunr.FieldRef (matchingDocumentRef, field),\n metadata = fieldPosting[matchingDocumentRef],\n fieldMatch\n\n if ((fieldMatch = matchingFields[matchingFieldRef]) === undefined) {\n matchingFields[matchingFieldRef] = new lunr.MatchData (expandedTerm, field, metadata)\n } else {\n fieldMatch.add(expandedTerm, field, metadata)\n }\n\n }\n\n termFieldCache[termField] = true\n }\n }\n }\n\n /**\n * If the presence was required we need to update the requiredMatches field sets.\n * We do this after all fields for the term have collected their matches because\n * the clause terms presence is required in _any_ of the fields not _all_ of the\n * fields.\n */\n if (clause.presence === lunr.Query.presence.REQUIRED) {\n for (var k = 0; k < clause.fields.length; k++) {\n var field = clause.fields[k]\n requiredMatches[field] = requiredMatches[field].intersect(clauseMatches)\n }\n 
}\n }\n\n /**\n * Need to combine the field scoped required and prohibited\n * matching documents into a global set of required and prohibited\n * matches\n */\n var allRequiredMatches = lunr.Set.complete,\n allProhibitedMatches = lunr.Set.empty\n\n for (var i = 0; i < this.fields.length; i++) {\n var field = this.fields[i]\n\n if (requiredMatches[field]) {\n allRequiredMatches = allRequiredMatches.intersect(requiredMatches[field])\n }\n\n if (prohibitedMatches[field]) {\n allProhibitedMatches = allProhibitedMatches.union(prohibitedMatches[field])\n }\n }\n\n var matchingFieldRefs = Object.keys(matchingFields),\n results = [],\n matches = Object.create(null)\n\n /*\n * If the query is negated (contains only prohibited terms)\n * we need to get _all_ fieldRefs currently existing in the\n * index. This is only done when we know that the query is\n * entirely prohibited terms to avoid any cost of getting all\n * fieldRefs unnecessarily.\n *\n * Additionally, blank MatchData must be created to correctly\n * populate the results.\n */\n if (query.isNegated()) {\n matchingFieldRefs = Object.keys(this.fieldVectors)\n\n for (var i = 0; i < matchingFieldRefs.length; i++) {\n var matchingFieldRef = matchingFieldRefs[i]\n var fieldRef = lunr.FieldRef.fromString(matchingFieldRef)\n matchingFields[matchingFieldRef] = new lunr.MatchData\n }\n }\n\n for (var i = 0; i < matchingFieldRefs.length; i++) {\n /*\n * Currently we have document fields that match the query, but we\n * need to return documents. 
The matchData and scores are combined\n * from multiple fields belonging to the same document.\n *\n * Scores are calculated by field, using the query vectors created\n * above, and combined into a final document score using addition.\n */\n var fieldRef = lunr.FieldRef.fromString(matchingFieldRefs[i]),\n docRef = fieldRef.docRef\n\n if (!allRequiredMatches.contains(docRef)) {\n continue\n }\n\n if (allProhibitedMatches.contains(docRef)) {\n continue\n }\n\n var fieldVector = this.fieldVectors[fieldRef],\n score = queryVectors[fieldRef.fieldName].similarity(fieldVector),\n docMatch\n\n if ((docMatch = matches[docRef]) !== undefined) {\n docMatch.score += score\n docMatch.matchData.combine(matchingFields[fieldRef])\n } else {\n var match = {\n ref: docRef,\n score: score,\n matchData: matchingFields[fieldRef]\n }\n matches[docRef] = match\n results.push(match)\n }\n }\n\n /*\n * Sort the results objects by score, highest first.\n */\n return results.sort(function (a, b) {\n return b.score - a.score\n })\n}\n\n/**\n * Prepares the index for JSON serialization.\n *\n * The schema for this JSON blob will be described in a\n * separate JSON schema file.\n *\n * @returns {Object}\n */\nlunr.Index.prototype.toJSON = function () {\n var invertedIndex = Object.keys(this.invertedIndex)\n .sort()\n .map(function (term) {\n return [term, this.invertedIndex[term]]\n }, this)\n\n var fieldVectors = Object.keys(this.fieldVectors)\n .map(function (ref) {\n return [ref, this.fieldVectors[ref].toJSON()]\n }, this)\n\n return {\n version: lunr.version,\n fields: this.fields,\n fieldVectors: fieldVectors,\n invertedIndex: invertedIndex,\n pipeline: this.pipeline.toJSON()\n }\n}\n\n/**\n * Loads a previously serialized lunr.Index\n *\n * @param {Object} serializedIndex - A previously serialized lunr.Index\n * @returns {lunr.Index}\n */\nlunr.Index.load = function (serializedIndex) {\n var attrs = {},\n fieldVectors = {},\n serializedVectors = serializedIndex.fieldVectors,\n 
invertedIndex = Object.create(null),\n serializedInvertedIndex = serializedIndex.invertedIndex,\n tokenSetBuilder = new lunr.TokenSet.Builder,\n pipeline = lunr.Pipeline.load(serializedIndex.pipeline)\n\n if (serializedIndex.version != lunr.version) {\n lunr.utils.warn(\"Version mismatch when loading serialised index. Current version of lunr '\" + lunr.version + \"' does not match serialized index '\" + serializedIndex.version + \"'\")\n }\n\n for (var i = 0; i < serializedVectors.length; i++) {\n var tuple = serializedVectors[i],\n ref = tuple[0],\n elements = tuple[1]\n\n fieldVectors[ref] = new lunr.Vector(elements)\n }\n\n for (var i = 0; i < serializedInvertedIndex.length; i++) {\n var tuple = serializedInvertedIndex[i],\n term = tuple[0],\n posting = tuple[1]\n\n tokenSetBuilder.insert(term)\n invertedIndex[term] = posting\n }\n\n tokenSetBuilder.finish()\n\n attrs.fields = serializedIndex.fields\n\n attrs.fieldVectors = fieldVectors\n attrs.invertedIndex = invertedIndex\n attrs.tokenSet = tokenSetBuilder.root\n attrs.pipeline = pipeline\n\n return new lunr.Index(attrs)\n}\n/*!\n * lunr.Builder\n * Copyright (C) 2020 Oliver Nightingale\n */\n\n/**\n * lunr.Builder performs indexing on a set of documents and\n * returns instances of lunr.Index ready for querying.\n *\n * All configuration of the index is done via the builder, the\n * fields to index, the document reference, the text processing\n * pipeline and document scoring parameters are all set on the\n * builder before indexing.\n *\n * @constructor\n * @property {string} _ref - Internal reference to the document reference field.\n * @property {string[]} _fields - Internal reference to the document fields to index.\n * @property {object} invertedIndex - The inverted index maps terms to document fields.\n * @property {object} documentTermFrequencies - Keeps track of document term frequencies.\n * @property {object} documentLengths - Keeps track of the length of documents added to the index.\n * @property 
{lunr.tokenizer} tokenizer - Function for splitting strings into tokens for indexing.\n * @property {lunr.Pipeline} pipeline - The pipeline performs text processing on tokens before indexing.\n * @property {lunr.Pipeline} searchPipeline - A pipeline for processing search terms before querying the index.\n * @property {number} documentCount - Keeps track of the total number of documents indexed.\n * @property {number} _b - A parameter to control field length normalization, setting this to 0 disabled normalization, 1 fully normalizes field lengths, the default value is 0.75.\n * @property {number} _k1 - A parameter to control how quickly an increase in term frequency results in term frequency saturation, the default value is 1.2.\n * @property {number} termIndex - A counter incremented for each unique term, used to identify a terms position in the vector space.\n * @property {array} metadataWhitelist - A list of metadata keys that have been whitelisted for entry in the index.\n */\nlunr.Builder = function () {\n this._ref = \"id\"\n this._fields = Object.create(null)\n this._documents = Object.create(null)\n this.invertedIndex = Object.create(null)\n this.fieldTermFrequencies = {}\n this.fieldLengths = {}\n this.tokenizer = lunr.tokenizer\n this.pipeline = new lunr.Pipeline\n this.searchPipeline = new lunr.Pipeline\n this.documentCount = 0\n this._b = 0.75\n this._k1 = 1.2\n this.termIndex = 0\n this.metadataWhitelist = []\n}\n\n/**\n * Sets the document field used as the document reference. Every document must have this field.\n * The type of this field in the document should be a string, if it is not a string it will be\n * coerced into a string by calling toString.\n *\n * The default ref is 'id'.\n *\n * The ref should _not_ be changed during indexing, it should be set before any documents are\n * added to the index. 
Changing it during indexing can lead to inconsistent results.\n *\n * @param {string} ref - The name of the reference field in the document.\n */\nlunr.Builder.prototype.ref = function (ref) {\n this._ref = ref\n}\n\n/**\n * A function that is used to extract a field from a document.\n *\n * Lunr expects a field to be at the top level of a document, if however the field\n * is deeply nested within a document an extractor function can be used to extract\n * the right field for indexing.\n *\n * @callback fieldExtractor\n * @param {object} doc - The document being added to the index.\n * @returns {?(string|object|object[])} obj - The object that will be indexed for this field.\n * @example Extracting a nested field\n * function (doc) { return doc.nested.field }\n */\n\n/**\n * Adds a field to the list of document fields that will be indexed. Every document being\n * indexed should have this field. Null values for this field in indexed documents will\n * not cause errors but will limit the chance of that document being retrieved by searches.\n *\n * All fields should be added before adding documents to the index. Adding fields after\n * a document has been indexed will have no effect on already indexed documents.\n *\n * Fields can be boosted at build time. This allows terms within that field to have more\n * importance when ranking search results. 
Use a field boost to specify that matches within\n * one field are more important than other fields.\n *\n * @param {string} fieldName - The name of a field to index in all documents.\n * @param {object} attributes - Optional attributes associated with this field.\n * @param {number} [attributes.boost=1] - Boost applied to all terms within this field.\n * @param {fieldExtractor} [attributes.extractor] - Function to extract a field from a document.\n * @throws {RangeError} fieldName cannot contain unsupported characters '/'\n */\nlunr.Builder.prototype.field = function (fieldName, attributes) {\n if (/\\//.test(fieldName)) {\n throw new RangeError (\"Field '\" + fieldName + \"' contains illegal character '/'\")\n }\n\n this._fields[fieldName] = attributes || {}\n}\n\n/**\n * A parameter to tune the amount of field length normalisation that is applied when\n * calculating relevance scores. A value of 0 will completely disable any normalisation\n * and a value of 1 will fully normalise field lengths. The default is 0.75. Values of b\n * will be clamped to the range 0 - 1.\n *\n * @param {number} number - The value to set for this tuning parameter.\n */\nlunr.Builder.prototype.b = function (number) {\n if (number < 0) {\n this._b = 0\n } else if (number > 1) {\n this._b = 1\n } else {\n this._b = number\n }\n}\n\n/**\n * A parameter that controls the speed at which a rise in term frequency results in term\n * frequency saturation. The default value is 1.2. 
Setting this to a higher value will give\n * slower saturation levels, a lower value will result in quicker saturation.\n *\n * @param {number} number - The value to set for this tuning parameter.\n */\nlunr.Builder.prototype.k1 = function (number) {\n this._k1 = number\n}\n\n/**\n * Adds a document to the index.\n *\n * Before adding fields to the index the index should have been fully setup, with the document\n * ref and all fields to index already having been specified.\n *\n * The document must have a field name as specified by the ref (by default this is 'id') and\n * it should have all fields defined for indexing, though null or undefined values will not\n * cause errors.\n *\n * Entire documents can be boosted at build time. Applying a boost to a document indicates that\n * this document should rank higher in search results than other documents.\n *\n * @param {object} doc - The document to add to the index.\n * @param {object} attributes - Optional attributes associated with this document.\n * @param {number} [attributes.boost=1] - Boost applied to all terms within this document.\n */\nlunr.Builder.prototype.add = function (doc, attributes) {\n var docRef = doc[this._ref],\n fields = Object.keys(this._fields)\n\n this._documents[docRef] = attributes || {}\n this.documentCount += 1\n\n for (var i = 0; i < fields.length; i++) {\n var fieldName = fields[i],\n extractor = this._fields[fieldName].extractor,\n field = extractor ? 
extractor(doc) : doc[fieldName],\n tokens = this.tokenizer(field, {\n fields: [fieldName]\n }),\n terms = this.pipeline.run(tokens),\n fieldRef = new lunr.FieldRef (docRef, fieldName),\n fieldTerms = Object.create(null)\n\n this.fieldTermFrequencies[fieldRef] = fieldTerms\n this.fieldLengths[fieldRef] = 0\n\n // store the length of this field for this document\n this.fieldLengths[fieldRef] += terms.length\n\n // calculate term frequencies for this field\n for (var j = 0; j < terms.length; j++) {\n var term = terms[j]\n\n if (fieldTerms[term] == undefined) {\n fieldTerms[term] = 0\n }\n\n fieldTerms[term] += 1\n\n // add to inverted index\n // create an initial posting if one doesn't exist\n if (this.invertedIndex[term] == undefined) {\n var posting = Object.create(null)\n posting[\"_index\"] = this.termIndex\n this.termIndex += 1\n\n for (var k = 0; k < fields.length; k++) {\n posting[fields[k]] = Object.create(null)\n }\n\n this.invertedIndex[term] = posting\n }\n\n // add an entry for this term/fieldName/docRef to the invertedIndex\n if (this.invertedIndex[term][fieldName][docRef] == undefined) {\n this.invertedIndex[term][fieldName][docRef] = Object.create(null)\n }\n\n // store all whitelisted metadata about this token in the\n // inverted index\n for (var l = 0; l < this.metadataWhitelist.length; l++) {\n var metadataKey = this.metadataWhitelist[l],\n metadata = term.metadata[metadataKey]\n\n if (this.invertedIndex[term][fieldName][docRef][metadataKey] == undefined) {\n this.invertedIndex[term][fieldName][docRef][metadataKey] = []\n }\n\n this.invertedIndex[term][fieldName][docRef][metadataKey].push(metadata)\n }\n }\n\n }\n}\n\n/**\n * Calculates the average document length for this index\n *\n * @private\n */\nlunr.Builder.prototype.calculateAverageFieldLengths = function () {\n\n var fieldRefs = Object.keys(this.fieldLengths),\n numberOfFields = fieldRefs.length,\n accumulator = {},\n documentsWithField = {}\n\n for (var i = 0; i < numberOfFields; i++) {\n 
var fieldRef = lunr.FieldRef.fromString(fieldRefs[i]),\n field = fieldRef.fieldName\n\n documentsWithField[field] || (documentsWithField[field] = 0)\n documentsWithField[field] += 1\n\n accumulator[field] || (accumulator[field] = 0)\n accumulator[field] += this.fieldLengths[fieldRef]\n }\n\n var fields = Object.keys(this._fields)\n\n for (var i = 0; i < fields.length; i++) {\n var fieldName = fields[i]\n accumulator[fieldName] = accumulator[fieldName] / documentsWithField[fieldName]\n }\n\n this.averageFieldLength = accumulator\n}\n\n/**\n * Builds a vector space model of every document using lunr.Vector\n *\n * @private\n */\nlunr.Builder.prototype.createFieldVectors = function () {\n var fieldVectors = {},\n fieldRefs = Object.keys(this.fieldTermFrequencies),\n fieldRefsLength = fieldRefs.length,\n termIdfCache = Object.create(null)\n\n for (var i = 0; i < fieldRefsLength; i++) {\n var fieldRef = lunr.FieldRef.fromString(fieldRefs[i]),\n fieldName = fieldRef.fieldName,\n fieldLength = this.fieldLengths[fieldRef],\n fieldVector = new lunr.Vector,\n termFrequencies = this.fieldTermFrequencies[fieldRef],\n terms = Object.keys(termFrequencies),\n termsLength = terms.length\n\n\n var fieldBoost = this._fields[fieldName].boost || 1,\n docBoost = this._documents[fieldRef.docRef].boost || 1\n\n for (var j = 0; j < termsLength; j++) {\n var term = terms[j],\n tf = termFrequencies[term],\n termIndex = this.invertedIndex[term]._index,\n idf, score, scoreWithPrecision\n\n if (termIdfCache[term] === undefined) {\n idf = lunr.idf(this.invertedIndex[term], this.documentCount)\n termIdfCache[term] = idf\n } else {\n idf = termIdfCache[term]\n }\n\n score = idf * ((this._k1 + 1) * tf) / (this._k1 * (1 - this._b + this._b * (fieldLength / this.averageFieldLength[fieldName])) + tf)\n score *= fieldBoost\n score *= docBoost\n scoreWithPrecision = Math.round(score * 1000) / 1000\n // Converts 1.23456789 to 1.234.\n // Reducing the precision so that the vectors take up less\n // space 
when serialised. Doing it now so that they behave\n // the same before and after serialisation. Also, this is\n // the fastest approach to reducing a number's precision in\n // JavaScript.\n\n fieldVector.insert(termIndex, scoreWithPrecision)\n }\n\n fieldVectors[fieldRef] = fieldVector\n }\n\n this.fieldVectors = fieldVectors\n}\n\n/**\n * Creates a token set of all tokens in the index using lunr.TokenSet\n *\n * @private\n */\nlunr.Builder.prototype.createTokenSet = function () {\n this.tokenSet = lunr.TokenSet.fromArray(\n Object.keys(this.invertedIndex).sort()\n )\n}\n\n/**\n * Builds the index, creating an instance of lunr.Index.\n *\n * This completes the indexing process and should only be called\n * once all documents have been added to the index.\n *\n * @returns {lunr.Index}\n */\nlunr.Builder.prototype.build = function () {\n this.calculateAverageFieldLengths()\n this.createFieldVectors()\n this.createTokenSet()\n\n return new lunr.Index({\n invertedIndex: this.invertedIndex,\n fieldVectors: this.fieldVectors,\n tokenSet: this.tokenSet,\n fields: Object.keys(this._fields),\n pipeline: this.searchPipeline\n })\n}\n\n/**\n * Applies a plugin to the index builder.\n *\n * A plugin is a function that is called with the index builder as its context.\n * Plugins can be used to customise or extend the behaviour of the index\n * in some way. A plugin is just a function, that encapsulated the custom\n * behaviour that should be applied when building the index.\n *\n * The plugin function will be called with the index builder as its argument, additional\n * arguments can also be passed when calling use. 
The function will be called\n * with the index builder as its context.\n *\n * @param {Function} plugin The plugin to apply.\n */\nlunr.Builder.prototype.use = function (fn) {\n var args = Array.prototype.slice.call(arguments, 1)\n args.unshift(this)\n fn.apply(this, args)\n}\n/**\n * Contains and collects metadata about a matching document.\n * A single instance of lunr.MatchData is returned as part of every\n * lunr.Index~Result.\n *\n * @constructor\n * @param {string} term - The term this match data is associated with\n * @param {string} field - The field in which the term was found\n * @param {object} metadata - The metadata recorded about this term in this field\n * @property {object} metadata - A cloned collection of metadata associated with this document.\n * @see {@link lunr.Index~Result}\n */\nlunr.MatchData = function (term, field, metadata) {\n var clonedMetadata = Object.create(null),\n metadataKeys = Object.keys(metadata || {})\n\n // Cloning the metadata to prevent the original\n // being mutated during match data combination.\n // Metadata is kept in an array within the inverted\n // index so cloning the data can be done with\n // Array#slice\n for (var i = 0; i < metadataKeys.length; i++) {\n var key = metadataKeys[i]\n clonedMetadata[key] = metadata[key].slice()\n }\n\n this.metadata = Object.create(null)\n\n if (term !== undefined) {\n this.metadata[term] = Object.create(null)\n this.metadata[term][field] = clonedMetadata\n }\n}\n\n/**\n * An instance of lunr.MatchData will be created for every term that matches a\n * document. However only one instance is required in a lunr.Index~Result. 
This\n * method combines metadata from another instance of lunr.MatchData with this\n * objects metadata.\n *\n * @param {lunr.MatchData} otherMatchData - Another instance of match data to merge with this one.\n * @see {@link lunr.Index~Result}\n */\nlunr.MatchData.prototype.combine = function (otherMatchData) {\n var terms = Object.keys(otherMatchData.metadata)\n\n for (var i = 0; i < terms.length; i++) {\n var term = terms[i],\n fields = Object.keys(otherMatchData.metadata[term])\n\n if (this.metadata[term] == undefined) {\n this.metadata[term] = Object.create(null)\n }\n\n for (var j = 0; j < fields.length; j++) {\n var field = fields[j],\n keys = Object.keys(otherMatchData.metadata[term][field])\n\n if (this.metadata[term][field] == undefined) {\n this.metadata[term][field] = Object.create(null)\n }\n\n for (var k = 0; k < keys.length; k++) {\n var key = keys[k]\n\n if (this.metadata[term][field][key] == undefined) {\n this.metadata[term][field][key] = otherMatchData.metadata[term][field][key]\n } else {\n this.metadata[term][field][key] = this.metadata[term][field][key].concat(otherMatchData.metadata[term][field][key])\n }\n\n }\n }\n }\n}\n\n/**\n * Add metadata for a term/field pair to this instance of match data.\n *\n * @param {string} term - The term this match data is associated with\n * @param {string} field - The field in which the term was found\n * @param {object} metadata - The metadata recorded about this term in this field\n */\nlunr.MatchData.prototype.add = function (term, field, metadata) {\n if (!(term in this.metadata)) {\n this.metadata[term] = Object.create(null)\n this.metadata[term][field] = metadata\n return\n }\n\n if (!(field in this.metadata[term])) {\n this.metadata[term][field] = metadata\n return\n }\n\n var metadataKeys = Object.keys(metadata)\n\n for (var i = 0; i < metadataKeys.length; i++) {\n var key = metadataKeys[i]\n\n if (key in this.metadata[term][field]) {\n this.metadata[term][field][key] = 
this.metadata[term][field][key].concat(metadata[key])\n } else {\n this.metadata[term][field][key] = metadata[key]\n }\n }\n}\n/**\n * A lunr.Query provides a programmatic way of defining queries to be performed\n * against a {@link lunr.Index}.\n *\n * Prefer constructing a lunr.Query using the {@link lunr.Index#query} method\n * so the query object is pre-initialized with the right index fields.\n *\n * @constructor\n * @property {lunr.Query~Clause[]} clauses - An array of query clauses.\n * @property {string[]} allFields - An array of all available fields in a lunr.Index.\n */\nlunr.Query = function (allFields) {\n this.clauses = []\n this.allFields = allFields\n}\n\n/**\n * Constants for indicating what kind of automatic wildcard insertion will be used when constructing a query clause.\n *\n * This allows wildcards to be added to the beginning and end of a term without having to manually do any string\n * concatenation.\n *\n * The wildcard constants can be bitwise combined to select both leading and trailing wildcards.\n *\n * @constant\n * @default\n * @property {number} wildcard.NONE - The term will have no wildcards inserted, this is the default behaviour\n * @property {number} wildcard.LEADING - Prepend the term with a wildcard, unless a leading wildcard already exists\n * @property {number} wildcard.TRAILING - Append a wildcard to the term, unless a trailing wildcard already exists\n * @see lunr.Query~Clause\n * @see lunr.Query#clause\n * @see lunr.Query#term\n * @example query term with trailing wildcard\n * query.term('foo', { wildcard: lunr.Query.wildcard.TRAILING })\n * @example query term with leading and trailing wildcard\n * query.term('foo', {\n * wildcard: lunr.Query.wildcard.LEADING | lunr.Query.wildcard.TRAILING\n * })\n */\n\nlunr.Query.wildcard = new String (\"*\")\nlunr.Query.wildcard.NONE = 0\nlunr.Query.wildcard.LEADING = 1\nlunr.Query.wildcard.TRAILING = 2\n\n/**\n * Constants for indicating what kind of presence a term must have in 
matching documents.\n *\n * @constant\n * @enum {number}\n * @see lunr.Query~Clause\n * @see lunr.Query#clause\n * @see lunr.Query#term\n * @example query term with required presence\n * query.term('foo', { presence: lunr.Query.presence.REQUIRED })\n */\nlunr.Query.presence = {\n /**\n * Term's presence in a document is optional, this is the default value.\n */\n OPTIONAL: 1,\n\n /**\n * Term's presence in a document is required, documents that do not contain\n * this term will not be returned.\n */\n REQUIRED: 2,\n\n /**\n * Term's presence in a document is prohibited, documents that do contain\n * this term will not be returned.\n */\n PROHIBITED: 3\n}\n\n/**\n * A single clause in a {@link lunr.Query} contains a term and details on how to\n * match that term against a {@link lunr.Index}.\n *\n * @typedef {Object} lunr.Query~Clause\n * @property {string[]} fields - The fields in an index this clause should be matched against.\n * @property {number} [boost=1] - Any boost that should be applied when matching this clause.\n * @property {number} [editDistance] - Whether the term should have fuzzy matching applied, and how fuzzy the match should be.\n * @property {boolean} [usePipeline] - Whether the term should be passed through the search pipeline.\n * @property {number} [wildcard=lunr.Query.wildcard.NONE] - Whether the term should have wildcards appended or prepended.\n * @property {number} [presence=lunr.Query.presence.OPTIONAL] - The terms presence in any matching documents.\n */\n\n/**\n * Adds a {@link lunr.Query~Clause} to this query.\n *\n * Unless the clause contains the fields to be matched all fields will be matched. 
In addition\n * a default boost of 1 is applied to the clause.\n *\n * @param {lunr.Query~Clause} clause - The clause to add to this query.\n * @see lunr.Query~Clause\n * @returns {lunr.Query}\n */\nlunr.Query.prototype.clause = function (clause) {\n if (!('fields' in clause)) {\n clause.fields = this.allFields\n }\n\n if (!('boost' in clause)) {\n clause.boost = 1\n }\n\n if (!('usePipeline' in clause)) {\n clause.usePipeline = true\n }\n\n if (!('wildcard' in clause)) {\n clause.wildcard = lunr.Query.wildcard.NONE\n }\n\n if ((clause.wildcard & lunr.Query.wildcard.LEADING) && (clause.term.charAt(0) != lunr.Query.wildcard)) {\n clause.term = \"*\" + clause.term\n }\n\n if ((clause.wildcard & lunr.Query.wildcard.TRAILING) && (clause.term.slice(-1) != lunr.Query.wildcard)) {\n clause.term = \"\" + clause.term + \"*\"\n }\n\n if (!('presence' in clause)) {\n clause.presence = lunr.Query.presence.OPTIONAL\n }\n\n this.clauses.push(clause)\n\n return this\n}\n\n/**\n * A negated query is one in which every clause has a presence of\n * prohibited. These queries require some special processing to return\n * the expected results.\n *\n * @returns boolean\n */\nlunr.Query.prototype.isNegated = function () {\n for (var i = 0; i < this.clauses.length; i++) {\n if (this.clauses[i].presence != lunr.Query.presence.PROHIBITED) {\n return false\n }\n }\n\n return true\n}\n\n/**\n * Adds a term to the current query, under the covers this will create a {@link lunr.Query~Clause}\n * to the list of clauses that make up this query.\n *\n * The term is used as is, i.e. no tokenization will be performed by this method. Instead conversion\n * to a token or token-like string should be done before calling this method.\n *\n * The term will be converted to a string by calling `toString`. 
Multiple terms can be passed as an\n * array, each term in the array will share the same options.\n *\n * @param {object|object[]} term - The term(s) to add to the query.\n * @param {object} [options] - Any additional properties to add to the query clause.\n * @returns {lunr.Query}\n * @see lunr.Query#clause\n * @see lunr.Query~Clause\n * @example adding a single term to a query\n * query.term(\"foo\")\n * @example adding a single term to a query and specifying search fields, term boost and automatic trailing wildcard\n * query.term(\"foo\", {\n * fields: [\"title\"],\n * boost: 10,\n * wildcard: lunr.Query.wildcard.TRAILING\n * })\n * @example using lunr.tokenizer to convert a string to tokens before using them as terms\n * query.term(lunr.tokenizer(\"foo bar\"))\n */\nlunr.Query.prototype.term = function (term, options) {\n if (Array.isArray(term)) {\n term.forEach(function (t) { this.term(t, lunr.utils.clone(options)) }, this)\n return this\n }\n\n var clause = options || {}\n clause.term = term.toString()\n\n this.clause(clause)\n\n return this\n}\nlunr.QueryParseError = function (message, start, end) {\n this.name = \"QueryParseError\"\n this.message = message\n this.start = start\n this.end = end\n}\n\nlunr.QueryParseError.prototype = new Error\nlunr.QueryLexer = function (str) {\n this.lexemes = []\n this.str = str\n this.length = str.length\n this.pos = 0\n this.start = 0\n this.escapeCharPositions = []\n}\n\nlunr.QueryLexer.prototype.run = function () {\n var state = lunr.QueryLexer.lexText\n\n while (state) {\n state = state(this)\n }\n}\n\nlunr.QueryLexer.prototype.sliceString = function () {\n var subSlices = [],\n sliceStart = this.start,\n sliceEnd = this.pos\n\n for (var i = 0; i < this.escapeCharPositions.length; i++) {\n sliceEnd = this.escapeCharPositions[i]\n subSlices.push(this.str.slice(sliceStart, sliceEnd))\n sliceStart = sliceEnd + 1\n }\n\n subSlices.push(this.str.slice(sliceStart, this.pos))\n this.escapeCharPositions.length = 0\n\n return 
subSlices.join('')\n}\n\nlunr.QueryLexer.prototype.emit = function (type) {\n this.lexemes.push({\n type: type,\n str: this.sliceString(),\n start: this.start,\n end: this.pos\n })\n\n this.start = this.pos\n}\n\nlunr.QueryLexer.prototype.escapeCharacter = function () {\n this.escapeCharPositions.push(this.pos - 1)\n this.pos += 1\n}\n\nlunr.QueryLexer.prototype.next = function () {\n if (this.pos >= this.length) {\n return lunr.QueryLexer.EOS\n }\n\n var char = this.str.charAt(this.pos)\n this.pos += 1\n return char\n}\n\nlunr.QueryLexer.prototype.width = function () {\n return this.pos - this.start\n}\n\nlunr.QueryLexer.prototype.ignore = function () {\n if (this.start == this.pos) {\n this.pos += 1\n }\n\n this.start = this.pos\n}\n\nlunr.QueryLexer.prototype.backup = function () {\n this.pos -= 1\n}\n\nlunr.QueryLexer.prototype.acceptDigitRun = function () {\n var char, charCode\n\n do {\n char = this.next()\n charCode = char.charCodeAt(0)\n } while (charCode > 47 && charCode < 58)\n\n if (char != lunr.QueryLexer.EOS) {\n this.backup()\n }\n}\n\nlunr.QueryLexer.prototype.more = function () {\n return this.pos < this.length\n}\n\nlunr.QueryLexer.EOS = 'EOS'\nlunr.QueryLexer.FIELD = 'FIELD'\nlunr.QueryLexer.TERM = 'TERM'\nlunr.QueryLexer.EDIT_DISTANCE = 'EDIT_DISTANCE'\nlunr.QueryLexer.BOOST = 'BOOST'\nlunr.QueryLexer.PRESENCE = 'PRESENCE'\n\nlunr.QueryLexer.lexField = function (lexer) {\n lexer.backup()\n lexer.emit(lunr.QueryLexer.FIELD)\n lexer.ignore()\n return lunr.QueryLexer.lexText\n}\n\nlunr.QueryLexer.lexTerm = function (lexer) {\n if (lexer.width() > 1) {\n lexer.backup()\n lexer.emit(lunr.QueryLexer.TERM)\n }\n\n lexer.ignore()\n\n if (lexer.more()) {\n return lunr.QueryLexer.lexText\n }\n}\n\nlunr.QueryLexer.lexEditDistance = function (lexer) {\n lexer.ignore()\n lexer.acceptDigitRun()\n lexer.emit(lunr.QueryLexer.EDIT_DISTANCE)\n return lunr.QueryLexer.lexText\n}\n\nlunr.QueryLexer.lexBoost = function (lexer) {\n lexer.ignore()\n 
lexer.acceptDigitRun()\n lexer.emit(lunr.QueryLexer.BOOST)\n return lunr.QueryLexer.lexText\n}\n\nlunr.QueryLexer.lexEOS = function (lexer) {\n if (lexer.width() > 0) {\n lexer.emit(lunr.QueryLexer.TERM)\n }\n}\n\n// This matches the separator used when tokenising fields\n// within a document. These should match otherwise it is\n// not possible to search for some tokens within a document.\n//\n// It is possible for the user to change the separator on the\n// tokenizer so it _might_ clash with any other of the special\n// characters already used within the search string, e.g. :.\n//\n// This means that it is possible to change the separator in\n// such a way that makes some words unsearchable using a search\n// string.\nlunr.QueryLexer.termSeparator = lunr.tokenizer.separator\n\nlunr.QueryLexer.lexText = function (lexer) {\n while (true) {\n var char = lexer.next()\n\n if (char == lunr.QueryLexer.EOS) {\n return lunr.QueryLexer.lexEOS\n }\n\n // Escape character is '\\'\n if (char.charCodeAt(0) == 92) {\n lexer.escapeCharacter()\n continue\n }\n\n if (char == \":\") {\n return lunr.QueryLexer.lexField\n }\n\n if (char == \"~\") {\n lexer.backup()\n if (lexer.width() > 0) {\n lexer.emit(lunr.QueryLexer.TERM)\n }\n return lunr.QueryLexer.lexEditDistance\n }\n\n if (char == \"^\") {\n lexer.backup()\n if (lexer.width() > 0) {\n lexer.emit(lunr.QueryLexer.TERM)\n }\n return lunr.QueryLexer.lexBoost\n }\n\n // \"+\" indicates term presence is required\n // checking for length to ensure that only\n // leading \"+\" are considered\n if (char == \"+\" && lexer.width() === 1) {\n lexer.emit(lunr.QueryLexer.PRESENCE)\n return lunr.QueryLexer.lexText\n }\n\n // \"-\" indicates term presence is prohibited\n // checking for length to ensure that only\n // leading \"-\" are considered\n if (char == \"-\" && lexer.width() === 1) {\n lexer.emit(lunr.QueryLexer.PRESENCE)\n return lunr.QueryLexer.lexText\n }\n\n if (char.match(lunr.QueryLexer.termSeparator)) {\n return 
lunr.QueryLexer.lexTerm\n }\n }\n}\n\nlunr.QueryParser = function (str, query) {\n this.lexer = new lunr.QueryLexer (str)\n this.query = query\n this.currentClause = {}\n this.lexemeIdx = 0\n}\n\nlunr.QueryParser.prototype.parse = function () {\n this.lexer.run()\n this.lexemes = this.lexer.lexemes\n\n var state = lunr.QueryParser.parseClause\n\n while (state) {\n state = state(this)\n }\n\n return this.query\n}\n\nlunr.QueryParser.prototype.peekLexeme = function () {\n return this.lexemes[this.lexemeIdx]\n}\n\nlunr.QueryParser.prototype.consumeLexeme = function () {\n var lexeme = this.peekLexeme()\n this.lexemeIdx += 1\n return lexeme\n}\n\nlunr.QueryParser.prototype.nextClause = function () {\n var completedClause = this.currentClause\n this.query.clause(completedClause)\n this.currentClause = {}\n}\n\nlunr.QueryParser.parseClause = function (parser) {\n var lexeme = parser.peekLexeme()\n\n if (lexeme == undefined) {\n return\n }\n\n switch (lexeme.type) {\n case lunr.QueryLexer.PRESENCE:\n return lunr.QueryParser.parsePresence\n case lunr.QueryLexer.FIELD:\n return lunr.QueryParser.parseField\n case lunr.QueryLexer.TERM:\n return lunr.QueryParser.parseTerm\n default:\n var errorMessage = \"expected either a field or a term, found \" + lexeme.type\n\n if (lexeme.str.length >= 1) {\n errorMessage += \" with value '\" + lexeme.str + \"'\"\n }\n\n throw new lunr.QueryParseError (errorMessage, lexeme.start, lexeme.end)\n }\n}\n\nlunr.QueryParser.parsePresence = function (parser) {\n var lexeme = parser.consumeLexeme()\n\n if (lexeme == undefined) {\n return\n }\n\n switch (lexeme.str) {\n case \"-\":\n parser.currentClause.presence = lunr.Query.presence.PROHIBITED\n break\n case \"+\":\n parser.currentClause.presence = lunr.Query.presence.REQUIRED\n break\n default:\n var errorMessage = \"unrecognised presence operator'\" + lexeme.str + \"'\"\n throw new lunr.QueryParseError (errorMessage, lexeme.start, lexeme.end)\n }\n\n var nextLexeme = parser.peekLexeme()\n\n if 
(nextLexeme == undefined) {\n var errorMessage = \"expecting term or field, found nothing\"\n throw new lunr.QueryParseError (errorMessage, lexeme.start, lexeme.end)\n }\n\n switch (nextLexeme.type) {\n case lunr.QueryLexer.FIELD:\n return lunr.QueryParser.parseField\n case lunr.QueryLexer.TERM:\n return lunr.QueryParser.parseTerm\n default:\n var errorMessage = \"expecting term or field, found '\" + nextLexeme.type + \"'\"\n throw new lunr.QueryParseError (errorMessage, nextLexeme.start, nextLexeme.end)\n }\n}\n\nlunr.QueryParser.parseField = function (parser) {\n var lexeme = parser.consumeLexeme()\n\n if (lexeme == undefined) {\n return\n }\n\n if (parser.query.allFields.indexOf(lexeme.str) == -1) {\n var possibleFields = parser.query.allFields.map(function (f) { return \"'\" + f + \"'\" }).join(', '),\n errorMessage = \"unrecognised field '\" + lexeme.str + \"', possible fields: \" + possibleFields\n\n throw new lunr.QueryParseError (errorMessage, lexeme.start, lexeme.end)\n }\n\n parser.currentClause.fields = [lexeme.str]\n\n var nextLexeme = parser.peekLexeme()\n\n if (nextLexeme == undefined) {\n var errorMessage = \"expecting term, found nothing\"\n throw new lunr.QueryParseError (errorMessage, lexeme.start, lexeme.end)\n }\n\n switch (nextLexeme.type) {\n case lunr.QueryLexer.TERM:\n return lunr.QueryParser.parseTerm\n default:\n var errorMessage = \"expecting term, found '\" + nextLexeme.type + \"'\"\n throw new lunr.QueryParseError (errorMessage, nextLexeme.start, nextLexeme.end)\n }\n}\n\nlunr.QueryParser.parseTerm = function (parser) {\n var lexeme = parser.consumeLexeme()\n\n if (lexeme == undefined) {\n return\n }\n\n parser.currentClause.term = lexeme.str.toLowerCase()\n\n if (lexeme.str.indexOf(\"*\") != -1) {\n parser.currentClause.usePipeline = false\n }\n\n var nextLexeme = parser.peekLexeme()\n\n if (nextLexeme == undefined) {\n parser.nextClause()\n return\n }\n\n switch (nextLexeme.type) {\n case lunr.QueryLexer.TERM:\n parser.nextClause()\n 
return lunr.QueryParser.parseTerm\n case lunr.QueryLexer.FIELD:\n parser.nextClause()\n return lunr.QueryParser.parseField\n case lunr.QueryLexer.EDIT_DISTANCE:\n return lunr.QueryParser.parseEditDistance\n case lunr.QueryLexer.BOOST:\n return lunr.QueryParser.parseBoost\n case lunr.QueryLexer.PRESENCE:\n parser.nextClause()\n return lunr.QueryParser.parsePresence\n default:\n var errorMessage = \"Unexpected lexeme type '\" + nextLexeme.type + \"'\"\n throw new lunr.QueryParseError (errorMessage, nextLexeme.start, nextLexeme.end)\n }\n}\n\nlunr.QueryParser.parseEditDistance = function (parser) {\n var lexeme = parser.consumeLexeme()\n\n if (lexeme == undefined) {\n return\n }\n\n var editDistance = parseInt(lexeme.str, 10)\n\n if (isNaN(editDistance)) {\n var errorMessage = \"edit distance must be numeric\"\n throw new lunr.QueryParseError (errorMessage, lexeme.start, lexeme.end)\n }\n\n parser.currentClause.editDistance = editDistance\n\n var nextLexeme = parser.peekLexeme()\n\n if (nextLexeme == undefined) {\n parser.nextClause()\n return\n }\n\n switch (nextLexeme.type) {\n case lunr.QueryLexer.TERM:\n parser.nextClause()\n return lunr.QueryParser.parseTerm\n case lunr.QueryLexer.FIELD:\n parser.nextClause()\n return lunr.QueryParser.parseField\n case lunr.QueryLexer.EDIT_DISTANCE:\n return lunr.QueryParser.parseEditDistance\n case lunr.QueryLexer.BOOST:\n return lunr.QueryParser.parseBoost\n case lunr.QueryLexer.PRESENCE:\n parser.nextClause()\n return lunr.QueryParser.parsePresence\n default:\n var errorMessage = \"Unexpected lexeme type '\" + nextLexeme.type + \"'\"\n throw new lunr.QueryParseError (errorMessage, nextLexeme.start, nextLexeme.end)\n }\n}\n\nlunr.QueryParser.parseBoost = function (parser) {\n var lexeme = parser.consumeLexeme()\n\n if (lexeme == undefined) {\n return\n }\n\n var boost = parseInt(lexeme.str, 10)\n\n if (isNaN(boost)) {\n var errorMessage = \"boost must be numeric\"\n throw new lunr.QueryParseError (errorMessage, lexeme.start, 
lexeme.end)\n }\n\n parser.currentClause.boost = boost\n\n var nextLexeme = parser.peekLexeme()\n\n if (nextLexeme == undefined) {\n parser.nextClause()\n return\n }\n\n switch (nextLexeme.type) {\n case lunr.QueryLexer.TERM:\n parser.nextClause()\n return lunr.QueryParser.parseTerm\n case lunr.QueryLexer.FIELD:\n parser.nextClause()\n return lunr.QueryParser.parseField\n case lunr.QueryLexer.EDIT_DISTANCE:\n return lunr.QueryParser.parseEditDistance\n case lunr.QueryLexer.BOOST:\n return lunr.QueryParser.parseBoost\n case lunr.QueryLexer.PRESENCE:\n parser.nextClause()\n return lunr.QueryParser.parsePresence\n default:\n var errorMessage = \"Unexpected lexeme type '\" + nextLexeme.type + \"'\"\n throw new lunr.QueryParseError (errorMessage, nextLexeme.start, nextLexeme.end)\n }\n}\n\n /**\n * export the module via AMD, CommonJS or as a browser global\n * Export code from https://github.com/umdjs/umd/blob/master/returnExports.js\n */\n ;(function (root, factory) {\n if (typeof define === 'function' && define.amd) {\n // AMD. Register as an anonymous module.\n define(factory)\n } else if (typeof exports === 'object') {\n /**\n * Node. 
Does not work with strict CommonJS, but\n * only CommonJS-like enviroments that support module.exports,\n * like Node.\n */\n module.exports = factory()\n } else {\n // Browser globals (root is window)\n root.lunr = factory()\n }\n }(this, function () {\n /**\n * Just return a value to define the module export.\n * This example returns an object, but the module\n * can return a function as the exported value.\n */\n return lunr\n }))\n})();\n", "/*!\n * escape-html\n * Copyright(c) 2012-2013 TJ Holowaychuk\n * Copyright(c) 2015 Andreas Lubbe\n * Copyright(c) 2015 Tiancheng \"Timothy\" Gu\n * MIT Licensed\n */\n\n'use strict';\n\n/**\n * Module variables.\n * @private\n */\n\nvar matchHtmlRegExp = /[\"'&<>]/;\n\n/**\n * Module exports.\n * @public\n */\n\nmodule.exports = escapeHtml;\n\n/**\n * Escape special characters in the given string of html.\n *\n * @param {string} string The string to escape for inserting into HTML\n * @return {string}\n * @public\n */\n\nfunction escapeHtml(string) {\n var str = '' + string;\n var match = matchHtmlRegExp.exec(str);\n\n if (!match) {\n return str;\n }\n\n var escape;\n var html = '';\n var index = 0;\n var lastIndex = 0;\n\n for (index = match.index; index < str.length; index++) {\n switch (str.charCodeAt(index)) {\n case 34: // \"\n escape = '"';\n break;\n case 38: // &\n escape = '&';\n break;\n case 39: // '\n escape = ''';\n break;\n case 60: // <\n escape = '<';\n break;\n case 62: // >\n escape = '>';\n break;\n default:\n continue;\n }\n\n if (lastIndex !== index) {\n html += str.substring(lastIndex, index);\n }\n\n lastIndex = index + 1;\n html += escape;\n }\n\n return lastIndex !== index\n ? 
html + str.substring(lastIndex, index)\n : html;\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport lunr from \"lunr\"\n\nimport \"~/polyfills\"\n\nimport { Search, SearchIndexConfig } from \"../../_\"\nimport {\n SearchMessage,\n SearchMessageType\n} from \"../message\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Add support for usage with `iframe-worker` polyfill\n *\n * While `importScripts` is synchronous when executed inside of a web worker,\n * it's not possible to provide a synchronous polyfilled implementation. 
The\n * cool thing is that awaiting a non-Promise is a noop, so extending the type\n * definition to return a `Promise` shouldn't break anything.\n *\n * @see https://bit.ly/2PjDnXi - GitHub comment\n */\ndeclare global {\n function importScripts(...urls: string[]): Promise | void\n}\n\n/* ----------------------------------------------------------------------------\n * Data\n * ------------------------------------------------------------------------- */\n\n/**\n * Search index\n */\nlet index: Search\n\n/* ----------------------------------------------------------------------------\n * Helper functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Fetch (= import) multi-language support through `lunr-languages`\n *\n * This function automatically imports the stemmers necessary to process the\n * languages, which are defined through the search index configuration.\n *\n * If the worker runs inside of an `iframe` (when using `iframe-worker` as\n * a shim), the base URL for the stemmers to be loaded must be determined by\n * searching for the first `script` element with a `src` attribute, which will\n * contain the contents of this script.\n *\n * @param config - Search index configuration\n *\n * @returns Promise resolving with no result\n */\nasync function setupSearchLanguages(\n config: SearchIndexConfig\n): Promise {\n let base = \"../lunr\"\n\n /* Detect `iframe-worker` and fix base URL */\n if (typeof parent !== \"undefined\" && \"IFrameWorker\" in parent) {\n const worker = document.querySelector(\"script[src]\")!\n const [path] = worker.src.split(\"/worker\")\n\n /* Prefix base with path */\n base = base.replace(\"..\", path)\n }\n\n /* Add scripts for languages */\n const scripts = []\n for (const lang of config.lang) {\n switch (lang) {\n\n /* Add segmenter for Japanese */\n case \"ja\":\n scripts.push(`${base}/tinyseg.js`)\n break\n\n /* Add segmenter for Hindi and Thai */\n case \"hi\":\n case \"th\":\n 
scripts.push(`${base}/wordcut.js`)\n break\n }\n\n /* Add language support */\n if (lang !== \"en\")\n scripts.push(`${base}/min/lunr.${lang}.min.js`)\n }\n\n /* Add multi-language support */\n if (config.lang.length > 1)\n scripts.push(`${base}/min/lunr.multi.min.js`)\n\n /* Load scripts synchronously */\n if (scripts.length)\n await importScripts(\n `${base}/min/lunr.stemmer.support.min.js`,\n ...scripts\n )\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Message handler\n *\n * @param message - Source message\n *\n * @returns Target message\n */\nexport async function handler(\n message: SearchMessage\n): Promise {\n switch (message.type) {\n\n /* Search setup message */\n case SearchMessageType.SETUP:\n await setupSearchLanguages(message.data.config)\n index = new Search(message.data)\n return {\n type: SearchMessageType.READY\n }\n\n /* Search query message */\n case SearchMessageType.QUERY:\n return {\n type: SearchMessageType.RESULT,\n data: index ? 
index.search(message.data) : { items: [] }\n }\n\n /* All other messages */\n default:\n throw new TypeError(\"Invalid message type\")\n }\n}\n\n/* ----------------------------------------------------------------------------\n * Worker\n * ------------------------------------------------------------------------- */\n\n/* @ts-expect-error - expose Lunr.js in global scope, or stemmers won't work */\nself.lunr = lunr\n\n/* Handle messages */\naddEventListener(\"message\", async ev => {\n postMessage(await handler(ev.data))\n})\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\n/* ----------------------------------------------------------------------------\n * Polyfills\n * ------------------------------------------------------------------------- */\n\n/* Polyfill `Object.entries` */\nif (!Object.entries)\n Object.entries = function (obj: object) {\n const data: [string, string][] = []\n for (const key of Object.keys(obj))\n // @ts-expect-error - ignore property access warning\n data.push([key, obj[key]])\n\n /* Return entries */\n return data\n }\n\n/* Polyfill `Object.values` */\nif (!Object.values)\n Object.values = function (obj: object) {\n const data: string[] = []\n for (const key of Object.keys(obj))\n // @ts-expect-error - ignore property access warning\n data.push(obj[key])\n\n /* Return values */\n return data\n }\n\n/* ------------------------------------------------------------------------- */\n\n/* Polyfills for `Element` */\nif (typeof Element !== \"undefined\") {\n\n /* Polyfill `Element.scrollTo` */\n if (!Element.prototype.scrollTo)\n Element.prototype.scrollTo = function (\n x?: ScrollToOptions | number, y?: number\n ): void {\n if (typeof x === \"object\") {\n this.scrollLeft = x.left!\n this.scrollTop = x.top!\n } else {\n this.scrollLeft = x!\n this.scrollTop = y!\n }\n }\n\n /* Polyfill `Element.replaceWith` */\n if (!Element.prototype.replaceWith)\n Element.prototype.replaceWith = function (\n ...nodes: Array\n ): void {\n const parent = this.parentNode\n if (parent) {\n if (nodes.length === 0)\n parent.removeChild(this)\n\n /* Replace children and create text nodes */\n for (let i = nodes.length - 1; i >= 0; i--) {\n let node = nodes[i]\n if (typeof node !== \"object\")\n node = document.createTextNode(node)\n else if 
(node.parentNode)\n node.parentNode.removeChild(node)\n\n /* Replace child or insert before previous sibling */\n if (!i)\n parent.replaceChild(node, this)\n else\n parent.insertBefore(this.previousSibling!, node)\n }\n }\n }\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport escapeHTML from \"escape-html\"\n\nimport { SearchIndexDocument } from \"../_\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Search document\n */\nexport interface SearchDocument extends SearchIndexDocument {\n parent?: SearchIndexDocument /* Parent article */\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Search document mapping\n */\nexport type SearchDocumentMap = Map\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Create a search document mapping\n *\n * @param docs - Search index documents\n *\n * @returns Search document map\n */\nexport function setupSearchDocumentMap(\n docs: SearchIndexDocument[]\n): SearchDocumentMap {\n const documents = new Map()\n const parents = new Set()\n for (const doc of docs) {\n const [path, hash] = doc.location.split(\"#\")\n\n /* Extract location, title and tags */\n const location = doc.location\n const title = doc.title\n const tags = doc.tags\n\n /* Escape and cleanup text */\n const text = escapeHTML(doc.text)\n .replace(/\\s+(?=[,.:;!?])/g, \"\")\n .replace(/\\s+/g, \" \")\n\n /* Handle section */\n if (hash) {\n const parent = documents.get(path)!\n\n /* Ignore first section, override article */\n if (!parents.has(parent)) {\n parent.title = doc.title\n parent.text = text\n\n /* Remember that we processed the article */\n parents.add(parent)\n\n /* Add subsequent section */\n } else {\n documents.set(location, 
{\n location,\n title,\n text,\n parent\n })\n }\n\n /* Add article */\n } else {\n documents.set(location, {\n location,\n title,\n text,\n ...tags && { tags }\n })\n }\n }\n return documents\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport escapeHTML from \"escape-html\"\n\nimport { SearchIndexConfig } from \"../_\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Search highlight function\n *\n * @param value - Value\n *\n * @returns Highlighted value\n */\nexport type SearchHighlightFn = (value: string) => string\n\n/**\n * Search highlight factory function\n *\n * @param query - Query value\n *\n * @returns Search highlight function\n */\nexport type SearchHighlightFactoryFn = (query: string) => SearchHighlightFn\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Create a search highlighter\n *\n * @param config - Search index configuration\n * @param escape - Whether to escape HTML\n *\n * @returns Search highlight factory function\n */\nexport function setupSearchHighlighter(\n config: SearchIndexConfig, escape: boolean\n): SearchHighlightFactoryFn {\n const separator = new RegExp(config.separator, \"img\")\n const highlight = (_: unknown, data: string, term: string) => {\n return `${data}${term}`\n }\n\n /* Return factory function */\n return (query: string) => {\n query = query\n .replace(/[\\s*+\\-:~^]+/g, \" \")\n .trim()\n\n /* Create search term match expression */\n const match = new RegExp(`(^|${config.separator})(${\n query\n .replace(/[|\\\\{}()[\\]^$+*?.-]/g, \"\\\\$&\")\n .replace(separator, \"|\")\n })`, \"img\")\n\n /* Highlight string value */\n return value => (\n escape\n ? 
escapeHTML(value)\n : value\n )\n .replace(match, highlight)\n .replace(/<\\/mark>(\\s+)]*>/img, \"$1\")\n }\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Search query clause\n */\nexport interface SearchQueryClause {\n presence: lunr.Query.presence /* Clause presence */\n term: string /* Clause term */\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Search query terms\n */\nexport type SearchQueryTerms = Record\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Parse a search query for analysis\n *\n * @param 
value - Query value\n *\n * @returns Search query clauses\n */\nexport function parseSearchQuery(\n value: string\n): SearchQueryClause[] {\n const query = new (lunr as any).Query([\"title\", \"text\"])\n const parser = new (lunr as any).QueryParser(value, query)\n\n /* Parse and return query clauses */\n parser.parse()\n return query.clauses\n}\n\n/**\n * Analyze the search query clauses in regard to the search terms found\n *\n * @param query - Search query clauses\n * @param terms - Search terms\n *\n * @returns Search query terms\n */\nexport function getSearchQueryTerms(\n query: SearchQueryClause[], terms: string[]\n): SearchQueryTerms {\n const clauses = new Set(query)\n\n /* Match query clauses against terms */\n const result: SearchQueryTerms = {}\n for (let t = 0; t < terms.length; t++)\n for (const clause of clauses)\n if (terms[t].startsWith(clause.term)) {\n result[clause.term] = true\n clauses.delete(clause)\n }\n\n /* Annotate unmatched non-stopword query clauses */\n for (const clause of clauses)\n if (lunr.stopWordFilter?.(clause.term as any))\n result[clause.term] = false\n\n /* Return query terms */\n return result\n}\n", "/*\n * Copyright (c) 2016-2022 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR 
PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport {\n SearchDocument,\n SearchDocumentMap,\n setupSearchDocumentMap\n} from \"../document\"\nimport {\n SearchHighlightFactoryFn,\n setupSearchHighlighter\n} from \"../highlighter\"\nimport { SearchOptions } from \"../options\"\nimport {\n SearchQueryTerms,\n getSearchQueryTerms,\n parseSearchQuery\n} from \"../query\"\n\n/* ----------------------------------------------------------------------------\n * Types\n * ------------------------------------------------------------------------- */\n\n/**\n * Search index configuration\n */\nexport interface SearchIndexConfig {\n lang: string[] /* Search languages */\n separator: string /* Search separator */\n}\n\n/**\n * Search index document\n */\nexport interface SearchIndexDocument {\n location: string /* Document location */\n title: string /* Document title */\n text: string /* Document text */\n tags?: string[] /* Document tags */\n boost?: number /* Document boost */\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Search index\n *\n * This interfaces describes the format of the `search_index.json` file which\n * is automatically built by the MkDocs search plugin.\n */\nexport interface SearchIndex {\n config: SearchIndexConfig /* Search index configuration */\n docs: SearchIndexDocument[] /* Search index documents */\n options: SearchOptions /* Search options */\n}\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Search metadata\n */\nexport interface SearchMetadata {\n score: number /* Score (relevance) */\n terms: SearchQueryTerms /* Search query terms */\n}\n\n/* 
------------------------------------------------------------------------- */\n\n/**\n * Search result document\n */\nexport type SearchResultDocument = SearchDocument & SearchMetadata\n\n/**\n * Search result item\n */\nexport type SearchResultItem = SearchResultDocument[]\n\n/* ------------------------------------------------------------------------- */\n\n/**\n * Search result\n */\nexport interface SearchResult {\n items: SearchResultItem[] /* Search result items */\n suggestions?: string[] /* Search suggestions */\n}\n\n/* ----------------------------------------------------------------------------\n * Functions\n * ------------------------------------------------------------------------- */\n\n/**\n * Compute the difference of two lists of strings\n *\n * @param a - 1st list of strings\n * @param b - 2nd list of strings\n *\n * @returns Difference\n */\nfunction difference(a: string[], b: string[]): string[] {\n const [x, y] = [new Set(a), new Set(b)]\n return [\n ...new Set([...x].filter(value => !y.has(value)))\n ]\n}\n\n/* ----------------------------------------------------------------------------\n * Class\n * ------------------------------------------------------------------------- */\n\n/**\n * Search index\n */\nexport class Search {\n\n /**\n * Search document mapping\n *\n * A mapping of URLs (including hash fragments) to the actual articles and\n * sections of the documentation. 
The search document mapping must be created\n * regardless of whether the index was prebuilt or not, as Lunr.js itself\n * only stores the actual index.\n */\n protected documents: SearchDocumentMap\n\n /**\n * Search highlight factory function\n */\n protected highlight: SearchHighlightFactoryFn\n\n /**\n * The underlying Lunr.js search index\n */\n protected index: lunr.Index\n\n /**\n * Search options\n */\n protected options: SearchOptions\n\n /**\n * Create the search integration\n *\n * @param data - Search index\n */\n public constructor({ config, docs, options }: SearchIndex) {\n this.options = options\n\n /* Set up document map and highlighter factory */\n this.documents = setupSearchDocumentMap(docs)\n this.highlight = setupSearchHighlighter(config, false)\n\n /* Set separator for tokenizer */\n lunr.tokenizer.separator = new RegExp(config.separator)\n\n /* Create search index */\n this.index = lunr(function () {\n\n /* Set up multi-language support */\n if (config.lang.length === 1 && config.lang[0] !== \"en\") {\n this.use((lunr as any)[config.lang[0]])\n } else if (config.lang.length > 1) {\n this.use((lunr as any).multiLanguage(...config.lang))\n }\n\n /* Compute functions to be removed from the pipeline */\n const fns = difference([\n \"trimmer\", \"stopWordFilter\", \"stemmer\"\n ], options.pipeline)\n\n /* Remove functions from the pipeline for registered languages */\n for (const lang of config.lang.map(language => (\n language === \"en\" ? 
lunr : (lunr as any)[language]\n ))) {\n for (const fn of fns) {\n this.pipeline.remove(lang[fn])\n this.searchPipeline.remove(lang[fn])\n }\n }\n\n /* Set up reference */\n this.ref(\"location\")\n\n /* Set up fields */\n this.field(\"title\", { boost: 1e3 })\n this.field(\"text\")\n this.field(\"tags\", { boost: 1e6, extractor: doc => {\n const { tags = [] } = doc as SearchDocument\n return tags.reduce((list, tag) => [\n ...list,\n ...lunr.tokenizer(tag)\n ], [] as lunr.Token[])\n } })\n\n /* Index documents */\n for (const doc of docs)\n this.add(doc, { boost: doc.boost })\n })\n }\n\n /**\n * Search for matching documents\n *\n * The search index which MkDocs provides is divided up into articles, which\n * contain the whole content of the individual pages, and sections, which only\n * contain the contents of the subsections obtained by breaking the individual\n * pages up at `h1` ... `h6`. As there may be many sections on different pages\n * with identical titles (for example within this very project, e.g. \"Usage\"\n * or \"Installation\"), they need to be put into the context of the containing\n * page. 
For this reason, section results are grouped within their respective\n * articles which are the top-level results that are returned.\n *\n * @param query - Query value\n *\n * @returns Search results\n */\n public search(query: string): SearchResult {\n if (query) {\n try {\n const highlight = this.highlight(query)\n\n /* Parse query to extract clauses for analysis */\n const clauses = parseSearchQuery(query)\n .filter(clause => (\n clause.presence !== lunr.Query.presence.PROHIBITED\n ))\n\n /* Perform search and post-process results */\n const groups = this.index.search(`${query}*`)\n\n /* Apply post-query boosts based on title and search query terms */\n .reduce((item, { ref, score, matchData }) => {\n const document = this.documents.get(ref)\n if (typeof document !== \"undefined\") {\n const { location, title, text, tags, parent } = document\n\n /* Compute and analyze search query terms */\n const terms = getSearchQueryTerms(\n clauses,\n Object.keys(matchData.metadata)\n )\n\n /* Highlight title and text and apply post-query boosts */\n const boost = +!parent + +Object.values(terms).every(t => t)\n item.push({\n location,\n title: highlight(title),\n text: highlight(text),\n ...tags && { tags: tags.map(highlight) },\n score: score * (1 + boost),\n terms\n })\n }\n return item\n }, [])\n\n /* Sort search results again after applying boosts */\n .sort((a, b) => b.score - a.score)\n\n /* Group search results by page */\n .reduce((items, result) => {\n const document = this.documents.get(result.location)\n if (typeof document !== \"undefined\") {\n const ref = \"parent\" in document\n ? 
document.parent!.location\n : document.location\n items.set(ref, [...items.get(ref) || [], result])\n }\n return items\n }, new Map())\n\n /* Generate search suggestions, if desired */\n let suggestions: string[] | undefined\n if (this.options.suggestions) {\n const titles = this.index.query(builder => {\n for (const clause of clauses)\n builder.term(clause.term, {\n fields: [\"title\"],\n presence: lunr.Query.presence.REQUIRED,\n wildcard: lunr.Query.wildcard.TRAILING\n })\n })\n\n /* Retrieve suggestions for best match */\n suggestions = titles.length\n ? Object.keys(titles[0].matchData.metadata)\n : []\n }\n\n /* Return items and suggestions */\n return {\n items: [...groups.values()],\n ...typeof suggestions !== \"undefined\" && { suggestions }\n }\n\n /* Log errors to console (for now) */\n } catch {\n console.warn(`Invalid query: ${query} \u2013 see https://bit.ly/2s3ChXG`)\n }\n }\n\n /* Return nothing in case of error or empty query */\n return { items: [] }\n }\n}\n"], + "mappings": 
"glCAAA,IAAAA,GAAAC,EAAA,CAAAC,GAAAC,KAAA;AAAA;AAAA;AAAA;AAAA,IAME,UAAU,CAiCZ,IAAIC,EAAO,SAAUC,EAAQ,CAC3B,IAAIC,EAAU,IAAIF,EAAK,QAEvB,OAAAE,EAAQ,SAAS,IACfF,EAAK,QACLA,EAAK,eACLA,EAAK,OACP,EAEAE,EAAQ,eAAe,IACrBF,EAAK,OACP,EAEAC,EAAO,KAAKC,EAASA,CAAO,EACrBA,EAAQ,MAAM,CACvB,EAEAF,EAAK,QAAU,QACf;AAAA;AAAA;AAAA,GASAA,EAAK,MAAQ,CAAC,EASdA,EAAK,MAAM,KAAQ,SAAUG,EAAQ,CAEnC,OAAO,SAAUC,EAAS,CACpBD,EAAO,SAAW,QAAQ,MAC5B,QAAQ,KAAKC,CAAO,CAExB,CAEF,EAAG,IAAI,EAaPJ,EAAK,MAAM,SAAW,SAAUK,EAAK,CACnC,OAAsBA,GAAQ,KACrB,GAEAA,EAAI,SAAS,CAExB,EAkBAL,EAAK,MAAM,MAAQ,SAAUK,EAAK,CAChC,GAAIA,GAAQ,KACV,OAAOA,EAMT,QAHIC,EAAQ,OAAO,OAAO,IAAI,EAC1BC,EAAO,OAAO,KAAKF,CAAG,EAEjB,EAAI,EAAG,EAAIE,EAAK,OAAQ,IAAK,CACpC,IAAIC,EAAMD,EAAK,GACXE,EAAMJ,EAAIG,GAEd,GAAI,MAAM,QAAQC,CAAG,EAAG,CACtBH,EAAME,GAAOC,EAAI,MAAM,EACvB,QACF,CAEA,GAAI,OAAOA,GAAQ,UACf,OAAOA,GAAQ,UACf,OAAOA,GAAQ,UAAW,CAC5BH,EAAME,GAAOC,EACb,QACF,CAEA,MAAM,IAAI,UAAU,uDAAuD,CAC7E,CAEA,OAAOH,CACT,EACAN,EAAK,SAAW,SAAUU,EAAQC,EAAWC,EAAa,CACxD,KAAK,OAASF,EACd,KAAK,UAAYC,EACjB,KAAK,aAAeC,CACtB,EAEAZ,EAAK,SAAS,OAAS,IAEvBA,EAAK,SAAS,WAAa,SAAUa,EAAG,CACtC,IAAIC,EAAID,EAAE,QAAQb,EAAK,SAAS,MAAM,EAEtC,GAAIc,IAAM,GACR,KAAM,6BAGR,IAAIC,EAAWF,EAAE,MAAM,EAAGC,CAAC,EACvBJ,EAASG,EAAE,MAAMC,EAAI,CAAC,EAE1B,OAAO,IAAId,EAAK,SAAUU,EAAQK,EAAUF,CAAC,CAC/C,EAEAb,EAAK,SAAS,UAAU,SAAW,UAAY,CAC7C,OAAI,KAAK,cAAgB,OACvB,KAAK,aAAe,KAAK,UAAYA,EAAK,SAAS,OAAS,KAAK,QAG5D,KAAK,YACd,EACA;AAAA;AAAA;AAAA,GAUAA,EAAK,IAAM,SAAUgB,EAAU,CAG7B,GAFA,KAAK,SAAW,OAAO,OAAO,IAAI,EAE9BA,EAAU,CACZ,KAAK,OAASA,EAAS,OAEvB,QAASC,EAAI,EAAGA,EAAI,KAAK,OAAQA,IAC/B,KAAK,SAASD,EAASC,IAAM,EAEjC,MACE,KAAK,OAAS,CAElB,EASAjB,EAAK,IAAI,SAAW,CAClB,UAAW,SAAUkB,EAAO,CAC1B,OAAOA,CACT,EAEA,MAAO,UAAY,CACjB,OAAO,IACT,EAEA,SAAU,UAAY,CACpB,MAAO,EACT,CACF,EASAlB,EAAK,IAAI,MAAQ,CACf,UAAW,UAAY,CACrB,OAAO,IACT,EAEA,MAAO,SAAUkB,EAAO,CACtB,OAAOA,CACT,EAEA,SAAU,UAAY,CACpB,MAAO,EACT,CACF,EAQAlB,EAAK,IAAI,UAAU,SAAW,SAAUmB,EAAQ,CAC9C,MAAO,CAAC,CAAC,KAAK,SAASA,EACzB,EAUAnB,EAAK,IAAI,UAAU,UAAY,SAAUkB,EAAO,CAC9C,IAAIE,EAAGC,EAAGL,E
AAUM,EAAe,CAAC,EAEpC,GAAIJ,IAAUlB,EAAK,IAAI,SACrB,OAAO,KAGT,GAAIkB,IAAUlB,EAAK,IAAI,MACrB,OAAOkB,EAGL,KAAK,OAASA,EAAM,QACtBE,EAAI,KACJC,EAAIH,IAEJE,EAAIF,EACJG,EAAI,MAGNL,EAAW,OAAO,KAAKI,EAAE,QAAQ,EAEjC,QAASH,EAAI,EAAGA,EAAID,EAAS,OAAQC,IAAK,CACxC,IAAIM,EAAUP,EAASC,GACnBM,KAAWF,EAAE,UACfC,EAAa,KAAKC,CAAO,CAE7B,CAEA,OAAO,IAAIvB,EAAK,IAAKsB,CAAY,CACnC,EASAtB,EAAK,IAAI,UAAU,MAAQ,SAAUkB,EAAO,CAC1C,OAAIA,IAAUlB,EAAK,IAAI,SACdA,EAAK,IAAI,SAGdkB,IAAUlB,EAAK,IAAI,MACd,KAGF,IAAIA,EAAK,IAAI,OAAO,KAAK,KAAK,QAAQ,EAAE,OAAO,OAAO,KAAKkB,EAAM,QAAQ,CAAC,CAAC,CACpF,EASAlB,EAAK,IAAM,SAAUwB,EAASC,EAAe,CAC3C,IAAIC,EAAoB,EAExB,QAASf,KAAaa,EAChBb,GAAa,WACjBe,GAAqB,OAAO,KAAKF,EAAQb,EAAU,EAAE,QAGvD,IAAIgB,GAAKF,EAAgBC,EAAoB,KAAQA,EAAoB,IAEzE,OAAO,KAAK,IAAI,EAAI,KAAK,IAAIC,CAAC,CAAC,CACjC,EAUA3B,EAAK,MAAQ,SAAU4B,EAAKC,EAAU,CACpC,KAAK,IAAMD,GAAO,GAClB,KAAK,SAAWC,GAAY,CAAC,CAC/B,EAOA7B,EAAK,MAAM,UAAU,SAAW,UAAY,CAC1C,OAAO,KAAK,GACd,EAsBAA,EAAK,MAAM,UAAU,OAAS,SAAU8B,EAAI,CAC1C,YAAK,IAAMA,EAAG,KAAK,IAAK,KAAK,QAAQ,EAC9B,IACT,EASA9B,EAAK,MAAM,UAAU,MAAQ,SAAU8B,EAAI,CACzC,OAAAA,EAAKA,GAAM,SAAUjB,EAAG,CAAE,OAAOA,CAAE,EAC5B,IAAIb,EAAK,MAAO8B,EAAG,KAAK,IAAK,KAAK,QAAQ,EAAG,KAAK,QAAQ,CACnE,EACA;AAAA;AAAA;AAAA,GAuBA9B,EAAK,UAAY,SAAUK,EAAKwB,EAAU,CACxC,GAAIxB,GAAO,MAAQA,GAAO,KACxB,MAAO,CAAC,EAGV,GAAI,MAAM,QAAQA,CAAG,EACnB,OAAOA,EAAI,IAAI,SAAU0B,EAAG,CAC1B,OAAO,IAAI/B,EAAK,MACdA,EAAK,MAAM,SAAS+B,CAAC,EAAE,YAAY,EACnC/B,EAAK,MAAM,MAAM6B,CAAQ,CAC3B,CACF,CAAC,EAOH,QAJID,EAAMvB,EAAI,SAAS,EAAE,YAAY,EACjC2B,EAAMJ,EAAI,OACVK,EAAS,CAAC,EAELC,EAAW,EAAGC,EAAa,EAAGD,GAAYF,EAAKE,IAAY,CAClE,IAAIE,EAAOR,EAAI,OAAOM,CAAQ,EAC1BG,EAAcH,EAAWC,EAE7B,GAAKC,EAAK,MAAMpC,EAAK,UAAU,SAAS,GAAKkC,GAAYF,EAAM,CAE7D,GAAIK,EAAc,EAAG,CACnB,IAAIC,EAAgBtC,EAAK,MAAM,MAAM6B,CAAQ,GAAK,CAAC,EACnDS,EAAc,SAAc,CAACH,EAAYE,CAAW,EACpDC,EAAc,MAAWL,EAAO,OAEhCA,EAAO,KACL,IAAIjC,EAAK,MACP4B,EAAI,MAAMO,EAAYD,CAAQ,EAC9BI,CACF,CACF,CACF,CAEAH,EAAaD,EAAW,CAC1B,CAEF,CAEA,OAAOD,CACT,EASAjC,EAAK,UAAU,UAAY,UAC3B;AAAA;AAAA;AAAA,GAkCAA,EAAK,SAAW,UAAY,CAC1B,KAAK,OAAS
,CAAC,CACjB,EAEAA,EAAK,SAAS,oBAAsB,OAAO,OAAO,IAAI,EAmCtDA,EAAK,SAAS,iBAAmB,SAAU8B,EAAIS,EAAO,CAChDA,KAAS,KAAK,qBAChBvC,EAAK,MAAM,KAAK,6CAA+CuC,CAAK,EAGtET,EAAG,MAAQS,EACXvC,EAAK,SAAS,oBAAoB8B,EAAG,OAASA,CAChD,EAQA9B,EAAK,SAAS,4BAA8B,SAAU8B,EAAI,CACxD,IAAIU,EAAeV,EAAG,OAAUA,EAAG,SAAS,KAAK,oBAE5CU,GACHxC,EAAK,MAAM,KAAK;AAAA,EAAmG8B,CAAE,CAEzH,EAYA9B,EAAK,SAAS,KAAO,SAAUyC,EAAY,CACzC,IAAIC,EAAW,IAAI1C,EAAK,SAExB,OAAAyC,EAAW,QAAQ,SAAUE,EAAQ,CACnC,IAAIb,EAAK9B,EAAK,SAAS,oBAAoB2C,GAE3C,GAAIb,EACFY,EAAS,IAAIZ,CAAE,MAEf,OAAM,IAAI,MAAM,sCAAwCa,CAAM,CAElE,CAAC,EAEMD,CACT,EASA1C,EAAK,SAAS,UAAU,IAAM,UAAY,CACxC,IAAI4C,EAAM,MAAM,UAAU,MAAM,KAAK,SAAS,EAE9CA,EAAI,QAAQ,SAAUd,EAAI,CACxB9B,EAAK,SAAS,4BAA4B8B,CAAE,EAC5C,KAAK,OAAO,KAAKA,CAAE,CACrB,EAAG,IAAI,CACT,EAWA9B,EAAK,SAAS,UAAU,MAAQ,SAAU6C,EAAYC,EAAO,CAC3D9C,EAAK,SAAS,4BAA4B8C,CAAK,EAE/C,IAAIC,EAAM,KAAK,OAAO,QAAQF,CAAU,EACxC,GAAIE,GAAO,GACT,MAAM,IAAI,MAAM,wBAAwB,EAG1CA,EAAMA,EAAM,EACZ,KAAK,OAAO,OAAOA,EAAK,EAAGD,CAAK,CAClC,EAWA9C,EAAK,SAAS,UAAU,OAAS,SAAU6C,EAAYC,EAAO,CAC5D9C,EAAK,SAAS,4BAA4B8C,CAAK,EAE/C,IAAIC,EAAM,KAAK,OAAO,QAAQF,CAAU,EACxC,GAAIE,GAAO,GACT,MAAM,IAAI,MAAM,wBAAwB,EAG1C,KAAK,OAAO,OAAOA,EAAK,EAAGD,CAAK,CAClC,EAOA9C,EAAK,SAAS,UAAU,OAAS,SAAU8B,EAAI,CAC7C,IAAIiB,EAAM,KAAK,OAAO,QAAQjB,CAAE,EAC5BiB,GAAO,IAIX,KAAK,OAAO,OAAOA,EAAK,CAAC,CAC3B,EASA/C,EAAK,SAAS,UAAU,IAAM,SAAUiC,EAAQ,CAG9C,QAFIe,EAAc,KAAK,OAAO,OAErB/B,EAAI,EAAGA,EAAI+B,EAAa/B,IAAK,CAIpC,QAHIa,EAAK,KAAK,OAAOb,GACjBgC,EAAO,CAAC,EAEHC,EAAI,EAAGA,EAAIjB,EAAO,OAAQiB,IAAK,CACtC,IAAIC,EAASrB,EAAGG,EAAOiB,GAAIA,EAAGjB,CAAM,EAEpC,GAAI,EAAAkB,GAAW,MAA6BA,IAAW,IAEvD,GAAI,MAAM,QAAQA,CAAM,EACtB,QAASC,EAAI,EAAGA,EAAID,EAAO,OAAQC,IACjCH,EAAK,KAAKE,EAAOC,EAAE,OAGrBH,EAAK,KAAKE,CAAM,CAEpB,CAEAlB,EAASgB,CACX,CAEA,OAAOhB,CACT,EAYAjC,EAAK,SAAS,UAAU,UAAY,SAAU4B,EAAKC,EAAU,CAC3D,IAAIwB,EAAQ,IAAIrD,EAAK,MAAO4B,EAAKC,CAAQ,EAEzC,OAAO,KAAK,IAAI,CAACwB,CAAK,CAAC,EAAE,IAAI,SAAUtB,EAAG,CACxC,OAAOA,EAAE,SAAS,CACpB,CAAC,CACH,EAMA/B,EAAK,SAAS,UAAU,MAAQ,UAAY,CAC1C,KAAK,OAAS,CAAC,CACjB,EASAA,EAAK,
SAAS,UAAU,OAAS,UAAY,CAC3C,OAAO,KAAK,OAAO,IAAI,SAAU8B,EAAI,CACnC,OAAA9B,EAAK,SAAS,4BAA4B8B,CAAE,EAErCA,EAAG,KACZ,CAAC,CACH,EACA;AAAA;AAAA;AAAA,GAqBA9B,EAAK,OAAS,SAAUgB,EAAU,CAChC,KAAK,WAAa,EAClB,KAAK,SAAWA,GAAY,CAAC,CAC/B,EAaAhB,EAAK,OAAO,UAAU,iBAAmB,SAAUsD,EAAO,CAExD,GAAI,KAAK,SAAS,QAAU,EAC1B,MAAO,GAST,QANIC,EAAQ,EACRC,EAAM,KAAK,SAAS,OAAS,EAC7BnB,EAAcmB,EAAMD,EACpBE,EAAa,KAAK,MAAMpB,EAAc,CAAC,EACvCqB,EAAa,KAAK,SAASD,EAAa,GAErCpB,EAAc,IACfqB,EAAaJ,IACfC,EAAQE,GAGNC,EAAaJ,IACfE,EAAMC,GAGJC,GAAcJ,IAIlBjB,EAAcmB,EAAMD,EACpBE,EAAaF,EAAQ,KAAK,MAAMlB,EAAc,CAAC,EAC/CqB,EAAa,KAAK,SAASD,EAAa,GAO1C,GAJIC,GAAcJ,GAIdI,EAAaJ,EACf,OAAOG,EAAa,EAGtB,GAAIC,EAAaJ,EACf,OAAQG,EAAa,GAAK,CAE9B,EAWAzD,EAAK,OAAO,UAAU,OAAS,SAAU2D,EAAWlD,EAAK,CACvD,KAAK,OAAOkD,EAAWlD,EAAK,UAAY,CACtC,KAAM,iBACR,CAAC,CACH,EAUAT,EAAK,OAAO,UAAU,OAAS,SAAU2D,EAAWlD,EAAKqB,EAAI,CAC3D,KAAK,WAAa,EAClB,IAAI8B,EAAW,KAAK,iBAAiBD,CAAS,EAE1C,KAAK,SAASC,IAAaD,EAC7B,KAAK,SAASC,EAAW,GAAK9B,EAAG,KAAK,SAAS8B,EAAW,GAAInD,CAAG,EAEjE,KAAK,SAAS,OAAOmD,EAAU,EAAGD,EAAWlD,CAAG,CAEpD,EAOAT,EAAK,OAAO,UAAU,UAAY,UAAY,CAC5C,GAAI,KAAK,WAAY,OAAO,KAAK,WAKjC,QAHI6D,EAAe,EACfC,EAAiB,KAAK,SAAS,OAE1B7C,EAAI,EAAGA,EAAI6C,EAAgB7C,GAAK,EAAG,CAC1C,IAAIR,EAAM,KAAK,SAASQ,GACxB4C,GAAgBpD,EAAMA,CACxB,CAEA,OAAO,KAAK,WAAa,KAAK,KAAKoD,CAAY,CACjD,EAQA7D,EAAK,OAAO,UAAU,IAAM,SAAU+D,EAAa,CAOjD,QANIC,EAAa,EACb5C,EAAI,KAAK,SAAUC,EAAI0C,EAAY,SACnCE,EAAO7C,EAAE,OAAQ8C,EAAO7C,EAAE,OAC1B8C,EAAO,EAAGC,EAAO,EACjBnD,EAAI,EAAGiC,EAAI,EAERjC,EAAIgD,GAAQf,EAAIgB,GACrBC,EAAO/C,EAAEH,GAAImD,EAAO/C,EAAE6B,GAClBiB,EAAOC,EACTnD,GAAK,EACIkD,EAAOC,EAChBlB,GAAK,EACIiB,GAAQC,IACjBJ,GAAc5C,EAAEH,EAAI,GAAKI,EAAE6B,EAAI,GAC/BjC,GAAK,EACLiC,GAAK,GAIT,OAAOc,CACT,EASAhE,EAAK,OAAO,UAAU,WAAa,SAAU+D,EAAa,CACxD,OAAO,KAAK,IAAIA,CAAW,EAAI,KAAK,UAAU,GAAK,CACrD,EAOA/D,EAAK,OAAO,UAAU,QAAU,UAAY,CAG1C,QAFIqE,EAAS,IAAI,MAAO,KAAK,SAAS,OAAS,CAAC,EAEvCpD,EAAI,EAAGiC,EAAI,EAAGjC,EAAI,KAAK,SAAS,OAAQA,GAAK,EAAGiC,IACvDmB,EAAOnB,GAAK,KAAK,SAASjC,GAG5B,OAAOoD,CACT,EAOArE,EAAK,OAAO,UAAU,OAAS,UAAY,CACzC,OAAO,
KAAK,QACd,EAEA;AAAA;AAAA;AAAA;AAAA,GAiBAA,EAAK,QAAW,UAAU,CACxB,IAAIsE,EAAY,CACZ,QAAY,MACZ,OAAW,OACX,KAAS,OACT,KAAS,OACT,KAAS,MACT,IAAQ,MACR,KAAS,KACT,MAAU,MACV,IAAQ,IACR,MAAU,MACV,QAAY,MACZ,MAAU,MACV,KAAS,MACT,MAAU,KACV,QAAY,MACZ,QAAY,MACZ,QAAY,MACZ,MAAU,KACV,MAAU,MACV,OAAW,MACX,KAAS,KACX,EAEAC,EAAY,CACV,MAAU,KACV,MAAU,GACV,MAAU,KACV,MAAU,KACV,KAAS,KACT,IAAQ,GACR,KAAS,EACX,EAEAC,EAAI,WACJC,EAAI,WACJC,EAAIF,EAAI,aACRG,EAAIF,EAAI,WAERG,EAAO,KAAOF,EAAI,KAAOC,EAAID,EAC7BG,EAAO,KAAOH,EAAI,KAAOC,EAAID,EAAI,IAAMC,EAAI,MAC3CG,EAAO,KAAOJ,EAAI,KAAOC,EAAID,EAAIC,EAAID,EACrCK,EAAM,KAAOL,EAAI,KAAOD,EAEtBO,EAAU,IAAI,OAAOJ,CAAI,EACzBK,EAAU,IAAI,OAAOH,CAAI,EACzBI,EAAU,IAAI,OAAOL,CAAI,EACzBM,EAAS,IAAI,OAAOJ,CAAG,EAEvBK,EAAQ,kBACRC,EAAS,iBACTC,EAAQ,aACRC,EAAS,kBACTC,EAAU,KACVC,EAAW,cACXC,EAAW,IAAI,OAAO,oBAAoB,EAC1CC,EAAW,IAAI,OAAO,IAAMjB,EAAID,EAAI,cAAc,EAElDmB,EAAQ,mBACRC,EAAO,2IAEPC,EAAO,iDAEPC,EAAO,sFACPC,EAAQ,oBAERC,EAAO,WACPC,EAAS,MACTC,EAAQ,IAAI,OAAO,IAAMzB,EAAID,EAAI,cAAc,EAE/C2B,EAAgB,SAAuBC,EAAG,CAC5C,IAAIC,EACFC,EACAC,EACAC,EACAC,EACAC,EACAC,EAEF,GAAIP,EAAE,OAAS,EAAK,OAAOA,EAiB3B,GAfAG,EAAUH,EAAE,OAAO,EAAE,CAAC,EAClBG,GAAW,MACbH,EAAIG,EAAQ,YAAY,EAAIH,EAAE,OAAO,CAAC,GAIxCI,EAAKrB,EACLsB,EAAMrB,EAEFoB,EAAG,KAAKJ,CAAC,EAAKA,EAAIA,EAAE,QAAQI,EAAG,MAAM,EAChCC,EAAI,KAAKL,CAAC,IAAKA,EAAIA,EAAE,QAAQK,EAAI,MAAM,GAGhDD,EAAKnB,EACLoB,EAAMnB,EACFkB,EAAG,KAAKJ,CAAC,EAAG,CACd,IAAIQ,EAAKJ,EAAG,KAAKJ,CAAC,EAClBI,EAAKzB,EACDyB,EAAG,KAAKI,EAAG,EAAE,IACfJ,EAAKjB,EACLa,EAAIA,EAAE,QAAQI,EAAG,EAAE,EAEvB,SAAWC,EAAI,KAAKL,CAAC,EAAG,CACtB,IAAIQ,EAAKH,EAAI,KAAKL,CAAC,EACnBC,EAAOO,EAAG,GACVH,EAAMvB,EACFuB,EAAI,KAAKJ,CAAI,IACfD,EAAIC,EACJI,EAAMjB,EACNkB,EAAMjB,EACNkB,EAAMjB,EACFe,EAAI,KAAKL,CAAC,EAAKA,EAAIA,EAAI,IAClBM,EAAI,KAAKN,CAAC,GAAKI,EAAKjB,EAASa,EAAIA,EAAE,QAAQI,EAAG,EAAE,GAChDG,EAAI,KAAKP,CAAC,IAAKA,EAAIA,EAAI,KAEpC,CAIA,GADAI,EAAKb,EACDa,EAAG,KAAKJ,CAAC,EAAG,CACd,IAAIQ,EAAKJ,EAAG,KAAKJ,CAAC,EAClBC,EAAOO,EAAG,GACVR,EAAIC,EAAO,GACb,CAIA,GADAG,EAAKZ,EACDY,EAAG,KAAKJ,CAAC,EAAG,CACd,IAAIQ,EAAKJ,E
AAG,KAAKJ,CAAC,EAClBC,EAAOO,EAAG,GACVN,EAASM,EAAG,GACZJ,EAAKzB,EACDyB,EAAG,KAAKH,CAAI,IACdD,EAAIC,EAAOhC,EAAUiC,GAEzB,CAIA,GADAE,EAAKX,EACDW,EAAG,KAAKJ,CAAC,EAAG,CACd,IAAIQ,EAAKJ,EAAG,KAAKJ,CAAC,EAClBC,EAAOO,EAAG,GACVN,EAASM,EAAG,GACZJ,EAAKzB,EACDyB,EAAG,KAAKH,CAAI,IACdD,EAAIC,EAAO/B,EAAUgC,GAEzB,CAKA,GAFAE,EAAKV,EACLW,EAAMV,EACFS,EAAG,KAAKJ,CAAC,EAAG,CACd,IAAIQ,EAAKJ,EAAG,KAAKJ,CAAC,EAClBC,EAAOO,EAAG,GACVJ,EAAKxB,EACDwB,EAAG,KAAKH,CAAI,IACdD,EAAIC,EAER,SAAWI,EAAI,KAAKL,CAAC,EAAG,CACtB,IAAIQ,EAAKH,EAAI,KAAKL,CAAC,EACnBC,EAAOO,EAAG,GAAKA,EAAG,GAClBH,EAAMzB,EACFyB,EAAI,KAAKJ,CAAI,IACfD,EAAIC,EAER,CAIA,GADAG,EAAKR,EACDQ,EAAG,KAAKJ,CAAC,EAAG,CACd,IAAIQ,EAAKJ,EAAG,KAAKJ,CAAC,EAClBC,EAAOO,EAAG,GACVJ,EAAKxB,EACLyB,EAAMxB,EACNyB,EAAMR,GACFM,EAAG,KAAKH,CAAI,GAAMI,EAAI,KAAKJ,CAAI,GAAK,CAAEK,EAAI,KAAKL,CAAI,KACrDD,EAAIC,EAER,CAEA,OAAAG,EAAKP,EACLQ,EAAMzB,EACFwB,EAAG,KAAKJ,CAAC,GAAKK,EAAI,KAAKL,CAAC,IAC1BI,EAAKjB,EACLa,EAAIA,EAAE,QAAQI,EAAG,EAAE,GAKjBD,GAAW,MACbH,EAAIG,EAAQ,YAAY,EAAIH,EAAE,OAAO,CAAC,GAGjCA,CACT,EAEA,OAAO,SAAUhD,EAAO,CACtB,OAAOA,EAAM,OAAO+C,CAAa,CACnC,CACF,EAAG,EAEHpG,EAAK,SAAS,iBAAiBA,EAAK,QAAS,SAAS,EACtD;AAAA;AAAA;AAAA,GAkBAA,EAAK,uBAAyB,SAAU8G,EAAW,CACjD,IAAIC,EAAQD,EAAU,OAAO,SAAU7D,EAAM+D,EAAU,CACrD,OAAA/D,EAAK+D,GAAYA,EACV/D,CACT,EAAG,CAAC,CAAC,EAEL,OAAO,SAAUI,EAAO,CACtB,GAAIA,GAAS0D,EAAM1D,EAAM,SAAS,KAAOA,EAAM,SAAS,EAAG,OAAOA,CACpE,CACF,EAeArD,EAAK,eAAiBA,EAAK,uBAAuB,CAChD,IACA,OACA,QACA,SACA,QACA,MACA,SACA,OACA,KACA,QACA,KACA,MACA,MACA,MACA,KACA,KACA,KACA,UACA,OACA,MACA,KACA,MACA,SACA,QACA,OACA,MACA,KACA,OACA,SACA,OACA,OACA,QACA,MACA,OACA,MACA,MACA,MACA,MACA,OACA,KACA,MACA,OACA,MACA,MACA,MACA,UACA,IACA,KACA,KACA,OACA,KACA,KACA,MACA,OACA,QACA,MACA,OACA,SACA,MACA,KACA,QACA,OACA,OACA,KACA,UACA,KACA,MACA,MACA,KACA,MACA,QACA,KACA,OACA,KACA,QACA,MACA,MACA,SACA,OACA,MACA,OACA,MACA,SACA,QACA,KACA,OACA,OACA,OACA,MACA,QACA,OACA,OACA,QACA,QACA,OACA,OACA,MACA,KACA,MACA,OACA,KACA,QACA,MACA,KACA,OACA,OACA,OACA,QACA,QACA,QACA,MACA,OACA,MACA,OACA,OACA,QACA,MACA,MACA,MACF
,CAAC,EAEDA,EAAK,SAAS,iBAAiBA,EAAK,eAAgB,gBAAgB,EACpE;AAAA;AAAA;AAAA,GAoBAA,EAAK,QAAU,SAAUqD,EAAO,CAC9B,OAAOA,EAAM,OAAO,SAAUxC,EAAG,CAC/B,OAAOA,EAAE,QAAQ,OAAQ,EAAE,EAAE,QAAQ,OAAQ,EAAE,CACjD,CAAC,CACH,EAEAb,EAAK,SAAS,iBAAiBA,EAAK,QAAS,SAAS,EACtD;AAAA;AAAA;AAAA,GA0BAA,EAAK,SAAW,UAAY,CAC1B,KAAK,MAAQ,GACb,KAAK,MAAQ,CAAC,EACd,KAAK,GAAKA,EAAK,SAAS,QACxBA,EAAK,SAAS,SAAW,CAC3B,EAUAA,EAAK,SAAS,QAAU,EASxBA,EAAK,SAAS,UAAY,SAAUiH,EAAK,CAGvC,QAFI/G,EAAU,IAAIF,EAAK,SAAS,QAEvBiB,EAAI,EAAGe,EAAMiF,EAAI,OAAQhG,EAAIe,EAAKf,IACzCf,EAAQ,OAAO+G,EAAIhG,EAAE,EAGvB,OAAAf,EAAQ,OAAO,EACRA,EAAQ,IACjB,EAWAF,EAAK,SAAS,WAAa,SAAUkH,EAAQ,CAC3C,MAAI,iBAAkBA,EACblH,EAAK,SAAS,gBAAgBkH,EAAO,KAAMA,EAAO,YAAY,EAE9DlH,EAAK,SAAS,WAAWkH,EAAO,IAAI,CAE/C,EAiBAlH,EAAK,SAAS,gBAAkB,SAAU4B,EAAKuF,EAAc,CAS3D,QARIC,EAAO,IAAIpH,EAAK,SAEhBqH,EAAQ,CAAC,CACX,KAAMD,EACN,eAAgBD,EAChB,IAAKvF,CACP,CAAC,EAEMyF,EAAM,QAAQ,CACnB,IAAIC,EAAQD,EAAM,IAAI,EAGtB,GAAIC,EAAM,IAAI,OAAS,EAAG,CACxB,IAAIlF,EAAOkF,EAAM,IAAI,OAAO,CAAC,EACzBC,EAEAnF,KAAQkF,EAAM,KAAK,MACrBC,EAAaD,EAAM,KAAK,MAAMlF,IAE9BmF,EAAa,IAAIvH,EAAK,SACtBsH,EAAM,KAAK,MAAMlF,GAAQmF,GAGvBD,EAAM,IAAI,QAAU,IACtBC,EAAW,MAAQ,IAGrBF,EAAM,KAAK,CACT,KAAME,EACN,eAAgBD,EAAM,eACtB,IAAKA,EAAM,IAAI,MAAM,CAAC,CACxB,CAAC,CACH,CAEA,GAAIA,EAAM,gBAAkB,EAK5B,IAAI,MAAOA,EAAM,KAAK,MACpB,IAAIE,EAAgBF,EAAM,KAAK,MAAM,SAChC,CACL,IAAIE,EAAgB,IAAIxH,EAAK,SAC7BsH,EAAM,KAAK,MAAM,KAAOE,CAC1B,CAgCA,GA9BIF,EAAM,IAAI,QAAU,IACtBE,EAAc,MAAQ,IAGxBH,EAAM,KAAK,CACT,KAAMG,EACN,eAAgBF,EAAM,eAAiB,EACvC,IAAKA,EAAM,GACb,CAAC,EAKGA,EAAM,IAAI,OAAS,GACrBD,EAAM,KAAK,CACT,KAAMC,EAAM,KACZ,eAAgBA,EAAM,eAAiB,EACvC,IAAKA,EAAM,IAAI,MAAM,CAAC,CACxB,CAAC,EAKCA,EAAM,IAAI,QAAU,IACtBA,EAAM,KAAK,MAAQ,IAMjBA,EAAM,IAAI,QAAU,EAAG,CACzB,GAAI,MAAOA,EAAM,KAAK,MACpB,IAAIG,EAAmBH,EAAM,KAAK,MAAM,SACnC,CACL,IAAIG,EAAmB,IAAIzH,EAAK,SAChCsH,EAAM,KAAK,MAAM,KAAOG,CAC1B,CAEIH,EAAM,IAAI,QAAU,IACtBG,EAAiB,MAAQ,IAG3BJ,EAAM,KAAK,CACT,KAAMI,EACN,eAAgBH,EAAM,eAAiB,EACvC,IAAKA,EAAM,IAAI,MAAM,CAAC,CACxB,CAAC,CACH,CAKA,GAAIA,EAAM,IAAI,OAAS,EAAG,CACxB,IA
AII,EAAQJ,EAAM,IAAI,OAAO,CAAC,EAC1BK,EAAQL,EAAM,IAAI,OAAO,CAAC,EAC1BM,EAEAD,KAASL,EAAM,KAAK,MACtBM,EAAgBN,EAAM,KAAK,MAAMK,IAEjCC,EAAgB,IAAI5H,EAAK,SACzBsH,EAAM,KAAK,MAAMK,GAASC,GAGxBN,EAAM,IAAI,QAAU,IACtBM,EAAc,MAAQ,IAGxBP,EAAM,KAAK,CACT,KAAMO,EACN,eAAgBN,EAAM,eAAiB,EACvC,IAAKI,EAAQJ,EAAM,IAAI,MAAM,CAAC,CAChC,CAAC,CACH,EACF,CAEA,OAAOF,CACT,EAYApH,EAAK,SAAS,WAAa,SAAU4B,EAAK,CAYxC,QAXIiG,EAAO,IAAI7H,EAAK,SAChBoH,EAAOS,EAUF,EAAI,EAAG7F,EAAMJ,EAAI,OAAQ,EAAII,EAAK,IAAK,CAC9C,IAAII,EAAOR,EAAI,GACXkG,EAAS,GAAK9F,EAAM,EAExB,GAAII,GAAQ,IACVyF,EAAK,MAAMzF,GAAQyF,EACnBA,EAAK,MAAQC,MAER,CACL,IAAIC,EAAO,IAAI/H,EAAK,SACpB+H,EAAK,MAAQD,EAEbD,EAAK,MAAMzF,GAAQ2F,EACnBF,EAAOE,CACT,CACF,CAEA,OAAOX,CACT,EAYApH,EAAK,SAAS,UAAU,QAAU,UAAY,CAQ5C,QAPI+G,EAAQ,CAAC,EAETM,EAAQ,CAAC,CACX,OAAQ,GACR,KAAM,IACR,CAAC,EAEMA,EAAM,QAAQ,CACnB,IAAIC,EAAQD,EAAM,IAAI,EAClBW,EAAQ,OAAO,KAAKV,EAAM,KAAK,KAAK,EACpCtF,EAAMgG,EAAM,OAEZV,EAAM,KAAK,QAKbA,EAAM,OAAO,OAAO,CAAC,EACrBP,EAAM,KAAKO,EAAM,MAAM,GAGzB,QAASrG,EAAI,EAAGA,EAAIe,EAAKf,IAAK,CAC5B,IAAIgH,EAAOD,EAAM/G,GAEjBoG,EAAM,KAAK,CACT,OAAQC,EAAM,OAAO,OAAOW,CAAI,EAChC,KAAMX,EAAM,KAAK,MAAMW,EACzB,CAAC,CACH,CACF,CAEA,OAAOlB,CACT,EAYA/G,EAAK,SAAS,UAAU,SAAW,UAAY,CAS7C,GAAI,KAAK,KACP,OAAO,KAAK,KAOd,QAJI4B,EAAM,KAAK,MAAQ,IAAM,IACzBsG,EAAS,OAAO,KAAK,KAAK,KAAK,EAAE,KAAK,EACtClG,EAAMkG,EAAO,OAER,EAAI,EAAG,EAAIlG,EAAK,IAAK,CAC5B,IAAIO,EAAQ2F,EAAO,GACfL,EAAO,KAAK,MAAMtF,GAEtBX,EAAMA,EAAMW,EAAQsF,EAAK,EAC3B,CAEA,OAAOjG,CACT,EAYA5B,EAAK,SAAS,UAAU,UAAY,SAAUqB,EAAG,CAU/C,QATIgD,EAAS,IAAIrE,EAAK,SAClBsH,EAAQ,OAERD,EAAQ,CAAC,CACX,MAAOhG,EACP,OAAQgD,EACR,KAAM,IACR,CAAC,EAEMgD,EAAM,QAAQ,CACnBC,EAAQD,EAAM,IAAI,EAWlB,QALIc,EAAS,OAAO,KAAKb,EAAM,MAAM,KAAK,EACtCc,EAAOD,EAAO,OACdE,EAAS,OAAO,KAAKf,EAAM,KAAK,KAAK,EACrCgB,EAAOD,EAAO,OAETE,EAAI,EAAGA,EAAIH,EAAMG,IAGxB,QAFIC,EAAQL,EAAOI,GAEVzH,EAAI,EAAGA,EAAIwH,EAAMxH,IAAK,CAC7B,IAAI2H,EAAQJ,EAAOvH,GAEnB,GAAI2H,GAASD,GAASA,GAAS,IAAK,CAClC,IAAIX,EAAOP,EAAM,KAAK,MAAMmB,GACxBC,EAAQpB,EAAM,MAAM,MAAMkB,GAC1BV,EAAQD,EAAK,OAASa,EAAM,MAC5BX,EAAO,OAE
PU,KAASnB,EAAM,OAAO,OAIxBS,EAAOT,EAAM,OAAO,MAAMmB,GAC1BV,EAAK,MAAQA,EAAK,OAASD,IAM3BC,EAAO,IAAI/H,EAAK,SAChB+H,EAAK,MAAQD,EACbR,EAAM,OAAO,MAAMmB,GAASV,GAG9BV,EAAM,KAAK,CACT,MAAOqB,EACP,OAAQX,EACR,KAAMF,CACR,CAAC,CACH,CACF,CAEJ,CAEA,OAAOxD,CACT,EACArE,EAAK,SAAS,QAAU,UAAY,CAClC,KAAK,aAAe,GACpB,KAAK,KAAO,IAAIA,EAAK,SACrB,KAAK,eAAiB,CAAC,EACvB,KAAK,eAAiB,CAAC,CACzB,EAEAA,EAAK,SAAS,QAAQ,UAAU,OAAS,SAAU2I,EAAM,CACvD,IAAId,EACAe,EAAe,EAEnB,GAAID,EAAO,KAAK,aACd,MAAM,IAAI,MAAO,6BAA6B,EAGhD,QAAS,EAAI,EAAG,EAAIA,EAAK,QAAU,EAAI,KAAK,aAAa,QACnDA,EAAK,IAAM,KAAK,aAAa,GAD8B,IAE/DC,IAGF,KAAK,SAASA,CAAY,EAEtB,KAAK,eAAe,QAAU,EAChCf,EAAO,KAAK,KAEZA,EAAO,KAAK,eAAe,KAAK,eAAe,OAAS,GAAG,MAG7D,QAAS,EAAIe,EAAc,EAAID,EAAK,OAAQ,IAAK,CAC/C,IAAIE,EAAW,IAAI7I,EAAK,SACpBoC,EAAOuG,EAAK,GAEhBd,EAAK,MAAMzF,GAAQyG,EAEnB,KAAK,eAAe,KAAK,CACvB,OAAQhB,EACR,KAAMzF,EACN,MAAOyG,CACT,CAAC,EAEDhB,EAAOgB,CACT,CAEAhB,EAAK,MAAQ,GACb,KAAK,aAAec,CACtB,EAEA3I,EAAK,SAAS,QAAQ,UAAU,OAAS,UAAY,CACnD,KAAK,SAAS,CAAC,CACjB,EAEAA,EAAK,SAAS,QAAQ,UAAU,SAAW,SAAU8I,EAAQ,CAC3D,QAAS7H,EAAI,KAAK,eAAe,OAAS,EAAGA,GAAK6H,EAAQ7H,IAAK,CAC7D,IAAI4G,EAAO,KAAK,eAAe5G,GAC3B8H,EAAWlB,EAAK,MAAM,SAAS,EAE/BkB,KAAY,KAAK,eACnBlB,EAAK,OAAO,MAAMA,EAAK,MAAQ,KAAK,eAAekB,IAInDlB,EAAK,MAAM,KAAOkB,EAElB,KAAK,eAAeA,GAAYlB,EAAK,OAGvC,KAAK,eAAe,IAAI,CAC1B,CACF,EACA;AAAA;AAAA;AAAA,GAqBA7H,EAAK,MAAQ,SAAUgJ,EAAO,CAC5B,KAAK,cAAgBA,EAAM,cAC3B,KAAK,aAAeA,EAAM,aAC1B,KAAK,SAAWA,EAAM,SACtB,KAAK,OAASA,EAAM,OACpB,KAAK,SAAWA,EAAM,QACxB,EAyEAhJ,EAAK,MAAM,UAAU,OAAS,SAAUiJ,EAAa,CACnD,OAAO,KAAK,MAAM,SAAUC,EAAO,CACjC,IAAIC,EAAS,IAAInJ,EAAK,YAAYiJ,EAAaC,CAAK,EACpDC,EAAO,MAAM,CACf,CAAC,CACH,EA2BAnJ,EAAK,MAAM,UAAU,MAAQ,SAAU8B,EAAI,CAoBzC,QAZIoH,EAAQ,IAAIlJ,EAAK,MAAM,KAAK,MAAM,EAClCoJ,EAAiB,OAAO,OAAO,IAAI,EACnCC,EAAe,OAAO,OAAO,IAAI,EACjCC,EAAiB,OAAO,OAAO,IAAI,EACnCC,EAAkB,OAAO,OAAO,IAAI,EACpCC,EAAoB,OAAO,OAAO,IAAI,EAOjCvI,EAAI,EAAGA,EAAI,KAAK,OAAO,OAAQA,IACtCoI,EAAa,KAAK,OAAOpI,IAAM,IAAIjB,EAAK,OAG1C8B,EAAG,KAAKoH,EAAOA,CAAK,EAEpB,QAASjI,EAAI,EAAGA,EAAIiI,EAAM,QAAQ,OAAQjI,IAAK,
CAS7C,IAAIiG,EAASgC,EAAM,QAAQjI,GACvBwI,EAAQ,KACRC,EAAgB1J,EAAK,IAAI,MAEzBkH,EAAO,YACTuC,EAAQ,KAAK,SAAS,UAAUvC,EAAO,KAAM,CAC3C,OAAQA,EAAO,MACjB,CAAC,EAEDuC,EAAQ,CAACvC,EAAO,IAAI,EAGtB,QAASyC,EAAI,EAAGA,EAAIF,EAAM,OAAQE,IAAK,CACrC,IAAIC,EAAOH,EAAME,GAQjBzC,EAAO,KAAO0C,EAOd,IAAIC,EAAe7J,EAAK,SAAS,WAAWkH,CAAM,EAC9C4C,EAAgB,KAAK,SAAS,UAAUD,CAAY,EAAE,QAAQ,EAQlE,GAAIC,EAAc,SAAW,GAAK5C,EAAO,WAAalH,EAAK,MAAM,SAAS,SAAU,CAClF,QAASoD,EAAI,EAAGA,EAAI8D,EAAO,OAAO,OAAQ9D,IAAK,CAC7C,IAAI2G,EAAQ7C,EAAO,OAAO9D,GAC1BmG,EAAgBQ,GAAS/J,EAAK,IAAI,KACpC,CAEA,KACF,CAEA,QAASkD,EAAI,EAAGA,EAAI4G,EAAc,OAAQ5G,IASxC,QAJI8G,EAAeF,EAAc5G,GAC7B1B,EAAU,KAAK,cAAcwI,GAC7BC,EAAYzI,EAAQ,OAEf4B,EAAI,EAAGA,EAAI8D,EAAO,OAAO,OAAQ9D,IAAK,CAS7C,IAAI2G,EAAQ7C,EAAO,OAAO9D,GACtB8G,EAAe1I,EAAQuI,GACvBI,EAAuB,OAAO,KAAKD,CAAY,EAC/CE,EAAYJ,EAAe,IAAMD,EACjCM,EAAuB,IAAIrK,EAAK,IAAImK,CAAoB,EAoB5D,GAbIjD,EAAO,UAAYlH,EAAK,MAAM,SAAS,WACzC0J,EAAgBA,EAAc,MAAMW,CAAoB,EAEpDd,EAAgBQ,KAAW,SAC7BR,EAAgBQ,GAAS/J,EAAK,IAAI,WASlCkH,EAAO,UAAYlH,EAAK,MAAM,SAAS,WAAY,CACjDwJ,EAAkBO,KAAW,SAC/BP,EAAkBO,GAAS/J,EAAK,IAAI,OAGtCwJ,EAAkBO,GAASP,EAAkBO,GAAO,MAAMM,CAAoB,EAO9E,QACF,CAeA,GANAhB,EAAaU,GAAO,OAAOE,EAAW/C,EAAO,MAAO,SAAU9F,GAAGC,GAAG,CAAE,OAAOD,GAAIC,EAAE,CAAC,EAMhF,CAAAiI,EAAec,GAInB,SAASE,EAAI,EAAGA,EAAIH,EAAqB,OAAQG,IAAK,CAOpD,IAAIC,EAAsBJ,EAAqBG,GAC3CE,EAAmB,IAAIxK,EAAK,SAAUuK,EAAqBR,CAAK,EAChElI,EAAWqI,EAAaK,GACxBE,GAECA,EAAarB,EAAeoB,MAAuB,OACtDpB,EAAeoB,GAAoB,IAAIxK,EAAK,UAAWgK,EAAcD,EAAOlI,CAAQ,EAEpF4I,EAAW,IAAIT,EAAcD,EAAOlI,CAAQ,CAGhD,CAEAyH,EAAec,GAAa,GAC9B,CAEJ,CAQA,GAAIlD,EAAO,WAAalH,EAAK,MAAM,SAAS,SAC1C,QAASoD,EAAI,EAAGA,EAAI8D,EAAO,OAAO,OAAQ9D,IAAK,CAC7C,IAAI2G,EAAQ7C,EAAO,OAAO9D,GAC1BmG,EAAgBQ,GAASR,EAAgBQ,GAAO,UAAUL,CAAa,CACzE,CAEJ,CAUA,QAHIgB,EAAqB1K,EAAK,IAAI,SAC9B2K,EAAuB3K,EAAK,IAAI,MAE3BiB,EAAI,EAAGA,EAAI,KAAK,OAAO,OAAQA,IAAK,CAC3C,IAAI8I,EAAQ,KAAK,OAAO9I,GAEpBsI,EAAgBQ,KAClBW,EAAqBA,EAAmB,UAAUnB,EAAgBQ,EAAM,GAGtEP,EAAkBO,KACpBY,EAAuBA,EAAqB,MAAMnB,EAAkBO,EAAM,EAE9E,CAEA,IAAIa,EAAoB,OAAO,KAAKxB,CAAc,EAC9CyB,EAAU
,CAAC,EACXC,EAAU,OAAO,OAAO,IAAI,EAYhC,GAAI5B,EAAM,UAAU,EAAG,CACrB0B,EAAoB,OAAO,KAAK,KAAK,YAAY,EAEjD,QAAS3J,EAAI,EAAGA,EAAI2J,EAAkB,OAAQ3J,IAAK,CACjD,IAAIuJ,EAAmBI,EAAkB3J,GACrCF,EAAWf,EAAK,SAAS,WAAWwK,CAAgB,EACxDpB,EAAeoB,GAAoB,IAAIxK,EAAK,SAC9C,CACF,CAEA,QAASiB,EAAI,EAAGA,EAAI2J,EAAkB,OAAQ3J,IAAK,CASjD,IAAIF,EAAWf,EAAK,SAAS,WAAW4K,EAAkB3J,EAAE,EACxDP,EAASK,EAAS,OAEtB,GAAI,EAAC2J,EAAmB,SAAShK,CAAM,GAInC,CAAAiK,EAAqB,SAASjK,CAAM,EAIxC,KAAIqK,EAAc,KAAK,aAAahK,GAChCiK,EAAQ3B,EAAatI,EAAS,WAAW,WAAWgK,CAAW,EAC/DE,EAEJ,IAAKA,EAAWH,EAAQpK,MAAa,OACnCuK,EAAS,OAASD,EAClBC,EAAS,UAAU,QAAQ7B,EAAerI,EAAS,MAC9C,CACL,IAAImK,EAAQ,CACV,IAAKxK,EACL,MAAOsK,EACP,UAAW5B,EAAerI,EAC5B,EACA+J,EAAQpK,GAAUwK,EAClBL,EAAQ,KAAKK,CAAK,CACpB,EACF,CAKA,OAAOL,EAAQ,KAAK,SAAUzJ,GAAGC,GAAG,CAClC,OAAOA,GAAE,MAAQD,GAAE,KACrB,CAAC,CACH,EAUApB,EAAK,MAAM,UAAU,OAAS,UAAY,CACxC,IAAImL,EAAgB,OAAO,KAAK,KAAK,aAAa,EAC/C,KAAK,EACL,IAAI,SAAUvB,EAAM,CACnB,MAAO,CAACA,EAAM,KAAK,cAAcA,EAAK,CACxC,EAAG,IAAI,EAELwB,EAAe,OAAO,KAAK,KAAK,YAAY,EAC7C,IAAI,SAAUC,EAAK,CAClB,MAAO,CAACA,EAAK,KAAK,aAAaA,GAAK,OAAO,CAAC,CAC9C,EAAG,IAAI,EAET,MAAO,CACL,QAASrL,EAAK,QACd,OAAQ,KAAK,OACb,aAAcoL,EACd,cAAeD,EACf,SAAU,KAAK,SAAS,OAAO,CACjC,CACF,EAQAnL,EAAK,MAAM,KAAO,SAAUsL,EAAiB,CAC3C,IAAItC,EAAQ,CAAC,EACToC,EAAe,CAAC,EAChBG,EAAoBD,EAAgB,aACpCH,EAAgB,OAAO,OAAO,IAAI,EAClCK,EAA0BF,EAAgB,cAC1CG,EAAkB,IAAIzL,EAAK,SAAS,QACpC0C,EAAW1C,EAAK,SAAS,KAAKsL,EAAgB,QAAQ,EAEtDA,EAAgB,SAAWtL,EAAK,SAClCA,EAAK,MAAM,KAAK,4EAA8EA,EAAK,QAAU,sCAAwCsL,EAAgB,QAAU,GAAG,EAGpL,QAASrK,EAAI,EAAGA,EAAIsK,EAAkB,OAAQtK,IAAK,CACjD,IAAIyK,EAAQH,EAAkBtK,GAC1BoK,EAAMK,EAAM,GACZ1K,EAAW0K,EAAM,GAErBN,EAAaC,GAAO,IAAIrL,EAAK,OAAOgB,CAAQ,CAC9C,CAEA,QAASC,EAAI,EAAGA,EAAIuK,EAAwB,OAAQvK,IAAK,CACvD,IAAIyK,EAAQF,EAAwBvK,GAChC2I,EAAO8B,EAAM,GACblK,EAAUkK,EAAM,GAEpBD,EAAgB,OAAO7B,CAAI,EAC3BuB,EAAcvB,GAAQpI,CACxB,CAEA,OAAAiK,EAAgB,OAAO,EAEvBzC,EAAM,OAASsC,EAAgB,OAE/BtC,EAAM,aAAeoC,EACrBpC,EAAM,cAAgBmC,EACtBnC,EAAM,SAAWyC,EAAgB,KACjCzC,EAAM,SAAWtG,EAEV,IAAI1C,EAAK,MAAMgJ,CAAK,CAC7B,EACA;AAAA;A
AAA;AAAA,GA6BAhJ,EAAK,QAAU,UAAY,CACzB,KAAK,KAAO,KACZ,KAAK,QAAU,OAAO,OAAO,IAAI,EACjC,KAAK,WAAa,OAAO,OAAO,IAAI,EACpC,KAAK,cAAgB,OAAO,OAAO,IAAI,EACvC,KAAK,qBAAuB,CAAC,EAC7B,KAAK,aAAe,CAAC,EACrB,KAAK,UAAYA,EAAK,UACtB,KAAK,SAAW,IAAIA,EAAK,SACzB,KAAK,eAAiB,IAAIA,EAAK,SAC/B,KAAK,cAAgB,EACrB,KAAK,GAAK,IACV,KAAK,IAAM,IACX,KAAK,UAAY,EACjB,KAAK,kBAAoB,CAAC,CAC5B,EAcAA,EAAK,QAAQ,UAAU,IAAM,SAAUqL,EAAK,CAC1C,KAAK,KAAOA,CACd,EAkCArL,EAAK,QAAQ,UAAU,MAAQ,SAAUW,EAAWgL,EAAY,CAC9D,GAAI,KAAK,KAAKhL,CAAS,EACrB,MAAM,IAAI,WAAY,UAAYA,EAAY,kCAAkC,EAGlF,KAAK,QAAQA,GAAagL,GAAc,CAAC,CAC3C,EAUA3L,EAAK,QAAQ,UAAU,EAAI,SAAU4L,EAAQ,CACvCA,EAAS,EACX,KAAK,GAAK,EACDA,EAAS,EAClB,KAAK,GAAK,EAEV,KAAK,GAAKA,CAEd,EASA5L,EAAK,QAAQ,UAAU,GAAK,SAAU4L,EAAQ,CAC5C,KAAK,IAAMA,CACb,EAmBA5L,EAAK,QAAQ,UAAU,IAAM,SAAU6L,EAAKF,EAAY,CACtD,IAAIjL,EAASmL,EAAI,KAAK,MAClBC,EAAS,OAAO,KAAK,KAAK,OAAO,EAErC,KAAK,WAAWpL,GAAUiL,GAAc,CAAC,EACzC,KAAK,eAAiB,EAEtB,QAAS1K,EAAI,EAAGA,EAAI6K,EAAO,OAAQ7K,IAAK,CACtC,IAAIN,EAAYmL,EAAO7K,GACnB8K,EAAY,KAAK,QAAQpL,GAAW,UACpCoJ,EAAQgC,EAAYA,EAAUF,CAAG,EAAIA,EAAIlL,GACzCsB,EAAS,KAAK,UAAU8H,EAAO,CAC7B,OAAQ,CAACpJ,CAAS,CACpB,CAAC,EACD8I,EAAQ,KAAK,SAAS,IAAIxH,CAAM,EAChClB,EAAW,IAAIf,EAAK,SAAUU,EAAQC,CAAS,EAC/CqL,EAAa,OAAO,OAAO,IAAI,EAEnC,KAAK,qBAAqBjL,GAAYiL,EACtC,KAAK,aAAajL,GAAY,EAG9B,KAAK,aAAaA,IAAa0I,EAAM,OAGrC,QAASvG,EAAI,EAAGA,EAAIuG,EAAM,OAAQvG,IAAK,CACrC,IAAI0G,EAAOH,EAAMvG,GAUjB,GARI8I,EAAWpC,IAAS,OACtBoC,EAAWpC,GAAQ,GAGrBoC,EAAWpC,IAAS,EAIhB,KAAK,cAAcA,IAAS,KAAW,CACzC,IAAIpI,EAAU,OAAO,OAAO,IAAI,EAChCA,EAAQ,OAAY,KAAK,UACzB,KAAK,WAAa,EAElB,QAAS4B,EAAI,EAAGA,EAAI0I,EAAO,OAAQ1I,IACjC5B,EAAQsK,EAAO1I,IAAM,OAAO,OAAO,IAAI,EAGzC,KAAK,cAAcwG,GAAQpI,CAC7B,CAGI,KAAK,cAAcoI,GAAMjJ,GAAWD,IAAW,OACjD,KAAK,cAAckJ,GAAMjJ,GAAWD,GAAU,OAAO,OAAO,IAAI,GAKlE,QAAS4J,EAAI,EAAGA,EAAI,KAAK,kBAAkB,OAAQA,IAAK,CACtD,IAAI2B,EAAc,KAAK,kBAAkB3B,GACrCzI,EAAW+H,EAAK,SAASqC,GAEzB,KAAK,cAAcrC,GAAMjJ,GAAWD,GAAQuL,IAAgB,OAC9D,KAAK,cAAcrC,GAAMjJ,GAAWD,GAAQuL,GAAe,CAAC,GAG9D,KAAK,cAAcrC,GAAMjJ,GAAWD,GAAQuL,GAAa,KAAKpK,CAAQ,CACxE,
CACF,CAEF,CACF,EAOA7B,EAAK,QAAQ,UAAU,6BAA+B,UAAY,CAOhE,QALIkM,EAAY,OAAO,KAAK,KAAK,YAAY,EACzCC,EAAiBD,EAAU,OAC3BE,EAAc,CAAC,EACfC,EAAqB,CAAC,EAEjBpL,EAAI,EAAGA,EAAIkL,EAAgBlL,IAAK,CACvC,IAAIF,EAAWf,EAAK,SAAS,WAAWkM,EAAUjL,EAAE,EAChD8I,EAAQhJ,EAAS,UAErBsL,EAAmBtC,KAAWsC,EAAmBtC,GAAS,GAC1DsC,EAAmBtC,IAAU,EAE7BqC,EAAYrC,KAAWqC,EAAYrC,GAAS,GAC5CqC,EAAYrC,IAAU,KAAK,aAAahJ,EAC1C,CAIA,QAFI+K,EAAS,OAAO,KAAK,KAAK,OAAO,EAE5B7K,EAAI,EAAGA,EAAI6K,EAAO,OAAQ7K,IAAK,CACtC,IAAIN,EAAYmL,EAAO7K,GACvBmL,EAAYzL,GAAayL,EAAYzL,GAAa0L,EAAmB1L,EACvE,CAEA,KAAK,mBAAqByL,CAC5B,EAOApM,EAAK,QAAQ,UAAU,mBAAqB,UAAY,CAMtD,QALIoL,EAAe,CAAC,EAChBc,EAAY,OAAO,KAAK,KAAK,oBAAoB,EACjDI,EAAkBJ,EAAU,OAC5BK,EAAe,OAAO,OAAO,IAAI,EAE5BtL,EAAI,EAAGA,EAAIqL,EAAiBrL,IAAK,CAaxC,QAZIF,EAAWf,EAAK,SAAS,WAAWkM,EAAUjL,EAAE,EAChDN,EAAYI,EAAS,UACrByL,EAAc,KAAK,aAAazL,GAChCgK,EAAc,IAAI/K,EAAK,OACvByM,EAAkB,KAAK,qBAAqB1L,GAC5C0I,EAAQ,OAAO,KAAKgD,CAAe,EACnCC,EAAcjD,EAAM,OAGpBkD,EAAa,KAAK,QAAQhM,GAAW,OAAS,EAC9CiM,EAAW,KAAK,WAAW7L,EAAS,QAAQ,OAAS,EAEhDmC,EAAI,EAAGA,EAAIwJ,EAAaxJ,IAAK,CACpC,IAAI0G,EAAOH,EAAMvG,GACb2J,EAAKJ,EAAgB7C,GACrBK,EAAY,KAAK,cAAcL,GAAM,OACrCkD,EAAK9B,EAAO+B,EAEZR,EAAa3C,KAAU,QACzBkD,EAAM9M,EAAK,IAAI,KAAK,cAAc4J,GAAO,KAAK,aAAa,EAC3D2C,EAAa3C,GAAQkD,GAErBA,EAAMP,EAAa3C,GAGrBoB,EAAQ8B,IAAQ,KAAK,IAAM,GAAKD,IAAO,KAAK,KAAO,EAAI,KAAK,GAAK,KAAK,IAAML,EAAc,KAAK,mBAAmB7L,KAAekM,GACjI7B,GAAS2B,EACT3B,GAAS4B,EACTG,EAAqB,KAAK,MAAM/B,EAAQ,GAAI,EAAI,IAQhDD,EAAY,OAAOd,EAAW8C,CAAkB,CAClD,CAEA3B,EAAarK,GAAYgK,CAC3B,CAEA,KAAK,aAAeK,CACtB,EAOApL,EAAK,QAAQ,UAAU,eAAiB,UAAY,CAClD,KAAK,SAAWA,EAAK,SAAS,UAC5B,OAAO,KAAK,KAAK,aAAa,EAAE,KAAK,CACvC,CACF,EAUAA,EAAK,QAAQ,UAAU,MAAQ,UAAY,CACzC,YAAK,6BAA6B,EAClC,KAAK,mBAAmB,EACxB,KAAK,eAAe,EAEb,IAAIA,EAAK,MAAM,CACpB,cAAe,KAAK,cACpB,aAAc,KAAK,aACnB,SAAU,KAAK,SACf,OAAQ,OAAO,KAAK,KAAK,OAAO,EAChC,SAAU,KAAK,cACjB,CAAC,CACH,EAgBAA,EAAK,QAAQ,UAAU,IAAM,SAAU8B,EAAI,CACzC,IAAIkL,EAAO,MAAM,UAAU,MAAM,KAAK,UAAW,CAAC,EAClDA,EAAK,QAAQ,IAAI,EACjBlL,EAAG,MAAM,KAAMkL,CAAI,CACrB,EAaAhN,EAAK,UAAY,SAAU4J,EAAMG,E
AAOlI,EAAU,CAShD,QARIoL,EAAiB,OAAO,OAAO,IAAI,EACnCC,EAAe,OAAO,KAAKrL,GAAY,CAAC,CAAC,EAOpCZ,EAAI,EAAGA,EAAIiM,EAAa,OAAQjM,IAAK,CAC5C,IAAIT,EAAM0M,EAAajM,GACvBgM,EAAezM,GAAOqB,EAASrB,GAAK,MAAM,CAC5C,CAEA,KAAK,SAAW,OAAO,OAAO,IAAI,EAE9BoJ,IAAS,SACX,KAAK,SAASA,GAAQ,OAAO,OAAO,IAAI,EACxC,KAAK,SAASA,GAAMG,GAASkD,EAEjC,EAWAjN,EAAK,UAAU,UAAU,QAAU,SAAUmN,EAAgB,CAG3D,QAFI1D,EAAQ,OAAO,KAAK0D,EAAe,QAAQ,EAEtClM,EAAI,EAAGA,EAAIwI,EAAM,OAAQxI,IAAK,CACrC,IAAI2I,EAAOH,EAAMxI,GACb6K,EAAS,OAAO,KAAKqB,EAAe,SAASvD,EAAK,EAElD,KAAK,SAASA,IAAS,OACzB,KAAK,SAASA,GAAQ,OAAO,OAAO,IAAI,GAG1C,QAAS1G,EAAI,EAAGA,EAAI4I,EAAO,OAAQ5I,IAAK,CACtC,IAAI6G,EAAQ+B,EAAO5I,GACf3C,EAAO,OAAO,KAAK4M,EAAe,SAASvD,GAAMG,EAAM,EAEvD,KAAK,SAASH,GAAMG,IAAU,OAChC,KAAK,SAASH,GAAMG,GAAS,OAAO,OAAO,IAAI,GAGjD,QAAS3G,EAAI,EAAGA,EAAI7C,EAAK,OAAQ6C,IAAK,CACpC,IAAI5C,EAAMD,EAAK6C,GAEX,KAAK,SAASwG,GAAMG,GAAOvJ,IAAQ,KACrC,KAAK,SAASoJ,GAAMG,GAAOvJ,GAAO2M,EAAe,SAASvD,GAAMG,GAAOvJ,GAEvE,KAAK,SAASoJ,GAAMG,GAAOvJ,GAAO,KAAK,SAASoJ,GAAMG,GAAOvJ,GAAK,OAAO2M,EAAe,SAASvD,GAAMG,GAAOvJ,EAAI,CAGtH,CACF,CACF,CACF,EASAR,EAAK,UAAU,UAAU,IAAM,SAAU4J,EAAMG,EAAOlI,EAAU,CAC9D,GAAI,EAAE+H,KAAQ,KAAK,UAAW,CAC5B,KAAK,SAASA,GAAQ,OAAO,OAAO,IAAI,EACxC,KAAK,SAASA,GAAMG,GAASlI,EAC7B,MACF,CAEA,GAAI,EAAEkI,KAAS,KAAK,SAASH,IAAQ,CACnC,KAAK,SAASA,GAAMG,GAASlI,EAC7B,MACF,CAIA,QAFIqL,EAAe,OAAO,KAAKrL,CAAQ,EAE9BZ,EAAI,EAAGA,EAAIiM,EAAa,OAAQjM,IAAK,CAC5C,IAAIT,EAAM0M,EAAajM,GAEnBT,KAAO,KAAK,SAASoJ,GAAMG,GAC7B,KAAK,SAASH,GAAMG,GAAOvJ,GAAO,KAAK,SAASoJ,GAAMG,GAAOvJ,GAAK,OAAOqB,EAASrB,EAAI,EAEtF,KAAK,SAASoJ,GAAMG,GAAOvJ,GAAOqB,EAASrB,EAE/C,CACF,EAYAR,EAAK,MAAQ,SAAUoN,EAAW,CAChC,KAAK,QAAU,CAAC,EAChB,KAAK,UAAYA,CACnB,EA0BApN,EAAK,MAAM,SAAW,IAAI,OAAQ,GAAG,EACrCA,EAAK,MAAM,SAAS,KAAO,EAC3BA,EAAK,MAAM,SAAS,QAAU,EAC9BA,EAAK,MAAM,SAAS,SAAW,EAa/BA,EAAK,MAAM,SAAW,CAIpB,SAAU,EAMV,SAAU,EAMV,WAAY,CACd,EAyBAA,EAAK,MAAM,UAAU,OAAS,SAAUkH,EAAQ,CAC9C,MAAM,WAAYA,IAChBA,EAAO,OAAS,KAAK,WAGjB,UAAWA,IACfA,EAAO,MAAQ,GAGX,gBAAiBA,IACrBA,EAAO,YAAc,IAGjB,aAAcA,IAClBA,EAAO,SAAWlH,EAAK,MAAM,SAAS,MAGnCk
H,EAAO,SAAWlH,EAAK,MAAM,SAAS,SAAakH,EAAO,KAAK,OAAO,CAAC,GAAKlH,EAAK,MAAM,WAC1FkH,EAAO,KAAO,IAAMA,EAAO,MAGxBA,EAAO,SAAWlH,EAAK,MAAM,SAAS,UAAckH,EAAO,KAAK,MAAM,EAAE,GAAKlH,EAAK,MAAM,WAC3FkH,EAAO,KAAO,GAAKA,EAAO,KAAO,KAG7B,aAAcA,IAClBA,EAAO,SAAWlH,EAAK,MAAM,SAAS,UAGxC,KAAK,QAAQ,KAAKkH,CAAM,EAEjB,IACT,EASAlH,EAAK,MAAM,UAAU,UAAY,UAAY,CAC3C,QAASiB,EAAI,EAAGA,EAAI,KAAK,QAAQ,OAAQA,IACvC,GAAI,KAAK,QAAQA,GAAG,UAAYjB,EAAK,MAAM,SAAS,WAClD,MAAO,GAIX,MAAO,EACT,EA4BAA,EAAK,MAAM,UAAU,KAAO,SAAU4J,EAAMyD,EAAS,CACnD,GAAI,MAAM,QAAQzD,CAAI,EACpB,OAAAA,EAAK,QAAQ,SAAU7H,EAAG,CAAE,KAAK,KAAKA,EAAG/B,EAAK,MAAM,MAAMqN,CAAO,CAAC,CAAE,EAAG,IAAI,EACpE,KAGT,IAAInG,EAASmG,GAAW,CAAC,EACzB,OAAAnG,EAAO,KAAO0C,EAAK,SAAS,EAE5B,KAAK,OAAO1C,CAAM,EAEX,IACT,EACAlH,EAAK,gBAAkB,SAAUI,EAASmD,EAAOC,EAAK,CACpD,KAAK,KAAO,kBACZ,KAAK,QAAUpD,EACf,KAAK,MAAQmD,EACb,KAAK,IAAMC,CACb,EAEAxD,EAAK,gBAAgB,UAAY,IAAI,MACrCA,EAAK,WAAa,SAAU4B,EAAK,CAC/B,KAAK,QAAU,CAAC,EAChB,KAAK,IAAMA,EACX,KAAK,OAASA,EAAI,OAClB,KAAK,IAAM,EACX,KAAK,MAAQ,EACb,KAAK,oBAAsB,CAAC,CAC9B,EAEA5B,EAAK,WAAW,UAAU,IAAM,UAAY,CAG1C,QAFIsN,EAAQtN,EAAK,WAAW,QAErBsN,GACLA,EAAQA,EAAM,IAAI,CAEtB,EAEAtN,EAAK,WAAW,UAAU,YAAc,UAAY,CAKlD,QAJIuN,EAAY,CAAC,EACbpL,EAAa,KAAK,MAClBD,EAAW,KAAK,IAEX,EAAI,EAAG,EAAI,KAAK,oBAAoB,OAAQ,IACnDA,EAAW,KAAK,oBAAoB,GACpCqL,EAAU,KAAK,KAAK,IAAI,MAAMpL,EAAYD,CAAQ,CAAC,EACnDC,EAAaD,EAAW,EAG1B,OAAAqL,EAAU,KAAK,KAAK,IAAI,MAAMpL,EAAY,KAAK,GAAG,CAAC,EACnD,KAAK,oBAAoB,OAAS,EAE3BoL,EAAU,KAAK,EAAE,CAC1B,EAEAvN,EAAK,WAAW,UAAU,KAAO,SAAUwN,EAAM,CAC/C,KAAK,QAAQ,KAAK,CAChB,KAAMA,EACN,IAAK,KAAK,YAAY,EACtB,MAAO,KAAK,MACZ,IAAK,KAAK,GACZ,CAAC,EAED,KAAK,MAAQ,KAAK,GACpB,EAEAxN,EAAK,WAAW,UAAU,gBAAkB,UAAY,CACtD,KAAK,oBAAoB,KAAK,KAAK,IAAM,CAAC,EAC1C,KAAK,KAAO,CACd,EAEAA,EAAK,WAAW,UAAU,KAAO,UAAY,CAC3C,GAAI,KAAK,KAAO,KAAK,OACnB,OAAOA,EAAK,WAAW,IAGzB,IAAIoC,EAAO,KAAK,IAAI,OAAO,KAAK,GAAG,EACnC,YAAK,KAAO,EACLA,CACT,EAEApC,EAAK,WAAW,UAAU,MAAQ,UAAY,CAC5C,OAAO,KAAK,IAAM,KAAK,KACzB,EAEAA,EAAK,WAAW,UAAU,OAAS,UAAY,CACzC,KAAK,OAAS,KAAK,MACrB,KAAK,KAAO,GAGd,KAAK,MAAQ,KAAK,
GACpB,EAEAA,EAAK,WAAW,UAAU,OAAS,UAAY,CAC7C,KAAK,KAAO,CACd,EAEAA,EAAK,WAAW,UAAU,eAAiB,UAAY,CACrD,IAAIoC,EAAMqL,EAEV,GACErL,EAAO,KAAK,KAAK,EACjBqL,EAAWrL,EAAK,WAAW,CAAC,QACrBqL,EAAW,IAAMA,EAAW,IAEjCrL,GAAQpC,EAAK,WAAW,KAC1B,KAAK,OAAO,CAEhB,EAEAA,EAAK,WAAW,UAAU,KAAO,UAAY,CAC3C,OAAO,KAAK,IAAM,KAAK,MACzB,EAEAA,EAAK,WAAW,IAAM,MACtBA,EAAK,WAAW,MAAQ,QACxBA,EAAK,WAAW,KAAO,OACvBA,EAAK,WAAW,cAAgB,gBAChCA,EAAK,WAAW,MAAQ,QACxBA,EAAK,WAAW,SAAW,WAE3BA,EAAK,WAAW,SAAW,SAAU0N,EAAO,CAC1C,OAAAA,EAAM,OAAO,EACbA,EAAM,KAAK1N,EAAK,WAAW,KAAK,EAChC0N,EAAM,OAAO,EACN1N,EAAK,WAAW,OACzB,EAEAA,EAAK,WAAW,QAAU,SAAU0N,EAAO,CAQzC,GAPIA,EAAM,MAAM,EAAI,IAClBA,EAAM,OAAO,EACbA,EAAM,KAAK1N,EAAK,WAAW,IAAI,GAGjC0N,EAAM,OAAO,EAETA,EAAM,KAAK,EACb,OAAO1N,EAAK,WAAW,OAE3B,EAEAA,EAAK,WAAW,gBAAkB,SAAU0N,EAAO,CACjD,OAAAA,EAAM,OAAO,EACbA,EAAM,eAAe,EACrBA,EAAM,KAAK1N,EAAK,WAAW,aAAa,EACjCA,EAAK,WAAW,OACzB,EAEAA,EAAK,WAAW,SAAW,SAAU0N,EAAO,CAC1C,OAAAA,EAAM,OAAO,EACbA,EAAM,eAAe,EACrBA,EAAM,KAAK1N,EAAK,WAAW,KAAK,EACzBA,EAAK,WAAW,OACzB,EAEAA,EAAK,WAAW,OAAS,SAAU0N,EAAO,CACpCA,EAAM,MAAM,EAAI,GAClBA,EAAM,KAAK1N,EAAK,WAAW,IAAI,CAEnC,EAaAA,EAAK,WAAW,cAAgBA,EAAK,UAAU,UAE/CA,EAAK,WAAW,QAAU,SAAU0N,EAAO,CACzC,OAAa,CACX,IAAItL,EAAOsL,EAAM,KAAK,EAEtB,GAAItL,GAAQpC,EAAK,WAAW,IAC1B,OAAOA,EAAK,WAAW,OAIzB,GAAIoC,EAAK,WAAW,CAAC,GAAK,GAAI,CAC5BsL,EAAM,gBAAgB,EACtB,QACF,CAEA,GAAItL,GAAQ,IACV,OAAOpC,EAAK,WAAW,SAGzB,GAAIoC,GAAQ,IACV,OAAAsL,EAAM,OAAO,EACTA,EAAM,MAAM,EAAI,GAClBA,EAAM,KAAK1N,EAAK,WAAW,IAAI,EAE1BA,EAAK,WAAW,gBAGzB,GAAIoC,GAAQ,IACV,OAAAsL,EAAM,OAAO,EACTA,EAAM,MAAM,EAAI,GAClBA,EAAM,KAAK1N,EAAK,WAAW,IAAI,EAE1BA,EAAK,WAAW,SAczB,GARIoC,GAAQ,KAAOsL,EAAM,MAAM,IAAM,GAQjCtL,GAAQ,KAAOsL,EAAM,MAAM,IAAM,EACnC,OAAAA,EAAM,KAAK1N,EAAK,WAAW,QAAQ,EAC5BA,EAAK,WAAW,QAGzB,GAAIoC,EAAK,MAAMpC,EAAK,WAAW,aAAa,EAC1C,OAAOA,EAAK,WAAW,OAE3B,CACF,EAEAA,EAAK,YAAc,SAAU4B,EAAKsH,EAAO,CACvC,KAAK,MAAQ,IAAIlJ,EAAK,WAAY4B,CAAG,EACrC,KAAK,MAAQsH,EACb,KAAK,cAAgB,CAAC,EACtB,KAAK,UAAY,CACnB,EAEAlJ,EAAK,YAAY,UAAU,MAAQ,UAAY,CAC7C,KAAK,MAAM,IAAI,EACf,KAAK,QAAU,KAAK,MAAM,Q
AI1B,QAFIsN,EAAQtN,EAAK,YAAY,YAEtBsN,GACLA,EAAQA,EAAM,IAAI,EAGpB,OAAO,KAAK,KACd,EAEAtN,EAAK,YAAY,UAAU,WAAa,UAAY,CAClD,OAAO,KAAK,QAAQ,KAAK,UAC3B,EAEAA,EAAK,YAAY,UAAU,cAAgB,UAAY,CACrD,IAAI2N,EAAS,KAAK,WAAW,EAC7B,YAAK,WAAa,EACXA,CACT,EAEA3N,EAAK,YAAY,UAAU,WAAa,UAAY,CAClD,IAAI4N,EAAkB,KAAK,cAC3B,KAAK,MAAM,OAAOA,CAAe,EACjC,KAAK,cAAgB,CAAC,CACxB,EAEA5N,EAAK,YAAY,YAAc,SAAUmJ,EAAQ,CAC/C,IAAIwE,EAASxE,EAAO,WAAW,EAE/B,GAAIwE,GAAU,KAId,OAAQA,EAAO,KAAM,CACnB,KAAK3N,EAAK,WAAW,SACnB,OAAOA,EAAK,YAAY,cAC1B,KAAKA,EAAK,WAAW,MACnB,OAAOA,EAAK,YAAY,WAC1B,KAAKA,EAAK,WAAW,KACnB,OAAOA,EAAK,YAAY,UAC1B,QACE,IAAI6N,EAAe,4CAA8CF,EAAO,KAExE,MAAIA,EAAO,IAAI,QAAU,IACvBE,GAAgB,gBAAkBF,EAAO,IAAM,KAG3C,IAAI3N,EAAK,gBAAiB6N,EAAcF,EAAO,MAAOA,EAAO,GAAG,CAC1E,CACF,EAEA3N,EAAK,YAAY,cAAgB,SAAUmJ,EAAQ,CACjD,IAAIwE,EAASxE,EAAO,cAAc,EAElC,GAAIwE,GAAU,KAId,QAAQA,EAAO,IAAK,CAClB,IAAK,IACHxE,EAAO,cAAc,SAAWnJ,EAAK,MAAM,SAAS,WACpD,MACF,IAAK,IACHmJ,EAAO,cAAc,SAAWnJ,EAAK,MAAM,SAAS,SACpD,MACF,QACE,IAAI6N,EAAe,kCAAoCF,EAAO,IAAM,IACpE,MAAM,IAAI3N,EAAK,gBAAiB6N,EAAcF,EAAO,MAAOA,EAAO,GAAG,CAC1E,CAEA,IAAIG,EAAa3E,EAAO,WAAW,EAEnC,GAAI2E,GAAc,KAAW,CAC3B,IAAID,EAAe,yCACnB,MAAM,IAAI7N,EAAK,gBAAiB6N,EAAcF,EAAO,MAAOA,EAAO,GAAG,CACxE,CAEA,OAAQG,EAAW,KAAM,CACvB,KAAK9N,EAAK,WAAW,MACnB,OAAOA,EAAK,YAAY,WAC1B,KAAKA,EAAK,WAAW,KACnB,OAAOA,EAAK,YAAY,UAC1B,QACE,IAAI6N,EAAe,mCAAqCC,EAAW,KAAO,IAC1E,MAAM,IAAI9N,EAAK,gBAAiB6N,EAAcC,EAAW,MAAOA,EAAW,GAAG,CAClF,EACF,EAEA9N,EAAK,YAAY,WAAa,SAAUmJ,EAAQ,CAC9C,IAAIwE,EAASxE,EAAO,cAAc,EAElC,GAAIwE,GAAU,KAId,IAAIxE,EAAO,MAAM,UAAU,QAAQwE,EAAO,GAAG,GAAK,GAAI,CACpD,IAAII,EAAiB5E,EAAO,MAAM,UAAU,IAAI,SAAU6E,EAAG,CAAE,MAAO,IAAMA,EAAI,GAAI,CAAC,EAAE,KAAK,IAAI,EAC5FH,EAAe,uBAAyBF,EAAO,IAAM,uBAAyBI,EAElF,MAAM,IAAI/N,EAAK,gBAAiB6N,EAAcF,EAAO,MAAOA,EAAO,GAAG,CACxE,CAEAxE,EAAO,cAAc,OAAS,CAACwE,EAAO,GAAG,EAEzC,IAAIG,EAAa3E,EAAO,WAAW,EAEnC,GAAI2E,GAAc,KAAW,CAC3B,IAAID,EAAe,gCACnB,MAAM,IAAI7N,EAAK,gBAAiB6N,EAAcF,EAAO,MAAOA,EAAO,GAAG,CACxE,CAEA,OAAQG,EAAW,KAAM,CACvB,KAAK9N,EAAK,WAAW,KACnB,OAAOA,EAAK,YAAY,UAC1B,QACE,
IAAI6N,EAAe,0BAA4BC,EAAW,KAAO,IACjE,MAAM,IAAI9N,EAAK,gBAAiB6N,EAAcC,EAAW,MAAOA,EAAW,GAAG,CAClF,EACF,EAEA9N,EAAK,YAAY,UAAY,SAAUmJ,EAAQ,CAC7C,IAAIwE,EAASxE,EAAO,cAAc,EAElC,GAAIwE,GAAU,KAId,CAAAxE,EAAO,cAAc,KAAOwE,EAAO,IAAI,YAAY,EAE/CA,EAAO,IAAI,QAAQ,GAAG,GAAK,KAC7BxE,EAAO,cAAc,YAAc,IAGrC,IAAI2E,EAAa3E,EAAO,WAAW,EAEnC,GAAI2E,GAAc,KAAW,CAC3B3E,EAAO,WAAW,EAClB,MACF,CAEA,OAAQ2E,EAAW,KAAM,CACvB,KAAK9N,EAAK,WAAW,KACnB,OAAAmJ,EAAO,WAAW,EACXnJ,EAAK,YAAY,UAC1B,KAAKA,EAAK,WAAW,MACnB,OAAAmJ,EAAO,WAAW,EACXnJ,EAAK,YAAY,WAC1B,KAAKA,EAAK,WAAW,cACnB,OAAOA,EAAK,YAAY,kBAC1B,KAAKA,EAAK,WAAW,MACnB,OAAOA,EAAK,YAAY,WAC1B,KAAKA,EAAK,WAAW,SACnB,OAAAmJ,EAAO,WAAW,EACXnJ,EAAK,YAAY,cAC1B,QACE,IAAI6N,EAAe,2BAA6BC,EAAW,KAAO,IAClE,MAAM,IAAI9N,EAAK,gBAAiB6N,EAAcC,EAAW,MAAOA,EAAW,GAAG,CAClF,EACF,EAEA9N,EAAK,YAAY,kBAAoB,SAAUmJ,EAAQ,CACrD,IAAIwE,EAASxE,EAAO,cAAc,EAElC,GAAIwE,GAAU,KAId,KAAIxG,EAAe,SAASwG,EAAO,IAAK,EAAE,EAE1C,GAAI,MAAMxG,CAAY,EAAG,CACvB,IAAI0G,EAAe,gCACnB,MAAM,IAAI7N,EAAK,gBAAiB6N,EAAcF,EAAO,MAAOA,EAAO,GAAG,CACxE,CAEAxE,EAAO,cAAc,aAAehC,EAEpC,IAAI2G,EAAa3E,EAAO,WAAW,EAEnC,GAAI2E,GAAc,KAAW,CAC3B3E,EAAO,WAAW,EAClB,MACF,CAEA,OAAQ2E,EAAW,KAAM,CACvB,KAAK9N,EAAK,WAAW,KACnB,OAAAmJ,EAAO,WAAW,EACXnJ,EAAK,YAAY,UAC1B,KAAKA,EAAK,WAAW,MACnB,OAAAmJ,EAAO,WAAW,EACXnJ,EAAK,YAAY,WAC1B,KAAKA,EAAK,WAAW,cACnB,OAAOA,EAAK,YAAY,kBAC1B,KAAKA,EAAK,WAAW,MACnB,OAAOA,EAAK,YAAY,WAC1B,KAAKA,EAAK,WAAW,SACnB,OAAAmJ,EAAO,WAAW,EACXnJ,EAAK,YAAY,cAC1B,QACE,IAAI6N,EAAe,2BAA6BC,EAAW,KAAO,IAClE,MAAM,IAAI9N,EAAK,gBAAiB6N,EAAcC,EAAW,MAAOA,EAAW,GAAG,CAClF,EACF,EAEA9N,EAAK,YAAY,WAAa,SAAUmJ,EAAQ,CAC9C,IAAIwE,EAASxE,EAAO,cAAc,EAElC,GAAIwE,GAAU,KAId,KAAIM,EAAQ,SAASN,EAAO,IAAK,EAAE,EAEnC,GAAI,MAAMM,CAAK,EAAG,CAChB,IAAIJ,EAAe,wBACnB,MAAM,IAAI7N,EAAK,gBAAiB6N,EAAcF,EAAO,MAAOA,EAAO,GAAG,CACxE,CAEAxE,EAAO,cAAc,MAAQ8E,EAE7B,IAAIH,EAAa3E,EAAO,WAAW,EAEnC,GAAI2E,GAAc,KAAW,CAC3B3E,EAAO,WAAW,EAClB,MACF,CAEA,OAAQ2E,EAAW,KAAM,CACvB,KAAK9N,EAAK,WAAW,KACnB,OAAAmJ,EAAO,WAAW,EACXnJ,EAAK,YAAY,UAC1B,KAAKA,EAAK,WAAW,MACnB,OAAAmJ,EAAO,WAAW,EACXnJ,EAAK,YA
AY,WAC1B,KAAKA,EAAK,WAAW,cACnB,OAAOA,EAAK,YAAY,kBAC1B,KAAKA,EAAK,WAAW,MACnB,OAAOA,EAAK,YAAY,WAC1B,KAAKA,EAAK,WAAW,SACnB,OAAAmJ,EAAO,WAAW,EACXnJ,EAAK,YAAY,cAC1B,QACE,IAAI6N,EAAe,2BAA6BC,EAAW,KAAO,IAClE,MAAM,IAAI9N,EAAK,gBAAiB6N,EAAcC,EAAW,MAAOA,EAAW,GAAG,CAClF,EACF,EAMI,SAAU1G,EAAM8G,EAAS,CACrB,OAAO,QAAW,YAAc,OAAO,IAEzC,OAAOA,CAAO,EACL,OAAOpO,IAAY,SAM5BC,GAAO,QAAUmO,EAAQ,EAGzB9G,EAAK,KAAO8G,EAAQ,CAExB,EAAE,KAAM,UAAY,CAMlB,OAAOlO,CACT,CAAC,CACH,GAAG,ICl5GH,IAAAmO,EAAAC,EAAA,CAAAC,GAAAC,KAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA,GAeA,IAAIC,GAAkB,UAOtBD,GAAO,QAAUE,GAUjB,SAASA,GAAWC,EAAQ,CAC1B,IAAIC,EAAM,GAAKD,EACXE,EAAQJ,GAAgB,KAAKG,CAAG,EAEpC,GAAI,CAACC,EACH,OAAOD,EAGT,IAAIE,EACAC,EAAO,GACPC,EAAQ,EACRC,EAAY,EAEhB,IAAKD,EAAQH,EAAM,MAAOG,EAAQJ,EAAI,OAAQI,IAAS,CACrD,OAAQJ,EAAI,WAAWI,CAAK,EAAG,CAC7B,IAAK,IACHF,EAAS,SACT,MACF,IAAK,IACHA,EAAS,QACT,MACF,IAAK,IACHA,EAAS,QACT,MACF,IAAK,IACHA,EAAS,OACT,MACF,IAAK,IACHA,EAAS,OACT,MACF,QACE,QACJ,CAEIG,IAAcD,IAChBD,GAAQH,EAAI,UAAUK,EAAWD,CAAK,GAGxCC,EAAYD,EAAQ,EACpBD,GAAQD,CACV,CAEA,OAAOG,IAAcD,EACjBD,EAAOH,EAAI,UAAUK,EAAWD,CAAK,EACrCD,CACN,ICvDA,IAAAG,GAAiB,QCKZ,OAAO,UACV,OAAO,QAAU,SAAUC,EAAa,CACtC,IAAMC,EAA2B,CAAC,EAClC,QAAWC,KAAO,OAAO,KAAKF,CAAG,EAE/BC,EAAK,KAAK,CAACC,EAAKF,EAAIE,EAAI,CAAC,EAG3B,OAAOD,CACT,GAGG,OAAO,SACV,OAAO,OAAS,SAAUD,EAAa,CACrC,IAAMC,EAAiB,CAAC,EACxB,QAAWC,KAAO,OAAO,KAAKF,CAAG,EAE/BC,EAAK,KAAKD,EAAIE,EAAI,EAGpB,OAAOD,CACT,GAKE,OAAO,SAAY,cAGhB,QAAQ,UAAU,WACrB,QAAQ,UAAU,SAAW,SAC3BE,EAA8BC,EACxB,CACF,OAAOD,GAAM,UACf,KAAK,WAAaA,EAAE,KACpB,KAAK,UAAYA,EAAE,MAEnB,KAAK,WAAaA,EAClB,KAAK,UAAYC,EAErB,GAGG,QAAQ,UAAU,cACrB,QAAQ,UAAU,YAAc,YAC3BC,EACG,CACN,IAAMC,EAAS,KAAK,WACpB,GAAIA,EAAQ,CACND,EAAM,SAAW,GACnBC,EAAO,YAAY,IAAI,EAGzB,QAASC,EAAIF,EAAM,OAAS,EAAGE,GAAK,EAAGA,IAAK,CAC1C,IAAIC,EAAOH,EAAME,GACb,OAAOC,GAAS,SAClBA,EAAO,SAAS,eAAeA,CAAI,EAC5BA,EAAK,YACZA,EAAK,WAAW,YAAYA,CAAI,EAG7BD,EAGHD,EAAO,aAAa,KAAK,gBAAkBE,CAAI,EAF/CF,EAAO,aAAaE,EAAM,IAAI,CAGlC,CACF,CACF,ICxEJ,IAAAC,GAAuB,OAiChB,SAASC,GACdC,EACmB,CACnB,IAAMC,EAAY,IAAI,IAChB
C,EAAY,IAAI,IACtB,QAAWC,KAAOH,EAAM,CACtB,GAAM,CAACI,EAAMC,CAAI,EAAIF,EAAI,SAAS,MAAM,GAAG,EAGrCG,EAAWH,EAAI,SACfI,EAAWJ,EAAI,MACfK,EAAWL,EAAI,KAGfM,KAAO,GAAAC,SAAWP,EAAI,IAAI,EAC7B,QAAQ,mBAAoB,EAAE,EAC9B,QAAQ,OAAQ,GAAG,EAGtB,GAAIE,EAAM,CACR,IAAMM,EAASV,EAAU,IAAIG,CAAI,EAG5BF,EAAQ,IAAIS,CAAM,EASrBV,EAAU,IAAIK,EAAU,CACtB,SAAAA,EACA,MAAAC,EACA,KAAAE,EACA,OAAAE,CACF,CAAC,GAbDA,EAAO,MAAQR,EAAI,MACnBQ,EAAO,KAAQF,EAGfP,EAAQ,IAAIS,CAAM,EAatB,MACEV,EAAU,IAAIK,EAAUM,EAAA,CACtB,SAAAN,EACA,MAAAC,EACA,KAAAE,GACGD,GAAQ,CAAE,KAAAA,CAAK,EACnB,CAEL,CACA,OAAOP,CACT,CCpFA,IAAAY,GAAuB,OAsChB,SAASC,GACdC,EAA2BC,EACD,CAC1B,IAAMC,EAAY,IAAI,OAAOF,EAAO,UAAW,KAAK,EAC9CG,EAAY,CAACC,EAAYC,EAAcC,IACpC,GAAGD,4BAA+BC,WAI3C,OAAQC,GAAkB,CACxBA,EAAQA,EACL,QAAQ,gBAAiB,GAAG,EAC5B,KAAK,EAGR,IAAMC,EAAQ,IAAI,OAAO,MAAMR,EAAO,cACpCO,EACG,QAAQ,uBAAwB,MAAM,EACtC,QAAQL,EAAW,GAAG,KACtB,KAAK,EAGV,OAAOO,IACLR,KACI,GAAAS,SAAWD,CAAK,EAChBA,GAED,QAAQD,EAAOL,CAAS,EACxB,QAAQ,8BAA+B,IAAI,CAClD,CACF,CCtCO,SAASQ,GACdC,EACqB,CACrB,IAAMC,EAAS,IAAK,KAAa,MAAM,CAAC,QAAS,MAAM,CAAC,EAIxD,OAHe,IAAK,KAAa,YAAYD,EAAOC,CAAK,EAGlD,MAAM,EACNA,EAAM,OACf,CAUO,SAASC,GACdD,EAA4BE,EACV,CAzEpB,IAAAC,EA0EE,IAAMC,EAAU,IAAI,IAAuBJ,CAAK,EAG1CK,EAA2B,CAAC,EAClC,QAASC,EAAI,EAAGA,EAAIJ,EAAM,OAAQI,IAChC,QAAWC,KAAUH,EACfF,EAAMI,GAAG,WAAWC,EAAO,IAAI,IACjCF,EAAOE,EAAO,MAAQ,GACtBH,EAAQ,OAAOG,CAAM,GAI3B,QAAWA,KAAUH,GACfD,EAAA,KAAK,iBAAL,MAAAA,EAAA,UAAsBI,EAAO,QAC/BF,EAAOE,EAAO,MAAQ,IAG1B,OAAOF,CACT,CC2BA,SAASG,GAAWC,EAAaC,EAAuB,CACtD,GAAM,CAACC,EAAGC,CAAC,EAAI,CAAC,IAAI,IAAIH,CAAC,EAAG,IAAI,IAAIC,CAAC,CAAC,EACtC,MAAO,CACL,GAAG,IAAI,IAAI,CAAC,GAAGC,CAAC,EAAE,OAAOE,GAAS,CAACD,EAAE,IAAIC,CAAK,CAAC,CAAC,CAClD,CACF,CASO,IAAMC,EAAN,KAAa,CAgCX,YAAY,CAAE,OAAAC,EAAQ,KAAAC,EAAM,QAAAC,CAAQ,EAAgB,CACzD,KAAK,QAAUA,EAGf,KAAK,UAAYC,GAAuBF,CAAI,EAC5C,KAAK,UAAYG,GAAuBJ,EAAQ,EAAK,EAGrD,KAAK,UAAU,UAAY,IAAI,OAAOA,EAAO,SAAS,EAGtD,KAAK,MAAQ,KAAK,UAAY,CAGxBA,EAAO,KAAK,SAAW,GAAKA,EAAO,KAAK,KAAO,KACjD,KAAK,IAAK,KAAaA,EAAO,KAAK,GAAG,EAC7BA,EAAO,KAAK,OAAS,GAC9B,KAAK,IAAK,KAAa,cAA
c,GAAGA,EAAO,IAAI,CAAC,EAItD,IAAMK,EAAMZ,GAAW,CACrB,UAAW,iBAAkB,SAC/B,EAAGS,EAAQ,QAAQ,EAGnB,QAAWI,KAAQN,EAAO,KAAK,IAAIO,GACjCA,IAAa,KAAO,KAAQ,KAAaA,EAC1C,EACC,QAAWC,KAAMH,EACf,KAAK,SAAS,OAAOC,EAAKE,EAAG,EAC7B,KAAK,eAAe,OAAOF,EAAKE,EAAG,EAKvC,KAAK,IAAI,UAAU,EAGnB,KAAK,MAAM,QAAS,CAAE,MAAO,GAAI,CAAC,EAClC,KAAK,MAAM,MAAM,EACjB,KAAK,MAAM,OAAQ,CAAE,MAAO,IAAK,UAAWC,GAAO,CACjD,GAAM,CAAE,KAAAC,EAAO,CAAC,CAAE,EAAID,EACtB,OAAOC,EAAK,OAAO,CAACC,EAAMC,IAAQ,CAChC,GAAGD,EACH,GAAG,KAAK,UAAUC,CAAG,CACvB,EAAG,CAAC,CAAiB,CACvB,CAAE,CAAC,EAGH,QAAWH,KAAOR,EAChB,KAAK,IAAIQ,EAAK,CAAE,MAAOA,EAAI,KAAM,CAAC,CACtC,CAAC,CACH,CAkBO,OAAOI,EAA6B,CACzC,GAAIA,EACF,GAAI,CACF,IAAMC,EAAY,KAAK,UAAUD,CAAK,EAGhCE,EAAUC,GAAiBH,CAAK,EACnC,OAAOI,GACNA,EAAO,WAAa,KAAK,MAAM,SAAS,UACzC,EAGGC,EAAS,KAAK,MAAM,OAAO,GAAGL,IAAQ,EAGzC,OAAyB,CAACM,EAAM,CAAE,IAAAC,EAAK,MAAAC,EAAO,UAAAC,CAAU,IAAM,CAC7D,IAAMC,EAAW,KAAK,UAAU,IAAIH,CAAG,EACvC,GAAI,OAAOG,GAAa,YAAa,CACnC,GAAM,CAAE,SAAAC,EAAU,MAAAC,EAAO,KAAAC,EAAM,KAAAhB,EAAM,OAAAiB,CAAO,EAAIJ,EAG1CK,EAAQC,GACZd,EACA,OAAO,KAAKO,EAAU,QAAQ,CAChC,EAGMQ,EAAQ,CAAC,CAACH,GAAS,CAAC,OAAO,OAAOC,CAAK,EAAE,MAAMG,GAAKA,CAAC,EAC3DZ,EAAK,KAAKa,EAAAC,EAAA,CACR,SAAAT,EACA,MAAOV,EAAUW,CAAK,EACtB,KAAOX,EAAUY,CAAI,GAClBhB,GAAQ,CAAE,KAAMA,EAAK,IAAII,CAAS,CAAE,GAJ/B,CAKR,MAAOO,GAAS,EAAIS,GACpB,MAAAF,CACF,EAAC,CACH,CACA,OAAOT,CACT,EAAG,CAAC,CAAC,EAGJ,KAAK,CAACzB,EAAGC,IAAMA,EAAE,MAAQD,EAAE,KAAK,EAGhC,OAAO,CAACwC,EAAOC,IAAW,CACzB,IAAMZ,EAAW,KAAK,UAAU,IAAIY,EAAO,QAAQ,EACnD,GAAI,OAAOZ,GAAa,YAAa,CACnC,IAAMH,EAAM,WAAYG,EACpBA,EAAS,OAAQ,SACjBA,EAAS,SACbW,EAAM,IAAId,EAAK,CAAC,GAAGc,EAAM,IAAId,CAAG,GAAK,CAAC,EAAGe,CAAM,CAAC,CAClD,CACA,OAAOD,CACT,EAAG,IAAI,GAA+B,EAGpCE,EACJ,GAAI,KAAK,QAAQ,YAAa,CAC5B,IAAMC,EAAS,KAAK,MAAM,MAAMC,GAAW,CACzC,QAAWrB,KAAUF,EACnBuB,EAAQ,KAAKrB,EAAO,KAAM,CACxB,OAAQ,CAAC,OAAO,EAChB,SAAU,KAAK,MAAM,SAAS,SAC9B,SAAU,KAAK,MAAM,SAAS,QAChC,CAAC,CACL,CAAC,EAGDmB,EAAcC,EAAO,OACjB,OAAO,KAAKA,EAAO,GAAG,UAAU,QAAQ,EACxC,CAAC,CACP,CAGA,OAAOJ,EAAA,CACL,MAAO,CAAC,GAAGf,EAAO,OAAO,CAAC,GACvB,OAAOkB,GA
AgB,aAAe,CAAE,YAAAA,CAAY,EAI3D,OAAQG,EAAN,CACA,QAAQ,KAAK,kBAAkB1B,qCAAoC,CACrE,CAIF,MAAO,CAAE,MAAO,CAAC,CAAE,CACrB,CACF,EL3QA,IAAI2B,EAqBJ,SAAeC,GACbC,EACe,QAAAC,EAAA,sBACf,IAAIC,EAAO,UAGX,GAAI,OAAO,QAAW,aAAe,iBAAkB,OAAQ,CAC7D,IAAMC,EAAS,SAAS,cAAiC,aAAa,EAChE,CAACC,CAAI,EAAID,EAAO,IAAI,MAAM,SAAS,EAGzCD,EAAOA,EAAK,QAAQ,KAAME,CAAI,CAChC,CAGA,IAAMC,EAAU,CAAC,EACjB,QAAWC,KAAQN,EAAO,KAAM,CAC9B,OAAQM,EAAM,CAGZ,IAAK,KACHD,EAAQ,KAAK,GAAGH,cAAiB,EACjC,MAGF,IAAK,KACL,IAAK,KACHG,EAAQ,KAAK,GAAGH,cAAiB,EACjC,KACJ,CAGII,IAAS,MACXD,EAAQ,KAAK,GAAGH,cAAiBI,UAAa,CAClD,CAGIN,EAAO,KAAK,OAAS,GACvBK,EAAQ,KAAK,GAAGH,yBAA4B,EAG1CG,EAAQ,SACV,MAAM,cACJ,GAAGH,oCACH,GAAGG,CACL,EACJ,GAaA,SAAsBE,GACpBC,EACwB,QAAAP,EAAA,sBACxB,OAAQO,EAAQ,KAAM,CAGpB,OACE,aAAMT,GAAqBS,EAAQ,KAAK,MAAM,EAC9CV,EAAQ,IAAIW,EAAOD,EAAQ,IAAI,EACxB,CACL,MACF,EAGF,OACE,MAAO,CACL,OACA,KAAMV,EAAQA,EAAM,OAAOU,EAAQ,IAAI,EAAI,CAAE,MAAO,CAAC,CAAE,CACzD,EAGF,QACE,MAAM,IAAI,UAAU,sBAAsB,CAC9C,CACF,GAOA,KAAK,KAAO,GAAAE,QAGZ,iBAAiB,UAAiBC,GAAMV,EAAA,wBACtC,YAAY,MAAMM,GAAQI,EAAG,IAAI,CAAC,CACpC,EAAC", + "names": ["require_lunr", "__commonJSMin", "exports", "module", "lunr", "config", "builder", "global", "message", "obj", "clone", "keys", "key", "val", "docRef", "fieldName", "stringValue", "s", "n", "fieldRef", "elements", "i", "other", "object", "a", "b", "intersection", "element", "posting", "documentCount", "documentsWithTerm", "x", "str", "metadata", "fn", "t", "len", "tokens", "sliceEnd", "sliceStart", "char", "sliceLength", "tokenMetadata", "label", "isRegistered", "serialised", "pipeline", "fnName", "fns", "existingFn", "newFn", "pos", "stackLength", "memo", "j", "result", "k", "token", "index", "start", "end", "pivotPoint", "pivotIndex", "insertIdx", "position", "sumOfSquares", "elementsLength", "otherVector", "dotProduct", "aLen", "bLen", "aVal", "bVal", "output", "step2list", "step3list", "c", "v", "C", "V", "mgr0", "meq1", "mgr1", "s_v", "re_mgr0", "re_mgr1", "re_meq1", "re_s_v", "re_1a", "re2_1a", "re_1b", "re2_1b", 
"re_1b_2", "re2_1b_2", "re3_1b_2", "re4_1b_2", "re_1c", "re_2", "re_3", "re_4", "re2_4", "re_5", "re_5_1", "re3_5", "porterStemmer", "w", "stem", "suffix", "firstch", "re", "re2", "re3", "re4", "fp", "stopWords", "words", "stopWord", "arr", "clause", "editDistance", "root", "stack", "frame", "noEditNode", "insertionNode", "substitutionNode", "charA", "charB", "transposeNode", "node", "final", "next", "edges", "edge", "labels", "qEdges", "qLen", "nEdges", "nLen", "q", "qEdge", "nEdge", "qNode", "word", "commonPrefix", "nextNode", "downTo", "childKey", "attrs", "queryString", "query", "parser", "matchingFields", "queryVectors", "termFieldCache", "requiredMatches", "prohibitedMatches", "terms", "clauseMatches", "m", "term", "termTokenSet", "expandedTerms", "field", "expandedTerm", "termIndex", "fieldPosting", "matchingDocumentRefs", "termField", "matchingDocumentsSet", "l", "matchingDocumentRef", "matchingFieldRef", "fieldMatch", "allRequiredMatches", "allProhibitedMatches", "matchingFieldRefs", "results", "matches", "fieldVector", "score", "docMatch", "match", "invertedIndex", "fieldVectors", "ref", "serializedIndex", "serializedVectors", "serializedInvertedIndex", "tokenSetBuilder", "tuple", "attributes", "number", "doc", "fields", "extractor", "fieldTerms", "metadataKey", "fieldRefs", "numberOfFields", "accumulator", "documentsWithField", "fieldRefsLength", "termIdfCache", "fieldLength", "termFrequencies", "termsLength", "fieldBoost", "docBoost", "tf", "idf", "scoreWithPrecision", "args", "clonedMetadata", "metadataKeys", "otherMatchData", "allFields", "options", "state", "subSlices", "type", "charCode", "lexer", "lexeme", "completedClause", "errorMessage", "nextLexeme", "possibleFields", "f", "boost", "factory", "require_escape_html", "__commonJSMin", "exports", "module", "matchHtmlRegExp", "escapeHtml", "string", "str", "match", "escape", "html", "index", "lastIndex", "import_lunr", "obj", "data", "key", "x", "y", "nodes", "parent", "i", "node", 
"import_escape_html", "setupSearchDocumentMap", "docs", "documents", "parents", "doc", "path", "hash", "location", "title", "tags", "text", "escapeHTML", "parent", "__spreadValues", "import_escape_html", "setupSearchHighlighter", "config", "escape", "separator", "highlight", "_", "data", "term", "query", "match", "value", "escapeHTML", "parseSearchQuery", "value", "query", "getSearchQueryTerms", "terms", "_a", "clauses", "result", "t", "clause", "difference", "a", "b", "x", "y", "value", "Search", "config", "docs", "options", "setupSearchDocumentMap", "setupSearchHighlighter", "fns", "lang", "language", "fn", "doc", "tags", "list", "tag", "query", "highlight", "clauses", "parseSearchQuery", "clause", "groups", "item", "ref", "score", "matchData", "document", "location", "title", "text", "parent", "terms", "getSearchQueryTerms", "boost", "t", "__spreadProps", "__spreadValues", "items", "result", "suggestions", "titles", "builder", "e", "index", "setupSearchLanguages", "config", "__async", "base", "worker", "path", "scripts", "lang", "handler", "message", "Search", "lunr", "ev"] +} diff --git a/assets/stylesheets/main.7a952b86.min.css b/assets/stylesheets/main.7a952b86.min.css new file mode 100644 index 000000000..33db02dd2 --- /dev/null +++ b/assets/stylesheets/main.7a952b86.min.css @@ -0,0 +1 @@ +@charset "UTF-8";html{-webkit-text-size-adjust:none;-moz-text-size-adjust:none;-ms-text-size-adjust:none;text-size-adjust:none;box-sizing:border-box}*,:after,:before{box-sizing:inherit}@media 
(prefers-reduced-motion){*,:after,:before{transition:none!important}}body{margin:0}a,button,input,label{-webkit-tap-highlight-color:transparent}a{color:inherit;text-decoration:none}hr{border:0;box-sizing:initial;display:block;height:.05rem;overflow:visible;padding:0}small{font-size:80%}sub,sup{line-height:1em}img{border-style:none}table{border-collapse:initial;border-spacing:0}td,th{font-weight:400;vertical-align:top}button{background:transparent;border:0;font-family:inherit;font-size:inherit;margin:0;padding:0}input{border:0;outline:none}:root,[data-md-color-scheme=default]{--md-default-fg-color:rgba(0,0,0,.87);--md-default-fg-color--light:rgba(0,0,0,.54);--md-default-fg-color--lighter:rgba(0,0,0,.32);--md-default-fg-color--lightest:rgba(0,0,0,.07);--md-default-bg-color:#fff;--md-default-bg-color--light:hsla(0,0%,100%,.7);--md-default-bg-color--lighter:hsla(0,0%,100%,.3);--md-default-bg-color--lightest:hsla(0,0%,100%,.12);--md-primary-fg-color:#4051b5;--md-primary-fg-color--light:#5d6cc0;--md-primary-fg-color--dark:#303fa1;--md-primary-bg-color:#fff;--md-primary-bg-color--light:hsla(0,0%,100%,.7);--md-accent-fg-color:#526cfe;--md-accent-fg-color--transparent:rgba(82,108,254,.1);--md-accent-bg-color:#fff;--md-accent-bg-color--light:hsla(0,0%,100%,.7);--md-code-fg-color:#36464e;--md-code-bg-color:#f5f5f5;--md-code-hl-color:rgba(255,255,0,.5);--md-code-hl-number-color:#d52a2a;--md-code-hl-special-color:#db1457;--md-code-hl-function-color:#a846b9;--md-code-hl-constant-color:#6e59d9;--md-code-hl-keyword-color:#3f6ec6;--md-code-hl-string-color:#1c7d4d;--md-code-hl-name-color:var(--md-code-fg-color);--md-code-hl-operator-color:var(--md-default-fg-color--light);--md-code-hl-punctuation-color:var(--md-default-fg-color--light);--md-code-hl-comment-color:var(--md-default-fg-color--light);--md-code-hl-generic-color:var(--md-default-fg-color--light);--md-code-hl-variable-color:var(--md-default-fg-color--light);--md-typeset-color:var(--md-default-fg-color);--md-typeset-a-color:v
ar(--md-primary-fg-color);--md-typeset-mark-color:rgba(255,255,0,.5);--md-typeset-del-color:rgba(245,80,61,.15);--md-typeset-ins-color:rgba(11,213,112,.15);--md-typeset-kbd-color:#fafafa;--md-typeset-kbd-accent-color:#fff;--md-typeset-kbd-border-color:#b8b8b8;--md-typeset-table-color:rgba(0,0,0,.12);--md-admonition-fg-color:var(--md-default-fg-color);--md-admonition-bg-color:var(--md-default-bg-color);--md-footer-fg-color:#fff;--md-footer-fg-color--light:hsla(0,0%,100%,.7);--md-footer-fg-color--lighter:hsla(0,0%,100%,.3);--md-footer-bg-color:rgba(0,0,0,.87);--md-footer-bg-color--dark:rgba(0,0,0,.32);--md-shadow-z1:0 0.2rem 0.5rem rgba(0,0,0,.05),0 0 0.05rem rgba(0,0,0,.1);--md-shadow-z2:0 0.2rem 0.5rem rgba(0,0,0,.1),0 0 0.05rem rgba(0,0,0,.25);--md-shadow-z3:0 0.2rem 0.5rem rgba(0,0,0,.2),0 0 0.05rem rgba(0,0,0,.35)}.md-icon svg{fill:currentcolor;display:block;height:1.2rem;width:1.2rem}body{-webkit-font-smoothing:antialiased;-moz-osx-font-smoothing:grayscale;--md-text-font-family:var(--md-text-font,_),-apple-system,BlinkMacSystemFont,Helvetica,Arial,sans-serif;--md-code-font-family:var(--md-code-font,_),SFMono-Regular,Consolas,Menlo,monospace}body,input{font-feature-settings:"kern","liga";font-family:var(--md-text-font-family)}body,code,input,kbd,pre{color:var(--md-typeset-color)}code,kbd,pre{font-feature-settings:"kern";font-family:var(--md-code-font-family)}:root{--md-typeset-table-sort-icon:url('data:image/svg+xml;charset=utf-8,');--md-typeset-table-sort-icon--asc:url('data:image/svg+xml;charset=utf-8,');--md-typeset-table-sort-icon--desc:url('data:image/svg+xml;charset=utf-8,')}.md-typeset{-webkit-print-color-adjust:exact;color-adjust:exact;font-size:.8rem;line-height:1.6}@media print{.md-typeset{font-size:.68rem}}.md-typeset blockquote,.md-typeset dl,.md-typeset figure,.md-typeset ol,.md-typeset pre,.md-typeset ul{margin-bottom:1em;margin-top:1em}.md-typeset h1{color:var(--md-default-fg-color--light);font-size:2em;line-height:1.3;margin:0 0 
1.25em}.md-typeset h1,.md-typeset h2{font-weight:300;letter-spacing:-.01em}.md-typeset h2{font-size:1.5625em;line-height:1.4;margin:1.6em 0 .64em}.md-typeset h3{font-size:1.25em;font-weight:400;letter-spacing:-.01em;line-height:1.5;margin:1.6em 0 .8em}.md-typeset h2+h3{margin-top:.8em}.md-typeset h4{font-weight:700;letter-spacing:-.01em;margin:1em 0}.md-typeset h5,.md-typeset h6{color:var(--md-default-fg-color--light);font-size:.8em;font-weight:700;letter-spacing:-.01em;margin:1.25em 0}.md-typeset h5{text-transform:uppercase}.md-typeset hr{border-bottom:.05rem solid var(--md-default-fg-color--lightest);display:flow-root;margin:1.5em 0}.md-typeset a{color:var(--md-typeset-a-color);word-break:break-word}.md-typeset a,.md-typeset a:before{transition:color 125ms}.md-typeset a:focus,.md-typeset a:hover{color:var(--md-accent-fg-color)}.md-typeset a:focus code,.md-typeset a:hover code{background-color:var(--md-accent-fg-color--transparent)}.md-typeset a code{color:currentcolor;transition:background-color 125ms}.md-typeset a.focus-visible{outline-color:var(--md-accent-fg-color);outline-offset:.2rem}.md-typeset code,.md-typeset kbd,.md-typeset pre{color:var(--md-code-fg-color);direction:ltr;font-variant-ligatures:none}@media print{.md-typeset code,.md-typeset kbd,.md-typeset pre{white-space:pre-wrap}}.md-typeset code{background-color:var(--md-code-bg-color);border-radius:.1rem;-webkit-box-decoration-break:clone;box-decoration-break:clone;font-size:.85em;padding:0 .2941176471em;word-break:break-word}.md-typeset code:not(.focus-visible){-webkit-tap-highlight-color:transparent;outline:none}.md-typeset pre{display:flow-root;line-height:1.4;position:relative}.md-typeset pre>code{-webkit-box-decoration-break:slice;box-decoration-break:slice;box-shadow:none;display:block;margin:0;outline-color:var(--md-accent-fg-color);overflow:auto;padding:.7720588235em 1.1764705882em;scrollbar-color:var(--md-default-fg-color--lighter) 
transparent;scrollbar-width:thin;touch-action:auto;word-break:normal}.md-typeset pre>code:hover{scrollbar-color:var(--md-accent-fg-color) transparent}.md-typeset pre>code::-webkit-scrollbar{height:.2rem;width:.2rem}.md-typeset pre>code::-webkit-scrollbar-thumb{background-color:var(--md-default-fg-color--lighter)}.md-typeset pre>code::-webkit-scrollbar-thumb:hover{background-color:var(--md-accent-fg-color)}.md-typeset kbd{background-color:var(--md-typeset-kbd-color);border-radius:.1rem;box-shadow:0 .1rem 0 .05rem var(--md-typeset-kbd-border-color),0 .1rem 0 var(--md-typeset-kbd-border-color),0 -.1rem .2rem var(--md-typeset-kbd-accent-color) inset;color:var(--md-default-fg-color);display:inline-block;font-size:.75em;padding:0 .6666666667em;vertical-align:text-top;word-break:break-word}.md-typeset mark{background-color:var(--md-typeset-mark-color);-webkit-box-decoration-break:clone;box-decoration-break:clone;color:inherit;word-break:break-word}.md-typeset abbr{border-bottom:.05rem dotted var(--md-default-fg-color--light);cursor:help;text-decoration:none}@media (hover:none){.md-typeset abbr{position:relative}.md-typeset abbr[title]:-webkit-any(:focus,:hover):after{background-color:var(--md-default-fg-color);border-radius:.1rem;box-shadow:var(--md-shadow-z3);color:var(--md-default-bg-color);content:attr(title);display:inline-block;font-size:.7rem;margin-top:2em;max-width:80%;min-width:-webkit-max-content;min-width:max-content;padding:.2rem .3rem;position:absolute;width:auto}.md-typeset abbr[title]:-moz-any(:focus,:hover):after{background-color:var(--md-default-fg-color);border-radius:.1rem;box-shadow:var(--md-shadow-z3);color:var(--md-default-bg-color);content:attr(title);display:inline-block;font-size:.7rem;margin-top:2em;max-width:80%;min-width:-moz-max-content;min-width:max-content;padding:.2rem .3rem;position:absolute;width:auto}[dir=ltr] .md-typeset abbr[title]:-webkit-any(:focus,:hover):after{left:0}[dir=ltr] .md-typeset 
abbr[title]:-moz-any(:focus,:hover):after{left:0}[dir=ltr] .md-typeset abbr[title]:is(:focus,:hover):after{left:0}[dir=rtl] .md-typeset abbr[title]:-webkit-any(:focus,:hover):after{right:0}[dir=rtl] .md-typeset abbr[title]:-moz-any(:focus,:hover):after{right:0}[dir=rtl] .md-typeset abbr[title]:is(:focus,:hover):after{right:0}.md-typeset abbr[title]:is(:focus,:hover):after{background-color:var(--md-default-fg-color);border-radius:.1rem;box-shadow:var(--md-shadow-z3);color:var(--md-default-bg-color);content:attr(title);display:inline-block;font-size:.7rem;margin-top:2em;max-width:80%;min-width:-webkit-max-content;min-width:-moz-max-content;min-width:max-content;padding:.2rem .3rem;position:absolute;width:auto}}.md-typeset small{opacity:.75}[dir=ltr] .md-typeset sub,[dir=ltr] .md-typeset sup{margin-left:.078125em}[dir=rtl] .md-typeset sub,[dir=rtl] .md-typeset sup{margin-right:.078125em}[dir=ltr] .md-typeset blockquote{padding-left:.6rem}[dir=rtl] .md-typeset blockquote{padding-right:.6rem}[dir=ltr] .md-typeset blockquote{border-left:.2rem solid var(--md-default-fg-color--lighter)}[dir=rtl] .md-typeset blockquote{border-right:.2rem solid var(--md-default-fg-color--lighter)}.md-typeset blockquote{color:var(--md-default-fg-color--light);margin-left:0;margin-right:0}.md-typeset ul{list-style-type:disc}[dir=ltr] .md-typeset ol,[dir=ltr] .md-typeset ul{margin-left:.625em}[dir=rtl] .md-typeset ol,[dir=rtl] .md-typeset ul{margin-right:.625em}.md-typeset ol,.md-typeset ul{padding:0}.md-typeset ol:not([hidden]),.md-typeset ul:not([hidden]){display:flow-root}.md-typeset ol ol,.md-typeset ul ol{list-style-type:lower-alpha}.md-typeset ol ol ol,.md-typeset ul ol ol{list-style-type:lower-roman}[dir=ltr] .md-typeset ol li,[dir=ltr] .md-typeset ul li{margin-left:1.25em}[dir=rtl] .md-typeset ol li,[dir=rtl] .md-typeset ul li{margin-right:1.25em}.md-typeset ol li,.md-typeset ul li{margin-bottom:.5em}.md-typeset ol li blockquote,.md-typeset ol li p,.md-typeset ul li 
blockquote,.md-typeset ul li p{margin:.5em 0}.md-typeset ol li:last-child,.md-typeset ul li:last-child{margin-bottom:0}.md-typeset ol li :-webkit-any(ul,ol),.md-typeset ul li :-webkit-any(ul,ol){margin-bottom:.5em;margin-top:.5em}.md-typeset ol li :-moz-any(ul,ol),.md-typeset ul li :-moz-any(ul,ol){margin-bottom:.5em;margin-top:.5em}[dir=ltr] .md-typeset ol li :-webkit-any(ul,ol),[dir=ltr] .md-typeset ul li :-webkit-any(ul,ol){margin-left:.625em}[dir=ltr] .md-typeset ol li :-moz-any(ul,ol),[dir=ltr] .md-typeset ul li :-moz-any(ul,ol){margin-left:.625em}[dir=ltr] .md-typeset ol li :is(ul,ol),[dir=ltr] .md-typeset ul li :is(ul,ol){margin-left:.625em}[dir=rtl] .md-typeset ol li :-webkit-any(ul,ol),[dir=rtl] .md-typeset ul li :-webkit-any(ul,ol){margin-right:.625em}[dir=rtl] .md-typeset ol li :-moz-any(ul,ol),[dir=rtl] .md-typeset ul li :-moz-any(ul,ol){margin-right:.625em}[dir=rtl] .md-typeset ol li :is(ul,ol),[dir=rtl] .md-typeset ul li :is(ul,ol){margin-right:.625em}.md-typeset ol li :is(ul,ol),.md-typeset ul li :is(ul,ol){margin-bottom:.5em;margin-top:.5em}[dir=ltr] .md-typeset dd{margin-left:1.875em}[dir=rtl] .md-typeset dd{margin-right:1.875em}.md-typeset dd{margin-bottom:1.5em;margin-top:1em}.md-typeset img,.md-typeset svg,.md-typeset video{height:auto;max-width:100%}.md-typeset img[align=left]{margin:1em 1em 1em 0}.md-typeset img[align=right]{margin:1em 0 1em 1em}.md-typeset img[align]:only-child{margin-top:0}.md-typeset img[src$="#gh-dark-mode-only"],.md-typeset img[src$="#only-dark"]{display:none}.md-typeset figure{display:flow-root;margin:1em auto;max-width:100%;text-align:center;width:-webkit-fit-content;width:-moz-fit-content;width:fit-content}.md-typeset figure img{display:block}.md-typeset figcaption{font-style:italic;margin:1em auto;max-width:24rem}.md-typeset iframe{max-width:100%}.md-typeset table:not([class]){background-color:var(--md-default-bg-color);border:.05rem solid 
var(--md-typeset-table-color);border-radius:.1rem;display:inline-block;font-size:.64rem;max-width:100%;overflow:auto;touch-action:auto}@media print{.md-typeset table:not([class]){display:table}}.md-typeset table:not([class])+*{margin-top:1.5em}.md-typeset table:not([class]) :-webkit-any(th,td)>:first-child{margin-top:0}.md-typeset table:not([class]) :-moz-any(th,td)>:first-child{margin-top:0}.md-typeset table:not([class]) :is(th,td)>:first-child{margin-top:0}.md-typeset table:not([class]) :-webkit-any(th,td)>:last-child{margin-bottom:0}.md-typeset table:not([class]) :-moz-any(th,td)>:last-child{margin-bottom:0}.md-typeset table:not([class]) :is(th,td)>:last-child{margin-bottom:0}.md-typeset table:not([class]) :-webkit-any(th,td):not([align]){text-align:left}.md-typeset table:not([class]) :-moz-any(th,td):not([align]){text-align:left}.md-typeset table:not([class]) :is(th,td):not([align]){text-align:left}[dir=rtl] .md-typeset table:not([class]) :-webkit-any(th,td):not([align]){text-align:right}[dir=rtl] .md-typeset table:not([class]) :-moz-any(th,td):not([align]){text-align:right}[dir=rtl] .md-typeset table:not([class]) :is(th,td):not([align]){text-align:right}.md-typeset table:not([class]) th{font-weight:700;min-width:5rem;padding:.9375em 1.25em;vertical-align:top}.md-typeset table:not([class]) td{border-top:.05rem solid var(--md-typeset-table-color);padding:.9375em 1.25em;vertical-align:top}.md-typeset table:not([class]) tbody tr{transition:background-color 125ms}.md-typeset table:not([class]) tbody tr:hover{background-color:rgba(0,0,0,.035);box-shadow:0 .05rem 0 var(--md-default-bg-color) inset}.md-typeset table:not([class]) a{word-break:normal}.md-typeset table th[role=columnheader]{cursor:pointer}[dir=ltr] .md-typeset table th[role=columnheader]:after{margin-left:.5em}[dir=rtl] .md-typeset table th[role=columnheader]:after{margin-right:.5em}.md-typeset table 
th[role=columnheader]:after{content:"";display:inline-block;height:1.2em;-webkit-mask-image:var(--md-typeset-table-sort-icon);mask-image:var(--md-typeset-table-sort-icon);-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-size:contain;mask-size:contain;transition:background-color 125ms;vertical-align:text-bottom;width:1.2em}.md-typeset table th[role=columnheader]:hover:after{background-color:var(--md-default-fg-color--lighter)}.md-typeset table th[role=columnheader][aria-sort=ascending]:after{background-color:var(--md-default-fg-color--light);-webkit-mask-image:var(--md-typeset-table-sort-icon--asc);mask-image:var(--md-typeset-table-sort-icon--asc)}.md-typeset table th[role=columnheader][aria-sort=descending]:after{background-color:var(--md-default-fg-color--light);-webkit-mask-image:var(--md-typeset-table-sort-icon--desc);mask-image:var(--md-typeset-table-sort-icon--desc)}.md-typeset__scrollwrap{margin:1em -.8rem;overflow-x:auto;touch-action:auto}.md-typeset__table{display:inline-block;margin-bottom:.5em;padding:0 .8rem}@media print{.md-typeset__table{display:block}}html .md-typeset__table table{display:table;margin:0;overflow:hidden;width:100%}@media screen and (max-width:44.9375em){.md-content__inner>pre{margin:1em -.8rem}.md-content__inner>pre code{border-radius:0}}.md-banner{background-color:var(--md-footer-bg-color);color:var(--md-footer-fg-color);overflow:auto}@media print{.md-banner{display:none}}.md-banner--warning{background:var(--md-typeset-mark-color);color:var(--md-default-fg-color)}.md-banner__inner{font-size:.7rem;margin:.6rem auto;padding:0 .8rem}[dir=ltr] .md-banner__button{float:right}[dir=rtl] .md-banner__button{float:left}.md-banner__button{color:inherit;cursor:pointer;transition:opacity .25s}.md-banner__button:hover{opacity:.7}html{font-size:125%;height:100%;overflow-x:hidden}@media screen and (min-width:100em){html{font-size:137.5%}}@media screen and 
(min-width:125em){html{font-size:150%}}body{background-color:var(--md-default-bg-color);display:flex;flex-direction:column;font-size:.5rem;min-height:100%;position:relative;width:100%}@media print{body{display:block}}@media screen and (max-width:59.9375em){body[data-md-scrolllock]{position:fixed}}.md-grid{margin-left:auto;margin-right:auto;max-width:61rem}.md-container{display:flex;flex-direction:column;flex-grow:1}@media print{.md-container{display:block}}.md-main{flex-grow:1}.md-main__inner{display:flex;height:100%;margin-top:1.5rem}.md-ellipsis{overflow:hidden;text-overflow:ellipsis;white-space:nowrap}.md-toggle{display:none}.md-option{height:0;opacity:0;position:absolute;width:0}.md-option:checked+label:not([hidden]){display:block}.md-option.focus-visible+label{outline-color:var(--md-accent-fg-color);outline-style:auto}.md-skip{background-color:var(--md-default-fg-color);border-radius:.1rem;color:var(--md-default-bg-color);font-size:.64rem;margin:.5rem;opacity:0;outline-color:var(--md-accent-fg-color);padding:.3rem .5rem;position:fixed;transform:translateY(.4rem);z-index:-1}.md-skip:focus{opacity:1;transform:translateY(0);transition:transform .25s cubic-bezier(.4,0,.2,1),opacity 175ms 75ms;z-index:10}@page{margin:25mm}:root{--md-clipboard-icon:url('data:image/svg+xml;charset=utf-8,')}.md-clipboard{border-radius:.1rem;color:var(--md-default-fg-color--lightest);cursor:pointer;height:1.5em;outline-color:var(--md-accent-fg-color);outline-offset:.1rem;position:absolute;right:.5em;top:.5em;transition:color .25s;width:1.5em;z-index:1}@media 
print{.md-clipboard{display:none}}.md-clipboard:not(.focus-visible){-webkit-tap-highlight-color:transparent;outline:none}:hover>.md-clipboard{color:var(--md-default-fg-color--light)}.md-clipboard:-webkit-any(:focus,:hover){color:var(--md-accent-fg-color)}.md-clipboard:-moz-any(:focus,:hover){color:var(--md-accent-fg-color)}.md-clipboard:is(:focus,:hover){color:var(--md-accent-fg-color)}.md-clipboard:after{background-color:currentcolor;content:"";display:block;height:1.125em;margin:0 auto;-webkit-mask-image:var(--md-clipboard-icon);mask-image:var(--md-clipboard-icon);-webkit-mask-position:center;mask-position:center;-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-size:contain;mask-size:contain;width:1.125em}.md-clipboard--inline{cursor:pointer}.md-clipboard--inline code{transition:color .25s,background-color .25s}.md-clipboard--inline:-webkit-any(:focus,:hover) code{background-color:var(--md-accent-fg-color--transparent);color:var(--md-accent-fg-color)}.md-clipboard--inline:-moz-any(:focus,:hover) code{background-color:var(--md-accent-fg-color--transparent);color:var(--md-accent-fg-color)}.md-clipboard--inline:is(:focus,:hover) code{background-color:var(--md-accent-fg-color--transparent);color:var(--md-accent-fg-color)}@-webkit-keyframes consent{0%{opacity:0;transform:translateY(100%)}to{opacity:1;transform:translateY(0)}}@keyframes consent{0%{opacity:0;transform:translateY(100%)}to{opacity:1;transform:translateY(0)}}@-webkit-keyframes overlay{0%{opacity:0}to{opacity:1}}@keyframes overlay{0%{opacity:0}to{opacity:1}}.md-consent__overlay{-webkit-animation:overlay .25s both;animation:overlay .25s both;-webkit-backdrop-filter:blur(.1rem);backdrop-filter:blur(.1rem);background-color:rgba(0,0,0,.54);height:100%;opacity:1;position:fixed;top:0;width:100%;z-index:5}.md-consent__inner{-webkit-animation:consent .5s cubic-bezier(.1,.7,.1,1) both;animation:consent .5s cubic-bezier(.1,.7,.1,1) 
both;background-color:var(--md-default-bg-color);border:0;border-radius:.1rem;bottom:0;box-shadow:0 0 .2rem rgba(0,0,0,.1),0 .2rem .4rem rgba(0,0,0,.2);max-height:100%;overflow:auto;padding:0;position:fixed;width:100%;z-index:5}.md-consent__form{padding:.8rem}.md-consent__settings{display:none;margin:1em 0}input:checked+.md-consent__settings{display:block}.md-consent__controls{margin-bottom:.8rem}.md-typeset .md-consent__controls .md-button{display:inline}@media screen and (max-width:44.9375em){.md-typeset .md-consent__controls .md-button{display:block;margin-top:.4rem;text-align:center;width:100%}}.md-consent label{cursor:pointer}.md-content{flex-grow:1;min-width:0}.md-content__inner{margin:0 .8rem 1.2rem;padding-top:.6rem}@media screen and (min-width:76.25em){[dir=ltr] .md-sidebar--primary:not([hidden])~.md-content>.md-content__inner{margin-left:1.2rem}[dir=ltr] .md-sidebar--secondary:not([hidden])~.md-content>.md-content__inner,[dir=rtl] .md-sidebar--primary:not([hidden])~.md-content>.md-content__inner{margin-right:1.2rem}[dir=rtl] .md-sidebar--secondary:not([hidden])~.md-content>.md-content__inner{margin-left:1.2rem}}.md-content__inner:before{content:"";display:block;height:.4rem}.md-content__inner>:last-child{margin-bottom:0}[dir=ltr] .md-content__button{float:right}[dir=rtl] .md-content__button{float:left}[dir=ltr] .md-content__button{margin-left:.4rem}[dir=rtl] .md-content__button{margin-right:.4rem}.md-content__button{margin:.4rem 0;padding:0}@media print{.md-content__button{display:none}}.md-typeset .md-content__button{color:var(--md-default-fg-color--lighter)}.md-content__button svg{display:inline;vertical-align:top}[dir=rtl] .md-content__button svg{transform:scaleX(-1)}[dir=ltr] .md-dialog{right:.8rem}[dir=rtl] .md-dialog{left:.8rem}.md-dialog{background-color:var(--md-default-fg-color);border-radius:.1rem;bottom:.8rem;box-shadow:var(--md-shadow-z3);min-width:11.1rem;opacity:0;padding:.4rem 
.6rem;pointer-events:none;position:fixed;transform:translateY(100%);transition:transform 0ms .4s,opacity .4s;z-index:4}@media print{.md-dialog{display:none}}.md-dialog--active{opacity:1;pointer-events:auto;transform:translateY(0);transition:transform .4s cubic-bezier(.075,.85,.175,1),opacity .4s}.md-dialog__inner{color:var(--md-default-bg-color);font-size:.7rem}.md-feedback{margin:2em 0 1em;text-align:center}.md-feedback fieldset{border:none;margin:0;padding:0}.md-feedback__title{font-weight:700;margin:1em auto}.md-feedback__inner{position:relative}.md-feedback__list{align-content:baseline;display:flex;flex-wrap:wrap;justify-content:center;position:relative}.md-feedback__list:hover .md-icon:not(:disabled){color:var(--md-default-fg-color--lighter)}:disabled .md-feedback__list{min-height:1.8rem}.md-feedback__icon{color:var(--md-default-fg-color--light);cursor:pointer;flex-shrink:0;margin:0 .1rem;transition:color 125ms}.md-feedback__icon:not(:disabled).md-icon:hover{color:var(--md-accent-fg-color)}.md-feedback__icon:disabled{color:var(--md-default-fg-color--lightest);pointer-events:none}.md-feedback__note{opacity:0;position:relative;transform:translateY(.4rem);transition:transform .4s cubic-bezier(.1,.7,.1,1),opacity .15s}.md-feedback__note>*{margin:0 auto;max-width:16rem}:disabled .md-feedback__note{opacity:1;transform:translateY(0)}.md-footer{background-color:var(--md-footer-bg-color);color:var(--md-footer-fg-color)}@media print{.md-footer{display:none}}.md-footer__inner{justify-content:space-between;overflow:auto;padding:.2rem}.md-footer__inner:not([hidden]){display:flex}.md-footer__link{display:flex;flex-grow:0.01;outline-color:var(--md-accent-fg-color);overflow:hidden;padding-bottom:.4rem;padding-top:1.4rem;transition:opacity .25s}.md-footer__link:-webkit-any(:focus,:hover){opacity:.7}.md-footer__link:-moz-any(:focus,:hover){opacity:.7}.md-footer__link:is(:focus,:hover){opacity:.7}[dir=rtl] .md-footer__link svg{transform:scaleX(-1)}@media screen and 
(max-width:44.9375em){.md-footer__link--prev .md-footer__title{display:none}}[dir=ltr] .md-footer__link--next{margin-left:auto}[dir=rtl] .md-footer__link--next{margin-right:auto}.md-footer__link--next{text-align:right}[dir=rtl] .md-footer__link--next{text-align:left}.md-footer__title{flex-grow:1;font-size:.9rem;line-height:2.4rem;max-width:calc(100% - 2.4rem);padding:0 1rem;position:relative;white-space:nowrap}.md-footer__button{margin:.2rem;padding:.4rem}.md-footer__direction{font-size:.64rem;left:0;margin-top:-1rem;opacity:.7;padding:0 1rem;position:absolute;right:0}.md-footer-meta{background-color:var(--md-footer-bg-color--dark)}.md-footer-meta__inner{display:flex;flex-wrap:wrap;justify-content:space-between;padding:.2rem}html .md-footer-meta.md-typeset a{color:var(--md-footer-fg-color--light)}html .md-footer-meta.md-typeset a:-webkit-any(:focus,:hover){color:var(--md-footer-fg-color)}html .md-footer-meta.md-typeset a:-moz-any(:focus,:hover){color:var(--md-footer-fg-color)}html .md-footer-meta.md-typeset a:is(:focus,:hover){color:var(--md-footer-fg-color)}.md-copyright{color:var(--md-footer-fg-color--lighter);font-size:.64rem;margin:auto .6rem;padding:.4rem 0;width:100%}@media screen and (min-width:45em){.md-copyright{width:auto}}.md-copyright__highlight{color:var(--md-footer-fg-color--light)}.md-social{margin:0 .4rem;padding:.2rem 0 .6rem}@media screen and (min-width:45em){.md-social{padding:.6rem 0}}.md-social__link{display:inline-block;height:1.6rem;text-align:center;width:1.6rem}.md-social__link:before{line-height:1.9}.md-social__link svg{fill:currentcolor;max-height:.8rem;vertical-align:-25%}.md-typeset .md-button{border:.1rem solid;border-radius:.1rem;color:var(--md-primary-fg-color);cursor:pointer;display:inline-block;font-weight:700;padding:.625em 2em;transition:color 125ms,background-color 125ms,border-color 125ms}.md-typeset 
.md-button--primary{background-color:var(--md-primary-fg-color);border-color:var(--md-primary-fg-color);color:var(--md-primary-bg-color)}.md-typeset .md-button:-webkit-any(:focus,:hover){background-color:var(--md-accent-fg-color);border-color:var(--md-accent-fg-color);color:var(--md-accent-bg-color)}.md-typeset .md-button:-moz-any(:focus,:hover){background-color:var(--md-accent-fg-color);border-color:var(--md-accent-fg-color);color:var(--md-accent-bg-color)}.md-typeset .md-button:is(:focus,:hover){background-color:var(--md-accent-fg-color);border-color:var(--md-accent-fg-color);color:var(--md-accent-bg-color)}[dir=ltr] .md-typeset .md-input{border-top-left-radius:.1rem}[dir=ltr] .md-typeset .md-input,[dir=rtl] .md-typeset .md-input{border-top-right-radius:.1rem}[dir=rtl] .md-typeset .md-input{border-top-left-radius:.1rem}.md-typeset .md-input{border-bottom:.1rem solid var(--md-default-fg-color--lighter);box-shadow:var(--md-shadow-z1);font-size:.8rem;height:1.8rem;padding:0 .6rem;transition:border .25s,box-shadow .25s}.md-typeset .md-input:-webkit-any(:focus,:hover){border-bottom-color:var(--md-accent-fg-color);box-shadow:var(--md-shadow-z2)}.md-typeset .md-input:-moz-any(:focus,:hover){border-bottom-color:var(--md-accent-fg-color);box-shadow:var(--md-shadow-z2)}.md-typeset .md-input:is(:focus,:hover){border-bottom-color:var(--md-accent-fg-color);box-shadow:var(--md-shadow-z2)}.md-typeset .md-input--stretch{width:100%}.md-header{background-color:var(--md-primary-fg-color);box-shadow:0 0 .2rem transparent,0 .2rem .4rem transparent;color:var(--md-primary-bg-color);display:block;left:0;position:-webkit-sticky;position:sticky;right:0;top:0;z-index:4}@media print{.md-header{display:none}}.md-header[hidden]{transform:translateY(-100%);transition:transform .25s cubic-bezier(.8,0,.6,1),box-shadow .25s}.md-header--shadow{box-shadow:0 0 .2rem rgba(0,0,0,.1),0 .2rem .4rem rgba(0,0,0,.2);transition:transform .25s cubic-bezier(.1,.7,.1,1),box-shadow 
.25s}.md-header__inner{align-items:center;display:flex;padding:0 .2rem}.md-header__button{color:currentcolor;cursor:pointer;margin:.2rem;outline-color:var(--md-accent-fg-color);padding:.4rem;position:relative;transition:opacity .25s;vertical-align:middle;z-index:1}.md-header__button:hover{opacity:.7}.md-header__button:not([hidden]){display:inline-block}.md-header__button:not(.focus-visible){-webkit-tap-highlight-color:transparent;outline:none}.md-header__button.md-logo{margin:.2rem;padding:.4rem}@media screen and (max-width:76.1875em){.md-header__button.md-logo{display:none}}.md-header__button.md-logo :-webkit-any(img,svg){fill:currentcolor;display:block;height:1.2rem;width:auto}.md-header__button.md-logo :-moz-any(img,svg){fill:currentcolor;display:block;height:1.2rem;width:auto}.md-header__button.md-logo :is(img,svg){fill:currentcolor;display:block;height:1.2rem;width:auto}@media screen and (min-width:60em){.md-header__button[for=__search]{display:none}}.no-js .md-header__button[for=__search]{display:none}[dir=rtl] .md-header__button[for=__search] svg{transform:scaleX(-1)}@media screen and (min-width:76.25em){.md-header__button[for=__drawer]{display:none}}.md-header__topic{display:flex;max-width:100%;position:absolute;transition:transform .4s cubic-bezier(.1,.7,.1,1),opacity .15s;white-space:nowrap}.md-header__topic+.md-header__topic{opacity:0;pointer-events:none;transform:translateX(1.25rem);transition:transform .4s cubic-bezier(1,.7,.1,.1),opacity .15s;z-index:-1}[dir=rtl] .md-header__topic+.md-header__topic{transform:translateX(-1.25rem)}.md-header__topic:first-child{font-weight:700}[dir=ltr] .md-header__title{margin-right:.4rem}[dir=rtl] .md-header__title{margin-left:.4rem}[dir=ltr] .md-header__title{margin-left:1rem}[dir=rtl] .md-header__title{margin-right:1rem}.md-header__title{flex-grow:1;font-size:.9rem;height:2.4rem;line-height:2.4rem}.md-header__title--active 
.md-header__topic{opacity:0;pointer-events:none;transform:translateX(-1.25rem);transition:transform .4s cubic-bezier(1,.7,.1,.1),opacity .15s;z-index:-1}[dir=rtl] .md-header__title--active .md-header__topic{transform:translateX(1.25rem)}.md-header__title--active .md-header__topic+.md-header__topic{opacity:1;pointer-events:auto;transform:translateX(0);transition:transform .4s cubic-bezier(.1,.7,.1,1),opacity .15s;z-index:0}.md-header__title>.md-header__ellipsis{height:100%;position:relative;width:100%}.md-header__option{display:flex;flex-shrink:0;max-width:100%;transition:max-width 0ms .25s,opacity .25s .25s;white-space:nowrap}[data-md-toggle=search]:checked~.md-header .md-header__option{max-width:0;opacity:0;transition:max-width 0ms,opacity 0ms}.md-header__source{display:none}@media screen and (min-width:60em){[dir=ltr] .md-header__source{margin-left:1rem}[dir=rtl] .md-header__source{margin-right:1rem}.md-header__source{display:block;max-width:11.7rem;width:11.7rem}}@media screen and (min-width:76.25em){[dir=ltr] .md-header__source{margin-left:1.4rem}[dir=rtl] .md-header__source{margin-right:1.4rem}}:root{--md-nav-icon--prev:url('data:image/svg+xml;charset=utf-8,');--md-nav-icon--next:url('data:image/svg+xml;charset=utf-8,');--md-toc-icon:url('data:image/svg+xml;charset=utf-8,')}.md-nav{font-size:.7rem;line-height:1.3}.md-nav__title{display:block;font-weight:700;overflow:hidden;padding:0 .6rem;text-overflow:ellipsis}.md-nav__title .md-nav__button{display:none}.md-nav__title .md-nav__button img{height:100%;width:auto}.md-nav__title .md-nav__button.md-logo :-webkit-any(img,svg){fill:currentcolor;display:block;height:2.4rem;max-width:100%;object-fit:contain;width:auto}.md-nav__title .md-nav__button.md-logo :-moz-any(img,svg){fill:currentcolor;display:block;height:2.4rem;max-width:100%;object-fit:contain;width:auto}.md-nav__title .md-nav__button.md-logo 
:is(img,svg){fill:currentcolor;display:block;height:2.4rem;max-width:100%;object-fit:contain;width:auto}.md-nav__list{list-style:none;margin:0;padding:0}.md-nav__item{padding:0 .6rem}[dir=ltr] .md-nav__item .md-nav__item{padding-right:0}[dir=rtl] .md-nav__item .md-nav__item{padding-left:0}.md-nav__link{align-items:center;cursor:pointer;display:flex;justify-content:space-between;margin-top:.625em;overflow:hidden;scroll-snap-align:start;text-overflow:ellipsis;transition:color 125ms}.md-nav__link--passed{color:var(--md-default-fg-color--light)}.md-nav__item .md-nav__link--active{color:var(--md-typeset-a-color)}.md-nav__item .md-nav__link--index [href]{width:100%}.md-nav__link:-webkit-any(:focus,:hover){color:var(--md-accent-fg-color)}.md-nav__link:-moz-any(:focus,:hover){color:var(--md-accent-fg-color)}.md-nav__link:is(:focus,:hover){color:var(--md-accent-fg-color)}.md-nav__link.focus-visible{outline-color:var(--md-accent-fg-color);outline-offset:.2rem}.md-nav--primary .md-nav__link[for=__toc]{display:none}.md-nav--primary .md-nav__link[for=__toc] .md-icon:after{background-color:currentcolor;display:block;height:100%;-webkit-mask-image:var(--md-toc-icon);mask-image:var(--md-toc-icon);width:100%}.md-nav--primary .md-nav__link[for=__toc]~.md-nav{display:none}.md-nav__link>*{cursor:pointer;display:flex}.md-nav__icon{flex-shrink:0}.md-nav__source{display:none}@media screen and (max-width:76.1875em){.md-nav--primary,.md-nav--primary .md-nav{background-color:var(--md-default-bg-color);display:flex;flex-direction:column;height:100%;left:0;position:absolute;right:0;top:0;z-index:1}.md-nav--primary :-webkit-any(.md-nav__title,.md-nav__item){font-size:.8rem;line-height:1.5}.md-nav--primary :-moz-any(.md-nav__title,.md-nav__item){font-size:.8rem;line-height:1.5}.md-nav--primary :is(.md-nav__title,.md-nav__item){font-size:.8rem;line-height:1.5}.md-nav--primary 
.md-nav__title{background-color:var(--md-default-fg-color--lightest);color:var(--md-default-fg-color--light);cursor:pointer;height:5.6rem;line-height:2.4rem;padding:3rem .8rem .2rem;position:relative;white-space:nowrap}[dir=ltr] .md-nav--primary .md-nav__title .md-nav__icon{left:.4rem}[dir=rtl] .md-nav--primary .md-nav__title .md-nav__icon{right:.4rem}.md-nav--primary .md-nav__title .md-nav__icon{display:block;height:1.2rem;margin:.2rem;position:absolute;top:.4rem;width:1.2rem}.md-nav--primary .md-nav__title .md-nav__icon:after{background-color:currentcolor;content:"";display:block;height:100%;-webkit-mask-image:var(--md-nav-icon--prev);mask-image:var(--md-nav-icon--prev);-webkit-mask-position:center;mask-position:center;-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-size:contain;mask-size:contain;width:100%}.md-nav--primary .md-nav__title~.md-nav__list{background-color:var(--md-default-bg-color);box-shadow:0 .05rem 0 var(--md-default-fg-color--lightest) inset;overflow-y:auto;-ms-scroll-snap-type:y mandatory;scroll-snap-type:y mandatory;touch-action:pan-y}.md-nav--primary .md-nav__title~.md-nav__list>:first-child{border-top:0}.md-nav--primary .md-nav__title[for=__drawer]{background-color:var(--md-primary-fg-color);color:var(--md-primary-bg-color);font-weight:700}.md-nav--primary .md-nav__title .md-logo{display:block;left:.2rem;margin:.2rem;padding:.4rem;position:absolute;right:.2rem;top:.2rem}.md-nav--primary .md-nav__list{flex:1}.md-nav--primary .md-nav__item{border-top:.05rem solid var(--md-default-fg-color--lightest);padding:0}.md-nav--primary .md-nav__item--active>.md-nav__link{color:var(--md-typeset-a-color)}.md-nav--primary .md-nav__item--active>.md-nav__link:-webkit-any(:focus,:hover){color:var(--md-accent-fg-color)}.md-nav--primary .md-nav__item--active>.md-nav__link:-moz-any(:focus,:hover){color:var(--md-accent-fg-color)}.md-nav--primary .md-nav__item--active>.md-nav__link:is(:focus,:hover){color:var(--md-accent-fg-color)}.md-nav--primary 
.md-nav__link{margin-top:0;padding:.6rem .8rem}[dir=ltr] .md-nav--primary .md-nav__link .md-nav__icon{margin-right:-.2rem}[dir=rtl] .md-nav--primary .md-nav__link .md-nav__icon{margin-left:-.2rem}.md-nav--primary .md-nav__link .md-nav__icon{font-size:1.2rem;height:1.2rem;width:1.2rem}.md-nav--primary .md-nav__link .md-nav__icon:after{background-color:currentcolor;content:"";display:block;height:100%;-webkit-mask-image:var(--md-nav-icon--next);mask-image:var(--md-nav-icon--next);-webkit-mask-position:center;mask-position:center;-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-size:contain;mask-size:contain;width:100%}[dir=rtl] .md-nav--primary .md-nav__icon:after{transform:scale(-1)}.md-nav--primary .md-nav--secondary .md-nav{background-color:initial;position:static}[dir=ltr] .md-nav--primary .md-nav--secondary .md-nav .md-nav__link{padding-left:1.4rem}[dir=rtl] .md-nav--primary .md-nav--secondary .md-nav .md-nav__link{padding-right:1.4rem}[dir=ltr] .md-nav--primary .md-nav--secondary .md-nav .md-nav .md-nav__link{padding-left:2rem}[dir=rtl] .md-nav--primary .md-nav--secondary .md-nav .md-nav .md-nav__link{padding-right:2rem}[dir=ltr] .md-nav--primary .md-nav--secondary .md-nav .md-nav .md-nav .md-nav__link{padding-left:2.6rem}[dir=rtl] .md-nav--primary .md-nav--secondary .md-nav .md-nav .md-nav .md-nav__link{padding-right:2.6rem}[dir=ltr] .md-nav--primary .md-nav--secondary .md-nav .md-nav .md-nav .md-nav .md-nav__link{padding-left:3.2rem}[dir=rtl] .md-nav--primary .md-nav--secondary .md-nav .md-nav .md-nav .md-nav .md-nav__link{padding-right:3.2rem}.md-nav--secondary{background-color:initial}.md-nav__toggle~.md-nav{display:flex;opacity:0;transform:translateX(100%);transition:transform .25s cubic-bezier(.8,0,.6,1),opacity 125ms 50ms}[dir=rtl] .md-nav__toggle~.md-nav{transform:translateX(-100%)}.md-nav__toggle:checked~.md-nav{opacity:1;transform:translateX(0);transition:transform .25s cubic-bezier(.4,0,.2,1),opacity 125ms 
125ms}.md-nav__toggle:checked~.md-nav>.md-nav__list{-webkit-backface-visibility:hidden;backface-visibility:hidden}}@media screen and (max-width:59.9375em){.md-nav--primary .md-nav__link[for=__toc]{display:flex}.md-nav--primary .md-nav__link[for=__toc] .md-icon:after{content:""}.md-nav--primary .md-nav__link[for=__toc]+.md-nav__link{display:none}.md-nav--primary .md-nav__link[for=__toc]~.md-nav{display:flex}.md-nav__source{background-color:var(--md-primary-fg-color--dark);color:var(--md-primary-bg-color);display:block;padding:0 .2rem}}@media screen and (min-width:60em) and (max-width:76.1875em){.md-nav--integrated .md-nav__link[for=__toc]{display:flex}.md-nav--integrated .md-nav__link[for=__toc] .md-icon:after{content:""}.md-nav--integrated .md-nav__link[for=__toc]+.md-nav__link{display:none}.md-nav--integrated .md-nav__link[for=__toc]~.md-nav{display:flex}}@media screen and (min-width:60em){.md-nav--secondary .md-nav__title{background:var(--md-default-bg-color);box-shadow:0 0 .4rem .4rem var(--md-default-bg-color);position:-webkit-sticky;position:sticky;top:0;z-index:1}.md-nav--secondary .md-nav__title[for=__toc]{scroll-snap-align:start}.md-nav--secondary .md-nav__title .md-nav__icon{display:none}}@media screen and (min-width:76.25em){.md-nav{transition:max-height .25s cubic-bezier(.86,0,.07,1)}.md-nav--primary .md-nav__title{background:var(--md-default-bg-color);box-shadow:0 0 .4rem .4rem var(--md-default-bg-color);position:-webkit-sticky;position:sticky;top:0;z-index:1}.md-nav--primary .md-nav__title[for=__drawer]{scroll-snap-align:start}.md-nav--primary .md-nav__title .md-nav__icon,.md-nav__toggle~.md-nav{display:none}.md-nav__toggle:-webkit-any(:checked,:indeterminate)~.md-nav{display:block}.md-nav__toggle:-moz-any(:checked,:indeterminate)~.md-nav{display:block}.md-nav__toggle:is(:checked,:indeterminate)~.md-nav{display:block}.md-nav__item--nested>.md-nav>.md-nav__title{display:none}.md-nav__item--section{display:block;margin:1.25em 
0}.md-nav__item--section:last-child{margin-bottom:0}.md-nav__item--section>.md-nav__link{font-weight:700;pointer-events:none}.md-nav__item--section>.md-nav__link--index [href]{pointer-events:auto}.md-nav__item--section>.md-nav__link .md-nav__icon{display:none}.md-nav__item--section>.md-nav{display:block}.md-nav__item--section>.md-nav>.md-nav__list>.md-nav__item{padding:0}.md-nav__icon{border-radius:100%;height:.9rem;transition:background-color .25s,transform .25s;width:.9rem}[dir=rtl] .md-nav__icon{transform:rotate(180deg)}.md-nav__icon:hover{background-color:var(--md-accent-fg-color--transparent)}.md-nav__icon:after{background-color:currentcolor;content:"";display:inline-block;height:100%;-webkit-mask-image:var(--md-nav-icon--next);mask-image:var(--md-nav-icon--next);-webkit-mask-position:center;mask-position:center;-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-size:contain;mask-size:contain;vertical-align:-.1rem;width:100%}.md-nav__item--nested .md-nav__toggle:checked~.md-nav__link .md-nav__icon,.md-nav__item--nested .md-nav__toggle:indeterminate~.md-nav__link .md-nav__icon{transform:rotate(90deg)}.md-nav--lifted>.md-nav__list>.md-nav__item,.md-nav--lifted>.md-nav__list>.md-nav__item--nested,.md-nav--lifted>.md-nav__title{display:none}.md-nav--lifted>.md-nav__list>.md-nav__item--active{display:block;padding:0}.md-nav--lifted>.md-nav__list>.md-nav__item--active>.md-nav__link{background:var(--md-default-bg-color);box-shadow:0 0 .4rem .4rem var(--md-default-bg-color);font-weight:700;margin-top:0;padding:0 .6rem;position:-webkit-sticky;position:sticky;top:0;z-index:1}.md-nav--lifted>.md-nav__list>.md-nav__item--active>.md-nav__link:not(.md-nav__link--index){pointer-events:none}.md-nav--lifted>.md-nav__list>.md-nav__item--active>.md-nav__link .md-nav__icon{display:none}.md-nav--lifted .md-nav[data-md-level="1"]{display:block}[dir=ltr] .md-nav--lifted .md-nav[data-md-level="1"]>.md-nav__list>.md-nav__item{padding-right:.6rem}[dir=rtl] .md-nav--lifted 
.md-nav[data-md-level="1"]>.md-nav__list>.md-nav__item{padding-left:.6rem}.md-nav--integrated>.md-nav__list>.md-nav__item--active:not(.md-nav__item--nested){padding:0 .6rem}.md-nav--integrated>.md-nav__list>.md-nav__item--active:not(.md-nav__item--nested)>.md-nav__link{padding:0}[dir=ltr] .md-nav--integrated>.md-nav__list>.md-nav__item--active .md-nav--secondary{border-left:.05rem solid var(--md-primary-fg-color)}[dir=rtl] .md-nav--integrated>.md-nav__list>.md-nav__item--active .md-nav--secondary{border-right:.05rem solid var(--md-primary-fg-color)}.md-nav--integrated>.md-nav__list>.md-nav__item--active .md-nav--secondary{display:block;margin-bottom:1.25em}.md-nav--integrated>.md-nav__list>.md-nav__item--active .md-nav--secondary>.md-nav__title{display:none}}:root{--md-search-result-icon:url('data:image/svg+xml;charset=utf-8,')}.md-search{position:relative}@media screen and (min-width:60em){.md-search{padding:.2rem 0}}.no-js .md-search{display:none}.md-search__overlay{opacity:0;z-index:1}@media screen and (max-width:59.9375em){[dir=ltr] .md-search__overlay{left:-2.2rem}[dir=rtl] .md-search__overlay{right:-2.2rem}.md-search__overlay{background-color:var(--md-default-bg-color);border-radius:1rem;height:2rem;overflow:hidden;pointer-events:none;position:absolute;top:-1rem;transform-origin:center;transition:transform .3s .1s,opacity .2s .2s;width:2rem}[data-md-toggle=search]:checked~.md-header .md-search__overlay{opacity:1;transition:transform .4s,opacity .1s}}@media screen and (min-width:60em){[dir=ltr] .md-search__overlay{left:0}[dir=rtl] .md-search__overlay{right:0}.md-search__overlay{background-color:rgba(0,0,0,.54);cursor:pointer;height:0;position:fixed;top:0;transition:width 0ms .25s,height 0ms .25s,opacity .25s;width:0}[data-md-toggle=search]:checked~.md-header .md-search__overlay{height:200vh;opacity:1;transition:width 0ms,height 0ms,opacity .25s;width:100%}}@media screen and (max-width:29.9375em){[data-md-toggle=search]:checked~.md-header 
.md-search__overlay{transform:scale(45)}}@media screen and (min-width:30em) and (max-width:44.9375em){[data-md-toggle=search]:checked~.md-header .md-search__overlay{transform:scale(60)}}@media screen and (min-width:45em) and (max-width:59.9375em){[data-md-toggle=search]:checked~.md-header .md-search__overlay{transform:scale(75)}}.md-search__inner{-webkit-backface-visibility:hidden;backface-visibility:hidden}@media screen and (max-width:59.9375em){[dir=ltr] .md-search__inner{left:0}[dir=rtl] .md-search__inner{right:0}.md-search__inner{height:0;opacity:0;overflow:hidden;position:fixed;top:0;transform:translateX(5%);transition:width 0ms .3s,height 0ms .3s,transform .15s cubic-bezier(.4,0,.2,1) .15s,opacity .15s .15s;width:0;z-index:2}[dir=rtl] .md-search__inner{transform:translateX(-5%)}[data-md-toggle=search]:checked~.md-header .md-search__inner{height:100%;opacity:1;transform:translateX(0);transition:width 0ms 0ms,height 0ms 0ms,transform .15s cubic-bezier(.1,.7,.1,1) .15s,opacity .15s .15s;width:100%}}@media screen and (min-width:60em){[dir=ltr] .md-search__inner{float:right}[dir=rtl] .md-search__inner{float:left}.md-search__inner{padding:.1rem 0;position:relative;transition:width .25s cubic-bezier(.1,.7,.1,1);width:11.7rem}}@media screen and (min-width:60em) and (max-width:76.1875em){[data-md-toggle=search]:checked~.md-header .md-search__inner{width:23.4rem}}@media screen and (min-width:76.25em){[data-md-toggle=search]:checked~.md-header .md-search__inner{width:34.4rem}}.md-search__form{background-color:var(--md-default-bg-color);box-shadow:0 0 .6rem transparent;height:2.4rem;position:relative;transition:color .25s,background-color .25s;z-index:2}@media screen and (min-width:60em){.md-search__form{background-color:rgba(0,0,0,.26);border-radius:.1rem;height:1.8rem}.md-search__form:hover{background-color:hsla(0,0%,100%,.12)}}[data-md-toggle=search]:checked~.md-header .md-search__form{background-color:var(--md-default-bg-color);border-radius:.1rem .1rem 0 
0;box-shadow:0 0 .6rem rgba(0,0,0,.07);color:var(--md-default-fg-color)}[dir=ltr] .md-search__input{padding-left:3.6rem;padding-right:2.2rem}[dir=rtl] .md-search__input{padding-left:2.2rem;padding-right:3.6rem}.md-search__input{background:transparent;font-size:.9rem;height:100%;position:relative;text-overflow:ellipsis;width:100%;z-index:2}.md-search__input::-ms-input-placeholder{-ms-transition:color .25s;transition:color .25s}.md-search__input::placeholder{transition:color .25s}.md-search__input::-ms-input-placeholder{color:var(--md-default-fg-color--light)}.md-search__input::placeholder,.md-search__input~.md-search__icon{color:var(--md-default-fg-color--light)}.md-search__input::-ms-clear{display:none}@media screen and (max-width:59.9375em){.md-search__input{font-size:.9rem;height:2.4rem;width:100%}}@media screen and (min-width:60em){[dir=ltr] .md-search__input{padding-left:2.2rem}[dir=rtl] .md-search__input{padding-right:2.2rem}.md-search__input{color:inherit;font-size:.8rem}.md-search__input::-ms-input-placeholder{color:var(--md-primary-bg-color--light)}.md-search__input::placeholder{color:var(--md-primary-bg-color--light)}.md-search__input+.md-search__icon{color:var(--md-primary-bg-color)}[data-md-toggle=search]:checked~.md-header .md-search__input{text-overflow:clip}[data-md-toggle=search]:checked~.md-header .md-search__input::-ms-input-placeholder{color:var(--md-default-fg-color--light)}[data-md-toggle=search]:checked~.md-header .md-search__input+.md-search__icon,[data-md-toggle=search]:checked~.md-header .md-search__input::placeholder{color:var(--md-default-fg-color--light)}}.md-search__icon{cursor:pointer;display:inline-block;height:1.2rem;transition:color .25s,opacity .25s;width:1.2rem}.md-search__icon:hover{opacity:.7}[dir=ltr] .md-search__icon[for=__search]{left:.5rem}[dir=rtl] .md-search__icon[for=__search]{right:.5rem}.md-search__icon[for=__search]{position:absolute;top:.3rem;z-index:2}[dir=rtl] .md-search__icon[for=__search] 
svg{transform:scaleX(-1)}@media screen and (max-width:59.9375em){[dir=ltr] .md-search__icon[for=__search]{left:.8rem}[dir=rtl] .md-search__icon[for=__search]{right:.8rem}.md-search__icon[for=__search]{top:.6rem}.md-search__icon[for=__search] svg:first-child{display:none}}@media screen and (min-width:60em){.md-search__icon[for=__search]{pointer-events:none}.md-search__icon[for=__search] svg:last-child{display:none}}[dir=ltr] .md-search__options{right:.5rem}[dir=rtl] .md-search__options{left:.5rem}.md-search__options{pointer-events:none;position:absolute;top:.3rem;z-index:2}@media screen and (max-width:59.9375em){[dir=ltr] .md-search__options{right:.8rem}[dir=rtl] .md-search__options{left:.8rem}.md-search__options{top:.6rem}}[dir=ltr] .md-search__options>*{margin-left:.2rem}[dir=rtl] .md-search__options>*{margin-right:.2rem}.md-search__options>*{color:var(--md-default-fg-color--light);opacity:0;transform:scale(.75);transition:transform .15s cubic-bezier(.1,.7,.1,1),opacity .15s}.md-search__options>:not(.focus-visible){-webkit-tap-highlight-color:transparent;outline:none}[data-md-toggle=search]:checked~.md-header .md-search__input:valid~.md-search__options>*{opacity:1;pointer-events:auto;transform:scale(1)}[data-md-toggle=search]:checked~.md-header .md-search__input:valid~.md-search__options>:hover{opacity:.7}[dir=ltr] .md-search__suggest{padding-left:3.6rem;padding-right:2.2rem}[dir=rtl] .md-search__suggest{padding-left:2.2rem;padding-right:3.6rem}.md-search__suggest{align-items:center;color:var(--md-default-fg-color--lighter);display:flex;font-size:.9rem;height:100%;opacity:0;position:absolute;top:0;transition:opacity 50ms;white-space:nowrap;width:100%}@media screen and (min-width:60em){[dir=ltr] .md-search__suggest{padding-left:2.2rem}[dir=rtl] .md-search__suggest{padding-right:2.2rem}.md-search__suggest{font-size:.8rem}}[data-md-toggle=search]:checked~.md-header .md-search__suggest{opacity:1;transition:opacity .3s .1s}[dir=ltr] 
.md-search__output{border-bottom-left-radius:.1rem}[dir=ltr] .md-search__output,[dir=rtl] .md-search__output{border-bottom-right-radius:.1rem}[dir=rtl] .md-search__output{border-bottom-left-radius:.1rem}.md-search__output{overflow:hidden;position:absolute;width:100%;z-index:1}@media screen and (max-width:59.9375em){.md-search__output{bottom:0;top:2.4rem}}@media screen and (min-width:60em){.md-search__output{opacity:0;top:1.9rem;transition:opacity .4s}[data-md-toggle=search]:checked~.md-header .md-search__output{box-shadow:var(--md-shadow-z3);opacity:1}}.md-search__scrollwrap{-webkit-backface-visibility:hidden;backface-visibility:hidden;background-color:var(--md-default-bg-color);height:100%;overflow-y:auto;touch-action:pan-y}@media (-webkit-max-device-pixel-ratio:1),(max-resolution:1dppx){.md-search__scrollwrap{transform:translateZ(0)}}@media screen and (min-width:60em) and (max-width:76.1875em){.md-search__scrollwrap{width:23.4rem}}@media screen and (min-width:76.25em){.md-search__scrollwrap{width:34.4rem}}@media screen and (min-width:60em){.md-search__scrollwrap{max-height:0;scrollbar-color:var(--md-default-fg-color--lighter) transparent;scrollbar-width:thin}[data-md-toggle=search]:checked~.md-header .md-search__scrollwrap{max-height:75vh}.md-search__scrollwrap:hover{scrollbar-color:var(--md-accent-fg-color) transparent}.md-search__scrollwrap::-webkit-scrollbar{height:.2rem;width:.2rem}.md-search__scrollwrap::-webkit-scrollbar-thumb{background-color:var(--md-default-fg-color--lighter)}.md-search__scrollwrap::-webkit-scrollbar-thumb:hover{background-color:var(--md-accent-fg-color)}}.md-search-result{color:var(--md-default-fg-color);word-break:break-word}.md-search-result__meta{background-color:var(--md-default-fg-color--lightest);color:var(--md-default-fg-color--light);font-size:.64rem;line-height:1.8rem;padding:0 .8rem;scroll-snap-align:start}@media screen and (min-width:60em){[dir=ltr] .md-search-result__meta{padding-left:2.2rem}[dir=rtl] 
.md-search-result__meta{padding-right:2.2rem}}.md-search-result__list{list-style:none;margin:0;padding:0;-webkit-user-select:none;-moz-user-select:none;-ms-user-select:none;user-select:none}.md-search-result__item{box-shadow:0 -.05rem var(--md-default-fg-color--lightest)}.md-search-result__item:first-child{box-shadow:none}.md-search-result__link{display:block;outline:none;scroll-snap-align:start;transition:background-color .25s}.md-search-result__link:-webkit-any(:focus,:hover){background-color:var(--md-accent-fg-color--transparent)}.md-search-result__link:-moz-any(:focus,:hover){background-color:var(--md-accent-fg-color--transparent)}.md-search-result__link:is(:focus,:hover){background-color:var(--md-accent-fg-color--transparent)}.md-search-result__link:last-child p:last-child{margin-bottom:.6rem}.md-search-result__more summary{color:var(--md-typeset-a-color);cursor:pointer;display:block;font-size:.64rem;outline:none;padding:.75em .8rem;scroll-snap-align:start;transition:color .25s,background-color .25s}@media screen and (min-width:60em){[dir=ltr] .md-search-result__more summary{padding-left:2.2rem}[dir=rtl] .md-search-result__more summary{padding-right:2.2rem}}.md-search-result__more summary:-webkit-any(:focus,:hover){background-color:var(--md-accent-fg-color--transparent);color:var(--md-accent-fg-color)}.md-search-result__more summary:-moz-any(:focus,:hover){background-color:var(--md-accent-fg-color--transparent);color:var(--md-accent-fg-color)}.md-search-result__more summary:is(:focus,:hover){background-color:var(--md-accent-fg-color--transparent);color:var(--md-accent-fg-color)}.md-search-result__more summary::marker{display:none}.md-search-result__more summary::-webkit-details-marker{display:none}.md-search-result__more summary~*>*{opacity:.65}.md-search-result__article{overflow:hidden;padding:0 .8rem;position:relative}@media screen and (min-width:60em){[dir=ltr] .md-search-result__article{padding-left:2.2rem}[dir=rtl] 
.md-search-result__article{padding-right:2.2rem}}.md-search-result__article--document .md-search-result__title{font-size:.8rem;font-weight:400;line-height:1.4;margin:.55rem 0}[dir=ltr] .md-search-result__icon{left:0}[dir=rtl] .md-search-result__icon{right:0}.md-search-result__icon{color:var(--md-default-fg-color--light);height:1.2rem;margin:.5rem;position:absolute;width:1.2rem}@media screen and (max-width:59.9375em){.md-search-result__icon{display:none}}.md-search-result__icon:after{background-color:currentcolor;content:"";display:inline-block;height:100%;-webkit-mask-image:var(--md-search-result-icon);mask-image:var(--md-search-result-icon);-webkit-mask-position:center;mask-position:center;-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-size:contain;mask-size:contain;width:100%}[dir=rtl] .md-search-result__icon:after{transform:scaleX(-1)}.md-search-result__title{font-size:.64rem;font-weight:700;line-height:1.6;margin:.5em 0}.md-search-result__teaser{-webkit-box-orient:vertical;-webkit-line-clamp:2;color:var(--md-default-fg-color--light);display:-webkit-box;font-size:.64rem;line-height:1.6;margin:.5em 0;max-height:2rem;overflow:hidden;text-overflow:ellipsis}@media screen and (max-width:44.9375em){.md-search-result__teaser{-webkit-line-clamp:3;max-height:3rem}}@media screen and (min-width:60em) and (max-width:76.1875em){.md-search-result__teaser{-webkit-line-clamp:3;max-height:3rem}}.md-search-result__teaser mark{background-color:initial;text-decoration:underline}.md-search-result__terms{font-size:.64rem;font-style:italic;margin:.5em 0}.md-search-result mark{background-color:initial;color:var(--md-accent-fg-color)}.md-select{position:relative;z-index:1}.md-select__inner{background-color:var(--md-default-bg-color);border-radius:.1rem;box-shadow:var(--md-shadow-z2);color:var(--md-default-fg-color);left:50%;margin-top:.2rem;max-height:0;opacity:0;position:absolute;top:calc(100% - .2rem);transform:translate3d(-50%,.3rem,0);transition:transform .25s 
375ms,opacity .25s .25s,max-height 0ms .5s}.md-select:-webkit-any(:focus-within,:hover) .md-select__inner{max-height:10rem;opacity:1;transform:translate3d(-50%,0,0);-webkit-transition:transform .25s cubic-bezier(.1,.7,.1,1),opacity .25s,max-height 0ms;transition:transform .25s cubic-bezier(.1,.7,.1,1),opacity .25s,max-height 0ms}.md-select:-moz-any(:focus-within,:hover) .md-select__inner{max-height:10rem;opacity:1;transform:translate3d(-50%,0,0);-moz-transition:transform .25s cubic-bezier(.1,.7,.1,1),opacity .25s,max-height 0ms;transition:transform .25s cubic-bezier(.1,.7,.1,1),opacity .25s,max-height 0ms}.md-select:is(:focus-within,:hover) .md-select__inner{max-height:10rem;opacity:1;transform:translate3d(-50%,0,0);transition:transform .25s cubic-bezier(.1,.7,.1,1),opacity .25s,max-height 0ms}.md-select__inner:after{border-bottom:.2rem solid transparent;border-bottom-color:var(--md-default-bg-color);border-left:.2rem solid transparent;border-right:.2rem solid transparent;border-top:0;content:"";height:0;left:50%;margin-left:-.2rem;margin-top:-.2rem;position:absolute;top:0;width:0}.md-select__list{border-radius:.1rem;font-size:.8rem;list-style-type:none;margin:0;max-height:inherit;overflow:auto;padding:0}.md-select__item{line-height:1.8rem}[dir=ltr] .md-select__link{padding-left:.6rem;padding-right:1.2rem}[dir=rtl] .md-select__link{padding-left:1.2rem;padding-right:.6rem}.md-select__link{cursor:pointer;display:block;outline:none;scroll-snap-align:start;transition:background-color .25s,color .25s;width:100%}.md-select__link:-webkit-any(:focus,:hover){color:var(--md-accent-fg-color)}.md-select__link:-moz-any(:focus,:hover){color:var(--md-accent-fg-color)}.md-select__link:is(:focus,:hover){color:var(--md-accent-fg-color)}.md-select__link:focus{background-color:var(--md-default-fg-color--lightest)}.md-sidebar{align-self:flex-start;flex-shrink:0;padding:1.2rem 0;position:-webkit-sticky;position:sticky;top:2.4rem;width:12.1rem}@media 
print{.md-sidebar{display:none}}@media screen and (max-width:76.1875em){[dir=ltr] .md-sidebar--primary{left:-12.1rem}[dir=rtl] .md-sidebar--primary{right:-12.1rem}.md-sidebar--primary{background-color:var(--md-default-bg-color);display:block;height:100%;position:fixed;top:0;transform:translateX(0);transition:transform .25s cubic-bezier(.4,0,.2,1),box-shadow .25s;width:12.1rem;z-index:5}[data-md-toggle=drawer]:checked~.md-container .md-sidebar--primary{box-shadow:var(--md-shadow-z3);transform:translateX(12.1rem)}[dir=rtl] [data-md-toggle=drawer]:checked~.md-container .md-sidebar--primary{transform:translateX(-12.1rem)}.md-sidebar--primary .md-sidebar__scrollwrap{bottom:0;left:0;margin:0;overflow:hidden;position:absolute;right:0;-ms-scroll-snap-type:none;scroll-snap-type:none;top:0}}@media screen and (min-width:76.25em){.md-sidebar{height:0}.no-js .md-sidebar{height:auto}.md-header--lifted~.md-container .md-sidebar{top:4.8rem}}.md-sidebar--secondary{display:none;order:2}@media screen and (min-width:60em){.md-sidebar--secondary{height:0}.no-js .md-sidebar--secondary{height:auto}.md-sidebar--secondary:not([hidden]){display:block}.md-sidebar--secondary .md-sidebar__scrollwrap{touch-action:pan-y}}.md-sidebar__scrollwrap{-webkit-backface-visibility:hidden;backface-visibility:hidden;margin:0 .2rem;overflow-y:auto;scrollbar-color:var(--md-default-fg-color--lighter) transparent;scrollbar-width:thin}.md-sidebar__scrollwrap:hover{scrollbar-color:var(--md-accent-fg-color) transparent}.md-sidebar__scrollwrap::-webkit-scrollbar{height:.2rem;width:.2rem}.md-sidebar__scrollwrap::-webkit-scrollbar-thumb{background-color:var(--md-default-fg-color--lighter)}.md-sidebar__scrollwrap::-webkit-scrollbar-thumb:hover{background-color:var(--md-accent-fg-color)}@media screen and (max-width:76.1875em){.md-overlay{background-color:rgba(0,0,0,.54);height:0;opacity:0;position:fixed;top:0;transition:width 0ms .25s,height 0ms .25s,opacity 
.25s;width:0;z-index:5}[data-md-toggle=drawer]:checked~.md-overlay{height:100%;opacity:1;transition:width 0ms,height 0ms,opacity .25s;width:100%}}@-webkit-keyframes facts{0%{height:0}to{height:.65rem}}@keyframes facts{0%{height:0}to{height:.65rem}}@-webkit-keyframes fact{0%{opacity:0;transform:translateY(100%)}50%{opacity:0}to{opacity:1;transform:translateY(0)}}@keyframes fact{0%{opacity:0;transform:translateY(100%)}50%{opacity:0}to{opacity:1;transform:translateY(0)}}:root{--md-source-forks-icon:url('data:image/svg+xml;charset=utf-8,');--md-source-repositories-icon:url('data:image/svg+xml;charset=utf-8,');--md-source-stars-icon:url('data:image/svg+xml;charset=utf-8,');--md-source-version-icon:url('data:image/svg+xml;charset=utf-8,')}.md-source{-webkit-backface-visibility:hidden;backface-visibility:hidden;display:block;font-size:.65rem;line-height:1.2;outline-color:var(--md-accent-fg-color);transition:opacity .25s;white-space:nowrap}.md-source:hover{opacity:.7}.md-source__icon{display:inline-block;height:2.4rem;vertical-align:middle;width:2rem}[dir=ltr] .md-source__icon svg{margin-left:.6rem}[dir=rtl] .md-source__icon svg{margin-right:.6rem}.md-source__icon svg{margin-top:.6rem}[dir=ltr] .md-source__icon+.md-source__repository{margin-left:-2rem}[dir=rtl] .md-source__icon+.md-source__repository{margin-right:-2rem}[dir=ltr] .md-source__icon+.md-source__repository{padding-left:2rem}[dir=rtl] .md-source__icon+.md-source__repository{padding-right:2rem}[dir=ltr] .md-source__repository{margin-left:.6rem}[dir=rtl] .md-source__repository{margin-right:.6rem}.md-source__repository{display:inline-block;max-width:calc(100% - 1.2rem);overflow:hidden;text-overflow:ellipsis;vertical-align:middle}.md-source__facts{display:flex;font-size:.55rem;gap:.4rem;list-style-type:none;margin:.1rem 0 0;opacity:.75;overflow:hidden;padding:0;width:100%}.md-source__repository--active .md-source__facts{-webkit-animation:facts .25s ease-in;animation:facts .25s 
ease-in}.md-source__fact{overflow:hidden;text-overflow:ellipsis}.md-source__repository--active .md-source__fact{-webkit-animation:fact .4s ease-out;animation:fact .4s ease-out}[dir=ltr] .md-source__fact:before{margin-right:.1rem}[dir=rtl] .md-source__fact:before{margin-left:.1rem}.md-source__fact:before{background-color:currentcolor;content:"";display:inline-block;height:.6rem;-webkit-mask-position:center;mask-position:center;-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-size:contain;mask-size:contain;vertical-align:text-top;width:.6rem}.md-source__fact:nth-child(1n+2){flex-shrink:0}.md-source__fact--version:before{-webkit-mask-image:var(--md-source-version-icon);mask-image:var(--md-source-version-icon)}.md-source__fact--stars:before{-webkit-mask-image:var(--md-source-stars-icon);mask-image:var(--md-source-stars-icon)}.md-source__fact--forks:before{-webkit-mask-image:var(--md-source-forks-icon);mask-image:var(--md-source-forks-icon)}.md-source__fact--repositories:before{-webkit-mask-image:var(--md-source-repositories-icon);mask-image:var(--md-source-repositories-icon)}.md-tabs{background-color:var(--md-primary-fg-color);color:var(--md-primary-bg-color);display:block;line-height:1.3;overflow:auto;width:100%;z-index:3}@media print{.md-tabs{display:none}}@media screen and (max-width:76.1875em){.md-tabs{display:none}}.md-tabs[hidden]{pointer-events:none}[dir=ltr] .md-tabs__list{margin-left:.2rem}[dir=rtl] .md-tabs__list{margin-right:.2rem}.md-tabs__list{contain:content;list-style:none;margin:0;padding:0;white-space:nowrap}.md-tabs__item{display:inline-block;height:2.4rem;padding-left:.6rem;padding-right:.6rem}.md-tabs__link{-webkit-backface-visibility:hidden;backface-visibility:hidden;display:block;font-size:.7rem;margin-top:.8rem;opacity:.7;outline-color:var(--md-accent-fg-color);outline-offset:.2rem;transition:transform .4s cubic-bezier(.1,.7,.1,1),opacity 
.25s}.md-tabs__link--active,.md-tabs__link:-webkit-any(:focus,:hover){color:inherit;opacity:1}.md-tabs__link--active,.md-tabs__link:-moz-any(:focus,:hover){color:inherit;opacity:1}.md-tabs__link--active,.md-tabs__link:is(:focus,:hover){color:inherit;opacity:1}.md-tabs__item:nth-child(2) .md-tabs__link{transition-delay:20ms}.md-tabs__item:nth-child(3) .md-tabs__link{transition-delay:40ms}.md-tabs__item:nth-child(4) .md-tabs__link{transition-delay:60ms}.md-tabs__item:nth-child(5) .md-tabs__link{transition-delay:80ms}.md-tabs__item:nth-child(6) .md-tabs__link{transition-delay:.1s}.md-tabs__item:nth-child(7) .md-tabs__link{transition-delay:.12s}.md-tabs__item:nth-child(8) .md-tabs__link{transition-delay:.14s}.md-tabs__item:nth-child(9) .md-tabs__link{transition-delay:.16s}.md-tabs__item:nth-child(10) .md-tabs__link{transition-delay:.18s}.md-tabs__item:nth-child(11) .md-tabs__link{transition-delay:.2s}.md-tabs__item:nth-child(12) .md-tabs__link{transition-delay:.22s}.md-tabs__item:nth-child(13) .md-tabs__link{transition-delay:.24s}.md-tabs__item:nth-child(14) .md-tabs__link{transition-delay:.26s}.md-tabs__item:nth-child(15) .md-tabs__link{transition-delay:.28s}.md-tabs__item:nth-child(16) .md-tabs__link{transition-delay:.3s}.md-tabs[hidden] .md-tabs__link{opacity:0;transform:translateY(50%);transition:transform 0ms .1s,opacity .1s}:root{--md-tag-icon:url('data:image/svg+xml;charset=utf-8,')}.md-typeset .md-tags{margin-bottom:.75em;margin-top:-.125em}[dir=ltr] .md-typeset .md-tag{margin-right:.5em}[dir=rtl] .md-typeset .md-tag{margin-left:.5em}.md-typeset .md-tag{background:var(--md-default-fg-color--lightest);border-radius:2.4rem;display:inline-block;font-size:.64rem;font-weight:700;letter-spacing:normal;line-height:1.6;margin-bottom:.5em;padding:.3125em .9375em;vertical-align:middle}.md-typeset .md-tag[href]{-webkit-tap-highlight-color:transparent;color:inherit;outline:none;transition:color 125ms,background-color 125ms}.md-typeset .md-tag[href]:focus,.md-typeset 
.md-tag[href]:hover{background-color:var(--md-accent-fg-color);color:var(--md-accent-bg-color)}[id]>.md-typeset .md-tag{vertical-align:text-top}.md-typeset .md-tag-icon:before{background-color:var(--md-default-fg-color--lighter);content:"";display:inline-block;height:1.2em;margin-right:.4em;-webkit-mask-image:var(--md-tag-icon);mask-image:var(--md-tag-icon);-webkit-mask-position:center;mask-position:center;-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-size:contain;mask-size:contain;transition:background-color 125ms;vertical-align:text-bottom;width:1.2em}.md-typeset .md-tag-icon:-webkit-any(a:focus,a:hover):before{background-color:var(--md-accent-bg-color)}.md-typeset .md-tag-icon:-moz-any(a:focus,a:hover):before{background-color:var(--md-accent-bg-color)}.md-typeset .md-tag-icon:is(a:focus,a:hover):before{background-color:var(--md-accent-bg-color)}@-webkit-keyframes pulse{0%{box-shadow:0 0 0 0 var(--md-default-fg-color--lightest);transform:scale(.95)}75%{box-shadow:0 0 0 .625em transparent;transform:scale(1)}to{box-shadow:0 0 0 0 transparent;transform:scale(.95)}}@keyframes pulse{0%{box-shadow:0 0 0 0 var(--md-default-fg-color--lightest);transform:scale(.95)}75%{box-shadow:0 0 0 .625em transparent;transform:scale(1)}to{box-shadow:0 0 0 0 transparent;transform:scale(.95)}}:root{--md-tooltip-width:20rem}.md-tooltip{-webkit-backface-visibility:hidden;backface-visibility:hidden;background-color:var(--md-default-bg-color);border-radius:.1rem;box-shadow:var(--md-shadow-z2);color:var(--md-default-fg-color);font-family:var(--md-text-font-family);left:clamp(var(--md-tooltip-0,0rem) + .8rem,var(--md-tooltip-x),100vw + var(--md-tooltip-0,0rem) + .8rem - var(--md-tooltip-width) - 2 * .8rem);max-width:calc(100vw - 1.6rem);opacity:0;position:absolute;top:var(--md-tooltip-y);transform:translateY(-.4rem);transition:transform 0ms .25s,opacity .25s,z-index 
.25s;width:var(--md-tooltip-width);z-index:0}.md-tooltip--active{opacity:1;transform:translateY(0);transition:transform .25s cubic-bezier(.1,.7,.1,1),opacity .25s,z-index 0ms;z-index:2}:-webkit-any(.focus-visible>.md-tooltip,.md-tooltip:target){outline:var(--md-accent-fg-color) auto}:-moz-any(.focus-visible>.md-tooltip,.md-tooltip:target){outline:var(--md-accent-fg-color) auto}:is(.focus-visible>.md-tooltip,.md-tooltip:target){outline:var(--md-accent-fg-color) auto}.md-tooltip__inner{font-size:.64rem;padding:.8rem}.md-tooltip__inner.md-typeset>:first-child{margin-top:0}.md-tooltip__inner.md-typeset>:last-child{margin-bottom:0}.md-annotation{font-weight:400;outline:none;white-space:normal}[dir=rtl] .md-annotation{direction:rtl}.md-annotation:not([hidden]){display:inline-block;line-height:1.325}.md-annotation__index{cursor:pointer;font-family:var(--md-code-font-family);font-size:.85em;margin:0 1ch;outline:none;position:relative;-webkit-user-select:none;-moz-user-select:none;-ms-user-select:none;user-select:none;z-index:0}.md-annotation .md-annotation__index{color:#fff;transition:z-index .25s}.md-annotation .md-annotation__index:-webkit-any(:focus,:hover){color:#fff}.md-annotation .md-annotation__index:-moz-any(:focus,:hover){color:#fff}.md-annotation .md-annotation__index:is(:focus,:hover){color:#fff}.md-annotation__index:after{background-color:var(--md-default-fg-color--lighter);border-radius:2ch;content:"";height:2.2ch;left:-.125em;margin:0 -.4ch;padding:0 .4ch;position:absolute;top:0;transition:color .25s,background-color .25s;width:calc(100% + 1.2ch);width:max(2.2ch,100% + 1.2ch);z-index:-1}@media not all and (prefers-reduced-motion){[data-md-visible]>.md-annotation__index:after{-webkit-animation:pulse 2s infinite;animation:pulse 2s infinite}}.md-tooltip--active+.md-annotation__index:after{-webkit-animation:none;animation:none;transition:color .25s,background-color .25s}code 
.md-annotation__index{font-family:var(--md-code-font-family);font-size:inherit}:-webkit-any(.md-tooltip--active+.md-annotation__index,:hover>.md-annotation__index){color:var(--md-accent-bg-color)}:-moz-any(.md-tooltip--active+.md-annotation__index,:hover>.md-annotation__index){color:var(--md-accent-bg-color)}:is(.md-tooltip--active+.md-annotation__index,:hover>.md-annotation__index){color:var(--md-accent-bg-color)}:-webkit-any(.md-tooltip--active+.md-annotation__index,:hover>.md-annotation__index):after{background-color:var(--md-accent-fg-color)}:-moz-any(.md-tooltip--active+.md-annotation__index,:hover>.md-annotation__index):after{background-color:var(--md-accent-fg-color)}:is(.md-tooltip--active+.md-annotation__index,:hover>.md-annotation__index):after{background-color:var(--md-accent-fg-color)}.md-tooltip--active+.md-annotation__index{-webkit-animation:none;animation:none;transition:none;z-index:2}.md-annotation__index [data-md-annotation-id]{display:inline-block;line-height:90%}.md-annotation__index [data-md-annotation-id]:before{content:attr(data-md-annotation-id);display:inline-block;padding-bottom:.1em;transform:scale(1.15);transition:transform .4s cubic-bezier(.1,.7,.1,1);vertical-align:.065em}@media not print{.md-annotation__index [data-md-annotation-id]:before{content:"+"}:focus-within>.md-annotation__index [data-md-annotation-id]:before{transform:scale(1.25) rotate(45deg)}}[dir=ltr] .md-top{margin-left:50%}[dir=rtl] .md-top{margin-right:50%}.md-top{background-color:var(--md-default-bg-color);border-radius:1.6rem;box-shadow:var(--md-shadow-z2);color:var(--md-default-fg-color--light);display:block;font-size:.7rem;outline:none;padding:.4rem .8rem;position:fixed;top:3.2rem;transform:translate(-50%);transition:color 125ms,background-color 125ms,transform 125ms cubic-bezier(.4,0,.2,1),opacity 125ms;z-index:2}@media print{.md-top{display:none}}[dir=rtl] 
.md-top{transform:translate(50%)}.md-top[hidden]{opacity:0;pointer-events:none;transform:translate(-50%,.2rem);transition-duration:0ms}[dir=rtl] .md-top[hidden]{transform:translate(50%,.2rem)}.md-top:-webkit-any(:focus,:hover){background-color:var(--md-accent-fg-color);color:var(--md-accent-bg-color)}.md-top:-moz-any(:focus,:hover){background-color:var(--md-accent-fg-color);color:var(--md-accent-bg-color)}.md-top:is(:focus,:hover){background-color:var(--md-accent-fg-color);color:var(--md-accent-bg-color)}.md-top svg{display:inline-block;vertical-align:-.5em}@-webkit-keyframes hoverfix{0%{pointer-events:none}}@keyframes hoverfix{0%{pointer-events:none}}:root{--md-version-icon:url('data:image/svg+xml;charset=utf-8,')}.md-version{flex-shrink:0;font-size:.8rem;height:2.4rem}[dir=ltr] .md-version__current{margin-left:1.4rem;margin-right:.4rem}[dir=rtl] .md-version__current{margin-left:.4rem;margin-right:1.4rem}.md-version__current{color:inherit;cursor:pointer;outline:none;position:relative;top:.05rem}[dir=ltr] .md-version__current:after{margin-left:.4rem}[dir=rtl] .md-version__current:after{margin-right:.4rem}.md-version__current:after{background-color:currentcolor;content:"";display:inline-block;height:.6rem;-webkit-mask-image:var(--md-version-icon);mask-image:var(--md-version-icon);-webkit-mask-position:center;mask-position:center;-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-size:contain;mask-size:contain;width:.4rem}.md-version__list{background-color:var(--md-default-bg-color);border-radius:.1rem;box-shadow:var(--md-shadow-z2);color:var(--md-default-fg-color);list-style-type:none;margin:.2rem .8rem;max-height:0;opacity:0;overflow:auto;padding:0;position:absolute;-ms-scroll-snap-type:y mandatory;scroll-snap-type:y mandatory;top:.15rem;transition:max-height 0ms .5s,opacity .25s .25s;z-index:3}.md-version:-webkit-any(:focus-within,:hover) .md-version__list{max-height:10rem;opacity:1;-webkit-transition:max-height 0ms,opacity .25s;transition:max-height 
0ms,opacity .25s}.md-version:-moz-any(:focus-within,:hover) .md-version__list{max-height:10rem;opacity:1;-moz-transition:max-height 0ms,opacity .25s;transition:max-height 0ms,opacity .25s}.md-version:is(:focus-within,:hover) .md-version__list{max-height:10rem;opacity:1;transition:max-height 0ms,opacity .25s}@media (pointer:coarse){.md-version:hover .md-version__list{-webkit-animation:hoverfix .25s forwards;animation:hoverfix .25s forwards}.md-version:focus-within .md-version__list{-webkit-animation:none;animation:none}}.md-version__item{line-height:1.8rem}[dir=ltr] .md-version__link{padding-left:.6rem;padding-right:1.2rem}[dir=rtl] .md-version__link{padding-left:1.2rem;padding-right:.6rem}.md-version__link{cursor:pointer;display:block;outline:none;scroll-snap-align:start;transition:color .25s,background-color .25s;white-space:nowrap;width:100%}.md-version__link:-webkit-any(:focus,:hover){color:var(--md-accent-fg-color)}.md-version__link:-moz-any(:focus,:hover){color:var(--md-accent-fg-color)}.md-version__link:is(:focus,:hover){color:var(--md-accent-fg-color)}.md-version__link:focus{background-color:var(--md-default-fg-color--lightest)}:root{--md-admonition-icon--note:url('data:image/svg+xml;charset=utf-8,');--md-admonition-icon--abstract:url('data:image/svg+xml;charset=utf-8,');--md-admonition-icon--info:url('data:image/svg+xml;charset=utf-8,');--md-admonition-icon--tip:url('data:image/svg+xml;charset=utf-8,');--md-admonition-icon--success:url('data:image/svg+xml;charset=utf-8,');--md-admonition-icon--question:url('data:image/svg+xml;charset=utf-8,');--md-admonition-icon--warning:url('data:image/svg+xml;charset=utf-8,');--md-admonition-icon--failure:url('data:image/svg+xml;charset=utf-8,');--md-admonition-icon--danger:url('data:image/svg+xml;charset=utf-8,');--md-admonition-icon--bug:url('data:image/svg+xml;charset=utf-8,');--md-admonition-icon--example:url('data:image/svg+xml;charset=utf-8,');--md-admonition-icon--quote:url('data:image/svg+xml;charset=utf-8,')}.md-
typeset :-webkit-any(.admonition,details){background-color:var(--md-admonition-bg-color);border:0 solid #448aff;border-radius:.1rem;box-shadow:var(--md-shadow-z1);color:var(--md-admonition-fg-color);display:flow-root;font-size:.64rem;margin:1.5625em 0;padding:0 .6rem;page-break-inside:avoid}.md-typeset :-moz-any(.admonition,details){background-color:var(--md-admonition-bg-color);border:0 solid #448aff;border-radius:.1rem;box-shadow:var(--md-shadow-z1);color:var(--md-admonition-fg-color);display:flow-root;font-size:.64rem;margin:1.5625em 0;padding:0 .6rem;page-break-inside:avoid}[dir=ltr] .md-typeset :-webkit-any(.admonition,details){border-left-width:.2rem}[dir=ltr] .md-typeset :-moz-any(.admonition,details){border-left-width:.2rem}[dir=ltr] .md-typeset :is(.admonition,details){border-left-width:.2rem}[dir=rtl] .md-typeset :-webkit-any(.admonition,details){border-right-width:.2rem}[dir=rtl] .md-typeset :-moz-any(.admonition,details){border-right-width:.2rem}[dir=rtl] .md-typeset :is(.admonition,details){border-right-width:.2rem}.md-typeset :is(.admonition,details){background-color:var(--md-admonition-bg-color);border:0 solid #448aff;border-radius:.1rem;box-shadow:var(--md-shadow-z1);color:var(--md-admonition-fg-color);display:flow-root;font-size:.64rem;margin:1.5625em 0;padding:0 .6rem;page-break-inside:avoid}@media print{.md-typeset :-webkit-any(.admonition,details){box-shadow:none}.md-typeset :-moz-any(.admonition,details){box-shadow:none}.md-typeset :is(.admonition,details){box-shadow:none}}.md-typeset :-webkit-any(.admonition,details)>*{box-sizing:border-box}.md-typeset :-moz-any(.admonition,details)>*{box-sizing:border-box}.md-typeset :is(.admonition,details)>*{box-sizing:border-box}.md-typeset :-webkit-any(.admonition,details) :-webkit-any(.admonition,details){margin-bottom:1em;margin-top:1em}.md-typeset :-moz-any(.admonition,details) :-moz-any(.admonition,details){margin-bottom:1em;margin-top:1em}.md-typeset :is(.admonition,details) 
:is(.admonition,details){margin-bottom:1em;margin-top:1em}.md-typeset :-webkit-any(.admonition,details) .md-typeset__scrollwrap{margin:1em -.6rem}.md-typeset :-moz-any(.admonition,details) .md-typeset__scrollwrap{margin:1em -.6rem}.md-typeset :is(.admonition,details) .md-typeset__scrollwrap{margin:1em -.6rem}.md-typeset :-webkit-any(.admonition,details) .md-typeset__table{padding:0 .6rem}.md-typeset :-moz-any(.admonition,details) .md-typeset__table{padding:0 .6rem}.md-typeset :is(.admonition,details) .md-typeset__table{padding:0 .6rem}.md-typeset :-webkit-any(.admonition,details)>.tabbed-set:only-child{margin-top:0}.md-typeset :-moz-any(.admonition,details)>.tabbed-set:only-child{margin-top:0}.md-typeset :is(.admonition,details)>.tabbed-set:only-child{margin-top:0}html .md-typeset :-webkit-any(.admonition,details)>:last-child{margin-bottom:.6rem}html .md-typeset :-moz-any(.admonition,details)>:last-child{margin-bottom:.6rem}html .md-typeset :is(.admonition,details)>:last-child{margin-bottom:.6rem}.md-typeset :-webkit-any(.admonition-title,summary){background-color:rgba(68,138,255,.1);border:none;font-weight:700;margin-bottom:0;margin-top:0;padding-bottom:.4rem;padding-top:.4rem;position:relative}.md-typeset :-moz-any(.admonition-title,summary){background-color:rgba(68,138,255,.1);border:none;font-weight:700;margin-bottom:0;margin-top:0;padding-bottom:.4rem;padding-top:.4rem;position:relative}[dir=ltr] .md-typeset :-webkit-any(.admonition-title,summary){margin-left:-.8rem;margin-right:-.6rem}[dir=ltr] .md-typeset :-moz-any(.admonition-title,summary){margin-left:-.8rem;margin-right:-.6rem}[dir=ltr] .md-typeset :is(.admonition-title,summary){margin-left:-.8rem;margin-right:-.6rem}[dir=rtl] .md-typeset :-webkit-any(.admonition-title,summary){margin-left:-.6rem;margin-right:-.8rem}[dir=rtl] .md-typeset :-moz-any(.admonition-title,summary){margin-left:-.6rem;margin-right:-.8rem}[dir=rtl] .md-typeset 
:is(.admonition-title,summary){margin-left:-.6rem;margin-right:-.8rem}[dir=ltr] .md-typeset :-webkit-any(.admonition-title,summary){padding-left:2.2rem;padding-right:.6rem}[dir=ltr] .md-typeset :-moz-any(.admonition-title,summary){padding-left:2.2rem;padding-right:.6rem}[dir=ltr] .md-typeset :is(.admonition-title,summary){padding-left:2.2rem;padding-right:.6rem}[dir=rtl] .md-typeset :-webkit-any(.admonition-title,summary){padding-left:.6rem;padding-right:2.2rem}[dir=rtl] .md-typeset :-moz-any(.admonition-title,summary){padding-left:.6rem;padding-right:2.2rem}[dir=rtl] .md-typeset :is(.admonition-title,summary){padding-left:.6rem;padding-right:2.2rem}[dir=ltr] .md-typeset :-webkit-any(.admonition-title,summary){border-left-width:.2rem}[dir=ltr] .md-typeset :-moz-any(.admonition-title,summary){border-left-width:.2rem}[dir=ltr] .md-typeset :is(.admonition-title,summary){border-left-width:.2rem}[dir=rtl] .md-typeset :-webkit-any(.admonition-title,summary){border-right-width:.2rem}[dir=rtl] .md-typeset :-moz-any(.admonition-title,summary){border-right-width:.2rem}[dir=rtl] .md-typeset :is(.admonition-title,summary){border-right-width:.2rem}[dir=ltr] .md-typeset :-webkit-any(.admonition-title,summary){border-top-left-radius:.1rem}[dir=ltr] .md-typeset :-moz-any(.admonition-title,summary){border-top-left-radius:.1rem}[dir=ltr] .md-typeset :is(.admonition-title,summary){border-top-left-radius:.1rem}[dir=rtl] .md-typeset :-webkit-any(.admonition-title,summary){border-top-right-radius:.1rem}[dir=rtl] .md-typeset :-moz-any(.admonition-title,summary){border-top-right-radius:.1rem}[dir=rtl] .md-typeset :is(.admonition-title,summary){border-top-right-radius:.1rem}[dir=ltr] .md-typeset :-webkit-any(.admonition-title,summary){border-top-right-radius:.1rem}[dir=ltr] .md-typeset :-moz-any(.admonition-title,summary){border-top-right-radius:.1rem}[dir=ltr] .md-typeset :is(.admonition-title,summary){border-top-right-radius:.1rem}[dir=rtl] .md-typeset 
:-webkit-any(.admonition-title,summary){border-top-left-radius:.1rem}[dir=rtl] .md-typeset :-moz-any(.admonition-title,summary){border-top-left-radius:.1rem}[dir=rtl] .md-typeset :is(.admonition-title,summary){border-top-left-radius:.1rem}.md-typeset :is(.admonition-title,summary){background-color:rgba(68,138,255,.1);border:none;font-weight:700;margin-bottom:0;margin-top:0;padding-bottom:.4rem;padding-top:.4rem;position:relative}html .md-typeset :-webkit-any(.admonition-title,summary):last-child{margin-bottom:0}html .md-typeset :-moz-any(.admonition-title,summary):last-child{margin-bottom:0}html .md-typeset :is(.admonition-title,summary):last-child{margin-bottom:0}.md-typeset :-webkit-any(.admonition-title,summary):before{background-color:#448aff;content:"";height:1rem;-webkit-mask-image:var(--md-admonition-icon--note);mask-image:var(--md-admonition-icon--note);-webkit-mask-position:center;mask-position:center;-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-size:contain;mask-size:contain;position:absolute;top:.625em;width:1rem}.md-typeset :-moz-any(.admonition-title,summary):before{background-color:#448aff;content:"";height:1rem;mask-image:var(--md-admonition-icon--note);mask-position:center;mask-repeat:no-repeat;mask-size:contain;position:absolute;top:.625em;width:1rem}[dir=ltr] .md-typeset :-webkit-any(.admonition-title,summary):before{left:.8rem}[dir=ltr] .md-typeset :-moz-any(.admonition-title,summary):before{left:.8rem}[dir=ltr] .md-typeset :is(.admonition-title,summary):before{left:.8rem}[dir=rtl] .md-typeset :-webkit-any(.admonition-title,summary):before{right:.8rem}[dir=rtl] .md-typeset :-moz-any(.admonition-title,summary):before{right:.8rem}[dir=rtl] .md-typeset :is(.admonition-title,summary):before{right:.8rem}.md-typeset 
:is(.admonition-title,summary):before{background-color:#448aff;content:"";height:1rem;-webkit-mask-image:var(--md-admonition-icon--note);mask-image:var(--md-admonition-icon--note);-webkit-mask-position:center;mask-position:center;-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-size:contain;mask-size:contain;position:absolute;top:.625em;width:1rem}.md-typeset :-webkit-any(.admonition,details):-webkit-any(.note){border-color:#448aff}.md-typeset :-moz-any(.admonition,details):-moz-any(.note){border-color:#448aff}.md-typeset :is(.admonition,details):is(.note){border-color:#448aff}.md-typeset :-webkit-any(.note)>:-webkit-any(.admonition-title,summary){background-color:rgba(68,138,255,.1)}.md-typeset :-moz-any(.note)>:-moz-any(.admonition-title,summary){background-color:rgba(68,138,255,.1)}.md-typeset :is(.note)>:is(.admonition-title,summary){background-color:rgba(68,138,255,.1)}.md-typeset :-webkit-any(.note)>:-webkit-any(.admonition-title,summary):before{background-color:#448aff;-webkit-mask-image:var(--md-admonition-icon--note);mask-image:var(--md-admonition-icon--note)}.md-typeset :-moz-any(.note)>:-moz-any(.admonition-title,summary):before{background-color:#448aff;mask-image:var(--md-admonition-icon--note)}.md-typeset :is(.note)>:is(.admonition-title,summary):before{background-color:#448aff;-webkit-mask-image:var(--md-admonition-icon--note);mask-image:var(--md-admonition-icon--note)}.md-typeset :-webkit-any(.admonition,details):-webkit-any(.abstract,.summary,.tldr){border-color:#00b0ff}.md-typeset :-moz-any(.admonition,details):-moz-any(.abstract,.summary,.tldr){border-color:#00b0ff}.md-typeset :is(.admonition,details):is(.abstract,.summary,.tldr){border-color:#00b0ff}.md-typeset :-webkit-any(.abstract,.summary,.tldr)>:-webkit-any(.admonition-title,summary){background-color:rgba(0,176,255,.1)}.md-typeset :-moz-any(.abstract,.summary,.tldr)>:-moz-any(.admonition-title,summary){background-color:rgba(0,176,255,.1)}.md-typeset 
:is(.abstract,.summary,.tldr)>:is(.admonition-title,summary){background-color:rgba(0,176,255,.1)}.md-typeset :-webkit-any(.abstract,.summary,.tldr)>:-webkit-any(.admonition-title,summary):before{background-color:#00b0ff;-webkit-mask-image:var(--md-admonition-icon--abstract);mask-image:var(--md-admonition-icon--abstract)}.md-typeset :-moz-any(.abstract,.summary,.tldr)>:-moz-any(.admonition-title,summary):before{background-color:#00b0ff;mask-image:var(--md-admonition-icon--abstract)}.md-typeset :is(.abstract,.summary,.tldr)>:is(.admonition-title,summary):before{background-color:#00b0ff;-webkit-mask-image:var(--md-admonition-icon--abstract);mask-image:var(--md-admonition-icon--abstract)}.md-typeset :-webkit-any(.admonition,details):-webkit-any(.info,.todo){border-color:#00b8d4}.md-typeset :-moz-any(.admonition,details):-moz-any(.info,.todo){border-color:#00b8d4}.md-typeset :is(.admonition,details):is(.info,.todo){border-color:#00b8d4}.md-typeset :-webkit-any(.info,.todo)>:-webkit-any(.admonition-title,summary){background-color:rgba(0,184,212,.1)}.md-typeset :-moz-any(.info,.todo)>:-moz-any(.admonition-title,summary){background-color:rgba(0,184,212,.1)}.md-typeset :is(.info,.todo)>:is(.admonition-title,summary){background-color:rgba(0,184,212,.1)}.md-typeset :-webkit-any(.info,.todo)>:-webkit-any(.admonition-title,summary):before{background-color:#00b8d4;-webkit-mask-image:var(--md-admonition-icon--info);mask-image:var(--md-admonition-icon--info)}.md-typeset :-moz-any(.info,.todo)>:-moz-any(.admonition-title,summary):before{background-color:#00b8d4;mask-image:var(--md-admonition-icon--info)}.md-typeset :is(.info,.todo)>:is(.admonition-title,summary):before{background-color:#00b8d4;-webkit-mask-image:var(--md-admonition-icon--info);mask-image:var(--md-admonition-icon--info)}.md-typeset :-webkit-any(.admonition,details):-webkit-any(.tip,.hint,.important){border-color:#00bfa5}.md-typeset 
:-moz-any(.admonition,details):-moz-any(.tip,.hint,.important){border-color:#00bfa5}.md-typeset :is(.admonition,details):is(.tip,.hint,.important){border-color:#00bfa5}.md-typeset :-webkit-any(.tip,.hint,.important)>:-webkit-any(.admonition-title,summary){background-color:rgba(0,191,165,.1)}.md-typeset :-moz-any(.tip,.hint,.important)>:-moz-any(.admonition-title,summary){background-color:rgba(0,191,165,.1)}.md-typeset :is(.tip,.hint,.important)>:is(.admonition-title,summary){background-color:rgba(0,191,165,.1)}.md-typeset :-webkit-any(.tip,.hint,.important)>:-webkit-any(.admonition-title,summary):before{background-color:#00bfa5;-webkit-mask-image:var(--md-admonition-icon--tip);mask-image:var(--md-admonition-icon--tip)}.md-typeset :-moz-any(.tip,.hint,.important)>:-moz-any(.admonition-title,summary):before{background-color:#00bfa5;mask-image:var(--md-admonition-icon--tip)}.md-typeset :is(.tip,.hint,.important)>:is(.admonition-title,summary):before{background-color:#00bfa5;-webkit-mask-image:var(--md-admonition-icon--tip);mask-image:var(--md-admonition-icon--tip)}.md-typeset :-webkit-any(.admonition,details):-webkit-any(.success,.check,.done){border-color:#00c853}.md-typeset :-moz-any(.admonition,details):-moz-any(.success,.check,.done){border-color:#00c853}.md-typeset :is(.admonition,details):is(.success,.check,.done){border-color:#00c853}.md-typeset :-webkit-any(.success,.check,.done)>:-webkit-any(.admonition-title,summary){background-color:rgba(0,200,83,.1)}.md-typeset :-moz-any(.success,.check,.done)>:-moz-any(.admonition-title,summary){background-color:rgba(0,200,83,.1)}.md-typeset :is(.success,.check,.done)>:is(.admonition-title,summary){background-color:rgba(0,200,83,.1)}.md-typeset :-webkit-any(.success,.check,.done)>:-webkit-any(.admonition-title,summary):before{background-color:#00c853;-webkit-mask-image:var(--md-admonition-icon--success);mask-image:var(--md-admonition-icon--success)}.md-typeset 
:-moz-any(.success,.check,.done)>:-moz-any(.admonition-title,summary):before{background-color:#00c853;mask-image:var(--md-admonition-icon--success)}.md-typeset :is(.success,.check,.done)>:is(.admonition-title,summary):before{background-color:#00c853;-webkit-mask-image:var(--md-admonition-icon--success);mask-image:var(--md-admonition-icon--success)}.md-typeset :-webkit-any(.admonition,details):-webkit-any(.question,.help,.faq){border-color:#64dd17}.md-typeset :-moz-any(.admonition,details):-moz-any(.question,.help,.faq){border-color:#64dd17}.md-typeset :is(.admonition,details):is(.question,.help,.faq){border-color:#64dd17}.md-typeset :-webkit-any(.question,.help,.faq)>:-webkit-any(.admonition-title,summary){background-color:rgba(100,221,23,.1)}.md-typeset :-moz-any(.question,.help,.faq)>:-moz-any(.admonition-title,summary){background-color:rgba(100,221,23,.1)}.md-typeset :is(.question,.help,.faq)>:is(.admonition-title,summary){background-color:rgba(100,221,23,.1)}.md-typeset :-webkit-any(.question,.help,.faq)>:-webkit-any(.admonition-title,summary):before{background-color:#64dd17;-webkit-mask-image:var(--md-admonition-icon--question);mask-image:var(--md-admonition-icon--question)}.md-typeset :-moz-any(.question,.help,.faq)>:-moz-any(.admonition-title,summary):before{background-color:#64dd17;mask-image:var(--md-admonition-icon--question)}.md-typeset :is(.question,.help,.faq)>:is(.admonition-title,summary):before{background-color:#64dd17;-webkit-mask-image:var(--md-admonition-icon--question);mask-image:var(--md-admonition-icon--question)}.md-typeset :-webkit-any(.admonition,details):-webkit-any(.warning,.caution,.attention){border-color:#ff9100}.md-typeset :-moz-any(.admonition,details):-moz-any(.warning,.caution,.attention){border-color:#ff9100}.md-typeset :is(.admonition,details):is(.warning,.caution,.attention){border-color:#ff9100}.md-typeset 
:-webkit-any(.warning,.caution,.attention)>:-webkit-any(.admonition-title,summary){background-color:rgba(255,145,0,.1)}.md-typeset :-moz-any(.warning,.caution,.attention)>:-moz-any(.admonition-title,summary){background-color:rgba(255,145,0,.1)}.md-typeset :is(.warning,.caution,.attention)>:is(.admonition-title,summary){background-color:rgba(255,145,0,.1)}.md-typeset :-webkit-any(.warning,.caution,.attention)>:-webkit-any(.admonition-title,summary):before{background-color:#ff9100;-webkit-mask-image:var(--md-admonition-icon--warning);mask-image:var(--md-admonition-icon--warning)}.md-typeset :-moz-any(.warning,.caution,.attention)>:-moz-any(.admonition-title,summary):before{background-color:#ff9100;mask-image:var(--md-admonition-icon--warning)}.md-typeset :is(.warning,.caution,.attention)>:is(.admonition-title,summary):before{background-color:#ff9100;-webkit-mask-image:var(--md-admonition-icon--warning);mask-image:var(--md-admonition-icon--warning)}.md-typeset :-webkit-any(.admonition,details):-webkit-any(.failure,.fail,.missing){border-color:#ff5252}.md-typeset :-moz-any(.admonition,details):-moz-any(.failure,.fail,.missing){border-color:#ff5252}.md-typeset :is(.admonition,details):is(.failure,.fail,.missing){border-color:#ff5252}.md-typeset :-webkit-any(.failure,.fail,.missing)>:-webkit-any(.admonition-title,summary){background-color:rgba(255,82,82,.1)}.md-typeset :-moz-any(.failure,.fail,.missing)>:-moz-any(.admonition-title,summary){background-color:rgba(255,82,82,.1)}.md-typeset :is(.failure,.fail,.missing)>:is(.admonition-title,summary){background-color:rgba(255,82,82,.1)}.md-typeset :-webkit-any(.failure,.fail,.missing)>:-webkit-any(.admonition-title,summary):before{background-color:#ff5252;-webkit-mask-image:var(--md-admonition-icon--failure);mask-image:var(--md-admonition-icon--failure)}.md-typeset :-moz-any(.failure,.fail,.missing)>:-moz-any(.admonition-title,summary):before{background-color:#ff5252;mask-image:var(--md-admonition-icon--failure)}.md-typeset 
:is(.failure,.fail,.missing)>:is(.admonition-title,summary):before{background-color:#ff5252;-webkit-mask-image:var(--md-admonition-icon--failure);mask-image:var(--md-admonition-icon--failure)}.md-typeset :-webkit-any(.admonition,details):-webkit-any(.danger,.error){border-color:#ff1744}.md-typeset :-moz-any(.admonition,details):-moz-any(.danger,.error){border-color:#ff1744}.md-typeset :is(.admonition,details):is(.danger,.error){border-color:#ff1744}.md-typeset :-webkit-any(.danger,.error)>:-webkit-any(.admonition-title,summary){background-color:rgba(255,23,68,.1)}.md-typeset :-moz-any(.danger,.error)>:-moz-any(.admonition-title,summary){background-color:rgba(255,23,68,.1)}.md-typeset :is(.danger,.error)>:is(.admonition-title,summary){background-color:rgba(255,23,68,.1)}.md-typeset :-webkit-any(.danger,.error)>:-webkit-any(.admonition-title,summary):before{background-color:#ff1744;-webkit-mask-image:var(--md-admonition-icon--danger);mask-image:var(--md-admonition-icon--danger)}.md-typeset :-moz-any(.danger,.error)>:-moz-any(.admonition-title,summary):before{background-color:#ff1744;mask-image:var(--md-admonition-icon--danger)}.md-typeset :is(.danger,.error)>:is(.admonition-title,summary):before{background-color:#ff1744;-webkit-mask-image:var(--md-admonition-icon--danger);mask-image:var(--md-admonition-icon--danger)}.md-typeset :-webkit-any(.admonition,details):-webkit-any(.bug){border-color:#f50057}.md-typeset :-moz-any(.admonition,details):-moz-any(.bug){border-color:#f50057}.md-typeset :is(.admonition,details):is(.bug){border-color:#f50057}.md-typeset :-webkit-any(.bug)>:-webkit-any(.admonition-title,summary){background-color:rgba(245,0,87,.1)}.md-typeset :-moz-any(.bug)>:-moz-any(.admonition-title,summary){background-color:rgba(245,0,87,.1)}.md-typeset :is(.bug)>:is(.admonition-title,summary){background-color:rgba(245,0,87,.1)}.md-typeset 
:-webkit-any(.bug)>:-webkit-any(.admonition-title,summary):before{background-color:#f50057;-webkit-mask-image:var(--md-admonition-icon--bug);mask-image:var(--md-admonition-icon--bug)}.md-typeset :-moz-any(.bug)>:-moz-any(.admonition-title,summary):before{background-color:#f50057;mask-image:var(--md-admonition-icon--bug)}.md-typeset :is(.bug)>:is(.admonition-title,summary):before{background-color:#f50057;-webkit-mask-image:var(--md-admonition-icon--bug);mask-image:var(--md-admonition-icon--bug)}.md-typeset :-webkit-any(.admonition,details):-webkit-any(.example){border-color:#7c4dff}.md-typeset :-moz-any(.admonition,details):-moz-any(.example){border-color:#7c4dff}.md-typeset :is(.admonition,details):is(.example){border-color:#7c4dff}.md-typeset :-webkit-any(.example)>:-webkit-any(.admonition-title,summary){background-color:rgba(124,77,255,.1)}.md-typeset :-moz-any(.example)>:-moz-any(.admonition-title,summary){background-color:rgba(124,77,255,.1)}.md-typeset :is(.example)>:is(.admonition-title,summary){background-color:rgba(124,77,255,.1)}.md-typeset :-webkit-any(.example)>:-webkit-any(.admonition-title,summary):before{background-color:#7c4dff;-webkit-mask-image:var(--md-admonition-icon--example);mask-image:var(--md-admonition-icon--example)}.md-typeset :-moz-any(.example)>:-moz-any(.admonition-title,summary):before{background-color:#7c4dff;mask-image:var(--md-admonition-icon--example)}.md-typeset :is(.example)>:is(.admonition-title,summary):before{background-color:#7c4dff;-webkit-mask-image:var(--md-admonition-icon--example);mask-image:var(--md-admonition-icon--example)}.md-typeset :-webkit-any(.admonition,details):-webkit-any(.quote,.cite){border-color:#9e9e9e}.md-typeset :-moz-any(.admonition,details):-moz-any(.quote,.cite){border-color:#9e9e9e}.md-typeset :is(.admonition,details):is(.quote,.cite){border-color:#9e9e9e}.md-typeset :-webkit-any(.quote,.cite)>:-webkit-any(.admonition-title,summary){background-color:hsla(0,0%,62%,.1)}.md-typeset 
:-moz-any(.quote,.cite)>:-moz-any(.admonition-title,summary){background-color:hsla(0,0%,62%,.1)}.md-typeset :is(.quote,.cite)>:is(.admonition-title,summary){background-color:hsla(0,0%,62%,.1)}.md-typeset :-webkit-any(.quote,.cite)>:-webkit-any(.admonition-title,summary):before{background-color:#9e9e9e;-webkit-mask-image:var(--md-admonition-icon--quote);mask-image:var(--md-admonition-icon--quote)}.md-typeset :-moz-any(.quote,.cite)>:-moz-any(.admonition-title,summary):before{background-color:#9e9e9e;mask-image:var(--md-admonition-icon--quote)}.md-typeset :is(.quote,.cite)>:is(.admonition-title,summary):before{background-color:#9e9e9e;-webkit-mask-image:var(--md-admonition-icon--quote);mask-image:var(--md-admonition-icon--quote)}:root{--md-footnotes-icon:url('data:image/svg+xml;charset=utf-8,')}.md-typeset .footnote{color:var(--md-default-fg-color--light);font-size:.64rem}[dir=ltr] .md-typeset .footnote>ol{margin-left:0}[dir=rtl] .md-typeset .footnote>ol{margin-right:0}.md-typeset .footnote>ol>li{transition:color 125ms}.md-typeset .footnote>ol>li:target{color:var(--md-default-fg-color)}.md-typeset .footnote>ol>li:focus-within .footnote-backref{opacity:1;transform:translateX(0);transition:none}.md-typeset .footnote>ol>li:-webkit-any(:hover,:target) .footnote-backref{opacity:1;transform:translateX(0)}.md-typeset .footnote>ol>li:-moz-any(:hover,:target) .footnote-backref{opacity:1;transform:translateX(0)}.md-typeset .footnote>ol>li:is(:hover,:target) .footnote-backref{opacity:1;transform:translateX(0)}.md-typeset .footnote>ol>li>:first-child{margin-top:0}.md-typeset .footnote-ref{font-size:.75em;font-weight:700}html .md-typeset .footnote-ref{outline-offset:.1rem}.md-typeset [id^="fnref:"]:target>.footnote-ref{outline:auto}.md-typeset .footnote-backref{color:var(--md-typeset-a-color);display:inline-block;font-size:0;opacity:0;transform:translateX(.25rem);transition:color .25s,transform .25s .25s,opacity 125ms .25s;vertical-align:text-bottom}@media print{.md-typeset 
.footnote-backref{color:var(--md-typeset-a-color);opacity:1;transform:translateX(0)}}[dir=rtl] .md-typeset .footnote-backref{transform:translateX(-.25rem)}.md-typeset .footnote-backref:hover{color:var(--md-accent-fg-color)}.md-typeset .footnote-backref:before{background-color:currentcolor;content:"";display:inline-block;height:.8rem;-webkit-mask-image:var(--md-footnotes-icon);mask-image:var(--md-footnotes-icon);-webkit-mask-position:center;mask-position:center;-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-size:contain;mask-size:contain;width:.8rem}[dir=rtl] .md-typeset .footnote-backref:before svg{transform:scaleX(-1)}[dir=ltr] .md-typeset .headerlink{margin-left:.5rem}[dir=rtl] .md-typeset .headerlink{margin-right:.5rem}.md-typeset .headerlink{color:var(--md-default-fg-color--lighter);display:inline-block;opacity:0;transition:color .25s,opacity 125ms}@media print{.md-typeset .headerlink{display:none}}.md-typeset .headerlink:focus,.md-typeset :-webkit-any(:hover,:target)>.headerlink{opacity:1;-webkit-transition:color .25s,opacity 125ms;transition:color .25s,opacity 125ms}.md-typeset .headerlink:focus,.md-typeset :-moz-any(:hover,:target)>.headerlink{opacity:1;-moz-transition:color .25s,opacity 125ms;transition:color .25s,opacity 125ms}.md-typeset .headerlink:focus,.md-typeset :is(:hover,:target)>.headerlink{opacity:1;transition:color .25s,opacity 125ms}.md-typeset .headerlink:-webkit-any(:focus,:hover),.md-typeset :target>.headerlink{color:var(--md-accent-fg-color)}.md-typeset .headerlink:-moz-any(:focus,:hover),.md-typeset :target>.headerlink{color:var(--md-accent-fg-color)}.md-typeset .headerlink:is(:focus,:hover),.md-typeset :target>.headerlink{color:var(--md-accent-fg-color)}.md-typeset :target{--md-scroll-margin:3.6rem;--md-scroll-offset:0rem;scroll-margin-top:calc(var(--md-scroll-margin) - var(--md-scroll-offset))}@media screen and (min-width:76.25em){.md-header--lifted~.md-container .md-typeset :target{--md-scroll-margin:6rem}}.md-typeset 
:-webkit-any(h1,h2,h3):target{--md-scroll-offset:0.2rem}.md-typeset :-moz-any(h1,h2,h3):target{--md-scroll-offset:0.2rem}.md-typeset :is(h1,h2,h3):target{--md-scroll-offset:0.2rem}.md-typeset h4:target{--md-scroll-offset:0.15rem}.md-typeset div.arithmatex{overflow:auto}@media screen and (max-width:44.9375em){.md-typeset div.arithmatex{margin:0 -.8rem}}.md-typeset div.arithmatex>*{margin-left:auto!important;margin-right:auto!important;padding:0 .8rem;touch-action:auto;width:-webkit-min-content;width:-moz-min-content;width:min-content}.md-typeset div.arithmatex>* mjx-container{margin:0!important}.md-typeset :-webkit-any(del,ins,.comment).critic{-webkit-box-decoration-break:clone;box-decoration-break:clone}.md-typeset :-moz-any(del,ins,.comment).critic{box-decoration-break:clone}.md-typeset :is(del,ins,.comment).critic{-webkit-box-decoration-break:clone;box-decoration-break:clone}.md-typeset del.critic{background-color:var(--md-typeset-del-color)}.md-typeset ins.critic{background-color:var(--md-typeset-ins-color)}.md-typeset .critic.comment{color:var(--md-code-hl-comment-color)}.md-typeset .critic.comment:before{content:"/* "}.md-typeset .critic.comment:after{content:" */"}.md-typeset .critic.block{box-shadow:none;display:block;margin:1em 0;overflow:auto;padding-left:.8rem;padding-right:.8rem}.md-typeset .critic.block>:first-child{margin-top:.5em}.md-typeset .critic.block>:last-child{margin-bottom:.5em}:root{--md-details-icon:url('data:image/svg+xml;charset=utf-8,')}.md-typeset details{display:flow-root;overflow:visible;padding-top:0}.md-typeset details[open]>summary:after{transform:rotate(90deg)}.md-typeset details:not([open]){box-shadow:none;padding-bottom:0}.md-typeset details:not([open])>summary{border-radius:.1rem}[dir=ltr] .md-typeset summary{padding-right:1.8rem}[dir=rtl] .md-typeset summary{padding-left:1.8rem}[dir=ltr] .md-typeset summary{border-top-left-radius:.1rem}[dir=ltr] .md-typeset summary,[dir=rtl] .md-typeset 
summary{border-top-right-radius:.1rem}[dir=rtl] .md-typeset summary{border-top-left-radius:.1rem}.md-typeset summary{cursor:pointer;display:block;min-height:1rem}.md-typeset summary.focus-visible{outline-color:var(--md-accent-fg-color);outline-offset:.2rem}.md-typeset summary:not(.focus-visible){-webkit-tap-highlight-color:transparent;outline:none}[dir=ltr] .md-typeset summary:after{right:.4rem}[dir=rtl] .md-typeset summary:after{left:.4rem}.md-typeset summary:after{background-color:currentcolor;content:"";height:1rem;-webkit-mask-image:var(--md-details-icon);mask-image:var(--md-details-icon);-webkit-mask-position:center;mask-position:center;-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-size:contain;mask-size:contain;position:absolute;top:.625em;transform:rotate(0deg);transition:transform .25s;width:1rem}[dir=rtl] .md-typeset summary:after{transform:rotate(180deg)}.md-typeset summary::marker{display:none}.md-typeset summary::-webkit-details-marker{display:none}.md-typeset :-webkit-any(.emojione,.twemoji,.gemoji){display:inline-flex;height:1.125em;vertical-align:text-top}.md-typeset :-moz-any(.emojione,.twemoji,.gemoji){display:inline-flex;height:1.125em;vertical-align:text-top}.md-typeset :is(.emojione,.twemoji,.gemoji){display:inline-flex;height:1.125em;vertical-align:text-top}.md-typeset :-webkit-any(.emojione,.twemoji,.gemoji) svg{fill:currentcolor;max-height:100%;width:1.125em}.md-typeset :-moz-any(.emojione,.twemoji,.gemoji) svg{fill:currentcolor;max-height:100%;width:1.125em}.md-typeset :is(.emojione,.twemoji,.gemoji) svg{fill:currentcolor;max-height:100%;width:1.125em}.highlight :-webkit-any(.o,.ow){color:var(--md-code-hl-operator-color)}.highlight :-moz-any(.o,.ow){color:var(--md-code-hl-operator-color)}.highlight :is(.o,.ow){color:var(--md-code-hl-operator-color)}.highlight .p{color:var(--md-code-hl-punctuation-color)}.highlight :-webkit-any(.cpf,.l,.s,.sb,.sc,.s2,.si,.s1,.ss){color:var(--md-code-hl-string-color)}.highlight 
:-moz-any(.cpf,.l,.s,.sb,.sc,.s2,.si,.s1,.ss){color:var(--md-code-hl-string-color)}.highlight :is(.cpf,.l,.s,.sb,.sc,.s2,.si,.s1,.ss){color:var(--md-code-hl-string-color)}.highlight :-webkit-any(.cp,.se,.sh,.sr,.sx){color:var(--md-code-hl-special-color)}.highlight :-moz-any(.cp,.se,.sh,.sr,.sx){color:var(--md-code-hl-special-color)}.highlight :is(.cp,.se,.sh,.sr,.sx){color:var(--md-code-hl-special-color)}.highlight :-webkit-any(.m,.mb,.mf,.mh,.mi,.il,.mo){color:var(--md-code-hl-number-color)}.highlight :-moz-any(.m,.mb,.mf,.mh,.mi,.il,.mo){color:var(--md-code-hl-number-color)}.highlight :is(.m,.mb,.mf,.mh,.mi,.il,.mo){color:var(--md-code-hl-number-color)}.highlight :-webkit-any(.k,.kd,.kn,.kp,.kr,.kt){color:var(--md-code-hl-keyword-color)}.highlight :-moz-any(.k,.kd,.kn,.kp,.kr,.kt){color:var(--md-code-hl-keyword-color)}.highlight :is(.k,.kd,.kn,.kp,.kr,.kt){color:var(--md-code-hl-keyword-color)}.highlight :-webkit-any(.kc,.n){color:var(--md-code-hl-name-color)}.highlight :-moz-any(.kc,.n){color:var(--md-code-hl-name-color)}.highlight :is(.kc,.n){color:var(--md-code-hl-name-color)}.highlight :-webkit-any(.no,.nb,.bp){color:var(--md-code-hl-constant-color)}.highlight :-moz-any(.no,.nb,.bp){color:var(--md-code-hl-constant-color)}.highlight :is(.no,.nb,.bp){color:var(--md-code-hl-constant-color)}.highlight :-webkit-any(.nc,.ne,.nf,.nn){color:var(--md-code-hl-function-color)}.highlight :-moz-any(.nc,.ne,.nf,.nn){color:var(--md-code-hl-function-color)}.highlight :is(.nc,.ne,.nf,.nn){color:var(--md-code-hl-function-color)}.highlight :-webkit-any(.nd,.ni,.nl,.nt){color:var(--md-code-hl-keyword-color)}.highlight :-moz-any(.nd,.ni,.nl,.nt){color:var(--md-code-hl-keyword-color)}.highlight :is(.nd,.ni,.nl,.nt){color:var(--md-code-hl-keyword-color)}.highlight :-webkit-any(.c,.cm,.c1,.ch,.cs,.sd){color:var(--md-code-hl-comment-color)}.highlight :-moz-any(.c,.cm,.c1,.ch,.cs,.sd){color:var(--md-code-hl-comment-color)}.highlight 
:is(.c,.cm,.c1,.ch,.cs,.sd){color:var(--md-code-hl-comment-color)}.highlight :-webkit-any(.na,.nv,.vc,.vg,.vi){color:var(--md-code-hl-variable-color)}.highlight :-moz-any(.na,.nv,.vc,.vg,.vi){color:var(--md-code-hl-variable-color)}.highlight :is(.na,.nv,.vc,.vg,.vi){color:var(--md-code-hl-variable-color)}.highlight :-webkit-any(.ge,.gr,.gh,.go,.gp,.gs,.gu,.gt){color:var(--md-code-hl-generic-color)}.highlight :-moz-any(.ge,.gr,.gh,.go,.gp,.gs,.gu,.gt){color:var(--md-code-hl-generic-color)}.highlight :is(.ge,.gr,.gh,.go,.gp,.gs,.gu,.gt){color:var(--md-code-hl-generic-color)}.highlight :-webkit-any(.gd,.gi){border-radius:.1rem;margin:0 -.125em;padding:0 .125em}.highlight :-moz-any(.gd,.gi){border-radius:.1rem;margin:0 -.125em;padding:0 .125em}.highlight :is(.gd,.gi){border-radius:.1rem;margin:0 -.125em;padding:0 .125em}.highlight .gd{background-color:var(--md-typeset-del-color)}.highlight .gi{background-color:var(--md-typeset-ins-color)}.highlight .hll{background-color:var(--md-code-hl-color);display:block;margin:0 -1.1764705882em;padding:0 1.1764705882em}.highlight span.filename{background-color:var(--md-code-bg-color);border-bottom:.05rem solid var(--md-default-fg-color--lightest);border-top-left-radius:.1rem;border-top-right-radius:.1rem;display:flow-root;font-size:.85em;font-weight:700;margin-top:1em;padding:.6617647059em 1.1764705882em;position:relative}.highlight span.filename+pre{margin-top:0}.highlight span.filename+pre>code{border-top-left-radius:0;border-top-right-radius:0}.highlight [data-linenos]:before{background-color:var(--md-code-bg-color);box-shadow:-.05rem 0 var(--md-default-fg-color--lightest) inset;color:var(--md-default-fg-color--light);content:attr(data-linenos);float:left;left:-1.1764705882em;margin-left:-1.1764705882em;margin-right:1.1764705882em;padding-left:1.1764705882em;position:-webkit-sticky;position:sticky;-webkit-user-select:none;-moz-user-select:none;-ms-user-select:none;user-select:none;z-index:3}.highlight code 
a[id]{position:absolute;visibility:hidden}.highlight code[data-md-copying] .hll{display:contents}.highlight code[data-md-copying] .md-annotation{display:none}.highlighttable{display:flow-root}.highlighttable :-webkit-any(tbody,td){display:block;padding:0}.highlighttable :-moz-any(tbody,td){display:block;padding:0}.highlighttable :is(tbody,td){display:block;padding:0}.highlighttable tr{display:flex}.highlighttable pre{margin:0}.highlighttable th.filename{flex-grow:1;padding:0;text-align:left}.highlighttable th.filename span.filename{margin-top:0}.highlighttable .linenos{background-color:var(--md-code-bg-color);border-bottom-left-radius:.1rem;border-top-left-radius:.1rem;font-size:.85em;padding:.7720588235em 0 .7720588235em 1.1764705882em;-webkit-user-select:none;-moz-user-select:none;-ms-user-select:none;user-select:none}.highlighttable .linenodiv{box-shadow:-.05rem 0 var(--md-default-fg-color--lightest) inset;padding-right:.5882352941em}.highlighttable .linenodiv pre{color:var(--md-default-fg-color--light);text-align:right}.highlighttable .code{flex:1;min-width:0}.linenodiv a{color:inherit}.md-typeset .highlighttable{direction:ltr;margin:1em 0}.md-typeset .highlighttable>tbody>tr>.code>div>pre>code{border-bottom-left-radius:0;border-top-left-radius:0}.md-typeset .highlight+.result{border:.05rem solid var(--md-code-bg-color);border-bottom-left-radius:.1rem;border-bottom-right-radius:.1rem;border-top-width:.1rem;margin-top:-1.125em;overflow:visible;padding:0 1em}.md-typeset .highlight+.result:after{clear:both;content:"";display:block}@media screen and (max-width:44.9375em){.md-content__inner>.highlight{margin:1em -.8rem}.md-content__inner>.highlight>.filename,.md-content__inner>.highlight>.highlighttable>tbody>tr>.code>div>pre>code,.md-content__inner>.highlight>.highlighttable>tbody>tr>.filename 
span.filename,.md-content__inner>.highlight>.highlighttable>tbody>tr>.linenos,.md-content__inner>.highlight>pre>code{border-radius:0}.md-content__inner>.highlight+.result{border-left-width:0;border-radius:0;border-right-width:0;margin-left:-.8rem;margin-right:-.8rem}}.md-typeset .keys kbd:-webkit-any(:before,:after){-moz-osx-font-smoothing:initial;-webkit-font-smoothing:initial;color:inherit;margin:0;position:relative}.md-typeset .keys kbd:-moz-any(:before,:after){-moz-osx-font-smoothing:initial;-webkit-font-smoothing:initial;color:inherit;margin:0;position:relative}.md-typeset .keys kbd:is(:before,:after){-moz-osx-font-smoothing:initial;-webkit-font-smoothing:initial;color:inherit;margin:0;position:relative}.md-typeset .keys span{color:var(--md-default-fg-color--light);padding:0 .2em}.md-typeset .keys .key-alt:before,.md-typeset .keys .key-left-alt:before,.md-typeset .keys .key-right-alt:before{content:"⎇";padding-right:.4em}.md-typeset .keys .key-command:before,.md-typeset .keys .key-left-command:before,.md-typeset .keys .key-right-command:before{content:"⌘";padding-right:.4em}.md-typeset .keys .key-control:before,.md-typeset .keys .key-left-control:before,.md-typeset .keys .key-right-control:before{content:"⌃";padding-right:.4em}.md-typeset .keys .key-left-meta:before,.md-typeset .keys .key-meta:before,.md-typeset .keys .key-right-meta:before{content:"◆";padding-right:.4em}.md-typeset .keys .key-left-option:before,.md-typeset .keys .key-option:before,.md-typeset .keys .key-right-option:before{content:"⌥";padding-right:.4em}.md-typeset .keys .key-left-shift:before,.md-typeset .keys .key-right-shift:before,.md-typeset .keys .key-shift:before{content:"⇧";padding-right:.4em}.md-typeset .keys .key-left-super:before,.md-typeset .keys .key-right-super:before,.md-typeset .keys .key-super:before{content:"❖";padding-right:.4em}.md-typeset .keys .key-left-windows:before,.md-typeset .keys .key-right-windows:before,.md-typeset .keys 
.key-windows:before{content:"⊞";padding-right:.4em}.md-typeset .keys .key-arrow-down:before{content:"↓";padding-right:.4em}.md-typeset .keys .key-arrow-left:before{content:"←";padding-right:.4em}.md-typeset .keys .key-arrow-right:before{content:"→";padding-right:.4em}.md-typeset .keys .key-arrow-up:before{content:"↑";padding-right:.4em}.md-typeset .keys .key-backspace:before{content:"⌫";padding-right:.4em}.md-typeset .keys .key-backtab:before{content:"⇤";padding-right:.4em}.md-typeset .keys .key-caps-lock:before{content:"⇪";padding-right:.4em}.md-typeset .keys .key-clear:before{content:"⌧";padding-right:.4em}.md-typeset .keys .key-context-menu:before{content:"☰";padding-right:.4em}.md-typeset .keys .key-delete:before{content:"⌦";padding-right:.4em}.md-typeset .keys .key-eject:before{content:"⏏";padding-right:.4em}.md-typeset .keys .key-end:before{content:"⤓";padding-right:.4em}.md-typeset .keys .key-escape:before{content:"⎋";padding-right:.4em}.md-typeset .keys .key-home:before{content:"⤒";padding-right:.4em}.md-typeset .keys .key-insert:before{content:"⎀";padding-right:.4em}.md-typeset .keys .key-page-down:before{content:"⇟";padding-right:.4em}.md-typeset .keys .key-page-up:before{content:"⇞";padding-right:.4em}.md-typeset .keys .key-print-screen:before{content:"⎙";padding-right:.4em}.md-typeset .keys .key-tab:after{content:"⇥";padding-left:.4em}.md-typeset .keys .key-num-enter:after{content:"⌤";padding-left:.4em}.md-typeset .keys .key-enter:after{content:"⏎";padding-left:.4em}:root{--md-tabbed-icon--prev:url('data:image/svg+xml;charset=utf-8,');--md-tabbed-icon--next:url('data:image/svg+xml;charset=utf-8,')}.md-typeset .tabbed-set{border-radius:.1rem;display:flex;flex-flow:column wrap;margin:1em 0;position:relative}.md-typeset .tabbed-set>input{height:0;opacity:0;position:absolute;width:0}.md-typeset .tabbed-set>input:target{--md-scroll-offset:0.625em}.md-typeset .tabbed-labels{-ms-overflow-style:none;box-shadow:0 -.05rem var(--md-default-fg-color--lightest) 
inset;display:flex;max-width:100%;overflow:auto;scrollbar-width:none}@media print{.md-typeset .tabbed-labels{display:contents}}@media screen{.js .md-typeset .tabbed-labels{position:relative}.js .md-typeset .tabbed-labels:before{background:var(--md-accent-fg-color);bottom:0;content:"";display:block;height:2px;left:0;position:absolute;transform:translateX(var(--md-indicator-x));transition:width 225ms,transform .25s;transition-timing-function:cubic-bezier(.4,0,.2,1);width:var(--md-indicator-width)}}.md-typeset .tabbed-labels::-webkit-scrollbar{display:none}.md-typeset .tabbed-labels>label{border-bottom:.1rem solid transparent;border-radius:.1rem .1rem 0 0;color:var(--md-default-fg-color--light);cursor:pointer;flex-shrink:0;font-size:.64rem;font-weight:700;padding:.78125em 1.25em .625em;scroll-margin-inline-start:1rem;transition:background-color .25s,color .25s;white-space:nowrap;width:auto}@media print{.md-typeset .tabbed-labels>label:first-child{order:1}.md-typeset .tabbed-labels>label:nth-child(2){order:2}.md-typeset .tabbed-labels>label:nth-child(3){order:3}.md-typeset .tabbed-labels>label:nth-child(4){order:4}.md-typeset .tabbed-labels>label:nth-child(5){order:5}.md-typeset .tabbed-labels>label:nth-child(6){order:6}.md-typeset .tabbed-labels>label:nth-child(7){order:7}.md-typeset .tabbed-labels>label:nth-child(8){order:8}.md-typeset .tabbed-labels>label:nth-child(9){order:9}.md-typeset .tabbed-labels>label:nth-child(10){order:10}.md-typeset .tabbed-labels>label:nth-child(11){order:11}.md-typeset .tabbed-labels>label:nth-child(12){order:12}.md-typeset .tabbed-labels>label:nth-child(13){order:13}.md-typeset .tabbed-labels>label:nth-child(14){order:14}.md-typeset .tabbed-labels>label:nth-child(15){order:15}.md-typeset .tabbed-labels>label:nth-child(16){order:16}.md-typeset .tabbed-labels>label:nth-child(17){order:17}.md-typeset .tabbed-labels>label:nth-child(18){order:18}.md-typeset .tabbed-labels>label:nth-child(19){order:19}.md-typeset 
.tabbed-labels>label:nth-child(20){order:20}}.md-typeset .tabbed-labels>label:hover{color:var(--md-accent-fg-color)}.md-typeset .tabbed-content{width:100%}@media print{.md-typeset .tabbed-content{display:contents}}.md-typeset .tabbed-block{display:none}@media print{.md-typeset .tabbed-block{display:block}.md-typeset .tabbed-block:first-child{order:1}.md-typeset .tabbed-block:nth-child(2){order:2}.md-typeset .tabbed-block:nth-child(3){order:3}.md-typeset .tabbed-block:nth-child(4){order:4}.md-typeset .tabbed-block:nth-child(5){order:5}.md-typeset .tabbed-block:nth-child(6){order:6}.md-typeset .tabbed-block:nth-child(7){order:7}.md-typeset .tabbed-block:nth-child(8){order:8}.md-typeset .tabbed-block:nth-child(9){order:9}.md-typeset .tabbed-block:nth-child(10){order:10}.md-typeset .tabbed-block:nth-child(11){order:11}.md-typeset .tabbed-block:nth-child(12){order:12}.md-typeset .tabbed-block:nth-child(13){order:13}.md-typeset .tabbed-block:nth-child(14){order:14}.md-typeset .tabbed-block:nth-child(15){order:15}.md-typeset .tabbed-block:nth-child(16){order:16}.md-typeset .tabbed-block:nth-child(17){order:17}.md-typeset .tabbed-block:nth-child(18){order:18}.md-typeset .tabbed-block:nth-child(19){order:19}.md-typeset .tabbed-block:nth-child(20){order:20}}.md-typeset .tabbed-block>.highlight:first-child>pre,.md-typeset .tabbed-block>pre:first-child{margin:0}.md-typeset .tabbed-block>.highlight:first-child>pre>code,.md-typeset .tabbed-block>pre:first-child>code{border-top-left-radius:0;border-top-right-radius:0}.md-typeset .tabbed-block>.highlight:first-child>.filename{border-top-left-radius:0;border-top-right-radius:0;margin:0}.md-typeset .tabbed-block>.highlight:first-child>.highlighttable{margin:0}.md-typeset .tabbed-block>.highlight:first-child>.highlighttable>tbody>tr>.filename span.filename,.md-typeset .tabbed-block>.highlight:first-child>.highlighttable>tbody>tr>.linenos{border-top-left-radius:0;border-top-right-radius:0;margin:0}.md-typeset 
.tabbed-block>.highlight:first-child>.highlighttable>tbody>tr>.code>div>pre>code{border-top-left-radius:0;border-top-right-radius:0}.md-typeset .tabbed-block>.highlight:first-child+.result{margin-top:-.125em}.md-typeset .tabbed-block>.tabbed-set{margin:0}.md-typeset .tabbed-button{align-self:center;border-radius:100%;color:var(--md-default-fg-color--light);cursor:pointer;display:block;height:.9rem;margin-top:.1rem;pointer-events:auto;transition:background-color .25s;width:.9rem}.md-typeset .tabbed-button:hover{background-color:var(--md-accent-fg-color--transparent);color:var(--md-accent-fg-color)}.md-typeset .tabbed-button:after{background-color:currentcolor;content:"";display:block;height:100%;-webkit-mask-image:var(--md-tabbed-icon--prev);mask-image:var(--md-tabbed-icon--prev);-webkit-mask-position:center;mask-position:center;-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-size:contain;mask-size:contain;transition:background-color .25s,transform .25s;width:100%}.md-typeset .tabbed-control{background:linear-gradient(to right,var(--md-default-bg-color) 60%,transparent);display:flex;height:1.9rem;justify-content:start;pointer-events:none;position:absolute;transition:opacity 125ms;width:1.2rem}[dir=rtl] .md-typeset .tabbed-control{transform:rotate(180deg)}.md-typeset .tabbed-control[hidden]{opacity:0}.md-typeset .tabbed-control--next{background:linear-gradient(to left,var(--md-default-bg-color) 60%,transparent);justify-content:end;right:0}.md-typeset .tabbed-control--next .tabbed-button:after{-webkit-mask-image:var(--md-tabbed-icon--next);mask-image:var(--md-tabbed-icon--next)}@media screen and (max-width:44.9375em){[dir=ltr] .md-content__inner>.tabbed-set .tabbed-labels{padding-left:.8rem}[dir=rtl] .md-content__inner>.tabbed-set .tabbed-labels{padding-right:.8rem}.md-content__inner>.tabbed-set .tabbed-labels{margin:0 -.8rem;max-width:100vw;scroll-padding-inline-start:.8rem}[dir=ltr] .md-content__inner>.tabbed-set 
.tabbed-labels:after{padding-right:.8rem}[dir=rtl] .md-content__inner>.tabbed-set .tabbed-labels:after{padding-left:.8rem}.md-content__inner>.tabbed-set .tabbed-labels:after{content:""}[dir=ltr] .md-content__inner>.tabbed-set .tabbed-labels~.tabbed-control--prev{margin-left:-.8rem}[dir=rtl] .md-content__inner>.tabbed-set .tabbed-labels~.tabbed-control--prev{margin-right:-.8rem}[dir=ltr] .md-content__inner>.tabbed-set .tabbed-labels~.tabbed-control--prev{padding-left:.8rem}[dir=rtl] .md-content__inner>.tabbed-set .tabbed-labels~.tabbed-control--prev{padding-right:.8rem}.md-content__inner>.tabbed-set .tabbed-labels~.tabbed-control--prev{width:2rem}[dir=ltr] .md-content__inner>.tabbed-set .tabbed-labels~.tabbed-control--next{margin-right:-.8rem}[dir=rtl] .md-content__inner>.tabbed-set .tabbed-labels~.tabbed-control--next{margin-left:-.8rem}[dir=ltr] .md-content__inner>.tabbed-set .tabbed-labels~.tabbed-control--next{padding-right:.8rem}[dir=rtl] .md-content__inner>.tabbed-set .tabbed-labels~.tabbed-control--next{padding-left:.8rem}.md-content__inner>.tabbed-set .tabbed-labels~.tabbed-control--next{width:2rem}}@media screen{.md-typeset .tabbed-set>input:first-child:checked~.tabbed-labels>:first-child,.md-typeset .tabbed-set>input:nth-child(10):checked~.tabbed-labels>:nth-child(10),.md-typeset .tabbed-set>input:nth-child(11):checked~.tabbed-labels>:nth-child(11),.md-typeset .tabbed-set>input:nth-child(12):checked~.tabbed-labels>:nth-child(12),.md-typeset .tabbed-set>input:nth-child(13):checked~.tabbed-labels>:nth-child(13),.md-typeset .tabbed-set>input:nth-child(14):checked~.tabbed-labels>:nth-child(14),.md-typeset .tabbed-set>input:nth-child(15):checked~.tabbed-labels>:nth-child(15),.md-typeset .tabbed-set>input:nth-child(16):checked~.tabbed-labels>:nth-child(16),.md-typeset .tabbed-set>input:nth-child(17):checked~.tabbed-labels>:nth-child(17),.md-typeset .tabbed-set>input:nth-child(18):checked~.tabbed-labels>:nth-child(18),.md-typeset 
.tabbed-set>input:nth-child(19):checked~.tabbed-labels>:nth-child(19),.md-typeset .tabbed-set>input:nth-child(2):checked~.tabbed-labels>:nth-child(2),.md-typeset .tabbed-set>input:nth-child(20):checked~.tabbed-labels>:nth-child(20),.md-typeset .tabbed-set>input:nth-child(3):checked~.tabbed-labels>:nth-child(3),.md-typeset .tabbed-set>input:nth-child(4):checked~.tabbed-labels>:nth-child(4),.md-typeset .tabbed-set>input:nth-child(5):checked~.tabbed-labels>:nth-child(5),.md-typeset .tabbed-set>input:nth-child(6):checked~.tabbed-labels>:nth-child(6),.md-typeset .tabbed-set>input:nth-child(7):checked~.tabbed-labels>:nth-child(7),.md-typeset .tabbed-set>input:nth-child(8):checked~.tabbed-labels>:nth-child(8),.md-typeset .tabbed-set>input:nth-child(9):checked~.tabbed-labels>:nth-child(9){color:var(--md-accent-fg-color)}.md-typeset .no-js .tabbed-set>input:first-child:checked~.tabbed-labels>:first-child,.md-typeset .no-js .tabbed-set>input:nth-child(10):checked~.tabbed-labels>:nth-child(10),.md-typeset .no-js .tabbed-set>input:nth-child(11):checked~.tabbed-labels>:nth-child(11),.md-typeset .no-js .tabbed-set>input:nth-child(12):checked~.tabbed-labels>:nth-child(12),.md-typeset .no-js .tabbed-set>input:nth-child(13):checked~.tabbed-labels>:nth-child(13),.md-typeset .no-js .tabbed-set>input:nth-child(14):checked~.tabbed-labels>:nth-child(14),.md-typeset .no-js .tabbed-set>input:nth-child(15):checked~.tabbed-labels>:nth-child(15),.md-typeset .no-js .tabbed-set>input:nth-child(16):checked~.tabbed-labels>:nth-child(16),.md-typeset .no-js .tabbed-set>input:nth-child(17):checked~.tabbed-labels>:nth-child(17),.md-typeset .no-js .tabbed-set>input:nth-child(18):checked~.tabbed-labels>:nth-child(18),.md-typeset .no-js .tabbed-set>input:nth-child(19):checked~.tabbed-labels>:nth-child(19),.md-typeset .no-js .tabbed-set>input:nth-child(2):checked~.tabbed-labels>:nth-child(2),.md-typeset .no-js .tabbed-set>input:nth-child(20):checked~.tabbed-labels>:nth-child(20),.md-typeset .no-js 
.tabbed-set>input:nth-child(3):checked~.tabbed-labels>:nth-child(3),.md-typeset .no-js .tabbed-set>input:nth-child(4):checked~.tabbed-labels>:nth-child(4),.md-typeset .no-js .tabbed-set>input:nth-child(5):checked~.tabbed-labels>:nth-child(5),.md-typeset .no-js .tabbed-set>input:nth-child(6):checked~.tabbed-labels>:nth-child(6),.md-typeset .no-js .tabbed-set>input:nth-child(7):checked~.tabbed-labels>:nth-child(7),.md-typeset .no-js .tabbed-set>input:nth-child(8):checked~.tabbed-labels>:nth-child(8),.md-typeset .no-js .tabbed-set>input:nth-child(9):checked~.tabbed-labels>:nth-child(9),.no-js .md-typeset .tabbed-set>input:first-child:checked~.tabbed-labels>:first-child,.no-js .md-typeset .tabbed-set>input:nth-child(10):checked~.tabbed-labels>:nth-child(10),.no-js .md-typeset .tabbed-set>input:nth-child(11):checked~.tabbed-labels>:nth-child(11),.no-js .md-typeset .tabbed-set>input:nth-child(12):checked~.tabbed-labels>:nth-child(12),.no-js .md-typeset .tabbed-set>input:nth-child(13):checked~.tabbed-labels>:nth-child(13),.no-js .md-typeset .tabbed-set>input:nth-child(14):checked~.tabbed-labels>:nth-child(14),.no-js .md-typeset .tabbed-set>input:nth-child(15):checked~.tabbed-labels>:nth-child(15),.no-js .md-typeset .tabbed-set>input:nth-child(16):checked~.tabbed-labels>:nth-child(16),.no-js .md-typeset .tabbed-set>input:nth-child(17):checked~.tabbed-labels>:nth-child(17),.no-js .md-typeset .tabbed-set>input:nth-child(18):checked~.tabbed-labels>:nth-child(18),.no-js .md-typeset .tabbed-set>input:nth-child(19):checked~.tabbed-labels>:nth-child(19),.no-js .md-typeset .tabbed-set>input:nth-child(2):checked~.tabbed-labels>:nth-child(2),.no-js .md-typeset .tabbed-set>input:nth-child(20):checked~.tabbed-labels>:nth-child(20),.no-js .md-typeset .tabbed-set>input:nth-child(3):checked~.tabbed-labels>:nth-child(3),.no-js .md-typeset .tabbed-set>input:nth-child(4):checked~.tabbed-labels>:nth-child(4),.no-js .md-typeset 
.tabbed-set>input:nth-child(5):checked~.tabbed-labels>:nth-child(5),.no-js .md-typeset .tabbed-set>input:nth-child(6):checked~.tabbed-labels>:nth-child(6),.no-js .md-typeset .tabbed-set>input:nth-child(7):checked~.tabbed-labels>:nth-child(7),.no-js .md-typeset .tabbed-set>input:nth-child(8):checked~.tabbed-labels>:nth-child(8),.no-js .md-typeset .tabbed-set>input:nth-child(9):checked~.tabbed-labels>:nth-child(9){border-color:var(--md-accent-fg-color)}}.md-typeset .tabbed-set>input:first-child.focus-visible~.tabbed-labels>:first-child,.md-typeset .tabbed-set>input:nth-child(10).focus-visible~.tabbed-labels>:nth-child(10),.md-typeset .tabbed-set>input:nth-child(11).focus-visible~.tabbed-labels>:nth-child(11),.md-typeset .tabbed-set>input:nth-child(12).focus-visible~.tabbed-labels>:nth-child(12),.md-typeset .tabbed-set>input:nth-child(13).focus-visible~.tabbed-labels>:nth-child(13),.md-typeset .tabbed-set>input:nth-child(14).focus-visible~.tabbed-labels>:nth-child(14),.md-typeset .tabbed-set>input:nth-child(15).focus-visible~.tabbed-labels>:nth-child(15),.md-typeset .tabbed-set>input:nth-child(16).focus-visible~.tabbed-labels>:nth-child(16),.md-typeset .tabbed-set>input:nth-child(17).focus-visible~.tabbed-labels>:nth-child(17),.md-typeset .tabbed-set>input:nth-child(18).focus-visible~.tabbed-labels>:nth-child(18),.md-typeset .tabbed-set>input:nth-child(19).focus-visible~.tabbed-labels>:nth-child(19),.md-typeset .tabbed-set>input:nth-child(2).focus-visible~.tabbed-labels>:nth-child(2),.md-typeset .tabbed-set>input:nth-child(20).focus-visible~.tabbed-labels>:nth-child(20),.md-typeset .tabbed-set>input:nth-child(3).focus-visible~.tabbed-labels>:nth-child(3),.md-typeset .tabbed-set>input:nth-child(4).focus-visible~.tabbed-labels>:nth-child(4),.md-typeset .tabbed-set>input:nth-child(5).focus-visible~.tabbed-labels>:nth-child(5),.md-typeset .tabbed-set>input:nth-child(6).focus-visible~.tabbed-labels>:nth-child(6),.md-typeset 
.tabbed-set>input:nth-child(7).focus-visible~.tabbed-labels>:nth-child(7),.md-typeset .tabbed-set>input:nth-child(8).focus-visible~.tabbed-labels>:nth-child(8),.md-typeset .tabbed-set>input:nth-child(9).focus-visible~.tabbed-labels>:nth-child(9){background-color:var(--md-accent-fg-color--transparent)}.md-typeset .tabbed-set>input:first-child:checked~.tabbed-content>:first-child,.md-typeset .tabbed-set>input:nth-child(10):checked~.tabbed-content>:nth-child(10),.md-typeset .tabbed-set>input:nth-child(11):checked~.tabbed-content>:nth-child(11),.md-typeset .tabbed-set>input:nth-child(12):checked~.tabbed-content>:nth-child(12),.md-typeset .tabbed-set>input:nth-child(13):checked~.tabbed-content>:nth-child(13),.md-typeset .tabbed-set>input:nth-child(14):checked~.tabbed-content>:nth-child(14),.md-typeset .tabbed-set>input:nth-child(15):checked~.tabbed-content>:nth-child(15),.md-typeset .tabbed-set>input:nth-child(16):checked~.tabbed-content>:nth-child(16),.md-typeset .tabbed-set>input:nth-child(17):checked~.tabbed-content>:nth-child(17),.md-typeset .tabbed-set>input:nth-child(18):checked~.tabbed-content>:nth-child(18),.md-typeset .tabbed-set>input:nth-child(19):checked~.tabbed-content>:nth-child(19),.md-typeset .tabbed-set>input:nth-child(2):checked~.tabbed-content>:nth-child(2),.md-typeset .tabbed-set>input:nth-child(20):checked~.tabbed-content>:nth-child(20),.md-typeset .tabbed-set>input:nth-child(3):checked~.tabbed-content>:nth-child(3),.md-typeset .tabbed-set>input:nth-child(4):checked~.tabbed-content>:nth-child(4),.md-typeset .tabbed-set>input:nth-child(5):checked~.tabbed-content>:nth-child(5),.md-typeset .tabbed-set>input:nth-child(6):checked~.tabbed-content>:nth-child(6),.md-typeset .tabbed-set>input:nth-child(7):checked~.tabbed-content>:nth-child(7),.md-typeset .tabbed-set>input:nth-child(8):checked~.tabbed-content>:nth-child(8),.md-typeset 
.tabbed-set>input:nth-child(9):checked~.tabbed-content>:nth-child(9){display:block}:root{--md-tasklist-icon:url('data:image/svg+xml;charset=utf-8,');--md-tasklist-icon--checked:url('data:image/svg+xml;charset=utf-8,')}.md-typeset .task-list-item{list-style-type:none;position:relative}[dir=ltr] .md-typeset .task-list-item [type=checkbox]{left:-2em}[dir=rtl] .md-typeset .task-list-item [type=checkbox]{right:-2em}.md-typeset .task-list-item [type=checkbox]{position:absolute;top:.45em}.md-typeset .task-list-control [type=checkbox]{opacity:0;z-index:-1}[dir=ltr] .md-typeset .task-list-indicator:before{left:-1.5em}[dir=rtl] .md-typeset .task-list-indicator:before{right:-1.5em}.md-typeset .task-list-indicator:before{background-color:var(--md-default-fg-color--lightest);content:"";height:1.25em;-webkit-mask-image:var(--md-tasklist-icon);mask-image:var(--md-tasklist-icon);-webkit-mask-position:center;mask-position:center;-webkit-mask-repeat:no-repeat;mask-repeat:no-repeat;-webkit-mask-size:contain;mask-size:contain;position:absolute;top:.15em;width:1.25em}.md-typeset [type=checkbox]:checked+.task-list-indicator:before{background-color:#00e676;-webkit-mask-image:var(--md-tasklist-icon--checked);mask-image:var(--md-tasklist-icon--checked)}:root>*{--md-mermaid-font-family:var(--md-text-font-family),sans-serif;--md-mermaid-edge-color:var(--md-code-fg-color);--md-mermaid-node-bg-color:var(--md-accent-fg-color--transparent);--md-mermaid-node-fg-color:var(--md-accent-fg-color);--md-mermaid-label-bg-color:var(--md-default-bg-color);--md-mermaid-label-fg-color:var(--md-code-fg-color)}.mermaid{line-height:normal;margin:1em 0}@media screen and (min-width:45em){[dir=ltr] .md-typeset .inline{float:left}[dir=rtl] .md-typeset .inline{float:right}[dir=ltr] .md-typeset .inline{margin-right:.8rem}[dir=rtl] .md-typeset .inline{margin-left:.8rem}.md-typeset .inline{margin-bottom:.8rem;margin-top:0;width:11.7rem}[dir=ltr] .md-typeset .inline.end{float:right}[dir=rtl] .md-typeset 
.inline.end{float:left}[dir=ltr] .md-typeset .inline.end{margin-left:.8rem;margin-right:0}[dir=rtl] .md-typeset .inline.end{margin-left:0;margin-right:.8rem}} \ No newline at end of file diff --git a/assets/stylesheets/main.7a952b86.min.css.map b/assets/stylesheets/main.7a952b86.min.css.map new file mode 100644 index 000000000..889400c86 --- /dev/null +++ b/assets/stylesheets/main.7a952b86.min.css.map @@ -0,0 +1 @@ +{"version":3,"sources":["src/assets/stylesheets/main/extensions/pymdownx/_keys.scss","../../../src/assets/stylesheets/main.scss","src/assets/stylesheets/main/_resets.scss","src/assets/stylesheets/main/_colors.scss","src/assets/stylesheets/main/_icons.scss","src/assets/stylesheets/main/_typeset.scss","src/assets/stylesheets/utilities/_break.scss","src/assets/stylesheets/main/layout/_banner.scss","src/assets/stylesheets/main/layout/_base.scss","src/assets/stylesheets/main/layout/_clipboard.scss","src/assets/stylesheets/main/layout/_consent.scss","src/assets/stylesheets/main/layout/_content.scss","src/assets/stylesheets/main/layout/_dialog.scss","src/assets/stylesheets/main/layout/_feedback.scss","src/assets/stylesheets/main/layout/_footer.scss","src/assets/stylesheets/main/layout/_form.scss","src/assets/stylesheets/main/layout/_header.scss","src/assets/stylesheets/main/layout/_nav.scss","src/assets/stylesheets/main/layout/_search.scss","src/assets/stylesheets/main/layout/_select.scss","src/assets/stylesheets/main/layout/_sidebar.scss","src/assets/stylesheets/main/layout/_source.scss","src/assets/stylesheets/main/layout/_tabs.scss","src/assets/stylesheets/main/layout/_tag.scss","src/assets/stylesheets/main/layout/_tooltip.scss","src/assets/stylesheets/main/layout/_top.scss","src/assets/stylesheets/main/layout/_version.scss","src/assets/stylesheets/main/extensions/markdown/_admonition.scss","node_modules/material-design-color/material-color.scss","src/assets/stylesheets/main/extensions/markdown/_footnotes.scss","src/assets/stylesheets/main/extensions/markdow
n/_toc.scss","src/assets/stylesheets/main/extensions/pymdownx/_arithmatex.scss","src/assets/stylesheets/main/extensions/pymdownx/_critic.scss","src/assets/stylesheets/main/extensions/pymdownx/_details.scss","src/assets/stylesheets/main/extensions/pymdownx/_emoji.scss","src/assets/stylesheets/main/extensions/pymdownx/_highlight.scss","src/assets/stylesheets/main/extensions/pymdownx/_tabbed.scss","src/assets/stylesheets/main/extensions/pymdownx/_tasklist.scss","src/assets/stylesheets/main/integrations/_mermaid.scss","src/assets/stylesheets/main/_modifiers.scss"],"names":[],"mappings":"AAgGM,gBCs6GN,CC1+GA,KAEE,6BAAA,CAAA,0BAAA,CAAA,yBAAA,CAAA,qBAAA,CADA,qBDzBF,CC8BA,iBAGE,kBD3BF,CC8BE,gCANF,iBAOI,yBDzBF,CACF,CC6BA,KACE,QD1BF,CC8BA,qBAIE,uCD3BF,CC+BA,EACE,aAAA,CACA,oBD5BF,CCgCA,GAME,QAAA,CAJA,kBAAA,CADA,aAAA,CAEA,aAAA,CAEA,gBAAA,CADA,SD3BF,CCiCA,MACE,aD9BF,CCkCA,QAEE,eD/BF,CCmCA,IACE,iBDhCF,CCoCA,MACE,uBAAA,CACA,gBDjCF,CCqCA,MAEE,eAAA,CACA,kBDlCF,CCsCA,OAKE,sBAAA,CACA,QAAA,CAFA,mBAAA,CADA,iBAAA,CAFA,QAAA,CACA,SD/BF,CCuCA,MACE,QAAA,CACA,YDpCF,CErCA,qCAGE,qCAAA,CACA,4CAAA,CACA,8CAAA,CACA,+CAAA,CACA,0BAAA,CACA,+CAAA,CACA,iDAAA,CACA,mDAAA,CAGA,6BAAA,CACA,oCAAA,CACA,mCAAA,CACA,0BAAA,CACA,+CAAA,CAGA,4BAAA,CACA,qDAAA,CACA,yBAAA,CACA,8CAAA,CAGA,0BAAA,CACA,0BAAA,CAGA,qCAAA,CACA,iCAAA,CACA,kCAAA,CACA,mCAAA,CACA,mCAAA,CACA,kCAAA,CACA,iCAAA,CACA,+CAAA,CACA,6DAAA,CACA,gEAAA,CACA,4DAAA,CACA,4DAAA,CACA,6DAAA,CAGA,6CAAA,CAGA,+CAAA,CAGA,0CAAA,CAGA,0CAAA,CACA,2CAAA,CAGA,8BAAA,CACA,kCAAA,CACA,qCAAA,CAGA,wCAAA,CAGA,mDAAA,CACA,mDAAA,CAGA,yBAAA,CACA,8CAAA,CACA,gDAAA,CACA,oCAAA,CACA,0CAAA,CAGA,yEAAA,CAKA,yEAAA,CAKA,yEFUF,CG9GE,aAIE,iBAAA,CAHA,aAAA,CAEA,aAAA,CADA,YHmHJ,CIxHA,KACE,kCAAA,CACA,iCAAA,CAGA,uGAAA,CAKA,mFJyHF,CInHA,WAGE,mCAAA,CACA,sCJsHF,CIlHA,wBANE,6BJgIF,CI1HA,aAIE,4BAAA,CACA,sCJqHF,CI7GA,MACE,0NAAA,CACA,mNAAA,CACA,oNJgHF,CIzGA,YAGE,gCAAA,CAAA,kBAAA,CAFA,eAAA,CACA,eJ6GF,CIxGE,aAPF,YAQI,gBJ2GF,CACF,CIxGE,uGAME,iBAAA,CAAA,cJ0GJ,CItGE,eAEE,uCAAA,CAEA,aAAA,CACA,eAAA,CAJA,iBJ6GJ,CIpGE,
8BAPE,eAAA,CAGA,qBJ+GJ,CI3GE,eAGE,kBAAA,CACA,eAAA,CAHA,oBJ0GJ,CIlGE,eAGE,gBAAA,CADA,eAAA,CAGA,qBAAA,CADA,eAAA,CAHA,mBJwGJ,CIhGE,kBACE,eJkGJ,CI9FE,eAEE,eAAA,CACA,qBAAA,CAFA,YJkGJ,CI5FE,8BAGE,uCAAA,CAEA,cAAA,CADA,eAAA,CAEA,qBAAA,CAJA,eJkGJ,CI1FE,eACE,wBJ4FJ,CIxFE,eAGE,+DAAA,CAFA,iBAAA,CACA,cJ2FJ,CItFE,cACE,+BAAA,CACA,qBJwFJ,CIrFI,mCAEE,sBJsFN,CIlFI,wCAEE,+BJmFN,CIhFM,kDACE,uDJkFR,CI7EI,mBACE,kBAAA,CACA,iCJ+EN,CI3EI,4BACE,uCAAA,CACA,oBJ6EN,CIxEE,iDAGE,6BAAA,CACA,aAAA,CACA,2BJ0EJ,CIvEI,aARF,iDASI,oBJ4EJ,CACF,CIxEE,iBAIE,wCAAA,CACA,mBAAA,CACA,kCAAA,CAAA,0BAAA,CAJA,eAAA,CADA,uBAAA,CAEA,qBJ6EJ,CIvEI,qCAEE,uCAAA,CADA,YJ0EN,CIpEE,gBAEE,iBAAA,CACA,eAAA,CAFA,iBJwEJ,CInEI,qBAQE,kCAAA,CAAA,0BAAA,CADA,eAAA,CANA,aAAA,CACA,QAAA,CAIA,uCAAA,CAFA,aAAA,CADA,oCAAA,CAQA,+DAAA,CADA,oBAAA,CADA,iBAAA,CAJA,iBJ2EN,CIlEM,2BACE,qDJoER,CIhEM,wCAEE,YAAA,CADA,WJmER,CI9DM,8CACE,oDJgER,CI7DQ,oDACE,0CJ+DV,CIxDE,gBAOE,4CAAA,CACA,mBAAA,CACA,mKACE,CAPF,gCAAA,CAFA,oBAAA,CAGA,eAAA,CAFA,uBAAA,CAGA,uBAAA,CACA,qBJ6DJ,CInDE,iBAGE,6CAAA,CACA,kCAAA,CAAA,0BAAA,CAHA,aAAA,CACA,qBJuDJ,CIjDE,iBAEE,6DAAA,CACA,WAAA,CAFA,oBJqDJ,CIhDI,oBANF,iBAOI,iBJmDJ,CIhDI,yDAWE,2CAAA,CACA,mBAAA,CACA,8BAAA,CAJA,gCAAA,CAKA,mBAAA,CAXA,oBAAA,CAOA,eAAA,CAHA,cAAA,CADA,aAAA,CADA,6BAAA,CAAA,qBAAA,CAGA,mBAAA,CAPA,iBAAA,CAGA,UJ4DN,CIhEI,sDAWE,2CAAA,CACA,mBAAA,CACA,8BAAA,CAJA,gCAAA,CAKA,mBAAA,CAXA,oBAAA,CAOA,eAAA,CAHA,cAAA,CADA,aAAA,CADA,0BAAA,CAAA,qBAAA,CAGA,mBAAA,CAPA,iBAAA,CAGA,UJ4DN,CIhEI,mEAEE,MJ8DN,CIhEI,gEAEE,MJ8DN,CIhEI,0DAEE,MJ8DN,CIhEI,mEAEE,OJ8DN,CIhEI,gEAEE,OJ8DN,CIhEI,0DAEE,OJ8DN,CIhEI,gDAWE,2CAAA,CACA,mBAAA,CACA,8BAAA,CAJA,gCAAA,CAKA,mBAAA,CAXA,oBAAA,CAOA,eAAA,CAHA,cAAA,CADA,aAAA,CADA,6BAAA,CAAA,0BAAA,CAAA,qBAAA,CAGA,mBAAA,CAPA,iBAAA,CAGA,UJ4DN,CACF,CI7CE,kBACE,WJ+CJ,CI3CE,oDAEE,qBJ6CJ,CI/CE,oDAEE,sBJ6CJ,CIzCE,iCACE,kBJ8CJ,CI/CE,iCACE,mBJ8CJ,CI/CE,iCAIE,2DJ2CJ,CI/CE,iCAIE,4DJ2CJ,CI/CE,uBAGE,uCAAA,CADA,aAAA,CAAA,cJ6CJ,CIvCE,eACE,oBJyCJ,CIrCE,kDAEE,kBJwCJ,CI1CE,kDAEE,mBJwCJ,CI1CE,8BAGE,SJuCJ,CIpCI,0DACE,iBJuCN,CInCI,oCACE,2BJsCN,CI
nCM,0CACE,2BJsCR,CIjCI,wDAEE,kBJoCN,CItCI,wDAEE,mBJoCN,CItCI,oCACE,kBJqCN,CIjCM,kGAEE,aJqCR,CIjCM,0DACE,eJoCR,CIhCM,4EACE,kBAAA,CAAA,eJoCR,CIrCM,sEACE,kBAAA,CAAA,eJoCR,CIrCM,gGAEE,kBJmCR,CIrCM,0FAEE,kBJmCR,CIrCM,8EAEE,kBJmCR,CIrCM,gGAEE,mBJmCR,CIrCM,0FAEE,mBJmCR,CIrCM,8EAEE,mBJmCR,CIrCM,0DACE,kBAAA,CAAA,eJoCR,CI7BE,yBAEE,mBJ+BJ,CIjCE,yBAEE,oBJ+BJ,CIjCE,eACE,mBAAA,CAAA,cJgCJ,CI3BE,kDAIE,WAAA,CADA,cJ8BJ,CItBI,4BAEE,oBJwBN,CIpBI,6BAEE,oBJsBN,CIlBI,kCACE,YJoBN,CIhBI,8EAEE,YJiBN,CIZE,mBACE,iBAAA,CAGA,eAAA,CADA,cAAA,CAEA,iBAAA,CAHA,yBAAA,CAAA,sBAAA,CAAA,iBJiBJ,CIXI,uBACE,aJaN,CIRE,uBAGE,iBAAA,CADA,eAAA,CADA,eJYJ,CINE,mBACE,cJQJ,CIJE,+BAKE,2CAAA,CACA,iDAAA,CACA,mBAAA,CANA,oBAAA,CAGA,gBAAA,CAFA,cAAA,CACA,aAAA,CAKA,iBJMJ,CIHI,aAXF,+BAYI,aJMJ,CACF,CIDI,iCACE,gBJGN,CIIM,gEACE,YJFR,CICM,6DACE,YJFR,CICM,uDACE,YJFR,CIMM,+DACE,eJJR,CIGM,4DACE,eJJR,CIGM,sDACE,eJJR,CISI,gEACE,eJPN,CIMI,6DACE,eJPN,CIMI,uDACE,eJPN,CIUM,0EACE,gBJRR,CIOM,uEACE,gBJRR,CIOM,iEACE,gBJRR,CIaI,kCAGE,eAAA,CAFA,cAAA,CACA,sBAAA,CAEA,kBJXN,CIeI,kCAGE,qDAAA,CAFA,sBAAA,CACA,kBJZN,CIiBI,wCACE,iCJfN,CIkBM,8CACE,iCAAA,CACA,sDJhBR,CIqBI,iCACE,iBJnBN,CIwBE,wCACE,cJtBJ,CIyBI,wDAIE,gBJjBN,CIaI,wDAIE,iBJjBN,CIaI,8CAUE,UAAA,CATA,oBAAA,CAEA,YAAA,CAGA,oDAAA,CAAA,4CAAA,CACA,6BAAA,CAAA,qBAAA,CACA,yBAAA,CAAA,iBAAA,CACA,iCAAA,CAJA,0BAAA,CAHA,WJfN,CI2BI,oDACE,oDJzBN,CI6BI,mEACE,kDAAA,CACA,yDAAA,CAAA,iDJ3BN,CI+BI,oEACE,kDAAA,CACA,0DAAA,CAAA,kDJ7BN,CIkCE,wBACE,iBAAA,CACA,eAAA,CACA,iBJhCJ,CIoCE,mBACE,oBAAA,CACA,kBAAA,CACA,eJlCJ,CIqCI,aANF,mBAOI,aJlCJ,CACF,CIqCI,8BACE,aAAA,CAEA,QAAA,CACA,eAAA,CAFA,UJjCN,CK1VI,wCD0YF,uBACE,iBJ5CF,CI+CE,4BACE,eJ7CJ,CACF,CM5hBA,WAGE,0CAAA,CADA,+BAAA,CADA,aNgiBF,CM3hBE,aANF,WAOI,YN8hBF,CACF,CM3hBE,oBAEE,uCAAA,CADA,gCN8hBJ,CMzhBE,kBAGE,eAAA,CAFA,iBAAA,CACA,eN4hBJ,CMvhBE,6BACE,WN4hBJ,CM7hBE,6BACE,UN4hBJ,CM7hBE,mBAEE,aAAA,CACA,cAAA,CACA,uBNyhBJ,CMthBI,yBACE,UNwhBN,COxjBA,KASE,cAAA,CARA,WAAA,CACA,iBP4jBF,CKxZI,oCEtKJ,KAaI,gBPqjBF,CACF,CK7ZI,oCEtKJ,KAkBI,cPqjBF,CACF,COhjBA,KASE,2CAAA,CAPA,YAAA,CACA,qBAAA,CAKA,
eAAA,CAHA,eAAA,CAJA,iBAAA,CAGA,UPsjBF,CO9iBE,aAZF,KAaI,aPijBF,CACF,CK9ZI,wCEhJF,yBAII,cP8iBJ,CACF,COriBA,SAEE,gBAAA,CAAA,iBAAA,CADA,ePyiBF,COpiBA,cACE,YAAA,CACA,qBAAA,CACA,WPuiBF,COpiBE,aANF,cAOI,aPuiBF,CACF,COniBA,SACE,WPsiBF,COniBE,gBACE,YAAA,CACA,WAAA,CACA,iBPqiBJ,COhiBA,aACE,eAAA,CAEA,sBAAA,CADA,kBPoiBF,CO1hBA,WACE,YP6hBF,COxhBA,WAGE,QAAA,CACA,SAAA,CAHA,iBAAA,CACA,OP6hBF,COxhBE,uCACE,aP0hBJ,COthBE,+BAEE,uCAAA,CADA,kBPyhBJ,COnhBA,SASE,2CAAA,CACA,mBAAA,CAHA,gCAAA,CACA,gBAAA,CAHA,YAAA,CAQA,SAAA,CAFA,uCAAA,CALA,mBAAA,CALA,cAAA,CAWA,2BAAA,CARA,UP6hBF,COjhBE,eAGE,SAAA,CADA,uBAAA,CAEA,oEACE,CAJF,UPshBJ,COxgBA,MACE,WP2gBF,CQrqBA,MACE,+PRuqBF,CQjqBA,cAQE,mBAAA,CADA,0CAAA,CAIA,cAAA,CALA,YAAA,CAGA,uCAAA,CACA,oBAAA,CATA,iBAAA,CAEA,UAAA,CADA,QAAA,CAUA,qBAAA,CAPA,WAAA,CADA,SR4qBF,CQjqBE,aAfF,cAgBI,YRoqBF,CACF,CQjqBE,kCAEE,uCAAA,CADA,YRoqBJ,CQ/pBE,qBACE,uCRiqBJ,CQ7pBE,yCACE,+BR+pBJ,CQhqBE,sCACE,+BR+pBJ,CQhqBE,gCACE,+BR+pBJ,CQ1pBE,oBAKE,6BAAA,CAKA,UAAA,CATA,aAAA,CAEA,cAAA,CACA,aAAA,CAEA,2CAAA,CAAA,mCAAA,CACA,4BAAA,CAAA,oBAAA,CACA,6BAAA,CAAA,qBAAA,CACA,yBAAA,CAAA,iBAAA,CAPA,aRoqBJ,CQxpBE,sBACE,cR0pBJ,CQvpBI,2BACE,2CRypBN,CQnpBI,sDAEE,uDAAA,CADA,+BRspBN,CQvpBI,mDAEE,uDAAA,CADA,+BRspBN,CQvpBI,6CAEE,uDAAA,CADA,+BRspBN,CS5tBA,2BACE,GAEE,SAAA,CADA,0BTguBF,CS5tBA,GAEE,SAAA,CADA,uBT+tBF,CACF,CSvuBA,mBACE,GAEE,SAAA,CADA,0BTguBF,CS5tBA,GAEE,SAAA,CADA,uBT+tBF,CACF,CS1tBA,2BACE,GACE,ST4tBF,CSztBA,GACE,ST2tBF,CACF,CSluBA,mBACE,GACE,ST4tBF,CSztBA,GACE,ST2tBF,CACF,CShtBE,qBASE,mCAAA,CAAA,2BAAA,CADA,mCAAA,CAAA,2BAAA,CAFA,gCAAA,CADA,WAAA,CAEA,SAAA,CANA,cAAA,CACA,KAAA,CAEA,UAAA,CADA,STwtBJ,CS9sBE,mBAcE,2DAAA,CAAA,mDAAA,CANA,2CAAA,CACA,QAAA,CACA,mBAAA,CARA,QAAA,CASA,gEACE,CAPF,eAAA,CAEA,aAAA,CADA,SAAA,CALA,cAAA,CAGA,UAAA,CADA,STytBJ,CS1sBE,kBACE,aT4sBJ,CSxsBE,sBACE,YAAA,CACA,YT0sBJ,CSvsBI,oCACE,aTysBN,CSpsBE,sBACE,mBTssBJ,CSnsBI,6CACE,cTqsBN,CK/lBI,wCIvGA,6CAKI,aAAA,CAEA,gBAAA,CACA,iBAAA,CAFA,UTusBN,CACF,CShsBE,kBACE,cTksBJ,CUnyBA,YACE,WAAA,CAIA,WVmyBF,CUhyBE,mBACE,qBAAA,CACA,iBVkyBJ,CKtoBI,sCKtJ
E,4EACE,kBV+xBN,CU3xBI,0JACE,mBV6xBN,CU9xBI,8EACE,kBV6xBN,CACF,CUxxBI,0BAGE,UAAA,CAFA,aAAA,CACA,YV2xBN,CUtxBI,+BACE,eVwxBN,CUlxBE,8BACE,WVuxBJ,CUxxBE,8BACE,UVuxBJ,CUxxBE,8BAGE,iBVqxBJ,CUxxBE,8BAGE,kBVqxBJ,CUxxBE,oBAEE,cAAA,CAEA,SVoxBJ,CUjxBI,aAPF,oBAQI,YVoxBJ,CACF,CUjxBI,gCACE,yCVmxBN,CU/wBI,wBACE,cAAA,CACA,kBVixBN,CU9wBM,kCACE,oBVgxBR,CWj1BA,qBAEE,WX+1BF,CWj2BA,qBAEE,UX+1BF,CWj2BA,WAOE,2CAAA,CACA,mBAAA,CALA,YAAA,CAMA,8BAAA,CAJA,iBAAA,CAMA,SAAA,CALA,mBAAA,CASA,mBAAA,CAdA,cAAA,CASA,0BAAA,CAEA,wCACE,CATF,SX61BF,CW/0BE,aAlBF,WAmBI,YXk1BF,CACF,CW/0BE,mBAEE,SAAA,CAIA,mBAAA,CALA,uBAAA,CAEA,kEXk1BJ,CW30BE,kBACE,gCAAA,CACA,eX60BJ,CYh3BA,aACE,gBAAA,CACA,iBZm3BF,CYh3BE,sBAGE,WAAA,CAFA,QAAA,CACA,SZm3BJ,CY92BE,oBAEE,eAAA,CADA,eZi3BJ,CY52BE,oBACE,iBZ82BJ,CY12BE,mBAIE,sBAAA,CAFA,YAAA,CACA,cAAA,CAEA,sBAAA,CAJA,iBZg3BJ,CYz2BI,iDACE,yCZ22BN,CYv2BI,6BACE,iBZy2BN,CYp2BE,mBAGE,uCAAA,CACA,cAAA,CAHA,aAAA,CACA,cAAA,CAGA,sBZs2BJ,CYn2BI,gDACE,+BZq2BN,CYj2BI,4BACE,0CAAA,CACA,mBZm2BN,CY91BE,mBAGE,SAAA,CAFA,iBAAA,CACA,2BAAA,CAEA,8DZg2BJ,CY31BI,qBAEE,aAAA,CADA,eZ81BN,CYz1BI,6BAEE,SAAA,CADA,uBZ41BN,Ca16BA,WAEE,0CAAA,CADA,+Bb86BF,Ca16BE,aALF,WAMI,Yb66BF,CACF,Ca16BE,kBACE,6BAAA,CAEA,aAAA,CADA,ab66BJ,Caz6BI,gCACE,Yb26BN,Cat6BE,iBACE,YAAA,CAKA,cAAA,CAIA,uCAAA,CADA,eAAA,CADA,oBAAA,CADA,kBAAA,CAIA,uBbo6BJ,Caj6BI,4CACE,Ubm6BN,Cap6BI,yCACE,Ubm6BN,Cap6BI,mCACE,Ubm6BN,Ca/5BI,+BACE,oBbi6BN,CKlxBI,wCQrII,yCACE,Yb05BR,CACF,Car5BI,iCACE,gBbw5BN,Caz5BI,iCACE,iBbw5BN,Caz5BI,uBAEE,gBbu5BN,Cap5BM,iCACE,ebs5BR,Cah5BE,kBAEE,WAAA,CAGA,eAAA,CACA,kBAAA,CAHA,6BAAA,CACA,cAAA,CAHA,iBAAA,CAMA,kBbk5BJ,Ca94BE,mBACE,YAAA,CACA,abg5BJ,Ca54BE,sBAKE,gBAAA,CAHA,MAAA,CACA,gBAAA,CAGA,UAAA,CAFA,cAAA,CAHA,iBAAA,CACA,Obk5BJ,Caz4BA,gBACE,gDb44BF,Caz4BE,uBACE,YAAA,CACA,cAAA,CACA,6BAAA,CACA,ab24BJ,Cav4BE,kCACE,sCby4BJ,Cat4BI,6DACE,+Bbw4BN,Caz4BI,0DACE,+Bbw4BN,Caz4BI,oDACE,+Bbw4BN,Cah4BA,cAIE,wCAAA,CACA,gBAAA,CAHA,iBAAA,CACA,eAAA,CAFA,Ubu4BF,CK91BI,mCQ1CJ,cASI,Ubm4BF,CACF,Ca/3BE,yBACE,sCbi4BJ,Ca13BA,WACE,cAAA,CACA,qBb63BF,CK32BI,mCQpBJ,WA
MI,eb63BF,CACF,Ca13BE,iBACE,oBAAA,CAEA,aAAA,CACA,iBAAA,CAFA,Yb83BJ,Caz3BI,wBACE,eb23BN,Cav3BI,qBAGE,iBAAA,CAFA,gBAAA,CACA,mBb03BN,CcjiCE,uBAKE,kBAAA,CACA,mBAAA,CAHA,gCAAA,CAIA,cAAA,CANA,oBAAA,CAGA,eAAA,CAFA,kBAAA,CAMA,gEdoiCJ,Cc9hCI,gCAEE,2CAAA,CACA,uCAAA,CAFA,gCdkiCN,Cc5hCI,kDAEE,0CAAA,CACA,sCAAA,CAFA,+BdgiCN,CcjiCI,+CAEE,0CAAA,CACA,sCAAA,CAFA,+BdgiCN,CcjiCI,yCAEE,0CAAA,CACA,sCAAA,CAFA,+BdgiCN,CczhCE,gCAKE,4Bd8hCJ,CcniCE,gEAME,6Bd6hCJ,CcniCE,gCAME,4Bd6hCJ,CcniCE,sBAIE,6DAAA,CAGA,8BAAA,CAJA,eAAA,CAFA,aAAA,CACA,eAAA,CAMA,sCd2hCJ,CcthCI,iDACE,6CAAA,CACA,8BdwhCN,Cc1hCI,8CACE,6CAAA,CACA,8BdwhCN,Cc1hCI,wCACE,6CAAA,CACA,8BdwhCN,CcphCI,+BACE,UdshCN,CezkCA,WAOE,2CAAA,CAGA,0DACE,CALF,gCAAA,CADA,aAAA,CAFA,MAAA,CAFA,uBAAA,CAAA,eAAA,CAEA,OAAA,CADA,KAAA,CAEA,SfglCF,CerkCE,aAfF,WAgBI,YfwkCF,CACF,CerkCE,mBACE,2BAAA,CACA,iEfukCJ,CejkCE,mBACE,gEACE,CAEF,kEfikCJ,Ce3jCE,kBAEE,kBAAA,CADA,YAAA,CAEA,ef6jCJ,CezjCE,mBAKE,kBAAA,CAGA,cAAA,CALA,YAAA,CAIA,uCAAA,CAHA,aAAA,CAHA,iBAAA,CAQA,uBAAA,CAHA,qBAAA,CAJA,SfkkCJ,CexjCI,yBACE,Uf0jCN,CetjCI,iCACE,oBfwjCN,CepjCI,uCAEE,uCAAA,CADA,YfujCN,CeljCI,2BACE,YAAA,CACA,afojCN,CKv8BI,wCU/GA,2BAMI,YfojCN,CACF,CejjCM,iDAIE,iBAAA,CAHA,aAAA,CAEA,aAAA,CADA,UfqjCR,CevjCM,8CAIE,iBAAA,CAHA,aAAA,CAEA,aAAA,CADA,UfqjCR,CevjCM,wCAIE,iBAAA,CAHA,aAAA,CAEA,aAAA,CADA,UfqjCR,CKr+BI,mCUzEA,iCAII,Yf8iCN,CACF,Ce3iCM,wCACE,Yf6iCR,CeziCM,+CACE,oBf2iCR,CKh/BI,sCUtDA,iCAII,YfsiCN,CACF,CejiCE,kBAEE,YAAA,CACA,cAAA,CAFA,iBAAA,CAIA,8DACE,CAFF,kBfoiCJ,Ce9hCI,oCAGE,SAAA,CAIA,mBAAA,CALA,6BAAA,CAEA,8DACE,CAJF,UfoiCN,Ce3hCM,8CACE,8Bf6hCR,CexhCI,8BACE,ef0hCN,CerhCE,4BAGE,kBf0hCJ,Ce7hCE,4BAGE,iBf0hCJ,Ce7hCE,4BAIE,gBfyhCJ,Ce7hCE,4BAIE,iBfyhCJ,Ce7hCE,kBACE,WAAA,CAIA,eAAA,CAHA,aAAA,CAIA,kBfuhCJ,CephCI,4CAGE,SAAA,CAIA,mBAAA,CALA,8BAAA,CAEA,8DACE,CAJF,Uf0hCN,CejhCM,sDACE,6BfmhCR,Ce/gCM,8DAGE,SAAA,CAIA,mBAAA,CALA,uBAAA,CAEA,8DACE,CAJF,SfqhCR,Ce1gCI,uCAGE,WAAA,CAFA,iBAAA,CACA,Uf6gCN,CevgCE,mBACE,YAAA,CACA,aAAA,CACA,cAAA,CAEA,+CACE,CAFF,kBf0gCJ,CepgCI,8DACE,WAAA,CACA,SAAA,CACA,oCfsgCN,Ce//BE,mBACE
,YfigCJ,CKtjCI,mCUoDF,6BAQI,gBfigCJ,CezgCA,6BAQI,iBfigCJ,CezgCA,mBAKI,aAAA,CAEA,iBAAA,CADA,afmgCJ,CACF,CK9jCI,sCUoDF,6BAaI,kBfigCJ,Ce9gCA,6BAaI,mBfigCJ,CACF,CgBzuCA,MACE,0MAAA,CACA,gMAAA,CACA,yNhB4uCF,CgBtuCA,QACE,eAAA,CACA,ehByuCF,CgBtuCE,eACE,aAAA,CAGA,eAAA,CADA,eAAA,CADA,eAAA,CAGA,sBhBwuCJ,CgBruCI,+BACE,YhBuuCN,CgBpuCM,mCAEE,WAAA,CADA,UhBuuCR,CgB/tCQ,6DAME,iBAAA,CALA,aAAA,CAGA,aAAA,CADA,cAAA,CAEA,kBAAA,CAHA,UhBquCV,CgBvuCQ,0DAME,iBAAA,CALA,aAAA,CAGA,aAAA,CADA,cAAA,CAEA,kBAAA,CAHA,UhBquCV,CgBvuCQ,oDAME,iBAAA,CALA,aAAA,CAGA,aAAA,CADA,cAAA,CAEA,kBAAA,CAHA,UhBquCV,CgB1tCE,cAGE,eAAA,CAFA,QAAA,CACA,ShB6tCJ,CgBxtCE,cACE,ehB0tCJ,CgBvtCI,sCACE,ehBytCN,CgB1tCI,sCACE,chBytCN,CgBptCE,cAEE,kBAAA,CAKA,cAAA,CANA,YAAA,CAEA,6BAAA,CACA,iBAAA,CACA,eAAA,CAIA,uBAAA,CAHA,sBAAA,CAEA,sBhButCJ,CgBntCI,sBACE,uChBqtCN,CgBjtCI,oCACE,+BhBmtCN,CgB/sCI,0CACE,UhBitCN,CgB7sCI,yCACE,+BhB+sCN,CgBhtCI,sCACE,+BhB+sCN,CgBhtCI,gCACE,+BhB+sCN,CgB3sCI,4BACE,uCAAA,CACA,oBhB6sCN,CgBzsCI,0CACE,YhB2sCN,CgBxsCM,yDAKE,6BAAA,CAJA,aAAA,CAEA,WAAA,CACA,qCAAA,CAAA,6BAAA,CAFA,UhB6sCR,CgBtsCM,kDACE,YhBwsCR,CgBnsCI,gBAEE,cAAA,CADA,YhBssCN,CgBhsCE,cACE,ahBksCJ,CgB9rCE,gBACE,YhBgsCJ,CK9oCI,wCW3CA,0CASE,2CAAA,CAHA,YAAA,CACA,qBAAA,CACA,WAAA,CAJA,MAAA,CAFA,iBAAA,CAEA,OAAA,CADA,KAAA,CAEA,ShB+rCJ,CgBprCI,4DACE,eAAA,CACA,ehBsrCN,CgBxrCI,yDACE,eAAA,CACA,ehBsrCN,CgBxrCI,mDACE,eAAA,CACA,ehBsrCN,CgBlrCI,gCAOE,qDAAA,CAHA,uCAAA,CAIA,cAAA,CANA,aAAA,CAGA,kBAAA,CAFA,wBAAA,CAFA,iBAAA,CAKA,kBhBsrCN,CgBjrCM,wDAGE,UhBurCR,CgB1rCM,wDAGE,WhBurCR,CgB1rCM,8CAIE,aAAA,CAEA,aAAA,CACA,YAAA,CANA,iBAAA,CACA,SAAA,CAGA,YhBqrCR,CgBhrCQ,oDAIE,6BAAA,CAKA,UAAA,CARA,aAAA,CAEA,WAAA,CAEA,2CAAA,CAAA,mCAAA,CACA,4BAAA,CAAA,oBAAA,CACA,6BAAA,CAAA,qBAAA,CACA,yBAAA,CAAA,iBAAA,CANA,UhByrCV,CgB7qCM,8CAEE,2CAAA,CACA,gEACE,CAHF,eAAA,CAIA,gCAAA,CAAA,4BAAA,CACA,kBhB8qCR,CgB3qCQ,2DACE,YhB6qCV,CgBxqCM,8CAGE,2CAAA,CAFA,gCAAA,CACA,ehB2qCR,CgBtqCM,yCAIE,aAAA,CADA,UAAA,CAEA,YAAA,CACA,aAAA,CALA,iBAAA,CAEA,WAAA,CADA,ShB4qCR,CgBnqCI,+BACE,MhBqqCN,CgBjqCI,+BAEE,4DAAA,CADA,ShBoqC
N,CgBhqCM,qDACE,+BhBkqCR,CgB/pCQ,gFACE,+BhBiqCV,CgBlqCQ,6EACE,+BhBiqCV,CgBlqCQ,uEACE,+BhBiqCV,CgB3pCI,+BACE,YAAA,CACA,mBhB6pCN,CgB1pCM,uDAGE,mBhB6pCR,CgBhqCM,uDAGE,kBhB6pCR,CgBhqCM,6CAIE,gBAAA,CAFA,aAAA,CADA,YhB+pCR,CgBzpCQ,mDAIE,6BAAA,CAKA,UAAA,CARA,aAAA,CAEA,WAAA,CAEA,2CAAA,CAAA,mCAAA,CACA,4BAAA,CAAA,oBAAA,CACA,6BAAA,CAAA,qBAAA,CACA,yBAAA,CAAA,iBAAA,CANA,UhBkqCV,CgBlpCM,+CACE,mBhBopCR,CgB5oCM,4CAEE,wBAAA,CADA,ehB+oCR,CgB3oCQ,oEACE,mBhB6oCV,CgB9oCQ,oEACE,oBhB6oCV,CgBzoCQ,4EACE,iBhB2oCV,CgB5oCQ,4EACE,kBhB2oCV,CgBvoCQ,oFACE,mBhByoCV,CgB1oCQ,oFACE,oBhByoCV,CgBroCQ,4FACE,mBhBuoCV,CgBxoCQ,4FACE,oBhBuoCV,CgBhoCE,mBACE,wBhBkoCJ,CgB9nCE,wBACE,YAAA,CAEA,SAAA,CADA,0BAAA,CAEA,oEhBgoCJ,CgB3nCI,kCACE,2BhB6nCN,CgBxnCE,gCAEE,SAAA,CADA,uBAAA,CAEA,qEhB0nCJ,CgBrnCI,8CAEE,kCAAA,CAAA,0BhBsnCN,CACF,CK5xCI,wCW8KA,0CACE,YhBinCJ,CgB9mCI,yDACE,UhBgnCN,CgB5mCI,wDACE,YhB8mCN,CgB1mCI,kDACE,YhB4mCN,CgBvmCE,gBAIE,iDAAA,CADA,gCAAA,CAFA,aAAA,CACA,ehB2mCJ,CACF,CKz1CM,6DWuPF,6CACE,YhBqmCJ,CgBlmCI,4DACE,UhBomCN,CgBhmCI,2DACE,YhBkmCN,CgB9lCI,qDACE,YhBgmCN,CACF,CKj1CI,mCWyPA,kCAME,qCAAA,CACA,qDAAA,CANA,uBAAA,CAAA,eAAA,CACA,KAAA,CAGA,ShB2lCJ,CgBtlCI,6CACE,uBhBwlCN,CgBplCI,gDACE,YhBslCN,CACF,CKh2CI,sCW7JJ,QA6aI,oDhBolCF,CgBjlCE,gCAME,qCAAA,CACA,qDAAA,CANA,uBAAA,CAAA,eAAA,CACA,KAAA,CAGA,ShBmlCJ,CgB9kCI,8CACE,uBhBglCN,CgBtkCE,sEACE,YhB2kCJ,CgBvkCE,6DACE,ahBykCJ,CgB1kCE,0DACE,ahBykCJ,CgB1kCE,oDACE,ahBykCJ,CgBrkCE,6CACE,YhBukCJ,CgBnkCE,uBACE,aAAA,CACA,ehBqkCJ,CgBlkCI,kCACE,ehBokCN,CgBhkCI,qCACE,eAAA,CACA,mBhBkkCN,CgB/jCM,mDACE,mBhBikCR,CgB7jCM,mDACE,YhB+jCR,CgB1jCI,+BACE,ahB4jCN,CgBzjCM,2DACE,ShB2jCR,CgBrjCE,cAGE,kBAAA,CADA,YAAA,CAEA,+CACE,CAJF,WhB0jCJ,CgBljCI,wBACE,wBhBojCN,CgBhjCI,oBACE,uDhBkjCN,CgB9iCI,oBAKE,6BAAA,CAKA,UAAA,CATA,oBAAA,CAEA,WAAA,CAGA,2CAAA,CAAA,mCAAA,CACA,4BAAA,CAAA,oBAAA,CACA,6BAAA,CAAA,qBAAA,CACA,yBAAA,CAAA,iBAAA,CALA,qBAAA,CAFA,UhBwjCN,CgB5iCI,0JAEE,uBhB6iCN,CgB/hCI,+HACE,YhBqiCN,CgBliCM,oDACE,aAAA,CACA,ShBoiCR,CgBjiCQ,kEAOE,qCAAA,CACA,qDAAA,CAFA,eAAA,CAFA,YAAA,CACA,eAAA,CAJA,uBAAA,CAA
A,eAAA,CACA,KAAA,CACA,ShBwiCV,CgBhiCU,4FACE,mBhBkiCZ,CgB9hCU,gFACE,YhBgiCZ,CgBxhCI,2CACE,ahB0hCN,CgBvhCM,iFACE,mBhByhCR,CgB1hCM,iFACE,kBhByhCR,CgBhhCI,mFACE,ehBkhCN,CgB/gCM,iGACE,ShBihCR,CgB5gCI,qFAGE,mDhB8gCN,CgBjhCI,qFAGE,oDhB8gCN,CgBjhCI,2EACE,aAAA,CACA,oBhB+gCN,CgB3gCM,0FACE,YhB6gCR,CACF,CiBloDA,MACE,igBjBqoDF,CiB/nDA,WACE,iBjBkoDF,CKp+CI,mCY/JJ,WAKI,ejBkoDF,CACF,CiB/nDE,kBACE,YjBioDJ,CiB7nDE,oBAEE,SAAA,CADA,SjBgoDJ,CK79CI,wCYpKF,8BAQI,YjBuoDJ,CiB/oDA,8BAQI,ajBuoDJ,CiB/oDA,oBAYI,2CAAA,CACA,kBAAA,CAHA,WAAA,CACA,eAAA,CAOA,mBAAA,CAZA,iBAAA,CACA,SAAA,CAOA,uBAAA,CACA,4CACE,CAPF,UjBsoDJ,CiB1nDI,+DACE,SAAA,CACA,oCjB4nDN,CACF,CKngDI,mCYjJF,8BAiCI,MjB8nDJ,CiB/pDA,8BAiCI,OjB8nDJ,CiB/pDA,oBAoCI,gCAAA,CACA,cAAA,CAFA,QAAA,CAJA,cAAA,CACA,KAAA,CAMA,sDACE,CALF,OjB6nDJ,CiBnnDI,+DAME,YAAA,CACA,SAAA,CACA,4CACE,CARF,UjBwnDN,CACF,CKlgDI,wCYxGA,+DAII,mBjB0mDN,CACF,CKhjDM,6DY/DF,+DASI,mBjB0mDN,CACF,CKrjDM,6DY/DF,+DAcI,mBjB0mDN,CACF,CiBrmDE,kBAEE,kCAAA,CAAA,0BjBsmDJ,CKphDI,wCYpFF,4BAQI,MjB6mDJ,CiBrnDA,4BAQI,OjB6mDJ,CiBrnDA,kBAWI,QAAA,CAGA,SAAA,CAFA,eAAA,CANA,cAAA,CACA,KAAA,CAMA,wBAAA,CAEA,qGACE,CANF,OAAA,CADA,SjB4mDJ,CiB/lDI,4BACE,yBjBimDN,CiB7lDI,6DAEE,WAAA,CAEA,SAAA,CADA,uBAAA,CAEA,sGACE,CALF,UjBmmDN,CACF,CK/jDI,mCYjEF,4BA2CI,WjB6lDJ,CiBxoDA,4BA2CI,UjB6lDJ,CiBxoDA,kBA6CI,eAAA,CAHA,iBAAA,CAIA,8CAAA,CAFA,ajB4lDJ,CACF,CK9lDM,6DYOF,6DAII,ajBulDN,CACF,CK7kDI,sCYfA,6DASI,ajBulDN,CACF,CiBllDE,iBAIE,2CAAA,CACA,gCAAA,CAFA,aAAA,CAFA,iBAAA,CAKA,2CACE,CALF,SjBwlDJ,CK1lDI,mCYAF,iBAaI,gCAAA,CACA,mBAAA,CAFA,ajBolDJ,CiB/kDI,uBACE,oCjBilDN,CACF,CiB7kDI,4DAEE,2CAAA,CACA,6BAAA,CACA,oCAAA,CAHA,gCjBklDN,CiB1kDE,4BAKE,mBAAA,CAAA,oBjB+kDJ,CiBplDE,4BAKE,mBAAA,CAAA,oBjB+kDJ,CiBplDE,kBAQE,sBAAA,CAFA,eAAA,CAFA,WAAA,CAHA,iBAAA,CAMA,sBAAA,CAJA,UAAA,CADA,SjBklDJ,CiBzkDI,yCACE,yBAAA,CAAA,qBjB2kDN,CiB5kDI,+BACE,qBjB2kDN,CiBvkDI,yCAEE,uCjBwkDN,CiB1kDI,kEAEE,uCjBwkDN,CiBpkDI,6BACE,YjBskDN,CK1mDI,wCYaF,kBA8BI,eAAA,CADA,aAAA,CADA,UjBukDJ,CACF,CKpoDI,mCYgCF,4BAmCI,mBjBukDJ,CiB1mDA,4BAmCI,oBjBukDJ,CiB1mDA,kBAoCI,aAAA,CAC
A,ejBqkDJ,CiBlkDI,yCACE,uCjBokDN,CiBrkDI,+BACE,uCjBokDN,CiBhkDI,mCACE,gCjBkkDN,CiB9jDI,6DACE,kBjBgkDN,CiB7jDM,oFAEE,uCjB8jDR,CiBhkDM,wJAEE,uCjB8jDR,CACF,CiBxjDE,iBAIE,cAAA,CAHA,oBAAA,CAEA,aAAA,CAEA,kCACE,CAJF,YjB6jDJ,CiBrjDI,uBACE,UjBujDN,CiBnjDI,yCAGE,UjBsjDN,CiBzjDI,yCAGE,WjBsjDN,CiBzjDI,+BACE,iBAAA,CACA,SAAA,CAEA,SjBqjDN,CiBljDM,6CACE,oBjBojDR,CKvpDI,wCY2FA,yCAcI,UjBmjDN,CiBjkDE,yCAcI,WjBmjDN,CiBjkDE,+BAaI,SjBojDN,CiBhjDM,+CACE,YjBkjDR,CACF,CKnrDI,mCY8GA,+BAwBI,mBjBijDN,CiB9iDM,8CACE,YjBgjDR,CACF,CiB1iDE,8BAGE,WjB8iDJ,CiBjjDE,8BAGE,UjB8iDJ,CiBjjDE,oBAKE,mBAAA,CAJA,iBAAA,CACA,SAAA,CAEA,SjB6iDJ,CK/qDI,wCY8HF,8BAUI,WjB4iDJ,CiBtjDA,8BAUI,UjB4iDJ,CiBtjDA,oBASI,SjB6iDJ,CACF,CiBziDI,gCACE,iBjB+iDN,CiBhjDI,gCACE,kBjB+iDN,CiBhjDI,sBAEE,uCAAA,CAEA,SAAA,CADA,oBAAA,CAEA,+DjB2iDN,CiBtiDM,yCAEE,uCAAA,CADA,YjByiDR,CiBpiDM,yFAGE,SAAA,CACA,mBAAA,CAFA,kBjBuiDR,CiBliDQ,8FACE,UjBoiDV,CiB7hDE,8BAOE,mBAAA,CAAA,oBjBoiDJ,CiB3iDE,8BAOE,mBAAA,CAAA,oBjBoiDJ,CiB3iDE,oBAIE,kBAAA,CAIA,yCAAA,CALA,YAAA,CAMA,eAAA,CAHA,WAAA,CAKA,SAAA,CAVA,iBAAA,CACA,KAAA,CAUA,uBAAA,CAFA,kBAAA,CALA,UjBsiDJ,CKzuDI,mCY8LF,8BAgBI,mBjBgiDJ,CiBhjDA,8BAgBI,oBjBgiDJ,CiBhjDA,oBAiBI,ejB+hDJ,CACF,CiB5hDI,+DACE,SAAA,CACA,0BjB8hDN,CiBzhDE,6BAKE,+BjB4hDJ,CiBjiDE,0DAME,gCjB2hDJ,CiBjiDE,6BAME,+BjB2hDJ,CiBjiDE,mBAIE,eAAA,CAHA,iBAAA,CAEA,UAAA,CADA,SjB+hDJ,CKxuDI,wCYuMF,mBAWI,QAAA,CADA,UjB4hDJ,CACF,CKjwDI,mCY0NF,mBAiBI,SAAA,CADA,UAAA,CAEA,sBjB2hDJ,CiBxhDI,8DACE,8BAAA,CACA,SjB0hDN,CACF,CiBrhDE,uBAKE,kCAAA,CAAA,0BAAA,CAFA,2CAAA,CAFA,WAAA,CACA,eAAA,CAOA,kBjBmhDJ,CiBhhDI,iEAZF,uBAaI,uBjBmhDJ,CACF,CK9yDM,6DY6QJ,uBAkBI,ajBmhDJ,CACF,CK7xDI,sCYuPF,uBAuBI,ajBmhDJ,CACF,CKlyDI,mCYuPF,uBA4BI,YAAA,CAEA,+DAAA,CADA,oBjBohDJ,CiBhhDI,kEACE,ejBkhDN,CiB9gDI,6BACE,qDjBghDN,CiB5gDI,0CAEE,YAAA,CADA,WjB+gDN,CiB1gDI,gDACE,oDjB4gDN,CiBzgDM,sDACE,0CjB2gDR,CACF,CiBpgDA,kBACE,gCAAA,CACA,qBjBugDF,CiBpgDE,wBAKE,qDAAA,CAHA,uCAAA,CACA,gBAAA,CACA,kBAAA,CAHA,eAAA,CAKA,uBjBsgDJ,CKt0DI,mCY0TF,kCAUI,mBjBsgDJ,CiBhhDA,kCAUI,oBjBsgDJ,CACF,CiBlgDE,wBAGE,eAAA,CAFA,QAAA,CAC
A,SAAA,CAGA,wBAAA,CAAA,qBAAA,CAAA,oBAAA,CAAA,gBjBmgDJ,CiB//CE,wBACE,yDjBigDJ,CiB9/CI,oCACE,ejBggDN,CiB3/CE,wBACE,aAAA,CACA,YAAA,CAEA,uBAAA,CADA,gCjB8/CJ,CiB1/CI,mDACE,uDjB4/CN,CiB7/CI,gDACE,uDjB4/CN,CiB7/CI,0CACE,uDjB4/CN,CiBx/CI,gDACE,mBjB0/CN,CiBr/CE,gCAGE,+BAAA,CAGA,cAAA,CALA,aAAA,CAGA,gBAAA,CACA,YAAA,CAHA,mBAAA,CAQA,uBAAA,CAHA,2CjBw/CJ,CK72DI,mCY8WF,0CAcI,mBjBq/CJ,CiBngDA,0CAcI,oBjBq/CJ,CACF,CiBl/CI,2DAEE,uDAAA,CADA,+BjBq/CN,CiBt/CI,wDAEE,uDAAA,CADA,+BjBq/CN,CiBt/CI,kDAEE,uDAAA,CADA,+BjBq/CN,CiBh/CI,wCACE,YjBk/CN,CiB7+CI,wDACE,YjB++CN,CiB3+CI,oCACE,WjB6+CN,CiBx+CE,2BAGE,eAAA,CADA,eAAA,CADA,iBjB4+CJ,CKp4DI,mCYuZF,qCAOI,mBjB0+CJ,CiBj/CA,qCAOI,oBjB0+CJ,CACF,CiBp+CM,8DAGE,eAAA,CADA,eAAA,CAEA,eAAA,CAHA,ejBy+CR,CiBh+CE,kCAEE,MjBs+CJ,CiBx+CE,kCAEE,OjBs+CJ,CiBx+CE,wBAME,uCAAA,CAFA,aAAA,CACA,YAAA,CAJA,iBAAA,CAEA,YjBq+CJ,CKp4DI,wCY4ZF,wBAUI,YjBk+CJ,CACF,CiB/9CI,8BAIE,6BAAA,CAKA,UAAA,CARA,oBAAA,CAEA,WAAA,CAEA,+CAAA,CAAA,uCAAA,CACA,4BAAA,CAAA,oBAAA,CACA,6BAAA,CAAA,qBAAA,CACA,yBAAA,CAAA,iBAAA,CANA,UjBw+CN,CiB99CM,wCACE,oBjBg+CR,CiB19CE,yBAGE,gBAAA,CADA,eAAA,CAEA,eAAA,CAHA,ajB+9CJ,CiBx9CE,0BASE,2BAAA,CACA,oBAAA,CALA,uCAAA,CAJA,mBAAA,CAKA,gBAAA,CACA,eAAA,CAJA,aAAA,CADA,eAAA,CAEA,eAAA,CAIA,sBjB49CJ,CKz6DI,wCYqcF,0BAeI,oBAAA,CADA,ejB29CJ,CACF,CKx9DM,6DY8eJ,0BAqBI,oBAAA,CADA,ejB29CJ,CACF,CiBv9CI,+BAEE,wBAAA,CADA,yBjB09CN,CiBp9CE,yBAEE,gBAAA,CACA,iBAAA,CAFA,ajBw9CJ,CiBl9CE,uBAEE,wBAAA,CADA,+BjBq9CJ,CkB3nEA,WACE,iBAAA,CACA,SlB8nEF,CkB3nEE,kBAOE,2CAAA,CACA,mBAAA,CACA,8BAAA,CAHA,gCAAA,CAHA,QAAA,CAEA,gBAAA,CADA,YAAA,CAOA,SAAA,CAVA,iBAAA,CACA,sBAAA,CAQA,mCAAA,CAEA,oElB6nEJ,CkBvnEI,+DACE,gBAAA,CAEA,SAAA,CADA,+BAAA,CAEA,sFACE,CADF,8ElBynEN,CkB7nEI,4DACE,gBAAA,CAEA,SAAA,CADA,+BAAA,CAEA,mFACE,CADF,8ElBynEN,CkB7nEI,sDACE,gBAAA,CAEA,SAAA,CADA,+BAAA,CAEA,8ElBynEN,CkBlnEI,wBAUE,qCAAA,CAAA,8CAAA,CAFA,mCAAA,CAAA,oCAAA,CACA,YAAA,CAEA,UAAA,CANA,QAAA,CAFA,QAAA,CAIA,kBAAA,CADA,iBAAA,CALA,iBAAA,CACA,KAAA,CAEA,OlB2nEN,CkB/mEE,iBAOE,mBAAA,CAFA,eAAA,CACA,oBAAA,CAJA,QAAA,CADA,kBAAA,CAGA,aAAA,CADA,SlBqnEJ
,CkB7mEE,iBACE,kBlB+mEJ,CkB3mEE,2BAGE,kBAAA,CAAA,oBlBinEJ,CkBpnEE,2BAGE,mBAAA,CAAA,mBlBinEJ,CkBpnEE,iBAKE,cAAA,CAJA,aAAA,CAGA,YAAA,CAKA,uBAAA,CAHA,2CACE,CALF,UlBknEJ,CkBxmEI,4CACE,+BlB0mEN,CkB3mEI,yCACE,+BlB0mEN,CkB3mEI,mCACE,+BlB0mEN,CkBtmEI,uBACE,qDlBwmEN,CmB5rEA,YAIE,qBAAA,CADA,aAAA,CAGA,gBAAA,CALA,uBAAA,CAAA,eAAA,CACA,UAAA,CAGA,anBgsEF,CmB5rEE,aATF,YAUI,YnB+rEF,CACF,CKjhEI,wCc3KF,+BAMI,anBmsEJ,CmBzsEA,+BAMI,cnBmsEJ,CmBzsEA,qBAWI,2CAAA,CAHA,aAAA,CAEA,WAAA,CANA,cAAA,CACA,KAAA,CAOA,uBAAA,CACA,iEACE,CALF,aAAA,CAFA,SnBksEJ,CmBvrEI,mEACE,8BAAA,CACA,6BnByrEN,CmBtrEM,6EACE,8BnBwrER,CmBnrEI,6CAEE,QAAA,CAAA,MAAA,CACA,QAAA,CAEA,eAAA,CAJA,iBAAA,CACA,OAAA,CAEA,yBAAA,CAAA,qBAAA,CAFA,KnBwrEN,CACF,CKhkEI,sCctKJ,YAuDI,QnBmrEF,CmBhrEE,mBACE,WnBkrEJ,CmB9qEE,6CACE,UnBgrEJ,CACF,CmB5qEE,uBACE,YAAA,CACA,OnB8qEJ,CK/kEI,mCcjGF,uBAMI,QnB8qEJ,CmB3qEI,8BACE,WnB6qEN,CmBzqEI,qCACE,anB2qEN,CmBvqEI,+CACE,kBnByqEN,CACF,CmBpqEE,wBAIE,kCAAA,CAAA,0BAAA,CAHA,cAAA,CACA,eAAA,CAQA,+DAAA,CADA,oBnBkqEJ,CmB9pEI,8BACE,qDnBgqEN,CmB5pEI,2CAEE,YAAA,CADA,WnB+pEN,CmB1pEI,iDACE,oDnB4pEN,CmBzpEM,uDACE,0CnB2pER,CK9lEI,wCcnDF,YAME,gCAAA,CADA,QAAA,CAEA,SAAA,CANA,cAAA,CACA,KAAA,CAMA,sDACE,CALF,OAAA,CADA,SnB0pEF,CmB/oEE,4CAEE,WAAA,CACA,SAAA,CACA,4CACE,CAJF,UnBopEJ,CACF,CoB1yEA,yBACE,GACE,QpB4yEF,CoBzyEA,GACE,apB2yEF,CACF,CoBlzEA,iBACE,GACE,QpB4yEF,CoBzyEA,GACE,apB2yEF,CACF,CoBvyEA,wBACE,GAEE,SAAA,CADA,0BpB0yEF,CoBtyEA,IACE,SpBwyEF,CoBryEA,GAEE,SAAA,CADA,uBpBwyEF,CACF,CoBpzEA,gBACE,GAEE,SAAA,CADA,0BpB0yEF,CoBtyEA,IACE,SpBwyEF,CoBryEA,GAEE,SAAA,CADA,uBpBwyEF,CACF,CoB/xEA,MACE,mgBAAA,CACA,oiBAAA,CACA,0nBAAA,CACA,mhBpBiyEF,CoB3xEA,WAOE,kCAAA,CAAA,0BAAA,CANA,aAAA,CACA,gBAAA,CACA,eAAA,CAEA,uCAAA,CAGA,uBAAA,CAJA,kBpBiyEF,CoB1xEE,iBACE,UpB4xEJ,CoBxxEE,iBACE,oBAAA,CAEA,aAAA,CACA,qBAAA,CAFA,UpB4xEJ,CoBvxEI,+BAEE,iBpByxEN,CoB3xEI,+BAEE,kBpByxEN,CoB3xEI,qBACE,gBpB0xEN,CoBrxEI,kDACE,iBpBwxEN,CoBzxEI,kDACE,kBpBwxEN,CoBzxEI,kDAEE,iBpBuxEN,CoBzxEI,kDAEE,kBpBuxEN,CoBlxEE,iCAGE,iBpBuxEJ,CoB1xEE,iCAGE,kBpBuxEJ,CoB1xEE,uBACE,oBAAA,CACA,
6BAAA,CAEA,eAAA,CACA,sBAAA,CACA,qBpBoxEJ,CoBhxEE,kBACE,YAAA,CAMA,gBAAA,CALA,SAAA,CAMA,oBAAA,CAJA,gBAAA,CAKA,WAAA,CAHA,eAAA,CADA,SAAA,CAFA,UpBwxEJ,CoB/wEI,iDACE,oCAAA,CAAA,4BpBixEN,CoB5wEE,iBACE,eAAA,CACA,sBpB8wEJ,CoB3wEI,gDACE,mCAAA,CAAA,2BpB6wEN,CoBzwEI,kCAIE,kBpBixEN,CoBrxEI,kCAIE,iBpBixEN,CoBrxEI,wBAME,6BAAA,CAIA,UAAA,CATA,oBAAA,CAEA,YAAA,CAIA,4BAAA,CAAA,oBAAA,CACA,6BAAA,CAAA,qBAAA,CACA,yBAAA,CAAA,iBAAA,CAJA,uBAAA,CAHA,WpBmxEN,CoBvwEI,iCACE,apBywEN,CoBrwEI,iCACE,gDAAA,CAAA,wCpBuwEN,CoBnwEI,+BACE,8CAAA,CAAA,sCpBqwEN,CoBjwEI,+BACE,8CAAA,CAAA,sCpBmwEN,CoB/vEI,sCACE,qDAAA,CAAA,6CpBiwEN,CqBx5EA,SASE,2CAAA,CAFA,gCAAA,CAHA,aAAA,CAIA,eAAA,CAFA,aAAA,CADA,UAAA,CAFA,SrB+5EF,CqBt5EE,aAZF,SAaI,YrBy5EF,CACF,CK9uEI,wCgBzLJ,SAkBI,YrBy5EF,CACF,CqBt5EE,iBACE,mBrBw5EJ,CqBp5EE,yBAEE,iBrB05EJ,CqB55EE,yBAEE,kBrB05EJ,CqB55EE,eAME,eAAA,CADA,eAAA,CAJA,QAAA,CAEA,SAAA,CACA,kBrBw5EJ,CqBl5EE,eACE,oBAAA,CACA,aAAA,CACA,kBAAA,CAAA,mBrBo5EJ,CqB/4EE,eAOE,kCAAA,CAAA,0BAAA,CANA,aAAA,CAEA,eAAA,CADA,gBAAA,CAMA,UAAA,CAJA,uCAAA,CACA,oBAAA,CAIA,8DrBg5EJ,CqB34EI,iEAEE,aAAA,CACA,SrB44EN,CqB/4EI,8DAEE,aAAA,CACA,SrB44EN,CqB/4EI,wDAEE,aAAA,CACA,SrB44EN,CqBv4EM,2CACE,qBrBy4ER,CqB14EM,2CACE,qBrB44ER,CqB74EM,2CACE,qBrB+4ER,CqBh5EM,2CACE,qBrBk5ER,CqBn5EM,2CACE,oBrBq5ER,CqBt5EM,2CACE,qBrBw5ER,CqBz5EM,2CACE,qBrB25ER,CqB55EM,2CACE,qBrB85ER,CqB/5EM,4CACE,qBrBi6ER,CqBl6EM,4CACE,oBrBo6ER,CqBr6EM,4CACE,qBrBu6ER,CqBx6EM,4CACE,qBrB06ER,CqB36EM,4CACE,qBrB66ER,CqB96EM,4CACE,qBrBg7ER,CqBj7EM,4CACE,oBrBm7ER,CqB76EI,gCAEE,SAAA,CADA,yBAAA,CAEA,wCrB+6EN,CsB5/EA,MACE,wStB+/EF,CsBt/EE,qBAEE,mBAAA,CADA,kBtB0/EJ,CsBr/EE,8BAEE,iBtBggFJ,CsBlgFE,8BAEE,gBtBggFJ,CsBlgFE,oBAUE,+CAAA,CACA,oBAAA,CAVA,oBAAA,CAKA,gBAAA,CADA,eAAA,CAGA,qBAAA,CADA,eAAA,CAJA,kBAAA,CACA,uBAAA,CAKA,qBtBy/EJ,CsBp/EI,0BAGE,uCAAA,CAFA,aAAA,CACA,YAAA,CAEA,6CtBs/EN,CsBj/EM,gEAGE,0CAAA,CADA,+BtBm/ER,CsB7+EI,yBACE,uBtB++EN,CsBv+EI,gCAME,oDAAA,CAMA,UAAA,CAXA,oBAAA,CAEA,YAAA,CACA,iBAAA,CAGA,qCAAA,CAAA,6BAAA,CACA,4BAAA,CAAA,oBAAA,CACA,6BAAA,CAAA,qBAAA,CACA,yBAAA,CAAA,iBAAA,
CACA,iCAAA,CANA,0BAAA,CAHA,WtBm/EN,CsBr+EI,6DACE,0CtBu+EN,CsBx+EI,0DACE,0CtBu+EN,CsBx+EI,oDACE,0CtBu+EN,CuBhjFA,yBACE,GACE,uDAAA,CACA,oBvBmjFF,CuBhjFA,IACE,mCAAA,CACA,kBvBkjFF,CuB/iFA,GACE,8BAAA,CACA,oBvBijFF,CACF,CuB/jFA,iBACE,GACE,uDAAA,CACA,oBvBmjFF,CuBhjFA,IACE,mCAAA,CACA,kBvBkjFF,CuB/iFA,GACE,8BAAA,CACA,oBvBijFF,CACF,CuBziFA,MACE,wBvB2iFF,CuBriFA,YAwBE,kCAAA,CAAA,0BAAA,CALA,2CAAA,CACA,mBAAA,CACA,8BAAA,CAJA,gCAAA,CACA,sCAAA,CAfA,+IACE,CAYF,8BAAA,CASA,SAAA,CAxBA,iBAAA,CACA,uBAAA,CAoBA,4BAAA,CAIA,uDACE,CAZF,6BAAA,CADA,SvBgjFF,CuB9hFE,oBAGE,SAAA,CADA,uBAAA,CAEA,2EACE,CAJF,SvBmiFJ,CuBzhFE,4DACE,sCvB2hFJ,CuB5hFE,yDACE,sCvB2hFJ,CuB5hFE,mDACE,sCvB2hFJ,CuBvhFE,mBAEE,gBAAA,CADA,avB0hFJ,CuBthFI,2CACE,YvBwhFN,CuBphFI,0CACE,evBshFN,CuB9gFA,eACE,eAAA,CAEA,YAAA,CADA,kBvBkhFF,CuB9gFE,yBACE,avBghFJ,CuB5gFE,6BACE,oBAAA,CAGA,iBvB4gFJ,CuBxgFE,sBAOE,cAAA,CAFA,sCAAA,CADA,eAAA,CADA,YAAA,CAGA,YAAA,CALA,iBAAA,CAOA,wBAAA,CAAA,qBAAA,CAAA,oBAAA,CAAA,gBAAA,CANA,SvBghFJ,CuBvgFI,qCACE,UAAA,CACA,uBvBygFN,CuBtgFM,gEACE,UvBwgFR,CuBzgFM,6DACE,UvBwgFR,CuBzgFM,uDACE,UvBwgFR,CuBhgFI,4BAYE,oDAAA,CACA,iBAAA,CAIA,UAAA,CARA,YAAA,CANA,YAAA,CAOA,cAAA,CACA,cAAA,CAVA,iBAAA,CACA,KAAA,CAYA,2CACE,CARF,wBAAA,CACA,6BAAA,CAJA,UvB2gFN,CuB3/EM,4CAGE,8CACE,mCAAA,CAAA,2BvB2/ER,CACF,CuBv/EM,gDAIE,sBAAA,CAAA,cAAA,CAHA,2CvB0/ER,CuBl/EI,2BAEE,sCAAA,CADA,iBvBq/EN,CuBh/EI,qFACE,+BvBk/EN,CuBn/EI,kFACE,+BvBk/EN,CuBn/EI,4EACE,+BvBk/EN,CuB/+EM,2FACE,0CvBi/ER,CuBl/EM,wFACE,0CvBi/ER,CuBl/EM,kFACE,0CvBi/ER,CuB5+EI,0CAGE,sBAAA,CAAA,cAAA,CADA,eAAA,CADA,SvBg/EN,CuB1+EI,8CACE,oBAAA,CACA,evB4+EN,CuBz+EM,qDAME,mCAAA,CALA,oBAAA,CACA,mBAAA,CAEA,qBAAA,CACA,iDAAA,CAFA,qBvB8+ER,CuBv+EQ,iBAVF,qDAWI,WvB0+ER,CuBv+EQ,mEACE,mCvBy+EV,CACF,CwBvsFA,kBAKE,exBmtFF,CwBxtFA,kBAKE,gBxBmtFF,CwBxtFA,QASE,2CAAA,CACA,oBAAA,CAEA,8BAAA,CALA,uCAAA,CAHA,aAAA,CAIA,eAAA,CAGA,YAAA,CALA,mBAAA,CALA,cAAA,CACA,UAAA,CAWA,yBAAA,CACA,mGACE,CAZF,SxBqtFF,CwBnsFE,aArBF,QAsBI,YxBssFF,CACF,CwBnsFE,kBACE,wBxBqsFJ,CwBjsFE,gBAEE,SAAA,CAEA,mBAAA,CAHA,+BAAA,CAEA,uBxBosFJ,CwBhsFI,
0BACE,8BxBksFN,CwB7rFE,mCAEE,0CAAA,CADA,+BxBgsFJ,CwBjsFE,gCAEE,0CAAA,CADA,+BxBgsFJ,CwBjsFE,0BAEE,0CAAA,CADA,+BxBgsFJ,CwB3rFE,YACE,oBAAA,CACA,oBxB6rFJ,CyBjvFA,4BACE,GACE,mBzBovFF,CACF,CyBvvFA,oBACE,GACE,mBzBovFF,CACF,CyB5uFA,MACE,wfzB8uFF,CyBxuFA,YACE,aAAA,CAEA,eAAA,CADA,azB4uFF,CyBxuFE,+BAOE,kBAAA,CAAA,kBzByuFJ,CyBhvFE,+BAOE,iBAAA,CAAA,mBzByuFJ,CyBhvFE,qBAQE,aAAA,CAEA,cAAA,CADA,YAAA,CARA,iBAAA,CAKA,UzB0uFJ,CyBnuFI,qCAIE,iBzB2uFN,CyB/uFI,qCAIE,kBzB2uFN,CyB/uFI,2BAKE,6BAAA,CAKA,UAAA,CATA,oBAAA,CAEA,YAAA,CAGA,yCAAA,CAAA,iCAAA,CACA,4BAAA,CAAA,oBAAA,CACA,6BAAA,CAAA,qBAAA,CACA,yBAAA,CAAA,iBAAA,CAPA,WzB6uFN,CyBhuFE,kBAUE,2CAAA,CACA,mBAAA,CACA,8BAAA,CAJA,gCAAA,CACA,oBAAA,CAJA,kBAAA,CADA,YAAA,CASA,SAAA,CANA,aAAA,CADA,SAAA,CALA,iBAAA,CAgBA,gCAAA,CAAA,4BAAA,CAfA,UAAA,CAYA,+CACE,CAZF,SzB8uFJ,CyB7tFI,gEACE,gBAAA,CACA,SAAA,CACA,8CACE,CADF,sCzB+tFN,CyBluFI,6DACE,gBAAA,CACA,SAAA,CACA,2CACE,CADF,sCzB+tFN,CyBluFI,uDACE,gBAAA,CACA,SAAA,CACA,sCzB+tFN,CyBztFI,wBAGE,oCACE,wCAAA,CAAA,gCzBytFN,CyBrtFI,2CACE,sBAAA,CAAA,czButFN,CACF,CyBltFE,kBACE,kBzBotFJ,CyBhtFE,4BAGE,kBAAA,CAAA,oBzButFJ,CyB1tFE,4BAGE,mBAAA,CAAA,mBzButFJ,CyB1tFE,kBAME,cAAA,CALA,aAAA,CAIA,YAAA,CAKA,uBAAA,CAHA,2CACE,CAJF,kBAAA,CAFA,UzBwtFJ,CyB7sFI,6CACE,+BzB+sFN,CyBhtFI,0CACE,+BzB+sFN,CyBhtFI,oCACE,+BzB+sFN,CyB3sFI,wBACE,qDzB6sFN,C0B9yFA,MAEI,2RAAA,CAAA,8WAAA,CAAA,sPAAA,CAAA,8xBAAA,CAAA,qNAAA,CAAA,gbAAA,CAAA,gMAAA,CAAA,+PAAA,CAAA,8KAAA,CAAA,0eAAA,CAAA,kUAAA,CAAA,gM1Bu0FJ,C0B3zFE,8CAOE,8CAAA,CACA,sBAAA,CAEA,mBAAA,CACA,8BAAA,CAPA,mCAAA,CAHA,iBAAA,CAIA,gBAAA,CAHA,iBAAA,CACA,eAAA,CAGA,uB1Bm0FJ,C0Bz0FE,2CAOE,8CAAA,CACA,sBAAA,CAEA,mBAAA,CACA,8BAAA,CAPA,mCAAA,CAHA,iBAAA,CAIA,gBAAA,CAHA,iBAAA,CACA,eAAA,CAGA,uB1Bm0FJ,C0Bz0FE,wDASE,uB1Bg0FJ,C0Bz0FE,qDASE,uB1Bg0FJ,C0Bz0FE,+CASE,uB1Bg0FJ,C0Bz0FE,wDASE,wB1Bg0FJ,C0Bz0FE,qDASE,wB1Bg0FJ,C0Bz0FE,+CASE,wB1Bg0FJ,C0Bz0FE,qCAOE,8CAAA,CACA,sBAAA,CAEA,mBAAA,CACA,8BAAA,CAPA,mCAAA,CAHA,iBAAA,CAIA,gBAAA,CAHA,iBAAA,CACA,eAAA,CAGA,uB1Bm0FJ,C0B3zFI,aAdF,8CAeI,e1B8zFJ,C0B70FA,2CAeI,e1B8zFJ,C0B70FA,qCAeI,e
1B8zFJ,CACF,C0B1zFI,gDACE,qB1B4zFN,C0B7zFI,6CACE,qB1B4zFN,C0B7zFI,uCACE,qB1B4zFN,C0BxzFI,gFAEE,iBAAA,CADA,c1B2zFN,C0B5zFI,0EAEE,iBAAA,CADA,c1B2zFN,C0B5zFI,8DAEE,iBAAA,CADA,c1B2zFN,C0BtzFI,sEACE,iB1BwzFN,C0BzzFI,mEACE,iB1BwzFN,C0BzzFI,6DACE,iB1BwzFN,C0BpzFI,iEACE,e1BszFN,C0BvzFI,8DACE,e1BszFN,C0BvzFI,wDACE,e1BszFN,C0BlzFI,qEACE,Y1BozFN,C0BrzFI,kEACE,Y1BozFN,C0BrzFI,4DACE,Y1BozFN,C0BhzFI,+DACE,mB1BkzFN,C0BnzFI,4DACE,mB1BkzFN,C0BnzFI,sDACE,mB1BkzFN,C0B7yFE,oDAOE,oCAAA,CACA,WAAA,CAFA,eAAA,CAJA,eAAA,CAAA,YAAA,CAEA,oBAAA,CAAA,iBAAA,CAHA,iB1ByzFJ,C0B1zFE,iDAOE,oCAAA,CACA,WAAA,CAFA,eAAA,CAJA,eAAA,CAAA,YAAA,CAEA,oBAAA,CAAA,iBAAA,CAHA,iB1ByzFJ,C0B1zFE,8DAGE,kBAAA,CAAA,mB1BuzFJ,C0B1zFE,2DAGE,kBAAA,CAAA,mB1BuzFJ,C0B1zFE,qDAGE,kBAAA,CAAA,mB1BuzFJ,C0B1zFE,8DAGE,kBAAA,CAAA,mB1BuzFJ,C0B1zFE,2DAGE,kBAAA,CAAA,mB1BuzFJ,C0B1zFE,qDAGE,kBAAA,CAAA,mB1BuzFJ,C0B1zFE,8DAKE,mBAAA,CAAA,mB1BqzFJ,C0B1zFE,2DAKE,mBAAA,CAAA,mB1BqzFJ,C0B1zFE,qDAKE,mBAAA,CAAA,mB1BqzFJ,C0B1zFE,8DAKE,kBAAA,CAAA,oB1BqzFJ,C0B1zFE,2DAKE,kBAAA,CAAA,oB1BqzFJ,C0B1zFE,qDAKE,kBAAA,CAAA,oB1BqzFJ,C0B1zFE,8DASE,uB1BizFJ,C0B1zFE,2DASE,uB1BizFJ,C0B1zFE,qDASE,uB1BizFJ,C0B1zFE,8DASE,wB1BizFJ,C0B1zFE,2DASE,wB1BizFJ,C0B1zFE,qDASE,wB1BizFJ,C0B1zFE,8DAUE,4B1BgzFJ,C0B1zFE,2DAUE,4B1BgzFJ,C0B1zFE,qDAUE,4B1BgzFJ,C0B1zFE,8DAUE,6B1BgzFJ,C0B1zFE,2DAUE,6B1BgzFJ,C0B1zFE,qDAUE,6B1BgzFJ,C0B1zFE,8DAWE,6B1B+yFJ,C0B1zFE,2DAWE,6B1B+yFJ,C0B1zFE,qDAWE,6B1B+yFJ,C0B1zFE,8DAWE,4B1B+yFJ,C0B1zFE,2DAWE,4B1B+yFJ,C0B1zFE,qDAWE,4B1B+yFJ,C0B1zFE,2CAOE,oCAAA,CACA,WAAA,CAFA,eAAA,CAJA,eAAA,CAAA,YAAA,CAEA,oBAAA,CAAA,iBAAA,CAHA,iB1ByzFJ,C0B5yFI,oEACE,e1B8yFN,C0B/yFI,iEACE,e1B8yFN,C0B/yFI,2DACE,e1B8yFN,C0B1yFI,2DAME,wBCuIU,CDlIV,UAAA,CANA,WAAA,CAEA,kDAAA,CAAA,0CAAA,CACA,4BAAA,CAAA,oBAAA,CACA,6BAAA,CAAA,qBAAA,CACA,yBAAA,CAAA,iBAAA,CATA,iBAAA,CACA,UAAA,CAEA,U1BmzFN,C0BvzFI,wDAME,wBCuIU,CDlIV,UAAA,CANA,WAAA,CAEA,0CAAA,CACA,oBAAA,CACA,qBAAA,CACA,iBAAA,CATA,iBAAA,CACA,UAAA,CAEA,U1BmzFN,C0BvzFI,qEAGE,U1BozFN,C0BvzFI,kEAGE,U1BozFN,C0BvzFI,4DAGE,U1BozFN,C0BvzFI,qEAGE,W1BozFN,C0
BvzFI,kEAGE,W1BozFN,C0BvzFI,4DAGE,W1BozFN,C0BvzFI,kDAME,wBCuIU,CDlIV,UAAA,CANA,WAAA,CAEA,kDAAA,CAAA,0CAAA,CACA,4BAAA,CAAA,oBAAA,CACA,6BAAA,CAAA,qBAAA,CACA,yBAAA,CAAA,iBAAA,CATA,iBAAA,CACA,UAAA,CAEA,U1BmzFN,C0BvxFE,iEACE,oB1B0xFJ,C0B3xFE,2DACE,oB1B0xFJ,C0B3xFE,+CACE,oB1B0xFJ,C0BtxFE,wEACE,oC1ByxFJ,C0B1xFE,kEACE,oC1ByxFJ,C0B1xFE,sDACE,oC1ByxFJ,C0BtxFI,+EACE,wBAnBG,CAoBH,kDAAA,CAAA,0C1BwxFN,C0B1xFI,yEACE,wBAnBG,CAoBH,0C1BwxFN,C0B1xFI,6DACE,wBAnBG,CAoBH,kDAAA,CAAA,0C1BwxFN,C0BnyFE,oFACE,oB1BsyFJ,C0BvyFE,8EACE,oB1BsyFJ,C0BvyFE,kEACE,oB1BsyFJ,C0BlyFE,2FACE,mC1BqyFJ,C0BtyFE,qFACE,mC1BqyFJ,C0BtyFE,yEACE,mC1BqyFJ,C0BlyFI,kGACE,wBAnBG,CAoBH,sDAAA,CAAA,8C1BoyFN,C0BtyFI,4FACE,wBAnBG,CAoBH,8C1BoyFN,C0BtyFI,gFACE,wBAnBG,CAoBH,sDAAA,CAAA,8C1BoyFN,C0B/yFE,uEACE,oB1BkzFJ,C0BnzFE,iEACE,oB1BkzFJ,C0BnzFE,qDACE,oB1BkzFJ,C0B9yFE,8EACE,mC1BizFJ,C0BlzFE,wEACE,mC1BizFJ,C0BlzFE,4DACE,mC1BizFJ,C0B9yFI,qFACE,wBAnBG,CAoBH,kDAAA,CAAA,0C1BgzFN,C0BlzFI,+EACE,wBAnBG,CAoBH,0C1BgzFN,C0BlzFI,mEACE,wBAnBG,CAoBH,kDAAA,CAAA,0C1BgzFN,C0B3zFE,iFACE,oB1B8zFJ,C0B/zFE,2EACE,oB1B8zFJ,C0B/zFE,+DACE,oB1B8zFJ,C0B1zFE,wFACE,mC1B6zFJ,C0B9zFE,kFACE,mC1B6zFJ,C0B9zFE,sEACE,mC1B6zFJ,C0B1zFI,+FACE,wBAnBG,CAoBH,iDAAA,CAAA,yC1B4zFN,C0B9zFI,yFACE,wBAnBG,CAoBH,yC1B4zFN,C0B9zFI,6EACE,wBAnBG,CAoBH,iDAAA,CAAA,yC1B4zFN,C0Bv0FE,iFACE,oB1B00FJ,C0B30FE,2EACE,oB1B00FJ,C0B30FE,+DACE,oB1B00FJ,C0Bt0FE,wFACE,kC1By0FJ,C0B10FE,kFACE,kC1By0FJ,C0B10FE,sEACE,kC1By0FJ,C0Bt0FI,+FACE,wBAnBG,CAoBH,qDAAA,CAAA,6C1Bw0FN,C0B10FI,yFACE,wBAnBG,CAoBH,6C1Bw0FN,C0B10FI,6EACE,wBAnBG,CAoBH,qDAAA,CAAA,6C1Bw0FN,C0Bn1FE,gFACE,oB1Bs1FJ,C0Bv1FE,0EACE,oB1Bs1FJ,C0Bv1FE,8DACE,oB1Bs1FJ,C0Bl1FE,uFACE,oC1Bq1FJ,C0Bt1FE,iFACE,oC1Bq1FJ,C0Bt1FE,qEACE,oC1Bq1FJ,C0Bl1FI,8FACE,wBAnBG,CAoBH,sDAAA,CAAA,8C1Bo1FN,C0Bt1FI,wFACE,wBAnBG,CAoBH,8C1Bo1FN,C0Bt1FI,4EACE,wBAnBG,CAoBH,sDAAA,CAAA,8C1Bo1FN,C0B/1FE,wFACE,oB1Bk2FJ,C0Bn2FE,kFACE,oB1Bk2FJ,C0Bn2FE,sEACE,oB1Bk2FJ,C0B91FE,+FACE,mC1Bi2FJ,C0Bl2FE,yFACE,mC1Bi2FJ,C0Bl2FE,6EACE,mC1Bi2FJ,C0B91FI,sGACE,wBAnBG,CAoBH,qDAAA,CAAA,6C1Bg2FN,C
0Bl2FI,gGACE,wBAnBG,CAoBH,6C1Bg2FN,C0Bl2FI,oFACE,wBAnBG,CAoBH,qDAAA,CAAA,6C1Bg2FN,C0B32FE,mFACE,oB1B82FJ,C0B/2FE,6EACE,oB1B82FJ,C0B/2FE,iEACE,oB1B82FJ,C0B12FE,0FACE,mC1B62FJ,C0B92FE,oFACE,mC1B62FJ,C0B92FE,wEACE,mC1B62FJ,C0B12FI,iGACE,wBAnBG,CAoBH,qDAAA,CAAA,6C1B42FN,C0B92FI,2FACE,wBAnBG,CAoBH,6C1B42FN,C0B92FI,+EACE,wBAnBG,CAoBH,qDAAA,CAAA,6C1B42FN,C0Bv3FE,0EACE,oB1B03FJ,C0B33FE,oEACE,oB1B03FJ,C0B33FE,wDACE,oB1B03FJ,C0Bt3FE,iFACE,mC1By3FJ,C0B13FE,2EACE,mC1By3FJ,C0B13FE,+DACE,mC1By3FJ,C0Bt3FI,wFACE,wBAnBG,CAoBH,oDAAA,CAAA,4C1Bw3FN,C0B13FI,kFACE,wBAnBG,CAoBH,4C1Bw3FN,C0B13FI,sEACE,wBAnBG,CAoBH,oDAAA,CAAA,4C1Bw3FN,C0Bn4FE,gEACE,oB1Bs4FJ,C0Bv4FE,0DACE,oB1Bs4FJ,C0Bv4FE,8CACE,oB1Bs4FJ,C0Bl4FE,uEACE,kC1Bq4FJ,C0Bt4FE,iEACE,kC1Bq4FJ,C0Bt4FE,qDACE,kC1Bq4FJ,C0Bl4FI,8EACE,wBAnBG,CAoBH,iDAAA,CAAA,yC1Bo4FN,C0Bt4FI,wEACE,wBAnBG,CAoBH,yC1Bo4FN,C0Bt4FI,4DACE,wBAnBG,CAoBH,iDAAA,CAAA,yC1Bo4FN,C0B/4FE,oEACE,oB1Bk5FJ,C0Bn5FE,8DACE,oB1Bk5FJ,C0Bn5FE,kDACE,oB1Bk5FJ,C0B94FE,2EACE,oC1Bi5FJ,C0Bl5FE,qEACE,oC1Bi5FJ,C0Bl5FE,yDACE,oC1Bi5FJ,C0B94FI,kFACE,wBAnBG,CAoBH,qDAAA,CAAA,6C1Bg5FN,C0Bl5FI,4EACE,wBAnBG,CAoBH,6C1Bg5FN,C0Bl5FI,gEACE,wBAnBG,CAoBH,qDAAA,CAAA,6C1Bg5FN,C0B35FE,wEACE,oB1B85FJ,C0B/5FE,kEACE,oB1B85FJ,C0B/5FE,sDACE,oB1B85FJ,C0B15FE,+EACE,kC1B65FJ,C0B95FE,yEACE,kC1B65FJ,C0B95FE,6DACE,kC1B65FJ,C0B15FI,sFACE,wBAnBG,CAoBH,mDAAA,CAAA,2C1B45FN,C0B95FI,gFACE,wBAnBG,CAoBH,2C1B45FN,C0B95FI,oEACE,wBAnBG,CAoBH,mDAAA,CAAA,2C1B45FN,C4BnjGA,MACE,wM5BsjGF,C4B7iGE,sBACE,uCAAA,CACA,gB5BgjGJ,C4B7iGI,mCACE,a5B+iGN,C4BhjGI,mCACE,c5B+iGN,C4B3iGM,4BACE,sB5B6iGR,C4B1iGQ,mCACE,gC5B4iGV,C4BxiGQ,2DAEE,SAAA,CADA,uBAAA,CAEA,e5B0iGV,C4BtiGQ,0EAEE,SAAA,CADA,uB5ByiGV,C4B1iGQ,uEAEE,SAAA,CADA,uB5ByiGV,C4B1iGQ,iEAEE,SAAA,CADA,uB5ByiGV,C4BpiGQ,yCACE,Y5BsiGV,C4B/hGE,0BAEE,eAAA,CADA,e5BkiGJ,C4B9hGI,+BACE,oB5BgiGN,C4B3hGE,gDACE,Y5B6hGJ,C4BzhGE,8BAEE,+BAAA,CADA,oBAAA,CAGA,WAAA,CAGA,SAAA,CADA,4BAAA,CAEA,4DACE,CAJF,0B5B6hGJ,C4BphGI,aAdF,8BAeI,+BAAA,CAEA,SAAA,CADA,uB5BwhGJ,CACF,C4BphGI,wCACE,6B5BshGN,C4BlhGI,oCACE,+B5BohGN,C4Bh
hGI,qCAIE,6BAAA,CAKA,UAAA,CARA,oBAAA,CAEA,YAAA,CAEA,2CAAA,CAAA,mCAAA,CACA,4BAAA,CAAA,oBAAA,CACA,6BAAA,CAAA,qBAAA,CACA,yBAAA,CAAA,iBAAA,CANA,W5ByhGN,C4B5gGQ,mDACE,oB5B8gGV,C6B5nGE,kCAEE,iB7BkoGJ,C6BpoGE,kCAEE,kB7BkoGJ,C6BpoGE,wBAGE,yCAAA,CAFA,oBAAA,CAGA,SAAA,CACA,mC7B+nGJ,C6B1nGI,aAVF,wBAWI,Y7B6nGJ,CACF,C6BznGE,mFAEE,SAAA,CACA,2CACE,CADF,mC7B2nGJ,C6B9nGE,gFAEE,SAAA,CACA,wCACE,CADF,mC7B2nGJ,C6B9nGE,0EAEE,SAAA,CACA,mC7B2nGJ,C6BrnGE,mFAEE,+B7BunGJ,C6BznGE,gFAEE,+B7BunGJ,C6BznGE,0EAEE,+B7BunGJ,C6BnnGE,oBACE,yBAAA,CACA,uBAAA,CAGA,yE7BmnGJ,CKp/FI,sCwBrHE,qDACE,uB7B4mGN,CACF,C6BvmGE,0CACE,yB7BymGJ,C6B1mGE,uCACE,yB7BymGJ,C6B1mGE,iCACE,yB7BymGJ,C6BrmGE,sBACE,0B7BumGJ,C8BlqGE,2BACE,a9BqqGJ,CKh/FI,wCyBtLF,2BAKI,e9BqqGJ,CACF,C8BlqGI,6BAEE,0BAAA,CAAA,2BAAA,CACA,eAAA,CACA,iBAAA,CAHA,yBAAA,CAAA,sBAAA,CAAA,iB9BuqGN,C8BjqGM,2CACE,kB9BmqGR,C+BprGE,kDACE,kCAAA,CAAA,0B/BurGJ,C+BxrGE,+CACE,0B/BurGJ,C+BxrGE,yCACE,kCAAA,CAAA,0B/BurGJ,C+BnrGE,uBACE,4C/BqrGJ,C+BjrGE,uBACE,4C/BmrGJ,C+B/qGE,4BACE,qC/BirGJ,C+B9qGI,mCACE,a/BgrGN,C+B5qGI,kCACE,a/B8qGN,C+BzqGE,0BAKE,eAAA,CAJA,aAAA,CACA,YAAA,CAEA,aAAA,CADA,kBAAA,CAAA,mB/B6qGJ,C+BxqGI,uCACE,e/B0qGN,C+BtqGI,sCACE,kB/BwqGN,CgCvtGA,MACE,8LhC0tGF,CgCjtGE,oBACE,iBAAA,CAEA,gBAAA,CADA,ahCqtGJ,CgCjtGI,wCACE,uBhCmtGN,CgC/sGI,gCAEE,eAAA,CADA,gBhCktGN,CgC3sGM,wCACE,mBhC6sGR,CgCvsGE,8BAGE,oBhC4sGJ,CgC/sGE,8BAGE,mBhC4sGJ,CgC/sGE,8BAIE,4BhC2sGJ,CgC/sGE,4DAKE,6BhC0sGJ,CgC/sGE,8BAKE,4BhC0sGJ,CgC/sGE,oBAME,cAAA,CALA,aAAA,CACA,ehC6sGJ,CgCtsGI,kCACE,uCAAA,CACA,oBhCwsGN,CgCpsGI,wCAEE,uCAAA,CADA,YhCusGN,CgClsGI,oCAGE,WhC8sGN,CgCjtGI,oCAGE,UhC8sGN,CgCjtGI,0BAME,6BAAA,CAOA,UAAA,CARA,WAAA,CAEA,yCAAA,CAAA,iCAAA,CACA,4BAAA,CAAA,oBAAA,CACA,6BAAA,CAAA,qBAAA,CACA,yBAAA,CAAA,iBAAA,CATA,iBAAA,CACA,UAAA,CASA,sBAAA,CACA,yBAAA,CARA,UhC6sGN,CgCjsGM,oCACE,wBhCmsGR,CgC9rGI,4BACE,YhCgsGN,CgC3rGI,4CACE,YhC6rGN,CiChxGE,qDACE,mBAAA,CACA,cAAA,CACA,uBjCmxGJ,CiCtxGE,kDACE,mBAAA,CACA,cAAA,CACA,uBjCmxGJ,CiCtxGE,4CACE,mBAAA,CACA,cAAA,CACA,uBjCmxGJ,CiChxGI,yDAGE,iBAAA,CADA,eAAA,CADA,ajCoxGN,CiCrx
GI,sDAGE,iBAAA,CADA,eAAA,CADA,ajCoxGN,CiCrxGI,gDAGE,iBAAA,CADA,eAAA,CADA,ajCoxGN,CkC1xGE,gCACE,sClC6xGJ,CkC9xGE,6BACE,sClC6xGJ,CkC9xGE,uBACE,sClC6xGJ,CkC1xGE,cACE,yClC4xGJ,CkChxGE,4DACE,oClCkxGJ,CkCnxGE,yDACE,oClCkxGJ,CkCnxGE,mDACE,oClCkxGJ,CkC1wGE,6CACE,qClC4wGJ,CkC7wGE,0CACE,qClC4wGJ,CkC7wGE,oCACE,qClC4wGJ,CkClwGE,oDACE,oClCowGJ,CkCrwGE,iDACE,oClCowGJ,CkCrwGE,2CACE,oClCowGJ,CkC3vGE,gDACE,qClC6vGJ,CkC9vGE,6CACE,qClC6vGJ,CkC9vGE,uCACE,qClC6vGJ,CkCxvGE,gCACE,kClC0vGJ,CkC3vGE,6BACE,kClC0vGJ,CkC3vGE,uBACE,kClC0vGJ,CkCpvGE,qCACE,sClCsvGJ,CkCvvGE,kCACE,sClCsvGJ,CkCvvGE,4BACE,sClCsvGJ,CkC/uGE,yCACE,sClCivGJ,CkClvGE,sCACE,sClCivGJ,CkClvGE,gCACE,sClCivGJ,CkC1uGE,yCACE,qClC4uGJ,CkC7uGE,sCACE,qClC4uGJ,CkC7uGE,gCACE,qClC4uGJ,CkCnuGE,gDACE,qClCquGJ,CkCtuGE,6CACE,qClCquGJ,CkCtuGE,uCACE,qClCquGJ,CkC7tGE,6CACE,sClC+tGJ,CkChuGE,0CACE,sClC+tGJ,CkChuGE,oCACE,sClC+tGJ,CkCptGE,yDACE,qClCstGJ,CkCvtGE,sDACE,qClCstGJ,CkCvtGE,gDACE,qClCstGJ,CkCjtGE,iCAGE,mBAAA,CAFA,gBAAA,CACA,gBlCotGJ,CkCttGE,8BAGE,mBAAA,CAFA,gBAAA,CACA,gBlCotGJ,CkCttGE,wBAGE,mBAAA,CAFA,gBAAA,CACA,gBlCotGJ,CkChtGE,eACE,4ClCktGJ,CkC/sGE,eACE,4ClCitGJ,CkC7sGE,gBAIE,wCAAA,CAHA,aAAA,CACA,wBAAA,CACA,wBlCgtGJ,CkC3sGE,yBAOE,wCAAA,CACA,+DAAA,CACA,4BAAA,CACA,6BAAA,CARA,iBAAA,CAIA,eAAA,CADA,eAAA,CAFA,cAAA,CACA,oCAAA,CAHA,iBlCstGJ,CkC1sGI,6BACE,YlC4sGN,CkCzsGM,kCACE,wBAAA,CACA,yBlC2sGR,CkCrsGE,iCAWE,wCAAA,CACA,+DAAA,CAFA,uCAAA,CAGA,0BAAA,CAPA,UAAA,CAJA,oBAAA,CAMA,2BAAA,CADA,2BAAA,CAEA,2BAAA,CARA,uBAAA,CAAA,eAAA,CAaA,wBAAA,CAAA,qBAAA,CAAA,oBAAA,CAAA,gBAAA,CATA,SlC8sGJ,CkC5rGE,sBACE,iBAAA,CACA,iBlC8rGJ,CkCtrGI,sCACE,gBlCwrGN,CkCprGI,gDACE,YlCsrGN,CkC5qGA,gBACE,iBlC+qGF,CkC3qGE,uCACE,aAAA,CACA,SlC6qGJ,CkC/qGE,oCACE,aAAA,CACA,SlC6qGJ,CkC/qGE,8BACE,aAAA,CACA,SlC6qGJ,CkCxqGE,mBACE,YlC0qGJ,CkCrqGE,oBACE,QlCuqGJ,CkCnqGE,4BACE,WAAA,CACA,SAAA,CACA,elCqqGJ,CkClqGI,0CACE,YlCoqGN,CkC9pGE,yBAIE,wCAAA,CAEA,+BAAA,CADA,4BAAA,CAFA,eAAA,CADA,oDAAA,CAKA,wBAAA,CAAA,qBAAA,CAAA,oBAAA,CAAA,gBlCgqGJ,CkC5pGE,2BAEE,+DAAA,CADA,2BlC+pGJ,CkC3pGI,+BACE,uCAAA,CACA,
gBlC6pGN,CkCxpGE,sBACE,MAAA,CACA,WlC0pGJ,CkCrpGA,aACE,alCwpGF,CkC9oGE,4BAEE,aAAA,CADA,YlCkpGJ,CkC9oGI,wDAEE,2BAAA,CADA,wBlCipGN,CkC3oGE,+BAKE,2CAAA,CAEA,+BAAA,CADA,gCAAA,CADA,sBAAA,CAJA,mBAAA,CAEA,gBAAA,CADA,alCkpGJ,CkC1oGI,qCAEE,UAAA,CACA,UAAA,CAFA,alC8oGN,CK/wGI,wC6BgJF,8BACE,iBlCmoGF,CkCznGE,wSAGE,elC+nGJ,CkC3nGE,sCAEE,mBAAA,CACA,eAAA,CADA,oBAAA,CADA,kBAAA,CAAA,mBlC+nGJ,CACF,CDt9GI,kDAIE,+BAAA,CACA,8BAAA,CAFA,aAAA,CADA,QAAA,CADA,iBC49GN,CD79GI,+CAIE,+BAAA,CACA,8BAAA,CAFA,aAAA,CADA,QAAA,CADA,iBC49GN,CD79GI,yCAIE,+BAAA,CACA,8BAAA,CAFA,aAAA,CADA,QAAA,CADA,iBC49GN,CDp9GI,uBAEE,uCAAA,CADA,cCu9GN,CDl6GM,iHAEE,WAlDkB,CAiDlB,kBC66GR,CD96GM,6HAEE,WAlDkB,CAiDlB,kBCy7GR,CD17GM,6HAEE,WAlDkB,CAiDlB,kBCq8GR,CDt8GM,oHAEE,WAlDkB,CAiDlB,kBCi9GR,CDl9GM,0HAEE,WAlDkB,CAiDlB,kBC69GR,CD99GM,uHAEE,WAlDkB,CAiDlB,kBCy+GR,CD1+GM,uHAEE,WAlDkB,CAiDlB,kBCq/GR,CDt/GM,6HAEE,WAlDkB,CAiDlB,kBCigHR,CDlgHM,yCAEE,WAlDkB,CAiDlB,kBCqgHR,CDtgHM,yCAEE,WAlDkB,CAiDlB,kBCygHR,CD1gHM,0CAEE,WAlDkB,CAiDlB,kBC6gHR,CD9gHM,uCAEE,WAlDkB,CAiDlB,kBCihHR,CDlhHM,wCAEE,WAlDkB,CAiDlB,kBCqhHR,CDthHM,sCAEE,WAlDkB,CAiDlB,kBCyhHR,CD1hHM,wCAEE,WAlDkB,CAiDlB,kBC6hHR,CD9hHM,oCAEE,WAlDkB,CAiDlB,kBCiiHR,CDliHM,2CAEE,WAlDkB,CAiDlB,kBCqiHR,CDtiHM,qCAEE,WAlDkB,CAiDlB,kBCyiHR,CD1iHM,oCAEE,WAlDkB,CAiDlB,kBC6iHR,CD9iHM,kCAEE,WAlDkB,CAiDlB,kBCijHR,CDljHM,qCAEE,WAlDkB,CAiDlB,kBCqjHR,CDtjHM,mCAEE,WAlDkB,CAiDlB,kBCyjHR,CD1jHM,qCAEE,WAlDkB,CAiDlB,kBC6jHR,CD9jHM,wCAEE,WAlDkB,CAiDlB,kBCikHR,CDlkHM,sCAEE,WAlDkB,CAiDlB,kBCqkHR,CDtkHM,2CAEE,WAlDkB,CAiDlB,kBCykHR,CD9jHM,iCAEE,WAPkB,CAMlB,iBCikHR,CDlkHM,uCAEE,WAPkB,CAMlB,iBCqkHR,CDtkHM,mCAEE,WAPkB,CAMlB,iBCykHR,CmC3pHA,MACE,qMAAA,CACA,mMnC8pHF,CmCrpHE,wBAKE,mBAAA,CAHA,YAAA,CACA,qBAAA,CACA,YAAA,CAHA,iBnC4pHJ,CmClpHI,8BAGE,QAAA,CACA,SAAA,CAHA,iBAAA,CACA,OnCspHN,CmCjpHM,qCACE,0BnCmpHR,CmCpnHE,2BAKE,uBAAA,CADA,+DAAA,CAHA,YAAA,CACA,cAAA,CACA,aAAA,CAGA,oBnCsnHJ,CmCnnHI,aATF,2BAUI,gBnCsnHJ,CACF,CmCnnHI,cAGE,+BACE,iBnCmnHN,CmChnHM,sCAOE,oCAAA,CALA,QAAA,CAWA,UAAA,CATA,aAAA,CAEA,UAAA,CAHA,MAAA,CAFA,iBAAA
,CAOA,2CAAA,CACA,qCACE,CAEF,kDAAA,CAPA,+BnCwnHR,CACF,CmC3mHI,8CACE,YnC6mHN,CmCzmHI,iCAQE,qCAAA,CACA,6BAAA,CALA,uCAAA,CAMA,cAAA,CATA,aAAA,CAKA,gBAAA,CADA,eAAA,CAFA,8BAAA,CAWA,+BAAA,CAHA,2CACE,CALF,kBAAA,CALA,UnCqnHN,CmCtmHM,aAII,6CACE,OnCqmHV,CmCtmHQ,8CACE,OnCwmHV,CmCzmHQ,8CACE,OnC2mHV,CmC5mHQ,8CACE,OnC8mHV,CmC/mHQ,8CACE,OnCinHV,CmClnHQ,8CACE,OnConHV,CmCrnHQ,8CACE,OnCunHV,CmCxnHQ,8CACE,OnC0nHV,CmC3nHQ,8CACE,OnC6nHV,CmC9nHQ,+CACE,QnCgoHV,CmCjoHQ,+CACE,QnCmoHV,CmCpoHQ,+CACE,QnCsoHV,CmCvoHQ,+CACE,QnCyoHV,CmC1oHQ,+CACE,QnC4oHV,CmC7oHQ,+CACE,QnC+oHV,CmChpHQ,+CACE,QnCkpHV,CmCnpHQ,+CACE,QnCqpHV,CmCtpHQ,+CACE,QnCwpHV,CmCzpHQ,+CACE,QnC2pHV,CmC5pHQ,+CACE,QnC8pHV,CACF,CmCzpHM,uCACE,+BnC2pHR,CmCrpHE,4BACE,UnCupHJ,CmCppHI,aAJF,4BAKI,gBnCupHJ,CACF,CmCnpHE,0BACE,YnCqpHJ,CmClpHI,aAJF,0BAKI,anCqpHJ,CmCjpHM,sCACE,OnCmpHR,CmCppHM,uCACE,OnCspHR,CmCvpHM,uCACE,OnCypHR,CmC1pHM,uCACE,OnC4pHR,CmC7pHM,uCACE,OnC+pHR,CmChqHM,uCACE,OnCkqHR,CmCnqHM,uCACE,OnCqqHR,CmCtqHM,uCACE,OnCwqHR,CmCzqHM,uCACE,OnC2qHR,CmC5qHM,wCACE,QnC8qHR,CmC/qHM,wCACE,QnCirHR,CmClrHM,wCACE,QnCorHR,CmCrrHM,wCACE,QnCurHR,CmCxrHM,wCACE,QnC0rHR,CmC3rHM,wCACE,QnC6rHR,CmC9rHM,wCACE,QnCgsHR,CmCjsHM,wCACE,QnCmsHR,CmCpsHM,wCACE,QnCssHR,CmCvsHM,wCACE,QnCysHR,CmC1sHM,wCACE,QnC4sHR,CACF,CmCtsHI,+FAEE,QnCwsHN,CmCrsHM,yGACE,wBAAA,CACA,yBnCwsHR,CmC/rHM,2DAEE,wBAAA,CACA,yBAAA,CAFA,QnCmsHR,CmC5rHM,iEACE,QnC8rHR,CmC3rHQ,qLAGE,wBAAA,CACA,yBAAA,CAFA,QnC+rHV,CmCzrHQ,6FACE,wBAAA,CACA,yBnC2rHV,CmCtrHM,yDACE,kBnCwrHR,CmCnrHI,sCACE,QnCqrHN,CmChrHE,2BAEE,iBAAA,CAKA,kBAAA,CADA,uCAAA,CAEA,cAAA,CAPA,aAAA,CAGA,YAAA,CACA,gBAAA,CAKA,mBAAA,CADA,gCAAA,CANA,WnCyrHJ,CmC/qHI,iCAEE,uDAAA,CADA,+BnCkrHN,CmC7qHI,iCAIE,6BAAA,CAQA,UAAA,CAXA,aAAA,CAEA,WAAA,CAKA,8CAAA,CAAA,sCAAA,CACA,4BAAA,CAAA,oBAAA,CACA,6BAAA,CAAA,qBAAA,CACA,yBAAA,CAAA,iBAAA,CANA,+CACE,CAJF,UnCurHN,CmCxqHE,4BAME,+EACE,CALF,YAAA,CAGA,aAAA,CAFA,qBAAA,CAUA,mBAAA,CAZA,iBAAA,CAWA,wBAAA,CARA,YnC8qHJ,CmClqHI,sCACE,wBnCoqHN,CmChqHI,oCACE,SnCkqHN,CmC9pHI,kCAGE,8EACE,CAFF,mBAAA,CADA,OnCkqHN,CmCxpHM,uDACE,8CAA
A,CAAA,sCnC0pHR,CK1wHI,wC8B8HF,wDAGE,kBnCipHF,CmCppHA,wDAGE,mBnCipHF,CmCppHA,8CAEE,eAAA,CADA,eAAA,CAGA,iCnCgpHF,CmC5oHE,8DACE,mBnC+oHJ,CmChpHE,8DACE,kBnC+oHJ,CmChpHE,oDAEE,UnC8oHJ,CmC1oHE,8EAEE,kBnC6oHJ,CmC/oHE,8EAEE,mBnC6oHJ,CmC/oHE,8EAGE,kBnC4oHJ,CmC/oHE,8EAGE,mBnC4oHJ,CmC/oHE,oEACE,UnC8oHJ,CmCxoHE,8EAEE,mBnC2oHJ,CmC7oHE,8EAEE,kBnC2oHJ,CmC7oHE,8EAGE,mBnC0oHJ,CmC7oHE,8EAGE,kBnC0oHJ,CmC7oHE,oEACE,UnC4oHJ,CACF,CmC9nHE,cAHF,olDAII,+BnCioHF,CmC9nHE,g8GACE,sCnCgoHJ,CACF,CmC3nHA,4sDACE,uDnC8nHF,CmC1nHA,wmDACE,anC6nHF,CoC1+HA,MACE,mVAAA,CAEA,4VpC8+HF,CoCp+HE,4BAEE,oBAAA,CADA,iBpCw+HJ,CoCn+HI,sDAGE,SpCq+HN,CoCx+HI,sDAGE,UpCq+HN,CoCx+HI,4CACE,iBAAA,CACA,SpCs+HN,CoCh+HE,+CAEE,SAAA,CADA,UpCm+HJ,CoC99HE,kDAGE,WpCw+HJ,CoC3+HE,kDAGE,YpCw+HJ,CoC3+HE,wCAME,qDAAA,CAKA,UAAA,CANA,aAAA,CAEA,0CAAA,CAAA,kCAAA,CACA,4BAAA,CAAA,oBAAA,CACA,6BAAA,CAAA,qBAAA,CACA,yBAAA,CAAA,iBAAA,CATA,iBAAA,CACA,SAAA,CAEA,YpCu+HJ,CoC59HE,gEACE,wBTyWa,CSxWb,mDAAA,CAAA,2CpC89HJ,CqChhIA,QACE,8DAAA,CAGA,+CAAA,CACA,iEAAA,CACA,oDAAA,CACA,sDAAA,CACA,mDrCihIF,CqC7gIA,SAEE,kBAAA,CADA,YrCihIF,CKx3HI,mCiChKA,8BACE,UtCgiIJ,CsCjiIE,8BACE,WtCgiIJ,CsCjiIE,8BAIE,kBtC6hIJ,CsCjiIE,8BAIE,iBtC6hIJ,CsCjiIE,oBAKE,mBAAA,CAFA,YAAA,CADA,atC+hIJ,CsCzhII,kCACE,WtC4hIN,CsC7hII,kCACE,UtC4hIN,CsC7hII,kCAEE,iBAAA,CAAA,ctC2hIN,CsC7hII,kCAEE,aAAA,CAAA,kBtC2hIN,CACF","file":"main.css"} \ No newline at end of file diff --git a/assets/stylesheets/palette.cbb835fc.min.css b/assets/stylesheets/palette.cbb835fc.min.css new file mode 100644 index 000000000..30f9264c3 --- /dev/null +++ b/assets/stylesheets/palette.cbb835fc.min.css @@ -0,0 +1 @@ +@media 
screen{[data-md-color-scheme=slate]{--md-hue:232;--md-default-fg-color:hsla(var(--md-hue),75%,95%,1);--md-default-fg-color--light:hsla(var(--md-hue),75%,90%,0.62);--md-default-fg-color--lighter:hsla(var(--md-hue),75%,90%,0.32);--md-default-fg-color--lightest:hsla(var(--md-hue),75%,90%,0.12);--md-default-bg-color:hsla(var(--md-hue),15%,21%,1);--md-default-bg-color--light:hsla(var(--md-hue),15%,21%,0.54);--md-default-bg-color--lighter:hsla(var(--md-hue),15%,21%,0.26);--md-default-bg-color--lightest:hsla(var(--md-hue),15%,21%,0.07);--md-code-fg-color:hsla(var(--md-hue),18%,86%,1);--md-code-bg-color:hsla(var(--md-hue),15%,15%,1);--md-code-hl-color:rgba(66,135,255,.15);--md-code-hl-number-color:#e6695b;--md-code-hl-special-color:#f06090;--md-code-hl-function-color:#c973d9;--md-code-hl-constant-color:#9383e2;--md-code-hl-keyword-color:#6791e0;--md-code-hl-string-color:#2fb170;--md-code-hl-name-color:var(--md-code-fg-color);--md-code-hl-operator-color:var(--md-default-fg-color--light);--md-code-hl-punctuation-color:var(--md-default-fg-color--light);--md-code-hl-comment-color:var(--md-default-fg-color--light);--md-code-hl-generic-color:var(--md-default-fg-color--light);--md-code-hl-variable-color:var(--md-default-fg-color--light);--md-typeset-color:var(--md-default-fg-color);--md-typeset-a-color:var(--md-primary-fg-color);--md-typeset-mark-color:rgba(66,135,255,.3);--md-typeset-kbd-color:hsla(var(--md-hue),15%,94%,0.12);--md-typeset-kbd-accent-color:hsla(var(--md-hue),15%,94%,0.2);--md-typeset-kbd-border-color:hsla(var(--md-hue),15%,14%,1);--md-typeset-table-color:hsla(var(--md-hue),75%,95%,0.12);--md-admonition-fg-color:var(--md-default-fg-color);--md-admonition-bg-color:var(--md-default-bg-color);--md-footer-bg-color:hsla(var(--md-hue),15%,12%,0.87);--md-footer-bg-color--dark:hsla(var(--md-hue),15%,10%,1);--md-shadow-z1:0 0.2rem 0.5rem rgba(0,0,0,.2),0 0 0.05rem rgba(0,0,0,.1);--md-shadow-z2:0 0.2rem 0.5rem rgba(0,0,0,.3),0 0 0.05rem rgba(0,0,0,.25);--md-shadow-z3:0 
0.2rem 0.5rem rgba(0,0,0,.4),0 0 0.05rem rgba(0,0,0,.35)}[data-md-color-scheme=slate] img[src$="#gh-light-mode-only"],[data-md-color-scheme=slate] img[src$="#only-light"]{display:none}[data-md-color-scheme=slate] img[src$="#gh-dark-mode-only"],[data-md-color-scheme=slate] img[src$="#only-dark"]{display:initial}[data-md-color-scheme=slate][data-md-color-primary=pink]{--md-typeset-a-color:#ed5487}[data-md-color-scheme=slate][data-md-color-primary=purple]{--md-typeset-a-color:#bd78c9}[data-md-color-scheme=slate][data-md-color-primary=deep-purple]{--md-typeset-a-color:#a682e3}[data-md-color-scheme=slate][data-md-color-primary=indigo]{--md-typeset-a-color:#6c91d5}[data-md-color-scheme=slate][data-md-color-primary=teal]{--md-typeset-a-color:#00ccb8}[data-md-color-scheme=slate][data-md-color-primary=green]{--md-typeset-a-color:#71c174}[data-md-color-scheme=slate][data-md-color-primary=deep-orange]{--md-typeset-a-color:#ff9575}[data-md-color-scheme=slate][data-md-color-primary=brown]{--md-typeset-a-color:#c7846b}[data-md-color-scheme=slate][data-md-color-primary=black],[data-md-color-scheme=slate][data-md-color-primary=blue-grey],[data-md-color-scheme=slate][data-md-color-primary=grey],[data-md-color-scheme=slate][data-md-color-primary=white]{--md-typeset-a-color:#6c91d5}[data-md-color-switching] *,[data-md-color-switching] :after,[data-md-color-switching] 
:before{transition-duration:0ms!important}}[data-md-color-accent=red]{--md-accent-fg-color:#ff1947;--md-accent-fg-color--transparent:rgba(255,25,71,.1);--md-accent-bg-color:#fff;--md-accent-bg-color--light:hsla(0,0%,100%,.7)}[data-md-color-accent=pink]{--md-accent-fg-color:#f50056;--md-accent-fg-color--transparent:rgba(245,0,86,.1);--md-accent-bg-color:#fff;--md-accent-bg-color--light:hsla(0,0%,100%,.7)}[data-md-color-accent=purple]{--md-accent-fg-color:#df41fb;--md-accent-fg-color--transparent:rgba(223,65,251,.1);--md-accent-bg-color:#fff;--md-accent-bg-color--light:hsla(0,0%,100%,.7)}[data-md-color-accent=deep-purple]{--md-accent-fg-color:#7c4dff;--md-accent-fg-color--transparent:rgba(124,77,255,.1);--md-accent-bg-color:#fff;--md-accent-bg-color--light:hsla(0,0%,100%,.7)}[data-md-color-accent=indigo]{--md-accent-fg-color:#526cfe;--md-accent-fg-color--transparent:rgba(82,108,254,.1);--md-accent-bg-color:#fff;--md-accent-bg-color--light:hsla(0,0%,100%,.7)}[data-md-color-accent=blue]{--md-accent-fg-color:#4287ff;--md-accent-fg-color--transparent:rgba(66,135,255,.1);--md-accent-bg-color:#fff;--md-accent-bg-color--light:hsla(0,0%,100%,.7)}[data-md-color-accent=light-blue]{--md-accent-fg-color:#0091eb;--md-accent-fg-color--transparent:rgba(0,145,235,.1);--md-accent-bg-color:#fff;--md-accent-bg-color--light:hsla(0,0%,100%,.7)}[data-md-color-accent=cyan]{--md-accent-fg-color:#00bad6;--md-accent-fg-color--transparent:rgba(0,186,214,.1);--md-accent-bg-color:#fff;--md-accent-bg-color--light:hsla(0,0%,100%,.7)}[data-md-color-accent=teal]{--md-accent-fg-color:#00bda4;--md-accent-fg-color--transparent:rgba(0,189,164,.1);--md-accent-bg-color:#fff;--md-accent-bg-color--light:hsla(0,0%,100%,.7)}[data-md-color-accent=green]{--md-accent-fg-color:#00c753;--md-accent-fg-color--transparent:rgba(0,199,83,.1);--md-accent-bg-color:#fff;--md-accent-bg-color--light:hsla(0,0%,100%,.7)}[data-md-color-accent=light-green]{--md-accent-fg-color:#63de17;--md-accent-fg-color--transparent:rgba(99,22
2,23,.1);--md-accent-bg-color:#fff;--md-accent-bg-color--light:hsla(0,0%,100%,.7)}[data-md-color-accent=lime]{--md-accent-fg-color:#b0eb00;--md-accent-fg-color--transparent:rgba(176,235,0,.1);--md-accent-bg-color:rgba(0,0,0,.87);--md-accent-bg-color--light:rgba(0,0,0,.54)}[data-md-color-accent=yellow]{--md-accent-fg-color:#ffd500;--md-accent-fg-color--transparent:rgba(255,213,0,.1);--md-accent-bg-color:rgba(0,0,0,.87);--md-accent-bg-color--light:rgba(0,0,0,.54)}[data-md-color-accent=amber]{--md-accent-fg-color:#fa0;--md-accent-fg-color--transparent:rgba(255,170,0,.1);--md-accent-bg-color:rgba(0,0,0,.87);--md-accent-bg-color--light:rgba(0,0,0,.54)}[data-md-color-accent=orange]{--md-accent-fg-color:#ff9100;--md-accent-fg-color--transparent:rgba(255,145,0,.1);--md-accent-bg-color:rgba(0,0,0,.87);--md-accent-bg-color--light:rgba(0,0,0,.54)}[data-md-color-accent=deep-orange]{--md-accent-fg-color:#ff6e42;--md-accent-fg-color--transparent:rgba(255,110,66,.1);--md-accent-bg-color:#fff;--md-accent-bg-color--light:hsla(0,0%,100%,.7)}[data-md-color-primary=red]{--md-primary-fg-color:#ef5552;--md-primary-fg-color--light:#e57171;--md-primary-fg-color--dark:#e53734;--md-primary-bg-color:#fff;--md-primary-bg-color--light:hsla(0,0%,100%,.7)}[data-md-color-primary=pink]{--md-primary-fg-color:#e92063;--md-primary-fg-color--light:#ec417a;--md-primary-fg-color--dark:#c3185d;--md-primary-bg-color:#fff;--md-primary-bg-color--light:hsla(0,0%,100%,.7)}[data-md-color-primary=purple]{--md-primary-fg-color:#ab47bd;--md-primary-fg-color--light:#bb69c9;--md-primary-fg-color--dark:#8c24a8;--md-primary-bg-color:#fff;--md-primary-bg-color--light:hsla(0,0%,100%,.7)}[data-md-color-primary=deep-purple]{--md-primary-fg-color:#7e56c2;--md-primary-fg-color--light:#9574cd;--md-primary-fg-color--dark:#673ab6;--md-primary-bg-color:#fff;--md-primary-bg-color--light:hsla(0,0%,100%,.7)}[data-md-color-primary=indigo]{--md-primary-fg-color:#4051b5;--md-primary-fg-color--light:#5d6cc0;--md-primary-fg-color--dark
:#303fa1;--md-primary-bg-color:#fff;--md-primary-bg-color--light:hsla(0,0%,100%,.7)}[data-md-color-primary=blue]{--md-primary-fg-color:#2094f3;--md-primary-fg-color--light:#42a5f5;--md-primary-fg-color--dark:#1975d2;--md-primary-bg-color:#fff;--md-primary-bg-color--light:hsla(0,0%,100%,.7)}[data-md-color-primary=light-blue]{--md-primary-fg-color:#02a6f2;--md-primary-fg-color--light:#28b5f6;--md-primary-fg-color--dark:#0287cf;--md-primary-bg-color:#fff;--md-primary-bg-color--light:hsla(0,0%,100%,.7)}[data-md-color-primary=cyan]{--md-primary-fg-color:#00bdd6;--md-primary-fg-color--light:#25c5da;--md-primary-fg-color--dark:#0097a8;--md-primary-bg-color:#fff;--md-primary-bg-color--light:hsla(0,0%,100%,.7)}[data-md-color-primary=teal]{--md-primary-fg-color:#009485;--md-primary-fg-color--light:#26a699;--md-primary-fg-color--dark:#007a6c;--md-primary-bg-color:#fff;--md-primary-bg-color--light:hsla(0,0%,100%,.7)}[data-md-color-primary=green]{--md-primary-fg-color:#4cae4f;--md-primary-fg-color--light:#68bb6c;--md-primary-fg-color--dark:#398e3d;--md-primary-bg-color:#fff;--md-primary-bg-color--light:hsla(0,0%,100%,.7)}[data-md-color-primary=light-green]{--md-primary-fg-color:#8bc34b;--md-primary-fg-color--light:#9ccc66;--md-primary-fg-color--dark:#689f38;--md-primary-bg-color:#fff;--md-primary-bg-color--light:hsla(0,0%,100%,.7)}[data-md-color-primary=lime]{--md-primary-fg-color:#cbdc38;--md-primary-fg-color--light:#d3e156;--md-primary-fg-color--dark:#b0b52c;--md-primary-bg-color:rgba(0,0,0,.87);--md-primary-bg-color--light:rgba(0,0,0,.54)}[data-md-color-primary=yellow]{--md-primary-fg-color:#ffec3d;--md-primary-fg-color--light:#ffee57;--md-primary-fg-color--dark:#fbc02d;--md-primary-bg-color:rgba(0,0,0,.87);--md-primary-bg-color--light:rgba(0,0,0,.54)}[data-md-color-primary=amber]{--md-primary-fg-color:#ffc105;--md-primary-fg-color--light:#ffc929;--md-primary-fg-color--dark:#ffa200;--md-primary-bg-color:rgba(0,0,0,.87);--md-primary-bg-color--light:rgba(0,0,0,.54)}[data-md-col
or-primary=orange]{--md-primary-fg-color:#ffa724;--md-primary-fg-color--light:#ffa724;--md-primary-fg-color--dark:#fa8900;--md-primary-bg-color:rgba(0,0,0,.87);--md-primary-bg-color--light:rgba(0,0,0,.54)}[data-md-color-primary=deep-orange]{--md-primary-fg-color:#ff6e42;--md-primary-fg-color--light:#ff8a66;--md-primary-fg-color--dark:#f4511f;--md-primary-bg-color:#fff;--md-primary-bg-color--light:hsla(0,0%,100%,.7)}[data-md-color-primary=brown]{--md-primary-fg-color:#795649;--md-primary-fg-color--light:#8d6e62;--md-primary-fg-color--dark:#5d4037;--md-primary-bg-color:#fff;--md-primary-bg-color--light:hsla(0,0%,100%,.7)}[data-md-color-primary=grey]{--md-primary-fg-color:#757575;--md-primary-fg-color--light:#9e9e9e;--md-primary-fg-color--dark:#616161;--md-primary-bg-color:#fff;--md-primary-bg-color--light:hsla(0,0%,100%,.7);--md-typeset-a-color:#4051b5}[data-md-color-primary=blue-grey]{--md-primary-fg-color:#546d78;--md-primary-fg-color--light:#607c8a;--md-primary-fg-color--dark:#455a63;--md-primary-bg-color:#fff;--md-primary-bg-color--light:hsla(0,0%,100%,.7);--md-typeset-a-color:#4051b5}[data-md-color-primary=light-green]:not([data-md-color-scheme=slate]){--md-typeset-a-color:#72ad2e}[data-md-color-primary=lime]:not([data-md-color-scheme=slate]){--md-typeset-a-color:#8b990a}[data-md-color-primary=yellow]:not([data-md-color-scheme=slate]){--md-typeset-a-color:#b8a500}[data-md-color-primary=amber]:not([data-md-color-scheme=slate]){--md-typeset-a-color:#d19d00}[data-md-color-primary=orange]:not([data-md-color-scheme=slate]){--md-typeset-a-color:#e68a00}[data-md-color-primary=white]{--md-primary-fg-color:#fff;--md-primary-fg-color--light:hsla(0,0%,100%,.7);--md-primary-fg-color--dark:rgba(0,0,0,.07);--md-primary-bg-color:rgba(0,0,0,.87);--md-primary-bg-color--light:rgba(0,0,0,.54);--md-typeset-a-color:#4051b5}@media screen and (min-width:60em){[data-md-color-primary=white] .md-search__form{background-color:rgba(0,0,0,.07)}[data-md-color-primary=white] 
.md-search__form:hover{background-color:rgba(0,0,0,.32)}[data-md-color-primary=white] .md-search__input+.md-search__icon{color:rgba(0,0,0,.87)}}@media screen and (min-width:76.25em){[data-md-color-primary=white] .md-tabs{border-bottom:.05rem solid rgba(0,0,0,.07)}}[data-md-color-primary=black]{--md-primary-fg-color:#000;--md-primary-fg-color--light:rgba(0,0,0,.54);--md-primary-fg-color--dark:#000;--md-primary-bg-color:#fff;--md-primary-bg-color--light:hsla(0,0%,100%,.7);--md-typeset-a-color:#4051b5}[data-md-color-primary=black] .md-header{background-color:#000}@media screen and (max-width:59.9375em){[data-md-color-primary=black] .md-nav__source{background-color:rgba(0,0,0,.87)}}@media screen and (min-width:60em){[data-md-color-primary=black] .md-search__form{background-color:hsla(0,0%,100%,.12)}[data-md-color-primary=black] .md-search__form:hover{background-color:hsla(0,0%,100%,.3)}}@media screen and (max-width:76.1875em){html [data-md-color-primary=black] .md-nav--primary .md-nav__title[for=__drawer]{background-color:#000}}@media screen and (min-width:76.25em){[data-md-color-primary=black] .md-tabs{background-color:#000}} \ No newline at end of file diff --git a/assets/stylesheets/palette.cbb835fc.min.css.map b/assets/stylesheets/palette.cbb835fc.min.css.map new file mode 100644 index 000000000..96e380c87 --- /dev/null +++ b/assets/stylesheets/palette.cbb835fc.min.css.map @@ -0,0 +1 @@ 
+{"version":3,"sources":["src/assets/stylesheets/palette/_scheme.scss","../../../src/assets/stylesheets/palette.scss","src/assets/stylesheets/palette/_accent.scss","src/assets/stylesheets/palette/_primary.scss","src/assets/stylesheets/utilities/_break.scss"],"names":[],"mappings":"AA2BA,cAGE,6BAKE,YAAA,CAGA,mDAAA,CACA,6DAAA,CACA,+DAAA,CACA,gEAAA,CACA,mDAAA,CACA,6DAAA,CACA,+DAAA,CACA,gEAAA,CAGA,gDAAA,CACA,gDAAA,CAGA,uCAAA,CACA,iCAAA,CACA,kCAAA,CACA,mCAAA,CACA,mCAAA,CACA,kCAAA,CACA,iCAAA,CACA,+CAAA,CACA,6DAAA,CACA,gEAAA,CACA,4DAAA,CACA,4DAAA,CACA,6DAAA,CAGA,6CAAA,CAGA,+CAAA,CAGA,2CAAA,CAGA,uDAAA,CACA,6DAAA,CACA,2DAAA,CAGA,yDAAA,CAGA,mDAAA,CACA,mDAAA,CAGA,qDAAA,CACA,wDAAA,CAGA,wEAAA,CAKA,yEAAA,CAKA,yECxDF,CD6DE,kHAEE,YC3DJ,CD+DE,gHAEE,eC7DJ,CDoFE,yDACE,4BClFJ,CDiFE,2DACE,4BC/EJ,CD8EE,gEACE,4BC5EJ,CD2EE,2DACE,4BCzEJ,CDwEE,yDACE,4BCtEJ,CDqEE,0DACE,4BCnEJ,CDkEE,gEACE,4BChEJ,CD+DE,0DACE,4BC7DJ,CD4DE,2OACE,4BCjDJ,CDwDA,+FAGE,iCCtDF,CACF,CCjDE,2BACE,4BAAA,CACA,oDAAA,CAOE,yBAAA,CACA,8CD6CN,CCvDE,4BACE,4BAAA,CACA,mDAAA,CAOE,yBAAA,CACA,8CDoDN,CC9DE,8BACE,4BAAA,CACA,qDAAA,CAOE,yBAAA,CACA,8CD2DN,CCrEE,mCACE,4BAAA,CACA,qDAAA,CAOE,yBAAA,CACA,8CDkEN,CC5EE,8BACE,4BAAA,CACA,qDAAA,CAOE,yBAAA,CACA,8CDyEN,CCnFE,4BACE,4BAAA,CACA,qDAAA,CAOE,yBAAA,CACA,8CDgFN,CC1FE,kCACE,4BAAA,CACA,oDAAA,CAOE,yBAAA,CACA,8CDuFN,CCjGE,4BACE,4BAAA,CACA,oDAAA,CAOE,yBAAA,CACA,8CD8FN,CCxGE,4BACE,4BAAA,CACA,oDAAA,CAOE,yBAAA,CACA,8CDqGN,CC/GE,6BACE,4BAAA,CACA,mDAAA,CAOE,yBAAA,CACA,8CD4GN,CCtHE,mCACE,4BAAA,CACA,oDAAA,CAOE,yBAAA,CACA,8CDmHN,CC7HE,4BACE,4BAAA,CACA,oDAAA,CAIE,oCAAA,CACA,2CD6HN,CCpIE,8BACE,4BAAA,CACA,oDAAA,CAIE,oCAAA,CACA,2CDoIN,CC3IE,6BACE,yBAAA,CACA,oDAAA,CAIE,oCAAA,CACA,2CD2IN,CClJE,8BACE,4BAAA,CACA,oDAAA,CAIE,oCAAA,CACA,2CDkJN,CCzJE,mCACE,4BAAA,CACA,qDAAA,CAOE,yBAAA,CACA,8CDsJN,CE3JE,4BACE,6BAAA,CACA,oCAAA,CACA,mCAAA,CAOE,0BAAA,CACA,+CFwJN,CEnKE,6BACE,6BAAA,CACA,oCAAA,CACA,mCAAA,CAOE,0BAAA,CACA,+CFgKN,CE3KE,+BACE,6BAAA,CACA,oCAAA,CACA,mCAAA,CAOE,0BAAA,CACA,+CFwKN,CEnLE,oCACE,6BAAA,CACA,oCAAA,CACA,mC
AAA,CAOE,0BAAA,CACA,+CFgLN,CE3LE,+BACE,6BAAA,CACA,oCAAA,CACA,mCAAA,CAOE,0BAAA,CACA,+CFwLN,CEnME,6BACE,6BAAA,CACA,oCAAA,CACA,mCAAA,CAOE,0BAAA,CACA,+CFgMN,CE3ME,mCACE,6BAAA,CACA,oCAAA,CACA,mCAAA,CAOE,0BAAA,CACA,+CFwMN,CEnNE,6BACE,6BAAA,CACA,oCAAA,CACA,mCAAA,CAOE,0BAAA,CACA,+CFgNN,CE3NE,6BACE,6BAAA,CACA,oCAAA,CACA,mCAAA,CAOE,0BAAA,CACA,+CFwNN,CEnOE,8BACE,6BAAA,CACA,oCAAA,CACA,mCAAA,CAOE,0BAAA,CACA,+CFgON,CE3OE,oCACE,6BAAA,CACA,oCAAA,CACA,mCAAA,CAOE,0BAAA,CACA,+CFwON,CEnPE,6BACE,6BAAA,CACA,oCAAA,CACA,mCAAA,CAIE,qCAAA,CACA,4CFmPN,CE3PE,+BACE,6BAAA,CACA,oCAAA,CACA,mCAAA,CAIE,qCAAA,CACA,4CF2PN,CEnQE,8BACE,6BAAA,CACA,oCAAA,CACA,mCAAA,CAIE,qCAAA,CACA,4CFmQN,CE3QE,+BACE,6BAAA,CACA,oCAAA,CACA,mCAAA,CAIE,qCAAA,CACA,4CF2QN,CEnRE,oCACE,6BAAA,CACA,oCAAA,CACA,mCAAA,CAOE,0BAAA,CACA,+CFgRN,CE3RE,8BACE,6BAAA,CACA,oCAAA,CACA,mCAAA,CAOE,0BAAA,CACA,+CFwRN,CEnSE,6BACE,6BAAA,CACA,oCAAA,CACA,mCAAA,CAOE,0BAAA,CACA,+CAAA,CAKA,4BF4RN,CE5SE,kCACE,6BAAA,CACA,oCAAA,CACA,mCAAA,CAOE,0BAAA,CACA,+CAAA,CAKA,4BFqSN,CEtRE,sEACE,4BFyRJ,CE1RE,+DACE,4BF6RJ,CE9RE,iEACE,4BFiSJ,CElSE,gEACE,4BFqSJ,CEtSE,iEACE,4BFySJ,CEhSA,8BACE,0BAAA,CACA,+CAAA,CACA,2CAAA,CACA,qCAAA,CACA,4CAAA,CAGA,4BFiSF,CGrMI,mCDtFA,+CACE,gCF8RJ,CE3RI,qDACE,gCF6RN,CExRE,iEACE,qBF0RJ,CACF,CGhNI,sCDnEA,uCACE,0CFsRJ,CACF,CE7QA,8BACE,0BAAA,CACA,4CAAA,CACA,gCAAA,CACA,0BAAA,CACA,+CAAA,CAGA,4BF8QF,CE3QE,yCACE,qBF6QJ,CG9MI,wCDxDA,8CACE,gCFyQJ,CACF,CGtOI,mCD5BA,+CACE,oCFqQJ,CElQI,qDACE,mCFoQN,CACF,CG3NI,wCDjCA,iFACE,qBF+PJ,CACF,CGnPI,sCDLA,uCACE,qBF2PJ,CACF","file":"palette.css"} \ No newline at end of file diff --git a/css/extra.css b/css/extra.css new file mode 100644 index 000000000..a96026d7f --- /dev/null +++ b/css/extra.css @@ -0,0 +1,141 @@ + +:root { + --md-default-bg-color: #f4f4f4; + } + :root > * { + --md-accent-fg-color: #030303; + } + .md-nav { + font-size: .75rem; + line-height: 1.3; + } + .md-grid { + max-width: initial; + } + li .md-nav__item--active { + background-color: #e0e0e0; + } + + .md-nav__item .md-nav__link--active { + 
color: #161616; + font-weight: 600; + } + .md-clipboard { + color: #161616; + } + + .md-tabs__link { + font-size: .8rem; + } + + + div.col-md-9 h1:first-of-type { + text-align: center; + font-size: 60px; + font-weight: 300; + } + + div.col-md-9 h1:first-of-type .headerlink { + display: none; + } + + code.no-highlight { + color: black; + } + + + /* Definition List styles */ + + dd { + padding-left: 20px; + } + + + /* Center images*/ + + img.center { + display: block; + margin: 0 auto; + } + + .md-content, .md-sidebar__scrollwrap { + padding-bottom: 6rem; + } + + .md-header__inner { + padding: .3rem .2rem; + } + + span.md-ellipsis { + font-weight: bold; + } + + .homepage { + font-size: 1.2rem; + } + + .md-footer-meta__inner { + justify-content: space-evenly; + } + + .md-footer-column { + /* We first create a flex layout context */ + display: flex; + + /* Then we define the flow direction + and if we allow the items to wrap + * Remember this is the same as: + * flex-direction: row; + * flex-wrap: wrap; + */ + flex-flow: column wrap; + + /* Then we define how is distributed the remaining space */ + justify-content: space-around; + + padding: 5px 5px; + } + + .md-footer-text { + font-size: .7rem; + padding: 2px 0; + } + + /* top git source icon */ + .md-header__source { + width: 2rem; + } + + /* search box */ + .md-search__inner { + width: 8rem; + } + + /* center colum white */ + .md-content { + background-color: #fff; + } + + /* adjust center column */ + .md-main__inner { + margin-top: 0px; + } + + /* top banner home page */ + [data-md-color-primary=black] .md-header { + background-color: #161616; + } + + /* tabs border */ + .md-typeset .tabbed-set>label { + border-bottom: .1rem solid #4051b5; + } + + /* headings */ + .md-typeset h1, .md-typeset h2 { + font-weight: 500; + } + + .md-typeset .admonition { + font-size: .75rem; + } \ No newline at end of file diff --git a/images/cpd-deployment.drawio b/images/cpd-deployment.drawio new file mode 100644 index 
000000000..583004e3d --- /dev/null +++ b/images/cpd-deployment.drawio @@ -0,0 +1 @@ +nLxX06NAsyb4a87lRIDwlxjhhPdws4H33vPrl3r7OzFzYq52OzpaEpKgqMx8TFap/wth+0tY4qlSxyzv/usDZdd/Idx/fT4wDOHvAzhy/ztCUp9/B8qlzv7zof99wK6f/D8Hof8c3essX//HB7dx7LZ6+p8H03EY8nT7H8fiZRnP//mxYuz+51WnuMz/rwN2Gnf/91G/zrbqP3fxIf73cTGvy+q/rwzj1L93+vi/P/yfO1mrOBvP/+MQ8v0vhF3Gcfv3rL/YvAOT99/zYikL9zlNRIjz1fZlmaEa9H/9Oxn//+Ur/7mFI+72/9wUE6/5fwa13f99p/99Q0s+bP+/B5CrC7JRnPD/ePDjww7UjbL4vz7/1wD+r4u/kzOBp3X/Fw/myJetfqOgxEneGeNab/U4vO8n47aN/fuBDrzBxGlbLuM+ZOzYjcvfqZDi78//cQ66q0vw3W2c3qPxOv3Lk6K+8nfUzN8l6f8+Cv33kfd5Fm/xfyH0v5cffhrK//qwtcfo1gn9hHKk3z+a7VZft3yfZeH7z1diafDI1crm8u+Tkv52X9OzUDq/KTaw0ity/+vDDFP79Lb5LXt4O+Iy2GLHh7cxiCmyI4htjYl0+1xl8rmfXNZ2mjNFNWlYgo42SjXRb9sy4XdlVsm0JFZm4LY9bZv7ObdtR4KeIM97DYcYcCpH4ATUErP5N1UJ1Y49eOd4i7cj844vGUEh8/vu/GYtv16DcejRg9v9VEuY7rQ52szSc/IbV5udnIoDSYoHhVB1uRXL+43UVrFIg4JcboRspbrJ96rIdjtBm5dLcr6Y4HdlEpdec7Ve9XN4//2SrnCCeE/14NSFvKDoKhHaRw0YKcYP9HFFo8ZKx51UU/u1tbm47SX1Ut1eqnlBKMnOuR2/32Ol5gk+FV4ZX4Lpoz5m0aCMrqEkUutNA+YnGiV6Mn7ZT/njPerHoE1sei+PaEK05VUZTTYswc0TzyrzKyEJ8VU5DHQBRUMiS9UM1hIMpvyWgcdb5dQ7fAf0YOstv1nArEOsDe/JxpML8h+ZcF4t+x/9WrzL3t5pgKecBeMeg6XK1id7P9pYfXK0cdQT2D4uCtbjtEaQM/NDrTIKhEmJP/yi9Y858+30s2v9ULVp07J02rd7LbGRgSQhjocgxAd/PvOqwz5Bb1y4Md86J53vqJh7d393L+YEEkVbzci81vs/ju2V631XTa43h6j8m67b1PDpO6bdMpz7Eff1+OX78R7A8E72vTGuR1Ys0jTM8Ye8hS5MVIczhhSRb2t3LxgdBhQjlja9GVul8TiT2oV/L/GewRjNaaR6jzIsaBABUoDDjFWmIu8Qs/NmG8/Nlz5cpMdSWUOF6Jt7jBGAy6NTcRwUmFh9QgpSwImoYLwH5bklM1Bvc0MBXmJIQzw4HT3e8yqT0xCRGaI7tb+VYsKa/f29lMOQ3+09C4k1BdO/T/I4xgnGAOF3jM+/YfEHFArVwkKzp3Th9SJ3wqABPY4qoj9rwlbmfkUkmSdHU5dmmp/sB0FoiX3PXjwYiYiC4OdfZjiuy8qbaQ6FmIpXN3ADha9pF0IxJijy3M4+xHZPnKpFEfbpIg5MMybr8zsa5nbKH/KGq8S55Xc99HcJWT9Me9SuuGj7jPbzPGhycG+cuTKtnwUy6i4IBzWJeq4zgiaO43Ry4EDwuioUuDA3uhPSOHvM8RvaGAcDo+3H9+RfiWFoy3X5dNAeOIGwhELJzskRAWSCeBba+Jzv9eXgTf/dV/y3wJlgQGDYE8Fcoqb8Rbi+94vafimFx9v117M3mGtVuoqScy/JFCLMhMZemznK2Xf6IC5wuw9FIOAa/6YdKd5/PuJ5w2m1xKcWh/xPZtNOqn+nWzUMMV3PU7Y8XSNT1Gtfv90UFkL/5ZFvw41q62uGYxiGK529Be8
oKGJZFtIYjOLClK6+mDc/guNoeOWjwPj7EeH9KjLhLs+Y5vcv06+biW8DZASO7muPyUZQlqCI6p1Yk4V4bnZ+Yf3QoTjTaA4ZuKpExpf2GSpIh+JA5g/J1rNU78+Iq0zXwZ++0QjkOFKKpAcB0SnUKPF1TU9m93h4hLhzz1mWpRxOagXPIYuG9RrsVFvxZQQ+VuRrgY9G215u29kD1A0ajk6TKh8iOzyIwDFxFZ4jgTIIzwwL0Z71lG2lIp8nqu48I+8kRpOHzBnKbwza0c9hWOY5Xfr1Te6BeEPFLw6jHCBkgzPdEkmihPP5ft+74n1FWSZJDwf2nQR+JqsXv+VaIObGnVdj/MVQKsKbZhgwjIECtWghqB+o3j2m9nrwHSsECWOpq245iiT4JW4hd0gE8FvlRDbDKCdbedBiW48kUxmxqW46CoGovaMNv4HrHXc9OvUw109hWB2ciUX1faK9WLc2PzzyPbkJqDM3+JZRtq1Y/R9Pd88vbGD8Mq0P3eSMb4xJ835Ip/UpxELaXxmm0Y+MyIPDh+yTa0QOiatdk7CCz7HvqIknu0ZyzXTOmT4EBrDolRlJkk+wBYCjArNmKx/XMqVdqUnR+50SQkAwPkOPU378pPvqsZdsl19dKbh2mKq0uhCfV39Ql47R2qHQqAcK8aOLzuEW7q1+luoasZkIa/Xri8MNY5pTSfJUOA1xUxkNKgQpNjP/i1XcRns8FVksifMGtV+pW1XD7rm258hVPqSb997y4GG15MfREXpOXbwQ+y5E2JKzTWFk2WnRt13J1hIMCeVwuOlpPQ9hC10402GqmvbjdIBjZrbWdP4xv9kNT399L941aKWiXbLoyO9RYF/71dwDgYs1hPgXDVrH/s30oWQvvzJEH/+i8vwciJ5jDznD8cb2QgCzIKNJDjN9WXebnzIddrXiEONbpn7/qsmz2X21+iWPWFNE8F5j57PWGUIs88Yx6Lym8y1sVAZFF5f1BlC512bY29IdZHCmJdeKUvnec+aFoj2zK+Oa1ckvaSPbxJdo4e/NqUTKPtN164Y6o+q5W+GiiYNEoiQdFygJa0z7jvHJ6SjJ78LrWa2E0/RQE+I9tfB+dscTwe1DwcuSdnUkFp6C2Qr3NGBJLUhJne7id1gsJDVfEz5F0mlPgy7aq1oOq06A+BmEqQOC63OlY4GhK0eIdU6TRextgoKZo0G47JpYvIKFuzDcIp7eECbuEl6rgOBJbqANL9ybkBhOEcl9YxKPrhEk5BDXtPSQTYwbzDMw1cmpvQmsbdaDPBnf7zaXFUngiYG+3JswHoAYXoBK/NJoe83AKxGtFTHPNQKIuy+WCopGFwPKghEnw4h9RN/o6jvitiKVkSGMHDfD9wjXp0UD1f+CWC1nlGFkw4eXT5EvwS2PqwkQ47oOUFGwHrDqPyYQm9nIkOZHE/zGyp2T2sFt4Qq3oD/RAx/YBFNK3qIrhEsL+FLMVU2MQIldn6y+XlAoDE9FMchGCcicccHo3lHwgd4leUPoxbxDoQhXemUIqgUkPJBBCL9SHzX1BRWSD0HWkljmsywIQHoCBs7KV7kootuQ4cU7803qIKtz7SiYQEh0oCa65rZITUgE6Pu+lWL5ka0i3zQnsxTeP8nzfUJ+rJoq9h2HE/qd8o0h9N6zrYWUsVgnzh84a6Z35g7DumZDgTK1sPLoNdBGt4b88W/uQjeEjYNvwFXrImoQi5Xep/xXzecZrrpmdA++I+5LlQL8aSQe31Rl+se1zDK4vuD9QE2KEAA23c0TCxbXDXUuOwc6vHSCfQ3wH9ay69Uw/rEIrMneAplroPrhWsVwQ2p+oXK4l9Ejbrq4z9BEnrFcmuG/NoOXLlSE3Y/los/LPC5yxlmmLHQIew0xq3Dy5uigqrPh1XDUH1uILxjgcmF4eqPNndgmHIDtIKx9oVv5z7iseG6g283m66kaspDbgvwoM57ujKA/qHjpZ1ZrQdLWqIu8V9Frr2nZjApzQY+SjzF
hFJ+PFbx8CEEt7PN+sXDIR4rvPkbVJMG2prW6heQgdcDDhChMLIbQoW8w+Vb81jLI9lm//Ga2asLPF/G2HOMXVUNfKBYQs7kEQySChffgIHKIFbe14YgfNnXyjAAAYpxMHunNNp5GSQ6vHXuWbgoqDBlQymDad8zay5RHJCb29u8ilzdKvPKm1aFx+MOJWt7he4D7fXbhM2Urd4t/PPxTX86DHYtl74r8UiUAq6rtj33Dbr2jHExEjx6CXkwWysLvuU/iUIeWxj9Ng0OOv9fiei+lgAyu0fDjOdSYE48inkhV+8WbWC8ofFTKwmXMWYTfvn1RaeXi6wrBXbBRyZGb2/7OvyIAprINZnxYwseLdc/GPs3j5rh1M/fWvMOfPm76XTT1rn1fbaAPefVkIEkGhztJIlvTIk/fFurNTvzJlJPAG/9+B3Hz6XBvw0d64TyOWJShj29UcL5nt946dFcOTjS66oE6MoSqU+DjRSWi8IPhwM9ovxsm1SYD2VQ2AJLUDxinGr1Zd3TGA8uHomX4Ry67Mxeh+Ci4NiOT8/k1OLWoEfkHfU4e9IXxhjFOSFIRX+Don3I/+pgYeie5mbb6Gf2cDpLk9ici/4qHi8lIULW0IuVp8g6AQ0zdvhAyIL7YN5vFL31KILFbG/mHhcGgkPAYIILlaLWDQXXuxVQYtLJi6yexiDt1ijnLKFxuIVAfvqP2kIkwa9KYjBZg4FjWoOmPd4uNOXrqH+5QK4rHtmMdzKXoh78kBgkQaWugPEcfHQCvg2Iu9jjfNTO9EkLyjrEJ1rfFCz+exk4QWwPKX1ZLKBNsZTlSV4fl1ZmdMHDra7JlrNN3t+2l5FU6NZ7PzjRkjbOCW25uY4qcx76ez/ntrn+afdMLdNBf1c9gD75SQxAMm/YOxB2eCCAI4SfPfdfrzzRacaNx0Ty/JsDcf1//9xc119cqsyqu5iYPBXskyyhENm8AeOhAJ/cZv+2W3ZuYoPJ75+LwmCs73/e3SK4fKNridQcPuipPaoonnoOTo5gjl6dULGA0dBfelUVdu1atxj7uE4NZ3MDFBR9WJxKn8qDzBFTk1Wcp9qXLt0W9H2nqqEaVkJ4KvpSypxbeR7ifNogP1LYP0a3Nhg07yt6ra3DgfamvLd6QMmGMExJfvNW1S1qT3gyHgosGLSTxpQtNfgrxFpW/WQvunVoOj1JxpKOU77NEwYLbRl/D1qb0URc8ULFMJIZIcyAaCW17P5yiuhEOnWj9SGw6MA9EGQ5NBEkzjut+TuDeKTYg8S+wfn+v/7SAR2LPUdAmH5Sy92fxNvM4Xp+LiASRpAsoDU4Eygu9XsM1/5bnX3D2A+2Aw8P4O2iW5djPAUGQV1NRO5HVxJ8TtwFXPOrlnkax70ALzsOUFcXQfz2of41WnucEQ1AfgkQrMCLz6/GVqTABSJG7I5p9349z8SnKRvpt4v4kBaP/GkTZjyLLSfp18zhlkMUrEv/RME+Bx2v49wIbqBpJLupfo+J49QHwXf/qRnWPdzikz9WW3PnkQvEM7jWLfEVcFBfXdAoGyTO90hjDFOFRsqIvLGhP77THIcS4blPkBUv952Ofn/wuvbvDavMVkMtKsYrTas5l5PP4mW7S3RVqg5neD/O74fY36tosRN6sOWSYz73KHKvUqNnuaILhw4Vua5uh5bnw1L3RGdlEegMUcYR1cfhr14n4o5V+nxDPVZUyV252FGG4PM1hH4ivd49KKn9CaVy2D+HKsh22rDcqtguR1BTFxpzX34q2PKMPfrzuRUt3v6Zo9ZUXV3gYHw/eBf0O4zgO/K8/0+JZvRK6U/pf9ZOorvLDGB72PMerzYGoWJgygzf5Eh/hrxFKVIc+S8Vb3lN9dsZJX0hhyEO+KaNAypORx1+EPI7F9Bk6d6gkLzpIDPJExwZEzz8+hsh1gVChPxTQ6XLRm4fuPN/gqW4JM+wCkOHlO1fkat9e7fiyWXUoOX3SaxkyI4LabaUOjPk9Re6sbyf
nHBaAy07BVcmOxhdytWziAponE7c681egvu9KuIOX0MFW433d4+NqGKwPaT5TcCMay/I6koHNet0T/LUAuOc33Jr8yeRjzX/OW4eMzVq4wfxhN1DGJmuXkk0XyTuvlEuLwbIkWU3mCj//9sW90EZ5TMorpeVVSuy3JukO9t5p73qNM6GZu6nL8xev7srMmTN2r8BEYUfxA/GBp9jlAw/Odv3z+ZAvZUCHz12apTVIqQdG/wPz6ohHgFP5fiVaezMpe/PfqwwSua1NZQPNmadkMO6DK8P8ZpaJefgC5mLzxKYhnB/Xm6R49fRzaavMYJE+chQU4n4Le4ufhekNf33XMvjXaCPDy+ZAXEHpBAWNXuigxZFPdDuc2UaKi+Qb+O0Hk+58OxPEVWy0InF+PbUGo276bc7D3+rm8JgOYlinhcMay08AeRM85QGD0giDpi+V3aT8EZQHcntkNh1M7da6g8nj00nZarDVVxgpjNO+9FM/3CDnV4vjwWVPznLrtyxhvtyc9kWOyQ8D6ATIsam/pxyy9vUPF17uUkWjd+gXthQHgrvt+GX0qtSYD/iM5asQeG9m7wfYk84OCabHxYtXsRDUfjGOC2nDMPRCbHdTLr61SXlB/QnCpVcLte26uzZalKOu59/lXPO7GP7LIrIjY94jPM9DbbTM3PX3UK5k+QdkL5NBkeVawn4Y8g8RFUUJWCwwGtlB8No0CMu6KE+s96As3dJ3o1K2gZugNIaL9786jrLcrdClw9S35s7h8Cl9VNOieE71Ae1z3bbkgyRNLlCMS80LOL6nLhvW2dAIGti4KP95W4vvcDilMIktV25hnqaXgQ6U7CD5xcimZQxkEHZ1CMxf/rF39hdFvewT7OVJLeluA81HQ/3vdUvmqm1V+tRcbqot73GzwU5bRRlN9egPLxHqtynvumcMynQZP9JEpKpa//AtTfkUZ61dk8FLe2riA/YfBv5o2wevjsNcWLX79MOWJFqhR4CZX/vGtDigLP43XBx9716ZD5+aZ/HuJR23otFczZAMCT9qQ1/8+MnkB3Ny2YEj5ehu82BPl8fyoZt8bJMukrofzPUq4bTKbRVxhc3LL2zx2ihC0I/nwmDfKJaYEOct9hj2skB+0svX1LK2PU5xOh6OfKubG5RTtab8XN01R8DHhR45lf162I0QrGTRfGmia9IUrjXbef2Ph/QgawWVbneHCHHLfzzq0whenhZdM+CatXnRLn0/3IXd/EKyhuwfkpBGIDDfChUPaTjkGmE85VvYF8CS8UgpK/IXov2E2depsC/brjpvVOnPFiBSRkWRQYE5FPVlNIftXHaCHtFAQYH1+RKhGPenjsOtvip7XoQHfAjm85yA0pNlxGN1gE7OQgO0PyciqhgrUKafs+RQ8FrFbaKIIdb7RrGp2KfEn1NWZeOE9C9cBaO0NNRX08YrhpZwTojSXcMbp/q2DHyvSAuTPkAHw1KkOksjEKEDm5nlqG0+9XnTRD4a68jLYdM8vVagW62zXzsHJIQDfaK824cu7yZ0B52aqUdCj65fgyDxE7Gyy/3zEaGMT+UGSz+MdUj4MeIvPWlSqO1ypN8j4W0ZUb7q7YqRvWdNAP5iLsPSk9bihtNgsUDYaQ1OxYVCg98Xhu9CrZBd1QhASpyb9cW6Gj9dA5I1dYIJ8h03b8jbrWTHTbbNXNNvBwScjgDXvNXPp+eM+Zv/kNBoZVorH1+XYCWRoFwL3xntoI9bPIU5MF1tTN2vvsORQ+w0mH5yRkEU20ceU39sulJRVEkgKGIvEUJ1jIvgIpIl/7Wnhoy7PkeyQBNyYrsElUb73c5+wdSV1M2EAaeJQcnv6THtDmog1Cm20QnawvIc786HUkDbBqyctRcN5zxUQuJtSEwbYRjd67MgnbVbg7Sy6g7gi5rJv4lkAq30JtmHQWd+kN3CgwVbiG8uCdse+3UYnXVukzxYgyWUp40jCouv0RN+yK3+FcZftVf8L1pBu5JPyGJrKBc
9jYg1u1MmIk4BgMgHEeiwvMjvwzPvv9Cg19WQjBumQsjR41aDmsNlNe0gcZf4Q1EOR3iX2gQ8Fr9YGrBIX2qN6saXwDqVbU64ZGMIf9SxM9ufEO78RvGN5KtusR0Ob94XJWXPQ0FDBBR9iMeQfBKn4FzoQgkogMaG9rSofsVmDXWPMJKwaMrgG5MUzOnCuXignXUtlppEBNFbSEV1CziNRKGa46PITWkwxyik8xIVd0PQwrreyDNmzXXMeFl/yq82GUVFjJwpVfkqw/mRspnfQ6C0s6QGooFmjCH9PhLlm7/go/P1mllBRVkzJR5pilNd1bUgWmYfmq53srrJpWlpoyg5bBWJJdywdWP5G1n45mjAo7uJrvjrq8iDhIjjeFqqO4QQv3CjYEB/gZljuzaAY7SVDREhseK+wrcl+x4TUcpQzE8H+5Nf4OdxbJilJCweGOrOWx9pUsnMHqZfSn8zUETwCfN4b/pLyx19j0h9MdnklLycAmcPqhOUnaARwqbS/nssfKGlI+cQ/Q2CTtyn5dNzohCuPp8nhtym0gL34CU8DeXWIfonWYUrQ+7JE9kuDP1KdRgcI6B9mOUIOdojJftI+DZ9s+ulaD4yav1V/fKDN49a6K8D5gj024dZzgjvU1wdjIGyRR8t+09/RsPM5cNvHck+CIGaDHcBRjuxEWdNbV3d0FLfM857ZbSM5rHTAyKElh4pi79HjJy1mIN2n6FGs4kW38bf9q7lmJ9S/LYwZ+0PuvHTWihT5mA/vcRX+Z1yNzKj5dr2VDdwAcgLfOWk3YJ+DRCuq5GqnYE0X80gtVPm5G4Z77F/SrkZ5XzXVzUUAy8oTe9NZxZBvqZeIMbZFFRWau5E4ez26wjP5iR2YfWW6cl32OP0QumLvfRk7lMfB5IQVHLpD8y0GqZJiWYYlIl2Kx7muCSimVAk27AECvjDVi4LYelrKmimu9PMGdNkmOV70xDoq4ckWU696NTfV6zGYRZz1TvLP0AKDCwihcVPnoSdOnAXcj0574MN2n398Do4qFmzNFrRonZvC3TJeKl3+3M3jZl2K2yhp1DHvnhucfqdHeWPh4ycStyvys6T/92BIT2S8LsfqoMyQvEmft82bnPfRM4v0Yw2BXRVExDu/8l5Gj9LFRMr9UDW2CpxNOTRtsEj5WWmOYB8SxZIjWSmEh84HEW03k+hGhZPpbh11Tt6MRL+GsOv7wPwurPz/pQuIv/2w2xyeN+4kJ3L38D/0IPOgdt1il8dlr0r+lja0dd2Ja5ybuL3gkP604E2yIuJqxzdg6EtMv5rVwz+VeYRSB+o7GdURWRzL9z7MQ48xhoYMvQwC7mqZfgF5ZbltNITrHTxgQts+2QsDZX7X3zy8bXkDJ/0msvKfOK2MSHBNuUMhl+IBK/9M5jv60Xd13icgWs+67KljLkBQYJj94aBBilzRQErvzwPO41uSii7UBszc70T+sFgZCPWG/RbeXQFa1gadbEZ3suenibSvjkXa07ifCbu9EfceX7TMO9Ufhvf+9RMBmH+xDf+t5odDYxD7v3FVwuNz2CCmdchZxhkfT+PM2332I0xZjBnTtWzAXu/Hq6yqSeEtr3zrFxZHMH58/583aRC9dfzwXmMty1VJICQglb7Phym+9zMNGF5gMBxaTnOrlAWV94DTaq6UUWRSMn45byyvqQs6Y8WgYU7aBarLoRQRwQBLHnrei/O1gGZPGipMEiwGvyIUsFQ2pgzYYdzVlkktySP3b3RxcyMPdJJe6EIdwwOVsVvpgo4MyThFYpiMU/ZSbQH/dHotB2IkTAP7dcQccFH2/WdFfqzf20rX+hdI9OCI3c9pwXGmaqnBTfXIb+37NcTO4Ng/g6yHbADlf6izySwtP2tCPFs3DA53d3Y/FLmeLobaVSEhstJRvn49J+q5bAMWnLBqCqm92Nay/MuALEf7P33shVM0RGBPTfoQhHAqQilvTNemYXYoIEpekR7J14h876vd6ziaFaB4CT
FnE/tj74DlmxAsC0hAs3QrIwuHsa5leDXC/uI++KP9ExcwYnqWVHQ5vzafvP3t7MFWESRaj1K5bgFGNogQUvxV1fYRvwkcTeJPhvtnXz4LcMOxOeOaq6ub7z7pl8p/dqwXI30Qr1+zog87pTe07fOOnKgF4LpNFsRBn9Fo7+u1AZAqeFBH5t41vQRS8fVF16gel8FBZb/brDWjHG3V/uFrUZA7Xxin+m6w7sMnLhI2f4gqY5KD2qQ+kTlCHb6x2S7slY9L846ORukFF9pp1PA0ewYO49sK/I7OpunroFAp9e/rn079ZwskkBFucYl65UPGrk16ASqxukEiF5UODJ6TASWI+aSef0HzeGv7baO7LXZFRaYnvG3lCUpfM19CrCZpCT++oV23hNDC3ZH7aPPxBdmBKuJK5Cee0gw7kQN4skhM6jL76GlGAc2YNBYvxZWgd+FXYXF+NbGfPVH1Dzil2xHzcL+OjKK7xZMdfkNAVSV+DG8kPVCfZ4//Uz/St4/Ze/6eAssmUOUIV5728LgbY3zu5Hhq6YdCanF66igRbkvsvVmCy/+ReHf0pzpaLQ1Ak4yQwRYNe9v1SKEiZ+cJDVUDNCmIrt7wJI93d9Lcqrik9ovqCe3b0Us+7jf/gPubKoofLe1lMvSrxnvR98KKyOn0KfYXxJb9Q9nLPRBuJII2dTUvmmiUJPan9zHEOfV6sXEyC8wMRGQA4tLfnkB0Pj3UYigzVpC5GQcgThsA1nVECio1TALlAL2KlQB+uaMbtClHRc2S8kaUrSE9veSeMaGWEg9cTKKy4wjr2QeqBIK1NAhES0ostqpzC+NfmHZ72xYWCNoEngLx40aOe9g+ehPrU2/rFhsuc3Zi/xXfivrfD6z4eWfTahz9hW7MzvMI9uZA0dKGcOn/EZbieLULxkqVQLLysWdVEKz6+pZK3nZ3wCUIGjmrj1KKcQUJ3DqgJfkdb9noCKj5b9vgqutchhjPrlF96CNYLnLXvmA6l5CeII26iECiXSz2YiTojQ3Qy0e00ERmhmDye1Ndrr1nYiRxqX8mnb6AUjbIaeTkDzuaIdFJikoRT5GfqkeU51afTg1ycv7jSHKMLE8kb8H/Xoov1zlK9dWsH/iLZ7suOhGIZCy5hv5jZRy9EZmZXlVUkCfQAdxIBPzt/ILtAVqiMiD/u2nQlCwh4rbevAAJJsKipLhOtB8QTMmTgvrxyDaqVDw1VcZ8GF3ByLHnelIuAl3K5WdZ0M6DglFFp/a0DL7GH/QjX1c1W6LuLyCpw0ajRJKHFq3EMys57hVMH6Is7N/K8E9UaB0kgLMqCiBBho2vzTDOEsJo5rFwPT9ZwDbLdUTrSSNYqZZ/yaNtIY9GgdLYOwT9XrHNH/oMhzAUNTiagVZXcWyh6WeYkEi3NSnNUDz698xtHcLsLRdCWqZ9PEj+ccinv22X3en5h2e/w1jRGNTmqcZZx8fDSMOBPsWWANsz+HjskJXkDY/1JDPf93ZG0PxJW2u/JuhmR+eqZFOuyVuMoTSEcq/MoXpTGZhAL+wPCZ8UDpAQX/zBDt1DhT5W2+4tXJG/tNHjEvsAxOQuhZGBUL2kgkIzfdvx1kJUGNIUPDQgLo7/9reOHCiIYlRIq58VgDvZaEXxE9DPt/T9gNwXlLO0iVlIZJwUXdZxqSKj50t11fi5UxXfFqlLSAV/e3h7jcIX2EgAp3d8orJwkflbjV5pXWY1Zs9aN2JHenoim/qeew1k0voUh7jgV1WxEyks4JFyvmM1FZcqq7ow7nzB1d+VzOLI9iMRNJwU902Bxhm6penoercq1cvMX97QbzaOPQEBztOguZ8QS4d8X4XG+w603lAkuxGoX392w2dqarVhBRMBRKVRAnh+taNUL4zzIBT8PAF8qiEUN/qFvW8rc9R4IwepJMBvgy6r5+NUZTDsy3hRCWwjp2vSIeOQ1DOBwMdllkeknCAJSOGKcjh01IEySGYXEjBS0j42r3eVDRW5wbr82i
wEQT2D6bEv7WjM+tAlEDZPn8qvyQvsNGEKly95zgjpWmWpm3XYjwlaeC7JCg8HcVitMipyBrj7jScKXETGv3BHeoWLKUIkL8QT1RPLTlAGmqRXRMBLagzZ/cKhSNwziG6DlHWQFtNuB7mUWH4uPf2bIbwB7BPIMJ6SpOTm2AF/qV+QAt8vdGJhzzqU3KAPNld5K3898AB0xe8/HID8UPjspQ9y8n9WoM9TPZKUnk92mpMCZo4yEf6MjHBlBBOEnVyeKARdQxTdNm/O8D9durg8oAozRbrj07jB49Jd7+KTWTrqguTgzhfxuP8bfS2CBD9tqgkDcsj8lYcR8lnG8YzYmC7McGaHwFK8jcokE/fGGTYuNlrqrEgrye+2/7jxzg6pRaIc0oDcGMJ8vf+DU7I+NZQBdqY85f1gv4Q2qGoiALiDQ4ECgw7+SG3fFIdZRbZV4wVolflBRG/Cfd3V2W4ZtU7i7iGWVt+pl0JMKnJ9J348cCmqecoJK3DrPtmZcac/fjp4dDXu6JGrZkJzc8URQNbysQYa+tC9z1IA5+DzoZSoNwU0WqBbyllGo4JGTQ3mI9JjqYMeJ9vIJRslnLORtZ5PAYqOfVqXUPSGFy37POVwu7PUGcC8cPuOXOonpGTD8w6c1RrdnAfY2PYH7/FvdmA/AGawOwNG8IFp1+y+FtiB93D2kMHLQu9vZmRWHbJ1NcNOu6H9GHUcTgo9HQ0EK+Q6r8/K0X76r5to4oNxrLa0eQYdRBAIWb99CWEGIcWhn9cbYRpXPYMsnTYi+p+O+fSavcTp/yVtVt/PsWTDGXAmC9aQgHKIHjtlKH4HGOz/q041yhYPc4w0afO1TOkIy+FQ385gKeNQ9LZJg1l9LjBysi5RTeCS9QPsGRAOpEFqMIWf1vFhwmkMFADjUa+5p3wCk/1rSxIufphMw8DQBEPdtfkDfF8IIwJbloLAV4zkHpk3lUZykfNZ5Cp5b1bTkrDm2VcQJi5RxMvXVqgMRnmTKEfNGr6TJu8JrVE6OugLLanGvz3FjFydivMfskz/+0ihFtYTMSsxB5dvSYkphjVPe0y/Dv7SYjhUw4yoby2VEjcEswsLYPN4DkYwy7rvZyHC6IO7BsBUY+GkeK96j/9grmZ2rvOjeZLZ0XzlIsIa1fSSNxpW9iouDV2aRxRtVmd42PgDLHYhy9/IZ8qlHcP/7TzHqxE59ZrDq3lCyRisOaRYYkErzG2q936MJdkZGQUkBUjQLk3Ad1yBFu+y3NOuFpDCFA9Jnzou76ZrYzWoF8odq2CkKKQIQXjkgrop3JQqpEJ8ABAHxqQY+FJXmVbWm3aA5mPBZT/k8xgWavc7RW0y0vjwkyyPGLY28AWAVP8cBo5hzqDRuzHWNbQIqhOGgRgQZSbLXBh7ZNyclhjNl1qlBUy7zYNpXPwMwCbAR1BWuVobfE31594Vwt3bFpuFuRVc6mvMHbbT5HPQt32qH5EZYWrcj7A/s0BuXxCu6IkqP1Do2oo45LZ9OuaJrfSUU95Q6LLy0SN+i6Fk/fUba7qB5HXb40xmomvX1BXEmRH/sE9pFWtINpjBZ3wYRn3os0L7pGvW5AgZCxYf0jy1vuCrhQTC0E7xfiewZ8e+iHzuj6pQ4Wd4nwv2nS9N5OL/aqZ7g8Cl3v0EmNMzFaku0LoQZeSXImTzZXSjAvosskGlAylZWktqyHviQEAT74y3gHT96fHKM5FggMdhJii2BuCjx+/aHmwmS6niozlPi1HYVWzQHLwsVeH/QhAvkt2D19+7NtFf2j2NU6633ax3GknZvosJZaYUZqKD39pcZoBqXLo+cRLQXsExIvxUVhfgnPaB2zJt8c5oj4SJc8bybtG1cr4Y9F3PGZsbrfN4Ip3w/hb1uyk5cEvIcD4ck4jMD2cHjupvvYd/fO1ytGAIWMS0Q5nqGCoEF0lPIz1WOZBihnKycL70jdB/KdU1Nb5wWQbfpz9hKu2ndKhgtS7BKs8P/ElcgI2Xsv
WESpG/uG2oNq5M6mTasu1YmZgARt9ZZfZ6wv961vm8/2If96oWv4aj0WXZ5KeMMAlOjrQ4SK4XwJlpgM+pP7APmQFPv852uWAtKghzvKYdFU3JtC6lIpkQkWDBBciUZ7Du4OJ7B0Z8QpfStkXrIKQIX70FtF7yR61GQ30CLZ6y9nOwWg8kPG+80itOTyMyv3hXpeiesBAxlJzPV+iQaeBh37Hvmi+0vRObkLEaqtIm1mWT2t0bwl/sUBz1RBai26FoJBuMTdr/K8HLT80yizcTnimsJyPd8cfAUXImFIE9YZIGCOnblpz51WCowjXeqwrvLo0jXGdhxRW5htek5rk+5pY46qrb94AhfMLse90sBLxc77Vm8qT6l8725Gr2vr+9Z5aJu0kpac0xbAX9UCvjg3Qwzc0n/HMzuuIBcFYGG8QNN8wiyD6/uczlBxYuC4U3nPxAOA+q/eTVToebgmo8wpYrjL/28lLQUBN0jiOf/91NCPTthbuWmyNjGsGXy/WojBZQHgLm93rnp2CV8RWbu1Xz21DSc8dQSoO+mUWHaOZB+xv4UvEnp+8SSmIuAUkrbNOEaHCW5jNNAoN11Qiq9E7Nk0/7AsrjM9wo0/eK/3ZzE6roOwYXbGjenqckfHUI/bzo/NP5AZ+i0OUAeLRyXftf9Ca195Hnf6sqa/qqofb8hcHoHuwNMYfiPL4nz2EfGRchLz3Cb3+nM8QtM/jjnmc84JetV/vJzox5SQ6/4uk1Uj5Q/PVyqVIz4ExGxDps7EPynB/VN0eIUJ5J2gpAgko7H5BctWgFLPH5htJkCxGuhHUPnv5o/fjlXkMgQF9ZxXZYrRJ8dE+NYqVeUhlzPP5+5XhDZrxGWkcjIJzFNyTdUENG37ooM2hrUT6MoIhxoQRgxGI0uopn67fm774Gg80YmCbcVzpp+iDjicDYfdvi9ieZML1otKnOK1P2KD4+wT0pggrgpZhTXIXIlS/kFG8jgrbmDHqVpvZN6RIKwXzijJv4+JMITdXlSB/lM9vWCIeJa5ccAuZm6Bpt7sp80vFYNhUIcdB10iTqG8Aki6JcKOQcqpTAXlmfFP5GXDSvK0sooKl8oB9c7d7JZDBy1EaId/MkXMM5TwOsO8E9mjaWgWSFA13FxY+8ULVg1LX5BRa8s/HwpAgWqGxOQzsrkl7o2uJSBRDivwhJWN8PNQWER1sMOQjhVpuAxcPxiz+9m9AOKw6htAn0DEOaumR9ZG/8YiBKjomsvqzzbxTSMjBPxhLEa9iCw5Pid6nbNBm/C2ar/pyiGPsI2ChOlevkjNRC320IwohEzk/XgpEbxQ+GMY0CDVbCmT4poxWhRlrAZzeSudbF8wAC18dKSh+gh7AyIrL2gXX9VLSQpeYV4P9wV/35rgJMkt5Tq+EmiOqQbIOggr1nBzVQn3UZQgO6W/rs7leGVXaM59weUV7iogCku5KGhcVeWYwPjsuL6JcrcpkWIS6DmY0FNyxABq3RraorEfrh1XyQX97TV/haDH7pNfeZJxQV9ME+/rqxckh6bQWYwdHIwajDEf/M0OTd9VjtDWgS/v3W4IR/dlpHKCQtOwQ42LI3nzFEoiXoP2CfQ5WJB5q3T6XCsYpmNOUKYNbk9RuyidIpZNDyOPDNa+HAU2DCLRqvgXZJ9driYLdcKrmwXpqIITj5AzIwE+B8m5uoTKyikjk5Ryj5dJlO8w3eCzGIqSgtMqrcndz3a11mOAP9znz9MQwcXCDALQp0rLYWQMafNA+UagG9QaGtWnKL6Lgwnv9QXsVo8lVWsEygFtJjsp0AkaIJgEU/iVpgZjGnAc2MUj5X/ORg+3gcSQ6m2xR2PiqriFvGEc4Veqn9V6lTJjlKi6PjjHeenXR5q1qKCBxHQzK0HJdIxyWaqT9o0MUS0uT4RK3ooN9XCYtdld2FGYnKkxaVqcW6v6YrHsttjBYnZ7W/Ymdgmt7iYdzifJgTBxDzsysXGuQTPzdlirC4MdBtJv48U8
U44naCe3fb0yXR+XuHRfYV7JkjRqTxFKIYwQhc47MbeqJP/z4VrrYoKVURkn07dZpOp5zzT6PASEY6gcSq01XLURTT/IkrAucceWrWPx1hXjOLhQtBNRkb5SbxxQOGgiwlCdFGBWnzYANE4z60ypU+AXDH3v9NuLmka9PtMpxSru5H3zNH1+vsFbxV5ivjl3IIBFPpAeF3GkPNt0G/yV+hZyNAS7iVoLnP9cFVpqdV3xJAw4BBMiZREmdL3bHHE2lmVcyhOMlx5kf+rml0pIk1gvfOJXZuE1OtIsZ4iYreeWfTgeG5uOml7EsElrMi6CHuRfRhnHFC6JM/LJfrKkfQggMgsN/Y2Rn/t8W4+9gTEwgFhPocgsHYpqc/wozYRWeXXBFtpqRGbk+iM3cAhEK9xl5A+8T34xYo9nptvWXIObvILv6Ofc4KUYkyhAwoU8wVBplAAj2K5KMvRL8rMF0ozRonCc9m6JYgL9ve+Cm3wlXxPNaJ3AoEuh7njb1Bjx3TIyHSHXaVtHiStttQGPxOIpGuOPf/lChcfi259JWLjLltIKSaFiuYCFiaQWSL2CKUp9m2TQyvAn8r+WECqs6AcXIsDL6577ABqEeOyADtBYAVTQvnBI/7q2I3Nz3qsbqG7MkfkfT15m+X/DC8yK1qycFcNZNyz4oRb0eTa1hcAO/agbtzyXVufn8cOZZBMwMCJQuM6gIDQO0LiAdKFWp1n/cH3pRLGyPCjeYW+ECOWrbxGy52Y+jdFnHNGszgzAzshhxM5YEzVEd47nnEUofuDt+NMalgmWSyEl5qT+JsRi+yZeRV/adj/gZP2TbLWnxWpgw0Xi74wvvQCUa5/cSObHG4XpKZzf58v90fIwG1gBQ0bwQmZADZsS6ARR0txY/0MI3eCNLTIypJ4ivXXHPZQCsMPqc3yupfgZ8YGOhTEwNfiBS7Rqg07nDt2FNf8Uwz6RMRqwt8lbdm2ypgJbvjC+CiyMBlLZ9AvKgDtRBW7Eu6BgtfqmzwLyDSfBFF3oXAET1e5xiPFbYez5yAqYRKhR40JSnmDN6K5w0wHmU/BUSToxsHn0CSeHQEPTnxwLeUNkAMgfzU0G26UnBktwuXxtMDrvVYnrio2EMJB0aH0TRmgwOqHP5SlDKwScV5ijxtzEPJeyOwVhXYwd36+4l5aD+zMUf8B0MxF/tF96lPZIjSIpBdxzs9VJm5uFfh6qOHC0vT3otEAcoNKqvMK3obnH2Ky3UpN6LOMjJakfih+wqxUei8snnY/QRPVmfeWf4YTDQkD48juWs3fIIqtV/Q8AEs1ZL23QBaGXiaSN1//Pz6cxxC/B7gU/1JyLrX6FI52+ipjwC1prV6M/2HW/dUS7cINBUJF+/+qwHvOKyOE7DhA1n2+eOk+cetJDHYn1A0WR5kzmcXtyCvsJYTvZ+xPNY1wd7opUsS+bLprbgvyPgnb92DCL9XsmRm9bIdRqqEkHLxMDDFu6G/p5u+XTBHMjCUV5rSxBaXsPRjJDZZa3o/u1Jr+yM8bFJChPvjDXcxLkUyzqKKmeGGqxzom4eroe5iVN2lQWqZ6As+eJ5SQBhCDIrlAJnQWeofSc7PAY6wFoI/JCCgSly3li1wqhBv5E8vCsytluJzYdXK/jEGgCfeuyA/HClQH9HUNH6mv9QcTkwOk+UAYCMOuvDsVtkXn8wGlQz+/Px7/Ly7xaicRd2T04/spQz8RjJL779dOxr5mrJlrWQvWFECJoXTc14HfJHXBzjOF2la+5vuYbVpj4RELy38CTVDvBrlWCQyMIyGIU5DAs3XyBvg0DFl4qpEn8w8KhlrFFwMxqaBMvOBqKRaAUWMCH/fCQ6hnCF7iKsgYMA9OHXHI/mBI8kBL7SaCwLGWCByP/3Kq2YZ8+pLwLsU/ueivALq3seibyvyXY04jyGG2fnOEiFII/X2/HnUYYKIAu0de3xnavwDm++g/GfYbuM1qW+2+0vEBXW3z4z4E2erOWU22j04Jg
z/Z8sfol5+dvw7sVsjBwaleshNZE0QqFPIQx/FLZ3x097SEA8NvgRTnnAcWrQPdCZZWSohussKFf+KA9vnDPoMx4Hukn3ioLNY6QEzRyUuYOFu5kFbiPVbkqPi9xsF44gQcSzB5N/ka99SHNdz2De/0ij/IUAMzCU4jr204wWcdMjMhLabVTh8QlXgsQJ/qJyDSVVcHvXqhMVlvPia51N+CIQUKhXpHYMZZiZCeJZ0MErEwf3u/K/HGkS+MoyAHu/1yHK2ato1YF9kWFUmOv79TTzHtfAQL94n2VghWKa9oehIAd1NROfyfEnKElTrQ05lqx/ektUlKL780EHlxskFwkNZLZg0dUZp1Y+dqxWLZJ8IzwBsvyUV6tCKXgXOfujRxYh3tcqI2bJLXOc53XZOg1gtduRp79FcDLEpmO0DcmRTyCqdCRtX1XDiZnoyJ/vesFzYcgBPtkUZTtwKLOxUKAXncB7keLn420fvfOoI7+vCTnYi5S2SpBgZBFPoSAasY8rTL9lA+2bjQW/0f+2QRUpFepCYDXAA6sfnteTvbxqw0oqyWHp3Hrc8nMVhyQY5aZ2OTmB//wBUDMjJj+Q4med52lMLMdvC0UsLAHV9S/7UCD93NkpKYP7W0x6eKJp0YjHiBsUjEFkrOgcZkcEsl3lKZxKLq29ZkCMWhRfN+ZK17w1X5CoToLjXkte5FZtIAddPQ3SuEWT/r8sXUWW5EgWPM3sxbAUK8QUop1YIWY6/cizujdV9TozQP7BzD54OXBK8GFgZPt53/O3ekWz6iPnY/p45pRgIFqIA3W/UcnVxDI6K1cpFWFaBi5z8yD9/m3ZKClgQxldC4GWLpng90P4E60qN7N6J205cP38zF/ss0abKrWJIwyeX9voC/ycl//bprUlImvY0pMsvdRLgb0d30Zit3SiFQrP/kUWcqkK6krzijdeo1tcvEfU9GCs0avGlaxr/VHuHhfZLwNx871yq5D6rf+gIGN6jFkQbBela9/v+5knW3AFr0+e+ocTattB/Ri6eZUXcfrLEI2rTLhiRMRYZVA/UFv/xGOvuXTGpY13O+m2XPaeRxqJQt8mkMDuDY2wtTK99N+NYa8brKTCmKZN0J4eG9YmyDcGhJFwjWnFQqveXBYClbctL/9MSazg+Yq73hpGGzVNjziz97yvLAD/09K0LiKmlBZCOtNfM4clTCPPszquywX5nYobQyYtVjdz0YJTyOXtZ7DjLy7UutzoZXfPw5JXXnCoQOJBor8+2O+LmdQYIdU3f9H90VYQML15BSI07v04+hI6RRP/9mXtvZ3ZvyXL6upvOJcQHuXl4GG5lQRb/Q0LyCCbnjZA4nnDJSdFvGBrkG8t7LSSfWKTr4+S61IgnALFUsWZyJJxznKyetJgmlBGOiQwt1sqV1gYZCIXSncNdmZMDcAgSb5xAlNHizK+qbW6buFkPyX8/YJj3huoG2FhK8LKJPYjbqBEpQPjwqUr2o4uEOto2+2wDMs9yLUOvUX3mayrCvZmQT+ejDF2j5M/Cx5mN6gGSytwMUdmn/PbdrgnkMUSSkwCo7rrDf0oIgMq1cvk02JyK5urBG8wxKZj0iQ7/X5DA0EYpAC9Jc7SaynGDjmCYvPjN4R4IW0Y/cV/WZWgu1B8vRpS3jnewOZOLb58K51mJGYGaSHfO3Sz+pq4IRcAiu49px+UhdTc82W/kSSs2V/X8iA60PJdjoG55Ey1+PPDDGFZ/Dwwu8Rx3E0CAF++FAhjLv3RvRi/LnYhryagwRKsEuBhFuy8mTM9kTfSxP+VzV+094UNr7rdoryM8XlWTtNIYmwkx/k64n6UfhDKQ9eVDQEDSmce4fF+1yRNV61pQKAzIQgj+fx//3Y41RlfSxIbzbl3itIhHqCZ4kgRrXsUlERWnK6JzZ0KswdqTehjeRiisE+EstL+fAyP/bBVkqDetuO4IE38Pcz7tfKaRkgQlsBsMvjYfQmn9fDghIHvcW9ZKARRjfH
4X8J8422GMuNs5cMwtJL//ochLwonIXPCr5ubif+++Qxtml1pN2X9Sfzw8ogP3p6wAidHWCEociJmX5oWMM9uAYx6T7jhYcDB3zeAbfz6R5V+YI72OxggT5ULXJi/uqZtagyItJ+W74tld7JLiHIn/p46J2roU89mp6U9oXwwOAfKbx1QUNhIM1Q2mFmhICcAvL9IkaxCjGS7JqJ5n7ahqoAd5MdB+py2as6JYTe2/bbuo95rXxTqAOjGCtBdWcFDnRB6BpQgoACks/KfCHXpdJhh2uut+K/8OGX5B0EtHIzysj/cJSWs9d7TI5fPdCv1LXUfV1l50HVuhn8JA3tZ5yW72A/kekk2GvoTEBSmFJFuyBiE/sDaAda0P/zykxmyNRJ1KB6KEGkq4bWiBv6hI52bnELnhtqb9z6Dg6A1jhRC19bAVhtFa7I9qxYSkqqDyfE4R9mdzbsm3my0cQVqjtmOzw/nG2rGFOyJQajUHtUzX0fhlkBs/Dc4fsUzgubVlfF0DwjRab7vRMIhgzTedpN/6zqAMFYtsmy3go+zJil5JT8YmVTCmLra7EGxYnw8c/N8UJd+kcoLFIzojZZ0ChB/F5of7+zNznYzDGLejwbvT9xCH85sa4HjRz/nl8LyHOdFdv6cYuIPz2bmLC275e5I4FnccRS2sQZvC/F00D3dGgCz2S7sfZbYiJFkx0ov6XYjJQm7e+cDX3H1ebrPu0xXQ7VEHGXuMZOk4S+UKhdJLj1Gpmnaafi+F9/A8Mn+v2nqcqFSDPvo1ue3Li6GxPzFAa8qrqFzsVCDCdBta10U692tAFUfoaVoOvZuWP0N03cPh5otSs4DmFAsFRazcxNNq59bFF/T6lCZIQZ7Embb7A2wf7JY9FFiz9KBDO4lg1OCR1/XYfoTZrc/ssHwgGlzry/hrnrfXyQ/0h+KkvTcndqLX6DtJ2hNgV+Civ5GX47+KuLxLxFVt+1ZdFIXsgbJLXYskH6+nnh/ms0FLSJner7MFFpfdv35DrQ8qVjlYnFDieWknplllDBPTjdjKFnDT8DhptwPSRcySjLdG1+WCQFk0xAHrBRotkp4bvFTt3x5fRj3IG8NVnoKihrq6Gyq/JV+E+QXZYbWnygoLCfO4RZladAy/N4UiUgs9uzHDnxulbRKCD3EemEKa5p/tYZQH2VcmAh7ihgIGpOS5AcHRpso23s8TGxtWX9+eaZq94EMVHzgAAONY6H9zDfDmeSJHdTaY1/fsquDPMFXcr40pzjwx+eTX6TOzPfa5eoWf3vM+lpkjqnSzee4fgjkV7wEAXyObIMFNK5KtX/cCQLlDlE0g7UA1NOEQHCyksGjbUuS7r6JapWglCASgpgpk1ZBS37JUDCtMfGLfja/pG/cj0CNW/6xhBd2xpB9GkuCjLP6q2b4EdKjmjCxStVzFV8w6xElVX8+o/gUEfRUk5LhRam1rfQJNH8UmL3XJc+q6HD0E0UNw98dVlVfsPZXfDGmpW9HgmzulWQjER5bm02SqZiRcVnyHUWG1m22I0MRpg1LV5p8BfVUJQh/uwk/owXUu9k/7vV5+YmeSk71HVlR3yswLdyh00yq93d9c7VrTSj6W84XjUG5VNNdi7NEdRDjAsFgTwiY837Q3/OVSl9Gc87D0/MHEFQnKPfdlINxN2oJol1pDerFjcuMBy999JoHqIl71sGmH0FddzgR4cdMnPvot9l6cx2U9uJOStnedLbytMmmx9V50iOHgKaqzoAtNkJ1EDYG3yBWP831HGdoDVxZD0eLrRNoKABurhwY2tz/MmubD6jZ/LEW/yMy/ZGcdc5PWOWvHNatFsAYLAvIKv+sW3EdEwcQQc3XBHAOnJxpVVpwBFtDudLwH4EJ8RGUvAFlXV78XnzQAtodLsMahClkzXuYDgtYSSQNrjXbeTVyWtEsu3PkQ/omTWr9mkcOnC5V5FV+4vOqI6kmiWLHOO9ONToaMr5BSOf9xv4sVWNdvgACGhezZpm
vhCzMgcfR5gTDKs3djs4dq2SSP0ZO7b50zQhIlgQrjkTHlAJ53AoobY2Df9DAgXk/qpiHgAuYZzuYyyMBlHUXVzz9cF/Fw/yUXmlp+QcTrwbS/cWpxXL4QW4zxKwFVmGwkirtu6jTcjmNeBv6vA3RRxeT6c+7zYgWUbRzqJuvghXUm3yPF3ShzWfnfKREvlf53g9KbOPFB+CunVROnsbcDHzvPAjdXN2msDGu7sSOPdbxUE3DLX+/u70zBhxtllI6id8dBvbIRqOyottUjkijRUXIxj9gn9w15MvNNTj89fUDSAE/C+hpwzUfeNKEGYxL72Ggo0hqWLLioLNrUcMX3iM0MyTp5sGKVwZ/wXfb86T+0A1Fihuhq2da+gxPwsFfDUkNeBTKLfKUIRq8QWsDOt0cY2EPrG2WUkx/fxQSPKr6m+MaqKQsTZhDIl7xX69t3FyvDfhihZ9WXQ8aeWfZnWbwIBQYQ4gyYsLyqmbV1o/J78diYRe6vwL9mL/fjtvHol2yYdvRG3HPU5m0XD8cCU3zzYUkM6CtNP+hQWuNUfpH0OVwe6mm525TgDN5IiQIs2UBrlY0vAuWw9bNhmL+ndzDberKgSRf9QUh+g9ER7KBAyKoKKA5sVgWogIdCDALRckev8RqNSofsLTh594K7X3oxdnbRHvcdF7C5Tw961IdUx8IRSM/sXqjL8/z4p9TkLO1qBDKKTCRBXGqKhDB5DSBYq40PgOUDE2re8RQAfetVn0ATy25xOJ9mLgIf1JtDA4CQrstSJHjTQJGUHZ/BZs1tOPUnbT5r+eFRbVogaNyJ4fjW7naXiYT74zbcWWqWNByWtqZ1DXIzxbdHDGDCS0JGZZflEK4LefqlNaBukzuOg3eDsMbJPzAiFELnd98Ptgw+pU5tdJnkE9F+ftyHCy0uisjKjXV2jeQHs5KuSGDHVy1Btjtvgb89VG3rhk2RV/O5gZ/Ozi7GR1VHH6TCfsFMQ8E280m6gP8Q0vgn6gP6Fdhp/Rv2k8SCmNHxwSHOS14D1fwEkExK5VPqu1iJPpJOjwX0EsBGJGAf9TE7KvztRLnJuV9oyzhkaja+lGsGiJaLTigvM69D87HLbcrRCjjrVLopbLKfpisFcEKBF3o+lvhqTqEGuoxUgwz9nVhgtaKFBd15dbqTrV8yMSjiZqWbtWSfxANDDqK8A9lVmi52V3wQEpu23IKkDe7Qe9jJbWerbVIpDrNk0Lr2w9P/ffgC7f+FrqrDt+aBmBf7D+0oByHTFBAw4sDp60/iIHhXJdlc5ClgkmhCliJqZnZJoBmpCTKcAuu8zzzmoUcs7UF/IifXhcNj7U2lv1gfmt8q+TsC6n8+1CwvhHqG0vqVlml8PvSQzMoRDJF7qX7kvE0tHputmkxnXcMUMFhLUn/pzx9YFr+cHQZesXTrA+s5mYxMI11SGX9SSqpQstCvyeDp9OSflD4DMGek1ykrJc4eqPNMB+GVRxB/JJzrLXKB2K8GNWW+zlGdf2rr0sgrsHzZPwyrvPgbWqDkdUJKwcPGx12+IRC6eNa+8/Fi0Ms8tdZQZqybF97f0/m05+a40oL4Y7ZWNokPY8kdJ6MOkAd89gBb/0IGVbABZYFjfLwdkiRwAt91JQfp8jkpJZVzS2W7srjn5rfE2uAIHj59UfUuWVonPeOTaAIdW8Pmp/wEp/vCG09LYMj3AJYy8x6DCXV3xz0ST77FG0fb8GlCpypnRNE16IfjnzZLM1JqMTq0Qf9evdGB4YYoc9Z6BDiz+jwWfPja53WhWJCZLEksE4RU/GZt6xjW7eUgsGu4TxX/eXZThJbyECHDa+Ybc4qMO9v1U+uhccVoiI1jmrnFlgGyqZfTLj5XZJ5dvL7YdgVVChbuZ4sgHMVSLeh2bvSx+EF9jcXVpFyE9gARX7h28I1Mmlu6iRhlWzoG8Uj2Y+bgXGy4rNrCsp3Dz5zywwdG3g1oeqcWnlpv5kDL2M1cUc0zUMfnghFtliOQ6g
rFK1jE8ykfo5MubPVW/hTfNidL/a0luUBruVDR8GcRj/8FlSUO7fnnYL9w377zhW0PvsqHdz6Ht5YZFHoX3G5GSosMXYkTUDKtv2IwfqfVSYPiAc6z9sok1nAROW/MRycpJ4ra84MEOWQZaCTkvTy0l8HKeKf9s/b70ynLy3yUFCJYOdPzuLYeAHjiuW/BZWMG089hg1EJcO7xFqWmr/IQyFG5kvRdh6XYtICQyGUjrO9ne0A+3ZTo0eTWRu/581eBos4DqU5mKrrrVNMpYOPbru0XPBJ13ZLozQjnZOr0Nftw58bHyoGfp8/+TcrDrKppA34hiA51h0JmKSMo3n5rT+SNmy21yOC8Bs+Sgt6ecVrwGIUpT8/2zhoAGWtDfTosaxaGQH/N5T1U7gs4XGeyibLGGyDzP5Wjk6wuLF4/kUTTGeTq66xnWMxiPhvZbm4vgcI8lTPCJkUnNxEQL0vQZOYAdWDUOKjHUaCXmR8TJ+OD20HuiE8bes4f6BqpCYfrYWLeKMJkTbLIhXf1/n+eEry/Yb374VJBiyFRs/3awwn0lLWa4J8PBdhISsVlCWpaYaK3f3lLpDtb0AorD9xwPS7x2w58/44xxboWWjVV9bwIdnBClewEJY40G/h2vTDl7p8Jj/jwLfhErASDk6zYK8GNSo8Cn8JYHMZYMTeYhwukG2Og9Q9G5o+nHZNm/RDvpDFnMLdtp5CvK4rK5/6TEzlMSaXafWHNRiOwQT/Wijc1vl75fZg8dUiiaBoDrrpOyJFUEFc8VsB1zGUE3qavz4umZ1iKVvnr1owrmjsXWkloTPJUqa5msqWjyvpnLlKLn9GGXNS99elKcJkWJ1dEPr3hlf2F8fP0YKd9WzC396qoiGAGdkfH2pgXdchWYAxnqk2i/sbN5U5o4wKbTGH0+rkC4xTiXOVFQbPtlIYCWtRVTj8qUmqjcgrMfzGoitUHtJme53+4S2fLixeSqhHJu6Ro7DwaSDus5Zaff8JUTnTZvinerHNGUUIs8uayHWBlR2AWqgt9bexWtT5VbzIQ6nrwwwoPKM6PfelEMA4WPlmg2oOfz11UpEBoJYxQxcNL2nu3UfI1uwmA7m3amCGEN31hZD8uA8dbHjvcbUkVRCoZqHoELV91CtcXqOAPBljP2Swuk3IuDidQkcWGsH/9EnwQuVXnALx0CAnwSJvbJVFI0UUNIXxSPuElMQa0Urbi0GcKSP0HhlCZdA2ZZWm682ql+qPF+OImNRQhTWVkDUux2PsJiPYVMzaRtnGMu94V7XJ9yTYSVBW34WtSdFh4MiJSUFv8cezEOeimcmHx1+J1nLMgNz9YtA2j4dWTBeB/ri596NUkI3yKhsAQrt/O2kTEhQ1wjwfiwNro6P5JjJOPYM0xKf7dXyiIISXC8A9eu5sTJ9/PS6tBa6SIpHhzuVGYNaGGPJQQQBNJUkHmp7LTDfZ8Hh4R5la/3tUkpAxHZfKYZQSmcVGIJSK0HsyB0sCHcaKo8IXPkxjj4YRQWFlo9rau85uSCl2FkID20wjvGb9garfg99/TRVp6QhqlFXex/iU9bhJfyNQY6/Fyr56L1kbdPHTV5EOFTMEHY0LdfqOEPbXnZXfgENbbHzGjMGQQwNNSYrDr4H+lb5lIJRIUJhVCFfhU5tSP5tdifRUERk2w7XGny9y5qyVQQYg2eufXsa1y+6yVUVpGlcYZMrQ9yxmxEg20bSB31D5XVVKYmqyFc/yHymZzl3iz8KeZZ6l+BmKdzOI2aT3FLZUmLwpOJ+0BzUZkuleA4Lf2sAW64VXFMwdndCwMRAneN2Qfbac4uvouAFXs2L2LbdI0EjcjkqW4n8n4LByuM8eNT+QFLWBikLry+sKfEb0eeoBOvpphifQitdqDXYY3F9CDa3F0JNv8bddoafz537zLoSOhEYznWMctuuTcfkjq7w8pQ0A0ReJmvOka7vfGTrzvATaNSuSZQznOD3wA8GZnNoB1D+mU7kkT9yXxoL
wo5cn2GIWzkJMD7dMk0E3IJfH9D0lEn+9SROdrU5DW+GbIUEEK5Ztr+0D/JWWvWJqyVXoHHz9VHLjR3sJnWwqDhOz+JD6KC/YNiaEPs086Hvur9D2oqpp6ga/gewnfLJmOOsjaEhVfSbnm/fGbx137wXS+pdDokaciS6p0myspvZJ6yWZbKTeeCx6//yVe/cVxzSA0pfWWK0EG7a702LVNFkJpqlF5WoEJcR0UIYo7nxkV/H7PpU6pxRlob2eMSzko/x6s7tOU7xOfXmzfjoe/xpTZ/iKpB+r6ERcj7nsJEUE8fEex1PUrKNr5c7fzqePK+s20g59RnwBlI38xzqoDNdk6rommbRX8ZgebC73SaPtZLu9Df/siwzEYrKeDlOWoI0ei1qhM+gizG2MXhsRW2TzwcshyFhthkaX3ncmZU3KTgkqctBq0V/UCPUZrQotDZy7cw+DxyiAYJtdOc/UYZh1h/0XvgZ+dByi02l5USnLtMMMiZ3T1yMh1/oXW7uC+C2gJQKtvbXTLSY0oUrgAzaj6scWF52qCnpA2zSE+MA+yqucuItgPyQqIOIOU8aLBnypZz94qeE6dAQI+UTttDiHT/jQDpXRHGlozog6R6xsmJWzmhIcnSdGcu5RoeyByqtbRHPoBuxO+DTew8lEX+KsH5+/A14CJZ86qZh4Ur6lcBiMk/DFEB575RaJeS46NoYjVJ1r/OuHQb0G7UtsyPw3d14NXbVKxAaCzRMvmujI8bbMeKFY28GmS0s3qgZ79U1aLEWg4l2JQfbxy1VB/mKN6mf1bcAIu3cXMqFVTUxchYXfCd5NFt7FJCN6PmU6hl+ypamz4WcPQ8Qx3G4INqR3l4NHQQ0Rsn8T0UVGqudwimqtiaPg5L6AmVEzed5Mebq4xqhFWn8fyc+KblSQVUiL35tGTNcxpq4vCee1lA0PO+5PBAzWjDh5BPcYYwZPzTPMFF+a6Yo105o+e5kbuGlHktbR7nD8XY4BK3K0aeMYikrnEFUjvokkLotl/dWhIvJSK4a4sFGL5YP18LliJdFBd1aKtwJ3+FnK9OqaNmaY6C0U84btXUT4fG98ovlposTOQno1IXNRg0vfDJKzee3qNgB9JG4lgW7lzR4/O9fcuNS0hzOu6VjkZ22xkxSnoXgYre3kD/zhhr8mk6bPfMGMoykj5p8EpkT/tQ+Q32epMgOoVsBm9Rlm0OgpivJISURb6FL+Wo70mun0zdMzXtXwGl1BmdG4rImnElBMGf5G/sqaDLHkyVjJVSZMoNqFxBdyKkIgGAI1Ki3thSm6iXHvKD+DAPpY28F+l+5BONgFmlL1/f5NP7BIBge6ggn4gD24JEFUsR3rTBOF6cKNECLJfQfHjd1dWojDX0jS7cb5PkyIhlW/G9PqKe73+frWX3fRMsudE+aEqTLiEdfx++/FRdOM0xN/vUrAQBuz5wNZoiUkcvMZ0tzmk0JxbSer6kThRjFBWG1tn9ogCywKsvxKG2IDFvRSfS2RbdSwjCjCQ1WHGKvXqJh5XWDbQOe2FLU9yYeFs8nDtFIA630Rr0nfeFKMxGqahMfUqo3QxC+o4q5G3y6UsdrKAXoTG5TNNaIqKLUDfUM8inYHCOJjed9CNEF6FOP82sHxKZYX7n+T4pqJPbG4bXX/JyWM3ckB/jn8oX8glUtWCfKm3IlQ95NZLMAUYTox73PzcHStIIC1+EW0nj41Ym5gf+sqVo0qxfhb7iytjaC5prdvjM6G+Vtw7PJl7PdVCwCJNNkXSl9BSd5C5QRkjaq64Lv3c2gvrZ4opfcAWfYC2a/WmN06B4UeGjBgIaoqV22+VPqc5bNDo7WWG/JVXgiugvjuSxyysk/jNVed2/zyjHWoz0O6AOjlvKjn/MulyvAzd8vSWF5NmixGaGWhR3O/zd2HwBcKw08eHsl2e4oYpWDmkK+07/lJ8ju1IUIFGx56hhM83He7NAlKXBwN9kQQKGoVHMo2HTiOffBn2AqN9IW
CRiT8odDrxZ+p6dKNsZmxQVufBPGpxb43P0TdaIaPK8j9AcbiDiv/9qsnAnyXDKJIuY/x2dgpWUYZxSADid+AmYcMP9BjzAbeDCGWkWS4cr4PU2mxvtkHoLe1m2jlvpG0Op55z+TQ87WvRSPtJKV/52zshTLutqW2LaQw2jL6R9x1WXNCZDOXENssuZH6iaMPw6j9xnkYw+F8OoBLxJHHtduCv3nYSHFllFA8JBszKlwMzKOR3OAu00gmoRyaKpTGYVzHYAo/ZEGdDLJXHPmjqR1vHgIhrD+oeeJZuSmOD300+U1EG28e0HfU+V5JyOdQnjcHPtDM5vVCK9CXuIXSc36m4BEfok6+opbUk1rllzsXzESh3o025oQkODV0AVZ6b8aBIWv9arjDEO+pzB9mjM7PwMR2odoW6XEsPBGJyCQVCdu7pZ7I7quUqskF6MecuLgFcihbxd+wtROjHpsyU7/sCAn9bGgfQWM+Eu+0NBtPjNTqXDBCQ+OA3lcIsxzBe5ZJkcKKsyMoPhyMix5KZFG2whJ97TtIjY07Ya54IFglxTRyMXY0hXMa24WQqTu2CT+fubJtEsgQzqopB24Wg/Sh4EhjLJIvZQ/nKioA/SII88Zo04+S3J1AXMA9ChI5upUmgJmTcML3SaHGuhL5kpY1xYEhu2BaAWYQumTxD1aNDP0laOrLmaWF/rXJW5b/AtsGQ4nPU59wJIcVjol25JO1bZf09CVAynhJ4BDw2f24A7tmAD/dmtA1U5tqxySKVqnUAJGdv28p8nXu2tbylOxf03xzHBVzHQO+ZzTtjYgVIXZ+759aKbXrjGjcahp7F0Ex8LpPq5Y1o2Cg8fen7xyhua8ngrgl+kRRT0wELvd/+ZDKPS8eYFUKDKVH21bD4iCapTTCRfGY9fpFdhM7Na/1dqdpQE1sjUDsY6pvd0vHLHoim4Qdoh2H1j8gJNlOgm8IGPqcOpIyXBSltKywD47mUho8A3J8jACtC3QjesPFoL5gTYwNTkl71u2m2n0F7KabqNFL/sKGMv2Yuk225TqK8OW24HPLGtVC1tGjn6kvNWjU8sEb1/Xao9sCRVQ0S50liVPF2tRlaPjLJevfiRsfXiUzJ1bjlmecIu38WUUnp4dhEJqmpdhYG8EOAqnJL4w6jbuucc/nTX9OT4w8/PCmOpWDVxsealFxQ0/wccrzMP2cuYJgBO9shS+l875DDC8JMTcYLPaWtZ1Rq73K5tHsEOEFShTnfqSKVftKasMhIywVh3Qp/Xg8xVaKzziApSgnJ1y8qxv6B9GGObo+Fd14FRlECVNjhOB61RNEF1u0ySB/RoPqdEA3hc/1MiEH5zJF3vOP1WZIhGfq4tFx0wYRSXIm7aSxgaYsXdU1/BF2ReZLr9EH3rD+tuZMXER1Pyv+gmoJlbhep5aSIXftIJxPKxB2rEwwAkytV4HLbLoOxtD0T61Dv59sl4uw7rozAdVFR4DiiqglXHns+De8cVQfinXk5yNRC7NfFMqYGXNLjrW0YwY1IzyxS9Sk3ieQwimhOXzTgyjd18fJdfBiLl8nh/Sx1TCD+kCr/oa3fGtNnVhUzRvXzRRJzyN3W0WrKHm3g+7HoSNb47X7raOXGKjffI88IFCvf7IwRFfiAeCJR7zoLRIHMF7OsnEFSBizqdseVAAtaNr3x0Xvw4STwLvXAgYx+O6o8RSCOFYBNFStybWq3124JGj1LnLa9TARldsoqv34dxHK370v2wFK01j8N5zQVd7Uxpf9t2t3APEkb+ojo+veHeYQXaLw7NcAFggGQ+Ay+M0EZHi6RKzE129SUN7svW0Xmm5I14CXBH3WEFh0LDbwXCKvs73VY+kbSC6ToNaqB1TaWXP9FGwXVkEY6Tt2FqzC8BkPzfGvmI2tY4EGr7G3rjPEYrWnVCrX3+UnTo9j9NZMkTL5lquJd2e8Dp30EQN8PtEuIO19s5QwDnCRnBgcfy3LgTl+DsoJ8tCm/1TxSvh0JU4
Gbp5orikly2p3tjzSV+B0qgrP7dE1V1B3vopPOyUAwVWM4M4/vg2UXfQH6Nfxtbx5Ubbe3LAfjDHgTlZNqphWSRUHxQyJShHMA3+Z25kfR2k8hlYg6nOwwAx5MluXijomPl+tjF9zvl86yz+g9vEucyYJzTOk5lLkRM92KBmzmzB6dfn0r6dQWTCi8tyQ+GPCEZ0j6tWkRhlTGPXd1XjRyVb/u32Kqf4QnR4HaDW5eALv33uxZqwisTCRQ4YYmOUj9TsBiU8I8kGahIuNaIKohcDUEwKvi99IxDZRrUOUqtDej5rSS/P2iSkp+6oHD+3aI0WuCrPIslb3aPmMPOkfxrUYxOoMbKhOqWXmkRs7DK08vcGV4A4NiSqPJmMNr26Vzdctgu2++3owP8oU7z6iTtoMB7qKXnxsPBLuKSZ/KPcKj7vJUX+m3YoZ3xn30n4fHvbuWXZwmWiKUFNqvnaikRAxwhieUNZ8QsHcT2Sp8tVVCM/0Jv+Jv8euoOrUDQ7uRN5rUbk15GKqPBT9VRNc02cCl0Ld+o57myp5XRuiT7uWxPaxQPxdL/B8CrwbpAq4cPZ5gMmaEjnX8uG/2TSk+R8e6xiUeCesVaW0yzmoI3tqX+pjy6OhbGy/YuCVoZhP4mWLmqyCLwa88Gv/gFxK3C9Wkz9diCvCERzrKsCt6+CfaZ0fDFqMm/cZo2GI9YVYvsrNaaNqt1s7oa+K0914FLYBiashbQgjttXfYc0ivpbfXBxA9m1NQemIT79b+nhlyOH49oOokeraPQ71711ktbihR5TpAGTJMfFoyuva8RUu6SKVoo3PuoOoeUPth0hctMPBzveMN73sOEzvwr5yGNgVftjNOJVv5x9Zu+k4rF+CkD6RtD6sChk3zicooCjIvpAvrZukfEy6mduOOouP5vp2FCz4RQf5SmnYQ9x/g6r0GalTT6io5ozzbNFiIBbmaW94fLdyI/7boV+DYRBO1SXfNhf480Cbel566RD8s3wy0NpIYiycxiA6mIX0gSwa77GUKmh7x0puYe/5TVb5dg74V7Dlzv41oY9Vn5SsUNUGbGeHF2GrsHKfOr0njA9JKHzvQO9TR/WyCX8pSr6BOFQWAKB+Mjqn4C8XZRilXlU8JbHRHUWCeMdKX7vx6RjkhKp/lzPMNbpzrJjeq0dqe07LH1Ms1JMisy+FwHUgNbCJhmLahSZbGEqOFy73GuNKfQSXBmVFNmKxFtN2wvsOZRbdQe6MyvS0ZRBf6M57llU/eyHvgLODH898/M0f2ctt6fSSUCWtBjOAPyMKJ/KD1e6y06WRMbKvvix17EfBk+3ZuoQPet13GHEv2Ye+wYWs+99uxF+DXi7ohhBDiPo8/Xq+kHDChnVBpsws/ROU7cIDcxhkq5HAzbY+otV9TTD5JQJpCGa4qAJCj3jEaIm9uUt+YP1vJIpf4iKBKJ5B/nAaTiTlSSBf0K0mVuKnxjG3nUzfKKQ31oNvBk9tk0cIHajZwgUYQoeL/phpt1oDPhRV+n0hw6gzc4U5lAODbOZxTjHCYth/L77+4F8st1BzPtbk5+26GAZuFG6IQlCooqcXycB60vWGirFqZ0Zoj+oqlfCxNXy2vFysmOcSTyXO93Fjy2A6pwU0bT0ZfgVr2NaUhDhGRFJyiC9KNCwv2/kn/r3/yLsPjUQ1mq2YdF/NrUPz39VNzL3qkMVdnK1y7NhKA+bmp8A6Iyf1WfLyHIVxbKkxCI6xWbmualsGvS5teL7YcG6+gUFLLcRpJ9TyQF4yjGPQW1SicKYNLMPhVs46oZB9PMbmqyqhFfszWC5U/HrAHbmMWeOrZ1ssB6lWEQRpiDgO9Y3Qlw9LYfmwlG3JbIW/xUaWccuws6F3xZZsrD2jIgKVo+B/ytAsbIuiw6CIvI1d+P0ZLVG50JJlPA0Pqzvt5CHUTOHr4ZGu4MpJ7aVUNRbaYj7RoctPsWP0sTE0qMmqsQcl00km7wSz/ltZYQb6rBpntpwBLYw
nMhUBLa3EZPRTdzjj4jc+MJrsmHx9onaDVk9GHz7crT+GfPnRDjb6zDd1cnbuvUg8fkOshB3vMXfed1wIXIyCFv+O6jGk4qZCjLMaYJY0MCJSALd46oEvkMLUNYmbXYmjBDkod8tjy76JDzMKyiEQ3JnKEJHOHBo9TOJlCP98PLpU47vlf6kD6hXOoq3RopkqO/0WC0IrC+ZrQQFuu0iA9IrS342klALbw/rxGcXnHWmcDAQMg0On5XIysxnReo5oi0+NB6kAA37ledAIPygmQD7C5XlMA+yFlW4ZtUcxm2+y4iZGTYRNTw3vcSMAgr62PlVsB+TyYiJ+MeF1dvupJsSjXsNk21GUnF4xuT0VKYU3p8dWHsWQic/ZtUI5LgJ1dAHLff4m46G/nVYs0kB3CEFQo0h/ZSNBVs3dorV42H9a/D01GQNRxQLvP33QTysu0CU27dqiDHQNnzUOuTW0AuC77biqV+IqQGHRuxJyym+jZ8fnfrDMs2b2KuL8TXHgfe/MpORFNxm0E/l1ix2kcXKJQWopLrp2PTt8eM66WDzeZAK99aILNpR1Fx5o6a8Xh7UK9x2bxw8+sj/EiYmzNpkK2KAP2g2z6qkxXy8Q1sG/t0pL9tkOO19z09D4sXZiPUqNtlnbA7WG/d5gMcqvoxqx3T9xcnEEtdvgSy/pdy3f+E9r6aeNa2DXHh6Sg3ItRUkqrtAqFunr4YKUHkPJxTZIPvbTeXDh0LklwotGaOo0XqDndhg/WeEH7/4u/v3aIc2wsscC4PHGazvukYLNv0OKw1QYBJgg1KCxhoVYar5lmVKmJkROPEimofpM+vJsxijzBWFJKzNFQmfp5C+T4Kfwvzt1Vzp8GIwuU8LN4gMb6Oxrlx7kARpTcAcJMRbYGUKvJISk6nwAIhL92Km9Vleajg1ByTeaCBQN0TC0QfePEyX7EhyGVyub5d0q4VzOU5GqaqvsK4ydP132/Ui9/FF79zPPCmjzcd2j/bmF7xMrcPVVJuHi727Jp+tQFLV2sywLIBsAc6bp55jzGaYXHskM8FP0Ma84PFhFHWshnZ+SccNMxiXciieLa0Dh3m+wB0No+IH9oRmWEBtmPKi3j7rQyFUQ+Zckj5VGUXVH3N/hU6pBvYwqnRx0Dg735ptpUiwOzFnJWoyQzEWfhGEkyUOfiFWjKpqeNb2aXr6iKdbwH9iQJHapQr8Hvk0AgZji6tUqKyznU+CVB3fYJ42/0G8RarokIOhJy2/5t/vUSMfetkJqK4BOxT63NZFESfOgblJCGKbLtC+xgMUnIoG9j/LCiA89okjygQXEhPYOZBIXDs77m9nXSFpla25gDIDwCV5Wxpc7TBRvNrh1YM+EydLHxS+Lq2yntIQ2+8nryapv5t5VZrPRQqpIs0LiOyXA8VuuFEatDT/jh0Ch7E5MRyXEop5hCPTXtIu+z228nQvIghvv+VHQM6Sb8GNYPPPLOlezXtC1Jc2x4EI0PxEJK0mNkvp1lsXxEPEHsra0L3N3aS2LpGjAsq1DI5GMPD4g1G5/+UDpwPyWlteYe8x0gPYpMqYQD1al3HM/1AmeM4gQvgHIGJRcmpCYA6mCrhBdxWDUeg9U976wmD7BvoKnkwCHK6GwlhFZq85QluEckY/zazTFmhi/vADPFQdi4CMu9UyWf9vx/y62ucmnMPycHOAFgPAHtyrP07AcPl4U11z/8YXUQw/rs/MczOdvQOS/uU2NVxsuTrHl5hGkgdSWFAZ9t1zQ7Nlh/bkacICPAM3jcB255A+EQJDdeE87MWzAzhTvDa+9YOD1k/1hTDRZSbFk17x+PgPodEPielwCIp4Hhgx/oWGnm69VP7dHs5uFyKjpURPLd+KD/H4dWgURESnjgPSZP45m/7MVKzKW1HAeAhIINgGOOE7Bby59lMk9V+d1fJU19hd3++a8qajQ9n3EBqeEfmXy16Z4wSmtUaP6wBg1dBv8lH/bBOuwT+6Dl4qJfI8zC5ehh/9
VMiKAzhQhhS0h0kEsbKja2qBs1cmlQuxP3EV5qEgJV88zpyAKV75/bquW4YwFLDVKLLOo0KZd6UIFoU36tPpza9fayAsYQmN31GE1bPjECUKg+JDKxHJbRm3PPJM0ZAUa73T8MUdhofHkYchI8xLKUXoLmZwj3tl2XTHYB9Va2hMXPlKadKqIVuI/o8DWmSr54ldYN108SQIIv8GmN7UB911dODQuOkdojoJUMnmoF+zWu87YOMgA9gjcKjW/iui35wBLp1FjOQHixLXzbqHOEzrML0gnAPUCL97ONBV4+MENqSIY07c9P8Afv+xOW4XeUPmHQ0+t+YjtdB7c9KaP5sZ69orAwc+zX6Vu5vavbrO5T4lTflQEGLbxxnqXntUMSKMUp2VeVkKd1RxZyj/iR0+waM3UcjoX6nOMKvYjTTVJrQWLLsxZh8MHJnefR73ZA3xl3C890spZWmULUG/EA9xlhSnOgeoXZdv2NUoVlZVOt0TEr3y+tROLMXD2SjYQjQrRECl3BQVSgTc/HPfSjZ1zjranHJNntVehCa4Jcm2s+sU9gXaQBjbKH7VZ4aZ18P0a2fpDCYjFkyfkHhPHOrPKF+fe+VzvmrzLYq6aGtC/xuGcsZZMWbkei9Q8DLr3QMrz4SM6lHAHa3oc50M5sNapZ61aIDaKeG7RH0jgr1uJeSrtlW+jFhprH3z52DU0X1xzx79iJIVl7W7Xepzd8RoknhCnK3/d3CDH98ZwTVXaOAbea0U82CEkLvFJkFwICpY1kURy5rZBO+bNy5NfGtUZfru5t6D5dvQG76TPAW1oJI174zEYRZRqrw52tW30BUObilX6MDrK5Xv6rvAzz21BjNYoyFGA2otrQXt7fVCdCFht3OYIvwb15Ey9WpIsmz8qiL7MNdE+O2e5X6MtvYdOUaNHQfOKPss1LqHS0UQMuqm3eRrvP0B+23LEVBg+MYK+aXDIdfvldsuUb6wHjwoYF+YCQgqhG4Za4HdV2mdSi5K6Fuyv++vnITvc9EoBD4ExHYA/iPkJvQjVxzcx2D/zBZB+jccYYAagrkY/mqrEYcssC+TaZjdQtxA7rrdk+94EjxE7t84QcF6z2ErTzeB5goQ5Ws5bEY/udQlRh5dGUKApfvAtqraxD83Xw8fO7sbrLW775ZWwSHrmCWa4OsfWpEEvXtrECvFBVDpphZtPjYWNwyu9GSNOV2W/9jvh36K+5mGkQRvB+jtudGDlDdu806DwxCS8MmVVttqk3yyCqIku34Ti6sVLfWe/wG9M4kxiG7JbZwAxydL7qm6FR5OXHYbghRlnSuqvyj+xJviqMsSGkBVJfiwIn01ii4da9eFWHpHbkR3b0GSInSYt2TOJ2AyTH7MzCSZjU05BQE3e5Z2712YkNbveirj6HPxlbzsGjvbXN6rNidlVornL7RIm6Q7bjNnmgCPrAgf2of2rw84R3fGG7RPu1Ts8F2ERh9jPo9VMnBFc+DlFLRfEbfBTjgtIzAROUgsL9GL9FY7xTT6T5SsZ1B68KRtUB9LhELhZe9wlseeuZpRjOaZynXPSVJPe21+k3F6Pp5DyqDFoXYXlh0MnVI3OU3YuumHaaON+89rOED7xZ+DCmw1cHsPqnw669lmNWN4K94/KKzxLUt/J4K6UriEYkqmaoRHp6S70pYXfPiofWnXQsDEylNKre1xBhFuS8OsvRfAYnu8K3RG42p1K+0qdnjJ6kscHIUgmG62qNNsSJCIgXeBv5LPBZToiVAe3R7aJBDgU5oGSdMNSmahsiDL8nmnnvfgiMXvsdVCV2k5vZql8MVuNrr45g2PrTLumQkNzYs+3L7aH8ePopQ+nhEZXCOQXnqmc7dFBCgOkbUyUFwpnfOPf4HkpFUNJlwf1r7PYpzR1TvkYIsyDPIZwHMZOfd06S8pQv/YxQssdqMSVEPumDoihwWq++tD7rb3zT73JzZs3tt65IORpuz+Ge1EYOSoi5aWWgseRw6Tr10U9xKM
GT94KuIrFM/eXsNM0pfmjRLYvdlqA9ZVREVS0OWD+Lo43mKQN9PTq7VUDUbrhnG7ldA+r1l9Anorb9WuIDgfvK1Ory8DodjxZYen5nC+jia3lKImSI4o3Wkas47lcEn0pCJMGfEM/hvW+46nl7hb1OJGR3Lvc1YsJPZvXdr4bshfdKkXfTxxGcBI/6Svbdp2SOgxTZKk5ZbMGDJxD/koffFeHuqNor6j8E4tlagKQe3vTrzcpwmIGaAjIdRfzaHhF+jmxNy9dAY1Z+Wj06BZtSdPFg3qR5iM1FbggOOGUwzmOS6Td/WpWGG+eIJNemOKxgkTEnuRBHM8c/ArTu+DgIt0dZBPVqajnk2AYotWtBKn97WXHawmGu4tX09+pLvNH99dQCJZEC3HpovIfK/CA2rRDbunrN8M80LVxBHlrg8+TU6CGhr4j6gWreP3czkiJxyQZaIRcox7isK7XYY/tzMrKaFUDZzG8stv+6Fz+2i3YViD4/G47WXQGtfkiaUXZZRgbrMC045VvxBvO0rDsJSz8guljK4MTxH8wsfjFR4Lk9F7vPt0hm/uraEWjyp3kyWoEhKHWGZG9i0/iWyOEH5mz6WrxdgrhW2jfD3YhO7GhRJrfT8+YDqiySicaVwnZr+96pr55LlYXgyx9LoBQMO9mn7PL6hwtqS3M/W0U1noHxMwNISLnQq1pB1pddfzVULIkBaQj28wuAwtl7ruWTjOmH3R2WosSeQ8wkz468Kp/ey0xME8XLZqlUU63WgxbnR/pRkC2NgXl5oFUDyZ8lIQN3DSaCCc5nlnMyrvrfdflQNOtwoPjbgB+vgyOmt8uPLR40Sk70Z4tKmGDesy7UUzTPqq5vrEPGipNTXSJ/DUwiAuPDY1CtnVZ3rF2Fy0QGvYpBwwBBVHXksMAXYV8F3pThH9Km/OZBOoDN+YlhsepZp13Ep6cLQxWyuCyAUIt6iD4qynSxV1JzKLuOKB1MX5HCcAQRcSR6CjmYJTW6GAA3xdBiBBPdLXUIw2PTCf3PTd3hxID6URqnSUU6zMrYyPf9lXTbyHQ5ACXTZ+tRHJshQotV20jQhfZKS5pR7w4rUjWAefingtldmsUo1dasXVKUbDyLqQS2xsZciAnZf5c4+LgH7ZZ5cdoopwYD9uU1+b2JfnDeghbvkYPg9wdFJV5lnQeTjoTHpcQFq8R5AJfXnFLk3RYA1E7OaPN2xpCIJhj9wphFfEBiqJIi30hfMZmfCnSh6COx9rigNGRI4JRLEJT9VN3+JRh6A/p6pXvMJvU4cdj2h4EZpvQAiYT0XKAj7vfCW7ZO1yOvt8Kx14JxE0X5dUckQMYbeKRcNUmteUkpAb4zP3bJPGJ8Zo77kqwZTiCJEdF32GMYUR2EmQcuFqKkywM/fUar8lnPNsdiV+ZSar7ek1ALYxiq1pvSxs9MS/iPjt3/KpybDVV/avuljMDAzazYIyDE2zGsLfWyzQnS8cN3CzSINNf5A5vJHHaXyx8FxRXfnirwT/8r9R1sgr4O1EjSlyHAyMPnJpQXQBZG3/zXrhRxInqi3+svuyQ727CSMQd7+4Ikw8+6RErhvjS5jrn31hD/j4h/cnNX0cnqDcOenk05iGnH3cTrrKsrWPbxvBj4NLng8r9VQHKgHcnns30oFnjuG0WGPt25Hyc/blhDs8y7jZ/eSfWd8VLHdW1cI/gqYDC5jAlyDdIvD5hraK3aYnMzHvUStzFMtcrHCJG8s6bXtHN1LVUxAKd7/bLt3AGDVlRMnxNbHiV7Hf6FqUBQ+Rvl9FmKmyMwzhB0l/4QoutZQNjJqZDDN5nFQHPyqbYObaI57ttpJm7owhlHTYIABG52KSyLyIKRBQV0KGLj3pA4Lts7XzPl8+BaAz/gR2+U6YzCbBvovnZn9VV3xYWJK92uNUrtJxZlUgV4TKdn6us6Epd78sU/+Vd45hKTTkTdxbH/gn+pJXcQb2GReZNbx64doSMPCiDAv2J90XuZK45gK2
4uHABny6z9NszjlvceWdbZAht3LSWCOdZ1OrGW1GBX2YYl1N0+oI4CQ/xr5UZyA7lL3XixSsvVcjx4Ca1s8RILc5wnDTxLE31Ivfob/BxlSratHd3qvLrMiPL30ldjI4kKQx606gIBX4jxgk+60BBMYgKYZQVgUIe7LUjYesh67m1vFT311N5tX23V2vtTK1EvrxV6vqkIrUcRIxaSx/muATIC1BSf7k0sgj5AZZmIJO1xJTeuVlL+TR8kUIBh6906ySXfxGfqYvi/Yb7lB6rEQYgJKqzsUoj9ly8USqVqv0w77B83tmSbGT4FOZYZewkyJmwsO7WLzV+tFXDs69hLEPxoa/SgoAHWm48IGi7btKzTBnx4ueXBsI8tqmjylPK3n9tpmu99iIRkta2UzTx/nULEPFn3nox0VWLh9VFwtqOdvGIZ+3NverLT2DIxmsMh33ZoRinZjpN98TO2r2jLQM4gNWM+L4JF6Qv+Ydm5XDx69/TXsyeF5hCyJsS8Cb1On/cjOa8rzx6ma9xlkUD/igpeC70zvxmjp/orJNBYuapbvLGZqizn7BQGfbswwXedQ69CQXZoqLGKk7GarAtdLoljvRl/HY9KRkRGemH3h6xz0PkD0GIQ5BsBZDDi1eg5PEnz09MJ3R9CK0ywVJrAuw73kXEwp5ykfW3GflZZWVUCHfqVaBhIH1TyrvXdS3eX/dUzfYd5fZVWJezyIh3yQVxOhv4Hnjo57trw3Bb23GF7KLQIaDxr1Z2Y8MCNJ3CTBUS8LS8fqa0jdX28tes3Ufb0g8uU8N0EO46VdOI5vLF9q06GMvoQbG6sCKctFV7V3P1M8rCeJpQogr+BVxdPaFwLj8cU+1VPLTElqgWL4mckH/WL0QV6a+XVat/WGWSH23PO8kkP0jRZlsK8j8dXSv29x2gSUFrkPVa6xs8WsT3zpi8mElTT6rt8/yWRk4V3bbB2/OrbHovRbw//W+WrmPLbWUHfs3bM4clc86ZOwYxBzFT/PrHHt+F7eM50ojqbgBVBTTgvDYTOcT0cthnhn/ePhnc9imUr/83fsiYxDWnPT4QtWc5FSGVclw0+tlX1t1HW5Dg37DkGsbQy6Byme/k2UqDmaeVHcAjpe3lPtXOwgDyUFNv/nRGnFxKNR/M/dntnzV9FpDIb6Ug8MHCrjWO2yiLM2/Ev7Kf7NQT8HNz3W+r6LnIi176GtFwWwI9cBva7AzOSxXHvkbBYdWk3zPfH4ggd7uPpgm0DBmbj1sLA8qBGMaPm/ST5V92G01nMUV82KPlttVvdQBFEb1spoolC8En8eMQH5jz2ycak1VOBG6xjZNOfL2Q8mx7N7yQGvfgL7c7XfHXC8St9IgDqNFaI7lKBcVfERFeVM0PUGx2QGDmeR/A97Pm2y65y2gR+rDZ2ln2UqZfouXX8qceF8Ldjc3SvHWbkupmAvKT2LkrCJWZ9pz1xJYF27PeVTxs2W3AmYB6ckOa17xulDnfbNQ7kNyLeplpUTdy4xAkvVTOwhr1nkGGft35jSvRmK4z+gHOF6LHiEmZTkRDyePL2MPOfNng8WISHvKeFvSvyesxAVBQZDxXI+d9rKk4DsfEt1ktHQswNzMKZmkd1VbL76kN1sqFEg66y7UQ6gixLpBCVXZXF0nluu07GV+oYLgJe44aM1ZOjXz14DfBwmg2RkJDTz2gt7JTvyRuoCcMgBZ3K50WgYPTh0nOaVFOlnWBdSDqq65hrFb3giN+3IjsAncTPwdfGAe0PewdGbap+NL0ugdV2PrPr62tm5qa/vDV33p9Imhe3wDnsIW/lX89kWe9SeeYLg3zw9+RryKnIcbF9YXc4LUrJNlDc5T3ik90KNBTNXA29KYbWS02LMyPXWKawSHSfFjNn4D1gXbjo0emCetexGJ+PzLk7cAy3i8Afy9syF+uMgId/+Wbeo7SG+zaD8b8dQGTC/94IuOjXpL6XG6oXRaJvDg4i5Yl0SMQm4kfbc2kS/BYPBA
k703fWoi+8J+RRtGIpGMt/I0BO/2KDBu9/A0a4cukJ4vqYXhUpbw/K52lFHAEMsH7xvk1N5p7Qo1Gckn5iwPj747uttz5FExqFcfpe3Mvtp5+BHc/njYCZ1+hkYB/LxtaOw6aDN8XAJM6eCNIqZQt6HJdlGS35UPjPiD9zwukvzxfXZDF0Jcu6kY/NvyRY1CVKEFnnI3tCfCzGPL3F+tfEnGrvcYDNPZ9o4XrNk+6fV+OjJSWuwMmFLzfpmj0EPH87tZh3Vu1csqf4rbZgtnjlLXVkM1Q341SBwt6eeRxZ72t66MZKurL2pH56Y1lepL4dLhmGsbcow780PKXFPLpoy2ercbWzWgdUPFkJL9MvnNh1k7za0mOu90f7WPtuTW41nBuDEw1hqpAhAJdkYHvkP9ghRsLbXfIoc1bzaTfiYtCd1emXr0LkuTLBhr97nOxzyKJXwYB+ED7iGPvOlXXaqnS7Rfj5FZjT3pYMijkuAKJjKXWW6omtD1uIrv3/RzduZ6/SzzKm+7McT+KBrmyFOmYVLP4wDxBwyCRDg6A1nrPRC0ytUNjKj84iva9KJ/IZkAgDVYrRPybt5irzTsfoOhKJmVl0CrgqBhODVogf3/jOs8VxHouBLbpAMVKBD9lig7j4DFdxFAS6DppIfYZXSnvfiz0B5X7bFiHwQ9/SS2hoALc+Lq+hA49l3V/8wGghexe7L3fbfHV5A1NWUNRR7/nBYZXIckcdecbo4DUpMYV9tEtYpXzebn2TzTigADJCYKMwwzasdMI99GnPKPYO6tBPt6g1nMX+E3PPeb1lwg/NmNSsNznZra4PuRfLalUG+X6MuLc037ODsPa6yqUnmtP88WiZfcov+BU/OxLHbNv/JHpChNkya8h3zh2mRop2GuSyzRwxPmN7mbNT6O6J0SJI2VtCCzNvaPCjtNLo7jZKs68zEHbxtj2f5mTB9Fd6ZNSC2fnnBqPjW0yZtsRa8jPnKqKuK7lsFeSon7fGT33lwjQR8Hvx1Srqd1YvbdAl+VlgaWk3h0+wMXZSGGJB0taP9vxnbMi/saznt8y6LE0Sj08R+pvXPiF0GNuqE7qge4gm+OqoRr6h1asupbs1VzNVDDojZCQh+UJd4taevvG8Ywb/Wq/FTZAh9+isV3UQk5Jey+SnWl6xT5MKfnnMX1wfHStflSQ+OO1iD19ylkO7uPrL0u+5+Hjn+FMP5HlWxHQIG/IQquN4tI1qWLi9OPYBvTA1RZtOKiZ/54z5bvcPxMEVRbOnD+O03YDUy/kUZX5NL2H8303UWc4PLzEFn52TocppLVG8AmK8AZwMAFSft1tRy30cLFF+5U2NaPG3WX0tiVVwYI+H0uFxTWT/uaceT8ASr7u0FAvtly9J7ukWsT6C3Yi9sjRAuZQpUFU4kJafzo/9FPFchMyLfH9QDUGsmvgzKiN72HQ7JEVZaGgyaCb/dztl3vtT/90D/Oz1HWLtaog7N7OlUdNdeURBAPBkALESMKiIBBqOSAq7qzbzDyM2kpRweATQNULQ/F0sSlfqpVgZXBLJ757OdO3ZJXCX0VHnu+540qIkXTwXPotKDd7t/VTf/vva3NBgmACOSjKZCfxeABNgA9Gi3CGF2W0UcZ+go6qP5ykYRFSktBOhfgHKQumfYMd4C3gKbLVOf0dJUkOe2Lvn2yF97EVV4mccy8nvyMu+ewYFH+K2/vM4FPiTNcQpGXVsquLv8EadPNXxJ2U1SIAHaGqOuiNmNQ0ceqUsZX2fQjepj/cdX9kE47Y8+GsQjLKrUr+rmNecR5BDcpfYktF1mNf5SCDa2ut6fdR6aE2ltaiWI/2tWjkyQ0CV+/Xl8qTQK5iDkODU1YYkHMW4omle+pDyPWEmvRoXflhk18d3SQTZfSm6g77Z6iDyBqLnokPJkAvI8gGyNkDvNEWnTNDSxGb1NUYqIHYrxRkpXyMqQR15onZ4GF7azQPzZQN3eSj2jVQ/cKzwtGUm6fnl+Fm84w
/diUZBZ1rxR9GvPEnfy5uVlCykX9gCJl4rOivWtWQmIxO/Smh4en073x5EBqIZYy+JuDwALTG1vn9dqffC4FLROuhF4eROVKmlcLTBNnPk3DXGL8qHDizJ2yEc3iGqyehK3rB728gUtOHPwd2eZ3gKjJp+N3YN7u/xx7NyRioUlUoTc7O3uopWTOzxcfxnmHJC4DSoWUGnZEVA7ANH+03l38A7vshFIrEuM81vkFBAex7O39QVNqhkAre5hnJX8FrrNs1Axfyg6qEgOoo7XyWn5L/nTN3Vx2h9SrNco9g8op7Vj7O9RcAKSnls6zd8uskCfKpbiBtVWCwknjohznhcHYjiHyC44hOhdWtFMEBJOQ3P4o9OKck5UAyFH76SSOj9KzTdMgv/0DKN+iy+9J1IZF8ChPGzMt974PgfofVQ3MounIq5lld5UrIRMrTaXJLDOoSqEuCTqgs9pwa0/5l4Jk9Rd9AI1BcZn5eQx3V+kSmBs8ZAB8dBVRFK9Aj2hfNprbZ0auFEy1w7TXnrcJgQth5bkrd9PvrSZeGf2HvIv0ll2r3ayt8ikjO9tfs3/yre6SasBV5lkcWgdhBtKbz79lcpKygzQxbz0e+F8azFt7ENmwnNdbEP3JSlarEbbHhmYQD8gVWF8glVeHzH+fwTbo3lQmDKtDSVrQn3nj4ma46Fw/DKycJFs/678RRYvJplN96AnFC7Oi0/+vwWeFebe8hS7IoUjZhgP5Sq5/Ev6cGW1ycaHCZ8A9Wf1sypmJ4YeVN7t6pMBFfN40t5DYVOztIWfwx+n/v01GGMJEE/JYrJ/76FzI3D1adBcLWV5o0XyUY8rvZYEaviNkjE3R/468oHWZXgFNZV6waGe3DJq0NJL/Ch0cppX00DUsRWZLXLvpdyr/usf8D/WY0C6WEvzsXrIoykh3bBffG1kjdUnSIp9LctfPx1TPrY8qoYh9TYsv8gxeOMm0w0V+DSBlqDFDacUH6HvG9erMxtwmI8p0+5E96wswdYhjKGgEfsL9LF/L1+Xwh1n5+aM3vOuB2zfxNYnmDe+OM8sglYfqN1taJ5rXQNqhThjeTe6oy3EjroQM9W0h2xSEb+qi9HH9lfIOAaZny4W5NFTcv6+Xix+tadgDYXNG+ZuN9s8SEtgS/oxo8a0+F9AuuhJTPg+gYqnKSEYUSzdFm4n/5v2W1VEjuUlTGbIfYZ0XpZIL5SEhCBNMEzsQ+6cHcfyHrqhKINZ+VtIJFB+bM28FXg1j1+Ra/+sg7ztn2WILEY46rGj/06o9V7LQgC3jMLJ6Sx+d0Wtgzv8vyV/6VL07IzCX6EHCz3Ldsu6Hoy3JNmsqm9kr7MjMHpr4swbcsEkvg3lOesUBs4MWvg1K0PQObPE00P+RycQFAqCTIj6d3EwdKSFGFlZiUfD0iOE49AnzJETD7ZStVQzPMR4TY1sCibslffFnHDCavG2I7jbAHbCCiz/jUyrVcXkBSHxlReiy4lp9Rf8x6g+d0kf56DC6ZKqofjRgMvnpxifJ6O5Z/PmNvO0nwlyTeovBwpSJfUjZ1+1rzL7PJk3tMB3iQc/MW9Nq1QamuzoYcJJu9sRUPc3mZZX9pl6TXW5b/duMr7xxJS4oIvZC+5KjAh5+z6PPECb8uWs3PHQEozkv9C2l+s8/EL0jefwtL3jWo7Iw3/2/c7d8Y7lo99bpz0j4ERZ0v70OqC68Mx+LhQNnyUSYvDXElHJ5+G523UvVAQI7cyfl2/42NNItm+A9zJCDk2ETYw529ZAykl/9gx6bM1STS3N2KbRnNxvVgbr58NDJ5Toblvh1WQcyEY6TnPMD1RGt1PurjwdTmB2yR83IVSQQhFKCuwngf+GIYBfTQF8QAvP58Ybx0+J7S9BNqN7d4taugoN0Gd2TYsty5cwd2dh9+CdkXD6Jukg/6S7K2La1Py8/LNx73yTy0NOllRxe512MA4ZPdlSTfKBjTJfUj0Q/f0NuTCWWAisg2C2RGv7QUS67
Am9r3ZWojBa6IpNnCueDdxkFGOG8wx4UY6+ZjYhhUskqIk4UL+k+r+zVoqTDiY2sQZZzKGxwdoMyl7PN6sSoI8HYXPCp5fsRMLohk/i0bOsfnE91f2RQtCnNxVvjBFV2UjSQvOpoX3zzNnVWO94HkEpOnfTBmnd0KcN0NocYs4vCaRukp71FO94VAhU0nijqlkgl1oChBbdZuiyCTkE37i0Su8Nygh9YkEq3Kik7hRbu8YPBUUGgnM9jxmckFZO1F9GPlDVKwwP4kFP0Niah7dEpkt6NvtOV8Nm0n6qtI229KCj+UKgdNoDe/9GoEiEv3oWKYSeR8sApcuCrXk89kHDhkQgGZooJGETsUcV09v8gavooAEGd7AbMXOpKMnym678OfMv0T1Q1+UrHZQ1QAgecTSxprw6GJ/UuXOMZ4CowN1Ku0+a25EdqqokVEolYbIrXxVqKGYvnP7OJ+EeEHcxwmr+dr4HkcE1yRgQJN3GWPDi6UQ1Oaa3ogJWwS4FnOyc4SU/fQG/0yTPTRGJtSX+MFyElymHT/dDGQLb2NH64MNMbvO5MvB1qDFR+RKdLqPrpz1+Tuq96FUCub+8EF9wktUomlZw/oASvm9MHpANO/TJvPtFaswBdDShtTjjJKakQA8Gn7cwOfijWP5Q6qHGSN26kM1FQEXS8QEzvQkEt+Y00zKZs8PQQ1WM61U482Z3Hneef6ENtDptOYimbxcleEoAnO4dzOYIu//JqpQ9eskjvkG+naw98Ll66Gq8w0wesynIPB1qEkTdocX4e4xVRwJeMOe534xd67IffXX4JLtMjAaZIvWNj9k50iA1qoxbtVbNETbYMEX7e6K/oBVVr6jEPlG3t5RY+YLicTTL4Xf31kIR2BPPD457efUk2hZ8Hhr/FA1TcPfoQf+lV409ylbnIDwAh7rjvIhVG2Mjz0pNsZ72vTCZdqQrIlWLryBNj0e3Y9qF6XJMmB/CKHWNSQ7erkOY7vYRuj6Gfx+N18WDM/KpD15nyZV+rywbNJ5yWSnDr7A5dn85gOTWZubeK+jUTPFymJH/orb0JZubAf9J2mvxdHrljDfKcKz3vxJfD2jTnhCEka0s+yRuE5hHQ1zvRB7+5pIlsByKe9YRtohR+bws6ILneqG9X8of3fyJECKFk3JkOTwIjUlywXLqRa8WnHhEufEitpn2bkFWb85GMne3InUt8TzyX9vEN9A3p92Bc0PJaO+LNFmy4nD+JLvCCFnUsNEpZ0HdVJGiMW775D8islpjFyEsQpauiQpGlbRD61to3W68t93ILVa+Qzen6FW4Mwa+XT3tH4dCPcU7h6lhadE/rTtYJ2lNaNU/D4YRa0j247O6M7v+3a768Oce8pdcKDs+S7xJlyTIK1l5g3xEL5iaU3YTeHoMgferWVkUaRkWvbRnkSWD8U3Btm8xmV0j6AxWdO8qTLGslH5eNGJSPnJbgaukeY2HPFqiLJHz7n/MfOsAksJjXzLNMGxpJ2mWAOEYC10L0jo9V9v2nW2Jog4vs/ZwOZHU5CC6eewSbDahX4a/qbvwycdwAbXdfrnVQkdZ2pOklO+oeRCV6i56T8y28A+IzjKRVKGokdTyjQpUnsE4Xh8ebYw/g1Eqv6m032b4azaDuusiphu+gtUGZKC7v+UguFygVjyF063FGfv4m2um7LNX9AT30U5XZeqKQGtOzBCOYjsRZ3y8C/RIP1xOAKIJK9W6WhmBab8tOG6mts4RTxl6V+oYDCSH/XRr1g5+KW8ufxW6g/T/zCblGuTUFNl2Wn2qEN/mKb6Avm+NCN0+7Lt+kpfXc5045/69UZG8XrS2xnJopp0PXHDxBwd5UlGdL4dvQMvtVtO2SFVcQv2C5icpLzTGJwI8oj/Ax7yXpa3DQFYFdSnT76U0jzQt4dk9czf0T8r+gQlF5gAQJeCJb9pn7dCRVXj8mpVFQGwC0UkHuPmvB1IgvsrerUispB5jSnmepj/NrmqJIKDMQUsed
GQKtg9WWecmuK8C+pbKcyYhkNURb/FaHJA9oMG4uCi176aagVt7GwNwouRDAh8wPV0iUZoUq6o1604vh5/wQAuN/W/Hd3NPBLFBOjSubCB7WRbn3IP9qUNF+OPBwb+uua+cW7fyoXUM35i7CGP87izbaMCeYCnqGW/91b4W/n35HhZx3c9Rrt0d0n9Jyen5h471JTKvW3AQCog0OkA8EOg4eP0BlV4rkZ+C1G64IUdyIx8eAB2PPf0Oz3ERmf71PC1siOklman2gqucBrX+fX6j5ixzRZXgV95od45j+UfWSrTmTz3/ODw1Dos+1IBYr5OP2B22QXqCF/7HZ6uuH+q4PPwVa2vnTvGCTM7RB0DQqWobg3+pyKPHSbi7iSb4W+x84n6B1zCl2Mb7I1THslyPY+UrVhrxF9oalT3oejm8kFHqEY7TNTK0Gu5S3pXqjYFdc4zHwo+52b9KEQpoy9BLUsMpkzgrPGguV4Y0V+IAYD1iyV4nu2sN3/MTHNKRUXAQI5beXlC6FQ5MovofsZwxr2TuDNk2Iy1iu+65O7xQaKVXSxWcJMPE7YFJP8k17ZGnFHP0qPeQyn58DcB1GJkub3xLxZoUWvH69mo9dYfFM63uW5vNy6XDU5T+8iwOTl9GKDWv1Rte0nPFB+hIYH35Z0MbB3XP/VB7jVeiIGVOO3FZQLG2hFpVv/2pi1gRNwI0jCvFw2QC+1oUf0xPhCv3jhXy8CkalGsf0xqx5Xh7arHfXwaX23JKU04tJ4oZDcxWfjt3GDRgbWH5myv2Wn5oVh5iiUPxSoafxrmQWfZDnmGWLUio4SVqSJTGeqcIalWEbRTETiJEs9GL/6I8x1XNz81EemKbNQBqhMKicrwSmy0ap+Y3tddVRk/AkPjK1rTYXR3+pv/KuurNUtEECvZ6mfZamPFvfCh8nLG0voBFQorcACL3Pr686dAnfEHCGy/0o7klxAe/EvRJg5jFKsfzoiqikkaI8lUiqRD3lkP0AQwEJ/Lp06bTL02Omotv+tFxIfD/miJw1GryU+9nMtd9/zmIk3OuVnT58CF57a+Amw2U7jx8EYjmOF8hKYUhW5Sfty7jK0t76/f9IwCvFcW0lt+S3UtwxDBCbX5W/++rqj4FIBq4NREufpGbzHcgwjLL1uDljLRqmRXn7xUakUiCxPg/y9SbudmD3366Ara0t3v8G7VMcyQMWCJhi+9pmMLLhU6qpTiCfsjzS34y8hVX5q+Cc43SHvX49EBIJJ44co7wxteHuH/5m53rL2QmZPGxWHlYod/JG3tXlZiSN2IB+K8gTciKvQ5nEqxDdR1nmZeIBYfsYsAHw5mGmfAJ2A3Ly2V18w3c4xH0+6qyHlFBRW+iHqV1aA+fTjj1Iz9Katz1NTMp1M/CJoCR3TIojipZDfFdMEGBuTBwsLW1GzW0E+IdOGKVuzDR6x1WiNPG6VavYgYswtL1ajjPlLmPNXD1AIg4BSLSGFYMUqOV/p6EpLskh5Iwio0uiQerC35kom9r5DudRoRT4iZ358sSIpt7eSaCBrFVBaTCwmuyyzKDR3/RTMfCIDficYKPnWpkCeWOWOgdkQ7QtWbqV7UEizdvohWnK7XIci/ShhYFTptpvJr2+LK0EF+l6wTqJnhPAZyXrXX/T4fAuog3hD7TsCNECkvRSyGnuj3AmK41pVV0P8jJDeHyaaRFDvESbll/B6Xoz4einxCZ0bNnveLivHiDct/R5geF5GqBPHF58rVr1uM9gu0n4AUYv6rbLkVk+UELcyx7SRjc7H6ZKRCnmhBCO8iVKm89F7GSEISp8oLvvJ0r6b0vjXW0ppSDnN5IbjX78N7/VT8DSPwKw1rxViHfklAQdFyYh9fX3zIcmHzayv8LGx17DA3nBh3EcvoBeqEGRKuATIkkxEE7VtV1LvyPEKfoM2P8RakYv2RZSYhrB2dOZa/heVCvwH/qHMZ3s6xx3wWmqIjUyVsEJpUvRI9TG2c6AR9d8Jt+A
CZfNfdNqMVpX1G/RI0AsSlGWyf30r7WiwynoQvDcIlUPAtpSVQjJ6gs+bhRsZhBEz050DEg1rm5v3+m+vHcGRtzHsPX6TNdnJ6q1Nt3VS4nCvc8ieDkZomjt5+EczyYSYwhcU0azUk7iFNaDhb9HuAqO6M4oM0dX+Lv8fThplZ+pEBBwKcqxxWDYeyq6w5mWgojO2oJQo4aHsi3wzY1MTWGKSgtigTp7Xmoq3kHLxez/mg3bz32UoHh41uU57asQuG4aKNlNwAjtAYZ2BDvre0PpHKRWu+StXSt8LwqCFQsYXzjTKLLjhLy962VYmWhKzFaOnmdTawregZnht1huXaWIiBS7QeQUz3IHEJNyx0MRlESplmRg4dzpAD0pISd7GDXlaOlChMbqu2w6itKw5Yf5wITwkaqia3+dTDyKklD3NbXLYfUh/Jcw/lZLMPr8qKajFnxJf6dpAkBikd9iATmlcUthwVC9owBQbGdcEce3tnCd7CJ1mFNAnTGNvR4V4v2UE+WtuuCfvT846rMr5hy8kMoCkeo1Td/K7qBJDeuQTESSk6t8YxfTXq8of+V023EMCccFuirHC6+ZCEzjIvo8oYUT5cvUQpWQRbjxzmHuYtKxYVAafyu0KL/4YJ+k4cOZgbpUxbdjqeuWw+DEx7/ibSd+V/hXEfS1oP4wqFP2ND2frnrrM6FOsQ35552QhZVgN9389RoMAbmC7lGc/7mak/ZyPg4Jae8qwajAtM73yGFNKvCIS01EP+fOjKvqS8xkRfd3VLKycoJcQC84PnfEqDk+uvXbasHY4tgh8jIRNbLXEF8XMTwcp07rsCrQ2zFARGhg731CY0u5LJVF1xsG5jDCeDLNKu63f6auBeujuqFo9HPRtrd9mIw0+yM4J2T15kv1iqJIt/dbuATMMCRa2vvCguDQb+hLXj0FySb2yCUYYHsMvPkiGkbZBTDomhb6jPv+otlabUukgmLOPWzSbr2C7e4cMoeUG+se9El7mE5Ktd1U0zIOwXYm2Aj8WytOa9aNkHviNLGLL0Op5lR8cSpEDSDqVNlIbh3t2EnPhS/Kr9IuNeFmje/mZzb8UHcAi4wn8119tyYv6/lwDH5T1xgJdbNrghjOd+++uwwkB8IIJoUzj5HFFOqnLESCfnb4eyfOxgXz3nhmnul2g5NKvx9g4VPl8tO6T+vhcNatHhfORfhLsuyfVVrUNWtZKqDmGUkoeX5bz1gseOwReP2n6qqbyMUeMYCBbSRD9lYledyWiFFEROlpz3/S5mlvKSxCFYXTPlHXXd+m+zF+lK4QU33MIcz11zfuY0cWsN+r0ih2gLeXDX3txALKYmyz1fYhp8zw7P7lJPnGBjhzTwJeckPEfkd/ofH7lCVkAyiJfaAWUevHFmPsfR9Iy/Y0eWqOi36tof69BRhBYFrxPGtcxWG+XqWwYG1hvUSo9mGMw6FJ9jvBJReC27wGmDfzrk4PdZZUpo/N66QX6WYv/gF0JBEHufe++Mx/cEs7Pakn8RhuKIKlaxm18YdbH2QZj436SCvnaAnoAC8X9Rg3srwnt19LvgzrsnHaL0YEgg02gHJES1dPd73n/NU85LVLz4BbuE8f23aQ9AdhoPWES/HtPMRHs8m+/vFRxD64pOuF6VD74IujD4C9Jr64QuRzezY1004sqw7BObsN5ab+dzs67Wk59X6h2oC7sY6WuGy+vpaWaw60LdtoZI+IRBgLV4NSipeKC8bFSz/7GTCctmdt+rU9bzD0NgRW7qO+awF8UIk2QiMidL1+Ncy2chvg9mZTneSH9Syt9I0Le4A9LS2ljRjTi+09ZzCO4dbUAYeXgrbjwjw96/JU9I5qVvmBDPUw/sVDVImJ3tuLaee4Le02JJeKZc8BcK9J4yo9HCd3iBdllwB3rews3hEmUrp1K9TVHhoUWEu+iMXDqCEvDzy+qQ2jE+l4EkixccCWRqy3XqGAkFMYsfGwEgqLg4DbuIbs7F/MtP4OrAJq
sEoWzNd97EclmUEGNquUX8hnkJl2yrz8WRA0yS+lfa69w3Ft53xYt8AhheEmRyskQZpUb841ikdhZI76fNQYC1PCphSPD83SaZk77aCEkjU0GfPJfA53N6qLESO7T+8t+VkXt8nnShfsDcUPfsRdR/hUAECtleJMAUmrZBc2iYX98eY1wWloW3qdW+yuG5wBSbZA9zSXMfCrX3Fg8vnRWKMjYEBXsRqCARRNLoJrmWKP0yQ7z7pzi3c6A46VyZloW1wkssLERpenuGefxjywsYZScK2e0cGFjFVydDiLTOdA7vn8FQidgHyRJUGbMrKuEE8VdrLeCc1mu9P31+WvecueEdUr84n2jZj0cCu7vQGh+i4n3a9ExalnJt7+IdDD8gErH7vULqr2w2/T1r5CIpdq/hEVSxCPFYWHPACdjl5I0qBdCQpjdFPVy/qVua1JkzD2p5b/OkHKRKWj3lDEQDxBZ6k9M5X7KS+owNi/QGLxDzv6ai+JBcmOFfHRszEdgx3GRNXsBIeWRejffDP0IfG3huKTYUbekcokSBKqPK/kQWz9/IHIJ7TS1VZjX+GV9KL3X/l7DCc7OzrpVY5RPLD9AVWvA8W6dv9i4ofy+StyHdTsjr+8X3mxEcjHlv9eVdh0eguFyOsYRDcDBThULI1WbOsXWfiYixAhDstyrai/IG7n5Z/pXubM/Dq/YmdjsEo0PKBRPhEJ3pdCpDw0LgDu2XwLQO4xL5y1dnXeHrVW3qfznypM/G0cJL5ys4Lk0lZWNJXZ16tvQ20TKECh2xXtMVdAUTfxf0RldL1+8F6gRPf886+I/FXwlh5n4bqpolwb4rDPNFgoTmVzYtiNZuAWj3LxfP90rSh8heLkmT/n7SGUHcb/I5pefPxQyBz1TgYbzC4IKpGf60eeJXyUZIa0WHk/vac9O7LTegXO02+ajYV2SRRhYidhDyt7VMDp3eX8jYSwdufB5efKaMIkcbbYHm11XjCo+K1mcknuz0987y5ScxOnyFv9u0hyf6Zg4KO6VQ9rSB4ND6OW8JLvgGh7IwM8JCGTbAU03G2LZ6saPXxpvmRHE897+8PPxEjyogfnTTZyfWjVE3orR2mtrQagn+cH4ywz55GpbcGs4QGSr7ygl4T1wAVZsyifhqkYv7pYoWWosLzL8fvbZIhquPJUEJNlZ+9EIN7d8CM76pql2d/R7a/qA2uT3Od+/mBxdvKiBhWOe1r8OtjRjJryGvO44b9qfnzx03UcjH+RRSId6BGEVMdkYL+fbXwEgiAuszREq6E+4Z8/wZZlqzlCw1UBkILs/eb1wfTi8Uo6Q9z7Jho4KC/mXA8XNchn5Q4qEO3Pp3julfjb73qYD49HOL62ey/prE+wyYOwtq0I++9hpvO2Hlyyzw3uMT+WGf+5BiyCl6dZi/J2+OBnsowzy2X6b8/pvul3VnMyUyZoyOz+XiWfsF4mWEFVKwV+jSJd0KuN6xuk06SsUOdIrLxuYGdrdbyibHLCYI9WMmhXS5dfCliEBmG2hKd6MfZ9i+k/EXmIqfKzag595vUHXd5ER26seLVpMD0z+Mdd2iX9dpJaJeHH6rVXu2ecNkZnHxx3lbx67pQKWtDFnTyZ/YrqQhl//GFyp6vWvieRJRazwIPXVDgwwZxYHKDE3lC+sO9n+rTvnB24fi8q1azNPiA+purdlXK4NhEKdx6j6L3XBtY4T/nUQqynMb0ljxo5JUON7LT/fUz9gCSttF5npIb9F9M8daCHFapCMoPwPb76c3pjZ1BHKH3olOdJeJO/lPxe6F+WHkEPq+1dqBuKTmT0IGXfTLK/leqB+bOFxJmtG2Ktfy2aYcy0r9WK+dBP+MlKfRMf8pqBIbawmrqFeeg3JadTcZrDwDQ3JEfIz+h9Ce18YSgkLwr/5J8nA0iLxEMVvgNbgH4mQjBoCz6iFrPQ5LqPSeu8FVX9VB5fzUrWBUzRXwdLZGtEbL9unbdTZjY9Y8C5LgbslgeccG9j
InBeid1TR5fe/8WIDFHUIBwfmLD/XS/khdaYDOyYXIGhPv3bHfIjg1vaszM3GE04uWmv1FLro1/Hg9URwKawJ/NNW2hMJP0ltjHVdjmUduY/BwNYaUX/72kwRzbsU8Q2t+arkUULdSf9Jap031C4JPk14Emb/dYWz5oiqjyvGvOY8yJuyP3ZhxLzydzd1oBsRFOAmVn9gyw/A1LUupNQlgN3KyxqxtUQkte4GTt3xzvsf8yUvHIIrSDauhIRRjHrihYUqSh2FE++Wontkl/W4GflEfHyDxIqJT0m+hmc7E+w0R0eIE+zqZaxtmxU86uNf0r5fELWmHWlGshQSf1W5o8YQ5/fgZUz/VX4EXaygGx+rUaiUaEXxAo9zN+UuU7b9MlGLiawOm+Fiv09ZnhoGWrwXW/MDev+uv7JtkkoEvB073t+OLLQzavabEAngv2bM2pch1hzutk/8B/Krzwn/VdyanylZNrpPv/aYvVEIXVzLD7/jeBjwUJnD6XqmEY1yNi4xPdAICsyIYO29ovgKN6qRYskpsSTTdTtFEdX5KH+lrPCCUIbCjudBrVFau+MD88TznsbuYyaNvgucI2uaFJ9PLebFxsonAmtN3NbI5uic6g/nokMoL3rrgxsTcln8r1gHy+UGkP7xnE2rPOuL9Q/i5I5m9A0kLQRfnKL3aUyNiMZo4viRXW2TLcR2IKI2aoNHLTUo2l8aDYmuGTlL314tUGMLilNamT2IystKfFNRgOOAFT8M1UeYhE1F1O9ZLL2MdEKv3f6whfBxpy/FZ5MbJk3EElXa2UlRF1BVEx9awPFApZ/bWJ9v1XW8lde+WPjE08pZ+Vdok7PFhk5pr7b4TajJjiqygcsxBu2QIiqFS4YEB7/sx/N5PfjG+Jw7CuxgTfQ9GOST8jg92kFaU1yyrRf7jXNyzsnRCafBQWOlX3JDgBqiwQ00vMgBtqUZFNUIv85Ybbl2XnCSkLC+2S9IMOeCe13UptT5wKUaIOOPVGeHz/6yodq/z+ebSGa2/yYt5/4aEZukQyUEaRQv5wL0W5hMCB0HtCICenhZJ44KaqTAzIhQ+T3eZxM65oorpaOxSKeDwA6WvT5lJ1+SaeXUTBI+SObm7iqv0kf1HOlp1UDui9rEeAGTwjl0F6jDf/JxhGqjn5bTMOJzZvFE9mpJPsJYEe0ZlA1NVX0jscYn0FebQbwZdD9oq6gFiDTbWwc5rqN/4S3LLT+ycMgQZ06bwoTamATuZ140+mGjrW4pn9EKnwxvbtY5UO1D/R5P+ZV1i/3qzWbeY119HK+LOYw6ayBofCv+PMn774hSvmPGXP1wI1SjRYB/saitnDGtixNNNTtDd9v5tZ09fdi05WCtPnXlryZQc6crrUF9bWOHuChg6Sno9lpcNeeT3lkVHk1GS4rFDmZh5anTwKC3jEyYEj5Uj7b9jxqxvdVUbD+p13k/k81rQnTL6aloTgP5G5N5BRdolBSSQrYaF/CLias5SId0qOI4pLiI4RLI6hqEsfx605WS4vXUX95346yUkSyuoE/bkww5HK3mw6qCrOOu4pnFavSiAcfCkEUfJ4LW7woPeDNLML+msOsPofYtHIxC1v3NVF4U5z30gjeZSpficjExIZaSz8L+SB4swsIQytoUjyD1SgmKjHnQOlAKiCb2xmetYs48/P3SwBWzi1VO/3WH2mB5+dZogGyAQDyCk68/PMhn5b4dSkkFURXalsNsA0U3XtvJWD4OoZKNRU9+u3SMdtgx29HGxMJr0Fgns8gO5i8YIyQ+mwcrerEz1XP9rXXI76bLcFbC7hO7FLi/M4Lbec1RXaN7bx+kZEfvQZ3jeYrxkdAXgo2qc/xNy7Rn46UbhrzBgVr67g/BvggP3WPtv6BIJcKspjy4C34GDqYWGs6wHxo6xOPgp+zGN81w3CadTv3eOYsQztIhPse7iJem63qaVmZ9etQWMwUJPT1iHRbbjj7keNV1zWtWtTmChaPRuGH61e3
ocjpWMT7fizaVEYqx1Ddmg4nbgX+QDRwSPxRfYu5kSB6IMwqCPvRb2YQ0Luc6YTnfBVh4sQaId/7rz/uRLsZ/YIid6xwTD3NXY+0bbPpcfzzNQQUxqbuod+EkU6UR2ruQ38wG67xBKeukqYsS9AcSjWCkM4zl9cVmkq5J8fTgzWvskZJv3i9v2Gymwpqc9ETTIg5ujYgFj6yFS+xLCQax7lJUMNcUs91NJf4GJmjtMSv1UH9w6gobuggb7nU/47ro13GFsEDQDCVuuG0G+69Lk1TadAkV80Qxy9o63LGc7LY881rzgaKJSZ9h0/4ICi1hSlnczT8I2H5qikSnqm7irH5giCybYizuElS6VUI2f/7KEH4bzaXJvwQyS124TRWIx5c5ZgQPeKUbcpNXOqAM8xkb9itjb3j/HC8lDbLzO/gmSTJYBlTVX0B0BHkfFwxw1B1QvwBX/H2L8DRZq2uBZ7hq/jqh8rl5AFmG56Tp3Io3apTqarhjEqK9C1aoM5+/yghBZW+mPLDJJW6qsyaGx4JLojdHYd6AHBr/vqjQZKuQxZ380xKCyh1QtkeZ/QAL5SnsrjLTAJOL3oWARVq22p51I2KM0+ZPj4G25PMcFaXNGZg4JZZqV5Wf5Y+O82xo8NfjURcFrruIs95RQmiHoYDTzPjBal2KK+BxdE01ng4wsH9jEkTM5hRk4ymkPCEYCE2iuILaMsqhUHFgDdVm4+eNVDmOV/dYxCI5AmFO7naDWV1BAfqYhW8cQruL5UTfEGxPCM9tFYfkC35HlI2JiQHxpRyNULDmxeqZowlYr5rwU3D48a9gM1kCZfujc1WsIwwkUMH5zDCb4+l1qjlXipTdDTnXCn9dEUzpto4MyJOTRNtYmOiKjA7ofEFDzAftBP14oySb8bWHxWOWBzbD/m9OLyw5v3mrE1QZ7HDFY4YOOwVGgupPnl2g83NOmsYy+1Z4L7y5jf2n8kNL106yJqcZFKD3MDDevP+0ytIY/diOXl+98GLamdc1rBLnocMhtSGOUWK9QFP3w6S1fhC6mdqdI5ZIHP5629e2cXbab4ZKVv1qKtBRKTD4XlSgZqSpeltv+JGR6ygulvgqPISD/jRiceO3425LOMU9mMTJJlSNfOArNbAmXk4+Y5C2e9rJ5JYkztF1Eg2CDi0FaC4sp7IrZhe1fVQ25BOgAIjdJjM6M6nkpeT06vKDFzJLmDXLj0Ypwhj/PnEIVMhFncigBJd2xfaE2OGks6pgGKZmhEFwQndfDOLjyg+lzrqq6Z848nVePuCJpp5jL+imgAfXslVetD4H4KE8HkN2Ds06sec0T5RMWaooLankhY8/iTUfrY0rEAA2gLacokpRStkI9kVpPS9v4QfPk+bRuojoNPypqPGFED7UGV33bsiRVDx+Ymb/RR8t6h4YFs38y4W0jcfWJbuCT6YLZcQmdrNwVjt8Uj5bt8qnWfVqZ4YamB4iQrKQeXKbzzIxB4X8q9uvq6dCphMppqEd9Vf+QSjcVeYRyECKhuPJUR3B4Wz7BmXMn+Ce89BuQOnLofeCM9kDVDrzLrUkn/P9CGF71iQ+4B8Z602fxlAaScewQ9nI5le5ZXAmUfiTNKqSC213fhZmwvJI81/tRDcQmX8og0bEcmpBy25x1G1knPuKbDr2jsXE46cKfEU+iipdglTKxkM/dv8SE448ztL5bsOtcVC1tbOOnSzaSHzhhV25bccFi1d2bPrQI/Fdjr8OBs7C3hxwOfoDKYMKC+Qn/U38JNqsPrpySgho4/MINhdXOwjCD7MUOqS5amK4bruyvoN5wxw6iW8e7kofQZka6W5IrCgvt1wa6H7qJdc5KwhYPo3D52YWvpAX5YPieJchzbjUaVASZAs7PWOYmg7HZ89dBuOyhsZeuPyXhm0+zhRD/DoyzpIPsRCs6gsyIOK1uITtv3AogHPvDhvfbsvy0NJ2UvMATeppm3r+Mwv574q+fSW8cL8QQksE1FwXF7oiGJ2
+McI7RCde2FNe6jP/dfMABcVsZgcAhWWVC8y4Cws42i+21ebJnr+n1mfMEn7Ltr6jCcX+DXIRKRqBS2PfIfZ8LpzzPsFnupDnmBIzXAVQxywU2b/UYBwno+HWd/iGcBU/GZwopNEhknlbkkCQJpYWGj/2ekgNVawt5ePp9+f1XSzyIYBA1Gqf7+tqKr6hPMkIYx274tr9LFteofBLc9dfZNqrYfp545dT9u0OsvDwT6MPSSOj+HTcsIrwF+HKJtDPxN/rVbsp04/fbURLBy3Q50YvvZiyXDvgz+jxter2UqB6nt/ykXKL7WLOIh0Xi5feZgmR9+5JRAQfZ53rwLdVATI8jJcR4xisQRewQIgs4e4D9TGapo3oeolf7/DOIs4VkL+FSOGXSq7B1XIENj8D4j0OxhEv9XBK2TCkr3mGBJ/co0j/VKV+BDf6/Rzdz2az0Dd55iB22woj06rXvQYuHw9m39ZlzvW7wYNQxkkvQnpJ589df5U/UKjuXf4wxS5hvL9RaH/ImnqQOx/6EONRjjI00TfTYDeE8bIxphPJheH0i4J9j5IPPI/WmM8GvYt52aO5HNcT3CzBURjspHJloRloVgVYkDvWwOwmhi0HGkMLEZhfPkLWgNkQwazHom0h72aozIrbe0YZtzXE4f398N1AsuWJgbSaBRPqkARZaC3NZbwICS7t162ETPGd23rroTpWGkxeH0N0Em5y4sZCnyKldUFfxYUO40GmyQ9w+EYjVuw1emtKFbs1tHrW31Quy2Kybq2EPvY9m5w892Y22Oj0stK/RgBC4K9SwotzSWzhYiouZGESdfgHpzIEfCEfVHHR1+suB/Qj7Cy/5MNv9Djhvd+U5l4T3YE+22Gm5zwOkmx8xus8CNT75L4fVdBfyosSbLdHf5Xr1n1jtDpf7aoTyal18WJxiigJtmoxyjfiYfX9444+qV1SpPL1ajU/J+aJjz3o0x4v3Ko7nsqW9uIijcm5Ns+DY25B7lSrgXolTBUftJRrMxfOVcgstOnyaw9KMO/JlbklmmVImmSQghP/7kfQZV0daT1MGxe8yPAhx84Snig8djuK5V+J3iYBPw+PkX9iuTn5dIFhCXKc9AETU+jZxeeW6QHQAw2XrT5GQBy5b325bfY54d9LpVWs9hHsL/eGA6WN3Gl0va9qEKdpjk8zlfVRqHIdKGTw++LvCHzB/aL/2/rghtfwnlg2jn1it1Imv1KmFzls5D0enZp92QanJoDz229YqNwo33hZ9wdEvvtsGlVR5d0fIx4Nv1rFN4snupE0WePV4334U4G++XYwCQuXMRqIb9CJEc5Oe/ZrvD4y5361SugTKaXaPtPPvX4TqefI0TGXNY1CftVaIQgzj7MX1o+ApF05goVtXSLMUVCQ27fe7hfjwZGT/PUuMeuVk1kPo3UIvWXk6UG31r/sLBU57c/cecInK9a3LwF7Y0X7THGZgOLWyCeQVL+ULMc1C0Baev+wG3poghea31m0LXJCHb0g/8/SVSxJqm3Rr3lzXIa4uzPDE7ckka9/nOobHVEdXZGdwGHLWls957BTPyeDYuxfXZwkuw67RcqgikZ3twqy6VpxpvBnb4NQXtI6LpOdT+xe0vq0yrUry6vlA+E7u9HMboP7DCAB3j4/ISLoPgE4sXHBlajRpHy7E3mwlQ3NDEDrmgqNPxNbFbon2d5fBVdGLNnvvROvXIlvedj6pKoCdgqGlPibcPG9E9sYvInIliZhbHml0Z8rq5mPBZRmFEUEyM2G/5boL1XN34ol+JItECEmcIVs4g8mXytGwDbxX+33R55qGeH+/QMEACHz9XCoio2gVj6p+h8/RCn3vt2YJLjGd3lD+JsEPs122dL21D3G3dS/Hd3MmP7IXNJZObJhBmrP8YO5hQtKRUQmGL7shfN5DXBiLPUqgne/nMWoS90kInxd3GYGOUZ9kfNT/5ob9lxpO5TH8B5KCiSkkrkBp7lN2qUeqIyNHZpOSREBlet
6HvGUt4K7yMxTRbCRXa32BOirYJ2ZXuzNaU1s9B9cF568mjJKvMq/VY9fpr4fBg1Slh75HFr+wimRTf47jTpf9N4m2Vce7THc8p6hh30UyRsBNbMqvkUzzcwZx0Tdekk42TlNeCzVWDiPxvzN5w6fW6hMyk218uewrFkrpHXQv81jsMJB9YaV7+VfHzstbncIJ/B5MFbRQZTtgXBauoUaPVYjwR5muwSfPEJ7n4RTBUK0l611wWtSZAlXfnjkOJYGZ0P4822hsmIW1hp3u25JeV47iDfE0hBlJxbmRkPMpw5mknvtknfYzwhd6b1NwGMFCXhyf4SvlGC9E1g90ZjglT2Un6gBqh7DJQTdKS40qTpW1p8m2sdOlPd4ROW2czDB8bebyhcpka3qb5A2ha2d1eBpppO6WEEI19dO9dAL61Es6SfM2+vz15YoYSIkFwG0N1afv7aEHe97gll/Opei6QfDu+3mM6gIRj6Acw3Sy+XpHkcSoaGpyMim8YHNaoz85FR76/QAM+WiFzIdUfT8IBshdkQQRv7LNJNf4bD7uuSV+S2RFbDkygYseBebh+LL4mboEY2yF2qTmeJ/yVZwtOtrymXJ/pL6lkdt1Qqq/bb0A/zi6ynIGIzlENOnG9P7EjYnS+5XktVTLYqz+IXmi3DO5PccMt2h1fu3TdrqjG6r83S0khC30X3uUe8eHmcoDtgwABAfQxoeCGlqQgdS0qIgzbqv3MeWE6BIL6pe6+Y2UX6OLwfl+q7VwwAywSlKYf9B8o/enh/zYSQtFkCNg1tWKULBYzMssAQlNuqAgCFrlx2COqhT4/SETkVe69MskQq1d0jpzuu6tlq2Znns9we1Vb44mxsc4H+jdjd8HI9NDr8sqKJ9KnBt1D6i4ad9XhcRPVz2yczg9rwHt6IBzEgRjxlTcJqq0+/OETnpVXY6ssRfeQuZghJmL+AGzReHIGMVYf32aZ2rL/STarHri8wAoZqRZODqw63cUzbKdkOwnroNTei/s/im9i+xOk16RWDXzPvFZRMlCnXNJAQdSEG3QTLknTKm2uX3cdG2gLPHWmeYsntC+KtlsO0dG/6WGftT4Xi7UGIrC0KrrWTYW7vREv3SfyPtEO1voO2Hps585cjaJRa0+dk92y2xYXcUSplSBnyCepVOg6Omh47Df0yhYJHVPGioYGLGAjGx5qKQiETUG9t/CbkjzGb0TcQ9IK6lKT/tdREQe0hKiMjQZxQctDZ7wnhGL1wWjnJUnALtV/DJjr+TFbWbwVKdn14RlCTsq9Af/islskwgkcT03ep2pAH3ijFF0Svaj2oGSVsnunQzPlwTMx/uvZB7BxHMwyAo7iJY1Sq7q83eSc/K2j4gdt9icvYZvhcJtXTW4iXg2IzH/6spGbJ4g4TfV9J/D4iqfUj6UyS/31y9iLUg+AsWiSVS80/F/E2QlbkURP6GI0iH7PO3Mt2147PkDkvErNdkyeDrz0IH1VJ3DYpfm8regcP/AWL1p4J3/Gp60UgWCTbBsEBvauqDchL2Yemf+6GJuAJ7GURIIPtGeY7z4l9kJzkZqO1JNXZFFpxdahDRcflf/IBv0VGhTj45sgqviza2nBUvtd84P5z6qY320/NeH5jQ2HOdM4QuBN1gXlwZJ9eTvYiQ7PCeXlEJWnCABcLfUMhwy4l9qWfBMJvGfnptdRu5Gi9QWNqSVDNQRyz26k60hfU5GNq7KSyuFI55ShepKkbi0KQuLnFK5sY+Uyd1wNkY6isRn0VJIsHdsY72k7B0LVQySHL3DeZ7IUWJ5p+EyfjGO1H1J3+qgRUV0/rJbM+1FRHcdxoPZ3bSosWth/83h6BUObx0cVEXGZ6lXxxU7mutgKCmTbhtvuMzwce9J3gj21eaicTHltWoBQXxbcXqOLu9glfWB0l+7CyYF+5RuEVCNDIjqvj698OFPQRmG7agHV4AZYXibKmj4AV4srDdcK1VA34L/4CYEEA/8YNuqhFjt2+4oKx
G+q8Pauc0rBGmfeyka6G6bri9cEie4D0pcKrF8K8sIl9NJgNxpdrDbzI/kddwtpXORrVO69a7MyaloI1ks/4KrXk5OmTreeGjxYCD4DW6aFl6Ru0F1hCF1kiGbGvUvKAtlTcQhT0/0pnIHlQfJ+ssCrxWrqwjWiZHHkANUx5Ki/EAe8ASuB+JvnEXNjK7HYTBf+Uig1ALc40fsKAiX6mHKP67H7oB/X44S79cHIi+143XpfALDeKWL70UJHs04Q21vwDIhRVCs3mXWxGTdEHmhMuIPCksXhTmNpUCFMUcKn6xHpbWTtZXQNnEUSmi2tHnR9YaBngEL1qdSxUpyXF15XwEhy0X16OamJPs5QSkJgYeiJkSjfPtCXO6UdLrsJBHh3AfadHzDulo3Vb8ixBd0E3XEpRsY8WEnOxjnu5LX9xmKUwioNog5yi5mV+afeCcJ6qmzFGvH6hYETmeVoLGBDlH4W8fVEof2ngHf/1R9USBVWhZTp5s3uRdGaIve+DR83SdY/dNG6txGwT+idSNMDIR3OtjZQ2NFkNgMu7OLM23V8BjwBjNJH93SbdP3nSD8Vfyw/4NU5NzpQ32EvXRBnkeDyVZ1KJNIQoWbAj4JgxI2ptTe3kdFAhrlu7Xf3RIkKqroHscJj9/YJI0ZcipEMauP6r118LHtL0a5UwQ8tSn1dRuE8dxuprxGWNp8Xpq/+nSMclpVdNwj4TmCVMxgX69aK/ofPt8YUGAKSt2HKOkYsNNXu7Mr52PRXzuzsqpAV8QydTCHEdMAMwOSsFqAh/yRj5gXMNSgzgttWEbQuMlpdnLvMZvA6r9XO6oLTRfvAvKkEapQuqg0+rquV7KTQEQQkFU5x3hpJnf+vomcl7o56Q/YPIYqzCB97trtk1hCT3Y28XH/chS04z7SBOoD5R9sZE9cFBUwVeouqqoAnUBaUeTqhV93lQxMzp1BX/1/EMITa03cGyovVkIKX8sBWL9rThYp/cz6eHOz50/7roLsVcgx7QwN5LI1ke+R3Fsphe7EJFrCufk6S0JGdMCLElHpPASHDLeZwsqt0k3E/GiBQL2t4nQ30XtA8zpRYOH/SuKiDEi3U66IGa2N58gQZin6+TixHGHDA/+tJZN4kbl2ZyhBEF8MfJELUQ4QoGHCOCVl6J/pk7LOelQyC3GFofHXvr9AtUVdDUXW8Lxi686WDIbq0u6L20+tLNpVS7Vv7fxVHMasDDDYEboZiozWMcJ1598laxPiVJNVj6zVT7Rx1ZIGpNeAlbPIrM2rDmesvhHgVvKHOvztcUKT1K6vlm14iO9HwYiXTVBNLpTTJl29Tf7SvhcRzjD1l5ZmIHkK/2VJpWHbD7MLpCQjsVFQgERa7ewa2JoL2L+FlfxJSLmGoSb53WVVRtMnz5e5dPhjo+UReBu/hcXb24vnktfeNCTMH3lK1VOoQzJy6k/pNyU9oOhDDXOLPZXb1vzhbIR2JGsGJy4YCeNDTdkt3kGD0mjWYHj+0o2eT0teM6f3CRxL8nY38wq1uYpZr8EDoC3cF4u7X3Huke+IrXnbPgJDt8ML0DHG4Ps4XjcwR3GNP6RkaezaUALQL9K8BrNec7c2OhecMlz5whH8muuM+JCX2+eyAsO2eTfyH3t/Db4V8ozSOrAvDJxvwlCacdEZyRhRa09lPXXA4FyZsHo0B3J07/wEiO3G4fVRhe8XqD6ne3GgqOGPkbcpwjOgH69UV2qbfCG9Wf27kc7nlgrG+4WE3kG17axDyBl7r3eICLelnYTParC7znhZyyFM0jRsCPVdIKynhUzaKVodBuRX1pksK+hqa6orn9K2s03mlO94ggTczdHPMbflTYCey4rxBkr6LyEHLT5saJ1t0BguJ5bPGuW81HYXZAdbmieySNXHkbOhnmQ2Dunfe8rpGJsTaX0xFvzagMTGCsSg9ml9B4WnxO5yPz5A0sBZYmeetbHD6WPvxkpX7+3AwJsxz4y47OnkRm6q/hXHWGdpYs
6m0rsjs6cQxF9LsnRSukIWQkeaxb2DP9WGmKw1eES4fEnm/f7ImzZYPv4gtUDnqyyuk/s+xQvGLIENyP/qs0r2XKL83Tm/sdOKZU2UPZz1y5vfmno2rZoaEQmf3ARatFuwznHkqVOrPAywjjm2K00RWzyl4OgFW4M7JepyGqhVoj9hszOFHhlPjsZCE0EeX1LnZfsszlpc38ZD2buNg2JVMn6ytVwwVzyO2U37Susg/Tb71RdNeLj061jIr+ynnwQRWxfaKg2kpN8WmtVTZzLakZesMyYH0A5OiJz6FF1hoyxsLmYC2YNgI9QDr9QfL21kyJBcu2x3MdjzspFcsLatfKmPlbc+wsh/s5LORyV+1FyW+39fjZOdoIows4Od9uukpCAqcfiMlzsPaEaejSr0h0Ir6JsV0q5DCmGv/IdJAVYT9izjRb5xzXqMz1SB5kI970vnRCww7m1flJF6L5F5ifsn23XEVJleDuPiixAm4qqMyuEFmXvAABoTI5/+CraiYeeHl1K2C8rzBoAQ9e37l1C/AaZaMFYYcFu0n5fjn/Q1mueTzfuoJdHoonzoVKptyPtkNek0S5ud+RCxpHqrIHw0bd7bB72Etn8Q/4UiPpg6nPBtaP/+Ecj6gmHwF4fsaqyKASJDxKaJipX20iC5KmRWNj9Q8sdxNpkTD6IKZC235qbecjfnhOoOrCksqW+Z2v/mLDezPiX5o39PYWlmkP7VMTfL+kLpfQozgfcTHzfjjRUN0fihpzhSrf0YVYhY971r5g1+lH2uWemk+IE68S9vs/M1J95lyTjDg7HRp5UUZ+JHUR2NYVySkVcKg+G21RKZYCH3CJ3V/ZkAQVPcVVyL/ANCEzo4qSZRmGdDErgPElv+cwCKhhcUuSIfyboRzXF4wCLKvOZzNFSMxA9tBYvwoCW5RAR9sC+gWE2ntFaL1omVyr6ORyKQ9tGyPdPjgP1U7jTs1vL49qtD1aMsvdLlQNFY2FaAh+krJoYWRSqyCtBOxo/+u+2/KBOgjI5xUvptVWvT/+QrHOhQy1fSDUVVn0qAk1nTvb5Mb9vG9HLmlFcnB1UqEk99meMJiOjF09Qqg/q7X5X2T5iepdFt/wsyiyxnyvqK+xgKwfmBcV3oqr8lgzMKil1M0eOWk9ApQB3IAuj44SZs0GthDi6cmH7GLvAkpzcBl3ZdiCLeCrPtT8bJrMkHane6VVnvkLQ1HNeQZGL8gfalWQzVOkAQYC/2W3k7ylpJ0teYTER8tICTiKSOOoszyLk8nZvnOV8QCv1l+nT/QhcSySM/hrmOmgnKbnaPxZVFuqX3yPl82V8l0jqmeJfQgMV+hi4FQ2rEYMuyT2zr4EnohegwWvL4hi4CWQv4aiY695YLrZcqWAKpL9wZYO4gfGl4M2t+p1vaIx5nQZ/UC/GNP6yPUE7T0e+IN9C3I6785MzKfY58n6L+LoRy8n8PnJnnHE/eAzLmHIXIkQ1fxGqgO/9b4+cWAqkoc+S1ukd9nVtGGQmD9OCkgIWKX5/mBJWMK2o57iaKr4KHD8rVSiIPt20WYXQDdo+r0FPunTsVX/D4yA+33w+N3ItYxUkiKfWIULQeyOCgIq4h667hX70hZDF8toVjydQ3yz+0G+eM/IxQEOhHKD/mBWSYUkD/umb10037Na3klcuvYDzdyPS3y8fqX6b8gUUP484DJjlZ08UwOrN+sR3A/HXcefp63d07E+Fa+A6uuZfuMiua1DM3mUQ4r5spy2v3wn9LZcrsc9BZYVNJXnu1vhFT4q5PwU3Dswm1xI2lTm4t5eQwEvvfpGoyuz5r3KqpTZZyy/z21Drd/AzCgjI2kCw2UVDgpGzbJ0zMAdZv+MuMcRaUNgrjrl8VEXPhCTidO/IRbUSRRZWT2fWogdwPcQVWBDyv3+t86BtlcXIv6U2uS9Os6hDNA86Q11Pa2ABR1Yn/TH3ujkwP+sa4VqNWTXeY39conkY1R27IJobxHL32q/o2WGZQSC
dzOuL1umYHNFS9SJgq39NmkWKtx/bvIRdRKvqDMa996x+/Y+Am+WA4LFqNo2YfT9h3273jTfX18S1OLS1aNRY6gt4kurIwYajaN5ELzVduu3JVTOcuqmnCCV60RtNcJKjH0I+bcr2kABaotiqunQwVqdyeL5on7O1cHBg6G7RRAGS52rmeZjPawhyN9GWfYUu/Ft8XPqQFXRc+xo7iYoFpiF7lywvr/wjDTMvBKDMqXrRPa5PT4UhCk39xdNh50Wf9keRuyq5663gQSFi5mc6WrAkewhIhE0zbUozzfZEBPlI7fj61ByeB/FR7cZB3x7qybTWlpZODBIW3GS/ZOLWWNJs+nKp0bm9hWQPtC7vm3XsX/RdO8W9qqRS/+ZG+/m7OWrYqCzfm6ZNOd6Y8m+K5Hj4zt/u05vGvc1/GCEuYjk0IeaFVvXGim1VlygrzpCyj/302bQv7Yjj0IgtolhBdBDkX4GZw/224AHzt0RQzerkORx0z92ESWg9FLU2BBRT/zYAqea3qKr3nmLi87c8u5PuoOfRmFmke8gjVZZKxv1SgVfU4UUUp0A2K4X3fdEQWHITPYgdQnMdMewP1dWk5eQFesiYEeSgCbRUgoWcwECE4IVPJqJbcePl8/0bFg5I8ihA4yNBseDdUq+c+UDJ25BOLmCRH167Dq6IKkZQRPtR6SK5d1yVXtTElRm4MGtjj+azGUz5n/sXCQlfq0/r5t21ueWMBT8B4qeIfWW/gQgYhoZtlxiGoqiPV1UVo3y38FfSzW3xQgoY02fDD7d2jbzQyvQQYMsNcSn9Kf6OfYqnBTfcAAd9ToQ+EcJZ7LXT0kLAw5dMjBeBNVFGsOe1yR2I5EqojY4qVxHBd4iRRRXMyzWx/RSLgUZEAirGIir6qzqrhA4A6vrpElcQQKZeAD4vj4lpUZoPJ2kqmZOaa3wBmv0Zi7ZqhN+mzl/d6fyt6UrtlIdmMCXvjvjHSNfqavdPadADZks6xOxjCb6/77n3JpMZg+L3iaBOY0phUVwWLdQJmVkEvuMM7Ehh9XOJ78Q2D/Y5+kDmMLYN52SohV43/c9LyYUGK9ixxE6dT57k08fNAIIL62AuIlNznJYWvIknUGF5B5+V7qbvuDdgywfnDekW3c26zNZ/XLSnPhmx/FyrBUmRiSDyIKZKjCQRENYz9HQRe70xulm/FejnvJYi+flRh4/RhUOpQ4IwVW6YD9HW5yKVzjTAHxZ7sEs6Zn2A5Z9NGe5IUGpocH32mxjoDq1TipE9CNv1BG3M9S5HP369PyCP+SXKDMoWD6vQ54NKlIGhHQPDL4fOX8S1Bn/L4Wv036QUkayQC4yoYFadkrSZS9X1euosozElguZO8mJn2HSog6mJDeViNZZgeSxQtQojpXKCUSjGeWBOJcLG3dqq5olxd6t1d2sVjbHK8wNR9M9rkLW4pqOyCsNrr1/2ggVLfZO2dvEe71MhNOjfvkFeBFH8mCY0VuhA+sAkF+0WXupqkM15tLpMSVkkpEv+Gbn9bniQKQgmSsrHIMXTs2y/KKX6v5k3TubJRydSqd8wB9ZaX8EeD2RC6Q9Uf8J9DNqRsuBfm/1NBENpwiqRTJ8MK1z2GAUoSmQ8/XVtLXzIKsxdHsytj3GuiXyTvx/Z5eQVR6P+OfUDiPTgsdJyH2v57a8sn0c+iL5EwFTsZZdl9eL5oXR/Bv9t+CAgKrz4BpXG3yKUf/w45qai4+QVZXJQrPZCifvOnwRfzLabkk9wfWZD4ReMGURIXPQNuHTy71jPw7yTNXPuMNg7fKtO0f0bIfZvSCGragrlhRgML6p4turdDbke81XYProcxibAGCCaSkWVXjDls8udldnIcQso+lB9TY13hu9+qO99WT9qBIbzxBXnicvRAqvcpiZ6w3B954No3eLVfbPoK/i87KeVgT5QppzyX+JymfRUNpw9DNcUdDmMf5L3mA5kq15Ud+G64TFH5aQCCqheCaYVmxMP/sAV9vNdmsY
K/MQOf74VkWTiXETYta6XFLeo8vgHOlEeejFKiK0jdQmlhtxSEklj0RFqiWesdPMv/OsoFOGEyH8FD3EIIu6Qh6wZXGRSFmtZt6dbVKfznR++dRYzL/xL/+trJPjfMaFHDmKn348aL1iBEr136Ni2uowyzohI1/niM7jWfQ4nwGo2xRIIEzhEDljEPKv5BIGS+wB6pjFjqF4SgzTo4zQID3meYH5L7DCdQOUsC/68LjBN52RTJGQdLrelojlYw9nZZiY0P5U4zL1DenpMJlOS1GYSyaKEJ0vUuKESboW4iMFI5t2fBLunMf5w7ALcXLv+Imo5ABU/mEZQ+pRJdM21CEF/INo9RwCWdHr+r7fmRfDdaGeNraYMDJ0vaGapNfc+3V/im6mkQqRE5G/S4/VX9LToxNoFaWYNVW8ITWBnH0NFGMX6ijXFcV7xktgvxZWASfB0Rbe2KwTcqJ43eWKPlzE63nvOLVYUUelhJr8/OMXK2dlVnvl7XrM5nw5A38cJAsgLe3y+SgFDe8xy1irvD7+WSRsCFeKJVznJHSK65vnu7u3/mrsigLXgD/Z4ZKqQkRsFI/ba42tR/3regsF1+TzyJUo4cuzMbg8dFS9bEtx6aLsrDEKOTFnG9JAguS5ocLCyoVhY8us7UtAyUe00oR8+l+5LxN+YbYFiGI5hvMB9bWfFMcIPgSfwGkQl/ljMzAcF6djbK8ptZRSqarikGkWmNTY+2rst/2n2/aekPqyUQepqc+ToYGBgt7m53ziosH9qrItST06QR17SnvHiw/fivIa1Ee6t3VJU/t6Ew0YTDgekZvyh5/gSy5AULHSOgRKP29+aqwEtKVB4NrnIVC4N8LVTcDEVJbok+hcRX/42G4vv+amooGqmn3reijt0+oWJ3jY/3leEBsKdzGqasOxx9Cy7BUGACGRFMkJBeVyTfZsHohI84zcfd8LDIGcnIYmwMJ1yXm+St8y+e64ROsbJxasT4qm0VAbt56G6zqWnfzBhHnp0xMYWdziPD4ZDEC4lnPUTeH10D/vJQ6zeSy3eazTb311T1iz7k+X1rW87W5D1I5qLAsa3TP3N+Id4uk0TeQ6+LJI7XpzSRmBAmrgYzT0a7Kd3nTg3eV6qaH4R27DJfu/7pk7jFIjPnWRsjUUvl42v1B1AVhMJPPzzgtu0ezk9R7uIOotMpquneoXWbIgTTFeNifW5uEtgoyerRFmOKmuQBjWuzOstK1yUquqW0oJ7DD+3BtXX1Tir9IRILVQA829hYJLK5clDpxixbfan7fV1/eXqIbRMIgIN+SDX6K6h2+W/zA6db6gWL3gFip49iHsQfckjw6z5WxwIOSe07sYUUYuLrPerk1pOoQeJXTv9gi3fQlpy0GRmvzN1ZHOrwFtMj5c9DOZ6t5NknX+bWOW83qI8i1ZD3aYK/3jJcYoP0Fn7oMa4d4/sA4f9SVFAP5qfTpUrTXc7Gcq3ma6U3eiS3oMVPex/I1KXl8zbC/N1V65G/yY8ErT4EMIDWvHdnMcYQGc7lfDDazjVsVDP5y8KDSz39oOcOYRciOQICcCes2iof1Zd7Al7gKbZ38hTnIBkI1v/fVoeyhPc5i1L/8YTRVpgiBTmDvFZ/dAPwqV0bWOEkxk+fMI2hoI4kJmvOqcdj3c7l9pd194vgibTqM2c+wdz0SxobeAFsUYu4wXhncznq7gIZTDZ50a++ks0Mpn721WjmyD9qFTxAwmeuRfEX93jS0MtoKUlTTK/BlgrNKxJrauv8OFHs7qb+bQUxwo+n/ZOMy2tkc6hzeZclp8zzjbDJr713SIrynQafkbXV7+iPL7eriDzm1gc5D1TjIHquV9h+BQ9+kNZmx3w2219csCP+CIhxSHgB4R3V1iaTV5mQMkhSQ9fSJbxY/Ifk9SIDrzXICv5ObR2kmIaDF1C5wKZ1gfBsb9xxyfHpqzHredvQRuFnFFWCEMLCmyD3KDbTDq2/3zt46JGSPoDqezuz6JXvUC
a82gbryWaQTcKt0+IH0n+6DtGzRIT+/Zg4qVYfEg/6wZjal5FszCR+Xy0Mgvtzl08E8u3V57gk/yMD9N1lcVXJsnCDlbHf/UO4PD/XpW4cCiHjvfmX3BtqMksgLpB7K/WgqzYkSHj9aAixsB0S1RvUDIno8S3OCgwewaNzHVETQ0t5mwsvofNWfkMdbeJxsSfLUb79oMc9U+xWND3xZ4oKSSyUKNEe5ysN6gRSLEpvkIw3n3GqVHA7YQrU8dyaFXgFk5bOqW99sT7WzReNsN4Kz6Ny/XUWla2Ex1xfuxfnzUHdLUWXqWvF/Z/GP3TWy2jjm4ffjODIKfVWdaZMszwWMyO9r8cOAEj8fn1nNhJTL6LjuM7VMkPQdXNtx9yeHXXwVlQnyJaTI9bABq+Uy8rGsUPO6c3QJAtPsvi1rA2EI48Zq5amWpIGVfq3KSeFpo/LZkhYfB/eWhsZCHJG0K/9Tp/Twg2vw2oRB30AOrUfEz9J8Toof0E3cd9kWEwTl886twbdO6JLb24dzn19plZsAgBPcl4eUuXUNqgffMzEpdg/LMGT1NV4Wc5NMtTej9U/+b/cAXf6xQMaxcsgZRIJcTG65vFszII+lrDO3O/OyUshxVyL1g3CZKJLZSinaUrkyT4QQ/1sxYAaoUvWG5TsJBVXx9ix6qlZh3zF7ysiuRPZlP8lJDjVRKRMuOgikHwNTmU8udchmbSKc6UNmVDJkb2inQeFnRHyuuadGDt/YAKD14sHuRMHFFVL1eDpUhwcmu6otV2pYklfyeqxKwXz6Omjbb7cHg6/NMe7eQg6jxp5/ERpGuK7nNV2Rx/AyfRN4P9hmcWIjlhqIVtlJGJeE97/jV1+78iHg1jlZurzbNb6kanSAdF0L7059+KIRZGEEUv9Mo/NJct+7ocdbg46c8qhVl+CmsFApe781Vw6OOehSmwn4bLXjq89NIq9Q0gX7G7KMr1QXJ4VCC0QKOE+iAApQqa9YjoLyEqxHcEJ3mf4K9Jsfeb+ddlBfUSjEHh+TRJNrx3v0YT6anlMq2xCNwafTTjil4YSq4KOa5+Tr9kShACNJlNee6r0ZwHznZkNzHipn0567K7e5k9ZUNOaaVVFXmYVarWz3yOMkWYJwdS2OzBG1+filFnqysbBGFTFHX8YirVve+9pjLvK7ijV/CDx+uM5LudjFXnm1Y5BwvXSlsInVQ5MnY9gwNvjWNm8nYMyMHYnnqwni2haM7vLYirs9DrLYnuSFNjng+uij+aUBhhkLUS3wKwLn7q/Yd4aB5QU+8RJwZjZnF9ah96PNeVe+gDPKbVZl6EZPMiCOrBSfOlQFFjtJVDzcXelKWkYkvBIMD2hXTZ2GN+VGGxERsw/NuBW7SB4AA7kyj0h7zNBm3+PXEwluhHvOeuj1zC2f4iUJJq99EGoqn1R5Z1ugg96SRY5/hlNwCcyLecqXDNMOVo89bWg6ENiFxhhr0ioB8eyU42CIl/mz6fM1Noljv3Gz7MathTal6ruCr7HjF9aYQhl2d99YNedJyD4yoFC/q1L6qRnJFKzjjBjecniqohjagew0d6r8ztvkj8Yy+0VPpgSYf3anpAIRgWzhYkPSZqgZxDvJMgIp9ZI0jeoXF6fB+IHa7NyfiU3ZcxtcfkF3KN4Mqqz3NhKYPgz2pcXdJiUyksUFs9P/er1tGwEkznV+2aictumBtWR4vn+1dZeZlnyHeTtvyCvF406Hlp4BRiTgrjfWfQ1GhHic0sjfhYOyXkId2ZxjXW92Ilqj76CUl9oQzsZ4xAgeEobLb/aStKyIjBRJij/OGxf1Rd8zKwhmoQO64+i9Mtbu6COCAO2elLaEA00M9Cvyh3VP4dy087jg3dfmUduLGHnA2Co/n9yKuAOCR6TBlakL8SoD2UqtuqKXZ7Q1qVfmY42NpZWpaW+Sz2kTpJlKF7tjdHHwuxqv4MGQzmZ/XKJqX3jy3Cfg2iGIbLXA1bOFaf9pinDFJadBNtnbr/AtN
+JBRXP7o+EG2ScIuZfOrqC5OtUZcWyoQAU+BETmy2KLK2PeT+ljV0/aMAyIs/Ix+zmb/97UOiot9Zecvidu6KJIKIkCuLhnnQGHGS/7puCGBdC+3cfMi8NF6pmEqTXZ9NGT0RImZISAsYvoFcLFNT68gPaOCQVkXFCnz4EJKT75di8ijiffQqyb4VOR+ELHaL2gqvG9FZFUUtFko0L/ygR07Fx4aZA/oUBl9qCPNyue02kZTRCGC2W3mvc6EjLMGvfDBJDvzSfcLZ86dse58TZPZ9SexAMO6pqsT/Z7k2uuY0eWvyv8oSXRmLG6ogsLicUfcfK9WILWmlgjKTN72sI4OypoMEn1SMmep7BAUJj9aT3TtJXG8npXjwJvd0OmZF8HppDs/p0SonHSnR7FC3QX5viGJY5cEeDPGVUVG8A33B9o8zrRq8qqhaDSN3hR8aF7r2TKLlhXa3U06KvqnTikAcLC+thvUFYm4IjksXNqIJ65W7POcMJrPMVN3MV9R0bshGCapaC4Nc7nd9TPCSzfupeMFV++MXS4e4bKQ2iTEU+8GueyuIsTeCnN+vlTI9LvKkh9TFTP+ElghtFh4HmKJ+JH1w3KLs/maWb17uvYARJzIafzUKo4OLkxL3R54Yk8Evk/UAqMm3331QNIOXrONlfJ42VTk9KXv7R/ct4IYMLx4TU3ris7pww4X/0LW8WG60HjJCge1/bI/CgYhkkYos0m/IMwCo/6HDAV5PlH5QTEqfxr6jSLbK1iStZqe1xHFHQBsZbgCjaWK7185j9irc+zmtL48TqkrtxQpcPF9tvWUu/kiXK0POjwsNWwYCzDI/f7Sxe4ISli+94CndMXwRpJvk9RnRK1o4ZDeqpUU5myFwLEC3FLf41RVpz3ncmrLVmLqqKWig8LLwAD1nZv/crvdXPcuBRsPqPTnfBtNxnUpO3JeCmbMCmJ3+W2DlVTqhXOj25HB3mmGn/EuYlM9iz3HgA04nwZ+dee9J3tqWrR2QFnphzMrH898E2FqwsOSk+W+NWcfXgfi/eV8sViuJe5B/+0KcTPTmqW68XMeA55FQSu8Rjlu2ILxScL2M4WNGsfsLkEOKuOgGliBxLJIeK+0HQlsiTH412mHs+rKcRA13/XvXGn87TyDJWIpb09+84sL+zaxNng35woRJa788Ct/Z31Ya5NUaUOHPpKWp+jhPt3qdxGhJGLk2mK236Aykirxywmtt+nVxF0qrue0vz1wLVgTvyW26U1RLV0W0NjneqZbfiklWj2vqUkjG+gjr++E0i9lUTckg+k1JbXTxP3zWsmyDxXcRJzw+klNWQJFE3ty0GfWp1sb5eWZXXk6mW2meMlB0TujSXd2gCYjmRDsdhTQ/3HPoa6KQHhP6cVU6epf6rboEnvDkE2QjY4ai9MoKdnFf/WhgZGg87G7B08+KR/oCrv1ON36c8brnCT3ZLQAkvKnIZyJkeMzq4Qye+G/U0iIaysI+jswTPIr1pSPXDZhQ8F2F8zfHfG+7XTbN3vZl1NjQl0UPTu971NxCM39xRWLyVyB9g2v+AIZFgX3CyJNif3t2bOZz/9VcMjVtopg8Q4TdHeZv/kUqScgdaOoR2dmU2to6zS9P4nbsAVTjo9PwWXH6b+jeQ9McjUW1k6v57OMYY0RhDHGTvXd1o/kQBIHn+2WweWzRl3fr6aL5ozzU1ccffxXbr/l7AHXhz6fytroXVIeIq29k5L94aLmY9tsUruwTIQGXMkjtCwwTsPcg/38cjBOJNBkRDgap33WBuozoHmB925bP8D5oVJEaSKy/CcyZsoAdDeWy0HNBwqc3vvaeK3miOVjkzPWif/L2q312gz3PXfQMv3BYepids5ezwVheuW1feSwVRcMDYzUjg+TvlU7OpMRucOJwEHl/BYGrs8Cr9sHd1JGjWy6l/fWaINKSEY2cE/MQWa9xpAjUmybt9QrIiSQHjsYcTAjZYyjl5wJRx9J9IHXoQDuPPSjPhsg
98no2jyCal7tKnopGq2eckWkoIVW4hE+IsU1hXy2ZP8u/2M6L2XElbvzA+duXss/6QCdsGidDny6UPCETSekH/Tdik+1UUP43T2Rjxo+vmToHFyTN7CYj3x7JSnvzMV+kcxUDPhqwhKtArh05Sxh7v3LJvYH77V+bA7/U4Gvlp3TEa8mVloNYM105hL9Ztpe9Amn9HNokcEPpSJQvf17VopT4+l0fA8dElVcDV4MboX2dQ3/zwqpDqKW8DYe6duif5xM6SflgarzVEPTTk7Pg/mUPFbTYXQKSf1/6bCN2mMgc1V6ICL7174ffTHj3ZL+Xmuoc11QgZ2iWsU6xByxIAzgzLkeI2h1eTbGG4oXrjOH+QCQd3XqGOVHPbYDHIr6zy6vpXcnWCxScX1ZbtwdyQS+YImvm+3xln4U43SUl/SNgQibgAJgfylPRqyDYZ6C4VkxAMKYqJVa5KwMW2LYSPBtZOmW7vxbOTW6R7A+KMpiQIEK1v/2a9tldXx/TbohvC//Lu7cMtmkfUx2YtMvlaTCG3VmdwNHn7RL87ycRMG83L9yeW9nM4i0oHB1tcDArmU25wbr3+nQGV5I15DM0rEFv/LdPTvip/L8205b/svtTPseOC3vDnpEWD4FAjMuSJ76aW4Ym99PjWmDeXkH7qcoKML1nd2uUtPpQj4FvsY6yiNLc2eQg/Rlxrl2//9dRQaJYLF7JcEYp2QwMlsbar9xeXGOLbl7YuRbUtqW8pscsRYsoLDe41I6b68jxdSfIF3c3XFrsP1jrdW2bYZNw/UTzczQmvuK1vdh4zeKyE7ke+4tNemHIlpb+JuN0kNbWCKVLFw6yxarYo1/70V4TZu83ovmq9pWZNHF9cxc/IrK6Upj67Kmq7v3yhQXxkBsbFJwe0BJ0+ZcfLV4WUvA/4l6SYkkLwgofcDUNZ49hMGg8kLrqxjDvk2MzaIjgU+ZJ+4qwldKe83xhAJpyf5chGmeFHvIE90/LjrytjrPObNf8il1Ot1oul4666D5hCvgERYiJXYV/NSUGjRYKwRGzXqo/+RiW7AS8T8NtI94jrmrX8orePjJxpg6pfG9Ixkmz+FShsZEkkxtogW14fPkV+jdqtSFrt3sumNhiUQAxQfLlHimwLIPuAzSpTWZvevD3RWz4P1Ae7RzW/qAZz0H15aXF7pAGL6GUeUCuUk9wAjVir7E7jAEdrQe2OCsbP2eWLTyIKoZ9indd6fXjcCYyB1ovfJJTH/sCnkwgguC101j2Axb+r6tteBCoiFIQqnuCHMr1cugd19FYR6NPyFEhiWGhlrPklqEhRPvcrToIrbYIrgcLorsg8fblUH1o43xfUX5YaeQALuubPE9pkPuzXztpg4TRK1bYF4WSceSYoWk3tf9GXuBTjfHSNNrbHrgGc1fU+jdTQCsRZ+rLD6p/epgzIJWLYb5h1w/wuWUpWlcfJS14vjBDP6p8tLceilicp9SmKgfU7jmvIJ2eRpGHSKgdK3QVMEemi9s2Rhnp9p/p6twH6I9o7dtDWL/iA0/HHdaueY74kRPVYcc93RojShEVcJk/j8jtoUX2v0G4tUk+rq5eSv5DiGbKktjMZs76NdcfJAfyCfIJ0UsecJ0pmAyr+RRzGjJB3XymywEZ20sCb1NbvtR1jYZEL3aq2/Vrd41W8xjvwrJNIwp7mT53vRDWBNGM+GJtbNLDG3fdjngQ5/bv1EfTx/lXBoSGMYzUpoEvXYCzrWXgUkntx35V9dTviNEvdXjD8AfcI5UQ4DxtY32vBpRnpVduSA+aOAJBC//y1t+WXsYjRGwgn01RWLFxNFlC+SdcMNH9NcB8CczIUCh7PyPw8SqGUJTU/R3HKnCLuPQFmQjz01RT/WW/fR7a3+jlaRtHgDJ1c60X5GWx5o3Rj8CwaXabuoVdvdDfyKSTQK7hFQ6fI4BO5UGKhNXNXdrgaoPTDDJfbNIiSdlUTdk61HFW8enAnMY99dcxyJegJD1l5ohSGg8
DsgBfzjJh9YVH1FoiM2kSdi6fM5+p8JM2BfltBNvSfAtfD2anr4DZOXrzQyid8rlIsHTas6QsNyIjqrtvguuZu5HCpzr1ffE6mohG6kmXtTEOTEMKVSatGHMRDyekfzKx8U+PNoIFXaDNDpwoh2HPHw2Bo+uF7wxXcBr0aTWSH+C/jlZ30qKTaIFjZ0JaRWnmeRaF6A5J4RqeYEE+yNXMcZBCctfb1KZGxPMLPpngQVklI9S/50pZHcGtypyj1+d3ZftIzChepllgoAxkB2j6Ayg/hdBwG2IB40kt1V50YPjfYee3rtCHv9iA/Jer8/zwqBYXAm81VGuIqM6dqB4kwwms7dmLv2h9aw2a6kw9kVidN3Dvu6FP43uuzN/cazsBY1IXqVR5L8N0d0zpBL3bfFugEi0r+87+drZOwMgLcAqrA9eAAL8JW5OJwY30t8Em3+gw8LW9vDK1++vRK3xmfMzIyq0X2CaOVnF3NRyDNRblum2ow3IlId44uyBfcUV1+TaA3fVf6wn8X5ceallxqxzwxdlkZBdDCkBPJth1x56y1191swWXC1PaESf1edWBT+sxilUHlD4KZuMd4wkUih9mxRo/7RW7gDHIL9lHIZp02d0sRUy3okV9O0KhiDlKq6yjLgXFEx2Lz5qGMzy8LODqHsm+4bP1qGuKH48bmK0F0PjLaERSceYXuJHXHYrBxXr1FYTJ9v0azjfaRHNttS8kPd9vFQb34IT2c8Lb9B7ThH9b8nNbTsjeccoY+vPy1n0GvFO71EMJiOPaAo/Ni1CLdpJQs+KHo/X3+h2RASf4TmDssnITu2LUvk8bCtCfx2dXvuKPnlXs1VTf+4h8DyxcEFXD7OnetKn8LPAEDx7iWpqrVWBsycKhmJghkckSHbdILyMrrf3TMrjw3BLfwGz/jOctxhM7x6f0c/XqcmPA3KcbMNkDoPSmIMRckfu495fwrFI+SWJnClTyrnl9SPAfyIdxF9R3/8lEliZaxGpDaTJ5BAzgw2j6DHmvJcioReoKlUolIEh31WbvIdD1vQs1IVh7EjF1xu4pgS2haC6/CTdlU1J8tm72OutFRlNB3Fb8L1+MEqSOnDwTibtM/tBD2kgQ4muz/aN/Mefq2bKsqiR+nj/P/1kokWUI9qe/hMM+ceb7atCxxdqRntKoa33DDSHZvzB383Dm2tgXnD01wVB2NtHybT0VC1xmxoSCe8LZ0OEsjA2/oDRgXg+1D5XxNM3iJIvveUML1TMS4ClP985lGua+GmT/3Pp35eXfIAIODIkXG4mXi9z5C3t15YJiZkvBLMiIm4asjwz/5+cf8ku91C4vLDnS5v8GSFoJv2epWuOhyG0XeAuPB6Ts+Nm7XVNccVlwL4t50nhOnR9E40J6cHfWHfADPSno/TOl7CiismCOa/BO1Vlx2CG+3FV8aMxd4Iuhvw0VHcnvBJesuvAvo9+s382dvwVCs2F4g/E1m9AJClDfUrsoPqZLgl7nFZMtdO4iBvOhFirBc1z1yYpoYxaOT5wkYI9V6U7zh08hpDvmyKl/4jm8NoXuevRXbXfWkPKG02JAazYF0+WHqAOnEHVQCSOCKRbH6ikN/qunWcVpDG//uo3d9SwyzOKCPXGqpd4zTXVfikdbin18matI/9LaG+qX0REWfmPvad+wAbEWH474IigUrtWYQ9KOI9qGzIjcWuLzsbaO271ngQks5LrWjPq+85EPrPYe4d92dWCmJOUyVtEcUqjrQZV/q1cc7zOOc0KKIiTrq0HNos6JnpTE6hSYcObfBhnuTg6l0vEokkkncWeaH7oA5inyd7RD/0BR8psSjm8HYUZoMms1oMTaC1j3e1cQP1T4BGn3W1CWVOUphInPXtAHr1xhC2iK8ef8Oqn2G+xl7BBRP/f3eYZuePRZGm1RZUJrnYzmqkxd8UHh24gSSubP4YzeAyTXlIHuM/ZVk1MbFhTqVA+sCKGVubYFGPmUP0HsGPxH1X/btMDgIbY
E6Tei+U4Fmsaw4puwQFA6acretiF9tc1MXA07/Bp7V7D7L4ObiQRCqi8WGKuizBpYmqueKCzoK0UBKidGTsV+0P/YLLew11siV3n9v733WJYc2LEEv6aWXUYtltQqqDU3Y9Ra6/j6ISPzVb8uYd1jNj2z6cxrdm8wGAzSAQfOAeDwSPDDZBpL1FmNb8zotJw588IuEHPvRXDqOHn/YvNs2IUbeRVmNxGEuHGOgZHhjMJSHFLhSLv3RDxawzUZVnhbnrXKwwl9lyvLT+Ejdq2ZHU7Fa3FEab6i8ZF9UZS+7uaDC+3EAJDZTmrXewzB/Lap2Usz4sl9v2ONQsD+KhuQ5YjeuvCdvXBj4rWeduQGfx2f+u1+oJDRC+giUenyvvO1PLYoY6bKRt1bH1i0GG+EK2Lyjw8I2ceDdqU9uV89kOaX92cRBLC6/xc3trNPKV6go8aNYFa0bN1OyK8pRpKIewHR8gV2CNvSGhsee0387LWjoj7m9cGZEq09v3EAdUrAYGjTkGfj4HwMVvdG5w5DhubDe58T8OWW7TuzBj8rlRXGW/VoiYp2MQLt+G/S6Q2brTNgs5d0U2+uPy9MZ7od1SRARrfVsRf7d+O/mZne8J/V4IxYm7Q1vfFj/2VMnYaRj4eugzc/gn+N79UvAz8E1GeV3ZVxlS21QUC0GOn40ChIsxg0VgRKnZttm3rl68JqjPhdUVA8aMCBWXZ5kf5DJDwkXuVrpUhrtOQqM5pRlvP5lnxU+4UUNMf1/Wa7PDyg+POyEX1W+bKLOSjvK4ygsPEKNoV8Pt7Qd2Ep4fGxHxsX4dpBobUPdRs8laq80fLajaboWLT8sLA3dpNiodDJPOZ/e0ocX1pf8Yz961BUq7sqANvGF/cn9vb2IYlMKdjlJCg8I3dXQyZ3mKi4fDDWCLLIilOSKMD2x1WdueUHLHnbRgC1FPTbdlybai2Aqvdf9nuzgFnRMcFMgBOrEa8g4BtxhoXkEpgvGgmtdsxlWDhCREVbFh0gLmTaf8dK13MXvDN398ZrPITL+QjDrg+3JXW11CcuM8icVyPMWdC5m8kjyeUAggam8hG/CHGB/YYgLNj+tVVnkJ3V+SkEPsmtYGb/JbTAMzHWQJ4ReJMlTZhHmcYRebVhlmUc80Iyp3WMnhCMzmEX+7gPpXxWD05kJz2c3y6v79Ool45+U8ojVr4ARuNwxYvZU0+HDpDFHz+7B2+07i2MZvFGqvRgLXrTCIAgwD43uxbtaUCeyXrQmQbxXkhfmPy0Vl6G4ePbKYS8UeylC82h0iWfy/rX1l8rxxa/DQHbSD626e0uUgXTnKRXZvQHwj2eXxSxgHxrw4qTuM8t2fOKAE0iHx3q0z/c/1V9mhOi62Wzf/uTksFVc1/LbOU2nYyGQGpB1v/uVZjrUmWPV6Q080Ncyd35Rscb6GzmWHgbP87c1w/f2MOEmx32NYb3urHTZNSRTePXa/rvUvKQhL12wt8Rp2xJ8tel5HWsosr3q58pB17zYxZyKealyBDyPg68lcnYC19oFLAB0Wj1PunhUTMWV5sQoQq0xWspxpRer+XO6if7hF6K3fbElGiIK1Rsi2/dTi1ZI7vUSDhufoHjWptiVj4oPMKO1pKjlOBezrfkbW/0C0Q6DMmruXAAVt2ZNRKnvtOug7cMuXBqWpPHOconu5mxZs63HQT92R/IAhIpYWELIORY68lDmnHHvr8UpAAPUNxHzQFy1GkVHD5zYsw6eX4fa/OhAXuz7PQ7ng/wON7kFks0evC21uS915Ny3/hFIK5YAl3mUG6QvOybByi1ZebWrs6pJ02Yip1PebvcJvoft3ym6hU7FQjNN1szDOWDBxa3fU8oRIBu7PpLtTkhqZ2Y4x5kftz/wKZD5Z2MTWeo2yFv9/KiKO9NC8MHme0lcgGFne8n+FjJz0IfBes2tkrqJWzxkrIcFE5PTfvbIjUt9Woxs+LUE+Mh3upCqEYZt9lEftSCGz+5K202VmvJ41f
Yx4zZjrItYdXSPZOLmMw7PETyN27k7xQmpkqluyoVSwt3dtHnUkGfXu5K71SgKNthDk75iWw/j95tflZSKkCl/aNgJU0O2EaP3cOn9lFBYe/1gFpwUpRE0bLF8S5P3l7g9fHX1GHFYD/0rw8LZCt1UfhwatTLb8Y9Dq4733jDrxfGvnge+bZioGViTOmjN2a7De537QwPLOigTUFRasNKKzRImdrYZ3el8QTJkCfNHNOEfHJ8/fWXO7xDFUoYgG47B02nyalfSSgkwpjN2rPBwpFHYlKDcdSGjZmqEGL7i9IfaBxWMeXGC/NbGoG3PK9KrBFtM8yKwwc/oM+wx+ozJ9YYYTNNaS0vnlJOBSQtxKpFFEzh14jxBcysS/KMoA8ZL/KLVbY0Rp+K7ByLpkwTwQiFwLkLaKbHl5RLsOUsNJpMO0TWXr6+dKzi0dlZ8YsAP6qqjnG1s0B1sbv+Ag/2W3SLT2YQK3TlKqVWnLVyTai6D7bs2UKsIjwqk3Uvp5AAjWGTvXoxu2++RzqGSat4aEORcsaqCZh3fAAApJBoM728YojrcWpYg6OzTqhyy4MGuslIN38EF4FvtBLD2rpohvh8Z5D/plwUY/T6xwK9P3Os8DT5eBiRmJKr4+MuVEpvVd/eBCMYbtJ9Vd160YA1WNBDOIylAQDTtiV9cHgQ+BhYC4+2FzXzh/NPBLgEqgobN1HgNk89qW7smAF4TtXUvA5PomOJrSKLa7B/pODbaiu9Di8uTOtkyGO/WjoPuYn9wQdMAda15j7eBAbEBWgLAPzKeB1+APpFLZQBgThqqK+hEBzpHxyLQkdXfX0UwNCNGxRK1NhhfM8VpTVkW2CjyErFZHNv1NiYHl34Bz5VH3zqXYyzJvO9D8WFUB8vJ/6J61N8wEeG0wem0RPJP3j+i4/Uge5MVk7yB3T/I162ygcAqlRnaTfXvvnRIe0OUKHXV+/dP3aP10wTpO8MsANLISrQDsB7UONmPAh5zho/aJB0QYQHh7vOrc0dJWGPm+LftCcvt98XGmIqlPzlV7rBepdliaI1w03XeXnpCjh00Vym/tlPl1bnHE2sQyD4eHI9IEJ470ESJvQRZWHsjVbuESfGHPjW4c3DAXleAKfB+VV+7FW14rPimIj6/e2aVeAdrAono+8sm/sWLtq8mi4W8oE5BEa3ZVkdxy5RjGL9OfIPdWBILF3TOcpMgPgMrDdQ84vI3khCiRz9RxBKOTEXcxoMQcn7wO9BAcZ7w8yiy0QHstYBpAqx7gG9r8RNO8cqrn3r6TIHeAGshMiHhH9et9FX0PCo5s1XkGzH59j58/e+yAkBq0IDoiaZK0SsR81PUiAESZwZUwFaeBc/c9hLLDH/dXnDTOEhAuD9mXP6Ot19jjNb6XjbLWX9CDp8ePy1pcHlh3KspsCDhFBtKNyF2g5Cc4JHf8FeQLcNCaHFqvRtoQrhXlhTWgMe4C7bXtbnhTXf87UTmUgvuAyLG8l/85pWPc/xETV7xkXb5XRcaQbfBTValXF+F6WzYzEXjnwWhGTM8n3dUweyzimn/JAVSjeu6qcmNyFTMPTUit/On54LnFf/gDg1LFMvYEODp7BDZOEMNRKbYPbEFirWeO5Bw/R7dzs7+ONIdJ1TzWTWnMni1pWljKh+LECsYLq0S7cEUde9CEaUM7sAuxVKl9IAT3ddWmt4ItZHCF0mrPQQ8C31kLODj4YlOlPpAb4+vUkfptVaVBQ5bkCvkh1lDXWzWVaGiu/145ym0afOu9gl/LAeJuLNTtNWp/7t8jwzFt7iCUnGmUiinDRl15XQhyGnpteRoYYj5jjlUSX2fPwl6IhiUwr4MDb28uylF7YOFwt7idS9uofbPPoLKyCl/CCmBXHYP9jd9+ykFD5Cp79DbwghhotTcjzT3zaSiYd8XznrjLDQpeLCXcOR4PlDMSS17nbNJB8jWb+2fE/UhN/fX6jxtQjlLzWEus0yXvTr7VWLjKSw+dnDfi3p8GB
tV6znP7Za9Gad+WT22/RYevy6Pm4c72OUjUSh3DNnj7VB13zCClrCRJYQBwqIBI9uQQZlLYqo0M3R8EPMSUZSUTy+YRN2xW+aUSIkP/PAFViTsQWL96JgJaYo4ZgcIDZkNAo4awBJpz2leoOh+Hnano2pw3cWnA98nG57NO0LyIcGWY/BTQ9HTuA7IdVoQ4O9dV69GFTG/ZyeOgdKNp++1mKrSfKIOnwAFvBm42ucHqQaG8a/yJUDp4R3f/vZVFvamuF73ud8fNmF8kZoaIHZpyew0tdU8NFGI8SDgYYi6rxJfR6Fn2EO7cn7bhbjRZjQr9dblYAVt39CJZ4flXw1e8mNxwTKWBRTwvXtcfWbvDsBoR6hL2DvHzaMfja7e6dpcrpg43av71HXxWcHn5mD+/QyYFnKEHbXr9odxeu6Gdk1S6TQ3zUTRn2UxLpsMY8hVfbB7KQhPkvtXOSPfEy56y/GdsqHqe+7CT80vR0QS4SSerquMWjZYn/JAMubjelmThu7xccQxAVB361UaDqCaLJTYtWh9pgwheSUiq/JtdiGNjFovE/90SbP2tjDrxVerL+/B7gm13jNr7G87ghQPQzXXJQch05z7IrvWksq/sQ+s0/sm1D6QVbaNTw7pXicR90ob9w5ivLdiZ2k44pcdgs3dYbKP5mAhrnFYcgKJYNDaeJ30yj+Ez481bqR5q4TZ630XR4H9tJKd9/jgB4AtsG1ahmMNkKl0mfns49MkCzxF0LxErmkNLAjLAkMR/sQFpew5nYvw8PEys61eu3eZh2UynqdrvUrb7B4ECNHSBjmXwNXY/gdFDDwoUc1R7UAJd+QcPFqssWVS/zuIEwT+fHiV1wT3tgmVs+pVmpKPfkIqeyG6oRh5c2T+dastczvHLKM9uJzKu8qUdpiMJElGX8AXoLRf4FJ/J6c8CGJrheVLhj97l60O2OZ0Fcqhw2hr6NLI3CB1FihGAJOanh8MDmfqnSoqrXppkCwrARJZCwULXT8WGGpazF0Ayfg1FYhRq5i1lB6GUgTLCt6YJkzso2woeBLvkQXNby2Ah7oB2nEfe/vErfHZJgo1J2aYBQic68RI7eiMFu6LKnsvKnGtK5Gl7rckYaQuLx16XyBx2VxdCZpXfFXutR/ipYlIxqASqat6AA8uFc3qTly9Uhtpq98DriroWyQvsVGvNQxJ+8o3S23mR7lr1bG1zpm0jPjHPLRB/OcHmwwKKDt+nlr8VbSvVqvp4kSGesmdhSRShukHjcqjrSZ+eunI8jHtbSK/+ZU6wmNb9DKWCpkgoI44iGeNvtyhe3+hcQ7b9bCOZjliy+lF1dX/DHR0Wb94vEyLI3l95QtQHtXXnzYzOz6PveLjJmV6NtdYY/AoG/9uH/WZVa/qg+IjWN617hVhFqsWQRvSCLXemv1KGwbOunj18xHsDSMmj/zJ5C/Zg36xnF2hBVsO9hR0GYSWAVeCYIbbZupiF8qTpW1JSLu/hLzM8J6GJokMAmykFNpaNQCRWyun29ivOmZ/dIVjAvZ88CHBL14mHip2v3bXKN14hH0dtAyX3u09bsCQhZLBgxRslmfYcJQiAbAXv5x+hasfNFUaLPlljk8w14FPvZGZC6oyYvDSKYHUSuBQnuYvB5xg7HDJReXF2UUPl+8orXCkt46Gp0DdD8cWH6jelX8F+Z+2PCjMaPeOes+igdozJcu34FANGlmvlmNNwoOs5jB0shcidvABSK2ZSedTiVpxOU7uU6YYYpE0RiCzfQRayAK33VY8hd31kh4cCYko2+SeWFRy+o4Zj9U/aBIGJBHftowEW0NuI16kuaki9tbpJZej8pBVbL/tqEUA44tqeFrFo+bgt5++z6tAPKNdaDLTuivy5SzANQ0Q4rOYuxy8bnU1CC1dFIFiW5EEKXh4mjN7q1N20ZWgRxJVkZTaoZbr6eTAAG+iutofwJzQU69DRALA/WGqvBsN72TNZb1kBkYrAKmXKJRYNAyw8OgmDg
P1se2WQI3XAKFyaHGXeJ5PcOA1nnUS40X55B+G4IVugvQu+LKeehJ1R0jTbZaRym99+ni2BWPnx+hsBKjPGwrWdZswEVlH4tFGXGRV18few2g26QkE/26yOGZpTOreZHRS5Y+QFfN2uehE7UAxmcXAyZFIgV+yRtzFbQWD6a2vaIvpliP1camSLX1xIB+rNqW+DSif5kWxPmZmxU3+YzFWt7fK27tR7oFBX4UZb0eCzeXdvTLg7ERLVdM8MyaSX6AEJHtIvh6PPYjIobJiw9VQi1PckRC7QhCHDJOZ72cu0YNDhMh4qwBpdkY5kPFJ2jdk4LgJElK9nv7UfooivtfD9AGAuIK2arXoAuz5pd4ahVSOGIbDaEsDvM1Hv5iTkgRUATjnRsdER8OXiAXlr4Zhz7m/Osjm1MK7YH3xEOy+Dp9SFFBRQzIB+gzHcqMWMyHbl3TlWJSskEJTqiw7Ce1ZtZoSEUGqs4+32zCq2h+CrvUHLp7wFGuCuQeB+b0rOVoVBJsXE5D6vq9yhn2I7MExCjfl/mHgVQZ5mgYW/EshUhzk+Wm7/VBiCDEPuIWwHqgzA12/mYWyrC9HuM9396V3DZhRPH9qU0h6wR92GtdA7n4Ier3qiQZA9I7jCT++RKICpX9ArB4zaUL2nN2zVRNLnNbC+qS3QSNuyHZJAbZAAoPrLBIyvpFXNtBT6BiRsLGPOAwoz2eP5TjEw8RhrIdfjGtQuKEXAM3d5E2z8ymACwFxlNTfpJxDyWAhhEmYYKPz582gt9gIOn/9gDjnJmkHHAozjea++6lxjPf1IShs/HmLynChRMgb3k9PwzgG4jnPWnyDNR67Xx+QZWoYuE3LWwR96vD1j+19Tqn3XnZQk3u0llWIcsI5J7DvnS/jf91YTFPA6lAwkWS1eG+lP3ZJS251MP/YgdiJZgZhM4UUpMX4xb9pyk1swMi3n9cHmhyFVPJd9tF3lwPO/CdHkZYGkBEHSwO6SMCykyoz+xbvUdeL9XdjmqHaazIaS9uzB3mCrzNjtML/YokLEsDmtqBPBhvqmYnf61LmlUdDMC7N0NjRD/75b3lL8lgBZ1iw68lkXBr5IJpC+pEygnjSH9lL7GPcC+HmdeM7MIocqKBvbUy1l5ID1YSXlstowsgtG6qJds3xB4YunvFazjsmgbiXgS2xz/gOm4Wh5luC8CSFxNgjXWMO8fQAnmg2vYQCmTR29+cu4Xtbc7Cr/auyNv8fgPqkAOGPBaxrB93cziE92VHOrLgAUiCuwGekaxo4zJqjvxJ/PkZCyDxbhqeG86MBuqlIsuftZi8cRj3c5dYtDwMHSB8lmPdmGBvg5tdRVxSw/C5DxuhIvSLPaKppMre5aT0nELOrrJoIWk49cGmlGh5rAq/v9tFCS+V2pRv8+iOzdyFTSb2AebXmmzsLnOTYSEE360c+erRGbMJd/U+guRocgmh6izRrv7rS81ysgVurhga8Nakg3rlrEKDBgaZ540jFFBsulS8jZbfgcz38vr8PqS2K/zEIP3ocQM9WM0R473fsyNaw4vLqvY3rDie/sXbxdvw4LrFfTO2AWBAfdT2niahibF3TVnfYMmZcsssok1xIui7VkTmwmAuSKdm+wyEqfeQTX3pX/x0Tu+gy+cMn6nAs2CBDd+BbuKx7HWXuvV3taJ3gqIkL3BXuOINE35iX6UQ+jqQfMmI+JMDTutIdA79mInpLarLeez2d6QKMFOIzB/W6KNT/KGHN+kK0kCdp0A60YFFS1e7ALNx/Go+gjfXkyq6jRZfZK83oNGTzfiq/5ZpFUA9A0wgqGORyE7meCFCx8lVLECok9m5denL+4hfR84wNcJJ9tMS17SP1uM8AUTp+U9XfaDMuzzaK0xFk2ykPs2NRwFJdtbJtFI5IFTvMGWPcJIzhvYtXj0oZYIkXdneo0aXt9e8H1nZ2HgZMMy6sj4PufMAnvfuuL8CReHN1G6qdTHoBQ3HNQZaH0LA32rAUJ96Wds
hXA8bqzpiQna/DCc6MKknX1jdJgT6BLaK7i9PcrE/SE2LTp8I+wfzjAQLES3Do2epXNu2gsWY73PepwzraMEzkQgdsOwmoySnduAjNDAjyixhRO1oNHY4PRDiAB2JfhBRsoXxIqCEnrgUhx/YyP8W/hrmt0MJlX1m9/sq21nFLVVjPukNfyjVaz16wfVPGNCSFhTXItdyys7elgT93i1rFwq4nWTqdZNz/+C+1zYB5YOP/1bKPBCexklmFDCyu1yLR/+soVVg+0rKh7R/4MUq9lNvIsoShzIhXeNLaLEW6qzKJrNrCbIB2UCibdsh0C0xmWLqXxcdkhXbgef+Le9KsxK+LCa7+AAIJaHOSCyH6RN3GfcSjV5CeYXRzhCZLbdk0p4fMf5lrgjClalfPGP7lnTumUq2mDH5YkvSlkSFeZyuaJz+4Gn0qcoD7xDcrzTx3gN4Sr4zgy0hIRSOFsKRXU+fuRKIA6eHDxVBQDO7ojvONUJLXSjmD/xjSxwKC39vJxVMlot0pcI7QnW8s9LswY61kw/s0aeoSaCWVVkY8jGnsD3oVq7nawHISE4lgIgk4C4R7vEX+jPnYPucQm72GEzSOgcQ3awlqYc90jP8wPhfSWZqCnxFT/go0sF4xGL4YIppFw88tITFFcowlD2j0M99uU37OB9cA4VGVsYgXu4N1PU6OZ0T3gtVDVDb0rih9QnCOugbIa9XudsgHZrOjs0s0dW5e6wf9/na2H63Vme1jt/OiXDaRMvyEA8NQZfmbKv22l9Uwnd0iFmHAQ1XT/Es091lX/FIzzsi6soH5/rHfkgWSl+aNuLv+Ps70Kh5N7/quzlJdeYwbauk74U1B8rtfc2WgzbjHUgGZVO4tS4yxs16m4MH+iZ4UfRhRS/phIbfRsP3dzNm/d9Y5dSZv/H8W3GySyzi+7iF/LWB38z6Nw6tLlSRhVCr/UPvXVz6zeA5ukEQUnsXQDuOJT7CXf2xbaC+/p5xfPs0qvqcQQuUu8XstkUZFrx/dtYO+gi/DeMDnCaStf8gEj90/1ZxKNL4hSj9puv92/JUlyYN61B6FgsDnVIobNlvBctnnKIbd8KtKGPHCh06UngugOcPA4SLZCKOIr9NN+mji6xhtTUZ4Bwmyz1+1uhhdqRrO5vMDDSNScRsWNpApkuNX6uOe71n7fjq1pHKagMAK8TbH0+tcm901nN3gooQpDYemNA+l0fOXDfCtOhwml6HNmoxeuVCa0/KLLL8jZDHNZX8WiHQhHVYIbyY6IQag9WFh4ldHmsCp/WmSiqV/waYsmYW7/VWKE39AgzGqcDbmEPaG37VhX1oNfTLCu67spinAy+VLRUxhCM+TjDwaYOu3St136wFicE+3Rio4o5hAfnA9eKNrx3/He8Y+Y13nd26Kuj45j43+YJOubNg1lWajxsYd1FMew+uOC3HfINRl4+wC1kLl2ZqzLtdBT/x3GApqkFaWtDKv4KA35pNN2mc6+JlzeUHMtbeUylWoUmUU976g1gaFVkI33gF3/fhJjkgyef7SdEy9Ln7PT+AcLLXelx4yjVRX+SlL+i7aFFojJ2DziiocPKAD+L3DFUG8EPp6LjYAEdxekpUsYJlAt9UWRzsRcrbLy+8Im75Av68GdhE3T+Z+dZ6LB4OzyoXYKhJh3LGo1RuINaJglxbEZEFDWUMpCEk0Fe1ced6Ws5okkedf629JYK9et3zw0xk9g3a0a62autKsLVDaAEcMnMbFCX5GO9aPJAtOkIANOCM3OJbNJfpdYpwqVbLLr11dC3de38sRGAxYgowfjrSfT62A9b0/dkin5QGN4naFIlf2tL+vgaliISs0iaNkcjoMJAxF9307RmvWWJLAPSRtexIdsz4RWORQWKkswZyfdxpph1epSSQbisnD+YmTsKZFRqwxrmJhpf9u8YfoIc4oK/4FyODP/ghfvrqrzUjQfUfKCY9JmBE9vUgYOz+a7WF8I+v75+hhP0LWVssd+r831h
kJsjm65uk/leH2FZvFNaH/QDE2U9hs1+e+VXifnebcJgh2tVHtRtQ3++W3iarqb23JP6rPRrxNw7EPlT6xUOOin/aQZ61IgI2m21UB/Fzyt9qmf2QXQUGV5gdzHf7Cn/9i3P+wx4mUzM6lgzIWke0OrapM5NsRdfIzUeO6QSSc95DJarCMAoUGHr0MiYJ/KWPiZKLWDoKmzAmHNhRTIadXR0Fgcf2mbZaIqJIx9/HoTh8p0WH5dO+AN7Ou3lUE3+phyWKA4ZXWRTMwYDALsYZJoYZ39Jkdj0ZIVXqlqkEpfY5kATr/sbnYCPcnu9cRCap2XvjHzktZZYdzRZtU85NJ4EcGjWDyQd5+Q+zofJvqF5IrJe2yTvzdBNZN6/a1mxZVQH9rU7CAUys/FufCE1iABjsMj6gUEBd8iIFOEU+szq1rdzgQv+gh7YI7g4GhqTUxjtnp62H3YyHuU0QvLn25KyC8V9vJoPNM/AyFPxwSnwWsszQiCHVuceSGNQFTIH2tj7kIA+icLtm5CNilHV6vc53Xmtm/45gYwgm/cW2K90ICd6Wx4GvogzSO/jQ5+y7743gnnDUt6eGJxCCF4M2i1D2uGnprPt5rCEt988F9hSrRDQfyeAwdSgCRsisi/qPSVjNHBNFsjvVmdw8C1VbcIn7pbxunN3bCDo5Xrnlr5ObM1b8d/bB/KnCXHhzXoNhFAf4ezPOwh4tXEr7RXoRTAL6QmJsh4qmcky47Tlmto/+MLC09B0I2lGRCyG/fzbVI2zP1wMgEQk1++AJ9nl3YJDiykUis1P4h38/c7jMqCZD7LuTZ7FL2T7V2SAuk2K4QHHbUrYjdbYHajSFvuQrgMDlSjftg99+IyPlzcOay7WHIbod1FvhS/R1vueNBpIP7uEtSw3I/K/vbzjgFbxOqO2jDO/hR/GuFdGRHOb7rHRXMxX6WvjtPzqtLI1l0BkHNWF8N04CE5stIOJdAEen/eomk4HJvmfi4pveAwcNSsyW+DfE+uauzPKNLnkcO8ErZCvC2ohhOb04T3VnHAqOnoLL74RycQ8U/EkfEi6jbLoj+WEETSdBrwcnrwOJm7hCY5SBGX8C+bhLb31n9mC7rktCubn49AiFB52ZOTErNs0pYKpndGcaIIz61gZCpFMkx36NKKyiqmYEZaxpblZxVFxAZK2WKdc8V6aKVeEglEtXTA08E1Sl3rDMeIQRPBvImQGaby2lRjMDNOzSLP7rvHUysLWy3hygK4zhHJJ08mO4GWc/szO4Gt6A5LhaJHxNal1TKre8u/DLzigtBFAGN5PYahtJGerq/a3ZHqLnLM2bytTZRzkqclESKiDlEns11iVkuNyZ0FtQL51jiGl/jWc8ZQFw5cxkc9qgamxmtKhBMcLxt2uJZAMhdBCPzWMt6gAEja41G7Aohq03tYdDiSkYPHCrN9gvVWI5UHFmMBjO2n6XeutYeGRSF+8n6mUIzHfVxPlQ+4lR9R1M+Xoj+YnuEMNOZMXlBBA1bCSGJJSgiOJTYuUnGtKvYAICNGGzu8uaXBqwHVgmRjca9IKV3vyn+uffLq8PMaHz84aBEvjKf9EBD9l5iHmyPvxC+TNJXf0v+vJV67Y0+jE9PptAPfKP17q98+FBwQgD9kVuQjBAPTwLLl2V6zPrt7/zOzqULfWDP6yZPOC/MhGiIjwaoj3ws0vErJFyU8zm1G0t0Skjuj8s6q5TzFV2DF3ybo8YtE2ix3v6jpbb76Nocod6EGLpXaZ8N93nv9PPUcaFUnjWyVig2L0pMl2luvvF7VwVAYjdMFC2Ssj3alLG3JkNRjhhD6BAPWd/Iq69M/yMiMIPIgf8qhcw1YPj2OjRpEJGxxQ3HXL3tLwxxboad8NbjPMmCsOd3qq3ZjGo+aJnI/y8aCAoqgb0zKJWte9v4Y7parVekxq7acvRJHkzy3XTTAtBj3QdjtJvA3dQqtRWFuxkj2SpB4Y15pUAfUZzsZredkjZ8GRx8boFByc
yrd8IpQj+UUDWI0BLeHy2L67DK7yLDwBVb+CAicicAv39mhOCrU7zTPNdJtqXGaSFqORXIvzJMaBdLZ6e6u6P8EKCDYS9ZDc0pKF3CJWjqu2HDOV4KZ++aWz0CAbyc2t/tgjF5ZYnz0qd13VcbIjFePeDgpbc07AH+/UictI4xeyOSzwcvzVPFUCTb+3E3bF9hVAZhhWls+hQQyi/zSrZJjqEVNYeSttyp5kdBk/ngq5pH8TDvtknCbnY174dw6YaMFShvwmcKY7hMtAgeUyx4c16MW2dUFsUErRt/51KqJenCzKBsc685/Li1A+ZjRpZLRVsTNXoqFvIxpsCYWg1e5BEZL5bcFG24cgmJnoYqUT1LVt4LLJwOhFieb452UvQy9nOf+X/8at2E4DwriY1sdvsWNxK5jP8xKdKBXfzT2wxx08QNFvaVaLacADfOpNOa5aqhjYf0Gvz0IE28uXX95hbYPfcUt5aeN7d2bOJDSC9QdMJHDRwxrk8/4XehnwkpD9jgD9gExzsgfinGcgOVHOSmQkHqVv4vxi0cUQGwWUDkK/pmQNXQ8fRW7VEe5dZGcGa+IvLjCPC9YvhRDbD7hd+wa0YhmduyRUaVIrirHU8Ap8LEfFbedjZ2Gavo6Nx/Fa75pN3bKEiE3XdPYA3hFbeOb+vyA5yv6Vvn41lwaPTxG/HO23zLaLCFJxT8x1IHA0LRLZRHRMAZ8ARjIkaFFfZW1JxStwHFywkfZp2kbChyFNz5zQQ1BynMLjoyvIZX6/o8eMXLAsd19KzHOxwRwCuH0DuxRhZnZuvb3g62pyGDxDJx4/dBkvq1AFFcXHKW+ao48Stnb7qcoETjZk3Iokfj7y1EJ/1m9/MDGYW2W4O7FPXaNfWlds3a+DEVZ+rmNVqTOo6O5vo+rpGEUHX0tTkU9EfLtOKyRhqjMxxbmavIDZTLpuMCDRKTuNaqiLR3HNmEpz5cCHZhHD4iO87d9XfEgLfOh/6fktKad1R9AVNRdPD8bdtOV1Y6BKk1EFKvEGMCC1jzf3W0okCJVwgyuJ4zM4OLLYFHsOkcBW0p3HYKm3j1YQzRBVUB6HvYn3+7pH3vonPcw1Ryy72kfaSl1h9PJoVz64NH+XpNmpx5ihU3utkbgaDgu9dEY02elww6gayaepUlFoeAg4K07Owr2cTgB1XRBgolFncwDcb7+VhkXWHnA1iBlDiv5ZBKmf6jb2LBTpysjN42K+jBTEenI/EVmw1QhcPNDbbYWPuRcnrriuLIS2/jVmFb0ZRFBu4HWd6Ft+2zinJ5oDByr2gtLJZSTANy8lixJyK/PQ5oTyD+pr5kvhkmt8Q06X8Ps9euZEsoLg3Qy1h7zCCEiTUjezuChwu1edEUuksjnjmVndmp0u49O2XrWFzXi2Ob4tSF9ZZXGFXYzyqxrxaHxOUHS0WiMK7DQDXmlhd+DQbGMQIH5z9hp88UK4/pzJiQOS97Y8e5BI8b8MbHpSoKOgsnkJhafNvLACzHwYckaGis78Mr5gPNEzjmYkU3+Mxb947B5whcxE02JjhxGCbY0nAQ2KWafLLPIfTO6WHCZxd4ZSF0zLdbx1GhNkl6xa7JMzDMCDhCpup6o+CeGZzwXm0IMs7rX1j+a3sekP6ghxjzQKZ/GemM/AE9hGC26jZZPsuz1DA9xcEqOUNM5xhd6oIsBcupP3U44z1NQUSvZh9Bgm2M9/8PUkJLYljwrwhncBAekVQvcROAD2KSmrEiDUGN17po/jdlXlBmoe1v+Y2FX7VBVuT1DEo1K7++bwPA4hYXQY4YhSF4NhYej/W/AtruR7IWFwBBYOoja7eWcgl7Rc2Fd5F8sSKozNM1LhCgtjtj8fXcXnn9Xp/nNpggdMZTjThVc0JZLLuiCmIHPlAEixFkK25+UfR1/BEU8HOIwW2OuOXXgqvErO+woVDQuCpatguxGbNesuU6KPHVY1PK576Nm8wqqVFepGAB70en/Gr39QUF8RRj4sFZiH
dgT33GmuYx2P3GjTgW8vxI/xlcOQGkna2rfyV3aEXaGoDxW+hrIuVWH++ymUp2OJVMa2QnT4UrFx8Jdp6o5/Owwppp9DmyBEzJ31MC+dZ37WSYGX49eo4Oz5XpRMHagXd3dYnSLAhbj0Kt0ddZSoG4L4pCEdhQfCOoJhfAVH2yaKBLKO+Q+Mj74Enu7SIfWprvpOElsZQrvHEtyfd9Prlu7cHc8nu6H0yMh/XG8+RTlUEMIi4LF3iugsZ0p+VUgXL2G+rrjjLTA36hzjv34eyviomza7CHAEW0d7k4NzMgErswbJpUNajrHobLy6mwaIArT907RRvZhQXv5+3Nm80BfV13jykPkMMt5bbioOlVJUguNqLls3AHhKbiYFg9MFbuFAiv0Usp/mTWBWwtHqFF6mMbTNoG/wPF8buCnay/MAWx/QUwh38Mu2MFRJlLPU2AorZz2RD3MYSBSB8O/k1cTfh7tT7j3vdor2bPcP8C/ygO+hfIGCKl3zY/uUtOIWsz8JCpwkLcb7avizTZIP8N/DPeUe+bPn157zfIZj7F5jpLyEf+3x7uxUDf9/FgD+fuP+8hCHwX/9e46yzrfp7EvmvCPnnaJXXZfX3Doi/n43XP6/Lf7v621Plz3e+AcuLybvuH7fw+xsC6uzPZ3J1gTeSFf4vD/z6oAN0oyz+N/zvU8Tdnv85jRmHoi73Jd7qcfjz7rrd3d93n2d9bomutv75GhZ8/oy7uhyev9NnuPLlOfCOSJ3GHfX3jb7Osvfj9JKv9TdOfpcCntfTWA/b74FQ+l9Q9r3Wvo3POfnfS6/bMrY5M3bjc112GIf3KkXddf/+0Dhs9t+PYf94/feewf9d4oTQ/0GcEAT8B2GC/zj2z6KE/neJkvgPomTzqRvv/yPD/1KG8P+CEPH/L2VI/gcZckP2k1685f9HkP+1bQWB/6kgyf8vBfk2ov93kvwP4vsnua1VnI3nX1Fk8Vrl2d8X/+sSfa4xvVfur3KJp+pf42UZzxX68/u97Ds2wL++MsneYUPeyw/jlr7jA6L/TnTIv5Puv0AwAJAkAPxXqvC/S7Qg/u/8Joj+K4r+R1sL/Cfixf+3iRf9P+L9f0e8CPr/r3iv8NaXQfJviQEsVcvhVf78N+B/Lt21zf8M7Tti47519fAM2TDk6fb34Dvc/zS8z3/+vQX6EV9W5//9vf/ElP6kAQAY9l9J43/QoX+ozCdO8s54bPYPucFsMm7b2P8nOrWN03+mev+ksP+sKtC/M+I/tV2nPw9a1Nd7H3/cR75wDzX7eRHwP9PYc0X+9Yj3bvs3ffoPevK/qmL/tSfA/w1A/9UoBPlPYDb0n2gT8f9Ym95ayXHc/uk94X1Sdczy94z/Gw== \ No newline at end of file diff --git a/images/cpd-deployment.png b/images/cpd-deployment.png new file mode 100644 index 000000000..154b3dc3b Binary files /dev/null and b/images/cpd-deployment.png differ diff --git a/images/cpd-principles.png b/images/cpd-principles.png new file mode 100644 index 000000000..bb92a066e Binary files /dev/null and b/images/cpd-principles.png differ diff --git a/index.html b/index.html new file mode 100644 index 000000000..428b4e860 --- /dev/null +++ b/index.html @@ -0,0 +1 @@ + Cloud Pak Deployer

    Cloud Pak Deployer🔗

    The intention of the Cloud Pak Deployer is to simplify both the initial installation and the continuous management of OpenShift, watsonx and the IBM Cloud Paks on top of it, driven by automation. It helps you deploy watsonx, Cloud Pak for Data, Cloud Pak for Integration, Cloud Pak for Business Automation and Cloud Pak for Watson AIOps on OpenShift, on various infrastructures such as IBM Cloud ROKS, Azure Red Hat OpenShift (ARO), Red Hat OpenShift on AWS (ROSA) and vSphere, as well as on existing OpenShift clusters.

    The Cloud Pak Deployer was created for a joint project with one of our key partners who needed to fully automate the deployment of IBM containerized software based on a configuration kept in a Git repository. As additional needs for the deployed environment surface, the configuration is changed, committed and approved, and the changes are then deployed without destroying the current environment.

    "If we have seen a screen during deployment, it means something has failed"

    Not all software implementations require governance using the previously described GitOps approach. We also wanted to accelerate containerized software deployment for POCs, MVPs and services engagements using the same tool: simple by default, flexible when needed.

    Cloud Pak Deployer has been designed with the following key principles in mind:

    Deployment🔗

    Every deployment starts with a set of configuration files which define the infrastructure, OpenShift cluster and Cloud Pak or watsonx to be installed. The Cloud Pak Deployer reads the configuration from the specified directory, and secrets which are kept in a vault, and does whatever it needs to do to reach the desired end state. During the deployment, new secrets may be created and these are also stored in the vault. In its simplest form, the vault is a flat file in the specified status directory, but you can also choose to keep the secrets in HashiCorp Vault or the Vault service on IBM Cloud.
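    To make this concrete, the configuration directory typically holds YAML objects describing the desired state of the infrastructure, the OpenShift cluster and the Cloud Pak on top of it. The sketch below is illustrative only — the object and attribute names are assumptions, not the deployer's exact schema:

    ```yaml
    # Illustrative sketch of a desired-state configuration
    # (attribute names are assumptions, not the deployer's exact schema)
    openshift:
    - name: sample-cluster
      ocp_version: "4.14"
      infrastructure: ibm-roks        # where the cluster is provisioned or found

    cp4d:
    - project: cpd-instance
      openshift_cluster_name: sample-cluster
      cartridges:
      - name: wkc
        state: installed              # desired end state, not an imperative step
    ```

    Note that nothing in such a file says *how* to install anything; it only states what must exist when the deployer finishes.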

    Key principles.

    As long as you keep the configuration directory and the vault available, you can make changes to the config and re-run the deployer to reach the new desired end state. For example, if you choose to add another cartridge (service) to your Cloud Pak deployment, just change the state of that cartridge and re-run the deployer; this applies to other Cloud Paks too.
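    For example, adding a service can be as small as flipping a single attribute in the configuration and re-running the deployer (again a hypothetical snippet; the names are assumptions):

    ```yaml
    cartridges:
    - name: cognos_analytics
      state: installed    # changed from "removed"; the deployer converges to this
    ```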

    Opinionated🔗

    Red Hat OpenShift, watsonx and IBM Cloud Paks offer a wide variety of deployment and configuration options. It is the intention of the Cloud Pak Deployer to simplify the deployment by focusing on proven deployment patterns. As an example: for a non-highly available deployment of the Cloud Pak, we use an NFS storage class; for a production deployment, we use OpenShift Container Storage (aka OpenShift Data Foundation).

    Choosing from proven deployment patterns improves the probability of a straightforward installation without surprises.

    Declarative and desired end-state🔗

    It is our intention to deploy a combination of OpenShift and containerized software based on a (set of) configuration file(s) that describes the desired end-state. Although the deployment pipeline follows a pre-defined flow, as a user you do not need to know what happens under the hood. Instead, you enter the destination (end-state) you want the deployment to reach and the deployer takes care of getting you there.

    Idempotent🔗

    Idempotence goes hand in hand with the desired end-state principle of the Cloud Pak Deployer. Basically, we're saying: if we make multiple identical requests, we still arrive at the same end-state, and (very importantly) if nothing needs to change, nothing is changed. As an example of what that means: say the provisioning process timed out because the OpenShift cluster could not be created within the pre-defined timeframe, while other resources were created successfully. When the deployer is re-run, it leaves the successfully created resources alone, neither deleting nor changing them, and continues the provisioning pipeline from where it left off.
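    The principle can be sketched in a few lines of Python — a minimal illustration of desired-state reconciliation, not the deployer's actual implementation:

    ```python
    # Minimal sketch of idempotent, desired-end-state reconciliation
    # (illustrative only; not the Cloud Pak Deployer's actual code).

    def reconcile(desired: dict, actual: dict) -> dict:
        """Return the actions needed to move `actual` to `desired`.

        Resources already in the desired state are left untouched, so
        running reconcile repeatedly against an unchanged desired state
        is a no-op.
        """
        actions = {}
        for name, spec in desired.items():
            if actual.get(name) != spec:
                actions[name] = "create" if name not in actual else "update"
        return actions

    # First run after a partial failure: only the missing resource is created.
    print(reconcile({"cluster": "ready", "cp4d": "installed"},
                    {"cluster": "ready"}))   # {'cp4d': 'create'}

    # Second run with everything in place: nothing to do.
    print(reconcile({"cluster": "ready"}, {"cluster": "ready"}))  # {}
    ```

    Running the same reconciliation twice yields the same end-state, which is exactly why a re-run after a timeout can safely skip the resources that already succeeded.
    
    
    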

    \ No newline at end of file diff --git a/js/open_in_new_tab.js b/js/open_in_new_tab.js new file mode 100644 index 000000000..964fb4c1e --- /dev/null +++ b/js/open_in_new_tab.js @@ -0,0 +1,45 @@ +// Description: Open external links in a new tab and PDF links in a new tab +// Source: https://jekyllcodex.org/without-plugin/new-window-fix/ + +//open external links in a new window +function external_new_window() { + for(let c = document.getElementsByTagName("a"), a = 0;a < c.length;a++) { + let b = c[a]; + if(b.getAttribute("href") && b.hostname !== location.hostname) { + b.target = "_blank"; + b.rel = "noopener"; + } + } +} +//open PDF links in a new window +function pdf_new_window () +{ + if (!document.getElementsByTagName) { + return false; + } + let links = document.getElementsByTagName("a"); + for (let eleLink=0; eleLink < links.length; eleLink ++) { + if ((links[eleLink].href.indexOf('.pdf') !== -1)||(links[eleLink].href.indexOf('.doc') !== -1)||(links[eleLink].href.indexOf('.docx') !== -1)) { + links[eleLink].onclick = + function() { + window.open(this.href); + return false; + } + } + } +} + +function apply_rules() { + external_new_window(); + pdf_new_window(); +} + +if (typeof document$ !== "undefined") { + // compatibility with mkdocs-material's instant loading feature + // based on code from https://github.com/timvink/mkdocs-charts-plugin + // Copyright (c) 2021 Tim Vink - MIT License + // fixes [Issue #2](https://github.com/JakubAndrysek/mkdocs-open-in-new-tab/issues/2) + document$.subscribe(function() { + apply_rules(); + }) +} \ No newline at end of file diff --git a/search/search_index.json b/search/search_index.json new file mode 100644 index 000000000..3791a1d55 --- /dev/null +++ b/search/search_index.json @@ -0,0 +1 @@ +{"config":{"indexing":"full","lang":["en"],"min_search_length":3,"prebuild_index":false,"separator":"[\\s\\-]+"},"docs":[{"location":"","text":"Cloud Pak Deployer \ud83d\udd17 The intention of the Cloud Pak Deployer is to simplify 
the initial installation and also continuous management of OpenShift, watsonx and the IBM Cloud Paks on top of that, driven by automation. It will help you deploy watsonx, Cloud Pak for Data, Cloud Pak for Integration, Cloud Pak for Business Automation and Cloud Pak for Watson AIOps on various OpenShift infrastructures such as IBM Cloud ROKS, Azure Red Hat OpenShift (ARO), Red Hat OpenShift on AWS (ROSA), vSphere and also existing OpenShift. The Cloud Pak Deployer was created for a joint project with one of our key partners who needed to fully automate the deployment of IBM containerized software based on a configuration that is kept in a Git repository. As additional needs for the deployed environment surface, the configuration is changed, committed, approved and then changes are deployed without destroying the current environment. \"If we have seen a screen during deployment, it means something has failed\" Not all software implementations require governance using the previously described GitOps approach. We also wanted to accelerate containerized software deployment for POCs, MVPs and services engagements using the same tool. Simple by default, flexible when needed Cloud Pak Deployer has been designed with the following key principles in mind: Every deployment starts with a set of configuration files which define the infrastructure, OpenShift cluster and Cloud Pak or watsonx to be installed. The Cloud Pak Deployer reads the configuration from the specified directory, and secrets which are kept in a vault, and does whatever it needs to do to reach the desired end state. During the deployment, new secrets may be created and these are also stored in the vault. In its simplest form, the vault is a flat file in the specified status directory, but you can also choose to keep the secrets in HashiCorp Vault or the Vault service on IBM Cloud.
As long as you keep the configuration directory and the vault available, you can make changes to the config and re-run the deployer to reach the new desired end state. For example, if you choose to add another cartridge (service) to your Cloud Pak deployment, just change the state of that cartridge and re-run the deployer; this applies to other Cloud Paks too. Opinionated \ud83d\udd17 Red Hat OpenShift, watsonx and IBM Cloud Paks offer a wide variety of deployment and configuration options. It is the intention of the Cloud Pak Deployer to simplify the deployment by focusing on proven deployment patterns. As an example: for a non-highly available deployment of the Cloud Pak, we use an NFS storage class; for a production deployment, we use OpenShift Container Storage (aka OpenShift Data Foundation). Choosing from proven deployment patterns improves the probability for a straightforward installation without surprises. Declarative and desired end-state \ud83d\udd17 It is our intention to deploy a combination of OpenShift and containerized software based on a (set of) configuration file(s) that describe the desired end-state. Although the deployment pipeline follows a pre-defined flow, as a user you do not necessarily need to know what happens under the hood. Instead, you have entered the destination (end-state) you want the deployment to have and the deployer will take care of getting you there. Idempotent \ud83d\udd17 Idempotence goes hand in hand with the desired end-state principle of the Cloud Pak Deployer. Basically, we're saying: if we make multiple identical requests, we will still arrive at the same end-state, and (very important): if nothing needs to change, don't change. As an example of what that means: say that there was a timeout in the provisioning process because the OpenShift cluster could not be created within the pre-defined timeframe and other resources were successfully created. 
When the deployer is re-run, it will leave the successfully created resources alone and will not delete or change them, but rather continue the provisioning pipeline.","title":"Home"},{"location":"#cloud-pak-deployer","text":"The intention of the Cloud Pak Deployer is to simplify the initial installation and also continuous management of OpenShift, watsonx and the IBM Cloud Paks on top of that, driven by automation. It will help you deploy watsonx, Cloud Pak for Data, Cloud Pak for Integration, Cloud Pak for Business Automation and Cloud Pak for Watson AIOps on various OpenShift infrastructures such as IBM Cloud ROKS, Azure Red Hat OpenShift (ARO), Red Hat OpenShift on AWS (ROSA), vSphere and also existing OpenShift. The Cloud Pak Deployer was created for a joint project with one of our key partners who needed to fully automate the deployment of IBM containerized software based on a configuration that is kept in a Git repository. As additional needs for the deployed environment surface, the configuration is changed, committed, approved and then changes are deployed without destroying the current environment. \"If we have seen a screen during deployment, it means something has failed\" Not all software implementations require governance using the previously described GitOps approach. We also wanted to accelerate containerized software deployment for POCs, MVPs and services engagements using the same tool. Simple by default, flexible when needed Cloud Pak Deployer has been designed with the following key principles in mind: Every deployment starts with a set of configuration files which define the infrastructure, OpenShift cluster and Cloud Pak or watsonx to be installed. The Cloud Pak Deployer reads the configuration from the specified directory, and secrets which are kept in a vault, and does whatever it needs to do to reach the desired end state. During the deployment, new secrets may be created and these are also stored in the vault.
In its simplest form, the vault is a flat file in the specified status directory, but you can also choose to keep the secrets in HashiCorp Vault or the Vault service on IBM Cloud. As long as you keep the configuration directory and the vault available, you can make changes to the config and re-run the deployer to reach the new desired end state. For example, if you choose to add another cartridge (service) to your Cloud Pak deployment, just change the state of that cartridge and re-run the deployer; this applies to other Cloud Paks too.","title":"Cloud Pak Deployer"},{"location":"#opinionated","text":"Red Hat OpenShift, watsonx and IBM Cloud Paks offer a wide variety of deployment and configuration options. It is the intention of the Cloud Pak Deployer to simplify the deployment by focusing on proven deployment patterns. As an example: for a non-highly available deployment of the Cloud Pak, we use an NFS storage class; for a production deployment, we use OpenShift Container Storage (aka OpenShift Data Foundation). Choosing from proven deployment patterns improves the probability of a straightforward installation without surprises.","title":"Opinionated"},{"location":"#declarative-and-desired-end-state","text":"It is our intention to deploy a combination of OpenShift and containerized software based on a (set of) configuration file(s) that describe the desired end-state. Although the deployment pipeline follows a pre-defined flow, as a user you do not necessarily need to know what happens under the hood. Instead, you have entered the destination (end-state) you want the deployment to have and the deployer will take care of getting you there.","title":"Declarative and desired end-state"},{"location":"#idempotent","text":"Idempotence goes hand in hand with the desired end-state principle of the Cloud Pak Deployer.
Basically, we're saying: if we make multiple identical requests, we will still arrive at the same end-state, and (very important): if nothing needs to change, don't change. As an example of what that means: say that there was a timeout in the provisioning process because the OpenShift cluster could not be created within the pre-defined timeframe and other resources were successfully created. When the deployer is re-run, it will leave the successfully created resources alone and will not delete or change them, but rather continue the provisioning pipeline.","title":"Idempotent"},{"location":"01-introduction/current-state/","text":"Current state of the Cloud Pak Deployer \ud83d\udd17 The below picture indicates the current state of the Cloud Pak Deployer, which infrastructures are supported to provision or use OpenShift, the storage classes which can be controlled and the Cloud Paks with cartridges and components.","title":"Current state"},{"location":"01-introduction/current-state/#current-state-of-the-cloud-pak-deployer","text":"The below picture indicates the current state of the Cloud Pak Deployer, which infrastructures are supported to provision or use OpenShift, the storage classes which can be controlled and the Cloud Paks with cartridges and components.","title":"Current state of the Cloud Pak Deployer"},{"location":"05-install/install/","text":"Installing the Cloud Pak Deployer \ud83d\udd17 Install pre-requisites \ud83d\udd17 The Cloud Pak Deployer requires podman or docker to run, which are available on most Linux distributions such as Red Hat Enterprise Linux (preferred), Fedora, CentOS, Ubuntu and MacOS. On Windows Docker behaves differently than Linux platforms and this can cause the deployer to fail. Using a Windows workstation \ud83d\udd17 If you don't have a Linux server in some cloud, you can use VirtualBox to create a Linux virtual machine. 
Install VirtualBox: https://www.virtualbox.org Install a Linux guest operating system: https://www.virtualbox.org/wiki/Guest_OSes Once the guest operating system is up and running, log on as root to the guest operating system. For convenience, VirtualBox also supports port forwarding so you can use PuTTY to access the Linux command line. Install on Linux \ud83d\udd17 On Red Hat Enterprise Linux or CentOS, run the following commands: yum install -y podman git yum clean all On MacOS, run the following commands: brew install podman git podman machine init podman machine start On Ubuntu, follow the instructions here: https://docs.docker.com/engine/install/ubuntu/ Clone the current repository \ud83d\udd17 Using the command line \ud83d\udd17 If you clone the repository from the command line, you will need to enter a token when you run the git clone command. You can retrieve your token as follows: Go to a directory where you want to download the Git repo. git clone --depth=1 https://github.com/IBM/cloud-pak-deployer.git Build the image \ud83d\udd17 First go to the directory where you cloned the GitHub repository, for example ~/cloud-pak-deployer . cd cloud-pak-deployer Then run the following command to build the container image. ./cp-deploy.sh build This process will take 5-10 minutes to complete and it will install all the pre-requisites needed to run the automation, including Ansible, Python and required operating system packages. For the installation to work, the system on which the image is built must be connected to the internet.","title":"Installing Cloud Pak Deployer"},{"location":"05-install/install/#installing-the-cloud-pak-deployer","text":"","title":"Installing the Cloud Pak Deployer"},{"location":"05-install/install/#install-pre-requisites","text":"The Cloud Pak Deployer requires podman or docker to run, which are available on most Linux distributions such as Red Hat Enterprise Linux (preferred), Fedora, CentOS, Ubuntu and MacOS.
On Windows Docker behaves differently than Linux platforms and this can cause the deployer to fail.","title":"Install pre-requisites"},{"location":"05-install/install/#using-a-windows-workstation","text":"If you don't have a Linux server in some cloud, you can use VirtualBox to create a Linux virtual machine. Install VirtualBox: https://www.virtualbox.org Install a Linux guest operating system: https://www.virtualbox.org/wiki/Guest_OSes Once the guest operating system is up and running, log on as root to the guest operating system. For convenience, VirtualBox also supports port forwarding so you can use PuTTY to access the Linux command line.","title":"Using a Windows workstation"},{"location":"05-install/install/#install-on-linux","text":"On Red Hat Enterprise Linux or CentOS, run the following commands: yum install -y podman git yum clean all On MacOS, run the following commands: brew install podman git podman machine init podman machine start On Ubuntu, follow the instructions here: https://docs.docker.com/engine/install/ubuntu/","title":"Install on Linux"},{"location":"05-install/install/#clone-the-current-repository","text":"","title":"Clone the current repository"},{"location":"05-install/install/#using-the-command-line","text":"If you clone the repository from the command line, you will need to enter a token when you run the git clone command. You can retrieve your token as follows: Go to a directory where you want to download the Git repo. git clone --depth=1 https://github.com/IBM/cloud-pak-deployer.git","title":"Using the command line"},{"location":"05-install/install/#build-the-image","text":"First go to the directory where you cloned the GitHub repository, for example ~/cloud-pak-deployer . cd cloud-pak-deployer Then run the following command to build the container image.
./cp-deploy.sh build This process will take 5-10 minutes to complete and it will install all the pre-requisites needed to run the automation, including Ansible, Python and required operating system packages. For the installation to work, the system on which the image is built must be connected to the internet.","title":"Build the image"},{"location":"10-use-deployer/1-overview/overview/","text":"Using Cloud Pak Deployer \ud83d\udd17 Running Cloud Pak Deployer \ud83d\udd17 There are 3 main steps you need to perform to provision an OpenShift cluster with the desired Cloud Pak(s): Install the Cloud Pak Deployer Run the Cloud Pak Deployer to create the cluster and install the Cloud Pak What will I need? \ud83d\udd17 To complete the deployment, you will or may need the following. Details will be provided when you need them. Your Cloud Pak entitlement key to pull images from the IBM Container Registry IBM Cloud VPC: An IBM Cloud API key that allows you to provision infrastructure vSphere: A vSphere user and password which has infrastructure create permissions AWS ROSA: AWS IAM credentials (access key and secret access key), a ROSA login token and optionally a temporary security token AWS Self-managed: AWS IAM credentials (access key and secret access key) and optionally a temporary security token Azure: Azure service principal with the correct permissions Existing OpenShift: Cluster admin login credentials of the OpenShift cluster Executing commands on the OpenShift cluster \ud83d\udd17 The server on which you run the Cloud Pak Deployer may not have the necessary clients to interact with the cloud infrastructure, OpenShift, or the installed Cloud Pak. 
You can run commands using the same container image that runs the deployment of OpenShift and the Cloud Paks through the command line: Open a command line Destroying your OpenShift cluster \ud83d\udd17 If you want to destroy the provisioned OpenShift cluster, including the installed Cloud Pak(s), you can do this through the Cloud Pak Deployer. Steps can be found here: Destroy the assets","title":"Overview"},{"location":"10-use-deployer/1-overview/overview/#using-cloud-pak-deployer","text":"","title":"Using Cloud Pak Deployer"},{"location":"10-use-deployer/1-overview/overview/#running-cloud-pak-deployer","text":"There are 3 main steps you need to perform to provision an OpenShift cluster with the desired Cloud Pak(s): Install the Cloud Pak Deployer Run the Cloud Pak Deployer to create the cluster and install the Cloud Pak","title":"Running Cloud Pak Deployer"},{"location":"10-use-deployer/1-overview/overview/#what-will-i-need","text":"To complete the deployment, you will or may need the following. Details will be provided when you need them. Your Cloud Pak entitlement key to pull images from the IBM Container Registry IBM Cloud VPC: An IBM Cloud API key that allows you to provision infrastructure vSphere: A vSphere user and password which has infrastructure create permissions AWS ROSA: AWS IAM credentials (access key and secret access key), a ROSA login token and optionally a temporary security token AWS Self-managed: AWS IAM credentials (access key and secret access key) and optionally a temporary security token Azure: Azure service principal with the correct permissions Existing OpenShift: Cluster admin login credentials of the OpenShift cluster","title":"What will I need?"},{"location":"10-use-deployer/1-overview/overview/#executing-commands-on-the-openshift-cluster","text":"The server on which you run the Cloud Pak Deployer may not have the necessary clients to interact with the cloud infrastructure, OpenShift, or the installed Cloud Pak.
You can run commands using the same container image that runs the deployment of OpenShift and the Cloud Paks through the command line: Open a command line","title":"Executing commands on the OpenShift cluster"},{"location":"10-use-deployer/1-overview/overview/#destroying-your-openshift-cluster","text":"If you want to destroy the provisioned OpenShift cluster, including the installed Cloud Pak(s), you can do this through the Cloud Pak Deployer. Steps can be found here: Destroy the assets","title":"Destroying your OpenShift cluster"},{"location":"10-use-deployer/3-run/aws-rosa/","text":"Running the Cloud Pak Deployer on AWS (ROSA) \ud83d\udd17 On Amazon Web Services (AWS), OpenShift can be set up in various ways, managed by Red Hat (ROSA) or self-managed. The steps below are applicable to the ROSA (Red Hat OpenShift on AWS) installation. More information about ROSA can be found here: https://aws.amazon.com/rosa/ There are 5 main steps to run the deployer for AWS: Configure deployer Prepare the cloud environment Obtain entitlement keys and secrets Set environment variables and secrets Run the deployer Topology \ud83d\udd17 A typical setup of the ROSA cluster is pictured below: When deploying ROSA, an external host name and domain name are automatically generated by Amazon Web Services and both the API and Ingress servers can be resolved by external clients. At this stage, one cannot configure the domain name to be used. 1. Configure deployer \ud83d\udd17 Deployer configuration and status directories \ud83d\udd17 Deployer reads the configuration from a directory you set in the CONFIG_DIR environment variable. A status directory ( STATUS_DIR environment variable) is used to log activities, store temporary files and scripts. If you use a File Vault (default), the secrets are kept in the $STATUS_DIR/vault directory. You can find OpenShift and Cloud Pak sample configuration (yaml) files here: sample configuration .
For ROSA installations, copy one of ocp-aws-rosa-*.yaml files into the $CONFIG_DIR/config directory. If you also want to install a Cloud Pak, copy one of the cp4*.yaml files. Example: mkdir -p $HOME/cpd-config/config cp sample-configurations/sample-dynamic/config-samples/ocp-aws-rosa-elastic.yaml $HOME/cpd-config/config/ cp sample-configurations/sample-dynamic/config-samples/cp4d-471.yaml $HOME/cpd-config/config/ Set configuration and status directories environment variables \ud83d\udd17 Cloud Pak Deployer uses the status directory to log its activities and also to keep track of its running state. For a given environment you're provisioning or destroying, you should always specify the same status directory to avoid contention between different deploy runs. export CONFIG_DIR=$HOME/cpd-config export STATUS_DIR=$HOME/cpd-status CONFIG_DIR : Directory that holds the configuration, it must have a config subdirectory which contains the configuration yaml files. STATUS_DIR : The directory where the Cloud Pak Deployer keeps all status information and logs files. Optional: advanced configuration \ud83d\udd17 If the deployer configuration is kept on GitHub, follow the instructions in GitHub configuration . For special configuration with defaults and dynamic variables, refer to Advanced configuration . 2. Prepare the cloud environment \ud83d\udd17 Enable ROSA on AWS \ud83d\udd17 Before you can use ROSA on AWS, you have to enable it if this has not been done already. This can be done as follows: Go to https://aws.amazon.com/ Login to the AWS console Search for ROSA service Click Enable OpenShift Obtain the AWS IAM credentials \ud83d\udd17 You will need an Access Key ID and Secret Access Key for the deployer to run rosa commands. Go to https://aws.amazon.com/ Login to the AWS console Click on your user name at the top right of the screen Select Security credentials . 
You can also reach this screen via https://console.aws.amazon.com/iam/home?region=us-east-2#/security_credentials . If you do not yet have an access key (or you no longer have the associated secret), create an access key Store your Access Key ID and Secret Access Key in safe place Alternative: Using temporary AWS security credentials (STS) \ud83d\udd17 If your account uses temporary security credentials for AWS resources, you must use the Access Key ID , Secret Access Key and Session Token associated with your temporary credentials. For more information about using temporary security credentials, see https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_use-resources.html . The temporary credentials must be issued for an IAM role that has sufficient permissions to provision the infrastructure and all other components. More information about required permissions for ROSA cluster can be found here: https://docs.openshift.com/rosa/rosa_planning/rosa-sts-aws-prereqs.html#rosa-sts-aws-prereqs . An example on how to retrieve the temporary credentials for a user-defined role: printf \"\\nexport AWS_ACCESS_KEY_ID=%s\\nexport AWS_SECRET_ACCESS_KEY=%s\\nexport AWS_SESSION_TOKEN=%s\\n\" $(aws sts assume-role \\ --role-arn arn:aws:iam::678256850452:role/ocp-sts-role \\ --role-session-name OCPInstall \\ --query \"Credentials.[AccessKeyId,SecretAccessKey,SessionToken]\" \\ --output text) This would return something like the below, which you can then paste into the session running the deployer. export AWS_ACCESS_KEY_ID=ASIxxxxxxAW export AWS_SECRET_ACCESS_KEY=jtLxxxxxxxxxxxxxxxGQ export AWS_SESSION_TOKEN=IQxxxxxxxxxxxxxbfQ You must set the infrastructure.use_sts to True in the openshift configuration if you need to use the temporary security credentials. Cloud Pak Deployer will then run the rosa create cluster command with the appropriate flag. 
Obtain your ROSA login token \ud83d\udd17 To run rosa commands to manage the cluster, the deployer requires the ROSA login token. Go to https://cloud.redhat.com/openshift/token/rosa Login with your Red Hat user ID and password. If you don't have one yet, you need to create it. Copy the offline access token presented on the screen and store it in a safe place. If ROSA is already installed \ud83d\udd17 This scenario is supported. To enable this feature, please ensure that you take the following steps: Include the environment ID in the infrastructure definition {{ env_id }} to match the existing cluster Create the \"cluster-admin\" password secret using the following command: $ ./cp-deploy.sh vault set -vs {{ env_id }}-cluster-admin-password=[ YOUR PASSWORD ] Without these changes, the Cloud Pak Deployer will fail and you will receive the following error message: \"Failed to get the cluster-admin password from the vault\". 3. Acquire entitlement keys and secrets \ud83d\udd17 If you want to pull the Cloud Pak images from the entitled registry (i.e. an online install), or if you want to mirror the images to your private registry, you need to download the entitlement key. You can skip this step if you're installing from a private registry and all Cloud Pak images have already been downloaded to the private registry. Navigate to https://myibm.ibm.com/products-services/containerlibrary and login with your IBMId credentials Select Get Entitlement Key and create a new key (or copy your existing key) Copy the key value Warning As stated for the API key, you can choose to download the entitlement key to a file. However, when we reference the entitlement key, we mean the 80+ character string that is displayed, not the file. 4.
Set environment variables and secrets \ud83d\udd17 export AWS_ACCESS_KEY_ID=your_access_key export AWS_SECRET_ACCESS_KEY=your_secret_access_key export ROSA_LOGIN_TOKEN=\"your_rosa_login_token\" export CP_ENTITLEMENT_KEY=your_cp_entitlement_key Optional: If your user does not have permanent administrator access but uses temporary credentials, you can set the AWS_SESSION_TOKEN to be used for the AWS CLI. export AWS_SESSION_TOKEN=your_session_token AWS_ACCESS_KEY_ID : This is the AWS Access Key you retrieved above, often this is something like AK1A2VLMPQWBJJQGD6GV AWS_SECRET_ACCESS_KEY : The secret associated with your AWS Access Key, also retrieved above AWS_SESSION_TOKEN : The session token that will grant temporary elevated permissions ROSA_LOGIN_TOKEN : The offline access token that was retrieved before. This is a very long string (200+ characters). Make sure you enclose the string in single or double quotes as it may hold special characters CP_ENTITLEMENT_KEY : This is the entitlement key you acquired as per the instructions above, this is an 80+ character string Warning If your AWS_SESSION_TOKEN expires while the deployer is still running, the deployer may end abnormally. In that case, you can just issue new temporary credentials ( AWS_ACCESS_KEY_ID , AWS_SECRET_ACCESS_KEY and AWS_SESSION_TOKEN ) and restart the deployer. Alternatively, you can update the 3 vault secrets, respectively aws-access-key , aws-secret-access-key and aws-session-token with new values as they are re-retrieved by the deployer on a regular basis. 5. Run the deployer \ud83d\udd17 Optional: validate the configuration \ud83d\udd17 If you only want to validate the configuration, you can run the deployer with the --check-only argument. This will run the first stage to validate variables and vault secrets and then execute the generators.
./cp-deploy.sh env apply --check-only --accept-all-licenses Run the Cloud Pak Deployer \ud83d\udd17 To run the container using a local configuration input directory and a data directory where temporary files and state are kept, use the example below. If you don't specify the status directory, the deployer will automatically create a temporary directory. Please note that the status directory will also hold secrets if you have configured a flat file vault. If you lose the directory, you will not be able to make changes to the configuration and adjust the deployment. It is best to specify a permanent directory that you can reuse later. If you specify an existing directory, the current user must be the owner of the directory. Failing to do so may cause the container to fail with insufficient permissions. ./cp-deploy.sh env apply --accept-all-licenses You can also specify extra variables such as env_id to override the names of the objects referenced in the .yaml configuration files as {{ env_id }}-xxxx . For more information about the extra (dynamic) variables, see advanced configuration . The --accept-all-licenses flag is optional and confirms that you accept all licenses of the installed cartridges and instances. Licenses must be either accepted in the configuration files or at the command line. When running the command, the container will start as a daemon and the command will tail-follow the logs. You can press Ctrl-C at any time to interrupt the logging but the container will continue to run in the background. You can return to view the logs as follows: ./cp-deploy.sh env logs Deploying the infrastructure, preparing OpenShift and installing the Cloud Pak will take a long time, typically between 1-5 hours, depending on which Cloud Pak cartridges you configured. For estimated duration of the steps, refer to Timings .
If you need to interrupt the automation, use CTRL-C to stop the logging output and then use: ./cp-deploy.sh env kill On failure \ud83d\udd17 If the Cloud Pak Deployer fails, for example because certain infrastructure components are temporarily not available, fix the cause if needed and then just re-run it with the same CONFIG_DIR and STATUS_DIR as well as the same extra variables. The provisioning process has been designed to be idempotent and it will not redo actions that have already completed successfully. Finishing up \ud83d\udd17 Once the process has finished, it will output the URLs by which you can access the deployed Cloud Pak. You can also find this information under the cloud-paks directory in the status directory you specified. To retrieve the Cloud Pak URL(s): cat $STATUS_DIR/cloud-paks/* This will show the Cloud Pak URLs: Cloud Pak for Data URL for cluster pluto-01 and project cpd: https://cpd-cpd.apps.pluto-01.pmxz.p1.openshiftapps.com The admin password can be retrieved from the vault as follows: List the secrets in the vault: ./cp-deploy.sh vault list This will show something similar to the following: Secret list for group sample: - aws-access-key - aws-secret-access-key - ibm_cp_entitlement_key - rosa-login-token - pluto-01-cluster-admin-password - cp4d_admin_zen_40_pluto_01 - all-config You can then retrieve the Cloud Pak for Data admin password like this: ./cp-deploy.sh vault get --vault-secret cp4d_admin_zen_40_pluto_01 PLAY [Secrets] ***************************************************************** included: /cloud-pak-deployer/automation-roles/99-generic/vault/vault-get-secret/tasks/get-secret-file.yml for localhost cp4d_admin_zen_40_pluto_01: gelGKrcgaLatBsnAdMEbmLwGr Post-install configuration \ud83d\udd17 You can find examples of a couple of typical changes you may want to do here: Post-run changes .","title":"AWS ROSA"},{"location":"10-use-deployer/3-run/aws-rosa/#running-the-cloud-pak-deployer-on-aws-rosa","text":"On Amazon Web Services (AWS),
OpenShift can be set up in various ways, managed by Red Hat (ROSA) or self-managed. The steps below are applicable to the ROSA (Red Hat OpenShift on AWS) installation. More information about ROSA can be found here: https://aws.amazon.com/rosa/ There are 5 main steps to run the deployer for AWS: Configure deployer Prepare the cloud environment Obtain entitlement keys and secrets Set environment variables and secrets Run the deployer","title":"Running the Cloud Pak Deployer on AWS (ROSA)"},{"location":"10-use-deployer/3-run/aws-rosa/#topology","text":"A typical setup of the ROSA cluster is pictured below: When deploying ROSA, an external host name and domain name are automatically generated by Amazon Web Services and both the API and Ingress servers can be resolved by external clients. At this stage, one cannot configure the domain name to be used.","title":"Topology"},{"location":"10-use-deployer/3-run/aws-rosa/#1-configure-deployer","text":"","title":"1. Configure deployer"},{"location":"10-use-deployer/3-run/aws-rosa/#deployer-configuration-and-status-directories","text":"Deployer reads the configuration from a directory you set in the CONFIG_DIR environment variable. A status directory ( STATUS_DIR environment variable) is used to log activities, store temporary files and scripts. If you use a File Vault (default), the secrets are kept in the $STATUS_DIR/vault directory. You can find OpenShift and Cloud Pak sample configuration (yaml) files here: sample configuration . For ROSA installations, copy one of the ocp-aws-rosa-*.yaml files into the $CONFIG_DIR/config directory. If you also want to install a Cloud Pak, copy one of the cp4*.yaml files. 
Example: mkdir -p $HOME/cpd-config/config cp sample-configurations/sample-dynamic/config-samples/ocp-aws-rosa-elastic.yaml $HOME/cpd-config/config/ cp sample-configurations/sample-dynamic/config-samples/cp4d-471.yaml $HOME/cpd-config/config/","title":"Deployer configuration and status directories"},{"location":"10-use-deployer/3-run/aws-rosa/#set-configuration-and-status-directories-environment-variables","text":"Cloud Pak Deployer uses the status directory to log its activities and also to keep track of its running state. For a given environment you're provisioning or destroying, you should always specify the same status directory to avoid contention between different deploy runs. export CONFIG_DIR=$HOME/cpd-config export STATUS_DIR=$HOME/cpd-status CONFIG_DIR : Directory that holds the configuration; it must have a config subdirectory which contains the configuration yaml files. STATUS_DIR : The directory where the Cloud Pak Deployer keeps all status information and log files.","title":"Set configuration and status directories environment variables"},{"location":"10-use-deployer/3-run/aws-rosa/#optional-advanced-configuration","text":"If the deployer configuration is kept on GitHub, follow the instructions in GitHub configuration . For special configuration with defaults and dynamic variables, refer to Advanced configuration .","title":"Optional: advanced configuration"},{"location":"10-use-deployer/3-run/aws-rosa/#2-prepare-the-cloud-environment","text":"","title":"2. Prepare the cloud environment"},{"location":"10-use-deployer/3-run/aws-rosa/#enable-rosa-on-aws","text":"Before you can use ROSA on AWS, you have to enable it if this has not been done already. 
This can be done as follows: Go to https://aws.amazon.com/ Login to the AWS console Search for the ROSA service Click Enable OpenShift","title":"Enable ROSA on AWS"},{"location":"10-use-deployer/3-run/aws-rosa/#obtain-the-aws-iam-credentials","text":"You will need an Access Key ID and Secret Access Key for the deployer to run rosa commands. Go to https://aws.amazon.com/ Login to the AWS console Click on your user name at the top right of the screen Select Security credentials . You can also reach this screen via https://console.aws.amazon.com/iam/home?region=us-east-2#/security_credentials . If you do not yet have an access key (or you no longer have the associated secret), create an access key Store your Access Key ID and Secret Access Key in a safe place","title":"Obtain the AWS IAM credentials"},{"location":"10-use-deployer/3-run/aws-rosa/#alternative-using-temporary-aws-security-credentials-sts","text":"If your account uses temporary security credentials for AWS resources, you must use the Access Key ID , Secret Access Key and Session Token associated with your temporary credentials. For more information about using temporary security credentials, see https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_use-resources.html . The temporary credentials must be issued for an IAM role that has sufficient permissions to provision the infrastructure and all other components. More information about required permissions for the ROSA cluster can be found here: https://docs.openshift.com/rosa/rosa_planning/rosa-sts-aws-prereqs.html#rosa-sts-aws-prereqs . 
An example of how to retrieve the temporary credentials for a user-defined role: printf \"\\nexport AWS_ACCESS_KEY_ID=%s\\nexport AWS_SECRET_ACCESS_KEY=%s\\nexport AWS_SESSION_TOKEN=%s\\n\" $(aws sts assume-role \\ --role-arn arn:aws:iam::678256850452:role/ocp-sts-role \\ --role-session-name OCPInstall \\ --query \"Credentials.[AccessKeyId,SecretAccessKey,SessionToken]\" \\ --output text) This would return something like the below, which you can then paste into the session running the deployer. export AWS_ACCESS_KEY_ID=ASIxxxxxxAW export AWS_SECRET_ACCESS_KEY=jtLxxxxxxxxxxxxxxxGQ export AWS_SESSION_TOKEN=IQxxxxxxxxxxxxxbfQ You must set infrastructure.use_sts to True in the openshift configuration if you need to use the temporary security credentials. Cloud Pak Deployer will then run the rosa create cluster command with the appropriate flag.","title":"Alternative: Using temporary AWS security credentials (STS)"},{"location":"10-use-deployer/3-run/aws-rosa/#obtain-your-rosa-login-token","text":"To run rosa commands to manage the cluster, the deployer requires the ROSA login token. Go to https://cloud.redhat.com/openshift/token/rosa Login with your Red Hat user ID and password. If you don't have one yet, you need to create one. Copy the offline access token presented on the screen and store it in a safe place.","title":"Obtain your ROSA login token"},{"location":"10-use-deployer/3-run/aws-rosa/#if-rosa-is-already-installed","text":"This scenario is supported. 
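The printf pattern shown above can be tried offline by substituting static placeholder values for the aws sts assume-role output; the values below are fabricated and no AWS call is made:

```shell
# Illustrative only: the three positional values stand in for the
# AccessKeyId, SecretAccessKey and SessionToken that AWS STS would return.
printf '\nexport AWS_ACCESS_KEY_ID=%s\nexport AWS_SECRET_ACCESS_KEY=%s\nexport AWS_SESSION_TOKEN=%s\n' \
  ASIAEXAMPLE examplesecret examplesessiontoken
```

The output is a block of export statements in the same shape as the example further down, ready to paste into the session running the deployer.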
To use an existing ROSA cluster, ensure that you take the following steps: Set the environment ID ( {{ env_id }} ) in the infrastructure definition to match the existing cluster Create the \"cluster-admin\" password secret using the following command: ./cp-deploy.sh vault set -vs {{ env_id }}-cluster-admin-password=[YOUR PASSWORD] Without these changes, the deployer will fail and you will receive the following error message: \"Failed to get the cluster-admin password from the vault\".","title":"If ROSA is already installed"},{"location":"10-use-deployer/3-run/aws-rosa/#3-acquire-entitlement-keys-and-secrets","text":"If you want to pull the Cloud Pak images from the entitled registry (i.e. an online install), or if you want to mirror the images to your private registry, you need to download the entitlement key. You can skip this step if you're installing from a private registry and all Cloud Pak images have already been downloaded to the private registry. Navigate to https://myibm.ibm.com/products-services/containerlibrary and login with your IBMId credentials Select Get Entitlement Key and create a new key (or copy your existing key) Copy the key value Warning As stated for the API key, you can choose to download the entitlement key to a file. However, when we reference the entitlement key, we mean the 80+ character string that is displayed, not the file.","title":"3. Acquire entitlement keys and secrets"},{"location":"10-use-deployer/3-run/aws-rosa/#4-set-environment-variables-and-secrets","text":"export AWS_ACCESS_KEY_ID=your_access_key export AWS_SECRET_ACCESS_KEY=your_secret_access_key export ROSA_LOGIN_TOKEN=\"your_rosa_login_token\" export CP_ENTITLEMENT_KEY=your_cp_entitlement_key Optional: If your user does not have permanent administrator access but uses temporary credentials, you can set the AWS_SESSION_TOKEN to be used for the AWS CLI. 
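Once the variables from this step are exported, a quick presence check can prevent a failed run. A minimal sketch, assuming the variable names listed in this section; check_var is a hypothetical helper, not part of the deployer:

```shell
# Sketch: report which of the variables used in this step are set in the
# current shell. check_var is a hypothetical helper for illustration only.
check_var() {
  if printenv $1 >/dev/null; then echo $1 is set; else echo $1 is MISSING; fi
}
for v in AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY ROSA_LOGIN_TOKEN CP_ENTITLEMENT_KEY; do
  check_var $v
done
```

The optional AWS_SESSION_TOKEN, described below, can be added to the list when temporary credentials are in use.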
export AWS_SESSION_TOKEN=your_session_token AWS_ACCESS_KEY_ID : This is the AWS Access Key you retrieved above, often this is something like AK1A2VLMPQWBJJQGD6GV AWS_SECRET_ACCESS_KEY : The secret associated with your AWS Access Key, also retrieved above AWS_SESSION_TOKEN : The session token that will grant temporary elevated permissions ROSA_LOGIN_TOKEN : The offline access token that was retrieved before. This is a very long string (200+ characters). Make sure you enclose the string in single or double quotes as it may hold special characters CP_ENTITLEMENT_KEY : This is the entitlement key you acquired as per the instructions above, this is an 80+ character string Warning If your AWS_SESSION_TOKEN expires while the deployer is still running, the deployer may end abnormally. In such a case, you can just issue new temporary credentials ( AWS_ACCESS_KEY_ID , AWS_SECRET_ACCESS_KEY and AWS_SESSION_TOKEN ) and restart the deployer. Alternatively, you can update the 3 vault secrets, respectively aws-access-key , aws-secret-access-key and aws-session-token with new values as they are re-retrieved by the deployer on a regular basis.","title":"4. Set environment variables and secrets"},{"location":"10-use-deployer/3-run/aws-rosa/#5-run-the-deployer","text":"","title":"5. Run the deployer"},{"location":"10-use-deployer/3-run/aws-rosa/#optional-validate-the-configuration","text":"If you only want to validate the configuration, you can run the deployer with the --check-only argument. This will run the first stage to validate variables and vault secrets and then execute the generators. ./cp-deploy.sh env apply --check-only --accept-all-licenses","title":"Optional: validate the configuration"},{"location":"10-use-deployer/3-run/aws-rosa/#run-the-cloud-pak-deployer","text":"To run the container using a local configuration input directory and a data directory where temporary files and state are kept, use the example below. 
If you don't specify the status directory, the deployer will automatically create a temporary directory. Please note that the status directory will also hold secrets if you have configured a flat file vault. If you lose the directory, you will not be able to make changes to the configuration and adjust the deployment. It is best to specify a permanent directory that you can reuse later. If you specify an existing directory, the current user must be the owner of the directory. Failing to do so may cause the container to fail with insufficient permissions. ./cp-deploy.sh env apply --accept-all-licenses You can also specify extra variables such as env_id to override the names of the objects referenced in the .yaml configuration files as {{ env_id }}-xxxx . For more information about the extra (dynamic) variables, see advanced configuration . The --accept-all-licenses flag is optional and confirms that you accept all licenses of the installed cartridges and instances. Licenses must be either accepted in the configuration files or at the command line. When running the command, the container will start as a daemon and the command will tail-follow the logs. You can press Ctrl-C at any time to interrupt the logging but the container will continue to run in the background. You can return to view the logs as follows: ./cp-deploy.sh env logs Deploying the infrastructure, preparing OpenShift and installing the Cloud Pak will take a long time, typically between 1 and 5 hours, depending on which Cloud Pak cartridges you configured. For the estimated duration of the steps, refer to Timings . 
If you need to interrupt the automation, use CTRL-C to stop the logging output and then use: ./cp-deploy.sh env kill","title":"Run the Cloud Pak Deployer"},{"location":"10-use-deployer/3-run/aws-rosa/#on-failure","text":"If the Cloud Pak Deployer fails, for example because certain infrastructure components are temporarily not available, fix the cause if needed and then just re-run it with the same CONFIG_DIR and STATUS_DIR as well as the same extra variables. The provisioning process has been designed to be idempotent and it will not redo actions that have already completed successfully.","title":"On failure"},{"location":"10-use-deployer/3-run/aws-rosa/#finishing-up","text":"Once the process has finished, it will output the URLs by which you can access the deployed Cloud Pak. You can also find this information under the cloud-paks directory in the status directory you specified. To retrieve the Cloud Pak URL(s): cat $STATUS_DIR/cloud-paks/* This will show the Cloud Pak URLs: Cloud Pak for Data URL for cluster pluto-01 and project cpd: https://cpd-cpd.apps.pluto-01.pmxz.p1.openshiftapps.com The admin password can be retrieved from the vault as follows: List the secrets in the vault: ./cp-deploy.sh vault list This will show something similar to the following: Secret list for group sample: - aws-access-key - aws-secret-access-key - ibm_cp_entitlement_key - rosa-login-token - pluto-01-cluster-admin-password - cp4d_admin_zen_40_pluto_01 - all-config You can then retrieve the Cloud Pak for Data admin password like this: ./cp-deploy.sh vault get --vault-secret cp4d_admin_zen_40_pluto_01 PLAY [Secrets] ***************************************************************** included: /cloud-pak-deployer/automation-roles/99-generic/vault/vault-get-secret/tasks/get-secret-file.yml for localhost cp4d_admin_zen_40_pluto_01: gelGKrcgaLatBsnAdMEbmLwGr","title":"Finishing up"},{"location":"10-use-deployer/3-run/aws-rosa/#post-install-configuration","text":"You can find examples of a couple of 
typical changes you may want to make here: Post-run changes .","title":"Post-install configuration"},{"location":"10-use-deployer/3-run/aws-self-managed/","text":"Running the Cloud Pak Deployer on AWS (Self-managed) \ud83d\udd17 On Amazon Web Services (AWS), OpenShift can be set up in various ways, self-managed or managed by Red Hat (ROSA). The steps below are applicable to a self-managed OpenShift installation. The IPI (Installer Provisioned Infrastructure) installer will be used. More information about IPI installation can be found here: https://docs.openshift.com/container-platform/4.12/installing/installing_aws/installing-aws-customizations.html . There are 5 main steps to run the deployer for AWS: Configure deployer Prepare the cloud environment Obtain entitlement keys and secrets Set environment variables and secrets Run the deployer See the deployer in action in this video: https://ibm.box.com/v/cpd-aws-self-managed Topology \ud83d\udd17 A typical setup of the self-managed OpenShift cluster is pictured below: Single-node OpenShift (SNO) on AWS \ud83d\udd17 Red Hat OpenShift also supports single-node deployments in which control plane and compute are combined into a single node. Obviously, this type of configuration does not cater for any high availability requirements that are usually part of a production installation, but it does offer a more cost-efficient option for development and testing purposes. Cloud Pak Deployer can deploy a single-node OpenShift with elastic storage and a sample configuration is provided as part of the deployer. Warning When deploying the IBM Cloud Paks on single-node OpenShift, there may be intermittent timeouts as pods are starting up. In those cases, just re-run the deployer with the same configuration and check the status of the pods. 1. Configure deployer \ud83d\udd17 Deployer configuration and status directories \ud83d\udd17 Deployer reads the configuration from a directory you set in the CONFIG_DIR environment variable. 
A status directory ( STATUS_DIR environment variable) is used to log activities, store temporary files and scripts. If you use a File Vault (default), the secrets are kept in the $STATUS_DIR/vault directory. You can find OpenShift and Cloud Pak sample configuration (yaml) files here: sample configuration . For self-managed OpenShift installations, copy one of the ocp-aws-self-managed-*.yaml files into the $CONFIG_DIR/config directory. If you also want to install a Cloud Pak, copy one of the cp4*.yaml files. Example: mkdir -p $HOME/cpd-config/config cp sample-configurations/sample-dynamic/config-samples/ocp-aws-self-managed-elastic.yaml $HOME/cpd-config/config/ cp sample-configurations/sample-dynamic/config-samples/cp4d-471.yaml $HOME/cpd-config/config/ Set configuration and status directories environment variables \ud83d\udd17 Cloud Pak Deployer uses the status directory to log its activities and also to keep track of its running state. For a given environment you're provisioning or destroying, you should always specify the same status directory to avoid contention between different deploy runs. export CONFIG_DIR=$HOME/cpd-config export STATUS_DIR=$HOME/cpd-status CONFIG_DIR : Directory that holds the configuration; it must have a config subdirectory which contains the configuration yaml files. STATUS_DIR : The directory where the Cloud Pak Deployer keeps all status information and log files. Optional: advanced configuration \ud83d\udd17 If the deployer configuration is kept on GitHub, follow the instructions in GitHub configuration . For special configuration with defaults and dynamic variables, refer to Advanced configuration . 2. Prepare the cloud environment \ud83d\udd17 Configure Route53 service on AWS \ud83d\udd17 When deploying a self-managed OpenShift on Amazon Web Services, a public hosted zone must be created in the same account as your OpenShift cluster. 
The domain name or subdomain name registered in the Route53 service must be specified in the openshift configuration of the deployer. For more information on acquiring or specifying a domain on AWS, you can refer to https://github.com/openshift/installer/blob/master/docs/user/aws/route53.md . Obtain the AWS IAM credentials \ud83d\udd17 If you can use your permanent security credentials for the AWS account, you will need an Access Key ID and Secret Access Key for the deployer to set up an OpenShift cluster on AWS. Go to https://aws.amazon.com/ Login to the AWS console Click on your user name at the top right of the screen Select Security credentials . You can also reach this screen via https://console.aws.amazon.com/iam/home?region=us-east-2#/security_credentials . If you do not yet have an access key (or you no longer have the associated secret), create an access key Store your Access Key ID and Secret Access Key in a safe place Alternative: Using temporary AWS security credentials (STS) \ud83d\udd17 If your account uses temporary security credentials for AWS resources, you must use the Access Key ID , Secret Access Key and Session Token associated with your temporary credentials. For more information about using temporary security credentials, see https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_use-resources.html . The temporary credentials must be issued for an IAM role that has sufficient permissions to provision the infrastructure and all other components. More information about required permissions can be found here: https://docs.openshift.com/container-platform/4.10/authentication/managing_cloud_provider_credentials/cco-mode-sts.html#sts-mode-create-aws-resources-ccoctl . 
An example of how to retrieve the temporary credentials for a user-defined role: printf \"\\nexport AWS_ACCESS_KEY_ID=%s\\nexport AWS_SECRET_ACCESS_KEY=%s\\nexport AWS_SESSION_TOKEN=%s\\n\" $(aws sts assume-role \\ --role-arn arn:aws:iam::678256850452:role/ocp-sts-role \\ --role-session-name OCPInstall \\ --query \"Credentials.[AccessKeyId,SecretAccessKey,SessionToken]\" \\ --output text) This would return something like the below, which you can then paste into the session running the deployer. export AWS_ACCESS_KEY_ID=ASIxxxxxxAW export AWS_SECRET_ACCESS_KEY=jtLxxxxxxxxxxxxxxxGQ export AWS_SESSION_TOKEN=IQxxxxxxxxxxxxxbfQ If the openshift configuration has the infrastructure.credentials_mode set to Manual , Cloud Pak Deployer will automatically configure and run the Cloud Credential Operator utility. 3. Acquire entitlement keys and secrets \ud83d\udd17 Acquire IBM Cloud Pak entitlement key \ud83d\udd17 If you want to pull the Cloud Pak images from the entitled registry (i.e. an online install), or if you want to mirror the images to your private registry, you need to download the entitlement key. You can skip this step if you're installing from a private registry and all Cloud Pak images have already been downloaded to the private registry. Navigate to https://myibm.ibm.com/products-services/containerlibrary and login with your IBMId credentials Select Get Entitlement Key and create a new key (or copy your existing key) Copy the key value Warning As stated for the API key, you can choose to download the entitlement key to a file. However, when we reference the entitlement key, we mean the 80+ character string that is displayed, not the file. Acquire an OpenShift pull secret \ud83d\udd17 To install OpenShift, you need an OpenShift pull secret which holds your entitlement. 
Navigate to https://console.redhat.com/openshift/install/pull-secret and download the pull secret into the file /tmp/ocp_pullsecret.json Optional: Locate or generate a public SSH Key \ud83d\udd17 To obtain access to the OpenShift nodes post-installation, you will need to specify the public SSH key of your server; typically this is ~/.ssh/id_rsa.pub , where ~ is the home directory of your user. If you don't have an SSH key-pair yet, you can generate one using the steps documented here: https://cloud.ibm.com/docs/ssh-keys?topic=ssh-keys-generating-and-using-ssh-keys-for-remote-host-authentication#generating-ssh-keys-on-linux . Alternatively, the deployer can generate an SSH key-pair automatically if the credential ocp-ssh-pub-key is not in the vault. 4. Set environment variables and secrets \ud83d\udd17 Set the Cloud Pak entitlement key \ud83d\udd17 If you want the Cloud Pak images to be pulled from the entitled registry, set the Cloud Pak entitlement key. export CP_ENTITLEMENT_KEY=your_cp_entitlement_key CP_ENTITLEMENT_KEY : This is the entitlement key you acquired as per the instructions above; this is an 80+ character string. You don't need to set this environment variable when you install the Cloud Pak(s) from a private registry Set the environment variables for AWS self-managed OpenShift deployment \ud83d\udd17 export AWS_ACCESS_KEY_ID=your_access_key export AWS_SECRET_ACCESS_KEY=your_secret_access_key Optional: If your user does not have permanent administrator access but uses temporary credentials, you can set the AWS_SESSION_TOKEN to be used for the AWS CLI. 
export AWS_SESSION_TOKEN=your_session_token AWS_ACCESS_KEY_ID : This is the AWS Access Key you retrieved above, often this is something like AK1A2VLMPQWBJJQGD6GV AWS_SECRET_ACCESS_KEY : The secret associated with your AWS Access Key, also retrieved above AWS_SESSION_TOKEN : The session token that will grant temporary elevated permissions Warning If your AWS_SESSION_TOKEN expires while the deployer is still running, the deployer may end abnormally. In such a case, you can just issue new temporary credentials ( AWS_ACCESS_KEY_ID , AWS_SECRET_ACCESS_KEY and AWS_SESSION_TOKEN ) and restart the deployer. Alternatively, you can update the 3 vault secrets, respectively aws-access-key , aws-secret-access-key and aws-session-token with new values as they are re-retrieved by the deployer on a regular basis. Create the secrets needed for the self-managed OpenShift cluster \ud83d\udd17 You need to store the credentials below in the vault so that the deployer has access to them when installing a self-managed OpenShift cluster on AWS. ./cp-deploy.sh vault set \\ --vault-secret ocp-pullsecret \\ --vault-secret-file /tmp/ocp_pullsecret.json Optional: Create secret for public SSH key \ud83d\udd17 If you want to use your SSH key to access nodes in the cluster, set the Vault secret with the public SSH key. ./cp-deploy.sh vault set \\ --vault-secret ocp-ssh-pub-key \\ --vault-secret-file ~/.ssh/id_rsa.pub 5. Run the deployer \ud83d\udd17 Optional: validate the configuration \ud83d\udd17 If you only want to validate the configuration, you can run the deployer with the --check-only argument. This will run the first stage to validate variables and vault secrets and then execute the generators. ./cp-deploy.sh env apply --check-only --accept-all-licenses Run the Cloud Pak Deployer \ud83d\udd17 To run the container using a local configuration input directory and a data directory where temporary files and state are kept, use the example below. 
If you don't specify the status directory, the deployer will automatically create a temporary directory. Please note that the status directory will also hold secrets if you have configured a flat file vault. If you lose the directory, you will not be able to make changes to the configuration and adjust the deployment. It is best to specify a permanent directory that you can reuse later. If you specify an existing directory, the current user must be the owner of the directory. Failing to do so may cause the container to fail with insufficient permissions. ./cp-deploy.sh env apply --accept-all-licenses You can also specify extra variables such as env_id to override the names of the objects referenced in the .yaml configuration files as {{ env_id }}-xxxx . For more information about the extra (dynamic) variables, see advanced configuration . The --accept-all-licenses flag is optional and confirms that you accept all licenses of the installed cartridges and instances. Licenses must be either accepted in the configuration files or at the command line. When running the command, the container will start as a daemon and the command will tail-follow the logs. You can press Ctrl-C at any time to interrupt the logging but the container will continue to run in the background. You can return to view the logs as follows: ./cp-deploy.sh env logs Deploying the infrastructure, preparing OpenShift and installing the Cloud Pak will take a long time, typically between 1 and 5 hours, depending on which Cloud Pak cartridges you configured. For the estimated duration of the steps, refer to Timings . If you need to interrupt the automation, use CTRL-C to stop the logging output and then use: ./cp-deploy.sh env kill On failure \ud83d\udd17 If the Cloud Pak Deployer fails, for example because certain infrastructure components are temporarily not available, fix the cause if needed and then just re-run it with the same CONFIG_DIR and STATUS_DIR as well as the same extra variables. 
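Because a re-run is idempotent, a simple retry loop is one way to automate recovery from transient failures. The sketch below is illustrative only: flaky_step is a hypothetical stand-in for the ./cp-deploy.sh invocation so the pattern runs standalone.

```shell
# Sketch of a retry loop around an idempotent step. flaky_step fails twice
# and then succeeds; in practice you would call the deployer here instead.
attempts_file=/tmp/cpd_retry_count
echo 0 > $attempts_file
flaky_step() {
  n=$(cat $attempts_file)
  n=$((n + 1))
  echo $n > $attempts_file
  [ $n -ge 3 ]   # succeed on the third attempt
}
try=1
while [ $try -le 5 ]; do
  if flaky_step; then echo succeeded on attempt $try; break; fi
  echo attempt $try failed, retrying
  try=$((try + 1))
done
```

This only makes sense because, as noted above, completed actions are not redone; each retry picks up where the previous run stopped.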
The provisioning process has been designed to be idempotent and it will not redo actions that have already completed successfully. Finishing up \ud83d\udd17 Once the process has finished, it will output the URLs by which you can access the deployed Cloud Pak. You can also find this information under the cloud-paks directory in the status directory you specified. To retrieve the Cloud Pak URL(s): cat $STATUS_DIR/cloud-paks/* This will show the Cloud Pak URLs: Cloud Pak for Data URL for cluster pluto-01 and project cpd (domain name specified was example.com): https://cpd-cpd.apps.pluto-01.example.com The admin password can be retrieved from the vault as follows: List the secrets in the vault: ./cp-deploy.sh vault list This will show something similar to the following: Secret list for group sample: - aws-access-key - aws-secret-access-key - ocp-pullsecret - ocp-ssh-pub-key - ibm_cp_entitlement_key - pluto-01-cluster-admin-password - cp4d_admin_zen_40_pluto_01 - all-config You can then retrieve the Cloud Pak for Data admin password like this: ./cp-deploy.sh vault get --vault-secret cp4d_admin_zen_40_pluto_01 PLAY [Secrets] ***************************************************************** included: /cloud-pak-deployer/automation-roles/99-generic/vault/vault-get-secret/tasks/get-secret-file.yml for localhost cp4d_admin_zen_40_pluto_01: gelGKrcgaLatBsnAdMEbmLwGr Post-install configuration \ud83d\udd17 You can find examples of a couple of typical changes you may want to make here: Post-run changes .","title":"AWS Self-managed"},{"location":"10-use-deployer/3-run/aws-self-managed/#running-the-cloud-pak-deployer-on-aws-self-managed","text":"On Amazon Web Services (AWS), OpenShift can be set up in various ways, self-managed or managed by Red Hat (ROSA). The steps below are applicable to a self-managed OpenShift installation. The IPI (Installer Provisioned Infrastructure) installer will be used. 
More information about IPI installation can be found here: https://docs.openshift.com/container-platform/4.12/installing/installing_aws/installing-aws-customizations.html . There are 5 main steps to run the deployer for AWS: Configure deployer Prepare the cloud environment Obtain entitlement keys and secrets Set environment variables and secrets Run the deployer See the deployer in action in this video: https://ibm.box.com/v/cpd-aws-self-managed","title":"Running the Cloud Pak Deployer on AWS (Self-managed)"},{"location":"10-use-deployer/3-run/aws-self-managed/#topology","text":"A typical setup of the self-managed OpenShift cluster is pictured below:","title":"Topology"},{"location":"10-use-deployer/3-run/aws-self-managed/#single-node-openshift-sno-on-aws","text":"Red Hat OpenShift also supports single-node deployments in which control plane and compute are combined into a single node. Obviously, this type of configuration does not cater for any high availability requirements that are usually part of a production installation, but it does offer a more cost-efficient option for development and testing purposes. Cloud Pak Deployer can deploy a single-node OpenShift with elastic storage and a sample configuration is provided as part of the deployer. Warning When deploying the IBM Cloud Paks on single-node OpenShift, there may be intermittent timeouts as pods are starting up. In those cases, just re-run the deployer with the same configuration and check the status of the pods.","title":"Single-node OpenShift (SNO) on AWS"},{"location":"10-use-deployer/3-run/aws-self-managed/#1-configure-deployer","text":"","title":"1. Configure deployer"},{"location":"10-use-deployer/3-run/aws-self-managed/#deployer-configuration-and-status-directories","text":"Deployer reads the configuration from a directory you set in the CONFIG_DIR environment variable. A status directory ( STATUS_DIR environment variable) is used to log activities, store temporary files and scripts. 
If you use a File Vault (default), the secrets are kept in the $STATUS_DIR/vault directory. You can find OpenShift and Cloud Pak sample configuration (yaml) files here: sample configuration . For self-managed OpenShift installations, copy one of the ocp-aws-self-managed-*.yaml files into the $CONFIG_DIR/config directory. If you also want to install a Cloud Pak, copy one of the cp4*.yaml files. Example: mkdir -p $HOME/cpd-config/config cp sample-configurations/sample-dynamic/config-samples/ocp-aws-self-managed-elastic.yaml $HOME/cpd-config/config/ cp sample-configurations/sample-dynamic/config-samples/cp4d-471.yaml $HOME/cpd-config/config/","title":"Deployer configuration and status directories"},{"location":"10-use-deployer/3-run/aws-self-managed/#set-configuration-and-status-directories-environment-variables","text":"Cloud Pak Deployer uses the status directory to log its activities and also to keep track of its running state. For a given environment you're provisioning or destroying, you should always specify the same status directory to avoid contention between different deploy runs. export CONFIG_DIR=$HOME/cpd-config export STATUS_DIR=$HOME/cpd-status CONFIG_DIR : Directory that holds the configuration; it must have a config subdirectory which contains the configuration yaml files. STATUS_DIR : The directory where the Cloud Pak Deployer keeps all status information and log files.","title":"Set configuration and status directories environment variables"},{"location":"10-use-deployer/3-run/aws-self-managed/#optional-advanced-configuration","text":"If the deployer configuration is kept on GitHub, follow the instructions in GitHub configuration . For special configuration with defaults and dynamic variables, refer to Advanced configuration .","title":"Optional: advanced configuration"},{"location":"10-use-deployer/3-run/aws-self-managed/#2-prepare-the-cloud-environment","text":"","title":"2. 
Prepare the cloud environment"},{"location":"10-use-deployer/3-run/aws-self-managed/#configure-route53-service-on-aws","text":"When deploying a self-managed OpenShift on Amazon Web Services, a public hosted zone must be created in the same account as your OpenShift cluster. The domain name or subdomain name registered in the Route53 service must be specified in the openshift configuration of the deployer. For more information on acquiring or specifying a domain on AWS, you can refer to https://github.com/openshift/installer/blob/master/docs/user/aws/route53.md .","title":"Configure Route53 service on AWS"},{"location":"10-use-deployer/3-run/aws-self-managed/#obtain-the-aws-iam-credentials","text":"If you can use your permanent security credentials for the AWS account, you will need an Access Key ID and Secret Access Key for the deployer to set up an OpenShift cluster on AWS. Go to https://aws.amazon.com/ Log in to the AWS console Click on your user name at the top right of the screen Select Security credentials . You can also reach this screen via https://console.aws.amazon.com/iam/home?region=us-east-2#/security_credentials . If you do not yet have an access key (or you no longer have the associated secret), create an access key Store your Access Key ID and Secret Access Key in a safe place","title":"Obtain the AWS IAM credentials"},{"location":"10-use-deployer/3-run/aws-self-managed/#alternative-using-temporary-aws-security-credentials-sts","text":"If your account uses temporary security credentials for AWS resources, you must use the Access Key ID , Secret Access Key and Session Token associated with your temporary credentials. For more information about using temporary security credentials, see https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_use-resources.html . The temporary credentials must be issued for an IAM role that has sufficient permissions to provision the infrastructure and all other components. 
More information about required permissions can be found here: https://docs.openshift.com/container-platform/4.10/authentication/managing_cloud_provider_credentials/cco-mode-sts.html#sts-mode-create-aws-resources-ccoctl . An example of how to retrieve the temporary credentials for a user-defined role: printf \"\\nexport AWS_ACCESS_KEY_ID=%s\\nexport AWS_SECRET_ACCESS_KEY=%s\\nexport AWS_SESSION_TOKEN=%s\\n\" $(aws sts assume-role \\ --role-arn arn:aws:iam::678256850452:role/ocp-sts-role \\ --role-session-name OCPInstall \\ --query \"Credentials.[AccessKeyId,SecretAccessKey,SessionToken]\" \\ --output text) This would return something like the below, which you can then paste into the session running the deployer. export AWS_ACCESS_KEY_ID=ASIxxxxxxAW export AWS_SECRET_ACCESS_KEY=jtLxxxxxxxxxxxxxxxGQ export AWS_SESSION_TOKEN=IQxxxxxxxxxxxxxbfQ If the openshift configuration has the infrastructure.credentials_mode set to Manual , Cloud Pak Deployer will automatically configure and run the Cloud Credential Operator utility.","title":"Alternative: Using temporary AWS security credentials (STS)"},{"location":"10-use-deployer/3-run/aws-self-managed/#3-acquire-entitlement-keys-and-secrets","text":"","title":"3. Acquire entitlement keys and secrets"},{"location":"10-use-deployer/3-run/aws-self-managed/#acquire-ibm-cloud-pak-entitlement-key","text":"If you want to pull the Cloud Pak images from the entitled registry (i.e. an online install), or if you want to mirror the images to your private registry, you need to download the entitlement key. You can skip this step if you're installing from a private registry and all Cloud Pak images have already been downloaded to the private registry. 
Navigate to https://myibm.ibm.com/products-services/containerlibrary and log in with your IBMid credentials Select Get Entitlement Key and create a new key (or copy your existing key) Copy the key value Warning As stated for the API key, you can choose to download the entitlement key to a file. However, when we reference the entitlement key, we mean the 80+ character string that is displayed, not the file.","title":"Acquire IBM Cloud Pak entitlement key"},{"location":"10-use-deployer/3-run/aws-self-managed/#acquire-an-openshift-pull-secret","text":"To install OpenShift you need an OpenShift pull secret which holds your entitlement. Navigate to https://console.redhat.com/openshift/install/pull-secret and download the pull secret into file /tmp/ocp_pullsecret.json","title":"Acquire an OpenShift pull secret"},{"location":"10-use-deployer/3-run/aws-self-managed/#optional-locate-or-generate-a-public-ssh-key","text":"To obtain access to the OpenShift nodes post-installation, you will need to specify the public SSH key of your server; typically this is ~/.ssh/id_rsa.pub , where ~ is the home directory of your user. If you don't have an SSH key-pair yet, you can generate one using the steps documented here: https://cloud.ibm.com/docs/ssh-keys?topic=ssh-keys-generating-and-using-ssh-keys-for-remote-host-authentication#generating-ssh-keys-on-linux . Alternatively, the deployer can generate an SSH key-pair automatically if the credential ocp-ssh-pub-key is not in the vault.","title":"Optional: Locate or generate a public SSH Key"},{"location":"10-use-deployer/3-run/aws-self-managed/#4-set-environment-variables-and-secrets","text":"","title":"4. Set environment variables and secrets"},{"location":"10-use-deployer/3-run/aws-self-managed/#set-the-cloud-pak-entitlement-key","text":"If you want the Cloud Pak images to be pulled from the entitled registry, set the Cloud Pak entitlement key. 
export CP_ENTITLEMENT_KEY=your_cp_entitlement_key CP_ENTITLEMENT_KEY : This is the entitlement key you acquired as per the instructions above; it is an 80+ character string. You don't need to set this environment variable when you install the Cloud Pak(s) from a private registry","title":"Set the Cloud Pak entitlement key"},{"location":"10-use-deployer/3-run/aws-self-managed/#set-the-environment-variables-for-aws-self-managed-openshift-deployment","text":"export AWS_ACCESS_KEY_ID=your_access_key export AWS_SECRET_ACCESS_KEY=your_secret_access_key Optional: If your user does not have permanent administrator access but uses temporary credentials, you can set the AWS_SESSION_TOKEN to be used for the AWS CLI. export AWS_SESSION_TOKEN=your_session_token AWS_ACCESS_KEY_ID : This is the AWS Access Key you retrieved above, often this is something like AK1A2VLMPQWBJJQGD6GV AWS_SECRET_ACCESS_KEY : The secret associated with your AWS Access Key, also retrieved above AWS_SESSION_TOKEN : The session token that will grant temporary elevated permissions Warning If your AWS_SESSION_TOKEN expires while the deployer is still running, the deployer may end abnormally. In such a case, you can just issue new temporary credentials ( AWS_ACCESS_KEY_ID , AWS_SECRET_ACCESS_KEY and AWS_SESSION_TOKEN ) and restart the deployer. Alternatively, you can update the 3 vault secrets, respectively aws-access-key , aws-secret-access-key and aws-session-token with new values as they are re-retrieved by the deployer on a regular basis.","title":"Set the environment variables for AWS self-managed OpenShift deployment"},{"location":"10-use-deployer/3-run/aws-self-managed/#create-the-secrets-needed-for-self-managed-openshift-cluster","text":"You need to store the below credentials in the vault so that the deployer has access to them when installing a self-managed OpenShift cluster on AWS. 
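Updating the 3 vault secrets after the session token expires can be sketched as follows. The credential values below are placeholders (assumptions, not real keys), and the temporary files mirror the --vault-secret-file pattern used by the vault set commands in this guide:

```shell
# Sketch: refresh the temporary AWS credentials in the vault after the
# STS session token has expired. The values here are placeholders.
AWS_ACCESS_KEY_ID="ASIAEXAMPLEKEY"
AWS_SECRET_ACCESS_KEY="exampleSecret"
AWS_SESSION_TOKEN="exampleToken"

# Write each value to a restricted temporary file so it can be loaded
# into the vault with --vault-secret-file
umask 077
mkdir -p /tmp/aws-creds
printf '%s' "$AWS_ACCESS_KEY_ID"     > /tmp/aws-creds/aws-access-key
printf '%s' "$AWS_SECRET_ACCESS_KEY" > /tmp/aws-creds/aws-secret-access-key
printf '%s' "$AWS_SESSION_TOKEN"     > /tmp/aws-creds/aws-session-token

# Then load each of the 3 secrets, for example for the session token:
#   ./cp-deploy.sh vault set \
#     --vault-secret aws-session-token \
#     --vault-secret-file /tmp/aws-creds/aws-session-token
```

Remove the temporary files once the secrets have been stored in the vault.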
./cp-deploy.sh vault set \\ --vault-secret ocp-pullsecret \\ --vault-secret-file /tmp/ocp_pullsecret.json","title":"Create the secrets needed for self-managed OpenShift cluster"},{"location":"10-use-deployer/3-run/aws-self-managed/#optional-create-secret-for-public-ssh-key","text":"If you want to use your SSH key to access nodes in the cluster, set the Vault secret with the public SSH key. ./cp-deploy.sh vault set \\ --vault-secret ocp-ssh-pub-key \\ --vault-secret-file ~/.ssh/id_rsa.pub","title":"Optional: Create secret for public SSH key"},{"location":"10-use-deployer/3-run/aws-self-managed/#5-run-the-deployer","text":"","title":"5. Run the deployer"},{"location":"10-use-deployer/3-run/aws-self-managed/#optional-validate-the-configuration","text":"If you only want to validate the configuration, you can run the deployer with the --check-only argument. This will run the first stage to validate variables and vault secrets and then execute the generators. ./cp-deploy.sh env apply --check-only --accept-all-licenses","title":"Optional: validate the configuration"},{"location":"10-use-deployer/3-run/aws-self-managed/#run-the-cloud-pak-deployer","text":"To run the container using a local configuration input directory and a data directory where temporary files and state are kept, use the example below. If you don't specify the status directory, the deployer will automatically create a temporary directory. Please note that the status directory will also hold secrets if you have configured a flat file vault. If you lose the directory, you will not be able to make changes to the configuration and adjust the deployment. It is best to specify a permanent directory that you can reuse later. If you specify an existing directory the current user must be the owner of the directory. Failing to do so may cause the container to fail with insufficient permissions. 
./cp-deploy.sh env apply --accept-all-licenses You can also specify extra variables such as env_id to override the names of the objects referenced in the .yaml configuration files as {{ env_id }}-xxxx . For more information about the extra (dynamic) variables, see advanced configuration . The --accept-all-licenses flag is optional and confirms that you accept all licenses of the installed cartridges and instances. Licenses must be either accepted in the configuration files or at the command line. When running the command, the container will start as a daemon and the command will tail-follow the logs. You can press Ctrl-C at any time to interrupt the logging but the container will continue to run in the background. You can return to view the logs as follows: ./cp-deploy.sh env logs Deploying the infrastructure, preparing OpenShift and installing the Cloud Pak will take a long time, typically between 1-5 hours, depending on which Cloud Pak cartridges you configured. For estimated duration of the steps, refer to Timings . If you need to interrupt the automation, use CTRL-C to stop the logging output and then use: ./cp-deploy.sh env kill","title":"Run the Cloud Pak Deployer"},{"location":"10-use-deployer/3-run/aws-self-managed/#on-failure","text":"If the Cloud Pak Deployer fails, for example because certain infrastructure components are temporarily not available, fix the cause if needed and then just re-run it with the same CONFIG_DIR and STATUS_DIR as well as the extra variables. The provisioning process has been designed to be idempotent and it will not redo actions that have already completed successfully.","title":"On failure"},{"location":"10-use-deployer/3-run/aws-self-managed/#finishing-up","text":"Once the process has finished, it will output the URLs by which you can access the deployed Cloud Pak. You can also find this information under the cloud-paks directory in the status directory you specified. 
To retrieve the Cloud Pak URL(s): cat $STATUS_DIR/cloud-paks/* This will show the Cloud Pak URLs: Cloud Pak for Data URL for cluster pluto-01 and project cpd (domain name specified was example.com): https://cpd-cpd.apps.pluto-01.example.com The admin password can be retrieved from the vault as follows: List the secrets in the vault: ./cp-deploy.sh vault list This will show something similar to the following: Secret list for group sample: - aws-access-key - aws-secret-access-key - ocp-pullsecret - ocp-ssh-pub-key - ibm_cp_entitlement_key - pluto-01-cluster-admin-password - cp4d_admin_zen_40_pluto_01 - all-config You can then retrieve the Cloud Pak for Data admin password like this: ./cp-deploy.sh vault get --vault-secret cp4d_admin_zen_40_pluto_01 PLAY [Secrets] ***************************************************************** included: /cloud-pak-deployer/automation-roles/99-generic/vault/vault-get-secret/tasks/get-secret-file.yml for localhost cp4d_admin_zen_40_pluto_01: gelGKrcgaLatBsnAdMEbmLwGr","title":"Finishing up"},{"location":"10-use-deployer/3-run/aws-self-managed/#post-install-configuration","text":"You can find examples of a couple of typical changes you may want to do here: Post-run changes .","title":"Post-install configuration"},{"location":"10-use-deployer/3-run/azure-aro/","text":"Running the Cloud Pak Deployer on Microsoft Azure - ARO \ud83d\udd17 On Azure, OpenShift can be set up in various ways, managed by Red Hat (ARO) or self-managed. The steps below are applicable to the ARO (Azure Red Hat OpenShift). There are 5 main steps to run the deployer for Azure: Configure deployer Prepare the cloud environment Obtain entitlement keys and secrets Set environment variables and secrets Run the deployer Topology \ud83d\udd17 A typical setup of the ARO cluster is pictured below: When deploying ARO, you can configure the domain name by setting the openshift.domain_name attribute. 
The resulting domain name is managed by Azure, and it must be unique across all ARO instances deployed in Azure. Both the API and Ingress URLs are set to be public in the template, so they can be resolved by external clients. If you want to use a custom domain and don't have one yet, you can buy one from Azure: https://learn.microsoft.com/en-us/azure/app-service/manage-custom-dns-buy-domain . 1. Configure deployer \ud83d\udd17 Deployer configuration and status directories \ud83d\udd17 Deployer reads the configuration from a directory you set in the CONFIG_DIR environment variable. A status directory ( STATUS_DIR environment variable) is used to log activities and store temporary files and scripts. If you use a File Vault (default), the secrets are kept in the $STATUS_DIR/vault directory. You can find OpenShift and Cloud Pak sample configuration (yaml) files here: sample configuration . For ARO installations, copy one of the ocp-azure-aro*.yaml files into the $CONFIG_DIR/config directory. If you also want to install a Cloud Pak, copy one of the cp4*.yaml files. Example: mkdir -p $HOME/cpd-config/config cp sample-configurations/sample-dynamic/config-samples/ocp-azure-aro.yaml $HOME/cpd-config/config/ cp sample-configurations/sample-dynamic/config-samples/cp4d-471.yaml $HOME/cpd-config/config/ Set configuration and status directories environment variables \ud83d\udd17 Cloud Pak Deployer uses the status directory to log its activities and also to keep track of its running state. For a given environment you're provisioning or destroying, you should always specify the same status directory to avoid contention between different deploy runs. export CONFIG_DIR=$HOME/cpd-config export STATUS_DIR=$HOME/cpd-status CONFIG_DIR : Directory that holds the configuration, it must have a config subdirectory which contains the configuration yaml files. STATUS_DIR : The directory where the Cloud Pak Deployer keeps all status information and log files. 
Optional: advanced configuration \ud83d\udd17 If the deployer configuration is kept on GitHub, follow the instructions in GitHub configuration . For special configuration with defaults and dynamic variables, refer to Advanced configuration . 2. Prepare the cloud environment \ud83d\udd17 Install the Azure CLI tool \ud83d\udd17 Install Azure CLI tool , and run the commands in your operating system. Verify your quota and permissions in Microsoft Azure \ud83d\udd17 Check Azure resource quota of the subscription - Azure Red Hat OpenShift requires a minimum of 40 cores to create and run an OpenShift cluster. The ARO cluster is provisioned using the az command. Ideally, you should have Contributor permissions on the subscription (Azure resources) and the Application administrator role assigned in the Azure Active Directory. See details here . Set environment variables for Azure \ud83d\udd17 export AZURE_RESOURCE_GROUP=pluto-01-rg export AZURE_LOCATION=westeurope export AZURE_SP=pluto-01-sp AZURE_RESOURCE_GROUP : The Azure resource group that will hold all resources belonging to the cluster: VMs, load balancers, virtual networks, subnets, etc. Typically you will create a resource group for every OpenShift cluster you provision. AZURE_LOCATION : The Azure location of the resource group, for example eastus or westeurope . AZURE_SP : Azure service principal that is used to create the resources on Azure. You will get the service principal from the Azure administrator. Store Service Principal credentials \ud83d\udd17 You must run the OpenShift installation using an Azure Service Principal with sufficient permissions. The Azure account administrator will share the SP credentials as a JSON file. If you have subscription-level access you can also create the Service Principal yourself. See steps in Create Azure service principal . 
Example output in credentials file: { \"appId\": \"a4c39ae9-f9d1-4038-b4a4-ab011e769111\", \"displayName\": \"pluto-01-sp\", \"password\": \"xyz-xyz\", \"tenant\": \"869930ac-17ee-4dda-bbad-7354c3e7629c8\" } Store this file as /tmp/${AZURE_SP}-credentials.json . Login as Service Principal \ud83d\udd17 Login as the service principal: az login --service-principal -u a4c39ae9-f9d1-4038-b4a4-ab011e769111 -p xyz-xyz --tenant 869930ac-17ee-4dda-bbad-7354c3e7629c8 Register Resource Providers \ud83d\udd17 Make sure the following Resource Providers are registered for your subscription by running: az provider register -n Microsoft.RedHatOpenShift --wait az provider register -n Microsoft.Compute --wait az provider register -n Microsoft.Storage --wait az provider register -n Microsoft.Authorization --wait Create the resource group \ud83d\udd17 First the resource group must be created; this resource group must match the one configured in your OpenShift yaml config file. az group create \\ --name ${ AZURE_RESOURCE_GROUP } \\ --location ${ AZURE_LOCATION } 3. Acquire entitlement keys and secrets \ud83d\udd17 If you want to pull the Cloud Pak images from the entitled registry (i.e. an online install), or if you want to mirror the images to your private registry, you need to download the entitlement key. You can skip this step if you're installing from a private registry and all Cloud Pak images have already been downloaded to the private registry. Navigate to https://myibm.ibm.com/products-services/containerlibrary and login with your IBMId credentials Select Get Entitlement Key and create a new key (or copy your existing key) Copy the key value Warning As stated for the API key, you can choose to download the entitlement key to a file. However, when we reference the entitlement key, we mean the 80+ character string that is displayed, not the file. Acquire an OpenShift pull secret \ud83d\udd17 To install OpenShift you need an OpenShift pull secret which holds your entitlement. 
Navigate to https://console.redhat.com/openshift/install/pull-secret and download the pull secret into file /tmp/ocp_pullsecret.json 4. Set environment variables and secrets \ud83d\udd17 Create the secrets needed for ARO deployment \ud83d\udd17 You need to store the OpenShift pull secret and service principal credentials in the vault so that the deployer has access to it. ./cp-deploy.sh vault set \\ --vault-secret ocp-pullsecret \\ --vault-secret-file /tmp/ocp_pullsecret.json ./cp-deploy.sh vault set \\ --vault-secret ${AZURE_SP}-credentials \\ --vault-secret-file /tmp/${AZURE_SP}-credentials.json 5. Run the deployer \ud83d\udd17 Optional: validate the configuration \ud83d\udd17 If you only want to validate the configuration, you can run the deployer with the --check-only argument. This will run the first stage to validate variables and vault secrets and then execute the generators. ./cp-deploy.sh env apply --check-only --accept-all-licenses Run the Cloud Pak Deployer \ud83d\udd17 To run the container using a local configuration input directory and a data directory where temporary files and state are kept, use the example below. If you don't specify the status directory, the deployer will automatically create a temporary directory. Please note that the status directory will also hold secrets if you have configured a flat file vault. If you lose the directory, you will not be able to make changes to the configuration and adjust the deployment. It is best to specify a permanent directory that you can reuse later. If you specify an existing directory the current user must be the owner of the directory. Failing to do so may cause the container to fail with insufficient permissions. ./cp-deploy.sh env apply --accept-all-licenses You can also specify extra variables such as env_id to override the names of the objects referenced in the .yaml configuration files as {{ env_id }}-xxxx . For more information about the extra (dynamic) variables, see advanced configuration . 
The --accept-all-licenses flag is optional and confirms that you accept all licenses of the installed cartridges and instances. Licenses must be either accepted in the configuration files or at the command line. When running the command, the container will start as a daemon and the command will tail-follow the logs. You can press Ctrl-C at any time to interrupt the logging but the container will continue to run in the background. You can return to view the logs as follows: ./cp-deploy.sh env logs Deploying the infrastructure, preparing OpenShift and installing the Cloud Pak will take a long time, typically between 1-5 hours, depending on which Cloud Pak cartridges you configured. For estimated duration of the steps, refer to Timings . If you need to interrupt the automation, use CTRL-C to stop the logging output and then use: ./cp-deploy.sh env kill On failure \ud83d\udd17 If the Cloud Pak Deployer fails, for example because certain infrastructure components are temporarily not available, fix the cause if needed and then just re-run it with the same CONFIG_DIR and STATUS_DIR as well as the extra variables. The provisioning process has been designed to be idempotent and it will not redo actions that have already completed successfully. Finishing up \ud83d\udd17 Once the process has finished, it will output the URLs by which you can access the deployed Cloud Pak. You can also find this information under the cloud-paks directory in the status directory you specified. 
To retrieve the Cloud Pak URL(s): cat $STATUS_DIR/cloud-paks/* This will show the Cloud Pak URLs: Cloud Pak for Data URL for cluster pluto-01 and project cpd (domain name specified was example.com): https://cpd-cpd.apps.pluto-01.example.com The admin password can be retrieved from the vault as follows: List the secrets in the vault: ./cp-deploy.sh vault list This will show something similar to the following: Secret list for group sample: - ibm_cp_entitlement_key - sample-provision-ssh-key - sample-provision-ssh-pub-key - cp4d_admin_zen_sample_sample You can then retrieve the Cloud Pak for Data admin password like this: ./cp-deploy.sh vault get --vault-secret cp4d_admin_zen_sample_sample PLAY [Secrets] ***************************************************************** included: /automation_script/automation-roles/99-generic/vault/vault-get-secret/tasks/get-secret-file.yml for localhost cp4d_admin_zen_sample_sample: gelGKrcgaLatBsnAdMEbmLwGr Post-install configuration \ud83d\udd17 You can find examples of a couple of typical changes you may want to do here: Post-run changes .","title":"Azure ARO"},{"location":"10-use-deployer/3-run/azure-aro/#running-the-cloud-pak-deployer-on-microsoft-azure---aro","text":"On Azure, OpenShift can be set up in various ways, managed by Red Hat (ARO) or self-managed. The steps below are applicable to the ARO (Azure Red Hat OpenShift). There are 5 main steps to run the deployer for Azure: Configure deployer Prepare the cloud environment Obtain entitlement keys and secrets Set environment variables and secrets Run the deployer","title":"Running the Cloud Pak Deployer on Microsoft Azure - ARO"},{"location":"10-use-deployer/3-run/azure-aro/#topology","text":"A typical setup of the ARO cluster is pictured below: When deploying ARO, you can configure the domain name by setting the openshift.domain_name attribute. The resulting domain name is managed by Azure, and it must be unique across all ARO instances deployed in Azure. 
Both the API and Ingress URLs are set to be public in the template, so they can be resolved by external clients. If you want to use a custom domain and don't have one yet, you can buy one from Azure: https://learn.microsoft.com/en-us/azure/app-service/manage-custom-dns-buy-domain .","title":"Topology"},{"location":"10-use-deployer/3-run/azure-aro/#1-configure-deployer","text":"","title":"1. Configure deployer"},{"location":"10-use-deployer/3-run/azure-aro/#deployer-configuration-and-status-directories","text":"Deployer reads the configuration from a directory you set in the CONFIG_DIR environment variable. A status directory ( STATUS_DIR environment variable) is used to log activities and store temporary files and scripts. If you use a File Vault (default), the secrets are kept in the $STATUS_DIR/vault directory. You can find OpenShift and Cloud Pak sample configuration (yaml) files here: sample configuration . For ARO installations, copy one of the ocp-azure-aro*.yaml files into the $CONFIG_DIR/config directory. If you also want to install a Cloud Pak, copy one of the cp4*.yaml files. Example: mkdir -p $HOME/cpd-config/config cp sample-configurations/sample-dynamic/config-samples/ocp-azure-aro.yaml $HOME/cpd-config/config/ cp sample-configurations/sample-dynamic/config-samples/cp4d-471.yaml $HOME/cpd-config/config/","title":"Deployer configuration and status directories"},{"location":"10-use-deployer/3-run/azure-aro/#set-configuration-and-status-directories-environment-variables","text":"Cloud Pak Deployer uses the status directory to log its activities and also to keep track of its running state. For a given environment you're provisioning or destroying, you should always specify the same status directory to avoid contention between different deploy runs. export CONFIG_DIR=$HOME/cpd-config export STATUS_DIR=$HOME/cpd-status CONFIG_DIR : Directory that holds the configuration, it must have a config subdirectory which contains the configuration yaml files. 
STATUS_DIR : The directory where the Cloud Pak Deployer keeps all status information and log files.","title":"Set configuration and status directories environment variables"},{"location":"10-use-deployer/3-run/azure-aro/#optional-advanced-configuration","text":"If the deployer configuration is kept on GitHub, follow the instructions in GitHub configuration . For special configuration with defaults and dynamic variables, refer to Advanced configuration .","title":"Optional: advanced configuration"},{"location":"10-use-deployer/3-run/azure-aro/#2-prepare-the-cloud-environment","text":"","title":"2. Prepare the cloud environment"},{"location":"10-use-deployer/3-run/azure-aro/#install-the-azure-cli-tool","text":"Install Azure CLI tool , and run the commands in your operating system.","title":"Install the Azure CLI tool"},{"location":"10-use-deployer/3-run/azure-aro/#verify-your-quota-and-permissions-in-microsoft-azure","text":"Check Azure resource quota of the subscription - Azure Red Hat OpenShift requires a minimum of 40 cores to create and run an OpenShift cluster. The ARO cluster is provisioned using the az command. Ideally, you should have Contributor permissions on the subscription (Azure resources) and the Application administrator role assigned in the Azure Active Directory. See details here .","title":"Verify your quota and permissions in Microsoft Azure"},{"location":"10-use-deployer/3-run/azure-aro/#set-environment-variables-for-azure","text":"export AZURE_RESOURCE_GROUP=pluto-01-rg export AZURE_LOCATION=westeurope export AZURE_SP=pluto-01-sp AZURE_RESOURCE_GROUP : The Azure resource group that will hold all resources belonging to the cluster: VMs, load balancers, virtual networks, subnets, etc. Typically you will create a resource group for every OpenShift cluster you provision. AZURE_LOCATION : The Azure location of the resource group, for example eastus or westeurope . AZURE_SP : Azure service principal that is used to create the resources on Azure. 
You will get the service principal from the Azure administrator.","title":"Set environment variables for Azure"},{"location":"10-use-deployer/3-run/azure-aro/#store-service-principal-credentials","text":"You must run the OpenShift installation using an Azure Service Principal with sufficient permissions. The Azure account administrator will share the SP credentials as a JSON file. If you have subscription-level access you can also create the Service Principal yourself. See steps in Create Azure service principal . Example output in credentials file: { \"appId\": \"a4c39ae9-f9d1-4038-b4a4-ab011e769111\", \"displayName\": \"pluto-01-sp\", \"password\": \"xyz-xyz\", \"tenant\": \"869930ac-17ee-4dda-bbad-7354c3e7629c8\" } Store this file as /tmp/${AZURE_SP}-credentials.json .","title":"Store Service Principal credentials"},{"location":"10-use-deployer/3-run/azure-aro/#login-as-service-principal","text":"Login as the service principal: az login --service-principal -u a4c39ae9-f9d1-4038-b4a4-ab011e769111 -p xyz-xyz --tenant 869930ac-17ee-4dda-bbad-7354c3e7629c8","title":"Login as Service Principal"},{"location":"10-use-deployer/3-run/azure-aro/#register-resource-providers","text":"Make sure the following Resource Providers are registered for your subscription by running: az provider register -n Microsoft.RedHatOpenShift --wait az provider register -n Microsoft.Compute --wait az provider register -n Microsoft.Storage --wait az provider register -n Microsoft.Authorization --wait","title":"Register Resource Providers"},{"location":"10-use-deployer/3-run/azure-aro/#create-the-resource-group","text":"First the resource group must be created; this resource group must match the one configured in your OpenShift yaml config file. 
az group create \\ --name ${ AZURE_RESOURCE_GROUP } \\ --location ${ AZURE_LOCATION }","title":"Create the resource group"},{"location":"10-use-deployer/3-run/azure-aro/#3-acquire-entitlement-keys-and-secrets","text":"If you want to pull the Cloud Pak images from the entitled registry (i.e. an online install), or if you want to mirror the images to your private registry, you need to download the entitlement key. You can skip this step if you're installing from a private registry and all Cloud Pak images have already been downloaded to the private registry. Navigate to https://myibm.ibm.com/products-services/containerlibrary and login with your IBMId credentials Select Get Entitlement Key and create a new key (or copy your existing key) Copy the key value Warning As stated for the API key, you can choose to download the entitlement key to a file. However, when we reference the entitlement key, we mean the 80+ character string that is displayed, not the file.","title":"3. Acquire entitlement keys and secrets"},{"location":"10-use-deployer/3-run/azure-aro/#acquire-an-openshift-pull-secret","text":"To install OpenShift you need an OpenShift pull secret which holds your entitlement. Navigate to https://console.redhat.com/openshift/install/pull-secret and download the pull secret into file /tmp/ocp_pullsecret.json","title":"Acquire an OpenShift pull secret"},{"location":"10-use-deployer/3-run/azure-aro/#4-set-environment-variables-and-secrets","text":"","title":"4. Set environment variables and secrets"},{"location":"10-use-deployer/3-run/azure-aro/#create-the-secrets-needed-for-aro-deployment","text":"You need to store the OpenShift pull secret and service principal credentials in the vault so that the deployer has access to it. 
./cp-deploy.sh vault set \\ --vault-secret ocp-pullsecret \\ --vault-secret-file /tmp/ocp_pullsecret.json ./cp-deploy.sh vault set \\ --vault-secret ${AZURE_SP}-credentials \\ --vault-secret-file /tmp/${AZURE_SP}-credentials.json","title":"Create the secrets needed for ARO deployment"},{"location":"10-use-deployer/3-run/azure-aro/#5-run-the-deployer","text":"","title":"5. Run the deployer"},{"location":"10-use-deployer/3-run/azure-aro/#optional-validate-the-configuration","text":"If you only want to validate the configuration, you can run the deployer with the --check-only argument. This will run the first stage to validate variables and vault secrets and then execute the generators. ./cp-deploy.sh env apply --check-only --accept-all-licenses","title":"Optional: validate the configuration"},{"location":"10-use-deployer/3-run/azure-aro/#run-the-cloud-pak-deployer","text":"To run the container using a local configuration input directory and a data directory where temporary files and state are kept, use the example below. If you don't specify the status directory, the deployer will automatically create a temporary directory. Please note that the status directory will also hold secrets if you have configured a flat file vault. If you lose the directory, you will not be able to make changes to the configuration and adjust the deployment. It is best to specify a permanent directory that you can reuse later. If you specify an existing directory the current user must be the owner of the directory. Failing to do so may cause the container to fail with insufficient permissions. ./cp-deploy.sh env apply --accept-all-licenses You can also specify extra variables such as env_id to override the names of the objects referenced in the .yaml configuration files as {{ env_id }}-xxxx . For more information about the extra (dynamic) variables, see advanced configuration . 
The --accept-all-licenses flag is optional and confirms that you accept all licenses of the installed cartridges and instances. Licenses must be accepted either in the configuration files or at the command line. When running the command, the container will start as a daemon and the command will tail-follow the logs. You can press Ctrl-C at any time to interrupt the logging but the container will continue to run in the background. You can return to view the logs as follows: ./cp-deploy.sh env logs Deploying the infrastructure, preparing OpenShift and installing the Cloud Pak will take a long time, typically between 1 and 5 hours, depending on which Cloud Pak cartridges you configured. For estimated duration of the steps, refer to Timings . If you need to interrupt the automation, use CTRL-C to stop the logging output and then use: ./cp-deploy.sh env kill","title":"Run the Cloud Pak Deployer"},{"location":"10-use-deployer/3-run/azure-aro/#on-failure","text":"If the Cloud Pak Deployer fails, for example because certain infrastructure components are temporarily not available, fix the cause if needed and then just re-run it with the same CONFIG_DIR and STATUS_DIR as well as any extra variables. The provisioning process has been designed to be idempotent and it will not redo actions that have already completed successfully.","title":"On failure"},{"location":"10-use-deployer/3-run/azure-aro/#finishing-up","text":"Once the process has finished, it will output the URLs by which you can access the deployed Cloud Pak. You can also find this information under the cloud-paks directory in the status directory you specified. 
To retrieve the Cloud Pak URL(s): cat $STATUS_DIR/cloud-paks/* This will show the Cloud Pak URLs: Cloud Pak for Data URL for cluster pluto-01 and project cpd (domain name specified was example.com): https://cpd-cpd.apps.pluto-01.example.com The admin password can be retrieved from the vault as follows: List the secrets in the vault: ./cp-deploy.sh vault list This will show something similar to the following: Secret list for group sample: - ibm_cp_entitlement_key - sample-provision-ssh-key - sample-provision-ssh-pub-key - cp4d_admin_zen_sample_sample You can then retrieve the Cloud Pak for Data admin password like this: ./cp-deploy.sh vault get --vault-secret cp4d_admin_zen_sample_sample PLAY [Secrets] ***************************************************************** included: /automation_script/automation-roles/99-generic/vault/vault-get-secret/tasks/get-secret-file.yml for localhost cp4d_admin_zen_sample_sample: gelGKrcgaLatBsnAdMEbmLwGr","title":"Finishing up"},{"location":"10-use-deployer/3-run/azure-aro/#post-install-configuration","text":"You can find examples of a couple of typical changes you may want to do here: Post-run changes .","title":"Post-install configuration"},{"location":"10-use-deployer/3-run/azure-self-managed/","text":"Running the Cloud Pak Deployer on Microsoft Azure - Self-managed \ud83d\udd17 On Azure, OpenShift can be set up in various ways, managed by Red Hat (ARO) or self-managed. The steps below are applicable to self-managed Red Hat OpenShift. There are 5 main steps to run the deployer for Azure: Configure deployer Prepare the cloud environment Obtain entitlement keys and secrets Set environment variables and secrets Run the deployer Topology \ud83d\udd17 A typical setup of the OpenShift cluster on Azure is pictured below: When deploying self-managed OpenShift on Azure, you must configure the domain name by setting openshift.domain_name , which must be a public domain registered with a registrar. 
OpenShift will create a public DNS zone with additional entries to reach the OpenShift API and the applications (Cloud Paks). If you don't have a domain yet, you can buy one from Azure: https://learn.microsoft.com/en-us/azure/app-service/manage-custom-dns-buy-domain . 1. Configure deployer \ud83d\udd17 Deployer configuration and status directories \ud83d\udd17 The deployer reads the configuration from a directory you set in the CONFIG_DIR environment variable. A status directory ( STATUS_DIR environment variable) is used to log activities and store temporary files and scripts. If you use a File Vault (default), the secrets are kept in the $STATUS_DIR/vault directory. You can find OpenShift and Cloud Pak sample configuration (yaml) files here: sample configuration . For Azure self-managed installations, copy one of the ocp-azure-self-managed*.yaml files into the $CONFIG_DIR/config directory. If you also want to install a Cloud Pak, copy one of the cp4*.yaml files. Example: mkdir -p $HOME/cpd-config/config cp sample-configurations/sample-dynamic/config-samples/ocp-azure-self-managed.yaml $HOME/cpd-config/config/ cp sample-configurations/sample-dynamic/config-samples/cp4d-471.yaml $HOME/cpd-config/config/ Set configuration and status directories environment variables \ud83d\udd17 Cloud Pak Deployer uses the status directory to log its activities and also to keep track of its running state. For a given environment you're provisioning or destroying, you should always specify the same status directory to avoid contention between different deploy runs. export CONFIG_DIR=$HOME/cpd-config export STATUS_DIR=$HOME/cpd-status CONFIG_DIR : Directory that holds the configuration, it must have a config subdirectory which contains the configuration yaml files. STATUS_DIR : The directory where the Cloud Pak Deployer keeps all status information and log files. 
Optional: advanced configuration \ud83d\udd17 If the deployer configuration is kept on GitHub, follow the instructions in GitHub configuration . For special configuration with defaults and dynamic variables, refer to Advanced configuration . 2. Prepare the cloud environment \ud83d\udd17 Install the Azure CLI tool \ud83d\udd17 Install the Azure CLI tool and run the commands for your operating system. Verify your quota and permissions in Microsoft Azure \ud83d\udd17 Check the Azure resource quota of the subscription - Azure Red Hat OpenShift requires a minimum of 40 cores to create and run an OpenShift cluster. The self-managed cluster is provisioned using the IPI (installer-provisioned infrastructure) installer. You should have Contributor permissions on the subscription (Azure resources) and the Application administrator role assigned in Azure Active Directory. See details here . Set environment variables for Azure \ud83d\udd17 export AZURE_RESOURCE_GROUP=pluto-01-rg export AZURE_LOCATION=westeurope export AZURE_SP=pluto-01-sp AZURE_RESOURCE_GROUP : The Azure resource group that will hold all resources belonging to the cluster: VMs, load balancers, virtual networks, subnets, etc. Typically you will create a resource group for every OpenShift cluster you provision. AZURE_LOCATION : The Azure location of the resource group, for example eastus or westeurope . AZURE_SP : The Azure service principal that is used to create the resources on Azure. You will get the service principal from the Azure administrator. Store Service Principal credentials \ud83d\udd17 You must run the OpenShift installation using an Azure Service Principal with sufficient permissions. The Azure account administrator will share the SP credentials as a JSON file. If you have subscription-level access you can also create the Service Principal yourself. See steps in Create Azure service principal . 
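Before storing the shared credentials file in the vault, it can be worth confirming it parses as valid JSON. A minimal sketch, assuming python3 is available on the deployer workstation; the example values mirror the credentials file shown on this page:

```shell
# Sketch: write an example credentials file and verify it parses as
# valid JSON with the fields the deployer and az login need
python3 - <<'EOF'
import json
# Example values only; the real file is shared by the Azure administrator
creds = dict(appId='a4c39ae9-f9d1-4038-b4a4-ab011e769111',
             displayName='pluto-01-sp',
             password='xyz-xyz',
             tenant='869930ac-17ee-4dda-bbad-7354c3e7629c8')
with open('/tmp/pluto-01-sp-credentials.json', 'w') as f:
    json.dump(creds, f)
# Re-read and check the fields used by az login --service-principal
parsed = json.load(open('/tmp/pluto-01-sp-credentials.json'))
assert parsed['appId'] and parsed['tenant'] and parsed['password']
print('credentials file OK:', parsed['displayName'])
EOF
```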
Example output in credentials file: { \"appId\": \"a4c39ae9-f9d1-4038-b4a4-ab011e769111\", \"displayName\": \"pluto-01-sp\", \"password\": \"xyz-xyz\", \"tenant\": \"869930ac-17ee-4dda-bbad-7354c3e7629c8\" } Store this file as /tmp/${AZURE_SP}-credentials.json . Login as Service Principal \ud83d\udd17 Log in as the service principal: az login --service-principal -u a4c39ae9-f9d1-4038-b4a4-ab011e769111 -p xyz-xyz --tenant 869930ac-17ee-4dda-bbad-7354c3e7629c8 Create the resource group \ud83d\udd17 First the resource group must be created; this resource group must match the one configured in your OpenShift yaml config file. az group create \\ --name ${AZURE_RESOURCE_GROUP} \\ --location ${AZURE_LOCATION} 3. Acquire entitlement keys and secrets \ud83d\udd17 Acquire IBM Cloud Pak entitlement key \ud83d\udd17 If you want to pull the Cloud Pak images from the entitled registry (i.e. an online install), or if you want to mirror the images to your private registry, you need to download the entitlement key. You can skip this step if you're installing from a private registry and all Cloud Pak images have already been downloaded to the private registry. Navigate to https://myibm.ibm.com/products-services/containerlibrary and log in with your IBMid credentials Select Get Entitlement Key and create a new key (or copy your existing key) Copy the key value Warning As stated for the API key, you can choose to download the entitlement key to a file. However, when we reference the entitlement key, we mean the 80+ character string that is displayed, not the file. Acquire an OpenShift pull secret \ud83d\udd17 To install OpenShift you need an OpenShift pull secret which holds your entitlement. 
Navigate to https://console.redhat.com/openshift/install/pull-secret and download the pull secret into file /tmp/ocp_pullsecret.json Optional: Locate or generate a public SSH Key \ud83d\udd17 To obtain access to the OpenShift nodes post-installation, you will need to specify the public SSH key of your server; typically this is ~/.ssh/id_rsa.pub , where ~ is the home directory of your user. If you don't have an SSH key-pair yet, you can generate one using the steps documented here: https://cloud.ibm.com/docs/ssh-keys?topic=ssh-keys-generating-and-using-ssh-keys-for-remote-host-authentication#generating-ssh-keys-on-linux . Alternatively, the deployer can generate an SSH key pair automatically if the ocp-ssh-pub-key secret is not in the vault. 4. Set environment variables and secrets \ud83d\udd17 Set the Cloud Pak entitlement key \ud83d\udd17 If you want the Cloud Pak images to be pulled from the entitled registry, set the Cloud Pak entitlement key. export CP_ENTITLEMENT_KEY=your_cp_entitlement_key CP_ENTITLEMENT_KEY : This is the entitlement key you acquired as per the instructions above, this is an 80+ character string. You don't need to set this environment variable when you install the Cloud Pak(s) from a private registry. Create the secrets needed for self-managed OpenShift cluster \ud83d\udd17 You need to store the OpenShift pull secret and service principal credentials in the vault so that the deployer has access to them. ./cp-deploy.sh vault set \\ --vault-secret ocp-pullsecret \\ --vault-secret-file /tmp/ocp_pullsecret.json ./cp-deploy.sh vault set \\ --vault-secret ${AZURE_SP}-credentials \\ --vault-secret-file /tmp/${AZURE_SP}-credentials.json Optional: Create secret for public SSH key \ud83d\udd17 If you want to use your SSH key to access nodes in the cluster, set the Vault secret with the public SSH key. ./cp-deploy.sh vault set \\ --vault-secret ocp-ssh-pub-key \\ --vault-secret-file ~/.ssh/id_rsa.pub 5. 
Run the deployer \ud83d\udd17 Optional: validate the configuration \ud83d\udd17 If you only want to validate the configuration, you can run the deployer with the --check-only argument. This will run the first stage to validate variables and vault secrets and then execute the generators. ./cp-deploy.sh env apply --check-only --accept-all-licenses Run the Cloud Pak Deployer \ud83d\udd17 To run the container using a local configuration input directory and a data directory where temporary files and state are kept, use the example below. If you don't specify the status directory, the deployer will automatically create a temporary directory. Please note that the status directory will also hold secrets if you have configured a flat file vault. If you lose the directory, you will not be able to make changes to the configuration and adjust the deployment. It is best to specify a permanent directory that you can reuse later. If you specify an existing directory the current user must be the owner of the directory. Failing to do so may cause the container to fail with insufficient permissions. ./cp-deploy.sh env apply --accept-all-licenses You can also specify extra variables such as env_id to override the names of the objects referenced in the .yaml configuration files as {{ env_id }}-xxxx . For more information about the extra (dynamic) variables, see advanced configuration . The --accept-all-licenses flag is optional and confirms that you accept all licenses of the installed cartridges and instances. Licenses must be accepted either in the configuration files or at the command line. When running the command, the container will start as a daemon and the command will tail-follow the logs. You can press Ctrl-C at any time to interrupt the logging but the container will continue to run in the background. 
You can return to view the logs as follows: ./cp-deploy.sh env logs Deploying the infrastructure, preparing OpenShift and installing the Cloud Pak will take a long time, typically between 1 and 5 hours, depending on which Cloud Pak cartridges you configured. For estimated duration of the steps, refer to Timings . If you need to interrupt the automation, use CTRL-C to stop the logging output and then use: ./cp-deploy.sh env kill On failure \ud83d\udd17 If the Cloud Pak Deployer fails, for example because certain infrastructure components are temporarily not available, fix the cause if needed and then just re-run it with the same CONFIG_DIR and STATUS_DIR as well as any extra variables. The provisioning process has been designed to be idempotent and it will not redo actions that have already completed successfully. Finishing up \ud83d\udd17 Once the process has finished, it will output the URLs by which you can access the deployed Cloud Pak. You can also find this information under the cloud-paks directory in the status directory you specified. 
To retrieve the Cloud Pak URL(s): cat $STATUS_DIR/cloud-paks/* This will show the Cloud Pak URLs: Cloud Pak for Data URL for cluster pluto-01 and project cpd (domain name specified was example.com): https://cpd-cpd.apps.pluto-01.example.com The admin password can be retrieved from the vault as follows: List the secrets in the vault: ./cp-deploy.sh vault list This will show something similar to the following: Secret list for group sample: - ibm_cp_entitlement_key - sample-provision-ssh-key - sample-provision-ssh-pub-key - cp4d_admin_cpd_demo You can then retrieve the Cloud Pak for Data admin password like this: ./cp-deploy.sh vault get --vault-secret cp4d_admin_cpd_demo PLAY [Secrets] ***************************************************************** included: /automation_script/automation-roles/99-generic/vault/vault-get-secret/tasks/get-secret-file.yml for localhost cp4d_admin_cpd_demo: gelGKrcgaLatBsnAdMEbmLwGr Post-install configuration \ud83d\udd17 You can find examples of a couple of typical changes you may want to do here: Post-run changes .","title":"Azure Self-managed"},{"location":"10-use-deployer/3-run/azure-self-managed/#running-the-cloud-pak-deployer-on-microsoft-azure---self-managed","text":"On Azure, OpenShift can be set up in various ways, managed by Red Hat (ARO) or self-managed. The steps below are applicable to self-managed Red Hat OpenShift. There are 5 main steps to run the deployer for Azure: Configure deployer Prepare the cloud environment Obtain entitlement keys and secrets Set environment variables and secrets Run the deployer","title":"Running the Cloud Pak Deployer on Microsoft Azure - Self-managed"},{"location":"10-use-deployer/3-run/azure-self-managed/#topology","text":"A typical setup of the OpenShift cluster on Azure is pictured below: When deploying self-managed OpenShift on Azure, you must configure the domain name by setting openshift.domain_name , which must be a public domain registered with a registrar. 
OpenShift will create a public DNS zone with additional entries to reach the OpenShift API and the applications (Cloud Paks). If you don't have a domain yet, you can buy one from Azure: https://learn.microsoft.com/en-us/azure/app-service/manage-custom-dns-buy-domain .","title":"Topology"},{"location":"10-use-deployer/3-run/azure-self-managed/#1-configure-deployer","text":"","title":"1. Configure deployer"},{"location":"10-use-deployer/3-run/azure-self-managed/#deployer-configuration-and-status-directories","text":"The deployer reads the configuration from a directory you set in the CONFIG_DIR environment variable. A status directory ( STATUS_DIR environment variable) is used to log activities and store temporary files and scripts. If you use a File Vault (default), the secrets are kept in the $STATUS_DIR/vault directory. You can find OpenShift and Cloud Pak sample configuration (yaml) files here: sample configuration . For Azure self-managed installations, copy one of the ocp-azure-self-managed*.yaml files into the $CONFIG_DIR/config directory. If you also want to install a Cloud Pak, copy one of the cp4*.yaml files. Example: mkdir -p $HOME/cpd-config/config cp sample-configurations/sample-dynamic/config-samples/ocp-azure-self-managed.yaml $HOME/cpd-config/config/ cp sample-configurations/sample-dynamic/config-samples/cp4d-471.yaml $HOME/cpd-config/config/","title":"Deployer configuration and status directories"},{"location":"10-use-deployer/3-run/azure-self-managed/#set-configuration-and-status-directories-environment-variables","text":"Cloud Pak Deployer uses the status directory to log its activities and also to keep track of its running state. For a given environment you're provisioning or destroying, you should always specify the same status directory to avoid contention between different deploy runs. 
export CONFIG_DIR=$HOME/cpd-config export STATUS_DIR=$HOME/cpd-status CONFIG_DIR : Directory that holds the configuration, it must have a config subdirectory which contains the configuration yaml files. STATUS_DIR : The directory where the Cloud Pak Deployer keeps all status information and log files.","title":"Set configuration and status directories environment variables"},{"location":"10-use-deployer/3-run/azure-self-managed/#optional-advanced-configuration","text":"If the deployer configuration is kept on GitHub, follow the instructions in GitHub configuration . For special configuration with defaults and dynamic variables, refer to Advanced configuration .","title":"Optional: advanced configuration"},{"location":"10-use-deployer/3-run/azure-self-managed/#2-prepare-the-cloud-environment","text":"","title":"2. Prepare the cloud environment"},{"location":"10-use-deployer/3-run/azure-self-managed/#install-the-azure-cli-tool","text":"Install the Azure CLI tool and run the commands for your operating system.","title":"Install the Azure CLI tool"},{"location":"10-use-deployer/3-run/azure-self-managed/#verify-your-quota-and-permissions-in-microsoft-azure","text":"Check the Azure resource quota of the subscription - Azure Red Hat OpenShift requires a minimum of 40 cores to create and run an OpenShift cluster. The self-managed cluster is provisioned using the IPI (installer-provisioned infrastructure) installer. You should have Contributor permissions on the subscription (Azure resources) and the Application administrator role assigned in Azure Active Directory. 
See details here .","title":"Verify your quota and permissions in Microsoft Azure"},{"location":"10-use-deployer/3-run/azure-self-managed/#set-environment-variables-for-azure","text":"export AZURE_RESOURCE_GROUP=pluto-01-rg export AZURE_LOCATION=westeurope export AZURE_SP=pluto-01-sp AZURE_RESOURCE_GROUP : The Azure resource group that will hold all resources belonging to the cluster: VMs, load balancers, virtual networks, subnets, etc. Typically you will create a resource group for every OpenShift cluster you provision. AZURE_LOCATION : The Azure location of the resource group, for example eastus or westeurope . AZURE_SP : The Azure service principal that is used to create the resources on Azure. You will get the service principal from the Azure administrator.","title":"Set environment variables for Azure"},{"location":"10-use-deployer/3-run/azure-self-managed/#store-service-principal-credentials","text":"You must run the OpenShift installation using an Azure Service Principal with sufficient permissions. The Azure account administrator will share the SP credentials as a JSON file. If you have subscription-level access you can also create the Service Principal yourself. See steps in Create Azure service principal . 
Example output in credentials file: { \"appId\": \"a4c39ae9-f9d1-4038-b4a4-ab011e769111\", \"displayName\": \"pluto-01-sp\", \"password\": \"xyz-xyz\", \"tenant\": \"869930ac-17ee-4dda-bbad-7354c3e7629c8\" } Store this file as /tmp/${AZURE_SP}-credentials.json .","title":"Store Service Principal credentials"},{"location":"10-use-deployer/3-run/azure-self-managed/#login-as-service-principal","text":"Log in as the service principal: az login --service-principal -u a4c39ae9-f9d1-4038-b4a4-ab011e769111 -p xyz-xyz --tenant 869930ac-17ee-4dda-bbad-7354c3e7629c8","title":"Login as Service Principal"},{"location":"10-use-deployer/3-run/azure-self-managed/#create-the-resource-group","text":"First the resource group must be created; this resource group must match the one configured in your OpenShift yaml config file. az group create \\ --name ${AZURE_RESOURCE_GROUP} \\ --location ${AZURE_LOCATION}","title":"Create the resource group"},{"location":"10-use-deployer/3-run/azure-self-managed/#3-acquire-entitlement-keys-and-secrets","text":"","title":"3. Acquire entitlement keys and secrets"},{"location":"10-use-deployer/3-run/azure-self-managed/#acquire-ibm-cloud-pak-entitlement-key","text":"If you want to pull the Cloud Pak images from the entitled registry (i.e. an online install), or if you want to mirror the images to your private registry, you need to download the entitlement key. You can skip this step if you're installing from a private registry and all Cloud Pak images have already been downloaded to the private registry. Navigate to https://myibm.ibm.com/products-services/containerlibrary and log in with your IBMid credentials Select Get Entitlement Key and create a new key (or copy your existing key) Copy the key value Warning As stated for the API key, you can choose to download the entitlement key to a file. 
However, when we reference the entitlement key, we mean the 80+ character string that is displayed, not the file.","title":"Acquire IBM Cloud Pak entitlement key"},{"location":"10-use-deployer/3-run/azure-self-managed/#acquire-an-openshift-pull-secret","text":"To install OpenShift you need an OpenShift pull secret which holds your entitlement. Navigate to https://console.redhat.com/openshift/install/pull-secret and download the pull secret into file /tmp/ocp_pullsecret.json","title":"Acquire an OpenShift pull secret"},{"location":"10-use-deployer/3-run/azure-self-managed/#optional-locate-or-generate-a-public-ssh-key","text":"To obtain access to the OpenShift nodes post-installation, you will need to specify the public SSH key of your server; typically this is ~/.ssh/id_rsa.pub , where ~ is the home directory of your user. If you don't have an SSH key-pair yet, you can generate one using the steps documented here: https://cloud.ibm.com/docs/ssh-keys?topic=ssh-keys-generating-and-using-ssh-keys-for-remote-host-authentication#generating-ssh-keys-on-linux . Alternatively, the deployer can generate an SSH key pair automatically if the ocp-ssh-pub-key secret is not in the vault.","title":"Optional: Locate or generate a public SSH Key"},{"location":"10-use-deployer/3-run/azure-self-managed/#4-set-environment-variables-and-secrets","text":"","title":"4. Set environment variables and secrets"},{"location":"10-use-deployer/3-run/azure-self-managed/#set-the-cloud-pak-entitlement-key","text":"If you want the Cloud Pak images to be pulled from the entitled registry, set the Cloud Pak entitlement key. export CP_ENTITLEMENT_KEY=your_cp_entitlement_key CP_ENTITLEMENT_KEY : This is the entitlement key you acquired as per the instructions above, this is an 80+ character string. 
You don't need to set this environment variable when you install the Cloud Pak(s) from a private registry.","title":"Set the Cloud Pak entitlement key"},{"location":"10-use-deployer/3-run/azure-self-managed/#create-the-secrets-needed-for-self-managed-openshift-cluster","text":"You need to store the OpenShift pull secret and service principal credentials in the vault so that the deployer has access to them. ./cp-deploy.sh vault set \\ --vault-secret ocp-pullsecret \\ --vault-secret-file /tmp/ocp_pullsecret.json ./cp-deploy.sh vault set \\ --vault-secret ${AZURE_SP}-credentials \\ --vault-secret-file /tmp/${AZURE_SP}-credentials.json","title":"Create the secrets needed for self-managed OpenShift cluster"},{"location":"10-use-deployer/3-run/azure-self-managed/#optional-create-secret-for-public-ssh-key","text":"If you want to use your SSH key to access nodes in the cluster, set the Vault secret with the public SSH key. ./cp-deploy.sh vault set \\ --vault-secret ocp-ssh-pub-key \\ --vault-secret-file ~/.ssh/id_rsa.pub","title":"Optional: Create secret for public SSH key"},{"location":"10-use-deployer/3-run/azure-self-managed/#5-run-the-deployer","text":"","title":"5. Run the deployer"},{"location":"10-use-deployer/3-run/azure-self-managed/#optional-validate-the-configuration","text":"If you only want to validate the configuration, you can run the deployer with the --check-only argument. This will run the first stage to validate variables and vault secrets and then execute the generators. ./cp-deploy.sh env apply --check-only --accept-all-licenses","title":"Optional: validate the configuration"},{"location":"10-use-deployer/3-run/azure-self-managed/#run-the-cloud-pak-deployer","text":"To run the container using a local configuration input directory and a data directory where temporary files and state are kept, use the example below. If you don't specify the status directory, the deployer will automatically create a temporary directory. 
Please note that the status directory will also hold secrets if you have configured a flat file vault. If you lose the directory, you will not be able to make changes to the configuration and adjust the deployment. It is best to specify a permanent directory that you can reuse later. If you specify an existing directory the current user must be the owner of the directory. Failing to do so may cause the container to fail with insufficient permissions. ./cp-deploy.sh env apply --accept-all-licenses You can also specify extra variables such as env_id to override the names of the objects referenced in the .yaml configuration files as {{ env_id }}-xxxx . For more information about the extra (dynamic) variables, see advanced configuration . The --accept-all-licenses flag is optional and confirms that you accept all licenses of the installed cartridges and instances. Licenses must be accepted either in the configuration files or at the command line. When running the command, the container will start as a daemon and the command will tail-follow the logs. You can press Ctrl-C at any time to interrupt the logging but the container will continue to run in the background. You can return to view the logs as follows: ./cp-deploy.sh env logs Deploying the infrastructure, preparing OpenShift and installing the Cloud Pak will take a long time, typically between 1 and 5 hours, depending on which Cloud Pak cartridges you configured. For estimated duration of the steps, refer to Timings . If you need to interrupt the automation, use CTRL-C to stop the logging output and then use: ./cp-deploy.sh env kill","title":"Run the Cloud Pak Deployer"},{"location":"10-use-deployer/3-run/azure-self-managed/#on-failure","text":"If the Cloud Pak Deployer fails, for example because certain infrastructure components are temporarily not available, fix the cause if needed and then just re-run it with the same CONFIG_DIR and STATUS_DIR as well as any extra variables. 
The provisioning process has been designed to be idempotent and it will not redo actions that have already completed successfully.","title":"On failure"},{"location":"10-use-deployer/3-run/azure-self-managed/#finishing-up","text":"Once the process has finished, it will output the URLs by which you can access the deployed Cloud Pak. You can also find this information under the cloud-paks directory in the status directory you specified. To retrieve the Cloud Pak URL(s): cat $STATUS_DIR/cloud-paks/* This will show the Cloud Pak URLs: Cloud Pak for Data URL for cluster pluto-01 and project cpd (domain name specified was example.com): https://cpd-cpd.apps.pluto-01.example.com The admin password can be retrieved from the vault as follows: List the secrets in the vault: ./cp-deploy.sh vault list This will show something similar to the following: Secret list for group sample: - ibm_cp_entitlement_key - sample-provision-ssh-key - sample-provision-ssh-pub-key - cp4d_admin_cpd_demo You can then retrieve the Cloud Pak for Data admin password like this: ./cp-deploy.sh vault get --vault-secret cp4d_admin_cpd_demo PLAY [Secrets] ***************************************************************** included: /automation_script/automation-roles/99-generic/vault/vault-get-secret/tasks/get-secret-file.yml for localhost cp4d_admin_cpd_demo: gelGKrcgaLatBsnAdMEbmLwGr","title":"Finishing up"},{"location":"10-use-deployer/3-run/azure-self-managed/#post-install-configuration","text":"You can find examples of a couple of typical changes you may want to do here: Post-run changes .","title":"Post-install configuration"},{"location":"10-use-deployer/3-run/azure-service-principal/","text":"Create an Azure Service Principal \ud83d\udd17 Login to Azure \ud83d\udd17 Log in to Microsoft Azure using your subscription-level credentials. 
az login If you have a subscription with multiple tenants, use: az login --tenant <tenant_id> Example: az login --tenant 869930ac-17ee-4dda-bbad-7354c3e7629c8 To sign in, use a web browser to open the page https://microsoft.com/devicelogin and enter the code AXWFQQ5FJ to authenticate. [ { \"cloudName\" : \"AzureCloud\" , \"homeTenantId\" : \"869930ac-17ee-4dda-bbad-7354c3e7629c8\" , \"id\" : \"72281667-6d54-46cb-8423-792d7bcb1234\" , \"isDefault\" : true, \"managedByTenants\" : [] , \"name\" : \"Azure Account\" , \"state\" : \"Enabled\" , \"tenantId\" : \"869930ac-17ee-4dda-bbad-7354c3e7629c8\" , \"user\" : { \"name\" : \"your_user@domain.com\" , \"type\" : \"user\" } } ] Set subscription (optional) \ud83d\udd17 If you have multiple Azure subscriptions, specify the relevant subscription ID: az account set --subscription <subscription_id> You can list the subscriptions with the command: az account subscription list [ { \"authorizationSource\": \"RoleBased\", \"displayName\": \"IBM xxx\", \"id\": \"/subscriptions/dcexxx\", \"state\": \"Enabled\", \"subscriptionId\": \"dcexxx\", \"subscriptionPolicies\": { \"locationPlacementId\": \"Public_2014-09-01\", \"quotaId\": \"EnterpriseAgreement_2014-09-01\", \"spendingLimit\": \"Off\" } } ] Create service principal \ud83d\udd17 Create the service principal that will do the installation and assign it the Contributor role. Set environment variables for Azure \ud83d\udd17 export AZURE_SUBSCRIPTION_ID=72281667-6d54-46cb-8423-792d7bcb1234 export AZURE_LOCATION=westeurope export AZURE_SP=pluto-01-sp AZURE_SUBSCRIPTION_ID : The ID of your Azure subscription. Once logged in, you can retrieve this using the az account show command. AZURE_LOCATION : The Azure location of the resource group, for example eastus or westeurope . AZURE_SP : The Azure service principal that is used to create the resources on Azure. 
Create the service principal \ud83d\udd17 az ad sp create-for-rbac \\ --role Contributor \\ --name ${AZURE_SP} \\ --scopes /subscriptions/${AZURE_SUBSCRIPTION_ID} | tee /tmp/${AZURE_SP}-credentials.json Example output: { \"appId\": \"a4c39ae9-f9d1-4038-b4a4-ab011e769111\", \"displayName\": \"pluto-01-sp\", \"password\": \"xyz-xyz\", \"tenant\": \"869930ac-17ee-4dda-bbad-7354c3e7629c8\" } Set permissions for service principal \ud83d\udd17 Finally, set the permissions of the service principal to allow creation of the OpenShift cluster. az role assignment create \\ --role \"User Access Administrator\" \\ --assignee-object-id $(az ad sp list --display-name ${AZURE_SP} --query '[].id' -o tsv)","title":"Create an Azure Service Principal"},{"location":"10-use-deployer/3-run/azure-service-principal/#create-an-azure-service-principal","text":"","title":"Create an Azure Service Principal"},{"location":"10-use-deployer/3-run/azure-service-principal/#login-to-azure","text":"Log in to Microsoft Azure using your subscription-level credentials. az login If you have a subscription with multiple tenants, use: az login --tenant <tenant_id> Example: az login --tenant 869930ac-17ee-4dda-bbad-7354c3e7629c8 To sign in, use a web browser to open the page https://microsoft.com/devicelogin and enter the code AXWFQQ5FJ to authenticate. 
[ { \"cloudName\" : \"AzureCloud\" , \"homeTenantId\" : \"869930ac-17ee-4dda-bbad-7354c3e7629c8\" , \"id\" : \"72281667-6d54-46cb-8423-792d7bcb1234\" , \"isDefault\" : true, \"managedByTenants\" : [] , \"name\" : \"Azure Account\" , \"state\" : \"Enabled\" , \"tenantId\" : \"869930ac-17ee-4dda-bbad-7354c3e7629c8\" , \"user\" : { \"name\" : \"your_user@domain.com\" , \"type\" : \"user\" } } ]","title":"Login to Azure"},{"location":"10-use-deployer/3-run/azure-service-principal/#set-subscription-optional","text":"If you have multiple Azure subscriptions, specify the relevant subscription ID: az account set --subscription You can list the subscriptions with the following command: az account subscription list [ { \"authorizationSource\": \"RoleBased\", \"displayName\": \"IBM xxx\", \"id\": \"/subscriptions/dcexxx\", \"state\": \"Enabled\", \"subscriptionId\": \"dcexxx\", \"subscriptionPolicies\": { \"locationPlacementId\": \"Public_2014-09-01\", \"quotaId\": \"EnterpriseAgreement_2014-09-01\", \"spendingLimit\": \"Off\" } } ]","title":"Set subscription (optional)"},{"location":"10-use-deployer/3-run/azure-service-principal/#create-service-principal","text":"Create the service principal that will do the installation and assign the Contributor role","title":"Create service principal"},{"location":"10-use-deployer/3-run/azure-service-principal/#set-environment-variables-for-azure","text":"export AZURE_SUBSCRIPTION_ID = 72281667 -6d54-46cb-8423-792d7bcb1234 export AZURE_LOCATION = westeurope export AZURE_SP = pluto-01-sp AZURE_SUBSCRIPTION_ID : The ID of your Azure subscription. Once logged in, you can retrieve this using the az account show command. AZURE_LOCATION : The Azure location of the resource group, for example eastus or westeurope . 
AZURE_SP : Azure service principal that is used to create the resources on Azure.","title":"Set environment variables for Azure"},{"location":"10-use-deployer/3-run/azure-service-principal/#create-the-service-principal","text":"az ad sp create-for-rbac \\ --role Contributor \\ --name ${ AZURE_SP } \\ --scopes /subscriptions/ ${ AZURE_SUBSCRIPTION_ID } | tee /tmp/ ${ AZURE_SP } -credentials.json Example output: { \"appId\": \"a4c39ae9-f9d1-4038-b4a4-ab011e769111\", \"displayName\": \"pluto-01-sp\", \"password\": \"xyz-xyz\", \"tenant\": \"869930ac-17ee-4dda-bbad-7354c3e7629c8\" }","title":"Create the service principal"},{"location":"10-use-deployer/3-run/azure-service-principal/#set-permissions-for-service-principal","text":"Finally, set the permissions of the service principal to allow creation of the OpenShift cluster az role assignment create \\ --role \"User Access Administrator\" \\ --assignee-object-id $( az ad sp list --display-name = ${ AZURE_SP } --query = '[].id' -o tsv )","title":"Set permissions for service principal"},{"location":"10-use-deployer/3-run/existing-openshift/","text":"Running the Cloud Pak Deployer on an existing OpenShift cluster \ud83d\udd17 When running the Cloud Pak Deployer on an existing OpenShift cluster, the following is assumed: The OpenShift cluster is up and running with sufficient compute nodes The appropriate storage class(es) have been pre-created You have cluster administrator permissions to OpenShift Info You can also choose to run Cloud Pak Deployer as a job on the OpenShift cluster. This removes the dependency on a separate server or workstation to run the deployer. Please note that you may need unrestricted OpenShift entitlements for this. To run the deployer on OpenShift via the OpenShift console, see Run on OpenShift using console With the Existing OpenShift type of deployment you can install and configure the Cloud Pak(s) both on connected and disconnected (air-gapped) cluster. 
When using the deployer for a disconnected cluster, make sure you specify --air-gapped for the cp-deploy.sh command. There are 5 main steps to run the deployer for existing OpenShift: Configure deployer Prepare the cloud environment Obtain entitlement keys and secrets Set environment variables and secrets Run the deployer 1. Configure deployer \ud83d\udd17 Deployer configuration and status directories \ud83d\udd17 Deployer reads the configuration from a directory you set in the CONFIG_DIR environment variable. A status directory ( STATUS_DIR environment variable) is used to log activities and store temporary files and scripts. If you use a File Vault (default), the secrets are kept in the $STATUS_DIR/vault directory. You can find OpenShift and Cloud Pak sample configuration (yaml) files here: sample configuration . For existing OpenShift installations, copy one of the ocp-existing-ocp-*.yaml files into the $CONFIG_DIR/config directory. If you also want to install a Cloud Pak, copy one of the cp4*.yaml files. Example: mkdir -p $HOME/cpd-config/config cp sample-configurations/sample-dynamic/config-samples/ocp-existing-ocp-auto.yaml $HOME/cpd-config/config/ cp sample-configurations/sample-dynamic/config-samples/cp4d-471.yaml $HOME/cpd-config/config/ Set configuration and status directories environment variables \ud83d\udd17 Cloud Pak Deployer uses the status directory to log its activities and also to keep track of its running state. For a given environment you're provisioning or destroying, you should always specify the same status directory to avoid contention between different deploy runs. export CONFIG_DIR=$HOME/cpd-config export STATUS_DIR=$HOME/cpd-status CONFIG_DIR : Directory that holds the configuration; it must have a config subdirectory which contains the configuration yaml files. STATUS_DIR : The directory where the Cloud Pak Deployer keeps all status information and log files. 
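The two directories above can be prepared and sanity-checked before the first run. A minimal sketch (the paths are the examples from this page; the check itself is illustrative, not part of the deployer):

```shell
# Illustrative sketch: create the configuration and status directories used
# on this page and verify the config subdirectory the deployer expects.
CONFIG_DIR=$HOME/cpd-config
STATUS_DIR=$HOME/cpd-status
mkdir -p "$CONFIG_DIR/config" "$STATUS_DIR"
if [ -d "$CONFIG_DIR/config" ] && [ -d "$STATUS_DIR" ]; then
  echo "directories OK"
else
  echo "directory layout incomplete" >&2
fi
```

Note that the yaml files must go into the config subdirectory, not directly into $CONFIG_DIR.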
Optional: advanced configuration \ud83d\udd17 If the deployer configuration is kept on GitHub, follow the instructions in GitHub configuration . For special configuration with defaults and dynamic variables, refer to Advanced configuration . 2. Prepare the cloud environment \ud83d\udd17 No steps should be required to prepare the infrastructure; this type of installation expects the OpenShift cluster to be up and running with the supported storage classes. 3. Acquire entitlement keys and secrets \ud83d\udd17 If you want to pull the Cloud Pak images from the entitled registry (i.e. an online install), or if you want to mirror the images to your private registry, you need to download the entitlement key. You can skip this step if you're installing from a private registry and all Cloud Pak images have already been downloaded to the private registry. Navigate to https://myibm.ibm.com/products-services/containerlibrary and log in with your IBMid credentials Select Get Entitlement Key and create a new key (or copy your existing key) Copy the key value Warning As stated for the API key, you can choose to download the entitlement key to a file. However, when we reference the entitlement key, we mean the 80+ character string that is displayed, not the file. 4. Set environment variables and secrets \ud83d\udd17 Set the Cloud Pak entitlement key \ud83d\udd17 If you want the Cloud Pak images to be pulled from the entitled registry, set the Cloud Pak entitlement key. export CP_ENTITLEMENT_KEY=your_cp_entitlement_key CP_ENTITLEMENT_KEY : This is the entitlement key you acquired as per the instructions above; it is an 80+ character string. You don't need to set this environment variable when you install the Cloud Pak(s) from a private registry. Store the OpenShift login command or configuration \ud83d\udd17 Because you will be deploying the Cloud Pak on an existing OpenShift cluster, the deployer needs to be able to access OpenShift. 
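The warning above is easy to trip over: the entitlement key is the 80+ character string itself, not the file you may have downloaded it to. A small illustrative check (not part of the deployer) that rejects the two most common mistakes:

```shell
# Illustrative sketch: reject values that look like a file path, or that are
# too short to be the 80+ character entitlement key string described above.
check_entitlement_key() {
  key=$1
  if [ -f "$key" ]; then
    echo "looks like a file path, not the key itself"
    return 1
  fi
  if [ "${#key}" -lt 80 ]; then
    echo "too short to be an entitlement key"
    return 1
  fi
  echo "OK"
}

# A dummy 80-character value, for demonstration only:
check_entitlement_key "$(printf 'x%.0s' $(seq 1 80))"   # prints: OK
```

In practice you would run `check_entitlement_key "$CP_ENTITLEMENT_KEY"` after exporting the variable.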
There are three methods for passing the login credentials of your OpenShift cluster(s) to the deployer process: Generic oc login command (preferred) Specific oc login command(s) kubeconfig file Regardless of which authentication option you choose, the deployer will retrieve the secret from the vault when it requires access to OpenShift. If the secret cannot be found, is invalid, or the OpenShift login token has expired, the deployer will fail and you will need to update the secret of your choice. For most OpenShift installations, you can retrieve the oc login command with a temporary token from the OpenShift console. Go to the OpenShift console and click on your user at the top right of the page to get the login command. Typically this command looks something like this: oc login --server=https://api.pluto-01.coc.ibm.com:6443 --token=sha256~NQUUMroU4B6q_GTBAMS18Y3EIba1KHnJ08L2rBHvTHA Before passing the oc login command or the kubeconfig file, make sure you can log in to your cluster using the command or the config file. If the cluster's API server has a self-signed certificate, make sure you specify the --insecure-skip-tls-verify flag for the oc login command. Example: oc login api.pluto-01.coc.ibm.com:6443 -u kubeadmin -p BmxQ5-KjBFx-FgztG-gpTF3 --insecure-skip-tls-verify Output: Login successful. You have access to 65 projects, the list has been suppressed. You can list all projects with 'oc projects' Using project \"default\". Option 1 - Generic oc login command \ud83d\udd17 This is the most straightforward option if you only have one OpenShift cluster in your configuration. Set the environment variable for the oc login command export CPD_OC_LOGIN=\"oc login api.pluto-01.coc.ibm.com:6443 -u kubeadmin -p BmxQ5-KjBFx-FgztG-gpTF3 --insecure-skip-tls-verify\" Info Make sure you put the oc login command between quotes (single or double) to make sure the full command is stored. 
When the deployer is run, it automatically sets the oc-login vault secret to the specified oc login command. When logging in to OpenShift, the deployer first checks if there is a specific oc login secret for the cluster in question (see option 2). If there is not, it will default to the generic oc-login secret (option 1). Option 2 - Specific oc login command(s) \ud83d\udd17 Use this option if you have multiple OpenShift clusters configured in the deployer configuration. Store the login command in secret -oc-login ./cp-deploy.sh vault set \\ -vs pluto-01-oc-login \\ -vsv \"oc login api.pluto-01.coc.ibm.com:6443 -u kubeadmin -p BmxQ5-KjBFx-FgztG-gpTF3 --insecure-skip-tls-verify\" Info Make sure you put the oc login command between quotes (single or double) to make sure the full command is stored. Option 3 - Use a kubeconfig file \ud83d\udd17 If you already have a \"kubeconfig\" file that holds the credentials of your cluster, you can use this, otherwise: - Log in to OpenShift as a cluster administrator using your method of choice - Locate the Kubernetes config file. If you have logged in with the OpenShift client, this is typically ~/.kube/config . If you did not just log in to the cluster, the current context of the kubeconfig file may not point to your cluster. The deployer will check that the server the current context points to matches the cluster_name and domain_name of the configured openshift object. To check the current context, run the following command: oc config current-context Now, store the Kubernetes config file as a vault secret. 
./cp-deploy.sh vault set \\ --vault-secret kubeconfig \\ --vault-secret-file ~/.kube/config If the deployer manages multiple OpenShift clusters, you can specify a kubeconfig file for each of the clusters by prefixing the kubeconfig with the name of the openshift object, for example: ./cp-deploy.sh vault set \\ --vault-secret pluto-01-kubeconfig \\ --vault-secret-file /data/pluto-01/kubeconfig ./cp-deploy.sh vault set \\ --vault-secret venus-02-kubeconfig \\ --vault-secret-file /data/venus-02/kubeconfig When connecting to the OpenShift cluster, a cluster-specific kubeconfig vault secret will take precedence over the generic kubeconfig secret. 5. Run the deployer \ud83d\udd17 Optional: validate the configuration \ud83d\udd17 If you only want to validate the configuration, you can run the deployer with the --check-only argument. This will run the first stage to validate variables and vault secrets and then execute the generators. ./cp-deploy.sh env apply --check-only --accept-all-licenses Run the Cloud Pak Deployer \ud83d\udd17 To run the container using a local configuration input directory and a data directory where temporary files and state are kept, use the example below. If you don't specify the status directory, the deployer will automatically create a temporary directory. Please note that the status directory will also hold secrets if you have configured a flat file vault. If you lose the directory, you will not be able to make changes to the configuration and adjust the deployment. It is best to specify a permanent directory that you can reuse later. If you specify an existing directory, the current user must be the owner of the directory. Failing to do so may cause the container to fail with insufficient permissions. ./cp-deploy.sh env apply --accept-all-licenses You can also specify extra variables such as env_id to override the names of the objects referenced in the .yaml configuration files as {{ env_id }}-xxxx . 
For more information about the extra (dynamic) variables, see advanced configuration . The --accept-all-licenses flag is optional and confirms that you accept all licenses of the installed cartridges and instances. Licenses must be either accepted in the configuration files or at the command line. When running the command, the container will start as a daemon and the command will tail-follow the logs. You can press Ctrl-C at any time to interrupt the logging but the container will continue to run in the background. You can return to view the logs as follows: ./cp-deploy.sh env logs Deploying the infrastructure, preparing OpenShift and installing the Cloud Pak will take a long time, typically between 1 and 5 hours, depending on which Cloud Pak cartridges you configured. For the estimated duration of the steps, refer to Timings . If you need to interrupt the automation, use CTRL-C to stop the logging output and then use: ./cp-deploy.sh env kill On failure \ud83d\udd17 If the Cloud Pak Deployer fails, for example because certain infrastructure components are temporarily not available, fix the cause if needed and then just re-run it with the same CONFIG_DIR and STATUS_DIR, as well as the same extra variables. The provisioning process has been designed to be idempotent and it will not redo actions that have already completed successfully. Finishing up \ud83d\udd17 Once the process has finished, it will output the URLs by which you can access the deployed Cloud Pak. You can also find this information under the cloud-paks directory in the status directory you specified. 
To retrieve the Cloud Pak URL(s): cat $STATUS_DIR/cloud-paks/* This will show the Cloud Pak URLs: Cloud Pak for Data URL for cluster pluto-01 and project cpd (domain name specified was example.com): https://cpd-cpd.apps.pluto-01.example.com The admin password can be retrieved from the vault as follows: List the secrets in the vault: ./cp-deploy.sh vault list This will show something similar to the following: Secret list for group sample: - ibm_cp_entitlement_key - oc-login - cp4d_admin_cpd_demo You can then retrieve the Cloud Pak for Data admin password like this: ./cp-deploy.sh vault get --vault-secret cp4d_admin_cpd_sample PLAY [Secrets] ***************************************************************** included: /cloud-pak-deployer/automation-roles/99-generic/vault/vault-get-secret/tasks/get-secret-file.yml for localhost cp4d_admin_zen_sample_sample: gelGKrcgaLatBsnAdMEbmLwGr Post-install configuration \ud83d\udd17 You can find examples of a couple of typical changes you may want to do here: Post-run changes .","title":"Existing OpenShift"},{"location":"10-use-deployer/3-run/existing-openshift/#running-the-cloud-pak-deployer-on-an-existing-openshift-cluster","text":"When running the Cloud Pak Deployer on an existing OpenShift cluster, the following is assumed: The OpenShift cluster is up and running with sufficient compute nodes The appropriate storage class(es) have been pre-created You have cluster administrator permissions to OpenShift Info You can also choose to run Cloud Pak Deployer as a job on the OpenShift cluster. This removes the dependency on a separate server or workstation to run the deployer. Please note that you may need unrestricted OpenShift entitlements for this. To run the deployer on OpenShift via the OpenShift console, see Run on OpenShift using console With the Existing OpenShift type of deployment you can install and configure the Cloud Pak(s) both on connected and disconnected (air-gapped) cluster. 
When using the deployer for a disconnected cluster, make sure you specify --air-gapped for the cp-deploy.sh command. There are 5 main steps to run the deployer for existing OpenShift: Configure deployer Prepare the cloud environment Obtain entitlement keys and secrets Set environment variables and secrets Run the deployer","title":"Running the Cloud Pak Deployer on an existing OpenShift cluster"},{"location":"10-use-deployer/3-run/existing-openshift/#1-configure-deployer","text":"","title":"1. Configure deployer"},{"location":"10-use-deployer/3-run/existing-openshift/#deployer-configuration-and-status-directories","text":"Deployer reads the configuration from a directory you set in the CONFIG_DIR environment variable. A status directory ( STATUS_DIR environment variable) is used to log activities, store temporary files, scripts. If you use a File Vault (default), the secrets are kept in the $STATUS_DIR/vault directory. You can find OpenShift and Cloud Pak sample configuration (yaml) files here: sample configuration . For existing OpenShift installations, copy one of ocp-existing-ocp-*.yaml files into the $CONFIG_DIR/config directory. If you also want to install a Cloud Pak, copy one of the cp4*.yaml files. Example: mkdir -p $HOME/cpd-config/config cp sample-configurations/sample-dynamic/config-samples/ocp-existing-ocp-auto.yaml $HOME/cpd-config/config/ cp sample-configurations/sample-dynamic/config-samples/cp4d-471.yaml $HOME/cpd-config/config/","title":"Deployer configuration and status directories"},{"location":"10-use-deployer/3-run/existing-openshift/#set-configuration-and-status-directories-environment-variables","text":"Cloud Pak Deployer uses the status directory to log its activities and also to keep track of its running state. For a given environment you're provisioning or destroying, you should always specify the same status directory to avoid contention between different deploy runs. 
export CONFIG_DIR=$HOME/cpd-config export STATUS_DIR=$HOME/cpd-status CONFIG_DIR : Directory that holds the configuration; it must have a config subdirectory which contains the configuration yaml files. STATUS_DIR : The directory where the Cloud Pak Deployer keeps all status information and log files.","title":"Set configuration and status directories environment variables"},{"location":"10-use-deployer/3-run/existing-openshift/#optional-advanced-configuration","text":"If the deployer configuration is kept on GitHub, follow the instructions in GitHub configuration . For special configuration with defaults and dynamic variables, refer to Advanced configuration .","title":"Optional: advanced configuration"},{"location":"10-use-deployer/3-run/existing-openshift/#2-prepare-the-cloud-environment","text":"No steps should be required to prepare the infrastructure; this type of installation expects the OpenShift cluster to be up and running with the supported storage classes.","title":"2. Prepare the cloud environment"},{"location":"10-use-deployer/3-run/existing-openshift/#3-acquire-entitlement-keys-and-secrets","text":"If you want to pull the Cloud Pak images from the entitled registry (i.e. an online install), or if you want to mirror the images to your private registry, you need to download the entitlement key. You can skip this step if you're installing from a private registry and all Cloud Pak images have already been downloaded to the private registry. Navigate to https://myibm.ibm.com/products-services/containerlibrary and log in with your IBMid credentials Select Get Entitlement Key and create a new key (or copy your existing key) Copy the key value Warning As stated for the API key, you can choose to download the entitlement key to a file. However, when we reference the entitlement key, we mean the 80+ character string that is displayed, not the file.","title":"3. 
Acquire entitlement keys and secrets"},{"location":"10-use-deployer/3-run/existing-openshift/#4-set-environment-variables-and-secrets","text":"","title":"4. Set environment variables and secrets"},{"location":"10-use-deployer/3-run/existing-openshift/#set-the-cloud-pak-entitlement-key","text":"If you want the Cloud Pak images to be pulled from the entitled registry, set the Cloud Pak entitlement key. export CP_ENTITLEMENT_KEY=your_cp_entitlement_key CP_ENTITLEMENT_KEY : This is the entitlement key you acquired as per the instructions above; it is an 80+ character string. You don't need to set this environment variable when you install the Cloud Pak(s) from a private registry","title":"Set the Cloud Pak entitlement key"},{"location":"10-use-deployer/3-run/existing-openshift/#store-the-openshift-login-command-or-configuration","text":"Because you will be deploying the Cloud Pak on an existing OpenShift cluster, the deployer needs to be able to access OpenShift. There are three methods for passing the login credentials of your OpenShift cluster(s) to the deployer process: Generic oc login command (preferred) Specific oc login command(s) kubeconfig file Regardless of which authentication option you choose, the deployer will retrieve the secret from the vault when it requires access to OpenShift. If the secret cannot be found, is invalid, or the OpenShift login token has expired, the deployer will fail and you will need to update the secret of your choice. For most OpenShift installations, you can retrieve the oc login command with a temporary token from the OpenShift console. Go to the OpenShift console and click on your user at the top right of the page to get the login command. 
Typically this command looks something like this: oc login --server=https://api.pluto-01.coc.ibm.com:6443 --token=sha256~NQUUMroU4B6q_GTBAMS18Y3EIba1KHnJ08L2rBHvTHA Before passing the oc login command or the kubeconfig file, make sure you can log in to your cluster using the command or the config file. If the cluster's API server has a self-signed certificate, make sure you specify the --insecure-skip-tls-verify flag for the oc login command. Example: oc login api.pluto-01.coc.ibm.com:6443 -u kubeadmin -p BmxQ5-KjBFx-FgztG-gpTF3 --insecure-skip-tls-verify Output: Login successful. You have access to 65 projects, the list has been suppressed. You can list all projects with 'oc projects' Using project \"default\".","title":"Store the OpenShift login command or configuration"},{"location":"10-use-deployer/3-run/existing-openshift/#option-1---generic-oc-login-command","text":"This is the most straightforward option if you only have one OpenShift cluster in your configuration. Set the environment variable for the oc login command export CPD_OC_LOGIN=\"oc login api.pluto-01.coc.ibm.com:6443 -u kubeadmin -p BmxQ5-KjBFx-FgztG-gpTF3 --insecure-skip-tls-verify\" Info Make sure you put the oc login command between quotes (single or double) to make sure the full command is stored. When the deployer is run, it automatically sets the oc-login vault secret to the specified oc login command. When logging in to OpenShift, the deployer first checks if there is a specific oc login secret for the cluster in question (see option 2). If there is not, it will default to the generic oc-login secret (option 1).","title":"Option 1 - Generic oc login command"},{"location":"10-use-deployer/3-run/existing-openshift/#option-2---specific-oc-login-commands","text":"Use this option if you have multiple OpenShift clusters configured in the deployer configuration. 
Store the login command in secret -oc-login ./cp-deploy.sh vault set \\ -vs pluto-01-oc-login \\ -vsv \"oc login api.pluto-01.coc.ibm.com:6443 -u kubeadmin -p BmxQ5-KjBFx-FgztG-gpTF3 --insecure-skip-tls-verify\" Info Make sure you put the oc login command between quotes (single or double) to make sure the full command is stored.","title":"Option 2 - Specific oc login command(s)"},{"location":"10-use-deployer/3-run/existing-openshift/#option-3---use-a-kubeconfig-file","text":"If you already have a \"kubeconfig\" file that holds the credentials of your cluster, you can use this, otherwise: - Log in to OpenShift as a cluster administrator using your method of choice - Locate the Kubernetes config file. If you have logged in with the OpenShift client, this is typically ~/.kube/config If you did not just login to the cluster, the current context of the kubeconfig file may not point to your cluster. The deployer will check that the server the current context points to matches the cluster_name and domain_name of the configured openshift object. To check the current context, run the following command: oc config current-context Now, store the Kubernetes config file as a vault secret. ./cp-deploy.sh vault set \\ --vault-secret kubeconfig \\ --vault-secret-file ~/.kube/config If the deployer manages multiple OpenShift clusters, you can specify a kubeconfig file for each of the clusters by prefixing the kubeconfig with the name of the openshift object, for example: ./cp-deploy.sh vault set \\ --vault-secret pluto-01-kubeconfig \\ --vault-secret-file /data/pluto-01/kubeconfig ./cp-deploy.sh vault set \\ --vault-secret venus-02-kubeconfig \\ --vault-secret-file /data/venus-02/kubeconfig When connecting to the OpenShift cluster, a cluster-specific kubeconfig vault secret will take precedence over the generic kubeconfig secret.","title":"Option 3 - Use a kubeconfig file"},{"location":"10-use-deployer/3-run/existing-openshift/#5-run-the-deployer","text":"","title":"5. 
Run the deployer"},{"location":"10-use-deployer/3-run/existing-openshift/#optional-validate-the-configuration","text":"If you only want to validate the configuration, you can run the deployer with the --check-only argument. This will run the first stage to validate variables and vault secrets and then execute the generators. ./cp-deploy.sh env apply --check-only --accept-all-licenses","title":"Optional: validate the configuration"},{"location":"10-use-deployer/3-run/existing-openshift/#run-the-cloud-pak-deployer","text":"To run the container using a local configuration input directory and a data directory where temporary files and state are kept, use the example below. If you don't specify the status directory, the deployer will automatically create a temporary directory. Please note that the status directory will also hold secrets if you have configured a flat file vault. If you lose the directory, you will not be able to make changes to the configuration and adjust the deployment. It is best to specify a permanent directory that you can reuse later. If you specify an existing directory, the current user must be the owner of the directory. Failing to do so may cause the container to fail with insufficient permissions. ./cp-deploy.sh env apply --accept-all-licenses You can also specify extra variables such as env_id to override the names of the objects referenced in the .yaml configuration files as {{ env_id }}-xxxx . For more information about the extra (dynamic) variables, see advanced configuration . The --accept-all-licenses flag is optional and confirms that you accept all licenses of the installed cartridges and instances. Licenses must be either accepted in the configuration files or at the command line. When running the command, the container will start as a daemon and the command will tail-follow the logs. You can press Ctrl-C at any time to interrupt the logging but the container will continue to run in the background. 
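The daemon-plus-log-tailing pattern described above can be approximated in plain shell to make the behaviour concrete: the long-running work keeps going in the background regardless of whether anyone is watching its log. This is a generic illustration only; the file name and messages below are made up and nothing here calls the deployer itself:

```shell
# Illustrative sketch of the pattern: a background job writes to a log file
# while the foreground shell views the log independently of the job.
LOG=/tmp/deployer-demo.log
( echo "deploy started"; sleep 1; echo "deploy finished" ) > "$LOG" 2>&1 &
JOB=$!
cat "$LOG"      # may show only the first lines while the job still runs
wait "$JOB"     # the background job kept running regardless of the viewer
cat "$LOG"      # now shows the full log
```

In the real deployer, viewing the log again is done with `./cp-deploy.sh env logs`, and interrupting the viewer with Ctrl-C leaves the container running.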
You can return to view the logs as follows: ./cp-deploy.sh env logs Deploying the infrastructure, preparing OpenShift and installing the Cloud Pak will take a long time, typically between 1 and 5 hours, depending on which Cloud Pak cartridges you configured. For the estimated duration of the steps, refer to Timings . If you need to interrupt the automation, use CTRL-C to stop the logging output and then use: ./cp-deploy.sh env kill","title":"Run the Cloud Pak Deployer"},{"location":"10-use-deployer/3-run/existing-openshift/#on-failure","text":"If the Cloud Pak Deployer fails, for example because certain infrastructure components are temporarily not available, fix the cause if needed and then just re-run it with the same CONFIG_DIR and STATUS_DIR, as well as the same extra variables. The provisioning process has been designed to be idempotent and it will not redo actions that have already completed successfully.","title":"On failure"},{"location":"10-use-deployer/3-run/existing-openshift/#finishing-up","text":"Once the process has finished, it will output the URLs by which you can access the deployed Cloud Pak. You can also find this information under the cloud-paks directory in the status directory you specified. 
To retrieve the Cloud Pak URL(s): cat $STATUS_DIR/cloud-paks/* This will show the Cloud Pak URLs: Cloud Pak for Data URL for cluster pluto-01 and project cpd (domain name specified was example.com): https://cpd-cpd.apps.pluto-01.example.com The admin password can be retrieved from the vault as follows: List the secrets in the vault: ./cp-deploy.sh vault list This will show something similar to the following: Secret list for group sample: - ibm_cp_entitlement_key - oc-login - cp4d_admin_cpd_demo You can then retrieve the Cloud Pak for Data admin password like this: ./cp-deploy.sh vault get --vault-secret cp4d_admin_cpd_sample PLAY [Secrets] ***************************************************************** included: /cloud-pak-deployer/automation-roles/99-generic/vault/vault-get-secret/tasks/get-secret-file.yml for localhost cp4d_admin_zen_sample_sample: gelGKrcgaLatBsnAdMEbmLwGr","title":"Finishing up"},{"location":"10-use-deployer/3-run/existing-openshift/#post-install-configuration","text":"You can find examples of a couple of typical changes you may want to do here: Post-run changes .","title":"Post-install configuration"},{"location":"10-use-deployer/3-run/ibm-cloud/","text":"Running the Cloud Pak Deployer on IBM Cloud \ud83d\udd17 You can use Cloud Pak Deployer to create a ROKS (Red Hat OpenShift Kubernetes Service) on IBM Cloud. There are 5 main steps to run the deployer for IBM Cloud: Configure deployer Prepare the cloud environment Obtain entitlement keys and secrets Set environment variables and secrets Run the deployer See the deployer in action in this video: https://ibm.box.com/v/cpd-ibm-cloud-roks Topology \ud83d\udd17 A typical setup of the ROKS cluster on IBM Cloud VPC is pictured below: 1. Configure deployer \ud83d\udd17 Deployer configuration and status directories \ud83d\udd17 Deployer reads the configuration from a directory you set in the CONFIG_DIR environment variable. 
A status directory ( STATUS_DIR environment variable) is used to log activities and store temporary files and scripts. If you use a File Vault (default), the secrets are kept in the $STATUS_DIR/vault directory. You can find OpenShift and Cloud Pak sample configuration (yaml) files here: sample configuration . For IBM Cloud installations, copy one of the ocp-ibm-cloud-roks*.yaml files into the $CONFIG_DIR/config directory. If you also want to install a Cloud Pak, copy one of the cp4*.yaml files. Example: mkdir -p $HOME/cpd-config/config cp sample-configurations/sample-dynamic/config-samples/ocp-ibm-cloud-roks-ocs.yaml $HOME/cpd-config/config/ cp sample-configurations/sample-dynamic/config-samples/cp4d-471.yaml $HOME/cpd-config/config/ Set configuration and status directories environment variables \ud83d\udd17 Cloud Pak Deployer uses the status directory to log its activities and also to keep track of its running state. For a given environment you're provisioning or destroying, you should always specify the same status directory to avoid contention between different deploy runs. export CONFIG_DIR=$HOME/cpd-config export STATUS_DIR=$HOME/cpd-status CONFIG_DIR : Directory that holds the configuration; it must have a config subdirectory which contains the configuration yaml files. STATUS_DIR : The directory where the Cloud Pak Deployer keeps all status information and log files. Optional: advanced configuration \ud83d\udd17 If the deployer configuration is kept on GitHub, follow the instructions in GitHub configuration . For special configuration with defaults and dynamic variables, refer to Advanced configuration . 2. Prepare the cloud environment \ud83d\udd17 Create an IBM Cloud API Key \ud83d\udd17 In order for the Cloud Pak Deployer to create the infrastructure and deploy IBM Cloud Pak for Data, it must perform tasks on IBM Cloud. In order to do so, it requires an IBM Cloud API Key. 
This can be created by following these steps: Go to https://cloud.ibm.com/iam/apikeys and login with your IBMid credentials Ensure you have selected the correct IBM Cloud Account for which you wish to use the Cloud Pak Deployer Click Create an IBM Cloud API Key and provide a name and description Copy the IBM Cloud API key using the Copy button and store it in a safe place, as you will not be able to retrieve it later Warning You can choose to download the API key for later reference. However, when we reference the API key, we mean the IBM Cloud API key as a 40+ character string. Set environment variables for IBM Cloud \ud83d\udd17 Set the environment variables specific to IBM Cloud deployments. export IBM_CLOUD_API_KEY=your_api_key IBM_CLOUD_API_KEY : This is the API key you generated using your IBM Cloud account, this is a 40+ character string 3. Acquire entitlement keys and secrets \ud83d\udd17 If you want to pull the Cloud Pak images from the entitled registry (i.e. an online install), or if you want to mirror the images to your private registry, you need to download the entitlement key. You can skip this step if you're installing from a private registry and all Cloud Pak images have already been downloaded to the private registry. Navigate to https://myibm.ibm.com/products-services/containerlibrary and login with your IBMId credentials Select Get Entitlement Key and create a new key (or copy your existing key) Copy the key value Warning As stated for the API key, you can choose to download the entitlement key to a file. However, when we reference the entitlement key, we mean the 80+ character string that is displayed, not the file. 4. Set environment variables and secrets \ud83d\udd17 Set the Cloud Pak entitlement key \ud83d\udd17 If you want the Cloud Pak images to be pulled from the entitled registry, set the Cloud Pak entitlement key. 
export CP_ENTITLEMENT_KEY=your_cp_entitlement_key CP_ENTITLEMENT_KEY : This is the entitlement key you acquired as per the instructions above; it is an 80+ character string. You don't need to set this environment variable when you install the Cloud Pak(s) from a private registry 5. Run the deployer \ud83d\udd17 Optional: validate the configuration \ud83d\udd17 If you only want to validate the configuration, you can run the deployer with the --check-only argument. This will run the first stage to validate variables and vault secrets and then execute the generators. ./cp-deploy.sh env apply --check-only --accept-all-licenses Run the Cloud Pak Deployer \ud83d\udd17 To run the container using a local configuration input directory and a data directory where temporary files and state are kept, use the example below. If you don't specify the status directory, the deployer will automatically create a temporary directory. Please note that the status directory will also hold secrets if you have configured a flat file vault. If you lose the directory, you will not be able to make changes to the configuration and adjust the deployment. It is best to specify a permanent directory that you can reuse later. If you specify an existing directory, the current user must be the owner of the directory. Failing to do so may cause the container to fail with insufficient permissions. ./cp-deploy.sh env apply --accept-all-licenses You can also specify extra variables such as env_id to override the names of the objects referenced in the .yaml configuration files as {{ env_id }}-xxxx . For more information about the extra (dynamic) variables, see advanced configuration . The --accept-all-licenses flag is optional and confirms that you accept all licenses of the installed cartridges and instances. Licenses must be either accepted in the configuration files or at the command line. When running the command, the container will start as a daemon and the command will tail-follow the logs. 
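Before running env apply, it can help to sanity-check the exported variables. A minimal sketch follows; the check_len helper is hypothetical (not part of cp-deploy.sh) and the length thresholds are the 40+ and 80+ characters quoted in these instructions:

```shell
# Hypothetical pre-flight check, not part of cp-deploy.sh.
# Lengths come from the docs: IBM Cloud API keys are 40+ characters,
# Cloud Pak entitlement keys are 80+ characters.
check_len() {
  local name="$1" value="$2" min="$3"
  if [ "${#value}" -ge "$min" ]; then
    echo "OK: $name"
  else
    echo "SUSPECT: $name has ${#value} chars, expected at least $min"
  fi
}

check_len IBM_CLOUD_API_KEY "$IBM_CLOUD_API_KEY" 40
check_len CP_ENTITLEMENT_KEY "$CP_ENTITLEMENT_KEY" 80
```

A SUSPECT line usually means the key was truncated when it was copied from the console.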
You can press Ctrl-C at any time to interrupt the logging but the container will continue to run in the background. You can return to view the logs as follows: ./cp-deploy.sh env logs Deploying the infrastructure, preparing OpenShift and installing the Cloud Pak will take a long time, typically between 1-5 hours, depending on which Cloud Pak cartridges you configured. For the estimated duration of the steps, refer to Timings . If you need to interrupt the automation, use CTRL-C to stop the logging output and then use: ./cp-deploy.sh env kill On failure \ud83d\udd17 If the Cloud Pak Deployer fails, for example because certain infrastructure components are temporarily not available, fix the cause if needed and then just re-run it with the same CONFIG_DIR and STATUS_DIR as well as any extra variables. The provisioning process has been designed to be idempotent and it will not redo actions that have already completed successfully. Finishing up \ud83d\udd17 Once the process has finished, it will output the URLs by which you can access the deployed Cloud Pak. You can also find this information under the cloud-paks directory in the status directory you specified. 
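The URL written to $STATUS_DIR/cloud-paks/* follows a recognizable pattern. As a sketch only, assuming the route pattern cpd-&lt;project&gt;.apps.&lt;cluster&gt;.&lt;domain&gt; that matches the pluto-01/example.com sample output shown in these pages:

```shell
# Sketch: reconstruct the CP4D URL from its parts, assuming the route
# pattern cpd-<project>.apps.<cluster>.<domain> seen in the sample
# output (cluster pluto-01, project cpd, domain example.com).
cp4d_url() {
  local project="$1" cluster="$2" domain="$3"
  echo "https://cpd-${project}.apps.${cluster}.${domain}"
}

cp4d_url cpd pluto-01 example.com
# → https://cpd-cpd.apps.pluto-01.example.com
```

For the authoritative value, always prefer the file under $STATUS_DIR/cloud-paks/ over reconstructing the URL.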
To retrieve the Cloud Pak URL(s): cat $STATUS_DIR/cloud-paks/* This will show the Cloud Pak URLs: Cloud Pak for Data URL for cluster pluto-01 and project cpd (domain name specified was example.com): https://cpd-cpd.apps.pluto-01.example.com The admin password can be retrieved from the vault as follows: List the secrets in the vault: ./cp-deploy.sh vault list This will show something similar to the following: Secret list for group sample: - ibm_cp_entitlement_key - sample-provision-ssh-key - sample-provision-ssh-pub-key - sample-terraform-tfstate - cp4d_admin_cpd_demo You can then retrieve the Cloud Pak for Data admin password like this: ./cp-deploy.sh vault get --vault-secret cp4d_admin_cpd_demo PLAY [Secrets] ***************************************************************** included: /cloud-pak-deployer/automation-roles/99-generic/vault/vault-get-secret/tasks/get-secret-file.yml for localhost cp4d_admin_zen_sample_sample: gelGKrcgaLatBsnAdMEbmLwGr Post-install configuration \ud83d\udd17 You can find examples of a couple of typical changes you may want to do here: Post-run changes .","title":"IBM Cloud"},{"location":"10-use-deployer/3-run/ibm-cloud/#running-the-cloud-pak-deployer-on-ibm-cloud","text":"You can use Cloud Pak Deployer to create a ROKS (Red Hat OpenShift Kubernetes Service) on IBM Cloud. There are 5 main steps to run the deployer for IBM Cloud: Configure deployer Prepare the cloud environment Obtain entitlement keys and secrets Set environment variables and secrets Run the deployer See the deployer in action in this video: https://ibm.box.com/v/cpd-ibm-cloud-roks","title":"Running the Cloud Pak Deployer on IBM Cloud"},{"location":"10-use-deployer/3-run/ibm-cloud/#topology","text":"A typical setup of the ROKS cluster on IBM Cloud VPC is pictured below:","title":"Topology"},{"location":"10-use-deployer/3-run/ibm-cloud/#1-configure-deployer","text":"","title":"1. 
Configure deployer"},{"location":"10-use-deployer/3-run/ibm-cloud/#deployer-configuration-and-status-directories","text":"Deployer reads the configuration from a directory you set in the CONFIG_DIR environment variable. A status directory ( STATUS_DIR environment variable) is used to log activities and store temporary files and scripts. If you use a File Vault (default), the secrets are kept in the $STATUS_DIR/vault directory. You can find OpenShift and Cloud Pak sample configuration (yaml) files here: sample configuration . For IBM Cloud installations, copy one of the ocp-ibm-cloud-roks*.yaml files into the $CONFIG_DIR/config directory. If you also want to install a Cloud Pak, copy one of the cp4*.yaml files. Example: mkdir -p $HOME/cpd-config/config cp sample-configurations/sample-dynamic/config-samples/ocp-ibm-cloud-roks-ocs.yaml $HOME/cpd-config/config/ cp sample-configurations/sample-dynamic/config-samples/cp4d-471.yaml $HOME/cpd-config/config/","title":"Deployer configuration and status directories"},{"location":"10-use-deployer/3-run/ibm-cloud/#set-configuration-and-status-directories-environment-variables","text":"Cloud Pak Deployer uses the status directory to log its activities and also to keep track of its running state. For a given environment you're provisioning or destroying, you should always specify the same status directory to avoid contention between different deploy runs. export CONFIG_DIR=$HOME/cpd-config export STATUS_DIR=$HOME/cpd-status CONFIG_DIR : Directory that holds the configuration; it must have a config subdirectory which contains the configuration yaml files. STATUS_DIR : The directory where the Cloud Pak Deployer keeps all status information and log files.","title":"Set configuration and status directories environment variables"},{"location":"10-use-deployer/3-run/ibm-cloud/#optional-advanced-configuration","text":"If the deployer configuration is kept on GitHub, follow the instructions in GitHub configuration . 
For special configuration with defaults and dynamic variables, refer to Advanced configuration .","title":"Optional: advanced configuration"},{"location":"10-use-deployer/3-run/ibm-cloud/#2-prepare-the-cloud-environment","text":"","title":"2. Prepare the cloud environment"},{"location":"10-use-deployer/3-run/ibm-cloud/#create-an-ibm-cloud-api-key","text":"In order for the Cloud Pak Deployer to create the infrastructure and deploy IBM Cloud Pak for Data, it must perform tasks on IBM Cloud. In order to do so it requires an IBM Cloud API Key. This can be created by following these steps: Go to https://cloud.ibm.com/iam/apikeys and login with your IBMid credentials Ensure you have selected the correct IBM Cloud Account for which you wish to use the Cloud Pak Deployer Click Create an IBM Cloud API Key and provide a name and description Copy the IBM Cloud API key using the Copy button and store it in a safe place, as you will not be able to retrieve it later Warning You can choose to download the API key for later reference. However, when we reference the API key, we mean the IBM Cloud API key as a 40+ character string.","title":"Create an IBM Cloud API Key"},{"location":"10-use-deployer/3-run/ibm-cloud/#set-environment-variables-for-ibm-cloud","text":"Set the environment variables specific to IBM Cloud deployments. export IBM_CLOUD_API_KEY=your_api_key IBM_CLOUD_API_KEY : This is the API key you generated using your IBM Cloud account, this is a 40+ character string","title":"Set environment variables for IBM Cloud"},{"location":"10-use-deployer/3-run/ibm-cloud/#3-acquire-entitlement-keys-and-secrets","text":"If you want to pull the Cloud Pak images from the entitled registry (i.e. an online install), or if you want to mirror the images to your private registry, you need to download the entitlement key. You can skip this step if you're installing from a private registry and all Cloud Pak images have already been downloaded to the private registry. 
Navigate to https://myibm.ibm.com/products-services/containerlibrary and login with your IBMId credentials Select Get Entitlement Key and create a new key (or copy your existing key) Copy the key value Warning As stated for the API key, you can choose to download the entitlement key to a file. However, when we reference the entitlement key, we mean the 80+ character string that is displayed, not the file.","title":"3. Acquire entitlement keys and secrets"},{"location":"10-use-deployer/3-run/ibm-cloud/#4-set-environment-variables-and-secrets","text":"","title":"4. Set environment variables and secrets"},{"location":"10-use-deployer/3-run/ibm-cloud/#set-the-cloud-pak-entitlement-key","text":"If you want the Cloud Pak images to be pulled from the entitled registry, set the Cloud Pak entitlement key. export CP_ENTITLEMENT_KEY=your_cp_entitlement_key CP_ENTITLEMENT_KEY : This is the entitlement key you acquired as per the instructions above; it is an 80+ character string. You don't need to set this environment variable when you install the Cloud Pak(s) from a private registry","title":"Set the Cloud Pak entitlement key"},{"location":"10-use-deployer/3-run/ibm-cloud/#5-run-the-deployer","text":"","title":"5. Run the deployer"},{"location":"10-use-deployer/3-run/ibm-cloud/#optional-validate-the-configuration","text":"If you only want to validate the configuration, you can run the deployer with the --check-only argument. This will run the first stage to validate variables and vault secrets and then execute the generators. ./cp-deploy.sh env apply --check-only --accept-all-licenses","title":"Optional: validate the configuration"},{"location":"10-use-deployer/3-run/ibm-cloud/#run-the-cloud-pak-deployer","text":"To run the container using a local configuration input directory and a data directory where temporary files and state are kept, use the example below. If you don't specify the status directory, the deployer will automatically create a temporary directory. 
Please note that the status directory will also hold secrets if you have configured a flat file vault. If you lose the directory, you will not be able to make changes to the configuration and adjust the deployment. It is best to specify a permanent directory that you can reuse later. If you specify an existing directory, the current user must be the owner of the directory. Failing to do so may cause the container to fail with insufficient permissions. ./cp-deploy.sh env apply --accept-all-licenses You can also specify extra variables such as env_id to override the names of the objects referenced in the .yaml configuration files as {{ env_id }}-xxxx . For more information about the extra (dynamic) variables, see advanced configuration . The --accept-all-licenses flag is optional and confirms that you accept all licenses of the installed cartridges and instances. Licenses must be either accepted in the configuration files or at the command line. When running the command, the container will start as a daemon and the command will tail-follow the logs. You can press Ctrl-C at any time to interrupt the logging but the container will continue to run in the background. You can return to view the logs as follows: ./cp-deploy.sh env logs Deploying the infrastructure, preparing OpenShift and installing the Cloud Pak will take a long time, typically between 1-5 hours, depending on which Cloud Pak cartridges you configured. For the estimated duration of the steps, refer to Timings . If you need to interrupt the automation, use CTRL-C to stop the logging output and then use: ./cp-deploy.sh env kill","title":"Run the Cloud Pak Deployer"},{"location":"10-use-deployer/3-run/ibm-cloud/#on-failure","text":"If the Cloud Pak Deployer fails, for example because certain infrastructure components are temporarily not available, fix the cause if needed and then just re-run it with the same CONFIG_DIR and STATUS_DIR as well as any extra variables. 
The provisioning process has been designed to be idempotent and it will not redo actions that have already completed successfully.","title":"On failure"},{"location":"10-use-deployer/3-run/ibm-cloud/#finishing-up","text":"Once the process has finished, it will output the URLs by which you can access the deployed Cloud Pak. You can also find this information under the cloud-paks directory in the status directory you specified. To retrieve the Cloud Pak URL(s): cat $STATUS_DIR/cloud-paks/* This will show the Cloud Pak URLs: Cloud Pak for Data URL for cluster pluto-01 and project cpd (domain name specified was example.com): https://cpd-cpd.apps.pluto-01.example.com The admin password can be retrieved from the vault as follows: List the secrets in the vault: ./cp-deploy.sh vault list This will show something similar to the following: Secret list for group sample: - ibm_cp_entitlement_key - sample-provision-ssh-key - sample-provision-ssh-pub-key - sample-terraform-tfstate - cp4d_admin_cpd_demo You can then retrieve the Cloud Pak for Data admin password like this: ./cp-deploy.sh vault get --vault-secret cp4d_admin_cpd_demo PLAY [Secrets] ***************************************************************** included: /cloud-pak-deployer/automation-roles/99-generic/vault/vault-get-secret/tasks/get-secret-file.yml for localhost cp4d_admin_zen_sample_sample: gelGKrcgaLatBsnAdMEbmLwGr","title":"Finishing up"},{"location":"10-use-deployer/3-run/ibm-cloud/#post-install-configuration","text":"You can find examples of a couple of typical changes you may want to do here: Post-run changes .","title":"Post-install configuration"},{"location":"10-use-deployer/3-run/run/","text":"Running the Cloud Pak Deployer \ud83d\udd17 Cloud Pak Deployer supports various public and private cloud infrastructures. 
Click on the links below, or in the left menu to find details about running the deployer on each of the following infrastructures: Existing OpenShift IBM Cloud AWS - ROSA AWS - Self-managed Azure - ARO Azure - Self-managed vSphere","title":"Running Cloud Pak Deployer"},{"location":"10-use-deployer/3-run/run/#running-the-cloud-pak-deployer","text":"Cloud Pak Deployer supports various public and private cloud infrastructures. Click on the links below, or in the left menu to find details about running the deployer on each of the following infrastructures: Existing OpenShift IBM Cloud AWS - ROSA AWS - Self-managed Azure - ARO Azure - Self-managed vSphere","title":"Running the Cloud Pak Deployer"},{"location":"10-use-deployer/3-run/vsphere/","text":"Running the Cloud Pak Deployer on vSphere \ud83d\udd17 You can use Cloud Pak Deployer to create an OpenShift cluster on VMWare infrastructure. There are 5 main steps to run the deployer for vSphere: Configure deployer Prepare the cloud environment Obtain entitlement keys and secrets Set environment variables and secrets Run the deployer Topology \ud83d\udd17 A typical setup of the vSphere cluster with OpenShift is pictured below: When deploying OpenShift and the Cloud Pak(s) on VMWare vSphere, there is a dependency on a DHCP server for issuing IP addresses to the newly configured cluster nodes. Also, once the OpenShift cluster has been installed, valid fully qualified host names are required to connect to the OpenShift API server at port 6443 and applications running behind the ingress server at port 443 . The Cloud Pak deployer cannot set up a DHCP server or a DNS server and to be able to connect to OpenShift or to reach the Cloud Pak after installation, name entries must be set up. 1. Configure deployer \ud83d\udd17 Deployer configuration and status directories \ud83d\udd17 Deployer reads the configuration from a directory you set in the CONFIG_DIR environment variable. 
A status directory ( STATUS_DIR environment variable) is used to log activities and store temporary files and scripts. If you use a File Vault (default), the secrets are kept in the $STATUS_DIR/vault directory. You can find OpenShift and Cloud Pak sample configuration (yaml) files here: sample configuration . For vSphere installations, copy one of the ocp-vsphere-*.yaml files into the $CONFIG_DIR/config directory. If you also want to install a Cloud Pak, copy one of the cp4*.yaml files. Example: mkdir -p $HOME/cpd-config/config cp sample-configurations/sample-dynamic/config-samples/ocp-vsphere-ocs-nfs.yaml $HOME/cpd-config/config/ cp sample-configurations/sample-dynamic/config-samples/cp4d-471.yaml $HOME/cpd-config/config/ Set configuration and status directories environment variables \ud83d\udd17 Cloud Pak Deployer uses the status directory to log its activities and also to keep track of its running state. For a given environment you're provisioning or destroying, you should always specify the same status directory to avoid contention between different deploy runs. export CONFIG_DIR=$HOME/cpd-config export STATUS_DIR=$HOME/cpd-status CONFIG_DIR : Directory that holds the configuration; it must have a config subdirectory which contains the configuration yaml files. STATUS_DIR : The directory where the Cloud Pak Deployer keeps all status information and log files. Optional: advanced configuration \ud83d\udd17 If the deployer configuration is kept on GitHub, follow the instructions in GitHub configuration . For special configuration with defaults and dynamic variables, refer to Advanced configuration . 2. Prepare the cloud environment \ud83d\udd17 Pre-requisites for vSphere \ud83d\udd17 In order to successfully install OpenShift on vSphere infrastructure, the following pre-requisites must have been met. Pre-requisite Description Red Hat pull secret A pull secret is required to download and install OpenShift. 
See Acquire pull secret IBM Entitlement key When installing an IBM Cloud Pak, you need an IBM entitlement key. See Acquire IBM Cloud Pak entitlement key vSphere credentials The OpenShift IPI installer requires vSphere credentials to create VMs and storage Firewall rules The OpenShift cluster's API server on port 6443 and application server on port 443 must be reachable. Whitelisted URLs The OpenShift and Cloud Pak download locations and registry must be accessible from the vSphere infrastructure. See Whitelisted locations DHCP When provisioning new VMs, IP addresses must be automatically assigned through DHCP DNS A DNS server that will resolve the OpenShift API server and applications is required. See DNS configuration Time server A time server to synchronize the time must be available in the network and configured through the DHCP server There are also some optional settings, depending on the specifics of the installation: Pre-requisite Description Bastion server It can be useful to have a bastion/installation server to run the deployer. This (virtual) server must reside within the vSphere network NFS details If an NFS server is used for storage, it must be reachable (firewall) and no_root_squash must be set Private registry If the installation must use a private registry for the Cloud Pak installation, it must be available and credentials shared Certificates If the Cloud Pak URL must have a CA-signed certificate, the key, certificate and CA bundle must be available at installation time Load balancer The OpenShift IPI install creates 2 VIPs and takes care of the routing to the services. In some implementations, a load balancer provided by the infrastructure team is preferred. This load balancer must be configured externally DNS configuration \ud83d\udd17 During the provisioning and configuration process, the deployer needs access to the OpenShift API and the ingress server for which the IP addresses are specified in the openshift object. 
Ensure that the DNS server has the following entries: api.openshift_name.domain_name \u2192 Point to the api_vip address configured in the openshift object *.apps.openshift_name.domain_name \u2192 Point to the ingress_vip address configured in the openshift object If you do not configure the DNS entries upfront, the deployer will still run and it will \"spoof\" the required entries in the container's /etc/hosts file. However to be able to connect to OpenShift and access the Cloud Pak, the DNS entries are required. Obtain the vSphere user and password \ud83d\udd17 In order for the Cloud Pak Deployer to create the infrastructure and deploy the IBM Cloud Pak, it must have provisioning access to vSphere and it needs the vSphere user and password. The user must have permissions to create VM folders and virtual machines. Set environment variables for vSphere \ud83d\udd17 export VSPHERE_USER=your_vsphere_user export VSPHERE_PASSWORD=password_of_the_vsphere_user VSPHERE_USER : This is the user name of the vSphere user, often this is something like admin@vsphere.local VSPHERE_PASSWORD : The password of the vSphere user. Be careful with special characters like $ , ! as they are not accepted by the IPI provisioning of OpenShift 3. Acquire entitlement keys and secrets \ud83d\udd17 Acquire IBM Cloud Pak entitlement key \ud83d\udd17 If you want to pull the Cloud Pak images from the entitled registry (i.e. an online install), or if you want to mirror the images to your private registry, you need to download the entitlement key. You can skip this step if you're installing from a private registry and all Cloud Pak images have already been downloaded to the private registry. Navigate to https://myibm.ibm.com/products-services/containerlibrary and login with your IBMId credentials Select Get Entitlement Key and create a new key (or copy your existing key) Copy the key value Warning As stated for the API key, you can choose to download the entitlement key to a file. 
However, when we reference the entitlement key, we mean the 80+ character string that is displayed, not the file. Acquire an OpenShift pull secret \ud83d\udd17 To install OpenShift you need an OpenShift pull secret which holds your entitlement. Navigate to https://console.redhat.com/openshift/install/pull-secret and download the pull secret into file /tmp/ocp_pullsecret.json Optional: Locate or generate a public SSH Key \ud83d\udd17 To obtain access to the OpenShift nodes post-installation, you will need to specify the public SSH key of your server; typically this is ~/.ssh/id_rsa.pub , where ~ is the home directory of your user. If you don't have an SSH key-pair yet, you can generate one using the steps documented here: https://cloud.ibm.com/docs/ssh-keys?topic=ssh-keys-generating-and-using-ssh-keys-for-remote-host-authentication#generating-ssh-keys-on-linux . Alternatively, the deployer can generate an SSH key-pair automatically if the ocp-ssh-pub-key credential is not in the vault. 4. Set environment variables and secrets \ud83d\udd17 Set the Cloud Pak entitlement key \ud83d\udd17 If you want the Cloud Pak images to be pulled from the entitled registry, set the Cloud Pak entitlement key. export CP_ENTITLEMENT_KEY=your_cp_entitlement_key CP_ENTITLEMENT_KEY : This is the entitlement key you acquired as per the instructions above; it is an 80+ character string. You don't need to set this environment variable when you install the Cloud Pak(s) from a private registry Create the secrets needed for vSphere deployment \ud83d\udd17 You need to store the OpenShift pull secret in the vault so that the deployer has access to it. ./cp-deploy.sh vault set \\ --vault-secret ocp-pullsecret \\ --vault-secret-file /tmp/ocp_pullsecret.json Optional: Create secret for public SSH key \ud83d\udd17 If you want to use your SSH key to access nodes in the cluster, set the Vault secret with the public SSH key. 
./cp-deploy.sh vault set \\ --vault-secret ocp-ssh-pub-key \\ --vault-secret-file ~/.ssh/id_rsa.pub 5. Run the deployer \ud83d\udd17 Optional: validate the configuration \ud83d\udd17 If you only want to validate the configuration, you can run the deployer with the --check-only argument. This will run the first stage to validate variables and vault secrets and then execute the generators. ./cp-deploy.sh env apply --check-only --accept-all-licenses Run the Cloud Pak Deployer \ud83d\udd17 To run the container using a local configuration input directory and a data directory where temporary files and state are kept, use the example below. If you don't specify the status directory, the deployer will automatically create a temporary directory. Please note that the status directory will also hold secrets if you have configured a flat file vault. If you lose the directory, you will not be able to make changes to the configuration and adjust the deployment. It is best to specify a permanent directory that you can reuse later. If you specify an existing directory, the current user must be the owner of the directory. Failing to do so may cause the container to fail with insufficient permissions. ./cp-deploy.sh env apply --accept-all-licenses You can also specify extra variables such as env_id to override the names of the objects referenced in the .yaml configuration files as {{ env_id }}-xxxx . For more information about the extra (dynamic) variables, see advanced configuration . The --accept-all-licenses flag is optional and confirms that you accept all licenses of the installed cartridges and instances. Licenses must be either accepted in the configuration files or at the command line. When running the command, the container will start as a daemon and the command will tail-follow the logs. You can press Ctrl-C at any time to interrupt the logging but the container will continue to run in the background. 
You can return to view the logs as follows: ./cp-deploy.sh env logs Deploying the infrastructure, preparing OpenShift and installing the Cloud Pak will take a long time, typically between 1-5 hours, depending on which Cloud Pak cartridges you configured. For the estimated duration of the steps, refer to Timings . If you need to interrupt the automation, use CTRL-C to stop the logging output and then use: ./cp-deploy.sh env kill On failure \ud83d\udd17 If the Cloud Pak Deployer fails, for example because certain infrastructure components are temporarily not available, fix the cause if needed and then just re-run it with the same CONFIG_DIR and STATUS_DIR as well as any extra variables. The provisioning process has been designed to be idempotent and it will not redo actions that have already completed successfully. Finishing up \ud83d\udd17 Once the process has finished, it will output the URLs by which you can access the deployed Cloud Pak. You can also find this information under the cloud-paks directory in the status directory you specified. 
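Because the admin secret name varies per environment (cp4d_admin_cpd_demo in the sample output shown in these pages), a small helper can pick it out of the vault list output before calling vault get. The find_admin_secret function is hypothetical, not part of cp-deploy.sh:

```shell
# Hypothetical helper, not part of cp-deploy.sh: extract the CP4D admin
# secret name from `./cp-deploy.sh vault list` output so it can be fed
# to `./cp-deploy.sh vault get --vault-secret ...`.
find_admin_secret() {
  grep -o 'cp4d_admin[A-Za-z0-9_]*' | head -n 1
}

# Sample `vault list` output as shown in these docs:
sample_list='Secret list for group sample:
- ibm_cp_entitlement_key
- sample-kubeadmin-password
- cp4d_admin_cpd_demo'

printf '%s\n' "$sample_list" | find_admin_secret
# → cp4d_admin_cpd_demo
```

In a live environment this could be combined as ./cp-deploy.sh vault list piped into the helper, with the result passed to vault get.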
To retrieve the Cloud Pak URL(s): cat $STATUS_DIR/cloud-paks/* This will show the Cloud Pak URLs: Cloud Pak for Data URL for cluster pluto-01 and project cpd (domain name specified was example.com): https://cpd-cpd.apps.pluto-01.example.com The admin password can be retrieved from the vault as follows: List the secrets in the vault: ./cp-deploy.sh vault list This will show something similar to the following: Secret list for group sample: - vsphere-user - vsphere-password - ocp-pullsecret - ocp-ssh-pub-key - ibm_cp_entitlement_key - sample-kubeadmin-password - cp4d_admin_cpd_demo You can then retrieve the Cloud Pak for Data admin password like this: ./cp-deploy.sh vault get --vault-secret cp4d_admin_cpd_demo PLAY [Secrets] ***************************************************************** included: /cloud-pak-deployer/automation-roles/99-generic/vault/vault-get-secret/tasks/get-secret-file.yml for localhost cp4d_admin_zen_sample_sample: gelGKrcgaLatBsnAdMEbmLwGr Post-install configuration \ud83d\udd17 You can find examples of a couple of typical changes you may want to do here: Post-run changes .","title":"vSphere"},{"location":"10-use-deployer/3-run/vsphere/#running-the-cloud-pak-deployer-on-vsphere","text":"You can use Cloud Pak Deployer to create an OpenShift cluster on VMWare infrastructure. There are 5 main steps to run the deployer for vSphere: Configure deployer Prepare the cloud environment Obtain entitlement keys and secrets Set environment variables and secrets Run the deployer","title":"Running the Cloud Pak Deployer on vSphere"},{"location":"10-use-deployer/3-run/vsphere/#topology","text":"A typical setup of the vSphere cluster with OpenShift is pictured below: When deploying OpenShift and the Cloud Pak(s) on VMWare vSphere, there is a dependency on a DHCP server for issuing IP addresses to the newly configured cluster nodes. 
Also, once the OpenShift cluster has been installed, valid fully qualified host names are required to connect to the OpenShift API server at port 6443 and applications running behind the ingress server at port 443 . The Cloud Pak deployer cannot set up a DHCP server or a DNS server and to be able to connect to OpenShift or to reach the Cloud Pak after installation, name entries must be set up.","title":"Topology"},{"location":"10-use-deployer/3-run/vsphere/#1-configure-deployer","text":"","title":"1. Configure deployer"},{"location":"10-use-deployer/3-run/vsphere/#deployer-configuration-and-status-directories","text":"Deployer reads the configuration from a directory you set in the CONFIG_DIR environment variable. A status directory ( STATUS_DIR environment variable) is used to log activities and store temporary files and scripts. If you use a File Vault (default), the secrets are kept in the $STATUS_DIR/vault directory. You can find OpenShift and Cloud Pak sample configuration (yaml) files here: sample configuration . For vSphere installations, copy one of the ocp-vsphere-*.yaml files into the $CONFIG_DIR/config directory. If you also want to install a Cloud Pak, copy one of the cp4*.yaml files. Example: mkdir -p $HOME/cpd-config/config cp sample-configurations/sample-dynamic/config-samples/ocp-vsphere-ocs-nfs.yaml $HOME/cpd-config/config/ cp sample-configurations/sample-dynamic/config-samples/cp4d-471.yaml $HOME/cpd-config/config/","title":"Deployer configuration and status directories"},{"location":"10-use-deployer/3-run/vsphere/#set-configuration-and-status-directories-environment-variables","text":"Cloud Pak Deployer uses the status directory to log its activities and also to keep track of its running state. For a given environment you're provisioning or destroying, you should always specify the same status directory to avoid contention between different deploy runs. 
export CONFIG_DIR=$HOME/cpd-config export STATUS_DIR=$HOME/cpd-status CONFIG_DIR : Directory that holds the configuration; it must have a config subdirectory which contains the configuration yaml files. STATUS_DIR : The directory where the Cloud Pak Deployer keeps all status information and log files.","title":"Set configuration and status directories environment variables"},{"location":"10-use-deployer/3-run/vsphere/#optional-advanced-configuration","text":"If the deployer configuration is kept on GitHub, follow the instructions in GitHub configuration . For special configuration with defaults and dynamic variables, refer to Advanced configuration .","title":"Optional: advanced configuration"},{"location":"10-use-deployer/3-run/vsphere/#2-prepare-the-cloud-environment","text":"","title":"2. Prepare the cloud environment"},{"location":"10-use-deployer/3-run/vsphere/#pre-requisites-for-vsphere","text":"In order to successfully install OpenShift on vSphere infrastructure, the following pre-requisites must have been met. Pre-requisite Description Red Hat pull secret A pull secret is required to download and install OpenShift. See Acquire pull secret IBM Entitlement key When installing an IBM Cloud Pak, you need an IBM entitlement key. See Acquire IBM Cloud Pak entitlement key vSphere credentials The OpenShift IPI installer requires vSphere credentials to create VMs and storage Firewall rules The OpenShift cluster's API server on port 6443 and application server on port 443 must be reachable. Whitelisted URLs The OpenShift and Cloud Pak download locations and registry must be accessible from the vSphere infrastructure. See Whitelisted locations DHCP When provisioning new VMs, IP addresses must be automatically assigned through DHCP DNS A DNS server that will resolve the OpenShift API server and applications is required. 
See DNS configuration Time server A time server to synchronize the time must be available in the network and configured through the DHCP server There are also some optional settings, depending on the specifics of the installation: Pre-requisite Description Bastion server It can be useful to have a bastion/installation server to run the deployer. This (virtual) server must reside within the vSphere network NFS details If an NFS server is used for storage, it must be reachable (firewall) and no_root_squash must be set Private registry If the installation must use a private registry for the Cloud Pak installation, it must be available and credentials shared Certificates If the Cloud Pak URL must have a CA-signed certificate, the key, certificate and CA bundle must be available at installation time Load balancer The OpenShift IPI install creates 2 VIPs and takes care of the routing to the services. In some implementations, a load balancer provided by the infrastructure team is preferred. This load balancer must be configured externally","title":"Pre-requisites for vSphere"},{"location":"10-use-deployer/3-run/vsphere/#dns-configuration","text":"During the provisioning and configuration process, the deployer needs access to the OpenShift API and the ingress server for which the IP addresses are specified in the openshift object. Ensure that the DNS server has the following entries: api.openshift_name.domain_name \u2192 Point to the api_vip address configured in the openshift object *.apps.openshift_name.domain_name \u2192 Point to the ingress_vip address configured in the openshift object If you do not configure the DNS entries upfront, the deployer will still run and it will \"spoof\" the required entries in the container's /etc/hosts file. 
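As a sketch of what these two entries might look like in a DNS zone, assuming a hypothetical cluster named pluto-01 in domain example.com with made-up VIP addresses 10.0.0.10 (api_vip) and 10.0.0.11 (ingress_vip):

```
; Illustrative A records only -- substitute your own openshift_name, domain_name and VIPs
api.pluto-01.example.com.     IN A 10.0.0.10   ; api_vip from the openshift object
*.apps.pluto-01.example.com.  IN A 10.0.0.11   ; ingress_vip from the openshift object
```

The wildcard record is what resolves the per-application routes such as cpd-cpd.apps.pluto-01.example.com. 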
However to be able to connect to OpenShift and access the Cloud Pak, the DNS entries are required.","title":"DNS configuration"},{"location":"10-use-deployer/3-run/vsphere/#obtain-the-vsphere-user-and-password","text":"In order for the Cloud Pak Deployer to create the infrastructure and deploy the IBM Cloud Pak, it must have provisioning access to vSphere and it needs the vSphere user and password. The user must have permissions to create VM folders and virtual machines.","title":"Obtain the vSphere user and password"},{"location":"10-use-deployer/3-run/vsphere/#set-environment-variables-for-vsphere","text":"export VSPHERE_USER=your_vsphere_user export VSPHERE_PASSWORD=password_of_the_vsphere_user VSPHERE_USER : This is the user name of the vSphere user, often this is something like admin@vsphere.local VSPHERE_PASSWORD : The password of the vSphere user. Be careful with special characters like $ , ! as they are not accepted by the IPI provisioning of OpenShift","title":"Set environment variables for vSphere"},{"location":"10-use-deployer/3-run/vsphere/#3-acquire-entitlement-keys-and-secrets","text":"","title":"3. Acquire entitlement keys and secrets"},{"location":"10-use-deployer/3-run/vsphere/#acquire-ibm-cloud-pak-entitlement-key","text":"If you want to pull the Cloud Pak images from the entitled registry (i.e. an online install), or if you want to mirror the images to your private registry, you need to download the entitlement key. You can skip this step if you're installing from a private registry and all Cloud Pak images have already been downloaded to the private registry. Navigate to https://myibm.ibm.com/products-services/containerlibrary and login with your IBMId credentials Select Get Entitlement Key and create a new key (or copy your existing key) Copy the key value Warning As stated for the API key, you can choose to download the entitlement key to a file. 
However, when we reference the entitlement key, we mean the 80+ character string that is displayed, not the file.","title":"Acquire IBM Cloud Pak entitlement key"},{"location":"10-use-deployer/3-run/vsphere/#acquire-an-openshift-pull-secret","text":"To install OpenShift you need an OpenShift pull secret which holds your entitlement. Navigate to https://console.redhat.com/openshift/install/pull-secret and download the pull secret into file /tmp/ocp_pullsecret.json","title":"Acquire an OpenShift pull secret"},{"location":"10-use-deployer/3-run/vsphere/#optional-locate-or-generate-a-public-ssh-key","text":"To obtain access to the OpenShift nodes post-installation, you will need to specify the public SSH key of your server; typically this is ~/.ssh/id_rsa.pub , where ~ is the home directory of your user. If you don't have an SSH key-pair yet, you can generate one using the steps documented here: https://cloud.ibm.com/docs/ssh-keys?topic=ssh-keys-generating-and-using-ssh-keys-for-remote-host-authentication#generating-ssh-keys-on-linux . Alternatively, the deployer can generate an SSH key-pair automatically if the ocp-ssh-pub-key credential is not in the vault.","title":"Optional: Locate or generate a public SSH Key"},{"location":"10-use-deployer/3-run/vsphere/#4-set-environment-variables-and-secrets","text":"","title":"4. Set environment variables and secrets"},{"location":"10-use-deployer/3-run/vsphere/#set-the-cloud-pak-entitlement-key","text":"If you want the Cloud Pak images to be pulled from the entitled registry, set the Cloud Pak entitlement key. export CP_ENTITLEMENT_KEY=your_cp_entitlement_key CP_ENTITLEMENT_KEY : This is the entitlement key you acquired as per the instructions above; this is an 80+ character string. 
You don't need to set this environment variable when you install the Cloud Pak(s) from a private registry","title":"Set the Cloud Pak entitlement key"},{"location":"10-use-deployer/3-run/vsphere/#create-the-secrets-needed-for-vsphere-deployment","text":"You need to store the OpenShift pull secret in the vault so that the deployer has access to it. ./cp-deploy.sh vault set \\ --vault-secret ocp-pullsecret \\ --vault-secret-file /tmp/ocp_pullsecret.json","title":"Create the secrets needed for vSphere deployment"},{"location":"10-use-deployer/3-run/vsphere/#optional-create-secret-for-public-ssh-key","text":"If you want to use your SSH key to access nodes in the cluster, set the Vault secret with the public SSH key. ./cp-deploy.sh vault set \\ --vault-secret ocp-ssh-pub-key \\ --vault-secret-file ~/.ssh/id_rsa.pub","title":"Optional: Create secret for public SSH key"},{"location":"10-use-deployer/3-run/vsphere/#5-run-the-deployer","text":"","title":"5. Run the deployer"},{"location":"10-use-deployer/3-run/vsphere/#optional-validate-the-configuration","text":"If you only want to validate the configuration, you can run the deployer with the --check-only argument. This will run the first stage to validate variables and vault secrets and then execute the generators. ./cp-deploy.sh env apply --check-only --accept-all-licenses","title":"Optional: validate the configuration"},{"location":"10-use-deployer/3-run/vsphere/#run-the-cloud-pak-deployer","text":"To run the container using a local configuration input directory and a data directory where temporary files and state are kept, use the example below. If you don't specify the status directory, the deployer will automatically create a temporary directory. Please note that the status directory will also hold secrets if you have configured a flat file vault. If you lose the directory, you will not be able to make changes to the configuration and adjust the deployment. 
It is best to specify a permanent directory that you can reuse later. If you specify an existing directory the current user must be the owner of the directory. Failing to do so may cause the container to fail with insufficient permissions. ./cp-deploy.sh env apply --accept-all-licenses You can also specify extra variables such as env_id to override the names of the objects referenced in the .yaml configuration files as {{ env_id }}-xxxx . For more information about the extra (dynamic) variables, see advanced configuration . The --accept-all-licenses flag is optional and confirms that you accept all licenses of the installed cartridges and instances. Licenses must be either accepted in the configuration files or at the command line. When running the command, the container will start as a daemon and the command will tail-follow the logs. You can press Ctrl-C at any time to interrupt the logging but the container will continue to run in the background. You can return to view the logs as follows: ./cp-deploy.sh env logs Deploying the infrastructure, preparing OpenShift and installing the Cloud Pak will take a long time, typically between 1-5 hours, depending on which Cloud Pak cartridges you configured. For the estimated duration of the steps, refer to Timings . If you need to interrupt the automation, use CTRL-C to stop the logging output and then use: ./cp-deploy.sh env kill","title":"Run the Cloud Pak Deployer"},{"location":"10-use-deployer/3-run/vsphere/#on-failure","text":"If the Cloud Pak Deployer fails, for example because certain infrastructure components are temporarily not available, fix the cause if needed and then just re-run it with the same CONFIG_DIR and STATUS_DIR as well as any extra variables. 
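A re-run after a failure can be sketched as follows; the env_id value pluto-01 is illustrative, and the deployer invocation is shown commented because it only works on a host where cp-deploy.sh and the container runtime are set up:

```shell
# Point at the SAME directories used by the original run, otherwise the
# deployer cannot find its state and vault
export CONFIG_DIR=$HOME/cpd-config
export STATUS_DIR=$HOME/cpd-status

# Re-apply with the same extra (dynamic) variables as the first run
# (env_id value is illustrative):
# ./cp-deploy.sh env apply -e env_id=pluto-01 --accept-all-licenses
```

Because the run is idempotent, repeating the command with identical inputs only performs the steps that have not yet completed. 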
The provisioning process has been designed to be idempotent and it will not redo actions that have already completed successfully.","title":"On failure"},{"location":"10-use-deployer/3-run/vsphere/#finishing-up","text":"Once the process has finished, it will output the URLs by which you can access the deployed Cloud Pak. You can also find this information under the cloud-paks directory in the status directory you specified. To retrieve the Cloud Pak URL(s): cat $STATUS_DIR/cloud-paks/* This will show the Cloud Pak URLs: Cloud Pak for Data URL for cluster pluto-01 and project cpd (domain name specified was example.com): https://cpd-cpd.apps.pluto-01.example.com The admin password can be retrieved from the vault as follows: List the secrets in the vault: ./cp-deploy.sh vault list This will show something similar to the following: Secret list for group sample: - vsphere-user - vsphere-password - ocp-pullsecret - ocp-ssh-pub-key - ibm_cp_entitlement_key - sample-kubeadmin-password - cp4d_admin_cpd_demo You can then retrieve the Cloud Pak for Data admin password like this: ./cp-deploy.sh vault get --vault-secret cp4d_admin_cpd_demo PLAY [Secrets] ***************************************************************** included: /cloud-pak-deployer/automation-roles/99-generic/vault/vault-get-secret/tasks/get-secret-file.yml for localhost cp4d_admin_zen_sample_sample: gelGKrcgaLatBsnAdMEbmLwGr","title":"Finishing up"},{"location":"10-use-deployer/3-run/vsphere/#post-install-configuration","text":"You can find examples of a couple of typical changes you may want to do here: Post-run changes .","title":"Post-install configuration"},{"location":"10-use-deployer/5-post-run/post-run/","text":"Post-run changes \ud83d\udd17 If you want to change the deployed configuration, you can just update the configuration files and re-run the deployer. Make sure that you use the same input configuration and status directories and also the env_id if you specified one, otherwise deployment may fail. 
Below are a couple of examples of post-run changes you may want to do. Change Cloud Pak for Data admin password \ud83d\udd17 When initially installed, the Cloud Pak Deployer will generate a strong password for the Cloud Pak for Data admin user (or cpadmin if you have selected to use Foundational Services IAM). If you want to change the password afterwards, you can do this from the Cloud Pak for Data user interface, but this means that the deployer will no longer be able to make changes to the Cloud Pak for Data configuration. If you have updated the admin password from the UI, please make sure you also update the secret in the vault. First, list the secrets in the vault: ./cp-deploy.sh vault list This will show something similar to the following: Secret list for group sample: - ibm_cp_entitlement_key - sample-provision-ssh-key - sample-provision-ssh-pub-key - sample-terraform-tfstate - cp4d_admin_zen_sample_sample Then, update the password: ./cp-deploy.sh vault set -vs cp4d_admin_zen_sample_sample -vsv \"my Really Sec3re Passw0rd\" Finally, run the deployer again. It will make the necessary changes to the OpenShift secret and check that the admin user can log in. In this case you can speed up the process via the --skip-infra flag. ./cp-deploy.sh env apply --skip-infra [--accept-all-licenses]","title":"Post-run changes"},{"location":"10-use-deployer/5-post-run/post-run/#post-run-changes","text":"If you want to change the deployed configuration, you can just update the configuration files and re-run the deployer. Make sure that you use the same input configuration and status directories and also the env_id if you specified one, otherwise deployment may fail. 
Below are a couple of examples of post-run changes you may want to do.","title":"Post-run changes"},{"location":"10-use-deployer/5-post-run/post-run/#change-cloud-pak-for-data-admin-password","text":"When initially installed, the Cloud Pak Deployer will generate a strong password for the Cloud Pak for Data admin user (or cpadmin if you have selected to use Foundational Services IAM). If you want to change the password afterwards, you can do this from the Cloud Pak for Data user interface, but this means that the deployer will no longer be able to make changes to the Cloud Pak for Data configuration. If you have updated the admin password from the UI, please make sure you also update the secret in the vault. First, list the secrets in the vault: ./cp-deploy.sh vault list This will show something similar to the following: Secret list for group sample: - ibm_cp_entitlement_key - sample-provision-ssh-key - sample-provision-ssh-pub-key - sample-terraform-tfstate - cp4d_admin_zen_sample_sample Then, update the password: ./cp-deploy.sh vault set -vs cp4d_admin_zen_sample_sample -vsv \"my Really Sec3re Passw0rd\" Finally, run the deployer again. It will make the necessary changes to the OpenShift secret and check that the admin user can log in. In this case you can speed up the process via the --skip-infra flag. ./cp-deploy.sh env apply --skip-infra [--accept-all-licenses]","title":"Change Cloud Pak for Data admin password"},{"location":"10-use-deployer/7-command/command/","text":"Open a command line within the Cloud Pak Deployer container \ud83d\udd17 Sometimes you may need to access the OpenShift cluster using the OpenShift client. For convenience we have made the oc command available in the Cloud Pak Deployer and you can start exploring the current OpenShift cluster immediately without having to install the client on your own workstation. 
Prepare for the command line \ud83d\udd17 Set environment variables \ud83d\udd17 Make sure you have set the CONFIG_DIR and STATUS_DIR environment variables to the same values when you ran the env apply command. This will ensure that the oc command will access the OpenShift cluster(s) of that configuration. Optional: prepare OpenShift cluster \ud83d\udd17 If you have not run the deployer yet and do not intend to install any Cloud Paks, but you do want to access the OpenShift cluster from the command line to check or prepare items, run the deployer with the --skip-cp-install flag. ./cp-deploy.sh env apply --skip-cp-install Deployer will check the configuration, download clients, attempt to login to OpenShift and prepare the OpenShift cluster with the global pull secret and (for Cloud Pak for Data) node settings. After that the deployer will finish without installing any Cloud Pak. Run the Cloud Pak Deployer command line \ud83d\udd17 ./cp-deploy.sh env cmd You should see something like this: ------------------------------------------------------------------------------- Entering Cloud Pak Deployer command line in a container. Use the \"exit\" command to leave the container and return to the hosting server. 
------------------------------------------------------------------------------- Installing OpenShift client Current OpenShift context: cpd Now, you can check the OpenShift cluster version: [root@Cloud Pak Deployer Container ~]$ oc get clusterversion NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.8.14 True False 2d3h Cluster version is 4.8.14 Or, display the list of OpenShift projects: [root@Cloud Pak Deployer Container ~]$ oc get projects | grep -v openshift- NAME DISPLAY NAME STATUS calico-system Active default Active ibm-cert-store Active ibm-odf-validation-webhook Active ibm-system Active kube-node-lease Active kube-public Active kube-system Active openshift Active services Active tigera-operator Active cpd Active Exit the command line \ud83d\udd17 Once finished, exit out of the container. exit","title":"Running commands"},{"location":"10-use-deployer/7-command/command/#open-a-command-line-within-the-cloud-pak-deployer-container","text":"Sometimes you may need to access the OpenShift cluster using the OpenShift client. For convenience we have made the oc command available in the Cloud Pak Deployer and you can start exploring the current OpenShift cluster immediately without having to install the client on your own workstation.","title":"Open a command line within the Cloud Pak Deployer container"},{"location":"10-use-deployer/7-command/command/#prepare-for-the-command-line","text":"","title":"Prepare for the command line"},{"location":"10-use-deployer/7-command/command/#set-environment-variables","text":"Make sure you have set the CONFIG_DIR and STATUS_DIR environment variables to the same values when you ran the env apply command. 
This will ensure that the oc command will access the OpenShift cluster(s) of that configuration.","title":"Set environment variables"},{"location":"10-use-deployer/7-command/command/#optional-prepare-openshift-cluster","text":"If you have not run the deployer yet and do not intend to install any Cloud Paks, but you do want to access the OpenShift cluster from the command line to check or prepare items, run the deployer with the --skip-cp-install flag. ./cp-deploy.sh env apply --skip-cp-install Deployer will check the configuration, download clients, attempt to login to OpenShift and prepare the OpenShift cluster with the global pull secret and (for Cloud Pak for Data) node settings. After that the deployer will finish without installing any Cloud Pak.","title":"Optional: prepare OpenShift cluster"},{"location":"10-use-deployer/7-command/command/#run-the-cloud-pak-deployer-command-line","text":"./cp-deploy.sh env cmd You should see something like this: ------------------------------------------------------------------------------- Entering Cloud Pak Deployer command line in a container. Use the \"exit\" command to leave the container and return to the hosting server. 
------------------------------------------------------------------------------- Installing OpenShift client Current OpenShift context: cpd Now, you can check the OpenShift cluster version: [root@Cloud Pak Deployer Container ~]$ oc get clusterversion NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.8.14 True False 2d3h Cluster version is 4.8.14 Or, display the list of OpenShift projects: [root@Cloud Pak Deployer Container ~]$ oc get projects | grep -v openshift- NAME DISPLAY NAME STATUS calico-system Active default Active ibm-cert-store Active ibm-odf-validation-webhook Active ibm-system Active kube-node-lease Active kube-public Active kube-system Active openshift Active services Active tigera-operator Active cpd Active","title":"Run the Cloud Pak Deployer command line"},{"location":"10-use-deployer/7-command/command/#exit-the-command-line","text":"Once finished, exit out of the container. exit","title":"Exit the command line"},{"location":"10-use-deployer/9-destroy/destroy/","text":"Destroy the created resources \ud83d\udd17 If you have previously used the Cloud Pak Deployer to create assets on IBM Cloud, AWS or Azure, you can destroy the assets with the same command. Info Currently, destroy is only implemented for IBM Cloud ROKS, AWS and Azure ARO, not for other cloud platforms. Prepare for destroy \ud83d\udd17 Prepare for destroy on IBM Cloud \ud83d\udd17 Set environment variables for IBM Cloud \ud83d\udd17 export IBM_CLOUD_API_KEY=your_api_key Optional: set environment variables for deployer config and status directories. If not specified, respectively $HOME/cpd-config and $HOME/cpd-status will be used. export STATUS_DIR=$HOME/cpd-status export CONFIG_DIR=$HOME/cpd-config IBM_CLOUD_API_KEY : This is the API key you generated using your IBM Cloud account, this is a 40+ character string STATUS_DIR : The directory where the Cloud Pak Deployer keeps all status information and log files. 
Please note that if you have chosen to use a File Vault, the directory specified must be the one you used when you created the environment CONFIG_DIR : Directory that holds the configuration. This must be the same directory you used when you created the environment Prepare for destroy on AWS \ud83d\udd17 Set environment variables for AWS \ud83d\udd17 We assume that the vault already holds the mandatory secrets for AWS Access Key, Secret Access Key and ROSA login token. export STATUS_DIR=$HOME/cpd-status export CONFIG_DIR=$HOME/cpd-config STATUS_DIR : The directory where the Cloud Pak Deployer keeps all status information and log files. Please note that if you have chosen to use a File Vault, the directory specified must be the one you used when you created the environment CONFIG_DIR : Directory that holds the configuration. This must be the same directory you used when you created the environment Prepare for destroy on Azure \ud83d\udd17 Set environment variables for Azure \ud83d\udd17 We assume that the vault already holds the mandatory secrets for Azure - Service principal id and its password, tenant id and ARO login token. export STATUS_DIR=$HOME/cpd-status export CONFIG_DIR=$HOME/cpd-config STATUS_DIR : The directory where the Cloud Pak Deployer keeps all status information and log files. Please note that if you have chosen to use a File Vault, the directory specified must be the one you used when you created the environment CONFIG_DIR : Directory that holds the configuration. This must be the same directory you used when you created the environment Run the Cloud Pak Deployer to destroy the assets \ud83d\udd17 ./cp-deploy.sh env destroy --confirm-destroy Please ensure you specify the same extra (dynamic) variables that you used when you ran the env apply command. When running the command, the container will start as a daemon and the command will tail-follow the logs. 
You can press Ctrl-C at any time to interrupt the logging but the container will continue to run in the background. You can return to view the logs as follows: ./cp-deploy.sh env logs If you need to interrupt the process, use CTRL-C to stop the logging output and then use: ./cp-deploy.sh env kill Finishing up \ud83d\udd17 Once the process has finished successfully, you can delete the status directory.","title":"Destroy cluster"},{"location":"10-use-deployer/9-destroy/destroy/#destroy-the-created-resources","text":"If you have previously used the Cloud Pak Deployer to create assets on IBM Cloud, AWS or Azure, you can destroy the assets with the same command. Info Currently, destroy is only implemented for IBM Cloud ROKS, AWS and Azure ARO, not for other cloud platforms.","title":"Destroy the created resources"},{"location":"10-use-deployer/9-destroy/destroy/#prepare-for-destroy","text":"","title":"Prepare for destroy"},{"location":"10-use-deployer/9-destroy/destroy/#prepare-for-destroy-on-ibm-cloud","text":"","title":"Prepare for destroy on IBM Cloud"},{"location":"10-use-deployer/9-destroy/destroy/#set-environment-variables-for-ibm-cloud","text":"export IBM_CLOUD_API_KEY=your_api_key Optional: set environment variables for deployer config and status directories. If not specified, respectively $HOME/cpd-config and $HOME/cpd-status will be used. export STATUS_DIR=$HOME/cpd-status export CONFIG_DIR=$HOME/cpd-config IBM_CLOUD_API_KEY : This is the API key you generated using your IBM Cloud account, this is a 40+ character string STATUS_DIR : The directory where the Cloud Pak Deployer keeps all status information and log files. Please note that if you have chosen to use a File Vault, the directory specified must be the one you used when you created the environment CONFIG_DIR : Directory that holds the configuration. 
This must be the same directory you used when you created the environment","title":"Set environment variables for IBM Cloud"},{"location":"10-use-deployer/9-destroy/destroy/#prepare-for-destroy-on-aws","text":"","title":"Prepare for destroy on AWS"},{"location":"10-use-deployer/9-destroy/destroy/#set-environment-variables-for-aws","text":"We assume that the vault already holds the mandatory secrets for AWS Access Key, Secret Access Key and ROSA login token. export STATUS_DIR=$HOME/cpd-status export CONFIG_DIR=$HOME/cpd-config STATUS_DIR : The directory where the Cloud Pak Deployer keeps all status information and log files. Please note that if you have chosen to use a File Vault, the directory specified must be the one you used when you created the environment CONFIG_DIR : Directory that holds the configuration. This must be the same directory you used when you created the environment","title":"Set environment variables for AWS"},{"location":"10-use-deployer/9-destroy/destroy/#prepare-for-destroy-on-azure","text":"","title":"Prepare for destroy on Azure"},{"location":"10-use-deployer/9-destroy/destroy/#set-environment-variables-for-azure","text":"We assume that the vault already holds the mandatory secrets for Azure - Service principal id and its password, tenant id and ARO login token. export STATUS_DIR=$HOME/cpd-status export CONFIG_DIR=$HOME/cpd-config STATUS_DIR : The directory where the Cloud Pak Deployer keeps all status information and log files. Please note that if you have chosen to use a File Vault, the directory specified must be the one you used when you created the environment CONFIG_DIR : Directory that holds the configuration. 
This must be the same directory you used when you created the environment","title":"Set environment variables for Azure"},{"location":"10-use-deployer/9-destroy/destroy/#run-the-cloud-pak-deployer-to-destroy-the-assets","text":"./cp-deploy.sh env destroy --confirm-destroy Please ensure you specify the same extra (dynamic) variables that you used when you ran the env apply command. When running the command, the container will start as a daemon and the command will tail-follow the logs. You can press Ctrl-C at any time to interrupt the logging but the container will continue to run in the background. You can return to view the logs as follows: ./cp-deploy.sh env logs If you need to interrupt the process, use CTRL-C to stop the logging output and then use: ./cp-deploy.sh env kill","title":"Run the Cloud Pak Deployer to destroy the assets"},{"location":"10-use-deployer/9-destroy/destroy/#finishing-up","text":"Once the process has finished successfully, you can delete the status directory.","title":"Finishing up"},{"location":"30-reference/timings/","text":"Timings for the deployment \ud83d\udd17 Duration of the overall deployment process \ud83d\udd17 Phase Step Time in minutes Comments 10 - Validation 3 20 - Prepare Generators 3 30 - Provision infrastructure Create VPC 1 Create VSI without storage 5 Create VSI with storage 10 Create VPC ROKS cluster 45 Install ROKS OCS add-on and create storage classes 45 40 - Configure infrastructure Install NFS on VSIs 10 Create NFS storage classes 5 Create private container registry namespace 5 50 - Install Cloud Pak Prepare OpenShift for Cloud Pak for Data install 60 During this step, the compute nodes may be replaced and also the Kubernetes services may be restarted. Mirror Cloud Pak for Data images to private registry (only done when using private registry) 30-600 If the entitled registry is used, this step will be skipped. 
When using a private registry, if images have already been mirrored, the duration will be much shorter, approximately 10 minutes. Install Cloud Pak for Data control plane 20 Create Cloud Pak for Data subscriptions for cartridges 15 Install cartridges 20-300 The amount of time really depends on the cartridges being installed. In the table below you will find an estimate of the installation time for each cartridge. Cartridges will be installed in parallel through the operators. 60 - Configure Cloud Pak Configure Cloud Pak for Data LDAP 5 Provision instances for cartridges 30-60 For cartridges that have instances defined. Creation of the instances will run in parallel where possible. Configure cartridge and instance permissions based on LDAP config 10 70 - Deploy assets No activities yet 0 80 - Smoke tests Show Cloud Pak for Data cluster details 1 Cloud Pak for Data cartridge deployment \ud83d\udd17 Cartridge Full name Installation time Instance provisioning time Dependencies cpd_platform Cloud Pak for Data control plane 20 N/A ccs Common Core Services 75 N/A db2aas Db2 as a Service 30 N/A iis Information Server 60 N/A ccs, db2aas ca Cognos Analytics 20 45 ccs planning-analytics Planning Analytics 15 N/A watson_assistant Watson Assistant 70 N/A watson-discovery Watson Discovery 100 N/A watson-ks Watson Knowledge Studio 20 N/A watson-speech Watson Speech to Text and Text to Speech 20 N/A wkc Watson Knowledge Catalog 90 N/A ccs, db2aas, iis wml Watson Machine Learning 45 N/A ccs ws Watson Studio 30 N/A ccs Examples: Cloud Pak for Data installation with just Cognos Analytics will take 20 (control plane) + 75 (ccs) + 20 (ca) + 45 (ca instance) = ~160 minutes Cloud Pak for Data installation with Cognos Analytics and Watson Studio will take 20 (control plane) + 75 (ccs) + 45 (ws+ca) + 45 (ca instance) = ~185 minutes Cloud Pak for Data installation with just Watson Knowledge Catalog will take 20 (control plane) + 75 (ccs) + 30 (db2aas) + 60 (iis) + 90 (wkc) = ~275 minutes 
Cloud Pak for Data installation with Watson Knowledge Catalog and Watson Studio will take the same time because WS will finish 30 minutes after installing CCS, while WKC will take a lot longer to complete","title":"Timings"},{"location":"30-reference/timings/#timings-for-the-deployment","text":"","title":"Timings for the deployment"},{"location":"30-reference/timings/#duration-of-the-overall-deployment-process","text":"Phase Step Time in minutes Comments 10 - Validation 3 20 - Prepare Generators 3 30 - Provision infrastructure Create VPC 1 Create VSI without storage 5 Create VSI with storage 10 Create VPC ROKS cluster 45 Install ROKS OCS add-on and create storage classes 45 40 - Configure infrastructure Install NFS on VSIs 10 Create NFS storage classes 5 Create private container registry namespace 5 50 - Install Cloud Pak Prepare OpenShift for Cloud Pak for Data install 60 During this step, the compute nodes may be replaced and also the Kubernetes services may be restarted. Mirror Cloud Pak for Data images to private registry (only done when using private registry) 30-600 If the entitled registry is used, this step will be skipped. When using a private registry, if images have already been mirrored, the duration will be much shorter, approximately 10 minutes. Install Cloud Pak for Data control plane 20 Create Cloud Pak for Data subscriptions for cartridges 15 Install cartridges 20-300 The amount of time really depends on the cartridges being installed. In the table below you will find an estimate of the installation time for each cartridge. Cartridges will be installed in parallel through the operators. 60 - Configure Cloud Pak Configure Cloud Pak for Data LDAP 5 Provision instances for cartridges 30-60 For cartridges that have instances defined. Creation of the instances will run in parallel where possible. 
Configure cartridge and instance permissions based on LDAP config 10 70 - Deploy assets No activities yet 0 80 - Smoke tests Show Cloud Pak for Data cluster details 1","title":"Duration of the overall deployment process"},{"location":"30-reference/timings/#cloud-pak-for-data-cartridge-deployment","text":"Cartridge Full name Installation time Instance provisioning time Dependencies cpd_platform Cloud Pak for Data control plane 20 N/A ccs Common Core Services 75 N/A db2aas Db2 as a Service 30 N/A iis Information Server 60 N/A ccs, db2aas ca Cognos Analytics 20 45 ccs planning-analytics Planning Analytics 15 N/A watson_assistant Watson Assistant 70 N/A watson-discovery Watson Discovery 100 N/A watson-ks Watson Knowledge Studio 20 N/A watson-speech Watson Speech to Text and Text to Speech 20 N/A wkc Watson Knowledge Catalog 90 N/A ccs, db2aas, iis wml Watson Machine Learning 45 N/A ccs ws Watson Studio 30 N/A ccs Examples: Cloud Pak for Data installation with just Cognos Analytics will take 20 (control plane) + 75 (ccs) + 20 (ca) + 45 (ca instance) = ~160 minutes Cloud Pak for Data installation with Cognos Analytics and Watson Studio will take 20 (control plane) + 75 (ccs) + 45 (ws+ca) + 45 (ca instance) = ~185 minutes Cloud Pak for Data installation with just Watson Knowledge Catalog will take 20 (control plane) + 75 (ccs) + 30 (db2aas) + 60 (iis) + 90 (wkc) = ~275 minutes Cloud Pak for Data installation with Watson Knowledge Catalog and Watson Studio will take the same time because WS will finish 30 minutes after installing CCS, while WKC will take a lot longer to complete","title":"Cloud Pak for Data cartridge deployment"},{"location":"30-reference/configuration/cloud-pak/","text":"Cloud Paks \ud83d\udd17 Defines the Cloud Pak(s) which is/are laid out on the OpenShift cluster, typically in one or more OpenShift projects. 
The Cloud Pak definition represents the instance users connect to and which is responsible for managing the functional capabilities installed within the application. Cloud Pak configuration \ud83d\udd17 Cloud Pak for Data Cloud Pak for Integration Cloud Pak for Watson AIOps Cloud Pak for Business Automation cp4d \ud83d\udd17 Defines the Cloud Pak for Data instances to be configured on the OpenShift cluster(s). cp4d: - project: cpd openshift_cluster_name: sample cp4d_version: 4.7.3 sequential_install: False use_fs_iam: False change_node_settings: True db2u_limited_privileges: False accept_licenses: False openshift_storage_name: nfs-storage cp4d_entitlement: cpd-enterprise cp4d_production_license: True cartridges: - name: cpfs - name: cpd_platform Properties \ud83d\udd17 Property Description Mandatory Allowed values project Name of the OpenShift project of the Cloud Pak for Data instance Yes openshift_cluster_name Name of the OpenShift cluster Yes, inferred from openshift Existing openshift cluster cp4d_version Cloud Pak for Data version to install, this will determine the version for all cartridges that do not specify a version Yes 4.x.x sequential_install If set to True the deployer will run the OLM utils playbooks to install catalog sources, subscriptions and CRs. If set to False , deployer will use OLM utils to generate the scripts and then run them, which will cause the catalog sources, subscriptions and CRs to be created immediately and install in parallel No True (default), False use_fs_iam If set to True the deployer will enable Foundational Services IAM for authentication No False (default), True change_node_settings Controls whether the node settings using the machine configs will be applied onto the OpenShift cluster. No True, False db2u_limited_privileges Depicts whether Db2U containers run with limited privileges. If they do ( True ), Deployer will create KubeletConfig and Tuned OpenShift resources as per the documentation. 
No False (default), True accept_licenses Set to 'True' to accept Cloud Pak licenses. Alternatively the --accept-all-licenses can be used for the cp-deploy.sh command No True, False (default) cp4d_entitlement Set to cpd-enterprise , cpd-standard , watsonx-data , watsonx-ai , watsonx-gov-model-management , watsonx-gov-risk-compliance , dependent on the deployed license No cpd-enterprise (default), cpd-standard, watsonx-data, watsonx-ai, watsonx-gov-model-management, watsonx-gov-risk-compliance cp4d_production_license Whether the Cloud Pak for Data is a production license No True (default), False image_registry_name When using private registry, specify name of image_registry No openshift_storage_name References an openshift_storage element in the OpenShift cluster that was defined for this Cloud Pak for Data instance. The name must exist under `openshift.[openshift_cluster_name].openshift_storage. No, inferred from openshift->openshift_storage cartridges List of cartridges to install for this Cloud Pak for Data instance. See Cloud Pak for Data cartridges for more details Yes cp4i \ud83d\udd17 Defines the Cloud Pak for Integration installation to be configured on the OpenShift cluster(s). cp4i: - project: cp4i openshift_cluster_name: {{ env_id }} openshift_storage_name: nfs-rook-ceph cp4i_version: 2021.4.1 accept_licenses: False use_top_level_operator: False top_level_operator_channel: v1.5 top_level_operator_case_version: 2.5.0 operators_in_all_namespaces: True instances: - name: integration-navigator type: platform-navigator license: L-RJON-C7QG3S channel: v5.2 case_version: 1.5.0 OpenShift projects \ud83d\udd17 The immediate content of the cp4i object is actually a list of OpenShift projects (namespaces). There can be more than one project and instances can be created in separate projects. cp4i : - project : cp4i ... - project : cp4i-ace ... - project : cp4i-apic ... 
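To make the multi-project layout concrete, the following is a sketch only: the project names, instance names and placement are illustrative, while the property values are reused from the 2021.4.1 samples elsewhere on this page. cp4i: - project: cp4i openshift_cluster_name: "{{ env_id }}" openshift_storage_name: nfs-rook-ceph cp4i_version: 2021.4.1 accept_licenses: True instances: - name: integration-navigator type: platform-navigator license: L-RJON-C7QG3S channel: v5.2 case_version: 1.5.0 - project: cp4i-ace openshift_cluster_name: "{{ env_id }}" openshift_storage_name: nfs-rook-ceph cp4i_version: 2021.4.1 accept_licenses: True instances: - name: ace-dashboard type: integration-dashboard license: L-APEH-C79J9U version: 12.0 channel: v3.1 case_version: 3.1.0 Each project entry is processed independently, so instances of different types can be isolated in their own namespaces while sharing the same cluster and storage definitions. 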
Operator channels, CASE versions, license IDs \ud83d\udd17 Before you run the Cloud Pak Deployer, be sure that the correct operator channels are defined for the selected instance types. Some products require a license ID; please check the documentation of each product for the correct license. If you decide to use CASE files instead of the IBM Operator Catalog (more on that below), make sure that you select the correct CASE versions - please refer to https://github.com/IBM/cloud-pak/tree/master/repo/case Main properties \ud83d\udd17 The following properties are defined on the project level: Property Description Mandatory Allowed values project The name of the OpenShift project that will be created and used for the installation of the defined instances. Yes openshift_cluster_name Dynamically defined from the env_id parameter during the execution. Yes, inferred from openshift Existing openshift cluster openshift_storage_name Reference to the storage definition that exists in the openshift object (please see above). The definition must include the class name of the file storage type and the class name of the block storage type. No, inferred from openshift->openshift_storage cp4i_version The version of the Cloud Pak for Integration (e.g. 2021.4.1) Yes use_case_files The property defines if the CASE files are used for installation. If it is True then the operator catalogs are created from the CASE files. If it is False, the IBM Operator Catalog from the entitled registry is used. No True, False (default) accept_licenses Set to True to accept Cloud Pak licenses. Alternatively the --accept-all-licenses can be used for the cp-deploy.sh command Yes True, False use_top_level_operator If it is True then the CP4I top-level operator that installs all other operators is used. Otherwise, only the operators for the selected instance types are installed. No True, False (default) top_level_operator_channel Needed if the use_top_level_operator is True; otherwise it is ignored. 
Specifies the channel of the top-level operator. No top_level_operator_case_version Needed if the use_top_level_operator is True; otherwise it is ignored. Specifies the CASE package version of the top-level operator. No operators_in_all_namespaces Defines whether the operators are visible in all namespaces or only in the specific namespace where they are needed. No True, False (default) instances List of the instances that are going to be created (please see below). Yes Warning Although the properties use_case_files , use_top_level_operator and operators_in_all_namespaces are defined as optional, they are crucial to how the installation process is executed. If any of them is omitted, the default value False is assumed. If none of them is specified, all are False , which means that the IBM Operator Catalog is used and only the operators needed for the specified instance types are installed in the specific namespace. Properties of the individual instances \ud83d\udd17 The instances property contains one or more instance definitions. Each instance must have a unique name. There can be more than one instance of the same type. Naming convention for instance types \ud83d\udd17 For each instance definition, an instance type must be specified. The type names were chosen to match, as closely as possible, the naming used in the Platform Navigator user interface. 
The following table shows all existing types: Instance type Description/Product name platform-navigator Platform Navigator api-management IBM API Connect automation-assets Automation assets a.k.a Asset repo enterprise-gateway IBM Data Power event-endpoint-management Event endpoint manager - managing asynchronous APIs event-streams IBM Event Streams - Kafka high-speed-transfer-server Aspera HSTS integration-dashboard IBM App Connect Integration Dashboard integration-design IBM App Connect Designer integration-tracing Operations Dashboard messaging IBM MQ Platform navigator \ud83d\udd17 The Platform Navigator is defined as one of the instance types. There is typically only one instance of it. The exception would be an installation in two or more completely separate namespaces (see the CP4I documentation). Special attention is paid to the installation of the Navigator. The Cloud Pak Deployer will install the Navigator instance first, before any other instance, and it will wait until the instance is ready (this could take up to 45 minutes). When the installation is completed, you will find the admin user password in the status/cloud-paks/cp4i- -cp4i-PN-access.txt file. Of course, you can obtain the password also from the platform-auth-idp-credentials secret in ibm-common-services namespace. 
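As a sketch, a minimal Platform Navigator instance entry using the 2021.4.1 sample values from this page (the instance name is illustrative; case_version only applies when use_case_files is True): instances: - name: integration-navigator # unique, lowercase alphanumerics and "-" type: platform-navigator license: L-RJON-C7QG3S channel: v5.2 case_version: 1.5.0 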
Property Description Sample value for 2021.4.1 name Unique name within the cluster using only lowercase alphanumerics and \"-\" type It must be platform-navigator license License ID L-RJON-C7QG3S channel Subscription channel v5.2 case_version CASE version 1.5.0 API management (IBM API Connect) \ud83d\udd17 Property Description Sample value for 2021.4.1 name Unique name within the cluster using only lowercase alphanumerics and \"-\" type It must be api-management license License ID L-RJON-C7BJ42 version Version of API Connect 10.0.4.0 channel Subscription channel v2.4 case_version CASE version 3.0.5 Automation assets (Asset repo) \ud83d\udd17 Property Description Sample value for 2021.4.1 name Unique name within the cluster using only lowercase alphanumerics and \"-\" type It must be automation-assets license License ID L-PNAA-C68928 version Version of Asset repo 2021.4.1-2 channel Subscription channel v1.4 case_version CASE version 1.4.2 Enterprise gateway (IBM Data Power) \ud83d\udd17 Property Description Sample value for 2021.4.1 name Unique name within the cluster using only lowercase alphanumerics and \"-\" type It must be enterprise-gateway admin_password_secret The name of the secret where admin password is stored. The default name is used if you leave it empty. 
license License ID L-RJON-BYDR3Q version Version of Data Power 10.0-cd channel Subscription channel v1.5 case_version CASE version 1.5.0 Event endpoint management \ud83d\udd17 Property Description Sample value for 2021.4.1 name Unique name within the cluster using only lowercase alphanumerics and \"-\" type It must be event-endpoint-management license License ID L-RJON-C7BJ42 version Version of Event endpoint manager 10.0.4.0 channel Subscription channel v2.4 case_version CASE version 3.0.5 Event streams \ud83d\udd17 Property Description Sample value for 2021.4.1 name Unique name within the cluster using only lowercase alphanumerics and \"-\" type It must be event-streams version Version of Event streams 10.5.0 channel Subscription channel v2.5 case_version CASE version 1.5.2 High speed transfer server (Aspera HSTS) \ud83d\udd17 Property Description Sample value for 2021.4.1 name Unique name within the cluster using only lowercase alphanumerics and \"-\" type It must be high-speed-transfer-server aspera_key A license key for the Aspera software redis_version Version of the Redis database 5.0.9 version Version of Aspera HSTS 4.0.0 channel Subscription channel v1.4 case_version CASE version 1.4.0 Integration dashboard (IBM App Connect Dashboard) \ud83d\udd17 Property Description Sample value for 2021.4.1 name Unique name within the cluster using only lowercase alphanumerics and \"-\" type It must be integration-dashboard license License ID L-APEH-C79J9U version Version of IBM App Connect 12.0 channel Subscription channel v3.1 case_version CASE version 3.1.0 Integration design (IBM App Connect Designer) \ud83d\udd17 Property Description Sample value for 2021.4.1 name Unique name within the cluster using only lowercase alphanumerics and \"-\" type It must be integration-design license License ID L-KSBM-C87FU2 version Version of IBM App Connect 12.0 channel Subscription channel v3.1 case_version CASE version 3.1.0 Integration tracing (Operations Dashboard) \ud83d\udd17 
Property Description Sample value for 2021.4.1 name Unique name within the cluster using only lowercase alphanumerics and \"-\" type It must be integration-tracing version Version of Integration tracing 2021.4.1-2 channel Subscription channel v2.5 case_version CASE version 2.5.2 Messaging (IBM MQ) \ud83d\udd17 Property Description Sample value for 2021.4.1 name Unique name within the cluster using only lowercase alphanumerics and \"-\" type It must be messaging queue_manager_name The name of the initial queue manager. Default is QUICKSTART license License ID L-RJON-C7QG3S version Version of IBM MQ 9.2.4.0-r1 channel Subscription channel v1.7 case_version CASE version 1.7.0 cp4waiops \ud83d\udd17 Defines the Cloud Pak for Watson AIOps installation to be configured on the OpenShift cluster(s). The following instances can be installed by the deployer: * AI Manager * Event Manager * Turbonomic * Instana * Infrastructure management * ELK stack (ElasticSearch, Logstash, Kibana) Aside from the base install, the deployer can also install ready-to-use demos for each of the instances. cp4waiops: - project: cp4waiops openshift_cluster_name: \"{{ env_id }}\" openshift_storage_name: auto-storage accept_licenses: False instances: - name: cp4waiops-aimanager kind: AIManager install: true ... Main properties \ud83d\udd17 The following properties are defined on the project level: Property Description Mandatory Allowed values project The name of the OpenShift project that will be created and used for the installation of the defined instances. Yes openshift_cluster_name Dynamically defined from the env_id parameter during the execution. No, only if multiple OpenShift clusters are defined Existing openshift cluster openshift_storage_name Reference to the storage definition that exists in the openshift object (please see above). No, inferred from openshift->openshift_storage accept_licenses Set to True to accept Cloud Pak licenses. 
Alternatively the --accept-all-licenses can be used for the cp-deploy.sh command Yes True, False Service instances \ud83d\udd17 The project that is specified at the cp4waiops level defines the OpenShift project into which the instances of each of the services will be installed. Below is a list of instance \"kinds\" that can be installed. For every \"service instance\" there can also be a \"demo content\" entry to prepare the demo content for the capability. AI Manager \ud83d\udd17 instances: - name: cp4waiops-aimanager kind: AIManager install: true waiops_size: small custom_size_file: none waiops_name: ibm-cp-watson-aiops subscription_channel: v3.6 freeze_catalog: false Property Description Mandatory Allowed values name Unique name within the cluster using only lowercase alphanumerics and \"-\" Yes kind Service kind to install Yes AIManager install Must the service be installed? Yes true, false waiops_size Size of the install Yes small, tall, custom custom_size_file Name of the file holding the custom sizes if waiops_size is custom No waiops_name Name of the CP4WAIOPS instance Yes subscription_channel Subscription channel of the operator Yes freeze_catalog Freeze the version of the catalog source? Yes false, true case_install Must AI manager be installed via case files? No false, true case_github_url GitHub URL to download case file Yes if case_install is true case_name Name of the case file Yes if case_install is true case_version Version of the case file to download Yes if case_install is true case_inventory_setup Case file operation to run for this service Yes if case_install is true cpwaiopsSetup AI Manager - Demo Content \ud83d\udd17 instances: - name: cp4waiops-aimanager-demo-content kind: AIManagerDemoContent install: true ... Property Description Mandatory Allowed values name Unique name within the cluster using only lowercase alphanumerics and \"-\" Yes kind Service kind to install Yes AIManagerDemoContent install Must the content be installed? 
Yes true, false See sample config for remainder of properties. Event Manager \ud83d\udd17 instances: - name: cp4waiops-eventmanager kind: EventManager install: true subscription_channel: v1.11 starting_csv: noi.v1.7.0 noi_version: 1.6.6 Property Description Mandatory Allowed values name Unique name within the cluster using only lowercase alphanumerics and \"-\" Yes kind Service kind to install Yes EventManager install Must the service be installed? Yes true, false subscription_channel Subscription channel of the operator Yes starting_csv Starting Cluster Server Version Yes noi_version Version of noi Yes Event Manager Demo Content \ud83d\udd17 instances: - name: cp4waiops-eventmanager kind: EventManagerDemoContent install: true Property Description Mandatory Allowed values name Unique name within the cluster using only lowercase alphanumerics and \"-\" Yes kind Service kind to install Yes EventManagerDemoContent install Must the content be installed? Yes true, false Infrastructure Management \ud83d\udd17 instances: - name: cp4waiops-infrastructure-management kind: InfrastructureManagement install: false subscription_channel: v3.5 Property Description Mandatory Allowed values name Unique name within the cluster using only lowercase alphanumerics and \"-\" Yes kind Service kind to install Yes InfrastructureManagement install Must the service be installed? Yes true, false subscription_channel Subscription channel of the operator Yes ELK stack \ud83d\udd17 ElasticSearch, Logstash and Kibana stack. instances: - name: cp4waiops-elk kind: ELK install: false Property Description Mandatory Allowed values name Unique name within the cluster using only lowercase alphanumerics and \"-\" Yes kind Service kind to install Yes ELK install Must the service be installed? 
Yes true, false Instana \ud83d\udd17 instances: - name: cp4waiops-instana kind: Instana install: true version: 241-0 sales_key: 'NONE' agent_key: 'NONE' instana_admin_user: \"admin@instana.local\" #instana_admin_pass: 'P4ssw0rd!' install_agent: true integrate_aimanager: true #integrate_turbonomic: true Property Description Mandatory Allowed values name Unique name within the cluster using only lowercase alphanumerics and \"-\" Yes kind Service kind to install Yes Instana install Must the service be installed? Yes true, false version Version of Instana to install No sales_key License key to be configured No agent_key License key for agent to be configured No instana_admin_user Instana admin user to be configured Yes instana_admin_pass Instana admin user password to be set (if different from global password) No install_agent Must the Instana agent be installed? Yes true, false integrate_aimanager Must Instana be integrated with AI Manager? Yes true, false integrate_turbonomic Must Instana be integrated with Turbonomic? No true, false Turbonomic \ud83d\udd17 instances: - name: cp4waiops-turbonomic kind: Turbonomic install: true turbo_version: 8.7.0 Property Description Mandatory Allowed values name Unique name within the cluster using only lowercase alphanumerics and \"-\" Yes kind Service kind to install Yes Turbonomic install Must the service be installed? Yes true, false turbo_version Version of Turbonomic to install Yes Turbonomic Demo Content \ud83d\udd17 instances: - name: cp4waiops-turbonomic-demo-content kind: TurbonomicDemoContent install: true #turbo_admin_password: P4ssw0rd! create_user: false demo_user: demo #turbo_demo_password: P4ssw0rd! Property Description Mandatory Allowed values name Unique name within the cluster using only lowercase alphanumerics and \"-\" Yes kind Service kind to install Yes TurbonomicDemoContent install Must the content be installed? 
Yes true, false turbo_admin_pass Turbonomic admin user password to be set (if different from global password) No create_user Must the demo user be created? No false, true demo_user Name of the demo user No turbo_demo_password Demo user password if different from global password No See sample config for remainder of properties. cp4ba \ud83d\udd17 Defines the Cloud Pak for Business Automation installation to be configured on the OpenShift cluster(s). See Cloud Pak for Business Automation for additional details. --- cp4ba : - project : cp4ba collateral_project : cp4ba-collateral openshift_cluster_name : \"{{ env_id }}\" openshift_storage_name : auto-storage accept_licenses : false state : installed cpfs_profile_size : small # Profile size which affect replicas and resources of Pods of CPFS as per https://www.ibm.com/docs/en/cpfs?topic=operator-hardware-requirements-recommendations-foundational-services # Section for Cloud Pak for Business Automation itself cp4ba : # Set to false if you don't want to install (or remove) CP4BA enabled : true # Currently always true profile_size : small # Profile size which affect replicas and resources of Pods as per https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/latest?topic=pcmppd-system-requirements patterns : foundation : # Foundation pattern, always true - https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/latest?topic=deployment-capabilities-production-deployments#concept_c2l_1ks_fnb__foundation optional_components : bas : true # Business Automation Studio (BAS) bai : true # Business Automation Insights (BAI) ae : true # Application Engine (AE) decisions : # Operational Decision Manager (ODM) - https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/latest?topic=deployment-capabilities-production-deployments#concept_c2l_1ks_fnb__odm enabled : true optional_components : decision_center : true # Decision Center (ODM) decision_runner : true # Decision Runner (ODM) decision_server_runtime : true # Decision Server 
(ODM) # Additional customization for Operational Decision Management # Contents of the following will be merged into ODM part of CP4BA CR yaml file. Arrays are overwritten. cr_custom : spec : odm_configuration : decisionCenter : # Enable support for decision models disabledDecisionModel : false decisions_ads : # Automation Decision Services (ADS) - https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/latest?topic=deployment-capabilities-production-deployments#concept_c2l_1ks_fnb__ads enabled : true optional_components : ads_designer : true # Designer (ADS) ads_runtime : true # Runtime (ADS) content : # FileNet Content Manager (FNCM) - https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/latest?topic=deployment-capabilities-production-deployments#concept_c2l_1ks_fnb__ecm enabled : true optional_components : cmis : true # Content Management Interoperability Services (FNCM - CMIS) css : true # Content Search Services (FNCM - CSS) es : true # External Share (FNCM - ES) tm : true # Task Manager (FNCM - TM) ier : true # IBM Enterprise Records (FNCM - IER) icc4sap : false # IBM Content Collector for SAP (FNCM - ICC4SAP) - Currently not implemented application : # Business Automation Application (BAA) - https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/latest?topic=deployment-capabilities-production-deployments#concept_c2l_1ks_fnb__baa enabled : true optional_components : app_designer : true # App Designer (BAA) ae_data_persistence : true # App Engine data persistence (BAA) document_processing : # Automation Document Processing (ADP) - https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/latest?topic=deployment-capabilities-production-deployments#concept_c2l_1ks_fnb__adp enabled : true optional_components : document_processing_designer : true # Designer (ADP) # Additional customization for Automation Document Processing # Contents of the following will be merged into ADP part of CP4BA CR yaml file. Arrays are overwritten. 
cr_custom : spec : ca_configuration : # GPU config as described on https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/latest?topic=resource-configuring-document-processing deeplearning : gpu_enabled : false nodelabel_key : nvidia.com/gpu.present nodelabel_value : \"true\" # [Tech Preview] Deploy OCR Engine 2 (IOCR) for ADP - https://www.ibm.com/support/pages/extraction-language-technology-preview-feature-available-automation-document-processing-2301 ocrextraction : use_iocr : none # Allowed values: \"none\" to uninstall, \"all\" or \"auto\" to install (these are aliases) workflow : # Business Automation Workflow (BAW) - https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/latest?topic=deployment-capabilities-production-deployments#concept_c2l_1ks_fnb__baw enabled : true optional_components : baw_authoring : true # Workflow Authoring (BAW) - always keep true if workflow pattern is chosen. BAW Runtime is not implemented. kafka : true # Will install a kafka cluster and enable kafka service for workflow authoring. # Section for IBM Process mining pm : # Set to false if you don't want to install (or remove) Process Mining enabled : true # Additional customization for Process Mining # Contents of the following will be merged into PM CR yaml file. Arrays are overwritten. cr_custom : spec : processmining : storage : # Disables redis to spare resources as per https://www.ibm.com/docs/en/process-mining/latest?topic=configurations-custom-resource-definition redis : install : false # Section for IBM Robotic Process Automation rpa : # Set to false if you don't want to install (or remove) RPA enabled : true # Additional customization for Robotic Process Automation # Contents of the following will be merged into RPA CR yaml file. Arrays are overwritten. cr_custom : spec : # Configures the NLP provider component of IBM RPA. You can disable it by specifying 0. 
https://www.ibm.com/docs/en/rpa/latest?topic=platform-configuring-rpa-custom-resources#basic-setup nlp : replicas : 1 # Set to false if you don't want to install (or remove) CloudBeaver (PostgreSQL, DB2, MSSQL UI) cloudbeaver_enabled : true # Set to false if you don't want to install (or remove) Roundcube roundcube_enabled : true # Set to false if you don't want to install (or remove) Cerebro cerebro_enabled : true # Set to false if you don't want to install (or remove) AKHQ akhq_enabled : true # Set to false if you don't want to install (or remove) Mongo Express mongo_express_enabled : true # Set to false if you don't want to install (or remove) phpLDAPAdmin phpldapadmin_enabled : true Main properties \ud83d\udd17 The following properties are defined on the project level. Property Description Mandatory Allowed values project The name of the OpenShift project that will be created and used for the installation of the defined instances. Yes Valid OCP project name collateral_project The name of the OpenShift project that will be created and used for the installation of all collateral (prerequisites and extras). Yes Valid OCP project name openshift_cluster_name Dynamically defined from the env_id parameter during the execution. No, only if multiple OpenShift clusters are defined Existing openshift cluster openshift_storage_name Reference to the storage definition that exists in the openshift object (please see above). No, inferred from openshift->openshift_storage accept_licenses Set to true to accept Cloud Pak licenses. Alternatively the --accept-all-licenses can be used for the cp-deploy.sh command Yes true, false state Set to installed to install enabled capabilities, set to removed to remove enabled capabilities. 
Yes installed, removed cpfs_profile_size Profile size which affects replicas and resources of Pods of CPFS as per https://www.ibm.com/docs/en/cpfs?topic=operator-hardware-requirements-recommendations-foundational-services Yes starterset, small, medium, large Cloud Pak for Business Automation properties \ud83d\udd17 Used to configure CP4BA. Placed in cp4ba key on the project level. Property Description Mandatory Allowed values enabled Set to true to enable CP4BA. Currently always true . Yes true profile_size Profile size which affects replicas and resources of Pods as per https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/latest?topic=pcmppd-system-requirements Yes small, medium, large patterns Section where CP4BA patterns are configured. Please make sure to select everything that is needed as dependencies. Dependencies can be determined from the documentation at https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/latest?topic=deployment-capabilities-production-deployments Yes Object - see details below Foundation pattern properties \ud83d\udd17 Always configured in CP4BA. https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/latest?topic=deployment-capabilities-production-deployments#concept_c2l_1ks_fnb__foundation Placed in cp4ba.patterns.foundation key. Property Description Mandatory Allowed values optional_components Sub object for definition of optional components for pattern. Yes Object - specific to each pattern optional_components.bas Set to true to enable Business Automation Studio Yes true, false optional_components.bai Set to true to enable Business Automation Insights Yes true, false optional_components.ae Set to true to enable Application Engine Yes true, false Decisions pattern properties \ud83d\udd17 Used to configure Operational Decision Manager. https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/latest?topic=deployment-capabilities-production-deployments#concept_c2l_1ks_fnb__odm Placed in cp4ba.patterns.decisions key. 
Property Description Mandatory Allowed values enabled Set to true to enable decisions pattern. Yes true, false optional_components Sub object for definition of optional components for pattern. Yes Object - specific to each pattern optional_components.decision_center Set to true to enable Decision Center Yes true, false optional_components.decision_runner Set to true to enable Decision Runner Yes true, false optional_components.decision_server_runtime Set to true to enable Decision Server Yes true, false cr_custom Additional customization for Operational Decision Management. Contents will be merged into ODM part of CP4BA CR yaml file. Arrays are overwritten. No Object Decisions ADS pattern properties \ud83d\udd17 Used to configure Automation Decision Services. https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/latest?topic=deployment-capabilities-production-deployments#concept_c2l_1ks_fnb__ads Placed in cp4ba.patterns.decisions_ads key. Property Description Mandatory Allowed values enabled Set to true to enable decisions_ads pattern. Yes true, false optional_components Sub object for definition of optional components for pattern. Yes Object - specific to each pattern optional_components.ads_designer Set to true to enable Designer Yes true, false optional_components.ads_runtime Set to true to enable Runtime Yes true, false Content pattern properties \ud83d\udd17 Used to configure FileNet Content Manager. https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/latest?topic=deployment-capabilities-production-deployments#concept_c2l_1ks_fnb__ecm Placed in cp4ba.patterns.content key. Property Description Mandatory Allowed values enabled Set to true to enable content pattern. Yes true, false optional_components Sub object for definition of optional components for pattern. 
Yes Object - specific to each pattern optional_components.cmis Set to true to enable CMIS Yes true, false optional_components.css Set to true to enable Content Search Services Yes true, false optional_components.es Set to true to enable External Share. Currently not functional. Yes true, false optional_components.tm Set to true to enable Task Manager Yes true, false optional_components.ier Set to true to enable IBM Enterprise Records Yes true, false optional_components.icc4sap Set to true to enable IBM Content Collector for SAP. Currently not functional. Always false. Yes false Application pattern properties \ud83d\udd17 Used to configure Business Automation Application. https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/latest?topic=deployment-capabilities-production-deployments#concept_c2l_1ks_fnb__baa Placed in cp4ba.patterns.application key. Property Description Mandatory Allowed values enabled Set to true to enable application pattern. Yes true, false optional_components Sub object for definition of optional components for pattern. Yes Object - specific to each pattern optional_components.app_designer Set to true to enable Application Designer Yes true, false optional_components.ae_data_persistence Set to true to enable App Engine data persistence Yes true, false Document Processing pattern properties \ud83d\udd17 Used to configure Automation Document Processing. https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/latest?topic=deployment-capabilities-production-deployments#concept_c2l_1ks_fnb__baa Placed in cp4ba.patterns.document_processing key. Property Description Mandatory Allowed values enabled Set to true to enable document_processing pattern. Yes true, false optional_components Sub object for definition of optional components for pattern. Yes Object - specific to each pattern optional_components.document_processing_designer Set to true to enable Designer Yes true cr_custom Additional customization for Automation Document Processing. 
Contents will be merged into ADP part of CP4BA CR yaml file. Arrays are overwritten. No Object Workflow pattern properties \ud83d\udd17 Used to configure Business Automation Workflow. https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/latest?topic=deployment-capabilities-production-deployments#concept_c2l_1ks_fnb__baw Placed in cp4ba.patterns.workflow key. Property Description Mandatory Allowed values enabled Set to true to enable workflow pattern. Yes true, false optional_components Sub object for definition of optional components for pattern. Yes Object - specific to each pattern optional_components.baw_authoring Set to true to enable Workflow Authoring. Currently always true . Yes true optional_components.kafka Set to true to install a kafka cluster and enable kafka service for workflow authoring. Yes true, false Process Mining properties \ud83d\udd17 Used to configure IBM Process Mining. Placed in pm key on the project level. Property Description Mandatory Allowed values enabled Set to true to enable process mining . Yes true, false cr_custom Additional customization for Process Mining. Contents will be merged into PM CR yaml file. Arrays are overwritten. No Object Robotic Process Automation properties \ud83d\udd17 Used to configure IBM Robotic Process Automation. Placed in rpa key on the project level. Property Description Mandatory Allowed values enabled Set to true to enable rpa . Yes true, false cr_custom Additional customization for Robotic Process Automation. Contents will be merged into RPA CR yaml file. Arrays are overwritten. No Object Other properties \ud83d\udd17 Used to configure extra UIs. The following properties are defined on the project level. Property Description Mandatory Allowed values cloudbeaver_enabled Set to true to enable CloudBeaver (PostgreSQL, DB2, MSSQL UI). Yes true, false roundcube_enabled Set to true to enable Roundcube. Client for mail. Yes true, false cerebro_enabled Set to true to enable Cerebro. Client for ElasticSearch in CP4BA. 
Yes true, false akhq_enabled Set to true to enable AKHQ. Client for Kafka in CP4BA. Yes true, false mongo_express_enabled Set to true to enable Mongo Express. Client for MongoDB. Yes true, false phpldapadmin_enabled Set to true to enable phpLDAPAdmin. Client for OpenLDAP. Yes true, false","title":"Cloud Paks"},{"location":"30-reference/configuration/cloud-pak/#cloud-paks","text":"Defines the Cloud Pak(s) which is/are laid out on the OpenShift cluster, typically in one or more OpenShift projects. The Cloud Pak definition represents the instance that users connect to and that is responsible for managing the functional capabilities installed within the application.","title":"Cloud Paks"},{"location":"30-reference/configuration/cloud-pak/#cloud-pak-configuration","text":"Cloud Pak for Data Cloud Pak for Integration Cloud Pak for Watson AIOps Cloud Pak for Business Automation","title":"Cloud Pak configuration"},{"location":"30-reference/configuration/cloud-pak/#cp4d","text":"Defines the Cloud Pak for Data instances to be configured on the OpenShift cluster(s). 
cp4d: - project: cpd openshift_cluster_name: sample cp4d_version: 4.7.3 sequential_install: False use_fs_iam: False change_node_settings: True db2u_limited_privileges: False accept_licenses: False openshift_storage_name: nfs-storage cp4d_entitlement: cpd-enterprise cp4d_production_license: True cartridges: - name: cpfs - name: cpd_platform","title":"cp4d"},{"location":"30-reference/configuration/cloud-pak/#properties","text":"Property Description Mandatory Allowed values project Name of the OpenShift project of the Cloud Pak for Data instance Yes openshift_cluster_name Name of the OpenShift cluster Yes, inferred from openshift Existing openshift cluster cp4d_version Cloud Pak for Data version to install; this determines the version for all cartridges that do not specify a version Yes 4.x.x sequential_install If set to True the deployer will run the OLM utils playbooks to install catalog sources, subscriptions and CRs. If set to False , the deployer will use OLM utils to generate the scripts and then run them, which will cause the catalog sources, subscriptions and CRs to be created immediately and install in parallel No True (default), False use_fs_iam If set to True the deployer will enable Foundational Services IAM for authentication No False (default), True change_node_settings Controls whether the node settings using the machine configs will be applied onto the OpenShift cluster. No True, False db2u_limited_privileges Indicates whether Db2U containers run with limited privileges. If they do ( True ), the deployer will create KubeletConfig and Tuned OpenShift resources as per the documentation. No False (default), True accept_licenses Set to True to accept Cloud Pak licenses. 
Alternatively the --accept-all-licenses flag can be used for the cp-deploy.sh command No True, False (default) cp4d_entitlement Set to cpd-enterprise , cpd-standard , watsonx-data , watsonx-ai , watsonx-gov-model-management , watsonx-gov-risk-compliance , depending on the deployed license No cpd-enterprise (default), cpd-standard, watsonx-data, watsonx-ai, watsonx-gov-model-management, watsonx-gov-risk-compliance cp4d_production_license Whether the Cloud Pak for Data is a production license No True (default), False image_registry_name When using a private registry, specify the name of the image_registry No openshift_storage_name References an openshift_storage element in the OpenShift cluster that was defined for this Cloud Pak for Data instance. The name must exist under openshift.[openshift_cluster_name].openshift_storage. No, inferred from openshift->openshift_storage cartridges List of cartridges to install for this Cloud Pak for Data instance. See Cloud Pak for Data cartridges for more details Yes","title":"Properties"},{"location":"30-reference/configuration/cloud-pak/#cp4i","text":"Defines the Cloud Pak for Integration installation to be configured on the OpenShift cluster(s). cp4i: - project: cp4i openshift_cluster_name: {{ env_id }} openshift_storage_name: nfs-rook-ceph cp4i_version: 2021.4.1 accept_licenses: False use_top_level_operator: False top_level_operator_channel: v1.5 top_level_operator_case_version: 2.5.0 operators_in_all_namespaces: True instances: - name: integration-navigator type: platform-navigator license: L-RJON-C7QG3S channel: v5.2 case_version: 1.5.0","title":"cp4i"},{"location":"30-reference/configuration/cloud-pak/#openshift-projects","text":"The immediate content of the cp4i object is actually a list of OpenShift projects (namespaces). There can be more than one project and instances can be created in separate projects. cp4i : - project : cp4i ... - project : cp4i-ace ... 
- project : cp4i-apic ...","title":"OpenShift projects"},{"location":"30-reference/configuration/cloud-pak/#operator-channels-case-versions-license-ids","text":"Before you run the Cloud Pak Deployer, make sure that the correct operator channels are defined for the selected instance types. Some products require a license ID; please check the documentation of each product for the correct license. If you decide to use CASE files instead of the IBM Operator Catalog (more on that below), make sure that you select the correct CASE versions - please refer to https://github.com/IBM/cloud-pak/tree/master/repo/case","title":"Operator channels, CASE versions, license IDs"},{"location":"30-reference/configuration/cloud-pak/#main-properties","text":"The following properties are defined on the project level: Property Description Mandatory Allowed values project The name of the OpenShift project that will be created and used for the installation of the defined instances. Yes openshift_cluster_name Dynamically defined from the env_id parameter during execution. Yes, inferred from openshift Existing openshift cluster openshift_storage_name Reference to the storage definition that exists in the openshift object (please see above). The definition must include the class name of the file storage type and the class name of the block storage type. No, inferred from openshift->openshift_storage cp4i_version The version of the Cloud Pak for Integration (e.g. 2021.4.1) Yes use_case_files Defines whether CASE files are used for the installation. If True , the operator catalogs are created from the CASE files. If False , the IBM Operator Catalog from the entitled registry is used. No True, False (default) accept_licenses Set to True to accept Cloud Pak licenses. Alternatively the --accept-all-licenses flag can be used for the cp-deploy.sh command Yes True, False use_top_level_operator If True , the CP4I top-level operator that installs all other operators is used. 
Otherwise, only the operators for the selected instance types are installed. No True, False (default) top_level_operator_channel Needed if use_top_level_operator is True ; otherwise it is ignored. Specifies the channel of the top-level operator. No top_level_operator_case_version Needed if use_top_level_operator is True ; otherwise it is ignored. Specifies the CASE package version of the top-level operator. No operators_in_all_namespaces Defines whether the operators are visible in all namespaces or just in the specific namespace where they are needed. No True, False (default) instances List of the instances that are going to be created (please see below). Yes Warning Although the properties use_case_files , use_top_level_operator and operators_in_all_namespaces are defined as optional, they are crucial to how the installation process is executed. If any of them is omitted, the default value False is assumed. If none of them is specified, all are False , which means that the IBM Operator Catalog is used and only the operators needed for the specified instance types are installed in the specific namespace.","title":"Main properties"},{"location":"30-reference/configuration/cloud-pak/#properties-of-the-individual-instances","text":"The instances property contains one or more instance definitions. Each instance must have a unique name. There can be more than one instance of the same type.","title":"Properties of the individual instances"},{"location":"30-reference/configuration/cloud-pak/#naming-convention-for-instance-types","text":"For each instance definition, an instance type must be specified. We selected type names that are as similar as possible to the naming convention used in the Platform Navigator user interface. 
The following table shows all existing types: Instance type Description/Product name platform-navigator Platform Navigator api-management IBM API Connect automation-assets Automation assets a.k.a Asset repo enterprise-gateway IBM Data Power event-endpoint-management Event endpoint manager - managing asynchronous APIs event-streams IBM Event Streams - Kafka high-speed-transfer-server Aspera HSTS integration-dashboard IBM App Connect Integration Dashboard integration-design IBM App Connect Designer integration-tracing Operations Dashboard messaging IBM MQ","title":"Naming convention for instance types"},{"location":"30-reference/configuration/cloud-pak/#platform-navigator","text":"The Platform Navigator is defined as one of the instance types. There is typically only one instance of it. The exception would be an installation in two or more completely separate namespaces (see the CP4I documentation). Special attention is paid to the installation of the Navigator. The Cloud Pak Deployer will install the Navigator instance first, before any other instance, and it will wait until the instance is ready (this could take up to 45 minutes). When the installation is completed, you will find the admin user password in the status/cloud-paks/cp4i- -cp4i-PN-access.txt file. Of course, you can obtain the password also from the platform-auth-idp-credentials secret in ibm-common-services namespace. 
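As described above, the Platform Navigator admin password can also be read from the platform-auth-idp-credentials secret. A sketch of how to retrieve it with oc, assuming you are logged in to the cluster and foundational services run in the ibm-common-services namespace:

```shell
# Read the Platform Navigator admin password from the foundational
# services credentials secret and decode it from base64.
oc get secret platform-auth-idp-credentials \
  -n ibm-common-services \
  -o jsonpath='{.data.admin_password}' | base64 -d
```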
Property Description Sample value for 2021.4.1 name Unique name within the cluster using only lowercase alphanumerics and \"-\" type It must be platform-navigator license License ID L-RJON-C7QG3S channel Subscription channel v5.2 case_version CASE version 1.5.0","title":"Platform navigator"},{"location":"30-reference/configuration/cloud-pak/#api-management-ibm-api-connect","text":"Property Description Sample value for 2021.4.1 name Unique name within the cluster using only lowercase alphanumerics and \"-\" type It must be api-management license License ID L-RJON-C7BJ42 version Version of API Connect 10.0.4.0 channel Subscription channel v2.4 case_version CASE version 3.0.5","title":"API management (IBM API Connect)"},{"location":"30-reference/configuration/cloud-pak/#automation-assets-asset-repo","text":"Property Description Sample value for 2021.4.1 name Unique name within the cluster using only lowercase alphanumerics and \"-\" type It must be automation-assets license License ID L-PNAA-C68928 version Version of Asset repo 2021.4.1-2 channel Subscription channel v1.4 case_version CASE version 1.4.2","title":"Automation assets (Asset repo)"},{"location":"30-reference/configuration/cloud-pak/#enterprise-gateway-ibm-data-power","text":"Property Description Sample value for 2021.4.1 name Unique name within the cluster using only lowercase alphanumerics and \"-\" type It must be enterprise-gateway admin_password_secret The name of the secret where admin password is stored. The default name is used if you leave it empty. 
license License ID L-RJON-BYDR3Q version Version of Data Power 10.0-cd channel Subscription channel v1.5 case_version CASE version 1.5.0","title":"Enterprise gateway (IBM Data Power)"},{"location":"30-reference/configuration/cloud-pak/#event-endpoint-management","text":"Property Description Sample value for 2021.4.1 name Unique name within the cluster using only lowercase alphanumerics and \"-\" type It must be event-endpoint-management license License ID L-RJON-C7BJ42 version Version of Event endpoint manager 10.0.4.0 channel Subscription channel v2.4 case_version CASE version 3.0.5","title":"Event endpoint management"},{"location":"30-reference/configuration/cloud-pak/#event-streams","text":"Property Description Sample value for 2021.4.1 name Unique name within the cluster using only lowercase alphanumerics and \"-\" type It must be event-streams version Version of Event streams 10.5.0 channel Subscription channel v2.5 case_version CASE version 1.5.2","title":"Event streams"},{"location":"30-reference/configuration/cloud-pak/#high-speed-transfer-server-aspera-hsts","text":"Property Description Sample value for 2021.4.1 name Unique name within the cluster using only lowercase alphanumerics and \"-\" type It must be high-speed-transfer-server aspera_key A license key for the Aspera software redis_version Version of the Redis database 5.0.9 version Version of Aspera HSTS 4.0.0 channel Subscription channel v1.4 case_version CASE version 1.4.0","title":"High speed transfer server (Aspera HSTS)"},{"location":"30-reference/configuration/cloud-pak/#integration-dashboard-ibm-app-connect-dashboard","text":"Property Description Sample value for 2021.4.1 name Unique name within the cluster using only lowercase alphanumerics and \"-\" type It must be integration-dashboard license License ID L-APEH-C79J9U version Version of IBM App Connect 12.0 channel Subscription channel v3.1 case_version CASE version 3.1.0","title":"Integration dashboard (IBM App Connect 
Dashboard)"},{"location":"30-reference/configuration/cloud-pak/#integration-design-ibm-app-connect-designer","text":"Property Description Sample value for 2021.4.1 name Unique name within the cluster using only lowercase alphanumerics and \"-\" type It must be integration-design license License ID L-KSBM-C87FU2 version Version of IBM App Connect 12.0 channel Subscription channel v3.1 case_version CASE version 3.1.0","title":"Integration design (IBM App Connect Designer)"},{"location":"30-reference/configuration/cloud-pak/#integration-tracing-operation-dashborad","text":"Property Description Sample value for 2021.4.1 name Unique name within the cluster using only lowercase alphanumerics and \"-\" type It must be integration-tracing version Version of Integration tracing 2021.4.1-2 channel Subscription channel v2.5 case_version CASE version 2.5.2","title":"Integration tracing (Operations Dashboard)"},{"location":"30-reference/configuration/cloud-pak/#messaging-ibm-mq","text":"Property Description Sample value for 2021.4.1 name Unique name within the cluster using only lowercase alphanumerics and \"-\" type It must be messaging queue_manager_name The name of the initial queue manager. Default is QUICKSTART license License ID L-RJON-C7QG3S version Version of IBM MQ 9.2.4.0-r1 channel Subscription channel v1.7 case_version CASE version 1.7.0","title":"Messaging (IBM MQ)"},{"location":"30-reference/configuration/cloud-pak/#cp4waiops","text":"Defines the Cloud Pak for Watson AIOps installation to be configured on the OpenShift cluster(s). 
The following instances can be installed by the deployer: * AI Manager * Event Manager * Turbonomic * Instana * Infrastructure management * ELK stack (ElasticSearch, Logstash, Kibana) Aside from the base install, the deployer can also install ready-to-use demos for each of the instances. cp4waiops: - project: cp4waiops openshift_cluster_name: \"{{ env_id }}\" openshift_storage_name: auto-storage accept_licenses: False instances: - name: cp4waiops-aimanager kind: AIManager install: true ...","title":"cp4waiops"},{"location":"30-reference/configuration/cloud-pak/#main-properties_1","text":"The following properties are defined on the project level: Property Description Mandatory Allowed values project The name of the OpenShift project that will be created and used for the installation of the defined instances. Yes openshift_cluster_name Dynamically defined from the env_id parameter during execution. No, only if multiple OpenShift clusters are defined Existing openshift cluster openshift_storage_name Reference to the storage definition that exists in the openshift object (please see above). No, inferred from openshift->openshift_storage accept_licenses Set to True to accept Cloud Pak licenses. Alternatively the --accept-all-licenses flag can be used for the cp-deploy.sh command Yes True, False","title":"Main properties"},{"location":"30-reference/configuration/cloud-pak/#service-instances","text":"The project that is specified at the cp4waiops level defines the OpenShift project into which the instances of each of the services will be installed. Below is a list of instance \"kinds\" that can be installed. 
For every \"service instance\" there can also be a \"demo content\" entry to prepare the demo content for the capability.","title":"Service instances"},{"location":"30-reference/configuration/cloud-pak/#ai-manager","text":"instances: - name: cp4waiops-aimanager kind: AIManager install: true waiops_size: small custom_size_file: none waiops_name: ibm-cp-watson-aiops subscription_channel: v3.6 freeze_catalog: false Property Description Mandatory Allowed values name Unique name within the cluster using only lowercase alphanumerics and \"-\" Yes kind Service kind to install Yes AIManager install Must the service be installed? Yes true, false waiops_size Size of the install Yes small, tall, custom custom_size_file Name of the file holding the custom sizes if waiops_size is custom No waiops_name Name of the CP4WAIOPS instance Yes subscription_channel Subscription channel of the operator Yes freeze_catalog Freeze the version of the catalog source? Yes false, true case_install Must AI manager be installed via case files? No false, true case_github_url GitHub URL to download case file Yes if case_install is true case_name Name of the case file Yes if case_install is true case_version Version of the case file to download Yes if case_install is true case_inventory_setup Case file operation to run for this service Yes if case_install is true cpwaiopsSetup","title":"AI Manager"},{"location":"30-reference/configuration/cloud-pak/#ai-manager---demo-content","text":"instances: - name: cp4waiops-aimanager-demo-content kind: AIManagerDemoContent install: true ... Property Description Mandatory Allowed values name Unique name within the cluster using only lowercase alphanumerics and \"-\" Yes kind Service kind to install Yes AIManagerDemoContent install Must the content be installed? 
Yes true, false See sample config for remainder of properties.","title":"AI Manager - Demo Content"},{"location":"30-reference/configuration/cloud-pak/#event-manager","text":"instances: - name: cp4waiops-eventmanager kind: EventManager install: true subscription_channel: v1.11 starting_csv: noi.v1.7.0 noi_version: 1.6.6 Property Description Mandatory Allowed values name Unique name within the cluster using only lowercase alphanumerics and \"-\" Yes kind Service kind to install Yes EventManager install Must the service be installed? Yes true, false subscription_channel Subscription channel of the operator Yes starting_csv Starting ClusterServiceVersion (CSV) of the operator Yes noi_version Version of NOI Yes","title":"Event Manager"},{"location":"30-reference/configuration/cloud-pak/#event-manager-demo-content","text":"instances: - name: cp4waiops-eventmanager kind: EventManagerDemoContent install: true Property Description Mandatory Allowed values name Unique name within the cluster using only lowercase alphanumerics and \"-\" Yes kind Service kind to install Yes EventManagerDemoContent install Must the content be installed? Yes true, false","title":"Event Manager Demo Content"},{"location":"30-reference/configuration/cloud-pak/#infrastructure-management","text":"instances: - name: cp4waiops-infrastructure-management kind: InfrastructureManagement install: false subscription_channel: v3.5 Property Description Mandatory Allowed values name Unique name within the cluster using only lowercase alphanumerics and \"-\" Yes kind Service kind to install Yes InfrastructureManagement install Must the service be installed? Yes true, false subscription_channel Subscription channel of the operator Yes","title":"Infrastructure Management"},{"location":"30-reference/configuration/cloud-pak/#elk-stack","text":"ElasticSearch, Logstash and Kibana stack. 
instances: - name: cp4waiops-elk kind: ELK install: false Property Description Mandatory Allowed values name Unique name within the cluster using only lowercase alphanumerics and \"-\" Yes kind Service kind to install Yes ELK install Must the service be installed? Yes true, false","title":"ELK stack"},{"location":"30-reference/configuration/cloud-pak/#instana","text":"instances: - name: cp4waiops-instana kind: Instana install: true version: 241-0 sales_key: 'NONE' agent_key: 'NONE' instana_admin_user: \"admin@instana.local\" #instana_admin_pass: 'P4ssw0rd!' install_agent: true integrate_aimanager: true #integrate_turbonomic: true Property Description Mandatory Allowed values name Unique name within the cluster using only lowercase alphanumerics and \"-\" Yes kind Service kind to install Yes Instana install Must the service be installed? Yes true, false version Version of Instana to install No sales_key License key to be configured No agent_key License key for agent to be configured No instana_admin_user Instana admin user to be configured Yes instana_admin_pass Instana admin user password to be set (if different from global password) No install_agent Must the Instana agent be installed? Yes true, false integrate_aimanager Must Instana be integrated with AI Manager? Yes true, false integrate_turbonomic Must Instana be integrated with Turbonomic? No true, false","title":"Instana"},{"location":"30-reference/configuration/cloud-pak/#turbonomic","text":"instances: - name: cp4waiops-turbonomic kind: Turbonomic install: true turbo_version: 8.7.0 Property Description Mandatory Allowed values name Unique name within the cluster using only lowercase alphanumerics and \"-\" Yes kind Service kind to install Yes Turbonomic install Must the service be installed? 
Yes true, false turbo_version Version of Turbonomic to install Yes","title":"Turbonomic"},{"location":"30-reference/configuration/cloud-pak/#turbonomic-demo-content","text":"instances: - name: cp4waiops-turbonomic-demo-content kind: TurbonomicDemoContent install: true #turbo_admin_password: P4ssw0rd! create_user: false demo_user: demo #turbo_demo_password: P4ssw0rd! Property Description Mandatory Allowed values name Unique name within the cluster using only lowercase alphanumerics and \"-\" Yes kind Service kind to install Yes TurbonomicDemoContent install Must the content be installed? Yes true, false turbo_admin_pass Turbonomic admin user password to be set (if different from global password) No create_user Must the demo user be created? No false, true demo_user Name of the demo user No turbo_demo_password Demo user password if different from global password No See sample config for remainder of properties.","title":"Turbonomic Demo Content"},{"location":"30-reference/configuration/cloud-pak/#cp4ba","text":"Defines the Cloud Pak for Business Automation installation to be configured on the OpenShift cluster(s). See Cloud Pak for Business Automation for additional details. 
--- cp4ba : - project : cp4ba collateral_project : cp4ba-collateral openshift_cluster_name : \"{{ env_id }}\" openshift_storage_name : auto-storage accept_licenses : false state : installed cpfs_profile_size : small # Profile size which affect replicas and resources of Pods of CPFS as per https://www.ibm.com/docs/en/cpfs?topic=operator-hardware-requirements-recommendations-foundational-services # Section for Cloud Pak for Business Automation itself cp4ba : # Set to false if you don't want to install (or remove) CP4BA enabled : true # Currently always true profile_size : small # Profile size which affect replicas and resources of Pods as per https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/latest?topic=pcmppd-system-requirements patterns : foundation : # Foundation pattern, always true - https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/latest?topic=deployment-capabilities-production-deployments#concept_c2l_1ks_fnb__foundation optional_components : bas : true # Business Automation Studio (BAS) bai : true # Business Automation Insights (BAI) ae : true # Application Engine (AE) decisions : # Operational Decision Manager (ODM) - https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/latest?topic=deployment-capabilities-production-deployments#concept_c2l_1ks_fnb__odm enabled : true optional_components : decision_center : true # Decision Center (ODM) decision_runner : true # Decision Runner (ODM) decision_server_runtime : true # Decision Server (ODM) # Additional customization for Operational Decision Management # Contents of the following will be merged into ODM part of CP4BA CR yaml file. Arrays are overwritten. 
cr_custom : spec : odm_configuration : decisionCenter : # Enable support for decision models disabledDecisionModel : false decisions_ads : # Automation Decision Services (ADS) - https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/latest?topic=deployment-capabilities-production-deployments#concept_c2l_1ks_fnb__ads enabled : true optional_components : ads_designer : true # Designer (ADS) ads_runtime : true # Runtime (ADS) content : # FileNet Content Manager (FNCM) - https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/latest?topic=deployment-capabilities-production-deployments#concept_c2l_1ks_fnb__ecm enabled : true optional_components : cmis : true # Content Management Interoperability Services (FNCM - CMIS) css : true # Content Search Services (FNCM - CSS) es : true # External Share (FNCM - ES) tm : true # Task Manager (FNCM - TM) ier : true # IBM Enterprise Records (FNCM - IER) icc4sap : false # IBM Content Collector for SAP (FNCM - ICC4SAP) - Currently not implemented application : # Business Automation Application (BAA) - https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/latest?topic=deployment-capabilities-production-deployments#concept_c2l_1ks_fnb__baa enabled : true optional_components : app_designer : true # App Designer (BAA) ae_data_persistence : true # App Engine data persistence (BAA) document_processing : # Automation Document Processing (ADP) - https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/latest?topic=deployment-capabilities-production-deployments#concept_c2l_1ks_fnb__adp enabled : true optional_components : document_processing_designer : true # Designer (ADP) # Additional customization for Automation Document Processing # Contents of the following will be merged into ADP part of CP4BA CR yaml file. Arrays are overwritten. 
cr_custom : spec : ca_configuration : # GPU config as described on https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/latest?topic=resource-configuring-document-processing deeplearning : gpu_enabled : false nodelabel_key : nvidia.com/gpu.present nodelabel_value : \"true\" # [Tech Preview] Deploy OCR Engine 2 (IOCR) for ADP - https://www.ibm.com/support/pages/extraction-language-technology-preview-feature-available-automation-document-processing-2301 ocrextraction : use_iocr : none # Allowed values: \"none\" to uninstall, \"all\" or \"auto\" to install (these are aliases) workflow : # Business Automation Workflow (BAW) - https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/latest?topic=deployment-capabilities-production-deployments#concept_c2l_1ks_fnb__baw enabled : true optional_components : baw_authoring : true # Workflow Authoring (BAW) - always keep true if workflow pattern is chosen. BAW Runtime is not implemented. kafka : true # Will install a kafka cluster and enable kafka service for workflow authoring. # Section for IBM Process mining pm : # Set to false if you don't want to install (or remove) Process Mining enabled : true # Additional customization for Process Mining # Contents of the following will be merged into PM CR yaml file. Arrays are overwritten. cr_custom : spec : processmining : storage : # Disables redis to spare resources as per https://www.ibm.com/docs/en/process-mining/latest?topic=configurations-custom-resource-definition redis : install : false # Section for IBM Robotic Process Automation rpa : # Set to false if you don't want to install (or remove) RPA enabled : true # Additional customization for Robotic Process Automation # Contents of the following will be merged into RPA CR yaml file. Arrays are overwritten. cr_custom : spec : # Configures the NLP provider component of IBM RPA. You can disable it by specifying 0. 
https://www.ibm.com/docs/en/rpa/latest?topic=platform-configuring-rpa-custom-resources#basic-setup nlp : replicas : 1 # Set to false if you don't want to install (or remove) CloudBeaver (PostgreSQL, DB2, MSSQL UI) cloudbeaver_enabled : true # Set to false if you don't want to install (or remove) Roundcube roundcube_enabled : true # Set to false if you don't want to install (or remove) Cerebro cerebro_enabled : true # Set to false if you don't want to install (or remove) AKHQ akhq_enabled : true # Set to false if you don't want to install (or remove) Mongo Express mongo_express_enabled : true # Set to false if you don't want to install (or remove) phpLDAPadmin phpldapadmin_enabled : true","title":"cp4ba"},{"location":"30-reference/configuration/cloud-pak/#main-properties_2","text":"The following properties are defined on the project level. Property Description Mandatory Allowed values project The name of the OpenShift project that will be created and used for the installation of the defined instances. Yes Valid OCP project name collateral_project The name of the OpenShift project that will be created and used for the installation of all collateral (prerequisites and extras). Yes Valid OCP project name openshift_cluster_name Dynamically defined from the env_id parameter during execution. No, only if multiple OpenShift clusters are defined Existing OpenShift cluster openshift_storage_name Reference to the storage definition that exists in the openshift object (please see above). No, inferred from openshift->openshift_storage accept_licenses Set to true to accept Cloud Pak licenses. Alternatively, the --accept-all-licenses flag can be used with the cp-deploy.sh command Yes true, false state Set to installed to install enabled capabilities, set to removed to remove enabled capabilities. 
Yes installed, removed cpfs_profile_size Profile size which affects replicas and resources of Pods of CPFS as per https://www.ibm.com/docs/en/cpfs?topic=operator-hardware-requirements-recommendations-foundational-services Yes starterset, small, medium, large","title":"Main properties"},{"location":"30-reference/configuration/cloud-pak/#cloud-pak-for-business-automation-properties","text":"Used to configure CP4BA. Placed in cp4ba key on the project level. Property Description Mandatory Allowed values enabled Set to true to enable CP4BA. Currently always true . Yes true profile_size Profile size which affects replicas and resources of Pods as per https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/latest?topic=pcmppd-system-requirements Yes small, medium, large patterns Section where CP4BA patterns are configured. Please make sure to select everything that is needed as dependencies. Dependencies can be determined from the documentation at https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/latest?topic=deployment-capabilities-production-deployments Yes Object - see details below","title":"Cloud Pak for Business Automation properties"},{"location":"30-reference/configuration/cloud-pak/#foundation-pattern-properties","text":"Always configured in CP4BA. https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/latest?topic=deployment-capabilities-production-deployments#concept_c2l_1ks_fnb__foundation Placed in cp4ba.patterns.foundation key. Property Description Mandatory Allowed values optional_components Sub object for definition of optional components for pattern. 
Yes Object - specific to each pattern optional_components.bas Set to true to enable Business Automation Studio Yes true, false optional_components.bai Set to true to enable Business Automation Insights Yes true, false optional_components.ae Set to true to enable Application Engine Yes true, false","title":"Foundation pattern properties"},{"location":"30-reference/configuration/cloud-pak/#decisions-pattern-properties","text":"Used to configure Operational Decision Manager. https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/latest?topic=deployment-capabilities-production-deployments#concept_c2l_1ks_fnb__odm Placed in cp4ba.patterns.decisions key. Property Description Mandatory Allowed values enabled Set to true to enable decisions pattern. Yes true, false optional_components Sub object for definition of optional components for pattern. Yes Object - specific to each pattern optional_components.decision_center Set to true to enable Decision Center Yes true, false optional_components.decision_runner Set to true to enable Decision Runner Yes true, false optional_components.decision_server_runtime Set to true to enable Decision Server Yes true, false cr_custom Additional customization for Operational Decision Management. Contents will be merged into ODM part of CP4BA CR yaml file. Arrays are overwritten. No Object","title":"Decisions pattern properties"},{"location":"30-reference/configuration/cloud-pak/#decisions-ads-pattern-properties","text":"Used to configure Automation Decision Services. https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/latest?topic=deployment-capabilities-production-deployments#concept_c2l_1ks_fnb__ads Placed in cp4ba.patterns.decisions_ads key. Property Description Mandatory Allowed values enabled Set to true to enable decisions_ads pattern. Yes true, false optional_components Sub object for definition of optional components for pattern. 
Yes Object - specific to each pattern optional_components.ads_designer Set to true to enable Designer Yes true, false optional_components.ads_runtime Set to true to enable Runtime Yes true, false","title":"Decisions ADS pattern properties"},{"location":"30-reference/configuration/cloud-pak/#content-pattern-properties","text":"Used to configure FileNet Content Manager. https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/latest?topic=deployment-capabilities-production-deployments#concept_c2l_1ks_fnb__ecm Placed in cp4ba.patterns.content key. Property Description Mandatory Allowed values enabled Set to true to enable content pattern. Yes true, false optional_components Sub object for definition of optional components for pattern. Yes Object - specific to each pattern optional_components.cmis Set to true to enable CMIS Yes true, false optional_components.css Set to true to enable Content Search Services Yes true, false optional_components.es Set to true to enable External Share. Currently not functional. Yes true, false optional_components.tm Set to true to enable Task Manager Yes true, false optional_components.ier Set to true to enable IBM Enterprise Records Yes true, false optional_components.icc4sap Set to true to enable IBM Content Collector for SAP. Currently not functional. Always false. Yes false","title":"Content pattern properties"},{"location":"30-reference/configuration/cloud-pak/#application-pattern-properties","text":"Used to configure Business Automation Application. https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/latest?topic=deployment-capabilities-production-deployments#concept_c2l_1ks_fnb__baa Placed in cp4ba.patterns.application key. Property Description Mandatory Allowed values enabled Set to true to enable application pattern. Yes true, false optional_components Sub object for definition of optional components for pattern. 
Yes Object - specific to each pattern optional_components.app_designer Set to true to enable Application Designer Yes true, false optional_components.ae_data_persistence Set to true to enable App Engine data persistence Yes true, false","title":"Application pattern properties"},{"location":"30-reference/configuration/cloud-pak/#document-processing-pattern-properties","text":"Used to configure Automation Document Processing. https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/latest?topic=deployment-capabilities-production-deployments#concept_c2l_1ks_fnb__adp Placed in cp4ba.patterns.document_processing key. Property Description Mandatory Allowed values enabled Set to true to enable document_processing pattern. Yes true, false optional_components Sub object for definition of optional components for pattern. Yes Object - specific to each pattern optional_components.document_processing_designer Set to true to enable Designer Yes true cr_custom Additional customization for Automation Document Processing. Contents will be merged into ADP part of CP4BA CR yaml file. Arrays are overwritten. No Object","title":"Document Processing pattern properties"},{"location":"30-reference/configuration/cloud-pak/#workflow-pattern-properties","text":"Used to configure Business Automation Workflow. https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/latest?topic=deployment-capabilities-production-deployments#concept_c2l_1ks_fnb__baw Placed in cp4ba.patterns.workflow key. Property Description Mandatory Allowed values enabled Set to true to enable workflow pattern. Yes true, false optional_components Sub object for definition of optional components for pattern. Yes Object - specific to each pattern optional_components.baw_authoring Set to true to enable Workflow Authoring. Currently always true . Yes true optional_components.kafka Set to true to install a Kafka cluster and enable the Kafka service for workflow authoring. 
Yes true, false","title":"Workflow pattern properties"},{"location":"30-reference/configuration/cloud-pak/#process-mining-properties","text":"Used to configure IBM Process Mining. Placed in pm key on the project level. Property Description Mandatory Allowed values enabled Set to true to enable process mining . Yes true, false cr_custom Additional customization for Process Mining. Contents will be merged into PM CR yaml file. Arrays are overwritten. No Object","title":"Process Mining properties"},{"location":"30-reference/configuration/cloud-pak/#robotic-process-automation-properties","text":"Used to configure IBM Robotic Process Automation. Placed in rpa key on the project level. Property Description Mandatory Allowed values enabled Set to true to enable rpa . Yes true, false cr_custom Additional customization for Robotic Process Automation. Contents will be merged into RPA CR yaml file. Arrays are overwritten. No Object","title":"Robotic Process Automation properties"},{"location":"30-reference/configuration/cloud-pak/#other-properties","text":"Used to configure extra UIs. The following properties are defined on the project level. Property Description Mandatory Allowed values cloudbeaver_enabled Set to true to enable CloudBeaver (PostgreSQL, DB2, MSSQL UI). Yes true, false roundcube_enabled Set to true to enable Roundcube. Client for mail. Yes true, false cerebro_enabled Set to true to enable Cerebro. Client for Elasticsearch in CP4BA. Yes true, false akhq_enabled Set to true to enable AKHQ. Client for Kafka in CP4BA. Yes true, false mongo_express_enabled Set to true to enable Mongo Express. Client for MongoDB. Yes true, false phpldapadmin_enabled Set to true to enable phpLDAPadmin. Client for OpenLDAP. Yes true, false","title":"Other properties"},{"location":"30-reference/configuration/cp4ba/","text":"Cloud Pak for Business Automation \ud83d\udd17 Contains CP4BA version 23.0.2 iFix 2. 
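As an illustration of how the property tables above fit together, a project-level entry might look like the following sketch. This is a hedged example: the top-level cp4ba list form mirrors the cp4d_asset sample elsewhere in this reference, and all values are illustrative rather than defaults.

```yaml
# Illustrative sketch only - key names follow the property tables above,
# values are examples, not defaults.
cp4ba:
- project: cp4ba                     # OCP project for the instances
  collateral_project: cp4ba-collateral
  openshift_storage_name: ocs-storage
  accept_licenses: false             # or pass --accept-all-licenses to cp-deploy.sh
  state: installed                   # installed | removed
  cpfs_profile_size: starterset      # starterset | small | medium | large
  cp4ba:
    enabled: true
    profile_size: small              # small | medium | large
    patterns:
      foundation:                    # always configured
        optional_components:
          bas: true
          bai: true
          ae: true
      decisions:
        enabled: true
        optional_components:
          decision_center: true
          decision_runner: true
          decision_server_runtime: true
  pm:
    enabled: true
  rpa:
    enabled: true
  cloudbeaver_enabled: true
  roundcube_enabled: true
  cerebro_enabled: true
  akhq_enabled: true
  mongo_express_enabled: true
  phpldapadmin_enabled: true
```

Remember that patterns pull in dependencies; select every pattern a chosen capability needs, as described in the patterns property above.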
RPA and Process Mining are currently not deployed due to a discrepancy in the Cloud Pak Foundational Services version. Contains IPM version 1.14.3. Contains RPA version 23.0.14. Disclaimer \u270b Documentation base \ud83d\udcdd Benefits \ud83d\ude80 General information \ud83d\udce2 What is in the package \ud83d\udce6 Environments used for installation \ud83d\udcbb Automated post-deployment tasks \u2705 Post installation steps \u27a1\ufe0f Usage & operations \ud83d\udcc7 Disclaimer \u270b \ud83d\udd17 This is not official IBM documentation. Absolutely no warranties, no support, no responsibility for anything. Use it at your own risk and always follow the official IBM documentation. It is always your responsibility to make sure you are license compliant when using this repository to install IBM Cloud Pak for Business Automation. Please do not hesitate to create an issue here if needed. Your feedback is appreciated. Not for production use (neither dev nor test or prod environments). Suitable for demo and PoC environments - but deployed using the production deployment type. !Important - Keep in mind that this deployment contains capabilities (the ones which are not bundled with CP4BA) which are not eligible to run on Worker Nodes covered by CP4BA OCP Restricted licenses. More info on https://www.ibm.com/docs/en/cloud-paks/1.0?topic=clusters-restricted-openshift-entitlement . Documentation base \ud83d\udcdd \ud83d\udd17 Deploying CP4BA is based on the official documentation which is located at https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/latest . Deployment of other parts is also based on the respective official documentation. 
IBM Robotic Process Automation (RPA) https://www.ibm.com/docs/en/rpa/latest?topic=installing-rpa-red-hat-openshift-container-platform IBM Automation Assets https://www.ibm.com/docs/en/cloud-paks/1.0?topic=foundation-automation-assets IBM Process Mining https://www.ibm.com/docs/en/process-mining/latest?topic=installing-red-hat-openshift-container-platform-environments IBM Automation Foundation (IAF) https://www.ibm.com/docs/en/cloud-paks/1.0?topic=automation-foundation IBM Cloud Pak Foundational Services (CPFS) https://www.ibm.com/docs/en/cpfs?topic=operator-installing-foundational-services-online Benefits \ud83d\ude80 \ud83d\udd17 Automatic deployment of the whole platform with almost no prerequisites for you to take care of OCP Ingress certificate is used for Routes so there is only one certificate you need to trust on your local machine to trust all URLs of the whole platform A trusted certificate in the browser also enables you to save passwords Wherever possible a common admin user cpadmin with adjustable password is used, so you don't need to remember multiple credentials when you want to access the platform (convenience also comes with responsibility - so you don't want to expose your platform to the whole world) The whole platform is running on containers, so you don't need to manually prepare anything on traditional VMs and take care of them including required prerequisites Many otherwise manual post-deployment steps have been automated Pre-integrated and automatically connected extras are deployed in the platform for easier access/management/troubleshooting You have a working Production deployment which you can use as a reference for further custom deployments General information \ud83d\udce2 \ud83d\udd17 What is not included: ICCs - not covered. FNCM External share - Currently not supported with ZEN & IAM as per FNCM limitations Asset Repository - it is more part of CP4I. Workflow Server and Workstream Services - this is a dev deployment. 
BAW Authoring and (BAW + IAWS) are mutually exclusive in a single project. ADP Runtime deployment - this is a dev deployment. What is in the package \ud83d\udce6 \ud83d\udd17 When you perform a full deployment, as a result you will get the full CP4BA platform as seen in the picture. You can also omit some capabilities - this is covered later in this doc. More details about each section from the picture follow below it. Extras section \ud83d\udd17 Contains extra software which makes working with the platform even easier. phpLDAPadmin - Web UI for the OpenLDAP directory making it easier to admin and troubleshoot the LDAP. Gitea - Contains Git server with web UI and is used for ADS and ADP for project sharing and publishing. Organizations for ADS and ADP are automatically created. Gitea is connected to OpenLDAP for authentication and authorization. Nexus - Repository manager which contains pushed ADS Java libraries needed for custom development and also for publishing custom ADS jars. Nexus is connected to OpenLDAP for authentication and authorization. Roundcube - Web UI for the included Mail server to be able to browse incoming emails. Cerebro - Web UI Elasticsearch browser automatically connected to the ES instance deployed with CP4BA. AKHQ - Web UI Kafka browser automatically connected to the Kafka instance deployed with CP4BA. Kibana - Web UI Elasticsearch dashboard tool automatically connected to the ES instance deployed with CP4BA. Mail server - For various mail integrations e.g. from BAN, BAW and RPA. Mongo Express - Web UI for the MongoDB databases for CP4BA and Process Mining to more easily troubleshoot the DBs. CloudBeaver - Web UI for the PostgreSQL and MSSQL databases making it easier to admin and troubleshoot the DBs. CP4BA (Cloud Pak for Business Automation) section \ud83d\udd17 CP4BA capabilities \ud83d\udd17 CP4BA capabilities are in purple color. More info for CP4BA capabilities is available in official docs at https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/latest . 
More specifically in overview of patterns at https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/latest?topic=deployment-capabilities-production-deployments . Pink color is used for CPFS dedicated capabilities. More info for CPFS dedicated capabilities is available in official docs at https://www.ibm.com/docs/en/cloud-paks/foundational-services/latest . Magenta color is used for additional capabilities. More info for Process Mining is available in official docs at https://www.ibm.com/docs/en/process-mining/latest . More info for RPA is available in official docs at https://www.ibm.com/docs/en/rpa/latest . Assets are currently not deployed. CPFS (Cloud Pak Foundational Services) section \ud83d\udd17 Contains services which are reused by Cloud Paks. More info available in official docs at https://www.ibm.com/docs/en/cpfs . License metering - Tracks license usage. Certificate Manager - Provides certificate handling. Pre-requisites section \ud83d\udd17 Contains prerequisites for the whole platform. PostgreSQL - Database storage for the majority of capabilities. OpenLDAP - Directory solution for users and groups definition. MSSQL server - Database storage for the RPA server. MongoDB - Database storage for ADS and Process Mining. Environments used for installation \ud83d\udcbb \ud83d\udd17 With proper sizing of the cluster and provided RWX File and RWO Block Storage Class, CP4BA deployed with the Deployer should work on any OpenShift 4.12 with Worker Nodes which in total have 60 CPUs and 128 GB of memory. Automated post-deployment tasks \u2705 \ud83d\udd17 For your convenience, the following post-deployment setup tasks have been automated: CPFS - OCP Ingress certificate is used for better SSL trusting. Zen - Users and Groups added. Zen - Administrative group is given all available privileges from all pillars. Zen - Regular groups are given developer privileges from all pillars. 
Zen - Service account created in CPFS IAM and Zen, and a Zen API key is generated for convenient and stable usage. Zen - OCP Ingress certificate is used for better SSL trusting. Workforce Insights - Connection setup. You just need to create the WFI dashboard. https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/22.0.1?topic=secrets-creating-custom-bpc-workforce-secret ADS - Nexus connection setup and all ADS plugins loaded. ADS - Organization in Git created. https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/22.0.1?topic=gst-task-2-connecting-git-repository-sharing-decision-service ADS - Automatic Git project connection https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/22.0.1?topic=services-connecting-remote-repository-automatically ODM - Service user credentials automatically assigned to servers. ADP - Organization in Git created. https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/22.0.1?topic=processing-setting-up-remote-git-organization ADP - Default project data loaded. https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/22.0.1?topic=processing-loading-default-sample-data ADP - Git connection and CDD repo creation done. https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/latest?topic=processing-setting-up-remote-git-organization IER - Task Manager pod has the TM_JOB_URL parameter set. IER - Task manager set up with CPE JARs required by IER. Task manager - Enabled in Navigator. BAW - tw_admins enhanced with LDAP admin groups. BAW - tw_authors enhanced with LDAP user and admin groups. BAI - extra Flink task manager added for custom event processing. RPA - Bot Developer permission added to administrative user. IPM - Task mining related permissions added to admin user. IPM - Task mining admin user enabled for TM agent usage. Post installation steps \u27a1\ufe0f \ud83d\udd17 CP4BA Review and perform post deploy manual steps for CP4BA as specified in Project cloud-pak-deployer in ConfigMap cp4ba-postdeploy in postdeploy.md file. 
It is best to copy the contents and open them in a Markdown editor such as VS Code. RPA Review and perform post deploy manual steps for RPA as specified in Project cloud-pak-deployer in ConfigMap cp4ba-rpa-postdeploy in postdeploy.md file. It is best to copy the contents and open them in a Markdown editor such as VS Code. Process Mining Review and perform post deploy manual steps for IPM as specified in Project cloud-pak-deployer in ConfigMap cp4ba-pm-postdeploy in postdeploy.md file. It is best to copy the contents and open them in a Markdown editor such as VS Code. Usage & operations \ud83d\udcc7 \ud83d\udd17 Endpoints, access info and other useful information is available in Project cloud-pak-deployer in ConfigMap cp4ba-usage in usage.md file after installation. It is best to copy the contents and open them in a Markdown editor such as VS Code.","title":"Cloud Pak for Business Automation"},{"location":"30-reference/configuration/cp4ba/#cloud-pak-for-business-automation","text":"Contains CP4BA version 23.0.2 iFix 2. RPA and Process Mining are currently not deployed due to a discrepancy in the Cloud Pak Foundational Services version. Contains IPM version 1.14.3. Contains RPA version 23.0.14. Disclaimer \u270b Documentation base \ud83d\udcdd Benefits \ud83d\ude80 General information \ud83d\udce2 What is in the package \ud83d\udce6 Environments used for installation \ud83d\udcbb Automated post-deployment tasks \u2705 Post installation steps \u27a1\ufe0f Usage & operations \ud83d\udcc7","title":"Cloud Pak for Business Automation"},{"location":"30-reference/configuration/cp4ba/#disclaimer-","text":"This is not official IBM documentation. Absolutely no warranties, no support, no responsibility for anything. Use it at your own risk and always follow the official IBM documentation. It is always your responsibility to make sure you are license compliant when using this repository to install IBM Cloud Pak for Business Automation. Please do not hesitate to create an issue here if needed. 
Your feedback is appreciated. Not for production use (neither dev nor test or prod environments). Suitable for demo and PoC environments - but deployed using the production deployment type. !Important - Keep in mind that this deployment contains capabilities (the ones which are not bundled with CP4BA) which are not eligible to run on Worker Nodes covered by CP4BA OCP Restricted licenses. More info on https://www.ibm.com/docs/en/cloud-paks/1.0?topic=clusters-restricted-openshift-entitlement .","title":"Disclaimer \u270b"},{"location":"30-reference/configuration/cp4ba/#documentation-base-","text":"Deploying CP4BA is based on the official documentation which is located at https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/latest . Deployment of other parts is also based on the respective official documentation. IBM Robotic Process Automation (RPA) https://www.ibm.com/docs/en/rpa/latest?topic=installing-rpa-red-hat-openshift-container-platform IBM Automation Assets https://www.ibm.com/docs/en/cloud-paks/1.0?topic=foundation-automation-assets IBM Process Mining https://www.ibm.com/docs/en/process-mining/latest?topic=installing-red-hat-openshift-container-platform-environments IBM Automation Foundation (IAF) https://www.ibm.com/docs/en/cloud-paks/1.0?topic=automation-foundation IBM Cloud Pak Foundational Services (CPFS) https://www.ibm.com/docs/en/cpfs?topic=operator-installing-foundational-services-online","title":"Documentation base \ud83d\udcdd"},{"location":"30-reference/configuration/cp4ba/#benefits-","text":"Automatic deployment of the whole platform with almost no prerequisites for you to take care of OCP Ingress certificate is used for Routes so there is only one certificate you need to trust on your local machine to trust all URLs of the whole platform A trusted certificate in the browser also enables you to save passwords Wherever possible a common admin user cpadmin with adjustable password is used, so you don't need to remember multiple credentials when you want to access 
the platform (convenience also comes with responsibility - so you don't want to expose your platform to the whole world) The whole platform is running on containers, so you don't need to manually prepare anything on traditional VMs and take care of them including required prerequisites Many otherwise manual post-deployment steps have been automated Pre-integrated and automatically connected extras are deployed in the platform for easier access/management/troubleshooting You have a working Production deployment which you can use as a reference for further custom deployments","title":"Benefits \ud83d\ude80"},{"location":"30-reference/configuration/cp4ba/#general-information-","text":"What is not included: ICCs - not covered. FNCM External share - Currently not supported with ZEN & IAM as per FNCM limitations Asset Repository - it is more part of CP4I. Workflow Server and Workstream Services - this is a dev deployment. BAW Authoring and (BAW + IAWS) are mutually exclusive in a single project. ADP Runtime deployment - this is a dev deployment.","title":"General information \ud83d\udce2"},{"location":"30-reference/configuration/cp4ba/#what-is-in-the-package-","text":"When you perform a full deployment, as a result you will get the full CP4BA platform as seen in the picture. You can also omit some capabilities - this is covered later in this doc. More details about each section from the picture follow below it.","title":"What is in the package \ud83d\udce6"},{"location":"30-reference/configuration/cp4ba/#extras-section","text":"Contains extra software which makes working with the platform even easier. phpLDAPadmin - Web UI for the OpenLDAP directory making it easier to admin and troubleshoot the LDAP. Gitea - Contains Git server with web UI and is used for ADS and ADP for project sharing and publishing. Organizations for ADS and ADP are automatically created. Gitea is connected to OpenLDAP for authentication and authorization. 
Nexus - Repository manager which contains pushed ADS Java libraries needed for custom development and also for publishing custom ADS jars. Nexus is connected to OpenLDAP for authentication and authorization. Roundcube - Web UI for the included Mail server to be able to browse incoming emails. Cerebro - Web UI Elasticsearch browser automatically connected to the ES instance deployed with CP4BA. AKHQ - Web UI Kafka browser automatically connected to the Kafka instance deployed with CP4BA. Kibana - Web UI Elasticsearch dashboard tool automatically connected to the ES instance deployed with CP4BA. Mail server - For various mail integrations e.g. from BAN, BAW and RPA. Mongo Express - Web UI for the MongoDB databases for CP4BA and Process Mining to more easily troubleshoot the DBs. CloudBeaver - Web UI for the PostgreSQL and MSSQL databases making it easier to admin and troubleshoot the DBs.","title":"Extras section"},{"location":"30-reference/configuration/cp4ba/#cp4ba-cloud-pak-for-business-automation-section","text":"","title":"CP4BA (Cloud Pak for Business Automation) section"},{"location":"30-reference/configuration/cp4ba/#cp4ba-capabilities","text":"CP4BA capabilities are in purple color. More info for CP4BA capabilities is available in official docs at https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/latest . More specifically in overview of patterns at https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/latest?topic=deployment-capabilities-production-deployments . Pink color is used for CPFS dedicated capabilities. More info for CPFS dedicated capabilities is available in official docs at https://www.ibm.com/docs/en/cloud-paks/foundational-services/latest . Magenta color is used for additional capabilities. More info for Process Mining is available in official docs at https://www.ibm.com/docs/en/process-mining/latest . More info for RPA is available in official docs at https://www.ibm.com/docs/en/rpa/latest . 
Assets are currently not deployed.","title":"CP4BA capabilities"},{"location":"30-reference/configuration/cp4ba/#cpfs-cloud-pak-foundational-services-section","text":"Contains services which are reused by Cloud Paks. More info available in official docs at https://www.ibm.com/docs/en/cpfs . License metering - Tracks license usage. Certificate Manager - Provides certificate handling.","title":"CPFS (Cloud Pak Foundational Services) section"},{"location":"30-reference/configuration/cp4ba/#pre-requisites-section","text":"Contains prerequisites for the whole platform. PostgreSQL - Database storage for the majority of capabilities. OpenLDAP - Directory solution for users and groups definition. MSSQL server - Database storage for the RPA server. MongoDB - Database storage for ADS and Process Mining.","title":"Pre-requisites section"},{"location":"30-reference/configuration/cp4ba/#environments-used-for-installation-","text":"With proper sizing of the cluster and provided RWX File and RWO Block Storage Class, CP4BA deployed with the Deployer should work on any OpenShift 4.12 with Worker Nodes which in total have 60 CPUs and 128 GB of memory.","title":"Environments used for installation \ud83d\udcbb"},{"location":"30-reference/configuration/cp4ba/#automated-post-deployment-tasks-","text":"For your convenience, the following post-deployment setup tasks have been automated: CPFS - OCP Ingress certificate is used for better SSL trusting. Zen - Users and Groups added. Zen - Administrative group is given all available privileges from all pillars. Zen - Regular groups are given developer privileges from all pillars. Zen - Service account created in CPFS IAM and Zen, and a Zen API key is generated for convenient and stable usage. Zen - OCP Ingress certificate is used for better SSL trusting. Workforce Insights - Connection setup. You just need to create the WFI dashboard. 
https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/22.0.1?topic=secrets-creating-custom-bpc-workforce-secret ADS - Nexus connection setup and all ADS plugins loaded. ADS - Organization in Git created. https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/22.0.1?topic=gst-task-2-connecting-git-repository-sharing-decision-service ADS - Automatic Git project connection https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/22.0.1?topic=services-connecting-remote-repository-automatically ODM - Service user credentials automatically assigned to servers. ADP - Organization in Git created. https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/22.0.1?topic=processing-setting-up-remote-git-organization ADP - Default project data loaded. https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/22.0.1?topic=processing-loading-default-sample-data ADP - Git connection and CDD repo creation done. https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/latest?topic=processing-setting-up-remote-git-organization IER - Task Manager pod has the TM_JOB_URL parameter set. IER - Task manager set up with CPE JARs required by IER. Task manager - Enabled in Navigator. BAW - tw_admins enhanced with LDAP admin groups. BAW - tw_authors enhanced with LDAP user and admin groups. BAI - extra Flink task manager added for custom event processing. RPA - Bot Developer permission added to administrative user. IPM - Task mining related permissions added to admin user. IPM - Task mining admin user enabled for TM agent usage.","title":"Automated post-deployment tasks \u2705"},{"location":"30-reference/configuration/cp4ba/#post-installation-steps-","text":"CP4BA Review and perform post deploy manual steps for CP4BA as specified in Project cloud-pak-deployer in ConfigMap cp4ba-postdeploy in postdeploy.md file. It is best to copy the contents and open them in a Markdown editor such as VS Code. 
RPA Review and perform the post-deployment manual steps for RPA as specified in Project cloud-pak-deployer in ConfigMap cp4ba-rpa-postdeploy in the postdeploy.md file. It is best to copy the contents and open it in a Markdown editor such as VSCode. Process Mining Review and perform the post-deployment manual steps for IPM as specified in Project cloud-pak-deployer in ConfigMap cp4ba-pm-postdeploy in the postdeploy.md file. It is best to copy the contents and open it in a Markdown editor such as VSCode.","title":"Post installation steps \u27a1\ufe0f"},{"location":"30-reference/configuration/cp4ba/#usage--operations-","text":"Endpoints, access information and other useful details are available in Project cloud-pak-deployer in ConfigMap cp4ba-usage in the usage.md file after installation. It is best to copy the contents and open it in a Markdown editor such as VSCode.","title":"Usage & operations \ud83d\udcc7"},{"location":"30-reference/configuration/cp4d-assets/","text":"Cloud Pak Asset configuration \ud83d\udd17 The Cloud Pak Deployer can implement demo assets and accelerators as part of the deployment process to standardize standing up fully-featured demo environments, or to test patches or new versions of the Cloud Pak using pre-defined assets. Node changes for ROKS and Satellite clusters \ud83d\udd17 If you put a script named apply-custom-node-settings.sh in the CONFIG_DIR/assets directory, it will be run as part of applying the node settings. This way you can override the existing node settings applied by the deployer or update the compute nodes with new settings. For more information regarding the apply-custom-node-settings.sh script, go to Prepare OpenShift cluster on IBM Cloud and IBM Cloud Satellite . cp4d_asset \ud83d\udd17 A cp4d_asset entry defines one or more assets to be deployed for a specific Cloud Pak for Data instance (OpenShift project). In the configuration, a directory relative to the configuration directory ( CONFIG_DIR ) is specified. 
For example, if the directory where the configuration is stored is $HOME/cpd-config/sample and you specify assets as the asset directory, all assets under $HOME/cpd-config/sample/assets are processed. You can create one or more subdirectories under the specified location, each holding an asset to be deployed. The deployer finds all cp4d-asset.sh scripts and cp4d-asset.yaml Ansible task files and runs them. The following runtime attributes will be set prior to running the shell script or the Ansible task: * If the Cloud Pak for Data instance has the Common Core Services (CCS) custom resource installed, cpdctl is configured for the current Cloud Pak for Data instance and the current context is set to the admin user of the instance. This means you can run all cpdctl commands without first having to log in to Cloud Pak for Data. * The current working directory is set to the directory holding the cp4d-asset.sh script. * When running the cp4d-asset.sh shell script, the following environment variables are available: - CP4D_URL : Cloud Pak for Data URL - CP4D_ADMIN_PASSWORD : Cloud Pak for Data admin password - CP4D_OCP_PROJECT : OpenShift project that holds the Cloud Pak for Data instance - KUBECONFIG : OpenShift configuration file that allows you to run oc commands for the cluster cp4d_asset: - name: sample-asset project: cpd asset_location: cp4d-assets Property explanation \ud83d\udd17 Property Description Mandatory Allowed values name Name of the asset to be deployed. You can specify as many assets as you want Yes project Name of the OpenShift project of the matching cp4d entry. The cp4d project must exist. Yes asset_location Directory holding the asset(s). 
This is a directory relative to the config directory (CONFIG_DIR) that was passed to the deployer Yes Asset example \ud83d\udd17 Below is an example asset that implements the Customer Attrition industry accelerator, which can be found here: https://github.com/IBM/Industry-Accelerators/blob/master/CPD%204.0.1.0/utilities-customer-attrition-prediction-industry-accelerator.tar.gz To implement: Download the tar.gz file to the cp4d-assets directory in the specified configuration directory Create the cp4d-asset.sh shell script (example below) Add a cp4d_asset entry to the Cloud Pak for Data config file in the config directory (or in any other file with extension .yaml ) cp4d-asset.sh shell script: #!/bin/bash SCRIPT_DIR=$( cd \"$( dirname \"${BASH_SOURCE[0]}\" )\" >/dev/null 2>&1 && pwd ) # Function to retrieve project by name function retrieve_project { project_name=$1 # First check if project already exists project_id=$(cpdctl project list \\ --output json | \\ jq -r --arg project_name $project_name \\ 'if .total_results==0 then \"\" else .resources[] | select(.entity.name == $project_name) | .metadata.guid end') echo $project_id } # Function to create a project function create_project { project_name=$1 retrieve_project $project_name if [ \"$project_id\" != \"\" ];then echo \"Project $project_name already exists\" return else echo \"Creating project $project_name\" storage_id=$(uuidgen) storage=$(jq --arg storage_id $storage_id '. 
| .guid=$storage_id | .type=\"assetfiles\"' <<< '{}') cpdctl project create --name $project_name --storage \"$storage\" fi # Find project_id to return project_id=$(cpdctl project list \\ --output json | \\ jq -r --arg project_name $project_name \\ 'if .total_results==0 then \"\" else .resources[] | select(.entity.name == $project_name) | .metadata.guid end') } # Function to import a project function import_project { project_id=$1 zip_file=$2 import_id=$(cpdctl asset import start \\ --project-id $project_id --import-file $zip_file \\ --output json --jmes-query \"metadata.id\" --raw-output) cpdctl asset import get --project-id $project_id --import-id $import_id --output json } # Function to run jobs function run_jobs { project_id=$1 for job in $(cpdctl job list --project-id $project_id \\ --output json | jq -r '.results[] | .metadata.asset_id');do cpdctl job run create --project-id $project_id --job-id $job --job-run \"{}\" done } # # Start of the asset code # # Unpack the utilities-customer-attrition-prediction-industry-accelerator directory rm -rf /tmp/utilities-customer-attrition-prediction-industry-accelerator tar xzf utilities-customer-attrition-prediction-industry-accelerator.tar.gz -C /tmp asset_dir=/tmp/customer-attrition-prediction-industry-accelerator # Change to the asset directory pushd ${asset_dir} > /dev/null # Log on to Cloud Pak for Data with the admin user cp4d_token=$(curl -s -k -H 'Content-Type: application/json' -X POST $CP4D_URL/icp4d-api/v1/authorize -d '{\"username\": \"admin\", \"password\": \"'$CP4D_ADMIN_PASSWORD'\"}' | jq -r .token) # Import categories curl -s -k -H 'accept: application/json' -H \"Authorization: Bearer ${cp4d_token}\" -H \"content-type: multipart/form-data\" -X POST $CP4D_URL/v3/governance_artifact_types/category/import?merge_option=all -F \"file=@./utilities-customer-attrition-prediction-glossary-categories.csv;type=text/csv\" # Import glossary terms curl -s -k -H 'accept: application/json' -H \"Authorization: Bearer 
${cp4d_token}\" -H \"content-type: multipart/form-data\" -X POST $CP4D_URL/v3/governance_artifact_types/glossary_term/import?merge_option=all -F \"file=@./utilities-customer-attrition-prediction-glossary-terms.csv;type=text/csv\" # Check if customer-attrition project already exists. If so, do nothing project_id=$(retrieve_project \"customer-attrition\") # If project does not exist, import it and run jobs if [ \"$project_id\" == \"\" ];then create_project \"customer-attrition\" import_project $project_id \\ /tmp/utilities-customer-attrition-prediction-industry-accelerator/utilities-customer-attrition-prediction-analytics-project.zip run_jobs $project_id else echo \"Skipping deployment of CP4D asset, project customer-attrition already exists\" fi # Return to original directory popd > /dev/null exit 0","title":"Assets"},{"location":"30-reference/configuration/cp4d-assets/#cloud-pak-asset-configuration","text":"The Cloud Pak Deployer can implement demo assets and accelerators as part of the deployment process to standardize standing up fully-featured demo environments, or to test patches or new versions of the Cloud Pak using pre-defined assets.","title":"Cloud Pak Asset configuration"},{"location":"30-reference/configuration/cp4d-assets/#node-changes-for-roks-and-satellite-clusters","text":"If you put a script named apply-custom-node-settings.sh in the CONFIG_DIR/assets directory, it will be run as part of applying the node settings. This way you can override the existing node settings applied by the deployer or update the compute nodes with new settings. For more information regarding the apply-custom-node-settings.sh script, go to Prepare OpenShift cluster on IBM Cloud and IBM Cloud Satellite .","title":"Node changes for ROKS and Satellite clusters"},{"location":"30-reference/configuration/cp4d-assets/#cp4d_asset","text":"A cp4d_asset entry defines one or more assets to be deployed for a specific Cloud Pak for Data instance (OpenShift project). 
In the configuration, a directory relative to the configuration directory ( CONFIG_DIR ) is specified. For example, if the directory where the configuration is stored is $HOME/cpd-config/sample and you specify assets as the asset directory, all assets under $HOME/cpd-config/sample/assets are processed. You can create one or more subdirectories under the specified location, each holding an asset to be deployed. The deployer finds all cp4d-asset.sh scripts and cp4d-asset.yaml Ansible task files and runs them. The following runtime attributes will be set prior to running the shell script or the Ansible task: * If the Cloud Pak for Data instance has the Common Core Services (CCS) custom resource installed, cpdctl is configured for the current Cloud Pak for Data instance and the current context is set to the admin user of the instance. This means you can run all cpdctl commands without first having to log in to Cloud Pak for Data. * The current working directory is set to the directory holding the cp4d-asset.sh script. * When running the cp4d-asset.sh shell script, the following environment variables are available: - CP4D_URL : Cloud Pak for Data URL - CP4D_ADMIN_PASSWORD : Cloud Pak for Data admin password - CP4D_OCP_PROJECT : OpenShift project that holds the Cloud Pak for Data instance - KUBECONFIG : OpenShift configuration file that allows you to run oc commands for the cluster cp4d_asset: - name: sample-asset project: cpd asset_location: cp4d-assets","title":"cp4d_asset"},{"location":"30-reference/configuration/cp4d-assets/#property-explanation","text":"Property Description Mandatory Allowed values name Name of the asset to be deployed. You can specify as many assets as you want Yes project Name of the OpenShift project of the matching cp4d entry. The cp4d project must exist. Yes asset_location Directory holding the asset(s). 
This is a directory relative to the config directory (CONFIG_DIR) that was passed to the deployer Yes","title":"Property explanation"},{"location":"30-reference/configuration/cp4d-assets/#asset-example","text":"Below is an example asset that implements the Customer Attrition industry accelerator, which can be found here: https://github.com/IBM/Industry-Accelerators/blob/master/CPD%204.0.1.0/utilities-customer-attrition-prediction-industry-accelerator.tar.gz To implement: Download the tar.gz file to the cp4d-assets directory in the specified configuration directory Create the cp4d-asset.sh shell script (example below) Add a cp4d_asset entry to the Cloud Pak for Data config file in the config directory (or in any other file with extension .yaml ) cp4d-asset.sh shell script: #!/bin/bash SCRIPT_DIR=$( cd \"$( dirname \"${BASH_SOURCE[0]}\" )\" >/dev/null 2>&1 && pwd ) # Function to retrieve project by name function retrieve_project { project_name=$1 # First check if project already exists project_id=$(cpdctl project list \\ --output json | \\ jq -r --arg project_name $project_name \\ 'if .total_results==0 then \"\" else .resources[] | select(.entity.name == $project_name) | .metadata.guid end') echo $project_id } # Function to create a project function create_project { project_name=$1 retrieve_project $project_name if [ \"$project_id\" != \"\" ];then echo \"Project $project_name already exists\" return else echo \"Creating project $project_name\" storage_id=$(uuidgen) storage=$(jq --arg storage_id $storage_id '. 
| .guid=$storage_id | .type=\"assetfiles\"' <<< '{}') cpdctl project create --name $project_name --storage \"$storage\" fi # Find project_id to return project_id=$(cpdctl project list \\ --output json | \\ jq -r --arg project_name $project_name \\ 'if .total_results==0 then \"\" else .resources[] | select(.entity.name == $project_name) | .metadata.guid end') } # Function to import a project function import_project { project_id=$1 zip_file=$2 import_id=$(cpdctl asset import start \\ --project-id $project_id --import-file $zip_file \\ --output json --jmes-query \"metadata.id\" --raw-output) cpdctl asset import get --project-id $project_id --import-id $import_id --output json } # Function to run jobs function run_jobs { project_id=$1 for job in $(cpdctl job list --project-id $project_id \\ --output json | jq -r '.results[] | .metadata.asset_id');do cpdctl job run create --project-id $project_id --job-id $job --job-run \"{}\" done } # # Start of the asset code # # Unpack the utilities-customer-attrition-prediction-industry-accelerator directory rm -rf /tmp/utilities-customer-attrition-prediction-industry-accelerator tar xzf utilities-customer-attrition-prediction-industry-accelerator.tar.gz -C /tmp asset_dir=/tmp/customer-attrition-prediction-industry-accelerator # Change to the asset directory pushd ${asset_dir} > /dev/null # Log on to Cloud Pak for Data with the admin user cp4d_token=$(curl -s -k -H 'Content-Type: application/json' -X POST $CP4D_URL/icp4d-api/v1/authorize -d '{\"username\": \"admin\", \"password\": \"'$CP4D_ADMIN_PASSWORD'\"}' | jq -r .token) # Import categories curl -s -k -H 'accept: application/json' -H \"Authorization: Bearer ${cp4d_token}\" -H \"content-type: multipart/form-data\" -X POST $CP4D_URL/v3/governance_artifact_types/category/import?merge_option=all -F \"file=@./utilities-customer-attrition-prediction-glossary-categories.csv;type=text/csv\" # Import glossary terms curl -s -k -H 'accept: application/json' -H \"Authorization: Bearer 
${cp4d_token}\" -H \"content-type: multipart/form-data\" -X POST $CP4D_URL/v3/governance_artifact_types/glossary_term/import?merge_option=all -F \"file=@./utilities-customer-attrition-prediction-glossary-terms.csv;type=text/csv\" # Check if customer-attrition project already exists. If so, do nothing project_id=$(retrieve_project \"customer-attrition\") # If project does not exist, import it and run jobs if [ \"$project_id\" == \"\" ];then create_project \"customer-attrition\" import_project $project_id \\ /tmp/utilities-customer-attrition-prediction-industry-accelerator/utilities-customer-attrition-prediction-analytics-project.zip run_jobs $project_id else echo \"Skipping deployment of CP4D asset, project customer-attrition already exists\" fi # Return to original directory popd > /dev/null exit 0","title":"Asset example"},{"location":"30-reference/configuration/cp4d-cartridges/","text":"Cloud Pak for Data cartridges \ud83d\udd17 Defines the services (cartridges) which must be installed into the Cloud Pak for Data instances. The cartridges will be configured with the storage class defined at the Cloud Pak for Data object level. For each cartridge you can specify whether it must be installed or removed by specifying the state. If a cartridge is installed and the state is changed to removed , the cartridge and all of its instances are removed by the deployer when it is run. An example Cloud Pak for Data object with cartridges is below: cp4d: - project: cpd-instance cp4d_version: 4.6.3 sequential_install: False cartridges: - name: cpfs - name: cpd_platform - name: db2oltp size: small instances: - name: db2-instance metadata_size_gb: 20 data_size_gb: 20 backup_size_gb: 20 transactionlog_size_gb: 20 state: installed - name: wkc size: small state: removed - name: wml size: small state: installed - name: ws state: installed When run, the deployer installs the Db2 OLTP ( db2oltp ), Watson Machine Learning ( wml ) and Watson Studio ( ws ) cartridges. 
If the Watson Knowledge Catalog ( wkc ) is installed in the cpd-instance OpenShift project, it is removed. After the deployer installs Db2 OLTP, a new Db2 instance is created with the specified attributes. Cloud Pak for Data cartridges \ud83d\udd17 cp4d.cartridges \ud83d\udd17 This is a list of cartridges that will be installed in the Cloud Pak for Data instance. Every cartridge is identified by its name. Some cartridges may require additional information to correctly install or to create an instance for the cartridge. Below you will find a list of all tested Cloud Pak for Data cartridges and their specific properties. Properties for all cartridges \ud83d\udd17 Property Description Mandatory Allowed values name Name of the cartridge Yes state Whether the cartridge must be installed or removed . If not specified, the cartridge will be installed No installed, removed installation_options Record of properties that will be applied to the spec of the OpenShift Custom Resource No Cartridge cpfs or cp-foundation \ud83d\udd17 Defines the Cloud Pak Foundational Services (fka Common Services) which are required for all Cloud Pak for Data installations. Cloud Pak for Data Foundational Services provide functionalities around certificate management, license service, identity and access management (IAM), etc. This cartridge is mandatory for every Cloud Pak for Data instance. Cartridge cpd_platform or lite \ud83d\udd17 Defines the Cloud Pak for Data platform operator (fka \"lite\") which installs the base services needed to operate Cloud Pak for Data, such as the Zen metastore, Zen watchdog and the user interface. This cartridge is mandatory for every Cloud Pak for Data instance. Cartridge wkc \ud83d\udd17 Manages the Watson Knowledge Catalog installation for the Cloud Pak for Data instance. 
Additional properties for cartridge wkc \ud83d\udd17 Property Description Mandatory Allowed values size Scale configuration of the cartridge No small (default), medium, large installation_options.install_wkc_core_only Install only the core of WKC? No True, False (default) installation_options.enableKnowledgeGraph Enable the knowledge graph for business lineage? No True, False (default) installation_options.enableDataQuality Enable data quality for WKC? No True, False (default) installation_options.enableMANTA Enable MANTA? No True, False (default)","title":"Cartridges"},{"location":"30-reference/configuration/cp4d-cartridges/#cloud-pak-for-data-cartridges","text":"Defines the services (cartridges) which must be installed into the Cloud Pak for Data instances. The cartridges will be configured with the storage class defined at the Cloud Pak for Data object level. For each cartridge you can specify whether it must be installed or removed by specifying the state. If a cartridge is installed and the state is changed to removed , the cartridge and all of its instances are removed by the deployer when it is run. An example Cloud Pak for Data object with cartridges is below: cp4d: - project: cpd-instance cp4d_version: 4.6.3 sequential_install: False cartridges: - name: cpfs - name: cpd_platform - name: db2oltp size: small instances: - name: db2-instance metadata_size_gb: 20 data_size_gb: 20 backup_size_gb: 20 transactionlog_size_gb: 20 state: installed - name: wkc size: small state: removed - name: wml size: small state: installed - name: ws state: installed When run, the deployer installs the Db2 OLTP ( db2oltp ), Watson Machine Learning ( wml ) and Watson Studio ( ws ) cartridges. If the Watson Knowledge Catalog ( wkc ) is installed in the cpd-instance OpenShift project, it is removed. 
After the deployer installs Db2 OLTP, a new Db2 instance is created with the specified attributes.","title":"Cloud Pak for Data cartridges"},{"location":"30-reference/configuration/cp4d-cartridges/#cloud-pak-for-data-cartridges_1","text":"","title":"Cloud Pak for Data cartridges"},{"location":"30-reference/configuration/cp4d-cartridges/#cp4dcartridges","text":"This is a list of cartridges that will be installed in the Cloud Pak for Data instance. Every cartridge is identified by its name. Some cartridges may require additional information to correctly install or to create an instance for the cartridge. Below you will find a list of all tested Cloud Pak for Data cartridges and their specific properties.","title":"cp4d.cartridges"},{"location":"30-reference/configuration/cp4d-cartridges/#properties-for-all-cartridges","text":"Property Description Mandatory Allowed values name Name of the cartridge Yes state Whether the cartridge must be installed or removed . If not specified, the cartridge will be installed No installed, removed installation_options Record of properties that will be applied to the spec of the OpenShift Custom Resource No","title":"Properties for all cartridges"},{"location":"30-reference/configuration/cp4d-cartridges/#cartridge-cpfs-or-cp-foundation","text":"Defines the Cloud Pak Foundational Services (fka Common Services) which are required for all Cloud Pak for Data installations. Cloud Pak for Data Foundational Services provide functionalities around certificate management, license service, identity and access management (IAM), etc. This cartridge is mandatory for every Cloud Pak for Data instance.","title":"Cartridge cpfs or cp-foundation"},{"location":"30-reference/configuration/cp4d-cartridges/#cartridge-cpd_platform-or-lite","text":"Defines the Cloud Pak for Data platform operator (fka \"lite\") which installs the base services needed to operate Cloud Pak for Data, such as the Zen metastore, Zen watchdog and the user interface. 
This cartridge is mandatory for every Cloud Pak for Data instance.","title":"Cartridge cpd_platform or lite"},{"location":"30-reference/configuration/cp4d-cartridges/#cartridge-wkc","text":"Manages the Watson Knowledge Catalog installation for the Cloud Pak for Data instance.","title":"Cartridge wkc"},{"location":"30-reference/configuration/cp4d-cartridges/#additional-properties-for-cartridge-wkc","text":"Property Description Mandatory Allowed values size Scale configuration of the cartridge No small (default), medium, large installation_options.install_wkc_core_only Install only the core of WKC? No True, False (default) installation_options.enableKnowledgeGraph Enable the knowledge graph for business lineage? No True, False (default) installation_options.enableDataQuality Enable data quality for WKC? No True, False (default) installation_options.enableMANTA Enable MANTA? No True, False (default)","title":"Additional properties for cartridge wkc"},{"location":"30-reference/configuration/cp4d-connections/","text":"Cloud Pak for Data platform connections \ud83d\udd17 Cloud Pak for Data platform connection - cp4d_connection \ud83d\udd17 The cp4d_connection object can be used to create Global Platform connections. 
cp4d_connection: - name: connection_name # Name of the connection, must be unique type: database # Type, currently supported: [database] cp4d_instance: cpd # CP4D instance on which the connection must be created openshift_cluster_name: cluster_name # OpenShift cluster name on which the cp4d_instance is deployed database_type: db2 # Type of connection database_hostname: hostname # Hostname of the connection database_port: 30556 # Port of the connection database_name: bludb # Database name of the connection database_port_ssl: true # enable ssl flag database_credentials_username: 77066f69 # Username of the datasource database_credentials_password_secret: db-credentials # Vault lookup name to contain the password database_ssl_certificate_secret: db-ssl-cert # Vault lookup name to contain the SSL certificate Cloud Pak for Data backup and restore platform connections - cp4d_backup_restore_connections \ud83d\udd17 The cp4d_backup_restore_connections object can be used to back up all currently configured Global Platform connections, which are either created by the Cloud Pak Deployer or added manually. The backup is stored in the status /cp4d/exports folder as a JSON file. A backup file can be used to restore Global Platform connections. A flag can be used to indicate that the restore should be skipped if a Global Platform connection with the same name already exists. The Cloud Pak Deployer cp4d_backup_restore_connections capability implements the following: - Connect to the IBM Cloud Pak for Data instance specified using cp4d_instance and openshift_cluster_name - If connections_backup_file is specified, export all Global Platform connections to the specified file in the status /cp4d/export/connections folder - If connections_restore_file is specified, load the file and restore the Global Platform connections - The connections_restore_overwrite flag (true/false) indicates whether an existing Global Platform connection with the same name will be replaced. 
cp4d_backup_restore_connections: - cp4d_instance: cpd openshift_cluster_name: {{ env_id }} connections_backup_file: {{ env_id }}_cpd_connections.json connections_restore_file: {{ env_id }}_cpd_connection.json connections_restore_overwrite: false","title":"Platform connections"},{"location":"30-reference/configuration/cp4d-connections/#cloud-pak-for-data-platform-connections","text":"","title":"Cloud Pak for Data platform connections"},{"location":"30-reference/configuration/cp4d-connections/#cloud-pak-for-data-platform-connection---cp4d_conection","text":"The cp4d_connection object can be used to create Global Platform connections. cp4d_connection: - name: connection_name # Name of the connection, must be unique type: database # Type, currently supported: [database] cp4d_instance: cpd # CP4D instance on which the connection must be created openshift_cluster_name: cluster_name # OpenShift cluster name on which the cp4d_instance is deployed database_type: db2 # Type of connection database_hostname: hostname # Hostname of the connection database_port: 30556 # Port of the connection database_name: bludb # Database name of the connection database_port_ssl: true # enable ssl flag database_credentials_username: 77066f69 # Username of the datasource database_credentials_password_secret: db-credentials # Vault lookup name to contain the password database_ssl_certificate_secret: db-ssl-cert # Vault lookup name to contain the SSL certificate","title":"Cloud Pak for Data platform connection - cp4d_connection"},{"location":"30-reference/configuration/cp4d-connections/#cloud-pak-for-data-backup-and-restore-platform-connections---cp4d_backup_restore_connections","text":"The cp4d_backup_restore_connections object can be used to back up all currently configured Global Platform connections, which are either created by the Cloud Pak Deployer or added manually. The backup is stored in the status /cp4d/exports folder as a JSON file. 
A backup file can be used to restore Global Platform connections. A flag can be used to indicate that the restore should be skipped if a Global Platform connection with the same name already exists. The Cloud Pak Deployer cp4d_backup_restore_connections capability implements the following: - Connect to the IBM Cloud Pak for Data instance specified using cp4d_instance and openshift_cluster_name - If connections_backup_file is specified, export all Global Platform connections to the specified file in the status /cp4d/export/connections folder - If connections_restore_file is specified, load the file and restore the Global Platform connections - The connections_restore_overwrite flag (true/false) indicates whether an existing Global Platform connection with the same name will be replaced. cp4d_backup_restore_connections: - cp4d_instance: cpd openshift_cluster_name: {{ env_id }} connections_backup_file: {{ env_id }}_cpd_connections.json connections_restore_file: {{ env_id }}_cpd_connection.json connections_restore_overwrite: false","title":"Cloud Pak for Data backup and restore platform connections - cp4d_backup_restore_connections"},{"location":"30-reference/configuration/cp4d-instances/","text":"Cloud Pak for Data instances \ud83d\udd17 Manage Cloud Pak for Data instances \ud83d\udd17 Some cartridges have the ability to create one or more instances to run an isolated installation of the cartridge. If instances have been configured for the cartridge, the deployer can manage creating and deleting the instances. 
The following Cloud Pak for Data cartridges are currently supported for managing instances: Analytics engine powered by Apache Spark ( analytics-engine ) DataStage ( datastage-ent-plus ) Db2 OLTP ( db2 ) Data Virtualization ( dv ) Cognos Analytics ( ca ) EDB Postgres ( edb_cp4d ) OpenPages ( openpages ) Analytics engine powered by Apache Spark Instances \ud83d\udd17 Analytics Engine instances can be defined by adding the instances section to the cartridges entry of cartridge analytics-engine . The following example shows the configuration to define an instance. cp4d: - project: cpd-instance openshift_cluster_name: \"{{ env_id }}\" ... cartridges: - name: analytics-engine size: small state: installed instances: - name: analyticsengine-instance storage_size_gb: 50 Property Description Mandatory Allowed Values name Name of the instance Yes storage_size_gb Size of the storage allocated to the instance Yes numeric value DataStage instances \ud83d\udd17 DataStage instances can be defined by adding the instances section to the cartridges entry of cartridge datastage-ent-plus . The following example shows the configuration to define an instance. DataStage, upon deployment, always creates a default instance called ds-px-default . This instance cannot be configured in the instances section. cp4d: - project: cpd-instance openshift_cluster_name: \"{{ env_id }}\" ... 
cartridges: - name: datastage-ent-plus state: installed instances: - name: ds-instance # Optional settings description: \"datastage ds-instance\" size: medium storage_class: efs-nfs-client storage_size_gb: 60 # Optional Custom Scale options scale_px_runtime: replicas: 2 cpu_request: 500m cpu_limit: 2 memory_request: 2Gi memory_limit: 4Gi scale_px_compute: replicas: 2 cpu_request: 1 cpu_limit: 3 memory_request: 4Gi memory_limit: 12Gi Property Description Mandatory Allowed Values name Name of the instance Yes description Description of the instance No size Size of the DataStage instance No small (default), medium, large storage_class Override the default storage class No storage_size_gb Storage size allocated to the DataStage instance No numeric Optionally, the default px_runtime and px_compute instances of the DataStage instance can be tweaked. Both scale_px_runtime and scale_px_compute must be specified when used, and all properties must be specified. Property Description Mandatory replicas Number of replicas Yes cpu_request CPU Request value Yes memory_request Memory Request value Yes cpu_limit CPU limit value Yes memory_limit Memory limit value Yes Db2 OLTP Instances \ud83d\udd17 DB2 OLTP instances can be defined by adding the instances section to the cartridges entry of cartridge db2 . The following example shows the configuration to define an instance. cp4d: - project: cpd-instance openshift_cluster_name: \"{{ env_id }}\" ... 
cartridges: - name: db2 size: small state: installed instances: - name: db2-instance metadata_size_gb: 20 data_size_gb: 20 backup_size_gb: 20 transactionlog_size_gb: 20 Property Description Mandatory Allowed Values name Name of the instance Yes metadata_size_gb Size of the metadata store Yes numeric value data_size_gb Size of the data store Yes numeric value backup_size_gb Size of the backup store Yes numeric value transactionlog_size_gb Size of the transactionlog store Yes numeric value Data Virtualization Instances \ud83d\udd17 Data Virtualization instances can be defined by adding the instances section to the cartridges entry of cartridge dv . The following example shows the configuration to define an instance. cp4d: - project: cpd-instance openshift_cluster_name: \"{{ env_id }}\" ... cartridges: - name: dv size: small state: installed instances: - name: data-virtualization Property Description Mandatory Allowed Values name Name of the instance Yes Cognos Analytics Instance \ud83d\udd17 A Cognos Analytics instance can be defined by adding the instances section to the cartridges entry of cartridge ca . The following example shows the configuration to define an instance. cp4d: - project: cpd-instance openshift_cluster_name: \"{{ env_id }}\" ... cartridges: - name: ca size: small state: installed instances: - name: ca-instance metastore_ref: ca-metastore Property Description Mandatory name Name of the instance Yes metastore_ref Name of the DB2 instance used for the Cognos Repository database Yes The Cognos Content Repository database can use an IBM Cloud Pak for Data DB2 OLTP instance. The Cloud Pak Deployer will first determine whether a DB2 OLTP instance with the name specified in metastore_ref already exists. If this is the case, this DB2 OLTP instance will be used and the database is prepared using the Cognos DB2 script prior to provisioning the Cognos instance. 
EDB Postgres for Cloud Pak for Data instances \ud83d\udd17 EnterpriseDB instances can be defined by adding the instances section to the cartridges entry of cartridge edb_cp4d . The following example shows the configuration to define an instance. cp4d: - project: cpd-instance openshift_cluster_name: \"{{ env_id }}\" ... cartridges: # Please note that for EDB Postgres, a secret edb-postgres-license-key must be created in the vault # before deploying - name: edb_cp4d size: small state: installed instances: - name: instance1 version: \"13.5\" #Optional Parameters type: Standard members: 1 size_gb: 50 resource_request_cpu: 1000m resource_request_memory: 4Gi resource_limit_cpu: 1000m resource_limit_memory: 4Gi Property Description Mandatory Allowed Values name Name of the instance Yes version Version of the EDB Postgres instance Yes 12.11, 13.5 type Enterprise or Standard version No Standard (default), Enterprise members Number of members of the instance No number, 1 (default) size_gb Storage Size allocated to the instance No number, 50 (default) resource_request_cpu Request CPU of the instance No 1000m (default) resource_request_memory Request Memory of the instance No 4Gi (default) resource_limit_cpu Limit CPU of the instance No 1000m (default) resource_limit_memory Limit Memory of the instance No 4Gi (default) OpenPages Instance \ud83d\udd17 An OpenPages instance can be defined by adding the instances section to the cartridges entry of cartridge openpages . The following example shows the configuration to define an instance. cp4d: - project: cpd-instance openshift_cluster_name: \"{{ env_id }}\" ... 
cartridges: - name: openpages state: installed instances: - name: openpages-instance size: xsmall Property Description Mandatory name Name of the instance Yes size The size of the OpenPages instance, default is xsmall No","title":"Instances"},{"location":"30-reference/configuration/cp4d-instances/#cloud-pak-for-data-instances","text":"","title":"Cloud Pak for Data instances"},{"location":"30-reference/configuration/cp4d-instances/#manage-cloud-pak-for-data-instances","text":"Some cartridges have the ability to create one or more instances to run an isolated installation of the cartridge. If instances have been configured for the cartridge, the deployer can manage creating and deleting the instances. The following Cloud Pak for Data cartridges are currently supported for managing instances: Analytics engine powered by Apache Spark ( analytics-engine ) DataStage ( datastage-ent-plus ) Db2 OLTP ( db2 ) Data Virtualization ( dv ) Cognos Analytics ( ca ) EDB Postgres ( edb_cp4d ) OpenPages ( openpages )","title":"Manage Cloud Pak for Data instances"},{"location":"30-reference/configuration/cp4d-instances/#analytics-engine-powered-by-apache-spark-instances","text":"Analytics Engine instances can be defined by adding the instances section to the cartridges entry of cartridge analytics-engine . The following example shows the configuration to define an instance. cp4d: - project: cpd-instance openshift_cluster_name: \"{{ env_id }}\" ... 
cartridges: - name: analytics-engine size: small state: installed instances: - name: analyticsengine-instance storage_size_gb: 50 Property Description Mandatory Allowed Values name Name of the instance Yes storage_size_gb Size of the storage allocated to the instance Yes numeric value","title":"Analytics engine powered by Apache Spark Instances"},{"location":"30-reference/configuration/cp4d-instances/#datastage-instances","text":"DataStage instances can be defined by adding the instances section to the cartridges entry of cartridge datastage-ent-plus . The following example shows the configuration to define an instance. DataStage, upon deployment, always creates a default instance called ds-px-default . This instance cannot be configured in the instances section. cp4d: - project: cpd-instance openshift_cluster_name: \"{{ env_id }}\" ... cartridges: - name: datastage-ent-plus state: installed instances: - name: ds-instance # Optional settings description: \"datastage ds-instance\" size: medium storage_class: efs-nfs-client storage_size_gb: 60 # Optional Custom Scale options scale_px_runtime: replicas: 2 cpu_request: 500m cpu_limit: 2 memory_request: 2Gi memory_limit: 4Gi scale_px_compute: replicas: 2 cpu_request: 1 cpu_limit: 3 memory_request: 4Gi memory_limit: 12Gi Property Description Mandatory Allowed Values name Name of the instance Yes description Description of the instance No size Size of the DataStage instance No small (default), medium, large storage_class Override the default storage class No storage_size_gb Storage size allocated to the DataStage instance No numeric Optionally, the default px_runtime and px_compute instances of the DataStage instance can be tweaked. Both scale_px_runtime and scale_px_compute must be specified when used, and all properties must be specified. 
Property Description Mandatory replicas Number of replicas Yes cpu_request CPU Request value Yes memory_request Memory Request value Yes cpu_limit CPU limit value Yes memory_limit Memory limit value Yes","title":"DataStage instances"},{"location":"30-reference/configuration/cp4d-instances/#db2-oltp-instances","text":"DB2 OLTP instances can be defined by adding the instances section to the cartridges entry of cartridge db2 . The following example shows the configuration to define an instance. cp4d: - project: cpd-instance openshift_cluster_name: \"{{ env_id }}\" ... cartridges: - name: db2 size: small state: installed instances: - name: db2-instance metadata_size_gb: 20 data_size_gb: 20 backup_size_gb: 20 transactionlog_size_gb: 20 Property Description Mandatory Allowed Values name Name of the instance Yes metadata_size_gb Size of the metadata store Yes numeric value data_size_gb Size of the data store Yes numeric value backup_size_gb Size of the backup store Yes numeric value transactionlog_size_gb Size of the transactionlog store Yes numeric value","title":"Db2 OLTP Instances"},{"location":"30-reference/configuration/cp4d-instances/#data-virtualization-instances","text":"Data Virtualization instances can be defined by adding the instances section to the cartridges entry of cartridge dv . The following example shows the configuration to define an instance. cp4d: - project: cpd-instance openshift_cluster_name: \"{{ env_id }}\" ... cartridges: - name: dv size: small state: installed instances: - name: data-virtualization Property Description Mandatory Allowed Values name Name of the instance Yes","title":"Data Virtualization Instances"},{"location":"30-reference/configuration/cp4d-instances/#cognos-analytics-instance","text":"A Cognos Analytics instance can be defined by adding the instances section to the cartridges entry of cartridge ca . The following example shows the configuration to define an instance. 
cp4d: - project: cpd-instance openshift_cluster_name: \"{{ env_id }}\" ... cartridges: - name: ca size: small state: installed instances: - name: ca-instance metastore_ref: ca-metastore Property Description Mandatory name Name of the instance Yes metastore_ref Name of the DB2 instance used for the Cognos Repository database Yes The Cognos Content Repository database can use an IBM Cloud Pak for Data DB2 OLTP instance. The Cloud Pak Deployer will first determine whether an existing DB2 OLTP instance exists with the name specified in metastore_ref . If this is the case, this DB2 OLTP instance will be used and the database is prepared using the Cognos DB2 script prior to provisioning the Cognos instance.","title":"Cognos Analytics Instance"},{"location":"30-reference/configuration/cp4d-instances/#edb-postgres-for-cloud-pak-for-data-instances","text":"EnterpriseDB instances can be defined by adding the instances section to the cartridges entry of cartridge edb_cp4d . The following example shows the configuration to define an instance. cp4d: - project: cpd-instance openshift_cluster_name: \"{{ env_id }}\" ... 
cartridges: # Please note that for EDB Postgres, a secret edb-postgres-license-key must be created in the vault # before deploying - name: edb_cp4d size: small state: installed instances: - name: instance1 version: \"13.5\" #Optional Parameters type: Standard members: 1 size_gb: 50 resource_request_cpu: 1000m resource_request_memory: 4Gi resource_limit_cpu: 1000m resource_limit_memory: 4Gi Property Description Mandatory Allowed Values name Name of the instance Yes version Version of the EDB Postgres instance Yes 12.11, 13.5 type Enterprise or Standard version No Standard (default), Enterprise members Number of members of the instance No number, 1 (default) size_gb Storage Size allocated to the instance No number, 50 (default) resource_request_cpu Request CPU of the instance No 1000m (default) resource_request_memory Request Memory of the instance No 4Gi (default) resource_limit_cpu Limit CPU of the instance No 1000m (default) resource_limit_memory Limit Memory of the instance No 4Gi (default)","title":"EDB Postgres for Cloud Pak for Data instances"},{"location":"30-reference/configuration/cp4d-instances/#openpages-instance","text":"An OpenPages instance can be defined by adding the instances section to the cartridges entry of cartridge openpages . The following example shows the configuration to define an instance. cp4d: - project: cpd-instance openshift_cluster_name: \"{{ env_id }}\" ... cartridges: - name: openpages state: installed instances: - name: openpages-instance size: xsmall Property Description Mandatory name Name of the instance Yes size The size of the OpenPages instance, default is xsmall No","title":"OpenPages Instance"},{"location":"30-reference/configuration/cp4d-ldap/","text":"Cloud Pak for Data LDAP \ud83d\udd17 Cloud Pak for Data can connect to an LDAP user registry for identity and access management (IAM). When configured for a Cloud Pak for Data instance, a user must authenticate with the user name and password stored in the LDAP server. 
If SAML is also configured for the Cloud Pak for Data instance, authentication (identity) is managed by the SAML server but access management (groups, roles) can still be served by LDAP. Cloud Pak for Data LDAP configuration \ud83d\udd17 IBM Cloud Pak for Data can connect to an LDAP user registry in order for users to log on with their LDAP credentials. The configuration of LDAP can be specified in a separate yaml file in the config folder, or included in an existing yaml file. LDAP configuration - cp4d_ldap_config \ud83d\udd17 A cp4d_ldap_config entry contains the connectivity information to the LDAP user registry. The project and openshift_cluster_name values uniquely identify the Cloud Pak for Data instance. The ldap_domain_search_password_vault entry contains a reference to the vault, which means that as a preparation the LDAP bind user password must be stored in the vault used by the Cloud Pak Deployer using the key referenced in the configuration. If the password is not available, the Cloud Pak Deployer will fail and not be able to configure the LDAP connectivity. 
# Each Cloud Pak for Data Deployment deployed in an OpenShift Project of an OpenShift cluster can have its own LDAP configuration cp4d_ldap_config: - project: cpd-instance openshift_cluster_name: sample # Mandatory ldap_host: ldaps://ldap-host # Mandatory ldap_port: 636 # Mandatory ldap_user_search_base: ou=users,dc=ibm,dc=com # Mandatory ldap_user_search_field: uid # Mandatory ldap_domain_search_user: uid=ibm_roks_bind_user,ou=users,dc=ibm,dc=com # Mandatory ldap_domain_search_password_vault: ldap_bind_password # Mandatory, Password vault reference auto_signup: \"false\" # Mandatory ldap_group_search_base: ou=groups,dc=ibm,dc=com # Optional, but mandatory when using user groups ldap_group_search_field: cn # Optional, but mandatory when using user groups ldap_mapping_first_name: cn # Optional, but mandatory when using user groups ldap_mapping_last_name: sn # Optional, but mandatory when using user groups ldap_mapping_email: mail # Optional, but mandatory when using user groups ldap_mapping_group_membership: memberOf # Optional, but mandatory when using user groups ldap_mapping_group_member: member # Optional, but mandatory when using user groups The above configuration uses the LDAPS protocol to connect to port 636 on the ldap-host server. This server can be a private server if an upstream DNS server is also defined for the OpenShift cluster that runs Cloud Pak for Data. Common Name uid=ibm_roks_bind_user,ou=users,dc=ibm,dc=com is used as the bind user for the LDAP server and its password is retrieved from vault secret ldap_bind_password . User Group configuration - cp4d_user_group_configuration \ud83d\udd17 The cp4d_user_group_configuration: can optionally create User Group(s) with references to LDAP Group(s). A user_groups entry must contain at least 1 role_assignments and 1 ldap_groups entry. 
# Each Cloud Pak for Data Deployment deployed in an OpenShift Project of an OpenShift cluster can have its own User Groups configuration cp4d_user_group_configuration: - project: zen-sample # Mandatory openshift_cluster_name: sample # Mandatory user_groups: - name: CA_Analytics_Viewer description: User Group for Cognos Analytics Viewers role_assignments: - name: zen_administrator_role ldap_groups: - name: cn=ca_viewers,ou=groups,dc=ibm,dc=com - name: CA_Analytics_Administrators description: User Group for Cognos Analytics Administrators role_assignments: - name: zen_administrator_role ldap_groups: - name: cn=ca_admins,ou=groups,dc=ibm,dc=com Role Assignment values: - zen_administrator_role - zen_user_role - wkc_data_scientist_role - zen_developer_role - zen_data_engineer_role (requires installation of DataStage cartridge to become available) During the creation of User Group(s) the following validations are performed: - LDAP configuration is completed - The provided role assignment(s) are available in Cloud Pak for Data - The provided LDAP group(s) are available in the LDAP registry - If the User Group already exists, it ensures the provided LDAP Group(s) are assigned, but no changes to the existing role assignments are performed and no LDAP groups are removed from the User Group Provisioned instance authorization - cp4d_instance_configuration \ud83d\udd17 When using Cloud Pak for Data LDAP connectivity and User Groups, the User Groups can be assigned to authorize the users of the LDAP groups access to the provisioned instance(s). 
Currently supported instance authorization: - Cognos Analytics (ca) Cognos Analytics instance authorization \ud83d\udd17 cp4d_instance_configuration: - project: zen-sample # Mandatory openshift_cluster_name: sample # Mandatory cartridges: - name: cognos_analytics manage_access: # Optional, requires LDAP connectivity - ca_role: Analytics Viewer # Mandatory, one of the CA Access roles cp4d_user_group: CA_Analytics_Viewer # Mandatory, the CP4D User Group Name - ca_role: Analytics Administrators # Mandatory, one of the CA Access roles cp4d_user_group: CA_Analytics_Administrators # Mandatory, the CP4D User Group Name A Cognos Analytics (ca) instance can have multiple manage_access entries. Each entry consists of 1 ca_role and 1 cp4d_user_group element. The ca_role must be one of the following possible values: - Analytics Administrators - Analytics Explorers - Analytics Users - Analytics Viewer During the configuration of the instance authorization the following validations are performed: - LDAP configuration is completed - The provided ca_role is valid - The provided cp4d_user_group exists","title":"LDAP"},{"location":"30-reference/configuration/cp4d-ldap/#cloud-pak-for-data-ldap","text":"Cloud Pak for Data can connect to an LDAP user registry for identity and access management (IAM). When configured for a Cloud Pak for Data instance, a user must authenticate with the user name and password stored in the LDAP server. If SAML is also configured for the Cloud Pak for Data instance, authentication (identity) is managed by the SAML server but access management (groups, roles) can still be served by LDAP.","title":"Cloud Pak for Data LDAP"},{"location":"30-reference/configuration/cp4d-ldap/#cloud-pak-for-data-ldap-configuration","text":"IBM Cloud Pak for Data can connect to an LDAP user registry in order for users to log on with their LDAP credentials. 
The configuration of LDAP can be specified in a separate yaml file in the config folder, or included in an existing yaml file.","title":"Cloud Pak for Data LDAP configuration"},{"location":"30-reference/configuration/cp4d-ldap/#ldap-configuration---cp4d_ldap_config","text":"A cp4d_ldap_config entry contains the connectivity information to the LDAP user registry. The project and openshift_cluster_name values uniquely identify the Cloud Pak for Data instance. The ldap_domain_search_password_vault entry contains a reference to the vault, which means that as a preparation the LDAP bind user password must be stored in the vault used by the Cloud Pak Deployer using the key referenced in the configuration. If the password is not available, the Cloud Pak Deployer will fail and not be able to configure the LDAP connectivity. # Each Cloud Pak for Data Deployment deployed in an OpenShift Project of an OpenShift cluster can have its own LDAP configuration cp4d_ldap_config: - project: cpd-instance openshift_cluster_name: sample # Mandatory ldap_host: ldaps://ldap-host # Mandatory ldap_port: 636 # Mandatory ldap_user_search_base: ou=users,dc=ibm,dc=com # Mandatory ldap_user_search_field: uid # Mandatory ldap_domain_search_user: uid=ibm_roks_bind_user,ou=users,dc=ibm,dc=com # Mandatory ldap_domain_search_password_vault: ldap_bind_password # Mandatory, Password vault reference auto_signup: \"false\" # Mandatory ldap_group_search_base: ou=groups,dc=ibm,dc=com # Optional, but mandatory when using user groups ldap_group_search_field: cn # Optional, but mandatory when using user groups ldap_mapping_first_name: cn # Optional, but mandatory when using user groups ldap_mapping_last_name: sn # Optional, but mandatory when using user groups ldap_mapping_email: mail # Optional, but mandatory when using user groups ldap_mapping_group_membership: memberOf # Optional, but mandatory when using user groups ldap_mapping_group_member: member # Optional, but mandatory when using user groups The above 
configuration uses the LDAPS protocol to connect to port 636 on the ldap-host server. This server can be a private server if an upstream DNS server is also defined for the OpenShift cluster that runs Cloud Pak for Data. Common Name uid=ibm_roks_bind_user,ou=users,dc=ibm,dc=com is used as the bind user for the LDAP server and its password is retrieved from vault secret ldap_bind_password .","title":"LDAP configuration - cp4d_ldap_config"},{"location":"30-reference/configuration/cp4d-ldap/#user-group-configuration---cp4d_user_group_configuration","text":"The cp4d_user_group_configuration: can optionally create User Group(s) with references to LDAP Group(s). A user_groups entry must contain at least 1 role_assignments and 1 ldap_groups entry. # Each Cloud Pak for Data Deployment deployed in an OpenShift Project of an OpenShift cluster can have its own User Groups configuration cp4d_user_group_configuration: - project: zen-sample # Mandatory openshift_cluster_name: sample # Mandatory user_groups: - name: CA_Analytics_Viewer description: User Group for Cognos Analytics Viewers role_assignments: - name: zen_administrator_role ldap_groups: - name: cn=ca_viewers,ou=groups,dc=ibm,dc=com - name: CA_Analytics_Administrators description: User Group for Cognos Analytics Administrators role_assignments: - name: zen_administrator_role ldap_groups: - name: cn=ca_admins,ou=groups,dc=ibm,dc=com Role Assignment values: - zen_administrator_role - zen_user_role - wkc_data_scientist_role - zen_developer_role - zen_data_engineer_role (requires installation of DataStage cartridge to become available) During the creation of User Group(s) the following validations are performed: - LDAP configuration is completed - The provided role assignment(s) are available in Cloud Pak for Data - The provided LDAP group(s) are available in the LDAP registry - If the User Group already exists, it ensures the provided LDAP Group(s) are assigned, but no changes to the existing role assignments are performed 
and no LDAP groups are removed from the User Group","title":"User Group configuration - cp4d_user_group_configuration"},{"location":"30-reference/configuration/cp4d-ldap/#provisioned-instance-authorization---cp4d_instance_configuration","text":"When using Cloud Pak for Data LDAP connectivity and User Groups, the User Groups can be assigned to authorize the users of the LDAP groups access to the provisioned instance(s). Currently supported instance authorization: - Cognos Analytics (ca)","title":"Provisioned instance authorization - cp4d_instance_configuration"},{"location":"30-reference/configuration/cp4d-ldap/#cognos-analytics-instance-authorization","text":"cp4d_instance_configuration: - project: zen-sample # Mandatory openshift_cluster_name: sample # Mandatory cartridges: - name: cognos_analytics manage_access: # Optional, requires LDAP connectivity - ca_role: Analytics Viewer # Mandatory, one of the CA Access roles cp4d_user_group: CA_Analytics_Viewer # Mandatory, the CP4D User Group Name - ca_role: Analytics Administrators # Mandatory, one of the CA Access roles cp4d_user_group: CA_Analytics_Administrators # Mandatory, the CP4D User Group Name A Cognos Analytics (ca) instance can have multiple manage_access entries. Each entry consists of 1 ca_role and 1 cp4d_user_group element. The ca_role must be one of the following possible values: - Analytics Administrators - Analytics Explorers - Analytics Users - Analytics Viewer During the configuration of the instance authorization the following validations are performed: - LDAP configuration is completed - The provided ca_role is valid - The provided cp4d_user_group exists","title":"Cognos Analytics instance authorization"},{"location":"30-reference/configuration/cp4d-saml/","text":"Cloud Pak for Data SAML configuration \ud83d\udd17 You can configure Single Sign-on (SSO) by specifying a SAML server for the Cloud Pak for Data instance, which will take care of authenticating users. 
SAML configuration can be used in combination with the Cloud Pak for Data LDAP configuration, in which case LDAP complements the identity with access management (groups) for users. SAML configuration - cp4d_saml_config \ud83d\udd17 A cp4d_saml_config entry holds connection information, certificates and field configuration that is needed in the exchange between Cloud Pak for Data user management and the identity provider (idP). The entry must be created for every Cloud Pak for Data project that requires SAML authentication. When a cp4d_saml_config entry exists for a certain cp4d project, the user management pods are updated with a samlConfig.json file and then restarted. If an entry is removed later, the file is removed and the pods restarted again. When no changes are needed, the file in the pod is left untouched and no restart takes place. For more information regarding the Cloud Pak for Data SAML configuration, check the single sign-on documentation: https://www.ibm.com/docs/en/cloud-paks/cp-data/4.0?topic=client-configuring-sso cp4d_saml_config: - project: cpd entrypoint: \"https://prepiam.ice.ibmcloud.com/saml/sps/saml20ip/saml20/login\" field_to_authenticate: email sp_cert_secret: {{ env_id }}-cpd-sp-cert idp_cert_secret: {{ env_id }}-cpd-idp-cert issuer: \"cp4d\" identifier_format: \"\" callback_url: \"\" The above configuration delegates authentication to the IBM preproduction IAM server and authentication is done via the user's e-mail address. An issuer must be configured in the identity provider (idP) and the idP's certificate must be kept in the vault so Cloud Pak for Data can confirm its identity. Property explanation \ud83d\udd17 Property Description Mandatory Allowed values project Name of OpenShift project of the matching cp4d entry. The cp4d project must exist. 
Yes entrypoint URL of the identity provider (idP) login page Yes field_to_authenticate Name of the parameter to authenticate with the idP Yes sp_cert_secret Vault secret that holds the private certificate to authenticate to the idP. If not specified, requests will not be signed. No idp_cert_secret Vault secret that holds the public certificate of the idP. This confirms the identity of the idP Yes issuer The name you chose to register the Cloud Pak for Data instance with your idP Yes identifier_format Format of the requests from Cloud Pak for Data to the idP. If not specified, urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress is used No callback_url Specify the callback URL if you want to override the default of cp4d_url /auth/login/sso/callback No The callbackUrl field in the samlConfig.json file is automatically populated by the deployer if it is not specified by the cp4d_saml_config entry. It then consists of the Cloud Pak for Data base URL appended with /auth/login/sso/callback . Before running the deployer with SAML configuration, ensure that the secret configured for idp_cert_secret exists in the vault. Check Vault configuration for instructions on adding secrets to the vault.","title":"SAML"},{"location":"30-reference/configuration/cp4d-saml/#cloud-pak-for-data-saml-configuration","text":"You can configure Single Sign-on (SSO) by specifying a SAML server for the Cloud Pak for Data instance, which will take care of authenticating users. SAML configuration can be used in combination with the Cloud Pak for Data LDAP configuration, in which case LDAP complements the identity with access management (groups) for users.","title":"Cloud Pak for Data SAML configuration"},{"location":"30-reference/configuration/cp4d-saml/#saml-configuration---cp4d_saml_config","text":"A cp4d_saml_config entry holds connection information, certificates and field configuration that is needed in the exchange between Cloud Pak for Data user management and the identity provider (idP). 
The entry must be created for every Cloud Pak for Data project that requires SAML authentication. When a cp4d_saml_config entry exists for a certain cp4d project, the user management pods are updated with a samlConfig.json file and then restarted. If an entry is removed later, the file is removed and the pods restarted again. When no changes are needed, the file in the pod is left untouched and no restart takes place. For more information regarding the Cloud Pak for Data SAML configuration, check the single sign-on documentation: https://www.ibm.com/docs/en/cloud-paks/cp-data/4.0?topic=client-configuring-sso cp4d_saml_config: - project: cpd entrypoint: \"https://prepiam.ice.ibmcloud.com/saml/sps/saml20ip/saml20/login\" field_to_authenticate: email sp_cert_secret: {{ env_id }}-cpd-sp-cert idp_cert_secret: {{ env_id }}-cpd-idp-cert issuer: \"cp4d\" identifier_format: \"\" callback_url: \"\" The above configuration delegates authentication to the IBM preproduction IAM server and authentication is done via the user's e-mail address. An issuer must be configured in the identity provider (idP) and the idP's certificate must be kept in the vault so Cloud Pak for Data can confirm its identity.","title":"SAML configuration - cp4d_saml_config"},{"location":"30-reference/configuration/cp4d-saml/#property-explanation","text":"Property Description Mandatory Allowed values project Name of OpenShift project of the matching cp4d entry. The cp4d project must exist. Yes entrypoint URL of the identity provider (idP) login page Yes field_to_authenticate Name of the parameter to authenticate with the idP Yes sp_cert_secret Vault secret that holds the private certificate to authenticate to the idP. If not specified, requests will not be signed. No idp_cert_secret Vault secret that holds the public certificate of the idP. 
This confirms the identity of the idP Yes issuer The name you chose to register the Cloud Pak for Data instance with your idP Yes identifier_format Format of the requests from Cloud Pak for Data to the idP. If not specified, urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress is used No callback_url Specify the callback URL if you want to override the default of cp4d_url /auth/login/sso/callback No The callbackUrl field in the samlConfig.json file is automatically populated by the deployer if it is not specified by the cp4d_saml_config entry. It then consists of the Cloud Pak for Data base URL appended with /auth/login/sso/callback . Before running the deployer with SAML configuration, ensure that the secret configured for idp_cert_secret exists in the vault. Check Vault configuration for instructions on adding secrets to the vault.","title":"Property explanation"},{"location":"30-reference/configuration/cpd-global-config/","text":"Global configuration for Cloud Pak Deployer \ud83d\udd17 global_config \ud83d\udd17 Cloud Pak Deployer can use properties set in the global configuration ( global_config ) during the deployment process and also as substitution variables in the configuration, such as {{ env_id}} and {{ ibm_cloud_region }} . The following global_config variables are automatically copied into a \"simple\" form so they can be referenced in the configuration file(s) and also overridden using the command line. 
Variable name Description environment_name Name used to group secrets, typically you will specify sample cloud_platform Cloud platform applicable to configuration, such as ibm-cloud , aws , azure env_id Environment ID used in various other configuration objects ibm_cloud_region When Cloud Platform is ibm-cloud , the region into which the ROKS cluster is deployed aws_region When Cloud Platform is aws , the region into which the ROSA/self-managed OpenShift cluster is deployed azure_location When Cloud Platform is azure , the region into which the ARO OpenShift cluster is deployed universal_admin_user User name to be used for admin user (currently not used) universal_password Password to be used for all (admin) users if not specified in the vault confirm_destroy Is destroying of clusters, services/cartridges and instances allowed? For all other variables, you can refer to the qualified form, for example: \"{{ global_config.division }}\" Sample global configuration: global_config: environment_name: sample cloud_platform: ibm-cloud env_id: pluto-01 ibm_cloud_region: eu-de universal_password: very_secure_Passw0rd$ confirm_destroy: False If you run the cp-deploy.sh command and specify -e env_id=jupiter-03 , this will override the value in the global_config object. The same applies to the other variables.","title":"Global config"},{"location":"30-reference/configuration/cpd-global-config/#global-configuration-for-cloud-pak-deployer","text":"","title":"Global configuration for Cloud Pak Deployer"},{"location":"30-reference/configuration/cpd-global-config/#global_config","text":"Cloud Pak Deployer can use properties set in the global configuration ( global_config ) during the deployment process and also as substitution variables in the configuration, such as {{ env_id}} and {{ ibm_cloud_region }} . 
The following global_config variables are automatically copied into a \"simple\" form so they can be referenced in the configuration file(s) and also overridden using the command line. Variable name Description environment_name Name used to group secrets, typically you will specify sample cloud_platform Cloud platform applicable to configuration, such as ibm-cloud , aws , azure env_id Environment ID used in various other configuration objects ibm_cloud_region When Cloud Platform is ibm-cloud , the region into which the ROKS cluster is deployed aws_region When Cloud Platform is aws , the region into which the ROSA/self-managed OpenShift cluster is deployed azure_location When Cloud Platform is azure , the region into which the ARO OpenShift cluster is deployed universal_admin_user User name to be used for admin user (currently not used) universal_password Password to be used for all (admin) users if not specified in the vault confirm_destroy Is destroying of clusters, services/cartridges and instances allowed? For all other variables, you can refer to the qualified form, for example: \"{{ global_config.division }}\" Sample global configuration: global_config: environment_name: sample cloud_platform: ibm-cloud env_id: pluto-01 ibm_cloud_region: eu-de universal_password: very_secure_Passw0rd$ confirm_destroy: False If you run the cp-deploy.sh command and specify -e env_id=jupiter-03 , this will override the value in the global_config object. The same applies to the other variables.","title":"global_config"},{"location":"30-reference/configuration/cpd-objects/","text":"Configuration objects \ud83d\udd17 All objects used by the Cloud Pak Deployer are defined in a yaml format in files in the config directory. You can create a single yaml file holding all objects, or group objects in individual yaml files. At deployment time, all yaml files in the config directory are merged. 
To make it easier to navigate the different object types, they have been grouped into different tabs. You can also use the index below to find the definitions. Configuration \ud83d\udd17 Global configuration Vault configuration Infrastructure \ud83d\udd17 Infrastructure objects Provider Resource groups Virtual Private Clouds (VPCs) Security groups Security rules Address prefixes Subnets Floating ips Virtual Server Instances (VSIs) NFS Servers SSH keys Transit Gateways OpenShift object types \ud83d\udd17 Existing OpenShift OpenShift on IBM Cloud OpenShift on AWS - ROSA OpenShift on AWS - self-managed OpenShift on Microsoft Azure (ARO) OpenShift on vSphere Cloud Paks and related object types \ud83d\udd17 Cloud Pak for Data - cp4d Cloud Pak for Integration - cp4i Cloud Pak for Watson AIOps - cp4waiops Private registry Cloud Pak for Data Cartridges object types \ud83d\udd17 Cloud Pak for Data Control Plane - cpd_platform Cloud Pak for Data Cognos Analytics - ca Cloud Pak for Data Db2 OLTP - db2oltp Cloud Pak for Data Watson Studio - ws Cloud Pak for Data Watson Machine Learning - wml","title":"Objects overview"},{"location":"30-reference/configuration/cpd-objects/#configuration-objects","text":"All objects used by the Cloud Pak Deployer are defined in a yaml format in files in the config directory. You can create a single yaml file holding all objects, or group objects in individual yaml files. At deployment time, all yaml files in the config directory are merged. To make it easier to navigate the different object types, they have been grouped into different tabs. 
You can also use the index below to find the definitions.","title":"Configuration objects"},{"location":"30-reference/configuration/cpd-objects/#configuration","text":"Global configuration Vault configuration","title":"Configuration"},{"location":"30-reference/configuration/cpd-objects/#infrastructure","text":"Infrastructure objects Provider Resource groups Virtual Private Clouds (VPCs) Security groups Security rules Address prefixes Subnets Floating ips Virtual Server Instances (VSIs) NFS Servers SSH keys Transit Gateways","title":"Infrastructure"},{"location":"30-reference/configuration/cpd-objects/#openshift-object-types","text":"Existing OpenShift OpenShift on IBM Cloud OpenShift on AWS - ROSA OpenShift on AWS - self-managed OpenShift on Microsoft Azure (ARO) OpenShift on vSphere","title":"OpenShift object types"},{"location":"30-reference/configuration/cpd-objects/#cloud-paks-and-related-object-types","text":"Cloud Pak for Data - cp4d Cloud Pak for Integration - cp4i Cloud Pak for Watson AIOps - cp4waiops Private registry","title":"Cloud Paks and related object types"},{"location":"30-reference/configuration/cpd-objects/#cloud-pak-for-data-cartridges-object-types","text":"Cloud Pak for Data Control Plane - cpd_platform Cloud Pak for Data Cognos Analytics - ca Cloud Pak for Data Db2 OLTP - db2oltp Cloud Pak for Data Watson Studio - ws Cloud Pak for Data Watson Machine Learning - wml","title":"Cloud Pak for Data Cartridges object types"},{"location":"30-reference/configuration/dns/","text":"Upstream DNS servers for OpenShift \ud83d\udd17 When deploying OpenShift in a private network, one may want to reach additional private network services by their host name. Examples could be a database server, Hadoop cluster or an LDAP server. OpenShift provides a DNS operator which deploys and manages CoreDNS which takes care of name resolution for pods running inside the container platform, also known as DNS forwarding. 
If the services that need to be reachable are registered on public DNS servers, you typically do not have to configure upstream DNS servers. The upstream DNS used for a particular OpenShift cluster is configured like this: openshift: - name: sample ... upstream_dns: - name: sample-dns zones: - example.com dns_servers: - 172.31.2.73:53 The zones which have been defined for each of the upstream_dns configurations control which DNS server(s) will be used for name resolution. For example, if example.com is given as the zone and an upstream DNS server of 172.31.2.73:53 , any host name matching *.example.com will be resolved using DNS server 172.31.2.73 and port 53 . If you want to remove the upstream DNS that was previously configured, you can change the deployer configuration as below and run the deployer. Removing the upstream_dns element altogether will not make changes to the OpenShift DNS operator. upstream_dns: [] See https://docs.openshift.com/container-platform/4.8/networking/dns-operator.html for more information about the operator that is configured by specifying upstream DNS servers. Property explanation \ud83d\udd17 Property Description Mandatory Allowed values upstream_dns[] List of alternative upstream DNS server(s) for OpenShift No name Name of the upstream DNS entry Yes zones Specification of one or more zones for which the DNS server is applicable Yes dns_servers One or more DNS servers (host:port) that will resolve host names in the specified zone Yes","title":"DNS"},{"location":"30-reference/configuration/dns/#upstream-dns-servers-for-openshift","text":"When deploying OpenShift in a private network, one may want to reach additional private network services by their host name. Examples could be a database server, Hadoop cluster or an LDAP server. OpenShift provides a DNS operator which deploys and manages CoreDNS which takes care of name resolution for pods running inside the container platform, also known as DNS forwarding. 
If the services that need to be reachable are registered on public DNS servers, you typically do not have to configure upstream DNS servers. The upstream DNS used for a particular OpenShift cluster is configured like this: openshift: - name: sample ... upstream_dns: - name: sample-dns zones: - example.com dns_servers: - 172.31.2.73:53 The zones which have been defined for each of the upstream_dns configurations control which DNS server(s) will be used for name resolution. For example, if example.com is given as the zone and an upstream DNS server of 172.31.2.73:53 , any host name matching *.example.com will be resolved using DNS server 172.31.2.73 and port 53 . If you want to remove the upstream DNS that was previously configured, you can change the deployer configuration as below and run the deployer. Removing the upstream_dns element altogether will not make changes to the OpenShift DNS operator. upstream_dns: [] See https://docs.openshift.com/container-platform/4.8/networking/dns-operator.html for more information about the operator that is configured by specifying upstream DNS servers.","title":"Upstream DNS servers for OpenShift"},{"location":"30-reference/configuration/dns/#property-explanation","text":"Property Description Mandatory Allowed values upstream_dns[] List of alternative upstream DNS server(s) for OpenShift No name Name of the upstream DNS entry Yes zones Specification of one or more zones for which the DNS server is applicable Yes dns_servers One or more DNS servers (host:port) that will resolve host names in the specified zone Yes","title":"Property explanation"},{"location":"30-reference/configuration/infrastructure/","text":"Infrastructure \ud83d\udd17 For some of the cloud platforms, you must explicitly specify the infrastructure layer on which the OpenShift cluster(s) will be provisioned, or you can override the defaults. 
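The zone-based forwarding rule above (any host name matching *.example.com goes to the server configured for the example.com zone) can be sketched with a hypothetical helper; this is illustrative, not how CoreDNS or the deployer implements it.

```python
# Illustrative sketch of zone-based upstream DNS selection.
# upstream_for is a hypothetical helper, not part of the deployer.
def upstream_for(host, upstream_dns):
    """Return the dns_servers for the first zone that matches the host name."""
    for entry in upstream_dns:
        for zone in entry["zones"]:
            # Match the zone itself or any host name ending in ".<zone>"
            if host == zone or host.endswith("." + zone):
                return entry["dns_servers"]
    return None  # no zone matched: the default resolvers are used

upstream_dns = [
    {"name": "sample-dns", "zones": ["example.com"], "dns_servers": ["172.31.2.73:53"]},
]
print(upstream_for("db01.example.com", upstream_dns))   # ['172.31.2.73:53']
print(upstream_for("registry.redhat.io", upstream_dns)) # None
```

Host names outside any configured zone fall through to OpenShift's default name resolution.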
For IBM Cloud, you can configure the VPC, subnets, NFS server(s), other Virtual Server Instance(s) and a number of other objects. When provisioning OpenShift on vSphere, you can configure data center, data store, network and virtual machine definitions. For Azure ARO you configure a single object with information about the virtual network (vnet) to be used and the node server profiles. When deploying OpenShift on AWS you can specify an EFS server if you want to use elastic storage. This page lists all the objects you can configure for each of the supported cloud providers. - IBM Cloud - Microsoft Azure - Amazon AWS - vSphere IBM Cloud \ud83d\udd17 For IBM Cloud, the following object types are supported: provider resource_group ssh_keys address_prefix subnet network_acl security_group vsi transit_gateway nfs_server serviceid cos IBM Cloud provider \ud83d\udd17 Defines the provider that Terraform will use for managing the IBM Cloud assets. provider: - name: ibm region: eu-de Property explanation \ud83d\udd17 Property Description Mandatory Allowed values name Name of the provider cluster No ibm region Region to connect to Yes Any IBM Cloud region IBM Cloud resource_group \ud83d\udd17 The resource group is for cloud asset grouping purposes. You can define multiple resource groups in your IBM cloud account to group the provisioned assets. If you do not need to group your assets, choose default . resource_group: - name: default Property explanation \ud83d\udd17 Property Description Mandatory Allowed values name Name of the existing resource group Yes IBM Cloud ssh_keys \ud83d\udd17 SSH keys to connect to VSIs. If you have Virtual Server Instances in your VPC, you will need an SSH key to connect to them. SSH keys defined here will be looked up in the vault and created if they don't exist already. 
ssh_keys: - name: vsi-access managed: True Property explanation \ud83d\udd17 Property Description Mandatory Allowed values name Name of the SSH key in IBM Cloud Yes managed Determines if the SSH key will be created if it doesn't exist No True (default), False IBM Cloud security_rule \ud83d\udd17 Defines the services (or ports) which are allowed within the context of a VPC and/or VSI. security_rule: - name: https tcp: {port_min: 443, port_max: 443} - name: ssh tcp: {port_min: 22, port_max: 22} Property explanation \ud83d\udd17 Property Description Mandatory Allowed values name Name of the security rule Yes tcp Range of tcp ports ( port_min and port_max ) to allow No 1-65535 udp Range of udp ports ( port_min and port_max ) to allow No 1-65535 icmp ICMP Type and Code for IPv4 ( code and type ) to allow No 1-255 for code, 1-254 for type IBM Cloud vpc \ud83d\udd17 Defines the virtual private cloud which groups the provisioned objects (including VSIs and OpenShift cluster). vpc: - name: sample allow_inbound: ['ssh', 'https'] classic_access: false Property explanation \ud83d\udd17 Property Description Mandatory Allowed values name Name of the Virtual Private Cloud Yes managed Controls whether the VPC is managed. The default is True . Only set to False if the VPC is not managed but only referenced by other objects such as transit gateways. No True (default), False allow_inbound Security rules which are allowed for inbound traffic No Existing security_rule classic_access Connect VPC to IBM Cloud classic infrastructure resources No false (default), true IBM Cloud address_prefix \ud83d\udd17 Defines the zones used within the VPC, along with the subnet the addresses will be issued for. 
- name: sample-zone-1 vpc: sample zone: eu-de-1 cidr: 10.27.0.0/26 - name: sample-zone-2 vpc: sample zone: eu-de-2 cidr: 10.27.0.64/26 - name: sample-zone-3 vpc: sample zone: eu-de-3 cidr: 10.27.0.128/26 Property explanation \ud83d\udd17 Property Description Mandatory Allowed values name Name of the zone Yes zone Zone in the IBM Cloud Yes cidr Address range that IPs in this zone will fall into Yes vpc Virtual Private Cloud this address prefix belongs to Yes, inferred from vpc Existing vpc IBM Cloud subnet \ud83d\udd17 Defines the subnet that Virtual Server Instances and ROKS compute nodes will be attached to. subnet: - name: sample-subnet-zone-1 address_prefix: sample-zone-1 ipv4_cidr_block: 10.27.0.0/26 zone: eu-de-1 vpc: sample network_acl: sample-acl Property explanation \ud83d\udd17 Property Description Mandatory Allowed values name Name of the subnet Yes zone Zone this subnet belongs to Yes, inferred from address_prefix->zone ipv4_cidr_block Address range that IPs in this subnet will fall into Yes, inferred from address_prefix->cidr Range of subrange of zone address_prefix Zone of the address prefix definition Yes, inferred from address_prefix Existing address_prefix vpc Virtual Private Cloud this subnet prefix belongs to Yes, inferred from address_prefix->vpc Existing vpc network_acl Reference to the network access control list protecting this subnet No IBM Cloud network_acl \ud83d\udd17 Defines the network access control list to be associated with subnets to allow or deny traffic from or to external connections. The rules are processed in sequence per direction. Rules that appear higher in the list will be processed first. 
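The first matching rule in the direction of travel decides the outcome. A minimal evaluation sketch, simplified to TCP destination ports and assuming an implicit deny when no rule matches; not the deployer's or IBM Cloud's actual implementation.

```python
# Illustrative sketch of "rules higher in the list are processed first".
# Simplified to direction + TCP destination port; implicit deny is an assumption.
def acl_decision(rules, direction, dest_port):
    """Return the action of the first rule matching the traffic, else 'deny'."""
    for rule in rules:
        if rule["direction"] != direction:
            continue
        tcp = rule.get("tcp")
        if tcp is None:
            continue  # only TCP rules are modelled in this sketch
        if tcp.get("dest_port_min", 1) <= dest_port <= tcp.get("dest_port_max", 65535):
            return rule["action"]
    return "deny"  # assumed implicit default when no rule matches

rules = [
    {"name": "inbound-ssh", "action": "allow", "direction": "inbound",
     "tcp": {"dest_port_min": 22, "dest_port_max": 22}},
]
print(acl_decision(rules, "inbound", 22))   # allow
print(acl_decision(rules, "inbound", 443))  # deny
```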
network_acl: - name: \"{{ env_id }}-acl\" vpc_name: \"{{ env_id }}\" rules: - name: inbound-ssh action: allow # Can be allow or deny source: \"0.0.0.0/0\" destination: \"0.0.0.0/0\" direction: inbound tcp: source_port_min: 1 # optional source_port_max: 65535 # optional dest_port_min: 22 # optional dest_port_max: 22 # optional - name: output-udp action: deny # Can be allow or deny source: \"0.0.0.0/0\" destination: \"0.0.0.0/0\" direction: outbound udp: source_port_min: 1 # optional source_port_max: 65535 # optional dest_port_min: 1000 # optional dest_port_max: 2000 # optional - name: output-icmp action: allow # Can be allow or deny source: \"0.0.0.0/0\" destination: \"0.0.0.0/0\" direction: outbound icmp: code: 1 type: 1 Property explanation \ud83d\udd17 Property Description Mandatory Allowed values name Name of the network access control list Yes vpc_name Virtual Private Cloud this network ACL belongs to Yes rules Rules to be applied, every rule is an entry in the list Yes rules.name Unique name of the rule Yes rules.action Defines whether the traffic is allowed or denied Yes allow, deny rules.source Source address range that defines the rule Yes rules.destination Destination address range that defines the rule Yes rules.direction Inbound or outbound direction of the traffic Yes inbound, outbound rules.tcp Rule for TCP traffic No rules.tcp.source_port_min Low value of the source port range No, default=1 1-65535 rules.tcp.source_port_max High value of the source port range No, default=65535 1-65535 rules.tcp.dest_port_min Low value of the destination port range No, default=1 1-65535 rules.tcp.dest_port_max High value of the destination port range No, default=65535 1-65535 rules.udp Rule for UDP traffic No rules.udp.source_port_min Low value of the source port range No, default=1 1-65535 rules.udp.source_port_max High value of the source port range No, default=65535 1-65535 rules.udp.dest_port_min Low value of the destination port range No, default=1 1-65535 
rules.udp.dest_port_max High value of the destination port range No, default=65535 1-65535 rules.icmp Rule for ICMP traffic No rules.icmp.code ICMP traffic code No, default=all 0-255 rules.icmp.type ICMP traffic type No, default=all 0-254 IBM Cloud vsi \ud83d\udd17 Defines a Virtual Server Instance within the VPC. vsi: - name: sample-bastion infrastructure: type: vpc keys: - \"vsi-access\" image: ibm-redhat-8-3-minimal-amd64-3 subnet: sample-subnet-zone-1 primary_ipv4_address: 10.27.0.4 public_ip: True vpc_name: sample zone: eu-de-3 Property explanation \ud83d\udd17 Property Description Mandatory Allowed values name Name of the Virtual Server Instance Yes infrastructure Infrastructure attributes Yes infrastructure.type Infrastructure type Yes vpc infrastructure.allow_ip_spoofing Decide if IP spoofing is allowed for the interface or not No False (default), True infrastructure.keys List of SSH keys to attach to the VSI Yes, inferred from ssh_keys Existing ssh_keys infrastructure.image Operating system image to be used Yes Existing image in IBM Cloud infrastructure.profile Server profile to be used, for example cx2-2x4 Yes Existing profile in IBM Cloud infrastructure.subnet Subnet the VSI will be connected to Yes, inferred from subnet Existing subnet infrastructure.primary_ipv4_address IP v4 address that will be assigned to the VSI No If specified, address in the subnet range infrastructure.public_ip Must a public IP address be attached to this VSI? No False (default), True infrastructure.vpc_name Virtual Private Cloud this VSI belongs to Yes, inferred from vpc Existing vpc infrastructure.zone Zone the VSI will be placed into Yes, inferred from subnet->zone IBM Cloud transit_gateway \ud83d\udd17 Connects one or more VPCs to each other. 
transit_gateway: - name: sample-tgw location: eu-de connections: - vpc: other-vpc - vpc: sample Property explanation \ud83d\udd17 Property Description Mandatory Allowed values name Name of the transit gateway Yes location IBM Cloud location of the transit gateway Yes connections Defines which VPCs must be included in the transit gateway Yes connection.vpc Defines the VPC to include. Every VPC must exist in the configuration, even if not managed by this configuration. When referencing an existing VPC, make sure that there is a vpc object of that name with managed set to False . Yes Existing vpc IBM Cloud nfs_server \ud83d\udd17 Defines a Virtual Server Instance within the VPC that will be used as an NFS server. nfs_server: - name: sample-nfs infrastructure: type: vpc vpc_name: sample subnet: sample-subnet-zone-1 zone: eu-de-1 primary_ipv4_address: 10.27.0.5 image: ibm-redhat-8-3-minimal-amd64-3 profile: cx2-2x4 bastion_host: sample-bastion storage_folder: /data/nfs storage_profile: 10iops-tier keys: - \"sample-nfs-provision\" Property explanation \ud83d\udd17 Property Description Mandatory Allowed values name Name of the Virtual Server Instance Yes infrastructure Infrastructure attributes Yes infrastructure.image Operating system image to be used Yes Existing image in IBM Cloud infrastructure.profile Server profile to be used, for example cx2-2x4 Yes Existing profile in IBM Cloud infrastructure.type Type of infrastructure for the NFS server Yes vpc infrastructure.vpc_name Virtual Private Cloud this VSI belongs to Yes, inferred from vpc Existing vpc infrastructure.subnet Subnet the VSI will be connected to Yes, inferred from subnet Existing subnet infrastructure.zone Zone the VSI will be placed into Yes, inferred from subnet->zone infrastructure.primary_ipv4_address IP v4 address that will be assigned to the VSI No If specified, address in the subnet range infrastructure.bastion_host Specify the VSI of the bastion to reach this NFS server No 
infrastructure.storage_profile Storage profile that will be used Yes 3iops-tier, 5iops-tier, 10iops-tier infrastructure.volume_size_gb Size of the NFS server data volume Yes infrastructure.storage_folder Folder that holds the data, this will be mounted from the NFS storage class Yes infrastructure.keys List of SSH keys to attach to the NFS server VSI Yes, inferred from ssh_keys Existing ssh_keys infrastructure.allow_ip_spoofing Decide if IP spoofing is allowed for the interface or not No False (default), True IBM Cloud cos \ud83d\udd17 Defines an IBM Cloud Object Storage instance and allows buckets to be created. cos: - name: {{ env_id }}-cos plan: standard location: global serviceids: - name: {{ env_id }}-cos-serviceid roles: [\"Manager\", \"Viewer\", \"Administrator\"] buckets: - name: bucketone6c9d6840 cross_region_location: eu Property explanation \ud83d\udd17 Property Description Mandatory Allowed values name Name of the Cloud Object Storage instance Yes plan Plan of the Cloud Object Storage instance Yes location Location of the Cloud Object Storage instance Yes serviceids Collection of references to defined serviceids No serviceids.name Name of the serviceid Yes serviceids.roles An array of strings to define which role should be granted to the serviceid Yes buckets Collection of buckets that should be created inside the cos instance No buckets[].name Name of the bucket No buckets[].storage_class Storage class of the bucket No standard (default), vault, cold, flex, smart buckets[].endpoint_type Endpoint type of the bucket No public (default), private buckets[].cross_region_location If you use this parameter, do not set single_site_location or region_location at the same time. Yes (one of) us, eu, ap buckets[].region_location If you set this parameter, do not set single_site_location or cross_region_location at the same time. 
Yes (one of) au-syd, eu-de, eu-gb, jp-tok, us-east, us-south, ca-tor, jp-osa, br-sao buckets[].single_site_location If you set this parameter, do not set region_location or cross_region_location at the same time. Yes (one of) ams03, che01, hkg02, mel01, mex01, mil01, mon01, osl01, par01, sjc04, sao01, seo01, sng01, and tor01 serviceid \ud83d\udd17 Defines an iam_service_id that can be granted several role-based access rights by attaching iam_policies to it. serviceid: - name: sample-serviceid description: to access ibmcloud services from external servicekeys: - name: primarykey Property explanation \ud83d\udd17 Property Description Mandatory Allowed values name Name of the serviceid Yes description Short description of the serviceid No servicekeys Collection of servicekeys that should be created for the parent serviceid No servicekeys.name Name of the servicekey Yes Microsoft Azure \ud83d\udd17 For Microsoft Azure, the following object type is supported: azure Azure \ud83d\udd17 Defines an infrastructure configuration onto which OpenShift will be provisioned. 
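The three bucket location properties above (cross_region_location, region_location, single_site_location) are mutually exclusive and exactly one must be set. A small validation sketch; validate_bucket_location is a hypothetical helper, not part of the deployer.

```python
# Illustrative validator for the mutually exclusive bucket location properties.
def validate_bucket_location(bucket):
    """Return the location key used, or raise if not exactly one is set."""
    keys = ["cross_region_location", "region_location", "single_site_location"]
    set_keys = [k for k in keys if k in bucket]
    if len(set_keys) != 1:
        raise ValueError("exactly one of %s must be set" % ", ".join(keys))
    return set_keys[0]

# Matches the sample bucket in the cos definition above:
print(validate_bucket_location({"name": "bucketone6c9d6840",
                                "cross_region_location": "eu"}))  # cross_region_location
```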
azure: - name: sample resource_group: name: sample location: westeurope vnet: name: vnet address_space: 10.0.0.0/22 control_plane: subnet: name: control-plane-subnet address_prefixes: 10.0.0.0/23 compute: subnet: name: compute-subnet address_prefixes: 10.0.2.0/23 Properties explanation \ud83d\udd17 Property Description Mandatory Allowed values name Name of the azure definition object, will be referenced by openshift Yes resource_group Resource group attributes Yes resource_group.name Name of the resource group (will be provisioned) Yes unique value, it must not exist resource_group.location Azure location Yes to pick a different location, run: az account list-locations -o table vnet Virtual network attributes Yes vnet.name Name of the virtual network Yes vnet.address_space Address space of the virtual network Yes control_plane Control plane (master) nodes attributes Yes control_plane.subnet Control plane nodes subnet attributes Yes control_plane.subnet.name Name of the control plane nodes subnet Yes control_plane.subnet.address_prefixes Address prefixes of the control plane nodes subnet (divided by a , comma, if relevant) Yes control_plane.vm Control plane nodes virtual machine attributes Yes control_plane.vm.size Virtual machine size (aka flavour) of the control plane nodes Yes Standard_D8s_v3 , Standard_D16s_v3 , Standard_D32s_v3 compute Compute (worker) nodes attributes Yes compute.subnet Compute nodes subnet attributes Yes compute.subnet.name Name of the compute nodes subnet Yes compute.subnet.address_prefixes Address prefixes of the compute nodes subnet (divided by a , comma, if relevant) Yes compute.vm Compute nodes virtual machine attributes Yes compute.vm.size Virtual machine size (aka flavour) of the compute nodes Yes See the full list of supported virtual machine sizes compute.vm.disk_size_gb Disk size in GBs of the compute nodes virtual machine Yes minimum value is 128 compute.vm.count Number of compute nodes virtual machines Yes minimum value is 3 
Amazon \ud83d\udd17 For Amazon AWS, the following object types are supported: nfs_server AWS EFS Server nfs_server \ud83d\udd17 Defines a new Elastic File System (EFS) service that is connected to the OpenShift cluster within the same VPC. The file storage will be used as the back-end for the efs-nfs-client OpenShift storage class. nfs_server: - name: sample-elastic infrastructure: aws_region: eu-west-1 Property explanation \ud83d\udd17 Property Description Mandatory Allowed values name Name of the EFS File System service to be created Yes infrastructure Infrastructure attributes Yes infrastructure.aws_region AWS region where the storage will be provisioned Yes vSphere \ud83d\udd17 For vSphere, the following object types are supported: vsphere vm_definition nfs_server vSphere vsphere \ud83d\udd17 Defines the vSphere vCenter onto which OpenShift will be provisioned. vsphere: - name: sample vcenter: 10.99.92.13 datacenter: Datacenter1 datastore: Datastore1 cluster: Cluster1 network: \"VM Network\" folder: /Datacenter1/vm/sample Property explanation \ud83d\udd17 Property Description Mandatory Allowed values name Name of the vSphere definition, will be referenced by openshift Yes vcenter Host or IP address of the vSphere vCenter Yes datacenter vSphere Data Center to be used for the virtual machines Yes datastore vSphere Datastore to be used for the virtual machines Yes cluster vSphere cluster to be used for the virtual machines Yes resource_pool vSphere resource pool No network vSphere network to be used for the virtual machines Yes folder Fully qualified folder name into which the OpenShift cluster will be placed Yes vSphere vm_definition \ud83d\udd17 Defines the virtual machine properties to be used for the control-plane nodes and compute nodes. 
vm_definition: - name: control-plane vcpu: 8 memory_mb: 32768 boot_disk_size_gb: 100 - name: compute vcpu: 16 memory_mb: 65536 boot_disk_size_gb: 200 # Optional overrides for vsphere properties # datastore: Datastore1 # network: \"VM Network\" Property explanation \ud83d\udd17 Property Description Mandatory Allowed values name Name of the VM definition, will be referenced by openshift Yes vcpu Number of virtual CPUs to be assigned to the VMs Yes memory_mb Amount of memory in MiB of the virtual machines Yes boot_disk_size_gb Size of the virtual machine boot disk in GiB Yes datastore vSphere Datastore to be used for the virtual machines, overrides vsphere.datastore No network vSphere network to be used for the virtual machines, overrides vsphere.network No vSphere nfs_server \ud83d\udd17 Defines an existing NFS server that will be used for the OpenShift NFS storage class. nfs_server: - name: sample-nfs infrastructure: host_ip: 10.99.92.31 storage_folder: /data/nfs Property explanation \ud83d\udd17 Property Description Mandatory Allowed values name Name of the NFS server Yes infrastructure Infrastructure attributes Yes infrastructure.host_ip Host or IP address of the NFS server Yes infrastructure.storage_folder Folder that holds the data, this will be mounted from the NFS storage class Yes","title":"Infrastructure"},{"location":"30-reference/configuration/infrastructure/#infrastructure","text":"For some of the cloud platforms, you must explicitly specify the infrastructure layer on which the OpenShift cluster(s) will be provisioned, or you can override the defaults. For IBM Cloud, you can configure the VPC, subnets, NFS server(s), other Virtual Server Instance(s) and a number of other objects. When provisioning OpenShift on vSphere, you can configure data center, data store, network and virtual machine definitions. For Azure ARO you configure a single object with information about the virtual network (vnet) to be used and the node server profiles. 
When deploying OpenShift on AWS you can specify an EFS server if you want to use elastic storage. This page lists all the objects you can configure for each of the supported cloud providers. - IBM Cloud - Microsoft Azure - Amazon AWS - vSphere","title":"Infrastructure"},{"location":"30-reference/configuration/infrastructure/#ibm-cloud","text":"For IBM Cloud, the following object types are supported: provider resource_group ssh_keys address_prefix subnet network_acl security_group vsi transit_gateway nfs_server serviceid cos","title":"IBM Cloud"},{"location":"30-reference/configuration/infrastructure/#ibm-cloud-provider","text":"Defines the provider that Terraform will use for managing the IBM Cloud assets. provider: - name: ibm region: eu-de","title":"IBM Cloud provider"},{"location":"30-reference/configuration/infrastructure/#property-explanation","text":"Property Description Mandatory Allowed values name Name of the provider cluster No ibm region Region to connect to Yes Any IBM Cloud region","title":"Property explanation"},{"location":"30-reference/configuration/infrastructure/#ibm-cloud-resource_group","text":"The resource group is for cloud asset grouping purposes. You can define multiple resource groups in your IBM cloud account to group the provisioned assets. If you do not need to group your assets, choose default . resource_group: - name: default","title":"IBM Cloud resource_group"},{"location":"30-reference/configuration/infrastructure/#property-explanation_1","text":"Property Description Mandatory Allowed values name Name of the existing resource group Yes","title":"Property explanation"},{"location":"30-reference/configuration/infrastructure/#ibm-cloud-ssh_keys","text":"SSH keys to connect to VSIs. If you have Virtual Server Instances in your VPC, you will need an SSH key to connect to them. SSH keys defined here will be looked up in the vault and created if they don't exist already. 
ssh_keys: - name: vsi-access managed: True","title":"IBM Cloud ssh_keys"},{"location":"30-reference/configuration/infrastructure/#property-explanation_2","text":"Property Description Mandatory Allowed values name Name of the SSH key in IBM Cloud Yes managed Determines if the SSH key will be created if it doesn't exist No True (default), False","title":"Property explanation"},{"location":"30-reference/configuration/infrastructure/#ibm-cloud-security_rule","text":"Defines the services (or ports) which are allowed within the context of a VPC and/or VSI. security_rule: - name: https tcp: {port_min: 443, port_max: 443} - name: ssh tcp: {port_min: 22, port_max: 22}","title":"IBM Cloud security_rule"},{"location":"30-reference/configuration/infrastructure/#property-explanation_3","text":"Property Description Mandatory Allowed values name Name of the security rule Yes tcp Range of tcp ports ( port_min and port_max ) to allow No 1-65535 udp Range of udp ports ( port_min and port_max ) to allow No 1-65535 icmp ICMP Type and Code for IPv4 ( code and type ) to allow No 1-255 for code, 1-254 for type","title":"Property explanation"},{"location":"30-reference/configuration/infrastructure/#ibm-cloud-vpc","text":"Defines the virtual private cloud which groups the provisioned objects (including VSIs and OpenShift cluster). vpc: - name: sample allow_inbound: ['ssh', 'https'] classic_access: false","title":"IBM Cloud vpc"},{"location":"30-reference/configuration/infrastructure/#property-explanation_4","text":"Property Description Mandatory Allowed values name Name of the Virtual Private Cloud Yes managed Controls whether the VPC is managed. The default is True . Only set to False if the VPC is not managed but only referenced by other objects such as transit gateways. 
No True (default), False allow_inbound Security rules which are allowed for inbound traffic No Existing security_rule classic_access Connect VPC to IBM Cloud classic infrastructure resources No false (default), true","title":"Property explanation"},{"location":"30-reference/configuration/infrastructure/#ibm-cloud-address_prefix","text":"Defines the zones used within the VPC, along with the subnet the addresses will be issued for. - name: sample-zone-1 vpc: sample zone: eu-de-1 cidr: 10.27.0.0/26 - name: sample-zone-2 vpc: sample zone: eu-de-2 cidr: 10.27.0.64/26 - name: sample-zone-3 vpc: sample zone: eu-de-3 cidr: 10.27.0.128/26","title":"IBM Cloud address_prefix"},{"location":"30-reference/configuration/infrastructure/#property-explanation_5","text":"Property Description Mandatory Allowed values name Name of the zone Yes zone Zone in the IBM Cloud Yes cidr Address range that IPs in this zone will fall into Yes vpc Virtual Private Cloud this address prefix belongs to Yes, inferred from vpc Existing vpc","title":"Property explanation"},{"location":"30-reference/configuration/infrastructure/#ibm-cloud-subnet","text":"Defines the subnet that Virtual Server Instances and ROKS compute nodes will be attached to. 
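A subnet's ipv4_cidr_block must be a range or subrange of the cidr of the address_prefix it references. A quick check with the standard library ipaddress module; illustrative only, this validation is not claimed to be part of the deployer.

```python
import ipaddress

# Illustrative check that a subnet's ipv4_cidr_block falls within the
# cidr of its address_prefix (a network is considered a subnet of itself).
def subnet_fits_prefix(subnet_cidr, prefix_cidr):
    return ipaddress.ip_network(subnet_cidr).subnet_of(ipaddress.ip_network(prefix_cidr))

# Values from the samples above: zone cidr 10.27.0.0/26, subnet 10.27.0.0/26
print(subnet_fits_prefix("10.27.0.0/26", "10.27.0.0/26"))   # True
print(subnet_fits_prefix("10.27.0.64/26", "10.27.0.0/26"))  # False (different zone)
```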
subnet: - name: sample-subnet-zone-1 address_prefix: sample-zone-1 ipv4_cidr_block: 10.27.0.0/26 zone: eu-de-1 vpc: sample network_acl: sample-acl","title":"IBM Cloud subnet"},{"location":"30-reference/configuration/infrastructure/#property-explanation_6","text":"Property Description Mandatory Allowed values name Name of the subnet Yes zone Zone this subnet belongs to Yes, inferred from address_prefix->zone ipv4_cidr_block Address range that IPs in this subnet will fall into Yes, inferred from address_prefix->cidr Range of subrange of zone address_prefix Zone of the address prefix definition Yes, inferred from address_prefix Existing address_prefix vpc Virtual Private Cloud this subnet prefix belongs to Yes, inferred from address_prefix->vpc Existing vpc network_acl Reference to the network access control list protecting this subnet No","title":"Property explanation"},{"location":"30-reference/configuration/infrastructure/#ibm-cloud-network_acl","text":"Defines the network access control list to be associated with subnets to allow or deny traffic from or to external connections. The rules are processed in sequence per direction. Rules that appear higher in the list will be processed first. 
network_acl: - name: \"{{ env_id }}-acl\" vpc_name: \"{{ env_id }}\" rules: - name: inbound-ssh action: allow # Can be allow or deny source: \"0.0.0.0/0\" destination: \"0.0.0.0/0\" direction: inbound tcp: source_port_min: 1 # optional source_port_max: 65535 # optional dest_port_min: 22 # optional dest_port_max: 22 # optional - name: output-udp action: deny # Can be allow or deny source: \"0.0.0.0/0\" destination: \"0.0.0.0/0\" direction: outbound udp: source_port_min: 1 # optional source_port_max: 65535 # optional dest_port_min: 1000 # optional dest_port_max: 2000 # optional - name: output-icmp action: allow # Can be allow or deny source: \"0.0.0.0/0\" destination: \"0.0.0.0/0\" direction: outbound icmp: code: 1 type: 1","title":"IBM Cloud network_acl"},{"location":"30-reference/configuration/infrastructure/#property-explanation_7","text":"Property Description Mandatory Allowed values name Name of the network access control list Yes vpc_name Virtual Private Cloud this network ACL belongs to Yes rules Rules to be applied, every rule is an entry in the list Yes rules.name Unique name of the rule Yes rules.action Defines whether the traffic is allowed or denied Yes allow, deny rules.source Source address range that defines the rule Yes rules.destination Destination address range that defines the rule Yes rules.direction Inbound or outbound direction of the traffic Yes inbound, outbound rules.tcp Rule for TCP traffic No rules.tcp.source_port_min Low value of the source port range No, default=1 1-65535 rules.tcp.source_port_max High value of the source port range No, default=65535 1-65535 rules.tcp.dest_port_min Low value of the destination port range No, default=1 1-65535 rules.tcp.dest_port_max High value of the destination port range No, default=65535 1-65535 rules.udp Rule for UDP traffic No rules.udp.source_port_min Low value of the source port range No, default=1 1-65535 rules.udp.source_port_max High value of the source port range No, default=65535 1-65535 
rules.udp.dest_port_min Low value of the destination port range No, default=1 1-65535 rules.udp.dest_port_max High value of the destination port range No, default=65535 1-65535 rules.icmp Rule for ICMP traffic No rules.icmp.code ICMP traffic code No, default=all 0-255 rules.icmp.type ICMP traffic type No, default=all 0-254","title":"Property explanation"},{"location":"30-reference/configuration/infrastructure/#ibm-cloud-vsi","text":"Defines a Virtual Server Instance within the VPC. vsi: - name: sample-bastion infrastructure: type: vpc keys: - \"vsi-access\" image: ibm-redhat-8-3-minimal-amd64-3 subnet: sample-subnet-zone-1 primary_ipv4_address: 10.27.0.4 public_ip: True vpc_name: sample zone: eu-de-3","title":"IBM Cloud vsi"},{"location":"30-reference/configuration/infrastructure/#property-explanation_8","text":"Property Description Mandatory Allowed values name Name of the Virtual Server Instance Yes infrastructure Infrastructure attributes Yes infrastructure.type Infrastructure type Yes vpc infrastructure.allow_ip_spoofing Decide if IP spoofing is allowed for the interface or not No False (default), True infrastructure.keys List of SSH keys to attach to the VSI Yes, inferred from ssh_keys Existing ssh_keys infrastructure.image Operating system image to be used Yes Existing image in IBM Cloud infrastructure.profile Server profile to be used, for example cx2-2x4 Yes Existing profile in IBM Cloud infrastructure.subnet Subnet the VSI will be connected to Yes, inferred from subnet Existing subnet infrastructure.primary_ipv4_address IPv4 address that will be assigned to the VSI No If specified, address in the subnet range infrastructure.public_ip Must a public IP address be attached to this VSI? 
No False (default), True infrastructure.vpc_name Virtual Private Cloud this VSI belongs to Yes, inferred from vpc Existing vpc infrastructure.zone Zone the VSI will be placed into Yes, inferred from subnet->zone","title":"Property explanation"},{"location":"30-reference/configuration/infrastructure/#ibm-cloud-transit_gateway","text":"Connects one or more VPCs to each other. transit_gateway: - name: sample-tgw location: eu-de connections: - vpc: other-vpc - vpc: sample","title":"IBM Cloud transit_gateway"},{"location":"30-reference/configuration/infrastructure/#property-explanation_9","text":"Property Description Mandatory Allowed values name Name of the transit gateway Yes location IBM Cloud location of the transit gateway Yes connections Defines which VPCs must be included in the transit gateway Yes connection.vpc Defines the VPC to include. Every VPC must exist in the configuration, even if not managed by this configuration. When referencing an existing VPC, make sure that there is a vpc object of that name with managed set to False . Yes Existing vpc","title":"Property explanation"},{"location":"30-reference/configuration/infrastructure/#ibm-cloud-nfs_server","text":"Defines a Virtual Server Instance within the VPC that will be used as an NFS server. 
nfs_server: - name: sample-nfs infrastructure: type: vpc vpc_name: sample subnet: sample-subnet-zone-1 zone: eu-de-1 primary_ipv4_address: 10.27.0.5 image: ibm-redhat-8-3-minimal-amd64-3 profile: cx2-2x4 bastion_host: sample-bastion storage_folder: /data/nfs storage_profile: 10iops-tier keys: - \"sample-nfs-provision\"","title":"IBM Cloud nfs_server"},{"location":"30-reference/configuration/infrastructure/#property-explanation_10","text":"Property Description Mandatory Allowed values name Name of the Virtual Server Instance Yes infrastructure Infrastructure attributes Yes infrastructure.image Operating system image to be used Yes Existing image in IBM Cloud infrastructure.profile Server profile to be used, for example cx2-2x4 Yes Existing profile in IBM Cloud infrastructure.type Type of infrastructure for NFS servers Yes vpc infrastructure.vpc_name Virtual Private Cloud this VSI belongs to Yes, inferred from vpc Existing vpc infrastructure.subnet Subnet the VSI will be connected to Yes, inferred from subnet Existing subnet infrastructure.zone Zone the VSI will be placed into Yes, inferred from subnet->zone infrastructure.primary_ipv4_address IPv4 address that will be assigned to the VSI No If specified, address in the subnet range infrastructure.bastion_host Specify the VSI of the bastion to reach this NFS server No infrastructure.storage_profile Storage profile that will be used Yes 3iops-tier, 5iops-tier, 10iops-tier infrastructure.volume_size_gb Size of the NFS server data volume Yes infrastructure.storage_folder Folder that holds the data, this will be mounted from the NFS storage class Yes infrastructure.keys List of SSH keys to attach to the NFS server VSI Yes, inferred from ssh_keys Existing ssh_keys infrastructure.allow_ip_spoofing Decide if IP spoofing is allowed for the interface or not No False (default), True","title":"Property explanation"},{"location":"30-reference/configuration/infrastructure/#ibm-cloud-cos","text":"Defines an IBM Cloud 
Object Storage instance and allows buckets to be created. cos: - name: {{ env_id }}-cos plan: standard location: global serviceids: - name: {{ env_id }}-cos-serviceid roles: [\"Manager\", \"Viewer\", \"Administrator\"] buckets: - name: bucketone6c9d6840 cross_region_location: eu","title":"IBM Cloud cos"},{"location":"30-reference/configuration/infrastructure/#property-explanation_11","text":"Property Description Mandatory Allowed values name Name of the Cloud Object Storage instance Yes plan Plan of the Cloud Object Storage instance Yes location Location of the Cloud Object Storage instance Yes serviceids Collection of references to defined serviceids No serviceids.name Name of the serviceid Yes serviceids.roles An array of strings to define which role should be granted to the serviceid Yes buckets Collection of buckets that should be created inside the cos instance No buckets[].name Name of the bucket No buckets[].storage_class Storage class of the bucket No standard (default), vault, cold, flex, smart buckets[].endpoint_type Endpoint type of the bucket No public (default), private buckets[].cross_region_location If you use this parameter, do not set single_site_location or region_location at the same time. Yes (one of) us, eu, ap buckets[].region_location If you set this parameter, do not set single_site_location or cross_region_location at the same time. Yes (one of) au-syd, eu-de, eu-gb, jp-tok, us-east, us-south, ca-tor, jp-osa, br-sao buckets[].single_site_location If you set this parameter, do not set region_location or cross_region_location at the same time. Yes (one of) ams03, che01, hkg02, mel01, mex01, mil01, mon01, osl01, par01, sjc04, sao01, seo01, sng01, and tor01","title":"Property explanation"},{"location":"30-reference/configuration/infrastructure/#serviceid","text":"Defines an iam_service_id that can be granted role-based access rights by attaching iam_policies to it. 
serviceid: - name: sample-serviceid description: to access ibmcloud services from external servicekeys: - name: primarykey","title":"serviceid"},{"location":"30-reference/configuration/infrastructure/#property-explanation_12","text":"Property Description Mandatory Allowed values name Name of the serviceid Yes description short description of the serviceid No servicekeys collection of servicekeys that should be created for the parent serviceid No servicekeys.name Name of the servicekey Yes","title":"Property explanation"},{"location":"30-reference/configuration/infrastructure/#microsoft-azure","text":"For Microsoft Azure, the following object type is supported: azure","title":"Microsoft Azure"},{"location":"30-reference/configuration/infrastructure/#azure","text":"Defines an infrastructure configuration onto which OpenShift will be provisioned. azure: - name: sample resource_group: name: sample location: westeurope vnet: name: vnet address_space: 10.0.0.0/22 control_plane: subnet: name: control-plane-subnet address_prefixes: 10.0.0.0/23 compute: subnet: name: compute-subnet address_prefixes: 10.0.2.0/23","title":"Azure"},{"location":"30-reference/configuration/infrastructure/#properties-explanation","text":"Property Description Mandatory Allowed values name Name of the azure definition object, will be referenced by openshift Yes resource_group Resource group attributes Yes resource_group.name Name of the resource group (will be provisioned) Yes unique value, it must not exist resource_group.location Azure location Yes to pick a different location, run: az account list-locations -o table vnet Virtual network attributes Yes vnet.name Name of the virtual network Yes vnet.address_space Address space of the virtual network Yes control_plane Control plane (master) nodes attributes Yes control_plane.subnet Control plane nodes subnet attributes Yes control_plane.subnet.name Name of the control plane nodes subnet Yes control_plane.subnet.address_prefixes Address prefixes of 
the control plane nodes subnet (comma-separated, if more than one) Yes control_plane.vm Control plane nodes virtual machine attributes Yes control_plane.vm.size Virtual machine size (aka flavour) of the control plane nodes Yes Standard_D8s_v3 , Standard_D16s_v3 , Standard_D32s_v3 compute Compute (worker) nodes attributes Yes compute.subnet Compute nodes subnet attributes Yes compute.subnet.name Name of the compute nodes subnet Yes compute.subnet.address_prefixes Address prefixes of the compute nodes subnet (comma-separated, if more than one) Yes compute.vm Compute nodes virtual machine attributes Yes compute.vm.size Virtual machine size (aka flavour) of the compute nodes Yes See the full list of supported virtual machine sizes compute.vm.disk_size_gb Disk size in GB of the compute nodes virtual machine Yes minimum value is 128 compute.vm.count Number of compute nodes virtual machines Yes minimum value is 3","title":"Properties explanation"},{"location":"30-reference/configuration/infrastructure/#amazon","text":"For Amazon AWS, the following object types are supported: nfs_server","title":"Amazon"},{"location":"30-reference/configuration/infrastructure/#aws-efs-server-nfs_server","text":"Defines a new Elastic File System (EFS) service that is connected to the OpenShift cluster within the same VPC. The file storage will be used as the back-end for the efs-nfs-client OpenShift storage class. 
nfs_server: - name: sample-elastic infrastructure: aws_region: eu-west-1","title":"AWS EFS Server nfs_server"},{"location":"30-reference/configuration/infrastructure/#property-explanation_13","text":"Property Description Mandatory Allowed values name Name of the EFS File System service to be created Yes infrastructure Infrastructure attributes Yes infrastructure.aws_region AWS region where the storage will be provisioned Yes","title":"Property explanation"},{"location":"30-reference/configuration/infrastructure/#vsphere","text":"For vSphere, the following object types are supported: vsphere vm_definition nfs_server","title":"vSphere"},{"location":"30-reference/configuration/infrastructure/#vsphere-vsphere","text":"Defines the vSphere vCenter onto which OpenShift will be provisioned. vsphere: - name: sample vcenter: 10.99.92.13 datacenter: Datacenter1 datastore: Datastore1 cluster: Cluster1 network: \"VM Network\" folder: /Datacenter1/vm/sample","title":"vSphere vsphere"},{"location":"30-reference/configuration/infrastructure/#property-explanation_14","text":"Property Description Mandatory Allowed values name Name of the vSphere definition, will be referenced by openshift Yes vcenter Host or IP address of the vCenter server Yes datacenter vSphere Data Center to be used for the virtual machines Yes datastore vSphere Datastore to be used for the virtual machines Yes cluster vSphere cluster to be used for the virtual machines Yes resource_pool vSphere resource pool No network vSphere network to be used for the virtual machines Yes folder Fully qualified folder name into which the OpenShift cluster will be placed Yes","title":"Property explanation"},{"location":"30-reference/configuration/infrastructure/#vsphere-vm_definition","text":"Defines the virtual machine properties to be used for the control-plane nodes and compute nodes. 
vm_definition: - name: control-plane vcpu: 8 memory_mb: 32768 boot_disk_size_gb: 100 - name: compute vcpu: 16 memory_mb: 65536 boot_disk_size_gb: 200 # Optional overrides for vsphere properties # datastore: Datastore1 # network: \"VM Network\"","title":"vSphere vm_definition"},{"location":"30-reference/configuration/infrastructure/#property-explanation_15","text":"Property Description Mandatory Allowed values name Name of the VM definition, will be referenced by openshift Yes vcpu Number of virtual CPUs to be assigned to the VMs Yes memory_mb Amount of memory in MiB of the virtual machines Yes boot_disk_size_gb Size of the virtual machine boot disk in GiB Yes datastore vSphere Datastore to be used for the virtual machines, overrides vsphere.datastore No network vSphere network to be used for the virtual machines, overrides vsphere.network No","title":"Property explanation"},{"location":"30-reference/configuration/infrastructure/#vsphere-nfs_server","text":"Defines an existing NFS server that will be used for the OpenShift NFS storage class. 
nfs_server: - name: sample-nfs infrastructure: host_ip: 10.99.92.31 storage_folder: /data/nfs","title":"vSphere nfs_server"},{"location":"30-reference/configuration/infrastructure/#property-explanation_16","text":"Property Description Mandatory Allowed values name Name of the NFS server Yes infrastructure Infrastructure attributes Yes infrastructure.host_ip Host or IP address of the NFS server Yes infrastructure.storage_folder Folder that holds the data, this will be mounted from the NFS storage class Yes","title":"Property explanation"},{"location":"30-reference/configuration/logging-auditing/","text":"Logging and auditing for Cloud Paks \ud83d\udd17 For logging and auditing of Cloud Pak for Data we make use of the OpenShift logging framework, which delivers a lot of flexibility in capturing logs from applications, storing them in an ElasticSearch datastore in the cluster (currently not supported by the deployer), or forwarding the log entries to external log collectors such as ElasticSearch, Fluentd, Loki and others. OpenShift logging captures 3 types of logging entries from workloads running on the cluster: infrastructure - logs generated by OpenShift processes audit - audit logs generated by applications as well as OpenShift application - all other applications on the cluster Logging configuration - openshift_logging \ud83d\udd17 Defines how OpenShift forwards the logs to external log collectors. Currently, the following log collector types are supported: loki When OpenShift logging is activated via the openshift_logging object, all 3 logging types are activated automatically. You can specify logging_output items to forward log records to the log collector of your choice. 
In the below example, the application logs are forwarded to a loki server https://loki-application.sample.com and audit logs to https://loki-audit.sample.com , both have the same certificate to connect with: openshift_logging: - openshift_cluster_name: pluto-01 configure_es_log_store: False cluster_wide_logging: - input: application logging_name: loki-application - input: infrastructure logging_name: loki-application - input: audit logging_name: loki-audit logging_output: - name: loki-application type: loki url: https://loki-application.sample.com certificates: cert: pluto-01-loki-cert key: pluto-01-loki-key ca: pluto-01-loki-ca - name: loki-audit type: loki url: https://loki-audit.sample.com certificates: cert: pluto-01-loki-cert key: pluto-01-loki-key ca: pluto-01-loki-ca Cloud Pak for Data and Foundational Services application logs are automatically picked up and forwarded to the loki-application logging destination and no additional configuration is needed. Property explanation \ud83d\udd17 Property Description Mandatory Allowed values openshift_cluster_name Name of the OpenShift cluster to configure the logging for Yes configure_es_log_store Must internal ElasticSearch log store and Kibana be provisioned? (default False) No True, False (default) cluster_wide_logging Defines which classes of log records will be sent to the log collectors No cluster_wide_logging.input Specifies OpenShift log records class to forward Yes application, infrastructure, audit cluster_wide_logging.logging_name Specifies the logging_output to send the records to . If not specified, records will be sent to the internal log only No cluster_wide_logging.labels Specify your own labels to be added to the log records. Every logging input/output combination can have its own labels No logging_output Defines the log collectors. 
If configure_es_log_store is True, output will always be sent to the internal ES log store No logging_output.name Log collector name, referenced by cluster_wide_logging or cp4d_audit Yes logging_output.type Type of the log collector, currently only loki is possible Yes loki logging_output.url URL of the log collector; this URL must be reachable from within the cluster Yes logging_output.certificates Defines the vault secrets that hold the certificate elements Yes, if url is https logging_output.certificates.cert Public certificate to connect to the URL Yes logging_output.certificates.key Private key to connect to the URL Yes logging_output.certificates.ca Certificate Authority bundle to connect to the URL Yes If you also want to activate audit logging for Cloud Pak for Data, you can do this by adding a cp4d_audit_config object to your configuration. With the below example, the Cloud Pak for Data audit logger is configured to write log records to the standard output ( stdout ) of the pods, after which they are forwarded to the loki-audit logging destination by a ClusterLogForwarder custom resource. Optionally labels can be specified which are added to the ClusterLogForwarder custom resource pipeline entry. cp4d_audit_config: - project: cpd audit_replicas: 2 audit_output: - type: openshift-logging logging_name: loki-audit labels: cluster_name: \"{{ env_id }}\" Info Because audit log entries are written to the standard output, they will also be picked up by the generic application log forwarder and will therefore also appear in the application logging destination. Cloud Pak for Data audit configuration \ud83d\udd17 IBM Cloud Pak for Data has a centralized auditing component for base platform and services auditable events. Audit events include login and logout to the platform, creation and deletion of connections and many more. 
Services that support auditing are documented here: https://www.ibm.com/docs/en/cloud-paks/cp-data/4.0?topic=data-services-that-support-audit-logging The Cloud Pak Deployer simplifies the recording of audit log entries by means of the OpenShift logging framework, which can in turn be configured to forward entries to various log collectors such as Fluentd, Loki and ElasticSearch. Audit configuration - cp4d_audit_config \ud83d\udd17 A cp4d_audit_config entry defines the audit configuration for a Cloud Pak for Data instance (OpenShift project). The main configuration items are the number of replicas and the output. Currently only one output type is supported: openshift-logging , which allows the OpenShift logging framework to pick up audit entries and forward them to the designated collectors. When a cp4d_audit_config entry exists for a certain cp4d project, the zen-audit-config ConfigMap is updated and then the audit logging deployment is restarted. If no configuration changes have been made, no restart is done. Additionally, for the audit_output entries, the OpenShift logging ClusterLogForwarder instance is updated to forward audit entries to the designated logging output. In the example below the auditing is configured with 2 replicas and an input and pipeline are added to the ClusterLogForwarder instance so that output is sent to the matching channel defined in openshift_logging.logging_output . cp4d_audit_config: - project: cpd audit_replicas: 2 audit_output: - type: openshift-logging logging_name: loki-audit labels: cluster_name: \"{{ env_id }}\" Property explanation \ud83d\udd17 Property Description Mandatory Allowed values project Name of OpenShift project of the matching cp4d entry. The cp4d project must exist. Yes audit_replicas Number of replicas for the Cloud Pak for Data audit logger. 
No (default 1) audit_output Defines where the audit logs should be written to Yes audit_output.type Type of auditing output, defines where audit logging entries will be written Yes openshift-logging audit_output.logging_name Name of the logging_output entry in the openshift_logging object. This logging_output entry must exist. Yes audit_output.labels Optional list of labels set to the ClusterLogForwarder custom resource pipeline No","title":"Logging and auditing"},{"location":"30-reference/configuration/logging-auditing/#logging-and-auditing-for-cloud-paks","text":"For logging and auditing of Cloud Pak for Data we make use of the OpenShift logging framework, which delivers a lot of flexibility in capturing logs from applications, storing them in an ElasticSearch datastore in the cluster (currently not supported by the deployer), or forwarding the log entries to external log collectors such as ElasticSearch, Fluentd, Loki and others. OpenShift logging captures 3 types of logging entries from workloads running on the cluster: infrastructure - logs generated by OpenShift processes audit - audit logs generated by applications as well as OpenShift application - all other applications on the cluster","title":"Logging and auditing for Cloud Paks"},{"location":"30-reference/configuration/logging-auditing/#logging-configuration---openshift_logging","text":"Defines how OpenShift forwards the logs to external log collectors. Currently, the following log collector types are supported: loki When OpenShift logging is activated via the openshift_logging object, all 3 logging types are activated automatically. You can specify logging_output items to forward log records to the log collector of your choice. 
In the below example, the application logs are forwarded to a loki server https://loki-application.sample.com and audit logs to https://loki-audit.sample.com , both have the same certificate to connect with: openshift_logging: - openshift_cluster_name: pluto-01 configure_es_log_store: False cluster_wide_logging: - input: application logging_name: loki-application - input: infrastructure logging_name: loki-application - input: audit logging_name: loki-audit logging_output: - name: loki-application type: loki url: https://loki-application.sample.com certificates: cert: pluto-01-loki-cert key: pluto-01-loki-key ca: pluto-01-loki-ca - name: loki-audit type: loki url: https://loki-audit.sample.com certificates: cert: pluto-01-loki-cert key: pluto-01-loki-key ca: pluto-01-loki-ca Cloud Pak for Data and Foundational Services application logs are automatically picked up and forwarded to the loki-application logging destination and no additional configuration is needed.","title":"Logging configuration - openshift_logging"},{"location":"30-reference/configuration/logging-auditing/#property-explanation","text":"Property Description Mandatory Allowed values openshift_cluster_name Name of the OpenShift cluster to configure the logging for Yes configure_es_log_store Must internal ElasticSearch log store and Kibana be provisioned? (default False) No True, False (default) cluster_wide_logging Defines which classes of log records will be sent to the log collectors No cluster_wide_logging.input Specifies OpenShift log records class to forward Yes application, infrastructure, audit cluster_wide_logging.logging_name Specifies the logging_output to send the records to . If not specified, records will be sent to the internal log only No cluster_wide_logging.labels Specify your own labels to be added to the log records. Every logging input/output combination can have its own labels No logging_output Defines the log collectors. 
If configure_es_log_store is True, output will always be sent to the internal ES log store No logging_output.name Log collector name, referenced by cluster_wide_logging or cp4d_audit Yes logging_output.type Type of the log collector, currently only loki is possible Yes loki logging_output.url URL of the log collector; this URL must be reachable from within the cluster Yes logging_output.certificates Defines the vault secrets that hold the certificate elements Yes, if url is https logging_output.certificates.cert Public certificate to connect to the URL Yes logging_output.certificates.key Private key to connect to the URL Yes logging_output.certificates.ca Certificate Authority bundle to connect to the URL Yes If you also want to activate audit logging for Cloud Pak for Data, you can do this by adding a cp4d_audit_config object to your configuration. With the below example, the Cloud Pak for Data audit logger is configured to write log records to the standard output ( stdout ) of the pods, after which they are forwarded to the loki-audit logging destination by a ClusterLogForwarder custom resource. Optionally labels can be specified which are added to the ClusterLogForwarder custom resource pipeline entry. cp4d_audit_config: - project: cpd audit_replicas: 2 audit_output: - type: openshift-logging logging_name: loki-audit labels: cluster_name: \"{{ env_id }}\" Info Because audit log entries are written to the standard output, they will also be picked up by the generic application log forwarder and will therefore also appear in the application logging destination.","title":"Property explanation"},{"location":"30-reference/configuration/logging-auditing/#cloud-pak-for-data-audit-configuration","text":"IBM Cloud Pak for Data has a centralized auditing component for base platform and services auditable events. Audit events include login and logout to the platform, creation and deletion of connections and many more. 
Services that support auditing are documented here: https://www.ibm.com/docs/en/cloud-paks/cp-data/4.0?topic=data-services-that-support-audit-logging The Cloud Pak Deployer simplifies the recording of audit log entries by means of the OpenShift logging framework, which can in turn be configured to forward entries to various log collectors such as Fluentd, Loki and ElasticSearch.","title":"Cloud Pak for Data audit configuration"},{"location":"30-reference/configuration/logging-auditing/#audit-configuration---cp4d_audit_config","text":"A cp4d_audit_config entry defines the audit configuration for a Cloud Pak for Data instance (OpenShift project). The main configuration items are the number of replicas and the output. Currently only one output type is supported: openshift-logging , which allows the OpenShift logging framework to pick up audit entries and forward them to the designated collectors. When a cp4d_audit_config entry exists for a certain cp4d project, the zen-audit-config ConfigMap is updated and then the audit logging deployment is restarted. If no configuration changes have been made, no restart is done. Additionally, for the audit_output entries, the OpenShift logging ClusterLogForwarder instance is updated to forward audit entries to the designated logging output. In the example below the auditing is configured with 2 replicas and an input and pipeline are added to the ClusterLogForwarder instance so that output is sent to the matching channel defined in openshift_logging.logging_output . cp4d_audit_config: - project: cpd audit_replicas: 2 audit_output: - type: openshift-logging logging_name: loki-audit labels: cluster_name: \"{{ env_id }}\"","title":"Audit configuration - cp4d_audit_config"},{"location":"30-reference/configuration/logging-auditing/#property-explanation_1","text":"Property Description Mandatory Allowed values project Name of OpenShift project of the matching cp4d entry. The cp4d project must exist. 
Yes audit_replicas Number of replicas for the Cloud Pak for Data audit logger. No (default 1) audit_output Defines where the audit logs should be written to Yes audit_output.type Type of auditing output, defines where audit logging entries will be written Yes openshift-logging audit_output.logging_name Name of the logging_output entry in the openshift_logging object. This logging_output entry must exist. Yes audit_output.labels Optional list of labels set to the ClusterLogForwarder custom resource pipeline No","title":"Property explanation"},{"location":"30-reference/configuration/monitoring/","text":"Monitoring OpenShift and Cloud Paks \ud83d\udd17 For monitoring of Cloud Pak for Data we make use of the OpenShift Monitoring framework. The observations generated by Cloud Pak for Data are pushed to the OpenShift Monitoring Prometheus endpoint. This will allow (external) monitoring tools to combine the observations from the OpenShift platform and Cloud Pak for Data from a single source. OpenShift monitoring \ud83d\udd17 To deploy Cloud Pak for Data monitors, it is mandatory to also enable OpenShift monitoring. OpenShift monitoring is activated via the openshift_monitoring object. 
openshift_monitoring: - openshift_cluster_name: pluto-01 user_workload: enabled remote_rewrite_url: http://www.example.com:1234/receive retention_period: 15d pvc_storage_class: ibmc-vpc-block-retain-general-purpose pvc_storage_size_gb: 100 grafana_operator: enabled grafana_project: grafana labels: cluster_name: pluto-01 Property Description Mandatory Allowed values user_workload Allow pushing Prometheus metrics to OpenShift (must be set to True for monitoring to work) Yes True, False pvc_storage_class Storage class to keep persistent monitoring data No Valid storage class pvc_storage_size_gb Size of the PVC holding the monitoring data Yes if pvc_storage_class is set remote_rewrite_url Set this value to redirect metrics to remote Prometheus No retention_period Number of seconds (s), minutes (m), hours(h), days (d), weeks (w), years (y) to retain monitoring data. Default is 15d Yes labels Additional labels to be added to the metrics No grafana_operator Enable Grafana community operator? No False (default), True grafana_project If enabled, project in which to enable the Grafana operator Yes, if grafana_operator enabled Note Labels must be specified as a YAML record where each line is a key-value. The labels will be added to the prometheus key of the user-workload-monitoring-config ConfigMap and to the prometheusK8S key of the cluster-monitoring-config ConfigMap. Note When the Grafana operator is enabled, you can build your own Grafana dashboard based on the metrics collected by Prometheus. When installed, Grafana creates a local admin user with user name root and password secret . Grafana can be accessed using the OpenShift route that is created in the project specified by grafana_project . Cloud Pak for Data monitoring \ud83d\udd17 The observations of Cloud Pak for Data are generated using the zen-watchdog component, which is part of the cpd_platform cartridge and therefore available on each instance of Cloud Pak for Data. 
Part of the zen-watchdog installation is a set of monitors which focus on the technical deployment of Cloud Pak for Data (e.g. running pods and bound Persistent Volume Claims (PVCs)). Additional monitors which focus more on the operational usage of Cloud Pak for Data can be deployed as well. These monitors are maintained in a separate Git repository and can be accessed at IBM/cp4d-monitors . Using the Cloud Pak Deployer, monitors can be deployed which use the Cloud Pak for Data zen-watchdog monitor framework. This allows adding custom monitors to the zen-watchdog, making these custom monitors visible in the Cloud Pak for Data metrics. The Cloud Pak Deployer cp4d_monitors capability implements the following: - Create the Cloud Pak for Data ServiceMonitor endpoint to forward zen-watchdog monitor events to OpenShift Cluster monitoring - Create source repository auth secrets (optional, if pulling monitors from a secure repo) - Create target container registry auth secrets (optional, if pushing monitor images to a secure container registry) - Deploy custom monitors, which will be added to the zen-watchdog monitor framework For custom monitors to be deployed, it is mandatory to enable OpenShift user-workload monitoring, as specified in OpenShift monitoring . The Cloud Pak for Data monitors are specified in a cp4d_monitors definition. 
cp4d_monitors: - name: cp4d-monitor-set-1 cp4d_instance: zen-45 openshift_cluster_name: pluto-01 default_monitor_source_repo: https://github.com/IBM/cp4d-monitors #default_monitor_source_token_secret: monitors_source_repo_secret #default_monitor_target_cr: de.icr.io/monitorrepo #default_monitor_target_cr_user_secret: monitors_target_cr_username #default_monitor_target_cr_password_secret: monitors_target_cr_password # List of monitors monitors: - name: cp4dplatformcognosconnectionsinfo context: cp4d-cognos-connections-info label: latest schedule: \"*/15 * * * *\" - name: cp4dplatformcognostaskinfo context: cp4d-cognos-task-info label: latest schedule: \"*/15 * * * *\" - name: cp4dplatformglobalconnections context: cp4d-platform-global-connections label: latest schedule: \"*/15 * * * *\" - name: cp4dplatformwatsonstudiojobinfo context: cp4d-watsonstudio-job-info label: latest schedule: \"*/15 * * * *\" - name: cp4dplatformwatsonstudiojobscheduleinfo context: cp4d-watsonstudio-job-schedule-info label: latest schedule: \"*/15 * * * *\" - name: cp4dplatformwatsonstudioruntimeusage context: cp4d-watsonstudio-runtime-usage label: latest schedule: \"*/15 * * * *\" - name: cp4dplatformwatsonknowledgecataloginfo context: cp4d-wkc-info label: latest schedule: \"*/15 * * * *\" - name: cp4dplatformwmldeploymentspaceinfo context: cp4d-wml-deployment-space-info label: latest schedule: \"*/15 * * * *\" - name: cp4dplatformwmldeploymentspacejobinfo context: cp4d-wml-deployment-space-job-info label: latest schedule: \"*/15 * * * *\" Each cp4d_monitors entry contains a set of default settings, which are applicable to the monitors list. These defaults can be overwritten per monitor if needed. 
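To illustrate the per-monitor overrides described above, the following sketch shows a single monitor that pulls from its own source repository and pushes its image to a dedicated container registry, while the other monitors in the set keep the set-level defaults. The repository URL, secret name and registry path are hypothetical.

```yaml
monitors:
# Hypothetical monitor overriding the set-level defaults;
# the other monitors in the set keep the defaults.
- name: customglobalconnections
  context: custom-global-connections          # sub folder in the source repo
  monitor_source_repo: https://github.com/example-org/my-cp4d-monitors
  monitor_source_token_secret: my_monitors_repo_secret   # secret must exist in the vault
  monitor_target_cr: de.icr.io/my-monitor-repo
  label: latest
  schedule: \"*/30 * * * *\"
```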
Property Description Mandatory Allowed values name The name of the monitor set Yes lowercase RFC 1123 subdomain (1) cp4d_instance The OpenShift project (namespace) on which the Cloud Pak for Data instance resides Yes openshift_cluster_name The OpenShift cluster name Yes default_monitor_source_repo The default repository location of all monitors located in the monitors section No default_monitor_source_token_secret The default repo access token secret name, must be available in the vault No default_monitor_target_cr The default target container registry (cr) for the monitor image to be pushed. When omitted, the OpenShift internal registry is used No default_monitor_target_cr_user_secret The default target container registry user name secret name used to push the monitor image. Must be available in the vault No default_monitor_target_cr_password_secret The default target container registry password secret name used to push the monitor image. Must be available in the vault No monitors List of monitors Yes Per monitors entry, the following settings are specified: Property Description Mandatory Allowed values name The name of the monitor entry Yes lowercase RFC 1123 subdomain (1) monitor_source_repo Overrides default_monitor_source_repo for this single monitor No monitor_source_token_secret Overrides default_monitor_source_token_secret for this single monitor No monitor_target_cr Overrides default_monitor_target_cr for this single monitor No monitor_target_cr_user_secret Overrides default_monitor_target_cr_user_secret for this single monitor No monitor_target_cr_user_password Overrides default_monitor_target_cr_user_password for this single monitor No context Sets the context of the monitor in the source repo (sub folder name) Yes label Sets the label of the pushed image, defaults to 'latest' No schedule Sets the schedule of the generated Cloud Pak for Data monitor cronjob Yes Each monitor has a set of event_types , which contain the observations generated by the 
monitor. These event types are retrieved directly from the GitHub repository; each context is expected to contain a file called event_types.yml . During deployment of the monitor this file is retrieved and used to populate the event_types of the monitor. If the Deployer runs and the monitor is already deployed, the following process is used: - The build process is restarted to ensure the latest image of the monitor is used - A comparison is made between the monitor's current configuration and the configuration created by the Deployer. If these are identical, the monitor's configuration is left as-is; however, if these are different, the monitor's configuration is rebuilt and the monitor is re-deployed. Example monitor - global platform connections \ud83d\udd17 This monitor counts the number of Global Platform Connections and for each Global Platform Connection a test is executed to check whether the connection can still be established. Generated metrics \ud83d\udd17 Once the monitor is deployed, the following metrics are available in IBM Cloud Pak for Data. 
On the Platform Management Events page the following entries are added: - Cloud Pak for Data Global Connections Count - Global Connection - (for each connection) Using the IBM Cloud Pak for Data Prometheus endpoint \ud83d\udd17 https:///zen/metrics It will generate 2 types of metrics: global_connections_count Provides the number of available connections global_connection_valid For each connection, a test action is performed 1 (Test Connection success) 0 (Test connection failed) # HELP global_connections_count # TYPE global_connections_count gauge global_connections_count{event_type=\"global_connections_count\",monitor_type=\"cp4d_platform_global_connections\",reference=\"Cloud Pak for Data Global Connections Count\"} 2 # HELP global_connection_valid # TYPE global_connection_valid gauge global_connection_valid{event_type=\"global_connection_valid\",monitor_type=\"cp4d_platform_global_connections\",reference=\"Cognos MetaStore Connection\"} 1 global_connection_valid{event_type=\"global_connection_valid\",monitor_type=\"cp4d_platform_global_connections\",reference=\"Cognos non-shared\"} 0 Zen Watchdog metrics (used in platform management events) - watchdog_cp4d_platform_global_connections_global_connections_count - watchdog_cp4d_platform_global_connections_global_connection_valid (for each connection) Zen Watchdog metrics can have the following values: - 2 (info) - 1 (warning) - 0 (critical) # HELP watchdog_cp4d_platform_global_connections_global_connection_valid # TYPE watchdog_cp4d_platform_global_connections_global_connection_valid gauge watchdog_cp4d_platform_global_connections_global_connection_valid{event_type=\"global_connection_valid\",monitor_type=\"cp4d_platform_global_connections\",reference=\"Cognos MetaStore Connection\"} 2 watchdog_cp4d_platform_global_connections_global_connection_valid{event_type=\"global_connection_valid\",monitor_type=\"cp4d_platform_global_connections\",reference=\"Cognos non-shared\"} 1 # HELP 
watchdog_cp4d_platform_global_connections_global_connections_count # TYPE watchdog_cp4d_platform_global_connections_global_connections_count gauge watchdog_cp4d_platform_global_connections_global_connections_count{event_type=\"global_connections_count\",monitor_type=\"cp4d_platform_global_connections\",reference=\"Cloud Pak for Data Global Connections Count\"} 2","title":"Monitoring"},{"location":"30-reference/configuration/monitoring/#monitoring-openshift-and-cloud-paks","text":"For monitoring of Cloud Pak for Data, we make use of the OpenShift Monitoring framework. The observations generated by Cloud Pak for Data are pushed to the OpenShift Monitoring Prometheus endpoint. This allows (external) monitoring tools to combine the observations from the OpenShift platform and Cloud Pak for Data from a single source.","title":"Monitoring OpenShift and Cloud Paks"},{"location":"30-reference/configuration/monitoring/#openshift-monitoring","text":"To deploy Cloud Pak for Data monitors, it is mandatory to also enable OpenShift monitoring. OpenShift monitoring is activated via the openshift_monitoring object. openshift_monitoring: - openshift_cluster_name: pluto-01 user_workload: enabled remote_rewrite_url: http://www.example.com:1234/receive retention_period: 15d pvc_storage_class: ibmc-vpc-block-retain-general-purpose pvc_storage_size_gb: 100 grafana_operator: enabled grafana_project: grafana labels: cluster_name: pluto-01 Property Description Mandatory Allowed values user_workload Allow pushing Prometheus metrics to OpenShift (must be set to True for monitoring to work) Yes True, False pvc_storage_class Storage class to keep persistent monitoring data No Valid storage class pvc_storage_size_gb Size of the PVC holding the monitoring data Yes if pvc_storage_class is set remote_rewrite_url Set this value to redirect metrics to a remote Prometheus No retention_period Number of seconds (s), minutes (m), hours (h), days (d), weeks (w), years (y) to retain monitoring data. 
Default is 15d Yes labels Additional labels to be added to the metrics No grafana_operator Enable Grafana community operator? No False (default), True grafana_project If enabled, project in which to enable the Grafana operator Yes, if grafana_operator enabled Note Labels must be specified as a YAML record where each line is a key-value pair. The labels will be added to the prometheus key of the user-workload-monitoring-config ConfigMap and to the prometheusK8S key of the cluster-monitoring-config ConfigMap. Note When the Grafana operator is enabled, you can build your own Grafana dashboard based on the metrics collected by Prometheus. When installed, Grafana creates a local admin user with user name root and password secret . Grafana can be accessed using the OpenShift route that is created in the project specified by grafana_project .","title":"OpenShift monitoring"},{"location":"30-reference/configuration/monitoring/#cloud-pak-for-data-monitoring","text":"The observations of Cloud Pak for Data are generated using the zen-watchdog component, which is part of the cpd_platform cartridge and therefore available on each instance of Cloud Pak for Data. Part of the zen-watchdog installation is a set of monitors which focus on the technical deployment of Cloud Pak for Data (e.g. running pods and bound Persistent Volume Claims (PVCs)). Additional monitors which focus more on the operational usage of Cloud Pak for Data can be deployed as well. These monitors are maintained in a separate Git repository and can be accessed at IBM/cp4d-monitors . Using the Cloud Pak Deployer, monitors can be deployed which use the Cloud Pak for Data zen-watchdog monitor framework. This allows adding custom monitors to the zen-watchdog, making these custom monitors visible in the Cloud Pak for Data metrics. 
The Cloud Pak Deployer cp4d_monitors capability implements the following: - Create the Cloud Pak for Data ServiceMonitor endpoint to forward zen-watchdog monitor events to OpenShift Cluster monitoring - Create source repository auth secrets (optional, if pulling monitors from a secure repo) - Create target container registry auth secrets (optional, if pushing monitor images to a secure container registry) - Deploy custom monitors, which will be added to the zen-watchdog monitor framework For custom monitors to be deployed, it is mandatory to enable OpenShift user-workload monitoring, as specified in OpenShift monitoring . The Cloud Pak for Data monitors are specified in a cp4d_monitors definition. cp4d_monitors: - name: cp4d-monitor-set-1 cp4d_instance: zen-45 openshift_cluster_name: pluto-01 default_monitor_source_repo: https://github.com/IBM/cp4d-monitors #default_monitor_source_token_secret: monitors_source_repo_secret #default_monitor_target_cr: de.icr.io/monitorrepo #default_monitor_target_cr_user_secret: monitors_target_cr_username #default_monitor_target_cr_password_secret: monitors_target_cr_password # List of monitors monitors: - name: cp4dplatformcognosconnectionsinfo context: cp4d-cognos-connections-info label: latest schedule: \"*/15 * * * *\" - name: cp4dplatformcognostaskinfo context: cp4d-cognos-task-info label: latest schedule: \"*/15 * * * *\" - name: cp4dplatformglobalconnections context: cp4d-platform-global-connections label: latest schedule: \"*/15 * * * *\" - name: cp4dplatformwatsonstudiojobinfo context: cp4d-watsonstudio-job-info label: latest schedule: \"*/15 * * * *\" - name: cp4dplatformwatsonstudiojobscheduleinfo context: cp4d-watsonstudio-job-schedule-info label: latest schedule: \"*/15 * * * *\" - name: cp4dplatformwatsonstudioruntimeusage context: cp4d-watsonstudio-runtime-usage label: latest schedule: \"*/15 * * * *\" - name: cp4dplatformwatsonknowledgecataloginfo context: cp4d-wkc-info label: latest schedule: \"*/15 * * * *\" - 
name: cp4dplatformwmldeploymentspaceinfo context: cp4d-wml-deployment-space-info label: latest schedule: \"*/15 * * * *\" - name: cp4dplatformwmldeploymentspacejobinfo context: cp4d-wml-deployment-space-job-info label: latest schedule: \"*/15 * * * *\" Each cp4d_monitors entry contains a set of default settings, which are applicable to the monitors list. These defaults can be overwritten per monitor if needed. Property Description Mandatory Allowed values name The name of the monitor set Yes lowercase RFC 1123 subdomain (1) cp4d_instance The OpenShift project (namespace) on which the Cloud Pak for Data instance resides Yes openshift_cluster_name The OpenShift cluster name Yes default_monitor_source_repo The default repository location of all monitors located in the monitors section No default_monitor_source_token_secret The default repo access token secret name, must be available in the vault No default_monitor_target_cr The default target container registry (cr) for the monitor image to be pushed. When omitted, the OpenShift internal registry is used No default_monitor_target_cr_user_secret The default target container registry user name secret name used to push the monitor image. Must be available in the vault No default_monitor_target_cr_password_secret The default target container registry password secret name used to push the monitor image. 
Must be available in the vault No monitors List of monitors Yes Per monitors entry, the following settings are specified: Property Description Mandatory Allowed values name The name of the monitor entry Yes lowercase RFC 1123 subdomain (1) monitor_source_repo Overrides default_monitor_source_repo for this single monitor No monitor_source_token_secret Overrides default_monitor_source_token_secret for this single monitor No monitor_target_cr Overrides default_monitor_target_cr for this single monitor No monitor_target_cr_user_secret Overrides default_monitor_target_cr_user_secret for this single monitor No monitor_target_cr_user_password Overrides default_monitor_target_cr_user_password for this single monitor No context Sets the context of the monitor in the source repo (sub folder name) Yes label Sets the label of the pushed image, defaults to 'latest' No schedule Sets the schedule of the generated Cloud Pak for Data monitor cronjob Yes Each monitor has a set of event_types , which contain the observations generated by the monitor. These event types are retrieved directly from the GitHub repository; each context is expected to contain a file called event_types.yml . During deployment of the monitor this file is retrieved and used to populate the event_types of the monitor. If the Deployer runs and the monitor is already deployed, the following process is used: - The build process is restarted to ensure the latest image of the monitor is used - A comparison is made between the monitor's current configuration and the configuration created by the Deployer. 
If these are identical, the monitor's configuration is left as-is; however, if these are different, the monitor's configuration is rebuilt and the monitor is re-deployed.","title":"Cloud Pak for Data monitoring"},{"location":"30-reference/configuration/monitoring/#example-monitior---global-platform-connections","text":"This monitor counts the number of Global Platform Connections and for each Global Platform Connection a test is executed to check whether the connection can still be established.","title":"Example monitor - global platform connections"},{"location":"30-reference/configuration/monitoring/#generated-metrics","text":"Once the monitor is deployed, the following metrics are available in IBM Cloud Pak for Data. On the Platform Management Events page the following entries are added: - Cloud Pak for Data Global Connections Count - Global Connection - (for each connection)","title":"Generated metrics"},{"location":"30-reference/configuration/monitoring/#using-the-ibm-cloud-pak-for-data-prometheus-endpoint","text":"https:///zen/metrics It will generate 2 types of metrics: global_connections_count Provides the number of available connections global_connection_valid For each connection, a test action is performed 1 (Test Connection success) 0 (Test connection failed) # HELP global_connections_count # TYPE global_connections_count gauge global_connections_count{event_type=\"global_connections_count\",monitor_type=\"cp4d_platform_global_connections\",reference=\"Cloud Pak for Data Global Connections Count\"} 2 # HELP global_connection_valid # TYPE global_connection_valid gauge global_connection_valid{event_type=\"global_connection_valid\",monitor_type=\"cp4d_platform_global_connections\",reference=\"Cognos MetaStore Connection\"} 1 global_connection_valid{event_type=\"global_connection_valid\",monitor_type=\"cp4d_platform_global_connections\",reference=\"Cognos non-shared\"} 0 Zen Watchdog metrics (used in platform management events) - 
watchdog_cp4d_platform_global_connections_global_connections_count - watchdog_cp4d_platform_global_connections_global_connection_valid (for each connection) Zen Watchdog metrics can have the following values: - 2 (info) - 1 (warning) - 0 (critical) # HELP watchdog_cp4d_platform_global_connections_global_connection_valid # TYPE watchdog_cp4d_platform_global_connections_global_connection_valid gauge watchdog_cp4d_platform_global_connections_global_connection_valid{event_type=\"global_connection_valid\",monitor_type=\"cp4d_platform_global_connections\",reference=\"Cognos MetaStore Connection\"} 2 watchdog_cp4d_platform_global_connections_global_connection_valid{event_type=\"global_connection_valid\",monitor_type=\"cp4d_platform_global_connections\",reference=\"Cognos non-shared\"} 1 # HELP watchdog_cp4d_platform_global_connections_global_connections_count # TYPE watchdog_cp4d_platform_global_connections_global_connections_count gauge watchdog_cp4d_platform_global_connections_global_connections_count{event_type=\"global_connections_count\",monitor_type=\"cp4d_platform_global_connections\",reference=\"Cloud Pak for Data Global Connections Count\"} 2","title":"Using the IBM Cloud Pak for Data Prometheus endpoint"},{"location":"30-reference/configuration/openshift/","text":"OpenShift cluster(s) \ud83d\udd17 You can configure one or more OpenShift clusters that will be laid down on the specified infrastructure, or which already exist. Depending on the cloud platform on which the OpenShift cluster will be provisioned, different installation methods apply. For IBM Cloud, Terraform is used, whereas for vSphere the IPI installer is used. On AWS (ROSA), the rosa CLI is used to create and modify ROSA clusters. Each of the different platforms has slightly different properties for the openshift objects. 
openshift \ud83d\udd17 For OpenShift, there are 6 flavours: Existing OpenShift OpenShift on IBM Cloud OpenShift on AWS - ROSA OpenShift on AWS - self-managed OpenShift on Microsoft Azure (ARO) OpenShift on vSphere Every OpenShift cluster definition consists of a few mandatory properties that control which version of OpenShift is installed, the number and flavour of control plane and compute nodes and the underlying infrastructure, depending on the cloud platform on which it is provisioned. Storage is a mandatory element for every openshift definition. For a list of supported storage types per cloud platform, refer to Supported storage types . Additionally, one can configure Upstream DNS Servers and OpenShift logging . The Multicloud Object Gateway (MCG) supports access to S3-compatible object storage via an underpinning block/file storage class, through the Noobaa operator. Some Cloud Pak for Data services such as Watson Assistant need object storage to run. MCG does not need to be installed if OpenShift Data Foundation (fka OCS) is also installed as the operator includes Noobaa. OpenShift on IBM Cloud (ROKS) \ud83d\udd17 VPC-based OpenShift cluster on IBM Cloud, using the Red Hat OpenShift Kubernetes Service (ROKS). 
openshift: - name: sample managed: True ocp_version: 4.8 compute_flavour: bx2.16x64 compute_nodes: 3 cloud_native_toolkit: False oadp: False infrastructure: type: vpc vpc_name: sample subnets: - sample-subnet-zone-1 - sample-subnet-zone-2 - sample-subnet-zone-3 cos_name: sample-cos private_only: False deny_node_ports: False upstream_dns: - name: sample-dns zones: - example.com dns_servers: - 172.31.2.73:53 mcg: install: True storage_type: storage-class storage_class: managed-nfs-storage openshift_storage: - storage_name: nfs-storage storage_type: nfs nfs_server_name: sample-nfs - storage_name: ocs-storage storage_type: ocs ocs_storage_label: ocs ocs_storage_size_gb: 500 ocs_version: 4.8.0 - storage_name: pwx-storage storage_type: pwx pwx_etcd_location: {{ ibm_cloud_region }} pwx_storage_size_gb: 200 pwx_storage_iops: 10 pwx_storage_profile: \"10iops-tier\" stork_version: 2.6.2 portworx_version: 2.7.2 Property explanation OpenShift clusters on IBM Cloud (ROKS) \ud83d\udd17 Property Description Mandatory Allowed values name Name of the OpenShift cluster Yes managed Is the ROKS cluster managed by this deployer? See note below. No True (default), False ocp_version ROKS Kubernetes version. If you want to install 4.10 , specify \"4.10\" Yes >= 4.6 compute_flavour Type of compute node to be used Yes Node flavours compute_nodes Total number of compute nodes. This must be a factor of the number of subnets Yes Integer resource_group IBM Cloud resource group for the ROKS cluster Yes cloud_native_toolkit Must the Cloud Native Toolkit (OpenShift GitOps) be installed? No True, False (default) oadp Must the OpenShift Advanced Data Protection operator be installed No True, False (default) infrastructure.type Type of infrastructure to provision ROKS cluster on No vpc infrastructure.vpc_name Name of the VPC if type is vpc Yes, inferred from vpc Existing VPC infrastructure.subnets List of subnets within the VPC to use. 
Either 1 or 3 subnets must be specified Yes Existing subnet infrastructure.cos_name Reference to the cos object created for this cluster Yes Existing cos object infrastructure.private_only If true, it indicates that the ROKS cluster must be provisioned without public endpoints No True, False (default) infrastructure.deny_node_ports If true, the Allow ICMP, TCP and UDP rules for the security group associated with the ROKS cluster are removed if present. If false, the Allow ICMP, TCP and UDP rules are added if not present. No True, False (default) infrastructure.secondary_storage Reference to the storage flavour to be used as secondary storage, for example \"900gb.5iops-tier\" No Valid secondary storage flavour openshift_logging[] Logging attributes for OpenShift cluster, see OpenShift logging No upstream_dns[] Upstream DNS server(s), see Upstream DNS Servers No mcg Multicloud Object Gateway properties No mcg.install Must Multicloud Object Gateway be installed (Once installed, False does not uninstall) Yes True, False mcg.storage_type Type of storage supporting the Noobaa object storage Yes storage-class mcg.storage_class Storage class supporting the Noobaa object storage Yes Existing storage class openshift_storage[] List of storage definitions to be defined on OpenShift, see below for further explanation Yes The managed attribute indicates whether the ROKS cluster is managed by the Cloud Pak Deployer. If set to False , the deployer will not provision the ROKS cluster but expects it to already be available in the VPC. You can still use the deployer to create the VPC, the subnets, NFS servers and other infrastructure, but first run it without an openshift element. Once the VPC has been created, manually create an OpenShift cluster in the VPC and then add the openshift element with managed set to False . If you intend to use OpenShift Container Storage, you must also activate the add-on and create the OcsCluster custom resource. 
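As a sketch of the unmanaged scenario described above, an existing ROKS cluster in the VPC could be referenced like this; all names are hypothetical and the property values follow the ROKS example earlier in this section:

```yaml
openshift:
- name: sample                 # existing ROKS cluster, created manually in the VPC
  managed: False               # deployer uses the cluster but does not provision it
  ocp_version: 4.8
  compute_flavour: bx2.16x64
  compute_nodes: 3
  infrastructure:
    type: vpc
    vpc_name: sample
    subnets:
    - sample-subnet-zone-1
    cos_name: sample-cos
  openshift_storage:
  - storage_name: nfs-storage
    storage_type: nfs
    nfs_server_name: sample-nfs
```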
Warning If you set infrastructure.private_only to True , the server from which you run the deployer must be able to access the ROKS cluster via its private endpoint, either by establishing a VPN to the cluster's VPC, or by making sure the deployer runs on a server that has a connection with the ROKS VPC via a transit gateway. openshift_storage[] - OpenShift storage definitions \ud83d\udd17 Property Description Mandatory Allowed values openshift_storage[] List of storage definitions to be defined on OpenShift Yes storage_name Name of the storage definition, to be referenced by the Cloud Pak Yes storage_type Type of storage class to create in the OpenShift cluster Yes nfs, ocs or pwx nfs_server_name Name of the NFS server within the VPC Yes if storage_type is nfs Existing nfs_server ocs_storage_label Label to be used for the dedicated OCS nodes in the cluster Yes if storage_type is ocs ocs_storage_size_gb Size of the OCS storage in Gibibytes (Gi) Yes if storage_type is ocs ocs_version Version of OCS (ODF) to be deployed. If left empty, the latest version will be deployed No >= 4.6 pwx_etcd_location Location where the etcd service will be deployed, typically the same region as the ROKS cluster Yes if storage_type is pwx pwx_storage_size_gb Size of the Portworx storage that will be provisioned Yes if storage_type is pwx pwx_storage_iops IOPS for the storage volumes that will be provisioned Yes if storage_type is pwx pwx_storage_profile IOPS storage tier for the storage volumes that will be provisioned Yes if storage_type is pwx stork_version Version of the Portworx storage orchestration layer for Kubernetes Yes if storage_type is pwx portworx_version Version of the Portworx storage provider Yes if storage_type is pwx Warning When deploying a ROKS cluster with OpenShift Data Foundation (fka OpenShift Container Storage/OCS), the minimum version of OpenShift is 4.7. 
OpenShift on vSphere \ud83d\udd17 openshift: - name: sample domain_name: example.com vsphere_name: sample ocp_version: 4.8 control_plane_nodes: 3 control_plane_vm_definition: control-plane compute_nodes: 3 compute_vm_definition: compute api_vip: 10.99.92.51 ingress_vip: 10.99.92.52 cloud_native_toolkit: False oadp: False infrastructure: openshift_cluster_network_cidr: 10.128.0.0/14 upstream_dns: - name: sample-dns zones: - example.com dns_servers: - 172.31.2.73:53 mcg: install: True storage_type: storage-class storage_class: thin openshift_storage: - storage_name: nfs-storage storage_type: nfs nfs_server_name: sample-nfs - storage_name: ocs-storage storage_type: ocs ocs_storage_label: ocs ocs_storage_size_gb: 512 ocs_dynamic_storage_class: thin Property explanation OpenShift clusters on vSphere \ud83d\udd17 Property Description Mandatory Allowed values name Name of the OpenShift cluster Yes domain_name Domain name of the cluster; this also determines the routes to the API and ingress endpoints Yes ocp_version OpenShift version. If you want to install 4.10 , specify \"4.10\" Yes >= 4.6 control_plane_nodes Total number of control plane nodes, typically 3 Yes Integer control_plane_vm_definition vm_definition object that will be used to define number of vCPUs and memory for the control plane nodes Yes Existing vm_definition compute_nodes Total number of compute nodes Yes Integer compute_vm_definition vm_definition object that will be used to define number of vCPUs and memory for the compute nodes Yes Existing vm_definition api_vip Virtual IP address that the installer will provision for the API server Yes ingress_vip Virtual IP address that the installer will provision for the ingress server Yes cloud_native_toolkit Must the Cloud Native Toolkit (OpenShift GitOps) be installed? 
No True, False (default) oadp Must the OpenShift Advanced Data Protection operator be installed No True, False (default) infrastructure Infrastructure properties No infrastructure.openshift_cluster_network_cidr Network CIDR used by the OpenShift pods. Normally you would not have to change this, unless other systems in the network are in the 10.128.0.0/14 subnet. No CIDR openshift_logging[] Logging attributes for OpenShift cluster, see OpenShift logging No upstream_dns[] Upstream DNS server(s), see Upstream DNS Servers No mcg Multicloud Object Gateway properties No mcg.install Must Multicloud Object Gateway be installed (Once installed, False does not uninstall) Yes True, False mcg.storage_type Type of storage supporting the Noobaa object storage Yes storage-class mcg.storage_class Storage class supporting the Noobaa object storage Yes Existing storage class openshift_storage[] List of storage definitions to be defined on OpenShift, see below for further explanation Yes openshift_storage[] - OpenShift storage definitions \ud83d\udd17 Property Description Mandatory Allowed values openshift_storage[] List of storage definitions to be defined on OpenShift Yes storage_name Name of the storage definition, to be referenced by the Cloud Pak Yes storage_type Type of storage class to create in the OpenShift cluster Yes nfs or ocs nfs_server_name Name of the NFS server within the VPC Yes if storage_type is nfs Existing nfs_server ocs_version Version of the OCS operator. If not specified, this will default to the ocp_version No >= 4.6 ocs_storage_label Label to be used for the dedicated OCS nodes in the cluster Yes if storage_type is ocs ocs_storage_size_gb Size of the OCS storage in Gibibytes (Gi) Yes if storage_type is ocs ocs_dynamic_storage_class Storage class that will be used for provisioning OCS. 
On vSphere clusters, thin is usually available after OpenShift installation Yes if storage_type is ocs storage_vm_definition VM Definition that defines the virtual machine attributes for the OCS nodes Yes if storage_type is ocs OpenShift on AWS - self-managed \ud83d\udd17 nfs_server: - name: sample-elastic infrastructure: aws_region: eu-west-1 openshift: - name: sample ocp_version: 4.10.34 domain_name: cp-deployer.eu compute_flavour: m5.4xlarge compute_nodes: 3 cloud_native_toolkit: False oadp: False infrastructure: type: self-managed aws_region: eu-central-1 multi_zone: True credentials_mode: Manual private_only: True machine_cidr: 10.2.1.0/24 openshift_cluster_network_cidr: 10.128.0.0/14 subnet_ids: - subnet-06bbef28f585a0dd3 - subnet-0ea5ac344c0fbadf5 hosted_zone_id: Z08291873MCIC4TMIK4UP ami_id: ami-09249dd86b1933dd5 mcg: install: True storage_type: storage-class storage_class: gp3-csi openshift_storage: - storage_name: ocs-storage storage_type: ocs ocs_storage_label: ocs ocs_storage_size_gb: 512 - storage_name: sample-elastic storage_type: aws-elastic Property explanation OpenShift clusters on AWS (self-managed) \ud83d\udd17 Property Description Mandatory Allowed values name Name of the OpenShift cluster Yes ocp_version OpenShift version, specified as x.y.z Yes >= 4.6 domain_name Base domain name of the cluster. Together with the name , this will be the domain of the OpenShift cluster. Yes control_plane_flavour Flavour of the AWS servers used for the control plane nodes. m5.xxlarge is the recommended value Yes control_plane_nodes Total number of control plane nodes Yes Integer compute_flavour Flavour of the AWS servers used for the compute nodes. m5.4xlarge is a large node with 16 cores and 64 GB of memory Yes compute_nodes Total number of compute nodes Yes Integer cloud_native_toolkit Must the Cloud Native Toolkit (OpenShift GitOps) be installed? 
No True, False (default) oadp Must the OpenShift Advanced Data Protection operator be installed No True, False (default) infrastructure Infrastructure properties Yes infrastructure.type Type of OpenShift cluster on AWS. Yes rosa or self-managed infrastructure.aws_region Region of AWS where cluster is deployed. Yes infrastructure.multi_zone Determines whether the OpenShift cluster is deployed across multiple availability zones. Default is True. No True (default), False infrastructure.credentials_mode Security requirement of the Cloud Credential Operator (CCO) when doing installations with temporary AWS security credentials. Default (omit) is automatically handled by CCO. No Manual, Mint infrastructure.machine_cidr Machine CIDR. This value will be used to create the VPC and its subnets. In case of an existing VPC, specify the CIDR of that VPC. No CIDR infrastructure.openshift_cluster_network_cidr Network CIDR used by the OpenShift pods. Normally you would not have to change this, unless other systems in the network are in the 10.128.0.0/14 subnet. No CIDR infrastructure.subnet_ids Existing public and private subnet IDs in the VPC to be used for the OpenShift cluster. Must be specified in combination with machine_cidr and hosted_zone_id. No Existing subnet IDs infrastructure.private_only Indicates whether the OpenShift cluster can be accessed from the internet. Default is True No True, False infrastructure.hosted_zone_id ID of the AWS Route 53 hosted zone that controls the DNS entries. If not specified, the OpenShift installer will create a hosted zone for the specified domain_name .
This attribute is only needed if you create the OpenShift cluster in an existing VPC No infrastructure.control_plane_iam_role If not standard, specify the IAM role that the OpenShift installer must use for the control plane nodes during cluster creation No infrastructure.compute_iam_role If not standard, specify the IAM role that the OpenShift installer must use for the compute nodes during cluster creation No infrastructure.ami_id ID of the AWS AMI to boot all images No openshift_logging[] Logging attributes for OpenShift cluster, see OpenShift logging No mcg Multicloud Object Gateway properties No mcg.install Must Multicloud Object Gateway be installed (Once installed, False does not uninstall) Yes True, False mcg.storage_type Type of storage supporting the Noobaa object storage Yes storage-class mcg.storage_class Storage class supporting the Noobaa object storage Yes Existing storage class openshift_storage[] List of storage definitions to be defined on OpenShift, see below for further explanation Yes When deploying the OpenShift cluster within an existing VPC, you must specify the machine_cidr that covers all subnets and the subnet IDs within the VPC. For example: machine_cidr: 10.243.0.0/24 subnet_ids: - subnet-0e63f662bb1842e8a - subnet-0673351cd49877269 - subnet-00b007a7c2677cdbc - subnet-02b676f92c83f4422 - subnet-0f1b03a02973508ed - subnet-027ca7cc695ce8515 openshift_storage[] - OpenShift storage definitions Property Description Mandatory Allowed values openshift_storage[] List of storage definitions to be defined on OpenShift Yes storage_name Name of the storage definition, to be referenced by the Cloud Pak Yes storage_type Type of storage class to create in the OpenShift cluster Yes ocs, aws-elastic ocs_version Version of the OCS operator.
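The existing-VPC settings described above can be written out as a properly indented YAML fragment; the subnet IDs below are the sample values from the text, and the machine_cidr must cover all of the listed subnets:

```yaml
# Sketch: deploying into an existing VPC (sample values from the text above)
machine_cidr: 10.243.0.0/24
subnet_ids:
  - subnet-0e63f662bb1842e8a
  - subnet-0673351cd49877269
  - subnet-00b007a7c2677cdbc
  - subnet-02b676f92c83f4422
  - subnet-0f1b03a02973508ed
  - subnet-027ca7cc695ce8515
```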
If not specified, this will default to the ocp_version No ocs_storage_label Label to be used for the dedicated OCS nodes in the cluster Yes if storage_type is ocs ocs_storage_size_gb Size of the OCS storage in Gibibytes (Gi) Yes if storage_type is ocs ocs_dynamic_storage_class Storage class that will be used for provisioning ODF. gp3-csi is usually available after OpenShift installation No OpenShift on AWS - ROSA nfs_server: - name: sample-elastic infrastructure: aws_region: eu-west-1 openshift: - name: sample ocp_version: 4.10.34 compute_flavour: m5.4xlarge compute_nodes: 3 cloud_native_toolkit: False oadp: False infrastructure: type: rosa aws_region: eu-central-1 multi_zone: True use_sts: False credentials_mode: Manual upstream_dns: - name: sample-dns zones: - example.com dns_servers: - 172.31.2.73:53 mcg: install: True storage_type: storage-class storage_class: gp3-csi openshift_storage: - storage_name: ocs-storage storage_type: ocs ocs_storage_label: ocs ocs_storage_size_gb: 512 - storage_name: sample-elastic storage_type: aws-elastic Property explanation OpenShift clusters on AWS (ROSA) Property Description Mandatory Allowed values name Name of the OpenShift cluster Yes ocp_version OpenShift version, specified as x.y.z Yes >= 4.6 compute_flavour Flavour of the AWS servers used for the compute nodes. m5.4xlarge is a large node with 16 cores and 64 GB of memory Yes cloud_native_toolkit Must the Cloud Native Toolkit (OpenShift GitOps) be installed? No True, False (default) oadp Must the OpenShift Advanced Data Protection operator be installed No True, False (default) infrastructure Infrastructure properties Yes infrastructure.type Type of OpenShift cluster on AWS. Yes rosa or self-managed infrastructure.aws_region Region of AWS where cluster is deployed. Yes infrastructure.multi_zone Determines whether the OpenShift cluster is deployed across multiple availability zones. Default is True.
No True (default), False infrastructure.use_sts Determines whether AWS Security Token Service must be used by the ROSA installer. Default is False. No True, False (default) infrastructure.credentials_mode Change the security requirement of the Cloud Credential Operator (CCO). Default (omit) is automatically handled by CCO. No Manual, Mint infrastructure.machine_cidr Machine CIDR, for example 10.243.0.0/16. No CIDR infrastructure.subnet_ids Existing public and private subnet IDs in the VPC to be used for the OpenShift cluster. Must be specified in combination with machine_cidr. No Existing subnet IDs compute_nodes Total number of compute nodes Yes Integer upstream_dns[] Upstream DNS server(s), see Upstream DNS Servers No openshift_logging[] Logging attributes for OpenShift cluster, see OpenShift logging No mcg Multicloud Object Gateway properties No mcg.install Must Multicloud Object Gateway be installed (Once installed, False does not uninstall) Yes True, False mcg.storage_type Type of storage supporting the Noobaa object storage Yes storage-class mcg.storage_class Storage class supporting the Noobaa object storage Yes Existing storage class openshift_storage[] List of storage definitions to be defined on OpenShift, see below for further explanation Yes When deploying the OpenShift cluster within an existing VPC, you must specify the machine_cidr that covers all subnets and the subnet IDs within the VPC.
For example: machine_cidr: 10.243.0.0/24 subnet_ids: - subnet-0e63f662bb1842e8a - subnet-0673351cd49877269 - subnet-00b007a7c2677cdbc - subnet-02b676f92c83f4422 - subnet-0f1b03a02973508ed - subnet-027ca7cc695ce8515 openshift_storage[] - OpenShift storage definitions Property Description Mandatory Allowed values openshift_storage[] List of storage definitions to be defined on OpenShift Yes storage_name Name of the storage definition, to be referenced by the Cloud Pak Yes storage_type Type of storage class to create in the OpenShift cluster Yes ocs, aws-elastic ocs_version Version of the OCS operator. If not specified, this will default to the ocp_version No ocs_storage_label Label to be used for the dedicated OCS nodes in the cluster Yes if storage_type is ocs ocs_storage_size_gb Size of the OCS storage in Gibibytes (Gi) Yes if storage_type is ocs ocs_dynamic_storage_class Storage class that will be used for provisioning ODF. gp3-csi is usually available after OpenShift installation No OpenShift on Microsoft Azure (ARO) openshift: - name: sample azure_name: sample domain_name: example.com ocp_version: 4.10.54 cloud_native_toolkit: False oadp: False network: pod_cidr: \"10.128.0.0/14\" service_cidr: \"172.30.0.0/16\" openshift_storage: - storage_name: ocs-storage storage_type: ocs ocs_storage_label: ocs ocs_storage_size_gb: 512 ocs_dynamic_storage_class: managed-premium Property explanation for OpenShift cluster on Microsoft Azure (ARO) Warning You cannot choose the OCP version of the ARO cluster; the latest available version is provisioned automatically, regardless of the value specified in the \"ocp_version\" parameter. The \"ocp_version\" parameter is mandatory for compatibility with other layers of the provisioning, such as the OpenShift client. For instance, the value is used by the process which downloads and installs the oc client.
Specify the value according to the OCP version that will be provisioned. Property Description Mandatory Allowed values name Name of the OpenShift cluster Yes azure_name Name of the azure element in the configuration Yes domain_name Domain name of the cluster, if you want to override the name generated by Azure No ocp_version The OpenShift version. If you want to install 4.10 , specify \"4.10\" Yes >= 4.6 cloud_native_toolkit Must the Cloud Native Toolkit (OpenShift GitOps) be installed? No True, False (default) oadp Must the OpenShift Advanced Data Protection operator be installed No True, False (default) network Cluster network attributes Yes network.pod_cidr CIDR of pod network Yes Must be /18 or larger. network.service_cidr CIDR of service network Yes Must be /18 or larger. openshift_logging[] Logging attributes for OpenShift cluster, see OpenShift logging No upstream_dns[] Upstream DNS server(s), see Upstream DNS Servers No mcg Multicloud Object Gateway properties No mcg.install Must Multicloud Object Gateway be installed (Once installed, False does not uninstall) Yes True, False mcg.storage_type Type of storage supporting the Noobaa object storage Yes storage-class mcg.storage_class Storage class supporting the Noobaa object storage Yes Existing storage class openshift_storage[] List of storage definitions to be defined on OpenShift, see below for further explanation Yes openshift_storage[] - OpenShift storage definitions Property Description Mandatory Allowed values openshift_storage[] List of storage definitions to be defined on OpenShift Yes storage_name Name of the storage Yes storage_type Type of storage class to create in the OpenShift cluster Yes ocs or nfs ocs_version Version of the OCS operator.
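A minimal sketch of the mcg block for an ARO cluster, assuming (as in the ARO sample above) that the managed-premium storage class backs the Noobaa object storage:

```yaml
# Sketch (assumption): Multicloud Object Gateway on Azure ARO,
# backed by the managed-premium storage class mentioned above.
mcg:
  install: True                  # once installed, False does not uninstall
  storage_type: storage-class
  storage_class: managed-premium
```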
If not specified, this will default to the ocp_version No ocs_storage_label Label (or rather a name) to be used for the dedicated OCS nodes in the cluster - together with the combination of Azure location and zone id Yes if storage_type is ocs ocs_storage_size_gb Size of the OCS storage in Gibibytes (Gi) Yes if storage_type is ocs ocs_dynamic_storage_class Storage class that will be used for provisioning OCS. In Azure, you must select managed-premium Yes if storage_type is ocs managed-premium Existing OpenShift \ud83d\udd17 When using the Cloud Pak Deployer on an existing OpenShift cluster, the scripts assume that the cluster is already operational and that any storage classes have been pre-created. The deployer accesses the cluster through a vault secret with the kubeconfig information; the name of the secret is -kubeconfig . openshift: - name: sample ocp_version: 4.8 cluster_name: sample domain_name: example.com cloud_native_toolkit: False oadp: False infrastructure: type: standard processor_architecture: amd64 upstream_dns: - name: sample-dns zones: - example.com dns_servers: - 172.31.2.73:53 gpu: install: False mcg: install: True storage_type: storage-class storage_class: managed-nfs-storage openshift_storage: - storage_name: nfs-storage storage_type: nfs # ocp_storage_class_file: managed-nfs-storage # ocp_storage_class_block: managed-nfs-storage Property explanation for existing OpenShift clusters \ud83d\udd17 Property Description Mandatory Allowed values name Name of the OpenShift cluster Yes ocp_version OpenShift version of the cluster, used to download the client. If you want to install 4.10 , specify \"4.10\" Yes >= 4.6 cluster_name Name of the cluster (part of the FQDN) Yes domain_name Domain name of the cluster (part of the FQDN) Yes cloud_native_toolkit Must the Cloud Native Toolkit (OpenShift GitOps) be installed? 
No True, False (default) oadp Must the OpenShift Advanced Data Protection operator be installed No True, False (default) infrastructure.type Infrastructure OpenShift is deployed on. See below for additional explanation No detect (default) infrastructure.processor_architecture Architecture of the processor that the OpenShift cluster is deployed on No amd64 (default), ppc64le, s390x openshift_logging[] Logging attributes for OpenShift cluster, see OpenShift logging No upstream_dns[] Upstream DNS server(s), see Upstream DNS Servers No gpu Controls the Node Feature Discovery and NVIDIA GPU operators No gpu.install Must Node Feature Discovery and NVIDIA GPU operators be installed (Once installed, False does not uninstall) Yes True, False mcg Multicloud Object Gateway properties No mcg.install Must Multicloud Object Gateway be installed (Once installed, False does not uninstall) Yes True, False mcg.storage_type Type of storage supporting the Noobaa object storage Yes storage-class mcg.storage_class Storage class supporting the Noobaa object storage Yes Existing storage class openshift_storage[] List of storage definitions to be defined on OpenShift, see below for further explanation Yes infrastructure.type - Type of infrastructure When deploying on existing OpenShift, the underlying infrastructure can pose some restrictions on the capabilities available. For example, Red Hat OpenShift on IBM Cloud (aka ROKS) does not include the Machine Config Operator and ROSA on AWS does not allow setting labels for Machine Config Pools. This means that node settings required for Cloud Pak for Data must be applied in a non-standard manner. The following values are allowed for infrastructure.type : detect (default): The deployer will attempt to detect the underlying cloud infrastructure. This is done by retrieving the existing storage classes and then inferring the cloud type. standard : The deployer will assume a standard OpenShift cluster with no further restrictions.
This is the fallback value for detect if the underlying infrastructure cannot be detected. aws-self-managed : A self-managed OpenShift cluster on AWS. No restrictions. aws-rosa : Managed Red Hat OpenShift on AWS. Some restrictions with regard to Machine Config Pools apply. azure-aro : Managed Red Hat OpenShift on Azure. No known restrictions. vsphere : OpenShift on vSphere. No known restrictions. openshift_storage[] - OpenShift storage definitions Property Description Mandatory Allowed values storage_name Name of the storage definition, to be referenced by the Cloud Pak Yes storage_type Type of storage class to use in the OpenShift cluster Yes nfs, ocs, aws-elastic, auto, custom ocp_storage_class_file OpenShift storage class to use for file storage if different from default for storage_type Yes if storage_type is custom ocp_storage_class_block OpenShift storage class to use for block storage if different from default for storage_type Yes if storage_type is custom Info The custom storage_type can be used in case you want to use non-standard storage classes. In this case the storage classes must already be configured on the OCP cluster and set in the respective ocp_storage_class_file and ocp_storage_class_block properties. Info The auto storage_type will let the deployer automatically detect the storage type based on the existing storage classes in the OpenShift cluster. Supported storage types An openshift definition always includes the type(s) of storage that it will provide. When the OpenShift cluster is provisioned by the deployer, the necessary infrastructure and storage class(es) are also configured. In case an existing OpenShift cluster is referenced by the configuration, the storage classes are expected to exist already. The table below indicates which storage classes are supported by the Cloud Pak Deployer per cloud infrastructure.
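A sketch of the custom storage definition described above; the storage class names here are hypothetical placeholders and must already exist on the OCP cluster:

```yaml
# Sketch: custom storage_type referencing pre-created storage classes.
# The class names below are placeholders, not deployer defaults.
openshift_storage:
- storage_name: custom-storage
  storage_type: custom
  ocp_storage_class_file: my-file-storage-class    # RWX (file) class, hypothetical
  ocp_storage_class_block: my-block-storage-class  # RWO (block) class, hypothetical
```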
Warning The ability to provision or use certain storage types does not imply support by the Cloud Paks or by OpenShift itself. There are several restrictions for production use of OpenShift Data Foundation, for example on ROSA. Cloud Provider NFS Storage OCS/ODF Storage Portworx Elastic Custom (2) ibm-cloud Yes Yes Yes No Yes vsphere Yes (1) Yes No No Yes aws No Yes No Yes (3) Yes azure No Yes No No Yes existing-ocp Yes Yes No Yes Yes (1) An existing NFS server can be specified so that the deployer configures the managed-nfs-storage storage class. The deployer will not provision or change the NFS server itself. (2) If you specify a custom storage type, you must specify the storage class to be used for block (RWO) and file (RWX) storage. (3) Specifying this storage type means that Elastic File System (EFS) and Elastic Block Store (EBS) storage classes will be used. For EFS, an nfs_server object is required to define the \"file server\" storage on AWS.","title":"OpenShift"},{"location":"30-reference/configuration/openshift/#openshift-clusters","text":"You can configure one or more OpenShift clusters that will be laid down on the specified infrastructure, or which already exist. Depending on the cloud platform on which the OpenShift cluster will be provisioned, different installation methods apply. For IBM Cloud, Terraform is used, whereas for vSphere the IPI installer is used. On AWS (ROSA), the rosa CLI is used to create and modify ROSA clusters.
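Per note (3) above, the aws-elastic storage type needs an nfs_server object to define the EFS "file server"; the sketch below repeats the pairing used in the AWS samples in this page:

```yaml
# Sketch: aws-elastic storage on AWS, paired with the nfs_server object
# that defines the EFS file server (names taken from the samples above).
nfs_server:
- name: sample-elastic
  infrastructure:
    aws_region: eu-west-1
openshift:
- name: sample
  openshift_storage:
  - storage_name: sample-elastic
    storage_type: aws-elastic
```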
Each of the different platforms has slightly different properties for the openshift objects.","title":"OpenShift cluster(s)"},{"location":"30-reference/configuration/openshift/#openshift","text":"For OpenShift, there are 6 flavours: Existing OpenShift OpenShift on IBM Cloud OpenShift on AWS - ROSA OpenShift on AWS - self-managed OpenShift on Microsoft Azure (ARO) OpenShift on vSphere Every OpenShift cluster definition consists of a few mandatory properties that control which version of OpenShift is installed, the number and flavour of control plane and compute nodes and the underlying infrastructure, depending on the cloud platform on which it is provisioned. Storage is a mandatory element for every openshift definition. For a list of supported storage types per cloud platform, refer to Supported storage types . Additionally, one can configure Upstream DNS Servers and OpenShift logging . The Multicloud Object Gateway (MCG) supports access to S3-compatible object storage via an underpinning block/file storage class, through the Noobaa operator. Some Cloud Pak for Data services such as Watson Assistant need object storage to run. MCG does not need to be installed if OpenShift Data Foundation (fka OCS) is also installed as the operator includes Noobaa.","title":"openshift"},{"location":"30-reference/configuration/openshift/#openshift-on-ibm-cloud-roks","text":"VPC-based OpenShift cluster on IBM Cloud, using the Red Hat OpenShift Kubernetes Service (ROKS).
openshift: - name: sample managed: True ocp_version: 4.8 compute_flavour: bx2.16x64 compute_nodes: 3 cloud_native_toolkit: False oadp: False infrastructure: type: vpc vpc_name: sample subnets: - sample-subnet-zone-1 - sample-subnet-zone-2 - sample-subnet-zone-3 cos_name: sample-cos private_only: False deny_node_ports: False upstream_dns: - name: sample-dns zones: - example.com dns_servers: - 172.31.2.73:53 mcg: install: True storage_type: storage-class storage_class: managed-nfs-storage openshift_storage: - storage_name: nfs-storage storage_type: nfs nfs_server_name: sample-nfs - storage_name: ocs-storage storage_type: ocs ocs_storage_label: ocs ocs_storage_size_gb: 500 ocs_version: 4.8.0 - storage_name: pwx-storage storage_type: pwx pwx_etcd_location: {{ ibm_cloud_region }} pwx_storage_size_gb: 200 pwx_storage_iops: 10 pwx_storage_profile: \"10iops-tier\" stork_version: 2.6.2 portworx_version: 2.7.2","title":"OpenShift on IBM Cloud (ROKS)"},{"location":"30-reference/configuration/openshift/#property-explanation-openshift-clusters-on-ibm-cloud-roks","text":"Property Description Mandatory Allowed values name Name of the OpenShift cluster Yes managed Is the ROKS cluster managed by this deployer? See note below. No True (default), False ocp_version ROKS Kubernetes version. If you want to install 4.10 , specify \"4.10\" Yes >= 4.6 compute_flavour Type of compute node to be used Yes Node flavours compute_nodes Total number of compute nodes. This must be a factor of the number of subnets Yes Integer resource_group IBM Cloud resource group for the ROKS cluster Yes cloud_native_toolkit Must the Cloud Native Toolkit (OpenShift GitOps) be installed? 
No True, False (default) oadp Must the OpenShift Advanced Data Protection operator be installed No True, False (default) infrastructure.type Type of infrastructure to provision ROKS cluster on No vpc infrastructure.vpc_name Name of the VPC if type is vpc Yes, inferred from vpc Existing VPC infrastructure.subnets List of subnets within the VPC to use. Either 1 or 3 subnets must be specified Yes Existing subnet infrastructure.cos_name Reference to the cos object created for this cluster Yes Existing cos object infrastructure.private_only If true, it indicates that the ROKS cluster must be provisioned without public endpoints No True, False (default) infrastructure.deny_node_ports If true, the Allow ICMP, TCP and UDP rules for the security group associated with the ROKS cluster are removed if present. If false, the Allow ICMP, TCP and UDP rules are added if not present. No True, False (default) infrastructure.secondary_storage Reference to the storage flavour to be used as secondary storage, for example \"900gb.5iops-tier\" No Valid secondary storage flavour openshift_logging[] Logging attributes for OpenShift cluster, see OpenShift logging No upstream_dns[] Upstream DNS server(s), see Upstream DNS Servers No mcg Multicloud Object Gateway properties No mcg.install Must Multicloud Object Gateway be installed (Once installed, False does not uninstall) Yes True, False mcg.storage_type Type of storage supporting the Noobaa object storage Yes storage-class mcg.storage_class Storage class supporting the Noobaa object storage Yes Existing storage class openshift_storage[] List of storage definitions to be defined on OpenShift, see below for further explanation Yes The managed attribute indicates whether the ROKS cluster is managed by the Cloud Pak Deployer. If set to False , the deployer will not provision the ROKS cluster but expects it to already be available in the VPC.
You can still use the deployer to create the VPC, the subnets, NFS servers and other infrastructure, but first run it without an openshift element. Once the VPC has been created, manually create an OpenShift cluster in the VPC and then add the openshift element with managed set to False . If you intend to use OpenShift Container Storage, you must also activate the add-on and create the OcsCluster custom resource. Warning If you set infrastructure.private_only to True , the server from which you run the deployer must be able to access the ROKS cluster via its private endpoint, either by establishing a VPN to the cluster's VPC, or by making sure the deployer runs on a server that has a connection with the ROKS VPC via a transit gateway.","title":"Property explanation OpenShift clusters on IBM Cloud (ROKS)"},{"location":"30-reference/configuration/openshift/#openshift_storage---openshift-storage-definitions","text":"Property Description Mandatory Allowed values openshift_storage[] List of storage definitions to be defined on OpenShift Yes storage_name Name of the storage definition, to be referenced by the Cloud Pak Yes storage_type Type of storage class to create in the OpenShift cluster Yes nfs, ocs or pwx nfs_server_name Name of the NFS server within the VPC Yes if storage_type is nfs Existing nfs_server ocs_storage_label Label to be used for the dedicated OCS nodes in the cluster Yes if storage_type is ocs ocs_storage_size_gb Size of the OCS storage in Gibibytes (Gi) Yes if storage_type is ocs ocs_version Version of OCS (ODF) to be deployed. 
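A minimal sketch of the two-step approach described above: once the VPC exists and the ROKS cluster has been created manually, the openshift element is added with managed set to False. The names reuse the earlier ROKS sample:

```yaml
# Sketch: referencing a manually created ROKS cluster in an existing VPC
# (names taken from the ROKS sample above).
openshift:
- name: sample
  managed: False              # deployer will not provision the ROKS cluster
  ocp_version: 4.8
  compute_flavour: bx2.16x64
  compute_nodes: 3
  infrastructure:
    type: vpc
    vpc_name: sample
    cos_name: sample-cos
  openshift_storage:
  - storage_name: nfs-storage
    storage_type: nfs
    nfs_server_name: sample-nfs
```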
If left empty, the latest version will be deployed No >= 4.6 pwx_etcd_location Location where the etcd service will be deployed, typically the same region as the ROKS cluster Yes if storage_type is pwx pwx_storage_size_gb Size of the Portworx storage that will be provisioned Yes if storage_type is pwx pwx_storage_iops IOPS for the storage volumes that will be provisioned Yes if storage_type is pwx pwx_storage_profile IOPS storage tier for the storage volumes that will be provisioned Yes if storage_type is pwx stork_version Version of the Portworx storage orchestration layer for Kubernetes Yes if storage_type is pwx portworx_version Version of the Portworx storage provider Yes if storage_type is pwx Warning When deploying a ROKS cluster with OpenShift Data Foundation (fka OpenShift Container Storage/OCS), the minimum version of OpenShift is 4.7.","title":"openshift_storage[] - OpenShift storage definitions"},{"location":"30-reference/configuration/openshift/#openshift-on-vsphere","text":"openshift: - name: sample domain_name: example.com vsphere_name: sample ocp_version: 4.8 control_plane_nodes: 3 control_plane_vm_definition: control-plane compute_nodes: 3 compute_vm_definition: compute api_vip: 10.99.92.51 ingress_vip: 10.99.92.52 cloud_native_toolkit: False oadp: False infrastructure: openshift_cluster_network_cidr: 10.128.0.0/14 upstream_dns: - name: sample-dns zones: - example.com dns_servers: - 172.31.2.73:53 mcg: install: True storage_type: storage-class storage_class: thin openshift_storage: - storage_name: nfs-storage storage_type: nfs nfs_server_name: sample-nfs - storage_name: ocs-storage storage_type: ocs ocs_storage_label: ocs ocs_storage_size_gb: 512 ocs_dynamic_storage_class: thin","title":"OpenShift on vSphere"},{"location":"30-reference/configuration/openshift/#property-explanation-openshift-clusters-on-vsphere","text":"Property Description Mandatory Allowed values name Name of the OpenShift cluster Yes domain_name Domain name of the cluster, this will
also depict the route to the API and ingress endpoints Yes ocp_version OpenShift version. If you want to install 4.10 , specify \"4.10\" Yes >= 4.6 control_plane_nodes Total number of control plane nodes, typically 3 Yes Integer control_plane_vm_definition vm_definition object that will be used to define number of vCPUs and memory for the control plane nodes Yes Existing vm_definition compute_nodes Total number of compute nodes Yes Integer compute_vm_definition vm_definition object that will be used to define number of vCPUs and memory for the compute nodes Yes Existing vm_definition api_vip Virtual IP address that the installer will provision for the API server Yes ingress_vip Virtual IP address that the installer will provision for the ingress server Yes cloud_native_toolkit Must the Cloud Native Toolkit (OpenShift GitOps) be installed? No True, False (default) oadp Must the OpenShift Advanced Data Protection operator be installed No True, False (default) infrastructure Infrastructure properties No infrastructure.openshift_cluster_network_cidr Network CIDR used by the OpenShift pods. Normally you would not have to change this, unless other systems in the network are in the 10.128.0.0/14 subnet. 
No CIDR openshift_logging[] Logging attributes for OpenShift cluster, see OpenShift logging No upstream_dns[] Upstream DNS server(s), see Upstream DNS Servers No mcg Multicloud Object Gateway properties No mcg.install Must Multicloud Object Gateway be installed (Once installed, False does not uninstall) Yes True, False mcg.storage_type Type of storage supporting the Noobaa object storage Yes storage-class mcg.storage_class Storage class supporting the Noobaa object storage Yes Existing storage class openshift_storage[] List of storage definitions to be defined on OpenShift, see below for further explanation Yes","title":"Property explanation OpenShift clusters on vSphere"},{"location":"30-reference/configuration/openshift/#openshift_storage---openshift-storage-definitions_1","text":"Property Description Mandatory Allowed values openshift_storage[] List of storage definitions to be defined on OpenShift Yes storage_name Name of the storage definition, to be referenced by the Cloud Pak Yes storage_type Type of storage class to create in the OpenShift cluster Yes nfs or ocs nfs_server_name Name of the NFS server within the VPC Yes if storage_type is nfs Existing nfs_server ocs_version Version of the OCS operator. If not specified, this will default to the ocp_version No >= 4.6 ocs_storage_label Label to be used for the dedicated OCS nodes in the cluster Yes if storage_type is ocs ocs_storage_size_gb Size of the OCS storage in Gibibytes (Gi) Yes if storage_type is ocs ocs_dynamic_storage_class Storage class that will be used for provisioning OCS.
On vSphere clusters, thin is usually available after OpenShift installation Yes if storage_type is ocs storage_vm_definition VM Definition that defines the virtual machine attributes for the OCS nodes Yes if storage_type is ocs","title":"openshift_storage[] - OpenShift storage definitions"},{"location":"30-reference/configuration/openshift/#openshift-on-aws---self-managed","text":"nfs_server: - name: sample-elastic infrastructure: aws_region: eu-west-1 openshift: - name: sample ocp_version: 4.10.34 domain_name: cp-deployer.eu compute_flavour: m5.4xlarge compute_nodes: 3 cloud_native_toolkit: False oadp: False infrastructure: type: self-managed aws_region: eu-central-1 multi_zone: True credentials_mode: Manual private_only: True machine_cidr: 10.2.1.0/24 openshift_cluster_network_cidr: 10.128.0.0/14 subnet_ids: - subnet-06bbef28f585a0dd3 - subnet-0ea5ac344c0fbadf5 hosted_zone_id: Z08291873MCIC4TMIK4UP ami_id: ami-09249dd86b1933dd5 mcg: install: True storage_type: storage-class storage_class: gp3-csi openshift_storage: - storage_name: ocs-storage storage_type: ocs ocs_storage_label: ocs ocs_storage_size_gb: 512 - storage_name: sample-elastic storage_type: aws-elastic","title":"OpenShift on AWS - self-managed"},{"location":"30-reference/configuration/openshift/#property-explanation-openshift-clusters-on-aws-self-managed","text":"Property Description Mandatory Allowed values name Name of the OpenShift cluster Yes ocp_version OpenShift version, specified as x.y.z Yes >= 4.6 domain_name Base domain name of the cluster. Together with the name , this will be the domain of the OpenShift cluster. Yes control_plane_flavour Flavour of the AWS servers used for the control plane nodes. m5.xlarge is the recommended value Yes control_plane_nodes Total number of control plane nodes Yes Integer compute_flavour Flavour of the AWS servers used for the compute nodes.
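The vSphere OCS properties above, combined into one storage definition; the storage_vm_definition value is a hypothetical name that must reference a vm_definition defined elsewhere in the configuration:

```yaml
# Sketch: OCS storage definition on vSphere, including the VM definition
# for the dedicated OCS nodes (storage_vm_definition name is a placeholder).
openshift_storage:
- storage_name: ocs-storage
  storage_type: ocs
  ocs_storage_label: ocs
  ocs_storage_size_gb: 512
  ocs_dynamic_storage_class: thin   # usually available on vSphere clusters
  storage_vm_definition: storage    # must reference an existing vm_definition
```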
m5.4xlarge is a large node with 16 cores and 64 GB of memory Yes compute_nodes Total number of compute nodes Yes Integer cloud_native_toolkit Must the Cloud Native Toolkit (OpenShift GitOps) be installed? No True, False (default) oadp Must the OpenShift Advanced Data Protection operator be installed No True, False (default) infrastructure Infrastructure properties Yes infrastructure.type Type of OpenShift cluster on AWS. Yes rosa or self-managed infrastructure.aws_region Region of AWS where cluster is deployed. Yes infrastructure.multi_zone Determines whether the OpenShift cluster is deployed across multiple availability zones. Default is True. No True (default), False infrastructure.credentials_mode Security requirement of the Cloud Credential Operator (CCO) when doing installations with temporary AWS security credentials. Default (omit) is automatically handled by CCO. No Manual, Mint infrastructure.machine_cidr Machine CIDR. This value will be used to create the VPC and its subnets. In case of an existing VPC, specify the CIDR of that VPC. No CIDR infrastructure.openshift_cluster_network_cidr Network CIDR used by the OpenShift pods. Normally you would not have to change this, unless other systems in the network are in the 10.128.0.0/14 subnet. No CIDR infrastructure.subnet_ids Existing public and private subnet IDs in the VPC to be used for the OpenShift cluster. Must be specified in combination with machine_cidr and hosted_zone_id. No Existing subnet IDs infrastructure.private_only Indicates whether the OpenShift cluster can be accessed from the internet. Default is True No True, False infrastructure.hosted_zone_id ID of the AWS Route 53 hosted zone that controls the DNS entries. If not specified, the OpenShift installer will create a hosted zone for the specified domain_name .
This attribute is only needed if you create the OpenShift cluster in an existing VPC No infrastructure.control_plane_iam_role If not standard, specify the IAM role that the OpenShift installer must use for the control plane nodes during cluster creation No infrastructure.compute_iam_role If not standard, specify the IAM role that the OpenShift installer must use for the compute nodes during cluster creation No infrastructure.ami_id ID of the AWS AMI to boot all images No openshift_logging[] Logging attributes for OpenShift cluster, see OpenShift logging No mcg Multicloud Object Gateway properties No mcg.install Must Multicloud Object Gateway be installed (Once installed, False does not uninstall) Yes True, False mcg.storage_type Type of storage supporting the Noobaa object storage Yes storage-class mcg.storage_class Storage class supporting the Noobaa object storage Yes Existing storage class openshift_storage[] List of storage definitions to be defined on OpenShift, see below for further explanation Yes When deploying the OpenShift cluster within an existing VPC, you must specify the machine_cidr that covers all subnets and the subnet IDs within the VPC. For example: machine_cidr: 10.243.0.0/24 subnet_ids: - subnet-0e63f662bb1842e8a - subnet-0673351cd49877269 - subnet-00b007a7c2677cdbc - subnet-02b676f92c83f4422 - subnet-0f1b03a02973508ed - subnet-027ca7cc695ce8515","title":"Property explanation OpenShift clusters on AWS (self-managed)"},{"location":"30-reference/configuration/openshift/#openshift_storage---openshift-storage-definitions_2","text":"Property Description Mandatory Allowed values openshift_storage[] List of storage definitions to be defined on OpenShift Yes storage_name Name of the storage definition, to be referenced by the Cloud Pak Yes storage_type Type of storage class to create in the OpenShift cluster Yes ocs, aws-elastic ocs_version Version of the OCS operator. 
If not specified, this will default to the ocp_version No ocs_storage_label Label to be used for the dedicated OCS nodes in the cluster Yes if storage_type is ocs ocs_storage_size_gb Size of the OCS storage in Gibibytes (Gi) Yes if storage_type is ocs ocs_dynamic_storage_class Storage class that will be used for provisioning ODF. gp3-csi is usually available after OpenShift installation No","title":"openshift_storage[] - OpenShift storage definitions"},{"location":"30-reference/configuration/openshift/#openshift-on-aws---rosa","text":"nfs_server: - name: sample-elastic infrastructure: aws_region: eu-west-1 openshift: - name: sample ocp_version: 4.10.34 compute_flavour: m5.4xlarge compute_nodes: 3 cloud_native_toolkit: False oadp: False infrastructure: type: rosa aws_region: eu-central-1 multi_zone: True use_sts: False credentials_mode: Manual upstream_dns: - name: sample-dns zones: - example.com dns_servers: - 172.31.2.73:53 mcg: install: True storage_type: storage-class storage_class: gp3-csi openshift_storage: - storage_name: ocs-storage storage_type: ocs ocs_storage_label: ocs ocs_storage_size_gb: 512 - storage_name: sample-elastic storage_type: aws-elastic","title":"OpenShift on AWS - ROSA"},{"location":"30-reference/configuration/openshift/#property-explanation-openshift-clusters-on-aws-rosa","text":"Property Description Mandatory Allowed values name Name of the OpenShift cluster Yes ocp_version OpenShift version, specified as x.y.z Yes >= 4.6 compute_flavour Flavour of the AWS servers used for the compute nodes. m5.4xlarge is a large node with 16 cores and 64 GB of memory Yes cloud_native_toolkit Must the Cloud Native Toolkit (OpenShift GitOps) be installed? No True, False (default) oadp Must the OpenShift Advanced Data Protection operator be installed? No True, False (default) infrastructure Infrastructure properties Yes infrastructure.type Type of OpenShift cluster on AWS. 
Yes rosa or self-managed infrastructure.aws_region AWS region where the cluster is deployed. Yes infrastructure.multi_zone Determines whether the OpenShift cluster is deployed across multiple availability zones. Default is True. No True (default), False infrastructure.use_sts Determines whether AWS Security Token Service must be used by the ROSA installer. Default is False. No True, False (default) infrastructure.credentials_mode Change the security requirement of the Cloud Credential Operator (CCO). Default (omit) is automatically handled by CCO. No Manual, Mint infrastructure.machine_cidr Machine CIDR, for example 10.243.0.0/16. No CIDR infrastructure.subnet_ids Existing public and private subnet IDs in the VPC to be used for the OpenShift cluster. Must be specified in combination with machine_cidr. No Existing subnet IDs compute_nodes Total number of compute nodes Yes Integer upstream_dns[] Upstream DNS server(s), see Upstream DNS Servers No openshift_logging[] Logging attributes for OpenShift cluster, see OpenShift logging No mcg Multicloud Object Gateway properties No mcg.install Must Multicloud Object Gateway be installed (Once installed, False does not uninstall) Yes True, False mcg.storage_type Type of storage supporting the Noobaa object storage Yes storage-class mcg.storage_class Storage class supporting the Noobaa object storage Yes Existing storage class openshift_storage[] List of storage definitions to be defined on OpenShift, see below for further explanation Yes When deploying the OpenShift cluster within an existing VPC, you must specify the machine_cidr that covers all subnets and the subnet IDs within the VPC. 
For example: machine_cidr: 10.243.0.0/24 subnet_ids: - subnet-0e63f662bb1842e8a - subnet-0673351cd49877269 - subnet-00b007a7c2677cdbc - subnet-02b676f92c83f4422 - subnet-0f1b03a02973508ed - subnet-027ca7cc695ce8515","title":"Property explanation OpenShift clusters on AWS (ROSA)"},{"location":"30-reference/configuration/openshift/#openshift_storage---openshift-storage-definitions_3","text":"Property Description Mandatory Allowed values openshift_storage[] List of storage definitions to be defined on OpenShift Yes storage_name Name of the storage definition, to be referenced by the Cloud Pak Yes storage_type Type of storage class to create in the OpenShift cluster Yes ocs, aws-elastic ocs_version Version of the OCS operator. If not specified, this will default to the ocp_version No ocs_storage_label Label to be used for the dedicated OCS nodes in the cluster Yes if storage_type is ocs ocs_storage_size_gb Size of the OCS storage in Gibibytes (Gi) Yes if storage_type is ocs ocs_dynamic_storage_class Storage class that will be used for provisioning ODF. gp3-csi is usually available after OpenShift installation No","title":"openshift_storage[] - OpenShift storage definitions"},{"location":"30-reference/configuration/openshift/#openshift-on-microsoft-azure-aro","text":"openshift: - name: sample azure_name: sample domain_name: example.com ocp_version: 4.10.54 cloud_native_toolkit: False oadp: False network: pod_cidr: \"10.128.0.0/14\" service_cidr: \"172.30.0.0/16\" openshift_storage: - storage_name: ocs-storage storage_type: ocs ocs_storage_label: ocs ocs_storage_size_gb: 512 ocs_dynamic_storage_class: managed-premium","title":"OpenShift on Microsoft Azure (ARO)"},{"location":"30-reference/configuration/openshift/#property-explanation-for-openshift-cluster-on-microsoft-azure-aro","text":"Warning You are not allowed to specify the OCP version of the ARO cluster. 
The latest available version is provisioned automatically, regardless of the value specified in the \"ocp_version\" parameter. The \"ocp_version\" parameter is mandatory for compatibility with other layers of the provisioning, such as the OpenShift client. For instance, the value is used by the process which downloads and installs the oc client. Please specify the value according to the OCP version that will be provisioned. Property Description Mandatory Allowed values name Name of the OpenShift cluster Yes azure_name Name of the azure element in the configuration Yes domain_name Domain name of the cluster, if you want to override the name generated by Azure No ocp_version The OpenShift version. If you want to install 4.10 , specify \"4.10\" Yes >= 4.6 cloud_native_toolkit Must the Cloud Native Toolkit (OpenShift GitOps) be installed? No True, False (default) oadp Must the OpenShift Advanced Data Protection operator be installed? No True, False (default) network Cluster network attributes Yes network.pod_cidr CIDR of pod network Yes Must be /18 or larger. network.service_cidr CIDR of service network Yes Must be /18 or larger. 
openshift_logging[] Logging attributes for OpenShift cluster, see OpenShift logging No upstream_dns[] Upstream DNS server(s), see Upstream DNS Servers No mcg Multicloud Object Gateway properties No mcg.install Must Multicloud Object Gateway be installed (Once installed, False does not uninstall) Yes True, False mcg.storage_type Type of storage supporting the Noobaa object storage Yes storage-class mcg.storage_class Storage class supporting the Noobaa object storage Yes Existing storage class openshift_storage[] List of storage definitions to be defined on OpenShift, see below for further explanation Yes","title":"Property explanation for OpenShift cluster on Microsoft Azure (ARO)"},{"location":"30-reference/configuration/openshift/#openshift_storage---openshift-storage-definitions_4","text":"Property Description Mandatory Allowed values openshift_storage[] List of storage definitions to be defined on OpenShift Yes storage_name Name of the storage Yes storage_type Type of storage class to create in the OpenShift cluster Yes ocs or nfs ocs_version Version of the OCS operator. If not specified, this will default to the ocp_version No ocs_storage_label Label (or rather a name) to be used for the dedicated OCS nodes in the cluster, together with the combination of Azure location and zone id Yes if storage_type is ocs ocs_storage_size_gb Size of the OCS storage in Gibibytes (Gi) Yes if storage_type is ocs ocs_dynamic_storage_class Storage class that will be used for provisioning OCS. In Azure, you must select managed-premium Yes if storage_type is ocs managed-premium","title":"openshift_storage[] - OpenShift storage definitions"},{"location":"30-reference/configuration/openshift/#existing-openshift","text":"When using the Cloud Pak Deployer on an existing OpenShift cluster, the scripts assume that the cluster is already operational and that any storage classes have been pre-created. 
The deployer accesses the cluster through a vault secret with the kubeconfig information; the name of the secret is <cluster name>-kubeconfig . openshift: - name: sample ocp_version: 4.8 cluster_name: sample domain_name: example.com cloud_native_toolkit: False oadp: False infrastructure: type: standard processor_architecture: amd64 upstream_dns: - name: sample-dns zones: - example.com dns_servers: - 172.31.2.73:53 gpu: install: False mcg: install: True storage_type: storage-class storage_class: managed-nfs-storage openshift_storage: - storage_name: nfs-storage storage_type: nfs # ocp_storage_class_file: managed-nfs-storage # ocp_storage_class_block: managed-nfs-storage","title":"Existing OpenShift"},{"location":"30-reference/configuration/openshift/#property-explanation-for-existing-openshift-clusters","text":"Property Description Mandatory Allowed values name Name of the OpenShift cluster Yes ocp_version OpenShift version of the cluster, used to download the client. If you want to install 4.10 , specify \"4.10\" Yes >= 4.6 cluster_name Name of the cluster (part of the FQDN) Yes domain_name Domain name of the cluster (part of the FQDN) Yes cloud_native_toolkit Must the Cloud Native Toolkit (OpenShift GitOps) be installed? No True, False (default) oadp Must the OpenShift Advanced Data Protection operator be installed? No True, False (default) infrastructure.type Infrastructure OpenShift is deployed on. 
See below for additional explanation detect (default) infrastructure.processor_architecture Architecture of the processor that the OpenShift cluster is deployed on No amd64 (default), ppc64le, s390x openshift_logging[] Logging attributes for OpenShift cluster, see OpenShift logging No upstream_dns[] Upstream DNS server(s), see Upstream DNS Servers No gpu Control Node Feature Discovery and NVIDIA GPU operators No gpu.install Must Node Feature Discovery and NVIDIA GPU operators be installed (Once installed, False does not uninstall) Yes True, False mcg Multicloud Object Gateway properties No mcg.install Must Multicloud Object Gateway be installed (Once installed, False does not uninstall) Yes True, False mcg.storage_type Type of storage supporting the Noobaa object storage Yes storage-class mcg.storage_class Storage class supporting the Noobaa object storage Yes Existing storage class openshift_storage[] List of storage definitions to be defined on OpenShift, see below for further explanation Yes","title":"Property explanation for existing OpenShift clusters"},{"location":"30-reference/configuration/openshift/#infastructuretype---type-of-infrastructure","text":"When deploying on existing OpenShift, the underlying infrastructure can pose some restrictions on the capabilities available. For example, Red Hat OpenShift on IBM Cloud (aka ROKS) does not include the Machine Config Operator and ROSA on AWS does not allow setting labels for Machine Config Pools. This means that node settings required for Cloud Pak for Data must be applied in a non-standard manner. The following values are allowed for infrastructure.type : detect (default): The deployer will attempt to detect the underlying cloud infrastructure. This is done by retrieving the existing storage classes and then inferring the cloud type. standard : The deployer will assume a standard OpenShift cluster with no further restrictions. 
This is the fallback value for detect if the underlying infrastructure cannot be detected. aws-self-managed : A self-managed OpenShift cluster on AWS. No restrictions. aws-rosa : Managed Red Hat OpenShift on AWS. Some restrictions with regards to Machine Config Pools apply. azure-aro : Managed Red Hat OpenShift on Azure. No known restrictions. vsphere : OpenShift on vSphere. No known restrictions.","title":"infrastructure.type - Type of infrastructure"},{"location":"30-reference/configuration/openshift/#openshift_storage---openshift-storage-definitions_5","text":"Property Description Mandatory Allowed values storage_name Name of the storage definition, to be referenced by the Cloud Pak Yes storage_type Type of storage class to use in the OpenShift cluster Yes nfs, ocs, aws-elastic, auto, custom ocp_storage_class_file OpenShift storage class to use for file storage if different from default for storage_type Yes if storage_type is custom ocp_storage_class_block OpenShift storage class to use for block storage if different from default for storage_type Yes if storage_type is custom Info The custom storage_type can be used in case you want to use non-standard storage class(es). In this case the storage class(es) must already be configured on the OCP cluster and set in the respective ocp_storage_class_file and ocp_storage_class_block variables Info The auto storage_type will let the deployer automatically detect the storage type based on the existing storage classes in the OpenShift cluster.","title":"openshift_storage[] - OpenShift storage definitions"},{"location":"30-reference/configuration/openshift/#supported-storage-types","text":"An openshift definition always includes the type(s) of storage that it will provide. When the OpenShift cluster is provisioned by the deployer, the necessary infrastructure and storage class(es) are also configured. In case an existing OpenShift cluster is referenced by the configuration, the storage classes are expected to exist already. 
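As a sketch of the custom storage type described above (the storage name and storage class names are illustrative assumptions, not taken from this document), a definition referencing pre-created storage classes could look like:

```yaml
openshift_storage:
- storage_name: custom-storage          # hypothetical name
  storage_type: custom
  # Pre-created storage classes (assumed names); both must already exist on the cluster
  ocp_storage_class_file: my-file-sc    # RWX (file) storage class
  ocp_storage_class_block: my-block-sc  # RWO (block) storage class
```

The Cloud Pak definitions would then reference this storage via its storage_name, as with any other storage definition.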
The table below indicates which storage classes are supported by the Cloud Pak Deployer per cloud infrastructure. Warning The ability to provision or use certain storage types does not imply support by the Cloud Paks or by OpenShift itself. There are several restrictions for production use of OpenShift Data Foundation, for example on ROSA. Cloud Provider NFS Storage OCS/ODF Storage Portworx Elastic Custom (2) ibm-cloud Yes Yes Yes No Yes vsphere Yes (1) Yes No No Yes aws No Yes No Yes (3) Yes azure No Yes No No Yes existing-ocp Yes Yes No Yes Yes (1) An existing NFS server can be specified so that the deployer configures the managed-nfs-storage storage class. The deployer will not provision or change the NFS server itself. (2) If you specify a custom storage type, you must specify the storage class to be used for block (RWO) and file (RWX) storage. (3) Specifying this storage type means that Elastic File Storage (EFS) and Elastic Block Storage (EBS) storage classes will be used. For EFS, an nfs_server object is required to define the \"file server\" storage on AWS.","title":"Supported storage types"},{"location":"30-reference/configuration/private-registry/","text":"Private registry \ud83d\udd17 In cases where the OpenShift cluster is in an environment with limited internet connectivity, you may want OpenShift to pull Cloud Pak images from a private image registry (aka container registry). There may also be other reasons for choosing a private registry over the entitled registry. Configuring a private registry \ud83d\udd17 The steps below outline how to configure a private registry for a Cloud Pak deployment. 
When the image_registry object is referenced by the Cloud Pak object (such as cp4d ), the deployer makes the following changes in OpenShift so that images are pulled from the private registry: Global pull secret: The image registry's credentials are retrieved from the vault (the secret name must be image-registry-<name> ) and an entry for the registry is added to the global pull secret (secret pull-secret in project openshift-config ). ImageContentSourcePolicy: This is a mapping between the original location of the image, for example quay.io/opencloudio/zen-metastoredb@sha256:582cac2366dda8520730184dec2c430e51009a854ed9ccea07db9c3390e13b29 is mapped to registry.coc.uk.ibm.com:15000/opencloudio/zen-metastoredb@sha256:582cac2366dda8520730184dec2c430e51009a854ed9ccea07db9c3390e13b29 . Image registry settings: OpenShift keeps image registry settings in custom resource image.config.openshift.io/cluster . If a private registry with a self-signed certificate is configured, the certificate authority's PEM certificate must be created as a configmap in the openshift-config project. The deployer uses the vault secret referenced in the registry_trusted_ca_secret property to create or update the configmap so that OpenShift can connect to the registry in a secure manner. Alternatively, you can add the registry_insecure: true property to pull images without checking the certificate. image_registry \ud83d\udd17 Defines a private registry that will be used for pulling the Cloud Pak container images from. Additionally, if the Cloud Pak entitlement key was specified at run time of the deployer, the images defined by the case files will be mirrored to this private registry. image_registry: - name: cpd463 registry_host_name: registry.example.com registry_port: 5000 registry_insecure: false registry_trusted_ca_secret: cpd463-ca-bundle Properties \ud83d\udd17 Property Description Mandatory Allowed values name Name by which the image registry is identified. 
Yes registry_host_name Host name or IP address of the registry server Yes registry_port Port that the image registry listens on. Default is the https port (443) No registry_namespace Namespace (path) within the registry that holds the Cloud Pak images. Mandatory only when using the IBM Cloud Container Registry (ICR) No registry_insecure Defines whether insecure registry access with a self-signed certificate is allowed No True, False (default) registry_trusted_ca_secret Defines the vault secret which holds the certificate authority bundle that must be used when connecting to this private registry. This parameter cannot be specified if registry_insecure is also specified. No Warning The registry_host_name you specify in the image_registry definition must also be available for DNS lookup within OpenShift. If the registry runs on a server that is not registered in the DNS, use its IP address instead of a host name. When mirroring images, the deployer connects to the registry using the host name and port. If the port is omitted, the standard https protocol (443) is used. If a registry_namespace is specified, for example when using the IBM Container Registry on IBM Cloud, it will be appended to the registry URL. The user and password to connect to the registry will be retrieved from the vault, using secret image-registry-<name> , and must be stored in the format registry_user:registry_password . For example, if you want to connect to the image registry cpd463 with user admin and password very_s3cret , you would create a secret as follows: ./cp-deploy.sh vault set \\ -vs image-registry-cpd463 \\ -vsv \"admin:very_s3cret\" If you need to connect to a private registry which is not signed by a public certificate authority, you have two choices: * Store the PEM certificate that holds the CA bundle in a vault secret and specify that secret for the registry_trusted_ca_secret property. This is the recommended method for private registries. 
* Specify registry_insecure: true (not recommended): This means that the registry (and port) will be marked as insecure and OpenShift will pull images from it, even if its certificate is self-signed. For example, if you have a file /tmp/ca.crt with the PEM certificate for the certificate authority, you can do the following: ./cp-deploy.sh vault set \\ -vs cpd463-ca-bundle \\ -vsf /tmp/ca.crt This will create a vault secret which the deployer will use to populate a configmap in the openshift-config project, which in turn is referenced by the image.config.openshift.io/cluster custom resource. For the above configuration, configmap cpd463-ca-bundle would be created and the image.config.openshift.io/cluster resource would look something like this: apiVersion: config.openshift.io/v1 kind: Image metadata: ... ... name: cluster spec: additionalTrustedCA: name: cpd463-ca-bundle Using the IBM Container Registry as a private registry \ud83d\udd17 If you want to use a private registry when running the deployer for a ROKS cluster on IBM Cloud, you must use the IBM Container Registry (ICR) service. The deployer will automatically create the specified namespace in the ICR and set up the credentials accordingly. Configure an image_registry object with the host name of the private registry and the namespace that holds the images. An example of using the ICR as a private registry: image_registry: - name: cpd463 registry_host_name: de.icr.io registry_namespace: cpd463 The registry host name must end with icr.io and the registry namespace is mandatory. No other properties are needed; the deployer will retrieve them from IBM Cloud. 
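The ImageContentSourcePolicy mapping described earlier in this chapter can be sketched as follows; the mirror host and resource name are illustrative assumptions, not values from this document:

```yaml
apiVersion: operator.openshift.io/v1alpha1
kind: ImageContentSourcePolicy
metadata:
  name: cloud-pak-mirror                      # hypothetical name
spec:
  repositoryDigestMirrors:
  - mirrors:
    - registry.example.com:5000/opencloudio   # private registry (illustrative)
    source: quay.io/opencloudio               # original image location
```

With this mapping in place, any image pulled by digest from the source repository is transparently fetched from the mirror instead.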
If you have already created the ICR namespace, create a vault secret for the image registry credentials: ./cp-deploy.sh vault set \\ -vs image-registry-cpd463 -vsv \"admin:very_s3cret\" An example of configuring the private registry for a cp4d object is below: cp4d: - project: cpd-instance openshift_cluster_name: {{ env_id }} cp4d_version: 4.6.3 image_registry_name: cpd463 The Cloud Pak for Data installation refers to the cpd463 image_registry object. If the ibm_cp_entitlement_key secret is in the vault at the time of running the deployer, the required images will be mirrored from the entitled registry to the private registry. If all images are already available in the private registry, just specify the --skip-mirror-images flag when you run the deployer. Using a private registry for the Cloud Pak installation (non-IBM Cloud) \ud83d\udd17 Configure an image_registry object with the host name of the private registry and some optional properties such as port number, CA certificate and whether insecure access to the registry is allowed. Example: image_registry: - name: cpd463 registry_host_name: registry.example.com registry_port: 5000 registry_insecure: false registry_trusted_ca_secret: cpd463-ca-bundle Warning The registry_host_name you specify in the image_registry definition must also be available for DNS lookup within OpenShift. If the registry runs on a server that is not registered in the DNS, use its IP address instead of a host name. To create the vault secret for the image registry credentials: ./cp-deploy.sh vault set \\ -vs image-registry-cpd463 -vsv \"admin:very_s3cret\" To create the vault secret for the CA bundle: ./cp-deploy.sh vault set \\ -vs cpd463-ca-bundle -vsf /tmp/ca.crt Where ca.crt looks something like this: -----BEGIN CERTIFICATE----- MIIFszCCA5ugAwIBAgIUT02v9OdgdvjgQVslCuL0wwCVaE8wDQYJKoZIhvcNAQEL BQAwaTELMAkGA1UEBhMCVVMxETAPBgNVBAgMCE5ldyBZb3JrMQ8wDQYDVQQHDAZB cm1vbmsxFjAUBgNVBAoMDUlCTSBDbG91ZCBQYWsxHjAcBgNVBAMMFUlCTSBDbG91 ... 
mcutkgtbkq31XYZj0CiM451Qp8KnTx0= -----END CERTIFICATE----- An example of configuring the private registry for a cp4d object is below: cp4d: - project: cpd-instance openshift_cluster_name: {{ env_id }} cp4d_version: 4.6.3 image_registry_name: cpd463 The Cloud Pak for Data installation refers to the cpd463 image_registry object. If the ibm_cp_entitlement_key secret is in the vault at the time of running the deployer, the required images will be mirrored from the entitled registry to the private registry. If all images are already available in the private registry, just specify the --skip-mirror-images flag when you run the deployer.","title":"Private registries"},{"location":"30-reference/configuration/private-registry/#private-registry","text":"In cases where the OpenShift cluster is in an environment with limited internet connectivity, you may want OpenShift to pull Cloud Pak images from a private image registry (aka container registry). There may also be other reasons for choosing a private registry over the entitled registry.","title":"Private registry"},{"location":"30-reference/configuration/private-registry/#configuring-a-private-registry","text":"The steps below outline how to configure a private registry for a Cloud Pak deployment. When the image_registry object is referenced by the Cloud Pak object (such as cp4d ), the deployer makes the following changes in OpenShift so that images are pulled from the private registry: Global pull secret: The image registry's credentials are retrieved from the vault (the secret name must be image-registry-<name> ) and an entry for the registry is added to the global pull secret (secret pull-secret in project openshift-config ). 
ImageContentSourcePolicy: This is a mapping between the original location of the image, for example quay.io/opencloudio/zen-metastoredb@sha256:582cac2366dda8520730184dec2c430e51009a854ed9ccea07db9c3390e13b29 is mapped to registry.coc.uk.ibm.com:15000/opencloudio/zen-metastoredb@sha256:582cac2366dda8520730184dec2c430e51009a854ed9ccea07db9c3390e13b29 . Image registry settings: OpenShift keeps image registry settings in custom resource image.config.openshift.io/cluster . If a private registry with a self-signed certificate is configured, the certificate authority's PEM certificate must be created as a configmap in the openshift-config project. The deployer uses the vault secret referenced in the registry_trusted_ca_secret property to create or update the configmap so that OpenShift can connect to the registry in a secure manner. Alternatively, you can add the registry_insecure: true property to pull images without checking the certificate.","title":"Configuring a private registry"},{"location":"30-reference/configuration/private-registry/#image_registry","text":"Defines a private registry that will be used for pulling the Cloud Pak container images from. Additionally, if the Cloud Pak entitlement key was specified at run time of the deployer, the images defined by the case files will be mirrored to this private registry. image_registry: - name: cpd463 registry_host_name: registry.example.com registry_port: 5000 registry_insecure: false registry_trusted_ca_secret: cpd463-ca-bundle","title":"image_registry"},{"location":"30-reference/configuration/private-registry/#properties","text":"Property Description Mandatory Allowed values name Name by which the image registry is identified. Yes registry_host_name Host name or IP address of the registry server Yes registry_port Port that the image registry listens on. Default is the https port (443) No registry_namespace Namespace (path) within the registry that holds the Cloud Pak images. 
Mandatory only when using the IBM Cloud Container Registry (ICR) No registry_insecure Defines whether insecure registry access with a self-signed certificate is allowed No True, False (default) registry_trusted_ca_secret Defines the vault secret which holds the certificate authority bundle that must be used when connecting to this private registry. This parameter cannot be specified if registry_insecure is also specified. No Warning The registry_host_name you specify in the image_registry definition must also be available for DNS lookup within OpenShift. If the registry runs on a server that is not registered in the DNS, use its IP address instead of a host name. When mirroring images, the deployer connects to the registry using the host name and port. If the port is omitted, the standard https protocol (443) is used. If a registry_namespace is specified, for example when using the IBM Container Registry on IBM Cloud, it will be appended to the registry URL. The user and password to connect to the registry will be retrieved from the vault, using secret image-registry-<name> , and must be stored in the format registry_user:registry_password . For example, if you want to connect to the image registry cpd463 with user admin and password very_s3cret , you would create a secret as follows: ./cp-deploy.sh vault set \\ -vs image-registry-cpd463 \\ -vsv \"admin:very_s3cret\" If you need to connect to a private registry which is not signed by a public certificate authority, you have two choices: * Store the PEM certificate that holds the CA bundle in a vault secret and specify that secret for the registry_trusted_ca_secret property. This is the recommended method for private registries. * Specify registry_insecure: true (not recommended): This means that the registry (and port) will be marked as insecure and OpenShift will pull images from it, even if its certificate is self-signed. 
For example, if you have a file /tmp/ca.crt with the PEM certificate for the certificate authority, you can do the following: ./cp-deploy.sh vault set \\ -vs cpd463-ca-bundle \\ -vsf /tmp/ca.crt This will create a vault secret which the deployer will use to populate a configmap in the openshift-config project, which in turn is referenced by the image.config.openshift.io/cluster custom resource. For the above configuration, configmap cpd463-ca-bundle would be created and the image.config.openshift.io/cluster resource would look something like this: apiVersion: config.openshift.io/v1 kind: Image metadata: ... ... name: cluster spec: additionalTrustedCA: name: cpd463-ca-bundle","title":"Properties"},{"location":"30-reference/configuration/private-registry/#using-the-ibm-container-registry-as-a-private-registry","text":"If you want to use a private registry when running the deployer for a ROKS cluster on IBM Cloud, you must use the IBM Container Registry (ICR) service. The deployer will automatically create the specified namespace in the ICR and set up the credentials accordingly. Configure an image_registry object with the host name of the private registry and the namespace that holds the images. An example of using the ICR as a private registry: image_registry: - name: cpd463 registry_host_name: de.icr.io registry_namespace: cpd463 The registry host name must end with icr.io and the registry namespace is mandatory. No other properties are needed; the deployer will retrieve them from IBM Cloud. If you have already created the ICR namespace, create a vault secret for the image registry credentials: ./cp-deploy.sh vault set \\ -vs image-registry-cpd463 -vsv \"admin:very_s3cret\" An example of configuring the private registry for a cp4d object is below: cp4d: - project: cpd-instance openshift_cluster_name: {{ env_id }} cp4d_version: 4.6.3 image_registry_name: cpd463 The Cloud Pak for Data installation refers to the cpd463 image_registry object. 
If the ibm_cp_entitlement_key secret is in the vault at the time of running the deployer, the required images will be mirrored from the entitled registry to the private registry. If all images are already available in the private registry, just specify the --skip-mirror-images flag when you run the deployer.","title":"Using the IBM Container Registry as a private registry"},{"location":"30-reference/configuration/private-registry/#using-a-private-registry-for-the-cloud-pak-installation-non-ibm-cloud","text":"Configure an image_registry object with the host name of the private registry and some optional properties such as port number, CA certificate and whether insecure access to the registry is allowed. Example: image_registry: - name: cpd463 registry_host_name: registry.example.com registry_port: 5000 registry_insecure: false registry_trusted_ca_secret: cpd463-ca-bundle Warning The registry_host_name you specify in the image_registry definition must also be available for DNS lookup within OpenShift. If the registry runs on a server that is not registered in the DNS, use its IP address instead of a host name. To create the vault secret for the image registry credentials: ./cp-deploy.sh vault set \\ -vs image-registry-cpd463 -vsv \"admin:very_s3cret\" To create the vault secret for the CA bundle: ./cp-deploy.sh vault set \\ -vs cpd463-ca-bundle -vsf /tmp/ca.crt Where ca.crt looks something like this: -----BEGIN CERTIFICATE----- MIIFszCCA5ugAwIBAgIUT02v9OdgdvjgQVslCuL0wwCVaE8wDQYJKoZIhvcNAQEL BQAwaTELMAkGA1UEBhMCVVMxETAPBgNVBAgMCE5ldyBZb3JrMQ8wDQYDVQQHDAZB cm1vbmsxFjAUBgNVBAoMDUlCTSBDbG91ZCBQYWsxHjAcBgNVBAMMFUlCTSBDbG91 ... mcutkgtbkq31XYZj0CiM451Qp8KnTx0= -----END CERTIFICATE----- An example of configuring the private registry for a cp4d object is below: cp4d: - project: cpd-instance openshift_cluster_name: {{ env_id }} cp4d_version: 4.6.3 image_registry_name: cpd463 The Cloud Pak for Data installation refers to the cpd463 image_registry object. 
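For illustration, the ConfigMap that the deployer populates in the openshift-config project from the cpd463-ca-bundle vault secret could look roughly like the sketch below. The key naming (registry host name, with ".." replacing ":" before a port) follows the OpenShift convention for registry CA ConfigMaps; the host name and port are the example values from above and the certificate body is truncated:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cpd463-ca-bundle
  namespace: openshift-config
data:
  # key = registry host name; ".." replaces ":" when a port is included
  registry.example.com..5000: |
    -----BEGIN CERTIFICATE-----
    MIIFszCCA5ugAwIBAgIUT02v9OdgdvjgQVslCuL0wwCVaE8wDQYJKoZIhvcNAQEL
    ...
    -----END CERTIFICATE-----
```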
If the ibm_cp_entitlement_key secret is in the vault at the time of running the deployer, the required images will be mirrored from the entitled registry to the private registry. If all images are already available in the private registry, just specify the --skip-mirror-images flag when you run the deployer.","title":"Using a private registry for the Cloud Pak installation (non-IBM Cloud)"},{"location":"30-reference/configuration/topologies/","text":"Deployment topologies \ud83d\udd17 Configuration of the topology to be deployed typically boils down to choosing the cloud infrastructure you want to deploy, then choosing the type of OpenShift and storage, integrating with infrastructure services and then setting up the Cloud Pak(s). For most initial implementations, a basic deployment will suffice and later this can be extended with additional configuration. Depicted below is the basic deployment topology, followed by a topology with all bells and whistles. Basic deployment \ud83d\udd17 For more details on each of the configuration elements, refer to: Infrastructure OpenShift Cloud Pak Cloud Pak Cartridges Cloud Pak Instances Cloud Pak Assets Extended deployment \ud83d\udd17 For more details about extended deployment, refer to: Monitoring Logging and auditing Private registry DNS Servers Cloud Pak for Data LDAP integration Cloud Pak for Data SAML","title":"Topologies"},{"location":"30-reference/configuration/topologies/#deployment-topologies","text":"Configuration of the topology to be deployed typically boils down to choosing the cloud infrastructure you want to deploy, then choosing the type of OpenShift and storage, integrating with infrastructure services and then setting up the Cloud Pak(s). For most initial implementations, a basic deployment will suffice and later this can be extended with additional configuration. 
Depicted below is the basic deployment topology, followed by a topology with all bells and whistles.","title":"Deployment topologies"},{"location":"30-reference/configuration/topologies/#basic-deployment","text":"For more details on each of the configuration elements, refer to: Infrastructure OpenShift Cloud Pak Cloud Pak Cartridges Cloud Pak Instances Cloud Pak Assets","title":"Basic deployment"},{"location":"30-reference/configuration/topologies/#extended-deployment","text":"For more details about extended deployment, refer to: Monitoring Logging and auditing Private registry DNS Servers Cloud Pak for Data LDAP integration Cloud Pak for Data SAML","title":"Extended deployment"},{"location":"30-reference/configuration/vault/","text":"Vault configuration \ud83d\udd17 Vault configuration \ud83d\udd17 Throughout the deployment process, the Cloud Pak Deployer will create secrets in a vault and retrieve them later. Examples of secrets are: ssh keys, Cloud Pak for Data admin password. Additionally, when provisioning infrastructure on the IBM Cloud, the resulting Terraform state file is also stored in the vault so it can be used later if the configuration needs to be changed. Configuration of the vault is done through a vault object in the configuration. If you want to use the file-based vault in the status directory, you do not need to configure anything. The following Vault implementations can be used to store and retrieve secrets: - File Vault (no encryption) - IBM Cloud Secrets Manager - Hashicorp Vault (token authentication) - Hashicorp Vault (certificate authentication) The File Vault is the default vault and also the simplest. It does not require a password and all secrets are stored in base-64 encoding in a properties file under the /vault directory. The name of the vault file is the environment_name you specified in the global configuration, inventory file or at the command line. 
All of the other vault options require some secret manager (IBM Cloud service or Hashicorp Vault) to be available and you need to specify a password or provide a certificate. Sample Vault config: vault: vault_type: file-vault vault_authentication_type: none Properties for all vault implementations \ud83d\udd17 Property Description Mandatory Allowed values vault_type Chosen implementation of the vault Yes file-vault, ibmcloud-vault, hashicorp-vault Properties for file-vault \ud83d\udd17 Property Description Mandatory Allowed values vault_authentication_type Authentication method for the file vault No none Properties for ibmcloud-vault \ud83d\udd17 Property Description Mandatory Allowed values vault_authentication_type Authentication method for the IBM Cloud Secrets Manager vault No api-key vault_url URL for the IBM Cloud secrets manager instance Yes Properties for hashicorp-vault \ud83d\udd17 Property Description Mandatory Allowed values vault_authentication_type Authentication method for the Hashicorp vault No api-key, certificate vault_url URL for the Hashicorp vault, this is typically https://hostname:8200 Yes vault_api_key When the authentication type is api-key, the API key to authenticate with Yes vault_secret_path Default secret path to store and retrieve secrets into/from Yes vault_secret_field Default field to store or retrieve secrets Yes vault_secret_path_append_group Determines whether or not the secret group will be appended to the path Yes True (default), False vault_secret_base64 Indicates whether secrets are stored in base64 format in Hashicorp Vault Yes True (default), False","title":"Vault"},{"location":"30-reference/configuration/vault/#vault-configuration","text":"","title":"Vault configuration"},{"location":"30-reference/configuration/vault/#vault-configuration_1","text":"Throughout the deployment process, the Cloud Pak Deployer will create secrets in a vault and retrieve them later. Examples of secrets are: ssh keys, Cloud Pak for Data admin password. 
Additionally, when provisioning infrastructure on the IBM Cloud, the resulting Terraform state file is also stored in the vault so it can be used later if the configuration needs to be changed. Configuration of the vault is done through a vault object in the configuration. If you want to use the file-based vault in the status directory, you do not need to configure anything. The following Vault implementations can be used to store and retrieve secrets: - File Vault (no encryption) - IBM Cloud Secrets Manager - Hashicorp Vault (token authentication) - Hashicorp Vault (certificate authentication) The File Vault is the default vault and also the simplest. It does not require a password and all secrets are stored in base-64 encoding in a properties file under the /vault directory. The name of the vault file is the environment_name you specified in the global configuration, inventory file or at the command line. All of the other vault options require some secret manager (IBM Cloud service or Hashicorp Vault) to be available and you need to specify a password or provide a certificate. 
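As a contrast to the default file vault, a Hashicorp Vault setup with API-key (token) authentication could be configured along the lines of the sketch below; the URL, key, path and field values are placeholders, not defaults:

```yaml
vault:
  vault_type: hashicorp-vault
  vault_authentication_type: api-key
  vault_url: https://vault.example.com:8200   # placeholder host
  vault_api_key: my-vault-token               # placeholder; store securely
  vault_secret_path: secret/cloud-pak-deployer
  vault_secret_field: value
  vault_secret_path_append_group: True
  vault_secret_base64: True
```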
Sample Vault config: vault: vault_type: file-vault vault_authentication_type: none","title":"Vault configuration"},{"location":"30-reference/configuration/vault/#properties-for-all-vault-implementations","text":"Property Description Mandatory Allowed values vault_type Chosen implementation of the vault Yes file-vault, ibmcloud-vault, hashicorp-vault","title":"Properties for all vault implementations"},{"location":"30-reference/configuration/vault/#properties-for-file-vault","text":"Property Description Mandatory Allowed values vault_authentication_type Authentication method for the file vault No none","title":"Properties for file-vault"},{"location":"30-reference/configuration/vault/#properties-for-ibmcloud-vault","text":"Property Description Mandatory Allowed values vault_authentication_type Authentication method for the IBM Cloud Secrets Manager vault No api-key vault_url URL for the IBM Cloud secrets manager instance Yes","title":"Properties for ibmcloud-vault"},{"location":"30-reference/configuration/vault/#properties-for-hashicorp-vault","text":"Property Description Mandatory Allowed values vault_authentication_type Authentication method for the Hashicorp vault No api-key, certificate vault_url URL for the Hashicorp vault, this is typically https://hostname:8200 Yes vault_api_key When the authentication type is api-key, the API key to authenticate with Yes vault_secret_path Default secret path to store and retrieve secrets into/from Yes vault_secret_field Default field to store or retrieve secrets Yes vault_secret_path_append_group Determines whether or not the secret group will be appended to the path Yes True (default), False vault_secret_base64 Indicates whether secrets are stored in base64 format in Hashicorp Vault Yes True (default), False","title":"Properties for hashicorp-vault"},{"location":"30-reference/process/configure-cloud-pak/","text":"Configure the Cloud Pak(s) \ud83d\udd17 This stage focuses on post-installation configuration of the Cloud Paks and cartridges. 
Cloud Pak for Data \ud83d\udd17 Web interface certificate \ud83d\udd17 When provisioning on IBM Cloud ROKS, a CA-signed certificate for the ingress subdomain is automatically generated in the IBM Cloud certificate manager. The deployer retrieves the certificate and adds it to the secret that stores the certificate key. This will avoid getting a warning when opening the Cloud Pak for Data home page. Configure identity and access management \ud83d\udd17 For Cloud Pak for Data you can configure: SAML for Single Sign-on. When specified in the cp4d_saml_config object, the deployer configures the user management pods to redirect logins to the identity provider (IdP) of choice. LDAP configuration. LDAP can be used both for authentication (if no SSO has been configured) and for access management by mapping LDAP groups to Cloud Pak for Data user groups. Specify the LDAP or LDAPS properties in the cp4d_ldap_config object so that the deployer configures it for Cloud Pak for Data. If SAML has been configured for authentication, the configured LDAP server is only used for access management. User group configuration. This creates user-defined user groups in Cloud Pak for Data to match the LDAP configuration. The configuration object used for this is cp4d_user_group_configuration . Provision instances \ud83d\udd17 Some cartridges such as Data Virtualization have the ability to create one or more instances to run an isolated installation of the cartridge. If instances have been configured for the cartridge, this step provisions them. The following Cloud Pak for Data cartridges are currently supported for creating instances: Analytics engine powered by Apache Spark ( analytics-engine ) Db2 OLTP ( db2 ) Cognos Analytics ( ca ) Data Virtualization ( dv ) Configure instance access \ud83d\udd17 Cloud Pak for Data does not support group-defined access to cartridge instances. 
After creation of the instances (and also when the deployer is run with the --cp-config-only flag), the permissions of users accessing the instance are configured. For Cognos Analytics, the Cognos Authorization process is run to apply user group permissions to the Cognos Analytics instance. Create or change platform connections \ud83d\udd17 Cloud Pak for Data defines data source connections at the platform level and these can be reused in some cartridges like Watson Knowledge Catalog and Watson Studio. The cp4d_connection object defines each of the platform connections that must be managed by the deployer. Backup and restore connections \ud83d\udd17 If you want to back up or restore platform connections, the cp4d_backup_restore_connections object defines the JSON file that will be used for backup and restore.","title":"Configure Cloud Paks"},{"location":"30-reference/process/configure-cloud-pak/#configure-the-cloud-paks","text":"This stage focuses on post-installation configuration of the Cloud Paks and cartridges.","title":"Configure the Cloud Pak(s)"},{"location":"30-reference/process/configure-cloud-pak/#cloud-pak-for-data","text":"","title":"Cloud Pak for Data"},{"location":"30-reference/process/configure-cloud-pak/#web-interface-certificate","text":"When provisioning on IBM Cloud ROKS, a CA-signed certificate for the ingress subdomain is automatically generated in the IBM Cloud certificate manager. The deployer retrieves the certificate and adds it to the secret that stores the certificate key. This will avoid getting a warning when opening the Cloud Pak for Data home page.","title":"Web interface certificate"},{"location":"30-reference/process/configure-cloud-pak/#configure-identity-and-access-management","text":"For Cloud Pak for Data you can configure: SAML for Single Sign-on. When specified in the cp4d_saml_config object, the deployer configures the user management pods to redirect logins to the identity provider (IdP) of choice. LDAP configuration. 
LDAP can be used both for authentication (if no SSO has been configured) and for access management by mapping LDAP groups to Cloud Pak for Data user groups. Specify the LDAP or LDAPS properties in the cp4d_ldap_config object so that the deployer configures it for Cloud Pak for Data. If SAML has been configured for authentication, the configured LDAP server is only used for access management. User group configuration. This creates user-defined user groups in Cloud Pak for Data to match the LDAP configuration. The configuration object used for this is cp4d_user_group_configuration .","title":"Configure identity and access management"},{"location":"30-reference/process/configure-cloud-pak/#provision-instances","text":"Some cartridges such as Data Virtualization have the ability to create one or more instances to run an isolated installation of the cartridge. If instances have been configured for the cartridge, this step provisions them. The following Cloud Pak for Data cartridges are currently supported for creating instances: Analytics engine powered by Apache Spark ( analytics-engine ) Db2 OLTP ( db2 ) Cognos Analytics ( ca ) Data Virtualization ( dv )","title":"Provision instances"},{"location":"30-reference/process/configure-cloud-pak/#configure-instance-access","text":"Cloud Pak for Data does not support group-defined access to cartridge instances. After creation of the instances (and also when the deployer is run with the --cp-config-only flag), the permissions of users accessing the instance are configured. For Cognos Analytics, the Cognos Authorization process is run to apply user group permissions to the Cognos Analytics instance.","title":"Configure instance access"},{"location":"30-reference/process/configure-cloud-pak/#create-or-change-platform-connections","text":"Cloud Pak for Data defines data source connections at the platform level and these can be reused in some cartridges like Watson Knowledge Catalog and Watson Studio. 
The cp4d_connection object defines each of the platform connections that must be managed by the deployer.","title":"Create or change platform connections"},{"location":"30-reference/process/configure-cloud-pak/#backup-and-restore-connections","text":"If you want to back up or restore platform connections, the cp4d_backup_restore_connections object defines the JSON file that will be used for backup and restore.","title":"Backup and restore connections"},{"location":"30-reference/process/configure-infra/","text":"Configure infrastructure \ud83d\udd17 This stage focuses on the configuration of the provisioned infrastructure. Configure infrastructure for IBM Cloud \ud83d\udd17 Configure the VPC bastion server(s) \ud83d\udd17 In a configuration scenario where NFS is used for OpenShift storage, the NFS server must be provisioned as a VSI within the VPC that contains the OpenShift cluster. It is best practice to shield off the NFS server from the outside world by using a jump host (bastion) to access it. This step configures the bastion host which has a public IP address to serve as a jump host to access other servers and services within the VPC. Configure the VPC NFS server(s) \ud83d\udd17 Configures the NFS server using the specs in the nfs_server configuration object(s). It installs the required packages and sets up the NFSv4 service. Additionally, it will format the empty volume as xfs and export it so it can be used by the managed-nfs-storage storage class in the OpenShift cluster. Configure the OpenShift storage classes \ud83d\udd17 This step takes care of configuring the storage classes in the OpenShift cluster. Storage classes are an abstraction of the underlying physical and virtual storage. When run, it processes the openshift_storage elements within the current openshift configuration object. 
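As an illustration, openshift_storage entries for the two supported types could be configured along these lines; the names are examples and the exact property names should be verified against the deployer's OpenShift configuration reference:

```yaml
openshift_storage:
- storage_name: nfs-storage        # example name
  storage_type: nfs
  nfs_server_name: sample-nfs      # references an nfs_server configuration object
- storage_name: ocs-storage        # example name
  storage_type: ocs
```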
Two types of storage classes can be automatically created and configured: NFS Storage \ud83d\udd17 Creates the managed-nfs-storage OpenShift storage class using the specified nfs_server_name which references an nfs_server configuration object. OCS Storage \ud83d\udd17 Activates the ROKS cluster's OpenShift Container Storage add-on to install the operator into the cluster. Once finished with the preparation, the OcsCluster OpenShift object is created to provision the storage cluster. As the backing storage the ibmc-vpc-block-metro-10iops-tier storage class is used, which has the appropriate IO characteristics for the Cloud Paks. Info Both NFS and OCS storage classes can be created but only one storage class of each type can exist in the cluster at the moment. If more than one storage class of the same type is specified, the configuration will fail.","title":"Configure infra"},{"location":"30-reference/process/configure-infra/#configure-infrastructure","text":"This stage focuses on the configuration of the provisioned infrastructure.","title":"Configure infrastructure"},{"location":"30-reference/process/configure-infra/#configure-infrastructure-for-ibm-cloud","text":"","title":"Configure infrastructure for IBM Cloud"},{"location":"30-reference/process/configure-infra/#configure-the-vpc-bastion-servers","text":"In a configuration scenario where NFS is used for OpenShift storage, the NFS server must be provisioned as a VSI within the VPC that contains the OpenShift cluster. It is best practice to shield off the NFS server from the outside world by using a jump host (bastion) to access it. This step configures the bastion host which has a public IP address to serve as a jump host to access other servers and services within the VPC.","title":"Configure the VPC bastion server(s)"},{"location":"30-reference/process/configure-infra/#configure-the-vpc-nfs-servers","text":"Configures the NFS server using the specs in the nfs_server configuration object(s). 
It installs the required packages and sets up the NFSv4 service. Additionally, it will format the empty volume as xfs and export it so it can be used by the managed-nfs-storage storage class in the OpenShift cluster.","title":"Configure the VPC NFS server(s)"},{"location":"30-reference/process/configure-infra/#configure-the-openshift-storage-classes","text":"This step takes care of configuring the storage classes in the OpenShift cluster. Storage classes are an abstraction of the underlying physical and virtual storage. When run, it processes the openshift_storage elements within the current openshift configuration object. Two types of storage classes can be automatically created and configured:","title":"Configure the OpenShift storage classes"},{"location":"30-reference/process/configure-infra/#nfs-storage","text":"Creates the managed-nfs-storage OpenShift storage class using the specified nfs_server_name which references an nfs_server configuration object.","title":"NFS Storage"},{"location":"30-reference/process/configure-infra/#ocs-storage","text":"Activates the ROKS cluster's OpenShift Container Storage add-on to install the operator into the cluster. Once finished with the preparation, the OcsCluster OpenShift object is created to provision the storage cluster. As the backing storage the ibmc-vpc-block-metro-10iops-tier storage class is used, which has the appropriate IO characteristics for the Cloud Paks. Info Both NFS and OCS storage classes can be created but only one storage class of each type can exist in the cluster at the moment. 
If more than one storage class of the same type is specified, the configuration will fail.","title":"OCS Storage"},{"location":"30-reference/process/deploy-assets/","text":"Deploy Cloud Pak assets \ud83d\udd17 Cloud Pak for Data \ud83d\udd17 For Cloud Pak for Data, this stage does the following: Deploy Cloud Pak for Data assets which are defined with object cp4d_asset Deploy the Cloud Pak for Data monitors identified with cp4d_monitors elements. Deploy Cloud Pak for Data assets \ud83d\udd17 See cp4d_asset for more details. Cloud Pak for Data monitors \ud83d\udd17 See cp4d_monitors for more details.","title":"Deploy assets"},{"location":"30-reference/process/deploy-assets/#deploy-cloud-pak-assets","text":"","title":"Deploy Cloud Pak assets"},{"location":"30-reference/process/deploy-assets/#cloud-pak-for-data","text":"For Cloud Pak for Data, this stage does the following: Deploy Cloud Pak for Data assets which are defined with object cp4d_asset Deploy the Cloud Pak for Data monitors identified with cp4d_monitors elements.","title":"Cloud Pak for Data"},{"location":"30-reference/process/deploy-assets/#deploy-cloud-pak-for-data-assets","text":"See cp4d_asset for more details.","title":"Deploy Cloud Pak for Data assets"},{"location":"30-reference/process/deploy-assets/#cloud-pak-for-data-monitors","text":"See cp4d_monitors for more details.","title":"Cloud Pak for Data monitors"},{"location":"30-reference/process/install-cloud-pak/","text":"Install the Cloud Pak(s) \ud83d\udd17 This stage focuses on preparing the OpenShift cluster for installing the Cloud Pak(s) and then proceeds with the installation of Cloud Paks and the cartridges. The documentation below starts with a list of steps that are executed for all Cloud Paks, then proceeds with Cloud Pak-specific activities. The execution of the steps may slightly differ from the sequence in the documentation. 
Sections: Remove obsolete Cloud Pak for Data instances Prepare private image registry Install Cloud Pak for Data and cartridges Remove Cloud Pak for Data \ud83d\udd17 Before going ahead with the mirroring of container images and installation of Cloud Pak for Data, the previous configuration (if any) is retrieved from the vault to determine if a Cloud Pak for Data instance has been removed. If a previously installed cp4d object no longer exists in the current configuration, its associated instance is removed from the OpenShift cluster. First, the custom resources are removed from the OpenShift project. This happens with a grace period of 5 minutes. After the grace period has expired, OpenShift automatically forcefully deletes the custom resource and its associated definitions. Then, the control plane custom resource Ibmcpd is removed and finally the namespace (project). For the namespace deletion, a grace period of 10 minutes is applied. Prepare private image registry \ud83d\udd17 When installing the Cloud Paks, images must be pulled from an image registry. All Cloud Paks support pulling images directly from the IBM Entitled Registry using the entitlement key, but there may be situations where this is not possible, for example in air-gapped environments, or when images must be scanned for vulnerabilities before they are allowed to be used. In those cases, a private registry will have to be set up. The Cloud Pak Deployer can mirror images to a private registry from the entitled registry. On IBM Cloud, the deployer is also capable of creating a namespace in the IBM Container Registry and mirroring the images to that namespace. When a private registry has been specified in the Cloud Pak entry (using the image_registry_name property), the necessary OpenShift configuration changes will also be made. 
Create IBM Container Registry namespace (IBM Cloud only) \ud83d\udd17 If OpenShift is deployed on IBM Cloud (ROKS), the IBM Container Registry should be used as the private registry from which the images will be pulled. Images in the ICR are organized by namespace and can be accessed using an API key issued for a service account. If an image_registry object is specified in the configuration, this process creates the service account, then the API key, and stores the API key in the vault. Connect to the specified private image registry \ud83d\udd17 If an image registry has been specified for the Cloud Pak using the image_registry_name property, the referenced image_registry entry is looked up in the configuration and the credentials are retrieved from the vault. Then the connection to the registry is tested by logging on. Install Cloud Pak for Data and cartridges \ud83d\udd17 Prepare OpenShift cluster for Cloud Pak installation \ud83d\udd17 Cloud Pak for Data requires a number of cluster-wide settings: Create an ImageContentSourcePolicy if images must be pulled from a private registry Set the global pull secret with the credentials to pull images from the entitled or private image registry Create a Tuned object to set kernel semaphores and other properties of CoreOS containers being spun up Allow unsafe system controls in the Kubelet configuration Set PIDs limit and default ulimit for the CRI-O configuration For all OpenShift clusters, except ROKS on IBM Cloud, these settings are applied using OpenShift configuration objects and then picked up by the Machine Config Operator. This operator will then apply the settings to the control plane and compute nodes as appropriate and reload them one by one. To avoid having to reload the nodes more than once, the Machine Config Operator is paused before the settings are applied. 
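For example, the CRI-O PIDs limit mentioned above is the kind of setting the Machine Config Operator can roll out via a ContainerRuntimeConfig object. The sketch below is illustrative only; the name, label selector and limit value are examples, not the deployer's actual manifest:

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: ContainerRuntimeConfig
metadata:
  name: crio-pids-limit          # example name
spec:
  machineConfigPoolSelector:
    matchLabels:
      pools.operator.machineconfiguration.openshift.io/worker: ""
  containerRuntimeConfig:
    pidsLimit: 16384             # example value
```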
After all setup, the Machine Config Operator is released and the deployment process will then wait until all nodes are ready with the configuration applied. Prepare OpenShift cluster on IBM Cloud and IBM Cloud Satellite \ud83d\udd17 As mentioned before, ROKS on IBM Cloud does not include the Machine Config Operator and would normally require the compute nodes to be reloaded (classic ROKS) or replaced (ROKS on VPC) to make the changes effective. While implementing this process, we have experienced intermittent reliability issues where replacement of nodes never finished or the cluster ended up in an unusable state. To avoid this, the process applies the settings in a different manner. On every node, a cron job is created which starts every 5 minutes. It runs a script that checks if any of the cluster-wide settings must be (re-)applied, then updates the local system and restarts the crio and kubelet daemons. If no settings are to be adjusted, the daemons will not be restarted and therefore the cron job has minimal or no effect on the running applications. Compute node changes that are made by the cron job: ImageContentSourcePolicy : File /etc/containers/registries.conf is updated to include registry mirrors for the private registry. Kubelet : File /etc/kubernetes/kubelet.conf is appended with the allowedUnsafeSysctls entries. CRI-O : pids_limit and default_ulimit changes are made to the /etc/crio/crio.conf file. Pull secret : The registry and credentials are appended to the /.docker/config.json configuration. There are scenarios, especially on IBM Cloud Satellite, where custom changes must be applied to the compute nodes. This is possible by adding the apply-custom-node-settings.sh to the assets directory within the CONFIG_DIR directory. Once Kubelet, CRI-O and other changes have been applied, this script (if existing) is run to apply any additional configuration changes to the compute node. 
By setting the NODE_UPDATED script variable to 1 you can tell the deployer to restart the crio and kubelet daemons. WARNING: You should never set the NODE_UPDATED script variable to 0 as this will cause previous changes to the pull secret, ImageContentSourcePolicy and others not to become effective. WARNING: Do not end the script with the exit command; this will stop the calling script from running and therefore not restart the daemons. Sample script: #!/bin/bash # # This is a sample script that will cause the crio and kubelet daemons to be restarted once by checking # file /tmp/apply-custom-node-settings-run. If the file doesn't exist, it creates it and sets NODE_UPDATED to 1. # The deployer will observe that the node has been updated and restart the daemons. # if [ ! -e /tmp/apply-custom-node-settings-run ] ; then touch /tmp/apply-custom-node-settings-run NODE_UPDATED=1 fi Mirror images to the private registry \ud83d\udd17 If a private image registry is specified, and if the IBM Cloud Pak entitlement key is available in the vault ( cp_entitlement_key secret), the Cloud Pak case files for the Foundational Services, the Cloud Pak control plane and cartridges are downloaded to a subdirectory of the status directory that was specified. Then all images defined for the cartridges are mirrored from the entitled registry to the private image registry. Depending on network speed and how many cartridges have been configured, the mirroring can take a very long time (12+ hours). All images which have already been mirrored to the private registry are skipped by the mirroring process. Even if all images have been mirrored, the act of checking existence and digest can still take a bit of time (10-15 minutes). To avoid this, you can remove the cp_entitlement_key secret from the vault and unset the CP_ENTITLEMENT_KEY environment variable before running the Cloud Pak Deployer. 
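Skipping the existence/digest checks therefore comes down to making sure the entitlement key is not visible to the deployer. A minimal sketch; the vault delete subcommand is an assumption based on the deployer's vault CLI and is shown commented out:

```shell
# Make the entitlement key invisible to the deployer so mirroring is skipped.
# The vault subcommand below is an assumption, hence commented out:
# ./cp-deploy.sh vault delete -vs cp_entitlement_key
CP_ENTITLEMENT_KEY="example-key"   # illustrative: variable currently set
unset CP_ENTITLEMENT_KEY           # remove it from the environment
echo "entitlement key set: ${CP_ENTITLEMENT_KEY:-no}"
```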
Create catalog sources \ud83d\udd17 The images of the operators which control the Cloud Pak are defined in OpenShift CatalogSource objects which reside in the openshift-marketplace project. Operator subscriptions subsequently reference the catalog source and define the update channel. When images are pulled from the entitled registry, most subscriptions reference the same ibm-operator-catalog catalog source (and also a Db2U catalog source). If images are pulled from a private registry, the control plane and also each cartridge reference their own catalog source in the openshift-marketplace project. This step creates the necessary catalog sources, depending on whether the entitled registry or a private registry is used. For the entitled registry, it creates the catalog source directly using a YAML template; when using a private registry, the cloudctl case command is used for the control plane and every cartridge to install the catalog sources and their dependencies. Get OpenShift storage classes \ud83d\udd17 Most custom resources defined by the cartridge operators require some back-end storage. To be able to reference the correct OpenShift storage classes, they are retrieved based on the openshift_storage_name property of the Cloud Pak object. Prepare the Cloud Pak for Data operator \ud83d\udd17 When using express install, the Cloud Pak for Data operator also installs the Cloud Pak Foundational Services. 
In sequence, this part of the deployer: Creates the operator project if it doesn't exist already Creates an OperatorGroup Installs the license service and certificate manager Creates the platform operator subscription Waits until the ClusterServiceVersion objects for the platform operator and Operand Deployment Lifecycle Manager have been created Install the Cloud Pak for Data control plane \ud83d\udd17 When the Cloud Pak for Data operator has been installed, the process continues by creating an OperandRequest object for the platform operator, which manages the project in which the Cloud Pak for Data instance is installed. Then it creates an Ibmcpd custom resource in the project, which installs the control plane with nginx, the metastore, etc. The Cloud Pak for Data control plane is a pre-requisite for all cartridges, so at this stage the deployer waits until the Ibmcpd status reaches the Completed state. Once the control plane has been installed successfully, the deployer generates a new strong 25-character password for the Cloud Pak for Data admin user and stores this in the vault. Additionally, the admin-user-details secret in the OpenShift project is updated with the new password. Install the specified Cloud Pak for Data cartridges \ud83d\udd17 Now that the control plane has been installed in the specified OpenShift project, cartridges can be installed. Every cartridge is controlled by its own operator subscription in the operators project and a custom resource. The deployer iterates twice over the specified cartridges: first to create the operator subscriptions, then to create the custom resources. Create cartridge operator subscriptions \ud83d\udd17 This step creates subscription objects for each cartridge in the operators project, using a YAML template that is included in the deployer code and the subscription_channel specified in the cartridge definition. 
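A cartridge subscription generated from such a template follows the standard OLM shape; the sketch below uses invented names and an invented channel, with subscription_channel filling the channel field:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: sample-cartridge-operator        # placeholder operator name
  namespace: cpd-operators               # placeholder operators project
spec:
  channel: v2.0                          # from subscription_channel in the cartridge definition
  name: sample-cartridge-operator
  source: ibm-operator-catalog           # or the cartridge's own catalog source (private registry)
  sourceNamespace: openshift-marketplace
```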
Keeping the subscription channel separate delivers flexibility when new subscription channels become available over time. Once the subscription has been created, the deployer waits for the associated CSV(s) to be created and reach the Installed state. Delete obsolete cartridges \ud83d\udd17 If this is not the first installation, earlier configured cartridges may have been removed. This step iterates over all supported cartridges and checks whether the cartridge has been installed and whether it still exists in the configuration of the current cp4d object. If the cartridge is no longer defined, its custom resource is removed; the operator will then take care of removing all OpenShift configuration. Install the cartridges \ud83d\udd17 This step creates the Custom Resources for each cartridge. This is the actual installation of the cartridge. Cartridges can be installed in parallel to a certain extent, and each operator will wait for its dependencies to be installed before starting its processes. For example, if Watson Studio and Watson Machine Learning are installed, both have a dependency on the Common Core Services (CCS) and will wait for the CCS object to reach the Completed state before proceeding with the install. Once that is the case, both WS and WML will run the installation process in parallel. Wait until all cartridges are ready \ud83d\udd17 Installation of the cartridges can take a very long time; up to 5 hours for Watson Knowledge Catalog. While cartridges are being installed, the deployer checks the states of all cartridges on a regular basis and reports these in a log file. The deployer will retry until all specified cartridges have reached the Completed state. 
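The retry-until-Completed behaviour can be sketched as a simple poll loop; here the real status lookup (an oc query against the cartridge custom resource) is stubbed by a function so the sketch is self-contained and terminates after a few polls:

```shell
#!/bin/bash
# Schematic wait loop: poll a status until it reaches Completed.
# 'get_status' stubs the real lookup and reports Completed after two polls.
attempt=0
get_status() {
  if [ "$attempt" -ge 2 ]; then echo "Completed"; else echo "InProgress"; fi
}

status=""
while [ "$status" != "Completed" ]; do
  status="$(get_status)"
  attempt=$((attempt + 1))
done
echo "status: $status after $attempt polls"
```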
Configure LDAP authentication for Cloud Pak for Data \ud83d\udd17 If LDAP has been configured for the Cloud Pak for Data element, it will be configured after all cartridges have finished installing.","title":"Install Cloud Paks"},{"location":"30-reference/process/install-cloud-pak/#install-the-cloud-paks","text":"This stage focuses on preparing the OpenShift cluster for installing the Cloud Pak(s) and then proceeds with the installation of Cloud Paks and the cartridges. The documentation below starts with the steps that are executed for all Cloud Paks, then proceeds with Cloud Pak-specific activities. The execution of the steps may slightly differ from the sequence in the documentation. Sections: Remove obsolete Cloud Pak for Data instances Prepare private image registry Install Cloud Pak for Data and cartridges","title":"Install the Cloud Pak(s)"},{"location":"30-reference/process/install-cloud-pak/#remove-cloud-pak-for-data","text":"Before going ahead with the mirroring of container images and installation of Cloud Pak for Data, the previous configuration (if any) is retrieved from the vault to determine if a Cloud Pak for Data instance has been removed. If a previously installed cp4d object no longer exists in the current configuration, its associated instance is removed from the OpenShift cluster. First, the custom resources are removed from the OpenShift project. This happens with a grace period of 5 minutes. After the grace period has expired, OpenShift forcefully deletes the custom resource and its associated definitions. Then, the control plane custom resource Ibmcpd is removed and finally the namespace (project). For the namespace deletion, a grace period of 10 minutes is applied.","title":"Remove Cloud Pak for Data"},{"location":"30-reference/process/install-cloud-pak/#prepare-private-image-registry","text":"When installing the Cloud Paks, images must be pulled from an image registry. 
All Cloud Paks support pulling images directly from the IBM Entitled Registry using the entitlement key, but there may be situations where this is not possible, for example in air-gapped environments, or when images must be scanned for vulnerabilities before they are allowed to be used. In those cases, a private registry will have to be set up. The Cloud Pak Deployer can mirror images to a private registry from the entitled registry. On IBM Cloud, the deployer is also capable of creating a namespace in the IBM Container Registry and mirroring the images to that namespace. When a private registry has been specified in the Cloud Pak entry (using the image_registry_name property), the necessary OpenShift configuration changes will also be made.","title":"Prepare private image registry"},{"location":"30-reference/process/install-cloud-pak/#create-ibm-container-registry-namespace-ibm-cloud-only","text":"If OpenShift is deployed on IBM Cloud (ROKS), the IBM Container Registry should be used as the private registry from which the images will be pulled. Images in the ICR are organized by namespace and can be accessed using an API key issued for a service account. If an image_registry object is specified in the configuration, this process creates the service account and its API key, and stores the API key in the vault.","title":"Create IBM Container Registry namespace (IBM Cloud only)"},{"location":"30-reference/process/install-cloud-pak/#connect-to-the-specified-private-image-registry","text":"If an image registry has been specified for the Cloud Pak using the image_registry_name property, the referenced image_registry entry is looked up in the configuration and the credentials are retrieved from the vault. 
Then the connection to the registry is tested by logging on.","title":"Connect to the specified private image registry"},{"location":"30-reference/process/install-cloud-pak/#install-cloud-pak-for-data-and-cartridges","text":"","title":"Install Cloud Pak for Data and cartridges"},{"location":"30-reference/process/install-cloud-pak/#prepare-openshift-cluster-for-cloud-pak-installation","text":"Cloud Pak for Data requires a number of cluster-wide settings: Create an ImageContentSourcePolicy if images must be pulled from a private registry Set the global pull secret with the credentials to pull images from the entitled or private image registry Create a Tuned object to set kernel semaphores and other properties of CoreOS containers being spun up Allow unsafe system controls in the Kubelet configuration Set PIDs limit and default ulimit for the CRI-O configuration For all OpenShift clusters, except ROKS on IBM Cloud, these settings are applied using OpenShift configuration objects and then picked up by the Machine Config Operator. This operator will then apply the settings to the control plane and compute nodes as appropriate and reload them one by one. To avoid having to reload the nodes more than once, the Machine Config Operator is paused before the settings are applied. After all settings have been applied, the Machine Config Operator is released and the deployment process will then wait until all nodes are ready with the configuration applied.","title":"Prepare OpenShift cluster for Cloud Pak installation"},{"location":"30-reference/process/install-cloud-pak/#prepare-openshift-cluster-on-ibm-cloud-and-ibm-cloud-satellite","text":"As mentioned before, ROKS on IBM Cloud does not include the Machine Config Operator and would normally require the compute nodes to be reloaded (classic ROKS) or replaced (ROKS on VPC) to make the changes effective. 
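One of the cluster-wide settings listed above, the ImageContentSourcePolicy, looks roughly like the sketch below (the mirror host and object name are placeholders; cp.icr.io/cp is the IBM entitled registry path):

```yaml
apiVersion: operator.openshift.io/v1alpha1
kind: ImageContentSourcePolicy
metadata:
  name: cloud-pak-mirror                 # placeholder name
spec:
  repositoryDigestMirrors:
  - mirrors:
    - registry.example.com/cp            # placeholder private registry
    source: cp.icr.io/cp                 # IBM entitled registry path
```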
While implementing this process, we have experienced intermittent reliability issues where replacement of nodes never finished or the cluster ended up in an unusable state. To avoid this, the process applies the settings in a different manner. On every node, a cron job is created which runs every 5 minutes. It runs a script that checks if any of the cluster-wide settings must be (re-)applied, then updates the local system and restarts the crio and kubelet daemons. If no settings are to be adjusted, the daemons will not be restarted and therefore the cron job has minimal or no effect on the running applications. Compute node changes that are made by the cron job: ImageContentSourcePolicy : File /etc/containers/registries.conf is updated to include registry mirrors for the private registry. Kubelet : File /etc/kubernetes/kubelet.conf is appended with the allowedUnsafeSysctls entries. CRI-O : pids_limit and default_ulimit changes are made to the /etc/crio/crio.conf file. Pull secret : The registry and credentials are appended to the /.docker/config.json configuration. There are scenarios, especially on IBM Cloud Satellite, where custom changes must be applied to the compute nodes. This is possible by adding the apply-custom-node-settings.sh script to the assets directory within the CONFIG_DIR directory. Once Kubelet, CRI-O and other changes have been applied, this script (if present) is run to apply any additional configuration changes to the compute node. By setting the NODE_UPDATED script variable to 1, you can tell the deployer to restart the crio and kubelet daemons. WARNING: Never set the NODE_UPDATED script variable to 0, as this will cause previous changes to the pull secret, ImageContentSourcePolicy and others not to become effective. WARNING: Do not end the script with the exit command; this will stop the calling script from running and therefore not restart the daemons. 
Sample script: #!/bin/bash # # This is a sample script that will cause the crio and kubelet daemons to be restarted once by checking # file /tmp/apply-custom-node-settings-run. If the file doesn't exist, it creates it and sets NODE_UPDATED to 1. # The deployer will observe that the node has been updated and restart the daemons. # if [ ! -e /tmp/apply-custom-node-settings-run ] ; then touch /tmp/apply-custom-node-settings-run NODE_UPDATED=1 fi","title":"Prepare OpenShift cluster on IBM Cloud and IBM Cloud Satellite"},{"location":"30-reference/process/install-cloud-pak/#mirror-images-to-the-private-registry","text":"If a private image registry is specified, and if the IBM Cloud Pak entitlement key is available in the vault ( cp_entitlement_key secret), the Cloud Pak case files for the Foundational Services, the Cloud Pak control plane and cartridges are downloaded to a subdirectory of the specified status directory. Then all images defined for the cartridges are mirrored from the entitled registry to the private image registry. Depending on network speed and how many cartridges have been configured, the mirroring can take a very long time (12+ hours). All images which have already been mirrored to the private registry are skipped by the mirroring process. Even if all images have been mirrored, checking their existence and digest can still take some time (10-15 minutes). To avoid this, you can remove the cp_entitlement_key secret from the vault and unset the CP_ENTITLEMENT_KEY environment variable before running the Cloud Pak Deployer.","title":"Mirror images to the private registry"},{"location":"30-reference/process/install-cloud-pak/#create-catalog-sources","text":"The images of the operators which control the Cloud Pak are defined in OpenShift CatalogSource objects which reside in the openshift-marketplace project. Operator subscriptions subsequently reference the catalog source and define the update channel. 
When images are pulled from the entitled registry, most subscriptions reference the same ibm-operator-catalog catalog source (and also a Db2U catalog source). If images are pulled from a private registry, the control plane and each cartridge reference their own catalog source in the openshift-marketplace project. This step creates the necessary catalog sources, depending on whether the entitled registry or a private registry is used. For the entitled registry, it creates the catalog source directly using a YAML template; when using a private registry, the cloudctl case command is used for the control plane and every cartridge to install the catalog sources and their dependencies.","title":"Create catalog sources"},{"location":"30-reference/process/install-cloud-pak/#get-openshift-storage-classes","text":"Most custom resources defined by the cartridge operators require some back-end storage. To reference the correct OpenShift storage classes, they are retrieved based on the openshift_storage_name property of the Cloud Pak object.","title":"Get OpenShift storage classes"},{"location":"30-reference/process/install-cloud-pak/#prepare-the-cloud-pak-for-data-operator","text":"When using express install, the Cloud Pak for Data operator also installs the Cloud Pak Foundational Services. 
In sequence, this part of the deployer: Creates the operator project if it doesn't exist already Creates an OperatorGroup Installs the license service and certificate manager Creates the platform operator subscription Waits until the ClusterServiceVersion objects for the platform operator and Operand Deployment Lifecycle Manager have been created","title":"Prepare the Cloud Pak for Data operator"},{"location":"30-reference/process/install-cloud-pak/#install-the-cloud-pak-for-data-control-plane","text":"When the Cloud Pak for Data operator has been installed, the process continues by creating an OperandRequest object for the platform operator, which manages the project in which the Cloud Pak for Data instance is installed. Then it creates an Ibmcpd custom resource in the project, which installs the control plane with nginx, the metastore, etc. The Cloud Pak for Data control plane is a pre-requisite for all cartridges, so at this stage the deployer waits until the Ibmcpd status reaches the Completed state. Once the control plane has been installed successfully, the deployer generates a new strong 25-character password for the Cloud Pak for Data admin user and stores this in the vault. Additionally, the admin-user-details secret in the OpenShift project is updated with the new password.","title":"Install the Cloud Pak for Data control plane"},{"location":"30-reference/process/install-cloud-pak/#install-the-specified-cloud-pak-for-data-cartridges","text":"Now that the control plane has been installed in the specified OpenShift project, cartridges can be installed. Every cartridge is controlled by its own operator subscription in the operators project and a custom resource. 
The deployer iterates twice over the specified cartridges: first to create the operator subscriptions, then to create the custom resources.","title":"Install the specified Cloud Pak for Data cartridges"},{"location":"30-reference/process/install-cloud-pak/#create-cartridge-operator-subscriptions","text":"This step creates subscription objects for each cartridge in the operators project, using a YAML template that is included in the deployer code and the subscription_channel specified in the cartridge definition. Keeping the subscription channel separate delivers flexibility when new subscription channels become available over time. Once the subscription has been created, the deployer waits for the associated CSV(s) to be created and reach the Installed state.","title":"Create cartridge operator subscriptions"},{"location":"30-reference/process/install-cloud-pak/#delete-obsolete-cartridges","text":"If this is not the first installation, earlier configured cartridges may have been removed. This step iterates over all supported cartridges and checks whether the cartridge has been installed and whether it still exists in the configuration of the current cp4d object. If the cartridge is no longer defined, its custom resource is removed; the operator will then take care of removing all OpenShift configuration.","title":"Delete obsolete cartridges"},{"location":"30-reference/process/install-cloud-pak/#install-the-cartridges","text":"This step creates the Custom Resources for each cartridge. This is the actual installation of the cartridge. Cartridges can be installed in parallel to a certain extent, and each operator will wait for its dependencies to be installed before starting its processes. For example, if Watson Studio and Watson Machine Learning are installed, both have a dependency on the Common Core Services (CCS) and will wait for the CCS object to reach the Completed state before proceeding with the install. 
Once that is the case, both WS and WML will run the installation process in parallel.","title":"Install the cartridges"},{"location":"30-reference/process/install-cloud-pak/#wait-until-all-cartridges-are-ready","text":"Installation of the cartridges can take a very long time; up to 5 hours for Watson Knowledge Catalog. While cartridges are being installed, the deployer checks the states of all cartridges on a regular basis and reports these in a log file. The deployer will retry until all specified cartridges have reached the Completed state.","title":"Wait until all cartridges are ready"},{"location":"30-reference/process/install-cloud-pak/#configure-ldap-authentication-for-cloud-pak-for-data","text":"If LDAP has been configured for the Cloud Pak for Data element, it will be configured after all cartridges have finished installing.","title":"Configure LDAP authentication for Cloud Pak for Data"},{"location":"30-reference/process/overview/","text":"Deployment process overview \ud83d\udd17 When running the Cloud Pak Deployer ( cp-deploy env apply ), a series of pre-defined stages are followed to arrive at the desired end-state. 10 - Validation \ud83d\udd17 In this stage, the following activities are executed: Is the specified cloud platform in the inventory file supported? Are the mandatory variables defined? Can the deployer connect to the specified vault? 
20 - Prepare \ud83d\udd17 In this stage, the following activities are executed: Read the configuration files from the config directory Replace variable placeholders in the configuration with the extra parameters passed to the cp-deploy command Expand the configuration with defaults from the defaults directory Run the \"linter\" to check the object attributes in the configuration and their relations Generate the Terraform scripts to provision the infrastructure (IBM Cloud only) Download all CLIs needed for the selected cloud platform and Cloud Pak(s), if not air-gapped 30 - Provision infra \ud83d\udd17 In this stage, the following activities are executed: Run Terraform to create or change the infrastructure components for IBM Cloud Run the OpenShift installer-provisioned infrastructure (IPI) installer for AWS (ROSA), Azure (ARO) or vSphere 40 - Configure infra \ud83d\udd17 In this stage, the following activities are executed: Configure the VPC bastion and NFS server(s) for IBM Cloud Configure the OpenShift storage classes or validate the existing storage classes if an existing OpenShift cluster is used Configure OpenShift logging 50 - Install Cloud Pak \ud83d\udd17 In this stage, the following activities are executed: Create the IBM Container Registry namespace for IBM Cloud Connect to the specified image registry and create ImageContentSourcePolicy Prepare OpenShift cluster for Cloud Pak for Data installation Mirror images to the private registry Install Cloud Pak for Data control plane Configure Foundational Services license service Install specified Cloud Pak for Data cartridges 60 - Configure Cloud Pak \ud83d\udd17 In this stage, the following activities are executed: Add OpenShift signed certificate to Cloud Pak for Data web server when on IBM Cloud Configure LDAP for Cloud Pak for Data Configure SAML authentication for Cloud Pak for Data Configure auditing for Cloud Pak for Data Configure instance for the cartridges (Analytics engine, Db2, Cognos 
Analytics, Data Virtualization, \u2026) Configure instance authorization using the LDAP group mapping 70 - Deploy Assets \ud83d\udd17 Configure Cloud Pak for Data monitors Install Cloud Pak for Data assets 80 - Smoke Tests \ud83d\udd17 In this stage, the following activities are executed: Show the Cloud Pak for Data URL and admin password","title":"Overview"},{"location":"30-reference/process/overview/#deployment-process-overview","text":"When running the Cloud Pak Deployer ( cp-deploy env apply ), a series of pre-defined stages are followed to arrive at the desired end-state.","title":"Deployment process overview"},{"location":"30-reference/process/overview/#10---validation","text":"In this stage, the following activities are executed: Is the specified cloud platform in the inventory file supported? Are the mandatory variables defined? Can the deployer connect to the specified vault?","title":"10 - Validation"},{"location":"30-reference/process/overview/#20---prepare","text":"In this stage, the following activities are executed: Read the configuration files from the config directory Replace variable placeholders in the configuration with the extra parameters passed to the cp-deploy command Expand the configuration with defaults from the defaults directory Run the \"linter\" to check the object attributes in the configuration and their relations Generate the Terraform scripts to provision the infrastructure (IBM Cloud only) Download all CLIs needed for the selected cloud platform and cloud pak(s), if not air-gapped","title":"20 - Prepare"},{"location":"30-reference/process/overview/#30---provision-infra","text":"In this stage, the following activities are executed: Run Terraform to create or change the infrastructure components for IBM cloud Run the OpenShift installer-provisioned infrastructure (IPI) installer for AWS (ROSA), Azure (ARO) or vSphere","title":"30 - Provision infra"},{"location":"30-reference/process/overview/#40---configure-infra","text":"In this 
stage, the following activities are executed: Configure the VPC bastion and NFS server(s) for IBM Cloud Configure the OpenShift storage classes or validate the existing storage classes if an existing OpenShift cluster is used Configure OpenShift logging","title":"40 - Configure infra"},{"location":"30-reference/process/overview/#50---install-cloud-pak","text":"In this stage, the following activities are executed: Create the IBM Container Registry namespace for IBM Cloud Connect to the specified image registry and create ImageContentSourcePolicy Prepare OpenShift cluster for Cloud Pak for Data installation Mirror images to the private registry Install Cloud Pak for Data control plane Configure Foundational Services license service Install specified Cloud Pak for Data cartridges","title":"50 - Install Cloud Pak"},{"location":"30-reference/process/overview/#60---configure-cloud-pak","text":"In this stage, the following activities are executed: Add OpenShift signed certificate to Cloud Pak for Data web server when on IBM Cloud Configure LDAP for Cloud Pak for Data Configure SAML authentication for Cloud Pak for Data Configure auditing for Cloud Pak for Data Configure instance for the cartridges (Analytics engine, Db2, Cognos Analytics, Data Virtualization, \u2026) Configure instance authorization using the LDAP group mapping","title":"60 - Configure Cloud Pak"},{"location":"30-reference/process/overview/#70---deploy-assets","text":"Configure Cloud Pak for Data monitors Install Cloud Pak for Data assets","title":"70 - Deploy Assets"},{"location":"30-reference/process/overview/#80---smoke-tests","text":"In this stage, the following activities are executed: Show the Cloud Pak for Data URL and admin password","title":"80 - Smoke Tests"},{"location":"30-reference/process/prepare/","text":"Prepare the deployer \ud83d\udd17 This stage mainly takes care of checking the configuration and expanding it where necessary so it can be used by subsequent stages. 
Additionally, the preparation also calls the roles that will generate Terraform or other configuration files which are needed for provisioning and configuration. Generator \ud83d\udd17 All yaml files in the config directory of the specified CONFIG_DIR are processed and a composite JSON object, all_config is created, which contains all configuration. While processing the objects defined in the config directory files, the defaults directory is also processed to determine if any supplemental \"default\" variables must be added to the configuration objects. This makes it easy, for example, to ensure VSIs always use the correct Red Hat Enterprise Linux image available on IBM Cloud. You will find the generator roles under the automation-generators directory. There are cloud-provider dependent roles such as openshift which have a structure dependent on the chosen cloud provider and there are generic roles such as cp4d which are not dependent on the cloud provider. To find the appropriate role for the object, the generator first checks if the role is found under the specified cloud provider directory. If not found, it will call the role under generic . Linting \ud83d\udd17 Each of the objects has a syntax-checking module called preprocessor.py . This Python program checks the attributes of the object in question and can also add defaults for properties which are missing. All errors found are collected and displayed at the end of the generator.","title":"Prepare"},{"location":"30-reference/process/prepare/#prepare-the-deployer","text":"This stage mainly takes care of checking the configuration and expanding it where necessary so it can be used by subsequent stages. 
Additionally, the preparation also calls the roles that will generate Terraform or other configuration files which are needed for provisioning and configuration.","title":"Prepare the deployer"},{"location":"30-reference/process/prepare/#generator","text":"All yaml files in the config directory of the specified CONFIG_DIR are processed and a composite JSON object, all_config is created, which contains all configuration. While processing the objects defined in the config directory files, the defaults directory is also processed to determine if any supplemental \"default\" variables must be added to the configuration objects. This makes it easy, for example, to ensure VSIs always use the correct Red Hat Enterprise Linux image available on IBM Cloud. You will find the generator roles under the automation-generators directory. There are cloud-provider dependent roles such as openshift which have a structure dependent on the chosen cloud provider and there are generic roles such as cp4d which are not dependent on the cloud provider. To find the appropriate role for the object, the generator first checks if the role is found under the specified cloud provider directory. If not found, it will call the role under generic .","title":"Generator"},{"location":"30-reference/process/prepare/#linting","text":"Each of the objects has a syntax-checking module called preprocessor.py . This Python program checks the attributes of the object in question and can also add defaults for properties which are missing. All errors found are collected and displayed at the end of the generator.","title":"Linting"},{"location":"30-reference/process/provision-infra/","text":"Provision infrastructure \ud83d\udd17 This stage will provision the infrastructure that was defined in the input configuration files. Currently, this has only been implemented for IBM Cloud. 
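The default-expansion step described for the generator can be illustrated with a toy sketch; the keys and values below are invented for the example (requires bash 4+ for associative arrays):

```shell
#!/bin/bash
# Toy illustration of default expansion: attributes missing from a
# configuration object are supplemented from the defaults directory.
declare -A config=(   [name]="sample-vsi" )
declare -A defaults=( [image]="rhel-8" [profile]="bx2-4x16" )

for key in "${!defaults[@]}"; do
  if [ -z "${config[$key]}" ]; then
    config[$key]="${defaults[$key]}"    # supplement only missing attributes
  fi
done
echo "name=${config[name]} image=${config[image]} profile=${config[profile]}"
```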
IBM Cloud \ud83d\udd17 The IBM Cloud infrastructure provisioning runs Terraform to initially provision the infrastructure components such as VPC, VSIs, security groups, ROKS cluster and others. Also, if changes have been made in the configuration, Terraform will attempt to make the changes to reach the desired end-state. Based on the chosen action (apply or destroy), Terraform is instructed to provision or change the infrastructure components or to destroy everything. The Terraform state file (tfstate) is maintained in the vault and is critical to enable dynamic updates to the infrastructure. If the state file is lost or corrupted, updates to the infrastructure will have to be done manually. The Ansible tasks have been built in a way that the Terraform state file is always persisted into the vault, even if the apply or destroy process has failed. There are 3 main steps: Terraform init \ud83d\udd17 This step initializes the Terraform provider (ibm) with the correct version. If needed, the Terraform modules for the provider are downloaded or updated. Terraform plan \ud83d\udd17 Applying changes to the infrastructure using Terraform based on the input configuration files may cause critical components to be replaced (destroyed and recreated). The plan step checks what will be changed. If infrastructure components are destroyed and the --confirm-destroy parameter has not been specified for the deployer, the process is aborted. Terraform apply or Terraform destroy \ud83d\udd17 This is the execution of the plan and will provision new infrastructure (apply) or destroy everything (destroy). While the Terraform apply or destroy process is running, a .tfstate file is updated on disk. 
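The plan-step guard described above can be sketched as follows; the inspection of the real terraform plan output is stubbed out, and the flag handling is illustrative rather than the deployer's actual implementation:

```shell
#!/bin/bash
# Minimal sketch of the destroy guard: a plan that would destroy resources
# is only executed when --confirm-destroy was specified.
confirm_destroy=false          # would be true if --confirm-destroy was passed
plan_destroys_resources=true   # stand-in for parsing 'terraform plan' output

if [ "$plan_destroys_resources" = "true" ] && [ "$confirm_destroy" = "false" ]; then
  decision="abort"
  echo "Aborting: plan would destroy infrastructure; re-run with --confirm-destroy"
else
  decision="apply"
  echo "Proceeding with terraform apply"
fi
```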
When the command completes, the deployer writes this as a secret to the vault so it can be used next time to update (or destroy) the infrastructure components.","title":"Provision infra"},{"location":"30-reference/process/provision-infra/#provision-infrastructure","text":"This stage will provision the infrastructure that was defined in the input configuration files. Currently, this has only been implemented for IBM Cloud.","title":"Provision infrastructure"},{"location":"30-reference/process/provision-infra/#ibm-cloud","text":"The IBM Cloud infrastructure provisioning runs Terraform to initially provision the infrastructure components such as VPC, VSIs, security groups, ROKS cluster and others. Also, if changes have been made in the configuration, Terraform will attempt to make the changes to reach the desired end-state. Based on the chosen action (apply or destroy), Terraform is instructed to provision or change the infrastructure components or to destroy everything. The Terraform state file (tfstate) is maintained in the vault and is critical to enable dynamic updates to the infrastructure. If the state file is lost or corrupted, updates to the infrastructure will have to be done manually. The Ansible tasks have been built in a way that the Terraform state file is always persisted into the vault, even if the apply or destroy process has failed. There are 3 main steps:","title":"IBM Cloud"},{"location":"30-reference/process/provision-infra/#terraform-init","text":"This step initializes the Terraform provider (ibm) with the correct version. If needed, the Terraform modules for the provider are downloaded or updated.","title":"Terraform init"},{"location":"30-reference/process/provision-infra/#terraform-plan","text":"Applying changes to the infrastructure using Terraform based on the input configuration files may cause critical components to be replaced (destroyed and recreated). The plan step checks what will be changed. 
If infrastructure components are destroyed and the --confirm-destroy parameter has not been specified for the deployer, the process is aborted.","title":"Terraform plan"},{"location":"30-reference/process/provision-infra/#terraform-apply-or-terraform-destroy","text":"This is the execution of the plan and will provision new infrastructure (apply) or destroy everything (destroy). While the Terraform apply or destroy process is running, a .tfstate file is updated on disk. When the command completes, the deployer writes this as a secret to the vault so it can be used next time to update (or destroy) the infrastructure components.","title":"Terraform apply or Terraform destroy"},{"location":"30-reference/process/smoke-tests/","text":"Smoke tests \ud83d\udd17 This is the final stage before returning control to the process that started the deployer. Here, tests check that the Cloud Pak and its cartridges have been deployed correctly and that everything is running as expected. The method for smoke tests should be dynamic, for example by referencing a Git repository and context (directory within the repository); the code within that directory then deploys the asset(s). Cloud Pak for Data smoke tests \ud83d\udd17 Show the Cloud Pak for Data URL and admin password \ud83d\udd17 This \"smoke test\" finds the route of the Cloud Pak for Data instance(s) and retrieves the admin password from the vault which is then displayed. Example: ['CP4D URL: https://cpd-cpd.fke09-10-a939e0e6a37f1ce85dbfddbb7ab97418-0000.eu-gb.containers.appdomain.cloud', 'CP4D admin password: ITnotgXcMTcGliiPvVLwApmsV'] With this information you can go to the Cloud Pak for Data URL and log in using the admin user.","title":"Smoke tests"},{"location":"30-reference/process/smoke-tests/#smoke-tests","text":"This is the final stage before returning control to the process that started the deployer. 
Here, tests check that the Cloud Pak and its cartridges have been deployed correctly and that everything is running as expected. The method for smoke tests should be dynamic, for example by referencing a Git repository and context (directory within the repository); the code within that directory then deploys the asset(s).","title":"Smoke tests"},{"location":"30-reference/process/smoke-tests/#cloud-pak-for-data-smoke-tests","text":"","title":"Cloud Pak for Data smoke tests"},{"location":"30-reference/process/smoke-tests/#show-the-cloud-pak-for-data-url-and-admin-password","text":"This \"smoke test\" finds the route of the Cloud Pak for Data instance(s) and retrieves the admin password from the vault which is then displayed. Example: ['CP4D URL: https://cpd-cpd.fke09-10-a939e0e6a37f1ce85dbfddbb7ab97418-0000.eu-gb.containers.appdomain.cloud', 'CP4D admin password: ITnotgXcMTcGliiPvVLwApmsV'] With this information you can go to the Cloud Pak for Data URL and log in using the admin user.","title":"Show the Cloud Pak for Data URL and admin password"},{"location":"30-reference/process/validate/","text":"10 - Validation - Validate the configuration \ud83d\udd17 In this stage, the following activities are executed: Is the specified cloud platform in the inventory file supported? Are the mandatory variables defined? 
Can the deployer connect to the specified vault?","title":"10 - Validation - Validate the configuration"},{"location":"30-reference/process/cp4d-cartridges/cognos-authorization/","text":"Automated Cognos Authorization using LDAP groups \ud83d\udd17 Description \ud83d\udd17 The automated Cognos authorization capability uses LDAP groups to assign users to a Cognos Analytics Role, which allows these users to log in to IBM Cloud Pak for Data and access the Cognos Analytics instance. This capability will perform the following tasks: - Create a User Group and assign the associated LDAP Group(s) and Cloud Pak for Data role(s) - For each member of the LDAP Group(s) part of the User Group, create the user as a Cloud Pak for Data User and assign the Cloud Pak for Data role(s) - For each member of the LDAP Group(s) part of the User Group, assign membership to the Cognos Analytics instance and authorize for the Cognos Analytics Role If the User Group is already present, validate all LDAP Group(s) are associated with the User Group. Add the LDAP Group(s) not yet associated with the User Group. Existing LDAP groups will not be removed from the User Group If a User is already present in Cloud Pak for Data, it will not be updated. If a user is already associated with the Cognos Analytics instance, keep its original membership and do not update the membership Pre-requisites \ud83d\udd17 Prior to running the script, ensure: - LDAP configuration in IBM Cloud Pak for Data is completed and validated - Cognos Analytics instance is provisioned and running in IBM Cloud Pak for Data - The role(s) that will be associated with the User Group are present in IBM Cloud Pak for Data Usage of the Script \ud83d\udd17 The script is available in automation-roles/50-install-cloud-pak/cp4d-service/files/assign_CA_authorization.sh . Run the script without arguments to show its usage help. 
# ./assign_CA_authorization.sh Usage: assign_CA_authorization.sh The URL to the IBM Cloud Pak for Data instance The login user to IBM Cloud Pak for Data, e.g. the admin user The login password to IBM Cloud Pak for Data The Cloud Pak for Data User Group Name The Cloud Pak for Data User Group Description The Cloud Pak for Data roles associated to the User Group. Use a ; separated list to assign multiple roles The LDAP Groups associated to the User Group. Use a ; separated list to assign LDAP groups The Cognos Analytics Role each member of the User Group will be associated with, which must be one of: Analytics Administrators Analytics Explorers Analytics Users Analytics Viewer Running the script \ud83d\udd17 Using the command example provided by the ./assign_CA_authorization.sh command, run the script with its arguments # ./assign_CA_authorization.sh \\ https://...... \\ admin \\ ******** \\ \"Cognos User Group\" \\ \"Cognos User Group Description\" \\ \"wkc_data_scientist_role;zen_administrator_role\" \\ \"cn=ca_group,ou=groups,dc=ibm,dc=com\" \\ \"Analytics Viewer\" The script execution will run through the following tasks: Validation Confirm all required arguments are provided. Confirm at least 1 User Group Role assignment is provided. Confirm at least 1 LDAP Group is provided. Login to Cloud Pak for Data and generate a Bearer token Using the provided IBM Cloud Pak for Data URL, username and password, log in to Cloud Pak for Data and generate the Bearer token used for subsequent commands. Exit with an error if the login to IBM Cloud Pak for Data fails. Confirm the provided User Group role(s) are present in Cloud Pak for Data Acquire all Cloud Pak for Data roles and confirm the provided User Group role(s) are one of the existing Cloud Pak for Data roles. Exit with an error if a role is provided which is not currently present in IBM Cloud Pak for Data. 
Confirm the provided Cognos Analytics role is valid Ensure the provided Cognos Analytics role is one of the available Cognos Analytics roles. Exit with an error if a Cognos Analytics role is provided that does not match with the available Cognos Analytics roles. Confirm LDAP is configured in IBM Cloud Pak for Data Ensures the LDAP configuration is completed. Exit with an error if there is no current LDAP configuration. Confirm the provided LDAP groups are present in the LDAP User Registry Using IBM Cloud Pak for Data, query whether the provided LDAP groups are present in the LDAP User registry. Exit with an error if an LDAP Group is not available. Confirm if the IBM Cloud Pak for Data User Group exists Queries the IBM Cloud Pak for Data User Groups. If the provided User Group exists, acquire the Group ID. If the IBM Cloud Pak for Data User Group does not exist, create it If the User Group does not exist, create it, and assign the IBM Cloud Pak for Data Roles and LDAP Groups to the new User Group If the IBM Cloud Pak for Data User Group does exist, validate the associated LDAP Groups If the User Group already exists, confirm all provided LDAP groups are associated with the User Group. Add LDAP groups that are not yet associated. Get the Cognos Analytics instance ID Queries the IBM Cloud Pak for Data service instances and acquires the Cognos Analytics instance ID. Exit with an error if no Cognos Analytics instance is available Ensure each user member of the IBM Cloud Pak for Data User Group is an existing user For each user that is a member of the provided LDAP groups, ensure this member is an IBM Cloud Pak for Data User. Create a new user with the provided User Group role(s) if the user is not yet available. Any existing User(s) will not be updated. If Users are removed from an LDAP Group, these users will not be removed from Cloud Pak for Data. 
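The idempotent user creation just described can be sketched in shell. This is a hypothetical illustration of the assumed logic, not the script's actual code; `sync_users` and all user names are made up:

```shell
# Hypothetical sketch: LDAP group members become Cloud Pak for Data users
# only when missing; existing users are never modified.
sync_users() {               # $1 = existing CP4D users, $2 = LDAP group members
  for member in $2; do
    case " $1 " in
      *" $member "*) echo "skip $member (already a Cloud Pak for Data user)" ;;
      *)             echo "create $member with the configured role(s)" ;;
    esac
  done
}

sync_users "alice bob" "alice carol"
```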
Ensure each user member of the IBM Cloud Pak for Data User Group is associated with the Cognos Analytics instance For each user that is a member of the provided LDAP groups, ensure this member is associated with the Cognos Analytics instance with the provided Cognos Analytics role. Any user that is already associated with the Cognos Analytics instance will have its Cognos Analytics role updated to the provided Cognos Analytics Role","title":"Automated Cognos Authorization using LDAP groups"},{"location":"30-reference/process/cp4d-cartridges/cognos-authorization/#automated-cognos-authorization-using-ldap-groups","text":"","title":"Automated Cognos Authorization using LDAP groups"},{"location":"30-reference/process/cp4d-cartridges/cognos-authorization/#description","text":"The automated Cognos authorization capability uses LDAP groups to assign users to a Cognos Analytics Role, which allows these users to log in to IBM Cloud Pak for Data and access the Cognos Analytics instance. This capability will perform the following tasks: - Create a User Group and assign the associated LDAP Group(s) and Cloud Pak for Data role(s) - For each member of the LDAP Group(s) part of the User Group, create the user as a Cloud Pak for Data User and assign the Cloud Pak for Data role(s) - For each member of the LDAP Group(s) part of the User Group, assign membership to the Cognos Analytics instance and authorize for the Cognos Analytics Role If the User Group is already present, validate all LDAP Group(s) are associated with the User Group. Add the LDAP Group(s) not yet associated with the User Group. Existing LDAP groups will not be removed from the User Group If a User is already present in Cloud Pak for Data, it will not be updated. 
If a user is already associated with the Cognos Analytics instance, keep its original membership and do not update the membership","title":"Description"},{"location":"30-reference/process/cp4d-cartridges/cognos-authorization/#pre-requisites","text":"Prior to running the script, ensure: - LDAP configuration in IBM Cloud Pak for Data is completed and validated - Cognos Analytics instance is provisioned and running in IBM Cloud Pak for Data - The role(s) that will be associated with the User Group are present in IBM Cloud Pak for Data","title":"Pre-requisites"},{"location":"30-reference/process/cp4d-cartridges/cognos-authorization/#usage-of-the-script","text":"The script is available in automation-roles/50-install-cloud-pak/cp4d-service/files/assign_CA_authorization.sh . Run the script without arguments to show its usage help. # ./assign_CA_authorization.sh Usage: assign_CA_authorization.sh The URL to the IBM Cloud Pak for Data instance The login user to IBM Cloud Pak for Data, e.g. the admin user The login password to IBM Cloud Pak for Data The Cloud Pak for Data User Group Name The Cloud Pak for Data User Group Description The Cloud Pak for Data roles associated to the User Group. Use a ; separated list to assign multiple roles The LDAP Groups associated to the User Group. Use a ; separated list to assign LDAP groups The Cognos Analytics Role each member of the User Group will be associated with, which must be one of: Analytics Administrators Analytics Explorers Analytics Users Analytics Viewer","title":"Usage of the Script"},{"location":"30-reference/process/cp4d-cartridges/cognos-authorization/#running-the-script","text":"Using the command example provided by the ./assign_CA_authorization.sh command, run the script with its arguments # ./assign_CA_authorization.sh \\ https://...... 
\\ admin \\ ******** \\ \"Cognos User Group\" \\ \"Cognos User Group Description\" \\ \"wkc_data_scientist_role;zen_administrator_role\" \\ \"cn=ca_group,ou=groups,dc=ibm,dc=com\" \\ \"Analytics Viewer\" The script execution will run through the following tasks: Validation Confirm all required arguments are provided. Confirm at least 1 User Group Role assignment is provided. Confirm at least 1 LDAP Group is provided. Login to Cloud Pak for Data and generate a Bearer token Using the provided IBM Cloud Pak for Data URL, username and password, log in to Cloud Pak for Data and generate the Bearer token used for subsequent commands. Exit with an error if the login to IBM Cloud Pak for Data fails. Confirm the provided User Group role(s) are present in Cloud Pak for Data Acquire all Cloud Pak for Data roles and confirm the provided User Group role(s) are one of the existing Cloud Pak for Data roles. Exit with an error if a role is provided which is not currently present in IBM Cloud Pak for Data. Confirm the provided Cognos Analytics role is valid Ensure the provided Cognos Analytics role is one of the available Cognos Analytics roles. Exit with an error if a Cognos Analytics role is provided that does not match with the available Cognos Analytics roles. Confirm LDAP is configured in IBM Cloud Pak for Data Ensures the LDAP configuration is completed. Exit with an error if there is no current LDAP configuration. Confirm the provided LDAP groups are present in the LDAP User Registry Using IBM Cloud Pak for Data, query whether the provided LDAP groups are present in the LDAP User registry. Exit with an error if an LDAP Group is not available. Confirm if the IBM Cloud Pak for Data User Group exists Queries the IBM Cloud Pak for Data User Groups. If the provided User Group exists, acquire the Group ID. 
If the IBM Cloud Pak for Data User Group does not exist, create it If the User Group does not exist, create it, and assign the IBM Cloud Pak for Data Roles and LDAP Groups to the new User Group If the IBM Cloud Pak for Data User Group does exist, validate the associated LDAP Groups If the User Group already exists, confirm all provided LDAP groups are associated with the User Group. Add LDAP groups that are not yet associated. Get the Cognos Analytics instance ID Queries the IBM Cloud Pak for Data service instances and acquires the Cognos Analytics instance ID. Exit with an error if no Cognos Analytics instance is available Ensure each user member of the IBM Cloud Pak for Data User Group is an existing user For each user that is a member of the provided LDAP groups, ensure this member is an IBM Cloud Pak for Data User. Create a new user with the provided User Group role(s) if the user is not yet available. Any existing User(s) will not be updated. If Users are removed from an LDAP Group, these users will not be removed from Cloud Pak for Data. Ensure each user member of the IBM Cloud Pak for Data User Group is associated with the Cognos Analytics instance For each user that is a member of the provided LDAP groups, ensure this member is associated with the Cognos Analytics instance with the provided Cognos Analytics role. Any user that is already associated with the Cognos Analytics instance will have its Cognos Analytics role updated to the provided Cognos Analytics Role","title":"Running the script"},{"location":"40-troubleshooting/cp4d-uninstall/","text":"Uninstall Cloud Pak for Data and Foundational Services \ud83d\udd17 For convenience, the Cloud Pak Deployer includes a script that removes the Cloud Pak for Data instance from the OpenShift cluster, then Cloud Pak Foundational Services and finally the catalog sources and CRDs. 
Steps: Make sure you are connected to the OpenShift cluster Run script ./scripts/cp4d/cp4d-delete-instance.sh You will have to confirm that you want to delete the instance and all other artifacts. Warning Please be very careful with this command. Ensure you are connected to the correct OpenShift cluster and that no other Cloud Paks use the operator namespace. The action cannot be undone.","title":"Cloud Pak for Data uninstall"},{"location":"40-troubleshooting/cp4d-uninstall/#uninstall-cloud-pak-for-data-and-foundational-services","text":"For convenience, the Cloud Pak Deployer includes a script that removes the Cloud Pak for Data instance from the OpenShift cluster, then Cloud Pak Foundational Services and finally the catalog sources and CRDs. Steps: Make sure you are connected to the OpenShift cluster Run script ./scripts/cp4d/cp4d-delete-instance.sh You will have to confirm that you want to delete the instance and all other artifacts. Warning Please be very careful with this command. Ensure you are connected to the correct OpenShift cluster and that no other Cloud Paks use the operator namespace. The action cannot be undone.","title":"Uninstall Cloud Pak for Data and Foundational Services"},{"location":"40-troubleshooting/ibm-cloud-access-nfs-server/","text":"Access NFS server provisioned on IBM Cloud \ud83d\udd17 When choosing the \"simple\" sample configuration for ROKS VPC on IBM Cloud, the deployer also provisions a Virtual Server Instance and installs a standard NFS server on it. In some cases you may want to get access to the NFS server for troubleshooting. For security reasons, the NFS server can only be reached via a bastion server that is connected to the internet, i.e. the bastion server is used as a jump host; this avoids exposing NFS volumes to the outside world and provides an extra layer of protection. Additionally, password login is disabled on both the bastion and NFS servers and one must use the private SSH key to connect. 
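The jump-host access just described (which this page later builds with an ssh ProxyCommand) can equivalently be written with OpenSSH's -J (ProxyJump) shorthand, available since OpenSSH 7.3. The sketch below only assembles and prints the command rather than connecting; it assumes root is the login user on both hosts and uses the sample key path and addresses from this page:

```shell
# Sketch only: print the equivalent ProxyJump command instead of running it.
# root@ on the bastion is an assumption; the original ProxyCommand form uses
# the local username for the bastion hop.
SSH_FILE=~/.ssh/pluto-01-rsa
BASTION_IP=149.81.215.172
NFS_IP=10.227.0.138
echo "ssh -i $SSH_FILE -J root@$BASTION_IP root@$NFS_IP"
```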
Start the command line within the container \ud83d\udd17 Getting SSH access to the NFS server is easiest from within the deployer container as it has all tools installed to extract the IP addresses from the Terraform state file. Optional: Ensure that the environment variables for the configuration and status directories are set. If not specified, the directories are assumed to be $HOME/cpd-config and $HOME/cpd-status . export STATUS_DIR=$HOME/cpd-status export CONFIG_DIR=$HOME/cpd-config Start the deployer command line. ./cp-deploy.sh env command ------------------------------------------------------------------------------- Entering Cloud Pak Deployer command line in a container. Use the \"exit\" command to leave the container and return to the hosting server. ------------------------------------------------------------------------------- Installing OpenShift client Current OpenShift context: pluto-01 Obtain private SSH key \ud83d\udd17 Access to both the bastion and NFS servers is typically protected by the same SSH key, which is stored in the vault. To list all vault secrets, run the command below. cd /cloud-pak-deployer ./cp-deploy.sh vault list ./cp-deploy.sh vault list Starting Automation script... PLAY [Secrets] ***************************************************************** Secret list for group sample: - ibm_cp_entitlement_key - sample-terraform-tfstate - cp4d_admin_zen_40_fke34d - sample-all-config - pluto-01-provision-ssh-key - pluto-01-provision-ssh-pub-key PLAY RECAP ********************************************************************* localhost : ok=11 changed=0 unreachable=0 failed=0 skipped=21 rescued=0 ignored=0 Then, retrieve the private key (in the above example pluto-01-provision-ssh-key ) to an output file in your ~/.ssh directory, make sure it has the correct private key format (new line at the end) and permissions (600). 
SSH_FILE=~/.ssh/pluto-01-rsa mkdir -p ~/.ssh chmod 700 ~/.ssh ./cp-deploy.sh vault get -vs pluto-01-provision-ssh-key \\ -vsf $SSH_FILE echo -e \"\\n\" >> $SSH_FILE chmod 600 $SSH_FILE Find the IP addresses \ud83d\udd17 To connect to the NFS server, you need the public IP address of the bastion server and the private IP address of the NFS server. Obviously these can be retrieved from the IBM Cloud resource list ( https://cloud.ibm.com/resources ), but they are also kept in the Terraform \"tfstate\" file: ./cp-deploy.sh vault get -vs sample-terraform-tfstate \\ -vsf /tmp/sample-terraform-tfstate The below commands do not provide the prettiest output but you should be able to extract the IP addresses from them. For the bastion node public (floating) IP address: cat /tmp/sample-terraform-tfstate | jq -r '.resources[]' | grep -A 10 -E \"ibm_is_float\" \"type\": \"ibm_is_floating_ip\", \"name\": \"pluto_01_bastion\", \"provider\": \"provider[\\\"registry.terraform.io/ibm-cloud/ibm\\\"]\", \"instances\": [ { \"schema_version\": 0, \"attributes\": { \"address\": \"149.81.215.172\", ... \"name\": \"pluto-01-bastion\", For the NFS server: cat /tmp/sample-terraform-tfstate | jq -r '.resources[]' | grep -A 10 -E \"ibm_is_instance|primary_network_interface\" ... -- \"type\": \"ibm_is_instance\", \"name\": \"pluto_01_nfs\", \"provider\": \"provider[\\\"registry.terraform.io/ibm-cloud/ibm\\\"]\", \"instances\": [ ... -- \"primary_network_interface\": [ ... 
\"name\": \"pluto-01-nfs-nic\", \"port_speed\": 0, \"primary_ipv4_address\": \"10.227.0.138\", In the above examples, the IP addresses are: Bastion public IP address: 149.81.215.172 NFS server private IP address: 10.227.0.138 SSH to the NFS server \ud83d\udd17 Finally, to get command line access to the NFS server: BASTION_IP=149.81.215.172 NFS_IP=10.227.0.138 ssh -i $SSH_FILE \\ -o ProxyCommand=\"ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \\ -i $SSH_FILE -W %h:%p -q $BASTION_IP\" \\ root@$NFS_IP Stopping the session \ud83d\udd17 Once you've finished exploring the NFS server, you can exit from it: exit Finally, exit from the deployer container which is then terminated. exit","title":"Access NFS server on IBM Cloud"},{"location":"40-troubleshooting/ibm-cloud-access-nfs-server/#access-nfs-server-provisioned-on-ibm-cloud","text":"When choosing the \"simple\" sample configuration for ROKS VPC on IBM Cloud, the deployer also provisions a Virtual Server Instance and installs a standard NFS server on it. In some cases you may want to get access to the NFS server for troubleshooting. For security reasons, the NFS server can only be reached via a bastion server that is connected to the internet, i.e. use the bastion server as a jump host, this to avoid exposing NFS volumes to the outside world and provide an extra layer of protection. Additionally, password login is disabled on both the bastion and NFS servers and one must use the private SSH key to connect.","title":"Access NFS server provisioned on IBM Cloud"},{"location":"40-troubleshooting/ibm-cloud-access-nfs-server/#start-the-command-line-within-the-container","text":"Getting SSH access to the NFS server is easiest from within the deployer container as it has all tools installed to extract the IP addresses from the Terraform state file. Optional: Ensure that the environment variables for the configuration and status directories are set. 
If not specified, the directories are assumed to be $HOME/cpd-config and $HOME/cpd-status . export STATUS_DIR=$HOME/cpd-status export CONFIG_DIR=$HOME/cpd-config Start the deployer command line. ./cp-deploy.sh env command ------------------------------------------------------------------------------- Entering Cloud Pak Deployer command line in a container. Use the \"exit\" command to leave the container and return to the hosting server. ------------------------------------------------------------------------------- Installing OpenShift client Current OpenShift context: pluto-01","title":"Start the command line within the container"},{"location":"40-troubleshooting/ibm-cloud-access-nfs-server/#obtain-private-ssh-key","text":"Access to both the bastion and NFS servers is typically protected by the same SSH key, which is stored in the vault. To list all vault secrets, run the command below. cd /cloud-pak-deployer ./cp-deploy.sh vault list ./cp-deploy.sh vault list Starting Automation script... PLAY [Secrets] ***************************************************************** Secret list for group sample: - ibm_cp_entitlement_key - sample-terraform-tfstate - cp4d_admin_zen_40_fke34d - sample-all-config - pluto-01-provision-ssh-key - pluto-01-provision-ssh-pub-key PLAY RECAP ********************************************************************* localhost : ok=11 changed=0 unreachable=0 failed=0 skipped=21 rescued=0 ignored=0 Then, retrieve the private key (in the above example pluto-01-provision-ssh-key ) to an output file in your ~/.ssh directory, make sure it has the correct private key format (new line at the end) and permissions (600). 
SSH_FILE=~/.ssh/pluto-01-rsa mkdir -p ~/.ssh chmod 700 ~/.ssh ./cp-deploy.sh vault get -vs pluto-01-provision-ssh-key \\ -vsf $SSH_FILE echo -e \"\\n\" >> $SSH_FILE chmod 600 $SSH_FILE","title":"Obtain private SSH key"},{"location":"40-troubleshooting/ibm-cloud-access-nfs-server/#find-the-ip-addresses","text":"To connect to the NFS server, you need the public IP address of the bastion server and the private IP address of the NFS server. Obviously these can be retrieved from the IBM Cloud resource list ( https://cloud.ibm.com/resources ), but they are also kept in the Terraform \"tfstate\" file: ./cp-deploy.sh vault get -vs sample-terraform-tfstate \\ -vsf /tmp/sample-terraform-tfstate The below commands do not provide the prettiest output but you should be able to extract the IP addresses from them. For the bastion node public (floating) IP address: cat /tmp/sample-terraform-tfstate | jq -r '.resources[]' | grep -A 10 -E \"ibm_is_float\" \"type\": \"ibm_is_floating_ip\", \"name\": \"pluto_01_bastion\", \"provider\": \"provider[\\\"registry.terraform.io/ibm-cloud/ibm\\\"]\", \"instances\": [ { \"schema_version\": 0, \"attributes\": { \"address\": \"149.81.215.172\", ... \"name\": \"pluto-01-bastion\", For the NFS server: cat /tmp/sample-terraform-tfstate | jq -r '.resources[]' | grep -A 10 -E \"ibm_is_instance|primary_network_interface\" ... -- \"type\": \"ibm_is_instance\", \"name\": \"pluto_01_nfs\", \"provider\": \"provider[\\\"registry.terraform.io/ibm-cloud/ibm\\\"]\", \"instances\": [ ... -- \"primary_network_interface\": [ ... 
\"name\": \"pluto-01-nfs-nic\", \"port_speed\": 0, \"primary_ipv4_address\": \"10.227.0.138\", In the above examples, the IP addresses are: Bastion public IP address: 149.81.215.172 NFS server private IP address: 10.227.0.138","title":"Find the IP addresses"},{"location":"40-troubleshooting/ibm-cloud-access-nfs-server/#ssh-to-the-nfs-server","text":"Finally, to get command line access to the NFS server: BASTION_IP=149.81.215.172 NFS_IP=10.227.0.138 ssh -i $SSH_FILE \\ -o ProxyCommand=\"ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \\ -i $SSH_FILE -W %h:%p -q $BASTION_IP\" \\ root@$NFS_IP","title":"SSH to the NFS server"},{"location":"40-troubleshooting/ibm-cloud-access-nfs-server/#stopping-the-session","text":"Once you've finished exploring the NFS server, you can exit from it: exit Finally, exit from the deployer container which is then terminated. exit","title":"Stopping the session"},{"location":"50-advanced/advanced-configuration/","text":"Cloud Pak Deployer Advanced Configuration \ud83d\udd17 The Cloud Pak Deployer includes several samples which you can use to build your own configuration. You can find sample configuration yaml files in the sub-directories of the sample-configurations directory of the repository. Descriptions and topologies are also included in the sub-directories. Warning Do not make changes to the sample configurations in the cloud-pak-deployer directory, but rather copy it to your own home directory or somewhere else and then make changes. If you store your own configuration under the repository's clone, you may not be able to update (pull) the repository with changes applied on GitHub, or accidentally overwrite it. Warning The deployer expects to manage all objects referenced in the configuration files, including the referenced OpenShift cluster and Cloud Pak installation. If you have already pre-provisioned the OpenShift cluster, choose a configuration with existing-ocp cloud platform. 
If the Cloud Pak has already been installed, unexpected and undesired activities may happen. The deployer has not been designed to alter a pre-provisioned OpenShift cluster or existing Cloud Pak installation. Configuration steps - static sample configuration \ud83d\udd17 Copy the static sample configuration directory to your own directory: mkdir -p $HOME/cpd-config/config cp -r ./sample-configurations/roks-ocs-cp4d/config/* $HOME/cpd-config/config/ cd $HOME/cpd-config/config Edit the \"cp4d-....yaml\" file and select the cartridges to be installed by changing the state to installed . Additionally you can accept the Cloud Pak license in the config file by specifying accept_licenses: True . nano ./config/cp4d-450.yaml The configuration typically works without any configuration changes and will create all referenced objects, including the Virtual Private Cloud, subnets, SSH keys, ROKS cluster and OCS storage nodes. There is typically no need to change address prefixes and subnets. The IP addresses used by the provisioned components are private to the VPC and are not externally exposed. Configuration steps - dynamically choose OpenShift and Cloud Pak \ud83d\udd17 Copy the sample configuration directory to your own directory: mkdir -p $HOME/cpd-config/config Copy the relevant OpenShift configuration file from the samples-configuration directory to the config directory, for example: cp ./sample-configurations/sample-dynamic/config-samples/ocp-ibm-cloud-roks-ocs.yaml $HOME/cpd-config/config/ Copy the relevant \"cp4d-\u2026\" file from the samples-configuration directory to the config directory, for example: cp ./sample-configurations/sample-dynamic/config-samples/cp4d-462.yaml $HOME/cpd-config/config/ Edit the \"$HOME/cpd-config/config/cp4d-....yaml\" file and select the cartridges to be installed by changing the state to installed . Additionally you can accept the Cloud Pak license in the config file by specifying accept_licenses: True . 
nano $HOME/cpd-config/config/cp4d-462.yaml For more advanced configuration topics such as using a private registry, setting up transit gateways between VPCs, etc., go to the Advanced configuration section Directory structure \ud83d\udd17 Every configuration has a fixed directory structure, consisting of mandatory and optional subdirectories. Mandatory subdirectories: config : Keeps one or more yaml files with your OpenShift and Cloud Pak configuration Additionally, there are 3 optional subdirectories: defaults : Directory that keeps the defaults which will be merged with your configuration inventory : Keep global settings for the configuration such as environment name or other variables used in the configs assets : Keeps directories of assets which must be deployed onto the Cloud Pak config directory \ud83d\udd17 You can choose to keep only a single file per subdirectory or, for more complex configurations, you can create multiple yaml files. You can find a full list of all supported object types here: Configuration objects . The generator automatically merges all .yaml files in the config and defaults directory. Files with different extensions are ignored. In the sample configurations we split configuration of the OpenShift ocp-... and Cloud Pak cp4.-... objects. For example, your config directory could hold the following files: cp4d-463.yaml ocp-ibm-cloud-roks-ocs.yaml This will provision a ROKS cluster on IBM Cloud with OpenShift Data Foundation (fka OCS) and Cloud Pak for Data 4.6.3. defaults directory (optional) \ud83d\udd17 Holds the defaults for all object types. If a certain object property has not been specified in the config directory, it will be retrieved from the defaults directory using the flavour specified in the configured object. If no flavour has been selected, the default flavour will be chosen. You should not need this subdirectory in most circumstances. 
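As an illustration, a configuration directory using the sample file names mentioned above could look like this (the optional subdirectories are shown for completeness; only config is mandatory):

```
$HOME/cpd-config/
├── config/
│   ├── cp4d-463.yaml
│   └── ocp-ibm-cloud-roks-ocs.yaml
├── defaults/     (optional)
├── inventory/    (optional)
└── assets/       (optional)
```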
assets directory (optional) \ud83d\udd17 Optional directory holding the assets you wish to deploy for the Cloud Pak. More information about Cloud Pak for Data assets which can be deployed can be found in object definition cp4d_asset . The directory can be named differently as well, for example cp4d-assets or customer-churn-demo . inventory directory (optional) \ud83d\udd17 The Cloud Pak Deployer pipeline has been built using Ansible and it can be configured using \"inventory\" files. Inventory files allow you to specify global variables used throughout Ansible playbooks. In the current version of the Cloud Pak Deployer, the inventory directory has become fully optional as the global_config and vault objects have taken over its role. However, if there are certain global variables such as env_id you want to pass via an inventory file, you can also do this. Vault secrets \ud83d\udd17 User passwords, certificates and other \"secret\" information is kept in the vault, which can be either a flat file (not encrypted), HashiCorp Vault or the IBM Cloud Secrets Manager service. Some of the deployment configurations require that the vault is pre-populated with secrets which are needed during the deployment. For example, a vSphere deployment needs the vSphere user and password to authenticate to vSphere, and Cloud Pak for Data SAML configuration requires the IdP certificate. All samples default to the File Vault , meaning that the vault will be kept in the vault directory under the status directory you specify when you run the deployer. Detailed descriptions of the vault settings can be found in the sample inventory file and also here: vault settings . Optional: Ensure that the environment variables for the configuration and status directories are set. If not specified, the directories are assumed to be $HOME/cpd-config and $HOME/cpd-status . 
export STATUS_DIR=$HOME/cpd-status export CONFIG_DIR=$HOME/cpd-config Set vSphere user secret: ./cp-deploy.sh vault set \\ --vault-secret vsphere-user \\ --vault-secret-value super_user@vsphere.local Or, if you want to create the secret from an input file: ./cp-deploy.sh vault set \\ --vault-secret kubeconfig \\ --vault-secret-file ~/.kube/config Using a GitHub repository for the configuration \ud83d\udd17 If the configuration is kept in a GitHub repository, you can set environment variables to have the deployer pull the GitHub repository to the current server before starting the process. Set environment variables. export CPD_CONFIG_GIT_REPO=\"https://github.com/IBM/cloud-pak-deployer-config.git\" export CPD_CONFIG_GIT_REF=\"main\" export CPD_CONFIG_GIT_CONTEXT=\"\" CPD_CONFIG_GIT_REPO : The clone URL of the GitHub repository that holds the configuration. CPD_CONFIG_GIT_REF : The branch, tag or commit ID to be cloned. If not specified, the repository's default branch will be cloned. CPD_CONFIG_GIT_CONTEXT : The directory within the GitHub repository that holds the configuration. This directory must contain the config directory under which the YAML files are kept. Info When specifying a GitHub repository, the contents will be copied under $STATUS_DIR/cpd-config and this directory is then set as the configuration directory. Using dynamic variables (extra variables) \ud83d\udd17 In some situations you may want to use a single configuration for deployment in different environments, such as development, acceptance test and production. The Cloud Pak Deployer uses the Jinja2 templating engine which is included in Ansible to pre-process the configuration. This allows you to dynamically adjust the configuration based on extra variables you specify at the command line. 
Example: ./cp-deploy.sh env apply \\ -e ibm_cloud_region=eu-de \\ -e env_id=jupiter-03 [--accept-all-licenses] This passes the env_id and ibm_cloud_region variables to the Cloud Pak Deployer, which can then populate variables in the configuration. In the sample configurations, the env_id is used to specify the name of the VPC, ROKS cluster and others and overrides the value specified in the global_config definition. The ibm_cloud_region overrides the region specified in the inventory file. ... vpc: - name: \"{{ env_id }}\" allow_inbound: ['ssh'] address_prefix: ### Prefixes for the client environment - name: \"{{ env_id }}-zone-1\" vpc: \"{{ env_id }}\" zone: {{ ibm_cloud_region }}-1 cidr: 10.231.0.0/26 ... When running with the above cp-deploy.sh command, the snippet would be generated as: ... vpc: - name: \"jupiter-03\" allow_inbound: ['ssh'] address_prefix: ### Prefixes for the client environment - name: \"jupiter-03-zone-1\" vpc: \"jupiter-03\" zone: eu-de-1 cidr: 10.231.0.0/26 ... The ibm_cloud_region variable is specified in the inventory file. This is another method of specifying variables for dynamic configuration. You can even include more complex constructs for dynamic configuration, with if statements, for loops and others. An example where the OpenShift OCS storage classes would only be generated for a specific environment (jupiter-prod) would be: openshift_storage: - storage_name: nfs-storage storage_type: nfs nfs_server_name: \"{{ env_id }}-nfs\" {% if env_id == 'jupiter-prod' %} - storage_name: ocs-storage storage_type: ocs ocs_storage_label: ocs ocs_storage_size_gb: 500 {% endif %} For a more comprehensive overview of Jinja2 templating, see https://docs.ansible.com/ansible/latest/user_guide/playbooks_templating.html","title":"Advanced configuration"},{"location":"50-advanced/advanced-configuration/#cloud-pak-deployer-advanced-configuration","text":"The Cloud Pak Deployer includes several samples which you can use to build your own configuration. 
You can find sample configuration yaml files in the sub-directories of the sample-configurations directory of the repository. Descriptions and topologies are also included in the sub-directories. Warning Do not make changes to the sample configurations in the cloud-pak-deployer directory, but rather copy them to your own home directory or somewhere else and then make changes. If you store your own configuration under the repository's clone, you may not be able to pull repository updates from GitHub, or you may accidentally overwrite your configuration. Warning The deployer expects to manage all objects referenced in the configuration files, including the referenced OpenShift cluster and Cloud Pak installation. If you have already pre-provisioned the OpenShift cluster, choose a configuration with the existing-ocp cloud platform. If the Cloud Pak has already been installed, unexpected and undesired activities may happen. The deployer has not been designed to alter a pre-provisioned OpenShift cluster or existing Cloud Pak installation.\",\"title\":\"Cloud Pak Deployer Advanced Configuration\"},{\"location\":\"50-advanced/advanced-configuration/#configuration-steps---static-sample-configuration\",\"text\":\"Copy the static sample configuration directory to your own directory: mkdir -p $HOME/cpd-config/config cp -r ./sample-configurations/roks-ocs-cp4d/config/* $HOME/cpd-config/config/ cd $HOME/cpd-config/config Edit the \"cp4d-....yaml\" file and select the cartridges to be installed by changing the state to installed . Additionally, you can accept the Cloud Pak license in the config file by specifying accept_licenses: True . nano ./cp4d-450.yaml The configuration typically works without any changes and will create all referenced objects, including the Virtual Private Cloud, subnets, SSH keys, ROKS cluster and OCS storage nodes. There is typically no need to change address prefixes and subnets. 
The IP addresses used by the provisioned components are private to the VPC and are not externally exposed.\",\"title\":\"Configuration steps - static sample configuration\"},{\"location\":\"50-advanced/advanced-configuration/#configuration-steps---dynamically-choose-openshift-and-cloud-pak\",\"text\":\"Copy the sample configuration directory to your own directory: mkdir -p $HOME/cpd-config/config Copy the relevant OpenShift configuration file from the sample-configurations directory to the config directory, for example: cp ./sample-configurations/sample-dynamic/config-samples/ocp-ibm-cloud-roks-ocs.yaml $HOME/cpd-config/config/ Copy the relevant \"cp4d-\u2026\" file from the sample-configurations directory to the config directory, for example: cp ./sample-configurations/sample-dynamic/config-samples/cp4d-463.yaml $HOME/cpd-config/config/ Edit the \"$HOME/cpd-config/config/cp4d-....yaml\" file and select the cartridges to be installed by changing the state to installed . Additionally, you can accept the Cloud Pak license in the config file by specifying accept_licenses: True . nano $HOME/cpd-config/config/cp4d-463.yaml For more advanced configuration topics such as using a private registry, setting up transit gateways between VPCs, etc., go to the Advanced configuration section\",\"title\":\"Configuration steps - dynamically choose OpenShift and Cloud Pak\"},{\"location\":\"50-advanced/advanced-configuration/#directory-structure\",\"text\":\"Every configuration has a fixed directory structure, consisting of mandatory and optional subdirectories. 
Mandatory subdirectories: config : Keeps one or more yaml files with your OpenShift and Cloud Pak configuration Additionally, there are 3 optional subdirectories: defaults : Directory that keeps the defaults which will be merged with your configuration inventory : Keeps global settings for the configuration such as environment name or other variables used in the configs assets : Keeps directories of assets which must be deployed onto the Cloud Pak\",\"title\":\"Directory structure\"},{\"location\":\"50-advanced/advanced-configuration/#config-directory\",\"text\":\"You can choose to keep only a single file per subdirectory or, for more complex configurations, you can create multiple yaml files. You can find a full list of all supported object types here: Configuration objects . The generator automatically merges all .yaml files in the config and defaults directories. Files with different extensions are ignored. In the sample configurations we split configuration of the OpenShift ocp-... and Cloud Pak cp4.-... objects. For example, your config directory could hold the following files: cp4d-463.yaml ocp-ibm-cloud-roks-ocs.yaml This will provision a ROKS cluster on IBM Cloud with OpenShift Data Foundation (formerly known as OCS) and Cloud Pak for Data 4.6.3.\",\"title\":\"config directory\"},{\"location\":\"50-advanced/advanced-configuration/#defaults-directory-optional\",\"text\":\"Holds the defaults for all object types. If a certain object property has not been specified in the config directory, it will be retrieved from the defaults directory using the flavour specified in the configured object. If no flavour has been selected, the default flavour will be chosen. You should not need this subdirectory in most circumstances.\",\"title\":\"defaults directory (optional)\"},{\"location\":\"50-advanced/advanced-configuration/#assets-directory-optional\",\"text\":\"Optional directory holding the assets you wish to deploy for the Cloud Pak. 
More information about Cloud Pak for Data assets which can be deployed can be found in object definition cp4d_asset . The directory can be named differently as well, for example cp4d-assets or customer-churn-demo .\",\"title\":\"assets directory (optional)\"},{\"location\":\"50-advanced/advanced-configuration/#inventory-directory-optional\",\"text\":\"The Cloud Pak Deployer pipeline has been built using Ansible and it can be configured using \"inventory\" files. Inventory files allow you to specify global variables used throughout Ansible playbooks. In the current version of the Cloud Pak Deployer, the inventory directory has become fully optional as the global_config and vault objects have taken over its role. However, if there are certain global variables such as env_id you want to pass via an inventory file, you can also do this.\",\"title\":\"inventory directory (optional)\"},{\"location\":\"50-advanced/advanced-configuration/#vault-secrets\",\"text\":\"User passwords, certificates and other \"secret\" information is kept in the vault, which can be either a flat file (not encrypted), HashiCorp Vault or the IBM Cloud Secrets Manager service. Some of the deployment configurations require that the vault is pre-populated with secrets which are needed during the deployment. For example, a vSphere deployment needs the vSphere user and password to authenticate to vSphere, and Cloud Pak for Data SAML configuration requires the IdP certificate. All samples default to the File Vault , meaning that the vault will be kept in the vault directory under the status directory you specify when you run the deployer. Detailed descriptions of the vault settings can be found in the sample inventory file and also here: vault settings . Optional: Ensure that the environment variables for the configuration and status directories are set. If not specified, the directories are assumed to be $HOME/cpd-config and $HOME/cpd-status . 
export STATUS_DIR=$HOME/cpd-status export CONFIG_DIR=$HOME/cpd-config Set vSphere user secret: ./cp-deploy.sh vault set \\ --vault-secret vsphere-user \\ --vault-secret-value super_user@vsphere.local Or, if you want to create the secret from an input file: ./cp-deploy.sh vault set \\ --vault-secret kubeconfig \\ --vault-secret-file ~/.kube/config","title":"Vault secrets"},{"location":"50-advanced/advanced-configuration/#using-a-github-repository-for-the-configuration","text":"If the configuration is kept in a GitHub repository, you can set environment variables to have the deployer pull the GitHub repository to the current server before starting the process. Set environment variables. export CPD_CONFIG_GIT_REPO=\"https://github.com/IBM/cloud-pak-deployer-config.git\" export CPD_CONFIG_GIT_REF=\"main\" export CPD_CONFIG_GIT_CONTEXT=\"\" CPD_CONFIG_GIT_REPO : The clone URL of the GitHub repository that holds the configuration. CPD_CONFIG_GIT_REF : The branch, tag or commit ID to be cloned. If not specified, the repository's default branch will be cloned. CPD_CONFIG_GIT_CONTEXT : The directory within the GitHub repository that holds the configuration. This directory must contain the config directory under which the YAML files are kept. Info When specifying a GitHub repository, the contents will be copied under $STATUS_DIR/cpd-config and this directory is then set as the configuration directory.","title":"Using a GitHub repository for the configuration"},{"location":"50-advanced/advanced-configuration/#using-dynamic-variables-extra-variables","text":"In some situations you may want to use a single configuration for deployment in different environments, such as development, acceptance test and production. The Cloud Pak Deployer uses the Jinja2 templating engine which is included in Ansible to pre-process the configuration. This allows you to dynamically adjust the configuration based on extra variables you specify at the command line. 
Example: ./cp-deploy.sh env apply \\ -e ibm_cloud_region=eu-de \\ -e env_id=jupiter-03 [--accept-all-licenses] This passes the env_id and ibm_cloud_region variables to the Cloud Pak Deployer, which can then populate variables in the configuration. In the sample configurations, the env_id is used to specify the name of the VPC, ROKS cluster and others and overrides the value specified in the global_config definition. The ibm_cloud_region overrides the region specified in the inventory file. ... vpc: - name: \"{{ env_id }}\" allow_inbound: ['ssh'] address_prefix: ### Prefixes for the client environment - name: \"{{ env_id }}-zone-1\" vpc: \"{{ env_id }}\" zone: {{ ibm_cloud_region }}-1 cidr: 10.231.0.0/26 ... When running with the above cp-deploy.sh command, the snippet would be generated as: ... vpc: - name: \"jupiter-03\" allow_inbound: ['ssh'] address_prefix: ### Prefixes for the client environment - name: \"jupiter-03-zone-1\" vpc: \"jupiter-03\" zone: eu-de-1 cidr: 10.231.0.0/26 ... The ibm_cloud_region variable is specified in the inventory file. This is another method of specifying variables for dynamic configuration. You can even include more complex constructs for dynamic configuration, with if statements, for loops and others. An example where the OpenShift OCS storage classes would only be generated for a specific environment (jupiter-prod) would be: openshift_storage: - storage_name: nfs-storage storage_type: nfs nfs_server_name: \"{{ env_id }}-nfs\" {% if env_id == 'jupiter-prod' %} - storage_name: ocs-storage storage_type: ocs ocs_storage_label: ocs ocs_storage_size_gb: 500 {% endif %} For a more comprehensive overview of Jinja2 templating, see https://docs.ansible.com/ansible/latest/user_guide/playbooks_templating.html","title":"Using dynamic variables (extra variables)"},{"location":"50-advanced/alternative-repo-reg/","text":"Using alternative repositories and registries \ud83d\udd17 Warning In most scenarios you will not need this type of configuration. 
Alternative repositories and registries are mainly geared towards pre-GA use of the Cloud Paks where CASE files are downloaded from internal repositories and staging container image registries need to be used as images have not been released yet. Building the Cloud Pak Deployer image \ud83d\udd17 By default the Cloud Pak Deployer image is built on top of the olm-utils images in icr.io . If you're working with a pre-release of the Cloud Pak OLM utils image, you can override the setting as follows: export CPD_OLM_UTILS_V2_IMAGE=cp.staging.acme.com:4.8.0 Subsequently, run the build command: ./cp-deploy.sh build Configuring the alternative repositories and registries \ud83d\udd17 When specifying a cp_alt_repo object in a YAML file, this is used for all Cloud Paks. The object triggers the following steps: * The following files are created in the /tmp/work directory in the container: play_env.sh , resolvers.yaml and resolvers_auth . * When downloading CASE files using the ibm-pak plug-in, the play_env sets the locations of the resolvers and authorization files. * Also, the locations of the case files for the Cloud Pak, Foundational Services and Open Content are set in an environment variable. * Registry mirrors are configured using an ImageContentSourcePolicy resource in the OpenShift cluster. * Registry credentials are added to the OpenShift cluster's global pull secret. 
The cp_alt_repo is configured like this: cp_alt_repo: repo: token_secret: github-internal-repo cp_path: https://raw.internal-repo.acme.com/cpd-case-repo/4.8.0/promoted/case-repo-promoted fs_path: https://raw.internal-repo.acme.com/cloud-pak-case-repo/main/repo/case opencontent_path: https://raw.internal-repo.acme.com/cloud-pak-case-repo/main/repo/case registry_pull_secrets: - registry: cp.staging.acme.com pull_secret: cp-staging - registry: fs.staging.acme.com pull_secret: cp-fs-staging registry_mirrors: - source: cp.icr.io/cp mirrors: - cp.staging.acme.com/cp - source: cp.icr.io/cp/cpd mirrors: - cp.staging.acme.com/cp/cpd - source: icr.io/cpopen mirrors: - fs.staging.acme.com/cp - source: icr.io/cpopen/cpfs mirrors: - fs.staging.acme.com/cp Property explanation \ud83d\udd17 Property Description Mandatory Allowed values repo Repositories to be accessed and the Git token Yes repo.token_secret Secret in the vault that holds the Git login token Yes repo.cp_path Repository path where to find Cloud Pak CASE files Yes repo.fs_path Repository path where to find the Foundational Services CASE files Yes repo.opencontent_path Repository path where to find the Open Content CASE files Yes registry_pull_secrets List of registries and their pull secrets, will be used to configure global pull secret Yes .registry Registry host name Yes .pull_secret Vault secret that holds the pull secret (user:password) for the registry Yes registry_mirrors List of registries and their mirrors, will be used to configure the ImageContentSourcePolicy Yes .source Registry and path referenced by the Cloud Pak/FS pod Yes .mirrors List of alternate registry locations for this source Yes Configuring the secrets \ud83d\udd17 Before running the deployer with a cp_alt_repo object, you need to ensure the referenced secrets are present in the vault. For the GitHub token, you need to set the token (typically a deploy key) to login to GitHub or GitHub Enterprise. 
./cp-deploy.sh vault set -vs github-internal-repo=abc123def456 For the registry credentials, specify the user and password separated by a colon ( : ): ./cp-deploy.sh vault set -vs cp-staging=\"cp-staging-user:cp-staging-password\" You can also set these tokens on the cp-deploy.sh env apply command line. ./cp-deploy.sh env apply -f -vs github-internal-repo=abc123def456 -vs cp-staging=\"cp-staging-user:cp-staging-password\" Running the deploy \ud83d\udd17 To run the deployer you can now use the standard process: ./cp-deploy.sh env apply -v\",\"title\":\"Using alternative CASE repositories and registries\"},{\"location\":\"50-advanced/alternative-repo-reg/#using-alternative-repositories-and-registries\",\"text\":\"Warning In most scenarios you will not need this type of configuration. Alternative repositories and registries are mainly geared towards pre-GA use of the Cloud Paks where CASE files are downloaded from internal repositories and staging container image registries need to be used as images have not been released yet.\",\"title\":\"Using alternative repositories and registries\"},{\"location\":\"50-advanced/alternative-repo-reg/#building-the-cloud-pak-deployer-image\",\"text\":\"By default the Cloud Pak Deployer image is built on top of the olm-utils images in icr.io . If you're working with a pre-release of the Cloud Pak OLM utils image, you can override the setting as follows: export CPD_OLM_UTILS_V2_IMAGE=cp.staging.acme.com:4.8.0 Subsequently, run the build command: ./cp-deploy.sh build\",\"title\":\"Building the Cloud Pak Deployer image\"},{\"location\":\"50-advanced/alternative-repo-reg/#configuring-the-alternative-repositories-and-registries\",\"text\":\"When specifying a cp_alt_repo object in a YAML file, this is used for all Cloud Paks. The object triggers the following steps: * The following files are created in the /tmp/work directory in the container: play_env.sh , resolvers.yaml and resolvers_auth . 
* When downloading CASE files using the ibm-pak plug-in, the play_env sets the locations of the resolvers and authorization files. * Also, the locations of the case files for the Cloud Pak, Foundational Services and Open Content are set in an environment variable. * Registry mirrors are configured using an ImageContentSourcePolicy resource in the OpenShift cluster. * Registry credentials are added to the OpenShift cluster's global pull secret. The cp_alt_repo is configured like this: cp_alt_repo: repo: token_secret: github-internal-repo cp_path: https://raw.internal-repo.acme.com/cpd-case-repo/4.8.0/promoted/case-repo-promoted fs_path: https://raw.internal-repo.acme.com/cloud-pak-case-repo/main/repo/case opencontent_path: https://raw.internal-repo.acme.com/cloud-pak-case-repo/main/repo/case registry_pull_secrets: - registry: cp.staging.acme.com pull_secret: cp-staging - registry: fs.staging.acme.com pull_secret: cp-fs-staging registry_mirrors: - source: cp.icr.io/cp mirrors: - cp.staging.acme.com/cp - source: cp.icr.io/cp/cpd mirrors: - cp.staging.acme.com/cp/cpd - source: icr.io/cpopen mirrors: - fs.staging.acme.com/cp - source: icr.io/cpopen/cpfs mirrors: - fs.staging.acme.com/cp\",\"title\":\"Configuring the alternative repositories and registries\"},{\"location\":\"50-advanced/alternative-repo-reg/#property-explanation\",\"text\":\"Property Description Mandatory Allowed values repo Repositories to be accessed and the Git token Yes repo.token_secret Secret in the vault that holds the Git login token Yes repo.cp_path Repository path where to find Cloud Pak CASE files Yes repo.fs_path Repository path where to find the Foundational Services CASE files Yes repo.opencontent_path Repository path where to find the Open Content CASE files Yes registry_pull_secrets List of registries and their pull secrets, will be used to configure global pull secret Yes .registry Registry host name Yes .pull_secret Vault secret that holds the pull secret (user:password) for the registry Yes 
registry_mirrors List of registries and their mirrors, will be used to configure the ImageContentSourcePolicy Yes .source Registry and path referenced by the Cloud Pak/FS pod Yes .mirrors List of alternate registry locations for this source Yes\",\"title\":\"Property explanation\"},{\"location\":\"50-advanced/alternative-repo-reg/#configuring-the-secrets\",\"text\":\"Before running the deployer with a cp_alt_repo object, you need to ensure the referenced secrets are present in the vault. For the GitHub token, you need to set the token (typically a deploy key) to login to GitHub or GitHub Enterprise. ./cp-deploy.sh vault set -vs github-internal-repo=abc123def456 For the registry credentials, specify the user and password separated by a colon ( : ): ./cp-deploy.sh vault set -vs cp-staging=\"cp-staging-user:cp-staging-password\" You can also set these tokens on the cp-deploy.sh env apply command line. ./cp-deploy.sh env apply -f -vs github-internal-repo=abc123def456 -vs cp-staging=\"cp-staging-user:cp-staging-password\"\",\"title\":\"Configuring the secrets\"},{\"location\":\"50-advanced/alternative-repo-reg/#running-the-deploy\",\"text\":\"To run the deployer you can now use the standard process: ./cp-deploy.sh env apply -v\",\"title\":\"Running the deploy\"},{\"location\":\"50-advanced/apply-node-settings-non-mco/\",\"text\":\"Apply OpenShift node settings when machine config operator does not exist \ud83d\udd17 Cloud Pak Deployer automatically applies cluster and node settings before installing the Cloud Pak(s). Sometimes you may also want to automate applying these node settings without installing the Cloud Pak. For convenience, the repository includes a script that makes the same changes normally done through automation: scripts/cp4d/cp4d-apply-non-mco-cluster-settings.sh . 
To apply the node settings, do the following: If images are pulled from the entitled registry, set the CP_ENTITLEMENT_KEY environment variable If images are to be pulled from a private registry, set both the CPD_PRIVATE_REGISTRY and CPD_PRIVATE_REGISTRY_CREDS environment variables Log in to the OpenShift cluster with cluster-admin permissions Run the scripts/cp4d/cp4d-apply-non-mco-cluster-settings.sh script. The CPD_PRIVATE_REGISTRY value must reference the registry host name and optionally the port and namespace that must prefix the images. For example, if the images are kept in https://de.icr.io/cp4d-470 , you must specify de.icr.io/cp4d-470 for the CPD_PRIVATE_REGISTRY environment variable. If images are kept in https://cust-reg:5000 , you must specify cust-reg:5000 for the CPD_PRIVATE_REGISTRY environment variable. For the CPD_PRIVATE_REGISTRY_CREDS value, specify both the user and password in a single string, separated by a colon ( : ). For example: admin:secret_passw0rd . Warning When setting the private registry and its credentials, the script automatically creates the configuration that will set up ImageContentSourcePolicy and global pull secret alternatives. This change cannot be undone using the script. It is not possible to set the private registry and later change to entitled registry. Changing the private registry's credentials can be done by re-running the script with the new credentials. 
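The registry-mode rules above can be captured in a small pre-flight check before invoking the script (a hypothetical helper, not part of the repository; the private-registry values are illustrative):

```shell
# Hypothetical pre-flight check before running
# scripts/cp4d/cp4d-apply-non-mco-cluster-settings.sh.
unset CP_ENTITLEMENT_KEY                         # clean slate for this demonstration
export CPD_PRIVATE_REGISTRY=de.icr.io/cp4d-470   # illustrative registry/namespace
export CPD_PRIVATE_REGISTRY_CREDS="iamapikey:example-api-key"  # user:password

# Either the entitled-registry key, or both private-registry variables, must be set.
if [ -n "${CP_ENTITLEMENT_KEY:-}" ]; then
  MODE="entitled"
elif [ -n "${CPD_PRIVATE_REGISTRY:-}" ] && [ -n "${CPD_PRIVATE_REGISTRY_CREDS:-}" ]; then
  MODE="private"
else
  MODE="unset"
fi
echo "Registry mode: $MODE"
```

A guard like this is useful because, as the warning above notes, running the script with the private registry configured is a one-way decision.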
Example \ud83d\udd17 export CPD_PRIVATE_REGISTRY=de.icr.io/cp4d-470 export CPD_PRIVATE_REGISTRY_CREDS=\"iamapikey:U97KLPYF663AE4XAQL0\" ./scripts/cp4d/cp4d-apply-non-mco-cluster-settings.sh Creating ConfigMaps and secret configmap \"cloud-pak-node-fix-scripts\" deleted configmap/cloud-pak-node-fix-scripts created configmap \"cloud-pak-node-fix-config\" deleted configmap/cloud-pak-node-fix-config created secret \"cloud-pak-node-fix-secrets\" deleted secret/cloud-pak-node-fix-secrets created Setting global pull secret /tmp/.dockerconfigjson info: pull-secret was not changed secret/cloud-pak-node-fix-secrets data updated Private registry specified, creating ImageContentSourcePolicy for registry de.icr.io/cp4d-470 Generating Tuned config tuned.tuned.openshift.io/cp4d-ipc unchanged Writing fix scripts to config map configmap/cloud-pak-node-fix-scripts data updated configmap/cloud-pak-node-fix-scripts data updated configmap/cloud-pak-node-fix-scripts data updated configmap/cloud-pak-node-fix-scripts data updated Creating service account for DaemonSet serviceaccount/cloud-pak-crontab-sa unchanged clusterrole.rbac.authorization.k8s.io/system:openshift:scc:privileged added: \"cloud-pak-crontab-sa\" Recreate DaemonSet daemonset.apps \"cloud-pak-crontab-ds\" deleted daemonset.apps/cloud-pak-crontab-ds created Showing running DaemonSet pods NAME READY STATUS RESTARTS AGE cloud-pak-crontab-ds-b92f9 0/1 Terminating 0 12m cloud-pak-crontab-ds-f85lf 0/1 ContainerCreating 0 0s cloud-pak-crontab-ds-jlbvm 0/1 ContainerCreating 0 0s cloud-pak-crontab-ds-rbj65 1/1 Terminating 0 12m cloud-pak-crontab-ds-vckrs 0/1 ContainerCreating 0 0s cloud-pak-crontab-ds-x288p 1/1 Terminating 0 12m Waiting for 5 seconds for pods to start Showing running DaemonSet pods NAME READY STATUS RESTARTS AGE cloud-pak-crontab-ds-f85lf 1/1 Running 0 5s cloud-pak-crontab-ds-jlbvm 1/1 Running 0 5s cloud-pak-crontab-ds-vckrs 1/1 Running 0 5s","title":"Apply node settings to non-MCO 
clusters"},{"location":"50-advanced/apply-node-settings-non-mco/#apply-openshift-node-settings-when-machine-config-operator-does-not-exist","text":"Cloud Pak Deployer automatically applies cluster and node settings before installing the Cloud Pak(s). Sometimes you may also want to automate applying these node settings without installing the Cloud Pak. For convenience, the repository includes a script that makes the same changes normally done through automation: scripts/cp4d/cp4d-apply-non-mco-cluster-settings.sh . To apply the node settings, do the following: If images are pulled from the entitled registry, set the CP_ENTITLEMENT_KEY environment variable If images are to be pulled from a private registry, set both the CPD_PRIVATE_REGISTRY and CPD_PRIVATE_REGISTRY_CREDS environment variables Log in to the OpenShift cluster with cluster-admin permissions Run the scripts/cp4d/cp4d-apply-non-mco-cluster-settings.sh script. The CPD_PRIVATE_REGISTRY value must reference the registry host name and optionally the port and namespace that must prefix the images. For example, if the images are kept in https://de.icr.io/cp4d-470 , you must specify de.icr.io/cp4d-470 for the CPD_PRIVATE_REGISTRY environment variable. If images are kept in https://cust-reg:5000 , you must specify cust-reg:5000 for the CPD_PRIVATE_REGISTRY environment variable. For the CPD_PRIVATE_REGISTRY_CREDS value, specify both the user and password in a single string, separated by a colon ( : ). For example: admin:secret_passw0rd . Warning When setting the private registry and its credentials, the script automatically creates the configuration that will set up ImageContentSourcePolicy and global pull secret alternatives. This change cannot be undone using the script. It is not possible to set the private registry and later change to entitled registry. 
Changing the private registry's credentials can be done by re-running the script with the new credentials.","title":"Apply OpenShift node settings when machine config operator does not exist"},{"location":"50-advanced/apply-node-settings-non-mco/#example","text":"export CPD_PRIVATE_REGISTRY=de.icr.io/cp4d-470 export CPD_PRIVATE_REGISTRY_CREDS=\"iamapikey:U97KLPYF663AE4XAQL0\" ./scripts/cp4d/cp4d-apply-non-mco-cluster-settings.sh Creating ConfigMaps and secret configmap \"cloud-pak-node-fix-scripts\" deleted configmap/cloud-pak-node-fix-scripts created configmap \"cloud-pak-node-fix-config\" deleted configmap/cloud-pak-node-fix-config created secret \"cloud-pak-node-fix-secrets\" deleted secret/cloud-pak-node-fix-secrets created Setting global pull secret /tmp/.dockerconfigjson info: pull-secret was not changed secret/cloud-pak-node-fix-secrets data updated Private registry specified, creating ImageContentSourcePolicy for registry de.icr.io/cp4d-470 Generating Tuned config tuned.tuned.openshift.io/cp4d-ipc unchanged Writing fix scripts to config map configmap/cloud-pak-node-fix-scripts data updated configmap/cloud-pak-node-fix-scripts data updated configmap/cloud-pak-node-fix-scripts data updated configmap/cloud-pak-node-fix-scripts data updated Creating service account for DaemonSet serviceaccount/cloud-pak-crontab-sa unchanged clusterrole.rbac.authorization.k8s.io/system:openshift:scc:privileged added: \"cloud-pak-crontab-sa\" Recreate DaemonSet daemonset.apps \"cloud-pak-crontab-ds\" deleted daemonset.apps/cloud-pak-crontab-ds created Showing running DaemonSet pods NAME READY STATUS RESTARTS AGE cloud-pak-crontab-ds-b92f9 0/1 Terminating 0 12m cloud-pak-crontab-ds-f85lf 0/1 ContainerCreating 0 0s cloud-pak-crontab-ds-jlbvm 0/1 ContainerCreating 0 0s cloud-pak-crontab-ds-rbj65 1/1 Terminating 0 12m cloud-pak-crontab-ds-vckrs 0/1 ContainerCreating 0 0s cloud-pak-crontab-ds-x288p 1/1 Terminating 0 12m Waiting for 5 seconds for pods to start Showing running DaemonSet 
pods NAME READY STATUS RESTARTS AGE cloud-pak-crontab-ds-f85lf 1/1 Running 0 5s cloud-pak-crontab-ds-jlbvm 1/1 Running 0 5s cloud-pak-crontab-ds-vckrs 1/1 Running 0 5s","title":"Example"},{"location":"50-advanced/gitops/","text":"The process of supporting multiple products, releases and patch levels within a release has great similarity to the git-flow model, which has been well described by Vincent Driessen in his blog post: https://nvie.com/posts/a-successful-git-branching-model/ . This model has been, and still is, very popular with many software development teams. Below is a description of how a git-flow could be implemented with the Cloud Pak Deployer. The following steps are covered: Setting up the company's Git and image registry for the Cloud Paks The git-flow change process Feeding Cloud Pak changes into the process Deploying the Cloud Pak changes Environments, Git and registry \ud83d\udd17 . There are 4 Cloud Pak environments within the company's domain: Dev, UAT, Pre-prod and Prod. Each of these environments has a namespace in the company's registry (or an isolated registry could be created per environment) and the Cloud Pak release installed is represented by manifests in a branch of the Git repository, respectively dev, uat, pp and prod. Organizing registries by namespace has the advantage that duplication of images can be avoided. Each of the namespaces can have its own set of images that have been approved for running in the associated environment. The image itself is referenced by digest (i.e., checksum) and organized on disk as such. If one tries to copy an image to a different namespace within the same registry, only a new entry is created; the image itself is not duplicated because it already exists. The manifests (CASE files) representing the Cloud Pak components are present in each of the branches of the Git repository, or there is a configuration file that references the location of the CASE file, including the exact version number. 
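To make the copy-by-digest behaviour concrete, a minimal sketch using skopeo (the registry host, namespaces, tag and digest here are hypothetical examples, not values from this document; the command is printed rather than executed so the sketch is safe to run as-is):

```shell
# Hypothetical example: promote an image between namespaces of the same
# registry by digest. Host, namespaces, tag and digest are made up.
SRC="docker://registry.example.com/dev/ws@sha256:1111111111111111111111111111111111111111111111111111111111111111"
DST="docker://registry.example.com/uat/ws:4.0.2"
# With skopeo installed you would run this command directly. Because blobs
# are stored by digest, the registry only records a new manifest entry in
# the target namespace; the layers themselves are not duplicated on disk.
echo "skopeo copy --all $SRC $DST"
```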
In the Cloud Pak Deployer, we have chosen to reference the CASE versions in the configuration, for example: cp4d: - project: cpd-instance openshift_cluster_name: {{ env_id }} cp4d_version: 4.6.0 openshift_storage_name: ocs-storage sequential_install: True cartridges: - name: cpfs - name: cpd_platform - name: ws state: installed - name: wml size: small state: installed If Cloud Pak for Data has been configured with a private registry in the deployer config, the deployer will mirror images from the IBM entitled registry to the private registry. In the above configuration, no private registry has been specified. The deployer will automatically download and use the CASE files to create the catalog sources. Change process using git-flow \ud83d\udd17 With the initial status in place, the continuous adoption process may commence, using the principles of git-flow. Git-flow addresses a couple of needs for continuous adoption: Control and visibility over what software (version) runs in which environment; there is a central truth which describes the state of every environment managed New features (in case of the deployer: new operator versions and custom resources) can be tested without affecting the pending releases or production implementation While preparing for a new release, hot fixes can still be applied to the production environments The Git repository consists of 4 branches: dev, uat, pp and prd. At the start, release 4.0.0 is being implemented and it will go through the stages from dev to prd. When the installation has been tested in development, a pull request (PR) is done to promote to the uat branch. The PR is reviewed, and changes are then merged into the uat branch. After testing in the uat branch, the steps are repeated until the 4.0.0 release is eventually in production. 
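For readability, the cp4d configuration snippet quoted earlier in this section can be laid out as indented YAML (a sketch using the document's own example values; exact indentation may differ in the deployer's sample configurations):

```yaml
cp4d:
- project: cpd-instance
  openshift_cluster_name: {{ env_id }}
  cp4d_version: 4.6.0
  openshift_storage_name: ocs-storage
  sequential_install: True
  cartridges:
  - name: cpfs
  - name: cpd_platform
  - name: ws
    state: installed
  - name: wml
    size: small
    state: installed
```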
With each of the implementation and promotion steps, the registry namespaces associated with the particular branch are updated with the images described in the manifests kept in the Git repository. Additionally, the changes are installed in the respective environments. The details of these processes will be outlined later. New patches are received, committed and installed on the dev branch on a regular basis and when no issues are found, the changes are gathered into a PR for uat. When no issues are found for 2 weeks, another PR is created for the pp branch and eventually for prd. During this promotion flow, new patches are still being received in dev. While version 4.0.2 is running in production, a critical defect is found for which a hot fix is developed. The hot fix is first committed to the pp branch and tested and then a PR is made to promote it to the prd branch. In the meantime, the dev and uat branches continue with their own release schedule. The hot fix is included in 4.0.4, which will be promoted as part of the 4.0.5 release. The uat, pp and prd branches can be protected by a branch protection rule so that changes from dev can only be promoted (via a pull request) after an approving review or, when the intention is to promote changes in a fully automated manner, after passing status checks and testing. Read Managing a branch protection rule for putting these controls in place in GitHub or Protected branches for GitLab. With this flow, there is control over patches, promotion approvals and releases installed in each of the environments. Additional branches could be introduced if additional environments are in play or if different releases are being managed using the git-flow. Feeding patches and releases into the flow \ud83d\udd17 As discussed above, patches are first \"developed\" in the dev branch, i.e., changes are fed into the Git repository, images are loaded into the company's registry (dev namespace) and then installed into the Dev environment. 
The process of receiving and installing the patches is common for all Cloud Paks: the cloudctl case tool downloads the CASE file associated with the operator version and the same CASE file can be used to upload images into the company's registry. Then a Catalog Source is created which makes the images available to the operator subscriptions, which in turn manage the various custom resources in the Cloud Pak instance. For example, the ws operator manages the Ws custom resource and this CR ensures that OpenShift deployments, secrets, Config Maps, Stateful Sets, and so forth are managed within the Cloud Pak for Data instance project. In the git-flow example, Watson Studio release 4.0.2 is installed by updating the Catalog Source. Detailed installation steps for Cloud Pak for Data can be found in the IBM documentation. Deploying the Cloud Pak changes \ud83d\udd17 Now that the hard work of managing changes to the Git repository branches and image registry namespaces has been done, we can look at the (automatic) deployment of the changes. In a continuous adoption workflow, the implementation of new releases and patches is automated by means of a pipeline, which allows for deployment and testing in a predictable and controlled manner. A pipeline executes a series of steps to inspect the change and then run the command to install it in the respective environment. Moreover, tests can be executed automatically after installation. The most popular tools for pipelines are ArgoCD, GitLab pipelines and Tekton (serverless). To link the execution of a pipeline with the git-flow pull request, one can use ArgoCD or a GitHub/GitLab webhook. As soon as a PR is accepted and changes are applied to the Git branch, the pipeline is triggered and will run the Cloud Pak Deployer to automatically apply the changes according to the latest version.","title":"GitOps"},{"location":"50-advanced/gitops/#environments-git-and-registry","text":". 
There are 4 Cloud Pak environments within the company's domain: Dev, UAT, Pre-prod and Prod. Each of these environments has a namespace in the company's registry (or an isolated registry could be created per environment) and the Cloud Pak release installed is represented by manifests in a branch of the Git repository, respectively dev, uat, pp and prod. Organizing registries by namespace has the advantage that duplication of images can be avoided. Each of the namespaces can have its own set of images that have been approved for running in the associated environment. The image itself is referenced by digest (i.e., checksum) and organized on disk as such. If one tries to copy an image to a different namespace within the same registry, only a new entry is created; the image itself is not duplicated because it already exists. The manifests (CASE files) representing the Cloud Pak components are present in each of the branches of the Git repository, or there is a configuration file that references the location of the CASE file, including the exact version number. In the Cloud Pak Deployer, we have chosen to reference the CASE versions in the configuration, for example: cp4d: - project: cpd-instance openshift_cluster_name: {{ env_id }} cp4d_version: 4.6.0 openshift_storage_name: ocs-storage sequential_install: True cartridges: - name: cpfs - name: cpd_platform - name: ws state: installed - name: wml size: small state: installed If Cloud Pak for Data has been configured with a private registry in the deployer config, the deployer will mirror images from the IBM entitled registry to the private registry. In the above configuration, no private registry has been specified. 
The deployer will automatically download and use the CASE files to create the catalog sources.","title":"Environments, Git and registry"},{"location":"50-advanced/gitops/#change-process-using-git-flow","text":"With the initial status in place, the continuous adoption process may commence, using the principles of git-flow. Git-flow addresses a couple of needs for continuous adoption: Control and visibility over what software (version) runs in which environment; there is a central truth which describes the state of every environment managed New features (in case of the deployer: new operator versions and custom resources) can be tested without affecting the pending releases or production implementation While preparing for a new release, hot fixes can still be applied to the production environments The Git repository consists of 4 branches: dev, uat, pp and prd. At the start, release 4.0.0 is being implemented and it will go through the stages from dev to prd. When the installation has been tested in development, a pull request (PR) is done to promote to the uat branch. The PR is reviewed, and changes are then merged into the uat branch. After testing in the uat branch, the steps are repeated until the 4.0.0 release is eventually in production. With each of the implementation and promotion steps, the registry namespaces associated with the particular branch are updated with the images described in the manifests kept in the Git repository. Additionally, the changes are installed in the respective environments. The details of these processes will be outlined later. New patches are received, committed and installed on the dev branch on a regular basis and when no issues are found, the changes are gathered into a PR for uat. When no issues are found for 2 weeks, another PR is created for the pp branch and eventually for prd. During this promotion flow, new patches are still being received in dev. 
While version 4.0.2 is running in production, a critical defect is found for which a hot fix is developed. The hot fix is first committed to the pp branch and tested and then a PR is made to promote it to the prd branch. In the meantime, the dev and uat branches continue with their own release schedule. The hot fix is included in 4.0.4, which will be promoted as part of the 4.0.5 release. The uat, pp and prd branches can be protected by a branch protection rule so that changes from dev can only be promoted (via a pull request) after an approving review or, when the intention is to promote changes in a fully automated manner, after passing status checks and testing. Read Managing a branch protection rule for putting these controls in place in GitHub or Protected branches for GitLab. With this flow, there is control over patches, promotion approvals and releases installed in each of the environments. Additional branches could be introduced if additional environments are in play or if different releases are being managed using the git-flow.","title":"Change process using git-flow"},{"location":"50-advanced/gitops/#feeding-patches-and-releases-into-the-flow","text":"As discussed above, patches are first \"developed\" in the dev branch, i.e., changes are fed into the Git repository, images are loaded into the company's registry (dev namespace) and then installed into the Dev environment. The process of receiving and installing the patches is common for all Cloud Paks: the cloudctl case tool downloads the CASE file associated with the operator version and the same CASE file can be used to upload images into the company's registry. Then a Catalog Source is created which makes the images available to the operator subscriptions, which in turn manage the various custom resources in the Cloud Pak instance. 
For example, the ws operator manages the Ws custom resource and this CR ensures that OpenShift deployments, secrets, Config Maps, Stateful Sets, and so forth are managed within the Cloud Pak for Data instance project. In the git-flow example, Watson Studio release 4.0.2 is installed by updating the Catalog Source. Detailed installation steps for Cloud Pak for Data can be found in the IBM documentation.","title":"Feeding patches and releases into the flow"},{"location":"50-advanced/gitops/#deploying-the-cloud-pak-changes","text":"Now that the hard work of managing changes to the Git repository branches and image registry namespaces has been done, we can look at the (automatic) deployment of the changes. In a continuous adoption workflow, the implementation of new releases and patches is automated by means of a pipeline, which allows for deployment and testing in a predictable and controlled manner. A pipeline executes a series of steps to inspect the change and then run the command to install it in the respective environment. Moreover, tests can be executed automatically after installation. The most popular tools for pipelines are ArgoCD, GitLab pipelines and Tekton (serverless). To link the execution of a pipeline with the git-flow pull request, one can use ArgoCD or a GitHub/GitLab webhook. As soon as a PR is accepted and changes are applied to the Git branch, the pipeline is triggered and will run the Cloud Pak Deployer to automatically apply the changes according to the latest version.","title":"Deploying the Cloud Pak changes"},{"location":"50-advanced/locations-to-whitelist/","text":"Locations to whitelist on bastion \ud83d\udd17 When building or running the deployer in an environment with strict policies for internet access, you may have to specify the list of URLs that need to be accessed by the deployer. Locations to whitelist when building the deployer image. 
\ud83d\udd17 Location Used for registry.access.redhat.com Base image icr.io olm-utils base image cdn.redhat.com Installing operating system packages cdn-ubi.redhat.com Installing operating system packages rpm.releases.hashicorp.com Hashicorp Vault integration dl.fedoraproject.org Extra Packages for Enterprise Linux (EPEL) mirrors.fedoraproject.org EPEL mirror site fedora.mirrorservice.org EPEL mirror site pypi.org Python packages for deployer galaxy.ansible.com Ansible Galaxy packages Locations to whitelist when running the deployer for existing OpenShift. \ud83d\udd17 Location Used for github.com Case files, Cloud Pak clients: cloudctl, cpd-cli, cpdctl gcr.io Google Container Registry (GCR) objects.githubusercontent.com Binary content for github.com raw.githubusercontent.com Binary content for github.com mirror.openshift.com OpenShift client ocsp.digicert.com Certificate checking subscription.rhsm.redhat.com OpenShift subscriptions","title":"Locations to whitelist"},{"location":"50-advanced/locations-to-whitelist/#locations-to-whitelist-on-bastion","text":"When building or running the deployer in an environment with strict policies for internet access, you may have to specify the list of URLs that need to be accessed by the deployer.","title":"Locations to whitelist on bastion"},{"location":"50-advanced/locations-to-whitelist/#locations-to-whitelist-when-building-the-deployer-image","text":"Location Used for registry.access.redhat.com Base image icr.io olm-utils base image cdn.redhat.com Installing operating system packages cdn-ubi.redhat.com Installing operating system packages rpm.releases.hashicorp.com Hashicorp Vault integration dl.fedoraproject.org Extra Packages for Enterprise Linux (EPEL) mirrors.fedoraproject.org EPEL mirror site fedora.mirrorservice.org EPEL mirror site pypi.org Python packages for deployer galaxy.ansible.com Ansible Galaxy packages","title":"Locations to whitelist when building the deployer 
image."},{"location":"50-advanced/locations-to-whitelist/#locations-to-whitelist-when-running-the-deployer-for-existing-openshift","text":"Location Used for github.com Case files, Cloud Pak clients: cloudctl, cpd-cli, cpdctl gcr.io Google Container Registry (GCR) objects.githubusercontent.com Binary content for github.com raw.githubusercontent.com Binary content for github.com mirror.openshift.com OpenShift client ocsp.digicert.com Certificate checking subscription.rhsm.redhat.com OpenShift subscriptions","title":"Locations to whitelist when running the deployer for existing OpenShift."},{"location":"50-advanced/private-registry-and-air-gapped/","text":"Using a private registry \ud83d\udd17 Some environments, especially in situations where the OpenShift cannot directly connect to the internet, require a private registry for OpenShift to pull the Cloud Pak images from. The Cloud Pak Deployer can mirror images from the entitled registry to a private registry that you want to use for the Cloud Pak(s). Also, if infrastructure which holds the OpenShift cluster is fully disconnected from the internet, the Cloud Pak Deployer can build a registry which can be stored on a portable hard disk or pen drive and then shipped to the site. Info Note: In all cases, the deployer can work behind a proxy to access the internet. Go to Running behind proxy for more information. The below instructions are not limited to disconnected (air-gapped) OpenShift clusters, but are more generic for deployment using a private registry. There are three use cases for mirroring images to a private registry and using this to install the Cloud Pak(s): Use case 1 - Mirror images and install using a bastion server . The bastion server can connect to the internet (directly or via a proxy), to OpenShift and to the private registry used by the OpenShift cluster. Use case 2 - Mirror images with a connected server, install using a bastion . 
The connected server can connect to the internet and to the private registry used by the OpenShift cluster. The server cannot connect to the OpenShift cluster. The bastion server can connect to the private registry and to the OpenShift cluster. Use case 3 - Mirror images using a portable image registry . The private registry used by the OpenShift cluster cannot be reached from the server that is connected to the internet. You need a portable registry to download images and which you then ship to a server that can connect to the existing OpenShift cluster and its private registry. Use cases 1 and 3 are also outlined in the Cloud Pak for Data installation documentation: https://www.ibm.com/docs/en/cloud-paks/cp-data/4.5.x?topic=tasks-mirroring-images-your-private-container-registry For specifying a private registry in the Cloud Pak Deployer configuration, please see Private registry . Example of specifying a private registry with a self-signed certificate in the configuration: image_registry: - name: cpd453 registry_host_name: registry.coc.ibm.com registry_port: 5000 registry_insecure: True The cp4d instance must reference the image_registry object using the image_registry_name : cp4d: - project: zen-45 openshift_cluster_name: {{ env_id }} cp4d_version: 4.5.3 openshift_storage_name: ocs-storage image_registry_name: cpd453 Info The deployer only supports using a private registry for the Cloud Pak images, not for OpenShift itself. Air-gapped installation of OpenShift is currently not in scope for the deployer. Warning The registry_host_name you specify in the image_registry definition must also be available for DNS lookup within OpenShift. If the registry runs on a server that is not registered in the DNS, use its IP address instead of a host name. 
The 3 main directories that are needed for both types of air-gapped installations are: Cloud Pak Deployer directory: cloud-pak-deployer Configuration directory: The directory that holds all the Cloud Pak Deployer configuration Status directory: The directory that will hold all downloads, vault secrets and the portable registry when applicable (use case 3) For use cases 2 and 3, where the directories must be shipped to the air-gapped cluster, the Cloud Pak Deployer and Configuration directories will be stored in the Status directory for simplicity. Use case 1 - Mirror images and install using a bastion server \ud83d\udd17 This is effectively the \"not-air-gapped\" scenario, where the following conditions apply: The private registry is hosted inside the private cloud The bastion server can connect to the internet and mirror images to the private image registry The bastion server is optionally connected to the internet via a proxy server. See Running behind a proxy for more details The bastion server can connect to OpenShift On the bastion server \ud83d\udd17 The bastion server is connected to the internet and the OpenShift cluster. If there are restrictions regarding the internet sites that can be reached, ensure that the website domains the deployer needs are whitelisted. 
For a list of domains, check locations to whitelist If a proxy server is configured for the bastion node, check the settings ( http_proxy , https_proxy , no_proxy environment variables) Build the Cloud Pak Deployer image using ./cp-deploy.sh build Create or update the directory with the configuration; make sure all your Cloud Paks and cartridges are specified as well as an image_registry entry to identify the private registry Export the CONFIG_DIR and STATUS_DIR environment variables to respectively point to the configuration directory and the status directory Export the CP_ENTITLEMENT_KEY environment variable with your Cloud Pak entitlement key Create a vault secret image-registry- holding the connection credentials for the private registry specified in the configuration ( image_registry ). For example for a registry definition with name cpd453 , create secret image-registry-cpd453 . ./cp-deploy.sh vault set \\ -vs image-registry-cpd453 \\ -vsv \"admin:very_s3cret\" Set the environment variable for the oc login command. For example: export CPD_OC_LOGIN=\"oc login api.pluto-01.coc.ibm.com:6443 -u kubeadmin -p BmxQ5-KjBFx-FgztG-gpTF3 --insecure-skip-tls-verify\" Run the ./cp-deploy.sh env apply command to start deployment of the Cloud Pak to the OpenShift cluster. For example: ./cp-deploy.sh env apply The existence of the image_registry definition and its reference in the cp4d definition instruct the deployer to mirror images to the private registry and to configure the OpenShift cluster to pull images from the private registry. If you have already mirrored the Cloud Pak images, you can add the --skip-mirror-images parameter to speed up the deployment process. 
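The use case 1 steps above can be sketched as a single command sequence. This is a dry-run sketch only: the cluster URL, password, entitlement key and registry credentials are the document's own example placeholders, and the `run` helper prints each deployer command instead of executing it, so the sketch is safe to run as-is.

```shell
#!/bin/sh
# Dry-run sketch of the use case 1 sequence; all values are placeholders.
set -e
run() { echo "+ $*"; }    # print the command instead of executing it

export CONFIG_DIR="$HOME/cpd-config"   # directory holding the deployer configuration
export STATUS_DIR="$HOME/cpd-status"   # downloads, vault secrets, logs
export CP_ENTITLEMENT_KEY="<entitlement-key>"
export CPD_OC_LOGIN="oc login api.pluto-01.coc.ibm.com:6443 -u kubeadmin -p <password> --insecure-skip-tls-verify"

# The vault secret name must be image-registry-<name>, matching the
# image_registry definition in the configuration (here: cpd453)
run ./cp-deploy.sh vault set -vs image-registry-cpd453 -vsv "admin:very_s3cret"

# Add --skip-mirror-images when the images have already been mirrored
run ./cp-deploy.sh env apply
```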
Use case 2 - Mirror images with an internet-connected server, install using a bastion \ud83d\udd17 This use case is also sometimes referred to as \"semi-air-gapped\", where the following conditions apply: The private registry is hosted outside of the private cloud that hosts the bastion server and OpenShift An internet-connected server external to the private cloud can reach the entitled registry and the private registry The internet-connected server is optionally connected to the internet via a proxy server. See Running behind a proxy for more details The bastion server cannot connect to the internet The bastion server can connect to OpenShift Warning Please note that in this case the Cloud Pak Deployer expects an OpenShift cluster to be available already and will only work with an existing-ocp configuration. The bastion server does not have access to the internet and can therefore not instantiate an OpenShift cluster. On the internet-connected server \ud83d\udd17 If there are restrictions regarding the internet sites that can be reached, ensure that the website domains the deployer needs are whitelisted. For a list of domains, check locations to whitelist If a proxy server is configured for the internet-connected server, check the settings ( http_proxy , https_proxy , no_proxy environment variables) Build the Cloud Pak Deployer image using ./cp-deploy.sh build Create or update the directory with the configuration; make sure all your Cloud Paks and cartridges are specified as well as an image_registry entry to identify the private registry Export the CONFIG_DIR and STATUS_DIR environment variables to respectively point to the configuration directory and the status directory Export the CP_ENTITLEMENT_KEY environment variable with your Cloud Pak entitlement key Create a vault secret image-registry- holding the connection credentials for the private registry specified in the configuration ( image_registry ). 
For example for a registry definition with name cpd453 , create secret image-registry-cpd453 . ./cp-deploy.sh vault set \\ -vs image-registry-cpd453 \\ -vsv \"admin:very_s3cret\" If the status directory does not exist it is created at this point. Diagram step 1 \ud83d\udd17 Run the deployer using the ./cp-deploy.sh env download --skip-portable-registry command. For example: ./cp-deploy.sh env download \\ --skip-portable-registry This will download all clients to the status directory and then mirror images from the entitled registry to the private registry. If mirroring fails, fix the issue and just run the env download again. Before saving the status directory, you can optionally remove the entitlement key from the vault: ./cp-deploy.sh vault delete \\ -vs ibm_cp_entitlement_key Diagram step 2 \ud83d\udd17 When the download finished successfully, the status directory holds the deployer scripts, the configuration directory and the deployer container image. Diagram step 3 \ud83d\udd17 Ship the status directory from the internet-connected server to the bastion server. You can use tar with gzip mode or any other compression technique. The total size of the directories should be relatively small, typically < 5 GB On the bastion server \ud83d\udd17 The bastion server is not connected to the internet but is connected to the private registry and the OpenShift cluster. Diagram step 4 \ud83d\udd17 We're using the instructions in Run on existing OpenShift , adding the --air-gapped and --skip-mirror-images flags, to start the deployer: Restore the status directory onto the bastion server Export the STATUS_DIR environment variable to point to the status directory Untar the cloud-pak-deployer scripts, for example: tar xvzf $STATUS_DIR/cloud-pak-deployer.tar.gz Set the CPD_AIRGAP environment variable to true export CPD_AIRGAP=true Set the environment variable for the oc login command. 
For example: export CPD_OC_LOGIN=\"oc login api.pluto-01.coc.ibm.com:6443 -u kubeadmin -p BmxQ5-KjBFx-FgztG-gpTF3 --insecure-skip-tls-verify\" Run the cp-deploy.sh env apply --skip-mirror-images command to start deployment of the Cloud Pak to the OpenShift cluster. For example: cd cloud-pak-deployer ./cp-deploy.sh env apply \\ --skip-mirror-images The CPD_AIRGAP environment variable tells the deployer it will not download anything from the internet; --skip-mirror-images indicates that images are already available in the private registry that is included in the configuration ( image_registry ) Use case 3 - Mirror images using a portable image registry \ud83d\udd17 This use case is usually referred to as \"air-gapped\", where the following conditions apply: The private registry is hosted in the private cloud that hosts the bastion server and OpenShift The bastion server cannot connect to the internet The bastion server can connect to the private registry and the OpenShift cluster The internet-connected server cannot connect to the private cloud The internet-connected server is optionally connected to the internet via a proxy server. See Running behind a proxy for more details You need a portable registry to fill the private registry with the Cloud Pak images Warning Please note that in this case the Cloud Pak Deployer expects an OpenShift cluster to be available already and will only work with an existing-ocp configuration. The bastion server does not have access to the internet and can therefore not instantiate an OpenShift cluster. On the internet-connected server \ud83d\udd17 If there are restrictions regarding the internet sites that can be reached, ensure that the website domains the deployer needs are whitelisted. 
For a list of domains, check locations to whitelist If a proxy server is configured for the bastion node, check the settings ( http_proxy , https_proxy , no_proxy environment variables) Build the Cloud Pak Deployer image using cp-deploy.sh build Create or update the directory with the configuration, making sure all your Cloud Paks and cartridges are specified Export the CONFIG_DIR and STATUS_DIR environment variables to respectively point to the configuration directory and the status directory Export the CP_ENTITLEMENT_KEY environment variable with your Cloud Pak entitlement key Diagram step 1 \ud83d\udd17 Run the deployer using the ./cp-deploy.sh env download command. For example: ./cp-deploy.sh env download This will download all clients, start the portable registry and then mirror images from the entitled registry to the portable registry . The portable registry data is kept in the status directory. If mirroring fails, fix the issue and just run the env download again. Before saving the status directory, you can optionally remove the entitlement key from the vault: ./cp-deploy.sh vault delete \\ -vs ibm_cp_entitlement_key See the download of watsonx.ai in action: https://ibm.box.com/v/cpd-air-gapped-download Diagram step 2 \ud83d\udd17 When the download finished successfully, the status directory holds the deployer scripts, the configuration directory, the deployer container image and the portable registry. Diagram step 3 \ud83d\udd17 Ship the status directory from the internet-connected server to the bastion server. You can use tar with gzip mode or any other compression technique. The status directory now holds all assets required for the air-gapped installation and its size can be substantial (100+ GB). You may want to use multi-volume tar files if you are using network transfer. On the bastion server \ud83d\udd17 The bastion server is not connected to the internet but is connected to the private registry and OpenShift cluster. 
Diagram step 4 \ud83d\udd17 See the air-gapped installation of Cloud Pak for Data in action: https://ibm.box.com/v/cpd-air-gapped-install . For the demonstration video, the download of the previous step has first been re-run to only download the Cloud Pak for Data control plane to avoid having to ship and upload ~700 GB. We're using the instructions in Run on existing OpenShift , adding the CPD_AIRGAP environment variable. Restore the status directory onto the bastion server. Make sure the volume to which you restore has enough space to hold the entire status directory, which includes the portable registry. Export the STATUS_DIR environment variable to point to the status directory Untar the cloud-pak-deployer scripts, for example: tar xvzf $STATUS_DIR/cloud-pak-deployer.tar.gz cd cloud-pak-deployer Set the CPD_AIRGAP environment variable to true export CPD_AIRGAP=true Set the environment variable for the oc login command. For example: export CPD_OC_LOGIN=\"oc login api.pluto-01.coc.ibm.com:6443 -u kubeadmin -p BmxQ5-KjBFx-FgztG-gpTF3 --insecure-skip-tls-verify\" Create a vault secret image-registry- holding the connection credentials for the private registry specified in the configuration ( image_registry ). For example for a registry definition with name cpd453 , create secret image-registry-cpd453 . ./cp-deploy.sh vault set \\ -vs image-registry-cpd453 \\ -vsv \"admin:very_s3cret\" Run the ./cp-deploy.sh env apply command to start deployment of the Cloud Pak to the OpenShift cluster. For example: ./cp-deploy.sh env apply The CPD_AIRGAP environment variable tells the deployer it will not download anything from the internet. As a first action, the deployer mirrors images from the portable registry to the private registry included in the configuration ( image_registry ) Running behind a proxy \ud83d\udd17 If the Cloud Pak Deployer is run from a server that has the HTTP proxy environment variables set up, i.e. 
\"proxy\" environment variables are configured on the server and in the terminal session, it will also apply these settings in the deployer container. The following environment variables are automatically applied to the deployer container if set up in the session running the cp-deploy.sh command: http_proxy https_proxy no_proxy If you do not want the deployer to use the proxy environment variables, you must remove them before running the cp-deploy.sh command: unset http_proxy unset https_proxy unset no_proxy Special settings for debug and DaemonSet images in air-gapped mode \ud83d\udd17 Specifically when running the deployer on IBM Cloud ROKS, certain OpenShift settings must be applied using DaemonSets in the kube-system namespace. Additionally, the deployer uses the oc debug node commands to retrieve kubelet and crio configuration files from the compute nodes. The default container images used by the DaemonSets and oc debug node commands are based on Red Hat's Universal Base Image and will be pulled from Red Hat registries. This is typically not possible in air-gapped installations, hence different images must be used. It is your responsibility to copy suitable (preferably UBI) images to an image registry that is connected to the OpenShift cluster. Also, if a pull secret is needed to pull the image(s) from the registry, you must create the associated secret in the kube-system OpenShift project. To configure alternative container images for the deployer to use, set the following properties in the .inv file kept in your configuration's inventory directory, or specify them as additional command line parameters for the cp-deploy.sh command. If you do not set these values, the deployer assumes that the default images are used for DaemonSet and oc debug node . Property Description Example cpd_oc_debug_image Container image to be used for the oc debug command. 
registry.redhat.io/rhel8/support-tools:latest cpd_ds_image Container image to be used for the DaemonSets that configure Kubelet, etc. registry.access.redhat.com/ubi8/ubi:latest","title":"Private registry and air-gapped"},{"location":"50-advanced/private-registry-and-air-gapped/#using-a-private-registry","text":"Some environments, especially those where the OpenShift cluster cannot directly connect to the internet, require a private registry for OpenShift to pull the Cloud Pak images from. The Cloud Pak Deployer can mirror images from the entitled registry to a private registry that you want to use for the Cloud Pak(s). Also, if the infrastructure that holds the OpenShift cluster is fully disconnected from the internet, the Cloud Pak Deployer can build a registry which can be stored on a portable hard disk or pen drive and then shipped to the site. Info Note: In all cases, the deployer can work behind a proxy to access the internet. Go to Running behind a proxy for more information. The below instructions are not limited to disconnected (air-gapped) OpenShift clusters, but are more generic for deployment using a private registry. There are three use cases for mirroring images to a private registry and using this to install the Cloud Pak(s): Use case 1 - Mirror images and install using a bastion server . The bastion server can connect to the internet (directly or via a proxy), to OpenShift and to the private registry used by the OpenShift cluster. Use case 2 - Mirror images with a connected server, install using a bastion . The connected server can connect to the internet and to the private registry used by the OpenShift cluster. The server cannot connect to the OpenShift cluster. The bastion server can connect to the private registry and to the OpenShift cluster. Use case 3 - Mirror images using a portable image registry . The private registry used by the OpenShift cluster cannot be reached from the server that is connected to the internet. 
You need a portable registry to download images to, which you then ship to a server that can connect to the existing OpenShift cluster and its private registry. Use cases 1 and 3 are also outlined in the Cloud Pak for Data installation documentation: https://www.ibm.com/docs/en/cloud-paks/cp-data/4.5.x?topic=tasks-mirroring-images-your-private-container-registry For specifying a private registry in the Cloud Pak Deployer configuration, please see Private registry . Example of specifying a private registry with a self-signed certificate in the configuration: image_registry: - name: cpd453 registry_host_name: registry.coc.ibm.com registry_port: 5000 registry_insecure: True The cp4d instance must reference the image_registry object using the image_registry_name : cp4d: - project: zen-45 openshift_cluster_name: {{ env_id }} cp4d_version: 4.5.3 openshift_storage_name: ocs-storage image_registry_name: cpd453 Info The deployer only supports using a private registry for the Cloud Pak images, not for OpenShift itself. Air-gapped installation of OpenShift is currently not in scope for the deployer. Warning The registry_host_name you specify in the image_registry definition must also be available for DNS lookup within OpenShift. If the registry runs on a server that is not registered in the DNS, use its IP address instead of a host name. 
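Following the warning above, a registry host that is not registered in the DNS would be referenced by IP address instead. A hedged variant of the earlier `image_registry` example (the IP address is made up for illustration):

```yaml
image_registry:
- name: cpd453
  registry_host_name: 10.100.0.15   # example IP; replace with your registry's address
  registry_port: 5000
  registry_insecure: True
```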
The three main directories that are needed for both types of air-gapped installations are: Cloud Pak Deployer directory: cloud-pak-deployer Configuration directory: The directory that holds all the Cloud Pak Deployer configuration Status directory: The directory that will hold all downloads, vault secrets and the portable registry when applicable (use case 3) For use cases 2 and 3, where the directories must be shipped to the air-gapped cluster, the Cloud Pak Deployer and Configuration directories will be stored in the Status directory for simplicity.","title":"Using a private registry"},{"location":"50-advanced/private-registry-and-air-gapped/#use-case-1---mirror-images-and-install-using-a-bastion-server","text":"This is effectively a \"not-air-gapped\" scenario, where the following conditions apply: The private registry is hosted inside the private cloud The bastion server can connect to the internet and mirror images to the private image registry The bastion server is optionally connected to the internet via a proxy server. See Running behind a proxy for more details The bastion server can connect to OpenShift","title":"Use case 1 - Mirror images and install using a bastion server"},{"location":"50-advanced/private-registry-and-air-gapped/#on-the-bastion-server","text":"The bastion server is connected to the internet and OpenShift cluster. If there are restrictions regarding the internet sites that can be reached, ensure that the website domains the deployer needs are whitelisted. 
For a list of domains, check locations to whitelist If a proxy server is configured for the bastion node, check the settings ( http_proxy , https_proxy , no_proxy environment variables) Build the Cloud Pak Deployer image using ./cp-deploy.sh build Create or update the directory with the configuration; make sure all your Cloud Paks and cartridges are specified as well as an image_registry entry to identify the private registry Export the CONFIG_DIR and STATUS_DIR environment variables to respectively point to the configuration directory and the status directory Export the CP_ENTITLEMENT_KEY environment variable with your Cloud Pak entitlement key Create a vault secret image-registry- holding the connection credentials for the private registry specified in the configuration ( image_registry ). For example for a registry definition with name cpd453 , create secret image-registry-cpd453 . ./cp-deploy.sh vault set \\ -vs image-registry-cpd453 \\ -vsv \"admin:very_s3cret\" Set the environment variable for the oc login command. For example: export CPD_OC_LOGIN=\"oc login api.pluto-01.coc.ibm.com:6443 -u kubeadmin -p BmxQ5-KjBFx-FgztG-gpTF3 --insecure-skip-tls-verify\" Run the ./cp-deploy.sh env apply command to start deployment of the Cloud Pak to the OpenShift cluster. For example: ./cp-deploy.sh env apply The existence of the image_registry definition and its reference in the cp4d definition instruct the deployer to mirror images to the private registry and to configure the OpenShift cluster to pull images from the private registry. 
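The vault secret name in the steps above is always the fixed prefix image-registry- followed by the name of the image_registry definition; a minimal sketch of that naming convention, using the cpd453 example from the text:

```shell
# The secret name is "image-registry-" plus the name: field of the image_registry entry
REGISTRY_NAME=cpd453
SECRET_NAME="image-registry-${REGISTRY_NAME}"
echo "$SECRET_NAME"
# The secret value holds the registry credentials as "user:password", e.g.:
#   ./cp-deploy.sh vault set -vs "$SECRET_NAME" -vsv "admin:very_s3cret"
```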
If you have already mirrored the Cloud Pak images, you can add the --skip-mirror-images parameter to speed up the deployment process.","title":"On the bastion server"},{"location":"50-advanced/private-registry-and-air-gapped/#use-case-2---mirror-images-with-an-internet-connected-server-install-using-a-bastion","text":"This use case is also sometimes referred to as \"semi-air-gapped\", where the following conditions apply: The private registry is hosted outside of the private cloud that hosts the bastion server and OpenShift An internet-connected server external to the private cloud can reach the entitled registry and the private registry The internet-connected server is optionally connected to the internet via a proxy server. See Running behind a proxy for more details The bastion server cannot connect to the internet The bastion server can connect to OpenShift Warning Please note that in this case the Cloud Pak Deployer expects an OpenShift cluster to be available already and will only work with an existing-ocp configuration. The bastion server does not have access to the internet and can therefore not instantiate an OpenShift cluster.","title":"Use case 2 - Mirror images with an internet-connected server, install using a bastion"},{"location":"50-advanced/private-registry-and-air-gapped/#on-the-internet-connected-server","text":"If there are restrictions regarding the internet sites that can be reached, ensure that the website domains the deployer needs are whitelisted. 
For a list of domains, check locations to whitelist If a proxy server is configured for the internet-connected server, check the settings ( http_proxy , https_proxy , no_proxy environment variables) Build the Cloud Pak Deployer image using ./cp-deploy.sh build Create or update the directory with the configuration; make sure all your Cloud Paks and cartridges are specified as well as an image_registry entry to identify the private registry Export the CONFIG_DIR and STATUS_DIR environment variables to respectively point to the configuration directory and the status directory Export the CP_ENTITLEMENT_KEY environment variable with your Cloud Pak entitlement key Create a vault secret image-registry- holding the connection credentials for the private registry specified in the configuration ( image_registry ). For example for a registry definition with name cpd453 , create secret image-registry-cpd453 . ./cp-deploy.sh vault set \\ -vs image-registry-cpd453 \\ -vsv \"admin:very_s3cret\" If the status directory does not exist it is created at this point.","title":"On the internet-connected server"},{"location":"50-advanced/private-registry-and-air-gapped/#diagram-step-1","text":"Run the deployer using the ./cp-deploy.sh env download --skip-portable-registry command. For example: ./cp-deploy.sh env download \\ --skip-portable-registry This will download all clients to the status directory and then mirror images from the entitled registry to the private registry. If mirroring fails, fix the issue and just run the env download again. 
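The CONFIG_DIR and STATUS_DIR exports in the preparation steps above might look like the following sketch; the paths are examples only (CONFIG_DIR matching the layout used elsewhere in these docs, STATUS_DIR an arbitrary choice):

```shell
# Example locations; use your own directories
export CONFIG_DIR=$HOME/cpd-config   # holds the config/ directory with the yaml files
export STATUS_DIR=$HOME/cpd-status   # receives downloads, vault secrets and logs
mkdir -p "$CONFIG_DIR/config" "$STATUS_DIR"
```

The deployer creates the status directory on first use if it does not exist yet.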
Before saving the status directory, you can optionally remove the entitlement key from the vault: ./cp-deploy.sh vault delete \\ -vs ibm_cp_entitlement_key","title":"Diagram step 1"},{"location":"50-advanced/private-registry-and-air-gapped/#diagram-step-2","text":"When the download has finished successfully, the status directory holds the deployer scripts, the configuration directory and the deployer container image.","title":"Diagram step 2"},{"location":"50-advanced/private-registry-and-air-gapped/#diagram-step-3","text":"Ship the status directory from the internet-connected server to the bastion server. You can use tar with gzip mode or any other compression technique. The total size of the directories should be relatively small, typically < 5 GB.","title":"Diagram step 3"},{"location":"50-advanced/private-registry-and-air-gapped/#on-the-bastion-server_1","text":"The bastion server is not connected to the internet but is connected to the private registry and the OpenShift cluster.","title":"On the bastion server"},{"location":"50-advanced/private-registry-and-air-gapped/#diagram-step-4","text":"We're using the instructions in Run on existing OpenShift , adding the CPD_AIRGAP environment variable and the --skip-mirror-images flag, to start the deployer: Restore the status directory onto the bastion server Export the STATUS_DIR environment variable to point to the status directory Untar the cloud-pak-deployer scripts, for example: tar xvzf $STATUS_DIR/cloud-pak-deployer.tar.gz Set the CPD_AIRGAP environment variable to true export CPD_AIRGAP=true Set the environment variable for the oc login command. For example: export CPD_OC_LOGIN=\"oc login api.pluto-01.coc.ibm.com:6443 -u kubeadmin -p BmxQ5-KjBFx-FgztG-gpTF3 --insecure-skip-tls-verify\" Run the ./cp-deploy.sh env apply --skip-mirror-images command to start deployment of the Cloud Pak to the OpenShift cluster. 
For example: cd cloud-pak-deployer ./cp-deploy.sh env apply \\ --skip-mirror-images The CPD_AIRGAP environment variable tells the deployer it will not download anything from the internet; --skip-mirror-images indicates that images are already available in the private registry that is included in the configuration ( image_registry )","title":"Diagram step 4"},{"location":"50-advanced/private-registry-and-air-gapped/#use-case-3---mirror-images-using-a-portable-image-registry","text":"This use case is also usually referred to as \"air-gapped\", where the following conditions apply: The private registry is hosted in the private cloud that hosts the bastion server and OpenShift The bastion server cannot connect to the internet The bastion server can connect to the private registry and the OpenShift cluster The internet-connected server cannot connect to the private cloud The internet-connected server is optionally connected to the internet via a proxy server. See Running behind a proxy for more details You need a portable registry to fill the private registry with the Cloud Pak images Warning Please note that in this case the Cloud Pak Deployer expects an OpenShift cluster to be available already and will only work with an existing-ocp configuration. The bastion server does not have access to the internet and can therefore not instantiate an OpenShift cluster.","title":"Use case 3 - Mirror images using a portable image registry"},{"location":"50-advanced/private-registry-and-air-gapped/#on-the-internet-connected-server_1","text":"If there are restrictions regarding the internet sites that can be reached, ensure that the website domains the deployer needs are whitelisted. 
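As the conditions above note, the internet-connected server may sit behind a proxy, and the deployer picks up the session's proxy variables automatically. Clearing them before running cp-deploy.sh looks like this sketch (the proxy URL is an example value set purely for illustration):

```shell
# Illustration only: set an example proxy variable, then clear all proxy variables
export http_proxy=http://proxy.example.com:3128
unset http_proxy https_proxy no_proxy

# Confirm nothing proxy-related remains in the session environment
env | grep -E '^(http_proxy|https_proxy|no_proxy)=' || echo "no proxy variables set"
```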
For a list of domains, check locations to whitelist If a proxy server is configured for the bastion node, check the settings ( http_proxy , https_proxy , no_proxy environment variables) Build the Cloud Pak Deployer image using ./cp-deploy.sh build Create or update the directory with the configuration, making sure all your Cloud Paks and cartridges are specified Export the CONFIG_DIR and STATUS_DIR environment variables to respectively point to the configuration directory and the status directory Export the CP_ENTITLEMENT_KEY environment variable with your Cloud Pak entitlement key","title":"On the internet-connected server"},{"location":"50-advanced/private-registry-and-air-gapped/#diagram-step-1_1","text":"Run the deployer using the ./cp-deploy.sh env download command. For example: ./cp-deploy.sh env download This will download all clients, start the portable registry and then mirror images from the entitled registry to the portable registry . The portable registry data is kept in the status directory. If mirroring fails, fix the issue and just run the env download again. Before saving the status directory, you can optionally remove the entitlement key from the vault: ./cp-deploy.sh vault delete \\ -vs ibm_cp_entitlement_key See the download of watsonx.ai in action: https://ibm.box.com/v/cpd-air-gapped-download","title":"Diagram step 1"},{"location":"50-advanced/private-registry-and-air-gapped/#diagram-step-2_1","text":"When the download has finished successfully, the status directory holds the deployer scripts, the configuration directory, the deployer container image and the portable registry.","title":"Diagram step 2"},{"location":"50-advanced/private-registry-and-air-gapped/#diagram-step-3_1","text":"Ship the status directory from the internet-connected server to the bastion server. You can use tar with gzip mode or any other compression technique. 
The status directory now holds all assets required for the air-gapped installation and its size can be substantial (100+ GB). You may want to use multi-volume tar files if you are using network transfer.","title":"Diagram step 3"},{"location":"50-advanced/private-registry-and-air-gapped/#on-the-bastion-server_2","text":"The bastion server is not connected to the internet but is connected to the private registry and OpenShift cluster.","title":"On the bastion server"},{"location":"50-advanced/private-registry-and-air-gapped/#diagram-step-4_1","text":"See the air-gapped installation of Cloud Pak for Data in action: https://ibm.box.com/v/cpd-air-gapped-install . For the demonstration video, the download of the previous step has first been re-run to only download the Cloud Pak for Data control plane to avoid having to ship and upload ~700 GB. We're using the instructions in Run on existing OpenShift , adding the CPD_AIRGAP environment variable. Restore the status directory onto the bastion server. Make sure the volume to which you restore has enough space to hold the entire status directory, which includes the portable registry. Export the STATUS_DIR environment variable to point to the status directory Untar the cloud-pak-deployer scripts, for example: tar xvzf $STATUS_DIR/cloud-pak-deployer.tar.gz cd cloud-pak-deployer Set the CPD_AIRGAP environment variable to true export CPD_AIRGAP=true Set the environment variable for the oc login command. For example: export CPD_OC_LOGIN=\"oc login api.pluto-01.coc.ibm.com:6443 -u kubeadmin -p BmxQ5-KjBFx-FgztG-gpTF3 --insecure-skip-tls-verify\" Create a vault secret image-registry- holding the connection credentials for the private registry specified in the configuration ( image_registry ). For example for a registry definition with name cpd453 , create secret image-registry-cpd453 . 
./cp-deploy.sh vault set \\ -vs image-registry-cpd453 \\ -vsv \"admin:very_s3cret\" Run the ./cp-deploy.sh env apply command to start deployment of the Cloud Pak to the OpenShift cluster. For example: ./cp-deploy.sh env apply The CPD_AIRGAP environment variable tells the deployer it will not download anything from the internet. As a first action, the deployer mirrors images from the portable registry to the private registry included in the configuration ( image_registry )","title":"Diagram step 4"},{"location":"50-advanced/private-registry-and-air-gapped/#running-behind-a-proxy","text":"If the Cloud Pak Deployer is run from a server that has the HTTP proxy environment variables set up, i.e. \"proxy\" environment variables are configured on the server and in the terminal session, it will also apply these settings in the deployer container. The following environment variables are automatically applied to the deployer container if set up in the session running the cp-deploy.sh command: http_proxy https_proxy no_proxy If you do not want the deployer to use the proxy environment variables, you must remove them before running the cp-deploy.sh command: unset http_proxy unset https_proxy unset no_proxy","title":"Running behind a proxy"},{"location":"50-advanced/private-registry-and-air-gapped/#special-settings-for-debug-and-daemonset-images-in-air-gapped-mode","text":"Specifically when running the deployer on IBM Cloud ROKS, certain OpenShift settings must be applied using DaemonSets in the kube-system namespace. Additionally, the deployer uses the oc debug node commands to retrieve kubelet and crio configuration files from the compute nodes. The default container images used by the DaemonSets and oc debug node commands are based on Red Hat's Universal Base Image and will be pulled from Red Hat registries. This is typically not possible in air-gapped installations, hence different images must be used. 
It is your responsibility to copy suitable (preferably UBI) images to an image registry that is connected to the OpenShift cluster. Also, if a pull secret is needed to pull the image(s) from the registry, you must create the associated secret in the kube-system OpenShift project. To configure alternative container images for the deployer to use, set the following properties in the .inv file kept in your configuration's inventory directory, or specify them as additional command line parameters for the cp-deploy.sh command. If you do not set these values, the deployer assumes that the default images are used for DaemonSet and oc debug node . Property Description Example cpd_oc_debug_image Container image to be used for the oc debug command. registry.redhat.io/rhel8/support-tools:latest cpd_ds_image Container image to be used for the DaemonSets that configure Kubelet, etc. registry.access.redhat.com/ubi8/ubi:latest","title":"Special settings for debug and DaemonSet images in air-gapped mode"},{"location":"50-advanced/run-on-openshift/build-image-and-run-deployer-on-openshift/","text":"Build image and run deployer on OpenShift \ud83d\udd17 Create configuration \ud83d\udd17 export CONFIG_DIR=$HOME/cpd-config && mkdir -p $CONFIG_DIR/config cat << EOF > $CONFIG_DIR/config/cpd-config.yaml --- global_config: environment_name: demo cloud_platform: existing-ocp confirm_destroy: False openshift: - name: cpd-demo ocp_version: \"4.10\" cluster_name: cpd-demo domain_name: example.com openshift_storage: - storage_name: nfs-storage storage_type: nfs cp4d: - project: cpd-instance openshift_cluster_name: cpd-demo cp4d_version: 4.6.0 sequential_install: True accept_licenses: True cartridges: - name: cp-foundation license_service: state: disabled threads_per_core: 2 - name: lite # # All tested cartridges. To install, change the \"state\" property to \"installed\". To uninstall, change the state # to \"removed\" or comment out the entire cartridge. 
Make sure that the \"-\" and properties are aligned with the lite # cartridge; the \"-\" is at position 3 and the property starts at position 5. # - name: analyticsengine size: small state: removed - name: bigsql state: removed - name: ca size: small instances: - name: ca-instance metastore_ref: ca-metastore state: removed - name: cde state: removed - name: datagate state: removed - name: datastage-ent-plus state: removed # instances: # - name: ds-instance # # Optional settings # description: \"datastage ds-instance\" # size: medium # storage_class: efs-nfs-client # storage_size_gb: 60 # # Custom Scale options # scale_px_runtime: # replicas: 2 # cpu_request: 500m # cpu_limit: 2 # memory_request: 2Gi # memory_limit: 4Gi # scale_px_compute: # replicas: 2 # cpu_request: 1 # cpu_limit: 3 # memory_request: 4Gi # memory_limit: 12Gi - name: db2 size: small instances: - name: ca-metastore metadata_size_gb: 20 data_size_gb: 20 backup_size_gb: 20 transactionlog_size_gb: 20 state: removed - name: db2wh state: removed - name: dmc state: removed - name: dods size: small state: removed - name: dp size: small state: removed - name: dv size: small instances: - name: data-virtualization state: removed - name: hadoop size: small state: removed - name: mdm size: small wkc_enabled: true state: removed - name: openpages state: removed - name: planning-analytics state: removed - name: rstudio size: small state: removed - name: spss state: removed - name: voice-gateway replicas: 1 state: removed - name: watson-assistant size: small state: removed - name: watson-discovery state: removed - name: watson-ks size: small state: removed - name: watson-openscale size: small state: removed - name: watson-speech stt_size: xsmall tts_size: xsmall state: removed - name: wkc size: small state: removed - name: wml size: small state: installed - name: wml-accelerator replicas: 1 size: small state: removed - name: wsl state: installed EOF Log in to the OpenShift cluster \ud83d\udd17 Log in as a cluster
administrator to be able to run the deployer with the correct permissions. Prepare the deployer project \ud83d\udd17 oc new-project cloud-pak-deployer oc project cloud-pak-deployer oc create serviceaccount cloud-pak-deployer-sa oc adm policy add-scc-to-user privileged -z cloud-pak-deployer-sa oc adm policy add-cluster-role-to-user cluster-admin -z cloud-pak-deployer-sa Build deployer image and push to the internal registry \ud83d\udd17 Building the deployer image typically takes ~5 minutes. Only do this if the image has not been built yet. cat << EOF | oc apply -f - apiVersion: image.openshift.io/v1 kind: ImageStream metadata: name: cloud-pak-deployer spec: lookupPolicy: local: true EOF cat << EOF | oc create -f - kind: Build apiVersion: build.openshift.io/v1 metadata: generateName: cloud-pak-deployer-bc- namespace: cloud-pak-deployer spec: serviceAccount: builder source: type: Git git: uri: 'https://github.com/IBM/cloud-pak-deployer' ref: wizard strategy: type: Docker dockerStrategy: buildArgs: - name: CPD_OLM_UTILS_V1_IMAGE value: icr.io/cpopen/cpd/olm-utils:latest - name: CPD_OLM_UTILS_V2_IMAGE value: icr.io/cpopen/cpd/olm-utils-v2:latest output: to: kind: ImageStreamTag name: 'cloud-pak-deployer:latest' triggeredBy: - message: Manually triggered EOF Now, wait until the deployer image has been built. 
oc get build -n cloud-pak-deployer -w Set configuration \ud83d\udd17 oc create cm -n cloud-pak-deployer cloud-pak-deployer-config oc set data -n cloud-pak-deployer cm/cloud-pak-deployer-config \\ --from-file=$CONFIG_DIR/config Start the deployer job \ud83d\udd17 export CP_ENTITLEMENT_KEY=your_entitlement_key cat << EOF | oc apply -f - apiVersion: v1 kind: PersistentVolumeClaim metadata: name: cloud-pak-deployer-status namespace: cloud-pak-deployer spec: accessModes: - ReadWriteMany resources: requests: storage: 10Gi EOF cat << EOF | oc apply -f - apiVersion: batch/v1 kind: Job metadata: labels: app: cloud-pak-deployer name: cloud-pak-deployer namespace: cloud-pak-deployer spec: parallelism: 1 completions: 1 backoffLimit: 0 template: metadata: name: cloud-pak-deployer labels: app: cloud-pak-deployer spec: containers: - name: cloud-pak-deployer image: cloud-pak-deployer:latest imagePullPolicy: Always terminationMessagePath: /dev/termination-log terminationMessagePolicy: File env: - name: CONFIG_DIR value: /Data/cpd-config - name: STATUS_DIR value: /Data/cpd-status - name: CP_ENTITLEMENT_KEY value: ${CP_ENTITLEMENT_KEY} volumeMounts: - name: config-volume mountPath: /Data/cpd-config/config - name: status-volume mountPath: /Data/cpd-status command: [\"/bin/sh\",\"-xc\"] args: - /cloud-pak-deployer/cp-deploy.sh env apply -v restartPolicy: Never securityContext: runAsUser: 0 serviceAccountName: cloud-pak-deployer-sa volumes: - name: config-volume configMap: name: cloud-pak-deployer-config - name: status-volume persistentVolumeClaim: claimName: cloud-pak-deployer-status EOF Optional: start debug job \ud83d\udd17 The debug job can be useful if you want to access the status directory of the deployer if the deployer job has failed. 
cat << EOF | oc apply -f - apiVersion: batch/v1 kind: Job metadata: labels: app: cloud-pak-deployer-debug name: cloud-pak-deployer-debug namespace: cloud-pak-deployer spec: parallelism: 1 completions: 1 backoffLimit: 0 template: metadata: name: cloud-pak-deployer-debug labels: app: cloud-pak-deployer-debug spec: containers: - name: cloud-pak-deployer-debug image: cloud-pak-deployer:latest imagePullPolicy: Always terminationMessagePath: /dev/termination-log terminationMessagePolicy: File env: - name: CONFIG_DIR value: /Data/cpd-config - name: STATUS_DIR value: /Data/cpd-status volumeMounts: - name: config-volume mountPath: /Data/cpd-config/config - name: status-volume mountPath: /Data/cpd-status command: [\"/bin/sh\",\"-xc\"] args: - sleep infinity restartPolicy: Never securityContext: runAsUser: 0 serviceAccountName: cloud-pak-deployer-sa volumes: - name: config-volume configMap: name: cloud-pak-deployer-config - name: status-volume persistentVolumeClaim: claimName: cloud-pak-deployer-status EOF Follow the logs of the deployment \ud83d\udd17 oc logs -f -n cloud-pak-deployer job/cloud-pak-deployer In some cases, especially if the OpenShift cluster is remote from where the oc command is running, the oc logs -f command may terminate abruptly.","title":"Build image and run deployer on OpenShift"},{"location":"50-advanced/run-on-openshift/build-image-and-run-deployer-on-openshift/#build-image-and-run-deployer-on-openshift","text":"","title":"Build image and run deployer on OpenShift"},{"location":"50-advanced/run-on-openshift/build-image-and-run-deployer-on-openshift/#create-configuration","text":"export CONFIG_DIR=$HOME/cpd-config && mkdir -p $CONFIG_DIR/config cat << EOF > $CONFIG_DIR/config/cpd-config.yaml --- global_config: environment_name: demo cloud_platform: existing-ocp confirm_destroy: False openshift: - name: cpd-demo ocp_version: \"4.10\" cluster_name: cpd-demo domain_name: example.com openshift_storage: - storage_name: nfs-storage storage_type: nfs cp4d: - 
project: cpd-instance openshift_cluster_name: cpd-demo cp4d_version: 4.6.0 sequential_install: True accept_licenses: True cartridges: - name: cp-foundation license_service: state: disabled threads_per_core: 2 - name: lite # # All tested cartridges. To install, change the \"state\" property to \"installed\". To uninstall, change the state # to \"removed\" or comment out the entire cartridge. Make sure that the \"-\" and properties are aligned with the lite # cartridge; the \"-\" is at position 3 and the property starts at position 5. # - name: analyticsengine size: small state: removed - name: bigsql state: removed - name: ca size: small instances: - name: ca-instance metastore_ref: ca-metastore state: removed - name: cde state: removed - name: datagate state: removed - name: datastage-ent-plus state: removed # instances: # - name: ds-instance # # Optional settings # description: \"datastage ds-instance\" # size: medium # storage_class: efs-nfs-client # storage_size_gb: 60 # # Custom Scale options # scale_px_runtime: # replicas: 2 # cpu_request: 500m # cpu_limit: 2 # memory_request: 2Gi # memory_limit: 4Gi # scale_px_compute: # replicas: 2 # cpu_request: 1 # cpu_limit: 3 # memory_request: 4Gi # memory_limit: 12Gi - name: db2 size: small instances: - name: ca-metastore metadata_size_gb: 20 data_size_gb: 20 backup_size_gb: 20 transactionlog_size_gb: 20 state: removed - name: db2wh state: removed - name: dmc state: removed - name: dods size: small state: removed - name: dp size: small state: removed - name: dv size: small instances: - name: data-virtualization state: removed - name: hadoop size: small state: removed - name: mdm size: small wkc_enabled: true state: removed - name: openpages state: removed - name: planning-analytics state: removed - name: rstudio size: small state: removed - name: spss state: removed - name: voice-gateway replicas: 1 state: removed - name: watson-assistant size: small state: removed - name: watson-discovery state: removed - name: 
watson-ks size: small state: removed - name: watson-openscale size: small state: removed - name: watson-speech stt_size: xsmall tts_size: xsmall state: removed - name: wkc size: small state: removed - name: wml size: small state: installed - name: wml-accelerator replicas: 1 size: small state: removed - name: wsl state: installed EOF","title":"Create configuration"},{"location":"50-advanced/run-on-openshift/build-image-and-run-deployer-on-openshift/#log-in-to-the-openshift-cluster","text":"Log in as a cluster administrator to be able to run the deployer with the correct permissions.","title":"Log in to the OpenShift cluster"},{"location":"50-advanced/run-on-openshift/build-image-and-run-deployer-on-openshift/#prepare-the-deployer-project","text":"oc new-project cloud-pak-deployer oc project cloud-pak-deployer oc create serviceaccount cloud-pak-deployer-sa oc adm policy add-scc-to-user privileged -z cloud-pak-deployer-sa oc adm policy add-cluster-role-to-user cluster-admin -z cloud-pak-deployer-sa","title":"Prepare the deployer project"},{"location":"50-advanced/run-on-openshift/build-image-and-run-deployer-on-openshift/#build-deployer-image-and-push-to-the-internal-registry","text":"Building the deployer image typically takes ~5 minutes. Only do this if the image has not been built yet. 
cat << EOF | oc apply -f - apiVersion: image.openshift.io/v1 kind: ImageStream metadata: name: cloud-pak-deployer spec: lookupPolicy: local: true EOF cat << EOF | oc create -f - kind: Build apiVersion: build.openshift.io/v1 metadata: generateName: cloud-pak-deployer-bc- namespace: cloud-pak-deployer spec: serviceAccount: builder source: type: Git git: uri: 'https://github.com/IBM/cloud-pak-deployer' ref: wizard strategy: type: Docker dockerStrategy: buildArgs: - name: CPD_OLM_UTILS_V1_IMAGE value: icr.io/cpopen/cpd/olm-utils:latest - name: CPD_OLM_UTILS_V2_IMAGE value: icr.io/cpopen/cpd/olm-utils-v2:latest output: to: kind: ImageStreamTag name: 'cloud-pak-deployer:latest' triggeredBy: - message: Manually triggered EOF Now, wait until the deployer image has been built. oc get build -n cloud-pak-deployer -w","title":"Build deployer image and push to the internal registry"},{"location":"50-advanced/run-on-openshift/build-image-and-run-deployer-on-openshift/#set-configuration","text":"oc create cm -n cloud-pak-deployer cloud-pak-deployer-config oc set data -n cloud-pak-deployer cm/cloud-pak-deployer-config \\ --from-file=$CONFIG_DIR/config","title":"Set configuration"},{"location":"50-advanced/run-on-openshift/build-image-and-run-deployer-on-openshift/#start-the-deployer-job","text":"export CP_ENTITLEMENT_KEY=your_entitlement_key cat << EOF | oc apply -f - apiVersion: v1 kind: PersistentVolumeClaim metadata: name: cloud-pak-deployer-status namespace: cloud-pak-deployer spec: accessModes: - ReadWriteMany resources: requests: storage: 10Gi EOF cat << EOF | oc apply -f - apiVersion: batch/v1 kind: Job metadata: labels: app: cloud-pak-deployer name: cloud-pak-deployer namespace: cloud-pak-deployer spec: parallelism: 1 completions: 1 backoffLimit: 0 template: metadata: name: cloud-pak-deployer labels: app: cloud-pak-deployer spec: containers: - name: cloud-pak-deployer image: cloud-pak-deployer:latest imagePullPolicy: Always terminationMessagePath: /dev/termination-log 
terminationMessagePolicy: File env: - name: CONFIG_DIR value: /Data/cpd-config - name: STATUS_DIR value: /Data/cpd-status - name: CP_ENTITLEMENT_KEY value: ${CP_ENTITLEMENT_KEY} volumeMounts: - name: config-volume mountPath: /Data/cpd-config/config - name: status-volume mountPath: /Data/cpd-status command: [\"/bin/sh\",\"-xc\"] args: - /cloud-pak-deployer/cp-deploy.sh env apply -v restartPolicy: Never securityContext: runAsUser: 0 serviceAccountName: cloud-pak-deployer-sa volumes: - name: config-volume configMap: name: cloud-pak-deployer-config - name: status-volume persistentVolumeClaim: claimName: cloud-pak-deployer-status EOF","title":"Start the deployer job"},{"location":"50-advanced/run-on-openshift/build-image-and-run-deployer-on-openshift/#optional-start-debug-job","text":"The debug job can be useful if you want to access the status directory of the deployer when the deployer job has failed. cat << EOF | oc apply -f - apiVersion: batch/v1 kind: Job metadata: labels: app: cloud-pak-deployer-debug name: cloud-pak-deployer-debug namespace: cloud-pak-deployer spec: parallelism: 1 completions: 1 backoffLimit: 0 template: metadata: name: cloud-pak-deployer-debug labels: app: cloud-pak-deployer-debug spec: containers: - name: cloud-pak-deployer-debug image: cloud-pak-deployer:latest imagePullPolicy: Always terminationMessagePath: /dev/termination-log terminationMessagePolicy: File env: - name: CONFIG_DIR value: /Data/cpd-config - name: STATUS_DIR value: /Data/cpd-status volumeMounts: - name: config-volume mountPath: /Data/cpd-config/config - name: status-volume mountPath: /Data/cpd-status command: [\"/bin/sh\",\"-xc\"] args: - sleep infinity restartPolicy: Never securityContext: runAsUser: 0 serviceAccountName: cloud-pak-deployer-sa volumes: - name: config-volume configMap: name: cloud-pak-deployer-config - name: status-volume persistentVolumeClaim: claimName: cloud-pak-deployer-status EOF","title":"Optional: start debug
job"},{"location":"50-advanced/run-on-openshift/build-image-and-run-deployer-on-openshift/#follow-the-logs-of-the-deployment","text":"oc logs -f -n cloud-pak-deployer job/cloud-pak-deployer In some cases, especially if the OpenShift cluster is remote from where the oc command is running, the oc logs -f command may terminate abruptly.","title":"Follow the logs of the deployment"},{"location":"50-advanced/run-on-openshift/run-deployer-on-openshift-using-console/","text":"Running deployer on OpenShift using console \ud83d\udd17 See the deployer in action deploying IBM watsonx.ai on an existing OpenShift cluster in this video: https://ibm.box.com/v/cpd-wxai-existing-ocp Log in to the OpenShift cluster \ud83d\udd17 Log in as a cluster administrator to be able to run the deployer with the correct permissions. Prepare the deployer project and the storage \ud83d\udd17 Go to the OpenShift console Click the \"+\" sign at the top of the page Paste the following block (exactly) into the window --- apiVersion: v1 kind: Namespace metadata: creationTimestamp: null name: cloud-pak-deployer --- apiVersion: v1 kind: ServiceAccount metadata: name: cloud-pak-deployer-sa namespace: cloud-pak-deployer --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: system:openshift:scc:privileged namespace: cloud-pak-deployer roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:openshift:scc:privileged subjects: - kind: ServiceAccount name: cloud-pak-deployer-sa namespace: cloud-pak-deployer --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: cloud-pak-deployer-cluster-admin roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cluster-admin subjects: - kind: ServiceAccount name: cloud-pak-deployer-sa namespace: cloud-pak-deployer Set the entitlement key \ud83d\udd17 Update the secret below with your Cloud Pak entitlement key. Make sure the key is indented exactly as below.
Go to the OpenShift console Click the \"+\" sign at the top of the page Paste the following block, adjusting where needed --- apiVersion: v1 kind: Secret metadata: name: cloud-pak-entitlement-key namespace: cloud-pak-deployer type: Opaque stringData: cp-entitlement-key: | YOUR_ENTITLEMENT_KEY Configure the Cloud Paks and service to be deployed \ud83d\udd17 Update the configuration below to match what you want to deploy, do not change the indentation Go to the OpenShift console Click the \"+\" sign at the top of the page Paste the following block (exactly) into the window --- apiVersion: v1 kind: ConfigMap metadata: name: cloud-pak-deployer-config namespace: cloud-pak-deployer data: cpd-config.yaml: | global_config: environment_name: demo cloud_platform: existing-ocp confirm_destroy: False openshift: - name: cpd-demo ocp_version: \"4.12\" cluster_name: cpd-demo domain_name: example.com mcg: install: False storage_type: storage-class storage_class: managed-nfs-storage gpu: install: False openshift_storage: - storage_name: auto-storage storage_type: auto cp4d: - project: cpd openshift_cluster_name: cpd-demo cp4d_version: 4.8.1 sequential_install: False accept_licenses: True cartridges: - name: cp-foundation license_service: state: disabled threads_per_core: 2 - name: lite - name: scheduler state: removed - name: analyticsengine description: Analytics Engine Powered by Apache Spark size: small state: removed - name: bigsql description: Db2 Big SQL state: removed - name: ca description: Cognos Analytics size: small instances: - name: ca-instance metastore_ref: ca-metastore state: removed - name: dashboard description: Cognos Dashboards state: removed - name: datagate description: Db2 Data Gate state: removed - name: datastage-ent description: DataStage Enterprise state: removed - name: datastage-ent-plus description: DataStage Enterprise Plus state: removed # instances: # - name: ds-instance # # Optional settings # description: \"datastage ds-instance\" # size: medium #
storage_class: efs-nfs-client # storage_size_gb: 60 # # Custom Scale options # scale_px_runtime: # replicas: 2 # cpu_request: 500m # cpu_limit: 2 # memory_request: 2Gi # memory_limit: 4Gi # scale_px_compute: # replicas: 2 # cpu_request: 1 # cpu_limit: 3 # memory_request: 4Gi # memory_limit: 12Gi - name: db2 description: Db2 OLTP size: small instances: - name: ca-metastore metadata_size_gb: 20 data_size_gb: 20 backup_size_gb: 20 transactionlog_size_gb: 20 state: removed - name: db2wh description: Db2 Warehouse state: removed - name: dmc description: Db2 Data Management Console state: removed - name: dods description: Decision Optimization size: small state: removed - name: dp description: Data Privacy size: small state: removed - name: dpra description: Data Privacy Risk Assessment state: removed - name: dv description: Data Virtualization size: small instances: - name: data-virtualization state: removed # Please note that for EDB Postgres, a secret edb-postgres-license-key must be created in the vault # before deploying - name: edb_cp4d description: EDB Postgres state: removed instances: - name: instance1 version: \"15.4\" #type: Standard #members: 1 #size_gb: 50 #resource_request_cpu: 1 #resource_request_memory: 4Gi #resource_limit_cpu: 1 #resource_limit_memory: 4Gi - name: factsheet description: AI Factsheets size: small state: removed - name: hadoop description: Execution Engine for Apache Hadoop size: small state: removed - name: mantaflow description: MANTA Automated Lineage size: small state: removed - name: match360 description: IBM Match 360 size: small wkc_enabled: true state: removed - name: openpages description: OpenPages state: removed # For Planning Analytics, the case version is needed due to a defect in olm utils - name: planning-analytics description: Planning Analytics state: removed - name: replication description: Data Replication license: IDRC size: small state: removed - name: rstudio description: RStudio Server with R 3.6 size: small state:
removed - name: spss description: SPSS Modeler state: removed - name: voice-gateway description: Voice Gateway replicas: 1 state: removed - name: watson-assistant description: Watson Assistant size: small # noobaa_account_secret: noobaa-admin # noobaa_cert_secret: noobaa-s3-serving-cert state: removed - name: watson-discovery description: Watson Discovery # noobaa_account_secret: noobaa-admin # noobaa_cert_secret: noobaa-s3-serving-cert state: removed - name: watson-ks description: Watson Knowledge Studio size: small # noobaa_account_secret: noobaa-admin # noobaa_cert_secret: noobaa-s3-serving-cert state: removed - name: watson-openscale description: Watson OpenScale size: small state: removed - name: watson-speech description: Watson Speech (STT and TTS) stt_size: xsmall tts_size: xsmall # noobaa_account_secret: noobaa-admin # noobaa_cert_secret: noobaa-s3-serving-cert state: removed # Please note that for watsonx.ai foundation models, you need to install the # Node Feature Discovery and NVIDIA GPU operators.
You can do so by setting the openshift.gpu.install property to True - name: watsonx_ai description: watsonx.ai state: removed models: - model_id: google-flan-t5-xxl state: removed - model_id: google-flan-ul2 state: removed - model_id: eleutherai-gpt-neox-20b state: removed - model_id: ibm-granite-13b-chat-v1 state: removed - model_id: ibm-granite-13b-instruct-v1 state: removed - model_id: meta-llama-llama-2-70b-chat state: removed - model_id: ibm-mpt-7b-instruct2 state: removed - model_id: bigscience-mt0-xxl state: removed - model_id: bigcode-starcoder state: removed - name: watsonx_data description: watsonx.data state: removed - name: wkc description: Watson Knowledge Catalog size: small state: removed installation_options: install_wkc_core_only: False enableKnowledgeGraph: False enableDataQuality: False enableFactSheet: False - name: wml description: Watson Machine Learning size: small state: installed - name: wml-accelerator description: Watson Machine Learning Accelerator replicas: 1 size: small state: removed - name: ws description: Watson Studio state: installed - name: ws-pipelines description: Watson Studio Pipelines state: removed - name: ws-runtimes description: Watson Studio Runtimes runtimes: - ibm-cpd-ws-runtime-py39 - ibm-cpd-ws-runtime-222-py - ibm-cpd-ws-runtime-py39gpu - ibm-cpd-ws-runtime-222-pygpu - ibm-cpd-ws-runtime-231-pygpu - ibm-cpd-ws-runtime-r36 - ibm-cpd-ws-runtime-222-r - ibm-cpd-ws-runtime-231-r state: removed Start the deployer \ud83d\udd17 Go to the OpenShift console Click the \"+\" sign at the top of the page Paste the following block (exactly) into the window apiVersion: v1 kind: Pod metadata: labels: app: cloud-pak-deployer-start generateName: cloud-pak-deployer-start- namespace: cloud-pak-deployer spec: containers: - name: cloud-pak-deployer image: quay.io/cloud-pak-deployer/cloud-pak-deployer:latest imagePullPolicy: Always terminationMessagePath: /dev/termination-log terminationMessagePolicy: File command: [\"/bin/sh\",\"-xc\"] 
args: - /cloud-pak-deployer/scripts/deployer/cpd-start-deployer.sh restartPolicy: Never securityContext: runAsUser: 0 serviceAccountName: cloud-pak-deployer-sa Follow the logs of the deployment \ud83d\udd17 Open the OpenShift console Go to Compute \u2192 Pods Select cloud-pak-deployer as the project at the top of the page Click the deployer pod Click logs Info When running the deployer installing Cloud Pak for Data, the first run will fail. This is because the deployer applies the node configuration to OpenShift, which will cause all nodes to restart one by one, including the node that runs the deployer. Because of the job setting, a new deployer pod will automatically start and resume from where it was stopped. Re-run deployer when failed or if you want to update the configuration \ud83d\udd17 If the deployer has failed or if you want to make changes to the configuration after a successful run, you can do the following: Open the OpenShift console Go to Workloads \u2192 Jobs Check the logs of the cloud-pak-deployer job If needed, make changes to the cloud-pak-deployer-config Config Map by going to Workloads \u2192 ConfigMaps Re-run the deployer","title":"Run deployer on OpenShift using Console"},{"location":"50-advanced/run-on-openshift/run-deployer-on-openshift-using-console/#running-deployer-on-openshift-using-console","text":"See the deployer in action deploying IBM watsonx.ai on an existing OpenShift cluster in this video: https://ibm.box.com/v/cpd-wxai-existing-ocp","title":"Running deployer on OpenShift using console"},{"location":"50-advanced/run-on-openshift/run-deployer-on-openshift-using-console/#log-in-to-the-openshift-cluster","text":"Log in as a cluster administrator to be able to run the deployer with the correct permissions.","title":"Log in to the OpenShift cluster"},{"location":"50-advanced/run-on-openshift/run-deployer-on-openshift-using-console/#prepare-the-deployer-project-and-the-storage","text":"Go to the OpenShift console Click the \"+\"
sign at the top of the page Paste the following block (exactly) into the window --- apiVersion: v1 kind: Namespace metadata: creationTimestamp: null name: cloud-pak-deployer --- apiVersion: v1 kind: ServiceAccount metadata: name: cloud-pak-deployer-sa namespace: cloud-pak-deployer --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: system:openshift:scc:privileged namespace: cloud-pak-deployer roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:openshift:scc:privileged subjects: - kind: ServiceAccount name: cloud-pak-deployer-sa namespace: cloud-pak-deployer --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: cloud-pak-deployer-cluster-admin roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cluster-admin subjects: - kind: ServiceAccount name: cloud-pak-deployer-sa namespace: cloud-pak-deployer","title":"Prepare the deployer project and the storage"},{"location":"50-advanced/run-on-openshift/run-deployer-on-openshift-using-console/#set-the-entitlement-key","text":"Update the secret below with your Cloud Pak entitlement key. Make sure the key is indented exactly as below. 
Go to the OpenShift console Click the \"+\" sign at the top of the page Paste the following block, adjusting where needed --- apiVersion: v1 kind: Secret metadata: name: cloud-pak-entitlement-key namespace: cloud-pak-deployer type: Opaque stringData: cp-entitlement-key: | YOUR_ENTITLEMENT_KEY","title":"Set the entitlement key"},{"location":"50-advanced/run-on-openshift/run-deployer-on-openshift-using-console/#configure-the-cloud-paks-and-service-to-be-deployed","text":"Update the configuration below to match what you want to deploy, do not change the indentation Go to the OpenShift console Click the \"+\" sign at the top of the page Paste the following block (exactly) into the window --- apiVersion: v1 kind: ConfigMap metadata: name: cloud-pak-deployer-config namespace: cloud-pak-deployer data: cpd-config.yaml: | global_config: environment_name: demo cloud_platform: existing-ocp confirm_destroy: False openshift: - name: cpd-demo ocp_version: \"4.12\" cluster_name: cpd-demo domain_name: example.com mcg: install: False storage_type: storage-class storage_class: managed-nfs-storage gpu: install: False openshift_storage: - storage_name: auto-storage storage_type: auto cp4d: - project: cpd openshift_cluster_name: cpd-demo cp4d_version: 4.8.1 sequential_install: False accept_licenses: True cartridges: - name: cp-foundation license_service: state: disabled threads_per_core: 2 - name: lite - name: scheduler state: removed - name: analyticsengine description: Analytics Engine Powered by Apache Spark size: small state: removed - name: bigsql description: Db2 Big SQL state: removed - name: ca description: Cognos Analytics size: small instances: - name: ca-instance metastore_ref: ca-metastore state: removed - name: dashboard description: Cognos Dashboards state: removed - name: datagate description: Db2 Data Gate state: removed - name: datastage-ent description: DataStage Enterprise state: removed - name: datastage-ent-plus description: DataStage Enterprise Plus state: removed
# instances: # - name: ds-instance # # Optional settings # description: \"datastage ds-instance\" # size: medium # storage_class: efs-nfs-client # storage_size_gb: 60 # # Custom Scale options # scale_px_runtime: # replicas: 2 # cpu_request: 500m # cpu_limit: 2 # memory_request: 2Gi # memory_limit: 4Gi # scale_px_compute: # replicas: 2 # cpu_request: 1 # cpu_limit: 3 # memory_request: 4Gi # memory_limit: 12Gi - name: db2 description: Db2 OLTP size: small instances: - name: ca-metastore metadata_size_gb: 20 data_size_gb: 20 backup_size_gb: 20 transactionlog_size_gb: 20 state: removed - name: db2wh description: Db2 Warehouse state: removed - name: dmc description: Db2 Data Management Console state: removed - name: dods description: Decision Optimization size: small state: removed - name: dp description: Data Privacy size: small state: removed - name: dpra description: Data Privacy Risk Assessment state: removed - name: dv description: Data Virtualization size: small instances: - name: data-virtualization state: removed # Please note that for EDB Postgres, a secret edb-postgres-license-key must be created in the vault # before deploying - name: edb_cp4d description: EDB Postgres state: removed instances: - name: instance1 version: \"15.4\" #type: Standard #members: 1 #size_gb: 50 #resource_request_cpu: 1 #resource_request_memory: 4Gi #resource_limit_cpu: 1 #resource_limit_memory: 4Gi - name: factsheet description: AI Factsheets size: small state: removed - name: hadoop description: Execution Engine for Apache Hadoop size: small state: removed - name: mantaflow description: MANTA Automated Lineage size: small state: removed - name: match360 description: IBM Match 360 size: small wkc_enabled: true state: removed - name: openpages description: OpenPages state: removed # For Planning Analytics, the case version is needed due to a defect in olm utils - name: planning-analytics description: Planning Analytics state: removed - name: replication description: Data Replication
license: IDRC size: small state: removed - name: rstudio description: RStudio Server with R 3.6 size: small state: removed - name: spss description: SPSS Modeler state: removed - name: voice-gateway description: Voice Gateway replicas: 1 state: removed - name: watson-assistant description: Watson Assistant size: small # noobaa_account_secret: noobaa-admin # noobaa_cert_secret: noobaa-s3-serving-cert state: removed - name: watson-discovery description: Watson Discovery # noobaa_account_secret: noobaa-admin # noobaa_cert_secret: noobaa-s3-serving-cert state: removed - name: watson-ks description: Watson Knowledge Studio size: small # noobaa_account_secret: noobaa-admin # noobaa_cert_secret: noobaa-s3-serving-cert state: removed - name: watson-openscale description: Watson OpenScale size: small state: removed - name: watson-speech description: Watson Speech (STT and TTS) stt_size: xsmall tts_size: xsmall # noobaa_account_secret: noobaa-admin # noobaa_cert_secret: noobaa-s3-serving-cert state: removed # Please note that for watsonx.ai foundation models, you need to install the # Node Feature Discovery and NVIDIA GPU operators.
You can do so by setting the openshift.gpu.install property to True - name: watsonx_ai description: watsonx.ai state: removed models: - model_id: google-flan-t5-xxl state: removed - model_id: google-flan-ul2 state: removed - model_id: eleutherai-gpt-neox-20b state: removed - model_id: ibm-granite-13b-chat-v1 state: removed - model_id: ibm-granite-13b-instruct-v1 state: removed - model_id: meta-llama-llama-2-70b-chat state: removed - model_id: ibm-mpt-7b-instruct2 state: removed - model_id: bigscience-mt0-xxl state: removed - model_id: bigcode-starcoder state: removed - name: watsonx_data description: watsonx.data state: removed - name: wkc description: Watson Knowledge Catalog size: small state: removed installation_options: install_wkc_core_only: False enableKnowledgeGraph: False enableDataQuality: False enableFactSheet: False - name: wml description: Watson Machine Learning size: small state: installed - name: wml-accelerator description: Watson Machine Learning Accelerator replicas: 1 size: small state: removed - name: ws description: Watson Studio state: installed - name: ws-pipelines description: Watson Studio Pipelines state: removed - name: ws-runtimes description: Watson Studio Runtimes runtimes: - ibm-cpd-ws-runtime-py39 - ibm-cpd-ws-runtime-222-py - ibm-cpd-ws-runtime-py39gpu - ibm-cpd-ws-runtime-222-pygpu - ibm-cpd-ws-runtime-231-pygpu - ibm-cpd-ws-runtime-r36 - ibm-cpd-ws-runtime-222-r - ibm-cpd-ws-runtime-231-r state: removed","title":"Configure the Cloud Paks and service to be deployed"},{"location":"50-advanced/run-on-openshift/run-deployer-on-openshift-using-console/#start-the-deployer","text":"Go to the OpenShift console Click the \"+\" sign at the top of the page Paste the following block (exactly) into the window apiVersion: v1 kind: Pod metadata: labels: app: cloud-pak-deployer-start generateName: cloud-pak-deployer-start- namespace: cloud-pak-deployer spec: containers: - name: cloud-pak-deployer image: 
quay.io/cloud-pak-deployer/cloud-pak-deployer:latest imagePullPolicy: Always terminationMessagePath: /dev/termination-log terminationMessagePolicy: File command: [\"/bin/sh\",\"-xc\"] args: - /cloud-pak-deployer/scripts/deployer/cpd-start-deployer.sh restartPolicy: Never securityContext: runAsUser: 0 serviceAccountName: cloud-pak-deployer-sa","title":"Start the deployer"},{"location":"50-advanced/run-on-openshift/run-deployer-on-openshift-using-console/#follow-the-logs-of-the-deployment","text":"Open the OpenShift console Go to Compute \u2192 Pods Select cloud-pak-deployer as the project at the top of the page Click the deployer pod Click logs Info When running the deployer installing Cloud Pak for Data, the first run will fail. This is because the deployer applies the node configuration to OpenShift, which will cause all nodes to restart one by one, including the node that runs the deployer. Because of the job setting, a new deployer pod will automatically start and resume from where it was stopped.","title":"Follow the logs of the deployment"},{"location":"50-advanced/run-on-openshift/run-deployer-on-openshift-using-console/#re-run-deployer-when-failed-or-if-you-want-to-update-the-configuration","text":"If the deployer has failed or if you want to make changes to the configuration after a successful run, you can do the following: Open the OpenShift console Go to Workloads \u2192 Jobs Check the logs of the cloud-pak-deployer job If needed, make changes to the cloud-pak-deployer-config Config Map by going to Workloads \u2192 ConfigMaps Re-run the deployer","title":"Re-run deployer when failed or if you want to update the configuration"},{"location":"50-advanced/run-on-openshift/run-deployer-wizard-on-openshift/","text":"Run deployer wizard on OpenShift \ud83d\udd17 Log in to the OpenShift cluster \ud83d\udd17 Log in as a cluster administrator to be able to run the deployer with the correct permissions.
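From a terminal, logging in as a cluster administrator typically looks like the sketch below. The API URL and token are placeholders for your own cluster; the real values can be copied from the OpenShift console ("Copy login command" under the user menu).

```shell
# Log in to the cluster as a cluster administrator; the API URL and token
# below are placeholders - replace them with your own values.
if command -v oc >/dev/null 2>&1; then
  oc login https://api.cluster.example.com:6443 --token=sha256~REPLACE_WITH_YOUR_TOKEN
  # Verify that the logged-in user can perform any action in any namespace
  oc auth can-i '*' '*' --all-namespaces
else
  echo "oc CLI not found - download it from the OpenShift console help menu"
fi
```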
Prepare the deployer project and the storage \ud83d\udd17 Go to the OpenShift console Click the \"+\" sign at the top of the page Paste the following block (exactly) into the window --- apiVersion: v1 kind: Namespace metadata: creationTimestamp: null name: cloud-pak-deployer --- apiVersion: v1 kind: ServiceAccount metadata: name: cloud-pak-deployer-sa namespace: cloud-pak-deployer --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: system:openshift:scc:privileged namespace: cloud-pak-deployer roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:openshift:scc:privileged subjects: - kind: ServiceAccount name: cloud-pak-deployer-sa namespace: cloud-pak-deployer --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: cloud-pak-deployer-cluster-admin roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cluster-admin subjects: - kind: ServiceAccount name: cloud-pak-deployer-sa namespace: cloud-pak-deployer --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: cloud-pak-deployer-config namespace: cloud-pak-deployer spec: accessModes: - ReadWriteMany resources: requests: storage: 1Gi --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: cloud-pak-deployer-status namespace: cloud-pak-deployer spec: accessModes: - ReadWriteMany resources: requests: storage: 10Gi Run the deployer wizard and expose route \ud83d\udd17 Go to the OpenShift console Click the \"+\" sign at the top of the page Paste the following block (exactly) into the window apiVersion: apps/v1 kind: Deployment metadata: name: cloud-pak-deployer-wizard namespace: cloud-pak-deployer spec: replicas: 1 selector: matchLabels: app: cloud-pak-deployer-wizard template: metadata: name: cloud-pak-deployer-wizard labels: app: cloud-pak-deployer-wizard spec: containers: - name: cloud-pak-deployer image: quay.io/cloud-pak-deployer/cloud-pak-deployer:latest imagePullPolicy: Always terminationMessagePath:
/dev/termination-log terminationMessagePolicy: File ports: - containerPort: 8080 protocol: TCP env: - name: CONFIG_DIR value: /Data/cpd-config - name: STATUS_DIR value: /Data/cpd-status - name: CPD_WIZARD_PAGE_TITLE value: \"Cloud Pak Deployer\" # - name: CPD_WIZARD_MODE # value: existing-ocp volumeMounts: - name: config-volume mountPath: /Data/cpd-config - name: status-volume mountPath: /Data/cpd-status command: [\"/bin/sh\",\"-xc\"] args: - mkdir -p /Data/cpd-config/config && /cloud-pak-deployer/cp-deploy.sh env wizard -v securityContext: runAsUser: 0 serviceAccountName: cloud-pak-deployer-sa volumes: - name: config-volume persistentVolumeClaim: claimName: cloud-pak-deployer-config - name: status-volume persistentVolumeClaim: claimName: cloud-pak-deployer-status --- apiVersion: v1 kind: Service metadata: name: cloud-pak-deployer-wizard-svc namespace: cloud-pak-deployer spec: selector: app: cloud-pak-deployer-wizard ports: - nodePort: 0 port: 8080 protocol: TCP --- apiVersion: route.openshift.io/v1 kind: Route metadata: name: cloud-pak-deployer-wizard spec: tls: termination: edge to: kind: Service name: cloud-pak-deployer-wizard-svc weight: null Open the wizard \ud83d\udd17 Now you can access the deployer wizard using the route created in the cloud-pak-deployer project. 
* Open the OpenShift console * Go to Networking \u2192 Routes * Click the Cloud Pak Deployer wizard route","title":"Run deployer wizard on OpenShift"},{"location":"50-advanced/run-on-openshift/run-deployer-wizard-on-openshift/#run-deployer-wizard-on-openshift","text":"","title":"Run deployer wizard on OpenShift"},{"location":"50-advanced/run-on-openshift/run-deployer-wizard-on-openshift/#log-in-to-the-openshift-cluster","text":"Log in as a cluster administrator to be able to run the deployer with the correct permissions.","title":"Log in to the OpenShift cluster"},{"location":"50-advanced/run-on-openshift/run-deployer-wizard-on-openshift/#prepare-the-deployer-project-and-the-storage","text":"Go to the OpenShift console Click the \"+\" sign at the top of the page Paste the following block (exactly) into the window --- apiVersion: v1 kind: Namespace metadata: creationTimestamp: null name: cloud-pak-deployer --- apiVersion: v1 kind: ServiceAccount metadata: name: cloud-pak-deployer-sa namespace: cloud-pak-deployer --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: system:openshift:scc:privileged namespace: cloud-pak-deployer roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:openshift:scc:privileged subjects: - kind: ServiceAccount name: cloud-pak-deployer-sa namespace: cloud-pak-deployer --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: cloud-pak-deployer-cluster-admin roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cluster-admin subjects: - kind: ServiceAccount name: cloud-pak-deployer-sa namespace: cloud-pak-deployer --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: cloud-pak-deployer-config namespace: cloud-pak-deployer spec: accessModes: - ReadWriteMany resources: requests: storage: 1Gi --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: cloud-pak-deployer-status namespace: cloud-pak-deployer spec: accessModes: - ReadWriteMany
resources: requests: storage: 10Gi","title":"Prepare the deployer project and the storage"},{"location":"50-advanced/run-on-openshift/run-deployer-wizard-on-openshift/#run-the-deployer-wizard-and-expose-route","text":"Go to the OpenShift console Click the \"+\" sign at the top of the page Paste the following block (exactly into the window) apiVersion: apps/v1 kind: Deployment metadata: name: cloud-pak-deployer-wizard namespace: cloud-pak-deployer spec: replicas: 1 selector: matchLabels: app: cloud-pak-deployer-wizard template: metadata: name: cloud-pak-deployer-wizard labels: app: cloud-pak-deployer-wizard spec: containers: - name: cloud-pak-deployer image: quay.io/cloud-pak-deployer/cloud-pak-deployer:latest imagePullPolicy: Always terminationMessagePath: /dev/termination-log terminationMessagePolicy: File ports: - containerPort: 8080 protocol: TCP env: - name: CONFIG_DIR value: /Data/cpd-config - name: STATUS_DIR value: /Data/cpd-status - name: CPD_WIZARD_PAGE_TITLE value: \"Cloud Pak Deployer\" # - name: CPD_WIZARD_MODE # value: existing-ocp volumeMounts: - name: config-volume mountPath: /Data/cpd-config - name: status-volume mountPath: /Data/cpd-status command: [\"/bin/sh\",\"-xc\"] args: - mkdir -p /Data/cpd-config/config && /cloud-pak-deployer/cp-deploy.sh env wizard -v securityContext: runAsUser: 0 serviceAccountName: cloud-pak-deployer-sa volumes: - name: config-volume persistentVolumeClaim: claimName: cloud-pak-deployer-config - name: status-volume persistentVolumeClaim: claimName: cloud-pak-deployer-status --- apiVersion: v1 kind: Service metadata: name: cloud-pak-deployer-wizard-svc namespace: cloud-pak-deployer spec: selector: app: cloud-pak-deployer-wizard ports: - nodePort: 0 port: 8080 protocol: TCP --- apiVersion: route.openshift.io/v1 kind: Route metadata: name: cloud-pak-deployer-wizard spec: tls: termination: edge to: kind: Service name: cloud-pak-deployer-wizard-svc weight: null","title":"Run the deployer wizard and expose 
route"},{"location":"50-advanced/run-on-openshift/run-deployer-wizard-on-openshift/#open-the-wizard","text":"Now you can access the deployer wizard using the route created in the cloud-pak-deployer project. * Open the OpenShift console * Go to Networking \u2192 Routes * Click the Cloud Pak Deployer wizard route","title":"Open the wizard"},{"location":"80-development/deployer-development-setup/","text":"Deployer Development Setup \ud83d\udd17 Setting up a virtual machine or server to develop the Cloud Pak Deployer code. Focuses on initial setup of a server to run the deployer container, setting up Visual Studio Code, issuing GPG keys and running the deployer in development mode. Set up a server for development \ud83d\udd17 We recommend using a Red Hat Linux server for development of the Cloud Pak Deployer, either a virtual server in the cloud or a virtual machine on your workstation. Ideally you run Visual Studio Code on your workstation and connect it to the remote Red Hat Linux server, updating the code and running it immediately from that server. Install required packages \ud83d\udd17 To allow for remote development, a number of packages need to be installed on the Linux server. Without these, VSCode will not work and the resulting error messages are difficult to debug. To install these packages, run the following as the root user: yum install -y git podman wget unzip tar gpg pinentry Additionally, you can also install EPEL and screen to make it easier to keep your session if it gets disconnected. yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm yum install -y screen Set up development user \ud83d\udd17 It is recommended to use a special development user (your user name) on the Linux server, rather than using root . Not only will this be more secure; it also prevents destructive mistakes. In the steps below, we create a user fk-dev and give it sudo permissions. 
useradd -G wheel fk-dev To give fk-dev permission to run commands as root , change the sudo settings. visudo Scroll down until you see the following line: # %wheel ALL=(ALL) NOPASSWD: ALL Change the line to look like this: %wheel ALL=(ALL) NOPASSWD: ALL Now, save the file by pressing Esc, followed by : and x . Configure password-less SSH for development user \ud83d\udd17 Especially when the virtual server runs in the cloud, users typically log on using their SSH key. This requires the public key of the workstation to be added to the development user's SSH configuration. Make sure you run the following commands as the development user (fk-dev): mkdir -p ~/.ssh chmod 700 ~/.ssh touch ~/.ssh/authorized_keys chmod 600 ~/.ssh/authorized_keys Then, add the public key of your workstation to the authorized_keys file. vi ~/.ssh/authorized_keys Press i to enter insert mode in vi . Then paste the public SSH key, for example: ssh-rsa AAAAB3NzaC1yc2EAAAADAXABAAABAQEGUeXJr0ZHy1SPGOntmr/7ixmK3KV8N3q/+0eSfKVTyGbhUO9lC1+oYcDvwMrizAXBJYWkIIwx4WgC77a78....fP3S5WYgqL fk-dev Finally, save the file by pressing Esc, followed by : and x . Configure Git for the development user \ud83d\udd17 Run the following commands as the development user (fk-dev): git config --global user.name \"Your full name\" git config --global user.email \"your_email_address\" git config --global credential.helper \"cache --timeout=86400\" Set up GPG for the development user \ud83d\udd17 We also want to ensure that commits are verified (trusted) by signing them with a GPG key. This requires setup on the development server and also on your Git account. First, set up a new GPG key: gpg --default-new-key-algo rsa4096 --gen-key You will be prompted to specify your user information: Real name: Enter your full name Email address: Your e-mail address that will be used to sign the commits Press o at the following prompt: Change (N)ame, (E)mail, or (O)kay/(Q)uit? Then, you will be prompted for a passphrase. 
You cannot use a passphrase for your GPG key if you want to use it for automatic signing of commits. Just press Enter multiple times until the GPG key has been generated. List the signatures of the known keys. You will use the signature to sign the commits and to retrieve the public key. gpg --list-signatures Output will look something like this: /home/fk-dev/.gnupg/pubring.kbx ----------------------------------- pub rsa4096 2022-10-30 [SC] [expires: 2024-10-29] BC83E8A97538EDD4E01DC05EA83C67A6D7F71756 uid [ultimate] FK Developer sig 3 A83C67A6D7F71756 2022-10-30 FK Developer You will use the signature to retrieve the public key: gpg --armor --export A83C67A6D7F71756 The public key will look something like the one below: -----BEGIN PGP PUBLIC KEY BLOCK----- mQINBGNeGNQBEAC/y2tovX5s0Z+onUpisnMMleG94nqOtajXG1N0UbHAUQyKfirt O8t91ek+e5PEsVkR/RLIM1M1YkiSV4irxW/uFPucXHZDVH8azfnJjf6j6cXWt/ra 1I2vGV3dIIQ6aJIBEEXC+u+N6rWpCOF5ERVrumGFlDhL/PY8Y9NM0cNQCbOcciTV 5a5DrqyHC3RD5Bcn5EA0/5ISTCGQyEbJe45G8L+a5yRchn4ACVEztR2B/O5iOZbM . . . 4ojOJPu0n5QLA5cI3RyZFw== =sx91 -----END PGP PUBLIC KEY BLOCK----- Now that you have the signature, you can configure Git to sign commits: git config --global user.signingkey A83C67A6D7F71756 Next, add your GPG key to your Git user. Go to IBM/cloud-pak-deployer.git Log in using your public GitHub user Click your avatar at the top right of the page Click Settings In the left menu, select SSH and GPG keys Click New GPG key Enter a meaningful title for your GPG key, for example: FK Development Server Paste the public GPG key Confirm by pressing the Add GPG key button Commits done on your development server will now be signed with your user name and e-mail address and will show as Verified when listing the commits. Clone the repository \ud83d\udd17 Clone the repository using a git command. The command below clones the main Cloud Pak Deployer repository. If you have forked the repository to develop features, you will have to use the URL of your own fork. 
git clone https://github.com/IBM/cloud-pak-deployer.git Connect VSCode to the development server \ud83d\udd17 Install the Remote - SSH extension in VSCode Click on the green icon in the lower left of VSCode Open SSH Config file, choose the one in your home directory Add the following lines: Host nickname_of_your_server HostName ip_address_of_your_server User fk-dev Once you have set up this server in the SSH config file, you can connect to it and start remote development. Open a folder and select the cloud-pak-deployer directory (this is the cloned repository) As the directory is a cloned Git repo, VSCode will automatically open the default branch From that point forward you can use VSCode as if you were working on your laptop, make changes and use a separate terminal to test your changes. Cloud Pak Deployer developer command line option \ud83d\udd17 The Cloud Pak Deployer runs as a container on the server. When you're in the process of developing new features, having to always rebuild the image is a bit of a pain, hence we've introduced a special command line parameter. ./cp-deploy.sh env apply .... --cpd-develop [--accept-all-licenses] When adding the --cpd-develop parameter to the command line, the current directory is mapped as a volume to the /cloud-pak-deployer directory within the container. This means that any changes you've made to the Ansible playbooks or other commands will take effect immediately. Warning Even though it is possible to run the deployer multiple times in parallel, for different environments, please be aware that this is NOT possible when you use the --cpd-develop parameter. If you run two deploy processes with this parameter, you will see errors with permissions. Cloud Pak Deployer developer container image tag \ud83d\udd17 When working on multiple changes concurrently, you may have to switch between branches or tags. 
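The --cpd-develop bind mount described above can be pictured with a plain container command. This is a hypothetical sketch that only builds the command as a string; the actual flags assembled by cp-deploy.sh may differ.

```shell
# Hypothetical sketch of the extra mount that --cpd-develop adds to the
# container invocation; the real flags used by cp-deploy.sh may differ.
DEVELOP_MOUNT="-v $(pwd):/cloud-pak-deployer:z"
RUN_CMD="podman run ${DEVELOP_MOUNT} localhost/cloud-pak-deployer:latest"
echo "${RUN_CMD}"
```

Because the working directory is mounted over /cloud-pak-deployer, edits on the host are visible inside the container without rebuilding the image.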
By default, the Cloud Pak Deployer image is built with the latest tag, but you can override this by setting the CPD_IMAGE_TAG environment variable in your session. export CPD_IMAGE_TAG=cp4d-460 ./cp-deploy.sh build When building the deployer, the image is now tagged: podman image ls REPOSITORY TAG IMAGE ID CREATED SIZE localhost/cloud-pak-deployer cp4d-460 8b08cb2f9a2e 8 minutes ago 1.92 GB When running the deployer with the same environment variable set, you will see an additional message in the output. ./cp-deploy.sh env apply Cloud Pak Deployer image tag cp4d-460 will be used. ... Cloud Pak Deployer podman or docker command \ud83d\udd17 By default, the cp-deploy.sh command detects if podman (preferred) or docker is found on the system. In case both are present, podman is used. You can override this behaviour by setting the CPD_CONTAINER_ENGINE environment variable. export CPD_CONTAINER_ENGINE=docker ./cp-deploy.sh build Container engine docker will be used.","title":"Deployer development setup"},{"location":"80-development/deployer-development-setup/#deployer-development-setup","text":"Setting up a virtual machine or server to develop the Cloud Pak Deployer code. Focuses on initial setup of a server to run the deployer container, setting up Visual Studio Code, issuing GPG keys and running the deployer in development mode.","title":"Deployer Development Setup"},{"location":"80-development/deployer-development-setup/#set-up-a-server-for-development","text":"We recommend using a Red Hat Linux server for development of the Cloud Pak Deployer, either a virtual server in the cloud or a virtual machine on your workstation. 
Ideally you run Visual Studio Code on your workstation and connect it to the remote Red Hat Linux server, updating the code and running it immediately from that server.","title":"Set up a server for development"},{"location":"80-development/deployer-development-setup/#install-required-packages","text":"To allow for remote development, a number of packages need to be installed on the Linux server. Without these, VSCode will not work and the resulting error messages are difficult to debug. To install these packages, run the following as the root user: yum install -y git podman wget unzip tar gpg pinentry Additionally, you can also install EPEL and screen to make it easier to keep your session if it gets disconnected. yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm yum install -y screen","title":"Install required packages"},{"location":"80-development/deployer-development-setup/#set-up-development-user","text":"It is recommended to use a special development user (your user name) on the Linux server, rather than using root . Not only will this be more secure; it also prevents destructive mistakes. In the steps below, we create a user fk-dev and give it sudo permissions. useradd -G wheel fk-dev To give fk-dev permission to run commands as root , change the sudo settings. visudo Scroll down until you see the following line: # %wheel ALL=(ALL) NOPASSWD: ALL Change the line to look like this: %wheel ALL=(ALL) NOPASSWD: ALL Now, save the file by pressing Esc, followed by : and x .","title":"Set up development user"},{"location":"80-development/deployer-development-setup/#configure-password-less-ssh-for-development-user","text":"Especially when the virtual server runs in the cloud, users typically log on using their SSH key. This requires the public key of the workstation to be added to the development user's SSH configuration. 
Make sure you run the following commands as the development user (fk-dev): mkdir -p ~/.ssh chmod 700 ~/.ssh touch ~/.ssh/authorized_keys chmod 600 ~/.ssh/authorized_keys Then, add the public key of your workstation to the authorized_keys file. vi ~/.ssh/authorized_keys Press i to enter insert mode in vi . Then paste the public SSH key, for example: ssh-rsa AAAAB3NzaC1yc2EAAAADAXABAAABAQEGUeXJr0ZHy1SPGOntmr/7ixmK3KV8N3q/+0eSfKVTyGbhUO9lC1+oYcDvwMrizAXBJYWkIIwx4WgC77a78....fP3S5WYgqL fk-dev Finally, save the file by pressing Esc, followed by : and x .","title":"Configure password-less SSH for development user"},{"location":"80-development/deployer-development-setup/#configure-git-for-the-development-user","text":"Run the following commands as the development user (fk-dev): git config --global user.name \"Your full name\" git config --global user.email \"your_email_address\" git config --global credential.helper \"cache --timeout=86400\"","title":"Configure Git for the development user"},{"location":"80-development/deployer-development-setup/#set-up-gpg-for-the-development-user","text":"We also want to ensure that commits are verified (trusted) by signing them with a GPG key. This requires setup on the development server and also on your Git account. First, set up a new GPG key: gpg --default-new-key-algo rsa4096 --gen-key You will be prompted to specify your user information: Real name: Enter your full name Email address: Your e-mail address that will be used to sign the commits Press o at the following prompt: Change (N)ame, (E)mail, or (O)kay/(Q)uit? Then, you will be prompted for a passphrase. You cannot use a passphrase for your GPG key if you want to use it for automatic signing of commits. Just press Enter multiple times until the GPG key has been generated. List the signatures of the known keys. You will use the signature to sign the commits and to retrieve the public key. 
gpg --list-signatures Output will look something like this: /home/fk-dev/.gnupg/pubring.kbx ----------------------------------- pub rsa4096 2022-10-30 [SC] [expires: 2024-10-29] BC83E8A97538EDD4E01DC05EA83C67A6D7F71756 uid [ultimate] FK Developer sig 3 A83C67A6D7F71756 2022-10-30 FK Developer You will use the signature to retrieve the public key: gpg --armor --export A83C67A6D7F71756 The public key will look something like the one below: -----BEGIN PGP PUBLIC KEY BLOCK----- mQINBGNeGNQBEAC/y2tovX5s0Z+onUpisnMMleG94nqOtajXG1N0UbHAUQyKfirt O8t91ek+e5PEsVkR/RLIM1M1YkiSV4irxW/uFPucXHZDVH8azfnJjf6j6cXWt/ra 1I2vGV3dIIQ6aJIBEEXC+u+N6rWpCOF5ERVrumGFlDhL/PY8Y9NM0cNQCbOcciTV 5a5DrqyHC3RD5Bcn5EA0/5ISTCGQyEbJe45G8L+a5yRchn4ACVEztR2B/O5iOZbM . . . 4ojOJPu0n5QLA5cI3RyZFw== =sx91 -----END PGP PUBLIC KEY BLOCK----- Now that you have the signature, you can configure Git to sign commits: git config --global user.signingkey A83C67A6D7F71756 Next, add your GPG key to your Git user. Go to IBM/cloud-pak-deployer.git Log in using your public GitHub user Click your avatar at the top right of the page Click Settings In the left menu, select SSH and GPG keys Click New GPG key Enter a meaningful title for your GPG key, for example: FK Development Server Paste the public GPG key Confirm by pressing the Add GPG key button Commits done on your development server will now be signed with your user name and e-mail address and will show as Verified when listing the commits.","title":"Set up GPG for the development user"},{"location":"80-development/deployer-development-setup/#clone-the-repository","text":"Clone the repository using a git command. The command below clones the main Cloud Pak Deployer repository. If you have forked the repository to develop features, you will have to use the URL of your own fork. 
git clone https://github.com/IBM/cloud-pak-deployer.git","title":"Clone the repository"},{"location":"80-development/deployer-development-setup/#connect-vscode-to-the-development-server","text":"Install the Remote - SSH extension in VSCode Click on the green icon in the lower left of VSCode Open SSH Config file, choose the one in your home directory Add the following lines: Host nickname_of_your_server HostName ip_address_of_your_server User fk-dev Once you have set up this server in the SSH config file, you can connect to it and start remote development. Open a folder and select the cloud-pak-deployer directory (this is the cloned repository) As the directory is a cloned Git repo, VSCode will automatically open the default branch From that point forward you can use VSCode as if you were working on your laptop, make changes and use a separate terminal to test your changes.","title":"Connect VSCode to the development server"},{"location":"80-development/deployer-development-setup/#cloud-pak-deployer-developer-command-line-option","text":"The Cloud Pak Deployer runs as a container on the server. When you're in the process of developing new features, having to always rebuild the image is a bit of a pain, hence we've introduced a special command line parameter. ./cp-deploy.sh env apply .... --cpd-develop [--accept-all-licenses] When adding the --cpd-develop parameter to the command line, the current directory is mapped as a volume to the /cloud-pak-deployer directory within the container. This means that any changes you've made to the Ansible playbooks or other commands will take effect immediately. Warning Even though it is possible to run the deployer multiple times in parallel, for different environments, please be aware that this is NOT possible when you use the --cpd-develop parameter. 
If you run two deploy processes with this parameter, you will see errors with permissions.","title":"Cloud Pak Deployer developer command line option"},{"location":"80-development/deployer-development-setup/#cloud-pak-deployer-developer-container-image-tag","text":"When working on multiple changes concurrently, you may have to switch between branches or tags. By default, the Cloud Pak Deployer image is built with the latest tag, but you can override this by setting the CPD_IMAGE_TAG environment variable in your session. export CPD_IMAGE_TAG=cp4d-460 ./cp-deploy.sh build When building the deployer, the image is now tagged: podman image ls REPOSITORY TAG IMAGE ID CREATED SIZE localhost/cloud-pak-deployer cp4d-460 8b08cb2f9a2e 8 minutes ago 1.92 GB When running the deployer with the same environment variable set, you will see an additional message in the output. ./cp-deploy.sh env apply Cloud Pak Deployer image tag cp4d-460 will be used. ...","title":"Cloud Pak Deployer developer container image tag"},{"location":"80-development/deployer-development-setup/#cloud-pak-deployer-podman-or-docker-command","text":"By default, the cp-deploy.sh command detects if podman (preferred) or docker is found on the system. In case both are present, podman is used. You can override this behaviour by setting the CPD_CONTAINER_ENGINE environment variable. export CPD_CONTAINER_ENGINE=docker ./cp-deploy.sh build Container engine docker will be used.","title":"Cloud Pak Deployer podman or docker command"},{"location":"80-development/doc-development-setup/","text":"Documentation Development setup \ud83d\udd17 Mkdocs themes encapsulate all of the configuration and implementation details of static documentation sites. This GitHub repository has been built with a dependency on the Mkdocs tool. This GitHub repository is connected to GitHub Actions; any commit to the main branch will cause a build of the GitHub pages to be triggered. 
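The podman-or-docker detection described earlier (explicit override first, then podman, then docker) can be sketched as a small shell function. The function name and messages below are illustrative, not the actual cp-deploy.sh code:

```shell
# Hedged sketch of how cp-deploy.sh may pick the container engine;
# the function name and checks are illustrative, not the real script.
detect_engine() {
  if [ -n "${CPD_CONTAINER_ENGINE:-}" ]; then
    echo "${CPD_CONTAINER_ENGINE}"            # explicit override wins
  elif command -v podman >/dev/null 2>&1; then
    echo podman                               # podman preferred when both exist
  elif command -v docker >/dev/null 2>&1; then
    echo docker
  else
    return 1                                  # no supported engine found
  fi
}

CPD_CONTAINER_ENGINE=docker                   # simulate the override
echo "Container engine $(detect_engine) will be used."
# prints: Container engine docker will be used.
```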
The preferred method of working while developing documentation is to use the tooling from a local system. Local tooling installation \ud83d\udd17 If you want to test the documentation pages you're developing, it is best to run Mkdocs in a container and map your local docs folder to a folder inside the container. This avoids having to install nvm and many modules on your workstation. Do the following: Make sure you have cloned this repository to your development server Start from the main directory of the cloud-pak-deployer repository cd docs ./dev-doc-build.sh This will build a Red Hat UBI image with all requirements pre-installed. It will take ~2-10 minutes to complete this step, depending on your network bandwidth. Running the documentation image \ud83d\udd17 ./dev-doc-run.sh This will start the container as a daemon and tail the logs. Once running, you will see the following message: ... INFO - Documentation built in 3.32 seconds INFO - [11:55:49] Watching paths for changes: 'src', 'mkdocs.yml' INFO - [11:55:49] Serving on http://0.0.0.0:8000/cloud-pak-deployer/... Starting the browser \ud83d\udd17 Now that the container has fully started, it automatically tracks all changes under the docs folder and updates the pages site automatically. You can view the site by opening a browser at URL: http://localhost:8000 Stopping the documentation container \ud83d\udd17 If you don't want to test your changes locally anymore, stop the container. podman kill cpd-doc Next time you want to test your changes, re-run ./dev-doc-run.sh , which will delete the container, delete the cache and rebuild the documentation. Removing the docker container and image \ud83d\udd17 If you want to remove everything from your development server, do the following: podman rm -f cpd-doc podman rmi -f cpd-doc:latest Note that after merging your updated documentation with the main branch, the pages site will be rendered by a GitHub action. 
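The documentation-container lifecycle above (run, stop, remove) can be collected into small helper functions. This sketch uses the cpd-doc container and image names from the text; the run flags (detach, port mapping) are assumptions, not the actual dev-doc-run.sh options:

```shell
# Helper functions wrapping the documentation-container lifecycle described
# above. The cpd-doc names come from the docs; the run flags are assumptions.
start_docs() { podman run -d --name cpd-doc -p 8000:8000 cpd-doc:latest; }
stop_docs()  { podman kill cpd-doc; }
clean_docs() { podman rm -f cpd-doc && podman rmi -f cpd-doc:latest; }
```

With these defined, `start_docs` brings the site up on http://localhost:8000 and `clean_docs` removes both the container and the image.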
Go to GitHub Actions if you want to monitor the build process.","title":"Deployer documentation development setup"},{"location":"80-development/doc-development-setup/#documentation-development-setup","text":"Mkdocs themes encapsulate all of the configuration and implementation details of static documentation sites. This GitHub repository has been built with a dependency on the Mkdocs tool. This GitHub repository is connected to GitHub Actions; any commit to the main branch will cause a build of the GitHub pages to be triggered. The preferred method of working while developing documentation is to use the tooling from a local system.","title":"Documentation Development setup"},{"location":"80-development/doc-development-setup/#local-tooling-installation","text":"If you want to test the documentation pages you're developing, it is best to run Mkdocs in a container and map your local docs folder to a folder inside the container. This avoids having to install nvm and many modules on your workstation. Do the following: Make sure you have cloned this repository to your development server Start from the main directory of the cloud-pak-deployer repository cd docs ./dev-doc-build.sh This will build a Red Hat UBI image with all requirements pre-installed. It will take ~2-10 minutes to complete this step, depending on your network bandwidth.","title":"Local tooling installation"},{"location":"80-development/doc-development-setup/#running-the-documentation-image","text":"./dev-doc-run.sh This will start the container as a daemon and tail the logs. Once running, you will see the following message: ... 
INFO - Documentation built in 3.32 seconds INFO - [11:55:49] Watching paths for changes: 'src', 'mkdocs.yml' INFO - [11:55:49] Serving on http://0.0.0.0:8000/cloud-pak-deployer/...","title":"Running the documentation image"},{"location":"80-development/doc-development-setup/#starting-the-browser","text":"Now that the container has fully started, it automatically tracks all changes under the docs folder and updates the pages site automatically. You can view the site by opening a browser at URL: http://localhost:8000","title":"Starting the browser"},{"location":"80-development/doc-development-setup/#stopping-the-documentation-container","text":"If you don't want to test your changes locally anymore, stop the container. podman kill cpd-doc Next time you want to test your changes, re-run ./dev-doc-run.sh , which will delete the container, delete the cache and rebuild the documentation.","title":"Stopping the documentation container"},{"location":"80-development/doc-development-setup/#removing-the-docker-container-and-image","text":"If you want to remove everything from your development server, do the following: podman rm -f cpd-doc podman rmi -f cpd-doc:latest Note that after merging your updated documentation with the main branch, the pages site will be rendered by a GitHub action. Go to GitHub Actions if you want to monitor the build process.","title":"Removing the docker container and image"},{"location":"80-development/doc-guidelines/","text":"Documentation guidelines \ud83d\udd17 This document contains a few formatting rules/requirements to maintain uniformity and structure across our documentation. Formatting \ud83d\udd17 Code block input \ud83d\udd17 Code block inputs should be created by surrounding the code text with three backticks (```). 
This can be done by putting the language after the opening backticks. For example, to create the following code block: { \"cloudName\": \"AzureCloud\", \"homeTenantId\": \"fcf67057-50c9-4ad4-98f3-ffca64add9e9\", \"id\": \"d604759d-4ce2-4dbc-b012-b9d7f1d0c185\", \"isDefault\": true, \"managedByTenants\": [], \"name\": \"Microsoft Azure Enterprise\", \"state\": \"Enabled\", \"tenantId\": \"fcf67057-50c9-4ad4-98f3-ffca64add9e9\", \"user\": { \"name\": \"example@example.com\", \"type\": \"user\" } } Your markdown input would look like: ```output { \"cloudName\": \"AzureCloud\", \"homeTenantId\": \"fcf67057-50c9-4ad4-98f3-ffca64add9e9\", \"id\": \"d604759d-4ce2-4dbc-b012-b9d7f1d0c185\", \"isDefault\": true, \"managedByTenants\": [], \"name\": \"Microsoft Azure Enterprise\", \"state\": \"Enabled\", \"tenantId\": \"fcf67057-50c9-4ad4-98f3-ffca64add9e9\", \"user\": { \"name\": \"example@example.com\", \"type\": \"user\" } } ``` Information block (inline notifications) \ud83d\udd17 If you want to highlight something to the reader, using an information or a warning block, use the following code: !!! warning Warning: please do not shut down the cluster at this stage. This will show up as: Warning Warning: please do not shut down the cluster at this stage. You can also use info and error .","title":"Deployer documentation guidelines"},{"location":"80-development/doc-guidelines/#documentation-guidelines","text":"This document contains a few formatting rules/requirements to maintain uniformity and structure across our documentation.","title":"Documentation guidelines"},{"location":"80-development/doc-guidelines/#formatting","text":"","title":"Formatting"},{"location":"80-development/doc-guidelines/#code-block-input","text":"Code block inputs should be created by surrounding the code text with three backticks (```). 
For example, to create the following code block: oc get nodes Your markdown input would look like: ``` oc get nodes ```","title":"Code block input"},{"location":"80-development/doc-guidelines/#code-block-output","text":"Code block outputs should specify the output language. This can be done by putting the language after the opening backticks. For example, to create the following code block: { \"cloudName\": \"AzureCloud\", \"homeTenantId\": \"fcf67057-50c9-4ad4-98f3-ffca64add9e9\", \"id\": \"d604759d-4ce2-4dbc-b012-b9d7f1d0c185\", \"isDefault\": true, \"managedByTenants\": [], \"name\": \"Microsoft Azure Enterprise\", \"state\": \"Enabled\", \"tenantId\": \"fcf67057-50c9-4ad4-98f3-ffca64add9e9\", \"user\": { \"name\": \"example@example.com\", \"type\": \"user\" } } Your markdown input would look like: ```output { \"cloudName\": \"AzureCloud\", \"homeTenantId\": \"fcf67057-50c9-4ad4-98f3-ffca64add9e9\", \"id\": \"d604759d-4ce2-4dbc-b012-b9d7f1d0c185\", \"isDefault\": true, \"managedByTenants\": [], \"name\": \"Microsoft Azure Enterprise\", \"state\": \"Enabled\", \"tenantId\": \"fcf67057-50c9-4ad4-98f3-ffca64add9e9\", \"user\": { \"name\": \"example@example.com\", \"type\": \"user\" } } ```","title":"Code block output"},{"location":"80-development/doc-guidelines/#information-block-inline-notifications","text":"If you want to highlight something to the reader, using an information or a warning block, use the following code: !!! warning Warning: please do not shut down the cluster at this stage. This will show up as: Warning Warning: please do not shut down the cluster at this stage. 
You can also use info and error .","title":"Information block (inline notifications)"}]} \ No newline at end of file diff --git a/sitemap.xml b/sitemap.xml new file mode 100644 index 000000000..894eb6667 --- /dev/null +++ b/sitemap.xml @@ -0,0 +1,298 @@ + + + + https://ibm.github.io/cloud-pak-deployer/ + 2024-03-20 + daily + + + https://ibm.github.io/cloud-pak-deployer/01-introduction/current-state/ + 2024-03-20 + daily + + + https://ibm.github.io/cloud-pak-deployer/05-install/install/ + 2024-03-20 + daily + + + https://ibm.github.io/cloud-pak-deployer/10-use-deployer/1-overview/overview/ + 2024-03-20 + daily + + + https://ibm.github.io/cloud-pak-deployer/10-use-deployer/3-run/aws-rosa/ + 2024-03-20 + daily + + + https://ibm.github.io/cloud-pak-deployer/10-use-deployer/3-run/aws-self-managed/ + 2024-03-20 + daily + + + https://ibm.github.io/cloud-pak-deployer/10-use-deployer/3-run/azure-aro/ + 2024-03-20 + daily + + + https://ibm.github.io/cloud-pak-deployer/10-use-deployer/3-run/azure-self-managed/ + 2024-03-20 + daily + + + https://ibm.github.io/cloud-pak-deployer/10-use-deployer/3-run/azure-service-principal/ + 2024-03-20 + daily + + + https://ibm.github.io/cloud-pak-deployer/10-use-deployer/3-run/existing-openshift/ + 2024-03-20 + daily + + + https://ibm.github.io/cloud-pak-deployer/10-use-deployer/3-run/ibm-cloud/ + 2024-03-20 + daily + + + https://ibm.github.io/cloud-pak-deployer/10-use-deployer/3-run/run/ + 2024-03-20 + daily + + + https://ibm.github.io/cloud-pak-deployer/10-use-deployer/3-run/vsphere/ + 2024-03-20 + daily + + + https://ibm.github.io/cloud-pak-deployer/10-use-deployer/5-post-run/post-run/ + 2024-03-20 + daily + + + https://ibm.github.io/cloud-pak-deployer/10-use-deployer/7-command/command/ + 2024-03-20 + daily + + + https://ibm.github.io/cloud-pak-deployer/10-use-deployer/9-destroy/destroy/ + 2024-03-20 + daily + + + https://ibm.github.io/cloud-pak-deployer/30-reference/timings/ + 2024-03-20 + daily + + + 
https://ibm.github.io/cloud-pak-deployer/30-reference/configuration/cloud-pak/ + 2024-03-20 + daily + + + https://ibm.github.io/cloud-pak-deployer/30-reference/configuration/cp4ba/ + 2024-03-20 + daily + + + https://ibm.github.io/cloud-pak-deployer/30-reference/configuration/cp4d-assets/ + 2024-03-20 + daily + + + https://ibm.github.io/cloud-pak-deployer/30-reference/configuration/cp4d-cartridges/ + 2024-03-20 + daily + + + https://ibm.github.io/cloud-pak-deployer/30-reference/configuration/cp4d-connections/ + 2024-03-20 + daily + + + https://ibm.github.io/cloud-pak-deployer/30-reference/configuration/cp4d-instances/ + 2024-03-20 + daily + + + https://ibm.github.io/cloud-pak-deployer/30-reference/configuration/cp4d-ldap/ + 2024-03-20 + daily + + + https://ibm.github.io/cloud-pak-deployer/30-reference/configuration/cp4d-saml/ + 2024-03-20 + daily + + + https://ibm.github.io/cloud-pak-deployer/30-reference/configuration/cpd-global-config/ + 2024-03-20 + daily + + + https://ibm.github.io/cloud-pak-deployer/30-reference/configuration/cpd-objects/ + 2024-03-20 + daily + + + https://ibm.github.io/cloud-pak-deployer/30-reference/configuration/dns/ + 2024-03-20 + daily + + + https://ibm.github.io/cloud-pak-deployer/30-reference/configuration/infrastructure/ + 2024-03-20 + daily + + + https://ibm.github.io/cloud-pak-deployer/30-reference/configuration/logging-auditing/ + 2024-03-20 + daily + + + https://ibm.github.io/cloud-pak-deployer/30-reference/configuration/monitoring/ + 2024-03-20 + daily + + + https://ibm.github.io/cloud-pak-deployer/30-reference/configuration/openshift/ + 2024-03-20 + daily + + + https://ibm.github.io/cloud-pak-deployer/30-reference/configuration/private-registry/ + 2024-03-20 + daily + + + https://ibm.github.io/cloud-pak-deployer/30-reference/configuration/topologies/ + 2024-03-20 + daily + + + https://ibm.github.io/cloud-pak-deployer/30-reference/configuration/vault/ + 2024-03-20 + daily + + + 
https://ibm.github.io/cloud-pak-deployer/30-reference/process/configure-cloud-pak/ + 2024-03-20 + daily + + + https://ibm.github.io/cloud-pak-deployer/30-reference/process/configure-infra/ + 2024-03-20 + daily + + + https://ibm.github.io/cloud-pak-deployer/30-reference/process/deploy-assets/ + 2024-03-20 + daily + + + https://ibm.github.io/cloud-pak-deployer/30-reference/process/install-cloud-pak/ + 2024-03-20 + daily + + + https://ibm.github.io/cloud-pak-deployer/30-reference/process/overview/ + 2024-03-20 + daily + + + https://ibm.github.io/cloud-pak-deployer/30-reference/process/prepare/ + 2024-03-20 + daily + + + https://ibm.github.io/cloud-pak-deployer/30-reference/process/provision-infra/ + 2024-03-20 + daily + + + https://ibm.github.io/cloud-pak-deployer/30-reference/process/smoke-tests/ + 2024-03-20 + daily + + + https://ibm.github.io/cloud-pak-deployer/30-reference/process/validate/ + 2024-03-20 + daily + + + https://ibm.github.io/cloud-pak-deployer/30-reference/process/cp4d-cartridges/cognos-authorization/ + 2024-03-20 + daily + + + https://ibm.github.io/cloud-pak-deployer/40-troubleshooting/cp4d-uninstall/ + 2024-03-20 + daily + + + https://ibm.github.io/cloud-pak-deployer/40-troubleshooting/ibm-cloud-access-nfs-server/ + 2024-03-20 + daily + + + https://ibm.github.io/cloud-pak-deployer/50-advanced/advanced-configuration/ + 2024-03-20 + daily + + + https://ibm.github.io/cloud-pak-deployer/50-advanced/alternative-repo-reg/ + 2024-03-20 + daily + + + https://ibm.github.io/cloud-pak-deployer/50-advanced/apply-node-settings-non-mco/ + 2024-03-20 + daily + + + https://ibm.github.io/cloud-pak-deployer/50-advanced/gitops/ + 2024-03-20 + daily + + + https://ibm.github.io/cloud-pak-deployer/50-advanced/locations-to-whitelist/ + 2024-03-20 + daily + + + https://ibm.github.io/cloud-pak-deployer/50-advanced/private-registry-and-air-gapped/ + 2024-03-20 + daily + + + 
https://ibm.github.io/cloud-pak-deployer/50-advanced/run-on-openshift/build-image-and-run-deployer-on-openshift/ + 2024-03-20 + daily + + + https://ibm.github.io/cloud-pak-deployer/50-advanced/run-on-openshift/run-deployer-on-openshift-using-console/ + 2024-03-20 + daily + + + https://ibm.github.io/cloud-pak-deployer/50-advanced/run-on-openshift/run-deployer-wizard-on-openshift/ + 2024-03-20 + daily + + + https://ibm.github.io/cloud-pak-deployer/80-development/deployer-development-setup/ + 2024-03-20 + daily + + + https://ibm.github.io/cloud-pak-deployer/80-development/doc-development-setup/ + 2024-03-20 + daily + + + https://ibm.github.io/cloud-pak-deployer/80-development/doc-guidelines/ + 2024-03-20 + daily + + \ No newline at end of file diff --git a/sitemap.xml.gz b/sitemap.xml.gz new file mode 100644 index 000000000..b32e63a37 Binary files /dev/null and b/sitemap.xml.gz differ