fix: Override the environment variable from the global & latest image… #89

Open · wants to merge 5 commits into base: main

Changes from all commits
11 changes: 11 additions & 0 deletions helmcharts/additional/Chart.yaml
@@ -0,0 +1,11 @@
apiVersion: v2
name: additional
description: A Helm chart for Kubernetes
type: application
version: 0.1.0
appVersion: "1.16.0"

dependencies:
  - name: velero
    version: 8.1.0
    condition: velero.enabled
21 changes: 21 additions & 0 deletions helmcharts/additional/charts/velero/.helmignore
@@ -0,0 +1,21 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj
19 changes: 19 additions & 0 deletions helmcharts/additional/charts/velero/Chart.yaml
@@ -0,0 +1,19 @@
apiVersion: v2
appVersion: 1.15.0
kubeVersion: ">=1.16.0-0"
description: A Helm chart for velero
name: velero
version: 8.1.0
home: https://github.com/vmware-tanzu/velero
icon: https://cdn-images-1.medium.com/max/1600/1*-9mb3AKnKdcL_QD3CMnthQ.png
sources:
  - https://github.com/vmware-tanzu/velero
maintainers:
  - name: jenting
    email: [email protected]
  - name: reasonerjt
    email: [email protected]
  - name: qiuming-best
    email: [email protected]
  - name: ywk253100
    email: [email protected]
10 changes: 10 additions & 0 deletions helmcharts/additional/charts/velero/OWNERS
@@ -0,0 +1,10 @@
approvers:
- jenting
- reasonerjt
- qiuming-best
- ywk253100
reviewers:
- jenting
- reasonerjt
- qiuming-best
- ywk253100
177 changes: 177 additions & 0 deletions helmcharts/additional/charts/velero/README.md
@@ -0,0 +1,177 @@
# Velero

Velero is an open source tool to safely back up and restore, perform disaster recovery, and migrate Kubernetes cluster resources and persistent volumes.

Velero has two main components: a CLI, and a server-side Kubernetes deployment.

## Installing the Velero CLI

See the different options for installing the [Velero CLI](https://velero.io/docs/v1.13/basic-install/#install-the-cli).

## Installing the Velero server

### Installation Requirements

Kubernetes v1.16+ is required, because this Helm chart uses the `apiextensions.k8s.io/v1` CustomResourceDefinition API, which was introduced in Kubernetes v1.16.
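
A quick way to confirm the cluster serves this API (a minimal check, assuming `kubectl` is pointed at the target cluster):

```bash
# List served API versions and look for apiextensions.k8s.io/v1
kubectl api-versions | grep '^apiextensions.k8s.io/v1$'
```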

### Velero version

This Helm chart installs Velero v1.15 (see the [v1.15 documentation](https://velero.io/docs/v1.15/)). See the [Upgrading Velero](#upgrading-velero) section for information on how to upgrade from other versions.

### Provider credentials

When installing using the Helm chart, the provider's credential information will need to be added to your values. The easiest way to do this is with the `--set-file` argument, available in Helm 2.10 and higher. See your cloud provider's documentation for the contents and creation of the `credentials-velero` file.
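
For illustration only, an AWS-style `credentials-velero` file (placeholder values; other providers use different formats, see their plugin documentation) can be created and then passed with `--set-file`:

```bash
# Hypothetical AWS-format credentials file; replace the placeholders with real credentials
cat > credentials-velero <<'EOF'
[default]
aws_access_key_id=<AWS_ACCESS_KEY_ID>
aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>
EOF
# Later passed to the chart via --set-file credentials.secretContents.cloud=./credentials-velero
```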

### Azure resources

When using the Azure plug-in, requests and limits must be set. See https://github.com/vmware-tanzu/velero/issues/3234 and https://github.com/vmware-tanzu/helm-charts/issues/469 for details.
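
The chart exposes a top-level `resources` block (see `ci/test-values.yaml` in this chart), so requests and limits can be supplied at install or upgrade time; the numbers below are illustrative only:

```bash
# Illustrative requests/limits for the Velero deployment; size these for your environment
helm upgrade <RELEASE NAME> vmware-tanzu/velero --namespace <YOUR NAMESPACE> --reuse-values \
  --set resources.requests.cpu=500m \
  --set resources.requests.memory=128Mi \
  --set resources.limits.cpu=1000m \
  --set resources.limits.memory=512Mi
```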

### Installing

The default configuration values for this chart are listed in values.yaml.

See Velero's full [official documentation](https://velero.io/docs/v1.13/basic-install/). More specifically, find your provider in the Velero list of [supported providers](https://velero.io/docs/v1.13/supported-providers/) for specific configuration information and examples.

#### Set up Helm

See the main [README.md](https://github.com/vmware-tanzu/helm-charts#kubernetes-helm-charts-for-vmware-tanzu).

#### Using Helm 3

##### Option 1) CLI commands

Note: You may add the flag `--set cleanUpCRDs=true` if you want to delete the Velero CRDs after deleting a release.
Cleaning up CRDs will also delete any CRD instances, such as BackupStorageLocations and VolumeSnapshotLocations, which would have to be reconfigured when reinstalling Velero. The backup data in object storage will not be deleted, even though the Backup instances in the cluster will be.
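
For example, opting in to CRD cleanup at install time could look like this (a sketch; compare the full install command below):

```bash
# CRDs (and their BackupStorageLocation/VolumeSnapshotLocation instances) will be removed with the release
helm install velero vmware-tanzu/velero --namespace <YOUR NAMESPACE> --create-namespace \
  --set cleanUpCRDs=true
```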

Specify the necessary values using the `--set key=value[,key=value]` argument to `helm install`. For example:

```bash
helm install velero vmware-tanzu/velero \
  --namespace <YOUR NAMESPACE> \
  --create-namespace \
  --set-file credentials.secretContents.cloud=<FULL PATH TO FILE> \
  --set configuration.backupStorageLocation[0].name=<BACKUP STORAGE LOCATION NAME> \
  --set configuration.backupStorageLocation[0].provider=<PROVIDER NAME> \
  --set configuration.backupStorageLocation[0].bucket=<BUCKET NAME> \
  --set configuration.backupStorageLocation[0].config.region=<REGION> \
  --set configuration.volumeSnapshotLocation[0].name=<VOLUME SNAPSHOT LOCATION NAME> \
  --set configuration.volumeSnapshotLocation[0].provider=<PROVIDER NAME> \
  --set configuration.volumeSnapshotLocation[0].config.region=<REGION> \
  --set initContainers[0].name=velero-plugin-for-<PROVIDER NAME> \
  --set initContainers[0].image=velero/velero-plugin-for-<PROVIDER NAME>:<PROVIDER PLUGIN TAG> \
  --set initContainers[0].volumeMounts[0].mountPath=/target \
  --set initContainers[0].volumeMounts[0].name=plugins
```

Users of zsh might need to put quotes around key/value pairs.
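
For example, zsh expands the `[0]` index as a glob pattern, so those flags would typically be quoted (only the first few flags shown):

```bash
# Quote any --set value containing square brackets when using zsh
helm install velero vmware-tanzu/velero \
  --namespace <YOUR NAMESPACE> \
  --create-namespace \
  --set 'configuration.backupStorageLocation[0].name=<BACKUP STORAGE LOCATION NAME>' \
  --set 'configuration.backupStorageLocation[0].provider=<PROVIDER NAME>'
```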

##### Option 2) YAML file

Add/update the necessary values by changing the values.yaml from this repository, then run:

```bash
helm install vmware-tanzu/velero --namespace <YOUR NAMESPACE> -f values.yaml --generate-name
```
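
As a sketch only (keys mirror the CLI example above; provider, bucket, region, and plugin tag are placeholders and assume the AWS plugin), a minimal `values.yaml` could be written and used like so:

```bash
# Write a minimal values.yaml; adjust for your provider before installing
cat > values.yaml <<'EOF'
configuration:
  backupStorageLocation:
    - name: default
      provider: aws
      bucket: <BUCKET NAME>
      config:
        region: <REGION>
  volumeSnapshotLocation:
    - name: default
      provider: aws
      config:
        region: <REGION>
initContainers:
  - name: velero-plugin-for-aws
    image: velero/velero-plugin-for-aws:<PROVIDER PLUGIN TAG>
    volumeMounts:
      - mountPath: /target
        name: plugins
EOF
# Credentials still need to be supplied, e.g. via --set-file credentials.secretContents.cloud=<FULL PATH TO FILE>
helm install vmware-tanzu/velero --namespace <YOUR NAMESPACE> -f values.yaml --generate-name
```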

##### Upgrade the configuration

If a value needs to be added or changed, you may do so with the `upgrade` command. An example:

```bash
helm upgrade <RELEASE NAME> vmware-tanzu/velero --namespace <YOUR NAMESPACE> --reuse-values --set configuration.backupStorageLocation[0].provider=<NEW PROVIDER>
```
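
It can be useful to inspect the release's current values before changing them; `helm get values` is standard Helm:

```bash
# Show the user-supplied values of the release
helm get values <RELEASE NAME> -n <YOUR NAMESPACE>
# Add --all to include computed chart defaults
helm get values <RELEASE NAME> -n <YOUR NAMESPACE> --all
```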

#### Using Helm 2

Helm v2 is no longer supported, as it was deprecated in November 2020.

##### Upgrade the configuration

If a value needs to be added or changed, you may do so with the `upgrade` command. An example:

```bash
helm upgrade <RELEASE NAME> vmware-tanzu/velero --reuse-values --set configuration.backupStorageLocation[0].provider=<NEW PROVIDER>
```

## Upgrading Chart

### Upgrading to 7.0.0

Delete the CSI plugin. The Velero CSI plugin was merged into the Velero repository in the v1.14 release and is now installed by default as an internal plugin, so the existing CSI plugin init container must be removed. Otherwise, the Velero server would fail to start because the same plugin is registered twice.
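
To check whether a CSI plugin init container is still present before upgrading (a hedged example; the deployment name `velero` is assumed from a default chart install):

```bash
# Print the init container names of the Velero deployment; drop any velero-plugin-for-csi entry from initContainers in your values
kubectl -n <YOUR NAMESPACE> get deployment velero \
  -o jsonpath='{.spec.template.spec.initContainers[*].name}'
```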

### Upgrading to 6.0.0

This version removes the `nodeAgent.privileged` field; use `nodeAgent.containerSecurityContext.privileged` instead.
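
For example, a privileged node agent would now be set like this (illustrative; mirrors the chart's `--set` usage above):

```bash
# Old (removed in chart 6.0.0): --set nodeAgent.privileged=true
helm upgrade <RELEASE NAME> vmware-tanzu/velero -n <YOUR NAMESPACE> --reuse-values \
  --set nodeAgent.containerSecurityContext.privileged=true
```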

## Upgrading Velero

### Upgrading to v1.15

The [instructions found here](https://velero.io/docs/v1.15/upgrade-to-1.15/) will assist you in upgrading from version v1.14.x to v1.15.

### Upgrading to v1.14

The [instructions found here](https://velero.io/docs/v1.14/upgrade-to-1.14/) will assist you in upgrading from version v1.13.x to v1.14.

### Upgrading to v1.13

The [instructions found here](https://velero.io/docs/v1.13/upgrade-to-1.13/) will assist you in upgrading from version v1.12.x to v1.13.

### Upgrading to v1.12

The [instructions found here](https://velero.io/docs/v1.12/upgrade-to-1.12/) will assist you in upgrading from version v1.11.x to v1.12.

### Upgrading to v1.11

The [instructions found here](https://velero.io/docs/v1.11/upgrade-to-1.11/) will assist you in upgrading from version v1.10.x to v1.11.

### Upgrading to v1.10

The [instructions found here](https://velero.io/docs/v1.10/upgrade-to-1.10/) will assist you in upgrading from version v1.9.x to v1.10.

### Upgrading to v1.9

The [instructions found here](https://velero.io/docs/v1.9/upgrade-to-1.9/) will assist you in upgrading from version v1.8.x to v1.9.

### Upgrading to v1.8

The [instructions found here](https://velero.io/docs/v1.8/upgrade-to-1.8/) will assist you in upgrading from version v1.7.x to v1.8.

### Upgrading to v1.7

The [instructions found here](https://velero.io/docs/v1.7/upgrade-to-1.7/) will assist you in upgrading from version v1.6.x to v1.7.

### Upgrading to v1.6

The [instructions found here](https://velero.io/docs/v1.6/upgrade-to-1.6/) will assist you in upgrading from version v1.5.x to v1.6.

### Upgrading to v1.5

The [instructions found here](https://velero.io/docs/v1.5/upgrade-to-1.5/) will assist you in upgrading from version v1.4.x to v1.5.

### Upgrading to v1.4

The [instructions found here](https://velero.io/docs/v1.4/upgrade-to-1.4/) will assist you in upgrading from version v1.3.x to v1.4.

### Upgrading to v1.3.1

The [instructions found here](https://velero.io/docs/v1.3.1/upgrade-to-1.3/) will assist you in upgrading from version v1.2.0 or v1.3.0 to v1.3.1.

### Upgrading to v1.2.0

The [instructions found here](https://velero.io/docs/v1.2.0/upgrade-to-1.2/) will assist you in upgrading from version v1.0.0 or v1.1.0 to v1.2.0.

### Upgrading to v1.1.0

The [instructions found here](https://velero.io/docs/v1.1.0/upgrade-to-1.1/) will assist you in upgrading from version v1.0.0 to v1.1.0.

## Uninstall Velero

Note: when you uninstall the Velero server, all backups remain untouched.

### Using Helm 3

```bash
helm uninstall <RELEASE NAME> -n <YOUR NAMESPACE>
```
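
If the release was installed without `cleanUpCRDs=true`, the Velero CRDs stay behind; a hedged cleanup sketch (the `component=velero` label follows upstream Velero's CRD labeling and is an assumption for chart-managed CRDs):

```bash
# Inspect the remaining Velero CRDs, then delete them if desired (this also deletes their instances)
kubectl get crds -l component=velero
kubectl delete crds -l component=velero
```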

### Note

Since Velero v1.10.0, both Restic and Kopia are supported for file-system level backup and restore. Configuration keys containing the keyword "Restic" are therefore no longer suitable, which means chart versions from 3.0.0 onwards are not backward compatible, and a validation of configuration field names has been added.
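
For illustration, current chart versions use the node-agent keys shown in this chart's `ci/test-values.yaml` (e.g. `deployNodeAgent`, `nodeAgent.*`); the exact mapping from the older Restic-era keys is an assumption to confirm against the chart's release notes:

```bash
# Node-agent style flags; older Restic-era key names may be rejected by the chart's field-name validation
helm upgrade <RELEASE NAME> vmware-tanzu/velero -n <YOUR NAMESPACE> --reuse-values \
  --set deployNodeAgent=true \
  --set nodeAgent.resources.requests.cpu=100m \
  --set nodeAgent.resources.requests.memory=128Mi
```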
119 changes: 119 additions & 0 deletions helmcharts/additional/charts/velero/ci/test-values.yaml
@@ -0,0 +1,119 @@
# Set provider name and backup storage location bucket name
configuration:
  backupStorageLocation:
    - name: default
      bucket: velero-backups
      default: true
      provider: aws
      credential:
        name: test-credential
        key: test-key
      config:
        region: us-east-1
        profile: us-east-1-profile
    - name: backups-secondary
      bucket: velero-backups
      provider: aws
      config:
        region: us-west-1
        profile: us-west-1-profile
  volumeSnapshotLocation:
    - name: ebs-us-east-1
      provider: aws
      config:
        region: us-east-1
    - name: portworx-cloud
      provider: portworx
      config:
        type: cloud

schedules:
  mybackup:
    labels:
      myenv: foo
    schedule: "0 0 * * *"
    template:
      ttl: "240h"
      includedNamespaces:
        - foo

# Set a service account so that the CRD clean up job has proper permissions to delete CRDs
serviceAccount:
  server:
    name: velero

# The Velero server
# Annotations to Velero deployment
annotations:
  annotation: velero
  foo: bar

# Labels to Velero deployment
labels:
  label: velero
  foo: bar

# Annotations to Velero deployment's template
podAnnotations:
  pod-annotation: velero
  foo: bar

# Labels to Velero deployment's template
podLabels:
  pod-label: velero
  foo: bar

# Resources to Velero deployment
resources:
  requests:
    cpu: 100m
    memory: 128Mi
  limits:
    cpu: 100m
    memory: 128Mi

# The node-agent daemonset
deployNodeAgent: true

nodeAgent:
  # Annotations to node-agent daemonset
  annotations:
    annotation: node-agent
    foo: bar
  # Labels to node-agent daemonset
  labels:
    label: node-agent
    foo: bar
  # Resources to node-agent daemonset
  resources:
    requests:
      cpu: 100m
      memory: 128Mi
    limits:
      cpu: 100m
      memory: 128Mi

# The kubectl upgrade/cleanup job
kubectl:
  # Annotations to kubectl job
  annotations:
    annotation: kubectl
    foo: bar
  # Labels to kubectl job
  labels:
    label: kubectl
    foo: bar
  # Resources to kubectl job
  resources:
    requests:
      cpu: 100m
      memory: 128Mi
    limits:
      cpu: 100m
      memory: 128Mi

# Whether or not to clean up CustomResourceDefinitions when deleting a release.
# Cleaning up CRDs will delete the BackupStorageLocation and VolumeSnapshotLocation instances, which would have to be reconfigured.
# Backup data in object storage will _not_ be deleted, however Backup instances in the Kubernetes API will.
# Always clean up CRDs in CI.
cleanUpCRDs: true