update en docs (#929)
* update en docs

* Update Install.en.md
Jeanine-tw authored Nov 6, 2023
1 parent e52ffb6 commit 6df4569
Showing 6 changed files with 142 additions and 178 deletions.
4 changes: 2 additions & 2 deletions docs/usage/ClusterDefaultEgressGateway.en.md
@@ -1,10 +1,10 @@
- # Cluster level Default EgressGateway
+ # Cluster Level Default EgressGateway

## Introduction

Setting a default EgressGateway for the entire cluster can simplify the process of using EgressPolicy under a namespace or using EgressClusterPolicy at the cluster level, as it eliminates the need to specify the EgressGateway name each time. Please note that only one default EgressGateway can be set for the cluster.

- ## Requirements
+ ## Prerequisites

- EgressGateway component is installed.

2 changes: 1 addition & 1 deletion docs/usage/ClusterDefaultEgressGateway.zh.md
@@ -10,7 +10,7 @@

## Steps

- 1. When creating an EgressGateway, specifying `spec.clusterDefault` as `true` makes it the cluster default EgressGateway. It is used automatically when an EgressClusterPolicy does not specify `spec.egressGatewayName`, or when an EgressPolicy does not specify `spec.egressGatewayName` and the tenant has not configured a default EgressGateway.
+ 1. When creating an EgressGateway, set `spec.clusterDefault` to `true` to designate it as the cluster's default EgressGateway. It is used automatically when an EgressClusterPolicy does not specify `spec.egressGatewayName`, or when an EgressPolicy does not specify `spec.egressGatewayName` and the tenant has not configured a default EgressGateway.

```yaml
apiVersion: egressgateway.spidernet.io/v1beta1
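# The remainder of this example is collapsed in the diff view. The lines below
# are a hedged reconstruction based on fields documented elsewhere on this page
# (spec.clusterDefault, spec.ippools, spec.nodeSelector), not the verbatim file:
kind: EgressGateway
metadata:
  name: default
spec:
  clusterDefault: true
  ippools:
    ipv4:
      - "10.6.1.60-10.6.1.66"
  nodeSelector:
    selector:
      matchLabels:
        egressgateway: "true"
```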
193 changes: 82 additions & 111 deletions docs/usage/Install.en.md
@@ -1,67 +1,64 @@
- # Installing EgressGateway on a Self-Managed Cluster
+ # Install EgressGateway on a Self-managed Cluster

## Introduction

- This guide will demonstrate the quick installation of EgressGateway on a self-managed cluster.
+ This page provides instructions for quickly installing EgressGateway on a self-managed Kubernetes cluster.

- ## Requirements
+ ## Prerequisites

- 1. You should already have a self-managed Kubernetes cluster with at least 2 nodes.
+ 1. A self-managed Kubernetes cluster with a minimum of 2 nodes.

- 2. The cluster should have helm tool installed and ready to use.
+ 2. Helm has been installed in your cluster.

- 3. Currently, EgressGateway supports the following CNI (Container Network Interface):
+ 3. EgressGateway currently supports the following CNI plugins:

* "Calico"
* "Calico"

- If your cluster is using [Calico](https://www.tigera.io/project-calico/) CNI, please execute the following command. This command ensures that the iptables rules of EgressGateway are not overridden by Calico rules; otherwise, EgressGateway will not function properly.
+ If your cluster is using [Calico](https://www.tigera.io/project-calico/) as the CNI plugin, run the following command to ensure that EgressGateway's iptables rules are not overridden by Calico rules. Failure to do so may cause EgressGateway to malfunction.

```shell
# set chainInsertMode
$ kubectl patch FelixConfiguration default --patch '{"spec": {"chainInsertMode": "Append"}}'

# check status
$ kubectl get FelixConfiguration default -o yaml
apiVersion: crd.projectcalico.org/v1
kind: FelixConfiguration
metadata:
generation: 2
name: default
resourceVersion: "873"
uid: 0548a2a5-f771-455b-86f7-27e07fb8223d
spec:
chainInsertMode: Append
......
```

- > Regarding `spec.chainInsertMode`, refer to [Calico docs](https://projectcalico.docs.tigera.io/reference/resources/felixconfig) for details
+ > For details about `spec.chainInsertMode`, see [Calico docs](https://projectcalico.docs.tigera.io/reference/resources/felixconfig).

* "Flannel"

- [Flannel](https://github.com/flannel-io/flannel) CNI does not require any configuration, so you can skip this step.
+ [Flannel](https://github.com/flannel-io/flannel) CNI does not require any configuration. You can skip this step.
* "Weave"

* "Weave"
[Weave](https://github.com/flannel-io/flannel) CNI does not require any configuration, so you can skip this step.

[Weave](https://github.com/flannel-io/flannel) CNI does not require any configuration. You can skip this step.
* "Spiderpool"

* "Spiderpool"
If your cluster is using [Spiderpool](https://github.com/spidernet-io/spiderpool) in conjunction with another CNI, follow these steps:

If your cluster is using [Spiderpool](https://github.com/spidernet-io/spiderpool) with another CNI, follow these steps:
Add the service addresses outside the cluster to the 'hijackCIDR' field in the 'default' object of spiderpool.spidercoordinators. This ensures that when Pods access these external services, the traffic is routed through the host where the Pod is located, allowing the EgressGateway rules to match.

Add the addresses of external services outside the cluster to the 'hijackCIDR' field in the 'default' object of
spiderpool.spidercoordinators. This ensures that when Pods access these external services, the traffic goes through
the host where the Pod is located and matches the EgressGateway rules.
- ```
- # For running Pods, you need to restart them for these routing rules to take effect within the Pods.
- kubectl patch spidercoordinators default --type='merge' -p '{"spec": {"hijackCIDR": ["1.1.1.1/32", "2.2.2.2/32"]}}'
- ```

+ ```shell
+ # "1.1.1.1/32", "2.2.2.2/32" are the addresses of external services. For already running Pods, you need to restart them for these routing rules to take effect within the Pods.
+ kubectl patch spidercoordinators default --type='merge' -p '{"spec": {"hijackCIDR": ["1.1.1.1/32", "2.2.2.2/32"]}}'
+ ```
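To confirm the patch took effect, you can read the object back. A hedged check, reusing the `spidercoordinators` resource from the patch command above:

```shell
# Print the configured hijack CIDRs; the external service addresses should appear here
kubectl get spidercoordinators default -o jsonpath='{.spec.hijackCIDR}'
```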

## Install EgressGateway

- ### Add EgressGateway Repository
+ ### Add EgressGateway Repo

```shell
helm repo add egressgateway https://spidernet-io.github.io/egressgateway/
helm repo update
```

@@ -70,27 +67,24 @@

### Install EgressGateway

- 1. You can use the following command to quickly install EgressGateway:
+ 1. Quickly install EgressGateway through the following command:

```shell
helm install egressgateway egressgateway/egressgateway \
-n kube-system \
--set feature.tunnelIpv4Subnet="192.200.0.1/16" \
--wait --debug
```

- In the installation command, please note the following:
+ In the installation command, please consider the following points:

- * In the command, you need to provide an IPv4 and IPv6 subnet for the EgressGateway tunnel nodes. Make sure this subnet does not conflict with other addresses in the cluster.
- * You can customize the network interface used by the EgressGateway tunnel by using the option `--set feature.tunnelDetectMethod="interface=eth0"`. Otherwise, the default route interface is used.
- * If you want to enable IPv6, use the option `--set feature.enableIPv6=true` and set `feature.tunnelIpv6Subnet`.
- * EgressGateway Controller supports high availability. You can set `--set controller.replicas=2` to have two replicas.
- * To enable the return routing rules on the gateway nodes, use `--set feature.enableGatewayReplyRoute=true`. This option must be enabled if you want to use Spiderpool with underlay CNI.
+ * Make sure to provide the IPv4 and IPv6 subnets for the EgressGateway tunnel nodes in the installation command. These subnets should not conflict with other addresses within the cluster.
+ * You can customize the network interface used for EgressGateway tunnels by using the `--set feature.tunnelDetectMethod="interface=eth0"` option. By default, it uses the network interface associated with the default route.
+ * If you want to enable IPv6 support, set the `--set feature.enableIPv6=true` option and also `feature.tunnelIpv6Subnet` (a combined example follows this list).
+ * The EgressGateway Controller supports high availability and can be configured using `--set controller.replicas=2`.
+ * To enable return routing rules on the gateway nodes, use `--set feature.enableGatewayReplyRoute=true`. This option is required when using Spiderpool to work with underlay CNI.
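As a combined illustration of these flags, the following sketch enables dual-stack tunnels and a two-replica controller; the tunnel subnets are placeholder values to adapt to your environment:

```shell
# Hedged example: subnet values are placeholders, not required values
helm install egressgateway egressgateway/egressgateway \
  -n kube-system \
  --set feature.tunnelIpv4Subnet="192.200.0.1/16" \
  --set feature.enableIPv6=true \
  --set feature.tunnelIpv6Subnet="fd01::21/112" \
  --set controller.replicas=2 \
  --wait --debug
```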

- 2. Confirm that all EgressGateway Pods are running properly.
+ 2. Verify that all EgressGateway Pods are running properly.

```shell
$ kubectl get pod -n kube-system | grep egressgateway
@@ -100,13 +94,11 @@
egressgateway-controller-5754f6658-7pn4z 1/1 Running 0 9h
```

3. Any feature configurations can be achieved by adjusting the Helm values of the EgressGateway application.

- ## Creat an EgressGateway Instance
+ ## Create EgressGateway Instances

- 1. EgressGateway defines a set of nodes as an exit gateway for the cluster. The egress traffic from within the cluster will be forwarded through this set of nodes. Therefore, we need to define a set of EgressGateway instances in advance. Here is an example:
+ 1. EgressGateway defines a group of nodes as the cluster's egress gateway, responsible for forwarding egress traffic out of the cluster. To define an EgressGateway group, run the following command:
```shell
cat <<EOF | kubectl apply -f -
@@ -125,23 +117,19 @@
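# The manifest body is collapsed in the diff view. The lines below are a hedged
# sketch assuming only fields described in the notes that follow (ippools.ipv4,
# nodeSelector) and the node label applied in step 2:
apiVersion: egressgateway.spidernet.io/v1beta1
kind: EgressGateway
metadata:
  name: default
spec:
  ippools:
    ipv4:
      - "10.6.1.60-10.6.1.66"
  nodeSelector:
    selector:
      matchLabels:
        egressgateway: "true"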
EOF
```
- In the creation command:
+ Descriptions:

- * In the YAML example above, `spec.ippools.ipv4` defines a set of exit IP addresses for egress traffic. You need to adjust it according to the specific environment.
- * The CIDR of `spec.ippools.ipv4` should be the same as the subnet of the egress interface on the gateway nodes (usually the default route interface). Otherwise, it may result in inaccessible egress traffic.
- * Use `spec.nodeSelector` of EgressGateway to select a set of nodes as the exit gateways. It supports selecting multiple nodes for high availability.
+ * In the provided YAML example, adjust `spec.ippools.ipv4` to define egress exit IP addresses based on your specific environment.
+ * Ensure that the CIDR of `spec.ippools.ipv4` matches the subnet of the egress interface on the gateway nodes (usually the interface associated with the default route). Mismatched subnets can cause connectivity issues for egress traffic.
+ * Use `spec.nodeSelector` in the EgressGateway to select a group of nodes as the egress gateway. You can select multiple nodes to achieve high availability.
- 2. Label the exit gateway nodes. You can label multiple nodes. For production environments, it is recommended to use 2 nodes. For POC environments, 1 node is sufficient.
+ 2. Label the egress gateway nodes. For production environments, it is recommended to label at least 2 nodes; for POC environments, 1 node is sufficient.
```shell
kubectl label node $NodeName egressgateway="true"
```
- 3. Check the status as follows:
+ 3. Check the status:
```shell
$ kubectl get EgressGateway default -o yaml
@@ -168,26 +156,23 @@
status: Ready
```
- In the above output:
+ Descriptions:

- * The `status.nodeList` field has identified the nodes that match `spec.nodeSelector` and shows the status of the corresponding EgressTunnel objects.
- * The `spec.ippools.ipv4DefaultEIP` field randomly selects an IP address from `spec.ippools.ipv4` as the default VIP for this group of EgressGateways. This default VIP is used when creating EgressPolicy objects for applications. If no VIP address is specified, the default VIP will be assigned.
+ * The `status.nodeList` field indicates the nodes that match the `spec.nodeSelector`, along with the status of their corresponding EgressTunnel objects.
+ * The `spec.ippools.ipv4DefaultEIP` field randomly selects one IP address from `spec.ippools.ipv4` as the default VIP for this group of EgressGateways. This default VIP is used when creating EgressPolicy objects for applications that do not specify a VIP address.
- ## Creat Applications and Egress Policies
+ ## Create Applications and Egress Policies
- 1. Create an application that will be used to test accessing external resources from within a Pod, and label it.
+ 1. Create an application that will be used to test Pod access to external resources and apply labels to it.
```shell
kubectl create deployment visitor --image nginx
```
- 2. Create an EgressPolicy CR object for the application. An EgressPolicy instance is used to define which Pods' egress traffic needs to be forwarded through the EgressGateway nodes, along with other configuration details. You can create an example like the following. (Note: The EgressPolicy object is tenant-level, so it must be created under the selected application's namespace.)
+ 2. Create an EgressPolicy CR object for your application. An EgressPolicy instance defines which Pods' egress traffic should be forwarded through EgressGateway nodes, along with other configuration details. You can create an example as follows. When a matching Pod accesses any address outside the cluster (excluding Node IPs, the CNI Pod CIDR, and ClusterIPs), the traffic is forwarded through the EgressGateway nodes. Note that EgressPolicy objects are tenant-level, so they must be created in the namespace of the selected application.

```shell
cat <<EOF | kubectl apply -f -
@@ -204,21 +189,14 @@
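# The manifest body is collapsed in the diff view. The lines below are a hedged
# sketch assuming only fields described in the notes that follow
# (egressGatewayName, appliedTo.podSelector) and the `visitor` Deployment
# created in step 1; the policy name is illustrative:
apiVersion: egressgateway.spidernet.io/v1beta1
kind: EgressPolicy
metadata:
  name: visitor-policy
  namespace: default
spec:
  egressGatewayName: default
  appliedTo:
    podSelector:
      matchLabels:
        app: visitor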
EOF
```
- In the above creation command:
+ Descriptions:

- * `spec.egressGatewayName` specifies which group of EgressGateways to use.
- * `spec.appliedTo.podSelector` specifies which Pods this policy will apply to within the cluster.
+ * `spec.egressGatewayName` specifies the name of the EgressGateway group to use.
+ * `spec.appliedTo.podSelector` determines which Pods within the cluster this policy should apply to.
* There are two options for the source IP address of egress traffic in the cluster:
- * You can use the IP address of the gateway nodes. This option is suitable for public cloud and traditional network environments. The drawback is that the outgoing source IP may change if the gateway nodes fail. Set `spec.egressIP.useNodeIP=true` to enable this option.
+ * You can use the IP address of the gateway nodes. This is suitable for public clouds and traditional networks but has the downside of potential IP changes if a gateway node fails. You can enable this by setting `spec.egressIP.useNodeIP=true`.
- * You can use a dedicated VIP. Since EgressGateway works based on ARP, it is suitable for traditional network environments but not for public cloud environments. The advantage is that the outgoing source IP remains permanent and fixed. If no setting is specified in the EgressPolicy, it will default to using the default VIP of the egressGatewayName. Alternatively, you can manually specify `spec.egressIP.ipv4`, but its IP value must comply with the IP pool defined in EgressGateway.
+ * You can use a dedicated VIP. EgressGateway uses ARP principles for VIP implementation, making it suitable for traditional networks rather than public clouds. The advantage is that the egress source IP remains fixed. If no settings are specified in the EgressPolicy, the default VIP of the egressGatewayName will be used, or you can manually specify `spec.egressIP.ipv4`, which must match the IP pool configured in the EgressGateway. (A snippet showing both options follows this list.)
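To make the two options concrete, here is a hedged `spec.egressIP` fragment using only the fields named above (values are illustrative):

```yaml
# Option 1: use the gateway node's IP as the egress source IP
egressIP:
  useNodeIP: true
# Option 2: pin a dedicated VIP drawn from the EgressGateway's IP pool instead
# egressIP:
#   ipv4: "10.6.1.60"
```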
3. Check the status of the EgressPolicy:
```shell
$ kubectl get EgressPolicy -A
@@ -245,18 +223,14 @@
node: egressgateway-worker2
```
- In the above output:
+ Descriptions:

- * `status.eip` shows the outbound IP address used by the group of applications when exiting the cluster.
- * `status.node` shows which EgressGateway node is currently responsible for forwarding the egress traffic in real-time. Note: EgressGateway nodes support high availability. When multiple EgressGateway nodes exist, all EgressPolicies are evenly distributed among different EgressGateway nodes.
+ * `status.eip` displays the egress IP address used by the group of applications.
+ * `status.node` shows which EgressGateway node is responsible for real-time egress traffic forwarding. EgressGateway nodes support high availability. When multiple EgressGateway nodes exist, all EgressPolicy instances will be evenly distributed among them.
4. Check the status of EgressEndpointSlices.
- Each EgressPolicy object has a corresponding EgressEndpointSlices object, which stores the collection of Pod IP addresses selected by the EgressPolicy. If an application cannot access external resources, you can check if the IP addresses in this object are functioning properly.
+ Each EgressPolicy object has a corresponding EgressEndpointSlices that stores the IP collection of Pods selected by the EgressPolicy. If your application is unable to access external resources, you can check if the IP addresses in this object are correct.
```shell
$ kubectl get egressendpointslices -A
@@ -277,19 +251,16 @@
namespace: default
```
- ## Test
+ ## Test Results

- 1. Deploy the nettools application outside the cluster to simulate an external service. The nettools application will return the requester's source IP address in the HTTP response.
+ 1. Deploy the nettools application outside the cluster to simulate an external service. nettools will return the requester's source IP address in the HTTP response.
```shell
docker run -d --net=host ghcr.io/spidernet-io/egressgateway-nettools:latest /usr/bin/nettools-server -protocol web -webPort 8080
```
- 2. Validate the effect of egress traffic from within the visitor Pod in the cluster. We can see that when the visitor accesses the external service, the source IP returned by nettools matches the effect of the EgressPolicy's `.status.eip`.
+ 2. Verify the effect of egress traffic in the visitor Pod within the cluster. You should observe that when the visitor accesses the external service, nettools returns a source IP matching the EgressPolicy `.status.eip`.
```shell
$ kubectl get pod
NAME READY STATUS RESTARTS AGE
@@ -298,4 +269,4 @@
$ kubectl exec -it visitor-6764bb48cc-29vq9 bash
$ curl 10.6.1.92:8080
Remote IP: 10.6.1.60
```

1 comment on commit 6df4569

@weizhoublue (Collaborator)